Towards a Visual Quality Metric for Digital Video
NASA Technical Reports Server (NTRS)
Watson, Andrew B.
1998-01-01
The advent of widespread distribution of digital video creates a need for automated methods for evaluating visual quality of digital video. This is particularly so since most digital video is compressed using lossy methods, which involve the controlled introduction of potentially visible artifacts. Compounding the problem is the bursty nature of digital video, which requires adaptive bit allocation based on visual quality metrics. In previous work, we have developed visual quality metrics for evaluating, controlling, and optimizing the quality of compressed still images. These metrics incorporate simplified models of human visual sensitivity to spatial and chromatic visual signals. The challenge of video quality metrics is to extend these simplified models to temporal signals as well. In this presentation I will discuss a number of the issues that must be resolved in the design of effective video quality metrics. Among these are spatial, temporal, and chromatic sensitivity and their interactions, visual masking, and implementation complexity. I will also touch on the question of how to evaluate the performance of these metrics.
Automated Assessment of Visual Quality of Digital Video
NASA Technical Reports Server (NTRS)
Watson, Andrew B.; Ellis, Stephen R. (Technical Monitor)
1997-01-01
The advent of widespread distribution of digital video creates a need for automated methods for evaluating visual quality of digital video. This is particularly so since most digital video is compressed using lossy methods, which involve the controlled introduction of potentially visible artifacts. Compounding the problem is the bursty nature of digital video, which requires adaptive bit allocation based on visual quality metrics. In previous work, we have developed visual quality metrics for evaluating, controlling, and optimizing the quality of compressed still images[1-4]. These metrics incorporate simplified models of human visual sensitivity to spatial and chromatic visual signals. The challenge of video quality metrics is to extend these simplified models to temporal signals as well. In this presentation I will discuss a number of the issues that must be resolved in the design of effective video quality metrics. Among these are spatial, temporal, and chromatic sensitivity and their interactions, visual masking, and implementation complexity. I will also touch on the question of how to evaluate the performance of these metrics.
Toward a perceptual video-quality metric
NASA Astrophysics Data System (ADS)
Watson, Andrew B.
1998-07-01
The advent of widespread distribution of digital video creates a need for automated methods for evaluating the visual quality of digital video. This is particularly so since most digital video is compressed using lossy methods, which involve the controlled introduction of potentially visible artifacts. Compounding the problem is the bursty nature of digital video, which requires adaptive bit allocation based on visual quality metrics, and the economic need to reduce bit-rate to the lowest level that yields acceptable quality. In previous work, we have developed visual quality metrics for evaluating, controlling, and optimizing the quality of compressed still images. These metrics incorporate simplified models of human visual sensitivity to spatial and chromatic visual signals. Here I describe a new video quality metric that is an extension of these still image metrics into the time domain. Like the still image metrics, it is based on the Discrete Cosine Transform. An effort has been made to minimize the amount of memory and computation required by the metric, in order that it might be applied in the widest range of applications. To calibrate the basic sensitivity of this metric to spatial and temporal signals we have made measurements of visual thresholds for temporally varying samples of DCT quantization noise.
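As a rough illustration of the kind of computation such a DCT-domain metric involves, the sketch below pools coefficient differences between a reference frame and a compressed frame block by block. It is a minimal sketch only: the 8x8 blocking, the uniform default weights (standing in for calibrated spatial/temporal sensitivities), and the Minkowski pooling exponent are assumptions for illustration, not the parameters of the metric described above.

```python
import numpy as np
from scipy.fftpack import dct

def block_dct2(block):
    """Orthonormal 2-D type-II DCT of an 8x8 block."""
    return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def dct_domain_score(ref, test, weights=None, beta=4.0):
    """Pool DCT-coefficient differences between two grayscale frames.

    ref, test : float arrays of shape (H, W), with H and W multiples of 8.
    weights   : optional (8, 8) per-frequency sensitivity table (a stand-in
                for a contrast-sensitivity calibration; uniform if omitted).
    beta      : Minkowski pooling exponent (assumed value).
    """
    if weights is None:
        weights = np.ones((8, 8))
    h, w = ref.shape
    errors = []
    for y in range(0, h, 8):
        for x in range(0, w, 8):
            diff = block_dct2(ref[y:y+8, x:x+8]) - block_dct2(test[y:y+8, x:x+8])
            errors.append(np.abs(weights * diff).ravel())
    pooled = np.concatenate(errors)
    return float(np.mean(pooled ** beta) ** (1.0 / beta))
```

Applying the same pooling across frames, with weights that also depend on temporal frequency, is the step the threshold measurements mentioned above would calibrate.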
Quality metrics in high-dimensional data visualization: an overview and systematization.
Bertini, Enrico; Tatu, Andrada; Keim, Daniel
2011-12-01
In this paper, we present a systematization of techniques that use quality metrics to help in the visual exploration of meaningful patterns in high-dimensional data. In a number of recent papers, different quality metrics are proposed to automate the demanding search through large spaces of alternative visualizations (e.g., alternative projections or ordering), allowing the user to concentrate on the most promising visualizations suggested by the quality metrics. Over the last decade, this approach has witnessed a remarkable development but few reflections exist on how these methods are related to each other and how the approach can be developed further. For this purpose, we provide an overview of approaches that use quality metrics in high-dimensional data visualization and propose a systematization based on a thorough literature review. We carefully analyze the papers and derive a set of factors for discriminating the quality metrics, visualization techniques, and the process itself. The process is described through a reworked version of the well-known information visualization pipeline. We demonstrate the usefulness of our model by applying it to several existing approaches that use quality metrics, and we provide reflections on implications of our model for future research. © 2010 IEEE
A no-reference image and video visual quality metric based on machine learning
NASA Astrophysics Data System (ADS)
Frantc, Vladimir; Voronin, Viacheslav; Semenishchev, Evgenii; Minkin, Maxim; Delov, Aliy
2018-04-01
The paper presents a novel visual quality metric for lossy compressed video quality assessment. A high degree of correlation with subjective quality estimates is achieved by using a convolutional neural network trained on a large number of video sequence/subjective quality score pairs. We demonstrate how our predicted no-reference quality metric correlates with qualitative opinion in a human observer study. Results are shown on the EVVQ dataset with comparison to existing approaches.
Degraded visual environment image/video quality metrics
NASA Astrophysics Data System (ADS)
Baumgartner, Dustin D.; Brown, Jeremy B.; Jacobs, Eddie L.; Schachter, Bruce J.
2014-06-01
A number of image quality metrics (IQMs) and video quality metrics (VQMs) have been proposed in the literature for evaluating techniques and systems for mitigating degraded visual environments. Some require both pristine and corrupted imagery. Others require patterned target boards in the scene. None of these metrics relates well to the task of landing a helicopter in conditions such as a brownout dust cloud. We have developed and used a variety of IQMs and VQMs related to the pilot's ability to detect hazards in the scene and to maintain situational awareness. Some of these metrics can be made agnostic to sensor type. Not only are the metrics suitable for evaluating algorithm and sensor variation, they are also suitable for choosing the most cost effective solution to improve operating conditions in degraded visual environments.
Visual quality analysis for images degraded by different types of noise
NASA Astrophysics Data System (ADS)
Ponomarenko, Nikolay N.; Lukin, Vladimir V.; Ieremeyev, Oleg I.; Egiazarian, Karen O.; Astola, Jaakko T.
2013-02-01
Modern visual quality metrics take into account different peculiarities of the Human Visual System (HVS). One of them is described by the Weber-Fechner law and deals with the different sensitivity to distortions in image fragments with different local mean values (intensity, brightness). We analyze how this property can be incorporated into the metric PSNR-HVS-M. It is shown that some improvement of its performance can be provided. Then, visual quality of color images corrupted by three types of i.i.d. noise (pure additive, pure multiplicative, and signal-dependent Poisson) is analyzed. Experiments with a group of observers are carried out for distorted color images created on the basis of the TID2008 database. Several modern HVS-metrics are considered. It is shown that even the best metrics are unable to assess visual quality of distorted images adequately enough. The reasons relate to the observer's attention to certain objects in the test images, i.e., to semantic aspects of vision, which are worth taking into account in the design of HVS-metrics.
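A simple way to fold Weber-Fechner-style intensity dependence into a PSNR-like measure is to scale each squared error by a weight that decreases with the local mean luminance. The sketch below is a generic illustration of that idea under assumed parameters (window size, constant k); the actual PSNR-HVS-M modification evaluated above differs in detail.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def weber_weighted_psnr(ref, test, k=0.05, window=8, peak=255.0):
    """PSNR in which each squared error is down-weighted in bright regions,
    a crude stand-in for Weber-Fechner-style sensitivity (k and window are
    assumed values, not calibrated constants)."""
    ref = ref.astype(float)
    test = test.astype(float)
    local_mean = uniform_filter(ref, size=window)
    weights = 1.0 / (1.0 + k * local_mean)      # brighter areas tolerate more error
    mse = np.sum(weights * (ref - test) ** 2) / np.sum(weights)
    return 10.0 * np.log10(peak ** 2 / max(mse, 1e-12))
```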
Image quality metrics for volumetric laser displays
NASA Astrophysics Data System (ADS)
Williams, Rodney D.; Donohoo, Daniel
1991-08-01
This paper addresses the extensions to the image quality metrics and related human factors research that are needed to establish the baseline standards for emerging volume display technologies. The existing and recently developed technologies for multiplanar volume displays are reviewed with an emphasis on basic human visual issues. Human factors image quality metrics and guidelines are needed to firmly establish this technology in the marketplace. The human visual requirements and the display design tradeoffs for these prototype laser-based volume displays are addressed, and several critical image quality issues are identified for further research. The American National Standard for Human Factors Engineering of Visual Display Terminal Workstations (ANSI/HFS-100) and other international standards (ISO, DIN) can serve as a starting point, but this research base must be extended to provide new image quality metrics for this new class of volume displays.
NASA Astrophysics Data System (ADS)
Gide, Milind S.; Karam, Lina J.
2016-08-01
With the increased focus on visual attention (VA) in the last decade, a large number of computational visual saliency methods have been developed over the past few years. These models are traditionally evaluated by using performance evaluation metrics that quantify the match between predicted saliency and fixation data obtained from eye-tracking experiments on human observers. Though a considerable number of such metrics have been proposed in the literature, there are notable problems in them. In this work, we discuss shortcomings in existing metrics through illustrative examples and propose a new metric that uses local weights based on fixation density which overcomes these flaws. To compare the performance of our proposed metric at assessing the quality of saliency prediction with other existing metrics, we construct a ground-truth subjective database in which saliency maps obtained from 17 different VA models are evaluated by 16 human observers on a 5-point categorical scale in terms of their visual resemblance with corresponding ground-truth fixation density maps obtained from eye-tracking data. The metrics are evaluated by correlating metric scores with the human subjective ratings. The correlation results show that the proposed evaluation metric outperforms all other popular existing metrics. Additionally, the constructed database and corresponding subjective ratings provide an insight into which of the existing metrics and future metrics are better at estimating the quality of saliency prediction and can be used as a benchmark.
PQSM-based RR and NR video quality metrics
NASA Astrophysics Data System (ADS)
Lu, Zhongkang; Lin, Weisi; Ong, Eeping; Yang, Xiaokang; Yao, Susu
2003-06-01
This paper presents a new and general concept, PQSM (Perceptual Quality Significance Map), to be used in measuring the visual distortion. It makes use of the selectivity characteristic of the HVS (Human Visual System), which pays more attention to certain areas/regions of the visual signal due to one or more of the following factors: salient features in image/video, cues from domain knowledge, and association of other media (e.g., speech or audio). PQSM is an array whose elements represent the relative perceptual-quality significance levels for the corresponding areas/regions of images or video. Due to its generality, PQSM can be incorporated into any visual distortion metric: to improve the effectiveness and/or efficiency of perceptual metrics, or even to enhance a PSNR-based metric. A three-stage PQSM estimation method is also proposed in this paper, with an implementation of motion, texture, luminance, skin-color and face mapping. Experimental results show the scheme can improve the performance of current image/video distortion metrics.
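The way a significance map of this kind would typically be used is as a spatial weight on an existing per-pixel distortion map before pooling. The sketch below shows that weighting step only; the weight floor and normalization are assumptions, and the three-stage PQSM estimation itself (motion, texture, luminance, skin-color and face mapping) is not reproduced here.

```python
import numpy as np

def significance_weighted_score(distortion_map, significance_map, floor=0.1):
    """Pool a per-pixel distortion map with a perceptual-significance map.

    distortion_map   : (H, W) non-negative distortions from any base metric.
    significance_map : (H, W) relative significance levels (larger = more attended).
    floor            : minimum weight so unattended regions still contribute
                       (an assumed regularization).
    """
    s = significance_map / (significance_map.max() + 1e-12)
    w = floor + (1.0 - floor) * s
    return float(np.sum(w * distortion_map) / np.sum(w))
```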
Memory colours and colour quality evaluation of conventional and solid-state lamps.
Smet, Kevin A G; Ryckaert, Wouter R; Pointer, Michael R; Deconinck, Geert; Hanselaer, Peter
2010-12-06
A colour quality metric based on memory colours is presented. The basic idea is simple. The colour quality of a test source is evaluated as the degree of similarity between the colour appearance of a set of familiar objects and their memory colours. The closer the match, the better the colour quality. This similarity was quantified using a set of similarity distributions obtained by Smet et al. in a previous study. The metric was validated by calculating the Pearson and Spearman correlation coefficients between the metric predictions and the visual appreciation results obtained in a validation experiment conducted by the authors as well as those obtained in two independent studies. The metric was found to correlate well with the visual appreciation of the lighting quality of the sources used in the three experiments. Its performance was also compared with that of the CIE colour rendering index and the NIST colour quality scale. For all three experiments, the metric was found to be significantly better at predicting the correct visual rank order of the light sources (p < 0.1).
NASA Astrophysics Data System (ADS)
Yu, Xuelian; Chen, Qian; Gu, Guohua; Ren, Jianle; Sui, Xiubao
2015-02-01
Designing objective quality assessment of color-fused images is a very demanding and challenging task. We propose four no-reference metrics based on human visual system characteristics for objectively evaluating the quality of false-color fusion images. The perceived edge metric (PEM) is defined based on a visual perception model and the color image gradient similarity between the fused image and the source images. The perceptual contrast metric (PCM) is established by associating multi-scale contrast and a varying contrast sensitivity filter (CSF) with the color components. A linear combination of the standard deviation and mean value over the fused image constructs the image colorfulness metric (ICM). The color comfort metric (CCM) is designed from the average saturation and the ratio of pixels with high and low saturation. The qualitative and quantitative experimental results demonstrate that the proposed metrics have a good agreement with subjective perception.
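Of the four metrics, the colorfulness term is the easiest to make concrete. The sketch below computes a colorfulness score as a linear combination of the chromatic spread and the chromatic mean of simple opponent channels; the opponent-channel choice and the coefficients are assumptions borrowed from common colorfulness measures, not the exact ICM definition.

```python
import numpy as np

def colorfulness_icm(rgb, alpha=1.0, beta=0.3):
    """Colorfulness as a linear combination of the standard deviation and mean
    of chromatic (opponent) channels; alpha and beta are assumed weights."""
    r, g, b = (rgb[..., i].astype(float) for i in range(3))
    rg = r - g                       # red-green opponent channel
    yb = 0.5 * (r + g) - b           # yellow-blue opponent channel
    chroma_std = np.hypot(rg.std(), yb.std())
    chroma_mean = np.hypot(rg.mean(), yb.mean())
    return alpha * chroma_std + beta * chroma_mean
```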
NASA Astrophysics Data System (ADS)
Jobson, Daniel J.; Rahman, Zia-ur; Woodell, Glenn A.; Hines, Glenn D.
2006-05-01
Aerial images from the Follow-On Radar, Enhanced and Synthetic Vision Systems Integration Technology Evaluation (FORESITE) flight tests with the NASA Langley Research Center's research Boeing 757 were acquired during severe haze and haze/mixed clouds visibility conditions. These images were enhanced using the Visual Servo (VS) process that makes use of the Multiscale Retinex. The images were then quantified with visual quality metrics used internally within the VS. One of these metrics, the Visual Contrast Measure, has been computed for hundreds of FORESITE images, and for major classes of imaging: terrestrial (consumer), orbital Earth observations, orbital Mars surface imaging, NOAA aerial photographs, and underwater imaging. The metric quantifies both the degree of visual impairment of the original, un-enhanced images as well as the degree of visibility improvement achieved by the enhancement process. The large aggregate data exhibits trends relating to degree of atmospheric visibility attenuation, and its impact on the limits of enhancement performance for the various image classes. Overall results support the idea that in most cases that do not involve extreme reduction in visibility, large gains in visual contrast are routinely achieved by VS processing. Additionally, for very poor visibility imaging, lesser, but still substantial, gains in visual contrast are also routinely achieved. Further, the data suggest that these visual quality metrics can be used as external standalone metrics for establishing performance parameters.
NASA Technical Reports Server (NTRS)
Jobson, Daniel J.; Rahman, Zia-ur; Woodell, Glenn A.; Hines, Glenn D.
2006-01-01
Aerial images from the Follow-On Radar, Enhanced and Synthetic Vision Systems Integration Technology Evaluation (FORESITE) flight tests with the NASA Langley Research Center's research Boeing 757 were acquired during severe haze and haze/mixed clouds visibility conditions. These images were enhanced using the Visual Servo (VS) process that makes use of the Multiscale Retinex. The images were then quantified with visual quality metrics used internally within the VS. One of these metrics, the Visual Contrast Measure, has been computed for hundreds of FORESITE images, and for major classes of imaging: terrestrial (consumer), orbital Earth observations, orbital Mars surface imaging, NOAA aerial photographs, and underwater imaging. The metric quantifies both the degree of visual impairment of the original, un-enhanced images as well as the degree of visibility improvement achieved by the enhancement process. The large aggregate data exhibits trends relating to degree of atmospheric visibility attenuation, and its impact on the limits of enhancement performance for the various image classes. Overall results support the idea that in most cases that do not involve extreme reduction in visibility, large gains in visual contrast are routinely achieved by VS processing. Additionally, for very poor visibility imaging, lesser, but still substantial, gains in visual contrast are also routinely achieved. Further, the data suggest that these visual quality metrics can be used as external standalone metrics for establishing performance parameters.
Quality metrics for sensor images
NASA Technical Reports Server (NTRS)
Ahumada, AL
1993-01-01
Methods are needed for evaluating the quality of augmented visual displays (AVID). Computational quality metrics will help summarize, interpolate, and extrapolate the results of human performance tests with displays. The FLM Vision group at NASA Ames has been developing computational models of visual processing and using them to develop computational metrics for similar problems. For example, display modeling systems use metrics for comparing proposed displays, halftoning optimizing methods use metrics to evaluate the difference between the halftone and the original, and image compression methods minimize the predicted visibility of compression artifacts. The visual discrimination models take as input two arbitrary images A and B and compute an estimate of the probability that a human observer will report that A is different from B. If A is an image that one desires to display and B is the actual displayed image, such an estimate can be regarded as an image quality metric reflecting how well B approximates A. There are additional complexities associated with the problem of evaluating the quality of radar and IR enhanced displays for AVID tasks. One important problem is the question of whether intruding obstacles are detectable in such displays. Although the discrimination model can handle detection situations by making B the original image A plus the intrusion, this detection model makes the inappropriate assumption that the observer knows where the intrusion will be. Effects of signal uncertainty need to be added to our models. A pilot needs to make decisions rapidly. The models need to predict not just the probability of a correct decision, but the probability of a correct decision by the time the decision needs to be made. That is, the models need to predict latency as well as accuracy. Luce and Green have generated models for auditory detection latencies. Similar models are needed for visual detection. Most image quality models are designed for static imagery. Watson has been developing a general spatial-temporal vision model to optimize video compression techniques. These models need to be adapted and calibrated for AVID applications.
A comparative study of multi-focus image fusion validation metrics
NASA Astrophysics Data System (ADS)
Giansiracusa, Michael; Lutz, Adam; Messer, Neal; Ezekiel, Soundararajan; Alford, Mark; Blasch, Erik; Bubalo, Adnan; Manno, Michael
2016-05-01
Fusion of visual information from multiple sources is relevant for security, transportation, and safety applications. One way that image fusion can be particularly useful is when fusing imagery data from multiple levels of focus. Different focus levels can create different visual qualities for different regions in the imagery, which can provide much more visual information to analysts when fused. Multi-focus image fusion would benefit a user through automation, which requires the evaluation of the fused images to determine whether they have properly fused the focused regions of each image. Many no-reference metrics, such as information-theory-based, image-feature-based, and structural-similarity-based metrics, have been developed to accomplish such comparisons. However, it is hard to scale an accurate assessment of visual quality, which requires validating these metrics for different types of applications. In order to do this, human-perception-based validation methods have been developed, particularly dealing with the use of receiver operating characteristic (ROC) curves and the area under them (AUC). Our study uses these to analyze the effectiveness of no-reference image fusion metrics applied to multi-resolution fusion methods in order to determine which should be used when dealing with multi-focus data. Preliminary results show that the Tsallis, SF, and spatial frequency metrics are consistent with the image quality and peak signal to noise ratio (PSNR).
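For the ROC/AUC validation step, the essential computation is the probability that a candidate metric scores a properly fused image above an improperly fused one. A minimal sketch using the rank-sum identity is given below; the binary human labels and the direction "higher score = better fusion" are illustrative assumptions.

```python
import numpy as np

def fusion_metric_auc(metric_scores, human_labels):
    """Area under the ROC curve for a no-reference fusion metric.

    metric_scores : per-image scores from the metric under validation.
    human_labels  : 1 if observers judged the image properly fused, else 0.
    """
    scores = np.asarray(metric_scores, dtype=float)
    labels = np.asarray(human_labels, dtype=int)
    pos, neg = scores[labels == 1], scores[labels == 0]
    # Count pairs where the metric ranks a good fusion above a bad one.
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return float((greater + 0.5 * ties) / (len(pos) * len(neg)))
```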
Shao, Feng; Lin, Weisi; Gu, Shanbo; Jiang, Gangyi; Srikanthan, Thambipillai
2013-05-01
Perceptual quality assessment is a challenging issue in 3D signal processing research. It is important to study the 3D signal directly instead of simply extending 2D metrics to the 3D case, as in some previous studies. In this paper, we propose a new perceptual full-reference quality assessment metric of stereoscopic images by considering the binocular visual characteristics. The major technical contribution of this paper is that the binocular perception and combination properties are considered in quality assessment. To be more specific, we first perform left-right consistency checks and compare matching error between the corresponding pixels in binocular disparity calculation, and classify the stereoscopic images into non-corresponding, binocular fusion, and binocular suppression regions. Also, local phase and local amplitude maps are extracted from the original and distorted stereoscopic images as features in quality assessment. Then, each region is evaluated independently by considering its binocular perception property, and all evaluation results are integrated into an overall score. In addition, a binocular just-noticeable-difference model is used to reflect the visual sensitivity for the binocular fusion and suppression regions. Experimental results show that compared with the relevant existing metrics, the proposed metric can achieve higher consistency with subjective assessment of stereoscopic images.
Backward Registration Based Aspect Ratio Similarity (ARS) for Image Retargeting Quality Assessment.
Zhang, Yabin; Fang, Yuming; Lin, Weisi; Zhang, Xinfeng; Li, Leida
2016-06-28
During the past few years, there have been various kinds of content-aware image retargeting operators proposed for image resizing. However, the lack of effective objective retargeting quality assessment metrics limits the further development of image retargeting techniques. Different from traditional Image Quality Assessment (IQA) metrics, the quality degradation during image retargeting is caused by artificial retargeting modifications, and the difficulty for Image Retargeting Quality Assessment (IRQA) lies in the alteration of the image resolution and content, which makes it impossible to directly evaluate the quality degradation as in traditional IQA. In this paper, we interpret image retargeting in a unified framework of resampling grid generation and forward resampling. We show that geometric change estimation is an efficient way to clarify the relationship between the images. We formulate the geometric change estimation as a Backward Registration problem with a Markov Random Field (MRF) and provide an effective solution. The geometric change aims to provide evidence about how the original image is resized into the target image. Under the guidance of the geometric change, we develop a novel Aspect Ratio Similarity (ARS) metric to evaluate the visual quality of retargeted images by exploiting the local block changes with a visual importance pooling strategy. Experimental results on the publicly available MIT RetargetMe and CUHK datasets demonstrate that the proposed ARS can predict more accurate visual quality of retargeted images compared with state-of-the-art IRQA metrics.
Comparative Study of the MTFA, ICS, and SQRI Image Quality Metrics for Visual Display Systems
1991-09-01
...reasonable image quality predictions across select display and viewing condition parameters. References include the American National Standard for Human Factors Engineering of Visual Display Terminal Workstations (ANSI/HFS-100).
Quality assessment for color reproduction using a blind metric
NASA Astrophysics Data System (ADS)
Bringier, B.; Quintard, L.; Larabi, M.-C.
2007-01-01
This paper deals with image quality assessment. This field nowadays plays an important role in various image processing applications. A number of objective image quality metrics, which may or may not correlate with subjective quality, have been developed during the last decade. Two categories of metrics can be distinguished, the first with full-reference and the second with no-reference. A full-reference metric tries to evaluate the distortion introduced to an image with regard to the reference. A no-reference approach attempts to model the judgment of image quality in a blind way. Unfortunately, a universal image quality model is not on the horizon, and empirical models established on psychophysical experimentation are generally used. In this paper, we focus only on the second category to evaluate the quality of color reproduction, and a blind metric based on human visual system modeling is introduced. The objective results are validated by single-media and cross-media subjective tests.
Simultaneous Analysis and Quality Assurance for Diffusion Tensor Imaging
Lauzon, Carolyn B.; Asman, Andrew J.; Esparza, Michael L.; Burns, Scott S.; Fan, Qiuyun; Gao, Yurui; Anderson, Adam W.; Davis, Nicole; Cutting, Laurie E.; Landman, Bennett A.
2013-01-01
Diffusion tensor imaging (DTI) enables non-invasive, cyto-architectural mapping of in vivo tissue microarchitecture through voxel-wise mathematical modeling of multiple magnetic resonance imaging (MRI) acquisitions, each differently sensitized to water diffusion. DTI computations are fundamentally estimation processes and are sensitive to noise and artifacts. Despite widespread adoption in the neuroimaging community, maintaining consistent DTI data quality remains challenging given the propensity for patient motion, artifacts associated with fast imaging techniques, and the possibility of hardware changes/failures. Furthermore, the quantity of data acquired per voxel, the non-linear estimation process, and numerous potential use cases complicate traditional visual data inspection approaches. Currently, quality inspection of DTI data has relied on visual inspection and individual processing in DTI analysis software programs (e.g. DTIPrep, DTI-studio). However, recent advances in applied statistical methods have yielded several different metrics to assess noise level, artifact propensity, quality of tensor fit, variance of estimated measures, and bias in estimated measures. To date, these metrics have been largely studied in isolation. Herein, we select complementary metrics for integration into an automatic DTI analysis and quality assurance pipeline. The pipeline completes in 24 hours, stores statistical outputs, and produces a graphical summary quality analysis (QA) report. We assess the utility of this streamlined approach for empirical quality assessment on 608 DTI datasets from pediatric neuroimaging studies. The efficiency and accuracy of quality analysis using the proposed pipeline is compared with quality analysis based on visual inspection. The unified pipeline is found to save a statistically significant amount of time (over 70%) while improving the consistency of QA between a DTI expert and a pool of research associates. Projection of QA metrics to a low dimensional manifold reveals qualitative, but clear, QA-study associations and suggests that automated outlier/anomaly detection would be feasible. PMID:23637895
Simultaneous analysis and quality assurance for diffusion tensor imaging.
Lauzon, Carolyn B; Asman, Andrew J; Esparza, Michael L; Burns, Scott S; Fan, Qiuyun; Gao, Yurui; Anderson, Adam W; Davis, Nicole; Cutting, Laurie E; Landman, Bennett A
2013-01-01
Diffusion tensor imaging (DTI) enables non-invasive, cyto-architectural mapping of in vivo tissue microarchitecture through voxel-wise mathematical modeling of multiple magnetic resonance imaging (MRI) acquisitions, each differently sensitized to water diffusion. DTI computations are fundamentally estimation processes and are sensitive to noise and artifacts. Despite widespread adoption in the neuroimaging community, maintaining consistent DTI data quality remains challenging given the propensity for patient motion, artifacts associated with fast imaging techniques, and the possibility of hardware changes/failures. Furthermore, the quantity of data acquired per voxel, the non-linear estimation process, and numerous potential use cases complicate traditional visual data inspection approaches. Currently, quality inspection of DTI data has relied on visual inspection and individual processing in DTI analysis software programs (e.g. DTIPrep, DTI-studio). However, recent advances in applied statistical methods have yielded several different metrics to assess noise level, artifact propensity, quality of tensor fit, variance of estimated measures, and bias in estimated measures. To date, these metrics have been largely studied in isolation. Herein, we select complementary metrics for integration into an automatic DTI analysis and quality assurance pipeline. The pipeline completes in 24 hours, stores statistical outputs, and produces a graphical summary quality analysis (QA) report. We assess the utility of this streamlined approach for empirical quality assessment on 608 DTI datasets from pediatric neuroimaging studies. The efficiency and accuracy of quality analysis using the proposed pipeline is compared with quality analysis based on visual inspection. The unified pipeline is found to save a statistically significant amount of time (over 70%) while improving the consistency of QA between a DTI expert and a pool of research associates. Projection of QA metrics to a low dimensional manifold reveals qualitative, but clear, QA-study associations and suggests that automated outlier/anomaly detection would be feasible.
Integrating automated support for a software management cycle into the TAME system
NASA Technical Reports Server (NTRS)
Sunazuka, Toshihiko; Basili, Victor R.
1989-01-01
Software managers are interested in the quantitative management of software quality, cost and progress. An integrated software management methodology, which can be applied throughout the software life cycle for any number of purposes, is required. The TAME (Tailoring A Measurement Environment) methodology is based on the improvement paradigm and the goal/question/metric (GQM) paradigm. This methodology helps generate a software engineering process and measurement environment based on the project characteristics. The SQMAR (software quality measurement and assurance technology) is a software quality metric system and methodology applied to the development processes. It is based on the feed forward control principle. Quality target setting is carried out before the plan-do-check-action activities are performed. These methodologies are integrated to realize goal-oriented measurement, process control and visual management. A metric setting procedure based on the GQM paradigm, a management system called the software management cycle (SMC), and its application to a case study based on NASA/SEL data are discussed. The expected effects of SMC are quality improvement, managerial cost reduction, accumulation and reuse of experience, and a highly visual management reporting system.
NASA Astrophysics Data System (ADS)
Dostal, P.; Krasula, L.; Klima, M.
2012-06-01
Various image processing techniques in multimedia technology are optimized using the visual attention feature of the human visual system. Spatial non-uniformity means that different locations in an image are of different importance in terms of the perception of the image. In other words, the perceived image quality depends mainly on the quality of important locations known as regions of interest (ROI). The performance of such techniques is measured by subjective evaluation or objective image quality criteria. Many state-of-the-art objective metrics are based on HVS properties: SSIM and MS-SSIM are based on image structural information, VIF on the information that the human brain can ideally gain from the reference image, and FSIM utilizes low-level features to assign a different importance to each location in the image. Still, none of these objective metrics utilizes the analysis of regions of interest. We address the question of whether these objective metrics can be used for effective evaluation of images reconstructed by processing techniques based on ROI analysis utilizing high-level features. In this paper the authors show that the state-of-the-art objective metrics do not correlate well with subjective evaluation when demosaicing based on ROI analysis is used for reconstruction. The ROIs were computed from "ground truth" visual attention data. An algorithm combining two known demosaicing techniques on the basis of ROI location is proposed to reconstruct the ROIs in fine quality while the rest of the image is reconstructed with low quality. The color image reconstructed by this ROI approach was compared with selected demosaicing techniques by objective criteria and subjective testing. The qualitative comparison of the objective and subjective results indicates that the state-of-the-art objective metrics are still not suitable for evaluating image processing techniques based on ROI analysis, and new criteria are needed.
The compressed average image intensity metric for stereoscopic video quality assessment
NASA Astrophysics Data System (ADS)
Wilczewski, Grzegorz
2016-09-01
The following article presents insights into the design, creation and testing of a novel metric intended for 3DTV video quality evaluation. The Compressed Average Image Intensity (CAII) mechanism is based upon stereoscopic video content analysis, and its core functionality is to serve as a versatile tool for effective 3DTV service quality assessment. Being an objective quality metric, it may be utilized as a reliable source of information about the actual performance of a given 3DTV system under a provider's evaluation. Concerning testing and the overall performance analysis of the CAII metric, the paper presents a comprehensive study of results gathered across several testing routines over a selected set of stereoscopic video content samples. As a result, the designed method for stereoscopic video quality evaluation is investigated across a range of synthetic visual impairments injected into the original video stream.
Moore, C S; Wood, T J; Beavis, A W; Saunderson, J R
2013-07-01
The purpose of this study was to examine the correlation between the quality of visually graded patient (clinical) chest images and a quantitative assessment of chest phantom (physical) images acquired with a computed radiography (CR) imaging system. The results of a previously published study, in which four experienced image evaluators graded computer-simulated postero-anterior chest images using a visual grading analysis scoring (VGAS) scheme, were used for the clinical image quality measurement. Contrast-to-noise ratio (CNR) and effective dose efficiency (eDE) were used as physical image quality metrics measured in a uniform chest phantom. Although optimal values of these physical metrics for chest radiography were not derived in this work, their correlation with VGAS in images acquired without an antiscatter grid across the diagnostic range of X-ray tube voltages was determined using Pearson's correlation coefficient. Clinical and physical image quality metrics increased with decreasing tube voltage. Statistically significant correlations between VGAS and CNR (R=0.87, p<0.033) and eDE (R=0.77, p<0.008) were observed. Medical physics experts may use the physical image quality metrics described here in quality assurance programmes and optimisation studies with a degree of confidence that they reflect the clinical image quality in chest CR images acquired without an antiscatter grid. A statistically significant correlation has been found between the clinical and physical image quality in CR chest imaging. The results support the value of using CNR and eDE in the evaluation of quality in clinical thorax radiography.
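The contrast-to-noise ratio used here as a physical metric has a simple form: the difference between the mean pixel values of a signal region and a background region, divided by the noise in the background. The sketch below assumes that common definition; the study may normalize slightly differently, and the eDE calculation (which also needs dose information) is not shown.

```python
import numpy as np

def contrast_to_noise_ratio(image, signal_mask, background_mask):
    """CNR from two regions of interest in a uniform phantom image."""
    signal = image[signal_mask].astype(float)
    background = image[background_mask].astype(float)
    return float((signal.mean() - background.mean()) / background.std())
```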
Evaluation of image deblurring methods via a classification metric
NASA Astrophysics Data System (ADS)
Perrone, Daniele; Humphreys, David; Lamb, Robert A.; Favaro, Paolo
2012-09-01
The performance of single image deblurring algorithms is typically evaluated via a certain discrepancy measure between the reconstructed image and the ideal sharp image. The choice of metric, however, has been a source of debate and has also led to alternative metrics based on human visual perception. While fixed metrics may fail to capture some small but visible artifacts, perception-based metrics may favor reconstructions with artifacts that are visually pleasant. To overcome these limitations, we propose to assess the quality of reconstructed images via a task-driven metric. In this paper we consider object classification as the task and therefore use the rate of classification as the metric to measure deblurring performance. In our evaluation we use data with different types of blur in two cases: Optical Character Recognition (OCR), where the goal is to recognise characters in a black and white image, and object classification with no restrictions on pose, illumination and orientation. Finally, we show how off-the-shelf classification algorithms benefit from working with deblurred images.
Plaza-Puche, Ana B; Alió, Jorge L; MacRae, Scott; Zheleznyak, Len; Sala, Esperanza; Yoon, Geunyoung
2015-05-01
To investigate, for a trifocal intraocular lens (IOL) and a varifocal IOL, the correlations between "ex vivo" optical bench through-focus image quality analysis and clinical visual performance in real patients as measured by defocus curves. This prospective, consecutive, nonrandomized, comparative study included a total of 64 eyes of 42 patients. Three groups of eyes were differentiated according to the IOL implanted: 22 eyes implanted with the varifocal Lentis Mplus LS-313 IOL (Oculentis GmbH, Berlin, Germany); 22 eyes implanted with the trifocal FineVision IOL (Physiol, Liege, Belgium), and 20 eyes implanted with the monofocal Acrysof SA60AT IOL (Alcon Laboratories, Inc., Fort Worth, TX). Visual outcomes and defocus curves were evaluated postoperatively. Optical bench through-focus performance was quantified by computing an image quality metric and the cross-correlation coefficient between an unaberrated reference image and captured retinal images from a model eye with a 3.0-mm artificial pupil. Statistically significant differences among the defocus curves of the different IOLs were detected for levels of defocus from -4.00 to -1.00 diopters (D) (P < .01). Significant correlations were found between the optical bench image quality metric results and the logMAR visual acuity scale in all groups (Lentis Mplus group: r = -0.97, P < .01; FineVision group: r = -0.82, P < .01; Acrysof group: r = -0.99, P < .01). Linear predictive models were obtained. Significant correlations were found between logMAR visual acuity and the image quality metric for the multifocal and monofocal IOLs analyzed. This finding enables surgeons to predict visual outcomes from the optical bench analysis. Copyright 2015, SLACK Incorporated.
Experimental design and analysis of JND test on coded image/video
NASA Astrophysics Data System (ADS)
Lin, Joe Yuchieh; Jin, Lina; Hu, Sudeng; Katsavounidis, Ioannis; Li, Zhi; Aaron, Anne; Kuo, C.-C. Jay
2015-09-01
The visual Just-Noticeable-Difference (JND) metric is characterized by the minimum detectable difference between two visual stimuli. Conducting the subjective JND test is a labor-intensive task. In this work, we present a novel interactive method for performing the visual JND test on compressed image/video. JND has been used to enhance perceptual visual quality in the context of image/video compression. Given a set of coding parameters, a JND test is designed to determine the distinguishable quality level against a reference image/video, which is called the anchor. The JND metric can be used to save coding bitrates by exploiting the special characteristics of the human visual system. The proposed JND test is conducted using a binary forced choice, which is often adopted to discriminate perceptual differences in a psychophysical experiment. The assessors are asked to compare coded image/video pairs and determine whether they are of the same quality or not. A bisection procedure is designed to find the JND locations so as to reduce the required number of comparisons over a wide range of bitrates. We demonstrate the efficiency of the proposed JND test and report experimental results on the image and video JND tests.
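The bisection procedure can be stated compactly: starting from a bitrate known to look worse than the anchor and one known to look the same, repeatedly test the midpoint with a forced-choice comparison and keep the half-interval that still brackets the JND boundary. The sketch below assumes a callable that returns the outcome of such a trial and assumes distinguishability is monotone in bitrate; both are simplifications of the published procedure.

```python
def find_jnd_bitrate(is_distinguishable, low_kbps, high_kbps, tol_kbps=50.0):
    """Bisection search for the bitrate at which a coded clip stops being
    distinguishable from the anchor.

    is_distinguishable(bitrate) -> bool stands in for a binary forced-choice
    trial against the anchor (hypothetical interface).
    """
    assert is_distinguishable(low_kbps) and not is_distinguishable(high_kbps)
    while high_kbps - low_kbps > tol_kbps:
        mid = 0.5 * (low_kbps + high_kbps)
        if is_distinguishable(mid):
            low_kbps = mid        # still visibly worse than the anchor
        else:
            high_kbps = mid       # judged perceptually equivalent
    return high_kbps              # lowest tested rate judged equivalent
```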
Wood, T J; Beavis, A W; Saunderson, J R
2013-01-01
Objective: The purpose of this study was to examine the correlation between the quality of visually graded patient (clinical) chest images and a quantitative assessment of chest phantom (physical) images acquired with a computed radiography (CR) imaging system. Methods: The results of a previously published study, in which four experienced image evaluators graded computer-simulated postero-anterior chest images using a visual grading analysis scoring (VGAS) scheme, were used for the clinical image quality measurement. Contrast-to-noise ratio (CNR) and effective dose efficiency (eDE) were used as physical image quality metrics measured in a uniform chest phantom. Although optimal values of these physical metrics for chest radiography were not derived in this work, their correlation with VGAS in images acquired without an antiscatter grid across the diagnostic range of X-ray tube voltages was determined using Pearson’s correlation coefficient. Results: Clinical and physical image quality metrics increased with decreasing tube voltage. Statistically significant correlations between VGAS and CNR (R=0.87, p<0.033) and eDE (R=0.77, p<0.008) were observed. Conclusion: Medical physics experts may use the physical image quality metrics described here in quality assurance programmes and optimisation studies with a degree of confidence that they reflect the clinical image quality in chest CR images acquired without an antiscatter grid. Advances in knowledge: A statistically significant correlation has been found between the clinical and physical image quality in CR chest imaging. The results support the value of using CNR and eDE in the evaluation of quality in clinical thorax radiography. PMID:23568362
On the performance of metrics to predict quality in point cloud representations
NASA Astrophysics Data System (ADS)
Alexiou, Evangelos; Ebrahimi, Touradj
2017-09-01
Point clouds are a promising alternative for immersive representation of visual contents. Recently, an increased interest has been observed in the acquisition, processing and rendering of this modality. Although subjective and objective evaluations are critical in order to assess the visual quality of media content, they still remain open problems for point cloud representation. In this paper we focus our efforts on subjective quality assessment of point cloud geometry, subject to typical types of impairments such as noise corruption and compression-like distortions. In particular, we propose a subjective methodology that is closer to real-life scenarios of point cloud visualization. The performance of the state-of-the-art objective metrics is assessed by considering the subjective scores as the ground truth. Moreover, we investigate the impact of adopting different test methodologies by comparing them. Advantages and drawbacks of every approach are reported, based on statistical analysis. The results and conclusions of this work provide useful insights that could be considered in future experimentation.
Johnson, Jeffrey P; Krupinski, Elizabeth A; Yan, Michelle; Roehrig, Hans; Graham, Anna R; Weinstein, Ronald S
2011-02-01
A major issue in telepathology is the extremely large and growing size of digitized "virtual" slides, which can require several gigabytes of storage and cause significant delays in data transmission for remote image interpretation and interactive visualization by pathologists. Compression can reduce this massive amount of virtual slide data, but reversible (lossless) methods limit data reduction to less than 50%, while lossy compression can degrade image quality and diagnostic accuracy. "Visually lossless" compression offers the potential for using higher compression levels without noticeable artifacts, but requires a rate-control strategy that adapts to image content and loss visibility. We investigated the utility of a visual discrimination model (VDM) and other distortion metrics for predicting JPEG 2000 bit rates corresponding to visually lossless compression of virtual slides for breast biopsy specimens. Threshold bit rates were determined experimentally with human observers for a variety of tissue regions cropped from virtual slides. For test images compressed to their visually lossless thresholds, just-noticeable difference (JND) metrics computed by the VDM were nearly constant at the 95th percentile level or higher, and were significantly less variable than peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) metrics. Our results suggest that VDM metrics could be used to guide the compression of virtual slides to achieve visually lossless compression while providing 5-12 times the data reduction of reversible methods.
NASA Astrophysics Data System (ADS)
Camp, H. A.; Moyer, Steven; Moore, Richard K.
2010-04-01
The Night Vision and Electronic Sensors Directorate's current time-limited search (TLS) model, which makes use of the targeting task performance (TTP) metric to describe image quality, does not explicitly account for the effects of visual clutter on observer performance. The TLS model is currently based on empirical fits to describe human performance for a time of day, spectrum and environment. Incorporating a clutter metric into the TLS model may reduce the number of these empirical fits needed. The masked target transform volume (MTTV) clutter metric has been previously presented and compared to other clutter metrics. Using real infrared imagery of rural scenes with varying levels of clutter, NVESD is currently evaluating the appropriateness of the MTTV metric. NVESD had twenty subject matter experts (SMEs) rank the amount of clutter in each scene in a series of pair-wise comparisons. MTTV metric values were calculated and then compared to the SME observers' rankings. The MTTV metric ranked the clutter in a similar manner to the SME evaluation, suggesting that the MTTV metric may emulate SME response. This paper is a first step in quantifying clutter and measuring the agreement with subjective human evaluation.
Toward objective image quality metrics: the AIC Eval Program of the JPEG
NASA Astrophysics Data System (ADS)
Richter, Thomas; Larabi, Chaker
2008-08-01
Objective quality assessment of lossy image compression codecs is an important part of the recent call of the JPEG for Advanced Image Coding. The target of the AIC ad-hoc group is twofold: first, to receive state-of-the-art still image codecs and to propose suitable technology for standardization; and second, to study objective image quality metrics to evaluate the performance of such codecs. Even though the performance of an objective metric is defined by how well it predicts the outcome of a subjective assessment, one can also study the usefulness of a metric in a non-traditional way indirectly, namely by measuring the subjective quality improvement of a codec that has been optimized for a specific objective metric. This approach shall be demonstrated here on the recently proposed HDPhoto format [14] introduced by Microsoft and an SSIM-tuned [17] version of it by one of the authors. We compare these two implementations with JPEG [1] in two variations and a visually and PSNR optimal JPEG2000 [13] implementation. To this end, we use subjective and objective tests based on the multiscale SSIM and a new DCT-based metric.
Spread spectrum image watermarking based on perceptual quality metric.
Zhang, Fan; Liu, Wenyu; Lin, Weisi; Ngan, King Ngi
2011-11-01
Efficient image watermarking calls for full exploitation of the perceptual distortion constraint. Second-order statistics of visual stimuli are regarded as critical features for perception. This paper proposes a second-order statistics (SOS)-based image quality metric, which considers the texture masking effect and the contrast sensitivity in Karhunen-Loève transform domain. Compared with the state-of-the-art metrics, the quality prediction by SOS better correlates with several subjectively rated image databases, in which the images are impaired by the typical coding and watermarking artifacts. With the explicit metric definition, spread spectrum watermarking is posed as an optimization problem: we search for a watermark to minimize the distortion of the watermarked image and to maximize the correlation between the watermark pattern and the spread spectrum carrier. The simple metric guarantees the optimal watermark a closed-form solution and a fast implementation. The experiments show that the proposed watermarking scheme can take full advantage of the distortion constraint and improve the robustness in return.
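Stripped of the SOS perceptual model, the optimization described above has a familiar closed form: under a plain mean-squared-error budget, the watermark that maximizes correlation with the spreading carrier is simply a scaled copy of the carrier. The sketch below shows that simplified case only; replacing the global MSE budget with the SOS metric is what gives the paper's scheme its perceptual shaping.

```python
import numpy as np

def embed_spread_spectrum(host, carrier, budget):
    """Additive spread-spectrum embedding under a squared-error budget.

    host    : flattened host coefficients (e.g., in a transform domain).
    carrier : pseudo-random +/-1 spreading sequence of the same length.
    budget  : allowed total squared-error distortion (stand-in for the
              perceptual constraint).
    """
    alpha = np.sqrt(budget / carrier.size)   # per-sample watermark amplitude
    return host + alpha * carrier

def detect_spread_spectrum(received, carrier):
    """Correlation detector; a clearly positive response suggests the mark is present."""
    return float(np.dot(received, carrier) / carrier.size)
```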
Compressing Test and Evaluation by Using Flow Data for Scalable Network Traffic Analysis
2014-10-01
...test events, quality of service and other key metrics of military systems and networks are evaluated. Network data captured in standard flow formats ... mentioned here. The Ozone Widget Framework (Next Century, n.d.) has proven to be very useful. Also, an extensive, clean, and optimized JavaScript library for visualizing many types of data can be found in D3: Data-Driven Documents (Bostock, 2013). Quality of Service from Flow: two essential metrics of ...
Information theoretical assessment of visual communication with wavelet coding
NASA Astrophysics Data System (ADS)
Rahman, Zia-ur
1995-06-01
A visual communication channel can be characterized by the efficiency with which it conveys information, and the quality of the images restored from the transmitted data. Efficient data representation requires the use of constraints of the visual communication channel. Our information-theoretic analysis combines the design of the wavelet compression algorithm with the design of the visual communication channel. Shannon's communication theory, Wiener's restoration filter, and the critical design factors of image gathering and display are combined to provide metrics for measuring the efficiency of data transmission, and for quantitatively assessing the visual quality of the restored image. These metrics are: a) the mutual information η between the radiance field and the restored image, and b) the efficiency of the channel, which can be roughly measured as the ratio η/H, where H is the average number of bits used to transmit the data. Huck et al. (Journal of Visual Communication and Image Representation, Vol. 4, No. 2, 1993) have shown that channels designed to maximize η also maximize the visual quality of the restored image. Our assessment provides a framework for designing channels which provide the highest possible visual quality for a given amount of data under the critical design limitations of the image gathering and display devices. Results show that a trade-off exists between the maximum realizable information of the channel and its efficiency: an increase in one leads to a decrease in the other. The final selection of which of these quantities to maximize is, of course, application dependent.
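The two quantities named above can be estimated crudely from image data alone. The sketch below uses a joint-histogram estimate of the mutual information between the original and restored images and divides it by the transmitted bit budget; this is only an illustrative stand-in for the full channel analysis, which models image gathering, coding, and display explicitly.

```python
import numpy as np

def mutual_information_bits(a, b, bins=64):
    """Histogram estimate of the mutual information (bits) between two images."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float(np.sum(p[nz] * np.log2(p[nz] / (px @ py)[nz])))

def channel_efficiency(reference, restored, bits_per_pixel):
    """Ratio of conveyed information to transmitted bits (eta / H)."""
    return mutual_information_bits(reference, restored) / bits_per_pixel
```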
Image quality assessment by preprocessing and full reference model combination
NASA Astrophysics Data System (ADS)
Bianco, S.; Ciocca, G.; Marini, F.; Schettini, R.
2009-01-01
This paper focuses on full-reference image quality assessment and presents different computational strategies aimed to improve the robustness and accuracy of some well known and widely used state of the art models, namely the Structural Similarity approach (SSIM) by Wang and Bovik and the S-CIELAB spatial-color model by Zhang and Wandell. We investigate the hypothesis that combining error images with a visual attention model could allow a better fit of the psycho-visual data of the LIVE Image Quality assessment Database Release 2. We show that the proposed quality assessment metric better correlates with the experimental data.
Performance evaluation of objective quality metrics for HDR image compression
NASA Astrophysics Data System (ADS)
Valenzise, Giuseppe; De Simone, Francesca; Lauga, Paul; Dufaux, Frederic
2014-09-01
Due to the much larger luminance and contrast characteristics of high dynamic range (HDR) images, well-known objective quality metrics, widely used for the assessment of low dynamic range (LDR) content, cannot be directly applied to HDR images in order to predict their perceptual fidelity. To overcome this limitation, advanced fidelity metrics, such as the HDR-VDP, have been proposed to accurately predict visually significant differences. However, their complex calibration may make them difficult to use in practice. A simpler approach consists in computing arithmetic or structural fidelity metrics, such as PSNR and SSIM, on perceptually encoded luminance values but the performance of quality prediction in this case has not been clearly studied. In this paper, we aim at providing a better comprehension of the limits and the potentialities of this approach, by means of a subjective study. We compare the performance of HDR-VDP to that of PSNR and SSIM computed on perceptually encoded luminance values, when considering compressed HDR images. Our results show that these simpler metrics can be effectively employed to assess image fidelity for applications such as HDR image compression.
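The "simpler approach" discussed above amounts to mapping absolute luminance through a perceptually motivated transfer curve and then applying an ordinary LDR metric. The sketch below uses a plain logarithmic curve as a stand-in for a perceptually uniform encoding (the real PU curve and its constants differ) and computes PSNR on the encoded values.

```python
import numpy as np

def log_encode(luminance, l_min=0.005, l_max=10000.0):
    """Map absolute luminance (cd/m^2) to [0, 1] with a log curve; a crude
    stand-in for a perceptually uniform encoding."""
    l = np.clip(luminance, l_min, l_max)
    return (np.log10(l) - np.log10(l_min)) / (np.log10(l_max) - np.log10(l_min))

def psnr_on_encoded_luminance(ref_luminance, test_luminance):
    """PSNR computed on perceptually encoded values rather than raw luminance."""
    r, t = log_encode(ref_luminance), log_encode(test_luminance)
    mse = np.mean((r - t) ** 2)
    return 10.0 * np.log10(1.0 / max(mse, 1e-12))
```

SSIM could be applied to the encoded values in the same way.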
Quality assessment of color images based on the measure of just noticeable color difference
NASA Astrophysics Data System (ADS)
Chou, Chun-Hsien; Hsu, Yun-Hsiang
2014-01-01
Accurate assessment of the quality of color images is an important step for many image processing systems that convey visual information of the reproduced images. An accurate objective image quality assessment (IQA) method is expected to give assessment results that agree closely with subjective assessment. To assess the quality of color images, many approaches simply apply a metric for assessing the quality of gray scale images to each of the three color channels of the color image, neglecting the correlation among the three color channels. In this paper, a metric for assessing the quality of color images is proposed, in which the model of variable just-noticeable color difference (VJNCD) is employed to estimate the visibility thresholds of distortion inherent in each color pixel. With the estimated visibility thresholds of distortion, the proposed metric measures the average perceptible distortion in terms of the quantized distortion according to the perceptual error map similar to that defined by the National Bureau of Standards (NBS) for converting the color difference enumerated by CIEDE2000 to the objective score of perceptual quality assessment. The perceptual error map in this case is designed for each pixel according to the visibility threshold estimated by the VJNCD model. The performance of the proposed metric is verified by assessing the test images in the LIVE database, and is compared with those of many well-known IQA metrics. Experimental results indicate that the proposed metric is an effective IQA method that can accurately predict the image quality of color images in terms of the correlation between objective scores and subjective evaluation.
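A bare-bones starting point for this kind of color-difference-based assessment is the per-pixel CIEDE2000 difference averaged over the image. The sketch below (using scikit-image's color conversions) computes only that baseline; the metric described above goes further by gating each difference with a per-pixel VJNCD visibility threshold and mapping the result through an NBS-style perceptual error table, which is omitted here.

```python
import numpy as np
from skimage.color import rgb2lab, deltaE_ciede2000

def mean_ciede2000(ref_rgb, test_rgb):
    """Average per-pixel CIEDE2000 difference between reference and test
    images (RGB values in [0, 1])."""
    lab_ref = rgb2lab(ref_rgb)
    lab_test = rgb2lab(test_rgb)
    return float(np.mean(deltaE_ciede2000(lab_ref, lab_test)))
```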
Visual difference metric for realistic image synthesis
NASA Astrophysics Data System (ADS)
Bolin, Mark R.; Meyer, Gary W.
1999-05-01
An accurate and efficient model of human perception has been developed to control the placement of samples in a realistic image synthesis algorithm. Previous sampling techniques have sought to spread the error equally across the image plane. However, this approach neglects the fact that the renderings are intended to be displayed for a human observer. The human visual system has a varying sensitivity to error that is based upon the viewing context. This means that equivalent optical discrepancies can be very obvious in one situation and imperceptible in another. It is ultimately the perceptibility of this error that governs image quality and should be used as the basis of a sampling algorithm. This paper focuses on a simplified version of the Lubin Visual Discrimination Metric (VDM) that was developed for insertion into an image synthesis algorithm. The sampling VDM makes use of a Haar wavelet basis for the cortical transform and a less severe spatial pooling operation. The model was extended for color, including the effects of chromatic aberration. Comparisons are made between the execution time and visual difference map for the original Lubin and simplified visual difference metrics. Results for the realistic image synthesis algorithm are also presented.
Adapting the ISO 20462 softcopy ruler method for online image quality studies
NASA Astrophysics Data System (ADS)
Burns, Peter D.; Phillips, Jonathan B.; Williams, Don
2013-01-01
In this paper we address the problem of image quality assessment with no-reference metrics, focusing on JPEG-corrupted images. In general, no-reference metrics are not able to measure distortions with the same performance across their possible range and with respect to different image contents. The crosstalk between content and distortion signals influences human perception. We here propose two strategies to improve the correlation between subjective and objective quality data. The first strategy is based on grouping the images according to their spatial complexity. The second one is based on a frequency analysis. Both strategies are tested on two databases available in the literature. The results show an improvement in the correlations between no-reference metrics and psycho-visual data, evaluated in terms of the Pearson Correlation Coefficient.
Walsh, C; Johnston, C; Sheehy, N; O' Reilly, G
2013-02-01
In this study, quantitative and qualitative image quality (IQ) measurements were compared with clinical judgement of IQ in positron emission tomography (PET). The limitations of IQ metrics and the proposed criteria of acceptability for PET scanners are discussed. Phantom and patient images were reconstructed using seven different iterative reconstruction protocols. For each reconstructed set of images, IQ was scored based both on visual analysis and on the quantitative metrics. The quantitative physics metrics did not rank the reconstruction protocols in the same order as the clinicians' scoring of perceived IQ (R(s)=-0.54). Better agreement was achieved when comparing the clinical perception of IQ to the physicist's visual assessment of IQ in the phantom images (R(s)=+0.59). The closest agreement was seen between the quantitative physics metrics and the measurement of the standard uptake values (SUVs) in small tumours (R(s)=+0.92). Given the disparity between the clinical perception of IQ and the physics metrics, a cautious approach to the use of IQ measurements for determining suspension levels is warranted.
Automatic extraction and visualization of object-oriented software design metrics
NASA Astrophysics Data System (ADS)
Lakshminarayana, Anuradha; Newman, Timothy S.; Li, Wei; Talburt, John
2000-02-01
Software visualization is a graphical representation of software characteristics and behavior. Certain modes of software visualization can be useful in isolating problems and identifying unanticipated behavior. In this paper we present a new approach to aid understanding of object-oriented software through 3D visualization of software metrics that can be extracted from the design phase of software development. The focus of the paper is a metric extraction method and a new collection of glyphs for multi-dimensional metric visualization. Our approach utilizes the extensibility interface of a popular CASE tool to access and automatically extract the metrics from Unified Modeling Language class diagrams. Following the extraction of the design metrics, a 3D visualization of these metrics is generated for each class in the design, utilizing intuitively meaningful 3D glyphs that are representative of the ensemble of metrics. Extraction and visualization of design metrics can aid software developers in the early study and understanding of design complexity.
Wavefront-Guided Scleral Lens Correction in Keratoconus
Marsack, Jason D.; Ravikumar, Ayeswarya; Nguyen, Chi; Ticak, Anita; Koenig, Darren E.; Elswick, James D.; Applegate, Raymond A.
2014-01-01
Purpose To examine the performance of state-of-the-art wavefront-guided scleral contact lenses (wfgSCLs) on a sample of keratoconic eyes, with emphasis on performance quantified with visual quality metrics; and to provide a detailed discussion of the process used to design, manufacture and evaluate wfgSCLs. Methods Fourteen eyes of 7 subjects with keratoconus were enrolled and a wfgSCL was designed for each eye. High-contrast visual acuity and visual quality metrics were used to assess the on-eye performance of the lenses. Results The wfgSCL provided statistically lower levels of both lower-order RMS (p < 0.001) and higher-order RMS (p < 0.02) than an intermediate spherical equivalent scleral contact lens. The wfgSCL provided lower levels of lower-order RMS than a normal group of well-corrected observers (p << 0.001). However, the wfgSCL does not provide less higher-order RMS than the normal group (p = 0.41). Of the 14 eyes studied, 10 successfully reached the exit criteria, achieving residual higher-order root mean square wavefront error (HORMS) less than or within 1 SD of the levels experienced by normal, age-matched subjects. In addition, measures of visual image quality (logVSX, logNS and logLIB) for the 10 eyes were well distributed within the range of values seen in normal eyes. However, visual performance as measured by high contrast acuity did not reach normal, age-matched levels, which is in agreement with prior results associated with the acute application of wavefront correction to KC eyes. Conclusions Wavefront-guided scleral contact lenses are capable of optically compensating for the deleterious effects of higher-order aberration concomitant with the disease, and can provide visual image quality equivalent to that seen in normal eyes. Longer duration studies are needed to assess whether the visual system of the highly aberrated eye wearing a wfgSCL is capable of producing visual performance levels typical of the normal population. PMID:24830371
Evaluation techniques and metrics for assessment of pan+MSI fusion (pansharpening)
NASA Astrophysics Data System (ADS)
Mercovich, Ryan A.
2015-05-01
Fusion of broadband panchromatic data with narrow band multispectral data - pansharpening - is a common and often studied problem in remote sensing. Many methods exist to produce data fusion results with the best possible spatial and spectral characteristics, and a number have been commercially implemented. This study examines the output products of 4 commercial implementations with regard to their relative strengths and weaknesses for a set of defined image characteristics and analyst use-cases. Image characteristics used are spatial detail, spatial quality, spectral integrity, and composite color quality (hue and saturation), and analyst use-cases included a variety of object detection and identification tasks. The imagery comes courtesy of the RIT SHARE 2012 collect. Two approaches are used to evaluate the pansharpening methods: analyst evaluation (a qualitative measure) and image quality metrics (quantitative measures). Visual analyst evaluation results are compared with metric results to determine which metrics best measure the defined image characteristics and product use-cases and to support future rigorous characterization of the metrics' correlation with the analyst results. Because pansharpening represents a trade between adding spatial information from the panchromatic image and retaining spectral information from the MSI channels, the metrics examined are grouped into spatial improvement metrics and spectral preservation metrics. A single metric to quantify the quality of a pansharpening method would necessarily be a combination of weighted spatial and spectral metrics based on the importance of various spatial and spectral characteristics for the primary task of interest. Appropriate metrics and weights for such a combined metric are proposed here, based on the conducted analyst evaluation. Additionally, during this work, a metric was developed that focuses specifically on assessing spatial structure improvement relative to a reference image, independent of scene content. Using analysis of Fourier transform images, a measure of high-frequency content is computed in small sub-segments of the image. The average increase in high-frequency content across the image is used as the metric, where averaging across sub-segments combats the scene-dependent nature of typical image sharpness techniques. This metric had an improved range of scores, better representing differences in the test set than other common spatial structure metrics.
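As a rough illustration of the spatial-structure metric described above, the following Python sketch measures high-frequency spectral content in small sub-segments and averages the gain of the pansharpened image over a reference. The block size, radial cutoff, and energy-fraction formulation are illustrative assumptions, not the exact definition used in the study, and both inputs are assumed to be co-registered single-band arrays of the same size.

```python
import numpy as np

def high_freq_energy(block, cutoff=0.25):
    """Fraction of spectral energy above a normalized radial frequency cutoff."""
    spectrum = np.fft.fftshift(np.fft.fft2(block))
    power = np.abs(spectrum) ** 2
    h, w = block.shape
    yy, xx = np.mgrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
    radius = np.sqrt((yy / (h / 2.0)) ** 2 + (xx / (w / 2.0)) ** 2)
    return power[radius > cutoff].sum() / power.sum()

def spatial_improvement(pansharpened, reference, block=64):
    """Average increase in high-frequency content of the pansharpened image
    relative to the reference, computed over small sub-segments."""
    gains = []
    h, w = reference.shape
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            gain = (high_freq_energy(pansharpened[y:y + block, x:x + block]) -
                    high_freq_energy(reference[y:y + block, x:x + block]))
            gains.append(gain)
    return float(np.mean(gains))
```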
Overcoming Presbyopia by Manipulating the Eyes' Optics
NASA Astrophysics Data System (ADS)
Zheleznyak, Leonard A.
Presbyopia, the age-related loss of accommodation, is a visual condition affecting all adults over the age of 45 years. In presbyopia, individuals lose the ability to focus on nearby objects, due to a lifelong growth and stiffening of the eye's crystalline lens. This leads to poor near visual performance and affects patients' quality of life. The objective of this thesis is the correction of presbyopia, and the work can be divided into four aims. First, we examined the characteristics and limitations of currently available strategies for the correction of presbyopia. A natural-view wavefront sensor was used to objectively measure the accommodative ability of patients implanted with an accommodative intraocular lens (IOL). Although these patients had little accommodative ability based on changes in power, pupil miosis and higher order aberrations led to an improvement in through-focus retinal image quality in some cases. To quantify the through-focus retinal image quality of accommodative and multifocal IOLs directly, an adaptive optics (AO) IOL metrology system was developed. Using this system, the impact of corneal aberrations in regard to presbyopia-correcting IOLs was assessed, providing an objective measure of through-focus retinal image quality and practical guidelines for patient selection. Second, to improve upon existing multifocal designs, we investigated retinal image quality metrics for the prediction of through-focus visual performance. The preferred metric was based on the fidelity of an image convolved with an aberrated point spread function. Using this metric, we investigated the potential of higher order aberrations and pupil amplitude apodization to increase the depth of focus of the presbyopic eye. Third, we investigated modified monovision, a novel binocular approach to presbyopia correction, using a binocular AO vision simulator. In modified monovision, different magnitudes of defocus and spherical aberration are introduced to each eye, thereby taking advantage of the binocular visual system. Several experiments using the binocular AO vision simulator found modified monovision led to significant improvements in through-focus visual performance, binocular summation and stereoacuity, as compared to traditional monovision. Finally, we addressed neural factors affecting visual performance in modified monovision, such as ocular dominance and neural plasticity. We found that pairing modified monovision with a vision training regimen may further improve visual performance beyond the limits set by optics via neural plasticity. This opens the door to an exciting new avenue of vision correction to accompany optical interventions. The research presented in this thesis offers important guidelines for the clinical and scientific communities. Furthermore, the techniques described herein may be applied to other fields of ophthalmology, such as childhood myopia progression.
NMF-Based Image Quality Assessment Using Extreme Learning Machine.
Wang, Shuigen; Deng, Chenwei; Lin, Weisi; Huang, Guang-Bin; Zhao, Baojun
2017-01-01
Numerous state-of-the-art perceptual image quality assessment (IQA) algorithms share a common two-stage process: distortion description followed by distortion effects pooling. As for the first stage, the distortion descriptors or measurements are expected to be effective representatives of human visual variations, while the second stage should well express the relationship among quality descriptors and the perceptual visual quality. However, most of the existing quality descriptors (e.g., luminance, contrast, and gradient) do not seem to be consistent with human perception, and the effects pooling is often done in ad-hoc ways. In this paper, we propose a novel full-reference IQA metric. It applies non-negative matrix factorization (NMF) to measure image degradations by making use of the parts-based representation of NMF. On the other hand, a new machine learning technique [extreme learning machine (ELM)] is employed to address the limitations of the existing pooling techniques. Compared with neural networks and support vector regression, ELM can achieve higher learning accuracy with faster learning speed. Extensive experimental results demonstrate that the proposed metric has better performance and lower computational complexity in comparison with the relevant state-of-the-art approaches.
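As a rough sketch of the first (distortion description) stage described above, the snippet below uses scikit-learn's NMF to obtain a parts-based encoding of reference image patches and compares it with the encoding of the distorted patches. The component count, patch representation, and the simple absolute-difference pooling are assumptions for illustration; the paper's ELM-based pooling stage is omitted.

```python
import numpy as np
from sklearn.decomposition import NMF

def nmf_degradation_features(ref_patches, dist_patches, n_components=8):
    """Fit NMF on non-negative reference patches (n_patches, n_pixels) and
    compare the parts-based encodings of reference and distorted patches.
    Returns a per-component mean absolute difference as a crude degradation
    descriptor; a learned pooling stage (e.g. ELM) would follow in practice."""
    model = NMF(n_components=n_components, init="nndsvda", max_iter=500)
    h_ref = model.fit_transform(ref_patches)   # encodings of reference patches
    h_dist = model.transform(dist_patches)     # encodings under the same parts
    return np.abs(h_ref - h_dist).mean(axis=0)
```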
Genome U-Plot: a whole genome visualization.
Gaitatzes, Athanasios; Johnson, Sarah H; Smadbeck, James B; Vasmatzis, George
2018-05-15
The ability to produce and analyze whole genome sequencing (WGS) data from samples with structural variations (SV) generated the need to visualize such abnormalities in simplified plots. Conventional two-dimensional representations of WGS data frequently use either circular or linear layouts. Both representations have various advantages, but their major disadvantage is that they do not use the two-dimensional space very efficiently. We propose a layout, termed the Genome U-Plot, which spreads the chromosomes on a two-dimensional surface and essentially quadruples the spatial resolution. We present the Genome U-Plot for producing clear and intuitive graphs that allow researchers to generate novel insights and hypotheses by visualizing SVs such as deletions, amplifications, and chromoanagenesis events. The main features of the Genome U-Plot are its layered layout, its high spatial resolution and its improved aesthetic qualities. We compare conventional visualization schemas with the Genome U-Plot using visualization metrics such as the number of line crossings and crossing angle resolution measures. Based on our metrics, we improve the readability of the resulting graph by at least 2-fold, making important features apparent and making it easy to identify important genomic changes. A whole genome visualization tool with high spatial resolution and improved aesthetic qualities. An implementation and documentation of the Genome U-Plot is publicly available at https://github.com/gaitat/GenomeUPlot. vasmatzis.george@mayo.edu. Supplementary data are available at Bioinformatics online.
Model-Based Referenceless Quality Metric of 3D Synthesized Images Using Local Image Description.
Gu, Ke; Jakhetiya, Vinit; Qiao, Jun-Fei; Li, Xiaoli; Lin, Weisi; Thalmann, Daniel
2017-07-28
New challenges have been brought out along with the emergence of 3D-related technologies such as virtual reality (VR), augmented reality (AR), and mixed reality (MR). Free viewpoint video (FVV), which allows flexible selection of direction and viewpoint and has applications in remote surveillance, remote education, etc., has been perceived as the development direction of next-generation video technologies and has drawn a wide range of researchers' attention. Since FVV images are synthesized via a depth image-based rendering (DIBR) procedure in the "blind" environment (without reference images), a reliable real-time blind quality evaluation and monitoring system is urgently required. However, existing assessment metrics do not render human judgments faithfully, mainly because of the geometric distortions generated by DIBR. To this end, this paper proposes a novel referenceless quality metric for DIBR-synthesized images using an autoregression (AR)-based local image description. It was found that, after the AR prediction, the reconstruction error between a DIBR-synthesized image and its AR-predicted image can accurately capture the geometric distortion. Visual saliency is then leveraged to refine the proposed blind quality metric, improving it by a sizable margin. Experiments validate the superiority of our no-reference quality method as compared with prevailing full-, reduced- and no-reference models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Hyun Jung; McDonnell, Kevin T.; Zelenyuk, Alla
2014-03-01
Although the Euclidean distance does well in measuring data distances within high-dimensional clusters, it does poorly when it comes to gauging inter-cluster distances. This significantly impacts the quality of global, low-dimensional space embedding procedures such as the popular multi-dimensional scaling (MDS) where one can often observe non-intuitive layouts. We were inspired by the perceptual processes evoked in the method of parallel coordinates which enables users to visually aggregate the data by the patterns the polylines exhibit across the dimension axes. We call the path of such a polyline its structure and suggest a metric that captures this structure directly in high-dimensional space. This allows us to better gauge the distances of spatially distant data constellations and so achieve data aggregations in MDS plots that are more cognizant of existing high-dimensional structure similarities. Our MDS plots also exhibit similar visual relationships as the method of parallel coordinates which is often used alongside to visualize the high-dimensional data in raw form. We then cast our metric into a bi-scale framework which distinguishes far-distances from near-distances. The coarser scale uses the structural similarity metric to separate data aggregates obtained by prior classification or clustering, while the finer scale employs the appropriate Euclidean distance.
Hastings, Gareth D.; Marsack, Jason D.; Nguyen, Lan Chi; Cheng, Han; Applegate, Raymond A.
2017-01-01
Purpose To prospectively examine whether using the visual image quality metric, visual Strehl (VSX), to optimise objective refraction from wavefront error measurements can provide equivalent or better visual performance than subjective refraction and which refraction is preferred in free viewing. Methods Subjective refractions and wavefront aberrations were measured on 40 visually-normal eyes of 20 subjects, through natural and dilated pupils. For each eye a sphere, cylinder, and axis prescription was also objectively determined that optimised visual image quality (VSX) for the measured wavefront error. High contrast (HC) and low contrast (LC) logMAR visual acuity (VA) and short-term monocular distance vision preference were recorded and compared between the VSX-objective and subjective prescriptions both undilated and dilated. Results For 36 myopic eyes, clinically equivalent (and not statistically different) HC VA was provided with both the objective and subjective refractions (undilated mean ±SD was −0.06 ±0.04 with both refractions; dilated was −0.05 ±0.04 with the objective, and −0.05 ±0.05 with the subjective refraction). LC logMAR VA provided by the objective refraction was also clinically equivalent and not statistically different to that provided by the subjective refraction through both natural and dilated pupils for myopic eyes. In free viewing the objective prescription was preferred over the subjective by 72% of myopic eyes when not dilated. For four habitually undercorrected high hyperopic eyes, the VSX-objective refraction was more positive in spherical power and VA poorer than with the subjective refraction. Conclusions A method of simultaneously optimising sphere, cylinder, and axis from wavefront error measurements, using the visual image quality metric VSX, is described. In myopic subjects, visual performance, as measured by HC and LC VA, with this VSX-objective refraction was found equivalent to that provided by subjective refraction, and was typically preferred over subjective refraction. Subjective refraction was preferred by habitually undercorrected hyperopic eyes. PMID:28370389
Information theoretical assessment of visual communication with subband coding
NASA Astrophysics Data System (ADS)
Rahman, Zia-ur; Fales, Carl L.; Huck, Friedrich O.
1994-09-01
A well-designed visual communication channel is one which transmits the most information about a radiance field with the fewest artifacts. The role of image processing, encoding and restoration is to improve the quality of visual communication channels by minimizing the error in the transmitted data. Conventionally this role has been analyzed strictly in the digital domain neglecting the effects of image-gathering and image-display devices on the quality of the image. This results in the design of a visual communication channel which is `suboptimal.' We propose an end-to-end assessment of the imaging process which incorporates the influences of these devices in the design of the encoder and the restoration process. This assessment combines Shannon's communication theory with Wiener's restoration filter and with the critical design factors of the image gathering and display devices, thus providing the metrics needed to quantify and optimize the end-to-end performance of the visual communication channel. Results show that the design of the image-gathering device plays a significant role in determining the quality of the visual communication channel and in designing the analysis filters for subband encoding.
Visual salience metrics for image inpainting
NASA Astrophysics Data System (ADS)
Ardis, Paul A.; Singhal, Amit
2009-01-01
Quantitative metrics for successful image inpainting currently do not exist, with researchers instead relying upon qualitative human comparisons to evaluate their methodologies and techniques. In an attempt to rectify this situation, we propose two new metrics to capture the notions of noticeability and visual intent in order to evaluate inpainting results. The proposed metrics use a quantitative measure of visual salience based upon a computational model of human visual attention. We demonstrate how these two metrics repeatably correlate with qualitative opinion in a human observer study, correctly identify the optimum uses for exemplar-based inpainting (as specified in the original publication), and match qualitative opinion in published examples.
Wolffsohn, James Stuart; Jackson, Jonathan; Hunt, Olivia Anne; Cottriall, Charles; Lindsay, Jennifer; Gilmour, Richard; Sinclair, Anne; Harper, Robert
2014-01-01
AIM To develop a short, enhanced functional ability Quality of Vision (faVIQ) instrument, based on previous questionnaires and employing comprehensive modern statistical techniques, to ensure the use of an appropriate response scale, items and scoring of the vision-related difficulties experienced by patients with visual impairment. METHODS Items in current quality-of-life questionnaires for the visually impaired were refined by a multi-professional group and visually impaired focus groups. The resulting 76 items were completed by 293 visually impaired patients with stable vision on two occasions separated by a month. The faVIQ scores of 75 patients with no ocular pathology were compared to those of 75 age- and gender-matched patients with visual impairment. RESULTS Rasch analysis reduced the faVIQ items to 27. Correlation to standard visual metrics was moderate (r=0.32-0.46) and to the NEI-VFQ was 0.48. The faVIQ was able to clearly discriminate between age- and gender-matched populations with no ocular pathology and visual impairment, with an index of 0.983 and 95% sensitivity and 95% specificity using a cut-off of 29. CONCLUSION The faVIQ allows sensitive assessment of quality-of-life in the visually impaired and should support studies which evaluate the effectiveness of low vision rehabilitation services. PMID:24634868
Wolffsohn, James Stuart; Jackson, Jonathan; Hunt, Olivia Anne; Cottriall, Charles; Lindsay, Jennifer; Gilmour, Richard; Sinclair, Anne; Harper, Robert
2014-01-01
To develop a short, enhanced functional ability Quality of Vision (faVIQ) instrument, based on previous questionnaires and employing comprehensive modern statistical techniques, to ensure the use of an appropriate response scale, items and scoring of the vision-related difficulties experienced by patients with visual impairment. Items in current quality-of-life questionnaires for the visually impaired were refined by a multi-professional group and visually impaired focus groups. The resulting 76 items were completed by 293 visually impaired patients with stable vision on two occasions separated by a month. The faVIQ scores of 75 patients with no ocular pathology were compared to those of 75 age- and gender-matched patients with visual impairment. Rasch analysis reduced the faVIQ items to 27. Correlation to standard visual metrics was moderate (r=0.32-0.46) and to the NEI-VFQ was 0.48. The faVIQ was able to clearly discriminate between age- and gender-matched populations with no ocular pathology and visual impairment, with an index of 0.983 and 95% sensitivity and 95% specificity using a cut-off of 29. The faVIQ allows sensitive assessment of quality-of-life in the visually impaired and should support studies which evaluate the effectiveness of low vision rehabilitation services.
Weighted-MSE based on saliency map for assessing video quality of H.264 video streams
NASA Astrophysics Data System (ADS)
Boujut, H.; Benois-Pineau, J.; Hadar, O.; Ahmed, T.; Bonnet, P.
2011-01-01
The human visual system is very complex and has been studied for many years, specifically for the purpose of efficient encoding of visual content, e.g. video content from digital TV. There is physiological and psychological evidence indicating that viewers do not pay equal attention to all exposed visual information, but only focus on certain areas known as focus of attention (FOA) or saliency regions. In this work, we propose a novel objective quality assessment metric for assessing the perceptual quality of decoded video sequences affected by transmission errors and packet losses. The proposed method weights the Mean Square Error (MSE), yielding a Weighted-MSE (WMSE), according to the calculated saliency map at each pixel. Our method was validated through subjective quality experiments.
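A minimal sketch of the weighting idea follows, assuming the saliency map has already been computed for the reference frame; the per-pixel weighting scheme shown here is a plausible reading of the abstract, not the authors' exact formulation.

```python
import numpy as np

def weighted_mse(reference, decoded, saliency):
    """MSE weighted per pixel by a visual saliency map; weights are
    normalized to sum to one, so salient regions dominate the score."""
    weights = saliency.astype(np.float64)
    weights /= weights.sum()
    diff = reference.astype(np.float64) - decoded.astype(np.float64)
    return float(np.sum(weights * diff ** 2))
```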
Image Fusion Algorithms Using Human Visual System in Transform Domain
NASA Astrophysics Data System (ADS)
Vadhi, Radhika; Swamy Kilari, Veera; Samayamantula, Srinivas Kumar
2017-08-01
The endeavor of digital image fusion is to combine the important visual parts from various sources to improve the visual quality of the image. The fused image has higher visual quality than any of the source images. In this paper, Human Visual System (HVS) weights are used in the transform domain to select appropriate information from the various source images and then to attain a fused image. This process involves mainly two steps. First, the DWT is applied to the registered source images. Then, qualitative sub-bands are identified using HVS weights. Hence, qualitative sub-bands are selected from the different sources to form a high-quality HVS-based fused image. The quality of the HVS-based fused image is evaluated with general fusion metrics. The results show the superiority of the proposed approach among state-of-the-art multi-resolution transforms (MRT) such as the Discrete Wavelet Transform (DWT), Stationary Wavelet Transform (SWT), Contourlet Transform (CT), and Non-Subsampled Contourlet Transform (NSCT) using the maximum-selection fusion rule.
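The following sketch shows a generic wavelet-domain fusion with the maximum-selection rule using PyWavelets; the wavelet family, decomposition level, and the omission of the HVS sub-band weighting described in the paper are all simplifying assumptions.

```python
import numpy as np
import pywt  # PyWavelets

def dwt_max_fusion(img_a, img_b, wavelet="db2", level=2):
    """Fuse two registered, same-size source images in the wavelet domain
    using the maximum-selection rule (HVS sub-band weighting omitted)."""
    coeffs_a = pywt.wavedec2(img_a, wavelet, level=level)
    coeffs_b = pywt.wavedec2(img_b, wavelet, level=level)
    # Approximation band: keep the coefficient with larger magnitude.
    fused = [np.where(np.abs(coeffs_a[0]) >= np.abs(coeffs_b[0]),
                      coeffs_a[0], coeffs_b[0])]
    # Detail bands at each level: same maximum-selection rule.
    for bands_a, bands_b in zip(coeffs_a[1:], coeffs_b[1:]):
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(bands_a, bands_b)))
    return pywt.waverec2(fused, wavelet)
```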
BatMass: a Java Software Platform for LC-MS Data Visualization in Proteomics and Metabolomics.
Avtonomov, Dmitry M; Raskind, Alexander; Nesvizhskii, Alexey I
2016-08-05
Mass spectrometry (MS) coupled to liquid chromatography (LC) is a commonly used technique in metabolomic and proteomic research. As the size and complexity of LC-MS-based experiments grow, it becomes increasingly more difficult to perform quality control of both raw data and processing results. In a practical setting, quality control steps for raw LC-MS data are often overlooked, and assessment of an experiment's success is based on some derived metrics such as "the number of identified compounds". The human brain interprets visual data much better than plain text, hence the saying "a picture is worth a thousand words". Here, we present the BatMass software package, which allows for performing quick quality control of raw LC-MS data through its fast visualization capabilities. It also serves as a testbed for developers of LC-MS data processing algorithms by providing a data access library for open mass spectrometry file formats and a means of visually mapping processing results back to the original data. We illustrate the utility of BatMass with several use cases of quality control and data exploration.
BatMass: a Java software platform for LC/MS data visualization in proteomics and metabolomics
Avtonomov, Dmitry; Raskind, Alexander; Nesvizhskii, Alexey I.
2017-01-01
Mass spectrometry (MS) coupled to liquid chromatography (LC) is a commonly used technique in metabolomic and proteomic research. As the size and complexity of LC/MS based experiments grow, it becomes increasingly more difficult to perform quality control of both raw data and processing results. In a practical setting, quality control steps for raw LC/MS data are often overlooked, and assessment of an experiment's success is based on some derived metrics such as “the number of identified compounds”. The human brain interprets visual data much better than plain text, hence the saying “a picture is worth a thousand words”. Here we present the BatMass software package, which allows users to perform quick quality control of raw LC/MS data through its fast visualization capabilities. It also serves as a testbed for developers of LC/MS data processing algorithms by providing a data access library for open mass spectrometry file formats and a means of visually mapping processing results back to the original data. We illustrate the utility of BatMass with several use cases of quality control and data exploration. PMID:27306858
SU-G-BRB-16: Vulnerabilities in the Gamma Metric
DOE Office of Scientific and Technical Information (OSTI.GOV)
Neal, B; Siebers, J
Purpose: To explore vulnerabilities in the gamma index metric that undermine its wide use as a radiation therapy quality assurance tool. Methods: 2D test field pairs (images) are created specifically to achieve high gamma passing rates, but to also include gross errors by exploiting the distance-to-agreement and percent-passing components of the metric. The first set has no requirement of clinical practicality, but is intended to expose vulnerabilities. The second set exposes clinically realistic vulnerabilities. To circumvent limitations inherent to user-specific tuning of prediction algorithms to match measurements, digital test cases are manually constructed, thereby mimicking high-quality image prediction. Results: With a 3 mm distance-to-agreement metric, changing field size by ±6 mm results in a gamma passing rate over 99%. For a uniform field, a lattice of passing points spaced 5 mm apart results in a passing rate of 100%. Exploiting the percent-passing component, a 10×10 cm² field can have a 95% passing rate when an 8 cm² (2.8×2.8 cm²) highly out-of-tolerance (e.g. zero dose) square is missing from the comparison image. For clinically realistic vulnerabilities, an arc plan for which a 2D image is created can have a >95% passing rate solely due to agreement in the lateral spillage, with the failing 5% in the critical target region. A field with an integrated boost (e.g. whole brain plus small metastases) could neglect the metastases entirely, yet still pass with a 95% threshold. All the failure modes described would be visually apparent on a gamma-map image. Conclusion: The %gamma<1 metric has significant vulnerabilities. High passing rates can obscure critical faults in hypothetical and delivered radiation doses. Great caution should be used with gamma as a QA metric; users should inspect the gamma-map. Visual analysis of gamma-maps may be impractical for cine acquisition.
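For readers unfamiliar with the metric being critiqued, here is a brute-force sketch of a global 2D gamma analysis in Python (3%/3 mm by default). The normalization, search strategy, and lack of interpolation are simplifying assumptions; real implementations differ in these details, so treat this as illustrative rather than a clinical tool.

```python
import numpy as np

def gamma_pass_rate(ref, evl, spacing_mm=1.0, dose_tol=0.03, dta_mm=3.0):
    """Brute-force global 2D gamma analysis: for each evaluated point, search
    nearby reference points and keep the minimum gamma; returns the fraction
    of points with gamma <= 1."""
    search = int(np.ceil(dta_mm / spacing_mm))
    dose_norm = dose_tol * ref.max()          # global dose-difference criterion
    h, w = ref.shape
    passed = 0
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - search), min(h, y + search + 1)
            x0, x1 = max(0, x - search), min(w, x + search + 1)
            yy, xx = np.mgrid[y0:y1, x0:x1]
            dist2 = ((yy - y) ** 2 + (xx - x) ** 2) * spacing_mm ** 2
            dose2 = (evl[y, x] - ref[y0:y1, x0:x1]) ** 2
            gamma2 = dist2 / dta_mm ** 2 + dose2 / dose_norm ** 2
            passed += gamma2.min() <= 1.0
    return passed / float(h * w)
```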
Hastings, Gareth D; Marsack, Jason D; Nguyen, Lan Chi; Cheng, Han; Applegate, Raymond A
2017-05-01
To prospectively examine whether using the visual image quality metric, visual Strehl (VSX), to optimise objective refraction from wavefront error measurements can provide equivalent or better visual performance than subjective refraction and which refraction is preferred in free viewing. Subjective refractions and wavefront aberrations were measured on 40 visually-normal eyes of 20 subjects, through natural and dilated pupils. For each eye a sphere, cylinder, and axis prescription was also objectively determined that optimised visual image quality (VSX) for the measured wavefront error. High contrast (HC) and low contrast (LC) logMAR visual acuity (VA) and short-term monocular distance vision preference were recorded and compared between the VSX-objective and subjective prescriptions both undilated and dilated. For 36 myopic eyes, clinically equivalent (and not statistically different) HC VA was provided with both the objective and subjective refractions (undilated mean ± S.D. was -0.06 ± 0.04 with both refractions; dilated was -0.05 ± 0.04 with the objective, and -0.05 ± 0.05 with the subjective refraction). LC logMAR VA provided by the objective refraction was also clinically equivalent and not statistically different to that provided by the subjective refraction through both natural and dilated pupils for myopic eyes. In free viewing the objective prescription was preferred over the subjective by 72% of myopic eyes when not dilated. For four habitually undercorrected high hyperopic eyes, the VSX-objective refraction was more positive in spherical power and VA poorer than with the subjective refraction. A method of simultaneously optimising sphere, cylinder, and axis from wavefront error measurements, using the visual image quality metric VSX, is described. In myopic subjects, visual performance, as measured by HC and LC VA, with this VSX-objective refraction was found equivalent to that provided by subjective refraction, and was typically preferred over subjective refraction. Subjective refraction was preferred by habitually undercorrected hyperopic eyes. © 2017 The Authors Ophthalmic & Physiological Optics © 2017 The College of Optometrists.
NASA Astrophysics Data System (ADS)
Karam, Lina J.; Zhu, Tong
2015-03-01
The varying quality of face images is an important challenge that limits the effectiveness of face recognition technology when applied in real-world applications. Existing face image databases do not consider the effect of distortions that commonly occur in real-world environments. This database (QLFW) represents an initial attempt to provide a set of labeled face images spanning the wide range of quality, from no perceived impairment to strong perceived impairment for face detection and face recognition applications. Types of impairment include JPEG2000 compression, JPEG compression, additive white noise, Gaussian blur and contrast change. Subjective experiments are conducted to assess the perceived visual quality of faces under different levels and types of distortions and also to assess the human recognition performance under the considered distortions. One goal of this work is to enable automated performance evaluation of face recognition technologies in the presence of different types and levels of visual distortions. This will consequently enable the development of face recognition systems that can operate reliably on real-world visual content in the presence of real-world visual distortions. Another goal is to enable the development and assessment of visual quality metrics for face images and for face detection and recognition applications.
NASA Astrophysics Data System (ADS)
Sato, Takashi; Honma, Michio; Itoh, Hiroyuki; Iriki, Nobuyuki; Kobayashi, Sachiko; Miyazaki, Norihiko; Onodera, Toshio; Suzuki, Hiroyuki; Yoshioka, Nobuyuki; Arima, Sumika; Kadota, Kazuya
2009-04-01
The categories and objectives of DFM production management are presented. DFM is not limited to an activity within a particular unit process in design or manufacturing; a new framework for DFM is required. DFM should be a total solution for the common problems of all processes, and each process must be linked to the others organically. After passing through each process on the manufacturing platform, the quality of the final products is guaranteed and the products are shipped to the market. The information platform is layered with DFM, APC, and AEC. Advanced DFM is not DFM for partial optimization of the lithography process and the design; it should be Organized DFM, managed with high-level organizational IQ. The interim quality between each step of the flow should be visualized. DFM becomes quality engineering when it is Organized DFM and common metrics of quality are provided, that is, through effective implementation of common industrial metrics and standardized technology. DFM is a differentiating technology, but it can leverage standards for efficient development.
Roalf, David R.; Quarmley, Megan; Elliott, Mark A.; Satterthwaite, Theodore D.; Vandekar, Simon N.; Ruparel, Kosha; Gennatas, Efstathios D.; Calkins, Monica E.; Moore, Tyler M.; Hopson, Ryan; Prabhakaran, Karthik; Jackson, Chad T.; Verma, Ragini; Hakonarson, Hakon; Gur, Ruben C.; Gur, Raquel E.
2015-01-01
Background Diffusion tensor imaging (DTI) is applied in the investigation of brain biomarkers for neurodevelopmental and neurodegenerative disorders. However, the quality of DTI measurements, like that of other neuroimaging techniques, is susceptible to several confounding factors (e.g. motion, eddy currents), which have only recently come under scrutiny. These confounds are especially relevant in adolescent samples where data quality may be compromised in ways that confound interpretation of maturation parameters. The current study aims to leverage DTI data from the Philadelphia Neurodevelopmental Cohort (PNC), a sample of 1,601 youths ages 8–21 who underwent neuroimaging, to: 1) establish quality assurance (QA) metrics for the automatic identification of poor DTI image quality; 2) examine the performance of these QA measures in an external validation sample; 3) document the influence of data quality on developmental patterns of typical DTI metrics. Methods All diffusion-weighted images were acquired on the same scanner. Visual QA was performed on all subjects completing DTI; images were manually categorized as Poor, Good, or Excellent. Four image quality metrics were automatically computed and used to predict manual QA status: Mean voxel intensity outlier count (MEANVOX), Maximum voxel intensity outlier count (MAXVOX), mean relative motion (MOTION) and temporal signal-to-noise ratio (TSNR). Classification accuracy for each metric was calculated as the area under the receiver-operating characteristic curve (AUC). A threshold was generated for each measure that best differentiated visual QA status and applied in a validation sample. The effects of data quality on sensitivity to expected age effects in this developmental sample were then investigated using the traditional MRI diffusion metrics: fractional anisotropy (FA) and mean diffusivity (MD). Finally, our method of QA is compared to DTIPrep. Results TSNR (AUC=0.94) best differentiated Poor data from Good and Excellent data. MAXVOX (AUC=0.88) best differentiated Good from Excellent DTI data. At the optimal threshold, 88% of Poor data and 91% of Good/Excellent data were correctly identified. Use of these thresholds on a validation dataset (n=374) indicated high accuracy. In the validation sample, 83% of Poor data and 94% of Excellent data were identified using thresholds derived from the training sample. Both FA and MD were affected by the inclusion of poor data in an analysis of age, sex and race in a matched comparison sample. In addition, we show that the inclusion of poor data results in significant attenuation of the correlation between diffusion metrics (FA and MD) and age during a critical neurodevelopmental period. We find higher correspondence between our QA method and DTIPrep for Poor data, but we find our method to be more robust for apparently high-quality images. Conclusion Automated QA of DTI can facilitate large-scale, high-throughput quality assurance by reliably identifying both scanner- and subject-induced imaging artifacts. The results present a practical example of the confounding effects of artifacts on DTI analysis in a large population-based sample, and suggest that estimates of data quality should not only be reported but also accounted for in data analysis, especially in studies of development. PMID:26520775
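Two of the automatically computed metrics named above (TSNR and the voxel-intensity outlier counts) lend themselves to a compact numerical sketch. The Python snippet below gives simplified interpretations of TSNR and MAXVOX for a 4D diffusion-weighted array; the exact definitions, detrending, and masking used in the study may differ, and the 3-standard-deviation outlier rule is an assumption.

```python
import numpy as np

def tsnr(dwi_4d, mask=None):
    """Temporal signal-to-noise ratio of a 4D (x, y, z, volume) DWI array:
    voxelwise mean over volumes divided by std over volumes, averaged over
    the brain mask (or the whole array if no mask is given)."""
    mean_t = dwi_4d.mean(axis=3)
    std_t = dwi_4d.std(axis=3)
    tsnr_map = np.divide(mean_t, std_t, out=np.zeros_like(mean_t),
                         where=std_t > 0)
    return float(tsnr_map[mask].mean() if mask is not None else tsnr_map.mean())

def max_voxel_outlier_count(dwi_4d, n_std=3.0):
    """Largest per-volume count of voxels whose intensity deviates from the
    across-volume mean by more than n_std standard deviations."""
    mean_t = dwi_4d.mean(axis=3, keepdims=True)
    std_t = dwi_4d.std(axis=3, keepdims=True) + 1e-9
    outliers = np.abs(dwi_4d - mean_t) > n_std * std_t
    return int(outliers.sum(axis=(0, 1, 2)).max())
```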
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nelson, J; Christianson, O; Samei, E
Purpose: Flood-field uniformity evaluation is an essential element in the assessment of nuclear medicine (NM) gamma cameras. It serves as the central element of the quality control (QC) program, acquired and analyzed on a daily basis prior to clinical imaging. Uniformity images are traditionally analyzed using pixel value-based metrics which often fail to capture subtle structure and patterns caused by changes in gamma camera performance, requiring additional visual inspection which is subjective and time demanding. The goal of this project was to develop and implement a robust QC metrology for NM that is effective in identifying non-uniformity issues, reporting issues in a timely manner for efficient correction prior to clinical involvement, all incorporated into an automated effortless workflow, and to characterize the program over a two year period. Methods: A new quantitative uniformity analysis metric was developed based on 2D noise power spectrum metrology and confirmed based on expert observer visual analysis. The metric, termed Structured Noise Index (SNI), was then integrated into an automated program to analyze, archive, and report on daily NM QC uniformity images. The effectiveness of the program was evaluated over a period of 2 years. Results: The SNI metric successfully identified visually apparent non-uniformities overlooked by the pixel value-based analysis methods. Implementation of the program has resulted in non-uniformity identification in about 12% of daily flood images. In addition, due to the vigilance of staff response, the percentage of days exceeding the trigger value shows a decline over time. Conclusion: The SNI provides a robust quantification of the NM performance of gamma camera uniformity. It operates seamlessly across a fleet of multiple camera models. The automated process provides effective workflow within the NM spectra between physicist, technologist, and clinical engineer. The reliability of this process has made it the preferred platform for NM uniformity analysis.
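The building block of the SNI is the 2D noise power spectrum of the flood image. The sketch below computes a basic 2D NPS and a toy low-frequency energy fraction; this is explicitly a stand-in for intuition, not the actual SNI definition, and the mean-subtraction detrending and radial cutoff are assumptions.

```python
import numpy as np

def noise_power_spectrum_2d(flood):
    """2D noise power spectrum of a mean-subtracted flood-field image."""
    detrended = flood.astype(np.float64) - flood.mean()
    spectrum = np.fft.fftshift(np.fft.fft2(detrended))
    return np.abs(spectrum) ** 2 / flood.size

def low_frequency_noise_fraction(flood, radius=0.1):
    """Toy stand-in for a structured-noise index: the fraction of noise power
    at low spatial frequencies, where structured non-uniformities tend to
    concentrate (not the actual SNI formulation from the abstract)."""
    nps = noise_power_spectrum_2d(flood)
    h, w = nps.shape
    yy, xx = np.mgrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
    r = np.sqrt((yy / (h / 2.0)) ** 2 + (xx / (w / 2.0)) ** 2)
    return float(nps[r < radius].sum() / nps.sum())
```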
Effect of Pupil Size on Wavefront Refraction during Orthokeratology.
Faria-Ribeiro, Miguel; Navarro, Rafael; González-Méijome, José Manuel
2016-11-01
It has been hypothesized that central and peripheral refraction, in eyes treated with myopic overnight orthokeratology, might vary with changes in pupil diameter. The aim of this work was to evaluate the axial and peripheral refraction and optical quality after orthokeratology, using ray tracing software for different pupil sizes. Zemax-EE was used to generate a series of 29 semi-customized model eyes based on the corneal topography changes from 29 patients who had undergone myopic orthokeratology. Wavefront refraction in the central 80 degrees of the visual field was calculated using three different quality metrics criteria: Paraxial curvature matching, minimum root mean square error (minRMS), and the Through Focus Visual Strehl of the Modulation Transfer Function (VSMTF), for 3- and 6-mm pupil diameters. The three metrics predicted significantly different values for foveal and peripheral refractions. Compared with the Paraxial criteria, the other two metrics predicted more myopic refractions on- and off-axis. Interestingly, the VSMTF predicts only a marginal myopic shift in the axial refraction as the pupil changes from 3 to 6 mm. For peripheral refraction, minRMS and VSMTF metric criteria predicted a higher exposure to peripheral defocus as the pupil increases from 3 to 6 mm. The results suggest that the supposed effect of myopic control produced by ortho-k treatments might be dependent on pupil size. Although the foveal refractive error does not seem to change appreciably with the increase in pupil diameter (VSMTF criteria), the high levels of positive spherical aberration will lead to a degradation of lower spatial frequencies, that is more significant under low illumination levels.
An adaptive block-based fusion method with LUE-SSIM for multi-focus images
NASA Astrophysics Data System (ADS)
Zheng, Jianing; Guo, Yongcai; Huang, Yukun
2016-09-01
Because of the lenses' limited depth of field, digital cameras are incapable of acquiring an all-in-focus image of objects at varying distances in a scene. Multi-focus image fusion techniques can effectively solve this problem. In block-based multi-focus image fusion methods, however, blocking artifacts often occur. An adaptive block-based fusion method based on lifting undistorted-edge structural similarity (LUE-SSIM) is put forward. In this method, the image quality metric LUE-SSIM is first proposed, which utilizes the characteristics of the human visual system (HVS) and structural similarity (SSIM) to make the metric consistent with human visual perception. A particle swarm optimization (PSO) algorithm that uses LUE-SSIM as its objective function then optimizes the block size used to construct the fused image. Experimental results on the LIVE image database show that LUE-SSIM outperforms SSIM in quality assessment of Gaussian defocus blur images. In addition, a multi-focus image fusion experiment is carried out to verify our proposed image fusion method in terms of visual and quantitative evaluation. The results show that the proposed method performs better than some other block-based methods, especially in reducing blocking artifacts in the fused image, and that it can effectively preserve the undistorted-edge details in the focus regions of the source images.
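To make the block-based setup concrete, here is a deliberately naive Python sketch that fuses two registered multi-focus images block by block using local variance as a focus measure; the fixed block size and the variance criterion stand in for the paper's PSO-optimized block size and LUE-SSIM objective.

```python
import numpy as np

def block_fusion(img_a, img_b, block=32):
    """Naive block-based multi-focus fusion: for every block, keep the source
    block with higher variance (a generic focus measure). Assumes both inputs
    are registered grayscale arrays of the same shape."""
    fused = img_a.astype(np.float64).copy()
    h, w = img_a.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            a = img_a[y:y + block, x:x + block]
            b = img_b[y:y + block, x:x + block]
            if b.var() > a.var():  # second source is sharper in this block
                fused[y:y + block, x:x + block] = b
    return fused
```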
Visualisation of uncertainty for the trade-off triangle used in sustainable agriculture
NASA Astrophysics Data System (ADS)
Harris, Paul; Takahashi, Taro; Lee, Michael
2017-04-01
Agriculture at the global scale is at a critical juncture where competing requirements for maximal production and minimal pollution have led to the concept of sustainable intensification. All farming systems (arable, grasslands, etc.) are part of this debate, where each has particular associated environmental risks such as water and air pollution, greenhouse gas emissions and soil degradation, as well as issues affecting production efficiency, product quality and consumer acceptability, reflected in the development of agricultural sustainability policies. These challenges necessitate multidisciplinary solutions that can only be properly researched, implemented and tested in real-world production systems which are suited to their geographical and climatic production practice. In this respect, various high-profile agricultural data collection experiments have been set up, such as the North Wyke Farm Platform (http://www.rothamsted.ac.uk/farmplatform), to research agricultural productivity and ecosystem responses to different management practices. In this farm-scale grasslands experiment, data on hydrology, emissions, nutrient cycling, biodiversity, productivity and livestock welfare/health are collected, which in turn are converted to trade-off metrics with respect to: (i) economic profits, (ii) societal benefits and (iii) environmental concerns, under the umbrella of sustainable intensification. Other agricultural research platforms have similar objectives, where data collections are ultimately synthesised into trade-off metrics. Trade-off metrics can then be usefully visualized via the usual sustainability triangle, with a new triangle for each key time period (e.g. baseline versus post-baseline). This enables a visual assessment of change in sustainability harmony or discord, according to the remit of the given research experiment. In this paper, we discuss different approaches to calculation of the sustainability trade-off metrics that are required from the farm platform data collections. Then, via simulated trade-off metrics rather than the actual trade-off metrics from the farm platform, we present novel visualizations of the sustainability triangle demonstrating ways to separate uncertainties related to agricultural production (e.g. soil and animal/crop heterogeneity) from uncertainties related to data collection (e.g. measurement errors). The visualizations are general and can be applied to any agricultural data collection experiment that intends to use sustainability triangles to relay trade-offs. We also consider how these visualizations can be honed to suit different audiences.
Visually lossless compression of digital hologram sequences
NASA Astrophysics Data System (ADS)
Darakis, Emmanouil; Kowiel, Marcin; Näsänen, Risto; Naughton, Thomas J.
2010-01-01
Digital hologram sequences have great potential for the recording of 3D scenes of moving macroscopic objects as their numerical reconstruction can yield a range of perspective views of the scene. Digital holograms inherently have large information content and lossless coding of holographic data is rather inefficient due to the speckled nature of the interference fringes they contain. Lossy coding of still holograms and hologram sequences has shown promising results. By definition, lossy compression introduces errors in the reconstruction. In all of the previous studies, numerical metrics were used to measure the compression error and through it, the coding quality. Digital hologram reconstructions are highly speckled and the speckle pattern is very sensitive to data changes. Hence, numerical quality metrics can be misleading. For example, for low compression ratios, a numerically significant coding error can have visually negligible effects. Yet, in several cases, it is of high interest to know how much lossy compression can be achieved, while maintaining the reconstruction quality at visually lossless levels. Using an experimental threshold estimation method, the staircase algorithm, we determined the highest compression ratio that was not perceptible to human observers for objects compressed with Dirac and MPEG-4 compression methods. This level of compression can be regarded as the point below which compression is perceptually lossless although physically the compression is lossy. It was found that up to 4 to 7.5 fold compression can be obtained with the above methods without any perceptible change in the appearance of video sequences.
Perceptual color difference metric including a CSF based on the perception threshold
NASA Astrophysics Data System (ADS)
Rosselli, Vincent; Larabi, Mohamed-Chaker; Fernandez-Maloigne, Christine
2008-01-01
The study of the Human Visual System (HVS) is of great interest for quantifying the quality of a picture, predicting which information will be perceived in it, applying adapted tools, and so on. The Contrast Sensitivity Function (CSF) is one of the major ways to integrate HVS properties into an imaging system. It characterizes the sensitivity of the visual system to spatial and temporal frequencies and predicts the behavior for the three channels. Common constructions of the CSF have been performed by estimating the detection threshold beyond which it is possible to perceive a stimulus. In this work, we developed a novel approach for spatio-chromatic construction based on matching experiments to estimate the perception threshold. It consists of matching the contrast of a test stimulus with that of a reference one. The obtained results are quite different in comparison with the standard approaches, as the chromatic CSFs show a band-pass rather than low-pass behavior. The obtained model has been integrated into a perceptual color difference metric inspired by the s-CIELAB. The metric is then evaluated with both objective and subjective procedures.
MUSTANG: A Community-Facing Web Service to Improve Seismic Data Quality Awareness Through Metrics
NASA Astrophysics Data System (ADS)
Templeton, M. E.; Ahern, T. K.; Casey, R. E.; Sharer, G.; Weertman, B.; Ashmore, S.
2014-12-01
IRIS DMC is engaged in a new effort to provide broad and deep visibility into the quality of data and metadata found in its terabyte-scale geophysical data archive. Taking advantage of large and fast disk capacity, modern advances in open database technologies, and nimble provisioning of virtual machine resources, we are creating an openly accessible treasure trove of data measurements for scientists and the general public to utilize in providing new insights into the quality of this data. We have branded this statistical gathering system MUSTANG, and have constructed it as a component of the web services suite that IRIS DMC offers. MUSTANG measures over forty data metrics addressing issues with archive status, data statistics and continuity, signal anomalies, noise analysis, metadata checks, and station state of health. These metrics could potentially be used both by network operators to diagnose station problems and by data users to sort suitable data from unreliable or unusable data. Our poster details what MUSTANG is, how users can access it, what measurements they can find, and how MUSTANG fits into the IRIS DMC's data access ecosystem. Progress in data processing, approaches to data visualization, and case studies of MUSTANG's use for quality assurance will be presented. We want to illustrate what is possible with data quality assurance, the need for data quality assurance, and how the seismic community will benefit from this freely available analytics service.
Developments in Seismic Data Quality Assessment Using MUSTANG at the IRIS DMC
NASA Astrophysics Data System (ADS)
Sharer, G.; Keyson, L.; Templeton, M. E.; Weertman, B.; Smith, K.; Sweet, J. R.; Tape, C.; Casey, R. E.; Ahern, T.
2017-12-01
MUSTANG is the automated data quality metrics system at the IRIS Data Management Center (DMC), designed to help characterize data and metadata "goodness" across the IRIS data archive, which holds 450 TB of seismic and related earth science data spanning the past 40 years. It calculates 46 metrics ranging from sample statistics and miniSEED state-of-health flag counts to Power Spectral Densities (PSDs) and Probability Density Functions (PDFs). These quality measurements are easily and efficiently accessible to users through the use of web services, which allows users to make requests not only by station and time period but also to filter the results according to metric values that match a user's data requirements. Results are returned in a variety of formats, including XML, JSON, CSV, and text. In the case of PSDs and PDFs, results can also be retrieved as plot images. In addition, there are several user-friendly client tools available for exploring and visualizing MUSTANG metrics: LASSO, MUSTANG Databrowser, and MUSTANGular. Over the past year we have made significant improvements to MUSTANG. We have nearly complete coverage over our archive for broadband channels with sample rates of 20-200 sps. With this milestone achieved, we are now expanding to include higher sample rate, short-period, and strong-motion channels. Data availability metrics will soon be calculated when a request is made which guarantees that the information reflects the current state of the archive and also allows for more flexibility in content. For example, MUSTANG will be able to return a count of gaps for any arbitrary time period instead of being limited to 24 hour spans. We are also promoting the use of data quality metrics beyond the IRIS archive through our recent release of ISPAQ, a Python command-line application that calculates MUSTANG-style metrics for users' local miniSEED files or for any miniSEED data accessible through FDSN-compliant web services. Finally, we will explore how researchers are using MUSTANG in real-world situations to select data, improve station data quality, anticipate station outages and servicing, and characterize site noise and environmental conditions.
Despeckle filtering software toolbox for ultrasound imaging of the common carotid artery.
Loizou, Christos P; Theofanous, Charoula; Pantziaris, Marios; Kasparis, Takis
2014-04-01
Ultrasound imaging of the common carotid artery (CCA) is a non-invasive tool used in medicine to assess the severity of atherosclerosis and monitor its progression through time. It is also used in border detection and texture characterization of the atherosclerotic carotid plaque in the CCA, and in the identification and measurement of the intima-media thickness (IMT) and the lumen diameter, all of which are very important in the assessment of cardiovascular disease (CVD). Visual perception, however, is hindered by speckle, a multiplicative noise that degrades the quality of ultrasound B-mode imaging. Noise reduction is therefore essential for improving the visual observation quality or as a pre-processing step for further automated analysis, such as image segmentation of the IMT and the atherosclerotic carotid plaque in ultrasound images. To facilitate this preprocessing step, we have developed in MATLAB(®) a unified toolbox that integrates image despeckle filtering (IDF), texture analysis and image quality evaluation techniques to automate the pre-processing and complement the disease evaluation in ultrasound CCA images. The proposed software is based on a graphical user interface (GUI) and incorporates image normalization, 10 different despeckle filtering techniques (DsFlsmv, DsFwiener, DsFlsminsc, DsFkuwahara, DsFgf, DsFmedian, DsFhmedian, DsFad, DsFnldif, DsFsrad), image intensity normalization, 65 texture features, 15 quantitative image quality metrics and objective image quality evaluation. The software is publicly available in an executable form, which can be downloaded from http://www.cs.ucy.ac.cy/medinfo/. It was validated on 100 ultrasound images of the CCA by comparing its results with quantitative visual analysis performed by a medical expert. It was observed that the despeckle filters DsFlsmv and DsFhmedian improved image quality perception (based on the expert's assessment and the image texture and quality metrics). It is anticipated that the system could help the physician in the assessment of cardiovascular image analysis. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
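The toolbox itself is written in MATLAB; as a language-neutral illustration, the sketch below implements a generic local-statistics (Lee-type) speckle filter in Python. It belongs to the same family of first-order statistics filters as the toolbox's DsFlsmv, but it is not that filter's implementation, and the window size and noise variance are illustrative defaults.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lee_despeckle(img, win=7, noise_var=0.05):
    """Generic local-statistics (Lee-type) despeckle filter.

    A simplified stand-in for first-order statistics despeckle filters;
    `win` and `noise_var` are illustrative defaults, not tuned values.
    """
    img = img.astype(np.float64)
    local_mean = uniform_filter(img, win)
    local_sq_mean = uniform_filter(img * img, win)
    local_var = local_sq_mean - local_mean ** 2
    # Weight approaches 1 in high-variance (edge) regions, 0 in flat regions.
    weight = local_var / (local_var + noise_var + 1e-12)
    return local_mean + weight * (img - local_mean)
```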
Unbiased Estimation of Refractive State of Aberrated Eyes
Martin, Jesson; Vasudevan, Balamurali; Himebaugh, Nikole; Bradley, Arthur; Thibos, Larry
2011-01-01
To identify unbiased methods for estimating the target vergence required to maximize visual acuity based on wavefront aberration measurements. Experiments were designed to minimize the impact of confounding factors that have hampered previous research. Objective wavefront refractions and subjective acuity refractions were obtained for the same monochromatic wavelength. Accommodation and pupil fluctuations were eliminated by cycloplegia. Unbiased subjective refractions that maximize visual acuity for high contrast letters were performed with a computer controlled forced choice staircase procedure, using 0.125 diopter steps of defocus. All experiments were performed for two pupil diameters (3mm and 6mm). As reported in the literature, subjective refractive error does not change appreciably when the pupil dilates. For 3 mm pupils most metrics yielded objective refractions that were about 0.1D more hyperopic than subjective acuity refractions. When pupil diameter increased to 6 mm, this bias changed in the myopic direction and the variability between metrics also increased. These inaccuracies were small compared to the precision of the measurements, which implies that most metrics provided unbiased estimates of refractive state for medium and large pupils. A variety of image quality metrics may be used to determine ocular refractive state for monochromatic (635nm) light, thereby achieving accurate results without the need for empirical correction factors. PMID:21777601
No-Reference Video Quality Assessment Based on Statistical Analysis in 3D-DCT Domain.
Li, Xuelong; Guo, Qun; Lu, Xiaoqiang
2016-05-13
It is an important task to design models for universal no-reference video quality assessment (NR-VQA) in multiple video processing and computer vision applications. However, most existing NR-VQA metrics are designed for specific distortion types, which are often not known in advance in practical applications. A further deficiency is that the spatial and temporal information of videos is hardly considered simultaneously. In this paper, we propose a new NR-VQA metric based on spatiotemporal natural video statistics (NVS) in the 3D discrete cosine transform (3D-DCT) domain. In the proposed method, a set of features is first extracted from the statistical analysis of 3D-DCT coefficients to characterize the spatiotemporal statistics of videos from different views. These features are then used to predict the perceived video quality via an efficient linear support vector regression (SVR) model. The contributions of this paper are: 1) we explore the spatiotemporal statistics of videos in the 3D-DCT domain, which has an inherent spatiotemporal encoding advantage over other widely used 2D transformations; 2) we extract a small set of simple but effective statistical features for video visual quality prediction; 3) the proposed method is universal for multiple types of distortions and robust across different databases. The proposed method is tested on four widely used video databases. Extensive experimental results demonstrate that the proposed method is competitive with state-of-the-art NR-VQA metrics and the top-performing FR-VQA and RR-VQA metrics.
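A schematic of the overall pipeline (3D-DCT on spatiotemporal blocks, simple coefficient statistics, linear SVR) is sketched below. The specific block size and summary statistics are illustrative and are not the paper's feature set.

```python
import numpy as np
from scipy.fft import dctn
from sklearn.svm import LinearSVR

def block_features(video, bs=8):
    """Crude spatiotemporal NVS features.

    3D-DCT each bs x bs x bs block of a (frames, h, w) array and summarize the
    AC-coefficient distribution. The statistics used here (log-energy and a
    kurtosis proxy) are illustrative, not the published feature set.
    """
    t, h, w = (d - d % bs for d in video.shape)
    feats = []
    for z in range(0, t, bs):
        for y in range(0, h, bs):
            for x in range(0, w, bs):
                block = video[z:z+bs, y:y+bs, x:x+bs].astype(np.float64)
                coeffs = dctn(block, norm="ortho").ravel()[1:]   # drop DC term
                feats.append([np.log1p(np.sum(coeffs**2)),
                              np.mean(coeffs**4) / (np.var(coeffs)**2 + 1e-12)])
    return np.mean(feats, axis=0)

# Training and prediction would follow the usual regression pattern, e.g.:
# X = np.array([block_features(v) for v in training_videos]); y = mos_scores
# model = LinearSVR().fit(X, y); score = model.predict([block_features(test_video)])
```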
Young, Laura K; Love, Gordon D; Smithson, Hannah E
2013-09-20
Advances in ophthalmic instrumentation have allowed high order aberrations to be measured in vivo. These measurements describe the distortions to a plane wavefront entering the eye, but not the effect they have on visual performance. One metric for predicting visual performance from a wavefront measurement uses the visual Strehl ratio, calculated in the optical transfer function (OTF) domain (VSOTF) (Thibos et al., 2004). We considered how well such a metric captures empirical measurements of the effects of defocus, coma and secondary astigmatism on letter identification and on reading. We show that predictions using the visual Strehl ratio can be significantly improved by weighting the OTF by the spatial frequency band that mediates letter identification, and further improved by considering the orientation of phase and contrast changes imposed by the aberration. We additionally showed that these altered metrics compare well to a cross-correlation-based metric. We suggest a version of the visual Strehl ratio, VScombined, that incorporates primarily those phase disruptions and contrast changes that have been shown independently to affect object recognition processes. This metric compared well to VSOTF for letter identification and was the best predictor of reading performance, having a higher correlation with the data than either the VSOTF or the cross-correlation-based metric. Copyright © 2013 The Authors. Published by Elsevier Ltd. All rights reserved.
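For reference, the visual Strehl ratio in the OTF domain is commonly written (following Thibos et al., 2004) as the neural-CSF-weighted volume of the eye's OTF normalized by the same quantity for the diffraction-limited eye; the weighted variants described above modify this weighting term. The form below is the standard definition, not a quotation from the paper.

```latex
\mathrm{VSOTF} \;=\;
\frac{\displaystyle\iint \mathrm{CSF_N}(f_x,f_y)\,\mathrm{Re}\{\mathrm{OTF}(f_x,f_y)\}\,df_x\,df_y}
     {\displaystyle\iint \mathrm{CSF_N}(f_x,f_y)\,\mathrm{OTF_{DL}}(f_x,f_y)\,df_x\,df_y}
```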
Water Network Tool for Resilience v. 1.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
2015-12-09
WNTR is a python package designed to simulate and analyze resilience of water distribution networks. The software includes: - Pressure driven and demand driven hydraulic simulation - Water quality simulation to track concentration, trace, and water age - Conditional controls to simulate power outages - Models to simulate pipe breaks - A wide range of resilience metrics - Analysis and visualization tools
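A minimal usage sketch follows, assuming the documented WNTR API and an EPANET input file named Net3.inp (the filename and pressure threshold are illustrative); consult the package documentation for the exact simulator and resilience-metric calls in your installed version.

```python
import wntr

# Load a water network model from an EPANET .inp file (filename is illustrative).
wn = wntr.network.WaterNetworkModel("Net3.inp")

# Run a hydraulic simulation; pressure-driven simulation and water quality
# tracking are configured similarly (see the WNTR documentation).
sim = wntr.sim.EpanetSimulator(wn)
results = sim.run_sim()

# Simple pressure-based service check as a stand-in for a resilience metric.
pressure = results.node["pressure"]
below = (pressure < 20.0).values.mean()   # fraction of node-timesteps under 20 m
print(f"Fraction of node-timesteps below service pressure: {below:.3f}")
```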
Using spatial metrics to predict scenic perception in a changing landscape: Dennis, Massachusetts
James F. Palmer
2004-01-01
This paper investigates residents' perceptions of scenic quality in the Cape Cod community of Dennis, Massachusetts during a period of significant landscape change. In the mid-1970s, Chandler [Natural and Visual Resources, Dennis, Massachusetts. Dennis Conservation Commission and Planning Board, Dennis, MA, 1976] worked with a community group to evaluate the...
NASA Astrophysics Data System (ADS)
Yang, Xinyan; Zhao, Wei; Ye, Long; Zhang, Qin
2017-07-01
This paper proposes a no-reference objective stereoscopic video quality assessment method, with the motivation of making the results of objective experiments agree more closely with subjective assessment. We believe that image regions with different degrees of visual saliency should not receive the same weights when designing an assessment metric. Therefore, we first apply the GBVS algorithm to each frame pair and separate both the left and right viewing images into regions of strong, general, and weak saliency. In addition, local feature information such as blockiness, zero-crossing, and depth is extracted and combined in a mathematical model to calculate a quality assessment score. Regions with different saliency degrees are assigned different weights in the mathematical model. Experimental results demonstrate the superiority of our method compared with existing state-of-the-art no-reference objective stereoscopic video quality assessment methods.
NASA Astrophysics Data System (ADS)
Jonsson, Rickard M.
2005-03-01
I present a way to visualize the concept of curved spacetime. The result is a curved surface with local coordinate systems (Minkowski systems) living on it, giving the local directions of space and time. Relative to these systems, special relativity holds. The method can be used to visualize gravitational time dilation, the horizon of black holes, and cosmological models. The idea underlying the illustrations is first to specify a field of timelike four-velocities uμ. Then, at every point, one performs a coordinate transformation to a local Minkowski system comoving with the given four-velocity. In the local system, the sign of the spatial part of the metric is flipped to create a new metric of Euclidean signature. The new positive definite metric, called the absolute metric, can be covariantly related to the original Lorentzian metric. For the special case of a two-dimensional original metric, the absolute metric may be embedded in three-dimensional Euclidean space as a curved surface.
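In index notation, and assuming the (+,−,−,−) signature implied by flipping the spatial part of the metric, the covariant relation between the Lorentzian and absolute metrics can be written as below; this is a standard way of stating the construction described above, not a quotation from the paper.

```latex
\bar{g}_{\mu\nu} \;=\; 2\,u_\mu u_\nu \;-\; g_{\mu\nu},
\qquad u_\mu u^\mu = 1,
```

so that in the local comoving Minkowski frame, where $g_{\mu\nu}=\mathrm{diag}(1,-1,-1,-1)$ and $u_\mu=(1,0,0,0)$, the absolute metric reduces to the Euclidean $\mathrm{diag}(1,1,1,1)$.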
JPEG2000 encoding with perceptual distortion control.
Liu, Zhen; Karam, Lina J; Watson, Andrew B
2006-07-01
In this paper, a new encoding approach is proposed to control the JPEG2000 encoding in order to reach a desired perceptual quality. The new method is based on a vision model that incorporates various masking effects of human visual perception and a perceptual distortion metric that takes spatial and spectral summation of individual quantization errors into account. Compared with the conventional rate-based distortion minimization JPEG2000 encoding, the new method provides a way to generate consistent quality images at a lower bit rate.
Retinal Image Quality During Accommodation
López-Gil, N.; Martin, J.; Liu, T.; Bradley, A.; Díaz-Muñoz, D.; Thibos, L.
2013-01-01
Purpose We asked if retinal image quality is maximum during accommodation, or sub-optimal due to accommodative error, when subjects perform an acuity task. Methods Subjects viewed a monochromatic (552nm), high-contrast letter target placed at various viewing distances. Wavefront aberrations of the accommodating eye were measured near the endpoint of an acuity staircase paradigm. Refractive state, defined as the optimum target vergence for maximising retinal image quality, was computed by through-focus wavefront analysis to find the power of the virtual correcting lens that maximizes visual Strehl ratio. Results Despite changes in ocular aberrations and pupil size during binocular viewing, retinal image quality and visual acuity typically remain high for all target vergences. When accommodative errors lead to sub-optimal retinal image quality, acuity and measured image quality both decline. However, the effects of accommodation errors on visual acuity are mitigated by the pupillary constriction associated with accommodation and binocular convergence, and by binocular summation of dissimilar retinal image blur. Under monocular viewing conditions some subjects displayed significant accommodative lag that reduced visual performance, an effect that was exacerbated by pharmacological dilation of the pupil. Conclusions Spurious measurement of accommodative error can be avoided when the image quality metric used to determine refractive state is compatible with the focusing criteria used by the visual system to control accommodation. Real focusing errors of the accommodating eye do not necessarily produce a reliably measurable loss of image quality or clinically significant loss of visual performance, probably because of increased depth-of-focus due to pupil constriction. When retinal image quality is close to maximum achievable (given the eye's higher-order aberrations), acuity is also near maximum. A combination of accommodative lag, reduced image quality, and reduced visual function may be a useful sign for diagnosing functionally-significant accommodative errors indicating the need for therapeutic intervention. PMID:23786386
Retinal image quality during accommodation.
López-Gil, Norberto; Martin, Jesson; Liu, Tao; Bradley, Arthur; Díaz-Muñoz, David; Thibos, Larry N
2013-07-01
We asked if retinal image quality is maximum during accommodation, or sub-optimal due to accommodative error, when subjects perform an acuity task. Subjects viewed a monochromatic (552 nm), high-contrast letter target placed at various viewing distances. Wavefront aberrations of the accommodating eye were measured near the endpoint of an acuity staircase paradigm. Refractive state, defined as the optimum target vergence for maximising retinal image quality, was computed by through-focus wavefront analysis to find the power of the virtual correcting lens that maximizes visual Strehl ratio. Despite changes in ocular aberrations and pupil size during binocular viewing, retinal image quality and visual acuity typically remain high for all target vergences. When accommodative errors lead to sub-optimal retinal image quality, acuity and measured image quality both decline. However, the effects of accommodation errors on visual acuity are mitigated by the pupillary constriction associated with accommodation and binocular convergence, and by binocular summation of dissimilar retinal image blur. Under monocular viewing conditions some subjects displayed significant accommodative lag that reduced visual performance, an effect that was exacerbated by pharmacological dilation of the pupil. Spurious measurement of accommodative error can be avoided when the image quality metric used to determine refractive state is compatible with the focusing criteria used by the visual system to control accommodation. Real focusing errors of the accommodating eye do not necessarily produce a reliably measurable loss of image quality or clinically significant loss of visual performance, probably because of increased depth-of-focus due to pupil constriction. When retinal image quality is close to maximum achievable (given the eye's higher-order aberrations), acuity is also near maximum. A combination of accommodative lag, reduced image quality, and reduced visual function may be a useful sign for diagnosing functionally-significant accommodative errors indicating the need for therapeutic intervention. © 2013 The Authors Ophthalmic & Physiological Optics © 2013 The College of Optometrists.
The use of vision-based image quality metrics to predict low-light performance of camera phones
NASA Astrophysics Data System (ADS)
Hultgren, B.; Hertel, D.
2010-01-01
Small digital camera modules such as those in mobile phones have become ubiquitous. Their low-light performance is of utmost importance since a high percentage of images are made under low lighting conditions where image quality failure may occur due to blur, noise, and/or underexposure. These modes of image degradation are not mutually exclusive: they share common roots in the physics of the imager, the constraints of image processing, and the general trade-off situations in camera design. A comprehensive analysis of failure modes is needed in order to understand how their interactions affect overall image quality. Low-light performance is reported for DSLR, point-and-shoot, and mobile phone cameras. The measurements target blur, noise, and exposure error. Image sharpness is evaluated from three different physical measurements: static spatial frequency response, handheld motion blur, and statistical information loss due to image processing. Visual metrics for sharpness, graininess, and brightness are calculated from the physical measurements, and displayed as orthogonal image quality metrics to illustrate the relative magnitude of image quality degradation as a function of subject illumination. The impact of each of the three sharpness measurements on overall sharpness quality is displayed for different light levels. The power spectrum of the statistical information target is a good representation of natural scenes, thus providing a defined input signal for the measurement of power-spectrum based signal-to-noise ratio to characterize overall imaging performance.
A Perceptually Weighted Rank Correlation Indicator for Objective Image Quality Assessment
NASA Astrophysics Data System (ADS)
Wu, Qingbo; Li, Hongliang; Meng, Fanman; Ngan, King N.
2018-05-01
In the field of objective image quality assessment (IQA), Spearman's ρ and Kendall's τ are the two most popular rank correlation indicators; they assign uniform weight to all quality levels and assume that each pair of images is sortable. They are successful at measuring the average accuracy of an IQA metric in ranking multiple processed images. However, two important perceptual properties are ignored by them. First, the sorting accuracy (SA) for high quality images is usually more important than for poor quality ones in many real-world applications, where only the top-ranked images are pushed to users. Second, because of subjective uncertainty in making judgements, two perceptually similar images are usually hardly sortable, and their ranks should not contribute to the evaluation of an IQA metric. To compare different IQA algorithms more accurately, we explore a perceptually weighted rank correlation indicator in this paper, which rewards the capability of correctly ranking high quality images and suppresses attention towards insensitive rank mistakes. More specifically, we focus on activating 'valid' pairwise comparisons of image quality, whose difference exceeds a given sensory threshold (ST). Meanwhile, each image pair is assigned a unique weight, determined by both the quality level and the rank deviation. By modifying the perception threshold, we can illustrate the sorting accuracy with a more sophisticated SA-ST curve, rather than a single rank correlation coefficient. The proposed indicator offers a new insight for interpreting visual perception behaviors. Furthermore, the applicability of our indicator is validated by recommending robust IQA metrics for both degraded and enhanced image data.
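The sketch below illustrates the core idea: only pairwise comparisons whose subjective quality difference exceeds a sensory threshold are counted, and each valid pair is weighted so that mistakes on high-quality images cost more. It is a simplified stand-in for the idea, not the published indicator or its exact weighting.

```python
import numpy as np

def weighted_pairwise_agreement(subjective, objective, st=5.0):
    """Fraction of 'valid' image pairs ranked consistently by an IQA metric.

    A pair is valid only when the subjective difference exceeds the sensory
    threshold `st`; each valid pair is weighted by the higher subjective
    quality in the pair, so mistakes on good images cost more. Simplified
    illustration only -- not the published indicator.
    """
    subjective = np.asarray(subjective, float)
    objective = np.asarray(objective, float)
    agree = total = 0.0
    n = len(subjective)
    for i in range(n):
        for j in range(i + 1, n):
            ds = subjective[i] - subjective[j]
            if abs(ds) <= st:
                continue                       # perceptually unsortable pair
            w = max(subjective[i], subjective[j])
            total += w
            if np.sign(ds) == np.sign(objective[i] - objective[j]):
                agree += w
    return agree / total if total else np.nan
```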
Quality issues in blue noise halftoning
NASA Astrophysics Data System (ADS)
Yu, Qing; Parker, Kevin J.
1998-01-01
The blue noise mask (BNM) is a halftone screen that produces unstructured, visually pleasing dot patterns. The BNM combines the blue-noise characteristics of error diffusion with the simplicity of ordered dither. A BNM is constructed by designing a set of interdependent binary patterns for individual gray levels. In this paper, we investigate quality issues in blue-noise binary pattern design and mask generation as well as in application to color reproduction. Using a global filtering technique and a local 'force' process for rearranging black and white pixels, we are able to generate a series of binary patterns, all representing a certain gray level, ranging from a white-noise pattern to a highly structured pattern. The quality of these individual patterns is studied in terms of low-frequency structure and graininess. Typically, the low-frequency structure (LF) is identified with a measurement of the energy around dc in the spatial frequency domain, while the graininess is quantified by a measurement of the average minimum distance (AMD) between minority dots as well as the kurtosis of the local kurtosis distribution (KLK) for minority pixels of the binary pattern. A set of partial BNMs is generated by using the different patterns as unique starting 'seeds.' In this way, we are able to study the quality of binary patterns over a range of gray levels. We observe that the optimality of a binary pattern for mask generation is related to its own quality metric values as well as to the smoothness of the transition of those quality metric values over neighboring levels. Several schemes have been developed to apply blue-noise halftoning to color reproduction. Different schemes generate halftone patterns with different textures. In a previous paper, a human visual system (HVS) model was used to study color halftone quality in terms of luminance and chrominance error in CIELAB color space. In this paper, a new series of psycho-visual experiments addresses the 'preferred' color rendering among four different blue noise halftoning schemes. The experimental results will be interpreted with respect to the proposed halftone quality metrics.
Rudmik, Luke; Mattos, Jose; Schneider, John; Manes, Peter R; Stokken, Janalee K; Lee, Jivianne; Higgins, Thomas S; Schlosser, Rodney J; Reh, Douglas D; Setzen, Michael; Soler, Zachary M
2017-09-01
Measuring quality outcomes is an important prerequisite to improve quality of care. Rhinosinusitis represents a high value target to improve quality of care because it has a high prevalence of disease, large economic burden, and large practice variation. In this study we review the current state of quality measurement for management of both acute (ARS) and chronic rhinosinusitis (CRS). The major national quality metric repositories and clearinghouses were queried. Additional searches included the American Academy of Otolaryngology-Head and Neck Surgery database, PubMed, and Google to attempt to capture any additional quality metrics. Seven quality metrics for ARS and 4 quality metrics for CRS were identified. ARS metrics focused on appropriateness of diagnosis (n = 1), antibiotic prescribing (n = 4), and radiologic imaging (n = 2). CRS quality metrics focused on appropriateness of diagnosis (n = 1), radiologic imaging (n = 1), and measurement of patient quality of life (n = 2). The Physician Quality Reporting System (PQRS) currently tracks 3 ARS quality metrics and 1 CRS quality metric. There are no outcome-based rhinosinusitis quality metrics and no metrics that assess domains of safety, patient-centeredness, and timeliness of care. The current status of quality measurement for rhinosinusitis has focused primarily on the quality domain of efficiency and process measures for ARS. More work is needed to develop, validate, and track outcome-based quality metrics along with CRS-specific metrics. Although there has been excellent work done to improve quality measurement for rhinosinusitis, there remain major gaps and challenges that need to be considered during the development of future metrics. © 2017 ARS-AAOA, LLC.
Applying Semantics in Dataset Summarization for Solar Data Ingest Pipelines
NASA Astrophysics Data System (ADS)
Michaelis, J.; McGuinness, D. L.; Zednik, S.; West, P.; Fox, P. A.
2012-12-01
One goal in studying phenomena of the solar corona (e.g., flares, coronal mass ejections) is to create and refine predictive models of space weather - which have broad implications for terrestrial activity (e.g., communication grid reliability). The High Altitude Observatory (HAO) [1] presently maintains an infrastructure for generating time-series visualizations of the solar corona. Through raw data gathered at the Mauna Loa Solar Observatory (MLSO) in Hawaii, HAO performs follow-up processing and quality control steps to derive visualization sets consumable by scientists. Individual visualizations will acquire several properties during their derivation, including: (i) the source instrument at MLSO used to obtain the raw data, (ii) the time the data was gathered, (iii) processing steps applied by HAO to generate the visualization, and (iv) quality metrics applied over both the raw and processed data. In parallel to MLSO's standard data gathering, time stamped observation logs are maintained by MLSO staff, which covers content of potential relevance to data gathered (such as local weather and instrument conditions). In this setting, while a significant amount of solar data is gathered, only small sections will typically be of interest to consuming parties. Additionally, direct presentation of solar data collections could overwhelm consumers (particularly those with limited background in the data structuring). This work explores how multidimensional analysis based navigation can be used to generate summary views of data collections, based on two operations: (i) grouping visualization entries based on similarity metrics (e.g., data gathered between 23:15-23:30 6-21-2012), or (ii) filtering entries (e.g., data with a quality score of UGLY, on a scale of GOOD, BAD, or UGLY). Here, semantic encodings of solar visualization collections (based on the Resource Description Framework (RDF) Datacube vocabulary [2]) are being utilized, based on the flexibility of the RDF model for supporting the following use cases: (i) Temporal alignment of time-stamped MLSO observations with raw data gathered at MLSO. (ii) Linking of multiple visualization entries to common (and structurally complex) workflow structures - designed to capture the visualization generation process. To provide real-world use cases for the described approach, a semantic summarization system is being developed for data gathered from HAO's Coronal Multi-channel Polarimeter (CoMP) and Chromospheric Helium-I Imaging Photometer (CHIP) pipelines. Web Links: [1] http://mlso.hao.ucar.edu/ [2] http://www.w3.org/TR/vocab-data-cube/
NASA Astrophysics Data System (ADS)
Myszkowski, Karol; Tawara, Takehiro; Seidel, Hans-Peter
2002-06-01
In this paper, we consider applications of perception-based video quality metrics to improve the performance of global lighting computations for dynamic environments. For this purpose we extend the Visible Difference Predictor (VDP) developed by Daly to handle computer animations. We incorporate into the VDP the spatio-velocity CSF model developed by Kelly. The CSF model requires data on the velocity of moving patterns across the image plane. We use the 3D image warping technique to compensate for the camera motion, and we conservatively assume that the motion of animated objects (usually strong attractors of the visual attention) is fully compensated by the smooth pursuit eye motion. Our global illumination solution is based on stochastic photon tracing and takes advantage of temporal coherence of lighting distribution, by processing photons both in the spatial and temporal domains. The VDP is used to keep noise inherent in stochastic methods below the sensitivity level of the human observer. As a result a perceptually-consistent quality across all animation frames is obtained.
Perceived crosstalk assessment on patterned retarder 3D display
NASA Astrophysics Data System (ADS)
Zou, Bochao; Liu, Yue; Huang, Yi; Wang, Yongtian
2014-03-01
CONTEXT: Nowadays, almost all stereoscopic displays suffer from crosstalk, which is one of the most dominant factors degrading image quality and visual comfort on 3D display devices. To deal with such problems, it is worthwhile to quantify the amount of perceived crosstalk. OBJECTIVE: Crosstalk measurements are usually based on certain test patterns, and scene content effects are ignored. To evaluate the perceived crosstalk level for various scenes, a subjective test may give a more correct evaluation; however, it is a time-consuming approach and is unsuitable for real-time applications. Therefore, an objective metric that can reliably predict the perceived crosstalk is needed. A correct objective assessment of crosstalk for different scene contents would benefit the development of crosstalk minimization and cancellation algorithms that could bring a good quality of experience to viewers. METHOD: A patterned retarder 3D display is used to present 3D images in our experiment. By considering the mechanism of this kind of device, an appropriate simulation of crosstalk is realized through image processing techniques that assign different crosstalk values between image pairs. It can be seen from the literature that the structure of a scene has a significant impact on the perceived crosstalk, so we first extract the differences in structural information between original and distorted image pairs using the Structural SIMilarity (SSIM) algorithm, which directly evaluates the structural changes between two complex-structured signals. The structural changes of the left and right views are then computed separately and combined into an overall distortion map. Under 3D viewing conditions, because of the added value of depth, the crosstalk of pop-out objects may be more perceptible. To model this effect, the depth map of a stereo pair is generated and the depth information is filtered by the distortion map. Moreover, human attention is an important factor in crosstalk assessment, because when viewing 3D content, perceptually salient regions are highly likely to be the major contributor to the quality of experience. To take this into account, perceptually significant regions are extracted, and a spatial pooling technique is used to combine the structural distortion map, depth map and visual saliency map to predict the perceived crosstalk more precisely. To verify the performance of the proposed crosstalk assessment metric, subjective experiments are conducted with 24 participants viewing and rating 60 stimuli (5 scenes × 4 crosstalk levels × 3 camera distances). After outlier removal and statistical processing, the correlation with the subjective test is examined using the Pearson and Spearman rank-order correlation coefficients. Furthermore, the proposed method is also compared with two traditional 2D metrics, PSNR and SSIM. The objective score is mapped to the subjective scale using a nonlinear fitting function to directly evaluate the performance of the metric. RESULTS: The evaluation results demonstrate that the proposed metric is highly correlated with the subjective scores compared with existing approaches. With a Pearson coefficient of 90.3%, the proposed metric is promising for objective evaluation of perceived crosstalk. NOVELTY: The main goal of this paper is to introduce an objective metric for stereo crosstalk assessment. The novelty contributions are twofold. First, an appropriate simulation of crosstalk that considers the characteristics of a patterned retarder 3D display is developed. Second, an objective crosstalk metric based on a visual attention model is introduced.
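The sketch below shows how the per-view SSIM distortion maps might be combined with depth and saliency maps before spatial pooling. The combination weights and map normalization are illustrative and follow only the spirit of the method, not its published formulation.

```python
import numpy as np
from skimage.metrics import structural_similarity

def crosstalk_score(left, right, left_x, right_x, depth, saliency):
    """Toy pooled crosstalk estimate for one stereo pair.

    `left`/`right` are the original views, `left_x`/`right_x` the crosstalk-
    distorted views, and `depth`/`saliency` are per-pixel maps in [0, 1].
    The weighting below is illustrative, not the published model.
    """
    _, map_l = structural_similarity(left, left_x, full=True, data_range=255)
    _, map_r = structural_similarity(right, right_x, full=True, data_range=255)
    distortion = 1.0 - 0.5 * (map_l + map_r)          # higher = more structural change
    weighted = distortion * (0.5 + 0.5 * depth) * (0.5 + 0.5 * saliency)
    return float(weighted.mean())                     # spatial pooling
```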
Implementation of statistical process control for proteomic experiments via LC MS/MS.
Bereman, Michael S; Johnson, Richard; Bollinger, James; Boss, Yuval; Shulman, Nick; MacLean, Brendan; Hoofnagle, Andrew N; MacCoss, Michael J
2014-04-01
Statistical process control (SPC) is a robust set of tools that aids in the visualization, detection, and identification of assignable causes of variation in any process that creates products, services, or information. A tool has been developed termed Statistical Process Control in Proteomics (SProCoP) which implements aspects of SPC (e.g., control charts and Pareto analysis) into the Skyline proteomics software. It monitors five quality control metrics in a shotgun or targeted proteomic workflow. None of these metrics require peptide identification. The source code, written in the R statistical language, runs directly from the Skyline interface, which supports the use of raw data files from several of the mass spectrometry vendors. It provides real time evaluation of the chromatographic performance (e.g., retention time reproducibility, peak asymmetry, and resolution), and mass spectrometric performance (targeted peptide ion intensity and mass measurement accuracy for high resolving power instruments) via control charts. Thresholds are experiment- and instrument-specific and are determined empirically from user-defined quality control standards that enable the separation of random noise and systematic error. Finally, Pareto analysis provides a summary of performance metrics and guides the user to metrics with high variance. The utility of these charts to evaluate proteomic experiments is illustrated in two case studies.
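A minimal Shewhart-style control check in Python is sketched below to illustrate the kind of thresholding SProCoP performs; the actual tool is written in R and runs inside Skyline. The ±3σ limits and the example retention-time values are generic SPC, not the tool's code.

```python
import numpy as np

def control_chart_flags(qc_values, new_values, k=3.0):
    """Flag observations outside mean ± k*sigma control limits.

    `qc_values` are measurements from user-defined QC standards used to set
    the limits (e.g., retention times of a reference peptide); `new_values`
    are subsequent runs to monitor. Generic SPC, not SProCoP's implementation.
    """
    qc = np.asarray(qc_values, float)
    center, sigma = qc.mean(), qc.std(ddof=1)
    lo, hi = center - k * sigma, center + k * sigma
    flags = [(v, not (lo <= v <= hi)) for v in new_values]
    return center, (lo, hi), flags

center, limits, flags = control_chart_flags(
    qc_values=[22.1, 22.3, 21.9, 22.0, 22.2],   # illustrative retention times (min)
    new_values=[22.1, 22.4, 23.6])              # the last run would be flagged
```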
Frick, Kevin D; Drye, Lea T; Kempen, John H; Dunn, James P; Holland, Gary N; Latkany, Paul; Rao, Narsing A; Sen, H Nida; Sugar, Elizabeth A; Thorne, Jennifer E; Wang, Robert C; Holbrook, Janet T
2012-03-01
To evaluate the associations between visual acuity and self-reported visual function; visual acuity and health-related quality of life (QoL) metrics; a summary measure of self-reported visual function and health-related QoL; and individual domains of self-reported visual function and health-related QoL in patients with uveitis. Best-corrected visual acuity, vision-related functioning as assessed by the NEI VFQ-25, and health-related QoL as assessed by the SF-36 and EuroQoL EQ-5D questionnaires were obtained at enrollment in a clinical trial of uveitis treatments. Multivariate regression and Spearman correlations were used to evaluate associations between visual acuity, vision-related function, and health-related QoL. Among the 255 patients, median visual acuity in the better-seeing eyes was 20/25, the vision-related function score indicated impairment (median, 60), and health-related QoL scores were within the normal population range. Better visual acuity was predictive of higher visual function scores (P ≤ 0.001), a higher SF-36 physical component score, and a higher EQ-5D health utility score (P < 0.001). The vision-specific function score was predictive of all general health-related QoL measures (P < 0.001). The correlations between visual function score and general quality of life measures were moderate (ρ = 0.29-0.52). The vision-related function score correlated positively with visual acuity and moderately positively with general QoL measures. Cost-utility analyses relying on changes in generic health utility measures will be more likely to detect changes when there are clinically meaningful changes in vision-related function, rather than when there are only changes in visual acuity. (ClinicalTrials.gov number, NCT00132691.)
Zarbo, Richard J; Varney, Ruan C; Copeland, Jacqueline R; D'Angelo, Rita; Sharma, Gaurav
2015-07-01
To support our Lean culture of continuous improvement, we implemented a daily management system designed so that critical metrics of operational success were the focus of local teams driving improvements. We introduced a standardized visual daily management board composed of metric categories of Quality, Time, Inventory, Productivity, and Safety (QTIPS); frequency trending; root cause analysis; corrective/preventive actions; and the resulting process improvements. In 1 year (June 2013 to July 2014), eight laboratory sections at Henry Ford Hospital employed 64 unique daily metrics. Most were long-term metrics (>6 months) that monitored process stability, while short-term metrics (1-6 months) were retired after successful targeted problem resolution. Daily monitoring resulted in 42 process improvements. Daily management is the key business accountability subsystem that enabled our culture of continuous improvement to function more efficiently at the managerial level, in a visible manner, by reviewing and acting on data and root cause analysis. Copyright© by the American Society for Clinical Pathology.
NASA Astrophysics Data System (ADS)
Anderson, Monica; David, Phillip
2007-04-01
Implementation of an intelligent, automated target acquisition and tracking system alleviates the need for operators to monitor video continuously. Such a system could identify situations that fatigued operators could easily miss. If an automated acquisition and tracking system plans motions to maximize a coverage metric, how does the performance of that system change when the user intervenes and manually moves the camera? How can the operator give input to the system about what is important and understand how that relates to the overall task balance between surveillance and coverage? In this paper, we address these issues by introducing a new formulation of the average linear uncovered length (ALUL) metric, specially designed for use in surveilling urban environments. This metric coordinates the often competing goals of acquiring new targets and tracking existing targets. In addition, it provides feedback on current system performance to users in terms of the system's theoretical maximum and minimum performance. We show the successful integration of the algorithm via simulation.
Effken, Judith A.; Carley, Kathleen M.; Gephart, Sheila; Verran, Joyce A.; Bianchi, Denise; Reminga, Jeff; Brewer, Barbara
2011-01-01
Purpose We used Organization Risk Analyzer (ORA), a dynamic network analysis tool, to identify patient care unit communication patterns associated with patient safety and quality outcomes. Although ORA had previously had limited use in healthcare, we felt it could effectively model communication on patient care units. Methods Using a survey methodology, we collected communication network data from nursing staff on seven patient care units on two different days. Patient outcome data were collected via a separate survey. Results of the staff survey were used to represent the communication networks for each unit in ORA. We then used ORA's analysis capability to generate communication metrics for each unit. ORA's visualization capability was used to better understand the metrics. Results We identified communication patterns that correlated with two safety (falls and medication errors) and five quality (e.g., symptom management, complex self care, and patient satisfaction) outcome measures. Communication patterns differed substantially by shift. Conclusion The results demonstrate the utility of ORA for healthcare research and the relationship of nursing unit communication patterns to patient safety and quality outcomes. PMID:21536492
Wide-area, real-time monitoring and visualization system
Budhraja, Vikram S.; Dyer, James D.; Martinez Morales, Carlos A.
2013-03-19
A real-time performance monitoring system for monitoring an electric power grid. The electric power grid has a plurality of grid portions, each grid portion corresponding to one of a plurality of control areas. The real-time performance monitoring system includes a monitor computer for monitoring at least one of reliability metrics, generation metrics, transmission metrics, suppliers metrics, grid infrastructure security metrics, and markets metrics for the electric power grid. The data for metrics being monitored by the monitor computer are stored in a data base, and a visualization of the metrics is displayed on at least one display computer having a monitor. The at least one display computer in one said control area enables an operator to monitor the grid portion corresponding to a different said control area.
Wide-area, real-time monitoring and visualization system
Budhraja, Vikram S [Los Angeles, CA; Dyer, James D [La Mirada, CA; Martinez Morales, Carlos A [Upland, CA
2011-11-15
A real-time performance monitoring system for monitoring an electric power grid. The electric power grid has a plurality of grid portions, each grid portion corresponding to one of a plurality of control areas. The real-time performance monitoring system includes a monitor computer for monitoring at least one of reliability metrics, generation metrics, transmission metrics, suppliers metrics, grid infrastructure security metrics, and markets metrics for the electric power grid. The data for metrics being monitored by the monitor computer are stored in a data base, and a visualization of the metrics is displayed on at least one display computer having a monitor. The at least one display computer in one said control area enables an operator to monitor the grid portion corresponding to a different said control area.
Gershengorn, Hayley B; Kocher, Robert; Factor, Phillip
2014-03-01
The success of quality-improvement projects relies heavily on both project design and the metrics chosen to assess change. In Part II of this three-part American Thoracic Society Seminars series, we begin by describing methods for determining which data to collect, tools for data presentation, and strategies for data dissemination. As Avedis Donabedian detailed a half century ago, defining metrics in healthcare can be challenging; algorithmic determination of the best type of metric (outcome, process, or structure) can help intensive care unit (ICU) managers begin this process. Choosing appropriate graphical data displays (e.g., run charts) can prompt discussions about and promote quality improvement. Similarly, dashboards/scorecards are useful in presenting performance improvement data either publicly or privately in a visually appealing manner. To have compelling data to show, ICU managers must plan quality-improvement projects well. The second portion of this review details four quality-improvement tools-checklists, Six Sigma methodology, lean thinking, and Kaizen. Checklists have become commonplace in many ICUs to improve care quality; thinking about how to maximize their effectiveness is now of prime importance. Six Sigma methodology, lean thinking, and Kaizen are techniques that use multidisciplinary teams to organize thinking about process improvement, formalize change strategies, actualize initiatives, and measure progress. None originated within healthcare, but each has been used in the hospital environment with success. To conclude this part of the series, we demonstrate how to use these tools through an example of improving the timely administration of antibiotics to patients with sepsis.
NASA Astrophysics Data System (ADS)
Moore, C. S.; Wood, T. J.; Saunderson, J. R.; Beavis, A. W.
2015-12-01
This work assessed the appropriateness of the signal-to-noise ratio improvement factor (KSNR) as a metric for the optimisation of computed radiography (CR) of the chest. The results of a previous study, in which four experienced image evaluators graded computer-simulated chest images using a visual grading analysis scoring (VGAS) scheme to quantify the benefit of using an anti-scatter grid, were used as the clinical image quality measurement (number of simulated patients = 80). The KSNR was used to calculate the improvement in physical image quality measured in a physical chest phantom. KSNR correlation with VGAS was assessed as a function of chest region (lung, spine and diaphragm/retrodiaphragm), and as a function of x-ray tube voltage within a given chest region. The correlation of the latter was determined by the Pearson correlation coefficient. The VGAS and KSNR image quality metrics demonstrated no correlation in the lung region but did show correlation in the spine and diaphragm/retrodiaphragmatic regions. As a function of tube voltage, however, the two metrics conflicted in every region: Pearson correlation coefficients (R) of -0.93 (p = 0.015) for lung, -0.95 (p = 0.46) for spine, and -0.85 (p = 0.015) for diaphragm were found. All are strong negative correlations, indicating conflicting results, i.e. KSNR increases with tube voltage while VGAS decreases. Medical physicists should use the KSNR metric with caution when assessing any potential improvement in clinical chest image quality on introducing an anti-scatter grid for CR imaging, especially in the lung region. This metric may also be a limited descriptor of clinical chest image quality as a function of tube voltage when a grid is used routinely.
Quality Metrics in Neonatal and Pediatric Critical Care Transport: A National Delphi Project.
Schwartz, Hamilton P; Bigham, Michael T; Schoettker, Pamela J; Meyer, Keith; Trautman, Michael S; Insoft, Robert M
2015-10-01
The transport of neonatal and pediatric patients to tertiary care facilities for specialized care demands monitoring the quality of care delivered during transport and its impact on patient outcomes. In 2011, pediatric transport teams in Ohio met to identify quality indicators permitting comparisons among programs. However, no set of national consensus quality metrics exists for benchmarking transport teams. The aim of this project was to achieve national consensus on appropriate neonatal and pediatric transport quality metrics. Modified Delphi technique. The first round of consensus determination was via electronic mail survey, followed by rounds of consensus determination in-person at the American Academy of Pediatrics Section on Transport Medicine's 2012 Quality Metrics Summit. All attendees of the American Academy of Pediatrics Section on Transport Medicine Quality Metrics Summit, conducted on October 21-23, 2012, in New Orleans, LA, were eligible to participate. Candidate quality metrics were identified through literature review and those metrics currently tracked by participating programs. Participants were asked in a series of rounds to identify "very important" quality metrics for transport. It was determined a priori that consensus on a metric's importance was achieved when at least 70% of respondents were in agreement. This is consistent with other Delphi studies. Eighty-two candidate metrics were considered initially. Ultimately, 12 metrics achieved consensus as "very important" to transport. These include metrics related to airway management, team mobilization time, patient and crew injuries, and adverse patient care events. Definitions were assigned to the 12 metrics to facilitate uniform data tracking among programs. The authors succeeded in achieving consensus among a diverse group of national transport experts on 12 core neonatal and pediatric transport quality metrics. We propose that transport teams across the country use these metrics to benchmark and guide their quality improvement activities.
Focus measure method based on the modulus of the gradient of the color planes for digital microscopy
NASA Astrophysics Data System (ADS)
Hurtado-Pérez, Román; Toxqui-Quitl, Carina; Padilla-Vivanco, Alfonso; Aguilar-Valdez, J. Félix; Ortega-Mendoza, Gabriel
2018-02-01
The modulus of the gradient of the color planes (MGC) is implemented to transform multichannel information to a grayscale image. This digital technique is used in two applications: (a) focus measurements during autofocusing (AF) process and (b) extending the depth of field (EDoF) by means of multifocus image fusion. In the first case, the MGC procedure is based on an edge detection technique and is implemented in over 15 focus metrics that are typically handled in digital microscopy. The MGC approach is tested on color images of histological sections for the selection of in-focus images. An appealing attribute of all the AF metrics working in the MGC space is their monotonic behavior even up to a magnification of 100×. An advantage of the MGC method is its computational simplicity and inherent parallelism. In the second application, a multifocus image fusion algorithm based on the MGC approach has been implemented on graphics processing units (GPUs). The resulting fused images are evaluated using a nonreference image quality metric. The proposed fusion method reveals a high-quality image independently of faulty illumination during the image acquisition. Finally, the three-dimensional visualization of the in-focus image is shown.
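A compact sketch of the modulus-of-the-gradient-of-the-color-planes idea follows: per-channel gradients are combined into one scalar edge map, whose mean can serve as a focus score during autofocusing. The gradient operator and the pooling choice here are illustrative, not the paper's exact implementation.

```python
import numpy as np

def mgc(image_rgb):
    """Modulus of the gradient of the color planes (illustrative version).

    For each channel, compute the gradient magnitude; combine channels by a
    root-sum-of-squares into a single grayscale edge map.
    """
    img = image_rgb.astype(np.float64)
    grad_sq = np.zeros(img.shape[:2])
    for c in range(img.shape[2]):
        gy, gx = np.gradient(img[:, :, c])
        grad_sq += gx**2 + gy**2
    return np.sqrt(grad_sq)

def focus_score(image_rgb):
    """Scalar focus measure: mean of the MGC map (higher = sharper)."""
    return float(mgc(image_rgb).mean())

# During autofocusing, the in-focus frame of a z-stack maximizes the score:
# best_frame = max(z_stack, key=focus_score)
```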
Shi, Yue; Queener, Hope M.; Marsack, Jason D.; Ravikumar, Ayeswarya; Bedell, Harold E.; Applegate, Raymond A.
2013-01-01
Dynamic registration uncertainty of a wavefront-guided correction with respect to underlying wavefront error (WFE) inevitably decreases retinal image quality. A partial correction may improve average retinal image quality and visual acuity in the presence of registration uncertainties. The purpose of this paper is to (a) develop an algorithm to optimize wavefront-guided correction that improves visual acuity given registration uncertainty and (b) test the hypothesis that these corrections provide improved visual performance in the presence of these uncertainties as compared to a full-magnitude correction or a correction by Guirao, Cox, and Williams (2002). A stochastic parallel gradient descent (SPGD) algorithm was used to optimize the partial-magnitude correction for three keratoconic eyes based on measured scleral contact lens movement. Given its high correlation with logMAR acuity, the retinal image quality metric log visual Strehl was used as a predictor of visual acuity. Predicted values of visual acuity with the optimized corrections were validated by regressing measured acuity loss against predicted loss. Measured loss was obtained from normal subjects viewing acuity charts that were degraded by the residual aberrations generated by the movement of the full-magnitude correction, the correction by Guirao, and optimized SPGD correction. Partial-magnitude corrections optimized with an SPGD algorithm provide at least one line improvement of average visual acuity over the full magnitude and the correction by Guirao given the registration uncertainty. This study demonstrates that it is possible to improve the average visual acuity by optimizing wavefront-guided correction in the presence of registration uncertainty. PMID:23757512
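A generic stochastic parallel gradient descent loop is sketched below. The objective is left as a user-supplied callable standing in for the registration-averaged image quality (e.g., mean log visual Strehl over measured lens positions) used in the study; the gain and perturbation sizes are illustrative.

```python
import numpy as np

def spgd_maximize(objective, x0, gain=0.5, perturb=0.05, iters=500, seed=0):
    """Generic stochastic parallel gradient descent (ascent) on a metric.

    `objective(x)` returns the quantity to maximize; in the study this would
    be retinal image quality averaged over measured registration states.
    Gains and iteration count are illustrative.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, float).copy()
    for _ in range(iters):
        delta = perturb * rng.choice([-1.0, 1.0], size=x.shape)  # Bernoulli perturbation
        dj = objective(x + delta) - objective(x - delta)          # two-sided probe
        x += gain * dj * delta                                    # parallel coefficient update
    return x

# Hypothetical use: trim a full-magnitude correction c_full toward a partial one.
# c_opt = spgd_maximize(lambda c: mean_image_quality(c, movement_samples), c_full)
```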
NASA Astrophysics Data System (ADS)
Kwon, Hyeokjun; Kang, Yoojin; Jang, Junwoo
2017-09-01
Color fidelity has been used as one of the indices to evaluate the performance of light sources. Since the Color Rendering Index (CRI) was proposed by the CIE, many color fidelity metrics have been proposed to increase the accuracy of the metric. This paper focuses on comparing color fidelity metrics in terms of their agreement with human visual assessments. To visually evaluate the color fidelity of light sources, we built a simulator that reproduces color samples under given lighting conditions. In this paper, eighteen color samples of the Macbeth color checker under test light sources, and under a reference illuminant for each of them, are simulated and displayed on a well-characterized monitor. With only the spectra of the test light source and reference illuminant, color samples under any lighting condition can be reproduced. The spectra of two LED and two OLED light sources that have similar CRI values are used for the visual assessment. In addition, the results of the visual assessment are compared with two color fidelity metrics: CRI and IES TM-30-15 (Rf), proposed by the Illuminating Engineering Society (IES) in 2015. Experimental results indicate that Rf outperforms CRI in terms of correlation with the visual assessment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scholtz, Jean
A new field of research, visual analytics, has recently been introduced. This has been defined as "the science of analytical reasoning facilitated by visual interfaces." Visual analytic environments, therefore, support analytical reasoning using visual representations and interactions, with data representations and transformation capabilities, to support production, presentation and dissemination. As researchers begin to develop visual analytic environments, it will be advantageous to develop metrics and methodologies to help researchers measure the progress of their work and understand the impact their work will have on the users who will work in such environments. This paper presents five areas or aspects of visual analytic environments that should be considered as metrics and methodologies for evaluation are developed. Evaluation aspects need to include usability, but it is necessary to go beyond basic usability. The areas of situation awareness, collaboration, interaction, creativity, and utility are proposed as areas for initial consideration. The steps that need to be undertaken to develop systematic evaluation methodologies and metrics for visual analytic environments are outlined.
A Methodology to Analyze Photovoltaic Tracker Uptime
DOE Office of Scientific and Technical Information (OSTI.GOV)
Muller, Matthew T; Ruth, Dan
A metric is developed to analyze the daily performance of single-axis photovoltaic (PV) trackers. The metric relies on comparing correlations between the daily time series of the PV power output and an array of simulated plane-of-array irradiances for the given day. Mathematical thresholds and a logic sequence are presented, so the daily tracking metric can be applied in an automated fashion on large-scale PV systems. The results of applying the metric are visually examined against the time series of the power output data for a large number of days and for various systems. The visual inspection results suggest that, overall, the algorithm is accurate in identifying stuck or functioning trackers on clear-sky days. Visual inspection also shows that there are days not classified by the metric where the power output data may be sufficient to identify a stuck tracker. Based on the daily tracking metric, uptime results are calculated for 83 different inverters at 34 PV sites. The mean tracker uptime is calculated at 99% based on 2 different calculation methods. The daily tracking metric clearly has limitations, but as there are no existing metrics in the literature, it provides a valuable tool for flagging stuck trackers.
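A schematic of the correlation test described above is sketched below: the day's measured power is correlated against simulated plane-of-array irradiance for a tracking orientation and for a fixed (stuck) orientation, and the larger correlation wins. The thresholds and input series are illustrative, and the paper's full logic sequence for unclassifiable days is not reproduced.

```python
import numpy as np

def classify_tracker_day(power, poa_tracking, poa_stuck, min_r=0.9, margin=0.02):
    """Label one day as 'tracking', 'stuck', or 'unclassified'.

    `power` is the day's measured power; `poa_tracking` and `poa_stuck` are
    simulated plane-of-array irradiance series for a moving and a fixed
    orientation. Thresholds are illustrative; the published metric applies
    additional logic for cloudy and low-irradiance days.
    """
    r_track = np.corrcoef(power, poa_tracking)[0, 1]
    r_stuck = np.corrcoef(power, poa_stuck)[0, 1]
    if max(r_track, r_stuck) < min_r:
        return "unclassified"            # likely a cloudy or anomalous day
    if r_track > r_stuck + margin:
        return "tracking"
    if r_stuck > r_track + margin:
        return "stuck"
    return "unclassified"
```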
Visual Enhancement of Illusory Phenomenal Accents in Non-Isochronous Auditory Rhythms
2016-01-01
Musical rhythms encompass temporal patterns that often yield regular metrical accents (e.g., a beat). There have been mixed results regarding perception as a function of metrical saliency, namely, whether sensitivity to a deviant was greater in metrically stronger or weaker positions. Besides, effects of metrical position have not been examined in non-isochronous rhythms, or with respect to multisensory influences. This study was concerned with two main issues: (1) In non-isochronous auditory rhythms with clear metrical accents, how would sensitivity to a deviant be modulated by metrical positions? (2) Would the effects be enhanced by multisensory information? Participants listened to strongly metrical rhythms with or without watching a point-light figure dance to the rhythm in the same meter, and detected a slight loudness increment. Both conditions were presented with or without an auditory interference that served to impair auditory metrical perception. Sensitivity to a deviant was found greater in weak beat than in strong beat positions, consistent with the Predictive Coding hypothesis and the idea of metrically induced illusory phenomenal accents. The visual rhythm of dance hindered auditory detection, but more so when the latter was itself less impaired. This pattern suggested that the visual and auditory rhythms were perceptually integrated to reinforce metrical accentuation, yielding more illusory phenomenal accents and thus lower sensitivity to deviants, in a manner consistent with the principle of inverse effectiveness. Results were discussed in the predictive framework for multisensory rhythms involving observed movements and possible mediation of the motor system. PMID:27880850
2014-03-01
Visualization of Big Data through Ship Maintenance Metrics Analysis for Fleet Maintenance and Revitalization. Isaac J. Donaldson. Master's Thesis, March 2014. ...in terms of the overall performance of ship maintenance processes is clearly a big data problem. The current process for presenting data on the more than
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stassi, D.; Ma, H.; Schmidt, T. G., E-mail: taly.gilat-schmidt@marquette.edu
Purpose: Reconstructing a low-motion cardiac phase is expected to improve coronary artery visualization in coronary computed tomography angiography (CCTA) exams. This study developed an automated algorithm for selecting the optimal cardiac phase for CCTA reconstruction. The algorithm uses prospectively gated, single-beat, multiphase data made possible by wide cone-beam imaging. The proposed algorithm differs from previous approaches because the optimal phase is identified based on vessel image quality (IQ) directly, compared to previous approaches that included motion estimation and interphase processing. Because there is no processing of interphase information, the algorithm can be applied to any sampling of image phases, making it suited for prospectively gated studies where only a subset of phases are available. Methods: An automated algorithm was developed to select the optimal phase based on quantitative IQ metrics. For each reconstructed slice at each reconstructed phase, an image quality metric was calculated based on measures of circularity and edge strength of through-plane vessels. The image quality metric was aggregated across slices, while a metric of vessel-location consistency was used to ignore slices that did not contain through-plane vessels. The algorithm performance was evaluated using two observer studies. Fourteen single-beat cardiac CT exams (Revolution CT, GE Healthcare, Chalfont St. Giles, UK) reconstructed at 2% intervals were evaluated for best systolic (1), diastolic (6), or systolic and diastolic phases (7) by three readers and the algorithm. Pairwise inter-reader and reader-algorithm agreement was evaluated using the mean absolute difference (MAD) and concordance correlation coefficient (CCC) between the reader and algorithm-selected phases. A reader-consensus best phase was determined and compared to the algorithm selected phase. In cases where the algorithm and consensus best phases differed by more than 2%, IQ was scored by three readers using a five point Likert scale. Results: There was no statistically significant difference between inter-reader and reader-algorithm agreement for either MAD or CCC metrics (p > 0.1). The algorithm phase was within 2% of the consensus phase in 15/21 of cases. The average absolute difference between consensus and algorithm best phases was 2.29% ± 2.47%, with a maximum difference of 8%. Average image quality scores for the algorithm chosen best phase were 4.01 ± 0.65 overall, 3.33 ± 1.27 for right coronary artery (RCA), 4.50 ± 0.35 for left anterior descending (LAD) artery, and 4.50 ± 0.35 for left circumflex artery (LCX). Average image quality scores for the consensus best phase were 4.11 ± 0.54 overall, 3.44 ± 1.03 for RCA, 4.39 ± 0.39 for LAD, and 4.50 ± 0.18 for LCX. There was no statistically significant difference (p > 0.1) between the image quality scores of the algorithm phase and the consensus phase. Conclusions: The proposed algorithm was statistically equivalent to a reader in selecting an optimal cardiac phase for CCTA exams. When reader and algorithm phases differed by >2%, image quality as rated by blinded readers was statistically equivalent. By detecting the optimal phase for CCTA reconstruction, the proposed algorithm is expected to improve coronary artery visualization in CCTA exams.
CUQI: cardiac ultrasound video quality index
Razaak, Manzoor; Martini, Maria G.
2016-01-01
Medical images and videos are now increasingly part of modern telecommunication applications, including telemedicine applications, favored by advancements in video compression and communication technologies. Medical video quality evaluation is essential for modern applications since compression and transmission processes often compromise the video quality. Several state-of-the-art video quality metrics used for quality evaluation assess the perceptual quality of the video. For a medical video, assessing quality in terms of “diagnostic” value rather than “perceptual” quality is more important. We present a diagnostic-quality–oriented video quality metric for quality evaluation of cardiac ultrasound videos. Cardiac ultrasound videos are characterized by rapid repetitive cardiac motions and distinct structural information characteristics that are explored by the proposed metric. The cardiac ultrasound video quality index, the proposed metric, is a full-reference metric and uses the motion and edge information of the cardiac ultrasound video to evaluate the video quality. The metric was evaluated for its performance in approximating the quality of cardiac ultrasound videos by testing its correlation with the subjective scores of medical experts. The results of our tests showed that the metric has high correlation with medical expert opinions and in several cases outperforms the state-of-the-art video quality metrics considered in our tests. PMID:27014715
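The sketch below illustrates one plausible way to combine edge and motion information in a full-reference score for an ultrasound sequence; it is not the published CUQI formulation, and the equal weighting, frame-difference motion term, and grayscale-frame assumption are all my own.

```python
import numpy as np
from scipy.ndimage import sobel

def edge_map(frame):
    gx = sobel(frame.astype(float), axis=0)
    gy = sobel(frame.astype(float), axis=1)
    return np.hypot(gx, gy)

def edge_motion_similarity(ref_frames, dist_frames, eps=1e-6):
    """Average similarity of edge maps and of inter-frame motion (frame differences)."""
    edge_sims, motion_sims = [], []
    for t in range(1, len(ref_frames)):
        e_r, e_d = edge_map(ref_frames[t]), edge_map(dist_frames[t])
        edge_sims.append((2 * e_r * e_d + eps).sum() / ((e_r ** 2 + e_d ** 2 + eps).sum()))
        m_r = ref_frames[t].astype(float) - ref_frames[t - 1]
        m_d = dist_frames[t].astype(float) - dist_frames[t - 1]
        motion_sims.append((2 * m_r * m_d + eps).sum() / ((m_r ** 2 + m_d ** 2 + eps).sum()))
    return 0.5 * (np.mean(edge_sims) + np.mean(motion_sims))   # equal weighting assumed
```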
Efficiency analysis of color image filtering
NASA Astrophysics Data System (ADS)
Fevralev, Dmitriy V.; Ponomarenko, Nikolay N.; Lukin, Vladimir V.; Abramov, Sergey K.; Egiazarian, Karen O.; Astola, Jaakko T.
2011-12-01
This article addresses under which conditions filtering can visibly improve the image quality. The key points are the following. First, we analyze filtering efficiency for 25 test images from the color image database TID2008. This database allows assessing filter efficiency for images corrupted by different noise types for several levels of noise variance. Second, the limit of filtering efficiency is determined for independent and identically distributed (i.i.d.) additive noise and compared to the output mean square error of state-of-the-art filters. Third, component-wise and vector denoising is studied, where the latter approach is demonstrated to be more efficient. Fourth, using modern visual quality metrics, we determine for which levels of i.i.d. and spatially correlated noise the noise in original images, or the residual noise and distortions due to filtering in output images, is practically invisible. We also demonstrate that it is possible to roughly estimate whether or not the visual quality can clearly be improved by filtering.
Immersive Molecular Visualization with Omnidirectional Stereoscopic Ray Tracing and Remote Rendering
Stone, John E.; Sherman, William R.; Schulten, Klaus
2016-01-01
Immersive molecular visualization provides the viewer with intuitive perception of complex structures and spatial relationships that are of critical interest to structural biologists. The recent availability of commodity head mounted displays (HMDs) provides a compelling opportunity for widespread adoption of immersive visualization by molecular scientists, but HMDs pose additional challenges due to the need for low-latency, high-frame-rate rendering. State-of-the-art molecular dynamics simulations produce terabytes of data that can be impractical to transfer from remote supercomputers, necessitating routine use of remote visualization. Hardware-accelerated video encoding has profoundly increased frame rates and image resolution for remote visualization, however round-trip network latencies would cause simulator sickness when using HMDs. We present a novel two-phase rendering approach that overcomes network latencies with the combination of omnidirectional stereoscopic progressive ray tracing and high performance rasterization, and its implementation within VMD, a widely used molecular visualization and analysis tool. The new rendering approach enables immersive molecular visualization with rendering techniques such as shadows, ambient occlusion lighting, depth-of-field, and high quality transparency, that are particularly helpful for the study of large biomolecular complexes. We describe ray tracing algorithms that are used to optimize interactivity and quality, and we report key performance metrics of the system. The new techniques can also benefit many other application domains. PMID:27747138
Software metrics: Software quality metrics for distributed systems. [reliability engineering
NASA Technical Reports Server (NTRS)
Post, J. V.
1981-01-01
Software quality metrics were extended to cover distributed computer systems. Emphasis is placed on studying embedded computer systems and on viewing them within a system life cycle. The hierarchy of quality factors, criteria, and metrics was maintained. New software quality factors were added, including survivability, expandability, and evolvability.
Naturalness preservation image contrast enhancement via histogram modification
NASA Astrophysics Data System (ADS)
Tian, Qi-Chong; Cohen, Laurent D.
2018-04-01
Contrast enhancement is a technique for enhancing image contrast to obtain better visual quality. Since many existing contrast enhancement algorithms usually produce over-enhanced results, naturalness preservation needs to be considered in the framework of image contrast enhancement. This paper proposes a naturalness preservation contrast enhancement method, which adopts histogram matching to improve the contrast and uses image quality assessment to automatically select the optimal target histogram. The contrast improvement and the naturalness preservation are both considered in the target histogram, so this method can avoid the over-enhancement problem. In the proposed method, the optimal target histogram is a weighted sum of the original histogram, the uniform histogram, and the Gaussian-shaped histogram. Then the structural metric and the statistical naturalness metric are used to determine the weights of the corresponding histograms. Finally, the contrast-enhanced image is obtained via matching the optimal target histogram. The experiments demonstrate that the proposed method outperforms the compared histogram-based contrast enhancement algorithms.
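A minimal sketch of the weighted target-histogram idea described above, assuming an 8-bit grayscale image and hand-picked weights; the paper selects the weights automatically via quality metrics, so everything below is illustrative.

```python
import numpy as np

def enhance(image, w_orig=0.4, w_unif=0.3, w_gauss=0.3, mu=128.0, sigma=48.0):
    # Build the target histogram as a weighted sum of original, uniform, Gaussian-shaped.
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    h_orig = hist / hist.sum()
    h_unif = np.full(256, 1.0 / 256)
    levels = np.arange(256)
    h_gauss = np.exp(-0.5 * ((levels - mu) / sigma) ** 2)
    h_gauss /= h_gauss.sum()
    target = w_orig * h_orig + w_unif * h_unif + w_gauss * h_gauss

    # Histogram matching: map the source CDF onto the target CDF.
    cdf_src = np.cumsum(h_orig)
    cdf_tgt = np.cumsum(target)
    mapping = np.searchsorted(cdf_tgt, cdf_src).clip(0, 255)
    return mapping[image].astype(np.uint8)
```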
Investigation of iterative image reconstruction in low-dose breast CT
NASA Astrophysics Data System (ADS)
Bian, Junguo; Yang, Kai; Boone, John M.; Han, Xiao; Sidky, Emil Y.; Pan, Xiaochuan
2014-06-01
There is interest in developing computed tomography (CT) dedicated to breast-cancer imaging. Because breast tissues are radiation-sensitive, the total radiation exposure in a breast-CT scan is kept low, often comparable to a typical two-view mammography exam, thus resulting in a challenging low-dose-data-reconstruction problem. In recent years, evidence has been found that suggests that iterative reconstruction may yield images of improved quality from low-dose data. In this work, based upon the constrained image total-variation minimization program and its numerical solver, i.e., the adaptive steepest descent-projection onto the convex set (ASD-POCS), we investigate and evaluate iterative image reconstructions from low-dose breast-CT data of patients, with a focus on identifying and determining key reconstruction parameters, devising surrogate utility metrics for characterizing reconstruction quality, and tailoring the program and ASD-POCS to the specific reconstruction task under consideration. The ASD-POCS reconstructions appear to outperform the corresponding clinical FDK reconstructions, in terms of subjective visualization and surrogate utility metrics.
Exploring the optimum step size for defocus curves.
Wolffsohn, James S; Jinabhai, Amit N; Kingsnorth, Alec; Sheppard, Amy L; Naroo, Shehzad A; Shah, Sunil; Buckhurst, Phillip; Hall, Lee A; Young, Graeme
2013-06-01
To evaluate the effect of reducing the number of visual acuity measurements made in a defocus curve on the quality of data quantified. Midland Eye, Solihull, United Kingdom. Evaluation of a technique. Defocus curves were constructed by measuring visual acuity on a distance logMAR letter chart, randomizing the test letters between lens presentations. The lens powers evaluated ranged between +1.50 diopters (D) and -5.00 D in 0.50 D steps, which were also presented in a randomized order. Defocus curves were measured binocularly with the Tecnis diffractive, Rezoom refractive, Lentis rotationally asymmetric segmented (+3.00 D addition [add]), and Finevision trifocal multifocal intraocular lenses (IOLs) implanted bilaterally, and also for the diffractive IOL and refractive or rotationally asymmetric segmented (+3.00 D and +1.50 D adds) multifocal IOLs implanted contralaterally. Relative and absolute range of clear-focus metrics and area metrics were calculated for curves fitted using 0.50 D, 1.00 D, and 1.50 D steps and a near add-specific profile (ie, distance, half the near add, and the full near-add powers). A significant difference in simulated results was found in at least 1 of the relative or absolute range of clear-focus or area metrics for each of the multifocal designs examined when the defocus-curve step size was increased (P<.05). Faster methods of capturing defocus curves from multifocal IOL designs appear to distort the metric results and are therefore not valid. No author has a financial or proprietary interest in any material or method mentioned. Copyright © 2013 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
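One hedged illustration of how range-of-clear-focus and area metrics can be computed from a sampled defocus curve; the criterion, the 0.50 D sampling, and the acuity values are invented, and the study's exact definitions may differ.

```python
import numpy as np

defocus = np.arange(1.5, -5.01, -0.5)            # lens powers in diopters (+1.50 D to -5.00 D)
acuity = np.array([0.30, 0.20, 0.10, 0.05, 0.00, 0.05, 0.10, 0.20,
                   0.15, 0.10, 0.20, 0.30, 0.40, 0.50])   # illustrative logMAR values

criterion = 0.30                                  # e.g. a 0.3 logMAR cut-off
clear = acuity <= criterion
range_of_clear_focus = 0.5 * clear.sum()          # diopters, at 0.50 D sampling

# Area between the criterion line and the curve (better acuity -> larger area).
area_metric = np.trapz(np.clip(criterion - acuity, 0, None), dx=0.5)
print(range_of_clear_focus, area_metric)
```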
Establishing Quantitative Software Metrics in Department of the Navy Programs
2016-04-01
In accomplishing this goal, a need exists for a formalized set of software quality metrics. This document establishes the validity of those necessary ...
Incorporating Quality of Life Metrics in Interventional Oncology Practice.
Li, David; Madoff, David C
2017-12-01
Interventional radiologists care for a large number of cancer patients through the breadth of palliative-intent, minimally invasive procedures that we provide. Understanding our meaningful impact on patients' quality of life is essential toward validating our role in the palliation of cancer patients. As such, it is critically important for interventional radiologists to understand common instruments used for the reporting of patients' quality-of-life measures. Common instruments used to measure pain and quality of life for cancer patients include the numerical rating scale, visual analog scale, brief pain inventory, the European Organization for Research and Treatment of Cancer Quality of Life Questionnaire, and the Functional Assessment of Cancer Therapy. An ideal quality-of-life instrument should be a patient-reported outcome measure across multiple domains (e.g., physical health, psychological, social), and be both validated and reliable.
Spatiotemporal video deinterlacing using control grid interpolation
NASA Astrophysics Data System (ADS)
Venkatesan, Ragav; Zwart, Christine M.; Frakes, David H.; Li, Baoxin
2015-03-01
With the advent of progressive format display and broadcast technologies, video deinterlacing has become an important video-processing technique. Numerous approaches exist in the literature to accomplish deinterlacing. While most earlier methods were simple linear filtering-based approaches, the emergence of faster computing technologies and even dedicated video-processing hardware in display units has allowed higher quality but also more computationally intense deinterlacing algorithms to become practical. Most modern approaches analyze motion and content in video to select different deinterlacing methods for various spatiotemporal regions. We introduce a family of deinterlacers that employs spectral residue to choose between and weight control grid interpolation based spatial and temporal deinterlacing methods. The proposed approaches perform better than the prior state-of-the-art based on peak signal-to-noise ratio, other visual quality metrics, and simple perception-based subjective evaluations conducted by human viewers. We further study the advantages of using soft and hard decision thresholds on the visual performance.
Metric Scale Calculation for Visual Mapping Algorithms
NASA Astrophysics Data System (ADS)
Hanel, A.; Mitschke, A.; Boerner, R.; Van Opdenbosch, D.; Hoegner, L.; Brodie, D.; Stilla, U.
2018-05-01
Visual SLAM algorithms allow localizing the camera by mapping its environment by a point cloud based on visual cues. To obtain the camera locations in a metric coordinate system, the metric scale of the point cloud has to be known. This contribution describes a method to calculate the metric scale for a point cloud of an indoor environment, like a parking garage, by fusing multiple individual scale values. The individual scale values are calculated from structures and objects with a-priori known metric extension, which can be identified in the unscaled point cloud. Extensions of building structures, like the driving lane or the room height, are derived from density peaks in the point distribution. The extensions of objects, like traffic signs with a known metric size, are derived using projections of their detections in images onto the point cloud. The method is tested with synthetic image sequences of a drive with a front-looking mono camera through a virtual 3D model of a parking garage. It has been shown that each individual scale value either improves the robustness of the fused scale value or reduces its error. The error of the fused scale is comparable to other recent works.
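A sketch of fusing several individual metric-scale estimates into one value; inverse-variance weighting is one plausible fusion rule and is an assumption here, as are the example numbers.

```python
import numpy as np

def fuse_scales(scales, variances):
    """Weighted mean of scale estimates, each weighted by 1/variance."""
    scales = np.asarray(scales, float)
    variances = np.asarray(variances, float)
    weights = 1.0 / variances
    fused = np.sum(weights * scales) / np.sum(weights)
    fused_var = 1.0 / np.sum(weights)          # variance of the fused estimate
    return fused, fused_var

# e.g. hypothetical estimates from lane width, room height, and a detected traffic sign
print(fuse_scales([0.92, 1.05, 0.98], [0.02, 0.05, 0.01]))
```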
A lighting metric for quantitative evaluation of accent lighting systems
NASA Astrophysics Data System (ADS)
Acholo, Cyril O.; Connor, Kenneth A.; Radke, Richard J.
2014-09-01
Accent lighting is critical for artwork and sculpture lighting in museums, and for subject lighting for stage, film, and television. The research problem of designing effective lighting in such settings has been revived recently with the rise of light-emitting-diode-based solid state lighting. In this work, we propose an easy-to-apply quantitative measure of the scene's visual quality as perceived by human viewers. We consider a well-accent-lit scene as one which maximizes the information about the scene (in an information-theoretic sense) available to the user. We propose a metric based on the entropy of the distribution of colors, which are extracted from an image of the scene from the viewer's perspective. We demonstrate that optimizing the metric as a function of illumination configuration (i.e., position, orientation, and spectral composition) results in natural, pleasing accent lighting. We use a photorealistic simulation tool to validate the functionality of our proposed approach, showing its successful application to two- and three-dimensional scenes.
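A minimal sketch of the entropy-of-colors idea: quantize the viewer-perspective image into color bins and compute the Shannon entropy of the resulting distribution. The bin count and encoding are assumptions, not the paper's exact formulation.

```python
import numpy as np

def color_entropy(image_rgb, bins_per_channel=8):
    """image_rgb: HxWx3 uint8 array captured from the viewer's perspective."""
    quantized = (image_rgb // (256 // bins_per_channel)).reshape(-1, 3)
    codes = (quantized[:, 0] * bins_per_channel + quantized[:, 1]) * bins_per_channel + quantized[:, 2]
    counts = np.bincount(codes, minlength=bins_per_channel ** 3)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log2(p)).sum())   # higher entropy suggests richer color information
```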
Synthesized view comparison method for no-reference 3D image quality assessment
NASA Astrophysics Data System (ADS)
Luo, Fangzhou; Lin, Chaoyi; Gu, Xiaodong; Ma, Xiaojun
2018-04-01
We develop a no-reference image quality assessment metric to evaluate the quality of synthesized view rendered from the Multi-view Video plus Depth (MVD) format. Our metric is named Synthesized View Comparison (SVC), which is designed for real-time quality monitoring at the receiver side in a 3D-TV system. The metric utilizes the virtual views in the middle which are warped from left and right views by Depth-image-based rendering algorithm (DIBR), and compares the difference between the virtual views rendered from different cameras by Structural SIMilarity (SSIM), a popular 2D full-reference image quality assessment metric. The experimental results indicate that our no-reference quality assessment metric for the synthesized images has competitive prediction performance compared with some classic full-reference image quality assessment metrics.
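A hedged sketch of the comparison step only: once the two virtual views warped from the left and right cameras are available (the DIBR warping itself is assumed to be done upstream), their agreement can be scored with SSIM. Grayscale views are assumed here.

```python
from skimage.metrics import structural_similarity

def svc_score(view_from_left, view_from_right):
    """Higher SSIM between the two independently synthesized views suggests
    fewer DIBR/depth-map artifacts in the rendered middle view."""
    data_range = float(view_from_right.max() - view_from_right.min())
    return structural_similarity(view_from_left, view_from_right, data_range=data_range)
```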
Tools for observational gait analysis in patients with stroke: a systematic review.
Ferrarello, Francesco; Bianchi, Valeria Anna Maria; Baccini, Marco; Rubbieri, Gaia; Mossello, Enrico; Cavallini, Maria Chiara; Marchionni, Niccolò; Di Bari, Mauro
2013-12-01
Stroke severely affects walking ability, and assessment of gait kinematics is important in defining diagnosis, planning treatment, and evaluating interventions in stroke rehabilitation. Although observational gait analysis is the most common approach to evaluate gait kinematics, tools useful for this purpose have received little attention in the scientific literature and have not been thoroughly reviewed. The aims of this systematic review were to identify tools proposed to conduct observational gait analysis in adults with a stroke, to summarize evidence concerning their quality, and to assess their implementation in rehabilitation research and clinical practice. An extensive search was performed of original articles reporting on visual/observational tools developed to investigate gait kinematics in adults with a stroke. Two reviewers independently selected studies, extracted data, assessed quality of the included studies, and scored the metric properties and clinical utility of each tool. Rigor in reporting metric properties and dissemination of the tools also was evaluated. Five tools were identified, not all of which had been tested adequately for their metric properties. Evaluation of content validity was partially satisfactory. Reliability was poorly investigated in all but one tool. Concurrent validity and sensitivity to change were shown for 3 and 2 tools, respectively. Overall, adequate levels of quality were rarely reached. The dissemination of the tools was poor. Based on critical appraisal, the Gait Assessment and Intervention Tool shows a good level of quality, and its use in stroke rehabilitation is recommended. Rigorous studies are needed for the other tools in order to establish their usefulness.
Using self-organizing maps to develop ambient air quality classifications: a time series example
2014-01-01
Background Development of exposure metrics that capture features of the multipollutant environment are needed to investigate health effects of pollutant mixtures. This is a complex problem that requires development of new methodologies. Objective Present a self-organizing map (SOM) framework for creating ambient air quality classifications that group days with similar multipollutant profiles. Methods Eight years of day-level data from Atlanta, GA, for ten ambient air pollutants collected at a central monitor location were classified using SOM into a set of day types based on their day-level multipollutant profiles. We present strategies for using SOM to develop a multipollutant metric of air quality and compare results with more traditional techniques. Results Our analysis found that 16 types of days reasonably describe the day-level multipollutant combinations that appear most frequently in our data. Multipollutant day types ranged from conditions when all pollutants measured low to days exhibiting relatively high concentrations for either primary or secondary pollutants or both. The temporal nature of class assignments indicated substantial heterogeneity in day type frequency distributions (~1%-14%), relatively short-term durations (<2 day persistence), and long-term and seasonal trends. Meteorological summaries revealed strong day type weather dependencies and pollutant concentration summaries provided interesting scenarios for further investigation. Comparison with traditional methods found SOM produced similar classifications with added insight regarding between-class relationships. Conclusion We find SOM to be an attractive framework for developing ambient air quality classification because the approach eases interpretation of results by allowing users to visualize classifications on an organized map. The presented approach provides an appealing tool for developing multipollutant metrics of air quality that can be used to support multipollutant health studies. PMID:24990361
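An illustrative sketch using the third-party `minisom` package, which is an assumption rather than the software used in the study; the data matrix, map size, and training parameters are also hypothetical.

```python
import numpy as np
from minisom import MiniSom

# X: days x pollutants matrix of standardized day-level concentrations (hypothetical)
rng = np.random.default_rng(0)
X = rng.normal(size=(2920, 10))           # roughly 8 years of days, 10 pollutants

som = MiniSom(4, 4, X.shape[1], sigma=1.0, learning_rate=0.5, random_seed=0)
som.train_random(X, num_iteration=5000)

# Each of the 16 map nodes plays the role of one multipollutant "day type".
day_types = [som.winner(x) for x in X]    # (row, col) of the winning node per day
```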
NASA Astrophysics Data System (ADS)
Grieggs, Samuel M.; McLaughlin, Michael J.; Ezekiel, Soundararajan; Blasch, Erik
2015-06-01
As technology and internet use grows at an exponential rate, video and imagery data is becoming increasingly important. Various techniques such as Wide Area Motion Imagery (WAMI), Full Motion Video (FMV), and Hyperspectral Imaging (HSI) are used to collect motion data and extract relevant information. Detecting and identifying a particular object in imagery data is an important step in understanding visual imagery, such as content-based image retrieval (CBIR). Imagery data is segmented and automatically analyzed and stored in a dynamic and robust database. In our system, we seek to utilize image fusion methods, which require quality metrics. Many image fusion (IF) algorithms have been proposed based on different metrics, but only a few metrics are used to evaluate the performance of these algorithms. In this paper, we seek a robust, objective metric to evaluate the performance of IF algorithms which compares the outcome of a given algorithm to ground truth and reports several types of errors. Given the ground truth of motion imagery data, it will compute detection failure, false alarm, precision and recall metrics, background and foreground region statistics, as well as split and merge of foreground regions. Using the Structural Similarity Index (SSIM), Mutual Information (MI), and entropy metrics, experimental results demonstrate the effectiveness of the proposed methodology for object detection, activity exploitation, and CBIR.
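A simple sketch of the detection-failure and false-alarm style metrics mentioned above, computed per frame from binary ground-truth and detection masks; the exact definitions in the paper may differ.

```python
import numpy as np

def detection_metrics(gt_mask, det_mask):
    gt, det = gt_mask.astype(bool), det_mask.astype(bool)
    tp = np.logical_and(gt, det).sum()
    fp = np.logical_and(~gt, det).sum()
    fn = np.logical_and(gt, ~det).sum()
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0   # 1 - recall acts as a detection-failure rate
    return precision, recall, fp                   # fp serves as a false-alarm count
```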
SeeDB: Efficient Data-Driven Visualization Recommendations to Support Visual Analytics
Vartak, Manasi; Rahman, Sajjadur; Madden, Samuel; Parameswaran, Aditya; Polyzotis, Neoklis
2015-01-01
Data analysts often build visualizations as the first step in their analytical workflow. However, when working with high-dimensional datasets, identifying visualizations that show relevant or desired trends in data can be laborious. We propose SeeDB, a visualization recommendation engine to facilitate fast visual analysis: given a subset of data to be studied, SeeDB intelligently explores the space of visualizations, evaluates promising visualizations for trends, and recommends those it deems most “useful” or “interesting”. The two major obstacles in recommending interesting visualizations are (a) scale: evaluating a large number of candidate visualizations while responding within interactive time scales, and (b) utility: identifying an appropriate metric for assessing interestingness of visualizations. For the former, SeeDB introduces pruning optimizations to quickly identify high-utility visualizations and sharing optimizations to maximize sharing of computation across visualizations. For the latter, as a first step, we adopt a deviation-based metric for visualization utility, while indicating how we may be able to generalize it to other factors influencing utility. We implement SeeDB as a middleware layer that can run on top of any DBMS. Our experiments show that our framework can identify interesting visualizations with high accuracy. Our optimizations lead to multiple orders of magnitude speedup on relational row and column stores and provide recommendations at interactive time scales. Finally, we demonstrate via a user study the effectiveness of our deviation-based utility metric and the value of recommendations in supporting visual analytics. PMID:26779379
SeeDB: Efficient Data-Driven Visualization Recommendations to Support Visual Analytics.
Vartak, Manasi; Rahman, Sajjadur; Madden, Samuel; Parameswaran, Aditya; Polyzotis, Neoklis
2015-09-01
Data analysts often build visualizations as the first step in their analytical workflow. However, when working with high-dimensional datasets, identifying visualizations that show relevant or desired trends in data can be laborious. We propose SeeDB, a visualization recommendation engine to facilitate fast visual analysis: given a subset of data to be studied, SeeDB intelligently explores the space of visualizations, evaluates promising visualizations for trends, and recommends those it deems most "useful" or "interesting". The two major obstacles in recommending interesting visualizations are (a) scale : evaluating a large number of candidate visualizations while responding within interactive time scales, and (b) utility : identifying an appropriate metric for assessing interestingness of visualizations. For the former, SeeDB introduces pruning optimizations to quickly identify high-utility visualizations and sharing optimizations to maximize sharing of computation across visualizations. For the latter, as a first step, we adopt a deviation-based metric for visualization utility, while indicating how we may be able to generalize it to other factors influencing utility. We implement SeeDB as a middleware layer that can run on top of any DBMS. Our experiments show that our framework can identify interesting visualizations with high accuracy. Our optimizations lead to multiple orders of magnitude speedup on relational row and column stores and provide recommendations at interactive time scales. Finally, we demonstrate via a user study the effectiveness of our deviation-based utility metric and the value of recommendations in supporting visual analytics.
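A sketch in the spirit of the deviation-based utility named above: compare the normalized distribution of an aggregate over the studied subset against the same distribution over the whole table. The L2 distance and the pandas-based framing are assumptions, not SeeDB's implementation.

```python
import numpy as np
import pandas as pd

def visualization_utility(df, subset_mask, group_col, measure_col):
    target = df[subset_mask].groupby(group_col)[measure_col].mean()
    reference = df.groupby(group_col)[measure_col].mean()
    target = target.reindex(reference.index).fillna(0.0)
    p = target / target.sum() if target.sum() else target
    q = reference / reference.sum() if reference.sum() else reference
    return float(np.linalg.norm(p - q))   # larger deviation -> more "interesting" view
```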
A two-metric proposal to specify the color-rendering properties of light sources for retail lighting
NASA Astrophysics Data System (ADS)
Freyssinier, Jean Paul; Rea, Mark
2010-08-01
Lighting plays an important role in supporting retail operations, from attracting customers, to enabling the evaluation of merchandise, to facilitating the completion of the sale. Lighting also contributes to the identity, comfort, and visual quality of a retail store. With the increasing availability and quality of white LEDs, retail lighting specifiers are now considering LED lighting in stores. The color rendering of light sources is a key factor in supporting retail lighting goals and thus influences a light source's acceptance by users and specifiers. However, there is limited information on what consumers' color preferences are, and metrics used to describe the color properties of light sources often are equivocal and fail to predict preference. The color rendering of light sources is described in the industry solely by the color rendering index (CRI), which is only indirectly related to human perception. CRI is intended to characterize the appearance of objects illuminated by the source and is increasingly being challenged because new sources are being developed with increasingly exotic spectral power distributions. This paper discusses how CRI might be augmented to better use it in support of the design objectives for retail merchandising. The proposed guidelines include the use of gamut area index as a complementary metric to CRI for assuring good color rendering.
A foreground object features-based stereoscopic image visual comfort assessment model
NASA Astrophysics Data System (ADS)
Jin, Xin; Jiang, G.; Ying, H.; Yu, M.; Ding, S.; Peng, Z.; Shao, F.
2014-11-01
Since stereoscopic images provide observers with both realistic and uncomfortable viewing experiences, it is necessary to investigate the determinants of visual discomfort. Considering that the foreground object draws most attention when humans observe stereoscopic images, this paper proposes a new foreground-object-based visual comfort assessment (VCA) metric. In the first place, a suitable segmentation method is applied to the disparity map and then the foreground object is ascertained as the one having the biggest average disparity. In the second place, three visual features, namely the average disparity, average width, and spatial complexity of the foreground object, are computed from the perspective of visual attention. Nevertheless, the object's width and complexity do not consistently influence the perception of visual comfort in comparison with disparity. In accordance with this psychological phenomenon, we divide the images into four categories on the basis of different disparity and width, and apply four different models to more precisely predict visual comfort in the third place. Experimental results show that the proposed VCA metric outperforms other existing metrics and can achieve a high consistency between objective and subjective visual comfort scores. The Pearson Linear Correlation Coefficient (PLCC) and Spearman Rank Order Correlation Coefficient (SROCC) are over 0.84 and 0.82, respectively.
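The two agreement statistics reported above can be computed with SciPy as shown below; the objective and subjective values are invented placeholders.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

objective = np.array([3.1, 2.4, 4.0, 1.8, 3.6, 2.9])    # hypothetical VCA metric outputs
subjective = np.array([3.3, 2.1, 4.2, 1.6, 3.4, 3.0])   # hypothetical subjective comfort scores

plcc, _ = pearsonr(objective, subjective)
srocc, _ = spearmanr(objective, subjective)
print(f"PLCC={plcc:.3f}, SROCC={srocc:.3f}")
```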
Testing vision with angular and radial multifocal designs using Adaptive Optics.
Vinas, Maria; Dorronsoro, Carlos; Gonzalez, Veronica; Cortes, Daniel; Radhakrishnan, Aiswaryah; Marcos, Susana
2017-03-01
Multifocal vision corrections are increasingly used solutions for presbyopia. In the current study we have evaluated, optically and psychophysically, the quality provided by multizone radial and angular segmented phase designs. Optical and relative visual quality were evaluated using 8 subjects, testing 6 phase designs. Optical quality was evaluated by means of Visual Strehl-based-metrics (VS). The relative visual quality across designs was obtained through a psychophysical paradigm in which images viewed through 210 pairs of phase patterns were perceptually judged. A custom-developed Adaptive Optics (AO) system, including a Hartmann-Shack sensor and an electromagnetic deformable mirror, to measure and correct the eye's aberrations, and a phase-only reflective Spatial Light Modulator, to simulate the phase designs, was developed for this study. The multizone segmented phase designs had 2-4 zones of progressive power (0 to +3D) in either radial or angular distributions. The response of an "ideal observer" purely responding on optical grounds to the same psychophysical test performed on subjects was calculated from the VS curves, and compared with the relative visual quality results. Optical and psychophysical pattern-comparison tests showed that while 2-zone segmented designs (angular & radial) provided better performance for far and near vision, 3- and 4-zone segmented angular designs performed better for intermediate vision. AO-correction of natural aberrations of the subjects modified the response for the different subjects but general trends remained. The differences in perceived quality across the different multifocal patterns are, in a large extent, explained by optical factors. AO is an excellent tool to simulate multifocal refractions before they are manufactured or delivered to the patient, and to assess the effects of the native optics to their performance. Copyright © 2016 Elsevier Ltd. All rights reserved.
QR images: optimized image embedding in QR codes.
Garateguy, Gonzalo J; Arce, Gonzalo R; Lau, Daniel L; Villarreal, Ofelia P
2014-07-01
This paper introduces the concept of QR images, an automatic method to embed QR codes into color images with bounded probability of detection error. These embeddings are compatible with standard decoding applications and can be applied to any color image with full area coverage. The QR information bits are encoded into the luminance values of the image, taking advantage of the immunity of QR readers against local luminance disturbances. To mitigate the visual distortion of the QR image, the algorithm utilizes halftoning masks for the selection of modified pixels and nonlinear programming techniques to locally optimize luminance levels. A tractable model for the probability of error is developed and models of the human visual system are considered in the quality metric used to optimize the luminance levels of the QR image. To minimize the processing time, the proposed optimization techniques consider the mechanics of a common binarization method and are designed to be amenable to parallel implementation. Experimental results show the graceful degradation of the decoding rate and the perceptual quality as a function of the embedding parameters. A visual comparison between the proposed and existing methods is presented.
No-reference quality assessment based on visual perception
NASA Astrophysics Data System (ADS)
Li, Junshan; Yang, Yawei; Hu, Shuangyan; Zhang, Jiao
2014-11-01
The visual quality assessment of images/videos is an ongoing hot research topic, which has become more and more important for numerous image and video processing applications with the rapid development of digital imaging and communication technologies. The goal of image quality assessment (IQA) algorithms is to automatically assess the quality of images/videos in agreement with human quality judgments. Up to now, two kinds of models have been used for IQA, namely full-reference (FR) and no-reference (NR) models. For FR models, IQA algorithms interpret image quality as fidelity or similarity with a perfect image in some perceptual space. However, the reference image is not available in many practical applications, and a NR IQA approach is desired. Considering natural vision as optimized by millions of years of evolutionary pressure, many methods attempt to achieve consistency in quality prediction by modeling salient physiological and psychological features of the human visual system (HVS). To reach this goal, researchers try to simulate the HVS with image sparsity coding and supervised machine learning, which are two main features of the HVS. A typical HVS captures scenes by sparsity coding and uses experienced knowledge to apperceive objects. In this paper, we propose a novel IQA approach based on visual perception. Firstly, a standard model of the HVS is studied and analyzed, and the sparse representation of the image is accomplished with the model. Then, the mapping correlation between sparse codes and subjective quality scores is trained with the regression technique of the least squares support vector machine (LS-SVM), which yields a regressor that can predict image quality. Finally, the visual quality metric of the image is predicted with the trained regressor. We validate the performance of the proposed approach on the Laboratory for Image and Video Engineering (LIVE) database; the distortion types present in the database are: 227 images of JPEG2000, 233 images of JPEG, 174 images of white noise, 174 images of Gaussian blur, and 174 images of fast fading. The database includes a subjective differential mean opinion score (DMOS) for each image. The experimental results show that the proposed approach not only can assess many kinds of distorted images' quality, but also exhibits superior accuracy and monotonicity.
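A sketch of the final regression stage only, using scikit-learn's kernel SVR as a stand-in for the LS-SVM named above (an assumed substitution); the sparse codes and DMOS values below are random placeholders, not LIVE data.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
codes = rng.normal(size=(982, 64))          # hypothetical sparse-code features per image
dmos = rng.uniform(0, 100, size=982)        # hypothetical subjective DMOS values

X_tr, X_te, y_tr, y_te = train_test_split(codes, dmos, test_size=0.2, random_state=0)
regressor = SVR(kernel="rbf", C=10.0, gamma="scale").fit(X_tr, y_tr)
predicted_quality = regressor.predict(X_te)  # predicted visual quality scores for unseen images
```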
Stassi, D; Dutta, S; Ma, H; Soderman, A; Pazzani, D; Gros, E; Okerlund, D; Schmidt, T G
2016-01-01
Reconstructing a low-motion cardiac phase is expected to improve coronary artery visualization in coronary computed tomography angiography (CCTA) exams. This study developed an automated algorithm for selecting the optimal cardiac phase for CCTA reconstruction. The algorithm uses prospectively gated, single-beat, multiphase data made possible by wide cone-beam imaging. The proposed algorithm differs from previous approaches because the optimal phase is identified based on vessel image quality (IQ) directly, compared to previous approaches that included motion estimation and interphase processing. Because there is no processing of interphase information, the algorithm can be applied to any sampling of image phases, making it suited for prospectively gated studies where only a subset of phases are available. An automated algorithm was developed to select the optimal phase based on quantitative IQ metrics. For each reconstructed slice at each reconstructed phase, an image quality metric was calculated based on measures of circularity and edge strength of through-plane vessels. The image quality metric was aggregated across slices, while a metric of vessel-location consistency was used to ignore slices that did not contain through-plane vessels. The algorithm performance was evaluated using two observer studies. Fourteen single-beat cardiac CT exams (Revolution CT, GE Healthcare, Chalfont St. Giles, UK) reconstructed at 2% intervals were evaluated for best systolic (1), diastolic (6), or systolic and diastolic phases (7) by three readers and the algorithm. Pairwise inter-reader and reader-algorithm agreement was evaluated using the mean absolute difference (MAD) and concordance correlation coefficient (CCC) between the reader and algorithm-selected phases. A reader-consensus best phase was determined and compared to the algorithm selected phase. In cases where the algorithm and consensus best phases differed by more than 2%, IQ was scored by three readers using a five point Likert scale. There was no statistically significant difference between inter-reader and reader-algorithm agreement for either MAD or CCC metrics (p > 0.1). The algorithm phase was within 2% of the consensus phase in 15/21 of cases. The average absolute difference between consensus and algorithm best phases was 2.29% ± 2.47%, with a maximum difference of 8%. Average image quality scores for the algorithm chosen best phase were 4.01 ± 0.65 overall, 3.33 ± 1.27 for right coronary artery (RCA), 4.50 ± 0.35 for left anterior descending (LAD) artery, and 4.50 ± 0.35 for left circumflex artery (LCX). Average image quality scores for the consensus best phase were 4.11 ± 0.54 overall, 3.44 ± 1.03 for RCA, 4.39 ± 0.39 for LAD, and 4.50 ± 0.18 for LCX. There was no statistically significant difference (p > 0.1) between the image quality scores of the algorithm phase and the consensus phase. The proposed algorithm was statistically equivalent to a reader in selecting an optimal cardiac phase for CCTA exams. When reader and algorithm phases differed by >2%, image quality as rated by blinded readers was statistically equivalent. By detecting the optimal phase for CCTA reconstruction, the proposed algorithm is expected to improve coronary artery visualization in CCTA exams.
Stadler, Jennifer G; Donlon, Kipp; Siewert, Jordan D; Franken, Tessa; Lewis, Nathaniel E
2016-06-01
The digitization of a patient's health record has profoundly impacted medicine and healthcare. The compilation and accessibility of medical history has provided clinicians an unprecedented, holistic account of a patient's conditions, procedures, medications, family history, and social situation. In addition to the bedside benefits, this level of information has opened the door for population-level monitoring and research, the results of which can be used to guide initiatives that are aimed at improving quality of care. Cerner Corporation partners with health systems to help guide population management and quality improvement projects. With such an enormous and diverse client base, varying in geography, size, organizational structure, and analytic needs, discerning meaning in the data and how they fit with that particular hospital's goals is a slow, difficult task that requires clinical, statistical, and technical literacy. This article describes the development of dashboards for efficient data visualization at the healthcare facility level. Focusing on two areas with broad clinical importance, sepsis patient outcomes and 30-day hospital readmissions, dashboards were developed with the goal of aggregating data and providing meaningful summary statistics, highlighting critical performance metrics, and providing easily digestible visuals that can be understood by a wide range of personnel with varying levels of skill and areas of expertise. These internal-use dashboards have allowed associates in multiple roles to perform a quick and thorough assessment on a hospital of interest by providing the data to answer necessary questions and to identify important trends or opportunities. This automation of a previously manual process has greatly increased efficiency, saving hours of work time per hospital analyzed. Additionally, the dashboards have standardized the analysis process, ensuring use of the same metrics and processes so that overall themes can be compared across hospitals and health systems.
Deep Multimodal Distance Metric Learning Using Click Constraints for Image Ranking.
Yu, Jun; Yang, Xiaokang; Gao, Fei; Tao, Dacheng
2017-12-01
How do we retrieve images accurately? Also, how do we rank a group of images precisely and efficiently for specific queries? These problems are critical for researchers and engineers to generate a novel image searching engine. First, it is important to obtain an appropriate description that effectively represents the images. In this paper, multimodal features are considered for describing images. The images' unique properties are reflected by visual features, which are correlated to each other. However, semantic gaps always exist between images' visual features and semantics. Therefore, we utilize the click feature to reduce the semantic gap. The second key issue is learning an appropriate distance metric to combine these multimodal features. This paper develops a novel deep multimodal distance metric learning (Deep-MDML) method. A structured ranking model is adopted to utilize both visual and click features in distance metric learning (DML). Specifically, images and their related ranking results are first collected to form the training set. Multimodal features, including click and visual features, are collected with these images. Next, a group of autoencoders is applied to obtain initially a distance metric in different visual spaces, and an MDML method is used to assign optimal weights for different modalities. Next, we conduct alternating optimization to train the ranking model, which is used for the ranking of new queries with click features. Compared with existing image ranking methods, the proposed method adopts a new ranking model to use multimodal features, including click features and visual features in DML. We conducted experiments to analyze the proposed Deep-MDML on two benchmark data sets, and the results validate the effects of the method.
Klaassen, Bart; van Beijnum, Bert-Jan F; Held, Jeremia P; Reenalda, Jasper; van Meulen, Fokke B; Veltink, Peter H; Hermens, Hermie J
2017-01-01
Inertial motion capture systems are used in many applications, such as measuring movement quality in stroke survivors. The absence of evidence for the clinical effectiveness and usability of these assistive technologies in rehabilitation has delayed the transition of research into clinical practice. Recently, a new inertial motion capture system was developed in a project called INTERACTION to objectively measure the quality of movement (QoM) in stroke survivors during daily-life activity. With INTERACTION, we are able to investigate what happens with patients after discharge from the hospital. The resulting QoM metrics, where a metric is defined as a measure of some property, are subsequently presented to care professionals. Metrics include, for example, reaching distance, walking speed, and hand distribution plots; the latter shows a density plot of the hand position in the transversal plane. The objective of this study is to investigate the opinions of care professionals on using these metrics obtained from INTERACTION and on their usability. Care professionals not related to the project evaluated two patient reports by means of a semi-structured interview guided by a presentation. Each report includes several QoM metric results (such as reaching distance, hand position density plots, and shoulder abduction) obtained during daily-life measurements and in the clinic. The results were compared with those of care professionals involved in the INTERACTION project. Furthermore, two questionnaires (a 5-point Likert scale and an open questionnaire) were handed out to rate the usability of the metrics and to investigate whether they would like such a system in their clinic. Eleven interviews were conducted in Switzerland and The Netherlands, where each interview included either two or three care professionals as a group. Evaluation of the case reports (CRs) by participants and INTERACTION members showed a high correlation for both lower and upper extremity metrics. Participants were most in favor of hand distribution plots during daily-life activities. All participants mentioned that visualizing the QoM of stroke survivors over time during daily-life activities offers more possibilities than current clinical assessments. They also mentioned that these metrics could be important for the self-evaluation of stroke survivors. The results showed that most participants were able to understand the metrics presented in the CRs. For a few metrics, it remained difficult to assess the underlying cause of the QoM; hence, a combination of metrics is needed to gain better insight into the patient. Furthermore, it remains important to report the patient's state (e.g., how the patient feels), surroundings (outside, inside the house, on a slippery surface), and the detail of specific activities (does the patient grasp a piece of paper or a heavy cooking pan, and also dual tasks). Altogether, it remains a question how to determine what the patient is doing and where the patient is doing his or her activities.
Eyetracking Metrics in Young Onset Alzheimer’s Disease: A Window into Cognitive Visual Functions
Pavisic, Ivanna M.; Firth, Nicholas C.; Parsons, Samuel; Rego, David Martinez; Shakespeare, Timothy J.; Yong, Keir X. X.; Slattery, Catherine F.; Paterson, Ross W.; Foulkes, Alexander J. M.; Macpherson, Kirsty; Carton, Amelia M.; Alexander, Daniel C.; Shawe-Taylor, John; Fox, Nick C.; Schott, Jonathan M.; Crutch, Sebastian J.; Primativo, Silvia
2017-01-01
Young onset Alzheimer’s disease (YOAD) is defined as symptom onset before the age of 65 years and is particularly associated with phenotypic heterogeneity. Atypical presentations, such as the clinic-radiological visual syndrome posterior cortical atrophy (PCA), often lead to delays in accurate diagnosis. Eyetracking has been used to demonstrate basic oculomotor impairments in individuals with dementia. In the present study, we aim to explore the relationship between eyetracking metrics and standard tests of visual cognition in individuals with YOAD. Fifty-seven participants were included: 36 individuals with YOAD (n = 26 typical AD; n = 10 PCA) and 21 age-matched healthy controls. Participants completed three eyetracking experiments: fixation, pro-saccade, and smooth pursuit tasks. Summary metrics were used as outcome measures and their predictive value explored looking at correlations with visuoperceptual and visuospatial metrics. Significant correlations between eyetracking metrics and standard visual cognitive estimates are reported. A machine-learning approach using a classification method based on the smooth pursuit raw eyetracking data discriminates with approximately 95% accuracy patients and controls in cross-validation tests. Results suggest that the eyetracking paradigms of a relatively simple and specific nature provide measures not only reflecting basic oculomotor characteristics but also predicting higher order visuospatial and visuoperceptual impairments. Eyetracking measures can represent extremely useful markers during the diagnostic phase and may be exploited as potential outcome measures for clinical trials. PMID:28824534
Eyetracking Metrics in Young Onset Alzheimer's Disease: A Window into Cognitive Visual Functions.
Pavisic, Ivanna M; Firth, Nicholas C; Parsons, Samuel; Rego, David Martinez; Shakespeare, Timothy J; Yong, Keir X X; Slattery, Catherine F; Paterson, Ross W; Foulkes, Alexander J M; Macpherson, Kirsty; Carton, Amelia M; Alexander, Daniel C; Shawe-Taylor, John; Fox, Nick C; Schott, Jonathan M; Crutch, Sebastian J; Primativo, Silvia
2017-01-01
Young onset Alzheimer's disease (YOAD) is defined as symptom onset before the age of 65 years and is particularly associated with phenotypic heterogeneity. Atypical presentations, such as the clinic-radiological visual syndrome posterior cortical atrophy (PCA), often lead to delays in accurate diagnosis. Eyetracking has been used to demonstrate basic oculomotor impairments in individuals with dementia. In the present study, we aim to explore the relationship between eyetracking metrics and standard tests of visual cognition in individuals with YOAD. Fifty-seven participants were included: 36 individuals with YOAD ( n = 26 typical AD; n = 10 PCA) and 21 age-matched healthy controls. Participants completed three eyetracking experiments: fixation, pro-saccade, and smooth pursuit tasks. Summary metrics were used as outcome measures and their predictive value explored looking at correlations with visuoperceptual and visuospatial metrics. Significant correlations between eyetracking metrics and standard visual cognitive estimates are reported. A machine-learning approach using a classification method based on the smooth pursuit raw eyetracking data discriminates with approximately 95% accuracy patients and controls in cross-validation tests. Results suggest that the eyetracking paradigms of a relatively simple and specific nature provide measures not only reflecting basic oculomotor characteristics but also predicting higher order visuospatial and visuoperceptual impairments. Eyetracking measures can represent extremely useful markers during the diagnostic phase and may be exploited as potential outcome measures for clinical trials.
Analysis of Network Clustering Algorithms and Cluster Quality Metrics at Scale
Kobourov, Stephen; Gallant, Mike; Börner, Katy
2016-01-01
Overview Notions of community quality underlie the clustering of networks. While studies surrounding network clustering are increasingly common, a precise understanding of the relationship between different cluster quality metrics is unknown. In this paper, we examine the relationship between stand-alone cluster quality metrics and information recovery metrics through a rigorous analysis of four widely-used network clustering algorithms—Louvain, Infomap, label propagation, and smart local moving. We consider the stand-alone quality metrics of modularity, conductance, and coverage, and we consider the information recovery metrics of adjusted Rand score, normalized mutual information, and a variant of normalized mutual information used in previous work. Our study includes both synthetic graphs and empirical data sets of sizes varying from 1,000 to 1,000,000 nodes. Cluster Quality Metrics We find significant differences among the results of the different cluster quality metrics. For example, clustering algorithms can return a value of 0.4 out of 1 on modularity but score 0 out of 1 on information recovery. We find conductance, though imperfect, to be the stand-alone quality metric that best indicates performance on the information recovery metrics. Additionally, our study shows that the variant of normalized mutual information used in previous work cannot be assumed to differ only slightly from traditional normalized mutual information. Network Clustering Algorithms Smart local moving is the overall best performing algorithm in our study, but discrepancies between cluster evaluation metrics prevent us from declaring it an absolutely superior algorithm. Interestingly, Louvain performed better than Infomap in nearly all the tests in our study, contradicting the results of previous work in which Infomap was superior to Louvain. We find that although label propagation performs poorly when clusters are less clearly defined, it scales efficiently and accurately to large graphs with well-defined clusters. PMID:27391786
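A quick sketch of computing one stand-alone metric (modularity) and two information-recovery metrics (adjusted Rand score, NMI) for a Louvain clustering of a toy graph; this assumes a recent NetworkX (2.8+) for `louvain_communities` and uses the karate-club membership as stand-in ground truth.

```python
import networkx as nx
from networkx.algorithms.community import louvain_communities, modularity
from sklearn.metrics import adjusted_rand_score, normalized_mutual_info_score

G = nx.karate_club_graph()
communities = louvain_communities(G, seed=0)      # list of node sets
Q = modularity(G, communities)                    # stand-alone quality metric

# Information recovery against "ground-truth" labels (club membership here).
truth = [G.nodes[n]["club"] for n in G.nodes]
pred = [next(i for i, c in enumerate(communities) if n in c) for n in G.nodes]
ari = adjusted_rand_score(truth, pred)
nmi = normalized_mutual_info_score(truth, pred)
print(Q, ari, nmi)
```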
NASA Astrophysics Data System (ADS)
Yao, Xiuya; Chaganti, Shikha; Nabar, Kunal P.; Nelson, Katrina; Plassard, Andrew; Harrigan, Rob L.; Mawn, Louise A.; Landman, Bennett A.
2017-02-01
Eye diseases and visual impairment affect millions of Americans and induce billions of dollars in annual economic burdens. Expounding upon existing knowledge of eye diseases could lead to improved treatment and disease prevention. This research investigated the relationship between structural metrics of the eye orbit and visual function measurements in a cohort of 470 patients from a retrospective study of ophthalmology records for patients (with thyroid eye disease, orbital inflammation, optic nerve edema, glaucoma, intrinsic optic nerve disease), clinical imaging, and visual function assessments. Orbital magnetic resonance imaging (MRI) and computed tomography (CT) images were retrieved and labeled in 3D using multi-atlas label fusion. Based on the 3D structures, both traditional radiology measures (e.g., Barrett index, volumetric crowding index, optic nerve length) and novel volumetric metrics were computed. Using stepwise regression, the associations between structural metrics and visual field scores (visual acuity, functional acuity, visual field, functional field, and functional vision) were assessed. Across all models, the explained variance was reasonable (R2 0.1-0.2) but highly significant (p < 0.001). Instead of analyzing a specific pathology, this study aimed to analyze data across a variety of pathologies. This approach yielded a general model for the connection between orbital structural imaging biomarkers and visual function.
Kireeva, Natalia V; Ovchinnikova, Svetlana I; Kuznetsov, Sergey L; Kazennov, Andrey M; Tsivadze, Aslan Yu
2014-02-01
This study concerns the large margin nearest neighbors classifier and its multi-metric extension as efficient approaches for metric learning, which aim to learn an appropriate distance/similarity function for the considered case studies. In recent years, many studies in data mining and pattern recognition have demonstrated that a learned metric can significantly improve the performance in classification, clustering and retrieval tasks. The paper describes the application of the metric learning approach to in silico assessment of chemical liabilities. Chemical liabilities, such as adverse effects and toxicity, play a significant role in the drug discovery process; in silico assessment of chemical liabilities is an important step aimed at reducing costs and animal testing by complementing or replacing in vitro and in vivo experiments. Here, to our knowledge for the first time, distance-based metric learning procedures have been applied for in silico assessment of chemical liabilities: the impact of metric learning on structure-activity landscapes and on the predictive performance of the developed models has been analyzed, and the learned metric was used in support vector machines. The metric learning results have been illustrated using linear and non-linear data visualization techniques in order to indicate how the change of metrics affected nearest neighbors relations and descriptor space.
NASA Astrophysics Data System (ADS)
Kireeva, Natalia V.; Ovchinnikova, Svetlana I.; Kuznetsov, Sergey L.; Kazennov, Andrey M.; Tsivadze, Aslan Yu.
2014-02-01
This study concerns the large margin nearest neighbors classifier and its multi-metric extension as efficient approaches for metric learning, which aim to learn an appropriate distance/similarity function for the considered case studies. In recent years, many studies in data mining and pattern recognition have demonstrated that a learned metric can significantly improve the performance in classification, clustering and retrieval tasks. The paper describes the application of the metric learning approach to in silico assessment of chemical liabilities. Chemical liabilities, such as adverse effects and toxicity, play a significant role in the drug discovery process; in silico assessment of chemical liabilities is an important step aimed at reducing costs and animal testing by complementing or replacing in vitro and in vivo experiments. Here, to our knowledge for the first time, distance-based metric learning procedures have been applied for in silico assessment of chemical liabilities: the impact of metric learning on structure-activity landscapes and on the predictive performance of the developed models has been analyzed, and the learned metric was used in support vector machines. The metric learning results have been illustrated using linear and non-linear data visualization techniques in order to indicate how the change of metrics affected nearest neighbors relations and descriptor space.
Establishing Qualitative Software Metrics in Department of the Navy Programs
2015-10-29
... dedicated to providing the highest quality software to its users. In doing so, there is a need for a formalized set of software quality metrics. The goal of this paper is to establish the validity of those necessary quality metrics. In our approach we collected data from over a dozen programs to provide the necessary variable data for our formulas and tested the formulas for validity. Keywords: metrics; software; quality.
Valous, Nektarios A; Drakakis, Konstantinos; Sun, Da-Wen
2010-10-01
The visual texture of pork ham slices reveals information about the different qualities and perceived image heterogeneity, which is encapsulated as spatial variations in geometry and spectral characteristics. Detrended Fluctuation Analysis (DFA) detects long-range correlations in nonstationary spatial sequences via a self-similarity scaling exponent, alpha. In the current work, the aim is to investigate the usefulness of alpha, using different colour channels (R, G, B, L*, a*, b*, H, S, V, and Grey), as a quantitative descriptor of visual texture in sliced ham surface patterns for the detection of long-range correlations in unidimensional spatial series of greyscale intensity pixel values at 0°, 30°, 45°, 60°, and 90° rotations. Images were acquired from three qualities of pre-sliced pork ham typically consumed in Ireland (200 slices per quality). Results indicated that the DFA approach can be used to characterize and quantify the textural appearance of the three ham qualities, for different image orientations, with a global scaling exponent. The spatial series extracted from the ham images display long-range dependence, indicating an average behaviour around 1/f-noise. Results indicate that alpha has a universal character in quantifying the visual texture of ham surface intensity patterns, with no considerable crossovers that alter the behaviour of the fluctuations. Fractal correlation properties can thus be a useful metric for capturing information embedded in the visual texture of hams. Copyright (c) 2010 The American Meat Science Association. Published by Elsevier Ltd. All rights reserved.
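For readers who want to experiment with this kind of analysis, the sketch below shows a minimal first-order DFA estimate of the scaling exponent alpha for a one-dimensional intensity series. It illustrates the general technique only, not the authors' implementation; the window sizes and input series are placeholder assumptions.

```python
import numpy as np

def dfa_alpha(x, scales=(4, 8, 16, 32, 64)):
    """Estimate the DFA self-similarity exponent alpha of a 1-D series.

    Integrate the mean-removed series, detrend it linearly in
    non-overlapping windows, and fit the log-log slope of the
    fluctuation function F(n) against window size n.
    """
    x = np.asarray(x, dtype=float)
    y = np.cumsum(x - x.mean())           # integrated profile
    fluct = []
    for n in scales:
        n_win = len(y) // n
        f2 = 0.0
        for k in range(n_win):
            seg = y[k * n:(k + 1) * n]
            t = np.arange(n)
            coef = np.polyfit(t, seg, 1)  # local linear trend
            f2 += np.mean((seg - np.polyval(coef, t)) ** 2)
        fluct.append(np.sqrt(f2 / n_win))
    # alpha is the slope of log F(n) versus log n
    alpha, _ = np.polyfit(np.log(scales), np.log(fluct), 1)
    return alpha

# A greyscale pixel row extracted from an image would be passed as `x`;
# alpha near 1.0 corresponds to 1/f-like long-range correlations.
```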
Iqbal, Sahar; Mustansar, Tazeen
2017-03-01
Sigma is a metric that quantifies the performance of a process as a rate of defects per million opportunities. In clinical laboratories, sigma metric analysis is used to assess the performance of the laboratory process system. The sigma metric is also used as a quality management strategy for a laboratory process, improving quality by addressing errors after identification. The aim of this study is to evaluate the errors in quality control of the analytical phase of the laboratory system by sigma metric. For this purpose, sigma metric analysis was done for analytes using internal and external quality control as quality indicators. Results of the sigma metric analysis were used to identify gaps and the need for modification in the strategy of the laboratory quality control procedure. The sigma metric was calculated for the quality control program of ten clinical chemistry analytes, including glucose, chloride, cholesterol, triglyceride, HDL, albumin, direct bilirubin, total bilirubin, protein and creatinine, at two control levels. To calculate the sigma metric, imprecision and bias were calculated from internal and external quality control data, respectively. The minimum acceptable performance was considered as 3 sigma. Westgard sigma rules were applied to customize the quality control procedure. The sigma level was found acceptable (≥3) for glucose (L2), cholesterol, triglyceride, HDL, direct bilirubin and creatinine at both levels of control. For the rest of the analytes the sigma metric was found to be <3. The lowest sigma value was found for chloride (1.1) at L2. The highest sigma value was found for creatinine (10.1) at L3. HDL was found to have the highest sigma values at both control levels (8.8 and 8.0 at L2 and L3, respectively). We conclude that analytes with a sigma value <3 require strict monitoring and modification of the quality control procedure. In this study, application of the Westgard sigma rules provided a practical solution for an improved and focused design of the QC procedure.
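The sigma metric itself is a simple arithmetic combination of allowable total error, bias, and imprecision, all expressed in percent. The snippet below is a minimal sketch of that calculation; the example numbers are illustrative only and are not taken from the study.

```python
def sigma_metric(tea_pct: float, bias_pct: float, cv_pct: float) -> float:
    """Sigma = (allowable total error - |bias|) / imprecision, all in %."""
    return (tea_pct - abs(bias_pct)) / cv_pct

# Illustrative values (not from the study): TEa = 10%, bias = 2%, CV = 2%
# gives 4 sigma, above the 3-sigma minimum considered acceptable here.
print(sigma_metric(10.0, 2.0, 2.0))  # 4.0
```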
The data quality analyzer: A quality control program for seismic data
NASA Astrophysics Data System (ADS)
Ringler, A. T.; Hagerty, M. T.; Holland, J.; Gonzales, A.; Gee, L. S.; Edwards, J. D.; Wilson, D.; Baker, A. M.
2015-03-01
The U.S. Geological Survey's Albuquerque Seismological Laboratory (ASL) has several initiatives underway to enhance and track the quality of data produced from ASL seismic stations and to improve communication about data problems to the user community. The Data Quality Analyzer (DQA) is one such development and is designed to characterize seismic station data quality in a quantitative and automated manner. The DQA consists of a metric calculator, a PostgreSQL database, and a Web interface: The metric calculator, SEEDscan, is a Java application that reads and processes miniSEED data and generates metrics based on a configuration file. SEEDscan compares hashes of metadata and data to detect changes in either and performs subsequent recalculations as needed. This ensures that the metric values are up to date and accurate. SEEDscan can be run as a scheduled task or on demand. The PostgreSQL database acts as a central hub where metric values and limited station descriptions are stored at the channel level with one-day granularity. The Web interface dynamically loads station data from the database and allows the user to make requests for time periods of interest, review specific networks and stations, plot metrics as a function of time, and adjust the contribution of various metrics to the overall quality grade of the station. The quantification of data quality is based on the evaluation of various metrics (e.g., timing quality, daily noise levels relative to long-term noise models, and comparisons between broadband data and event synthetics). Users may select which metrics contribute to the assessment and those metrics are aggregated into a "grade" for each station. The DQA is being actively used for station diagnostics and evaluation based on the completed metrics (availability, gap count, timing quality, deviation from a global noise model, deviation from a station noise model, coherence between co-located sensors, and comparison between broadband data and synthetics for earthquakes) on stations in the Global Seismographic Network and Advanced National Seismic System.
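The abstract describes aggregating individual metric values into an overall station grade with user-adjustable contributions. A normalized weighted mean is one natural way to do this; the sketch below illustrates that idea with hypothetical metric names and weights, not the DQA's actual scoring code.

```python
def station_grade(metric_scores, weights):
    """Weighted aggregate of per-metric scores (each scaled 0-100).

    Metric names and weights are hypothetical; in the DQA the user
    adjusts each metric's contribution, which a normalized weighted
    mean captures.
    """
    total_w = sum(weights[m] for m in metric_scores)
    return sum(metric_scores[m] * weights[m] for m in metric_scores) / total_w

scores = {"availability": 98.0, "timing_quality": 90.0, "gap_count": 85.0}
weights = {"availability": 2.0, "timing_quality": 1.0, "gap_count": 1.0}
print(round(station_grade(scores, weights), 1))  # 92.8
```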
Tools for monitoring system suitability in LC MS/MS centric proteomic experiments.
Bereman, Michael S
2015-03-01
With advances in liquid chromatography coupled to tandem mass spectrometry technologies combined with the continued goals of biomarker discovery, clinical applications of established biomarkers, and integrating large multiomic datasets (i.e. "big data"), there remains an urgent need for robust tools to assess instrument performance (i.e. system suitability) in proteomic workflows. To this end, several freely available tools have been introduced that monitor a number of peptide identification (ID) and/or peptide ID free metrics. Peptide ID metrics include numbers of proteins, peptides, or peptide spectral matches identified from a complex mixture. Peptide ID free metrics include retention time reproducibility, full width half maximum, ion injection times, and integrated peptide intensities. The main driving force in the development of these tools is to monitor both intra- and interexperiment performance variability and to identify sources of variation. The purpose of this review is to summarize and evaluate these tools based on versatility, automation, vendor neutrality, metrics monitored, and visualization capabilities. In addition, the implementation of a robust system suitability workflow is discussed in terms of metrics, type of standard, and frequency of evaluation along with the obstacles to overcome prior to incorporating a more proactive approach to overall quality control in liquid chromatography coupled to tandem mass spectrometry based proteomic workflows. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Temporal multiplexing to simulate multifocal intraocular lenses: theoretical considerations
Akondi, Vyas; Dorronsoro, Carlos; Gambra, Enrique; Marcos, Susana
2017-01-01
Fast tunable lenses allow an effective design of a portable simultaneous vision simulator (SimVis) of multifocal corrections. A novel method of evaluating the temporal profile of a tunable lens in simulating different multifocal intraocular lenses (M-IOLs) is presented. The proposed method involves the characteristic fitting of the through-focus (TF) optical quality of the multifocal component of a given M-IOL to a linear combination of TF optical quality of monofocal lenses viable with a tunable lens. Three different types of M-IOL designs are tested, namely: segmented refractive, diffractive and refractive extended depth of focus. The metric used for the optical evaluation of the temporal profile is the visual Strehl (VS) ratio. It is shown that the time profiles generated with the VS ratio as a metric in SimVis resulted in TF VS ratio and TF simulated images that closely matched the TF VS ratio and TF simulated images predicted with the M-IOL. The effects of temporal sampling, varying pupil size, monochromatic aberrations, longitudinal chromatic aberrations and temporal dynamics on SimVis are discussed. PMID:28717577
Clustervision: Visual Supervision of Unsupervised Clustering.
Kwon, Bum Chul; Eysenbach, Ben; Verma, Janu; Ng, Kenney; De Filippi, Christopher; Stewart, Walter F; Perer, Adam
2018-01-01
Clustering, the process of grouping together similar items into distinct partitions, is a common type of unsupervised machine learning that can be useful for summarizing and aggregating complex multi-dimensional data. However, data can be clustered in many ways, and there exists a large body of algorithms designed to reveal different patterns. While having access to a wide variety of algorithms is helpful, in practice it is quite difficult for data scientists to choose and parameterize algorithms to get clustering results relevant to their dataset and analytical tasks. To alleviate this problem, we built Clustervision, a visual analytics tool that helps ensure data scientists find the right clustering among the many techniques and parameters available. Our system clusters data using a variety of clustering techniques and parameters and then ranks clustering results utilizing five quality metrics. In addition, users can guide the system to produce more relevant results by providing task-relevant constraints on the data. Our visual user interface allows users to find high-quality clustering results, explore the clusters using several coordinated visualization techniques, and select the cluster result that best suits their task. We demonstrate this novel approach using a case study with a team of researchers in the medical domain and showcase that our system empowers users to choose an effective representation of their complex data.
Research on quality metrics of wireless adaptive video streaming
NASA Astrophysics Data System (ADS)
Li, Xuefei
2018-04-01
With the development of wireless networks and intelligent terminals, video traffic has increased dramatically, and adaptive video streaming has become one of the most promising video transmission technologies. For this type of service, good QoS (Quality of Service) in the wireless network does not always guarantee that all customers have a good experience, so new quality metrics have been widely studied recently. Taking this into account, the objective of this paper is to investigate quality metrics for wireless adaptive video streaming. In this paper, a wireless video streaming simulation platform with a DASH mechanism and a multi-rate video generator is established. Based on this platform, a PSNR model, an SSIM model and a Quality Level model are implemented. The Quality Level model considers QoE (Quality of Experience) factors such as image quality, stalling and switching frequency, while the PSNR and SSIM models mainly consider the quality of the video itself. To evaluate the performance of these QoE models, three performance metrics (SROCC, PLCC and RMSE), which compare subjective and predicted MOS (Mean Opinion Score), are calculated. From these performance metrics, the monotonicity, linearity and accuracy of the quality metrics can be observed.
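The three performance metrics named in the abstract are standard and easy to reproduce. A minimal sketch, assuming the subjective and predicted MOS values are already paired per test sequence:

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

def qoe_performance(subjective_mos, predicted_mos):
    """Monotonicity (SROCC), linearity (PLCC), and accuracy (RMSE)
    of a quality model's predictions against subjective MOS."""
    srocc = spearmanr(subjective_mos, predicted_mos).correlation
    plcc = pearsonr(subjective_mos, predicted_mos)[0]
    rmse = np.sqrt(np.mean((np.asarray(subjective_mos)
                            - np.asarray(predicted_mos)) ** 2))
    return srocc, plcc, rmse

# Placeholder data: five sequences scored subjectively and by a model.
print(qoe_performance([1.0, 2.5, 3.0, 4.2, 4.8],
                      [1.3, 2.2, 3.4, 4.0, 4.9]))
```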
[Clinical trial data management and quality metrics system].
Chen, Zhao-hua; Huang, Qin; Deng, Ya-zhong; Zhang, Yue; Xu, Yu; Yu, Hao; Liu, Zong-fan
2015-11-01
A data quality management system is essential to ensure accurate, complete, consistent, and reliable data collection in clinical research. This paper is devoted to various choices of data quality metrics. They are categorized by study status, e.g. study start-up, conduct, and close-out. In each category, metrics for different purposes are listed according to ALCOA+ principles such as completeness, accuracy, timeliness, and traceability. Some frequently used general quality metrics are also introduced. This paper provides as much detailed information as possible for each metric, giving its definition, purpose, evaluation, referenced benchmark, and recommended targets in favor of real practice. It is important that sponsors and data management service providers establish a robust, integrated clinical trial data quality management system to ensure sustainably high quality of clinical trial deliverables. Such a system will also support enterprise-level data evaluation and benchmarking of data quality across projects, sponsors, and data management service providers by using objective metrics from real clinical trials. We hope this will be a significant input to accelerate the improvement of clinical trial data quality in the industry.
Effects of Socket Size on Metrics of Socket Fit in Trans-Tibial Prosthesis Users
Sanders, Joan E; Youngblood, Robert T; Hafner, Brian J; Cagle, John C; McLean, Jake B; Redd, Christian B; Dietrich, Colin R; Ciol, Marcia A; Allyn, Katheryn J
2017-01-01
The purpose of this research was to conduct a preliminary effort to identify quantitative metrics to distinguish a good socket from an oversized socket in people with trans-tibial amputation. Results could be used to inform clinical practices related to socket replacement. A cross-over study was conducted on community ambulators (K-level 3 or 4) with good residual limb sensation. Participants were each provided with two sockets, a duplicate of their as-prescribed socket and a modified socket that was enlarged or reduced by 1.8 mm (~6% of the socket volume) based on the fit quality of the as-prescribed socket. The two sockets were termed a larger socket and a smaller socket. Activity was monitored while participants wore each socket for 4 wk. Participants' gait; self-reported satisfaction, quality of fit, and performance; socket comfort; and morning-to-afternoon limb fluid volume changes were assessed. Visual analysis of plots and estimated effect sizes (measured as mean difference divided by standard deviation) showed largest effects for step time asymmetry, step width asymmetry, anterior and anterior-distal morning-to-afternoon fluid volume change, socket comfort scores, and self-reported measures of utility, satisfaction, and residual limb health. These variables may be viable metrics for early detection of deterioration in socket fit, and should be tested in a larger clinical study. PMID:28373013
Effects of socket size on metrics of socket fit in trans-tibial prosthesis users.
Sanders, Joan E; Youngblood, Robert T; Hafner, Brian J; Cagle, John C; McLean, Jake B; Redd, Christian B; Dietrich, Colin R; Ciol, Marcia A; Allyn, Katheryn J
2017-06-01
The purpose of this research was to conduct a preliminary effort to identify quantitative metrics to distinguish a good socket from an oversized socket in people with trans-tibial amputation. Results could be used to inform clinical practices related to socket replacement. A cross-over study was conducted on community ambulators (K-level 3 or 4) with good residual limb sensation. Participants were each provided with two sockets, a duplicate of their as-prescribed socket and a modified socket that was enlarged or reduced by 1.8mm (∼6% of the socket volume) based on the fit quality of the as-prescribed socket. The two sockets were termed a larger socket and a smaller socket. Activity was monitored while participants wore each socket for 4 weeks. Participants' gait; self-reported satisfaction, quality of fit, and performance; socket comfort; and morning-to-afternoon limb fluid volume changes were assessed. Visual analysis of plots and estimated effect sizes (measured as mean difference divided by standard deviation) showed largest effects for step time asymmetry, step width asymmetry, anterior and anterior-distal morning-to-afternoon fluid volume change, socket comfort score, and self-reported utility. These variables may be viable metrics for early detection of deterioration in socket fit, and should be tested in a larger clinical study. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.
Blind image quality assessment based on aesthetic and statistical quality-aware features
NASA Astrophysics Data System (ADS)
Jenadeleh, Mohsen; Masaeli, Mohammad Masood; Moghaddam, Mohsen Ebrahimi
2017-07-01
The main goal of image quality assessment (IQA) methods is the emulation of human perceptual image quality judgments. Therefore, the correlation of these methods' objective scores with human perceptual scores is considered their performance metric. Human judgment of image quality implicitly includes many factors, such as aesthetics, semantics, context, and various types of visual distortion. The main idea of this paper is to use a host of features that are commonly employed in image aesthetics assessment in order to improve the accuracy of blind image quality assessment (BIQA) methods. We propose an approach that enriches the features of BIQA methods by integrating a host of aesthetic image features with natural image statistics features derived from multiple domains. The proposed features have been used to augment five different state-of-the-art BIQA methods that use natural scene statistics features. Experiments were performed on seven benchmark image quality databases. The experimental results showed significant improvement in the accuracy of the methods.
Hens, Koen; Berth, Mario; Armbruster, Dave; Westgard, Sten
2014-07-01
Six Sigma metrics were used to assess the analytical quality of automated clinical chemistry and immunoassay tests in a large Belgian clinical laboratory and to explore the importance of the source used for estimation of the allowable total error. Clinical laboratories are continually challenged to maintain analytical quality. However, it is difficult to measure assay quality objectively and quantitatively. The Sigma metric is a single number that estimates quality based on the traditional parameters used in the clinical laboratory: allowable total error (TEa), precision and bias. In this study, Sigma metrics were calculated for 41 clinical chemistry assays for serum and urine on five ARCHITECT c16000 chemistry analyzers. Controls at two analyte concentrations were tested and Sigma metrics were calculated using three different TEa targets (Ricos biological variability, CLIA, and RiliBÄK). Sigma metrics varied with analyte concentration, the TEa target, and between/among analyzers. Sigma values identified those assays that are analytically robust and require minimal quality control rules and those that exhibit more variability and require more complex rules. The analyzer to analyzer variability was assessed on the basis of Sigma metrics. Six Sigma is a more efficient way to control quality, but the lack of TEa targets for many analytes and the sometimes inconsistent TEa targets from different sources are important variables for the interpretation and the application of Sigma metrics in a routine clinical laboratory. Sigma metrics are a valuable means of comparing the analytical quality of two or more analyzers to ensure the comparability of patient test results.
Information theoretic analysis of canny edge detection in visual communication
NASA Astrophysics Data System (ADS)
Jiang, Bo; Rahman, Zia-ur
2011-06-01
In general edge detection evaluation, edge detectors are examined, analyzed, and compared either visually or with a metric for a specific application. This analysis is usually independent of the characteristics of the image-gathering, transmission, and display processes that do impact the quality of the acquired image and thus the resulting edge image. We propose a new information theoretic analysis of edge detection that unites the different components of the visual communication channel and assesses edge detection algorithms in an integrated manner based on Shannon's information theory. An edge detection algorithm is here considered to achieve high performance only if the information rate from the scene to the edge approaches the maximum possible. Thus, by holding the initial conditions of the visual communication system constant, different edge detection algorithms can be evaluated. This analysis is normally limited to linear shift-invariant filters, so in order to examine the Canny edge operator in our proposed system, we need to estimate its "power spectral density" (PSD). Since the Canny operator is non-linear and shift variant, we perform the estimation for a set of different system environment conditions using simulations. In our paper we first introduce the PSD of the Canny operator for a range of system parameters. Then, using the estimated PSD, we assess the Canny operator using information theoretic analysis. The information-theoretic metric is also used to compare the performance of the Canny operator with other edge-detection operators. This also provides a simple tool for selecting appropriate edge-detection algorithms based on system parameters, and for adjusting their parameters to maximize information throughput.
Multimodal assessment of visual attention using the Bethesda Eye & Attention Measure (BEAM).
Ettenhofer, Mark L; Hershaw, Jamie N; Barry, David M
2016-01-01
Computerized cognitive tests measuring manual response time (RT) and errors are often used in the assessment of visual attention. Evidence suggests that saccadic RT and errors may also provide valuable information about attention. This study was conducted to examine a novel approach to multimodal assessment of visual attention incorporating concurrent measurements of saccadic eye movements and manual responses. A computerized cognitive task, the Bethesda Eye & Attention Measure (BEAM) v.34, was designed to evaluate key attention networks through concurrent measurement of saccadic and manual RT and inhibition errors. Results from a community sample of n = 54 adults were analyzed to examine effects of BEAM attention cues on manual and saccadic RT and inhibition errors, internal reliability of BEAM metrics, relationships between parallel saccadic and manual metrics, and relationships of BEAM metrics to demographic characteristics. Effects of BEAM attention cues (alerting, orienting, interference, gap, and no-go signals) were consistent with previous literature examining key attention processes. However, corresponding saccadic and manual measurements were weakly related to each other, and only manual measurements were related to estimated verbal intelligence or years of education. This study provides preliminary support for the feasibility of multimodal assessment of visual attention using the BEAM. Results suggest that BEAM saccadic and manual metrics provide divergent measurements. Additional research will be needed to obtain comprehensive normative data, to cross-validate BEAM measurements with other indicators of neural and cognitive function, and to evaluate the utility of these metrics within clinical populations of interest.
Local coding based matching kernel method for image classification.
Song, Yan; McLoughlin, Ian Vince; Dai, Li-Rong
2014-01-01
This paper focuses on how to effectively and efficiently measure visual similarity for local feature based representation. Among existing methods, metrics based on Bag of Visual Words (BoV) techniques are efficient and conceptually simple, at the expense of effectiveness. By contrast, kernel based metrics are more effective, but at the cost of greater computational complexity and increased storage requirements. We show that a unified visual matching framework can be developed to encompass both BoV and kernel based metrics, in which the local kernel plays an important role between feature pairs or between features and their reconstruction. Generally, local kernels are defined using Euclidean distance or its derivatives, based either explicitly or implicitly on an assumption of Gaussian noise. However, local features such as SIFT and HoG often follow a heavy-tailed distribution, which tends to undermine the motivation behind Euclidean metrics. Motivated by recent advances in feature coding techniques, a novel efficient local coding based matching kernel (LCMK) method is proposed. This exploits the manifold structures in Hilbert space derived from local kernels. The proposed method combines advantages of both BoV and kernel based metrics, and achieves linear computational complexity. This enables efficient and scalable visual matching to be performed on large scale image sets. To evaluate the effectiveness of the proposed LCMK method, we conduct extensive experiments with widely used benchmark datasets, including 15-Scenes, Caltech101/256, and the PASCAL VOC 2007 and 2011 datasets. Experimental results confirm the effectiveness of the relatively efficient LCMK method.
An objective method for a video quality evaluation in a 3DTV service
NASA Astrophysics Data System (ADS)
Wilczewski, Grzegorz
2015-09-01
The following article describes a proposed objective method for 3DTV video quality evaluation, the Compressed Average Image Intensity (CAII) method. Identification of the 3DTV service's content chain nodes enables the design of a versatile, objective video quality metric based on an advanced approach to stereoscopic videostream analysis. Insights into the designed metric's mechanisms, as well as an evaluation of its performance under simulated environmental conditions, are discussed. As a result, the CAII metric might be effectively used in a variety of service quality assessment applications.
Jarc, Anthony M; Curet, Myriam J
2017-03-01
Effective visualization of the operative field is vital to surgical safety and education. However, additional metrics for visualization are needed to complement other common measures of surgeon proficiency, such as time or errors. Unlike other surgical modalities, robot-assisted minimally invasive surgery (RAMIS) enables data-driven feedback to trainees through measurement of camera adjustments. The purpose of this study was to validate and quantify the importance of novel camera metrics during RAMIS. New (n = 18), intermediate (n = 8), and experienced (n = 13) surgeons completed 25 virtual reality simulation exercises on the da Vinci Surgical System. Three camera metrics were computed for all exercises and compared to conventional efficiency measures. Both camera metrics and efficiency metrics showed construct validity (p < 0.05) across most exercises (camera movement frequency 23/25, camera movement duration 22/25, camera movement interval 19/25, overall score 24/25, completion time 25/25). Camera metrics differentiated new and experienced surgeons across all tasks, as did efficiency metrics. Finally, camera metrics correlated significantly (p < 0.05) with completion time (camera movement frequency 21/25, camera movement duration 21/25, camera movement interval 20/25) and overall score (camera movement frequency 20/25, camera movement duration 19/25, camera movement interval 20/25) for most exercises. We demonstrate construct validity of novel camera metrics and correlation between camera metrics and efficiency metrics across many simulation exercises. We believe camera metrics could be used to improve RAMIS proficiency-based curricula.
An Underwater Color Image Quality Evaluation Metric.
Yang, Miao; Sowmya, Arcot
2015-12-01
Quality evaluation of underwater images is a key goal of underwater video image retrieval and intelligent processing. To date, no metric has been proposed for underwater color image quality evaluation (UCIQE). The special absorption and scattering characteristics of the water medium do not allow direct application of natural color image quality metrics especially to different underwater environments. In this paper, subjective testing for underwater image quality has been organized. The statistical distribution of the underwater image pixels in the CIELab color space related to subjective evaluation indicates the sharpness and colorful factors correlate well with subjective image quality perception. Based on these, a new UCIQE metric, which is a linear combination of chroma, saturation, and contrast, is proposed to quantify the non-uniform color cast, blurring, and low-contrast that characterize underwater engineering and monitoring images. Experiments are conducted to illustrate the performance of the proposed UCIQE metric and its capability to measure the underwater image enhancement results. They show that the proposed metric has comparable performance to the leading natural color image quality metrics and the underwater grayscale image quality metrics available in the literature, and can predict with higher accuracy the relative amount of degradation with similar image content in underwater environments. Importantly, UCIQE is a simple and fast solution for real-time underwater video processing. The effectiveness of the presented measure is also demonstrated by subjective evaluation. The results show better correlation between the UCIQE and the subjective mean opinion score.
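The abstract specifies UCIQE as a linear combination of chroma, saturation, and contrast statistics. The sketch below shows one common way to compute such a measure from a CIELab conversion; the coefficient values are those commonly cited for UCIQE, but both they and the exact statistic definitions should be verified against the original paper before reuse.

```python
import numpy as np
import cv2  # OpenCV, used here for the BGR -> CIELab conversion

def uciqe(bgr_img, c1=0.4680, c2=0.2745, c3=0.2576):
    """Sketch of a UCIQE-style score: weighted sum of chroma standard
    deviation, luminance contrast, and mean saturation."""
    lab = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2Lab).astype(np.float64)
    L, a, b = lab[..., 0], lab[..., 1], lab[..., 2]
    # OpenCV stores 8-bit a/b channels offset by 128
    chroma = np.sqrt((a - 128.0) ** 2 + (b - 128.0) ** 2)
    sigma_c = chroma.std()
    # luminance contrast: spread between the top and bottom 1% of L
    con_l = np.percentile(L, 99) - np.percentile(L, 1)
    # per-pixel saturation, guarding against division by zero
    sat = chroma / np.maximum(np.sqrt(chroma ** 2 + L ** 2), 1e-9)
    return c1 * sigma_c + c2 * con_l + c3 * sat.mean()
```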
NASA Astrophysics Data System (ADS)
Qiu, Guoping; Kheiri, Ahmed
2011-01-01
Current subjective image quality assessments have been developed in laboratory environments, under controlled conditions, and are dependent on the participation of limited numbers of observers. In this research, with the help of Web 2.0 and social media technology, a new method for building a subjective image quality metric has been developed in which the observers are Internet users. A website with a simple user interface that enables Internet users from anywhere at any time to vote for the better quality version of a pair of the same image has been constructed. Users' votes are recorded and used to rank the images according to their perceived visual qualities. We have developed three rank aggregation algorithms to process the recorded pair comparison data: the first uses a naive approach, the second employs a Condorcet method, and the third uses Dykstra's extension of the Bradley-Terry method. The website has been collecting data for about three months and had accumulated over 10,000 votes at the time of writing this paper. Results show that the Internet and its allied technologies, such as crowdsourcing, offer a promising new paradigm for image and video quality assessment in which hundreds of thousands of Internet users can contribute to building more robust image quality metrics. We have made Internet user generated social image quality (SIQ) data of a public image database available online (http://www.hdri.cs.nott.ac.uk/siq/) to provide the image quality research community with a new source of ground truth data. The website continues to collect votes and will include more public image databases; it will also be extended to include videos to collect social video quality (SVQ) data. All data will be made publicly available on the website in due course.
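Rank aggregation from pairwise votes can be illustrated with a plain Bradley-Terry fit (without Dykstra's extension used in the paper). Below is a minimal sketch using the standard minorization-maximization update; the win-count matrix is a hypothetical stand-in for the website's vote data.

```python
import numpy as np

def bradley_terry(wins, iters=100):
    """Fit Bradley-Terry strength scores from pairwise votes.

    wins[i][j] = number of votes preferring image i over image j.
    Sorting images by the returned scores ranks the set.
    """
    w = np.asarray(wins, dtype=float)
    n = w.shape[0]
    p = np.ones(n)
    for _ in range(iters):
        for i in range(n):
            num = w[i].sum()
            den = sum((w[i, j] + w[j, i]) / (p[i] + p[j])
                      for j in range(n) if j != i)
            if den > 0:
                p[i] = num / den
        p /= p.sum()  # fix the arbitrary scale
    return p

# Hypothetical votes over three versions of one image.
votes = [[0, 7, 9],
         [3, 0, 6],
         [1, 4, 0]]
print(bradley_terry(votes))  # larger score = preferred more often
```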
A no-reference video quality assessment metric based on ROI
NASA Astrophysics Data System (ADS)
Jia, Lixiu; Zhong, Xuefei; Tu, Yan; Niu, Wenjuan
2015-01-01
A no-reference video quality assessment metric based on the region of interest (ROI) is proposed in this paper. In the metric, objective video quality is evaluated by integrating the quality of two compression artifacts, i.e. blurring distortion and blocking distortion. A Gaussian kernel function was used to extract human density maps for the H.264-coded videos from subjective eye tracking data. An objective bottom-up ROI extraction model was built from the magnitude discrepancy of the discrete wavelet transform between two consecutive frames, a center-weighted color opponent model, a luminance contrast model, and a frequency saliency model based on spectral residual. Then only the objective saliency maps were used to compute the objective blurring and blocking quality. The results indicate that the objective ROI extraction metric has a higher area under the curve (AUC) value. Compared with conventional video quality assessment metrics, which measure quality over all video frames, the metric proposed in this paper not only decreases the computational complexity but also improves the correlation between the subjective mean opinion score (MOS) and objective scores.
NASA Technical Reports Server (NTRS)
Basili, V. R.
1981-01-01
Work on metrics is discussed. Factors that affect software quality are reviewed. Metrics are discussed in terms of criteria achievement, reliability, and fault tolerance. Subjective and objective metrics are distinguished. Product/process and cost/quality metrics are characterized and discussed.
Shao, Feng; Li, Kemeng; Lin, Weisi; Jiang, Gangyi; Yu, Mei; Dai, Qionghai
2015-10-01
Quality assessment of 3D images encounters more challenges than its 2D counterparts. Directly applying 2D image quality metrics is not the solution. In this paper, we propose a new full-reference quality assessment for stereoscopic images by learning binocular receptive field properties to be more in line with human visual perception. To be more specific, in the training phase, we learn a multiscale dictionary from the training database, so that the latent structure of images can be represented as a set of basis vectors. In the quality estimation phase, we compute sparse feature similarity index based on the estimated sparse coefficient vectors by considering their phase difference and amplitude difference, and compute global luminance similarity index by considering luminance changes. The final quality score is obtained by incorporating binocular combination based on sparse energy and sparse complexity. Experimental results on five public 3D image quality assessment databases demonstrate that in comparison with the most related existing methods, the devised algorithm achieves high consistency with subjective assessment.
Analysis of Network Clustering Algorithms and Cluster Quality Metrics at Scale.
Emmons, Scott; Kobourov, Stephen; Gallant, Mike; Börner, Katy
2016-01-01
Notions of community quality underlie the clustering of networks. While studies surrounding network clustering are increasingly common, a precise understanding of the relationship between different cluster quality metrics is unknown. In this paper, we examine the relationship between stand-alone cluster quality metrics and information recovery metrics through a rigorous analysis of four widely used network clustering algorithms: Louvain, Infomap, label propagation, and smart local moving. We consider the stand-alone quality metrics of modularity, conductance, and coverage, and we consider the information recovery metrics of adjusted Rand score, normalized mutual information, and a variant of normalized mutual information used in previous work. Our study includes both synthetic graphs and empirical data sets of sizes varying from 1,000 to 1,000,000 nodes. We find significant differences among the results of the different cluster quality metrics. For example, clustering algorithms can return a value of 0.4 out of 1 on modularity but score 0 out of 1 on information recovery. We find conductance, though imperfect, to be the stand-alone quality metric that best indicates performance on the information recovery metrics. Additionally, our study shows that the variant of normalized mutual information used in previous work cannot be assumed to differ only slightly from traditional normalized mutual information. Smart local moving is the overall best performing algorithm in our study, but discrepancies between cluster evaluation metrics prevent us from declaring it an absolutely superior algorithm. Interestingly, Louvain performed better than Infomap in nearly all the tests in our study, contradicting the results of previous work in which Infomap was superior to Louvain. We find that although label propagation performs poorly when clusters are less clearly defined, it scales efficiently and accurately to large graphs with well-defined clusters.
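The information recovery metrics named here are available off the shelf; the snippet below shows how a found clustering would be scored against a planted ground truth (the labels are illustrative placeholders):

```python
from sklearn.metrics import adjusted_rand_score, normalized_mutual_info_score

# An imperfect recovery of a planted three-community partition.
# High stand-alone scores (e.g. modularity) do not guarantee high
# information recovery, which these two metrics measure directly.
truth = [0, 0, 0, 1, 1, 1, 2, 2, 2]
found = [0, 0, 1, 1, 1, 1, 2, 2, 0]
print(adjusted_rand_score(truth, found))
print(normalized_mutual_info_score(truth, found))
```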
Development of a perceptually calibrated objective metric of noise
NASA Astrophysics Data System (ADS)
Keelan, Brian W.; Jin, Elaine W.; Prokushkin, Sergey
2011-01-01
A system simulation model was used to create scene-dependent noise masks that reflect current performance of mobile phone cameras. Stimuli with different overall magnitudes of noise and with varying mixtures of red, green, blue, and luminance noises were included in the study. Eleven treatments in each of ten pictorial scenes were evaluated by twenty observers using the softcopy ruler method. In addition to determining the quality loss function in just noticeable differences (JNDs) for the average observer and scene, transformations for different combinations of observer sensitivity and scene susceptibility were derived. The psychophysical results were used to optimize an objective metric of isotropic noise based on system noise power spectra (NPS), which were integrated over a visual frequency weighting function to yield perceptually relevant variances and covariances in CIE L*a*b* space. Because the frequency weighting function is expressed in terms of cycles per degree at the retina, it accounts for display pixel size and viewing distance effects, so application-specific predictions can be made. Excellent results were obtained using only L* and a* variances and L*a* covariance, with relative weights of 100, 5, and 12, respectively. The positive a* weight suggests that the luminance (photopic) weighting is slightly narrow on the long wavelength side for predicting perceived noisiness. The L*a* covariance term, which is normally negative, reflects masking between L* and a* noise, as confirmed in informal evaluations. Test targets in linear sRGB and rendered L*a*b* spaces for each treatment are available at http://www.aptina.com/ImArch/ to enable other researchers to test metrics of their own design and calibrate them to JNDs of quality loss without performing additional observer experiments. Such JND-calibrated noise metrics are particularly valuable for comparing the impact of noise and other attributes, and for computing overall image quality.
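The final form of the metric reported here reduces to a weighted combination of perceptual noise variances and a covariance. Below is a minimal sketch, assuming the L* and a* noise residuals are numpy arrays that have already been filtered by the visual frequency weighting described in the abstract (that filtering step is omitted):

```python
import numpy as np

def perceptual_noise_metric(l_star, a_star,
                            w_ll=100.0, w_aa=5.0, w_la=12.0):
    """Combine frequency-weighted CIELAB noise statistics using the
    relative weights reported in the abstract (100, 5, 12).

    Inputs are noise residual images in L* and a*; the visual
    frequency weighting is assumed to have been applied upstream.
    """
    var_l = np.var(l_star)
    var_a = np.var(a_star)
    cov_la = np.cov(l_star.ravel(), a_star.ravel())[0, 1]
    # cov_la is normally negative, reflecting masking between
    # L* and a* noise, so the positive weight reduces the score.
    return w_ll * var_l + w_aa * var_a + w_la * cov_la
```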
A database for assessment of effect of lossy compression on digital mammograms
NASA Astrophysics Data System (ADS)
Wang, Jiheng; Sahiner, Berkman; Petrick, Nicholas; Pezeshk, Aria
2018-03-01
With widespread use of screening digital mammography, efficient storage of the vast amounts of data has become a challenge. While lossless image compression causes no risk to the interpretation of the data, it does not allow for high compression rates. Lossy compression and the associated higher compression ratios are therefore more desirable. The U.S. Food and Drug Administration (FDA) currently interprets the Mammography Quality Standards Act as prohibiting lossy compression of digital mammograms for primary image interpretation, image retention, or transfer to the patient or her designated recipient. Previous work has used reader studies to determine proper usage criteria for evaluating lossy image compression in mammography, and utilized different measures and metrics to characterize medical image quality. The drawback of such studies is that they rely on a threshold on compression ratio as the fundamental criterion for preserving the quality of images. However, compression ratio is not a useful indicator of image quality. On the other hand, many objective image quality metrics (IQMs) have shown excellent performance for natural image content for consumer electronic applications. In this paper, we create a new synthetic mammogram database with several unique features. We compare and characterize the impact of image compression on several clinically relevant image attributes such as perceived contrast and mass appearance for different kinds of masses. We plan to use this database to develop a new objective IQM for measuring the quality of compressed mammographic images to help determine the allowed maximum compression for different kinds of breasts and masses in terms of visual and diagnostic quality.
Adams, Russell; Quinn, Paul F; Perks, Matthew; Barber, Nicholas J; Jonczyk, Jennine; Owen, Gareth J
2016-12-01
High resolution water quality data have recently become widely available from numerous catchment based monitoring schemes. However, the models that can reproduce time series of concentrations or fluxes have not kept pace with the advances in monitoring data. Model performance at predicting phosphorus (P) and sediment concentrations has frequently been poor, with models not fit for purpose except for predicting annual losses. Here, data from the Eden Demonstration Test Catchments (DTC) project have been used to calibrate the Catchment Runoff Attenuation Flux Tool (CRAFT), a new, parsimonious model developed with the aim of modelling both the generation and attenuation of nutrients and sediments in small to medium sized catchments. The CRAFT runs on an hourly timestep and can calculate the mass of sediments and nutrients transported by three flow pathways representing rapid surface runoff, fast subsurface drainage and slow groundwater flow (baseflow). The attenuation feature of the model is introduced here; this enables surface runoff, and the contaminants it transports, to be delayed in reaching the catchment outlet. It was used to investigate some hypotheses of nutrient and sediment transport in the Newby Beck Catchment (NBC). Model performance was assessed using a suite of metrics including visual best fit and the Nash-Sutcliffe efficiency; it was found that this combined approach may be a better assessment method for water quality models than any single metric. Furthermore, it was found that, when the aim of the simulations was to reproduce the time series of total P (TP) or total reactive P (TRP) with the best visual fit, attenuation was required. The model will be used in the future to explore the impacts on water quality of different mitigation options in the catchment; these will include attenuation of surface runoff. Copyright © 2016 Elsevier B.V. All rights reserved.
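The Nash-Sutcliffe efficiency used in the model assessment has a standard closed form; a minimal sketch:

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 means the
    model is no better than the mean of the observations, and
    negative values mean it is worse than that."""
    obs = np.asarray(observed, dtype=float)
    sim = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Placeholder concentration series, e.g. hourly TP at the outlet.
print(nash_sutcliffe([0.10, 0.35, 0.80, 0.40, 0.15],
                     [0.12, 0.30, 0.70, 0.45, 0.18]))
```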
Gatidis, Sergios; Würslin, Christian; Seith, Ferdinand; Schäfer, Jürgen F; la Fougère, Christian; Nikolaou, Konstantin; Schwenzer, Nina F; Schmidt, Holger
2016-01-01
Optimization of tracer dose regimes in positron emission tomography (PET) imaging is a trade-off between diagnostic image quality and radiation exposure. The challenge lies in defining minimal tracer doses that still result in sufficient diagnostic image quality. In order to find such minimal doses, it would be useful to simulate tracer dose reduction, as this would enable studying the effects of tracer dose reduction on image quality in single patients without repeated injections of different amounts of tracer. The aim of our study was to introduce and validate a method for simulation of low-dose PET images, enabling direct comparison of different tracer doses in single patients and under constant influencing factors. (18)F-fluoride PET data were acquired on a combined PET/magnetic resonance imaging (MRI) scanner. PET data were stored together with the temporal information of the occurrence of single events (list-mode format). A predefined proportion of PET events was then randomly deleted, resulting in undersampled PET data. These data sets were subsequently reconstructed, resulting in simulated low-dose PET images (retrospective undersampling of list-mode data). This approach was validated in phantom experiments by visual inspection and by comparison of the PET quality metrics contrast recovery coefficient (CRC), background variability (BV) and signal-to-noise ratio (SNR) between measured and simulated PET images for different activity concentrations. In addition, reduced-dose PET images of a clinical (18)F-FDG PET dataset were simulated using the proposed approach. (18)F-PET image quality degraded with decreasing activity concentrations, with comparable visual image characteristics in measured and in corresponding simulated PET images. This result was confirmed by quantification of image quality metrics: CRC, SNR and BV showed concordant behavior with decreasing activity concentrations for measured and for corresponding simulated PET images. Simulation of dose-reduced datasets based on clinical (18)F-FDG PET data demonstrated the clinical applicability of the proposed approach. Simulation of PET tracer dose reduction is thus possible with retrospective undersampling of list-mode data. The resulting simulated low-dose images have characteristics equivalent to those of PET images actually measured at lower doses and can be used to derive optimal tracer dose regimes.
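The core of the method, random deletion of a predefined proportion of list-mode events before reconstruction, can be illustrated in a few lines. A sketch with a hypothetical numpy event array standing in for the scanner's list-mode records:

```python
import numpy as np

def undersample_listmode(events, keep_fraction, seed=0):
    """Simulate a reduced tracer dose by randomly deleting list-mode
    events, as the abstract describes. `events` is a hypothetical
    array of recorded coincidence events; keeping e.g. 50% of them
    approximates a half-dose acquisition prior to reconstruction."""
    rng = np.random.default_rng(seed)
    mask = rng.random(len(events)) < keep_fraction
    return events[mask]

# Placeholder: 1,000,000 event records, each with (time, detector pair).
events = np.zeros((1_000_000, 3))
half_dose = undersample_listmode(events, keep_fraction=0.5)
print(len(half_dose))  # roughly 500,000
```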
A guide to calculating habitat-quality metrics to inform conservation of highly mobile species
Bieri, Joanna A.; Sample, Christine; Thogmartin, Wayne E.; Diffendorfer, James E.; Earl, Julia E.; Erickson, Richard A.; Federico, Paula; Flockhart, D. T. Tyler; Nicol, Sam; Semmens, Darius J.; Skraber, T.; Wiederholt, Ruscena; Mattsson, Brady J.
2018-01-01
Many metrics exist for quantifying the relative value of habitats and pathways used by highly mobile species. Properly selecting and applying such metrics requires substantial background in mathematics and understanding the relevant management arena. To address this multidimensional challenge, we demonstrate and compare three measurements of habitat quality: graph-, occupancy-, and demographic-based metrics. Each metric provides insights into system dynamics, at the expense of increasing amounts and complexity of data and models. Our descriptions and comparisons of diverse habitat-quality metrics provide means for practitioners to overcome the modeling challenges associated with management or conservation of such highly mobile species. Whereas previous guidance for applying habitat-quality metrics has been scattered in diversified tracks of literature, we have brought this information together into an approachable format including accessible descriptions and a modeling case study for a typical example that conservation professionals can adapt for their own decision contexts and focal populations.
Considerations for Resource Managers:
Management objectives, proposed actions, data availability and quality, and model assumptions are all relevant considerations when applying and interpreting habitat-quality metrics.
Graph-based metrics answer questions related to habitat centrality and connectivity, are suitable for populations with any movement pattern, quantify basic spatial and temporal patterns of occupancy and movement, and require the least data.
Occupancy-based metrics answer questions about likelihood of persistence or colonization, are suitable for populations that undergo localized extinctions, quantify spatial and temporal patterns of occupancy and movement, and require a moderate amount of data.
Demographic-based metrics answer questions about relative or absolute population size, are suitable for populations with any movement pattern, quantify demographic processes and population dynamics, and require the most data.
More real-world examples applying occupancy-based, agent-based, and continuous-based metrics to seasonally migratory species are needed to better understand challenges and opportunities for applying these metrics more broadly.
Perception and Attention for Visualization
ERIC Educational Resources Information Center
Haroz, Steve
2013-01-01
This work examines how a better understanding of visual perception and attention can impact visualization design. In a collection of studies, I explore how different levels of the visual system can measurably affect a variety of visualization metrics. The results show that expert preference, user performance, and even computational performance are…
NASA Astrophysics Data System (ADS)
Siddiqui, Khan M.; Siegel, Eliot L.; Reiner, Bruce I.; Johnson, Jeffrey P.
2005-04-01
The authors identify a fundamental disconnect between the ways in which industry and radiologists assess and even discuss product performance. What is needed is a quantitative methodology that can assess both subjective image quality and observer task performance. In this study, we propose and evaluate the use of a visual discrimination model (VDM) that assesses just-noticeable differences (JNDs) to serve this purpose. The study compares radiologists' subjective perceptions of image quality of computer tomography (CT) and computed radiography (CR) images with quantitative measures of peak signal-to-noise ratio (PSNR) and JNDs as measured by a VDM. The study included 4 CT and 6 CR studies with compression ratios ranging from lossless to 90:1 (total of 80 sets of images were generated [n = 1,200]). Eleven radiologists reviewed the images and rated them in terms of overall quality and readability and identified images not acceptable for interpretation. Normalized reader scores were correlated with compression, objective PSNR, and mean JND values. Results indicated a significantly higher correlation between observer performance and JND values than with PSNR methods. These results support the use of the VDM as a metric not only for the threshold discriminations for which it was calibrated, but also as a general image quality metric. This VDM is a highly promising, reproducible, and reliable adjunct or even alternative to human observer studies for research or to establish clinical guidelines for image compression, dose reductions, and evaluation of various display technologies.
Hirsch, Irl B; Balo, Andrew K; Sayer, Kevin; Garcia, Arturo; Buckingham, Bruce A; Peyser, Thomas A
2017-06-01
The potential clinical benefits of continuous glucose monitoring (CGM) have been recognized for many years, but CGM is used by a small fraction of patients with diabetes. One obstacle to greater use of the technology is the lack of simplified tools for assessing glycemic control from CGM data without complicated visual displays of data. We developed a simple new metric, the personal glycemic state (PGS), to assess glycemic control solely from continuous glucose monitoring data. PGS is a composite index that assesses four domains of glycemic control: mean glucose, glycemic variability, time in range and frequency and severity of hypoglycemia. The metric was applied to data from six clinical studies for the G4 Platinum continuous glucose monitoring system (Dexcom, San Diego, CA). The PGS was also applied to data from a study of artificial pancreas comparing results from open loop and closed loop in adolescents and in adults. The new metric for glycemic control, PGS, was able to characterize the quality of glycemic control in a wide range of study subjects with various mean glucose, minimal, moderate, and excessive glycemic variability and subjects on open loop versus closed loop control. A new composite metric for the assessment of glycemic control based on CGM data has been defined for use in assessing glycemic control in clinical practice and research settings. The new metric may help rapidly identify problems in glycemic control and may assist with optimizing diabetes therapy during time-constrained physician office visits.
DEVELOPMENT AND APPLICATIONS OF A STANDARD VISUAL INDEX
A standard visual index appropriate for characterizing visibility through uniform hazes, is defined in terms of either of the traditional metrics: visual range or extinction coefficient. This index was designed to be linear with respect to perceived visual changes over its entire...
Rolls, Edmund T; Mills, W Patrick C
2018-05-01
When objects transform into different views, some properties are maintained, such as whether the edges are convex or concave, and these non-accidental properties are likely to be important in view-invariant object recognition. The metric properties, such as the degree of curvature, may change with different views and are less likely to be useful in object recognition. It is shown that in a model of invariant visual object recognition in the ventral visual stream, VisNet, non-accidental properties are encoded much more than metric properties by neurons. Moreover, it is shown how, with temporal trace rule training in VisNet, non-accidental properties of objects become encoded by neurons, and how metric properties are treated invariantly. We also show how VisNet can generalize between different objects if they have the same non-accidental property, because the metric properties are likely to overlap. VisNet is a four-layer unsupervised model of visual object recognition trained by competitive learning that utilizes a temporal trace learning rule to implement the learning of invariance using views that occur close together in time. A second crucial question for this model of object recognition is whether, when neurons in the level corresponding to the inferior temporal visual cortex respond selectively to objects, neurons in the intermediate layers can respond to combinations of features that may be parts of two or more objects. In an investigation using the four sides of a square presented in every possible combination, it was shown that even though different layer 4 neurons are tuned to encode each feature or feature combination orthogonally, neurons in the intermediate layers can respond to features or feature combinations present in several objects. This property is an important part of the way in which high capacity can be achieved in the four-layer ventral visual cortical pathway. These findings concerning non-accidental properties and the use of neurons in intermediate layers of the hierarchy help to emphasise fundamental underlying principles of the computations that may be implemented in the ventral cortical visual stream used in object recognition. Copyright © 2018 Elsevier Inc. All rights reserved.
Influence on Visual Quality of Intraoperative Orientation of Asymmetric Intraocular Lenses.
Bonaque-González, Sergio; Ríos, Susana; Amigó, Alfredo; López-Gil, Norberto
2015-10-01
To evaluate visual quality when changing the intraocular orientation of the Lentis Mplus LS-312MF nonrotationally symmetric +3.00 diopter aspheric multifocal intraocular lens ([IOL] Oculentis GmbH, Berlin, Germany) in normal eyes. An artificial eye was used to measure the in vitro wavefront of the IOL. The corneal topography of 20 healthy patients was obtained. For each eye, a computational analysis simulated implantation of the IOL. The modulation transfer function (MTF) and an image quality parameter (the visually modulated transfer function [VSMTF] metric) were calculated for a 5.0-mm pupil and for three conditions: distance, intermediate, and near vision. The procedure was repeated for each eye after rotating the IOL with respect to the cornea from 0° to 360° in 1° steps. Statistical analysis showed significant differences in mean VSMTF values between orientations for distance vision. Optimal orientation of the IOL (different for each eye) showed a mean improvement of 58% ± 19% (range: 20% to 121%) in VSMTF values with respect to the worst possible orientation. For these orientations, intermediate and near vision quality were statistically indistinguishable. The MTFs differed between orientations, showing a mean difference of approximately 5 cycles per degree in the maximum spatial frequencies that can be transferred between the best and the worst orientations for distance vision. The results suggest that implantation of this nonrotationally symmetric IOL should improve visual outcomes if it is oriented to coincide with a customized meridian. A simple, practical method is proposed to find an approximation to the angle at which an Mplus IOL should be inserted. Copyright 2015, SLACK Incorporated.
Surgical simulation tasks challenge visual working memory and visual-spatial ability differently.
Schlickum, Marcus; Hedman, Leif; Enochsson, Lars; Henningsohn, Lars; Kjellin, Ann; Felländer-Tsai, Li
2011-04-01
New strategies for selection and training of physicians are emerging. Previous studies have demonstrated a correlation between visual-spatial ability and visual working memory with surgical simulator performance. The aim of this study was to perform a detailed analysis on how these abilities are associated with metrics in simulator performance with different task content. The hypothesis is that the importance of visual-spatial ability and visual working memory varies with different task contents. Twenty-five medical students participated in the study that involved testing visual-spatial ability using the MRT-A test and visual working memory using the RoboMemo computer program. Subjects were also trained and tested for performance in three different surgical simulators. The scores from the psychometric tests and the performance metrics were then correlated using multivariate analysis. MRT-A score correlated significantly with the performance metrics Efficiency of screening (p = 0.006) and Total time (p = 0.01) in the GI Mentor II task and Total score (p = 0.02) in the MIST-VR simulator task. In the Uro Mentor task, both the MRT-A score and the visual working memory 3-D cube test score as presented in the RoboMemo program (p = 0.02) correlated with Total score (p = 0.004). In this study we have shown that some differences exist regarding the impact of visual abilities and task content on simulator performance. When designing future cognitive training programs and testing regimes, one might have to consider that the design must be adjusted in accordance with the specific surgical task to be trained in mind.
Defining quality metrics and improving safety and outcome in allergy care.
Lee, Stella; Stachler, Robert J; Ferguson, Berrylin J
2014-04-01
The delivery of allergy immunotherapy in the otolaryngology office is variable and lacks standardization. Quality metrics encompass the measurement of factors associated with good patient-centered care. These factors have yet to be defined in the delivery of allergy immunotherapy. We developed and applied quality metrics to 6 allergy practices affiliated with an academic otolaryngic allergy center. This work was conducted at a tertiary academic center providing care to over 1500 patients. We evaluated methods and variability across the 6 sites. Tracking of errors and anaphylaxis was initiated across all sites. A nationwide survey of academic and private allergists was used to collect data on current practice and use of quality metrics. The most common types of errors recorded were patient identification errors (n = 4), followed by vial mixing errors (n = 3) and dosing errors (n = 2). There were 7 episodes of anaphylaxis, of which 2 were secondary to dosing errors, for a rate of 0.01%, or 1 in every 10,000 injection visits/year. Site visits showed that 86% of key safety measures were followed. Analysis of nationwide survey responses revealed that quality metrics are still not well defined by either medical or otolaryngic allergy practices. Academic practices were statistically more likely than private practices to use quality metrics (p = 0.021) and to perform systems reviews and audits (p = 0.005). Quality metrics in allergy delivery can help improve safety and quality of care. These metrics need to be further defined by otolaryngic allergists in the changing health care environment. © 2014 ARS-AAOA, LLC.
Registering Ground and Satellite Imagery for Visual Localization
2012-08-01
reckoning, inertial, stereo, light detection and ranging (LIDAR), cellular radio, and visual. As no sensor or algorithm provides perfect localization in... by metric localization approaches to confine the region of a map that needs to be searched. Simultaneous Localization and Mapping (SLAM) (5, 6), using... estimate the metric location of the camera. Se et al. (7) use SIFT features for both appearance-based global localization and incremental 3D SLAM. Johns and
Zhang, Wenchao; Zhao, Patrick X
2014-01-01
Extracted ion chromatogram (EIC) extraction and chromatographic peak detection are two important processing procedures in liquid chromatography/mass spectrometry (LC/MS)-based metabolomics data analysis. Most commonly, the LC/MS technique employs electrospray ionization as the ionization method. The EICs from LC/MS data are often noisy and contain high background signals. Furthermore, the chromatographic peak quality varies with respect to its location in the chromatogram and most peaks have zigzag shapes. Therefore, there is a critical need to develop effective metrics for quality evaluation of EICs and chromatographic peaks in LC/MS-based metabolomics data analysis. We investigated a comprehensive set of potential quality evaluation metrics for extracted EICs and detected chromatographic peaks. Specifically, for EIC quality evaluation, we analyzed the mass chromatographic quality index (MCQ index) and propose a novel quality evaluation metric, the EIC-related global zigzag index, which is based on an EIC's first-order derivatives. For chromatographic peak quality evaluation, we analyzed and compared six metrics: sharpness, Gaussian similarity, signal-to-noise ratio, peak significance level, triangle peak area similarity ratio, and the peak-related local zigzag index. Although the MCQ index is suited for selecting and aligning analyte components, it cannot fairly evaluate EICs with high background signals or those containing only a single peak. Our proposed EIC-related global zigzag index is robust enough to evaluate EIC qualities in both scenarios. Of the six peak quality evaluation metrics, the sharpness, peak significance level, and zigzag index outperform the others due to the zigzag nature of LC/MS chromatographic peaks. Furthermore, using several peak quality metrics in combination is more efficient than individual metrics in peak quality evaluation.
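The global zigzag index is defined from the EIC's first-order derivatives; the published normalization is not reproduced here, so the Python sketch below uses one plausible formulation (sign flips of the first difference, scaled by trace length and dynamic range) purely to illustrate the idea.

```python
import numpy as np

def global_zigzag_index(eic):
    # Roughness of an EIC from its first-order differences: sign flips in
    # the derivative mark zigzag oscillations, and flip magnitudes are
    # normalized by trace length and dynamic range (an illustrative
    # normalization, not the published one).
    eic = np.asarray(eic, dtype=float)
    d = np.diff(eic)
    flips = np.where(np.sign(d[:-1]) * np.sign(d[1:]) < 0)[0]
    dyn_range = np.ptp(eic)
    if dyn_range == 0:
        return 0.0
    return float(np.abs(d[flips] - d[flips + 1]).sum() / (len(eic) * dyn_range))

x = np.arange(100)
smooth = np.exp(-0.5 * ((x - 50) / 8) ** 2)                   # clean peak
noisy = smooth + np.random.default_rng(0).normal(0, 0.05, x.size)
print(f"smooth: {global_zigzag_index(smooth):.4f}  "
      f"noisy: {global_zigzag_index(noisy):.4f}")
```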
Development of quality metrics for ambulatory pediatric cardiology: Infection prevention.
Johnson, Jonathan N; Barrett, Cindy S; Franklin, Wayne H; Graham, Eric M; Halnon, Nancy J; Hattendorf, Brandy A; Krawczeski, Catherine D; McGovern, James J; O'Connor, Matthew J; Schultz, Amy H; Vinocur, Jeffrey M; Chowdhury, Devyani; Anderson, Jeffrey B
2017-12-01
In 2012, the American College of Cardiology's (ACC) Adult Congenital and Pediatric Cardiology Council established a program to develop quality metrics to guide ambulatory practices for pediatric cardiology. The council chose five areas on which to focus its efforts: chest pain, Kawasaki disease, tetralogy of Fallot, transposition of the great arteries after arterial switch, and infection prevention. Here, we sought to describe the process, evaluation, and results of the Infection Prevention Committee's metric design process. The infection prevention metrics team consisted of 12 members from 11 institutions in North America. The group agreed to work on specific infection prevention topics including antibiotic prophylaxis for endocarditis, rheumatic fever, and asplenia/hyposplenism; influenza vaccination and respiratory syncytial virus (RSV) prophylaxis (palivizumab); preoperative methods to reduce intraoperative infections; vaccinations after cardiopulmonary bypass; hand hygiene; and testing to identify splenic function in patients with heterotaxy. An extensive literature review was performed. When available, previously published guidelines were used fully in determining metrics. The committee chose eight metrics to submit to the ACC Quality Metric Expert Panel for review. Ultimately, metrics regarding hand hygiene and influenza vaccination recommendations for patients did not pass the RAND analysis. Both endocarditis prophylaxis metrics and the RSV/palivizumab metric passed the RAND analysis but fell out during the open comment period. Three metrics passed all analyses: antibiotic prophylaxis in patients with heterotaxy/asplenia, influenza vaccination compliance in healthcare personnel, and adherence to recommended regimens of secondary prevention of rheumatic fever. The lack of convincing data to guide quality improvement initiatives in pediatric cardiology is widespread, particularly in infection prevention. Despite this, three metrics could be developed for use in the ACC's quality efforts for ambulatory practice. © 2017 Wiley Periodicals, Inc.
Narayan, Anand; Cinelli, Christina; Carrino, John A; Nagy, Paul; Coresh, Josef; Riese, Victoria G; Durand, Daniel J
2015-11-01
As the US health care system transitions toward value-based reimbursement, there is an increasing need for metrics to quantify health care quality. Within radiology, many quality metrics are in use, and still more have been proposed, but there have been limited attempts to systematically inventory these measures and classify them using a standard framework. The purpose of this study was to develop an exhaustive inventory of public and private sector imaging quality metrics classified according to the classic Donabedian framework (structure, process, and outcome). A systematic review was performed in which eligibility criteria included published articles (from 2000 onward) from multiple databases. Studies were double-read, with discrepancies resolved by consensus. For the radiology benefit management group (RBM) survey, the six companies known nationally were surveyed. Outcome measures were organized on the basis of the standard categories (structure, process, and outcome) and reported using Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. The search strategy yielded 1,816 citations; review yielded 110 reports (29 included for final analysis). Three of six RBMs (50%) responded to the survey; the websites of the other RBMs were searched for additional metrics. Seventy-five unique metrics were reported: 35 structure (46%), 20 outcome (27%), and 20 process (27%) metrics. For RBMs, 35 metrics were reported: 27 structure (77%), 4 process (11%), and 4 outcome (11%) metrics. The most commonly cited structure, process, and outcome metrics were ACR accreditation (37%), ACR Appropriateness Criteria (85%), and peer review (95%), respectively. Imaging quality metrics are more likely to be structural (46%) than process (27%) or outcome (27%) based (P < .05). As national value-based reimbursement programs increasingly emphasize outcome-based metrics, radiologists must keep pace by developing the data infrastructure required to collect outcome-based quality metrics. Copyright © 2015 American College of Radiology. Published by Elsevier Inc. All rights reserved.
Evaluating which plan quality metrics are appropriate for use in lung SBRT.
Yaparpalvi, Ravindra; Garg, Madhur K; Shen, Jin; Bodner, William R; Mynampati, Dinesh K; Gafar, Aleiya; Kuo, Hsiang-Chi; Basavatia, Amar K; Ohri, Nitin; Hong, Linda X; Kalnicki, Shalom; Tome, Wolfgang A
2018-02-01
Several dose metrics in the categories of homogeneity, coverage, conformity, and gradient have been proposed in the literature for evaluating treatment plan quality. In this study, we applied these metrics to characterize and identify the plan quality metrics that would merit plan quality assessment in lung stereotactic body radiation therapy (SBRT) dose distributions. Treatment plans of 90 lung SBRT patients, comprising 91 targets, treated in our institution were retrospectively reviewed. Dose calculations were performed using the anisotropic analytical algorithm (AAA) with heterogeneity correction. A literature review of published plan quality metrics in the categories of coverage, homogeneity, conformity, and gradient was performed. For each patient, plan quality metric values were quantified and analyzed using dose-volume histogram data. For the study cohort, the Radiation Therapy Oncology Group (RTOG)-defined plan quality metrics were: coverage (0.90 ± 0.08); homogeneity (1.27 ± 0.07); conformity (1.03 ± 0.07); and gradient (4.40 ± 0.80). Geometric conformity strongly correlated with conformity index (p < 0.0001). Gradient measures strongly correlated with target volume (p < 0.0001). The conformity guidelines for prescribed dose advocated by the RTOG lung SBRT protocol were met in ≥94% of cases in all categories. The proportions of total lung volume receiving doses of 20 Gy and 5 Gy (V20 and V5) were a mean of 4.8% (±3.2) and 16.4% (±9.2), respectively. Based on our analyses, we recommend the following metrics as appropriate surrogates for establishing SBRT lung plan quality guidelines: coverage % (ICRU 62), conformity (CN or CIPaddick), and gradient (R50%). Furthermore, we strongly recommend that RTOG lung SBRT protocols adopt either CN or CIPaddick in place of the prescription isodose to target volume ratio for conformity index evaluation. Advances in knowledge: Our study metrics are valuable tools for establishing lung SBRT plan quality guidelines.
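The conformity and gradient metrics recommended above have standard closed forms: the Paddick conformity number CN = (TV_PIV)^2 / (TV x PIV), and R50% = (volume enclosed by the 50% isodose) / (target volume). A minimal Python sketch with hypothetical volumes:

```python
def paddick_cn(tv, piv, tv_piv):
    # Conformity number = coverage x selectivity
    #                   = (TV_PIV / TV) * (TV_PIV / PIV); 1.0 is ideal.
    return (tv_piv ** 2) / (tv * piv)

def gradient_r50(v50, tv):
    # R50%: volume enclosed by the 50% isodose over target volume;
    # smaller values mean a steeper dose fall-off.
    return v50 / tv

# Worked example with hypothetical volumes in cm^3.
tv, piv, tv_piv, v50 = 25.0, 27.0, 24.0, 110.0
print(f"CN = {paddick_cn(tv, piv, tv_piv):.2f}, "
      f"R50% = {gradient_r50(v50, tv):.1f}")
```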
The role of extra-foveal processing in 3D imaging
NASA Astrophysics Data System (ADS)
Eckstein, Miguel P.; Lago, Miguel A.; Abbey, Craig K.
2017-03-01
The field of medical image quality has relied on the assumption that metrics of image quality for simple visual detection tasks are a reliable proxy for the more clinically realistic visual search tasks. Rank order of signal detectability across conditions often generalizes from detection to search tasks. Here, we argue that search in 3D images represents a paradigm shift in medical imaging: radiologists typically cannot exhaustively scrutinize all regions of interest with the high-acuity fovea, requiring detection of signals with extra-foveal areas (visual periphery) of the human retina. We hypothesize that extra-foveal processing can alter the detectability of certain types of signals in medical images, with important implications for search in 3D medical images. We compare visual search of two different types of signals in 2D vs. 3D images. We show that a small microcalcification-like signal is more highly detectable than a larger mass-like signal in 2D search, but its detectability largely decreases (relative to the larger signal) in the 3D search task. Utilizing measurements of observer detectability as a function of retinal eccentricity and observer eye fixations, we can predict the pattern of results in the 2D and 3D search studies. Our findings: 1) suggest that observer performance findings with 2D search might not always generalize to 3D search; and 2) motivate the development of a new family of model observers that take into account the inhomogeneous visual processing across the retina (foveated model observers).
Colonoscopy Quality: Metrics and Implementation
Calderwood, Audrey H.; Jacobson, Brian C.
2013-01-01
Colonoscopy is an excellent area for quality improvement because it is high volume, has significant associated risk and expense, and there is evidence that variability in its performance affects outcomes. The best endpoint for validation of quality metrics in colonoscopy is colorectal cancer incidence and mortality, but because of feasibility issues, a more readily accessible metric is the adenoma detection rate (ADR). Fourteen quality metrics were proposed by the joint American Society for Gastrointestinal Endoscopy/American College of Gastroenterology Task Force on “Quality Indicators for Colonoscopy” in 2006, which are described in further detail below. Use of electronic health records and quality-oriented registries will facilitate quality measurement and reporting. Unlike traditional clinical research, implementation of quality improvement initiatives involves rapid assessments and changes on an iterative basis, and can be done at the individual, group, or facility level. PMID:23931862
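The ADR has a simple operational definition: the fraction of screening colonoscopies in which at least one adenoma is found. A minimal sketch, with a hypothetical exam series:

```python
def adenoma_detection_rate(exams):
    # exams: iterable of adenoma counts, one entry per screening colonoscopy.
    exams = list(exams)
    return sum(1 for n in exams if n >= 1) / len(exams)

# Hypothetical series: 7 of 20 screening exams found >= 1 adenoma -> ADR 0.35.
counts = [0, 1, 0, 0, 2, 0, 1, 0, 0, 0, 1, 0, 3, 0, 0, 1, 0, 1, 0, 0]
print(f"ADR = {adenoma_detection_rate(counts):.2f}")
```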
A software quality model and metrics for risk assessment
NASA Technical Reports Server (NTRS)
Hyatt, L.; Rosenberg, L.
1996-01-01
A software quality model and its associated attributes are defined and used as the basis for a discussion of risk. Specific quality goals and attributes are selected based on their importance to a software development project and their ability to be quantified. Risks that can be determined by the model's metrics are identified. A core set of metrics relating to the software development process and its products is defined. Measurements for each metric and their usability and applicability are discussed.
MO-A-16A-01: QA Procedures and Metrics: In Search of QA Usability
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sathiaseelan, V; Thomadsen, B
Radiation therapy has undergone considerable changes in the past two decades with a surge of new technology and treatment delivery methods. The complexity of radiation therapy treatments has increased, and there has been increased awareness and publicity about the associated risks. In response, there has been a proliferation of guidelines for medical physicists to adopt to ensure that treatments are delivered safely. Task Group recommendations are copious, and clinical physicists' hours are longer, stretched to various degrees between site planning and management, IT support, physics QA, and treatment planning responsibilities. Radiation oncology has many quality control practices in place to ensure the delivery of high-quality, safe treatments. Incident reporting systems have been developed to collect statistics about near-miss events at many radiation oncology centers. However, tools are lacking to assess the impact of these various control measures. A recent effort to address this shortcoming is the work of Ford et al (2012), who published a methodology enumerating quality control quantification for measuring the effectiveness of safety barriers. Over 4000 near-miss incidents reported from 2 academic radiation oncology clinics were analyzed using quality control quantification, and a profile of the most effective quality control measures (metrics) was identified. There is a critical need to identify QA metrics that help busy clinical physicists focus their limited time and resources most effectively in order to minimize or eliminate errors in the radiation treatment delivery process. In this symposium, the usefulness of workflows and QA metrics to assure safe and high-quality patient care will be explored. Two presentations will be given: Quality Metrics and Risk Management with High-Risk Radiation Oncology Procedures; and Strategies and Metrics for Quality Management in the TG-100 Era. Learning Objectives: Provide an overview of, and the need for, QA usability metrics, including different cultures/practices affecting the effectiveness of methods and metrics. Show examples of quality assurance workflows (e.g., statistical process control) that monitor the treatment planning and delivery process to identify errors. Learn to identify and prioritize risks and QA procedures in radiation oncology. Try to answer the question: can a quality assurance program aided by quality assurance metrics help minimize errors and ensure safe treatment delivery, and should such metrics be institution specific?
The data quality analyzer: a quality control program for seismic data
Ringler, Adam; Hagerty, M.T.; Holland, James F.; Gonzales, A.; Gee, Lind S.; Edwards, J.D.; Wilson, David; Baker, Adam
2015-01-01
The quantification of data quality is based on the evaluation of various metrics (e.g., timing quality, daily noise levels relative to long-term noise models, and comparisons between broadband data and event synthetics). Users may select which metrics contribute to the assessment and those metrics are aggregated into a “grade” for each station. The DQA is being actively used for station diagnostics and evaluation based on the completed metrics (availability, gap count, timing quality, deviation from a global noise model, deviation from a station noise model, coherence between co-located sensors, and comparison between broadband data and synthetics for earthquakes) on stations in the Global Seismographic Network and Advanced National Seismic System.
A condition metric for Eucalyptus woodland derived from expert evaluations.
Sinclair, Steve J; Bruce, Matthew J; Griffioen, Peter; Dodd, Amanda; White, Matthew D
2018-02-01
The evaluation of ecosystem quality is important for land-management and land-use planning. Evaluation is unavoidably subjective, and robust metrics must be based on consensus and the structured use of observations. We devised a transparent and repeatable process for building and testing ecosystem metrics based on expert data. We gathered quantitative evaluation data on the quality of hypothetical grassy woodland sites from experts. We used these data to train a model (an ensemble of 30 bagged regression trees) capable of predicting the perceived quality of similar hypothetical woodlands based on a set of 13 site variables as inputs (e.g., cover of shrubs, richness of native forbs). These variables can be measured at any site and the model implemented in a spreadsheet as a metric of woodland quality. We also investigated the number of experts required to produce an opinion data set sufficient for the construction of a metric. The model produced evaluations similar to those provided by experts, as shown by assessing the model's quality scores of expert-evaluated test sites not used to train the model. We applied the metric to 13 woodland conservation reserves and asked managers of these sites to independently evaluate their quality. To assess metric performance, we compared the model's evaluation of site quality with the managers' evaluations through multidimensional scaling. The metric performed relatively well, plotting close to the center of the space defined by the evaluators. Given the method provides data-driven consensus and repeatability, which no single human evaluator can provide, we suggest it is a valuable tool for evaluating ecosystem quality in real-world contexts. We believe our approach is applicable to any ecosystem. © 2017 State of Victoria.
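A minimal sketch of the modeling step, assuming scikit-learn and synthetic stand-ins for the expert scores and the 13 site variables (the paper's actual training data are expert evaluations of hypothetical sites):

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(42)
X = rng.uniform(0, 1, size=(500, 13))    # 13 site variables (e.g., shrub cover)
y = X @ rng.uniform(0.2, 1.0, 13) + rng.normal(0, 0.1, 500)  # stand-in scores

# Ensemble of 30 bagged regression trees, mirroring the paper's model size.
model = BaggingRegressor(DecisionTreeRegressor(), n_estimators=30, random_state=0)
model.fit(X, y)
print("predicted quality of one new site:", model.predict(X[:1])[0])
```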
Acton, Jennifer H; Molik, Bablin; Binns, Alison; Court, Helen; Margrain, Tom H
2016-02-24
Visual Rehabilitation Officers help people with a visual impairment maintain their independence. This intervention adopts a flexible, goal-centred approach, which may include training in mobility, use of optical and non-optical aids, and performance of activities of daily living. Although Visual Rehabilitation Officers are an integral part of the low vision service in the United Kingdom, evidence that they are effective is lacking. The purpose of this exploratory trial is to estimate the impact of a Visual Rehabilitation Officer on self-reported visual function, psychosocial, and quality-of-life outcomes in individuals with low vision. In this exploratory, assessor-masked, parallel group, randomised controlled trial, participants will be allocated either to receive home visits from a Visual Rehabilitation Officer (n = 30) or to a waiting list control group (n = 30) in a 1:1 ratio. Adult volunteers with a visual impairment, who have been identified as needing rehabilitation officer input by a social worker, will take part. Those with an urgent need for a Visual Rehabilitation Officer or who have a cognitive impairment will be excluded. The primary outcome measure will be self-reported visual function (48-item Veterans Affairs Low Vision Visual Functioning Questionnaire). Secondary outcome measures will include psychological and quality-of-life metrics: the Patient Health Questionnaire (PHQ-9), the Warwick-Edinburgh Mental Well-being Scale (WEMWBS), the Adjustment to Age-related Visual Loss Scale (AVL-12), the Standardised Health-related Quality of Life Questionnaire (EQ-5D) and the UCLA Loneliness Scale. The interviewer collecting the outcomes will be masked to the group allocations. The analysis will be undertaken on a complete case and intention-to-treat basis. Analysis of covariance (ANCOVA) will be applied to follow-up questionnaire scores, with the baseline score as a covariate. This trial is expected to provide robust effect size estimates of the intervention effect. The data will be used to design a large-scale randomised controlled trial to evaluate fully the Visual Rehabilitation Officer intervention. A rigorous evaluation of Rehabilitation Officer input is vital to direct a future low vision rehabilitation strategy and to help direct government resources. The trial was registered (ISRCTN44807874) on 9 March 2015.
Geographic techniques and recent applications of remote sensing to landscape-water quality studies
Griffith, J.A.
2002-01-01
This article reviews recent advances in studies of landscape-water quality relationships using remote sensing techniques. With the increasing feasibility of using remotely sensed data, landscape-water quality studies can now be more easily performed on regional, multi-state scales. The traditional method of relating land use and land cover to water quality has been extended to include landscape pattern and other landscape information derived from satellite data. Three items are focused on in this article: 1) the increasing recognition of the importance of larger-scale studies of regional water quality that require a landscape perspective; 2) the increasing importance of remotely sensed data, such as the imagery-derived normalized difference vegetation index (NDVI) and vegetation phenological metrics derived from time-series NDVI data; and 3) landscape pattern. In some studies, using landscape pattern metrics explained some of the variation in water quality not explained by land use/cover. However, in some other studies, the NDVI metrics were even more highly correlated with certain water quality parameters than either landscape pattern metrics or land use/cover proportions. Although studies relating landscape pattern metrics to water quality have had mixed results, this recent body of work applying these landscape measures and satellite-derived metrics to water quality analysis has demonstrated their potential usefulness in monitoring watershed conditions across large regions.
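NDVI itself has a fixed closed form, (NIR - Red) / (NIR + Red). A minimal sketch with hypothetical reflectance values:

```python
import numpy as np

def ndvi(nir, red):
    # Normalized difference vegetation index: (NIR - Red) / (NIR + Red).
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / (nir + red + 1e-9)   # epsilon guards zero pixels

# Hypothetical reflectance pixels: dense vegetation vs. bare soil.
print(ndvi([0.45, 0.30], [0.05, 0.25]))       # approx [0.80, 0.09]
```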
Route visualization using detail lenses.
Karnick, Pushpak; Cline, David; Jeschke, Stefan; Razdan, Anshuman; Wonka, Peter
2010-01-01
We present a method designed to address some limitations of typical route map displays of driving directions. The main goal of our system is to generate a printable version of a route map that shows the overview and detail views of the route within a single, consistent visual frame. Our proposed visualization provides a more intuitive spatial context than a simple list of turns. We present a novel multifocus technique to achieve this goal, where the foci are defined by points of interest (POI) along the route. A detail lens that encapsulates the POI at a finer geospatial scale is created for each focus. The lenses are laid out on the map to avoid occlusion with the route and each other, and to optimally utilize the free space around the route. We define a set of layout metrics to evaluate the quality of a lens layout for a given route map visualization. We compare standard lens layout methods to our proposed method and demonstrate the effectiveness of our method in generating aesthetically pleasing layouts. Finally, we perform a user study to evaluate the effectiveness of our layout choices.
Operational Support for Instrument Stability through ODI-PPA Metadata Visualization and Analysis
NASA Astrophysics Data System (ADS)
Young, M. D.; Hayashi, S.; Gopu, A.; Kotulla, R.; Harbeck, D.; Liu, W.
2015-09-01
Over long time scales, quality assurance metrics taken from calibration and calibrated data products can aid observatory operations in quantifying the performance and stability of the instrument, and identify potential areas of concern or guide troubleshooting and engineering efforts. Such methods traditionally require manual SQL entries, assuming the requisite metadata has even been ingested into a database. With the ODI-PPA system, QA metadata has been harvested and indexed for all data products produced over the life of the instrument. In this paper we will describe how, utilizing the industry standard Highcharts Javascript charting package with a customized AngularJS-driven user interface, we have made the process of visualizing the long-term behavior of these QA metadata simple and easily replicated. Operators can easily craft a custom query using the powerful and flexible ODI-PPA search interface and visualize the associated metadata in a variety of ways. These customized visualizations can be bookmarked, shared, or embedded externally, and will be dynamically updated as new data products enter the system, enabling operators to monitor the long-term health of their instrument with ease.
Bellucci, Christopher J; Becker, Mary E; Beauchene, Mike; Dunbar, Lee
2013-06-01
Bioassessments have formed the foundation of many water quality monitoring programs throughout the United States. Like many state water quality programs, Connecticut has developed a relational database containing information about species richness, species composition, relative abundance, and feeding relationships among macroinvertebrates present in stream and river systems. Geographic Information Systems can provide estimates of landscape condition and watershed characteristics and, when combined with measurements of stream biology, provide a useful visual display of information in a management context. The objective of our study was to estimate stream health for all wadeable stream kilometers in Connecticut using a combination of macroinvertebrate metrics and landscape variables. We developed and evaluated models using an information-theoretic approach to predict stream health as measured by a macroinvertebrate multimetric index (MMI), and identified the best-fitting model as a three-variable model comprising percent impervious land cover, a wetlands metric, and catchment slope (adj-R² = 0.56, SE = 11.73). We then provide examples of how modeling can augment existing programs to support water management policies under the Federal Clean Water Act, such as stream assessments and anti-degradation.
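A minimal sketch of fitting a three-variable linear model of this kind, assuming scikit-learn and synthetic stand-in data (the reported adj-R² = 0.56 comes from the authors' Connecticut data, not from this toy example):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 200
impervious = rng.uniform(0, 40, n)   # % impervious land cover
wetlands = rng.uniform(0, 1, n)      # wetlands metric
slope = rng.uniform(0, 15, n)        # catchment slope
mmi = 80 - 1.2 * impervious + 10 * wetlands + 0.8 * slope + rng.normal(0, 8, n)

X = np.column_stack([impervious, wetlands, slope])
fit = LinearRegression().fit(X, mmi)
print("R^2 =", round(fit.score(X, mmi), 2))   # analogous to the reported adj-R^2
```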
VizioMetrics: Mining the Scientific Visual Literature
ERIC Educational Resources Information Center
Lee, Po-Shen
2017-01-01
Scientific results are communicated visually in the literature through diagrams, visualizations, and photographs. In this thesis, we developed a figure processing pipeline to classify more than 8 million figures from PubMed Central into different figure types and study the resulting patterns of visual information as they relate to scholarly…
Information-Theoretic Metrics for Visualizing Gene-Environment Interactions
Chanda, Pritam; Zhang, Aidong; Brazeau, Daniel; Sucheston, Lara; Freudenheim, Jo L.; Ambrosone, Christine; Ramanathan, Murali
2007-01-01
The purpose of our work was to develop heuristics for visualizing and interpreting gene-environment interactions (GEIs) and to assess the dependence of candidate visualization metrics on biological and study-design factors. Two information-theoretic metrics, the k-way interaction information (KWII) and the total correlation information (TCI), were investigated. The effectiveness of the KWII and TCI to detect GEIs in a diverse range of simulated data sets and a Crohn disease data set was assessed. The sensitivity of the KWII and TCI spectra to biological and study-design variables was determined. Head-to-head comparisons with the relevance-chain, multifactor dimensionality reduction, and the pedigree disequilibrium test (PDT) methods were obtained. The KWII and TCI spectra, which are graphical summaries of the KWII and TCI for each subset of environmental and genotype variables, were found to detect each known GEI in the simulated data sets. The patterns in the KWII and TCI spectra were informative for factors such as case-control misassignment, locus heterogeneity, allele frequencies, and linkage disequilibrium. The KWII and TCI spectra were found to have excellent sensitivity for identifying the key disease-associated genetic variations in the Crohn disease data set. In head-to-head comparisons with the relevance-chain, multifactor dimensionality reduction, and PDT methods, the results from visual interpretation of the KWII and TCI spectra performed satisfactorily. The KWII and TCI are promising metrics for visualizing GEIs. They are capable of detecting interactions among numerous single-nucleotide polymorphisms and environmental variables for a diverse range of GEI models. PMID:17924337
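Both metrics reduce to signed sums of joint entropies: TCI = sum_i H(X_i) - H(X_1, ..., X_n), and KWII(S) = -sum over subsets T of S of (-1)^(|S|-|T|) H(T). The Python below computes plug-in estimates for discrete variables; the synthetic genotype/phenotype data are an assumption for illustration.

```python
import numpy as np
from itertools import combinations

def entropy(*cols):
    # Plug-in joint entropy (bits) of one or more discrete variables.
    _, counts = np.unique(np.column_stack(cols), axis=0, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def tci(cols):
    # Total correlation: sum of marginal entropies minus the joint entropy.
    return sum(entropy(c) for c in cols) - entropy(*cols)

def kwii(cols):
    # k-way interaction information as an alternating sum of joint
    # entropies: KWII(S) = -sum_{T <= S} (-1)^(|S|-|T|) H(T).
    k = len(cols)
    return -sum((-1) ** (k - r) * entropy(*sub)
                for r in range(1, k + 1)
                for sub in combinations(cols, r))

rng = np.random.default_rng(0)
g1 = rng.integers(0, 3, 2000)        # genotype at locus 1
g2 = rng.integers(0, 3, 2000)        # genotype at locus 2
phen = (g1 + g2) % 3                 # synthetic interaction phenotype
print(f"TCI  = {tci([g1, g2, phen]):.3f} bits")
print(f"KWII = {kwii([g1, g2, phen]):.3f} bits")
```

For two variables the KWII reduces to the familiar mutual information H(X) + H(Y) - H(X,Y), which is a quick sanity check on the sign convention.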
The Albuquerque Seismological Laboratory Data Quality Analyzer
NASA Astrophysics Data System (ADS)
Ringler, A. T.; Hagerty, M.; Holland, J.; Gee, L. S.; Wilson, D.
2013-12-01
The U.S. Geological Survey's Albuquerque Seismological Laboratory (ASL) has several efforts underway to improve data quality at its stations. The Data Quality Analyzer (DQA) is one such development. The DQA is designed to characterize station data quality in a quantitative and automated manner. Station quality is based on the evaluation of various metrics, such as timing quality, noise levels, sensor coherence, and so on. These metrics are aggregated into a measurable grade for each station. The DQA consists of a website, a metric calculator (Seedscan), and a PostgreSQL database. The website allows the user to make requests for various time periods, review specific networks and stations, adjust weighting of the station's grade, and plot metrics as a function of time. The website dynamically loads all station data from a PostgreSQL database. The database is central to the application; it acts as a hub where metric values and limited station descriptions are stored. Data is stored at the level of one sensor's channel per day. The database is populated by Seedscan. Seedscan reads and processes miniSEED data, to generate metric values. Seedscan, written in Java, compares hashes of metadata and data to detect changes and perform subsequent recalculations. This ensures that the metric values are up to date and accurate. Seedscan can be run in a scheduled task or on demand by way of a config file. It will compute metrics specified in its configuration file. While many metrics are currently in development, some are completed and being actively used. These include: availability, timing quality, gap count, deviation from the New Low Noise Model, deviation from a station's noise baseline, inter-sensor coherence, and data-synthetic fits. In all, 20 metrics are planned, but any number could be added. ASL is actively using the DQA on a daily basis for station diagnostics and evaluation. As Seedscan is scheduled to run every night, data quality analysts are able to then use the website to diagnose changes in noise levels or other anomalous data. This allows for errors to be corrected quickly and efficiently. The code is designed to be flexible for adding metrics and portable for use in other networks. We anticipate further development of the DQA by improving the existing web-interface, adding more metrics, adding an interface to facilitate the verification of historic station metadata and performance, and an interface to allow better monitoring of data quality goals.
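A minimal sketch of the grade aggregation, assuming each metric is already scaled to a 0-100 score and that the operator-adjustable weighting is a simple weighted mean (the DQA's actual scaling and weighting scheme is not specified here):

```python
def station_grade(metric_values, weights):
    # Weighted average of per-metric scores (each scaled 0-100).
    # metric_values / weights: dicts keyed by metric name; metrics the
    # operator down-weights to 0 simply drop out of the grade.
    total_w = sum(weights[m] for m in metric_values)
    return sum(metric_values[m] * weights[m] for m in metric_values) / total_w

metrics = {"availability": 99.2, "timing_quality": 97.5,
           "gap_count": 92.0, "noise_model_deviation": 88.4}
weights = {"availability": 2.0, "timing_quality": 1.0,
           "gap_count": 1.0, "noise_model_deviation": 1.5}
print(f"station grade: {station_grade(metrics, weights):.1f}")
```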
Visual tuning and metrical perception of realistic point-light dance movements.
Su, Yi-Huang
2016-03-07
Humans move to music spontaneously, and this sensorimotor coupling underlies musical rhythm perception. The present research proposed that, based on common action representation, different metrical levels as in auditory rhythms could emerge visually when observing structured dance movements. Participants watched a point-light figure performing basic steps of Swing dance cyclically in different tempi, whereby the trunk bounced vertically at every beat and the limbs moved laterally at every second beat, yielding two possible metrical periodicities. In Experiment 1, participants freely identified a tempo of the movement and tapped along. While some observers only tuned to the bounce and some only to the limbs, the majority tuned to one level or the other depending on the movement tempo, which was also associated with individuals' preferred tempo. In Experiment 2, participants reproduced the tempo of leg movements by four regular taps, and showed a slower perceived leg tempo with than without the trunk bouncing simultaneously in the stimuli. This mirrors previous findings of an auditory 'subdivision effect', suggesting the leg movements were perceived as beat while the bounce as subdivisions. Together these results support visual metrical perception of dance movements, which may employ similar action-based mechanisms to those underpinning auditory rhythm perception.
Strand, Julia F
2014-03-01
A widely agreed-upon feature of spoken word recognition is that multiple lexical candidates in memory are simultaneously activated in parallel when a listener hears a word, and that those candidates compete for recognition (Luce, Goldinger, Auer, & Vitevitch, Perception & Psychophysics 62:615-625, 2000; Luce & Pisoni, Ear and Hearing 19:1-36, 1998; McClelland & Elman, Cognitive Psychology 18:1-86, 1986). Because the presence of those competitors influences word recognition, much research has sought to quantify the processes of lexical competition. Metrics that quantify lexical competition continuously are more effective predictors of auditory and visual (lipread) spoken word recognition than the categorical metrics traditionally used (Feld & Sommers, Speech Communication 53:220-228, 2011; Strand & Sommers, Journal of the Acoustical Society of America 130:1663-1672, 2011). A limitation of the continuous metrics is that they are somewhat computationally cumbersome and require access to existing speech databases. This article describes the Phi-square Lexical Competition Database (Phi-Lex): an online, searchable database that provides access to multiple metrics of auditory and visual (lipread) lexical competition for English words, available at www.juliastrand.com/phi-lex.
Image quality evaluation of full reference algorithm
NASA Astrophysics Data System (ADS)
He, Nannan; Xie, Kai; Li, Tong; Ye, Yushan
2018-03-01
Image quality evaluation is a classic research topic; the goal is to design algorithms whose evaluation values are consistent with subjective human judgments. This paper mainly introduces several typical full-reference methods: Mean Squared Error (MSE), Peak Signal-to-Noise Ratio (PSNR), Structural Similarity (SSIM), and Feature Similarity (FSIM). The different evaluation methods are tested in Matlab, and their advantages and disadvantages are obtained by analysis and comparison. MSE and PSNR are simple, but they do not incorporate characteristics of the human visual system (HVS) into image quality evaluation, so their results are not ideal. SSIM correlates well with subjective judgments and is simple to compute because it incorporates human visual characteristics into the evaluation; however, the SSIM method is based on a hypothesis, so its results are limited. The FSIM method can be used to test both grayscale and color images, with better results. Experimental results show that the image quality evaluation algorithm based on FSIM is the most accurate.
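The two simplest full-reference metrics named above have closed forms: MSE is the mean squared pixel difference and PSNR = 10 * log10(MAX^2 / MSE). A minimal NumPy sketch (SSIM and FSIM are omitted; in practice they are usually taken from an image-processing library):

```python
import numpy as np

def mse(ref, test):
    # Mean squared error between a reference and a test image.
    return np.mean((ref.astype(float) - test.astype(float)) ** 2)

def psnr(ref, test, peak=255.0):
    # PSNR in dB; higher means the test image is closer to the reference.
    m = mse(ref, test)
    return np.inf if m == 0 else 10 * np.log10(peak ** 2 / m)

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (64, 64)).astype(np.uint8)
noisy = np.clip(ref + rng.normal(0, 5, ref.shape), 0, 255).astype(np.uint8)
print(f"MSE = {mse(ref, noisy):.1f}, PSNR = {psnr(ref, noisy):.1f} dB")
```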
Software metrics: The key to quality software on the NCC project
NASA Technical Reports Server (NTRS)
Burns, Patricia J.
1993-01-01
Network Control Center (NCC) Project metrics are captured during the implementation and testing phases of the NCCDS software development lifecycle. The metrics data collection and reporting function has interfaces with all elements of the NCC project. Close collaboration with all project elements has resulted in the development of a defined and repeatable set of metrics processes. The resulting data are used to plan and monitor release activities on a weekly basis. The use of graphical outputs facilitates the interpretation of progress and status. The successful application of metrics throughout the NCC project has been instrumental in the delivery of quality software. The use of metrics on the NCC Project supports the needs of the technical and managerial staff. This paper describes the project, the functions supported by metrics, the data that are collected and reported, how the data are used, and the improvements in the quality of deliverable software since the metrics processes and products have been in use.
Applying Sigma Metrics to Reduce Outliers.
Litten, Joseph
2017-03-01
Sigma metrics can be used to predict assay quality, allowing easy comparison of instrument quality and predicting which tests will require minimal quality control (QC) rules to monitor the performance of the method. A Six Sigma QC program can result in fewer controls and fewer QC failures for methods with a sigma metric of 5 or better. The higher the number of methods with a sigma metric of 5 or better, the lower the costs for reagents, supplies, and control material required to monitor the performance of the methods. Copyright © 2016 Elsevier Inc. All rights reserved.
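The sigma metric itself is computed from the allowable total error (TEa), bias, and imprecision (CV), all in percent: sigma = (TEa - |bias|) / CV. A worked sketch with hypothetical assay values:

```python
def sigma_metric(tea_pct, bias_pct, cv_pct):
    # Sigma = (allowable total error - |bias|) / imprecision, all in %.
    return (tea_pct - abs(bias_pct)) / cv_pct

# Hypothetical assay: TEa 10%, bias 1.5%, CV 1.6% -> sigma ~5.3,
# i.e., a candidate for a minimal QC rule set per the abstract.
print(f"sigma = {sigma_metric(10.0, 1.5, 1.6):.1f}")
```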
Poisson, Sharon N.; Josephson, S. Andrew
2011-01-01
Stroke is a major public health burden and accounts for many hospitalizations each year. Due to gaps between practice and recommended guidelines, there has been a recent push toward implementing quality measures to be used for improving patient care, comparing institutions, and rewarding or penalizing physicians through pay-for-performance. This article reviews the major organizations involved in implementing quality metrics for stroke and the 10 major metrics currently being tracked. We also discuss possible future metrics and the implications of public reporting and using metrics for pay-for-performance. PMID:23983840
Tee, James J L; Yang, Yesa; Kalitzeos, Angelos; Webster, Andrew; Bainbridge, James; Weleber, Richard G; Michaelides, Michel
2018-05-01
To characterize bilateral visual function, interocular variability, and progression by using static perimetry-derived volumetric and pointwise metrics in subjects with retinitis pigmentosa associated with mutations in the retinitis pigmentosa GTPase regulator (RPGR) gene. This was a prospective longitudinal observational study of 47 genetically confirmed subjects. Visual function was assessed with ETDRS and Pelli-Robson charts and with Octopus 900 static perimetry using a customized, radially oriented 185-point grid. Three-dimensional hill-of-vision topographic models were produced and interrogated with the Visual Field Modeling and Analysis software to obtain three volumetric metrics: VTotal, V30, and V5. These were analyzed together with Octopus mean sensitivity values. Interocular differences were assessed with the Bland-Altman method. Metric-specific exponential decline rates were calculated. Baseline symmetry was demonstrated by relative interocular difference values of 1% for VTotal and 8% for V30. The degree of symmetry varied between subjects and was quantified with the subject percentage interocular difference (SPID), which was 16% for VTotal and 17% for V30. Interocular symmetry in progression was greatest when quantified by VTotal and V30, with 73% and 64% of subjects, respectively, having interocular rate differences smaller in magnitude than the respective annual progression rates. Functional decline was evident with increasing age, and an overall annual exponential decline of 6% was evident with both VTotal and V30. In general, good interocular symmetry exists; however, there was variation both between subjects and with the use of different metrics. Our findings will guide patient selection and design of RPGR treatment trials, and provide clinicians with specific prognostic information to offer patients affected by this condition.
NASA Astrophysics Data System (ADS)
Melli, S. Ali; Wahid, Khan A.; Babyn, Paul; Cooper, David M. L.; Gopi, Varun P.
2016-12-01
Synchrotron X-ray Micro Computed Tomography (Micro-CT) is an imaging technique which is increasingly used for non-invasive in vivo preclinical imaging. However, it often requires a large number of projections from many different angles to reconstruct high-quality images leading to significantly high radiation doses and long scan times. To utilize this imaging technique further for in vivo imaging, we need to design reconstruction algorithms that reduce the radiation dose and scan time without reduction of reconstructed image quality. This research is focused on using a combination of gradient-based Douglas-Rachford splitting and discrete wavelet packet shrinkage image denoising methods to design an algorithm for reconstruction of large-scale reduced-view synchrotron Micro-CT images with acceptable quality metrics. These quality metrics are computed by comparing the reconstructed images with a high-dose reference image reconstructed from 1800 equally spaced projections spanning 180°. Visual and quantitative-based performance assessment of a synthetic head phantom and a femoral cortical bone sample imaged in the biomedical imaging and therapy bending magnet beamline at the Canadian Light Source demonstrates that the proposed algorithm is superior to the existing reconstruction algorithms. Using the proposed reconstruction algorithm to reduce the number of projections in synchrotron Micro-CT is an effective way to reduce the overall radiation dose and scan time which improves in vivo imaging protocols.
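The wavelet-shrinkage ingredient can be sketched independently of the full Douglas-Rachford iteration. The Python below soft-thresholds detail coefficients with PyWavelets; it uses a plain wavelet decomposition rather than the paper's wavelet packets, and the phantom, wavelet choice, and threshold rule are illustrative assumptions:

```python
import numpy as np
import pywt

def wavelet_shrink(img, wavelet="db4", level=3, thresh=0.04):
    # Soft-threshold the detail coefficients at every scale; a stand-in
    # for the paper's discrete wavelet *packet* shrinkage denoising step.
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    shrunk = [coeffs[0]] + [
        tuple(pywt.threshold(d, thresh * np.abs(d).max(), mode="soft")
              for d in band)
        for band in coeffs[1:]
    ]
    return pywt.waverec2(shrunk, wavelet)

rng = np.random.default_rng(0)
phantom = np.zeros((128, 128))
phantom[32:96, 32:96] = 1.0                       # simple square phantom
noisy = phantom + rng.normal(0, 0.1, phantom.shape)
denoised = wavelet_shrink(noisy)
print("residual RMS:", np.sqrt(np.mean((denoised - phantom) ** 2)))
```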
QualityML: a dictionary for quality metadata encoding
NASA Astrophysics Data System (ADS)
Ninyerola, Miquel; Sevillano, Eva; Serral, Ivette; Pons, Xavier; Zabala, Alaitz; Bastin, Lucy; Masó, Joan
2014-05-01
The scenario of rapidly growing geodata catalogues requires tools focused on facilitate users the choice of products. Having quality fields populated in metadata allow the users to rank and then select the best fit-for-purpose products. In this direction, we have developed the QualityML (http://qualityml.geoviqua.org), a dictionary that contains hierarchically structured concepts to precisely define and relate quality levels: from quality classes to quality measurements. Generically, a quality element is the path that goes from the higher level (quality class) to the lowest levels (statistics or quality metrics). This path is used to encode quality of datasets in the corresponding metadata schemas. The benefits of having encoded quality, in the case of data producers, are related with improvements in their product discovery and better transmission of their characteristics. In the case of data users, particularly decision-makers, they would find quality and uncertainty measures to take the best decisions as well as perform dataset intercomparison. Also it allows other components (such as visualization, discovery, or comparison tools) to be quality-aware and interoperable. On one hand, the QualityML is a profile of the ISO geospatial metadata standards providing a set of rules for precisely documenting quality indicator parameters that is structured in 6 levels. On the other hand, QualityML includes semantics and vocabularies for the quality concepts. Whenever possible, if uses statistic expressions from the UncertML dictionary (http://www.uncertml.org) encoding. However it also extends UncertML to provide list of alternative metrics that are commonly used to quantify quality. A specific example, based on a temperature dataset, is shown below. The annual mean temperature map has been validated with independent in-situ measurements to obtain a global error of 0.5 ° C. Level 0: Quality class (e.g., Thematic accuracy) Level 1: Quality indicator (e.g., Quantitative attribute correctness) Level 2: Measurement field (e.g., DifferentialErrors1D) Level 3: Statistic or Metric (e.g., Half-lengthConfidenceInterval) Level 4: Units (e.g. Celsius degrees) Level 5: Value (e.g.0.5) Level 6: Specifications. Additional information on how the measurement took place, citation of the reference data, the traceability of the process and a publication describing the validation process encoded using new 19157 elements or the GeoViQua (http://www.geoviqua.org) Quality Model (PQM-UQM) extensions to the ISO models. Finally, keep in mind, that QualityML is not just suitable for encoding dataset level but also considers pixel and object level uncertainties. This is done by link the metadata quality descriptions with layers representing not just the data but the uncertainty values associated with each geospatial element.
Kwon, Oh-Hyun; Crnovrsanin, Tarik; Ma, Kwan-Liu
2018-01-01
Using different methods for laying out a graph can lead to very different visual appearances, with which the viewer perceives different information. Selecting a "good" layout method is thus important for visualizing a graph. The selection can be highly subjective and dependent on the given task. A common approach to selecting a good layout is to use aesthetic criteria and visual inspection. However, fully calculating various layouts and their associated aesthetic metrics is computationally expensive. In this paper, we present a machine learning approach to large graph visualization based on computing the topological similarity of graphs using graph kernels. For a given graph, our approach can show what the graph would look like in different layouts and estimate their corresponding aesthetic metrics. An important contribution of our work is the development of a new framework to design graph kernels. Our experimental study shows that our estimation calculation is considerably faster than computing the actual layouts and their aesthetic metrics. Also, our graph kernels outperform the state-of-the-art ones in both time and accuracy. In addition, we conducted a user study to demonstrate that the topological similarity computed with our graph kernel matches perceptual similarity assessed by human users.
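The paper develops its own graph kernels; as a representative of the family, the sketch below implements a standard Weisfeiler-Lehman subtree kernel over networkx graphs (initial labels from node degree, an illustrative choice):

```python
import networkx as nx
from collections import Counter

def wl_features(G, iterations=3):
    # Weisfeiler-Lehman subtree features: iteratively relabel each node
    # by concatenating its label with the sorted labels of its neighbors.
    labels = {v: str(G.degree(v)) for v in G}
    feats = Counter(labels.values())
    for _ in range(iterations):
        labels = {v: labels[v] + "|" + ".".join(sorted(labels[u] for u in G[v]))
                  for v in G}
        feats.update(labels.values())
    return feats

def wl_kernel(G1, G2, iterations=3):
    # Kernel value = dot product of the two feature-count histograms.
    f1, f2 = wl_features(G1, iterations), wl_features(G2, iterations)
    return sum(f1[k] * f2[k] for k in f1.keys() & f2.keys())

print(wl_kernel(nx.cycle_graph(10), nx.cycle_graph(10)))   # self-similar
print(wl_kernel(nx.cycle_graph(10), nx.star_graph(9)))     # dissimilar
```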
National evaluation of multidisciplinary quality metrics for head and neck cancer.
Cramer, John D; Speedy, Sedona E; Ferris, Robert L; Rademaker, Alfred W; Patel, Urjeet A; Samant, Sandeep
2017-11-15
The National Quality Forum has endorsed quality-improvement measures for multiple cancer types that are being developed into actionable tools to improve cancer care. No nationally endorsed quality metrics currently exist for head and neck cancer. The authors identified patients with surgically treated, invasive head and neck squamous cell carcinoma in the National Cancer Data Base from 2004 to 2014, compared the rates of adherence to 5 different quality metrics, and assessed whether compliance with these quality metrics impacted overall survival. The metrics examined included negative surgical margins, neck dissection lymph node (LN) yield ≥ 18, appropriate adjuvant radiation, appropriate adjuvant chemoradiation, adjuvant therapy within 6 weeks, and overall quality. In total, 76,853 eligible patients were identified. There was substantial variability in patient-level adherence: 80% for negative surgical margins, 73.1% for neck dissection LN yield, 69% for adjuvant radiation, 42.6% for adjuvant chemoradiation, and 44.5% for adjuvant therapy within 6 weeks. Risk-adjusted Cox proportional-hazards models indicated that all metrics were associated with a reduced risk of death: negative margins (hazard ratio [HR], 0.73; 95% confidence interval [CI], 0.71-0.76), LN yield ≥ 18 (HR, 0.93; 95% CI, 0.89-0.96), adjuvant radiation (HR, 0.67; 95% CI, 0.64-0.70), adjuvant chemoradiation (HR, 0.84; 95% CI, 0.79-0.88), and adjuvant therapy ≤6 weeks (HR, 0.92; 95% CI, 0.89-0.96). Patients who received high-quality care had a 19% reduced adjusted hazard of mortality (HR, 0.81; 95% CI, 0.79-0.83). Five head and neck cancer quality metrics were identified that have substantial variability in adherence and meaningfully impact overall survival. These metrics are appropriate candidates for national adoption. Cancer 2017;123:4372-81. © 2017 American Cancer Society.
An exploratory survey of methods used to develop measures of performance
NASA Astrophysics Data System (ADS)
Hamner, Kenneth L.; Lafleur, Charles A.
1993-09-01
Nonmanufacturing organizations are being challenged to provide high-quality products and services to their customers, with an emphasis on continuous process improvement. Measures of performance, referred to as metrics, can be used to foster process improvement. The application of performance measurement to nonmanufacturing processes can be very difficult. This research explored methods used to develop metrics in nonmanufacturing organizations. Several methods were formally defined in the literature, and the researchers used a two-step screening process to determine that the OMB Generic Method was most likely to produce high-quality metrics. The OMB Generic Method was then used to develop metrics. A few other metric development methods were found in use at nonmanufacturing organizations. The researchers interviewed participants in metric development efforts to determine their satisfaction and to have them identify the strengths and weaknesses of, and recommended improvements to, the metric development methods used. Analysis of participants' responses allowed the researchers to identify the key components of a sound metric development method. Those components were incorporated into a proposed metric development method that is based on the OMB Generic Method and should be more likely to produce high-quality metrics that result in continuous process improvement.
Software Quality Metrics Enhancements. Volume 1
1980-04-01
the mathematical relationships which relate metrics to ratings of the various quality factors) for factors which were not validated previously were... function, provides a mathematical relationship between the metrics and the quality factors. (3) Validation of these normalization functions was performed by... samples; further research is needed before a high degree of confidence can be placed on the mathematical relationships established to date.
Developing and evaluating a target-background similarity metric for camouflage detection.
Lin, Chiuhsiang Joe; Chang, Chi-Chan; Liu, Bor-Shong
2014-01-01
Measurement of camouflage performance is of fundamental importance for military stealth applications. The goal of camouflage assessment algorithms is to automatically assess the effect of camouflage in agreement with human detection responses. In a previous study, we found that the Universal Image Quality Index (UIQI) correlated well with psychophysical measures and could be a potential camouflage assessment tool. In this study, we quantify the relationship between camouflage similarity indexes and psychophysical results. We compare several image quality indexes for computational evaluation of camouflage effectiveness, and present the results of an extensive human visual experiment conducted to evaluate the performance of several camouflage assessment algorithms and analyze their strengths and weaknesses. The experimental data demonstrate the effectiveness of the approach, and the correlation coefficient of the UIQI was higher than those of the other methods. This approach was highly correlated with the human target-searching results. It also showed that this method is an objective and effective camouflage performance evaluation method because it considers the human visual system and image structure, which makes it consistent with the subjective evaluation results.
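The UIQI has a closed form combining correlation, luminance, and contrast terms: Q = 4 * cov(x, y) * mean(x) * mean(y) / ((var(x) + var(y)) * (mean(x)^2 + mean(y)^2)). The sketch below computes it globally; the original index averages it over sliding windows, and the "camouflaged" test image here is synthetic:

```python
import numpy as np

def uiqi(x, y):
    # Universal Image Quality Index (Wang & Bovik): 1.0 means identical.
    # Computed globally here; the original uses a sliding window.
    x, y = x.astype(float).ravel(), y.astype(float).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 4 * cov * mx * my / ((vx + vy) * (mx ** 2 + my ** 2))

rng = np.random.default_rng(0)
background = rng.uniform(0, 1, (64, 64))
camouflaged = 0.9 * background + 0.1 * rng.uniform(0, 1, (64, 64))
print(f"UIQI = {uiqi(background, camouflaged):.3f}")
```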
Justus, Billy; Burge, David; Cobb, Jennifer; Marsico, Travis; Bouldin, Jennifer
2016-01-01
Methods for assessing wetland conditions must be established so wetlands can be monitored and ecological services can be protected. We evaluated biological indices compiled from macroinvertebrate and diatom metrics developed primarily for streams to assess their ability to indicate water quality in connected depression wetlands. We collected water-quality and biological samples at 24 connected depressions dominated by water tupelo (Nyssa aquatica) or bald cypress (Taxodium distichum) (water depths = 0.5–1.0 m). Water quality of the least-disturbed connected depressions was characteristic of swamps in the southeastern USA, which tend to have low specific conductance, nutrient concentrations, and pH. We compared 162 macroinvertebrate metrics and 123 diatom metrics with a water-quality disturbance gradient. For most metrics, we evaluated richness, % richness, abundance, and % relative abundance values. Three of the 4 macroinvertebrate metrics that were most beneficial for identifying disturbance in connected depressions decreased along the disturbance gradient even though they normally increase relative to stream disturbance. The negative relationship to disturbance of some taxa (e.g., dipterans, mollusks, and crustaceans) that are considered tolerant in streams suggests that the tolerance scale for some macroinvertebrates can differ markedly between streams and wetlands. Three of the 4 metrics chosen for the diatom index reflected published tolerances or fit the usual perception of metric response to disturbance. Both biological indices may be useful in connected depressions elsewhere in the Mississippi Alluvial Plain Ecoregion and could have application in other wetland types. Given the paradoxical relationship of some macroinvertebrate metrics to dissolved O2 (DO), we suggest that the diatom metrics may be easier to interpret and defend for wetlands with low DO concentrations in least-disturbed conditions.
Speckle reduction in echocardiography by temporal compounding and anisotropic diffusion filtering
NASA Astrophysics Data System (ADS)
Giraldo-Guzmán, Jader; Porto-Solano, Oscar; Cadena-Bonfanti, Alberto; Contreras-Ortiz, Sonia H.
2015-01-01
Echocardiography is a medical imaging technique based on ultrasound signals that is used to evaluate heart anatomy and physiology. Echocardiographic images are affected by speckle, a type of multiplicative noise that obscures details of the structures, and reduces the overall image quality. This paper shows an approach to enhance echocardiography using two processing techniques: temporal compounding and anisotropic diffusion filtering. We used twenty echocardiographic videos that include one or three cardiac cycles to test the algorithms. Two images from each cycle were aligned in space and averaged to obtain the compound images. These images were then processed using anisotropic diffusion filters to further improve their quality. Resultant images were evaluated using quality metrics and visual assessment by two medical doctors. The average total improvement on signal-to-noise ratio was up to 100.29% for videos with three cycles, and up to 32.57% for videos with one cycle.
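The compounding step described above reduces to averaging spatially aligned frames and comparing SNR before and after. The sketch below illustrates that idea under stated assumptions: frames are treated as already registered, speckle is approximated with multiplicative Rayleigh noise, and the SNR definition (region mean over background standard deviation) is a generic choice rather than the exact metric used in the paper.

```python
# Minimal sketch of temporal compounding and an SNR-improvement measurement.
import numpy as np

def snr(img, signal_mask, noise_mask):
    """Mean signal divided by the standard deviation in a homogeneous region."""
    return img[signal_mask].mean() / img[noise_mask].std()

def temporal_compound(frames):
    """Average spatially pre-aligned frames taken from successive cardiac cycles."""
    return np.mean(np.stack(frames, axis=0), axis=0)

rng = np.random.default_rng(1)
clean = np.full((128, 128), 0.3)
clean[40:90, 40:90] = 1.0                                # bright cardiac structure
frames = [clean * rng.rayleigh(1.0, clean.shape) for _ in range(3)]  # speckle-like noise
compound = temporal_compound(frames)

signal_region = clean > 0.5
noise_region = ~signal_region
gain = 100.0 * (snr(compound, signal_region, noise_region)
                / snr(frames[0], signal_region, noise_region) - 1.0)
print(f"SNR improvement after compounding: {gain:.1f}%")
```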
Real-time performance monitoring and management system
Budhraja, Vikram S [Los Angeles, CA; Dyer, James D [La Mirada, CA; Martinez Morales, Carlos A [Upland, CA
2007-06-19
A real-time performance monitoring system for monitoring an electric power grid. The electric power grid has a plurality of grid portions, each grid portion corresponding to one of a plurality of control areas. The real-time performance monitoring system includes a monitor computer for monitoring at least one of reliability metrics, generation metrics, transmission metrics, suppliers metrics, grid infrastructure security metrics, and markets metrics for the electric power grid. The data for metrics being monitored by the monitor computer are stored in a data base, and a visualization of the metrics is displayed on at least one display computer having a monitor. The at least one display computer in one said control area enables an operator to monitor the grid portion corresponding to a different said control area.
Miller, Anna N; Kozar, Rosemary; Wolinsky, Philip
2017-06-01
Reproducible metrics are needed to evaluate the delivery of orthopaedic trauma care, national care norms, and outliers. The American College of Surgeons (ACS) is uniquely positioned to collect and evaluate the data needed to evaluate orthopaedic trauma care via the Committee on Trauma and the Trauma Quality Improvement Project. We evaluated the first quality metrics the ACS has collected for orthopaedic trauma surgery to determine whether these metrics can be appropriately collected with accuracy and completeness. The metrics include the time to administration of the first dose of antibiotics for open fractures, the time to surgical irrigation and débridement of open tibial fractures, and the percentage of patients who undergo stabilization of femoral fractures at trauma centers nationwide. These metrics were analyzed to evaluate for variances in the delivery of orthopaedic care across the country. The data showed wide variances for all metrics, and many centers had incomplete ability to collect the orthopaedic trauma care metrics. There was large variability in the results of the metrics collected among different trauma center levels, as well as among centers of a particular level. The ACS has successfully begun tracking orthopaedic trauma care performance measures, which will help inform reevaluation of the goals and continued work on data collection and improvement of patient care. Future areas of research may link these performance measures with patient outcomes, such as long-term tracking, to assess nonunion and function. This information can provide insight into center performance and its effect on patient outcomes. The ACS was able to successfully collect and evaluate the data for three metrics used to assess the quality of orthopaedic trauma care. However, additional research is needed to determine whether these metrics are suitable for evaluating orthopaedic trauma care and to establish cutoff values for each metric.
Testing, Requirements, and Metrics
NASA Technical Reports Server (NTRS)
Rosenberg, Linda; Hyatt, Larry; Hammer, Theodore F.; Huffman, Lenore; Wilson, William
1998-01-01
The criticality of correct, complete, testable requirements is a fundamental tenet of software engineering. Also critical is complete requirements-based testing of the final product. Modern tools for managing requirements allow new metrics to be used in support of both of these critical processes. Using these tools, potential problems with the quality of the requirements and the test plan can be identified early in the life cycle. Some of these quality factors include: ambiguous or incomplete requirements, poorly designed requirements databases, excessive or insufficient test cases, and incomplete linkage of tests to requirements. This paper discusses how metrics can be used to evaluate the quality of the requirements and tests in order to avoid problems later. Requirements management and requirements-based testing have always been critical in the implementation of high quality software systems. Recently, automated tools have become available to support requirements management. At NASA's Goddard Space Flight Center (GSFC), automated requirements management tools are being used on several large projects. The use of these tools opens the door to innovative uses of metrics in characterizing test plan quality and assessing overall testing risks. In support of these projects, the Software Assurance Technology Center (SATC) is working to develop and apply a metrics program that utilizes the information now available through the application of requirements management tools. Metrics based on this information provide real-time insight into the testing of requirements, and these metrics assist the Project Quality Office in its testing oversight role. This paper discusses three facets of the SATC's efforts to evaluate the quality of the requirements and test plan early in the life cycle, thus preventing costly errors and time delays later.
Driving photomask supplier quality through automation
NASA Astrophysics Data System (ADS)
Russell, Drew; Espenscheid, Andrew
2007-10-01
In 2005, Freescale Semiconductor's newly centralized mask data prep organization (MSO) initiated a project to develop an automated global quality validation system for photomasks delivered to Freescale Semiconductor fabs. The system handles Certificate of Conformance (CofC) quality metric collection, validation, reporting and an alert system for all photomasks shipped to Freescale fabs from all qualified global suppliers. The completed system automatically collects 30+ quality metrics for each photomask shipped. Other quality metrics are generated from the collected data and quality metric conformance is automatically validated to specifications or control limits with failure alerts emailed to fab photomask and mask data prep engineering. A quality data warehouse stores the data for future analysis, which is performed quarterly. The improved access to data provided by the system has improved Freescale engineers' ability to spot trends and opportunities for improvement with our suppliers' processes. This paper will review each phase of the project, current system capabilities and quality system benefits for both our photomask suppliers and Freescale.
Sensitivity of the lane change test as a measure of in-vehicle system demand.
Young, Kristie L; Lenné, Michael G; Williamson, Amy R
2011-05-01
The Lane Change Test (LCT) is one of the growing number of methods developed to quantify driving performance degradation brought about by the use of in-vehicle devices. Beyond its validity and reliability, for such a test to be of practical use, it must also be sensitive to the varied demands of individual tasks. The current study evaluated the ability of several recent LCT lateral control and event detection parameters to discriminate between visual-manual and cognitive surrogate In-Vehicle Information System tasks with different levels of demand. Twenty-seven participants (mean age 24.4 years) completed a PC version of the LCT while performing visual search and math problem solving tasks. A number of the lateral control metrics were found to be sensitive to task differences, but the event detection metrics were less able to discriminate between tasks. The mean deviation and lane excursion measures were able to distinguish between the visual and cognitive tasks, but were less sensitive to the different levels of task demand. The other LCT metrics examined were less sensitive to task differences. A major factor influencing the sensitivity of at least some of the LCT metrics could be the type of lane change instructions given to participants. The provision of clear and explicit lane change instructions and further refinement of its metrics will be essential for increasing the utility of the LCT as an evaluation tool. Copyright © 2010 Elsevier Ltd and The Ergonomics Society. All rights reserved.
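The mean deviation measure discussed above is essentially the average absolute gap between the driven lateral position and a normative lane-change path. The sketch below is an assumed implementation of that idea, not the ISO/LCT reference procedure; the course geometry, lag, and noise model are hypothetical.

```python
# Minimal sketch of an LCT-style mean deviation lateral-control metric.
import numpy as np

def lct_mean_deviation(driven_y, normative_y):
    """Mean absolute lateral deviation (metres) along the course."""
    driven_y = np.asarray(driven_y, float)
    normative_y = np.asarray(normative_y, float)
    return np.mean(np.abs(driven_y - normative_y))

# Hypothetical 1 km course sampled every metre: the normative path switches
# between lanes 3.75 m apart at fixed signs; the driven path adds lag and jitter.
x = np.arange(1000)
normative = np.where((x // 150) % 2 == 0, 0.0, 3.75)
rng = np.random.default_rng(2)
driven = np.roll(normative, 8) + rng.normal(0.0, 0.25, x.size)  # reaction lag + noise
print(f"mean deviation = {lct_mean_deviation(driven, normative):.2f} m")
```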
Mukherjee, Joyeeta Mitra; Hutton, Brian F; Johnson, Karen L; Pretorius, P Hendrik; King, Michael A
2014-01-01
Motion estimation methods in single photon emission computed tomography (SPECT) can be classified into methods which depend on just the emission data (data-driven), or those that use some other source of information such as an external surrogate. The surrogate-based methods estimate the motion exhibited externally which may not correlate exactly with the movement of organs inside the body. The accuracy of data-driven strategies on the other hand is affected by the type and timing of motion occurrence during acquisition, the source distribution, and various degrading factors such as attenuation, scatter, and system spatial resolution. The goal of this paper is to investigate the performance of two data-driven motion estimation schemes based on the rigid-body registration of projections of motion-transformed source distributions to the acquired projection data for cardiac SPECT studies. Comparison is also made of six intensity based registration metrics to an external surrogate-based method. In the data-driven schemes, a partially reconstructed heart is used as the initial source distribution. The partially-reconstructed heart has inaccuracies due to limited angle artifacts resulting from using only a part of the SPECT projections acquired while the patient maintained the same pose. The performance of different cost functions in quantifying consistency with the SPECT projection data in the data-driven schemes was compared for clinically realistic patient motion occurring as discrete pose changes, one or two times during acquisition. The six intensity-based metrics studied were mean-squared difference (MSD), mutual information (MI), normalized mutual information (NMI), pattern intensity (PI), normalized cross-correlation (NCC) and entropy of the difference (EDI). Quantitative and qualitative analysis of the performance is reported using Monte-Carlo simulations of a realistic heart phantom including degradation factors such as attenuation, scatter and system spatial resolution. Further the visual appearance of motion-corrected images using data-driven motion estimates was compared to that obtained using the external motion-tracking system in patient studies. Pattern intensity and normalized mutual information cost functions were observed to have the best performance in terms of lowest average position error and stability with degradation of image quality of the partial reconstruction in simulations. In all patients, the visual quality of PI-based estimation was either significantly better or comparable to NMI-based estimation. Best visual quality was obtained with PI-based estimation in 1 of the 5 patient studies, and with external-surrogate based correction in 3 out of 5 patients. In the remaining patient study there was little motion and all methods yielded similar visual image quality. PMID:24107647
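Two of the six intensity-based cost functions compared in this study, mean-squared difference (MSD) and normalized cross-correlation (NCC), are simple enough to sketch directly. The example below omits the projector and motion model entirely and only shows how candidate projections could be scored against measured frames; the data are synthetic placeholders.

```python
# Minimal sketch of two intensity-based registration cost functions (MSD, NCC).
import numpy as np

def msd(a, b):
    """Mean-squared difference between two projection images."""
    return np.mean((a.astype(float) - b.astype(float)) ** 2)

def ncc(a, b):
    """Normalized cross-correlation between two projection images."""
    a = a.astype(float).ravel()
    b = b.astype(float).ravel()
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return np.mean(a * b)

# Hypothetical use: score a candidate motion hypothesis by comparing projections
# of the motion-transformed partial reconstruction with the measured frames.
rng = np.random.default_rng(3)
measured = rng.poisson(50, (64, 64)).astype(float)
candidate = measured + rng.normal(0, 3, measured.shape)
print(msd(candidate, measured), ncc(candidate, measured))
```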
Comparison of two laboratory-based systems for evaluation of halos in intraocular lenses
Alexander, Elsinore; Wei, Xin; Lee, Shinwook
2018-01-01
Purpose Multifocal intraocular lenses (IOLs) can be associated with unwanted visual phenomena, including halos. Predicting potential for halos is desirable when designing new multifocal IOLs. Halo images from 6 IOL models were compared using the Optikos modulation transfer function bench system and a new high dynamic range (HDR) system. Materials and methods One monofocal, 1 extended depth of focus, and 4 multifocal IOLs were evaluated. An off-the-shelf optical bench was used to simulate a distant (>50 m) car headlight and record images. A custom HDR system was constructed using an imaging photometer to simulate headlight images and to measure quantitative halo luminance data. A metric was developed to characterize halo luminance properties. Clinical relevance was investigated by correlating halo measurements to visual outcomes questionnaire data. Results The Optikos system produced halo images useful for visual comparisons; however, measurements were relative and not quantitative. The HDR halo system provided objective and quantitative measurements used to create a metric from the area under the curve (AUC) of the logarithmic normalized halo profile. This proposed metric differentiated between IOL models, and linear regression analysis found strong correlations between AUC and subjective clinical ratings of halos. Conclusion The HDR system produced quantitative, preclinical metrics that correlated to patients’ subjective perception of halos. PMID:29503526
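The AUC metric described above can be approximated as the area under the logarithm of a peak-normalized radial halo luminance profile. The sketch below assumes that definition for illustration; the paper's exact normalization, angular range, and integration limits may differ.

```python
# Minimal sketch of an AUC-style halo metric from a radial luminance profile.
import numpy as np

def halo_auc(radius_deg, luminance):
    """Area under log10 of the peak-normalized halo luminance profile."""
    lum = np.asarray(luminance, float)
    profile = np.log10(lum / lum.max())          # 0 at the peak, negative outward
    return np.trapz(profile, np.asarray(radius_deg, float))

# Hypothetical profiles: a multifocal IOL with a broader halo gives a smaller
# magnitude of (negative) AUC than a monofocal lens, i.e. more residual light.
r = np.linspace(0.05, 2.0, 200)                  # degrees of visual angle
monofocal = np.exp(-r / 0.1)
multifocal = 0.7 * np.exp(-r / 0.1) + 0.3 * np.exp(-r / 0.6)
print(halo_auc(r, monofocal), halo_auc(r, multifocal))
```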
Visual tuning and metrical perception of realistic point-light dance movements
Su, Yi-Huang
2016-01-01
Humans move to music spontaneously, and this sensorimotor coupling underlies musical rhythm perception. The present research proposed that, based on common action representation, different metrical levels as in auditory rhythms could emerge visually when observing structured dance movements. Participants watched a point-light figure performing basic steps of Swing dance cyclically in different tempi, whereby the trunk bounced vertically at every beat and the limbs moved laterally at every second beat, yielding two possible metrical periodicities. In Experiment 1, participants freely identified a tempo of the movement and tapped along. While some observers only tuned to the bounce and some only to the limbs, the majority tuned to one level or the other depending on the movement tempo, which was also associated with individuals’ preferred tempo. In Experiment 2, participants reproduced the tempo of leg movements by four regular taps, and showed a slower perceived leg tempo with than without the trunk bouncing simultaneously in the stimuli. This mirrors previous findings of an auditory ‘subdivision effect’, suggesting the leg movements were perceived as beat while the bounce as subdivisions. Together these results support visual metrical perception of dance movements, which may employ similar action-based mechanisms to those underpinning auditory rhythm perception. PMID:26947252
Visual just noticeable differences
NASA Astrophysics Data System (ADS)
Nankivil, Derek; Chen, Minghan; Wooley, C. Benjamin
2018-02-01
A visual just noticeable difference (VJND) is the amount of change in either an image (e.g. a photographic print) or in vision (e.g. due to a change in refractive power of a vision correction device or visually coupled optical system) that is just noticeable when compared with the prior state. Numerous theoretical and clinical studies have been performed to determine the amount of change in various visual inputs (power, spherical aberration, astigmatism, etc.) that result in a just noticeable visual change. Each of these approaches, in defining a VJND, relies on the comparison of two visual stimuli. The first stimulus is the nominal or baseline state and the second is the perturbed state that results in a VJND. Using this commonality, we converted each result to the change in the area of the modulation transfer function (AMTF) to provide a more fundamental understanding of what results in a VJND. We performed an analysis of the wavefront criteria from basic optics, the image quality metrics, and clinical studies testing various visual inputs, showing that fractional changes in AMTF resulting in one VJND range from 0.025 to 0.075. In addition, cycloplegia appears to desensitize the human visual system so that a much larger change in the retinal image is required to give a VJND. This finding may be of great import for clinical vision tests. Finally, we present applications of the VJND model for the determination of threshold ocular aberrations and manufacturing tolerances of visually coupled optical systems.
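The common currency of the analysis above is the fractional change in the area under the MTF. A minimal sketch of that computation follows, with hypothetical MTF curves; the 0.025-0.075 range quoted from the abstract is used only as a reference point for interpreting the output.

```python
# Minimal sketch of the fractional AMTF change used as a VJND currency.
import numpy as np

def amtf(freq_cpd, mtf):
    """Area under the MTF over the sampled spatial-frequency range."""
    return np.trapz(np.asarray(mtf, float), np.asarray(freq_cpd, float))

def fractional_amtf_change(freq_cpd, mtf_base, mtf_pert):
    """(AMTF_base - AMTF_perturbed) / AMTF_base."""
    a0 = amtf(freq_cpd, mtf_base)
    return (a0 - amtf(freq_cpd, mtf_pert)) / a0

# Hypothetical MTFs: a small perturbation lowers the mid spatial frequencies.
f = np.linspace(0, 60, 300)                      # cycles per degree
base = np.clip(1 - f / 60, 0, None)
perturbed = base * (1 - 0.05 * np.sin(np.pi * f / 60))
change = fractional_amtf_change(f, base, perturbed)
print(f"fractional AMTF change = {change:.3f} (~1 VJND if within 0.025-0.075)")
```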
Interactive visual exploration and refinement of cluster assignments.
Kern, Michael; Lex, Alexander; Gehlenborg, Nils; Johnson, Chris R
2017-09-12
With ever-increasing amounts of data produced in biology research, scientists are in need of efficient data analysis methods. Cluster analysis, combined with visualization of the results, is one such method that can be used to make sense of large data volumes. At the same time, cluster analysis is known to be imperfect and depends on the choice of algorithms, parameters, and distance measures. Most clustering algorithms don't properly account for ambiguity in the source data, as records are often assigned to discrete clusters, even if an assignment is unclear. While there are metrics and visualization techniques that allow analysts to compare clusterings or to judge cluster quality, there is no comprehensive method that allows analysts to evaluate, compare, and refine cluster assignments based on the source data, derived scores, and contextual data. In this paper, we introduce a method that explicitly visualizes the quality of cluster assignments, allows comparisons of clustering results and enables analysts to manually curate and refine cluster assignments. Our methods are applicable to matrix data clustered with partitional, hierarchical, and fuzzy clustering algorithms. Furthermore, we enable analysts to explore clustering results in context of other data, for example, to observe whether a clustering of genomic data results in a meaningful differentiation in phenotypes. Our methods are integrated into Caleydo StratomeX, a popular, web-based, disease subtype analysis tool. We show in a usage scenario that our approach can reveal ambiguities in cluster assignments and produce improved clusterings that better differentiate genotypes and phenotypes.
Application of the EM algorithm to radiographic images.
Brailean, J C; Little, D; Giger, M L; Chen, C T; Sullivan, B J
1992-01-01
The expectation maximization (EM) algorithm has received considerable attention in the area of positron emission tomography (PET) as a restoration and reconstruction technique. In this paper, the restoration capabilities of the EM algorithm when applied to radiographic images are investigated. This application does not involve reconstruction. The performance of the EM algorithm is quantitatively evaluated using a "perceived" signal-to-noise ratio (SNR) as the image quality metric. This perceived SNR is based on statistical decision theory and includes both the observer's visual response function and a noise component internal to the eye-brain system. For a variety of processing parameters, the relative SNR (ratio of the processed SNR to the original SNR) is calculated and used as a metric to compare quantitatively the effects of the EM algorithm with two other image enhancement techniques: global contrast enhancement (windowing) and unsharp mask filtering. The results suggest that the EM algorithm's performance is superior when compared to unsharp mask filtering and global contrast enhancement for radiographic images which contain objects smaller than 4 mm.
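For restoration of a blurred radiograph with Poisson noise, the EM iteration takes the familiar multiplicative (Richardson-Lucy type) form f <- f * [H^T(g / Hf)]. The sketch below shows that update under assumptions: the blur kernel, iteration count, and test object are illustrative, and the perceived-SNR observer model from the paper is not reproduced.

```python
# Minimal sketch of an EM (Richardson-Lucy type) restoration iteration.
import numpy as np
from scipy.signal import fftconvolve

def em_restore(observed, psf, n_iter=20, eps=1e-12):
    """Multiplicative EM update f <- f * [H^T (g / H f)] for a nonnegative image."""
    f = np.full_like(observed, observed.mean(), dtype=float)
    psf_flip = psf[::-1, ::-1]                       # adjoint of the blur operator
    for _ in range(n_iter):
        blurred = fftconvolve(f, psf, mode="same")
        ratio = observed / np.maximum(blurred, eps)
        f *= fftconvolve(ratio, psf_flip, mode="same")
    return f

rng = np.random.default_rng(4)
truth = np.zeros((96, 96))
truth[40:56, 40:56] = 100.0                          # small high-contrast object
psf = np.outer(*(np.hanning(9),) * 2)
psf /= psf.sum()
observed = rng.poisson(fftconvolve(truth, psf, mode="same").clip(0)).astype(float)
restored = em_restore(observed, psf, n_iter=30)
```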
Spatial-temporal distortion metric for in-service quality monitoring of any digital video system
NASA Astrophysics Data System (ADS)
Wolf, Stephen; Pinson, Margaret H.
1999-11-01
Many organizations have focused on developing digital video quality metrics which produce results that accurately emulate subjective responses. However, to be widely applicable a metric must also work over a wide range of quality and be useful for in-service quality monitoring. The Institute for Telecommunication Sciences (ITS) has developed spatial-temporal distortion metrics that meet all of these requirements. These objective metrics are described in detail and have a number of interesting properties, including utilization of (1) spatial activity filters which emphasize long edges on the order of 10 arc min while simultaneously performing large amounts of noise suppression, (2) the angular direction of the spatial gradient, (3) spatial-temporal compression factors of at least 384:1 (spatial compression of at least 64:1 and temporal compression of at least 6:1), and (4) simple perceptibility thresholds and spatial-temporal masking functions. Results are presented that compare the objective metric values with mean opinion scores from a wide range of subjective databases spanning many different scenes, systems, bit-rates, and applications.
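The spatial-activity features underlying the ITS metric can be illustrated with ordinary gradient filters. The sketch below is not the ITS reference implementation (which uses larger edge-enhancement filters and specific spatial-temporal pooling); it only shows the general pattern of extracting gradient magnitude and orientation and pooling them over blocks before comparing original and processed frames.

```python
# Minimal sketch of gradient-based spatial-activity features with block pooling.
import numpy as np
from scipy.ndimage import sobel

def spatial_activity_features(frame):
    """Per-pixel gradient magnitude and orientation of one luminance frame."""
    gx = sobel(frame.astype(float), axis=1)
    gy = sobel(frame.astype(float), axis=0)
    return np.hypot(gx, gy), np.arctan2(gy, gx)

def block_pool(feature, block=8):
    """Pool a feature map over non-overlapping spatial blocks (mean pooling)."""
    h, w = (s - s % block for s in feature.shape)
    f = feature[:h, :w].reshape(h // block, block, w // block, block)
    return f.mean(axis=(1, 3))

rng = np.random.default_rng(5)
original = rng.integers(0, 256, (72, 88)).astype(float)
degraded = original + rng.normal(0, 6, original.shape)
mag_orig, _ = spatial_activity_features(original)
mag_degr, _ = spatial_activity_features(degraded)
distortion = np.abs(block_pool(mag_degr) - block_pool(mag_orig)).mean()  # toy pooling
```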
Pressure-specific and multiple pressure response of fish assemblages in European running waters☆
Schinegger, Rafaela; Trautwein, Clemens; Schmutz, Stefan
2013-01-01
We classified homogenous river types across Europe and searched for fish metrics qualified to show responses to specific pressures (hydromorphological pressures or water quality pressures) vs. multiple pressures in these river types. We analysed fish taxa lists from 3105 sites in 16 ecoregions and 14 countries. Sites were pre-classified for 15 selected pressures to separate unimpacted from impacted sites. Hierarchical cluster analysis was used to split unimpacted sites into four homogenous river types based on species composition and geographical location. Classification trees were employed to predict associated river types for impacted sites with four environmental variables. We defined a set of 129 candidate fish metrics to select the best reacting metrics for each river type. The candidate metrics represented tolerances/intolerances of species associated with six metric types: habitat, migration, water quality sensitivity, reproduction, trophic level and biodiversity. The results showed that 17 uncorrelated metrics reacted to pressures in the four river types. Metrics responded specifically to water quality pressures and hydromorphological pressures in three river types and to multiple pressures in all river types. Four metrics associated with water quality sensitivity showed a significant reaction in up to three river types, whereas 13 metrics were specific to individual river types. Our results contribute to the better understanding of fish assemblage response to human pressures at a pan-European scale. The results are especially important for European river management and restoration, as it is necessary to uncover underlying processes and effects of human pressures on aquatic communities. PMID:24003262
Performance evaluation of no-reference image quality metrics for face biometric images
NASA Astrophysics Data System (ADS)
Liu, Xinwei; Pedersen, Marius; Charrier, Christophe; Bours, Patrick
2018-03-01
The accuracy of face recognition systems is significantly affected by the quality of face sample images. The recently established standardization proposed several important aspects for the assessment of face sample quality. There are many existing no-reference image quality metrics (IQMs) that are able to assess natural image quality by taking into account similar image-based quality attributes as introduced in the standardization. However, whether such metrics can assess face sample quality is rarely considered. We evaluate the performance of 13 selected no-reference IQMs on face biometrics. The experimental results show that several of them can assess face sample quality according to the system performance. We also analyze the strengths and weaknesses of different IQMs, as well as why some of them failed to assess face sample quality. Retraining an original IQM on a face database can improve the performance of such a metric. In addition, the contribution of this paper can be used for the evaluation of IQMs on other biometric modalities; furthermore, it can be used for the development of multimodality biometric IQMs.
Up Periscope! Designing a New Perceptual Metric for Imaging System Performance
NASA Technical Reports Server (NTRS)
Watson, Andrew B.
2016-01-01
Modern electronic imaging systems include optics, sensors, sampling, noise, processing, compression, transmission and display elements, and are viewed by the human eye. Many of these elements cannot be assessed by traditional imaging system metrics such as the MTF. More complex metrics such as NVTherm do address these elements, but do so largely through parametric adjustment of an MTF-like metric. The parameters are adjusted through subjective testing of human observers identifying specific targets in a set of standard images. We have designed a new metric that is based on a model of human visual pattern classification. In contrast to previous metrics, ours simulates the human observer identifying the standard targets. One application of this metric is to quantify performance of modern electronic periscope systems on submarines.
Quality of service routing in the differentiated services framework
NASA Astrophysics Data System (ADS)
Oliveira, Marilia C.; Melo, Bruno; Quadros, Goncalo; Monteiro, Edmundo
2001-02-01
In this paper we present a quality of service routing strategy for networks where traffic differentiation follows the class-based paradigm, as in the Differentiated Services framework. This routing strategy is based on a quality of service metric. This metric represents the impact that the delay and losses verified at each router in the network have on application performance. Based on this metric, a path is selected for each class according to the class's sensitivity to delay and losses. The distribution of the metric is triggered by a relative criterion with two thresholds, and the values advertised are the moving average of the last values measured.
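The trigger mechanism described above, a moving average of the per-class metric that is re-advertised only when it drifts outside two relative thresholds, can be sketched as follows. The window length, threshold values, and the impact measurements are assumptions for illustration.

```python
# Minimal sketch of a threshold-triggered moving-average metric advertiser.
from collections import deque

class QosMetricAdvertiser:
    def __init__(self, window=8, low=0.9, high=1.1):
        self.samples = deque(maxlen=window)   # recent per-class metric measurements
        self.low, self.high = low, high       # relative trigger thresholds
        self.advertised = None                # last value flooded to peers

    def update(self, measured_impact):
        """Add a measurement; return a new value to advertise, or None."""
        self.samples.append(measured_impact)
        avg = sum(self.samples) / len(self.samples)
        if self.advertised is None or not (
            self.low * self.advertised <= avg <= self.high * self.advertised
        ):
            self.advertised = avg
            return avg
        return None

# Hypothetical stream of per-class impact measurements at one router.
adv = QosMetricAdvertiser()
for impact in [1.0, 1.02, 0.98, 1.3, 1.35, 1.4, 1.38, 1.0]:
    update = adv.update(impact)
    if update is not None:
        print(f"advertise {update:.2f}")
```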
Ji, Xiaonan; Machiraju, Raghu; Ritter, Alan; Yen, Po-Yin
2015-01-01
Systematic reviews (SRs) provide high quality evidence for clinical practice, but the article screening process is time and labor intensive. As SRs aim to identify relevant articles with a specific scope, we propose that a pre-defined article relationship, using similarity metrics, could accelerate this process. In this study, we established the article relationship using MEDLINE element similarities and visualized the article network with the Force Atlas layout. We also analyzed the article networks with graph diameter, closeness centrality, and module classes. The results revealed the distribution of articles and found that included articles tended to aggregate together in some module classes, providing further evidence of the existence of strong relationships among included articles. This approach can be utilized to facilitate the articles selection process through early identification of these dominant module classes. We are optimistic that the use of article network visualization can help better SR work prioritization. PMID:26958292
Yang, Liu; Jin, Rong; Mummert, Lily; Sukthankar, Rahul; Goode, Adam; Zheng, Bin; Hoi, Steven C H; Satyanarayanan, Mahadev
2010-01-01
Similarity measurement is a critical component in content-based image retrieval systems, and learning a good distance metric can significantly improve retrieval performance. However, despite extensive study, there are several major shortcomings with the existing approaches for distance metric learning that can significantly affect their application to medical image retrieval. In particular, "similarity" can mean very different things in image retrieval: resemblance in visual appearance (e.g., two images that look like one another) or similarity in semantic annotation (e.g., two images of tumors that look quite different yet are both malignant). Current approaches for distance metric learning typically address only one goal without consideration of the other. This is problematic for medical image retrieval where the goal is to assist doctors in decision making. In these applications, given a query image, the goal is to retrieve similar images from a reference library whose semantic annotations could provide the medical professional with greater insight into the possible interpretations of the query image. If the system were to retrieve images that did not look like the query, then users would be less likely to trust the system; on the other hand, retrieving images that appear superficially similar to the query but are semantically unrelated is undesirable because that could lead users toward an incorrect diagnosis. Hence, learning a distance metric that preserves both visual resemblance and semantic similarity is important. We emphasize that, although our study is focused on medical image retrieval, the problem addressed in this work is critical to many image retrieval systems. We present a boosting framework for distance metric learning that aims to preserve both visual and semantic similarities. The boosting framework first learns a binary representation using side information, in the form of labeled pairs, and then computes the distance as a weighted Hamming distance using the learned binary representation. A boosting algorithm is presented to efficiently learn the distance function. We evaluate the proposed algorithm on a mammographic image reference library with an Interactive Search-Assisted Decision Support (ISADS) system and on the medical image data set from ImageCLEF. Our results show that the boosting framework compares favorably to state-of-the-art approaches for distance metric learning in retrieval accuracy, with much lower computational cost. Additional evaluation with the COREL collection shows that our algorithm works well for regular image data sets.
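The retrieval distance described above is a weighted Hamming distance over learned binary codes. The sketch below assumes the binary codes and the per-bit weights are already available (in the paper they come from the boosting stage trained with labeled pairs) and only shows how the distance would be applied to rank a reference library.

```python
# Minimal sketch of a weighted Hamming distance used for retrieval ranking.
import numpy as np

def weighted_hamming(code_a, code_b, weights):
    """Sum of weights over bit positions where the two binary codes differ."""
    code_a, code_b = np.asarray(code_a, bool), np.asarray(code_b, bool)
    return float(np.sum(np.asarray(weights, float) * (code_a ^ code_b)))

rng = np.random.default_rng(6)
weights = rng.random(32)                      # learned per-bit weights (assumed given)
query = rng.random(32) > 0.5                  # binary code of the query image
library = rng.random((100, 32)) > 0.5         # codes of the reference library
dists = np.array([weighted_hamming(query, c, weights) for c in library])
top5 = np.argsort(dists)[:5]                  # indices of the most similar images
```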
Mayo, Charles S; Yao, John; Eisbruch, Avraham; Balter, James M; Litzenberg, Dale W; Matuszak, Martha M; Kessler, Marc L; Weyburn, Grant; Anderson, Carlos J; Owen, Dawn; Jackson, William C; Haken, Randall Ten
2017-01-01
To develop statistical dose-volume histogram (DVH)-based metrics and a visualization method to quantify the comparison of treatment plans with historical experience and among different institutions. The descriptive statistical summary (ie, median, first and third quartiles, and 95% confidence intervals) of volume-normalized DVH curve sets of past experiences was visualized through the creation of statistical DVH plots. Detailed distribution parameters were calculated and stored in JavaScript Object Notation files to facilitate management, including transfer and potential multi-institutional comparisons. In the treatment plan evaluation, structure DVH curves were scored against computed statistical DVHs and weighted experience scores (WESs). Individual, clinically used, DVH-based metrics were integrated into a generalized evaluation metric (GEM) as a priority-weighted sum of normalized incomplete gamma functions. Historical treatment plans for 351 patients with head and neck cancer, 104 with prostate cancer who were treated with conventional fractionation, and 94 with liver cancer who were treated with stereotactic body radiation therapy were analyzed to demonstrate the usage of statistical DVH, WES, and GEM in a plan evaluation. A shareable dashboard plugin was created to display statistical DVHs and integrate GEM and WES scores into a clinical plan evaluation within the treatment planning system. Benchmarking with normal tissue complication probability scores was carried out to compare the behavior of GEM and WES scores. DVH curves from historical treatment plans were characterized and presented, with difficult-to-spare structures (ie, frequently compromised organs at risk) identified. Quantitative evaluations by GEM and/or WES compared favorably with the normal tissue complication probability Lyman-Kutcher-Burman model, transforming a set of discrete threshold-priority limits into a continuous model reflecting physician objectives and historical experience. Statistical DVH offers an easy-to-read, detailed, and comprehensive way to visualize the quantitative comparison with historical experiences and among institutions. WES and GEM metrics offer a flexible means of incorporating discrete threshold-prioritizations and historic context into a set of standardized scoring metrics. Together, they provide a practical approach for incorporating big data into clinical practice for treatment plan evaluations.
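The GEM is described as a priority-weighted sum of normalized incomplete gamma functions of the individual DVH metrics. The sketch below illustrates one plausible reading of that construction using the regularized incomplete gamma function; the specific mapping of metric values and thresholds to the gamma arguments, and the priority weights, are assumptions rather than the paper's formula.

```python
# Minimal sketch of a GEM-style priority-weighted sum of incomplete gamma scores.
import numpy as np
from scipy.special import gammainc   # regularized lower incomplete gamma P(a, x)

def metric_score(value, threshold, shape=4.0):
    """Soft 0-1 penalty that rises as the metric value approaches/exceeds its threshold."""
    return gammainc(shape, shape * value / threshold)

def gem(values, thresholds, priorities):
    """Priority-weighted sum of the per-metric scores, normalized to [0, 1]."""
    scores = [metric_score(v, t) for v, t in zip(values, thresholds)]
    w = np.asarray(priorities, float)
    return float(np.dot(w / w.sum(), scores))

# Hypothetical plan: two organ-at-risk DVH metrics with limits and priorities.
print(gem(values=[22.0, 45.0], thresholds=[20.0, 50.0], priorities=[3, 1]))
```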
Detection and quantification of flow consistency in business process models.
Burattin, Andrea; Bernstein, Vered; Neurauter, Manuel; Soffer, Pnina; Weber, Barbara
2018-01-01
Business process models abstract complex business processes by representing them as graphical models. Their layout, as determined by the modeler, may have an effect when these models are used. However, this effect is currently not fully understood. In order to systematically study this effect, a basic set of measurable key visual features is proposed, depicting the layout properties that are meaningful to the human user. The aim of this research is thus twofold: first, to empirically identify key visual features of business process models which are perceived as meaningful to the user and second, to show how such features can be quantified into computational metrics, which are applicable to business process models. We focus on one particular feature, consistency of flow direction, and show the challenges that arise when transforming it into a precise metric. We propose three different metrics addressing these challenges, each following a different view of flow consistency. We then report the results of an empirical evaluation, which indicates which metric is more effective in predicting the human perception of this feature. Moreover, two other automatic evaluations describing the performance and the computational capabilities of our metrics are reported as well.
Gorczynska, Iwona; Migacz, Justin V.; Zawadzki, Robert J.; Capps, Arlie G.; Werner, John S.
2016-01-01
We compared the performance of three OCT angiography (OCTA) methods: speckle variance, amplitude decorrelation and phase variance for imaging of the human retina and choroid. Two averaging methods, split spectrum and volume averaging, were compared to assess the quality of the OCTA vascular images. All data were acquired using a swept-source OCT system at 1040 nm central wavelength, operating at 100,000 A-scans/s. We performed a quantitative comparison using a contrast-to-noise (CNR) metric to assess the capability of the three methods to visualize the choriocapillaris layer. For evaluation of the static tissue noise suppression in OCTA images we proposed to calculate CNR between the photoreceptor/RPE complex and the choriocapillaris layer. Finally, we demonstrated that implementation of intensity-based OCT imaging and OCT angiography methods allows for visualization of retinal and choroidal vascular layers known from anatomic studies in retinal preparations. OCT projection imaging of data flattened to selected retinal layers was implemented to visualize retinal and choroidal vasculature. User guided vessel tracing was applied to segment the retinal vasculature. The results were visualized in a form of a skeletonized 3D model. PMID:27231598
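The static-tissue noise-suppression measure proposed above is a contrast-to-noise ratio between two segmented layers. The sketch below assumes the common CNR definition |mean_A - mean_B| / sqrt(var_A + var_B) and synthetic region masks; the paper's exact formula and layer segmentation are not reproduced.

```python
# Minimal sketch of a contrast-to-noise ratio between two segmented regions.
import numpy as np

def cnr(image, mask_a, mask_b):
    """CNR between two regions, assuming the common definition above."""
    a = image[mask_a].astype(float)
    b = image[mask_b].astype(float)
    return abs(a.mean() - b.mean()) / np.sqrt(a.var() + b.var())

rng = np.random.default_rng(7)
octa = rng.normal(0.2, 0.05, (128, 128))            # static-tissue background
octa[60:80, :] += rng.normal(0.5, 0.1, (20, 128))   # hypothetical flow (vascular) layer
flow_mask = np.zeros_like(octa, bool)
flow_mask[60:80, :] = True
static_mask = ~flow_mask
print(f"CNR = {cnr(octa, flow_mask, static_mask):.2f}")
```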
JPEG2000 still image coding quality.
Chen, Tzong-Jer; Lin, Sheng-Chieh; Lin, You-Chen; Cheng, Ren-Gui; Lin, Li-Hui; Wu, Wei
2013-10-01
This work compares the image quality delivered by two popular JPEG2000 programs. Two medical image compression algorithms are both coded using JPEG2000, but they differ in interface, convenience, speed of computation, and the characteristic options influenced by the encoder, quantization, tiling, etc. The differences in image quality and compression ratio are also affected by the modality and the compression algorithm implementation. Do they provide the same quality? The qualities of compressed medical images from two image compression programs named Apollo and JJ2000 were evaluated extensively using objective metrics. These algorithms were applied to three medical image modalities at various compression ratios ranging from 10:1 to 100:1. Following that, the quality of the reconstructed images was evaluated using five objective metrics. The Spearman rank correlation coefficients were measured under every metric in the two programs. We found that JJ2000 and Apollo exhibited indistinguishable image quality for all images evaluated using the above five metrics (r > 0.98, p < 0.001). It can be concluded that the image quality of the JJ2000 and Apollo algorithms is statistically equivalent for medical image compression.
Nicol, Sam; Wiederholt, Ruscena; Diffendorfer, James E.; Mattsson, Brady; Thogmartin, Wayne E.; Semmens, Darius J.; Lopez-Hoffman, Laura; Norris, Ryan
2016-01-01
Mobile species with complex spatial dynamics can be difficult to manage because their population distributions vary across space and time, and because the consequences of managing particular habitats are uncertain when evaluated at the level of the entire population. Metrics to assess the importance of habitats and pathways connecting habitats in a network are necessary to guide a variety of management decisions. Given the many metrics developed for spatially structured models, it can be challenging to select the most appropriate one for a particular decision. To guide the management of spatially structured populations, we define three classes of metrics describing habitat and pathway quality based on their data requirements (graph-based, occupancy-based, and demographic-based metrics) and synopsize the ecological literature relating to these classes. Applying the first steps of a formal decision-making approach (problem framing, objectives, and management actions), we assess the utility of metrics for particular types of management decisions. Our framework can help managers with problem framing, choosing metrics of habitat and pathway quality, and to elucidate the data needs for a particular metric. Our goal is to help managers to narrow the range of suitable metrics for a management project, and aid in decision-making to make the best use of limited resources.
Image Quality Assessment Based on Local Linear Information and Distortion-Specific Compensation.
Wang, Hanli; Fu, Jie; Lin, Weisi; Hu, Sudeng; Kuo, C-C Jay; Zuo, Lingxuan
2016-12-14
Image Quality Assessment (IQA) is a fundamental yet constantly developing task for computer vision and image processing. Most IQA evaluation mechanisms are based on the pertinence of subjective and objective estimation. Each image distortion type has its own property correlated with human perception. However, this intrinsic property may not be fully exploited by existing IQA methods. In this paper, we make two main contributions to the IQA field. First, a novel IQA method is developed based on a local linear model that examines the distortion between the reference and the distorted images for better alignment with human visual experience. Second, a distortion-specific compensation strategy is proposed to offset the negative effect on IQA modeling caused by different image distortion types. These score offsets are learned from several known distortion types. Furthermore, for an image with an unknown distortion type, a Convolutional Neural Network (CNN) based method is proposed to compute the score offset automatically. Finally, an integrated IQA metric is proposed by combining the aforementioned two ideas. Extensive experiments are performed to verify the proposed IQA metric, which demonstrate that the local linear model is useful in human perception modeling, especially for individual image distortion, and the overall IQA method outperforms several state-of-the-art IQA approaches.
NASA Astrophysics Data System (ADS)
Han, Xiao; Pearson, Erik; Pelizzari, Charles; Al-Hallaq, Hania; Sidky, Emil Y.; Bian, Junguo; Pan, Xiaochuan
2015-06-01
A kilo-voltage (kV) cone-beam computed tomography (CBCT) unit mounted onto a linear accelerator treatment system, often referred to as an on-board imager (OBI), plays an increasingly important role in image-guided radiation therapy. While the FDK algorithm is currently used for reconstructing images from clinical OBI data, optimization-based reconstruction has also been investigated for OBI CBCT. An optimization-based reconstruction involves numerous parameters, which can significantly impact reconstruction properties (or utility). The success of an optimization-based reconstruction for a particular class of practical applications thus relies strongly on appropriate selection of parameter values. In this work, we focus on tailoring constrained-TV-minimization-based reconstruction, an optimization-based reconstruction previously shown to have potential for CBCT imaging conditions of practical interest, to OBI imaging through appropriate selection of parameter values. In particular, for given real data of phantoms and patients collected with OBI CBCT, we first devise utility metrics specific to OBI-quality-assurance tasks and then apply them to guide the selection of parameter values in constrained-TV-minimization-based reconstruction. The study results show that the reconstructions improve upon clinical FDK reconstruction in both visual and quantitative assessments in terms of the devised utility metrics.
Quality and noise measurements in mobile phone video capture
NASA Astrophysics Data System (ADS)
Petrescu, Doina; Pincenti, John
2011-02-01
The quality of videos captured with mobile phones has become increasingly important particularly since resolutions and formats have reached a level that rivals the capabilities available in the digital camcorder market, and since many mobile phones now allow direct playback on large HDTVs. The video quality is determined by the combined quality of the individual parts of the imaging system including the image sensor, the digital color processing, and the video compression, each of which has been studied independently. In this work, we study the combined effect of these elements on the overall video quality. We do this by evaluating the capture under various lighting, color processing, and video compression conditions. First, we measure full reference quality metrics between encoder input and the reconstructed sequence, where the encoder input changes with light and color processing modifications. Second, we introduce a system model which includes all elements that affect video quality, including a low light additive noise model, ISP color processing, as well as the video encoder. Our experiments show that in low light conditions and for certain choices of color processing the system level visual quality may not improve when the encoder becomes more capable or the compression ratio is reduced.
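The first measurement stage described above is a full-reference comparison between the encoder input and the reconstructed sequence. The sketch below uses frame-wise PSNR purely as a stand-in for whichever full-reference metric the study actually applied; the sequences are synthetic placeholders.

```python
# Minimal sketch of a frame-wise full-reference measurement (PSNR) over a sequence.
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between two frames."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak**2 / mse)

def sequence_psnr(ref_frames, test_frames):
    """Average PSNR across corresponding frames of two sequences."""
    return np.mean([psnr(r, t) for r, t in zip(ref_frames, test_frames)])

rng = np.random.default_rng(8)
encoder_input = [rng.integers(0, 256, (72, 96)) for _ in range(10)]
reconstructed = [np.clip(f + rng.normal(0, 4, f.shape), 0, 255) for f in encoder_input]
print(f"mean PSNR = {sequence_psnr(encoder_input, reconstructed):.1f} dB")
```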
Safety considerations in providing allergen immunotherapy in the office.
Mattos, Jose L; Lee, Stella
2016-06-01
This review highlights the risks of allergy immunotherapy, methods to improve the quality and safety of allergy treatment, the current status of allergy quality metrics, and the future of quality measurement. In the current healthcare environment, the emphasis on outcomes measurement is increasing, and providers must be better equipped in the development, measurement, and reporting of safety and quality measures. Immunotherapy offers the only potential cure for allergic disease and asthma. Although well tolerated and effective, immunotherapy can be associated with serious consequences, including anaphylaxis and death. Many predisposing factors and errors that lead to serious systemic reactions are preventable, and the evaluation and implementation of quality measures are crucial to developing a safe immunotherapy practice. Although quality metrics for immunotherapy are in their infancy, they will become increasingly sophisticated, and providers will face increased pressure to deliver safe, high-quality, patient-centered, evidence-based, and efficient allergy care. The establishment of safety in the allergy office involves recognition of potential risk factors for anaphylaxis, the development and measurement of quality metrics, and changing systems-wide practices if needed. Quality improvement is a continuous process, and although national allergy-specific quality metrics do not yet exist, they are in development.
Validation of a Quality Management Metric
2000-09-01
A quality management metric (QMM) was used to measure the performance of ten software managers on Department of Defense (DoD) software development programs. Informal verification and validation of the metric compared the QMM score to an overall program success score for the entire program and yielded a positive correlation. The results of applying the QMM can be used to characterize the quality of software management and can serve as a template to improve software management performance. Future work includes further refining the QMM and applying the QMM scores to provide feedback.
MASTtreedist: visualization of tree space based on maximum agreement subtree.
Huang, Hong; Li, Yongji
2013-01-01
The phylogenetic tree construction process might produce many candidate trees as the "best estimates." As the number of constructed phylogenetic trees grows, the need to efficiently compare their topological or physical structures arises. One of the tree-comparison software tools, Mesquite's Tree Set Viz module, allows the rapid and efficient visualization of tree comparison distances using multidimensional scaling (MDS). Tree-distance measures, such as Robinson-Foulds (RF), for the topological distance among different trees have been implemented in Tree Set Viz. New and more sophisticated measures such as the Maximum Agreement Subtree (MAST) can be continuously built upon Tree Set Viz. MAST can detect the common substructures among trees and provide more precise information on the similarity of the trees, but it is NP-hard and difficult to implement. In this article, we present a practical tree-distance metric: MASTtreedist, a MAST-based comparison metric in Mesquite's Tree Set Viz module. In this metric, efficient optimizations for the maximum weight clique problem are applied. The results suggest that the proposed method can efficiently compute the MAST distances among trees, and that such tree topological differences can be translated into a scatter of points in two-dimensional (2D) space. We also provide a statistical evaluation of the provided measures with respect to RF using experimental data sets. This new comparison module provides a tree-tree pairwise comparison metric based on the differences in the number of MAST leaves among constructed phylogenetic trees. Such a phylogenetic tree comparison metric improves the visualization of taxa differences by discriminating small divergences of subtree structures for phylogenetic tree reconstruction.
Compression performance comparison in low delay real-time video for mobile applications
NASA Astrophysics Data System (ADS)
Bivolarski, Lazar
2012-10-01
This article compares the performance of several current video coding standards under low-delay, real-time conditions in a resource-constrained environment. The comparison is performed using the same content and a mix of objective and perceptual quality metrics. The metric results for the different coding schemes are analyzed from the point of view of user perception and quality of service. Multiple standards are compared: MPEG-2, MPEG-4, and MPEG-AVC, as well as H.263. The metrics used in the comparison include SSIM, VQM, and DVQ. Subjective evaluation and quality of service are discussed from the point of view of perceptual metrics and their incorporation in the coding scheme development process. The performance and the correlation of results are presented as a predictor of the performance of video compression schemes.
ERIC Educational Resources Information Center
Brochard, Renaud; Tassin, Maxime; Zagar, Daniel
2013-01-01
The present research aimed to investigate whether, as previously observed with pictures, background auditory rhythm would also influence visual word recognition. In a lexical decision task, participants were presented with bisyllabic visual words, segmented into two successive groups of letters, while an irrelevant strongly metric auditory…
No-reference image quality assessment for horizontal-path imaging scenarios
NASA Astrophysics Data System (ADS)
Rios, Carlos; Gladysz, Szymon
2013-05-01
There exist several image-enhancement algorithms and tasks associated with imaging through turbulence that depend on defining the quality of an image. Examples include: "lucky imaging", choosing the width of the inverse filter for image reconstruction, or stopping iterative deconvolution. We collected a number of image quality metrics found in the literature. Particularly interesting are the blind, "no-reference" metrics. We discuss ways of evaluating the usefulness of these metrics, even when a fully objective comparison is impossible because of the lack of a reference image. Metrics are tested on simulated and real data. Field data comes from experiments performed by the NATO SET 165 research group over a 7 km distance in Dayton, Ohio.
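One family of blind metrics surveyed for such tasks is simple sharpness or gradient-energy measures, which can be used, for example, to rank frames for lucky imaging. The sketch below is a generic example of that kind of measure, not one of the specific metrics collected in the paper.

```python
# Minimal sketch of a blind (no-reference) gradient-energy sharpness measure
# and its use for selecting the sharpest ("lucky") frames in a sequence.
import numpy as np

def gradient_sharpness(frame):
    """Mean squared gradient magnitude of a frame (higher = sharper)."""
    gy, gx = np.gradient(frame.astype(float))
    return float(np.mean(gx**2 + gy**2))

def pick_lucky_frames(frames, keep_fraction=0.1):
    """Return indices of the sharpest fraction of frames."""
    scores = np.array([gradient_sharpness(f) for f in frames])
    n_keep = max(1, int(round(keep_fraction * len(frames))))
    return np.argsort(scores)[::-1][:n_keep]

rng = np.random.default_rng(9)
frames = [rng.normal(0, 1, (64, 64)).cumsum(axis=0).cumsum(axis=1) for _ in range(50)]
print(pick_lucky_frames(frames))
```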
Learning a No-Reference Quality Assessment Model of Enhanced Images With Big Data.
Gu, Ke; Tao, Dacheng; Qiao, Jun-Fei; Lin, Weisi
2018-04-01
In this paper, we investigate the problem of image quality assessment (IQA) and enhancement via machine learning. This issue has long attracted a wide range of attention in the computational intelligence and image processing communities, since, for many practical applications, e.g., object detection and recognition, raw images usually need to be appropriately enhanced to raise the visual quality (e.g., visibility and contrast). In fact, proper enhancement can noticeably improve the quality of input images, even beyond that of the originally captured images, which are generally thought to be of the best quality. In this paper, we present two main contributions. The first contribution is to develop a new no-reference (NR) IQA model. Given an image, our quality measure first extracts 17 features through analysis of contrast, sharpness, brightness and more, and then yields a measure of visual quality using a regression module, which is learned with big-data training samples that are much bigger than the size of relevant image data sets. The results of experiments on nine data sets validate the superiority and efficiency of our blind metric compared with typical state-of-the-art full-reference, reduced-reference, and no-reference IQA methods. The second contribution is that a robust image enhancement framework is established based on quality optimization. For an input image, guided by the proposed NR-IQA measure, we conduct histogram modification to successively rectify image brightness and contrast to a proper level. Thorough tests demonstrate that our framework can well enhance natural images, low-contrast images, low-light images, and dehazed images. The source code will be released at https://sites.google.com/site/guke198701/publications.
Visual acuity estimation from simulated images
NASA Astrophysics Data System (ADS)
Duncan, William J.
Simulated images can provide insight into the performance of optical systems, especially those with complicated features. Many modern solutions for presbyopia and cataracts feature sophisticated power geometries or diffractive elements. Some intraocular lenses (IOLs) arrive at multifocality through the use of a diffractive surface, and multifocal contact lenses have a radially varying power profile. These types of elements induce simultaneous vision and affect vision much differently than a monofocal ophthalmic appliance. With myriad multifocal ophthalmics available on the market, it is difficult to compare or assess performance in ways that affect wearers of such appliances. Here we present software and algorithmic metrics that can be used to qualitatively and quantitatively compare ophthalmic element performance, with specific examples of bifocal intraocular lenses (IOLs) and multifocal contact lenses. We anticipate this study, its methods, and its results will serve as a starting point for more complex models of vision and visual acuity in a setting where modeling is advantageous. Generating simulated images of real-scene scenarios is useful for patients in assessing vision quality with a certain appliance. Visual acuity estimation can serve as an important tool for manufacturing and design of ophthalmic appliances.
Blew, Robert M; Lee, Vinson R; Farr, Joshua N; Schiferl, Daniel J; Going, Scott B
2014-02-01
Peripheral quantitative computed tomography (pQCT) is an essential tool for assessing bone parameters of the limbs, but subject movement and its impact on image quality remains a challenge to manage. The current approach to determine image viability is by visual inspection, but pQCT lacks a quantitative evaluation. Therefore, the aims of this study were to (1) examine the reliability of a qualitative visual inspection scale and (2) establish a quantitative motion assessment methodology. Scans were performed on 506 healthy girls (9-13 years) at diaphyseal regions of the femur and tibia. Scans were rated for movement independently by three technicians using a linear, nominal scale. Quantitatively, a ratio of movement to limb size (%Move) provided a measure of movement artifact. A repeat-scan subsample (n = 46) was examined to determine %Move's impact on bone parameters. Agreement between measurers was strong (intraclass correlation coefficient = 0.732 for tibia, 0.812 for femur), but greater variability was observed in scans rated 3 or 4, the delineation between repeat and no repeat. The quantitative approach found ≥95% of subjects had %Move <25 %. Comparison of initial and repeat scans by groups above and below 25% initial movement showed significant differences in the >25 % grouping. A pQCT visual inspection scale can be a reliable metric of image quality, but technicians may periodically mischaracterize subject motion. The presented quantitative methodology yields more consistent movement assessment and could unify procedure across laboratories. Data suggest a delineation of 25% movement for determining whether a diaphyseal scan is viable or requires repeat.
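The abstract defines %Move only as a ratio of movement to limb size, so the sketch below is an illustrative reconstruction rather than the authors' operational definition: it takes the largest excursion of the limb edge from its median position in a cross-sectional scan and expresses it as a percentage of the median limb width.

```python
# Minimal, assumed sketch of a %Move-style movement-to-limb-size ratio.
import numpy as np

def percent_move(scan, threshold):
    """Maximum lateral excursion of the limb edge as a % of median limb width."""
    mask = scan > threshold                        # binary limb mask, one row per scan line
    lefts = np.argmax(mask, axis=1).astype(float)  # left edge position per row
    widths = mask.sum(axis=1).astype(float)
    valid = widths > 0
    excursion = np.max(np.abs(lefts[valid] - np.median(lefts[valid])))
    return 100.0 * excursion / np.median(widths[valid])

rng = np.random.default_rng(10)
scan = np.zeros((60, 200))
for i in range(60):                                # limb cross-section with small shifts
    start = 70 + int(round(rng.normal(0, 1.5)))
    scan[i, start:start + 60] = 1.0
print(f"%Move = {percent_move(scan, 0.5):.1f}%")
```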
Software Quality Assurance Metrics
NASA Technical Reports Server (NTRS)
McRae, Kalindra A.
2004-01-01
Software Quality Assurance (SQA) is a planned and systematic set of activities that ensures that software life cycle processes and products conform to requirements, standards, and procedures. In software development, software quality means meeting requirements and achieving a degree of excellence and refinement in a project or product. Software Quality is a set of attributes of a software product by which its quality is described and evaluated; the set of attributes includes functionality, reliability, usability, efficiency, maintainability, and portability. Software Metrics help us understand the technical process that is used to develop a product: the process is measured to improve it, and the product is measured to increase quality throughout the life cycle of software. Software Metrics are measurements of the quality of software. Software is measured to indicate the quality of the product, to assess the productivity of the people who produce the product, to assess the benefits derived from new software engineering methods and tools, to form a baseline for estimation, and to help justify requests for new tools or additional training. Any part of software development can be measured. If Software Metrics are implemented in software development, they can save time and money and allow the organization to identify the causes of defects that have the greatest effect on software development. In the summer of 2004, I worked with Cynthia Calhoun and Frank Robinson in the Software Assurance/Risk Management department. My task was to research, collect, compile, and analyze SQA Metrics that have been used in other projects but are not currently used by the SA team, and to report them to the Software Assurance team to see whether any metrics can be implemented in their software assurance life cycle process.
Gradient Magnitude Similarity Deviation: A Highly Efficient Perceptual Image Quality Index.
Xue, Wufeng; Zhang, Lei; Mou, Xuanqin; Bovik, Alan C
2014-02-01
It is an important task to faithfully evaluate the perceptual quality of output images in many applications, such as image compression, image restoration, and multimedia streaming. A good image quality assessment (IQA) model should not only deliver high prediction accuracy, but also be computationally efficient. The efficiency of IQA metrics is becoming particularly important due to the increasing proliferation of high-volume visual data in high-speed networks. We present a new effective and efficient IQA model, called gradient magnitude similarity deviation (GMSD). Image gradients are sensitive to image distortions, while different local structures in a distorted image suffer different degrees of degradation. This motivates us to explore the use of the global variation of a gradient-based local quality map for overall image quality prediction. We find that the pixel-wise gradient magnitude similarity (GMS) between the reference and distorted images, combined with a novel pooling strategy, the standard deviation of the GMS map, can accurately predict perceptual image quality. The resulting GMSD algorithm is much faster than most state-of-the-art IQA methods, and delivers highly competitive prediction accuracy. MATLAB source code of GMSD can be downloaded at http://www4.comp.polyu.edu.hk/~cslzhang/IQA/GMSD/GMSD.htm.
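A minimal sketch of the GMSD computation described in this abstract, assuming grayscale NumPy arrays; the constant c and the use of Prewitt-style gradients follow the general recipe, but the exact values here are illustrative rather than the published ones.

```python
import numpy as np
from scipy import ndimage

def gmsd(reference: np.ndarray, distorted: np.ndarray, c: float = 170.0) -> float:
    """Gradient Magnitude Similarity Deviation between two grayscale images."""
    def grad_mag(img):
        gx = ndimage.prewitt(img.astype(float), axis=0)
        gy = ndimage.prewitt(img.astype(float), axis=1)
        return np.hypot(gx, gy)

    g_ref, g_dis = grad_mag(reference), grad_mag(distorted)
    # Pixel-wise gradient magnitude similarity map
    gms = (2.0 * g_ref * g_dis + c) / (g_ref ** 2 + g_dis ** 2 + c)
    # Pooling by standard deviation: larger deviation -> lower perceptual quality
    return float(np.std(gms))
```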
Assessment of rural soundscapes with high-speed train noise.
Lee, Pyoung Jik; Hong, Joo Young; Jeon, Jin Yong
2014-06-01
In the present study, rural soundscapes with high-speed train noise were assessed through laboratory experiments. A total of ten sites with varying landscape metrics were chosen for audio-visual recording. The acoustical characteristics of the high-speed train noise were analyzed using various noise level indices. Landscape metrics such as the percentage of natural features (NF) and Shannon's diversity index (SHDI) were adopted to evaluate the landscape features of the ten sites. Laboratory experiments were then performed with 20 well-trained listeners to investigate the perception of high-speed train noise in rural areas. The experiments consisted of three parts: 1) visual-only condition, 2) audio-only condition, and 3) combined audio-visual condition. The results showed that subjects' preference for visual images was significantly related to NF, the number of land types, and the A-weighted equivalent sound pressure level (LAeq). In addition, the visual images significantly influenced the noise annoyance, and LAeq and NF were the dominant factors affecting the annoyance from high-speed train noise in the combined audio-visual condition. In addition, Zwicker's loudness (N) was highly correlated with the annoyance from high-speed train noise in both the audio-only and audio-visual conditions. © 2013.
Liu, Chun Li; Xu, Yue Quan; Wu, Hui; Chen, Si Si; Guo, Ji Jun
2013-11-25
Citation counts for peer-reviewed articles and the impact factor of journals have long been indicators of article importance or quality. In the Web 2.0 era, growing numbers of scholars are using scholarly social network tools to communicate scientific ideas with colleagues, thereby making traditional indicators less sufficient, immediate, and comprehensive. In these new situations, the altmetric indicators offer alternative measures that reflect the multidimensional nature of scholarly impact in an immediate, open, and individualized way. In this direction of research, some studies have demonstrated the correlation between altmetrics and traditional metrics with different samples. However, up to now, there has been relatively little research done on the dimension and interaction structure of altmetrics. Our goal was to reveal the number of dimensions that altmetric indicators should be divided into and the structure in which altmetric indicators interact with each other. Because an article-level metrics dataset is collected from scholarly social media and open access platforms, it is one of the most robust samples available to study altmetric indicators. Therefore, we downloaded a large dataset containing activity data in 20 types of metrics present in 33,128 academic articles from the application programming interface website. First, we analyzed the correlation among altmetric indicators using Spearman rank correlation. Second, we visualized the multiple correlation coefficient matrixes with graduated colors. Third, inputting the correlation matrix, we drew an MDS diagram to demonstrate the dimension for altmetric indicators. For correlation structure, we used a social network map to represent the social relationships and the strength of relations. We found that the distribution of altmetric indicators is significantly non-normal and positively skewed. The distribution of downloads and page views follows the Pareto law. Moreover, we found that the Spearman coefficients from 91.58% of the pairs of variables indicate statistical significance at the .01 level. The non-metric MDS map divided the 20 altmetric indicators into three clusters: traditional metrics, active altmetrics, and inactive altmetrics. The social network diagram showed two subgroups that are tied to each other but not to other groups, thus indicating an intersection between altmetrics and traditional metric indicators. Altmetrics complement, and most correlate significantly with, traditional measures. Therefore, in future evaluations of the social impact of articles, we should consider not only traditional metrics but also active altmetrics. There may also be a transfer phenomenon for the social impact of academic articles. The impact transfer path has transfer, or intermediate, stations that transport and accelerate article social impact from active altmetrics to traditional metrics and vice versa. This discovery will be helpful to explain the impact transfer mechanism of articles in the Web 2.0 era. Hence, altmetrics are in fact superior to traditional filters for assessing scholarly impact in multiple dimensions and in terms of social structure.
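An illustrative sketch of the analysis pipeline this abstract describes: Spearman rank correlations among altmetric indicators, converted to dissimilarities and embedded with non-metric MDS. The column names and random data are placeholders, not the study's dataset.

```python
import numpy as np
import pandas as pd
from scipy.stats import spearmanr
from sklearn.manifold import MDS

# Placeholder article-level metrics table (rows = articles, columns = indicators)
metrics = pd.DataFrame(np.random.poisson(3, size=(1000, 5)),
                       columns=["citations", "downloads", "views",
                                "tweets", "bookmarks"])

rho, pval = spearmanr(metrics.values)      # indicator-by-indicator correlation matrix
dissimilarity = 1.0 - rho                  # convert correlation to a distance-like measure
mds = MDS(n_components=2, metric=False, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissimilarity)  # 2-D layout used to group indicators into clusters
```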
Developing and Evaluating a Target-Background Similarity Metric for Camouflage Detection
Lin, Chiuhsiang Joe; Chang, Chi-Chan; Liu, Bor-Shong
2014-01-01
Background Measurement of camouflage performance is of fundamental importance for military stealth applications. The goal of camouflage assessment algorithms is to automatically assess the effect of camouflage in agreement with human detection responses. In a previous study, we found that the Universal Image Quality Index (UIQI) correlated well with psychophysical measures, and it could potentially serve as a camouflage assessment tool. Methodology In this study, we quantify the agreement between the camouflage similarity index and psychophysical results. We compare several image quality indexes for computational evaluation of camouflage effectiveness, and present the results of an extensive human visual experiment conducted to evaluate the performance of several camouflage assessment algorithms and analyze the strengths and weaknesses of these algorithms. Significance The experimental data demonstrate the effectiveness of the approach, and the correlation coefficient obtained with the UIQI was higher than those of the other methods. This approach was highly correlated with the human target-searching results. It also showed that this method is an objective and effective camouflage performance evaluation method because it considers the human visual system and image structure, which makes it consistent with the subjective evaluation results. PMID:24498310
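A minimal sketch of the Universal Image Quality Index (UIQI) referenced above, computed here over whole images for brevity; the published index is normally evaluated in small sliding windows and averaged. Inputs are assumed to be grayscale NumPy arrays of equal shape.

```python
import numpy as np

def uiqi(x: np.ndarray, y: np.ndarray, eps: float = 1e-12) -> float:
    """Universal Image Quality Index between target/background patches x and y."""
    x, y = x.astype(float).ravel(), y.astype(float).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = np.mean((x - mx) * (y - my))
    # Product of correlation, luminance and contrast comparison terms
    return float(4.0 * cov * mx * my / ((vx + vy) * (mx ** 2 + my ** 2) + eps))
```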
NASA Technical Reports Server (NTRS)
Jackson, E. Bruce; Raney, David L.; Glaab, Louis J.; Derry, Stephen D.
2002-01-01
An assessment of a proposed configuration of a high-speed civil transport was conducted by using NASA and industry research pilots. The assessment was conducted to evaluate operational aspects of the configuration from a pilot's perspective, with the primary goal being to identify potential deficiencies in the configuration. The configuration was evaluated within and at the limits of the design operating envelope to determine the suitability of the configuration to maneuver in a typical mission as well as in emergency or envelope-limit conditions. The Cooper-Harper rating scale was used to evaluate the flying qualities of the configuration. A summary flying qualities metric was also calculated. The assessment was performed in the Langley six-degree-of-freedom Visual Motion Simulator. The effect of a restricted cockpit field-of-view due to obstruction by the vehicle nose was not included in this study. Tasks include landings, takeoffs, climbs, descents, overspeeds, coordinated turns, and recoveries from envelope limit excursions. Emergencies included engine failures, loss of stability augmentation, engine inlet unstarts, and emergency descents. Minimum control speeds and takeoff decision, rotation, and safety speeds were also determined.
Supporting Research Impact Metrics in Academic Libraries: A Case Study
ERIC Educational Resources Information Center
Braun, Steven
2017-01-01
Measuring research impact has become a nearly ubiquitous facet of scholarly communication. At the University of Minnesota Medical School, new administrative directives have directly tied impact metrics to faculty assessment, promotion, and tenure. In this paper, I describe a platform for the analysis and visualization of research impact that was…
Productivity in Pediatric Palliative Care: Measuring and Monitoring an Elusive Metric.
Kaye, Erica C; Abramson, Zachary R; Snaman, Jennifer M; Friebert, Sarah E; Baker, Justin N
2017-05-01
Workforce productivity is poorly defined in health care. Particularly in the field of pediatric palliative care (PPC), the absence of consensus metrics impedes aggregation and analysis of data to track workforce efficiency and effectiveness. Lack of uniformly measured data also compromises the development of innovative strategies to improve productivity and hinders investigation of the link between productivity and quality of care, which are interrelated but not interchangeable. To review the literature regarding the definition and measurement of productivity in PPC; to identify barriers to productivity within traditional PPC models; and to recommend novel metrics to study productivity as a component of quality care in PPC. PubMed ® and Cochrane Database of Systematic Reviews searches for scholarly literature were performed using key words (pediatric palliative care, palliative care, team, workforce, workflow, productivity, algorithm, quality care, quality improvement, quality metric, inpatient, hospital, consultation, model) for articles published between 2000 and 2016. Organizational searches of Center to Advance Palliative Care, National Hospice and Palliative Care Organization, National Association for Home Care & Hospice, American Academy of Hospice and Palliative Medicine, Hospice and Palliative Nurses Association, National Quality Forum, and National Consensus Project for Quality Palliative Care were also performed. Additional semistructured interviews were conducted with directors from seven prominent PPC programs across the U.S. to review standard operating procedures for PPC team workflow and productivity. Little consensus exists in the PPC field regarding optimal ways to define, measure, and analyze provider and program productivity. Barriers to accurate monitoring of productivity include difficulties with identification, measurement, and interpretation of metrics applicable to an interdisciplinary care paradigm. In the context of inefficiencies inherent to traditional consultation models, novel productivity metrics are proposed. Further research is needed to determine optimal metrics for monitoring productivity within PPC teams. Innovative approaches should be studied with the goal of improving efficiency of care without compromising value. Copyright © 2016 American Academy of Hospice and Palliative Medicine. Published by Elsevier Inc. All rights reserved.
Image quality assessment metric for frame accumulated image
NASA Astrophysics Data System (ADS)
Yu, Jianping; Li, Gang; Wang, Shaohui; Lin, Ling
2018-01-01
Medical image quality determines the accuracy of diagnosis, and gray-scale resolution is an important parameter for measuring image quality. However, current objective metrics are not well suited to assessing medical images obtained by frame accumulation technology: they pay little attention to gray-scale resolution, are based mainly on spatial resolution, and are limited to the 256 gray levels of existing display devices. Thus, this paper proposes a metric, the "mean signal-to-noise ratio" (MSNR), based on signal-to-noise considerations, in order to evaluate frame-accumulated medical image quality more reasonably. We demonstrate its potential application through a series of images acquired under a constant illumination signal. Here, the mean image of a sufficiently large number of images was regarded as the reference image. Several groups of images produced by different amounts of frame accumulation were formed and their MSNR values calculated. The experimental results show that, compared with other quality assessment methods, the metric is simpler, more effective, and more suitable for assessing frame-accumulated images that surpass the gray scale and precision of the original image.
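A hedged sketch of a mean signal-to-noise style measurement for frame accumulation, following the setup described above: the mean of a large stack of frames serves as the reference, and images accumulated from fewer frames are scored against it. The exact MSNR definition is given in the paper; this version simply reports an SNR in decibels against the reference.

```python
import numpy as np

def frame_accumulate(frames: np.ndarray, n: int) -> np.ndarray:
    """Average the first n frames of a (num_frames, H, W) stack."""
    return frames[:n].mean(axis=0)

def snr_db(accumulated: np.ndarray, reference: np.ndarray) -> float:
    """Signal-to-noise ratio of an accumulated image relative to the mean-of-stack reference."""
    noise = accumulated - reference
    signal_power = np.mean(reference.astype(float) ** 2)
    noise_power = np.mean(noise.astype(float) ** 2) + 1e-12
    return float(10.0 * np.log10(signal_power / noise_power))
```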
López-Sendón, José; González-Juanatey, José Ramón; Pinto, Fausto; Cuenca Castillo, José; Badimón, Lina; Dalmau, Regina; González Torrecilla, Esteban; López-Mínguez, José Ramón; Maceira, Alicia M; Pascual-Figal, Domingo; Pomar Moya-Prats, José Luis; Sionis, Alessandro; Zamorano, José Luis
2015-11-01
Cardiology practice requires complex organization that impacts overall outcomes and may differ substantially among hospitals and communities. The aim of this consensus document is to define quality markers in cardiology, including markers to measure the quality of results (outcomes metrics) and quality measures related to better results in clinical practice (performance metrics). The document is mainly intended for the Spanish health care system and may serve as a basis for similar documents in other countries. Copyright © 2015 Sociedad Española de Cardiología. Published by Elsevier España, S.L.U. All rights reserved.
Impact of artifact removal on ChIP quality metrics in ChIP-seq and ChIP-exo data
Carroll, Thomas S.; Liang, Ziwei; Salama, Rafik; Stark, Rory; de Santiago, Ines
2014-01-01
With the advent of ChIP-seq multiplexing technologies and the subsequent increase in ChIP-seq throughput, the development of working standards for the quality assessment of ChIP-seq studies has received significant attention. The ENCODE consortium's large scale analysis of transcription factor binding and epigenetic marks as well as concordant work on ChIP-seq by other laboratories has established a new generation of ChIP-seq quality control measures. The use of these metrics alongside common processing steps has however not been evaluated. In this study, we investigate the effects of blacklisting and removal of duplicated reads on established metrics of ChIP-seq quality and show that the interpretation of these metrics is highly dependent on the ChIP-seq preprocessing steps applied. Further to this we perform the first investigation of the use of these metrics for ChIP-exo data and make recommendations for the adaptation of the NSC statistic to allow for the assessment of ChIP-exo efficiency. PMID:24782889
Comparative performance evaluation of a new a-Si EPID that exceeds quad high-definition resolution.
McConnell, Kristen A; Alexandrian, Ara; Papanikolaou, Niko; Stathakis, Sotiri
2018-01-01
Electronic portal imaging devices (EPIDs) are an integral part of the radiation oncology workflow for treatment setup verification. Several commercial EPID implementations are currently available, each with varying capabilities. To standardize performance evaluation, Task Group Report 58 (TG-58) and TG-142 outline specific image quality metrics to be measured. A LinaTech Image Viewing System (IVS), with the highest commercially available pixel matrix (2688x2688 pixels), was independently evaluated and compared to an Elekta iViewGT (1024x1024 pixels) and a Varian aSi-1000 (1024x768 pixels) using a PTW EPID QC Phantom. The IVS, iViewGT, and aSi-1000 were each used to acquire 20 images of the PTW QC Phantom. The QC phantom was placed on the couch and aligned at isocenter. The images were exported and analyzed using the epidSoft image quality assurance (QA) software. The reported metrics were signal linearity, isotropy of signal linearity, signal-tonoise ratio (SNR), low contrast resolution, and high-contrast resolution. These values were compared between the three EPID solutions. Computed metrics demonstrated comparable results between the EPID solutions with the IVS outperforming the aSi-1000 and iViewGT in the low and high-contrast resolution analysis. The performance of three commercial EPID solutions have been quantified, evaluated, and compared using results from the PTW QC Phantom. The IVS outperformed the other panels in low and high-contrast resolution, but to fully realize the benefits of the IVS, the selection of the monitor on which to view the high-resolution images is important to prevent down sampling and visual of resolution.
NASA Astrophysics Data System (ADS)
Choi, Young-In; Ahn, Jaemyung
2018-04-01
Earned value management (EVM) is a methodology for monitoring and controlling the performance of a project based on a comparison between planned and actual cost/schedule. This study proposes a concept of hybrid earned value management (H-EVM) that integrates the traditional EVM metrics with information on the technology readiness level. The proposed concept can reflect the progress of a project in a sensitive way and provides short-term perspective complementary to the traditional EVM metrics. A two-dimensional visualization on the cost/schedule status of a project reflecting both of the traditional EVM (long-term perspective) and the proposed H-EVM (short-term perspective) indices is introduced. A case study on the management of a new space launch vehicle development program is conducted to demonstrate the effectiveness of the proposed H-EVM concept, associated metrics, and the visualization technique.
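The traditional EVM indices mentioned above are standard (CPI = EV/AC, SPI = EV/PV). The TRL-weighted value in the sketch below is only a hypothetical illustration of how readiness-level progress might be folded into an earned-value figure; it is not the H-EVM formula from the paper.

```python
from dataclasses import dataclass

@dataclass
class EvmSnapshot:
    planned_value: float   # PV: budgeted cost of work scheduled
    earned_value: float    # EV: budgeted cost of work performed
    actual_cost: float     # AC: actual cost of work performed

    @property
    def cost_performance_index(self) -> float:
        return self.earned_value / self.actual_cost

    @property
    def schedule_performance_index(self) -> float:
        return self.earned_value / self.planned_value

def trl_weighted_earned_value(budget: float, trl_now: int, trl_target: int,
                              trl_start: int = 1) -> float:
    """Hypothetical short-term earned value credited in proportion to TRL progress."""
    progress = (trl_now - trl_start) / (trl_target - trl_start)
    return budget * max(0.0, min(1.0, progress))
```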
75 FR 5040 - Extension of Period for Comments on Enhancement in the Quality of Patents
Federal Register 2010, 2011, 2012, 2013, 2014
2010-02-01
... patents, to identify appropriate indicia of quality, and to establish metrics for the measurement of the... issued patents, to identify appropriate indicia of quality, and to establish metrics for the measurement.... Kappos, Under Secretary of Commerce for Intellectual Property and Director of the United States Patent...
Lauricella, Leticia L; Costa, Priscila B; Salati, Michele; Pego-Fernandes, Paulo M; Terra, Ricardo M
2018-06-01
Database quality measurement should be considered a mandatory step to ensure an adequate level of confidence in data used for research and quality improvement. Several metrics have been described in the literature, but no standardized approach has been established. We aimed to describe a methodological approach applied to measure the quality and inter-rater reliability of a regional multicentric thoracic surgical database (Paulista Lung Cancer Registry). Data from the first 3 years of the Paulista Lung Cancer Registry underwent an audit process with 3 metrics: completeness, consistency, and inter-rater reliability. The first 2 methods were applied to the whole data set, and the last method was calculated using 100 cases randomized for direct auditing. Inter-rater reliability was evaluated using percentage of agreement between the data collector and auditor and through calculation of Cohen's κ and intraclass correlation. The overall completeness per section ranged from 0.88 to 1.00, and the overall consistency was 0.96. Inter-rater reliability showed many variables with high disagreement (>10%). For numerical variables, intraclass correlation was a better metric than inter-rater reliability. Cohen's κ showed that most variables had moderate to substantial agreement. The methodological approach applied to the Paulista Lung Cancer Registry showed that completeness and consistency metrics did not sufficiently reflect the real quality status of a database. The inter-rater reliability associated with κ and intraclass correlation was a better quality metric than completeness and consistency metrics because it could determine the reliability of specific variables used in research or benchmark reports. This report can be a paradigm for future studies of data quality measurement. Copyright © 2018 American College of Surgeons. Published by Elsevier Inc. All rights reserved.
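An illustrative sketch of two of the audit metrics named above: per-variable completeness and Cohen's kappa for collector-versus-auditor agreement on categorical fields. Variable names are placeholders; the intraclass correlation used for numeric fields would require a dedicated routine and is omitted here.

```python
import pandas as pd
from sklearn.metrics import cohen_kappa_score

def completeness(df: pd.DataFrame) -> pd.Series:
    """Fraction of non-missing entries per column of the registry table."""
    return 1.0 - df.isna().mean()

def inter_rater_kappa(collector: pd.Series, auditor: pd.Series) -> float:
    """Cohen's kappa between the original data collector and the auditor."""
    paired = pd.concat([collector, auditor], axis=1).dropna()
    return cohen_kappa_score(paired.iloc[:, 0], paired.iloc[:, 1])
```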
Modeling human comprehension of data visualizations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matzen, Laura E.; Haass, Michael Joseph; Divis, Kristin Marie
This project was inspired by two needs. The first is a need for tools to help scientists and engineers to design effective data visualizations for communicating information, whether to the user of a system, an analyst who must make decisions based on complex data, or in the context of a technical report or publication. Most scientists and engineers are not trained in visualization design, and they could benefit from simple metrics to assess how well their visualization's design conveys the intended message. In other words, will the most important information draw the viewer's attention? The second is the need for cognition-based metrics for evaluating new types of visualizations created by researchers in the information visualization and visual analytics communities. Evaluating visualizations is difficult even for experts. However, all visualization methods and techniques are intended to exploit the properties of the human visual system to convey information efficiently to a viewer. Thus, developing evaluation methods that are rooted in the scientific knowledge of the human visual system could be a useful approach. In this project, we conducted fundamental research on how humans make sense of abstract data visualizations, and how this process is influenced by their goals and prior experience. We then used that research to develop a new model, the Data Visualization Saliency Model, that can make accurate predictions about which features in an abstract visualization will draw a viewer's attention. The model is an evaluation tool that can address both of the needs described above, supporting both visualization research and Sandia mission needs.
Assessing the quality of restored images in optical long-baseline interferometry
NASA Astrophysics Data System (ADS)
Gomes, Nuno; Garcia, Paulo J. V.; Thiébaut, Éric
2017-03-01
Assessing the quality of aperture synthesis maps is relevant for benchmarking image reconstruction algorithms, for the scientific exploitation of data from optical long-baseline interferometers, and for the design/upgrade of new/existing interferometric imaging facilities. Although metrics have been proposed in these contexts, no systematic study has been conducted on the selection of a robust metric for quality assessment. This article addresses the question: what is the best metric to assess the quality of a reconstructed image? It starts by considering several metrics and selecting a few based on general properties. Then, a variety of image reconstruction cases are considered. The observational scenarios are phase closure and phase referencing at the Very Large Telescope Interferometer (VLTI), for a combination of two, three, four and six telescopes. End-to-end image reconstruction is accomplished with the MIRA software, and several merit functions are put to test. It is found that convolution by an effective point spread function is required for proper image quality assessment. The effective angular resolution of the images is superior to naive expectation based on the maximum frequency sampled by the array. This is due to the prior information used in the aperture synthesis algorithm and to the nature of the objects considered. The ℓ1-norm is the most robust of all considered metrics, because being linear it is less sensitive to image smoothing by high regularization levels. For the cases considered, this metric allows the implementation of automatic quality assessment of reconstructed images, with a performance similar to human selection.
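A minimal sketch of the ℓ1-type comparison described above, assuming both the ground-truth object and the reconstruction are gridded images and that an effective PSF (here a Gaussian matched to the effective resolution) is applied before comparison; the normalization choices are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def l1_quality(truth: np.ndarray, reconstruction: np.ndarray,
               psf_sigma_pix: float) -> float:
    """ℓ1 distance between PSF-convolved, flux-normalized truth and reconstruction."""
    t = gaussian_filter(truth.astype(float), psf_sigma_pix)
    r = gaussian_filter(reconstruction.astype(float), psf_sigma_pix)
    t /= t.sum()                        # compare normalized flux distributions
    r /= r.sum()
    return float(np.abs(t - r).sum())   # smaller value indicates a better reconstruction
```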
Structural texture similarity metrics for image analysis and retrieval.
Zujovic, Jana; Pappas, Thrasyvoulos N; Neuhoff, David L
2013-07-01
We develop new metrics for texture similarity that account for human visual perception and the stochastic nature of textures. The metrics rely entirely on local image statistics and allow substantial point-by-point deviations between textures that, according to human judgment, are essentially identical. The proposed metrics extend the ideas of structural similarity and are guided by research in texture analysis-synthesis. They are implemented using a steerable filter decomposition and incorporate a concise set of subband statistics, computed globally or in sliding windows. We conduct systematic tests to investigate metric performance in the context of "known-item search," the retrieval of textures that are "identical" to the query texture. This eliminates the need for cumbersome subjective tests, thus enabling comparisons with human performance on a large database. Our experimental results indicate that the proposed metrics outperform peak signal-to-noise ratio (PSNR), structural similarity metric (SSIM) and its variations, as well as state-of-the-art texture classification metrics, using standard statistical measures.
Hall, Lenwood W; Killen, William D
2006-01-01
This study was designed to assess trends in physical habitat and benthic communities (macroinvertebrates) annually in two agricultural streams (Del Puerto Creek and Salt Slough) in California's San Joaquin Valley from 2001 to 2005, determine the relationship between benthic communities and both water quality and physical habitat from both streams over the 5-year period, and compare benthic communities and physical habitat in both streams from 2001 to 2005. Physical habitat, measured with 10 metrics and a total score, was reported to be fairly stable over 5 years in Del Puerto Creek but somewhat variable in Salt Slough. Benthic communities, measured with 18 metrics, were reported to be marginally variable over time in Del Puerto Creek but fairly stable in Salt Slough. Rank correlation analysis for both water bodies combined showed that channel alteration, embeddedness, riparian buffer, and velocity/depth/diversity were the most important physical habitat metrics influencing the various benthic metrics. Correlations of water quality parameters and benthic community metrics for both water bodies combined showed that turbidity, dissolved oxygen, and conductivity were the most important water quality parameters influencing the different benthic metrics. A comparison of physical habitat metrics (including total score) for both water bodies over the 5-year period showed that habitat metrics were more positive in Del Puerto Creek when compared to Salt Slough. A comparison of benthic metrics in both water bodies showed that approximately one-third of the metrics were significantly different between the two water bodies. Generally, the more positive benthic metric scores were reported in Del Puerto Creek, which suggests that the communities in this creek are more robust than Salt Slough.
Carter, James L.; Resh, Vincent H.
2013-01-01
Biomonitoring programs based on benthic macroinvertebrates are well-established worldwide. Their value, however, depends on the appropriateness of the analytical techniques used. All United States State benthic macroinvertebrate biomonitoring programs were surveyed regarding the purposes of their programs, quality-assurance and quality-control procedures used, habitat and water-chemistry data collected, treatment of macroinvertebrate data prior to analysis, statistical methods used, and data-storage considerations. State regulatory mandates (59 percent of programs), biotic index development (17 percent), and Federal requirements (15 percent) were the most frequently reported purposes of State programs, with the specific tasks of satisfying the requirements for 305b/303d reports (89 percent), establishment and monitoring of total maximum daily loads, and developing biocriteria being the purposes most often mentioned. Most states establish reference sites (81 percent), but classify them using State-specific methods. The most often used technique for determining the appropriateness of a reference site was Best Professional Judgment (86 percent of these states). Macroinvertebrate samples are almost always collected by using a D-frame net, and duplicate samples are collected from approximately 10 percent of sites for quality assurance and quality control purposes. Most programs have macroinvertebrate samples processed by contractors (53 percent) and have identifications confirmed by a second taxonomist (85 percent). All States collect habitat data, with most using the Rapid Bioassessment Protocol visual-assessment approach, which requires ~1 h/site. Dissolved oxygen, pH, and conductivity are measured in more than 90 percent of programs. Wide variation exists in which taxa are excluded from analyses and the level of taxonomic resolution used. Species traits, such as functional feeding groups, are commonly used (96 percent), as are tolerance values for organic pollution (87 percent). Less often used are tolerance values for metals (28 percent). Benthic data are infrequently modified (34 percent) prior to analysis. Fixed-count subsampling is used widely (83 percent), with the number of organisms sorted ranging from 100 to 600 specimens. Most programs include a step during sample processing to acquire rare taxa (79 percent). Programs calculate from 2 to more than 100 different metrics (mean 20), and most formulate a multimetric index (87 percent). Eleven of the 112 metrics reported represent 50 percent of all metrics considered to be useful, and most of these are based on richness or percent composition. Biotic indices and tolerance metrics are most often used in the eastern U.S., and functional and habitat-type metrics are most often used in the western U.S. Sixty-nine percent of programs analyze their data in-house, typically performing correlations and regressions, and few use any form of data transformation (34 percent). Fifty-one percent of the programs use multivariate analyses, typically non-metric multi-dimensional scaling. All programs have electronic data storage. Most programs use the Integrated Taxonomic Information System (75 percent) for nomenclature and to update historical data (78 percent). State procedures represent a diversity of biomonitoring approaches, which likely compromises comparability among programs.
A national-state consensus is needed for: (1) developing methods for the identification of reference conditions and reference sites, (2) standardization in determining and reporting species richness, (3) testing and documenting both the theoretical and mechanistic basis of often-used metrics, (4) development of properly replicated point-source study designs, and (5) curation of benthic macroinvertebrate data, including reference and voucher collections, for successful evaluation of future environmental changes.
Automated map sharpening by maximization of detail and connectivity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Terwilliger, Thomas C.; Sobolev, Oleg V.; Afonine, Pavel V.
An algorithm for automatic map sharpening is presented that is based on optimization of the detail and connectivity of the sharpened map. The detail in the map is reflected in the surface area of an iso-contour surface that contains a fixed fraction of the volume of the map, where a map with high level of detail has a high surface area. The connectivity of the sharpened map is reflected in the number of connected regions defined by the same iso-contour surfaces, where a map with high connectivity has a small number of connected regions. By combining these two measures in a metric termed the 'adjusted surface area', map quality can be evaluated in an automated fashion. This metric was used to choose optimal map-sharpening parameters without reference to a model or other interpretations of the map. Map sharpening by optimization of the adjusted surface area can be carried out for a map as a whole or it can be carried out locally, yielding a locally sharpened map. To evaluate the performance of various approaches, a simple metric based on map–model correlation that can reproduce visual choices of optimally sharpened maps was used. The map–model correlation is calculated using a model with B factors (atomic displacement factors; ADPs) set to zero. Finally, this model-based metric was used to evaluate map sharpening and to evaluate map-sharpening approaches, and it was found that optimization of the adjusted surface area can be an effective tool for map sharpening.
Automated map sharpening by maximization of detail and connectivity
Terwilliger, Thomas C.; Sobolev, Oleg V.; Afonine, Pavel V.; ...
2018-05-18
An algorithm for automatic map sharpening is presented that is based on optimization of the detail and connectivity of the sharpened map. The detail in the map is reflected in the surface area of an iso-contour surface that contains a fixed fraction of the volume of the map, where a map with high level of detail has a high surface area. The connectivity of the sharpened map is reflected in the number of connected regions defined by the same iso-contour surfaces, where a map with high connectivity has a small number of connected regions. By combining these two measures in a metric termed the 'adjusted surface area', map quality can be evaluated in an automated fashion. This metric was used to choose optimal map-sharpening parameters without reference to a model or other interpretations of the map. Map sharpening by optimization of the adjusted surface area can be carried out for a map as a whole or it can be carried out locally, yielding a locally sharpened map. To evaluate the performance of various approaches, a simple metric based on map–model correlation that can reproduce visual choices of optimally sharpened maps was used. The map–model correlation is calculated using a model with B factors (atomic displacement factors; ADPs) set to zero. Finally, this model-based metric was used to evaluate map sharpening and to evaluate map-sharpening approaches, and it was found that optimization of the adjusted surface area can be an effective tool for map sharpening.
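A hedged sketch of the two ingredients of the 'adjusted surface area' metric described above: the iso-surface area enclosing a fixed volume fraction of the map, and the number of connected regions at that same contour. How the published algorithm combines the two into a single adjusted value is not reproduced here.

```python
import numpy as np
from scipy import ndimage
from skimage import measure

def detail_and_connectivity(density_map: np.ndarray, volume_fraction: float = 0.2):
    """Return (iso-surface area, number of connected regions) for a 3-D density map."""
    # Contour level such that `volume_fraction` of the voxels lie above it
    level = np.percentile(density_map, 100.0 * (1.0 - volume_fraction))
    # Detail: surface area of the iso-contour (larger = more detail)
    verts, faces, _, _ = measure.marching_cubes(density_map, level=level)
    surface_area = measure.mesh_surface_area(verts, faces)
    # Connectivity: number of connected regions above the contour (fewer = better)
    _, n_regions = ndimage.label(density_map > level)
    return surface_area, n_regions
```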
Enhancing security of fingerprints through contextual biometric watermarking.
Noore, Afzel; Singh, Richa; Vatsa, Mayank; Houck, Max M
2007-07-04
This paper presents a novel digital watermarking technique using face and demographic text data as multiple watermarks for verifying the chain of custody and protecting the integrity of a fingerprint image. The watermarks are embedded in selected texture regions of a fingerprint image using the discrete wavelet transform. Experimental results show that modifications in these locations are visually imperceptible and maintain the minutiae details. The integrity of the fingerprint image is verified through the high matching scores obtained from an automatic fingerprint identification system. There is also a high degree of visual correlation between the embedded images and the images extracted from the watermarked fingerprint. The degree of similarity is computed using pixel-based metrics and human visual system metrics. The results also show that the proposed watermarked fingerprint and the extracted images are resilient to common attacks such as compression, filtering, and noise.
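A simplified, hypothetical sketch of wavelet-domain watermark embedding in the spirit of the technique above: a small watermark is added with strength alpha to the detail coefficients of a single-level DWT of the host fingerprint. The paper's texture-region selection and multi-watermark scheme are not reproduced here.

```python
import numpy as np
import pywt

def embed_watermark(host: np.ndarray, watermark: np.ndarray,
                    alpha: float = 0.05) -> np.ndarray:
    """Additively embed a watermark into horizontal detail coefficients of the host image."""
    cA, (cH, cV, cD) = pywt.dwt2(host.astype(float), 'haar')
    wm = np.resize(watermark.astype(float), cH.shape)   # crude size matching for the sketch
    cH_marked = cH + alpha * wm                          # embed in horizontal details
    watermarked = pywt.idwt2((cA, (cH_marked, cV, cD)), 'haar')
    return watermarked[:host.shape[0], :host.shape[1]]   # trim any padding from odd sizes
```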
Understanding Acceptance of Software Metrics--A Developer Perspective
ERIC Educational Resources Information Center
Umarji, Medha
2009-01-01
Software metrics are measures of software products and processes. Metrics are widely used by software organizations to help manage projects, improve product quality and increase efficiency of the software development process. However, metrics programs tend to have a high failure rate in organizations, and developer pushback is one of the sources…
Semantic Metrics for Analysis of Software
NASA Technical Reports Server (NTRS)
Etzkorn, Letha H.; Cox, Glenn W.; Farrington, Phil; Utley, Dawn R.; Ghalston, Sampson; Stein, Cara
2005-01-01
A recently conceived suite of object-oriented software metrics focuses on semantic aspects of software, in contradistinction to traditional software metrics, which focus on syntactic aspects of software. Semantic metrics represent a more human-oriented view of software than do syntactic metrics. The semantic metrics of a given computer program are calculated by use of the output of a knowledge-based analysis of the program, and are substantially more representative of software quality and more readily comprehensible from a human perspective than are the syntactic metrics.
High Dynamic Range Spectral Imaging Pipeline For Multispectral Filter Array Cameras.
Lapray, Pierre-Jean; Thomas, Jean-Baptiste; Gouton, Pierre
2017-06-03
Spectral filter array imaging exhibits a strong similarity with color filter arrays. This permits us to embed this technology in practical vision systems with little adaptation of the existing solutions. In this communication, we define an imaging pipeline that permits high dynamic range (HDR) spectral imaging, which is extended from color filter arrays. We propose an implementation of this pipeline on a prototype sensor and evaluate the quality of our implementation results on real data with objective metrics and visual examples. We demonstrate that we reduce noise and, in particular, solve the problem of noise generated by the lack of energy balance. Data are provided to the community in an image database for further research.
Towards the XML schema measurement based on mapping between XML and OO domain
NASA Astrophysics Data System (ADS)
Rakić, Gordana; Budimac, Zoran; Heričko, Marjan; Pušnik, Maja
2017-07-01
Measuring the quality of IT solutions is a priority in software engineering. Although numerous metrics for measuring object-oriented code already exist, measuring the quality of UML models or XML Schemas is still developing. One of the research questions in the overall research guided by the ideas described in this paper is whether we can apply already defined object-oriented design metrics to XML schemas based on predefined mappings. In this paper, basic ideas for this mapping are presented. This mapping is a prerequisite for establishing the future approach to measuring XML schema quality with object-oriented metrics.
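An illustrative sketch (not the paper's mapping) of how an object-oriented size metric might be read off an XML Schema: each named complexType is treated as a "class" and its child element and attribute declarations as that class's "fields".

```python
import xml.etree.ElementTree as ET

XS = "{http://www.w3.org/2001/XMLSchema}"

def fields_per_complex_type(xsd_path: str) -> dict:
    """Count element/attribute declarations per named complexType in an XSD file."""
    root = ET.parse(xsd_path).getroot()
    counts = {}
    for ctype in root.iter(f"{XS}complexType"):
        name = ctype.get("name")
        if name is None:
            continue  # anonymous types are skipped in this simplified sketch
        members = list(ctype.iter(f"{XS}element")) + list(ctype.iter(f"{XS}attribute"))
        counts[name] = len(members)
    return counts
```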
NASA Astrophysics Data System (ADS)
Davoudi, Bahar; Damodaran, Vani; Bizheva, Kostadinka; Yang, Victor; Dinniwell, Robert; Levin, Wilfred; Vitkin, Alex
2013-03-01
Late oral radiation toxicity is a common condition occurring in a considerable percentage of head and neck cancer patients after radiation therapy which reduces their quality of life. The current examination of these patients is based on a visual inspection of the surface of the oral cavity; however, it is well known that many of the complications start in the subsurface layers before any superficial manifestation. Considering the currently suboptimal examination techniques, we address this clinical problem by using optical coherence tomography (OCT) to monitor the subsurface oral layers with micron-scale resolution images. A spectral-domain OCT system and a specialized oral imaging probe were designed and built for a clinical study to image late oral radiation toxicity patients. In addition to providing qualitative 2D and 3D images of the subsurface oral layers, quantitative metrics were developed to assess the back-scattering and thickness properties of different layers. Metric derivations are explained and preliminary results from late radiation toxicity patients and healthy volunteers are presented and discussed.
Sources of global climate data and visualization portals
Douglas, David C.
2014-01-01
Climate is integral to the geophysical foundation upon which ecosystems are structured. Knowledge about mechanistic linkages between the geophysical and biological environments is essential for understanding how global warming may reshape contemporary ecosystems and ecosystem services. Numerous global data sources spanning several decades are available that document key geophysical metrics such as temperature and precipitation, and metrics of primary biological production such as vegetation phenology and ocean phytoplankton. This paper provides an internet directory to portals for visualizing or servers for downloading many of the more commonly used global datasets, as well as a description of how to write simple computer code to efficiently retrieve these data. The data are broadly useful for quantifying relationships between climate, habitat availability, and lower-trophic-level habitat quality - especially in Arctic regions where strong seasonality is accompanied by intrinsically high year-to-year variability. If defensible linkages between the geophysical (climate) and the biological environment can be established, general circulation model (GCM) projections of future climate conditions can be used to infer future biological responses. Robustness of this approach is, however, complicated by the number of direct, indirect, or interacting linkages involved. For example, response of a predator species to climate change will be influenced by the responses of its prey and competitors, and so forth throughout a trophic web. The complexities of ecological systems warrant sensible and parsimonious approaches for assessing and establishing the role of natural climate variability in order to substantiate inferences about the potential effects of global warming.
Quality of Information Approach to Improving Source Selection in Tactical Networks
2017-02-01
consider the performance of this process based on metrics relating to quality of information: accuracy, timeliness, completeness and reliability. These...that are indicators of that the network is meeting these quality requirements. We study effective data rate, social distance, link integrity and the...utility of information as metrics within a multi-genre network to determine the quality of information of its available sources. This paper proposes a
The effect of saccade metrics on the corollary discharge contribution to perceived eye location
Bansal, Sonia; Jayet Bray, Laurence C.; Peterson, Matthew S.
2015-01-01
Corollary discharge (CD) is hypothesized to provide the movement information (direction and amplitude) required to compensate for the saccade-induced disruptions to visual input. Here, we investigated to what extent these conveyed metrics influence perceptual stability in human subjects with a target-displacement detection task. Subjects made saccades to targets located at different amplitudes (4°, 6°, or 8°) and directions (horizontal or vertical). During the saccade, the target disappeared and then reappeared at a shifted location either in the same direction or opposite to the movement vector. Subjects reported the target displacement direction, and from these reports we determined the perceptual threshold for shift detection and estimate of target location. Our results indicate that the thresholds for all amplitudes and directions generally scaled with saccade amplitude. Additionally, subjects on average produced hypometric saccades with an estimated CD gain <1. Finally, we examined the contribution of different error signals to perceptual performance, the saccade error (movement-to-movement variability in saccade amplitude) and visual error (distance between the fovea and the shifted target location). Perceptual judgment was not influenced by the fluctuations in movement amplitude, and performance was largely the same across movement directions for different magnitudes of visual error. Importantly, subjects reported the correct direction of target displacement above chance level for very small visual errors (<0.75°), even when these errors were opposite the target-shift direction. Collectively, these results suggest that the CD-based compensatory mechanisms for visual disruptions are highly accurate and comparable for saccades with different metrics. PMID:25761955
NASA Astrophysics Data System (ADS)
Ciaramello, Francis M.; Hemami, Sheila S.
2007-02-01
For members of the Deaf Community in the United States, current communication tools include TTY/TTD services, video relay services, and text-based communication. With the growth of cellular technology, mobile sign language conversations are becoming a possibility. Proper coding techniques must be employed to compress American Sign Language (ASL) video for low-rate transmission while maintaining the quality of the conversation. In order to evaluate these techniques, an appropriate quality metric is needed. This paper demonstrates that traditional video quality metrics, such as PSNR, fail to predict subjective intelligibility scores. By considering the unique structure of ASL video, an appropriate objective metric is developed. Face and hand segmentation is performed using skin-color detection techniques. The distortions in the face and hand regions are optimally weighted and pooled across all frames to create an objective intelligibility score for a distorted sequence. The objective intelligibility metric performs significantly better than PSNR in terms of correlation with subjective responses.
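A minimal sketch of the region-weighted distortion idea described above: distortion (plain squared error here) inside a face/hand skin mask is weighted more heavily than background distortion. The weights and the skin-color segmentation are placeholders, not the paper's optimized values.

```python
import numpy as np

def weighted_distortion(reference: np.ndarray, distorted: np.ndarray,
                        skin_mask: np.ndarray, w_skin: float = 0.9,
                        w_bg: float = 0.1) -> float:
    """Pool squared error with heavier weight on face/hand (skin) regions."""
    err = (reference.astype(float) - distorted.astype(float)) ** 2
    skin = err[skin_mask].mean() if skin_mask.any() else 0.0
    background = err[~skin_mask].mean() if (~skin_mask).any() else 0.0
    return float(w_skin * skin + w_bg * background)   # lower value -> higher intelligibility
```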
Bunch, K J; Allin, B; Jolly, M; Hardie, T; Knight, M
2018-05-16
To develop a core metric set to monitor the quality of maternity care. Delphi process followed by a face-to-face consensus meeting. English maternity units. Three representative expert panels: service designers, providers and users. Maternity care metrics judged important by participants. Participants were asked to complete a two-phase Delphi process, scoring metrics from existing local maternity dashboards. A consensus meeting discussed the results and re-scored the metrics. In all, 125 distinct metrics across six domains were identified from existing dashboards. Following the consensus meeting, 14 metrics met the inclusion criteria for the final core set: smoking rate at booking; rate of birth without intervention; caesarean section delivery rate in Robson group 1 women; caesarean section delivery rate in Robson group 2 women; caesarean section delivery rate in Robson group 5 women; third- and fourth-degree tear rate among women delivering vaginally; rate of postpartum haemorrhage of ≥1500 ml; rate of successful vaginal birth after a single previous caesarean section; smoking rate at delivery; proportion of babies born at term with an Apgar score <7 at 5 minutes; proportion of babies born at term admitted to the neonatal intensive care unit; proportion of babies readmitted to hospital at <30 days of age; breastfeeding initiation rate; and breastfeeding rate at 6-8 weeks. Core outcome set methodology can be used to incorporate the views of key stakeholders in developing a core metric set to monitor the quality of care in maternity units, thus enabling improvement. Achieving consensus on core metrics for monitoring the quality of maternity care. © 2018 The Authors. BJOG: An International Journal of Obstetrics and Gynaecology published by John Wiley & Sons Ltd on behalf of Royal College of Obstetricians and Gynaecologists.
Seismic Data Archive Quality Assurance -- Analytics Adding Value at Scale
NASA Astrophysics Data System (ADS)
Casey, R. E.; Ahern, T. K.; Sharer, G.; Templeton, M. E.; Weertman, B.; Keyson, L.
2015-12-01
Since the emergence of real-time delivery of seismic data over the last two decades, solutions for near-real-time quality analysis and station monitoring have been developed by data producers and data stewards. This has allowed for a nearly constant awareness of the quality of the incoming data and the general health of the instrumentation around the time of data capture. Modern quality assurance systems are evolving to provide ready access to a large variety of metrics, a rich and self-correcting history of measurements, and, more importantly, the ability to access these quality measurements en masse through a programmatic interface. The MUSTANG project at the IRIS Data Management Center is working to achieve 'total archival data quality', where a large number of standardized metrics, some computationally expensive, are generated and stored for all data from decades past to the near present. To perform this on a 300 TB archive of compressed time series requires considerable resources in network I/O, disk storage, and CPU capacity to achieve scalability, not to mention the technical expertise to develop and maintain it. In addition, staff scientists are necessary to develop the system metrics and employ them to produce comprehensive and timely data quality reports to assist seismic network operators in maintaining their instrumentation. All of these metrics must be available to the scientist 24/7. We will present an overview of the MUSTANG architecture, including the development of its standardized metrics code in R. We will show examples of the metrics values that we make publicly available to scientists and educators and show how we are sharing the algorithms used. We will also discuss the development of a capability that will enable scientific researchers to specify data quality constraints on their requests for data, providing only the data that is best suited to their area of study.
Pre-processing, registration and selection of adaptive optics corrected retinal images.
Ramaswamy, Gomathy; Devaney, Nicholas
2013-07-01
In this paper, the aim is to demonstrate enhanced processing of sequences of fundus images obtained using a commercial AO flood illumination system. The purpose of the work is to (1) correct for uneven illumination at the retina, (2) automatically select the best quality images, and (3) precisely register the best images. Adaptive optics corrected retinal images are pre-processed to correct uneven illumination using different methods: subtracting or dividing by the average filtered image, homomorphic filtering, and a wavelet-based approach. These images are evaluated to measure the image quality using various parameters, including sharpness, variance, power spectrum kurtosis and contrast. We have carried out the registration in two stages: a coarse stage using cross-correlation followed by fine registration using two approaches: parabolic interpolation on the peak of the cross-correlation and maximum-likelihood estimation. The angle of rotation of the images is measured using a combination of peak tracking and Procrustes transformation. We have found that a wavelet approach (Daubechies 4 wavelet at 6th level decomposition) provides good illumination correction with clear improvement in image sharpness and contrast. The assessment of image quality using a 'Designer metric' works well when compared to visual evaluation, although it is highly correlated with other metrics. In image registration, sub-pixel translation measured using parabolic interpolation on the peak of the cross-correlation function and maximum-likelihood estimation are found to give very similar results (RMS difference 0.047 pixels). We have confirmed that correcting rotation of the images provides a significant improvement, especially at the edges of the image. We observed that selecting the better quality frames (e.g. the best 75% of images) for image registration gives improved resolution, at the expense of poorer signal-to-noise. The sharpness map of the registered and de-rotated images shows increased sharpness over most of the field of view. Adaptive optics assisted images of the cone photoreceptors can be better pre-processed using a wavelet approach. These images can be assessed for image quality using a 'Designer Metric'. Two-stage image registration, including correction for rotation, significantly improves the final image contrast and sharpness. © 2013 The Authors Ophthalmic & Physiological Optics © 2013 The College of Optometrists.
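A sketch of the two-stage shift estimate described above: an integer-pixel offset from the cross-correlation peak, refined by fitting a parabola through the peak and its two neighbours along each axis. It assumes equally sized images and a correlation peak away from the image border; the sign convention and normalization are illustrative.

```python
import numpy as np
from scipy.signal import fftconvolve

def parabolic_offset(cm1: float, c0: float, cp1: float) -> float:
    """Sub-pixel offset of a parabola fitted through three samples around a peak."""
    denom = cm1 - 2.0 * c0 + cp1
    return 0.0 if denom == 0 else 0.5 * (cm1 - cp1) / denom

def register(ref: np.ndarray, img: np.ndarray):
    """Estimate the (row, col) shift of img relative to ref with sub-pixel refinement."""
    ref0 = ref.astype(float) - ref.mean()
    img0 = img.astype(float) - img.mean()
    xcorr = fftconvolve(ref0, img0[::-1, ::-1], mode='same')
    py, px = np.unravel_index(np.argmax(xcorr), xcorr.shape)
    dy = parabolic_offset(xcorr[py - 1, px], xcorr[py, px], xcorr[py + 1, px])
    dx = parabolic_offset(xcorr[py, px - 1], xcorr[py, px], xcorr[py, px + 1])
    centre = np.array(xcorr.shape) // 2
    return (py + dy - centre[0], px + dx - centre[1])
```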
Quantitative susceptibility mapping: Report from the 2016 reconstruction challenge.
Langkammer, Christian; Schweser, Ferdinand; Shmueli, Karin; Kames, Christian; Li, Xu; Guo, Li; Milovic, Carlos; Kim, Jinsuh; Wei, Hongjiang; Bredies, Kristian; Buch, Sagar; Guo, Yihao; Liu, Zhe; Meineke, Jakob; Rauscher, Alexander; Marques, José P; Bilgic, Berkin
2018-03-01
The aim of the 2016 quantitative susceptibility mapping (QSM) reconstruction challenge was to test the ability of various QSM algorithms to recover the underlying susceptibility from phase data faithfully. Gradient-echo images of a healthy volunteer acquired at 3T in a single orientation with 1.06 mm isotropic resolution. A reference susceptibility map was provided, which was computed using the susceptibility tensor imaging algorithm on data acquired at 12 head orientations. Susceptibility maps calculated from the single orientation data were compared against the reference susceptibility map. Deviations were quantified using the following metrics: root mean squared error (RMSE), structure similarity index (SSIM), high-frequency error norm (HFEN), and the error in selected white and gray matter regions. Twenty-seven submissions were evaluated. Most of the best scoring approaches estimated the spatial frequency content in the ill-conditioned domain of the dipole kernel using compressed sensing strategies. The top 10 maps in each category had similar error metrics but substantially different visual appearance. Because QSM algorithms were optimized to minimize error metrics, the resulting susceptibility maps suffered from over-smoothing and conspicuity loss in fine features such as vessels. As such, the challenge highlighted the need for better numerical image quality criteria. Magn Reson Med 79:1661-1673, 2018. © 2017 International Society for Magnetic Resonance in Medicine. © 2017 International Society for Magnetic Resonance in Medicine.
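A sketch of two of the deviation metrics listed above, RMSE and the high-frequency error norm (HFEN); HFEN is computed here with a Laplacian-of-Gaussian filter whose width (sigma) is an assumed, typical value rather than the challenge's exact setting.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def rmse(estimate: np.ndarray, reference: np.ndarray) -> float:
    """Root mean squared error between an estimated and a reference susceptibility map."""
    return float(np.sqrt(np.mean((estimate - reference) ** 2)))

def hfen(estimate: np.ndarray, reference: np.ndarray, sigma: float = 1.5) -> float:
    """High-frequency error norm: relative L2 error of Laplacian-of-Gaussian filtered maps."""
    log_est = gaussian_laplace(estimate.astype(float), sigma)
    log_ref = gaussian_laplace(reference.astype(float), sigma)
    return float(np.linalg.norm(log_est - log_ref) / np.linalg.norm(log_ref))
```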
Comparison of macroinvertebrate-derived stream quality metrics between snag and riffle habitats
Stepenuck, K.F.; Crunkilton, R.L.; Bozek, Michael A.; Wang, L.
2008-01-01
We compared benthic macroinvertebrate assemblage structure at snag and riffle habitats in 43 Wisconsin streams across a range of watershed urbanization using a variety of stream quality metrics. Discriminant analysis indicated that dominant taxa at riffles and snags differed; Hydropsychid caddisflies (Hydropsyche betteni and Cheumatopsyche spp.) and elmid beetles (Optioservus spp. and Stenemlis spp.) typified riffles, whereas isopods (Asellus intermedius) and amphipods (Hyalella azteca and Gammarus pseudolimnaeus) predominated in snags. Analysis of covariance indicated that samples from snag and riffle habitats differed significantly in their response to the urbanization gradient for the Hilsenhoff biotic index (BI), Shannon's diversity index, and percent of filterers, shredders, and pollution intolerant Ephemeroptera, Plecoptera, and Trichoptera (EPT) at each stream site (p ≤ 0.10). These differences suggest that although macroinvertebrate assemblages present in either habitat type are sensitive to detecting the effects of urbanization, metrics derived from different habitats should not be intermixed when assessing stream quality through biomonitoring. This can be a limitation to resource managers who wish to compare water quality among streams where the same habitat type is not available at all stream locations, or where a specific habitat type (i.e., a riffle) is required to determine a metric value (i.e., BI). To account for differences in stream quality at sites lacking riffle habitat, snag-derived metric values can be adjusted based on those obtained from riffles that have been exposed to the same level of urbanization. Comparison of nonlinear regression equations that related stream quality metric values from the two habitat types to percent watershed urbanization indicated that snag habitats had on average 30.2 fewer percent EPT individuals, a lower diversity index value than riffles, and a BI value of 0.29 greater than riffles. © 2008 American Water Resources Association.
DCTune Perceptual Optimization of Compressed Dental X-Rays
NASA Technical Reports Server (NTRS)
Watson, Andrew B.; Null, Cynthia H. (Technical Monitor)
1996-01-01
In current dental practice, x-rays of completed dental work are often sent to the insurer for verification. It is faster and cheaper to transmit instead digital scans of the x-rays. Further economies result if the images are sent in compressed form. DCTune is a technology for optimizing DCT (discrete cosine transform) quantization matrices to yield maximum perceptual quality for a given bit-rate, or minimum bit-rate for a given perceptual quality; it also supports perceptual optimization of DCT color quantization matrices. In addition, the technology provides a means of setting the perceptual quality of compressed imagery in a systematic way. The purpose of this research was, with respect to dental x-rays, 1) to verify the advantage of DCTune over standard JPEG (Joint Photographic Experts Group), 2) to verify the quality control feature of DCTune, and 3) to discover regularities in the optimized matrices of a set of images. We optimized matrices for a total of 20 images at two resolutions (150 and 300 dpi) and four bit-rates (0.25, 0.5, 0.75, 1.0 bits/pixel), and examined structural regularities in the resulting matrices. We also conducted psychophysical studies (1) to discover the DCTune quality level at which the images became 'visually lossless,' and (2) to rate the relative quality of DCTune and standard JPEG images at various bitrates. Results include: (1) At both resolutions, DCTune quality is a linear function of bit-rate. (2) DCTune quantization matrices for all images at all bitrates and resolutions are modeled well by an inverse Gaussian, with parameters of amplitude and width. (3) As bit-rate is varied, optimal values of both amplitude and width covary in an approximately linear fashion. (4) Both amplitude and width vary in systematic and orderly fashion with either bit-rate or DCTune quality; simple mathematical functions serve to describe these relationships. (5) In going from 150 to 300 dpi, amplitude parameters are substantially lower and widths larger at corresponding bit-rates or qualities. (6) Visually lossless compression occurs at a DCTune quality value of about 1. (7) At 0.25 bits/pixel, comparative ratings give DCTune a substantial advantage over standard JPEG. As visually lossless bit-rates are approached, this advantage of necessity diminishes. We have concluded that DCTune optimized quantization matrices provide better visual quality than standard JPEG. Meaningful quality levels may be specified by means of the DCTune metric. Optimized matrices are very similar across the class of dental x-rays, suggesting the possibility of a 'class-optimal' matrix. DCTune technology appears to provide some value in the context of compressed dental x-rays.
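The abstract describes the optimized quantization matrices as well modeled by an "inverse Gaussian" with amplitude and width parameters. The sketch below generates an 8x8 matrix shaped as the reciprocal of a Gaussian in DCT-frequency radius; the exact functional form, parameter values, and frequency coordinates used in the study are assumptions.

```python
import numpy as np

def dctune_like_matrix(amplitude, width, size=8):
    """Illustrative quantization matrix: quantization step grows with radial DCT
    frequency as the reciprocal of a Gaussian, parameterized by amplitude and width
    (a stand-in for the paper's fitted model, not its exact form)."""
    u, v = np.meshgrid(np.arange(size), np.arange(size), indexing="ij")
    radial_freq = np.sqrt(u ** 2 + v ** 2)
    return amplitude * np.exp((radial_freq / width) ** 2)

Q = dctune_like_matrix(amplitude=4.0, width=6.0)
print(np.round(Q, 1))
```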
The accurate assessment of small-angle X-ray scattering data
Grant, Thomas D.; Luft, Joseph R.; Carter, Lester G.; ...
2015-01-23
Small-angle X-ray scattering (SAXS) has grown in popularity in recent times with the advent of bright synchrotron X-ray sources, powerful computational resources and algorithms enabling the calculation of increasingly complex models. However, the lack of standardized data-quality metrics presents difficulties for the growing user community in accurately assessing the quality of experimental SAXS data. Here, a series of metrics to quantitatively describe SAXS data in an objective manner using statistical evaluations are defined. These metrics are applied to identify the effects of radiation damage, concentration dependence and interparticle interactions on SAXS data from a set of 27 previously described targets for which high-resolution structures have been determined via X-ray crystallography or nuclear magnetic resonance (NMR) spectroscopy. Studies show that these metrics are sufficient to characterize SAXS data quality on a small sample set with statistical rigor and sensitivity similar to or better than manual analysis. The development of data-quality analysis strategies such as these initial efforts is needed to enable the accurate and unbiased assessment of SAXS data quality.
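One simple statistical evaluation in the spirit of the metrics described is a reduced chi-square comparison between successive exposures of the same sample, which flags systematic changes such as radiation damage; the specific statistic and the synthetic curves below are illustrative assumptions, not the paper's definitions.

```python
import numpy as np

def reduced_chi_square(i_a, i_b, sigma_a, sigma_b):
    """Point-by-point comparison of two scattering curves; values well above 1
    indicate systematic differences (e.g., radiation damage) between exposures."""
    resid = (i_a - i_b) ** 2 / (sigma_a ** 2 + sigma_b ** 2)
    return resid.sum() / len(resid)

# Hypothetical intensities for two consecutive exposures.
q = np.linspace(0.01, 0.3, 200)
first = np.exp(-50 * q ** 2) + 0.01 * np.random.randn(q.size)
second = 1.05 * np.exp(-50 * q ** 2) + 0.01 * np.random.randn(q.size)
sig = np.full(q.size, 0.01)
print(reduced_chi_square(first, second, sig, sig))
```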
Kamesh Iyer, Srikant; Tasdizen, Tolga; Likhite, Devavrat; DiBella, Edward
2016-01-01
Purpose: Rapid reconstruction of undersampled multicoil MRI data with iterative constrained reconstruction method is a challenge. The authors sought to develop a new substitution based variable splitting algorithm for faster reconstruction of multicoil cardiac perfusion MRI data. Methods: The new method, split Bregman multicoil accelerated reconstruction technique (SMART), uses a combination of split Bregman based variable splitting and iterative reweighting techniques to achieve fast convergence. Total variation constraints are used along the spatial and temporal dimensions. The method is tested on nine ECG-gated dog perfusion datasets, acquired with a 30-ray golden ratio radial sampling pattern and ten ungated human perfusion datasets, acquired with a 24-ray golden ratio radial sampling pattern. Image quality and reconstruction speed are evaluated and compared to a gradient descent (GD) implementation and to multicoil k-t SLR, a reconstruction technique that uses a combination of sparsity and low rank constraints. Results: Comparisons based on blur metric and visual inspection showed that SMART images had lower blur and better texture as compared to the GD implementation. On average, the GD based images had an ∼18% higher blur metric as compared to SMART images. Reconstruction of dynamic contrast enhanced (DCE) cardiac perfusion images using the SMART method was ∼6 times faster than standard gradient descent methods. k-t SLR and SMART produced images with comparable image quality, though SMART was ∼6.8 times faster than k-t SLR. Conclusions: The SMART method is a promising approach to reconstruct good quality multicoil images from undersampled DCE cardiac perfusion data rapidly. PMID:27036592
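The SMART algorithm itself (split Bregman variable splitting with iterative reweighting) is not reproduced here, but the spatial-plus-temporal total variation penalty it minimizes alongside data fidelity can be sketched; the forward operator below is a placeholder for the coil-weighted radial sampling model.

```python
import numpy as np

def spatiotemporal_tv(x):
    """Sum of absolute finite differences along the temporal and two spatial
    dimensions of a dynamic image series x[t, y, x]."""
    dt = np.abs(np.diff(x, axis=0)).sum()
    dy = np.abs(np.diff(x, axis=1)).sum()
    dx = np.abs(np.diff(x, axis=2)).sum()
    return dt + dy + dx

def objective(x, k_data, forward_op, lam):
    # Least-squares data fidelity against acquired data plus the TV penalty.
    residual = forward_op(x) - k_data
    return 0.5 * np.vdot(residual, residual).real + lam * spatiotemporal_tv(x)

# Toy usage: identity "acquisition" of a small dynamic series.
series = np.random.rand(8, 32, 32)
print(objective(series, series + 0.01, lambda x: x, lam=0.1))
```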
Integrated framework for developing search and discrimination metrics
NASA Astrophysics Data System (ADS)
Copeland, Anthony C.; Trivedi, Mohan M.
1997-06-01
This paper presents an experimental framework for evaluating target signature metrics as models of human visual search and discrimination. This framework is based on a prototype eye tracking testbed, the Integrated Testbed for Eye Movement Studies (ITEMS). ITEMS determines an observer's visual fixation point while he studies a displayed image scene, by processing video of the observer's eye. The utility of this framework is illustrated with an experiment using gray-scale images of outdoor scenes that contain randomly placed targets. Each target is a square region of a specific size containing pixel values from another image of an outdoor scene. The real-world analogy of this experiment is that of a military observer looking upon the sensed image of a static scene to find camouflaged enemy targets that are reported to be in the area. ITEMS provides the data necessary to compute various statistics for each target to describe how easily the observers located it, including the likelihood the target was fixated or identified and the time required to do so. The computed values of several target signature metrics are compared to these statistics, and a second-order metric based on a model of image texture was found to be the most highly correlated.
State of the art metrics for aspect oriented programming
NASA Astrophysics Data System (ADS)
Ghareb, Mazen Ismaeel; Allen, Gary
2018-04-01
The quality evaluation of software, e.g., defect measurement, gains significance with higher use of software applications. Metric measurements are considered as the primary indicator of imperfection prediction and software maintenance in various empirical studies of software products. However, there is no agreement on which metrics are compelling quality indicators for novel development approaches such as Aspect Oriented Programming (AOP). AOP intends to enhance programming quality, by providing new and novel constructs for the development of systems, for example, point cuts, advice and inter-type relationships. Hence, it is not evident if quality pointers for AOP can be derived from direct expansions of traditional OO measurements. Then again, investigations of AOP do regularly depend on established coupling measurements. Notwithstanding the late reception of AOP in empirical studies, coupling measurements have been adopted as useful markers of flaw inclination in this context. In this paper we will investigate the state of the art metrics for measurement of Aspect Oriented systems development.
NASA Astrophysics Data System (ADS)
Yang, Zhongming; Dou, Jiantai; Du, Jinyu; Gao, Zhishan
2018-03-01
Non-null interferometry can be used to measure the radius of curvature (ROC); we previously presented a virtual quadratic Newton rings phase-shifting moiré-fringes measurement method for large ROC measurement (Yang et al., 2016). In this paper, we propose a large ROC measurement method based on the evaluation of an interferogram-quality metric in the non-null interferometer. With the multi-configuration model of the non-null interferometric system in ZEMAX, the retrace errors and the phase introduced by the test surface are reconstructed. The interferogram-quality metric is computed from the normalized phase-shifted testing Newton rings with the spherical surface model in the non-null interferometric system. The radius of curvature of the test spherical surface is obtained when the minimum of the interferogram-quality metric is found. Simulations and experimental results verify the feasibility of the proposed method. For a spherical mirror with a ROC of 41,400 mm, the measurement accuracy is better than 0.13%.
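The search strategy (vary the candidate ROC until the interferogram-quality metric is minimized) can be sketched with a toy model; the RMS-phase-difference metric, the quadratic phase model, and the brute-force scan below are assumptions standing in for the ray-traced multi-configuration model described in the abstract.

```python
import numpy as np

def interferogram_quality(roc_candidate, measured_phase, model_phase_fn):
    """Illustrative metric: RMS difference between the measured phase map and
    the phase predicted by the system model for a candidate ROC."""
    return np.sqrt(np.mean((measured_phase - model_phase_fn(roc_candidate)) ** 2))

def estimate_roc(measured_phase, model_phase_fn, candidates):
    # Brute-force 1-D search for the metric minimum.
    scores = [interferogram_quality(r, measured_phase, model_phase_fn) for r in candidates]
    return candidates[int(np.argmin(scores))]

# Toy model: Newton-rings-like quadratic phase whose curvature depends on ROC.
yy, xx = np.mgrid[-64:64, -64:64]
true_roc = 41400.0
model = lambda roc: (xx ** 2 + yy ** 2) / (2.0 * roc)
measured = model(true_roc) + 1e-4 * np.random.randn(*xx.shape)
print(estimate_roc(measured, model, np.linspace(40000, 43000, 301)))
```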
Wiele, Stephen M.; Brasher, Anne M.D.; Miller, Matthew P.; May, Jason T.; Carpenter, Kurt D.
2012-01-01
The U.S. Geological Survey's National Water-Quality Assessment (NAWQA) Program was established by Congress in 1991 to collect long-term, nationally consistent information on the quality of the Nation's streams and groundwater. The NAWQA Program utilizes interdisciplinary and dynamic studies that link the chemical and physical conditions of streams (such as flow and habitat) with ecosystem health and the biologic condition of algae, aquatic invertebrates, and fish communities. This report presents metrics derived from NAWQA data and the U.S. Geological Survey streamgaging network for sampling sites in the Western United States, as well as associated chemical, habitat, and streamflow properties. The metrics characterize the conditions of algae, aquatic invertebrates, and fish. In addition, we have compiled climate records and basin characteristics related to the NAWQA sampling sites. The calculated metrics and compiled data can be used to analyze ecohydrologic trends over time.
Day, Suzanne; Mason, Robin; Tannenbaum, Cara; Rochon, Paula A
2017-01-01
Integrating sex and gender in health research is essential to produce the best possible evidence to inform health care. Comprehensive integration of sex and gender requires considering these variables from the very beginning of the research process, starting at the proposal stage. To promote excellence in sex and gender integration, we have developed a set of metrics to assess the quality of sex and gender integration in research proposals. These metrics are designed to assist both researchers in developing proposals and reviewers in making funding decisions. We developed this tool through an iterative three-stage method involving 1) review of existing sex and gender integration resources and initial metrics design, 2) expert review and feedback via anonymous online survey (Likert scale and open-ended questions), and 3) analysis of feedback data and collective revision of the metrics. We received feedback on the initial metrics draft from 20 reviewers with expertise in conducting sex- and/or gender-based health research. The majority of reviewers responded positively to questions regarding the utility, clarity and completeness of the metrics, and all reviewers provided responses to open-ended questions about suggestions for improvements. Coding and analysis of responses identified three domains for improvement: clarifying terminology, refining content, and broadening applicability. Based on this analysis we revised the metrics into the Essential Metrics for Assessing Sex and Gender Integration in Health Research Proposals Involving Human Participants, which outlines criteria for excellence within each proposal component and provides illustrative examples to support implementation. By enhancing the quality of sex and gender integration in proposals, the metrics will help to foster comprehensive, meaningful integration of sex and gender throughout each stage of the research process, resulting in better quality evidence to inform health care for all.
Mason, Robin; Tannenbaum, Cara; Rochon, Paula A.
2017-01-01
Integrating sex and gender in health research is essential to produce the best possible evidence to inform health care. Comprehensive integration of sex and gender requires considering these variables from the very beginning of the research process, starting at the proposal stage. To promote excellence in sex and gender integration, we have developed a set of metrics to assess the quality of sex and gender integration in research proposals. These metrics are designed to assist both researchers in developing proposals and reviewers in making funding decisions. We developed this tool through an iterative three-stage method involving 1) review of existing sex and gender integration resources and initial metrics design, 2) expert review and feedback via anonymous online survey (Likert scale and open-ended questions), and 3) analysis of feedback data and collective revision of the metrics. We received feedback on the initial metrics draft from 20 reviewers with expertise in conducting sex- and/or gender-based health research. The majority of reviewers responded positively to questions regarding the utility, clarity and completeness of the metrics, and all reviewers provided responses to open-ended questions about suggestions for improvements. Coding and analysis of responses identified three domains for improvement: clarifying terminology, refining content, and broadening applicability. Based on this analysis we revised the metrics into the Essential Metrics for Assessing Sex and Gender Integration in Health Research Proposals Involving Human Participants, which outlines criteria for excellence within each proposal component and provides illustrative examples to support implementation. By enhancing the quality of sex and gender integration in proposals, the metrics will help to foster comprehensive, meaningful integration of sex and gender throughout each stage of the research process, resulting in better quality evidence to inform health care for all. PMID:28854192
Perceptual video quality assessment in H.264 video coding standard using objective modeling.
Karthikeyan, Ramasamy; Sainarayanan, Gopalakrishnan; Deepa, Subramaniam Nachimuthu
2014-01-01
Since usage of digital video is widespread nowadays, quality considerations have become essential, and industry demand for video quality measurement is rising. This proposal provides a method of perceptual quality assessment in the H.264 standard encoder using objective modeling. For this purpose, quality impairments are calculated and a model is developed to compute the perceptual video quality metric based on a no-reference method. Because of the subtle differences between the original video and the encoded video, the quality of the encoded picture is degraded; this quality difference is introduced by encoding processes such as intra- and inter-prediction. The proposed model takes into account the artifacts introduced by these spatial and temporal activities in hybrid block-based coding methods, and an objective modeling of these artifacts into a subjective quality estimate is proposed. The proposed model calculates the objective quality metric from subjective impairments (blockiness, blur, and jerkiness), in contrast to the existing bitrate-only calculation defined in the ITU G.1070 model. The accuracy of the proposed perceptual video quality metrics is compared against popular full-reference objective methods as defined by VQEG.
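A no-reference model of this kind ultimately maps a few impairment measures to a single quality score. The sketch below shows one way such a mapping might look; the weights, normalization, and 1-5 output scale are illustrative assumptions, not the paper's fitted model.

```python
def perceptual_quality_score(blockiness, blur, jerkiness, weights=(0.4, 0.35, 0.25)):
    """Map three normalized impairment measures (0 = none, 1 = severe) to a
    MOS-like value on a 1-5 scale; weights are illustrative only."""
    w_block, w_blur, w_jerk = weights
    impairment = w_block * blockiness + w_blur * blur + w_jerk * jerkiness
    impairment = min(max(impairment, 0.0), 1.0)
    return 5.0 - 4.0 * impairment

print(perceptual_quality_score(blockiness=0.2, blur=0.1, jerkiness=0.05))
```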
Multivariate Analyses of Quality Metrics for Crystal Structures in the PDB Archive.
Shao, Chenghua; Yang, Huanwang; Westbrook, John D; Young, Jasmine Y; Zardecki, Christine; Burley, Stephen K
2017-03-07
Following deployment of an augmented validation system by the Worldwide Protein Data Bank (wwPDB) partnership, the quality of crystal structures entering the PDB has improved. Of significance are improvements in quality measures now prominently displayed in the wwPDB validation report. Comparisons of PDB depositions made before and after introduction of the new reporting system show improvements in quality measures relating to pairwise atom-atom clashes, side-chain torsion angle rotamers, and local agreement between the atomic coordinate structure model and experimental electron density data. These improvements are largely independent of resolution limit and sample molecular weight. No significant improvement in the quality of associated ligands was observed. Principal component analysis revealed that structure quality could be summarized with three measures (Rfree, real-space R factor Z score, and a combined molecular geometry quality metric), which can in turn be reduced to a single overall quality metric readily interpretable by all PDB archive users. Copyright © 2017 Elsevier Ltd. All rights reserved.
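The dimensionality reduction described (many validation measures summarized by three components, then one combined score) follows the generic principal component analysis recipe; the sketch below uses synthetic data and an ad hoc combined score, not the wwPDB pipeline or its actual metrics.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical per-entry quality measures (rows = entries; columns = metrics such as
# Rfree, clashscore, rotamer outliers, RSRZ outliers, bond RMSZ).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
X_std = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize before PCA

pca = PCA(n_components=3)
scores = pca.fit_transform(X_std)
print(pca.explained_variance_ratio_)            # variance captured by each summary axis
overall = scores @ pca.explained_variance_ratio_  # one crude combined quality score per entry
```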
A laser beam quality definition based on induced temperature rise.
Miller, Harold C
2012-12-17
Laser beam quality metrics like M² can be used to describe the spot sizes and propagation behavior of a wide variety of non-ideal laser beams. However, for beams that have been diffracted by limiting apertures in the near-field, or those with unusual near-field profiles, the conventional metrics can lead to an inconsistent or incomplete description of far-field performance. This paper motivates an alternative laser beam quality definition that can be used with any beam. The approach uses a consideration of the intrinsic ability of a laser beam profile to heat a material. Comparisons are made with conventional beam quality metrics. An analysis on an asymmetric Gaussian beam is used to establish a connection with the invariant beam propagation ratio.
A study of image quality for radar image processing. [synthetic aperture radar imagery
NASA Technical Reports Server (NTRS)
King, R. W.; Kaupp, V. H.; Waite, W. P.; Macdonald, H. C.
1982-01-01
Methods developed for image quality metrics are reviewed with focus on basic interpretation or recognition elements including: tone or color; shape; pattern; size; shadow; texture; site; association or context; and resolution. Seven metrics are believed to show promise as a way of characterizing the quality of an image: (1) the dynamic range of intensities in the displayed image; (2) the system signal-to-noise ratio; (3) the system spatial bandwidth or bandpass; (4) the system resolution or acutance; (5) the normalized-mean-square-error as a measure of geometric fidelity; (6) the perceptual mean square error; and (7) the radar threshold quality factor. Selective levels of degradation are being applied to simulated synthetic radar images to test the validity of these metrics.
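Two of the seven candidate metrics listed above, dynamic range and normalized mean-square error, are simple enough to sketch directly; the definitions below (decibel dynamic range over nonzero pixels, NMSE normalized by reference energy) are common conventions and may differ in detail from the report's.

```python
import numpy as np

def dynamic_range_db(img):
    """Ratio of the largest to the smallest nonzero displayed intensity, in dB."""
    nonzero = img[img > 0]
    return 10.0 * np.log10(nonzero.max() / nonzero.min())

def normalized_mse(degraded, reference):
    """Normalized mean-square error as a simple geometric/radiometric fidelity measure."""
    return np.mean((degraded - reference) ** 2) / np.mean(reference ** 2)

reference = np.random.rand(256, 256) + 0.01
degraded = reference + 0.05 * np.random.randn(256, 256)
print(dynamic_range_db(reference), normalized_mse(degraded, reference))
```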
NASA Astrophysics Data System (ADS)
Sutherland, Andrew B.; Culp, Joseph M.; Benoy, Glenn A.
2012-07-01
The objective of this study was to evaluate which macroinvertebrate and deposited sediment metrics are best for determining effects of excessive sedimentation on stream integrity. Fifteen instream sediment metrics, with the strongest relationship to land cover, were compared to riffle macroinvertebrate metrics in streams ranging across a gradient of land disturbance. Six deposited sediment metrics were strongly related to the relative abundance of Ephemeroptera, Plecoptera and Trichoptera and six were strongly related to the modified family biotic index (MFBI). Few functional feeding groups and habit groups were significantly related to deposited sediment, and this may be related to the focus on riffle, rather than reach-wide macroinvertebrates, as reach-wide sediment metrics were more closely related to human land use. Our results suggest that the coarse-level deposited sediment metric, visual estimate of fines, and the coarse-level biological index, MFBI, may be useful in biomonitoring efforts aimed at determining the impact of anthropogenic sedimentation on stream biotic integrity.
Sutherland, Andrew B; Culp, Joseph M; Benoy, Glenn A
2012-07-01
The objective of this study was to evaluate which macroinvertebrate and deposited sediment metrics are best for determining effects of excessive sedimentation on stream integrity. Fifteen instream sediment metrics, with the strongest relationship to land cover, were compared to riffle macroinvertebrate metrics in streams ranging across a gradient of land disturbance. Six deposited sediment metrics were strongly related to the relative abundance of Ephemeroptera, Plecoptera and Trichoptera and six were strongly related to the modified family biotic index (MFBI). Few functional feeding groups and habit groups were significantly related to deposited sediment, and this may be related to the focus on riffle, rather than reach-wide macroinvertebrates, as reach-wide sediment metrics were more closely related to human land use. Our results suggest that the coarse-level deposited sediment metric, visual estimate of fines, and the coarse-level biological index, MFBI, may be useful in biomonitoring efforts aimed at determining the impact of anthropogenic sedimentation on stream biotic integrity.
Metrics for comparison of crystallographic maps
Urzhumtsev, Alexandre; Afonine, Pavel V.; Lunin, Vladimir Y.; ...
2014-10-01
Numerical comparison of crystallographic contour maps is used extensively in structure solution and model refinement, analysis and validation. However, traditional metrics such as the map correlation coefficient (map CC, real-space CC or RSCC) sometimes contradict the results of visual assessment of the corresponding maps. This article explains such apparent contradictions and suggests new metrics and tools to compare crystallographic contour maps. The key to the new methods is rank scaling of the Fourier syntheses. The new metrics are complementary to the usual map CC and can be more helpful in map comparison, in particular when only some of their aspects, such as regions of high density, are of interest.
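A minimal sketch of the rank-scaling idea: correlating rank-transformed map values makes the comparison insensitive to monotonic differences in density scale (the article's full procedure, including handling of high-density regions, is not reproduced).

```python
import numpy as np
from scipy.stats import rankdata, pearsonr

def rank_scaled_cc(map_a, map_b):
    """Correlation of rank-scaled map values (equivalent to a Spearman correlation)."""
    return pearsonr(rankdata(map_a.ravel()), rankdata(map_b.ravel()))[0]

a = np.random.rand(32, 32, 32)
b = np.exp(a) + 0.01 * np.random.randn(32, 32, 32)   # nonlinear rescaling of the same map
print(rank_scaled_cc(a, b))                           # remains high despite the distorted scale
```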
Identification of the ideal clutter metric to predict time dependence of human visual search
NASA Astrophysics Data System (ADS)
Cartier, Joan F.; Hsu, David H.
1995-05-01
The Army Night Vision and Electronic Sensors Directorate (NVESD) has recently performed a human perception experiment in which eye tracker measurements were made on trained military observers searching for targets in infrared images. This data offered an important opportunity to evaluate a new technique for search modeling. Following the approach taken by Jeff Nicoll, this model treats search as a random walk in which the observers are in one of two states until they quit: they are either examining a point of interest or wandering around looking for one. When wandering they skip rapidly from point to point. When examining they move more slowly, reflecting the fact that target discrimination requires additional thought processes. In this paper we simulate the random walk, using a clutter metric to assign relative attractiveness to points of interest within the image which are competing for the observer's attention. The NVESD data indicates that a number of standard clutter metrics are good estimators of the apportionment of observer's time between wandering and examining. Conversely, the apportionment of observer time spent wandering and examining could be used to reverse engineer the ideal clutter metric which would most perfectly describe the behavior of the group of observers. It may be possible to use this technique to design the optimal clutter metric to predict performance of visual search.
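A toy version of such a two-state random walk is sketched below: points of interest are visited with probability proportional to their clutter-metric attractiveness, and time is split between wandering jumps and examining dwells. The transition probability and dwell rule are assumptions for illustration, not the Nicoll model's actual parameters.

```python
import numpy as np

def simulate_search(attractiveness, n_steps=200, p_examine=0.3, rng=None):
    """Toy two-state search: at each step the simulated observer either wanders
    (jumps to a point of interest chosen in proportion to its attractiveness)
    or pauses to examine the current point."""
    rng = rng or np.random.default_rng()
    probs = np.asarray(attractiveness, dtype=float)
    probs = probs / probs.sum()
    current, time_wandering, time_examining = None, 0, 0
    for _ in range(n_steps):
        if current is None or rng.random() > p_examine:
            current = rng.choice(len(probs), p=probs)   # wandering jump
            time_wandering += 1
        else:
            time_examining += 1                          # examining dwell
    return time_wandering, time_examining

print(simulate_search([0.1, 0.5, 0.2, 1.0, 0.05]))
```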
Quantitative metrics for assessment of chemical image quality and spatial resolution
Kertesz, Vilmos; Cahill, John F.; Van Berkel, Gary J.
2016-02-28
Rationale: Currently objective/quantitative descriptions of the quality and spatial resolution of mass spectrometry derived chemical images are not standardized. Development of these standardized metrics is required to objectively describe chemical imaging capabilities of existing and/or new mass spectrometry imaging technologies. Such metrics would allow unbiased judgment of intra-laboratory advancement and/or inter-laboratory comparison for these technologies if used together with standardized surfaces. Methods: We developed two image metrics, viz., chemical image contrast (ChemIC) based on signal-to-noise related statistical measures on chemical image pixels and corrected resolving power factor (cRPF) constructed from statistical analysis of mass-to-charge chronograms across features of interest in an image. These metrics, quantifying chemical image quality and spatial resolution, respectively, were used to evaluate chemical images of a model photoresist patterned surface collected using a laser ablation/liquid vortex capture mass spectrometry imaging system under different instrument operational parameters. Results: The calculated ChemIC and cRPF metrics determined in an unbiased fashion the relative ranking of chemical image quality obtained with the laser ablation/liquid vortex capture mass spectrometry imaging system. These rankings were used to show that both chemical image contrast and spatial resolution deteriorated with increasing surface scan speed, increased lane spacing and decreasing size of surface features. Conclusions: ChemIC and cRPF, respectively, were developed and successfully applied for the objective description of chemical image quality and spatial resolution of chemical images collected from model surfaces using a laser ablation/liquid vortex capture mass spectrometry imaging system.
Quantitative metrics for assessment of chemical image quality and spatial resolution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kertesz, Vilmos; Cahill, John F.; Van Berkel, Gary J.
Rationale: Currently objective/quantitative descriptions of the quality and spatial resolution of mass spectrometry derived chemical images are not standardized. Development of these standardized metrics is required to objectively describe chemical imaging capabilities of existing and/or new mass spectrometry imaging technologies. Such metrics would allow unbiased judgment of intra-laboratory advancement and/or inter-laboratory comparison for these technologies if used together with standardized surfaces. Methods: We developed two image metrics, viz., chemical image contrast (ChemIC) based on signal-to-noise related statistical measures on chemical image pixels and corrected resolving power factor (cRPF) constructed from statistical analysis of mass-to-charge chronograms across features of interest in an image. These metrics, quantifying chemical image quality and spatial resolution, respectively, were used to evaluate chemical images of a model photoresist patterned surface collected using a laser ablation/liquid vortex capture mass spectrometry imaging system under different instrument operational parameters. Results: The calculated ChemIC and cRPF metrics determined in an unbiased fashion the relative ranking of chemical image quality obtained with the laser ablation/liquid vortex capture mass spectrometry imaging system. These rankings were used to show that both chemical image contrast and spatial resolution deteriorated with increasing surface scan speed, increased lane spacing and decreasing size of surface features. Conclusions: ChemIC and cRPF, respectively, were developed and successfully applied for the objective description of chemical image quality and spatial resolution of chemical images collected from model surfaces using a laser ablation/liquid vortex capture mass spectrometry imaging system.
Young, Laura K; Smithson, Hannah E
2014-01-01
There is evidence that letter identification is mediated by only a narrow band of spatial frequencies and that the center frequency of the neural channel thought to underlie this selectivity is related to the size of the letters. When letters are spatially filtered (at a fixed size) the channel tuning characteristics change according to the properties of the spatial filter (Majaj et al., 2002). Optical aberrations in the eye act to spatially filter the image formed on the retina-their effect is generally to attenuate high frequencies more than low frequencies but often in a non-monotonic way. We might expect the change in the spatial frequency spectrum caused by the aberration to predict the shift in channel tuning observed for aberrated letters. We show that this is not the case. We used critical-band masking to estimate channel-tuning in the presence of three types of aberration-defocus, coma and secondary astigmatism. We found that the maximum masking was shifted to lower frequencies in the presence of an aberration and that this result was not simply predicted by the spatial-frequency-dependent degradation in image quality, assessed via metrics that have previously been shown to correlate well with performance loss in the presence of an aberration. We show that if image quality effects are taken into account (using visual Strehl metrics), the neural channel required to model the data is shifted to lower frequencies compared to the control (no-aberration) condition. Additionally, we show that when spurious resolution (caused by π phase shifts in the optical transfer function) in the image is masked, the channel tuning properties for aberrated letters are affected, suggesting that there may be interference between visual channels. Even in the presence of simulated aberrations, whose properties change from trial-to-trial, observers exhibit flexibility in selecting the spatial frequencies that support letter identification.
Porter, Stephen D.
2008-01-01
Algae are excellent indicators of water-quality conditions, notably nutrient and organic enrichment, and also are indicators of major ion, dissolved oxygen, and pH concentrations and stream microhabitat conditions. The autecology, or physiological optima and tolerance, of algal species for various water-quality contaminants and conditions is relatively well understood for certain groups of freshwater algae, notably diatoms. However, applications of autecological information for water-quality assessments have been limited because of challenges associated with compiling autecological literature from disparate sources, tracking name changes for a large number of algal species, and creating an autecological data base from which algal-indicator metrics can be calculated. A comprehensive summary of algal autecological attributes for North American streams and rivers does not exist. This report describes a large, digital data file containing 28,182 records for 5,939 algal taxa, generally species or variety, collected by the U.S. Geological Survey's National Water-Quality Assessment (NAWQA) Program. The data file includes 37 algal attributes classified by over 100 algal-indicator codes or metrics that can be calculated easily with readily available software. Algal attributes include qualitative classifications based on European and North American autecological literature, and semi-quantitative, weighted-average regression approaches for estimating optima using regional and national NAWQA data. Applications of algal metrics in water-quality assessments are discussed and national quartile distributions of metric scores are shown for selected indicator metrics.
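The weighted-average approach mentioned for estimating species optima has a standard form: each site's gradient value is weighted by the taxon's relative abundance there. The sketch below shows that calculation with hypothetical abundances and nutrient levels.

```python
import numpy as np

def weighted_average_optimum(abundances, gradient_values):
    """Weighted-average estimate of a taxon's optimum along an environmental
    gradient (e.g., total phosphorus)."""
    a = np.asarray(abundances, dtype=float)
    x = np.asarray(gradient_values, dtype=float)
    return (a * x).sum() / a.sum()

def weighted_tolerance(abundances, gradient_values):
    """Abundance-weighted standard deviation about the optimum (tolerance breadth)."""
    a = np.asarray(abundances, dtype=float)
    x = np.asarray(gradient_values, dtype=float)
    opt = weighted_average_optimum(a, x)
    return np.sqrt((a * (x - opt) ** 2).sum() / a.sum())

# Hypothetical diatom abundances across six sites and the sites' phosphorus levels.
abund = [0, 2, 10, 40, 15, 1]
phosphorus = [5, 10, 25, 60, 120, 300]
print(weighted_average_optimum(abund, phosphorus), weighted_tolerance(abund, phosphorus))
```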
Internet Use and Cybersecurity Concerns of Individuals with Visual Impairments
ERIC Educational Resources Information Center
Inan, Fethi A.; Namin, Akbar S.; Pogrund, Rona L.; Jones, Keith S.
2016-01-01
Twenty individuals with visual impairments were surveyed in order to (a) understand their Internet use and (b) examine relations between metrics related to Internet use and cybersecurity-related knowledge, skills, confidence, and attitudes. Participants used the Internet for various purposes, including information search, communication, chatting,…
Spatial attention enhances the selective integration of activity from area MT.
Masse, Nicolas Y; Herrington, Todd M; Cook, Erik P
2012-09-01
Distinguishing which of the many proposed neural mechanisms of spatial attention actually underlies behavioral improvements in visually guided tasks has been difficult. One attractive hypothesis is that attention allows downstream neural circuits to selectively integrate responses from the most informative sensory neurons. This would allow behavioral performance to be based on the highest-quality signals available in visual cortex. We examined this hypothesis by asking how spatial attention affects both the stimulus sensitivity of middle temporal (MT) neurons and their corresponding correlation with behavior. Analyzing a data set pooled from two experiments involving four monkeys, we found that spatial attention did not appreciably affect either the stimulus sensitivity of the neurons or the correlation between their activity and behavior. However, for those sessions in which there was a robust behavioral effect of attention, focusing attention inside the neuron's receptive field significantly increased the correlation between these two metrics, an indication of selective integration. These results suggest that, similar to mechanisms proposed for the neural basis of perceptual learning, the behavioral benefits of focusing spatial attention are attributable to selective integration of neural activity from visual cortical areas by their downstream targets.
Optimized universal color palette design for error diffusion
NASA Astrophysics Data System (ADS)
Kolpatzik, Bernd W.; Bouman, Charles A.
1995-04-01
Currently, many low-cost computers can only simultaneously display a palette of 256 colors. However, this palette is usually selectable from a very large gamut of available colors. For many applications, this limited palette size imposes a significant constraint on the achievable image quality. We propose a method for designing an optimized universal color palette for use with halftoning methods such as error diffusion. The advantage of a universal color palette is that it is fixed and therefore allows multiple images to be displayed simultaneously. To design the palette, we employ a new vector quantization method known as sequential scalar quantization (SSQ) to allocate the colors in a visually uniform color space. The SSQ method achieves near-optimal allocation, but may be efficiently implemented using a series of lookup tables. When used with error diffusion, SSQ adds little computational overhead and may be used to minimize the visual error in an opponent color coordinate system. We compare the performance of the optimized algorithm to standard error diffusion by evaluating a visually weighted mean-squared-error measure. Our metric is based on the color difference in CIE L*a*b*, but also accounts for the lowpass characteristic of human contrast sensitivity.
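The SSQ palette design itself is not reproduced here, but the error-diffusion stage the palette is designed for follows the classic Floyd-Steinberg pattern; the grayscale, fixed-level version below is a simplified stand-in (the paper's method quantizes color vectors in an opponent color space).

```python
import numpy as np

def error_diffuse(img, palette):
    """Floyd-Steinberg error diffusion to a fixed set of output levels."""
    out = img.astype(float).copy()
    h, w = out.shape
    for y in range(h):
        for x in range(w):
            old = out[y, x]
            new = palette[np.argmin(np.abs(palette - old))]   # nearest palette level
            out[y, x] = new
            err = old - new
            if x + 1 < w: out[y, x + 1] += err * 7 / 16
            if y + 1 < h and x > 0: out[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h: out[y + 1, x] += err * 5 / 16
            if y + 1 < h and x + 1 < w: out[y + 1, x + 1] += err * 1 / 16
    return out

levels = np.linspace(0, 255, 8)                       # a small fixed "palette"
image = np.tile(np.linspace(0, 255, 64), (64, 1))     # horizontal gray ramp
halftoned = error_diffuse(image, levels)
```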
Kanbar, Lara J; Shalish, Wissam; Precup, Doina; Brown, Karen; Sant'Anna, Guilherme M; Kearney, Robert E
2017-07-01
In multi-disciplinary studies, different forms of data are often collected for analysis. For example, APEX, a study on the automated prediction of extubation readiness in extremely preterm infants, collects clinical parameters and cardiorespiratory signals. A variety of cardiorespiratory metrics are computed from these signals and used to assign a cardiorespiratory pattern at each time. In such a situation, exploratory analysis requires a visualization tool capable of displaying these different types of acquired and computed signals in an integrated environment. Thus, we developed APEX_SCOPE, a graphical tool for the visualization of multi-modal data comprising cardiorespiratory signals, automated cardiorespiratory metrics, automated respiratory patterns, manually classified respiratory patterns, and manual annotations by clinicians during data acquisition. This MATLAB-based application provides a means for collaborators to view combinations of signals to promote discussion, generate hypotheses and develop features.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scholtz, Jean; Burtner, Edwin R.; Cook, Kristin A.
This course will introduce the field of Visual Analytics to HCI researchers and practitioners highlighting the contributions they can make to this field. Topics will include a definition of visual analytics along with examples of current systems, types of tasks and end users, issues in defining user requirements, design of visualizations and interactions, guidelines and heuristics, the current state of user-centered evaluations, and metrics for evaluation. We encourage designers, HCI researchers, and HCI practitioners to attend to learn how their skills can contribute to advancing the state of the art of visual analytics
Roberts, James J.; Bruce, James F.; Zuellig, Robert E.
2018-01-08
The analysis described in this report is part of a longterm project monitoring the biological communities, habitat, and water quality of the Fountain Creek Basin. Biology, habitat, and water-quality data have been collected at 10 sites since 2003. These data include annual samples of aquatic invertebrate communities, fish communities, water quality, and quantitative riverine habitat. This report examines trends in biological communities from 2003 to 2016 and explores relationships between biological communities and abiotic variables (antecedent streamflow, physical habitat, and water quality). Six biological metrics (three invertebrate and three fish) and four individual fish species were used to examine trends in these data and how streamflow, habitat, and (or) water quality may explain these trends. The analysis of 79 trends shows that the majority of significant trends decreased over the trend period. Overall, 19 trends before adjustments for streamflow in the fish (12) and invertebrate (7) metrics were all decreasing except for the metric Invertebrate Species Richness at the most upstream site in Monument Creek. Seven of these trends were explained by streamflow and four trends were revealed that were originally masked by variability in antecedent streamflow. Only two sites (Jimmy Camp Creek at Fountain, CO and Fountain Creek near Pinon, CO) had no trends in the fish or invertebrate metrics. Ten of the streamflow-adjusted trends were explained by habitat, one was explained by water quality, and five were not explained by any of the variables that were tested. Overall, from 2003 to 2016, all the fish metric trends were decreasing with an average decline of 40 percent, and invertebrate metrics decreased on average by 9.5 percent. A potential peak streamflow threshold was identified above which there is severely limited production of age-0 flathead chub (Platygobio gracilis).
SU-E-T-776: Use of Quality Metrics for a New Hypo-Fractionated Pre-Surgical Mesothelioma Protocol
DOE Office of Scientific and Technical Information (OSTI.GOV)
Richardson, S; Mehta, V
Purpose: The “SMART” (Surgery for Mesothelioma After Radiation Therapy) approach involves hypo-fractionated radiotherapy of the lung pleura to 25Gy over 5 days followed by surgical resection within 7 days. Early clinical results suggest that this approach is very promising, but also logistically challenging due to the multidisciplinary involvement. Due to the compressed schedule, high dose, and shortened planning time, the delivery of the planned doses was monitored for safety with quality metric software. Methods: Hypo-fractionated IMRT treatment plans were developed for all patients and exported to Quality Reports™ software. Plan quality metrics or PQMs™ were created to calculate an objective scoring function for each plan. This allows for an objective assessment of the quality of the plan and a benchmark for plan improvement for subsequent patients. The priorities of various components were incorporated based on similar hypo-fractionated protocols such as lung SBRT treatments. Results: Five patients have been treated at our institution using this approach. The plans were developed, QA performed, and ready within 5 days of simulation. Plan quality metrics utilized in scoring included doses to OAR and target coverage. All patients tolerated treatment well and proceeded to surgery as scheduled. Reported toxicity included grade 1 nausea (n=1), grade 1 esophagitis (n=1), grade 2 fatigue (n=3). One patient had recurrent fluid accumulation following surgery. No patients experienced any pulmonary toxicity prior to surgery. Conclusion: An accelerated course of pre-operative high dose radiation for mesothelioma is an innovative and promising new protocol. Without historical data, one must proceed cautiously and monitor the data carefully. The development of quality metrics and scoring functions for these treatments allows us to benchmark our plans and monitor improvement. If subsequent toxicities occur, these will be easy to investigate and incorporate into the metrics. This will improve the safe delivery of large doses for these patients.
Map visualization of groundwater withdrawals at the sub-basin scale
NASA Astrophysics Data System (ADS)
Goode, Daniel J.
2016-06-01
A simple method is proposed to visualize the magnitude of groundwater withdrawals from wells relative to user-defined water-resource metrics. The map is solely an illustration of the withdrawal magnitudes, spatially centered on wells—it is not capture zones or source areas contributing recharge to wells. Common practice is to scale the size (area) of withdrawal well symbols proportional to pumping rate. Symbols are drawn large enough to be visible, but not so large that they overlap excessively. In contrast to such graphics-based symbol sizes, the proposed method uses a depth-rate index (length per time) to visualize the well withdrawal rates by volumetrically consistent areas, called "footprints". The area of each individual well's footprint is the withdrawal rate divided by the depth-rate index. For example, the groundwater recharge rate could be used as a depth-rate index to show how large withdrawals are relative to that recharge. To account for the interference of nearby wells, composite footprints are computed by iterative nearest-neighbor distribution of excess withdrawals on a computational and display grid having uniform square cells. The map shows circular footprints at individual isolated wells and merged footprint areas where wells' individual footprints overlap. Examples are presented for depth-rate indexes corresponding to recharge, to spatially variable stream baseflow (normalized by basin area), and to the average rate of water-table decline (scaled by specific yield). These depth-rate indexes are water-resource metrics, and the footprints visualize the magnitude of withdrawals relative to these metrics.
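For an isolated well, the footprint definition given above reduces to a one-line calculation (area = withdrawal rate divided by the depth-rate index, drawn as a circle centered on the well); the iterative nearest-neighbor redistribution for overlapping wells is omitted, and the numbers below are hypothetical.

```python
import math

def footprint_area(q_withdrawal, depth_rate_index):
    """Footprint area = withdrawal rate / depth-rate index (e.g., recharge rate)."""
    return q_withdrawal / depth_rate_index

def footprint_radius(q_withdrawal, depth_rate_index):
    """Radius of the circular footprint drawn around an isolated well."""
    return math.sqrt(footprint_area(q_withdrawal, depth_rate_index) / math.pi)

# Example: a well pumping 500 m^3/day where recharge is 0.0005 m/day.
q = 500.0          # m^3/day
recharge = 5e-4    # m/day
print(footprint_area(q, recharge), footprint_radius(q, recharge))   # m^2, m
```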
Map visualization of groundwater withdrawals at the sub-basin scale
Goode, Daniel J.
2016-01-01
A simple method is proposed to visualize the magnitude of groundwater withdrawals from wells relative to user-defined water-resource metrics. The map is solely an illustration of the withdrawal magnitudes, spatially centered on wells—it is not capture zones or source areas contributing recharge to wells. Common practice is to scale the size (area) of withdrawal well symbols proportional to pumping rate. Symbols are drawn large enough to be visible, but not so large that they overlap excessively. In contrast to such graphics-based symbol sizes, the proposed method uses a depth-rate index (length per time) to visualize the well withdrawal rates by volumetrically consistent areas, called “footprints”. The area of each individual well’s footprint is the withdrawal rate divided by the depth-rate index. For example, the groundwater recharge rate could be used as a depth-rate index to show how large withdrawals are relative to that recharge. To account for the interference of nearby wells, composite footprints are computed by iterative nearest-neighbor distribution of excess withdrawals on a computational and display grid having uniform square cells. The map shows circular footprints at individual isolated wells and merged footprint areas where wells’ individual footprints overlap. Examples are presented for depth-rate indexes corresponding to recharge, to spatially variable stream baseflow (normalized by basin area), and to the average rate of water-table decline (scaled by specific yield). These depth-rate indexes are water-resource metrics, and the footprints visualize the magnitude of withdrawals relative to these metrics.
Lu, Yansha; Simonett, Joseph M; Wang, Jie; Zhang, Miao; Hwang, Thomas; Hagag, Ahmed M; Huang, David; Li, Dengwang; Jia, Yali
2018-05-01
To describe an automated algorithm to quantify the foveal avascular zone (FAZ), using optical coherence tomography angiography (OCTA), and to compare its performance for diagnosis of diabetic retinopathy (DR) and association with best-corrected visual acuity (BCVA) to that of extrafoveal avascular area (EAA). We obtained 3 × 3-mm macular OCTA scans in diabetic patients with various levels of DR and healthy controls. An algorithm based on a generalized gradient vector flow (GGVF) snake model detected the FAZ, and metrics assessing FAZ size and irregularity were calculated. We compared the automated FAZ segmentation to manual delineation and tested the within-visit repeatability of FAZ metrics. The correlations of two conventional FAZ metrics, two novel FAZ metrics, and EAA with DR severity and BCVA, as determined by Early Treatment Diabetic Retinopathy Study (ETDRS) charts, were assessed. Sixty-six eyes from 66 diabetic patients and 19 control eyes from 19 healthy participants were included. The agreement between manual and automated FAZ delineation had a Jaccard index > 0.82, and the repeatability of automated FAZ detection was excellent in eyes at all levels of DR severity. FAZ metrics that incorporated both FAZ size and shape irregularity had the strongest correlation with clinical DR grade and BCVA. Of all the tested OCTA metrics, EAA had the greatest sensitivity in differentiating diabetic eyes without clinical evidence of retinopathy, mild to moderate nonproliferative DR (NPDR), and severe NPDR to proliferative DR from healthy controls. The GGVF snake algorithm tested in this study can accurately and reliably detect the FAZ, using OCTA data at all DR severity grades, and may be used to obtain clinically useful information from OCTA data regarding macular ischemia in patients with diabetes. While FAZ metrics can provide clinically useful information regarding macular ischemia, and possibly visual acuity potential, EAA measurements may be a better biomarker for DR.
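The agreement measure named in this abstract, the Jaccard index between the automated and manual FAZ segmentations, has a standard definition; the shape-irregularity measure below (perimeter relative to an equal-area circle) is only an illustrative stand-in for the paper's novel FAZ metrics, and the masks are synthetic.

```python
import numpy as np

def jaccard_index(mask_a, mask_b):
    """Overlap between two binary FAZ segmentations: |A ∩ B| / |A ∪ B|."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0

def acircularity(area_px, perimeter_px):
    """Illustrative irregularity measure: perimeter relative to that of a circle
    of equal area (1.0 = perfectly circular; larger = more irregular)."""
    return perimeter_px / (2.0 * np.sqrt(np.pi * area_px))

auto = np.zeros((100, 100), bool); auto[40:60, 40:60] = True
manual = np.zeros((100, 100), bool); manual[42:61, 41:60] = True
print(jaccard_index(auto, manual))
```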
Angermeier, P.L.; Davideanu, G.
2004-01-01
Multimetric biotic indices increasingly are used to complement physicochemical data in assessments of stream quality. We initiated development of multimetric indices, based on fish communities, to assess biotic integrity of streams in two physiographic regions of central Romania. Unlike previous efforts to develop such indices for European streams, our metrics and scoring criteria were selected largely on the basis of empirical relations in the regions of interest. We categorised 54 fish species with respect to ten natural-history attributes, then used this information to compute 32 candidate metrics of five types (taxonomic, tolerance, abundance, reproductive, and feeding) for each of 35 sites. We assessed the utility of candidate metrics for detecting anthropogenic impact based on three criteria: (a) range of values taken, (b) relation to a site-quality index (SQI), which incorporated information on hydrologic alteration, channel alteration, land-use intensity, and water chemistry, and (c) metric redundancy. We chose seven metrics from each region to include in preliminary multimetric indices (PMIs). Both PMIs included taxonomic, tolerance, and feeding metrics, but only two metrics were common to both PMIs. Although we could not validate our PMIs, their strong association with the SQI in each region suggests that such indices would be valuable tools for assessing stream quality and could provide more comprehensive assessments than the traditional approaches based solely on water chemistry.
Initial Ada components evaluation
NASA Technical Reports Server (NTRS)
Moebes, Travis
1989-01-01
The SAIC has the responsibility for independent test and validation of the SSE. They have been using a mathematical functions library package implemented in Ada to test the SSE IV and V process. The library package consists of elementary mathematical functions and is both machine and accuracy independent. The SSE Ada components evaluation includes code complexity metrics based on Halstead's software science metrics and McCabe's measure of cyclomatic complexity. Halstead's metrics are based on the number of operators and operands on a logical unit of code and are compiled from the number of distinct operators, distinct operands, and total number of occurrences of operators and operands. These metrics give an indication of the physical size of a program in terms of operators and operands and are used diagnostically to point to potential problems. McCabe's Cyclomatic Complexity Metrics (CCM) are compiled from flow charts transformed to equivalent directed graphs. The CCM is a measure of the total number of linearly independent paths through the code's control structure. These metrics were computed for the Ada mathematical functions library using Software Automated Verification and Validation (SAVVAS), the SSE IV and V tool. A table with selected results was shown, indicating that most of these routines are of good quality. Thresholds for the Halstead measures indicate poor quality if the length metric exceeds 260 or difficulty is greater than 190. The McCabe CCM indicated a high quality of software products.
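The Halstead and McCabe measures referenced here have standard definitions; the sketch below computes Halstead length, vocabulary, and difficulty from operator/operand occurrence lists and the cyclomatic number V(G) = E - N + 2P from control-flow graph counts (the report's thresholds were length > 260 and difficulty > 190).

```python
def halstead_metrics(operators, operands):
    """operators/operands are lists of every occurrence in a logical unit of code."""
    n1, n2 = len(set(operators)), len(set(operands))   # distinct operators/operands
    N1, N2 = len(operators), len(operands)             # total occurrences
    length = N1 + N2
    vocabulary = n1 + n2
    difficulty = (n1 / 2.0) * (N2 / n2) if n2 else 0.0
    return {"length": length, "vocabulary": vocabulary, "difficulty": difficulty}

def cyclomatic_complexity(edges, nodes, connected_components=1):
    """McCabe cyclomatic complexity: V(G) = E - N + 2P for a control-flow graph."""
    return edges - nodes + 2 * connected_components

print(halstead_metrics(["+", "*", "+", ":="], ["a", "b", "a", "c"]))
print(cyclomatic_complexity(edges=9, nodes=8))
```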
Atreja, Ashish; Khan, Sameer; Rogers, Jason D; Otobo, Emamuzo; Patel, Nishant P; Ullman, Thomas; Colombel, Jean Fred; Moore, Shirley; Sands, Bruce E
2015-02-18
Inflammatory bowel disease (IBD) is a chronic condition of the bowel that affects over 1 million people in the United States. The recurring nature of disease makes IBD patients ideal candidates for patient-engaged care that is centered on enhanced self-management and improved doctor-patient communication. In IBD, optimal approaches to management vary for patients with different phenotypes and extent of disease and past surgical history. Hence, a single quality metric cannot define a heterogeneous disease such as IBD, unlike hypertension and diabetes. A more comprehensive assessment may be provided by complementing traditional quality metrics with measures of the patient's quality of life (QOL) through an application like HealthPROMISE. The objective of this pragmatic randomized controlled trial is to determine the impact of the HealthPROMISE app in improving outcomes (quality of care [QOC], QOL, patient adherence, disease control, and resource utilization) as compared to a patient education app. Our hypothesis is that a patient-centric self-monitoring and collaborative decision support platform will lead to sustainable improvement in overall QOL for IBD patients. Participants will be recruited during face-to-face visits and randomized to either an interventional (ie, HealthPROMISE) or control (ie, education app). Patients in the HealthPROMISE arm will be able to update their information and receive disease summary, quality metrics, and a graph showing the trend of QOL (SIBDQ) scores and resource utilization over time. Providers will use the data for collaborative decision making and quality improvement interventions at the point of care. Patients in the control arm will enter data at baseline, during office visits, and at the end of the study but will not receive any decision support (trend of QOL, alert, or dashboard views). Enrollment in the trial will be starting in first quarter of 2015. It is intended that up to 300 patients with IBD will be recruited into the study (with 1:1 allocation ratio). The primary endpoint is number of quality indicators met in HealthPROMISE versus control arm. Secondary endpoints include decrease in number of emergency visits due to IBD, decrease in number of hospitalization due to IBD, change in generic QOL score from baseline, proportion of patients in each group who meet all eligible outpatient quality metrics, and proportion of patients in disease control in each group. In addition, we plan to conduct protocol analysis of intervention patients with adequate HealthPROMISE utilization (more than 6 log-ins with data entry from week 0 through week 52) achieving above mentioned primary and secondary endpoints. HealthPROMISE is a unique cloud-based patient-reported outcome (PRO) and decision support tool that empowers both patients and providers. Patients track their QOL and symptoms, and providers can use the visual data in real time (integrated with electronic health records [EHRs]) to provide better care to their entire patient population. Using pragmatic trial design, we hope to show that IBD patients who participate in their own care and share in decision making have appreciably improved outcomes when compared to patients who do not. ClinicalTrials.gov NCT02322307; https://clinicaltrials.gov/ct2/show/NCT02322307 (Archived by WebCite at http://www.webcitation.org/6W8PoYThr).
Khan, Sameer; Rogers, Jason D; Otobo, Emamuzo; Patel, Nishant P; Ullman, Thomas; Colombel, Jean Fred; Moore, Shirley; Sands, Bruce E
2015-01-01
Background Inflammatory bowel disease (IBD) is a chronic condition of the bowel that affects over 1 million people in the United States. The recurring nature of disease makes IBD patients ideal candidates for patient-engaged care that is centered on enhanced self-management and improved doctor-patient communication. In IBD, optimal approaches to management vary for patients with different phenotypes and extent of disease and past surgical history. Hence, a single quality metric cannot define a heterogeneous disease such as IBD, unlike hypertension and diabetes. A more comprehensive assessment may be provided by complementing traditional quality metrics with measures of the patient’s quality of life (QOL) through an application like HealthPROMISE. Objective The objective of this pragmatic randomized controlled trial is to determine the impact of the HealthPROMISE app in improving outcomes (quality of care [QOC], QOL, patient adherence, disease control, and resource utilization) as compared to a patient education app. Our hypothesis is that a patient-centric self-monitoring and collaborative decision support platform will lead to sustainable improvement in overall QOL for IBD patients. Methods Participants will be recruited during face-to-face visits and randomized to either an interventional (ie, HealthPROMISE) or control (ie, education app). Patients in the HealthPROMISE arm will be able to update their information and receive disease summary, quality metrics, and a graph showing the trend of QOL (SIBDQ) scores and resource utilization over time. Providers will use the data for collaborative decision making and quality improvement interventions at the point of care. Patients in the control arm will enter data at baseline, during office visits, and at the end of the study but will not receive any decision support (trend of QOL, alert, or dashboard views). Results Enrollment in the trial will be starting in first quarter of 2015. It is intended that up to 300 patients with IBD will be recruited into the study (with 1:1 allocation ratio). The primary endpoint is number of quality indicators met in HealthPROMISE versus control arm. Secondary endpoints include decrease in number of emergency visits due to IBD, decrease in number of hospitalization due to IBD, change in generic QOL score from baseline, proportion of patients in each group who meet all eligible outpatient quality metrics, and proportion of patients in disease control in each group. In addition, we plan to conduct protocol analysis of intervention patients with adequate HealthPROMISE utilization (more than 6 log-ins with data entry from week 0 through week 52) achieving above mentioned primary and secondary endpoints. Conclusions HealthPROMISE is a unique cloud-based patient-reported outcome (PRO) and decision support tool that empowers both patients and providers. Patients track their QOL and symptoms, and providers can use the visual data in real time (integrated with electronic health records [EHRs]) to provide better care to their entire patient population. Using pragmatic trial design, we hope to show that IBD patients who participate in their own care and share in decision making have appreciably improved outcomes when compared to patients who do not. Trial Registration ClinicalTrials.gov NCT02322307; https://clinicaltrials.gov/ct2/show/NCT02322307 (Archived by WebCite at http://www.webcitation.org/6W8PoYThr). PMID:25693610
EVALUATION OF METRIC PRECISION FOR A RIPARIAN FOREST SURVEY
This paper evaluates the performance of a protocol to monitor riparian forests in western Oregon based on the quality of the data obtained from a recent field survey. Precision and accuracy are the criteria used to determine the quality of 19 field metrics. The field survey con...
The principal focus of this project is the mapping and interpretation of landscape scale (i.e., broad scale) ecological metrics among contributing watersheds of the Upper White River, and the development of geospatial models of water quality vulnerability for several suspected no...
An Opportunistic Routing Mechanism Combined with Long-Term and Short-Term Metrics for WMN
Piao, Xianglan; Qiu, Tie
2014-01-01
WMN (wireless mesh network) is a useful wireless multihop network with tremendous research value. The routing strategy determines the performance of the network and the quality of transmission. A good routing algorithm will use the whole bandwidth of the network and assure the quality of service of traffic. Since the routing metric ETX (expected transmission count) does not assure good quality of wireless links, an opportunistic routing mechanism combined with long-term and short-term metrics for WMN, based on OLSR (optimized link state routing) and ETX, is proposed in this paper to improve routing performance. This mechanism always chooses the highest-throughput links to improve the performance of routing over WMN and thereby reduces the energy consumption of mesh routers. The simulations and analyses show that the opportunistic routing mechanism is better than the mechanism with the metric of ETX. PMID:25250379
An opportunistic routing mechanism combined with long-term and short-term metrics for WMN.
Sun, Weifeng; Wang, Haotian; Piao, Xianglan; Qiu, Tie
2014-01-01
WMN (wireless mesh network) is a useful wireless multihop network with tremendous research value. The routing strategy determines the performance of the network and the quality of transmission. A good routing algorithm will use the whole bandwidth of the network and assure the quality of service of traffic. Since the routing metric ETX (expected transmission count) does not assure good quality of wireless links, an opportunistic routing mechanism combined with long-term and short-term metrics for WMN, based on OLSR (optimized link state routing) and ETX, is proposed in this paper to improve routing performance. This mechanism always chooses the highest-throughput links to improve the performance of routing over WMN and thereby reduces the energy consumption of mesh routers. The simulations and analyses show that the opportunistic routing mechanism is better than the mechanism with the metric of ETX.
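The ETX metric referenced in the two entries above has a simple closed form as commonly defined: the expected number of transmissions on a link is the reciprocal of the product of the forward and reverse probe delivery ratios, and a path metric is the sum over its links. The sketch below illustrates only that calculation; the long-term/short-term combination and the OLSR integration described in the paper are not reproduced, and the example delivery ratios are assumed values.

```python
def link_etx(df: float, dr: float) -> float:
    """Expected transmission count of a link.

    df, dr: forward and reverse probe delivery ratios in (0, 1].
    """
    return 1.0 / (df * dr)

def path_etx(links):
    """Path metric: sum of per-link ETX values (lower is better)."""
    return sum(link_etx(df, dr) for df, dr in links)

# Example: a 2-hop path with reliable links beats a 1-hop path with a lossy link.
print(path_etx([(0.9, 0.95), (0.9, 0.9)]))  # ~2.40
print(path_etx([(0.5, 0.6)]))               # ~3.33
```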
Chavan, Satishkumar S; Mahajan, Abhishek; Talbar, Sanjay N; Desai, Subhash; Thakur, Meenakshi; D'cruz, Anil
2017-02-01
Neurocysticercosis (NCC) is a parasite infection caused by the tapeworm Taenia solium in its larval stage which affects the central nervous system of the human body (a definitive host). It results in the formation of multiple lesions in the brain at different locations during its various stages. During diagnosis of such symptomatic patients, these lesions can be better visualized using a feature based fusion of Computed Tomography (CT) and Magnetic Resonance Imaging (MRI). This paper presents a novel approach to Multimodality Medical Image Fusion (MMIF) used for the analysis of the lesions for the diagnostic purpose and post treatment review of NCC. The MMIF presented here is a technique of combining CT and MRI data of the same patient into a new slice using a Nonsubsampled Rotated Complex Wavelet Transform (NSRCxWT). The forward NSRCxWT is applied on both the source modalities separately to extract the complementary and the edge related features. These features are then combined to form a composite spectral plane using average and maximum value selection fusion rules. The inverse transformation on this composite plane results in a new, visually better, and enriched fused image. The proposed technique is tested on the pilot study data sets of patients infected with NCC. The quality of these fused images is measured using objective and subjective evaluation metrics. Objective evaluation is performed by estimating the fusion parameters like entropy, fusion factor, image quality index, edge quality measure, mean structural similarity index measure, etc. The fused images are also evaluated for their visual quality using subjective analysis with the help of three expert radiologists. The experimental results on 43 image data sets of 17 patients are promising and superior when compared with the state-of-the-art wavelet-based fusion algorithms. The proposed algorithm can be a part of computer-aided detection and diagnosis (CADD) system which assists the radiologists in clinical practice. Copyright © 2016 Elsevier Ltd. All rights reserved.
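The fusion rules named in the abstract (average and maximum value selection) can be illustrated with an ordinary discrete wavelet transform. The sketch below uses PyWavelets as a stand-in for the NSRCxWT described in the paper, and the assignment of the rules to bands (average for the approximation band, maximum-magnitude selection for the detail bands) is a common convention assumed here rather than taken from the paper.

```python
import numpy as np
import pywt  # PyWavelets, used here only as a stand-in for the NSRCxWT

def fuse_ct_mri(ct: np.ndarray, mri: np.ndarray, wavelet: str = "db2") -> np.ndarray:
    """Fuse two co-registered slices: average the approximation band and keep the
    maximum-magnitude coefficient in each detail band, then invert the transform."""
    cA1, (cH1, cV1, cD1) = pywt.dwt2(ct.astype(float), wavelet)
    cA2, (cH2, cV2, cD2) = pywt.dwt2(mri.astype(float), wavelet)

    cA = 0.5 * (cA1 + cA2)                                       # average rule
    pick = lambda a, b: np.where(np.abs(a) >= np.abs(b), a, b)   # maximum rule
    fused = (cA, (pick(cH1, cH2), pick(cV1, cV2), pick(cD1, cD2)))
    return pywt.idwt2(fused, wavelet)
```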
NASA Astrophysics Data System (ADS)
Wood, Brian M.; Wood, Zoë J.
2006-01-01
We present a visualization and computation tool for modeling the caloric cost of pedestrian travel across three dimensional terrains. This tool is being used in ongoing archaeological research that analyzes how costs of locomotion affect the spatial distribution of trails and artifacts across archaeological landscapes. Throughout human history, traveling by foot has been the most common form of transportation, and therefore analyses of pedestrian travel costs are important for understanding prehistoric patterns of resource acquisition, migration, trade, and political interaction. Traditionally, archaeologists have measured geographic proximity based on "as the crow flies" distance. We propose new methods for terrain visualization and analysis based on measuring paths of least caloric expense, calculated using well established metabolic equations. Our approach provides a human centered metric of geographic closeness, and overcomes significant limitations of available Geographic Information System (GIS) software. We demonstrate such path computations and visualizations applied to archaeological research questions. Our system includes tools to visualize: energetic cost surfaces, comparisons of the elevation profiles of shortest paths versus least cost paths, and the display of paths of least caloric effort on Digital Elevation Models (DEMs). These analysis tools can be applied to calculate and visualize 1) likely locations of prehistoric trails and 2) expected ratios of raw material types to be recovered at archaeological sites.
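A path of least caloric expense over a DEM can be found with an ordinary shortest-path search once each grid step is assigned an energetic cost. The sketch below runs Dijkstra's algorithm on an 8-connected grid with a placeholder cost function; the published tool relies on established metabolic equations that are not reproduced here, so the slope weights and cell size are assumptions for illustration only.

```python
import heapq
import numpy as np

def caloric_cost(dist_m: float, slope: float) -> float:
    """Placeholder energetic cost (arbitrary units) for one step; uphill steps are
    penalized more than downhill ones. Not the published metabolic equations."""
    return dist_m * (1.0 + 8.0 * max(slope, 0.0) + 2.0 * max(-slope, 0.0))

def least_cost_path(dem: np.ndarray, start, goal, cell_m: float = 30.0):
    """Dijkstra search over an 8-connected DEM grid, minimizing caloric cost.
    Assumes the goal cell is reachable from the start cell."""
    rows, cols = dem.shape
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue
        r, c = node
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if dr == 0 and dc == 0:
                    continue
                nr, nc = r + dr, c + dc
                if not (0 <= nr < rows and 0 <= nc < cols):
                    continue
                run = cell_m * (2 ** 0.5 if dr and dc else 1.0)
                slope = (dem[nr, nc] - dem[r, c]) / run
                nd = d + caloric_cost(run, slope)
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = node
                    heapq.heappush(pq, (nd, (nr, nc)))
    # Reconstruct the path by walking back from the goal.
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[goal]
```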
Correlation analysis of respiratory signals by using parallel coordinate plots.
Saatci, Esra
2018-01-01
Understanding the relationships between respiratory signals, i.e. the airflow, the mouth pressure, the relative temperature and the relative humidity during breathing, may improve measurement methods for respiratory mechanics and sensor designs, or open up several possible applications in the analysis of respiratory disorders. Therefore, the main objective of this study was to propose a new combination of methods in order to determine the relationship between respiratory signals as multidimensional data. In order to reveal the coupling between the processes two very different methods were used: the well-known statistical correlation analysis (i.e. Pearson's correlation and cross-correlation coefficient) and parallel coordinate plots (PCPs). Curve bundling with the number of intersections for the correlation analysis, a Least Mean Square Time Delay Estimator (LMS-TDE) for point delay detection, and visual metrics for the recognition of visual structures were proposed and utilized in the PCP. The number of intersections increased when the correlation coefficient changed from high positive to high negative correlation between the respiratory signals, especially if the whole breath was processed. LMS-TDE coefficients plotted in the PCP indicated point delay results that matched the findings of the correlation analysis well. Visual inspection of the PCPs by visual metrics showed range, dispersions, entropy comparisons, and linear and sinusoidal-like relationships between the respiratory signals. It is demonstrated that the basic correlation analysis together with the parallel coordinate plots perceptually motivates the visual metrics in the display and thus can be considered as an aid to user analysis by providing meaningful views of the data. Copyright © 2017 Elsevier B.V. All rights reserved.
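The statistical half of that analysis is straightforward to reproduce: a Pearson coefficient for the strength of the coupling and a cross-correlation peak for the point delay between two signals. A minimal sketch is given below; the peak-of-cross-correlation lag is a simple stand-in for the LMS time-delay estimator used in the paper, not the estimator itself.

```python
import numpy as np

def pearson_r(x, y) -> float:
    """Pearson correlation coefficient between two equally sampled signals."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return float(np.corrcoef(x, y)[0, 1])

def delay_by_cross_correlation(x, y) -> int:
    """Lag (in samples) at which the cross-correlation of x and y peaks;
    positive values mean x is delayed relative to y."""
    x = np.asarray(x, float) - np.mean(x)
    y = np.asarray(y, float) - np.mean(y)
    xc = np.correlate(x, y, mode="full")
    return int(np.argmax(xc) - (len(y) - 1))
```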
Fear of falling and postural reactivity in patients with glaucoma.
Daga, Fábio B; Diniz-Filho, Alberto; Boer, Erwin R; Gracitelli, Carolina P B; Abe, Ricardo Y; Medeiros, Felipe A
2017-01-01
To investigate the relationship between postural metrics obtained by dynamic visual stimulation in a virtual reality environment and the presence of fear of falling in glaucoma patients. This cross-sectional study included 35 glaucoma patients and 26 controls that underwent evaluation of postural balance by a force platform during presentation of static and dynamic visual stimuli with head-mounted goggles (Oculus Rift). In dynamic condition, a peripheral translational stimulus was used to induce vection and assess postural reactivity. Standard deviations of torque moments (SDTM) were calculated as indicative of postural stability. Fear of falling was assessed by a standardized questionnaire. The relationship between a summary score of fear of falling and postural metrics was investigated using linear regression models, adjusting for potentially confounding factors. Subjects with glaucoma reported greater fear of falling compared to controls (-0.21 vs. 0.27; P = 0.039). In glaucoma patients, postural metrics during dynamic visual stimulus were more associated with fear of falling (R2 = 18.8%; P = 0.001) than static (R2 = 3.0%; P = 0.005) and dark field (R2 = 5.7%; P = 0.007) conditions. In the univariable model, fear of falling was not significantly associated with binocular standard perimetry mean sensitivity (P = 0.855). In the multivariable model, each 1 Nm larger SDTM in anteroposterior direction during dynamic stimulus was associated with a worsening of 0.42 units in the fear of falling questionnaire score (P = 0.001). In glaucoma patients, postural reactivity to a dynamic visual stimulus using a virtual reality environment was more strongly associated with fear of falling than visual field testing and traditional balance assessment.
Fear of falling and postural reactivity in patients with glaucoma
Daga, Fábio B.; Diniz-Filho, Alberto; Boer, Erwin R.; Gracitelli, Carolina P. B.; Abe, Ricardo Y.; Medeiros, Felipe A.
2017-01-01
Purpose To investigate the relationship between postural metrics obtained by dynamic visual stimulation in a virtual reality environment and the presence of fear of falling in glaucoma patients. Methods This cross-sectional study included 35 glaucoma patients and 26 controls that underwent evaluation of postural balance by a force platform during presentation of static and dynamic visual stimuli with head-mounted goggles (Oculus Rift). In dynamic condition, a peripheral translational stimulus was used to induce vection and assess postural reactivity. Standard deviations of torque moments (SDTM) were calculated as indicative of postural stability. Fear of falling was assessed by a standardized questionnaire. The relationship between a summary score of fear of falling and postural metrics was investigated using linear regression models, adjusting for potentially confounding factors. Results Subjects with glaucoma reported greater fear of falling compared to controls (-0.21 vs. 0.27; P = 0.039). In glaucoma patients, postural metrics during dynamic visual stimulus were more associated with fear of falling (R2 = 18.8%; P = 0.001) than static (R2 = 3.0%; P = 0.005) and dark field (R2 = 5.7%; P = 0.007) conditions. In the univariable model, fear of falling was not significantly associated with binocular standard perimetry mean sensitivity (P = 0.855). In the multivariable model, each 1 Nm larger SDTM in anteroposterior direction during dynamic stimulus was associated with a worsening of 0.42 units in the fear of falling questionnaire score (P = 0.001). Conclusion In glaucoma patients, postural reactivity to a dynamic visual stimulus using a virtual reality environment was more strongly associated with fear of falling than visual field testing and traditional balance assessment. PMID:29211742
NASA Astrophysics Data System (ADS)
de Jesús-Crespo, Rebeca; Ramirez, Alonso
The growing need to protect stream ecosystems in Puerto Rico requires the development of monitoring procedures that help determine management priorities. Physical habitat assessments have been used to make quick evaluations that are cost efficient and easy to conduct, yet they need to be studied further to understand their accuracy at predicting stream health. This study evaluated the efficiency of the Hawaii Stream Visual Assessment Protocol (HSVAP) at determining the integrity of streams within the highly urbanized Rio Piedras watershed in Puerto Rico. To validate the protocol we compared results from HSVAP assessments conducted at 16 reaches with water quality and macroinvertebrate data collected at the same sites. Results from linear regressions between the water quality measures and HSVAP scores showed that there were no significant relationships (R2 = 0.48; p = 0.08). This implies that the protocol is not supported by the water quality data. However, results from regressions between macroinvertebrate diversity and the number of families per site showed a significant positive relation with HSVAP scores (R2 = 0.30; p = 0.02; R2 = 0.24; p = 0.05). In addition, a significant negative relation was observed between HSVAP scores and the Family Biotic Index (FBI) (R2 = 0.32; p = 0.02). Comparisons between ratings obtained from the FBI and HSVAP scores suggest that the HSVAP classified sites as having higher quality than the biological metric. Based on these results, it can be concluded that the HSVAP is a good tool for a general assessment of the physical characteristics of a stream, but it needs modifications to accurately assess the ecological quality of streams in Puerto Rico.
Metrication report to the Congress. 1991 activities and 1992 plans
NASA Technical Reports Server (NTRS)
1991-01-01
During 1991, NASA approved a revised metric use policy and developed a NASA Metric Transition Plan. This Plan targets the end of 1995 for completion of NASA's metric initiatives. This Plan also identifies future programs that NASA anticipates will use the metric system of measurement. Field installations began metric transition studies in 1991 and will complete them in 1992. Half of NASA's Space Shuttle payloads for 1991, and almost all such payloads for 1992, have some metric-based elements. In 1992, NASA will begin assessing requirements for space-quality piece parts fabricated to U.S. metric standards, leading to development and qualification of high priority parts.
Improvement of impact noise in a passenger car utilizing sound metric based on wavelet transform
NASA Astrophysics Data System (ADS)
Lee, Sang-Kwon; Kim, Ho-Wuk; Na, Eun-Woo
2010-08-01
A new sound metric for impact sound is developed based on the continuous wavelet transform (CWT), a useful tool for the analysis of non-stationary signals such as impact noise. Together with the new metric, two other conventional sound metrics related to sound modulation and fluctuation are also considered. In all, three sound metrics are employed to develop impact sound quality indexes for several specific impact courses on the road. Impact sounds are evaluated subjectively by 25 jurors. The indexes are verified by comparing the correlation between the index output and the results of a subjective evaluation based on a jury test. These indexes are successfully applied to an objective evaluation for improvement of the impact sound quality in cases where some parts of the suspension system of the test car are modified.
Primativo, Silvia; Reilly, Jamie; Crutch, Sebastian J
2016-01-01
The Abstract Conceptual Feature (ACF) framework predicts that word meaning is represented within a high-dimensional semantic space bounded by weighted contributions of perceptual, affective, and encyclopedic information. The ACF, like latent semantic analysis, is amenable to distance metrics between any two words. We applied predictions of the ACF framework to abstract words using eye tracking via an adaptation of the classical ‘visual word paradigm’. Healthy adults (N=20) selected the lexical item most related to a probe word in a 4-item written word array comprising the target and three distractors. The relation between the probe and each of the four words was determined using the semantic distance metrics derived from ACF ratings. Eye-movement data indicated that the word that was most semantically related to the probe received more and longer fixations relative to distractors. Importantly, in sets where participants did not provide an overt behavioral response, the fixation rates were none the less significantly higher for targets than distractors, closely resembling trials where an expected response was given. Furthermore, ACF ratings which are based on individual words predicted eye fixation metrics of probe-target similarity at least as well as latent semantic analysis ratings which are based on word co-occurrence. The results provide further validation of Euclidean distance metrics derived from ACF ratings as a measure of one facet of the semantic relatedness of abstract words and suggest that they represent a reasonable approximation of the organization of abstract conceptual space. The data are also compatible with the broad notion that multiple sources of information (not restricted to sensorimotor and emotion information) shape the organization of abstract concepts. Whilst the adapted ‘visual word paradigm’ is potentially a more metacognitive task than the classical visual world paradigm, we argue that it offers potential utility for studying abstract word comprehension. PMID:26901571
SU-E-I-71: Quality Assessment of Surrogate Metrics in Multi-Atlas-Based Image Segmentation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, T; Ruan, D
Purpose: With the ever-growing data of heterogeneous quality, relevance assessment of atlases becomes increasingly critical for multi-atlas-based image segmentation. However, there is no universally recognized best relevance metric and even a standard to compare amongst candidates remains elusive. This study, for the first time, designs a quantification to assess relevance metrics’ quality, based on a novel perspective of the metric as surrogate for inferring the inaccessible oracle geometric agreement. Methods: We first develop an inference model to relate surrogate metrics in image space to the underlying oracle relevance metric in segmentation label space, with a monotonically non-decreasing function subject to random perturbations. Subsequently, we investigate model parameters to reveal key contributing factors to surrogates’ ability in prognosticating the oracle relevance value, for the specific task of atlas selection. Finally, we design an effective contrast-to-noise ratio (eCNR) to quantify surrogates’ quality based on insights from these analyses and empirical observations. Results: The inference model was specialized to a linear function with normally distributed perturbations, with surrogate metric exemplified by several widely-used image similarity metrics, i.e., MSD/NCC/(N)MI. Surrogates’ behaviors in selecting the most relevant atlases were assessed under varying eCNR, showing that surrogates with high eCNR dominated those with low eCNR in retaining the most relevant atlases. In an end-to-end validation, NCC/(N)MI with eCNR of 0.12 resulted in statistically better segmentation than MSD with eCNR of 0.10, with mean DSC of about 0.85 and first and third quartiles of (0.83, 0.89), compared to a mean DSC of 0.84 and first and third quartiles of (0.81, 0.89) for MSD. Conclusion: The designed eCNR is capable of characterizing surrogate metrics’ quality in prognosticating the oracle relevance value. It has been demonstrated to be correlated with the performance of relevant atlas selection and ultimate label fusion.
A Validation of Object-Oriented Design Metrics as Quality Indicators
NASA Technical Reports Server (NTRS)
Basili, Victor R.; Briand, Lionel C.; Melo, Walcelio
1997-01-01
This paper presents the results of a study in which we empirically investigated the suite of object-oriented (OO) design metrics introduced in another work. More specifically, our goal is to assess these metrics as predictors of fault-prone classes and, therefore, determine whether they can be used as early quality indicators. This study is complementary to the work described where the same suite of metrics had been used to assess frequencies of maintenance changes to classes. To perform our validation accurately, we collected data on the development of eight medium-sized information management systems based on identical requirements. All eight projects were developed using a sequential life cycle model, a well-known OO analysis/design method and the C++ programming language. Based on empirical and quantitative analysis, the advantages and drawbacks of these OO metrics are discussed. Several of Chidamber and Kemerer's OO metrics appear to be useful to predict class fault-proneness during the early phases of the life-cycle. Also, on our data set, they are better predictors than 'traditional' code metrics, which can only be collected at a later phase of the software development processes.
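The Chidamber and Kemerer suite includes metrics such as Weighted Methods per Class (WMC), which with unit weights reduces to a simple method count per class. The sketch below computes that simplified WMC from source code; it uses Python's ast module purely for illustration, whereas the study above analyzed C++ systems, and it does not implement the rest of the CK suite (CBO, DIT, etc.).

```python
import ast

def wmc_per_class(source: str) -> dict:
    """Weighted Methods per Class with unit weights, i.e. a method count per class."""
    tree = ast.parse(source)
    counts = {}
    for node in ast.walk(tree):
        if isinstance(node, ast.ClassDef):
            counts[node.name] = sum(
                isinstance(child, (ast.FunctionDef, ast.AsyncFunctionDef))
                for child in node.body
            )
    return counts

print(wmc_per_class("class A:\n    def f(self): pass\n    def g(self): pass"))
# {'A': 2}
```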
Weissman, David E; Morrison, R Sean; Meier, Diane E
2010-02-01
Data collection and analysis are vital for strategic planning, quality improvement, and demonstration of palliative care program impact to hospital administrators, private funders and policymakers. Since 2000, the Center to Advance Palliative Care (CAPC) has provided technical assistance to hospitals, health systems and hospices working to start, sustain, and grow nonhospice palliative care programs. CAPC convened a consensus panel in 2008 to develop recommendations for specific clinical and customer metrics that programs should track. The panel agreed on four key domains of clinical metrics and two domains of customer metrics. Clinical metrics include: daily assessment of physical/psychological/spiritual symptoms by a symptom assessment tool; establishment of patient-centered goals of care; support to patient/family caregivers; and management of transitions across care sites. For customer metrics, consensus was reached on two domains that should be tracked to assess satisfaction: patient/family satisfaction, and referring clinician satisfaction. In an effort to ensure access to reliably high-quality palliative care data throughout the nation, hospital palliative care programs are encouraged to collect and report outcomes for each of the metric domains described here.
The role of complexity metrics in a multi-institutional dosimetry audit of VMAT
Agnew, Christina E; Hussein, Mohammad; Tsang, Yatman; McWilliam, Alan; Hounsell, Alan R; Clark, Catharine H
2016-01-01
Objective: To demonstrate the benefit of complexity metrics such as the modulation complexity score (MCS) and monitor units (MUs) in multi-institutional audits of volumetric-modulated arc therapy (VMAT) delivery. Methods: 39 VMAT treatment plans were analysed using MCS and MU. A virtual phantom planning exercise was planned and independently measured using the PTW Octavius® phantom and seven29® 2D array (PTW-Freiburg GmbH, Freiburg, Germany). MCS and MU were compared with the median gamma index pass rates (2%/2 and 3%/3 mm) and plan quality. The treatment planning systems (TPS) were grouped by VMAT modelling being specifically designed for the linear accelerator manufacturer's own treatment delivery system (Type 1) or independent of vendor for VMAT delivery (Type 2). Differences in plan complexity (MCS and MU) between TPS types were compared. Results: For Varian® linear accelerators (Varian® Medical Systems, Inc., Palo Alto, CA), MCS and MU were significantly correlated with gamma pass rates. Type 2 TPS created poorer quality, more complex plans with significantly higher MUs and MCS than Type 1 TPS. Plan quality was significantly correlated with MU for Type 2 plans. A statistically significant correlation was observed between MU and MCS for all plans (R = −0.84, p < 0.01). Conclusion: MU and MCS have a role in assessing plan complexity in audits along with plan quality metrics. Plan complexity metrics give some indication of plan deliverability but should be analysed with plan quality. Advances in knowledge: Complexity metrics were investigated for a national rotational audit involving 34 institutions and they showed value. The metrics found that more complex plans were created for planning systems which were independent of vendor for VMAT delivery. PMID:26511276
Weykamp, Cas; John, Garry; Gillery, Philippe; English, Emma; Ji, Linong; Lenters-Westra, Erna; Little, Randie R.; Roglic, Gojka; Sacks, David B.; Takei, Izumi
2016-01-01
Background A major objective of the IFCC Task Force on implementation of HbA1c standardization is to develop a model to define quality targets for HbA1c. Methods Two generic models, the Biological Variation and Sigma-metrics model, are investigated. Variables in the models were selected for HbA1c and data of EQA/PT programs were used to evaluate the suitability of the models to set and evaluate quality targets within and between laboratories. Results In the biological variation model 48% of individual laboratories and none of the 26 instrument groups met the minimum performance criterion. In the Sigma-metrics model, with a total allowable error (TAE) set at 5 mmol/mol (0.46% NGSP) 77% of the individual laboratories and 12 of 26 instrument groups met the 2 sigma criterion. Conclusion The Biological Variation and Sigma-metrics model were demonstrated to be suitable for setting and evaluating quality targets within and between laboratories. The Sigma-metrics model is more flexible as both the TAE and the risk of failure can be adjusted to requirements related to e.g. use for diagnosis/monitoring or requirements of (inter)national authorities. With the aim of reaching international consensus on advice regarding quality targets for HbA1c, the Task Force suggests the Sigma-metrics model as the model of choice with default values of 5 mmol/mol (0.46%) for TAE, and risk levels of 2 and 4 sigma for routine laboratories and laboratories performing clinical trials, respectively. These goals should serve as a starting point for discussion with international stakeholders in the field of diabetes. PMID:25737535
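The Sigma-metrics model favored by the Task Force can be written as a one-line calculation: sigma = (allowable total error - |bias|) / imprecision, with all terms in the same units. The sketch below illustrates it with the 5 mmol/mol TAE mentioned in the abstract; the bias and SD values in the example are assumed for illustration, not taken from the paper.

```python
def sigma_metric(tae: float, bias: float, sd: float) -> float:
    """Sigma-metric = (allowable total error - |bias|) / imprecision (SD),
    all expressed in the same units (e.g. mmol/mol HbA1c)."""
    return (tae - abs(bias)) / sd

# Example with the default TAE of 5 mmol/mol and assumed bias/SD values:
print(sigma_metric(tae=5.0, bias=1.0, sd=1.5))  # ~2.7 sigma, meets a 2-sigma criterion
```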
The role of complexity metrics in a multi-institutional dosimetry audit of VMAT.
McGarry, Conor K; Agnew, Christina E; Hussein, Mohammad; Tsang, Yatman; McWilliam, Alan; Hounsell, Alan R; Clark, Catharine H
2016-01-01
To demonstrate the benefit of complexity metrics such as the modulation complexity score (MCS) and monitor units (MUs) in multi-institutional audits of volumetric-modulated arc therapy (VMAT) delivery. 39 VMAT treatment plans were analysed using MCS and MU. A virtual phantom planning exercise was planned and independently measured using the PTW Octavius(®) phantom and seven29(®) 2D array (PTW-Freiburg GmbH, Freiburg, Germany). MCS and MU were compared with the median gamma index pass rates (2%/2 and 3%/3 mm) and plan quality. The treatment planning systems (TPS) were grouped by VMAT modelling being specifically designed for the linear accelerator manufacturer's own treatment delivery system (Type 1) or independent of vendor for VMAT delivery (Type 2). Differences in plan complexity (MCS and MU) between TPS types were compared. For Varian(®) linear accelerators (Varian(®) Medical Systems, Inc., Palo Alto, CA), MCS and MU were significantly correlated with gamma pass rates. Type 2 TPS created poorer quality, more complex plans with significantly higher MUs and MCS than Type 1 TPS. Plan quality was significantly correlated with MU for Type 2 plans. A statistically significant correlation was observed between MU and MCS for all plans (R = -0.84, p < 0.01). MU and MCS have a role in assessing plan complexity in audits along with plan quality metrics. Plan complexity metrics give some indication of plan deliverability but should be analysed with plan quality. Complexity metrics were investigated for a national rotational audit involving 34 institutions and they showed value. The metrics found that more complex plans were created for planning systems which were independent of vendor for VMAT delivery.
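The gamma pass rates against which MCS and MU were correlated in the two entries above come from the standard gamma index, which combines a dose-difference criterion with a distance-to-agreement criterion. The sketch below is a deliberately simplified global 1D version (e.g. 3%/3 mm) for illustration; audits such as this one evaluate 2D/3D dose distributions with dedicated software, so this is not the audit's implementation.

```python
import numpy as np

def gamma_pass_rate(ref_dose, eval_dose, positions_mm, dd=0.03, dta_mm=3.0) -> float:
    """Global 1D gamma analysis: fraction of evaluated points whose minimum gamma
    over all reference points is <= 1 for the given dose/distance criteria."""
    ref = np.asarray(ref_dose, float)
    ev = np.asarray(eval_dose, float)
    pos = np.asarray(positions_mm, float)
    dose_tol = dd * ref.max()          # global dose-difference criterion
    passed = 0
    for pe, de in zip(pos, ev):
        gamma_sq = ((pos - pe) / dta_mm) ** 2 + ((ref - de) / dose_tol) ** 2
        if np.sqrt(gamma_sq.min()) <= 1.0:
            passed += 1
    return passed / len(ev)
```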
A comprehensive quality control workflow for paired tumor-normal NGS experiments.
Schroeder, Christopher M; Hilke, Franz J; Löffler, Markus W; Bitzer, Michael; Lenz, Florian; Sturm, Marc
2017-06-01
Quality control (QC) is an important part of all NGS data analysis stages. Many available tools calculate QC metrics from different analysis steps of single-sample experiments (raw reads, mapped reads and variant lists). Multi-sample experiments, such as sequencing of tumor-normal pairs, require additional QC metrics to ensure validity of results. These multi-sample QC metrics still lack standardization. We therefore suggest a new workflow for QC of DNA sequencing of tumor-normal pairs. With this workflow well-known single-sample QC metrics and additional metrics specific for tumor-normal pairs can be calculated. The segmentation into different tools offers a high flexibility and allows reuse for other purposes. All tools produce qcML, a generic XML format for QC of -omics experiments. qcML uses quality metrics defined in an ontology, which was adapted for NGS. All QC tools are implemented in C++ and run both under Linux and Windows. Plotting requires python 2.7 and matplotlib. The software is available under the 'GNU General Public License version 2' as part of the ngs-bits project: https://github.com/imgag/ngs-bits. christopher.schroeder@med.uni-tuebingen.de. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
NEW CATEGORICAL METRICS FOR AIR QUALITY MODEL EVALUATION
Traditional categorical metrics used in model evaluations are "clear-cut" measures in that the model's ability to predict an exceedance is defined by a fixed threshold concentration and the metrics are defined by observation-forecast sets that are paired both in space and time. T...
Li, Guang; Greene, Travis C; Nishino, Thomas K; Willis, Charles E
2016-09-08
The purpose of this study was to evaluate several of the standardized image quality metrics proposed by the American Association of Physicists in Medicine (AAPM) Task Group 150. The task group suggested region-of-interest (ROI)-based techniques to measure nonuniformity, minimum signal-to-noise ratio (SNR), number of anomalous pixels, and modulation transfer function (MTF). This study evaluated the effects of ROI size and layout on the image metrics by using four different ROI sets, assessed result uncertainty by repeating measurements, and compared results with two commercially available quality control tools, namely the Carestream DIRECTVIEW Total Quality Tool (TQT) and the GE Healthcare Quality Assurance Process (QAP). Seven Carestream DRX-1C (CsI) detectors on mobile DR systems and four GE FlashPad detectors in radiographic rooms were tested. Images were analyzed using MATLAB software that had been previously validated and reported. Our values for signal and SNR nonuniformity and MTF agree with values published by other investigators. Our results show that ROI size affects nonuniformity and minimum SNR measurements, but not detection of anomalous pixels. Exposure geometry affects all tested image metrics except for the MTF. TG-150 metrics in general agree with the TQT, but agree with the QAP only for local and global signal nonuniformity. The difference in SNR nonuniformity and MTF values between the TG-150 and QAP may be explained by differences in the calculation of noise and acquisition beam quality, respectively. TG-150's SNR nonuniformity metrics are also more sensitive to detector nonuniformity compared to the QAP. Our results suggest that fixed ROI size should be used for consistency because nonuniformity metrics depend on ROI size. Ideally, detector tests should be performed at the exact calibration position. If not feasible, a baseline should be established from the mean of several repeated measurements. Our study indicates that the TG-150 tests can be used as an independent standardized procedure for detector performance assessment. © 2016 The Authors.
Greene, Travis C.; Nishino, Thomas K.; Willis, Charles E.
2016-01-01
The purpose of this study was to evaluate several of the standardized image quality metrics proposed by the American Association of Physicists in Medicine (AAPM) Task Group 150. The task group suggested region‐of‐interest (ROI)‐based techniques to measure nonuniformity, minimum signal‐to‐noise ratio (SNR), number of anomalous pixels, and modulation transfer function (MTF). This study evaluated the effects of ROI size and layout on the image metrics by using four different ROI sets, assessed result uncertainty by repeating measurements, and compared results with two commercially available quality control tools, namely the Carestream DIRECTVIEW Total Quality Tool (TQT) and the GE Healthcare Quality Assurance Process (QAP). Seven Carestream DRX‐1C (CsI) detectors on mobile DR systems and four GE FlashPad detectors in radiographic rooms were tested. Images were analyzed using MATLAB software that had been previously validated and reported. Our values for signal and SNR nonuniformity and MTF agree with values published by other investigators. Our results show that ROI size affects nonuniformity and minimum SNR measurements, but not detection of anomalous pixels. Exposure geometry affects all tested image metrics except for the MTF. TG‐150 metrics in general agree with the TQT, but agree with the QAP only for local and global signal nonuniformity. The difference in SNR nonuniformity and MTF values between the TG‐150 and QAP may be explained by differences in the calculation of noise and acquisition beam quality, respectively. TG‐150's SNR nonuniformity metrics are also more sensitive to detector nonuniformity compared to the QAP. Our results suggest that fixed ROI size should be used for consistency because nonuniformity metrics depend on ROI size. Ideally, detector tests should be performed at the exact calibration position. If not feasible, a baseline should be established from the mean of several repeated measurements. Our study indicates that the TG‐150 tests can be used as an independent standardized procedure for detector performance assessment. PACS number(s): 87.57.‐s, 87.57.C PMID:27685102
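The ROI-based measurements named in these two entries can be mimicked by tiling a flat-field image and summarizing the per-tile statistics. The sketch below reports a global signal nonuniformity and a minimum per-ROI SNR as illustrative figures only; the exact TG-150 definitions (local vs. global nonuniformity, anomalous-pixel counting, MTF) differ in detail and are not implemented here.

```python
import numpy as np

def roi_uniformity_metrics(img: np.ndarray, roi: int = 128):
    """Tile a flat-field image into roi x roi blocks and report illustrative
    figures: nonuniformity of the ROI means and the minimum per-ROI SNR.
    Assumes each block has nonzero noise (std > 0)."""
    h, w = img.shape
    means, snrs = [], []
    for r in range(0, h - roi + 1, roi):
        for c in range(0, w - roi + 1, roi):
            block = img[r:r + roi, c:c + roi].astype(float)
            means.append(block.mean())
            snrs.append(block.mean() / block.std())
    means = np.array(means)
    nonuniformity = (means.max() - means.min()) / means.mean()
    return nonuniformity, min(snrs)
```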
Revisiting the Procedures for the Vector Data Quality Assurance in Practice
NASA Astrophysics Data System (ADS)
Erdoğan, M.; Torun, A.; Boyacı, D.
2012-07-01
The immense use of topographical data in spatial data visualization, business GIS (Geographic Information Systems) solutions and applications, and mobile and location-based services has forced topographic data providers to create standard, up-to-date and complete data sets in a sustainable framework. Data quality has been studied and researched for more than two decades. There are countless references on its semantics, its conceptual and logical representations, and many applications to spatial databases and GIS. However, there is a gap between research and practice in spatial data quality, which increases the costs and decreases the efficiency of data production. Spatial data quality is well known to academia and industry, but usually in different contexts. Research on spatial data quality has identified several issues of practical use, such as descriptive information, metadata, fulfillment of spatial relationships among data, integrity measures, geometric constraints, etc. Industry and data producers realize them in three stages: pre-, co-, and post-data capturing. The pre-data capturing stage covers semantic modelling, data definition, cataloguing, modelling, data dictionary and schema creation processes. The co-data capturing stage covers general rules of spatial relationships, and data- and model-specific rules such as topologic and model building relationships, geometric thresholds, data extraction guidelines, and object-object, object-belonging class, object-non-belonging class, and class-class relationships to be taken into account during data capturing. The post-data capturing stage covers specified QC (quality check) benchmarks and checks of compliance with general and specific rules. Vector data quality criteria differ between the views of producers and users, but these criteria are generally driven by the needs, expectations and feedback of the users. This paper presents a practical method which closes the gap between theory and practice. Turning spatial data quality concepts into working applications requires the conceptual, logical and, most importantly, physical existence of a data model, rules, and knowledge of their realization in the form of geospatial data. The applicable metrics and thresholds are determined on this concrete base. This study discusses the application of geospatial data quality issues and QA (quality assurance) and QC procedures in topographic data production. Firstly, we introduce the MGCP (Multinational Geospatial Co-production Program) data profile of the NATO (North Atlantic Treaty Organization) DFDD (DGIWG Feature Data Dictionary), the requirements of the data owner, the view of data producers for both data capturing and QC, and finally QA to fulfil user needs. Then, our new and practical approach, which divides quality into three phases, is introduced. Finally, the implementation of our approach to realize the metrics, measures and thresholds of the quality definitions is discussed. In particular, the geometric and semantic quality and the quality control procedures that can be performed by the producers are discussed. Some applicable best practices that we have experienced, covering quality control techniques, regulations that define the objectives, and data production procedures, are given in the final remarks. These quality control procedures should include visual checks of the source data, captured vector data and printouts, some automatic checks that can be performed by software, and some semi-automatic checks involving quality control personnel. Finally, these quality control procedures should ensure the geometric, semantic, attribution and metadata quality of vector data.
Coding visual features extracted from video sequences.
Baroffio, Luca; Cesana, Matteo; Redondi, Alessandro; Tagliasacchi, Marco; Tubaro, Stefano
2014-05-01
Visual features are successfully exploited in several applications (e.g., visual search, object recognition and tracking, etc.) due to their ability to efficiently represent image content. Several visual analysis tasks require features to be transmitted over a bandwidth-limited network, thus calling for coding techniques to reduce the required bit budget, while attaining a target level of efficiency. In this paper, we propose, for the first time, a coding architecture designed for local features (e.g., SIFT, SURF) extracted from video sequences. To achieve high coding efficiency, we exploit both spatial and temporal redundancy by means of intraframe and interframe coding modes. In addition, we propose a coding mode decision based on rate-distortion optimization. The proposed coding scheme can be conveniently adopted to implement the analyze-then-compress (ATC) paradigm in the context of visual sensor networks. That is, sets of visual features are extracted from video frames, encoded at remote nodes, and finally transmitted to a central controller that performs visual analysis. This is in contrast to the traditional compress-then-analyze (CTA) paradigm, in which video sequences acquired at a node are compressed and then sent to a central unit for further processing. In this paper, we compare these coding paradigms using metrics that are routinely adopted to evaluate the suitability of visual features in the context of content-based retrieval, object recognition, and tracking. Experimental results demonstrate that, thanks to the significant coding gains achieved by the proposed coding scheme, ATC outperforms CTA with respect to all evaluation metrics.
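The intra/inter mode decision described in the abstract follows the usual rate-distortion formulation: for each candidate mode, compute a Lagrangian cost J = D + lambda * R and keep the cheaper one. The sketch below shows only that selection step; the distortion and rate figures in the example are assumed values, and the actual descriptor coding stages from the paper are not reproduced.

```python
def choose_coding_mode(dist_intra: float, rate_intra: float,
                       dist_inter: float, rate_inter: float, lam: float):
    """Rate-distortion mode decision: pick the mode minimizing J = D + lambda * R."""
    j_intra = dist_intra + lam * rate_intra
    j_inter = dist_inter + lam * rate_inter
    return ("intra", j_intra) if j_intra <= j_inter else ("inter", j_inter)

# Example: inter-frame coding wins when temporal prediction leaves a small residual.
print(choose_coding_mode(dist_intra=10.0, rate_intra=120,
                         dist_inter=12.0, rate_inter=40, lam=0.1))  # ('inter', 16.0)
```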
DOE Office of Scientific and Technical Information (OSTI.GOV)
Copeland, Alex; Brown, C. Titus
2011-10-13
DOE JGI's Alex Copeland on "DOE JGI Quality Metrics" and Michigan State University's C. Titus Brown on "Approaches to Scaling and Improving Metagenome Assembly" at the Metagenomics Informatics Challenges Workshop held at the DOE JGI on October 12-13, 2011.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-05-16
... Elsevier, Quality & Metrics Department, Including Employees Located Throughout the United States Who Report to Miamisburg, OH; Lexis Nexis, a Subsidiary of Reed Elsevier, Quality & Metrics Department... Elsevier. The amended notice applicable to TA-W-80,205 and TA-W-80205A is hereby issued as follows: All...
Copeland, Alex; Brown, C. Titus
2018-04-27
DOE JGI's Alex Copeland on "DOE JGI Quality Metrics" and Michigan State University's C. Titus Brown on "Approaches to Scaling and Improving Metagenome Assembly" at the Metagenomics Informatics Challenges Workshop held at the DOE JGI on October 12-13, 2011.
Visuo-spatial ability in colonoscopy simulator training.
Luursema, Jan-Maarten; Buzink, Sonja N; Verwey, Willem B; Jakimowicz, J J
2010-12-01
Visuo-spatial ability is associated with the quality of performance in a variety of surgical and medical skills. However, visuo-spatial ability is typically assessed using Visualization tests only, which has led to an incomplete understanding of the involvement of visuo-spatial ability in these skills. To remedy this situation, the current study investigated the role of a broad range of visuo-spatial factors in colonoscopy simulator training. Fifteen medical trainees (no clinical experience in colonoscopy) participated in two psychometric test sessions to assess four visuo-spatial ability factors. Next, participants trained flexible endoscope manipulation, and navigation to the cecum, on the GI Mentor II simulator for four sessions within 1 week. Visualization, and to a lesser degree Spatial relations, were the only visuo-spatial ability factors to correlate with colonoscopy simulator performance. Visualization additionally covaried with learning rate for time on task on both simulator tasks. High Visualization ability indicated faster exercise completion. Similar to other endoscopic procedures, performance in colonoscopy is positively associated with Visualization, a visuo-spatial ability factor characterized by the ability to mentally manipulate complex visuo-spatial stimuli. The complexity of the visuo-spatial mental transformations required to successfully perform colonoscopy is likely responsible for the challenging nature of this technique and should inform training and assessment design. Long-term training studies, as well as studies investigating the nature of visuo-spatial complexity in this domain, are needed to better understand the role of visuo-spatial ability in colonoscopy and other endoscopic techniques.
Multiresolution generalized N dimension PCA for ultrasound image denoising
2014-01-01
Background Ultrasound images are usually affected by speckle noise, which is a type of random multiplicative noise. Thus, reducing speckle and improving image visual quality are vital to obtaining better diagnosis. Method In this paper, a novel noise reduction method for medical ultrasound images, called multiresolution generalized N dimension PCA (MR-GND-PCA), is presented. In this method, the Gaussian pyramid and multiscale image stacks on each level are built first. GND-PCA as a multilinear subspace learning method is used for denoising. Each level is combined to achieve the final denoised image based on Laplacian pyramids. Results The proposed method is tested with synthetically speckled and real ultrasound images, and quality evaluation metrics, including MSE, SNR and PSNR, are used to evaluate its performance. Conclusion Experimental results show that the proposed method achieved the lowest noise interference and improved image quality by reducing noise and preserving the structure. Our method is also robust for the image with a much higher level of speckle noise. For clinical images, the results show that MR-GND-PCA can reduce speckle and preserve resolvable details. PMID:25096917
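The evaluation metrics listed in the abstract above (MSE, SNR, PSNR) are simple pixel-wise comparisons between a reference image and the denoised result. A minimal sketch of the usual definitions is given below; the exact variants used in the paper are not specified, so this follows the common textbook forms.

```python
import numpy as np

def mse(ref, test) -> float:
    """Mean squared error between a reference image and a test image."""
    ref, test = np.asarray(ref, float), np.asarray(test, float)
    return float(np.mean((ref - test) ** 2))

def snr_db(ref, test) -> float:
    """Signal-to-noise ratio in dB: signal energy over error energy."""
    ref = np.asarray(ref, float)
    return 10.0 * np.log10(np.mean(ref ** 2) / mse(ref, test))

def psnr_db(ref, test, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB for images with the given peak value."""
    return 10.0 * np.log10(peak ** 2 / mse(ref, test))
```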
Can Technology Improve the Quality of Colonoscopy?
Thirumurthi, Selvi; Ross, William A; Raju, Gottumukkala S
2016-07-01
In order for screening colonoscopy to be an effective tool in reducing colon cancer incidence, exams must be performed in a high-quality manner. Quality metrics have been presented by gastroenterology societies and now include higher adenoma detection rate targets than in the past. In many cases, the quality of colonoscopy can often be improved with simple low-cost interventions such as improved procedure technique, implementing split-dose bowel prep, and monitoring individuals' performances. Emerging technology has expanded our field of view and image quality during colonoscopy. We will critically review several technological advances in the context of quality metrics and discuss if technology can really improve the quality of colonoscopy.
How visual attention is modified by disparities and textures changes?
NASA Astrophysics Data System (ADS)
Khaustova, Dar'ya; Fournier, Jérome; Wyckens, Emmanuel; Le Meur, Olivier
2013-03-01
The 3D image/video quality of experience is a multidimensional concept that depends on 2D image quality, depth quantity and visual comfort. The relationship between these parameters is not yet clearly defined. From this perspective, we aim to understand how texture complexity, depth quantity and visual comfort influence the way people observe 3D content in comparison with 2D. Six scenes with different structural parameters were generated using Blender software. For these six scenes, the following parameters were modified: texture complexity and the amount of depth, varied by changing the camera baseline and the convergence distance at the shooting side. Our study was conducted using an eye-tracker and a 3DTV display. During the eye-tracking experiment, each observer freely examined images with different depth levels and texture complexities. To avoid memory bias, we ensured that each observer had only seen scene content once. Collected fixation data were used to build saliency maps and to analyze differences between 2D and 3D conditions. Our results show that the introduction of disparity shortened saccade length; however, fixation durations remained unaffected. An analysis of the saliency maps did not reveal any differences between 2D and 3D conditions for the viewing duration of 20 s. When the whole period was divided into smaller intervals, we found that for the first 4 s the introduced disparity was conducive to the selection of saliency regions. However, this contribution is quite minimal if the correlation between saliency maps is analyzed. Nevertheless, we did not find that discomfort (comfort) had any influence on visual attention. We believe that existing metrics and methods are depth insensitive and do not reveal such differences. Based on the analysis of heat maps and paired t-tests of inter-observer visual congruency values, we deduced that the selected areas of interest depend on texture complexities.
Quantifying and visualizing site performance in clinical trials.
Yang, Eric; O'Donovan, Christopher; Phillips, JodiLyn; Atkinson, Leone; Ghosh, Krishnendu; Agrafiotis, Dimitris K
2018-03-01
One of the keys to running a successful clinical trial is the selection of high quality clinical sites, i.e., sites that are able to enroll patients quickly, engage them on an ongoing basis to prevent drop-out, and execute the trial in strict accordance to the clinical protocol. Intuitively, the historical track record of a site is one of the strongest predictors of its future performance; however, issues such as data availability and wide differences in protocol complexity can complicate interpretation. Here, we demonstrate how operational data derived from central laboratory services can provide key insights into the performance of clinical sites and help guide operational planning and site selection for new clinical trials. Our methodology uses the metadata associated with laboratory kit shipments to clinical sites (such as trial and anonymized patient identifiers, investigator names and addresses, sample collection and shipment dates, etc.) to reconstruct the complete schedule of patient visits and derive insights about the operational performance of those sites, including screening, enrollment, and drop-out rates and other quality indicators. This information can be displayed in its raw form or normalized to enable direct comparison of site performance across studies of varied design and complexity. Leveraging Covance's market leadership in central laboratory services, we have assembled a database of operational metrics that spans more than 14,000 protocols, 1400 indications, 230,000 unique investigators, and 23 million patient visits and represents a significant fraction of all clinical trials run globally in the last few years. By analyzing this historical data, we are able to assess and compare the performance of clinical investigators across a wide range of therapeutic areas and study designs. This information can be aggregated across trials and geographies to gain further insights into country and regional trends, sometimes with surprising results. The use of operational data from Covance Central Laboratories provides a unique perspective into the performance of clinical sites with respect to many important metrics such as patient enrollment and retention. These metrics can, in turn, be used to guide operational planning and site selection for new clinical trials, thereby accelerating recruitment, improving quality, and reducing cost.
Sensorimotor Synchronization with Different Metrical Levels of Point-Light Dance Movements.
Su, Yi-Huang
2016-01-01
Rhythm perception and synchronization have been extensively investigated in the auditory domain, as they underlie means of human communication such as music and speech. Although recent studies suggest comparable mechanisms for synchronizing with periodically moving visual objects, the extent to which it applies to ecologically relevant information, such as the rhythm of complex biological motion, remains unknown. The present study addressed this issue by linking rhythm of music and dance in the framework of action-perception coupling. As a previous study showed that observers perceived multiple metrical periodicities in dance movements that embodied this structure, the present study examined whether sensorimotor synchronization (SMS) to dance movements resembles what is known of auditory SMS. Participants watched a point-light figure performing two basic steps of Swing dance cyclically, in which the trunk bounced at every beat and the limbs moved at every second beat, forming two metrical periodicities. Participants tapped synchronously to the bounce of the trunk with or without the limbs moving in the stimuli (Experiment 1), or tapped synchronously to the leg movements with or without the trunk bouncing simultaneously (Experiment 2). Results showed that, while synchronization with the bounce (lower-level pulse) was not influenced by the presence or absence of limb movements (metrical accent), synchronization with the legs (beat) was improved by the presence of the bounce (metrical subdivision) across different movement types. The latter finding parallels the "subdivision benefit" often demonstrated in auditory tasks, suggesting common sensorimotor mechanisms for visual rhythms in dance and auditory rhythms in music.
Neural correlates of the LSD experience revealed by multimodal neuroimaging.
Carhart-Harris, Robin L; Muthukumaraswamy, Suresh; Roseman, Leor; Kaelen, Mendel; Droog, Wouter; Murphy, Kevin; Tagliazucchi, Enzo; Schenberg, Eduardo E; Nest, Timothy; Orban, Csaba; Leech, Robert; Williams, Luke T; Williams, Tim M; Bolstridge, Mark; Sessa, Ben; McGonigle, John; Sereno, Martin I; Nichols, David; Hellyer, Peter J; Hobden, Peter; Evans, John; Singh, Krish D; Wise, Richard G; Curran, H Valerie; Feilding, Amanda; Nutt, David J
2016-04-26
Lysergic acid diethylamide (LSD) is the prototypical psychedelic drug, but its effects on the human brain have never been studied before with modern neuroimaging. Here, three complementary neuroimaging techniques: arterial spin labeling (ASL), blood oxygen level-dependent (BOLD) measures, and magnetoencephalography (MEG), implemented during resting state conditions, revealed marked changes in brain activity after LSD that correlated strongly with its characteristic psychological effects. Increased visual cortex cerebral blood flow (CBF), decreased visual cortex alpha power, and a greatly expanded primary visual cortex (V1) functional connectivity profile correlated strongly with ratings of visual hallucinations, implying that intrinsic brain activity exerts greater influence on visual processing in the psychedelic state, thereby defining its hallucinatory quality. LSD's marked effects on the visual cortex did not significantly correlate with the drug's other characteristic effects on consciousness, however. Rather, decreased connectivity between the parahippocampus and retrosplenial cortex (RSC) correlated strongly with ratings of "ego-dissolution" and "altered meaning," implying the importance of this particular circuit for the maintenance of "self" or "ego" and its processing of "meaning." Strong relationships were also found between the different imaging metrics, enabling firmer inferences to be made about their functional significance. This uniquely comprehensive examination of the LSD state represents an important advance in scientific research with psychedelic drugs at a time of growing interest in their scientific and therapeutic value. The present results contribute important new insights into the characteristic hallucinatory and consciousness-altering properties of psychedelics that inform on how they can model certain pathological states and potentially treat others.
Neural correlates of the LSD experience revealed by multimodal neuroimaging
Carhart-Harris, Robin L.; Muthukumaraswamy, Suresh; Roseman, Leor; Kaelen, Mendel; Droog, Wouter; Murphy, Kevin; Tagliazucchi, Enzo; Schenberg, Eduardo E.; Nest, Timothy; Orban, Csaba; Leech, Robert; Williams, Luke T.; Williams, Tim M.; Bolstridge, Mark; Sessa, Ben; McGonigle, John; Sereno, Martin I.; Nichols, David; Hobden, Peter; Evans, John; Singh, Krish D.; Wise, Richard G.; Curran, H. Valerie; Feilding, Amanda; Nutt, David J.
2016-01-01
Lysergic acid diethylamide (LSD) is the prototypical psychedelic drug, but its effects on the human brain have never been studied before with modern neuroimaging. Here, three complementary neuroimaging techniques: arterial spin labeling (ASL), blood oxygen level-dependent (BOLD) measures, and magnetoencephalography (MEG), implemented during resting state conditions, revealed marked changes in brain activity after LSD that correlated strongly with its characteristic psychological effects. Increased visual cortex cerebral blood flow (CBF), decreased visual cortex alpha power, and a greatly expanded primary visual cortex (V1) functional connectivity profile correlated strongly with ratings of visual hallucinations, implying that intrinsic brain activity exerts greater influence on visual processing in the psychedelic state, thereby defining its hallucinatory quality. LSD’s marked effects on the visual cortex did not significantly correlate with the drug’s other characteristic effects on consciousness, however. Rather, decreased connectivity between the parahippocampus and retrosplenial cortex (RSC) correlated strongly with ratings of “ego-dissolution” and “altered meaning,” implying the importance of this particular circuit for the maintenance of “self” or “ego” and its processing of “meaning.” Strong relationships were also found between the different imaging metrics, enabling firmer inferences to be made about their functional significance. This uniquely comprehensive examination of the LSD state represents an important advance in scientific research with psychedelic drugs at a time of growing interest in their scientific and therapeutic value. The present results contribute important new insights into the characteristic hallucinatory and consciousness-altering properties of psychedelics that inform on how they can model certain pathological states and potentially treat others. PMID:27071089
Xu, Xinxing; Li, Wen; Xu, Dong
2015-12-01
In this paper, we propose a new approach to improve face verification and person re-identification in the RGB images by leveraging a set of RGB-D data, in which we have additional depth images in the training data captured using depth cameras such as Kinect. In particular, we extract visual features and depth features from the RGB images and depth images, respectively. As the depth features are available only in the training data, we treat the depth features as privileged information, and we formulate this task as a distance metric learning with privileged information problem. Unlike the traditional face verification and person re-identification tasks that only use visual features, we further employ the extra depth features in the training data to improve the learning of distance metric in the training process. Based on the information-theoretic metric learning (ITML) method, we propose a new formulation called ITML with privileged information (ITML+) for this task. We also present an efficient algorithm based on the cyclic projection method for solving the proposed ITML+ formulation. Extensive experiments on the challenging faces data sets EUROCOM and CurtinFaces for face verification as well as the BIWI RGBD-ID data set for person re-identification demonstrate the effectiveness of our proposed approach.
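A minimal sketch of the test-time side of this idea, not the authors' ITML+ implementation: once a Mahalanobis matrix M has been learned (with the privileged depth features entering only the training objective), verification on RGB features reduces to thresholding the learned distance. All names, dimensions, and the threshold below are illustrative assumptions.

import numpy as np

def mahalanobis_distance(x, y, M):
    """Distance d_M(x, y) = sqrt((x - y)^T M (x - y)) for a learned PSD matrix M."""
    d = x - y
    return float(np.sqrt(d @ M @ d))

def verify(x, y, M, threshold):
    """Declare 'same person' when the learned distance falls below a threshold."""
    return mahalanobis_distance(x, y, M) < threshold

# Toy usage with an identity metric (plain Euclidean distance) on random features.
rng = np.random.default_rng(0)
x, y = rng.normal(size=64), rng.normal(size=64)
M = np.eye(64)
print(verify(x, y, M, threshold=12.0))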
Interpreting lateral dynamic weight shifts using a simple inverted pendulum model.
Kennedy, Michael W; Bretl, Timothy; Schmiedeler, James P
2014-01-01
Seventy-five young, healthy adults completed a lateral weight-shifting activity in which each shifted his/her center of pressure (CoP) to visually displayed target locations with the aid of visual CoP feedback. Each subject's CoP data were modeled using a single-link inverted pendulum system with a spring-damper at the joint. This extends the simple inverted pendulum model of static balance in the sagittal plane to lateral weight-shifting balance. The model controlled pendulum angle using PD control and a ramp setpoint trajectory, and weight-shifting was characterized by both shift speed and a non-minimum phase (NMP) behavior metric. This NMP behavior metric examines the force magnitude at shift initiation and provides weight-shifting balance performance information that parallels the examination of peak ground reaction forces in gait analysis. Control parameters were optimized on a subject-by-subject basis to match balance metrics for modeled results to metric values calculated from experimental data. Overall, the model matches experimental data well (average percent error of 0.35% for shifting speed and 0.05% for NMP behavior). These results suggest that the single-link inverted pendulum model can be used effectively to capture lateral weight-shifting balance, as it has been shown to model static balance. Copyright © 2014 Elsevier B.V. All rights reserved.
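The following is an illustrative simulation, not the authors' code, of the modeling idea described above: a single-link inverted pendulum with a passive spring-damper at the joint, driven by a PD controller tracking a ramp setpoint as a toy analogue of a lateral weight shift. All parameter values are assumed.

import numpy as np

m, L, g = 70.0, 1.0, 9.81          # mass (kg), pendulum length (m), gravity
k_spring, b_damp = 300.0, 50.0     # passive joint stiffness and damping (assumed)
Kp, Kd = 1200.0, 150.0             # PD controller gains (assumed)
dt, T = 0.001, 3.0                 # time step and duration (s)

def setpoint(t, rate=0.05, target=0.1):
    """Ramp setpoint: lean angle increases at `rate` rad/s up to `target` rad."""
    return min(rate * t, target)

theta, omega = 0.0, 0.0
for step in range(int(T / dt)):
    t = step * dt
    err = setpoint(t) - theta
    torque = Kp * err - Kd * omega                 # PD control torque
    passive = -k_spring * theta - b_damp * omega   # spring-damper at the joint
    # Inverted-pendulum dynamics: I*theta'' = m*g*L*sin(theta) + control + passive torques
    alpha = (m * g * L * np.sin(theta) + torque + passive) / (m * L**2)
    omega += alpha * dt
    theta += omega * dt

print(f"final angle: {theta:.4f} rad (setpoint {setpoint(T):.4f} rad)")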
Shape matters: animal colour patterns as signals of individual quality
2017-01-01
Colour patterns (e.g. irregular, spotted or barred forms) are widespread in the animal kingdom, yet their potential role as signals of quality has been mostly neglected. However, a review of the published literature reveals that pattern itself (irrespective of its size or colour intensity) is a promising signal of individual quality across species of many different taxa. We propose at least four main pathways whereby patterns may reliably reflect individual quality: (i) as conventional signals of status, (ii) as indices of developmental homeostasis, (iii) by amplifying cues of somatic integrity and (iv) by amplifying individual investment in maintenance activities. Methodological constraints have traditionally hampered research on the signalling potential of colour patterns. To overcome this, we report a series of tools (e.g. colour adjacency and pattern regularity analyses, Fourier and granularity approaches, fractal geometry, geometric morphometrics) that allow objective quantification of pattern variability. We discuss how information provided by these methods should consider the visual system of the model species and behavioural responses to pattern metrics, in order to allow biologically meaningful conclusions. Finally, we propose future challenges in this research area that will require a multidisciplinary approach, bringing together inputs from genetics, physiology, behavioural ecology and evolutionary-developmental biology. PMID:28228513
Developing a more useful surface quality metric for laser optics
NASA Astrophysics Data System (ADS)
Turchette, Quentin; Turner, Trey
2011-02-01
Light scatter due to surface defects on laser resonator optics produces losses which lower system efficiency and output power. The traditional methodology for surface quality inspection involves visual comparison of a component to scratch and dig (SAD) standards under controlled lighting and viewing conditions. Unfortunately, this process is subjective and operator dependent. Also, there is no clear correlation between inspection results and the actual performance impact of the optic in a laser resonator. As a result, laser manufacturers often overspecify surface quality in order to ensure that optics will not degrade laser performance due to scatter. This can drive up component costs and lengthen lead times. Alternatively, an objective test system for measuring optical scatter from defects can be constructed with a microscope, calibrated lighting, a CCD detector and image processing software. This approach is quantitative, highly repeatable and totally operator independent. Furthermore, it is flexible, allowing the user to set threshold levels as to what will or will not constitute a defect. This paper details how this automated, quantitative type of surface quality measurement can be constructed, and shows how its results correlate against conventional loss measurement techniques such as cavity ringdown times.
How (and why) the visual control of action differs from visual perception
Goodale, Melvyn A.
2014-01-01
Vision not only provides us with detailed knowledge of the world beyond our bodies, but it also guides our actions with respect to objects and events in that world. The computations required for vision-for-perception are quite different from those required for vision-for-action. The former uses relational metrics and scene-based frames of reference while the latter uses absolute metrics and effector-based frames of reference. These competing demands on vision have shaped the organization of the visual pathways in the primate brain, particularly within the visual areas of the cerebral cortex. The ventral ‘perceptual’ stream, projecting from early visual areas to inferior temporal cortex, helps to construct the rich and detailed visual representations of the world that allow us to identify objects and events, attach meaning and significance to them and establish their causal relations. By contrast, the dorsal ‘action’ stream, projecting from early visual areas to the posterior parietal cortex, plays a critical role in the real-time control of action, transforming information about the location and disposition of goal objects into the coordinate frames of the effectors being used to perform the action. The idea of two visual systems in a single brain might seem initially counterintuitive. Our visual experience of the world is so compelling that it is hard to believe that some other quite independent visual signal—one that we are unaware of—is guiding our movements. But evidence from a broad range of studies from neuropsychology to neuroimaging has shown that the visual signals that give us our experience of objects and events in the world are not the same ones that control our actions. PMID:24789899
Metric-driven harm: an exploration of unintended consequences of performance measurement.
Rambur, Betty; Vallett, Carol; Cohen, Judith A; Tarule, Jill Mattuck
2013-11-01
Performance measurement is an increasingly common element of the US health care system. Although performance metrics typically serve as proxies for high-quality outcomes, there has been little systematic investigation of their potential negative unintended consequences, including metric-driven harm. This case study details an incident of post-surgical metric-driven harm and offers Smith's 1995 work and a patient-centered, context-sensitive metric model for potential adoption by nurse researchers and clinicians. Implications for further research are discussed. © 2013.
High-quality cardiopulmonary resuscitation: current and future directions.
Abella, Benjamin S
2016-06-01
Cardiopulmonary resuscitation (CPR) represents the cornerstone of cardiac arrest resuscitation care. Prompt delivery of high-quality CPR can dramatically improve survival outcomes; however, the definitions of optimal CPR have evolved over several decades. The present review will discuss the metrics of CPR delivery, and the evidence supporting the importance of CPR quality to improve clinical outcomes. The introduction of new technologies to quantify metrics of CPR delivery has yielded important insights into CPR quality. Investigations using CPR recording devices have allowed the assessment of specific CPR performance parameters and their relative importance regarding return of spontaneous circulation and survival to hospital discharge. Additional work has suggested new opportunities to measure physiologic markers during CPR and potentially tailor CPR delivery to patient requirements. Through recent laboratory and clinical investigations, a more evidence-based definition of high-quality CPR continues to emerge. Exciting opportunities now exist to study quantitative metrics of CPR and potentially guide resuscitation care in a goal-directed fashion. Concepts of high-quality CPR have also informed new approaches to training and quality improvement efforts for cardiac arrest care.
Calvin J. Maginel; Benjamin O. Knapp; John M. Kabrick; Rose-Marie Muzika
2016-01-01
Monitoring is a critical component of ecological restoration and requires the use of metrics that are meaningful and interpretable. We analyzed the effectiveness of the Floristic Quality Index (FQI), a vegetative community metric based on species richness and the level of sensitivity to anthropogenic disturbance of individual species present (Coefficient of...
Methods of Measurement the Quality Metrics in a Printing System
NASA Astrophysics Data System (ADS)
Varepo, L. G.; Brazhnikov, A. Yu; Nagornova, I. V.; Novoselskaya, O. A.
2018-04-01
One of the main criteria for choosing an ink as a component of a printing system is the scumming ability of the ink. The realization of an algorithm for estimating the quality metrics in a printing system is shown. Histograms of ink rate for various printing systems are presented. A quantitative estimation of the stability of offset ink emulsifiability is given.
Pragmatic quality metrics for evolutionary software development models
NASA Technical Reports Server (NTRS)
Royce, Walker
1990-01-01
Due to the large number of product, project, and people parameters which impact large custom software development efforts, measurement of software product quality is a complex undertaking. Furthermore, the absolute perspective from which quality is measured (customer satisfaction) is intangible. While we probably can't say what the absolute quality of a software product is, we can determine the relative quality, the adequacy of this quality with respect to pragmatic considerations, and identify good and bad trends during development. While no two software engineers will ever agree on an optimum definition of software quality, they will agree that the most important perspective of software quality is its ease of change. We can call this flexibility, adaptability, or some other vague term, but the critical characteristic of software is that it is soft. The easier the product is to modify, the easier it is to achieve any other software quality perspective. This paper presents objective quality metrics derived from consistent lifecycle perspectives of rework which, when used in concert with an evolutionary development approach, can provide useful insight to produce better quality per unit cost/schedule or to achieve adequate quality more efficiently. The usefulness of these metrics is evaluated by applying them to a large, real world, Ada project.
Metric Learning for Hyperspectral Image Segmentation
NASA Technical Reports Server (NTRS)
Bue, Brian D.; Thompson, David R.; Gilmore, Martha S.; Castano, Rebecca
2011-01-01
We present a metric learning approach to improve the performance of unsupervised hyperspectral image segmentation. Unsupervised spatial segmentation can assist both user visualization and automatic recognition of surface features. Analysts can use spatially-continuous segments to decrease noise levels and/or localize feature boundaries. However, existing segmentation methods use task-agnostic measures of similarity. Here we learn task-specific similarity measures from training data, improving segment fidelity to classes of interest. Multiclass Linear Discriminant Analysis produces a linear transform that optimally separates a labeled set of training classes. This transform defines a distance metric that generalizes to new scenes, enabling graph-based segmentation that emphasizes key spectral features. We describe tests based on data from the Compact Reconnaissance Imaging Spectrometer (CRISM) in which learned metrics improve segment homogeneity with respect to mineralogical classes.
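A short sketch, on synthetic data, of the metric-learning step described above: fit multiclass LDA on labeled training spectra, then measure pixel similarity in the transformed space, where Euclidean distance acts as the learned task-specific metric. The band count, class count, and data below are assumptions for illustration only.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(42)
n_bands, n_classes = 50, 4
X_train = rng.normal(size=(400, n_bands))          # synthetic training spectra
y_train = rng.integers(0, n_classes, size=400)     # synthetic mineral class labels

lda = LinearDiscriminantAnalysis(n_components=n_classes - 1).fit(X_train, y_train)

def learned_distance(spec_a, spec_b):
    """Distance between two spectra in the LDA-transformed (metric-learned) space."""
    a = lda.transform(spec_a.reshape(1, -1))
    b = lda.transform(spec_b.reshape(1, -1))
    return float(np.linalg.norm(a - b))

# Edge weights for graph-based segmentation could then be derived from this distance.
print(learned_distance(X_train[0], X_train[1]))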
Hussain, Husniza; Khalid, Norhayati Mustafa; Selamat, Rusidah; Wan Nazaimoon, Wan Mohamud
2013-09-01
The urinary iodine micromethod (UIMM) is a modification of the conventional method and its performance needs evaluation. UIMM performance was evaluated using the method validation and 2008 Iodine Deficiency Disorders survey data obtained from four urinary iodine (UI) laboratories. Method acceptability tests and Sigma quality metrics were determined using total allowable errors (TEas) set by two external quality assurance (EQA) providers. UIMM obeyed various method acceptability test criteria with some discrepancies at low concentrations. Method validation data calculated against the UI Quality Program (TUIQP) TEas showed that the Sigma metrics were at 2.75, 1.80, and 3.80 for 51±15.50 µg/L, 108±32.40 µg/L, and 149±38.60 µg/L UI, respectively. External quality control (EQC) data showed that the performance of the laboratories was within Sigma metrics of 0.85-1.12, 1.57-4.36, and 1.46-4.98 at 46.91±7.05 µg/L, 135.14±13.53 µg/L, and 238.58±17.90 µg/L, respectively. No laboratory showed a calculated total error (TEcalc)
Kumar, B Vinodh; Mohan, Thuthi
2018-01-01
Six Sigma is one of the most popular quality management system tools employed for process improvement. The Six Sigma methods are usually applied when the outcome of the process can be measured. This study was done to assess the performance of individual biochemical parameters on a Sigma Scale by calculating the sigma metrics for individual parameters and to follow the Westgard guidelines for appropriate Westgard rules and levels of internal quality control (IQC) that needs to be processed to improve target analyte performance based on the sigma metrics. This is a retrospective study, and data required for the study were extracted between July 2015 and June 2016 from a Secondary Care Government Hospital, Chennai. The data obtained for the study are IQC - coefficient of variation percentage and External Quality Assurance Scheme (EQAS) - Bias% for 16 biochemical parameters. For the level 1 IQC, four analytes (alkaline phosphatase, magnesium, triglyceride, and high-density lipoprotein-cholesterol) showed an ideal performance of ≥6 sigma level, five analytes (urea, total bilirubin, albumin, cholesterol, and potassium) showed an average performance of <3 sigma level and for level 2 IQCs, same four analytes of level 1 showed a performance of ≥6 sigma level, and four analytes (urea, albumin, cholesterol, and potassium) showed an average performance of <3 sigma level. For all analytes <6 sigma level, the quality goal index (QGI) was <0.8 indicating the area requiring improvement to be imprecision except cholesterol whose QGI >1.2 indicated inaccuracy. This study shows that sigma metrics is a good quality tool to assess the analytical performance of a clinical chemistry laboratory. Thus, sigma metric analysis provides a benchmark for the laboratory to design a protocol for IQC, address poor assay performance, and assess the efficiency of existing laboratory processes.
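A minimal worked example of the sigma-metric and quality goal index (QGI) calculations referred to in the two abstracts above; the TEa, bias, and CV values used here are illustrative, not those reported in either study.

def sigma_metric(tea_pct, bias_pct, cv_pct):
    """Sigma = (TEa% - Bias%) / CV%."""
    return (tea_pct - bias_pct) / cv_pct

def quality_goal_index(bias_pct, cv_pct):
    """QGI = Bias / (1.5 * CV); <0.8 suggests imprecision, >1.2 suggests inaccuracy."""
    return bias_pct / (1.5 * cv_pct)

tea, bias, cv = 10.0, 2.0, 1.5   # assumed: total allowable error, bias, and CV in %
print(f"sigma = {sigma_metric(tea, bias, cv):.2f}")
print(f"QGI   = {quality_goal_index(bias, cv):.2f}")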
Quantification and Visualization of Variation in Anatomical Trees
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amenta, Nina; Datar, Manasi; Dirksen, Asger
This paper presents two approaches to quantifying and visualizing variation in datasets of trees. The first approach localizes subtrees in which significant population differences are found through hypothesis testing and sparse classifiers on subtree features. The second approach visualizes the global metric structure of datasets through low-distortion embedding into hyperbolic planes in the style of multidimensional scaling. A case study is made on a dataset of airway trees in relation to Chronic Obstructive Pulmonary Disease.
Manolov, Rumen; Jamieson, Matthew; Evans, Jonathan J; Sierra, Vicenta
2015-09-01
Single-case data analysis still relies heavily on visual inspection, and, at the same time, it is not clear to what extent the results of different quantitative procedures converge in identifying an intervention effect and its magnitude when applied to the same data; this is the type of evidence provided here for two procedures. One of the procedures, included because of the importance of providing objective criteria to visual analysts, is a visual aid that fits and projects a split-middle trend while taking data variability into account. The other procedure converts several different metrics into probabilities, making their results comparable. In the present study, we explore to what extent these two procedures coincide regarding the magnitude of the intervention effect in a set of studies stemming from a recent meta-analysis. The procedures concur to a greater extent with the values of the indices computed and with each other and, to a lesser extent, with our own visual analysis. For distinguishing smaller from larger effects, the probability-based approach seems somewhat better suited. Moreover, the results of the field test suggest that the latter is a reasonably good mechanism for translating different metrics into similar labels. User-friendly R code is provided for promoting the use of the visual aid, together with a quantification based on nonoverlap and the label provided by the probability approach. © The Author(s) 2015.
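A minimal sketch of fitting and projecting a split-middle trend line (the visual aid described above). This is not the authors' R implementation, and it omits the variability envelope they add around the projected trend; the baseline data below are made up.

import numpy as np

def split_middle_trend(y):
    """Fit a line through the medians of the two halves of a baseline series."""
    y = np.asarray(y, dtype=float)
    t = np.arange(len(y))
    half = len(y) // 2
    x1, y1 = np.median(t[:half]), np.median(y[:half])
    x2, y2 = np.median(t[half:]), np.median(y[half:])
    slope = (y2 - y1) / (x2 - x1)
    intercept = y1 - slope * x1
    return slope, intercept

baseline = [3, 4, 4, 5, 6, 5, 7, 8]          # assumed baseline-phase data
slope, intercept = split_middle_trend(baseline)
# Project the trend into the intervention phase for comparison against observed data.
projection = [intercept + slope * t for t in range(len(baseline), len(baseline) + 5)]
print(slope, intercept, projection)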
a Novel Ihs-Ga Fusion Method Based on Enhancement Vegetated Area
NASA Astrophysics Data System (ADS)
Niazi, S.; Mokhtarzade, M.; Saeedzadeh, F.
2015-12-01
Pan sharpening methods aim to produce a more informative image containing the positive aspects of both source images. However, the pan sharpening process usually introduces some spectral and spatial distortions into the resulting fused image. The amount of these distortions varies greatly depending on the pan sharpening technique as well as the type of data. Among the existing pan sharpening methods, the Intensity-Hue-Saturation (IHS) technique is the most widely used because of its efficiency and high spatial resolution. When the IHS method is used for IKONOS or QuickBird imagery, there is a significant color distortion which is mainly due to the wavelength range of the panchromatic image: in green vegetated regions, panchromatic gray values are much larger than the gray values of the intensity image. A novel method is therefore proposed which spatially adjusts the intensity image in vegetated areas. To do so, the normalized difference vegetation index (NDVI) is used to identify vegetated areas, where the green band is enhanced according to the red and NIR bands. In this way an intensity image is obtained in which the gray values are comparable to the panchromatic image. In addition, a genetic optimization algorithm is used to find the optimum weight parameters in order to obtain the best intensity image. Visual and statistical analysis proved the efficiency of the proposed method, as it significantly improved the fusion quality in comparison to the conventional IHS technique. The accuracy of the proposed pan sharpening technique was also evaluated in terms of different spatial and spectral metrics. In this study, 7 metrics (Correlation Coefficient, ERGAS, RASE, RMSE, SAM, SID and Spatial Coefficient) have been used in order to determine the quality of the pan-sharpened images. Experiments were conducted on two different data sets obtained by two different imaging sensors, IKONOS and QuickBird. The results showed that the evaluation metrics are more promising for our fused image in comparison to other pan sharpening methods.
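A hedged sketch of the general idea, not the paper's exact method: a fast-IHS style fusion in which the intensity image is adjusted in vegetated pixels (identified via NDVI) before the panchromatic substitution. The band weights, which the paper optimizes with a genetic algorithm, and the vegetation adjustment rule are simply fixed assumptions here.

import numpy as np

def ndvi(nir, red, eps=1e-6):
    return (nir - red) / (nir + red + eps)

def fuse(red, green, blue, nir, pan, veg_threshold=0.3, green_boost=0.4):
    # Intensity as a weighted combination of the multispectral bands (weights assumed).
    intensity = 0.25 * (red + green + blue + nir)
    # Raise the intensity where vegetation dominates so it becomes comparable to the pan band.
    veg = ndvi(nir, red) > veg_threshold
    intensity = np.where(veg, intensity + green_boost * (nir + red) / 2.0, intensity)
    # Fast IHS-style substitution: inject the pan/intensity difference into each band.
    diff = pan - intensity
    return red + diff, green + diff, blue + diff, nir + diff

# Toy 4-band scene plus pan band (values in [0, 1]); real imagery would come from IKONOS/QuickBird.
rng = np.random.default_rng(1)
bands = [rng.random((64, 64)) for _ in range(5)]   # red, green, blue, nir, pan
fused = fuse(*bands)
print([b.shape for b in fused])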
Emerging medical informatics research trends detection based on MeSH terms.
Lyu, Peng-Hui; Yao, Qiang; Mao, Jin; Zhang, Shi-Jing
2015-01-01
The aim of this study is to analyze the research trends of medical informatics over the last 12 years. A new method based on MeSH terms was proposed to identify emerging topics and trends of medical informatics research. Informetric methods and visualization technologies were applied to investigate research trends of medical informatics. The metric of perspective factor (PF) embedding MeSH terms was appropriately employed to assess the perspective quality for journals. The emerging MeSH terms have changed dramatically over the last 12 years, identifying two stages of medical informatics: the "medical imaging stage" and the "medical informatics stage". The focus of medical informatics has shifted from acquisition and storage of healthcare data by integrating computational, informational, cognitive and organizational sciences to semantic analysis for problem solving and clinical decision-making. About 30 core journals were determined by Bradford's Law in the last 3 years in this area. These journals, with high PF values, have relative high perspective quality and lead the trend of medical informatics.
A closer look at visually guided saccades in autism and Asperger’s disorder
Johnson, Beth P.; Rinehart, Nicole J.; Papadopoulos, Nicole; Tonge, Bruce; Millist, Lynette; White, Owen; Fielding, Joanne
2012-01-01
Motor impairments have been found to be a significant clinical feature associated with autism and Asperger’s disorder (AD) in addition to core symptoms of communication and social cognition deficits. Motor deficits in high-functioning autism (HFA) and AD may differentiate these disorders, particularly with respect to the role of the cerebellum in motor functioning. Current neuroimaging and behavioral evidence suggests greater disruption of the cerebellum in HFA than AD. Investigations of ocular motor functioning have previously been used in clinical populations to assess the integrity of the cerebellar networks, through examination of saccade accuracy and the integrity of saccade dynamics. Previous investigations of visually guided saccades in HFA and AD have only assessed basic saccade metrics, such as latency, amplitude, and gain, as well as peak velocity. We used a simple visually guided saccade paradigm to further characterize the profile of visually guided saccade metrics and dynamics in HFA and AD. It was found that children with HFA, but not AD, were more inaccurate across both small (5°) and large (10°) target amplitudes, and final eye position was hypometric at 10°. These findings suggest greater functional disturbance of the cerebellum in HFA than AD, and suggest fundamental difficulties with visual error monitoring in HFA. PMID:23162442
A composite measure to explore visual disability in primary progressive multiple sclerosis.
Poretto, Valentina; Petracca, Maria; Saiote, Catarina; Mormina, Enricomaria; Howard, Jonathan; Miller, Aaron; Lublin, Fred D; Inglese, Matilde
2017-01-01
Optical coherence tomography (OCT) and magnetic resonance imaging (MRI) can provide complementary information on visual system damage in multiple sclerosis (MS). The objective of this paper is to determine whether a composite OCT/MRI score, reflecting cumulative damage along the entire visual pathway, can predict visual deficits in primary progressive multiple sclerosis (PPMS). Twenty-five PPMS patients and 20 age-matched controls underwent neuro-ophthalmologic evaluation, spectral-domain OCT, and 3T brain MRI. Differences between groups were assessed by univariate general linear model, and principal component analysis (PCA) grouped instrumental variables into main components. Linear regression analysis was used to assess the relationship between low-contrast visual acuity (LCVA), OCT/MRI-derived metrics and PCA-derived composite scores. PCA identified four main components explaining 80.69% of data variance. Considering each variable independently, LCVA 1.25% was significantly predicted by ganglion cell-inner plexiform layer (GCIPL) thickness, thalamic volume and optic radiation (OR) lesion volume (adjusted R2 0.328, p = 0.00004; adjusted R2 0.187, p = 0.002 and adjusted R2 0.180, p = 0.002). The PCA composite score of global visual pathway damage independently predicted both LCVA 1.25% (adjusted R2 value 0.361, p = 0.00001) and LCVA 2.50% (adjusted R2 value 0.323, p = 0.00003). A multiparametric score represents a more comprehensive and effective tool to explain visual disability than a single instrumental metric in PPMS.
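A rough sketch, on synthetic data rather than the study's, of the analysis pattern described above: group OCT/MRI variables into principal components, then regress low-contrast visual acuity on a PCA-derived composite score. Variable names and sizes are assumptions.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(7)
n_patients = 25
# Columns stand in for instrumental variables such as GCIPL thickness,
# thalamic volume, and optic-radiation lesion volume (all synthetic here).
X = rng.normal(size=(n_patients, 6))
lcva = X @ rng.normal(size=6) + rng.normal(scale=0.5, size=n_patients)

pca = PCA(n_components=4).fit(X)
scores = pca.transform(X)                      # composite scores per patient
print("variance explained:", pca.explained_variance_ratio_.sum())

model = LinearRegression().fit(scores[:, :1], lcva)   # first composite as predictor
print("R^2 of composite score vs LCVA:", model.score(scores[:, :1], lcva))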
Compressed-Sensing Multi-Spectral Imaging of the Post-Operative Spine
Worters, Pauline W.; Sung, Kyunghyun; Stevens, Kathryn J.; Koch, Kevin M.; Hargreaves, Brian A.
2012-01-01
Purpose To apply compressed sensing (CS) to in vivo multi-spectral imaging (MSI), which uses additional encoding to avoid MRI artifacts near metal, and demonstrate the feasibility of CS-MSI in post-operative spinal imaging. Materials and Methods Thirteen subjects referred for spinal MRI were examined using T2-weighted MSI. A CS undersampling factor was first determined using a structural similarity index as a metric for image quality. Next, these fully sampled datasets were retrospectively undersampled using a variable-density random sampling scheme and reconstructed using an iterative soft-thresholding method. The fully- and under-sampled images were compared by using a 5-point scale. Prospectively undersampled CS-MSI data were also acquired from two subjects to ensure that the prospective random sampling did not affect the image quality. Results A two-fold outer reduction factor was deemed feasible for the spinal datasets. CS-MSI images were shown to be equivalent or better than the original MSI images in all categories: nerve visualization: p = 0.00018; image artifact: p = 0.00031; image quality: p = 0.0030. No alteration of image quality and T2 contrast was observed from prospectively undersampled CS-MSI. Conclusion This study shows that the inherently sparse nature of MSI data allows modest undersampling followed by CS reconstruction with no loss of diagnostic quality. PMID:22791572
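A simplified sketch of a compressed-sensing reconstruction via iterative soft-thresholding on variable-density random undersampled 2D Fourier data. It illustrates only the reconstruction idea; the actual CS-MSI pipeline (multi-spectral bins, wavelet-domain sparsity, coil handling) is more involved, and the phantom, sampling rate, and regularization weight below are assumptions.

import numpy as np

def soft_threshold(x, lam):
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def ista_recon(kspace, mask, n_iter=100, lam=0.02, step=1.0):
    """Proximal gradient (ISTA) steps on a least-squares data term with an L1 penalty."""
    x = np.zeros(kspace.shape)
    for _ in range(n_iter):
        residual = mask * (np.fft.fft2(x, norm="ortho") - kspace)
        grad = np.real(np.fft.ifft2(residual, norm="ortho"))
        x = soft_threshold(x - step * grad, lam * step)
    return x

rng = np.random.default_rng(3)
truth = np.zeros((64, 64)); truth[20:30, 20:30] = 1.0        # sparse toy "image"
mask = rng.random((64, 64)) < 0.5                            # roughly 2x undersampling
kspace = mask * np.fft.fft2(truth, norm="ortho")
recon = ista_recon(kspace, mask)
print("relative error:", np.linalg.norm(recon - truth) / np.linalg.norm(truth))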
A new metric to assess temporal coherence for video retargeting
NASA Astrophysics Data System (ADS)
Li, Ke; Yan, Bo; Yuan, Binhang
2014-10-01
In video retargeting, assessing how well temporal coherence is maintained has become a prominent challenge. In this paper, we present a new objective measurement to assess temporal coherence after video retargeting. It is a general metric for assessing jitter artifacts in both discrete and continuous video retargeting methods, and its accuracy is verified by psycho-visual tests. The proposed assessment method therefore has considerable practical significance.
Societal Value of Surgery for Facial Reanimation.
Su, Peiyi; Ishii, Lisa E; Joseph, Andrew; Nellis, Jason; Dey, Jacob; Bater, Kristin; Byrne, Patrick J; Boahene, Kofi D O; Ishii, Masaru
2017-03-01
Patients with facial paralysis are perceived negatively by society in a number of domains. Society's perception of the health utility of varying degrees of facial paralysis and the value society places on reconstructive surgery for facial reanimation need to be quantified. To measure health state utility of varying degrees of facial paralysis, willingness to pay (WTP) for a repair, and the subsequent value of facial reanimation surgery as perceived by society. This prospective observational study conducted in an academic tertiary referral center evaluated a group of 348 casual observers who viewed images of faces with unilateral facial paralysis of 3 severity levels (low, medium, and high) categorized by House-Brackmann grade. Structural equation modeling was performed to understand associations among health utility metrics, WTP, and facial perception domains. Data were collected from July 16 to September 26, 2015. Observer-rated (1) quality of life (QOL) using established health utility metrics (standard gamble, time trade-off, and a visual analog scale) and (2) their WTP for surgical repair. Among the 348 observers (248 women [71.3%]; 100 men [28.7%]; mean [SD] age, 29.3 [11.6] years), mixed-effects linear regression showed that WTP increased nonlinearly with increasing severity of paralysis. Participants were willing to pay $3487 (95% CI, $2362-$4961) to repair low-grade paralysis, $8571 (95% CI, $6401-$11 234) for medium-grade paralysis, and $20 431 (95% CI, $16 273-$25 317) for high-grade paralysis. The dominant factor affecting the participants' WTP was perceived QOL. Modeling showed that perceived QOL decreased with paralysis severity (regression coefficient, -0.004; 95% CI, -0.005 to -0.004; P < .001) and increased with attractiveness (regression coefficient, 0.002; 95% CI, 0.002 to 0.003; P < .001). Mean (SD) health utility scores calculated by the standard gamble metric for low- and high-grade paralysis were 0.98 (0.09) and 0.77 (0.25), respectively. Time trade-off and visual analog scale measures were highly correlated. We calculated mean (SD) WTP per quality-adjusted life-year, which ranged from $10 167 ($14 565) to $17 008 ($38 288) for low- to high-grade paralysis, respectively. Society perceives the repair of facial paralysis to be a high-value intervention. Societal WTP increases and perceived health state utility decreases with increasing House-Brackmann grade. This study demonstrates the usefulness of WTP as an objective measure to inform dimensions of disease severity and signal the value society places on proper facial function. NA.
Toward a perceptual image quality assessment of color quantized images
NASA Astrophysics Data System (ADS)
Frackiewicz, Mariusz; Palus, Henryk
2018-04-01
Color image quantization is an important operation in the field of color image processing. In this paper, we consider new perceptual image quality metrics for the assessment of quantized images. These metrics, e.g., DSCSI, MDSIs, MDSIm and HPSI, achieve the highest correlation coefficients with MOS in tests on six publicly available image databases. The research was limited to images distorted by two types of compression: JPG and JPG2K. Statistical analysis of the correlation coefficients, based on the Friedman test and post-hoc procedures, showed that the differences between the four new perceptual metrics are not statistically significant.
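A sketch of the evaluation procedure outlined above: correlate metric scores with MOS via rank correlation and compare the metrics with a Friedman test. The score arrays below are synthetic placeholders, not values from the image databases.

import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
mos = rng.uniform(1, 5, size=30)                       # mean opinion scores (synthetic)
metrics = {name: mos + rng.normal(scale=s, size=30)    # four hypothetical metric outputs
           for name, s in [("DSCSI", 0.3), ("MDSIs", 0.4), ("MDSIm", 0.4), ("HPSI", 0.5)]}

for name, scores in metrics.items():
    rho, _ = stats.spearmanr(mos, scores)
    print(f"{name}: SROCC = {rho:.3f}")

stat, p = stats.friedmanchisquare(*metrics.values())
print(f"Friedman chi2 = {stat:.2f}, p = {p:.3f}")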
Toward determining melt pool quality metrics via coaxial monitoring in laser powder bed fusion.
Fisher, Brian A; Lane, Brandon; Yeung, Ho; Beuth, Jack
2018-01-01
The current industry trend in metal additive manufacturing is towards greater real-time process monitoring capabilities during builds to ensure high-quality parts. While the hardware implementations that allow for real-time monitoring of the melt pool have advanced significantly, the knowledge required to correlate the generated data with useful metrics of interest is still lacking. This research presents promising results that aim to bridge this knowledge gap by determining a novel means of correlating easily obtainable sensor data (thermal emission) with key melt pool size metrics (e.g., melt pool cross-sectional area).
Berkowitz, Seth A; Aragon, Katherine; Hines, Jonas; Seligman, Hilary; Lee, Sei; Sarkar, Urmimala
2013-01-01
Objective: To determine whether diabetes clinical standards consider increased hypoglycemia risk in vulnerable patients. Data Sources: MEDLINE, the National Guidelines Clearinghouse, the National Quality Measures Clearinghouse, and supplemental sources. Study Design: Systematic review of clinical standards (guidelines, quality metrics, or pay-for-performance programs) for glycemic control in adult diabetes patients. The primary outcome was discussion of increased risk for hypoglycemia in vulnerable populations. Data Collection/Extraction Methods: Manuscripts identified were abstracted by two independent reviewers using prespecified inclusion/exclusion criteria and a standardized abstraction form. Principal Findings: We screened 1,166 titles, and reviewed 220 manuscripts in full text. Forty-four guidelines, 17 quality metrics, and 8 pay-for-performance programs were included. Five (11 percent) guidelines and no quality metrics or pay-for-performance programs met the primary outcome. Conclusions: Clinical standards do not substantively incorporate evidence about increased risk for hypoglycemia in vulnerable populations. PMID:23445498
Berkowitz, Seth A; Aragon, Katherine; Hines, Jonas; Seligman, Hilary; Lee, Sei; Sarkar, Urmimala
2013-08-01
To determine whether diabetes clinical standards consider increased hypoglycemia risk in vulnerable patients. MEDLINE, the National Guidelines Clearinghouse, the National Quality Measures Clearinghouse, and supplemental sources. Systematic review of clinical standards (guidelines, quality metrics, or pay-for-performance programs) for glycemic control in adult diabetes patients. The primary outcome was discussion of increased risk for hypoglycemia in vulnerable populations. Manuscripts identified were abstracted by two independent reviewers using prespecified inclusion/exclusion criteria and a standardized abstraction form. We screened 1,166 titles, and reviewed 220 manuscripts in full text. Forty-four guidelines, 17 quality metrics, and 8 pay-for-performance programs were included. Five (11 percent) guidelines and no quality metrics or pay-for-performance programs met the primary outcome. Clinical standards do not substantively incorporate evidence about increased risk for hypoglycemia in vulnerable populations. © Health Research and Educational Trust.
Quality Measures for Dialysis: Time for a Balanced Scorecard
2016-01-01
Recent federal legislation establishes a merit-based incentive payment system for physicians, with a scorecard for each professional. The Centers for Medicare and Medicaid Services evaluate quality of care with clinical performance measures and have used these metrics for public reporting and payment to dialysis facilities. Similar metrics may be used for the future merit-based incentive payment system. In nephrology, most clinical performance measures measure processes and intermediate outcomes of care. These metrics were developed from population studies of best practice and do not identify opportunities for individualizing care on the basis of patient characteristics and individual goals of treatment. The In-Center Hemodialysis (ICH) Consumer Assessment of Healthcare Providers and Systems (CAHPS) survey examines patients' perception of care and has entered the arena to evaluate quality of care. A balanced scorecard of quality performance should include three elements: population-based best clinical practice, patient perceptions, and individually crafted patient goals of care. PMID:26316622
O'Loughlin, Declan; Oliveira, Bárbara L; Elahi, Muhammad Adnan; Glavin, Martin; Jones, Edward; Popović, Milica; O'Halloran, Martin
2017-12-06
Inaccurate estimation of average dielectric properties can have a tangible impact on microwave radar-based breast images. Despite this, recent patient imaging studies have used a fixed estimate although this is known to vary from patient to patient. Parameter search algorithms are a promising technique for estimating the average dielectric properties from the reconstructed microwave images themselves without additional hardware. In this work, qualities of accurately reconstructed images are identified from point spread functions. As the qualities of accurately reconstructed microwave images are similar to the qualities of focused microscopic and photographic images, this work proposes the use of focal quality metrics for average dielectric property estimation. The robustness of the parameter search is evaluated using experimental dielectrically heterogeneous phantoms on the three-dimensional volumetric image. Based on a very broad initial estimate of the average dielectric properties, this paper shows how these metrics can be used as suitable fitness functions in parameter search algorithms to reconstruct clear and focused microwave radar images.
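An illustrative sketch of using a focal quality metric as the fitness function in a parameter search over candidate average dielectric properties. The reconstruct_image function is a hypothetical stand-in for the radar imaging step, and the normalized-variance focus metric, permittivity range, and synthetic image are all assumptions.

import numpy as np

def normalized_variance(img):
    """A simple focal quality metric: image variance normalized by mean intensity."""
    mean = img.mean()
    return img.var() / (mean + 1e-12)

def reconstruct_image(relative_permittivity):
    # Hypothetical placeholder: in practice this would run beamforming with the
    # assumed average permittivity. Here a synthetic target response is sharpest
    # near a "true" relative permittivity of 9.0.
    rng = np.random.default_rng(0)
    blur = abs(relative_permittivity - 9.0)
    img = rng.random((32, 32)) * 0.1
    img[14:18, 14:18] += 1.0 / (1.0 + blur)            # focused target response
    return img

candidates = np.arange(6.0, 12.1, 0.5)
best = max(candidates, key=lambda eps: normalized_variance(reconstruct_image(eps)))
print("estimated average relative permittivity:", best)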
Related Critical Psychometric Issues and Their Resolutions during Development of PE Metrics
ERIC Educational Resources Information Center
Fox, Connie; Zhu, Weimo; Park, Youngsik; Fisette, Jennifer L.; Graber, Kim C.; Dyson, Ben; Avery, Marybell; Franck, Marian; Placek, Judith H.; Rink, Judy; Raynes, De
2011-01-01
In addition to validity and reliability evidence, other psychometric qualities of the PE Metrics assessments needed to be examined. This article describes how those critical psychometric issues were addressed during the PE Metrics assessment bank construction. Specifically, issues included (a) number of items or assessments needed, (b) training…
National Quality Forum Colon Cancer Quality Metric Performance: How Are Hospitals Measuring Up?
Mason, Meredith C; Chang, George J; Petersen, Laura A; Sada, Yvonne H; Tran Cao, Hop S; Chai, Christy; Berger, David H; Massarweh, Nader N
2017-12-01
To evaluate the impact of care at high-performing hospitals on the National Quality Forum (NQF) colon cancer metrics. The NQF endorses evaluating ≥12 lymph nodes (LNs), adjuvant chemotherapy (AC) for stage III patients, and AC within 4 months of diagnosis as colon cancer quality indicators. Data on hospital-level metric performance and the association with survival are unclear. Retrospective cohort study of 218,186 patients with resected stage I to III colon cancer in the National Cancer Data Base (2004-2012). High-performing hospitals (>75% achievement) were identified by the proportion of patients achieving each measure. The association between hospital performance and survival was evaluated using Cox shared frailty modeling. Only hospital LN performance improved (15.8% in 2004 vs 80.7% in 2012; trend test, P < 0.001), with 45.9% of hospitals performing well on all 3 measures concurrently in the most recent study year. Overall, 5-year survival was 75.0%, 72.3%, 72.5%, and 69.5% for those treated at hospitals with high performance on 3, 2, 1, and 0 metrics, respectively (log-rank, P < 0.001). Care at hospitals with high metric performance was associated with lower risk of death in a dose-response fashion [0 metrics, reference; 1, hazard ratio (HR) 0.96 (0.89-1.03); 2, HR 0.92 (0.87-0.98); 3, HR 0.85 (0.80-0.90); 2 vs 1, HR 0.96 (0.91-1.01); 3 vs 1, HR 0.89 (0.84-0.93); 3 vs 2, HR 0.95 (0.89-0.95)]. Performance on metrics in combination was associated with lower risk of death [LN + AC, HR 0.86 (0.78-0.95); AC + timely AC, HR 0.92 (0.87-0.98); LN + AC + timely AC, HR 0.85 (0.80-0.90)], whereas individual measures were not [LN, HR 0.95 (0.88-1.04); AC, HR 0.95 (0.87-1.05)]. Less than half of hospitals perform well on these NQF colon cancer metrics concurrently, and high performance on individual measures is not associated with improved survival. Quality improvement efforts should shift focus from individual measures to defining composite measures encompassing the overall multimodal care pathway and capturing successful transitions from one care modality to another.
Kinsinger, Christopher R.; Apffel, James; Baker, Mark; Bian, Xiaopeng; Borchers, Christoph H.; Bradshaw, Ralph; Brusniak, Mi-Youn; Chan, Daniel W.; Deutsch, Eric W.; Domon, Bruno; Gorman, Jeff; Grimm, Rudolf; Hancock, William; Hermjakob, Henning; Horn, David; Hunter, Christie; Kolar, Patrik; Kraus, Hans-Joachim; Langen, Hanno; Linding, Rune; Moritz, Robert L.; Omenn, Gilbert S.; Orlando, Ron; Pandey, Akhilesh; Ping, Peipei; Rahbar, Amir; Rivers, Robert; Seymour, Sean L.; Simpson, Richard J.; Slotta, Douglas; Smith, Richard D.; Stein, Stephen E.; Tabb, David L.; Tagle, Danilo; Yates, John R.; Rodriguez, Henry
2011-01-01
Policies supporting the rapid and open sharing of proteomic data are being implemented by the leading journals in the field. The proteomics community is taking steps to ensure that data are made publicly accessible and are of high quality, a challenging task that requires the development and deployment of methods for measuring and documenting data quality metrics. On September 18, 2010, the U.S. National Cancer Institute (NCI) convened the “International Workshop on Proteomic Data Quality Metrics” in Sydney, Australia, to identify and address issues facing the development and use of such methods for open access proteomics data. The stakeholders at the workshop enumerated the key principles underlying a framework for data quality assessment in mass spectrometry data that will meet the needs of the research community, journals, funding agencies, and data repositories. Attendees discussed and agreed upon two primary needs for the wide use of quality metrics: (1) an evolving list of comprehensive quality metrics and (2) standards accompanied by software analytics. Attendees stressed the importance of increased education and training programs to promote reliable protocols in proteomics. This workshop report explores the historic precedents, key discussions, and necessary next steps to enhance the quality of open access data. By agreement, this article is published simultaneously in the Journal of Proteome Research, Molecular and Cellular Proteomics, Proteomics, and Proteomics Clinical Applications as a public service to the research community. The peer review process was a coordinated effort conducted by a panel of referees selected by the journals. PMID:22053864
Modality-Driven Classification and Visualization of Ensemble Variance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bensema, Kevin; Gosink, Luke; Obermaier, Harald
Advances in computational power now enable domain scientists to address conceptual and parametric uncertainty by running simulations multiple times in order to sufficiently sample the uncertain input space. While this approach helps address conceptual and parametric uncertainties, the ensemble datasets produced by this technique present a special challenge to visualization researchers as the ensemble dataset records a distribution of possible values for each location in the domain. Contemporary visualization approaches that rely solely on summary statistics (e.g., mean and variance) cannot convey the detailed information encoded in ensemble distributions that are paramount to ensemble analysis; summary statistics provide no information about modality classification and modality persistence. To address this problem, we propose a novel technique that classifies high-variance locations based on the modality of the distribution of ensemble predictions. Additionally, we develop a set of confidence metrics to inform the end-user of the quality of fit between the distribution at a given location and its assigned class. We apply a similar method to time-varying ensembles to illustrate the relationship between peak variance and bimodal or multimodal behavior. These classification schemes enable a deeper understanding of the behavior of the ensemble members by distinguishing between distributions that can be described by a single tendency and distributions which reflect divergent trends in the ensemble.
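A sketch of one way to classify per-location ensemble distributions by modality. This stands in for, and is much simpler than, the paper's classification and confidence metrics: estimate a kernel density and count its local maxima; the data, grid size, and prominence threshold are assumptions.

import numpy as np
from scipy.stats import gaussian_kde
from scipy.signal import find_peaks

def count_modes(samples, grid_points=256):
    """Count local maxima of a kernel density estimate of the ensemble samples."""
    kde = gaussian_kde(samples)
    grid = np.linspace(samples.min(), samples.max(), grid_points)
    density = kde(grid)
    peaks, _ = find_peaks(density, prominence=0.05 * density.max())
    return max(len(peaks), 1)

rng = np.random.default_rng(5)
unimodal = rng.normal(0.0, 1.0, size=200)
bimodal = np.concatenate([rng.normal(-3.0, 0.7, 200), rng.normal(3.0, 0.7, 200)])
print("unimodal location ->", count_modes(unimodal), "mode(s)")
print("bimodal  location ->", count_modes(bimodal), "mode(s)")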
Comparison of measured electron energy spectra for six matched, radiotherapy accelerators.
McLaughlin, David J; Hogstrom, Kenneth R; Neck, Daniel W; Gibbons, John P
2018-05-01
This study compares energy spectra of the multiple electron beams of individual radiotherapy machines, as well as the sets of spectra across multiple matched machines. Also, energy spectrum metrics are compared with central-axis percent depth-dose (PDD) metrics. A lightweight, permanent magnet spectrometer was used to measure energy spectra for seven electron beams (7-20 MeV) on six matched Elekta Infinity accelerators with the MLCi2 treatment head. PDD measurements in the distal falloff region provided R50 and R80-20 metrics in Plastic Water®, which correlated with energy spectrum metrics, peak mean energy (PME) and full-width at half maximum (FWHM). Visual inspection of energy spectra and their metrics showed whether beams on single machines were properly tuned, i.e., FWHM is expected to increase and peak height decrease monotonically with increased PME. Also, PME spacings are expected to be approximately equal for 7-13 MeV beams (0.5-cm R90 spacing) and for 13-16 MeV beams (1.0-cm R90 spacing). Most machines failed these expectations, presumably due to tolerances for initial beam matching (0.05 cm in R90; 0.10 cm in R80-20) and ongoing quality assurance (0.2 cm in R50). Also, comparison of energy spectra or metrics for a single beam energy (six machines) showed outlying spectra. These variations in energy spectra provided ample data spread for correlating PME and FWHM with PDD metrics. Least-squares fits showed that R50 and R80-20 varied linearly and supralinearly with PME, respectively; however, both suggested a secondary dependence on FWHM. Hence, PME and FWHM could serve as surrogates for R50 and R80-20 for beam tuning by the accelerator engineer, possibly being more sensitive (e.g., 0.1 cm in R80-20 corresponded to 2.0 MeV in FWHM). Results of this study suggest a lightweight, permanent magnet spectrometer could be a useful beam-tuning instrument for the accelerator engineer to (a) match electron beams prior to beam commissioning, (b) tune electron beams for the duration of their clinical use, and (c) provide estimates of PDD metrics following machine maintenance. However, a real-time version of the spectrometer is needed to be practical. © 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
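A small sketch of the correlation step described above: a least-squares fit of a depth-dose metric (here R50) against a spectrum metric (PME). The numbers below are made-up placeholders, not the study's measured values.

import numpy as np

pme = np.array([6.8, 9.1, 10.9, 12.7, 13.9, 16.2, 19.8])       # peak mean energy (MeV), assumed
r50 = np.array([2.9, 3.7, 4.4, 5.1, 5.6, 6.5, 7.9])            # R50 in cm, assumed

slope, intercept = np.polyfit(pme, r50, deg=1)                  # linear fit R50 ~ PME
pred = slope * pme + intercept
rmse = np.sqrt(np.mean((pred - r50) ** 2))
print(f"R50 = {slope:.3f} * PME + {intercept:.3f}  (RMSE {rmse:.3f} cm)")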
WebGL Visualisation of 3D Environmental Models Based on Finnish Open Geospatial Data Sets
NASA Astrophysics Data System (ADS)
Krooks, A.; Kahkonen, J.; Lehto, L.; Latvala, P.; Karjalainen, M.; Honkavaara, E.
2014-08-01
Recent developments in spatial data infrastructures have enabled real-time GIS analysis and visualization using open input data sources and service interfaces. In this study we present a new concept in which metric point clouds derived from national open airborne laser scanning (ALS) and photogrammetric image data are processed, analyzed, and finally visualised through open service interfaces to produce user-driven analysis products for targeted areas. The concept is demonstrated in three environmental applications: assessment of forest storm damage, assessment of volumetric changes in an open pit mine, and 3D city model visualization. One of the main objectives was to study the usability and requirements of national-level photogrammetric imagery in these applications. The results demonstrated that user-driven 3D geospatial analyses were possible with the proposed approach and current technology; for instance, a landowner could easily assess the number of fallen trees within his property borders after a storm using any web browser. On the other hand, our study indicated that there are still many uncertainties, especially due to the insufficient standardization of photogrammetric products and processes and their quality indicators.
A fast fusion scheme for infrared and visible light images in NSCT domain
NASA Astrophysics Data System (ADS)
Zhao, Chunhui; Guo, Yunting; Wang, Yulei
2015-09-01
Fusion of infrared and visible light images is an effective way to obtain a simultaneous visualization of the background details provided by the visible light image and the hidden target information provided by the infrared image, which is more suitable for browsing and further processing. Two crucial requirements for infrared and visible light image fusion are improving its fusion performance and reducing its computational burden. In this paper, a novel fusion algorithm named pixel information estimation is proposed, which determines the fusion weights by evaluating the information of each pixel and is well suited to visible light and infrared image fusion, yielding better fusion quality and lower time consumption. In addition, a fast realization of the non-subsampled contourlet transform is also proposed in this paper to improve computational efficiency. To verify the advantage of the proposed method, this paper compares it with several popular methods on six evaluation metrics over four different image groups. Experimental results show that the proposed algorithm achieves more effective results with much less time consumption and performs well in both subjective evaluation and objective indicators.
Pharmacy Dashboard: An Innovative Process for Pharmacy Workload and Productivity.
Kinney, Ashley; Bui, Quyen; Hodding, Jane; Le, Jennifer
2017-03-01
Background: Innovative approaches, including LEAN systems and dashboards, to enhance pharmacy production continue to evolve in a cost and safety conscious health care environment. Furthermore, implementing and evaluating the effectiveness of these novel methods continues to be challenging for pharmacies. Objective: To describe a comprehensive, real-time pharmacy dashboard that incorporated LEAN methodologies and evaluate its utilization in an inpatient Central Intravenous Additives Services (CIVAS) pharmacy. Methods: Long Beach Memorial Hospital (462 adult beds) and Miller Children's and Women's Hospital of Long Beach (combined 324 beds) are tertiary not-for-profit, community-based hospitals that are served by one CIVAS pharmacy. Metrics to evaluate the effectiveness of CIVAS were developed and implemented on a dashboard in real-time from March 2013 to March 2014. Results: The metrics that were designed and implemented to evaluate the effectiveness of CIVAS were quality and value, financial resilience, and the department's people and culture. Using a dashboard that integrated these metrics, the accuracy of manufacturing defect-free products was ≥99.9%, indicating excellent quality and value of CIVAS. The metric for financial resilience demonstrated a cost savings of $78,000 annually within pharmacy by eliminating the outsourcing of products. People and value metrics on the dashboard focused on standard work, with an overall 94.6% compliance to the workflow. Conclusion: A unique dashboard that incorporated metrics to monitor 3 important areas was successfully implemented to improve the effectiveness of CIVAS pharmacy. These metrics helped pharmacy to monitor progress in real-time, allowing attainment of production goals and fostering continuous quality improvement through LEAN work.
2014-06-01
increases quality of life, which, in turn, leads to better retention metrics; better retention metrics translate into higher experience levels ... the quality of life for Airmen, particularly two-parent military families assigned to different AEFs. Cognizant of an already high operations ... a desire to achieve the highest quality of life for Airmen. Ryan settled on a 1:4 AEF dwell ratio to ensure Airmen were not away from home-station
Getting started on metrics - Jet Propulsion Laboratory productivity and quality
NASA Technical Reports Server (NTRS)
Bush, M. W.
1990-01-01
A review is presented to describe the effort and difficulties of reconstructing fifteen years of JPL software history. In 1987 the collection and analysis of project data were started with the objective of creating laboratory-wide measures of quality and productivity for software development. As a result of this two-year Software Product Assurance metrics study, a rough measurement foundation for software productivity and software quality, and an order-of-magnitude quantitative baseline for software systems and subsystems are now available.
Thomas, Andreas; Shin, John; Jiang, Boyi; McMahon, Chantal; Kolassa, Ralf; Vigersky, Robert A
2018-01-01
Quantifying hypoglycemia has traditionally been limited to counting the frequency of hypoglycemic events during a given time interval from blood glucose (BG) testing data. However, continuous glucose monitoring (CGM) captures three parameters (a "Hypo-Triad") unavailable with BG monitoring that can be used to better characterize hypoglycemia: area under the curve (AUC), time (duration of hypoglycemia), and frequency of daily episodes below a specified threshold. We developed two new analytic metrics to enhance the traditional Hypo-Triad of CGM-derived data to more effectively capture the intensity of hypoglycemia (IntHypo) and the overall hypoglycemic environment, called the "hypoglycemia risk volume" (HypoRV). We reanalyzed the CGM data from the ASPIRE In-Home study, a randomized, controlled trial of a sensor-integrated pump system with a low glucose threshold suspend feature (SIP+TS), using these new metrics and compared them to standard metrics of hypoglycemia. IntHypo and HypoRV provide additional insights into the benefit of a SIP+TS system on glycemic exposure when compared to the standard reporting methods. In addition, the visual display of these parameters provides a unique and intuitive way to understand the impact of a diabetes intervention on a cohort of subjects as well as on individual patients. The IntHypo and HypoRV are new and enhanced ways of analyzing CGM-derived data in diabetes intervention studies which could lead to new insights in diabetes management. They require validation using existing, ongoing, or planned studies to determine whether they are superior to existing metrics.
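The three Hypo-Triad quantities are straightforward to compute from a CGM trace. Below is a minimal Python sketch, assuming a uniformly sampled trace in mg/dL and an illustrative 70 mg/dL threshold; the function name and defaults are hypothetical, not taken from the study.

```python
import numpy as np

def hypo_triad(glucose, dt_min=5.0, threshold=70.0):
    """Hypo-Triad from a uniformly sampled CGM trace: AUC below the
    threshold (mg/dL*min), time below it (min), and episode count.
    The 5-min sampling and 70 mg/dL threshold are assumptions."""
    g = np.asarray(glucose, dtype=float)
    below = g < threshold
    auc = float(np.sum(threshold - g[below]) * dt_min)
    duration = float(np.sum(below) * dt_min)
    # an episode begins at each downward crossing of the threshold
    episodes = int(np.sum(below[1:] & ~below[:-1]) + int(below[0]))
    return auc, duration, episodes
```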
Sensorimotor Synchronization with Different Metrical Levels of Point-Light Dance Movements
Su, Yi-Huang
2016-01-01
Rhythm perception and synchronization have been extensively investigated in the auditory domain, as they underlie means of human communication such as music and speech. Although recent studies suggest comparable mechanisms for synchronizing with periodically moving visual objects, the extent to which it applies to ecologically relevant information, such as the rhythm of complex biological motion, remains unknown. The present study addressed this issue by linking rhythm of music and dance in the framework of action-perception coupling. As a previous study showed that observers perceived multiple metrical periodicities in dance movements that embodied this structure, the present study examined whether sensorimotor synchronization (SMS) to dance movements resembles what is known of auditory SMS. Participants watched a point-light figure performing two basic steps of Swing dance cyclically, in which the trunk bounced at every beat and the limbs moved at every second beat, forming two metrical periodicities. Participants tapped synchronously to the bounce of the trunk with or without the limbs moving in the stimuli (Experiment 1), or tapped synchronously to the leg movements with or without the trunk bouncing simultaneously (Experiment 2). Results showed that, while synchronization with the bounce (lower-level pulse) was not influenced by the presence or absence of limb movements (metrical accent), synchronization with the legs (beat) was improved by the presence of the bounce (metrical subdivision) across different movement types. The latter finding parallels the “subdivision benefit” often demonstrated in auditory tasks, suggesting common sensorimotor mechanisms for visual rhythms in dance and auditory rhythms in music. PMID:27199709
ERIC Educational Resources Information Center
Grané, Aurea; Romera, Rosario
2018-01-01
Survey data are usually of mixed type (quantitative, multistate categorical, and/or binary variables). Multidimensional scaling (MDS) is one of the most extended methodologies to visualize the profile structure of the data. MDS methods have been introduced in the literature since the 1960s, initially in publications in the psychometrics area.…
Systems Modeling to Improve River, Riparian, and Wetland Habitat Quality and Area
NASA Astrophysics Data System (ADS)
Alafifi, A.
2016-12-01
The suitability of watershed habitat to support the livelihood of its biota primarily depends on managing flow. Ecological restoration requires finding opportunities to reallocate available water in a watershed to increase ecological benefits and maintain other beneficial uses. We present the Watershed Area of Suitable Habitat (WASH) systems model that recommends reservoir releases, streamflows, and water allocations throughout a watershed to maximize ecosystem habitat quality. WASH embeds and aggregates area-weighted metrics for aquatic, floodplain, and wetland habitat components as an ecosystem objective to maximize, while maintaining water deliveries for domestic and agricultural uses, mass balance, and the available budget for restoration actions. The metrics add spatial and temporal functionality and area coverage to traditional habitat quality indexes and can accommodate multiple species of concern. We apply the WASH model to the Utah portion of the Bear River watershed, which includes 8 demand sites, 5 reservoirs, and 37 nodes between the Utah-Idaho state line and the Great Salt Lake. We recommend water allocations to improve current conservation efforts and show tradeoffs between human and ecosystem uses of water. WASH results are displayed on an open-source web mapping application that allows stakeholders to access, visualize, and interact with the model data and results and compare current and model-recommended operations. Results show that the Bear River is largely developed and appropriated for human water uses. However, increasing reservoirs' winter and early spring releases and minimizing late spring spill volumes can significantly improve habitat quality without harming agricultural or urban water users. The spatial and temporal reallocation of spring spills to environmental uses creates an additional 70 thousand acres of suitable habitat in the watershed without harming human users. WASH also quantifies the potential environmental gains and losses from conserving water and from the impact of climate change on head flows, and thus helps plan for the future of our water resources and ecosystem.
Headphone and Head-Mounted Visual Displays for Virtual Environments
NASA Technical Reports Server (NTRS)
Begault, Durand R.; Ellis, Stephen R.; Wenzel, Elizabeth M.; Trejo, Leonard J. (Technical Monitor)
1998-01-01
A realistic auditory environment can contribute to both the overall subjective sense of presence in a virtual display, and to a quantitative metric predicting human performance. Here, the role of audio in a virtual display and the importance of auditory-visual interaction are examined. Conjectures are proposed regarding the effectiveness of audio compared to visual information for creating a sensation of immersion, the frame of reference within a virtual display, and the compensation of visual fidelity by supplying auditory information. Future areas of research are outlined for improving simulations of virtual visual and acoustic spaces. This paper will describe some of the intersensory phenomena that arise during operator interaction within combined visual and auditory virtual environments. Conjectures regarding audio-visual interaction will be proposed.
The evaluative imaging of mental models - Visual representations of complexity
NASA Technical Reports Server (NTRS)
Dede, Christopher
1989-01-01
The paper deals with some design issues involved in building a system that could visually represent the semantic structures of training materials and their underlying mental models. In particular, hypermedia-based semantic networks that instantiate classification problem solving strategies are thought to be a useful formalism for such representations; the complexity of these web structures can be best managed through visual depictions. It is also noted that a useful approach to implement in these hypermedia models would be some metrics of conceptual distance.
Comparison of tissue processing methods for microvascular visualization in axolotls.
Montoro, Rodrigo; Dickie, Renee
2017-01-01
The vascular system, the pipeline for oxygen and nutrient delivery to tissues, is essential for vertebrate development, growth, injury repair, and regeneration. With their capacity to regenerate entire appendages throughout their lifespan, axolotls are an unparalleled model for vertebrate regeneration, but they lack many of the molecular tools that facilitate vascular imaging in other animal models. The determination of vascular metrics requires high quality image data for the discrimination of vessels from background tissue. Quantification of the vasculature using perfused, cleared specimens is well-established in mammalian systems, but has not been widely employed in amphibians. The objective of this study was to optimize tissue preparation methods for the visualization of the microvascular network in axolotls, providing a basis for the quantification of regenerative angiogenesis. To accomplish this aim, we performed intracardiac perfusion of pigment-based contrast agents and evaluated aqueous and non-aqueous clearing techniques. The methods were verified by comparing the quality of the vascular images and the observable vascular density across treatment groups. Simple and inexpensive, these tissue processing techniques will be of use in studies assessing vascular growth and remodeling within the context of regeneration. Advantages of this method include: higher contrast of the vasculature within the 3D context of the surrounding tissue; enhanced detection of microvasculature, facilitating vascular quantification; and compatibility with other labeling techniques.
Comparing image quality of print-on-demand books and photobooks from web-based vendors
NASA Astrophysics Data System (ADS)
Phillips, Jonathan; Bajorski, Peter; Burns, Peter; Fredericks, Erin; Rosen, Mitchell
2010-01-01
Because of the emergence of e-commerce and developments in print engines designed for economical output of very short runs, there are increased business opportunities and consumer options for print-on-demand books and photobooks. The current state of these printing modes allows for direct uploading of book files via the web, printing on nonoffset printers, and distributing by standard parcel or mail delivery services. The goal of this research is to assess the image quality of print-on-demand books and photobooks produced by various Web-based vendors and to identify correlations between psychophysical results and objective metrics. Six vendors were identified for one-off (single-copy) print-on-demand books, and seven vendors were identified for photobooks. Participants rank ordered overall quality of a subset of individual pages from each book, where the pages included text, photographs, or a combination of the two. Observers also reported overall quality ratings and price estimates for the bound books. Objective metrics of color gamut, color accuracy, accuracy of International Color Consortium profile usage, eye-weighted root mean square L*, and cascaded modulation transfer acutance were obtained and compared to the observer responses. We introduce some new methods for normalizing data as well as for strengthening the statistical significance of the results. Our approach includes the use of latent mixed-effect models. We found statistically significant correlation with overall image quality and some of the spatial metrics, but correlations between psychophysical results and other objective metrics were weak or nonexistent. Strong correlation was found between psychophysical results of overall quality assessment and estimated price associated with quality. The photobook set of vendors reached higher image-quality ratings than the set of print-on-demand vendors. However, the photobook set had higher image-quality variability.
Yet another family of diagonal metrics for de Sitter and anti-de Sitter spacetimes
NASA Astrophysics Data System (ADS)
Podolský, Jiří; Hruška, Ondřej
2017-06-01
In this work we present and analyze a new class of coordinate representations of de Sitter and anti-de Sitter spacetimes for which the metrics are diagonal and (typically) static and axially symmetric. Contrary to the well-known forms of these fundamental geometries, which usually correspond to a 1+3 foliation with the 3-space of a constant spatial curvature, the new metrics are adapted to a 2+2 foliation, and are warped products of two 2-spaces of constant curvature. This new class of (anti-)de Sitter metrics depends on the value of the cosmological constant Λ and two discrete parameters +1, 0, -1 related to the curvature of the 2-spaces. The class admits 3 distinct subcases for Λ > 0 and 8 subcases for Λ < 0. We systematically study all these possibilities. In particular, we explicitly present the corresponding parametrizations of the (anti-)de Sitter hyperboloid, visualize the coordinate lines and surfaces within the global conformal cylinder, investigate their mutual relations, present some closely related forms of the metrics, and give transformations to standard de Sitter and anti-de Sitter metrics. Using these results, we also provide a physical interpretation of B-metrics as exact gravitational fields of a tachyon.
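For readers unfamiliar with the 2+2 construction, the description above corresponds schematically to a warped-product line element of the following generic shape (the explicit metric functions are given in the paper itself; this is only the form implied by the abstract):

```latex
\mathrm{d}s^{2} = g_{AB}(x)\,\mathrm{d}x^{A}\mathrm{d}x^{B}
                + \omega^{2}(x)\,h_{ij}(y)\,\mathrm{d}y^{i}\mathrm{d}y^{j},
\qquad A,B \in \{0,1\},\quad i,j \in \{2,3\}
```

where g and h are metrics on 2-spaces of constant curvature and the warp factor ω depends only on the coordinates of the first block.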
On Applying the Prognostic Performance Metrics
NASA Technical Reports Server (NTRS)
Saxena, Abhinav; Celaya, Jose; Saha, Bhaskar; Saha, Sankalita; Goebel, Kai
2009-01-01
Prognostics performance evaluation has gained significant attention in the past few years. As prognostics technology matures and more sophisticated methods for prognostic uncertainty management are developed, a standardized methodology for performance evaluation becomes extremely important to guide improvement efforts in a constructive manner. This paper continues previous efforts in which several new evaluation metrics tailored for prognostics were introduced and shown to evaluate various algorithms more effectively than conventional metrics. Specifically, this paper presents a detailed discussion on how these metrics should be interpreted and used. Several shortcomings identified while applying these metrics to a variety of real applications are also summarized, along with discussions that attempt to alleviate them. Further, these metrics have been enhanced to incorporate probability distribution information from prognostic algorithms, as opposed to evaluation based on point estimates only. Several methods have been suggested and guidelines provided to help choose one method over another based on probability distribution characteristics. These approaches also offer a convenient and intuitive visualization of algorithm performance with respect to some of these new metrics, such as the prognostic horizon and alpha-lambda performance, and quantify the corresponding performance while incorporating the uncertainty information.
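As an illustration of one of the metrics named above, here is a minimal point-estimate sketch of the alpha-lambda test; the distributional variants discussed in the paper generalize this by asking how much probability mass of the RUL prediction falls inside the bounds, and all names and defaults here are hypothetical:

```python
def alpha_lambda(times, rul_pred, t_start, t_eol, lam=0.5, alpha=0.2):
    """At the instant a fraction `lam` of the way from the first
    prediction to end of life, check whether the predicted remaining
    useful life (RUL) lies within +/-alpha of the true RUL."""
    t_lambda = t_start + lam * (t_eol - t_start)
    # use the prediction made closest to t_lambda
    i = min(range(len(times)), key=lambda k: abs(times[k] - t_lambda))
    true_rul = t_eol - times[i]
    lo, hi = (1 - alpha) * true_rul, (1 + alpha) * true_rul
    return lo <= rul_pred[i] <= hi
```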
A Metric-Based Validation Process to Assess the Realism of Synthetic Power Grids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Birchfield, Adam; Schweitzer, Eran; Athari, Mir
Public power system test cases that are of high quality benefit the power systems research community with expanded resources for testing, demonstrating, and cross-validating new innovations. Building synthetic grid models for this purpose is a relatively new problem, for which a challenge is to show that created cases are sufficiently realistic. This paper puts forth a validation process based on a set of metrics observed from actual power system cases. These metrics follow the structure, proportions, and parameters of key power system elements, which can be used in assessing and validating the quality of synthetic power grids. Though wide diversity exists in the characteristics of power systems, the paper focuses on an initial set of common quantitative metrics to capture the distribution of typical values from real power systems. The process is applied to two new public test cases, which are shown to meet the criteria specified in the metrics of this paper.
A Metric-Based Validation Process to Assess the Realism of Synthetic Power Grids
Birchfield, Adam; Schweitzer, Eran; Athari, Mir; ...
2017-08-19
Public power system test cases that are of high quality benefit the power systems research community with expanded resources for testing, demonstrating, and cross-validating new innovations. Building synthetic grid models for this purpose is a relatively new problem, for which a challenge is to show that created cases are sufficiently realistic. This paper puts forth a validation process based on a set of metrics observed from actual power system cases. These metrics follow the structure, proportions, and parameters of key power system elements, which can be used in assessing and validating the quality of synthetic power grids. Though wide diversity exists in the characteristics of power systems, the paper focuses on an initial set of common quantitative metrics to capture the distribution of typical values from real power systems. The process is applied to two new public test cases, which are shown to meet the criteria specified in the metrics of this paper.
[Perceptual sharpness metric for visible and infrared color fusion images].
Gao, Shao-Shu; Jin, Wei-Qi; Wang, Xia; Wang, Ling-Xue; Luo, Yuan
2012-12-01
For visible and infrared color fusion images, an objective sharpness assessment model is proposed to measure the clarity of detail and edge definition of the fusion image. Firstly, the contrast sensitivity function (CSF) of the human visual system is used to reduce insensitive frequency components under certain viewing conditions. Secondly, a perceptual contrast model, which takes the human luminance masking effect into account, is proposed based on the local band-limited contrast model. Finally, the perceptual contrast is calculated in the region of interest (containing image details and edges) in the fusion image to evaluate image perceptual sharpness. Experimental results show that the proposed perceptual sharpness metric provides predictions more closely matched to human perceptual evaluations than five existing sharpness (blur) metrics for color images, and that it can evaluate the perceptual sharpness of color fusion images effectively.
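The abstract does not give the exact CSF or contrast formulas used, so the sketch below substitutes the widely used Mannos-Sakrison CSF approximation to illustrate the first step, attenuating frequency components the eye is insensitive to; the viewing condition enters through an assumed pixels-per-degree value, and all names are illustrative:

```python
import numpy as np

def csf_mannos_sakrison(f_cpd):
    # Mannos-Sakrison CSF approximation; f_cpd in cycles/degree
    a = 0.0192 + 0.114 * f_cpd
    return 2.6 * a * np.exp(-(0.114 * f_cpd) ** 1.1)

def csf_filter(img, ppd):
    """Frequency-domain CSF weighting of a grayscale image.
    `ppd` (pixels per degree) encodes the viewing condition."""
    F = np.fft.fft2(img)
    fy = np.fft.fftfreq(img.shape[0]) * ppd  # cycles/degree
    fx = np.fft.fftfreq(img.shape[1]) * ppd
    f = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))
    return np.real(np.fft.ifft2(F * csf_mannos_sakrison(f)))
```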
The Assignment of Scale to Object-Oriented Software Measures
NASA Technical Reports Server (NTRS)
Neal, Ralph D.; Weistroffer, H. Roland; Coppins, Richard J.
1997-01-01
In order to improve productivity (and quality), measurement of specific aspects of software has become imperative. As object-oriented programming languages have become more widely used, metrics designed specifically for object-oriented software are required. Recently a large number of new metrics for object-oriented software have appeared in the literature. Unfortunately, many of these proposed metrics have not been validated to measure what they purport to measure. In this paper fifty (50) of these metrics are analyzed.
Algal bioassessment metrics for wadeable streams and rivers of Maine, USA
Danielson, Thomas J.; Loftin, Cynthia S.; Tsomides, Leonidas; DiFranco, Jeanne L.; Connors, Beth
2011-01-01
Many state water-quality agencies use biological assessment methods based on lotic fish and macroinvertebrate communities, but relatively few states have incorporated algal multimetric indices into monitoring programs. Algae are good indicators for monitoring water quality because they are sensitive to many environmental stressors. We evaluated benthic algal community attributes along a land-use gradient affecting wadeable streams and rivers in Maine, USA, to identify potential bioassessment metrics. We collected epilithic algal samples from 193 locations across the state. We computed weighted-average optima for common taxa for total P, total N, specific conductance, % impervious cover, and % developed watershed, which included all land use that is no longer forest or wetland. We assigned Maine stream tolerance values and categories (sensitive, intermediate, tolerant) to taxa based on their optima and responses to watershed disturbance. We evaluated performance of algal community metrics used in multimetric indices from other regions and novel metrics based on Maine data. Metrics specific to Maine data, such as the relative richness of species characterized as being sensitive in Maine, were more correlated with % developed watershed than most metrics used in other regions. Few community-structure attributes (e.g., species richness) were useful metrics in Maine. Performance of algal bioassessment models would be improved if metrics were evaluated with attributes of local data before inclusion in multimetric indices or statistical models. © 2011 by The North American Benthological Society.
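The weighted-average optima mentioned above follow a standard form in bioassessment: a taxon's optimum for a gradient is the abundance-weighted mean of the environmental values at the sites where it occurs. A minimal sketch with hypothetical names, not the authors' code:

```python
import numpy as np

def weighted_average_optimum(abundance, env):
    """Abundance-weighted mean of an environmental variable
    (e.g., total P) across sites, for a single taxon."""
    a = np.asarray(abundance, dtype=float)
    e = np.asarray(env, dtype=float)
    return float(np.sum(a * e) / np.sum(a))

# optima for every taxon in a sites-by-taxa abundance matrix X:
# optima = [weighted_average_optimum(X[:, j], total_p) for j in range(X.shape[1])]
```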
Naidu, Ramana K.
2018-01-01
Background: Chronic pain associated with serious illnesses is having a major impact on population health in the United States. Accountability for high quality care for community-dwelling patients with serious illnesses requires selection of metrics that capture the burden of chronic pain whose treatment may be enhanced or complicated by opioid use. Objective: Our aim was to evaluate options for assessing pain in seriously ill community-dwelling adults, to discuss the use/abuse of opioids in individuals with chronic pain, and to suggest pain and opioid use metrics that can be considered for screening and evaluation of patient responses and quality care. Design: Structured literature review. Measurements: Evaluation of pain and opioid use assessment metrics and measures for their potential usefulness in the community. Results: Several pain and opioid assessment instruments are available for consideration. Yet, no one pain instrument has been identified as “the best” to assess pain in seriously ill community-dwelling patients. Screening tools exist that are specific to the assessment of risk in opioid management. Opioid screening can assess risk based on substance use history, general risk taking, and reward-seeking behavior. Conclusions: Accountability for high quality care for community-dwelling patients requires selection of metrics that will capture the burden of chronic pain and beneficial use or misuse of opioids. Future research is warranted to identify, modify, or develop instruments that contain important metrics, demonstrate a balance between sensitivity and specificity, and address patient preferences and quality outcomes. PMID:29091525
Perceptually lossless fractal image compression
NASA Astrophysics Data System (ADS)
Lin, Huawu; Venetsanopoulos, Anastasios N.
1996-02-01
According to the collage theorem, the encoding distortion for fractal image compression is directly related to the metric used in the encoding process. In this paper, we introduce a perceptually meaningful distortion measure based on the human visual system's nonlinear response to luminance and the visual masking effects. Blackwell's psychophysical raw data on contrast threshold are first interpolated as a function of background luminance and visual angle, and are then used as an error upper bound for perceptually lossless image compression. For a variety of images, experimental results show that the algorithm produces a compression ratio of 8:1 to 10:1 without introducing visual artifacts.
What Do Eye Gaze Metrics Tell Us about Motor Imagery?
Poiroux, Elodie; Cavaro-Ménard, Christine; Leruez, Stéphanie; Lemée, Jean Michel; Richard, Isabelle; Dinomais, Mickael
2015-01-01
Many of the brain structures involved in performing real movements also have increased activity during imagined movements or during motor observation, and this could be the neural substrate underlying the effects of motor imagery in motor learning or motor rehabilitation. In the absence of any objective physiological method of measurement, it is currently impossible to be sure that the patient is indeed performing the task as instructed. Eye gaze recording during a motor imagery task could be a possible way to "spy" on the activity an individual is really engaged in. The aim of the present study was to compare the pattern of eye movement metrics during motor observation, visual and kinesthetic motor imagery (VI, KI), target fixation, and mental calculation. Twenty-two healthy subjects (16 females and 6 males) were required to perform tests in five conditions using imagery in the Box and Block Test tasks following the procedure described by Liepert et al. Eye movements were analysed by a non-invasive oculometric measure (SMI RED250 system). Two parameters describing gaze pattern were calculated: the index of ocular mobility (saccade duration over saccade + fixation duration) and the number of midline crossings (i.e., the number of times the subject's gaze crossed the midline of the screen when performing the different tasks). Both parameters were significantly different between visual imagery and kinesthetic imagery, visual imagery and mental calculation, and visual imagery and target fixation. For the first time we were able to show that eye movement patterns are different during VI and KI tasks. Our results suggest gaze metric parameters could be used as an objective unobtrusive approach to assess engagement in a motor imagery task. Further studies should define how oculomotor parameters could be used as an indicator of the rehabilitation task a patient is engaged in.
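Both gaze parameters defined in the abstract reduce to simple arithmetic on the recorded events; a minimal sketch follows (function names are illustrative, not from the SMI toolchain):

```python
import numpy as np

def ocular_mobility_index(saccade_dur, fixation_dur):
    # saccade duration over saccade + fixation duration, per the abstract
    return saccade_dur / (saccade_dur + fixation_dur)

def midline_crossings(gaze_x, midline):
    """Count sign changes of the horizontal gaze position about the
    screen midline during a trial."""
    side = np.sign(np.asarray(gaze_x, dtype=float) - midline)
    side = side[side != 0]  # drop samples exactly on the midline
    return int(np.sum(side[1:] != side[:-1]))
```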
NASA Astrophysics Data System (ADS)
Choudhury, Anustup; Farrell, Suzanne; Atkins, Robin; Daly, Scott
2017-09-01
We present an approach to predict overall HDR display quality as a function of key HDR display parameters. We first performed subjective experiments on a high quality HDR display that explored five key HDR display parameters: maximum luminance, minimum luminance, color gamut, bit-depth and local contrast. Subjects rated overall quality for different combinations of these display parameters. We explored two models: a physical model based solely on physically measured display characteristics, and a perceptual model that transforms physical parameters using human vision system models. For the perceptual model, we use a family of metrics based on a recently published color volume model (ICtCp), which consists of the PQ luminance non-linearity (ST 2084) and LMS-based opponent color, as well as an estimate of the display point spread function. To predict overall visual quality, we apply linear regression and machine learning techniques such as Multilayer Perceptron, RBF and SVM networks. We use RMSE and Pearson/Spearman correlation coefficients to quantify performance. We found that the perceptual model is better at predicting subjective quality than the physical model and that SVM is better at prediction than linear regression. The significance and contribution of each display parameter was investigated. In addition, we found that combined parameters such as contrast do not improve prediction. Traditional perceptual models were also evaluated and we found that models based on the PQ non-linearity performed better.
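The PQ non-linearity (SMPTE ST 2084) at the core of the perceptual model maps absolute luminance to an approximately perceptually uniform signal. A sketch of the encoding direction with the constants from the standard; the display model built on top of it is not reproduced here:

```python
# SMPTE ST 2084 (PQ) constants
M1, M2 = 0.1593017578125, 78.84375
C1, C2, C3 = 0.8359375, 18.8515625, 18.6875

def pq_encode(lum_nits):
    """Map absolute luminance (cd/m^2, up to 10,000) to a PQ code
    value in [0, 1]."""
    y = max(lum_nits, 0.0) / 10000.0
    return ((C1 + C2 * y ** M1) / (1.0 + C3 * y ** M1)) ** M2
```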
Information risk and security modeling
NASA Astrophysics Data System (ADS)
Zivic, Predrag
2005-03-01
This research paper presentation will feature current frameworks for addressing risk and security modeling and metrics. The paper will analyze technical-level risk and security metrics of Common Criteria/ISO15408, Centre for Internet Security guidelines, and NSA configuration guidelines, and the metrics used at this level. The view of IT operational standards on security metrics, such as GMITS/ISO13335 and ITIL/ITMS, and architectural guidelines such as ISO7498-2 will be explained. At the business process level, standards such as ISO17799, COSO and CobiT will be presented with their control approach to security metrics. At the top level, maturity standards such as SSE-CMM/ISO21827, NSA Infosec Assessment and CobiT will be explored and reviewed. For each defined level of security metrics the research presentation will explore the appropriate usage of these standards. The paper will discuss standards approaches to conducting the risk and security metrics. The research findings will demonstrate the need for a common baseline for both risk and security metrics. This paper will show the relation between the attribute-based common baseline and corporate assets and controls for risk and security metrics. It will be shown that such an approach spans all the mentioned standards. The proposed approach's 3D visual presentation and the development of the Information Security Model will be analyzed and postulated. The presentation will clearly demonstrate the benefits of the proposed attribute-based approach and a defined risk and security space for modeling and measuring.
Requirement Metrics for Risk Identification
NASA Technical Reports Server (NTRS)
Hammer, Theodore; Huffman, Lenore; Wilson, William; Rosenberg, Linda; Hyatt, Lawrence
1996-01-01
The Software Assurance Technology Center (SATC) is part of the Office of Mission Assurance of the Goddard Space Flight Center (GSFC). The SATC's mission is to assist National Aeronautics and Space Administration (NASA) projects to improve the quality of software which they acquire or develop. The SATC's efforts are currently focused on the development and use of metric methodologies and tools that identify and assess risks associated with software performance and scheduled delivery. This starts at the requirements phase, where the SATC, in conjunction with software projects at GSFC and other NASA centers is working to identify tools and metric methodologies to assist project managers in identifying and mitigating risks. This paper discusses requirement metrics currently being used at NASA in a collaborative effort between the SATC and the Quality Assurance Office at GSFC to utilize the information available through the application of requirements management tools.
Quality evaluation of motion-compensated edge artifacts in compressed video.
Leontaris, Athanasios; Cosman, Pamela C; Reibman, Amy R
2007-04-01
Little attention has been paid to an impairment common in motion-compensated video compression: the addition of high-frequency (HF) energy as motion compensation displaces blocking artifacts off block boundaries. In this paper, we employ an energy-based approach to measure this motion-compensated edge artifact, using both compressed bitstream information and decoded pixels. We evaluate the performance of our proposed metric, along with several blocking and blurring metrics, on compressed video in two ways. First, ordinal scales are evaluated through a series of expectations that a good quality metric should satisfy: the objective evaluation. Then, the best performing metrics are subjectively evaluated. The same subjective data set is finally used to obtain interval scales to gain more insight. Experimental results show that we accurately estimate the percentage of the added HF energy in compressed video.
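The published metric uses bitstream information, which is not reproduced here; a crude pixel-domain proxy for the same idea is to compare gradient energy off the coding grid (where displaced blocking edges land) with energy on it. A sketch under that assumption, with hypothetical names:

```python
import numpy as np

def off_grid_edge_energy(frame, block=8):
    """Ratio of horizontal-gradient energy at off-grid columns to that
    at 8x8 block-boundary columns. Classic blockiness concentrates
    energy ON the grid; motion-displaced artifacts raise the off-grid
    share. A simplified stand-in for the paper's metric."""
    g = np.abs(np.diff(frame.astype(float), axis=1)) ** 2
    cols = np.arange(g.shape[1])
    on_grid = cols % block == block - 1  # boundaries between 8-pixel blocks
    return g[:, ~on_grid].mean() / (g[:, on_grid].mean() + 1e-12)
```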
Lee, Hyun-Ho; Lee, Sang-Kwon
2009-09-01
Booming sound is one of the important sounds in a passenger car. The aim of the paper is to develop an objective evaluation method for interior booming sound. The method is based on sound metrics and an ANN (artificial neural network), and is called the booming index. Previous work maintained that booming sound quality is related to loudness and sharpness (the sound metrics used in psychoacoustics) and that the booming index is developed by using the loudness and sharpness for a signal within the whole frequency range between 20 Hz and 20 kHz. In the present paper, the booming sound quality was found to be effectively related to the loudness at frequencies below 200 Hz; thus the booming index is updated by using the loudness of the signal low-pass filtered at frequencies under 200 Hz. The relationship between the booming index and the sound metrics is identified by an ANN. The updated booming index has been successfully applied to the objective evaluation of the booming sound quality of mass-produced passenger cars.
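The feature-extraction step described above, isolating content below 200 Hz before computing a loudness measure, can be sketched as follows; a simple RMS level stands in for the Zwicker loudness actually used, and the trained ANN mapping is not reproduced:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def booming_feature(x, fs, fc=200.0):
    """Low-pass the interior-noise signal at 200 Hz and return an
    RMS-level proxy (dB re 20 uPa, assuming x is in pascals) for
    its loudness."""
    sos = butter(4, fc, btype="low", fs=fs, output="sos")
    low = sosfilt(sos, x)
    rms = np.sqrt(np.mean(low ** 2))
    return 20.0 * np.log10(rms / 2e-5)
```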
Evaluation of ride quality prediction methods for operational military helicopters
NASA Technical Reports Server (NTRS)
Leatherwood, J. D.; Clevenson, S. A.; Hollenbaugh, D. D.
1984-01-01
The results of a simulator study conducted to compare and validate various ride quality prediction methods for use in assessing passenger/crew ride comfort within helicopters are presented. Included are results quantifying 35 helicopter pilots' discomfort responses to helicopter interior noise and vibration typical of routine flights, assessment of various ride quality metrics including the NASA ride comfort model, and examination of possible criteria approaches. Results of the study indicated that crew discomfort results from a complex interaction between vibration and interior noise. Overall measures such as weighted or unweighted root-mean-square acceleration level and A-weighted noise level were not good predictors of discomfort. Accurate prediction required a metric incorporating the interactive effects of both noise and vibration. The best metric for predicting crew comfort to the combined noise and vibration environment was the NASA discomfort index.
Auralization of NASA N+2 Aircraft Concepts from System Noise Predictions
NASA Technical Reports Server (NTRS)
Rizzi, Stephen A.; Burley, Casey L.; Thomas, Russell H.
2016-01-01
Auralization of aircraft flyover noise provides an auditory experience that complements integrated metrics obtained from system noise predictions. Recent efforts have focused on auralization methods development, specifically the process by which source noise information obtained from semi-empirical models, computational aeroacoustic analyses, and wind tunnel and flight test data, are used for simulated flyover noise at a receiver on the ground. The primary focus of this work, however, is to develop full vehicle auralizations in order to explore the distinguishing features of NASA's N+2 aircraft vis-à-vis current fleet reference vehicles for single-aisle and large twin-aisle classes. Some features can be seen in metric time histories associated with aircraft noise certification, e.g., tone-corrected perceived noise level used in the calculation of effective perceived noise level. Other features can be observed in sound quality metrics, e.g., loudness, sharpness, roughness, fluctuation strength and tone-to-noise ratio. A psychoacoustic annoyance model is employed to establish the relationship between sound quality metrics and noise certification metrics. Finally, the auralizations will serve as the basis for a separate psychoacoustic study aimed at assessing how well aircraft noise certification metrics predict human annoyance for these advanced vehicle concepts.
Xu, Z; Dela Cruz, J; Fthenakis, C; Saliou, C
2018-06-06
Measuring skin mechanical properties has been of great interest in the skincare industry. Digital image correlation (DIC) is a high-accuracy, non-invasive optical technique which quantitatively tracks skin movement and deformation under mechanical perturbations. A study was conducted with female subjects (25-65 years old). A refined speckle pattern applied onto the skin surface was used for DIC measurements. A unidirectional force pulled the skin at a constant velocity, while the deformation process was quantified by the DIC. Prior to the DIC measurement, Cutometer® readings were taken on the same area. The DIC protocol's reproducibility across multiple pattern applications, the measurement's repeatability, and the sensitivity in differentiating skin mechanical properties were investigated. Subjects were clustered with statistical significance according to their skin mechanical properties described by six DIC metrics (μ [major strain], σ [major strain], μ [minor strain], σ [minor strain], μ [displacement], and σ [displacement]). Most measurement random errors are below 6%. This is several folds smaller in magnitude than the difference in the mean response between the clusters. Several Cutometer® parameters also showed good agreement with μ [displacement]. DIC was able to differentiate skins of different mechanical qualities. We also proposed the physical significance of the DIC metrics. Some of the DIC metrics potentially offer new insights into skin mechanical properties that complement those revealed by conventional instruments. Accurate measurements, large measurement areas along with ease of direct visualization are substantial advantages of DIC. © 2018 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
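The six DIC metrics listed above are the means (μ) and standard deviations (σ) of three full-field quantities; given the fields from a DIC package, the summary step is trivial. A sketch with hypothetical names:

```python
import numpy as np

def dic_metrics(major_strain, minor_strain, displacement):
    """Mean and standard deviation of each full-field DIC quantity,
    mirroring the six metrics named in the abstract."""
    fields = {"major strain": major_strain,
              "minor strain": minor_strain,
              "displacement": displacement}
    out = {}
    for name, field in fields.items():
        out[f"mu [{name}]"] = float(np.mean(field))
        out[f"sigma [{name}]"] = float(np.std(field))
    return out
```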
Ard, Tyler; Carver, Frederick W; Holroyd, Tom; Horwitz, Barry; Coppola, Richard
2015-08-01
In typical magnetoencephalography and/or electroencephalography functional connectivity analysis, researchers select one of several methods that measure a relationship between regions to determine connectivity, such as coherence, power correlations, and others. However, it is largely unknown if some are more suited than others for various types of investigations. In this study, the authors investigate seven connectivity metrics to evaluate which, if any, are sensitive to audiovisual integration by contrasting connectivity when tracking an audiovisual object versus connectivity when tracking a visual object uncorrelated with the auditory stimulus. The authors are able to assess the metrics' performances at detecting audiovisual integration by investigating connectivity between auditory and visual areas. Critically, the authors perform their investigation on a whole-cortex all-to-all mapping, avoiding confounds introduced in seed selection. The authors find that amplitude-based connectivity measures in the beta band detect strong connections between visual and auditory areas during audiovisual integration, specifically between V4/V5 and auditory cortices in the right hemisphere. Conversely, phase-based connectivity measures in the beta band as well as phase and power measures in alpha, gamma, and theta do not show connectivity between audiovisual areas. The authors postulate that while beta power correlations detect audiovisual integration in the current experimental context, it may not always be the best measure to detect connectivity. Instead, it is likely that the brain utilizes a variety of mechanisms in neuronal communication that may produce differential types of temporal relationships.
Model Evaluation of Continuous Data Pharmacometric Models: Metrics and Graphics
Nguyen, THT; Mouksassi, M‐S; Holford, N; Al‐Huniti, N; Freedman, I; Hooker, AC; John, J; Karlsson, MO; Mould, DR; Pérez Ruixo, JJ; Plan, EL; Savic, R; van Hasselt, JGC; Weber, B; Zhou, C; Comets, E
2017-01-01
This article represents the first in a series of tutorials on model evaluation in nonlinear mixed effect models (NLMEMs), from the International Society of Pharmacometrics (ISoP) Model Evaluation Group. Numerous tools are available for evaluation of NLMEM, with a particular emphasis on visual assessment. This first basic tutorial focuses on presenting graphical evaluation tools of NLMEM for continuous data. It illustrates graphs for correct or misspecified models, discusses their pros and cons, and recalls the definition of metrics used. PMID:27884052
Faithfulness of Recurrence Plots: A Mathematical Proof
NASA Astrophysics Data System (ADS)
Hirata, Yoshito; Komuro, Motomasa; Horai, Shunsuke; Aihara, Kazuyuki
It is practically known that a recurrence plot, a two-dimensional visualization of time series data, can contain almost all information related to the underlying dynamics except for its spatial scale: a rough shape of the original time series can be recovered from the recurrence plot even if the original time series is multivariate. We here provide a mathematical proof that the metric defined by a recurrence plot [Hirata et al., 2008] is equivalent to the Euclidean metric under mild conditions.
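The recurrence plot construction underlying the proof is standard: R[i, j] = 1 exactly when states i and j lie within ε of each other. A minimal sketch for a (possibly multivariate) series:

```python
import numpy as np

def recurrence_plot(x, eps):
    """Binary recurrence plot: R[i, j] = 1 iff ||x_i - x_j|| < eps
    (Euclidean metric). Rows of `x` are time-ordered states."""
    x = np.asarray(x, dtype=float)
    if x.ndim == 1:
        x = x[:, None]  # univariate series -> column of states
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    return (d < eps).astype(np.uint8)
```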
Software quality: Process or people
NASA Technical Reports Server (NTRS)
Palmer, Regina; Labaugh, Modenna
1993-01-01
This paper will present data related to software development processes and personnel involvement from the perspective of software quality assurance. We examine eight years of data collected from six projects. Data collected varied by project but usually included defect and fault density with limited use of code metrics, schedule adherence, and budget growth information. The data are a blend of AFSCP 800-14 and suggested productivity measures in Software Metrics: A Practitioner's Guide to Improved Product Development. A software quality assurance database tool, SQUID, was used to store and tabulate the data.
NASA Astrophysics Data System (ADS)
Garcia, Daniel D.; van de Pol, Corina; Barsky, Brian A.; Klein, Stanley A.
1999-06-01
Many current corneal topography instruments (called videokeratographs) provide an 'acuity index' based on corneal smoothness to analyze expected visual acuity. However, post-refractive surgery patients often exhibit better acuity than is predicted by such indices. One reason for this is that visual acuity may not necessarily be determined by overall corneal smoothness but rather by having some part of the cornea able to focus light coherently onto the fovea. We present a new method of representing visual acuity by measuring the wavefront aberration, using principles from both ray and wave optics. For each point P on the cornea, we measure the size of the associated coherence area whose optical path length (OPL), from a reference plane to P's focus, is within a certain tolerance of the OPL for P. We measured the topographies and vision of 62 eyes of patients who had undergone the corneal refractive surgery procedures of photorefractive keratectomy (PRK) and photorefractive astigmatic keratectomy (PARK). In addition to high contrast visual acuity, our vision tests included low contrast and low luminance to test the contribution of the PRK transition zone. We found our metric for visual acuity to be better than all other metrics at predicting the acuity of low contrast and low luminance. However, high contrast visual acuity was poorly predicted by all of the indices we studied, including our own. The indices provided by current videokeratographs sometimes fail for corneas whose shape differs from simple ellipsoidal models. This is the case with post-PRK and post-PARK refractive surgery patients. Our alternative representation that displays the coherence area of the wavefront has considerable advantages, and promises to be a better predictor of low contrast and low luminance visual acuity than current shape measures.
qcML: An Exchange Format for Quality Control Metrics from Mass Spectrometry Experiments
Walzer, Mathias; Pernas, Lucia Espona; Nasso, Sara; Bittremieux, Wout; Nahnsen, Sven; Kelchtermans, Pieter; Pichler, Peter; van den Toorn, Henk W. P.; Staes, An; Vandenbussche, Jonathan; Mazanek, Michael; Taus, Thomas; Scheltema, Richard A.; Kelstrup, Christian D.; Gatto, Laurent; van Breukelen, Bas; Aiche, Stephan; Valkenborg, Dirk; Laukens, Kris; Lilley, Kathryn S.; Olsen, Jesper V.; Heck, Albert J. R.; Mechtler, Karl; Aebersold, Ruedi; Gevaert, Kris; Vizcaíno, Juan Antonio; Hermjakob, Henning; Kohlbacher, Oliver; Martens, Lennart
2014-01-01
Quality control is increasingly recognized as a crucial aspect of mass spectrometry based proteomics. Several recent papers discuss relevant parameters for quality control and present applications to extract these from the instrumental raw data. What has been missing, however, is a standard data exchange format for reporting these performance metrics. We therefore developed the qcML format, an XML-based standard that follows the design principles of the related mzML, mzIdentML, mzQuantML, and TraML standards from the HUPO-PSI (Proteomics Standards Initiative). In addition to the XML format, we also provide tools for the calculation of a wide range of quality metrics as well as a database format and interconversion tools, so that existing LIMS systems can easily add relational storage of the quality control data to their existing schema. We here describe the qcML specification, along with possible use cases and an illustrative example of the subsequent analysis possibilities. All information about qcML is available at http://code.google.com/p/qcml. PMID:24760958
qcML: an exchange format for quality control metrics from mass spectrometry experiments.
Walzer, Mathias; Pernas, Lucia Espona; Nasso, Sara; Bittremieux, Wout; Nahnsen, Sven; Kelchtermans, Pieter; Pichler, Peter; van den Toorn, Henk W P; Staes, An; Vandenbussche, Jonathan; Mazanek, Michael; Taus, Thomas; Scheltema, Richard A; Kelstrup, Christian D; Gatto, Laurent; van Breukelen, Bas; Aiche, Stephan; Valkenborg, Dirk; Laukens, Kris; Lilley, Kathryn S; Olsen, Jesper V; Heck, Albert J R; Mechtler, Karl; Aebersold, Ruedi; Gevaert, Kris; Vizcaíno, Juan Antonio; Hermjakob, Henning; Kohlbacher, Oliver; Martens, Lennart
2014-08-01
Quality control is increasingly recognized as a crucial aspect of mass spectrometry based proteomics. Several recent papers discuss relevant parameters for quality control and present applications to extract these from the instrumental raw data. What has been missing, however, is a standard data exchange format for reporting these performance metrics. We therefore developed the qcML format, an XML-based standard that follows the design principles of the related mzML, mzIdentML, mzQuantML, and TraML standards from the HUPO-PSI (Proteomics Standards Initiative). In addition to the XML format, we also provide tools for the calculation of a wide range of quality metrics as well as a database format and interconversion tools, so that existing LIMS systems can easily add relational storage of the quality control data to their existing schema. We here describe the qcML specification, along with possible use cases and an illustrative example of the subsequent analysis possibilities. All information about qcML is available at http://code.google.com/p/qcml. © 2014 by The American Society for Biochemistry and Molecular Biology, Inc.
Evaluating Core Quality for a Mars Sample Return Mission
NASA Technical Reports Server (NTRS)
Weiss, D. K.; Budney, C.; Shiraishi, L.; Klein, K.
2012-01-01
Sample return missions, including the proposed Mars Sample Return (MSR) mission, propose to collect core samples from scientifically valuable sites on Mars. These core samples would undergo extreme forces during the drilling process, and during the reentry process if the EEV (Earth Entry Vehicle) performed a hard landing on Earth. Because of the foreseen damage to the stratigraphy of the cores, it is important to evaluate each core for rock quality. However, because no core sample return mission has yet been conducted to another planetary body, it remains unclear how to assess the cores for rock quality. In this report, we describe the development of a metric designed to quantitatively assess the mechanical quality of any rock cores returned from Mars (or other planetary bodies). We report on the process by which we tested the metric on core samples of Mars analogue materials, and the effectiveness of the core assessment metric (CAM) in assessing rock core quality before and after the cores were subjected to shocking (g forces representative of an EEV landing).
Griffith, J.A.; Martinko, E.A.; Whistler, J.L.; Price, K.P.
2002-01-01
We explored relationships of water quality parameters with landscape pattern metrics (LPMs), land use-land cover (LULC) proportions, and the advanced very high resolution radiometer (AVHRR) normalized difference vegetation index (NDVI) or NDVI-derived metrics. Stream sites (271) in Nebraska, Kansas, and Missouri were sampled for water quality parameters, the index of biotic integrity, and a habitat index in either 1994 or 1995. Although a combination of LPMs (interspersion and juxtaposition index, patch density, and percent forest) within Ozark Highlands watersheds explained >60% of the variation in levels of nitrite-nitrate nitrogen and conductivity, in most cases the LPMs were not significantly correlated with the stream data. Several problems using landscape pattern metrics were noted: small watersheds having only one or two patches, collinearity with LULC data, and counterintuitive or inconsistent results that resulted from basic differences in land use-land cover patterns among ecoregions or from other factors determining water quality. The amount of variation explained in water quality parameters using multiple regression models that combined LULC and LPMs was generally lower than that from NDVI or vegetation phenology metrics derived from time-series NDVI data. A comparison of LPMs and NDVI indicated that NDVI had greater promise for monitoring landscapes for stream conditions within the study area.
Griffith, Jerry A; Martinko, Edward A; Whistler, Jerry L; Price, Kevin P
2002-01-01
We explored relationships of water quality parameters with landscape pattern metrics (LPMs), land use-land cover (LULC) proportions, and the advanced very high resolution radiometer (AVHRR) normalized difference vegetation index (NDVI) or NDVI-derived metrics. Stream sites (271) in Nebraska, Kansas, and Missouri were sampled for water quality parameters, the index of biotic integrity, and a habitat index in either 1994 or 1995. Although a combination of LPMs (interspersion and juxtaposition index, patch density, and percent forest) within Ozark Highlands watersheds explained >60% of the variation in levels of nitrite-nitrate nitrogen and conductivity, in most cases the LPMs were not significantly correlated with the stream data. Several problems using landscape pattern metrics were noted: small watersheds having only one or two patches, collinearity with LULC data, and counterintuitive or inconsistent results that resulted from basic differences in land use-land cover patterns among ecoregions or from other factors determining water quality. The amount of variation explained in water quality parameters using multiple regression models that combined LULC and LPMs was generally lower than that from NDVI or vegetation phenology metrics derived from time-series NDVI data. A comparison of LPMs and NDVI indicated that NDVI had greater promise for monitoring landscapes for stream conditions within the study area.
Lin, Frank Yeong-Sung; Hsiao, Chiu-Han; Yen, Hong-Hsu; Hsieh, Yu-Jen
2013-01-01
One of the important applications in Wireless Sensor Networks (WSNs) is video surveillance that includes the tasks of video data processing and transmission. Processing and transmission of image and video data in WSNs has attracted a lot of attention in recent years. This is known as Wireless Visual Sensor Networks (WVSNs). WVSNs are distributed intelligent systems for collecting image or video data with unique performance, complexity, and quality of service challenges. WVSNs consist of a large number of battery-powered and resource constrained camera nodes. End-to-end delay is a very important Quality of Service (QoS) metric for video surveillance application in WVSNs. How to meet the stringent delay QoS in resource constrained WVSNs is a challenging issue that requires novel distributed and collaborative routing strategies. This paper proposes a Near-Optimal Distributed QoS Constrained (NODQC) routing algorithm to achieve an end-to-end route with lower delay and higher throughput. A Lagrangian Relaxation (LR)-based routing metric that considers the “system perspective” and “user perspective” is proposed to determine the near-optimal routing paths that satisfy end-to-end delay constraints with high system throughput. The empirical results show that the NODQC routing algorithm outperforms others in terms of higher system throughput with lower average end-to-end delay and delay jitter. In this paper, for the first time, the algorithm shows how to meet the delay QoS and at the same time how to achieve higher system throughput in stringently resource constrained WVSNs.
Quality Measures for Dialysis: Time for a Balanced Scorecard.
Kliger, Alan S
2016-02-05
Recent federal legislation establishes a merit-based incentive payment system for physicians, with a scorecard for each professional. The Centers for Medicare and Medicaid Services evaluate quality of care with clinical performance measures and have used these metrics for public reporting and payment to dialysis facilities. Similar metrics may be used for the future merit-based incentive payment system. In nephrology, most clinical performance measures measure processes and intermediate outcomes of care. These metrics were developed from population studies of best practice and do not identify opportunities for individualizing care on the basis of patient characteristics and individual goals of treatment. The In-Center Hemodialysis (ICH) Consumer Assessment of Healthcare Providers and Systems (CAHPS) survey examines patients' perception of care and has entered the arena to evaluate quality of care. A balanced scorecard of quality performance should include three elements: population-based best clinical practice, patient perceptions, and individually crafted patient goals of care. Copyright © 2016 by the American Society of Nephrology.
Reduced reference image quality assessment via sub-image similarity based redundancy measurement
NASA Astrophysics Data System (ADS)
Mou, Xuanqin; Xue, Wufeng; Zhang, Lei
2012-03-01
The reduced reference (RR) image quality assessment (IQA) has been attracting much attention from researchers for its fidelity to human perception and flexibility in practice. A promising RR metric should be able to predict the perceptual quality of an image accurately while using as few features as possible. In this paper, a novel RR metric is presented, whose novelty lies in two aspects. Firstly, it measures the image redundancy by calculating the so-called Sub-image Similarity (SIS), and the image quality is measured by comparing the SIS between the reference image and the test image. Secondly, the SIS is computed from the ratios of NSE (Non-shift Edge) between pairs of sub-images. Experiments on two IQA databases (i.e., the LIVE and CSIQ databases) show that by using only 6 features, the proposed metric can work very well with high correlations between the subjective and objective scores. In particular, it works consistently well across all the distortion types.
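The abstract does not spell out the Non-shift Edge construction, so the sketch below is only a loose stand-in for the idea: take polyphase sub-images of the image, detect edges in each, and score similarity by the overlap of the edge maps. Every name and threshold here is an assumption, not the paper's definition:

```python
import numpy as np

def sub_images(img):
    # four polyphase sub-images from the 2x2 down-sampling offsets
    return [img[i::2, j::2] for i in (0, 1) for j in (0, 1)]

def edge_map(img, thr):
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy) > thr

def sub_image_similarity(img, thr=10.0):
    """Toy SIS: mean pairwise Jaccard overlap of edge maps across the
    four sub-images, standing in for the paper's NSE ratios."""
    subs = sub_images(img)
    h = min(s.shape[0] for s in subs)
    w = min(s.shape[1] for s in subs)
    edges = [edge_map(s[:h, :w], thr) for s in subs]
    scores = [(a & b).sum() / max((a | b).sum(), 1)
              for i, a in enumerate(edges) for b in edges[i + 1:]]
    return float(np.mean(scores))
```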
Temel, Metin; Türkmen, Arif; Berberoğlu, Ömer
2016-05-01
Substantial fluctuations in body weight can result in diastasis recti and weakening of the connections between the lateral abdominal muscles and the rectus sheath. The authors sought to determine the postural and psychological effects of abdominoplasty with vertical rectus plication. Forty women with substantial back and lumbar pain owing to abdominal lipodystrophy were evaluated in a prospective study. Preoperatively and 6 months postoperatively, patients underwent bidirectional radiography of the thoracic and lumbar regions. A visual analog scale (VAS), the Beck Depression Inventory (BDI), and the Nottingham Health Profile (NHP) were applied to assess physical, psychological, and quality-of-life changes following surgery. Significant improvements in posture, assessed in terms of lumbar lordosis, thoracic kyphosis, and the lumbosacral angle, were observed 6 months after abdominoplasty with rectus plication. Results of the VAS and BDI indicated significant improvements in pain and quality of life, respectively. Results of the NHP indicated significant postoperative improvements in fatigue, pain, and sleep. Abdominoplasty with rectus plication improves posture by tightening the thoracolumbar fascia. In selected patients, abdominoplasty can reduce back and lumbar pain, thereby improving quality of life. © 2016 The American Society for Aesthetic Plastic Surgery, Inc. Reprints and permission: journals.permissions@oup.com.
Brooks, Frank J; Grigsby, Perry W
2013-12-23
Many types of cancer are located and assessed via positron emission tomography (PET) using the 18F-fluorodeoxyglucose (FDG) radiotracer of glucose uptake. There is rapidly increasing interest in exploiting the intra-tumor heterogeneity observed in these FDG-PET images as an indicator of disease outcome. If this image heterogeneity is of genuine prognostic value, then it either correlates to known prognostic factors, such as tumor stage, or it indicates some as yet unknown tumor quality. Therefore, the first step in demonstrating the clinical usefulness of image heterogeneity is to explore the dependence of image heterogeneity metrics upon established prognostic indicators and other clinically interesting factors. If it is shown that image heterogeneity is merely a surrogate for other important tumor properties or variations in patient populations, then the theoretical value of quantified biological heterogeneity may not yet translate into the clinic given current imaging technology. We explore the relation between pelvic lymph node status at diagnosis and the visually evident uptake heterogeneity often observed in FDG-PET images of cervical carcinomas. We retrospectively studied the FDG-PET images of 47 node-negative and 38 node-positive patients, each having FIGO stage IIb tumors with squamous cell histology. Imaged tumors were segmented using 40% of the maximum tumor uptake as the tumor-defining threshold and then converted into sets of three-dimensional coordinates. We employed the sphericity, extent, Shannon entropy (S), and the accrued deviation from smoothest gradients (ζ) as image heterogeneity metrics. We analyzed these metrics within tumor volume strata via the Kolmogorov-Smirnov test, principal component analysis, and contingency tables. We found no statistically significant difference between the positive and negative lymph node groups for any one metric or plausible combinations thereof. Additionally, we observed that S is strongly dependent upon tumor volume and that ζ moderately correlates with mean FDG uptake. FDG uptake heterogeneity did not indicate patients with differing prognoses. Apparent heterogeneity differences between clinical groups may be an artifact arising from either the dependence of some image metrics upon other factors such as tumor volume or upon the underlying variations in the patient populations compared.
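Two of the four image metrics named above have simple closed forms. A sketch of the entropy and extent computations on a segmented tumor; the bin count and conventions are assumptions, since the paper's discretization is not given in the abstract:

```python
import numpy as np

def shannon_entropy(uptake, bins=64):
    """Shannon entropy S (bits) of the intra-tumor uptake histogram."""
    counts, _ = np.histogram(uptake, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def extent(voxel_coords):
    """Extent: tumor voxel count over the volume of the axis-aligned
    bounding box enclosing the segmented tumor."""
    c = np.asarray(voxel_coords)
    span = np.ptp(c, axis=0) + 1  # bounding-box side lengths in voxels
    return c.shape[0] / float(np.prod(span))
```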
Kumar, B. Vinodh; Mohan, Thuthi
2018-01-01
OBJECTIVE: Six Sigma is one of the most popular quality management system tools employed for process improvement. The Six Sigma methods are usually applied when the outcome of the process can be measured. This study was done to assess the performance of individual biochemical parameters on a Sigma Scale by calculating the sigma metrics for individual parameters and to follow the Westgard guidelines for appropriate Westgard rules and levels of internal quality control (IQC) that need to be processed to improve target analyte performance based on the sigma metrics. MATERIALS AND METHODS: This is a retrospective study, and data required for the study were extracted between July 2015 and June 2016 from a Secondary Care Government Hospital, Chennai. The data obtained for the study were the IQC coefficient of variation percentage (CV%) and the External Quality Assurance Scheme (EQAS) bias percentage for 16 biochemical parameters. RESULTS: For the level 1 IQC, four analytes (alkaline phosphatase, magnesium, triglyceride, and high-density lipoprotein-cholesterol) showed an ideal performance of ≥6 sigma level, and five analytes (urea, total bilirubin, albumin, cholesterol, and potassium) showed an average performance of <3 sigma level; for the level 2 IQC, the same four analytes as in level 1 showed a performance of ≥6 sigma level, and four analytes (urea, albumin, cholesterol, and potassium) showed an average performance of <3 sigma level. For all analytes below the 6 sigma level, the quality goal index (QGI) was <0.8, indicating that the area requiring improvement was imprecision, except for cholesterol, whose QGI of >1.2 indicated inaccuracy. CONCLUSION: This study shows that sigma metrics are a good quality tool to assess the analytical performance of a clinical chemistry laboratory. Thus, sigma metric analysis provides a benchmark for the laboratory to design a protocol for IQC, address poor assay performance, and assess the efficiency of existing laboratory processes. PMID:29692587
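A minimal sketch of the standard sigma-metric and quality goal index (QGI) calculations this kind of study relies on; the total allowable error (TEa), bias, and CV values below are illustrative placeholders rather than figures from the paper.

```python
def sigma_metric(tea_pct, bias_pct, cv_pct):
    """Sigma = (TEa - |Bias|) / CV, with all terms expressed as percentages."""
    return (tea_pct - abs(bias_pct)) / cv_pct

def quality_goal_index(bias_pct, cv_pct):
    """QGI = Bias / (1.5 * CV); <0.8 suggests imprecision, >1.2 suggests inaccuracy."""
    return abs(bias_pct) / (1.5 * cv_pct)

# Illustrative values only (not data from the study)
tea, bias, cv = 10.0, 2.0, 1.2   # percent
print(f"sigma = {sigma_metric(tea, bias, cv):.1f}")
print(f"QGI   = {quality_goal_index(bias, cv):.2f}")
```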
Ford, Adriana E. S.; Smart, Simon M.; Henrys, Peter A.; Ashmore, Mike R.
2016-01-01
Atmospheric nitrogen (N) deposition has had detrimental effects on species composition in a range of sensitive habitats, although N deposition can also increase agricultural productivity and carbon storage, and favours a few species considered of importance for conservation. Conservation targets are multiple, and increasingly incorporate services derived from nature as well as concepts of intrinsic value. Priorities vary. How then should changes in a set of species caused by drivers such as N deposition be assessed? We used a novel combination of qualitative semi-structured interviews and quantitative ranking to elucidate the views of conservation professionals specialising in grasslands, heathlands and mires. Although conservation management goals are varied, terrestrial habitat quality is mainly assessed by these specialists on the basis of plant species, since these are readily observed. The presence and abundance of plant species that are scarce, or have important functional roles, emerged as important criteria for judging overall habitat quality. However, species defined as ‘positive indicator-species’ (not particularly scarce, but distinctive for the habitat) were considered particularly important. Scarce species are by definition not always found, and the presence of functionally important species is not a sufficient indicator of site quality. Habitat quality as assessed by the key informants was rank-correlated with the number of positive indicator-species present at a site for seven of the nine habitat classes assessed. Other metrics such as species-richness or a metric of scarcity were inconsistently or not correlated with the specialists’ assessments. We recommend that metrics of habitat quality used to assess N pollution impacts are based on the occurrence of, or habitat-suitability for, distinctive species. Metrics of this type are likely to be widely applicable for assessing habitat change in response to different drivers. The novel combined qualitative and quantitative approach taken to elucidate the priorities of conservation professionals could be usefully applied in other contexts. PMID:27557277
Eye Tracking Metrics for Workload Estimation in Flight Deck Operation
NASA Technical Reports Server (NTRS)
Ellis, Kyle; Schnell, Thomas
2010-01-01
Flight decks of the future are being enhanced through improved avionics that adapt to both aircraft and operator state. Eye tracking allows for non-invasive analysis of pilot eye movements, from which a set of metrics can be derived to effectively and reliably characterize workload. This research identifies eye tracking metrics that correlate with aircraft automation conditions, and identifies the correlation of pilot workload with the same automation conditions. Saccade length was used as an indirect index of pilot workload: pilots in the fully automated condition were observed to have, on average, larger saccadic movements than in the guidance and manual flight conditions. The data set itself also provides a general model of human eye movement behavior and, by extension, of visual attention distribution in the cockpit for approach-to-land tasks with various levels of automation, by means of the same metrics used for workload algorithm development.
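As a concrete illustration of the saccade-length metric, the sketch below derives a mean saccade length from successive fixation coordinates; the data format and units are assumptions for illustration, not the study's eye-tracking pipeline.

```python
import numpy as np

def mean_saccade_length(fix_x, fix_y):
    """Mean Euclidean distance between successive fixation centers (same units as input)."""
    dx = np.diff(np.asarray(fix_x, dtype=float))
    dy = np.diff(np.asarray(fix_y, dtype=float))
    return float(np.mean(np.hypot(dx, dy)))

# Fixation centers in degrees of visual angle (illustrative values)
x = [0.0, 4.2, 3.9, 10.5, 2.1]
y = [0.0, 1.0, 5.5, 4.8, 0.3]
print(f"mean saccade length = {mean_saccade_length(x, y):.2f} deg")
```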
Chen, Mo; Hyppa-Martin, Jolene K.; Reichle, Joe E.; Symons, Frank J.
2017-01-01
Meaningfully synthesizing single case experimental data from intervention studies comprised of individuals with low incidence conditions and generating effect size estimates remains challenging. Seven effect size metrics were compared for single case design (SCD) data focused on teaching speech generating device use to individuals with intellectual and developmental disabilities (IDD) with moderate to profound levels of impairment. The effect size metrics included percent of data points exceeding the median (PEM), percent of nonoverlapping data (PND), improvement rate difference (IRD), percent of all nonoverlapping data (PAND), Phi, nonoverlap of all pairs (NAP), and Taunovlap. Results showed that among the seven effect size metrics, PAND, Phi, IRD, and PND were more effective in quantifying intervention effects for the data sample (N = 285 phase or condition contrasts). Results are discussed with respect to issues concerning extracting and calculating effect sizes, visual analysis, and SCD intervention research in IDD. PMID:27119210
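For readers unfamiliar with the nonoverlap indices being compared, the sketch below computes two of them, percent of nonoverlapping data (PND) and nonoverlap of all pairs (NAP), for a single baseline/intervention contrast under a "higher is better" convention; it is a simplified illustration, not the authors' analysis code.

```python
def pnd(baseline, treatment):
    """Percent of treatment-phase points exceeding the highest baseline point."""
    b_max = max(baseline)
    return 100.0 * sum(t > b_max for t in treatment) / len(treatment)

def nap(baseline, treatment):
    """Nonoverlap of all pairs: share of (baseline, treatment) pairs improved (ties = 0.5)."""
    pairs = [(b, t) for b in baseline for t in treatment]
    score = sum(1.0 if t > b else 0.5 if t == b else 0.0 for b, t in pairs)
    return 100.0 * score / len(pairs)

baseline  = [2, 3, 3, 4]      # illustrative data, higher = better
treatment = [5, 4, 6, 7, 6]
print(f"PND = {pnd(baseline, treatment):.0f}%, NAP = {nap(baseline, treatment):.0f}%")
```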
Development of Quality Metrics in Ambulatory Pediatric Cardiology.
Chowdhury, Devyani; Gurvitz, Michelle; Marelli, Ariane; Anderson, Jeffrey; Baker-Smith, Carissa; Diab, Karim A; Edwards, Thomas C; Hougen, Tom; Jedeikin, Roy; Johnson, Jonathan N; Karpawich, Peter; Lai, Wyman; Lu, Jimmy C; Mitchell, Stephanie; Newburger, Jane W; Penny, Daniel J; Portman, Michael A; Satou, Gary; Teitel, David; Villafane, Juan; Williams, Roberta; Jenkins, Kathy
2017-02-07
The American College of Cardiology Adult Congenital and Pediatric Cardiology (ACPC) Section had attempted to create quality metrics (QM) for ambulatory pediatric practice, but limited evidence made the process difficult. The ACPC sought to develop QMs for ambulatory pediatric cardiology practice. Five areas of interest were identified, and QMs were developed in a 2-step review process. In the first step, an expert panel, using the modified RAND-UCLA methodology, rated each QM for feasibility and validity. The second step sought input from ACPC Section members; final approval was by a vote of the ACPC Council. Work groups proposed a total of 44 QMs. Thirty-one metrics passed the RAND process and, after the open comment period, the ACPC council approved 18 metrics. The project resulted in successful development of QMs in ambulatory pediatric cardiology for a range of ambulatory domains. Copyright © 2017 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.
Cook, T M; Coupe, M; Ku, T
2012-06-01
Measuring outcomes and quality in anaesthesia is challenging. In the UK, there is increased focus on these as a result of changes in Department of Health strategy and the imminent introduction of mandatory revalidation for all doctors. A definition of quality may differ according to the observer's standpoint and numerous performance measures may contribute to overall quality. Patients, surgeons, anaesthetic assistants, recovery nurses, managers, and anaesthetic peers are each likely to have their own perspective on 'anaesthetic quality' and would perhaps suggest different metrics to measure it. Speed, efficiency, cost, interpersonal skills, complication rates, patient-reported outcome measures, and satisfaction are all valid as quality measures, but none alone captures anaesthetic quality. Performance data are frequently presented as single-dimension measurements (e.g. pain, postoperative nausea and vomiting, patient satisfaction), but this does not address the fact that two or more domains may be closely related (e.g. use of regional anaesthesia and quality of analgesia) or in opposition (e.g. use of regional anaesthesia and speed). We introduce the concept of a 'performance polygon' as a tool to represent multidimensional performance assessment. This method of data presentation encourages balanced appraisal of anaesthetic quality. Performance polygons may be used to compare individual performance with peers, with published outcome norms, and with trends in performance over time; to explore aspects of team performance; and to capture data required for medical revalidation. Performance polygons enable easy comparison with any relevant data set and are a visual tool that potentially has wider applications in healthcare quality improvement.
Parks, Nathan A.; Gannon, Matthew A.; Long, Stephanie M.; Young, Madeleine E.
2016-01-01
Analysis of event-related potential (ERP) data includes several steps to ensure that ERPs meet an appropriate level of signal quality. One such step, subject exclusion, rejects subject data if ERP waveforms fail to meet an appropriate level of signal quality. Subject exclusion is an important quality control step in the ERP analysis pipeline as it ensures that statistical inference is based only upon those subjects exhibiting clear evoked brain responses. This critical quality control step is most often performed simply through visual inspection of subject-level ERPs by investigators. Such an approach is qualitative, subjective, and susceptible to investigator bias, as there are no standards as to what constitutes an ERP of sufficient signal quality. Here, we describe a standardized and objective method for quantifying waveform quality in individual subjects and establishing criteria for subject exclusion. The approach uses bootstrap resampling of ERP waveforms (from a pool of all available trials) to compute a signal-to-noise ratio confidence interval (SNR-CI) for individual subject waveforms. The lower bound of this SNR-CI (SNRLB) yields an effective and objective measure of signal quality as it ensures that ERP waveforms statistically exceed a desired signal-to-noise criterion. SNRLB provides a quantifiable metric of individual subject ERP quality and eliminates the need for subjective evaluation of waveform quality by the investigator. We detail the SNR-CI methodology, establish the efficacy of employing this approach with Monte Carlo simulations, and demonstrate its utility in practice when applied to ERP datasets. PMID:26903849
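A compact sketch of the bootstrap idea described above: resample trials with replacement, average each resample into an ERP, compute an SNR, and take the lower bound of the resulting interval (SNRLB). The SNR definition and window choices here are simplifying assumptions rather than the published implementation.

```python
import numpy as np

def snr_lower_bound(trials, signal_win, noise_win, n_boot=2000, alpha=0.05, seed=0):
    """Bootstrap lower confidence bound on ERP signal-to-noise ratio.

    trials     : array (n_trials, n_samples) of single-trial EEG epochs.
    signal_win : slice over samples containing the evoked response.
    noise_win  : slice over samples treated as noise (e.g., pre-stimulus baseline).
    """
    rng = np.random.default_rng(seed)
    n_trials = trials.shape[0]
    snrs = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n_trials, n_trials)      # resample trials with replacement
        erp = trials[idx].mean(axis=0)                 # bootstrap ERP waveform
        signal_rms = np.sqrt(np.mean(erp[signal_win] ** 2))
        noise_rms = np.sqrt(np.mean(erp[noise_win] ** 2))
        snrs[i] = signal_rms / noise_rms
    return np.quantile(snrs, alpha)                    # lower bound of the SNR interval

# Synthetic example: 40 trials, 300 samples, evoked bump around samples 150-200
rng = np.random.default_rng(1)
t = np.arange(300)
evoked = np.exp(-0.5 * ((t - 175) / 10.0) ** 2)
trials = 0.5 * evoked + rng.normal(0, 1.0, size=(40, 300))
print(f"SNR_LB = {snr_lower_bound(trials, slice(150, 200), slice(0, 100)):.2f}")
```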
Evaluation of Postural Control in Patients with Glaucoma Using a Virtual Reality Environment.
Diniz-Filho, Alberto; Boer, Erwin R; Gracitelli, Carolina P B; Abe, Ricardo Y; van Driel, Nienke; Yang, Zhiyong; Medeiros, Felipe A
2015-06-01
To evaluate postural control using a dynamic virtual reality environment and the relationship between postural metrics and history of falls in patients with glaucoma. Cross-sectional study. The study involved 42 patients with glaucoma with repeatable visual field defects on standard automated perimetry (SAP) and 38 control healthy subjects. Patients underwent evaluation of postural stability by a force platform during presentation of static and dynamic visual stimuli on stereoscopic head-mounted goggles. The dynamic visual stimuli presented rotational and translational ecologically valid peripheral background perturbations. Postural stability was also tested in a completely dark field to assess somatosensory and vestibular contributions to postural control. History of falls was evaluated by a standard questionnaire. Torque moments around the center of foot pressure on the force platform were measured, and the standard deviations of the torque moments (STD) were calculated as a measurement of postural stability and reported in Newton meters (Nm). The association with history of falls was investigated using Poisson regression models. Age, gender, body mass index, severity of visual field defect, best-corrected visual acuity, and STD on dark field condition were included as confounding factors. Patients with glaucoma had larger overall STD than controls during both translational (5.12 ± 2.39 Nm vs. 3.85 ± 1.82 Nm, respectively; P = 0.005) and rotational stimuli (5.60 ± 3.82 Nm vs. 3.93 ± 2.07 Nm, respectively; P = 0.022). Postural metrics obtained during dynamic visual stimuli performed better in explaining history of falls compared with those obtained in static and dark field condition. In the multivariable model, STD values in the mediolateral direction during translational stimulus were significantly associated with a history of falls in patients with glaucoma (incidence rate ratio, 1.85; 95% confidence interval, 1.30-2.63; P = 0.001). The study presented and validated a novel paradigm for evaluation of balance control in patients with glaucoma on the basis of the assessment of postural reactivity to dynamic visual stimuli using a virtual reality environment. The newly developed metrics were associated with a history of falls and may help to provide a better understanding of balance control in patients with glaucoma. Copyright © 2015 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.
Evaluation of Postural Control in Glaucoma Patients Using a Virtual Reality Environment
Diniz-Filho, Alberto; Boer, Erwin R.; Gracitelli, Carolina P. B.; Abe, Ricardo Y.; van Driel, Nienke; Yang, Zhiyong; Medeiros, Felipe A.
2015-01-01
Purpose To evaluate postural control using a dynamic virtual reality environment and the relationship between postural metrics and history of falls in glaucoma patients. Design Cross-sectional study. Participants The study involved 42 glaucoma patients with repeatable visual field defects on standard automated perimetry (SAP) and 38 control healthy subjects. Methods Patients underwent evaluation of postural stability by a force platform during presentation of static and dynamic visual stimuli on stereoscopic head-mounted goggles. The dynamic visual stimuli presented rotational and translational ecologically valid peripheral background perturbations. Postural stability was also tested in a completely dark field to assess somatosensory and vestibular contributions to postural control. History of falls was evaluated by a standard questionnaire. Main Outcome Measures Torque moments around the center of foot pressure on the force platform were measured and the standard deviations (STD) of these torque moments were calculated as a measurement of postural stability and reported in Newton meter (Nm). The association with history of falls was investigated using Poisson regression models. Age, gender, body mass index, severity of visual field defect, best-corrected visual acuity, and STD on dark field condition were included as confounding factors. Results Glaucoma patients had larger overall STD than controls during both translational (5.12 ± 2.39 Nm vs. 3.85 ± 1.82 Nm, respectively; P = 0.005) as well as rotational stimuli (5.60 ± 3.82 Nm vs. 3.93 ± 2.07 Nm, respectively; P = 0.022). Postural metrics obtained during dynamic visual stimuli performed better in explaining history of falls compared to those obtained in static and dark field condition. In the multivariable model, STD values in the mediolateral direction during translational stimulus were significantly associated with history of falls in glaucoma patients (incidence-rate ratio = 1.85; 95% CI: 1.30 – 2.63; P = 0.001). Conclusions The study presented and validated a novel paradigm for evaluation of balance control in glaucoma patients based on the assessment of postural reactivity to dynamic visual stimuli using a virtual reality environment. The newly developed metrics were associated with history of falls and may help to provide a better understanding of balance control in glaucoma patients. PMID:25892017
ChiLin: a comprehensive ChIP-seq and DNase-seq quality control and analysis pipeline.
Qin, Qian; Mei, Shenglin; Wu, Qiu; Sun, Hanfei; Li, Lewyn; Taing, Len; Chen, Sujun; Li, Fugen; Liu, Tao; Zang, Chongzhi; Xu, Han; Chen, Yiwen; Meyer, Clifford A; Zhang, Yong; Brown, Myles; Long, Henry W; Liu, X Shirley
2016-10-03
Transcription factor binding, histone modification, and chromatin accessibility studies are important approaches to understanding the biology of gene regulation. ChIP-seq and DNase-seq have become the standard techniques for studying protein-DNA interactions and chromatin accessibility respectively, and comprehensive quality control (QC) and analysis tools are critical to extracting the most value from these assay types. Although many analysis and QC tools have been reported, few combine ChIP-seq and DNase-seq data analysis and quality control in a unified framework with a comprehensive and unbiased reference of data quality metrics. ChiLin is a computational pipeline that automates the quality control and data analyses of ChIP-seq and DNase-seq data. It is developed using a flexible and modular software framework that can be easily extended and modified. ChiLin is ideal for batch processing of many datasets and is well suited for large collaborative projects involving ChIP-seq and DNase-seq from different designs. ChiLin generates comprehensive quality control reports that include comparisons with historical data derived from over 23,677 public ChIP-seq and DNase-seq samples (11,265 datasets) from eight literature-based classified categories. To the best of our knowledge, this atlas represents the most comprehensive ChIP-seq and DNase-seq related quality metric resource currently available. These historical metrics provide useful heuristic quality references for experiment across all commonly used assay types. Using representative datasets, we demonstrate the versatility of the pipeline by applying it to different assay types of ChIP-seq data. The pipeline software is available open source at https://github.com/cfce/chilin . ChiLin is a scalable and powerful tool to process large batches of ChIP-seq and DNase-seq datasets. The analysis output and quality metrics have been structured into user-friendly directories and reports. We have successfully compiled 23,677 profiles into a comprehensive quality atlas with fine classification for users.
Cloud-based Computing and Applications of New Snow Metrics for Societal Benefit
NASA Astrophysics Data System (ADS)
Nolin, A. W.; Sproles, E. A.; Crumley, R. L.; Wilson, A.; Mar, E.; van de Kerk, M.; Prugh, L.
2017-12-01
Seasonal and interannual variability in snow cover affects socio-environmental systems including water resources, forest ecology, freshwater and terrestrial habitat, and winter recreation. We have developed two new seasonal snow metrics: snow cover frequency (SCF) and snow disappearance date (SDD). These metrics are calculated at 500-m resolution using NASA's Moderate Resolution Imaging Spectroradiometer (MODIS) snow cover data (MOD10A1). SCF is the number of times snow is observed in a pixel over the user-defined observation period. SDD is the last date of observed snow in a water year. These pixel-level metrics are calculated rapidly and globally in the Google Earth Engine cloud-based environment. SCF and SDD can be interactively visualized in a map-based interface, allowing users to explore spatial and temporal snowcover patterns from 2000-present. These metrics are especially valuable in regions where snow data are sparse or non-existent. We have used these metrics in several ongoing projects. When SCF was linked with a simple hydrologic model in the La Laguna watershed in northern Chile, it successfully predicted summer low flows with a Nash-Sutcliffe value of 0.86. SCF has also been used to help explain changes in Dall sheep populations in Alaska where sheep populations are negatively impacted by late snow cover and low snowline elevation during the spring lambing season. In forest management, SCF and SDD appear to be valuable predictors of post-wildfire vegetation growth. We see a positive relationship between winter SCF and subsequent summer greening for several years post-fire. For western US winter recreation, we are exploring trends in SDD and SCF for regions where snow sports are economically important. In a world with declining snowpacks and increasing uncertainty, these metrics extend across elevations and fill data gaps to provide valuable information for decision-making. SCF and SDD are being produced so that anyone with Internet access and a Google account can access, visualize, and download the data with a minimum of technical expertise and no need for proprietary software.
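The per-pixel logic of the two metrics is simple to illustrate on a binary snow-presence time series, as in the sketch below; the input format is an assumption, since the actual metrics are computed from MOD10A1 observations in Google Earth Engine.

```python
import numpy as np

def scf_and_sdd(snow_flags, dates):
    """snow_flags: 1-D boolean array (True = snow observed on that date);
    dates: matching array of observation dates within one water year.

    Returns (SCF as a count of snow observations, SCF as a fraction of observations,
             SDD, i.e., the last date on which snow was observed, or None if never)."""
    snow_flags = np.asarray(snow_flags, dtype=bool)
    scf_count = int(snow_flags.sum())
    scf_frac = scf_count / snow_flags.size if snow_flags.size else 0.0
    snow_idx = np.flatnonzero(snow_flags)
    sdd = dates[snow_idx[-1]] if snow_idx.size else None
    return scf_count, scf_frac, sdd

# Illustrative 10-observation series for one pixel (dates are placeholders)
dates = np.array(["2016-11-01", "2016-12-01", "2017-01-01", "2017-02-01", "2017-03-01",
                  "2017-04-01", "2017-05-01", "2017-06-01", "2017-07-01", "2017-08-01"])
flags = [False, True, True, True, True, True, True, False, False, False]
count, frac, sdd = scf_and_sdd(flags, dates)
print(f"SCF = {count} obs ({frac:.0%}), SDD = {sdd}")
```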
Xiao, Huaguo; Ji, Wei
2007-01-01
Landscape characteristics of a watershed are important variables that influence surface water quality. Understanding the relationship between these variables and surface water quality is critical in predicting pollution potential and developing watershed management practices to eliminate or reduce pollution risk. To understand the impacts of landscape characteristics on water quality in mine waste-located watersheds, we conducted a case study in the Tri-State Mining District which is located in the conjunction of three states (Missouri, Kansas and Oklahoma). Severe heavy metal pollution exists in that area resulting from historical mining activities. We characterized land use/land cover over the last three decades by classifying historical multi-temporal Landsat imagery. Landscape metrics such as proportion, edge density and contagion were calculated based on the classified imagery. In-stream water quality data over three decades were collected, including lead, zinc, iron, cadmium, aluminum and conductivity which were used as key water quality indicators. Statistical analyses were performed to quantify the relationship between landscape metrics and surface water quality. Results showed that landscape characteristics in mine waste-located watersheds could account for as much as 77% of the variation of water quality indicators. A single landscape metric alone, such as proportion of mine waste area, could be used to predict surface water quality; but its predicting power is limited, usually accounting for less than 60% of the variance of water quality indicators.
Empirical Evaluation of Hunk Metrics as Bug Predictors
NASA Astrophysics Data System (ADS)
Ferzund, Javed; Ahsan, Syed Nadeem; Wotawa, Franz
Reducing the number of bugs is a crucial issue during software development and maintenance. Software process and product metrics are good indicators of software complexity. These metrics have been used to build bug predictor models to help developers maintain the quality of software. In this paper we empirically evaluate the use of hunk metrics as predictors of bugs. We present a technique for bug prediction that works at the smallest units of code change, called hunks. We build bug prediction models using random forests, an efficient machine-learning classifier. Hunk metrics are used to train the classifier and each hunk metric is evaluated for its bug prediction capabilities. Our classifier can classify individual hunks as buggy or bug-free with 86% accuracy, 83% buggy-hunk precision and 77% buggy-hunk recall. We find that history-based and change-level hunk metrics are better predictors of bugs than code-level hunk metrics.
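To make the modeling setup concrete, here is a minimal scikit-learn sketch of the approach described: a random forest trained on per-hunk metrics, with accuracy, buggy-hunk precision, and buggy-hunk recall reported. The features and data are synthetic placeholders, not the study's dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for per-hunk metrics (e.g., lines added, prior fixes, author history)
rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 6))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.8, size=1000) > 0.7).astype(int)  # 1 = buggy

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)

print(f"accuracy  = {accuracy_score(y_te, pred):.2f}")
print(f"precision = {precision_score(y_te, pred):.2f}")   # buggy-hunk precision
print(f"recall    = {recall_score(y_te, pred):.2f}")      # buggy-hunk recall
```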
Bumstead, Matt; Liang, Kunyu; Hanta, Gregory; Hui, Lok Shu; Turak, Ayse
2018-01-24
Order classification is particularly important in photonics, optoelectronics, nanotechnology, biology, and biomedicine, as self-assembled and living systems tend to be ordered well but not perfectly. Engineering sets of experimental protocols that can accurately reproduce specific desired patterns can be a challenge when (dis)ordered outcomes look visually similar. Robust comparisons between similar samples, especially with limited data sets, need a finely tuned ensemble of accurate analysis tools. Here we introduce our numerical Mathematica package disLocate, a suite of tools to rapidly quantify the spatial structure of a two-dimensional dispersion of objects. The full range of tools available in disLocate give different insights into the quality and type of order present in a given dispersion, accessing the translational, orientational and entropic order. The utility of this package allows for researchers to extract the variation and confidence range within finite sets of data (single images) using different structure metrics to quantify local variation in disorder. Containing all metrics within one package allows for researchers to easily and rapidly extract many different parameters simultaneously, allowing robust conclusions to be drawn on the order of a given system. Quantifying the experimental trends which produce desired morphologies enables engineering of novel methods to direct self-assembly.
Metrics for Offline Evaluation of Prognostic Performance
NASA Technical Reports Server (NTRS)
Saxena, Abhinav; Celaya, Jose; Saha, Bhaskar; Saha, Sankalita; Goebel, Kai
2010-01-01
Prognostic performance evaluation has gained significant attention in the past few years. Currently, prognostics concepts lack standard definitions and suffer from ambiguous and inconsistent interpretations. This lack of standards is in part due to the varied end-user requirements across different applications, time scales, available information, and domain dynamics, among other factors. The research community has used a variety of metrics largely based on convenience and their respective requirements. Very little attention has been focused on establishing a standardized approach to compare different efforts. This paper presents several new evaluation metrics tailored for prognostics that were recently introduced and were shown to effectively evaluate various algorithms as compared to other conventional metrics. Specifically, this paper presents a detailed discussion on how these metrics should be interpreted and used. These metrics have the capability of incorporating probabilistic uncertainty estimates from prognostic algorithms. In addition to quantitative assessment, they also offer a comprehensive visual perspective that can be used in designing the prognostic system. Several methods are suggested to customize these metrics for different applications. Guidelines are provided to help choose one method over another based on distribution characteristics. Various issues faced by prognostics and its performance evaluation are discussed, followed by a formal notational framework to help standardize subsequent developments.
Dynamic allocation of attention to metrical and grouping accents in rhythmic sequences.
Kung, Shu-Jen; Tzeng, Ovid J L; Hung, Daisy L; Wu, Denise H
2011-04-01
Most people find it easy to perform rhythmic movements in synchrony with music, which reflects their ability to perceive the temporal periodicity and to allocate attention in time accordingly. Musicians and non-musicians were tested in a click localization paradigm in order to investigate how grouping and metrical accents in metrical rhythms influence attention allocation, and to reveal the effect of musical expertise on such processing. We performed two experiments in which the participants were required to listen to isochronous metrical rhythms containing superimposed clicks and then to localize the click on graphical and ruler-like representations with and without grouping structure information, respectively. Both experiments revealed metrical and grouping influences on click localization. Musical expertise improved the precision of click localization, especially when the click coincided with a metrically strong beat. Critically, although all participants located the click accurately at the beginning of an intensity group, only musicians located it precisely when it coincided with a strong beat at the end of the group. Removal of the visual cue of grouping structures enhanced these effects in musicians and reduced them in non-musicians. These results indicate that musical expertise not only enhances attention to metrical accents but also heightens sensitivity to perceptual grouping.
Recommended metric for tracking visibility progress in the Regional Haze Rule.
Gantt, Brett; Beaver, Melinda; Timin, Brian; Lorang, Phil
2018-05-01
For many national parks and wilderness areas with special air quality protections (Class I areas) in the western United States (U.S.), wildfire smoke and dust events can have a large impact on visibility. The U.S. Environmental Protection Agency's (EPA) 1999 Regional Haze Rule used the 20% haziest days to track visibility changes over time even if they are dominated by smoke or dust. Visibility on the 20% haziest days has remained constant or degraded over the last 16 yr at some Class I areas despite widespread emission reductions from anthropogenic sources. To better track visibility changes specifically associated with anthropogenic pollution sources rather than natural sources, the EPA has revised the Regional Haze Rule to track visibility on the 20% most anthropogenically impaired (hereafter, most impaired) days rather than the haziest days. To support the implementation of this revised requirement, the EPA has proposed (but not finalized) a recommended metric for characterizing the anthropogenic and natural portions of the daily extinction budget at each site. This metric selects the 20% most impaired days based on these portions using a "delta deciview" approach to quantify the deciview scale impact of anthropogenic light extinction. Using this metric, sulfate and nitrate make up the majority of the anthropogenic extinction in 2015 on these days, with natural extinction largely made up of organic carbon mass in the eastern U.S. and a combination of organic carbon mass, dust components, and sea salt in the western U.S. For sites in the western U.S., the seasonality of days selected as the 20% most impaired is different than the seasonality of the 20% haziest days, with many more winter and spring days selected. Applying this new metric to the 2000-2015 period across sites representing Class I areas results in substantial changes in the calculated visibility trend for the northern Rockies and southwest U.S., but little change for the eastern U.S. Changing the approach for tracking visibility in the Regional Haze Rule allows the EPA, states, and the public to track visibility on days when reductions in anthropogenic emissions have the greatest potential to improve the view. The calculations involved with the recommended metric can be incorporated into the routine IMPROVE (Interagency Monitoring of Protected Visual Environments) data processing, enabling rapid analysis of current and future visibility trends. Natural visibility conditions are important in the calculations for the recommended metric, necessitating the need for additional analysis and potential refinement of their values.
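For readers unfamiliar with the haze index used in these calculations, the snippet below converts light extinction to deciviews and forms a "delta deciview" impairment term as the difference between total and natural-only haze; the extinction budget shown is invented for illustration, and the rule's actual calculations use the full IMPROVE extinction equation.

```python
import math

def deciview(b_ext_mm1):
    """Haze index in deciviews from total light extinction (inverse megameters)."""
    return 10.0 * math.log(b_ext_mm1 / 10.0)

# Illustrative extinction budget for one day (Mm^-1), not measured values
natural_ext = 12.0        # e.g., Rayleigh plus natural organic mass and dust
anthropogenic_ext = 25.0  # e.g., sulfate and nitrate attributed to human sources
total_ext = natural_ext + anthropogenic_ext

delta_dv = deciview(total_ext) - deciview(natural_ext)  # "delta deciview" impairment
print(f"total = {deciview(total_ext):.1f} dv, anthropogenic impairment = {delta_dv:.1f} dv")
```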
Development of quality metrics for ambulatory care in pediatric patients with tetralogy of Fallot.
Villafane, Juan; Edwards, Thomas C; Diab, Karim A; Satou, Gary M; Saarel, Elizabeth; Lai, Wyman W; Serwer, Gerald A; Karpawich, Peter P; Cross, Russell; Schiff, Russell; Chowdhury, Devyani; Hougen, Thomas J
2017-12-01
The objective of this study was to develop quality metrics (QMs) relating to the ambulatory care of children after complete repair of tetralogy of Fallot (TOF). A workgroup team (WT) of pediatric cardiologists with expertise in all aspects of ambulatory cardiac management was formed at the request of the American College of Cardiology (ACC) and the Adult Congenital and Pediatric Cardiology Council (ACPC), to review published guidelines and consensus data relating to the ambulatory care of repaired TOF patients under the age of 18 years. A set of QMs was proposed by the WT. The metrics went through a two-step evaluation process. In the first step, the RAND-UCLA modified Delphi methodology was employed and the metrics were voted on for feasibility and validity by an expert panel. In the second step, QMs were put through an "open comments" process where feedback was provided by the ACPC members. The final QMs were approved by the ACPC council. The TOF WT formulated 9 QMs, of which only 6 were submitted to the expert panel; 3 QMs passed the modified RAND-UCLA process and went through the "open comments" process. Based on the feedback through the open comment process, only 1 metric was finally approved by the ACPC council. The ACPC Council was able to develop QMs for ambulatory care of children with repaired TOF. These patients should have documented genetic testing for 22q11.2 deletion. However, lack of evidence in the literature made it a challenge to formulate other evidence-based QMs. © 2017 Wiley Periodicals, Inc.
Shwartz, Michael; Peköz, Erol A; Burgess, James F; Christiansen, Cindy L; Rosen, Amy K; Berlowitz, Dan
2014-12-01
Two approaches are commonly used for identifying high-performing facilities on a performance measure: one, that the facility is in a top quantile (eg, quintile or quartile); and two, that a confidence interval is below (or above) the average of the measure for all facilities. This type of yes/no designation often does not do well in distinguishing high-performing from average-performing facilities. To illustrate an alternative continuous-valued metric for profiling facilities--the probability a facility is in a top quantile--and show the implications of using this metric for profiling and pay-for-performance. We created a composite measure of quality from fiscal year 2007 data based on 28 quality indicators from 112 Veterans Health Administration nursing homes. A Bayesian hierarchical multivariate normal-binomial model was used to estimate shrunken rates of the 28 quality indicators, which were combined into a composite measure using opportunity-based weights. Rates were estimated using Markov Chain Monte Carlo methods as implemented in WinBUGS. The probability metric was calculated from the simulation replications. Our probability metric allowed better discrimination of high performers than the point or interval estimate of the composite score. In a pay-for-performance program, a smaller top quantile (eg, a quintile) resulted in more resources being allocated to the highest performers, whereas a larger top quantile (eg, being above the median) distinguished less among high performers and allocated more resources to average performers. The probability metric has potential but needs to be evaluated by stakeholders in different types of delivery systems.
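The continuous-valued metric is straightforward to compute once posterior simulation replications of each facility's composite score are available; the sketch below estimates each facility's probability of being in the top quintile from generic posterior draws, which stand in for the WinBUGS/MCMC output used in the study.

```python
import numpy as np

def prob_top_quantile(draws, top_frac=0.20):
    """draws: array (n_sims, n_facilities) of posterior composite-score replications.

    Returns, for each facility, the fraction of simulations in which it ranks
    within the top `top_frac` of facilities (higher score = better)."""
    n_sims, n_fac = draws.shape
    n_top = max(1, int(round(top_frac * n_fac)))
    # Rank facilities within each simulation replication (0 = best)
    ranks = np.argsort(np.argsort(-draws, axis=1), axis=1)
    return (ranks < n_top).mean(axis=0)

# Placeholder posterior draws for 10 facilities (not study data)
rng = np.random.default_rng(3)
true_quality = np.linspace(-1, 1, 10)
draws = true_quality + rng.normal(scale=0.5, size=(4000, 10))
print(np.round(prob_top_quantile(draws), 2))
```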
A method for the use of landscape metrics in freshwater research and management
Kearns, F.R.; Kelly, N.M.; Carter, J.L.; Resh, V.H.
2005-01-01
Freshwater research and management efforts could be greatly enhanced by a better understanding of the relationship between landscape-scale factors and water quality indicators. This is particularly true in urban areas, where land transformation impacts stream systems at a variety of scales. Despite advances in landscape quantification methods, several studies attempting to elucidate the relationship between land use/land cover (LULC) and water quality have resulted in mixed conclusions. However, these studies have largely relied on compositional landscape metrics. For urban and urbanizing watersheds in particular, the use of metrics that capture spatial pattern may further aid in distinguishing the effects of various urban growth patterns, as well as exploring the interplay between environmental and socioeconomic variables. However, to be truly useful for freshwater applications, pattern metrics must be optimized based on characteristic watershed properties and common water quality point sampling methods. Using a freely available LULC data set for the Santa Clara Basin, California, USA, we quantified landscape composition and configuration for subwatershed areas upstream of individual sampling sites, reducing the number of metrics based on: (1) sensitivity to changes in extent and (2) redundancy, as determined by a multivariate factor analysis. The first two factors, interpreted as (1) patch density and distribution and (2) patch shape and landscape subdivision, explained approximately 85% of the variation in the data set, and are highly reflective of the heterogeneous urban development pattern found in the study area. Although offering slightly less explanatory power, compositional metrics can provide important contextual information. © Springer 2005.
EmailTime: visual analytics and statistics for temporal email
NASA Astrophysics Data System (ADS)
Erfani Joorabchi, Minoo; Yim, Ji-Dong; Shaw, Christopher D.
2011-01-01
Although the discovery and analysis of communication patterns in large and complex email datasets are difficult tasks, they can be a valuable source of information. We present EmailTime, a visual analysis tool of email correspondence patterns over the course of time that interactively portrays personal and interpersonal networks using the correspondence in the email dataset. Our approach is to put time as a primary variable of interest, and plot emails along a time line. EmailTime helps email dataset explorers interpret archived messages by providing zooming, panning, filtering, and highlighting. To support analysis, it also measures and visualizes histograms, graph centrality, and frequency on the communication graph that can be induced from the email collection. This paper describes EmailTime's capabilities, along with a large case study with the Enron email dataset to explore the behaviors of email users within different organizational positions from January 2000 to December 2001. We defined email behavior as the email activity level of people with respect to a series of measured metrics, e.g., sent and received emails, numbers of email addresses, etc. These metrics were calculated through EmailTime. Results showed specific patterns in the use of email within different organizational positions. We suggest that integrating both statistics and visualizations in order to display information about the email datasets may simplify their evaluation.
NASA Technical Reports Server (NTRS)
Leatherwood, J. D.; Clevenson, S. A.; Hollenbaugh, D. D.
1984-01-01
The results of a simulator study conducted to compare and validate various ride quality prediction methods for use in assessing passenger/crew ride comfort within helicopters are presented. Included are results quantifying 35 helicopter pilots' discomfort responses to helicopter interior noise and vibration typical of routine flights, assessment of various ride quality metrics including the NASA ride comfort model, and examination of possible criteria approaches. Results of the study indicated that crew discomfort results from a complex interaction between vibration and interior noise. Overall measures such as weighted or unweighted root-mean-square acceleration level and A-weighted noise level were not good predictors of discomfort. Accurate prediction required a metric incorporating the interactive effects of both noise and vibration. The best metric for predicting crew comfort in the combined noise and vibration environment was the NASA discomfort index.
New Quality Metrics for Web Search Results
NASA Astrophysics Data System (ADS)
Metaxas, Panagiotis Takis; Ivanova, Lilia; Mustafaraj, Eni
Web search results enjoy an increasing importance in our daily lives. But what can be said about their quality, especially when querying a controversial issue? The traditional information retrieval metrics of precision and recall do not provide much insight in the case of web information retrieval. In this paper we examine new ways of evaluating quality in search results: coverage and independence. We give examples on how these new metrics can be calculated and what their values reveal regarding the two major search engines, Google and Yahoo. We have found evidence of low coverage for commercial and medical controversial queries, and high coverage for a political query that is highly contested. Given the fact that search engines are unwilling to tune their search results manually, except in a few cases that have become the source of bad publicity, low coverage and independence reveal the efforts of dedicated groups to manipulate the search results.
Speckle reduction during all-fiber common-path optical coherence tomography of the cavernous nerves
NASA Astrophysics Data System (ADS)
Chitchian, Shahab; Fiddy, Michael; Fried, Nathaniel M.
2009-02-01
Improvements in identification, imaging, and visualization of the cavernous nerves during prostate cancer surgery, which are responsible for erectile function, may improve nerve preservation and postoperative sexual potency. In this study, we use a rat prostate, ex vivo, to evaluate the feasibility of optical coherence tomography (OCT) as a diagnostic tool for real-time imaging and identification of the cavernous nerves. A novel OCT system based on an all single-mode fiber common-path interferometer-based scanning system is used for this purpose. A wavelet shrinkage denoising technique using Stein's unbiased risk estimator (SURE) algorithm to calculate a data-adaptive threshold is implemented for speckle noise reduction in the OCT image. The signal-to-noise ratio (SNR) was improved by 9 dB and the image quality metrics of the cavernous nerves also improved significantly.
NASA Astrophysics Data System (ADS)
Tejeda-Sánchez, C.; Muñoz-Nieto, A.; Rodríguez-Gonzálvez, P.
2018-05-01
Visualization and analysis are usually the final steps in Geomatics. This paper shows the workflow followed to set up a hybrid 3D archaeological viewer. Data acquisition of the site survey was done by means of low-cost close-range photogrammetric methods. With the aim of satisfying not only the general public but also technicians, a large group of Geomatic products has been obtained (2D plans, 3D models, orthophotos, CAD models derived from vectorization, virtual anastylosis, and cross sections). Finally, all these products have been integrated into a three-dimensional archaeological information system. The hybrid archaeological viewer designed in this way allows a metric and quality approach to the scientific analysis of the ruins and, thanks to the implementation of a database and its potential for queries, improves on the benefits of an ordinary topographic survey.
Tay, Benjamin Chia-Meng; Chow, Tzu-Hao; Ng, Beng-Koon; Loh, Thomas Kwok-Seng
2012-09-01
This study investigates the autocorrelation bandwidths of dual-window (DW) optical coherence tomography (OCT) k-space scattering profile of different-sized microspheres and their correlation to scatterer size. A dual-bandwidth spectroscopic metric defined as the ratio of the 10% to 90% autocorrelation bandwidths is found to change monotonically with microsphere size and gives the best contrast enhancement for scatterer size differentiation in the resulting spectroscopic image. A simulation model supports the experimental results and revealed a tradeoff between the smallest detectable scatterer size and the maximum scatterer size in the linear range of the dual-window dual-bandwidth (DWDB) metric, which depends on the choice of the light source optical bandwidth. Spectroscopic OCT (SOCT) images of microspheres and tonsil tissue samples based on the proposed DWDB metric showed clear differentiation between different-sized scatterers as compared to those derived from conventional short-time Fourier transform metrics. The DWDB metric significantly improves the contrast in SOCT imaging and can aid the visualization and identification of dissimilar scatterer size in a sample. Potential applications include the early detection of cell nuclear changes in tissue carcinogenesis, the monitoring of healing tendons, and cell proliferation in tissue scaffolds.
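As a rough illustration of the dual-bandwidth idea, the sketch below measures the width of an autocorrelation function at 10% and 90% of its peak and takes their ratio; the synthetic profile and the width-at-level definition are simplifying assumptions, not the published DWDB processing chain.

```python
import numpy as np

def width_at_level(acf, level):
    """Contiguous width (in samples) around the zero-lag peak where the
    peak-normalized autocorrelation stays at or above `level`."""
    center = int(np.argmax(acf))
    hi = center
    while hi + 1 < acf.size and acf[hi + 1] >= level:
        hi += 1
    lo = center
    while lo - 1 >= 0 and acf[lo - 1] >= level:
        lo -= 1
    return hi - lo + 1

def dwdb_style_metric(profile):
    """Ratio of the 10% to 90% autocorrelation bandwidths of a 1-D spectral profile."""
    p = profile - profile.mean()
    acf = np.correlate(p, p, mode="full")
    acf = acf / acf.max()
    return width_at_level(acf, 0.10) / width_at_level(acf, 0.90)

# Synthetic k-space profile (Gaussian envelope plus noise), for illustration only
rng = np.random.default_rng(7)
k = np.linspace(-1.0, 1.0, 512)
profile = np.exp(-(k / 0.3) ** 2) + 0.05 * rng.normal(size=k.size)
print(f"bandwidth ratio (10%/90%) = {dwdb_style_metric(profile):.1f}")
```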
46 CFR 298.11 - Vessel requirements.
Code of Federal Regulations, 2011 CFR
2011-10-01
... with accepted commercial experience and practice. (g) Metric Usage. Our preferred system of measurement and weights for Vessels and Shipyard Projects is the metric system. ...), classification societies to be ISO 9000 series registered or Quality Systems Certificate Scheme qualified IACS...
NASA Astrophysics Data System (ADS)
McCraig, Michael A.; Osinski, Gordon R.; Cloutis, Edward A.; Flemming, Roberta L.; Izawa, Matthew R. M.; Reddy, Vishnu; Fieber-Beyer, Sherry K.; Pompilio, Loredana; van der Meer, Freek; Berger, Jeffrey A.; Bramble, Michael S.; Applin, Daniel M.
2017-03-01
Spectroscopy in planetary science often provides the only information regarding the compositional and mineralogical makeup of planetary surfaces. The methods employed when curve fitting and modelling spectra can be confusing and difficult to visualize and comprehend. Researchers who are new to working with spectra may find inadequate help or documentation in the scientific literature or in the software packages available for curve fitting. This problem also extends to the parameterization of spectra and the dissemination of derived metrics. Often, when derived metrics are reported, such as band centres, the discussion of exactly how the metrics were derived, or whether there was any systematic curve fitting performed, is not included. Herein we provide both recommendations and methods for curve fitting and explanations of the terms and methods used. Techniques to curve fit spectral data of various types are demonstrated using simple-to-understand mathematics and equations written to be used in Microsoft Excel® software, free of macros, in a cut-and-paste fashion that allows one to curve fit spectra in a reasonably user-friendly manner. The procedures use empirical curve fitting, include visualizations, and ameliorate many of the unknowns one may encounter when using black-box commercial software. The provided framework is a comprehensive record of the curve fitting parameters used and the derived metrics, and is intended to be an example of a format for dissemination when curve fitting data.
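In the same spirit as the spreadsheet procedures described, the short sketch below (in Python rather than Excel) fits a single Gaussian to an absorption band to derive a band centre, depth, and width; the synthetic spectrum and flat-continuum assumption are illustrative only.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian_band(x, depth, center, width, continuum):
    """Continuum level minus a single Gaussian absorption feature."""
    return continuum - depth * np.exp(-0.5 * ((x - center) / width) ** 2)

# Synthetic reflectance spectrum with a band near 1000 nm (illustrative only)
rng = np.random.default_rng(5)
wavelength = np.linspace(800, 1200, 200)
true = gaussian_band(wavelength, depth=0.15, center=1005.0, width=40.0, continuum=0.9)
reflectance = true + rng.normal(scale=0.005, size=wavelength.size)

p0 = [0.1, 1000.0, 50.0, 0.9]                      # initial guesses
popt, _ = curve_fit(gaussian_band, wavelength, reflectance, p0=p0)
print(f"band centre = {popt[1]:.1f} nm, depth = {popt[0]:.3f}, width = {popt[2]:.1f} nm")
```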
Vassilev, Angel; Murzac, Adrian; Zlatkova, Margarita B; Anderson, Roger S
2009-03-01
Weber contrast, DeltaL/L, is a widely used contrast metric for aperiodic stimuli. Zele, Cao & Pokorny [Zele, A. J., Cao, D., & Pokorny, J. (2007). Threshold units: A correct metric for reaction time? Vision Research, 47, 608-611] found that neither Weber contrast nor its transform to detection-threshold units equates human reaction times in response to luminance increments and decrements under selective rod stimulation. Here we show that their rod reaction times are equated when plotted against the spatial luminance ratio between the stimulus and its background (L(max)/L(min), the larger and smaller of background and stimulus luminances). Similarly, reaction times to parafoveal S-cone selective increments and decrements from our previous studies [Murzac, A. (2004). A comparative study of the temporal characteristics of processing of S-cone incremental and decremental signals. PhD thesis, New Bulgarian University, Sofia, Murzac, A., & Vassilev, A. (2004). Reaction time to S-cone increments and decrements. In: 7th European conference on visual perception, Budapest, August 22-26. Perception, 33, 180 (Abstract).], are better described by the spatial luminance ratio than by Weber contrast. We assume that the type of stimulus detection by temporal (successive) luminance discrimination, by spatial (simultaneous) luminance discrimination or by both [Sperling, G., & Sondhi, M. M. (1968). Model for visual luminance discrimination and flicker detection. Journal of the Optical Society of America, 58, 1133-1145.] determines the appropriateness of one or other contrast metric for reaction time.
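The two candidate metrics are simple to state; the snippet below computes the signed Weber contrast and the spatial luminance ratio for an example increment and decrement, just to make the definitions concrete. The luminance values are arbitrary.

```python
def weber_contrast(L_stimulus, L_background):
    """Weber contrast: (L_stim - L_bg) / L_bg (signed)."""
    return (L_stimulus - L_background) / L_background

def spatial_luminance_ratio(L_stimulus, L_background):
    """L_max / L_min between stimulus and background (always >= 1)."""
    return max(L_stimulus, L_background) / min(L_stimulus, L_background)

L_bg = 10.0                       # cd/m^2, arbitrary
for L_stim in (15.0, 5.0):        # an increment and a decrement of equal absolute size
    print(f"L_stim={L_stim:4.1f}  Weber={weber_contrast(L_stim, L_bg):+.2f}  "
          f"ratio={spatial_luminance_ratio(L_stim, L_bg):.2f}")
```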
WE-G-204-09: Medical Physics 2.0 in Practice: Automated QC Assessment of Clinical Chest Images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Willis, C; Willis, C; Nishino, T
2015-06-15
Purpose: To determine whether a proposed suite of objective image quality metrics for digital chest radiographs is useful for monitoring image quality in our clinical operation. Methods: Seventeen gridless AP chest radiographs from a GE Optima portable digital radiography (DR) unit (Group 1), seventeen (routine) PA chest radiographs from a GE Discovery DR unit (Group 2), and sixteen gridless (non-routine) PA chest radiographs from the same Discovery DR unit (Group 3) were chosen for analysis. Groups were selected to represent “sub-standard” (Group 1), “standard-of-care” (Group 2), and images with a gross technical error (Group 3). Group 1 images were acquired with lower kVp (90 vs. 125) and shorter source-to-image distance (127 cm vs. 183 cm) and were expected to have lower quality than images in Group 2. Group 3 was expected to have degraded contrast versus Group 2. This evaluation was approved by the institutional Quality Improvement Assurance Board (QIAB). Images were anonymized and securely transferred to the Duke University Clinical Imaging Physics Group for analysis using software previously described [1] and validated [2]. Image quality for individual images was reported in terms of lung grey level (Lgl); lung noise (Ln); rib-lung contrast (RLc); rib sharpness (Rs); mediastinum detail (Md), noise (Mn), and alignment (Ma); subdiaphragm-lung contrast (SLc); and subdiaphragm area (Sa). Metrics were compared across groups. Results: Metrics agreed with published Quality Consistency Ranges with three exceptions: higher Lgl, lower RLc, and lower SLc. Higher bit depth (16 vs. 12) accounted for higher Lgl values in our images. Values were most internally consistent for Group 2. The most sensitive metric for distinguishing between groups was Mn, followed closely by Ln. The least sensitive metrics were Md and RLc. Conclusion: The software appears promising for objectively and automatically identifying substandard images in our operation. The results can be used to establish local quality consistency ranges and action limits per facility preferences.
Predicting the Overall Spatial Quality of Automotive Audio Systems
NASA Astrophysics Data System (ADS)
Koya, Daisuke
The spatial quality of automotive audio systems is often compromised due to their unideal listening environments. Automotive audio systems need to be developed quickly due to industry demands. A suitable perceptual model could evaluate the spatial quality of automotive audio systems with similar reliability to formal listening tests but take less time. Such a model is developed in this research project by adapting an existing model of spatial quality for automotive audio use. The requirements for the adaptation were investigated in a literature review. A perceptual model called QESTRAL was reviewed, which predicts the overall spatial quality of domestic multichannel audio systems. It was determined that automotive audio systems are likely to be impaired in terms of the spatial attributes that were not considered in developing the QESTRAL model, but metrics are available that might predict these attributes. To establish whether the QESTRAL model in its current form can accurately predict the overall spatial quality of automotive audio systems, MUSHRA listening tests using headphone auralisation with head tracking were conducted to collect results to be compared against predictions by the model. Based on guideline criteria, the model in its current form could not accurately predict the overall spatial quality of automotive audio systems. To improve prediction performance, the QESTRAL model was recalibrated and modified using existing metrics of the model, those that were proposed from the literature review, and newly developed metrics. The most important metrics for predicting the overall spatial quality of automotive audio systems included those that were interaural cross-correlation (IACC) based, relate to localisation of the frontal audio scene, and account for the perceived scene width in front of the listener. Modifying the model for automotive audio systems did not invalidate its use for domestic audio systems. The resulting model predicts the overall spatial quality of 2- and 5-channel automotive audio systems with a cross-validation performance of R² = 0.85 and root-mean-square error (RMSE) = 11.03%.
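Several of the metrics mentioned are IACC-based. As a generic sketch under simple assumptions, the code below estimates an interaural cross-correlation coefficient as the maximum normalized cross-correlation between left- and right-ear signals within a ±1 ms lag window; it illustrates the underlying quantity, not the QESTRAL implementation.

```python
import numpy as np

def iacc(left, right, fs, max_lag_ms=1.0):
    """Interaural cross-correlation: max normalized cross-correlation within +/- 1 ms lag."""
    left = left - left.mean()
    right = right - right.mean()
    max_lag = int(round(max_lag_ms * 1e-3 * fs))
    denom = np.sqrt(np.sum(left ** 2) * np.sum(right ** 2))
    full = np.correlate(left, right, mode="full")          # lags from -(N-1) to +(N-1)
    center = len(left) - 1                                 # index of zero lag
    window = full[center - max_lag: center + max_lag + 1]
    return np.max(np.abs(window)) / denom

# Illustrative binaural signals: one noise source with a small interaural delay
fs = 48000
n = int(0.1 * fs)
rng = np.random.default_rng(11)
src = rng.normal(size=n)
left = src + 0.05 * rng.normal(size=n)
right = np.roll(src, 10) + 0.05 * rng.normal(size=n)       # ~0.2 ms delay
print(f"IACC = {iacc(left, right, fs):.2f}")
```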
A priori discretization error metrics for distributed hydrologic modeling applications
NASA Astrophysics Data System (ADS)
Liu, Hongli; Tolson, Bryan A.; Craig, James R.; Shafii, Mahyar
2016-12-01
Watershed spatial discretization is an important step in developing a distributed hydrologic model. A key difficulty in the spatial discretization process is maintaining a balance between the aggregation-induced information loss and the increase in computational burden caused by the inclusion of additional computational units. Objective identification of an appropriate discretization scheme still remains a challenge, in part because of the lack of quantitative measures for assessing discretization quality, particularly prior to simulation. This study proposes a priori discretization error metrics to quantify the information loss of any candidate discretization scheme without having to run and calibrate a hydrologic model. These error metrics are applicable to multi-variable and multi-site discretization evaluation and provide directly interpretable information to the hydrologic modeler about discretization quality. The first metric, a subbasin error metric, quantifies the routing information loss from discretization, and the second, a hydrological response unit (HRU) error metric, improves upon existing a priori metrics by quantifying the information loss due to changes in land cover or soil type property aggregation. The metrics are straightforward to understand and easy to recode. Informed by the error metrics, a two-step discretization decision-making approach is proposed with the advantage of reducing extreme errors and meeting the user-specified discretization error targets. The metrics and decision-making approach are applied to the discretization of the Grand River watershed in Ontario, Canada. Results show that information loss increases as discretization gets coarser. Moreover, results help to explain the modeling difficulties associated with smaller upstream subbasins since the worst discretization errors and highest error variability appear in smaller upstream areas instead of larger downstream drainage areas. Hydrologic modeling experiments under candidate discretization schemes validate the strong correlation between the proposed discretization error metrics and hydrologic simulation responses. Discretization decision-making results show that the common and convenient approach of making uniform discretization decisions across the watershed performs worse than the proposed non-uniform discretization approach in terms of preserving spatial heterogeneity under the same computational cost.
Sigma Metrics Across the Total Testing Process.
Charuruks, Navapun
2017-03-01
Laboratory quality control has been developed over several decades to ensure patients' safety, expanding from a statistical quality control focus on the analytical phase to total laboratory processes. The sigma concept provides a convenient way to quantify the number of errors in the extra-analytical and analytical phases through the defects-per-million count and the sigma metric equation. Participation in a sigma verification program can be a convenient way to monitor analytical performance for continuous quality improvement. Improvement of sigma-scale performance has been shown from our data. New tools and techniques for integration are needed. Copyright © 2016 Elsevier Inc. All rights reserved.
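The sigma metric equation referred to here is the standard one from laboratory medicine, sigma = (TEa - |bias|) / CV with all terms in percent; a minimal sketch:

def sigma_metric(tea_pct, bias_pct, cv_pct):
    # Sigma = (allowable total error - |bias|) / imprecision, all in percent.
    return (tea_pct - abs(bias_pct)) / cv_pct

# Example: 10% allowable error, 2% bias, 1.5% CV -> about 5.3 sigma
# (6 sigma is conventionally regarded as world-class performance).
print(sigma_metric(10.0, 2.0, 1.5))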
More quality measures versus measuring what matters: a call for balance and parsimony
Nelson, Eugene C; Pryor, David B; James, Brent; Swensen, Stephen J; Kaplan, Gary S; Weissberg, Jed I; Bisognano, Maureen; Yates, Gary R; Hunt, Gordon C
2012-01-01
External groups requiring measures now include public and private payers, regulators, accreditors and others that certify performance levels for consumers, patients and payers. Although benefits have accrued from the growth in quality measurement, the recent explosion in the number of measures threatens to shift resources from improving quality to cover a plethora of quality-performance metrics that may have a limited impact on the things that patients and payers want and need (ie, better outcomes, better care, and lower per capita costs). Here we propose a policy that quality measurement should be: balanced to meet the need of end users to judge quality and cost performance and the need of providers to continuously improve the quality, outcomes and costs of their services; and parsimonious to measure quality, outcomes and costs with appropriate metrics that are selected based on end-user needs. PMID:22893696
More quality measures versus measuring what matters: a call for balance and parsimony.
Meyer, Gregg S; Nelson, Eugene C; Pryor, David B; James, Brent; Swensen, Stephen J; Kaplan, Gary S; Weissberg, Jed I; Bisognano, Maureen; Yates, Gary R; Hunt, Gordon C
2012-11-01
External groups requiring measures now include public and private payers, regulators, accreditors and others that certify performance levels for consumers, patients and payers. Although benefits have accrued from the growth in quality measurement, the recent explosion in the number of measures threatens to shift resources from improving quality to cover a plethora of quality-performance metrics that may have a limited impact on the things that patients and payers want and need (ie, better outcomes, better care, and lower per capita costs). Here we propose a policy that quality measurement should be: balanced to meet the need of end users to judge quality and cost performance and the need of providers to continuously improve the quality, outcomes and costs of their services; and parsimonious to measure quality, outcomes and costs with appropriate metrics that are selected based on end-user needs.
Automating Software Design Metrics.
1984-02-01
INTRODUCTION ... HISTORICAL PERSPECTIVE: High quality software is of interest to both the software engineering community and its users. ... contributions of many other software engineering efforts, most notably [MCC 77] and [Boe 83b], which have defined and refined a framework for quantifying ... AUTOMATION OF DESIGN METRICS: Software metrics can be useful within the context of an integrated software engineering environment. The purpose of this ...
Evaluation of image quality metrics for the prediction of subjective best focus.
Kilintari, Marina; Pallikaris, Aristophanis; Tsiklis, Nikolaos; Ginis, Harilaos S
2010-03-01
Seven existing and three new image quality metrics were evaluated in terms of their effectiveness in predicting subjective cycloplegic refraction. Monochromatic wavefront aberrations (WA) were measured in 70 eyes using a Shack-Hartmann-based device (Complete Ophthalmic Analysis System; Wavefront Sciences). Subjective cycloplegic spherocylindrical correction was obtained using a standard manifest refraction procedure. The dioptric amount required to optimize each metric was calculated and compared with the subjective refraction result. Metrics included monochromatic and polychromatic variants, as well as variants taking into consideration the Stiles-Crawford effect (SCE). WA measurements were performed using infrared light and converted to visible wavelengths before all calculations. The mean difference between subjective cycloplegic and WA-derived spherical refraction ranged from 0.17 to 0.36 diopters (D), while paraxial curvature resulted in a difference of 0.68 D. Monochromatic metrics exhibited smaller mean differences between subjective cycloplegic and objective refraction. Consideration of the SCE reduced the standard deviation (SD) of the difference between subjective and objective refraction. All metrics exhibited similar performance in terms of accuracy and precision. We hypothesize that errors pertaining to the conversion between infrared and visible wavelengths, rather than the calculation method, may be the limiting factor in determining objective best focus from near-infrared WA measurements.
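The abstract does not specify the optimization procedure; a minimal sketch of the general approach, assuming a simple PSF-compactness metric and a pure-defocus search (grid size, pupil diameter and wavelength are illustrative, not the authors' choices):

import numpy as np

N, pupil_d = 256, 6e-3                       # grid size, 6 mm pupil
x = np.linspace(-pupil_d, pupil_d, N)
X, Y = np.meshgrid(x, x)
R2 = X**2 + Y**2
pupil = (R2 <= (pupil_d / 2) ** 2).astype(float)

def metric(defocus_D, wavelength=555e-9):
    w = 0.5 * defocus_D * R2                 # defocus sag W = D * r^2 / 2 (meters)
    field = pupil * np.exp(2j * np.pi * w / wavelength)
    psf = np.abs(np.fft.fft2(field)) ** 2
    return psf.max() / psf.sum()             # peakier PSF -> better focus

sweep = np.arange(-1.0, 1.01, 0.05)          # diopter search range
best = sweep[np.argmax([metric(d) for d in sweep])]
print(best)                                  # the metric-optimizing defocus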
Robustness surfaces of complex networks
NASA Astrophysics Data System (ADS)
Manzano, Marc; Sahneh, Faryad; Scoglio, Caterina; Calle, Eusebi; Marzo, Jose Luis
2014-09-01
Although the robustness of complex networks has been extensively studied in the last decade, a unifying framework able to embrace all the proposed metrics is still lacking. In the literature there are two open issues related to this gap: (a) how to dimension several metrics to allow their summation and (b) how to weight each of the metrics. In this work we propose a solution for the two aforementioned problems by defining the R*-value and introducing the concept of the robustness surface (Ω). The rationale of our proposal is to make use of Principal Component Analysis (PCA). We first adjust the initial robustness of a network to 1. Second, we find the most informative robustness metric under a specific failure scenario. Then, we repeat the process for several percentages of failures and different realizations of the failure process. Lastly, we join these values to form the robustness surface, which allows the visual assessment of network robustness variability. Results show that a network presents different robustness surfaces (i.e., dissimilar shapes) depending on the failure scenario and the set of metrics. In addition, the robustness surface allows the robustness of different networks to be compared.
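The R*-value and robustness surface are the authors' constructs; the PCA step of finding the most informative metric for one failure scenario can be sketched as follows, with a hypothetical matrix of normalized metric values across failure realizations:

import numpy as np

# Rows: realizations of one failure scenario; columns: normalized robustness metrics.
M = np.array([[0.91, 0.80, 0.75],
              [0.85, 0.70, 0.73],
              [0.88, 0.78, 0.70],
              [0.80, 0.66, 0.72]])

Z = M - M.mean(axis=0)                        # center each metric
_, _, Vt = np.linalg.svd(Z, full_matrices=False)
loadings = np.abs(Vt[0])                      # first principal component
print("most informative metric:", int(np.argmax(loadings)))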
Dayan, Michael; Munoz, Monica; Jentschke, Sebastian; Chadwick, Martin J; Cooper, Janine M; Riney, Kate; Vargha-Khadem, Faraneh; Clark, Chris A
2015-01-01
The optic radiation (OR) is a component of the visual system whose myelination matures very early in life. Diffusion tensor imaging (DTI), with its unique ability to reconstruct the OR in vivo, was used to study structural maturation through analysis of DTI metrics in a cohort of 90 children aged 5-18 years. As the OR is at risk of damage during epilepsy surgery, we measured its position relative to characteristic anatomical landmarks. Anatomical distances, DTI metrics and volume of the OR were investigated for age, gender and hemisphere effects. We observed changes in DTI metrics with age comparable to known trajectories in other white matter tracts. Left lateralization of DTI metrics was observed, with a gender effect on the degree of lateralization. Sexual dimorphism of DTI metrics in the right hemisphere was also found. With respect to OR dimensions, volume was shown to be right-lateralized, and sexual dimorphism was demonstrated for the extent of the left OR. The anatomical results presented for the OR have potentially important applications for neurosurgical planning.
Robustness surfaces of complex networks.
Manzano, Marc; Sahneh, Faryad; Scoglio, Caterina; Calle, Eusebi; Marzo, Jose Luis
2014-09-02
Although the robustness of complex networks has been extensively studied in the last decade, a unifying framework able to embrace all the proposed metrics is still lacking. In the literature there are two open issues related to this gap: (a) how to dimension several metrics to allow their summation and (b) how to weight each of the metrics. In this work we propose a solution for the two aforementioned problems by defining the R*-value and introducing the concept of the robustness surface (Ω). The rationale of our proposal is to make use of Principal Component Analysis (PCA). We first adjust the initial robustness of a network to 1. Second, we find the most informative robustness metric under a specific failure scenario. Then, we repeat the process for several percentages of failures and different realizations of the failure process. Lastly, we join these values to form the robustness surface, which allows the visual assessment of network robustness variability. Results show that a network presents different robustness surfaces (i.e., dissimilar shapes) depending on the failure scenario and the set of metrics. In addition, the robustness surface allows the robustness of different networks to be compared.
Jacob, Julie; Paques, Michel; Krivosic, Valérie; Dupas, Bénédicte; Erginay, Ali; Tadayoni, Ramin; Gaudric, Alain
2017-01-01
To analyze cone mosaic metrics on adaptive optics (AO) images as a function of retinal eccentricity in two different age groups using a commercial flood illumination AO device. Fifty-three eyes of 28 healthy subjects divided into two age groups were imaged using an AO flood-illumination camera (rtx1; Imagine Eyes, Orsay, France). A 16° × 4° field was obtained horizontally. Cone-packing metrics were determined in five neighboring 50 µm × 50 µm regions. Both retinal (cones/mm² and µm) and visual (cones/degree² and arcmin) units were computed. Results for cone mosaic metrics at 2°, 2.5°, 3°, 4°, and 5° eccentricity were compatible with previous AO scanning laser ophthalmoscopy and histology data. No significant difference was observed between the two age groups. The rtx1 camera enabled reproducible measurements of cone-packing metrics across the extrafoveal retina. These findings may contribute to the development of normative data and act as a reference for future research. [Ophthalmic Surg Lasers Imaging Retina. 2017;48:45-50.]. Copyright 2017, SLACK Incorporated.
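For readers converting between the two unit systems, a minimal sketch assuming a nominal emmetropic retinal magnification of about 0.291 mm per degree (the true value varies with axial length, so this constant is an assumption):

MM_PER_DEG = 0.291   # assumed emmetropic retinal magnification; varies per eye

def density_mm2_to_deg2(d_mm2):
    # Cone density: cones/mm^2 -> cones/degree^2.
    return d_mm2 * MM_PER_DEG ** 2

def spacing_um_to_arcmin(s_um):
    # Cone spacing: micrometers on the retina -> arcminutes of visual angle.
    return (s_um / 1000.0) / MM_PER_DEG * 60.0

print(density_mm2_to_deg2(15000.0), spacing_um_to_arcmin(9.0))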
Performance comparison of AV1, HEVC, and JVET video codecs on 360 (spherical) video
NASA Astrophysics Data System (ADS)
Topiwala, Pankaj; Dai, Wei; Krishnan, Madhu; Abbas, Adeel; Doshi, Sandeep; Newman, David
2017-09-01
This paper compares the coding-efficiency performance of three software codecs on 360 (spherical) videos: (a) the AV1 video codec from the Alliance for Open Media (AOM); (b) the HEVC reference software HM; and (c) the JVET JEM reference software. Note that 360 video is especially challenging content, in that one codes at full resolution globally but typically views locally (in a viewport), which magnifies errors. The codecs are tested in two different projection formats, ERP and RSP, to check consistency. Performance is tabulated for 1-pass encoding on two fronts: (1) objective performance based on end-to-end (E2E) metrics such as SPSNR-NN and WS-PSNR, currently developed in the JVET committee; and (2) informal subjective assessment of static viewports. Constant-quality encoding is performed with all three codecs for an unbiased comparison of the core coding tools. Our general conclusion is that under constant-quality coding, AV1 underperforms HEVC, which underperforms JVET. We also test with rate control, where AV1 currently underperforms the open source x265 HEVC codec. Objective and visual evidence is provided.
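WS-PSNR for the ERP format weights each pixel row by the cosine of its latitude, so that over-represented polar regions count less; a simplified single-plane (grayscale) sketch following the JVET definition:

import numpy as np

def ws_psnr_erp(ref, dist, peak=255.0):
    # Weight each pixel row by cos(latitude); rows span +pi/2 .. -pi/2 in ERP.
    h, w = ref.shape
    j = np.arange(h)
    weights = np.cos((j - h / 2 + 0.5) * np.pi / h)[:, None] * np.ones((1, w))
    wmse = np.sum(weights * (ref.astype(float) - dist.astype(float)) ** 2) / weights.sum()
    return 10 * np.log10(peak ** 2 / wmse)

# Usage with two same-sized grayscale frames:
# print(ws_psnr_erp(reference_frame, decoded_frame))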
Beam uniformity of flat top lasers
NASA Astrophysics Data System (ADS)
Chang, Chao; Cramer, Larry; Danielson, Don; Norby, James
2015-03-01
Many beams output by standard commercial lasers are multi-mode, with each mode having a different shape and width. They show an overall non-homogeneous energy distribution across the spot size, and there may be satellite structures, halos and other deviations from beam uniformity. However, many scientific, industrial and medical applications require a flat-top spatial energy distribution, high uniformity in the plateau region, and the complete absence of hot spots. Reliable standard methods for the evaluation of beam quality are therefore of great importance: they are required for correct characterization of the laser for its intended application and for tight quality control in laser manufacturing. The International Organization for Standardization (ISO) has published standard procedures and definitions for this purpose, but these have not been widely adopted by commercial laser manufacturers, in part because they can be unreliable: a single unrepresentative pixel value can seriously distort the result. We propose a metric of beam uniformity, a method of beam profile visualization, procedures to automatically detect hot spots and beam structures, and application examples from our high energy laser production.
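The paper's metric is its own; as a stand-in, a simplified plateau-uniformity measure and hot-spot detector can be sketched (the 0.8 threshold and the k factor are illustrative choices, not the authors' values):

import numpy as np

def plateau_uniformity(image, threshold=0.8):
    # Simplified stand-in: RMS deviation of the plateau (pixels above
    # threshold * peak) normalized by the plateau mean; 0 = perfectly flat.
    plateau = image[image >= threshold * image.max()]
    return plateau.std() / plateau.mean()

def hot_spot_mask(image, k=3.0, threshold=0.8):
    # Flag pixels more than k standard deviations above the plateau mean.
    plateau = image[image >= threshold * image.max()]
    return image > plateau.mean() + k * plateau.std()

Note that both functions aggregate over many pixels, which is exactly the property that protects against the single-pixel distortion the abstract complains about.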
Mayhew, Stephen D; Porcaro, Camillo; Tecchio, Franca; Bagshaw, Andrew P
2017-03-01
A bilateral visuo-parietal-motor network is responsible for fine control of hand movements. However, it remains unclear which sub-regions are devoted to the maintenance of contraction stability, and how these processes fluctuate with the trial-by-trial quality of task execution and with the presence or absence of visual feedback. We addressed this by integrating behavioural and fMRI measurements during right-hand isometric compression of a compliant rubber bulb, at 10% and 30% of maximum voluntary contraction, both with and without visual feedback of the applied force. We quantified single-trial behavioural performance during 1) the whole task period and 2) stable contraction maintenance, and regressed these metrics against the fMRI data to identify the brain activity most relevant to trial-by-trial fluctuations in performance during specific task phases. fMRI-behaviour correlations in a bilateral network of visual, premotor, primary motor, parietal and inferior frontal cortical regions emerged during performance of the entire feedback task, but only in premotor cortex, parietal cortex and thalamus during the stable contraction period. The trials with the best task performance showed increased bilaterality and amplitude of fMRI responses. With feedback, stronger BOLD-behaviour coupling was found during 10% than during 30% contractions. Only a small subset of regions in this network was weakly correlated with behaviour without feedback, despite a wider network being activated during this task than in the presence of feedback. These findings reflect a more focused network strongly coupled to behavioural fluctuations when visual feedback is provided, whereas without it the task recruited widespread brain activity almost uncoupled from behavioural performance. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
Simulation of devices mobility to estimate wireless channel quality metrics in 5G networks
NASA Astrophysics Data System (ADS)
Orlov, Yu.; Fedorov, S.; Samuylov, A.; Gaidamaka, Yu.; Molchanov, D.
2017-07-01
The problem of channel quality estimation for devices in a wireless 5G network is formulated. As the performance metric of interest we choose the signal-to-interference-plus-noise ratio, which depends essentially on the distance between the communicating devices. A model with a plurality of moving devices in a bounded three-dimensional space and a simulation algorithm to determine the distances between the devices for a given motion model are devised.
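A minimal sketch of the kind of simulation described, assuming power-law path loss with exponent alpha and a single serving device, the rest acting as interferers (all parameters and the motion model are hypothetical, not the paper's):

import numpy as np

rng = np.random.default_rng(0)

# Receiver at the origin; devices moving in a bounded 100 m box.
n_dev, steps, alpha, noise = 20, 100, 3.5, 1e-9
pos = rng.uniform(0, 100, size=(n_dev, 3))     # meters
vel = rng.normal(0, 1, size=(n_dev, 3))        # m/s

for _ in range(steps):
    pos = np.clip(pos + vel, 0, 100)           # bounded 3-D motion
    d = np.linalg.norm(pos, axis=1)            # device-receiver distances
    p = d ** -alpha                            # received powers (path loss only)
    sinr = p[0] / (noise + p[1:].sum())        # device 0 is the serving link

print(10 * np.log10(sinr))                     # final-step SINR in dB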
Natural Language Processing As an Alternative to Manual Reporting of Colonoscopy Quality Metrics
RAJU, GOTTUMUKKALA S.; LUM, PHILLIP J.; SLACK, REBECCA; THIRUMURTHI, SELVI; LYNCH, PATRICK M.; MILLER, ETHAN; WESTON, BRIAN R.; DAVILA, MARTA L.; BHUTANI, MANOOP S.; SHAFI, MEHNAZ A.; BRESALIER, ROBERT S.; DEKOVICH, ALEXANDER A.; LEE, JEFFREY H.; GUHA, SUSHOVAN; PANDE, MALA; BLECHACZ, BORIS; RASHID, ASIF; ROUTBORT, MARK; SHUTTLESWORTH, GLADIS; MISHRA, LOPA; STROEHLEIN, JOHN R.; ROSS, WILLIAM A.
2015-01-01
BACKGROUND & AIMS The adenoma detection rate (ADR) is a quality metric tied to interval colon cancer occurrence. However, manual extraction of data to calculate and track the ADR in clinical practice is labor-intensive. To overcome this difficulty, we developed a natural language processing (NLP) method to identify patients who underwent their first screening colonoscopy and to identify adenomas and sessile serrated adenomas (SSAs). We compared the NLP-generated results with those of manual data extraction to test the accuracy of NLP, and report on colonoscopy quality metrics using NLP. METHODS Identification of screening colonoscopies using NLP was compared with the manual method for 12,748 patients who underwent colonoscopies from July 2010 to February 2013. Identification of adenomas and SSAs using NLP was likewise compared with the manual method on 2259 matched patient records. Colonoscopy ADRs using these methods were generated for each physician. RESULTS NLP correctly identified 91.3% of the screening examinations, whereas the manual method identified 87.8% of them. Both the manual method and NLP correctly identified examinations of patients with adenomas and SSAs in the matched records almost perfectly. NLP and the manual method produced comparable ADR values for each endoscopist as well as for the group as a whole. CONCLUSIONS NLP can correctly identify screening colonoscopies, accurately identify adenomas and SSAs in a pathology database, and provide real-time quality metrics for colonoscopy. PMID:25910665
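The ADR itself is simply the fraction of screening colonoscopies in which at least one adenoma is found; a minimal per-physician sketch over hypothetical extracted records:

from collections import defaultdict

# Hypothetical records: (physician, had_adenoma) for first screening colonoscopies.
records = [("A", True), ("A", False), ("A", True), ("B", False), ("B", True)]

counts = defaultdict(lambda: [0, 0])   # physician -> [exams with adenoma, total exams]
for doc, adenoma in records:
    counts[doc][0] += int(adenoma)
    counts[doc][1] += 1

for doc, (hits, total) in sorted(counts.items()):
    print(doc, "ADR = %.1f%%" % (100.0 * hits / total))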
Moss and vascular plant indices in Ohio wetlands have similar environmental predictors
Stapanian, Martin A.; Schumacher, William; Gara, Brian; Adams, Jean V.; Viau, Nick
2016-01-01
Mosses and vascular plants have been shown to be reliable indicators of wetland habitat delineation and environmental quality. Knowledge of the best ecological predictors of the quality of wetland moss and vascular plant communities may determine if similar management practices would simultaneously enhance both populations. We used Akaike's Information Criterion to identify models predicting a moss quality assessment index (MQAI) and a vascular plant index of biological integrity based on floristic quality (VIBI-FQ) from 27 emergent and 13 forested wetlands in Ohio, USA. The set of predictors included the six metrics from a wetlands disturbance index (ORAM) and two landscape development intensity indices (LDIs). The best single predictor of MQAI and one of the predictors of VIBI-FQ was an ORAM metric that assesses habitat alteration and disturbance within the wetland, such as mowing, grazing, and agricultural practices. However, the best single predictor of VIBI-FQ was an ORAM metric that assessed wetland vascular plant communities, interspersion, and microtopography. LDIs better predicted MQAI than VIBI-FQ, suggesting that mosses may either respond more rapidly to, or recover more slowly from, anthropogenic disturbance in the surrounding landscape than vascular plants. These results supported previous predictive studies on amphibian indices and metrics and a separate vegetation index, indicating that similar wetland management practices may result in qualitatively the same ecological response for three vastly different wetland biological communities (amphibians, vascular plants, and mosses).
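Model selection with Akaike's Information Criterion, as used here, ranks candidate predictor sets by penalized fit; for least-squares models AIC reduces to n ln(RSS/n) + 2k, which is enough to compare them. A minimal sketch with synthetic data (the predictors and data are hypothetical, not the study's):

import numpy as np

def aic_least_squares(y, X):
    # AIC for an OLS model: n * ln(RSS / n) + 2k (k = number of parameters).
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    n, k = X.shape
    return n * np.log(rss / n) + 2 * k

rng = np.random.default_rng(1)
x1, x2 = rng.normal(size=40), rng.normal(size=40)
y = 2.0 * x1 + rng.normal(scale=0.5, size=40)      # x2 is an irrelevant predictor
X1 = np.column_stack([np.ones(40), x1])
X2 = np.column_stack([np.ones(40), x1, x2])
print(aic_least_squares(y, X1), aic_least_squares(y, X2))   # lower AIC wins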
An edge-directed interpolation method for fetal spine MR images.
Yu, Shaode; Zhang, Rui; Wu, Shibin; Hu, Jiani; Xie, Yaoqin
2013-10-10
Fetal spinal magnetic resonance imaging (MRI) is a prenatal routine for proper assessment of fetal development, especially when suspected spinal malformations occur and ultrasound fails to provide details. Limited by hardware, fetal spine MR images suffer from low resolution. High-resolution MR images can directly enhance readability and improve diagnostic accuracy. Image interpolation for higher resolution is required in clinical situations, yet many methods fail to preserve edge structures. Edges carry important structural information about objects in the visual scene, which doctors rely on to detect suspicious findings, classify malformations and make a correct diagnosis. Effective interpolation with well-preserved edge structures is still challenging. In this paper, we propose an edge-directed interpolation (EDI) method and apply it to a group of fetal spine MR images to evaluate its feasibility and performance. This method takes edge information from a Canny edge detector to guide subsequent pixel modification. First, low-resolution (LR) images of the fetal spine are interpolated into high-resolution (HR) images at the targeted factor by a bilinear method. Then edge information from the LR and HR images is put into a twofold strategy to sharpen or soften edge structures. Finally an HR image with well-preserved edge structures is generated. The HR images obtained from the proposed method are validated and compared with those from four other EDI methods. Performance is evaluated with six metrics, and subjective analysis of visual quality is based on regions of interest (ROI). All five EDI methods are able to generate HR images with enriched details. In the quantitative analysis of the six metrics, the proposed method outperforms the other four in signal-to-noise ratio (SNR), peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), feature similarity index (FSIM) and mutual information (MI), with seconds-level time consumption (TC). Visual analysis of the ROIs shows that the proposed method maintains better consistency of edge structures with the original images. The proposed method classifies edge orientations into four categories and preserves structures well. It generates convincing HR images with fine details and is suitable for real-time situations. The iterative curvature-based interpolation (ICBI) method may produce crisper edges, while the other three methods are sensitive to noise and artifacts.
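Of the six evaluation metrics, PSNR and mutual information are straightforward to state; a minimal sketch of both for grayscale arrays (the 64-bin histogram is an arbitrary choice, not the paper's setting):

import numpy as np

def psnr(ref, img, peak=255.0):
    # Peak signal-to-noise ratio in dB between a reference and a test image.
    mse = np.mean((ref.astype(float) - img.astype(float)) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

def mutual_information(ref, img, bins=64):
    # MI from the joint intensity histogram, in bits.
    joint, _, _ = np.histogram2d(ref.ravel(), img.ravel(), bins=bins)
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return np.sum(p[nz] * np.log2(p[nz] / (px @ py)[nz]))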
Societal Value of Surgery for Facial Reanimation
Su, Peiyi; Ishii, Lisa E.; Joseph, Andrew; Nellis, Jason; Dey, Jacob; Bater, Kristin; Byrne, Patrick J.; Boahene, Kofi D. O.; Ishii, Masaru
2017-01-01
IMPORTANCE Patients with facial paralysis are perceived negatively by society in a number of domains. Society’s perception of the health utility of varying degrees of facial paralysis and the value society places on reconstructive surgery for facial reanimation need to be quantified. OBJECTIVE To measure health state utility of varying degrees of facial paralysis, willingness to pay (WTP) for a repair, and the subsequent value of facial reanimation surgery as perceived by society. DESIGN, SETTING, AND PARTICIPANTS This prospective observational study conducted in an academic tertiary referral center evaluated a group of 348 casual observers who viewed images of faces with unilateral facial paralysis of 3 severity levels (low, medium, and high) categorized by House-Brackmann grade. Structural equation modeling was performed to understand associations among health utility metrics, WTP, and facial perception domains. Data were collected from July 16 to September 26, 2015. MAIN OUTCOMES AND MEASURES Observer-rated (1) quality of life (QOL) using established health utility metrics (standard gamble, time trade-off, and a visual analog scale) and (2) their WTP for surgical repair. RESULTS Among the 348 observers (248 women [71.3%]; 100 men [28.7%]; mean [SD] age, 29.3 [11.6] years), mixed-effects linear regression showed that WTP increased nonlinearly with increasing severity of paralysis. Participants were willing to pay $3,487 (95% CI, $2,362–$4,961) to repair low-grade paralysis, $8,571 (95% CI, $6,401–$11,234) for medium-grade paralysis, and $20,431 (95% CI, $16,273–$25,317) for high-grade paralysis. The dominant factor affecting the participants’ WTP was perceived QOL. Modeling showed that perceived QOL decreased with paralysis severity (regression coefficient, −0.004; 95% CI, −0.005 to −0.004; P < .001) and increased with attractiveness (regression coefficient, 0.002; 95% CI, 0.002 to 0.003; P < .001). Mean (SD) health utility scores calculated by the standard gamble metric for low- and high-grade paralysis were 0.98 (0.09) and 0.77 (0.25), respectively. Time trade-off and visual analog scale measures were highly correlated. We calculated mean (SD) WTP per quality-adjusted life-year, which ranged from $10,167 ($14,565) to $17,008 ($38,288) for low- to high-grade paralysis, respectively. CONCLUSIONS AND RELEVANCE Society perceives the repair of facial paralysis to be a high-value intervention. Societal WTP increases and perceived health state utility decreases with increasing House-Brackmann grade. This study demonstrates the usefulness of WTP as an objective measure to inform dimensions of disease severity and signal the value society places on proper facial function. LEVEL OF EVIDENCE NA. PMID:27892977
Robust and transferable quantification of NMR spectral quality using IROC analysis
NASA Astrophysics Data System (ADS)
Zambrello, Matthew A.; Maciejewski, Mark W.; Schuyler, Adam D.; Weatherby, Gerard; Hoch, Jeffrey C.
2017-12-01
Non-Fourier methods are increasingly utilized in NMR spectroscopy because of their ability to handle nonuniformly-sampled data. However, non-Fourier methods present unique challenges due to their nonlinearity, which can produce nonrandom noise and render conventional metrics for spectral quality such as signal-to-noise ratio unreliable. The lack of robust and transferable metrics (i.e. applicable to methods exhibiting different nonlinearities) has hampered comparison of non-Fourier methods and nonuniform sampling schemes, preventing the identification of best practices. We describe a novel method, in situ receiver operating characteristic analysis (IROC), for characterizing spectral quality based on the Receiver Operating Characteristic curve. IROC utilizes synthetic signals added to empirical data as "ground truth", and provides several robust scalar-valued metrics for spectral quality. This approach avoids problems posed by nonlinear spectral estimates, and provides a versatile quantitative means of characterizing many aspects of spectral quality. We demonstrate applications to parameter optimization in Fourier and non-Fourier spectral estimation, critical comparison of different methods for spectrum analysis, and optimization of nonuniform sampling schemes. The approach will accelerate the discovery of optimal approaches to nonuniform sampling experiment design and non-Fourier spectrum analysis for multidimensional NMR.
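IROC's specifics are the authors' own, but its core move, scoring how reliably injected synthetic signals outrank empty positions in the spectrum, is the ROC area; a minimal rank-based (Mann-Whitney) sketch with hypothetical peak scores:

import numpy as np

def roc_area(scores_true, scores_false):
    # Probability that an injected (real) signal outranks a noise location:
    # the Mann-Whitney U formulation of the area under the ROC curve.
    t = np.asarray(scores_true, dtype=float)[:, None]
    f = np.asarray(scores_false, dtype=float)[None, :]
    return ((t > f).sum() + 0.5 * (t == f).sum()) / (t.size * f.size)

# Hypothetical peak intensities at injected-signal vs. empty positions.
print(roc_area([9.1, 7.4, 8.2, 6.8], [5.0, 6.1, 4.2, 5.5, 3.9]))

An area of 1.0 means every injected signal scores above every empty position; 0.5 means the spectral estimate carries no detection information, independent of any nonlinearity in the estimator.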
NASA Astrophysics Data System (ADS)
Anderton, Rupert N.; Cameron, Colin D.; Burnett, James G.; Güell, Jeff J.; Sanders-Reed, John N.
2014-06-01
This paper discusses the design of an improved passive millimeter wave imaging system intended to be used for base security in degraded visual environments. The discussion starts with the selection of the optimum frequency band. The trade-offs between requirements on detection, recognition and identification ranges and optical aperture are discussed with reference to the Johnson Criteria. It is shown that these requirements also affect image sampling, receiver numbers and noise temperature, frame rate, field of view, focusing requirements and mechanisms, and tolerance budgets. The effect of image quality degradation is evaluated and a single testable metric is derived that best describes the effects of degradation on meeting the requirements. The discussion is extended to tolerance budgeting constraints if significant degradation is to be avoided, including surface roughness, receiver position errors and scan conversion errors. Although the reflective twist-polarization imager design proposed is potentially relatively low cost and high performance, there is a significant problem with obscuration of the beam by the receiver array. Methods of modeling this accurately and thus designing for best performance are given.
Proceedings of the Augmented VIsual Display (AVID) Research Workshop
NASA Technical Reports Server (NTRS)
Kaiser, Mary K. (Editor); Sweet, Barbara T. (Editor)
1993-01-01
The papers, abstracts, and presentations collected here were delivered at a three-day workshop focused on sensor modeling and simulation, and on image enhancement, processing, and fusion. The technical sessions emphasized how sensor technology can be used to create visual imagery adequate for aircraft control and operations. Participants from industry, government, and academic laboratories contributed to panels on Sensor Systems, Sensor Modeling, Sensor Fusion, Image Processing (Computer and Human Vision), and Image Evaluation and Metrics.
Potential future impacts on visual air quality for Class I areas
Gurule Nochumson; Mona J. Wecksung
1979-01-01
Visual air quality is most sensitive to degradation by air pollution in areas with good visibility. The protection of visual air quality in mandatory Class I Federal areas has been declared a national goal by Congress. Impacts on visual air quality are calculated for 154 Class I areas where visual air quality is considered an important value. These impacts are...
Development and application of a novel metric to assess effectiveness of biomedical data
Bloom, Gregory C; Eschrich, Steven; Hang, Gang; Schabath, Matthew B; Bhansali, Neera; Hoerter, Andrew M; Morgan, Scott; Fenstermacher, David A
2013-01-01
OBJECTIVE To design a metric to assess the comparative effectiveness of biomedical data elements within a study that incorporates their statistical relatedness to a given outcome variable as well as a measurement of the quality of their underlying data. MATERIALS AND METHODS The cohort consisted of 874 patients with adenocarcinoma of the lung, each with 47 clinical data elements. The p value for each element was calculated using the Cox proportional hazards univariable regression model with overall survival as the endpoint. An attribute score (A-score) was calculated by quantification of an element's four quality attributes: Completeness, Comprehensiveness, Consistency and Overall-cost. An effectiveness score (E-score) was obtained by calculating the conditional probabilities of the p value and A-score within the given data set, with their product equaling the E-score. RESULTS The E-score metric provided information about the utility of an element beyond an outcome-related p value ranking. E-scores for the elements age-at-diagnosis, gender and tobacco-use showed utility above what their respective p values alone would indicate, owing to their relative ease of acquisition, that is, higher A-scores. Conversely, the elements surgery-site, histologic-type and pathological-TNM stage were down-ranked relative to their p values based on lower A-scores caused by significantly higher acquisition costs. CONCLUSIONS A novel metric termed the E-score was developed which incorporates standard statistics with data quality metrics and was tested on elements from a large lung cohort. Results show that an element's underlying data quality is an important consideration in addition to p value correlation with outcome when determining the element's clinical or research utility in a study. PMID:23975264
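The abstract leaves the conditional probabilities unspecified; one possible reading, sketched with empirical tail probabilities over hypothetical per-element p values and A-scores (this interpretation is an assumption, not the authors' published formula):

import numpy as np

def e_score(p, a, p_all, a_all):
    # One reading of the abstract: how favorable this element's p value and
    # A-score are relative to the study's other elements, combined as a product.
    p_all, a_all = np.asarray(p_all), np.asarray(a_all)
    pr_p = (p_all >= p).mean()    # smaller p -> more of the mass lies above it
    pr_a = (a_all <= a).mean()    # larger A-score -> more of the mass lies below it
    return pr_p * pr_a

# Hypothetical p values and A-scores for a study's data elements.
p_vals   = [0.001, 0.04, 0.20, 0.65, 0.003]
a_scores = [0.90,  0.75, 0.60, 0.95, 0.40 ]
print(e_score(0.001, 0.90, p_vals, a_scores))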
Louisiana waterthrush and benthic macroinvertebrate response to shale gas development
Wood, Petra; Frantz, Mack W.; Becker, Douglas A.
2016-01-01
Because shale gas development is occurring over large landscapes and consequently is affecting many headwater streams, an understanding of its effects on headwater-stream faunal communities is needed. We examined effects of shale gas development (well pads and associated infrastructure) on Louisiana waterthrush Parkesia motacilla and benthic macroinvertebrate communities in 12 West Virginia headwater streams in 2011. Streams were classed as impacted (n = 6) or unimpacted (n = 6) by shale gas development. We quantified waterthrush demography (nest success, clutch size, number of fledglings, territory density), a waterthrush Habitat Suitability Index, a Rapid Bioassessment Protocol habitat index, and benthic macroinvertebrate metrics including a genus-level stream-quality index for each stream. We compared each benthic metric between impacted and unimpacted streams with a Student's t-test that incorporated adjustments for normalizing data. Impacted streams had lower genus-level stream-quality index scores; lower overall and Ephemeroptera, Plecoptera, and Trichoptera richness; fewer intolerant taxa, more tolerant taxa, and greater density of 0–3-mm individuals (P ≤ 0.10). We then used Pearson correlation to relate waterthrush metrics to benthic metrics across the 12 streams. Territory density (no. of territories/km of stream) was greater on streams with higher genus-level stream-quality index scores; greater density of all taxa and Ephemeroptera, Plecoptera, and Trichoptera taxa; and greater biomass. Clutch size was greater on streams with higher genus-level stream-quality index scores. Nest survival analyses (n = 43 nests) completed with Program MARK suggested minimal influence of benthic metrics compared with nest stage and Habitat Suitability Index score. Although our study spanned only one season, our results suggest that shale gas development affected waterthrush and benthic communities in the headwater streams we studied. Thus, these ecological effects of shale gas development warrant closer examination.