CUQI: cardiac ultrasound video quality index
Razaak, Manzoor; Martini, Maria G.
2016-01-01
Abstract. Medical images and videos are increasingly part of modern telecommunication applications, including telemedicine applications, favored by advancements in video compression and communication technologies. Medical video quality evaluation is essential for modern applications, since compression and transmission processes often compromise video quality. Several state-of-the-art video quality metrics assess the perceptual quality of the video. For a medical video, assessing quality in terms of “diagnostic” value rather than “perceptual” quality is more important. We present a diagnostic-quality–oriented video quality metric for quality evaluation of cardiac ultrasound videos. Cardiac ultrasound videos are characterized by rapid, repetitive cardiac motions and distinct structural information characteristics that are exploited by the proposed metric. The proposed metric, the cardiac ultrasound video quality index (CUQI), is a full-reference metric and uses the motion and edge information of the cardiac ultrasound video to evaluate video quality. The metric was evaluated for its performance in approximating the quality of cardiac ultrasound videos by testing its correlation with the subjective scores of medical experts. The results of our tests showed that the metric correlates highly with medical expert opinions and in several cases outperforms the state-of-the-art video quality metrics considered in our tests. PMID:27014715
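The abstract does not spell out CUQI's formulation, but its edge-information component can be illustrated in miniature. The sketch below is a hypothetical simplification, not the authors' method: it scores a distorted frame by the fraction of reference edge pixels it preserves, using plain intensity differences in place of a real edge detector.

```python
def edge_map(frame, thresh=64):
    """Binary edge map from simple horizontal/vertical intensity differences."""
    h, w = len(frame), len(frame[0])
    return [[1 if (abs(frame[y][x] - frame[y][x - 1]) > thresh or
                   abs(frame[y][x] - frame[y - 1][x]) > thresh) else 0
             for x in range(1, w)] for y in range(1, h)]

def edge_preservation(ref_frame, dist_frame):
    """Fraction of reference edge pixels still marked as edges after distortion."""
    hits = total = 0
    for r_row, d_row in zip(edge_map(ref_frame), edge_map(dist_frame)):
        for r, d in zip(r_row, d_row):
            if r:
                total += 1
                hits += d
    return hits / total if total else 1.0

sharp   = [[0, 0, 200, 200] for _ in range(4)]   # toy frame with one sharp edge
blurred = [[0, 100, 150, 200] for _ in range(4)] # the same edge after smearing
print(edge_preservation(sharp, sharp))           # 1.0: edges intact
print(edge_preservation(sharp, blurred) < 1.0)   # True: the edge has smeared
```

A full-reference metric in the spirit of CUQI would combine a structural term like this with a motion term computed across consecutive frames.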
Towards a Visual Quality Metric for Digital Video
NASA Technical Reports Server (NTRS)
Watson, Andrew B.
1998-01-01
The advent of widespread distribution of digital video creates a need for automated methods for evaluating visual quality of digital video. This is particularly so since most digital video is compressed using lossy methods, which involve the controlled introduction of potentially visible artifacts. Compounding the problem is the bursty nature of digital video, which requires adaptive bit allocation based on visual quality metrics. In previous work, we have developed visual quality metrics for evaluating, controlling, and optimizing the quality of compressed still images. These metrics incorporate simplified models of human visual sensitivity to spatial and chromatic visual signals. The challenge of video quality metrics is to extend these simplified models to temporal signals as well. In this presentation I will discuss a number of the issues that must be resolved in the design of effective video quality metrics. Among these are spatial, temporal, and chromatic sensitivity and their interactions, visual masking, and implementation complexity. I will also touch on the question of how to evaluate the performance of these metrics.
Automated Assessment of Visual Quality of Digital Video
NASA Technical Reports Server (NTRS)
Watson, Andrew B.; Ellis, Stephen R. (Technical Monitor)
1997-01-01
The advent of widespread distribution of digital video creates a need for automated methods for evaluating visual quality of digital video. This is particularly so since most digital video is compressed using lossy methods, which involve the controlled introduction of potentially visible artifacts. Compounding the problem is the bursty nature of digital video, which requires adaptive bit allocation based on visual quality metrics. In previous work, we have developed visual quality metrics for evaluating, controlling, and optimizing the quality of compressed still images[1-4]. These metrics incorporate simplified models of human visual sensitivity to spatial and chromatic visual signals. The challenge of video quality metrics is to extend these simplified models to temporal signals as well. In this presentation I will discuss a number of the issues that must be resolved in the design of effective video quality metrics. Among these are spatial, temporal, and chromatic sensitivity and their interactions, visual masking, and implementation complexity. I will also touch on the question of how to evaluate the performance of these metrics.
Research on quality metrics of wireless adaptive video streaming
NASA Astrophysics Data System (ADS)
Li, Xuefei
2018-04-01
With the development of wireless networks and intelligent terminals, video traffic has increased dramatically, and adaptive video streaming has become one of the most promising video transmission technologies. For this type of service, good QoS (Quality of Service) in the wireless network does not always guarantee that all customers have a good experience. Thus, new quality metrics have been widely studied recently. Taking this into account, the objective of this paper is to investigate quality metrics for wireless adaptive video streaming. In this paper, a wireless video streaming simulation platform with a DASH mechanism and a multi-rate video generator is established. Based on this platform, a PSNR model, an SSIM model and a Quality Level model are implemented. The Quality Level model considers QoE (Quality of Experience) factors such as image quality, stalling and switching frequency, while the PSNR and SSIM models mainly consider the quality of the video. To evaluate the performance of these QoE models, three performance metrics (SROCC, PLCC and RMSE), which compare subjective and predicted MOS (Mean Opinion Score), are calculated. From these performance metrics, the monotonicity, linearity and accuracy of the quality metrics can be observed.
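The three performance metrics named above are standard and easy to state. The sketch below shows minimal pure-Python versions of SROCC, PLCC and RMSE over paired subjective and predicted MOS lists; the rank helper ignores ties, which a production Spearman implementation would average.

```python
import math

def plcc(x, y):
    """Pearson linear correlation coefficient (linearity of the prediction)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def srocc(x, y):
    """Spearman rank-order correlation (monotonicity): PLCC of the ranks."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, idx in enumerate(order):
            r[idx] = float(rank)
        return r
    return plcc(ranks(x), ranks(y))

def rmse(x, y):
    """Root-mean-square error (accuracy) between subjective and predicted MOS."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)) / len(x))

subjective = [4.5, 3.2, 2.1, 3.8, 1.5]   # hypothetical subjective MOS
predicted  = [4.2, 3.4, 2.4, 3.5, 1.9]   # hypothetical model output
print(srocc(subjective, predicted))      # 1.0: the rank orders agree exactly
```

In practice one would reach for `scipy.stats.spearmanr` and `scipy.stats.pearsonr`, which also handle ties and report p-values.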
Toward a perceptual video-quality metric
NASA Astrophysics Data System (ADS)
Watson, Andrew B.
1998-07-01
The advent of widespread distribution of digital video creates a need for automated methods for evaluating the visual quality of digital video. This is particularly so since most digital video is compressed using lossy methods, which involve the controlled introduction of potentially visible artifacts. Compounding the problem is the bursty nature of digital video, which requires adaptive bit allocation based on visual quality metrics, and the economic need to reduce bit-rate to the lowest level that yields acceptable quality. In previous work, we have developed visual quality metrics for evaluating, controlling, and optimizing the quality of compressed still images. These metrics incorporate simplified models of human visual sensitivity to spatial and chromatic visual signals. Here I describe a new video quality metric that is an extension of these still image metrics into the time domain. Like the still image metrics, it is based on the Discrete Cosine Transform. An effort has been made to minimize the amount of memory and computation required by the metric, in order that it might be applied in the widest range of applications. To calibrate the basic sensitivity of this metric to spatial and temporal signals we have made measurements of visual thresholds for temporally varying samples of DCT quantization noise.
A no-reference video quality assessment metric based on ROI
NASA Astrophysics Data System (ADS)
Jia, Lixiu; Zhong, Xuefei; Tu, Yan; Niu, Wenjuan
2015-01-01
A no-reference video quality assessment metric based on the region of interest (ROI) is proposed in this paper. In the metric, objective video quality is evaluated by integrating the quality of two compression artifacts, i.e. blurring distortion and blocking distortion. A Gaussian kernel function was used to extract human density maps for the H.264-coded videos from subjective eye-tracking data. An objective bottom-up ROI extraction model was built, based on the magnitude discrepancy of the discrete wavelet transform between two consecutive frames, a center-weighted color opponent model, a luminance contrast model, and a frequency saliency model based on spectral residual. Then only the objective saliency maps were used to compute the objective blurring and blocking quality. The results indicate that the objective ROI extraction model has a higher area under the curve (AUC) value. Compared with conventional video quality assessment metrics, which measure the quality of all video frames, the metric proposed in this paper not only decreases the computational complexity but also improves the correlation between subjective mean opinion scores (MOS) and objective scores.
An objective method for a video quality evaluation in a 3DTV service
NASA Astrophysics Data System (ADS)
Wilczewski, Grzegorz
2015-09-01
The following article describes a proposed objective method for 3DTV video quality evaluation, the Compressed Average Image Intensity (CAII) method. Identification of the nodes of the 3DTV service's content chain enables the design of a versatile, objective video quality metric based on an advanced approach to stereoscopic video stream analysis. Insights into the designed metric's mechanisms, as well as an evaluation of the metric's performance under simulated environmental conditions, are discussed. As a result, the CAII metric might be effectively used in a variety of service quality assessment applications.
A no-reference image and video visual quality metric based on machine learning
NASA Astrophysics Data System (ADS)
Frantc, Vladimir; Voronin, Viacheslav; Semenishchev, Evgenii; Minkin, Maxim; Delov, Aliy
2018-04-01
The paper presents a novel visual quality metric for lossy compressed video quality assessment. A high degree of correlation with subjective estimations of quality is achieved by using a convolutional neural network trained on a large number of pairs of video sequences and subjective quality scores. We demonstrate how our predicted no-reference quality metric correlates with qualitative opinion in a human observer study. Results are shown on the EVVQ dataset with comparison to existing approaches.
The compressed average image intensity metric for stereoscopic video quality assessment
NASA Astrophysics Data System (ADS)
Wilczewski, Grzegorz
2016-09-01
The following article presents insights into the design, creation and testing of a novel metric for 3DTV video quality evaluation. The Compressed Average Image Intensity (CAII) mechanism is based upon stereoscopic video content analysis, its core feature and functionality being to serve as a versatile tool for effective 3DTV service quality assessment. Being an objective quality metric, it may be utilized as a reliable source of information about the actual performance of a given 3DTV system under a provider's evaluation. Concerning testing and the overall performance analysis of the CAII metric, the paper presents a comprehensive study of results gathered across several testing routines on a selected set of stereoscopic video content samples. As a result, the designed method for stereoscopic video quality evaluation is investigated across a range of synthetic visual impairments injected into the original video stream.
Perceptual video quality assessment in H.264 video coding standard using objective modeling.
Karthikeyan, Ramasamy; Sainarayanan, Gopalakrishnan; Deepa, Subramaniam Nachimuthu
2014-01-01
Since usage of digital video is widespread nowadays, quality considerations have become essential, and industry demand for video quality measurement is rising. This proposal provides a method of perceptual quality assessment for the H.264 standard encoder using objective modeling. For this purpose, quality impairments are calculated and a model is developed to compute the perceptual video quality metric based on a no-reference method. Because of the subtle differences between the original video and the encoded video, the quality of the encoded picture is degraded; this quality difference is introduced by encoding processes such as intra and inter prediction. The proposed model takes into account the artifacts introduced by these spatial and temporal activities in hybrid block-based coding methods, and an objective modeling of these artifacts into a subjective quality estimate is proposed. The proposed model calculates the objective quality metric using subjective impairments (blockiness, blur and jerkiness), in contrast to the existing bitrate-only calculation defined in the ITU-T G.1070 model. The accuracy of the proposed perceptual video quality metric is compared against popular full-reference objective methods as defined by VQEG.
NASA Astrophysics Data System (ADS)
Ciaramello, Francis M.; Hemami, Sheila S.
2007-02-01
For members of the Deaf Community in the United States, current communication tools include TTY/TDD services, video relay services, and text-based communication. With the growth of cellular technology, mobile sign language conversations are becoming a possibility. Proper coding techniques must be employed to compress American Sign Language (ASL) video for low-rate transmission while maintaining the quality of the conversation. In order to evaluate these techniques, an appropriate quality metric is needed. This paper demonstrates that traditional video quality metrics, such as PSNR, fail to predict subjective intelligibility scores. By considering the unique structure of ASL video, an appropriate objective metric is developed. Face and hand segmentation is performed using skin-color detection techniques. The distortions in the face and hand regions are optimally weighted and pooled across all frames to create an objective intelligibility score for a distorted sequence. The objective intelligibility metric performs significantly better than PSNR in terms of correlation with subjective responses.
Real-time video quality monitoring
NASA Astrophysics Data System (ADS)
Liu, Tao; Narvekar, Niranjan; Wang, Beibei; Ding, Ran; Zou, Dekun; Cash, Glenn; Bhagavathy, Sitaram; Bloom, Jeffrey
2011-12-01
The ITU-T Recommendation G.1070 is a standardized opinion model for video telephony applications that uses video bitrate, frame rate, and packet-loss rate to measure video quality. However, this model was originally designed as an offline quality planning tool: it cannot be directly used for quality monitoring, since the above three input parameters are not readily available within a network or at the decoder, and there is considerable room to improve the performance of this quality metric. In this article, we present a real-time video quality monitoring solution based on this Recommendation. We first propose a scheme to efficiently estimate the three parameters from video bitstreams, so that the model can be used as a real-time video quality monitoring tool. Furthermore, an enhanced algorithm based on the G.1070 model that provides more accurate quality prediction is proposed. Finally, to use this metric in real-world applications, we present an emerging application of real-time quality measurement to the management of transmitted videos, especially those delivered to mobile devices.
PQSM-based RR and NR video quality metrics
NASA Astrophysics Data System (ADS)
Lu, Zhongkang; Lin, Weisi; Ong, Eeping; Yang, Xiaokang; Yao, Susu
2003-06-01
This paper presents a new and general concept, the PQSM (Perceptual Quality Significance Map), to be used in measuring visual distortion. It makes use of the selectivity characteristic of the HVS (Human Visual System), which pays more attention to certain areas/regions of a visual signal due to one or more of the following factors: salient features in the image/video, cues from domain knowledge, and association with other media (e.g., speech or audio). The PQSM is an array whose elements represent the relative perceptual-quality significance levels of the corresponding areas/regions of an image or video. Due to its generality, the PQSM can be incorporated into any visual distortion metric: to improve the effectiveness and/or efficiency of perceptual metrics, or even to enhance a PSNR-based metric. A three-stage PQSM estimation method is also proposed in this paper, with an implementation of motion, texture, luminance, skin-color and face mapping. Experimental results show the scheme can improve the performance of current image/video distortion metrics.
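The core idea of the abstract, weighting a distortion measure by the PQSM before pooling, can be sketched directly. The example below is an illustrative stand-in for the paper's three-stage estimation: it simply applies a precomputed significance map to per-pixel squared error.

```python
def pqsm_weighted_mse(ref, dist, pqsm):
    """Pool per-pixel squared error, weighted by a significance map.

    ref, dist : 2D lists of pixel intensities (same shape)
    pqsm      : 2D list of nonnegative perceptual-significance weights
    """
    num = den = 0.0
    for r_row, d_row, w_row in zip(ref, dist, pqsm):
        for r, d, w in zip(r_row, d_row, w_row):
            num += w * (r - d) ** 2
            den += w
    return num / den if den else 0.0

ref  = [[10, 10], [10, 10]]
dist = [[10, 10], [10, 30]]          # one distorted pixel
face = [[0.0, 0.0], [0.0, 1.0]]      # distortion falls on a significant region
back = [[1.0, 1.0], [1.0, 0.0]]      # distortion falls outside it
print(pqsm_weighted_mse(ref, dist, face))  # 400.0: error fully weighted
print(pqsm_weighted_mse(ref, dist, back))  # 0.0: error ignored
```

The same pixel error thus scores very differently depending on where it lands, which is exactly the behavior that lets a PQSM sharpen even a PSNR-style metric.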
Compression performance comparison in low delay real-time video for mobile applications
NASA Astrophysics Data System (ADS)
Bivolarski, Lazar
2012-10-01
This article compares the performance of several current video coding standards under low-delay real-time conditions in a resource-constrained environment. The comparison is performed using the same content and a mix of objective and perceptual quality metrics. The metric results for the different coding schemes are analyzed from the point of view of user perception and quality of service. Multiple standards are compared: MPEG-2, MPEG-4 and MPEG-4 AVC, as well as H.263. The metrics used in the comparison include SSIM, VQM and DVQ. Subjective evaluation and quality of service are discussed from the point of view of perceptual metrics and their incorporation into the coding scheme development process. The performance and the correlation of results are presented as a predictor of the performance of video compression schemes.
No-Reference Video Quality Assessment Based on Statistical Analysis in 3D-DCT Domain.
Li, Xuelong; Guo, Qun; Lu, Xiaoqiang
2016-05-13
It is an important task to design models for universal no-reference video quality assessment (NR-VQA) in multiple video processing and computer vision applications. However, most existing NR-VQA metrics are designed for specific distortion types, which are often not known in practical applications. A further deficiency is that the spatial and temporal information of videos is hardly ever considered simultaneously. In this paper, we propose a new NR-VQA metric based on spatiotemporal natural video statistics (NVS) in the 3D discrete cosine transform (3D-DCT) domain. In the proposed method, a set of features is first extracted, based on the statistical analysis of 3D-DCT coefficients, to characterize the spatiotemporal statistics of videos in different views. These features are then used to predict the perceived video quality via an efficient linear support vector regression (SVR) model. The contributions of this paper are: 1) we explore the spatiotemporal statistics of videos in the 3D-DCT domain, which has an inherent spatiotemporal encoding advantage over other widely used 2D transformations; 2) we extract a small set of simple but effective statistical features for video visual quality prediction; 3) the proposed method is universal for multiple types of distortions and robust across different databases. The proposed method is tested on four widely used video databases. Extensive experimental results demonstrate that the proposed method is competitive with the state-of-the-art NR-VQA metrics and the top-performing FR-VQA and RR-VQA metrics.
HealthTrust: a social network approach for retrieving online health videos.
Fernandez-Luque, Luis; Karlsen, Randi; Melton, Genevieve B
2012-01-31
Social media are becoming mainstream in the health domain. Despite the large volume of accurate and trustworthy health information available on social media platforms, finding good-quality health information can be difficult. Misleading health information can often be popular (eg, antivaccination videos) and therefore highly rated by general search engines. We believe that community wisdom about the quality of health information can be harnessed to help create tools for retrieving good-quality social media content. Our objective was to explore approaches for extracting metrics of authoritativeness in online health communities and to examine how these metrics correlate with the quality of the content. We designed a metric, called HealthTrust, that estimates the trustworthiness of social media content (eg, blog posts or videos) in a health community. The HealthTrust metric calculates reputation in an online health community based on link analysis. We used the metric to retrieve YouTube videos and channels about diabetes. In two different experiments, health consumers provided 427 ratings of 17 videos and professionals gave 162 ratings of 23 videos. In addition, two professionals reviewed 30 diabetes channels. HealthTrust may be used for retrieving online videos on diabetes, since it performed better than YouTube Search in most cases. Overall, of 20 potential channels, HealthTrust's filtering allowed only 3 bad channels (15%) versus 8 (40%) on the YouTube list. Misleading and graphic videos (eg, featuring amputations) were more commonly found by YouTube Search than by searches based on HealthTrust. However, some videos from trusted sources had low HealthTrust scores, mostly from general health content providers that were not highly connected in the diabetes community. When comparing video ratings from our reviewers, we found that HealthTrust achieved a positive and statistically significant correlation with professionals (Pearson r₁₀ = .65, P = .02) and a trend toward significance with health consumers (r₇ = .65, P = .06) for videos on hemoglobin A1c, but it did not perform as well with diabetic foot videos. The trust-based metric HealthTrust showed promising results when used to retrieve diabetes content from YouTube. Our research indicates that social network analysis may be used to identify trustworthy social media in health communities. PMID:22356723
A systematic review of usability test metrics for mobile video streaming apps
NASA Astrophysics Data System (ADS)
Hussain, Azham; Mkpojiogu, Emmanuel O. C.
2016-08-01
This paper presents the results of a systematic review of usability test metrics for mobile video streaming apps. In the study, 238 studies were found, but only 51 relevant papers were eventually selected for the review. The review reveals that time taken for video streaming and video quality were the two most popular metrics used in usability tests for mobile video streaming apps. In addition, most of the studies concentrated on the usability of mobile TV, as users are switching from traditional TV to mobile TV.
Quality evaluation of motion-compensated edge artifacts in compressed video.
Leontaris, Athanasios; Cosman, Pamela C; Reibman, Amy R
2007-04-01
Little attention has been paid to an impairment common in motion-compensated video compression: the addition of high-frequency (HF) energy as motion compensation displaces blocking artifacts off block boundaries. In this paper, we employ an energy-based approach to measure this motion-compensated edge artifact, using both compressed bitstream information and decoded pixels. We evaluate the performance of our proposed metric, along with several blocking and blurring metrics, on compressed video in two ways. First, ordinal scales are evaluated through a series of expectations that a good quality metric should satisfy: the objective evaluation. Then, the best performing metrics are subjectively evaluated. The same subjective data set is finally used to obtain interval scales to gain more insight. Experimental results show that we accurately estimate the percentage of the added HF energy in compressed video.
Degraded visual environment image/video quality metrics
NASA Astrophysics Data System (ADS)
Baumgartner, Dustin D.; Brown, Jeremy B.; Jacobs, Eddie L.; Schachter, Bruce J.
2014-06-01
A number of image quality metrics (IQMs) and video quality metrics (VQMs) have been proposed in the literature for evaluating techniques and systems for mitigating degraded visual environments. Some require both pristine and corrupted imagery. Others require patterned target boards in the scene. None of these metrics relates well to the task of landing a helicopter in conditions such as a brownout dust cloud. We have developed and used a variety of IQMs and VQMs related to the pilot's ability to detect hazards in the scene and to maintain situational awareness. Some of these metrics can be made agnostic to sensor type. Not only are the metrics suitable for evaluating algorithm and sensor variation, they are also suitable for choosing the most cost effective solution to improve operating conditions in degraded visual environments.
Caffery, Liam J; Smith, Anthony C
2015-09-01
The use of fourth-generation (4G) mobile telecommunications to provide real-time video consultations was investigated in this study, with the aims of determining whether 4G is a suitable telecommunications technology and of identifying whether variations in perceived audio and video quality were due to underlying network performance. Three patient end-points that used 4G Internet connections were evaluated. Consulting clinicians recorded their perception of audio and video quality using the International Telecommunication Union scales during clinics with these patient end-points. These scores were used to calculate a mean opinion score (MOS). The network performance metrics were obtained for each session, and the relationships between these metrics and the session's quality scores were tested. Clinicians scored the quality of 50 hours of video consultations, involving 36 clinic sessions. The MOS for audio was 4.1 ± 0.62 and the MOS for video was 4.4 ± 0.22. Image impairment and effort to listen were also rated favourably. There was no correlation between audio or video quality and the network metrics of packet loss or jitter. These findings suggest that 4G networks are an appropriate telecommunications technology for delivering real-time video consultations. The variations in quality scores observed during this study were not explained by packet loss and jitter in the underlying network. Before establishing a telemedicine service, the performance of the 4G network should be assessed at the location of the proposed service, due to the known variability in 4G network performance. © The Author(s) 2015.
Audiovisual quality evaluation of low-bitrate video
NASA Astrophysics Data System (ADS)
Winkler, Stefan; Faller, Christof
2005-03-01
Audiovisual quality assessment is a relatively unexplored topic. We designed subjective experiments for audio, video, and audiovisual quality using content and encoding parameters representative of video for mobile applications. Our focus was the MPEG-4 AVC (a.k.a. H.264) and AAC coding standards. Our goals in this study are two-fold: we want to understand the interactions between audio and video in terms of perceived audiovisual quality, and we use the subjective data to evaluate the prediction performance of our no-reference video and audio quality metrics.
Spatial-temporal distortion metric for in-service quality monitoring of any digital video system
NASA Astrophysics Data System (ADS)
Wolf, Stephen; Pinson, Margaret H.
1999-11-01
Many organizations have focused on developing digital video quality metrics which produce results that accurately emulate subjective responses. However, to be widely applicable a metric must also work over a wide range of quality and be useful for in-service quality monitoring. The Institute for Telecommunication Sciences (ITS) has developed spatial-temporal distortion metrics that meet all of these requirements. These objective metrics are described in detail and have a number of interesting properties, including utilization of (1) spatial activity filters which emphasize long edges on the order of 10 arc min while simultaneously performing large amounts of noise suppression, (2) the angular direction of the spatial gradient, (3) spatial-temporal compression factors of at least 384:1 (spatial compression of at least 64:1 and temporal compression of at least 6:1), and (4) simple perceptibility thresholds and spatial-temporal masking functions. Results are presented that compare the objective metric values with mean opinion scores from a wide range of subjective databases spanning many different scenes, systems, bit-rates, and applications.
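Property (2), the angular direction of the spatial gradient, comes straight from standard gradient filtering. The sketch below is a generic Sobel-based illustration, not the ITS filters themselves (which use longer edge-emphasizing kernels): it computes gradient magnitude and direction per interior pixel, plus an SI-style activity summary.

```python
import math

SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def spatial_activity(frame):
    """Gradient magnitude and angular direction at each interior pixel,
    plus an SI-style summary (std-dev of the magnitudes)."""
    h, w = len(frame), len(frame[0])
    mags, angles = [], []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(SOBEL_X[j][i] * frame[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(SOBEL_Y[j][i] * frame[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            mags.append(math.hypot(gx, gy))
            angles.append(math.atan2(gy, gx))  # angular direction of the gradient
    mean = sum(mags) / len(mags)
    si = math.sqrt(sum((m - mean) ** 2 for m in mags) / len(mags))
    return si, angles

flat = [[128] * 6 for _ in range(6)]
edge = [[0, 0, 0, 255, 255, 255] for _ in range(6)]
print(spatial_activity(flat)[0])      # 0.0: no spatial activity
print(spatial_activity(edge)[0] > 0)  # True: a long vertical edge
```

Blurring reduces the magnitude of such gradients while blocking introduces spurious horizontal/vertical ones, which is why both magnitude and direction carry distortion information.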
An Underwater Color Image Quality Evaluation Metric.
Yang, Miao; Sowmya, Arcot
2015-12-01
Quality evaluation of underwater images is a key goal of underwater video image retrieval and intelligent processing. To date, no metric has been proposed for underwater color image quality evaluation (UCIQE). The special absorption and scattering characteristics of the water medium do not allow direct application of natural color image quality metrics especially to different underwater environments. In this paper, subjective testing for underwater image quality has been organized. The statistical distribution of the underwater image pixels in the CIELab color space related to subjective evaluation indicates the sharpness and colorful factors correlate well with subjective image quality perception. Based on these, a new UCIQE metric, which is a linear combination of chroma, saturation, and contrast, is proposed to quantify the non-uniform color cast, blurring, and low-contrast that characterize underwater engineering and monitoring images. Experiments are conducted to illustrate the performance of the proposed UCIQE metric and its capability to measure the underwater image enhancement results. They show that the proposed metric has comparable performance to the leading natural color image quality metrics and the underwater grayscale image quality metrics available in the literature, and can predict with higher accuracy the relative amount of degradation with similar image content in underwater environments. Importantly, UCIQE is a simple and fast solution for real-time underwater video processing. The effectiveness of the presented measure is also demonstrated by subjective evaluation. The results show better correlation between the UCIQE and the subjective mean opinion score.
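A metric of the form described, a linear combination of chroma variability, luminance contrast, and saturation, can be sketched as follows. The weights and the simplified contrast and saturation definitions here are placeholders, not the paper's fitted coefficients, which are trained on subjective scores.

```python
import math

def uciqe_like(lab_pixels, weights=(0.468, 0.275, 0.258)):
    """UCIQE-style score over CIELab pixels given as (L, a, b) tuples.

    Combines chroma std-dev, luminance contrast, and mean saturation.
    Weights and the contrast/saturation definitions are illustrative."""
    n = len(lab_pixels)
    chroma = [math.hypot(a, b) for (L, a, b) in lab_pixels]
    mean_c = sum(chroma) / n
    sigma_c = math.sqrt(sum((c - mean_c) ** 2 for c in chroma) / n)
    Ls = sorted(L for (L, _, _) in lab_pixels)
    con_l = Ls[-1] - Ls[0]  # simplified: full luminance range, not percentiles
    sat = [c / math.hypot(c, L) if (c or L) else 0.0
           for c, (L, _, _) in zip(chroma, lab_pixels)]
    mu_s = sum(sat) / n
    c1, c2, c3 = weights
    return c1 * sigma_c + c2 * con_l + c3 * mu_s

murky    = [(50, 0, 0)] * 4                                  # flat, gray frame
colorful = [(15, 5, -5), (85, 40, -40), (45, 30, 25), (70, -25, 20)]
print(uciqe_like(murky))                                     # 0.0
print(uciqe_like(colorful) > uciqe_like(murky))              # True
```

Because each term is a cheap first-order statistic, a score of this shape can run per frame in real time, which matches the abstract's claim about underwater video processing.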
Design considerations for computationally constrained two-way real-time video communication
NASA Astrophysics Data System (ADS)
Bivolarski, Lazar M.; Saunders, Steven E.; Ralston, John D.
2009-08-01
Today's video codecs have evolved primarily to meet the requirements of the motion picture and broadcast industries, where high-complexity studio encoding can be utilized to create highly-compressed master copies that are then broadcast one-way for playback using less-expensive, lower-complexity consumer devices for decoding and playback. Related standards activities have largely ignored the computational complexity and bandwidth constraints of wireless or Internet-based real-time video communications using devices such as cell phones or webcams. Telecommunications industry efforts to develop and standardize video codecs for applications such as video telephony and video conferencing have not yielded image size, quality, and frame-rate performance that match today's consumer expectations and market requirements for Internet and mobile video services. This paper reviews the constraints and the corresponding video codec requirements imposed by real-time, 2-way mobile video applications. Several promising elements of a new mobile video codec architecture are identified, and more comprehensive computational complexity metrics and video quality metrics are proposed in order to support the design, testing, and standardization of these new mobile video codecs.
Synthesized view comparison method for no-reference 3D image quality assessment
NASA Astrophysics Data System (ADS)
Luo, Fangzhou; Lin, Chaoyi; Gu, Xiaodong; Ma, Xiaojun
2018-04-01
We develop a no-reference image quality assessment metric to evaluate the quality of synthesized views rendered from the Multi-view Video plus Depth (MVD) format. Our metric, named Synthesized View Comparison (SVC), is designed for real-time quality monitoring at the receiver side of a 3D-TV system. The metric utilizes virtual middle views warped from the left and right views by a depth-image-based rendering (DIBR) algorithm, and compares the difference between the virtual views rendered from the different cameras using Structural SIMilarity (SSIM), a popular 2D full-reference image quality assessment metric. The experimental results indicate that our no-reference quality assessment metric for synthesized images has competitive prediction performance compared with some classic full-reference image quality assessment metrics.
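The SSIM comparison at the heart of SVC can be sketched in a few lines. The paper applies SSIM locally over sliding windows; the single-window (global) version below keeps the sketch short while using the same formula, and is applied to the two DIBR-warped virtual views rather than to an original reference:

```python
import numpy as np

def ssim_global(x, y, data_range=1.0):
    """Single-window SSIM between two equally sized images, using the
    standard stabilising constants c1, c2 from the SSIM formula."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

If the two warped views agree, the score approaches 1; geometry or depth errors in either view lower it, which is what makes the comparison usable without the original image.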
Deriving video content type from HEVC bitstream semantics
NASA Astrophysics Data System (ADS)
Nightingale, James; Wang, Qi; Grecos, Christos; Goma, Sergio R.
2014-05-01
As network service providers seek to improve customer satisfaction and retention levels, they are increasingly moving from traditional quality of service (QoS) driven delivery models to customer-centred quality of experience (QoE) delivery models. QoS models consider only metrics derived from the network; QoE models, however, also consider metrics derived from within the video sequence itself. Various spatial and temporal characteristics of a video sequence have been proposed, both individually and in combination, to derive methods of classifying video content either on a continuous scale or as a set of discrete classes. QoE models can be divided into three broad categories: full reference, reduced reference, and no-reference models. Due to the need to have the original video available at the client for comparison, full reference metrics are of limited practical value in adaptive real-time video applications. Reduced reference metrics often require metadata to be transmitted with the bitstream, while no-reference metrics typically operate in the decompressed domain at the client side and require significant processing to extract spatial and temporal features. This paper proposes a heuristic, no-reference approach to video content classification which is specific to HEVC encoded bitstreams. The HEVC encoder already makes use of spatial characteristics to determine partitioning of coding units and temporal characteristics to determine the splitting of prediction units. We derive a function which approximates the spatio-temporal characteristics of the video sequence by using the weighted averages of the depth at which the coding unit quadtree is split and the prediction mode decision made by the encoder to estimate spatial and temporal characteristics respectively.
Since the video content type of a sequence is determined by using high level information parsed from the video stream, spatio-temporal characteristics are identified without the need for full decoding and can be used in a timely manner to aid decision making in QoE oriented adaptive real time streaming.
Experimental design and analysis of JND test on coded image/video
NASA Astrophysics Data System (ADS)
Lin, Joe Yuchieh; Jin, Lina; Hu, Sudeng; Katsavounidis, Ioannis; Li, Zhi; Aaron, Anne; Kuo, C.-C. Jay
2015-09-01
The visual Just-Noticeable-Difference (JND) metric is characterized by the minimum difference between two visual stimuli that is detectable. Conducting a subjective JND test is a labor-intensive task. In this work, we present a novel interactive method for performing the visual JND test on compressed image/video. JND has been used to enhance perceptual visual quality in the context of image/video compression. Given a set of coding parameters, a JND test is designed to determine the distinguishable quality level against a reference image/video, which is called the anchor. The JND metric can be used to save coding bitrate by exploiting special characteristics of the human visual system. The proposed JND test is conducted using a binary forced choice, which is often adopted to discriminate differences in perception in psychophysical experiments. The assessors are asked to compare coded image/video pairs and determine whether they are of the same quality or not. A bisection procedure is designed to find the JND locations so as to reduce the required number of comparisons over a wide range of bitrates. We demonstrate the efficiency of the proposed JND test and report experimental results on image and video JND tests.
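The bisection procedure mentioned above can be sketched as a binary search over an ordered quality ladder. The `same_quality(i, j)` oracle below is a hypothetical stand-in for the assessor's forced-choice answer; the paper's actual protocol and stopping rules are more involved:

```python
def find_jnd_level(levels, same_quality, anchor_idx=0):
    """Bisection sketch for locating a JND point on a quality ladder.
    `levels` is ordered from best to worst; `same_quality(i, j)` is a
    binary answer to whether levels i and j look the same.  Returns
    the index of the first level that looks different from the anchor,
    or None if no visible difference exists anywhere."""
    lo, hi = anchor_idx, len(levels) - 1
    if same_quality(anchor_idx, hi):
        return None
    # invariant: levels[lo] matches the anchor, levels[hi] does not
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if same_quality(anchor_idx, mid):
            lo = mid
        else:
            hi = mid
    return hi
```

This needs only O(log n) comparisons per anchor rather than one comparison per coding level, which is the source of the labor savings the abstract claims.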
Model-Based Referenceless Quality Metric of 3D Synthesized Images Using Local Image Description.
Gu, Ke; Jakhetiya, Vinit; Qiao, Jun-Fei; Li, Xiaoli; Lin, Weisi; Thalmann, Daniel
2017-07-28
New challenges have been brought about by emerging 3D-related technologies such as virtual reality (VR), augmented reality (AR), and mixed reality (MR). Free viewpoint video (FVV), owing to its applications in remote surveillance, remote education, etc., based on the flexible selection of direction and viewpoint, has been perceived as the development direction of next-generation video technologies and has drawn the attention of a wide range of researchers. Since FVV images are synthesized via a depth image-based rendering (DIBR) procedure in a "blind" environment (without reference images), a reliable real-time blind quality evaluation and monitoring system is urgently required. However, existing assessment metrics do not track human judgments faithfully, mainly because of the geometric distortions generated by DIBR. To this end, this paper proposes a novel referenceless quality metric for DIBR-synthesized images using autoregression (AR)-based local image description. It was found that, after AR prediction, the reconstruction error between a DIBR-synthesized image and its AR-predicted image can accurately capture the geometric distortion. Visual saliency is then leveraged to improve the proposed blind quality metric by a sizable margin. Experiments validate the superiority of our no-reference quality method as compared with prevailing full-, reduced- and no-reference models.
Cañon, Daniel E; Lopez, Diego M; Blobel, Bernd
2014-01-01
Moderation of content in online Health Social Networks (HSN) is critical because information is not only published and produced by experts or health professionals, but also by users of that information. The objective of this paper is to propose a semi-automatic moderation Web Service for assessing the quality (trustworthiness) of health-related videos published on the YouTube social network. The service is relevant for moderators or community managers, who are thereby enabled to control the quality of videos published on their online HSN sites. The HealthTrust metric was selected as the metric to be implemented in the service in order to support the assessment of trustworthiness of videos in online HSN. The service is a RESTful service which can be integrated into open source Virtual Social Network Platforms, thereby improving trust in the process of searching and publishing content extracted from YouTube. A preliminary pilot evaluation in a simple use case demonstrated that the relevance of videos retrieved using the moderation service was higher than the relevance of the videos retrieved using the YouTube search engine.
Quality and noise measurements in mobile phone video capture
NASA Astrophysics Data System (ADS)
Petrescu, Doina; Pincenti, John
2011-02-01
The quality of videos captured with mobile phones has become increasingly important particularly since resolutions and formats have reached a level that rivals the capabilities available in the digital camcorder market, and since many mobile phones now allow direct playback on large HDTVs. The video quality is determined by the combined quality of the individual parts of the imaging system including the image sensor, the digital color processing, and the video compression, each of which has been studied independently. In this work, we study the combined effect of these elements on the overall video quality. We do this by evaluating the capture under various lighting, color processing, and video compression conditions. First, we measure full reference quality metrics between encoder input and the reconstructed sequence, where the encoder input changes with light and color processing modifications. Second, we introduce a system model which includes all elements that affect video quality, including a low light additive noise model, ISP color processing, as well as the video encoder. Our experiments show that in low light conditions and for certain choices of color processing the system level visual quality may not improve when the encoder becomes more capable or the compression ratio is reduced.
Weighted-MSE based on saliency map for assessing video quality of H.264 video streams
NASA Astrophysics Data System (ADS)
Boujut, H.; Benois-Pineau, J.; Hadar, O.; Ahmed, T.; Bonnet, P.
2011-01-01
The human visual system is very complex and has been studied for many years, specifically for purposes of efficient encoding of visual content, e.g. video content from digital TV. Physiological and psychological evidence indicates that viewers do not pay equal attention to all exposed visual information, but only focus on certain areas known as focus of attention (FOA) or saliency regions. In this work, we propose a novel saliency-based objective quality assessment metric for assessing the perceptual quality of decoded video sequences affected by transmission errors and packet losses. The proposed method weights the Mean Square Error (MSE), yielding a Weighted-MSE (WMSE), according to the calculated saliency map at each pixel. Our method was validated through subjective quality experiments.
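The saliency-weighted MSE described above reduces to a few lines once the saliency map is available (how the map itself is computed is the authors' contribution and is not reproduced here):

```python
import numpy as np

def weighted_mse(ref, dist, saliency):
    """Saliency-weighted MSE: each pixel's squared error is scaled by
    the normalised saliency map, so errors inside focus-of-attention
    regions dominate the score."""
    w = saliency / saliency.sum()
    return float(np.sum(w * (ref - dist) ** 2))
```

With a uniform saliency map this reduces exactly to the plain MSE; concentrating the weight on the region containing the error raises the score, which is the intended perceptual behaviour.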
Dutta, Rahul; Yoon, Renai; Patel, Roshan M; Spradling, Kyle; Okhunov, Zhamshid; Sohn, William; Lee, Hak J; Landman, Jaime; Clayman, Ralph V
2017-06-01
To compare conventional videocystoscopy (CVC) with a novel and affordable (approximately $45) mobile cystoscopy system, the Endockscope (ES). We evaluated the ES system using both fluid (Endockscope-Fluid [ES-F]) and air (Endockscope-Air [ES-A]) to fill the bladder in an effort to expand the global range of flexible cystoscopy. The ES system comprised a portable 1000 lumen LED self-contained cordless light source and a three-dimensional printed adaptor that connects a mobile phone to a flexible fiber-optic cystoscope. Patients undergoing in-office cystoscopic evaluation for either stent removal or bladder cancer surveillance received three examinations: conventional, ES-F, and ES-A cystoscopy. Videos of each examination were recorded and analyzed by expert endoscopists for image quality/resolution, brightness, color quality, sharpness, overall quality, and whether or not they were acceptable for diagnostic purposes. Six of the 10 patients for whom the conventional videos were 100% acceptable for diagnostic purposes were included in our analysis. The conventional videos scored higher on every metric relative to both the ES-F and ES-A videos (p < 0.05). There was no difference between ES-F and ES-A videos on any metric. Fifty-two percent and 44% of the ES-F and ES-A videos, respectively, were considered acceptable for diagnostic purposes (p = 0.384). The ES mobile cystoscopy system may be a reasonable option in settings where electricity, sterile fluid irrigant, or access to CVC equipment is unavailable.
YouTube as a source of COPD patient education: A social media content analysis
Stellefson, Michael; Chaney, Beth; Ochipa, Kathleen; Chaney, Don; Haider, Zeerak; Hanik, Bruce; Chavarria, Enmanuel; Bernhardt, Jay M.
2014-01-01
Objective Conduct a social media content analysis of COPD patient education videos on YouTube. Methods A systematic search protocol was used to locate 223 videos. Two independent coders evaluated each video to determine topics covered, media source(s) of posted videos, information quality as measured by HONcode guidelines for posting trustworthy health information on the Internet, and viewer exposure/engagement metrics. Results Over half the videos (n=113, 50.7%) included information on medication management, with far fewer videos on smoking cessation (n=40, 17.9%). Most videos were posted by a health agency or organization (n=128, 57.4%), and the majority of videos were rated as high quality (n=154, 69.1%). HONcode adherence differed by media source (Fisher’s Exact Test=20.52, p=.01), with user-generated content (UGC) receiving the lowest quality scores. Overall level of user engagement as measured by number of “likes,” “favorites,” “dislikes,” and user comments was low (mdn range = 0–3, interquartile (IQR) range = 0–16) across all sources of media. Conclusion Study findings suggest that COPD education via YouTube has the potential to reach and inform patients, however, existing video content and quality varies significantly. Future interventions should help direct individuals with COPD to increase their engagement with high-quality patient education videos on YouTube that are posted by reputable health organizations and qualified medical professionals. Patients should be educated to avoid and/or critically view low-quality videos posted by individual YouTube users who are not health professionals. PMID:24659212
Performance analysis of medical video streaming over mobile WiMAX.
Alinejad, Ali; Philip, N; Istepanian, R H
2010-01-01
Wireless medical ultrasound streaming is considered one of the emerging applications within the broadband mobile healthcare domain. These applications are bandwidth-demanding services that require high data rates with acceptable diagnostic quality of the transmitted medical images. In this paper, we present a performance analysis of medical ultrasound video streaming acquired via a special robotic ultrasonography system over an emulated WiMAX wireless network. The experimental set-up of this application is described together with the performance of the relevant medical quality of service (m-QoS) metrics.
Underwater video enhancement using multi-camera super-resolution
NASA Astrophysics Data System (ADS)
Quevedo, E.; Delory, E.; Callicó, G. M.; Tobajas, F.; Sarmiento, R.
2017-12-01
Image spatial resolution is critical in several fields such as medicine, communications, satellite, and underwater applications. While a large variety of techniques for image restoration and enhancement has been proposed in the literature, this paper focuses on a novel Super-Resolution fusion algorithm based on a Multi-Camera environment that makes it possible to enhance the quality of underwater video sequences without significantly increasing computation. In order to compare the quality enhancement, two objective quality metrics have been used: PSNR (Peak Signal-to-Noise Ratio) and the SSIM (Structural SIMilarity) index. Results have shown that the proposed method enhances the objective quality of several underwater sequences, avoiding the appearance of undesirable artifacts, with respect to basic fusion Super-Resolution algorithms.
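Of the two objective metrics named above, PSNR is the simpler and is fully determined by its definition; a minimal sketch:

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB between a reference frame and
    a processed frame; higher is better, infinite for identical frames."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)
```

For example, a uniform error of 0.1 on a unit-range image gives an MSE of 0.01 and hence a PSNR of exactly 20 dB.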
Spatiotemporal video deinterlacing using control grid interpolation
NASA Astrophysics Data System (ADS)
Venkatesan, Ragav; Zwart, Christine M.; Frakes, David H.; Li, Baoxin
2015-03-01
With the advent of progressive format display and broadcast technologies, video deinterlacing has become an important video-processing technique. Numerous approaches exist in the literature to accomplish deinterlacing. While most earlier methods were simple linear filtering-based approaches, the emergence of faster computing technologies and even dedicated video-processing hardware in display units has allowed higher quality but also more computationally intense deinterlacing algorithms to become practical. Most modern approaches analyze motion and content in video to select different deinterlacing methods for various spatiotemporal regions. We introduce a family of deinterlacers that employs spectral residue to choose between and weight control grid interpolation based spatial and temporal deinterlacing methods. The proposed approaches perform better than the prior state-of-the-art based on peak signal-to-noise ratio, other visual quality metrics, and simple perception-based subjective evaluations conducted by human viewers. We further study the advantages of using soft and hard decision thresholds on the visual performance.
NASA Astrophysics Data System (ADS)
Yang, Xinyan; Zhao, Wei; Ye, Long; Zhang, Qin
2017-07-01
This paper proposes a no-reference objective stereoscopic video quality assessment method, motivated by the goal of bringing the results of objective experiments close to those of subjective evaluation. We believe that image regions with different degrees of visual saliency should not have the same weights when designing an assessment metric. Therefore, we first apply the GBVS algorithm to each frame pair and separate both the left and right viewing images into regions with strong, general, and weak saliency. In addition, local feature information such as blockiness, zero-crossing, and depth is extracted and combined with a mathematical model to calculate a quality assessment score. Regions with different degrees of saliency are assigned different weights in the mathematical model. Experimental results demonstrate the superiority of our method compared with existing state-of-the-art no-reference objective stereoscopic video quality assessment methods.
NASA Astrophysics Data System (ADS)
Qiu, Guoping; Kheiri, Ahmed
2011-01-01
Current subjective image quality assessments have been developed in laboratory environments, under controlled conditions, and are dependent on the participation of limited numbers of observers. In this research, with the help of Web 2.0 and social media technology, a new method for building a subjective image quality metric has been developed where the observers are Internet users. A website with a simple user interface that enables Internet users from anywhere at any time to vote for the better quality version of a pair of the same image has been constructed. Users' votes are recorded and used to rank the images according to their perceived visual qualities. We have developed three rank aggregation algorithms to process the recorded pair comparison data: the first uses a naive approach, the second employs a Condorcet method, and the third uses Dykstra's extension of the Bradley-Terry method. The website has been collecting data for about three months and has accumulated over 10,000 votes at the time of writing this paper. Results show that the Internet and its allied technologies, such as crowdsourcing, offer a promising new paradigm for image and video quality assessment, where hundreds of thousands of Internet users can contribute to building more robust image quality metrics. We have made Internet-user-generated social image quality (SIQ) data of a public image database available online (http://www.hdri.cs.nott.ac.uk/siq/) to provide the image quality research community with a new source of ground truth data. The website continues to collect votes and will include more public image databases; it will also be extended to include videos to collect social video quality (SVQ) data. All data will be made publicly available on the website in due course.
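The third aggregator mentioned above builds on the Bradley-Terry model, which turns pairwise votes into per-image strength scores. The sketch below fits the basic Bradley-Terry model with the standard minorisation-maximisation (Zermelo) iteration; it does not include Dykstra's extension for ties or partial comparisons:

```python
import numpy as np

def bradley_terry(wins, iters=200):
    """Minorisation-maximisation fit of Bradley-Terry strengths from a
    pairwise-vote matrix: wins[i, j] = number of times image i was
    preferred over image j.  Higher strength = higher perceived quality."""
    n = wins.shape[0]
    p = np.ones(n)
    for _ in range(iters):
        for i in range(n):
            num = wins[i].sum()
            den = sum((wins[i, j] + wins[j, i]) / (p[i] + p[j])
                      for j in range(n) if j != i)
            if den > 0:
                p[i] = num / den
        p /= p.sum()   # strengths are only defined up to scale
    return p
```

Sorting images by the fitted strengths yields the quality ranking that the crowdsourced votes imply, even when no single user compared every pair.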
Modeling of video compression effects on target acquisition performance
NASA Astrophysics Data System (ADS)
Cha, Jae H.; Preece, Bradley; Espinola, Richard L.
2009-05-01
The effect of video compression on image quality was investigated from the perspective of target acquisition performance modeling. Human perception tests were conducted recently at the U.S. Army RDECOM CERDEC NVESD, measuring identification (ID) performance on simulated military vehicle targets at various ranges. These videos were compressed with different quality and/or quantization levels utilizing motion JPEG, motion JPEG2000, and MPEG-4 encoding. To model the degradation on task performance, the loss in image quality is fit to an equivalent Gaussian MTF scaled by the Structural Similarity Image Metric (SSIM). Residual compression artifacts are treated as 3-D spatio-temporal noise. This 3-D noise is found by taking the difference of the uncompressed frame, with the estimated equivalent blur applied, and the corresponding compressed frame. Results show good agreement between the experimental data and the model prediction. This method has led to a predictive performance model for video compression by correlating various compression levels to particular blur and noise input parameters for NVESD target acquisition performance model suite.
Quality metric for spherical panoramic video
NASA Astrophysics Data System (ADS)
Zakharchenko, Vladyslav; Choi, Kwang Pyo; Park, Jeong Hoon
2016-09-01
Virtual reality (VR)/augmented reality (AR) applications allow users to view artificial content of a surrounding space, simulating a presence effect with the help of special applications or devices. Synthetic content production is a well-known process in the computer graphics domain, and its pipeline is already established in the industry. However, emerging multimedia formats for immersive entertainment applications, such as free-viewpoint television (FTV) or spherical panoramic video, require different approaches to content management and quality assessment. International standardization of FTV has been promoted by MPEG. This paper is dedicated to a discussion of the immersive media distribution format and the quality estimation process. The accuracy and reliability of the proposed objective quality estimation method were verified on spherical panoramic images, demonstrating good correlation with subjective quality estimation performed by a group of experts.
Speckle reduction in echocardiography by temporal compounding and anisotropic diffusion filtering
NASA Astrophysics Data System (ADS)
Giraldo-Guzmán, Jader; Porto-Solano, Oscar; Cadena-Bonfanti, Alberto; Contreras-Ortiz, Sonia H.
2015-01-01
Echocardiography is a medical imaging technique based on ultrasound signals that is used to evaluate heart anatomy and physiology. Echocardiographic images are affected by speckle, a type of multiplicative noise that obscures details of the structures, and reduces the overall image quality. This paper shows an approach to enhance echocardiography using two processing techniques: temporal compounding and anisotropic diffusion filtering. We used twenty echocardiographic videos that include one or three cardiac cycles to test the algorithms. Two images from each cycle were aligned in space and averaged to obtain the compound images. These images were then processed using anisotropic diffusion filters to further improve their quality. Resultant images were evaluated using quality metrics and visual assessment by two medical doctors. The average total improvement on signal-to-noise ratio was up to 100.29% for videos with three cycles, and up to 32.57% for videos with one cycle.
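The temporal compounding step described above is, at its core, an average of spatially aligned frames from successive cardiac cycles. A minimal sketch (alignment is assumed already done; the paper performs spatial registration first, and follows compounding with anisotropic diffusion, which is omitted here):

```python
import numpy as np

def temporal_compound(frames):
    """Average co-registered frames from successive cardiac cycles.
    Averaging N frames with independent speckle reduces the noise
    standard deviation by roughly sqrt(N) while preserving anatomy."""
    return np.mean(np.stack(frames, axis=0), axis=0)
```

This sqrt(N) noise reduction is consistent with the abstract's observation that three-cycle videos (more frames to compound) improved more than one-cycle videos.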
Christoforou, Christoforos; Christou-Champi, Spyros; Constantinidou, Fofi; Theodorou, Maria
2015-01-01
Eye-tracking has been extensively used to quantify audience preferences in the context of marketing and advertising research, primarily in methodologies involving static images or stimuli (i.e., advertising, shelf testing, and website usability). However, these methodologies do not generalize to narrative-based video stimuli where a specific storyline is meant to be communicated to the audience. In this paper, a novel metric based on eye-gaze dispersion (both within and across viewings) that quantifies the impact of narrative-based video stimuli on the preferences of large audiences is presented. The metric is validated in predicting the performance of video advertisements aired during the 2014 Super Bowl final. In particular, the metric is shown to explain 70% of the variance in likeability scores of the 2014 Super Bowl ads as measured by the USA TODAY Ad-Meter. In addition, by comparing the proposed metric with Heart Rate Variability (HRV) indices, we have associated the metric with biological processes relating to attention allocation. The underlying idea behind the proposed metric suggests a shift in perspective when it comes to evaluating narrative-based video stimuli. In particular, it suggests that audience preferences on video are modulated by the level of viewers' lack of attention allocation. The proposed metric can be calculated on any narrative-based video stimuli (i.e., movie, narrative content, emotional content, etc.), and thus has the potential to facilitate the use of such stimuli in several contexts: prediction of audience preferences of movies, quantitative assessment of entertainment pieces, prediction of the impact of movie trailers, identification of group and individual differences in the study of attention-deficit disorders, and the study of desensitization to media violence. PMID:26029135
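The within-viewing component of the gaze-dispersion idea can be sketched as the mean pairwise distance between viewers' gaze positions on a single frame. This is a rough illustration of the dispersion concept only; the paper's metric also incorporates dispersion across repeated viewings:

```python
import numpy as np

def gaze_dispersion(gaze_points):
    """Mean pairwise Euclidean distance between viewers' gaze positions
    on one frame.  Low dispersion = the narrative is pulling every
    viewer's attention to the same spot."""
    pts = np.asarray(gaze_points, dtype=float)
    n = len(pts)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    return d.sum() / (n * (n - 1))   # mean over ordered pairs
```

Averaging this per-frame value over an ad gives a scalar that, per the abstract, correlates strongly (negatively) with audience likeability scores.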
Lin, Frank Yeong-Sung; Hsiao, Chiu-Han; Yen, Hong-Hsu; Hsieh, Yu-Jen
2013-01-01
One of the important applications of Wireless Sensor Networks (WSNs) is video surveillance, which includes the tasks of video data processing and transmission. Processing and transmission of image and video data in WSNs has attracted a lot of attention in recent years; this field is known as Wireless Visual Sensor Networks (WVSNs). WVSNs are distributed intelligent systems for collecting image or video data, with unique performance, complexity, and quality-of-service challenges. WVSNs consist of a large number of battery-powered and resource-constrained camera nodes. End-to-end delay is a very important Quality of Service (QoS) metric for video surveillance applications in WVSNs. How to meet stringent delay QoS in resource-constrained WVSNs is a challenging issue that requires novel distributed and collaborative routing strategies. This paper proposes a Near-Optimal Distributed QoS Constrained (NODQC) routing algorithm to achieve an end-to-end route with lower delay and higher throughput. A Lagrangian Relaxation (LR)-based routing metric that considers the “system perspective” and “user perspective” is proposed to determine near-optimal routing paths that satisfy end-to-end delay constraints with high system throughput. The empirical results show that the NODQC routing algorithm outperforms others in terms of higher system throughput with lower average end-to-end delay and delay jitter. This paper shows, for the first time, how to meet the delay QoS while achieving higher system throughput in stringently resource-constrained WVSNs.
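The Lagrangian-relaxation idea behind such delay-constrained routing can be sketched as follows: fold the delay constraint into the edge cost via a multiplier, run a shortest-path search on the composite cost, then check the constraint. The graph format, cost terms, and fixed multiplier below are illustrative assumptions, not the NODQC algorithm itself (which also updates the multiplier and distributes the computation):

```python
import heapq

def lr_route(adj, src, dst, delay_bound, lam=1.0):
    """Dijkstra on the composite cost  energy + lam * delay,  followed
    by a feasibility check against the end-to-end delay bound.
    adj[u] lists (v, energy, delay) edges."""
    heap = [(0.0, 0.0, src, [src])]
    seen = set()
    while heap:
        cost, delay, u, path = heapq.heappop(heap)
        if u in seen:
            continue
        seen.add(u)
        if u == dst:
            # relaxed solution found; verify the original constraint
            return path if delay <= delay_bound else None
        for v, e, d in adj.get(u, []):
            if v not in seen:
                heapq.heappush(heap, (cost + e + lam * d, delay + d,
                                      v, path + [v]))
    return None
```

Raising `lam` penalises slow links more heavily, steering the relaxed shortest path toward delay-feasible routes; a full LR scheme iterates on `lam` until the constraint is met at near-optimal cost.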
NASA Astrophysics Data System (ADS)
Grieggs, Samuel M.; McLaughlin, Michael J.; Ezekiel, Soundararajan; Blasch, Erik
2015-06-01
As technology and internet use grow at an exponential rate, video and imagery data are becoming increasingly important. Various techniques such as Wide Area Motion Imagery (WAMI), Full Motion Video (FMV), and Hyperspectral Imaging (HSI) are used to collect motion data and extract relevant information. Detecting and identifying a particular object in imagery data is an important step in understanding visual imagery, such as in content-based image retrieval (CBIR). Imagery data are segmented, automatically analyzed, and stored in a dynamic and robust database. In our system, we seek to utilize image fusion methods, which require quality metrics. Many Image Fusion (IF) algorithms have been proposed, but only a few metrics are used to evaluate their performance. In this paper, we seek a robust, objective metric to evaluate the performance of IF algorithms that compares the outcome of a given algorithm to ground truth and reports several types of errors. Given the ground truth of motion imagery data, it computes detection failures, false alarms, precision and recall metrics, background and foreground region statistics, as well as splits and merges of foreground regions. Using the Structural Similarity Index (SSIM), Mutual Information (MI), and entropy metrics, experimental results demonstrate the effectiveness of the proposed methodology for object detection, activity exploitation, and CBIR.
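The entropy and mutual information measures named above are standard information-theoretic quantities. As a rough illustration (not the paper's implementation), both can be estimated from intensity histograms; the image sizes, bin counts, and noise level below are illustrative assumptions:

```python
import numpy as np

def entropy(img, bins=256):
    """Shannon entropy (bits) of an image's intensity histogram."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist[hist > 0] / hist.sum()
    return -np.sum(p * np.log2(p))

def mutual_information(a, b, bins=64):
    """Mutual information (bits) between two equally sized images,
    estimated from their joint intensity histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal over rows
    py = pxy.sum(axis=0, keepdims=True)   # marginal over columns
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz]))

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(64, 64)).astype(float)
fused = np.clip(ref + rng.normal(0, 10, ref.shape), 0, 255)

# The fused image shares information with the reference, but less
# than the reference shares with itself.
print(mutual_information(ref, fused) < mutual_information(ref, ref))  # True
```

In a fusion benchmark these quantities would be computed between each fused frame and the ground truth, alongside SSIM.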
NASA Astrophysics Data System (ADS)
Pandremmenou, K.; Shahid, M.; Kondi, L. P.; Lövström, B.
2015-03-01
In this work, we propose a No-Reference (NR) bitstream-based model for predicting the quality of H.264/AVC video sequences affected by both compression artifacts and transmission impairments. The proposed model is based on a feature extraction procedure, where a large number of features are calculated from the packet-loss-impaired bitstream. Many of the features are proposed for the first time in this work, and the specific set of features as a whole is applied for the first time to making NR video quality predictions. All feature observations are taken as input to the Least Absolute Shrinkage and Selection Operator (LASSO) regression method. LASSO indicates the most important features, and using only them, it is possible to estimate the Mean Opinion Score (MOS) with high accuracy. Indicatively, we point out that only 13 features are able to produce a Pearson Correlation Coefficient of 0.92 with the MOS. Interestingly, the performance statistics we computed in order to assess our method for predicting the Structural Similarity Index and the Video Quality Metric are equally good. Thus, the obtained experimental results verified the suitability of the features selected by LASSO as well as the ability of LASSO to make accurate predictions through sparse modeling.
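The sparse-selection idea can be illustrated apart from the paper's data and solver. The sketch below runs a minimal ISTA (iterative soft-thresholding) implementation of LASSO on synthetic "bitstream features", where by construction only 5 of 40 features drive the synthetic MOS; all values are illustrative assumptions:

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the L1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def lasso_ista(X, y, alpha=0.1, n_iter=2000):
    """LASSO via iterative soft-thresholding (ISTA); a minimal
    stand-in for whatever solver the paper actually used."""
    n = X.shape[0]
    L = np.linalg.norm(X, ord=2) ** 2 / n   # Lipschitz constant of the gradient
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y) / n
        w = soft_threshold(w - grad / L, alpha / L)
    return w

# Synthetic stand-in for bitstream features: only 5 of 40 matter.
rng = np.random.default_rng(42)
X = rng.normal(size=(120, 40))
true_w = np.zeros(40)
true_w[:5] = [2.0, -1.5, 1.0, 0.8, -0.6]
mos = X @ true_w + rng.normal(scale=0.3, size=120)

w = lasso_ista(X, mos)
selected = np.flatnonzero(np.abs(w) > 1e-8)   # features LASSO kept
pcc = np.corrcoef(X @ w, mos)[0, 1]           # Pearson correlation with "MOS"
print(len(selected), round(pcc, 3))
```

The L1 penalty zeroes out the uninformative features, mirroring how the paper distills a large candidate pool down to roughly a dozen quality-relevant features.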
Performance comparison of AV1, HEVC, and JVET video codecs on 360 (spherical) video
NASA Astrophysics Data System (ADS)
Topiwala, Pankaj; Dai, Wei; Krishnan, Madhu; Abbas, Adeel; Doshi, Sandeep; Newman, David
2017-09-01
This paper compares the coding efficiency of three software codecs on 360° video: (a) the AV1 video codec from the Alliance for Open Media (AOM); (b) the HEVC Reference Software HM; and (c) the JVET JEM Reference SW. Note that 360° video is especially challenging content, in that the full resolution is coded globally but viewing is typically local (in a viewport), which magnifies errors. The codecs are tested in two different projection formats, ERP and RSP, to check consistency. Performance is tabulated for 1-pass encoding on two fronts: (1) objective performance based on end-to-end (E2E) metrics such as SPSNR-NN and WS-PSNR, currently developed in the JVET committee; and (2) informal subjective assessment of static viewports. Constant-quality encoding is performed with all three codecs for an unbiased comparison of the core coding tools. Our general conclusion is that under constant-quality coding, AV1 underperforms HEVC, which underperforms JVET. We also test with rate control, where AV1 currently underperforms the open-source x265 HEVC codec. Objective and visual evidence is provided.
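WS-PSNR, one of the E2E metrics mentioned, weights each pixel's squared error by the area it covers on the sphere; for ERP this reduces to a cos(latitude) weight per image row. A minimal sketch under that assumption (array sizes and test values are illustrative, and only a single luma plane is handled):

```python
import numpy as np

def ws_psnr(ref, dist, max_val=255.0):
    """WS-PSNR for equirectangular (ERP) frames: squared errors are
    weighted by cos(latitude), so over-sampled rows near the poles
    count less. A sketch of the JVET-style definition for HxW planes."""
    h, w = ref.shape
    rows = np.arange(h).reshape(-1, 1)
    weight = np.cos((rows + 0.5 - h / 2) * np.pi / h)   # one weight per row
    wmse = np.sum(weight * (ref - dist) ** 2) / (np.sum(weight) * w)
    return 10 * np.log10(max_val ** 2 / wmse)

ref = np.full((90, 180), 128.0)
polar = ref.copy();       polar[0, :] += 5        # error on a pole row
equatorial = ref.copy();  equatorial[45, :] += 5  # same error at the equator
print(ws_psnr(ref, polar) > ws_psnr(ref, equatorial))  # True: polar error weighs less
```

This is exactly why viewport-dependent 360° viewing needs sphere-aware metrics: the same pixel error matters less where the projection over-samples.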
Assessment of "YouTube" Content for Distal Radius Fracture Immobilization.
Addar, Abdullah; Marwan, Yousef; Algarni, Nizar; Berry, Gregory
Distal radius fractures (DRFs) are the most common orthopedic fractures, with >70% of cases treated by closed immobilization using a short arm cast or a sugar tong splint. However, inadequate immobilization is a risk factor for loss of reduction requiring repeat reduction or surgical treatment. Therefore, education in the clinical skills for appropriate immobilization of DRFs is important. With the increasing use of web-based information by medical learners, our aim was to assess the quality and quantity of videos regarding closed immobilization of DRFs on YouTube. We conducted a retrospective review of YouTube videos on distal radius fracture immobilization using specific search terms. Identified videos were analyzed for their educational value, the quality of the technical skill demonstrated, and overall metrics. Educational value was scored on a 5-point scale, with "1" indicative of low quality and "5" of high quality. Among the 68,366 videos identified, 16 met our inclusion criteria of being in English, performed by a health care professional or institution, and with casting as the major theme of the educational information provided. Of these 16 videos, 6 had an educational value score of 4 or 5, with the remaining 10 having a score ≤3. Although immobilization was demonstrated by a cast technician specialized in orthopedics, the skills were also performed by orthopedic attendants, urgent care physicians, orthopedic residents, and nurse practitioners. The credentials of the performer in 3 videos were not identified. There is a need to promote high-quality educational videos produced by established medical school faculty members on open, web-based portals. Copyright © 2017 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
Empirical evaluation of H.265/HEVC-based dynamic adaptive video streaming over HTTP (HEVC-DASH)
NASA Astrophysics Data System (ADS)
Irondi, Iheanyi; Wang, Qi; Grecos, Christos
2014-05-01
Real-time HTTP streaming has gained global popularity for delivering video content over the Internet. In particular, the recent MPEG-DASH (Dynamic Adaptive Streaming over HTTP) standard enables on-demand, live, and adaptive Internet streaming in response to network bandwidth fluctuations. Meanwhile, the emerging new-generation video coding standard, H.265/HEVC (High Efficiency Video Coding), promises to reduce the bandwidth requirement by 50% at the same video quality compared with the current H.264/AVC standard. However, little existing work has addressed the integration of the DASH and HEVC standards, let alone empirical performance evaluation of such systems. This paper presents an experimental HEVC-DASH system, a pull-based adaptive streaming solution that delivers HEVC-coded video content through conventional HTTP servers, where the client switches to its desired quality, resolution, or bitrate based on the available network bandwidth. Whereas previous studies of DASH have focused on H.264/AVC, we present an empirical evaluation of the HEVC-DASH system on a real-world test bed, which consists of an Apache HTTP server with GPAC, an MP4Client (GPAC) with an open HEVC-based DASH client, and a NETEM box in the middle emulating different network conditions. We investigate and analyze the performance of HEVC-DASH by exploring the impact of various network conditions such as packet loss, bandwidth, and delay on video quality. Furthermore, we compare the Intra and Random Access profiles of HEVC coding with the Intra profile of H.264/AVC when the correspondingly encoded video is streamed with DASH. Finally, we explore the correlation among the quality metrics and network conditions, and empirically establish under which conditions the different codecs can provide satisfactory performance.
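The client-side bitrate switching described above can be reduced to a simple throughput rule. The bitrate ladder and safety margin below are hypothetical, and this is a deliberately minimal heuristic, not GPAC's actual adaptation logic:

```python
# Hypothetical bitrate ladder (kbps) for the HEVC representations on the server.
LADDER = [350, 700, 1500, 3000, 6000]

def pick_representation(throughput_kbps, safety=0.8):
    """Pull-based client heuristic: choose the highest-bitrate
    representation that fits within a safety margin of the measured
    HTTP throughput, falling back to the lowest rung otherwise."""
    budget = throughput_kbps * safety
    candidates = [b for b in LADDER if b <= budget]
    return candidates[-1] if candidates else LADDER[0]

print(pick_representation(4000))  # -> 3000
print(pick_representation(200))   # -> 350 (lowest rung as a floor)
```

Real DASH clients add buffer-occupancy feedback and switching-hysteresis on top of this throughput rule, which is what the emulated packet-loss, bandwidth, and delay conditions stress.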
Dynamic frame resizing with convolutional neural network for efficient video compression
NASA Astrophysics Data System (ADS)
Kim, Jaehwan; Park, Youngo; Choi, Kwang Pyo; Lee, JongSeok; Jeon, Sunyoung; Park, JeongHoon
2017-09-01
In the past, video codecs such as VC-1 and H.263 used techniques that encode reduced-resolution video and restore the original resolution at the decoder to improve coding efficiency. The techniques in VC-1 and H.263 Annex Q are called dynamic frame resizing and reduced-resolution update mode, respectively. However, these techniques have not been widely used due to limited performance improvements that materialize only under specific conditions. In this paper, a video frame resizing (reduce/restore) technique based on machine learning is proposed to improve coding efficiency. The proposed method produces low-resolution video with a convolutional neural network (CNN) at the encoder and reconstructs the original resolution with a CNN at the decoder. The proposed method shows improved subjective performance on the high-resolution videos that dominate current consumption. To assess the subjective quality of the proposed method, Video Multi-method Assessment Fusion (VMAF), which has shown high reliability among subjective measurement tools, was used as the subjective metric. Moreover, to assess general performance, diverse bitrates were tested. Experimental results showed that the BD-rate based on VMAF was improved by about 51% compared to conventional HEVC. VMAF values were significantly improved at low bitrates in particular. When the method was subjectively tested, it also delivered better subjective visual quality at similar bitrates.
A new metric to assess temporal coherence for video retargeting
NASA Astrophysics Data System (ADS)
Li, Ke; Yan, Bo; Yuan, Binhang
2014-10-01
In video retargeting, how to assess performance in maintaining temporal coherence has become a prominent challenge. In this paper, we present a new objective measurement for assessing temporal coherence after video retargeting. It is a general metric for assessing jitter artifacts in both discrete and continuous video retargeting methods, and its accuracy is verified by psycho-visual tests. As a result, our proposed assessment method has considerable practical value.
NASA Astrophysics Data System (ADS)
Duplaga, M.; Leszczuk, M. I.; Papir, Z.; Przelaskowski, A.
2008-12-01
Wider dissemination of medical digital video libraries is affected by two correlated factors: resource-effective content compression and the diagnostic credibility it directly influences. It has been proved that it is possible to meet these contradictory requirements halfway for long-lasting, low-motion surgery recordings at compression ratios close to 100 (bronchoscopic procedures were the case study investigated). As the main supporting assumption, it was accepted that content can be compressed as long as clinicians are unable to sense a loss of video diagnostic fidelity (a visually lossless compression). Different market codecs were inspected by means of combined subjective and objective tests of their usability in medical video libraries. Subjective tests involved a panel of clinicians who had to classify compressed bronchoscopic video content according to its quality under the bubble sort algorithm. For objective tests, two metrics (a hybrid vector measure and Hosaka plots) were calculated frame by frame and averaged over the whole sequence.
Perceptual quality estimation of H.264/AVC videos using reduced-reference and no-reference models
NASA Astrophysics Data System (ADS)
Shahid, Muhammad; Pandremmenou, Katerina; Kondi, Lisimachos P.; Rossholm, Andreas; Lövström, Benny
2016-09-01
Reduced-reference (RR) and no-reference (NR) models for video quality estimation, using features that account for the impact of coding artifacts, spatio-temporal complexity, and packet losses, are proposed. The purpose of this study is to analyze a number of potentially quality-relevant features in order to select the most suitable set of features for building the desired models. The proposed sets of features have not been used in the literature and some of the features are used for the first time in this study. The features are employed by the least absolute shrinkage and selection operator (LASSO), which selects only the most influential of them toward perceptual quality. For comparison, we apply feature selection in the complete feature sets and ridge regression on the reduced sets. The models are validated using a database of H.264/AVC encoded videos that were subjectively assessed for quality in an ITU-T compliant laboratory. We infer that just two features selected by RR LASSO and two bitstream-based features selected by NR LASSO are able to estimate perceptual quality with high accuracy, higher than that of ridge, which uses more features. The comparisons with competing works and two full-reference metrics also verify the superiority of our models.
Quality metrics for sensor images
NASA Technical Reports Server (NTRS)
Ahumada, AL
1993-01-01
Methods are needed for evaluating the quality of augmented visual displays (AVID). Computational quality metrics will help summarize, interpolate, and extrapolate the results of human performance tests with displays. The FLM Vision group at NASA Ames has been developing computational models of visual processing and using them to develop computational metrics for similar problems. For example, display modeling systems use metrics for comparing proposed displays, halftoning optimizing methods use metrics to evaluate the difference between the halftone and the original, and image compression methods minimize the predicted visibility of compression artifacts. The visual discrimination models take as input two arbitrary images A and B and compute an estimate of the probability that a human observer will report that A is different from B. If A is an image that one desires to display and B is the actual displayed image, such an estimate can be regarded as an image quality metric reflecting how well B approximates A. There are additional complexities associated with the problem of evaluating the quality of radar and IR enhanced displays for AVID tasks. One important problem is the question of whether intruding obstacles are detectable in such displays. Although the discrimination model can handle detection situations by making B the original image A plus the intrusion, this detection model makes the inappropriate assumption that the observer knows where the intrusion will be. Effects of signal uncertainty need to be added to our models. A pilot needs to make decisions rapidly. The models need to predict not just the probability of a correct decision, but the probability of a correct decision by the time the decision needs to be made. That is, the models need to predict latency as well as accuracy. Luce and Green have generated models for auditory detection latencies. Similar models are needed for visual detection. Most image quality models are designed for static imagery. 
Watson has been developing a general spatial-temporal vision model to optimize video compression techniques. These models need to be adapted and calibrated for AVID applications.
Cultural Heritage Reconstruction from Historical Photographs and Videos
NASA Astrophysics Data System (ADS)
Condorelli, F.; Rinaudo, F.
2018-05-01
Historical archives preserve invaluable treasures and play a critical role in the conservation of Cultural Heritage. Old photographs and videos that have survived over time and are stored in these archives preserve traces of architectural and urban transformation and, in many cases, are the only evidence of buildings that no longer exist. They are a precious source of enormous informative potential for Cultural Heritage documentation. Thanks to photogrammetric techniques, it is possible to extract from these sources metric information useful for 3D virtual reconstructions of monuments and historic buildings. This paper explores ways to search for, classify, and group historical data by considering their possible use in metric documentation, and aims to provide an overview of the critical and open issues of the methodologies that could be used to process these data. A practical example is described and presented as a case study: the video "Torino 1928", an old movie dating from the 1930s, was processed to reconstruct the temporary pavilions of the "Exposition" held in Turin in 1928. Despite the initial concerns about processing this kind of data, the experimental methodology used in this research made it possible to achieve results of acceptable quality.
HealthRecSys: A semantic content-based recommender system to complement health videos.
Sanchez Bocanegra, Carlos Luis; Sevillano Ramos, Jose Luis; Rizo, Carlos; Civit, Anton; Fernandez-Luque, Luis
2017-05-15
The Internet, and its popularity, continues to grow at an unprecedented pace. Watching videos online is very popular; it is estimated that 500 hours of video are uploaded onto YouTube, a video-sharing service, every minute and that, by 2019, video formats will comprise more than 80% of Internet traffic. Health-related videos are very popular on YouTube, but their quality is always a matter of concern. One approach to enhancing the quality of online videos is to provide additional educational health content, such as websites, to support health consumers. This study investigates the feasibility of building a content-based recommender system that links health consumers to reputable health educational websites from MedlinePlus for a given health video from YouTube. The dataset for this study includes a collection of health-related videos and their available metadata. Semantic technologies (such as SNOMED-CT and Bio-ontology) were used to recommend health websites from MedlinePlus. A total of 26 health professionals participated in evaluating 253 recommended links for a total of 53 videos about general health, hypertension, or diabetes. The relevance of the recommended health websites from MedlinePlus to the videos was measured using information retrieval metrics such as the normalized discounted cumulative gain and precision at K. The majority of websites recommended by our system for health videos were relevant, based on ratings by health professionals. The normalized discounted cumulative gain was between 46% and 90% for the different topics. Our study demonstrates the feasibility of using a semantic content-based recommender system to enrich YouTube health videos. Evaluation with end users, in addition to healthcare professionals, will be required to determine the acceptance of these recommendations in a nonsimulated information-seeking context.
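The two retrieval metrics used in the evaluation can be sketched directly. The ratings array and the 0-3 relevance scale below are hypothetical stand-ins for the expert ratings; the formulas follow the standard definitions of precision at K and nDCG:

```python
import numpy as np

def precision_at_k(rels, k, threshold=1):
    """Fraction of the top-k recommended links rated relevant."""
    return float(np.mean(np.asarray(rels[:k]) >= threshold))

def ndcg_at_k(rels, k):
    """Normalized discounted cumulative gain over graded relevance ratings."""
    rels = np.asarray(rels[:k], dtype=float)
    discounts = 1.0 / np.log2(np.arange(2, rels.size + 2))  # 1/log2(rank+1)
    dcg = np.sum(rels * discounts)
    ideal = np.sum(np.sort(rels)[::-1] * discounts)         # best possible ordering
    return dcg / ideal if ideal > 0 else 0.0

# Hypothetical expert ratings (0 = irrelevant .. 3 = highly relevant)
# for the five websites recommended for one video.
ratings = [3, 0, 2, 3, 1]
print(round(precision_at_k(ratings, 5), 2))  # 0.8
print(round(ndcg_at_k(ratings, 5), 3))
```

nDCG rewards placing the highly rated links near the top of the recommendation list, which is why it complements the rank-agnostic precision at K.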
Reliable Adaptive Video Streaming Driven by Perceptual Semantics for Situational Awareness
Pimentel-Niño, M. A.; Saxena, Paresh; Vazquez-Castro, M. A.
2015-01-01
A novel cross-layer-optimized video adaptation driven by perceptual semantics is presented. The design target is live streamed video to enhance situational awareness in challenging communications conditions. Conventional solutions for recreational applications are inadequate, so a novel quality of experience (QoE) framework is proposed that allows fully controlled adaptation and enables perceptual semantic feedback. The framework relies on temporal/spatial abstraction for video applications serving beyond recreational purposes. An underlying cross-layer optimization technique takes into account feedback on network congestion (time) and erasures (space) to best distribute the available (scarce) bandwidth. Systematic random linear network coding (SRNC) adds reliability while preserving perceptual semantics. Objective metrics of the perceptual features in QoE show homogeneously high performance when using the proposed scheme. Finally, the proposed scheme is in line with content-aware trends, complying with the information-centric networking philosophy and architecture. PMID:26247057
On use of image quality metrics for perceptual blur modeling: image/video compression case
NASA Astrophysics Data System (ADS)
Cha, Jae H.; Olson, Jeffrey T.; Preece, Bradley L.; Espinola, Richard L.; Abbott, A. Lynn
2018-02-01
Linear system theory is employed to make target acquisition performance predictions for electro-optical/infrared imaging systems where the modulation transfer function (MTF) may be imposed by a nonlinear degradation process. Previous research relying on image quality metric (IQM) methods, which heuristically estimate the perceived MTF, has supported the view that an average perceived MTF can be used to model some types of degradation, such as image compression. Here, we discuss the validity of the IQM approach by mathematically analyzing the associated heuristics from the perspective of reliability, robustness, and tractability. Experiments with standard images compressed by x264 encoding suggest that the compression degradation can be estimated by a perceived MTF within boundaries defined by well-behaved curves with marginal error. Our results confirm that the IQM linearizer methodology provides a credible tool for sensor performance modeling.
Video quality assessment based on correlation between spatiotemporal motion energies
NASA Astrophysics Data System (ADS)
Yan, Peng; Mou, Xuanqin
2016-09-01
Video quality assessment (VQA) has been a hot research topic because of the rapidly increasing demand for video communications. From the earliest PSNR metric to advanced models that are perceptually aware, researchers have made great progress in this field by introducing properties of the human visual system (HVS) into VQA model design. Among the various algorithms that model how the HVS perceives motion, the spatiotemporal energy model has been validated to be highly consistent with psychophysical experiments. In this paper, we take the spatiotemporal energy model into VQA model design through the following steps. 1) Following the pristine spatiotemporal energy model proposed by Adelson et al., we apply linear filters, oriented in space-time and tuned in spatial frequency, to the reference and test videos respectively. The outputs of quadrature pairs of the above filters are then squared and summed to give two measures of motion energy, named the rightward and leftward energy responses, respectively. 2) Based on the pristine model, we calculate the sum of the rightward and leftward energy responses as spatiotemporal features representing perceptual quality information for videos, named total spatiotemporal motion energy maps. 3) The proposed FR-VQA model, named STME, is computed from statistics based on the pixel-wise correlation between the total spatiotemporal motion energy maps of the reference and distorted videos. The STME model was validated on the LIVE VQA Database by comparison with existing FR-VQA models. Experimental results show that STME achieves excellent prediction accuracy and ranks among state-of-the-art VQA models.
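Step 1 of the pipeline can be illustrated on a 2-D x-t slice: quadrature (cos/sin) Gabor pairs tilted in space-time are applied, and their squared outputs are summed into direction-selective energies, in the spirit of the Adelson-Bergen model the abstract builds on. Filter sizes and frequencies below are illustrative, not the paper's parameters:

```python
import numpy as np

def motion_energy(video_xt, fs=0.25, ft=0.25, sigma=4.0):
    """Opponent motion energy from quadrature Gabor pairs oriented in
    space-time. `video_xt` is a 2-D x-t slice; frequencies are in
    cycles/sample. Convolution is circular via FFT for brevity."""
    t, x = np.meshgrid(np.arange(-8, 9), np.arange(-8, 9), indexing="ij")
    env = np.exp(-(x**2 + t**2) / (2 * sigma**2))       # Gaussian envelope

    def energy(sign):
        # Complex Gabor = even (cos) + i*odd (sin) quadrature pair;
        # the space-time tilt of the carrier selects a motion direction.
        gabor = env * np.exp(2j * np.pi * (fs * x - sign * ft * t))
        resp = np.fft.ifft2(np.fft.fft2(video_xt) *
                            np.fft.fft2(gabor, s=video_xt.shape))
        return np.abs(resp) ** 2                        # squared & summed pair outputs

    return energy(+1), energy(-1)                       # rightward, leftward maps

# A grating drifting rightward at 1 px/frame excites the rightward channel more.
tt, xx = np.meshgrid(np.arange(64), np.arange(64), indexing="ij")
drift = np.sin(2 * np.pi * 0.25 * (xx - tt))
right, left = motion_energy(drift)
print(right.mean() > left.mean())  # True
```

In the full model these per-direction energies are summed into the total motion energy maps whose pixel-wise correlation between reference and distorted videos yields the STME score.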
Video-based face recognition via convolutional neural networks
NASA Astrophysics Data System (ADS)
Bao, Tianlong; Ding, Chunhui; Karmoshi, Saleem; Zhu, Ming
2017-06-01
Face recognition has been widely studied recently, but video-based face recognition remains a challenging task because of the low quality and large intra-class variation of face images captured from video. In this paper, we focus on two scenarios of video-based face recognition: 1) Still-to-Video (S2V) face recognition, i.e., querying a still face image against a gallery of video sequences; and 2) Video-to-Still (V2S) face recognition, the converse of the S2V scenario. A novel method is proposed to map still and video face images to a Euclidean space via a carefully designed convolutional neural network, after which Euclidean metrics are used to measure the distance between still and video images. The identities of still and video images grouped as pairs are used as supervision. In the training stage, a joint loss function that measures the Euclidean distance between the predicted features of training pairs and expanding vectors of still images is optimized to minimize the intra-class variation, while the inter-class variation is guaranteed by the large margin of still images. Transferred features are finally learned via the designed convolutional neural network. Experiments are performed on the COX face dataset. Experimental results show that our method achieves reliable performance compared with other state-of-the-art methods.
Subjective evaluation of H.265/HEVC based dynamic adaptive video streaming over HTTP (HEVC-DASH)
NASA Astrophysics Data System (ADS)
Irondi, Iheanyi; Wang, Qi; Grecos, Christos
2015-02-01
The Dynamic Adaptive Streaming over HTTP (DASH) standard is becoming increasingly popular for real-time adaptive HTTP streaming of internet video in response to unstable network conditions. Integration of DASH streaming techniques with the new H.265/HEVC video coding standard is a promising area of research. The performance of HEVC-DASH systems has been previously evaluated by a few researchers using objective metrics, however subjective evaluation would provide a better measure of the user's Quality of Experience (QoE) and overall performance of the system. This paper presents a subjective evaluation of an HEVC-DASH system implemented in a hardware testbed. Previous studies in this area have focused on using the current H.264/AVC (Advanced Video Coding) or H.264/SVC (Scalable Video Coding) codecs and moreover, there has been no established standard test procedure for the subjective evaluation of DASH adaptive streaming. In this paper, we define a test plan for HEVC-DASH with a carefully justified data set employing longer video sequences that would be sufficient to demonstrate the bitrate switching operations in response to various network condition patterns. We evaluate the end user's real-time QoE online by investigating the perceived impact of delay, different packet loss rates, fluctuating bandwidth, and the perceived quality of using different DASH video stream segment sizes on a video streaming session using different video sequences. The Mean Opinion Score (MOS) results give an insight into the performance of the system and expectation of the users. The results from this study show the impact of different network impairments and different video segments on users' QoE and further analysis and study may help in optimizing system performance.
NASA Astrophysics Data System (ADS)
Myszkowski, Karol; Tawara, Takehiro; Seidel, Hans-Peter
2002-06-01
In this paper, we consider applications of perception-based video quality metrics to improve the performance of global lighting computations for dynamic environments. For this purpose we extend the Visible Difference Predictor (VDP) developed by Daly to handle computer animations. We incorporate into the VDP the spatio-velocity CSF model developed by Kelly. The CSF model requires data on the velocity of moving patterns across the image plane. We use the 3D image warping technique to compensate for the camera motion, and we conservatively assume that the motion of animated objects (usually strong attractors of the visual attention) is fully compensated by the smooth pursuit eye motion. Our global illumination solution is based on stochastic photon tracing and takes advantage of temporal coherence of lighting distribution, by processing photons both in the spatial and temporal domains. The VDP is used to keep noise inherent in stochastic methods below the sensitivity level of the human observer. As a result a perceptually-consistent quality across all animation frames is obtained.
NASA Astrophysics Data System (ADS)
Anderson, Monica; David, Phillip
2007-04-01
Implementation of an intelligent, automated target acquisition and tracking system alleviates the need for operators to monitor video continuously. Such a system could identify situations that fatigued operators could easily miss. If an automated acquisition and tracking system plans motions to maximize a coverage metric, how does the performance of that system change when the user intervenes and manually moves the camera? How can the operator give input to the system about what is important, and understand how that relates to the overall balance between surveillance and coverage? In this paper, we address these issues by introducing a new formulation of the average linear uncovered length (ALUL) metric, specially designed for use in surveilling urban environments. This metric coordinates the often competing goals of acquiring new targets and tracking existing targets. In addition, it provides feedback on current system performance to system users in terms of the system's theoretical maximum and minimum performance. We show the successful integration of the algorithm via simulation.
Phase-based motion magnification video for monitoring of vital signals using the Hermite transform
NASA Astrophysics Data System (ADS)
Brieva, Jorge; Moya-Albor, Ernesto
2017-11-01
In this paper we present a new Eulerian phase-based motion magnification technique using the Hermite Transform (HT) decomposition, inspired by the Human Visual System (HVS). We test our method on a sequence of a newborn baby breathing and on a video sequence showing the heartbeat at the wrist, detecting and magnifying the heart pulse with our technique. Our motion magnification approach is compared to the Laplacian phase-based approach by means of quantitative metrics (based on the RMS error and the Fourier transform) that measure the quality of both reconstruction and magnification. In addition, a noise robustness analysis is performed for the two methods.
On mobile wireless ad hoc IP video transports
NASA Astrophysics Data System (ADS)
Kazantzidis, Matheos
2006-05-01
Multimedia transports in wireless, ad hoc, multi-hop, or mobile networks must be capable of obtaining information about the network and adaptively tuning sending and encoding parameters to the network response. Obtaining meaningful metrics to guide a stable congestion control mechanism in the transport (i.e., one that is passive, simple, end-to-end, and network-technology independent) is a complex problem. Equally difficult is obtaining a reliable QoS metric that agrees with user perception in a client/server or distributed environment. Existing metrics, objective or subjective, are commonly used before or after a transmission to test or report on it, and require access to both the original and the transmitted frames. In this paper, we propose that efficient and successful video delivery and the optimization of overall network QoS require innovation in a) direct measurement of available and bottleneck capacity for congestion control and b) a meaningful subjective QoS metric that is dynamically reported to the video sender. Once these are in place, a binomial (stable, fair, and TCP-friendly) algorithm can be used to determine the sending rate and other packet video parameters. An adaptive MPEG codec can then continually test and fit its parameters and its temporal-spatial data/error-control balance using the perceived-QoS dynamic feedback. We suggest a new measurement based on a packet dispersion technique that is independent of underlying network mechanisms. We then present a binomial control based on direct measurements. We implement a QoS metric that is known to agree with user perception (MPQM) in a client/server, distributed environment by using predetermined table lookups and characterization of video content.
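The packet dispersion idea reduces to a one-line estimate: two packets sent back-to-back leave the bottleneck link spaced by its per-packet transmission time, so capacity is packet size divided by the arrival gap. A textbook sketch with illustrative numbers, not the paper's exact estimator:

```python
def dispersion_capacity(packet_size_bytes, t_first, t_second):
    """Packet-pair dispersion estimate of bottleneck capacity:
    capacity (bits/s) ~= packet size / inter-arrival gap."""
    gap = t_second - t_first            # seconds between the two arrivals
    return packet_size_bytes * 8 / gap  # bits per second

# 1500-byte packets arriving 1.2 ms apart -> a 10 Mbit/s bottleneck.
print(dispersion_capacity(1500, 0.0, 0.0012))  # 10000000.0
```

In practice many pairs are sent and the estimates are filtered, since cross-traffic queued between the pair stretches or compresses the gap.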
Evaluation schemes for video and image anomaly detection algorithms
NASA Astrophysics Data System (ADS)
Parameswaran, Shibin; Harguess, Josh; Barngrover, Christopher; Shafer, Scott; Reese, Michael
2016-05-01
Video anomaly detection is a critical research area in computer vision and a natural first step before applying object recognition algorithms. Many algorithms that detect anomalies (outliers) in videos and images have been introduced in recent years. However, these algorithms behave and perform differently depending on the domains and tasks to which they are applied. In order to better understand the strengths and weaknesses of outlier algorithms and their applicability in a particular domain or task of interest, it is important to measure and quantify their performance using appropriate evaluation metrics. Many evaluation metrics have been used in the literature, such as precision curves, precision-recall curves, and receiver operating characteristic (ROC) curves. In order to construct these different metrics, it is also important to choose an appropriate evaluation scheme that decides when a proposed detection is considered a true or a false detection. Choosing the right evaluation metric and the right scheme is critical, since the choice can introduce positive or negative bias in the measuring criterion and may favor (or work against) a particular algorithm or task. In this paper, we review evaluation metrics and popular evaluation schemes that are used to measure the performance of anomaly detection algorithms on videos and imagery with one or more anomalies. We analyze the biases introduced by these choices by measuring the performance of an existing anomaly detection algorithm.
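The metrics discussed above can be made concrete with a small sketch (the labels, scores, and threshold are synthetic): precision and recall at a fixed detection threshold, plus the false-positive rate, which together give one point on the precision-recall and ROC curves.

```python
# Labels: 1 = anomaly, 0 = normal; scores are hypothetical detector outputs.
def pr_and_roc_point(scores, labels, threshold):
    preds = [1 if s >= threshold else 0 for s in scores]
    tp = sum(p and l for p, l in zip(preds, labels))
    fp = sum(p and not l for p, l in zip(preds, labels))
    fn = sum((not p) and l for p, l in zip(preds, labels))
    tn = sum((not p) and (not l) for p, l in zip(preds, labels))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0   # recall = TPR
    fpr = fp / (fp + tn) if fp + tn else 0.0      # (TPR, FPR) = ROC point
    return precision, recall, fpr

scores = [0.9, 0.8, 0.4, 0.3, 0.2]
labels = [1,   0,   1,   0,   0]
print(pr_and_roc_point(scores, labels, 0.5))  # precision 0.5, recall 0.5, FPR 1/3
```

Sweeping the threshold and collecting these points traces out the full precision-recall and ROC curves; the evaluation *scheme* then decides what counts as a matched (true) detection in the first place.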
Lewis, Melissa L; Weber, René; Bowman, Nicholas David
2008-08-01
This paper proposes a new and reliable metric for measuring character attachment (CA), the connection felt by a video game player toward a video game character. Results of construct validity analyses indicate that the proposed CA scale has a significant relationship with self-esteem, addiction, game enjoyment, and time spent playing games; all of these relationships are predicted by theory. Additionally, CA levels for role-playing games differ significantly from CA levels of other character-driven games.
The Endockscope Using Next Generation Smartphones: "A Global Opportunity".
Tse, Christina; Patel, Roshan M; Yoon, Renai; Okhunov, Zhamshid; Landman, Jaime; Clayman, Ralph V
2018-06-02
The Endockscope combines a smartphone, a battery-powered flashlight, and a fiberoptic cystoscope, allowing for mobile videocystoscopy. We compared conventional videocystoscopy to the Endockscope paired with next-generation smartphones in an ex-vivo porcine bladder model to evaluate its image quality. The Endockscope consists of a three-dimensional (3D) printed attachment that connects a smartphone to a flexible fiberoptic cystoscope plus a 1000-lumen light-emitting diode (LED) cordless light source. Video recordings of porcine cystoscopy with a fiberoptic flexible cystoscope (Storz) were captured for each mobile device (iPhone 6, iPhone 6S, iPhone 7, Samsung S8, and Google Pixel) and for the high-definition H3-Z versatile camera (HD) setup with both the LED light source and the xenon light (XL) source. Eleven faculty urologists, blinded to the modality used, evaluated each video for image quality/resolution, brightness, color quality, sharpness, overall quality, and acceptability for diagnostic use. When comparing the Endockscope coupled to a Galaxy S8, iPhone 7, or iPhone 6S with the LED portable light source to the HD camera with XL, there were no statistically significant differences in any metric. 82% and 55% of evaluators considered the iPhone 7 + LED light source and the iPhone 6S + LED light, respectively, appropriate for diagnostic purposes, compared with 100% who considered the HD camera with XL appropriate. The iPhone 6 and Google Pixel coupled with the LED source were both inferior to the HD camera with XL in all metrics. The Endockscope system with an LED light source coupled with either an iPhone 7 or Samsung S8 (total cost: $750) is comparable to conventional videocystoscopy with a standard camera and XL light source (total cost: $45,000).
Consumer-based technology for distribution of surgical videos for objective evaluation.
Gonzalez, Ray; Martinez, Jose M; Lo Menzo, Emanuele; Iglesias, Alberto R; Ro, Charles Y; Madan, Atul K
2012-08-01
The Global Operative Assessment of Laparoscopic Skill (GOALS) is one validated metric utilized to grade laparoscopic skills and has been utilized to score recorded operative videos. To facilitate easier viewing of these recorded videos, we are developing novel techniques to distribute them to surgeons. The objective of this study is to determine the feasibility of utilizing widespread current consumer-based technology to assist in distributing appropriate videos for objective evaluation. Videos from residents were recorded through a direct connection from the camera processor's S-video output, via a cable and hub, to a standard laptop computer through a universal serial bus (USB) port. A standard consumer-based video editing program was utilized to capture the video and record it in an appropriate format. We utilized the mp4 format, and depending on the size of the file, the videos were scaled down (compressed), converted to another format (using a standard video editing program), or sliced into multiple videos. Standard consumer-based programs were utilized to convert the video into a format more appropriate for handheld personal digital assistants. In addition, the videos were uploaded to a social networking website and to video sharing websites. Recorded cases of laparoscopic cholecystectomy in a porcine model were utilized. Compression was required for all formats. All formats were accessed from home computers, work computers, and iPhones without difficulty. Qualitative analyses by four surgeons demonstrated quality appropriate for grading in all these formats. Our preliminary results show promise that, utilizing consumer-based technology, videos can be easily distributed to surgeons to grade via GOALS through various methods. Easy accessibility may help make evaluation of resident videos less complicated and cumbersome.
MATIN: a random network coding based framework for high quality peer-to-peer live video streaming.
Barekatain, Behrang; Khezrimotlagh, Dariush; Aizaini Maarof, Mohd; Ghaeini, Hamid Reza; Salleh, Shaharuddin; Quintana, Alfonso Ariza; Akbari, Behzad; Cabrera, Alicia Triviño
2013-01-01
In recent years, Random Network Coding (RNC) has emerged as a promising solution for efficient Peer-to-Peer (P2P) video multicasting over the Internet, probably because RNC noticeably increases the error resiliency and throughput of the network. However, the high transmission overhead arising from sending a large coefficients vector as a header has been the most important challenge of RNC. Moreover, because the Gauss-Jordan elimination method is employed, considerable computational complexity can be imposed on peers when decoding the encoded blocks and checking linear dependency among the coefficients vectors. In order to address these challenges, this study introduces MATIN, a random network coding based framework for efficient P2P video streaming. MATIN includes a novel coefficients matrix generation method that guarantees no linear dependency in the generated coefficients matrix. Using the proposed framework, each peer encapsulates one coefficient entry instead of n into the generated encoded packet, which results in very low transmission overhead. It is also possible to obtain the inverted coefficients matrix using a small number of simple arithmetic operations, so peers sustain very low computational complexity. As a result, MATIN permits random network coding to be more efficient in P2P video streaming systems. The results obtained from simulations using OMNeT++ show that it substantially outperforms RNC with Gauss-Jordan elimination by providing better video quality on peers in terms of four important performance metrics: video distortion, dependency distortion, end-to-end delay, and initial startup delay. PMID:23940530
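For context, the Gauss-Jordan decoding baseline that MATIN avoids can be sketched as follows. This is a generic illustration of random network coding, not MATIN itself: arithmetic is over the prime field GF(257) for readability (practical RNC typically uses GF(2^8)), and the coefficient matrix is a fixed, invertible example rather than randomly drawn.

```python
P = 257  # prime field modulus

def combine(coeffs, blocks):
    """Linear combinations of source blocks under the given coefficients."""
    return [[sum(c * b[j] for c, b in zip(row, blocks)) % P
             for j in range(len(blocks[0]))] for row in coeffs]

def gauss_jordan_decode(coeffs, coded):
    """Recover source blocks by Gauss-Jordan elimination mod P."""
    n = len(coeffs)
    aug = [list(c) + list(d) for c, d in zip(coeffs, coded)]
    for col in range(n):
        piv = next(r for r in range(col, n) if aug[r][col] % P)
        aug[col], aug[piv] = aug[piv], aug[col]
        inv = pow(aug[col][col], P - 2, P)  # modular inverse (Fermat)
        aug[col] = [x * inv % P for x in aug[col]]
        for r in range(n):
            if r != col and aug[r][col]:
                f = aug[r][col]
                aug[r] = [(x - f * y) % P for x, y in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]

blocks = [[10, 20, 30], [40, 50, 60], [70, 80, 90]]  # source data
coeffs = [[1, 2, 3], [2, 1, 5], [1, 0, 1]]           # invertible mod 257
coded = combine(coeffs, blocks)                      # what peers receive
print(gauss_jordan_decode(coeffs, coded) == blocks)  # True
```

The elimination loop is the O(n^3) cost (plus the n-entry header per packet) that MATIN's coefficient-generation scheme is designed to sidestep.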
Wave Energy Prize - 1/20th Testing - AquaHarmonics
Scharmen, Wesley
2016-09-02
Data from the 1/20th scale testing completed on the Wave Energy Prize for the AquaHarmonics team, including the 1/20th scale test plan, raw test data, video, photos, and data analysis results. The top level objective of the 1/20th scale device testing is to obtain the necessary measurements required for determining Average Climate Capture Width per Characteristic Capital Expenditure (ACE) and the Hydrodynamic Performance Quality (HPQ), key metrics for determining the Wave Energy Prize (WEP) winners.
Wave Energy Prize - 1/20th Testing - Waveswing America
Scharmen, Wesley
2016-08-19
Data from the 1/20th scale testing completed on the Wave Energy Prize for the Waveswing America team, including the 1/20th scale test plan, raw test data, video, photos, and data analysis results. The top level objective of the 1/20th scale device testing is to obtain the necessary measurements required for determining Average Climate Capture Width per Characteristic Capital Expenditure (ACE) and the Hydrodynamic Performance Quality (HPQ), key metrics for determining the Wave Energy Prize (WEP) winners.
Wave Energy Prize - 1/20th Testing - M3 Wave
Scharmen, Wesley
2016-08-12
Data from the 1/20th scale testing completed on the Wave Energy Prize for the M3 Wave team, including the 1/20th scale test plan, raw test data, video, photos, and data analysis results. The top level objective of the 1/20th scale device testing is to obtain the necessary measurements required for determining Average Climate Capture Width per Characteristic Capital Expenditure (ACE) and the Hydrodynamic Performance Quality (HPQ), key metrics for determining the Wave Energy Prize (WEP) winners.
Wave Energy Prize - 1/20th Testing - Sea Potential
Scharmen, Wesley
2016-09-23
Data from the 1/20th scale testing completed on the Wave Energy Prize for the Sea Potential team, including the 1/20th scale test plan, raw test data, video, photos, and data analysis results. The top level objective of the 1/20th scale device testing is to obtain the necessary measurements required for determining Average Climate Capture Width per Characteristic Capital Expenditure (ACE) and the Hydrodynamic Performance Quality (HPQ), key metrics for determining the Wave Energy Prize (WEP) winners.
Wave Energy Prize - 1/20th Testing - Oscilla Power
Scharmen, Wesley
2016-09-16
Data from the 1/20th scale testing completed on the Wave Energy Prize for the Oscilla Power team, including the 1/20th scale test plan, raw test data, video, photos, and data analysis results. The top level objective of the 1/20th scale device testing is to obtain the necessary measurements required for determining Average Climate Capture Width per Characteristic Capital Expenditure (ACE) and the Hydrodynamic Performance Quality (HPQ), key metrics for determining the Wave Energy Prize (WEP) winners.
Wave Energy Prize - 1/20th Testing - RTI Wave Power
Scharmen, Wesley
2016-09-30
Data from the 1/20th scale testing completed on the Wave Energy Prize for the RTI Wave Power team, including the 1/20th scale test plan, raw test data, video, photos, and data analysis results. The top level objective of the 1/20th scale device testing is to obtain the necessary measurements required for determining Average Climate Capture Width per Characteristic Capital Expenditure (ACE) and the Hydrodynamic Performance Quality (HPQ), key metrics for determining the Wave Energy Prize (WEP) winners.
Wave Energy Prize - 1/20th Testing - Harvest Wave Energy
Scharmen, Wesley
2016-08-26
Data from the 1/20th scale testing completed on the Wave Energy Prize for the Harvest Wave Energy team, including the 1/20th scale test plan, raw test data, video, photos, and data analysis results. The top level objective of the 1/20th scale device testing is to obtain the necessary measurements required for determining Average Climate Capture Width per Characteristic Capital Expenditure (ACE) and the Hydrodynamic Performance Quality (HPQ), key metrics for determining the Wave Energy Prize (WEP) winners.
No-reference quality assessment based on visual perception
NASA Astrophysics Data System (ADS)
Li, Junshan; Yang, Yawei; Hu, Shuangyan; Zhang, Jiao
2014-11-01
The visual quality assessment of images and videos is an ongoing hot research topic, which has become more and more important for numerous image and video processing applications with the rapid development of digital imaging and communication technologies. The goal of image quality assessment (IQA) algorithms is to automatically assess the quality of images and videos in agreement with human quality judgments. Up to now, two kinds of models have been used for IQA, namely full-reference (FR) and no-reference (NR) models. FR IQA algorithms interpret image quality as fidelity or similarity with a perfect image in some perceptual space. However, the reference image is not available in many practical applications, and an NR IQA approach is desired. Considering natural vision as optimized by millions of years of evolutionary pressure, many methods attempt to achieve consistency in quality prediction by modeling salient physiological and psychological features of the human visual system (HVS). To reach this goal, researchers try to simulate the HVS with image sparse coding and supervised machine learning: a typical HVS captures scenes by sparse coding and uses experienced knowledge to apperceive objects. In this paper, we propose a novel IQA approach based on visual perception. Firstly, a standard model of the HVS is studied and analyzed, and the sparse representation of the image is accomplished with the model; then, the mapping between sparse codes and subjective quality scores is trained with the regression technique of the least squares support vector machine (LS-SVM), which yields a regressor that can predict image quality; finally, the visual quality of an image is predicted with the trained regressor.
We validate the performance of the proposed approach on the Laboratory for Image and Video Engineering (LIVE) database, which contains the following distortion types: 227 JPEG2000 images, 233 JPEG images, 174 white-noise images, 174 Gaussian-blur images, and 174 fast-fading images. The database includes a subjective differential mean opinion score (DMOS) for each image. The experimental results show that the proposed approach can not only assess many kinds of distorted images but also exhibits superior accuracy and monotonicity.
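The regression step can be sketched as follows. This is a stand-in, not the paper's implementation: it uses kernel ridge regression with an RBF kernel (which coincides with a bias-free LS-SVM formulation), and the "sparse codes" and target "DMOS" values are synthetic.

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def train(X, y, reg=1e-3):
    # Solve (K + reg*I) alpha = y; reg plays the role of 1/gamma in LS-SVM.
    K = rbf_kernel(X, X)
    return np.linalg.solve(K + reg * np.eye(len(X)), y)

def predict(X_train, alpha, X_new):
    return rbf_kernel(X_new, X_train) @ alpha

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 5))   # stand-in for sparse-code feature vectors
y = X[:, 0] - 2 * X[:, 1]      # synthetic stand-in for DMOS targets
alpha = train(X, y)
pred = predict(X, alpha, X)
print(float(np.corrcoef(pred, y)[0, 1]) > 0.9)  # predictions track the targets
```

In the paper's pipeline, `X` would hold the sparse representations of training images and `y` their DMOS values; a held-out image's quality is then `predict(X, alpha, x_new)`.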
Quality models for audiovisual streaming
NASA Astrophysics Data System (ADS)
Thang, Truong Cong; Kim, Young Suk; Kim, Cheon Seog; Ro, Yong Man
2006-01-01
Quality is an essential factor in multimedia communication, especially in compression and adaptation. Quality metrics can be divided into three categories: within-modality quality, cross-modality quality, and multi-modality quality. Most research has so far focused on within-modality quality. Moreover, quality is normally considered only from the perceptual perspective. In practice, content may be drastically adapted, even converted to another modality; in this case, we should consider quality from the semantic perspective as well. In this work, we investigate multi-modality quality from the semantic perspective. To model semantic quality, we apply the concept of the "conceptual graph," which consists of semantic nodes and relations between the nodes. As a typical multi-modality example, we focus on an audiovisual streaming service. Specifically, we evaluate the amount of information conveyed by audiovisual content where both video and audio channels may be strongly degraded, and audio may even be converted to text. In the experiments, we also consider the perceptual quality model of audiovisual content, so as to see the difference from the semantic quality model.
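One simple way to realize a conceptual-graph semantic score, offered only as a sketch (the abstract does not specify the actual scoring rule, and the example graph below is invented), is the fraction of the original content's semantic nodes and relations that survive adaptation:

```python
def semantic_quality(orig_nodes, orig_rels, adapted_nodes, adapted_rels):
    """Fraction of the original graph's nodes + relations that are preserved."""
    kept_nodes = len(set(orig_nodes) & set(adapted_nodes))
    kept_rels = len(set(orig_rels) & set(adapted_rels))
    total = len(orig_nodes) + len(orig_rels)
    return (kept_nodes + kept_rels) / total if total else 1.0

orig_nodes = {"speaker", "slides", "speech"}
orig_rels = {("speaker", "gives", "speech"), ("speech", "refers_to", "slides")}
# Audio-to-text conversion keeps the speech content but loses who is speaking:
adapted_nodes = {"slides", "speech"}
adapted_rels = {("speech", "refers_to", "slides")}
print(semantic_quality(orig_nodes, orig_rels, adapted_nodes, adapted_rels))  # 0.6
```

A score of 1.0 would mean the adapted content conveys every concept and relation of the original, regardless of how perceptually degraded it is.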
NASA Technical Reports Server (NTRS)
1998-01-01
BioMetric Systems has an exclusive license to the Posture Video Analysis Tool (PVAT) developed at Johnson Space Center. PVAT uses videos from Space Shuttle flights to identify postures and other human factors in the workplace that could be limiting. The software also provides recommendations for appropriate postures for certain tasks and safe durations for potentially harmful positions. BioMetric Systems has further developed PVAT for use by hospitals, physical rehabilitation facilities, insurance companies, sports medicine clinics, oil companies, manufacturers, and the military.
Wave Energy Prize - 1/20th Testing - CalWave Power Technologies
Scharmen, Wesley
2016-09-09
Data from the 1/20th scale testing completed on the Wave Energy Prize for the CalWave Power Technologies team, including the 1/20th scale test plan, raw test data, video, photos, and data analysis results. The top level objective of the 1/20th scale device testing is to obtain the necessary measurements required for determining Average Climate Capture Width per Characteristic Capital Expenditure (ACE) and the Hydrodynamic Performance Quality (HPQ), key metrics for determining the Wave Energy Prize (WEP) winners.
Oropesa, Ignacio; Sánchez-González, Patricia; Chmarra, Magdalena K; Lamata, Pablo; Fernández, Alvaro; Sánchez-Margallo, Juan A; Jansen, Frank Willem; Dankelman, Jenny; Sánchez-Margallo, Francisco M; Gómez, Enrique J
2013-03-01
The EVA (Endoscopic Video Analysis) tracking system is a new system for extracting motions of laparoscopic instruments based on nonobtrusive video tracking. The feasibility of using EVA in laparoscopic settings has been tested in a box trainer setup. EVA makes use of an algorithm that employs information of the laparoscopic instrument's shaft edges in the image, the instrument's insertion point, and the camera's optical center to track the three-dimensional position of the instrument tip. A validation study of EVA comprised a comparison of the measurements achieved with EVA and the TrEndo tracking system. To this end, 42 participants (16 novices, 22 residents, and 4 experts) were asked to perform a peg transfer task in a box trainer. Ten motion-based metrics were used to assess their performance. Construct validation of the EVA has been obtained for seven motion-based metrics. Concurrent validation revealed that there is a strong correlation between the results obtained by EVA and the TrEndo for metrics, such as path length (ρ = 0.97), average speed (ρ = 0.94), or economy of volume (ρ = 0.85), proving the viability of EVA. EVA has been successfully validated in a box trainer setup, showing the potential of endoscopic video analysis to assess laparoscopic psychomotor skills. The results encourage further implementation of video tracking in training setups and image-guided surgery.
An Optical Flow-Based Full Reference Video Quality Assessment Algorithm.
K, Manasa; Channappayya, Sumohana S
2016-06-01
We present a simple yet effective optical flow-based full-reference video quality assessment (FR-VQA) algorithm for assessing the perceptual quality of natural videos. Our algorithm is based on the premise that local optical flow statistics are affected by distortions and that the deviation from pristine flow statistics is proportional to the amount of distortion. We characterize the local flow statistics using the mean, the standard deviation, the coefficient of variation (CV), and the minimum eigenvalue (λ_min) of the local flow patches. Temporal distortion is estimated as the change in the CV of the distorted flow with respect to the reference flow, and the correlation between λ_min of the reference and of the distorted patches. We rely on the robust multi-scale structural similarity index for spatial quality estimation. The computed temporal and spatial distortions are then pooled using a perceptually motivated heuristic to generate a spatio-temporal quality score. The proposed method is shown to be competitive with the state-of-the-art when evaluated on the LIVE SD database, the EPFL-PoliMI SD database, and the LIVE Mobile HD database. The distortions considered in these databases include those due to compression, packet loss, wireless channel errors, and rate adaptation. Our algorithm is flexible enough to allow for any robust FR spatial distortion metric for spatial distortion estimation. In addition, the proposed method is not only parameter-free but also independent of the choice of the optical flow algorithm. Finally, we show that replacing the optical flow vectors in our proposed method with the much coarser block motion vectors also results in an acceptable FR-VQA algorithm. Our algorithm is called the flow similarity index.
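The flow-statistics features described above can be sketched as follows (the patch size, the synthetic flow fields, and the reduction to a single CV-change/correlation pair are illustrative; the paper's exact pooling heuristic is not reproduced):

```python
import numpy as np

def patch_stats(flow, patch=8):
    """flow: (H, W, 2) array of optical-flow vectors -> per-patch CV, λ_min."""
    H, W, _ = flow.shape
    cvs, lmins = [], []
    for i in range(0, H - patch + 1, patch):
        for j in range(0, W - patch + 1, patch):
            p = flow[i:i + patch, j:j + patch].reshape(-1, 2)
            mag = np.linalg.norm(p, axis=1)
            cvs.append(mag.std() / (mag.mean() + 1e-8))  # coefficient of variation
            lmins.append(np.linalg.eigvalsh(np.cov(p.T)).min())
    return np.array(cvs), np.array(lmins)

def temporal_distortion(ref_flow, dist_flow):
    cv_r, lm_r = patch_stats(ref_flow)
    cv_d, lm_d = patch_stats(dist_flow)
    dcv = np.abs(cv_d - cv_r).mean()          # change in CV vs. reference
    corr = np.corrcoef(lm_r, lm_d)[0, 1]      # agreement of λ_min statistics
    return dcv, corr

rng = np.random.default_rng(0)
ref = rng.normal(1.0, 0.1, size=(32, 32, 2))  # synthetic reference flow field
dcv, corr = temporal_distortion(ref, ref.copy())
print(dcv == 0.0 and corr > 0.999)            # identical flow -> no distortion
```

In the full algorithm these temporal features would be pooled with a multi-scale SSIM spatial term to yield the final spatio-temporal score.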
The importance of expert feedback during endovascular simulator training.
Boyle, Emily; O'Keeffe, Dara A; Naughton, Peter A; Hill, Arnold D K; McDonnell, Ciaran O; Moneley, Daragh
2011-07-01
Complex endovascular skills are difficult to obtain in the clinical environment. Virtual reality (VR) simulator training is a valuable addition to current training curricula, but is there a benefit in the absence of expert trainers? Eighteen endovascular novices performed a renal artery angioplasty/stenting (RAS) on the Vascular Interventional Surgical Trainer simulator. They were randomized into three groups: Group A (n = 6, control), no performance feedback; Group B (n = 6, nonexpert feedback), feedback after every procedure from a nonexpert facilitator; and Group C (n = 6, expert feedback), feedback after every procedure from a consultant vascular surgeon. Each trainee completed the RAS six times. Simulator-measured performance metrics included procedural and fluoroscopy time, contrast volume, accuracy of balloon placement, and handling errors. Clinical errors were also measured by blinded video assessment. Data were analyzed using SPSS version 15. A clear learning curve was observed across the six trials. There were no significant differences between the three groups for the general performance metrics, but Group C made fewer errors than Groups A (P = .009) or B (P = .004). Video-based error assessment showed that Groups B and C performed better than Group A (P = .002 and P < .001, respectively). VR simulator training for novices can significantly improve general performance in the absence of expert trainers. Procedure-specific qualitative metrics are improved with expert feedback, but nonexpert facilitators can also enhance the quality of training and may represent a valuable alternative to expert clinical faculty. Copyright © 2011 Society for Vascular Surgery. Published by Mosby, Inc. All rights reserved.
Effect of Playing Video Games on Laparoscopic Skills Performance: A Systematic Review.
Glassman, Daniel; Yiasemidou, Marina; Ishii, Hiro; Somani, Bhaskar Kumar; Ahmed, Kamran; Biyani, Chandra Shekhar
2016-02-01
The advances in both video games and minimally invasive surgery have led many to consider the potential positive relationship between the two. This review aims to evaluate outcomes of studies that investigated the correlation between video game skills and performance in laparoscopic surgery. A systematic search was conducted on the PubMed/Medline and EMBASE databases for MeSH terms and keywords including "video games and laparoscopy," "computer games and laparoscopy," "Xbox and laparoscopy," "Nintendo Wii and laparoscopy," and "PlayStation and laparoscopy." Cohort studies, case reports, letters, editorials, bulletins, and reviews were excluded. Studies in English, with task performance as the primary outcome, were included. The search period for this review was 1950 to December 2014. There were 57 abstracts identified: 4 of these were found to be duplicates, and 32 were found to be nonrelevant to the research question. Overall, 21 full texts were assessed; 15 were excluded according to the Medical Education Research Study Quality Instrument quality assessment criteria. The five studies included in this review were randomized controlled trials. Playing video games was found to reduce error in two studies (P = 0.002 and P = 0.045). For the same studies, however, several other metrics assessed were not significantly different between the control and intervention groups. One study showed a decrease in time for the group that played video games (P = 0.037) for one of two laparoscopic tasks performed. In the same study, however, when the groups were reversed (the initial control group became the intervention group and vice versa), a difference was not demonstrated (P for peg transfer = 0.465, P for cobra rope = 0.185). Finally, two further studies found no statistical difference between the game-playing group and the control group's performance. There is very limited evidence to support the claim that playing video games enhances surgical simulation performance.
Feature Quantization and Pooling for Videos
2014-05-01
Does video gaming affect orthopaedic skills acquisition? A prospective cohort-study.
Khatri, Chetan; Sugand, Kapil; Anjum, Sharika; Vivekanantham, Sayinthen; Akhtar, Kash; Gupte, Chinmay
2014-01-01
Previous studies have suggested that there is a positive correlation between the extent of video gaming and the efficiency of surgical skill acquisition on laparoscopic and endovascular surgical simulators amongst trainees. However, the link between video gaming and orthopaedic trauma simulation remains unexamined, in particular dynamic hip screw (DHS) simulation. To assess the effect of prior video gaming experience on virtual-reality (VR) haptic-enabled DHS simulator performance, 38 medical students, naïve to VR surgical simulation, were recruited and stratified by their video gaming exposure. Group 1 (n = 19, video gamers) were defined as those who played more than one hour per day in the last calendar year. Group 2 (n = 19, non-gamers) were defined as those who played video games less than one hour per calendar year. Both cohorts performed five attempts at completing a VR DHS procedure and repeated the task after a week. Metrics assessed included time taken for the task, simulated fluoroscopy time, and screw position. Medians and Bonett-Price 95% confidence intervals were calculated for seven real-time objective performance metrics. Data were confirmed as non-parametric by the Kolmogorov-Smirnov test. Analysis was performed using the Mann-Whitney U test for independent data, whilst the Wilcoxon signed-rank test was used for paired data. A result was deemed significant when a two-tailed p-value was less than 0.05. All 38 subjects completed the study. The groups were not significantly different at baseline. After ten attempts, there was no difference between Group 1 and Group 2 in any of the metrics tested, including time taken for the task, simulated fluoroscopy time, number of retries, tip-apex distance, percentage cut-out, and global score. Contrary to previous literature findings, there was no correlation between video gaming experience and gaining competency on a VR DHS simulator.
Video redaction: a survey and comparison of enabling technologies
NASA Astrophysics Data System (ADS)
Sah, Shagan; Shringi, Ameya; Ptucha, Raymond; Burry, Aaron; Loce, Robert
2017-09-01
With the prevalence of video recordings from smart phones, dash cams, body cams, and conventional surveillance cameras, privacy protection has become a major concern, especially in light of legislation such as the Freedom of Information Act. Video redaction is used to obfuscate sensitive and personally identifiable information. Today's typical workflow involves simple detection, tracking, and manual intervention. Automated methods rely on accurate detection mechanisms paired with robust tracking methods across the video sequence to ensure the redaction of all sensitive information while minimizing spurious obfuscations. Recent studies have explored the use of convolutional neural networks and recurrent neural networks for object detection and tracking. The present paper reviews the redaction problem and compares several state-of-the-art detection, tracking, and obfuscation methods as they relate to redaction. The comparison introduces an evaluation metric that is specific to video redaction performance. The metric can be evaluated in a manner that balances the penalties for false negatives and false positives according to the needs of a particular application, thereby assisting in the selection of component methods and their associated hyperparameters so that the redacted video has fewer frames requiring manual review.
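A redaction-specific metric of this general kind, with application-chosen penalties for missed regions and spurious obfuscations, might look like the following sketch (the per-frame region sets, the weights, and the scoring rule are assumptions for illustration, not the paper's actual formulation):

```python
def redaction_score(frames, w_fn=2.0, w_fp=1.0):
    """frames: list of (true_regions, redacted_regions) sets per frame.

    Returns (total weighted penalty, number of frames needing manual review).
    """
    penalty, flagged = 0.0, 0
    for truth, redacted in frames:
        fn = len(truth - redacted)       # sensitive info left visible
        fp = len(redacted - truth)       # needless obfuscation
        penalty += w_fn * fn + w_fp * fp
        flagged += 1 if fn or fp else 0  # imperfect frame -> manual review
    return penalty, flagged

frames = [({"face1", "plate1"}, {"face1", "plate1"}),  # perfect frame
          ({"face2"}, set()),                           # missed a face
          (set(), {"sign"})]                            # over-redacted
print(redaction_score(frames))  # (3.0, 2)
```

Raising `w_fn` relative to `w_fp` encodes the common preference that leaking sensitive information is worse than blurring something harmless.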
Adaptive metric learning with deep neural networks for video-based facial expression recognition
NASA Astrophysics Data System (ADS)
Liu, Xiaofeng; Ge, Yubin; Yang, Chao; Jia, Ping
2018-01-01
Video-based facial expression recognition has become increasingly important for many real-world applications. Despite the numerous efforts made for the single sequence, balancing the complex distribution of intra- and interclass variations between sequences has remained a great difficulty in this area. We propose the adaptive (N+M)-tuplet clusters loss function and optimize it together with the softmax loss in the training phase. The variations introduced by personal attributes are alleviated using similarity measurements of multiple samples in the feature space, with many fewer comparisons than conventional deep metric learning approaches, which enables metric calculations for large-scale data applications (e.g., videos). Both the spatial and temporal relations are well explored by a unified framework that consists of an Inception-ResNet network with long short-term memory (LSTM) and a two-branch fully connected layer structure. Our proposed method has been evaluated on three well-known databases, and the experimental results show that it outperforms many state-of-the-art approaches.
NASA Technical Reports Server (NTRS)
Ezer, Neta; Zumbado, Jennifer Rochlis; Sandor, Aniko; Boyer, Jennifer
2011-01-01
Human-robot systems are expected to have a central role in future space exploration missions that extend beyond low-earth orbit [1]. As part of a directed research project funded by NASA's Human Research Program (HRP), researchers at the Johnson Space Center have started to use a variety of techniques, including literature reviews, case studies, knowledge capture, field studies, and experiments to understand critical human-robot interaction (HRI) variables for current and future systems. Activities accomplished to date include observations of the International Space Station's Special Purpose Dexterous Manipulator (SPDM), Robonaut, and Space Exploration Vehicle (SEV), as well as interviews with robotics trainers, robot operators, and developers of gesture interfaces. A survey of methods and metrics used in HRI was completed to identify those most applicable to space robotics. These methods and metrics included techniques and tools associated with task performance, the quantification of human-robot interactions and communication, usability, human workload, and situation awareness. The need for more research in areas such as natural interfaces, compensation for loss of signal and poor video quality, psycho-physiological feedback, and common HRI testbeds was identified. The initial findings from these activities and planned future research are discussed.
Rudmik, Luke; Mattos, Jose; Schneider, John; Manes, Peter R; Stokken, Janalee K; Lee, Jivianne; Higgins, Thomas S; Schlosser, Rodney J; Reh, Douglas D; Setzen, Michael; Soler, Zachary M
2017-09-01
Measuring quality outcomes is an important prerequisite to improving quality of care. Rhinosinusitis represents a high-value target for improving quality of care because of its high disease prevalence, large economic burden, and wide practice variation. In this study we review the current state of quality measurement for the management of both acute (ARS) and chronic rhinosinusitis (CRS). The major national quality metric repositories and clearinghouses were queried. Additional searches included the American Academy of Otolaryngology-Head and Neck Surgery database, PubMed, and Google to attempt to capture any additional quality metrics. Seven quality metrics for ARS and 4 quality metrics for CRS were identified. ARS metrics focused on appropriateness of diagnosis (n = 1), antibiotic prescribing (n = 4), and radiologic imaging (n = 2). CRS quality metrics focused on appropriateness of diagnosis (n = 1), radiologic imaging (n = 1), and measurement of patient quality of life (n = 2). The Physician Quality Reporting System (PQRS) currently tracks 3 ARS quality metrics and 1 CRS quality metric. There are no outcome-based rhinosinusitis quality metrics and no metrics that assess the domains of safety, patient-centeredness, and timeliness of care. The current status of quality measurement for rhinosinusitis has focused primarily on the quality domain of efficiency and on process measures for ARS. More work is needed to develop, validate, and track outcome-based quality metrics along with CRS-specific metrics. Although there has been excellent work done to improve quality measurement for rhinosinusitis, there remain major gaps and challenges that need to be considered during the development of future metrics. © 2017 ARS-AAOA, LLC.
Image and Video Quality Assessment Using LCD: Comparisons with CRT Conditions
NASA Astrophysics Data System (ADS)
Tourancheau, Sylvain; Callet, Patrick Le; Barba, Dominique
In this paper, the impact of the display on quality assessment is addressed. Subjective quality assessment experiments were performed on both LCD and CRT displays. Two sets of still images and two sets of moving pictures were assessed using either an ACR or a SAMVIQ protocol. Altogether, eight experiments were conducted. Results are presented and discussed, and some differences are pointed out. Concerning moving pictures, these differences seem to be mainly due to LCD motion artefacts such as motion blur. LCD motion blur was measured both objectively and with psycho-physics experiments. A motion-blur metric based on the temporal characteristics of the LCD can be defined. A prediction model was then designed to predict the differences in perceived quality between CRT and LCD. This motion-blur-based model enables the estimation of perceived quality on an LCD with respect to the perceived quality on a CRT. Technical solutions to LCD motion blur can thus be evaluated on natural content in this way.
In-network adaptation of SHVC video in software-defined networks
NASA Astrophysics Data System (ADS)
Awobuluyi, Olatunde; Nightingale, James; Wang, Qi; Alcaraz Calero, Jose Maria; Grecos, Christos
2016-04-01
Software Defined Networks (SDNs), when combined with Network Function Virtualization (NFV), represent a paradigm shift in how future networks will behave and be managed. SDNs are expected to provide the underpinning technologies for future innovations such as 5G mobile networks and the Internet of Everything. The SDN architecture offers features that facilitate an abstracted and centralized global network view in which packet forwarding or dropping decisions are based on application flows. Software Defined Networks facilitate a wide range of network management tasks, including the adaptation of real-time video streams as they traverse the network. SHVC, the scalable extension of the recent H.265 (HEVC) standard, is a new video encoding standard that supports ultra-high-definition video streams with spatial resolutions of up to 7680×4320 and frame rates of 60 fps or more. The massive increase in bandwidth required to deliver these U-HD video streams dwarfs the bandwidth requirements of current high definition (HD) video. Such large bandwidth increases pose very significant challenges for network operators. In this paper we go substantially beyond the limited number of existing implementations and proposals for video streaming in SDNs, all of which have primarily focused on traffic engineering solutions such as load balancing. By implementing and empirically evaluating an SDN-enabled Media Adaptation Network Entity (MANE) we provide valuable empirical insight into the benefits and limitations of SDN-enabled video adaptation for real-time video applications. The SDN-MANE is the video adaptation component of our Video Quality Assurance Manager (VQAM) SDN control plane application, which also includes an SDN monitoring component to acquire network metrics and a decision-making engine using algorithms to determine the optimum adaptation strategy for any real-time video application flow given the current network conditions.
Our proposed VQAM application has been implemented and evaluated on an SDN allowing us to provide important benchmarks for video streaming over SDN and for SDN control plane latency.
Video-Based Method of Quantifying Performance and Instrument Motion During Simulated Phonosurgery
Conroy, Ellen; Surender, Ketan; Geng, Zhixian; Chen, Ting; Dailey, Seth; Jiang, Jack
2015-01-01
Objectives/Hypothesis To investigate the use of the Video-Based Phonomicrosurgery Instrument Tracking System to collect instrument position data during simulated phonomicrosurgery and calculate motion metrics using these data. We used this system to determine if novice subject motion metrics improved over 1 week of training. Study Design Prospective cohort study. Methods Ten subjects performed simulated surgical tasks once per day for 5 days. Instrument position data were collected and used to compute motion metrics (path length, depth perception, and motion smoothness). Data were analyzed to determine if motion metrics improved with practice time. Task outcome was also determined each day, and relationships between task outcome and motion metrics were used to evaluate the validity of motion metrics as indicators of surgical performance. Results Significant decreases over time were observed for path length (P <.001), depth perception (P <.001), and task outcome (P <.001). No significant change was observed for motion smoothness. Significant relationships were observed between task outcome and path length (P <.001), depth perception (P <.001), and motion smoothness (P <.001). Conclusions Our system can estimate instrument trajectory and provide quantitative descriptions of surgical performance. It may be useful for evaluating phonomicrosurgery performance. Path length and depth perception may be particularly useful indicators. PMID:24737286
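Two of the motion metrics named in the abstract above can be computed directly from sampled instrument-tip positions. The sketch below is an illustration only (the function names and the convention that depth is the z axis are assumptions, not the authors' implementation):

```python
import math

def path_length(positions):
    """Total 3-D distance travelled by the instrument tip over a
    sequence of (x, y, z) samples; shorter paths suggest more
    economical motion."""
    return sum(math.dist(a, b) for a, b in zip(positions, positions[1:]))

def depth_perception(positions):
    """Total motion along the depth (z) axis only; lower values are
    commonly interpreted as better depth control."""
    return sum(abs(b[2] - a[2]) for a, b in zip(positions, positions[1:]))
```

Motion smoothness is usually derived from higher derivatives of the trajectory (e.g., jerk) and is omitted here for brevity.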
Noise Estimation and Quality Assessment of Gaussian Noise Corrupted Images
NASA Astrophysics Data System (ADS)
Kamble, V. M.; Bhurchandi, K.
2018-03-01
Evaluating the exact quantity of noise present in an image, and the quality of an image in the absence of a reference image, is a challenging task. We propose a near-perfect noise estimation method and a no-reference image quality assessment method for images corrupted by Gaussian noise. The proposed methods obtain an initial estimate of the noise standard deviation present in an image using the median of the wavelet transform coefficients and then obtain a near-exact estimate using curve fitting. The proposed noise estimation method provides the estimate of noise within an average error of ±4%. For quality assessment, this noise estimate is mapped to fit the Differential Mean Opinion Score (DMOS) using a nonlinear function. The proposed methods require minimal training and yield both the noise estimate and an image quality score. Images from the Laboratory for Image and Video Engineering (LIVE) database and the Computational Perception and Image Quality (CSIQ) database are used for validation of the proposed quality assessment method. Experimental results show that the performance of the proposed quality assessment method is on par with existing no-reference image quality assessment metrics for Gaussian noise corrupted images.
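The initial estimation step described above follows a well-known robust rule: the noise standard deviation is approximated as the median absolute value of the diagonal (HH) wavelet detail coefficients divided by 0.6745. A minimal sketch using one level of a Haar-like transform (the paper's curve-fitting refinement stage is omitted, and the function name is our own):

```python
import statistics

def estimate_noise_sigma(img):
    """Initial noise estimate from the diagonal (HH) Haar wavelet
    subband of a grayscale image (list of rows):
    sigma ~ median(|HH|) / 0.6745 (robust MAD rule)."""
    hh = []
    for i in range(0, len(img) - 1, 2):
        for j in range(0, len(img[0]) - 1, 2):
            a, b = img[i][j], img[i][j + 1]
            c, d = img[i + 1][j], img[i + 1][j + 1]
            hh.append((a - b - c + d) / 2.0)  # Haar HH coefficient
    return statistics.median(abs(x) for x in hh) / 0.6745
```

The HH subband is used because smooth image content contributes little to it, so its coefficient distribution is dominated by the noise.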
Image quality assessment for teledermatology: from consumer devices to a dedicated medical device
NASA Astrophysics Data System (ADS)
Amouroux, Marine; Le Cunff, Sébastien; Haudrechy, Alexandre; Blondel, Walter
2017-03-01
An aging population as well as the growing incidence of type 2 diabetes induce a growing incidence of chronic skin disorders. Meanwhile, a chronic shortage of dermatologists leaves some areas underserved. Remote triage and assistance to homecare nurses (known as "teledermatology") appear to be promising solutions to provide dermatological evaluation within a reasonable time to patients wherever they live. Nowadays, teledermatology is often based on consumer devices (digital tablets, smartphones, webcams) whose photobiological and electrical safety levels do not match those of medical devices. The American Telemedicine Association (ATA) has published recommendations on quality standards for teledermatology. This "quick guide" does not address the issue of image quality, which is critical in domestic environments where lighting is rarely reproducible. Standardized approaches to image quality would allow clinical trial comparison, calibration, manufacturing quality control, and quality assurance during clinical use. Therefore, we defined several critical metrics using calibration charts (color and resolution charts) in order to assess image quality attributes such as resolution, lighting uniformity, color repeatability, and discrimination of key pairs of colors. Using such metrics, we compared the quality of images produced by several medical devices (handheld and video-dermoscopes) as well as by consumer devices (digital tablets and cameras) widely used in dermatology practice. Since diagnostic accuracy may be impaired by low-quality images, this study highlights that, from an optical point of view, teledermatology should only be performed using medical devices. Furthermore, a dedicated medical device should probably be developed for the time follow-up of skin lesions often managed in teledermatology, such as chronic wounds, which require i) noncontact imaging of ii) large areas of skin surface, two criteria that cannot be met using dermoscopes.
HOPE: An On-Line Piloted Handling Qualities Experiment Data Book
NASA Technical Reports Server (NTRS)
Jackson, E. B.; Proffitt, Melissa S.
2010-01-01
A novel on-line database for capturing most of the information obtained during piloted handling qualities experiments (either flight or simulated) is described. The Hyperlinked Overview of Piloted Evaluations (HOPE) web application is based on an open-source, object-oriented, Web-based front end (Ruby on Rails) that can be used with a variety of back-end relational database engines. The hyperlinked, on-line data book approach allows an easily traversed way of looking at a variety of collected data, including pilot ratings, pilot information, vehicle and configuration characteristics, test maneuvers, and individual flight test cards and repeat runs. It allows for on-line retrieval of pilot comments, both audio and transcribed, as well as time history data retrieval and video playback. Pilot questionnaires are recorded, as are pilot biographies. Simple statistics are calculated for each selected group of pilot ratings, allowing multiple ways to aggregate the data set (by pilot, by task, or by vehicle configuration, for example). Any number of per-run or per-task metrics can be captured in the database, and the entire run metrics dataset can be downloaded as comma-separated text for further analysis off-line. It is expected that this tool will be made available upon request.
Quality Metrics in Neonatal and Pediatric Critical Care Transport: A National Delphi Project.
Schwartz, Hamilton P; Bigham, Michael T; Schoettker, Pamela J; Meyer, Keith; Trautman, Michael S; Insoft, Robert M
2015-10-01
The transport of neonatal and pediatric patients to tertiary care facilities for specialized care demands monitoring the quality of care delivered during transport and its impact on patient outcomes. In 2011, pediatric transport teams in Ohio met to identify quality indicators permitting comparisons among programs. However, no set of national consensus quality metrics exists for benchmarking transport teams. The aim of this project was to achieve national consensus on appropriate neonatal and pediatric transport quality metrics. Modified Delphi technique. The first round of consensus determination was via electronic mail survey, followed by rounds of consensus determination in-person at the American Academy of Pediatrics Section on Transport Medicine's 2012 Quality Metrics Summit. All attendees of the American Academy of Pediatrics Section on Transport Medicine Quality Metrics Summit, conducted on October 21-23, 2012, in New Orleans, LA, were eligible to participate. Candidate quality metrics were identified through literature review and those metrics currently tracked by participating programs. Participants were asked in a series of rounds to identify "very important" quality metrics for transport. It was determined a priori that consensus on a metric's importance was achieved when at least 70% of respondents were in agreement. This is consistent with other Delphi studies. Eighty-two candidate metrics were considered initially. Ultimately, 12 metrics achieved consensus as "very important" to transport. These include metrics related to airway management, team mobilization time, patient and crew injuries, and adverse patient care events. Definitions were assigned to the 12 metrics to facilitate uniform data tracking among programs. The authors succeeded in achieving consensus among a diverse group of national transport experts on 12 core neonatal and pediatric transport quality metrics. 
We propose that transport teams across the country use these metrics to benchmark and guide their quality improvement activities.
Surgical instrument similarity metrics and tray analysis for multi-sensor instrument identification
NASA Astrophysics Data System (ADS)
Glaser, Bernhard; Schellenberg, Tobias; Franke, Stefan; Dänzer, Stefan; Neumuth, Thomas
2015-03-01
A robust identification of the instrument currently used by the surgeon is crucial for the automatic modeling and analysis of surgical procedures. Various approaches for intra-operative surgical instrument identification have been presented, mostly based on radio-frequency identification (RFID) or endoscopic video analysis. A novel approach is to identify the instruments on the instrument table of the scrub nurse with a combination of video and weight information. In a previous article, we successfully followed this approach and applied it to multiple instances of an ear, nose and throat (ENT) procedure and the surgical tray used therein. In this article, we present a metric for the suitability of the instruments of a surgical tray for identification by video and weight analysis and apply it to twelve trays of four different surgical domains (abdominal surgery, neurosurgery, orthopedics and urology). The used trays were digitized at the central sterile services department of the hospital. The results illustrate that surgical trays differ in their suitability for the approach. In general, additional weight information can significantly contribute to the successful identification of surgical instruments. Additionally, for ten different surgical instruments, ten exemplars of each instrument were tested for their weight differences. The samples indicate high weight variability in instruments with identical brand and model number. The results present a new metric for approaches aiming towards intra-operative surgical instrument detection and imply consequences for algorithms exploiting video and weight information for identification purposes.
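The weight half of the video-plus-weight identification idea can be illustrated with a simple candidate filter (the catalogue contents, tolerance, and function name below are hypothetical): instruments whose nominal weight lies within a tolerance of the measured value remain candidates, and video features would then disambiguate among them. The reported weight variability between exemplars of the same brand and model is exactly what limits how tight this tolerance can be.

```python
def identify_by_weight(measured, catalogue, tolerance=0.5):
    """Return the names of instruments whose nominal weight (grams)
    is within `tolerance` grams of the measured value.  A single
    candidate means weight alone identifies the instrument; several
    candidates must be resolved by the video channel."""
    return [name for name, w in catalogue.items()
            if abs(w - measured) <= tolerance]
```

With heavier, more distinct instruments (as in some surgical domains) the candidate lists shrink, which matches the abstract's observation that trays differ in their suitability for the approach.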
A large-scale video codec comparison of x264, x265 and libvpx for practical VOD applications
NASA Astrophysics Data System (ADS)
De Cock, Jan; Mavlankar, Aditya; Moorthy, Anush; Aaron, Anne
2016-09-01
Over the last few years, we have seen exciting improvements in video compression technology, due to the introduction of HEVC and royalty-free coding specifications such as VP9. The potential compression gains of HEVC over H.264/AVC have been demonstrated in different studies, and are usually based on the HM reference software. For VP9, substantial gains over H.264/AVC have been reported in some publications, whereas others reported less optimistic results. Differences in configurations between these publications make it more difficult to assess the true potential of VP9. Practical open-source encoder implementations such as x265 and libvpx (VP9) have matured, and are now showing high compression gains over x264. In this paper, we demonstrate the potential of these encoder implementations, with settings optimized for non-real-time random access, as used in a video-on-demand encoding pipeline. We report results from a large-scale video codec comparison test, which includes x264, x265 and libvpx. A test set consisting of a variety of titles with varying spatio-temporal characteristics from our catalog is used, resulting in tens of millions of encoded frames, hence larger than test sets previously used in the literature. Results are reported in terms of PSNR, SSIM, MS-SSIM, VIF and the recently introduced VMAF quality metric. BD-rate calculations show that using x265 and libvpx vs. x264 can lead to significant bitrate savings for the same quality. x265 outperforms libvpx in most cases, but the performance gap narrows (or even reverses) at the higher resolutions.
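The BD-rate figures mentioned above come from Bjøntegaard-delta calculations over rate-distortion curves. The sketch below is a simplified variant: it integrates the log-bitrate difference between two curves over their overlapping PSNR range using piecewise-linear interpolation, whereas the standard method fits a cubic polynomial, so its numbers will differ slightly from reference implementations.

```python
import math

def bd_rate(anchor, test):
    """Approximate BD-rate (%) between two rate-distortion curves,
    each a list of (bitrate_kbps, psnr_db) points.  Negative values
    mean the test codec saves bitrate at equal quality."""
    def log_rate_at(curve, q):
        # Piecewise-linear interpolation of log(bitrate) vs. PSNR.
        pts = sorted(curve, key=lambda p: p[1])
        for (r0, q0), (r1, q1) in zip(pts, pts[1:]):
            if q0 <= q <= q1:
                t = (q - q0) / (q1 - q0)
                return math.log(r0) + t * (math.log(r1) - math.log(r0))
        raise ValueError("PSNR outside curve range")

    lo = max(min(q for _, q in anchor), min(q for _, q in test))
    hi = min(max(q for _, q in anchor), max(q for _, q in test))
    n = 100  # integration samples over the overlapping PSNR range
    diffs = [log_rate_at(test, lo + (hi - lo) * k / n) -
             log_rate_at(anchor, lo + (hi - lo) * k / n)
             for k in range(n + 1)]
    avg = sum(diffs) / len(diffs)
    return (math.exp(avg) - 1.0) * 100.0
```

For example, a test curve that needs exactly half the bitrate of the anchor at every PSNR level yields a BD-rate of -50%.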
Verification testing of the compression performance of the HEVC screen content coding extensions
NASA Astrophysics Data System (ADS)
Sullivan, Gary J.; Baroncini, Vittorio A.; Yu, Haoping; Joshi, Rajan L.; Liu, Shan; Xiu, Xiaoyu; Xu, Jizheng
2017-09-01
This paper reports on verification testing of the coding performance of the screen content coding (SCC) extensions of the High Efficiency Video Coding (HEVC) standard (Rec. ITU-T H.265 | ISO/IEC 23008-2 MPEG-H Part 2). The coding performance of the HEVC screen content model (SCM) reference software is compared with that of the HEVC test model (HM) without the SCC extensions, as well as with the Advanced Video Coding (AVC) joint model (JM) reference software, for both lossy and mathematically lossless compression using All-Intra (AI), Random Access (RA), and Low-delay B (LB) encoding structures and using similar encoding techniques. Video test sequences in 1920×1080 RGB 4:4:4, YCbCr 4:4:4, and YCbCr 4:2:0 colour sampling formats with 8 bits per sample are tested in two categories: "text and graphics with motion" (TGM) and "mixed" content. For lossless coding, the encodings are evaluated in terms of relative bit-rate savings. For lossy compression, subjective testing was conducted at 4 quality levels for each coding case, and the test results are presented through mean opinion score (MOS) curves. The relative coding performance is also evaluated in terms of Bjøntegaard-delta (BD) bit-rate savings for equal PSNR quality. The perceptual tests and objective metric measurements show a very substantial benefit in coding efficiency for the SCC extensions, and provide consistent results with a high degree of confidence. For TGM video, the estimated bit-rate savings ranged from 60-90% relative to the JM and 40-80% relative to the HM, depending on the AI/RA/LB configuration category and colour sampling format.
Gopalakrishnan, Ravichandran C; Karunakaran, Manivannan
2014-01-01
Nowadays, quality of service (QoS) is an important concern in research areas such as distributed systems, real-time multimedia applications, and networking. The requirements of these systems are to satisfy reliability, uptime, security, and throughput constraints as well as application-specific requirements. Real-time multimedia applications are commonly distributed over the network and must meet various time constraints across networks without disrupting control flows. In particular, video compressors produce variable-bit-rate streams that mismatch the constant-bit-rate channels typically provided by classical real-time protocols, severely reducing the efficiency of network utilization. Thus, it is necessary to enlarge the communication bandwidth to transfer the compressed multimedia streams, using the Flexible Time-Triggered Enhanced Switched Ethernet (FTT-ESE) protocol. FTT-ESE automatically calculates the compression level and changes the bandwidth of the stream. This paper focuses on low-latency multimedia transmission over Ethernet with dynamic QoS management. The proposed framework provides dynamic QoS for multimedia transmission over Ethernet with the FTT-ESE protocol. This paper also presents distinct QoS metrics based on both image quality and network features. Experiments with recorded and live video streams show the advantages of the proposed framework. To validate the solution we designed and implemented a simulator based on Matlab/Simulink, which is a tool to evaluate different network architectures using Simulink blocks.
Software metrics: Software quality metrics for distributed systems. [reliability engineering
NASA Technical Reports Server (NTRS)
Post, J. V.
1981-01-01
Software quality metrics were extended to cover distributed computer systems. Emphasis is placed on studying embedded computer systems and on viewing them within a system life cycle. The hierarchy of quality factors, criteria, and metrics was maintained. New software quality factors were added, including survivability, expandability, and evolvability.
Duncan, James R; Kline, Benjamin; Glaiberman, Craig B
2007-04-01
To create and test methods of extracting efficiency data from recordings of simulated renal stent procedures. Task analysis was performed and used to design a standardized testing protocol. Five experienced angiographers then performed 16 renal stent simulations using the Simbionix AngioMentor angiographic simulator. Audio and video recordings of these simulations were captured from multiple vantage points. The recordings were synchronized and compiled. A series of efficiency metrics (procedure time, contrast volume, and tool use) were then extracted from the recordings. The intraobserver and interobserver variability of these individual metrics was also assessed. The metrics were converted to costs and aggregated to determine the fixed and variable costs of a procedure segment or the entire procedure. Task analysis and pilot testing led to a standardized testing protocol suitable for performance assessment. Task analysis also identified seven checkpoints that divided the renal stent simulations into six segments. Efficiency metrics for these different segments were extracted from the recordings and showed excellent intra- and interobserver correlations. Analysis of the individual and aggregated efficiency metrics demonstrated large differences between segments as well as between different angiographers. These differences persisted when efficiency was expressed as either total or variable costs. Task analysis facilitated both protocol development and data analysis. Efficiency metrics were readily extracted from recordings of simulated procedures. Aggregating the metrics and dividing the procedure into segments revealed potential insights that could be easily overlooked because the simulator currently does not attempt to aggregate the metrics and only provides data derived from the entire procedure. The data indicate that analysis of simulated angiographic procedures will be a powerful method of assessing performance in interventional radiology.
Video quality assessment using M-SVD
NASA Astrophysics Data System (ADS)
Tao, Peining; Eskicioglu, Ahmet M.
2007-01-01
Objective video quality measurement is a challenging problem in a variety of video processing applications ranging from lossy compression to printing. An ideal video quality measure should be able to mimic the human observer. We present a new video quality measure, M-SVD, to evaluate distorted video sequences based on singular value decomposition. A computationally efficient approach is developed for full-reference (FR) video quality assessment. This measure is tested on the Video Quality Experts Group (VQEG) phase I FR-TV test data set. Our experiments show that the graphical measure displays the amount of distortion as well as the distribution of error in all frames of the video sequence, while the numerical measure correlates well with perceived video quality and outperforms PSNR and other objective measures by a clear margin.
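The core idea, measuring how singular values of local blocks change under distortion, can be sketched as follows. This is an illustration only: the published M-SVD works on larger (e.g., 8×8) blocks with a full SVD, whereas this sketch uses 2×2 blocks so the singular values can be computed in closed form, and the aggregation rule shown (mean absolute deviation of block distances from their median) is one common formulation rather than a verified reproduction.

```python
import math, statistics

def svals2x2(m):
    """Singular values of a 2x2 matrix (list of two rows), largest
    first, via the eigenvalues of M^T M."""
    (a, b), (c, d) = m
    tr = a * a + b * b + c * c + d * d      # trace of M^T M
    det = (a * d - b * c) ** 2              # det of M^T M
    disc = math.sqrt(max(tr * tr - 4 * det, 0.0))
    return (math.sqrt((tr + disc) / 2), math.sqrt(max((tr - disc) / 2, 0.0)))

def msvd(ref, dist, block=2):
    """Per-block Euclidean distance between singular-value vectors of
    reference and distorted frames; the global score aggregates how
    unevenly distortion is spread across blocks."""
    h, w = len(ref), len(ref[0])
    dists = []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            rb = [row[j:j + block] for row in ref[i:i + block]]
            db = [row[j:j + block] for row in dist[i:i + block]]
            sr, sd = svals2x2(rb), svals2x2(db)
            dists.append(math.hypot(sr[0] - sd[0], sr[1] - sd[1]))
    mid = statistics.median(dists)
    return sum(abs(x - mid) for x in dists) / len(dists)
```

The per-block distances themselves form the "graphical measure" described in the abstract: plotted over the frame, they show where the error concentrates.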
Impulsive noise removal from color video with morphological filtering
NASA Astrophysics Data System (ADS)
Ruchay, Alexey; Kober, Vitaly
2017-09-01
This paper deals with impulse noise removal from color video. The proposed noise removal algorithm employs switching filtering for denoising of color video; that is, detection of corrupted pixels by means of a novel morphological filtering, followed by replacement of the detected pixels based on estimates from uncorrupted pixels in previous frames. With the help of computer simulation we show that the proposed algorithm effectively removes impulse noise from color video. The performance of the proposed algorithm is compared, in terms of image restoration metrics, with that of common successful algorithms.
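The switching idea (detect first, then filter only the detected pixels) can be illustrated with a much simpler intra-frame variant than the authors' morphological, temporally assisted method. In the sketch below the detector is just a distance-to-local-median test, and the threshold and window size are arbitrary choices:

```python
def switching_median(channel, threshold=40):
    """Switching median filter on one color channel (list of rows).
    Pixels far from their 3x3 neighbourhood median are treated as
    impulses and replaced by that median; clean pixels pass
    unchanged, which preserves detail better than plain median
    filtering.  Border pixels are left untouched for simplicity."""
    h, w = len(channel), len(channel[0])
    out = [row[:] for row in channel]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            window = [channel[y][x]
                      for y in (i - 1, i, i + 1)
                      for x in (j - 1, j, j + 1)]
            med = sorted(window)[4]                    # 3x3 median
            if abs(channel[i][j] - med) > threshold:   # impulse detected
                out[i][j] = med
    return out
```

The paper's approach replaces the detector with morphological filtering and draws replacement values from uncorrupted pixels in previous frames rather than from the current frame.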
Establishing Quantitative Software Metrics in Department of the Navy Programs
2016-04-01
The report includes a quality-characteristics-to-metrics dependency matrix. In accomplishing this goal, a need exists for a formalized set of software quality metrics. This document establishes the validity of those necessary metrics.
Video Compression Study: h.265 vs h.264
NASA Technical Reports Server (NTRS)
Pryor, Jonathan
2016-01-01
H.265 video compression (also known as High Efficiency Video Coding (HEVC)) promises to provide either double the video quality at the same bandwidth or the same quality at half the bandwidth of h.264 video compression [1]. This study uses a Tektronix PQA500 to determine the video quality gains from h.265 encoding. This study also compares two video encoders to see how different implementations of h.264 and h.265 affect video quality at various bandwidths.
Perceptual tools for quality-aware video networks
NASA Astrophysics Data System (ADS)
Bovik, A. C.
2014-01-01
Monitoring and controlling the quality of the viewing experience of videos transmitted over increasingly congested networks (especially wireless networks) is a pressing problem owing to rapid advances in video-centric mobile communication and display devices that are straining the capacity of the network infrastructure. New developments in automatic perceptual video quality models offer tools that have the potential to be used to perceptually optimize wireless video, leading to more efficient video data delivery and better received quality. In this talk I will review key perceptual principles that are, or could be used to create effective video quality prediction models, and leading quality prediction models that utilize these principles. The goal is to be able to monitor and perceptually optimize video networks by making them "quality-aware."
Video conference quality assessment based on cooperative sensing of video and audio
NASA Astrophysics Data System (ADS)
Wang, Junxi; Chen, Jialin; Tian, Xin; Zhou, Cheng; Zhou, Zheng; Ye, Lu
2015-12-01
This paper presents a method for video conference quality assessment based on cooperative sensing of video and audio. In this method, a proposed video quality evaluation method is used to assess the quality of each video frame. The video frame is divided into a noise image and a filtered image by bilateral filtering, which is similar to the low-pass characteristic of human vision. The audio frames are evaluated with the PEAQ algorithm. The two results are integrated to evaluate the overall video conference quality. A video conference database was built to test the performance of the proposed method. The objective results correlate well with mean opinion scores (MOS), from which we conclude that the proposed method is effective in assessing video conference quality.
Display device-adapted video quality-of-experience assessment
NASA Astrophysics Data System (ADS)
Rehman, Abdul; Zeng, Kai; Wang, Zhou
2015-03-01
Today's viewers consume video content from a variety of connected devices, including smart phones, tablets, notebooks, TVs, and PCs. This imposes significant challenges for managing video traffic efficiently to ensure an acceptable quality-of-experience (QoE) for the end users as the perceptual quality of video content strongly depends on the properties of the display device and the viewing conditions. State-of-the-art full-reference objective video quality assessment algorithms do not take into account the combined impact of display device properties, viewing conditions, and video resolution while performing video quality assessment. We performed a subjective study in order to understand the impact of aforementioned factors on perceptual video QoE. We also propose a full reference video QoE measure, named SSIMplus, that provides real-time prediction of the perceptual quality of a video based on human visual system behaviors, video content characteristics (such as spatial and temporal complexity, and video resolution), display device properties (such as screen size, resolution, and brightness), and viewing conditions (such as viewing distance and angle). Experimental results have shown that the proposed algorithm outperforms state-of-the-art video quality measures in terms of accuracy and speed.
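SSIMplus itself is not specified in the abstract, but it builds on the SSIM family of measures. A whole-image, single-scale SSIM can be sketched as below; real implementations apply it over local sliding windows, and SSIMplus additionally adapts the result to display properties and viewing conditions, which this sketch does not attempt.

```python
import math

def ssim_global(x, y, dynamic_range=255):
    """Single-scale SSIM over two flattened signals: combines
    luminance (means), contrast (variances) and structure
    (covariance) terms with the standard stabilizing constants
    C1 = (0.01*L)^2 and C2 = (0.03*L)^2."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / (n - 1)
    vy = sum((b - my) ** 2 for b in y) / (n - 1)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)
    c1 = (0.01 * dynamic_range) ** 2
    c2 = (0.03 * dynamic_range) ** 2
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx * mx + my * my + c1) * (vx + vy + c2))
```

Identical signals score 1.0; any luminance shift, contrast change, or structural distortion pulls the score below 1.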
NASA Astrophysics Data System (ADS)
Li, Wei; Chen, Ting; Zhang, Wenjun; Shi, Yunyu; Li, Jun
2012-04-01
In recent years, music video data has been increasing at an astonishing speed. Shot segmentation and keyframe extraction are fundamental operations in organizing, indexing, and retrieving video content. In this paper a unified framework is proposed to detect shot boundaries and extract the keyframe of a shot. A music video is first segmented into shots using an illumination-invariant chromaticity histogram in an independent component (IC) analysis feature space. We then present a new metric, image complexity, computed from the ICs, to extract the keyframe within a shot. Experimental results show the framework is effective and performs well.
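The shot-boundary step can be illustrated with plain intensity histograms. The paper itself uses illumination-invariant chromaticity histograms in an IC feature space, so this grayscale sketch, with an assumed detection threshold, only conveys the underlying histogram-difference idea:

```python
import numpy as np

def hist_diff(f1, f2, bins=32):
    """Normalized L1 distance between intensity histograms of two frames."""
    h1, _ = np.histogram(f1, bins=bins, range=(0, 256), density=True)
    h2, _ = np.histogram(f2, bins=bins, range=(0, 256), density=True)
    return 0.5 * np.abs(h1 - h2).sum()

def shot_boundaries(frames, threshold=0.02):
    """Return indices i where frame i is deemed to start a new shot."""
    return [i for i in range(1, len(frames))
            if hist_diff(frames[i - 1], frames[i]) > threshold]
```

A large histogram distance between consecutive frames signals an abrupt cut; gradual transitions would need a windowed variant.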
Visually lossless compression of digital hologram sequences
NASA Astrophysics Data System (ADS)
Darakis, Emmanouil; Kowiel, Marcin; Näsänen, Risto; Naughton, Thomas J.
2010-01-01
Digital hologram sequences have great potential for the recording of 3D scenes of moving macroscopic objects as their numerical reconstruction can yield a range of perspective views of the scene. Digital holograms inherently have large information content and lossless coding of holographic data is rather inefficient due to the speckled nature of the interference fringes they contain. Lossy coding of still holograms and hologram sequences has shown promising results. By definition, lossy compression introduces errors in the reconstruction. In all of the previous studies, numerical metrics were used to measure the compression error and through it, the coding quality. Digital hologram reconstructions are highly speckled and the speckle pattern is very sensitive to data changes. Hence, numerical quality metrics can be misleading. For example, for low compression ratios, a numerically significant coding error can have visually negligible effects. Yet, in several cases, it is of high interest to know how much lossy compression can be achieved, while maintaining the reconstruction quality at visually lossless levels. Using an experimental threshold estimation method, the staircase algorithm, we determined the highest compression ratio that was not perceptible to human observers for objects compressed with Dirac and MPEG-4 compression methods. This level of compression can be regarded as the point below which compression is perceptually lossless although physically the compression is lossy. It was found that up to 4 to 7.5 fold compression can be obtained with the above methods without any perceptible change in the appearance of video sequences.
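The staircase threshold-estimation procedure mentioned above can be sketched as a simple 1-up/1-down rule. The study's exact protocol (step sizes, stopping rule) is not reproduced here, so the parameters below are illustrative assumptions:

```python
def staircase(detects, start, step, n_reversals=8):
    """1-up/1-down staircase: lower the level while the observer detects a
    difference, raise it while they do not; the mean level at the reversal
    points estimates the 50% detection threshold."""
    level, prev_dir = start, 0
    reversal_levels = []
    while len(reversal_levels) < n_reversals:
        direction = -1 if detects(level) else +1
        if prev_dir and direction != prev_dir:
            reversal_levels.append(level)  # direction changed: a reversal
        prev_dir = direction
        level += direction * step
    return sum(reversal_levels) / len(reversal_levels)
```

Here `level` would be the compression ratio presented to the observer; the returned threshold is the highest compression with no perceptible change.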
Blind prediction of natural video quality.
Saad, Michele A; Bovik, Alan C; Charrier, Christophe
2014-03-01
We propose a blind (no reference or NR) video quality evaluation model that is nondistortion specific. The approach relies on a spatio-temporal model of video scenes in the discrete cosine transform domain, and on a model that characterizes the type of motion occurring in the scenes, to predict video quality. We use the models to define video statistics and perceptual features that are the basis of a video quality assessment (VQA) algorithm that does not require the presence of a pristine video to compare against in order to predict a perceptual quality score. The contributions of this paper are threefold. 1) We propose a spatio-temporal natural scene statistics (NSS) model for videos. 2) We propose a motion model that quantifies motion coherency in video scenes. 3) We show that the proposed NSS and motion coherency models are appropriate for quality assessment of videos, and we utilize them to design a blind VQA algorithm that correlates highly with human judgments of quality. The proposed algorithm, called video BLIINDS, is tested on the LIVE VQA database and on the EPFL-PoliMi video database and shown to perform close to the level of top performing reduced and full reference VQA algorithms.
Analysis of Network Clustering Algorithms and Cluster Quality Metrics at Scale
Kobourov, Stephen; Gallant, Mike; Börner, Katy
2016-01-01
Overview Notions of community quality underlie the clustering of networks. While studies surrounding network clustering are increasingly common, a precise understanding of the relationship between different cluster quality metrics is lacking. In this paper, we examine the relationship between stand-alone cluster quality metrics and information recovery metrics through a rigorous analysis of four widely-used network clustering algorithms—Louvain, Infomap, label propagation, and smart local moving. We consider the stand-alone quality metrics of modularity, conductance, and coverage, and we consider the information recovery metrics of adjusted Rand score, normalized mutual information, and a variant of normalized mutual information used in previous work. Our study includes both synthetic graphs and empirical data sets of sizes varying from 1,000 to 1,000,000 nodes. Cluster Quality Metrics We find significant differences among the results of the different cluster quality metrics. For example, clustering algorithms can return a value of 0.4 out of 1 on modularity but score 0 out of 1 on information recovery. We find conductance, though imperfect, to be the stand-alone quality metric that best indicates performance on the information recovery metrics. Additionally, our study shows that the variant of normalized mutual information used in previous work cannot be assumed to differ only slightly from traditional normalized mutual information. Network Clustering Algorithms Smart local moving is the overall best performing algorithm in our study, but discrepancies between cluster evaluation metrics prevent us from declaring it an absolutely superior algorithm. Interestingly, Louvain performed better than Infomap in nearly all the tests in our study, contradicting the results of previous work in which Infomap was superior to Louvain.
We find that although label propagation performs poorly when clusters are less clearly defined, it scales efficiently and accurately to large graphs with well-defined clusters. PMID:27391786
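Of the stand-alone metrics compared above, conductance has a particularly compact definition. A minimal sketch for an undirected, unweighted graph given as an edge list:

```python
def conductance(edges, cluster):
    """Conductance of a node set S in an undirected, unweighted graph:
    (edges crossing the cut) / min(vol(S), vol(V - S)),
    where vol counts edge endpoints on each side."""
    cluster = set(cluster)
    cut = vol_in = vol_out = 0
    for u, v in edges:
        u_in, v_in = u in cluster, v in cluster
        vol_in += int(u_in) + int(v_in)
        vol_out += int(not u_in) + int(not v_in)
        if u_in != v_in:
            cut += 1
    denom = min(vol_in, vol_out)
    return cut / denom if denom else 0.0
```

A well-separated cluster has few cut edges relative to its volume, giving conductance near 0; values near 1 indicate the set is poorly separated from the rest of the graph.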
Establishing Qualitative Software Metrics in Department of the Navy Programs
2015-10-29
dedicated to providing the highest quality software to its users. In doing so, there is a need for a formalized set of Software Quality Metrics. The goal...of this paper is to establish the validity of those necessary quality metrics. In our approach we collected the data of over a dozen programs...provide the necessary variable data for our formulas and tested the formulas for validity. Keywords: metrics; software; quality
Jensen, Katrine; Bjerrum, Flemming; Hansen, Henrik Jessen; Petersen, René Horsleben; Pedersen, Jesper Holst; Konge, Lars
2017-06-01
The societies of thoracic surgery are working to incorporate simulation and competency-based assessment into specialty training. One challenge is the development of a simulation-based test, which can be used as an assessment tool. The study objective was to establish validity evidence for a virtual reality simulator test of a video-assisted thoracoscopic surgery (VATS) lobectomy of a right upper lobe. Participants with varying experience in VATS lobectomy were included. They were familiarized with a virtual reality simulator (LapSim®) and introduced to the steps of the procedure for a VATS right upper lobe lobectomy. The participants performed two VATS lobectomies on the simulator with a 5-min break between attempts. Nineteen pre-defined simulator metrics were recorded. Fifty-three participants from nine different countries were included. High internal consistency was found for the metrics with Cronbach's alpha coefficient for standardized items of 0.91. Significant test-retest reliability was found for 15 of the metrics (p-values <0.05). Significant correlations between the metrics and the participants' VATS lobectomy experience were identified for seven metrics (p-values <0.001), and 10 metrics showed significant differences between novices (0 VATS lobectomies performed) and experienced surgeons (>50 VATS lobectomies performed). A pass/fail level defined as approximately one standard deviation from the mean metric scores for experienced surgeons passed none of the novices (0% false positives) and failed four of the experienced surgeons (29% false negatives). This study is the first to establish validity evidence for a VATS right upper lobe lobectomy virtual reality simulator test. Several simulator metrics demonstrated significant differences between novices and experienced surgeons and pass/fail criteria for the test were set with acceptable consequences. This test can be used as a first step in assessing thoracic surgery trainees' VATS lobectomy competency.
Video quality pooling adaptive to perceptual distortion severity.
Park, Jincheol; Seshadrinathan, Kalpana; Lee, Sanghoon; Bovik, Alan Conrad
2013-02-01
It is generally recognized that severe video distortions that are transient in space and/or time have a large effect on overall perceived video quality. In order to understand this phenomenon, we study the distribution of spatio-temporally local quality scores obtained from several video quality assessment (VQA) algorithms on videos suffering from compression and lossy transmission over communication channels. We propose a content adaptive spatial and temporal pooling strategy based on the observed distribution. Our method adaptively emphasizes "worst" scores along both the spatial and temporal dimensions of a video sequence and also considers the perceptual effect of large-area cohesive motion flow such as egomotion. We demonstrate the efficacy of the method by testing it using three different VQA algorithms on the LIVE Video Quality database and the EPFL-PoliMI video quality database.
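The "worst-score" emphasis behind this pooling can be illustrated with simple fixed-fraction percentile pooling. The paper's actual strategy is content-adaptive across space and time, so this sketch only conveys the underlying idea:

```python
import numpy as np

def worst_fraction_pooling(local_scores, p=0.1):
    """Pool spatio-temporally local quality scores by averaging only the
    worst fraction p of them (lower score = worse quality)."""
    scores = np.sort(np.asarray(local_scores, dtype=float).ravel())
    k = max(1, int(np.ceil(p * scores.size)))
    return scores[:k].mean()
```

Compared with a plain mean, this pooling lets a short, severe distortion dominate the overall score, matching the observation that transient artifacts drive perceived quality.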
CVD2014-A Database for Evaluating No-Reference Video Quality Assessment Algorithms.
Nuutinen, Mikko; Virtanen, Toni; Vaahteranoksa, Mikko; Vuori, Tero; Oittinen, Pirkko; Hakkinen, Jukka
2016-07-01
In this paper, we present a new video database: CVD2014-Camera Video Database. In contrast to previous video databases, this database uses real cameras rather than introducing distortions via post-processing, which results in a complex distortion space in regard to the video acquisition process. CVD2014 contains a total of 234 videos that are recorded using 78 different cameras. Moreover, this database contains the observer-specific quality evaluation scores rather than only providing mean opinion scores. We have also collected open-ended quality descriptions that are provided by the observers. These descriptions were used to define the quality dimensions for the videos in CVD2014. The dimensions included sharpness, graininess, color balance, darkness, and jerkiness. At the end of this paper, a performance study of image and video quality algorithms for predicting the subjective video quality is reported. For this performance study, we proposed a new performance measure that accounts for observer variance. The performance study revealed that there is room for improvement regarding the video quality assessment algorithms. The CVD2014 video database has been made publicly available for the research community. All video sequences and corresponding subjective ratings can be obtained from the CVD2014 project page (http://www.helsinki.fi/psychology/groups/visualcognition/).
Iqbal, Sahar; Mustansar, Tazeen
2017-03-01
Sigma is a metric that quantifies the performance of a process as a rate of defects per million opportunities. In clinical laboratories, sigma metric analysis is used to assess the performance of the laboratory process system. The sigma metric is also used as a quality management strategy for a laboratory process, improving quality by addressing errors after identification. The aim of this study was to evaluate errors in the quality control of the analytical phase of the laboratory system by sigma metric. For this purpose sigma metric analysis was done for analytes using internal and external quality control as quality indicators, and the results were used to identify gaps and the need for modification in the laboratory quality control strategy. The sigma metric was calculated for the quality control program of ten clinical chemistry analytes, including glucose, chloride, cholesterol, triglyceride, HDL, albumin, direct bilirubin, total bilirubin, protein and creatinine, at two control levels. To calculate the sigma metric, imprecision and bias were calculated from internal and external quality control data, respectively. The minimum acceptable performance was set at 3 sigma, and Westgard sigma rules were applied to customize the quality control procedure. The sigma level was acceptable (≥3) for glucose (L2), cholesterol, triglyceride, HDL, direct bilirubin and creatinine at both levels of control. For the rest of the analytes the sigma metric was <3. The lowest sigma value was found for chloride (1.1) at L2, and the highest for creatinine (10.1) at L3; HDL had the highest sigma values at both control levels (8.8 and 8.0 at L2 and L3, respectively). We conclude that analytes with a sigma value <3 require strict monitoring and modification of the quality control procedure. In this study the application of sigma rules provided a practical solution for an improved and focused design of the QC procedure.
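The sigma metric itself is computed from the allowable total error, bias, and imprecision with the standard clinical-laboratory formula Sigma = (TEa − |bias|) / CV, all expressed in percent; the example values below are illustrative:

```python
def sigma_metric(tea_pct, bias_pct, cv_pct):
    """Six Sigma metric for a lab assay: (TEa - |bias|) / CV, all in percent.
    TEa: allowable total error; bias: from external QC; CV: from internal QC."""
    return (tea_pct - abs(bias_pct)) / cv_pct

# e.g. TEa = 10%, bias = 2%, CV = 2% gives sigma = 4.0 (acceptable, >= 3)
```

An assay at 6 sigma or above tolerates simple QC rules, while one below 3 sigma, like chloride here, needs stricter monitoring.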
Meaningful Assessment of Robotic Surgical Style using the Wisdom of Crowds.
Ershad, M; Rege, R; Fey, A Majewicz
2018-07-01
Quantitative assessment of surgical skills is an important aspect of surgical training; however, the proposed metrics are sometimes difficult to interpret and may not capture the stylistic characteristics that define expertise. This study proposes a methodology for evaluating the surgical skill, based on metrics associated with stylistic adjectives, and evaluates the ability of this method to differentiate expertise levels. We recruited subjects from different expertise levels to perform training tasks on a surgical simulator. A lexicon of contrasting adjective pairs, based on important skills for robotic surgery, inspired by the global evaluative assessment of robotic skills tool, was developed. To validate the use of stylistic adjectives for surgical skill assessment, posture videos of the subjects performing the task, as well as videos of the task were rated by crowd-workers. Metrics associated with each adjective were found using kinematic and physiological measurements through correlation with the crowd-sourced adjective assignment ratings. To evaluate the chosen metrics' ability in distinguishing expertise levels, two classifiers were trained and tested using these metrics. Crowd-assignment ratings for all adjectives were significantly correlated with expertise levels. The results indicate that naive Bayes classifier performs the best, with an accuracy of [Formula: see text], [Formula: see text], [Formula: see text], and [Formula: see text] when classifying into four, three, and two levels of expertise, respectively. The proposed method is effective at mapping understandable adjectives of expertise to the stylistic movements and physiological response of trainees.
YouTube as a Potential Training Resource for Laparoscopic Fundoplication.
Frongia, Giovanni; Mehrabi, Arianeb; Fonouni, Hamidreza; Rennert, Helga; Golriz, Mohammad; Günther, Patrick
To analyze the surgical proficiency and educational quality of YouTube videos demonstrating laparoscopic fundoplication (LF). In this cross-sectional study, a search was performed on YouTube for videos demonstrating the LF procedure. The surgical and educational proficiency was evaluated using the objective component rating scale, the educational quality rating score, and total video quality score. Statistical significance was determined by analysis of variance, receiver operating characteristic curve, and odds ratio analysis. A total of 71 videos were included in the study; 28 (39.4%) videos were evaluated as good, 23 (32.4%) were moderate, and 20 (28.2%) were poor. Good-rated videos were significantly longer (good, 22.0 ± 5.2min; moderate, 7.8 ± 0.9min; poor, 8.5 ± 1.0min; p = 0.007) and video duration was predictive of good quality (AUC, 0.672 ± 0.067; 95% CI: 0.541-0.802; p = 0.015). For good quality, the cut-off video duration was 7 minutes 42 seconds. This cut-off value had a sensitivity of 67.9%, a specificity of 60.5%, and an odds ratio of 3.23 (95% CI: 1.19-8.79; p = 0.022) in predicting good quality. Videos uploaded from industrial sources and with a higher views/days online ratio had a higher objective component rating scale and total video quality score. In contrast, the likes/dislikes ratio was not predictive of video quality. Many videos showing the LF procedure have been uploaded to YouTube with varying degrees of quality. A process for filtering LF videos with high surgical and educational quality is feasible by evaluating the video duration, uploading source, and the views/days online ratio. However, alternative video platforms aimed at professionals should also be considered for educational purposes. Copyright © 2016 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
YouTube as a potential training method for laparoscopic cholecystectomy.
Lee, Jun Suh; Seo, Ho Seok; Hong, Tae Ho
2015-08-01
The purpose of this study was to analyze the educational quality of laparoscopic cholecystectomy (LC) videos accessible on YouTube, one of the most important sources of internet-based medical information. The keyword 'laparoscopic cholecystectomy' was used to search on YouTube and the first 100 videos were analyzed. Among them, 27 videos were excluded and 73 videos were included in the study. An arbitrary score system for video quality, devised from existing LC guidelines, was used to evaluate the quality of the videos. Video demographics were analyzed by the quality and source of the video. Correlation analysis was performed. When analyzed by video quality, 11 (15.1%) were evaluated as 'good', 40 (54.8%) were 'moderate', and 22 (30.1%) were 'poor', and there were no differences in length, views per day, or number of likes, dislikes, and comments. When analyzed by source, 27 (37.0%) were uploaded by primary centers, 20 (27.4%) by secondary centers, 15 (20.5%) by tertiary centers, 5 (6.8%) by academic institutions, and 6 (8.2%) by commercial institutions. The mean score of the tertiary center group (6.0 ± 2.0) was significantly higher than the secondary center group (3.9 ± 1.4, P = 0.001). The video score had no correlation with views per day or number of likes. Many LC videos are accessible on YouTube with varying quality. Videos uploaded by tertiary centers showed the highest educational value. This discrepancy in video quality was not recognized by viewers. More videos with higher quality need to be uploaded, and an active filtering process is necessary.
The data quality analyzer: A quality control program for seismic data
NASA Astrophysics Data System (ADS)
Ringler, A. T.; Hagerty, M. T.; Holland, J.; Gonzales, A.; Gee, L. S.; Edwards, J. D.; Wilson, D.; Baker, A. M.
2015-03-01
The U.S. Geological Survey's Albuquerque Seismological Laboratory (ASL) has several initiatives underway to enhance and track the quality of data produced from ASL seismic stations and to improve communication about data problems to the user community. The Data Quality Analyzer (DQA) is one such development and is designed to characterize seismic station data quality in a quantitative and automated manner. The DQA consists of a metric calculator, a PostgreSQL database, and a Web interface: The metric calculator, SEEDscan, is a Java application that reads and processes miniSEED data and generates metrics based on a configuration file. SEEDscan compares hashes of metadata and data to detect changes in either and performs subsequent recalculations as needed. This ensures that the metric values are up to date and accurate. SEEDscan can be run as a scheduled task or on demand. The PostgreSQL database acts as a central hub where metric values and limited station descriptions are stored at the channel level with one-day granularity. The Web interface dynamically loads station data from the database and allows the user to make requests for time periods of interest, review specific networks and stations, plot metrics as a function of time, and adjust the contribution of various metrics to the overall quality grade of the station. The quantification of data quality is based on the evaluation of various metrics (e.g., timing quality, daily noise levels relative to long-term noise models, and comparisons between broadband data and event synthetics). Users may select which metrics contribute to the assessment and those metrics are aggregated into a "grade" for each station. 
The DQA is being actively used for station diagnostics and evaluation based on the completed metrics (availability, gap count, timing quality, deviation from a global noise model, deviation from a station noise model, coherence between co-located sensors, and comparison between broadband data and synthetics for earthquakes) on stations in the Global Seismographic Network and Advanced National Seismic System.
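The user-adjustable aggregation of metric values into a station grade can be sketched as a weighted average. The metric names, weights, and 0-100 scaling below are illustrative assumptions, not the DQA's actual configuration:

```python
def station_grade(metric_scores, weights):
    """Aggregate per-metric scores (each assumed scaled 0-100) into a single
    station grade using user-chosen weights."""
    total = sum(weights.values())
    return sum(metric_scores[name] * w for name, w in weights.items()) / total

# Hypothetical example: equal weight on availability and timing quality.
grade = station_grade(
    {"availability": 100.0, "timing_quality": 50.0},
    {"availability": 1.0, "timing_quality": 1.0},
)
```

Letting users adjust the weights, as the DQA's Web interface does, changes how much each metric contributes without recomputing the underlying metric values.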
[Clinical trial data management and quality metrics system].
Chen, Zhao-hua; Huang, Qin; Deng, Ya-zhong; Zhang, Yue; Xu, Yu; Yu, Hao; Liu, Zong-fan
2015-11-01
A data quality management system is essential to ensure accurate, complete, consistent, and reliable data collection in clinical research. This paper is devoted to various choices of data quality metrics. They are categorized by study status, e.g. study start-up, conduct, and close-out. In each category, metrics for different purposes are listed according to ALCOA+ principles such as completeness, accuracy, timeliness, and traceability. Some frequently used general quality metrics are also introduced. This paper provides as much detailed information as possible for each metric, including its definition, purpose, evaluation, referenced benchmark, and recommended targets, in favor of real practice. It is important that sponsors and data management service providers establish a robust, integrated clinical trial data quality management system to ensure sustainably high quality of clinical trial deliverables. It will also support enterprise-level data evaluation and benchmarking of data quality across projects, sponsors, and data management service providers by using objective metrics from real clinical trials. We hope this will be a significant input to accelerate the improvement of clinical trial data quality in the industry.
Quality metrics in high-dimensional data visualization: an overview and systematization.
Bertini, Enrico; Tatu, Andrada; Keim, Daniel
2011-12-01
In this paper, we present a systematization of techniques that use quality metrics to help in the visual exploration of meaningful patterns in high-dimensional data. In a number of recent papers, different quality metrics are proposed to automate the demanding search through large spaces of alternative visualizations (e.g., alternative projections or ordering), allowing the user to concentrate on the most promising visualizations suggested by the quality metrics. Over the last decade, this approach has witnessed a remarkable development but few reflections exist on how these methods are related to each other and how the approach can be developed further. For this purpose, we provide an overview of approaches that use quality metrics in high-dimensional data visualization and propose a systematization based on a thorough literature review. We carefully analyze the papers and derive a set of factors for discriminating the quality metrics, visualization techniques, and the process itself. The process is described through a reworked version of the well-known information visualization pipeline. We demonstrate the usefulness of our model by applying it to several existing approaches that use quality metrics, and we provide reflections on implications of our model for future research. © 2010 IEEE
Hens, Koen; Berth, Mario; Armbruster, Dave; Westgard, Sten
2014-07-01
Six Sigma metrics were used to assess the analytical quality of automated clinical chemistry and immunoassay tests in a large Belgian clinical laboratory and to explore the importance of the source used for estimation of the allowable total error. Clinical laboratories are continually challenged to maintain analytical quality. However, it is difficult to measure assay quality objectively and quantitatively. The Sigma metric is a single number that estimates quality based on the traditional parameters used in the clinical laboratory: allowable total error (TEa), precision, and bias. In this study, Sigma metrics were calculated for 41 clinical chemistry assays for serum and urine on five ARCHITECT c16000 chemistry analyzers. Controls at two analyte concentrations were tested and Sigma metrics were calculated using three different TEa targets (Ricos biological variability, CLIA, and RiliBÄK). Sigma metrics varied with analyte concentration, with the TEa target, and from analyzer to analyzer. Sigma values identified those assays that are analytically robust and require minimal quality control rules and those that exhibit more variability and require more complex rules. Analyzer-to-analyzer variability was assessed on the basis of Sigma metrics. Six Sigma is a more efficient way to control quality, but the lack of TEa targets for many analytes and the sometimes inconsistent TEa targets from different sources are important variables for the interpretation and application of Sigma metrics in a routine clinical laboratory. Sigma metrics are a valuable means of comparing the analytical quality of two or more analyzers to ensure the comparability of patient test results.
Garfjeld Roberts, Patrick; Guyver, Paul; Baldwin, Mathew; Akhtar, Kash; Alvand, Abtin; Price, Andrew J; Rees, Jonathan L
2017-02-01
To assess the construct and face validity of ArthroS, a passive haptic VR simulator. A secondary aim was to evaluate the novel performance metrics produced by this simulator. Two groups of 30 participants, each divided into novice, intermediate or expert based on arthroscopic experience, completed three separate tasks on either the knee or shoulder module of the simulator. Performance was recorded using 12 automatically generated performance metrics and video footage of the arthroscopic procedures. The videos were blindly assessed using a validated global rating scale (GRS). Participants completed a survey about the simulator's realism and training utility. This new simulator demonstrated construct validity of its tasks when evaluated against a GRS (p ≤ 0.003 in all cases). Regarding its automatically generated performance metrics, established outputs such as time taken (p ≤ 0.001) and instrument path length (p ≤ 0.007) also demonstrated good construct validity. However, two-thirds of the proposed 'novel metrics' the simulator reports could not distinguish participants based on arthroscopic experience. Face validity assessment rated the simulator as a realistic and useful tool for trainees, but the passive haptic feedback (a key feature of this simulator) was rated as less realistic. The ArthroS simulator has good task construct validity based on established objective outputs, but some of the novel performance metrics could not distinguish between levels of surgical experience. The passive haptic feedback of the simulator also needs improvement. If simulators could offer automated and validated performance feedback, this would facilitate improvements in the delivery of training by allowing trainees to practise and self-assess.
Prior video game utilization is associated with improved performance on a robotic skills simulator.
Harbin, Andrew C; Nadhan, Kumar S; Mooney, James H; Yu, Daohai; Kaplan, Joshua; McGinley-Hence, Nora; Kim, Andrew; Gu, Yiming; Eun, Daniel D
2017-09-01
Laparoscopic surgery and robotic surgery, two forms of minimally invasive surgery (MIS), have recently experienced a large increase in utilization. Prior studies have shown that video game experience (VGE) may be associated with improved laparoscopic surgery skills; however, similar data supporting a link between VGE and proficiency on a robotic skills simulator (RSS) are lacking. The objective of our study is to determine whether volume or timing of VGE had any impact on RSS performance. Pre-clinical medical students completed a comprehensive questionnaire detailing previous VGE across several time periods. Seventy-five subjects were ultimately evaluated in 11 training exercises on the daVinci Si Skills Simulator. RSS skill was measured by overall score, time to completion, economy of motion, average instrument collision, and improvement in Ring Walk 3 score. Using the nonparametric tests and linear regression, these metrics were analyzed for systematic differences between non-users, light, and heavy video game users based on their volume of use in each of the following four time periods: past 3 months, past year, past 3 years, and high school. Univariate analyses revealed significant differences between heavy and non-users in all five performance metrics. These trends disappeared as the period of VGE went further back. Our study showed a positive association between video game experience and robotic skills simulator performance that is stronger for more recent periods of video game use. The findings may have important implications for the evolution of robotic surgery training.
Healthcare4VideoStorm: Making Smart Decisions Based on Storm Metrics.
Zhang, Weishan; Duan, Pengcheng; Chen, Xiufeng; Lu, Qinghua
2016-04-23
Storm-based stream processing is widely used for real-time large-scale distributed processing. Knowing the run-time status and ensuring performance is critical to providing the expected dependability for some applications, e.g., continuous video processing for security surveillance. Existing scheduling strategies are too coarse-grained to achieve good performance, and they consider only network resources, not computing resources, while scheduling. In this paper, we propose Healthcare4Storm, a framework that derives Storm insights from Storm metrics to gain knowledge of the health status of an application, ultimately arriving at smart scheduling decisions. It takes into account both network and computing resources and conducts scheduling at a fine-grained level, using tuples instead of topologies. A comprehensive evaluation shows that the proposed framework performs well and can improve the dependability of Storm-based applications.
Quality assessment for color reproduction using a blind metric
NASA Astrophysics Data System (ADS)
Bringier, B.; Quintard, L.; Larabi, M.-C.
2007-01-01
This paper deals with image quality assessment, a field that nowadays plays an important role in various image processing applications. A number of objective image quality metrics, which may or may not correlate with subjective quality, have been developed during the last decade. Two categories of metrics can be distinguished: full-reference and no-reference. A full-reference metric evaluates the distortion introduced to an image with respect to a reference, whereas a no-reference approach attempts to model the judgment of image quality in a blind way. Unfortunately, a universal image quality model is not on the horizon, and empirical models established through psychophysical experimentation are generally used. In this paper, we focus only on the second category to evaluate the quality of color reproduction, introducing a blind metric based on human visual system modeling. The objective results are validated by single-media and cross-media subjective tests.
Video quality assessment using motion-compensated temporal filtering and manifold feature similarity
Yu, Mei; Jiang, Gangyi; Shao, Feng; Peng, Zongju
2017-01-01
A well-performing video quality assessment (VQA) method should be consistent with the human visual system for better prediction accuracy. In this paper, we propose a VQA method using motion-compensated temporal filtering (MCTF) and manifold feature similarity. More specifically, a group of frames (GoF) is first decomposed into a temporal high-pass component (HPC) and a temporal low-pass component (LPC) by MCTF. Manifold feature learning (MFL) and phase congruency (PC) are then used to predict the quality of the temporal LPC and temporal HPC, respectively. The quality measures of the LPC and the HPC are combined into a GoF quality, and a temporal pooling strategy is subsequently used to integrate GoF qualities into an overall video quality. The proposed VQA method appropriately processes temporal information in video through MCTF and temporal pooling, and simulates human visual perception through MFL. Experiments on publicly available video quality databases showed that, in comparison with several state-of-the-art VQA methods, the proposed method achieves better consistency with subjective video quality and can predict video quality more accurately. PMID:28445489
Harrington, Cuan M; Chaitanya, Vishwa; Dicker, Patrick; Traynor, Oscar; Kavanagh, Dara O
2018-02-14
Video gaming demands visual attention, hand-eye coordination, and depth perception, elements that may carry over to laparoscopic skill development. General video gaming has been shown to alter cortical plasticity and improve baseline and acquisition of minimally invasive skills. The present study aimed to evaluate skill acquisition associated with a commercially available dedicated laparoscopic video game (Underground) and its unique (laparoscopic-like) controller for the Nintendo® Wii U™ console. This single-blinded randomised controlled study was conducted with laparoscopically naive student volunteers with limited (< 3 h/week) video gaming backgrounds. Baseline laparoscopic skills were assessed using four basic tasks on a virtual reality (VR) simulator (LAP Mentor™, 3D Systems, Colorado, USA). Twenty participants were randomised to two groups: Group A was requested to complete 5 h of video gaming (Underground) per week, and Group B to avoid gaming beyond their normal frequency. After 4 weeks, participants were reassessed using the same VR tasks, and changes in simulator performance were assessed for each group and for intergroup variance using mixed-model regression. Significant inter- and intragroup performance differences were present for the video gaming and control groups across the four basic tasks. The video gaming group demonstrated significant improvements in thirty-one of the metrics examined, including dominant (p ≤ 0.004) and non-dominant (p < 0.050) instrument movements, path lengths (p ≤ 0.040), time taken (p ≤ 0.021), and end score [p ≤ 0.046 (task-dependent)]. The control group demonstrated improvements in fourteen measures. The video gaming group demonstrated significant (p < 0.05) improvements compared to the control group in five metrics.
Despite encouraged gameplay and placement of the console in participants' homes, voluntary engagement was lower than directed, owing to factors including game enjoyment (33.3%), lack of available time (22.2%), and entertainment distractions (11.1%). Our work revealed significant value in training with a dedicated laparoscopic video game for the acquisition of virtual laparoscopic skills. This novel serious game may provide a foundation for future surgical developments on game consoles in the home environment.
NASA Technical Reports Server (NTRS)
Basili, V. R.
1981-01-01
Work on metrics is discussed. Factors that affect software quality are reviewed. Metrics are discussed in terms of criteria achievement, reliability, and fault tolerance. Subjective and objective metrics are distinguished, and product/process and cost/quality metrics are characterized and discussed.
Objectification of perceptual image quality for mobile video
NASA Astrophysics Data System (ADS)
Lee, Seon-Oh; Sim, Dong-Gyu
2011-06-01
This paper presents an objective video quality evaluation method for quantifying the subjective quality of digital mobile video. The proposed method aims to objectify subjective quality by extracting edgeness and blockiness parameters. To evaluate the performance of the proposed algorithms, we carried out subjective video quality tests with the double-stimulus continuous quality scale method and obtained differential mean opinion score values for 120 mobile video clips. We then compared the performance of the proposed methods with that of existing methods in terms of the differential mean opinion score on these clips. Experimental results showed that the proposed methods were approximately 10% better than the edge peak signal-to-noise ratio of the J.247 method in terms of Pearson correlation.
Analysis of Network Clustering Algorithms and Cluster Quality Metrics at Scale.
Emmons, Scott; Kobourov, Stephen; Gallant, Mike; Börner, Katy
2016-01-01
Notions of community quality underlie the clustering of networks. While studies surrounding network clustering are increasingly common, a precise understanding of the relationship between different cluster quality metrics is lacking. In this paper, we examine the relationship between stand-alone cluster quality metrics and information recovery metrics through a rigorous analysis of four widely used network clustering algorithms: Louvain, Infomap, label propagation, and smart local moving. We consider the stand-alone quality metrics of modularity, conductance, and coverage, and we consider the information recovery metrics of adjusted Rand score, normalized mutual information, and a variant of normalized mutual information used in previous work. Our study includes both synthetic graphs and empirical data sets of sizes varying from 1,000 to 1,000,000 nodes. We find significant differences among the results of the different cluster quality metrics. For example, clustering algorithms can return a value of 0.4 out of 1 on modularity but score 0 out of 1 on information recovery. We find conductance, though imperfect, to be the stand-alone quality metric that best indicates performance on the information recovery metrics. Additionally, our study shows that the variant of normalized mutual information used in previous work cannot be assumed to differ only slightly from traditional normalized mutual information. Smart local moving is the overall best performing algorithm in our study, but discrepancies between cluster evaluation metrics prevent us from declaring it an absolutely superior algorithm. Interestingly, Louvain performed better than Infomap in nearly all the tests in our study, contradicting the results of previous work in which Infomap was superior to Louvain. We find that although label propagation performs poorly when clusters are less clearly defined, it scales efficiently and accurately to large graphs with well-defined clusters.
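The stand-alone metrics discussed above have standard closed-form definitions that can be computed directly. The following sketch is a minimal pure-Python illustration (the toy graph, adjacency representation, and community split are invented for this example, not taken from the study) of Newman modularity and conductance on a clean two-cluster graph:

```python
from itertools import combinations

def modularity(adj, communities):
    """Newman modularity Q = sum_c [ L_c/m - (d_c/(2m))^2 ] for an
    undirected graph given as a dict of adjacency sets."""
    m = sum(len(nbrs) for nbrs in adj.values()) / 2  # number of edges
    q = 0.0
    for comm in communities:
        comm = set(comm)
        # L_c: edges with both endpoints inside the community
        l_c = sum(1 for u, v in combinations(sorted(comm), 2) if v in adj[u])
        d_c = sum(len(adj[u]) for u in comm)  # total degree inside the community
        q += l_c / m - (d_c / (2 * m)) ** 2
    return q

def conductance(adj, community):
    """cut(S, S-bar) / min(vol(S), vol(S-bar)); lower is better."""
    s = set(community)
    cut = sum(1 for u in s for v in adj[u] if v not in s)
    vol_s = sum(len(adj[u]) for u in s)
    vol_rest = sum(len(adj[u]) for u in adj if u not in s)
    return cut / min(vol_s, vol_rest)

# Two triangles joined by a single bridge edge: a clean two-cluster graph.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
adj = {u: set() for u in range(6)}
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

print(modularity(adj, [{0, 1, 2}, {3, 4, 5}]))  # 5/14 ≈ 0.357
print(conductance(adj, {0, 1, 2}))              # 1/7 ≈ 0.143
```

The information recovery metrics the study compares (adjusted Rand score, normalized mutual information) additionally require ground-truth labels, which is why they can disagree with stand-alone scores such as the modularity computed here.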
Video quality assessment using a statistical model of human visual speed perception.
Wang, Zhou; Li, Qiang
2007-12-01
Motion is one of the most important types of information contained in natural video, but direct use of motion information in the design of video quality assessment algorithms has not been deeply investigated. Here we propose to incorporate a recent model of human visual speed perception [Nat. Neurosci. 9, 578 (2006)] and model visual perception in an information communication framework. This allows us to estimate both the motion information content and the perceptual uncertainty in video signals. Improved video quality assessment algorithms are obtained by incorporating the model as spatiotemporal weighting factors, where the weight increases with the information content and decreases with the perceptual uncertainty. Consistent improvement over existing video quality assessment algorithms is observed in our validation with the video quality experts group Phase I test data set.
A guide to calculating habitat-quality metrics to inform conservation of highly mobile species
Bieri, Joanna A.; Sample, Christine; Thogmartin, Wayne E.; Diffendorfer, James E.; Earl, Julia E.; Erickson, Richard A.; Federico, Paula; Flockhart, D. T. Tyler; Nicol, Sam; Semmens, Darius J.; Skraber, T.; Wiederholt, Ruscena; Mattsson, Brady J.
2018-01-01
Many metrics exist for quantifying the relative value of habitats and pathways used by highly mobile species. Properly selecting and applying such metrics requires substantial background in mathematics and understanding of the relevant management arena. To address this multidimensional challenge, we demonstrate and compare three measurements of habitat quality: graph-, occupancy-, and demographic-based metrics. Each metric provides insights into system dynamics, at the expense of increasing amounts and complexity of data and models. Our descriptions and comparisons of diverse habitat-quality metrics provide means for practitioners to overcome the modeling challenges associated with management or conservation of such highly mobile species. Whereas previous guidance for applying habitat-quality metrics has been scattered across diverse tracks of literature, we have brought this information together in an approachable format, including accessible descriptions and a modeling case study for a typical example that conservation professionals can adapt for their own decision contexts and focal populations. Considerations for resource managers: Management objectives, proposed actions, data availability and quality, and model assumptions are all relevant considerations when applying and interpreting habitat-quality metrics. Graph-based metrics answer questions related to habitat centrality and connectivity, are suitable for populations with any movement pattern, quantify basic spatial and temporal patterns of occupancy and movement, and require the least data. Occupancy-based metrics answer questions about the likelihood of persistence or colonization, are suitable for populations that undergo localized extinctions, quantify spatial and temporal patterns of occupancy and movement, and require a moderate amount of data. Demographic-based metrics answer questions about relative or absolute population size, are suitable for populations with any movement pattern, quantify demographic processes and population dynamics, and require the most data. More real-world examples applying occupancy-based, agent-based, and continuous-based metrics to seasonally migratory species are needed to better understand the challenges and opportunities for applying these metrics more broadly.
Mao, Shasha; Xiong, Lin; Jiao, Licheng; Feng, Tian; Yeung, Sai-Kit
2017-05-01
Riemannian optimization has been widely used to deal with the fixed low-rank matrix completion problem, and the Riemannian metric is a crucial factor in obtaining the search direction in Riemannian optimization. This paper proposes a new Riemannian metric that simultaneously considers the Riemannian geometry structure and the scaling information, and that is smoothly varying and invariant along the equivalence class. The proposed metric can trade off the Riemannian geometry structure against the scaling information effectively; essentially, it can be viewed as a generalization of some existing metrics. Based on the proposed Riemannian metric, we also design a Riemannian nonlinear conjugate gradient algorithm that can efficiently solve the fixed low-rank matrix completion problem. Experiments on fixed low-rank matrix completion, collaborative filtering, and image and video recovery illustrate that the proposed method is superior to state-of-the-art methods in convergence efficiency and numerical performance.
NASA Astrophysics Data System (ADS)
Hanhart, Philippe; Řeřábek, Martin; Ebrahimi, Touradj
2015-09-01
This paper reports the details and results of the subjective evaluations conducted at EPFL to evaluate the responses to the Call for Evidence (CfE) for High Dynamic Range (HDR) and Wide Color Gamut (WCG) Video Coding issued by the Moving Picture Experts Group (MPEG). The CfE on HDR/WCG Video Coding aims to explore whether the coding efficiency and/or the functionality of the current version of the HEVC standard can be significantly improved for HDR and WCG content. In total, nine submissions, five for Category 1 and four for Category 3a, were compared to the HEVC Main 10 Profile based Anchor. In particular, five HDR video contents, compressed at four bit rates by each proponent responding to the CfE, were used in the subjective evaluations. Further, a side-by-side presentation methodology was used in the subjective experiment to discriminate small differences between the Anchor and proponents. The subjective results show that the proposals provide evidence that coding efficiency can be improved in a statistically noticeable way over the MPEG CfE Anchors in terms of perceived quality within the investigated content. The paper further benchmarks the selected objective metrics based on their correlations with the subjective ratings. It is shown that PSNR-DE1000, HDR-VDP-2, and PSNR-Lx can reliably detect visible differences between the proposed encoding solutions and the current HEVC standard.
Defining quality metrics and improving safety and outcome in allergy care.
Lee, Stella; Stachler, Robert J; Ferguson, Berrylin J
2014-04-01
The delivery of allergy immunotherapy in the otolaryngology office is variable and lacks standardization. Quality metrics encompass the measurement of factors associated with good patient-centered care. These factors have yet to be defined in the delivery of allergy immunotherapy. We developed and applied quality metrics to 6 allergy practices affiliated with an academic otolaryngic allergy center. This work was conducted at a tertiary academic center providing care to over 1500 patients. We evaluated methods and variability between the 6 sites. Tracking of errors and anaphylaxis was initiated across all sites. A nationwide survey of academic and private allergists was used to collect data on current practice and use of quality metrics. The most common types of errors recorded were patient identification errors (n = 4), followed by vial mixing errors (n = 3) and dosing errors (n = 2). There were 7 episodes of anaphylaxis, of which 2 were secondary to dosing errors, for a rate of 0.01%, or 1 in every 10,000 injection visits per year. Site visits showed that 86% of key safety measures were followed. Analysis of the nationwide survey responses revealed that quality metrics are still not well defined by either medical or otolaryngic allergy practices. Academic practices were statistically more likely to use quality metrics (p = 0.021) and to perform systems reviews and audits in comparison to private practices (p = 0.005). Quality metrics in allergy delivery can help improve safety and quality of care. These metrics need to be further defined by otolaryngic allergists in the changing health care environment. © 2014 ARS-AAOA, LLC.
Data simulation for the Lightning Imaging Sensor (LIS)
NASA Technical Reports Server (NTRS)
Boeck, William L.
1991-01-01
This project aims to build a data analysis system that will utilize existing video tape scenes of lightning as viewed from space. The resultant data will be used for the design and development of the Lightning Imaging Sensor (LIS) software and algorithm analysis. The desire for statistically significant metrics implies that a large data set needs to be analyzed. Before 1990, the quality and quantity of video were insufficient to build a usable data set. At this point in time, there is usable data from missions STS-34, STS-32, STS-31, STS-41, STS-37, and STS-39. During the summer of 1990, a manual analysis system was developed to demonstrate that the video analysis is feasible and to identify techniques to deduce information that was not directly available. Because the closed circuit television system used on the space shuttle was intended for documentary TV, the current values of the camera focal length and pointing orientation, which are needed for photoanalysis, are not included in the system data. A large effort was needed to discover ancillary data sources as well as to develop indirect methods to estimate the necessary parameters. Any data system coping with full motion video faces an enormous bottleneck produced by the large data production rate and the need to move and store the digitized images. The manual system bypassed the video digitizing bottleneck by using a genlock to superimpose pixel coordinates on full motion video. Because the data set had to be obtained point by point by a human operating a computer mouse, the data output rate was small. The loan and subsequent acquisition of an Abekas digital frame store with a real-time digitizer moved the bottleneck from data acquisition to data transfer and storage. The semi-automated analysis procedure was developed using existing equipment and is described. A fully automated system is described in the hope that the components may come on the market at reasonable prices in the next few years.
Zhang, Wenchao; Zhao, Patrick X
2014-01-01
Background Extracted ion chromatogram (EIC) extraction and chromatographic peak detection are two important processing procedures in liquid chromatography/mass spectrometry (LC/MS)-based metabolomics data analysis. Most commonly, the LC/MS technique employs electrospray ionization as the ionization method. The EICs from LC/MS data are often noisy and contain high background signals. Furthermore, the chromatographic peak quality varies with respect to its location in the chromatogram and most peaks have zigzag shapes. Therefore, there is a critical need to develop effective metrics for quality evaluation of EICs and chromatographic peaks in LC/MS based metabolomics data analysis. Results We investigated a comprehensive set of potential quality evaluation metrics for extracted EICs and detected chromatographic peaks. Specifically, for EIC quality evaluation, we analyzed the mass chromatographic quality index (MCQ index) and propose a novel quality evaluation metric, the EIC-related global zigzag index, which is based on an EIC's first-order derivatives. For chromatographic peak quality evaluation, we analyzed and compared six metrics: sharpness, Gaussian similarity, signal-to-noise ratio, peak significance level, triangle peak area similarity ratio and the local peak-related local zigzag index. Conclusions Although the MCQ index is suited for selecting and aligning analyte components, it cannot fairly evaluate EICs with high background signals or those containing only a single peak. Our proposed EIC-related global zigzag index is robust enough to evaluate EIC qualities in both scenarios. Of the six peak quality evaluation metrics, the sharpness, peak significance level, and zigzag index outperform the others due to the zigzag nature of LC/MS chromatographic peaks. Furthermore, using several peak quality metrics in combination is more efficient than individual metrics in peak quality evaluation. PMID:25350128
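The abstract does not give the exact formulation of the EIC-related global zigzag index; the sketch below is only a plausible second-difference-based zigzag measure, with the normalization by trace length and squared dynamic range being assumptions of this illustration, meant to show how oscillation in a trace's first-order derivative can be scored:

```python
def zigzag_index(trace):
    """Hypothetical zigzag measure for an EIC intensity trace: mean
    squared second difference, normalized by the squared dynamic range.
    The published index may use a different normalization."""
    n = len(trace)
    rng = max(trace) - min(trace)
    if n < 3 or rng == 0:
        return 0.0
    ss = sum((2 * trace[i] - trace[i - 1] - trace[i + 1]) ** 2
             for i in range(1, n - 1))
    return ss / (n * rng ** 2)

# A smooth triangular peak versus the same peak with alternating noise.
smooth = [0, 1, 2, 3, 4, 3, 2, 1, 0]
noisy = [x + 0.5 * (-1) ** i for i, x in enumerate(smooth)]
print(zigzag_index(smooth) < zigzag_index(noisy))  # True: noise raises the score
```

Under this formulation a clean single peak scores low regardless of background level, which is the behavior the abstract attributes to the proposed index.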
Development of quality metrics for ambulatory pediatric cardiology: Infection prevention.
Johnson, Jonathan N; Barrett, Cindy S; Franklin, Wayne H; Graham, Eric M; Halnon, Nancy J; Hattendorf, Brandy A; Krawczeski, Catherine D; McGovern, James J; O'Connor, Matthew J; Schultz, Amy H; Vinocur, Jeffrey M; Chowdhury, Devyani; Anderson, Jeffrey B
2017-12-01
In 2012, the American College of Cardiology's (ACC) Adult Congenital and Pediatric Cardiology Council established a program to develop quality metrics to guide ambulatory practices for pediatric cardiology. The council chose five areas on which to focus its efforts: chest pain, Kawasaki disease, tetralogy of Fallot, transposition of the great arteries after arterial switch, and infection prevention. Here, we describe the process, evaluation, and results of the Infection Prevention Committee's metric design effort. The infection prevention metrics team consisted of 12 members from 11 institutions in North America. The group agreed to work on specific infection prevention topics including antibiotic prophylaxis for endocarditis, rheumatic fever, and asplenia/hyposplenism; influenza vaccination and respiratory syncytial virus prophylaxis (palivizumab); preoperative methods to reduce intraoperative infections; vaccinations after cardiopulmonary bypass; hand hygiene; and testing to identify splenic function in patients with heterotaxy. An extensive literature review was performed. When available, previously published guidelines were used fully in determining metrics. The committee chose eight metrics to submit to the ACC Quality Metric Expert Panel for review. Ultimately, metrics regarding hand hygiene and influenza vaccination recommendation for patients did not pass the RAND analysis. Both endocarditis prophylaxis metrics and the RSV/palivizumab metric passed the RAND analysis but fell out during the open comment period. Three metrics passed all analyses, including those for antibiotic prophylaxis in patients with heterotaxy/asplenia, for influenza vaccination compliance in healthcare personnel, and for adherence to recommended regimens of secondary prevention of rheumatic fever. The lack of convincing data to guide quality improvement initiatives in pediatric cardiology is widespread, particularly in infection prevention.
Despite this, three metrics were able to be developed for use in the ACC's quality efforts for ambulatory practice. © 2017 Wiley Periodicals, Inc.
Narayan, Anand; Cinelli, Christina; Carrino, John A; Nagy, Paul; Coresh, Josef; Riese, Victoria G; Durand, Daniel J
2015-11-01
As the US health care system transitions toward value-based reimbursement, there is an increasing need for metrics to quantify health care quality. Within radiology, many quality metrics are in use, and still more have been proposed, but there have been limited attempts to systematically inventory these measures and classify them using a standard framework. The purpose of this study was to develop an exhaustive inventory of public and private sector imaging quality metrics classified according to the classic Donabedian framework (structure, process, and outcome). A systematic review was performed in which eligibility criteria included published articles (from 2000 onward) from multiple databases. Studies were double-read, with discrepancies resolved by consensus. For the radiology benefit management group (RBM) survey, the six nationally known companies were surveyed. Outcome measures were organized on the basis of the standard categories (structure, process, and outcome) and reported using Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. The search strategy yielded 1,816 citations; review yielded 110 reports (29 included for final analysis). Three of six RBMs (50%) responded to the survey; the websites of the other RBMs were searched for additional metrics. Seventy-five unique metrics were reported: 35 structure (46%), 20 outcome (27%), and 20 process (27%) metrics. For RBMs, 35 metrics were reported: 27 structure (77%), 4 process (11%), and 4 outcome (11%) metrics. The most commonly cited structure, process, and outcome metrics included ACR accreditation (37%), ACR Appropriateness Criteria (85%), and peer review (95%), respectively. Imaging quality metrics are more likely to be structural (46%) than process (27%) or outcome (27%) based (P < .05).
As national value-based reimbursement programs increasingly emphasize outcome-based metrics, radiologists must keep pace by developing the data infrastructure required to collect outcome-based quality metrics. Copyright © 2015 American College of Radiology. Published by Elsevier Inc. All rights reserved.
Tolu, Sena; Yurdakul, Ozan Volkan; Basaran, Betul; Rezvani, Aylin
2018-05-14
The aim of this study was to evaluate the reliability, content, and quality of videos for patients available on YouTube for learning how to self-administer subcutaneous anti-tumour necrosis factor (TNF) injections. We searched for the terms Humira injection, Enbrel injection, Simponi injection, and Cimzia injection. Videos were categorised as useful information, misleading information, useful patient opinion, and misleading patient opinion by two physicians. Videos were rated for quality on a 5-point global quality scale (GQS; 1 = poor quality, 5 = excellent quality) and for reliability and content using the 5-point DISCERN scale (higher scores represent greater reliability and more comprehensive videos). Of the 142 English videos, 24 (16.9%) were classified as useful information, 6 (4.2%) as misleading information, 47 (33.1%) as useful patient opinion, and 65 (45.8%) as misleading patient opinion. Useful videos were the most comprehensive and had the highest reliability and quality scores. The useful information and useful patient opinion videos had the highest numbers of views per day (median 8.32, IQR: 3.40-14.28 and 5.46, IQR: 3.06-14.44), as compared with 2.32, IQR: 1.63-6.26 for misleading information videos and 2.15, IQR: 1.17-7.43 for misleading patient opinion videos (p = 0.001). Almost all (91.5%) misleading videos were uploaded by individual users. There is a substantial number of English-language YouTube videos with high quality, rich content, and reliability that can serve as sources of information on the proper technique of anti-TNF self-injection. Physicians should direct patients to reliable sources of information and educate them in online resource assessment, thereby improving treatment outcomes.
Evaluating which plan quality metrics are appropriate for use in lung SBRT.
Yaparpalvi, Ravindra; Garg, Madhur K; Shen, Jin; Bodner, William R; Mynampati, Dinesh K; Gafar, Aleiya; Kuo, Hsiang-Chi; Basavatia, Amar K; Ohri, Nitin; Hong, Linda X; Kalnicki, Shalom; Tome, Wolfgang A
2018-02-01
Several dose metrics in the categories of homogeneity, coverage, conformity, and gradient have been proposed in the literature for evaluating treatment plan quality. In this study, we applied these metrics to characterize and identify the plan quality metrics that merit plan quality assessment in lung stereotactic body radiation therapy (SBRT) dose distributions. Treatment plans of 90 lung SBRT patients, comprising 91 targets, treated in our institution were retrospectively reviewed. Dose calculations were performed using the anisotropic analytical algorithm (AAA) with heterogeneity correction. A literature review of published plan quality metrics in the categories of coverage, homogeneity, conformity, and gradient was performed. For each patient, plan quality metric values were quantified and analysed using dose-volume histogram data. The Radiation Therapy Oncology Group (RTOG)-defined plan quality metric values in this study were: coverage (0.90 ± 0.08), homogeneity (1.27 ± 0.07), conformity (1.03 ± 0.07), and gradient (4.40 ± 0.80). Geometric conformity strongly correlated with conformity index (p < 0.0001), and gradient measures strongly correlated with target volume (p < 0.0001). The conformity guidelines for prescribed dose advocated by the RTOG lung SBRT protocol were met in ≥94% of cases in all categories. The proportions of total lung volume receiving doses of 20 Gy and 5 Gy (V20 and V5) were a mean of 4.8% (±3.2) and 16.4% (±9.2), respectively. Based on our analyses, we recommend the following metrics as appropriate surrogates for establishing SBRT lung plan quality guidelines: coverage % (ICRU 62), conformity (CN or CI Paddick), and gradient (R50%). Furthermore, we strongly recommend that RTOG lung SBRT protocols adopt either CN or CI Paddick in place of the prescription isodose to target volume ratio for conformity index evaluation. Advances in knowledge: Our study metrics are valuable tools for establishing lung SBRT plan quality guidelines.
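Two of the recommended surrogates have standard closed-form definitions: the Paddick conformity index, CI = TV_PIV² / (TV × PIV), where TV is the target volume, PIV the prescription isodose volume, and TV_PIV their overlap (identical in form to van't Riet's conformation number CN), and the gradient measure R50%, the ratio of the 50% isodose volume to the target volume. A minimal sketch with hypothetical volumes (the numbers below are invented for illustration, not taken from the study):

```python
def conformity_paddick(tv, piv, tv_piv):
    """Paddick conformity index (same form as van't Riet's CN):
    CI = TV_PIV^2 / (TV * PIV). A value of 1.0 is perfect conformity."""
    return tv_piv ** 2 / (tv * piv)

def gradient_r50(v50, tv):
    """R50%: volume enclosed by the 50% isodose over the target volume;
    smaller values indicate a steeper dose fall-off."""
    return v50 / tv

# Hypothetical plan: target 30 cc, prescription isodose volume 33 cc,
# target/prescription overlap 29 cc, 50% isodose volume 130 cc.
print(round(conformity_paddick(30.0, 33.0, 29.0), 3))  # 0.849
print(round(gradient_r50(130.0, 30.0), 2))             # 4.33
```

Both quantities are read off a plan's dose-volume data, which is consistent with the study's use of dose-volume histograms to quantify all metric values.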
Recognizing human activities using appearance metric feature and kinematics feature
NASA Astrophysics Data System (ADS)
Qian, Huimin; Zhou, Jun; Lu, Xinbiao; Wu, Xinye
2017-05-01
The problem of automatically recognizing human activities from videos through the fusion of the two most important cues, an appearance metric feature and a kinematics feature, is considered, and a system of two-dimensional (2-D) Poisson equations is introduced to extract a more discriminative appearance metric feature. Specifically, moving human blobs are first detected in the video by a background subtraction technique to form a binary image sequence, from which the appearance feature, designated the motion accumulation image, and the kinematics feature, termed the centroid instantaneous velocity, are extracted. Second, 2-D discrete Poisson equations are employed to reinterpret the motion accumulation image and produce a more differentiated Poisson silhouette image, from which the appearance feature vector is created through a dimension reduction technique called bidirectional 2-D principal component analysis, balancing classification accuracy against time consumption. Finally, a cascaded classifier based on a nearest neighbor classifier and two directed acyclic graph support vector machine classifiers, integrating the fusion of the appearance feature vector and the centroid instantaneous velocity vector, is applied to recognize the human activities. Experimental results on open databases and a homemade one confirm the recognition performance of the proposed algorithm.
Robustness of remote stress detection from visible spectrum recordings
NASA Astrophysics Data System (ADS)
Kaur, Balvinder; Moses, Sophia; Luthra, Megha; Ikonomidou, Vasiliki N.
2016-05-01
In our recent work, we have shown that it is possible to extract high-fidelity timing information of the cardiac pulse wave from visible spectrum videos, which can then be used as a basis for stress detection. In that approach, we used both heart rate variability (HRV) metrics and the differential pulse transit time (dPTT) as indicators of the presence of stress. One of the main concerns in this analysis is its robustness in the presence of noise, as the remotely acquired signal, which we call the blood wave (BW) signal, is degraded with respect to the signal acquired using contact sensors. In this work, we discuss the robustness of our metrics in the presence of multiplicative noise. Specifically, we study the effects of subtle motion due to respiration and of changes in illumination levels due to light flickering on the BW signal, the HRV-driven features, and the dPTT. Our sensitivity study involved both Monte Carlo simulations and experimental data from human facial videos, and indicates that our metrics are robust even under moderate amounts of noise. The generated results will help the remote stress detection community develop requirements for visible-spectrum-based stress detection systems.
Ho, Matthew; Stothers, Lynn; Lazare, Darren; Tsang, Brian; Macnab, Andrew
2015-01-01
Many patients conduct internet searches to manage their own health problems, to decide if they need professional help, and to corroborate information given in a clinical encounter. Good information can improve patients' understanding of their condition and their self-efficacy. Patients with spinal cord injury (SCI) featuring neurogenic bladder (NB) require knowledge and skills related to their condition and the need for intermittent catheterization (IC). Information quality was evaluated in videos accessed via YouTube relating to NB and IC using the search terms "neurogenic bladder intermittent catheter" and "spinal cord injury intermittent catheter." Video content was independently rated by 3 investigators using criteria based on European Urological Association (EAU) guidelines and established clinical practice. In total, 71 videos met the inclusion criteria. Of these, 12 (17%) addressed IC and 50 (70%) contained information on NB. The remaining videos met inclusion criteria but did not contain information relevant to either IC or NB. Analysis indicated poor overall quality of information, with some videos containing information contradictory to EAU guidelines for IC. High-quality videos were randomly distributed by YouTube. IC videos featuring a healthcare narrator scored significantly higher than patient-narrated videos, but not higher than videos with a merchant narrator. About half of the videos contained commercial content. Some good-quality educational videos about NB and IC are available on YouTube, but most are poor. The videos deemed good quality were not prominently ranked by the YouTube search algorithm; consequently, user access is less likely. Study limitations include the limit of 50 videos per category and the use of a de novo rating tool. Information quality in videos with healthcare narrators was not higher than in those featuring merchant narrators. Better material is required to improve patients' understanding of their condition.
No-reference video quality measurement: added value of machine learning
NASA Astrophysics Data System (ADS)
Mocanu, Decebal Constantin; Pokhrel, Jeevan; Garella, Juan Pablo; Seppänen, Janne; Liotou, Eirini; Narwaria, Manish
2015-11-01
Video quality measurement is an important component in the end-to-end video delivery chain. Video quality is, however, subjective, and thus there will always be interobserver differences in the subjective opinion about the visual quality of the same video. Despite this, most existing works on objective quality measurement typically focus only on predicting a single score and evaluate their prediction accuracy based on how close it is to the mean opinion score (or similar average-based ratings). Clearly, such an approach ignores the underlying diversity in the subjective scoring process and, as a result, does not allow further analysis of how reliable the objective prediction is in terms of subjective variability. Consequently, the aim of this paper is to analyze this issue and present a machine-learning-based solution to address it. We demonstrate the utility of our ideas by considering the practical scenario of video broadcast transmissions, with a focus on digital terrestrial television (DTT), and by proposing a no-reference objective video quality estimator for this application. We conducted meaningful verification studies on different video content (including video clips recorded from real DTT broadcast transmissions) in order to verify the performance of the proposed solution.
'How to stop a nosebleed': an assessment of the quality of epistaxis treatment advice on YouTube.
Haymes, A T; Harries, V
2016-08-01
Video hosting websites are increasingly being used to disseminate health education messages. This study aimed to assess the quality of advice contained within YouTube videos on the conservative management of epistaxis. YouTube.com was searched using the phrase 'how to stop a nosebleed'. The first 50 videos were screened. Objective advice scores and subjective production quality scores were attributed by independent raters. Forty-five videos were analysed. The mean advice score was 2.0 out of 8 and the mean production quality score was 1.6 out of 3. There were no correlations between a video's advice score and its search results rank (ρ = -0.28, p = 0.068), its view count (ρ = 0.20, p = 0.19) or its number of 'likes' (ρ = 0.21, p = 0.18). The quality of information on conservative epistaxis management within YouTube videos is extremely variable. A high search rank is no indication of video quality. Many videos proffer inappropriate and dangerous 'alternative' advice. We do not recommend YouTube as a source for patient information.
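The rank correlations (ρ) reported above can be reproduced with Spearman's formula. A minimal sketch assuming no tied ranks; real data such as view counts would need tie handling (average ranks), which is omitted here:

```python
def spearman_rho(x, y):
    """Spearman rank correlation for equal-length sequences with no ties,
    via the closed form 1 - 6*sum(d^2) / (n*(n^2 - 1)),
    where d is the per-item difference in ranks."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))
```

A ρ near zero, as between advice score and search rank above, means the two orderings are essentially unrelated.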
Colonoscopy Quality: Metrics and Implementation
Calderwood, Audrey H.; Jacobson, Brian C.
2013-01-01
Synopsis: Colonoscopy is an excellent area for quality improvement because it is high volume, has significant associated risk and expense, and there is evidence that variability in its performance affects outcomes. The best endpoint for validation of quality metrics in colonoscopy is colorectal cancer incidence and mortality, but because of feasibility issues, a more readily accessible metric is the adenoma detection rate (ADR). Fourteen quality metrics were proposed by the joint American Society of Gastrointestinal Endoscopy/American College of Gastroenterology Task Force on “Quality Indicators for Colonoscopy” in 2006, which are described in further detail below. Use of electronic health records and quality-oriented registries will facilitate quality measurement and reporting. Unlike traditional clinical research, implementation of quality improvement initiatives involves rapid assessments and changes on an iterative basis, and can be done at the individual, group, or facility level. PMID:23931862
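The adenoma detection rate mentioned above is a simple proportion: the fraction of screening colonoscopies in which at least one adenoma is found. A sketch, with the function name and data layout illustrative:

```python
def adenoma_detection_rate(adenomas_per_exam):
    """ADR: fraction of screening colonoscopies with >= 1 adenoma detected.

    `adenomas_per_exam` is one integer count per colonoscopy performed."""
    exams_with_adenoma = sum(1 for n in adenomas_per_exam if n >= 1)
    return exams_with_adenoma / len(adenomas_per_exam)
```

Note that the ADR counts exams, not adenomas: an exam finding three adenomas contributes the same as one finding a single adenoma.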
Jarc, Anthony M; Curet, Myriam
2015-08-01
Validated training exercises are essential tools for surgeons as they develop the technical skills needed to use robot-assisted minimally invasive surgical systems. The purpose of this study was to show face, content, and construct validity of four inanimate training exercises using the da Vinci(®) Si surgical system configured with Single-Site(™) instrumentation. New (N = 21) and experienced (N = 6) surgeons participated in the study. New surgeons (11 Gynecology [GYN] and 10 General Surgery [GEN]) had not completed any da Vinci Single-Site cases but may have completed multiport cases using the da Vinci system. They participated in this study prior to attending a certification course focused on da Vinci Single-Site instrumentation. Experienced surgeons (5 GYN and 1 GEN) had completed at least 25 da Vinci Single-Site cases. The surgeons completed four inanimate training exercises and then rated them with a questionnaire. Raw metrics and overall normalized scores were computed using both video recordings and kinematic data collected from the surgical system. The experienced surgeons significantly outperformed new surgeons on many raw metrics and on the overall normalized scores derived from video review (p < 0.05). Only one exercise did not achieve a significant difference between new and experienced surgeons (p = 0.08) when calculating an overall normalized score using both video and advanced metrics derived from kinematic data. Both new and experienced surgeons rated the training exercises as appearing to train and measure the technical skills used during da Vinci Single-Site surgery and as actually testing those skills. In summary, the four training exercises showed face, content, and construct validity. Improved overall scores could be developed using additional metrics not included in this study.
The results suggest that the training exercises could be used in an overall training curriculum aimed at developing proficiency in technical skills for surgeons new to da Vinci Single-Site instrumentation.
NASA Astrophysics Data System (ADS)
Sevcik, L.; Uhrin, D.; Frnda, J.; Voznak, M.; Toral-Cruz, Homer; Mikulec, M.; Jakovlev, Sergej
2015-05-01
Nowadays, the interest in real-time services, like audio and video, is growing. These services are mostly transmitted over packet networks, which are based on the IP protocol. As a result, analyses of these services and their behavior in such networks are becoming more frequent. Video has become a significant part of all data traffic sent via IP networks. In general, a video service is a one-way service (except, e.g., video calls), and network delay is not as important a factor as in a voice service. The dominant network factors influencing final video quality are packet loss, delay variation, and the capacity of the transmission links. Analysis of video quality concentrates on the resistance of video codecs to packet loss in the network, which causes artefacts in the video. IPsec provides confidentiality, integrity, and non-repudiation (using HMAC-SHA1 for authentication and 3DES or AES in CBC mode for encryption) with an authentication header and ESP (Encapsulating Security Payload). The paper brings a detailed view of the performance of video streaming over an IP-based network. We compared the quality of video under packet loss, with and without encryption. The measured results demonstrate the relation of the video codec type and bitrate to the final video quality.
A software quality model and metrics for risk assessment
NASA Technical Reports Server (NTRS)
Hyatt, L.; Rosenberg, L.
1996-01-01
A software quality model and its associated attributes are defined and used as the basis for a discussion of risk. Specific quality goals and attributes are selected based on their importance to a software development project and their ability to be quantified. Risks that can be determined from the model's metrics are identified. A core set of metrics relating to the software development process and its products is defined. Measurements for each metric and their usability and applicability are discussed.
MO-A-16A-01: QA Procedures and Metrics: In Search of QA Usability
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sathiaseelan, V; Thomadsen, B
Radiation therapy has undergone considerable changes in the past two decades, with a surge of new technology and treatment delivery methods. The complexity of radiation therapy treatments has increased, and there has been increased awareness and publicity about the associated risks. In response, there has been a proliferation of guidelines for medical physicists to adopt to ensure that treatments are delivered safely. Task Group recommendations are copious, and clinical physicists' hours are longer, stretched to various degrees between site planning and management, IT support, physics QA, and treatment planning responsibilities. Radiation oncology has many quality control practices in place to ensure the delivery of high-quality, safe treatments. Incident reporting systems have been developed to collect statistics about near-miss events at many radiation oncology centers. However, tools are lacking to assess the impact of these various control measures. A recent effort to address this shortcoming is the work of Ford et al (2012), who published a methodology enumerating quality control quantification for measuring the effectiveness of safety barriers. Over 4000 near-miss incidents reported from two academic radiation oncology clinics were analyzed using quality control quantification, and a profile of the most effective quality control measures (metrics) was identified. There is a critical need to identify QA metrics that help busy clinical physicists focus their limited time and resources most effectively in order to minimize or eliminate errors in the radiation treatment delivery process.
In this symposium, the usefulness of workflows and QA metrics to assure safe and high-quality patient care will be explored. Two presentations will be given: "Quality Metrics and Risk Management with High Risk Radiation Oncology Procedures" and "Strategies and Metrics for Quality Management in the TG-100 Era". Learning Objectives: Provide an overview of, and the need for, QA usability metrics, including different cultures/practices affecting the effectiveness of methods and metrics. Show examples of quality assurance workflows, such as statistical process control, that monitor the treatment planning and delivery process to identify errors. Learn to identify and prioritize risks and QA procedures in radiation oncology. Try to answer the questions: Can a quality assurance program aided by quality assurance metrics help minimize errors and ensure safe treatment delivery? Should such metrics be institution-specific?
DCT-based cyber defense techniques
NASA Astrophysics Data System (ADS)
Amsalem, Yaron; Puzanov, Anton; Bedinerman, Anton; Kutcher, Maxim; Hadar, Ofer
2015-09-01
With the increasing popularity of video streaming services and multimedia sharing via social networks, there is a need to protect multimedia content from malicious use. An attacker may use steganography and watermarking techniques to embed malicious content in order to attack the end user. Most attack algorithms are robust to basic image processing techniques such as filtering, compression, and noise addition. Hence, in this article two novel real-time defense techniques are proposed: smart threshold and anomaly correction. Both techniques operate in the DCT domain and are applicable to JPEG images and H.264 I-frames. The defense performance was evaluated against a highly robust attack, and the perceptual quality degradation was measured by the well-known PSNR and SSIM quality assessment metrics. A set of defense techniques is suggested for improving the defense efficiency. For the most aggressive attack configuration, the combination of all the defense techniques results in 80% protection against cyber-attacks with a PSNR of 25.74 dB.
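The PSNR figures quoted above follow the standard definition: 10·log10(MAX² / MSE), with MAX = 255 for 8-bit images. A minimal sketch over flattened pixel lists (the data layout is illustrative):

```python
import math

def psnr(original, degraded, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two equal-length
    sequences of pixel values (e.g., flattened 8-bit images)."""
    mse = sum((a - b) ** 2 for a, b in zip(original, degraded)) / len(original)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)
```

As a rough calibration, a PSNR in the mid-20s dB, like the 25.74 dB reported above, corresponds to clearly visible but usually tolerable degradation, while values above roughly 40 dB are typically near-transparent.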
Adaptive sigmoid function bihistogram equalization for image contrast enhancement
NASA Astrophysics Data System (ADS)
Arriaga-Garcia, Edgar F.; Sanchez-Yanez, Raul E.; Ruiz-Pinales, Jose; Garcia-Hernandez, Ma. de Guadalupe
2015-09-01
Contrast enhancement plays a key role in a wide range of applications, including consumer electronics such as video surveillance, digital cameras, and televisions. The main goal of contrast enhancement is to increase the quality of images. However, most state-of-the-art methods induce distortions such as intensity shift, wash-out, noise, intensity burn-out, and intensity saturation. In addition, consumer electronics require simple and fast methods that can run in real time. A bihistogram equalization method based on adaptive sigmoid functions is proposed. It consists of splitting the image histogram into two parts that are equalized independently using adaptive sigmoid functions. In order to preserve the mean brightness of the input image, the parameter of the sigmoid functions is chosen to minimize the absolute mean brightness metric. Experiments on the Berkeley database have shown that the proposed method improves the quality of images and preserves their mean brightness. An application to improving the colorfulness of images is also presented.
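The histogram-splitting step described above can be sketched as classical bi-histogram equalization: split at the mean intensity and equalize each sub-histogram into its own output range. The paper's adaptive sigmoid mapping and brightness-minimizing parameter search are not reproduced here; this shows only the split-and-equalize skeleton, with all names illustrative:

```python
def equalize_range(vals, lo, hi):
    # Histogram-equalize `vals` into the output range [lo, hi] via the CDF.
    n = len(vals)
    cdf = {v: sum(1 for x in vals if x <= v) / n for v in set(vals)}
    return {v: lo + (hi - lo) * c for v, c in cdf.items()}

def bi_histogram_equalize(pixels, levels=256):
    """Split the histogram at the mean, then equalize the lower half
    into [0, mean] and the upper half into (mean, levels-1]."""
    m = sum(pixels) / len(pixels)
    low = [p for p in pixels if p <= m]
    high = [p for p in pixels if p > m]
    lut = {}
    if low:
        lut.update(equalize_range(low, 0, int(m)))
    if high:
        lut.update(equalize_range(high, int(m) + 1, levels - 1))
    return [round(lut[p]) for p in pixels]
```

Keeping the two halves in separate output ranges is what limits the mean-brightness shift that plain histogram equalization causes; the sigmoid variant in the paper refines the within-range mapping further.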
The data quality analyzer: a quality control program for seismic data
Ringler, Adam; Hagerty, M.T.; Holland, James F.; Gonzales, A.; Gee, Lind S.; Edwards, J.D.; Wilson, David; Baker, Adam
2015-01-01
The quantification of data quality is based on the evaluation of various metrics (e.g., timing quality, daily noise levels relative to long-term noise models, and comparisons between broadband data and event synthetics). Users may select which metrics contribute to the assessment and those metrics are aggregated into a “grade” for each station. The DQA is being actively used for station diagnostics and evaluation based on the completed metrics (availability, gap count, timing quality, deviation from a global noise model, deviation from a station noise model, coherence between co-located sensors, and comparison between broadband data and synthetics for earthquakes) on stations in the Global Seismographic Network and Advanced National Seismic System.
NASA Astrophysics Data System (ADS)
Scopatz, Stephen D.; Mendez, Michael; Trent, Randall
2015-05-01
The projection of controlled moving targets is key to the quantitative testing of video capture and post processing for Motion Imagery. This presentation will discuss several implementations of target projectors with moving targets or apparent moving targets creating motion to be captured by the camera under test. The targets presented are broadband (UV-VIS-IR) and move in a predictable, repeatable and programmable way; several short videos will be included in the presentation. Among the technical approaches will be targets that move independently in the camera's field of view, as well targets that change size and shape. The development of a rotating IR and VIS 4 bar target projector with programmable rotational velocity and acceleration control for testing hyperspectral cameras is discussed. A related issue for motion imagery is evaluated by simulating a blinding flash which is an impulse of broadband photons in fewer than 2 milliseconds to assess the camera's reaction to a large, fast change in signal. A traditional approach of gimbal mounting the camera in combination with the moving target projector is discussed as an alternative to high priced flight simulators. Based on the use of the moving target projector several standard tests are proposed to provide a corresponding test to MTF (resolution), SNR and minimum detectable signal at velocity. Several unique metrics are suggested for Motion Imagery including Maximum Velocity Resolved (the measure of the greatest velocity that is accurately tracked by the camera system) and Missing Object Tolerance (measurement of tracking ability when target is obscured in the images). These metrics are applicable to UV-VIS-IR wavelengths and can be used to assist in camera and algorithm development as well as comparing various systems by presenting the exact scenes to the cameras in a repeatable way.
A condition metric for Eucalyptus woodland derived from expert evaluations.
Sinclair, Steve J; Bruce, Matthew J; Griffioen, Peter; Dodd, Amanda; White, Matthew D
2018-02-01
The evaluation of ecosystem quality is important for land-management and land-use planning. Evaluation is unavoidably subjective, and robust metrics must be based on consensus and the structured use of observations. We devised a transparent and repeatable process for building and testing ecosystem metrics based on expert data. We gathered quantitative evaluation data on the quality of hypothetical grassy woodland sites from experts. We used these data to train a model (an ensemble of 30 bagged regression trees) capable of predicting the perceived quality of similar hypothetical woodlands based on a set of 13 site variables as inputs (e.g., cover of shrubs, richness of native forbs). These variables can be measured at any site and the model implemented in a spreadsheet as a metric of woodland quality. We also investigated the number of experts required to produce an opinion data set sufficient for the construction of a metric. The model produced evaluations similar to those provided by experts, as shown by assessing the model's quality scores of expert-evaluated test sites not used to train the model. We applied the metric to 13 woodland conservation reserves and asked managers of these sites to independently evaluate their quality. To assess metric performance, we compared the model's evaluation of site quality with the managers' evaluations through multidimensional scaling. The metric performed relatively well, plotting close to the center of the space defined by the evaluators. Given the method provides data-driven consensus and repeatability, which no single human evaluator can provide, we suggest it is a valuable tool for evaluating ecosystem quality in real-world contexts. We believe our approach is applicable to any ecosystem. © 2017 State of Victoria.
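The ensemble described above (30 bagged regression trees) follows the standard bagging recipe: train each learner on a bootstrap resample and average the predictions. A sketch with a pluggable learner; `fit` stands in for a tree learner, and all names are illustrative:

```python
import random

def bagged_predict(xs, ys, fit, query, n_models=30, seed=0):
    """Bagging: train `n_models` learners on bootstrap resamples of
    (xs, ys) and average their predictions at `query`.

    `fit(xs, ys)` must return a callable model; a regression-tree
    learner would be used in practice."""
    rng = random.Random(seed)
    n = len(xs)
    preds = []
    for _ in range(n_models):
        idx = [rng.randrange(n) for _ in range(n)]  # sample with replacement
        model = fit([xs[i] for i in idx], [ys[i] for i in idx])
        preds.append(model(query))
    return sum(preds) / n_models

# A trivial stand-in learner: predict the training-set mean everywhere.
def mean_learner(xs, ys):
    mu = sum(ys) / len(ys)
    return lambda q: mu
```

Averaging over resamples is what gives the metric the "data-driven consensus" property noted above: no single expert's idiosyncratic scores dominate the fitted model.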
YouTube provides irrelevant information for the diagnosis and treatment of hip arthritis.
Koller, Ulrich; Waldstein, Wenzel; Schatz, Klaus-Dieter; Windhager, Reinhard
2016-10-01
YouTube is increasingly becoming a key source for people seeking to satisfy the need for additional information concerning their medical condition. This study analyzes the completeness of accurate information found on YouTube pertaining to hip arthritis. The present study analyzed 133 YouTube videos using the search terms: hip arthritis, hip arthritis symptoms, hip arthritis diagnosis, hip arthritis treatment and hip replacement. Two quality assessment checklists with a scale of 0 to 12 points were developed to evaluate available video content for the diagnosis and the treatment of hip arthritis. Videos were grouped into poor quality (grade 0-3), moderate quality (grade 4-7) and excellent quality (grade 8-12), respectively. Three independent observers assessed all videos using the new grading system and independently scored all videos. Discrepancies regarding the categories were clarified by consensus discussion. For intra-observer reliability, grading was performed on two occasions separated by four weeks. Eighty-four percent (n = 112) had poor diagnostic information quality, 14% (n = 19) moderate quality and only 2% (n = 2) excellent quality, respectively. In 86% (n = 114), videos provided poor treatment information quality. Eleven percent (n = 15) of videos had moderate quality and only 3% (n = 4) excellent quality, respectively. The present study demonstrates that YouTube is a poor source of accurate information pertaining to the diagnosis and treatment of hip arthritis. These findings are of high relevance for clinicians, as videos are becoming a primary source of information for patients. Therefore, high-quality educational videos are needed to further guide patients on the way from the diagnosis of hip arthritis to its proper treatment.
Performance evaluation of the intra compression in the video coding standards
NASA Astrophysics Data System (ADS)
Abramowski, Andrzej
2015-09-01
The article presents a comparison of the Intra prediction algorithms in the current state-of-the-art video coding standards, including MJPEG 2000, VP8, VP9, H.264/AVC and H.265/HEVC. The effectiveness of techniques employed by each standard is evaluated in terms of compression efficiency and average encoding time. The compression efficiency is measured using BD-PSNR and BD-RATE metrics with H.265/HEVC results as an anchor. Tests are performed on a set of video sequences, composed of sequences gathered by Joint Collaborative Team on Video Coding during the development of the H.265/HEVC standard and 4K sequences provided by Ultra Video Group. According to results, H.265/HEVC provides significant bit-rate savings at the expense of computational complexity, while VP9 may be regarded as a compromise between the efficiency and required encoding time.
Geographic techniques and recent applications of remote sensing to landscape-water quality studies
Griffith, J.A.
2002-01-01
This article overviews recent advances in studies of landscape-water quality relationships using remote sensing techniques. With the increasing feasibility of using remotely-sensed data, landscape-water quality studies can now be more easily performed on regional, multi-state scales. The traditional method of relating land use and land cover to water quality has been extended to include landscape pattern and other landscape information derived from satellite data. Three items are focused on in this article: 1) the increasing recognition of the importance of larger-scale studies of regional water quality that require a landscape perspective; 2) the increasing importance of remotely sensed data, such as the imagery-derived normalized difference vegetation index (NDVI) and vegetation phenological metrics derived from time-series NDVI data; and 3) landscape pattern. In some studies, using landscape pattern metrics explained some of the variation in water quality not explained by land use/cover. However, in some other studies, the NDVI metrics were even more highly correlated to certain water quality parameters than either landscape pattern metrics or land use/cover proportions. Although studies relating landscape pattern metrics to water quality have had mixed results, this recent body of work applying these landscape measures and satellite-derived metrics to water quality analysis has demonstrated their potential usefulness in monitoring watershed conditions across large regions.
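The NDVI referred to above is the standard normalized band ratio of near-infrared and red reflectance, bounded in [-1, 1], with higher values indicating denser green vegetation. A one-function sketch:

```python
def ndvi(nir, red):
    """Normalized difference vegetation index:
    (NIR - Red) / (NIR + Red), from per-pixel reflectance values."""
    return (nir - red) / (nir + red)
```

Time series of this quantity yield the phenological metrics mentioned above (e.g., timing of green-up and senescence), which is why they can carry watershed information beyond static land-cover proportions.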
ERIC Educational Resources Information Center
NatureScope, 1988
1988-01-01
Provides a glossary and bibliography which includes a listing of the following: general reference books, field guides, children's books, films, filmstrips, slides, videos, coloring books, games, posters, software, activity sources, where to get more information, Ranger Rick Ocean Index, and a metric conversion chart. (RT)
The Albuquerque Seismological Laboratory Data Quality Analyzer
NASA Astrophysics Data System (ADS)
Ringler, A. T.; Hagerty, M.; Holland, J.; Gee, L. S.; Wilson, D.
2013-12-01
The U.S. Geological Survey's Albuquerque Seismological Laboratory (ASL) has several efforts underway to improve data quality at its stations. The Data Quality Analyzer (DQA) is one such development. The DQA is designed to characterize station data quality in a quantitative and automated manner. Station quality is based on the evaluation of various metrics, such as timing quality, noise levels, and sensor coherence. These metrics are aggregated into a measurable grade for each station. The DQA consists of a website, a metric calculator (Seedscan), and a PostgreSQL database. The website allows the user to make requests for various time periods, review specific networks and stations, adjust the weighting of the station's grade, and plot metrics as a function of time. The website dynamically loads all station data from a PostgreSQL database. The database is central to the application; it acts as a hub where metric values and limited station descriptions are stored. Data is stored at the level of one sensor's channel per day. The database is populated by Seedscan, which reads and processes miniSEED data to generate metric values. Seedscan, written in Java, compares hashes of metadata and data to detect changes and perform subsequent recalculations. This ensures that the metric values are up to date and accurate. Seedscan can be run as a scheduled task or on demand, computing the metrics specified in its configuration file. While many metrics are currently in development, some are completed and actively used. These include: availability, timing quality, gap count, deviation from the New Low Noise Model, deviation from a station's noise baseline, inter-sensor coherence, and data-synthetic fits. In all, 20 metrics are planned, but any number could be added. ASL is actively using the DQA on a daily basis for station diagnostics and evaluation.
As Seedscan is scheduled to run every night, data quality analysts are then able to use the website to diagnose changes in noise levels or other anomalous data. This allows errors to be corrected quickly and efficiently. The code is designed to be flexible for adding metrics and portable for use in other networks. We anticipate further development of the DQA by improving the existing web interface, adding more metrics, adding an interface to facilitate the verification of historic station metadata and performance, and adding an interface to allow better monitoring of data quality goals.
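The aggregation of per-metric values into a single station "grade" with user-adjustable weighting can be sketched as a weighted average. The 0-100 scoring scale and equal-weight default here are assumptions for illustration, not the DQA's actual formula:

```python
def station_grade(metric_scores, weights=None):
    """Aggregate per-metric scores (assumed 0-100) into one station grade.

    `metric_scores` maps metric name -> score; `weights` maps metric
    name -> relative weight (defaults to equal weighting, mirroring the
    user-adjustable weighting the DQA website exposes)."""
    if weights is None:
        weights = {name: 1.0 for name in metric_scores}
    total = sum(weights[name] for name in metric_scores)
    return sum(metric_scores[name] * weights[name]
               for name in metric_scores) / total
```

Usage: `station_grade({"availability": 100, "timing_quality": 80}, {"availability": 3, "timing_quality": 1})` weights availability three times as heavily as timing quality.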
NASA Astrophysics Data System (ADS)
Yu, Xuelian; Chen, Qian; Gu, Guohua; Ren, Jianle; Sui, Xiubao
2015-02-01
Designing an objective quality assessment for color-fused images is a very demanding and challenging task. We propose four no-reference metrics based on human visual system characteristics for objectively evaluating the quality of false color fusion images. The perceived edge metric (PEM) is defined based on a visual perception model and the color image gradient similarity between the fused image and the source images. The perceptual contrast metric (PCM) is established by associating multi-scale contrast and a varying contrast sensitivity filter (CSF) with color components. A linear combination of the standard deviation and mean value over the fused image constructs the image colorfulness metric (ICM). The color comfort metric (CCM) is designed from the average saturation and the ratio of pixels with high and low saturation. The qualitative and quantitative experimental results demonstrate that the proposed metrics agree well with subjective perception.
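The image colorfulness metric (ICM) above linearly combines a standard deviation and a mean. A sketch using the σ + 0.3·μ weighting common in colorfulness measures (the abstract does not give the paper's actual coefficients, so the 0.3 and the single-channel input are assumptions):

```python
def colorfulness(chroma_values, alpha=0.3):
    """Colorfulness sketch: std + alpha * mean of a chroma channel.

    `chroma_values` is a flat sequence of per-pixel chroma magnitudes;
    alpha=0.3 follows the common Hasler-Süsstrunk-style weighting."""
    n = len(chroma_values)
    mean = sum(chroma_values) / n
    var = sum((v - mean) ** 2 for v in chroma_values) / n
    return var ** 0.5 + alpha * mean
```

The intuition is that both spread (σ, how varied the colors are) and magnitude (μ, how saturated they are on average) contribute to the impression of a colorful image.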
Energy Efficient Image/Video Data Transmission on Commercial Multi-Core Processors
Lee, Sungju; Kim, Heegon; Chung, Yongwha; Park, Daihee
2012-01-01
In transmitting image/video data over Video Sensor Networks (VSNs), energy consumption must be minimized while maintaining high image/video quality. Although image/video compression is well known for its efficiency and usefulness in VSNs, the excessive costs associated with encoding computation and complexity still hinder its adoption for practical use. However, it is anticipated that high-performance handheld multi-core devices will be used as VSN processing nodes in the near future. In this paper, we propose a way to improve the energy efficiency of image and video compression with multi-core processors while maintaining the image/video quality. We improve the compression efficiency at the algorithmic level or derive the optimal parameters for the combination of a machine and compression based on the tradeoff between the energy consumption and the image/video quality. Based on experimental results, we confirm that the proposed approach can improve the energy efficiency of the straightforward approach by a factor of 2∼5 without compromising image/video quality. PMID:23202181
Exploring the explaining quality of physics online explanatory videos
NASA Astrophysics Data System (ADS)
Kulgemeyer, Christoph; Peters, Cord H.
2016-11-01
Explaining skills are among the most important skills educators possess, and they have been researched extensively in recent years. During the same period, another medium has emerged and become a popular source of information for learners: online explanatory videos, chiefly from the video sharing website YouTube. Their content and explaining quality remain to this day mostly unmonitored, as does their educational impact in formal contexts such as schools or universities. In this study, a framework for explaining quality, which emerged from surveying explaining skills in expert-novice face-to-face dialogues, was used to explore the explaining quality of such videos (36 YouTube explanatory videos on Kepler's laws and 15 videos on Newton's third law). The framework consists of 45 categories derived from physics education research that deal with explanation techniques. YouTube provides its own 'quality measures' based on surface features, including 'likes', views, and comments for each video. The question is whether these measures provide valid information for educators and students who have to decide which video to use. We compared the explaining quality with those measures. Our results suggest that there is a correlation between explaining quality and only one of these measures: the number of content-related comments.
Software metrics: The key to quality software on the NCC project
NASA Technical Reports Server (NTRS)
Burns, Patricia J.
1993-01-01
Network Control Center (NCC) Project metrics are captured during the implementation and testing phases of the NCCDS software development lifecycle. The metrics data collection and reporting function has interfaces with all elements of the NCC project. Close collaboration with all project elements has resulted in the development of a defined and repeatable set of metrics processes. The resulting data are used to plan and monitor release activities on a weekly basis. The use of graphical outputs facilitates the interpretation of progress and status. The successful application of metrics throughout the NCC project has been instrumental in the delivery of quality software. The use of metrics on the NCC Project supports the needs of the technical and managerial staff. This paper describes the project, the functions supported by metrics, the data that are collected and reported, how the data are used, and the improvements in the quality of deliverable software since the metrics processes and products have been in use.
Gamble, J M; Traynor, Robyn L; Gruzd, Anatoliy; Mai, Philip; Dormuth, Colin R; Sketris, Ingrid S
2018-03-24
To provide an overview of altmetrics, including their potential benefits and limitations, how they may be obtained, and their role in assessing pharmacoepidemiologic research impact. Our review was informed by compiling relevant literature identified through searching multiple health research databases (PubMed, Embase, and CINAHL) and grey literature sources (websites, blogs, and reports). We demonstrate how pharmacoepidemiologists, in particular, may use altmetrics to understand scholarly impact and knowledge translation by providing a case study of a drug-safety study conducted by the Canadian Network of Observational Drug Effect Studies. A common approach to measuring research impact is the use of citation-based metrics, such as an article's citation count or a journal's impact factor. "Alternative" metrics, or altmetrics, are increasingly supported as a complementary measure of research uptake in the age of social media. Altmetrics are nontraditional indicators that capture a diverse set of traceable, online research-related artifacts, including peer-reviewed publications and other research outputs (software, datasets, blogs, videos, posters, policy documents, presentations, social media posts, wiki entries, etc.). Compared with traditional citation-based metrics, altmetrics take a more holistic view of research impact, attempting to capture the activity and engagement of both scholarly and nonscholarly communities. Despite their limited theoretical underpinnings, possible commercial influence, potential for gaming and manipulation, and numerous data quality-related issues, altmetrics are promising as a supplement to more traditional citation-based metrics because they can ingest and process a larger set of data points related to the flow and reach of scholarly communication from an expanded pool of stakeholders.
Unlike citation-based metrics, altmetrics are not inherently rooted in the research publication process, which includes peer review; it is unclear to what extent they should be used for research evaluation. © 2018 The Authors. Pharmacoepidemiology and Drug Safety. Published by John Wiley & Sons, Ltd.
Applying Sigma Metrics to Reduce Outliers.
Litten, Joseph
2017-03-01
Sigma metrics can be used to predict assay quality, allowing easy comparison of instrument quality and predicting which tests will require minimal quality control (QC) rules to monitor the performance of the method. A Six Sigma QC program can result in fewer controls and fewer QC failures for methods with a sigma metric of 5 or better. The higher the number of methods with a sigma metric of 5 or better, the lower the costs for reagents, supplies, and control material required to monitor the performance of the methods. Copyright © 2016 Elsevier Inc. All rights reserved.
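The sigma metric referred to above is conventionally computed from the allowable total error (TEa), the observed bias, and the imprecision (CV), all expressed in percent. A minimal sketch assuming the standard formula sigma = (TEa − |bias|) / CV; the tier labels are illustrative shorthand, not quoted from the article:

```python
def sigma_metric(tea_pct: float, bias_pct: float, cv_pct: float) -> float:
    """Sigma metric for an assay: (TEa - |bias|) / CV, all in percent."""
    if cv_pct <= 0:
        raise ValueError("CV must be positive")
    return (tea_pct - abs(bias_pct)) / cv_pct

def qc_tier(sigma: float) -> str:
    """Rough QC-design tier: methods at 5-sigma or better need minimal QC rules."""
    if sigma >= 6:
        return "world class: single rule, minimal controls"
    if sigma >= 5:
        return "excellent: minimal QC rules"
    if sigma >= 4:
        return "good: multi-rule QC recommended"
    return "marginal/poor: maximum QC, consider method improvement"

# Illustrative assay: TEa 10%, bias 1.5%, CV 1.4% -> sigma just above 6
s = sigma_metric(10.0, 1.5, 1.4)
```

The thresholds mirror the article's point: the more methods sitting at 5-sigma or better, the fewer controls and QC failures, and the lower the monitoring cost.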
Poisson, Sharon N.; Josephson, S. Andrew
2011-01-01
Stroke is a major public health burden, and accounts for many hospitalizations each year. Due to gaps in practice and recommended guidelines, there has been a recent push toward implementing quality measures to be used for improving patient care, comparing institutions, as well as for rewarding or penalizing physicians through pay-for-performance. This article reviews the major organizations involved in implementing quality metrics for stroke, and the 10 major metrics currently being tracked. We also discuss possible future metrics and the implications of public reporting and using metrics for pay-for-performance. PMID:23983840
Layer-based buffer aware rate adaptation design for SHVC video streaming
NASA Astrophysics Data System (ADS)
Gudumasu, Srinivas; Hamza, Ahmed; Asbun, Eduardo; He, Yong; Ye, Yan
2016-09-01
This paper proposes a layer-based buffer-aware rate adaptation design which is able to avoid abrupt video quality fluctuations, reduce re-buffering latency, and improve bandwidth utilization when compared to a conventional simulcast-based adaptive streaming system. The proposed adaptation design schedules DASH segment requests based on the estimated bandwidth, the dependencies among video layers, and layer buffer fullness. Scalable HEVC video coding is the latest state-of-the-art video coding technique that can alleviate various issues caused by simulcast-based adaptive video streaming. With scalable coded video streams, the video is encoded once into a number of layers representing different qualities and/or resolutions: a base layer (BL) and one or more enhancement layers (EL), each incrementally enhancing the quality of the lower layers. Such a layer-based coding structure allows fine-granularity rate adaptation for video streaming applications. Two video streaming use cases are presented in this paper. The first use case is to stream HD SHVC video over a wireless network where available bandwidth varies, and a performance comparison between the proposed layer-based streaming approach and the conventional simulcast streaming approach is provided. The second use case is to stream 4K/UHD SHVC video over a hybrid access network that consists of a 5G millimeter-wave high-speed wireless link and a conventional wired or WiFi network. The simulation results verify that the proposed layer-based rate adaptation approach is able to utilize the bandwidth more efficiently. As a result, a more consistent viewing experience with higher-quality video content and minimal video quality fluctuations can be presented to the user.
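The layer-dependency and buffer-fullness scheduling idea can be sketched as a simple per-segment decision rule: always request the base layer, and add enhancement layers only while the cumulative bitrate fits the bandwidth estimate and the layer's buffer is below target. This is a hypothetical illustration, not the paper's algorithm; the `Layer` fields, safety factor, and buffer target are assumptions:

```python
from dataclasses import dataclass

@dataclass
class Layer:
    name: str
    bitrate_kbps: float  # incremental bitrate of this layer
    buffer_s: float      # seconds of this layer already buffered

def layers_to_request(layers, est_bw_kbps, target_buffer_s=10.0, safety=0.8):
    """Pick which layers to request for the next DASH segment.

    The base layer is always requested; each enhancement layer is added only
    if all lower layers were added (layer dependency) and the cumulative
    bitrate fits within a safety fraction of the estimated bandwidth and the
    layer's buffer is below target.
    """
    chosen, cum = [], 0.0
    for layer in layers:  # ordered BL, EL1, EL2, ...
        cum += layer.bitrate_kbps
        is_base = not chosen
        if is_base or (cum <= safety * est_bw_kbps
                       and layer.buffer_s < target_buffer_s):
            chosen.append(layer.name)
        else:
            break  # dependency: cannot skip a lower layer
    return chosen

layers = [Layer("BL", 1000, 8.0), Layer("EL1", 1500, 6.0), Layer("EL2", 2500, 2.0)]
```

Because layers are dropped from the top, quality degrades one enhancement layer at a time rather than jumping between simulcast representations, which is the source of the smoother quality trajectory the paper reports.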
Quality versus intelligibility: studying human preferences for American Sign Language video
NASA Astrophysics Data System (ADS)
Ciaramello, Frank M.; Hemami, Sheila S.
2011-03-01
Real-time videoconferencing using cellular devices provides natural communication to the Deaf community. For this application, compressed American Sign Language (ASL) video must be evaluated in terms of the intelligibility of the conversation and not in terms of the overall aesthetic quality of the video. This work presents a paired-comparison experiment to determine the subjective preferences of ASL users in terms of the trade-off between intelligibility and quality when varying the proportion of the bitrate allocated explicitly to the regions of the video containing the signer. A rate-distortion optimization technique, which jointly optimizes a quality criterion and an intelligibility criterion according to a user-specified parameter, generates test video pairs for the subjective experiment. Experimental results suggest that at sufficiently high bitrates, all users prefer videos in which the non-signer regions of the video are encoded at some nominal rate. As the total encoding bitrate decreases, users generally prefer video in which a greater proportion of the rate is allocated to the signer. The specific operating points preferred in the quality-intelligibility trade-off vary with the demographics of the users.
Shima, Yoichiro; Suwa, Akina; Gomi, Yuichiro; Nogawa, Hiroki; Nagata, Hiroshi; Tanaka, Hiroshi
2007-01-01
Real-time video pictures can be transmitted inexpensively via a broadband connection using the DVTS (digital video transport system). However, the degradation of video pictures transmitted by DVTS has not been sufficiently evaluated. We examined the application of DVTS to remote consultation by using images of laparoscopic and endoscopic surgeries. A subjective assessment by the double stimulus continuous quality scale (DSCQS) method of the transmitted video pictures was carried out by eight doctors. Three of the four video recordings were assessed as being transmitted with no degradation in quality. None of the doctors noticed any degradation in the images due to encryption by the VPN (virtual private network) system. We also used an automatic picture quality assessment system to make an objective assessment of the same images. The objective DSCQS values were similar to the subjective ones. We conclude that although the quality of video pictures transmitted by the DVTS was slightly reduced, they were useful for clinical purposes. Encryption with a VPN did not degrade image quality.
Ho, Matthew; Stothers, Lynn; Lazare, Darren; Tsang, Brian; Macnab, Andrew
2015-01-01
Introduction: Many patients conduct internet searches to manage their own health problems, to decide if they need professional help, and to corroborate information given in a clinical encounter. Good information can improve patients’ understanding of their condition and their self-efficacy. Patients with spinal cord injury (SCI) featuring neurogenic bladder (NB) require knowledge and skills related to their condition and the need for intermittent catheterization (IC). Methods: Information quality was evaluated in videos accessed via YouTube relating to NB and IC using the search terms “neurogenic bladder intermittent catheter” and “spinal cord injury intermittent catheter.” Video content was independently rated by 3 investigators using criteria based on European Association of Urology (EAU) guidelines and established clinical practice. Results: In total, 71 videos met the inclusion criteria. Of these, 12 (17%) addressed IC and 50 (70%) contained information on NB. The remaining videos met the inclusion criteria but did not contain information relevant to either IC or NB. Analysis indicated poor overall quality of information, with some videos containing information contradictory to EAU guidelines for IC. High-quality videos were randomly distributed by YouTube. IC videos featuring a healthcare narrator scored significantly higher than patient-narrated videos, but not higher than videos with a merchant narrator. About half of the videos contained commercial content. Conclusions: Some good-quality educational videos about NB and IC are available on YouTube, but most are poor. The videos deemed good quality were not prominently ranked by the YouTube search algorithm; consequently, users are less likely to access them. Study limitations include the limit of 50 videos per category and the use of a de novo rating tool. Information quality in videos with healthcare narrators was not higher than in those featuring merchant narrators.
Better material is required to improve patients’ understanding of their condition. PMID:26644803
National evaluation of multidisciplinary quality metrics for head and neck cancer.
Cramer, John D; Speedy, Sedona E; Ferris, Robert L; Rademaker, Alfred W; Patel, Urjeet A; Samant, Sandeep
2017-11-15
The National Quality Forum has endorsed quality-improvement measures for multiple cancer types that are being developed into actionable tools to improve cancer care. No nationally endorsed quality metrics currently exist for head and neck cancer. The authors identified patients with surgically treated, invasive, head and neck squamous cell carcinoma in the National Cancer Data Base from 2004 to 2014 and compared the rate of adherence to 5 different quality metrics and whether compliance with these quality metrics impacted overall survival. The metrics examined included negative surgical margins, neck dissection lymph node (LN) yield ≥ 18, appropriate adjuvant radiation, appropriate adjuvant chemoradiation, adjuvant therapy within 6 weeks, as well as overall quality. In total, 76,853 eligible patients were identified. There was substantial variability in patient-level adherence, which was 80% for negative surgical margins, 73.1% for neck dissection LN yield, 69% for adjuvant radiation, 42.6% for adjuvant chemoradiation, and 44.5% for adjuvant therapy within 6 weeks. Risk-adjusted Cox proportional-hazard models indicated that all metrics were associated with a reduced risk of death: negative margins (hazard ratio [HR] 0.73; 95% confidence interval [CI], 0.71-0.76), LN yield ≥ 18 (HR, 0.93; 95% CI, 0.89-0.96), adjuvant radiation (HR, 0.67; 95% CI, 0.64-0.70), adjuvant chemoradiation (HR, 0.84; 95% CI, 0.79-0.88), and adjuvant therapy ≤6 weeks (HR, 0.92; 95% CI, 0.89-0.96). Patients who received high-quality care had a 19% reduced adjusted hazard of mortality (HR, 0.81; 95% CI, 0.79-0.83). Five head and neck cancer quality metrics were identified that have substantial variability in adherence and meaningfully impact overall survival. These metrics are appropriate candidates for national adoption. Cancer 2017;123:4372-81. © 2017 American Cancer Society.
An exploratory survey of methods used to develop measures of performance
NASA Astrophysics Data System (ADS)
Hamner, Kenneth L.; Lafleur, Charles A.
1993-09-01
Nonmanufacturing organizations are being challenged to provide high-quality products and services to their customers, with an emphasis on continuous process improvement. Measures of performance, referred to as metrics, can be used to foster process improvement. The application of performance measurement to nonmanufacturing processes can be very difficult. This research explored methods used to develop metrics in nonmanufacturing organizations. Several methods were formally defined in the literature, and the researchers used a two-step screening process to determine that the OMB Generic Method was the most likely to produce high-quality metrics. The OMB Generic Method was then used to develop metrics. A few other metric development methods were found in use at nonmanufacturing organizations. The researchers interviewed participants in metric development efforts to determine their satisfaction and to have them identify the strengths and weaknesses of, and recommended improvements to, the metric development methods used. Analysis of participants' responses allowed the researchers to identify the key components of a sound metric development method. Those components were incorporated into a proposed metric development method based on the OMB Generic Method that should be more likely to produce high-quality metrics resulting in continuous process improvement.
Software Quality Metrics Enhancements. Volume 1
1980-04-01
...the mathematical relationships which relate metrics to ratings of the various quality factors) for factors which were not validated previously were... function provides a mathematical relationship between the metrics and the quality factors. (3) Validation of these normalization functions was performed by... samples; further research is needed before a high degree of confidence can be placed on the mathematical relationships established to date.
Evaluation of the effectiveness of color attributes for video indexing
NASA Astrophysics Data System (ADS)
Chupeau, Bertrand; Forest, Ronan
2001-10-01
Color features are reviewed and their effectiveness assessed in the application framework of key-frame clustering for abstracting unconstrained video. Existing color spaces and associated quantization schemes are first studied. Description of global color distribution by means of histograms is then detailed. In our work, 12 combinations of color space and quantization were selected, together with 12 histogram metrics. Their respective effectiveness with respect to picture similarity measurement was evaluated through a query-by-example scenario. For that purpose, a set of still-picture databases was built by extracting key frames from several video clips, including news, documentaries, sports and cartoons. Classical retrieval performance evaluation criteria were adapted to the specificity of our testing methodology.
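Two histogram metrics commonly used for this kind of key-frame similarity measurement (not necessarily among the 12 the authors selected) are histogram intersection and the chi-square distance. A minimal query-by-example sketch over already-quantized color histograms; the function names and the database layout are illustrative:

```python
def hist_intersection(h1, h2):
    """Normalized histogram intersection: 1.0 means identical distributions."""
    s1, s2 = sum(h1), sum(h2)
    if s1 == 0 or s2 == 0:
        raise ValueError("empty histogram")
    return sum(min(a / s1, b / s2) for a, b in zip(h1, h2))

def chi_square_dist(h1, h2, eps=1e-12):
    """Chi-square distance between normalized histograms: 0.0 means identical."""
    s1, s2 = sum(h1), sum(h2)
    return 0.5 * sum((a / s1 - b / s2) ** 2 / (a / s1 + b / s2 + eps)
                     for a, b in zip(h1, h2))

def rank_by_similarity(query_hist, db):
    """Query-by-example: db maps frame id -> histogram; returns frame ids
    ranked by intersection similarity, most similar first."""
    return sorted(db, key=lambda fid: hist_intersection(query_hist, db[fid]),
                  reverse=True)
```

Retrieval criteria such as precision/recall can then be computed from the returned ranking, which is essentially the evaluation protocol the abstract describes.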
Video medical interpretation over 3G cellular networks: a feasibility study.
Locatis, Craig; Williamson, Deborah; Sterrett, James; Detzler, Isabel; Ackerman, Michael
2011-12-01
To test the feasibility of using cell phone technology to provide video medical interpretation services at a distance. Alternative cell phone services were researched and videoconferencing technologies were evaluated to identify the video products and telecommunication services needed to meet video medical interpretation requirements. The video and telecommunication technologies were then tried out in a pharmacy setting and compared with use of the telephone. Outcomes were similar to findings in previous research involving video medical interpretation conducted at higher bandwidth and video quality. Patients appreciated the interpretation service no matter how it was provided, while health providers and interpreters preferred video. It is possible to provide video medical interpretation services via cellular communication using lower-bandwidth videoconferencing technology that provides sufficient quality, at least in pharmacy settings. However, a number of issues need to be addressed to ensure quality of service.
Justus, Billy; Burge, David; Cobb, Jennifer; Marsico, Travis; Bouldin, Jennifer
2016-01-01
Methods for assessing wetland conditions must be established so wetlands can be monitored and ecological services can be protected. We evaluated biological indices compiled from macroinvertebrate and diatom metrics developed primarily for streams to assess their ability to indicate water quality in connected depression wetlands. We collected water-quality and biological samples at 24 connected depressions dominated by water tupelo (Nyssa aquatica) or bald cypress (Taxodium distichum) (water depths = 0.5–1.0 m). Water quality of the least-disturbed connected depressions was characteristic of swamps in the southeastern USA, which tend to have low specific conductance, nutrient concentrations, and pH. We compared 162 macroinvertebrate metrics and 123 diatom metrics with a water-quality disturbance gradient. For most metrics, we evaluated richness, % richness, abundance, and % relative abundance values. Three of the 4 macroinvertebrate metrics that were most beneficial for identifying disturbance in connected depressions decreased along the disturbance gradient even though they normally increase relative to stream disturbance. The negative relationship to disturbance of some taxa (e.g., dipterans, mollusks, and crustaceans) that are considered tolerant in streams suggests that the tolerance scale for some macroinvertebrates can differ markedly between streams and wetlands. Three of the 4 metrics chosen for the diatom index reflected published tolerances or fit the usual perception of metric response to disturbance. Both biological indices may be useful in connected depressions elsewhere in the Mississippi Alluvial Plain Ecoregion and could have application in other wetland types. Given the paradoxical relationship of some macroinvertebrate metrics to dissolved O2 (DO), we suggest that the diatom metrics may be easier to interpret and defend for wetlands with low DO concentrations in least-disturbed conditions.
Comparison of VATS and Robotic Approaches For Clinical Stage I and II NSCLC Using the STS Database
Louie, Brian E.; Wilson, Jennifer L.; Kim, Sunghee; Cerfolio, Robert J.; Park, Bernard J.; Farivar, Alexander S.; Vallières, Eric; Aye, Ralph W.; Burfeind, William R.; Block, Mark I.
2016-01-01
Background: Data from selected centers show that robotic lobectomy (RL) is safe, effective, and has 30-day mortality comparable to that of video-assisted thoracoscopic lobectomy (VATS). However, widespread adoption of RL is controversial. We used the STS-GTS Database to evaluate quality metrics for these two minimally invasive lobectomy techniques. Methods: A database query for primary clinical stage I or II NSCLC at high-volume centers from 2009 to 2013 identified 1,220 RLs and 12,378 VATS. Quality metrics evaluated included operative morbidity, 30-day mortality, and nodal upstaging (NU), defined as cN0 to pN1. Multivariable logistic regression was used to evaluate NU. Results: RL patients were older, less active, less likely to be an ever smoker, and had higher BMI (all p<0.05). They were also more likely to have coronary heart disease or hypertension (all p<0.001) and to have had preoperative mediastinal staging (p<0.0001). RL operative times were longer (median 186 vs 173 min, p<0.001); all other operative parameters were similar. All postoperative outcomes were similar, including complications and 30-day mortality (RL 0.6% vs VATS 0.8%, p=0.4). Median length of stay was 4 days for both, but a higher proportion of RLs stayed <4 days: 48% vs 39%, p<0.001. NU overall was similar (p=0.6), but with trends favoring VATS in the cT1b group and RL in the cT2a group. Conclusions: RL patients had more comorbidities and RL operative times were longer, but quality outcome measures, including complications, hospital stay, 30-day mortality, and NU, suggest RL and VATS are equivalent. PMID:27209613
Coding visual features extracted from video sequences.
Baroffio, Luca; Cesana, Matteo; Redondi, Alessandro; Tagliasacchi, Marco; Tubaro, Stefano
2014-05-01
Visual features are successfully exploited in several applications (e.g., visual search, object recognition and tracking, etc.) due to their ability to efficiently represent image content. Several visual analysis tasks require features to be transmitted over a bandwidth-limited network, thus calling for coding techniques to reduce the required bit budget, while attaining a target level of efficiency. In this paper, we propose, for the first time, a coding architecture designed for local features (e.g., SIFT, SURF) extracted from video sequences. To achieve high coding efficiency, we exploit both spatial and temporal redundancy by means of intraframe and interframe coding modes. In addition, we propose a coding mode decision based on rate-distortion optimization. The proposed coding scheme can be conveniently adopted to implement the analyze-then-compress (ATC) paradigm in the context of visual sensor networks. That is, sets of visual features are extracted from video frames, encoded at remote nodes, and finally transmitted to a central controller that performs visual analysis. This is in contrast to the traditional compress-then-analyze (CTA) paradigm, in which video sequences acquired at a node are compressed and then sent to a central unit for further processing. In this paper, we compare these coding paradigms using metrics that are routinely adopted to evaluate the suitability of visual features in the context of content-based retrieval, object recognition, and tracking. Experimental results demonstrate that, thanks to the significant coding gains achieved by the proposed coding scheme, ATC outperforms CTA with respect to all evaluation metrics.
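The rate-distortion mode decision mentioned above follows the classic Lagrangian formulation: pick the coding mode (e.g., intraframe vs. interframe) that minimizes J = D + λR. A minimal sketch; the distortion/rate numbers and mode names are illustrative, not values from the paper:

```python
def choose_coding_mode(modes, lam):
    """Rate-distortion mode decision: pick the mode minimizing J = D + lambda*R.

    modes: {name: (distortion, rate_bits)} for the candidate coding modes of a
    feature block; lam (the Lagrange multiplier) trades rate against distortion.
    """
    return min(modes, key=lambda m: modes[m][0] + lam * modes[m][1])

# Interframe coding exploits temporal redundancy: lower rate, slightly
# higher distortion than coding the features independently (intra).
modes = {"intra": (2.0, 600), "inter": (2.5, 200)}
```

A small λ emphasizes fidelity and favors the intra mode here; a larger λ emphasizes bit savings and favors the inter mode, mirroring how the bit budget constraint steers the encoder.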
Evaluating the Accuracy and Quality of the Information in Kyphosis Videos Shared on YouTube.
Erdem, Mehmet Nuri; Karaca, Sinan
2018-04-16
A quality-control YouTube-based study using recognized quality scoring systems. In this study, our aim was to assess the accuracy and quality of the information in kyphosis videos shared on YouTube. The Internet is a widely and increasingly used source for obtaining medical information by both patients and clinicians. YouTube, in particular, manifests itself as a leading source with its ease of access to information and visual advantage for Internet users. The first 50 videos returned by the YouTube search engine in response to a 'kyphosis' keyword query were included in the study and categorized under seven and six groups, based on their source and content, respectively. The popularity of the videos was evaluated with a new index called the video power index (VPI). The quality, educational quality, and accuracy of the source of information were measured using the JAMA score, Global Quality Score (GQS), and Kyphosis Specific Score (KSS). Videos had a mean duration of 397 seconds and a mean number of views of 131,644, with a total viewing number of 6,582,221. The source (uploader) in 36% of the videos was a trainer, and the content in 46% of the videos was exercise training; 72% of the videos were about postural kyphosis. Videos had a mean JAMA score of 1.36 (range: 1 to 4), GQS of 1.68 (range: 1 to 5), and KSS of 3.02 (range: 0 to 32). The academic group had the highest scores and the lowest VPIs. Online information on kyphosis is of low quality, and its contents are of unknown source and accuracy. To keep the balance in sharing the right information with the patient, clinicians should possess knowledge about the online information related to their field and should contribute to the development of optimal medical videos.
The role of optical flow in automated quality assessment of full-motion video
NASA Astrophysics Data System (ADS)
Harguess, Josh; Shafer, Scott; Marez, Diego
2017-09-01
In real-world video data, such as full-motion video (FMV) taken from unmanned vehicles, surveillance systems, and other sources, various corruptions of the raw data are inevitable. These can be due to the image acquisition process, noise, distortion, and compression artifacts, among other sources of error. However, we desire methods to analyze the quality of the video to determine whether, and to what extent, the underlying content of the corrupted video can be analyzed by humans or machines. Previous approaches have shown that motion estimation, or optical flow, can be an important cue in automating this video quality assessment. However, there are many different optical flow algorithms in the literature, each with its own advantages and disadvantages. We examine the effect of the choice of optical flow algorithm (including baseline and state-of-the-art) on motion-based automated video quality assessment algorithms.
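A common way such flow-based cues feed a quality model is to pool per-pixel flow magnitudes into summary statistics. The sketch below is a generic illustration, not the paper's method; it assumes the dense flow field has already been computed by whichever optical flow algorithm is under study, which is exactly the variable the paper examines:

```python
import math

def flow_quality_features(flow):
    """Pool a dense flow field into summary features for quality assessment.

    flow: list of rows of (dx, dy) vectors, as produced by any dense optical
    flow algorithm (Lucas-Kanade, Farneback, a learned method, ...).
    Returns (mean magnitude, max magnitude, fraction of near-static pixels).
    Corruption such as compression artifacts tends to produce noisy,
    inconsistent flow, which shifts these statistics.
    """
    mags = [math.hypot(dx, dy) for row in flow for dx, dy in row]
    if not mags:
        raise ValueError("empty flow field")
    mean_mag = sum(mags) / len(mags)
    max_mag = max(mags)
    static_frac = sum(1 for m in mags if m < 0.5) / len(mags)
    return mean_mag, max_mag, static_frac
```

Because different flow algorithms yield different fields for the same corrupted frames, the same pooling step produces different feature values, which is why the choice of algorithm matters downstream.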
Modeling the time-varying subjective quality of HTTP video streams with rate adaptations.
Chen, Chao; Choi, Lark Kwon; de Veciana, Gustavo; Caramanis, Constantine; Heath, Robert W; Bovik, Alan C
2014-05-01
Newly developed hypertext transfer protocol (HTTP)-based video streaming technologies enable flexible rate-adaptation under varying channel conditions. Accurately predicting the users' quality of experience (QoE) for rate-adaptive HTTP video streams is thus critical to achieve efficiency. An important aspect of understanding and modeling QoE is predicting the up-to-the-moment subjective quality of a video as it is played, which is difficult due to hysteresis effects and nonlinearities in human behavioral responses. This paper presents a Hammerstein-Wiener model for predicting the time-varying subjective quality (TVSQ) of rate-adaptive videos. To collect data for model parameterization and validation, a database of longer duration videos with time-varying distortions was built and the TVSQs of the videos were measured in a large-scale subjective study. The proposed method is able to reliably predict the TVSQ of rate adaptive videos. Since the Hammerstein-Wiener model has a very simple structure, the proposed method is suitable for online TVSQ prediction in HTTP-based streaming.
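A Hammerstein-Wiener model sandwiches a linear dynamic block between two static nonlinearities: the input passes through a static function, then a linear filter (capturing dynamics such as the hysteresis the abstract mentions), then a second static function. A minimal sketch of the cascade; the FIR coefficients and nonlinearities below are illustrative stand-ins, not the fitted model from the paper:

```python
def hammerstein_wiener(x, f, b, g):
    """Hammerstein-Wiener cascade: static f -> causal FIR filter b -> static g.

    x: input sequence (e.g., per-second objective quality of the stream)
    f, g: static nonlinearities (callables)
    b: FIR coefficients modeling the linear dynamics of viewer response
    Returns the predicted time-varying output sequence.
    """
    u = [f(v) for v in x]                       # input nonlinearity
    y_lin = []
    for t in range(len(u)):                     # causal FIR convolution
        acc = sum(b[k] * u[t - k] for k in range(len(b)) if t - k >= 0)
        y_lin.append(acc)
    return [g(v) for v in y_lin]                # output nonlinearity

# Illustrative use: a 3-tap smoothing filter and simple clipping nonlinearities
f = lambda v: max(0.0, v)       # rectify raw quality scores
g = lambda v: min(100.0, v)     # clip predictions to the rating scale
y = hammerstein_wiener([50, 50, 50, 50], f, [0.5, 0.3, 0.2], g)
```

The FIR stage makes the predicted subjective quality lag and smooth changes in the input quality, which is the behavioral hysteresis the model is meant to capture; its simple structure is what makes online prediction cheap.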
Miller, Anna N; Kozar, Rosemary; Wolinsky, Philip
2017-06-01
Reproducible metrics are needed to evaluate the delivery of orthopaedic trauma care against national care norms and to identify outliers. The American College of Surgeons (ACS) is uniquely positioned to collect and evaluate the data needed to evaluate orthopaedic trauma care via the Committee on Trauma and the Trauma Quality Improvement Program. We evaluated the first quality metrics the ACS has collected for orthopaedic trauma surgery to determine whether these metrics can be collected with accuracy and completeness. The metrics include the time to administration of the first dose of antibiotics for open fractures, the time to surgical irrigation and débridement of open tibial fractures, and the percentage of patients who undergo stabilization of femoral fractures at trauma centers nationwide. These metrics were analyzed to evaluate variances in the delivery of orthopaedic care across the country. The data showed wide variances for all metrics, and many centers were unable to collect the orthopaedic trauma care metrics completely. There was large variability in the results of the metrics collected among different trauma center levels, as well as among centers of a particular level. The ACS has successfully begun tracking orthopaedic trauma care performance measures, which will help inform reevaluation of the goals and continued work on data collection and improvement of patient care. Future areas of research may link these performance measures with patient outcomes, such as long-term tracking, to assess nonunion and function. This information can provide insight into center performance and its effect on patient outcomes. The ACS was able to successfully collect and evaluate the data for three metrics used to assess the quality of orthopaedic trauma care. However, additional research is needed to determine whether these metrics are suitable for evaluating orthopaedic trauma care and to establish cutoff values for each metric.
Angott, Andrea M.; Comerford, David A.; Ubel, Peter A.
2014-01-01
Objective: To test a video intervention as a way to improve predictions of mood and quality of life with an emotionally evocative medical condition. Such predictions are typically inaccurate, which can be consequential for decision making. Method: In Part 1, people presently or formerly living with ostomies predicted how watching a video depicting a person changing his ostomy pouch would affect mood and quality-of-life forecasts for life with an ostomy. In Part 2, participants from the general public read a description of life with an ostomy; half also watched a video depicting a person changing his ostomy pouch. Participants’ quality-of-life and mood forecasts for life with an ostomy were assessed. Results: Contrary to our expectations, and the expectations of people presently or formerly living with ostomies, the video did not reduce mood or quality-of-life estimates, even among participants high in trait disgust sensitivity. Among low-disgust participants, watching the video increased quality-of-life predictions for ostomy. Conclusion: Video interventions may improve mood and quality-of-life forecasts for medical conditions, including those that may elicit disgust, such as ostomy. Practice implications: Video interventions focusing on patients’ experience of illness continue to show promise as components of decision aids, even for emotionally charged health states such as ostomy. PMID:23177398
Full-frame video stabilization with motion inpainting.
Matsushita, Yasuyuki; Ofek, Eyal; Ge, Weina; Tang, Xiaoou; Shum, Heung-Yeung
2006-07-01
Video stabilization is an important video enhancement technology which aims at removing annoying shaky motion from videos. We propose a practical and robust approach to video stabilization that produces full-frame stabilized videos with good visual quality. While most previous methods end up producing smaller-size stabilized videos, our completion method can produce full-frame videos by naturally filling in missing image parts by locally aligning image data of neighboring frames. To achieve this, motion inpainting is proposed to enforce spatial and temporal consistency of the completion in both static and dynamic image areas. In addition, image quality in the stabilized video is enhanced with a new practical deblurring algorithm. Instead of estimating point spread functions, our method transfers and interpolates sharper image pixels of neighboring frames to increase the sharpness of the frame. The proposed video completion and deblurring methods enabled us to develop a complete video stabilizer which can naturally keep the original image quality in the stabilized videos. The effectiveness of our method is confirmed by extensive experiments over a wide variety of videos.
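The motion-smoothing step at the heart of such stabilizers can be sketched in a few lines. This is an illustrative simplification only: it assumes per-frame 2-D camera translations have already been estimated, and it ignores the paper's motion-inpainting and deblurring stages.

```python
import numpy as np

def smooth_camera_path(frame_translations, sigma=2.0, radius=5):
    """Smooth a per-frame camera translation path with a Gaussian kernel.

    frame_translations: (N, 2) array of estimated per-frame (dx, dy) offsets.
    Returns (smoothed_path, corrections): subtracting the smoothed path from
    the accumulated raw path gives the per-frame warp correction to apply.
    """
    offsets = np.arange(-radius, radius + 1)
    kernel = np.exp(-offsets**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    path = np.cumsum(frame_translations, axis=0)      # accumulated camera path
    padded = np.pad(path, ((radius, radius), (0, 0)), mode="edge")
    smoothed = np.empty_like(path, dtype=float)
    for i in range(len(path)):
        window = padded[i:i + 2 * radius + 1]
        smoothed[i] = (window * kernel[:, None]).sum(axis=0)
    return smoothed, path - smoothed
```

Warping each frame by its correction yields the steadied sequence; the full-frame result in the paper then comes from filling the revealed borders with data from neighboring frames.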
Nissan, Michael E; Gupta, Amar; Rayess, Hani; Black, Kevin Z; Carron, Michael
2018-02-01
Physicians should be aware of both websites and videos available online regarding the otoplasty procedure to provide quality care. This study systematically analyzes the authorships, reliability, quality, and readability of the websites, as well as the authorships and primary objectives of the videos regarding otoplasty. Validated instruments were used to analyze the reliability, quality, and readability of websites, and videos were systematically categorized and analyzed. A Google search was conducted, and the first five pages of results were included in this study. After excluding unrelated websites, the remaining 44 websites were categorized by authorship (physician, patient, academic, or unaffiliated) and were analyzed using the validated DISCERN instrument for reliability and quality, as well as various other validated instruments to measure readability. A YouTube search was also conducted, and the first 50 relevant videos were included in the study. These videos were categorized by authorship and their primary objective. Website authorships were physician-dominated. Reliability, quality, and overall DISCERN score differ between the four authorship groups by a statistically significant margin (Kruskal-Wallis test, p < 0.05). Unaffiliated websites were the most reliable, and physician websites were the least reliable. Academic websites were of the highest quality, and patient websites were of the lowest quality. Readability did not differ significantly between the groups, though the readability measurements showed a general lack of material easily readable by the general public. YouTube was likewise dominated by physician-authored videos. While the physician-authored videos sought mainly to inform and to advertise, patient-authored videos sought mainly to provide the patient's perspective.
Academic organizations showed very little representation on YouTube, and the YouTube views on otoplasty videos were dominated by the top 20 videos, which represented over 93% of the total views of videos included in this study.
Memory colours and colour quality evaluation of conventional and solid-state lamps.
Smet, Kevin A G; Ryckaert, Wouter R; Pointer, Michael R; Deconinck, Geert; Hanselaer, Peter
2010-12-06
A colour quality metric based on memory colours is presented. The basic idea is simple. The colour quality of a test source is evaluated as the degree of similarity between the colour appearance of a set of familiar objects and their memory colours. The closer the match, the better the colour quality. This similarity was quantified using a set of similarity distributions obtained by Smet et al. in a previous study. The metric was validated by calculating the Pearson and Spearman correlation coefficients between the metric predictions and the visual appreciation results obtained in a validation experiment conducted by the authors as well as those obtained in two independent studies. The metric was found to correlate well with the visual appreciation of the lighting quality of the sources used in the three experiments. Its performance was also compared with that of the CIE colour rendering index and the NIST colour quality scale. For all three experiments, the metric was found to be significantly better at predicting the correct visual rank order of the light sources (p < 0.1).
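The validation step described here, correlating metric predictions with visual appreciation scores, can be sketched as follows. The function names are illustrative, and the rank computation ignores ties for brevity (real Spearman implementations average tied ranks).

```python
import math

def pearson(xs, ys):
    """Pearson linear correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def spearman(xs, ys):
    """Spearman rank correlation: Pearson correlation of the ranks.
    Ties are ignored for brevity."""
    def ranks(vs):
        order = sorted(range(len(vs)), key=lambda i: vs[i])
        r = [0.0] * len(vs)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r
    return pearson(ranks(xs), ranks(ys))
```

A perfectly monotonic but nonlinear relation between metric scores and observer ratings gives a Spearman correlation of 1 while the Pearson correlation falls below 1, which is why rank correlation is the natural check for "predicting the correct visual rank order".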
Teaching Surgical Procedures with Movies: Tips for High-quality Video Clips.
Jacquemart, Mathieu; Bouletreau, Pierre; Breton, Pierre; Mojallal, Ali; Sigaux, Nicolas
2016-09-01
Video must now be considered as a precious tool for learning surgery. However, the medium does present production challenges, and currently, quality movies are not always accessible. We developed a series of 7 surgical videos and made them available on a publicly accessible internet website. Our videos have been viewed by thousands of people worldwide. High-quality educational movies must respect strategic and technical points to be reliable.
Adaptive UEP and Packet Size Assignment for Scalable Video Transmission over Burst-Error Channels
NASA Astrophysics Data System (ADS)
Lee, Chen-Wei; Yang, Chu-Sing; Su, Yih-Ching
2006-12-01
This work proposes an adaptive unequal error protection (UEP) and packet size assignment scheme for scalable video transmission over a burst-error channel. An analytic model is developed to evaluate the impact of channel bit error rate on the quality of streaming scalable video. A video transmission scheme, which combines the adaptive assignment of packet size with unequal error protection to increase the end-to-end video quality, is proposed. Several distinct scalable video transmission schemes over burst-error channels have been compared, and the simulation results reveal that the proposed transmission schemes can react to varying channel conditions with less and smoother quality degradation.
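As a toy illustration of the unequal-error-protection idea (not the paper's channel-adaptive optimization), a fixed parity budget might be split across scalable layers in proportion to their importance, so the base layer, which every enhancement layer depends on, gets the strongest protection:

```python
def assign_uep(layer_importance, total_parity_bytes):
    """Split a parity-byte budget across scalable video layers in proportion
    to importance weights (base layer first). A toy stand-in for a real
    UEP scheme, which would also adapt to measured channel conditions.
    """
    total = sum(layer_importance)
    alloc = [int(total_parity_bytes * w / total) for w in layer_importance]
    alloc[0] += total_parity_bytes - sum(alloc)  # rounding leftover to base layer
    return alloc
```

For example, with importance weights 5/3/2 for base and two enhancement layers, a 100-byte parity budget splits 50/30/20.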
Testing, Requirements, and Metrics
NASA Technical Reports Server (NTRS)
Rosenberg, Linda; Hyatt, Larry; Hammer, Theodore F.; Huffman, Lenore; Wilson, William
1998-01-01
The criticality of correct, complete, testable requirements is a fundamental tenet of software engineering. Also critical is complete requirements based testing of the final product. Modern tools for managing requirements allow new metrics to be used in support of both of these critical processes. Using these tools, potential problems with the quality of the requirements and the test plan can be identified early in the life cycle. Some of these quality factors include: ambiguous or incomplete requirements, poorly designed requirements databases, excessive or insufficient test cases, and incomplete linkage of tests to requirements. This paper discusses how metrics can be used to evaluate the quality of the requirements and test to avoid problems later. Requirements management and requirements based testing have always been critical in the implementation of high quality software systems. Recently, automated tools have become available to support requirements management. At NASA's Goddard Space Flight Center (GSFC), automated requirements management tools are being used on several large projects. The use of these tools opens the door to innovative uses of metrics in characterizing test plan quality and assessing overall testing risks. In support of these projects, the Software Assurance Technology Center (SATC) is working to develop and apply a metrics program that utilizes the information now available through the application of requirements management tools. Metrics based on this information provides real-time insight into the testing of requirements and these metrics assist the Project Quality Office in its testing oversight role. This paper discusses three facets of the SATC's efforts to evaluate the quality of the requirements and test plan early in the life cycle, thus preventing costly errors and time delays later.
Driving photomask supplier quality through automation
NASA Astrophysics Data System (ADS)
Russell, Drew; Espenscheid, Andrew
2007-10-01
In 2005, Freescale Semiconductor's newly centralized mask data prep organization (MSO) initiated a project to develop an automated global quality validation system for photomasks delivered to Freescale Semiconductor fabs. The system handles Certificate of Conformance (CofC) quality metric collection, validation, reporting and an alert system for all photomasks shipped to Freescale fabs from all qualified global suppliers. The completed system automatically collects 30+ quality metrics for each photomask shipped. Other quality metrics are generated from the collected data and quality metric conformance is automatically validated to specifications or control limits with failure alerts emailed to fab photomask and mask data prep engineering. A quality data warehouse stores the data for future analysis, which is performed quarterly. The improved access to data provided by the system has improved Freescale engineers' ability to spot trends and opportunities for improvement with our suppliers' processes. This paper will review each phase of the project, current system capabilities and quality system benefits for both our photomask suppliers and Freescale.
Kasturi, Rangachar; Goldgof, Dmitry; Soundararajan, Padmanabhan; Manohar, Vasant; Garofolo, John; Bowers, Rachel; Boonstra, Matthew; Korzhova, Valentina; Zhang, Jing
2009-02-01
Common benchmark data sets, standardized performance metrics, and baseline algorithms have demonstrated considerable impact on research and development in a variety of application domains. These resources provide both consumers and developers of technology with a common framework to objectively compare the performance of different algorithms and algorithmic improvements. In this paper, we present such a framework for evaluating object detection and tracking in video: specifically for face, text, and vehicle objects. This framework includes the source video data, ground-truth annotations (along with guidelines for annotation), performance metrics, evaluation protocols, and tools including scoring software and baseline algorithms. For each detection and tracking task and supported domain, we developed a 50-clip training set and a 50-clip test set. Each data clip is approximately 2.5 minutes long and has been completely spatially/temporally annotated at the I-frame level. Each task/domain, therefore, has an associated annotated corpus of approximately 450,000 frames. The scope of such annotation is unprecedented and was designed to begin to support the necessary quantities of data for robust machine learning approaches, as well as a statistically significant comparison of the performance of algorithms. The goal of this work was to systematically address the challenges of object detection and tracking through a common evaluation framework that permits a meaningful objective comparison of techniques, provides the research community with sufficient data for the exploration of automatic modeling techniques, encourages the incorporation of objective evaluation into the development process, and contributes useful lasting resources of a scale and magnitude that will prove to be extremely useful to the computer vision research community for years to come.
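A minimal sketch of frame-level detection scoring in the spirit of such a framework, using intersection-over-union (IoU) box overlap and greedy one-to-one matching. The framework's actual protocols and metrics are more elaborate; the threshold and matching strategy here are illustrative choices.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    iw, ih = max(0, ix2 - ix1), max(0, iy2 - iy1)
    inter = iw * ih
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def frame_detection_score(detections, ground_truth, thresh=0.5):
    """Greedy one-to-one matching of detections to annotated boxes.
    Returns (true_positives, false_positives, missed_ground_truths)."""
    unmatched_gt = list(ground_truth)
    tp = 0
    for det in detections:
        best = max(unmatched_gt, key=lambda g: iou(det, g), default=None)
        if best is not None and iou(det, best) >= thresh:
            unmatched_gt.remove(best)
            tp += 1
    fp = len(detections) - tp
    return tp, fp, len(unmatched_gt)
```

Accumulating these counts over the annotated I-frames of a clip yields clip-level precision and recall, the kind of quantities such benchmarks aggregate across their 50-clip test sets.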
Examining the effect of task on viewing behavior in videos using saliency maps
NASA Astrophysics Data System (ADS)
Alers, Hani; Redi, Judith A.; Heynderickx, Ingrid
2012-03-01
Research has shown that when viewing still images, people will look at these images in a different manner if instructed to evaluate their quality. They will tend to focus less on the main features of the image and, instead, scan the entire image area looking for clues for its level of quality. It is questionable, however, whether this finding can be extended to videos considering their dynamic nature. One can argue that when watching a video the viewer will always focus on the dynamically changing features of the video regardless of the given task. To test whether this is true, an experiment was conducted where half of the participants viewed videos with the task of quality evaluation while the other half were simply told to watch the videos as if they were watching a movie on TV or a video downloaded from the internet. The videos contained content which was degraded with compression artifacts over a wide range of quality. An eye tracking device was used to record the viewing behavior in both conditions. By comparing the behavior during each task, it was possible to observe a systematic difference in the viewing behavior which seemed to correlate to the quality of the videos.
NASA Astrophysics Data System (ADS)
Gide, Milind S.; Karam, Lina J.
2016-08-01
With the increased focus on visual attention (VA) in the last decade, a large number of computational visual saliency methods have been developed over the past few years. These models are traditionally evaluated by using performance evaluation metrics that quantify the match between predicted saliency and fixation data obtained from eye-tracking experiments on human observers. Though a considerable number of such metrics have been proposed in the literature, they have notable shortcomings. In this work, we discuss shortcomings in existing metrics through illustrative examples and propose a new metric that uses local weights based on fixation density, which overcomes these flaws. To compare the performance of our proposed metric at assessing the quality of saliency prediction with other existing metrics, we construct a ground-truth subjective database in which saliency maps obtained from 17 different VA models are evaluated by 16 human observers on a 5-point categorical scale in terms of their visual resemblance with corresponding ground-truth fixation density maps obtained from eye-tracking data. The metrics are evaluated by correlating metric scores with the human subjective ratings. The correlation results show that the proposed evaluation metric outperforms all other popular existing metrics. Additionally, the constructed database and corresponding subjective ratings provide an insight into which of the existing metrics and future metrics are better at estimating the quality of saliency prediction and can be used as a benchmark.
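One plausible reading of a fixation-density-weighted comparison (not the authors' exact formulation) can be sketched as follows: local errors between the predicted saliency map and the fixation density map are weighted by the density itself, so mistakes in heavily fixated regions count more.

```python
import numpy as np

def weighted_saliency_similarity(pred, fix_density, eps=1e-8):
    """Compare a predicted saliency map with a ground-truth fixation density
    map. Both maps are normalized to sum to 1, local absolute errors are
    weighted by the fixation density, and the weighted error is mapped to a
    similarity score in (0, 1]; higher means a closer match.
    """
    p = pred / (pred.sum() + eps)
    f = fix_density / (fix_density.sum() + eps)
    w = f / (f.max() + eps)                 # local weights from fixation density
    err = (w * np.abs(p - f)).sum() / (w.sum() + eps)
    return 1.0 / (1.0 + err)
```

Correlating such scores with the 5-point human ratings described in the abstract is then exactly the evaluation-of-the-evaluation-metric exercise the paper performs.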
Objective assessment of MPEG-2 video quality
NASA Astrophysics Data System (ADS)
Gastaldo, Paolo; Zunino, Rodolfo; Rovetta, Stefano
2002-07-01
The increasing use of video compression standards in broadcasting television systems has required, in recent years, the development of video quality measurements that take into account artifacts specifically caused by digital compression techniques. In this paper we present a methodology for the objective quality assessment of MPEG video streams by using circular back-propagation feedforward neural networks. Mapping neural networks can render nonlinear relationships between objective features and subjective judgments, thus avoiding any simplifying assumption on the complexity of the model. The neural network processes an instantaneous set of input values, and yields an associated estimate of perceived quality. Therefore, the neural-network approach turns objective quality assessment into adaptive modeling of subjective perception. The objective features used for the estimate are chosen according to the assessed relevance to perceived quality and are continuously extracted in real time from compressed video streams. The overall system mimics perception but does not require any analytical model of the underlying physical phenomenon. The capability to process compressed video streams represents an important advantage over existing approaches, since avoiding the stream-decoding process greatly enhances real-time performance. Experimental results confirm that the system provides satisfactory, continuous-time approximations for actual scoring curves concerning real test videos.
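The core idea, mapping objective stream features to a perceived-quality estimate with a trainable network, can be sketched with a plain one-hidden-layer regressor. The circular back-propagation architecture used in the paper is replaced here by an ordinary MLP for brevity, and the feature names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

class TinyQualityMLP:
    """Minimal one-hidden-layer regressor mapping objective stream features
    (e.g. bitrate, motion-vector statistics) to a perceived-quality estimate,
    trained by per-sample gradient descent on squared error."""

    def __init__(self, n_features, n_hidden=8, lr=0.05):
        self.w1 = rng.normal(0, 0.5, (n_features, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.w2 = rng.normal(0, 0.5, n_hidden)
        self.b2 = 0.0
        self.lr = lr

    def forward(self, x):
        self.h = np.tanh(x @ self.w1 + self.b1)   # hidden activations
        return self.h @ self.w2 + self.b2         # scalar quality estimate

    def train_step(self, x, target):
        y = self.forward(x)
        err = y - target
        grad_w2 = err * self.h                    # backprop through output layer
        grad_h = err * self.w2 * (1 - self.h ** 2)
        self.w2 -= self.lr * grad_w2
        self.b2 -= self.lr * err
        self.w1 -= self.lr * np.outer(x, grad_h)
        self.b1 -= self.lr * grad_h
        return 0.5 * err ** 2                     # squared-error loss
```

Trained against subjective scores, such a mapping plays the role the abstract describes: turning continuously extracted objective features into a running estimate of perceived quality.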
A Completely Blind Video Integrity Oracle.
Mittal, Anish; Saad, Michele A; Bovik, Alan C
2016-01-01
Considerable progress has been made toward developing still picture perceptual quality analyzers that do not require any reference picture and that are not trained on human opinion scores of distorted images. However, there do not yet exist any such completely blind video quality assessment (VQA) models. Here, we attempt to bridge this gap by developing a new VQA model called the video intrinsic integrity and distortion evaluation oracle (VIIDEO). The new model does not require the use of any additional information other than the video being quality evaluated. VIIDEO embodies models of intrinsic statistical regularities that are observed in natural videos, which are used to quantify disturbances introduced due to distortions. An algorithm derived from the VIIDEO model is thereby able to predict the quality of distorted videos without any external knowledge about the pristine source, anticipated distortions, or human judgments of video quality. Even with such a paucity of information, we are able to show that the VIIDEO algorithm performs much better than the legacy full reference quality measure MSE on the LIVE VQA database and delivers performance comparable with a leading human judgment trained blind VQA model. We believe that the VIIDEO algorithm is a significant step toward making real-time monitoring of completely blind video quality possible.
Colonoscopy video quality assessment using hidden Markov random fields
NASA Astrophysics Data System (ADS)
Park, Sun Young; Sargent, Dusty; Spofford, Inbar; Vosburgh, Kirby
2011-03-01
With colonoscopy becoming a common procedure for individuals aged 50 or more who are at risk of developing colorectal cancer (CRC), colon video data is being accumulated at an ever increasing rate. However, the clinically valuable information contained in these videos is not being maximally exploited to improve patient care and accelerate the development of new screening methods. One of the well-known difficulties in colonoscopy video analysis is the abundance of frames with no diagnostic information. Approximately 40%-50% of the frames in a colonoscopy video are contaminated by noise, acquisition errors, glare, blur, and uneven illumination. Therefore, filtering out low quality frames containing no diagnostic information can significantly improve the efficiency of colonoscopy video analysis. To address this challenge, we present a quality assessment algorithm to detect and remove low quality, uninformative frames. The goal of our algorithm is to discard low quality frames while retaining all diagnostically relevant information. Our algorithm is based on a hidden Markov model (HMM) in combination with two measures of data quality to filter out uninformative frames. Furthermore, we present a two-level framework based on an embedded hidden Markov model (EHMM) to incorporate the proposed quality assessment algorithm into a complete, automated diagnostic image analysis system for colonoscopy video.
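A minimal two-state HMM frame filter in this spirit can be sketched with a standard Viterbi pass; the paper's model, with its two data-quality measures and embedded two-level structure, is richer. Here a single per-frame quality score in [0, 1] acts as the emission likelihood, and a sticky transition prior suppresses isolated single-frame label flips.

```python
import math

def viterbi_frame_filter(quality_scores, p_stay=0.9):
    """Label each frame informative (1) or uninformative (0) with a
    two-state HMM. quality_scores in [0, 1] are per-frame emission
    likelihoods for the informative state; a sticky transition prior
    (p_stay) discourages rapid label flipping."""
    eps = 1e-9
    states = (0, 1)
    def emit(s, score):
        return math.log((score if s == 1 else 1.0 - score) + eps)
    def trans(a, b):
        return math.log(p_stay if a == b else 1.0 - p_stay)
    # Viterbi: V holds per-state best log-probabilities, back the pointers.
    V = [{s: math.log(0.5) + emit(s, quality_scores[0]) for s in states}]
    back = []
    for score in quality_scores[1:]:
        row, ptr = {}, {}
        for s in states:
            prev = max(states, key=lambda p: V[-1][p] + trans(p, s))
            row[s] = V[-1][prev] + trans(prev, s) + emit(s, score)
            ptr[s] = prev
        V.append(row)
        back.append(ptr)
    path = [max(states, key=lambda s: V[-1][s])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return path[::-1]
```

A brief quality dip between good frames is smoothed over, while a sustained run of low scores is labeled uninformative and can be discarded, which is the filtering behavior the abstract describes.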
Wave Energy Prize - 1/20th Testing - SEWEC
Wesley Scharmen
2016-10-07
Data from the 1/20th scale testing completed on the Wave Energy Prize for the SEWEC team, including the 1/20th scale test plan, raw test data, video, photos, and data analysis results. The top level objective of the 1/20th scale device testing is to obtain the necessary measurements required for determining Average Climate Capture Width per Characteristic Capital Expenditure (ACE) and the Hydrodynamic Performance Quality (HPQ), key metrics for determining the Wave Energy Prize (WEP) winners. * Note: During the TG4 judging meeting, the Wave Energy Prize judges reviewed the data collected during the testing of SEWEC's device at Carderock and determined that the data were inconclusive and did not allow an ACE value to be calculated for the device. Consequently, the SEWEC device was deemed ineligible to be considered for the Wave Energy Prize.
Study of Temporal Effects on Subjective Video Quality of Experience.
Bampis, Christos George; Zhi Li; Moorthy, Anush Krishna; Katsavounidis, Ioannis; Aaron, Anne; Bovik, Alan Conrad
2017-11-01
HTTP adaptive streaming is being increasingly deployed by network content providers, such as Netflix and YouTube. By dividing video content into data chunks encoded at different bitrates, a client is able to request the appropriate bitrate for the segment to be played next based on the estimated network conditions. However, this can introduce a number of impairments, including compression artifacts and rebuffering events, which can severely impact an end-user's quality of experience (QoE). We have recently created a new video quality database, which simulates a typical video streaming application, using long video sequences and interesting Netflix content. Going beyond previous efforts, the new database contains highly diverse and contemporary content, and it includes the subjective opinions of a sizable number of human subjects regarding the effects on QoE of both rebuffering and compression distortions. We observed that rebuffering is always obvious and unpleasant to subjects, while bitrate changes may be less obvious due to content-related dependencies. Transient bitrate drops were preferable over rebuffering only on low complexity video content, while consistently low bitrates were poorly tolerated. We evaluated different objective video quality assessment algorithms on our database and found that objective video quality models are unreliable for QoE prediction on videos suffering from both rebuffering events and bitrate changes. This implies the need for more general QoE models that take into account objective quality models, rebuffering-aware information, and memory. The publicly available video content as well as metadata for all of the videos in the new database can be found at http://live.ece.utexas.edu/research/LIVE_NFLXStudy/nflx_index.html.
Leong, Amanda Y; Sanghera, Ravina; Jhajj, Jaspreet; Desai, Nandini; Jammu, Bikramjit Singh; Makowsky, Mark J
2017-12-25
To investigate the content, quality and popularity of information about type 2 diabetes available on YouTube. We searched YouTube with the terms Diabetes, Diabetes type 2, Diabetes South Asians, Diabetes Punjabi and Diabetes Hindi to identify videos concerning type 2 diabetes. A team of health-care providers independently classified the first 20 videos from each search as useful, misleading, or personal experience, rated them on a 5-point global quality scale (GQS) and categorized their content on a 26-point scale in duplicate. Useful videos were rated for reliability by using a 5-point modified DISCERN scale. Higher scores represent better quality, reliability and comprehensiveness. Of 100 videos, 71 met the inclusion criteria; 45 (63.4%) were rated as useful (median GQS, 3; interquartile range [IQR], 2 to 4); and 23 (32.4%) were deemed misleading (median GQS, 1; IQR, 1 to 2). Median reliability and content scores for useful videos were 3 (IQR, 2 to 3) and 5 (IQR, 3 to 10), respectively, and 6 videos met ≥ 4 of 5 reliability criteria. Overall, misleading videos were more popular than useful videos (median, 233 views/day; IQR, 26 to 523; vs. 8.3 views/day; IQR, 0.4 to 134.6; p<0.01). Culturally tailored videos were just as likely to be misleading and had similar GQS scores in comparison to nonculturally tailored videos (32.1% vs. 32.6% and 3 vs. 3, respectively). The quality of identified videos concerning type 2 diabetes was variable, and misleading videos were popular. Further creation and curation of high-quality video resources is required.
Evaluation techniques and metrics for assessment of pan+MSI fusion (pansharpening)
NASA Astrophysics Data System (ADS)
Mercovich, Ryan A.
2015-05-01
Fusion of broadband panchromatic data with narrow band multispectral data - pansharpening - is a common and often studied problem in remote sensing. Many methods exist to produce data fusion results with the best possible spatial and spectral characteristics, and a number have been commercially implemented. This study examines the output products of 4 commercial implementations with regard to their relative strengths and weaknesses for a set of defined image characteristics and analyst use-cases. Image characteristics used are spatial detail, spatial quality, spectral integrity, and composite color quality (hue and saturation), and analyst use-cases included a variety of object detection and identification tasks. The imagery comes courtesy of the RIT SHARE 2012 collect. Two approaches are used to evaluate the pansharpening methods: analyst evaluation (a qualitative measure) and image quality metrics (quantitative measures). Visual analyst evaluation results are compared with metric results to determine which metrics best measure the defined image characteristics and product use-cases and to support future rigorous characterization of the metrics' correlation with the analyst results. Because pansharpening represents a trade between adding spatial information from the panchromatic image, and retaining spectral information from the MSI channels, the metrics examined are grouped into spatial improvement metrics and spectral preservation metrics. A single metric to quantify the quality of a pansharpening method would necessarily be a combination of weighted spatial and spectral metrics based on the importance of various spatial and spectral characteristics for the primary task of interest. Appropriate metrics and weights for such a combined metric are proposed here, based on the conducted analyst evaluation.
Additionally, during this work, a metric was developed specifically focused on assessment of spatial structure improvement relative to a reference image and independent of scene content. Using analysis of Fourier transform images, a measure of high-frequency content is computed in small sub-segments of the image. The average increase in high-frequency content across the image is used as the metric, where averaging across sub-segments combats the scene-dependent nature of typical image sharpness techniques. This metric had an improved range of scores, better representing differences in the test set than other common spatial structure metrics.
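The block-wise Fourier measure described above can be sketched directly. The block size and high-frequency cutoff below are illustrative choices, not the values used in the study.

```python
import numpy as np

def high_freq_energy_gain(reference, sharpened, block=16, cutoff=0.25):
    """Average increase in high-frequency Fourier energy of `sharpened` over
    `reference`, computed per block to reduce scene-content dependence.

    Both inputs are 2-D grayscale arrays of equal shape; `cutoff` is the
    normalized radial frequency above which energy counts as 'high'.
    """
    h, w = reference.shape
    fy = np.fft.fftfreq(block)[:, None]
    fx = np.fft.fftfreq(block)[None, :]
    high = np.sqrt(fy**2 + fx**2) > cutoff          # high-frequency mask
    gains = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            def hf(img):
                spec = np.abs(np.fft.fft2(img[y:y+block, x:x+block]))**2
                return spec[high].sum() / (spec.sum() + 1e-12)
            gains.append(hf(sharpened) - hf(reference))
    return float(np.mean(gains))
```

A positive gain indicates the pansharpened product carries more fine spatial structure than the reference, while the per-block averaging keeps one busy region from dominating the score.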
Image quality metrics for volumetric laser displays
NASA Astrophysics Data System (ADS)
Williams, Rodney D.; Donohoo, Daniel
1991-08-01
This paper addresses the extensions to the image quality metrics and related human factors research that are needed to establish the baseline standards for emerging volume display technologies. The existing and recently developed technologies for multiplanar volume displays are reviewed with an emphasis on basic human visual issues. Human factors image quality metrics and guidelines are needed to firmly establish this technology in the marketplace. The human visual requirements and the display design tradeoffs for these prototype laser-based volume displays are addressed and several critical image quality issues identified for further research. The American National Standard for Human Factors Engineering of Visual Display Terminal Workstations (ANSI/HFS-100) and other international standards (ISO, DIN) can serve as a starting point, but this research base must be extended to provide new image quality metrics for this new technology for volume displays.
Giera, Brian; Bukosky, Scott; Lee, Elaine; ...
2018-01-23
Here, quantitative color analysis is performed on videos of high contrast, low power reversible electrophoretic deposition (EPD)-based displays operated under different applied voltages. This analysis is coded in open-source software, relies on a color differentiation metric, ΔE*00, derived from digital video, and provides an intuitive relationship between the operating conditions of the devices and their performance. Time-dependent ΔE*00 color analysis reveals color relaxation behavior, recoverability for different voltage sequences, and operating conditions that can lead to optimal performance.
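A sketch of per-frame color-difference tracking in this spirit follows. Note two simplifications: it substitutes the plain CIE76 Euclidean Lab distance for the CIEDE2000 (ΔE*00) formula the study actually uses, and it assumes frames have already been converted to CIELAB.

```python
import numpy as np

def mean_delta_e76(lab_frame_a, lab_frame_b):
    """Mean per-pixel CIE76 color difference between two (H, W, 3) CIELAB
    frames. CIE76 is the plain Euclidean distance in Lab; the study uses
    the more elaborate CIEDE2000 (ΔE*00)."""
    return float(np.linalg.norm(lab_frame_a - lab_frame_b, axis=-1).mean())

def color_relaxation_curve(lab_frames, reference_index=0):
    """ΔE of every frame against a reference frame (e.g. the fully colored
    state), tracing how the display's color relaxes over time."""
    ref = lab_frames[reference_index]
    return [mean_delta_e76(ref, f) for f in lab_frames]
```

Plotting such a curve against time for each applied-voltage sequence is one way to read off the relaxation and recoverability behavior the abstract mentions.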
Park, Jin-Seok; Kim, Min Su; Kim, HyungKil; Kim, Shin Il; Shin, Chun Ho; Lee, Hyun Jung; Lee, Won Seop; Moon, Soyoung
2016-06-17
High-quality bowel preparation is necessary for colonoscopy. A few studies have been conducted to investigate improvement in bowel preparation quality through patient education. However, the effect of patient education on bowel preparation has not been well studied. A randomized and prospective study was conducted. All patients received regular instruction for bowel preparation during a pre-colonoscopy visit. Those scheduled for colonoscopy were randomly assigned to view an educational video instruction (video group) on the day before the colonoscopy, or to a non-video (control) group. Qualities of bowel preparation using the Ottawa Bowel Preparation Quality scale (Ottawa score) were compared between the video and non-video groups. In addition, factors associated with poor bowel preparation were investigated. A total of 502 patients were randomized, 250 to the video group and 252 to the non-video group. The video group exhibited better bowel preparation (mean Ottawa total score: 3.03 ± 1.9) than the non-video group (4.21 ± 1.9; P < 0.001) and had good bowel preparation for colonoscopy (total Ottawa score <6: 91.6 % vs. 78.5 %; P < 0.001). Multivariate analysis revealed that males (odds ratio [OR] = 1.95, P = 0.029), diabetes mellitus patients (OR = 2.79, P = 0.021), and non-use of visual aids (OR = 3.09, P < 0.001) were associated with poor bowel preparation. In the comparison of the colonoscopic outcomes between groups, the polyp detection rate was not significantly different between the video group and the non-video group (48/250, 19.2 % vs. 48/252, 19.0 %; P = 0.963), but insertion time was significantly shorter in the video group (5.5 ± 3.2 min) than in the non-video group (6.1 ± 3.7 min; P = 0.043). The addition of an educational video could improve the quality of bowel preparation in comparison with the standard preparation method. Trial registration: Clinical Research Information Service KCT0001836; registered March 8, 2016 (retrospectively registered).
YouTube as an information source for pediatric adenotonsillectomy and ear tube surgery.
Sorensen, Jeffrey A; Pusz, Max D; Brietzke, Scott E
2014-01-01
Assess the overall quality of information on adenotonsillectomy and ear tube surgery presented on YouTube (www.youtube.com) from the perspective of a parent or patient searching for information on surgery. The YouTube website was systematically searched on select dates with a formal search strategy to identify videos pertaining to pediatric adenotonsillectomy and ear tube surgery. Only videos with at least 5 (ear tube surgery) or 10 (adenotonsillectomy) views per day were included. Each video was viewed and scored by two independent scorers. Videos were categorized by goal and scored for video/audio quality, accuracy, comprehensiveness, and procedure-specific content. Cross-sectional study. Public domain website. Fifty-five videos were scored for adenotonsillectomy and forty-seven for ear tube surgery. The most common category was educational (65.3%), followed by testimonial (28.4%) and news program (9.8%). Testimonials were more common for adenotonsillectomy than ear tube surgery (41.8% vs. 12.8%, p=0.001). Testimonials had significantly lower mean accuracy (2.23 vs. 2.62, p=0.02), comprehensiveness (1.71 vs. 2.22, p=0.007), and TA-specific content (0.64 vs. 1.69, p=0.001) scores than educational videos. Only six videos (5.9%) received high scores in both video/audio quality and accuracy/comprehensiveness of content. There was no significant association between the accuracy and comprehensiveness scores and views, posted "likes", posted "dislikes", or likes/dislikes ratio. There was an association between "likes" and mean video quality (Spearman's rho=0.262, p=0.008). Parents/patients searching YouTube for information on pediatric adenotonsillectomy and ear tube surgery will generally encounter low-quality information, with testimonials being common but of significantly lower quality. Viewer-perceived quality ("likes") did not correlate with formally scored content quality. Published by Elsevier Ireland Ltd.
Pressure-specific and multiple pressure response of fish assemblages in European running waters☆
Schinegger, Rafaela; Trautwein, Clemens; Schmutz, Stefan
2013-01-01
We classified homogeneous river types across Europe and searched for fish metrics qualified to show responses to specific pressures (hydromorphological pressures or water quality pressures) vs. multiple pressures in these river types. We analysed fish taxa lists from 3105 sites in 16 ecoregions and 14 countries. Sites were pre-classified for 15 selected pressures to separate unimpacted from impacted sites. Hierarchical cluster analysis was used to split unimpacted sites into four homogeneous river types based on species composition and geographical location. Classification trees were employed to predict associated river types for impacted sites using four environmental variables. We defined a set of 129 candidate fish metrics to select the best-reacting metrics for each river type. The candidate metrics represented tolerances/intolerances of species associated with six metric types: habitat, migration, water quality sensitivity, reproduction, trophic level and biodiversity. The results showed that 17 uncorrelated metrics reacted to pressures in the four river types. Metrics responded specifically to water quality pressures and hydromorphological pressures in three river types and to multiple pressures in all river types. Four metrics associated with water quality sensitivity showed a significant reaction in up to three river types, whereas 13 metrics were specific to individual river types. Our results contribute to a better understanding of fish assemblage response to human pressures at a pan-European scale. The results are especially important for European river management and restoration, as it is necessary to uncover underlying processes and effects of human pressures on aquatic communities. PMID:24003262
Performance evaluation of no-reference image quality metrics for face biometric images
NASA Astrophysics Data System (ADS)
Liu, Xinwei; Pedersen, Marius; Charrier, Christophe; Bours, Patrick
2018-03-01
The accuracy of face recognition systems is significantly affected by the quality of face sample images. Recently established standardization efforts have proposed several important aspects for the assessment of face sample quality. Many existing no-reference image quality metrics (IQMs) are able to assess natural image quality by taking into account image-based quality attributes similar to those introduced in the standardization. However, whether such metrics can assess face sample quality is rarely considered. We evaluate the performance of 13 selected no-reference IQMs on face biometrics. The experimental results show that several of them can assess face sample quality according to the system performance. We also analyze the strengths and weaknesses of different IQMs, as well as why some of them fail to assess face sample quality. Retraining an original IQM using a face database can improve the performance of such a metric. In addition, the contribution of this paper can be used for the evaluation of IQMs on other biometric modalities; furthermore, it can be used for the development of multimodality biometric IQMs.
A quality assessment of cardiac auscultation material on YouTube.
Camm, Christian F; Sunderland, Nicholas; Camm, A John
2013-02-01
YouTube is a highly utilized Web site that contains a large amount of medical educational material. Although some studies have assessed the educational material contained on the Web site, little analysis of cardiology content has been made. This study aimed to assess the quality of videos relating to heart sounds and murmurs contained on YouTube. We hypothesized that the quality of video files purporting to provide education on heart auscultation would be highly variable. Videos were searched for using the terms "heart sounds," "heart murmur," and "heart auscultation." A built-in educational filter was employed, and manual rejection of non-English-language and nonrelated videos was undertaken. Remaining videos were analyzed for content, and suitable videos were scored using a purpose-built tool. The YouTube search located 3350 videos in total; of these, 22 were considered suitable for scoring. The average score was 4.07 out of 7 (standard deviation, 1.35). Six videos scored 5.5 or greater, and five videos scored 2.5 or less. There was no correlation between video score and YouTube indices of preference (hits, likes, dislikes, or search page). The quality of videos found in this study was highly variable. YouTube indications of preference were of no value in judging the quality of video content. Therefore, teaching institutions or professional societies should endeavor to identify and highlight good online teaching resources. YouTube contains many videos relating to cardiac auscultation, but very few are valuable education resources. © 2012 Wiley Periodicals, Inc.
Enhanced video indirect ophthalmoscopy (VIO) via robust mosaicing.
Estrada, Rolando; Tomasi, Carlo; Cabrera, Michelle T; Wallace, David K; Freedman, Sharon F; Farsiu, Sina
2011-10-01
Indirect ophthalmoscopy (IO) is the standard of care for evaluation of the neonatal retina. When recorded on video from a head-mounted camera, IO images have low quality and narrow Field of View (FOV). We present an image fusion methodology for converting a video IO recording into a single, high quality, wide-FOV mosaic that seamlessly blends the best frames in the video. To this end, we have developed fast and robust algorithms for automatic evaluation of video quality, artifact detection and removal, vessel mapping, registration, and multi-frame image fusion. Our experiments show the effectiveness of the proposed methods.
Quality of service routing in the differentiated services framework
NASA Astrophysics Data System (ADS)
Oliveira, Marilia C.; Melo, Bruno; Quadros, Goncalo; Monteiro, Edmundo
2001-02-01
In this paper we present a quality of service routing strategy for networks where traffic differentiation follows the class-based paradigm, as in the Differentiated Services framework. This routing strategy is based on a quality of service metric that represents the impact that the delay and losses observed at each router in the network have on application performance. Based on this metric, a path is selected for each class according to the class's sensitivity to delay and losses. The distribution of the metric is triggered by a relative criterion with two thresholds, and the values advertised are the moving average of the last values measured.
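The two-threshold, moving-average advertisement criterion described in this abstract can be sketched as follows. This is a minimal illustration; the class name, window size, and threshold values are assumptions, not details taken from the paper:

```python
from collections import deque

class MetricAdvertiser:
    """Sketch of threshold-triggered advertisement of a QoS metric.

    Keeps a moving average of the last `window` measurements and re-advertises
    only when the average drifts outside a relative band [low, high] around
    the last advertised value (the paper's "relative criterion with two
    thresholds"). All parameter values here are illustrative.
    """

    def __init__(self, low=0.8, high=1.2, window=5):
        self.low, self.high = low, high
        self.samples = deque(maxlen=window)  # sliding window of measurements
        self.advertised = None               # last value sent to neighbours

    def measure(self, value):
        """Record a measurement; return the new moving average if it must be
        advertised, else None."""
        self.samples.append(value)
        avg = sum(self.samples) / len(self.samples)
        if self.advertised is None or not (
            self.low * self.advertised <= avg <= self.high * self.advertised
        ):
            self.advertised = avg
            return avg
        return None
```

With the default band, small fluctuations around the advertised value are suppressed, which limits routing-update traffic while still reacting to sustained changes in delay/loss impact.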
Aghdasi, Hadi S; Abbaspour, Maghsoud; Moghadam, Mohsen Ebrahimi; Samei, Yasaman
2008-08-04
Technological progress in the fields of Micro Electro-Mechanical Systems (MEMS) and wireless communications, together with the availability of CMOS cameras, microphones, and small-scale array sensors that can ubiquitously capture multimedia content from the field, has fostered the development of low-cost, resource-limited Wireless Video-based Sensor Networks (WVSN). Given the constraints of video-based sensor nodes and wireless sensor networks, supporting video streams is not easy with present sensor network protocols. In this paper, a thorough architecture for video transmission over WVSN is presented, called the Energy-efficient and high-Quality Video transmission Architecture (EQV-Architecture). This architecture influences three layers of the communication protocol stack and accounts for wireless video sensor node constraints, such as limited processing and energy resources, while video quality is preserved at the receiver side. Compression, transport, and routing protocols are proposed at the application, transport, and network layers, respectively; a dropping scheme is also presented at the network layer. Simulation results over various environments with dissimilar conditions revealed the effectiveness of the architecture in improving the lifetime of the network as well as preserving video quality.
YouTube Videos on Botulinum Toxin A for Wrinkles: A Useful Resource for Patient Education.
Wong, Katharine; Doong, Judy; Trang, Trinh; Joo, Sarah; Chien, Anna L
2017-12-01
Patients interested in botulinum toxin type A (BTX-A) for wrinkles search for videos on YouTube, but little is known about the quality and reliability of the content. The authors examined the quality, reliability, content, and target audience of YouTube videos on BTX-A for wrinkles. In this cross-sectional study, the term "Botox" was searched on YouTube. Sixty relevant videos in English were independently categorized by 2 reviewers as useful informational, misleading informational, useful patient view, or misleading patient view. Disagreements were settled by a third reviewer. Videos were rated on the Global Quality Scale (GQS) (1 = poor, 5 = excellent). Sixty-three percent of the BTX-A YouTube videos were categorized as useful informational (GQS = 4.4 ± 0.7), 33% as useful patient view (GQS = 3.21 ± 1.2), 2% as misleading informational (GQS = 1), and 2% as misleading patient view (GQS = 2.5). The large number of useful videos, high reliability, and wide range of content covered suggest that those who search for antiwrinkle BTX-A videos on YouTube are likely to view high-quality content. This suggests that YouTube may be a good source of videos to recommend to patients interested in BTX-A.
NASA Technical Reports Server (NTRS)
Scott, D. W.
1994-01-01
This report describes efforts to use digital motion video compression technology to develop a highly portable device that would convert 1990-91 era IBM-compatible and/or Macintosh notebook computers into full-color, motion-video-capable multimedia training systems. An architecture was conceived that would permit direct conversion of existing laser-disk-based multimedia courses with little or no reauthoring. The project did not physically demonstrate certain critical video keying techniques, but their implementation should be feasible. This investigation of digital motion video has spawned two significant spaceflight projects at MSFC: one to downlink multiple high-quality video signals from Spacelab, and the other to uplink videoconference-quality video in real time and high-quality video off-line, plus investigate interactive, multimedia-based techniques for enhancing onboard science operations. Other airborne or spaceborne spinoffs are possible.
Quality assessment of color images based on the measure of just noticeable color difference
NASA Astrophysics Data System (ADS)
Chou, Chun-Hsien; Hsu, Yun-Hsiang
2014-01-01
Accurate assessment of the quality of color images is an important step in many image processing systems that convey visual information of the reproduced images. An accurate objective image quality assessment (IQA) method is expected to give assessment results that agree closely with subjective assessment. To assess the quality of color images, many approaches simply apply a metric for assessing the quality of grayscale images to each of the three color channels of the color image, neglecting the correlation among the channels. In this paper, a metric for assessing the quality of color images is proposed, in which the model of variable just-noticeable color difference (VJNCD) is employed to estimate the visibility thresholds of distortion inherent in each color pixel. With the estimated visibility thresholds of distortion, the proposed metric measures the average perceptible distortion in terms of the quantized distortion according to a perceptual error map similar to that defined by the National Bureau of Standards (NBS) for converting the color difference computed by CIEDE2000 into an objective score of perceptual quality. The perceptual error map in this case is designed for each pixel according to the visibility threshold estimated by the VJNCD model. The performance of the proposed metric is verified by assessing the test images in the LIVE database, and is compared with those of many well-known IQA metrics. Experimental results indicate that the proposed metric is an effective IQA method that can accurately predict the quality of color images in terms of the correlation between objective scores and subjective evaluation.
Intra prediction using face continuity in 360-degree video coding
NASA Astrophysics Data System (ADS)
Hanhart, Philippe; He, Yuwen; Ye, Yan
2017-09-01
This paper presents a new reference sample derivation method for intra prediction in 360-degree video coding. Unlike the conventional reference sample derivation method for 2D video coding, which uses the samples located directly above and on the left of the current block, the proposed method considers the spherical nature of 360-degree video when deriving reference samples located outside the current face to which the block belongs, and derives reference samples that are geometric neighbors on the sphere. The proposed reference sample derivation method was implemented in the Joint Exploration Model 3.0 (JEM-3.0) for the cubemap projection format. Simulation results for the all intra configuration show that, when compared with the conventional reference sample derivation method, the proposed method gives, on average, luma BD-rate reduction of 0.3% in terms of the weighted spherical PSNR (WS-PSNR) and spherical PSNR (SPSNR) metrics.
Research and Technology Development for Construction of 3d Video Scenes
NASA Astrophysics Data System (ADS)
Khlebnikova, Tatyana A.
2016-06-01
For the last two decades, surface information in the form of conventional digital and analogue topographic maps has been supplemented by new digital geospatial products, also known as 3D models of real objects. It is shown that there are currently no defined standards for 3D scene construction technologies that could be used by Russian surveying and cartographic enterprises. The requirements on source data, their capture, and their transfer for creating 3D scenes have not yet been defined. Accuracy issues of 3D video scenes used for measurement purposes can hardly ever be found in publications. The practicability of developing, researching, and implementing technology for the construction of 3D video scenes is substantiated by the capability of 3D video scenes to expand the field of data analysis for environmental monitoring, urban planning, and managerial decision problems. A technology for the construction of 3D video scenes with regard to specified metric requirements is offered. A technique and methodological background are recommended for this technology, used to construct 3D video scenes based on DTMs created from satellite and aerial survey data. The results of accuracy estimation of 3D video scenes are presented.
Thin-slice vision: inference of confidence measure from perceptual video quality
NASA Astrophysics Data System (ADS)
Hameed, Abdul; Balas, Benjamin; Dai, Rui
2016-11-01
There has been considerable research on thin-slice judgments, but no study has demonstrated the predictive validity of confidence measures when assessors watch videos acquired from communication systems, in which the perceptual quality of videos can be degraded by limited bandwidth and unreliable network conditions. This paper studies the relationship between high-level thin-slice judgments of human behavior and factors that contribute to perceptual video quality. Based on a large number of subjective test results, it was found that the confidence of a single individual present in all the videos, called the speaker's confidence (SC), could be predicted by a list of features that contribute to perceptual video quality. Two prediction models, one based on an artificial neural network and the other on a decision tree, were built to predict SC. Experimental results show that both prediction models can achieve high correlation measures.
Visual Perception Based Rate Control Algorithm for HEVC
NASA Astrophysics Data System (ADS)
Feng, Zeqi; Liu, PengYu; Jia, Kebin
2018-01-01
For HEVC, rate control is an indispensable video coding technology for alleviating the contradiction between video quality and limited encoding resources during video communication. However, the rate control benchmark algorithm of HEVC ignores subjective visual perception: for key focus regions, bit allocation at the LCU level is not ideal and subjective quality is unsatisfactory. In this paper, a visual perception based rate control algorithm for HEVC is proposed. First, the bit allocation weight at the LCU level is optimized based on the visual perception of luminance and motion to improve subjective video quality. Then, λ and QP are adjusted in combination with the bit allocation weight to improve rate-distortion performance. Experimental results show that the proposed algorithm reduces BD-BR by 0.5% on average and by up to 1.09%, at no cost in bitrate accuracy, compared with HEVC (HM15.0). The proposed algorithm is devoted to improving subjective video quality under various video applications.
Rice, Sean C; Higginbotham, Tina; Dean, Melanie J; Slaughter, James C; Yachimski, Patrick S; Obstein, Keith L
2016-11-01
Successful outpatient colonoscopy (CLS) depends on many factors including the quality of a patient's bowel preparation. Although education on consumption of the pre-CLS purgative can improve bowel preparation quality, no study has evaluated dietary education alone. We have created an educational video on pre-CLS dietary instructions to determine whether dietary education would improve outpatient bowel preparation quality. A prospective randomized, blinded, controlled study of patients undergoing outpatient CLS was performed. All patients received a 4 l polyethylene glycol-based split-dose bowel preparation and standard institutional pre-procedure instructions. Patients were then randomly assigned to an intervention arm or to a no intervention arm. A 4-min educational video detailing clear liquid diet restriction was made available to patients in the intervention arm, whereas those randomized to no intervention did not have access to the video. Patients randomized to the video were provided with the YouTube video link 48-72 h before CLS. An attending endoscopist blinded to randomization performed the CLS. Bowel preparation quality was scored using the Boston Bowel Preparation Scale (BBPS). Adequate preparation was defined as a BBPS total score of ≥6 with all segment scores ≥2. Wilcoxon rank-sum and Pearson's χ²-tests were performed to assess differences between groups. Ninety-two patients were randomized (video: n=42; control: n=50) with 47 total video views being tallied. There were no demographic differences between groups. There was no statistically significant difference in adequate preparation between groups (video=74%; control=68%; P=0.54). The availability of a supplementary patient educational video on clear liquid diet alone was insufficient to improve bowel preparation quality when compared with standard pre-procedure instruction at our institution.
NASA Astrophysics Data System (ADS)
O'Byrne, Michael; Ghosh, Bidisha; Schoefs, Franck; O'Donnell, Deirdre; Wright, Robert; Pakrashi, Vikram
2015-07-01
Video based tracking is capable of analysing bridge vibrations that are characterised by large amplitudes and low frequencies. This paper presents the use of video images and associated image processing techniques to obtain the dynamic response of a pedestrian suspension bridge in Cork, Ireland. This historic structure is one of the four suspension bridges in Ireland and is notable for its dynamic nature. A video camera is mounted on the river-bank and the dynamic responses of the bridge have been measured from the video images. The dynamic response is assessed without the need of a reflector on the bridge and in the presence of various forms of luminous complexities in the video image scenes. Vertical deformations of the bridge were measured in this regard. The video image tracking for the measurement of dynamic responses of the bridge were based on correlating patches in time-lagged scenes in video images and utilising a zero mean normalised cross correlation (ZNCC) metric. The bridge was excited by designed pedestrian movement and by individual cyclists traversing the bridge. The time series data of dynamic displacement responses of the bridge were analysed to obtain the frequency domain response. Frequencies obtained from video analysis were checked against accelerometer data from the bridge obtained while carrying out the same set of experiments used for video image based recognition.
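The ZNCC patch-similarity metric used for the video tracking above has a standard closed form; a minimal sketch (an illustration of the metric, not the authors' implementation) is:

```python
import numpy as np

def zncc(patch_a: np.ndarray, patch_b: np.ndarray) -> float:
    """Zero mean normalised cross correlation between two equally sized patches.

    Returns a value in [-1, 1]; 1 indicates a perfect linear match, which is
    why the metric is robust to uniform brightness and contrast changes
    between time-lagged video frames.
    """
    a = patch_a.astype(float).ravel()
    b = patch_b.astype(float).ravel()
    a -= a.mean()  # zero-mean: removes brightness offset
    b -= b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())  # normalisation: removes contrast scale
    if denom == 0.0:  # flat patch: correlation undefined, report no match
        return 0.0
    return float((a * b).sum() / denom)
```

In a tracker, the patch from the previous frame is slid over a search window in the current frame and the displacement maximising ZNCC is taken as the motion estimate.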
JPEG2000 still image coding quality.
Chen, Tzong-Jer; Lin, Sheng-Chieh; Lin, You-Chen; Cheng, Ren-Gui; Lin, Li-Hui; Wu, Wei
2013-10-01
This work compares the image quality produced by two popular JPEG2000 programs. Two medical image compression algorithms are both coded using JPEG2000, but they differ in interface, convenience, speed of computation, and in characteristic options influenced by the encoder, quantization, tiling, etc. The differences in image quality and compression ratio are also affected by the modality and the compression algorithm implementation. Do they provide the same quality? The qualities of compressed medical images from two image compression programs, named Apollo and JJ2000, were evaluated extensively using objective metrics. These algorithms were applied to three medical image modalities at compression ratios ranging from 10:1 to 100:1. The quality of the reconstructed images was then evaluated using five objective metrics. The Spearman rank correlation coefficients between the two programs were measured under every metric. We found that JJ2000 and Apollo exhibited indistinguishable image quality for all images evaluated using the above five metrics (r > 0.98, p < 0.001). It can be concluded that the image quality of the JJ2000 and Apollo algorithms is statistically equivalent for medical image compression.
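The Spearman rank correlation used to compare the two programs is simply the Pearson correlation computed on ranks; a minimal sketch, valid when there are no tied values (real analyses should use a ties-aware implementation such as scipy.stats.spearmanr):

```python
import numpy as np

def spearman_rho(x, y) -> float:
    """Spearman rank correlation: Pearson correlation of the rank vectors.

    Assumes no tied values; with ties, average ranks would be required.
    """
    def ranks(v: np.ndarray) -> np.ndarray:
        order = np.argsort(v)
        r = np.empty(len(v))
        r[order] = np.arange(1, len(v) + 1)  # rank 1 = smallest value
        return r

    rx = ranks(np.asarray(x, dtype=float))
    ry = ranks(np.asarray(y, dtype=float))
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx * ry).sum() / np.sqrt((rx * rx).sum() * (ry * ry).sum()))
```

Because it operates on ranks, the coefficient captures any monotone agreement between two quality scales, which is why it suits comparing objective metric scores across compression programs.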
Nicol, Sam; Wiederholt, Ruscena; Diffendorfer, James E.; Mattsson, Brady; Thogmartin, Wayne E.; Semmens, Darius J.; Laura Lopez-Hoffman,; Norris, Ryan
2016-01-01
Mobile species with complex spatial dynamics can be difficult to manage because their population distributions vary across space and time, and because the consequences of managing particular habitats are uncertain when evaluated at the level of the entire population. Metrics to assess the importance of habitats and pathways connecting habitats in a network are necessary to guide a variety of management decisions. Given the many metrics developed for spatially structured models, it can be challenging to select the most appropriate one for a particular decision. To guide the management of spatially structured populations, we define three classes of metrics describing habitat and pathway quality based on their data requirements (graph-based, occupancy-based, and demographic-based metrics) and synopsize the ecological literature relating to these classes. Applying the first steps of a formal decision-making approach (problem framing, objectives, and management actions), we assess the utility of metrics for particular types of management decisions. Our framework can help managers with problem framing, choosing metrics of habitat and pathway quality, and to elucidate the data needs for a particular metric. Our goal is to help managers to narrow the range of suitable metrics for a management project, and aid in decision-making to make the best use of limited resources.
Reference View Selection in DIBR-Based Multiview Coding.
Maugey, Thomas; Petrazzuoli, Giovanni; Frossard, Pascal; Cagnazzo, Marco; Pesquet-Popescu, Beatrice
2016-04-01
Augmented reality, interactive navigation in 3D scenes, multiview video, and other emerging multimedia applications require large sets of images, hence larger data volumes and increased resources compared with traditional video services. The significant increase in the number of images in multiview systems leads to new challenging problems in data representation and data transmission to provide high quality of experience on resource-constrained environments. In order to reduce the size of the data, different multiview video compression strategies have been proposed recently. Most of them use the concept of reference or key views that are used to estimate other images when there is high correlation in the data set. In such coding schemes, the two following questions become fundamental: 1) how many reference views have to be chosen for keeping a good reconstruction quality under coding cost constraints? And 2) where to place these key views in the multiview data set? As these questions are largely overlooked in the literature, we study the reference view selection problem and propose an algorithm for the optimal selection of reference views in multiview coding systems. Based on a novel metric that measures the similarity between the views, we formulate an optimization problem for the positioning of the reference views, such that both the distortion of the view reconstruction and the coding rate cost are minimized. We solve this new problem with a shortest path algorithm that determines both the optimal number of reference views and their positions in the image set. We experimentally validate our solution in a practical multiview distributed coding system and in the standardized 3D-HEVC multiview coding scheme. We show that considering the 3D scene geometry in the reference view positioning problem brings significant rate-distortion improvements and outperforms the traditional coding strategy that simply selects key frames based on the distance between cameras.
Safety considerations in providing allergen immunotherapy in the office.
Mattos, Jose L; Lee, Stella
2016-06-01
This review highlights the risks of allergy immunotherapy, methods to improve the quality and safety of allergy treatment, the current status of allergy quality metrics, and the future of quality measurement. In the current healthcare environment, the emphasis on outcomes measurement is increasing, and providers must be better equipped in the development, measurement, and reporting of safety and quality measures. Immunotherapy offers the only potential cure for allergic disease and asthma. Although well tolerated and effective, immunotherapy can be associated with serious consequences, including anaphylaxis and death. Many predisposing factors and errors that lead to serious systemic reactions are preventable, and the evaluation and implementation of quality measures are crucial to developing a safe immunotherapy practice. Although quality metrics for immunotherapy are in their infancy, they will become increasingly sophisticated, and providers will face increased pressure to deliver safe, high-quality, patient-centered, evidence-based, and efficient allergy care. The establishment of safety in the allergy office involves recognition of potential risk factors for anaphylaxis, the development and measurement of quality metrics, and changing systems-wide practices if needed. Quality improvement is a continuous process, and although national allergy-specific quality metrics do not yet exist, they are in development.
Ring-push metric learning for person reidentification
NASA Astrophysics Data System (ADS)
He, Botao; Yu, Shaohua
2017-05-01
Person reidentification (re-id) has been widely studied because of its extensive use in video surveillance and forensics applications. It aims to search for a specific person within a nonoverlapping camera network, which is highly challenging due to large variations in cluttered background, human pose, and camera viewpoint. We present a metric learning algorithm for learning a Mahalanobis distance for re-id. Generally speaking, there exist two forces in the conventional metric learning process: a pulling force that pulls points of the same class closer, and a pushing force that pushes points of different classes as far apart as possible. We argue that, when only a limited number of training data are given, forcing interclass distances to be as large as possible may drive the metric to overfit the uninformative parts of the images, such as noise and backgrounds. To alleviate overfitting, we propose the ring-push metric learning algorithm. Different from other metric learning methods that only punish too-small interclass distances, the proposed method punishes both too-small and too-large interclass distances. By introducing the generalized logistic function as the loss, we formulate ring-push metric learning as a convex optimization problem and utilize the projected gradient descent method to solve it. The experimental results on four public datasets demonstrate the effectiveness of the proposed algorithm.
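The abstract does not give the exact loss, but the ring-push idea of punishing both too-small and too-large interclass distances can be illustrated with a toy objective. The margins, the softplus surrogate, and all names below are assumptions made for illustration, not the authors' published formulation:

```python
import numpy as np

def softplus(z, beta=1.0):
    # Smooth generalized-logistic surrogate for the hinge: log(1 + e^(beta*z)) / beta
    return np.log1p(np.exp(beta * np.asarray(z, dtype=float))) / beta

def ring_push_loss(distances, same_class, r_in=1.0, r_out=3.0):
    """Toy ring-push objective over pairwise (e.g. Mahalanobis) distances.

    Same-class pairs are pulled inside r_in; different-class pairs are pushed
    beyond r_in but, unlike conventional metric learning, are also penalized
    for exceeding r_out, confining interclass distances to a "ring" and
    discouraging overfitting to uninformative image content.
    """
    d = np.asarray(distances, dtype=float)
    same = np.asarray(same_class, dtype=bool)
    pull = softplus(d[same] - r_in).sum()    # same class too far apart
    push = softplus(r_in - d[~same]).sum()   # different classes too close
    ring = softplus(d[~same] - r_out).sum()  # different classes too far (the "ring" term)
    return float(pull + push + ring)
```

In a full method, this loss would be minimized over the Mahalanobis matrix (with a positive-semidefinite constraint), e.g. by the projected gradient descent the abstract mentions.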
YouTube as a Source of Information on Neurosurgery.
Samuel, Nardin; Alotaibi, Naif M; Lozano, Andres M
2017-09-01
The importance of videos in social media communications in the context of health care and neurosurgery is becoming increasingly recognized. However, there has not yet been a systematic analysis of these neurosurgery-related communications. Accordingly, this study was aimed at characterizing the online video content pertaining to neurosurgery. Neurosurgery-related videos uploaded on YouTube were collected using a comprehensive search strategy. The following metrics were extracted for each video: number of views, likes, dislikes, comments, shares, date of upload, and geographic region of origin where specified. A quantitative and qualitative evaluation was performed on all videos included in the study. A total of 713 nonduplicate videos met the inclusion criteria. The overall number of views for all videos was 90,545,164. Videos were most frequently uploaded in 2016 (n = 348), with a 200% increase in uploads compared with the previous year. Of the videos that were directly relevant to clinical neurosurgery, the most frequent video categories were "educational videos" (25%), followed by "surgical and procedure overview" (20%), "promotional videos" (17%), and "patient experience" (16%). The remainder of the videos consisted primarily of unrealistic simulations of cranial surgery for entertainment purposes (20%). The findings from this study highlight the increasing use of video communications related to neurosurgery and show that institutions, neurosurgeons, and patients are using YouTube as an educational and promotional platform. As online communications continue to evolve, it will be important to harness this tool to advance patient-oriented communication and knowledge dissemination in neurosurgery. Copyright © 2017 Elsevier Inc. All rights reserved.
Peltier, Alexandre; Aoun, Fouad; Ameye, Filip; Andrianne, Robert; De Meerleer, Gert; Denis, Louis; Joniau, Steven; Lambrecht, Antoon; Billiet, Ignace; Vanderdonck, Frank; Roumeguère, Thierry; Van Velthoven, Roland
2015-09-01
This large multicenter study aimed to assess the impact of the use of multimedia tools on the duration and the quality of the conversation between healthcare providers (urologists, radiotherapists and nurses) and their patients. Thirty urological centers in Belgium used either videos or other instructive tools in their consultations with prostate cancer patients. Each consultation was evaluated for duration and quality using a visual analog scale. In total, 905 patient visits were evaluated: 447 without and 458 with video support. During consultations with video support, an average of 2.3 videos was shown. Video support was judged to be practical and to improve the quality of consultations, without loss of time, regardless of patient age or stage of disease management (p > 0.05). Healthcare providers indicated that the use of videos improved patients' comprehension of prostate cancer, as well as the quality of information exchange, without increasing consultation time. The use of video material was feasible in daily practice, and was easy to understand, relevant and culturally appropriate, even for the most elderly men. Multimedia education also helped to empower men to actively participate in their healthcare and treatment discussions. Ipsen NV.
Trans-Pacific tele-ultrasound image transmission of fetal central nervous system structures.
Ferreira, Adilson Cunha; Araujo Júnior, Edward; Martins, Wellington P; Jordão, João Francisco; Oliani, Antônio Hélio; Meagher, Simon E; Da Silva Costa, Fabricio
2015-01-01
To assess the quality of images and video clips of fetal central nervous system (CNS) structures obtained by ultrasound and transmitted via tele-ultrasound from Brazil to Australia. In this cross-sectional study, 15 normal singleton pregnant women between 20 and 26 weeks were selected. Images and video clips of fetal CNS structures were obtained. The exams were transmitted in real time using a broadband internet connection and an inexpensive video streaming device. Four blinded examiners evaluated the quality of the exams using a Likert scale. We calculated the mean, standard deviation, and mean difference; p values were obtained from paired t tests. The quality of the original video clips was slightly better than that of the transmitted video clips; the mean difference considering all observers was 0.23 points. In 47/60 comparisons (78.3%; 95% CI = 66.4-86.9%) the quality of the video clips was judged to be the same. In 182/240 still images (75.8%; 95% CI = 70.0-80.8%) the scores of the transmitted images were considered the same as the originals. We demonstrated that long-distance tele-ultrasound transmission of fetal CNS structures using an inexpensive video streaming device provided images of subjectively good quality.
Validation of a Quality Management Metric
2000-09-01
A quality management metric (QMM) was used to measure the performance of ten software managers on Department of Defense (DoD) software development programs. Informal verification and validation of the metric compared the QMM score to an overall program success score for the entire program and yielded a positive correlation. The results of applying the QMM can be used to characterize the quality of software management and can serve as a template to improve software management performance. Future work includes further refining the QMM and applying the QMM scores to provide feedback.
How useful is YouTube in learning heart anatomy?
Raikos, Athanasios; Waidyasekara, Pasan
2014-01-01
Nowadays more and more modern medical degree programs focus on self-directed and problem-based learning. This requires students to search for high-quality, easy-to-retrieve online resources. YouTube is an emerging platform for learning human anatomy due to easy access and being a free service. The purpose of this study is to make a quantitative and qualitative analysis of the available human heart anatomy videos on YouTube. Using the search engine of the platform, we searched for relevant videos using various keywords. Videos with irrelevant content, animal tissue, non-English language, no sound, duplicates, and a physiology focus were excluded from further elaboration. The initial search retrieved 55,525 videos, of which only 294 qualified for further analysis. A unique scoring system was used to assess the anatomical quality and details, general quality, and the general data for each video. Our results indicate that the human heart anatomy videos available on YouTube addressed our anatomical criteria poorly, whereas the general quality was found to be borderline. Students should be selective when searching public video databases, as this can prove challenging and time consuming, and the anatomical information may be misleading due to the absence of content review. Anatomists and institutions are encouraged to prepare and endorse good-quality material and make it available online for students. The scoring rubric used in the study provides faculty members with a valuable tool for quality evaluation of heart anatomy videos available on social media platforms. Copyright © 2013 American Association of Anatomists.
No-reference image quality assessment for horizontal-path imaging scenarios
NASA Astrophysics Data System (ADS)
Rios, Carlos; Gladysz, Szymon
2013-05-01
There exist several image-enhancement algorithms and tasks associated with imaging through turbulence that depend on defining the quality of an image. Examples include: "lucky imaging", choosing the width of the inverse filter for image reconstruction, or stopping iterative deconvolution. We collected a number of image quality metrics found in the literature. Particularly interesting are the blind, "no-reference" metrics. We discuss ways of evaluating the usefulness of these metrics, even when a fully objective comparison is impossible because of the lack of a reference image. Metrics are tested on simulated and real data. Field data comes from experiments performed by the NATO SET 165 research group over a 7 km distance in Dayton, Ohio.
Segment scheduling method for reducing 360° video streaming latency
NASA Astrophysics Data System (ADS)
Gudumasu, Srinivas; Asbun, Eduardo; He, Yong; Ye, Yan
2017-09-01
360° video is an emerging new format in the media industry enabled by the growing availability of virtual reality devices. It provides the viewer a new sense of presence and immersion. Compared to conventional rectilinear video (2D or 3D), 360° video poses a new and difficult set of engineering challenges on video processing and delivery. Enabling comfortable and immersive user experience requires very high video quality and very low latency, while the large video file size poses a challenge to delivering 360° video in a quality manner at scale. Conventionally, 360° video represented in equirectangular or other projection formats can be encoded as a single standards-compliant bitstream using existing video codecs such as H.264/AVC or H.265/HEVC. Such method usually needs very high bandwidth to provide an immersive user experience. While at the client side, much of such high bandwidth and the computational power used to decode the video are wasted because the user only watches a small portion (i.e., viewport) of the entire picture. Viewport dependent 360°video processing and delivery approaches spend more bandwidth on the viewport than on non-viewports and are therefore able to reduce the overall transmission bandwidth. This paper proposes a dual buffer segment scheduling algorithm for viewport adaptive streaming methods to reduce latency when switching between high quality viewports in 360° video streaming. The approach decouples the scheduling of viewport segments and non-viewport segments to ensure the viewport segment requested matches the latest user head orientation. A base layer buffer stores all lower quality segments, and a viewport buffer stores high quality viewport segments corresponding to the most recent viewer's head orientation. The scheduling scheme determines viewport requesting time based on the buffer status and the head orientation. 
This paper also discusses how to deploy the proposed scheduling design for various viewport-adaptive video streaming methods. The proposed dual buffer segment scheduling method is implemented in an end-to-end tile-based 360° viewport-adaptive video streaming platform, where the entire 360° video is divided into a number of tiles, and each tile is independently encoded into multiple quality level representations. The client requests different quality level representations of each tile based on the viewer's head orientation and the available bandwidth, and then composes all tiles together for rendering. The simulation results verify that the proposed dual buffer segment scheduling algorithm reduces the viewport switch latency and utilizes the available bandwidth more efficiently. As a result, a more consistent immersive 360° video viewing experience can be presented to the user.
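The decoupled scheduling logic described above can be sketched roughly as follows. This is an illustrative sketch, not the paper's actual design: the buffer targets, class and method names, and the tuple-based request representation are all assumptions made for the example.

```python
class DualBufferScheduler:
    """Illustrative dual-buffer scheduler for viewport-adaptive 360° streaming.

    Low-quality segments covering all tiles fill a base buffer well ahead
    of playback; high-quality viewport segments are requested as late as
    the smaller viewport buffer allows, so that each request matches the
    most recent head orientation.
    """

    def __init__(self, base_target=10.0, viewport_target=2.0, seg_dur=1.0):
        self.base_target = base_target          # seconds of base-layer buffer to keep
        self.viewport_target = viewport_target  # seconds of viewport buffer to keep
        self.seg_dur = seg_dur                  # segment duration in seconds
        self.base_level = 0.0
        self.viewport_level = 0.0

    def next_request(self, head_orientation):
        """Decide which segment to fetch next, given the current head orientation."""
        if self.base_level < self.base_target:
            self.base_level += self.seg_dur
            return ("base", "all_tiles")        # low quality, all tiles
        if self.viewport_level < self.viewport_target:
            self.viewport_level += self.seg_dur
            return ("viewport", head_orientation)  # high quality, current viewport only
        return ("idle", None)

    def playback_tick(self, dt):
        """Drain both buffers as playback consumes dt seconds of video."""
        self.base_level = max(0.0, self.base_level - dt)
        self.viewport_level = max(0.0, self.viewport_level - dt)
```

Keeping the viewport buffer short is the key design choice: a large base buffer absorbs bandwidth variation, while the near-empty viewport buffer bounds how stale a requested high-quality viewport can be when the viewer turns their head.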
Toward objective image quality metrics: the AIC Eval Program of the JPEG
NASA Astrophysics Data System (ADS)
Richter, Thomas; Larabi, Chaker
2008-08-01
Objective quality assessment of lossy image compression codecs is an important part of the recent call of the JPEG for Advanced Image Coding. The target of the AIC ad-hoc group is twofold: first, to receive state-of-the-art still image codecs and to propose suitable technology for standardization; and second, to study objective image quality metrics to evaluate the performance of such codecs. Even though the performance of an objective metric is defined by how well it predicts the outcome of a subjective assessment, one can also study the usefulness of a metric indirectly, in a non-traditional way, namely by measuring the subjective quality improvement of a codec that has been optimized for a specific objective metric. This approach is demonstrated here on the recently proposed HDPhoto format introduced by Microsoft and an SSIM-tuned version of it by one of the authors. We compare these two implementations with JPEG in two variations and with a visually and PSNR-optimized JPEG2000 implementation. To this end, we use subjective and objective tests based on multiscale SSIM and a new DCT-based metric.
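For readers unfamiliar with the metric the codec was tuned toward, the SSIM statistic can be sketched in its simplest form. This is a single-window, whole-image version for illustration only: the standard metric averages this statistic over small local windows, and multiscale SSIM further repeats it across downsampled scales. The constants follow the common k1/k2 convention.

```python
import numpy as np

def ssim_global(x, y, L=255.0, k1=0.01, k2=0.03):
    """Single-window SSIM between two images x and y with dynamic range L.

    Combines a luminance term (means), and a contrast/structure term
    (variances and covariance); equals 1.0 iff the images are identical.
    """
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2   # stabilizing constants
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Tuning a codec "for SSIM" means allocating bits to minimize the drop in this kind of statistic rather than in PSNR, which tends to preserve local structure at the expense of pointwise error.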
Rössler, Bernhard; Lahner, Daniel; Schebesta, Karl; Chiari, Astrid; Plöchl, Walter
2012-07-01
The Internet has become the largest, most up-to-date source of medical information. Besides enhancing patients' knowledge, the freely accessible audio-visual files have an impact on medical education. However, little is known about their characteristics. In this manuscript, the quality of lumbar puncture (LP) and spinal anaesthesia (SA) videos available on YouTube is assessed. This retrospective analysis was based on a search for LP and SA on YouTube. Videos were evaluated using essential key points (5 in SA, 4 in LP) and 3 safety indicators. Furthermore, violations of sterile working techniques were recorded, and each video was rated on whether it must be regarded as dangerously misleading. Of 2321 hits matching the keywords, 38 videos were eligible for evaluation. Among LP videos, 14% contained information on all key points, 4.5% on 3, 4.5% on 2, 59% on 1, and 18% on none. Regarding SA, no video contained information on all 5 key points, 56% covered 2-4 and 25% covered 1 key point, and 19% did not contain any essential information. A sterility violation occurred in 11% of videos, and 13% were classified as dangerously misleading. Even though high-quality videos are available, the quality of video clips is generally low. The fraction of videos in which the procedure was not performed in an aseptic manner is low, but these pose a substantial risk to patients. Consequently, more high-quality, institutional medical learning videos must be made available in light of the increasing utilization of the Internet. Copyright © 2012 Elsevier B.V. All rights reserved.
Video quality assessment method motivated by human visual perception
NASA Astrophysics Data System (ADS)
He, Meiling; Jiang, Gangyi; Yu, Mei; Song, Yang; Peng, Zongju; Shao, Feng
2016-11-01
Research on video quality assessment (VQA) plays a crucial role in improving the efficiency of video coding and the performance of video processing. It is well acknowledged that the motion energy model generates motion energy responses in the middle temporal area by simulating the receptive fields of neurons in V1 for the motion perception of the human visual system. Motivated by this biological evidence for visual motion perception, a VQA method is proposed in this paper, which comprises a motion perception quality index and a spatial index. To be more specific, the motion energy model is applied to evaluate the temporal distortion severity of each frequency component generated from a difference-of-Gaussian filter bank, which produces the motion perception quality index, and a gradient similarity measure is used to evaluate the spatial distortion of the video sequence to obtain the spatial quality index. The experimental results on the LIVE, CSIQ, and IVP video databases demonstrate that the random forests regression technique trained on the generated quality indices corresponds closely to human visual perception and achieves significant improvements over comparable well-performing methods. The proposed method has higher consistency with subjective perception and higher generalization capability.
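A gradient-similarity spatial index of the kind described above might look roughly like the following sketch. The constant `c`, the use of `np.gradient` for the gradient estimate, and the plain mean pooling are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

def gradient_similarity(ref, dist, c=160.0):
    """Per-frame spatial quality score from gradient-magnitude similarity.

    Compares edge strength between a reference frame and a distorted
    frame: similar gradients everywhere yield 1.0, damaged edges less.
    """
    def grad_mag(img):
        gy, gx = np.gradient(img.astype(float))  # vertical, horizontal gradients
        return np.hypot(gx, gy)                  # gradient magnitude

    g1, g2 = grad_mag(ref), grad_mag(dist)
    sim = (2 * g1 * g2 + c) / (g1 ** 2 + g2 ** 2 + c)  # per-pixel, in (0, 1]
    return sim.mean()                                   # pooled to one score
```

In a full-reference VQA pipeline such a score would be computed per frame and combined with the temporal (motion perception) index, e.g. as features for a learned regressor.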
Turel, O; Romashkin, A; Morrison, K M
2017-08-01
There is a growing need to curb paediatric obesity. The aim of this study is to untangle associations between video-game-use attributes and obesity as a first step towards identifying and examining possible interventions. A cross-sectional time-lagged cohort study was employed, using parent-child surveys (t1) and objective physical activity and physiological measures (t2) from 125 children/adolescents (mean age = 13.06, 9-17 years old) who play video games, recruited from two clinics at a Canadian academic children's hospital. Structural equation modelling and analysis of covariance were employed for inference. The results of the study are as follows: (i) self-reported video-game play duration in the 4-h window before bedtime is related to greater abdominal adiposity (waist-to-height ratio), and this association may be mediated through reduced sleep quality (measured with the Pittsburgh Sleep Quality Index); and (ii) self-reported average video-game session duration is associated with greater abdominal adiposity, and this association may be mediated through higher self-reported sweet drink consumption while playing video games and reduced sleep quality. Video-game play duration in the 4-h window before bedtime, typical video-game session duration, sweet drink consumption while playing video games and poor sleep quality have adverse associations with abdominal adiposity. Paediatricians and researchers should further explore how these factors can be altered through behavioural or pharmacological interventions as a means to reduce paediatric obesity. © 2017 World Obesity Federation.
Software Quality Assurance Metrics
NASA Technical Reports Server (NTRS)
McRae, Kalindra A.
2004-01-01
Software Quality Assurance (SQA) is a planned and systematic set of activities that ensures that software life cycle processes and products conform to requirements, standards, and procedures. In software development, software quality means meeting requirements and a degree of excellence and refinement of a project or product. Software quality is a set of attributes of a software product by which its quality is described and evaluated. The set of attributes includes functionality, reliability, usability, efficiency, maintainability, and portability. Software metrics help us understand the technical process that is used to develop a product. The process is measured to improve it, and the product is measured to increase quality throughout the software life cycle. Software metrics are measurements of the quality of software. Software is measured to indicate the quality of the product, to assess the productivity of the people who produce the product, to assess the benefits derived from new software engineering methods and tools, to form a baseline for estimation, and to help justify requests for new tools or additional training. Any part of the software development process can be measured. If software metrics are implemented in software development, they can save time and money and allow the organization to identify the causes of defects that have the greatest effect on software development. In the summer of 2004, I worked with Cynthia Calhoun and Frank Robinson in the Software Assurance/Risk Management department. My task was to research, collect, compile, and analyze SQA metrics that have been used in other projects but are not currently being used by the SA team, and to report them to the Software Assurance team to see if any metrics can be implemented in their software assurance life cycle process.
NASA Astrophysics Data System (ADS)
Kim, Kyung-Su; Lee, Hae-Yeoun; Im, Dong-Hyuck; Lee, Heung-Kyu
Commercial markets employ digital rights management (DRM) systems to protect valuable high-definition (HD) quality videos. A DRM system uses watermarking to provide copyright protection and ownership authentication of multimedia content. We propose a real-time video watermarking scheme for HD video in the uncompressed domain. In particular, our approach takes a practical perspective, aiming to satisfy perceptual quality, real-time processing, and robustness requirements. We simplify and optimize a human visual system mask for real-time performance and also apply a dithering technique for invisibility. Extensive experiments are performed to prove that the proposed scheme satisfies the invisibility, real-time processing, and robustness requirements against video processing attacks. We concentrate upon video processing attacks that commonly occur when HD-quality videos are prepared for display on portable devices. These attacks include not only scaling and low bit-rate encoding, but also malicious attacks such as format conversion and frame rate change.
Hemmati Maslakpak, Masumeh; Shams, Shadi
2015-01-01
Background End stage renal disease negatively affects patients' quality of life. There are different educational methods to help these patients. This study was performed to compare the effectiveness of self-care education delivered by two methods, face-to-face and video education, on the quality of life of patients under treatment by hemodialysis in education-medical centers in Urmia. Methods In this quasi-experimental study, 120 hemodialysis patients were selected randomly; they were then randomly allocated to three groups: control, face-to-face education, and video education. For the face-to-face group, education was given individually in two sessions of 35 to 45 minutes. For the video education group, a CD was shown. The Kidney Disease Quality of Life-Short Form (KDQOL-SF) questionnaire was filled out before and two months after the intervention. Data analysis was performed in SPSS software using one-way ANOVA. Results The ANOVA test showed a statistically significant difference in quality of life scores among the three groups after the intervention (P=0.024). After the intervention, Tukey's post-hoc test showed no statistically significant difference between the video and face-to-face education groups regarding quality of life (P>0.05). Conclusion Implementation of the face-to-face and video education methods improves the quality of life of hemodialysis patients. It is therefore suggested that video education be used along with face-to-face education. PMID:26171412
Dance and Music in “Gangnam Style”: How Dance Observation Affects Meter Perception
Lee, Kyung Myun; Barrett, Karen Chan; Kim, Yeonhwa; Lim, Yeoeun; Lee, Kyogu
2015-01-01
Dance and music often co-occur, as evidenced when viewing choreographed dances or singers moving while performing. This study investigated how the viewing of dance motions shapes sound perception. Previous research has shown that dance reflects the temporal structure of its accompanying music, communicating musical meter (i.e., a hierarchical organization of beats) via coordinated movement patterns that indicate where strong and weak beats occur. The experiments here investigated the effects of dance cues on meter perception, hypothesizing that dance could embody the musical meter, thereby shaping participant reaction times (RTs) to sound targets occurring at different metrical positions. In experiment 1, participants viewed a video with dance choreography indicating 4/4 meter (dance condition) or a series of color changes repeated in sequences of four to indicate 4/4 meter (picture condition). A sound track accompanied these videos, and participants reacted to timbre targets at different metrical positions. Participants had the slowest RTs at the strongest beats in the dance condition only. In experiment 2, participants viewed the choreography of the horse-riding dance from Psy's "Gangnam Style" in order to examine how a familiar dance might affect meter perception. Moreover, participants in this experiment were divided into a group with experience dancing this choreography and a group without experience. Results again showed slower RTs at stronger metrical positions, and the group with experience demonstrated a more refined perception of the metrical hierarchy. Results likely stem from the temporally selective division of attention between the auditory and visual domains. This study has implications for understanding 1) the impact of splitting attention among different sensory modalities, and 2) the impact of embodiment on the perception of musical meter.
Viewing dance may interfere with sound processing, particularly at critical metrical positions, but embodied familiarity with dance choreography may facilitate meter awareness. Results shed light on the processing of multimedia environments. PMID:26308092
About subjective evaluation of adaptive video streaming
NASA Astrophysics Data System (ADS)
Tavakoli, Samira; Brunnström, Kjell; Garcia, Narciso
2015-03-01
The usage of HTTP Adaptive Streaming (HAS) technology by content providers is increasing rapidly. With the video content available in multiple qualities, HAS allows the quality of the downloaded video to be adapted to current network conditions, providing smooth video playback. However, the time-varying video quality itself introduces a new type of impairment. The quality adaptation can be done in different ways. In order to find the best adaptation strategy, one that maximizes users' perceptual quality, it is necessary to investigate the subjective perception of adaptation-related impairments. However, the novelty of these impairments and their comparatively long duration make most standardized assessment methodologies less suited for studying HAS degradations. Furthermore, in traditional testing methodologies, the quality of the video in audiovisual services is often evaluated separately, not in the presence of audio. Nevertheless, jointly evaluating the audio and the video within a subjective test is a relatively under-explored research field. In this work, we address the research question of determining an appropriate assessment methodology to evaluate sequences with time-varying quality due to adaptation. This was done by studying the influence of different adaptation-related parameters through two subjective experiments using a methodology developed to evaluate long test sequences. In order to study the impact of audio presence on quality assessment by the test subjects, one of the experiments was done in the presence of audio stimuli. The experimental results were subsequently compared with another experiment using the standardized single stimulus Absolute Category Rating (ACR) methodology.
NASA Astrophysics Data System (ADS)
Radun, Jenni E.; Virtanen, Toni; Olives, Jean-Luc; Vaahteranoksa, Mikko; Vuori, Tero; Nyman, Göte
2007-01-01
We present an effective method for comparing the subjective audiovisual quality of different video cameras and the features related to their quality changes. The method achieves both quantitative estimation of overall quality and qualitative description of critical quality features. The aim was to combine two image quality evaluation methods, the quantitative Absolute Category Rating (ACR) method with hidden reference removal and the qualitative Interpretation-Based Quality (IBQ) method, in order to see how they complement each other in audiovisual quality estimation tasks. Twenty-six observers estimated the audiovisual quality of six different cameras, mainly mobile phone video cameras. In order to achieve an efficient subjective estimation of audiovisual quality, only two contents with different quality requirements were recorded with each camera. The results show that the subjectively important quality features were more related to the overall estimations of the cameras' visual video quality than to sound-related features. The data demonstrated two significant quality dimensions related to visual quality: darkness and sharpness. We conclude that the qualitative methodology can complement quantitative quality estimation for audiovisual material as well. The IBQ approach is especially valuable when the induced quality changes are multidimensional.
Evaluation Study of a Wireless Multimedia Traffic-Oriented Network Model
NASA Astrophysics Data System (ADS)
Vasiliadis, D. C.; Rizos, G. E.; Vassilakis, C.
2008-11-01
In this paper, a wireless multimedia traffic-oriented network scheme over a fourth-generation (4G) system is presented and analyzed. We conducted an extensive evaluation study for various mobility configurations in order to incorporate the behavior of the IEEE 802.11b standard over a test-bed wireless multimedia network model. In this context, the Quality of Service (QoS) over this network is vital for providing a reliable high-bandwidth platform for data-intensive sources like video streaming. Therefore, the main QoS metrics considered were the bandwidth of both dropped and lost packets and their mean packet delay under various traffic conditions. Finally, we used a generic distance-vector routing protocol based on an implementation of the Distributed Bellman-Ford algorithm. The performance of the test-bed network model was evaluated using the NS-2 simulation environment.
Stochastic Packet Loss Model to Evaluate QoE Impairments
NASA Astrophysics Data System (ADS)
Hohlfeld, Oliver
With the provisioning of broadband access for the mass market, even in wireless and mobile networks, multimedia content, especially real-time streaming of high-quality audio and video, is extensively viewed and exchanged over the Internet. Quality of Experience (QoE), describing the service quality perceived by the user, is a vital factor in ensuring customer satisfaction in today's communication networks. Frameworks for assessing quality degradations in streamed video are currently being investigated as a complex multi-layered research topic, involving network traffic load, codec functions, and measures of the user's perception of video quality.
A qualitative analysis of methotrexate self-injection education videos on YouTube.
Rittberg, Rebekah; Dissanayake, Tharindri; Katz, Steven J
2016-05-01
The aim of this study is to identify and evaluate the quality of videos for patients available on YouTube for learning to self-administer subcutaneous methotrexate. Using the search term "Methotrexate injection," two clinical reviewers analyzed the first 60 videos on YouTube. Source and search rank of video, audience interaction, video duration, and time since video was uploaded on YouTube were recorded. Videos were classified as useful, misleading, or a personal patient view. Videos were rated for reliability, comprehensiveness, and global quality scale (GQS). Reasons for misleading videos were documented, and patient videos were documented as being either positive or negative towards methotrexate (MTX) injection. Fifty-one English videos overlapped between the two geographic locations; 10 videos were classified as useful (19.6 %), 14 misleading (27.5 %), and 27 personal patient view (52.9 %). Total views of videos were 161,028: 19.2 % useful, 72.8 % patient, and 8.0 % misleading. Mean GQS: 4.2 (±1.0) useful, 1.6 (±1.1) misleading, and 2.0 (±0.9) for patient videos (p < 0.0001). Mean reliability: 3.3 (±0.6) useful, 0.9 (±1.2) misleading, and 1.0 (±0.7) for patient videos (p < 0.0001). Comprehensiveness: 2.2 (±1.9) useful, 0.1 (±0.3) misleading, and 1.5 (±1.5) for patient view videos (p = 0.0027). This study demonstrates a minority of videos are useful for teaching MTX injection. Further, video quality does not correlate with video views. While web video may be an additional educational tool available, clinicians need to be familiar with specific resources to help guide and educate their patients to ensure best outcomes.
Hudali, Tamer; Bhattarai, Mukul; Deckard, Alan; Hingle, Susan
2017-01-01
Background Hospital medicine is a relatively new specialty field, dedicated to the delivery of comprehensive medical care to hospitalized patients. YouTube is one of the most frequently used websites, offering access to a gamut of videos from self-produced to professionally made. Objective The aim of our study was to determine the adequacy of YouTube as an effective means to define and depict the role of hospitalists. Methods YouTube was searched on November 17, 2014, using the following search words: “hospitalist,” “hospitalist definition,” “what is the role of a hospitalist,” “define hospitalist,” and “who is a hospitalist.” Videos found only in the first 10 pages of each search were included. Non-English, noneducational, and nonrelevant videos were excluded. A novel 7-point scoring tool was created by the authors based on the definition of a hospitalist adopted by the Society of Hospital Medicine. Three independent reviewers evaluated, scored, and classified the videos into high, intermediate, and low quality based on the average score. Results A total of 102 videos out of 855 were identified as relevant and included in the analysis. Videos uploaded by academic institutions had the highest mean score. Only 6 videos were classified as high quality, 53 as intermediate quality, and 42 as low quality, with 82.4% (84/102) of the videos scoring an average of 4 or less. Conclusions Most videos found in the search of a hospitalist definition are inadequate. Leading medical organizations and academic institutions should consider producing and uploading quality videos to YouTube to help patients and their families better understand the roles and definition of the hospitalist. PMID:28073738
Productivity in Pediatric Palliative Care: Measuring and Monitoring an Elusive Metric.
Kaye, Erica C; Abramson, Zachary R; Snaman, Jennifer M; Friebert, Sarah E; Baker, Justin N
2017-05-01
Workforce productivity is poorly defined in health care. Particularly in the field of pediatric palliative care (PPC), the absence of consensus metrics impedes aggregation and analysis of data to track workforce efficiency and effectiveness. Lack of uniformly measured data also compromises the development of innovative strategies to improve productivity and hinders investigation of the link between productivity and quality of care, which are interrelated but not interchangeable. To review the literature regarding the definition and measurement of productivity in PPC; to identify barriers to productivity within traditional PPC models; and to recommend novel metrics to study productivity as a component of quality care in PPC. PubMed ® and Cochrane Database of Systematic Reviews searches for scholarly literature were performed using key words (pediatric palliative care, palliative care, team, workforce, workflow, productivity, algorithm, quality care, quality improvement, quality metric, inpatient, hospital, consultation, model) for articles published between 2000 and 2016. Organizational searches of Center to Advance Palliative Care, National Hospice and Palliative Care Organization, National Association for Home Care & Hospice, American Academy of Hospice and Palliative Medicine, Hospice and Palliative Nurses Association, National Quality Forum, and National Consensus Project for Quality Palliative Care were also performed. Additional semistructured interviews were conducted with directors from seven prominent PPC programs across the U.S. to review standard operating procedures for PPC team workflow and productivity. Little consensus exists in the PPC field regarding optimal ways to define, measure, and analyze provider and program productivity. Barriers to accurate monitoring of productivity include difficulties with identification, measurement, and interpretation of metrics applicable to an interdisciplinary care paradigm. 
In the context of inefficiencies inherent to traditional consultation models, novel productivity metrics are proposed. Further research is needed to determine optimal metrics for monitoring productivity within PPC teams. Innovative approaches should be studied with the goal of improving efficiency of care without compromising value. Copyright © 2016 American Academy of Hospice and Palliative Medicine. Published by Elsevier Inc. All rights reserved.
Image quality assessment metric for frame accumulated image
NASA Astrophysics Data System (ADS)
Yu, Jianping; Li, Gang; Wang, Shaohui; Lin, Ling
2018-01-01
Medical image quality determines the accuracy of diagnosis, and gray-scale resolution is an important parameter for measuring image quality. Current objective metrics, however, are not well suited to assessing medical images obtained by frame accumulation technology: they pay little attention to gray-scale resolution, are based largely on spatial resolution, and are limited to the 256-level gray scale of existing display devices. This paper therefore proposes a signal-to-noise-based metric, the "mean signal-to-noise ratio" (MSNR), to evaluate frame-accumulated medical image quality more reasonably. We demonstrate its potential application on a series of images captured under a constant illumination signal, taking the mean of a sufficiently large number of images as the reference image. Several groups of images with different numbers of accumulated frames were generated and their MSNR values calculated. The experimental results show that, compared with other quality assessment methods, the metric is simpler, more effective, and better suited to assessing frame-accumulated images whose gray scale and precision surpass those of the original image.
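The MSNR idea described above can be sketched as follows. The formulation (a global signal-to-noise ratio in decibels between a frame-accumulated image and a long-run mean reference) and the function name are an illustrative reading of the abstract, not the authors' code:

```python
import numpy as np

def msnr(accumulated, reference):
    """Mean signal-to-noise ratio (dB) of a frame-accumulated image.

    `reference` is the per-pixel mean of a large number of frames captured
    under constant illumination (the paper's noise-free estimate); the
    residual of the accumulated image against it is treated as noise.
    """
    accumulated = np.asarray(accumulated, dtype=np.float64)
    reference = np.asarray(reference, dtype=np.float64)
    noise = accumulated - reference          # residual left after accumulation
    signal_power = np.mean(reference ** 2)
    noise_power = np.mean(noise ** 2)
    return 10.0 * np.log10(signal_power / noise_power)
```

Accumulating more frames should raise the score, since averaging suppresses independent per-frame noise.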
López-Sendón, José; González-Juanatey, José Ramón; Pinto, Fausto; Cuenca Castillo, José; Badimón, Lina; Dalmau, Regina; González Torrecilla, Esteban; López-Mínguez, José Ramón; Maceira, Alicia M; Pascual-Figal, Domingo; Pomar Moya-Prats, José Luis; Sionis, Alessandro; Zamorano, José Luis
2015-11-01
Cardiology practice requires complex organization that impacts overall outcomes and may differ substantially among hospitals and communities. The aim of this consensus document is to define quality markers in cardiology, including markers to measure the quality of results (outcomes metrics) and quality measures related to better results in clinical practice (performance metrics). The document is mainly intended for the Spanish health care system and may serve as a basis for similar documents in other countries. Copyright © 2015 Sociedad Española de Cardiología. Published by Elsevier España, S.L.U. All rights reserved.
Sánchez-Margallo, Juan A; Sánchez-Margallo, Francisco M; Oropesa, Ignacio; Enciso, Silvia; Gómez, Enrique J
2017-02-01
The aim of this study is to present the construct and concurrent validity of a motion-tracking method of laparoscopic instruments based on an optical pose tracker and determine its feasibility as an objective assessment tool of psychomotor skills during laparoscopic suturing. A group of novice ([Formula: see text] laparoscopic procedures), intermediate (11-100 laparoscopic procedures) and experienced ([Formula: see text] laparoscopic procedures) surgeons performed three intracorporeal sutures on an ex vivo porcine stomach. Motion analysis metrics were recorded using the proposed tracking method, which employs an optical pose tracker to determine the laparoscopic instruments' position. Construct validation was measured for all 10 metrics across the three groups and between pairs of groups. Concurrent validation was measured against a previously validated suturing checklist. Checklists were completed by two independent surgeons over blinded video recordings of the task. Eighteen novices, 15 intermediates and 11 experienced surgeons took part in this study. Execution time and path length travelled by the laparoscopic dissector presented construct validity. Experienced surgeons required significantly less time ([Formula: see text]), travelled less distance using both laparoscopic instruments ([Formula: see text]) and made more efficient use of the work space ([Formula: see text]) compared with novice and intermediate surgeons. Concurrent validation showed strong correlation between both the execution time and path length and the checklist score ([Formula: see text] and [Formula: see text], [Formula: see text]). The suturing performance was successfully assessed by the motion analysis method. Construct and concurrent validity of the motion-based assessment method has been demonstrated for the execution time and path length metrics. This study demonstrates the efficacy of the presented method for objective evaluation of psychomotor skills in laparoscopic suturing. 
However, this method does not take into account the quality of the suture. Thus, future works will focus on developing new methods combining motion analysis and qualitative outcome evaluation to provide a complete performance assessment to trainees.
Surgical gesture classification from video and kinematic data.
Zappella, Luca; Béjar, Benjamín; Hager, Gregory; Vidal, René
2013-10-01
Much of the existing work on automatic classification of gestures and skill in robotic surgery is based on dynamic cues (e.g., time to completion, speed, forces, torque) or kinematic data (e.g., robot trajectories and velocities). While videos could be equally or more discriminative (e.g., videos contain semantic information not present in kinematic data), they are typically not used because of the difficulties associated with automatic video interpretation. In this paper, we propose several methods for automatic surgical gesture classification from video data. We assume that the video of a surgical task (e.g., suturing) has been segmented into video clips corresponding to a single gesture (e.g., grabbing the needle, passing the needle) and propose three methods to classify the gesture of each video clip. In the first one, we model each video clip as the output of a linear dynamical system (LDS) and use metrics in the space of LDSs to classify new video clips. In the second one, we use spatio-temporal features extracted from each video clip to learn a dictionary of spatio-temporal words, and use a bag-of-features (BoF) approach to classify new video clips. In the third one, we use multiple kernel learning (MKL) to combine the LDS and BoF approaches. Since the LDS approach is also applicable to kinematic data, we also use MKL to combine both types of data in order to exploit their complementarity. Our experiments on a typical surgical training setup show that methods based on video data perform equally well, if not better, than state-of-the-art approaches based on kinematic data. In turn, the combination of both kinematic and video data outperforms any other algorithm based on one type of data alone. Copyright © 2013 Elsevier B.V. All rights reserved.
Visual quality analysis for images degraded by different types of noise
NASA Astrophysics Data System (ADS)
Ponomarenko, Nikolay N.; Lukin, Vladimir V.; Ieremeyev, Oleg I.; Egiazarian, Karen O.; Astola, Jaakko T.
2013-02-01
Modern visual quality metrics take into account different peculiarities of the human visual system (HVS). One of them, described by the Weber-Fechner law, concerns the differing sensitivity to distortions in image fragments with different local mean values (intensity, brightness). We analyze how this property can be incorporated into the metric PSNR-HVS-M and show that some improvement in its performance can be obtained. Then, the visual quality of color images corrupted by three types of i.i.d. noise (pure additive, pure multiplicative, and signal-dependent Poisson) is analyzed. Experiments with a group of observers are carried out on distorted color images created on the basis of the TID2008 database. Several modern HVS metrics are considered, and it is shown that even the best of them are unable to assess the visual quality of distorted images adequately. The reasons relate to observers' attention to particular objects in the test images, i.e., to semantic aspects of vision, which are worth taking into account in the design of HVS metrics.
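The Weber-Fechner property discussed above, lower sensitivity to a given absolute distortion in brighter image regions, can be illustrated with a toy luminance-weighted MSE. This is a simplified stand-in for the PSNR-HVS-M modification studied in the paper, not the authors' formula:

```python
import numpy as np

def luminance_weighted_mse(ref, dist, eps=1.0):
    """Toy MSE variant with Weber-style luminance masking.

    Perceived contrast scales roughly with dI/I, so the same absolute
    error is down-weighted in brighter regions (weight ~ 1/intensity).
    `eps` avoids division by zero in black regions.
    """
    ref = np.asarray(ref, dtype=np.float64)
    dist = np.asarray(dist, dtype=np.float64)
    weight = 1.0 / (ref + eps)               # sensitivity falls with local intensity
    err = (ref - dist) ** 2
    return float(np.sum(weight * err) / np.sum(weight))
```

Under this weighting, an identical distortion placed in a dark region scores worse (higher weighted MSE) than the same distortion in a bright region.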
Impact of artifact removal on ChIP quality metrics in ChIP-seq and ChIP-exo data
Carroll, Thomas S.; Liang, Ziwei; Salama, Rafik; Stark, Rory; de Santiago, Ines
2014-01-01
With the advent of ChIP-seq multiplexing technologies and the subsequent increase in ChIP-seq throughput, the development of working standards for the quality assessment of ChIP-seq studies has received significant attention. The ENCODE consortium's large scale analysis of transcription factor binding and epigenetic marks as well as concordant work on ChIP-seq by other laboratories has established a new generation of ChIP-seq quality control measures. The use of these metrics alongside common processing steps has however not been evaluated. In this study, we investigate the effects of blacklisting and removal of duplicated reads on established metrics of ChIP-seq quality and show that the interpretation of these metrics is highly dependent on the ChIP-seq preprocessing steps applied. Further to this we perform the first investigation of the use of these metrics for ChIP-exo data and make recommendations for the adaptation of the NSC statistic to allow for the assessment of ChIP-exo efficiency. PMID:24782889
Spherical rotation orientation indication for HEVC and JEM coding of 360 degree video
NASA Astrophysics Data System (ADS)
Boyce, Jill; Xu, Qian
2017-09-01
Omnidirectional (or "360 degree") video, representing a panoramic view of a spherical 360° × 180° scene, can be encoded using conventional video compression standards once it has been projection-mapped to a 2D rectangular format. The equirectangular projection format is currently used for mapping 360 degree video to a rectangular representation for coding with HEVC/JEM. However, video in the top and bottom regions of the image, corresponding to the "north pole" and "south pole" of the spherical representation, is significantly warped. We propose to perform spherical rotation of the input video prior to HEVC/JEM encoding in order to improve coding efficiency, and to signal parameters in a supplemental enhancement information (SEI) message that describe the inverse rotation process recommended to be applied following HEVC/JEM decoding, prior to display. Experimental results show that up to 17.8% bitrate gain (using the WS-PSNR end-to-end metric) can be achieved for the Chairlift sequence using HM16.15, and 11.9% gain using JEM6.0, with an average gain of 2.9% for HM16.15 and 2.2% for JEM6.0.
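The WS-PSNR metric cited in these results weights each pixel's squared error by the cosine of its latitude, compensating for the oversampling of the pole regions in the equirectangular format. A minimal per-frame sketch for a grayscale frame, using the standard cosine weighting:

```python
import numpy as np

def ws_psnr(ref, dist, max_val=255.0):
    """Weighted-spherical PSNR for a single equirectangular frame.

    Each row's squared error is weighted by cos of its latitude, so the
    heavily stretched pole rows count less. Per-frame, grayscale sketch.
    """
    ref = np.asarray(ref, dtype=np.float64)
    dist = np.asarray(dist, dtype=np.float64)
    h, w = ref.shape
    j = np.arange(h).reshape(-1, 1)
    # latitude weight for row j: cos((j + 0.5 - h/2) * pi / h), broadcast over columns
    weight = np.cos((j + 0.5 - h / 2) * np.pi / h)
    wmse = np.sum(weight * (ref - dist) ** 2) / (np.sum(weight) * w)
    return 10.0 * np.log10(max_val ** 2 / wmse)
```

An error confined to a pole row is penalized less than the same error on the equator, which is exactly why the metric suits rotated equirectangular content.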
Hickman, Simon J
2016-03-01
Internet video sharing sites allow the free dissemination of educational material. This study investigated the quality and educational content of videos of eye movement disorders posted on such sites. Educational neurological eye movement videos were identified by entering the titles of the eye movement abnormality into the search boxes of the video sharing sites. Also, suggested links were followed from each video. The number of views, likes, and dislikes for each video were recorded. The videos were then rated for their picture and sound quality. Their educational value was assessed according to whether the video included a description of the eye movement abnormality, the anatomical location of the lesion (if appropriate), and the underlying diagnosis. Three hundred fifty-four of these videos were found on YouTube and Vimeo. There was a mean of 6,443 views per video (range, 1-195,957). One hundred nineteen (33.6%) had no form of commentary about the eye movement disorder shown apart from the title. Forty-seven (13.3%) contained errors in the title or in the text. Eighty (22.6%) had excellent educational value by describing the eye movement abnormality, the anatomical location of the lesion, and the underlying diagnosis. Of these, 30 also had good picture and sound quality. The videos with excellent educational value had a mean of 9.84 "likes" per video compared with 2.37 for those videos without a commentary (P < 0.001). The videos that combined excellent educational value with good picture and sound quality had a mean of 10.23 "likes" per video (P = 0.004 vs videos with no commentary). There was no significant difference in the mean number of "dislikes" between those videos that had no commentary or which contained errors and those with excellent educational value. 
A large number of eye movement videos are freely available on these sites; however, because there is no peer review, a significant number have poor educational value, either lacking any commentary or containing errors. The number of "likes" can help identify videos with excellent educational value, but the number of "dislikes" does not help in discerning which videos have poor educational value.
Learning a Continuous-Time Streaming Video QoE Model.
Ghadiyaram, Deepti; Pan, Janice; Bovik, Alan C
2018-05-01
Over-the-top adaptive video streaming services are frequently impacted by fluctuating network conditions that can lead to rebuffering events (stalling events) and sudden bitrate changes. These events visually impact video consumers' quality of experience (QoE) and can lead to consumer churn. The development of models that can accurately predict viewers' instantaneous subjective QoE under such volatile network conditions could potentially enable the more efficient design of quality-control protocols for media-driven services, such as YouTube, Amazon, Netflix, and so on. However, most existing models only predict a single overall QoE score on a given video and are based on simple global video features, without accounting for relevant aspects of human perception and behavior. We have created a QoE evaluator, called the time-varying QoE Indexer, that accounts for interactions between stalling events, analyzes the spatial and temporal content of a video, predicts the perceptual video quality, models the state of the client-side data buffer, and consequently predicts continuous-time quality scores that agree quite well with human opinion scores. The new QoE predictor also embeds the impact of relevant human cognitive factors, such as memory and recency, and their complex interactions with the video content being viewed. We evaluated the proposed model on three different video databases and attained standout QoE prediction performance.
75 FR 5040 - Extension of Period for Comments on Enhancement in the Quality of Patents
Federal Register 2010, 2011, 2012, 2013, 2014
2010-02-01
... patents, to identify appropriate indicia of quality, and to establish metrics for the measurement of the ... issued patents. ... Kappos, Under Secretary of Commerce for Intellectual Property and Director of the United States Patent...
Deal, Shanley B; Alseidi, Adnan A
2017-12-01
Online videos are among the most common resources for case preparation. Using crowd sourcing, we evaluated the relationship between operative quality and viewing characteristics of online laparoscopic cholecystectomy (LC) videos. We edited 160 online videos of laparoscopic cholecystectomy to 60 seconds or less. Crowd workers (CW) rated videos using the Global Objective Assessment of Laparoscopic Skills (GOALS) and the critical view of safety (CVS) criteria, and assigned overall pass/fail ratings according to whether CVS was achieved; linear mixed effects models derived average ratings. Views, likes, dislikes, subscribers, and country were recorded for subset analysis of YouTube videos. The Spearman correlation coefficient (SCC) assessed correlation between performance measures. One video (0.6%) achieved a passing CVS score of ≥5; 23%, ≥4; 44%, ≥3; 79%, ≥2; and 100%, ≥1. Pass/fail ratings correlated with CVS, SCC 0.95 (p < 0.001), and with GOALS, SCC 0.79 (p < 0.001). Among YouTube videos (n = 139), higher views, likes, or subscribers did not correlate with better quality. The average CVS and GOALS scores were no different for videos with >20,000 views (22%) compared with those with <20,000 (78%). The frequency of CVS attainment is incredibly low, and GOALS technical performance only average, in frequently used online surgical videos of LC. Favorable characteristics, such as number of views or likes, do not translate to higher quality. Copyright © 2017 American College of Surgeons. Published by Elsevier Inc. All rights reserved.
Lauricella, Leticia L; Costa, Priscila B; Salati, Michele; Pego-Fernandes, Paulo M; Terra, Ricardo M
2018-06-01
Database quality measurement should be considered a mandatory step to ensure an adequate level of confidence in data used for research and quality improvement. Several metrics have been described in the literature, but no standardized approach has been established. We aimed to describe a methodological approach applied to measure the quality and inter-rater reliability of a regional multicentric thoracic surgical database (Paulista Lung Cancer Registry). Data from the first 3 years of the Paulista Lung Cancer Registry underwent an audit process with 3 metrics: completeness, consistency, and inter-rater reliability. The first 2 methods were applied to the whole data set, and the last method was calculated using 100 cases randomized for direct auditing. Inter-rater reliability was evaluated using percentage of agreement between the data collector and auditor and through calculation of Cohen's κ and intraclass correlation. The overall completeness per section ranged from 0.88 to 1.00, and the overall consistency was 0.96. Inter-rater reliability showed many variables with high disagreement (>10%). For numerical variables, intraclass correlation was a better metric than inter-rater reliability. Cohen's κ showed that most variables had moderate to substantial agreement. The methodological approach applied to the Paulista Lung Cancer Registry showed that completeness and consistency metrics did not sufficiently reflect the real quality status of a database. The inter-rater reliability associated with κ and intraclass correlation was a better quality metric than completeness and consistency metrics because it could determine the reliability of specific variables used in research or benchmark reports. This report can be a paradigm for future studies of data quality measurement. Copyright © 2018 American College of Surgeons. Published by Elsevier Inc. All rights reserved.
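The chance-corrected agreement statistic used in this audit, Cohen's κ, can be computed for two raters over the same items as follows. This is a generic sketch of the standard formula, not the registry's audit code:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters: (observed - expected) / (1 - expected),
    where expected agreement is the chance agreement implied by each rater's
    marginal label frequencies. Labels may be any hashable values."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    if expected == 1:
        return 1.0  # degenerate case: both raters always use one identical label
    return (observed - expected) / (1 - expected)
```

Values near 1 indicate near-perfect agreement, values near 0 indicate agreement no better than chance, which is why κ (together with intraclass correlation for numerical variables) is more informative than raw percent agreement.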
Yao, Guangle; Lei, Tao; Zhong, Jiandan; Jiang, Ping; Jia, Wenwu
2017-01-01
Background subtraction (BS) is one of the most commonly encountered tasks in video analysis and tracking systems. It distinguishes the foreground (moving objects) from the video sequences captured by static imaging sensors. Background subtraction in remote scene infrared (IR) video is important and common to lots of fields. This paper provides a Remote Scene IR Dataset captured by our designed medium-wave infrared (MWIR) sensor. Each video sequence in this dataset is identified with specific BS challenges and the pixel-wise ground truth of foreground (FG) for each frame is also provided. A series of experiments were conducted to evaluate BS algorithms on this proposed dataset. The overall performance of BS algorithms and the processor/memory requirements were compared. Proper evaluation metrics or criteria were employed to evaluate the capability of each BS algorithm to handle different kinds of BS challenges represented in this dataset. The results and conclusions in this paper provide valid references to develop new BS algorithm for remote scene IR video sequence, and some of them are not only limited to remote scene or IR video sequence but also generic for background subtraction. The Remote Scene IR dataset and the foreground masks detected by each evaluated BS algorithm are available online: https://github.com/JerryYaoGl/BSEvaluationRemoteSceneIR. PMID:28837112
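Evaluation against per-frame foreground ground truth, as described above, typically reduces to pixel-wise detection metrics. A generic sketch of precision, recall, and F-measure on binary masks (not the paper's exact protocol):

```python
import numpy as np

def bs_scores(pred_mask, gt_mask):
    """Pixel-wise precision, recall, and F-measure of a predicted
    foreground mask against a ground-truth mask (both boolean arrays)."""
    pred = np.asarray(pred_mask, dtype=bool)
    gt = np.asarray(gt_mask, dtype=bool)
    tp = np.logical_and(pred, gt).sum()      # correctly detected foreground
    fp = np.logical_and(pred, ~gt).sum()     # background flagged as foreground
    fn = np.logical_and(~pred, gt).sum()     # missed foreground
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```

Averaging these scores over all annotated frames of a sequence gives a per-challenge ranking of BS algorithms of the kind reported in the paper.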
The State of Simulations: Soft-Skill Simulations Emerge as a Powerful New Form of E-Learning.
ERIC Educational Resources Information Center
Aldrich, Clark
2001-01-01
Presents responses of leaders from six simulation companies about challenges and opportunities of soft-skills simulations in e-learning. Discussion includes: evaluation metrics; role of subject matter experts in developing simulations; video versus computer graphics; technology needed to run simulations; technology breakthroughs; pricing;…
Assessing the quality of restored images in optical long-baseline interferometry
NASA Astrophysics Data System (ADS)
Gomes, Nuno; Garcia, Paulo J. V.; Thiébaut, Éric
2017-03-01
Assessing the quality of aperture synthesis maps is relevant for benchmarking image reconstruction algorithms, for the scientific exploitation of data from optical long-baseline interferometers, and for the design/upgrade of new/existing interferometric imaging facilities. Although metrics have been proposed in these contexts, no systematic study has been conducted on the selection of a robust metric for quality assessment. This article addresses the question: what is the best metric to assess the quality of a reconstructed image? It starts by considering several metrics and selecting a few based on general properties. Then, a variety of image reconstruction cases are considered. The observational scenarios are phase closure and phase referencing at the Very Large Telescope Interferometer (VLTI), for a combination of two, three, four and six telescopes. End-to-end image reconstruction is accomplished with the MIRA software, and several merit functions are put to the test. It is found that convolution by an effective point spread function is required for proper image quality assessment. The effective angular resolution of the images is superior to the naive expectation based on the maximum frequency sampled by the array. This is due to the prior information used in the aperture synthesis algorithm and to the nature of the objects considered. The ℓ1-norm is the most robust of all considered metrics because, being linear, it is less sensitive to image smoothing by high regularization levels. For the cases considered, this metric allows the implementation of automatic quality assessment of reconstructed images, with a performance similar to human selection.
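The ℓ1-based comparison favored by this study, applied after bringing both images to a common effective resolution, can be sketched as follows. A separable Gaussian stands in for the effective PSF, which is an assumption for illustration (the paper derives the effective resolution from the reconstruction itself):

```python
import numpy as np

def l1_metric(reconstruction, truth, sigma=1.0):
    """Normalized l1 distance between a reconstructed map and the true
    object, after smoothing both with the same Gaussian kernel so that
    the comparison happens at a common effective resolution."""
    def gaussian_blur(img, s):
        radius = int(3 * s)
        x = np.arange(-radius, radius + 1)
        k = np.exp(-0.5 * (x / s) ** 2)
        k /= k.sum()                          # unit-mass 1D kernel
        # separable blur: convolve rows, then columns
        out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
        return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)
    a = gaussian_blur(np.asarray(reconstruction, dtype=np.float64), sigma)
    b = gaussian_blur(np.asarray(truth, dtype=np.float64), sigma)
    return float(np.abs(a - b).sum() / np.abs(b).sum())
```

Because the metric is linear in the residual, moderate extra smoothing changes it far less than it changes quadratic metrics, which is the robustness property the article highlights.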
NASA Astrophysics Data System (ADS)
Al Hadhrami, Tawfik; Nightingale, James M.; Wang, Qi; Grecos, Christos
2014-05-01
In emergency situations, the ability to remotely monitor unfolding events using high-quality video feeds significantly improves the incident commander's understanding of the situation, thereby aiding effective decision making. This paper presents a novel, adaptive video monitoring system for emergency situations where the normal communications network infrastructure has been severely impaired or is no longer operational. The proposed scheme, operating over a rapidly deployable wireless mesh network, supports real-time video feeds between first responders, forward operating bases, and primary command and control centers. Video feeds captured on portable devices carried by first responders and by static visual sensors are encoded in H.264/SVC, the scalable extension to H.264/AVC, allowing efficient, standards-based temporal, spatial, and quality scalability of the video. A three-tier video delivery system is proposed, which balances the need to avoid overuse of mesh nodes with the operational requirements of the emergency management team. In the first tier, the video feeds are delivered at a low spatial and temporal resolution employing only the base layer of the H.264/SVC video stream. Routing in this mode is designed to employ all nodes across the entire mesh network. In the second tier, whenever operational considerations require that commanders or operators focus on a particular video feed, a 'fidelity control' mechanism at the monitoring station sends control messages to the routing and scheduling agents in the mesh network, which increase the quality of the received picture using SNR scalability while conserving bandwidth by maintaining a low frame rate. In this mode, routing decisions are based on reliable packet delivery, with the most reliable routes used to deliver the base and lower enhancement layers; as fidelity is increased and more scalable layers are transmitted, they are assigned to routes in descending order of reliability.
The third tier of video delivery transmits a high-quality video stream including all available scalable layers using the most reliable routes through the mesh network ensuring the highest possible video quality. The proposed scheme is implemented in a proven simulator, and the performance of the proposed system is numerically evaluated through extensive simulations. We further present an in-depth analysis of the proposed solutions and potential approaches towards supporting high-quality visual communications in such a demanding context.
Hadjisolomou, Stavros P; El-Haddad, George
2017-01-01
Coleoid cephalopods (squid, octopus, and sepia) are renowned for their elaborate body patterning capabilities, which are employed for camouflage or communication. The specific chromatic appearance of a cephalopod, at any given moment, is a direct result of the combined action of their intradermal pigmented chromatophore organs and reflecting cells. Therefore, a lot can be learned about the cephalopod coloration system by video recording and analyzing the activation of individual chromatophores in time. The fact that adult cephalopods have small chromatophores, up to several hundred thousand in number, makes measurement and analysis over several seconds a difficult task. However, current advancements in videography enable high-resolution and high framerate recording, which can be used to record chromatophore activity in more detail and accuracy in both space and time domains. In turn, the additional pixel information and extra frames per video from such recordings result in large video files of several gigabytes, even when the recording spans only few minutes. We created a software plugin, "SpotMetrics," that can automatically analyze high resolution, high framerate video of chromatophore organ activation in time. This image analysis software can track hundreds of individual chromatophores over several hundred frames to provide measurements of size and color. This software may also be used to measure differences in chromatophore activation during different behaviors which will contribute to our understanding of the cephalopod sensorimotor integration system. In addition, this software can potentially be utilized to detect numbers of round objects and size changes in time, such as eye pupil size or number of bacteria in a sample. Thus, we are making this software plugin freely available as open-source because we believe it will be of benefit to other colleagues both in the cephalopod biology field and also within other disciplines.
NASA Astrophysics Data System (ADS)
Chen, Zhenzhong; Han, Junwei; Ngan, King Ngi
2005-10-01
MPEG-4 treats a scene as a composition of several objects, or so-called video object planes (VOPs), that are separately encoded and decoded. Such a flexible video coding framework makes it possible to code different video objects at different distortion scales. It is necessary to analyze the priority of the video objects according to their semantic importance, intrinsic properties, and psycho-visual characteristics, so that the bit budget can be distributed properly among the video objects to improve the perceptual quality of the compressed video. This paper provides an automatic video object priority definition method based on an object-level visual attention model and further proposes an optimization framework for video object bit allocation. One significant contribution of this work is that human visual system characteristics are incorporated into the video coding optimization process. Another advantage is that the priority of each video object is obtained automatically, instead of fixing weighting factors before encoding or relying on user interactivity. To evaluate the performance of the proposed approach, we compare it with the traditional verification model bit allocation and the optimal multiple video object bit allocation algorithms. Compared with traditional bit allocation algorithms, the objective quality of objects with higher priority is significantly improved under this framework. These results demonstrate the usefulness of this unsupervised subjective quality lifting framework.
Measuring perceived video quality of MPEG enhancement by people with impaired vision
Fullerton, Matthew; Woods, Russell L.; Vera-Diaz, Fuensanta A.; Peli, Eli
2007-01-01
We used a new method to measure the perceived quality of contrast-enhanced motion video. Patients with impaired vision (n = 24) and normally-sighted subjects (n = 6) adjusted the level of MPEG-based enhancement of 8 videos (4 minutes each) drawn from 4 categories. They selected the level of enhancement that provided the preferred view of the videos, using a reducing-step-size staircase procedure. Most patients made consistent selections of the preferred level of enhancement, indicating an appreciation of and a perceived benefit from the MPEG-based enhancement. The selections varied between patients and were correlated with letter contrast sensitivity, but the selections were not affected by training, experience or video category. We measured just noticeable differences (JNDs) directly for videos, and mapped the image manipulation (enhancement in our case) onto an approximately linear perceptual space. These tools and approaches will be of value in other evaluations of the image quality of motion video manipulations. PMID:18059909
YouTube videos as a source of medical information during the Ebola hemorrhagic fever epidemic.
Nagpal, Sajan Jiv Singh; Karimianpour, Ahmadreza; Mukhija, Dhruvika; Mohan, Diwakar; Brateanu, Andrei
2015-01-01
The content and quality of medical information available on video-sharing websites such as YouTube is not well known. We analyzed the source and quality of medical information about Ebola hemorrhagic fever (EHF) disseminated on YouTube and the video characteristics that influence viewer behavior. A query for the search term 'Ebola' was made on YouTube. The first 100 results were arranged in decreasing order of "relevance" using the default YouTube algorithm. Videos 1-50 and 51-100 were allocated to a high relevance (HR) and a low relevance (LR) video group, respectively. Multivariable logistic regression models were used to assess the predictors of a video being included in the HR vs. LR group. Fourteen videos were excluded because they were parodies, songs or stand-up comedy (n = 11), not in English (n = 2), or the remaining part of a previous video (n = 1). Two scales, the video information and quality index and the medical information and content index (MICI), assessed the overall quality and the medical content of the videos, respectively. There were no videos from hospitals or academic medical centers. Videos in the HR group had a higher median number of views (186,705 vs. 43,796, p < 0.001), more 'likes' (1119 vs. 224, p < 0.001), channel subscriptions (208 vs. 32, p < 0.001), and 'shares' (519 vs. 98, p < 0.001). Multivariable logistic regression showed that only the 'clinical symptoms' component of the MICI scale was associated with a higher likelihood of a video being included in the HR vs. LR group (OR 1.86, 95% CI 1.06-3.28, p = 0.03). YouTube videos presenting clinical symptoms of infectious diseases during epidemics are more likely to be included in the HR group and to influence viewers' behavior.
Objective video presentation QoE predictor for smart adaptive video streaming
NASA Astrophysics Data System (ADS)
Wang, Zhou; Zeng, Kai; Rehman, Abdul; Yeganeh, Hojatollah; Wang, Shiqi
2015-09-01
How to deliver videos to consumers over the network with optimal quality-of-experience (QoE) has been the central goal of modern video delivery services. Surprisingly, despite the large volume of videos delivered every day through systems attempting to improve visual QoE, the actual QoE of end consumers is not properly assessed, let alone used as the key factor in making critical decisions at the video hosting, network, and receiving sites. Real-world video streaming systems typically use bitrate as the main video presentation quality indicator, but using the same bitrate to encode different video content can result in drastically different visual QoE, which is further affected by the display device and viewing conditions of each individual consumer who receives the video. To correct this, we have to put QoE back in the driver's seat and redesign video delivery systems. A major challenge in achieving this goal is finding an objective video presentation QoE predictor that is accurate, fast, easy to use, display-device adaptive, and provides meaningful QoE predictions across resolutions and content. We propose the newly developed SSIMplus index (https://ece.uwaterloo.ca/~z70wang/research/ssimplus/) for this role. We demonstrate that, based on SSIMplus, one can develop a smart adaptive video streaming strategy that leads to much smoother visual QoE than is achievable with existing adaptive-bitrate video streaming approaches. Furthermore, SSIMplus finds many more applications: in live and file-based quality monitoring, in benchmarking video encoders and transcoders, and in guiding network resource allocation.
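The kind of decision such a predictor enables can be sketched as follows. The encoding ladder and its per-rung quality scores are invented placeholders (a real system would obtain per-device scores from a metric like SSIMplus); the point is that a quality target, not bitrate alone, drives the choice.

```python
# Sketch of a QoE-driven adaptive streaming decision. The quality numbers
# are made-up placeholders, not real SSIMplus output.

def pick_rung(ladder, throughput_kbps, target_quality):
    """Choose the cheapest rung that fits the throughput and meets the
    quality target; fall back gracefully when neither is possible."""
    feasible = [r for r in ladder if r["kbps"] <= throughput_kbps]
    if not feasible:
        return min(ladder, key=lambda r: r["kbps"])  # degrade gracefully
    good_enough = [r for r in feasible if r["quality"] >= target_quality]
    if good_enough:
        return min(good_enough, key=lambda r: r["kbps"])  # save bits
    return max(feasible, key=lambda r: r["quality"])

# Hypothetical encoding ladder with per-rung predicted quality (0-100).
ladder = [
    {"res": "1080p", "kbps": 4500, "quality": 92},
    {"res": "720p",  "kbps": 2500, "quality": 86},
    {"res": "480p",  "kbps": 1200, "quality": 74},
    {"res": "360p",  "kbps": 600,  "quality": 60},
]
print(pick_rung(ladder, throughput_kbps=3000, target_quality=80)["res"])  # 720p
```

With ample throughput (say 5000 kbps), a bitrate-maximizing strategy would fetch the 1080p rung, while the quality-target strategy settles for 720p at 86 points, saving bandwidth for a difference the viewer may barely notice.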
Hall, Lenwood W; Killen, William D
2006-01-01
This study was designed to assess trends in physical habitat and benthic communities (macroinvertebrates) annually in two agricultural streams (Del Puerto Creek and Salt Slough) in California's San Joaquin Valley from 2001 to 2005, determine the relationship between benthic communities and both water quality and physical habitat in both streams over the 5-year period, and compare benthic communities and physical habitat in both streams from 2001 to 2005. Physical habitat, measured with 10 metrics and a total score, was reported to be fairly stable over 5 years in Del Puerto Creek but somewhat variable in Salt Slough. Benthic communities, measured with 18 metrics, were reported to be marginally variable over time in Del Puerto Creek but fairly stable in Salt Slough. Rank correlation analysis for both water bodies combined showed that channel alteration, embeddedness, riparian buffer, and velocity/depth/diversity were the most important physical habitat metrics influencing the various benthic metrics. Correlations of water quality parameters and benthic community metrics for both water bodies combined showed that turbidity, dissolved oxygen, and conductivity were the most important water quality parameters influencing the different benthic metrics. A comparison of physical habitat metrics (including total score) for both water bodies over the 5-year period showed that habitat metrics were more positive in Del Puerto Creek than in Salt Slough. A comparison of benthic metrics in both water bodies showed that approximately one-third of the metrics were significantly different between the two water bodies. Generally, the more positive benthic metric scores were reported in Del Puerto Creek, which suggests that the communities in this creek are more robust than those in Salt Slough.
Surgical videos online: a survey of prominent sources and future trends.
Dinscore, Amanda; Andres, Amy
2010-01-01
This article determines the extent of the online availability and quality of surgical videos for the educational benefit of the surgical community. A comprehensive survey was performed that compared a number of online sites providing surgical videos according to their content, production quality, authority, audience, navigability, and other features. Methods for evaluating video content are discussed as well as possible future directions and emerging trends. Surgical videos are a valuable tool for demonstrating and teaching surgical technique and, despite room for growth in this area, advances in streaming video technology have made providing and accessing these resources easier than ever before.
Academic podcasting: quality media delivery.
Tripp, Jacob S; Duvall, Scott L; Cowan, Derek L; Kamauu, Aaron W C
2006-01-01
A video podcast of the CME-approved University of Utah Department of Biomedical Informatics seminar was created in order to address issues with streaming video quality, take advantage of popular web-based syndication methods, and make the files available for convenient, subscription-based download. An RSS feed, which is automatically generated, contains links to the media files and allows viewers to easily subscribe to the weekly seminars in a format that guarantees consistent video quality.
Lee, Yujun; Liu, Xin; Wang, Xiaoming
2015-09-30
The present study examined the relationship between prosody and semantic processing in the written form of modern Chinese by analysing behavioural data and event-related potential data. By manipulating the number of syllables of the noun in verb-object phrases, we compared the dynamic neural mechanisms of the structure bisyllabic verb (V2) + monosyllabic noun (N1) (i.e. V2+N1) with those of bisyllabic verb + bisyllabic noun (N2) (i.e. V2+N2). In Chinese, the rhythmic pattern V2+N1 is considered a metrical incongruity, whereas V2+N2 is considered a metrical congruity. For example, the verb yunshu (to transport) can be followed by liangshi (cereals); however, if yunshu is followed by liang (cereals), yunshu liang is usually considered metrically incongruous. This paper shows that (i) V2+N1 elicited more negative amplitudes than V2+N2 in the 90-170 ms and 450-500 ms windows, which indicates that metrical incongruities affect semantic processing in Chinese, and (ii) the acceptance rate for V2+N1 is significantly lower than that for V2+N2, which implies that metrical incongruities disrupt semantic processing in modern Chinese. These results are in agreement with previous studies. This is the first study to find that metrical incongruities disrupt semantic processing in Chinese, and it provides convergent evidence that metrical congruity facilitates semantic processing, whereas metrical incongruity disrupts it. Video abstract available (Supplemental digital content 1, http://links.lww.com/WNR/A340).
Understanding Acceptance of Software Metrics--A Developer Perspective
ERIC Educational Resources Information Center
Umarji, Medha
2009-01-01
Software metrics are measures of software products and processes. Metrics are widely used by software organizations to help manage projects, improve product quality and increase efficiency of the software development process. However, metrics programs tend to have a high failure rate in organizations, and developer pushback is one of the sources…
Semantic Metrics for Analysis of Software
NASA Technical Reports Server (NTRS)
Etzkorn, Letha H.; Cox, Glenn W.; Farrington, Phil; Utley, Dawn R.; Ghalston, Sampson; Stein, Cara
2005-01-01
A recently conceived suite of object-oriented software metrics focuses on semantic aspects of software, in contradistinction to traditional software metrics, which focus on syntactic aspects. Semantic metrics represent a more human-oriented view of software than do syntactic metrics. The semantic metrics of a given computer program are calculated from the output of a knowledge-based analysis of the program, and are substantially more representative of software quality and more readily comprehensible from a human perspective than are syntactic metrics.
Quality of YouTube TM videos on dental implants.
Abukaraky, A; Hamdan, A-A; Ameera, M-N; Nasief, M; Hassona, Y
2018-07-01
Patients search YouTube for health-care information. This study examined what YouTube offers patients seeking information on dental implants and evaluated the quality of the information provided. A systematic search of YouTube for videos containing information on dental implants was performed using the keywords Dental implant and Tooth replacement. Videos were examined by two senior Oral and Maxillofacial Surgery residents who were trained and calibrated to perform the search. An initial assessment was performed to exclude non-English-language videos, duplicate videos, conference lectures, and irrelevant videos. Included videos were analyzed with regard to demographics and content usefulness. Patient information available from the American Academy of Implant Dentistry, the European Association for Osseointegration, and the British Society of Restorative Dentistry was used for benchmarking. A total of 117 videos were analyzed. The most commonly discussed topics related to the procedures involved in dental implantology (76.1%, n=89) and to the indications for dental implants (58.1%, n=78). The mean usefulness score of the videos was poor (6.02 ± 4.7 [range 0-21]), and misleading content was common (30.1% of videos), mainly in topics related to the prognosis and maintenance of dental implants. Most videos (83.1%, n=97) failed to mention the source of the information presented or where to find more about dental implants. Information about dental implants on YouTube is limited in quality and quantity. YouTube videos can play a potentially important role in modulating patients' attitudes and treatment decisions regarding dental implants.
Towards the XML schema measurement based on mapping between XML and OO domain
NASA Astrophysics Data System (ADS)
Rakić, Gordana; Budimac, Zoran; Heričko, Marjan; Pušnik, Maja
2017-07-01
Measuring the quality of IT solutions is a priority in software engineering. Although numerous metrics for measuring object-oriented code already exist, measuring the quality of UML models or XML schemas is still developing. One of the research questions in the overall research guided by the ideas described in this paper is whether already-defined object-oriented design metrics can be applied to XML schemas on the basis of predefined mappings. In this paper, basic ideas for this mapping are presented. The mapping is a prerequisite for a future approach to measuring XML schema quality with object-oriented metrics.
Interactional Quality Depicted in Infant and Toddler Videos: Where Are the Interactions?
ERIC Educational Resources Information Center
Fenstermacher, Susan K.; Barr, Rachel; Brey, Elizabeth; Pempek, Tiffany A.; Ryan, Maureen; Calvert, Sandra L.; Shwery, Clay E.; Linebarger, Deborah
2010-01-01
This study examined the social-emotional content and the quality of social interactions depicted in a sample of 58 DVDs marketed towards infants and toddlers. Infant-directed videos rarely used social interactions between caregiver and child or between peers to present content. Even when videos explicitly targeted social-emotional content,…
Is perception of quality more important than technical quality in patient video cases?
Roland, Damian; Matheson, David; Taub, Nick; Coats, Tim; Lakhanpaul, Monica
2015-08-13
The use of video cases to demonstrate key signs and symptoms in patients (patient video cases, or PVCs) is a rapidly expanding field. The aims of this study were to evaluate whether the technical quality of a video clip, or the judgement of its quality, influences a paediatrician's judgement of the acuity of the case, and to assess the relationship between perceived quality and the technical quality of a selection of video clips. Participants (12 senior consultant paediatricians attending an examination workshop) individually categorised 28 PVCs into one of three possible acuities and then described the quality of the image seen. The PVCs had been converted into four different technical qualities (differing bit rates ranging from excellent to low quality). Participants' assessments of quality and the actual industry standard of the PVC were independent (333 distinct observations, Spearman's rho = 0.0410, p = 0.4564). Agreement between actual acuity and participants' judgement was generally good at higher acuities but moderate at medium/low acuities of illness (overall correlation 0.664). Perception of the quality of the clip was related to correct assignment of acuity regardless of the technical quality of the clip (number of obs = 330, z = 2.07, p = 0.038). It is important to benchmark PVCs prior to use in learning resources, as experts may not agree on the information within, or the quality of, a clip. It appears that, although PVCs may be beneficial in a pedagogical context, the perceived quality of a clip may be an important determinant of an expert's decision making.
Subjective Quality Assessment of Underwater Video for Scientific Applications
Moreno-Roldán, José-Miguel; Luque-Nieto, Miguel-Ángel; Poncela, Javier; Díaz-del-Río, Víctor; Otero, Pablo
2015-01-01
Underwater video services could be a key application in improving scientific knowledge of the vast oceanic resources of our planet. However, limitations in the capacity of currently available technology for underwater wireless sensor networks (UWSNs) raise the question of the feasibility of these services. When transmitting video, the main constraints are the limited bandwidth and the high propagation delays. At the same time, service performance depends on the needs of the target group. This paper considers the problem of estimating the Mean Opinion Score (a standard quality measure) in UWSNs using objective methods and addresses the topic of quality assessment in potential underwater video services from a subjective point of view. The experimental design and the results of a test planned according to standardized psychometric methods are presented. The subjects used in the quality assessment test were ocean scientists. Video sequences were recorded during actual exploration expeditions and were processed to simulate conditions similar to those that might be found in UWSNs. Our experimental results show that videos are considered useful for scientific purposes even at very low bitrates. PMID:26694400
Hashimoto, Daniel A; Phitayakorn, Roy; Fernandez-del Castillo, Carlos; Meireles, Ozanan
2016-01-01
The goal of telementoring is to recreate face-to-face encounters with a digital presence. Open-surgery telementoring is limited by the lack of surgeon's point-of-view cameras. Google Glass is a wearable computer that looks like a pair of glasses but is equipped with wireless connectivity, a camera, and a viewing screen for video conferencing. This study aimed to assess the safety of using Google Glass by assessing the video quality of a telementoring session. Thirty-four (n = 34) surgeons at a single institution were surveyed, blindly comparing video captured with Google Glass versus an Apple iPhone 5 during the open cholecystectomy portion of a Whipple procedure. Surgeons were asked to evaluate the quality of the video and its adequacy for safe use in telementoring. Thirty-four of 107 invited surgical attendings (32%) responded to the anonymous survey. A total of 50% rated the Google Glass video as fair, with the other 50% rating it as bad to poor. A total of 52.9% of respondents rated the Apple iPhone video as good. A significantly greater proportion of respondents felt that the Google Glass video quality was inadequate for telementoring compared with the Apple iPhone's (82.4 vs. 26.5%, p < 0.0001). The intraclass correlation coefficient was 0.924 (95% CI 0.660-0.999, p < 0.001). While Google Glass provides a great breadth of functionality as a wearable device with two-way communication capabilities, current hardware limitations prevent its use as a telementoring device in surgery, as the video quality is inadequate for safe telementoring. As the device is still in the initial phases of development, future iterations or competitor devices may provide a better telementoring application for wearable devices.
Quality of Information Approach to Improving Source Selection in Tactical Networks
2017-02-01
consider the performance of this process based on metrics relating to quality of information: accuracy, timeliness, completeness and reliability. These...that are indicators that the network is meeting these quality requirements. We study effective data rate, social distance, link integrity and the...utility of information as metrics within a multi-genre network to determine the quality of information of its available sources. This paper proposes a
YouTube and food allergy: An appraisal of the educational quality of information.
Reddy, Keerthi; Kearns, Mary; Alvarez-Arango, Santiago; Carrillo-Martin, Ismael; Cuervo-Pardo, Nathaly; Cuervo-Pardo, Lyda; Dimov, Ves; Lang, David M; Lopez-Alvarez, Sonia; Schroer, Brian; Mohan, Kaushik; Dula, Mark; Zheng, Simin; Kozinetz, Claudia; Gonzalez-Estrada, Alexei
2018-06-01
Food allergy affects an estimated 8% of children and 3% of adults in the United States. Food-allergic individuals increasingly use the web for medical information. We sought to determine the educational quality of food allergy YouTube videos. We performed a YouTube search using the keywords "food allergy" and "food allergies". The 300 most-viewed videos were included and analyzed for characteristics, source, and content. Source was further classified as healthcare provider, alternative medicine provider, patient, company, media, or professional society. A scoring system (FA-DQS) was created to evaluate quality (-10 to +34 points), with negative points assigned for misleading information. Eight reviewers scored each video independently. Three hundred videos were analyzed, with a median of 6351.50 views, 19 likes, and 1 dislike. More video presenters were female (54.3%). The most common video source was an alternative medicine provider (26.3%). Alternative treatments included the following: water fasting, juicing, Ayurveda, apple cider, yoga, visualization, and sea moss. Controversial diagnostics included kinesiology, IgG testing, and the pulse test. Almost half of the videos depicted a non-IgE-mediated reaction (49.0%). Videos by professional societies had the highest FA-DQS (7.27), and their scores were significantly different from those of other sources (P < .001). There was a high degree of agreement among reviewers (ICC = 0.820; P < .001). YouTube videos on food allergy frequently recommend controversial diagnostics and commonly depict non-IgE-mediated reactions. There is a need for high-quality, evidence-based educational videos on food allergy. © 2018 EAACI and John Wiley and Sons A/S. Published by John Wiley and Sons Ltd.
Person re-identification over camera networks using multi-task distance metric learning.
Ma, Lianyang; Yang, Xiaokang; Tao, Dacheng
2014-08-01
Person re-identification in a camera network is a valuable yet challenging problem. Existing methods learn a common Mahalanobis distance metric using data collected from different cameras and then exploit the learned metric to identify people in images. However, the cameras in a network have different settings, and the recorded images are seriously affected by variability in illumination conditions, camera viewing angles, and background clutter. Using a common metric for re-identification tasks on different camera pairs overlooks these differences in camera settings; at the same time, it is very time-consuming to label people manually in images from surveillance videos. For example, in most existing person re-identification data sets, only one image of a person is collected from each of only two cameras; therefore, directly learning a unique Mahalanobis distance metric for each camera pair is susceptible to over-fitting on insufficiently labeled data. In this paper, we reformulate person re-identification in a camera network as a multi-task distance metric learning problem. The proposed method designs multiple Mahalanobis distance metrics to cope with the complicated conditions that exist in typical camera networks. These Mahalanobis distance metrics are different but related, and they are learned with a joint regularization that alleviates over-fitting. Furthermore, by extending this formulation, we present a novel multi-task maximally collapsing metric learning (MtMCML) model for person re-identification in a camera network. Experimental results demonstrate that formulating person re-identification over camera networks as a multi-task distance metric learning problem improves performance, and our proposed MtMCML works substantially better than other current state-of-the-art person re-identification methods.
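A minimal sketch of the shared-plus-specific structure that multi-task metric learning exploits follows. The actual MtMCML objective, training data, and optimization live in the paper; the matrices here are random stand-ins, projected to be positive semi-definite so each defines a valid Mahalanobis metric.

```python
# Sketch: per-camera-pair Mahalanobis metrics composed of a shared matrix
# plus a pair-specific matrix. All matrices are random illustrations.
import numpy as np

def make_psd(a):
    """Symmetrize and clip negative eigenvalues so M is a valid metric matrix."""
    a = (a + a.T) / 2
    vals, vecs = np.linalg.eigh(a)
    return vecs @ np.diag(np.clip(vals, 0, None)) @ vecs.T

def mahalanobis(x, y, m):
    """Squared Mahalanobis distance (x - y)^T M (x - y)."""
    d = x - y
    return float(d @ m @ d)

rng = np.random.default_rng(0)
dim = 4
m_shared = make_psd(rng.standard_normal((dim, dim)))      # common structure
m_pair = {p: make_psd(rng.standard_normal((dim, dim)))    # pair-specific parts
          for p in ["cam1-cam2", "cam1-cam3"]}

x, y = rng.standard_normal(dim), rng.standard_normal(dim)
for pair, m_extra in m_pair.items():
    d = mahalanobis(x, y, m_shared + m_extra)  # joint metric for this pair
    print(pair, round(d, 3))
```

The shared matrix is where joint regularization would act during training, tying the pair-specific metrics together so each pair need not be learned from its own scarce labels alone.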
Spread spectrum image watermarking based on perceptual quality metric.
Zhang, Fan; Liu, Wenyu; Lin, Weisi; Ngan, King Ngi
2011-11-01
Efficient image watermarking calls for full exploitation of the perceptual distortion constraint. Second-order statistics of visual stimuli are regarded as critical features for perception. This paper proposes a second-order statistics (SOS)-based image quality metric, which considers the texture masking effect and the contrast sensitivity in the Karhunen-Loève transform domain. Compared with state-of-the-art metrics, the quality predictions of SOS correlate better with several subjectively rated image databases in which the images are impaired by typical coding and watermarking artifacts. With this explicit metric definition, spread spectrum watermarking is posed as an optimization problem: we search for the watermark that minimizes the distortion of the watermarked image and maximizes the correlation between the watermark pattern and the spread spectrum carrier. The simplicity of the metric gives the optimal watermark a closed-form solution and a fast implementation. Experiments show that the proposed watermarking scheme takes full advantage of the distortion constraint and improves robustness in return.
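The closed-form flavor of such a constrained problem can be sketched with a toy stand-in for the perceptual constraint: maximize the correlation w·c with the carrier c subject to a weighted energy budget sum(d_i * w_i^2) <= D. Lagrange multipliers give w_i proportional to c_i / d_i, scaled so the budget is spent exactly. The weights d below are arbitrary placeholders, not the paper's SOS metric.

```python
# Sketch: closed-form solution of max w.c subject to sum(d_i * w_i**2) <= D.
# Setting the gradient of the Lagrangian to zero gives w_i = c_i / (2*lam*d_i),
# i.e. w is proportional to c/d; the scale is fixed by spending the budget.
import numpy as np

def optimal_watermark(c, d, budget):
    w = c / d                                  # direction from the Lagrangian
    scale = np.sqrt(budget / np.sum(d * w**2)) # make the constraint tight
    return scale * w

rng = np.random.default_rng(1)
c = rng.choice([-1.0, 1.0], size=8)        # spread-spectrum carrier (toy)
d = rng.uniform(0.5, 2.0, size=8)          # per-coefficient masking weights (toy)
w = optimal_watermark(c, d, budget=1.0)
print(round(float(np.sum(d * w**2)), 6))   # distortion exactly at budget: 1.0
```

Note how the solution puts more watermark energy where the masking weight d_i is small, i.e. where distortion is perceptually cheap, which is exactly the "full exploitation of the distortion constraint" the abstract describes.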
Urch, Ekaterina; Taylor, Samuel A; Cody, Elizabeth; Fabricant, Peter D; Burket, Jayme C; O'Brien, Stephen J; Dines, David M; Dines, Joshua S
2016-10-01
The internet has an increasing role in both patient and physician education. While several recent studies critically appraised the quality and accuracy of web-based written information available to patients, no studies have evaluated such parameters for open-access video content designed for provider use. The primary goal of the study was to determine the accuracy of internet-based instructional videos featuring the shoulder physical examination. An assessment of quality and accuracy of said video content was performed using the basic shoulder examination as a surrogate for the "best-case scenario" due to its widely accepted components that are stable over time. Three search terms ("shoulder," "examination," and "shoulder exam") were entered into the four online video resources most commonly accessed by orthopaedic surgery residents (VuMedi, G9MD, Orthobullets, and YouTube). Videos were captured and independently reviewed by three orthopaedic surgeons. Quality and accuracy were assessed in accordance with previously published standards. Of the 39 video tutorials reviewed, 61% were rated as fair or poor. Specific maneuvers such as the Hawkins test, O'Brien sign, and Neer impingement test were accurately demonstrated in 50, 36, and 27% of videos, respectively. Inter-rater reliability was excellent (mean kappa 0.80, range 0.79-0.81). Our results suggest that information presented in open-access video tutorials featuring the physical examination of the shoulder is inconsistent. Trainee exposure to such potentially inaccurate information may have a significant impact on trainee education.
Bunch, K J; Allin, B; Jolly, M; Hardie, T; Knight, M
2018-05-16
To develop a core metric set to monitor the quality of maternity care. Delphi process followed by a face-to-face consensus meeting. English maternity units. Three representative expert panels: service designers, providers and users. Maternity care metrics judged important by participants. Participants were asked to complete a two-phase Delphi process, scoring metrics from existing local maternity dashboards. A consensus meeting discussed the results and re-scored the metrics. In all, 125 distinct metrics across six domains were identified from existing dashboards. Following the consensus meeting, 14 metrics met the inclusion criteria for the final core set: smoking rate at booking; rate of birth without intervention; caesarean section delivery rate in Robson group 1 women; caesarean section delivery rate in Robson group 2 women; caesarean section delivery rate in Robson group 5 women; third- and fourth-degree tear rate among women delivering vaginally; rate of postpartum haemorrhage of ≥1500 ml; rate of successful vaginal birth after a single previous caesarean section; smoking rate at delivery; proportion of babies born at term with an Apgar score <7 at 5 minutes; proportion of babies born at term admitted to the neonatal intensive care unit; proportion of babies readmitted to hospital at <30 days of age; breastfeeding initiation rate; and breastfeeding rate at 6-8 weeks. Core outcome set methodology can be used to incorporate the views of key stakeholders in developing a core metric set to monitor the quality of care in maternity units, thus enabling improvement. Achieving consensus on core metrics for monitoring the quality of maternity care. © 2018 The Authors. BJOG: An International Journal of Obstetrics and Gynaecology published by John Wiley & Sons Ltd on behalf of Royal College of Obstetricians and Gynaecologists.
Seismic Data Archive Quality Assurance -- Analytics Adding Value at Scale
NASA Astrophysics Data System (ADS)
Casey, R. E.; Ahern, T. K.; Sharer, G.; Templeton, M. E.; Weertman, B.; Keyson, L.
2015-12-01
Since the emergence of real-time delivery of seismic data over the last two decades, solutions for near-real-time quality analysis and station monitoring have been developed by data producers and data stewards. This has allowed for a nearly constant awareness of the quality of the incoming data and the general health of the instrumentation around the time of data capture. Modern quality assurance systems are evolving to provide ready access to a large variety of metrics, a rich and self-correcting history of measurements, and, more importantly, the ability to access these quality measurements en masse through a programmatic interface. The MUSTANG project at the IRIS Data Management Center is working to achieve 'total archival data quality', where a large number of standardized metrics, some computationally expensive, are generated and stored for all data from decades past to the near present. To perform this on a 300 TB archive of compressed time series requires considerable resources in network I/O, disk storage, and CPU capacity to achieve scalability, not to mention the technical expertise to develop and maintain it. In addition, staff scientists are necessary to develop the system metrics and employ them to produce comprehensive and timely data quality reports that assist seismic network operators in maintaining their instrumentation. All of these metrics must be available to the scientist 24/7. We will present an overview of the MUSTANG architecture, including the development of its standardized metrics code in R. We will show examples of the metric values that we make publicly available to scientists and educators, and show how we are sharing the algorithms used. We will also discuss the development of a capability that will enable scientific researchers to specify data quality constraints on their requests for data, so that they receive only the data best suited to their area of study.
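Programmatic access of the kind described might look like the sketch below. The IRIS MUSTANG measurements web service is real, but the exact endpoint path, parameter names, and metric name used here are assumptions for illustration; consult the service documentation before relying on them.

```python
# Sketch: build a query URL for a per-channel quality metric over a time
# window. Endpoint, parameter names, and metric name are assumptions.
from urllib.parse import urlencode

def mustang_query(metric, net, sta, chan, start, end, fmt="text"):
    """Assemble a measurements-service query URL (hypothetical parameters)."""
    base = "https://service.iris.edu/mustang/measurements/1/query"
    params = {"metric": metric, "net": net, "sta": sta, "chan": chan,
              "timewindow": f"{start},{end}", "format": fmt}
    return base + "?" + urlencode(params)

url = mustang_query("percent_availability", "IU", "ANMO", "BHZ",
                    "2015-01-01", "2015-02-01")
print(url)
```

Fetching the URL (e.g. with `urllib.request.urlopen`) would return the metric values as text in this sketch's assumed format; the point is that archive-wide quality measurements become one HTTP request away.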
Breaking the news on mobile TV: user requirements of a popular mobile content
NASA Astrophysics Data System (ADS)
Knoche, Hendrik O.; Sasse, M. Angela
2006-02-01
This paper presents the results of three lab-based studies that investigated different ways of delivering mobile TV news by measuring user responses to different encoding bitrates, image resolutions and text qualities. All studies were carried out with participants watching news content on mobile devices, with a total of 216 participants rating the acceptability of the viewing experience. Study 1 compared the acceptability of a 15-second video clip at different video and audio encoding bit rates on a 3G phone at a resolution of 176x144 and on an iPAQ PDA (240x180). Study 2 measured the acceptability of the video quality of full-length news clips of 2.5 minutes, recorded from broadcast TV, encoded at resolutions ranging from 120x90 to 240x180, and combined with different encoding bit rates and audio qualities, presented on an iPAQ. Study 3 improved the legibility of the text included in the video by simulating separate text delivery. The acceptability of the news videos' quality was greatly reduced at a resolution of 120x90. The legibility of text was a decisive factor in the participants' assessment of video quality. Resolutions of 168x126 and higher were substantially more acceptable when accompanied by optimized high-quality text rather than proportionally scaled inline text. When accompanied by high-quality text, TV news clips were acceptable to the vast majority of participants at resolutions as small as 168x126 for video encoding bitrates of 160 kbps and higher. Service designers and operators can apply this knowledge to design a cost-effective mobile TV experience.
Cone-Beam CT with a Flat-Panel Detector: From Image Science to Image-Guided Surgery
Siewerdsen, Jeffrey H.
2011-01-01
The development of large-area flat-panel x-ray detectors (FPDs) has spurred investigation in a spectrum of advanced medical imaging applications, including tomosynthesis and cone-beam CT (CBCT). Recent research has extended image quality metrics and theoretical models to such applications, providing a quantitative foundation for the assessment of imaging performance as well as a general framework for the design, optimization, and translation of such technologies to new applications. For example, cascaded systems models of Fourier domain metrics, such as noise-equivalent quanta (NEQ), have been extended to these modalities to describe the propagation of signal and noise through the image acquisition and reconstruction chain and to quantify the factors that govern spatial resolution, image noise, and detectability. Moreover, such models have demonstrated basic agreement with human observer performance for a broad range of imaging conditions and imaging tasks. These developments in image science have formed a foundation for the knowledgeable development and translation of CBCT to new applications in image-guided interventions - for example, CBCT implemented on a mobile surgical C-arm for intraoperative 3D imaging. The ability to acquire high-quality 3D images on demand during surgical intervention overcomes conventional limitations of surgical guidance in the context of preoperative images alone. A prototype mobile C-arm developed in academic-industry partnership demonstrates CBCT with low radiation dose, sub-mm spatial resolution, and soft-tissue visibility potentially approaching that of diagnostic CT. Integration of the 3D imaging system with real-time tracking, deformable registration, endoscopic video, and 3D visualization offers a promising addition to the surgical arsenal in interventions ranging from head-and-neck / skull base surgery to spine, orthopaedic, thoracic, and abdominal surgeries. 
Cadaver studies show the potential for significant boosts in surgical performance under CBCT guidance, and early clinical trials demonstrate feasibility, workflow, and image quality within the surgical theatre. PMID:22942510
Comparison of macroinvertebrate-derived stream quality metrics between snag and riffle habitats
Stepenuck, K.F.; Crunkilton, R.L.; Bozek, Michael A.; Wang, L.
2008-01-01
We compared benthic macroinvertebrate assemblage structure at snag and riffle habitats in 43 Wisconsin streams across a range of watershed urbanization using a variety of stream quality metrics. Discriminant analysis indicated that dominant taxa at riffles and snags differed: hydropsychid caddisflies (Hydropsyche betteni and Cheumatopsyche spp.) and elmid beetles (Optioservus spp. and Stenelmis spp.) typified riffles, whereas isopods (Asellus intermedius) and amphipods (Hyalella azteca and Gammarus pseudolimnaeus) predominated in snags. Analysis of covariance indicated that samples from snag and riffle habitats differed significantly in their response to the urbanization gradient for the Hilsenhoff biotic index (BI), Shannon's diversity index, and percent of filterers, shredders, and pollution-intolerant Ephemeroptera, Plecoptera, and Trichoptera (EPT) at each stream site (p ≤ 0.10). These differences suggest that although macroinvertebrate assemblages in either habitat type are sensitive to the effects of urbanization, metrics derived from different habitats should not be intermixed when assessing stream quality through biomonitoring. This can be a limitation for resource managers who wish to compare water quality among streams where the same habitat type is not available at all stream locations, or where a specific habitat type (i.e., a riffle) is required to determine a metric value (i.e., BI). To account for differences in stream quality at sites lacking riffle habitat, snag-derived metric values can be adjusted based on those obtained from riffles exposed to the same level of urbanization. Comparison of nonlinear regression equations relating stream quality metric values from the two habitat types to percent watershed urbanization indicated that, on average, snag habitats had percent-EPT values 30.2 points lower, a lower diversity index value, and a BI value 0.29 greater than riffles.
© 2008 American Water Resources Association.
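The Hilsenhoff biotic index referred to above is the abundance-weighted mean of published taxon tolerance values (0 to 10, higher meaning more pollution tolerant). A minimal sketch, with made-up counts and tolerance values:

```python
def hilsenhoff_bi(counts_and_tolerances):
    """Hilsenhoff biotic index: abundance-weighted mean tolerance value.
    counts_and_tolerances: iterable of (individuals counted, tolerance 0-10)
    per taxon. Higher BI indicates a more pollution-tolerant assemblage."""
    total = sum(n for n, _ in counts_and_tolerances)
    return sum(n * t for n, t in counts_and_tolerances) / total

# Hypothetical sample of three taxa.
sample = [(50, 4.0), (30, 6.0), (20, 8.0)]
print(hilsenhoff_bi(sample))  # 5.4
```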
The accurate assessment of small-angle X-ray scattering data
Grant, Thomas D.; Luft, Joseph R.; Carter, Lester G.; ...
2015-01-23
Small-angle X-ray scattering (SAXS) has grown in popularity in recent times with the advent of bright synchrotron X-ray sources, powerful computational resources and algorithms enabling the calculation of increasingly complex models. However, the lack of standardized data-quality metrics presents difficulties for the growing user community in accurately assessing the quality of experimental SAXS data. Here, a series of metrics to quantitatively describe SAXS data in an objective manner using statistical evaluations are defined. These metrics are applied to identify the effects of radiation damage, concentration dependence and interparticle interactions on SAXS data from a set of 27 previously described targets for which high-resolution structures have been determined via X-ray crystallography or nuclear magnetic resonance (NMR) spectroscopy. Studies show that these metrics are sufficient to characterize SAXS data quality on a small sample set with statistical rigor and sensitivity similar to or better than manual analysis. The development of data-quality analysis strategies such as these initial efforts is needed to enable the accurate and unbiased assessment of SAXS data quality.
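A chi-squared style comparison is one of the statistical evaluations commonly used to judge agreement between scattering curves. The sketch below is a generic illustration with toy data (and it normalizes by the number of points rather than degrees of freedom), not the paper's specific metric set:

```python
def reduced_chi_squared(i_obs, i_model, sigma):
    """Reduced chi-squared agreement between observed and model intensities;
    values near 1 indicate agreement within the stated uncertainties.
    (Sketch: divides by the number of points, not degrees of freedom.)"""
    chi2 = sum(((o - m) / s) ** 2 for o, m, s in zip(i_obs, i_model, sigma))
    return chi2 / len(i_obs)

# Toy curves: observed intensity, model intensity, per-point uncertainty.
i_obs = [100.0, 50.0, 25.0, 12.0]
i_model = [98.0, 51.0, 25.0, 13.0]
sigma = [2.0, 1.0, 1.0, 1.0]
print(reduced_chi_squared(i_obs, i_model, sigma))  # 0.75
```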
Misinformation is prevalent in psoriasis-related YouTube videos.
Qi, J; Trang, T; Doong, J; Kang, S; Chien, A L
2016-11-15
Background: Psoriasis patients seek information online, but little is known about their interaction with YouTube. We examined the quality of content in psoriasis-related YouTube videos and investigated their interactions with viewers. Methods: YouTube was searched using the term "psoriasis." Relevant videos in English were independently categorized by two reviewers as useful, misleading, or patient view (regarding experience with psoriasis). Disagreements were settled by a third reviewer. Videos were rated on a Global Quality Scale (GQS) (1 = poor, 5 = excellent). Results: According to our reviewers, 17% of the 47 videos were useful, 21% were misleading, and 62% represented patient views. Mean GQS scores were 4.2 ± 1.3 for useful videos, 1.7 ± 0.7 for misleading videos, and 2.2 ± 1.1 for patient view videos (p < 0.001). Video views per day did not differ among the categories (p = 0.65), whereas useful videos had the fewest "Likes" (useful: 31 ± 55, misleading: 151 ± 218, patient views: 165 ± 325, p = 0.06) and comments (useful: 9.8 ± 18.3, misleading: 64.1 ± 89.7, patient views: 124.9 ± 199.4, p = 0.009). Conclusions: Useful videos were highest in quality but had viewership similar to misleading and patient view videos, with lower popularity and user engagement than the other categories. Physicians and psoriasis patients should be aware of this pattern when approaching YouTube as a resource.
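Group summaries like the GQS means above can be reproduced with a few lines of standard-library Python; the per-video ratings below are invented for illustration:

```python
from statistics import mean, stdev

# Hypothetical per-video GQS ratings (1 = poor ... 5 = excellent) by category.
ratings = {
    "useful":       [5, 5, 4, 3, 4],
    "misleading":   [2, 1, 2, 2, 1],
    "patient view": [3, 2, 2, 3, 1],
}

# Report each category as mean ± sample standard deviation.
for category, scores in ratings.items():
    print(f"{category}: {mean(scores):.1f} ± {stdev(scores):.1f}")
```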
Villafañe, Jorge Hugo; Cantero-Tellez, Raquel; Valdes, Kristin; Usuelli, Federico Giuseppe; Berjano, Pedro
2017-09-01
Conservative treatments are commonly performed therapeutic interventions for the management of carpometacarpal (CMC) joint osteoarthritis (OA). Physical and occupational therapies are starting to use video-based online content both as a patient teaching tool and as a source of treatment techniques. YouTube is a popular video-sharing website that can be accessed easily. The purpose of this study was to analyze the quality of content and potential sources of bias in YouTube videos pertaining to thumb exercises for CMC OA. The YouTube video database was systematically searched using the search term "thumb osteoarthritis and exercises" from its inception to March 10, 2017. Authors independently selected videos, conducted quality assessment, and extracted results. A total of 832 videos were found using the keywords. Of these, 10 videos clearly demonstrated therapeutic exercise for the management of CMC OA. In addition, the top-ranked video found by performing a search by "views" was a video with more than 121,863 views, uploaded in 2015, that lasted 12.33 minutes and scored only 2 points on the Global Score for Educational Value rating scale. Most of the reviewed videos describing conservative interventions for CMC OA management have a low level of evidence to support their use. Although patients and novice hand therapists are using YouTube and other online resources, videos produced by expert hand therapists are scarce.
Information Operations & Security
2012-03-05
Fred B. Schneider, Cornell: The Promise of Security Metrics. Users: purchasing decisions (which system is the better value?). Builders: … Engineering, University of Maryland, College Park. DISTRIBUTION A: Approved for public release; distribution is unlimited. Digital multimedia anti-…: fingerprints for multimedia content: determine the time and place of recordings; detect tampering in the multimedia content; bind video and …
Hamman, William R; Beaubien, Jeffrey M; Beaudin-Seiler, Beth M
2009-12-01
The aims of this research are to begin to understand health care teams in their operational environment, establish metrics of performance for these teams, and validate a series of simulation scenarios that elicit team and technical skills. The focus is on defining the team model that will function in the operational environment in which health care professionals work. Simulations were performed across the United States in 70- to 1000-bed hospitals. Multidisciplinary health care teams analyzed more than 300 hours of video of health care professionals performing simulations of team-based medical care in several different disciplines. Raters were trained to enhance inter-rater reliability. The study validated event sets that trigger team dynamics and established metrics for team-based care. Team skills were identified and modified using simulation scenarios that employed the event-set design process. Specific skills (technical and team) were identified by criticality measurement and task-analysis methodology. In situ simulation, which includes a purposeful, Socratic method of debriefing, is a powerful intervention that can overcome inertia in clinician behavior and in latent environmental systems that challenge quality and patient safety. In situ simulation can increase awareness of risks, personalize the risks, and encourage the reflection, effort, and attention needed to change both behaviors and systems.
State of the art metrics for aspect oriented programming
NASA Astrophysics Data System (ADS)
Ghareb, Mazen Ismaeel; Allen, Gary
2018-04-01
The quality evaluation of software, e.g., defect measurement, gains significance with the growing use of software applications. Metrics are regarded as primary indicators for defect prediction and software maintenance in various empirical studies of software products. However, there is no agreement on which metrics are compelling quality indicators for novel development approaches such as Aspect Oriented Programming (AOP). AOP aims to enhance software quality by providing new constructs for the development of systems, for example, pointcuts, advice, and inter-type relationships. Hence, it is not evident whether quality indicators for AOP can be derived as direct extensions of traditional OO measurements. On the other hand, investigations of AOP regularly depend on established coupling measurements; despite the late adoption of AOP in empirical studies, coupling measurements have proved useful markers of fault proneness in this context. In this paper we investigate the state-of-the-art metrics for measurement of Aspect Oriented systems development.
NASA Astrophysics Data System (ADS)
Yang, Zhongming; Dou, Jiantai; Du, Jinyu; Gao, Zhishan
2018-03-01
Non-null interferometry can be used to measure radius of curvature (ROC); we previously presented a virtual quadratic Newton rings phase-shifting moiré-fringe method for large-ROC measurement (Yang et al., 2016). In this paper, we propose a large-ROC measurement method based on evaluation of an interferogram-quality metric in a non-null interferometer. With a multi-configuration model of the non-null interferometric system in ZEMAX, the retrace errors and the phase introduced by the test surface are reconstructed. The interferogram-quality metric is obtained from the normalized phase-shifted testing Newton rings with the spherical-surface model in the non-null interferometric system. The radius of curvature of the test spherical surface is obtained when the minimum of the interferogram-quality metric is found. Simulations and experimental results verified the feasibility of the proposed method. For a spherical mirror with a ROC of 41,400 mm, the measurement accuracy is better than 0.13%.
Wiele, Stephen M.; Brasher, Anne M.D.; Miller, Matthew P.; May, Jason T.; Carpenter, Kurt D.
2012-01-01
The U.S. Geological Survey's National Water-Quality Assessment (NAWQA) Program was established by Congress in 1991 to collect long-term, nationally consistent information on the quality of the Nation's streams and groundwater. The NAWQA Program utilizes interdisciplinary and dynamic studies that link the chemical and physical conditions of streams (such as flow and habitat) with ecosystem health and the biologic condition of algae, aquatic invertebrates, and fish communities. This report presents metrics derived from NAWQA data and the U.S. Geological Survey streamgaging network for sampling sites in the Western United States, as well as associated chemical, habitat, and streamflow properties. The metrics characterize the conditions of algae, aquatic invertebrates, and fish. In addition, we have compiled climate records and basin characteristics related to the NAWQA sampling sites. The calculated metrics and compiled data can be used to analyze ecohydrologic trends over time.
Hudali, Tamer; Papireddy, Muralidhar; Bhattarai, Mukul; Deckard, Alan; Hingle, Susan
2017-01-10
Hospital medicine is a relatively new specialty field, dedicated to the delivery of comprehensive medical care to hospitalized patients. YouTube is one of the most frequently used websites, offering access to a gamut of videos from self-produced to professionally made. The aim of our study was to determine the adequacy of YouTube as an effective means to define and depict the role of hospitalists. YouTube was searched on November 17, 2014, using the following search words: "hospitalist," "hospitalist definition," "what is the role of a hospitalist," "define hospitalist," and "who is a hospitalist." Videos found only in the first 10 pages of each search were included. Non-English, noneducational, and nonrelevant videos were excluded. A novel 7-point scoring tool was created by the authors based on the definition of a hospitalist adopted by the Society of Hospital Medicine. Three independent reviewers evaluated, scored, and classified the videos into high, intermediate, and low quality based on the average score. A total of 102 videos out of 855 were identified as relevant and included in the analysis. Videos uploaded by academic institutions had the highest mean score. Only 6 videos were classified as high quality, 53 as intermediate quality, and 42 as low quality, with 82.4% (84/102) of the videos scoring an average of 4 or less. Most videos found in the search of a hospitalist definition are inadequate. Leading medical organizations and academic institutions should consider producing and uploading quality videos to YouTube to help patients and their families better understand the roles and definition of the hospitalist. ©Tamer Hudali, Muralidhar Papireddy, Mukul Bhattarai, Alan Deckard, Susan Hingle. Originally published in the Interactive Journal of Medical Research (http://www.i-jmr.org/), 10.01.2017.
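The classification step described (averaging three reviewers' 7-point scores and binning videos into high, intermediate, and low quality) can be sketched as follows; the cutoff values here are illustrative assumptions, since the abstract does not state them:

```python
def classify_video(scores, high_cutoff=5.0, low_cutoff=3.0):
    """Average several reviewers' 7-point scores and bin the video.
    The cutoffs are illustrative assumptions, not the study's values."""
    avg = sum(scores) / len(scores)
    if avg >= high_cutoff:
        return "high"
    if avg > low_cutoff:
        return "intermediate"
    return "low"

print(classify_video([6, 7, 6]))  # high
print(classify_video([4, 4, 3]))  # intermediate
print(classify_video([2, 3, 1]))  # low
```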
Day, Suzanne; Mason, Robin; Tannenbaum, Cara; Rochon, Paula A
2017-01-01
Integrating sex and gender in health research is essential to produce the best possible evidence to inform health care. Comprehensive integration of sex and gender requires considering these variables from the very beginning of the research process, starting at the proposal stage. To promote excellence in sex and gender integration, we have developed a set of metrics to assess the quality of sex and gender integration in research proposals. These metrics are designed to assist both researchers in developing proposals and reviewers in making funding decisions. We developed this tool through an iterative three-stage method involving 1) review of existing sex and gender integration resources and initial metrics design, 2) expert review and feedback via anonymous online survey (Likert scale and open-ended questions), and 3) analysis of feedback data and collective revision of the metrics. We received feedback on the initial metrics draft from 20 reviewers with expertise in conducting sex- and/or gender-based health research. The majority of reviewers responded positively to questions regarding the utility, clarity and completeness of the metrics, and all reviewers provided responses to open-ended questions about suggestions for improvements. Coding and analysis of responses identified three domains for improvement: clarifying terminology, refining content, and broadening applicability. Based on this analysis we revised the metrics into the Essential Metrics for Assessing Sex and Gender Integration in Health Research Proposals Involving Human Participants, which outlines criteria for excellence within each proposal component and provides illustrative examples to support implementation. By enhancing the quality of sex and gender integration in proposals, the metrics will help to foster comprehensive, meaningful integration of sex and gender throughout each stage of the research process, resulting in better quality evidence to inform health care for all.
The Educational Efficacy of Distinct Information Delivery Systems in Modified Video Games
ERIC Educational Resources Information Center
Moshirnia, Andrew; Israel, Maya
2010-01-01
Despite the increasing popularity of many commercial video games, this popularity is not shared by educational video games. Modified video games, however, can bridge the gap in quality between commercial and education video games by embedding educational content into popular commercial video games. This study examined how different information…
Multivariate Analyses of Quality Metrics for Crystal Structures in the PDB Archive.
Shao, Chenghua; Yang, Huanwang; Westbrook, John D; Young, Jasmine Y; Zardecki, Christine; Burley, Stephen K
2017-03-07
Following deployment of an augmented validation system by the Worldwide Protein Data Bank (wwPDB) partnership, the quality of crystal structures entering the PDB has improved. Of significance are improvements in quality measures now prominently displayed in the wwPDB validation report. Comparisons of PDB depositions made before and after introduction of the new reporting system show improvements in quality measures relating to pairwise atom-atom clashes, side-chain torsion angle rotamers, and local agreement between the atomic coordinate structure model and experimental electron density data. These improvements are largely independent of resolution limit and sample molecular weight. No significant improvement in the quality of associated ligands was observed. Principal component analysis revealed that structure quality could be summarized with three measures (Rfree, real-space R factor Z score, and a combined molecular geometry quality metric), which can in turn be reduced to a single overall quality metric readily interpretable by all PDB archive users. Copyright © 2017 Elsevier Ltd. All rights reserved.
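Principal component analysis of the kind used above to collapse several quality measures into one overall score rests on the leading eigenvector of the covariance matrix. A self-contained sketch with toy data (not the PDB metrics themselves), using power iteration:

```python
def first_pc(rows):
    """Leading principal component (unit eigenvector of the sample covariance
    matrix) of a list of equal-length measurement tuples, via power iteration."""
    n, d = len(rows), len(rows[0])
    means = [sum(r[j] for r in rows) / n for j in range(d)]
    centered = [[r[j] - means[j] for j in range(d)] for r in rows]
    cov = [[sum(x[i] * x[j] for x in centered) / (n - 1) for j in range(d)]
           for i in range(d)]
    v = [1.0] * d
    for _ in range(100):  # power iteration converges to the top eigenvector
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Toy data: two perfectly correlated quality measures collapse onto one axis.
scores = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
pc = first_pc(scores)
print([round(c, 3) for c in pc])  # [0.447, 0.894]
```

Projecting each structure's (standardized) quality measures onto this vector yields a single summary score, which is the spirit of the paper's combined metric.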
A laser beam quality definition based on induced temperature rise.
Miller, Harold C
2012-12-17
Laser beam quality metrics like M² can be used to describe the spot sizes and propagation behavior of a wide variety of non-ideal laser beams. However, for beams that have been diffracted by limiting apertures in the near field, or those with unusual near-field profiles, the conventional metrics can lead to an inconsistent or incomplete description of far-field performance. This paper motivates an alternative laser beam quality definition that can be used with any beam. The approach considers the intrinsic ability of a laser beam profile to heat a material. Comparisons are made with conventional beam quality metrics. An analysis of an asymmetric Gaussian beam is used to establish a connection with the invariant beam propagation ratio.
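Conventional metrics such as M² are built on second moments of the intensity profile. A minimal sketch of the second-moment (D4σ) width of a sampled 1-D profile, with an invented symmetric profile:

```python
def d4sigma_width(x, intensity):
    """Second-moment (D4-sigma) beam width of a sampled 1-D intensity profile:
    four times the intensity-weighted standard deviation about the centroid."""
    total = sum(intensity)
    centroid = sum(xi * ii for xi, ii in zip(x, intensity)) / total
    var = sum((xi - centroid) ** 2 * ii for xi, ii in zip(x, intensity)) / total
    return 4.0 * var ** 0.5

# Symmetric toy profile centered on zero.
xs = [-1.0, 0.0, 1.0]
profile = [1.0, 2.0, 1.0]
print(round(d4sigma_width(xs, profile), 3))  # 2.828
```

For pathological near-field profiles, this width can misrepresent far-field behavior, which is the gap the paper's temperature-rise definition aims to close.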
A study of image quality for radar image processing. [synthetic aperture radar imagery
NASA Technical Reports Server (NTRS)
King, R. W.; Kaupp, V. H.; Waite, W. P.; Macdonald, H. C.
1982-01-01
Methods developed for image quality metrics are reviewed, with a focus on basic interpretation or recognition elements, including tone or color, shape, pattern, size, shadow, texture, site, association or context, and resolution. Seven metrics are believed to show promise as ways of characterizing the quality of an image: (1) the dynamic range of intensities in the displayed image; (2) the system signal-to-noise ratio; (3) the system spatial bandwidth or bandpass; (4) the system resolution or acutance; (5) the normalized mean-square error as a measure of geometric fidelity; (6) the perceptual mean-square error; and (7) the radar threshold quality factor. Selective levels of degradation are being applied to simulated synthetic aperture radar images to test the validity of these metrics.
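Two of the candidate metrics, dynamic range and normalized mean-square error, are straightforward to compute; the sketch below uses tiny invented images:

```python
import math

def dynamic_range_db(image):
    """Ratio of brightest to dimmest nonzero displayed intensity, in decibels."""
    flat = [p for row in image for p in row if p > 0]
    return 10.0 * math.log10(max(flat) / min(flat))

def nmse(reference, degraded):
    """Normalized mean-square error: a geometric-fidelity measure of a
    degraded image against its reference (0 means identical)."""
    num = sum((r - d) ** 2
              for ref_row, deg_row in zip(reference, degraded)
              for r, d in zip(ref_row, deg_row))
    den = sum(r * r for row in reference for r in row)
    return num / den

ref = [[1.0, 2.0], [3.0, 4.0]]
deg = [[1.0, 2.0], [3.0, 5.0]]
print(dynamic_range_db([[1.0, 100.0], [10.0, 50.0]]))  # 20.0
print(round(nmse(ref, deg), 4))  # 0.0333
```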
High-quality and small-capacity e-learning video featuring lecturer-superimposing PC screen images
NASA Astrophysics Data System (ADS)
Nomura, Yoshihiko; Murakami, Michinobu; Sakamoto, Ryota; Sugiura, Tokuhiro; Matsui, Hirokazu; Kato, Norihiko
2006-10-01
Information processing and communication technology are progressing quickly and prevailing throughout various technological fields. The development of such technology should therefore respond to needs for quality improvement in e-learning education systems. The authors propose a new video-image compression processing system that ingeniously exploits the features of the lecturing scene. While the dynamic lecturing scene is shot by a digital video camera, screen images are electronically stored by PC screen-capture software at relatively long intervals during an actual class. Then, the lecturer and lecture stick are extracted from the digital video images by pattern-recognition techniques, and the extracted images are superimposed on the appropriate PC screen images by off-line processing. Thus, we have succeeded in creating high-quality, small-capacity (HQ/SC) video-on-demand educational content with the following advantages: high image sharpness, small electronic file size, and realistic lecturer motion.
Objective Video Quality Assessment Based on Machine Learning for Underwater Scientific Applications
Moreno-Roldán, José-Miguel; Luque-Nieto, Miguel-Ángel; Poncela, Javier; Otero, Pablo
2017-01-01
Video services are meant to be a fundamental tool in the development of oceanic research. The current technology for underwater networks (UWNs) imposes strong constraints in the transmission capacity since only a severely limited bitrate is available. However, previous studies have shown that the quality of experience (QoE) is enough for ocean scientists to consider the service useful, although the perceived quality can change significantly for small ranges of variation of video parameters. In this context, objective video quality assessment (VQA) methods become essential in network planning and real time quality adaptation fields. This paper presents two specialized models for objective VQA, designed to match the special requirements of UWNs. The models are built upon machine learning techniques and trained with actual user data gathered from subjective tests. Our performance analysis shows how both of them can successfully estimate quality as a mean opinion score (MOS) value and, for the second model, even compute a distribution function for user scores. PMID:28333123
Quantitative metrics for assessment of chemical image quality and spatial resolution
Kertesz, Vilmos; Cahill, John F.; Van Berkel, Gary J.
2016-02-28
Rationale: Currently objective/quantitative descriptions of the quality and spatial resolution of mass spectrometry derived chemical images are not standardized. Development of these standardized metrics is required to objectively describe chemical imaging capabilities of existing and/or new mass spectrometry imaging technologies. Such metrics would allow unbiased judgment of intra-laboratory advancement and/or inter-laboratory comparison for these technologies if used together with standardized surfaces. Methods: We developed two image metrics, viz., chemical image contrast (ChemIC) based on signal-to-noise related statistical measures on chemical image pixels and corrected resolving power factor (cRPF) constructed from statistical analysis of mass-to-charge chronograms across features of interest in an image. These metrics, quantifying chemical image quality and spatial resolution, respectively, were used to evaluate chemical images of a model photoresist patterned surface collected using a laser ablation/liquid vortex capture mass spectrometry imaging system under different instrument operational parameters. Results: The calculated ChemIC and cRPF metrics determined in an unbiased fashion the relative ranking of chemical image quality obtained with the laser ablation/liquid vortex capture mass spectrometry imaging system. These rankings were used to show that both chemical image contrast and spatial resolution deteriorated with increasing surface scan speed, increased lane spacing and decreasing size of surface features. Conclusions: ChemIC and cRPF, respectively, were developed and successfully applied for the objective description of chemical image quality and spatial resolution of chemical images collected from model surfaces using a laser ablation/liquid vortex capture mass spectrometry imaging system.
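The abstract does not give ChemIC's formula, so the following is only a generic stand-in showing the signal-to-noise flavor of such a contrast metric, with invented pixel values:

```python
from statistics import mean, stdev

def snr_contrast(feature_pixels, background_pixels):
    """Separation of feature and background mean signals relative to the
    pooled pixel noise. An illustrative stand-in for the paper's ChemIC
    metric, whose exact formula the abstract does not give."""
    pooled_noise = ((stdev(feature_pixels) ** 2
                     + stdev(background_pixels) ** 2) / 2) ** 0.5
    return (mean(feature_pixels) - mean(background_pixels)) / pooled_noise

feature = [10.0, 12.0, 11.0, 13.0, 14.0]  # pixels on a patterned feature
background = [1.0, 2.0, 1.0, 2.0, 1.0]    # pixels between features
print(round(snr_contrast(feature, background), 2))  # 8.96
```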
13 point video tape quality guidelines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gaunt, R.
1997-05-01
Until high-definition television (ATV) arrives, in the U.S. we must still contend with the National Television Systems Committee (NTSC) video standard (or PAL or SECAM, depending on your country). NTSC, a 40-year-old standard designed for transmission of color video camera images over a small bandwidth, is not well suited for the sharp, full-color images that today's computers are capable of producing. PAL and SECAM also suffer from many of NTSC's problems, to varying degrees. Video professionals, when working with computer graphic (CG) images, use two monitors: a computer monitor for producing CGs and an NTSC monitor to view how a CG will look on video. More often than not, the NTSC image will differ significantly from the CG image, and outputting to NTSC as the artist works enables him or her to see the image as others will see it. Below are thirteen guidelines designed to increase the quality of computer graphics recorded onto video tape. Viewing your work in NTSC and attempting to follow these tips will enable you to create higher quality videos. No video is perfect, so don't expect to abide by every guideline every time.
Porter, Stephen D.
2008-01-01
Algae are excellent indicators of water-quality conditions, notably nutrient and organic enrichment, and also are indicators of major ion, dissolved oxygen, and pH concentrations and stream microhabitat conditions. The autecology, or physiological optima and tolerance, of algal species for various water-quality contaminants and conditions is relatively well understood for certain groups of freshwater algae, notably diatoms. However, applications of autecological information for water-quality assessments have been limited because of challenges associated with compiling autecological literature from disparate sources, tracking name changes for a large number of algal species, and creating an autecological data base from which algal-indicator metrics can be calculated. A comprehensive summary of algal autecological attributes for North American streams and rivers does not exist. This report describes a large, digital data file containing 28,182 records for 5,939 algal taxa, generally species or variety, collected by the U.S. Geological Survey's National Water-Quality Assessment (NAWQA) Program. The data file includes 37 algal attributes classified by over 100 algal-indicator codes or metrics that can be calculated easily with readily available software. Algal attributes include qualitative classifications based on European and North American autecological literature, and semi-quantitative, weighted-average regression approaches for estimating optima using regional and national NAWQA data. Applications of algal metrics in water-quality assessments are discussed and national quartile distributions of metric scores are shown for selected indicator metrics.
The study of surgical image quality evaluation system by subjective quality factor method
NASA Astrophysics Data System (ADS)
Zhang, Jian J.; Xuan, Jason R.; Yang, Xirong; Yu, Honggang; Koullick, Edouard
2016-03-01
The GreenLight™ procedure is an effective and economical treatment for benign prostate hyperplasia (BPH); almost a million patients have been treated with GreenLight™ worldwide. During the surgical procedure, the surgeon or physician relies on the monitoring video system to survey and confirm the surgical progress. A few obstructions can greatly affect the image quality of the monitoring video: laser glare from the tissue and body fluid, air bubbles and debris generated by tissue evaporation, and bleeding, to name a few. In order to improve the physician's visual experience of a laser surgical procedure, the system performance parameters related to image quality need to be well defined. However, image quality is the integrated set of perceptions of the overall degree of excellence of an image, that is, the perceptually weighted combination of significant attributes (contrast, graininess, etc.) of an image considered in its marketplace or application; consequently, there is no standard definition of overall image or video quality, especially for the no-reference case (without a standard chart as reference). In this study, Subjective Quality Factor (SQF) and acutance are used for no-reference image quality evaluation. Basic image quality parameters, such as sharpness, color accuracy, size of obstruction and transmission of obstruction, are used as subparameters to define the rating scale for image quality evaluation and comparison. Sample image groups were evaluated by human observers according to the rating scale. Surveys of physician groups were also conducted with lab-generated sample videos. The study shows that human subjective perception is a trustworthy way of evaluating image quality. A more systematic investigation of the relationship between video quality and the image quality of each frame will be conducted as a future study.
Region-of-interest determination and bit-rate conversion for H.264 video transcoding
NASA Astrophysics Data System (ADS)
Huang, Shu-Fen; Chen, Mei-Juan; Tai, Kuang-Han; Li, Mian-Shiuan
2013-12-01
This paper presents a video bit-rate transcoder for the baseline profile of the H.264/AVC standard that fits video bit-streams to the channel bandwidth available to the client when transmitting via communication channels. To maintain visual quality for low bit-rate video efficiently, this study analyzes the decoded information in the transcoder and proposes a Bayesian theorem-based region-of-interest (ROI) determination algorithm. In addition, a curve fitting scheme is employed to find models of video bit-rate conversion. The transcoded video conforms to the target bit-rate through re-quantization according to our proposed models. After integrating the ROI detection method and the bit-rate transcoding models, the ROI-based transcoder allocates more coding bits to ROI regions and reduces the complexity of the re-encoding procedure for non-ROI regions. Hence, it not only preserves coding quality but also improves the efficiency of video transcoding at low target bit-rates, making real-time transcoding more practical. Experimental results show that the proposed framework achieves significantly better visual quality.
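The fitted conversion models themselves are not given in the abstract. A minimal sketch of the curve-fitting idea, assuming a simple hyperbolic rate-quantization model R(Q) = a/Q + b (a common first-order approximation, not the paper's actual model), fits the model to trial (quantizer, bit-rate) samples by least squares and inverts it to pick a re-quantization step for a target bit-rate:

```python
def fit_rate_model(samples):
    # Ordinary least squares for R(Q) = a/Q + b, which is linear in x = 1/Q.
    xs = [1.0 / q for q, _ in samples]
    ys = [r for _, r in samples]
    n = len(samples)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    b = my - a * mx
    return a, b

def quantizer_for_target(a, b, target_rate):
    # Invert R = a/Q + b to choose the re-quantization step for a target rate.
    return a / (target_rate - b)

# (quantizer, measured kbit/s) pairs from trial re-encodings (synthetic here).
samples = [(10, 1000.0), (20, 550.0), (30, 400.0), (45, 300.0)]
a, b = fit_rate_model(samples)
qp = quantizer_for_target(a, b, target_rate=460.0)
```

With the synthetic samples above, which follow the model exactly, the fit recovers a = 9000 and b = 100, and a 460 kbit/s target maps to a quantizer step of 25.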
Scollato, A; Perrini, P; Benedetto, N; Di Lorenzo, N
2007-06-01
We propose an easy-to-construct digital video editing system ideal for producing video documentation and still images. A digital video editing system applicable to many video sources in the operating room is described in detail. The proposed system has proved easy to use and permits one to obtain videography quickly and easily. By mixing the different streams of video input from all the devices in use in the operating room and applying filters and effects, a final, professional end-product is obtained. Recording on a DVD provides an inexpensive, portable and easy-to-use medium to store, re-edit or tape at a later time. From stored videography it is easy to extract high-quality still images useful for teaching, presentations and publications. In conclusion, digital videography and still photography can easily be recorded by the proposed system, producing high-quality video recordings. The use of FireWire ports provides good compatibility with next-generation hardware and software. The high standard of quality makes the proposed system one of the lowest-priced products available today.
Cardiac ultrasonography over 4G wireless networks using a tele-operated robot
Panayides, Andreas S.; Jossif, Antonis P.; Christoforou, Eftychios G.; Vieyres, Pierre; Novales, Cyril; Voskarides, Sotos; Pattichis, Constantinos S.
2016-01-01
This Letter proposes an end-to-end mobile tele-echography platform using a portable robot for remote cardiac ultrasonography. Performance evaluation investigates the capacity of long-term evolution (LTE) wireless networks to facilitate responsive robot tele-manipulation and real-time ultrasound video streaming that qualifies for clinical practice. Within this context, a thorough video coding standards comparison for cardiac ultrasound applications is performed, using a data set of ten ultrasound videos. Both objective and subjective (clinical) video quality assessment demonstrate that H.264/AVC and high efficiency video coding standards can achieve diagnostically-lossless video quality at bitrates well within the LTE supported data rates. Most importantly, reduced latencies experienced throughout the live tele-echography sessions allow the medical expert to remotely operate the robot in a responsive manner, using the wirelessly communicated cardiac ultrasound video to reach a diagnosis. Based on preliminary results documented in this Letter, the proposed robotised tele-echography platform can provide for reliable, remote diagnosis, achieving comparable quality of experience levels with in-hospital ultrasound examinations. PMID:27733929
Mackenzie, Colin F; Pasley, Jason; Garofalo, Evan; Shackelford, Stacy; Chen, Hegang; Longinaker, Nyaradzo; Granite, Guinevere; Pugh, Kristy; Hagegeorge, George; Tisherman, Samuel A
2017-07-01
Unbiased evaluation of trauma core competency procedures is necessary to determine if residency and predeployment training courses are useful. We tested whether a previously validated individual procedure score (IPS) for vascular exposure and fasciotomy (FAS) performance skills could discriminate training status by comparing IPS of evaluators colocated with surgeons to blind video evaluations. Performance of axillary artery (AA), brachial artery (BA), and femoral artery (FA) vascular exposures and lower extremity FAS on fresh cadavers by 40 PGY-2 to PGY-6 residents was video-recorded from head-mounted cameras. Two colocated trained evaluators assessed IPS before and after training. One surgeon in each pretraining tertile of IPS for each procedure was randomly identified for blind video review. The same 12 surgeons were video-recorded repeating the procedures less than 4 weeks after training. Five evaluators independently reviewed all 96 randomly arranged deidentified videos. Inter-rater reliability/consistency and intraclass correlation coefficients were compared for colocated versus video review of IPS and of errors. Study methodology and bias were judged by Medical Education Research Study Quality Instrument and the Quality Assessment of Diagnostic Accuracy Studies criteria. There were no differences (p ≥ 0.5) in IPS for AA, FA, FAS, whether evaluators were colocated or reviewed video recordings. Evaluator consistency was 0.29 (BA) - 0.77 (FA). Video and colocated evaluators were in total agreement (p = 1.0) for error recognition. Intraclass correlation coefficient was 0.73 to 0.92, dependent on procedure. Correlations between video and colocated evaluations were 0.5 to 0.9. Except for BA, blinded video evaluators discriminated (p < 0.002) whether procedures were performed before training versus after training.
Study methodology scored 15.5/19 by Medical Education Research Study Quality Instrument criteria, and Quality Assessment of Diagnostic Accuracy Studies 2 showed low bias risk. Video evaluations of AA, FA, and FAS procedures with IPS are unbiased, valid, and have potential for formative assessments of competency. Prognostic study, level II.
Popular on YouTube: a critical appraisal of the educational quality of information regarding asthma.
Gonzalez-Estrada, Alexei; Cuervo-Pardo, Lyda; Ghosh, Bitan; Smith, Martin; Pazheri, Foussena; Zell, Katrina; Wang, Xiao-Feng; Lang, David M
2015-01-01
Asthma affects >300 million people globally, including 25 million in the United States. Patients with asthma frequently use the Internet as a source of information. YouTube is one of the three most popular Web sites. To determine the educational quality of YouTube videos for asthma, we performed a YouTube search by using the keyword "asthma." The 200 most frequently viewed relevant videos were included in the study. Asthma videos were analyzed for characteristics, source, and content. Source was further classified as asthma health care provider, other health care provider, patient, pharmaceutical company, and professional society and/or media. A scoring system was created to evaluate quality (-10 to 30 points). Negative points were assigned for misleading information. Two hundred videos were analyzed, with a median of 18,073.5 views, 31.5 likes, and 2 dislikes, and a median duration of 172 seconds. Most video presenters were male (60.5%). The most common type of video source was other health care providers (34.5%). The most common video content was alternative treatments (38.0%), including live-fish ingestion; reflexology; acupressure and/or acupuncture; Ayurveda; yoga; raw food, vegan, gluten-free diets; marijuana; Buteyko breathing; and salt therapy. Scores for videos supplied by asthma health care providers were statistically significantly different from other sources (p < 0.001) and had the highest average score (9.91). YouTube videos about asthma were frequently viewed but were a poor source of accurate health care information. Videos by asthma health care providers were rated highest in quality. The allergy/immunology community has a clear opportunity to enhance the value of educational material on YouTube.
Khandelwal, Aditi; Devine, Luke A; Otremba, Mirek
2017-07-01
Many instructional materials for point-of-care ultrasound (US)-guided procedures exist; however, their quality is unknown. This study assessed widely available educational videos for point-of-care US-guided procedures relevant to internal medicine: central venous catheterization, thoracentesis, and paracentesis. We searched Ovid MEDLINE, YouTube, and Google to identify videos for point-of-care US-guided paracentesis, thoracentesis, and central venous catheterization. Videos were evaluated with a 5-point scale assessing the global educational value and a checklist based on consensus guidelines for competencies in point-of-care US-guided procedures. For point-of-care US-guided central venous catheterization, 12 videos were found, with an average global educational value score ± SD of 4.5 ± 0.7. Indications to abort the procedure were discussed in only 3 videos. Five videos described the indications and contraindications for performing central venous catheterization. For point-of-care US-guided thoracentesis, 8 videos were identified, with an average global educational value score of 4.0 ± 0.9. Only one video discussed indications to abort the procedure, and 3 videos discussed sterile technique. For point-of-care US-guided paracentesis, 7 videos were included, with an average global educational value score of 4.1 ± 0.9. Only 1 video discussed indications to abort the procedure, and 2 described the location of the inferior epigastric artery. The 27 videos reviewed contained good-quality general instruction. However, we noted a lack of safety-related information in most of the available videos. Further development of resources is required to teach internal medicine trainees skills that focus on the safety of point-of-care US guidance. © 2017 by the American Institute of Ultrasound in Medicine.
Real-time video streaming in mobile cloud over heterogeneous wireless networks
NASA Astrophysics Data System (ADS)
Abdallah-Saleh, Saleh; Wang, Qi; Grecos, Christos
2012-06-01
Recently, the concept of Mobile Cloud Computing (MCC) has been proposed to offload the resource requirements in computational capabilities, storage and security from mobile devices into the cloud. Internet video applications such as real-time streaming are expected to be ubiquitously deployed and supported over the cloud for mobile users, who typically encounter a range of wireless networks of diverse radio access technologies during their roaming. However, real-time video streaming for mobile cloud users across heterogeneous wireless networks presents multiple challenges. The network-layer quality of service (QoS) provision to support high-quality mobile video delivery in this demanding scenario remains an open research question, and this in turn affects the application-level visual quality and impedes mobile users' perceived quality of experience (QoE). In this paper, we devise a framework to support real-time video streaming in this new mobile video networking paradigm and evaluate the performance of the proposed framework empirically through a lab-based yet realistic testing platform. One particular issue we focus on is the effect of users' mobility on the QoS of video streaming over the cloud. We design and implement a hybrid platform comprising a test-bed and an emulator, on which our concepts of mobile cloud computing, video streaming and heterogeneous wireless networking are implemented and integrated to allow the testing of our framework. As representative heterogeneous wireless networks, the popular WLAN (Wi-Fi) and MAN (WiMAX) networks are incorporated in order to evaluate effects of handovers between these different radio access technologies. The H.264/AVC (Advanced Video Coding) standard is employed for real-time video streaming from a server to mobile users (client nodes) in the networks. Mobility support is introduced to enable a continuous streaming experience for a mobile user across the heterogeneous wireless networks.
Real-time video stream packets are captured for analytical purposes on the mobile user node. Experimental results are obtained and analysed. Future work is identified towards further improvement of the current design and implementation. With this new mobile video networking concept and paradigm implemented and evaluated, results and observations obtained from this study would form the basis of a more in-depth, comprehensive understanding of various challenges and opportunities in supporting high-quality real-time video streaming in mobile cloud over heterogeneous wireless networks.
Home Telehealth Video Conferencing: Perceptions and Performance
Morris, Greg; Pech, Joanne; Rechter, Stuart; Carati, Colin; Kidd, Michael R
2015-01-01
Background The Flinders Telehealth in the Home trial (FTH trial), conducted in South Australia, was an action research initiative to test and evaluate the inclusion of telehealth services and broadband access technologies for palliative care patients living in the community and home-based rehabilitation services for the elderly at home. Telehealth services at home were supported by video conferencing between a therapist, nurse or doctor, and a patient using the iPad tablet. Objective The aims of this study are to identify which technical factors influence the quality of video conferencing in the home setting and to assess the impact of these factors on the clinical perceptions and acceptance of video conferencing for health care delivery into the home. Finally, we aim to identify any relationships between technical factors and clinical acceptance of this technology. Methods An action research process developed several quantitative and qualitative procedures during the FTH trial to investigate technology performance and users' perceptions of the technology, including measurements of signal power, data transmission throughput, objective assessment of user perceptions of videoconference quality, and questionnaires administered to clinical users. Results The effectiveness of telehealth was judged by clinicians as equivalent to or better than a home visit on 192 (71.6%, 192/268) occasions, and clinicians rated the experience of conducting a telehealth session compared with a home visit as equivalent or better in 90.3% (489/540) of the sessions. The quality of video conferencing when using a third-generation mobile data service (3G) in comparison with broadband fiber-based services was concerning, as 23.5% (220/936) of the calls failed during the telehealth sessions. The experimental field tests indicated that video conferencing audio and video quality was worse when using mobile data services compared with fiber-to-the-home services.
In addition, statistically significant associations were found between audio/video quality and patient comfort with the technology, as well as the clinician ratings of telehealth effectiveness. Conclusions These results showed that the quality of video conferencing when using 3G-based mobile data services instead of broadband fiber-based services was lower, owing to failed calls, audio/video jitter, and video pixelation during the telehealth sessions. Nevertheless, clinicians felt able to deliver effective services to patients at home using 3G-based mobile data services. PMID:26381104
Roberts, James J.; Bruce, James F.; Zuellig, Robert E.
2018-01-08
The analysis described in this report is part of a long-term project monitoring the biological communities, habitat, and water quality of the Fountain Creek Basin. Biology, habitat, and water-quality data have been collected at 10 sites since 2003. These data include annual samples of aquatic invertebrate communities, fish communities, water quality, and quantitative riverine habitat. This report examines trends in biological communities from 2003 to 2016 and explores relationships between biological communities and abiotic variables (antecedent streamflow, physical habitat, and water quality). Six biological metrics (three invertebrate and three fish) and four individual fish species were used to examine trends in these data and how streamflow, habitat, and (or) water quality may explain these trends. The analysis of 79 trends shows that the majority of significant trends decreased over the trend period. Overall, 19 trends before adjustments for streamflow in the fish (12) and invertebrate (7) metrics were all decreasing except for the metric Invertebrate Species Richness at the most upstream site in Monument Creek. Seven of these trends were explained by streamflow and four trends were revealed that were originally masked by variability in antecedent streamflow. Only two sites (Jimmy Camp Creek at Fountain, CO and Fountain Creek near Pinon, CO) had no trends in the fish or invertebrate metrics. Ten of the streamflow-adjusted trends were explained by habitat, one was explained by water quality, and five were not explained by any of the variables that were tested. Overall, from 2003 to 2016, all the fish metric trends were decreasing with an average decline of 40 percent, and invertebrate metrics decreased on average by 9.5 percent. A potential peak streamflow threshold was identified above which there is severely limited production of age-0 flathead chub (Platygobio gracilis).
SU-E-T-776: Use of Quality Metrics for a New Hypo-Fractionated Pre-Surgical Mesothelioma Protocol
DOE Office of Scientific and Technical Information (OSTI.GOV)
Richardson, S; Mehta, V
Purpose: The “SMART” (Surgery for Mesothelioma After Radiation Therapy) approach involves hypo-fractionated radiotherapy of the lung pleura to 25 Gy over 5 days, followed by surgical resection within 7 days. Early clinical results suggest that this approach is very promising, but also logistically challenging due to the multidisciplinary involvement. Due to the compressed schedule, high dose, and shortened planning time, the delivery of the planned doses was monitored for safety with quality-metric software. Methods: Hypo-fractionated IMRT treatment plans were developed for all patients and exported to Quality Reports™ software. Plan quality metrics, or PQMs™, were created to calculate an objective scoring function for each plan. This allows for an objective assessment of the quality of the plan and a benchmark for plan improvement for subsequent patients. The priorities of various components were incorporated based on similar hypo-fractionated protocols such as lung SBRT treatments. Results: Five patients have been treated at our institution using this approach. The plans were developed, QA performed, and ready within 5 days of simulation. Plan quality metrics utilized in scoring included doses to OAR and target coverage. All patients tolerated treatment well and proceeded to surgery as scheduled. Reported toxicity included grade 1 nausea (n=1), grade 1 esophagitis (n=1), and grade 2 fatigue (n=3). One patient had recurrent fluid accumulation following surgery. No patients experienced any pulmonary toxicity prior to surgery. Conclusion: An accelerated course of pre-operative high-dose radiation for mesothelioma is an innovative and promising new protocol. Without historical data, one must proceed cautiously and monitor the data carefully. The development of quality metrics and scoring functions for these treatments allows us to benchmark our plans and monitor improvement. If subsequent toxicities occur, they will be easy to investigate and incorporate into the metrics.
This will improve the safe delivery of large doses for these patients.
YouTube as a source of information for obstructive sleep apnea.
Singh, Sameer K; Liu, Stanley; Capasso, Robson; Kern, Robert C; Gouveia, Christopher J
Assess the quality of information on obstructive sleep apnea (OSA) presented on YouTube for patients. "Obstructive sleep apnea" was entered into the YouTube search. Two independent reviewers categorized and analyzed videos utilizing a customized scoring system, along with search position, likes, and views. Forty-eight videos were analyzed. Most were educational (52.1%). Educational and news videos had significantly higher scores, but had no significant differences in search position, likes/day, or views/day. Most videos mentioned positive airway pressure (65%), and nearly half (44%) mentioned mandibular devices in the management of OSA. YouTube is a promising source of information for OSA patients. Educational and news videos are of the highest quality. General quality measures like search position, views, and likes are not correlated with formally scored value. Few videos discussed surgery (13%) or otolaryngology (15%); sleep surgery and otolaryngologists are minimally mentioned, representing an opportunity for improvement. Copyright © 2018 Elsevier Inc. All rights reserved.
Nápoles, Anna M.; Santoyo-Olsson, Jasmine; Karliner, Leah S.; O’Brien, Helen; Gregorich, Steven E.; Pérez-Stable, Eliseo J.
2013-01-01
Language interpretation ameliorates health disparities among underserved limited-English-proficient patients, yet few studies have compared clinician satisfaction with these services. Self-administered clinician post-visit surveys compared the quality of interpretation and communication, visit satisfaction, degree of patient engagement, and cultural competence of visits using untrained people acting as interpreters (ad hoc), in-person professional, or video conferencing professional interpretation for 283 visits. Adjusting for clinician and patient characteristics, the quality of interpretation of the in-person and video conferencing modes was rated similarly (OR=1.79; 95% CI 0.74, 4.33). The quality of in-person (OR=5.55; 95% CI 1.50, 20.51) and video conferencing (OR=3.10; 95% CI 1.16, 8.31) interpretation was rated higher than that of ad hoc interpretation. Self-assessed cultural competence was better for in-person versus video conferencing interpretation (OR=2.32; 95% CI 1.11, 4.86). Video conferencing interpretation increases access without compromising quality, but cultural nuances may be better addressed by in-person interpreters. Professional interpretation is superior to ad hoc (OR=4.15; 95% CI 1.43, 12.09). PMID:20173271
From image captioning to video summary using deep recurrent networks and unsupervised segmentation
NASA Astrophysics Data System (ADS)
Morosanu, Bogdan-Andrei; Lemnaru, Camelia
2018-04-01
Automatic captioning systems based on recurrent neural networks have been tremendously successful at providing realistic natural language captions for complex and varied image data. We explore methods for adapting existing models trained on large image caption data sets to a similar problem, that of summarising videos using natural language descriptions and frame selection. These architectures create internal high level representations of the input image that can be used to define probability distributions and distance metrics on these distributions. Specifically, we interpret each hidden unit inside a layer of the caption model as representing the un-normalised log probability of some unknown image feature of interest for the caption generation process. We can then apply well understood statistical divergence measures to express the difference between images and create an unsupervised segmentation of video frames, classifying consecutive images of low divergence as belonging to the same context, and those of high divergence as belonging to different contexts. To provide a final summary of the video, we provide a group of selected frames and a text description accompanying them, allowing a user to perform a quick exploration of large unlabeled video databases.
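The abstract does not name the specific divergence measure used. One plausible reading, sketched below under that assumption, treats each frame's hidden-layer activations as unnormalised log probabilities, applies a softmax, and places a segment boundary wherever the Jensen-Shannon divergence between adjacent frames exceeds a threshold:

```python
import math

def softmax(activations):
    # Normalise unnormalised log probabilities into a distribution.
    m = max(activations)
    exps = [math.exp(a - m) for a in activations]
    s = sum(exps)
    return [e / s for e in exps]

def js_divergence(p, q):
    # Jensen-Shannon divergence: a symmetric, bounded divergence
    # between two discrete distributions.
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    def kl(u, v):
        return sum(ui * math.log(ui / vi) for ui, vi in zip(u, v) if ui > 0)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def segment_boundaries(frame_activations, threshold=0.1):
    # Start a new context whenever adjacent frames' normalised hidden
    # activations diverge by more than the threshold.
    dists = [softmax(h) for h in frame_activations]
    bounds = [0]
    for i in range(1, len(dists)):
        if js_divergence(dists[i - 1], dists[i]) > threshold:
            bounds.append(i)
    return bounds

# Four synthetic frames: the dominant hidden "feature" moves at frame 2,
# so a context boundary should appear there.
frames = [[5.0, 0.0, 0.0], [5.1, 0.0, 0.0], [0.0, 5.0, 0.0], [0.0, 5.2, 0.1]]
```

Frames 0-1 and 2-3 are nearly identical, so only the jump at frame 2 opens a new segment; one representative frame per segment plus its generated caption would then form the summary.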
Retrieval evaluation and distance learning from perceived similarity between endomicroscopy videos.
André, Barbara; Vercauteren, Tom; Buchner, Anna M; Wallace, Michael B; Ayache, Nicholas
2011-01-01
Evaluating content-based retrieval (CBR) is challenging because it requires an adequate ground-truth. When the available ground-truth is limited to textual metadata such as pathological classes, retrieval results can only be evaluated indirectly, for example in terms of classification performance. In this study we first present a tool to generate perceived similarity ground-truth that enables direct evaluation of endomicroscopic video retrieval. This tool uses a four-point Likert scale and collects subjective pairwise similarities perceived by multiple expert observers. We then evaluate against the generated ground-truth a previously developed dense bag-of-visual-words method for endomicroscopic video retrieval. Confirming the results of previous indirect evaluation based on classification, our direct evaluation shows that this method significantly outperforms several other state-of-the-art CBR methods. In a second step, we propose to improve the CBR method by learning an adjusted similarity metric from the perceived similarity ground-truth. By minimizing a margin-based cost function that differentiates similar and dissimilar video pairs, we learn a weight vector applied to the visual word signatures of videos. Using cross-validation, we demonstrate that the learned similarity distance is significantly better correlated with the perceived similarity than the original visual-word-based distance.
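The exact cost function and optimiser are not given in the abstract. The toy sketch below illustrates the general idea under stated assumptions (a weighted L1 distance over visual-word signatures, a hinge-style margin cost, and plain subgradient descent): a non-negative weight vector is adjusted so that every dissimilar pair ends up at least a margin farther apart than every similar pair.

```python
def weighted_distance(w, a, b):
    # Weighted L1 distance between two visual-word signatures.
    return sum(wi * abs(ai - bi) for wi, ai, bi in zip(w, a, b))

def learn_weights(similar, dissimilar, dim, margin=1.0, lr=0.05, epochs=200):
    # Subgradient descent on a margin cost: every dissimilar pair should be
    # at least `margin` farther apart than every similar pair.
    w = [1.0] * dim
    for _ in range(epochs):
        for a, b in similar:
            for c, d in dissimilar:
                if weighted_distance(w, a, b) + margin > weighted_distance(w, c, d):
                    for i in range(dim):
                        g = abs(a[i] - b[i]) - abs(c[i] - d[i])
                        w[i] = max(w[i] - lr * g, 0.0)  # keep weights non-negative
    return w

# Toy 2-D signatures: dimension 0 is noise, dimension 1 separates the pairs.
similar = [([1.0, 0.0], [3.0, 0.0]), ([0.0, 1.0], [2.0, 1.0])]
dissimilar = [([0.0, 0.0], [0.0, 2.0]), ([1.0, 0.0], [1.0, 3.0])]
w = learn_weights(similar, dissimilar, dim=2)
```

After training, the noisy dimension is down-weighted and the discriminative one up-weighted, so the learned distance separates the pair types as intended.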
Super-Resolution for “Jilin-1” Satellite Video Imagery via a Convolutional Network
Xiao, Aoran; Wang, Zhongyuan; Wang, Lei; Ren, Yexian
2018-01-01
Super-resolution for satellite video attaches much significance to earth observation accuracy, and the special imaging and transmission conditions on the video satellite pose great challenges to this task. The existing deep convolutional neural-network-based methods require pre-processing or post-processing to be adapted to a high-resolution size or pixel format, leading to reduced performance and extra complexity. To this end, this paper proposes a five-layer end-to-end network structure without any pre-processing and post-processing, but imposes a reshape or deconvolution layer at the end of the network to retain the distribution of ground objects within the image. Meanwhile, we formulate a joint loss function by combining the output and high-dimensional features of a non-linear mapping network to precisely learn the desirable mapping relationship between low-resolution images and their high-resolution counterparts. Also, we use satellite video data itself as a training set, which favors consistency between training and testing images and promotes the method’s practicality. Experimental results on “Jilin-1” satellite video imagery show that this method demonstrates a superior performance in terms of both visual effects and measure metrics over competing methods. PMID:29652838
Angermeier, P.L.; Davideanu, G.
2004-01-01
Multimetric biotic indices increasingly are used to complement physicochemical data in assessments of stream quality. We initiated development of multimetric indices, based on fish communities, to assess biotic integrity of streams in two physiographic regions of central Romania. Unlike previous efforts to develop such indices for European streams, our metrics and scoring criteria were selected largely on the basis of empirical relations in the regions of interest. We categorised 54 fish species with respect to ten natural-history attributes, then used this information to compute 32 candidate metrics of five types (taxonomic, tolerance, abundance, reproductive, and feeding) for each of 35 sites. We assessed the utility of candidate metrics for detecting anthropogenic impact based on three criteria: (a) range of values taken, (b) relation to a site-quality index (SQI), which incorporated information on hydrologic alteration, channel alteration, land-use intensity, and water chemistry, and (c) metric redundancy. We chose seven metrics from each region to include in preliminary multimetric indices (PMIs). Both PMIs included taxonomic, tolerance, and feeding metrics, but only two metrics were common to both PMIs. Although we could not validate our PMIs, their strong association with the SQI in each region suggests that such indices would be valuable tools for assessing stream quality and could provide more comprehensive assessments than the traditional approaches based solely on water chemistry.
Initial Ada components evaluation
NASA Technical Reports Server (NTRS)
Moebes, Travis
1989-01-01
SAIC is responsible for independent test and validation of the SSE. They have been using a mathematical functions library package implemented in Ada to test the SSE IV and V process. The library package consists of elementary mathematical functions and is both machine and accuracy independent. The SSE Ada components evaluation includes code complexity metrics based on Halstead's software science metrics and McCabe's measure of cyclomatic complexity. Halstead's metrics are based on the number of operators and operands in a logical unit of code and are compiled from the number of distinct operators, distinct operands, and the total number of occurrences of operators and operands. These metrics give an indication of the physical size of a program in terms of operators and operands and are used diagnostically to point to potential problems. McCabe's Cyclomatic Complexity Metrics (CCM) are compiled from flow charts transformed into equivalent directed graphs. The CCM is a measure of the total number of linearly independent paths through the code's control structure. These metrics were computed for the Ada mathematical functions library using Software Automated Verification and Validation (SAVVAS), the SSE IV and V tool. A table with selected results indicated that most of these routines are of good quality. Thresholds for the Halstead measures indicate poor quality if the length metric exceeds 260 or the difficulty exceeds 190. The McCabe CCM indicated a high quality of software products.
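The Halstead and McCabe measures described above are simple enough to compute directly. The sketch below (Python rather than Ada, with hypothetical token counts) derives Halstead length, volume, and difficulty from operator/operand tallies, applies the quoted thresholds (length > 260, difficulty > 190), and computes cyclomatic complexity M = E - N + 2P from the control-flow graph:

```python
from math import log2

def halstead(operators, operands):
    """Compute Halstead software-science metrics from token streams.

    operators/operands are the full token streams of a logical unit of code.
    """
    n1, n2 = len(set(operators)), len(set(operands))   # distinct counts
    N1, N2 = len(operators), len(operands)             # total occurrences
    length = N1 + N2                                   # program length N
    vocabulary = n1 + n2                               # vocabulary n
    volume = length * log2(vocabulary)                 # volume V = N log2 n
    difficulty = (n1 / 2) * (N2 / n2)                  # difficulty D
    return {"length": length, "volume": volume, "difficulty": difficulty}

def flags_poor_quality(m):
    # Thresholds quoted in the abstract: length > 260 or difficulty > 190.
    return m["length"] > 260 or m["difficulty"] > 190

def cyclomatic(edges, nodes, components=1):
    # McCabe's measure on the control-flow graph: M = E - N + 2P.
    return edges - nodes + 2 * components

# Tiny hypothetical unit: 5 operator tokens, 6 operand tokens.
m = halstead(["+", "*", "=", "+", "="], ["a", "b", "c", "a", "b", "c"])
print(m["length"], flags_poor_quality(m), cyclomatic(edges=9, nodes=8))
```

For this toy unit the length is 11 and difficulty 3.0, far below the poor-quality thresholds, and a flow graph with 9 edges and 8 nodes has cyclomatic complexity 3.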
Nuutinen, Mikko; Virtanen, Toni; Rummukainen, Olli; Häkkinen, Jukka
2016-03-01
This article presents VQone, a graphical experiment builder, written as a MATLAB toolbox, developed for image and video quality ratings. VQone contains the main elements needed for the subjective image and video quality rating process. This includes building and conducting experiments and data analysis. All functions can be controlled through graphical user interfaces. The experiment builder includes many standardized image and video quality rating methods. Moreover, it enables the creation of new methods or modified versions from standard methods. VQone is distributed free of charge under the terms of the GNU general public license and allows code modifications to be made so that the program's functions can be adjusted according to a user's requirements. VQone is available for download from the project page (http://www.helsinki.fi/psychology/groups/visualcognition/).
A time-varying subjective quality model for mobile streaming videos with stalling events
NASA Astrophysics Data System (ADS)
Ghadiyaram, Deepti; Pan, Janice; Bovik, Alan C.
2015-09-01
Over-the-top mobile video streaming is invariably influenced by volatile network conditions which cause playback interruptions (stalling events), thereby impairing users' quality of experience (QoE). Developing models that can accurately predict users' QoE could enable the more efficient design of quality-control protocols for video streaming networks that reduce network operational costs while still delivering high-quality video content to the customers. Existing objective models that predict QoE are based on global video features, such as the number of stall events and their lengths, and are trained and validated on a small pool of ad hoc video datasets, most of which are not publicly available. The model we propose in this work goes beyond previous models as it also accounts for the fundamental effect that a viewer's recent level of satisfaction or dissatisfaction has on their overall viewing experience. In other words, the proposed model accounts for and adapts to the recency, or hysteresis effect caused by a stall event in addition to accounting for the lengths, frequency of occurrence, and the positions of stall events - factors that interact in a complex way to affect a user's QoE. On the recently introduced LIVE-Avvasi Mobile Video Database, which consists of 180 distorted videos of varied content that are afflicted solely with over 25 unique realistic stalling events, we trained and validated our model to accurately predict the QoE, attaining standout QoE prediction performance.
NASA Astrophysics Data System (ADS)
Ndiaye, Maty; Quinquis, Catherine; Larabi, Mohamed Chaker; Le Lay, Gwenael; Saadane, Hakim; Perrine, Clency
2014-01-01
During the last decade, important advances in and the widespread availability of mobile technology (operating systems, GPUs, terminal resolution, and so on) have encouraged the fast development of voice and video services such as video-calling. While multimedia services have grown rapidly on mobile devices, the resulting increase in data consumption is leading to the saturation of mobile networks. In order to provide data at high bit-rates and maintain performance as close as possible to that of traditional networks, the 3GPP (3rd Generation Partnership Project) worked on a high-performance mobile standard called Long Term Evolution (LTE). In this paper, we aim at expressing recommendations related to audio and video media profiles (selection of audio and video codecs, bit-rates, frame-rates, audio and video formats) for a typical video-calling service held over LTE/4G mobile networks. These profiles are defined according to the targeted devices (smartphones, tablets), so as to ensure the best possible quality of experience (QoE). The obtained results indicate that for the CIF format (352 x 288 pixels), which is usually used for smartphones, the VP8 codec provides better image quality than the H.264 codec at low bitrates (from 128 to 384 kbps). However, for sequences with high motion, H.264 in slow mode is preferred. Regarding audio, better results are globally achieved using wideband codecs, which offer good quality, except for the Opus codec at 12.2 kbps.
EVALUATION OF METRIC PRECISION FOR A RIPARIAN FOREST SURVEY
This paper evaluates the performance of a protocol to monitor riparian forests in western Oregon based on the quality of the data obtained from a recent field survey. Precision and accuracy are the criteria used to determine the quality of 19 field metrics. The field survey con...
The principal focus of this project is the mapping and interpretation of landscape scale (i.e., broad scale) ecological metrics among contributing watersheds of the Upper White River, and the development of geospatial models of water quality vulnerability for several suspected no...
Content-based management service for medical videos.
Mendi, Engin; Bayrak, Coskun; Cecen, Songul; Ermisoglu, Emre
2013-01-01
Development of health information technology has had a dramatic impact on the efficiency and quality of medical care. Developing interoperable health information systems for healthcare providers has the potential to improve the quality and equitability of patient-centered healthcare. In this article, we describe an automated content-based medical video analysis and management service that provides convenient access to relevant medical video content without sequential scanning. The system facilitates effective temporal video segmentation and content-based visual information retrieval, enabling a more reliable understanding of medical video content. The system is implemented as a Web- and mobile-based service and has the potential to offer a knowledge-sharing platform for efficient medical video content access.
Jensen, Katrine; Bjerrum, Flemming; Hansen, Henrik Jessen; Petersen, René Horsleben; Pedersen, Jesper Holst; Konge, Lars
2015-10-01
The aims of this study were to develop virtual reality simulation software for video-assisted thoracic surgery (VATS) lobectomy, to explore the opinions of thoracic surgeons concerning the VATS lobectomy simulator and to test the validity of the simulator metrics. Experienced VATS surgeons worked with computer specialists to develop VATS lobectomy software for a virtual reality simulator. Thoracic surgeons with different degrees of experience in VATS were enrolled at the 22nd meeting of the European Society of Thoracic Surgeons (ESTS) held in Copenhagen in June 2014. The surgeons were divided according to the number of performed VATS lobectomies: novices (0 VATS lobectomies), intermediates (1-49 VATS lobectomies) and experienced (>50 VATS lobectomies). The participants all performed a lobectomy of a right upper lobe on the simulator and answered a questionnaire regarding content validity. Metrics were compared between the three groups. We succeeded in developing the first version of a virtual reality VATS lobectomy simulator. A total of 103 thoracic surgeons completed the simulated lobectomy and were distributed as follows: novices n = 32, intermediates n = 45 and experienced n = 26. All groups rated the overall realism of the VATS lobectomy scenario at a median of 5 on a scale of 1-7, with 7 being the best score. The experienced surgeons found the graphics and movements realistic and rated the scenario highly in terms of usefulness as a training tool for novice and intermediate thoracic surgeons, but not very useful as a training tool for experienced surgeons. The metric scores did not differ significantly between the groups. This is the first study to describe a commercially available virtual reality simulator for VATS lobectomy. More than 100 thoracic surgeons found the simulator realistic, and hence it showed good content validity. However, none of the built-in simulator metrics could significantly distinguish between novice, intermediate and experienced surgeons, and further development of the simulator software is necessary to develop valid metrics.
Engagement with electronic screen media among students with autism spectrum disorders.
Mineo, Beth A; Ziegler, William; Gill, Susan; Salkin, Donna
2009-01-01
This study investigated the relative engagement potential of four types of electronic screen media (ESM): animated video, video of self, video of a familiar person engaged with an immersive virtual reality (VR) game, and immersion of self in the VR game. Forty-two students with autism, varying in age and expressive communication ability, were randomly assigned to the experimental conditions. Gaze duration and vocalization served as dependent measures of engagement. The results reveal differential responding across ESM, with some variation related to the engagement metric employed. Preferences for seeing themselves on the screen, as well as for viewing the VR scenarios, emerged from the data. While the study did not yield definitive data about the relative engagement potential of ESM alternatives, it does provide a foundation for future research, including guidance related to participant profiles, stimulus characteristics, and data coding challenges.
An Opportunistic Routing Mechanism Combined with Long-Term and Short-Term Metrics for WMN
Sun, Weifeng; Wang, Haotian; Piao, Xianglan; Qiu, Tie
2014-01-01
A WMN (wireless mesh network) is a useful wireless multihop network of considerable research value. The routing strategy determines the performance of the network and the quality of transmission. A good routing algorithm will use the whole bandwidth of the network and assure the quality of service of traffic. Since the routing metric ETX (expected transmission count) does not assure good quality of wireless links, to improve the routing performance, an opportunistic routing mechanism combined with long-term and short-term metrics for WMN based on OLSR (optimized link state routing) and ETX is proposed in this paper. This mechanism always chooses the highest-throughput links to improve routing performance over the WMN and thereby reduces the energy consumption of mesh routers. The simulations and analyses show that the opportunistic routing mechanism performs better than the mechanism based on the ETX metric alone. PMID:25250379
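For readers unfamiliar with ETX, the baseline metric this mechanism builds on can be sketched in a few lines. This is a minimal illustration of the standard ETX definition (link ETX = 1/(df·dr) from probe delivery ratios, path ETX = sum over links), not the paper's combined long-/short-term mechanism; the delivery ratios below are hypothetical:

```python
def link_etx(df, dr):
    """Expected transmission count of one link.

    df, dr: measured forward/reverse probe delivery ratios in (0, 1].
    A data/ACK exchange succeeds only if both directions succeed, so the
    expected number of (re)transmissions is 1 / (df * dr).
    """
    return 1.0 / (df * dr)

def path_etx(links):
    # The path metric is the sum of per-link ETX; routing picks the minimum.
    return sum(link_etx(df, dr) for df, dr in links)

# Two hypothetical candidate routes between the same pair of mesh nodes.
route_a = [(0.9, 0.9), (0.8, 0.9)]    # two moderately good links
route_b = [(0.95, 0.95), (0.5, 0.6)]  # one excellent link, one lossy link
best = min((route_a, route_b), key=path_etx)
print(best is route_a)  # the lossy link dominates route_b's cost
```

The lossy second hop makes route_b cost roughly 4.4 expected transmissions against route_a's 2.6, which is exactly the weakness the abstract notes: a cumulative long-term count says nothing about a link's instantaneous quality.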
New Integrated Video and Graphics Technology: Digital Video Interactive.
ERIC Educational Resources Information Center
Optical Information Systems, 1987
1987-01-01
Describes digital video interactive (DVI), a new technology which combines the interactivity of the graphics capabilities in personal computers with the realism of high-quality motion video and multitrack audio in an all-digital integrated system. (MES)
YouTube videos as a teaching tool and patient resource for infantile spasms.
Fat, Mary Jane Lim; Doja, Asif; Barrowman, Nick; Sell, Erick
2011-07-01
The purpose of this study was to assess YouTube videos for their efficacy as a patient resource for infantile spasms. Videos were searched using the terms infantile spasm, spasm, epileptic spasm, and West syndrome. The top 25 videos under each term were selected according to set criteria. Technical quality, diagnosis of infantile spasms, and suitability as a teaching resource were assessed by 2 neurologists using the Medical Video Rating Scale. There were 5858 videos found. Of the 100 top videos, 46% did not meet selection criteria. Mean rating for technical quality was 4.0 of 5 for rater 1 and 3.9 of 5 for rater 2. Raters found 60% and 64% of videos to accurately portray infantile spasms, respectively, with significant agreement (Cohen κ coefficient = 0.75, P < .001). Ten videos were considered excellent examples (grading of 5 of 5) by at least 1 rater. YouTube may be used as an excellent patient resource for infantile spasms if guided search practices are followed.
YouTube as a Source of Information on Cervical Cancer.
Adhikari, Janak; Sharma, Priyadarshani; Arjyal, Lubina; Uprety, Dipesh
2016-04-01
Cervical cancer is the third most common cancer worldwide. Accurate information about cervical cancer for the general public can lower the burden of the disease, including its mortality. We aimed to assess the quality of information available on YouTube for cervical cancer. We searched YouTube (http://www.youtube.com) for videos using the keyword cervical cancer on November 12, 2015, and analyzed 172 videos for their source and content of information. We found videos describing personal stories, risk factors, and the importance of screening. However, videos discussing all aspects of the disease were lacking, as were videos from reputed organizations, including the Centers for Disease Control and Prevention, the American Cancer Society, and the World Health Organization. We strongly believe that quality videos from such organizations on YouTube could help lower the burden of the disease.
Performance evaluation of objective quality metrics for HDR image compression
NASA Astrophysics Data System (ADS)
Valenzise, Giuseppe; De Simone, Francesca; Lauga, Paul; Dufaux, Frederic
2014-09-01
Due to the much larger luminance and contrast characteristics of high dynamic range (HDR) images, well-known objective quality metrics, widely used for the assessment of low dynamic range (LDR) content, cannot be directly applied to HDR images in order to predict their perceptual fidelity. To overcome this limitation, advanced fidelity metrics, such as the HDR-VDP, have been proposed to accurately predict visually significant differences. However, their complex calibration may make them difficult to use in practice. A simpler approach consists in computing arithmetic or structural fidelity metrics, such as PSNR and SSIM, on perceptually encoded luminance values but the performance of quality prediction in this case has not been clearly studied. In this paper, we aim at providing a better comprehension of the limits and the potentialities of this approach, by means of a subjective study. We compare the performance of HDR-VDP to that of PSNR and SSIM computed on perceptually encoded luminance values, when considering compressed HDR images. Our results show that these simpler metrics can be effectively employed to assess image fidelity for applications such as HDR image compression.
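The "simpler approach" this abstract studies, arithmetic fidelity metrics computed on perceptually encoded luminance, can be sketched as follows. This is an illustrative stand-in: it uses a plain log10 encoding in place of a true perceptual curve such as PU or PQ, and synthetic luminance data rather than the paper's test images:

```python
import numpy as np

def perceptual_encode(luminance, l_min=0.005, l_max=10000.0):
    """Map absolute luminance (cd/m^2) to [0, 1] with a log encoding.

    A simple stand-in for a perceptual transfer function; real encodings
    (PU, PQ) follow the eye's contrast sensitivity more closely.
    """
    l = np.clip(luminance, l_min, l_max)
    return (np.log10(l) - np.log10(l_min)) / (np.log10(l_max) - np.log10(l_min))

def psnr(ref, dist, peak=1.0):
    """PSNR in dB between two arrays with the given peak signal value."""
    mse = np.mean((ref - dist) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
ref = rng.uniform(0.01, 5000.0, size=(64, 64))           # synthetic HDR frame
dist = ref * (1 + 0.01 * rng.standard_normal((64, 64)))  # mild distortion
score = psnr(perceptual_encode(ref), perceptual_encode(dist))
print(round(score, 1))
```

SSIM would be applied to the same encoded values in place of `psnr`; the point of the paper is that such metrics, naive on linear luminance, become usable predictors once the encoding step is inserted.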
Metrication report to the Congress. 1991 activities and 1992 plans
NASA Technical Reports Server (NTRS)
1991-01-01
During 1991, NASA approved a revised metric use policy and developed a NASA Metric Transition Plan. This Plan targets the end of 1995 for completion of NASA's metric initiatives. This Plan also identifies future programs that NASA anticipates will use the metric system of measurement. Field installations began metric transition studies in 1991 and will complete them in 1992. Half of NASA's Space Shuttle payloads for 1991, and almost all such payloads for 1992, have some metric-based elements. In 1992, NASA will begin assessing requirements for space-quality piece parts fabricated to U.S. metric standards, leading to development and qualification of high priority parts.
Web-video-mining-supported workflow modeling for laparoscopic surgeries.
Liu, Rui; Zhang, Xiaoli; Zhang, Hao
2016-11-01
As quality assurance is of strong concern in advanced surgeries, intelligent surgical systems are expected to have knowledge such as the surgical workflow model (SWM) to support intuitive cooperation with surgeons. Generating a robust and reliable SWM requires a large amount of training data. However, training data collected by physically recording surgical operations is often limited, and its collection is time-consuming and labor-intensive, severely limiting the knowledge scalability of surgical systems. The objective of this research is to solve the knowledge scalability problem in surgical workflow modeling in a low-cost, labor-efficient way. A novel web-video-mining-supported surgical workflow modeling (webSWM) method is developed. A novel video quality analysis method based on topic analysis and sentiment analysis techniques is developed to select high-quality videos from abundant and noisy web videos. A statistical learning method is then used to build the workflow model based on the selected videos. To test the effectiveness of the webSWM method, 250 web videos were mined to generate a surgical workflow for robotic cholecystectomy. The generated workflow was evaluated with 4 web-retrieved videos and 4 operating-room-recorded videos. The evaluation results (video selection consistency n-index ≥0.60; surgical workflow matching degree ≥0.84) proved the effectiveness of the webSWM method in generating robust and reliable SWM knowledge by mining web videos. With the webSWM method, abundant web videos were selected and a reliable SWM was modeled in a short time with low labor cost. This performance in mining web videos and learning surgery-related knowledge shows that the webSWM method is promising for scaling knowledge for intelligent surgical systems.
Improvement of impact noise in a passenger car utilizing sound metric based on wavelet transform
NASA Astrophysics Data System (ADS)
Lee, Sang-Kwon; Kim, Ho-Wuk; Na, Eun-Woo
2010-08-01
A new sound metric for impact sound is developed based on the continuous wavelet transform (CWT), a useful tool for the analysis of non-stationary signals such as impact noise. Together with the new metric, two conventional sound metrics related to sound modulation and fluctuation are also considered. In all, three sound metrics are employed to develop impact sound quality indexes for several specific impact courses on the road. Impact sounds are evaluated subjectively by 25 jurors. The indexes are verified by comparing the index output with the results of a subjective evaluation based on a jury test. These indexes are successfully applied to an objective evaluation of impact sound quality improvement in cases where parts of the test car's suspension system are modified.
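A CWT localizes a transient such as an impact in both time and scale, which is why it suits non-stationary impact noise. The sketch below is a minimal illustration (not the authors' metric): it convolves a synthetic signal containing an impact-like burst with Ricker (Mexican-hat) wavelets at several scales and locates the strongest response:

```python
import numpy as np

def ricker(points, a):
    """Ricker (Mexican-hat) wavelet: `points` samples, width parameter `a`."""
    t = np.arange(points) - (points - 1) / 2.0
    norm = 2 / (np.sqrt(3 * a) * np.pi ** 0.25)
    return norm * (1 - (t / a) ** 2) * np.exp(-t ** 2 / (2 * a ** 2))

def cwt(signal, widths):
    """CWT by direct convolution; one row of coefficients per scale."""
    out = np.empty((len(widths), len(signal)))
    for i, a in enumerate(widths):
        n = min(10 * int(a), len(signal))
        out[i] = np.convolve(signal, ricker(n, a), mode="same")
    return out

# Synthetic recording: a quiet hum plus a short impact-like burst.
sig = 0.2 * np.sin(2 * np.pi * 5 * np.linspace(0, 1, 1000))
sig[480:500] += 2.0  # "impact" near sample 490
coeffs = cwt(sig, widths=range(1, 31))
scale_idx, time_idx = np.unravel_index(np.abs(coeffs).argmax(), coeffs.shape)
print(time_idx)  # index of the strongest time-scale response, near the burst
```

The coefficient map peaks at the burst's position in time and at a scale comparable to its duration; a metric like the authors' would then summarize this time-scale energy into a single number per recording.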
Bi, Sheng; Zeng, Xiao; Tang, Xin; Qin, Shujia; Lai, King Wai Chiu
2016-01-01
Compressive sensing (CS) theory has opened up new paths for the development of signal processing applications. Based on this theory, a novel single pixel camera architecture has been introduced to overcome the current limitations and challenges of traditional focal plane arrays. However, video quality based on this method is limited by existing acquisition and recovery methods, and the method also suffers from being time-consuming. In this paper, a multi-frame motion estimation algorithm is proposed in CS video to enhance the video quality. The proposed algorithm uses multiple frames to implement motion estimation. Experimental results show that using multi-frame motion estimation can improve the quality of recovered videos. To further reduce the motion estimation time, a block match algorithm is used to process motion estimation. Experiments demonstrate that using the block match algorithm can reduce motion estimation time by 30%. PMID:26950127
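Block matching, the motion-estimation approach the abstract uses to cut computation time, can be illustrated with a brute-force SAD search. This is a generic full-search sketch on synthetic frames, not the paper's CS-domain algorithm; the frame contents and the (2, 3)-pixel shift are invented for the example:

```python
import numpy as np

def block_match(ref, cur, block=8, search=4):
    """Full-search block matching: for each block of `cur`, find the
    displacement into `ref` (within +/- `search` pixels) minimizing the
    sum of absolute differences (SAD)."""
    h, w = cur.shape
    motion = np.zeros((h // block, w // block, 2), dtype=int)
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            blk = cur[by:by + block, bx:bx + block]
            best, best_dv = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if 0 <= y <= h - block and 0 <= x <= w - block:
                        sad = np.abs(ref[y:y + block, x:x + block] - blk).sum()
                        if sad < best:
                            best, best_dv = sad, (dy, dx)
            motion[by // block, bx // block] = best_dv
    return motion

# Hypothetical frames: the current frame is the reference shifted by (2, 3).
rng = np.random.default_rng(1)
ref = rng.uniform(size=(32, 32))
cur = np.roll(ref, shift=(2, 3), axis=(0, 1))
mv = block_match(ref, cur)
print(mv[2, 2])  # an interior block maps back to ref at offset (-2, -3)
```

Restricting the search to a small window around each block is what keeps the cost bounded; the paper reports that this style of search cut motion-estimation time by about 30% relative to its unconstrained estimator.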
SU-E-I-71: Quality Assessment of Surrogate Metrics in Multi-Atlas-Based Image Segmentation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, T; Ruan, D
Purpose: With ever-growing data of heterogeneous quality, relevance assessment of atlases becomes increasingly critical for multi-atlas-based image segmentation. However, there is no universally recognized best relevance metric, and even a standard for comparing candidates remains elusive. This study, for the first time, designs a quantification to assess the quality of relevance metrics, based on a novel perspective of the metric as a surrogate for inferring the inaccessible oracle geometric agreement. Methods: We first develop an inference model to relate surrogate metrics in image space to the underlying oracle relevance metric in segmentation label space, with a monotonically non-decreasing function subject to random perturbations. Subsequently, we investigate model parameters to reveal key contributing factors to surrogates' ability in prognosticating the oracle relevance value, for the specific task of atlas selection. Finally, we design an effective contrast-to-noise ratio (eCNR) to quantify surrogates' quality based on insights from these analyses and empirical observations. Results: The inference model was specialized to a linear function with normally distributed perturbations, with surrogate metrics exemplified by several widely used image similarity metrics, i.e., MSD/NCC/(N)MI. Surrogates' behaviors in selecting the most relevant atlases were assessed under varying eCNR, showing that surrogates with high eCNR dominated those with low eCNR in retaining the most relevant atlases. In an end-to-end validation, NCC/(N)MI with an eCNR of 0.12 resulted in statistically better segmentation than MSD with an eCNR of 0.10, with a mean DSC of about 0.85 and first and third quartiles of (0.83, 0.89), against a mean DSC of 0.84 and quartiles of (0.81, 0.89) for MSD. Conclusion: The designed eCNR is capable of characterizing surrogate metrics' quality in prognosticating the oracle relevance value. It has been demonstrated to be correlated with the performance of relevant atlas selection and ultimate label fusion.
A Validation of Object-Oriented Design Metrics as Quality Indicators
NASA Technical Reports Server (NTRS)
Basili, Victor R.; Briand, Lionel C.; Melo, Walcelio
1997-01-01
This paper presents the results of a study in which we empirically investigated the suite of object-oriented (OO) design metrics introduced in another work. More specifically, our goal is to assess these metrics as predictors of fault-prone classes and, therefore, determine whether they can be used as early quality indicators. This study is complementary to the work in which the same suite of metrics was used to assess frequencies of maintenance changes to classes. To perform our validation accurately, we collected data on the development of eight medium-sized information management systems based on identical requirements. All eight projects were developed using a sequential life cycle model, a well-known OO analysis/design method and the C++ programming language. Based on empirical and quantitative analysis, the advantages and drawbacks of these OO metrics are discussed. Several of Chidamber and Kemerer's OO metrics appear to be useful for predicting class fault-proneness during the early phases of the life-cycle. Also, on our data set, they are better predictors than 'traditional' code metrics, which can only be collected at a later phase of the software development process.
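One of the Chidamber-Kemerer metrics validated in such studies, Weighted Methods per Class (WMC), reduces to a method count when each method is given unit weight. The study's systems were C++, but the definition is language-independent; as an illustration, the sketch below computes unit-weight WMC for Python classes via the standard `ast` module:

```python
import ast

def wmc_per_class(source):
    """Weighted Methods per Class with unit weights: methods per class.

    A minimal illustration of one Chidamber-Kemerer metric; real tools
    would weight each method by its own complexity (e.g. cyclomatic).
    """
    tree = ast.parse(source)
    return {
        node.name: sum(isinstance(n, (ast.FunctionDef, ast.AsyncFunctionDef))
                       for n in node.body)
        for node in ast.walk(tree)
        if isinstance(node, ast.ClassDef)
    }

# Hypothetical snippet under analysis.
code = """
class Account:
    def deposit(self): ...
    def withdraw(self): ...
    def balance(self): ...

class AuditLog:
    def record(self): ...
"""
print(wmc_per_class(code))  # {'Account': 3, 'AuditLog': 1}
```

A high WMC flags a class carrying many responsibilities, which is exactly the kind of early, design-time signal of fault-proneness the study evaluates, available before 'traditional' code metrics can be collected.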
Weissman, David E; Morrison, R Sean; Meier, Diane E
2010-02-01
Data collection and analysis are vital for strategic planning, quality improvement, and demonstration of palliative care program impact to hospital administrators, private funders and policymakers. Since 2000, the Center to Advance Palliative Care (CAPC) has provided technical assistance to hospitals, health systems and hospices working to start, sustain, and grow nonhospice palliative care programs. CAPC convened a consensus panel in 2008 to develop recommendations for specific clinical and customer metrics that programs should track. The panel agreed on four key domains of clinical metrics and two domains of customer metrics. Clinical metrics include: daily assessment of physical/psychological/spiritual symptoms by a symptom assessment tool; establishment of patient-centered goals of care; support to patient/family caregivers; and management of transitions across care sites. For customer metrics, consensus was reached on two domains that should be tracked to assess satisfaction: patient/family satisfaction, and referring clinician satisfaction. In an effort to ensure access to reliably high-quality palliative care data throughout the nation, hospital palliative care programs are encouraged to collect and report outcomes for each of the metric domains described here.
The role of complexity metrics in a multi-institutional dosimetry audit of VMAT
McGarry, Conor K; Agnew, Christina E; Hussein, Mohammad; Tsang, Yatman; McWilliam, Alan; Hounsell, Alan R; Clark, Catharine H
2016-01-01
Objective: To demonstrate the benefit of complexity metrics such as the modulation complexity score (MCS) and monitor units (MUs) in multi-institutional audits of volumetric-modulated arc therapy (VMAT) delivery. Methods: 39 VMAT treatment plans were analysed using MCS and MU. A virtual phantom planning exercise was planned and independently measured using the PTW Octavius® phantom and seven29® 2D array (PTW-Freiburg GmbH, Freiburg, Germany). MCS and MU were compared with the median gamma index pass rates (2%/2 and 3%/3 mm) and plan quality. The treatment planning systems (TPS) were grouped by VMAT modelling being specifically designed for the linear accelerator manufacturer's own treatment delivery system (Type 1) or independent of vendor for VMAT delivery (Type 2). Differences in plan complexity (MCS and MU) between TPS types were compared. Results: For Varian® linear accelerators (Varian® Medical Systems, Inc., Palo Alto, CA), MCS and MU were significantly correlated with gamma pass rates. Type 2 TPS created poorer quality, more complex plans with significantly higher MUs and MCS than Type 1 TPS. Plan quality was significantly correlated with MU for Type 2 plans. A statistically significant correlation was observed between MU and MCS for all plans (R = −0.84, p < 0.01). Conclusion: MU and MCS have a role in assessing plan complexity in audits along with plan quality metrics. Plan complexity metrics give some indication of plan deliverability but should be analysed with plan quality. Advances in knowledge: Complexity metrics were investigated for a national rotational audit involving 34 institutions and they showed value. The metrics found that more complex plans were created for planning systems which were independent of vendor for VMAT delivery. PMID:26511276
Weykamp, Cas; John, Garry; Gillery, Philippe; English, Emma; Ji, Linong; Lenters-Westra, Erna; Little, Randie R.; Roglic, Gojka; Sacks, David B.; Takei, Izumi
2016-01-01
Background A major objective of the IFCC Task Force on implementation of HbA1c standardization is to develop a model to define quality targets for HbA1c. Methods Two generic models, the Biological Variation and Sigma-metrics models, are investigated. Variables in the models were selected for HbA1c, and data from EQA/PT programs were used to evaluate the suitability of the models for setting and evaluating quality targets within and between laboratories. Results In the Biological Variation model, 48% of individual laboratories and none of the 26 instrument groups met the minimum performance criterion. In the Sigma-metrics model, with the total allowable error (TAE) set at 5 mmol/mol (0.46% NGSP), 77% of the individual laboratories and 12 of 26 instrument groups met the 2-sigma criterion. Conclusion The Biological Variation and Sigma-metrics models were demonstrated to be suitable for setting and evaluating quality targets within and between laboratories. The Sigma-metrics model is more flexible, as both the TAE and the risk of failure can be adjusted to requirements related to, e.g., use for diagnosis/monitoring or requirements of (inter)national authorities. With the aim of reaching international consensus on advice regarding quality targets for HbA1c, the Task Force suggests the Sigma-metrics model as the model of choice, with default values of 5 mmol/mol (0.46%) for TAE and risk levels of 2 and 4 sigma for routine laboratories and laboratories performing clinical trials, respectively. These goals should serve as a starting point for discussion with international stakeholders in the field of diabetes. PMID:25737535
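The Sigma-metrics decision rule in the abstract is a one-line formula: sigma = (TAE - |bias|) / CV, with all quantities in the same units. The sketch below applies it with the Task Force's suggested TAE of 5 mmol/mol; the per-instrument bias and CV figures are invented for illustration, not data from the paper:

```python
def sigma_metric(tae, bias, cv):
    """Sigma-metrics model: sigma = (TAE - |bias|) / CV.

    tae: total allowable error; bias: systematic error vs. the reference;
    cv: analytical imprecision (SD). All in the same units (mmol/mol HbA1c).
    """
    return (tae - abs(bias)) / cv

TAE = 5.0  # mmol/mol, the Task Force's suggested default
# Hypothetical instruments, not figures from the paper.
for name, bias, cv in [("instrument A", 0.5, 1.0), ("instrument B", 1.5, 2.0)]:
    s = sigma_metric(TAE, bias, cv)
    verdict = "meets 2-sigma" if s >= 2 else "fails 2-sigma"
    print(name, round(s, 2), verdict)
```

Instrument A scores 4.5 sigma and would also satisfy the stricter 4-sigma level suggested for clinical-trial laboratories, while instrument B scores 1.75 and fails even the routine 2-sigma criterion; tightening either the bias or the CV raises the score.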
The role of complexity metrics in a multi-institutional dosimetry audit of VMAT.
McGarry, Conor K; Agnew, Christina E; Hussein, Mohammad; Tsang, Yatman; McWilliam, Alan; Hounsell, Alan R; Clark, Catharine H
2016-01-01
To demonstrate the benefit of complexity metrics such as the modulation complexity score (MCS) and monitor units (MUs) in multi-institutional audits of volumetric-modulated arc therapy (VMAT) delivery. 39 VMAT treatment plans were analysed using MCS and MU. A virtual phantom planning exercise was planned and independently measured using the PTW Octavius(®) phantom and seven29(®) 2D array (PTW-Freiburg GmbH, Freiburg, Germany). MCS and MU were compared with the median gamma index pass rates (2%/2 and 3%/3 mm) and plan quality. The treatment planning systems (TPS) were grouped by VMAT modelling being specifically designed for the linear accelerator manufacturer's own treatment delivery system (Type 1) or independent of vendor for VMAT delivery (Type 2). Differences in plan complexity (MCS and MU) between TPS types were compared. For Varian(®) linear accelerators (Varian(®) Medical Systems, Inc., Palo Alto, CA), MCS and MU were significantly correlated with gamma pass rates. Type 2 TPS created poorer quality, more complex plans with significantly higher MUs and MCS than Type 1 TPS. Plan quality was significantly correlated with MU for Type 2 plans. A statistically significant correlation was observed between MU and MCS for all plans (R = -0.84, p < 0.01). MU and MCS have a role in assessing plan complexity in audits along with plan quality metrics. Plan complexity metrics give some indication of plan deliverability but should be analysed with plan quality. Complexity metrics were investigated for a national rotational audit involving 34 institutions and they showed value. The metrics found that more complex plans were created for planning systems which were independent of vendor for VMAT delivery.
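The R = −0.84 correlation between MU and MCS reported above is a Pearson coefficient. A minimal sketch of that computation, using hypothetical MU/MCS pairs for illustration (the study's actual plan data are not reproduced here):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical plans: higher MU tending to lower MCS (a more complex plan).
mu  = [350, 420, 480, 510, 600]
mcs = [0.42, 0.38, 0.33, 0.31, 0.25]
print(round(pearson_r(mu, mcs), 2))
```

A negative coefficient here reflects the MCS convention in which lower scores indicate greater modulation complexity.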
Rubbi, Ivan; Magnani, Daniela; Naldoni, Giada; Di Lorenzo, Rosaria; Cremonini, Valeria; Capucci, Patrizia; Artioli, Giovanna; Ferri, Paola
2016-11-22
Alzheimer's disease is the most common degenerative dementia with a predominantly senile onset. The difficult management of altered behaviour related to this disorder, poorly responsive to pharmacological treatments, has stimulated growth in non-pharmacological interventions, such as music therapy, whose effectiveness has not been supported by the literature up to now. The aim of this study was to evaluate the efficacy of video-music therapy on quality of life improvement in Patients affected by Alzheimer's Disease (AD). A pre-post study was conducted in a residential facility. 32 AD Patients, who attended this facility daily to participate in supportive and rehabilitative programs, were treated with 2 cycles of 6 video-music-therapy sessions, which consisted of folk music and video recalling local traditions. In order to investigate their cognitive status, the Mini Mental State Examination (MMSE) was administered and Patients were divided into stages according to MMSE scores. After each session of video-music therapy, the Quality of Life in Alzheimer's Disease Scale (QOL-AD) was administered to our Patients. 21 AD Patients completed the 2 cycles of video-music therapy. Among them, only the Patients with questionable, mild and moderate neurocognitive impairment (MMSE Stages 1, 2, 3) reported an improvement in their quality of life, whereas the Patients with severe deterioration (MMSE Stage 4) did not report any change. Many items of QOL-AD improved, showing a statistically significant correlation with each other. Video-music therapy was a valuable tool for improving the quality of life only in Patients affected by less severe neurocognitive impairment.
NASA Astrophysics Data System (ADS)
Boumehrez, Farouk; Brai, Radhia; Doghmane, Noureddine; Mansouri, Khaled
2018-01-01
Recently, video streaming has attracted much attention and interest due to its capability to process and transmit large amounts of data. We propose a quality of experience (QoE) model relying on a high efficiency video coding (HEVC) encoder adaptation scheme, in turn based on multiple description coding (MDC), for video streaming. The main contributions of the paper are: (1) a performance evaluation of the new and emerging video coding standard HEVC/H.265, based on the variation of quantization parameter (QP) values over different video contents, to deduce their influence on the sequence to be transmitted; (2) an investigation of QoE support for multimedia applications in wireless networks, in which we inspect the impact of packet loss on the QoE of transmitted video sequences; (3) an HEVC encoder parameter adaptation scheme based on MDC, modeled with the encoder parameter and an objective QoE model. A comparative study revealed that the proposed MDC approach is effective for improving the transmission, with a peak signal-to-noise ratio (PSNR) gain of about 2 to 3 dB. Results show that a good choice of QP value can compensate for transmission channel effects and improve received video quality, although HEVC/H.265 is also sensitive to packet loss. The obtained results show the efficiency of our proposed method in terms of PSNR and mean opinion score (MOS).
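The PSNR figures quoted above follow from the standard definition, 10·log10(peak²/MSE). A minimal sketch for 8-bit samples; the frame values below are illustrative only:

```python
import math

def psnr(reference, degraded, peak=255.0):
    """Peak signal-to-noise ratio in dB between two equal-length
    sequences of pixel samples (peak = 255 for 8-bit video)."""
    mse = sum((r - d) ** 2 for r, d in zip(reference, degraded)) / len(reference)
    if mse == 0:
        return float('inf')  # identical frames
    return 10.0 * math.log10(peak ** 2 / mse)

# A uniform error of 16 grey levels on an 8-bit frame:
print(round(psnr([128] * 100, [144] * 100), 2))  # 24.05
```

A 2 to 3 dB gain, as reported for the MDC scheme, corresponds to roughly a 37-50% reduction in mean squared error.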
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-22
... Registration for ``Healthy Young America Video Contest'' AGENCY: Office of the Secretary, Assistant Secretary...-sponsoring the ``Healthy Young America'' Video Contest with two primary goals: First, directly reaching the uninsured population through video views and votes; and second, the production of high-quality videos that...
Video error concealment using block matching and frequency selective extrapolation algorithms
NASA Astrophysics Data System (ADS)
P. K., Rajani; Khaparde, Arti
2017-06-01
Error Concealment (EC) is a technique used at the decoder side to hide transmission errors. It works by analyzing spatial or temporal information from the available video frames. Recovering distorted video is important because video is used in applications such as video telephony, video conferencing, TV, DVD, internet video streaming, and video games. Retransmission-based and resilience-based methods are also used for error removal, but they add delay and redundant data, so error concealment is the best option for error hiding. In this paper, the Block Matching error concealment algorithm is compared with the Frequency Selective Extrapolation algorithm. Both works take manually corrupted video frames as input for concealment. The parameters used for objective quality measurement were PSNR (Peak Signal-to-Noise Ratio) and SSIM (Structural Similarity Index). The original video frames, along with the error frames, were processed with both error concealment algorithms. According to the simulation results, Frequency Selective Extrapolation shows better quality measures, with 48% higher PSNR and 94% higher SSIM than the Block Matching algorithm.
A Novel Approach to High Definition, High-Contrast Video Capture in Abdominal Surgery
Cosman, Peter H.; Shearer, Christopher J.; Hugh, Thomas J.; Biankin, Andrew V.; Merrett, Neil D.
2007-01-01
Objective: The aim of this study was to define the best available option for video capture of surgical procedures for educational and archival purposes, with a view to identifying methods of capturing high-quality footage and identifying common pitfalls. Summary Background Data: Several options exist for those who wish to record operative surgical techniques on video. While high-end equipment is an unnecessary expense for most surgical units, several techniques are readily available that do not require industrial-grade audiovisual recording facilities, but not all are suited to every surgical application. Methods: We surveyed and evaluated the available technology for video capture in surgery. Our evaluation included analyses of video resolution, depth of field, contrast, exposure, image stability, and frame composition, as well as considerations of cost, accessibility, utility, feasibility, and economies of scale. Results: Several video capture options were identified, and the strengths and shortcomings of each were catalogued. None of the commercially available options was deemed suitable for high-quality video capture of abdominal surgical procedures. A novel application of off-the-shelf technology was devised to address these issues. Conclusions: Excellent quality video capture of surgical procedures within deep body cavities is feasible using commonly available equipment and technology, with minimal technical difficulty. PMID:17414600
An Attention-Information-Based Spatial Adaptation Framework for Browsing Videos via Mobile Devices
NASA Astrophysics Data System (ADS)
Li, Houqiang; Wang, Yi; Chen, Chang Wen
2007-12-01
With the growing popularity of personal digital assistant devices and smart phones, more and more consumers are becoming quite enthusiastic to appreciate videos via mobile devices. However, limited display size of the mobile devices has been imposing significant barriers for users to enjoy browsing high-resolution videos. In this paper, we present an attention-information-based spatial adaptation framework to address this problem. The whole framework includes two major parts: video content generation and video adaptation system. During video compression, the attention information in video sequences will be detected using an attention model and embedded into bitstreams with proposed supplement-enhanced information (SEI) structure. Furthermore, we also develop an innovative scheme to adaptively adjust quantization parameters in order to simultaneously improve the quality of overall encoding and the quality of transcoding the attention areas. When the high-resolution bitstream is transmitted to mobile users, a fast transcoding algorithm we developed earlier will be applied to generate a new bitstream for attention areas in frames. The new low-resolution bitstream containing mostly attention information, instead of the high-resolution one, will be sent to users for display on the mobile devices. Experimental results show that the proposed spatial adaptation scheme is able to improve both subjective and objective video qualities.
A comprehensive quality control workflow for paired tumor-normal NGS experiments.
Schroeder, Christopher M; Hilke, Franz J; Löffler, Markus W; Bitzer, Michael; Lenz, Florian; Sturm, Marc
2017-06-01
Quality control (QC) is an important part of all NGS data analysis stages. Many available tools calculate QC metrics from different analysis steps of single-sample experiments (raw reads, mapped reads and variant lists). Multi-sample experiments, such as sequencing of tumor-normal pairs, require additional QC metrics to ensure validity of results. These multi-sample QC metrics still lack standardization. We therefore suggest a new workflow for QC of DNA sequencing of tumor-normal pairs. With this workflow, well-known single-sample QC metrics and additional metrics specific to tumor-normal pairs can be calculated. The segmentation into different tools offers high flexibility and allows reuse for other purposes. All tools produce qcML, a generic XML format for QC of -omics experiments. qcML uses quality metrics defined in an ontology, which was adapted for NGS. All QC tools are implemented in C++ and run both under Linux and Windows. Plotting requires Python 2.7 and matplotlib. The software is available under the GNU General Public License version 2 as part of the ngs-bits project: https://github.com/imgag/ngs-bits. christopher.schroeder@med.uni-tuebingen.de. Supplementary data are available at Bioinformatics online.
NEW CATEGORICAL METRICS FOR AIR QUALITY MODEL EVALUATION
Traditional categorical metrics used in model evaluations are "clear-cut" measures in that the model's ability to predict an exceedance is defined by a fixed threshold concentration and the metrics are defined by observation-forecast sets that are paired both in space and time. T...
Bergeron, Normand E.; Constantin, Pierre-Marc; Goerig, Elsa; Castro-Santos, Theodore R.
2016-01-01
We used video recording and near-infrared illumination to document the spatial behavior of brook trout of various sizes attempting to pass corrugated culverts under different hydraulic conditions. Semi-automated image analysis was used to digitize fish position at high temporal resolution inside the culvert, which allowed calculation of various spatial behavior metrics, including instantaneous ground and swimming speed, path complexity, distance from side walls, velocity preference ratio (mean velocity at fish lateral position/mean crosssectional velocity) as well as number and duration of stops in forward progression. The presentation summarizes the main results and discusses how they could be used to improve fish passage performance in culverts.
Li, Guang; Greene, Travis C; Nishino, Thomas K; Willis, Charles E
2016-09-08
The purpose of this study was to evaluate several of the standardized image quality metrics proposed by the American Association of Physicists in Medicine (AAPM) Task Group 150. The task group suggested region-of-interest (ROI)-based techniques to measure nonuniformity, minimum signal-to-noise ratio (SNR), number of anomalous pixels, and modulation transfer function (MTF). This study evaluated the effects of ROI size and layout on the image metrics by using four different ROI sets, assessed result uncertainty by repeating measurements, and compared results with two commercially available quality control tools, namely the Carestream DIRECTVIEW Total Quality Tool (TQT) and the GE Healthcare Quality Assurance Process (QAP). Seven Carestream DRX-1C (CsI) detectors on mobile DR systems and four GE FlashPad detectors in radiographic rooms were tested. Images were analyzed using MATLAB software that had been previously validated and reported. Our values for signal and SNR nonuniformity and MTF agree with values published by other investigators. Our results show that ROI size affects nonuniformity and minimum SNR measurements, but not detection of anomalous pixels. Exposure geometry affects all tested image metrics except for the MTF. TG-150 metrics in general agree with the TQT, but agree with the QAP only for local and global signal nonuniformity. The difference in SNR nonuniformity and MTF values between the TG-150 and QAP may be explained by differences in the calculation of noise and acquisition beam quality, respectively. TG-150's SNR nonuniformity metrics are also more sensitive to detector nonuniformity compared to the QAP. Our results suggest that a fixed ROI size should be used for consistency because nonuniformity metrics depend on ROI size. Ideally, detector tests should be performed at the exact calibration position. If not feasible, a baseline should be established from the mean of several repeated measurements.
Our study indicates that the TG-150 tests can be used as an independent standardized procedure for detector performance assessment. © 2016 The Authors.
Greene, Travis C.; Nishino, Thomas K.; Willis, Charles E.
2016-01-01
The purpose of this study was to evaluate several of the standardized image quality metrics proposed by the American Association of Physicists in Medicine (AAPM) Task Group 150. The task group suggested region-of-interest (ROI)-based techniques to measure nonuniformity, minimum signal-to-noise ratio (SNR), number of anomalous pixels, and modulation transfer function (MTF). This study evaluated the effects of ROI size and layout on the image metrics by using four different ROI sets, assessed result uncertainty by repeating measurements, and compared results with two commercially available quality control tools, namely the Carestream DIRECTVIEW Total Quality Tool (TQT) and the GE Healthcare Quality Assurance Process (QAP). Seven Carestream DRX-1C (CsI) detectors on mobile DR systems and four GE FlashPad detectors in radiographic rooms were tested. Images were analyzed using MATLAB software that had been previously validated and reported. Our values for signal and SNR nonuniformity and MTF agree with values published by other investigators. Our results show that ROI size affects nonuniformity and minimum SNR measurements, but not detection of anomalous pixels. Exposure geometry affects all tested image metrics except for the MTF. TG-150 metrics in general agree with the TQT, but agree with the QAP only for local and global signal nonuniformity. The difference in SNR nonuniformity and MTF values between the TG-150 and QAP may be explained by differences in the calculation of noise and acquisition beam quality, respectively. TG-150's SNR nonuniformity metrics are also more sensitive to detector nonuniformity compared to the QAP. Our results suggest that a fixed ROI size should be used for consistency because nonuniformity metrics depend on ROI size. Ideally, detector tests should be performed at the exact calibration position. If not feasible, a baseline should be established from the mean of several repeated measurements.
Our study indicates that the TG-150 tests can be used as an independent standardized procedure for detector performance assessment. PACS number(s): 87.57.-s, 87.57.C PMID:27685102
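The ROI-size dependence reported above can be illustrated with a simplified sketch of ROI-based nonuniformity analysis. The exact TG-150 definitions differ in detail; the block layout and the (max − min)/mean metric below are illustrative assumptions, not the task group's formulas.

```python
def block_means(image, rows, cols, roi):
    """Mean pixel value of each non-overlapping roi x roi block of a
    flat row-major image of size rows x cols (partial edge blocks dropped)."""
    means = []
    for r0 in range(0, rows - roi + 1, roi):
        for c0 in range(0, cols - roi + 1, roi):
            vals = [image[(r0 + r) * cols + (c0 + c)]
                    for r in range(roi) for c in range(roi)]
            means.append(sum(vals) / len(vals))
    return means

def global_nonuniformity(means):
    """Spread of ROI means relative to the overall mean signal."""
    return (max(means) - min(means)) / (sum(means) / len(means))
```

A perfectly flat detector response gives zero; larger ROIs average away local variation, which is why metrics of this kind depend on the chosen ROI size.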
NASA Astrophysics Data System (ADS)
Portlock, J.; Laird, H.
2015-12-01
Communitopia, a 501(c)3 organization, uses humor, new media, and the short video format to engage and empower audiences and improve climate literacy. Our main project, the Don't Just Sit There - Do Something! video series (http://djst.tv), takes the complex subject of climate science, breaks it down into digestible nuggets of short, funny video, and couples it with easy actions viewers can take to make a difference. The series has 25 episodes so far, and more than 80,000 views on YouTube. We are reaching our target audience of high-school-age and adult viewers in the United States (94% of viewers are known to fit this demographic). Don't Just Sit There - Do Something! uses a strategic model for breaking through the fear and dread around climate change in the general population. It uses humor, positivity and brevity to frame the issue, and gives the audience simple actions designed to empower in each self-contained episode. We approach each piece of the climate puzzle with scientific rigor, and cite all our sources. Our approach is light-hearted and fun, because it is a more productive way to have a conversation about tough issues than scolding and guilt. The series is ongoing, and we are always focused on climate change. To determine the efficacy of our approach and efforts, we measure video views and other metrics through our YouTube channel, compile feedback and comments through YouTube and other social media outlets, and track actions taken through web metrics (click-through rates). We are also currently working with the Behavioral and Community Health Sciences Department at the University of Pittsburgh Graduate School of Public Health to evaluate the videos' impact. From August-October 2015, we are using an online survey to evaluate the Don't Just Sit There - Do Something! series. 
We will assess viewers' climate change education and awareness, commitment to support action steps that alleviate climate change, and inclination to support policy action before and after watching. Where possible, we have aligned survey questions with those of other groups, such as the Yale Project on Climate Change Communication, to better assess our survey population vs. the general population. We will share data about the benefits of using this novel approach for climate change communication.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Copeland, Alex; Brown, C. Titus
2011-10-13
DOE JGI's Alex Copeland on "DOE JGI Quality Metrics" and Michigan State University's C. Titus Brown on "Approaches to Scaling and Improving Metagenome Assembly" at the Metagenomics Informatics Challenges Workshop held at the DOE JGI on October 12-13, 2011.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-05-16
... Elsevier, Quality & Metrics Department, Including Employees Located Throughout the United States Who Report to Miamisburg, OH; Lexis Nexis, a Subsidiary of Reed Elsevier, Quality & Metrics Department... Elsevier. The amended notice applicable to TA-W-80,205 and TA-W-80205A is hereby issued as follows: All...
Copeland, Alex; Brown, C. Titus
2018-04-27
DOE JGI's Alex Copeland on "DOE JGI Quality Metrics" and Michigan State University's C. Titus Brown on "Approaches to Scaling and Improving Metagenome Assembly" at the Metagenomics Informatics Challenges Workshop held at the DOE JGI on October 12-13, 2011.
Colometer: a real-time quality feedback system for screening colonoscopy.
Filip, Dobromir; Gao, Xuexin; Angulo-Rodríguez, Leticia; Mintchev, Martin P; Devlin, Shane M; Rostom, Alaa; Rosen, Wayne; Andrews, Christopher N
2012-08-28
To investigate the performance of a new software-based colonoscopy quality assessment system. The software-based system employs a novel image processing algorithm which detects the levels of image clarity, withdrawal velocity, and level of the bowel preparation in a real-time fashion from live video signal. Threshold levels of image blurriness and the withdrawal velocity below which the visualization could be considered adequate have initially been determined arbitrarily by review of sample colonoscopy videos by two experienced endoscopists. Subsequently, an overall colonoscopy quality rating was computed based on the percentage of the withdrawal time with adequate visualization (scored 1-5; 1, when the percentage was 1%-20%; 2, when the percentage was 21%-40%, etc.). In order to test the proposed velocity and blurriness thresholds, screening colonoscopy withdrawal videos from a specialized ambulatory colon cancer screening center were collected, automatically processed and rated. Quality ratings on the withdrawal were compared to the insertion in the same patients. Then, 3 experienced endoscopists reviewed the collected videos in a blinded fashion and rated the overall quality of each withdrawal (scored 1-5; 1, poor; 3, average; 5, excellent) based on 3 major aspects: image quality, colon preparation, and withdrawal velocity. The automated quality ratings were compared to the averaged endoscopist quality ratings using Spearman correlation coefficient. Fourteen screening colonoscopies were assessed. Adenomatous polyps were detected in 4/14 (29%) of the collected colonoscopy video samples. As a proof of concept, the Colometer software rated colonoscope withdrawal as having better visualization than the insertion in the 10 videos which did not have any polyps (average percent time with adequate visualization: 79% ± 5% for withdrawal and 50% ± 14% for insertion, P < 0.01). Withdrawal times during which no polyps were removed ranged from 4-12 min. 
The median quality rating from the automated system and the reviewers was 3.45 [interquartile range (IQR), 3.1-3.68] and 3.00 (IQR, 2.33-3.67), respectively, for all colonoscopy video samples. The automated rating revealed a strong correlation with the reviewers' rating (ρ coefficient = 0.65, P = 0.01). There was good correlation between the automated overall quality rating and the mean endoscopist withdrawal speed rating (Spearman r coefficient = 0.59, P = 0.03). There was no correlation of the automated overall quality rating with the mean endoscopist image quality rating (Spearman r coefficient = 0.41, P = 0.15). The results from a novel automated real-time colonoscopy quality feedback system strongly agreed with the endoscopists' quality assessments. Further study is required to validate this approach.
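The Spearman ρ used above is the Pearson correlation computed on tie-averaged ranks. A minimal sketch of that computation (the study's rating data are not reproduced; any example values are illustrative):

```python
import math

def _ranks(values):
    """1-based ranks; tied values receive the average of their ranks."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # extend j to the end of the current group of tied values
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average 1-based rank of the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = math.sqrt(sum((a - mx) ** 2 for a in rx))
    sy = math.sqrt(sum((b - my) ** 2 for b in ry))
    return cov / (sx * sy)
```

Rank-based correlation suits ordinal quality ratings like the 1-5 scales above, since it measures monotone agreement without assuming equal spacing between rating levels.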
Can Technology Improve the Quality of Colonoscopy?
Thirumurthi, Selvi; Ross, William A; Raju, Gottumukkala S
2016-07-01
In order for screening colonoscopy to be an effective tool in reducing colon cancer incidence, exams must be performed in a high-quality manner. Quality metrics have been presented by gastroenterology societies and now include higher adenoma detection rate targets than in the past. In many cases, the quality of colonoscopy can often be improved with simple low-cost interventions such as improved procedure technique, implementing split-dose bowel prep, and monitoring individuals' performances. Emerging technology has expanded our field of view and image quality during colonoscopy. We will critically review several technological advances in the context of quality metrics and discuss if technology can really improve the quality of colonoscopy.
YouTube as a source of patient information on gallstone disease.
Lee, Jun Suh; Seo, Ho Seok; Hong, Tae Ho
2014-04-14
To investigate the quality of YouTube videos on gallstone disease and to assess viewer response according to quality. A YouTube search was performed on September 18, 2013, using the keywords "gallbladder disease", "gallstone disease", and "gallstone treatment". Three researchers assessed the source, length, number of views, number of likes, and days since upload. The upload source was categorised as physician or hospital (PH), medical website or TV channel, commercial website (CW), or civilian. A usefulness score was devised to assess video quality and to categorise the videos as "very useful", "useful", "slightly useful", or "not useful". Videos with misleading content were categorised as "misleading". One hundred and thirty-one videos were analysed. Seventy-four videos (56.5%) were misleading, 36 (27.5%) were slightly useful, 15 (11.5%) were useful, three (2.3%) were very useful, and three (2.3%) were not useful. The mean number of likes (1.3 ± 1.5 vs 17.2 ± 38.0, P = 0.007) and number of views (756.3 ± 701.0 vs 8910.7 ± 17094.7, P = 0.001) were both significantly lower in the very useful group compared with the misleading group. All three very useful videos were PH videos. Among the 74 misleading videos, 64 (86.5%) were uploaded by a CW. There was no correlation between usefulness and the number of views, the number of likes, or the length. The "gallstone flush" was the method advocated most frequently by misleading videos (25.7%). More than half of the YouTube videos on gallstone disease are misleading. Credible videos uploaded by medical professionals and filtering by the staff of YouTube appear to be necessary.
The quality of video information on burn first aid available on YouTube.
Butler, Daniel P; Perry, Fiona; Shah, Zameer; Leon-Villapalos, Jorge
2013-08-01
To evaluate the clinical accuracy and delivery of information on thermal burn first aid available on the leading video-streaming website, YouTube. YouTube was searched using four separate search terms. The first 20 videos identified for each search term were included in the study if their primary focus was on thermal burn first aid. Videos were scored by two independent reviewers using a standardised scoring system, and the scores were totalled to give each video an overall score out of 20. A total of 47 videos were analysed. The average video score was 8.5 out of a possible 20. No videos scored full marks. A low correlation was found between the score given by the independent reviewers and the number of views the video received per month (Spearman's rank correlation coefficient = 0.03, p = 0.86). The current standard of videos covering thermal burn first aid available on YouTube is unsatisfactory. In addition, viewers do not appear to be drawn to videos of higher quality. Organisations involved in managing burns and providing first aid care should be encouraged to produce clear, structured videos that can be made available on leading video-streaming websites.
Moore, C S; Wood, T J; Beavis, A W; Saunderson, J R
2013-07-01
The purpose of this study was to examine the correlation between the quality of visually graded patient (clinical) chest images and a quantitative assessment of chest phantom (physical) images acquired with a computed radiography (CR) imaging system. The results of a previously published study, in which four experienced image evaluators graded computer-simulated postero-anterior chest images using a visual grading analysis scoring (VGAS) scheme, were used for the clinical image quality measurement. Contrast-to-noise ratio (CNR) and effective dose efficiency (eDE) were used as physical image quality metrics measured in a uniform chest phantom. Although optimal values of these physical metrics for chest radiography were not derived in this work, their correlation with VGAS in images acquired without an antiscatter grid across the diagnostic range of X-ray tube voltages was determined using Pearson's correlation coefficient. Clinical and physical image quality metrics increased with decreasing tube voltage. Statistically significant correlations between VGAS and CNR (R=0.87, p<0.033) and eDE (R=0.77, p<0.008) were observed. Medical physics experts may use the physical image quality metrics described here in quality assurance programmes and optimisation studies with a degree of confidence that they reflect the clinical image quality in chest CR images acquired without an antiscatter grid. A statistically significant correlation has been found between the clinical and physical image quality in CR chest imaging. The results support the value of using CNR and eDE in the evaluation of quality in clinical thorax radiography.
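The contrast-to-noise ratio used above as a physical image quality metric can be sketched as follows. This uses one common definition (mean signal difference over background noise); the ROI pixel values are hypothetical, and the study's exact measurement protocol on the chest phantom is not reproduced here.

```python
import math

def cnr(signal_roi, background_roi):
    """Contrast-to-noise ratio: |mean(signal) - mean(background)|
    divided by the background standard deviation."""
    ms = sum(signal_roi) / len(signal_roi)
    mb = sum(background_roi) / len(background_roi)
    var_b = sum((p - mb) ** 2 for p in background_roi) / len(background_roi)
    return abs(ms - mb) / math.sqrt(var_b)

# Hypothetical ROI pixel values from a uniform phantom image:
print(cnr([110, 112, 108, 110], [100, 102, 98, 100]))  # ≈ 7.07
```

A metric of this shape captures the trade-off the study reports: lowering tube voltage raises contrast (the numerator) faster than it raises noise, so both CNR and the visual grading scores improve together.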
Store-and-feedforward adaptive gaming system for hand-finger motion tracking in telerehabilitation.
Lockery, Daniel; Peters, James F; Ramanna, Sheela; Shay, Barbara L; Szturm, Tony
2011-05-01
This paper presents a telerehabilitation system that encompasses a webcam and a store-and-feedforward adaptive gaming system for tracking finger-hand movement of patients during local and remote therapy sessions. Gaming-event signals and webcam images are recorded as part of a gaming session and then forwarded to an online healthcare content management system (CMS) that separates incoming information into individual patient records. The CMS makes it possible for clinicians to log in remotely and review gathered data using online reports that are provided to help with signal and image analysis using various numerical measures and plotting functions. Signals from a six-degree-of-freedom magnetic motion tracker (MMT) provide a basis for video-game sprite control; the MMT provides a path for motion signals between common objects manipulated by a patient and a computer game. During a therapy session, a webcam that captures images of the hand, together with a number of performance metrics, provides insight into the quality, efficiency, and skill of a patient.
Sensors for 3D Imaging: Metric Evaluation and Calibration of a CCD/CMOS Time-of-Flight Camera.
Chiabrando, Filiberto; Chiabrando, Roberto; Piatti, Dario; Rinaudo, Fulvio
2009-01-01
3D imaging with Time-of-Flight (ToF) cameras is a promising recent technique which allows 3D point clouds to be acquired at video frame rates. However, the distance measurements of these devices are often affected by systematic errors which decrease the quality of the acquired data. In order to evaluate these errors, some experimental tests on a CCD/CMOS ToF camera sensor, the SwissRanger (SR)-4000 camera, were performed and are reported in this paper. In particular, two main aspects are treated. The first is the calibration of the distance measurements of the SR-4000 camera, which involves evaluation of the camera warm-up period, assessment of the distance measurement error, and a study of the influence of camera orientation with respect to the observed object on the distance measurements. The second aspect concerns the photogrammetric calibration of the amplitude images delivered by the camera, using a purpose-built multi-resolution field made of high-contrast targets.
Fine pitch thermosonic wire bonding: analysis of state-of-the-art manufacturing capability
NASA Astrophysics Data System (ADS)
Cavasin, Daniel
1995-09-01
A comprehensive process characterization was performed at the Motorola plastic package assembly site in Selangor, Malaysia, to document the current fine pitch wire bond process capability, using state-of-the-art equipment, in an actual manufacturing environment. Two machines, representing the latest technology from two separate manufacturers, were operated one shift per day for five days, bonding a 132-lead Plastic Quad Flat Pack. Using a test device specifically designed for fine pitch wire bonding, the bonding programs were alternated between 107 micrometer and 92 micrometer pad pitch, running each pitch for a total of 1600 units per machine. Wire, capillary type, and related materials were standardized and commercially available. A video metrology measurement system, with a demonstrated six-sigma repeatability bandwidth of 0.51 micrometers, was utilized to measure the bonded units for bond dimensions and placement. Standard Quality Assurance (QA) metrics were also performed. Results indicate that state-of-the-art thermosonic wire bonding can achieve acceptable assembly yields at these fine pad pitches.
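The six-sigma repeatability bandwidth quoted above is simply six times the standard deviation of repeated measurements of the same feature. A minimal Python sketch of that computation (the helper name and the gauge readings are hypothetical, not from the study):

```python
import statistics

def six_sigma_band(measurements):
    """Repeatability bandwidth: six times the sample standard deviation
    of repeated gauge readings of the same feature (hypothetical helper)."""
    return 6 * statistics.stdev(measurements)

# Repeated readings of one bond placement, in micrometers
readings = [50.00, 50.10, 49.95, 50.05, 49.90, 50.00]
print(round(six_sigma_band(readings), 3))  # 6-sigma band in micrometers
```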
NASA Astrophysics Data System (ADS)
Ho, Chien-Peng; Yu, Jen-Yu; Lee, Suh-Yin
2011-12-01
Recent advances in modern television systems have had profound consequences for the scalability, stability, and quality of transmitted digital data signals. This is of particular significance for peer-to-peer (P2P) video-on-demand (VoD) related platforms, faced with an immediate and growing demand for reliable service delivery. In response to demands for high-quality video, the key objectives in the construction of the proposed framework were user satisfaction with perceived video quality and the effective utilization of available resources on P2P VoD networks. This study developed a peer-based promoter to support online advertising in P2P VoD networks based on an estimation of video distortion prior to the replication of data stream chunks. The proposed technology enables the recovery of lost video using replicated stream chunks in real time. Load balance is achieved by adjusting the replication level of each candidate group according to the degree-of-distortion, thereby enabling a significant reduction in server load and increased scalability in the P2P VoD system. This approach also promotes the use of advertising as an efficient tool for commercial promotion. Results indicate that the proposed system efficiently satisfies the given fault tolerances.
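The load-balancing idea described above, adjusting each candidate group's replication level according to its degree-of-distortion, can be sketched as a proportional allocation. A minimal Python illustration (function name and numbers are hypothetical; the paper's actual estimator is more involved):

```python
def replication_levels(distortions, total_replicas):
    """Allocate stream-chunk replicas proportionally to each candidate
    group's degree-of-distortion (worse distortion -> more replicas),
    guaranteeing at least one replica per group."""
    total = sum(distortions)
    return [max(1, round(total_replicas * d / total)) for d in distortions]

# Three candidate groups with decreasing estimated distortion
print(replication_levels([0.5, 0.3, 0.2], 10))
```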
Backward Registration Based Aspect Ratio Similarity (ARS) for Image Retargeting Quality Assessment.
Zhang, Yabin; Fang, Yuming; Lin, Weisi; Zhang, Xinfeng; Li, Leida
2016-06-28
During the past few years, various kinds of content-aware image retargeting operators have been proposed for image resizing. However, the lack of effective objective retargeting quality assessment metrics limits the further development of image retargeting techniques. Unlike in traditional Image Quality Assessment (IQA), the quality degradation during image retargeting is caused by deliberate retargeting modifications, and the difficulty for Image Retargeting Quality Assessment (IRQA) lies in the alteration of the image resolution and content, which makes it impossible to evaluate the quality degradation directly as traditional IQA does. In this paper, we interpret image retargeting in a unified framework of resampling grid generation and forward resampling. We show that geometric change estimation is an efficient way to clarify the relationship between the images. We formulate the geometric change estimation as a backward registration problem with a Markov Random Field (MRF) and provide an effective solution. The geometric change aims to provide evidence about how the original image is resized into the target image. Under the guidance of the geometric change, we develop a novel Aspect Ratio Similarity (ARS) metric to evaluate the visual quality of retargeted images by exploiting local block changes with a visual importance pooling strategy. Experimental results on the publicly available MIT RetargetMe and CUHK datasets demonstrate that the proposed ARS can predict the visual quality of retargeted images more accurately than state-of-the-art IRQA metrics.
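The core intuition behind an ARS-style score, that a retargeted block loses quality when its horizontal and vertical scale factors diverge, can be sketched in a few lines of Python. This toy version assumes per-block scale factors are already available from some registration step (in the paper they come from the MRF backward registration); all names and values here are illustrative:

```python
def aspect_ratio_similarity(scales, weights):
    """Toy ARS-style score: for each local block, compare horizontal (sx)
    and vertical (sy) scale factors; a block keeps its aspect ratio when
    sx == sy, scoring 1.0.  Block scores are pooled with visual-importance
    weights.  `scales` is a list of (sx, sy) pairs (hypothetical
    registration output)."""
    total_w = sum(weights)
    score = 0.0
    for (sx, sy), w in zip(scales, weights):
        score += w * (min(sx, sy) / max(sx, sy))  # 1.0 = ratio preserved
    return score / total_w

# Two uniformly scaled blocks and one squeezed horizontally,
# with the squeezed block judged twice as visually important
blocks = [(0.8, 0.8), (1.0, 1.0), (0.5, 1.0)]
importance = [1.0, 1.0, 2.0]
print(round(aspect_ratio_similarity(blocks, importance), 3))
```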
NASA Astrophysics Data System (ADS)
Le, Minh Tuan; Nguyen, Congdu; Yoon, Dae-Il; Jung, Eun Ku; Jia, Jie; Kim, Hae-Kwang
2007-12-01
In this paper, we propose a method for 3D-graphics-to-video encoding and streaming, embedded in a remote interactive 3D visualization system, for rapidly representing a 3D scene on mobile devices without having to download it from the server. In particular, a 3D-graphics-to-video framework is presented that increases the visual quality of regions of interest (ROI) of the video by allocating more bits to the ROI during H.264 video encoding. The ROI are identified by projecting 3D objects onto a 2D plane during rasterization. The system allows users to navigate the 3D scene and interact with objects of interest to query their descriptions. We developed an adaptive media streaming server that can provide an adaptive video stream, in terms of object-based quality, to the client according to the user's preferences and the variation of network bandwidth. Results show that with ROI mode selection, the PSNR of the test samples changes only slightly while the visual quality of the objects of interest increases noticeably.
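ROI-weighted bit allocation of the kind described above is commonly realized by lowering the quantization parameter (QP) inside ROI macroblocks, so those blocks are coded with finer quantization. A minimal sketch (the offset values and function name are hypothetical, not taken from the paper):

```python
def assign_qp(roi_mask, base_qp=32, roi_offset=-6):
    """Per-macroblock QP map: blocks flagged as ROI get a lower QP
    (finer quantization, more bits); background keeps the base QP."""
    return [[base_qp + roi_offset if roi else base_qp for roi in row]
            for row in roi_mask]

# 2x3 macroblock grid; 1 marks blocks covered by a projected 3D object
mask = [[0, 1, 1],
        [0, 1, 0]]
print(assign_qp(mask))
```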
Pillai, Ajish; Menon, Radha; Oustecky, David; Ahmad, Asyia
2017-07-24
Quality of bowel preparation and patient knowledge remain major barriers to completing colorectal cancer screening. Few studies have tested unique ways to impact patient understanding centering on interactive computer programs, pictures, and brochures. Two studies explored instructional videos but focused on patient compliance and anxiety as endpoints. Furthermore, excessive video length and content may limit their impact on a broad patient population. No study so far has examined a video's impact on preparation quality and patient understanding of the colonoscopy procedure. We conducted a single-blinded prospective study of inner-city patients presenting for a first-time screening colonoscopy. During their initial visit, patients were randomized to watch an instructional colonoscopy video or a video discussing gastroesophageal reflux disease (GERD). All patients watched a 6-minute video with the same spokesperson, completed a demographic questionnaire (Supplemental Digital Content 1, http://links.lww.com/JCG/A352), and were enrolled only if screened within 30 days of their visit. On the day of the colonoscopy, patients completed a 14-question quiz of their knowledge. A blinded endoscopist graded patient preparations based on the Ottawa scale. All authors had access to the study data and reviewed and approved the final manuscript. Among the 104 subjects enrolled in the study, 56 were in the colonoscopy video group, 48 were in the GERD video group, and 12 were excluded. Overall, 48% were male and 52% female; 90% of patients had less than a high school education, 76% were African American, and 67% used a 4 L split-dose preparation. There were no differences between the video groups with regard to any of the above categories. Comparisons between the 2 groups revealed that the colonoscopy video group had a significantly better Ottawa bowel preparation score (4.77 vs. 6.85; P=0.01) than the GERD video group.
The colonoscopy video group also had fewer inadequate repeat bowel preparations than the GERD video group (9% vs. 23%; P<0.01). The overall score on the knowledge questionnaire (Supplemental Digital Content 1, http://links.lww.com/JCG/A352) was significantly higher in the colonoscopy video group than in the GERD video group (12.77 vs. 11.08; P<0.001). In all patients, the overall quiz score positively correlated with preparation quality (odds ratio, 2.31; confidence interval, 1.35-3.94; P<0.001). Our unique population represented an overwhelmingly under-educated (85% had a high school education or less) and minority group (76% African American). They are among those most at risk of having multiple barriers, such as comprehension and reading difficulties, resulting in poor preparation examinations and no-shows to procedures. Our instructional video proved to be high yield in this population. The patients assigned to watch the colonoscopy video showed a significant increase in "excellent"-grade adequate bowel preparation quality by >23% and a significant decrease in "inadequate" bowel preparations by almost 50%. Our study shows that an educational video can improve both patient comprehension with regard to all aspects of colonoscopy and bowel preparation quality. ClinicalTrials.gov number, NCT02906969.
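The odds ratio with confidence interval reported above is a standard computation on a 2x2 table. A minimal Python sketch using the Wald interval (the counts below are hypothetical illustrations, not the study's data):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
        a = exposed & outcome,   b = exposed & no outcome
        c = unexposed & outcome, d = unexposed & no outcome"""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of log odds ratio
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: adequate prep vs. quiz score above/below median
or_, lo, hi = odds_ratio_ci(30, 10, 15, 20)
print(round(or_, 2), round(lo, 2), round(hi, 2))
```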
Metric-driven harm: an exploration of unintended consequences of performance measurement.
Rambur, Betty; Vallett, Carol; Cohen, Judith A; Tarule, Jill Mattuck
2013-11-01
Performance measurement is an increasingly common element of the US health care system. Although such metrics typically serve as a proxy for high-quality outcomes, there has been little systematic investigation of their potential negative unintended consequences, including metric-driven harm. This case study details an incident of post-surgical metric-driven harm and offers Smith's 1995 work and a patient-centered, context-sensitive metric model for potential adoption by nurse researchers and clinicians. Implications for further research are discussed. © 2013.
Real-time demonstration hardware for enhanced DPCM video compression algorithm
NASA Technical Reports Server (NTRS)
Bizon, Thomas P.; Whyte, Wayne A., Jr.; Marcopoli, Vincent R.
1992-01-01
The lack of available wideband digital links as well as the complexity of implementation of bandwidth-efficient digital video CODECs (encoder/decoder) has worked to keep the cost of digital television transmission too high to compete with analog methods. Terrestrial and satellite video service providers, however, are now recognizing the potential gains that digital video compression offers and are proposing to incorporate compression systems to increase the number of available program channels. NASA is similarly recognizing the benefits of and trend toward digital video compression techniques for transmission of high quality video from space and, therefore, has developed a digital television bandwidth compression algorithm to process standard National Television Systems Committee (NTSC) composite color television signals. The algorithm is based on differential pulse code modulation (DPCM), but additionally utilizes a non-adaptive predictor, non-uniform quantizer and multilevel Huffman coder to reduce the data rate substantially below that achievable with straight DPCM. The non-adaptive predictor and multilevel Huffman coder combine to set this technique apart from other DPCM encoding algorithms. All processing is done on an intra-field basis to prevent motion degradation and minimize hardware complexity. Computer simulations have shown the algorithm will produce broadcast quality reconstructed video at an average transmission rate of 1.8 bits/pixel. Hardware implementation of the DPCM circuit, non-adaptive predictor and non-uniform quantizer has been completed, providing real-time demonstration of the image quality at full video rates. Video sampling/reconstruction circuits have also been constructed to accomplish the analog video processing necessary for the real-time demonstration. Performance results for the completed hardware compare favorably with simulation results.
Hardware implementation of the multilevel Huffman encoder/decoder is currently under development along with implementation of a buffer control algorithm to accommodate the variable data rate output of the multilevel Huffman encoder. A video CODEC of this type could be used to compress NTSC color television signals where high quality reconstruction is desirable (e.g., Space Station video transmission, transmission direct-to-the-home via direct broadcast satellite systems or cable television distribution to system headends and direct-to-the-home).
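The DPCM structure sketched in this abstract, a predictor followed by a non-uniform quantizer over the prediction error, can be illustrated in a few lines. This is a toy scalar version with a previous-pixel predictor and made-up quantizer levels, not the NASA codec's actual predictor or tables:

```python
def quantize(e, levels=(-24, -12, -4, 0, 4, 12, 24)):
    """Non-uniform quantizer: snap the prediction error to the nearest
    of a small set of reconstruction levels (coarser for large errors).
    The level set here is invented for illustration."""
    return min(levels, key=lambda q: abs(q - e))

def dpcm_encode(pixels):
    """Previous-pixel predictor; emits quantized prediction errors.
    The predictor tracks the decoder's reconstruction to avoid drift."""
    pred, out = 0, []
    for p in pixels:
        q = quantize(p - pred)
        out.append(q)
        pred += q          # decoder-matched reconstruction
    return out

def dpcm_decode(errors):
    pred, out = 0, []
    for q in errors:
        pred += q
        out.append(pred)
    return out

line = [0, 10, 22, 30, 31, 31, 8]
enc = dpcm_encode(line)
print(enc, dpcm_decode(enc))
```

Because the encoder predicts from its own reconstruction rather than the true pixels, quantization error stays bounded instead of accumulating along the scan line.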
Komatireddy, Ravi; Chokshi, Anang; Basnett, Jeanna; Casale, Michael; Goble, Daniel; Shubert, Tiffany
2014-08-01
Tele-rehabilitation technologies that track human motion could enable physical therapy in the home. To be effective, these systems need to collect critical metrics without supervision, both in real time and in a store-and-forward capacity. The first step of this process is to determine whether physical therapists (PTs) are able to accurately assess the quality and quantity of an exercise repetition captured by a tele-rehabilitation platform. The purpose of this pilot project was to determine the level of agreement on the quality and quantity of an exercise delivered and assessed by the Virtual Exercise Rehabilitation Assistant (VERA) and seven PTs. Ten healthy subjects were instructed by a PT in how to perform four lower extremity exercises. Subjects then performed each exercise delivered by VERA, which counted repetitions and rated quality. Seven PTs independently reviewed video of each subject's session and assessed repetition quality. The percent difference in total repetitions and the distribution of repetition-quality ratings were compared between VERA and the PTs. VERA counted 426 repetitions across 10 subjects performing the four different exercises, while the mean repetition count from the PT panel was 426.7 (SD = 0.8). VERA underestimated the total repetitions performed by 0.16% (SD = 0.03%, 95% CI 0.12-0.22). Chi-square analysis across raters was χ² = 63.17 (df = 6, p < .001), suggesting significant variance in at least one rater. The VERA count of repetitions was accurate in comparison to the seven-member panel of PTs. For exercise quality, VERA was able to rate 426 exercise repetitions across 10 patients and four different exercises in a manner consistent with five out of seven experienced PTs.
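The chi-square analysis across raters mentioned above is a standard contingency-table computation. A pure-Python sketch of the Pearson statistic (the table below is hypothetical, not the study's ratings; a p-value would normally come from scipy.stats.chi2_contingency):

```python
def chi_square_stat(observed):
    """Pearson chi-square statistic for an R x C contingency table
    (rows = raters, columns = quality categories)."""
    rows = [sum(r) for r in observed]
    cols = [sum(c) for c in zip(*observed)]
    total = sum(rows)
    stat = 0.0
    for i, row in enumerate(observed):
        for j, o in enumerate(row):
            e = rows[i] * cols[j] / total   # expected count under independence
            stat += (o - e) ** 2 / e
    return stat

# Hypothetical: three raters x (good, poor) repetition ratings;
# the third rater is far stricter than the others
table = [[40, 10], [38, 12], [20, 30]]
print(round(chi_square_stat(table), 2))
```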
Educational quality of YouTube videos on knee arthrocentesis.
Fischer, Jonas; Geurts, Jeroen; Valderrabano, Victor; Hügle, Thomas
2013-10-01
Knee arthrocentesis is a commonly performed diagnostic and therapeutic procedure in rheumatology and orthopedic surgery. Classic teaching of arthrocentesis skills relies on hands-on practice under supervision. Video-based online teaching is an increasingly utilized educational tool in higher and clinical education. YouTube is a popular video-sharing Web site that can be accessed as a teaching source. The objective of this study was to assess the educational value of YouTube videos on knee arthrocentesis posted by health professionals and institutions during the period from 2008 to 2012. The YouTube video database was systematically searched using 5 search terms related to knee arthrocentesis. Two independent clinical reviewers assessed videos for procedural technique and educational value using a 5-point global score, ranging from 1 = poor quality to 5 = excellent educational quality. As validated international guidelines are lacking, we used the guidelines of the Swiss Society of Rheumatology as the criterion standard for the procedure. Of more than a thousand search results, 13 videos met the inclusion criteria. Of those, 2 contained additional animated video material: one was purely animated, and one was a checklist. The average length was 3.31 ± 2.28 minutes. The most popular video had 1388 hits per month. Our mean global score for educational value was 3.1 ± 1.0. Eight videos (62%) were considered useful for teaching purposes. Use of a "no-touch" procedure, meaning that once disinfected the skin remains untouched before needle penetration, was present in all videos. Six videos (46%) demonstrated full sterile conditions. There was no clear preference for a medial (n = 8) versus lateral (n = 5) approach. A small number of YouTube videos on knee arthrocentesis appeared to be suitable for application in a Web-based format for medical students, fellows, and residents.
However, the low average global score for overall educational value suggests that future video-based instructional materials on YouTube would need improvement before regular use for teaching could be recommended.
Informative-frame filtering in endoscopy videos
NASA Astrophysics Data System (ADS)
An, Yong Hwan; Hwang, Sae; Oh, JungHwan; Lee, JeongKyu; Tavanapong, Wallapak; de Groen, Piet C.; Wong, Johnny
2005-04-01
Advances in video technology are being incorporated into today's healthcare practice. For example, colonoscopy is an important screening tool for colorectal cancer. Colonoscopy allows for the inspection of the entire colon and provides the ability to perform a number of therapeutic operations during a single procedure. During a colonoscopic procedure, a tiny video camera at the tip of the endoscope generates a video signal of the internal mucosa of the colon. The video data are displayed on a monitor for real-time analysis by the endoscopist. Other endoscopic procedures include upper gastrointestinal endoscopy, enteroscopy, bronchoscopy, cystoscopy, and laparoscopy. However, a significant number of out-of-focus frames are included in these videos, since current endoscopes are equipped with a single, wide-angle lens that cannot be focused. The out-of-focus frames do not hold any useful information. To reduce the burden of further processes such as computer-aided image processing or human experts' examinations, these frames need to be removed. We refer to an out-of-focus frame as a non-informative frame and to an in-focus frame as an informative frame. We propose a new technique to classify the video frames into two classes, informative and non-informative, using a combination of Discrete Fourier Transform (DFT), texture analysis, and K-means clustering. The proposed technique can evaluate the frames without any reference image and does not need any predefined threshold value. Our experimental studies indicate that it achieves over 96% on four different performance metrics (i.e., precision, sensitivity, specificity, and accuracy).
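The classification idea described above can be sketched as follows: blurry frames concentrate spectral energy at low frequencies, so a frequency-domain sharpness score can be clustered into informative and non-informative groups without a fixed threshold. This is a simplified stand-in for the paper's DFT + texture + K-means pipeline; all function names and the toy 1-D two-means routine are mine:

```python
import numpy as np

def hf_energy(frame):
    """Fraction of spectral energy outside the central low-frequency core;
    defocused (non-informative) frames concentrate energy near DC."""
    f = np.fft.fftshift(np.fft.fft2(frame))
    mag = np.abs(f) ** 2
    h, w = mag.shape
    ch, cw = h // 4, w // 4
    low = mag[h//2 - ch:h//2 + ch, w//2 - cw:w//2 + cw].sum()
    return 1.0 - low / mag.sum()

def two_means(xs, iters=20):
    """Tiny 1-D 2-means: returns a threshold separating the two clusters,
    so no predefined cutoff value is needed."""
    c0, c1 = min(xs), max(xs)
    for _ in range(iters):
        g0 = [x for x in xs if abs(x - c0) <= abs(x - c1)]
        g1 = [x for x in xs if abs(x - c0) > abs(x - c1)]
        c0, c1 = sum(g0) / len(g0), sum(g1) / len(g1)
    return (c0 + c1) / 2

rng = np.random.default_rng(0)
sharp = rng.random((32, 32))          # texture-rich, in-focus frame
blurry = np.full((32, 32), 0.5)       # featureless, defocused frame
scores = [hf_energy(sharp), hf_energy(blurry)]
t = two_means(scores)
labels = ["informative" if s > t else "non-informative" for s in scores]
print(labels)
```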
In the Public Interest: The Benefits of High Quality Child Care. [Videotape].
ERIC Educational Resources Information Center
Toronto Univ. (Ontario). Centre for Urban and Community Studies.
Noting that, in Canada, 10,000 child care programs serve children and families of diverse cultural and socioeconomic backgrounds, this video examines the characteristics and benefits of high quality programs. The 22-minute video first cites two reasons why quality child care is a current issue: the increasing number of women in the workforce and…
Using Digital Videos to Enhance Teacher Preparation
ERIC Educational Resources Information Center
Dymond, Stacy K.; Bentz, Johnell L.
2006-01-01
The technology to produce high quality, digital videos is widely available, yet its use in teacher preparation remains largely overlooked. A digital video library was created to augment instruction in a special education methods course for preservice elementary education teachers. The videos illustrated effective strategies for working with…
Spatial resampling of IDR frames for low bitrate video coding with HEVC
NASA Astrophysics Data System (ADS)
Hosking, Brett; Agrafiotis, Dimitris; Bull, David; Easton, Nick
2015-03-01
As the demand for higher quality and higher resolution video increases, many applications fail to meet this demand due to low bandwidth restrictions. One factor contributing to this problem is the high bitrate requirement of the intra-coded Instantaneous Decoding Refresh (IDR) frames featured in all video coding standards. Frequent coding of IDR frames is essential for error resilience in order to prevent the occurrence of error propagation. However, as each one consumes a huge portion of the available bitrate, the quality of subsequently coded frames is hindered by high levels of compression. This work presents a new technique, known as Spatial Resampling of IDR Frames (SRIF), and shows how it can increase rate-distortion performance by providing a higher and more consistent level of video quality at low bitrates.
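The central trade-off in a scheme like SRIF, spending fewer bits on an IDR frame by coding it at reduced resolution and resampling it back after decoding, can be illustrated with a toy resampler (average-pool down, nearest-neighbour up). The paper's actual filters are of course more sophisticated; this only shows the mechanism and the resulting reconstruction error:

```python
import numpy as np

def downsample2(frame):
    """Average-pool 2x2 blocks: the IDR frame is coded at quarter
    resolution, so it costs far fewer bits at the same quantizer."""
    h, w = frame.shape
    return frame.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample2(frame):
    """Nearest-neighbour reconstruction back to full resolution."""
    return frame.repeat(2, axis=0).repeat(2, axis=1)

idr = np.arange(16, dtype=float).reshape(4, 4)   # toy 4x4 IDR frame
rec = upsample2(downsample2(idr))
err = float(np.abs(rec - idr).mean())            # resampling distortion
print(rec.shape, err)
```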
Huang, Chiung-Yu; Chang, En-Ting; Hsieh, Yuan-Mei; Lai, Hui-Ling
2017-10-01
The present study aimed to compare the effects of music and music video interventions on objective and subjective sleep quality in adults with sleep disturbances. A randomized controlled trial was performed on 71 adults who were recruited from the outpatient department of a 1100-bed hospital and randomly assigned to the control, music, and music video groups. During the 4 test days (Days 2-5), for 30 min before nocturnal sleep, the music group listened to Buddhist music and the music video group watched Buddhist music videos. They were instructed not to listen to the music or watch the music videos on the first night (pretest, Day 1) and the final night (Day 6). The control group received no intervention. Sleep was assessed using a one-channel electroencephalography machine in their homes and by self-reported questionnaires. The music and music video interventions had no effect on any objective sleep parameters, as measured using electroencephalography. However, the music group had significantly longer subjective total sleep time than the music video group did (Wald χ² = 6.23, p=0.04). Our study results increase knowledge regarding music interventions for sleep quality in adults with sleep disturbances. This study suggested that more research is required to strengthen the scientific knowledge of the effects of music intervention on sleep quality in adults with sleep disturbances. (ISRCTN94971645). Copyright © 2017 Elsevier Ltd. All rights reserved.
Bernard, Aaron W.; Ceccolini, Gabbriel; Feinn, Richard; Rockfeld, Jennifer; Rosenberg, Ilene; Thomas, Listy; Cassese, Todd
2017-01-01
ABSTRACT Background: Performance feedback is considered essential to clinical skills development. Formative objective structured clinical exams (F-OSCEs) often include immediate feedback by standardized patients. Students can also be provided access to performance metrics, including scores, checklists, and video recordings, after the F-OSCE to supplement this feedback. How often students choose to review these data, and how review impacts future performance, has not been documented. Objective: We suspect student review of F-OSCE performance data is variable. We hypothesize that students who review these data have better performance on subsequent F-OSCEs compared to those who do not. We also suspect that frequency of data review can be improved with faculty involvement in the form of student-faculty debriefing meetings. Design: Simulation recording software tracks and time-stamps student review of performance data. We investigated a cohort of first- and second-year medical students from the 2015-16 academic year. Basic descriptive statistics were used to characterize frequency of data review, and a linear mixed-model analysis was used to determine relationships between data review and future F-OSCE performance. Results: Students reviewed scores (64%), checklists (42%), and videos (28%) with decreasing frequency. Frequency of review of all metrics and modalities improved when student-faculty debriefing meetings were conducted (p<.001). Among 92 first-year students, checklist review was associated with improved performance on subsequent F-OSCEs (p = 0.038) by 1.07 percentage points on a scale of 0-100. Among 86 second-year students, no review modality was associated with improved performance on subsequent F-OSCEs. Conclusion: Medical students review F-OSCE checklists and video recordings less than 50% of the time when not prompted. Student-faculty debriefing meetings increased student data reviews.
First-year students' review of checklists on F-OSCEs was associated with increases in performance on subsequent F-OSCEs; however, this outcome was not observed among second-year students. PMID:28521646
Hadjisolomou, Stavros P.; El-Haddad, George
2017-01-01
Coleoid cephalopods (squid, octopus, and sepia) are renowned for their elaborate body patterning capabilities, which are employed for camouflage or communication. The specific chromatic appearance of a cephalopod, at any given moment, is a direct result of the combined action of their intradermal pigmented chromatophore organs and reflecting cells. Therefore, a lot can be learned about the cephalopod coloration system by video recording and analyzing the activation of individual chromatophores in time. The fact that adult cephalopods have small chromatophores, up to several hundred thousand in number, makes measurement and analysis over several seconds a difficult task. However, current advancements in videography enable high-resolution and high-framerate recording, which can be used to record chromatophore activity in more detail and with greater accuracy in both the space and time domains. In turn, the additional pixel information and extra frames per video from such recordings result in large video files of several gigabytes, even when the recording spans only a few minutes. We created a software plugin, “SpotMetrics,” that can automatically analyze high-resolution, high-framerate video of chromatophore organ activation in time. This image analysis software can track hundreds of individual chromatophores over several hundred frames to provide measurements of size and color. This software may also be used to measure differences in chromatophore activation during different behaviors, which will contribute to our understanding of the cephalopod sensorimotor integration system. In addition, this software can potentially be utilized to detect numbers of round objects and size changes in time, such as eye pupil size or the number of bacteria in a sample. Thus, we are making this software plugin freely available as open-source because we believe it will be of benefit to other colleagues both in the cephalopod biology field and within other disciplines. PMID:28298896
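The "size changes in time" measurement described above reduces, in the simplest case, to segmenting a dark round object in each frame and counting its pixels. A minimal Python sketch on synthetic frames; this illustrates the idea only and is not the SpotMetrics algorithm:

```python
import numpy as np

def disk_mask(size, center, radius):
    """Synthetic frame content: a single dark chromatophore on a
    light mantle background."""
    yy, xx = np.mgrid[:size, :size]
    return (yy - center[0])**2 + (xx - center[1])**2 <= radius**2

def expansion_areas(frames, threshold=0.5):
    """Per-frame pixel area of the (dark) chromatophore: segment by
    intensity threshold, then count foreground pixels."""
    return [int((f < threshold).sum()) for f in frames]

# A chromatophore expanding over three frames (radii 3, 5, 7 px)
frames = [np.where(disk_mask(32, (16, 16), r), 0.1, 0.9) for r in (3, 5, 7)]
areas = expansion_areas(frames)
print(areas, areas == sorted(areas))  # monotonically increasing area
```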
High-quality cardiopulmonary resuscitation: current and future directions.
Abella, Benjamin S
2016-06-01
Cardiopulmonary resuscitation (CPR) represents the cornerstone of cardiac arrest resuscitation care. Prompt delivery of high-quality CPR can dramatically improve survival outcomes; however, the definitions of optimal CPR have evolved over several decades. The present review will discuss the metrics of CPR delivery, and the evidence supporting the importance of CPR quality to improve clinical outcomes. The introduction of new technologies to quantify metrics of CPR delivery has yielded important insights into CPR quality. Investigations using CPR recording devices have allowed the assessment of specific CPR performance parameters and their relative importance regarding return of spontaneous circulation and survival to hospital discharge. Additional work has suggested new opportunities to measure physiologic markers during CPR and potentially tailor CPR delivery to patient requirements. Through recent laboratory and clinical investigations, a more evidence-based definition of high-quality CPR continues to emerge. Exciting opportunities now exist to study quantitative metrics of CPR and potentially guide resuscitation care in a goal-directed fashion. Concepts of high-quality CPR have also informed new approaches to training and quality improvement efforts for cardiac arrest care.
Calvin J. Maginel; Benjamin O. Knapp; John M. Kabrick; Rose-Marie Muzika
2016-01-01
Monitoring is a critical component of ecological restoration and requires the use of metrics that are meaningful and interpretable. We analyzed the effectiveness of the Floristic Quality Index (FQI), a vegetative community metric based on species richness and the level of sensitivity to anthropogenic disturbance of individual species present (Coefficient of...
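For reference, the Floristic Quality Index is conventionally computed as the mean coefficient of conservatism of the species present multiplied by the square root of species richness. A minimal sketch of that formula (the species list and scores below are hypothetical):

```python
import math

def floristic_quality_index(coefficients):
    """FQI = (mean coefficient of conservatism) * sqrt(species richness).
    `coefficients` maps each observed species to its 0-10 conservatism
    score (high scores = sensitive to anthropogenic disturbance)."""
    n = len(coefficients)
    c_bar = sum(coefficients.values()) / n
    return c_bar * math.sqrt(n)

# Hypothetical plot inventory with invented conservatism scores
plot = {"Quercus alba": 5, "Carex pensylvanica": 4,
        "Ageratina altissima": 2, "Trillium grandiflorum": 8}
print(round(floristic_quality_index(plot), 2))
```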
Methods of Measurement the Quality Metrics in a Printing System
NASA Astrophysics Data System (ADS)
Varepo, L. G.; Brazhnikov, A. Yu; Nagornova, I. V.; Novoselskaya, O. A.
2018-04-01
One of the main criteria for choosing an ink as a component of a printing system is the scumming ability of the ink. The realization of an algorithm for estimating the quality metrics in a printing system is shown. Histograms of ink rate for various printing systems are presented. A quantitative estimation of the stability of offset ink emulsifiability is given.
Digital CODEC for real-time processing of broadcast quality video signals at 1.8 bits/pixel
NASA Technical Reports Server (NTRS)
Shalkhauser, Mary JO; Whyte, Wayne A., Jr.
1989-01-01
Advances in very large-scale integration and recent work in the field of bandwidth efficient digital modulation techniques have combined to make digital video processing technically feasible and potentially cost competitive for broadcast quality television transmission. A hardware implementation was developed for a DPCM-based digital television bandwidth compression algorithm which processes standard NTSC composite color television signals and produces broadcast quality video in real time at an average of 1.8 bits/pixel. The data compression algorithm and the hardware implementation of the CODEC are described, and performance results are provided.
Digital CODEC for real-time processing of broadcast quality video signals at 1.8 bits/pixel
NASA Technical Reports Server (NTRS)
Shalkhauser, Mary JO; Whyte, Wayne A.
1991-01-01
Advances in very large scale integration and recent work in the field of bandwidth-efficient digital modulation techniques have combined to make digital video processing technically feasible and potentially cost competitive for broadcast quality television transmission. A hardware implementation was developed for a DPCM (differential pulse code modulation)-based digital television bandwidth compression algorithm which processes standard NTSC composite color television signals and produces broadcast quality video in real time at an average of 1.8 bits/pixel. The data compression algorithm and the hardware implementation of the codec are described, and performance results are provided.
Wang, Yan; Yang, Yue-Chang; Lan, Dan-Mei; Wu, Hui -Juan; Zhao, Zhong-Xin
2017-05-01
Sleep disturbance is common in Parkinson's disease (PD) and negatively impacts quality of life. There are few data on how dopamine agonists influence nocturnal sleep in PD, particularly objective sleep laboratory data measuring sleep parameters and their changes. The goal of this open-label study was to objectively evaluate the effect of rotigotine on sleep in PD patients by video-polysomnographic methods. A total of 25 PD patients with complaints of nocturnal sleep impairment were enrolled. Sleep quality before and after stable rotigotine therapy was evaluated subjectively through questionnaire assessments and objectively by video-polysomnography. Parkinsonism, depression, anxiety, and quality of life of the PD patients were also evaluated through questionnaire assessments. At the end of rotigotine treatment, daytime functioning, motor performance, depression, subjective quality of sleep, and quality of life had improved. Video-polysomnographic analysis showed that sleep efficiency and stage N1% increased, while sleep latency, wake after sleep onset, and the periodic leg movements in sleep index decreased after rotigotine treatment. Video-polysomnographic analysis confirmed the subjective improvement of sleep after rotigotine treatment. These observations suggest that rotigotine is a treatment option for PD patients complaining of sleep disturbances.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-08
... monitor trends on an annual basis. To continue our time-series analysis, we request data as of June 30... information and time- series data we should collect for the analysis of various MVPD performance metrics. In... revenues, cash flows, and margins. To the extent possible, we seek five-year time-series data to allow us...
Pragmatic quality metrics for evolutionary software development models
NASA Technical Reports Server (NTRS)
Royce, Walker
1990-01-01
Due to the large number of product, project, and people parameters which impact large custom software development efforts, measurement of software product quality is a complex undertaking. Furthermore, the absolute perspective from which quality is measured (customer satisfaction) is intangible. While we probably can't say what the absolute quality of a software product is, we can determine the relative quality, the adequacy of this quality with respect to pragmatic considerations, and identify good and bad trends during development. While no two software engineers will ever agree on an optimum definition of software quality, they will agree that the most important perspective of software quality is its ease of change. We can call this flexibility, adaptability, or some other vague term, but the critical characteristic of software is that it is soft. The easier the product is to modify, the easier it is to achieve any other software quality perspective. This paper presents objective quality metrics derived from consistent lifecycle perspectives of rework which, when used in concert with an evolutionary development approach, can provide useful insight to produce better quality per unit cost/schedule or to achieve adequate quality more efficiently. The usefulness of these metrics is evaluated by applying them to a large, real-world Ada project.
NASA Astrophysics Data System (ADS)
Nightingale, James; Wang, Qi; Grecos, Christos; Goma, Sergio
2014-02-01
High Efficiency Video Coding (HEVC), the latest video compression standard (also known as H.265), can deliver video streams of comparable quality to the current H.264 Advanced Video Coding (H.264/AVC) standard with a 50% reduction in bandwidth. Research into SHVC, the scalable extension to the HEVC standard, is still in its infancy. One important area for investigation is whether, given the greater compression ratio of HEVC (and SHVC), the loss of packets containing video content will have a greater impact on the quality of delivered video than is the case with H.264/AVC or its scalable extension H.264/SVC. In this work we empirically evaluate the layer-based, in-network adaptation of video streams encoded using SHVC in situations where dynamically changing bandwidths and datagram loss ratios require the real-time adaptation of video streams. Through extensive experimentation, we establish a comprehensive set of benchmarks for SHVC-based high-definition video streaming in loss-prone network environments such as those commonly found in mobile networks. Among other results, we highlight that packet losses of only 1% can lead to a substantial reduction in PSNR of over 3 dB and error propagation in over 130 pictures following the one in which the loss occurred. This work is among the earliest studies in this area to report benchmark evaluation results for the effects of datagram loss on SHVC picture quality and to offer empirical and analytical insights into SHVC adaptation to lossy, mobile networking conditions.
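The PSNR figures cited in this entry can be made concrete with a short sketch. This is a generic PSNR computation on toy pixel data, not the paper's SHVC measurements; the frame values are hypothetical:

```python
import math

def psnr(reference, degraded, max_val=255.0):
    """Peak signal-to-noise ratio (dB) between two equal-length pixel sequences."""
    mse = sum((r - d) ** 2 for r, d in zip(reference, degraded)) / len(reference)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * math.log10(max_val ** 2 / mse)

# Toy 8-bit "frame": a uniform error of 5 grey levels gives MSE = 25.
ref = [128] * 4096
deg = [133] * 4096
print(round(psnr(ref, deg), 2))  # ~34.15 dB
```

A 3 dB drop in PSNR, as reported for 1% packet loss, corresponds to roughly a doubling of the mean squared error, which is why such losses are visually substantial.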
Veterinary students' usage and perception of video teaching resources.
Roshier, Amanda L; Foster, Neil; Jones, Michael A
2011-01-10
The purpose of our study was to use a student-centred approach to develop an online video learning resource (called 'Moo Tube') at the School of Veterinary Medicine and Science, University of Nottingham, UK and also to provide guidance for other academics in the School wishing to develop a similar resource in the future. A focus group in the format of the nominal group technique was used to garner the opinions of 12 undergraduate students (3 from year-1, 4 from year-2 and 5 from year-3). Students generated lists of items in response to key questions; these responses were thematically analysed to generate key themes which were compared between the different year groups. The number of visits to 'Moo Tube' before and after an objective structured practical examination (OSPE) was also analysed to provide data on video usage. Students highlighted a number of strengths of video resources which can be grouped into four overarching themes: (1) teaching enhancement, (2) accessibility, (3) technical quality and (4) video content. Of these themes, students rated teaching enhancement and accessibility most highly. Video usage was seen to significantly increase (P < 0.05) prior to an examination and significantly decrease (P < 0.05) following the examination. The students had a positive perception of video usage in higher education. Video usage increases prior to practical examinations. Image quality was a greater concern with year-3 students than with either year-1 or 2 students but all groups highlighted the following as important issues: i) good sound quality, ii) accessibility, including location of videos within electronic libraries, and iii) video content. Based on the findings from this study, guidelines are suggested for those developing undergraduate veterinary videos. We believe that many aspects of our list will have resonance in other areas of medical education and higher education.
Quantification of Behavioral Stereotypy in Flies
NASA Astrophysics Data System (ADS)
Manley, Jason; Berman, Gordon; Shaevitz, Joshua
A commonly accepted assumption in the study of behavior is that an organism's behavioral repertoire can be represented by a relatively small set of stereotyped actions. Here, "stereotypy" is defined as a measure of the similarity of repetitions of a behavior. Our group utilizes data-driven analyses on videos of ground-based Drosophila to organize the set of spontaneous behaviors into a two-dimensional map, or behavioral space. We utilize this framework to define a metric for behavioral stereotypy. This measure quantifies the variance in a given behavior's periodic trajectory through a space representing its postural degrees of freedom. This newly developed behavioral metric has confirmed a high degree of stereotypy among most behaviors and we correlate stereotypy with various physiological effects.
Learning to Swim Using Video Modelling and Video Feedback within a Self-Management Program
ERIC Educational Resources Information Center
Lao, So-An; Furlonger, Brett E.; Moore, Dennis W.; Busacca, Margherita
2016-01-01
Although many adults who cannot swim are primarily interested in learning by direct coaching, there are options that have a focus on self-directed learning. As an alternative, a self-management program combined with video modelling, video feedback and high-quality, affordable video technology was used to assess its effectiveness in assisting an…
Hussain, Husniza; Khalid, Norhayati Mustafa; Selamat, Rusidah; Wan Nazaimoon, Wan Mohamud
2013-09-01
The urinary iodine micromethod (UIMM) is a modification of the conventional method and its performance needs evaluation. UIMM performance was evaluated using the method validation and 2008 Iodine Deficiency Disorders survey data obtained from four urinary iodine (UI) laboratories. Method acceptability tests and Sigma quality metrics were determined using total allowable errors (TEas) set by two external quality assurance (EQA) providers. UIMM obeyed various method acceptability test criteria with some discrepancies at low concentrations. Method validation data calculated against the UI Quality Program (TUIQP) TEas showed that the Sigma metrics were at 2.75, 1.80, and 3.80 for 51±15.50 µg/L, 108±32.40 µg/L, and 149±38.60 µg/L UI, respectively. External quality control (EQC) data showed that the performance of the laboratories was within Sigma metrics of 0.85-1.12, 1.57-4.36, and 1.46-4.98 at 46.91±7.05 µg/L, 135.14±13.53 µg/L, and 238.58±17.90 µg/L, respectively. No laboratory showed a calculated total error (TEcalc)
Kumar, B Vinodh; Mohan, Thuthi
2018-01-01
Six Sigma is one of the most popular quality management system tools employed for process improvement. The Six Sigma methods are usually applied when the outcome of the process can be measured. This study was done to assess the performance of individual biochemical parameters on a Sigma scale by calculating the sigma metrics for individual parameters and to follow the Westgard guidelines for appropriate Westgard rules and levels of internal quality control (IQC) that need to be processed to improve target analyte performance based on the sigma metrics. This is a retrospective study, and the data required for the study were extracted between July 2015 and June 2016 from a Secondary Care Government Hospital, Chennai. The data obtained for the study are IQC coefficient of variation percentage and External Quality Assurance Scheme (EQAS) bias percentage for 16 biochemical parameters. For the level 1 IQC, four analytes (alkaline phosphatase, magnesium, triglyceride, and high-density lipoprotein-cholesterol) showed an ideal performance of ≥6 sigma level, and five analytes (urea, total bilirubin, albumin, cholesterol, and potassium) showed an average performance of <3 sigma level; for the level 2 IQC, the same four analytes of level 1 showed a performance of ≥6 sigma level, and four analytes (urea, albumin, cholesterol, and potassium) showed an average performance of <3 sigma level. For all analytes at <6 sigma level, the quality goal index (QGI) was <0.8, indicating imprecision as the area requiring improvement, except for cholesterol, whose QGI >1.2 indicated inaccuracy. This study shows that sigma metrics are a good quality tool to assess the analytical performance of a clinical chemistry laboratory. Thus, sigma metric analysis provides a benchmark for the laboratory to design a protocol for IQC, address poor assay performance, and assess the efficiency of existing laboratory processes.
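The sigma and QGI calculations underlying studies like the two above follow standard formulas; a minimal sketch with hypothetical analyte numbers (not values from either study):

```python
def sigma_metric(tea_pct, bias_pct, cv_pct):
    """Sigma = (TEa% - |Bias%|) / CV%, the standard Westgard formulation,
    where TEa is the total allowable error, Bias is from EQAS, CV from IQC."""
    return (tea_pct - abs(bias_pct)) / cv_pct

def quality_goal_index(bias_pct, cv_pct):
    """QGI = |Bias| / (1.5 * CV); <0.8 suggests imprecision as the problem,
    >1.2 suggests inaccuracy, values between suggest both."""
    return abs(bias_pct) / (1.5 * cv_pct)

# Hypothetical analyte: TEa 10%, bias 2%, CV 2%
print(sigma_metric(10.0, 2.0, 2.0))                 # 4.0 (acceptable, not world-class)
print(round(quality_goal_index(2.0, 2.0), 2))       # 0.67 (points to imprecision)
```

On this scale, ≥6 sigma is considered world-class and <3 sigma poor, matching the thresholds used in the abstract above.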
A contourlet transform based algorithm for real-time video encoding
NASA Astrophysics Data System (ADS)
Katsigiannis, Stamos; Papaioannou, Georgios; Maroulis, Dimitris
2012-06-01
In recent years, real-time video communication over the internet has been widely utilized for applications like video conferencing. Streaming live video over heterogeneous IP networks, including wireless networks, requires video coding algorithms that can support various levels of quality in order to adapt to the network end-to-end bandwidth and transmitter/receiver resources. In this work, a scalable video coding and compression algorithm based on the Contourlet Transform is proposed. The algorithm allows for multiple levels of detail, without re-encoding the video frames, by just dropping the encoded information referring to higher resolution than needed. Compression is achieved by means of lossy and lossless methods, as well as variable bit rate encoding schemes. Furthermore, due to the transformation utilized, it does not suffer from blocking artifacts that occur with many widely adopted compression algorithms. Another highly advantageous characteristic of the algorithm is the suppression of noise induced by low-quality sensors usually encountered in web-cameras, due to the manipulation of the transform coefficients at the compression stage. The proposed algorithm is designed to introduce minimal coding delay, thus achieving real-time performance. Performance is enhanced by utilizing the vast computational capabilities of modern GPUs, providing satisfactory encoding and decoding times at relatively low cost. These characteristics make this method suitable for applications like video-conferencing that demand real-time performance, along with the highest visual quality possible for each user. Through the presented performance and quality evaluation of the algorithm, experimental results show that the proposed algorithm achieves better or comparable visual quality relative to other compression and encoding methods tested, while maintaining a satisfactory compression ratio. 
Especially at low bit rates, it provides images that are friendlier to the human eye than algorithms using block-based coding, such as the MPEG family, as it introduces fuzziness and blurring instead of artificial block artifacts.
Sleep quality is negatively related to video gaming volume in adults.
Exelmans, Liese; Van den Bulck, Jan
2015-04-01
Most literature on the relationship between video gaming and sleep disturbances has looked at children and adolescents. There is little research on such a relationship in adult samples. The aim of the current study was to investigate the association of video game volume with sleep quality in adults via face-to-face interviews using standardized questionnaires. Adults (n = 844, 56.2% women), aged 18-94 years, participated in the study. Sleep quality was measured using the Pittsburgh Sleep Quality Index, and gaming volume was assessed by asking the hours of gaming on a regular weekday (Mon-Thurs), Friday and weekend day (Sat-Sun). Adjusting for gender, age, educational level, exercise and perceived stress, results of hierarchical regression analyses indicated that video gaming volume was a significant predictor of sleep quality (β = 0.145), fatigue (β = 0.109), insomnia (β = 0.120), bedtime (β = 0.100) and rise time (β = 0.168). Each additional hour of video gaming per day delayed bedtime by 6.9 min (95% confidence interval 2.0-11.9 min) and rise time by 13.8 min (95% confidence interval 7.8-19.7 min). Attributable risk for having poor sleep quality (Pittsburgh Sleep Quality Index > 5) due to gaming >1 h/day was 30%. When examining the components of the Pittsburgh Sleep Quality Index using multinomial regression analysis (odds ratios with 95% confidence intervals), gaming volume significantly predicted sleep latency, sleep efficiency and use of sleep medication. In general, findings support the conclusion that gaming volume is negatively related to the overall sleep quality of adults, which might be due to underlying mechanisms of screen exposure and arousal. © 2014 European Sleep Research Society.
Integrated framework for developing search and discrimination metrics
NASA Astrophysics Data System (ADS)
Copeland, Anthony C.; Trivedi, Mohan M.
1997-06-01
This paper presents an experimental framework for evaluating target signature metrics as models of human visual search and discrimination. This framework is based on a prototype eye tracking testbed, the Integrated Testbed for Eye Movement Studies (ITEMS). ITEMS determines an observer's visual fixation point while he studies a displayed image scene, by processing video of the observer's eye. The utility of this framework is illustrated with an experiment using gray-scale images of outdoor scenes that contain randomly placed targets. Each target is a square region of a specific size containing pixel values from another image of an outdoor scene. The real-world analogy of this experiment is that of a military observer looking upon the sensed image of a static scene to find camouflaged enemy targets that are reported to be in the area. ITEMS provides the data necessary to compute various statistics for each target to describe how easily the observers located it, including the likelihood the target was fixated or identified and the time required to do so. The computed values of several target signature metrics are compared to these statistics, and a second-order metric based on a model of image texture was found to be the most highly correlated.
Low-latency situational awareness for UxV platforms
NASA Astrophysics Data System (ADS)
Berends, David C.
2012-06-01
Providing high quality, low latency video from unmanned vehicles through bandwidth-limited communications channels remains a formidable challenge for modern vision system designers. SRI has developed a number of enabling technologies to address this, including the use of SWaP-optimized Systems-on-a-Chip which provide Multispectral Fusion and Contrast Enhancement as well as H.264 video compression. Further, the use of salience-based image prefiltering prior to image compression greatly reduces output video bandwidth by selectively blurring non-important scene regions. Combined with our customization of the VLC open source video viewer for low latency video decoding, SRI developed a prototype high performance, high quality vision system for UxV application in support of very demanding system latency requirements and user CONOPS.
Shao, Feng; Lin, Weisi; Gu, Shanbo; Jiang, Gangyi; Srikanthan, Thambipillai
2013-05-01
Perceptual quality assessment is a challenging issue in 3D signal processing research. It is important to study the 3D signal directly instead of simply extending 2D metrics to the 3D case, as was done in some previous studies. In this paper, we propose a new perceptual full-reference quality assessment metric of stereoscopic images by considering the binocular visual characteristics. The major technical contribution of this paper is that the binocular perception and combination properties are considered in quality assessment. To be more specific, we first perform left-right consistency checks and compare matching error between the corresponding pixels in binocular disparity calculation, and classify the stereoscopic images into non-corresponding, binocular fusion, and binocular suppression regions. Also, local phase and local amplitude maps are extracted from the original and distorted stereoscopic images as features in quality assessment. Then, each region is evaluated independently by considering its binocular perception property, and all evaluation results are integrated into an overall score. In addition, a binocular just-noticeable-difference model is used to reflect the visual sensitivity for the binocular fusion and suppression regions. Experimental results show that compared with the relevant existing metrics, the proposed metric can achieve higher consistency with subjective assessment of stereoscopic images.
Viewer discretion advised: is YouTube a friend or foe in surgical education?
Rodriguez, H Alejandro; Young, Monica T; Jackson, Hope T; Oelschlager, Brant K; Wright, Andrew S
2018-04-01
In the current era, trainees frequently use unvetted online resources for their own education, including viewing surgical videos on YouTube. While operative videos are an important resource in surgical education, YouTube content is not selected or organized by quality but instead is ranked by popularity and other factors. This creates a potential for videos that feature poor technique or critical safety violations to become the most viewed for a given procedure. A YouTube search for "Laparoscopic cholecystectomy" was performed. Search results were screened to exclude animations and lectures; the top ten operative videos were evaluated. Three reviewers independently analyzed each of the 10 videos. Technical skill was rated using the GOALS score. Establishment of a critical view of safety (CVS) was scored according to CVS "doublet view" score, where a score of ≥5 points (out of 6) is considered satisfactory. Videos were also screened for safety concerns not listed by the previous tools. Median competence score was 8 (±1.76) and difficulty was 2 (±1.8). GOALS score median was 18 (±3.4). Only one video achieved adequate critical view of safety; median CVS score was 2 (range 0-6). Five videos were noted to have other potentially dangerous safety violations, including placing hot ultrasonic shears on the duodenum, non-clipping of the cystic artery, blind dissection in the hepatocystic triangle, and damage to the liver capsule. Top ranked laparoscopic cholecystectomy videos on YouTube show suboptimal technique with half of videos demonstrating concerning maneuvers and only one in ten having an adequate critical view of safety. While observing operative videos can be an important learning tool, surgical educators should be aware of the low quality of popular videos on YouTube. Dissemination of high-quality content on video sharing platforms should be a priority for surgical societies.
Toward a perceptual image quality assessment of color quantized images
NASA Astrophysics Data System (ADS)
Frackiewicz, Mariusz; Palus, Henryk
2018-04-01
Color image quantization is an important operation in the field of color image processing. In this paper, we consider new perceptual image quality metrics for the assessment of quantized images. These types of metrics, e.g., DSCSI, MDSIs, MDSIm and HPSI, achieve the highest correlation coefficients with MOS during tests on the six publicly available image databases. The research was limited to images distorted by two types of compression: JPEG and JPEG 2000. Statistical analysis of correlation coefficients based on the Friedman test and post-hoc procedures showed that the differences between the four new perceptual metrics are not statistically significant.
Toward determining melt pool quality metrics via coaxial monitoring in laser powder bed fusion.
Fisher, Brian A; Lane, Brandon; Yeung, Ho; Beuth, Jack
2018-01-01
The current industry trend in metal additive manufacturing is towards greater real-time process monitoring capabilities during builds to ensure high-quality parts. While the hardware implementations that allow for real-time monitoring of the melt pool have advanced significantly, the knowledge required to correlate the generated data to useful metrics of interest is still lacking. This research presents promising results that aim to bridge this knowledge gap by determining a novel means of correlating easily obtainable sensor data (thermal emission) with key melt pool size metrics (e.g., melt pool cross-sectional area).
Berkowitz, Seth A; Aragon, Katherine; Hines, Jonas; Seligman, Hilary; Lee, Sei; Sarkar, Urmimala
2013-01-01
Objective To determine whether diabetes clinical standards consider increased hypoglycemia risk in vulnerable patients. Data Sources MEDLINE, the National Guidelines Clearinghouse, the National Quality Measures Clearinghouse, and supplemental sources. Study Design Systematic review of clinical standards (guidelines, quality metrics, or pay-for-performance programs) for glycemic control in adult diabetes patients. The primary outcome was discussion of increased risk for hypoglycemia in vulnerable populations. Data Collection/Extraction Methods Manuscripts identified were abstracted by two independent reviewers using prespecified inclusion/exclusion criteria and a standardized abstraction form. Principal Findings We screened 1,166 titles, and reviewed 220 manuscripts in full text. Forty-four guidelines, 17 quality metrics, and 8 pay-for-performance programs were included. Five (11 percent) guidelines and no quality metrics or pay-for-performance programs met the primary outcome. Conclusions Clinical standards do not substantively incorporate evidence about increased risk for hypoglycemia in vulnerable populations. PMID:23445498
Berkowitz, Seth A; Aragon, Katherine; Hines, Jonas; Seligman, Hilary; Lee, Sei; Sarkar, Urmimala
2013-08-01
To determine whether diabetes clinical standards consider increased hypoglycemia risk in vulnerable patients. MEDLINE, the National Guidelines Clearinghouse, the National Quality Measures Clearinghouse, and supplemental sources. Systematic review of clinical standards (guidelines, quality metrics, or pay-for-performance programs) for glycemic control in adult diabetes patients. The primary outcome was discussion of increased risk for hypoglycemia in vulnerable populations. Manuscripts identified were abstracted by two independent reviewers using prespecified inclusion/exclusion criteria and a standardized abstraction form. We screened 1,166 titles, and reviewed 220 manuscripts in full text. Forty-four guidelines, 17 quality metrics, and 8 pay-for-performance programs were included. Five (11 percent) guidelines and no quality metrics or pay-for-performance programs met the primary outcome. Clinical standards do not substantively incorporate evidence about increased risk for hypoglycemia in vulnerable populations. © Health Research and Educational Trust.
Quality Measures for Dialysis: Time for a Balanced Scorecard
2016-01-01
Recent federal legislation establishes a merit-based incentive payment system for physicians, with a scorecard for each professional. The Centers for Medicare and Medicaid Services evaluate quality of care with clinical performance measures and have used these metrics for public reporting and payment to dialysis facilities. Similar metrics may be used for the future merit-based incentive payment system. In nephrology, most clinical performance measures measure processes and intermediate outcomes of care. These metrics were developed from population studies of best practice and do not identify opportunities for individualizing care on the basis of patient characteristics and individual goals of treatment. The In-Center Hemodialysis (ICH) Consumer Assessment of Healthcare Providers and Systems (CAHPS) survey examines patients' perception of care and has entered the arena to evaluate quality of care. A balanced scorecard of quality performance should include three elements: population-based best clinical practice, patient perceptions, and individually crafted patient goals of care. PMID:26316622
Subjective video quality evaluation of different content types under different impairments
NASA Astrophysics Data System (ADS)
Pozueco, Laura; Álvarez, Alberto; García, Xabiel; García, Roberto; Melendi, David; Díaz, Gabriel
2017-01-01
Nowadays, access to multimedia content is one of the most demanded services on the Internet. However, the transmission of audio and video over these networks is not free of problems that negatively affect user experience. Factors such as low image quality, cuts during playback or losses of audio or video, among others, can occur, and the level of distortion they introduce into perceived quality is not well characterized. For that reason, different impairments should be evaluated based on user opinions, with the aim of analyzing the impact on perceived quality. In this work, we carried out a subjective evaluation of different types of impairments with different types of contents, including news, cartoons, sports and action movies. A total of 100 individuals, between the ages of 20 and 68, participated in the subjective study. Results show that short-term rebuffering events negatively affect the quality of experience and that desynchronization between audio and video is the least annoying impairment. Moreover, we found that the content type determines the subjective results according to the impairment present during the playback.
Wood, T J; Beavis, A W; Saunderson, J R
2013-01-01
Objective: The purpose of this study was to examine the correlation between the quality of visually graded patient (clinical) chest images and a quantitative assessment of chest phantom (physical) images acquired with a computed radiography (CR) imaging system. Methods: The results of a previously published study, in which four experienced image evaluators graded computer-simulated postero-anterior chest images using a visual grading analysis scoring (VGAS) scheme, were used for the clinical image quality measurement. Contrast-to-noise ratio (CNR) and effective dose efficiency (eDE) were used as physical image quality metrics measured in a uniform chest phantom. Although optimal values of these physical metrics for chest radiography were not derived in this work, their correlation with VGAS in images acquired without an antiscatter grid across the diagnostic range of X-ray tube voltages was determined using Pearson’s correlation coefficient. Results: Clinical and physical image quality metrics increased with decreasing tube voltage. Statistically significant correlations between VGAS and CNR (R=0.87, p<0.033) and eDE (R=0.77, p<0.008) were observed. Conclusion: Medical physics experts may use the physical image quality metrics described here in quality assurance programmes and optimisation studies with a degree of confidence that they reflect the clinical image quality in chest CR images acquired without an antiscatter grid. Advances in knowledge: A statistically significant correlation has been found between the clinical and physical image quality in CR chest imaging. The results support the value of using CNR and eDE in the evaluation of quality in clinical thorax radiography. PMID:23568362
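The contrast-to-noise ratio used as a physical image quality metric above is commonly computed as the difference between the mean pixel values of a signal region of interest and a background region, divided by the background noise. A minimal sketch with hypothetical phantom pixel values (definitions of the noise term vary between studies):

```python
import statistics

def cnr(signal_roi, background_roi):
    """Contrast-to-noise ratio: |mean(signal) - mean(background)| / stdev(background).
    Inputs are flat sequences of pixel values from the two regions of interest."""
    contrast = abs(statistics.mean(signal_roi) - statistics.mean(background_roi))
    noise = statistics.stdev(background_roi)  # sample standard deviation
    return contrast / noise

# Hypothetical pixel values from two regions of a uniform phantom image:
signal = [110, 112, 108, 111, 109]
background = [100, 102, 98, 101, 99]
print(round(cnr(signal, background), 2))
```

In practice the ROIs would be extracted from the phantom image at each tube voltage, letting CNR be plotted against the visual grading scores as in the study above.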
MobileASL: intelligibility of sign language video over mobile phones.
Cavender, Anna; Vanam, Rahul; Barney, Dane K; Ladner, Richard E; Riskin, Eve A
2008-01-01
For Deaf people, access to the mobile telephone network in the United States is currently limited to text messaging, forcing communication in English as opposed to American Sign Language (ASL), the preferred language. Because ASL is a visual language, mobile video phones have the potential to give Deaf people access to real-time mobile communication in their preferred language. However, even today's best video compression techniques cannot yield intelligible ASL at limited cell phone network bandwidths. Motivated by this constraint, we conducted one focus group and two user studies with members of the Deaf Community to determine the intelligibility effects of video compression techniques that exploit the visual nature of sign language. Inspired by eye tracking results that show high resolution foveal vision is maintained around the face, we studied region-of-interest encodings (where the face is encoded at higher quality) as well as reduced frame rates (where fewer, better quality, frames are displayed every second). At all bit rates studied here, participants preferred moderate quality increases in the face region, sacrificing quality in other regions. They also preferred slightly lower frame rates because they yield better quality frames for a fixed bit rate. The limited processing power of cell phones is a serious concern because a real-time video encoder and decoder will be needed. Choosing less complex settings for the encoder can reduce encoding time, but will affect video quality. We studied the intelligibility effects of this tradeoff and found that we can significantly speed up encoding time without severely affecting intelligibility. These results show promise for real-time access to the current low-bandwidth cell phone network through sign-language-specific encoding techniques.
NASA Astrophysics Data System (ADS)
Stewart, Brent K.; Carter, Stephen J.; Langer, Steven G.; Andrew, Rex K.
1998-06-01
Experiments using NASA's Advanced Communications Technology Satellite were conducted to provide an estimate of the compressed video quality required for preservation of clinically relevant features for the detection of trauma. Bandwidth rates of 128, 256 and 384 kbps were used. A five-point Likert scale (1 = no useful information, 5 = good diagnostic quality) was used for a subjective preference questionnaire to evaluate the quality of the compressed ultrasound imagery at the three compression rates for several anatomical regions of interest. At 384 kbps the Likert scores (mean ± SD) were abdomen (4.45 ± 0.71), carotid artery (4.70 ± 0.36), kidney (5.0 ± 0.0), liver (4.67 ± 0.58) and thyroid (4.03 ± 0.74). Due to the volatile nature of the H.320 compressed digital video stream, no statistically significant results can be derived through this methodology. As the MPEG standard has at its roots many of the same intraframe and motion vector compression algorithms as the H.261 (such as that used in the previous ACTS/AMT experiments), we are using the MPEG compressed video sequences to best gauge what minimum bandwidths are necessary for preservation of clinically relevant features for the detection of trauma. We have been using an MPEG codec board to collect losslessly compressed video clips from high quality S-VHS tapes and through direct digitization of S-video. Due to the large number of videoclips and questions to be presented to the radiologists, and for ease of application, we have developed a web browser interface for this video visual perception study. Due to the large numbers of observations required to reach statistical significance in most ROC studies, Kappa statistical analysis is used to analyze the degree of agreement between observers and between viewing assessments.
If the degree of agreement amongst readers is high, then there is a possibility that the ratings (i.e., average Likert score at each bandwidth) do in fact reflect the dimension they are purported to reflect (video quality versus bandwidth). It is then possible to make intelligent choice of bandwidth for streaming compressed video and compressed videoclips.
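The kappa analysis mentioned above measures chance-corrected agreement between observers. A minimal sketch of Cohen's kappa for two raters follows; the Likert ratings are made-up toy data, not the study's observations:

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters scoring the same items."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    # Observed agreement: fraction of items rated identically.
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Chance agreement from each rater's marginal category frequencies.
    freq_a = Counter(ratings_a)
    freq_b = Counter(ratings_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (observed - expected) / (1.0 - expected)

# Two readers scoring ten clips on a 5-point Likert scale (toy data).
reader_1 = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4]
reader_2 = [5, 4, 3, 3, 5, 2, 4, 4, 3, 4]
kappa = cohens_kappa(reader_1, reader_2)
```

A kappa near 1 indicates agreement well beyond chance, which is the precondition the passage gives for trusting the averaged Likert scores as a measure of video quality versus bandwidth.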
O'Loughlin, Declan; Oliveira, Bárbara L; Elahi, Muhammad Adnan; Glavin, Martin; Jones, Edward; Popović, Milica; O'Halloran, Martin
2017-12-06
Inaccurate estimation of average dielectric properties can have a tangible impact on microwave radar-based breast images. Despite this, recent patient imaging studies have used a fixed estimate although this is known to vary from patient to patient. Parameter search algorithms are a promising technique for estimating the average dielectric properties from the reconstructed microwave images themselves without additional hardware. In this work, qualities of accurately reconstructed images are identified from point spread functions. As the qualities of accurately reconstructed microwave images are similar to the qualities of focused microscopic and photographic images, this work proposes the use of focal quality metrics for average dielectric property estimation. The robustness of the parameter search is evaluated using experimental dielectrically heterogeneous phantoms on the three-dimensional volumetric image. Based on a very broad initial estimate of the average dielectric properties, this paper shows how these metrics can be used as suitable fitness functions in parameter search algorithms to reconstruct clear and focused microwave radar images.
Exploiting spatio-temporal characteristics of human vision for mobile video applications
NASA Astrophysics Data System (ADS)
Jillani, Rashad; Kalva, Hari
2008-08-01
Video applications on handheld devices such as smart phones pose a significant challenge to achieving a high-quality user experience. Recent advances in processor and wireless networking technology are producing a new class of multimedia applications (e.g., video streaming) for mobile handheld devices. These devices are lightweight and compact, and therefore have very limited resources: lower processing power, smaller display resolution, less memory, and limited battery life compared to desktop and laptop systems. Multimedia applications, on the other hand, have extensive processing requirements, which make them extremely resource-hungry on mobile devices. In addition, device-specific properties (e.g., the display screen) significantly influence the human perception of multimedia quality. In this paper we propose a saliency-based framework that exploits the structure in content creation as well as the human vision system to find the salient points in the incoming bitstream and adapt it according to the target device, thus improving the quality of the adapted area around salient points. Our experimental results indicate that an adaptation process that is cognizant of video content and user preferences can produce better perceptual quality video for mobile devices. Furthermore, we demonstrate how such a framework can affect user experience on a handheld device.
Automated Video Quality Assessment for Deep-Sea Video
NASA Astrophysics Data System (ADS)
Pirenne, B.; Hoeberechts, M.; Kalmbach, A.; Sadhu, T.; Branzan Albu, A.; Glotin, H.; Jeffries, M. A.; Bui, A. O. V.
2015-12-01
Video provides a rich source of data for geophysical analysis, often supplying detailed information about the environment when other instruments may not. This is especially true of deep-sea environments, where direct visual observations cannot be made. As computer vision techniques improve and volumes of video data increase, automated video analysis is emerging as a practical alternative to labor-intensive manual analysis. Automated techniques can be much more sensitive to video quality than their manual counterparts, so performing quality assessment before doing full analysis is critical to producing valid results. Ocean Networks Canada (ONC), an initiative of the University of Victoria, operates cabled ocean observatories that supply continuous power and Internet connectivity to a broad suite of subsea instruments from the coast to the deep sea, including video and still cameras. This network of ocean observatories has produced almost 20,000 hours of video (about 38 hours are recorded each day) and an additional 8,000 hours of logs from remotely operated vehicle (ROV) dives. We begin by surveying some ways in which deep-sea video poses challenges for automated analysis, including: 1. Non-uniform lighting: single, directional light sources produce uneven luminance distributions and shadows; remotely operated lighting equipment is also susceptible to technical failures. 2. Particulate noise: turbidity and marine snow are often present in underwater video; particles in the water column can have sharper focus and higher contrast than the objects of interest due to their proximity to the light source, and can also influence the camera's autofocus and auto white-balance routines. 3.
Color distortion (low contrast): the rate of absorption of light in water varies by wavelength, and is higher overall than in air, altering apparent colors and lowering the contrast of objects at a distance. We also describe measures under development at ONC for detecting and mitigating these effects. These steps include filtering out unusable data, color and luminance balancing, and choosing the most appropriate image descriptors. We apply these techniques to generate automated quality assessment of video data and illustrate their utility with an example application where we perform vision-based substrate classification.
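One simple form of the color balancing mentioned above is the gray-world correction, which rescales each channel so its mean matches the overall gray level. The choice of gray-world over other color-constancy methods here is an illustrative assumption, not necessarily ONC's actual pipeline:

```python
import numpy as np

def gray_world_balance(rgb):
    """Gray-world white balance for an HxWx3 float image in [0, 1].

    Underwater footage absorbs red light fastest, so the red channel's
    gain is typically the largest.
    """
    channel_means = rgb.reshape(-1, 3).mean(axis=0)
    gains = channel_means.mean() / channel_means
    return np.clip(rgb * gains, 0.0, 1.0)

# Toy blue-green cast: suppress red as deep water does.
rng = np.random.default_rng(1)
frame = rng.uniform(0.2, 0.8, size=(64, 64, 3))
frame[..., 0] *= 0.4          # weak red channel
balanced = gray_world_balance(frame)
```

After correction the three channel means coincide, removing the cast; a real pipeline would combine this with contrast stretching to counteract the low contrast of distant objects.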
Related Critical Psychometric Issues and Their Resolutions during Development of PE Metrics
ERIC Educational Resources Information Center
Fox, Connie; Zhu, Weimo; Park, Youngsik; Fisette, Jennifer L.; Graber, Kim C.; Dyson, Ben; Avery, Marybell; Franck, Marian; Placek, Judith H.; Rink, Judy; Raynes, De
2011-01-01
In addition to validity and reliability evidence, other psychometric qualities of the PE Metrics assessments needed to be examined. This article describes how those critical psychometric issues were addressed during the PE Metrics assessment bank construction. Specifically, issues included (a) number of items or assessments needed, (b) training…
National Quality Forum Colon Cancer Quality Metric Performance: How Are Hospitals Measuring Up?
Mason, Meredith C; Chang, George J; Petersen, Laura A; Sada, Yvonne H; Tran Cao, Hop S; Chai, Christy; Berger, David H; Massarweh, Nader N
2017-12-01
To evaluate the impact of care at high-performing hospitals on the National Quality Forum (NQF) colon cancer metrics. The NQF endorses evaluating ≥12 lymph nodes (LNs), adjuvant chemotherapy (AC) for stage III patients, and AC within 4 months of diagnosis as colon cancer quality indicators. Data on hospital-level metric performance and the association with survival are unclear. Retrospective cohort study of 218,186 patients with resected stage I to III colon cancer in the National Cancer Data Base (2004-2012). High-performing hospitals (>75% achievement) were identified by the proportion of patients achieving each measure. The association between hospital performance and survival was evaluated using Cox shared frailty modeling. Only hospital LN performance improved (15.8% in 2004 vs 80.7% in 2012; trend test, P < 0.001), with 45.9% of hospitals performing well on all 3 measures concurrently in the most recent study year. Overall, 5-year survival was 75.0%, 72.3%, 72.5%, and 69.5% for those treated at hospitals with high performance on 3, 2, 1, and 0 metrics, respectively (log-rank, P < 0.001). Care at hospitals with high metric performance was associated with lower risk of death in a dose-response fashion [0 metrics, reference; 1, hazard ratio (HR) 0.96 (0.89-1.03); 2, HR 0.92 (0.87-0.98); 3, HR 0.85 (0.80-0.90); 2 vs 1, HR 0.96 (0.91-1.01); 3 vs 1, HR 0.89 (0.84-0.93); 3 vs 2, HR 0.95 (0.89-0.95)]. Performance on metrics in combination was associated with lower risk of death [LN + AC, HR 0.86 (0.78-0.95); AC + timely AC, HR 0.92 (0.87-0.98); LN + AC + timely AC, HR 0.85 (0.80-0.90)], whereas individual measures were not [LN, HR 0.95 (0.88-1.04); AC, HR 0.95 (0.87-1.05)]. Less than half of hospitals perform well on these NQF colon cancer metrics concurrently, and high performance on individual measures is not associated with improved survival. 
Quality improvement efforts should shift focus from individual measures to defining composite measures encompassing the overall multimodal care pathway and capturing successful transitions from one care modality to another.
Operation quality assessment model for video conference system
NASA Astrophysics Data System (ADS)
Du, Bangshi; Qi, Feng; Shao, Sujie; Wang, Ying; Li, Weijian
2018-01-01
Video conference systems have become an important support platform for smart grid operation and management, and their operation quality is of growing concern to grid enterprises. First, an evaluation indicator system covering network, business, and operation-and-maintenance aspects was established on the basis of the video conference system's operation statistics. Then, an operation quality assessment model combining a genetic algorithm with a regularized BP neural network was proposed, which outputs the operation quality level of the system within a time period and provides company managers with optimization advice. The simulation results show that the proposed evaluation model offers fast convergence and high prediction accuracy compared with a regularized BP neural network alone, and its generalization ability is superior to LM-BP and Bayesian BP neural networks.
Kinsinger, Christopher R.; Apffel, James; Baker, Mark; Bian, Xiaopeng; Borchers, Christoph H.; Bradshaw, Ralph; Brusniak, Mi-Youn; Chan, Daniel W.; Deutsch, Eric W.; Domon, Bruno; Gorman, Jeff; Grimm, Rudolf; Hancock, William; Hermjakob, Henning; Horn, David; Hunter, Christie; Kolar, Patrik; Kraus, Hans-Joachim; Langen, Hanno; Linding, Rune; Moritz, Robert L.; Omenn, Gilbert S.; Orlando, Ron; Pandey, Akhilesh; Ping, Peipei; Rahbar, Amir; Rivers, Robert; Seymour, Sean L.; Simpson, Richard J.; Slotta, Douglas; Smith, Richard D.; Stein, Stephen E.; Tabb, David L.; Tagle, Danilo; Yates, John R.; Rodriguez, Henry
2011-01-01
Policies supporting the rapid and open sharing of proteomic data are being implemented by the leading journals in the field. The proteomics community is taking steps to ensure that data are made publicly accessible and are of high quality, a challenging task that requires the development and deployment of methods for measuring and documenting data quality metrics. On September 18, 2010, the U.S. National Cancer Institute (NCI) convened the “International Workshop on Proteomic Data Quality Metrics” in Sydney, Australia, to identify and address issues facing the development and use of such methods for open access proteomics data. The stakeholders at the workshop enumerated the key principles underlying a framework for data quality assessment in mass spectrometry data that will meet the needs of the research community, journals, funding agencies, and data repositories. Attendees discussed and agreed upon two primary needs for the wide use of quality metrics: (1) an evolving list of comprehensive quality metrics and (2) standards accompanied by software analytics. Attendees stressed the importance of increased education and training programs to promote reliable protocols in proteomics. This workshop report explores the historic precedents, key discussions, and necessary next steps to enhance the quality of open access data. By agreement, this article is published simultaneously in the Journal of Proteome Research, Molecular and Cellular Proteomics, Proteomics, and Proteomics Clinical Applications as a public service to the research community. The peer review process was a coordinated effort conducted by a panel of referees selected by the journals. PMID:22053864
Kinsinger, Christopher R.; Apffel, James; Baker, Mark; Bian, Xiaopeng; Borchers, Christoph H.; Bradshaw, Ralph; Brusniak, Mi-Youn; Chan, Daniel W.; Deutsch, Eric W.; Domon, Bruno; Gorman, Jeff; Grimm, Rudolf; Hancock, William; Hermjakob, Henning; Horn, David; Hunter, Christie; Kolar, Patrik; Kraus, Hans-Joachim; Langen, Hanno; Linding, Rune; Moritz, Robert L.; Omenn, Gilbert S.; Orlando, Ron; Pandey, Akhilesh; Ping, Peipei; Rahbar, Amir; Rivers, Robert; Seymour, Sean L.; Simpson, Richard J.; Slotta, Douglas; Smith, Richard D.; Stein, Stephen E.; Tabb, David L.; Tagle, Danilo; Yates, John R.; Rodriguez, Henry
2011-01-01
Policies supporting the rapid and open sharing of proteomic data are being implemented by the leading journals in the field. The proteomics community is taking steps to ensure that data are made publicly accessible and are of high quality, a challenging task that requires the development and deployment of methods for measuring and documenting data quality metrics. On September 18, 2010, the United States National Cancer Institute convened the “International Workshop on Proteomic Data Quality Metrics” in Sydney, Australia, to identify and address issues facing the development and use of such methods for open access proteomics data. The stakeholders at the workshop enumerated the key principles underlying a framework for data quality assessment in mass spectrometry data that will meet the needs of the research community, journals, funding agencies, and data repositories. Attendees discussed and agreed upon two primary needs for the wide use of quality metrics: 1) an evolving list of comprehensive quality metrics and 2) standards accompanied by software analytics. Attendees stressed the importance of increased education and training programs to promote reliable protocols in proteomics. This workshop report explores the historic precedents, key discussions, and necessary next steps to enhance the quality of open access data. By agreement, this article is published simultaneously in the Journal of Proteome Research, Molecular and Cellular Proteomics, Proteomics, and Proteomics Clinical Applications as a public service to the research community. The peer review process was a coordinated effort conducted by a panel of referees selected by the journals. PMID:22052993
For Video Games, Bad News Is Good News: News Reporting of Violent Video Game Studies.
Copenhaver, Allen; Mitrofan, Oana; Ferguson, Christopher J
2017-12-01
News coverage of video game violence studies has been critiqued for focusing mainly on studies supporting negative effects and failing to report studies that did not find evidence for such effects. These concerns were tested in a sample of 68 published studies using child and adolescent samples. Contrary to our hypotheses, study effect size was not a predictor of either newspaper coverage or publication in journals with a high-impact factor. However, a relationship between poorer study quality and newspaper coverage approached significance. High-impact journals were not found to publish studies with higher quality. Poorer quality studies, which tended to highlight negative findings, also received more citations in scholarly sources. Our findings suggest that negative effects of violent video games exposure in children and adolescents, rather than large effect size or high methodological quality, increase the likelihood of a study being cited in other academic publications and subsequently receiving news media coverage.
Pharmacy Dashboard: An Innovative Process for Pharmacy Workload and Productivity.
Kinney, Ashley; Bui, Quyen; Hodding, Jane; Le, Jennifer
2017-03-01
Background: Innovative approaches, including LEAN systems and dashboards, to enhance pharmacy production continue to evolve in a cost and safety conscious health care environment. Furthermore, implementing and evaluating the effectiveness of these novel methods continues to be challenging for pharmacies. Objective: To describe a comprehensive, real-time pharmacy dashboard that incorporated LEAN methodologies and evaluate its utilization in an inpatient Central Intravenous Additives Services (CIVAS) pharmacy. Methods: Long Beach Memorial Hospital (462 adult beds) and Miller Children's and Women's Hospital of Long Beach (combined 324 beds) are tertiary not-for-profit, community-based hospitals that are served by one CIVAS pharmacy. Metrics to evaluate the effectiveness of CIVAS were developed and implemented on a dashboard in real-time from March 2013 to March 2014. Results: The metrics that were designed and implemented to evaluate the effectiveness of CIVAS were quality and value, financial resilience, and the department's people and culture. Using a dashboard that integrated these metrics, the accuracy of manufacturing defect-free products was ≥99.9%, indicating excellent quality and value of CIVAS. The metric for financial resilience demonstrated a cost savings of $78,000 annually within pharmacy by eliminating the outsourcing of products. People and value metrics on the dashboard focused on standard work, with an overall 94.6% compliance to the workflow. Conclusion: A unique dashboard that incorporated metrics to monitor 3 important areas was successfully implemented to improve the effectiveness of CIVAS pharmacy. These metrics helped pharmacy to monitor progress in real-time, allowing attainment of production goals and fostering continuous quality improvement through LEAN work.
Pharmacy Dashboard: An Innovative Process for Pharmacy Workload and Productivity
Bui, Quyen; Hodding, Jane; Le, Jennifer
2017-01-01
Background: Innovative approaches, including LEAN systems and dashboards, to enhance pharmacy production continue to evolve in a cost and safety conscious health care environment. Furthermore, implementing and evaluating the effectiveness of these novel methods continues to be challenging for pharmacies. Objective: To describe a comprehensive, real-time pharmacy dashboard that incorporated LEAN methodologies and evaluate its utilization in an inpatient Central Intravenous Additives Services (CIVAS) pharmacy. Methods: Long Beach Memorial Hospital (462 adult beds) and Miller Children's and Women's Hospital of Long Beach (combined 324 beds) are tertiary not-for-profit, community-based hospitals that are served by one CIVAS pharmacy. Metrics to evaluate the effectiveness of CIVAS were developed and implemented on a dashboard in real-time from March 2013 to March 2014. Results: The metrics that were designed and implemented to evaluate the effectiveness of CIVAS were quality and value, financial resilience, and the department's people and culture. Using a dashboard that integrated these metrics, the accuracy of manufacturing defect-free products was ≥99.9%, indicating excellent quality and value of CIVAS. The metric for financial resilience demonstrated a cost savings of $78,000 annually within pharmacy by eliminating the outsourcing of products. People and value metrics on the dashboard focused on standard work, with an overall 94.6% compliance to the workflow. Conclusion: A unique dashboard that incorporated metrics to monitor 3 important areas was successfully implemented to improve the effectiveness of CIVAS pharmacy. These metrics helped pharmacy to monitor progress in real-time, allowing attainment of production goals and fostering continuous quality improvement through LEAN work. PMID:28439134
Developments in Seismic Data Quality Assessment Using MUSTANG at the IRIS DMC
NASA Astrophysics Data System (ADS)
Sharer, G.; Keyson, L.; Templeton, M. E.; Weertman, B.; Smith, K.; Sweet, J. R.; Tape, C.; Casey, R. E.; Ahern, T.
2017-12-01
MUSTANG is the automated data quality metrics system at the IRIS Data Management Center (DMC), designed to help characterize data and metadata "goodness" across the IRIS data archive, which holds 450 TB of seismic and related earth science data spanning the past 40 years. It calculates 46 metrics ranging from sample statistics and miniSEED state-of-health flag counts to Power Spectral Densities (PSDs) and Probability Density Functions (PDFs). These quality measurements are easily and efficiently accessible to users through the use of web services, which allow users to make requests not only by station and time period but also to filter the results according to metric values that match a user's data requirements. Results are returned in a variety of formats, including XML, JSON, CSV, and text. In the case of PSDs and PDFs, results can also be retrieved as plot images. In addition, there are several user-friendly client tools available for exploring and visualizing MUSTANG metrics: LASSO, MUSTANG Databrowser, and MUSTANGular. Over the past year we have made significant improvements to MUSTANG. We have nearly complete coverage over our archive for broadband channels with sample rates of 20-200 sps. With this milestone achieved, we are now expanding to include higher sample rate, short-period, and strong-motion channels. Data availability metrics will soon be calculated when a request is made, which guarantees that the information reflects the current state of the archive and also allows for more flexibility in content. For example, MUSTANG will be able to return a count of gaps for any arbitrary time period instead of being limited to 24 hour spans. We are also promoting the use of data quality metrics beyond the IRIS archive through our recent release of ISPAQ, a Python command-line application that calculates MUSTANG-style metrics for users' local miniSEED files or for any miniSEED data accessible through FDSN-compliant web services.
Finally, we will explore how researchers are using MUSTANG in real-world situations to select data, improve station data quality, anticipate station outages and servicing, and characterize site noise and environmental conditions.
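Requests to MUSTANG of the kind described above are plain HTTP queries against the measurements web service. The sketch below only builds the query URL; the endpoint path and parameter names follow the publicly documented service but should be treated as assumptions and checked against the current IRIS documentation before use:

```python
from urllib.parse import urlencode

# Assumed MUSTANG measurements endpoint at the IRIS DMC.
MUSTANG_MEASUREMENTS = "http://service.iris.edu/mustang/measurements/1/query"

def mustang_url(metric, net, sta, loc, cha, start, end, fmt="text"):
    """Build a MUSTANG measurements query for one metric and channel.

    The service also accepts value constraints (e.g. value_gt / value_lt)
    so results can be filtered to measurements meeting a data-quality
    requirement; those are omitted here for brevity.
    """
    params = {
        "metric": metric,
        "net": net, "sta": sta, "loc": loc, "cha": cha,
        "timewindow": f"{start},{end}",
        "format": fmt,
    }
    return MUSTANG_MEASUREMENTS + "?" + urlencode(params)

# Example: gap counts for one broadband channel over one day.
url = mustang_url("num_gaps", "IU", "ANMO", "00", "BHZ",
                  "2017-01-01", "2017-01-02")
```

Fetching the resulting URL with any HTTP client returns the metric values in the requested format (text, XML, JSON, or CSV), mirroring the filtered-request workflow described in the abstract.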
Content-Aware Video Adaptation under Low-Bitrate Constraint
NASA Astrophysics Data System (ADS)
Hsiao, Ming-Ho; Chen, Yi-Wen; Chen, Hua-Tsung; Chou, Kuan-Hung; Lee, Suh-Yin
2007-12-01
With the development of wireless networks and the improvement of mobile device capabilities, video streaming is more and more widespread in such environments. Under the conditions of limited resources and inherent constraints, appropriate video adaptation has become one of the most important and challenging issues in wireless multimedia applications. In this paper, we propose a novel content-aware video adaptation in order to effectively utilize resources and improve visual perceptual quality. First, the attention model is derived from analyzing the characteristics of brightness, location, motion vector, and energy features in the compressed domain to reduce computation complexity. Then, through the integration of the attention model, the capability of the client device, and a correlational statistic model, attractive regions of video scenes are derived. The information-object-weighted (IOB-weighted) rate distortion model is used for adjusting the bit allocation. Finally, the video adaptation scheme dynamically adjusts the video bitstream at the frame level and object level. Experimental results validate that the proposed scheme achieves better visual quality effectively and efficiently.
Adaptive compressed sensing of multi-view videos based on the sparsity estimation
NASA Astrophysics Data System (ADS)
Yang, Senlin; Li, Xilong; Chong, Xin
2017-11-01
Conventional compressive sensing for videos is based on non-adaptive linear projections, and the number of measurements is usually set empirically. As a result, the quality of video reconstruction suffers. First, block-based compressed sensing (BCS) with the conventional selection of compressive measurements was described. Then an estimation method for the sparsity of multi-view videos was proposed based on the two-dimensional discrete wavelet transform (2D DWT). With an energy threshold given beforehand, the DWT coefficients were processed with both energy normalization and sorting in descending order, and the sparsity of the multi-view video was obtained as the proportion of dominant coefficients. Finally, the simulation results show that the method can estimate the sparsity of video frames effectively and provides a sound basis for selecting the number of compressive observations. The results also show that, since the selection of the number of observations is based on the sparsity estimated with the given energy threshold, the proposed method can ensure the reconstruction quality of multi-view videos.
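The sparsity estimate described above (normalize the DWT coefficient energies, sort them in descending order, and count the dominant coefficients against an energy threshold) can be sketched with a hand-rolled single-level 2D Haar transform. The 99% energy threshold and the toy frames are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def haar2d(x):
    """Single-level 2D Haar DWT of an array with even dimensions."""
    # Rows: average / difference of adjacent column pairs.
    lo = (x[:, 0::2] + x[:, 1::2]) / np.sqrt(2)
    hi = (x[:, 0::2] - x[:, 1::2]) / np.sqrt(2)
    # Columns: the same, applied to both halves.
    ll = (lo[0::2] + lo[1::2]) / np.sqrt(2)
    lh = (lo[0::2] - lo[1::2]) / np.sqrt(2)
    hl = (hi[0::2] + hi[1::2]) / np.sqrt(2)
    hh = (hi[0::2] - hi[1::2]) / np.sqrt(2)
    return np.block([[ll, hl], [lh, hh]])

def sparsity(frame, energy_threshold=0.99):
    """Fraction of DWT coefficients carrying `energy_threshold` of the energy."""
    coeffs = haar2d(frame.astype(float))
    energy = np.sort(coeffs.ravel() ** 2)[::-1]      # descending
    cumulative = np.cumsum(energy) / energy.sum()
    k = int(np.searchsorted(cumulative, energy_threshold)) + 1
    return k / coeffs.size

# A smooth gradient frame is far more compressible than white noise.
n = 64
smooth = np.outer(np.linspace(0, 1, n), np.linspace(0, 1, n))
noise = np.random.default_rng(2).normal(size=(n, n))
```

A low sparsity value signals a highly compressible frame, so fewer compressive measurements suffice; a value near 1 (as for noise) signals that many measurements are needed.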
Secure and Efficient Reactive Video Surveillance for Patient Monitoring.
Braeken, An; Porambage, Pawani; Gurtov, Andrei; Ylianttila, Mika
2016-01-02
Video surveillance is widely deployed for many kinds of monitoring applications in healthcare and assisted living systems. Security and privacy are two promising factors that align the quality and validity of video surveillance systems with the caliber of patient monitoring applications. In this paper, we propose a symmetric key-based security framework for the reactive video surveillance of patients based on the inputs coming from data measured by a wireless body area network attached to the human body. Only authenticated patients are able to activate the video cameras, whereas the patient and authorized people can consult the video data. User and location privacy are at each moment guaranteed for the patient. A tradeoff between security and quality of service is defined in order to ensure that the surveillance system gets activated even in emergency situations. In addition, the solution includes resistance against tampering with the device on the patient's side.
Secure and Efficient Reactive Video Surveillance for Patient Monitoring
Braeken, An; Porambage, Pawani; Gurtov, Andrei; Ylianttila, Mika
2016-01-01
Video surveillance is widely deployed for many kinds of monitoring applications in healthcare and assisted living systems. Security and privacy are two promising factors that align the quality and validity of video surveillance systems with the caliber of patient monitoring applications. In this paper, we propose a symmetric key-based security framework for the reactive video surveillance of patients based on the inputs coming from data measured by a wireless body area network attached to the human body. Only authenticated patients are able to activate the video cameras, whereas the patient and authorized people can consult the video data. User and location privacy are at each moment guaranteed for the patient. A tradeoff between security and quality of service is defined in order to ensure that the surveillance system gets activated even in emergency situations. In addition, the solution includes resistance against tampering with the device on the patient’s side. PMID:26729130
Tuong, William; Armstrong, April W
2015-02-16
Increasing participant satisfaction with health interventions can improve compliance with recommended health behaviors and lead to better health outcomes. However, factors that influence participant satisfaction have not been well studied in dermatology-specific behavioral health interventions. We sought to assess participant satisfaction with either an appearance-based educational video or a health-based educational video promoting sunscreen use along dimensions of usefulness of educational content, message appeal, and presentation quality. In a randomized controlled trial, participants were randomized 1:1 to view an appearance-based video or a health-based video. After six weeks, participant satisfaction with the educational videos was assessed. Fifty high school students were enrolled and completed the study. Participant satisfaction ratings were assessed using a pre-tested 10-point assessment scale. The participants rated the usefulness of the appearance-based video (8.1 ± 1.2) significantly higher than the health-based video (6.4 ± 1.4, p<0.001). The message appeal of the appearance-based video (8.3 ± 1.0) was also significantly higher than the health-based video (6.6 ± 1.6, p<0.001). The presentation quality rating was similar between the appearance-based video (7.8 ± 1.3) and the health-based video (8.1 ± 1.3), p=0.676. Adolescents rated the appearance-based video higher than the health-based video in terms of usefulness of educational content and message appeal.
Descriptive analysis of YouTube music therapy videos.
Gooding, Lori F; Gregory, Dianne
2011-01-01
The purpose of this study was to conduct a descriptive analysis of music therapy-related videos on YouTube. Preliminary searches using the keywords music therapy, music therapy session, and "music therapy session" resulted in listings of 5000, 767, and 59 videos respectively. The narrowed down listing of 59 videos was divided between two investigators and reviewed in order to determine their relationship to actual music therapy practice. A total of 32 videos were determined to be depictions of music therapy sessions. These videos were analyzed using a 16-item investigator-created rubric that examined both video specific information and therapy specific information. Results of the analysis indicated that audio and visual quality was adequate, while narrative descriptions and identification information were ineffective in the majority of the videos. The top 5 videos (based on the highest number of viewings in the sample) were selected for further analysis in order to investigate demonstration of the Professional Level of Practice Competencies set forth in the American Music Therapy Association (AMTA) Professional Competencies (AMTA, 2008). Four of the five videos met basic competency criteria, with the quality of the fifth video precluding evaluation of content. Of particular interest is the fact that none of the videos included credentialing information. Results of this study suggest the need to consider ways to ensure accurate dissemination of music therapy-related information in the YouTube environment, ethical standards when posting music therapy session videos, and the possibility of creating AMTA standards for posting music therapy related video.
Getting started on metrics - Jet Propulsion Laboratory productivity and quality
NASA Technical Reports Server (NTRS)
Bush, M. W.
1990-01-01
A review is presented to describe the effort and difficulties of reconstructing fifteen years of JPL software history. In 1987 the collection and analysis of project data were started with the objective of creating laboratory-wide measures of quality and productivity for software development. As a result of this two-year Software Product Assurance metrics study, a rough measurement foundation for software productivity and software quality, and an order-of-magnitude quantitative baseline for software systems and subsystems are now available.
47 CFR Appendix - Technical Appendix 1
Code of Federal Regulations, 2010 CFR
2010-10-01
... display program material that has been encoded in any and all of the video formats contained in Table A3... frame rate of the transmitted video format. 2. Output Formats Equipment shall support 4:3 center cut-out... for composite video (yellow). Output shall produce video with ITU-R BT.500-11 quality scale of Grade 4...
NASA Astrophysics Data System (ADS)
Pei, Yong; Modestino, James W.
2004-12-01
Digital video delivered over wired-to-wireless networks is expected to suffer quality degradation from both packet loss and bit errors in the payload. In this paper, the quality degradation due to packet loss and bit errors in the payload is quantitatively evaluated and its effects are assessed. We propose the use of a concatenated forward error correction (FEC) coding scheme employing Reed-Solomon (RS) codes and rate-compatible punctured convolutional (RCPC) codes to protect the video data from packet loss and bit errors, respectively. Furthermore, the performance of a joint source-channel coding (JSCC) approach employing this concatenated FEC coding scheme for video transmission is studied. Finally, we describe an improved end-to-end architecture using an edge proxy in a mobile support station to implement differential error protection for the corresponding channel impairments expected on the two networks. Results indicate that with an appropriate JSCC approach and the use of an edge proxy, FEC-based error-control techniques together with passive error-recovery techniques can significantly improve the effective video throughput and lead to acceptable video delivery quality over time-varying heterogeneous wired-to-wireless IP networks.
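The concatenated RS/RCPC construction itself is beyond a short sketch, but the packet-level erasure-recovery idea behind such FEC schemes can be illustrated with a toy single-parity code. This is a simplified stand-in, not the authors' Reed-Solomon design: one XOR parity packet per group lets the receiver rebuild any single lost packet.

```python
def xor_bytes(blocks):
    """Byte-wise XOR of equal-length packets."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

def encode(packets):
    """Append one XOR parity packet to a group of equal-length data packets."""
    return packets + [xor_bytes(packets)]

def recover(received):
    """Rebuild a group in which exactly one packet (marked None) was lost.

    XORing all survivors reproduces the missing packet, because every
    byte of the parity packet is the XOR of the corresponding data bytes.
    Returns the data packets only (parity dropped).
    """
    lost = received.index(None)
    survivors = [p for p in received if p is not None]
    restored = xor_bytes(survivors)
    result = list(received)
    result[lost] = restored
    return result[:-1]
```

A group of n data packets thus tolerates any single loss at a cost of one extra packet; real RS erasure codes generalize this to recover k losses from k parity packets.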
Real time biometric surveillance with gait recognition
NASA Astrophysics Data System (ADS)
Mohapatra, Subasish; Swain, Anisha; Das, Manaswini; Mohanty, Subhadarshini
2018-04-01
Biometric surveillance has become indispensable for every system in recent years. Biometric authentication, identification, and screening are widely used across domains to prevent unauthorized access. A large amount of data needs to be updated, segregated, and safeguarded from malicious software and misuse. Biometrics are the intrinsic characteristics of each individual. Currently, fingerprints, irises, passwords, unique keys, and cards are commonly used for authentication purposes. These methods have various issues related to security and confidentiality, and these systems are not yet automated enough to provide safety and security. The gait recognition system is an alternative that overcomes the drawbacks of recent biometric-based authentication systems. Gait recognition is newer and has not yet been implemented in real-world scenarios. It is an unobtrusive system that requires no knowledge or cooperation of the subject. Gait is a unique behavioral characteristic of every human being that is hard to imitate. The walking style of an individual, combined with the orientation of the joints in the skeletal structure and the inclinations between them, imparts a unique characteristic. A person can alter their external appearance but not their skeletal structure. These are real-time, automatic systems that can process even low-resolution images and video frames. In this paper, we propose a gait recognition system and compare its performance with conventional biometric identification systems.
Qualitative analysis of Parkinson's disease information on social media: the case of YouTube™.
Al-Busaidi, Ibrahim Saleh; Anderson, Tim J; Alamri, Yassar
2017-09-01
There is a paucity of data pertaining to the usefulness of information presented on social media platforms on chronic neuropsychiatric conditions such as Parkinson's disease (PD). The aim of this study was to examine the quality of YouTube™ videos that deliver general information on PD and the availability and design of instructional videos addressing the caregiving role in PD. YouTube™ was searched using the keyword "Parkinson's disease" for relevant videos. Videos were assessed for usefulness and accuracy based on pre-defined criteria. Data on video characteristics including total viewership, duration, ratings, and source of videos were collated. Instructional PD videos that addressed the role of caregivers were examined closely for the design and scope of instructional content. A total of 100 videos met the inclusion criteria. Just under a third of the videos (28%) were uploaded by trusted academic organisations. Overall, 15% of PD videos were found to be somewhat useful and only 4% were assessed as providing very useful PD information; 3% of surveyed videos were misleading. The mean number of video views (regardless of video source) was not significantly different between the different video ratings (p = 0.86). Although personal videos trended towards being less useful than videos from academic organisations, this association was not statistically significant (p = 0.13). To our knowledge, this is the first study to assess the usefulness of PD information on the largest video-sharing website, YouTube™. In general, the overall quality of information presented in the videos screened was mediocre. Viewership of accurate vs. misleading information was, however, very similar. Therefore, healthcare providers should direct PD patients and their families to the resources that provide reliable and accurate information.
YouTube as a source of clinical skills education.
Duncan, Ian; Yarwood-Ross, Lee; Haigh, Carol
2013-12-01
YouTube may be viewed as a great 'time waster', but a significant amount of educational material can be found if the user is carefully selective. Interestingly, the growth of educational video on YouTube is closely associated with video viewership, which increased from 22% to 38% between 2007 and 2009. This paper describes the findings of a study undertaken to assess the quality of clinical skills videos available on the video-sharing site YouTube. This study evaluated 100 YouTube sites, approximately 1500 min or 25 h of content across 10 common clinical skill-related topics. In consultation with novice practitioners, nurses in the first year of their university diploma programme, we identified ten common clinical skills that students would typically explore in more detail or would wish to revisit outside of the formal teaching environment. For each of these topics, we viewed each of the first 10 videos on the YouTube website. The videos were evaluated using a modification of the criteria outlined in the Evaluation of Video Media Guideline. The topic with the largest number of both postings and views was cardiopulmonary resuscitation, and more specialist, nursing- or health-related topics such as managing a syringe driver or undertaking a pain assessment had less video content and fewer viewers. Only one video out of the 100 analysed could be categorised as 'good', and that was the one in the Cannulation section. 60% of the CPR and venepuncture content was categorised as 'satisfactory'. There is a clear need for the quality of YouTube videos to be subjected to a rigorous evaluation. Lecturers should be more proactive in recommending suitable YouTube material as supplementary learning materials after appropriately checking for quality. Copyright © 2013 Elsevier Ltd. All rights reserved.
The influence of motion quality on responses towards video playback stimuli.
Ware, Emma; Saunders, Daniel R; Troje, Nikolaus F
2015-05-11
Visual motion, a critical cue in communication, can be manipulated and studied using video playback methods. A primary concern for the video playback researcher is the degree to which objects presented on video appear natural to the non-human subject. Here we argue that the quality of motion cues on video, as determined by the video's image presentation rate (IPR), are of particular importance in determining a subject's social response behaviour. We present an experiment testing the effect of variations in IPR on pigeon (Columba livia) response behaviour towards video images of courting opposite sex partners. Male and female pigeons were presented with three video playback stimuli, each containing a different social partner. Each stimulus was then modified to appear at one of three IPRs: 15, 30 or 60 progressive (p) frames per second. The results showed that courtship behaviour became significantly longer in duration as IPR increased. This finding implies that the IPR significantly affects the perceived quality of motion cues impacting social behaviour. In males we found that the duration of courtship also depended on the social partner viewed and that this effect interacted with the effects of IPR on behaviour. Specifically, the effect of social partner reached statistical significance only when the stimuli were displayed at 60 p, demonstrating the potential for erroneous results when insufficient IPRs are used. In addition to demonstrating the importance of IPR in video playback experiments, these findings help to highlight and describe the role of visual motion processing in communication behaviour. © 2015. Published by The Company of Biologists Ltd.
Open-source telemedicine platform for wireless medical video communication.
Panayides, A; Eleftheriou, I; Pantziaris, M
2013-01-01
An m-health system for real-time wireless communication of medical video based on open-source software is presented. The objective is to deliver a low-cost telemedicine platform which will allow for reliable remote diagnosis in m-health applications such as emergency incidents, mass population screening, and medical education purposes. The performance of the proposed system is demonstrated using five atherosclerotic plaque ultrasound videos. The videos are encoded at the clinically acquired resolution, in addition to lower, QCIF, and CIF resolutions, at different bitrates, and four different encoding structures. Commercially available wireless local area network (WLAN) and 3.5G high-speed packet access (HSPA) wireless channels are used to validate the developed platform. Objective video quality assessment is based on PSNR ratings, following calibration using the variable frame delay (VFD) algorithm that removes temporal mismatch between original and received videos. Clinical evaluation is based on an atherosclerotic plaque ultrasound video assessment protocol. Experimental results show that adequate diagnostic-quality wireless medical video communication is realized using the designed telemedicine platform. HSPA cellular networks provide for ultrasound video transmission at the acquired resolution, while VFD algorithm utilization bridges objective and subjective ratings.
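The platform's objective assessment relies on PSNR after VFD alignment. As a minimal illustration (not the authors' implementation), frame-level PSNR for 8-bit pixels can be computed from the mean squared error between a reference and a degraded frame:

```python
import math

def psnr(reference, degraded, max_value=255):
    """Peak signal-to-noise ratio (dB) between two equal-length pixel sequences."""
    if len(reference) != len(degraded):
        raise ValueError("frames must have the same number of pixels")
    mse = sum((r - d) ** 2 for r, d in zip(reference, degraded)) / len(reference)
    if mse == 0:
        return float("inf")  # identical frames: PSNR is unbounded
    return 10 * math.log10(max_value ** 2 / mse)
```

Sequence-level PSNR is then typically the average over frames; the VFD step matters because comparing temporally misaligned frames would deflate the score for reasons unrelated to compression or channel loss.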
Simultaneous Analysis and Quality Assurance for Diffusion Tensor Imaging
Lauzon, Carolyn B.; Asman, Andrew J.; Esparza, Michael L.; Burns, Scott S.; Fan, Qiuyun; Gao, Yurui; Anderson, Adam W.; Davis, Nicole; Cutting, Laurie E.; Landman, Bennett A.
2013-01-01
Diffusion tensor imaging (DTI) enables non-invasive, cyto-architectural mapping of in vivo tissue microarchitecture through voxel-wise mathematical modeling of multiple magnetic resonance imaging (MRI) acquisitions, each differently sensitized to water diffusion. DTI computations are fundamentally estimation processes and are sensitive to noise and artifacts. Despite widespread adoption in the neuroimaging community, maintaining consistent DTI data quality remains challenging given the propensity for patient motion, artifacts associated with fast imaging techniques, and the possibility of hardware changes/failures. Furthermore, the quantity of data acquired per voxel, the non-linear estimation process, and numerous potential use cases complicate traditional visual data inspection approaches. Currently, quality inspection of DTI data has relied on visual inspection and individual processing in DTI analysis software programs (e.g. DTIPrep, DTI-studio). However, recent advances in applied statistical methods have yielded several different metrics to assess noise level, artifact propensity, quality of tensor fit, variance of estimated measures, and bias in estimated measures. To date, these metrics have been largely studied in isolation. Herein, we select complementary metrics for integration into an automatic DTI analysis and quality assurance pipeline. The pipeline completes in 24 hours, stores statistical outputs, and produces a graphical summary quality analysis (QA) report. We assess the utility of this streamlined approach for empirical quality assessment on 608 DTI datasets from pediatric neuroimaging studies. The efficiency and accuracy of quality analysis using the proposed pipeline is compared with quality analysis based on visual inspection. The unified pipeline is found to save a statistically significant amount of time (over 70%) while improving the consistency of QA between a DTI expert and a pool of research associates. 
Projection of QA metrics onto a low-dimensional manifold reveals qualitative, but clear, QA-study associations and suggests that automated outlier/anomaly detection would be feasible. PMID:23637895
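The abstract does not detail how outlier scans would be flagged; a minimal sketch of one common approach, assuming each scan is summarized by a single scalar QA metric, marks scans that deviate far from the cohort mean:

```python
from statistics import mean, stdev

def flag_outliers(values, z_thresh=2.0):
    """Return indices of scans whose QA metric lies more than
    z_thresh sample standard deviations from the cohort mean.

    This is a generic z-score screen, not the manifold-based
    detection suggested in the paper.
    """
    mu = mean(values)
    sigma = stdev(values)
    if sigma == 0:
        return []  # all scans identical: nothing to flag
    return [i for i, v in enumerate(values) if abs(v - mu) / sigma > z_thresh]
```

In practice each DTI dataset yields several complementary metrics (noise level, tensor-fit quality, bias), so a real pipeline would screen each metric, or a joint low-dimensional projection of them, rather than a single value.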
Comparing image quality of print-on-demand books and photobooks from web-based vendors
NASA Astrophysics Data System (ADS)
Phillips, Jonathan; Bajorski, Peter; Burns, Peter; Fredericks, Erin; Rosen, Mitchell
2010-01-01
Because of the emergence of e-commerce and developments in print engines designed for economical output of very short runs, there are increased business opportunities and consumer options for print-on-demand books and photobooks. The current state of these printing modes allows for direct uploading of book files via the web, printing on nonoffset printers, and distributing by standard parcel or mail delivery services. The goal of this research is to assess the image quality of print-on-demand books and photobooks produced by various Web-based vendors and to identify correlations between psychophysical results and objective metrics. Six vendors were identified for one-off (single-copy) print-on-demand books, and seven vendors were identified for photobooks. Participants rank ordered overall quality of a subset of individual pages from each book, where the pages included text, photographs, or a combination of the two. Observers also reported overall quality ratings and price estimates for the bound books. Objective metrics of color gamut, color accuracy, accuracy of International Color Consortium profile usage, eye-weighted root mean square L*, and cascaded modulation transfer acutance were obtained and compared to the observer responses. We introduce some new methods for normalizing data as well as for strengthening the statistical significance of the results. Our approach includes the use of latent mixed-effect models. We found statistically significant correlation with overall image quality and some of the spatial metrics, but correlations between psychophysical results and other objective metrics were weak or nonexistent. Strong correlation was found between psychophysical results of overall quality assessment and estimated price associated with quality. The photobook set of vendors reached higher image-quality ratings than the set of print-on-demand vendors. However, the photobook set had higher image-quality variability.
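Among the objective metrics, color accuracy is conventionally summarized as a CIELAB color difference between printed and target patches. As a minimal sketch, here is the classic CIE76 ΔE*ab formula (the abstract does not state which ΔE variant the study used):

```python
import math

def delta_e_76(lab1, lab2):
    """CIE76 color difference: Euclidean distance between two
    CIELAB triples (L*, a*, b*). Roughly, values near 1 are at
    the threshold of a just-noticeable difference."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))
```

A print's overall color accuracy is then often reported as the mean or 95th-percentile ΔE over a test chart of patches.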
High-throughput monitoring of major cell functions by means of lensfree video microscopy
Kesavan, S. Vinjimore; Momey, F.; Cioni, O.; David-Watine, B.; Dubrulle, N.; Shorte, S.; Sulpice, E.; Freida, D.; Chalmond, B.; Dinten, J. M.; Gidrol, X.; Allier, C.
2014-01-01
Quantification of basic cell functions is a preliminary step to understanding complex cellular mechanisms, e.g., to test compatibility of biomaterials, to assess the effectiveness of drugs and siRNAs, and to control cell behavior. However, commonly used quantification methods are label-dependent end-point assays. As an alternative, using our lensfree video microscopy platform to perform high-throughput real-time monitoring of cell culture, we introduce specifically devised metrics that are capable of non-invasive quantification of cell functions such as cell-substrate adhesion, cell spreading, cell division, cell division orientation, and cell death. Unlike existing methods, our platform and associated metrics embrace an entire population of thousands of cells whilst monitoring the fate of every single cell within the population. This results in a high-content description of cell functions that typically contains 25,000–900,000 measurements per experiment depending on cell density and period of observation. As proof of concept, we monitored cell-substrate adhesion and spreading kinetics of human Mesenchymal Stem Cells (hMSCs) and primary human fibroblasts, we determined the cell division orientation of hMSCs, and we observed the effect of transfection of siCellDeath (siRNA known to induce cell death) on hMSCs and human Osteo Sarcoma (U2OS) cells. PMID:25096726
Remote Video Auditing in the Surgical Setting.
Pedersen, Anne; Getty Ritter, Elizabeth; Beaton, Megan; Gibbons, David
2017-02-01
Remote video auditing, a method first adopted by the food preparation industry, was later introduced to the health care industry as a novel approach to improving hand hygiene practices. This strategy yielded tremendous and sustained improvement, causing leaders to consider the potential effects of such technology on the complex surgical environment. This article outlines the implementation of remote video auditing and the first year of activity, outcomes, and measurable successes in a busy surgery department in the eastern United States. A team of anesthesia care providers, surgeons, and OR personnel used low-resolution cameras, large-screen displays, and cell phone alerts to make significant progress in three domains: application of the Universal Protocol for preventing wrong site, wrong procedure, wrong person surgery; efficiency metrics; and cleaning compliance. The use of cameras with real-time auditing and results-sharing created an environment of continuous learning, compliance, and synergy, which has resulted in a safer, cleaner, and more efficient OR. Copyright © 2017 AORN, Inc. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Strohmeier, Dominik; Kunze, Kristina; Göbel, Klemens; Liebetrau, Judith
2013-01-01
Assessing audiovisual Quality of Experience (QoE) is a key element to ensure quality acceptance of today's multimedia products. The use of descriptive evaluation methods allows evaluating QoE preferences and the underlying QoE features jointly. From our previous evaluations on QoE for mobile 3D video we found that mainly one dimension, video quality, dominates the descriptive models. Large variations of the visual video quality in the tests may be the reason for these findings. A new study was conducted to investigate whether test sets of low QoE are described differently than those of high audiovisual QoE. Reanalysis of previous data sets seems to confirm this hypothesis. Our new study consists of a pre-test and a main test, using the Descriptive Sorted Napping method. Data sets of good-only and bad-only video quality were evaluated separately. The results show that the perception of bad QoE is mainly determined one-dimensionally by visual artifacts, whereas the perception of good quality shows multiple dimensions. Here, mainly semantic-related features of the content and affective descriptors are used by the naïve test participants. The results show that, with increasing QoE of audiovisual systems, content semantics and users' affective involvement will become important for assessing QoE differences.
A Metric-Based Validation Process to Assess the Realism of Synthetic Power Grids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Birchfield, Adam; Schweitzer, Eran; Athari, Mir
Public power system test cases that are of high quality benefit the power systems research community with expanded resources for testing, demonstrating, and cross-validating new innovations. Building synthetic grid models for this purpose is a relatively new problem, for which a challenge is to show that created cases are sufficiently realistic. This paper puts forth a validation process based on a set of metrics observed from actual power system cases. These metrics follow the structure, proportions, and parameters of key power system elements, which can be used in assessing and validating the quality of synthetic power grids. Though wide diversity exists in the characteristics of power systems, the paper focuses on an initial set of common quantitative metrics to capture the distribution of typical values from real power systems. The process is applied to two new public test cases, which are shown to meet the criteria specified in the metrics of this paper.
Improving wavelet denoising based on an in-depth analysis of the camera color processing
NASA Astrophysics Data System (ADS)
Seybold, Tamara; Plichta, Mathias; Stechele, Walter
2015-02-01
While denoising is an extensively studied task in signal processing research, most denoising methods are designed and evaluated using readily processed image data, e.g. the well-known Kodak data set. The noise model is usually additive white Gaussian noise (AWGN). This kind of test data does not correspond to today's real-world image data taken with a digital camera. Using such unrealistic data to test, optimize and compare denoising algorithms may lead to incorrect parameter tuning or suboptimal choices in research on real-time camera denoising algorithms. In this paper we derive a precise analysis of the noise characteristics for the different steps in the color processing. Based on real camera noise measurements and simulation of the processing steps, we obtain a good approximation for the noise characteristics. We further show how this approximation can be used in standard wavelet denoising methods. We improve the wavelet hard thresholding and bivariate thresholding based on our noise analysis results. Both the visual quality and objective quality metrics show the advantage of the proposed method. As the method is implemented using look-up tables that are calculated before the denoising step, our method can be implemented with very low computational complexity and can process HD video sequences in real time on an FPGA.
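As a minimal, illustrative sketch of the wavelet hard thresholding being improved here (a single unnormalised Haar level on a 1-D signal, not the authors' noise-adaptive look-up-table method):

```python
def haar_step(signal):
    """One level of an unnormalised Haar transform: pairwise
    averages (low-pass) and details (high-pass). Length must be even."""
    avg = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    det = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return avg, det

def inverse_haar_step(avg, det):
    """Exact inverse of haar_step."""
    out = []
    for a, d in zip(avg, det):
        out += [a + d, a - d]
    return out

def denoise(signal, threshold):
    """Hard-threshold the detail coefficients: small details are
    assumed to be noise and zeroed; large ones (edges) are kept."""
    avg, det = haar_step(signal)
    det = [d if abs(d) > threshold else 0.0 for d in det]
    return inverse_haar_step(avg, det)
```

The paper's contribution is essentially in choosing that threshold: instead of one global value tuned for AWGN, the threshold follows the signal-dependent noise level left by the camera's color processing.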
The Assignment of Scale to Object-Oriented Software Measures
NASA Technical Reports Server (NTRS)
Neal, Ralph D.; Weistroffer, H. Roland; Coppins, Richard J.
1997-01-01
In order to improve productivity (and quality), measurement of specific aspects of software has become imperative. As object-oriented programming languages have become more widely used, metrics designed specifically for object-oriented software are required. Recently a large number of new metrics for object-oriented software have appeared in the literature. Unfortunately, many of these proposed metrics have not been validated to measure what they purport to measure. In this paper fifty (50) of these metrics are analyzed.
Algal bioassessment metrics for wadeable streams and rivers of Maine, USA
Danielson, Thomas J.; Loftin, Cynthia S.; Tsomides, Leonidas; DiFranco, Jeanne L.; Connors, Beth
2011-01-01
Many state water-quality agencies use biological assessment methods based on lotic fish and macroinvertebrate communities, but relatively few states have incorporated algal multimetric indices into monitoring programs. Algae are good indicators for monitoring water quality because they are sensitive to many environmental stressors. We evaluated benthic algal community attributes along a landuse gradient affecting wadeable streams and rivers in Maine, USA, to identify potential bioassessment metrics. We collected epilithic algal samples from 193 locations across the state. We computed weighted-average optima for common taxa for total P, total N, specific conductance, % impervious cover, and % developed watershed, which included all land use that is no longer forest or wetland. We assigned Maine stream tolerance values and categories (sensitive, intermediate, tolerant) to taxa based on their optima and responses to watershed disturbance. We evaluated performance of algal community metrics used in multimetric indices from other regions and novel metrics based on Maine data. Metrics specific to Maine data, such as the relative richness of species characterized as being sensitive in Maine, were more correlated with % developed watershed than most metrics used in other regions. Few community-structure attributes (e.g., species richness) were useful metrics in Maine. Performance of algal bioassessment models would be improved if metrics were evaluated with attributes of local data before inclusion in multimetric indices or statistical models. © 2011 by The North American Benthological Society.
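A weighted-average optimum weights each site's environmental value by the taxon's abundance there, so sites where the taxon thrives pull the estimate toward their conditions. A minimal sketch, assuming abundances and site values are given as parallel lists:

```python
def weighted_average_optimum(abundances, env_values):
    """Weighted-average optimum of a taxon for one environmental
    variable (e.g. total P): sum(abundance * value) / sum(abundance)."""
    total = sum(abundances)
    if total == 0:
        raise ValueError("taxon absent from all sites")
    return sum(a * e for a, e in zip(abundances, env_values)) / total
```

Taxa can then be binned into tolerance categories (sensitive, intermediate, tolerant) by where their optima fall along the disturbance gradient, which is the basis of the Maine-specific metrics described above.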
Naidu, Ramana K.
2018-01-01
Background: Chronic pain associated with serious illnesses is having a major impact on population health in the United States. Accountability for high quality care for community-dwelling patients with serious illnesses requires selection of metrics that capture the burden of chronic pain whose treatment may be enhanced or complicated by opioid use. Objective: Our aim was to evaluate options for assessing pain in seriously ill community-dwelling adults, to discuss the use/abuse of opioids in individuals with chronic pain, and to suggest pain and opioid use metrics that can be considered for screening and evaluation of patient responses and quality care. Design: Structured literature review. Measurements: Evaluation of pain and opioid use assessment metrics and measures for their potential usefulness in the community. Results: Several pain and opioid assessment instruments are available for consideration. Yet, no one pain instrument has been identified as "the best" to assess pain in seriously ill community-dwelling patients. Screening tools exist that are specific to the assessment of risk in opioid management. Opioid screening can assess risk based on substance use history, general risk taking, and reward-seeking behavior. Conclusions: Accountability for high quality care for community-dwelling patients requires selection of metrics that will capture the burden of chronic pain and beneficial use or misuse of opioids. Future research is warranted to identify, modify, or develop instruments that contain important metrics, demonstrate a balance between sensitivity and specificity, and address patient preferences and quality outcomes. PMID:29091525
Secured web-based video repository for multicenter studies
Yan, Ling; Hicks, Matt; Winslow, Korey; Comella, Cynthia; Ludlow, Christy; Jinnah, H. A; Rosen, Ami R; Wright, Laura; Galpern, Wendy R; Perlmutter, Joel S
2015-01-01
Background We developed a novel secured web-based dystonia video repository for the Dystonia Coalition, part of the Rare Disease Clinical Research network funded by the Office of Rare Diseases Research and the National Institute of Neurological Disorders and Stroke. A critical component of phenotypic data collection for all projects of the Dystonia Coalition includes a standardized video of each participant. We now describe our method for collecting, serving and securing these videos that is widely applicable to other studies. Methods Each recruiting site uploads standardized videos to a centralized secured server for processing to permit website posting. The streaming technology used to view the videos from the website does not allow downloading of video files. With appropriate institutional review board approval and agreement with the hosting institution, users can search and view selected videos on the website using customizable, permissions-based access that maintains security yet facilitates research and quality control. Results This approach provides a convenient platform for researchers across institutions to evaluate and analyze shared video data. We have applied this methodology for quality control, confirmation of diagnoses, validation of rating scales, and implementation of new research projects. Conclusions We believe our system can be a model for similar projects that require access to common video resources. PMID:25630890
The Use of Smart Glasses for Surgical Video Streaming.
Hiranaka, Takafumi; Nakanishi, Yuta; Fujishiro, Takaaki; Hida, Yuichi; Tsubosaka, Masanori; Shibata, Yosaku; Okimura, Kenjiro; Uemoto, Harunobu
2017-04-01
Observation of surgical procedures performed by experts is extremely important for acquisition and improvement of surgical skills. Smart glasses are small computers, which comprise a head-mounted monitor and video camera, and can be connected to the internet. They can be used for remote observation of surgeries by video streaming. Although Google Glass is the most commonly used smart glasses for medical purposes, it is still unavailable commercially and has some limitations. This article reports the use of a different type of smart glasses, InfoLinker, for surgical video streaming. InfoLinker has been commercially available in Japan for industrial purposes for more than 2 years. It is connected to a video server via wireless internet directly, and streaming video can be seen anywhere an internet connection is available. We have attempted live video streaming of knee arthroplasty operations that were viewed at several different locations, including foreign countries, on a common web browser. Although the quality of video images depended on the resolution and dynamic range of the video camera, speed of internet connection, and the wearer's attention to minimize image shaking, video streaming could be easily performed throughout the procedure. The wearer could confirm the quality of the video as the video was being shot by the head-mounted display. The time and cost for observation of surgical procedures can be reduced by InfoLinker, and further improvement of hardware as well as the wearer's video shooting technique is expected. We believe that this can be used in other medical settings.
Teaching Corporate Culture Using Interactive Video Training.
ERIC Educational Resources Information Center
Gardner, P. R.
The Westinghouse Hanford Company Total Quality Program includes the development of Hanford General Employee Training (HGET), an interactive video course. The commitment to total quality is developed in both new and requalifying employees by requiring them to make positive choices when confronted with real life scenarios showing violations of…
The emerging High Efficiency Video Coding standard (HEVC)
NASA Astrophysics Data System (ADS)
Raja, Gulistan; Khan, Awais
2013-12-01
High definition video (HDV) is becoming more popular by the day. This paper describes a performance analysis of the latest upcoming video standard, known as High Efficiency Video Coding (HEVC). HEVC is designed to fulfil all the requirements for future high definition videos. In this paper, three configurations (intra only, low delay and random access) of HEVC are analyzed using various 480p, 720p and 1080p high definition test video sequences. Simulation results show the superior objective and subjective quality of HEVC.
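The objective quality reported in such codec analyses is conventionally expressed as PSNR between original and decoded frames. A minimal sketch of that computation, independent of any particular HEVC implementation (the nested-list frame representation is an assumption for illustration):

```python
import math

def psnr(original, decoded, max_val=255):
    """Peak signal-to-noise ratio between two equally sized 8-bit frames."""
    diffs = [(o - d) ** 2
             for row_o, row_d in zip(original, decoded)
             for o, d in zip(row_o, row_d)]
    mse = sum(diffs) / len(diffs)
    if mse == 0:
        return float("inf")  # identical frames: no distortion
    return 10 * math.log10(max_val ** 2 / mse)

# A frame identical to its reference has infinite PSNR
print(psnr([[0, 255]], [[0, 255]]))  # -> inf
```

In practice PSNR is averaged per-frame over a sequence, alongside subjective mean opinion scores.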
Technology survey on video face tracking
NASA Astrophysics Data System (ADS)
Zhang, Tong; Gomes, Herman Martins
2014-03-01
With the pervasiveness of monitoring cameras installed in public areas, schools, hospitals, workplaces and homes, video analytics technologies for interpreting these video contents are becoming increasingly relevant to people's lives. Among such technologies, human face detection and tracking (and face identification in many cases) are particularly useful in various application scenarios. While plenty of research has been conducted on face tracking and many promising approaches have been proposed, there are still significant challenges in recognizing and tracking people in videos with uncontrolled capturing conditions, largely due to pose and illumination variations, as well as occlusions and cluttered background. It is especially complex to track and identify multiple people simultaneously in real time due to the large amount of computation involved. In this paper, we present a survey of literature and software published or developed in recent years on the topic of face tracking. The survey covers the following topics: 1) mainstream and state-of-the-art face tracking methods, including features used to model the targets and metrics used for tracking; 2) face identification and face clustering from face sequences; and 3) software packages or demonstrations that are available for algorithm development or trial. A number of publicly available databases for face tracking are also introduced.
Video Use in Teacher Education: A Survey of Teacher-Educators' Practices across Disciplines
ERIC Educational Resources Information Center
Arya, Poonam; Christ, Tanya; Chiu, Ming Ming
2016-01-01
Video methods utilize tenets of high quality teacher education and support education students' learning and application of learning to teaching practices. However, how frequently video is used in teacher education, and in what ways is unknown. Therefore, this study used survey data to identify the extent to which 94 teacher-educators used video in…
Requirement Metrics for Risk Identification
NASA Technical Reports Server (NTRS)
Hammer, Theodore; Huffman, Lenore; Wilson, William; Rosenberg, Linda; Hyatt, Lawrence
1996-01-01
The Software Assurance Technology Center (SATC) is part of the Office of Mission Assurance of the Goddard Space Flight Center (GSFC). The SATC's mission is to assist National Aeronautics and Space Administration (NASA) projects to improve the quality of software which they acquire or develop. The SATC's efforts are currently focused on the development and use of metric methodologies and tools that identify and assess risks associated with software performance and scheduled delivery. This starts at the requirements phase, where the SATC, in conjunction with software projects at GSFC and other NASA centers, is working to identify tools and metric methodologies to assist project managers in identifying and mitigating risks. This paper discusses requirement metrics currently being used at NASA in a collaborative effort between the SATC and the Quality Assurance Office at GSFC to utilize the information available through the application of requirements management tools.
NASA Astrophysics Data System (ADS)
Palma, V.; Carli, M.; Neri, A.
2011-02-01
In this paper a Multi-view Distributed Video Coding scheme for mobile applications is presented. Specifically, a new fusion technique between temporal and spatial side information in the Zernike Moments domain is proposed. Distributed video coding introduces a flexible architecture that enables the design of very low-complexity video encoders compared to their traditional counterparts. The main goal of our work is to generate at the decoder the side information that optimally blends temporal and inter-view data. Multi-view distributed coding performance strongly depends on the quality of the side information built at the decoder. To this end, a spatial view compensation/prediction in the Zernike moments domain is applied to improve its quality. Spatial and temporal motion activity have been fused together to obtain the overall side information. The proposed method has been evaluated by rate-distortion performance under different inter-view and temporal estimation quality conditions.
NASA Astrophysics Data System (ADS)
Nishikawa, Robert M.; MacMahon, Heber; Doi, Kunio; Bosworth, Eric
1991-05-01
Communication between radiologists and clinicians could be improved if a secondary image (copy of the original image) accompanied the radiologic report. In addition, the number of lost original radiographs could be decreased, since clinicians would have less need to borrow films. The secondary image should be simple and inexpensive to produce, while providing sufficient image quality for verification of the diagnosis. We are investigating the potential usefulness of a video printer for producing copies of radiographs, i.e. images printed on thermal paper. The video printer we examined (Seikosha model VP-3500) can provide 64 shades of gray. It is capable of recording images up to 1,280 pixels by 1,240 lines and can accept any raster-type video signal. The video printer was characterized in terms of its linearity, contrast, latitude, resolution, and noise properties. The quality of video-printer images was also evaluated in an observer study using portable chest radiographs. We found that observers could confirm up to 90% of the reported findings in the thorax using video-printer images, when the original radiographs were of high quality. The number of verified findings was diminished when high spatial resolution was required (e.g. detection of a subtle pneumothorax) or when a low-contrast finding was located in the mediastinal area or below the diaphragm (e.g. nasogastric tubes).
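Driving a 64-gray-level printer from 8-bit image data implies quantizing 256 input codes down to 64 output levels. A hedged sketch of that mapping (illustrative only; the study measured the VP-3500's actual transfer characteristic rather than assuming a uniform one):

```python
def quantize_to_levels(pixel, levels=64, max_val=255):
    """Map an 8-bit pixel value onto one of `levels` output gray levels,
    then scale back to the 0..max_val range for display."""
    step = (max_val + 1) / levels          # 4 input codes per output level
    level = min(int(pixel / step), levels - 1)
    return round(level * max_val / (levels - 1))

# The 256 possible input values collapse to 64 distinct output values
outputs = {quantize_to_levels(v) for v in range(256)}
print(len(outputs))  # -> 64
```

The visible effect of such quantization (contouring in low-contrast regions) is consistent with the reduced detectability of subtle findings reported above.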
2017-01-01
This study was conducted to evaluate the performance and reach of YouTube videos on physical examinations made by Spanish university students. We analyzed performance metrics for 4 videos on physical examinations in Spanish that were created by medical students at Miguel Hernández University (Elche, Spain) and are available on YouTube, on the following topics: the head and neck (7:30), the cardiovascular system (7:38), the respiratory system (13:54), and the abdomen (11:10). We used the Analytics application offered by the YouTube platform to analyze the reach of the videos from the upload date (February 17, 2015) to July 28, 2017 (2 years, 5 months, and 11 days). The total number of views, length of watch-time, and the mean view duration for the 4 videos were, respectively: 164,403 views (mean, 41,101 views; range, 12,389 to 94,573 views), 425,888 minutes (mean, 106,472 minutes; range, 37,889 to 172,840 minutes), and 2:56 minutes (range, 1:49 to 4:03 minutes). Mexico was the most frequent playback location, followed by Spain, Colombia, and Venezuela. Uruguay, Ecuador, Mexico, and Puerto Rico had the most views per 100,000 population. Spanish-language tutorials are an alternative tool for teaching physical examination skills to students whose first language is not English. The videos were especially popular in Uruguay, Ecuador, and Mexico. PMID:29278903
Action change detection in video using a bilateral spatial-temporal constraint
NASA Astrophysics Data System (ADS)
Tian, Jing; Chen, Li
2016-08-01
Action change detection in video aims to detect action discontinuity in video. Silhouette-based features are desirable for action change detection. This paper studies the problem of silhouette-quality assessment. To that end, a no-reference approach that does not require ground truth is proposed to evaluate the quality of silhouettes, by exploiting both the boundary contrast of the silhouettes in the spatial domain and the consistency of the silhouettes in the temporal domain. This contrasts with conventional approaches, which exploit either only spatial or only temporal information of silhouettes. Experiments are conducted using artificially generated degraded silhouettes to show that the proposed approach outperforms conventional approaches and achieves more accurate quality assessment. Furthermore, experiments show that the proposed approach is able to improve the accuracy of conventional action change detection approaches on two human action video datasets. The average runtime of the proposed approach on the Weizmann action video dataset is 0.08 seconds per frame using the Matlab programming language. It is computationally efficient and has potential for real-time implementation.
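The temporal-consistency cue can be illustrated with a simple overlap measure between consecutive binary silhouettes. This is a generic sketch of the idea, not the authors' exact formulation:

```python
def temporal_consistency(prev_mask, curr_mask):
    """Intersection-over-union of two consecutive binary silhouette masks.
    High overlap suggests a stable, plausible silhouette; a sudden drop
    suggests either silhouette degradation or an action change."""
    inter = sum(p and c for row_p, row_c in zip(prev_mask, curr_mask)
                for p, c in zip(row_p, row_c))
    union = sum(p or c for row_p, row_c in zip(prev_mask, curr_mask)
                for p, c in zip(row_p, row_c))
    return inter / union if union else 1.0

a = [[1, 1], [0, 0]]
b = [[1, 0], [0, 0]]
print(temporal_consistency(a, b))  # -> 0.5
```

A full quality index along the paper's lines would combine such a temporal term with a spatial boundary-contrast term.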
Video quality of 3G videophones for telephone cardiopulmonary resuscitation.
Tränkler, Uwe; Hagen, Oddvar; Horsch, Alexander
2008-01-01
We simulated a cardiopulmonary resuscitation (CPR) scene with a manikin and used two 3G videophones on the caller's side to transmit video to a laptop PC. Five observers (two doctors with experience in emergency medicine and three paramedics) evaluated the video. They judged whether the manikin was breathing and whether they would give advice for CPR; they also graded the confidence of their decision-making. Breathing was only visible from certain orientations of the videophones, at distances below 150 cm with good illumination and a still background. Since the phones produced a degradation in colours and shadows, detection of breathing mainly depended on moving contours. Low camera positioning produced better results than having the camera high up. Darkness, shaking of the camera and a moving background made detection of breathing almost impossible. The video from the two 3G videophones that were tested was of sufficient quality for telephone CPR provided that camera orientation, distance, illumination and background were carefully chosen. Thus it seems possible to use 3G videophones for emergency calls involving CPR. However, further studies on the required video quality in different scenarios are necessary.
Lee, Hyun-Ho; Lee, Sang-Kwon
2009-09-01
Booming sound is one of the important sounds in a passenger car. The aim of this paper is to develop an objective evaluation method for interior booming sound. The method is based on sound metrics and an ANN (artificial neural network), and is called the booming index. Previous work maintained that booming sound quality is related to loudness and sharpness--the sound metrics used in psychoacoustics--and that the booming index is developed by using the loudness and sharpness of a signal over the whole frequency range between 20 Hz and 20 kHz. In the present paper, the booming sound quality was found to be effectively related to the loudness at frequencies below 200 Hz; thus the booming index is updated by using the loudness of the signal filtered by a low-pass filter at frequencies under 200 Hz. The relationship between the booming index and the sound metrics is identified by an ANN. The updated booming index has been successfully applied to the objective evaluation of the booming sound quality of mass-produced passenger cars.
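The band restriction at the heart of the updated index can be sketched with a first-order low-pass filter at 200 Hz. This is a generic IIR sketch; the paper's loudness metric, built on psychoacoustic models, is considerably more elaborate:

```python
import math

def lowpass_200hz(samples, fs, fc=200.0):
    """First-order IIR low-pass filter: retains content below fc (Hz)."""
    alpha = 1.0 - math.exp(-2.0 * math.pi * fc / fs)
    out, y = [], 0.0
    for x in samples:
        y += alpha * (x - y)   # exponential smoothing step
        out.append(y)
    return out

# A 50 Hz booming component passes; a 5 kHz component is strongly attenuated
fs = 44100
t = [n / fs for n in range(4410)]   # 0.1 s of audio
low = lowpass_200hz([math.sin(2 * math.pi * 50 * x) for x in t], fs)
high = lowpass_200hz([math.sin(2 * math.pi * 5000 * x) for x in t], fs)
```

Loudness would then be computed on the filtered signal before feeding the ANN.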
Wang, Wei; Wang, Chunqiu; Zhao, Min
2014-03-01
To ease the burdens on the hospitalization capacity, an emerging swallowable-capsule technology has evolved to serve as a remote gastrointestinal (GI) disease examination technique with the aid of the wireless body sensor network (WBSN). Secure multimedia transmission in such a swallowable-capsule-based WBSN faces critical challenges including energy efficiency and content quality guarantee. In this paper, we propose a joint resource allocation and stream authentication scheme to maintain the best possible video quality while ensuring security and energy efficiency in GI-WBSNs. The contribution of this research is twofold. First, we establish a unique signature-hash (S-H) diversity approach in the authentication domain to optimize video authentication robustness and the authentication bit rate overhead over a wireless channel. Based on the full exploration of S-H authentication diversity, we propose a new two-tier signature-hash (TTSH) stream authentication scheme to improve the video quality by reducing authentication dependence overhead while protecting its integrity. Second, we propose to combine this authentication scheme with a unique S-H oriented unequal resource allocation (URA) scheme to improve the energy-distortion-authentication performance of wireless video delivery in GI-WBSN. Our analysis and simulation results demonstrate that the proposed TTSH with URA scheme achieves considerable gain in both authenticated video quality and energy efficiency.
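The signature-hash principle behind such stream-authentication schemes can be sketched with a hash chain: each packet carries the hash of its successor, so only one packet needs an expensive digital signature. This illustrates the general S-H idea only, not the paper's two-tier TTSH construction:

```python
import hashlib

def build_hash_chain(packets):
    """Append the SHA-256 hash of the following packet to each packet,
    working back to front. Only the resulting anchor hash (covering the
    first packet, and transitively the whole stream) needs signing."""
    augmented, next_hash = [], b""
    for pkt in reversed(packets):
        data = pkt + next_hash
        augmented.append(data)
        next_hash = hashlib.sha256(data).digest()
    augmented.reverse()
    return augmented, next_hash

def verify_chain(augmented, anchor_hash):
    """Verify packet-by-packet using only the (signed) anchor hash."""
    expected = anchor_hash
    for data in augmented:
        if hashlib.sha256(data).digest() != expected:
            return False
        expected = data[-32:]  # embedded hash of the next packet
    return True

packets = [b"frame1", b"frame2", b"frame3"]
augmented, anchor = build_hash_chain(packets)
```

One signature amortized over many hashes is what makes such schemes attractive for energy-constrained capsule sensors.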
Baek, Sunyong; Im, Sun Ju; Lee, Sun Hee; Kam, Beesung; Yune, So Joung; Lee, Sang Soo; Lee, Jung A; Lee, Yuna; Lee, Sang Yeoup
2011-12-01
The lecture is a technique for delivering knowledge and information cost-effectively to large medical classes in medical education. The aim of this study was to analyze teaching quality, based on triangle analysis of video recordings of medical lectures, to strengthen teaching competency in medical school. The subjects of this study were 13 medical professors who taught 1st- and 2nd-year medical students and agreed to a triangle analysis of video recordings of their lectures. We first performed triangle analysis, which consisted of a professional analysis of the video recordings, self-assessment by the teaching professors, and feedback from students; the data were then crosschecked by five school consultants for reliability and consistency. Most of the distress that teachers experienced during lectures occurred in uniform teaching environments, such as larger lecture classes. Larger lectures that primarily used PowerPoint as a medium to deliver information resulted in poor interaction with students. Other distressing factors in the lecture were personal characteristics and a lack of strategic faculty development. Triangle analysis of video recordings of medical lectures gives teachers an opportunity and motive to improve teaching quality. Faculty development and various improvement strategies based on this analysis are expected to help teachers succeed as effective, efficient, and attractive lecturers while improving the quality of larger lecture classes.
Adapting the ISO 20462 softcopy ruler method for online image quality studies
NASA Astrophysics Data System (ADS)
Burns, Peter D.; Phillips, Jonathan B.; Williams, Don
2013-01-01
In this paper we address the problem of image quality assessment with no-reference metrics, focusing on JPEG-corrupted images. In general, no-reference metrics are not able to measure distortions with the same performance across their possible range and with respect to different image contents. The crosstalk between content and distortion signals influences human perception. We here propose two strategies to improve the correlation between subjective and objective quality data. The first strategy is based on grouping the images according to their spatial complexity. The second one is based on a frequency analysis. Both strategies are tested on two databases available in the literature. The results show an improvement in the correlations between no-reference metrics and psycho-visual data, evaluated in terms of the Pearson Correlation Coefficient.
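The Pearson Correlation Coefficient used above to compare objective metric scores against psycho-visual data has a direct implementation. A minimal sketch:

```python
import math

def pearson(xs, ys):
    """Pearson correlation between objective metric scores (xs) and
    subjective psycho-visual scores (ys). Ranges from -1 to 1."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# A perfect linear relationship gives a coefficient of ~1.0
r = pearson([1, 2, 3], [2, 4, 6])
```

Grouping images by spatial complexity before computing such correlations is the paper's first strategy for reducing content-induced crosstalk.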
Evaluation of ride quality prediction methods for operational military helicopters
NASA Technical Reports Server (NTRS)
Leatherwood, J. D.; Clevenson, S. A.; Hollenbaugh, D. D.
1984-01-01
The results of a simulator study conducted to compare and validate various ride quality prediction methods for use in assessing passenger/crew ride comfort within helicopters are presented. Included are results quantifying 35 helicopter pilots' discomfort responses to helicopter interior noise and vibration typical of routine flights, assessment of various ride quality metrics including the NASA ride comfort model, and examination of possible criteria approaches. Results of the study indicated that crew discomfort results from a complex interaction between vibration and interior noise. Overall measures such as weighted or unweighted root-mean-square acceleration level and A-weighted noise level were not good predictors of discomfort. Accurate prediction required a metric incorporating the interactive effects of both noise and vibration. The best metric for predicting crew comfort to the combined noise and vibration environment was the NASA discomfort index.
Waddell, George; Williamon, Aaron
2017-01-01
Judgments of music performance quality are commonly employed in music practice, education, and research. However, previous studies have demonstrated the limited reliability of such judgments, and there is now evidence that extraneous visual, social, and other “non-musical” features can unduly influence them. The present study employed continuous measurement techniques to examine how the process of forming a music quality judgment is affected by the manipulation of temporally specific visual cues. Video footage comprising an appropriate stage entrance and error-free performance served as the standard condition (Video 1). This footage was manipulated to provide four additional conditions, each identical save for a single variation: an inappropriate stage entrance (Video 2); the presence of an aural performance error midway through the piece (Video 3); the same error accompanied by a negative facial reaction by the performer (Video 4); the facial reaction with no corresponding aural error (Video 5). The participants were 53 musicians and 52 non-musicians (N = 105) who individually assessed the performance quality of one of the five randomly assigned videos via a digital continuous measurement interface and headphones. The results showed that participants viewing the “inappropriate” stage entrance made judgments significantly more quickly than those viewing the “appropriate” entrance, and while the poor entrance caused significantly lower initial scores among those with musical training, the effect did not persist long into the performance. The aural error caused an immediate drop in quality judgments that persisted to a lower final score only when accompanied by the frustrated facial expression from the pianist; the performance error alone caused a temporary drop only in the musicians' ratings, and the negative facial reaction alone caused no reaction regardless of participants' musical experience. 
These findings demonstrate the importance of visual information in forming evaluative and aesthetic judgments in musical contexts and highlight how visual cues dynamically influence those judgments over time. PMID:28487662
Veterinary students' usage and perception of video teaching resources
2011-01-01
Background The purpose of our study was to use a student-centred approach to develop an online video learning resource (called 'Moo Tube') at the School of Veterinary Medicine and Science, University of Nottingham, UK and also to provide guidance for other academics in the School wishing to develop a similar resource in the future. Methods A focus group in the format of the nominal group technique was used to garner the opinions of 12 undergraduate students (3 from year-1, 4 from year-2 and 5 from year-3). Students generated lists of items in response to key questions, these responses were thematically analysed to generate key themes which were compared between the different year groups. The number of visits to 'Moo Tube' before and after an objective structured practical examination (OSPE) was also analysed to provide data on video usage. Results Students highlighted a number of strengths of video resources which can be grouped into four overarching themes: (1) teaching enhancement, (2) accessibility, (3) technical quality and (4) video content. Of these themes, students rated teaching enhancement and accessibility most highly. Video usage was seen to significantly increase (P < 0.05) prior to an examination and significantly decrease (P < 0.05) following the examination. Conclusions The students had a positive perception of video usage in higher education. Video usage increases prior to practical examinations. Image quality was a greater concern with year-3 students than with either year-1 or 2 students but all groups highlighted the following as important issues: i) good sound quality, ii) accessibility, including location of videos within electronic libraries, and iii) video content. Based on the findings from this study, guidelines are suggested for those developing undergraduate veterinary videos. We believe that many aspects of our list will have resonance in other areas of medical education and higher education. PMID:21219639
Enhance Video Film using Retinex method
NASA Astrophysics Data System (ADS)
Awad, Rasha; Al-Zuky, Ali A.; Al-Saleh, Anwar H.; Mohamad, Haidar J.
2018-05-01
An enhancement technique is used to improve the quality of the studied video. Algorithms based on the mean and standard deviation are used as criteria within this paper and applied to each video clip, which is divided into 80 images. The studied filming environment has different light intensities (315, 566, and 644 Lux). These different environments approximate the conditions of outdoor filming. The outputs of the suggested algorithm are compared with the results before applying it. The method is applied in two ways: first, it is applied to the full video clip to obtain the enhanced film; second, it is applied to each individual image to obtain the enhanced images, which are then compiled into the enhanced film. This paper shows that the enhancement technique, based on a statistical method, gives a good-quality video film, and it is recommended for use in different applications.
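The per-frame mean and standard-deviation criteria described above can be sketched directly. The 1-D "frames" here are an illustrative simplification of the 80 images per clip:

```python
import math

def frame_stats(frame):
    """Mean and standard deviation of pixel intensities in one frame."""
    n = len(frame)
    mean = sum(frame) / n
    var = sum((p - mean) ** 2 for p in frame) / n
    return mean, math.sqrt(var)

def clip_stats(frames):
    """Per-frame statistics for a clip (e.g. its 80 constituent images),
    usable as a before/after criterion for an enhancement step."""
    return [frame_stats(f) for f in frames]

mean, std = frame_stats([100, 120, 140])
print(mean)  # -> 120.0
```

Comparing these statistics before and after enhancement is the evaluation criterion the abstract describes.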