Demehri, S; Muhit, A; Zbijewski, W; Stayman, J W; Yorkston, J; Packard, N; Senn, R; Yang, D; Foos, D; Thawait, G K; Fayad, L M; Chhabra, A; Carrino, J A; Siewerdsen, J H
2015-06-01
To assess visualization tasks using cone-beam CT (CBCT) compared to multi-detector CT (MDCT) for musculoskeletal extremity imaging. Ten cadaveric hands and ten knees were examined using a dedicated CBCT prototype and a clinical multi-detector CT using nominal protocols (80 kVp, 108 mAs for CBCT; 120 kVp, 300 mAs for MDCT). Soft tissue and bone visualization tasks were assessed by four radiologists using five-point satisfaction (for CBCT and MDCT individually) and five-point preference (side-by-side CBCT versus MDCT image quality comparison) rating tests. Ratings were analyzed using Kruskal-Wallis and Wilcoxon signed-rank tests, and observer agreement was assessed using the Kappa statistic. Knee CBCT images were rated "excellent" or "good" (median scores 5 and 4) for "bone" and "soft tissue" visualization tasks. Hand CBCT images were rated "excellent" or "adequate" (median scores 5 and 3) for "bone" and "soft tissue" visualization tasks. Preference tests rated CBCT equivalent or superior to MDCT for bone visualization and favoured the MDCT for soft tissue visualization tasks. Intraobserver agreement for CBCT satisfaction tests was fair to almost perfect (κ ~ 0.26-0.92), and interobserver agreement was fair to moderate (κ ~ 0.27-0.54). CBCT provided excellent image quality for bone visualization and adequate image quality for soft tissue visualization tasks. • CBCT provided adequate image quality for diagnostic tasks in extremity imaging. • CBCT images were "excellent" for "bone" and "good/adequate" for "soft tissue" visualization tasks. • CBCT image quality was equivalent/superior to MDCT for bone visualization tasks.
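As an illustration of the statistical analysis named in the abstract above, the following sketch applies the Kruskal-Wallis and Wilcoxon signed-rank tests to hypothetical five-point ratings and computes a kappa statistic for observer agreement; the rating vectors are invented placeholders, not the study data.
```python
# Hedged sketch: non-parametric analysis of paired five-point ratings and a
# kappa-based agreement check. All numbers below are hypothetical placeholders.
import numpy as np
from scipy.stats import kruskal, wilcoxon
from sklearn.metrics import cohen_kappa_score

cbct = np.array([5, 4, 5, 4, 3, 5, 4, 4, 5, 3])   # ratings of the same cases on CBCT
mdct = np.array([4, 4, 4, 5, 4, 4, 4, 3, 4, 4])   # ratings of the same cases on MDCT

w_stat, w_p = wilcoxon(cbct, mdct)                # paired, preference-style comparison
k_stat, k_p = kruskal(cbct, mdct)                 # distribution-level comparison

# Intraobserver agreement between two reading sessions of one observer.
read1 = np.array([5, 4, 5, 3, 4, 5, 4, 4, 5, 3])
read2 = np.array([5, 4, 4, 3, 4, 5, 4, 5, 5, 3])
kappa = cohen_kappa_score(read1, read2, weights="linear")
print(w_p, k_p, kappa)
```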
Mraity, Hussien A A B; England, Andrew; Cassidy, Simon; Eachus, Peter; Dominguez, Alejandro; Hogg, Peter
2016-01-01
The aim of this article was to apply psychometric theory to develop and validate a visual grading scale for assessing the visual perception of digital image quality of anteroposterior (AP) pelvis radiographs. Psychometric theory was used to guide scale development. Seven phantom and seven cadaver images of visually and objectively predetermined quality were used to help assess scale reliability and validity. 151 volunteers scored phantom images, and 184 volunteers scored cadaver images. Factor analysis and Cronbach's alpha were used to assess scale validity and reliability. A 24-item scale was produced. Aggregated mean volunteer scores for each image correlated with the rank order of the visually and objectively predetermined image qualities. Scale items had good interitem correlation (≥0.2) and high factor loadings (≥0.3). Cronbach's alpha (reliability) revealed that the scale has acceptable levels of internal reliability for both phantom and cadaver images (α = 0.8 and 0.9, respectively). Factor analysis suggested that the scale is multidimensional (assessing multiple quality themes). This study represents the first full development and validation of a visual image quality scale using psychometric theory. It is likely that this scale will have clinical, training and research applications. This article presents data to create and validate visual grading scales for radiographic examinations. The visual grading scale, for AP pelvis examinations, can act as a validated tool for future research, teaching and clinical evaluations of image quality.
England, Andrew; Cassidy, Simon; Eachus, Peter; Dominguez, Alejandro; Hogg, Peter
2016-01-01
Objective: The aim of this article was to apply psychometric theory to develop and validate a visual grading scale for assessing the visual perception of digital image quality of anteroposterior (AP) pelvis radiographs. Methods: Psychometric theory was used to guide scale development. Seven phantom and seven cadaver images of visually and objectively predetermined quality were used to help assess scale reliability and validity. 151 volunteers scored phantom images, and 184 volunteers scored cadaver images. Factor analysis and Cronbach's alpha were used to assess scale validity and reliability. Results: A 24-item scale was produced. Aggregated mean volunteer scores for each image correlated with the rank order of the visually and objectively predetermined image qualities. Scale items had good interitem correlation (≥0.2) and high factor loadings (≥0.3). Cronbach's alpha (reliability) revealed that the scale has acceptable levels of internal reliability for both phantom and cadaver images (α = 0.8 and 0.9, respectively). Factor analysis suggested that the scale is multidimensional (assessing multiple quality themes). Conclusion: This study represents the first full development and validation of a visual image quality scale using psychometric theory. It is likely that this scale will have clinical, training and research applications. Advances in knowledge: This article presents data to create and validate visual grading scales for radiographic examinations. The visual grading scale, for AP pelvis examinations, can act as a validated tool for future research, teaching and clinical evaluations of image quality. PMID:26943836
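Since both records above quantify scale reliability with Cronbach's alpha, a small sketch of that computation may be useful; the observer-by-item score matrix below is simulated, not the study data.
```python
# Minimal sketch: Cronbach's alpha for a 24-item visual grading scale, computed
# from an observers x items matrix of scores (simulated data, for illustration only).
import numpy as np

def cronbach_alpha(scores):
    """scores: 2-D array, rows = observers, columns = scale items."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1).sum()
    total_variance = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_variances / total_variance)

rng = np.random.default_rng(0)
demo_scores = rng.integers(1, 6, size=(151, 24))   # hypothetical 5-point item ratings
print(round(cronbach_alpha(demo_scores), 2))
```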
Image gathering and restoration - Information and visual quality
NASA Technical Reports Server (NTRS)
Mccormick, Judith A.; Alter-Gartenberg, Rachel; Huck, Friedrich O.
1989-01-01
A method is investigated for optimizing the end-to-end performance of image gathering and restoration for visual quality. To achieve this objective, one must inevitably confront the problems that the visual quality of restored images depends on perceptual rather than mathematical considerations and that these considerations vary with the target, the application, and the observer. The method adopted in this paper is to optimize image gathering informationally and to restore images interactively to obtain the visually preferred trade-off among fidelity, resolution, sharpness, and clarity. The results demonstrate that this method leads to significant improvements over the visual quality obtained by traditional digital processing methods. These traditional methods allow a significant loss of visual quality to occur because they treat the design of the image-gathering system and the formulation of the image-restoration algorithm as two separate tasks and fail to account for the transformations between the continuous and the discrete representations in image gathering and reconstruction.
CLINICAL AUDIT OF IMAGE QUALITY IN RADIOLOGY USING VISUAL GRADING CHARACTERISTICS ANALYSIS.
Tesselaar, Erik; Dahlström, Nils; Sandborg, Michael
2016-06-01
The aim of this work was to assess whether an audit of clinical image quality could be efficiently implemented within a limited time frame using visual grading characteristics (VGC) analysis. Lumbar spine radiography, bedside chest radiography and abdominal CT were selected. For each examination, images were acquired or reconstructed in two ways. Twenty images per examination were assessed by 40 radiology residents using visual grading of image criteria. The results were analysed using VGC. Inter-observer reliability was assessed. The results of the visual grading analysis were consistent with expected outcomes. The inter-observer reliability was moderate to good and correlated with perceived image quality (r² = 0.47). The median observation time per image or image series was within 2 min. These results suggest that the use of visual grading of image criteria to assess the quality of radiographs provides a rapid method for performing an image quality audit in a clinical environment.
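A rough sketch of the VGC idea follows: cumulative rating proportions for two acquisition settings are plotted against each other and the area under that curve summarizes the difference (0.5 meaning no difference). The rating data and the five-level scale are assumptions for illustration only.
```python
# Hedged sketch of a visual grading characteristics (VGC) style summary:
# area under the curve of cumulative rating proportions, setting B vs. setting A.
import numpy as np

def vgc_auc(ratings_a, ratings_b, levels=(1, 2, 3, 4, 5)):
    a, b = np.asarray(ratings_a), np.asarray(ratings_b)
    thresholds = sorted(levels, reverse=True)
    x = [0.0] + [float(np.mean(a >= t)) for t in thresholds]   # ends at 1.0
    y = [0.0] + [float(np.mean(b >= t)) for t in thresholds]
    return np.trapz(y, x)            # ~0.5 -> no quality difference between settings

ratings_a = [3, 4, 2, 3, 4, 3, 2, 4, 3, 3]       # hypothetical grades, setting A
ratings_b = [4, 4, 3, 4, 5, 3, 3, 4, 4, 3]       # hypothetical grades, setting B
print(vgc_auc(ratings_a, ratings_b))
```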
Retinal Image Quality During Accommodation
López-Gil, N.; Martin, J.; Liu, T.; Bradley, A.; Díaz-Muñoz, D.; Thibos, L.
2013-01-01
Purpose We asked if retinal image quality is maximum during accommodation, or sub-optimal due to accommodative error, when subjects perform an acuity task. Methods Subjects viewed a monochromatic (552 nm), high-contrast letter target placed at various viewing distances. Wavefront aberrations of the accommodating eye were measured near the endpoint of an acuity staircase paradigm. Refractive state, defined as the optimum target vergence for maximising retinal image quality, was computed by through-focus wavefront analysis to find the power of the virtual correcting lens that maximizes visual Strehl ratio. Results Despite changes in ocular aberrations and pupil size during binocular viewing, retinal image quality and visual acuity typically remain high for all target vergences. When accommodative errors lead to sub-optimal retinal image quality, acuity and measured image quality both decline. However, the effect of accommodation errors on visual acuity is mitigated by pupillary constriction associated with accommodation and binocular convergence and also by binocular summation of dissimilar retinal image blur. Under monocular viewing conditions some subjects displayed significant accommodative lag that reduced visual performance, an effect that was exacerbated by pharmacological dilation of the pupil. Conclusions Spurious measurement of accommodative error can be avoided when the image quality metric used to determine refractive state is compatible with the focusing criteria used by the visual system to control accommodation. Real focusing errors of the accommodating eye do not necessarily produce a reliably measurable loss of image quality or clinically significant loss of visual performance, probably because of increased depth-of-focus due to pupil constriction. When retinal image quality is close to maximum achievable (given the eye's higher-order aberrations), acuity is also near maximum. A combination of accommodative lag, reduced image quality, and reduced visual function may be a useful sign for diagnosing functionally-significant accommodative errors indicating the need for therapeutic intervention. PMID:23786386
Retinal image quality during accommodation.
López-Gil, Norberto; Martin, Jesson; Liu, Tao; Bradley, Arthur; Díaz-Muñoz, David; Thibos, Larry N
2013-07-01
We asked if retinal image quality is maximum during accommodation, or sub-optimal due to accommodative error, when subjects perform an acuity task. Subjects viewed a monochromatic (552 nm), high-contrast letter target placed at various viewing distances. Wavefront aberrations of the accommodating eye were measured near the endpoint of an acuity staircase paradigm. Refractive state, defined as the optimum target vergence for maximising retinal image quality, was computed by through-focus wavefront analysis to find the power of the virtual correcting lens that maximizes visual Strehl ratio. Despite changes in ocular aberrations and pupil size during binocular viewing, retinal image quality and visual acuity typically remain high for all target vergences. When accommodative errors lead to sub-optimal retinal image quality, acuity and measured image quality both decline. However, the effect of accommodation errors on visual acuity is mitigated by pupillary constriction associated with accommodation and binocular convergence and also by binocular summation of dissimilar retinal image blur. Under monocular viewing conditions some subjects displayed significant accommodative lag that reduced visual performance, an effect that was exacerbated by pharmacological dilation of the pupil. Spurious measurement of accommodative error can be avoided when the image quality metric used to determine refractive state is compatible with the focusing criteria used by the visual system to control accommodation. Real focusing errors of the accommodating eye do not necessarily produce a reliably measurable loss of image quality or clinically significant loss of visual performance, probably because of increased depth-of-focus due to pupil constriction. When retinal image quality is close to maximum achievable (given the eye's higher-order aberrations), acuity is also near maximum. A combination of accommodative lag, reduced image quality, and reduced visual function may be a useful sign for diagnosing functionally-significant accommodative errors indicating the need for therapeutic intervention.
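To make the through-focus idea in the two records above concrete, here is a hedged numerical sketch: a plain (not CSF-weighted "visual") Strehl estimate is computed from a wavefront over the pupil, and a small defocus scan finds the virtual correcting lens that maximizes it. The wavefront, pupil sampling, and scan range are arbitrary illustrative choices.
```python
# Illustrative through-focus sketch (plain Strehl, not the CSF-weighted visual
# Strehl of the paper); wavefront and scan range are arbitrary assumptions.
import numpy as np

def strehl_estimate(wavefront_um, pupil, wavelength_um=0.552):
    phase = 2.0 * np.pi * wavefront_um / wavelength_um
    field = np.exp(1j * phase)[pupil]
    return np.abs(field.mean()) ** 2          # |<exp(i*phi)>|^2 over the pupil

n = 256
y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
r2 = x ** 2 + y ** 2
pupil = r2 <= 1.0
defocus_mode = 2.0 * r2 - 1.0                  # Zernike-defocus-like radial term
measured = 0.05 * defocus_mode                 # hypothetical residual accommodative error (um)

scan = np.linspace(-0.2, 0.2, 81)              # trial 'correcting lens' defocus (um)
scores = [strehl_estimate(measured + d * defocus_mode, pupil) for d in scan]
best_correction = scan[int(np.argmax(scores))]
print(best_correction)                         # should roughly cancel the residual error
```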
Comprehensive model for predicting perceptual image quality of smart mobile devices.
Gong, Rui; Xu, Haisong; Luo, M R; Li, Haifeng
2015-01-01
An image quality model for smart mobile devices was proposed based on visual assessments of several image quality attributes. A series of psychophysical experiments were carried out on two kinds of smart mobile devices, i.e., smart phones and tablet computers, in which naturalness, colorfulness, brightness, contrast, sharpness, clearness, and overall image quality were visually evaluated under three lighting environments via the categorical judgment method for various application types of test images. On the basis of Pearson correlation coefficients and factor analysis, the overall image quality could first be predicted from its two constituent attributes with multiple linear regression functions for each type of image, and then mathematical expressions were built to link the constituent image quality attributes with the physical parameters of smart mobile devices and image appearance factors. The procedure and algorithms were applicable to various smart mobile devices, different lighting conditions, and multiple types of images, and performance was verified by the visual data.
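The regression step described above can be sketched in a few lines; the two constituent attributes and the scores below are hypothetical stand-ins for the visual data.
```python
# Sketch: predicting overall image quality from two constituent attributes with
# an ordinary least-squares multiple linear regression (hypothetical scores).
import numpy as np

X = np.array([[4.1, 3.8], [3.2, 3.5], [4.8, 4.4], [2.9, 3.1], [3.9, 4.0], [4.5, 4.1]])
y = np.array([4.0, 3.3, 4.7, 2.8, 3.9, 4.4])      # overall quality ratings

A = np.hstack([X, np.ones((len(X), 1))])          # design matrix with intercept
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(coef)                                       # [w_attribute1, w_attribute2, intercept]
```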
The medium and the message: a revisionist view of image quality
NASA Astrophysics Data System (ADS)
Ferwerda, James A.
2010-02-01
In his book "Understanding Media" social theorist Marshall McLuhan declared: "The medium is the message." The thesis of this paper is that with respect to image quality, imaging system developers have taken McLuhan's dictum too much to heart. Efforts focus on improving the technical specifications of the media (e.g. dynamic range, color gamut, resolution, temporal response) with little regard for the visual messages the media will be used to communicate. We present a series of psychophysical studies that investigate the visual system's ability to "see through" the limitations of imaging media to perceive the messages (object and scene properties) the images represent. The purpose of these studies is to understand the relationships between the signal characteristics of an image and the fidelity of the visual information the image conveys. The results of these studies provide a new perspective on image quality that shows that images that may be very different in "quality", can be visually equivalent as realistic representations of objects and scenes.
No-reference quality assessment based on visual perception
NASA Astrophysics Data System (ADS)
Li, Junshan; Yang, Yawei; Hu, Shuangyan; Zhang, Jiao
2014-11-01
The visual quality assessment of images/videos is an ongoing hot research topic, which has become more and more important for numerous image and video processing applications with the rapid development of digital imaging and communication technologies. The goal of image quality assessment (IQA) algorithms is to automatically assess the quality of images/videos in agreement with human quality judgments. Up to now, two kinds of models have been used for IQA, namely full-reference (FR) and no-reference (NR) models. For FR models, IQA algorithms interpret image quality as fidelity or similarity with a perfect image in some perceptual space. However, the reference image is not available in many practical applications, and a NR IQA approach is desired. Considering natural vision as optimized by millions of years of evolutionary pressure, many methods attempt to achieve consistency in quality prediction by modeling salient physiological and psychological features of the human visual system (HVS). To reach this goal, researchers try to simulate the HVS with image sparsity coding and supervised machine learning, two main features of the HVS: a typical HVS captures scenes by sparse coding and uses experienced knowledge to apperceive objects. In this paper, we propose a novel IQA approach based on visual perception. Firstly, a standard model of the HVS is studied and analyzed, and the sparse representation of the image is accomplished with the model; then, the mapping correlation between sparse codes and subjective quality scores is trained with the regression technique of the least squares support vector machine (LS-SVM), which yields a regressor that can predict image quality; finally, the visual quality of an image is predicted with the trained regressor. We validate the performance of the proposed approach on the Laboratory for Image and Video Engineering (LIVE) database; the distortion types present in the database are: 227 JPEG2000 images, 233 JPEG images, 174 white noise images, 174 Gaussian blur images, and 174 fast fading images. The database includes a subjective differential mean opinion score (DMOS) for each image. The experimental results show that the proposed approach not only can assess the quality of many kinds of distorted images, but also exhibits superior accuracy and monotonicity.
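The regression stage can be sketched as follows; kernel ridge regression is used here as a stand-in for LS-SVM (the two formulations are closely related), and the "sparse-code" features and DMOS values are random placeholders rather than LIVE data.
```python
# Rough sketch of the quality-regression stage. Kernel ridge regression stands in
# for LS-SVM; features and DMOS values are simulated placeholders, not LIVE data.
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
features = rng.normal(size=(200, 64))        # hypothetical sparse-coding statistics
dmos = rng.uniform(0.0, 100.0, size=200)     # hypothetical subjective scores

X_tr, X_te, y_tr, y_te = train_test_split(features, dmos, test_size=0.2, random_state=0)
model = KernelRidge(kernel="rbf", alpha=1.0, gamma=0.01).fit(X_tr, y_tr)
predicted = model.predict(X_te)
print(np.corrcoef(predicted, y_te)[0, 1])    # correlation with subjective quality
```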
Naturalness and interestingness of test images for visual quality evaluation
NASA Astrophysics Data System (ADS)
Halonen, Raisa; Westman, Stina; Oittinen, Pirkko
2011-01-01
Balanced and representative test images are needed to study perceived visual quality in various application domains. This study investigates naturalness and interestingness as image quality attributes in the context of test images. Taking a top-down approach we aim to find the dimensions which constitute naturalness and interestingness in test images and the relationship between these high-level quality attributes. We compare existing collections of test images (e.g. Sony sRGB images, ISO 12640 images, Kodak images, Nokia images and test images developed within our group) in an experiment combining quality sorting and structured interviews. Based on the data gathered we analyze the viewer-supplied criteria for naturalness and interestingness across image types, quality levels and judges. This study advances our understanding of subjective image quality criteria and enables the validation of current test images, furthering their development.
Assessment of visual landscape quality using IKONOS imagery.
Ozkan, Ulas Yunus
2014-07-01
The assessment of visual landscape quality is of importance to the management of urban woodlands. Satellite remote sensing may be used for this purpose as a substitute for traditional survey techniques that are both labour-intensive and time-consuming. This study examines the association between the quality of the perceived visual landscape in urban woodlands and texture measures extracted from IKONOS satellite data, which features 4-m spatial resolution and four spectral bands. The study was conducted in the woodlands of Istanbul (the most important element of the urban mosaic) lying along both shores of the Bosporus Strait. The visual quality assessment applied in this study is based on the perceptual approach and was performed via a survey of expressed preferences. For this purpose, representative photographs of real scenery were used to elicit observers' preferences. A slide show comprising 33 images was presented to a group of 153 volunteers (all undergraduate students), and they were asked to rate the visual quality of each on a 10-point scale (1 for very low visual quality, 10 for very high). Average visual quality scores were calculated for each landscape. Texture measures were acquired using two methods: pixel-based and object-based. Pixel-based texture measures were extracted from the first principal component (PC1) image. Object-based texture measures were extracted by using the original four bands. The association between image texture measures and perceived visual landscape quality was tested via Pearson's correlation coefficient. The analysis found a strong linear association between image texture measures and visual quality. The highest correlation coefficient was calculated between the standard deviation of gray levels (SDGL) (one of the pixel-based texture measures) and visual quality (r = 0.82, P < 0.05). The results showed that the perceived visual quality of urban woodland landscapes can be estimated by using texture measures extracted from satellite data in combination with appropriate modelling techniques.
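As an example of the pixel-based route, a local standard-deviation texture measure and its Pearson correlation with preference scores can be computed as below; the "scenes" and survey means are simulated stand-ins for the PC1 imagery and questionnaire data.
```python
# Sketch with simulated inputs: a standard-deviation-of-grey-levels texture
# measure per scene, correlated with mean visual quality scores.
import numpy as np
from scipy.ndimage import uniform_filter
from scipy.stats import pearsonr

def local_std(image, size=7):
    img = image.astype(float)
    mean = uniform_filter(img, size)
    mean_sq = uniform_filter(img ** 2, size)
    return np.sqrt(np.maximum(mean_sq - mean ** 2, 0.0))

rng = np.random.default_rng(2)
scenes = [rng.integers(0, 256, size=(64, 64)) for _ in range(33)]   # stand-ins for PC1 images
sdgl = np.array([local_std(s).mean() for s in scenes])
mean_scores = rng.uniform(1.0, 10.0, size=33)                       # stand-in survey means
r, p = pearsonr(sdgl, mean_scores)
print(r, p)
```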
Information theoretical assessment of visual communication with subband coding
NASA Astrophysics Data System (ADS)
Rahman, Zia-ur; Fales, Carl L.; Huck, Friedrich O.
1994-09-01
A well-designed visual communication channel is one which transmits the most information about a radiance field with the fewest artifacts. The role of image processing, encoding and restoration is to improve the quality of visual communication channels by minimizing the error in the transmitted data. Conventionally this role has been analyzed strictly in the digital domain, neglecting the effects of image-gathering and image-display devices on the quality of the image. This results in the design of a visual communication channel which is "suboptimal." We propose an end-to-end assessment of the imaging process which incorporates the influences of these devices in the design of the encoder and the restoration process. This assessment combines Shannon's communication theory with Wiener's restoration filter and with the critical design factors of the image gathering and display devices, thus providing the metrics needed to quantify and optimize the end-to-end performance of the visual communication channel. Results show that the design of the image-gathering device plays a significant role in determining the quality of the visual communication channel and in designing the analysis filters for subband encoding.
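One ingredient of such an end-to-end assessment, the Wiener restoration step, can be sketched as a frequency-domain filter; the optical transfer function and noise-to-signal ratio below are assumed inputs, and the information-theoretic optimization of the gathering device is not reproduced.
```python
# Sketch only: a Wiener-type restoration given an assumed optical transfer
# function (OTF) of the image-gathering device and a noise-to-signal ratio.
import numpy as np

def wiener_restore(degraded, otf, nsr=0.01):
    """degraded: 2-D image; otf: complex frequency response (same shape, FFT layout);
    nsr: assumed noise-to-signal power ratio controlling the restoration trade-off."""
    G = np.fft.fft2(degraded)
    W = np.conj(otf) / (np.abs(otf) ** 2 + nsr)   # classical Wiener filter
    return np.real(np.fft.ifft2(W * G))
```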
Visual quality analysis for images degraded by different types of noise
NASA Astrophysics Data System (ADS)
Ponomarenko, Nikolay N.; Lukin, Vladimir V.; Ieremeyev, Oleg I.; Egiazarian, Karen O.; Astola, Jaakko T.
2013-02-01
Modern visual quality metrics take into account different peculiarities of the Human Visual System (HVS). One of them is described by the Weber-Fechner law and deals with the different sensitivity to distortions in image fragments with different local mean values (intensity, brightness). We analyze how this property can be incorporated into the metric PSNR-HVS-M. It is shown that some improvement of its performance can be provided. Then, the visual quality of color images corrupted by three types of i.i.d. noise (pure additive, pure multiplicative, and signal-dependent Poisson) is analyzed. Experiments with a group of observers are carried out for distorted color images created on the basis of the TID2008 database. Several modern HVS-metrics are considered. It is shown that even the best metrics are unable to assess the visual quality of distorted images adequately enough. The reasons for this relate to the observer's attention to certain objects in the test images, i.e., to semantic aspects of vision, which are worth taking into account in the design of HVS-metrics.
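The Weber-Fechner idea mentioned above can be illustrated with a toy modification of PSNR in which local errors are weighted by local mean intensity; this is only a sketch of the principle, not the actual PSNR-HVS-M computation.
```python
# Toy sketch of the principle only (not PSNR-HVS-M): weight squared errors by the
# inverse local mean so that equal distortions count more in darker regions.
import numpy as np
from scipy.ndimage import uniform_filter

def weber_weighted_psnr(reference, distorted, size=8, c=1.0):
    ref = reference.astype(float)
    dist = distorted.astype(float)
    local_mean = uniform_filter(ref, size)
    weights = 1.0 / (local_mean + c)
    wmse = np.sum(weights * (ref - dist) ** 2) / np.sum(weights)
    return 10.0 * np.log10(255.0 ** 2 / max(wmse, 1e-12))
```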
Zhou, Zhi; Arce, Gonzalo R; Di Crescenzo, Giovanni
2006-08-01
Visual cryptography encodes a secret binary image (SI) into n shares of random binary patterns. If the shares are xeroxed onto transparencies, the secret image can be visually decoded by superimposing a qualified subset of transparencies, but no secret information can be obtained from the superposition of a forbidden subset. The binary patterns of the n shares, however, have no visual meaning and hinder the objectives of visual cryptography. Extended visual cryptography [1] was proposed recently to construct meaningful binary images as shares using hypergraph colourings, but the visual quality is poor. In this paper, a novel technique named halftone visual cryptography is proposed to achieve visual cryptography via halftoning. Based on blue-noise dithering principles, the proposed method utilizes the void and cluster algorithm [2] to encode a secret binary image into n halftone shares (images) carrying significant visual information. The simulation shows that the visual quality of the obtained halftone shares is observably better than that attained by any available visual cryptography method known to date.
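For contrast with the halftone construction, the classical (2, 2) visual cryptography scheme it builds on can be sketched in a few lines; the 2x2 subpixel patterns and the tiny secret image are illustrative only.
```python
# Minimal (2, 2) visual-cryptography sketch (the classical scheme, not the paper's
# blue-noise halftone construction). secret: 0 = white, 1 = black.
import numpy as np

def make_shares(secret, seed=0):
    rng = np.random.default_rng(seed)
    patterns = np.array([[1, 0, 0, 1], [0, 1, 1, 0]])     # complementary 2x2 patterns
    h, w = secret.shape
    share1 = np.zeros((2 * h, 2 * w), dtype=int)
    share2 = np.zeros_like(share1)
    for i in range(h):
        for j in range(w):
            p = patterns[rng.integers(2)]
            q = p if secret[i, j] == 0 else 1 - p         # same for white, complementary for black
            share1[2*i:2*i+2, 2*j:2*j+2] = p.reshape(2, 2)
            share2[2*i:2*i+2, 2*j:2*j+2] = q.reshape(2, 2)
    return share1, share2

s1, s2 = make_shares(np.array([[0, 1], [1, 0]]))
stacked = np.maximum(s1, s2)   # OR models superimposed transparencies (1 = ink)
```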
Standardizing Quality Assessment of Fused Remotely Sensed Images
NASA Astrophysics Data System (ADS)
Pohl, C.; Moellmann, J.; Fries, K.
2017-09-01
The multitude of available operational remote sensing satellites led to the development of many image fusion techniques to provide high spatial, spectral and temporal resolution images. The comparison of different techniques is necessary to obtain an optimized image for the different applications of remote sensing. There are two approaches to assessing image quality: 1. qualitatively, by visual interpretation, and 2. quantitatively, using image quality indices. However, an objective comparison is difficult because a visual assessment is always subjective and a quantitative assessment is done by different criteria. Depending on the criteria and indices, the result varies. Therefore it is necessary to standardize both processes (qualitative and quantitative assessment) in order to allow an objective image fusion quality evaluation. Various studies have been conducted at the University of Osnabrueck (UOS) to establish a standardized process to objectively compare fused image quality. First, established image fusion quality assessment protocols, i.e. Quality with No Reference (QNR) and Khan's protocol, were compared on various fusion experiments. Second, the process of visual quality assessment was structured and standardized with the aim to provide an evaluation protocol. This manuscript reports on the results of the comparison and provides recommendations for future research.
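As one concrete building block of quantitative protocols such as QNR, the Wang-Bovik universal image quality index between two bands can be sketched as follows (global version, assumed inputs); the full QNR or Khan protocols are not reproduced here.
```python
# Sketch of a single building block used by fusion-quality protocols such as QNR:
# the global Wang-Bovik universal image quality index (Q) between two bands.
import numpy as np

def q_index(band_a, band_b):
    a = band_a.astype(float).ravel()
    b = band_b.astype(float).ravel()
    ma, mb = a.mean(), b.mean()
    va, vb = a.var(), b.var()
    cov = np.mean((a - ma) * (b - mb))
    return 4.0 * cov * ma * mb / ((va + vb) * (ma ** 2 + mb ** 2) + 1e-12)
```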
Automated daily quality control analysis for mammography in a multi-unit imaging center.
Sundell, Veli-Matti; Mäkelä, Teemu; Meaney, Alexander; Kaasalainen, Touko; Savolainen, Sauli
2018-01-01
Background The high requirements for mammography image quality necessitate a systematic quality assurance process. Digital imaging allows automation of the image quality analysis, which can potentially improve repeatability and objectivity compared to a visual evaluation made by the users. Purpose To develop an automatic image quality analysis software for daily mammography quality control in a multi-unit imaging center. Material and Methods An automated image quality analysis software using the discrete wavelet transform and multiresolution analysis was developed for the American College of Radiology accreditation phantom. The software was validated by analyzing 60 randomly selected phantom images from six mammography systems and 20 phantom images with different dose levels from one mammography system. The results were compared to a visual analysis made by four reviewers. Additionally, long-term image quality trends of a full-field digital mammography system and a computed radiography mammography system were investigated. Results The automated software produced feature detection levels comparable to visual analysis. The agreement was good in the case of fibers, while the software detected somewhat more microcalcifications and characteristic masses. Long-term follow-up via a quality assurance web portal demonstrated the feasibility of using the software for monitoring the performance of mammography systems in a multi-unit imaging center. Conclusion Automated image quality analysis enables monitoring the performance of digital mammography systems in an efficient, centralized manner.
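A hedged sketch of the wavelet side of such an analysis is given below: a multiresolution decomposition of a phantom image and a simple per-scale detail-energy feature. The actual software's detection logic for fibers, microcalcifications, and masses is not reproduced, and pywt with a Daubechies wavelet is an assumed choice.
```python
# Hedged sketch: discrete wavelet decomposition of a phantom image and a crude
# detail-energy feature per scale (not the validated detection algorithm itself).
import numpy as np
import pywt

def detail_energies(image, wavelet="db2", levels=3):
    coeffs = pywt.wavedec2(image.astype(float), wavelet, level=levels)
    energies = []
    for cH, cV, cD in coeffs[1:]:                    # skip the approximation band
        energies.append(float(np.mean(cH ** 2 + cV ** 2 + cD ** 2)))
    return energies                                  # one value per decomposition level
```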
Image Fusion Algorithms Using Human Visual System in Transform Domain
NASA Astrophysics Data System (ADS)
Vadhi, Radhika; Swamy Kilari, Veera; Samayamantula, Srinivas Kumar
2017-08-01
The endeavor of digital image fusion is to combine the important visual parts from various sources to improve the visual quality of the image. The fused image has higher visual quality than any of the source images. In this paper, Human Visual System (HVS) weights are used in the transform domain to select appropriate information from various source images and then to attain a fused image. In this process, mainly two steps are involved. First, apply the DWT to the registered source images. Later, identify qualitative sub-bands using HVS weights. Hence, qualitative sub-bands are selected from different sources to form a high-quality HVS-based fused image. The quality of the HVS-based fused image is evaluated with general fusion metrics. The results show its superiority among state-of-the-art multiresolution transforms (MRT) such as the Discrete Wavelet Transform (DWT), Stationary Wavelet Transform (SWT), Contourlet Transform (CT), and Non-Subsampled Contourlet Transform (NSCT) using the maximum selection fusion rule.
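A plain DWT fusion sketch is shown below, using an average rule for the approximation band and maximum-absolute selection for detail bands; the paper's HVS weighting of sub-bands is not reproduced, and pywt with the db2 wavelet is an assumed choice.
```python
# Simple DWT fusion sketch (average approximation, maximum-absolute detail
# selection); the HVS sub-band weighting described above is not reproduced.
import numpy as np
import pywt

def dwt_fuse(img_a, img_b, wavelet="db2", level=2):
    ca = pywt.wavedec2(img_a.astype(float), wavelet, level=level)
    cb = pywt.wavedec2(img_b.astype(float), wavelet, level=level)
    fused = [(ca[0] + cb[0]) / 2.0]                            # approximation band
    for (ah, av, ad), (bh, bv, bd) in zip(ca[1:], cb[1:]):
        fused.append(tuple(np.where(np.abs(p) >= np.abs(q), p, q)
                           for p, q in ((ah, bh), (av, bv), (ad, bd))))
    return pywt.waverec2(fused, wavelet)
```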
On pictures and stuff: image quality and material appearance
NASA Astrophysics Data System (ADS)
Ferwerda, James A.
2014-02-01
Realistic images are a puzzle because they serve as visual representations of objects while also being objects themselves. When we look at an image we are able to perceive both the properties of the image and the properties of the objects represented by the image. Research on image quality has typically focused on improving image properties (resolution, dynamic range, frame rate, etc.) while ignoring the issue of whether images are serving their role as visual representations. In this paper we describe a series of experiments that investigate how well images of different quality convey information about the properties of the objects they represent. In the experiments we focus on the effects that two image properties (contrast and sharpness) have on the ability of images to represent the gloss of depicted objects. We found that different experimental methods produced differing results. Specifically, when the stimulus images were presented using simultaneous pair comparison, observers were influenced by the surface properties of the images and conflated changes in image contrast and sharpness with changes in object gloss. On the other hand, when the stimulus images were presented sequentially, observers were able to disregard the image plane properties and more accurately match the gloss of the objects represented by the different quality images. These findings suggest that in understanding image quality it is useful to distinguish between the quality of the imaging medium and the quality of the visual information represented by that medium.
Image quality metrics for volumetric laser displays
NASA Astrophysics Data System (ADS)
Williams, Rodney D.; Donohoo, Daniel
1991-08-01
This paper addresses the extensions to the image quality metrics and related human factors research that are needed to establish the baseline standards for emerging volume display technologies. The existing and recently developed technologies for multiplanar volume displays are reviewed with an emphasis on basic human visual issues. Human factors image quality metrics and guidelines are needed to firmly establish this technology in the marketplace. The human visual requirements and the display design tradeoffs for these prototype laser-based volume displays are addressed, and several critical image quality issues are identified for further research. The American National Standard for Human Factors Engineering of Visual Display Terminal Workstations (ANSI/HFS-100) and other international standards (ISO, DIN) can serve as a starting point, but this research base must be extended to provide new image quality metrics for this new technology for volume displays.
A Distributed Compressive Sensing Scheme for Event Capture in Wireless Visual Sensor Networks
NASA Astrophysics Data System (ADS)
Hou, Meng; Xu, Sen; Wu, Weiling; Lin, Fei
2018-01-01
Image signals acquired by a wireless visual sensor network can be used for specific event capture. This event capture is realized by image processing at the sink node. A distributed compressive sensing scheme is used for the transmission of these image signals from the camera nodes to the sink node. A measurement scheme and a joint reconstruction algorithm for these image signals are proposed in this paper. Taking advantage of the spatial correlation between images within a sensing area, the cluster head node, acting as the image decoder, can accurately co-reconstruct these image signals. The subjective visual quality and the reconstruction error rate are used for the evaluation of reconstructed image quality. Simulation results show that the joint reconstruction algorithm achieves higher image quality at the same image compressive rate than the independent reconstruction algorithm.
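A toy compressive-sensing sketch may help fix ideas: a sparse signal is measured with a random Gaussian matrix at a camera node and recovered with orthogonal matching pursuit. This is an independent (per-signal) reconstruction baseline, not the paper's joint reconstruction exploiting inter-image correlation.
```python
# Toy compressive-sensing sketch (independent reconstruction baseline, not the
# joint algorithm of the paper): random measurements + orthogonal matching pursuit.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(3)
n, m, k = 256, 96, 8                         # signal length, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)
Phi = rng.normal(size=(m, n)) / np.sqrt(m)   # measurement matrix at the camera node
y = Phi @ x                                  # compressive measurements sent to the sink

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False).fit(Phi, y)
x_hat = omp.coef_
print(np.linalg.norm(x - x_hat) / np.linalg.norm(x))   # relative reconstruction error
```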
Implementation of dictionary pair learning algorithm for image quality improvement
NASA Astrophysics Data System (ADS)
Vimala, C.; Aruna Priya, P.
2018-04-01
This paper proposes an image denoising method based on a dictionary pair learning algorithm. Visual information transmitted in the form of digital images is becoming a major method of communication in the modern age, but the image obtained after transmission is often corrupted with noise. The received image needs processing before it can be used in applications. Image denoising involves the manipulation of the image data to produce a visually high-quality image.
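As a stand-in illustration of patch-based dictionary denoising (using scikit-learn's generic dictionary learning rather than the specific dictionary pair learning algorithm), a minimal sketch might look like this; patch size, number of atoms, and the sparse-coding settings are arbitrary assumptions.
```python
# Hedged stand-in: patch-based denoising with a learned dictionary (generic
# dictionary learning, not the dictionary *pair* learning algorithm of the paper).
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d, reconstruct_from_patches_2d

def denoise(noisy, patch_size=(7, 7), n_atoms=64):
    patches = extract_patches_2d(noisy.astype(float), patch_size)
    X = patches.reshape(patches.shape[0], -1)
    dc = X.mean(axis=1, keepdims=True)
    X = X - dc                                           # code the AC part of each patch
    dico = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=1.0, random_state=0).fit(X)
    codes = dico.transform(X)                            # sparse codes for each patch
    recon = codes @ dico.components_ + dc
    return reconstruct_from_patches_2d(recon.reshape(patches.shape), noisy.shape)
```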
Uji, Akihito; Ooto, Sotaro; Hangai, Masanori; Arichika, Shigeta; Yoshimura, Nagahisa
2013-01-01
Purpose To investigate the effect of B-spline-based elastic image registration on adaptive optics scanning laser ophthalmoscopy (AO-SLO)-assisted capillary visualization. Methods AO-SLO videos were acquired from parafoveal areas in the eyes of healthy subjects and patients with various diseases. The image quality of capillary images constructed from AO-SLO videos using motion contrast enhancement was compared before and after B-spline-based elastic (nonlinear) image registration performed using ImageJ. For objective comparison of image quality, contrast-to-noise ratios (CNRs) for vessel images were calculated. For subjective comparison, experienced ophthalmologists ranked images on a 5-point scale. Results All AO-SLO videos were successfully stabilized by elastic image registration. CNR was significantly higher in capillary images stabilized by elastic image registration than in those stabilized without registration. The average ratio of CNR in images with elastic image registration to CNR in images without elastic image registration was 2.10 ± 1.73, with no significant difference in the ratio between patients and healthy subjects. Improvement of image quality was also supported by expert comparison. Conclusions Use of B-spline-based elastic image registration in AO-SLO-assisted capillary visualization was effective for enhancing image quality both objectively and subjectively. PMID:24265796
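The objective comparison step can be sketched with a simple contrast-to-noise ratio computation; the vessel and background masks are assumed to be given, and the exact CNR definition used in the study may differ.
```python
# Minimal sketch, with assumed vessel/background masks: a contrast-to-noise
# ratio (CNR) for comparing capillary image quality objectively.
import numpy as np

def cnr(image, vessel_mask, background_mask):
    img = image.astype(float)
    contrast = img[vessel_mask].mean() - img[background_mask].mean()
    noise = img[background_mask].std(ddof=1)
    return contrast / noise
```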
Comparative Study of the MTFA, ICS, and SQRI Image Quality Metrics for Visual Display Systems
1991-09-01
… reasonable image quality predictions across select display and viewing condition parameters.
A visual grading study for different administered activity levels in bone scintigraphy.
Gustafsson, Agnetha; Karlsson, Henrik; Nilsson, Kerstin A; Geijer, Håkan; Olsson, Anna
2015-05-01
The aim of the study is to assess administered activity levels versus visual-based image quality using visual grading regression (VGR), including an assessment of the newly stated image criteria for whole-body bone scintigraphy. A total of 90 patients was included and grouped into three levels of administered activity: 400, 500 and 600 MBq. Six clinical image criteria regarding image quality were formulated by experienced nuclear medicine physicians. Visual grading was performed on all images, where three physicians rated the fulfilment of the image criteria on a four-step ordinal scale. The results were analysed using VGR. A count analysis was also made where the total number of counts in both views was registered. The administered activity of 600 MBq gives significantly better image quality than 400 MBq in five of six criteria (P<0·05). Comparing the administered activity of 600 MBq to 500 MBq, four criteria of six show significantly better image quality (P<0·05). The administered activity of 500 MBq gives no significantly better image quality than 400 MBq (P<0·05). The count analysis shows that none of the three levels of administered activity fulfil the recommendations by the EANM. There was a significant improvement in perceived image quality using an activity level of 600 MBq compared to lower activity levels in whole-body bone scintigraphy for the gamma camera equipment and set-up used in this study. This type of visual-based grading study seems to be a valuable tool and easy to implement in the clinical environment.
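Visual grading regression is, properly, ordinal logistic regression of ratings on exposure factors; as a rough, simplified stand-in, the sketch below dichotomizes the ordinal ratings at "criterion fulfilled" and fits an ordinary logistic regression against administered activity. All data are simulated.
```python
# Rough stand-in for visual grading regression (properly an ordinal logistic
# model): dichotomised ratings regressed on administered activity (simulated data).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
activity = rng.choice([400, 500, 600], size=270).astype(float)       # MBq
latent = 0.004 * activity + rng.normal(scale=0.8, size=270)
rating = np.clip(np.round(latent), 1, 4)                             # 4-step ordinal scale
fulfilled = (rating >= 3).astype(int)                                # criterion fulfilled?

model = LogisticRegression().fit(activity.reshape(-1, 1), fulfilled)
print(model.coef_, model.intercept_)   # positive slope -> higher activity, better grading
```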
Toward a perceptual video-quality metric
NASA Astrophysics Data System (ADS)
Watson, Andrew B.
1998-07-01
The advent of widespread distribution of digital video creates a need for automated methods for evaluating the visual quality of digital video. This is particularly so since most digital video is compressed using lossy methods, which involve the controlled introduction of potentially visible artifacts. Compounding the problem is the bursty nature of digital video, which requires adaptive bit allocation based on visual quality metrics, and the economic need to reduce bit-rate to the lowest level that yields acceptable quality. In previous work, we have developed visual quality metrics for evaluating, controlling, and optimizing the quality of compressed still images. These metrics incorporate simplified models of human visual sensitivity to spatial and chromatic visual signals. Here I describe a new video quality metric that is an extension of these still image metrics into the time domain. Like the still image metrics, it is based on the Discrete Cosine Transform. An effort has been made to minimize the amount of memory and computation required by the metric, in order that it might be applied in the widest range of applications. To calibrate the basic sensitivity of this metric to spatial and temporal signals we have made measurements of visual thresholds for temporally varying samples of DCT quantization noise.
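A bare-bones sketch of a DCT-domain error measure is given below, with an invented frequency weighting standing in for a calibrated contrast-sensitivity model; it illustrates the structure of such metrics, not the specific metric described above.
```python
# Illustrative sketch only: block-DCT error between reference and test frames,
# pooled with a toy (uncalibrated) frequency-sensitivity weighting.
import numpy as np
from scipy.fft import dctn

def dct_block_error(ref, test, block=8):
    ref = ref.astype(float)
    test = test.astype(float)
    h = (ref.shape[0] // block) * block
    w = (ref.shape[1] // block) * block
    u, v = np.meshgrid(np.arange(block), np.arange(block), indexing="ij")
    weights = 1.0 / (1.0 + u + v)            # toy stand-in for a sensitivity matrix
    err = 0.0
    for i in range(0, h, block):
        for j in range(0, w, block):
            diff = ref[i:i+block, j:j+block] - test[i:i+block, j:j+block]
            err += np.sum((weights * dctn(diff, norm="ortho")) ** 2)
    return np.sqrt(err / (h * w))
```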
Progressive low-bitrate digital color/monochrome image coding by neuro-fuzzy clustering
NASA Astrophysics Data System (ADS)
Mitra, Sunanda; Meadows, Steven
1997-10-01
Color image coding at low bit rates is an area of research that is just being addressed in recent literature since the problems of storage and transmission of color images are becoming more prominent in many applications. Current trends in image coding exploit the advantage of subband/wavelet decompositions in reducing the complexity of optimal scalar/vector quantizer (SQ/VQ) design. Compression ratios (CRs) of the order of 10:1 to 20:1 with high visual quality have been achieved by using vector quantization of subband-decomposed color images in perceptually weighted color spaces. We report the performance of a recently developed adaptive vector quantizer, namely AFLC-VQ, for effective reduction in bit rates while maintaining high visual quality of reconstructed color as well as monochrome images. For 24-bit color images, excellent visual quality is maintained up to a bit rate reduction to approximately 0.48 bpp (0.16 bpp for each color plane or monochrome, CR 50:1) by using the RGB color space. Further tuning of the AFLC-VQ, and addition of an entropy coder module after the VQ stage, results in extremely low bit rates (CR 80:1) for good-quality reconstructed images. Our recent study also reveals that, for similar visual quality, the RGB color space requires fewer bits per pixel than either the YIQ or the HSI color space for storing the same information when entropy coding is applied. AFLC-VQ outperforms other standard VQ and adaptive SQ techniques in retaining visual fidelity at similar bit rate reduction.
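A generic vector-quantization sketch (k-means codebook over image blocks) is shown below as a simplified stand-in for the adaptive AFLC-VQ; block size and codebook size are arbitrary choices that determine the compression ratio, and no subband decomposition or entropy coder is included.
```python
# Generic block VQ sketch (k-means codebook), a simplified stand-in for AFLC-VQ;
# only the quantisation round-trip is shown.
import numpy as np
from sklearn.cluster import KMeans

def vq_roundtrip(image, block=4, codebook_size=64):
    h = (image.shape[0] // block) * block
    w = (image.shape[1] // block) * block
    blocks = (image[:h, :w].astype(float)
              .reshape(h // block, block, w // block, block)
              .swapaxes(1, 2).reshape(-1, block * block))
    km = KMeans(n_clusters=codebook_size, n_init=4, random_state=0).fit(blocks)
    indices = km.predict(blocks)                       # the symbols that would be coded
    recon = km.cluster_centers_[indices]
    recon = recon.reshape(h // block, w // block, block, block).swapaxes(1, 2)
    return recon.reshape(h, w)
```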
Backward Registration Based Aspect Ratio Similarity (ARS) for Image Retargeting Quality Assessment.
Zhang, Yabin; Fang, Yuming; Lin, Weisi; Zhang, Xinfeng; Li, Leida
2016-06-28
During the past few years, there have been various kinds of content-aware image retargeting operators proposed for image resizing. However, the lack of effective objective retargeting quality assessment metrics limits the further development of image retargeting techniques. Different from traditional Image Quality Assessment (IQA) metrics, the quality degradation during image retargeting is caused by artificial retargeting modifications, and the difficulty for Image Retargeting Quality Assessment (IRQA) lies in the alteration of the image resolution and content, which makes it impossible to directly evaluate the quality degradation as in traditional IQA. In this paper, we interpret image retargeting in a unified framework of resampling grid generation and forward resampling. We show that geometric change estimation is an efficient way to clarify the relationship between the images. We formulate the geometric change estimation as a Backward Registration problem with a Markov Random Field (MRF) and provide an effective solution. The geometric change aims to provide evidence about how the original image is resized into the target image. Under the guidance of the geometric change, we develop a novel Aspect Ratio Similarity metric (ARS) to evaluate the visual quality of retargeted images by exploiting the local block changes with a visual importance pooling strategy. Experimental results on the publicly available MIT RetargetMe and CUHK datasets demonstrate that the proposed ARS can predict the visual quality of retargeted images more accurately than state-of-the-art IRQA metrics.
Liu, Jinping; Tang, Zhaohui; Xu, Pengfei; Liu, Wenzhong; Zhang, Jin; Zhu, Jianyong
2016-06-29
The topic of online product quality inspection (OPQI) with smart visual sensors is attracting increasing interest in both the academic and industrial communities on account of the natural connection between the visual appearance of products and their underlying qualities. Visual images captured from granulated products (GPs), e.g., cereal products and fabric textiles, are composed of a large number of independent particles or stochastically stacking locally homogeneous fragments, whose analysis and understanding remains challenging. A method of image statistical modeling-based OPQI for GP quality grading and monitoring by a Weibull distribution (WD) model with a semi-supervised learning classifier is presented. WD model parameters (WD-MPs) of the spatial structures of GP images, obtained with omnidirectional Gaussian derivative filtering (OGDF) and demonstrated theoretically to obey a specific WD model of integral form, were extracted as the visual features. Then, a co-training-style semi-supervised classifier algorithm, named COSC-Boosting, was exploited for semi-supervised GP quality grading, integrating two independent classifiers with complementary natures in the face of scarce labeled samples. The effectiveness of the proposed OPQI method was verified in the field of automated rice quality grading, compared with commonly used methods, and showed superior performance, which lays a foundation for the quality control of GPs on assembly lines.
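The feature-extraction step can be sketched with SciPy: Gaussian derivative filter responses are computed and a two-parameter Weibull model is fitted to their magnitudes. The filter scale is an assumed value, and the omnidirectional filtering and COSC-Boosting classifier of the paper are not reproduced.
```python
# Sketch of the Weibull feature idea: Gaussian-derivative gradient magnitudes of a
# product image fitted with a two-parameter Weibull model (shape and scale).
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.stats import weibull_min

def weibull_features(image, sigma=1.0):
    img = image.astype(float)
    gx = gaussian_filter(img, sigma, order=(0, 1))
    gy = gaussian_filter(img, sigma, order=(1, 0))
    magnitude = np.hypot(gx, gy).ravel()
    magnitude = magnitude[magnitude > 0]
    shape, _, scale = weibull_min.fit(magnitude, floc=0)   # location fixed at zero
    return shape, scale
```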
An evaluation of the use of oral contrast media in abdominopelvic CT.
Buttigieg, Erica Lauren; Grima, Karen Borg; Cortis, Kelvin; Soler, Sandro Galea; Zarb, Francis
2014-11-01
To evaluate the diagnostic efficacy of different oral contrast media (OCM) for abdominopelvic CT examinations performed for follow-up general oncological indications. The objectives were to establish anatomical image quality criteria for abdominopelvic CT; use these criteria to evaluate and compare image quality using positive OCM, neutral OCM and no OCM; and evaluate possible benefits for the medical imaging department. Forty-six adult patients attending a follow-up abdominopelvic CT for general oncological indications and who had a previous abdominopelvic CT with positive OCM (n = 46) were recruited and prospectively placed into either the water (n = 25) or no OCM (n = 21) group. Three radiologists performed absolute visual grading analysis (VGA) to assess image quality by grading the fulfilment of 24 anatomical image quality criteria. Visual grading characteristics (VGC) analysis of the data showed comparable image quality with regards to reproduction of abdominal structures, bowel discrimination, presence of artefacts, and visualization of the amount of intra-abdominal fat for the three OCM protocols. All three OCM protocols provided similar image quality for follow-up abdominopelvic CT for general oncological indications. • Positive oral contrast media are routinely used for abdominopelvic multidetector computed tomography • Experimental study comparing image quality using three different oral contrast materials • Three different oral contrast materials result in comparable CT image quality • Benefits for patients and medical imaging department.
Assessment of visual communication by information theory
NASA Astrophysics Data System (ADS)
Huck, Friedrich O.; Fales, Carl L.
1994-01-01
This assessment of visual communication integrates the optical design of the image-gathering device with the digital processing for image coding and restoration. Results show that informationally optimized image gathering ordinarily can be relied upon to maximize the information efficiency of decorrelated data and the visual quality of optimally restored images.
Nakanishi, Rine; Sankaran, Sethuraman; Grady, Leo; Malpeso, Jenifer; Yousfi, Razik; Osawa, Kazuhiro; Ceponiene, Indre; Nazarat, Negin; Rahmani, Sina; Kissel, Kendall; Jayawardena, Eranthi; Dailing, Christopher; Zarins, Christopher; Koo, Bon-Kwon; Min, James K; Taylor, Charles A; Budoff, Matthew J
2018-03-23
Our goal was to evaluate the efficacy of a fully automated method for assessing the image quality (IQ) of coronary computed tomography angiography (CCTA). The machine learning method was trained using 75 CCTA studies by mapping features (noise, contrast, misregistration scores, and un-interpretability index) to an IQ score based on manual ground truth data. The automated method was validated on a set of 50 CCTA studies and subsequently tested on a new set of 172 CCTA studies against visual IQ scores on a 5-point Likert scale. The area under the curve in the validation set was 0.96. In the 172 CCTA studies, our method yielded a Cohen's kappa statistic for the agreement between automated and visual IQ assessment of 0.67 (p < 0.01). In the group where good to excellent (n = 163), fair (n = 6), and poor visual IQ scores (n = 3) were graded, 155, 5, and 2 of the patients received an automated IQ score > 50 %, respectively. Fully automated assessment of the IQ of CCTA data sets by machine learning was reproducible and provided similar results compared with visual analysis within the limits of inter-operator variability. • The proposed method enables automated and reproducible image quality assessment. • Machine learning and visual assessments yielded comparable estimates of image quality. • Automated assessment potentially allows for more standardised image quality. • Image quality assessment enables standardization of clinical trial results across different datasets.
NASA Astrophysics Data System (ADS)
Jobson, Daniel J.; Rahman, Zia-ur; Woodell, Glenn A.; Hines, Glenn D.
2006-05-01
Aerial images from the Follow-On Radar, Enhanced and Synthetic Vision Systems Integration Technology Evaluation (FORESITE) flight tests with the NASA Langley Research Center's research Boeing 757 were acquired during severe haze and haze/mixed clouds visibility conditions. These images were enhanced using the Visual Servo (VS) process that makes use of the Multiscale Retinex. The images were then quantified with visual quality metrics used internally within the VS. One of these metrics, the Visual Contrast Measure, has been computed for hundreds of FORESITE images, and for major classes of imaging-terrestrial (consumer), orbital Earth observations, orbital Mars surface imaging, NOAA aerial photographs, and underwater imaging. The metric quantifies both the degree of visual impairment of the original, un-enhanced images as well as the degree of visibility improvement achieved by the enhancement process. The large aggregate data exhibits trends relating to degree of atmospheric visibility attenuation, and its impact on the limits of enhancement performance for the various image classes. Overall results support the idea that in most cases that do not involve extreme reduction in visibility, large gains in visual contrast are routinely achieved by VS processing. Additionally, for very poor visibility imaging, lesser, but still substantial, gains in visual contrast are also routinely achieved. Further, the data suggest that these visual quality metrics can be used as external standalone metrics for establishing performance parameters.
NASA Technical Reports Server (NTRS)
Jobson, Daniel J.; Rahman, Zia-ur; Woodell, Glenn A.; Hines, Glenn D.
2006-01-01
Aerial images from the Follow-On Radar, Enhanced and Synthetic Vision Systems Integration Technology Evaluation (FORESITE) flight tests with the NASA Langley Research Center's research Boeing 757 were acquired during severe haze and haze/mixed clouds visibility conditions. These images were enhanced using the Visual Servo (VS) process that makes use of the Multiscale Retinex. The images were then quantified with visual quality metrics used internally with the VS. One of these metrics, the Visual Contrast Measure, has been computed for hundreds of FORESITE images, and for major classes of imaging--terrestrial (consumer), orbital Earth observations, orbital Mars surface imaging, NOAA aerial photographs, and underwater imaging. The metric quantifies both the degree of visual impairment of the original, un-enhanced images as well as the degree of visibility improvement achieved by the enhancement process. The large aggregate data exhibits trends relating to degree of atmospheric visibility attenuation, and its impact on limits of enhancement performance for the various image classes. Overall results support the idea that in most cases that do not involve extreme reduction in visibility, large gains in visual contrast are routinely achieved by VS processing. Additionally, for very poor visibility imaging, lesser, but still substantial, gains in visual contrast are also routinely achieved. Further, the data suggest that these visual quality metrics can be used as external standalone metrics for establishing performance parameters.
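The enhancement core referred to in both records above, the multiscale retinex, can be sketched compactly; the surround scales, equal weights, and log offset are arbitrary choices, and the full Visual Servo loop with its internal quality metrics is not reproduced.
```python
# Hedged sketch of a multiscale retinex (the enhancement at the core of the
# Visual Servo process); scales and the log offset are arbitrary assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_retinex(image, sigmas=(15, 80, 250), eps=1.0):
    img = image.astype(float) + eps
    output = np.zeros_like(img)
    for sigma in sigmas:
        surround = gaussian_filter(img, sigma)
        output += np.log(img) - np.log(surround + eps)   # single-scale retinex term
    return output / len(sigmas)                           # equal-weight combination
```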
Titiyal, Jeewan S; Kaur, Manpreet; Jose, Cijin P; Falera, Ruchita; Kinkar, Ashutosh; Bageshwar, Lalit Ms
2018-01-01
To compare toric intraocular lens (IOL) alignment assisted by image-guided surgery or manual marking methods and its impact on visual quality. This prospective comparative study enrolled 80 eyes with cataract and astigmatism ≥1.5 D to undergo phacoemulsification with toric IOL alignment by manual marking method using bubble marker (group I, n=40) or Callisto eye and Z align (group II, n=40). Postoperatively, accuracy of alignment and visual quality was assessed with a ray tracing aberrometer. Primary outcome measure was deviation from the target axis of implantation. Secondary outcome measures were visual quality and acuity. Follow-up was performed on postoperative days (PODs) 1 and 30. Deviation from the target axis of implantation was significantly less in group II on PODs 1 and 30 (group I: 5.5°±3.3°, group II: 3.6°±2.6°; p=0.005). Postoperative refractive cylinder was -0.89±0.35 D in group I and -0.64±0.36 D in group II (p=0.003). Visual acuity was comparable between both the groups. Visual quality measured in terms of Strehl ratio (p<0.05) and modulation transfer function (MTF) (p<0.05) was significantly better in the image-guided surgery group. Significant negative correlation was observed between deviation from target axis and visual quality parameters (Strehl ratio and MTF) (p<0.05). Image-guided surgery allows precise alignment of toric IOL without need for reference marking. It is associated with superior visual quality which correlates with the precision of IOL alignment.
Titiyal, Jeewan S; Kaur, Manpreet; Jose, Cijin P; Falera, Ruchita; Kinkar, Ashutosh; Bageshwar, Lalit MS
2018-01-01
Purpose To compare toric intraocular lens (IOL) alignment assisted by image-guided surgery or manual marking methods and its impact on visual quality. Patients and methods This prospective comparative study enrolled 80 eyes with cataract and astigmatism ≥1.5 D to undergo phacoemulsification with toric IOL alignment by manual marking method using bubble marker (group I, n=40) or Callisto eye and Z align (group II, n=40). Postoperatively, accuracy of alignment and visual quality was assessed with a ray tracing aberrometer. Primary outcome measure was deviation from the target axis of implantation. Secondary outcome measures were visual quality and acuity. Follow-up was performed on postoperative days (PODs) 1 and 30. Results Deviation from the target axis of implantation was significantly less in group II on PODs 1 and 30 (group I: 5.5°±3.3°, group II: 3.6°±2.6°; p=0.005). Postoperative refractive cylinder was −0.89±0.35 D in group I and −0.64±0.36 D in group II (p=0.003). Visual acuity was comparable between both the groups. Visual quality measured in terms of Strehl ratio (p<0.05) and modulation transfer function (MTF) (p<0.05) was significantly better in the image-guided surgery group. Significant negative correlation was observed between deviation from target axis and visual quality parameters (Strehl ratio and MTF) (p<0.05). Conclusion Image-guided surgery allows precise alignment of toric IOL without need for reference marking. It is associated with superior visual quality which correlates with the precision of IOL alignment. PMID:29731603
Image quality assessment by preprocessing and full reference model combination
NASA Astrophysics Data System (ADS)
Bianco, S.; Ciocca, G.; Marini, F.; Schettini, R.
2009-01-01
This paper focuses on full-reference image quality assessment and presents different computational strategies aimed to improve the robustness and accuracy of some well known and widely used state of the art models, namely the Structural Similarity approach (SSIM) by Wang and Bovik and the S-CIELAB spatial-color model by Zhang and Wandell. We investigate the hypothesis that combining error images with a visual attention model could allow a better fit of the psycho-visual data of the LIVE Image Quality assessment Database Release 2. We show that the proposed quality assessment metric better correlates with the experimental data.
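The combination idea can be sketched by weighting the local SSIM map with a visual-attention (saliency) map before pooling; the saliency model itself is assumed to be supplied, and skimage's SSIM implementation with an 8-bit data range is an assumed choice.
```python
# Sketch of combining SSIM with a visual attention model: pool the local SSIM map
# with saliency weights (the saliency map is assumed given; 8-bit images assumed).
import numpy as np
from skimage.metrics import structural_similarity

def attention_weighted_ssim(reference, distorted, saliency):
    _, ssim_map = structural_similarity(reference, distorted, data_range=255, full=True)
    weights = saliency / (saliency.sum() + 1e-12)
    return float(np.sum(weights * ssim_map))
```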
On the assessment of visual communication by information theory
NASA Technical Reports Server (NTRS)
Huck, Friedrich O.; Fales, Carl L.
1993-01-01
This assessment of visual communication integrates the optical design of the image-gathering device with the digital processing for image coding and restoration. Results show that informationally optimized image gathering ordinarily can be relied upon to maximize the information efficiency of decorrelated data and the visual quality of optimally restored images.
Wakui, Takashi; Matsumoto, Tsuyoshi; Matsubara, Kenta; Kawasaki, Tomoyuki; Yamaguchi, Hiroshi; Akutsu, Hidenori
2017-10-01
We propose an image analysis method for quality evaluation of human pluripotent stem cells based on biologically interpretable features. It is important to maintain the undifferentiated state of induced pluripotent stem cells (iPSCs) while culturing the cells during propagation. Cell culture experts visually select good quality cells exhibiting the morphological features characteristic of undifferentiated cells. Experts have empirically determined that these features comprise prominent and abundant nucleoli, less intercellular spacing, and fewer differentiating cellular nuclei. We quantified these features based on experts' visual inspection of phase contrast images of iPSCs and found that these features are effective for evaluating iPSC quality. We then developed an iPSC quality evaluation method using an image analysis technique. The method allowed accurate classification, equivalent to visual inspection by experts, of three iPSC cell lines.
Human visual system consistent quality assessment for remote sensing image fusion
NASA Astrophysics Data System (ADS)
Liu, Jun; Huang, Junyi; Liu, Shuguang; Li, Huali; Zhou, Qiming; Liu, Junchen
2015-07-01
Quality assessment for image fusion is essential for remote sensing applications. Commonly used indices require a high-spatial-resolution multispectral (MS) image as a reference, which is not always readily available. Meanwhile, fusion quality assessments using these indices may not be consistent with the Human Visual System (HVS). In an attempt to overcome this requirement and inconsistency, this paper proposes an HVS-consistent image fusion quality assessment index that works at the highest resolution without a reference MS image, using Gaussian Scale Space (GSS) technology that can simulate the HVS. The spatial details and spectral information of the original and fused images are first separated in GSS, and their qualities are evaluated using the proposed spatial and spectral quality indices, respectively. The overall quality is determined without a reference MS image by a combination of the two proposed indices. Experimental results on various remote sensing images indicate that the proposed index is more consistent with HVS evaluation than other widely used indices that may or may not require reference images.
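A minimal sketch of the separation idea is given below: spatial detail and low-frequency (spectral-like) content are split with Gaussian filtering at several scales. The scales and the simple difference-of-Gaussians construction are illustrative assumptions and do not reproduce the paper's index.

```python
# Sketch: separate spatial detail from low-frequency content in a Gaussian scale
# space. Scales and pooling are illustrative assumptions only.
import numpy as np
from scipy.ndimage import gaussian_filter

def scale_space_split(band, sigmas=(1.0, 2.0, 4.0)):
    """band: 2-D array (one spectral band); returns (detail_layers, base)."""
    img = band.astype(float)
    details, base = [], img
    for s in sigmas:
        blurred = gaussian_filter(img, sigma=s)
        details.append(img - blurred)      # fine spatial detail at this scale
        base = blurred                     # coarsest level approximates spectral content
    return details, base
```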
Use of images in shelf life assessment of fruit salad.
Manzocco, Lara; Rumignani, Alberto; Lagazio, Corrado
2012-07-01
Fruit salads stored for different lengths of time, as well as their images, were used to estimate sensory shelf life by survival analysis. Shelf life estimates obtained using fruit salad images were longer than those achieved by analyzing the real product. This was attributed to the fact that images are 2-dimensional representations of real food and are probably not comprehensive of all the visual information needed by the panelists to produce an acceptability/unacceptability judgment. Images were also subjected to image analysis and to assessment of overall visual quality by a trained panel. These indices proved to be highly correlated with consumer rejection of the fruit salad and could be exploited for routine shelf life assessment of analogous products. In this regard, a failure criterion of 25% consumer rejection would be equivalent to a score of 3 on a 5-point overall visual quality scale. Food images can be used to assess product shelf life. In the case of fruit salads, the overall visual quality assessed by a trained panel on product images and the percentage of brown pixels in digital images can be exploited to estimate the shelf life corresponding to a selected consumer rejection level. © 2012 Institute of Food Technologists®
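A possible implementation of the brown-pixel measure mentioned above is sketched here. The RGB thresholds used to call a pixel "brown" are illustrative assumptions, not the values used in the study.

```python
# Sketch: fraction of "brown" pixels in an RGB fruit-salad image as a browning proxy.
# The RGB thresholds below are illustrative assumptions.
import numpy as np

def brown_pixel_fraction(rgb):
    """rgb: H x W x 3 uint8 array; returns fraction of pixels classed as brown."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    brown = (r > 60) & (r < 180) & (g > 30) & (g < r) & (b < g)   # dull reddish-brown
    return float(brown.mean())
```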
Experimental design and analysis of JND test on coded image/video
NASA Astrophysics Data System (ADS)
Lin, Joe Yuchieh; Jin, Lina; Hu, Sudeng; Katsavounidis, Ioannis; Li, Zhi; Aaron, Anne; Kuo, C.-C. Jay
2015-09-01
The visual Just-Noticeable-Difference (JND) metric is characterized by the minimum difference between two visual stimuli that can be detected. Conducting a subjective JND test is a labor-intensive task. In this work, we present a novel interactive method for performing the visual JND test on compressed images/video. JND has been used to enhance perceptual visual quality in the context of image/video compression. Given a set of coding parameters, a JND test is designed to determine the distinguishable quality level against a reference image/video, which is called the anchor. The JND metric can be used to save coding bitrate by exploiting the special characteristics of the human visual system. The proposed JND test is conducted using a binary forced-choice method, which is often adopted to discriminate differences in perception in psychophysical experiments. The assessors are asked to compare coded image/video pairs and determine whether they are of the same quality or not. A bisection procedure is designed to find the JND locations so as to reduce the required number of comparisons over a wide range of bitrates. We demonstrate the efficiency of the proposed JND test and report experimental results on the image and video JND tests.
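The bisection idea can be sketched as below: assuming responses are monotone in quality level, the search halves the interval between the last "same as anchor" level and the first "noticeably different" level. The `judged_same` callback stands in for an assessor's forced-choice response and is hypothetical.

```python
# Sketch of the bisection idea for locating a JND point over sorted quality levels.
# `judged_same(level)` is a hypothetical callback returning the assessor's response.
def find_jnd(levels, judged_same):
    """levels: quality settings sorted from best (indistinguishable) to worst."""
    lo, hi = 0, len(levels) - 1            # lo: known-same end, hi: known-different end
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if judged_same(levels[mid]):       # assessor cannot tell mid from the anchor
            lo = mid
        else:
            hi = mid
    return levels[lo]                      # JND lies between levels[lo] and levels[hi]
```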
Image gathering and digital restoration for fidelity and visual quality
NASA Technical Reports Server (NTRS)
Huck, Friedrich O.; Alter-Gartenberg, Rachel; Rahman, Zia-Ur
1991-01-01
The fidelity and resolution of the traditional Wiener restorations given in the prevalent digital processing literature can be significantly improved when the transformations between the continuous and discrete representations in image gathering and display are accounted for. However, the visual quality of these improved restorations is also more sensitive to the defects caused by aliasing artifacts, colored noise, and ringing near sharp edges. In this paper, these visual defects are characterized, and methods for suppressing them are presented. It is demonstrated how the visual quality of fidelity-maximized images can be improved when (1) the image-gathering system is specifically designed to enhance the performance of the image-restoration algorithm, and (2) the Wiener filter is combined with interactive Gaussian smoothing, synthetic high edge enhancement, and nonlinear tone-scale transformation. The nonlinear transformation is used primarily to enhance the spatial details that are often obscured when the normally wide dynamic range of natural radiance fields is compressed into the relatively narrow dynamic range of film and other displays.
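For orientation, a generic frequency-domain Wiener restoration with an optional Gaussian smoothing pass is sketched below. The blur transfer function `otf` and the noise-to-signal ratio `nsr` are assumed inputs; this is not the authors' full restoration chain (edge enhancement and tone-scale steps are omitted).

```python
# Sketch: frequency-domain Wiener restoration followed by optional Gaussian smoothing.
# `otf` (blur frequency response) and `nsr` are assumptions supplied by the caller.
import numpy as np
from scipy.ndimage import gaussian_filter

def wiener_restore(degraded, otf, nsr=0.01, smooth_sigma=0.0):
    """degraded: blurred/noisy image; otf: complex array of the same shape."""
    G = np.fft.fft2(degraded)
    W = np.conj(otf) / (np.abs(otf) ** 2 + nsr)      # Wiener filter
    restored = np.real(np.fft.ifft2(W * G))
    if smooth_sigma > 0:                             # interactive Gaussian smoothing step
        restored = gaussian_filter(restored, smooth_sigma)
    return restored
```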
NASA Astrophysics Data System (ADS)
Karam, Lina J.; Zhu, Tong
2015-03-01
The varying quality of face images is an important challenge that limits the effectiveness of face recognition technology when applied in real-world applications. Existing face image databases do not consider the effect of distortions that commonly occur in real-world environments. This database (QLFW) represents an initial attempt to provide a set of labeled face images spanning the wide range of quality, from no perceived impairment to strong perceived impairment for face detection and face recognition applications. Types of impairment include JPEG2000 compression, JPEG compression, additive white noise, Gaussian blur and contrast change. Subjective experiments are conducted to assess the perceived visual quality of faces under different levels and types of distortions and also to assess the human recognition performance under the considered distortions. One goal of this work is to enable automated performance evaluation of face recognition technologies in the presence of different types and levels of visual distortions. This will consequently enable the development of face recognition systems that can operate reliably on real-world visual content in the presence of real-world visual distortions. Another goal is to enable the development and assessment of visual quality metrics for face images and for face detection and recognition applications.
NASA Astrophysics Data System (ADS)
Radun, Jenni; Leisti, Tuomas; Virtanen, Toni; Nyman, Göte; Häkkinen, Jukka
2014-11-01
To understand the viewing strategies employed in a quality estimation task, we compared two visual tasks: quality estimation and difference estimation. The estimation was done for a pair of natural images having small global changes in quality. Two groups of observers estimated the same set of images, but with different instructions. One group estimated the difference in quality and the other the difference between image pairs. The results demonstrated the use of different visual strategies in the tasks. The quality estimation was found to include more visual planning during the first fixation than the difference estimation, but afterward needed only a few long fixations on the semantically important areas of the image. The difference estimation used many short fixations. Salient image areas were mainly attended to when these areas were also semantically important. The results support the hypothesis that these tasks' general characteristics (evaluation time, number of fixations, area fixated on) show differences in processing, but also suggest that examining only single fixations when comparing tasks is too narrow a view. When planning a subjective experiment, one must remember that a small change in the instructions might lead to a noticeable change in viewing strategy.
A Regression-Based Family of Measures for Full-Reference Image Quality Assessment
NASA Astrophysics Data System (ADS)
Oszust, Mariusz
2016-12-01
Advances in the development of imaging devices have resulted in the need for automatic quality evaluation of displayed visual content in a way that is consistent with human visual perception. In this paper, an approach to full-reference image quality assessment (IQA) is proposed in which several IQA measures, representing different approaches to modelling human visual perception, are efficiently combined in order to produce an objective quality evaluation of examined images that is highly correlated with the evaluation provided by human subjects. In the paper, an optimisation problem of selecting several IQA measures for creating a regression-based IQA hybrid measure, or multimeasure, is defined and solved using a genetic algorithm. Experimental evaluation on the four largest IQA benchmarks reveals that the multimeasures obtained using the proposed approach outperform state-of-the-art full-reference IQA techniques, including other recently developed fusion approaches.
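The regression step can be illustrated with a minimal least-squares fit, assuming the subset of component measures has already been chosen (the paper uses a genetic algorithm for that selection, which is not reproduced here).

```python
# Sketch: least-squares fusion of several IQA scores into one "multimeasure".
# Measure selection is assumed already done; only the regression step is shown.
import numpy as np

def fit_multimeasure(scores, mos):
    """scores: N x K matrix of K IQA measures for N images; mos: N subjective scores."""
    X = np.hstack([scores, np.ones((scores.shape[0], 1))])   # add intercept column
    coef, *_ = np.linalg.lstsq(X, mos, rcond=None)
    return coef

def apply_multimeasure(coef, scores):
    X = np.hstack([scores, np.ones((scores.shape[0], 1))])
    return X @ coef                                           # fused quality predictions
```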
Tugwell, J R; England, A; Hogg, P
2017-08-01
Physical and technical differences exist between imaging on an x-ray tabletop and imaging on a trolley. This study evaluates how trolley imaging impacts image quality and radiation dose for an antero-posterior (AP) pelvis projection whilst subsequently exploring means of optimising this imaging examination. An anthropomorphic pelvis phantom was imaged on a commercially available trolley under various conditions. Variables explored included two mattresses, two image receptor holder positions, three source to image distances (SIDs) and four mAs values. Image quality was evaluated using relative visual grading analysis with the reference image acquired on the x-ray tabletop. Contrast to noise ratio (CNR) was calculated. Effective dose was established using Monte Carlo simulation. Optimisation scores were derived as a figure of merit by dividing the effective dose by the visual image quality score. Visual image quality reduced significantly (p < 0.05) whilst effective dose increased significantly (p < 0.05) for images acquired on the trolley using identical acquisition parameters to the reference image. The trolley image with the highest optimisation score was acquired using 130 cm SID, 20 mAs, the standard mattress and the platform not elevated. A difference of 12.8 mm (18%) was found between the images with the lowest and highest magnification factors. The acquisition parameters used for AP pelvis on the x-ray tabletop are not transferable to trolley imaging and should be modified to compensate for the differences that exist. Exposure charts should be developed for trolley imaging to ensure optimal image quality at the lowest possible dose. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.
Information theoretical assessment of visual communication with wavelet coding
NASA Astrophysics Data System (ADS)
Rahman, Zia-ur
1995-06-01
A visual communication channel can be characterized by the efficiency with which it conveys information, and by the quality of the images restored from the transmitted data. Efficient data representation requires the use of the constraints of the visual communication channel. Our information theoretic analysis combines the design of the wavelet compression algorithm with the design of the visual communication channel. Shannon's communication theory, Wiener's restoration filter, and the critical design factors of image gathering and display are combined to provide metrics for measuring the efficiency of data transmission and for quantitatively assessing the visual quality of the restored image. These metrics are: (a) the mutual information (Eta) between the radiance field and the restored image, and (b) the efficiency of the channel, which can be roughly measured as the ratio (Eta)/H, where H is the average number of bits used to transmit the data. Huck, et al. (Journal of Visual Communication and Image Representation, Vol. 4, No. 2, 1993) have shown that channels designed to maximize (Eta) also maximize the visual quality of the restored image. Our assessment provides a framework for designing channels which provide the highest possible visual quality for a given amount of data under the critical design limitations of the image gathering and display devices. Results show that a trade-off exists between the maximum realizable information of the channel and its efficiency: an increase in one leads to a decrease in the other. The final selection of which of these quantities to maximize is, of course, application dependent.
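A crude numerical illustration of the two metrics is sketched below: a histogram estimate of the mutual information between a reference radiance field and the restored image, and the ratio of that estimate to the bits per sample actually transmitted. The bin count and the supplied bits-per-sample value are assumptions.

```python
# Sketch: histogram estimate of mutual information (Eta) and rough efficiency (Eta)/H.
import numpy as np

def mutual_information(ref, restored, bins=64):
    """Return a histogram estimate of I(ref; restored) in bits per sample."""
    joint, _, _ = np.histogram2d(ref.ravel(), restored.ravel(), bins=bins)
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)           # marginal of ref
    py = p.sum(axis=0, keepdims=True)           # marginal of restored
    nz = p > 0
    return float((p[nz] * np.log2(p[nz] / (px @ py)[nz])).sum())

def channel_efficiency(ref, restored, bits_per_sample):
    """Rough (Eta)/H: information conveyed per bit of transmitted data."""
    return mutual_information(ref, restored) / bits_per_sample
```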
Content dependent selection of image enhancement parameters for mobile displays
NASA Astrophysics Data System (ADS)
Lee, Yoon-Gyoo; Kang, Yoo-Jin; Kim, Han-Eol; Kim, Ka-Hee; Kim, Choon-Woo
2011-01-01
Mobile devices such as cellular phones and portable multimedia players capable of playing terrestrial digital multimedia broadcasting (T-DMB) content have been introduced into the consumer market. In this paper, a content-dependent image quality enhancement method for sharpness, colorfulness and noise reduction is presented to improve perceived image quality on mobile displays. Human visual experiments were performed to analyze viewers' preferences. The relationship between the objective measures and the optimal values of the image control parameters is modeled by simple lookup tables based on the results of the human visual experiments. Content-dependent values of the image control parameters are determined based on the calculated measures and the predetermined lookup tables. Experimental results indicate that dynamic selection of image control parameters yields better image quality.
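The lookup-table mechanism can be illustrated with the toy sketch below. The content measure, the bin edges and the parameter values are all illustrative assumptions; the paper derives its tables from the human visual experiments.

```python
# Sketch: pick an enhancement parameter from a predetermined lookup table keyed by a
# simple content measure. Measure, bin edges and gains are illustrative assumptions.
import numpy as np

SHARPNESS_LUT = [          # (upper bin edge of measured sharpness, sharpening gain)
    (0.02, 1.8),
    (0.05, 1.3),
    (np.inf, 1.0),
]

def select_sharpening_gain(gray):
    """gray: 2-D 8-bit luminance array; returns a content-dependent sharpening gain."""
    measure = np.abs(np.diff(gray.astype(float), axis=1)).mean() / 255.0  # crude sharpness
    for edge, gain in SHARPNESS_LUT:
        if measure <= edge:
            return gain
```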
Piekarski, Eve; Chitiboi, Teodora; Ramb, Rebecca; Latson, Larry A; Bhatla, Puneet; Feng, Li; Axel, Leon
2017-01-01
Object: Residual respiratory motion degrades image quality in conventional cardiac cine MRI (CCMR). We evaluated whether a free-breathing (FB) radial imaging CCMR sequence with compressed sensing reconstruction (eXtra-Dimension (e.g. cardiac and respiratory phases) Golden-angle RAdial Sparse Parallel, or XD-GRASP) could provide better image quality than a conventional Cartesian breath-held (BH) sequence, in an unselected population of patients undergoing clinical CCMR. Material and Methods: 101 patients who underwent BH and FB imaging in a mid-ventricular short-axis plane at a matching location were included. Visual and quantitative image analysis was performed by two blinded experienced readers, using a 5-point qualitative scale to score overall image quality and visual signal-to-noise ratio (SNR) grade, with measures of noise and sharpness. End-diastole (ED) and end-systole (ES) left-ventricular areas were also measured and compared for both BH and FB images. Results: Image quality was generally better with the BH cines (overall quality grade BH vs FB: 4 vs 2.9, p<0.001; noise 0.06 vs 0.08, p<0.001; SNR grade: 4.1 vs 3, p<0.001), except for sharpness (p=0.48). There were no significant differences between BH and FB images regarding ED or ES areas (p=0.35 and 0.12). 18 of the 101 patients had impaired BH image quality (grades 1 or 2). In this subgroup, image quality of the FB images was better (p=0.0032), as was the SNR grade (p=0.003), but there were no significant differences regarding noise and sharpness (p=0.45, p=0.47). Conclusion: Although FB XD-GRASP CCMR was visually inferior to conventional BH cardiac cine in general, it provided improved image quality in the subgroup of patients presenting respiratory motion-induced artifacts on breath-held images. PMID:29067539
Piekarski, Eve; Chitiboi, Teodora; Ramb, Rebecca; Latson, Larry A; Bhatla, Puneet; Feng, Li; Axel, Leon
2018-02-01
Residual respiratory motion degrades image quality in conventional cardiac cine MRI (CCMRI). We evaluated whether a free-breathing (FB) radial imaging CCMRI sequence with compressed sensing reconstruction [extradimensional (e.g. cardiac and respiratory phases) golden-angle radial sparse parallel, or XD-GRASP] could provide better image quality than a conventional Cartesian breath-held (BH) sequence in an unselected population of patients undergoing clinical CCMRI. One hundred one patients who underwent BH and FB imaging in a midventricular short-axis plane at a matching location were included. Visual and quantitative image analysis was performed by two blinded experienced readers, using a five-point qualitative scale to score overall image quality and visual signal-to-noise ratio (SNR) grade, with measures of noise and sharpness. End-diastolic and end-systolic left ventricular areas were also measured and compared for both BH and FB images. Image quality was generally better with the BH cines (overall quality grade for BH vs FB images 4 vs 2.9, p < 0.001; noise 0.06 vs 0.08 p < 0.001; SNR grade 4.1 vs 3, p < 0.001), except for sharpness (p = 0.48). There were no significant differences between BH and FB images regarding end-diastolic or end-systolic areas (p = 0.35 and p = 0.12). Eighteen of the 101 patients had poor BH image quality (grade 1 or 2). In this subgroup, the quality of the FB images was better (p = 0.0032), as was the SNR grade (p = 0.003), but there were no significant differences regarding noise and sharpness (p = 0.45 and p = 0.47). Although FB XD-GRASP CCMRI was visually inferior to conventional BH CCMRI in general, it provided improved image quality in the subgroup of patients with respiratory-motion-induced artifacts on BH images.
Xu, Renfeng; Wang, Huachun; Thibos, Larry N; Bradley, Arthur
2017-04-01
Our purpose is to develop a computational approach that jointly assesses the impact of stimulus luminance and pupil size on visual quality. We compared traditional optical measures of image quality and those that incorporate the impact of retinal illuminance dependent neural contrast sensitivity. Visually weighted image quality was calculated for a presbyopic model eye with representative levels of chromatic and monochromatic aberrations as pupil diameter was varied from 7 to 1 mm, stimulus luminance varied from 2000 to 0.1 cd/m2, and defocus varied from 0 to -2 diopters. The model included the effects of quantal fluctuations on neural contrast sensitivity. We tested the model's predictions for five cycles per degree gratings by measuring contrast sensitivity at 5 cyc/deg. Unlike the traditional Strehl ratio and the visually weighted area under the modulation transfer function, the visual Strehl ratio derived from the optical transfer function was able to capture the combined impact of optics and quantal noise on visual quality. In a well-focused eye, provided retinal illuminance is held constant as pupil size varies, visual image quality scales approximately as the square root of illuminance because of quantum fluctuations, but optimum pupil size is essentially independent of retinal illuminance and quantum fluctuations. Conversely, when stimulus luminance is held constant (and therefore illuminance varies with pupil size), optimum pupil size increases as luminance decreases, thereby compensating partially for increased quantum fluctuations. However, in the presence of -1 and -2 diopters of defocus and at high photopic levels where Weber's law operates, optical aberrations and diffraction dominate image quality and pupil optimization. Similar behavior was observed in human observers viewing sinusoidal gratings. Optimum pupil size increases as stimulus luminance drops for the well-focused eye, and the benefits of small pupils for improving defocused image quality remain throughout the photopic and mesopic ranges. However, restricting pupils to <2 mm will cause significant reductions in the best focus vision at low photopic and mesopic luminances.
NASA Astrophysics Data System (ADS)
Hanhart, Philippe; Ebrahimi, Touradj
2014-03-01
Crosstalk and vergence-accommodation rivalry negatively impact the quality of experience (QoE) provided by stereoscopic displays. However, exploiting visual attention and adapting the 3D rendering process on the fly can reduce these drawbacks. In this paper, we propose and evaluate two different approaches that exploit visual attention to improve 3D QoE on stereoscopic displays: an offline system, which uses a saliency map to predict gaze position, and an online system, which uses a remote eye tracking system to measure real time gaze positions. The gaze points were used in conjunction with the disparity map to extract the disparity of the object-of-interest. Horizontal image translation was performed to bring the fixated object on the screen plane. The user preference between standard 3D mode and the two proposed systems was evaluated through a subjective evaluation. Results show that exploiting visual attention significantly improves image quality and visual comfort, with a slight advantage for real time gaze determination. Depth quality is also improved, but the difference is not significant.
Image gathering and coding for digital restoration: Information efficiency and visual quality
NASA Technical Reports Server (NTRS)
Huck, Friedrich O.; John, Sarah; Mccormick, Judith A.; Narayanswamy, Ramkumar
1989-01-01
Image gathering and coding are commonly treated as tasks separate from each other and from the digital processing used to restore and enhance the images. The goal is to develop a method that allows us to assess quantitatively the combined performance of image gathering and coding for the digital restoration of images with high visual quality. Digital restoration is often interactive because visual quality depends on perceptual rather than mathematical considerations, and these considerations vary with the target, the application, and the observer. The approach is based on the theoretical treatment of image gathering as a communication channel (J. Opt. Soc. Am. A 2, 1644 (1985); 5, 285 (1988)). Initial results suggest that the practical upper limit of the information contained in the acquired image data ranges typically from approximately 2 to 4 binary information units (bifs) per sample, depending on the design of the image-gathering system. The associated information efficiency of the transmitted data (i.e., the ratio of information over data) ranges typically from approximately 0.3 to 0.5 bif per bit without coding to approximately 0.5 to 0.9 bif per bit with lossless predictive compression and Huffman coding. The visual quality that can be attained with interactive image restoration improves perceptibly as the available information increases to approximately 3 bifs per sample. However, the perceptual improvements that can be attained with further increases in information are very subtle and depend on the target and the desired enhancement.
The multiple sclerosis visual pathway cohort: understanding neurodegeneration in MS.
Martínez-Lapiscina, Elena H; Fraga-Pumar, Elena; Gabilondo, Iñigo; Martínez-Heras, Eloy; Torres-Torres, Ruben; Ortiz-Pérez, Santiago; Llufriu, Sara; Tercero, Ana; Andorra, Magi; Roca, Marc Figueras; Lampert, Erika; Zubizarreta, Irati; Saiz, Albert; Sanchez-Dalmau, Bernardo; Villoslada, Pablo
2014-12-15
Multiple Sclerosis (MS) is an immune-mediated disease of the Central Nervous System with two major underlying etiopathogenic processes: inflammation and neurodegeneration. The latter determines the prognosis of this disease. MS is the main cause of non-traumatic disability in middle-aged populations. The MS-VisualPath Cohort was set up to study the neurodegenerative component of MS using advanced imaging techniques by focusing on analysis of the visual pathway in a middle-aged MS population in Barcelona, Spain. We started the recruitment of patients in the early phase of MS in 2010 and it remains permanently open. All patients undergo a complete neurological and ophthalmological examination including measurements of physical and cognitive disability (Expanded Disability Status Scale, Multiple Sclerosis Functional Composite and neuropsychological tests), disease activity (relapses) and visual function testing (visual acuity, color vision and visual field). The MS-VisualPath protocol also assesses the presence of anxiety and depressive symptoms (Hospital Anxiety and Depression Scale), general quality of life (SF-36) and visual quality of life (25-Item National Eye Institute Visual Function Questionnaire with the 10-Item Neuro-Ophthalmic Supplement). In addition, the imaging protocol includes both retinal (Optical Coherence Tomography and Wide-Field Fundus Imaging) and brain imaging (Magnetic Resonance Imaging). Finally, multifocal Visual Evoked Potentials are used to perform neurophysiological assessment of the visual pathway. The analysis of the visual pathway with advanced imaging and electrophysiological tools in parallel with clinical information will provide significant new knowledge regarding neurodegeneration in MS and provide new clinical and imaging biomarkers to help monitor disease progression in these patients.
Liu, Jinping; Tang, Zhaohui; Xu, Pengfei; Liu, Wenzhong; Zhang, Jin; Zhu, Jianyong
2016-01-01
The topic of online product quality inspection (OPQI) with smart visual sensors is attracting increasing interest in both the academic and industrial communities on account of the natural connection between the visual appearance of products and their underlying quality. Visual images captured from granulated products (GPs), e.g., cereal products and fabric textiles, are composed of a large number of independent particles or stochastically stacked, locally homogeneous fragments, whose analysis and understanding remains challenging. A method of image-statistical-modeling-based OPQI for GP quality grading and monitoring using a Weibull distribution (WD) model with a semi-supervised learning classifier is presented. WD-model parameters (WD-MPs) of the GP images' spatial structures, obtained with omnidirectional Gaussian derivative filtering (OGDF) and demonstrated theoretically to obey a specific WD model of integral form, were extracted as the visual features. Then, a co-training-style semi-supervised classifier algorithm, named COSC-Boosting, was exploited for semi-supervised GP quality grading by integrating two independent classifiers of complementary nature to cope with scarce labeled samples. The effectiveness of the proposed OPQI method was verified in the field of automated rice quality grading and compared with commonly used methods, showing superior performance, which lays a foundation for the quality control of GPs on assembly lines. PMID:27367703
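A simplified version of the Weibull feature extraction is sketched below, fitting a Weibull model to gradient magnitudes computed with a plain Gaussian-derivative filter rather than the paper's omnidirectional filter bank; the sigma value is an assumption.

```python
# Sketch: Weibull (shape, scale) parameters of Gaussian-derivative gradient magnitudes
# as visual features. A plain gradient-magnitude filter stands in for OGDF.
import numpy as np
from scipy.ndimage import gaussian_gradient_magnitude
from scipy.stats import weibull_min

def weibull_features(gray, sigma=1.0):
    """gray: 2-D grayscale image; returns (shape, scale) of the fitted Weibull model."""
    mag = gaussian_gradient_magnitude(gray.astype(float), sigma=sigma).ravel()
    mag = mag[mag > 0]                             # Weibull support is positive
    shape, _, scale = weibull_min.fit(mag, floc=0)  # fix location parameter at zero
    return shape, scale
```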
Information, entropy, and fidelity in visual communication
NASA Astrophysics Data System (ADS)
Huck, Friedrich O.; Fales, Carl L.; Alter-Gartenberg, Rachel; Rahman, Zia-ur
1992-10-01
This paper presents an assessment of visual communication that integrates the critical limiting factors of image gathering and display with the digital processing that is used to code and restore images. The approach focuses on two mathematical criteria, information and fidelity, and on their relationships to the entropy of the encoded data and to the visual quality of the restored image.
Information, entropy and fidelity in visual communication
NASA Technical Reports Server (NTRS)
Huck, Friedrich O.; Fales, Carl L.; Alter-Gartenberg, Rachel; Rahman, Zia-Ur
1992-01-01
This paper presents an assessment of visual communication that integrates the critical limiting factors of image gathering and display with the digital processing that is used to code and restore images. The approach focuses on two mathematical criteria, information and fidelity, and on their relationships to the entropy of the encoded data and to the visual quality of the restored image.
Tan, T J; Lau, Kenneth K; Jackson, Dana; Ardley, Nicholas; Borasu, Adina
2017-04-01
The purpose of this study was to assess the efficacy of model-based iterative reconstruction (MBIR), statistical iterative reconstruction (SIR), and filtered back projection (FBP) image reconstruction algorithms in the delineation of ureters and overall image quality on non-enhanced computed tomography of the renal tracts (NECT-KUB). This was a prospective study of 40 adult patients who underwent NECT-KUB for investigation of ureteric colic. Images were reconstructed using FBP, SIR, and MBIR techniques and individually and randomly assessed by two blinded radiologists. Parameters measured were overall image quality, presence of ureteric calculus, presence of hydronephrosis or hydroureters, image quality of each ureteric segment, total length of ureters unable to be visualized, attenuation values of image noise, and retroperitoneal fat content for each patient. There were no diagnostic discrepancies between image reconstruction modalities for urolithiasis. Overall image quality and the image quality of each ureteric segment were superior using MBIR (67.5% rated as 'Good to Excellent' vs. 25% in SIR and 2.5% in FBP). The lengths of non-visualized ureteric segments were shortest using MBIR (55.0% measured 'less than 5 cm' vs. 33.8% for SIR and 10% for FBP). MBIR was able to reduce overall image noise by up to 49.36% over SIR and 71.02% over FBP. The MBIR technique improves overall image quality and visualization of ureters over FBP and SIR.
Despeckle filtering software toolbox for ultrasound imaging of the common carotid artery.
Loizou, Christos P; Theofanous, Charoula; Pantziaris, Marios; Kasparis, Takis
2014-04-01
Ultrasound imaging of the common carotid artery (CCA) is a non-invasive tool used in medicine to assess the severity of atherosclerosis and monitor its progression through time. It is also used in border detection and texture characterization of the atherosclerotic carotid plaque in the CCA, and in the identification and measurement of the intima-media thickness (IMT) and the lumen diameter, all of which are very important in the assessment of cardiovascular disease (CVD). Visual perception, however, is hindered by speckle, a multiplicative noise that degrades the quality of ultrasound B-mode imaging. Noise reduction is therefore essential for improving the visual observation quality or as a pre-processing step for further automated analysis, such as image segmentation of the IMT and the atherosclerotic carotid plaque in ultrasound images. In order to facilitate this preprocessing step, we have developed in MATLAB® a unified toolbox that integrates image despeckle filtering (IDF), texture analysis and image quality evaluation techniques to automate the pre-processing and complement the disease evaluation in ultrasound CCA images. The proposed software is based on a graphical user interface (GUI) and incorporates image normalization, 10 different despeckle filtering techniques (DsFlsmv, DsFwiener, DsFlsminsc, DsFkuwahara, DsFgf, DsFmedian, DsFhmedian, DsFad, DsFnldif, DsFsrad), image intensity normalization, 65 texture features, 15 quantitative image quality metrics and objective image quality evaluation. The software is publicly available in an executable form, which can be downloaded from http://www.cs.ucy.ac.cy/medinfo/. It was validated on 100 ultrasound images of the CCA by comparing its results with quantitative visual analysis performed by a medical expert. It was observed that the despeckle filters DsFlsmv and DsFhmedian improved image quality perception (based on the expert's assessment and the image texture and quality metrics). It is anticipated that the system could help the physician in the assessment of cardiovascular image analysis. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Lee, Kai-Hui; Chiu, Pei-Ling
2013-10-01
Conventional visual cryptography (VC) suffers from a pixel-expansion problem, or an uncontrollable display quality problem for recovered images, and lacks a general approach to construct visual secret sharing schemes for general access structures. We propose a general and systematic approach to address these issues without sophisticated codebook design. This approach can be used for binary secret images in non-computer-aided decryption environments. To avoid pixel expansion, we design a set of column vectors to encrypt secret pixels rather than using the conventional VC-based approach. We begin by formulating a mathematical model of the VC construction problem to find the column vectors for the optimal VC construction, after which we develop a simulated-annealing-based algorithm to solve the problem. The experimental results show that the display quality of the recovered images is superior to that of previous approaches.
Real-time phase-contrast x-ray imaging: a new technique for the study of animal form and function
Socha, John J; Westneat, Mark W; Harrison, Jon F; Waters, James S; Lee, Wah-Keat
2007-01-01
Background: Despite advances in imaging techniques, real-time visualization of the structure and dynamics of tissues and organs inside small living animals has remained elusive. Recently, we have been using synchrotron x-rays to visualize the internal anatomy of millimeter-sized opaque, living animals. This technique takes advantage of partially-coherent x-rays and diffraction to enable clear visualization of internal soft tissue not viewable via conventional absorption radiography. However, because higher quality images require greater x-ray fluxes, there exists an inherent tradeoff between image quality and tissue damage. Results: We evaluated the tradeoff between image quality and harm to the animal by determining the impact of targeted synchrotron x-rays on insect physiology, behavior and survival. Using 25 keV x-rays at a flux density of 80 μW/mm², high quality video-rate images can be obtained without major detrimental effects on the insects for multiple minutes, a duration sufficient for many physiological studies. At this setting, insects do not heat up. Additionally, we demonstrate the range of uses of synchrotron phase-contrast imaging by showing high-resolution images of internal anatomy and observations of labeled food movement during ingestion and digestion. Conclusion: Synchrotron x-ray phase contrast imaging has the potential to revolutionize the study of physiology and internal biomechanics in small animals. This is the only generally applicable technique that has the necessary spatial and temporal resolutions, penetrating power, and sensitivity to soft tissue that is required to visualize the internal physiology of living animals on the scale from millimeters to microns. PMID:17331247
Moore, C S; Wood, T J; Beavis, A W; Saunderson, J R
2013-07-01
The purpose of this study was to examine the correlation between the quality of visually graded patient (clinical) chest images and a quantitative assessment of chest phantom (physical) images acquired with a computed radiography (CR) imaging system. The results of a previously published study, in which four experienced image evaluators graded computer-simulated postero-anterior chest images using a visual grading analysis scoring (VGAS) scheme, were used for the clinical image quality measurement. Contrast-to-noise ratio (CNR) and effective dose efficiency (eDE) were used as physical image quality metrics measured in a uniform chest phantom. Although optimal values of these physical metrics for chest radiography were not derived in this work, their correlation with VGAS in images acquired without an antiscatter grid across the diagnostic range of X-ray tube voltages was determined using Pearson's correlation coefficient. Clinical and physical image quality metrics increased with decreasing tube voltage. Statistically significant correlations between VGAS and CNR (R=0.87, p<0.033) and eDE (R=0.77, p<0.008) were observed. Medical physics experts may use the physical image quality metrics described here in quality assurance programmes and optimisation studies with a degree of confidence that they reflect the clinical image quality in chest CR images acquired without an antiscatter grid. A statistically significant correlation has been found between the clinical and physical image quality in CR chest imaging. The results support the value of using CNR and eDE in the evaluation of quality in clinical thorax radiography.
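A minimal sketch of the physical metric and correlation step is given below: CNR from signal and background regions of a phantom image, and its Pearson correlation with visual grading scores. The ROI selection and the data arrays are assumed inputs.

```python
# Sketch: contrast-to-noise ratio from phantom ROIs and its Pearson correlation with
# visual grading analysis scores. ROI selection is an assumption.
import numpy as np
from scipy.stats import pearsonr

def cnr(signal_roi, background_roi):
    """signal_roi, background_roi: pixel arrays from the phantom image."""
    return (signal_roi.mean() - background_roi.mean()) / background_roi.std()

def correlate_with_vgas(cnr_values, vgas_values):
    """Both arguments: 1-D arrays, one entry per acquisition (e.g. per tube voltage)."""
    r, p = pearsonr(np.asarray(cnr_values), np.asarray(vgas_values))
    return r, p
```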
NASA Astrophysics Data System (ADS)
Yu, Xuelian; Chen, Qian; Gu, Guohua; Ren, Jianle; Sui, Xiubao
2015-02-01
Designing objective quality assessment for color-fused images is a very demanding and challenging task. We propose four no-reference metrics based on human visual system characteristics for objectively evaluating the quality of false-color fusion images. The perceived edge metric (PEM) is defined based on a visual perception model and the color image gradient similarity between the fused image and the source images. The perceptual contrast metric (PCM) is established by associating multi-scale contrast and a varying contrast sensitivity filter (CSF) with the color components. A linear combination of the standard deviation and mean value over the fused image constructs the image colorfulness metric (ICM). The color comfort metric (CCM) is designed from the average saturation and the ratio of pixels with high and low saturation. The qualitative and quantitative experimental results demonstrate that the proposed metrics are in good agreement with subjective perception.
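For illustration, a colorfulness-style index built from a linear combination of standard deviation and mean is sketched below. Computing it on opponent colour channels and the 0.3 weight follow a common colorfulness formulation and are assumptions; this is not the paper's exact ICM definition.

```python
# Sketch: colorfulness index as std + weighted mean of opponent colour channels.
# Channel choice and the 0.3 weight are assumptions, not the paper's ICM.
import numpy as np

def colorfulness_index(rgb):
    """rgb: H x W x 3 array; larger values indicate a more colorful fused image."""
    r, g, b = (rgb[..., i].astype(float) for i in range(3))
    rg, yb = r - g, 0.5 * (r + g) - b                      # opponent channels
    std = np.sqrt(rg.std() ** 2 + yb.std() ** 2)
    mean = np.sqrt(rg.mean() ** 2 + yb.mean() ** 2)
    return std + 0.3 * mean
```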
Osawa, Atsushi; Miwa, Kenta; Wagatsuma, Kei; Takiguchi, Tomohiro; Tamura, Shintaro; Akimoto, Kenta
2012-01-01
The image quality in (18)F-FDG PET/CT often degrades as the body size increases. The purpose of this study was to evaluate the relationship between image quality and body size using original phantoms of variable cross-sectional areas in PET/CT. We produced five water phantoms with different cross-sectional areas. The long axis of each phantom was 925 mm, and the cross-sectional area ranged from 324 to 1189 cm(2). These phantoms, each containing a sphere (diameter 10 mm), were filled with (18)F-FDG solution. The background radioactivity concentration in the phantom was 1.37, 2.73, 4.09 or 5.46 kBq/mL. The scanning duration was 30 min in list mode acquisition for each measurement. Background variability (N(10 mm)), noise equivalent count rate (NECR(phantom)) and hot sphere contrast (Q(H,10 mm)) were measured as physical evaluation metrics, together with a visual score of sphere detection. The relationship between image quality and the various cross-sectional areas was also analyzed under the above-mentioned conditions. As the cross-sectional area increased, NECR(phantom) progressively decreased. Furthermore, as the cross-sectional area increased, N(10 mm) increased and Q(H,10 mm) decreased. Image quality became degraded as body weight increased because noise and contrast both contribute to image quality. The visual score of sphere detection deteriorated at high background radioactivity concentrations because false positive detections within the cross-sectional area of the phantom increased. However, additional increases in scanning duration could improve the visual score. We assessed tendencies in the relationship between image quality and body size in PET/CT. Our results showed that time adjustment was more effective than dose adjustment for maintaining stable image quality in heavier patients, in terms of the large cross-sectional area.
2012-01-01
Background: The short inversion time inversion recovery (STIR) black-blood technique has been used to visualize myocardial edema, and thus to differentiate acute from chronic myocardial lesions. However, some cardiovascular magnetic resonance (CMR) groups have reported variable image quality, and hence the diagnostic value of STIR in routine clinical practice has been put into question. The aim of our study was to analyze image quality and diagnostic performance of STIR using a set of pulse sequence parameters dedicated to edema detection, and to discuss possible factors that influence image quality. We hypothesized that STIR imaging is an accurate and robust way of detecting myocardial edema in non-selected patients with acute myocardial infarction. Methods: Forty-six consecutive patients with acute myocardial infarction underwent CMR (day 4.5 ± 1.6) including STIR for the assessment of myocardial edema and late gadolinium enhancement (LGE) for quantification of myocardial necrosis. Thirty of these patients underwent a follow-up CMR at approximately six months (195 ± 39 days). Both STIR and LGE images were evaluated separately on a segmental basis for image quality as well as for presence and extent of myocardial hyper-intensity, with both visual and semi-quantitative (threshold-based) analysis. LGE was used as a reference standard for localization and extent of myocardial necrosis (acute) or scar (chronic). Results: Image quality of STIR images was rated as diagnostic in 99.5% of cases. At the acute stage, the sensitivity and specificity of STIR to detect infarcted segments on visual assessment was 95% and 78% respectively, and on semi-quantitative assessment was 99% and 83%, respectively. STIR differentiated acutely from chronically infarcted segments with a sensitivity of 95% by both methods and with a specificity of 99% by visual assessment and 97% by semi-quantitative assessment. The extent of hyper-intense areas on acute STIR images was 85% larger than those on LGE images, with a larger myocardial salvage index in reperfused than in non-reperfused infarcts (p = 0.035). Conclusions: STIR with appropriate pulse sequence settings is accurate in detecting acute myocardial infarction (MI) and distinguishing acute from chronic MI with both visual and semi-quantitative analysis. Due to its unique technical characteristics, STIR should be regarded as an edema-weighted rather than a purely T2-weighted technique. PMID:22455461
Racadio, John M.; Abruzzo, Todd A.; Johnson, Neil D.; Patel, Manish N.; Kukreja, Kamlesh U.; den Hartog, Mark. J. H.; Hoornaert, Bart P.A.; Nachabe, Rami A.
2015-01-01
The purpose of this study was to reduce pediatric doses while maintaining or improving image quality scores without removing the grid from the X-ray beam. This study was approved by the Institutional Animal Care and Use Committee. Three piglets (5, 14, and 20 kg) were imaged using six different selectable detector air kerma (Kair) per frame values (100%, 70%, 50%, 35%, 25%, 17.5%) with and without the grid. The number of distal branches visualized with diagnostic confidence relative to the injected vessel defined the image quality score. Five pediatric interventional radiologists evaluated all images. Image quality score and piglet Kair were statistically compared using analysis of variance and receiver operating characteristic curve analysis to define the preferred dose setting and use of the grid for visibility of 2nd- and 3rd-order vessel branches. Grid removal reduced both dose to subject and image quality by 26%. Third-order branches could only be visualized with the grid present; 100% detector Kair was required for the smallest pig, while 70% detector Kair was adequate for the two larger pigs. Second-order branches could be visualized with the grid at 17.5% detector Kair for all three pig sizes. Without the grid, 50%, 35%, and 35% detector Kair were required for the smallest to largest pig, respectively. Grid removal reduces both dose and image quality score. Image quality scores can be maintained with less dose to subject with the grid in the beam as opposed to removed. Smaller anatomy requires more dose to the detector to achieve the same image quality score. PACS numbers: 87.53.Bn, 87.57.N-, 87.57.cj, 87.59.cf, 87.59.Dj PMID:26699297
A comparative study of multi-focus image fusion validation metrics
NASA Astrophysics Data System (ADS)
Giansiracusa, Michael; Lutz, Adam; Messer, Neal; Ezekiel, Soundararajan; Alford, Mark; Blasch, Erik; Bubalo, Adnan; Manno, Michael
2016-05-01
Fusion of visual information from multiple sources is relevant for security, transportation, and safety applications. One way that image fusion can be particularly useful is when fusing imagery data from multiple levels of focus. Different focus levels can create different visual qualities for different regions in the imagery, which can provide much more visual information to analysts when fused. Multi-focus image fusion would benefit users through automation, which requires evaluation of the fused images to determine whether they have properly fused the focused regions of each image. Many no-reference metrics, such as information-theory-based, image-feature-based and structural-similarity-based metrics, have been developed to accomplish these comparisons. However, it is hard to scale an accurate assessment of visual quality, which requires the validation of these metrics for different types of applications. In order to do this, human-perception-based validation methods have been developed, particularly using receiver operating characteristic (ROC) curves and the area under them (AUC). Our study uses these to analyze the effectiveness of no-reference image fusion metrics applied to multi-resolution fusion methods in order to determine which should be used when dealing with multi-focus data. Preliminary results show that the Tsallis, SF, and spatial frequency metrics are consistent with the image quality and peak signal to noise ratio (PSNR).
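The ROC/AUC validation step can be sketched as below, assuming each fused image has a metric score and a binary human judgment of whether the focused regions were properly fused; both arrays are hypothetical inputs.

```python
# Sketch: validate a no-reference fusion metric against human judgments with ROC/AUC.
# `metric_scores` and `human_labels` (1 = judged properly fused) are assumed inputs.
from sklearn.metrics import roc_auc_score, roc_curve

def validate_metric(metric_scores, human_labels):
    """Return the AUC plus the ROC operating points for one fusion metric."""
    auc = roc_auc_score(human_labels, metric_scores)
    fpr, tpr, thresholds = roc_curve(human_labels, metric_scores)
    return auc, fpr, tpr
```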
Building conservation based on assessment of facade quality on Basuki Rachmat Street, Malang
NASA Astrophysics Data System (ADS)
Kurniawan, E. B.; Putri, R. Y. A.; Wardhani, D. K.
2017-06-01
Visual quality covers aspects of imageability, which are associated with the visual system and elements of distinction. Within the visual system of a specific area, physical quality can lead to a strong image. Here, physical quality is one of the important factors that make up urban aesthetics. To build a discussion of the visual system of an urban area, this paper aims to identify the influencing factors in defining the façade visual quality of heritage buildings on Jend. Basuki Rahmat Street, Malang City, East Java, Indonesia. This street is a main road of the Malang city center that was built by the Dutch colonial government and designed by Ir. Thomas Karsten. It is known as one of the areas of Malang with good visual quality. In order to identify the influencing factors, this paper uses multiple linear regression as the analysis tool. The examined potential factors result from architecture and urban design experts' assessments of each building segment on Jend. Basuki Rahmat Street. Finally, this paper reveals that the influencing factors are color, rhythm, and proportion. This is demonstrated by the resulting model: Visual quality (Y) = 0.304 + 0.21 color (X5) + 0.221 rhythm (X6) + 0.304 proportion (X7). Furthermore, recommendations for the building facades will be made based on this model and on a study of the historical and typological buildings on Basuki Rachmat Street.
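The reported model can be applied directly, as in the small sketch below; the example expert scores and the 1-5 scale are assumptions for illustration.

```python
# Sketch: apply the reported facade model
#   Y = 0.304 + 0.21*color + 0.221*rhythm + 0.304*proportion
def facade_visual_quality(color, rhythm, proportion):
    return 0.304 + 0.21 * color + 0.221 * rhythm + 0.304 * proportion

# Example: expert scores for one building segment on an assumed 1-5 scale.
print(facade_visual_quality(color=4, rhythm=3, proportion=4))
```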
Application of machine learning for the evaluation of turfgrass plots using aerial images
NASA Astrophysics Data System (ADS)
Ding, Ke; Raheja, Amar; Bhandari, Subodh; Green, Robert L.
2016-05-01
Historically, investigation of turfgrass characteristics has been limited to visual ratings. Although relevant information may result from such evaluations, final inferences may be questionable because of the subjective nature in which the data are collected. Recent advances in computer vision techniques allow researchers to objectively measure turfgrass characteristics such as percent ground cover, turf color, and turf quality from digital images. This paper focuses on developing a methodology for automated assessment of turfgrass quality from aerial images. Images of several turfgrass plots of varying quality were gathered using a camera mounted on an unmanned aerial vehicle. The quality of these plots was also evaluated based on visual ratings. The goal was to use the aerial images to generate quality evaluations on a regular basis for the optimization of water treatment. Aerial images are used to train a neural network so that appropriate features such as intensity, color, and texture of the turfgrass are extracted from these images. A neural network is a nonlinear classifier commonly used in machine learning. The output of the trained neural network model is the rating of the grass, which is compared to the visual ratings. Currently, the quality and the color of the turfgrass, measured as the greenness of the grass, are evaluated. The textures are calculated using Gabor filters and the co-occurrence matrix. Other classifiers such as support vector machines and simpler linear regression models such as Ridge regression and LARS regression are also used. The performance of each model is compared. The results show encouraging potential for using machine learning techniques for the evaluation of turfgrass quality and color.
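A compact stand-in for the feature-extraction and rating-regression pipeline is sketched below using a greenness measure, one Gabor texture response and Ridge regression; the feature choices and parameters are assumptions, not the paper's trained network.

```python
# Sketch: greenness + Gabor texture features from a plot image, mapped to a quality
# rating with Ridge regression. Feature choices and parameters are assumptions.
import numpy as np
from skimage.filters import gabor
from sklearn.linear_model import Ridge

def plot_features(rgb):
    """rgb: H x W x 3 array for one turfgrass plot."""
    greenness = rgb[..., 1].mean() - 0.5 * (rgb[..., 0].mean() + rgb[..., 2].mean())
    gray = rgb.mean(axis=2)
    real, _ = gabor(gray, frequency=0.2)          # one Gabor texture response
    return np.array([greenness, gray.mean(), real.var()])

def fit_rating_model(plot_images, visual_ratings):
    X = np.stack([plot_features(img) for img in plot_images])
    return Ridge(alpha=1.0).fit(X, np.asarray(visual_ratings))
```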
The Effect of Image Quality, Repeated Study, and Assessment Method on Anatomy Learning
ERIC Educational Resources Information Center
Fenesi, Barbara; Mackinnon, Chelsea; Cheng, Lucia; Kim, Joseph A.; Wainman, Bruce C.
2017-01-01
Two-dimensional (2D) images are consistently used to prepare anatomy students for handling real specimens. This study examined whether the quality of 2D images is a critical component in anatomy learning. The visual clarity and consistency of 2D anatomical images were systematically manipulated to produce low-quality and high-quality…
Clinical comparison of CR and screen film for imaging the critically ill neonate
NASA Astrophysics Data System (ADS)
Andriole, Katherine P.; Brasch, Robert C.; Gooding, Charles A.; Gould, Robert G.; Cohen, Pierre A.; Rencken, Ingo R.; Huang, H. K.
1996-05-01
A clinical comparison of computed radiography (CR) versus screen-film for imaging the critically ill neonate is performed, utilizing a modified (hybrid) film cassette containing a CR (standard ST-V) imaging plate, a conventional screen and film, allowing simultaneous acquisition of perfectly matched CR and plain film images. For 100 portable neonatal chest and abdominal projection radiographs, plain film was subjectively compared to CR hardcopy. Three pediatric radiologists graded overall image quality on a scale of one (poor) to five (excellent), as well as visualization of various anatomic structures (i.e., lung parenchyma, pulmonary vasculature, tubes/lines) and pathological findings (i.e., pulmonary interstitial emphysema, pleural effusion, pneumothorax). Results, analyzed using a combined kappa statistic of the differences between scores from each matched set combined over the three readers, showed no statistically significant difference in overall image quality between screen-film and CR (p = 0.19). Similarly, no statistically significant difference was seen between screen-film and CR for anatomic structure visualization and for visualization of pathological findings. These results indicate that the image quality of CR is comparable to plain film, and that CR may be a suitable alternative to screen-film imaging for portable neonatal chest and abdominal examinations.
Quality metrics for sensor images
NASA Technical Reports Server (NTRS)
Ahumada, AL
1993-01-01
Methods are needed for evaluating the quality of augmented visual displays (AVID). Computational quality metrics will help summarize, interpolate, and extrapolate the results of human performance tests with displays. The FLM Vision group at NASA Ames has been developing computational models of visual processing and using them to develop computational metrics for similar problems. For example, display modeling systems use metrics for comparing proposed displays, halftoning optimizing methods use metrics to evaluate the difference between the halftone and the original, and image compression methods minimize the predicted visibility of compression artifacts. The visual discrimination models take as input two arbitrary images A and B and compute an estimate of the probability that a human observer will report that A is different from B. If A is an image that one desires to display and B is the actual displayed image, such an estimate can be regarded as an image quality metric reflecting how well B approximates A. There are additional complexities associated with the problem of evaluating the quality of radar and IR enhanced displays for AVID tasks. One important problem is the question of whether intruding obstacles are detectable in such displays. Although the discrimination model can handle detection situations by making B the original image A plus the intrusion, this detection model makes the inappropriate assumption that the observer knows where the intrusion will be. Effects of signal uncertainty need to be added to our models. A pilot needs to make decisions rapidly. The models need to predict not just the probability of a correct decision, but the probability of a correct decision by the time the decision needs to be made. That is, the models need to predict latency as well as accuracy. Luce and Green have generated models for auditory detection latencies. Similar models are needed for visual detection. Most image quality models are designed for static imagery. Watson has been developing a general spatial-temporal vision model to optimize video compression techniques. These models need to be adapted and calibrated for AVID applications.
Optical cylinder designs to increase the field of vision in the osteo-odonto-keratoprosthesis.
Hull, C C; Liu, C S; Sciscio, A; Eleftheriadis, H; Herold, J
2000-12-01
The single optical cylinders used in the osteo-odonto-keratoprosthesis (OOKP) are known to produce very small visual fields. Values of 40 degrees are typically quoted. The purpose of this paper is to present designs for new optical cylinders that significantly increase the field of view and therefore improve the visual rehabilitation of patients having an OOKP. Computer ray-tracing techniques were used to design and analyse improved one- and two-piece optical cylinders made from polymethyl methacrylate. All designs were required to have a potential visual acuity of 6/6 before consideration was given to the visual field and optimising off-axis image quality. Aspheric surfaces were used where this significantly improved off-axis image quality. Single optical cylinders, with increased posterior cylinder (intraocular) diameters, gave an increase in the theoretical visual field of 18% (from 76 degrees to 90 degrees) over current designs. Two-piece designs based on an inverted telephoto principle gave theoretical field angles over 120 degrees. Aspheric surfaces were shown to improve the off-axis image quality while maintaining a potential visual acuity of at least 6/6. This may well increase the measured visual field by improving the retinal illuminance off-axis. Results demonstrate that it is possible to significantly increase the theoretical maximum visual field through OOKP optical cylinders. Such designs will improve the visual rehabilitation of patients undergoing this procedure.
Retinal Image Simulation of Subjective Refraction Techniques.
Perches, Sara; Collados, M Victoria; Ares, Jorge
2016-01-01
Refraction techniques make it possible to determine the most appropriate sphero-cylindrical lens prescription to achieve the best possible visual quality. Among these techniques, subjective refraction (i.e., patient's response-guided refraction) is the most commonly used approach. In this context, this paper's main goal is to present simulation software that implements, in a virtual manner, various subjective refraction techniques, including the Jackson Cross-Cylinder test (JCC), all relying on the observation of computer-generated retinal images. This software has also been used to evaluate visual quality when the JCC test is performed in multifocal contact lens wearers. The results reveal the software's usefulness for simulating the retinal image quality that a particular visual compensation provides. Moreover, it can help to gain deeper insight into and improve existing refraction techniques, and it can be used for simulated training.
PQSM-based RR and NR video quality metrics
NASA Astrophysics Data System (ADS)
Lu, Zhongkang; Lin, Weisi; Ong, Eeping; Yang, Xiaokang; Yao, Susu
2003-06-01
This paper presents a new and general concept, the PQSM (Perceptual Quality Significance Map), to be used in measuring visual distortion. It makes use of the selectivity characteristic of the HVS (Human Visual System), which pays more attention to certain areas/regions of a visual signal due to one or more of the following factors: salient features in the image/video, cues from domain knowledge, and association with other media (e.g., speech or audio). The PQSM is an array whose elements represent the relative perceptual-quality significance levels of the corresponding areas/regions of an image or video. Due to its generality, the PQSM can be incorporated into any visual distortion metric: to improve the effectiveness and/or efficiency of perceptual metrics, or even to enhance a PSNR-based metric. A three-stage PQSM estimation method is also proposed in this paper, with an implementation of motion, texture, luminance, skin-color and face mapping. Experimental results show that the scheme can improve the performance of current image/video distortion metrics.
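As a simple illustration of how such a map can enhance a PSNR-based metric, the sketch below weights the per-pixel squared error by a significance map before computing PSNR. The PQSM itself is assumed given; this is not the paper's three-stage estimation method.

```python
# Sketch: PQSM-weighted PSNR. The significance map `pqsm` is assumed precomputed.
import numpy as np

def pqsm_weighted_psnr(ref, dist, pqsm, peak=255.0):
    """ref, dist: images; pqsm: non-negative significance map of the same shape."""
    w = pqsm / (pqsm.sum() + 1e-12)                              # normalise weights
    err = ref.astype(float) - dist.astype(float)
    wmse = float((w * err ** 2).sum())                           # weighted MSE
    return 10.0 * np.log10(peak ** 2 / (wmse + 1e-12))
```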
Wood, T J; Beavis, A W; Saunderson, J R
2013-01-01
Objective: The purpose of this study was to examine the correlation between the quality of visually graded patient (clinical) chest images and a quantitative assessment of chest phantom (physical) images acquired with a computed radiography (CR) imaging system. Methods: The results of a previously published study, in which four experienced image evaluators graded computer-simulated postero-anterior chest images using a visual grading analysis scoring (VGAS) scheme, were used for the clinical image quality measurement. Contrast-to-noise ratio (CNR) and effective dose efficiency (eDE) were used as physical image quality metrics measured in a uniform chest phantom. Although optimal values of these physical metrics for chest radiography were not derived in this work, their correlation with VGAS in images acquired without an antiscatter grid across the diagnostic range of X-ray tube voltages was determined using Pearson’s correlation coefficient. Results: Clinical and physical image quality metrics increased with decreasing tube voltage. Statistically significant correlations between VGAS and CNR (R=0.87, p<0.033) and eDE (R=0.77, p<0.008) were observed. Conclusion: Medical physics experts may use the physical image quality metrics described here in quality assurance programmes and optimisation studies with a degree of confidence that they reflect the clinical image quality in chest CR images acquired without an antiscatter grid. Advances in knowledge: A statistically significant correlation has been found between the clinical and physical image quality in CR chest imaging. The results support the value of using CNR and eDE in the evaluation of quality in clinical thorax radiography. PMID:23568362
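The correlation analysis reported above can be reproduced in outline with SciPy; the values below are placeholders standing in for per-tube-voltage VGAS and CNR measurements, not the study's data.

```python
from scipy.stats import pearsonr

# Placeholder values standing in for per-voltage measurements
# (visual grading analysis scores vs. contrast-to-noise ratio).
vgas = [2.1, 2.4, 2.9, 3.3, 3.8, 4.1]
cnr = [11.0, 12.5, 14.2, 16.0, 18.1, 19.5]

r, p = pearsonr(vgas, cnr)
print(f"Pearson R = {r:.2f}, p = {p:.3f}")
```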
Shao, Feng; Lin, Weisi; Gu, Shanbo; Jiang, Gangyi; Srikanthan, Thambipillai
2013-05-01
Perceptual quality assessment is a challenging issue in 3D signal processing research. It is important to study the 3D signal directly, rather than simply extending 2D metrics to the 3D case as in some previous studies. In this paper, we propose a new perceptual full-reference quality assessment metric for stereoscopic images that considers binocular visual characteristics. The major technical contribution of this paper is that binocular perception and combination properties are taken into account in quality assessment. To be more specific, we first perform left-right consistency checks and compare matching errors between corresponding pixels in the binocular disparity calculation, and classify the stereoscopic images into non-corresponding, binocular fusion, and binocular suppression regions. Local phase and local amplitude maps are also extracted from the original and distorted stereoscopic images as features for quality assessment. Each region is then evaluated independently according to its binocular perception property, and all evaluation results are integrated into an overall score. In addition, a binocular just-noticeable-difference model is used to reflect the visual sensitivity of the binocular fusion and suppression regions. Experimental results show that, compared with relevant existing metrics, the proposed metric achieves higher consistency with subjective assessment of stereoscopic images.
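Local phase and local amplitude maps of the kind used as features here can be obtained from the response of a complex Gabor filter; the following sketch assumes arbitrary filter parameters and is not the authors' exact feature pipeline.

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_phase_amplitude(img, wavelength=8.0, sigma=4.0, theta=0.0, size=21):
    """Return local amplitude and local phase maps from a complex Gabor response."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))     # Gaussian window
    carrier = np.exp(1j * 2.0 * np.pi * xr / wavelength)     # complex sinusoid
    kernel = envelope * carrier
    response = fftconvolve(img.astype(np.float64), kernel, mode="same")
    return np.abs(response), np.angle(response)

img = np.random.rand(128, 128)
amplitude, phase = gabor_phase_amplitude(img)
print(amplitude.shape, phase.shape)
```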
Parts-based stereoscopic image assessment by learning binocular manifold color visual properties
NASA Astrophysics Data System (ADS)
Xu, Haiyong; Yu, Mei; Luo, Ting; Zhang, Yun; Jiang, Gangyi
2016-11-01
Existing stereoscopic image quality assessment (SIQA) methods are mostly based on luminance information, and color information is not sufficiently considered. In fact, color is one of the important factors that affect human visual perception, and nonnegative matrix factorization (NMF) and manifold learning are in line with human visual perception. We propose an SIQA method based on learning binocular manifold color visual properties. To be more specific, in the training phase, a feature detector is created based on NMF with manifold regularization by considering color information, which not only allows a parts-based manifold representation of an image but also manifests localized color visual properties. In the quality estimation phase, visually important regions are selected by considering different human visual attention, and feature vectors are extracted by using the feature detector. The feature similarity index is then calculated, and the parts-based manifold color feature energy (PMCFE) for each view is defined based on the color feature vectors. The final quality score is obtained by considering a binocular combination based on PMCFE. Experimental results on the LIVE I and LIVE II 3-D IQA databases demonstrate that the proposed method can achieve much higher consistency with subjective evaluations than state-of-the-art SIQA methods.
Plaza-Puche, Ana B; Alió, Jorge L; MacRae, Scott; Zheleznyak, Len; Sala, Esperanza; Yoon, Geunyoung
2015-05-01
To investigate the correlations between "ex vivo" optical bench through-focus image quality analysis and clinical visual performance in real patients, assessed by defocus curves, for a trifocal intraocular lens (IOL) and a varifocal IOL. This prospective, consecutive, nonrandomized, comparative study included a total of 64 eyes of 42 patients. Three groups of eyes were differentiated according to the IOL implanted: 22 eyes implanted with the varifocal Lentis Mplus LS-313 IOL (Oculentis GmbH, Berlin, Germany); 22 eyes implanted with the trifocal FineVision IOL (Physiol, Liege, Belgium); and 20 eyes implanted with the monofocal Acrysof SA60AT IOL (Alcon Laboratories, Inc., Fort Worth, TX). Visual outcomes and defocus curves were evaluated postoperatively. Optical bench through-focus performance was quantified by computing an image quality metric and the cross-correlation coefficient between an unaberrated reference image and captured retinal images from a model eye with a 3.0-mm artificial pupil. Statistically significant differences among the defocus curves of the different IOLs were detected for levels of defocus from -4.00 to -1.00 diopters (D) (P < .01). Significant correlations were found between the optical bench image quality metric results and the logMAR visual acuity scale in all groups (Lentis Mplus group: r = -0.97, P < .01; FineVision group: r = -0.82, P < .01; AcrySof group: r = -0.99, P < .01). Linear predictive models were obtained. Significant correlations were found between logMAR visual acuity and the image quality metric for the multifocal and monofocal IOLs analyzed. This finding enables surgeons to predict visual outcomes from optical bench analysis. Copyright 2015, SLACK Incorporated.
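The cross-correlation coefficient between an unaberrated reference image and a captured retinal image, as used on the optical bench, can be computed along these lines (a plain normalized correlation; the degraded capture here is simulated).

```python
import numpy as np

def cross_correlation_coefficient(reference, captured):
    """Normalized cross-correlation between a reference image and a captured retinal image."""
    a = reference.astype(np.float64).ravel()
    b = captured.astype(np.float64).ravel()
    a -= a.mean()
    b -= b.mean()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

ref = np.random.rand(256, 256)
captured = ref + 0.1 * np.random.rand(256, 256)   # stand-in for a degraded capture
print(cross_correlation_coefficient(ref, captured))
```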
Visual communication - Information and fidelity [of images]
NASA Technical Reports Server (NTRS)
Huck, Friedrich O.; Fales, Carl L.; Alter-Gartenberg, Rachel; Rahman, Zia-Ur; Reichenbach, Stephen E.
1993-01-01
This assessment of visual communication deals with image gathering, coding, and restoration as a whole rather than as separate and independent tasks. The approach focuses on two mathematical criteria, information and fidelity, and on their relationships to the entropy of the encoded data and to the visual quality of the restored image. Past applications of these criteria to the assessment of image coding and restoration have been limited to the link that connects the output of the image-gathering device to the input of the image-display device. By contrast, the approach presented in this paper explicitly includes the critical limiting factors that constrain image gathering and display. This extension leads to an end-to-end assessment theory of visual communication that combines optical design with digital processing.
Color extended visual cryptography using error diffusion.
Kang, InKoo; Arce, Gonzalo R; Lee, Heung-Kyu
2011-01-01
Color visual cryptography (VC) encrypts a color secret message into n color halftone image shares. Previous methods in the literature show good results for black-and-white or gray-scale VC schemes; however, they cannot be applied directly to color shares because of the different color structures. Some methods for color visual cryptography are unsatisfactory in that they produce either meaningless shares or meaningful shares with low visual quality, leading to suspicion of encryption. This paper introduces the concepts of visual information pixel (VIP) synchronization and error diffusion to attain a color visual cryptography encryption method that produces meaningful color shares with high visual quality. VIP synchronization retains the positions of pixels carrying visual information of the original images throughout the color channels, and error diffusion generates shares pleasant to the human eye. Comparisons with previous approaches show the superior performance of the new method.
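Error diffusion is the halftoning step this method builds on; a minimal Floyd-Steinberg sketch for a single color channel is shown below (it does not include the VIP synchronization or the share construction itself).

```python
import numpy as np

def floyd_steinberg(channel):
    """Binary halftone of one color channel (values in [0, 1]) by error diffusion."""
    img = channel.astype(np.float64).copy()
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            img[y, x] = new
            err = old - new
            # Distribute the quantization error to unprocessed neighbours.
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h and x > 0:
                img[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:
                img[y + 1, x] += err * 5 / 16
            if y + 1 < h and x + 1 < w:
                img[y + 1, x + 1] += err * 1 / 16
    return img

halftone = floyd_steinberg(np.random.rand(64, 64))
```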
NASA Astrophysics Data System (ADS)
Dostal, P.; Krasula, L.; Klima, M.
2012-06-01
Various image processing techniques in multimedia technology are optimized using the visual attention feature of the human visual system. Spatial non-uniformity means that different locations in an image have different importance in terms of perception: the perceived image quality depends mainly on the quality of important locations known as regions of interest (ROI). The performance of such techniques is measured by subjective evaluation or objective image quality criteria. Many state-of-the-art objective metrics are based on HVS properties: SSIM and MS-SSIM are based on image structural information, VIF on the information that the human brain can ideally gain from the reference image, and FSIM utilizes low-level features to assign a different importance to each location in the image. But none of these objective metrics utilizes an analysis of regions of interest. We address the question of whether these objective metrics can be used for effective evaluation of images reconstructed by processing techniques based on ROI analysis utilizing high-level features. In this paper, the authors show that the state-of-the-art objective metrics do not correlate well with subjective evaluation when demosaicing based on ROI analysis is used for reconstruction. The ROIs were computed from "ground truth" visual attention data. An algorithm combining two known demosaicing techniques on the basis of ROI location is proposed, reconstructing the ROI at fine quality while the rest of the image is reconstructed at low quality. The color image reconstructed by this ROI approach was compared with selected demosaicing techniques by objective criteria and subjective testing. The qualitative comparison of the objective and subjective results indicates that the state-of-the-art objective metrics are still not suitable for evaluating image processing techniques based on ROI analysis, and new criteria are needed.
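One way to quantify quality separately inside and outside the regions of interest is to evaluate the local SSIM map by region, as in the following scikit-image sketch; the images and ROI mask are placeholders, and this is not the exact evaluation used in the paper.

```python
import numpy as np
from skimage.metrics import structural_similarity

def ssim_by_region(reference, reconstructed, roi_mask):
    """Report mean SSIM separately for ROI and non-ROI pixels using the local SSIM map."""
    _, ssim_map = structural_similarity(reference, reconstructed,
                                        data_range=1.0, full=True)
    return ssim_map[roi_mask].mean(), ssim_map[~roi_mask].mean()

ref = np.random.rand(128, 128)
rec = np.clip(ref + 0.05 * np.random.randn(128, 128), 0, 1)
mask = np.zeros_like(ref, dtype=bool)
mask[32:96, 32:96] = True                      # placeholder region of interest
roi_ssim, background_ssim = ssim_by_region(ref, rec, mask)
print(roi_ssim, background_ssim)
```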
Prostate seed implant quality assessment using MR and CT image fusion.
Amdur, R J; Gladstone, D; Leopold, K A; Harris, R D
1999-01-01
After a seed implant of the prostate, computerized tomography (CT) is ideal for determining seed distribution but soft tissue anatomy is frequently not well visualized. Magnetic resonance (MR) images soft tissue anatomy well but seed visualization is problematic. We describe a method of fusing CT and MR images to exploit the advantages of both of these modalities when assessing the quality of a prostate seed implant. Eleven consecutive prostate seed implant patients were imaged with axial MR and CT scans. MR and CT images were fused in three dimensions using the Pinnacle 3.0 version of the ADAC treatment planning system. The urethra and bladder base were used to "line up" MR and CT image sets during image fusion. Alignment was accomplished using translation and rotation in the three ortho-normal planes. Accuracy of image fusion was evaluated by calculating the maximum deviation in millimeters between the center of the urethra on axial MR versus CT images. Implant quality was determined by comparing dosimetric results to previously set parameters. Image fusion was performed with a high degree of accuracy. When lining up the urethra and base of bladder, the maximum difference in axial position of the urethra between MR and CT averaged 2.5 mm (range 1.3-4.0 mm, SD 0.9 mm). By projecting CT-derived dose distributions over MR images of soft tissue structures, qualitative and quantitative evaluation of implant quality is straightforward. The image-fusion process we describe provides a sophisticated way of assessing the quality of a prostate seed implant. Commercial software makes the process time-efficient and available to any clinical practice with a high-quality treatment planning system. While we use MR to image soft tissue structures, the process could be used with any imaging modality that is able to visualize the prostatic urethra (e.g., ultrasound).
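The accuracy figure reported (per-patient maximum deviation of the urethra centre between MR and CT on matched axial slices) can be computed along these lines; the coordinates are placeholders.

```python
import numpy as np

def max_urethra_deviation(mr_centres, ct_centres):
    """Maximum Euclidean deviation (mm) between urethra centres on matched axial slices."""
    mr = np.asarray(mr_centres, dtype=float)
    ct = np.asarray(ct_centres, dtype=float)
    return float(np.max(np.linalg.norm(mr - ct, axis=1)))

# Placeholder (x, y) centres in mm for a few matched axial slices of one patient.
mr = [(10.2, 31.5), (10.4, 31.9), (10.1, 32.3)]
ct = [(10.9, 31.1), (11.8, 32.4), (10.6, 33.0)]
print(max_urethra_deviation(mr, ct))
```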
Infrared image enhancement using H(infinity) bounds for surveillance applications.
Qidwai, Uvais
2008-08-01
In this paper, two algorithms are presented to enhance infrared (IR) images. Using the autoregressive moving average model structure and H(infinity) optimal bounds, the image pixels are mapped from the IR pixel space into the normal optical image space, thus enhancing the IR image for improved visual quality. Although H(infinity)-based system identification algorithms are very common now, they are not quite suitable for real-time applications owing to their complexity. However, many variants of such algorithms are possible that can overcome this constraint. Two such algorithms have been developed and implemented in this paper. Theoretical and algorithmic results show remarkable enhancement of the acquired images. This will help in enhancing the visual quality of IR images for surveillance applications.
Application of a Noise Adaptive Contrast Sensitivity Function to Image Data Compression
NASA Astrophysics Data System (ADS)
Daly, Scott J.
1989-08-01
The visual contrast sensitivity function (CSF) has found increasing use in image compression as new algorithms optimize the display-observer interface in order to reduce the bit rate and increase the perceived image quality. In most compression algorithms, increasing the quantization intervals reduces the bit rate at the expense of introducing more quantization error, a potential image quality degradation. The CSF can be used to distribute this error as a function of spatial frequency such that it is undetectable by the human observer. Thus, instead of being mathematically lossless, the compression algorithm can be designed to be visually lossless, with the advantage of a significantly reduced bit rate. However, the CSF is strongly affected by image noise, changing in both shape and peak sensitivity. This work describes a model of the CSF that includes these changes as a function of image noise level by using the concepts of internal visual noise, and tests this model in the context of image compression with an observer study.
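A CSF can be turned into frequency-dependent quantization steps by making the step size inversely proportional to sensitivity; the sketch below uses the common Mannos-Sakrison CSF approximation rather than the noise-adaptive model developed in the paper.

```python
import numpy as np

def csf_mannos_sakrison(f):
    """Contrast sensitivity vs. spatial frequency f (cycles/degree), Mannos-Sakrison form."""
    return 2.6 * (0.0192 + 0.114 * f) * np.exp(-(0.114 * f) ** 1.1)

def quant_steps(frequencies, base_step=4.0):
    """Assign larger quantization steps where the eye is less sensitive (lower CSF)."""
    s = csf_mannos_sakrison(np.asarray(frequencies, dtype=float))
    return base_step * s.max() / np.maximum(s, 1e-6)

# A few spatial-frequency bands in cycles per degree (DC would be handled separately).
freqs = np.linspace(0.5, 30.0, 8)
print(np.round(quant_steps(freqs), 1))
```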
Paediatric x-ray radiation dose reduction and image quality analysis.
Martin, L; Ruddlesden, R; Makepeace, C; Robinson, L; Mistry, T; Starritt, H
2013-09-01
Collaboration of multiple staff groups has resulted in significant reduction in the risk of radiation-induced cancer from radiographic x-ray exposure during childhood. In this study at an acute NHS hospital trust, a preliminary audit identified initial exposure factors. These were compared with European and UK guidance, leading to the introduction of new factors that were in compliance with European guidance on x-ray tube potentials. Image quality was assessed using standard anatomical criteria scoring, and visual grading characteristics analysis assessed the impact on image quality of changes in exposure factors. This analysis determined the acceptability of gradual radiation dose reduction below the European and UK guidance levels. Chest and pelvis exposures were optimised, achieving dose reduction for each age group, with 7%-55% decrease in critical organ dose. Clinicians confirmed diagnostic image quality throughout the iterative process. Analysis of images acquired with preliminary and final exposure factors indicated an average visual grading analysis result of 0.5, demonstrating equivalent image quality. The optimisation process and final radiation doses are reported for Carestream computed radiography to aid other hospitals in minimising radiation risks to children.
Developing Matlab scripts for image analysis and quality assessment
NASA Astrophysics Data System (ADS)
Vaiopoulos, A. D.
2011-11-01
Image processing is a very helpful tool in many fields of modern science that involve digital image examination and interpretation. Processed images, however, often need to be correlated with the original image in order to ensure that the resulting image fulfills its purpose. Aside from visual examination, which is mandatory, image quality indices (such as the correlation coefficient, entropy and others) are very useful when deciding which processed image is the most satisfactory. For this reason, a single program (script) was written in the Matlab language, which automatically calculates eight indices by utilizing eight respective functions (independent function scripts). The program was tested on both fused hyperspectral (Hyperion-ALI) and multispectral (ALI, Landsat) imagery and proved to be efficient. The indices were found to be in agreement with visual examination and statistical observations.
Enhancement of low light level images using color-plus-mono dual camera.
Jung, Yong Ju
2017-05-15
In digital photography, the improvement of imaging quality in low light shooting is one of the users' needs. Unfortunately, conventional smartphone cameras that use a single, small image sensor cannot provide satisfactory quality in low light level images. A color-plus-mono dual camera that consists of two horizontally separate image sensors, which simultaneously captures both a color and mono image pair of the same scene, could be useful for improving the quality of low light level images. However, an incorrect image fusion between the color and mono image pair could also have negative effects, such as the introduction of severe visual artifacts in the fused images. This paper proposes a selective image fusion technique that applies an adaptive guided filter-based denoising and selective detail transfer to only those pixels deemed reliable with respect to binocular image fusion. We employ a dissimilarity measure and binocular just-noticeable-difference (BJND) analysis to identify unreliable pixels that are likely to cause visual artifacts during image fusion via joint color image denoising and detail transfer from the mono image. By constructing an experimental system of color-plus-mono camera, we demonstrate that the BJND-aware denoising and selective detail transfer is helpful in improving the image quality during low light shooting.
Thirteen ways to say nothing with scientific visualization
NASA Technical Reports Server (NTRS)
Globus, AL; Raible, E.
1992-01-01
Scientific visualization can be used to produce very beautiful images. Frequently, users and others not properly initiated into the mysteries of visualization research fail to appreciate the artistic qualities of these images. Scientists will frequently use our work to needlessly understand the data from which it is derived. This paper describes a number of effective techniques to confound such pernicious activity.
Towards a Visual Quality Metric for Digital Video
NASA Technical Reports Server (NTRS)
Watson, Andrew B.
1998-01-01
The advent of widespread distribution of digital video creates a need for automated methods for evaluating visual quality of digital video. This is particularly so since most digital video is compressed using lossy methods, which involve the controlled introduction of potentially visible artifacts. Compounding the problem is the bursty nature of digital video, which requires adaptive bit allocation based on visual quality metrics. In previous work, we have developed visual quality metrics for evaluating, controlling, and optimizing the quality of compressed still images. These metrics incorporate simplified models of human visual sensitivity to spatial and chromatic visual signals. The challenge of video quality metrics is to extend these simplified models to temporal signals as well. In this presentation I will discuss a number of the issues that must be resolved in the design of effective video quality metrics. Among these are spatial, temporal, and chromatic sensitivity and their interactions, visual masking, and implementation complexity. I will also touch on the question of how to evaluate the performance of these metrics.
Automated Assessment of Visual Quality of Digital Video
NASA Technical Reports Server (NTRS)
Watson, Andrew B.; Ellis, Stephen R. (Technical Monitor)
1997-01-01
The advent of widespread distribution of digital video creates a need for automated methods for evaluating visual quality of digital video. This is particularly so since most digital video is compressed using lossy methods, which involve the controlled introduction of potentially visible artifacts. Compounding the problem is the bursty nature of digital video, which requires adaptive bit allocation based on visual quality metrics. In previous work, we have developed visual quality metrics for evaluating, controlling, and optimizing the quality of compressed still images[1-4]. These metrics incorporate simplified models of human visual sensitivity to spatial and chromatic visual signals. The challenge of video quality metrics is to extend these simplified models to temporal signals as well. In this presentation I will discuss a number of the issues that must be resolved in the design of effective video quality metrics. Among these are spatial, temporal, and chromatic sensitivity and their interactions, visual masking, and implementation complexity. I will also touch on the question of how to evaluate the performance of these metrics.
Effects of task and image properties on visual-attention deployment in image-quality assessment
NASA Astrophysics Data System (ADS)
Alers, Hani; Redi, Judith; Liu, Hantao; Heynderickx, Ingrid
2015-03-01
It is important to understand how humans view images and how their behavior is affected by changes in the properties of the viewed images and the task they are given, particularly the task of scoring image quality (IQ). This is a complex behavior that holds great importance for the field of image-quality research. This work builds upon four years of research spanning three databases studying image-viewing behavior. Using eye-tracking equipment, it was possible to collect information on human viewing behavior for different kinds of stimuli and under different experimental settings. This work performs a cross-analysis of the results from all these databases using state-of-the-art similarity measures. The results strongly show that asking viewers to score the IQ significantly changes their viewing behavior. Muting the color saturation also seems to affect the saliency of the images. However, a change in IQ was not consistently found to modify visual-attention deployment, neither under free looking nor during scoring. These results are helpful in gaining a better understanding of image-viewing behavior under different conditions. They also have important implications for work that collects subjective image-quality scores from human observers.
Learning a No-Reference Quality Assessment Model of Enhanced Images With Big Data.
Gu, Ke; Tao, Dacheng; Qiao, Jun-Fei; Lin, Weisi
2018-04-01
In this paper, we investigate the problem of image quality assessment (IQA) and enhancement via machine learning. This issue has long attracted a wide range of attention in the computational intelligence and image processing communities, since, for many practical applications, e.g., object detection and recognition, raw images usually need to be appropriately enhanced to raise the visual quality (e.g., visibility and contrast). In fact, proper enhancement can noticeably improve the quality of input images, even beyond that of the originally captured images, which are generally thought to be of the best quality. In this paper, we present two main contributions. The first contribution is a new no-reference (NR) IQA model. Given an image, our quality measure first extracts 17 features through analysis of contrast, sharpness, brightness and more, and then yields a measure of visual quality using a regression module, which is learned with big-data training samples that are much larger than the relevant image data sets. The results of experiments on nine data sets validate the superiority and efficiency of our blind metric compared with typical state-of-the-art full-reference, reduced-reference and NR IQA methods. The second contribution is a robust image enhancement framework established on the basis of quality optimization. For an input image, guided by the proposed NR-IQA measure, we conduct histogram modification to successively rectify image brightness and contrast to a proper level. Thorough tests demonstrate that our framework can effectively enhance natural images, low-contrast images, low-light images, and dehazed images. The source code will be released at https://sites.google.com/site/guke198701/publications.
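A toy version of the feature-plus-regressor pipeline is sketched below; the three features and the random-forest regressor stand in for the paper's 17 features and its learned regression module, and the training data are synthetic placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def nr_features(img):
    """Toy no-reference features: brightness, global contrast, gradient-based sharpness."""
    img = img.astype(np.float64)
    gy, gx = np.gradient(img)
    return np.array([img.mean(), img.std(), np.mean(np.hypot(gx, gy))])

# Placeholder training set: images with known subjective quality scores.
rng = np.random.default_rng(0)
train_imgs = [rng.random((64, 64)) for _ in range(50)]
train_mos = rng.uniform(1, 5, 50)              # stand-in mean opinion scores

X = np.vstack([nr_features(im) for im in train_imgs])
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, train_mos)
print(model.predict(nr_features(rng.random((64, 64)))[None, :]))
```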
Fuzzy Logic-based expert system for evaluating cake quality of freeze-dried formulations.
Trnka, Hjalte; Wu, Jian X; Van De Weert, Marco; Grohganz, Holger; Rantanen, Jukka
2013-12-01
Freeze-drying of peptide and protein-based pharmaceuticals is an increasingly important field of research. The diverse nature of these compounds, limited understanding of excipient functionality, and difficult-to-analyze quality attributes together with the increasing importance of the biosimilarity concept complicate the development phase of safe and cost-effective drug products. To streamline the development phase and to make high-throughput formulation screening possible, efficient solutions for analyzing critical quality attributes such as cake quality with minimal material consumption are needed. The aim of this study was to develop a fuzzy logic system based on image analysis (IA) for analyzing cake quality. Freeze-dried samples with different visual quality attributes were prepared in well plates. Imaging solutions together with image analytical routines were developed for extracting critical visual features such as the degree of cake collapse, glassiness, and color uniformity. On the basis of the IA outputs, a fuzzy logic system for analysis of these freeze-dried cakes was constructed. After this development phase, the system was tested with a new screening well plate. The developed fuzzy logic-based system was found to give comparable quality scores with visual evaluation, making high-throughput classification of cake quality possible. © 2013 Wiley Periodicals, Inc. and the American Pharmacists Association.
Towards A Complete Model Of Photopic Visual Threshold Performance
NASA Astrophysics Data System (ADS)
Overington, I.
1982-02-01
Based on a wide variety of fragmentary evidence taken from psycho-physics, neurophysiology and electron microscopy, it has been possible to put together a very widely applicable conceptual model of photopic visual threshold performance. Such a model is so complex that a single comprehensive mathematical version is excessively cumbersome. It is, however, possible to set up a suite of related mathematical models, each of limited application but strictly known envelope of usage. Such models may be used for assessment of a variety of facets of visual performance when using display imagery, including effects and interactions of image quality, random and discrete display noise, viewing distance, image motion, etc., both for foveal interrogation tasks and for visual search tasks. The specific model may be selected from the suite according to the assessment task in hand. The paper discusses in some depth the major facets of preperceptual visual processing and their interaction with instrumental image quality and noise. It then highlights the statistical nature of visual performance before going on to consider a number of specific mathematical models of partial visual function. Where appropriate, these are compared with widely popular empirical models of visual function.
Recent progress in the development of ISO 19751
NASA Astrophysics Data System (ADS)
Farnand, Susan P.; Dalal, Edul N.; Ng, Yee S.
2006-01-01
A small number of general visual attributes have been recognized as essential in describing image quality. These include micro-uniformity, macro-uniformity, colour rendition, text and line quality, gloss, sharpness, and spatial adjacency or temporal adjacency attributes. The multiple-part International Standard discussed here was initiated by the INCITS W1 committee on the standardization of office equipment to address the need for unambiguously documented procedures and methods, widely applicable over the multiple printing technologies employed in office applications, for the appearance-based evaluation of these visually significant attributes of printed image quality [1, 2]. The resulting proposed International Standard, for which ISO/IEC WD 19751-1 [3] presents an overview and an outline of the overall procedure and common methods, is based on a proposal predicated on the idea that image quality could be described by a small set of broad-based attributes [4]. Five ad hoc teams were established (now six, since a sharpness team is in the process of being formed) to generate standards for one or more of these image quality attributes. Updates on the colour rendition, text and line quality, and gloss attributes are provided.
NASA Astrophysics Data System (ADS)
Heisler, Morgan; Lee, Sieun; Mammo, Zaid; Jian, Yifan; Ju, Myeong Jin; Miao, Dongkai; Raposo, Eric; Wahl, Daniel J.; Merkur, Andrew; Navajas, Eduardo; Balaratnasingam, Chandrakumar; Beg, Mirza Faisal; Sarunic, Marinko V.
2017-02-01
High quality visualization of the retinal microvasculature can improve our understanding of the onset and development of retinal vascular diseases, which are a major cause of visual morbidity and are increasing in prevalence. Optical Coherence Tomography Angiography (OCT-A) images are acquired over multiple seconds and are particularly susceptible to motion artifacts, which are more prevalent when imaging patients with pathology whose ability to fixate is limited. The acquisition of multiple OCT-A images sequentially can be performed for the purpose of removing motion artifact and increasing the contrast of the vascular network through averaging. Due to the motion artifacts, a robust registration pipeline is needed before feature preserving image averaging can be performed. In this report, we present a novel method for a GPU-accelerated pipeline for acquisition, processing, segmentation, and registration of multiple, sequentially acquired OCT-A images to correct for the motion artifacts in individual images for the purpose of averaging. High performance computing, blending CPU and GPU, was introduced to accelerate processing in order to provide high quality visualization of the retinal microvasculature and to enable a more accurate quantitative analysis in a clinically useful time frame. Specifically, image discontinuities caused by rapid micro-saccadic movements and image warping due to smoother reflex movements were corrected by strip-wise affine registration estimated using Scale Invariant Feature Transform (SIFT) keypoints and subsequent local similarity-based non-rigid registration. These techniques improve the image quality, increasing the value for clinical diagnosis and increasing the range of patients for whom high quality OCT-A images can be acquired.
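The SIFT-keypoint affine registration step could look roughly as follows with OpenCV (whole-image rather than strip-wise, and without the GPU acceleration or the subsequent non-rigid refinement); the file names in the commented usage are hypothetical.

```python
import cv2
import numpy as np

def affine_register(moving, fixed):
    """Estimate an affine transform from SIFT matches and warp `moving` onto `fixed`."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(moving, None)
    kp2, des2 = sift.detectAndCompute(fixed, None)
    matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # Lowe ratio test
    src = np.float32([kp1[m.queryIdx].pt for m in good])
    dst = np.float32([kp2[m.trainIdx].pt for m in good])
    A, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    return cv2.warpAffine(moving, A, (fixed.shape[1], fixed.shape[0]))

# Hypothetical usage with two sequentially acquired OCT-A en face frames:
# fixed = cv2.imread("octa_frame1.png", cv2.IMREAD_GRAYSCALE)
# moving = cv2.imread("octa_frame2.png", cv2.IMREAD_GRAYSCALE)
# registered = affine_register(moving, fixed)
```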
Hay, Peter D; Smith, Julie; O'Connor, Richard A
2016-02-01
The aim of this study was to evaluate the benefits to SPECT bone scan image quality when applying resolution recovery (RR) during image reconstruction using software provided by a third-party supplier. Bone SPECT data from 90 clinical studies were reconstructed retrospectively using software supplied independently of the gamma camera manufacturer. The current clinical datasets contain 120×10 s projections and are reconstructed using an iterative method with a Butterworth postfilter. Five further reconstructions were created with the following characteristics: 10 s projections with a Butterworth postfilter (to assess intraobserver variation); 10 s projections with a Gaussian postfilter with and without RR; and 5 s projections with a Gaussian postfilter with and without RR. Two expert observers were asked to rate image quality on a five-point scale relative to our current clinical reconstruction. Datasets were anonymized and presented in random order. The benefits of RR on image scores were evaluated using ordinal logistic regression (visual grading regression). The application of RR during reconstruction increased the probability of both observers scoring image quality as better than the current clinical reconstruction, even where the dataset contained half the normal counts. Type of reconstruction and observer were both statistically significant variables in the ordinal logistic regression model. Visual grading regression was found to be a useful method for validating the local introduction of technological developments in nuclear medicine imaging. RR, as implemented by the independent software supplier, improved bone SPECT image quality when applied during image reconstruction. In the majority of clinical cases, acquisition times for bone SPECT intended for localization purposes can safely be halved (from 10 s to 5 s projections) when RR is applied.
Shah, Benoy N; Chahal, Navtej S; Kooner, Jaspal S; Senior, Roxy
2017-05-01
Carotid intima-media thickness (IMT) and plaque are recognized markers of increased risk for cerebrovascular events. Accurate visualization of the IMT and plaques depends on image quality. Ultrasound contrast agents improve image quality during echocardiography; this study assessed whether contrast-enhanced ultrasound (CEUS) improves carotid IMT visualization and plaque detection in an asymptomatic population. Individuals free from known cardiovascular disease, enrolled in a community study, underwent B-mode and CEUS carotid imaging. Each carotid artery was divided into 10 segments (far and near walls of the proximal, mid and distal segments of the common carotid artery, the carotid bulb, and the internal carotid artery). Visualization of the IMT complex and plaque assessments were made during both B-mode and CEUS imaging for all enrolled subjects, a total of 175 individuals (mean age 65±9 years). Visualization of the IMT was significantly improved during CEUS compared with B-mode imaging, in both near and far walls of the carotid arteries (% IMT visualization during B-mode vs CEUS imaging: 61% vs 94% and 66% vs 95% for right and left carotid arteries, respectively; P<.001 for both). Additionally, a greater number of plaques were detected during CEUS imaging compared with B-mode imaging (367 plaques vs 350 plaques, P=.02). Contrast-enhanced ultrasound improves visualization of the intima-media complex, in both near and far walls, of the common and internal carotid arteries and permits greater detection of carotid plaques. Further studies are required to determine whether there is incremental clinical and prognostic benefit related to superior plaque detection by CEUS. © 2017, Wiley Periodicals, Inc.
QR images: optimized image embedding in QR codes.
Garateguy, Gonzalo J; Arce, Gonzalo R; Lau, Daniel L; Villarreal, Ofelia P
2014-07-01
This paper introduces the concept of QR images, an automatic method to embed QR codes into color images with a bounded probability of detection error. These embeddings are compatible with standard decoding applications and can be applied to any color image with full area coverage. The QR information bits are encoded into the luminance values of the image, taking advantage of the immunity of QR readers against local luminance disturbances. To mitigate the visual distortion of the QR image, the algorithm utilizes halftoning masks for the selection of modified pixels and nonlinear programming techniques to locally optimize luminance levels. A tractable model for the probability of error is developed, and models of the human visual system are considered in the quality metric used to optimize the luminance levels of the QR image. To minimize the processing time, the proposed optimization techniques consider the mechanics of a common binarization method and are designed to be amenable to parallel implementation. Experimental results show the graceful degradation of the decoding rate and the perceptual quality as a function of the embedding parameters. A visual comparison between the proposed and existing methods is presented.
A no-reference image and video visual quality metric based on machine learning
NASA Astrophysics Data System (ADS)
Frantc, Vladimir; Voronin, Viacheslav; Semenishchev, Evgenii; Minkin, Maxim; Delov, Aliy
2018-04-01
The paper presents a novel visual quality metric for lossy compressed video quality assessment. A high degree of correlation with subjective quality estimates is achieved by using a convolutional neural network trained on a large set of video-sequence/subjective-quality-score pairs. We demonstrate how our predicted no-reference quality metric correlates with qualitative opinion in a human observer study. Results are shown on the EVVQ dataset with comparison to existing approaches.
An algorithm for encryption of secret images into meaningful images
NASA Astrophysics Data System (ADS)
Kanso, A.; Ghebleh, M.
2017-03-01
Image encryption algorithms typically transform a plain image into a noise-like cipher image, whose appearance is an indication of encrypted content. Bao and Zhou [Image encryption: Generating visually meaningful encrypted images, Information Sciences 324, 2015] propose encrypting the plain image into a visually meaningful cover image. This improves security by masking existence of encrypted content. Following their approach, we propose a lossless visually meaningful image encryption scheme which improves Bao and Zhou's algorithm by making the encrypted content, i.e. distortions to the cover image, more difficult to detect. Empirical results are presented to show high quality of the resulting images and high security of the proposed algorithm. Competence of the proposed scheme is further demonstrated by means of comparison with Bao and Zhou's scheme.
Efficiency analysis of color image filtering
NASA Astrophysics Data System (ADS)
Fevralev, Dmitriy V.; Ponomarenko, Nikolay N.; Lukin, Vladimir V.; Abramov, Sergey K.; Egiazarian, Karen O.; Astola, Jaakko T.
2011-12-01
This article addresses under which conditions filtering can visibly improve image quality. The key points are the following. First, we analyze filtering efficiency for 25 test images from the color image database TID2008. This database allows assessing filter efficiency for images corrupted by different noise types at several levels of noise variance. Second, the limit of filtering efficiency is determined for independent and identically distributed (i.i.d.) additive noise and compared to the output mean square error of state-of-the-art filters. Third, component-wise and vector denoising are studied, and the latter approach is demonstrated to be more efficient. Fourth, using modern visual quality metrics, we determine for which levels of i.i.d. and spatially correlated noise the noise in original images, or the residual noise and distortions due to filtering in output images, is practically invisible. We also demonstrate that it is possible to roughly estimate whether or not the visual quality can clearly be improved by filtering.
Blind image quality assessment via probabilistic latent semantic analysis.
Yang, Xichen; Sun, Quansen; Wang, Tianshu
2016-01-01
We propose a blind image quality assessment method that is highly unsupervised and training free. The new method is based on the hypothesis that the effect caused by distortion can be expressed by certain latent characteristics. Combined with probabilistic latent semantic analysis, the latent characteristics can be discovered by applying a topic model over a visual word dictionary. Four distortion-affected features are extracted to form the visual words in the dictionary: (1) the block-based local histogram; (2) the block-based local mean value; (3) the mean value of contrast within a block; (4) the variance of contrast within a block. Based on the dictionary, the latent topics in the images can be discovered. The discrepancy between the frequency of the topics in an unfamiliar image and in a large number of pristine images is used to measure the image quality. Experimental results for four open databases show that the newly proposed method correlates well with human subjective judgments of diversely distorted images.
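The four block-based visual-word features can be extracted with a few lines of NumPy; the block size, histogram binning, and contrast definition below are illustrative choices, not necessarily those of the paper.

```python
import numpy as np

def block_features(img, block=16, bins=8):
    """Per-block features of the kind used as visual words: local histogram,
    local mean, and the mean and variance of a simple contrast measure."""
    img = img.astype(np.float64)
    feats = []
    for y in range(0, img.shape[0] - block + 1, block):
        for x in range(0, img.shape[1] - block + 1, block):
            b = img[y:y + block, x:x + block]
            hist, _ = np.histogram(b, bins=bins, range=(0.0, 1.0), density=True)
            contrast = np.abs(b - b.mean())          # deviation from the block mean
            feats.append(np.concatenate([hist,
                                         [b.mean(), contrast.mean(), contrast.var()]]))
    return np.array(feats)

words = block_features(np.random.rand(128, 128))
print(words.shape)      # one feature vector per block
```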
Retinal Image Quality Assessment for Spaceflight-Induced Vision Impairment Study
NASA Technical Reports Server (NTRS)
Vu, Amanda Cadao; Raghunandan, Sneha; Vyas, Ruchi; Radhakrishnan, Krishnan; Taibbi, Giovanni; Vizzeri, Gianmarco; Grant, Maria; Chalam, Kakarla; Parsons-Wingerter, Patricia
2015-01-01
Long-term exposure to space microgravity poses significant risks for visual impairment. Evidence suggests such vision changes are linked to cephalad fluid shifts, prompting a need to directly quantify microgravity-induced retinal vascular changes. The quality of retinal images used for such vascular remodeling analysis, however, is dependent on imaging methodology. For our exploratory study, we hypothesized that retinal images captured using fluorescein imaging methodologies would be of higher quality in comparison to images captured without fluorescein. A semi-automated image quality assessment was developed using Vessel Generation Analysis (VESGEN) software and MATLAB® image analysis toolboxes. An analysis of ten images found that the fluorescein imaging modality provided a 36% increase in overall image quality (two-tailed p=0.089) in comparison to nonfluorescein imaging techniques.
Digitizing Images for Curriculum 21: Phase II.
ERIC Educational Resources Information Center
Walker, Alice D.
Although visual databases exist for the study of art, architecture, geography, health care, and other areas, readily accessible sources of quality images are not available for engineering faculty interested in developing multimedia modules or for student projects. Presented here is a brief review of Phase I of the Engineering Visual Database…
Dynamic simulation of the effect of soft toric contact lenses movement on retinal image quality.
Niu, Yafei; Sarver, Edwin J; Stevenson, Scott B; Marsack, Jason D; Parker, Katrina E; Applegate, Raymond A
2008-04-01
To report the development of a tool designed to dynamically simulate the effect of soft toric contact lens movement on retinal image quality, initial findings on three eyes, and the next steps to be taken to improve the utility of the tool. Three eyes of two subjects wearing soft toric contact lenses were cyclopleged with 1% cyclopentolate and 2.5% phenylephrine. Four hundred wavefront aberration measurements over a 5-mm pupil were recorded during soft contact lens wear at 30 Hz using a complete ophthalmic analysis system aberrometer. Each wavefront error measurement was input into Visual Optics Laboratory (version 7.15, Sarver and Associates, Inc.) to generate a retinal simulation of a high contrast log MAR visual acuity chart. The individual simulations were combined into a single dynamic movie using a custom MatLab PsychToolbox program. Visual acuity was measured for each eye reading the movie with best cycloplegic spectacle correction through a 3-mm artificial pupil to minimize the influence of the eyes' uncorrected aberrations. Comparison of the simulated acuity was made to values recorded while the subject read unaberrated charts with contact lenses through a 5-mm artificial pupil. For one study eye, average acuity was the same as the natural contact lens viewing condition. For the other two study eyes visual acuity of the best simulation was more than one line worse than natural viewing conditions. Dynamic simulation of retinal image quality, although not yet perfect, is a promising technique for visually illustrating the optical effects on image quality because of the movements of alignment-sensitive corrections.
Visual Attention and Applications in Multimedia Technologies
Le Callet, Patrick; Niebur, Ernst
2013-01-01
Making technological advances in the field of human-machine interactions requires that the capabilities and limitations of the human perceptual system are taken into account. The focus of this report is an important mechanism of perception, visual selective attention, which is becoming more and more important for multimedia applications. We introduce the concept of visual attention and describe its underlying mechanisms. In particular, we introduce the concepts of overt and covert visual attention, and of bottom-up and top-down processing. Challenges related to modeling visual attention and their validation using ad hoc ground truth are also discussed. Examples of the usage of visual attention models in image and video processing are presented. We emphasize multimedia delivery, retargeting and quality assessment of image and video, medical imaging, and the field of stereoscopic 3D images applications. PMID:24489403
The role of extra-foveal processing in 3D imaging
NASA Astrophysics Data System (ADS)
Eckstein, Miguel P.; Lago, Miguel A.; Abbey, Craig K.
2017-03-01
The field of medical image quality has relied on the assumption that metrics of image quality for simple visual detection tasks are a reliable proxy for the more clinically realistic visual search tasks. The rank order of signal detectability across conditions often generalizes from detection to search tasks. Here, we argue that search in 3D images represents a paradigm shift in medical imaging: radiologists typically cannot exhaustively scrutinize all regions of interest with the high-acuity fovea, requiring detection of signals with extra-foveal areas (the visual periphery) of the human retina. We hypothesize that extra-foveal processing can alter the detectability of certain types of signals in medical images, with important implications for search in 3D medical images. We compare visual search of two different types of signals in 2D vs. 3D images. We show that a small microcalcification-like signal is more highly detectable than a larger mass-like signal in 2D search, but its detectability largely decreases (relative to the larger signal) in the 3D search task. Utilizing measurements of observer detectability as a function of retinal eccentricity and observer eye fixations, we can predict the pattern of results in the 2D and 3D search studies. Our findings: 1) suggest that observer performance findings with 2D search might not always generalize to 3D search; and 2) motivate the development of a new family of model observers that take into account the inhomogeneous visual processing across the retina (foveated model observers).
NASA Astrophysics Data System (ADS)
Nyman, G.; Häkkinen, J.; Koivisto, E.-M.; Leisti, T.; Lindroos, P.; Orenius, O.; Virtanen, T.; Vuori, T.
2010-01-01
Subjective image quality data for 9 image processing pipes and 8 image contents (taken with mobile phone camera, 72 natural scene test images altogether) from 14 test subjects were collected. A triplet comparison setup and a hybrid qualitative/quantitative methodology were applied. MOS data and spontaneous, subjective image quality attributes to each test image were recorded. The use of positive and negative image quality attributes by the experimental subjects suggested a significant difference between the subjective spaces of low and high image quality. The robustness of the attribute data was shown by correlating DMOS data of the test images against their corresponding, average subjective attribute vector length data. The findings demonstrate the information value of spontaneous, subjective image quality attributes in evaluating image quality at variable quality levels. We discuss the implications of these findings for the development of sensitive performance measures and methods in profiling image processing systems and their components, especially at high image quality levels.
Automated reference-free detection of motion artifacts in magnetic resonance images.
Küstner, Thomas; Liebgott, Annika; Mauch, Lukas; Martirosian, Petros; Bamberg, Fabian; Nikolaou, Konstantin; Yang, Bin; Schick, Fritz; Gatidis, Sergios
2018-04-01
Our objectives were to provide an automated method for spatially resolved detection and quantification of motion artifacts in MR images of the head and abdomen as well as a quality control of the trained architecture. T1-weighted MR images of the head and the upper abdomen were acquired in 16 healthy volunteers under rest and under motion. Images were divided into overlapping patches of different sizes achieving spatial separation. Using these patches as input data, a convolutional neural network (CNN) was trained to derive probability maps for the presence of motion artifacts. A deep visualization offers a human-interpretable quality control of the trained CNN. Results were visually assessed on probability maps and as classification accuracy on a per-patch, per-slice and per-volunteer basis. On visual assessment, a clear difference of probability maps was observed between data sets with and without motion. The overall accuracy of motion detection on a per-patch/per-volunteer basis reached 97%/100% in the head and 75%/100% in the abdomen, respectively. Automated detection of motion artifacts in MRI is feasible with good accuracy in the head and abdomen. The proposed method provides quantification and localization of artifacts as well as a visualization of the learned content. It may be extended to other anatomic areas and used for quality assurance of MR images.
Dose and diagnostic image quality in digital tomosynthesis imaging of facial bones in pediatrics
NASA Astrophysics Data System (ADS)
King, J. M.; Hickling, S.; Elbakri, I. A.; Reed, M.; Wrogemann, J.
2011-03-01
The purpose of this study was to evaluate the use of digital tomosynthesis (DT) for pediatric facial bone imaging. We compared the eye lens dose and diagnostic image quality of DT facial bone exams relative to digital radiography (DR) and computed tomography (CT), and investigated whether we could modify our current DT imaging protocol to reduce patient dose while maintaining sufficient diagnostic image quality. We measured the dose to the eye lens for all three modalities using high-sensitivity thermoluminescent dosimeters (TLDs) and an anthropomorphic skull phantom. To assess the diagnostic image quality of DT compared to the corresponding DR and CT images, we performed an observer study where the visibility of anatomical structures in the DT phantom images was rated on a four-point scale. We then acquired DT images at lower doses and had radiologists indicate whether the visibility of each structure was adequate for diagnostic purposes. For typical facial bone exams, we measured eye lens doses of 0.1-0.4 mGy for DR, 0.3-3.7 mGy for DT, and 26 mGy for CT. In general, facial bone structures were visualized better with DT than with DR, and the majority of structures were visualized well enough to avoid the need for CT. DT imaging provides high-quality diagnostic images of the facial bones while delivering significantly lower doses to the lens of the eye compared to CT. In addition, we found that by adjusting the imaging parameters, the DT effective dose can be reduced by up to 50% while maintaining sufficient image quality.
Degraded visual environment image/video quality metrics
NASA Astrophysics Data System (ADS)
Baumgartner, Dustin D.; Brown, Jeremy B.; Jacobs, Eddie L.; Schachter, Bruce J.
2014-06-01
A number of image quality metrics (IQMs) and video quality metrics (VQMs) have been proposed in the literature for evaluating techniques and systems for mitigating degraded visual environments. Some require both pristine and corrupted imagery. Others require patterned target boards in the scene. None of these metrics relates well to the task of landing a helicopter in conditions such as a brownout dust cloud. We have developed and used a variety of IQMs and VQMs related to the pilot's ability to detect hazards in the scene and to maintain situational awareness. Some of these metrics can be made agnostic to sensor type. Not only are the metrics suitable for evaluating algorithm and sensor variation, they are also suitable for choosing the most cost effective solution to improve operating conditions in degraded visual environments.
NASA Astrophysics Data System (ADS)
Tanaka, Osamu; Iida, Takayoshi; Komeda, Hisao; Tamaki, Masayoshi; Seike, Kensaku; Kato, Daiki; Yokoyama, Takamasa; Hirose, Shigeki; Kawaguchi, Daisuke
2016-12-01
Visualization of markers is critical for imaging modalities such as computed tomography (CT) and magnetic resonance imaging (MRI). However, the appropriate size of the marker varies according to the imaging technique. While a large marker is more useful for visualization in MRI, it produces artifacts on CT and causes substantial pain on administration. In contrast, a small marker reduces the artifacts on CT but hampers MRI detection. Herein, we report a new iron-containing marker and compare its utility with that of non-iron-containing markers. Five patients underwent CT/MRI fusion-based intensity-modulated radiotherapy, and the markers were placed by urologists. A Gold Anchor™ (GA; diameter, 0.28 mm; length, 10 mm) was placed using a 22G needle on the right side of the prostate. A VISICOIL™ (VIS; diameter, 0.35 mm; length, 10 mm) was placed using a 19G needle on the left side. MRI was performed using T2*-weighted imaging. Three observers evaluated and scored the visual quality of the acquired images. The mean visualization score was almost identical between the GA and the VIS in radiography and cone-beam CT (Novalis Tx). The artifacts in planning CT were slightly larger with the GA than with the VIS. Visualization of the marker on MRI was superior with the GA compared with the VIS. In conclusion, visualization quality in radiography, cone-beam CT, and planning CT was roughly equal between the GA and the VIS. However, the GA was more strongly visualized than the VIS on MRI owing to its iron content.
A new approach to subjectively assess quality of plenoptic content
NASA Astrophysics Data System (ADS)
Viola, Irene; Řeřábek, Martin; Ebrahimi, Touradj
2016-09-01
Plenoptic content is becoming increasingly popular thanks to the availability of acquisition and display devices. Thanks to image-based rendering techniques, a plenoptic content can be rendered in real time in an interactive manner allowing virtual navigation through the captured scenes. This way of content consumption enables new experiences, and therefore introduces several challenges in terms of plenoptic data processing, transmission and consequently visual quality evaluation. In this paper, we propose a new methodology to subjectively assess the visual quality of plenoptic content. We also introduce a prototype software to perform subjective quality assessment according to the proposed methodology. The proposed methodology is further applied to assess the visual quality of a light field compression algorithm. Results show that this methodology can be successfully used to assess the visual quality of plenoptic content.
High-quality compressive ghost imaging
NASA Astrophysics Data System (ADS)
Huang, Heyan; Zhou, Cheng; Tian, Tian; Liu, Dongqi; Song, Lijun
2018-04-01
We propose a high-quality compressive ghost imaging method based on projected Landweber regularization and a guided filter, which effectively reduces undersampling noise and improves resolution. In our scheme, the original object is reconstructed by decomposing the compressive reconstruction into alternating regularization and denoising steps rather than by solving a single minimization problem. Simulation and experimental results show that our method achieves high ghost imaging quality in terms of PSNR and visual observation.
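The alternation described above (a regularization step followed by a denoising step instead of one joint minimization) can be sketched with a projected Landweber iteration. This is a toy illustration only, not the authors' implementation: the simulated speckle patterns, step size, and a Gaussian filter standing in for the paper's guided filter are all assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def projected_landweber_gi(A, y, shape, n_iter=100, tau=None, denoise_sigma=0.8):
    """Toy ghost-imaging reconstruction alternating a projected Landweber
    step with a smoothing step (a Gaussian filter stands in for the guided
    filter used in the paper)."""
    if tau is None:
        tau = 1.0 / np.linalg.norm(A, 2) ** 2      # step size below 2 / ||A||^2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + tau * A.T @ (y - A @ x)            # Landweber gradient step
        x = np.clip(x, 0.0, None)                  # projection onto x >= 0
        x = gaussian_filter(x.reshape(shape), denoise_sigma).ravel()  # denoising step
    return x.reshape(shape)

# Usage with simulated speckle patterns and bucket-detector measurements
rng = np.random.default_rng(0)
shape = (32, 32)
obj = np.zeros(shape); obj[8:24, 8:24] = 1.0       # simple square object
A = rng.random((300, obj.size))                    # undersampled measurement patterns
y = A @ obj.ravel()                                # bucket detector values
rec = projected_landweber_gi(A, y, shape)
```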
View compensated compression of volume rendered images for remote visualization.
Lalgudi, Hariharan G; Marcellin, Michael W; Bilgin, Ali; Oh, Han; Nadar, Mariappan S
2009-07-01
Remote visualization of volumetric images has gained importance over the past few years in medical and industrial applications. Volume visualization is a computationally intensive process, often requiring hardware acceleration to achieve a real-time viewing experience. One remote visualization model that can accomplish this transmits rendered images from a server based on viewpoint requests from a client. For constrained server-client bandwidth, an efficient compression scheme is vital for transmitting high-quality rendered images. In this paper, we present a new view compensation scheme that utilizes the geometric relationship between viewpoints to exploit the correlation between successive rendered images. The proposed method obviates motion estimation between rendered images, enabling a significant reduction in compressor complexity. Additionally, the view compensation scheme, in conjunction with JPEG2000, performs better than AVC, the state-of-the-art video compression standard.
2D and 3D visualization methods of endoscopic panoramic bladder images
NASA Astrophysics Data System (ADS)
Behrens, Alexander; Heisterklaus, Iris; Müller, Yannick; Stehle, Thomas; Gross, Sebastian; Aach, Til
2011-03-01
While several mosaicking algorithms have been developed to compose endoscopic images of the internal urinary bladder wall into panoramic images, the quantitative evaluation of these output images in terms of geometric distortion has often not been discussed. However, visualization of the distortion level is highly desirable for objective image-based medical diagnosis. Thus, we present in this paper a method to create quality maps from the characteristics of the transformation parameters that were applied to the endoscopic images during the registration step of the mosaicking algorithm. For a global first impression, the quality maps are overlaid on the panoramic image and highlight image regions in pseudo-colors according to their local distortions. This illustration then helps surgeons to easily identify geometrically distorted structures in the panoramic image, allowing more objective medical interpretation of tumor tissue shape and size. Aside from introducing quality maps in 2-D, we also discuss a visualization method that maps panoramic images onto a 3-D spherical bladder model. Reference points are manually selected by the surgeon in the panoramic image and the 3-D model. Then the panoramic image is mapped onto the 3-D surface by the Hammer-Aitoff equal-area projection using texture mapping. Finally, the textured bladder model can be freely moved in a virtual environment for inspection. Using a two-hemisphere bladder representation, references between panoramic image regions and their corresponding spatial coordinates within the bladder model are reconstructed. This additional spatial 3-D information thus assists the surgeon in navigation, documentation, as well as surgical planning.
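The Hammer-Aitoff equal-area projection used in the texture-mapping step has a simple closed form. A minimal sketch of the forward mapping is given below (longitude/latitude in radians; the grid and resolution are illustrative assumptions, independent of the authors' software).

```python
import numpy as np

def hammer_aitoff(lon, lat):
    """Forward Hammer-Aitoff equal-area projection.
    lon in [-pi, pi], lat in [-pi/2, pi/2] (radians) -> planar (x, y)."""
    denom = np.sqrt(1.0 + np.cos(lat) * np.cos(lon / 2.0))
    x = 2.0 * np.sqrt(2.0) * np.cos(lat) * np.sin(lon / 2.0) / denom
    y = np.sqrt(2.0) * np.sin(lat) / denom
    return x, y

# Map a grid of sphere coordinates (e.g., one bladder hemisphere) to the plane
lon, lat = np.meshgrid(np.linspace(-np.pi, np.pi, 256),
                       np.linspace(-np.pi / 2, np.pi / 2, 128))
u, v = hammer_aitoff(lon, lat)
```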
NASA Astrophysics Data System (ADS)
Tingberg, Anders Martin
Optimisation in diagnostic radiology requires accurate methods for determination of patient absorbed dose and clinical image quality. Simple methods for evaluation of clinical image quality are at present scarce and this project aims at developing such methods. Two methods are used and further developed: fulfillment of image criteria (IC) and visual grading analysis (VGA). Clinical image quality descriptors are defined based on these two methods: image criteria score (ICS) and visual grading analysis score (VGAS), respectively. For both methods the basis is the Image Criteria of the "European Guidelines on Quality Criteria for Diagnostic Radiographic Images". Both methods have proved to be useful for evaluation of clinical image quality. The two methods complement each other: IC is an absolute method, which means that the quality of images of different patients and produced with different radiographic techniques can be compared with each other. The separating power of IC is, however, weaker than that of VGA. VGA is the best method for comparing images produced with different radiographic techniques and has strong separating power, but the results are relative, since the quality of an image is compared to the quality of a reference image. The usefulness of the two methods has been verified by comparing the results from both of them with results from a generally accepted method for evaluation of clinical image quality, receiver operating characteristics (ROC). The results of the comparison between the two methods based on visibility of anatomical structures and the method based on detection of pathological structures (free-response forced error) indicate that the former two methods can be used for evaluation of clinical image quality as efficiently as the method based on ROC. More studies are, however, needed for us to be able to draw a general conclusion, including studies of other organs, using other radiographic techniques, etc. The results of the experimental evaluation of clinical image quality are compared with physical quantities calculated with a theoretical model based on a voxel phantom, and correlations are found. The results demonstrate that the computer model can be a useful tool in planning further experimental studies.
Unsupervised Neural Network Quantifies the Cost of Visual Information Processing.
Orbán, Levente L; Chartier, Sylvain
2015-01-01
Untrained, "flower-naïve" bumblebees display behavioural preferences when presented with visual properties such as colour, symmetry, spatial frequency and others. Two unsupervised neural networks were implemented to understand the extent to which these models capture elements of bumblebees' unlearned visual preferences towards flower-like visual properties. The computational models, which are variants of Independent Component Analysis and Feature-Extracting Bidirectional Associative Memory, use images of test-patterns that are identical to ones used in behavioural studies. Each model works by decomposing images of floral patterns into meaningful underlying factors. We reconstruct the original floral image using the components and compare the quality of the reconstructed image to the original image. Independent Component Analysis matches behavioural results substantially better across several visual properties. These results are interpreted to support a hypothesis that the temporal and energetic costs of information processing by pollinators served as a selective pressure on floral displays: flowers adapted to pollinators' cognitive constraints.
Heiland, Max; Pohlenz, Philipp; Blessmann, Marco; Habermann, Christian R; Oesterhelweg, Lars; Begemann, Philipp C; Schmidgunst, Christian; Blake, Felix A S; Püschel, Klaus; Schmelzle, Rainer; Schulze, Dirk
2007-12-01
The aim of this study was to evaluate soft tissue image quality of a mobile cone-beam computed tomography (CBCT) scanner with an integrated flat-panel detector. Eight fresh human cadavers were used in this study. For evaluation of soft tissue visualization, CBCT data sets and corresponding computed tomography (CT) and magnetic resonance imaging (MRI) data sets were acquired. Evaluation was performed with the help of 10 defined cervical anatomical structures. The statistical analysis of the scoring results of 3 examiners revealed the CBCT images to be of inferior quality regarding the visualization of most of the predefined structures. Visualization without a significant difference was found regarding the demarcation of the vertebral bodies and the pyramidal cartilages, the arteriosclerosis of the carotids (compared with CT), and the laryngeal skeleton (compared with MRI). Regarding arteriosclerosis of the carotids compared with MRI, CBCT proved to be superior. The integration of a flat-panel detector improves soft tissue visualization using a mobile CBCT scanner.
An adaptive block-based fusion method with LUE-SSIM for multi-focus images
NASA Astrophysics Data System (ADS)
Zheng, Jianing; Guo, Yongcai; Huang, Yukun
2016-09-01
Because of the lenses' limited depth of field, digital cameras are incapable of acquiring an all-in-focus image of objects at varying distances in a scene. Multi-focus image fusion techniques can effectively solve this problem. Block-based multi-focus image fusion methods, however, often suffer from blocking artifacts. An adaptive block-based fusion method based on lifting undistorted-edge structural similarity (LUE-SSIM) is therefore put forward. In this method, the image quality metric LUE-SSIM is first proposed, which utilizes characteristics of the human visual system (HVS) and structural similarity (SSIM) to make the metric consistent with human visual perception. A particle swarm optimization (PSO) algorithm with LUE-SSIM as the objective function is used to optimize the block size for constructing the fused image. Experimental results on the LIVE image database show that LUE-SSIM outperforms SSIM for quality assessment of Gaussian defocus-blurred images. In addition, a multi-focus image fusion experiment is carried out to verify the proposed fusion method in terms of visual and quantitative evaluation. The results show that the proposed method performs better than some other block-based methods, especially in reducing blocking artifacts in the fused image, and that it effectively preserves undistorted edge details in the in-focus regions of the source images.
Shi, Yue; Queener, Hope M.; Marsack, Jason D.; Ravikumar, Ayeswarya; Bedell, Harold E.; Applegate, Raymond A.
2013-01-01
Dynamic registration uncertainty of a wavefront-guided correction with respect to the underlying wavefront error (WFE) inevitably decreases retinal image quality. A partial correction may improve average retinal image quality and visual acuity in the presence of registration uncertainties. The purpose of this paper is to (a) develop an algorithm to optimize a wavefront-guided correction that improves visual acuity given registration uncertainty and (b) test the hypothesis that these corrections provide improved visual performance in the presence of these uncertainties as compared to a full-magnitude correction or a correction by Guirao, Cox, and Williams (2002). A stochastic parallel gradient descent (SPGD) algorithm was used to optimize the partial-magnitude correction for three keratoconic eyes based on measured scleral contact lens movement. Given its high correlation with logMAR acuity, the retinal image quality metric log visual Strehl was used as a predictor of visual acuity. Predicted values of visual acuity with the optimized corrections were validated by regressing measured acuity loss against predicted loss. Measured loss was obtained from normal subjects viewing acuity charts that were degraded by the residual aberrations generated by the movement of the full-magnitude correction, the correction by Guirao, and the optimized SPGD correction. Partial-magnitude corrections optimized with an SPGD algorithm provide at least one line of improvement in average visual acuity over the full-magnitude correction and the correction by Guirao given the registration uncertainty. This study demonstrates that it is possible to improve average visual acuity by optimizing a wavefront-guided correction in the presence of registration uncertainty. PMID:23757512
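Stochastic parallel gradient descent itself is a simple perturb-and-measure loop. The sketch below shows the generic update rule only: a quadratic toy metric stands in for log visual Strehl, and the gain and perturbation amplitude are arbitrary choices, not values from the study.

```python
import numpy as np

def spgd_optimize(metric, u0, gain=0.5, perturb=0.05, n_iter=500, rng=None):
    """Generic SPGD loop: apply +/- random perturbations to the control
    vector u (e.g., partial-correction coefficients), measure the quality
    metric, and step along the estimated gradient."""
    rng = rng or np.random.default_rng(0)
    u = np.array(u0, dtype=float)
    for _ in range(n_iter):
        du = perturb * rng.choice([-1.0, 1.0], size=u.shape)  # Bernoulli perturbation
        dj = metric(u + du) - metric(u - du)                  # two-sided metric difference
        u += gain * dj * du                                   # gradient-ascent style update
    return u

# Usage with a stand-in metric (quadratic with a known optimum at u_star)
u_star = np.array([0.3, -0.1, 0.2])
toy_metric = lambda u: -np.sum((u - u_star) ** 2)
u_opt = spgd_optimize(toy_metric, np.zeros(3))
```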
Joint Prior Learning for Visual Sensor Network Noisy Image Super-Resolution
Yue, Bo; Wang, Shuang; Liang, Xuefeng; Jiao, Licheng; Xu, Caijin
2016-01-01
The visual sensor network (VSN), a new type of wireless sensor network composed of low-cost wireless camera nodes, is being applied to numerous complex visual analyses in wild environments, such as visual surveillance, object recognition, etc. However, the captured images/videos are often low resolution and noisy, and such visual data cannot be fed directly to advanced visual analysis. In this paper, we propose a joint-prior image super-resolution (JPISR) method using an expectation maximization (EM) algorithm to improve VSN image quality. Unlike conventional methods that only focus on upscaling images, JPISR alternately solves upscaling mapping and denoising in the E-step and M-step. To meet the requirement of the M-step, we introduce a novel non-local group-sparsity image filtering method to learn the explicit prior and induce the geometric duality between images to learn the implicit prior. The EM algorithm inherently combines the explicit prior and implicit prior by joint learning. Moreover, JPISR does not rely on large external datasets for training, which is much more practical in a VSN. Extensive experiments show that JPISR outperforms five state-of-the-art methods in terms of PSNR, SSIM, and visual perception. PMID:26927114
White constancy method for mobile displays
NASA Astrophysics Data System (ADS)
Yum, Ji Young; Park, Hyun Hee; Jang, Seul Ki; Lee, Jae Hyang; Kim, Jong Ho; Yi, Ji Young; Lee, Min Woo
2014-03-01
Nowadays, consumers' demands on the image quality of mobile devices are increasing as smartphones become widely used. For example, colors may be perceived differently when content is displayed under different illuminants: displayed white under an incandescent lamp is perceived as bluish, while the same content under LED light is perceived as yellowish. When the perceived white changes with the illuminant, image quality is degraded. The objective of the proposed white constancy method is to maintain consistent output colors regardless of the illuminant. Human visual experiments were performed to analyze viewers' perceptual constancy: participants were asked to choose the displayed white under a variety of illuminants. The relationship between the illuminants and the colors selected as white is modeled by a mapping function based on the results of the human visual experiments, and white constancy values for image control are determined from these predetermined functions. Experimental results indicate that the proposed method yields better image quality by keeping the display white consistent.
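As a rough illustration of the general idea of remapping display output toward a target white point, here is a minimal von Kries-style diagonal scaling sketch. The white-point values and the linear-RGB assumption are purely illustrative; they are not taken from the paper's experimentally derived mapping functions.

```python
import numpy as np

# Hypothetical illuminant white points in linear RGB (not from the paper):
# the display's native white and a warmer white viewers might prefer under
# an incandescent-like illuminant.
native_white = np.array([1.00, 1.00, 1.00])
preferred_white = np.array([1.00, 0.94, 0.82])

def adapt_white(image_rgb, src_white, dst_white):
    """Von Kries-style diagonal scaling of linear RGB channels so that
    src_white is mapped onto dst_white."""
    gains = dst_white / src_white
    return np.clip(image_rgb * gains, 0.0, 1.0)

# Usage on a dummy linear-RGB image
img = np.random.rand(4, 4, 3)
img_adapted = adapt_white(img, native_white, preferred_white)
```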
An in vitro comparison of subjective image quality of panoramic views acquired via 2D or 3D imaging.
Pittayapat, P; Galiti, D; Huang, Y; Dreesen, K; Schreurs, M; Souza, P Couto; Rubira-Bullen, I R F; Westphalen, F H; Pauwels, R; Kalema, G; Willems, G; Jacobs, R
2013-01-01
The objective of this study is to compare subjective image quality and diagnostic validity of cone-beam CT (CBCT) panoramic reformatting with digital panoramic radiographs. Four dry human skulls and two formalin-fixed human heads were scanned using nine different CBCTs, one multi-slice CT (MSCT) and one standard digital panoramic device. Panoramic views were generated from CBCTs in four slice thicknesses. Seven observers scored image quality and visibility of 14 anatomical structures. Four observers repeated the observation after 4 weeks. Digital panoramic radiographs showed significantly better visualization of anatomical structures except for the condyle. Statistical analysis of image quality showed that the 3D imaging modalities (CBCTs and MSCT) were 7.3 times more likely to receive poor scores than the 2D modality. Yet, image quality from NewTom VGi® and 3D Accuitomo 170® was almost equivalent to that of digital panoramic radiographs with respective odds ratio estimates of 1.2 and 1.6 at 95% Wald confidence limits. A substantial overall agreement amongst observers was found. Intra-observer agreement was moderate to substantial. While 2D-panoramic images are significantly better for subjective diagnosis, 2/3 of the 3D-reformatted panoramic images are moderate or good for diagnostic purposes. Panoramic reformattings from particular CBCTs are comparable to digital panoramic images concerning the overall image quality and visualization of anatomical structures. This clinically implies that a 3D-derived panoramic view can be generated for diagnosis with a recommended 20-mm slice thickness, if CBCT data is a priori available for other purposes.
Evolution of mammographic image quality in the state of Rio de Janeiro*
Villar, Vanessa Cristina Felippe Lopes; Seta, Marismary Horsth De; de Andrade, Carla Lourenço Tavares; Delamarque, Elizabete Vianna; de Azevedo, Ana Cecília Pedrosa
2015-01-01
Objective: To evaluate the evolution of mammographic image quality in the state of Rio de Janeiro on the basis of parameters measured and analyzed during health surveillance inspections in the period from 2006 to 2011. Materials and Methods: Descriptive study analyzing parameters connected with the imaging quality of 52 mammography apparatuses inspected at least twice with a one-year interval. Results: Among the 16 analyzed parameters, 7 presented more than 70% conformity, namely: compression paddle pressure intensity (85.1%), film development (72.7%), film response (72.7%), low-contrast fine detail (92.2%), tumor mass visualization (76.5%), absence of image artifacts (94.1%), and availability of mammography-specific developers (88.2%). On the other hand, relevant parameters were below 50% conformity, namely: monthly image quality control testing (28.8%) and visualization of high-contrast details related to microcalcifications (47.1%). Conclusion: The analysis revealed critical situations in terms of compliance with the health surveillance standards. Priority should be given to those mammography apparatuses that remained non-compliant at the second inspection performed within the one-year interval. PMID:25987749
Visual air quality simulation techniques
NASA Astrophysics Data System (ADS)
Molenar, John V.; Malm, William C.; Johnson, Christopher E.
Visual air quality is primarily a human perceptual phenomenon beginning with the transfer of image-forming information through an illuminated, scattering and absorbing atmosphere. Visibility, especially the visual appearance of industrial emissions or the degradation of a scenic view, is the principal atmospheric characteristic through which humans perceive air pollution, and is more sensitive to changing pollution levels than any other air pollution effect. Every attempt to quantify economic costs and benefits of air pollution has indicated that good visibility is a highly valued and desired environmental condition. Measurement programs can at best approximate the state of the ambient atmosphere at a few points in a scenic vista viewed by an observer. To fully understand the visual effect of various changes in the concentration and distribution of optically important atmospheric pollutants requires the use of aerosol and radiative transfer models. Communication of the output of these models to scientists, decision makers and the public is best done by applying modern image-processing systems to generate synthetic images representing the modeled air quality conditions. This combination of modeling techniques has been under development for the past 15 yr. Initially, visual air quality simulations were limited by a lack of computational power to simplified models depicting Gaussian plumes or uniform haze conditions. Recent explosive growth in low cost, high powered computer technology has allowed the development of sophisticated aerosol and radiative transfer models that incorporate realistic terrain, multiple scattering, non-uniform illumination, varying spatial distribution, concentration and optical properties of atmospheric constituents, and relative humidity effects on aerosol scattering properties. This paper discusses these improved models and image-processing techniques in detail. Results addressing uniform and non-uniform layered haze conditions in both urban and remote pristine areas will be presented.
Indirect gonioscopy system for imaging iridocorneal angle of eye
NASA Astrophysics Data System (ADS)
Perinchery, Sandeep M.; Fu, Chan Yiu; Baskaran, Mani; Aung, Tin; Murukeshan, V. M.
2017-08-01
Current clinical optical imaging systems do not provide sufficient structural information of trabecular meshwork (TM) in the iridocorneal angle (ICA) of the eye due to their low resolution. Increase in the intraocular pressure (IOP) can occur due to the abnormalities in TM, which could subsequently lead to glaucoma. Here, we present an indirect gonioscopy based imaging probe with significantly improved visualization of structures in the ICA including TM region, compared to the currently available tools. Imaging quality of the developed system was tested in porcine samples. Improved direct high quality visualization of the TM region through this system can be used for Laser trabeculoplasty, which is a primary treatment of glaucoma. This system is expected to be used complementary to angle photography and gonioscopy.
Underwater image enhancement through depth estimation based on random forest
NASA Astrophysics Data System (ADS)
Tai, Shen-Chuan; Tsai, Ting-Chou; Huang, Jyun-Han
2017-11-01
Light absorption and scattering in underwater environments can result in low-contrast images with a distinct color cast. This paper proposes a systematic framework for the enhancement of underwater images. Light transmission is estimated using the random forest algorithm. RGB values, luminance, color difference, blurriness, and the dark channel are treated as features in training and estimation. Transmission is calculated using an ensemble machine learning algorithm to deal with a variety of conditions encountered in underwater environments. A color compensation and contrast enhancement algorithm based on depth information was also developed with the aim of improving the visual quality of underwater images. Experimental results demonstrate that the proposed scheme outperforms existing methods with regard to subjective visual quality as well as objective measurements.
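A minimal sketch of the transmission-estimation step with a random forest regressor follows. The seven-feature layout mirrors the cues named in the abstract, but the training data, the clipping bounds, and the simple image-formation inversion are assumptions for illustration rather than the authors' exact pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Per-patch features (assumed layout): R, G, B, luminance, colour difference,
# blurriness, dark channel. Random stand-in data is used so the sketch runs;
# real training pairs would come from images with known transmission.
rng = np.random.default_rng(0)
X_train = rng.random((5000, 7))
t_train = rng.random(5000)                         # ground-truth transmission in [0, 1]

forest = RandomForestRegressor(n_estimators=100, random_state=0)
forest.fit(X_train, t_train)

# Estimate transmission for new patches, then invert a simple image
# formation model I = J * t + A * (1 - t) to recover scene radiance J.
X_new = rng.random((100, 7))
t_hat = np.clip(forest.predict(X_new), 0.1, 1.0)   # avoid division blow-up
I_patch = rng.random(100)                          # observed intensity (one channel)
A_back = 0.8                                       # assumed backscatter light
J_hat = (I_patch - A_back * (1.0 - t_hat)) / t_hat
```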
Denoising imaging polarimetry by adapted BM3D method.
Tibbs, Alexander B; Daly, Ilse M; Roberts, Nicholas W; Bull, David R
2018-04-01
In addition to the visual information contained in intensity and color, imaging polarimetry allows visual information to be extracted from the polarization of light. However, a major challenge of imaging polarimetry is image degradation due to noise. This paper investigates the mitigation of noise through denoising algorithms and compares existing denoising algorithms with a new method, based on BM3D (Block Matching 3D). This algorithm, Polarization-BM3D (PBM3D), gives visual quality superior to the state of the art across all images and noise standard deviations tested. We show that denoising polarization images using PBM3D allows the degree of polarization to be more accurately calculated by comparing it with spectral polarimetry measurements.
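For context, the degree of linear polarization mentioned above is computed from Stokes parameters. A minimal sketch assuming a standard four-angle (0°, 45°, 90°, 135°) polarimetric layout is shown below; denoising such as PBM3D would be applied to the channel images before this step.

```python
import numpy as np

def degree_of_linear_polarization(i0, i45, i90, i135):
    """Compute Stokes parameters S0, S1, S2 from four polarizer-angle images
    and return the degree of linear polarization (DoLP)."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)
    s1 = i0 - i90
    s2 = i45 - i135
    return np.sqrt(s1 ** 2 + s2 ** 2) / np.maximum(s0, 1e-9)

# Usage on synthetic channel images
rng = np.random.default_rng(0)
imgs = [rng.random((64, 64)) for _ in range(4)]
dolp = degree_of_linear_polarization(*imgs)
```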
Blind image quality assessment based on aesthetic and statistical quality-aware features
NASA Astrophysics Data System (ADS)
Jenadeleh, Mohsen; Masaeli, Mohammad Masood; Moghaddam, Mohsen Ebrahimi
2017-07-01
The main goal of image quality assessment (IQA) methods is the emulation of human perceptual image quality judgments. Therefore, the correlation between the objective scores of these methods and human perceptual scores is considered their performance metric. Human judgment of image quality implicitly includes many factors, such as aesthetics, semantics, context, and various types of visual distortion. The main idea of this paper is to use a host of features that are commonly employed in image aesthetics assessment in order to improve the accuracy of blind image quality assessment (BIQA) methods. We propose an approach that enriches the features of BIQA methods by integrating a host of aesthetics image features with natural image statistics features derived from multiple domains. The proposed features have been used to augment five different state-of-the-art BIQA methods, which use natural scene statistics features. Experiments were performed on seven benchmark image quality databases. The experimental results showed significant improvement in the accuracy of the methods.
Simultaneous Analysis and Quality Assurance for Diffusion Tensor Imaging
Lauzon, Carolyn B.; Asman, Andrew J.; Esparza, Michael L.; Burns, Scott S.; Fan, Qiuyun; Gao, Yurui; Anderson, Adam W.; Davis, Nicole; Cutting, Laurie E.; Landman, Bennett A.
2013-01-01
Diffusion tensor imaging (DTI) enables non-invasive, cyto-architectural mapping of in vivo tissue microarchitecture through voxel-wise mathematical modeling of multiple magnetic resonance imaging (MRI) acquisitions, each differently sensitized to water diffusion. DTI computations are fundamentally estimation processes and are sensitive to noise and artifacts. Despite widespread adoption in the neuroimaging community, maintaining consistent DTI data quality remains challenging given the propensity for patient motion, artifacts associated with fast imaging techniques, and the possibility of hardware changes/failures. Furthermore, the quantity of data acquired per voxel, the non-linear estimation process, and numerous potential use cases complicate traditional visual data inspection approaches. Currently, quality inspection of DTI data has relied on visual inspection and individual processing in DTI analysis software programs (e.g. DTIPrep, DTI-studio). However, recent advances in applied statistical methods have yielded several different metrics to assess noise level, artifact propensity, quality of tensor fit, variance of estimated measures, and bias in estimated measures. To date, these metrics have been largely studied in isolation. Herein, we select complementary metrics for integration into an automatic DTI analysis and quality assurance pipeline. The pipeline completes in 24 hours, stores statistical outputs, and produces a graphical summary quality analysis (QA) report. We assess the utility of this streamlined approach for empirical quality assessment on 608 DTI datasets from pediatric neuroimaging studies. The efficiency and accuracy of quality analysis using the proposed pipeline is compared with quality analysis based on visual inspection. The unified pipeline is found to save a statistically significant amount of time (over 70%) while improving the consistency of QA between a DTI expert and a pool of research associates. Projection of QA metrics to a low-dimensional manifold reveals qualitative, but clear, QA-study associations and suggests that automated outlier/anomaly detection would be feasible. PMID:23637895
A survey of infrared and visual image fusion methods
NASA Astrophysics Data System (ADS)
Jin, Xin; Jiang, Qian; Yao, Shaowen; Zhou, Dongming; Nie, Rencan; Hai, Jinjin; He, Kangjian
2017-09-01
Infrared (IR) and visual (VI) image fusion is designed to fuse multiple source images into a comprehensive image to boost imaging quality and reduce redundant information, and is widely used in various imaging equipment to improve the visual ability of humans and robots. The accurate, reliable and complementary descriptions of the scene in fused images make these techniques widely applicable in various fields. In recent years, a large number of fusion methods for IR and VI images have been proposed owing to ever-growing demands and progress in image representation methods; however, no integrated survey of this field has been published in the last several years. Therefore, we present a survey of the algorithmic developments in IR and VI image fusion. In this paper, we first characterize IR and VI image fusion applications to give an overview of the research status. Then we present a synthesized survey of the state of the art. Thirdly, the frequently used image fusion quality measures are introduced. Fourthly, we perform experiments on typical methods and provide corresponding analysis. Finally, we summarize the tendencies and challenges in IR and VI image fusion. This survey concludes that although various IR and VI image fusion methods have been proposed, there remain further improvements and potential research directions in different applications of IR and VI image fusion.
Research on assessment and improvement method of remote sensing image reconstruction
NASA Astrophysics Data System (ADS)
Sun, Li; Hua, Nian; Yu, Yanbo; Zhao, Zhanping
2018-01-01
Remote sensing image quality assessment and improvement is an important part of image processing. Generally, the use of compressive sampling theory in remote sensing imaging systems allows images to be compressed while they are sampled, which improves efficiency. In this paper, a two-dimensional principal component analysis (2DPCA) method is proposed to reconstruct the remote sensing image and improve the quality of the compressed image; the reconstruction retains the useful information of the image and suppresses noise. Then, the factors influencing remote sensing image quality are analyzed, and evaluation parameters for quantitative assessment are introduced. On this basis, the quality of the reconstructed images is evaluated and the influence of the different factors on the reconstruction is analyzed, providing meaningful reference data for enhancing the quality of remote sensing images. The experimental results show that the evaluation results agree with human visual perception, and that the proposed method has good application value in the field of remote sensing image processing.
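A minimal sketch of the 2DPCA projection-and-reconstruction idea follows. The patch size, number of components, and random stand-in data are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def two_dpca(images, n_components=8):
    """2DPCA: build the image covariance matrix from the training images
    and keep its leading eigenvectors as a projection basis."""
    mean_img = images.mean(axis=0)
    G = np.zeros((images.shape[2], images.shape[2]))
    for A in images:
        D = A - mean_img
        G += D.T @ D
    G /= len(images)
    _, eigvecs = np.linalg.eigh(G)               # eigenvalues in ascending order
    W = eigvecs[:, ::-1][:, :n_components]       # keep the top components
    return W, mean_img

def reconstruct(A, W, mean_img):
    """Project an image onto the 2DPCA basis and reconstruct it."""
    Y = (A - mean_img) @ W                       # feature matrix
    return Y @ W.T + mean_img

# Usage on random stand-in image patches
rng = np.random.default_rng(0)
patches = rng.random((50, 32, 32))
W, mu = two_dpca(patches)
rec = reconstruct(patches[0], W, mu)
```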
Information Hiding: an Annotated Bibliography
1999-04-13
parameters needed for reconstruction are enciphered using DES. The encrypted image is hidden in a cover image. [153] 074115, 'Watermarking algorithm ... authors present a block-based watermarking algorithm for digital images. The D.C.T. of the block is increased by a certain value. Quality control is ... includes evaluation of the watermark robustness and the subjective visual image quality. Two algorithms use the frequency domain while the two others use
Lundin, Margareta; Lidén, Mats; Magnuson, Anders; Mohammed, Ahmed Abdulilah; Geijer, Håkan; Andersson, Torbjörn; Persson, Anders
2012-07-01
Dual-energy computed tomography (DECT) has been shown to be useful for subtracting bone or calcium in CT angiography and offers the opportunity to produce a virtual non-contrast-enhanced (VNC) image from a series in which contrast agent has been given intravenously. High noise levels and low resolution previously limited the diagnostic value of VNC images created with the first generation of DECT. With the recent introduction of a second generation of DECT, it is possible to obtain VNC images with better image quality at, hopefully, a lower radiation dose compared to the previous generation. The aim was to compare the image quality of the single-energy series to a VNC series obtained with two generations of DECT scanners. CT of the urinary tract was used as a model. Thirty patients referred for evaluation of hematuria were examined with an older system (Somatom Definition) and another 30 patients with a new-generation system (Somatom Definition Flash). One single-energy series was obtained before and one dual-energy series after administration of intravenous contrast media. We created a VNC series from the contrast-enhanced images. Image quality was assessed with a visual grading scale evaluation of the VNC series, with the single-energy series as the gold standard. The image quality of the VNC images was rated inferior to the single-energy variant for both scanners, OR 11.5-67.3 for the Definition and OR 2.1-2.8 for the Definition Flash. Visual noise and overall quality were regarded as better with the Flash than with the Definition. Image quality of VNC images obtained with the new generation of DECT is still slightly inferior compared to native images. However, the difference is smaller with the new system than with the older one.
Evaluating wood failure in plywood shear by optical image analysis
Charles W. McMillin
1984-01-01
This exploratory study evaluates the potential of using an automatic image analysis method to measure percent wood failure in plywood shear specimens. The results suggest that this method may be as accurate as the visual method in tracking long-term gluebond quality. With further refinement, the method could lead to automated equipment replacing the subjective visual...
Appleton, P L; Quyn, A J; Swift, S; Näthke, I
2009-05-01
Visualizing overall tissue architecture in three dimensions is fundamental for validating and integrating biochemical, cell biological and visual data from less complex systems such as cultured cells. Here, we describe a method to generate high-resolution three-dimensional image data of intact mouse gut tissue. Regions of highest interest lie between 50 and 200 μm within this tissue. The quality and usefulness of three-dimensional image data of tissue with such depth is limited owing to problems associated with scattered light, photobleaching and spherical aberration. Furthermore, the highest-quality oil-immersion lenses are designed to work at a maximum distance of approximately 10-15 μm into the sample, further compounding the ability to image at high resolution deep within tissue. We show that manipulating the refractive index of the mounting media and decreasing sample opacity greatly improves image quality such that the limiting factor for a standard, inverted multi-photon microscope is determined by the working distance of the objective as opposed to detectable fluorescence. This method negates the need for mechanical sectioning of tissue and enables the routine generation of high-quality, quantitative image data that can significantly advance our understanding of tissue architecture and physiology.
[Medical Image Registration Method Based on a Semantic Model with Directional Visual Words].
Jin, Yufei; Ma, Meng; Yang, Xin
2016-04-01
Medical image registration is very challenging due to the varied imaging modalities, image quality, wide inter-patient variability, and intra-patient variability with disease progression in medical images, together with strict requirements for robustness. Inspired by semantic models, and especially by the recent tremendous progress in computer vision tasks under the bag-of-visual-words framework, we set up a novel semantic model to match medical images. Since most medical images have poor contrast, a small dynamic range, and involve only intensities, traditional visual word models do not perform very well. To benefit from the advantages of related work, we propose a novel visual word model named directional visual words, which performs better on medical images. We then apply this model to medical image registration. In our experiments, the critical anatomical structures were first manually specified by experts. We then adopted the directional visual words, a coarse-to-fine spatial pyramid search strategy, and the k-means algorithm to locate the positions of the key structures accurately. Subsequently, corresponding images are registered using the areas around these positions. The results of experiments performed on real cardiac images show that our method can achieve high registration accuracy in specific areas.
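For orientation, a generic bag-of-visual-words vocabulary built with k-means is sketched below. It uses random stand-in descriptors rather than the paper's directional visual words, so it only illustrates the quantization-and-histogram step such a pipeline relies on.

```python
import numpy as np
from sklearn.cluster import KMeans

# Stand-in local descriptors extracted from training images (random vectors
# here so the sketch runs; the paper would use its directional descriptors).
rng = np.random.default_rng(0)
descriptors = rng.random((2000, 64))

# Build the visual vocabulary with k-means.
k = 100
vocab = KMeans(n_clusters=k, n_init=10, random_state=0).fit(descriptors)

def bow_histogram(image_descriptors, vocab, k):
    """Quantize an image's descriptors against the vocabulary and return a
    normalized word histogram; spatial-pyramid matching would repeat this
    per image sub-region from coarse to fine."""
    words = vocab.predict(image_descriptors)
    hist = np.bincount(words, minlength=k).astype(float)
    return hist / max(hist.sum(), 1.0)

hist = bow_histogram(rng.random((300, 64)), vocab, k)
```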
Influence of reconstruction algorithms on image quality in SPECT myocardial perfusion imaging.
Davidsson, Anette; Olsson, Eva; Engvall, Jan; Gustafsson, Agnetha
2017-11-01
We investigated if image- and diagnostic quality in SPECT MPI could be maintained despite a reduced acquisition time adding Depth Dependent Resolution Recovery (DDRR) for image reconstruction. Images were compared with filtered back projection (FBP) and iterative reconstruction using Ordered Subsets Expectation Maximization with (IRAC) and without (IRNC) attenuation correction (AC). Stress- and rest imaging for 15 min was performed on 21 subjects with a dual head gamma camera (Infinia Hawkeye; GE Healthcare), ECG-gating with 8 frames/cardiac cycle and a low-dose CT-scan. A 9 min acquisition was generated using five instead of eight gated frames and was reconstructed with DDRR, with (IRACRR) and without AC (IRNCRR) as well as with FBP. Three experienced nuclear medicine specialists visually assessed anonymized images according to eight criteria on a four point scale, three related to image quality and five to diagnostic confidence. Statistical analysis was performed using Visual Grading Regression (VGR). Observer confidence in statements on image quality was highest for the images that were reconstructed using DDRR (P<0·01 compared to FBP). Iterative reconstruction without DDRR was not superior to FBP. Interobserver variability was significant for statements on image quality (P<0·05) but lower in the diagnostic statements on ischemia and scar. The confidence in assessing ischemia and scar was not different between the reconstruction techniques (P = n.s.). SPECT MPI collected in 9 min, reconstructed with DDRR and AC, produced better image quality than the standard procedure. The observers expressed the highest diagnostic confidence in the DDRR reconstruction. © 2016 Scandinavian Society of Clinical Physiology and Nuclear Medicine. Published by John Wiley & Sons Ltd.
Colometer: a real-time quality feedback system for screening colonoscopy.
Filip, Dobromir; Gao, Xuexin; Angulo-Rodríguez, Leticia; Mintchev, Martin P; Devlin, Shane M; Rostom, Alaa; Rosen, Wayne; Andrews, Christopher N
2012-08-28
To investigate the performance of a new software-based colonoscopy quality assessment system. The software-based system employs a novel image processing algorithm which detects the levels of image clarity, withdrawal velocity, and level of the bowel preparation in a real-time fashion from live video signal. Threshold levels of image blurriness and the withdrawal velocity below which the visualization could be considered adequate have initially been determined arbitrarily by review of sample colonoscopy videos by two experienced endoscopists. Subsequently, an overall colonoscopy quality rating was computed based on the percentage of the withdrawal time with adequate visualization (scored 1-5; 1, when the percentage was 1%-20%; 2, when the percentage was 21%-40%, etc.). In order to test the proposed velocity and blurriness thresholds, screening colonoscopy withdrawal videos from a specialized ambulatory colon cancer screening center were collected, automatically processed and rated. Quality ratings on the withdrawal were compared to the insertion in the same patients. Then, 3 experienced endoscopists reviewed the collected videos in a blinded fashion and rated the overall quality of each withdrawal (scored 1-5; 1, poor; 3, average; 5, excellent) based on 3 major aspects: image quality, colon preparation, and withdrawal velocity. The automated quality ratings were compared to the averaged endoscopist quality ratings using Spearman correlation coefficient. Fourteen screening colonoscopies were assessed. Adenomatous polyps were detected in 4/14 (29%) of the collected colonoscopy video samples. As a proof of concept, the Colometer software rated colonoscope withdrawal as having better visualization than the insertion in the 10 videos which did not have any polyps (average percent time with adequate visualization: 79% ± 5% for withdrawal and 50% ± 14% for insertion, P < 0.01). Withdrawal times during which no polyps were removed ranged from 4-12 min. The median quality rating from the automated system and the reviewers was 3.45 [interquartile range (IQR), 3.1-3.68] and 3.00 (IQR, 2.33-3.67) respectively for all colonoscopy video samples. The automated rating revealed a strong correlation with the reviewer's rating (ρ coefficient= 0.65, P = 0.01). There was good correlation of the automated overall quality rating and the mean endoscopist withdrawal speed rating (Spearman r coefficient= 0.59, P = 0.03). There was no correlation of automated overall quality rating with mean endoscopists image quality rating (Spearman r coefficient= 0.41, P = 0.15). The results from a novel automated real-time colonoscopy quality feedback system strongly agreed with the endoscopists' quality assessments. Further study is required to validate this approach.
Image quality classification for DR screening using deep learning.
FengLi Yu; Jing Sun; Annan Li; Jun Cheng; Cheng Wan; Jiang Liu
2017-07-01
The quality of input images significantly affects the outcome of automated diabetic retinopathy (DR) screening systems. Unlike the previous methods that only consider simple low-level features such as hand-crafted geometric and structural features, in this paper we propose a novel method for retinal image quality classification (IQC) that performs computational algorithms imitating the working of the human visual system. The proposed algorithm combines unsupervised features from saliency map and supervised features coming from convolutional neural networks (CNN), which are fed to an SVM to automatically detect high quality vs poor quality retinal fundus images. We demonstrate the superior performance of our proposed algorithm on a large retinal fundus image dataset and the method could achieve higher accuracy than other methods. Although retinal images are used in this study, the methodology is applicable to the image quality assessment and enhancement of other types of medical images.
NASA Astrophysics Data System (ADS)
Wan, Qianwen; Panetta, Karen; Agaian, Sos
2017-05-01
Autonomous facial recognition systems are widely used in real-life applications, such as homeland border security, law enforcement identification and authentication, and video-based surveillance analysis. Issues like low image quality, non-uniform illumination, and variations in pose and facial expression can impair the performance of recognition systems. To address the non-uniform illumination challenge, we present a novel robust autonomous facial recognition system inspired by the human visual system and based on the so-called logarithmical image visualization technique. In this paper, the proposed method, for the first time, couples the logarithmical image visualization technique with the local binary pattern to perform discriminative feature extraction for a facial recognition system. The Yale database, the Yale-B database and the ATT database are used to test accuracy and efficiency in computer simulations. The extensive computer simulations demonstrate the method's efficiency, accuracy, and robustness to illumination variation for facial recognition.
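A minimal sketch of the local binary pattern descriptor stage is given below. It omits the logarithmical image visualization preprocessing the paper couples with it, and the radius and number of sampling points are common defaults rather than the authors' settings.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray_image, radius=1, n_points=8):
    """Uniform LBP code map followed by a normalized histogram, the usual
    texture descriptor consumed by a recognition stage."""
    codes = local_binary_pattern(gray_image, n_points, radius, method="uniform")
    n_bins = n_points + 2                      # uniform patterns + one "non-uniform" bin
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins))
    return hist.astype(float) / hist.sum()

# Usage on a synthetic grayscale face-sized patch
rng = np.random.default_rng(0)
face = rng.random((64, 64))
feat = lbp_histogram(face)
```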
Valous, Nektarios A; Drakakis, Konstantinos; Sun, Da-Wen
2010-10-01
The visual texture of pork ham slices reveals information about the different qualities and perceived image heterogeneity, which is encapsulated as spatial variations in geometry and spectral characteristics. Detrended Fluctuation Analysis (DFA) detects long-range correlations in nonstationary spatial sequences, by a self-similarity scaling exponent alpha. In the current work, the aim is to investigate the usefulness of alpha, using different colour channels (R, G, B, L*, a*, b*, H, S, V, and Grey), as a quantitative descriptor of visual texture in sliced ham surface patterns for the detection of long-range correlations in unidimensional spatial series of greyscale intensity pixel values at 0 degrees , 30 degrees , 45 degrees , 60 degrees , and 90 degrees rotations. Images were acquired from three qualities of pre-sliced pork ham, typically consumed in Ireland (200 slices per quality). Results indicated that the DFA approach can be used to characterize and quantify the textural appearance of the three ham qualities, for different image orientations, with a global scaling exponent. The spatial series extracted from the ham images display long-range dependence, indicating an average behaviour around 1/f-noise. Results indicate that alpha has a universal character in quantifying the visual texture of ham surface intensity patterns, with no considerable crossovers that alter the behaviour of the fluctuations. Fractal correlation properties can thus be a useful metric for capturing information embedded in the visual texture of hams. Copyright (c) 2010 The American Meat Science Association. Published by Elsevier Ltd. All rights reserved.
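The scaling exponent alpha comes from standard detrended fluctuation analysis. A minimal sketch is shown below; the window sizes and the synthetic test series are arbitrary choices for illustration, not the study's settings.

```python
import numpy as np

def dfa_alpha(series, scales=(8, 16, 32, 64, 128)):
    """Detrended fluctuation analysis: return the scaling exponent alpha of a
    1-D sequence (e.g., grayscale pixel values sampled along one image direction)."""
    y = np.cumsum(series - np.mean(series))              # integrated profile
    fluct = []
    for n in scales:
        n_seg = len(y) // n
        f2 = []
        for i in range(n_seg):
            seg = y[i * n:(i + 1) * n]
            t = np.arange(n)
            trend = np.polyval(np.polyfit(t, seg, 1), t)  # local linear detrending
            f2.append(np.mean((seg - trend) ** 2))
        fluct.append(np.sqrt(np.mean(f2)))
    slope, _ = np.polyfit(np.log(scales), np.log(fluct), 1)
    return slope

# Usage on a synthetic intensity profile
# (alpha ~ 0.5 for white noise, ~ 1.0 for 1/f-like behaviour)
rng = np.random.default_rng(0)
alpha = dfa_alpha(rng.random(2048))
```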
Image-adapted visually weighted quantization matrices for digital image compression
NASA Technical Reports Server (NTRS)
Watson, Andrew B. (Inventor)
1994-01-01
A method for performing image compression that eliminates redundant and invisible image components is presented. The image compression uses a Discrete Cosine Transform (DCT) and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The present invention adapts or customizes the quantization matrix to the image being compressed. The quantization matrix comprises visual masking by luminance and contrast techniques and by an error pooling technique all resulting in a minimum perceptual error for any given bit rate, or minimum bit rate for a given perceptual error.
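A minimal sketch of quantizing DCT coefficients with a quantization matrix follows. The fixed example matrix is a placeholder only; the invention instead derives a visually weighted, image-adapted matrix from luminance/contrast masking and error pooling.

```python
import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):
    return dct(dct(block, norm="ortho", axis=0), norm="ortho", axis=1)

def idct2(block):
    return idct(idct(block, norm="ortho", axis=0), norm="ortho", axis=1)

# Stand-in 8x8 quantization matrix (coarser quantization at higher frequencies).
Q = np.full((8, 8), 16.0) + 4.0 * (np.arange(8)[:, None] + np.arange(8)[None, :])

def quantize_block(block, Q):
    """DCT a block, quantize each coefficient by its matrix entry,
    then dequantize and invert to inspect the reconstruction."""
    coeffs = dct2(block.astype(float))
    q = np.round(coeffs / Q)              # quantization controls rate and perceived quality
    return idct2(q * Q)

rng = np.random.default_rng(0)
block = rng.integers(0, 256, size=(8, 8))
recon = quantize_block(block, Q)
```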
A method for improved visual landscape compatibility of mobile home park
Daniel R. Jones
1979-01-01
This paper is a description of a research effort directed to improving the visual image of mobile home parks in the landscape. The study is an application of existing methodologies for measuring scenic quality and visual landscape compatibility to an unsolved problem. The paper summarizes two major areas of investigation: regional location factors based on visual...
da Silva, Kassy Gomes; de Andrade, Carla; Sotomaior, Cristina Santos
2017-07-17
Presence of significant quantities of gas in the intestines may hinder a proper conduction of abdominal ultrasonography. In humans, preparatory techniques are used to solve this, but measures to avoid ultrasonographic complications due to intestinal gas in rabbits have not been reported. The objective of this study was to evaluate the influence of fasting and simethicone administered orally on the quality of ultrasonographic images of the gallbladder, kidneys, and jejunum in adult New Zealand White (NZW) rabbits. A total of 28 adult NZW rabbits were included in a crossover design study, involving four groups: F: fasting for 4-6 h before the examination; FS: fasting and application of simethicone (20 mg/kg, orally) 20 to 30 min before the examination; S: application of simethicone 20-30 min before the examination without fasting; and C: controls without fasting and no application of simethicone. Evaluation of the ultrasonographic images was done in terms of the percentage of visualization of each organ and image quality, using a 3-point scoring system (unacceptable, acceptable, or excellent). The kidneys and the gallbladder were visualized at an equal frequency in all groups, while the jejunum was visualized more frequently in the FS group. The image quality scores for the gallbladder, right kidney, and left kidney were similar for all groups, but for the jejunum, a higher number of images with acceptable scores was found within the FS group.
A GPU-based mipmapping method for water surface visualization
NASA Astrophysics Data System (ADS)
Li, Hua; Quan, Wei; Xu, Chao; Wu, Yan
2018-03-01
Visualization of water surfaces is a hot topic in computer graphics. In this paper, we present a fast method to generate a wide expanse of water surface with good image quality both near and far from the viewpoint. The method uses a uniform mesh and fractal Perlin noise to model the water surface. Mipmapping is applied to the surface textures, which adjusts texture resolution according to the distance from the viewpoint and reduces computational cost. Lighting effects are computed based on shadow mapping, Snell's law and the Fresnel term. The rendering pipeline uses a CPU-GPU shared memory structure, which improves rendering efficiency. Experimental results show that our approach visualizes the water surface with good image quality at real-time frame rates.
He, Longjun; Ming, Xing; Liu, Qian
2014-04-01
With computing capability and display size growing, the mobile device has been used as a tool to help clinicians view patient information and medical images anywhere and anytime. However, for direct interactive 3D visualization, which plays an important role in radiological diagnosis, the mobile device cannot provide a satisfactory quality of experience for radiologists. This paper presents a medical system that retrieves medical images from the picture archiving and communication system (PACS) on the mobile device over the wireless network. In the proposed application, the mobile device obtains patient information and medical images through a proxy server connected to the PACS server. Meanwhile, the proxy server integrates a range of 3D visualization techniques, including maximum intensity projection, multi-planar reconstruction and direct volume rendering, to provide shape, brightness, depth and location information generated from the original sectional images for radiologists. Furthermore, an algorithm that changes remote rendering parameters automatically to adapt to the network status is employed to improve the quality of experience. Finally, performance issues regarding the remote 3D visualization of medical images over the wireless network in the proposed application are also discussed. The results demonstrate that the proposed medical application can provide a smooth interactive experience over WLAN and 3G networks.
Quantitative evaluation of 3D images produced from computer-generated holograms
NASA Astrophysics Data System (ADS)
Sheerin, David T.; Mason, Ian R.; Cameron, Colin D.; Payne, Douglas A.; Slinger, Christopher W.
1999-08-01
Advances in computing and optical modulation techniques now make it possible to anticipate the generation of near real-time, reconfigurable, high quality, three-dimensional images using holographic methods. Computer generated holography (CGH) is the only technique which holds promise of producing synthetic images having the full range of visual depth cues. These realistic images will be viewable by several users simultaneously, without the need for headtracking or special glasses. Such a data visualization tool will be key to speeding up the manufacture of new commercial and military equipment by negating the need for the production of physical 3D models in the design phase. DERA Malvern has been involved in designing and testing fixed CGH in order to understand the connection between the complexity of the CGH, the algorithms used to design them, the processes employed in their implementation and the quality of the images produced. This poster describes results from CGH containing up to 10^8 pixels. The methods used to evaluate the reconstructed images are discussed and quantitative measures of image fidelity made. An understanding of the effect of the various system parameters upon final image quality enables a study of the possible system trade-offs to be carried out. Such an understanding of CGH production and resulting image quality is key to effective implementation of a reconfigurable CGH system currently under development at DERA.
CellProfiler Tracer: exploring and validating high-throughput, time-lapse microscopy image data.
Bray, Mark-Anthony; Carpenter, Anne E
2015-11-04
Time-lapse analysis of cellular images is an important and growing need in biology. Algorithms for cell tracking are widely available; what researchers have been missing is a single open-source software package to visualize standard tracking output (from software like CellProfiler) in a way that allows convenient assessment of track quality, especially for researchers tuning tracking parameters for high-content time-lapse experiments. This makes quality assessment and algorithm adjustment a substantial challenge, particularly when dealing with hundreds of time-lapse movies collected in a high-throughput manner. We present CellProfiler Tracer, a free and open-source tool that complements the object tracking functionality of the CellProfiler biological image analysis package. Tracer allows multi-parametric morphological data to be visualized on object tracks, providing visualizations that have already been validated within the scientific community for time-lapse experiments, and combining them with simple graph-based measures for highlighting possible tracking artifacts. CellProfiler Tracer is a useful, free tool for inspection and quality control of object tracking data, available from http://www.cellprofiler.org/tracer/.
Building large mosaics of confocal endomicroscopic images using visual servoing.
Rosa, Benoît; Erden, Mustafa Suphi; Vercauteren, Tom; Herman, Benoît; Szewczyk, Jérôme; Morel, Guillaume
2013-04-01
Probe-based confocal laser endomicroscopy provides real-time microscopic images of tissues contacted by a small probe that can be inserted in vivo through a minimally invasive access. Mosaicking consists in sweeping the probe in contact with a tissue to be imaged while collecting the video stream, and process the images to assemble them in a large mosaic. While most of the literature in this field has focused on image processing, little attention has been paid so far to the way the probe motion can be controlled. This is a crucial issue since the precision of the probe trajectory control drastically influences the quality of the final mosaic. Robotically controlled motion has the potential of providing enough precision to perform mosaicking. In this paper, we emphasize the difficulties of implementing such an approach. First, probe-tissue contacts generate deformations that prevent from properly controlling the image trajectory. Second, in the context of minimally invasive procedures targeted by our research, robotic devices are likely to exhibit limited quality of the distal probe motion control at the microscopic scale. To cope with these problems visual servoing from real-time endomicroscopic images is proposed in this paper. It is implemented on two different devices (a high-accuracy industrial robot and a prototype minimally invasive device). Experiments on different kinds of environments (printed paper and ex vivo tissues) show that the quality of the visually servoed probe motion is sufficient to build mosaics with minimal distortion in spite of disturbances.
Visually Lossless JPEG 2000 for Remote Image Browsing
Oh, Han; Bilgin, Ali; Marcellin, Michael
2017-01-01
Image sizes have increased exponentially in recent years. The resulting high-resolution images are often viewed via remote image browsing. Zooming and panning are desirable features in this context, which result in disparate spatial regions of an image being displayed at a variety of (spatial) resolutions. When an image is displayed at a reduced resolution, the quantization step sizes needed for visually lossless quality generally increase. This paper investigates the quantization step sizes needed for visually lossless display as a function of resolution, and proposes a method that effectively incorporates the resulting (multiple) quantization step sizes into a single JPEG2000 codestream. This codestream is JPEG2000 Part 1 compliant and allows for visually lossless decoding at all resolutions natively supported by the wavelet transform as well as arbitrary intermediate resolutions, using only a fraction of the full-resolution codestream. When images are browsed remotely using the JPEG2000 Interactive Protocol (JPIP), the required bandwidth is significantly reduced, as demonstrated by extensive experimental results. PMID:28748112
NASA Astrophysics Data System (ADS)
Zhang, Wenlan; Luo, Ting; Jiang, Gangyi; Jiang, Qiuping; Ying, Hongwei; Lu, Jing
2016-06-01
Visual comfort assessment (VCA) for stereoscopic images is a particularly significant yet challenging task in the field of 3D quality of experience research. Although subjective assessment by human observers is known as the most reliable way to evaluate experienced visual discomfort, it is time-consuming and non-systematic. Therefore, it is of great importance to develop objective VCA approaches that can faithfully predict the degree of visual discomfort as human beings do. In this paper, a novel two-stage objective VCA framework is proposed. The main contribution of this study is that the important visual attention mechanism of the human visual system is incorporated for visual comfort-aware feature extraction. Specifically, in the first stage, we construct an adaptive 3D visual saliency detection model to derive the saliency map of a stereoscopic image, and then a set of saliency-weighted disparity statistics is computed and combined to form a single feature vector representing the stereoscopic image in terms of visual comfort. In the second stage, this feature vector is fused into a single visual comfort score by applying a random forest algorithm. Experimental results on two benchmark databases confirm the superior performance of the proposed approach.
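To make the two-stage pipeline above concrete, the following Python sketch computes a handful of saliency-weighted disparity statistics and regresses comfort scores with a random forest. The disparity and saliency maps, the particular statistics and the forest settings are illustrative assumptions rather than the authors' implementation, and synthetic data stand in for a benchmark database.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    def comfort_features(disparity, saliency):
        """Saliency-weighted disparity statistics for one stereoscopic image."""
        w = saliency / (saliency.sum() + 1e-12)          # normalized saliency weights
        mean = np.sum(w * disparity)                     # saliency-weighted mean disparity
        var = np.sum(w * (disparity - mean) ** 2)        # saliency-weighted variance
        p95 = np.percentile(disparity, 95)               # near-maximum disparity
        return np.array([mean, var, p95, disparity.max() - disparity.min()])

    # Stage 2: map feature vectors to subjective comfort scores with a random forest.
    rng = np.random.default_rng(0)
    maps = [(rng.normal(0, 5, (64, 64)), rng.random((64, 64))) for _ in range(50)]
    mos = rng.uniform(1, 5, 50)                          # stand-in subjective comfort scores
    X = np.vstack([comfort_features(d, s) for d, s in maps])
    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, mos)
    print(model.predict(X[:3]))                          # predicted visual comfort scores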
2013-01-01
Background Intravascular ultrasound (IVUS) is a standard imaging modality for identification of plaque formation in the coronary and peripheral arteries. Volumetric three-dimensional (3D) IVUS visualization provides a powerful tool to overcome the limited comprehensive information of 2D IVUS in terms of complex spatial distribution of arterial morphology and acoustic backscatter information. Conventional 3D IVUS techniques provide sub-optimal visualization of arterial morphology or lack acoustic information concerning arterial structure due in part to low quality of image data and the use of pixel-based IVUS image reconstruction algorithms. In the present study, we describe a novel volumetric 3D IVUS reconstruction algorithm to utilize IVUS signal data and a shape-based nonlinear interpolation. Methods We developed an algorithm to convert a series of IVUS signal data into a fully volumetric 3D visualization. Intermediary slices between original 2D IVUS slices were generated utilizing the natural cubic spline interpolation to consider the nonlinearity of both vascular structure geometry and acoustic backscatter in the arterial wall. We evaluated differences in image quality between the conventional pixel-based interpolation and the shape-based nonlinear interpolation methods using both virtual vascular phantom data and in vivo IVUS data of a porcine femoral artery. Volumetric 3D IVUS images of the arterial segment reconstructed using the two interpolation methods were compared. Results In vitro validation and in vivo comparative studies with the conventional pixel-based interpolation method demonstrated more robustness of the shape-based nonlinear interpolation algorithm in determining intermediary 2D IVUS slices. Our shape-based nonlinear interpolation demonstrated improved volumetric 3D visualization of the in vivo arterial structure and more realistic acoustic backscatter distribution compared to the conventional pixel-based interpolation method. Conclusions This novel 3D IVUS visualization strategy has the potential to improve ultrasound imaging of vascular structure information, particularly atheroma determination. Improved volumetric 3D visualization with accurate acoustic backscatter information can help with ultrasound molecular imaging of atheroma component distribution. PMID:23651569
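The following Python sketch illustrates the core interpolation step named above, generating intermediary slices along the pullback axis with a natural cubic spline (scipy's CubicSpline with bc_type='natural'). It operates on a synthetic frame stack for simplicity; the authors' shape-based variant interpolates structural representations of the vessel rather than raw pixel values.

    import numpy as np
    from scipy.interpolate import CubicSpline

    # Stack of acquired 2D IVUS frames along the pullback (z) axis; synthetic stand-in.
    frames = np.random.rand(20, 128, 128)              # (n_slices, rows, cols)
    z_acquired = np.arange(frames.shape[0])            # original slice positions

    # Natural cubic spline along z (zero second derivative at the end slices).
    spline = CubicSpline(z_acquired, frames, axis=0, bc_type='natural')

    z_dense = np.linspace(0, frames.shape[0] - 1, 4 * frames.shape[0])
    volume = spline(z_dense)                           # intermediary slices filled in
    print(volume.shape)                                # (80, 128, 128)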
Bunck, Alexander C; Jüttner, Alena; Kröger, Jan Robert; Burg, Matthias C; Kugel, Harald; Niederstadt, Thomas; Tiemann, Klaus; Schnackenburg, Bernhard; Crelier, Gerard R; Heindel, Walter; Maintz, David
2012-09-01
4D phase contrast flow imaging is increasingly used to study the hemodynamics in various vascular territories and pathologies. The aim of this study was to assess the feasibility and validity of MRI based 4D phase contrast flow imaging for the evaluation of in-stent blood flow in 17 commonly used peripheral stents. 17 different peripheral stents were implanted into a MR compatible flow phantom. In-stent visibility, maximal velocity and flow visualization were assessed and estimates of in-stent patency obtained from 4D phase contrast flow data sets were compared to a conventional 3D contrast-enhanced magnetic resonance angiography (CE-MRA) as well as 2D PC flow measurements. In all but 3 of the tested stents time-resolved 3D particle traces could be visualized inside the stent lumen. Quality of 4D flow visualization and CE-MRA images depended on stent type and stent orientation relative to the magnetic field. Compared to the visible lumen area determined by 3D CE-MRA, estimates of lumen patency derived from 4D flow measurements were significantly higher and less dependent on stent type. A higher number of stents could be assessed for in-stent patency by 4D phase contrast flow imaging (n=14) than by 2D phase contrast flow imaging (n=10). 4D phase contrast flow imaging in peripheral vascular stents is feasible and appears advantageous over conventional 3D contrast-enhanced MR angiography and 2D phase contrast flow imaging. It allows for in-stent flow visualization and flow quantification with varying quality depending on stent type. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
A CNN based neurobiology inspired approach for retinal image quality assessment.
Mahapatra, Dwarikanath; Roy, Pallab K; Sedai, Suman; Garnavi, Rahil
2016-08-01
Retinal image quality assessment (IQA) algorithms use different hand-crafted features to train classifiers without considering the working of the human visual system (HVS), which plays an important role in IQA. We propose a convolutional neural network (CNN) based approach that determines image quality using the underlying principles behind the working of the HVS. CNNs provide a principled approach to feature learning and hence higher accuracy in decision making. Experimental results demonstrate the superior performance of our proposed algorithm over competing methods.
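A minimal sketch of a CNN of this kind, written in PyTorch, is given below; the layer sizes, input resolution and two-class (gradable versus ungradable) output are illustrative assumptions rather than the architecture used in the paper.

    import torch
    import torch.nn as nn

    class RetinalIQANet(nn.Module):
        """Small convolutional classifier mapping a fundus image to a quality label."""
        def __init__(self, n_classes=2):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            )
            self.classifier = nn.Linear(64, n_classes)

        def forward(self, x):
            return self.classifier(self.features(x).flatten(1))

    net = RetinalIQANet()
    logits = net(torch.randn(4, 3, 224, 224))          # a batch of 4 RGB fundus images
    print(logits.shape)                                # torch.Size([4, 2])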
Overcoming Presbyopia by Manipulating the Eyes' Optics
NASA Astrophysics Data System (ADS)
Zheleznyak, Leonard A.
Presbyopia, the age-related loss of accommodation, is a visual condition affecting all adults over the age of 45 years. In presbyopia, individuals lose the ability to focus on nearby objects, due to a lifelong growth and stiffening of the eye's crystalline lens. This leads to poor near visual performance and affects patients' quality of life. The objective of this thesis is aimed towards the correction of presbyopia and can be divided into four aims. First, we examined the characteristics and limitations of currently available strategies for the correction of presbyopia. A natural-view wavefront sensor was used to objectively measure the accommodative ability of patients implanted with an accommodative intraocular lens (IOL). Although these patients had little accommodative ability based on changes in power, pupil miosis and higher order aberrations led to an improvement in through-focus retinal image quality in some cases. To quantify the through-focus retinal image quality of accommodative and multifocal IOLs directly, an adaptive optics (AO) IOL metrology system was developed. Using this system, the impact of corneal aberrations in regard to presbyopia-correcting IOLs was assessed, providing an objective measure of through-focus retinal image quality and practical guidelines for patient selection. To improve upon existing multifocal designs, we investigated retinal image quality metrics for the prediction of through-focus visual performance. The preferred metric was based on the fidelity of an image convolved with an aberrated point spread function. Using this metric, we investigated the potential of higher order aberrations and pupil amplitude apodization to increase the depth of focus of the presbyopic eye. Thirdly, we investigated modified monovision, a novel binocular approach to presbyopia correction using a binocular AO vision simulator. In modified monovision, different magnitudes of defocus and spherical aberration are introduced to each eye, thereby taking advantage of the binocular visual system. Several experiments using the binocular AO vision simulator found modified monovision led to significant improvements in through-focus visual performance, binocular summation and stereoacuity, as compared to traditional monovision. Finally, we addressed neural factors, affecting visual performance in modified monovision, such as ocular dominance and neural plasticity. We found that pairing modified monovision with a vision training regimen may further improve visual performance beyond the limits set by optics via neural plasticity. This opens the door to an exciting new avenue of vision correction to accompany optical interventions. The research presented in this thesis offers important guidelines for the clinical and scientific communities. Furthermore, the techniques described herein may be applied to other fields of ophthalmology, such as childhood myopia progression.
Quality assessment for color reproduction using a blind metric
NASA Astrophysics Data System (ADS)
Bringier, B.; Quintard, L.; Larabi, M.-C.
2007-01-01
This paper deals with image quality assessment, a field that nowadays plays an important role in various image processing applications. A number of objective image quality metrics, which may or may not correlate with subjective quality, have been developed during the last decade. Two categories of metrics can be distinguished: full-reference and no-reference. A full-reference metric evaluates the distortion introduced to an image with regard to a reference. A no-reference approach attempts to model the judgement of image quality in a blind way. Unfortunately, a universal image quality model is not on the horizon, and empirical models established on psychophysical experimentation are generally used. In this paper, we focus only on the second category to evaluate the quality of color reproduction, introducing a blind metric based on human visual system modeling. The objective results are validated by single-media and cross-media subjective tests.
Image quality assessment for CT used on small animals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cisneros, Isabela Paredes, E-mail: iparedesc@unal.edu.co; Agulles-Pedrós, Luis, E-mail: lagullesp@unal.edu.co
Image acquisition on a CT scanner is nowadays necessary in almost any kind of medical study. Its purpose, to produce anatomical images with the best achievable quality, implies the highest diagnostic radiation exposure to patients. Image quality can be measured quantitatively based on parameters such as noise, uniformity and resolution. This measure allows the determination of optimal parameters of operation for the scanner in order to get the best diagnostic image. A human Philips CT scanner is the first one intended exclusively for veterinary use in Colombia. The aim of this study was to measure the CT image quality parameters using an acrylic phantom and then, using the computational tool MATLAB, determine these parameters as a function of current value and window of visualization, in order to reduce dose delivery while keeping the appropriate image quality.
Image quality assessment for CT used on small animals
NASA Astrophysics Data System (ADS)
Cisneros, Isabela Paredes; Agulles-Pedrós, Luis
2016-07-01
Image acquisition on a CT scanner is nowadays necessary in almost any kind of medical study. Its purpose, to produce anatomical images with the best achievable quality, implies the highest diagnostic radiation exposure to patients. Image quality can be measured quantitatively based on parameters such as noise, uniformity and resolution. This measure allows the determination of optimal parameters of operation for the scanner in order to get the best diagnostic image. A human Philips CT scanner is the first one intended exclusively for veterinary use in Colombia. The aim of this study was to measure the CT image quality parameters using an acrylic phantom and then, using the computational tool MATLAB, determine these parameters as a function of current value and window of visualization, in order to reduce dose delivery while keeping the appropriate image quality.
Dosimetry and image quality assessment in a direct radiography system
Oliveira, Bruno Beraldo; de Oliveira, Marcio Alves; Paixão, Lucas; Teixeira, Maria Helena Araújo; Nogueira, Maria do Socorro
2014-01-01
Objective To evaluate the mean glandular dose with a solid state detector and the image quality in a direct radiography system, utilizing phantoms. Materials and Methods Irradiations were performed with automatic exposure control and polymethyl methacrylate slabs with different thicknesses to calculate glandular dose values. The image quality was evaluated by means of the structures visualized on the images of the phantoms. Results Considering the uncertainty of the measurements, the mean glandular dose results are in agreement with the values provided by the equipment and with internationally adopted reference levels. Results obtained from images of the phantoms were in agreement with the reference values. Conclusion The present study contributes to verify the equipment conformity as regards dose values and image quality. PMID:25741119
Clinical evaluation of watermarked medical images.
Zain, Jasni M; Fauzi, Abdul M; Aziz, Azian A
2006-01-01
Digital watermarking of medical images provides security to the images. The purpose of this study was to see whether digitally watermarked images changed clinical diagnoses when assessed by radiologists. We embedded a 256-bit watermark in various medical images in the region of non-interest (RONI) and 480K bits in both the region of interest (ROI) and RONI. Our results showed that watermarking medical images did not alter clinical diagnoses. In addition, there was no difference in image quality when visually assessed by the medical radiologists. We therefore concluded that digital watermarking of medical images is safe in terms of preserving image quality for clinical purposes.
DCTune Perceptual Optimization of Compressed Dental X-Rays
NASA Technical Reports Server (NTRS)
Watson, Andrew B.; Null, Cynthia H. (Technical Monitor)
1996-01-01
In current dental practice, x-rays of completed dental work are often sent to the insurer for verification. It is faster and cheaper to transmit digital scans of the x-rays instead. Further economies result if the images are sent in compressed form. DCTune is a technology for optimizing DCT (discrete cosine transform) quantization matrices to yield maximum perceptual quality for a given bit-rate, or minimum bit-rate for a given perceptual quality. In addition, the technology provides a means of setting the perceptual quality of compressed imagery in a systematic way. The purpose of this research was, with respect to dental x-rays, 1) to verify the advantage of DCTune over standard JPEG (Joint Photographic Experts Group) compression, 2) to verify the quality control feature of DCTune, and 3) to discover regularities in the optimized matrices of a set of images. We optimized matrices for a total of 20 images at two resolutions (150 and 300 dpi) and four bit-rates (0.25, 0.5, 0.75, 1.0 bits/pixel), and examined structural regularities in the resulting matrices. We also conducted psychophysical studies (1) to discover the DCTune quality level at which the images became 'visually lossless,' and (2) to rate the relative quality of DCTune and standard JPEG images at various bit-rates. Results include: (1) At both resolutions, DCTune quality is a linear function of bit-rate. (2) DCTune quantization matrices for all images at all bit-rates and resolutions are modeled well by an inverse Gaussian, with parameters of amplitude and width. (3) As bit-rate is varied, optimal values of both amplitude and width covary in an approximately linear fashion. (4) Both amplitude and width vary in a systematic and orderly fashion with either bit-rate or DCTune quality; simple mathematical functions serve to describe these relationships. (5) In going from 150 to 300 dpi, amplitude parameters are substantially lower and widths larger at corresponding bit-rates or qualities. (6) Visually lossless compression occurs at a DCTune quality value of about 1. (7) At 0.25 bits/pixel, comparative ratings give DCTune a substantial advantage over standard JPEG. As visually lossless bit-rates are approached, this advantage of necessity diminishes. We have concluded that DCTune optimized quantization matrices provide better visual quality than standard JPEG. Meaningful quality levels may be specified by means of the DCTune metric. Optimized matrices are very similar across the class of dental x-rays, suggesting the possibility of a 'class-optimal' matrix. DCTune technology appears to provide some value in the context of compressed dental x-rays.
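Result (2) above, that optimized matrices are well modeled by a two-parameter (amplitude, width) function of DCT frequency, can be reproduced in outline with a simple curve fit, as in the Python sketch below. The Gaussian-type functional form is an assumed stand-in, since the abstract does not spell out the exact "inverse Gaussian" parameterization, and the example matrix is synthetic.

    import numpy as np
    from scipy.optimize import curve_fit

    u, v = np.meshgrid(np.arange(8), np.arange(8))
    freq = np.sqrt(u ** 2 + v ** 2)                     # radial DCT frequency index

    def q_model(f, amplitude, width):
        # Assumed two-parameter form: quantization step grows with frequency.
        return amplitude * np.exp((f / width) ** 2)

    # Synthetic "optimized" matrix generated from the model plus noise, then refit.
    Q = q_model(freq, 12.0, 9.0) + np.random.default_rng(0).normal(0, 0.5, freq.shape)
    params, _ = curve_fit(q_model, freq.ravel(), Q.ravel(), p0=(10.0, 8.0))
    print("fitted amplitude and width:", params)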
Image-Processing Techniques for the Creation of Presentation-Quality Astronomical Images
NASA Astrophysics Data System (ADS)
Rector, Travis A.; Levay, Zoltan G.; Frattare, Lisa M.; English, Jayanne; Pu'uohau-Pummill, Kirk
2007-02-01
The quality of modern astronomical data and the agility of current image-processing software enable the visualization of data in a way that exceeds the traditional definition of an astronomical image. Two developments in particular have led to a fundamental change in how astronomical images can be assembled. First, the availability of high-quality multiwavelength and narrowband data allow for images that do not correspond to the wavelength sensitivity of the human eye, thereby introducing ambiguity in the usage and interpretation of color. Second, many image-processing software packages now use a layering metaphor that allows for any number of astronomical data sets to be combined into a color image. With this technique, images with as many as eight data sets have been produced. Each data set is intensity-scaled and colorized independently, creating an immense parameter space that can be used to assemble the image. Since such images are intended for data visualization, scaling and color schemes must be chosen that best illustrate the science. A practical guide is presented on how to use the layering metaphor to generate publication-ready astronomical images from as many data sets as desired. A methodology is also given on how to use intensity scaling, color, and composition to create contrasts in an image that highlight the scientific detail. Examples of image creation are discussed.
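A bare-bones version of the layering metaphor described above is sketched in Python below: each data set is intensity-scaled and colorized independently, and the colored layers are then summed and clipped. The arcsinh stretch and the example colors are illustrative choices, not the authors' recipe.

    import numpy as np

    def scale(data, soften=5.0):
        """Nonlinear intensity scaling of one data set to the range [0, 1]."""
        d = (data - data.min()) / (data.max() - data.min() + 1e-12)
        return np.arcsinh(soften * d) / np.arcsinh(soften)

    def colorize(gray, rgb):
        """Assign a single hue to an intensity-scaled layer."""
        return gray[..., None] * np.asarray(rgb)[None, None, :]

    rng = np.random.default_rng(1)
    layers = [rng.random((256, 256)) for _ in range(3)]   # stand-ins for narrowband data sets
    colors = [(1.0, 0.2, 0.2), (0.2, 1.0, 0.3), (0.3, 0.4, 1.0)]

    composite = sum(colorize(scale(d), c) for d, c in zip(layers, colors))
    composite = np.clip(composite, 0.0, 1.0)              # displayable RGB composite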
Tele-transmission of stereoscopic images of the optic nerve head in glaucoma via Internet.
Bergua, Antonio; Mardin, Christian Y; Horn, Folkert K
2009-06-01
The objective was to describe an inexpensive system to visualize stereoscopic photographs of the optic nerve head on computer displays and to transmit such images via the Internet for collaborative research or remote clinical diagnosis in glaucoma. Stereoscopic images of glaucoma patients were digitized and stored in a file format (joint photographic stereoimage [jps]) containing all three-dimensional information for both eyes on an Internet Web site (www.trizax.com). The size of the jps files was between 0.4 and 1.4 MB (corresponding to a diagonal stereo image size between 900 and 1400 pixels), suitable for Internet protocols. A conventional personal computer system equipped with wireless stereoscopic LCD shutter glasses and a CRT monitor with a high refresh rate (120 Hz) can be used to obtain flicker-free stereo visualization of true-color images with high resolution. Modern thin-film transistor LCD displays in combination with inexpensive red-cyan goggles achieve stereoscopic visualization with the same resolution but reduced color quality and contrast. The primary aim of our study, to transmit stereoscopic images via the Internet, was met. Additionally, we found that with both stereoscopic visualization techniques, cup depth, neuroretinal rim shape, and the slope of the inner wall of the optic nerve head can be qualitatively better perceived and interpreted than with monoscopic images. This study demonstrates high-quality and low-cost Internet transmission of stereoscopic images of the optic nerve head from glaucoma patients. The technique allows exchange of stereoscopic images and can be applied to tele-diagnosis and glaucoma research.
Visual Contrast Enhancement Algorithm Based on Histogram Equalization
Ting, Chih-Chung; Wu, Bing-Fei; Chung, Meng-Liang; Chiu, Chung-Cheng; Wu, Ya-Ching
2015-01-01
Image enhancement techniques primarily improve the contrast of an image to lend it a better appearance. One of the popular enhancement methods is histogram equalization (HE) because of its simplicity and effectiveness. However, it is rarely applied to consumer electronics products because it can cause excessive contrast enhancement and feature loss problems. These problems make the images processed by HE look unnatural and introduce unwanted artifacts in them. In this study, a visual contrast enhancement algorithm (VCEA) based on HE is proposed. VCEA considers the requirements of the human visual perception in order to address the drawbacks of HE. It effectively solves the excessive contrast enhancement problem by adjusting the spaces between two adjacent gray values of the HE histogram. In addition, VCEA reduces the effects of the feature loss problem by using the obtained spaces. Furthermore, VCEA enhances the detailed textures of an image to generate an enhanced image with better visual quality. Experimental results show that images obtained by applying VCEA have higher contrast and are more suited to human visual perception than those processed by HE and other HE-based methods. PMID:26184219
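For reference, the baseline HE mapping that VCEA builds on is sketched below in Python; VCEA's additional adjustment of the spacing between adjacent output gray levels is not reproduced here.

    import numpy as np

    def histogram_equalize(img):
        """Classic histogram equalization for an 8-bit grayscale image."""
        hist = np.bincount(img.ravel(), minlength=256)
        cdf = hist.cumsum().astype(np.float64)
        cdf_min = cdf[cdf > 0].min()
        lut = np.round(255.0 * (cdf - cdf_min) / (cdf[-1] - cdf_min + 1e-12))
        return lut.clip(0, 255).astype(np.uint8)[img]

    img = np.random.randint(40, 180, (128, 128), dtype=np.uint8)   # low-contrast example
    out = histogram_equalize(img)
    print(img.min(), img.max(), "->", out.min(), out.max())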
Content-based multiple bitstream image transmission over noisy channels.
Cao, Lei; Chen, Chang Wen
2002-01-01
In this paper, we propose a novel combined source and channel coding scheme for image transmission over noisy channels. The main feature of the proposed scheme is a systematic decomposition of image sources so that unequal error protection can be applied according to not only bit error sensitivity but also visual content importance. The wavelet transform is adopted to hierarchically decompose the image. The association between the wavelet coefficients and what they represent spatially in the original image is fully exploited so that wavelet blocks are classified based on their corresponding image content. The classification produces wavelet blocks in each class with similar content and statistics, thereby enabling high-performance source compression using the set partitioning in hierarchical trees (SPIHT) algorithm. To combat the channel noise, an unequal error protection strategy with rate-compatible punctured convolutional/cyclic redundancy check (RCPC/CRC) codes is implemented based on the bit contribution to both peak signal-to-noise ratio (PSNR) and visual quality. At the receiving end, a postprocessing method making use of the SPIHT decoding structure and the classification map is developed to restore the degradation due to the residual error after channel decoding. Experimental results show that the proposed scheme is indeed able to provide protection both for the bits that are more sensitive to errors and for the more important visual content under a noisy transmission environment. In particular, the reconstructed images show consistently better visual quality than those from single-bitstream-based schemes.
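The following toy Python sketch illustrates the unequal error protection idea: wavelet blocks are graded by an importance score, and the more important groups receive stronger (lower-rate) channel codes. The importance weighting, thresholds and code-rate table are hypothetical placeholders, not the RCPC/CRC rates used in the paper.

    import numpy as np

    RATE_TABLE = {"high": 1 / 3, "medium": 1 / 2, "low": 2 / 3}   # hypothetical RCPC code rates

    def protection_class(psnr_gain, visual_weight):
        importance = 0.5 * psnr_gain + 0.5 * visual_weight        # illustrative weighting
        if importance > 0.7:
            return "high"
        return "medium" if importance > 0.3 else "low"

    rng = np.random.default_rng(2)
    blocks = rng.random((8, 2))              # (PSNR contribution, visual importance) per block
    for i, (gain, weight) in enumerate(blocks):
        cls = protection_class(gain, weight)
        print(f"wavelet block {i}: class={cls}, channel code rate={RATE_TABLE[cls]:.2f}")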
Sekine, Tetsuro; Amano, Yasuo; Takagi, Ryo; Matsumura, Yoshio; Murai, Yasuo; Kumita, Shinichiro
2014-01-01
A drawback of time-resolved 3-dimensional phase contrast magnetic resonance (4D Flow MR) imaging is its lengthy scan time for clinical application in the brain. We assessed the feasibility for flow measurement and visualization of 4D Flow MR imaging using Cartesian y-z radial sampling and using k-t sensitivity encoding (k-t SENSE), by comparison with the standard scan using SENSE. Sixteen volunteers underwent 3 types of 4D Flow MR imaging of the brain using a 3.0-tesla scanner. As the standard scan, 4D Flow MR imaging with SENSE was performed first, followed by 2 types of acceleration scans: one with Cartesian y-z radial sampling and one with k-t SENSE. We measured peak systolic velocity (PSV) and blood flow volume (BFV) in 9 arteries and the percentage of particles arriving from the emitter plane at the target plane in 3 arteries, visually graded image quality in 9 arteries, and compared these quantitative and visual data between the standard scan and each acceleration scan. 4D Flow MR imaging examinations were completed in all but one volunteer, who did not undergo the last examination because of headache. Each acceleration scan reduced scan time by 50% compared with the standard scan. The k-t SENSE imaging underestimated PSV and BFV (P < 0.05). There were significant correlations for PSV and BFV between the standard scan and each acceleration scan (P < 0.01). The percentage of particles reaching the target plane did not differ between the standard scan and each acceleration scan. For visual assessment, y-z radial sampling deteriorated the image quality of the 3 arteries. Cartesian y-z radial sampling is feasible for measuring flow, and k-t SENSE offers sufficient flow visualization; both allow acquisition of 4D Flow MR imaging with a shorter scan time.
Smet, M H; Breysem, L; Mussen, E; Bosmans, H; Marshall, N W; Cockmartin, L
2018-07-01
To evaluate the impact of digital detector, dose level and post-processing on neonatal chest phantom X-ray image quality (IQ). A neonatal phantom was imaged using four different detectors: a CR powder phosphor (PIP), a CR needle phosphor (NIP) and two wireless CsI DR detectors (DXD and DRX). Five different dose levels were studied for each detector and two post-processing algorithms evaluated for each vendor. Three paediatric radiologists scored the images using European quality criteria plus additional questions on vascular lines, noise and disease simulation. Visual grading characteristics and ordinal regression statistics were used to evaluate the effect of detector type, post-processing and dose on the VGA score (VGAS). No significant differences were found between the NIP, DXD and DRX detectors (p>0.05), whereas the PIP detector had significantly lower VGAS (p<0.0001). Processing did not influence VGAS (p=0.819). Increasing dose resulted in significantly higher VGAS (p<0.0001). Visual grading analysis (VGA) identified a detector air kerma/image (DAK/image) of ~2.4 μGy as an ideal working point for the NIP, DXD and DRX detectors. VGAS tracked IQ differences between detectors and dose levels but not image post-processing changes. VGA showed a DAK/image value above which perceived IQ did not improve, potentially useful for commissioning. • A VGA study detects IQ differences between detectors and dose levels. • The NIP detector matched the VGAS of the CsI DR detectors. • VGA data are useful in setting the initial detector air kerma level. • Differences in NNPS were consistent with changes in VGAS.
Teich, Sorin; Al-Rawi, Wisam; Heima, Masahiro; Faddoul, Fady F; Goldzweig, Gil; Gutmacher, Zvi; Aizenbud, Dror
2016-10-01
To evaluate the image quality generated by eight commercially available intraoral sensors. Eighteen clinicians ranked the quality of a bitewing acquired from one subject using eight different intraoral sensors. Analytical methods used to evaluate clinical image quality included the Visual Grading Characteristics method, which helps to quantify subjective opinions to make them suitable for analysis. The Dexis sensor was ranked significantly better than Sirona and Carestream-Kodak sensors; and the image captured using the Carestream-Kodak sensor was ranked significantly worse than those captured using Dexis, Schick and Cyber Medical Imaging sensors. The Image Works sensor image was rated the lowest by all clinicians. Other comparisons resulted in non-significant results. None of the sensors was considered to generate images of significantly better quality than the other sensors tested. Further research should be directed towards determining the clinical significance of the differences in image quality reported in this study. © 2016 FDI World Dental Federation.
An overview of state-of-the-art image restoration in electron microscopy.
Roels, J; Aelterman, J; Luong, H Q; Lippens, S; Pižurica, A; Saeys, Y; Philips, W
2018-06-08
In Life Science research, electron microscopy (EM) is an essential tool for morphological analysis at the subcellular level, as it allows for visualization at nanometer resolution. However, electron micrographs contain image degradations such as noise and blur caused by electromagnetic interference, electron counting errors, magnetic lens imperfections, electron diffraction, etc. These imperfections in raw image quality are inevitable and hamper subsequent image analysis and visualization. In an effort to mitigate these artefacts, many electron microscopy image restoration algorithms have been proposed in recent years. Most of these methods rely on generic assumptions about the image or its degradations and are therefore outperformed by advanced methods that are based on more accurate models. Ideally, a method will accurately model the specific degradations that fit the physical acquisition settings. In this overview paper, we discuss different electron microscopy image degradation solutions and demonstrate that dedicated artefact regularisation results in higher quality restoration and is applicable through recently developed probabilistic methods. © 2018 The Authors Journal of Microscopy © 2018 Royal Microscopical Society.
Convolutional auto-encoder for image denoising of ultra-low-dose CT.
Nishio, Mizuho; Nagashima, Chihiro; Hirabayashi, Saori; Ohnishi, Akinori; Sasaki, Kaori; Sagawa, Tomoyuki; Hamada, Masayuki; Yamashita, Tatsuo
2017-08-01
The purpose of this study was to validate a patch-based image denoising method for ultra-low-dose CT images. Neural network with convolutional auto-encoder and pairs of standard-dose CT and ultra-low-dose CT image patches were used for image denoising. The performance of the proposed method was measured by using a chest phantom. Standard-dose and ultra-low-dose CT images of the chest phantom were acquired. The tube currents for standard-dose and ultra-low-dose CT were 300 and 10 mA, respectively. Ultra-low-dose CT images were denoised with our proposed method using neural network, large-scale nonlocal mean, and block-matching and 3D filtering. Five radiologists and three technologists assessed the denoised ultra-low-dose CT images visually and recorded their subjective impressions of streak artifacts, noise other than streak artifacts, visualization of pulmonary vessels, and overall image quality. For the streak artifacts, noise other than streak artifacts, and visualization of pulmonary vessels, the results of our proposed method were statistically better than those of block-matching and 3D filtering (p-values < 0.05). On the other hand, the difference in the overall image quality between our proposed method and block-matching and 3D filtering was not statistically significant (p-value = 0.07272). The p-values obtained between our proposed method and large-scale nonlocal mean were all less than 0.05. Neural network with convolutional auto-encoder could be trained using pairs of standard-dose and ultra-low-dose CT image patches. According to the visual assessment by radiologists and technologists, the performance of our proposed method was superior to that of large-scale nonlocal mean and block-matching and 3D filtering.
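A minimal patch-based convolutional auto-encoder of the kind described above is sketched in PyTorch below, trained to map ultra-low-dose patches to their paired standard-dose patches. Layer sizes, patch size and optimizer settings are illustrative assumptions, and random tensors stand in for the CT patch pairs.

    import torch
    import torch.nn as nn

    class DenoisingCAE(nn.Module):
        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            )
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(64, 32, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
                nn.ConvTranspose2d(32, 1, 3, stride=2, padding=1, output_padding=1),
            )

        def forward(self, x):
            return self.decoder(self.encoder(x))

    model = DenoisingCAE()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    low_dose = torch.randn(8, 1, 64, 64)      # stand-ins for ultra-low-dose patches
    standard = torch.randn(8, 1, 64, 64)      # paired standard-dose patches
    for _ in range(5):                        # a few illustrative training steps
        optimizer.zero_grad()
        loss = loss_fn(model(low_dose), standard)
        loss.backward()
        optimizer.step()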
Digital radiography: optimization of image quality and dose using multi-frequency software.
Precht, H; Gerke, O; Rosendahl, K; Tingberg, A; Waaler, D
2012-09-01
New developments in processing of digital radiographs (DR), including multi-frequency processing (MFP), allow optimization of image quality and radiation dose. This is particularly promising in children as they are believed to be more sensitive to ionizing radiation than adults. To examine whether the use of MFP software reduces the radiation dose without compromising quality at DR of the femur in 5-year-old-equivalent anthropomorphic and technical phantoms. A total of 110 images of an anthropomorphic phantom were imaged on a DR system (Canon DR with CXDI-50 C detector and MLT[S] software) and analyzed by three pediatric radiologists using Visual Grading Analysis. In addition, 3,500 images taken of a technical contrast-detail phantom (CDRAD 2.0) provide an objective image-quality assessment. Optimal image-quality was maintained at a dose reduction of 61% with MLT(S) optimized images. Even for images of diagnostic quality, MLT(S) provided a dose reduction of 88% as compared to the reference image. Software impact on image quality was found significant for dose (mAs), dynamic range dark region and frequency band. By optimizing image processing parameters, a significant dose reduction is possible without significant loss of image quality.
Ors, Suna; Inci, Ercan; Turkay, Rustu; Kokurcan, Atilla; Hocaoglu, Elif
2017-12-01
To compare the efficacy of three-dimensional SPACE (sampling perfection with application-optimized contrasts using different flip-angle evolutions) and CISS (constructive interference in steady state) sequences in the imaging of the cisternal segments of cranial nerves V-XII. Temporal MRI scans from 50 patients (F:M ratio, 27:23; mean age, 44.5±15.9 years) admitted to our hospital with vertigo, tinnitus, and hearing loss were retrospectively analyzed. All patients had both CISS and SPACE sequences. Quantitative analysis of SPACE and CISS sequences was performed by measuring the ventricle-to-parenchyma contrast-to-noise ratio (CNR). Qualitative analysis of differences in visualization capability, image quality, and severity of artifacts was also conducted. Scores ranging from 'no artefact' to 'severe artefacts and unreadable' were used for the assessment of artifacts and from 'not visualized' to 'completely visualized' for the assessment of image quality, respectively. The distribution of variables was controlled by the Kolmogorov-Smirnov test. Samples t-tests and McNemar's test were used to determine statistical significance. Rates of visualization of posterior fossa cranial nerves in cases of complete visualization were as follows: nerve V (100% for both sequences), nerve VI (94% in SPACE, 86% in CISS sequences), nerves VII-VIII (100% for both sequences), IX-XI nerve complex (96%, 88%); nerve XII (58%, 46%) (p<0.05). SPACE sequences showed fewer artifacts than CISS sequences (p<0.002). Copyright © 2017 Elsevier B.V. All rights reserved.
Composition of a dewarped and enhanced document image from two view images.
Koo, Hyung Il; Kim, Jinho; Cho, Nam Ik
2009-07-01
In this paper, we propose an algorithm to compose a geometrically dewarped and visually enhanced image from two document images taken by a digital camera at different angles. Unlike conventional works, which require special equipment, assumptions on the contents of books, or complicated image acquisition steps, we estimate the unfolded book or document surface from the corresponding points between two images. For this purpose, the surface and camera matrices are estimated using structure reconstruction, 3-D projection analysis, and random sample consensus-based curve fitting with a cylindrical surface model. Because we do not need any assumption on the contents of books, the proposed method can be applied not only to optical character recognition (OCR), but also to the high-quality digitization of pictures in documents. In addition to the dewarping for a structurally better image, image mosaicking is also performed to further improve the visual quality. By finding the better parts of the images (with less out-of-focus blur and/or without specular reflections) from either view, we compose a better image by stitching and blending them. These processes are formulated as energy minimization problems that can be solved using a graph cut method. Experiments on many kinds of book and document images show that the proposed algorithm works robustly and yields visually pleasing results. Also, the OCR rate of the resulting image is comparable to that of document images from a flatbed scanner.
Shimizu, Hironori; Isoda, Hiroyoshi; Ohno, Tsuyoshi; Yamashita, Rikiya; Kawahara, Seiya; Furuta, Akihiro; Fujimoto, Koji; Kido, Aki; Kusahara, Hiroshi; Togashi, Kaori
2015-01-01
To compare and evaluate images of non-contrast enhanced magnetic resonance (MR) portography and hepatic venography acquired with two different fat suppression methods, the chemical shift selective (CHESS) method and short tau inversion recovery (STIR) method. Twenty-two healthy volunteers were examined using respiratory-triggered three-dimensional true steady-state free-precession with two time-spatial labeling inversion pulses. The CHESS or STIR methods were used for fat suppression. The relative signal-to-noise ratio and contrast-to-noise ratio (CNR) were quantified, and the quality of visualization was scored. Image acquisition was successfully conducted in all volunteers. The STIR method significantly improved the CNRs of MR portography and hepatic venography. The image quality scores of main portal vein and right portal vein were higher with the STIR method, but there were no significant differences. The image quality scores of right hepatic vein, middle hepatic vein, and left hepatic vein (LHV) were all higher, and the visualization of LHV was significantly better (p<0.05). The STIR method contributes to further suppression of the background signal and improves visualization of the portal and hepatic veins. The results support using non-contrast-enhanced MR portography and hepatic venography in clinical practice. Copyright © 2014 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Ahn, Sangtae; Ross, Steven G.; Asma, Evren; Miao, Jun; Jin, Xiao; Cheng, Lishui; Wollenweber, Scott D.; Manjeshwar, Ravindra M.
2015-08-01
Ordered subset expectation maximization (OSEM) is the most widely used algorithm for clinical PET image reconstruction. OSEM is usually stopped early and post-filtered to control image noise and does not necessarily achieve optimal quantitation accuracy. As an alternative to OSEM, we have recently implemented a penalized likelihood (PL) image reconstruction algorithm for clinical PET using the relative difference penalty with the aim of improving quantitation accuracy without compromising visual image quality. Preliminary clinical studies have demonstrated visual image quality including lesion conspicuity in images reconstructed by the PL algorithm is better than or at least as good as that in OSEM images. In this paper we evaluate lesion quantitation accuracy of the PL algorithm with the relative difference penalty compared to OSEM by using various data sets including phantom data acquired with an anthropomorphic torso phantom, an extended oval phantom and the NEMA image quality phantom; clinical data; and hybrid clinical data generated by adding simulated lesion data to clinical data. We focus on mean standardized uptake values and compare them for PL and OSEM using both time-of-flight (TOF) and non-TOF data. The results demonstrate improvements of PL in lesion quantitation accuracy compared to OSEM with a particular improvement in cold background regions such as lungs.
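For reference, the relative difference penalty mentioned above is usually written, for a pair of neighbouring voxel values, as in the short Python function below (following Nuyts et al.); treating this as the paper's exact parameterization is an assumption based on the abstract's wording, and gamma is the edge-preservation parameter.

    import numpy as np

    def relative_difference_penalty(f_j, f_k, gamma=2.0, eps=1e-12):
        """Penalty for one neighbouring voxel pair (f_j, f_k)."""
        diff = f_j - f_k
        return diff ** 2 / (f_j + f_k + gamma * np.abs(diff) + eps)

    # Schematically, the PL objective maximized by the reconstruction is
    #   Phi(f) = L(y | f) - beta * sum over neighbour pairs (j, k) of w_jk * penalty(f_j, f_k)
    print(relative_difference_penalty(10.0, 8.0))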
Low-cost, high-resolution scanning laser ophthalmoscope for the clinical environment
NASA Astrophysics Data System (ADS)
Soliz, P.; Larichev, A.; Zamora, G.; Murillo, S.; Barriga, E. S.
2010-02-01
Researchers have sought to gain greater insight into the mechanisms of the retina and the optic disc at high spatial resolutions that would enable the visualization of small structures such as photoreceptors and nerve fiber bundles. The sources of retinal image quality degradation are aberrations within the human eye, which limit the achievable resolution and the contrast of small image details. To overcome these fundamental limitations, researchers have been applying adaptive optics (AO) techniques to correct for the aberrations. Today, deformable mirror based adaptive optics devices have been developed to overcome the limitations of standard fundus cameras, but at prices that are typically unaffordable for most clinics. In this paper we demonstrate a clinically viable fundus camera with auto-focus and astigmatism correction that is easy to use and has improved resolution. We have shown that removal of low-order aberrations results in significantly better resolution and quality images. Additionally, through the application of image restoration and super-resolution techniques, the images present considerably improved quality. The improvements lead to enhanced visualization of retinal structures associated with pathology.
Zarb, Francis; McEntee, Mark F; Rainford, Louise
2015-06-01
To evaluate visual grading characteristics (VGC) and ordinal regression analysis during head CT optimisation as a potential alternative to visual grading assessment (VGA), traditionally employed to score anatomical visualisation. Patient images (n = 66) were obtained using current and optimised imaging protocols from two CT suites: a 16-slice scanner at the national Maltese centre for trauma and a 64-slice scanner in a private centre. Local resident radiologists (n = 6) performed VGA followed by VGC and ordinal regression analysis. VGC alone indicated that optimised protocols had similar image quality as current protocols. Ordinal logistic regression analysis provided an in-depth evaluation, criterion by criterion allowing the selective implementation of the protocols. The local radiology review panel supported the implementation of optimised protocols for brain CT examinations (including trauma) in one centre, achieving radiation dose reductions ranging from 24 % to 36 %. In the second centre a 29 % reduction in radiation dose was achieved for follow-up cases. The combined use of VGC and ordinal logistic regression analysis led to clinical decisions being taken on the implementation of the optimised protocols. This improved method of image quality analysis provided the evidence to support imaging protocol optimisation, resulting in significant radiation dose savings. • There is need for scientifically based image quality evaluation during CT optimisation. • VGC and ordinal regression analysis in combination led to better informed clinical decisions. • VGC and ordinal regression analysis led to dose reductions without compromising diagnostic efficacy.
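As an indication of how the ordinal regression step can be carried out, the Python sketch below fits an ordered logit model to synthetic visual grading scores with statsmodels (the OrderedModel class, available in statsmodels 0.13 or later); the data layout and single protocol covariate are illustrative assumptions.

    import numpy as np
    import pandas as pd
    from statsmodels.miscmodels.ordinal_model import OrderedModel

    rng = np.random.default_rng(3)
    df = pd.DataFrame({
        "protocol": rng.integers(0, 2, 200),        # 0 = current, 1 = optimised protocol
        "score": rng.integers(1, 6, 200),           # ordinal VGA score per image criterion
    })
    df["score"] = pd.Categorical(df["score"], categories=[1, 2, 3, 4, 5], ordered=True)

    model = OrderedModel(df["score"], df[["protocol"]], distr="logit")
    result = model.fit(method="bfgs", disp=False)
    print(result.summary())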
Denoising and 4D visualization of OCT images
Gargesha, Madhusudhana; Jenkins, Michael W.; Rollins, Andrew M.; Wilson, David L.
2009-01-01
We are using Optical Coherence Tomography (OCT) to image structure and function of the developing embryonic heart in avian models. Fast OCT imaging produces very large 3D (2D + time) and 4D (3D volumes + time) data sets, which greatly challenge one's ability to visualize results. Noise in OCT images poses additional challenges. We created an algorithm with a quick, data-set-specific optimization for reduction of both shot and speckle noise and applied it to 3D visualization and image segmentation in OCT. When compared to baseline algorithms (median, Wiener, orthogonal wavelet, basic non-orthogonal wavelet), a panel of experts judged the new algorithm to give much improved volume renderings concerning both noise and 3D visualization. Specifically, the algorithm provided a better visualization of the myocardial and endocardial surfaces, and the interaction of the embryonic heart tube with surrounding tissue. Quantitative evaluation using an image quality figure of merit also indicated superiority of the new algorithm. Noise reduction aided semi-automatic 2D image segmentation, as quantitatively evaluated using a contour distance measure with respect to an expert-segmented contour. In conclusion, the noise reduction algorithm should be quite useful for visualization and quantitative measurements (e.g., heart volume, stroke volume, contraction velocity, etc.) in OCT embryo images. With its semi-automatic, data-set-specific optimization, we believe that the algorithm can be applied to OCT images from other applications. PMID:18679509
Gutman, David A.; Dunn, William D.; Cobb, Jake; Stoner, Richard M.; Kalpathy-Cramer, Jayashree; Erickson, Bradley
2014-01-01
Advances in web technologies now allow direct visualization of imaging data sets without necessitating the download of large file sets or the installation of software. This allows centralization of file storage and facilitates image review and analysis. XNATView is a light framework recently developed in our lab to visualize DICOM images stored in The Extensible Neuroimaging Archive Toolkit (XNAT). It consists of a PyXNAT-based framework to wrap around the REST application programming interface (API) and query the data in XNAT. XNATView was developed to simplify quality assurance, help organize imaging data, and facilitate data sharing for intra- and inter-laboratory collaborations. Its zero-footprint design allows the user to connect to XNAT from a web browser, navigate through projects, experiments, and subjects, and view DICOM images with accompanying metadata all within a single viewing instance. PMID:24904399
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schafer, S.; Nithiananthan, S.; Mirota, D. J.
Purpose: A flat-panel detector based mobile isocentric C-arm for cone-beam CT (CBCT) has been developed to allow intraoperative 3D imaging with sub-millimeter spatial resolution and soft-tissue visibility. Image quality and radiation dose were evaluated in spinal surgery, commonly relying on lower-performance image intensifier based mobile C-arms. Scan protocols were developed for task-specific imaging at minimum dose, in-room exposure was evaluated, and integration of the imaging system with a surgical guidance system was demonstrated in preclinical studies of minimally invasive spine surgery. Methods: Radiation dose was assessed as a function of kilovolt (peak) (80-120 kVp) and milliampere second using thoracic and lumbar spine dosimetry phantoms. In-room radiation exposure was measured throughout the operating room for various CBCT scan protocols. Image quality was assessed using tissue-equivalent inserts in chest and abdomen phantoms to evaluate bone and soft-tissue contrast-to-noise ratio as a function of dose, and task-specific protocols (i.e., visualization of bone or soft tissues) were defined. Results were applied in preclinical studies using a cadaveric torso simulating minimally invasive, transpedicular surgery. Results: Task-specific CBCT protocols identified include: thoracic bone visualization (100 kVp; 60 mAs; 1.8 mGy); lumbar bone visualization (100 kVp; 130 mAs; 3.2 mGy); thoracic soft-tissue visualization (100 kVp; 230 mAs; 4.3 mGy); and lumbar soft-tissue visualization (120 kVp; 460 mAs; 10.6 mGy), each at 0.3 x 0.3 x 0.9 mm³ voxel size. An alternative lower-dose, lower-resolution soft-tissue visualization protocol was identified (100 kVp; 230 mAs; 5.1 mGy) for the lumbar region at 0.3 x 0.3 x 1.5 mm³ voxel size. A half-scan orbit of the C-arm (x-ray tube traversing under the table) was dosimetrically advantageous (prepatient attenuation), with a nonuniform dose distribution (~2x higher at the entrance side than at isocenter, and ~3-4x lower at the exit side). The in-room dose (microsievert) per unit scan dose (milligray) ranged from ~21 μSv/mGy on average at tableside to ~0.1 μSv/mGy at 2.0 m distance to isocenter. All protocols involve surgical staff stepping behind a shield wall for each CBCT scan, therefore imparting approximately zero dose to staff. Protocol implementation in preclinical cadaveric studies demonstrated integration of the C-arm with a navigation system for spine surgery guidance, specifically minimally invasive vertebroplasty, in which the system provided accurate guidance and visualization of needle placement and bone cement distribution. Cumulative dose including multiple intraoperative scans was ~11.5 mGy for CBCT-guided thoracic vertebroplasty and ~23.2 mGy for lumbar vertebroplasty, with dose to staff at tableside reduced to ~1 min of fluoroscopy time (~40-60 μSv), compared to 5-11 min for the conventional approach. Conclusions: Intraoperative CBCT using a high-performance mobile C-arm prototype demonstrates image quality suitable for guidance of spine surgery, with task-specific protocols providing an important basis for minimizing radiation dose while maintaining image quality sufficient for surgical guidance. Images demonstrate a significant advance in spatial resolution and soft-tissue visibility, and CBCT guidance offers the potential to reduce fluoroscopy reliance, reducing cumulative dose to patient and staff. Integration with a surgical guidance system demonstrates precise tracking and visualization in up-to-date images (alleviating reliance on preoperative images only), including detection of errors or suboptimal surgical outcomes in the operating room.
A strategy to optimize CT pediatric dose with a visual discrimination model
NASA Astrophysics Data System (ADS)
Gutierrez, Daniel; Gudinchet, François; Alamo-Maestre, Leonor T.; Bochud, François O.; Verdun, Francis R.
2008-03-01
Technological developments of computed tomography (CT) have led to a drastic increase in its clinical utilization, creating concerns about patient exposure. To better control dose to patients, we propose a methodology to find an objective compromise between dose and image quality by means of a visual discrimination model. A GE LightSpeed-Ultra scanner was used to perform the acquisitions. A QRM 3D low contrast resolution phantom (QRM, Germany) was scanned using CTDIvol values in the range of 1.7 to 103 mGy. Raw data obtained with the highest CTDIvol were afterwards processed to simulate dose reductions by white noise addition. Noise realism of the simulations was verified by comparing the shape and amplitude of the normalized noise power spectra (NNPS) and standard deviation measurements. Patient images were acquired using the Diagnostic Reference Levels (DRL) proposed in Switzerland. Dose reduction was then simulated, as for the QRM phantom, to obtain five different CTDIvol levels, down to 3.0 mGy. Image quality of phantom images was assessed with the Sarnoff JNDmetrix visual discrimination model and compared to an assessment made by means of the ROC methodology, taken as a reference. For patient images a similar approach was taken, but using the Visual Grading Analysis (VGA) method as the reference. A relationship between Sarnoff JNDmetrix and ROC results was established for low contrast detection in phantom images, demonstrating that the Sarnoff JNDmetrix can be used for qualification of images with highly correlated noise. Patient image qualification showed a threshold of conspicuity loss only for children over 35 kg.
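The noise-addition step used to simulate lower doses can be sketched as follows, under the usual first-order assumption that CT quantum noise variance scales inversely with CTDIvol; the white (uncorrelated) noise here is a simplification, since real CT noise is correlated, which is precisely why the authors verify the simulations against measured NNPS.

    import numpy as np

    def simulate_dose_reduction(image, sigma_high, dose_high, dose_target, seed=0):
        """Add white noise so the image mimics an acquisition at dose_target."""
        sigma_target = sigma_high * np.sqrt(dose_high / dose_target)
        sigma_add = np.sqrt(max(sigma_target ** 2 - sigma_high ** 2, 0.0))
        noise = np.random.default_rng(seed).normal(0.0, sigma_add, image.shape)
        return image + noise

    img_high = np.random.normal(50.0, 5.0, (256, 256))     # stand-in for the 103 mGy scan
    img_low = simulate_dose_reduction(img_high, sigma_high=5.0,
                                      dose_high=103.0, dose_target=10.0)
    print(round(img_high.std(), 1), "->", round(img_low.std(), 1))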
Using CNN Features to Better Understand What Makes Visual Artworks Special
Brachmann, Anselm; Barth, Erhardt; Redies, Christoph
2017-01-01
One of the goals of computational aesthetics is to understand what is special about visual artworks. By analyzing image statistics, contemporary methods in computer vision enable researchers to identify properties that distinguish artworks from other (non-art) types of images. Such knowledge will eventually allow inferences with regard to the possible neural mechanisms that underlie aesthetic perception in the human visual system. In the present study, we define measures that capture variances of features of a well-established Convolutional Neural Network (CNN), which was trained on millions of images to recognize objects. Using an image dataset that represents traditional Western, Islamic and Chinese art, as well as various types of non-art images, we show that we need only two variance measures to distinguish between the artworks and non-art images with a high classification accuracy of 93.0%. Results for the first variance measure imply that, in the artworks, the subregions of an image tend to be filled with pictorial elements, to which many diverse CNN features respond (richness of feature responses). Results for the second measure imply that this diversity is tied to a relatively large variability of the responses of individual CNN features across the subregions of an image. We hypothesize that this combination of richness and variability of CNN feature responses is one of the properties that make traditional visual artworks special. We discuss the possible neural underpinnings of this perceptual quality of artworks and propose to study the same quality also in other types of aesthetic stimuli, such as music and literature. PMID:28588537
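The two variance measures can be approximated along the lines of the Python sketch below, which pools the feature maps of a pretrained object-recognition CNN from torchvision (version 0.13 or later assumed) over a grid of subregions; the chosen network, layer, 4x4 grid and exact variance definitions are illustrative assumptions rather than the authors' settings.

    import torch
    import torch.nn.functional as F
    import torchvision.models as models

    cnn = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()

    def variance_measures(x, grid=4):
        """x: image batch of shape (1, 3, 224, 224), values in [0, 1]."""
        with torch.no_grad():
            fmap = cnn(x)[0]                               # (channels, H, W) feature maps
        patches = F.adaptive_avg_pool2d(fmap, grid)        # mean response per subregion
        responses = patches.reshape(fmap.shape[0], -1)     # (channels, grid*grid)
        richness = responses.mean(dim=1).var().item()      # spread across CNN features
        variability = responses.var(dim=1).mean().item()   # spread across subregions
        return richness, variability

    print(variance_measures(torch.rand(1, 3, 224, 224)))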
Hastings, Gareth D.; Marsack, Jason D.; Nguyen, Lan Chi; Cheng, Han; Applegate, Raymond A.
2017-01-01
Purpose To prospectively examine whether using the visual image quality metric, visual Strehl (VSX), to optimise objective refraction from wavefront error measurements can provide equivalent or better visual performance than subjective refraction and which refraction is preferred in free viewing. Methods Subjective refractions and wavefront aberrations were measured on 40 visually-normal eyes of 20 subjects, through natural and dilated pupils. For each eye a sphere, cylinder, and axis prescription was also objectively determined that optimised visual image quality (VSX) for the measured wavefront error. High contrast (HC) and low contrast (LC) logMAR visual acuity (VA) and short-term monocular distance vision preference were recorded and compared between the VSX-objective and subjective prescriptions both undilated and dilated. Results For 36 myopic eyes, clinically equivalent (and not statistically different) HC VA was provided with both the objective and subjective refractions (undilated mean ±SD was −0.06 ±0.04 with both refractions; dilated was −0.05 ±0.04 with the objective, and −0.05 ±0.05 with the subjective refraction). LC logMAR VA provided by the objective refraction was also clinically equivalent and not statistically different to that provided by the subjective refraction through both natural and dilated pupils for myopic eyes. In free viewing the objective prescription was preferred over the subjective by 72% of myopic eyes when not dilated. For four habitually undercorrected high hyperopic eyes, the VSX-objective refraction was more positive in spherical power and VA poorer than with the subjective refraction. Conclusions A method of simultaneously optimising sphere, cylinder, and axis from wavefront error measurements, using the visual image quality metric VSX, is described. In myopic subjects, visual performance, as measured by HC and LC VA, with this VSX-objective refraction was found equivalent to that provided by subjective refraction, and was typically preferred over subjective refraction. Subjective refraction was preferred by habitually undercorrected hyperopic eyes. PMID:28370389
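To indicate how a wavefront error map is reduced to a single image-quality number in this kind of work, the Python sketch below computes a visual-Strehl-type ratio by weighting the optical transfer function with a crude neural contrast sensitivity term and normalizing by the diffraction-limited case. VSX as used by the authors is the spatial-domain visual Strehl; this frequency-domain variant, the toy CSF weighting and the pupil sampling are stand-in assumptions, not their implementation.

    import numpy as np

    def visual_strehl(wavefront_um, pupil, wavelength_um=0.555):
        """Crude visual-Strehl-type metric for a sampled wavefront error map (microns)."""
        P = pupil * np.exp(2j * np.pi * wavefront_um / wavelength_um)
        psf = np.abs(np.fft.fftshift(np.fft.fft2(P))) ** 2
        otf = np.abs(np.fft.fft2(psf / psf.sum()))
        psf_dl = np.abs(np.fft.fftshift(np.fft.fft2(pupil))) ** 2       # diffraction-limited
        otf_dl = np.abs(np.fft.fft2(psf_dl / psf_dl.sum()))
        fx = np.fft.fftfreq(wavefront_um.shape[0])
        fr = np.hypot(*np.meshgrid(fx, fx))
        csf = fr * np.exp(-fr / 0.1)                                    # toy neural weighting
        return float(np.sum(csf * otf) / np.sum(csf * otf_dl))

    n = 128
    y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
    pupil = (x ** 2 + y ** 2 <= 1.0).astype(float)
    defocus = 0.25 * (2.0 * (x ** 2 + y ** 2) - 1.0) * pupil            # Zernike defocus, in microns
    print(visual_strehl(defocus, pupil))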
Quality assessment of digital X-ray chest images using an anthropomorphic chest phantom
NASA Astrophysics Data System (ADS)
Vodovatov, A. V.; Kamishanskaya, I. G.; Drozdov, A. A.; Bernhardsson, C.
2017-02-01
The current study focuses on determining the optimal tube voltage for conventional digital X-ray chest screening examinations, using a visual grading analysis method. Chest images of an anthropomorphic phantom were acquired in the posterior-anterior projection on four digital X-ray units with different detector types. The X-ray images obtained with the anthropomorphic phantom were accepted by the radiologists as corresponding to normal human anatomy, hence allowing the use of phantoms in image quality trials without limitations.
LOD map--A visual interface for navigating multiresolution volume visualization.
Wang, Chaoli; Shen, Han-Wei
2006-01-01
In multiresolution volume visualization, a visual representation of level-of-detail (LOD) quality is important for examining, comparing, and validating different LOD selection algorithms. While traditional methods rely on the final rendered images for quality measurement, we introduce the LOD map--an alternative representation of LOD quality and a visual interface for navigating multiresolution data exploration. Our measure of LOD quality is based on the formulation of entropy from information theory. The measure takes into account the distortion and contribution of multiresolution data blocks. A LOD map is generated by mapping key LOD ingredients to a treemap representation. The ordered treemap layout is used for relatively stable updates of the LOD map when the view or LOD changes. This visual interface not only indicates the quality of LODs in an intuitive way, but also provides immediate suggestions for possible LOD improvement through visually striking features. It also allows us to compare different views and perform rendering budget control. A set of interactive techniques is proposed to make LOD adjustment a simple and easy task. We demonstrate the effectiveness and efficiency of our approach on large scientific and medical data sets.
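One plausible reading of an entropy-based LOD quality measure that combines per-block distortion and contribution is sketched below; the weighting scheme and the toy block values are assumptions, not the authors' exact formulation.

```python
import numpy as np

def lod_entropy(contribution, distortion):
    """Entropy-style LOD quality score over multiresolution data blocks.

    contribution, distortion: 1-D arrays, one value per block. This is only an
    illustrative combination of the two quantities into an entropy measure.
    """
    weight = contribution * distortion
    p = weight / weight.sum()
    p = p[p > 0]                      # ignore zero-probability blocks
    return -np.sum(p * np.log2(p))    # higher entropy ~ more even LOD budget

blocks_contribution = np.array([0.4, 0.3, 0.2, 0.1])
blocks_distortion = np.array([0.05, 0.10, 0.20, 0.40])
print(lod_entropy(blocks_contribution, blocks_distortion))
```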
Visual Communications And Image Processing
NASA Astrophysics Data System (ADS)
Hsing, T. Russell; Tzou, Kou-Hu
1989-07-01
This special issue on Visual Communications and Image Processing contains 14 papers that cover a wide spectrum in this fast growing area. For the past few decades, researchers and scientists have devoted their efforts to these fields. Through this long-lasting devotion, we witness today the growing popularity of low-bit-rate video as a convenient tool for visual communication. We also see the integration of high-quality video into broadband digital networks. Today, with more sophisticated processing, clearer and sharper pictures are being restored from blurring and noise. Also, thanks to the advances in digital image processing, even a PC-based system can be built to recognize highly complicated Chinese characters at the speed of 300 characters per minute. This special issue can be viewed as a milestone of visual communications and image processing on its journey to eternity. It presents some overviews on advanced topics as well as some new development in specific subjects.
Cherenkov Video Imaging Allows for the First Visualization of Radiation Therapy in Real Time
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jarvis, Lesley A., E-mail: Lesley.a.jarvis@hitchcock.org; Norris Cotton Cancer Center at the Dartmouth-Hitchcock Medical Center, Lebanon, New Hampshire; Zhang, Rongxiao
Purpose: To determine whether Cherenkov light imaging can visualize radiation therapy in real time during breast radiation therapy. Methods and Materials: An intensified charge-coupled device (CCD) camera was synchronized to the 3.25-μs radiation pulses of the clinical linear accelerator with the intensifier set to ×100. Cherenkov images were acquired continuously (2.8 frames/s) during fractionated whole breast irradiation with each frame an accumulation of 100 radiation pulses (approximately 5 monitor units). Results: The first patient images ever created are used to illustrate that Cherenkov emission can be visualized as a video during conditions typical for breast radiation therapy, even with complex treatment plans, mixed energies, and modulated treatment fields. Images were generated correlating to the superficial dose received by the patient and potentially the location of the resulting skin reactions. Major blood vessels are visible in the image, providing the potential to use these as biological landmarks for improved geometric accuracy. The potential for this system to detect radiation therapy misadministrations, which can result from hardware malfunction or patient positioning setup errors during individual fractions, is shown. Conclusions: Cherenkoscopy is a unique method for visualizing surface dose resulting in real-time quality control. We propose that this system could detect radiation therapy errors in everyday clinical practice at a time when these errors can be corrected to result in improved safety and quality of radiation therapy.
Human Visual System-Based Fundus Image Quality Assessment of Portable Fundus Camera Photographs.
Wang, Shaoze; Jin, Kai; Lu, Haitong; Cheng, Chuming; Ye, Juan; Qian, Dahong
2016-04-01
Telemedicine and the medical "big data" era in ophthalmology highlight the use of non-mydriatic ocular fundus photography, which has given rise to indispensable applications of portable fundus cameras. However, in the case of portable fundus photography, non-mydriatic image quality is more vulnerable to distortions, such as uneven illumination, color distortion, blur, and low contrast. Such distortions are called generic quality distortions. This paper proposes an algorithm capable of selecting images of fair generic quality that would be especially useful to assist inexperienced individuals in collecting meaningful and interpretable data with consistency. The algorithm is based on three characteristics of the human visual system--multi-channel sensation, just noticeable blur, and the contrast sensitivity function to detect illumination and color distortion, blur, and low contrast distortion, respectively. A total of 536 retinal images, 280 from proprietary databases and 256 from public databases, were graded independently by one senior and two junior ophthalmologists, such that three partial measures of quality and generic overall quality were classified into two categories. Binary classification was implemented by the support vector machine and the decision tree, and receiver operating characteristic (ROC) curves were obtained and plotted to analyze the performance of the proposed algorithm. The experimental results revealed that the generic overall quality classification achieved a sensitivity of 87.45% at a specificity of 91.66%, with an area under the ROC curve of 0.9452, indicating the value of applying the algorithm, which is based on the human vision system, to assess the image quality of non-mydriatic photography, especially for low-cost ophthalmological telemedicine applications.
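The binary quality classification step can be prototyped along the lines below; the three placeholder feature columns and synthetic labels stand in for the paper's illumination, blur and contrast measures, and scikit-learn's generic support vector classifier is used only as an example classifier.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# X: one row per retinal image, columns are quality features (placeholders for
# illumination, blur and contrast descriptors); y: 1 = acceptable quality.
rng = np.random.default_rng(1)
X = rng.random((536, 3))
y = (X.sum(axis=1) > 1.5).astype(int)   # synthetic labels for the demo

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(probability=True).fit(X_tr, y_tr)
scores = clf.predict_proba(X_te)[:, 1]
print("AUC:", roc_auc_score(y_te, scores))
```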
Chavan, Satishkumar S; Mahajan, Abhishek; Talbar, Sanjay N; Desai, Subhash; Thakur, Meenakshi; D'cruz, Anil
2017-02-01
Neurocysticercosis (NCC) is a parasite infection caused by the tapeworm Taenia solium in its larval stage which affects the central nervous system of the human body (a definitive host). It results in the formation of multiple lesions in the brain at different locations during its various stages. During diagnosis of such symptomatic patients, these lesions can be better visualized using a feature based fusion of Computed Tomography (CT) and Magnetic Resonance Imaging (MRI). This paper presents a novel approach to Multimodality Medical Image Fusion (MMIF) used for the analysis of the lesions for the diagnostic purpose and post treatment review of NCC. The MMIF presented here is a technique of combining CT and MRI data of the same patient into a new slice using a Nonsubsampled Rotated Complex Wavelet Transform (NSRCxWT). The forward NSRCxWT is applied on both the source modalities separately to extract the complementary and the edge related features. These features are then combined to form a composite spectral plane using average and maximum value selection fusion rules. The inverse transformation on this composite plane results in a new, visually better, and enriched fused image. The proposed technique is tested on the pilot study data sets of patients infected with NCC. The quality of these fused images is measured using objective and subjective evaluation metrics. Objective evaluation is performed by estimating the fusion parameters like entropy, fusion factor, image quality index, edge quality measure, mean structural similarity index measure, etc. The fused images are also evaluated for their visual quality using subjective analysis with the help of three expert radiologists. The experimental results on 43 image data sets of 17 patients are promising and superior when compared with state-of-the-art wavelet-based fusion algorithms. The proposed algorithm can be a part of a computer-aided detection and diagnosis (CADD) system which assists radiologists in clinical practice. Copyright © 2016 Elsevier Ltd. All rights reserved.
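The fusion rules described above (average for the approximation band, maximum selection for detail features) can be sketched as follows; a plain single-level 2D DWT from PyWavelets is used here as a stand-in for the NSRCxWT, and the wavelet choice and image sizes are illustrative assumptions.

```python
import numpy as np
import pywt

def fuse_ct_mri(ct, mri, wavelet="db2"):
    """Feature-level fusion of co-registered CT and MRI slices (sketch).

    Average rule on the approximation coefficients, maximum-absolute rule on
    the detail coefficients, then inverse transform to the fused slice.
    """
    cA1, (cH1, cV1, cD1) = pywt.dwt2(ct.astype(float), wavelet)
    cA2, (cH2, cV2, cD2) = pywt.dwt2(mri.astype(float), wavelet)

    cA = 0.5 * (cA1 + cA2)                                   # average rule
    def fuse_max(a, b):
        return np.where(np.abs(a) >= np.abs(b), a, b)        # max-abs rule
    details = (fuse_max(cH1, cH2), fuse_max(cV1, cV2), fuse_max(cD1, cD2))

    return pywt.idwt2((cA, details), wavelet)

ct = np.random.rand(128, 128)
mri = np.random.rand(128, 128)
fused = fuse_ct_mri(ct, mri)
```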
Shao, Feng; Li, Kemeng; Lin, Weisi; Jiang, Gangyi; Yu, Mei; Dai, Qionghai
2015-10-01
Quality assessment of 3D images encounters more challenges than its 2D counterparts. Directly applying 2D image quality metrics is not the solution. In this paper, we propose a new full-reference quality assessment for stereoscopic images by learning binocular receptive field properties to be more in line with human visual perception. To be more specific, in the training phase, we learn a multiscale dictionary from the training database, so that the latent structure of images can be represented as a set of basis vectors. In the quality estimation phase, we compute sparse feature similarity index based on the estimated sparse coefficient vectors by considering their phase difference and amplitude difference, and compute global luminance similarity index by considering luminance changes. The final quality score is obtained by incorporating binocular combination based on sparse energy and sparse complexity. Experimental results on five public 3D image quality assessment databases demonstrate that in comparison with the most related existing methods, the devised algorithm achieves high consistency with subjective assessment.
Vlek, S L; van Dam, D A; Rubinstein, S M; de Lange-de Klerk, E S M; Schoonmade, L J; Tuynman, J B; Meijerink, W J H J; Ankersmit, M
2017-07-01
Near-infrared imaging with indocyanine green (ICG) has been extensively investigated during laparoscopic cholecystectomy (LC). However, methods vary between studies, especially regarding patient selection, dosage and timing. The aim of this systematic review was to evaluate the potential of the near-infrared imaging technique with ICG to identify biliary structures during LC. A comprehensive systematic literature search was performed. Prospective trials examining the use of ICG during LC were included. Primary outcome was biliary tract visualization. Risk of bias was assessed using ROBINS-I. Secondly, a meta-analysis was performed comparing ICG to intraoperative cholangiography (IOC) for identification of biliary structures. GRADE was used to assess the quality of the evidence. Nineteen studies were included. Based upon the pooled data from 13 studies, cystic duct (CD; Lusch et al. in J Endourol 28:261-266, 2014) visualization was 86.5% (95% CI 71.2-96.6%) prior to dissection of Calot's triangle with a 2.5-mg dosage of ICG and 96.5% (95% CI 93.9-98.4%) after dissection. The results were not appreciably different when the dosage was based upon bodyweight. There is moderate quality evidence that the CD is more frequently visualized using ICG than IOC (RR 1.16; 95% CI 1.00-1.35); however, this difference was not statistically significant. This systematic review shows comparable biliary tract visualization between near-infrared imaging with ICG and IOC during LC. Near-infrared imaging with ICG has the potential to replace IOC for biliary mapping. However, methods of near-infrared imaging with ICG vary. Future research is necessary for optimization and standardization of the near-infrared ICG technique.
Tamada, Tsutomu; Ream, Justin M; Doshi, Ankur M; Taneja, Samir S; Rosenkrantz, Andrew B
The purpose of this study was to compare image quality and tumor assessment at prostate magnetic resonance imaging (MRI) between reduced field-of-view diffusion-weighted imaging (rFOV-DWI) and standard DWI (st-DWI). A total of 49 patients undergoing prostate MRI and MRI/ultrasound fusion-targeted biopsy were included. Examinations included st-DWI (field of view [FOV], 200 × 200 mm) and rFOV-DWI (FOV, 140 × 64 mm) using a 2-dimensional (2D) spatially-selective radiofrequency pulse and parallel transmission. Two readers performed qualitative assessments; a third reader performed quantitative evaluation. Overall image quality, anatomic distortion, visualization of capsule, and visualization of peripheral/transition zone edge were better for rFOV-DWI for reader 1 (P ≤ 0.002), although not for reader 2 (P ≥ 0.567). For both readers, sensitivity, specificity, and accuracy for tumor with a Gleason Score (GS) of 3 + 4 or higher were not different (P ≥ 0.289). Lesion clarity was higher for st-DWI for reader 2 (P = 0.008), although similar for reader 1 (P = 0.409). Diagnostic confidence was not different for either reader (P ≥ 0.052). Tumor-to-benign apparent diffusion coefficient ratio was not different (P = 0.675). Potentially improved image quality of rFOV-DWI did not yield improved tumor assessment. Continued optimization is warranted.
Impact of audio/visual systems on pediatric sedation in magnetic resonance imaging.
Lemaire, Colette; Moran, Gerald R; Swan, Hans
2009-09-01
To evaluate the use of an audio/visual (A/V) system in pediatric patients as an alternative to sedation in magnetic resonance imaging (MRI) in terms of wait times, image quality, and patient experience. Pediatric MRI examinations from April 8 to August 11, 2008 were compared to those from the year prior to the installation of the A/V system. Data collected included age, requisition receive date, scan date, and whether sedation was used. A posttest questionnaire was used to evaluate patient experience. Image quality was assessed by two radiologists. Over the 4 months in 2008 there was an increase of 7.2% (115; P < 0.05) in pediatric patients scanned and a decrease of 15.4% (67; P = 0.32) in those requiring sedation. The average sedation wait time decreased by 33% (5.8 months) (P < 0.05). Overall, the most positively affected group was the 4-10 year age group. The questionnaire showed that 84% of participants expressed a positive reaction to the A/V system. Radiological evaluation revealed no changes in image quality between A/V users and sedated patients. The A/V system was a successful method to reduce patient motion and obtain a quality diagnostic MRI without the use of sedation in pediatric patients. It provided a safer option, a positive experience, and decreased wait times.
Gradient Magnitude Similarity Deviation: A Highly Efficient Perceptual Image Quality Index.
Xue, Wufeng; Zhang, Lei; Mou, Xuanqin; Bovik, Alan C
2014-02-01
It is an important task to faithfully evaluate the perceptual quality of output images in many applications, such as image compression, image restoration, and multimedia streaming. A good image quality assessment (IQA) model should not only deliver high quality prediction accuracy, but also be computationally efficient. The efficiency of IQA metrics is becoming particularly important due to the increasing proliferation of high-volume visual data in high-speed networks. We present a new effective and efficient IQA model, called gradient magnitude similarity deviation (GMSD). The image gradients are sensitive to image distortions, while different local structures in a distorted image suffer different degrees of degradations. This motivates us to explore the use of global variation of gradient based local quality map for overall image quality prediction. We find that the pixel-wise gradient magnitude similarity (GMS) between the reference and distorted images combined with a novel pooling strategy-the standard deviation of the GMS map-can predict accurately perceptual image quality. The resulting GMSD algorithm is much faster than most state-of-the-art IQA methods, and delivers highly competitive prediction accuracy. MATLAB source code of GMSD can be downloaded at http://www4.comp.polyu.edu.hk/~cslzhang/IQA/GMSD/GMSD.htm.
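The GMSD computation described above (pixel-wise gradient magnitude similarity pooled by its standard deviation) is compact enough to sketch directly; the Prewitt kernels follow the description in the abstract, while the stability constant and the omission of the authors' preprocessing (e.g. downsampling) are assumptions of this sketch rather than the released implementation.

```python
import numpy as np
from scipy.ndimage import convolve

def gmsd(ref, dist, c=170.0):
    """Gradient Magnitude Similarity Deviation (illustrative sketch).

    ref, dist: grayscale images on a 0-255 scale. c is a stability constant
    (value assumed here; see the authors' released code for the exact setting).
    """
    hx = np.array([[1, 0, -1], [1, 0, -1], [1, 0, -1]]) / 3.0   # Prewitt
    hy = hx.T
    def grad_mag(img):
        return np.hypot(convolve(img, hx), convolve(img, hy))
    mr, md = grad_mag(ref.astype(float)), grad_mag(dist.astype(float))
    gms = (2 * mr * md + c) / (mr**2 + md**2 + c)   # local similarity map
    return gms.std()                                # pooling: standard deviation

ref = np.random.rand(64, 64) * 255
dist = ref + np.random.randn(64, 64) * 10
print(gmsd(ref, dist))
```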
Uncluttered Single-Image Visualization of Vascular Structures using GPU and Integer Programming
Won, Joong-Ho; Jeon, Yongkweon; Rosenberg, Jarrett; Yoon, Sungroh; Rubin, Geoffrey D.; Napel, Sandy
2013-01-01
Direct projection of three-dimensional branching structures, such as networks of cables, blood vessels, or neurons, onto a 2D image creates the illusion of intersecting structural parts and poses challenges for understanding and communication. We present a method for visualizing such structures, and demonstrate its utility in visualizing the abdominal aorta and its branches, whose tomographic images might be obtained by computed tomography or magnetic resonance angiography, in a single two-dimensional stylistic image, without overlaps among branches. The visualization method, termed uncluttered single-image visualization (USIV), involves optimization of geometry. This paper proposes a novel optimization technique that exploits an interesting connection between the optimization problem underlying USIV and the protein structure prediction problem. Adopting the integer linear programming-based formulation of the protein structure prediction problem, we tested the proposed technique using 30 visualizations produced from five patient scans with representative anatomical variants in the abdominal aortic vessel tree. The novel technique can exploit commodity-level parallelism, enabling use of general-purpose graphics processing unit (GPGPU) technology that yields a significant speedup. Comparison with an optimization technique reported previously suggests that, in most aspects, the quality of the visualization is comparable, with a significant gain in the computation time of the algorithm. PMID:22291148
Visual difference metric for realistic image synthesis
NASA Astrophysics Data System (ADS)
Bolin, Mark R.; Meyer, Gary W.
1999-05-01
An accurate and efficient model of human perception has been developed to control the placement of samples in a realistic image synthesis algorithm. Previous sampling techniques have sought to spread the error equally across the image plane. However, this approach neglects the fact that the renderings are intended to be displayed for a human observer. The human visual system has a varying sensitivity to error that is based upon the viewing context. This means that equivalent optical discrepancies can be very obvious in one situation and imperceptible in another. It is ultimately the perceptibility of this error that governs image quality and should be used as the basis of a sampling algorithm. This paper focuses on a simplified version of the Lubin Visual Discrimination Metric (VDM) that was developed for insertion into an image synthesis algorithm. The sampling VDM makes use of a Haar wavelet basis for the cortical transform and a less severe spatial pooling operation. The model was extended for color, including the effects of chromatic aberration. Comparisons are made between the execution time and visual difference maps of the original Lubin and simplified visual difference metrics. Results for the realistic image synthesis algorithm are also presented.
A Bayesian Nonparametric Approach to Image Super-Resolution.
Polatkan, Gungor; Zhou, Mingyuan; Carin, Lawrence; Blei, David; Daubechies, Ingrid
2015-02-01
Super-resolution methods form high-resolution images from low-resolution images. In this paper, we develop a new Bayesian nonparametric model for super-resolution. Our method uses a beta-Bernoulli process to learn a set of recurring visual patterns, called dictionary elements, from the data. Because it is nonparametric, the number of elements found is also determined from the data. We test the results on both benchmark and natural images, comparing with several other models from the research literature. We perform large-scale human evaluation experiments to assess the visual quality of the results. In a first implementation, we use Gibbs sampling to approximate the posterior. However, this algorithm is not feasible for large-scale data. To circumvent this, we then develop an online variational Bayes (VB) algorithm. This algorithm finds high quality dictionaries in a fraction of the time needed by the Gibbs sampler.
NASA Astrophysics Data System (ADS)
Hachaj, Tomasz; Ogiela, Marek R.
2012-10-01
The proposed framework for cognitive analysis of perfusion computed tomography images is a fusion of image processing, pattern recognition, and image analysis procedures. The output data of the algorithm consist of: regions of perfusion abnormalities, anatomy atlas descriptions of brain tissues, measures of perfusion parameters, and prognosis for infarcted tissues. That information is superimposed onto volumetric computed tomography data and displayed to radiologists. Our rendering algorithm enables rendering large volumes on off-the-shelf hardware. This portability of the rendering solution is very important because our framework can be run without expensive dedicated hardware. Other important factors are the theoretically unlimited size of the rendered volume and the possibility of trading off image quality for rendering speed. Such rendered, high-quality visualizations may be further used for intelligent brain perfusion abnormality identification and computer-aided diagnosis of selected types of pathologies.
Perceptual quality prediction on authentically distorted images using a bag of features approach
Ghadiyaram, Deepti; Bovik, Alan C.
2017-01-01
Current top-performing blind perceptual image quality prediction models are generally trained on legacy databases of human quality opinion scores on synthetically distorted images. Therefore, they learn image features that effectively predict human visual quality judgments of inauthentic and usually isolated (single) distortions. However, real-world images usually contain complex composite mixtures of multiple distortions. We study the perceptually relevant natural scene statistics of such authentically distorted images in different color spaces and transform domains. We propose a “bag of feature maps” approach that avoids assumptions about the type of distortion(s) contained in an image and instead focuses on capturing consistencies—or departures therefrom—of the statistics of real-world images. Using a large database of authentically distorted images, human opinions of them, and bags of features computed on them, we train a regressor to conduct image quality prediction. We demonstrate the competence of the features toward improving automatic perceptual quality prediction by testing a learned algorithm using them on a benchmark legacy database as well as on a newly introduced distortion-realistic resource called the LIVE In the Wild Image Quality Challenge Database. We extensively evaluate the perceptual quality prediction model and algorithm and show that it is able to achieve good-quality prediction power that is better than other leading models. PMID:28129417
Adaptive image inversion of contrast 3D echocardiography for enabling automated analysis.
Shaheen, Anjuman; Rajpoot, Kashif
2015-08-01
Contrast 3D echocardiography (C3DE) is commonly used to enhance the visual quality of ultrasound images in comparison with non-contrast 3D echocardiography (3DE). Although the image quality in C3DE is perceived to be improved for visual analysis, it actually deteriorates for the purpose of automatic or semi-automatic analysis due to higher speckle noise and intensity inhomogeneity. Therefore, LV endocardial feature extraction and segmentation from C3DE images remains a challenging problem. To address this challenge, this work proposes an adaptive pre-processing method to invert the appearance of the C3DE image. The image inversion is based on an image intensity threshold value which is automatically estimated through image histogram analysis. In the inverted appearance, the LV cavity appears dark while the myocardium appears bright, thus making it similar in appearance to a 3DE image. Moreover, the resulting inverted image has a high-contrast, low-noise appearance, yielding a strong LV endocardium boundary and facilitating feature extraction for segmentation. Our results demonstrate that the inverted appearance of the contrast image enables the subsequent LV segmentation. Copyright © 2015 Elsevier Ltd. All rights reserved.
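A minimal sketch of the histogram-driven inversion idea is given below; the specific threshold rule and the decision to invert based on the bright-voxel fraction are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np

def adaptive_invert(img):
    """Histogram-driven inversion of a contrast 3D echo image (sketch).

    A global threshold is estimated from the intensity histogram; if the image
    looks like a contrast study (large bright LV cavity), intensities are
    inverted so the cavity becomes dark and the myocardium bright.
    """
    hist, edges = np.histogram(img, bins=256)
    centers = 0.5 * (edges[:-1] + edges[1:])
    threshold = np.average(centers, weights=hist)   # simple histogram mean (assumed rule)
    bright_fraction = np.mean(img > threshold)
    if bright_fraction > 0.2:                       # looks contrast-enhanced
        img = img.max() - img                       # invert the appearance
    return img

volume = np.random.rand(32, 64, 64)
processed = adaptive_invert(volume)
```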
Power, Alyssa; Poonja, Sabrina; Disler, Dal; Myers, Kimberley; Patton, David J; Mah, Jean K; Fine, Nowell M; Greenway, Steven C
2017-01-01
Advances in medical care for patients with Duchenne muscular dystrophy (DMD) have resulted in improved survival and an increased prevalence of cardiomyopathy. Serial echocardiographic surveillance is recommended to detect early cardiac dysfunction and initiate medical therapy. Clinical anecdote suggests that echocardiographic quality diminishes over time, impeding accurate assessment of left ventricular systolic function. Furthermore, evidence-based guidelines for the use of cardiac imaging in DMD, including cardiac magnetic resonance imaging (CMR), are limited. The objective of our single-center, retrospective study was to quantify the deterioration in echocardiographic image quality with increasing patient age and identify an age at which CMR should be considered. We retrospectively reviewed and graded the image quality of serial echocardiograms obtained in young patients with DMD. The quality of 16 left ventricular segments in two echocardiographic views was visually graded using a binary scoring system. An endocardial border delineation percentage (EBDP) score was calculated by dividing the number of segments with adequate endocardial delineation in each imaging window by the total number of segments present in that window and multiplying by 100. Linear regression analysis was performed to model the relationship between the EBDP scores and patient age. Fifty-five echocardiograms from 13 patients (mean age 11.6 years, range 3.6-19.9) were systematically reviewed. By 13 years of age, 50% of the echocardiograms were classified as suboptimal with ≥30% of segments inadequately visualized, and by 15 years of age, 78% of studies were suboptimal. Linear regression analysis revealed a negative association between patient age and EBDP score (β = -2.49, 95% confidence interval -4.73 to -0.25; p = 0.032), with the score decreasing by 2.5% for each 1-year increase in age. Echocardiographic image quality declines with increasing age in DMD. Alternate imaging modalities may play a role in cases of poor echocardiographic image quality.
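The EBDP score and the age regression described above amount to a short calculation; the patient ages and scores in the sketch are hypothetical values, not the study data.

```python
import numpy as np
from scipy.stats import linregress

def ebdp_score(adequate_segments, total_segments):
    """Endocardial border delineation percentage for one imaging window."""
    return 100.0 * adequate_segments / total_segments

print(ebdp_score(12, 16))   # e.g. 12 of 16 segments adequately delineated -> 75.0

# Hypothetical study data: patient age (years) and per-study EBDP score.
ages = np.array([4, 6, 8, 10, 12, 14, 16, 18])
scores = np.array([95, 92, 88, 80, 72, 65, 55, 50])

fit = linregress(ages, scores)
print(f"slope = {fit.slope:.2f} % per year, p = {fit.pvalue:.3f}")
```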
Image quality evaluation of full reference algorithm
NASA Astrophysics Data System (ADS)
He, Nannan; Xie, Kai; Li, Tong; Ye, Yushan
2018-03-01
Image quality evaluation is a classic research topic; the goal is to design algorithms whose evaluation values are consistent with subjective human judgements. This paper mainly introduces several typical full-reference objective evaluation methods: Mean Squared Error (MSE), Peak Signal-to-Noise Ratio (PSNR), Structural Similarity (SSIM) and Feature Similarity (FSIM). The different evaluation methods are tested in MATLAB, and their advantages and disadvantages are obtained by analysis and comparison. MSE and PSNR are simple, but they do not incorporate characteristics of the human visual system (HVS) into image quality evaluation, so their results are not ideal. SSIM correlates well with subjective scores and is simple to compute because it brings the human visual response into image quality evaluation; however, the SSIM method is based on a hypothesis, so its results are limited. The FSIM method can be used to test both grayscale and color images, and its results are better. Experimental results show that the new image quality evaluation algorithm based on FSIM is more accurate.
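The two simplest metrics compared above reduce to a few lines; the sketch below shows plain MSE and PSNR in Python rather than the MATLAB used in the paper, with an 8-bit intensity range assumed.

```python
import numpy as np

def mse(ref, test):
    return np.mean((ref.astype(float) - test.astype(float)) ** 2)

def psnr(ref, test, peak=255.0):
    m = mse(ref, test)
    return float("inf") if m == 0 else 10 * np.log10(peak**2 / m)

ref = np.random.randint(0, 256, (64, 64))
test = np.clip(ref + np.random.randn(64, 64) * 5, 0, 255)
print(mse(ref, test), psnr(ref, test))
# SSIM and FSIM require perceptual modelling; SSIM, for instance, is available
# as skimage.metrics.structural_similarity in scikit-image.
```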
How does c-view image quality compare with conventional 2D FFDM?
Nelson, Jeffrey S; Wells, Jered R; Baker, Jay A; Samei, Ehsan
2016-05-01
The FDA approved the use of digital breast tomosynthesis (DBT) in 2011 as an adjunct to 2D full field digital mammography (FFDM) with the constraint that all DBT acquisitions must be paired with a 2D image to assure adequate interpretative information is provided. Recently, manufacturers have developed methods to provide a synthesized 2D image generated from the DBT data with the hope of sparing patients the radiation exposure from the FFDM acquisition. While this much needed alternative effectively reduces the total radiation burden, differences in image quality must also be considered. The goal of this study was to compare the intrinsic image quality of synthesized 2D c-view and 2D FFDM images in terms of resolution, contrast, and noise. Two phantoms were utilized in this study: the American College of Radiology mammography accreditation phantom (ACR phantom) and a novel 3D printed anthropomorphic breast phantom. Both phantoms were imaged using a Hologic Selenia Dimensions 3D system. Analysis of the ACR phantom included both visual inspection and objective automated analysis using in-house software. Analysis of the 3D anthropomorphic phantom included visual assessment of resolution and Fourier analysis of the noise. Using ACR-defined scoring criteria for the ACR phantom, the FFDM images scored statistically higher than c-view according to both the average observer and automated scores. In addition, between 50% and 70% of c-view images failed to meet the nominal minimum ACR accreditation requirements, primarily due to fiber breaks. Software analysis demonstrated that c-view provided enhanced visualization of medium and large microcalcification objects; however, the benefits diminished for smaller high contrast objects and all low contrast objects. Visual analysis of the anthropomorphic phantom showed a measurable loss of resolution in the c-view image (11 lp/mm FFDM, 5 lp/mm c-view) and a loss in detection of small microcalcification objects. Spectral analysis of the anthropomorphic phantom showed higher total noise magnitude in the FFDM image compared with c-view. Whereas the FFDM image contained approximately white noise texture, the c-view image exhibited marked noise reduction at mid and high frequencies with far less noise suppression at low frequencies, resulting in a mottled noise appearance. This analysis demonstrates many instances where c-view image quality differs from FFDM. Compared to FFDM, c-view offers a better depiction of objects of certain size and contrast, but provides poorer overall resolution and noise properties. Based on these findings, the utilization of c-view images in the clinical setting requires careful consideration, especially if considering the discontinuation of FFDM imaging. Not explicitly explored in this study is how the combination of DBT + c-view performs relative to DBT + FFDM or FFDM alone.
NASA Astrophysics Data System (ADS)
Yang, Xinyan; Zhao, Wei; Ye, Long; Zhang, Qin
2017-07-01
This paper proposes a no-reference objective stereoscopic video quality assessment method, with the aim of making the objective predictions agree closely with subjective assessments. We believe that image regions with different degrees of visual saliency should not receive the same weights when designing an assessment metric. Therefore, we first apply the GBVS algorithm to each frame pair and separate both the left and right viewing images into regions with strong, general and weak saliency. In addition, local feature information such as blockiness, zero-crossing and depth is extracted and combined in a mathematical model to calculate a quality assessment score. Regions with different degrees of saliency are assigned different weights in the mathematical model. Experimental results demonstrate the superiority of our method compared with existing state-of-the-art no-reference objective stereoscopic video quality assessment methods.
The footprints of visual attention in the Posner cueing paradigm revealed by classification images
NASA Technical Reports Server (NTRS)
Eckstein, Miguel P.; Shimozaki, Steven S.; Abbey, Craig K.
2002-01-01
In the Posner cueing paradigm, observers' performance in detecting a target is typically better in trials in which the target is present at the cued location than in trials in which the target appears at the uncued location. This effect can be explained in terms of a Bayesian observer where visual attention simply weights the information differently at the cued (attended) and uncued (unattended) locations without a change in the quality of processing at each location. Alternatively, it could also be explained in terms of visual attention changing the shape of the perceptual filter at the cued location. In this study, we use the classification image technique to compare the human perceptual filters at the cued and uncued locations in a contrast discrimination task. We did not find statistically significant differences between the shapes of the inferred perceptual filters across the two locations, nor did the observed differences account for the measured cueing effects in human observers. Instead, we found a difference in the magnitude of the classification images, supporting the idea that visual attention changes the weighting of information at the cued and uncued location, but does not change the quality of processing at each individual location.
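A classification image of the kind used above can be estimated, in its simplest form, by contrasting the external noise fields on trials sorted by the observer's response; the sketch below uses that simplified two-way estimate (the full signal-by-response weighting is omitted) with synthetic data.

```python
import numpy as np

def classification_image(noise_fields, responses):
    """Simplified classification-image estimate for one stimulus location.

    noise_fields: (n_trials, H, W) external noise added on each trial.
    responses:    boolean array, True where the observer reported "target".
    """
    present = noise_fields[responses].mean(axis=0)
    absent = noise_fields[~responses].mean(axis=0)
    return present - absent   # template the observer appears to correlate with

rng = np.random.default_rng(2)
noise = rng.normal(size=(500, 16, 16))
resp = rng.random(500) > 0.5
ci = classification_image(noise, resp)
```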
NASA Astrophysics Data System (ADS)
Rieder, Christian; Schwier, Michael; Weihusen, Andreas; Zidowitz, Stephan; Peitgen, Heinz-Otto
2009-02-01
Image guided radiofrequency ablation (RFA) is becoming a standard procedure as a minimally invasive method for tumor treatment in the clinical routine. The visualization of pathological tissue and potential risk structures like vessels or important organs gives essential support in image guided pre-interventional RFA planning. In this work our aim is to present novel visualization techniques for interactive RFA planning to support the physician with spatial information about pathological structures as well as the identification of trajectories that avoid harming vital tissue. Furthermore, we illustrate three-dimensional applicator models of different manufacturers combined with corresponding ablation areas in homogeneous tissue, as specified by the manufacturers, to improve estimation of the extent of cell destruction caused by ablation. The visualization techniques are embedded in a workflow oriented application, designed for use in the clinical routine. To allow high-quality volume rendering we integrated a visualization method using the fuzzy c-means algorithm. This method automatically defines a transfer function for volume visualization of vessels without the need of a segmentation mask. However, insufficient visualization results of the displayed vessels caused by low data quality can be improved using local vessel segmentation in the vicinity of the lesion. We also provide an interactive segmentation technique of liver tumors for volumetric measurement and for the visualization of pathological tissue combined with anatomical structures. In order to support coagulation estimation with respect to the heat-sink effect of the cooling blood flow, which decreases thermal ablation, a numerical simulation of the heat distribution is provided.
McCord, Layne K; Scarfe, William C; Naylor, Rachel H; Scheetz, James P; Silveira, Anibal; Gillespie, Kevin R
2007-05-01
The objectives of this study were to assess the effect of JPEG 2000 compression of hand-wrist radiographs on observers' qualitative image quality ratings and to compare these with a software-derived quantitative image quality index. Fifteen hand-wrist radiographs were digitized and saved as TIFF and JPEG 2000 images at 4 levels of compression (20:1, 40:1, 60:1, and 80:1). The images, including rereads, were viewed by 13 orthodontic residents who rated image quality on a scale of 1 to 5. A quantitative analysis was also performed using readily available software based on the human visual system (Image Quality Measure Computer Program, version 6.2, Mitre, Bedford, Mass). ANOVA was used to determine the optimal compression level (P ≤ .05). When we compared subjective indexes, JPEG compression greater than 60:1 significantly reduced image quality. When we used quantitative indexes, the JPEG 2000 images had lower quality at all compression ratios compared with the original TIFF images. There was excellent correlation (R² > 0.92) between qualitative and quantitative indexes. Image Quality Measure indexes are more sensitive than subjective image quality assessments in quantifying image degradation with compression. There is potential for this software-based quantitative method in determining the optimal compression ratio for any image without the use of subjective raters.
Creating Physical 3D Stereolithograph Models of Brain and Skull
Kelley, Daniel J.; Farhoud, Mohammed; Meyerand, M. Elizabeth; Nelson, David L.; Ramirez, Lincoln F.; Dempsey, Robert J.; Wolf, Alan J.; Alexander, Andrew L.; Davidson, Richard J.
2007-01-01
The human brain and skull are three dimensional (3D) anatomical structures with complex surfaces. However, medical images are often two dimensional (2D) and provide incomplete visualization of structural morphology. To overcome this loss in dimension, we developed and validated a freely available, semi-automated pathway to build 3D virtual reality (VR) and hand-held, stereolithograph models. To evaluate whether surface visualization in 3D was more informative than in 2D, undergraduate students (n = 50) used the Gillespie scale to rate 3D VR and physical models of both a living patient-volunteer's brain and the skull of Phineas Gage, a historically famous railroad worker whose misfortune with a projectile tamping iron provided the first evidence of a structure-function relationship in brain. Using our processing pathway, we successfully fabricated human brain and skull replicas and validated that the stereolithograph model preserved the scale of the VR model. Based on the Gillespie ratings, students indicated that the biological utility and quality of visual information at the surface of VR and stereolithograph models were greater than the 2D images from which they were derived. The method we developed is useful to create VR and stereolithograph 3D models from medical images and can be used to model hard or soft tissue in living or preserved specimens. Compared to 2D images, VR and stereolithograph models provide an extra dimension that enhances both the quality of visual information and utility of surface visualization in neuroscience and medicine. PMID:17971879
NASA Astrophysics Data System (ADS)
Yang, Guiyan; Wang, Qingyan; Liu, Chen; Wang, Xiaobin; Fan, Shuxiang; Huang, Wenqian
2018-07-01
Rapid and visual detection of the chemical compositions of plant seeds is important but difficult for a traditional seed quality analysis system. In this study, a custom-designed line-scan Raman hyperspectral imaging system was applied for detecting and displaying the main chemical compositions in a heterogeneous maize seed. Raman hyperspectral images collected from the endosperm and embryo of maize seed were acquired and preprocessed by Savitzky-Golay (SG) filter and adaptive iteratively reweighted Penalized Least Squares (airPLS). Three varieties of maize seeds were analyzed, and the characteristics of the spectral and spatial information were extracted from each hyperspectral image. The Raman characteristic peaks, identified at 477, 1443, 1522, 1596 and 1654 cm-1 from 380 to 1800 cm-1 Raman spectra, were related to corn starch, mixture of oil and starch, zeaxanthin, lignin and oil in maize seeds, respectively. Each single-band image corresponding to the characteristic band characterized the spatial distribution of the chemical composition in a seed successfully. The embryo was distinguished from the endosperm by band operation of the single-band images at 477, 1443, and 1596 cm-1 for each variety. Results showed that Raman hyperspectral imaging system could be used for on-line quality control of maize seeds based on the rapid and visual detection of the chemical compositions in maize seeds.
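The Savitzky-Golay smoothing step of the preprocessing described above can be reproduced with SciPy as sketched below; the window length, polynomial order and simulated spectrum are illustrative choices, and the airPLS baseline-removal step is omitted.

```python
import numpy as np
from scipy.signal import savgol_filter

# One simulated Raman spectrum over 380-1800 cm^-1 with a peak near 1443 cm^-1
# (the oil/starch band mentioned above) plus additive noise.
wavenumbers = np.linspace(380, 1800, 1421)
spectrum = (np.exp(-((wavenumbers - 1443) / 15) ** 2)
            + 0.05 * np.random.randn(wavenumbers.size))

# Savitzky-Golay smoothing; airPLS baseline correction would follow in the
# full preprocessing chain.
smoothed = savgol_filter(spectrum, window_length=15, polyorder=3)
```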
Rapid sequence magnetic resonance imaging in the assessment of children with hydrocephalus.
O'Neill, Brent R; Pruthi, Sumit; Bains, Harmanjeet; Robison, Ryan; Weir, Keiko; Ojemann, Jeff; Ellenbogen, Richard; Avellino, Anthony; Browd, Samuel R
2013-12-01
Recent reports have shown the utility of rapid-acquisition magnetic resonance imaging (MRI) in the evaluation of children with hydrocephalus. Rapid sequence MRI (RS-MRI) acquires clinically useful images in seconds without exposing children to the risks of ionizing radiation or sedation. We review our experience with RS-MRI in children with shunts. Overall image quality, cost, catheter visualization, motion artifact, and ventricular size were reviewed for all RS-MRI studies obtained at Seattle Children's Hospital during a 2-year period. Image acquisition time was 12-19 seconds, with sessions usually lasting less than 3 minutes. Image quality was very good or excellent in 94% of studies, whereas only one was graded as poor. Significant motion artifact was noted in 7%, whereas 77% had little or no motion artifact. Catheter visualization was good or excellent in 57%, poor in 36%, and misleading in 7%. Small ventricular size was correlated with poor catheter visualization (Spearman's ρ = 0.586; P < 0.00001). RS-MRI imaging cost ∼$650 more than conventional computed tomography (CT). Our study supports that RS-MRI is an adequate substitute that allows reduced use of CT imaging and resultant exposure to ionizing radiation. Catheter position visualization remains suboptimal when ventricles are small, but shunt malfunction can be adequately determined in most cases. The cost is significantly more than CT, but the potential for lifetime reduction in radiation exposure may justify this expense in children. Limitations include the risk of valve malfunction after repeated exposure to high magnetic fields and the need for reprogramming with many types of adjustable valves. Copyright © 2013 Elsevier Inc. All rights reserved.
USDA-ARS?s Scientific Manuscript database
Cooking loss (CL) is a critical quality attribute directly relating to meat juiciness. The potential of the hyperspectral imaging (HSI) technique was investigated for non-invasively classifying and visualizing the CL of fresh broiler breast meat. Hyperspectral images of total 75 fresh broiler breast...
On-Chip Imaging of Schistosoma haematobium Eggs in Urine for Diagnosis by Computer Vision
Linder, Ewert; Grote, Anne; Varjo, Sami; Linder, Nina; Lebbad, Marianne; Lundin, Mikael; Diwan, Vinod; Hannuksela, Jari; Lundin, Johan
2013-01-01
Background Microscopy, being relatively easy to perform at low cost, is the universal diagnostic method for detection of most globally important parasitic infections. As quality control is hard to maintain, misdiagnosis is common, which affects both estimates of parasite burdens and patient care. Novel techniques for high-resolution imaging and image transfer over data networks may offer solutions to these problems through provision of education, quality assurance and diagnostics. Imaging can be done directly on image sensor chips, a technique that can be exploited commercially for the development of inexpensive "mini-microscopes". Images can be transferred for analysis, both visually and by computer vision, at the point of care or at remote locations. Methods/Principal Findings Here we describe imaging of helminth eggs using mini-microscopes constructed from webcams and mobile phone cameras. The results show that an inexpensive webcam, stripped of its optics to allow direct application of the test sample on the exposed surface of the sensor, yields images of Schistosoma haematobium eggs, which can be identified visually. Using a highly specific image pattern recognition algorithm, 4 out of 5 eggs observed visually could be identified. Conclusions/Significance As proof of concept we show that an inexpensive imaging device, such as a webcam, may be easily modified into a microscope for the detection of helminth eggs based on on-chip imaging. Furthermore, algorithms for helminth egg detection by machine vision can be generated for automated diagnostics. The results can be exploited for constructing simple imaging devices for low-cost diagnostics of urogenital schistosomiasis and other neglected tropical infectious diseases. PMID:24340107
Are New Image Quality Figures of Merit Needed for Flat Panel Displays?
1998-06-01
The American National Standard for Human Factors Engineering of Visual Display Terminal Workstations (ANSI/HFS 100-1988) adopted the MTFA as the standard image quality figure of merit.
[Examination of patient dose reduction in cardiovascular X-ray systems with a metal filter].
Yasuda, Mitsuyoshi; Kato, Kyouichi; Tanabe, Nobuaki; Sakiyama, Koushi; Uchiyama, Yushi; Suzuki, Yoshiaki; Suzuki, Hiroshi; Nakazawa, Yasuo
2012-01-01
In interventional cardiology X-ray systems with a flat panel digital detector (FPD), we observed that the exposure dose increased abruptly as subject thickness increased, even though all of the variable metal filters built into the FPD were switched off. We therefore examined whether dose reduction was possible, without affecting the clinical image, using a metal filter that we have conventionally used for dose reduction. About 45% dose reduction was achieved when the exposure dose was measured at 30 cm of acrylic thickness with the filter in place. In addition, we measured signal-to-noise ratio, contrast-to-noise ratio, and the resolution limit by visual evaluation, and filter usage had no influence. In the clinical examination, a physician performed visual evaluation of coronary angiography image quality (40 cases) using a 5-point scale. Filter usage did not influence image quality (p = NS). Therefore, the sudden increase in exposure dose was reduced, without influencing image quality, by adding the filter to the FPD.
Gatidis, Sergios; Würslin, Christian; Seith, Ferdinand; Schäfer, Jürgen F; la Fougère, Christian; Nikolaou, Konstantin; Schwenzer, Nina F; Schmidt, Holger
2016-01-01
Optimization of tracer dose regimes in positron emission tomography (PET) imaging is a trade-off between diagnostic image quality and radiation exposure. The challenge lies in defining minimal tracer doses that still result in sufficient diagnostic image quality. In order to find such minimal doses, it would be useful to simulate tracer dose reduction, as this would enable studying the effects of tracer dose reduction on image quality in single patients without repeated injections of different amounts of tracer. The aim of our study was to introduce and validate a method for simulation of low-dose PET images enabling direct comparison of different tracer doses in single patients and under constant influencing factors. (18)F-fluoride PET data were acquired on a combined PET/magnetic resonance imaging (MRI) scanner. PET data were stored together with the temporal information of the occurrence of single events (list-mode format). A predefined proportion of PET events were then randomly deleted resulting in undersampled PET data. These data sets were subsequently reconstructed resulting in simulated low-dose PET images (retrospective undersampling of list-mode data). This approach was validated in phantom experiments by visual inspection and by comparison of PET quality metrics contrast recovery coefficient (CRC), background variability (BV) and signal-to-noise ratio (SNR) of measured and simulated PET images for different activity concentrations. In addition, reduced-dose PET images of a clinical (18)F-FDG PET dataset were simulated using the proposed approach. (18)F-PET image quality degraded with decreasing activity concentrations with comparable visual image characteristics in measured and in corresponding simulated PET images. This result was confirmed by quantification of image quality metrics. CRC, SNR and BV showed concordant behavior with decreasing activity concentrations for measured and for corresponding simulated PET images. Simulation of dose-reduced datasets based on clinical (18)F-FDG PET data demonstrated the clinical applicability of the proposed approach. Simulation of PET tracer dose reduction is possible with retrospective undersampling of list-mode data. Resulting simulated low-dose images have characteristics equivalent to those of PET images actually measured at lower doses and can be used to derive optimal tracer dose regimes.
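The core of the retrospective undersampling step (randomly discarding a fixed proportion of list-mode events before reconstruction) is simple to sketch; the event array layout below is a placeholder, since real list-mode formats are vendor specific.

```python
import numpy as np

def undersample_listmode(events, keep_fraction, seed=0):
    """Simulate a reduced tracer dose by randomly discarding list-mode events.

    events: array of recorded coincidence events (one row per event).
    keep_fraction: e.g. 0.5 keeps half of the events, emulating half the dose.
    """
    rng = np.random.default_rng(seed)
    keep = rng.random(len(events)) < keep_fraction
    return events[keep]

# Toy list-mode stream: columns might be (timestamp, crystal_a, crystal_b).
events = np.random.rand(1_000_000, 3)
half_dose = undersample_listmode(events, 0.5)
print(len(half_dose) / len(events))   # approximately 0.5
```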
Floating aerial 3D display based on the freeform-mirror and the improved integral imaging system
NASA Astrophysics Data System (ADS)
Yu, Xunbo; Sang, Xinzhu; Gao, Xin; Yang, Shenwu; Liu, Boyang; Chen, Duo; Yan, Binbin; Yu, Chongxiu
2018-09-01
A floating aerial three-dimensional (3D) display based on a freeform mirror and an improved integral imaging system is demonstrated. In traditional integral imaging (II), distortion originating from lens aberration warps the elemental images and severely degrades the visual effect. To correct the distortion of the observed pixels and to improve image quality, a directional diffuser screen (DDS) is introduced. However, the improved integral imaging system can hardly present realistic images with large off-screen depth, which limits the floating aerial visual experience. To display the 3D image in free space, an off-axis reflection system with a freeform mirror is designed. By combining the improved II and the designed freeform optical element, a floating aerial 3D image is presented.
NASA Astrophysics Data System (ADS)
Kusyk, Janusz; Eskicioglu, Ahmet M.
2005-10-01
Digital watermarking is considered to be a major technology for the protection of multimedia data. Some of the important applications are broadcast monitoring, copyright protection, and access control. In this paper, we present a semi-blind watermarking scheme for embedding a logo in color images using the DFT domain. After computing the DFT of the luminance layer of the cover image, the magnitudes of DFT coefficients are compared, and modified. A given watermark is embedded in three frequency bands: Low, middle, and high. Our experiments show that the watermarks extracted from the lower frequencies have the best visual quality for low pass filtering, adding Gaussian noise, JPEG compression, resizing, rotation, and scaling, and the watermarks extracted from the higher frequencies have the best visual quality for cropping, intensity adjustment, histogram equalization, and gamma correction. Extractions from the fragmented and translated image are identical to extractions from the unattacked watermarked image. The collusion and rewatermarking attacks do not provide the hacker with useful tools.
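A sketch of embedding watermark bits by modifying DFT magnitudes in one frequency band is given below; the ring radii, the additive embedding rule and the strength parameter are assumptions for illustration, since the paper's scheme compares and modifies coefficient magnitudes in low, middle and high bands in its own way.

```python
import numpy as np

def embed_dft_watermark(luma, watermark_bits, radius=(20, 30), alpha=5.0):
    """Embed bits in the DFT magnitudes of a mid-frequency ring (sketch)."""
    F = np.fft.fftshift(np.fft.fft2(luma.astype(float)))
    mag, phase = np.abs(F), np.angle(F)

    h, w = luma.shape
    yy, xx = np.mgrid[:h, :w]
    dist = np.hypot(yy - h / 2, xx - w / 2).ravel()
    band = np.flatnonzero((dist >= radius[0]) & (dist < radius[1]))

    flat = mag.ravel().copy()
    for i, bit in zip(band, watermark_bits):
        flat[i] += alpha if bit else -alpha          # strengthen or weaken magnitude
    mag = flat.reshape(h, w)

    marked = np.fft.ifft2(np.fft.ifftshift(mag * np.exp(1j * phase)))
    # np.real discards the small imaginary residue left by the non-symmetric edit.
    return np.real(marked)

img = np.random.rand(128, 128) * 255
bits = np.random.randint(0, 2, 200)
watermarked = embed_dft_watermark(img, bits)
```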
Contrast Enhancement Algorithm Based on Gap Adjustment for Histogram Equalization
Chiu, Chung-Cheng; Ting, Chih-Chung
2016-01-01
Image enhancement methods have been widely used to improve the visual effects of images. Owing to its simplicity and effectiveness, histogram equalization (HE) is one of the methods used for enhancing image contrast. However, HE may result in over-enhancement and feature loss problems that lead to an unnatural look and loss of details in the processed images. Researchers have proposed various HE-based methods to solve the over-enhancement problem; however, they have largely ignored the feature loss problem. Therefore, a contrast enhancement algorithm based on gap adjustment for histogram equalization (CegaHE) is proposed. It builds on a visual contrast enhancement algorithm based on histogram equalization (VCEA), which generates visually pleasing enhanced images, and improves the enhancement effects of VCEA. CegaHE adjusts the gaps between two gray values based on an adjustment equation, which takes the properties of human visual perception into consideration, to solve the over-enhancement problem. It also alleviates the feature loss problem and further enhances the textures in the dark regions of the images to improve the quality of the processed images for human visual perception. Experimental results demonstrate that CegaHE is a reliable method for contrast enhancement and that it significantly outperforms VCEA and other methods. PMID:27338412
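For reference, the plain global histogram equalization that CegaHE and VCEA build upon is shown below; the gap-adjustment step itself is not reproduced here, and the low-contrast test image is synthetic.

```python
import numpy as np

def histogram_equalize(img):
    """Plain global histogram equalization for an 8-bit grayscale image.

    CegaHE additionally limits the gaps between adjacent output gray levels;
    that adjustment step is not part of this baseline sketch.
    """
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())   # normalise to [0, 1]
    mapping = np.round(cdf * 255).astype(np.uint8)      # gray-level remapping
    return mapping[img]

img = np.random.randint(0, 128, (64, 64), dtype=np.uint8)   # low-contrast input
enhanced = histogram_equalize(img)
```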
Visualizing planetary data by using 3D engines
NASA Astrophysics Data System (ADS)
Elgner, S.; Adeli, S.; Gwinner, K.; Preusker, F.; Kersten, E.; Matz, K.-D.; Roatsch, T.; Jaumann, R.; Oberst, J.
2017-09-01
We examined 3D gaming engines for their usefulness in visualizing large planetary image data sets. These tools allow us to include recent developments in the field of computer graphics in our scientific visualization systems and to present data products interactively and in higher quality than before. We have started to set up the first applications that will make use of virtual reality (VR) equipment.
Antosh, Ivan J; DeVine, John G; Carpenter, Clyde T; Woebkenberg, Brian J; Yoest, Stephen M
2010-12-01
Disc arthroplasty is an alternative to fusion following anterior discectomy when treating either cervical radiculopathy or myelopathy. Its theoretical benefits include preservation of the motion segment and the potential prevention of adjacent-segment degeneration. There is a paucity of data regarding the ability to use MR imaging to evaluate the adjacent segments. The purpose of this study was for the authors to introduce open MR imaging as an alternative method in imaging adjacent segments following cervical disc arthroplasty using a Co-Cr implant and to report their preliminary results using this technique. Postoperative cervical MR images were obtained in the first 16 patients in whom the porous coated motion (PCM-V) cervical arthroplasty system was used to treat a single level between C-3 and C-7. Imaging was performed in all 16 patients with a closed 1.5-T unit, and in the final 6 patients it was also performed with an open 0.2-T unit. All images were evaluated by an independent radiologist observer for the ability to visualize the superior endplate, disc space, and inferior endplate at the superior and inferior adjacent levels. Utilizing the 1.5-T magnet to assess the superior adjacent level, the superior endplate, disc space, and inferior endplate could each be visualized less than 50% of the time on sagittal T1- and sagittal and axial T2-weighted images. Similarly, the inferior adjacent level structures were adequately visualized less than 50% of the time, with the exception of slightly improved visualization of the inferior endplate on T1-weighted images (56%). Axial images allowed worse visualization than sagittal images at both the superior and inferior adjacent levels. Utilizing the 0.2-T magnet to assess the superior and inferior adjacent levels, the superior endplate, disc space, and inferior endplate were adequately visualized in 100% of images. Based on the results of this case series, it appears that the strength of the magnet affects the artifact from the Co-Cr endplates. The open 0.2-T MR imaging unit reduces artifact at adjacent levels after cervical disc arthroplasty without a significant reduction in the image quality. Magnetic resonance imaging can be used to evaluate the adjacent segments after disc arthroplasty if magnet strength is addressed, providing another means to assess the long-term efficacy of this novel treatment.
Image Information Mining Utilizing Hierarchical Segmentation
NASA Technical Reports Server (NTRS)
Tilton, James C.; Marchisio, Giovanni; Koperski, Krzysztof; Datcu, Mihai
2002-01-01
The Hierarchical Segmentation (HSEG) algorithm is an approach for producing high quality, hierarchically related image segmentations. The VisiMine image information mining system utilizes clustering and segmentation algorithms for reducing visual information in multispectral images to a manageable size. The project discussed herein seeks to enhance the VisiMine system through incorporating hierarchical segmentations from HSEG into the VisiMine system.
Sugeng, Lissa; Shernan, Stanton K; Weinert, Lynn; Shook, Doug; Raman, Jai; Jeevanandam, Valluvan; DuPont, Frank; Fox, John; Mor-Avi, Victor; Lang, Roberto M
2008-12-01
Recently, a novel real-time 3-dimensional (3D) matrix-array transesophageal echocardiographic (3D-MTEE) probe was found to be highly effective in the evaluation of native mitral valves (MVs) and other intracardiac structures, including the interatrial septum and left atrial appendage. However, the ability to visualize prosthetic valves using this transducer has not been evaluated. Moreover, the diagnostic accuracy of this new technology has never been validated against surgical findings. This study was designed to (1) assess the quality of 3D-MTEE images of prosthetic valves and (2) determine the potential value of 3D-MTEE imaging in the preoperative assessment of valvular pathology by comparing images with surgical findings. Eighty-seven patients undergoing clinically indicated transesophageal echocardiography were studied. In 40 patients, 3D-MTEE images of prosthetic MVs, aortic valves (AVs), and tricuspid valves (TVs) were scored for the quality of visualization. For both MVs and AVs, mechanical and bioprosthetic valves, the rings and leaflets were scored individually. In 47 additional patients, intraoperative 3D-MTEE diagnoses of MV pathology obtained before initiating cardiopulmonary bypass were compared with surgical findings. For the visualization of prosthetic MVs and annuloplasty rings, quality was superior compared with AV and TV prostheses. In addition, 3D-MTEE imaging had 96% agreement with surgical findings. Three-dimensional matrix-array transesophageal echocardiographic imaging provides superb imaging and accurate presurgical evaluation of native MV pathology and prostheses. However, the current technology is less accurate for the clinical assessment of AVs and TVs. Fast acquisition and immediate online display will make this the modality of choice for MV surgical planning and postsurgical follow-up.
Image jitter enhances visual performance when spatial resolution is impaired.
Watson, Lynne M; Strang, Niall C; Scobie, Fraser; Love, Gordon D; Seidel, Dirk; Manahilov, Velitchko
2012-09-06
Visibility of low-spatial frequency stimuli improves when their contrast is modulated at 5 to 10 Hz compared with stationary stimuli. Therefore, temporal modulations of visual objects could enhance the performance of low vision patients who primarily perceive images of low-spatial frequency content. We investigated the effect of retinal-image jitter on word recognition speed and facial emotion recognition in subjects with central visual impairment. Word recognition speed and accuracy of facial emotion discrimination were measured in volunteers with AMD under stationary and jittering conditions. Computer-driven and optoelectronic approaches were used to induce retinal-image jitter with duration of 100 or 166 ms and amplitude within the range of 0.5 to 2.6° visual angle. Word recognition speed was also measured for participants with simulated (Bangerter filters) visual impairment. Text jittering markedly enhanced word recognition speed for people with severe visual loss (101 ± 25%), while for those with moderate visual impairment, this effect was weaker (19 ± 9%). The ability of low vision patients to discriminate the facial emotions of jittering images improved by a factor of 2. A prototype of optoelectronic jitter goggles produced similar improvement in facial emotion discrimination. Word recognition speed in participants with simulated visual impairment was enhanced for interjitter intervals over 100 ms and reduced for shorter intervals. Results suggest that retinal-image jitter with optimal frequency and amplitude is an effective strategy for enhancing visual information processing in the absence of spatial detail. These findings will enable the development of novel tools to improve the quality of life of low vision patients.
Human Connectome Project Informatics: quality control, database services, and data visualization
Marcus, Daniel S.; Harms, Michael P.; Snyder, Abraham Z.; Jenkinson, Mark; Wilson, J Anthony; Glasser, Matthew F.; Barch, Deanna M.; Archie, Kevin A.; Burgess, Gregory C.; Ramaratnam, Mohana; Hodge, Michael; Horton, William; Herrick, Rick; Olsen, Timothy; McKay, Michael; House, Matthew; Hileman, Michael; Reid, Erin; Harwell, John; Coalson, Timothy; Schindler, Jon; Elam, Jennifer S.; Curtiss, Sandra W.; Van Essen, David C.
2013-01-01
The Human Connectome Project (HCP) has developed protocols, standard operating and quality control procedures, and a suite of informatics tools to enable high throughput data collection, data sharing, automated data processing and analysis, and data mining and visualization. Quality control procedures include methods to maintain data collection consistency over time, to measure head motion, and to establish quantitative modality-specific overall quality assessments. Database services developed as customizations of the XNAT imaging informatics platform support both internal daily operations and open access data sharing. The Connectome Workbench visualization environment enables user interaction with HCP data and is increasingly integrated with the HCP's database services. Here we describe the current state of these procedures and tools and their application in the ongoing HCP study. PMID:23707591
NMF-Based Image Quality Assessment Using Extreme Learning Machine.
Wang, Shuigen; Deng, Chenwei; Lin, Weisi; Huang, Guang-Bin; Zhao, Baojun
2017-01-01
Numerous state-of-the-art perceptual image quality assessment (IQA) algorithms share a common two-stage process: distortion description followed by distortion effects pooling. As for the first stage, the distortion descriptors or measurements are expected to be effective representations of quality variations as perceived by the human visual system, while the second stage should capture the relationship between the quality descriptors and perceived visual quality. However, most of the existing quality descriptors (e.g., luminance, contrast, and gradient) do not seem to be consistent with human perception, and the effects pooling is often done in ad-hoc ways. In this paper, we propose a novel full-reference IQA metric. It applies non-negative matrix factorization (NMF) to measure image degradations by making use of the parts-based representation of NMF. On the other hand, a new machine learning technique [extreme learning machine (ELM)] is employed to address the limitations of the existing pooling techniques. Compared with neural networks and support vector regression, ELM can achieve higher learning accuracy with faster learning speed. Extensive experimental results demonstrate that the proposed metric has better performance and lower computational complexity in comparison with the relevant state-of-the-art approaches.
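As an illustrative aside on the two-stage idea described above, the sketch below encodes non-negative patches with scikit-learn's NMF and pools the resulting activations with a minimal extreme learning machine regressor. The patch features, hidden-layer size, and synthetic quality scores are placeholders, not the authors' pipeline.

```python
# Illustrative sketch (not the authors' implementation): NMF parts-based
# features for image patches, pooled by a minimal ELM regressor.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)

def nmf_features(patches, n_components=8):
    """Encode non-negative image patches with an NMF basis; return activations."""
    model = NMF(n_components=n_components, init="nndsvda", max_iter=500)
    w = model.fit_transform(patches)          # per-patch activations
    return w, model

class ELMRegressor:
    """Extreme learning machine: random hidden layer + least-squares output weights."""
    def __init__(self, n_hidden=50, seed=0):
        self.n_hidden, self.rng = n_hidden, np.random.default_rng(seed)
    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)
        self.beta = np.linalg.pinv(H) @ y     # closed-form output weights
        return self
    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta

# Toy data: NMF activations of patches used as "distortion descriptors",
# mapped to synthetic subjective quality scores.
patches = rng.random((200, 64))               # 200 non-negative 8x8 patches
feats, _ = nmf_features(patches)
scores = rng.random(200)                      # placeholder MOS values
elm = ELMRegressor().fit(feats, scores)
print(elm.predict(feats[:5]))
```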
A new image representation for compact and secure communication
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prasad, Lakshman; Skourikhine, A. N.
In many areas of nuclear materials management there is a need for communication, archival, and retrieval of annotated image data between heterogeneous platforms and devices to effectively implement safety, security, and safeguards of nuclear materials. Current image formats such as JPEG are not ideally suited in such scenarios as they are not scalable to different viewing formats, and do not provide a high-level representation of images that facilitates automatic object/change detection or annotation. The new Scalable Vector Graphics (SVG) open standard for representing graphical information, recommended by the World Wide Web Consortium (W3C), is designed to address issues of image scalability, portability, and annotation. However, until now there has been no viable technology to efficiently field images of high visual quality under this standard. Recently, LANL has developed a vectorized image representation that is compatible with the SVG standard and preserves visual quality. This is based on a new geometric framework for characterizing complex features in real-world imagery that incorporates perceptual principles of processing visual information known from cognitive psychology and vision science, to obtain a polygonal image representation of high fidelity. This representation can take advantage of textual compression and encryption routines that are unavailable to other image formats. Moreover, this vectorized image representation can be exploited to facilitate automated object recognition that can reduce the time required for data review. The objects/features of interest in these vectorized images can be annotated via animated graphics to facilitate quick and easy display and comprehension of processed image content.
Real-time dynamic display of registered 4D cardiac MR and ultrasound images using a GPU
NASA Astrophysics Data System (ADS)
Zhang, Q.; Huang, X.; Eagleson, R.; Guiraudon, G.; Peters, T. M.
2007-03-01
In minimally invasive image-guided surgical interventions, different imaging modalities, such as magnetic resonance imaging (MRI), computed tomography (CT), and real-time three-dimensional (3D) ultrasound (US), can provide complementary, multi-spectral image information. Multimodality dynamic image registration is a well-established approach that permits real-time diagnostic information to be enhanced by placing lower-quality real-time images within a high quality anatomical context. For the guidance of cardiac procedures, it would be valuable to register dynamic MRI or CT with intraoperative US. However, in practice, either the high computational cost prohibits such real-time visualization of volumetric multimodal images in a real-world medical environment, or else the resulting image quality is not satisfactory for accurate guidance during the intervention. Modern graphics processing units (GPUs) provide the programmability, parallelism and increased computational precision to begin to address this problem. In this work, we first outline our research on dynamic 3D cardiac MR and US image acquisition, real-time dual-modality registration and US tracking. Then we describe image processing and optimization techniques for 4D (3D + time) cardiac image real-time rendering. We also present our multimodality 4D medical image visualization engine, which directly runs on a GPU in real-time by exploiting the advantages of the graphics hardware. In addition, techniques such as multiple transfer functions for different imaging modalities, dynamic texture binding, advanced texture sampling and multimodality image compositing are employed to facilitate the real-time display and manipulation of the registered dual-modality dynamic 3D MR and US cardiac datasets.
East, James E; Vleugels, Jasper L; Roelandt, Philip; Bhandari, Pradeep; Bisschops, Raf; Dekker, Evelien; Hassan, Cesare; Horgan, Gareth; Kiesslich, Ralf; Longcroft-Wheaton, Gaius; Wilson, Ana; Dumonceau, Jean-Marc
2016-11-01
Background and aim: This technical review is an official statement of the European Society of Gastrointestinal Endoscopy (ESGE). It addresses the utilization of advanced endoscopic imaging in gastrointestinal (GI) endoscopy. Methods: This technical review is based on a systematic literature search to evaluate the evidence supporting the use of advanced endoscopic imaging throughout the GI tract. Technologies considered include narrowed-spectrum endoscopy (narrow band imaging [NBI]; flexible spectral imaging color enhancement [FICE]; i-Scan digital contrast [I-SCAN]), autofluorescence imaging (AFI), and confocal laser endomicroscopy (CLE). The Grading of Recommendations Assessment, Development and Evaluation (GRADE) system was adopted to define the strength of recommendation and the quality of evidence. Main recommendations: 1. We suggest advanced endoscopic imaging technologies improve mucosal visualization and enhance fine structural and microvascular detail. Expert endoscopic diagnosis may be improved by advanced imaging, but as yet in community-based practice no technology has been shown consistently to be diagnostically superior to current practice with high definition white light. (Low quality evidence.) 2. We recommend the use of validated classification systems to support the use of optical diagnosis with advanced endoscopic imaging in the upper and lower GI tracts (strong recommendation, moderate quality evidence). 3. We suggest that training improves performance in the use of advanced endoscopic imaging techniques and that it is a prerequisite for use in clinical practice. A learning curve exists and training alone does not guarantee sustained high performances in clinical practice. (Weak recommendation, low quality evidence.) Conclusion: Advanced endoscopic imaging can improve mucosal visualization and endoscopic diagnosis; however it requires training and the use of validated classification systems. © Georg Thieme Verlag KG Stuttgart · New York.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, X; Lei, Y; Zheng, D
2016-06-15
Purpose: High Dose Rate (HDR) brachytherapy poses a special challenge to radiation safety and quality assurance (QA) due to its high radioactivity, and it is thus critical to verify the HDR source location and its radioactive strength. This study demonstrates a new method for measuring HDR source location and radioactivity utilizing thermal imaging. A potential application would relate to HDR QA and safety improvement. Methods: Heating effects by an HDR source were studied using Finite Element Analysis (FEA). Thermal cameras were used to visualize an HDR source inside a plastic applicator made of polyvinylidene difluoride (PVDF). Using different source dwell times, correlations between the HDR source strength and heating effects were studied, thus establishing potential daily QA criteria using thermal imaging. Results: For an Ir-192 source with a radioactivity of 10 Ci, the decay-induced heating power inside the source is ∼13.3 mW. After the HDR source was extended into the PVDF applicator and reached thermal equilibrium, thermal imaging visualized the temperature gradient of 10 K/cm along the PVDF applicator surface, which agreed with FEA modeling. For Ir-192 source activities ranging from 4.20–10.20 Ci, thermal imaging could verify source activity with an accuracy of 6.3% with a dwell time of 10 sec, and an accuracy of 2.5% with 100 sec. Conclusion: Thermal imaging is a feasible tool to visualize HDR source dwell positions and verify source integrity. Patient safety and treatment quality will be improved by integrating thermal measurements into HDR QA procedures.
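A quick back-of-envelope check of the quoted self-heating figure is possible from the activity alone; the roughly 0.22 MeV of locally deposited energy per decay assumed below is an inferred effective value, not a number given in the abstract.

```python
# Back-of-envelope check of the quoted ~13.3 mW self-heating of a 10 Ci Ir-192
# source. The ~0.22 MeV deposited locally per decay is an assumed effective
# value (roughly the mean beta energy plus partially absorbed photons).
CI_TO_BQ = 3.7e10                 # decays per second per curie
MEV_TO_J = 1.602e-13

activity_ci = 10.0
e_dep_mev = 0.22                  # assumed energy deposited in the source per decay

power_w = activity_ci * CI_TO_BQ * e_dep_mev * MEV_TO_J
print(f"Estimated self-heating: {power_w * 1e3:.1f} mW")   # ~13 mW
```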
Recognition and prevention of computed radiography image artifacts.
Hammerstrom, Kevin; Aldrich, John; Alves, Len; Ho, Andrew
2006-09-01
Initiated by complaints of image artifacts, a thorough visual and radiographic investigation of 197 Fuji, 35 Agfa, and 37 Kodak computed radiography (CR) cassettes with imaging plates (IPs) in clinical use at four radiology departments was performed. The investigation revealed that the physical deterioration of the cassettes and IPs was more extensive than previously believed. It appeared that many of the image artifacts were the direct result of premature wear of the cassettes and imaging plates. The results indicate that a quality control program for CR cassettes and IPs is essential and should include not only cleaning of the cassettes and imaging plates on a regular basis, but also visual and radiographic image inspection to limit the occurrence of image artifacts and to prolong the life cycle of the CR equipment.
Toward semantic-based retrieval of visual information: a model-based approach
NASA Astrophysics Data System (ADS)
Park, Youngchoon; Golshani, Forouzan; Panchanathan, Sethuraman
2002-07-01
This paper centers on the problem of automated visual content classification. To enable classification-based image or visual object retrieval, we propose a new image representation scheme called the visual context descriptor (VCD), a multidimensional vector in which each element represents the frequency of a unique visual property of an image or a region. VCD utilizes predetermined quality dimensions (i.e., types of features and quantization levels) and semantic model templates mined a priori. Not only observed visual cues, but also contextually relevant visual features are proportionally incorporated in the VCD. The contextual relevance of a visual cue to a semantic class is determined by correlation analysis of ground truth samples. Such co-occurrence analysis of visual cues requires transforming a real-valued visual feature vector (e.g., color histogram, Gabor texture) into a discrete event (analogous to terms in text). Good features to track, the rule of thirds, iterative k-means clustering, and tree-structured vector quantization (TSVQ) are used to transform feature vectors into unified symbolic representations called visual terms. Similarity-based visual cue frequency estimation is also proposed to ensure the correctness of model learning and matching, since the sparseness of sample data otherwise makes frequency estimates of visual cues unstable. The proposed method naturally allows integration of heterogeneous visual, temporal, or spatial cues in a single classification or matching framework, and can be easily integrated into a semantic knowledge base such as a thesaurus or ontology. Robust semantic visual model template creation and object-based image retrieval are demonstrated based on the proposed content description scheme.
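A minimal sketch of the visual-term idea, as understood from the abstract: quantize continuous region features with k-means and describe an image by the frequency of each resulting term. The feature vectors, codebook size, and variable names are illustrative assumptions rather than the paper's exact pipeline.

```python
# Bag-of-visual-terms sketch: k-means codebook over region features, then a
# term-frequency histogram loosely analogous to a visual context descriptor.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Placeholder features: one row per image region (e.g. color/texture vectors).
train_feats = rng.random((500, 16))
image_region_feats = rng.random((40, 16))     # regions of one query image

codebook = KMeans(n_clusters=32, n_init=10, random_state=1).fit(train_feats)
terms = codebook.predict(image_region_feats)  # discrete visual terms

# Term-frequency descriptor for the query image.
vcd = np.bincount(terms, minlength=codebook.n_clusters).astype(float)
vcd /= vcd.sum()
print(vcd)
```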
The Role of Teleophthalmology in the Management of Diabetic Retinopathy.
Salongcay, Recivall P; Silva, Paolo S
2018-01-01
The emergence of diabetes as a global epidemic is accompanied by the rise in diabetes‑related retinal complications. Diabetic retinopathy, if left undetected and untreated, can lead to severe visual impairment and affect an individual's productivity and quality of life. Globally, diabetic retinopathy remains one of the leading causes of visual loss in the working‑age population. Teleophthalmology for diabetic retinopathy is an innovative means of retinal evaluation that allows identification of eyes at risk for visual loss, thereby preserving vision and decreasing the overall burden to the health care system. Numerous studies worldwide have found teleophthalmology to be a reliable and cost‑efficient alternative to traditional clinical examinations. It has reduced barriers to access to specialized eye care in both rural and urban communities. In teleophthalmology applications for diabetic retinopathy, it is critical that standardized protocols in image acquisition and evaluation are used to ensure low image ungradable rates and maintain the quality of images taken. Innovative imaging technology such as ultrawide field imaging has the potential to provide significant benefit with integration into teleophthalmology programs. Teleophthalmology programs for diabetic retinopathy rely on a comprehensive and multidisciplinary approach with partnerships across specialties and health care professionals to attain wider acceptability and allow evidence‑based eye care to reach a much broader population. Copyright 2017 Asia-Pacific Academy of Ophthalmology.
FISH Oracle 2: a web server for integrative visualization of genomic data in cancer research.
Mader, Malte; Simon, Ronald; Kurtz, Stefan
2014-03-31
A comprehensive view of all relevant genomic data is instrumental for understanding the complex patterns of molecular alterations typically found in cancer cells. One of the most effective ways to rapidly obtain an overview of genomic alterations in large amounts of genomic data is the integrative visualization of genomic events. We developed FISH Oracle 2, a web server for the interactive visualization of different kinds of downstream processed genomics data typically available in cancer research. A powerful search interface and a fast visualization engine provide a highly interactive visualization for such data. High quality image export enables life scientists to easily communicate their results. A comprehensive data administration component allows users to keep track of the available data sets. We applied FISH Oracle 2 to published data and found evidence that, in colorectal cancer cells, the gene TTC28 may be inactivated in two different ways, a fact that has not been published before. The interactive nature of FISH Oracle 2 and the possibility to store, select and visualize large amounts of downstream processed data support life scientists in generating hypotheses. The export of high quality images supports explanatory data visualization, simplifying the communication of new biological findings. A FISH Oracle 2 demo server and the software are available at http://www.zbh.uni-hamburg.de/fishoracle.
NASA Astrophysics Data System (ADS)
Marrugo, Andrés G.; Millán, María S.; Cristóbal, Gabriel; Gabarda, Salvador; Sorel, Michal; Sroubek, Filip
2012-06-01
Medical digital imaging has become a key element of modern health care procedures. It provides visual documentation and a permanent record for the patients, and, most importantly, the ability to extract information about many diseases. Modern ophthalmology thrives and develops on the advances in digital imaging and computing power. In this work we present an overview of recent image processing techniques proposed by the authors in the area of digital eye fundus photography. Our applications range from retinal image quality assessment to image restoration via blind deconvolution and visualization of structural changes in time between patient visits. All are proposed within a framework for improving and assisting medical practice and the forthcoming information chain in telemedicine.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Strauss, K; Nachabe, R; Racadio, J
Purpose: To define an alternative to antiscatter grid (ASG) removal in angiographic systems which achieves similar patient dose reduction as ASG removal without degrading image quality during pediatric imaging. Methods: This study was approved by the local institution animal care and use committee (IACUC). Six different digital subtraction angiography settings were evaluated that altered the mAs (100, 70, 50, 35, 25, and 17.5% of reference mAs) with and without ASG. Three pigs of 5, 15, and 20 kg (9, 15, and 17 cm abdominal thickness; smaller than a newborn, average 3-year-old, and average 10-year-old human abdomen, respectively) were imaged using the six dose settings with and without ASG. Image quality was defined as the order of vessel branch that is visible relative to the injected vessel. Five interventional radiologists evaluated all images. Image quality and patient dose were statistically compared using analysis of variance and receiver operating characteristic (ROC) analysis to define the preferred dose level and use of ASG for a minimum visibility of 2nd- or 3rd-order vessel branches. Results: ASG removal reduces dose by 26% with reduced image quality. Only with the ASG present can 3rd-order branches be visualized; 100% mAs is required for the 9 cm pig while 70% mAs is adequate for the larger pigs. 2nd-order branches can be visualized with the ASG at 17.5% mAs for all three pig sizes. Without the ASG, 50%, 35%, and 35% mAs are required for the smallest to largest pig, respectively. Conclusion: Removing the ASG reduces patient dose and image quality. Image quality can be improved with the ASG present while further reducing patient dose if an optimized radiographic technique is used. Rami Nachabe is an employee of Philips Health Care; Keith Strauss is a paid consultant of Philips Health Care.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kimpe, T; Marchessoux, C; Rostang, J
Purpose: Use of color images in medical imaging has increased significantly over the last few years. As of today there is no agreed standard on how color information needs to be visualized on medical color displays, resulting in large variability of color appearance and making consistency and quality assurance a challenge. This paper presents a proposal for an extension of DICOM GSDF towards color. Methods: Visualization needs for several color modalities (multimodality imaging, nuclear medicine, digital pathology, quantitative imaging applications…) have been studied. On this basis a proposal was made for the desired color behavior of color medical display systems, and its behavior and effect on color medical images was analyzed. Results: Several medical color modalities could benefit from perceptually linear color visualization for reasons similar to those for which GSDF was put in place for greyscale medical images. An extension of the GSDF (Greyscale Standard Display Function) to color is proposed: CSDF (color standard display function). CSDF is based on deltaE2000 and offers a perceptually linear color behavior. CSDF uses GSDF as its neutral grey behavior. A comparison between sRGB/GSDF and CSDF confirms that CSDF significantly improves perceptual color linearity. Furthermore, results also indicate that because of the improved perceptual linearity, CSDF has the potential to increase the perceived contrast of clinically relevant color features. Conclusion: There is a need for an extension of GSDF towards color visualization in order to guarantee consistency and quality. A first proposal (CSDF) for such an extension has been made. The behavior of a CSDF-calibrated display has been characterized and compared with sRGB/GSDF behavior. First results indicate that CSDF could have a positive influence on the perceived contrast of clinically relevant color features and could offer benefits for quantitative imaging applications. Authors are employees of Barco Healthcare.
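The motivation for a perceptually linear colour calibration can be illustrated with a small sketch, assuming an sRGB display model and using scikit-image's CIEDE2000 implementation; the one-dimensional grey ramp below is a simplification of the full colour-space treatment the abstract describes.

```python
# Step-to-step CIEDE2000 along a grey ramp under an assumed sRGB display is
# far from constant, which motivates a perceptually linear calibration such
# as the proposed CSDF.
import numpy as np
from skimage.color import rgb2lab, deltaE_ciede2000

levels = np.linspace(0, 1, 256)                          # digital driving levels
ramp_rgb = np.stack([levels] * 3, axis=-1)[None, :, :]   # 1 x 256 x 3 grey ramp
ramp_lab = rgb2lab(ramp_rgb)[0]                          # 256 x 3 Lab values

# Perceptual step size between consecutive driving levels.
steps = deltaE_ciede2000(ramp_lab[:-1], ramp_lab[1:])
print("min/max step dE2000:", steps.min(), steps.max())

# A perceptually linear mapping would place levels at equal increments of the
# cumulative dE2000 curve.
cum = np.concatenate([[0.0], np.cumsum(steps)])
target = np.linspace(0, cum[-1], 256)
linearized_levels = np.interp(target, cum, levels)
```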
Objective quality assessment of tone-mapped images.
Yeganeh, Hojatollah; Wang, Zhou
2013-02-01
Tone-mapping operators (TMOs) that convert high dynamic range (HDR) to low dynamic range (LDR) images provide practically useful tools for the visualization of HDR images on standard LDR displays. Different TMOs create different tone-mapped images, and a natural question is which one has the best quality. Without an appropriate quality measure, different TMOs cannot be compared, and further improvement is directionless. Subjective rating may be a reliable evaluation method, but it is expensive and time consuming, and more importantly, is difficult to embed into optimization frameworks. Here we propose an objective quality assessment algorithm for tone-mapped images by combining: 1) a multiscale signal fidelity measure on the basis of a modified structural similarity index and 2) a naturalness measure on the basis of intensity statistics of natural images. Validations using independent subject-rated image databases show good correlations between subjective ranking scores and the proposed tone-mapped image quality index (TMQI). Furthermore, we demonstrate the extended applications of TMQI using two examples: parameter tuning for TMOs and adaptive fusion of multiple tone-mapped images.
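A hedged sketch in the spirit of the proposed index follows: a structural fidelity term (plain SSIM standing in for the paper's modified multiscale measure) is combined with a naturalness term derived from global intensity statistics. The Gaussian priors and mixing weights are assumed placeholders, not the fitted models or parameters from the paper.

```python
# TMQI-like combination (illustrative only): fidelity via SSIM against the
# log-HDR image, naturalness from assumed Gaussian priors on mean and contrast.
import numpy as np
from skimage.metrics import structural_similarity

def naturalness(ldr, mean_prior=(115.0, 25.0), std_prior=(65.0, 20.0)):
    m, s = ldr.mean(), ldr.std()
    pm = np.exp(-0.5 * ((m - mean_prior[0]) / mean_prior[1]) ** 2)
    ps = np.exp(-0.5 * ((s - std_prior[0]) / std_prior[1]) ** 2)
    return pm * ps                                   # in [0, 1]

def tmqi_like(hdr_log, ldr, a=0.8, alpha=0.3, beta=0.7):
    fidelity = structural_similarity(hdr_log, ldr, data_range=ldr.max() - ldr.min())
    return a * fidelity ** alpha + (1 - a) * naturalness(ldr) ** beta

rng = np.random.default_rng(0)
hdr = rng.random((128, 128)) * 1e4                   # synthetic HDR luminance
ldr = 255 * (np.log1p(hdr) / np.log1p(hdr).max())    # a crude tone mapping
print(tmqi_like(np.log1p(hdr), ldr))
```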
A survey of quality measures for gray-scale image compression
NASA Technical Reports Server (NTRS)
Eskicioglu, Ahmet M.; Fisher, Paul S.
1993-01-01
Although a variety of techniques are available today for gray-scale image compression, a complete evaluation of these techniques cannot be made as there is no single reliable objective criterion for measuring the error in compressed images. The traditional subjective criteria are burdensome, and usually inaccurate or inconsistent. On the other hand, being the most common objective criterion, the mean square error (MSE) does not have a good correlation with the viewer's response. It is now understood that in order to have a reliable quality measure, a representative model of the complex human visual system is required. In this paper, we survey and give a classification of the criteria for the evaluation of monochrome image quality.
Model-based quantification of image quality
NASA Technical Reports Server (NTRS)
Hazra, Rajeeb; Miller, Keith W.; Park, Stephen K.
1989-01-01
In 1982, Park and Schowengerdt published an end-to-end analysis of a digital imaging system quantifying three principal degradation components: (1) image blur - blurring caused by the acquisition system, (2) aliasing - caused by insufficient sampling, and (3) reconstruction blur - blurring caused by the imperfect interpolative reconstruction. This analysis, which measures degradation as the square of the radiometric error, includes the sample-scene phase as an explicit random parameter and characterizes the image degradation caused by imperfect acquisition and reconstruction together with the effects of undersampling and random sample-scene phases. In a recent paper, Mitchell and Netravali displayed the visual effects of the above-mentioned degradations and presented a subjective analysis of their relative importance in determining image quality. The primary aim of the research is to use the analysis of Park and Schowengerdt to correlate their mathematical criteria for measuring image degradations with subjective visual criteria. Insight gained from this research can be exploited in the end-to-end design of optical systems, so that system parameters (transfer functions of the acquisition and display systems) can be designed relative to each other, to obtain the best possible results using quantitative measurements.
Joint image registration and fusion method with a gradient strength regularization
NASA Astrophysics Data System (ADS)
Lidong, Huang; Wei, Zhao; Jun, Wang
2015-05-01
Image registration is an essential process for image fusion, and fusion performance can be used to evaluate registration accuracy. We propose a maximum likelihood (ML) approach to joint image registration and fusion instead of treating them as two independent processes in the conventional way. To improve the visual quality of a fused image, a gradient strength (GS) regularization is introduced into the ML cost function. The GS of the fused image is controllable by setting the target GS value in the regularization term. This is useful because a larger target GS brings a clearer fused image and a smaller target GS makes the fused image smoother and thus restrains noise. Hence, the subjective quality of the fused image can be improved whether or not the source images are corrupted by noise. We can obtain the fused image and registration parameters successively by minimizing the cost function using an iterative optimization method. Experimental results show that our method is effective with translation, rotation, and scale parameters in the range of [-2.0, 2.0] pixel, [-1.1 deg, 1.1 deg], and [0.95, 1.05], respectively, and variances of noise smaller than 300. The results also demonstrate that our method yields a more visually pleasing fused image and higher registration accuracy compared with a state-of-the-art algorithm.
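A toy version of the joint cost can make the idea concrete, under several assumptions: Gaussian noise (so the maximum-likelihood data term reduces to a sum of squared differences), a single horizontal shift as the registration parameter, averaging as the fusion rule, and mean gradient magnitude as the gradient strength. All names and values are illustrative, not the paper's formulation.

```python
# Joint registration/fusion cost sketch with a gradient-strength regularizer.
import numpy as np
from scipy.ndimage import shift, sobel

def gradient_strength(img):
    gx, gy = sobel(img, axis=1), sobel(img, axis=0)
    return np.mean(np.hypot(gx, gy))

def joint_cost(tx, fixed, moving, target_gs, lam=0.05):
    warped = shift(moving, (0, tx), order=1, mode="nearest")
    fused = 0.5 * (fixed + warped)                      # simple fusion rule
    data_term = np.mean((fixed - warped) ** 2)          # ML term under Gaussian noise
    gs_term = (gradient_strength(fused) - target_gs) ** 2
    return data_term + lam * gs_term

rng = np.random.default_rng(0)
fixed = rng.random((64, 64))
moving = shift(fixed, (0, 1.3), order=1, mode="nearest") + 0.02 * rng.random((64, 64))

target_gs = gradient_strength(fixed)                    # aim for a crisp result
candidates = np.linspace(-2.0, 2.0, 81)
best_tx = min(candidates, key=lambda t: joint_cost(t, fixed, moving, target_gs))
print("estimated shift:", best_tx)                      # about -1.3
```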
Johnson, Jeffrey P; Krupinski, Elizabeth A; Yan, Michelle; Roehrig, Hans; Graham, Anna R; Weinstein, Ronald S
2011-02-01
A major issue in telepathology is the extremely large and growing size of digitized "virtual" slides, which can require several gigabytes of storage and cause significant delays in data transmission for remote image interpretation and interactive visualization by pathologists. Compression can reduce this massive amount of virtual slide data, but reversible (lossless) methods limit data reduction to less than 50%, while lossy compression can degrade image quality and diagnostic accuracy. "Visually lossless" compression offers the potential for using higher compression levels without noticeable artifacts, but requires a rate-control strategy that adapts to image content and loss visibility. We investigated the utility of a visual discrimination model (VDM) and other distortion metrics for predicting JPEG 2000 bit rates corresponding to visually lossless compression of virtual slides for breast biopsy specimens. Threshold bit rates were determined experimentally with human observers for a variety of tissue regions cropped from virtual slides. For test images compressed to their visually lossless thresholds, just-noticeable difference (JND) metrics computed by the VDM were nearly constant at the 95th percentile level or higher, and were significantly less variable than peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) metrics. Our results suggest that VDM metrics could be used to guide the compression of virtual slides to achieve visually lossless compression while providing 5-12 times the data reduction of reversible methods.
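The sweep of compression levels against objective metrics can be sketched as below, using Pillow's JPEG codec as a stand-in for JPEG 2000 and a synthetic tile; the study's visually lossless thresholds come from human observers and a visual discrimination model, not from PSNR or SSIM.

```python
# Sweep compression levels and track bit rate, PSNR and SSIM for a tile.
import io
import numpy as np
from PIL import Image
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
tile = (rng.random((256, 256)) * 255).astype(np.uint8)   # placeholder tissue tile

for quality in (95, 85, 75, 60, 40):
    buf = io.BytesIO()
    Image.fromarray(tile).save(buf, format="JPEG", quality=quality)
    decoded = np.asarray(Image.open(io.BytesIO(buf.getvalue())))
    psnr = peak_signal_noise_ratio(tile, decoded, data_range=255)
    ssim = structural_similarity(tile, decoded, data_range=255)
    bpp = 8 * buf.getbuffer().nbytes / tile.size          # bits per pixel
    print(f"q={quality:2d}  {bpp:5.2f} bpp  PSNR={psnr:5.1f} dB  SSIM={ssim:.3f}")
```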
Information recovery through image sequence fusion under wavelet transformation
NASA Astrophysics Data System (ADS)
He, Qiang
2010-04-01
Remote sensing is widely applied to provide information about areas with limited ground access, with applications such as assessing the destruction from natural disasters and planning relief and recovery operations. However, the collection of aerial digital images is constrained by bad weather, atmospheric conditions, and unstable cameras or camcorders. Therefore, recovering information from low-quality remote sensing images and enhancing image quality become very important for many visual understanding tasks, such as feature detection, object segmentation, and object recognition. The quality of remote sensing imagery can be improved through the meaningful combination of images captured from different sensors or under different conditions, that is, through information fusion. Here we particularly address information fusion for remote sensing images under multi-resolution analysis of the employed image sequences. Image fusion recovers more complete information by integrating multiple images captured from the same scene. Through image fusion, a new image with higher resolution, or one more perceptible to humans and machines, is created from a time series of low-quality images based on image registration between different video frames.
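A common multi-resolution fusion rule consistent with the description above can be sketched with PyWavelets: decompose two co-registered frames, keep the maximum-magnitude detail coefficients, average the approximation, and reconstruct. The wavelet choice and frame content are assumptions.

```python
# Wavelet-domain fusion of two co-registered frames (max-abs detail rule).
import numpy as np
import pywt

def fuse_pair(img_a, img_b, wavelet="db2", level=3):
    ca = pywt.wavedec2(img_a, wavelet, level=level)
    cb = pywt.wavedec2(img_b, wavelet, level=level)
    fused = [0.5 * (ca[0] + cb[0])]                       # average approximation
    for (ha, va, da), (hb, vb, db) in zip(ca[1:], cb[1:]):
        pick = lambda x, y: np.where(np.abs(x) >= np.abs(y), x, y)
        fused.append((pick(ha, hb), pick(va, vb), pick(da, db)))
    return pywt.waverec2(fused, wavelet)

rng = np.random.default_rng(0)
frame1 = rng.random((128, 128))           # e.g. a blurred, noisy frame
frame2 = rng.random((128, 128))           # another registered frame of the scene
fused = fuse_pair(frame1, frame2)
print(fused.shape)
```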
Neubauer, Jakob; Benndorf, Matthias; Lang, Hannah; Lampert, Florian; Kemna, Lars; Konstantinidis, Lukas; Neubauer, Claudia; Reising, Kilian; Zajonc, Horst; Kotter, Elmar; Langer, Mathias; Goerke, Sebastian M
2015-08-01
To compare the visualization of cortical fractures, cortical defects, and orthopedic screws in a dedicated extremity flat-panel computed tomography (FPCT) scanner and a multidetector computed tomography (MDCT) scanner. We used feet of European roe deer as phantoms for cortical fractures, cortical defects, and implanted orthopedic screws. FPCT and MDCT scans were performed with equivalent dose settings. Six observers rated the scans according to number of fragments, size of defects, size of defects opposite orthopedic screws, and the length of different screws. The image quality regarding depiction of the cortical bone was assessed. The gold standard (real number of fragments) was evaluated by autopsy. The correlation of reader assessment of fragments, cortical defects, and screws with the gold standard was similar for FPCT and MDCT. Three readers rated the subjective image quality of the MDCT to be higher, whereas the others showed no preferences. Although the image quality was rated higher in the MDCT than in the FPCT by 3 out of 6 observers, both modalities proved to be comparable regarding the visualization of cortical fractures, cortical defects, and orthopedic screws and of use to musculoskeletal radiology regarding fracture detection and postsurgical evaluation in our experimental setting.
NASA Astrophysics Data System (ADS)
Liansheng, Sui; Bei, Zhou; Zhanmin, Wang; Ailing, Tian
2017-05-01
A novel optical color image watermarking scheme considering human visual characteristics is presented in the gyrator transform domain. Initially, an appropriate reference image is constructed from significant blocks chosen from the grayscale host image by evaluating visual characteristics such as visual entropy and edge entropy. The three components of the color watermark image are compressed using compressive sensing, and the corresponding results are combined to form the grayscale watermark. Then, the frequency coefficients of the watermark image are fused into the frequency data of the gyrator-transformed reference image. The fused result is inversely transformed and partitioned, and eventually the watermarked image is obtained by mapping the resultant blocks back to their original positions. The scheme can reconstruct the watermark with high perceptual quality and has enhanced security due to the high sensitivity of the secret keys. Importantly, the scheme can be implemented easily under the framework of double random phase encoding with the 4f optical system. To the best of our knowledge, this is the first report of embedding a color watermark into a grayscale host image, which an attacker is unlikely to expect. Simulation results are given to verify the feasibility of the scheme and its superior performance in terms of noise and occlusion robustness.
A comparison of sequential and spiral scanning techniques in brain CT.
Pace, Ivana; Zarb, Francis
2015-01-01
To evaluate and compare image quality and radiation dose of sequential computed tomography (CT) examinations of the brain and spiral CT examinations of the brain imaged on a GE HiSpeed NX/I Dual Slice 2CT scanner. A random sample of 40 patients referred for CT examination of the brain was selected and divided into 2 groups. Half of the patients were scanned using the sequential technique; the other half were scanned using the spiral technique. Radiation dose data—both the computed tomography dose index (CTDI) and the dose length product (DLP)—were recorded on a checklist at the end of each examination. Using the European Guidelines on Quality Criteria for Computed Tomography, 4 radiologists conducted a visual grading analysis and rated the level of visibility of 6 anatomical structures considered necessary to produce images of high quality. The mean CTDI(vol) and DLP values were statistically significantly higher (P <.05) with the sequential scans (CTDI(vol): 22.06 mGy; DLP: 304.60 mGy • cm) than with the spiral scans (CTDI(vol): 14.94 mGy; DLP: 229.10 mGy • cm). The mean image quality rating scores for all criteria of the sequential scanning technique were statistically significantly higher (P <.05) in the visual grading analysis than those of the spiral scanning technique. In this local study, the sequential technique was preferred over the spiral technique for both overall image quality and differentiation between gray and white matter in brain CT scans. Other similar studies counter this finding. The radiation dose seen with the sequential CT scanning technique was significantly higher than that seen with the spiral CT scanning technique. However, image quality with the sequential technique was statistically significantly superior (P <.05).
Hastings, Gareth D; Marsack, Jason D; Nguyen, Lan Chi; Cheng, Han; Applegate, Raymond A
2017-05-01
To prospectively examine whether using the visual image quality metric, visual Strehl (VSX), to optimise objective refraction from wavefront error measurements can provide equivalent or better visual performance than subjective refraction and which refraction is preferred in free viewing. Subjective refractions and wavefront aberrations were measured on 40 visually-normal eyes of 20 subjects, through natural and dilated pupils. For each eye a sphere, cylinder, and axis prescription was also objectively determined that optimised visual image quality (VSX) for the measured wavefront error. High contrast (HC) and low contrast (LC) logMAR visual acuity (VA) and short-term monocular distance vision preference were recorded and compared between the VSX-objective and subjective prescriptions both undilated and dilated. For 36 myopic eyes, clinically equivalent (and not statistically different) HC VA was provided with both the objective and subjective refractions (undilated mean ± S.D. was -0.06 ± 0.04 with both refractions; dilated was -0.05 ± 0.04 with the objective, and -0.05 ± 0.05 with the subjective refraction). LC logMAR VA provided by the objective refraction was also clinically equivalent and not statistically different to that provided by the subjective refraction through both natural and dilated pupils for myopic eyes. In free viewing the objective prescription was preferred over the subjective by 72% of myopic eyes when not dilated. For four habitually undercorrected high hyperopic eyes, the VSX-objective refraction was more positive in spherical power and VA poorer than with the subjective refraction. A method of simultaneously optimising sphere, cylinder, and axis from wavefront error measurements, using the visual image quality metric VSX, is described. In myopic subjects, visual performance, as measured by HC and LC VA, with this VSX-objective refraction was found equivalent to that provided by subjective refraction, and was typically preferred over subjective refraction. Subjective refraction was preferred by habitually undercorrected hyperopic eyes. © 2017 The Authors Ophthalmic & Physiological Optics © 2017 The College of Optometrists.
NASA Astrophysics Data System (ADS)
Slot Thing, Rune; Bernchou, Uffe; Mainegra-Hing, Ernesto; Hansen, Olfred; Brink, Carsten
2016-08-01
A comprehensive artefact correction method for clinical cone beam CT (CBCT) images acquired for image guided radiation therapy (IGRT) on a commercial system is presented. The method is demonstrated to reduce artefacts and recover CT-like Hounsfield units (HU) in reconstructed CBCT images of five lung cancer patients. Projection image based artefact corrections of image lag, detector scatter, body scatter and beam hardening are described and applied to CBCT images of five lung cancer patients. Image quality is evaluated through visual appearance of the reconstructed images, HU-correspondence with the planning CT images, and total volume HU error. Artefacts are reduced and CT-like HUs are recovered in the artefact corrected CBCT images. Visual inspection confirms that artefacts are indeed suppressed by the proposed method, and the HU root mean square difference between reconstructed CBCTs and the reference CT images are reduced by 31% when using the artefact corrections compared to the standard clinical CBCT reconstruction. A versatile artefact correction method for clinical CBCT images acquired for IGRT has been developed. HU values are recovered in the corrected CBCT images. The proposed method relies on post processing of clinical projection images, and does not require patient specific optimisation. It is thus a powerful tool for image quality improvement of large numbers of CBCT images.
Piippo-Huotari, Oili; Norrman, Eva; Anderzén-Carlsson, Agneta; Geijer, Håkan
2018-05-01
The radiation dose for patients can be reduced with many methods, one of which is to use abdominal compression. In this study, the radiation dose and image quality for a new patient-controlled compression device were compared with conventional compression and compression in the prone position. To compare the radiation dose and image quality of patient-controlled compression with those of conventional and prone compression in general radiography. An experimental design with a quantitative approach was used. After obtaining the approval of the ethics committee, a consecutive sample of 48 patients was examined with the standard clinical urography protocol. The radiation doses were measured as dose-area product and analyzed with a paired t-test. The image quality was evaluated by visual grading analysis. Four radiologists evaluated each image individually by scoring nine criteria modified from the European quality criteria for diagnostic radiographic images. There was no significant difference in radiation dose or image quality between conventional and patient-controlled compression. The prone position resulted in both a higher dose and inferior image quality. Patient-controlled compression gave dose levels similar to conventional compression and lower than prone compression. Image quality was similar with both patient-controlled and conventional compression and was judged to be better than in the prone position.
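The paired dose comparison described above can be sketched as follows; the dose-area product values are invented for illustration, since the study's data are not given in the abstract.

```python
# Paired t-test on per-patient dose-area product values (placeholder data).
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)
dap_conventional = rng.normal(600, 120, size=48)           # mGy*cm^2, illustrative
dap_patient_controlled = dap_conventional + rng.normal(0, 40, size=48)

t_stat, p_value = ttest_rel(dap_conventional, dap_patient_controlled)
print(f"paired t-test: t = {t_stat:.2f}, p = {p_value:.3f}")
```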
Clinical evaluation of CR versus plain film for neonatal ICU applications
NASA Astrophysics Data System (ADS)
Andriole, Katherine P.; Brasch, Robert C.; Gooding, Charles A.; Gould, Robert G.; Huang, H. K.
1995-05-01
The clinical utility of computed radiography (CR) versus screen-film for neonatal intensive care unit (ICU) applications is investigated. The latest versions of standard ST-V and high-resolution HR-V CR imaging plates were compared via measurements of image contrast, spatial resolution and signal-to-noise. The ST-V imaging plate was found to have equivalent spatial resolution and object detectability at a lower required dose than the HR-V, and was therefore chosen as the CR plate to use in clinical trials in which a modified film cassette containing the CR imaging plate, a conventional screen and film was utilized. For 50 portable neonatal chest examinations, plain film was subjectively compared to the perfectly matched, simultaneously obtained CR hardcopy and softcopy images. Grading of overall image quality was on a scale of one (poor) to five (excellent). Readers rated the visualization of various structures in the chest (i.e., lung parenchyma, pulmonary vasculature, tubes/lines) as well as the visualization of pathologic findings. Preliminary results indicate that the image quality of both CR soft and hardcopy are comparable to plain film and that CR may be a suitable alternative to screen-film imaging for portable neonatal chest x rays.
Panta, Sandeep R; Wang, Runtang; Fries, Jill; Kalyanam, Ravi; Speer, Nicole; Banich, Marie; Kiehl, Kent; King, Margaret; Milham, Michael; Wager, Tor D; Turner, Jessica A; Plis, Sergey M; Calhoun, Vince D
2016-01-01
In this paper we propose a web-based approach for quick visualization of big data from brain magnetic resonance imaging (MRI) scans using a combination of an automated image capture and processing system, nonlinear embedding, and interactive data visualization tools. We draw upon thousands of MRI scans captured via the COllaborative Imaging and Neuroinformatics Suite (COINS). We then interface the output of several analysis pipelines based on structural and functional data to a t-distributed stochastic neighbor embedding (t-SNE) algorithm which reduces the number of dimensions for each scan in the input data set to two dimensions while preserving the local structure of data sets. Finally, we interactively display the output of this approach via a web page based on the Data-Driven Documents (D3) JavaScript library. Two distinct approaches were used to visualize the data. In the first approach, we computed multiple quality control (QC) values from pre-processed data, which were used as inputs to the t-SNE algorithm. This approach helps in assessing the quality of each data set relative to others. In the second case, computed variables of interest (e.g., brain volume or voxel values from segmented gray matter images) were used as inputs to the t-SNE algorithm. This approach helps in identifying interesting patterns in the data sets. We demonstrate these approaches using multiple examples from over 10,000 data sets including (1) quality control measures calculated from phantom data over time, (2) quality control data from human functional MRI data across various studies, scanners, and sites, and (3) volumetric and density measures from human structural MRI data across various studies, scanners and sites. Results from (1) and (2) show the potential of our approach to combine t-SNE data reduction with interactive color coding of variables of interest to quickly identify visually unique clusters of data (e.g., data sets with poor QC or clustering of data by site). Results from (3) demonstrate interesting patterns of gray matter and volume, and evaluate how they map onto variables including scanners, age, and gender. In sum, the proposed approach allows researchers to rapidly identify and extract meaningful information from big data sets. Such tools are becoming increasingly important as datasets grow larger.
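The core reduction step, as understood from the description, can be sketched with scikit-learn's t-SNE on a matrix of per-scan QC measures; the COINS capture layer and D3 display are outside this snippet, and the feature matrix is a placeholder.

```python
# Embed per-scan QC measures into 2-D with t-SNE for interactive plotting.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
qc_measures = rng.random((300, 12))       # rows = scans, cols = QC metrics (placeholder)

embedding = TSNE(n_components=2, perplexity=30, init="pca",
                 random_state=0).fit_transform(qc_measures)
print(embedding.shape)                    # (300, 2), ready for a scatter plot
```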
The study of surgical image quality evaluation system by subjective quality factor method
NASA Astrophysics Data System (ADS)
Zhang, Jian J.; Xuan, Jason R.; Yang, Xirong; Yu, Honggang; Koullick, Edouard
2016-03-01
The GreenLight™ procedure is an effective and economical treatment for benign prostatic hyperplasia (BPH); almost a million patients have been treated with GreenLight™ worldwide. During the surgical procedure, the surgeon or physician relies on the monitoring video system to survey and confirm surgical progress. Several obstructions can greatly affect the image quality of the monitoring video, such as laser glare from tissue and body fluid, air bubbles and debris generated by tissue evaporation, and bleeding, to name a few. In order to improve the physician's visual experience of a laser surgical procedure, the system performance parameters related to image quality need to be well defined. However, since image quality is the integrated set of perceptions of the overall degree of excellence of an image, or in other words, the perceptually weighted combination of significant attributes (contrast, graininess, etc.) of an image when considered in its marketplace or application, there is no standard definition of overall image or video quality, especially for the no-reference case (without a standard chart as reference). In this study, the Subjective Quality Factor (SQF) and acutance are used for no-reference image quality evaluation. Basic image quality parameters, such as sharpness, color accuracy, size of obstruction, and transmission of obstruction, are used as subparameters to define the rating scale for image quality evaluation or comparison. Sample image groups were evaluated by human observers according to the rating scale. Surveys of physician groups were also conducted with lab-generated sample videos. The study shows that human subjective perception is a trustworthy way of evaluating image quality. A more systematic investigation of the relationship between video quality and the image quality of each frame will be conducted as a future study.
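A simple no-reference sharpness measure in the spirit of the acutance sub-parameter mentioned above is sketched below; the exact SQF and acutance formulations used in the study are not given in the abstract, so the normalization and test frames are assumptions.

```python
# Gradient-based sharpness ("acutance"-like) measure for a single video frame.
import numpy as np

def acutance(frame):
    gy, gx = np.gradient(frame.astype(float))
    return np.mean(np.hypot(gx, gy)) / (frame.mean() + 1e-6)

rng = np.random.default_rng(0)
sharp = rng.random((240, 320))
blurred = 0.25 * (sharp + np.roll(sharp, 1, 0) + np.roll(sharp, 1, 1)
                  + np.roll(np.roll(sharp, 1, 0), 1, 1))
print(acutance(sharp), acutance(blurred))   # the blurred frame scores lower
```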
Wavefront-Guided Scleral Lens Correction in Keratoconus
Marsack, Jason D.; Ravikumar, Ayeswarya; Nguyen, Chi; Ticak, Anita; Koenig, Darren E.; Elswick, James D.; Applegate, Raymond A.
2014-01-01
Purpose: To examine the performance of state-of-the-art wavefront-guided scleral contact lenses (wfgSCLs) on a sample of keratoconic eyes, with emphasis on performance quantified with visual quality metrics; and to provide a detailed discussion of the process used to design, manufacture and evaluate wfgSCLs. Methods: Fourteen eyes of 7 subjects with keratoconus were enrolled and a wfgSCL was designed for each eye. High-contrast visual acuity and visual quality metrics were used to assess the on-eye performance of the lenses. Results: The wfgSCL provided statistically lower levels of both lower-order RMS (p < 0.001) and higher-order RMS (p < 0.02) than an intermediate spherical equivalent scleral contact lens. The wfgSCL provided lower levels of lower-order RMS than a normal group of well-corrected observers (p << 0.001). However, the wfgSCL does not provide less higher-order RMS than the normal group (p = 0.41). Of the 14 eyes studied, 10 successfully reached the exit criteria, achieving residual higher-order root mean square wavefront error (HORMS) less than or within 1 SD of the levels experienced by normal, age-matched subjects. In addition, measures of visual image quality (logVSX, logNS and logLIB) for the 10 eyes were well distributed within the range of values seen in normal eyes. However, visual performance as measured by high contrast acuity did not reach normal, age-matched levels, which is in agreement with prior results associated with the acute application of wavefront correction to KC eyes. Conclusions: Wavefront-guided scleral contact lenses are capable of optically compensating for the deleterious effects of higher-order aberration concomitant with the disease, and can provide visual image quality equivalent to that seen in normal eyes. Longer duration studies are needed to assess whether the visual system of the highly aberrated eye wearing a wfgSCL is capable of producing visual performance levels typical of the normal population. PMID:24830371
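The residual wavefront summary used above can be illustrated with a short calculation: RMS wavefront error is the quadrature sum of Noll-normalized Zernike coefficients, with higher-order RMS taken over terms above second radial order. The coefficient values are invented for illustration.

```python
# Lower-order and higher-order RMS wavefront error from Zernike coefficients.
import numpy as np

coeffs_um = {            # Zernike coefficient (micrometres) by (n, m), illustrative
    (2, -2): 0.05, (2, 0): -0.30, (2, 2): 0.08,      # lower order (defocus/astigmatism)
    (3, -1): 0.12, (3, 1): -0.09, (4, 0): 0.06,      # higher order (coma, spherical)
}

lo = [c for (n, _), c in coeffs_um.items() if n <= 2]
ho = [c for (n, _), c in coeffs_um.items() if n > 2]
print("LO RMS (um):", np.sqrt(np.sum(np.square(lo))))
print("HO RMS (um):", np.sqrt(np.sum(np.square(ho))))
```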
Propagation of registration uncertainty during multi-fraction cervical cancer brachytherapy
NASA Astrophysics Data System (ADS)
Amir-Khalili, A.; Hamarneh, G.; Zakariaee, R.; Spadinger, I.; Abugharbieh, R.
2017-10-01
Multi-fraction cervical cancer brachytherapy is a form of image-guided radiotherapy that heavily relies on 3D imaging during treatment planning, delivery, and quality control. In this context, deformable image registration can increase the accuracy of dosimetric evaluations, provided that one can account for the uncertainties associated with the registration process. To enable such capability, we propose a mathematical framework that first estimates the registration uncertainty and subsequently propagates the effects of the computed uncertainties from the registration stage through to the visualizations, organ segmentations, and dosimetric evaluations. To ensure the practicality of our proposed framework in real world image-guided radiotherapy contexts, we implemented our technique via a computationally efficient and generalizable algorithm that is compatible with existing deformable image registration software. In our clinical context of fractionated cervical cancer brachytherapy, we perform a retrospective analysis on 37 patients and present evidence that our proposed methodology for computing and propagating registration uncertainties may be beneficial during therapy planning and quality control. Specifically, we quantify and visualize the influence of registration uncertainty on dosimetric analysis during the computation of the total accumulated radiation dose on the bladder wall. We further show how registration uncertainty may be leveraged into enhanced visualizations that depict the quality of the registration and highlight potential deviations from the treatment plan prior to the delivery of radiation treatment. Finally, we show that we can improve the transfer of delineated volumetric organ segmentation labels from one fraction to the next by encoding the computed registration uncertainties into the segmentation labels.
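A two-dimensional toy of the propagation idea, under loose assumptions: given a registration displacement field and a per-voxel uncertainty estimate, sample perturbed fields, map the fraction dose through each sample, and summarize the accumulated dose as a per-voxel mean and standard deviation. The paper's framework is more sophisticated; the distributions and names below are assumptions.

```python
# Monte Carlo propagation of registration uncertainty into mapped dose (2-D toy).
import numpy as np
from scipy.ndimage import map_coordinates

rng = np.random.default_rng(0)
shape = (64, 64)
dose_fraction = rng.random(shape)                 # dose from one fraction (placeholder)
disp = rng.normal(0.0, 1.0, size=(2, *shape))     # estimated displacement field (voxels)
sigma = 0.5 * np.ones((2, *shape))                # assumed per-voxel registration uncertainty

grid = np.indices(shape).astype(float)
samples = []
for _ in range(100):                              # Monte Carlo samples
    perturbed = disp + rng.normal(0.0, sigma)
    coords = grid + perturbed
    samples.append(map_coordinates(dose_fraction, coords, order=1, mode="nearest"))

samples = np.stack(samples)
mapped_mean = samples.mean(axis=0)                # expected mapped dose
mapped_std = samples.std(axis=0)                  # dose uncertainty from registration
print(mapped_std.max())
```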
Iwashita, Takuji; Nakai, Yousuke; Lee, John G; Park, Do Hyun; Muthusamy, V Raman; Chang, Kenneth J
2012-02-01
Multiple diagnostic and therapeutic endoscopic ultrasound (EUS) procedures have been widely performed using a standard oblique-viewing (OV) curvilinear array (CLA) echoendoscope. Recently, a new, forward-viewing (FV) CLA was developed, with the advantages of improved endoscopic viewing and manipulation of devices. However, the FV-CLA echoendoscope has a narrower ultrasound scanning field, and lacks an elevator, which might represent obstacles for clinical use. The aim of this study was to compare the FV-CLA echoendoscope to the OV-CLA echoendoscope for EUS imaging of abdominal organs, and to assess the feasibility of EUS-guided interventions using the FV-CLA echoendoscope. EUS examinations were first performed and recorded using the OV-CLA echoendoscope, followed immediately by the FV-CLA echoendoscope. Video recordings were then assessed by two independent endosonographers in a blinded fashion. The EUS visualization and image quality of specific abdominal organs/structures were scored. Any indicated fine-needle aspiration (FNA) or intervention was performed using the FV-CLA echoendoscope, with the OV-CLA echoendoscope as salvage upon failure. A total of 21 patients were examined in the study. Both echoendoscopes had similar visualization and image quality for all organs/structures, except the common hepatic duct (CHD), which was seen significantly better with the FV-CLA echoendoscope. EUS interventions were conducted in eight patients, including FNA of pancreatic mass (3), pancreatic cyst (3), and cystgastrostomy (2). The FV-CLA echoendoscope was successful in seven patients. One failed FNA of the pancreatic head cyst was salvaged using the OV-CLA echoendoscope. There were no differences between the FV-CLA echoendoscope and the OV-CLA echoendoscope in visualization or image quality on upper EUS, except for the superior image quality of CHD using the FV-CLA echoendoscope. Therefore, the disadvantages of the FV-CLA echoendoscope appear minimal in light of the potential advantages. © 2011 Journal of Gastroenterology and Hepatology Foundation and Blackwell Publishing Asia Pty Ltd.
New procedures to evaluate visually lossless compression for display systems
NASA Astrophysics Data System (ADS)
Stolitzka, Dale F.; Schelkens, Peter; Bruylants, Tim
2017-09-01
Visually lossless image coding in isochronous display streaming or plesiochronous networks reduces link complexity and power consumption and increases available link bandwidth. A new set of codecs developed within the last four years promises a new level of coding quality, but requires new evaluation techniques that are sufficiently sensitive to the small artifacts or color variations induced by this new breed of codecs. This paper begins with a summary of the new ISO/IEC 29170-2, a procedure for evaluation of lossless coding, and reports the new work by JPEG to extend the procedure in two important ways: for HDR content and for evaluating the differences between still images, panning images and image sequences. ISO/IEC 29170-2 relies on processing test images through a well-defined process chain for subjective, forced-choice psychophysical experiments. The procedure sets an acceptable quality level equal to one just noticeable difference. Traditional image and video coding evaluation techniques, such as those used for television evaluation, have not proven sufficiently sensitive to the small artifacts that may be induced by this breed of codecs. In 2015, JPEG received new requirements to expand evaluation of visually lossless coding for high dynamic range images, slowly moving images, i.e., panning, and image sequences. These requirements are the basis for new amendments of the ISO/IEC 29170-2 procedures described in this paper. These amendments promise to be highly useful for the new content in television and cinema mezzanine networks. The amendments passed the final ballot in April 2017 and are on track to be published in 2018.
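For context, a minimal way to score a forced-choice "visually lossless" experiment is to test whether observers pick out the coded image above chance. The 75% criterion and the normal-approximation interval below are illustrative choices, not thresholds mandated by ISO/IEC 29170-2.

```python
# Toy 2AFC scoring: a coded image is treated as visually lossless if the upper
# confidence bound of the detection rate stays below an illustrative criterion.
import numpy as np

def visually_lossless(correct, trials, criterion=0.75, z=1.96):
    p_hat = correct / trials
    half_width = z * np.sqrt(p_hat * (1.0 - p_hat) / trials)   # normal-approximation CI
    return (p_hat + half_width) < criterion

print(visually_lossless(correct=15, trials=30))   # chance-level performance -> True
```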
NASA Technical Reports Server (NTRS)
Banks, Daniel W.
2008-01-01
Infrared thermography is a powerful tool for investigating fluid mechanics on flight vehicles; it can be used to visualize and characterize transition, shock impingement, separation, and related phenomena. An updated onboard F-15-based system was used to visualize a supersonic boundary-layer transition test article with Tollmien-Schlichting and cross-flow dominant flow fields. Digital recording improves image quality and analysis capability: it allows accurate quantitative (temperature) measurements, and greater enhancement through image processing allows analysis of smaller-scale phenomena.
NASA Technical Reports Server (NTRS)
Martin, Russel A.; Ahumada, Albert J., Jr.; Larimer, James O.
1992-01-01
This paper describes the design and operation of a new simulation model for color matrix display development. It models the physical structure, the signal processing, and the visual perception of static displays, to allow optimization of display design parameters through image quality measures. The model is simple, implemented in the Mathematica computer language, and highly modular. Signal processing modules operate on the original image. The hardware modules describe backlights and filters, the pixel shape, and the tiling of the pixels over the display. Small regions of the displayed image can be visualized on a CRT. Visual perception modules assume static foveal images. The image is converted into cone catches and then into luminance, red-green, and blue-yellow images. A Haar transform pyramid separates the three images into spatial frequency and direction-specific channels. The channels are scaled by weights taken from human contrast sensitivity measurements of chromatic and luminance mechanisms at similar frequencies and orientations. Each channel provides a detectability measure. These measures allow the comparison of images displayed on prospective devices and, by that, the optimization of display designs.
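A toy version of the channel-based detectability stage might look like the sketch below: a Haar pyramid splits a luminance-difference image into frequency- and orientation-specific bands, each band is weighted, and the weighted responses are pooled. The per-level weights and the max-pooling rule are placeholders, not the contrast-sensitivity values or pooling used in the model described above.

```python
# Channel-based detectability sketch: Haar pyramid + per-channel weighting + pooling.
import numpy as np
import pywt

def detectability(lum_ref, lum_test, levels=4, weights=None):
    diff = lum_test - lum_ref
    coeffs = pywt.wavedec2(diff, 'haar', level=levels)
    weights = weights or {lvl: 1.0 for lvl in range(1, levels + 1)}   # placeholder CSF weights
    responses = []
    for lvl, detail in enumerate(coeffs[1:], start=1):                # skip the approximation band
        for band in detail:                                           # H, V, D orientation channels
            responses.append(weights[lvl] * np.sqrt(np.mean(band ** 2)))
    return max(responses)                                             # simple max-pooling over channels
```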
Twofold processing for denoising ultrasound medical images.
Kishore, P V V; Kumar, K V V; Kumar, D Anil; Prasad, M V D; Goutham, E N D; Rahul, R; Krishna, C B S Vamsi; Sandeep, Y
2015-01-01
Ultrasound (US) medical imaging non-invasively pictures the inside of the human body for disease diagnostics. Speckle noise corrupts ultrasound images, degrading their visual quality. A twofold processing algorithm is proposed in this work to reduce this multiplicative speckle noise. The first fold applies block-based thresholding, both hard (BHT) and soft (BST), to pixels in the wavelet domain with 8, 16, 32 and 64 non-overlapping block sizes. This first-fold process is effective at reducing speckle but also induces blurring of the object of interest. The second fold restores object boundaries and texture with adaptive wavelet fusion. Restoration of the degraded object in the block-thresholded US image is carried out through wavelet-coefficient fusion of the object in the original US image and the block-thresholded US image. Fusion rules and wavelet decomposition levels are made adaptive for each block using gradient histograms with a normalized differential mean (NDF) to introduce the highest level of contrast between the denoised pixels and the object pixels in the resultant image. The proposed twofold methods are thus named adaptive NDF block fusion with hard and soft thresholding (ANBF-HT and ANBF-ST). The results indicate a clear improvement in visual quality with the proposed twofold processing, where the first fold removes noise and the second fold restores object properties. Peak signal-to-noise ratio (PSNR), normalized cross-correlation coefficient (NCC), edge strength (ES), image quality index (IQI) and structural similarity index (SSIM) measure the quantitative quality of the twofold processing technique. The proposed method is validated by comparison with anisotropic diffusion (AD), total variational filtering (TVF) and empirical mode decomposition (EMD) for enhancement of US images. The US images were provided by AMMA hospital radiology labs at Vijayawada, India.
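A minimal sketch of the first-fold step (block-wise thresholding in the wavelet domain) is shown below using PyWavelets. The universal threshold with a median-based noise estimate is an illustrative choice; the paper's exact thresholding rule and the second-fold fusion stage are not reproduced.

```python
# Block-wise soft thresholding in the wavelet domain (first fold only, sketch).
import numpy as np
import pywt

def block_soft_threshold(img, block=32, wavelet='db4', level=2):
    out = np.zeros_like(img, dtype=float)
    for y in range(0, img.shape[0], block):
        for x in range(0, img.shape[1], block):
            tile = img[y:y + block, x:x + block].astype(float)
            coeffs = pywt.wavedec2(tile, wavelet, level=level)
            sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745       # noise estimate from the HH band
            thr = sigma * np.sqrt(2 * np.log(tile.size))             # universal threshold (assumption)
            den = [coeffs[0]] + [tuple(pywt.threshold(b, thr, mode='soft') for b in d)
                                 for d in coeffs[1:]]
            rec = pywt.waverec2(den, wavelet)
            out[y:y + block, x:x + block] = rec[:tile.shape[0], :tile.shape[1]]
    return out
```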
Estimated spectrum adaptive postfilter and the iterative prepost filtering algorithms
NASA Technical Reports Server (NTRS)
Linares, Irving (Inventor)
2004-01-01
The invention presents the Estimated Spectrum Adaptive Postfilter (ESAP) and the Iterative Prepost Filter (IPF) algorithms. These algorithms model a number of image-adaptive post-filtering and pre-post filtering methods. They are designed to minimize the Discrete Cosine Transform (DCT) blocking distortion caused when images are highly compressed with the Joint Photographic Experts Group (JPEG) standard. The ESAP and IPF techniques of the present invention minimize the mean square error (MSE) to improve the objective and subjective quality of low-bit-rate JPEG gray-scale images while simultaneously enhancing perceptual visual quality with respect to baseline JPEG images.
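The patented ESAP/IPF methods are not reproduced here, but a generic deblocking post-filter illustrates the kind of operation involved: smoothing across 8×8 block boundaries only where the boundary step is small enough to be attributed to quantization rather than a real edge. All parameters below are illustrative.

```python
# Generic DCT-block deblocking sketch (not the ESAP or IPF algorithms).
import numpy as np

def deblock(img, block=8, max_step=8.0):
    out = img.astype(float).copy()
    for b in range(block, img.shape[1], block):          # vertical block boundaries
        step = out[:, b] - out[:, b - 1]
        mask = np.abs(step) < max_step                   # small steps are treated as blocking artifacts
        out[:, b - 1][mask] += step[mask] / 4.0
        out[:, b][mask] -= step[mask] / 4.0
    for b in range(block, img.shape[0], block):          # horizontal block boundaries
        step = out[b, :] - out[b - 1, :]
        mask = np.abs(step) < max_step
        out[b - 1, :][mask] += step[mask] / 4.0
        out[b, :][mask] -= step[mask] / 4.0
    return out
```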
How does C-VIEW image quality compare with conventional 2D FFDM?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nelson, Jeffrey S., E-mail: nelson.jeffrey@duke.edu; Wells, Jered R.; Baker, Jay A.
Purpose: The FDA approved the use of digital breast tomosynthesis (DBT) in 2011 as an adjunct to 2D full field digital mammography (FFDM) with the constraint that all DBT acquisitions must be paired with a 2D image to assure adequate interpretative information is provided. Recently manufacturers have developed methods to provide a synthesized 2D image generated from the DBT data with the hope of sparing patients the radiation exposure from the FFDM acquisition. While this much needed alternative effectively reduces the total radiation burden, differences in image quality must also be considered. The goal of this study was to compare the intrinsic image quality of synthesized 2D C-VIEW and 2D FFDM images in terms of resolution, contrast, and noise. Methods: Two phantoms were utilized in this study: the American College of Radiology mammography accreditation phantom (ACR phantom) and a novel 3D printed anthropomorphic breast phantom. Both phantoms were imaged using a Hologic Selenia Dimensions 3D system. Analysis of the ACR phantom includes both visual inspection and objective automated analysis using in-house software. Analysis of the 3D anthropomorphic phantom includes visual assessment of resolution and Fourier analysis of the noise. Results: Using ACR-defined scoring criteria for the ACR phantom, the FFDM images scored statistically higher than C-VIEW according to both the average observer and automated scores. In addition, between 50% and 70% of C-VIEW images failed to meet the nominal minimum ACR accreditation requirements, primarily due to fiber breaks. Software analysis demonstrated that C-VIEW provided enhanced visualization of medium and large microcalcification objects; however, the benefits diminished for smaller high contrast objects and all low contrast objects. Visual analysis of the anthropomorphic phantom showed a measurable loss of resolution in the C-VIEW image (11 lp/mm FFDM, 5 lp/mm C-VIEW) and loss in detection of small microcalcification objects. Spectral analysis of the anthropomorphic phantom showed higher total noise magnitude in the FFDM image compared with C-VIEW. Whereas the FFDM image contained approximately white noise texture, the C-VIEW image exhibited marked noise reduction at midfrequency and high frequency with far less noise suppression at low frequencies, resulting in a mottled noise appearance. Conclusions: This analysis demonstrates many instances where the C-VIEW image quality differs from FFDM. Compared to FFDM, C-VIEW offers a better depiction of objects of certain size and contrast, but provides poorer overall resolution and noise properties. Based on these findings, the utilization of C-VIEW images in the clinical setting requires careful consideration, especially if considering the discontinuation of FFDM imaging. Not explicitly explored in this study is how the combination of DBT + C-VIEW performs relative to DBT + FFDM or FFDM alone.
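The spectral noise comparison above rests on the standard recipe for estimating a 2D noise power spectrum from a uniform region: tile the ROI, detrend each tile, and average the squared DFT magnitudes. The sketch below is a generic NPS estimator, not the authors' in-house software; the 0.07 mm pixel pitch is only an example value.

```python
# Generic 2D noise-power-spectrum estimate from a uniform phantom region.
import numpy as np

def noise_power_spectrum(roi, tile=64, pixel_mm=0.07):
    spectra = []
    for y in range(0, roi.shape[0] - tile + 1, tile):
        for x in range(0, roi.shape[1] - tile + 1, tile):
            t = roi[y:y + tile, x:x + tile].astype(float)
            t -= t.mean()                                  # simple detrend per tile
            spectra.append(np.abs(np.fft.fft2(t)) ** 2)
    nps = np.mean(spectra, axis=0) * (pixel_mm ** 2) / (tile * tile)
    freqs = np.fft.fftfreq(tile, d=pixel_mm)               # spatial-frequency axis, cycles/mm
    return np.fft.fftshift(nps), np.fft.fftshift(freqs)
```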
Retinex enhancement of infrared images.
Li, Ying; He, Renjie; Xu, Guizhi; Hou, Changzhi; Sun, Yunyan; Guo, Lei; Rao, Liyun; Yan, Weili
2008-01-01
With the ability to image the temperature distribution of the body, infrared imaging is promising for the diagnosis and prognosis of diseases. However, the poor quality of raw infrared images has prevented wider application, and one of the essential problems is the low-contrast appearance of the imaged object. In this paper, image enhancement based on Retinex theory is studied, a process that automatically restores visual realism to images. The algorithms, including the Frankle-McCann algorithm, the McCann99 algorithm, the single-scale Retinex algorithm, the multi-scale Retinex algorithm and the multi-scale Retinex algorithm with color restoration (MSRCR), are applied to the enhancement of infrared images. Entropy measurements along with visual inspection were compared, and the results show that the algorithms based on Retinex theory are able to enhance infrared images. Of the algorithms compared, MSRCR demonstrated the best performance.
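For reference, single-scale Retinex (one of the compared algorithms) estimates the illumination with a Gaussian surround and removes it in the log domain. The sigma and the display rescaling below are illustrative choices; the multi-scale variants average this result over several surround scales.

```python
# Single-scale Retinex sketch: log(image) minus log(Gaussian-blurred image).
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(img, sigma=80.0, eps=1.0):
    img = img.astype(float) + eps                       # avoid log(0)
    surround = gaussian_filter(img, sigma)              # illumination estimate
    r = np.log(img) - np.log(surround)                  # reflectance estimate
    r -= r.min()
    return r / (r.max() + 1e-12)                        # rescale to [0, 1] for display
```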
Visualization for genomics: the Microbial Genome Viewer.
Kerkhoven, Robert; van Enckevort, Frank H J; Boekhorst, Jos; Molenaar, Douwe; Siezen, Roland J
2004-07-22
A Web-based visualization tool, the Microbial Genome Viewer, is presented that allows the user to combine complex genomic data in a highly interactive way. This Web tool enables the interactive generation of chromosome wheels and linear genome maps from genome annotation data stored in a MySQL database. The generated images are in scalable vector graphics (SVG) format, which is suitable for creating high-quality scalable images and dynamic Web representations. Gene-related data such as transcriptome and time-course microarray experiments can be superimposed on the maps for visual inspection. The Microbial Genome Viewer 1.0 is freely available at http://www.cmbi.kun.nl/MGV
Moreno-Martínez, Francisco Javier; Montoro, Pedro R
2012-01-01
This work presents a new set of 360 high-quality colour images belonging to 23 semantic subcategories. Two hundred and thirty-six Spanish speakers named the items and also provided data on seven relevant psycholinguistic variables: age of acquisition, familiarity, manipulability, name agreement, typicality and visual complexity. Furthermore, we also present lexical frequency data derived from Internet search hits. Apart from the high number of variables evaluated, which are known to affect the processing of stimuli, this new set presents important advantages over other similar image corpora: (a) this corpus offers a broad number of subcategories and images, which will permit researchers to select stimuli of appropriate difficulty as required (e.g., to deal with problems derived from ceiling effects); (b) the use of coloured stimuli provides a more realistic, ecologically valid representation of real-life objects. In sum, this set of stimuli provides a useful tool for research on visual object- and word-processing, both in neurological patients and in healthy controls.
A loop resonator for slice-selective in vivo EPR imaging in rats
Hirata, Hiroshi; He, Guanglong; Deng, Yuanmu; Salikhov, Ildar; Petryakov, Sergey; Zweier, Jay L.
2008-01-01
A loop resonator was developed for 300-MHz continuous-wave electron paramagnetic resonance (CW-EPR) spectroscopy and imaging in live rats. A single-turn loop (55 mm in diameter) was used to provide sufficient space for the rat body. Efficiency for generating a radiofrequency magnetic field of 38 µT/√W was achieved at the center of the loop. For the resonator itself, an unloaded quality factor of 430 was obtained. When a 350 g rat was placed in the resonator at the level of the lower abdomen, the quality factor decreased to 18. The sensitive volume in the loop was visualized with a bottle filled with an aqueous solution of the nitroxide spin probe 3-carbamoyl-2,2,5,5-tetramethyl-3-pyrrolin-1-yloxy (3-CP). The resonator was shown to enable EPR imaging in live rats. Imaging was performed for 3-CP that had been infused intravenously into the rat and its distribution was visualized within the lower abdomen. PMID:18006343
Accurate and robust brain image alignment using boundary-based registration.
Greve, Douglas N; Fischl, Bruce
2009-10-15
The fine spatial scales of the structures in the human brain represent an enormous challenge to the successful integration of information from different images for both within- and between-subject analysis. While many algorithms to register image pairs from the same subject exist, visual inspection shows their accuracy and robustness to be suspect, particularly when there are strong intensity gradients and/or only part of the brain is imaged. This paper introduces a new algorithm called Boundary-Based Registration, or BBR. The novelty of BBR is that it treats the two images very differently. The reference image must be of sufficient resolution and quality to extract surfaces that separate tissue types. The input image is then aligned to the reference by maximizing the intensity gradient across tissue boundaries. Several lower quality images can be aligned through their alignment with the reference. Visual inspection and fMRI results show that BBR is more accurate than correlation ratio or normalized mutual information and is considerably more robust to even strong intensity inhomogeneities. BBR also excels at aligning partial-brain images to whole-brain images, a domain in which existing registration algorithms frequently fail. Even in the limit of registering a single slice, we show the BBR results to be robust and accurate.
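A simplified boundary-based cost in the spirit of BBR (not the FreeSurfer implementation) samples the input volume a short distance inside and outside each surface vertex along its normal and rewards a consistent intensity contrast across the boundary. The sampling offset, percent-contrast scaling and tanh saturation below are assumptions.

```python
# Simplified boundary-based registration cost (sketch, not the published BBR cost).
import numpy as np
from scipy.ndimage import map_coordinates

def boundary_cost(vol, vertices, normals, offset=1.5):
    # vertices, normals: (N, 3) arrays in voxel coordinates / unit normals
    inside = map_coordinates(vol, (vertices - offset * normals).T, order=1)
    outside = map_coordinates(vol, (vertices + offset * normals).T, order=1)
    q = 100.0 * (outside - inside) / (0.5 * (outside + inside) + 1e-6)   # percent contrast
    return -np.mean(np.tanh(q / 10.0))      # lower cost = stronger, consistent boundary contrast
```

Minimizing this cost over rigid-transform parameters of `vertices` would drive the input image toward alignment with the boundary extracted from the reference.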
Gorczynska, Iwona; Migacz, Justin V.; Zawadzki, Robert J.; Capps, Arlie G.; Werner, John S.
2016-01-01
We compared the performance of three OCT angiography (OCTA) methods: speckle variance, amplitude decorrelation and phase variance for imaging of the human retina and choroid. Two averaging methods, split spectrum and volume averaging, were compared to assess the quality of the OCTA vascular images. All data were acquired using a swept-source OCT system at 1040 nm central wavelength, operating at 100,000 A-scans/s. We performed a quantitative comparison using a contrast-to-noise ratio (CNR) metric to assess the capability of the three methods to visualize the choriocapillaris layer. For evaluation of the static tissue noise suppression in OCTA images we proposed to calculate CNR between the photoreceptor/RPE complex and the choriocapillaris layer. Finally, we demonstrated that implementation of intensity-based OCT imaging and OCT angiography methods allows for visualization of retinal and choroidal vascular layers known from anatomic studies in retinal preparations. OCT projection imaging of data flattened to selected retinal layers was implemented to visualize retinal and choroidal vasculature. User-guided vessel tracing was applied to segment the retinal vasculature. The results were visualized in the form of a skeletonized 3D model. PMID:27231598
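In simplified form, speckle-variance OCTA takes the temporal variance of repeated B-scans as the flow signal, and a CNR of the kind used above compares a vascular layer against a static reference band. The CNR definition below is one common form and may differ from the paper's exact formula.

```python
# Speckle-variance flow contrast and a simple CNR between two layer masks (sketch).
import numpy as np

def speckle_variance(bscans):                 # bscans: (repeats, z, x) intensity B-scans
    return np.var(bscans, axis=0)             # high variance ~ moving scatterers (flow)

def cnr(flow_img, vascular_mask, static_mask):
    v, s = flow_img[vascular_mask], flow_img[static_mask]
    return (v.mean() - s.mean()) / np.sqrt(0.5 * (v.var() + s.var()))
```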
Presence capture cameras - a new challenge to the image quality
NASA Astrophysics Data System (ADS)
Peltoketo, Veli-Tapani
2016-04-01
Commercial presence capture cameras are coming to market, and a new era of visual entertainment is starting to take shape. Since true presence capture is still a very new technology, the technical solutions have only just passed the prototyping phase and vary considerably. Presence capture cameras still face the same quality issues as previous phases of digital imaging, but also numerous new ones. This work concentrates on the quality challenges of presence capture cameras. A camera system which can record 3D audio-visual reality as it is has to have several camera modules, several microphones and, especially, technology which can synchronize the output of several sources into a seamless and smooth virtual reality experience. Several traditional quality features are still valid in presence capture cameras. Features like color fidelity, noise removal, resolution and dynamic range create the basis of virtual reality stream quality. However, the co-operation of several cameras brings a new dimension to these quality factors. New quality features must also be validated: for example, how should the camera streams be stitched together into a 3D experience without noticeable errors, and how should the stitching be validated? The work describes quality factors which are still valid in presence capture cameras and defines their importance. Moreover, new challenges of presence capture cameras are investigated from an image and video quality point of view. The work includes considerations of how well current measurement methods can be applied to presence capture cameras.
Enhancing the quality of thermographic diagnosis in medicine
NASA Astrophysics Data System (ADS)
Kuklitskaya, A. G.; Olefir, G. I.
2005-12-01
This paper discusses the possibilities of enhancing the quality of thermographic diagnosis in medicine by increasing the objectivity of the processes of recording, visualization, and interpretation of IR images (thermograms) of patients. A test program is proposed for the diagnosis of oncopathology of the mammary glands, involving standard conditions for recording thermograms, visualization of the IR image in several versions of the color palette and shades of grey, its interpretation in accordance with a rigorously specified algorithm that takes into account the temperature regime in the Zakharin-Head zone of the heart, and the drawing of a conclusion based on a statistical analysis of literature data and the results of a survey of more than 3000 patients of the Minsk City Clinical Oncological Dispensary.
Mori, Yutaka; Nomura, Takanori
2013-06-01
In holographic displays, speckle noise observed in the reconstructed images is undesirable. A method for improving reconstructed image quality by synthesizing low-coherence digital holograms is proposed. Low-coherence digital holography makes speckle-free reconstruction of holograms possible. An image sensor records low-coherence digital holograms, and the holograms are synthesized by computational calculation. Two approaches, the threshold-processing and the picking-a-peak methods, are proposed in order to reduce the random noise of low-coherence digital holograms. The quality of images reconstructed by the proposed methods is compared with that of high-coherence digital holography. A quantitative evaluation is given to confirm the proposed methods. In addition, a visual evaluation by 15 people is also presented.
Low Cost Embedded Stereo System for Underwater Surveys
NASA Astrophysics Data System (ADS)
Nawaf, M. M.; Boï, J.-M.; Merad, D.; Royer, J.-P.; Drap, P.
2017-11-01
This paper provides details of both the hardware and software conception and realization of a hand-held embedded stereo system for underwater imaging. The designed system can run most image processing techniques smoothly in real time. The developed functions provide direct visual feedback on the quality of the captured images, which helps the operator take appropriate actions in terms of movement speed and lighting conditions. The proposed functionalities can be easily customized or upgraded, and new functions can be easily added thanks to the supported libraries. Furthermore, by connecting the designed system to a more powerful computer, real-time visual odometry can be run on the captured images to obtain live navigation and a site coverage map. We use a visual odometry method adapted to systems with low computational resources and long autonomy. The system was tested in a real context and showed its robustness and promising prospects for further development.
Interactive visualization tools for the structural biologist.
Porebski, Benjamin T; Ho, Bosco K; Buckle, Ashley M
2013-10-01
In structural biology, management of a large number of Protein Data Bank (PDB) files and raw X-ray diffraction images often presents a major organizational problem. Existing software packages that manipulate these file types were not designed for these kinds of file-management tasks. This is typically encountered when browsing through a folder of hundreds of X-ray images, with the aim of rapidly inspecting the diffraction quality of a data set. To solve this problem, a useful functionality of the Macintosh operating system (OSX) has been exploited that allows custom visualization plugins to be attached to certain file types. Software plugins have been developed for diffraction images and PDB files, which in many scenarios can save considerable time and effort. The direct visualization of diffraction images and PDB structures in the file browser can be used to identify key files of interest simply by scrolling through a list of files.
Filter methods to preserve local contrast and to avoid artifacts in gamut mapping
NASA Astrophysics Data System (ADS)
Meili, Marcel; Küpper, Dennis; Barańczuk, Zofia; Caluori, Ursina; Simon, Klaus
2010-01-01
Contrary to high dynamic range imaging, the preservation of details and the avoidance of artifacts are not explicitly considered in popular color management systems. An effective way to overcome these difficulties is image filtering. In this paper we investigate several image filter concepts for detail preservation as part of a practical gamut mapping strategy. In particular, we define four concepts including various image filters and check their performance with a psycho-visual test. Additionally, we compare our performance evaluation to two image quality measures with emphasis on local contrast. Surprisingly, the simplest filter concept performs highly efficiently and achieves an image quality which is comparable to the more established but slower methods.
Kalia, Vivek; Fritz, Benjamin; Johnson, Rory; Gilson, Wesley D; Raithel, Esther; Fritz, Jan
2017-09-01
To test the hypothesis that a fourfold CAIPIRINHA-accelerated, 10-min, high-resolution, isotropic 3D TSE MRI prototype protocol of the ankle yields equal or better quality than a 20-min 2D TSE standard protocol. Following internal review board approval and informed consent, 3-Tesla MRI of the ankle was obtained in 24 asymptomatic subjects including 10-min 3D CAIPIRINHA SPACE TSE prototype and 20-min 2D TSE standard protocols. Outcome variables included image quality and visibility of anatomical structures using 5-point Likert scales. Non-parametric statistical testing was used. P values ≤0.001 were considered significant. Edge sharpness, contrast resolution, uniformity, noise, fat suppression and magic angle effects were without statistical difference on 2D and 3D TSE images (p > 0.035). Fluid was mildly brighter on intermediate-weighted 2D images (p < 0.001), whereas 3D images had substantially less partial volume, chemical shift and no pulsatile-flow artifacts (p < 0.001). Oblique and curved planar 3D images resulted in mildly-to-substantially improved visualization of joints, spring, bifurcate, syndesmotic, collateral and sinus tarsi ligaments, and tendons (p < 0.001, respectively). 3D TSE MRI with CAIPIRINHA acceleration enables high-spatial-resolution oblique and curved planar MRI of the ankle and visualization of ligaments, tendons and joints equally well or better than a more time-consuming anisotropic 2D TSE MRI. • High-resolution 3D TSE MRI improves visualization of ankle structures. • Limitations of current 3D TSE MRI include long scan times. • 3D CAIPIRINHA SPACE now allows a fourfold-accelerated data acquisition. • 3D CAIPIRINHA SPACE enables high-spatial-resolution ankle MRI within 10 min. • 10-min 3D CAIPIRINHA SPACE produces equal-or-better quality than 20-min 2D TSE.
Herrerías Gutiérrez, J M; García Montes, J
1994-01-01
The main objective of this study was to determine whether the effect of a combination of clebopride (0.5 mg) and simethicone (200 mg) would improve echographic visualization of retrogastric organs. An experimental aerogastric induction model was used in 50 healthy volunteers, who received 30 mL of beaten egg white to decrease the quality of echographic visualization. Improvements were evaluated after treatment. The results show that the combination of clebopride and simethicone was better than placebo in reducing gastric distension and in improving the quality of echographic images of the organs located behind the stomach, that is, the gallbladder, pancreas, and left kidney.
Gosch, D; Ratzmer, A; Berauer, P; Kahn, T
2007-09-01
The objective of this study was to examine the extent to which the image quality on mobile C-arms can be improved by an innovative exposure rate control system (grid control). In addition, the possible dose reduction in the pulsed fluoroscopy mode using 25 pulses/sec produced by automatic adjustment of the pulse rate through motion detection was to be determined. As opposed to conventional exposure rate control systems, which use a measuring circle in the center of the field of view, grid control is based on a fine mesh of square cells which are overlaid on the entire fluoroscopic image. The system uses only those cells for exposure control that are covered by the object to be visualized. This is intended to ensure optimally exposed images, regardless of the size, shape and position of the object to be visualized. The system also automatically detects any motion of the object. If a pulse rate of 25 pulses/sec is selected and no changes in the image are observed, the pulse rate used for pulsed fluoroscopy is gradually reduced. This may decrease the radiation exposure. The influence of grid control on image quality was examined using an anthropomorphic phantom. The dose reduction achieved with the help of object detection was determined by evaluating the examination data of 146 patients from 5 different countries. The image of the static phantom made with grid control was always optimally exposed, regardless of the position of the object to be visualized. The average dose reduction when using 25 pulses/sec resulting from object detection and automatic down-pulsing was 21 %, and the maximum dose reduction was 60 %. Grid control facilitates C-arm operation, since optimum image exposure can be obtained independently of object positioning. Object detection may lead to a reduction in radiation exposure for the patient and operating staff.
NASA Astrophysics Data System (ADS)
Santosa, H.; Ernawati, J.; Wulandari, L. D.
2018-03-01
The visual aesthetic experience in urban spaces is important in establishing a comfortable and satisfying experience for the community. The embodiment of a good visual image of urban space will encourage the emergence of positive perceptions and meanings, stimulating the community to respond well to its urban space. Moreover, to establish good governance in urban planning and design, it is necessary to boost and promote community participation in the process of controlling the visual quality of urban space through visual quality evaluation of urban street corridors. This study is an early stage in the development of a 'Landscape Visual Planning System' for the commercial street corridor in Malang. Accordingly, the research aims to evaluate the physical characteristics and the public preferences regarding the spatial and visual aspects of five provincial road corridors in Malang. This study employs field survey methods and an environmental aesthetics approach through the semantic differential method. The result of the identification of physical characteristics and the assessment of public preferences on the spatial and visual aspects of the five provincial streets serves as the basis for constructing the 3D interactive simulation scenarios in the Landscape Visual Planning System.
MR imaging near metallic implants using MAVRIC SL: initial clinical experience at 3T.
Gutierrez, Luis B; Do, Bao H; Gold, Garry E; Hargreaves, Brian A; Koch, Kevin M; Worters, Pauline W; Stevens, Kathryn J
2015-03-01
To compare the effectiveness of multiacquisition with variable resonance image combination selective (MAVRIC SL) with conventional two-dimensional fast spin-echo (2D-FSE) magnetic resonance (MR) techniques at 3T in imaging patients with a variety of metallic implants. Twenty-one 3T MR studies were obtained in 19 patients with different types of metal implants. Paired MAVRIC SL and 2D-FSE sequences were reviewed by two radiologists and compared for in-plane and through-plane metal artifact, visualization of the bone implant interface and surrounding soft tissues, blurring, and overall image quality using a two-tailed Wilcoxon signed rank test. The area of artifact on paired images was measured and compared using a paired Wilcoxon signed rank test. Changes in patient management resulting from MAVRIC SL imaging were documented. Significantly less in-plane and through-plane artifact was seen with MAVRIC SL, with improved visualization of the bone-implant interface and surrounding soft tissues, and superior overall image quality (P = .0001). Increased blurring was seen with MAVRIC SL (P = .0016). MAVRIC SL significantly decreased the image artifact compared to 2D-FSE (P = .0001). Inclusion of MAVRIC SL to the imaging protocol determined the need for surgery or type of surgery in five patients and ruled out the need for surgery in 13 patients. In three patients, the area of interest was well seen on both MAVRIC SL and 2D-FSE images, so the addition of MAVRIC had no effect on patient management. Imaging around metal implants with MAVRIC SL at 3T significantly improved image quality and decreased image artifact compared to conventional 2D-FSE imaging techniques and directly impacted patient management. Copyright © 2015 AUR. Published by Elsevier Inc. All rights reserved.
Pruzan, Alison N; Kaufman, Audrey E; Calcagno, Claudia; Zhou, Yu; Fayad, Zahi A; Mani, Venkatesh
2017-02-28
To demonstrate the feasibility of vessel wall imaging of the superficial palmar arch using high-frequency micro-ultrasound, 7T and 3T magnetic resonance imaging (MRI). Four subjects (ages 22-50 years) were scanned on a micro-ultrasound system with a 45-MHz transducer (Vevo 2100, VisualSonics). Subjects' hands were then imaged on a 3T clinical MR scanner (Siemens Biograph MMR) using an 8-channel special purpose phased array carotid coil. Lastly, subjects' hands were imaged on a 7T clinical MR scanner (Siemens Magnetom 7T Whole Body Scanner) using a custom built 8-channel transmit-receive carotid coil. All three imaging modalities were subjectively analyzed for image quality and visualization of the vessel wall. Results of this very preliminary study indicated that vessel wall imaging of the superficial palmar arch was feasible with whole-body 7T and 3T MRI in comparison with micro-ultrasound. Subjective analysis of image quality (1-5 scale, 1: poorest, 5: best) from B-mode ultrasound, 3T SPACE MRI and 7T SPACE MRI indicated that the image quality obtained at 7T was superior to both 3T MRI and micro-ultrasound. The 3D SPACE sequence at both 7T and 3T with isotropic voxels allowed for multi-planar reformatting of images and for less operator-dependent results as compared to high-frequency micro-ultrasound imaging. Although quantitative analysis revealed no significant difference between the three methods, the 7T scanner trended toward better visibility of the vessel and its wall. Imaging of smaller arteries at 7T is feasible for evaluating atherosclerosis burden and may be of clinical relevance in multiple diseases.
NASA Astrophysics Data System (ADS)
Karaoglanis, K.; Efthimiou, N.; Tsoumpas, C.
2015-09-01
Low count PET data is a challenge for medical image reconstruction. The statistics of a dataset are a key factor in the quality of the reconstructed images. Reconstruction algorithms able to compensate for low count datasets could provide the means to reduce the injected patient dose and/or the scan time. It has been shown that the use of priors improves image quality in low count conditions. In this study we compared regularised versus post-filtered OSEM in terms of their performance on challenging simulated low count datasets. An initial visual comparison demonstrated that both algorithms improve image quality, although the use of regularization does not introduce the undesired blurring that post-filtering does.
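To make the comparison concrete, the toy MLEM loop below contrasts the two strategies: Gaussian post-filtering of the final estimate versus a crude one-step-late style quadratic smoothing penalty applied inside the iteration. It is a stand-in for the idea only, not the study's prior, its OSEM subset structure, or its reconstruction software.

```python
# Toy MLEM with either an in-loop smoothing penalty (beta > 0) or a post-filter (post_sigma > 0).
import numpy as np
from scipy.ndimage import gaussian_filter

def mlem(A, y, n_iter=20, beta=0.0, post_sigma=0.0):
    # A: system matrix (n_bins x n_voxels), y: measured counts (n_bins,)
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones_like(y, dtype=float)             # sensitivity image
    for _ in range(n_iter):
        ratio = y / np.clip(A @ x, 1e-9, None)
        penalty = beta * (x - gaussian_filter(x, 1.0))    # one-step-late smoothing term (assumption)
        x = x * (A.T @ ratio) / np.clip(sens + penalty, 1e-9, None)
    if post_sigma > 0:
        x = gaussian_filter(x, post_sigma)                # post-filtered variant
    return x
```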
Avila, Manuel; Graterol, Eduardo; Alezones, Jesús; Criollo, Beisy; Castillo, Dámaso; Kuri, Victoria; Oviedo, Norman; Moquete, Cesar; Romero, Marbella; Hanley, Zaida; Taylor, Margie
2012-06-01
The appearance of the rice grain is a key aspect of quality determination. This analysis is mainly performed by expert analysts through visual observation; however, due to the subjective nature of the analysis, the results may vary among analysts. In order to evaluate the concordance between analysts from Latin-American rice quality laboratories on rice grain appearance assessed through digital images, an inter-laboratory test was performed with ten analysts and images of 90 grains captured with a high resolution scanner. Rice grains were classified into four categories: translucent, chalky, white belly, and damaged grain. Data were characterized using statistical parameters such as the mode and its frequency, the relative concordance, and the reproducibility parameter kappa. Additionally, a reference image gallery of typical grains for each category was constructed based on mode frequency. Results showed a kappa value of 0.49, corresponding to moderate reproducibility, attributable to subjectivity in the visual analysis of grain images. These results reveal the need to standardize the evaluation criteria among analysts to improve confidence in the determination of rice grain appearance.
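For a multi-rater agreement study of this kind, Fleiss' kappa is one standard choice; the abstract does not state which kappa variant was computed, so the implementation below is illustrative.

```python
# Fleiss' kappa from a table of rating tallies (n_items x n_categories),
# assuming the same number of raters scored every item.
import numpy as np

def fleiss_kappa(counts):
    n = counts.sum(axis=1)[0]                       # raters per item
    p_j = counts.sum(axis=0) / counts.sum()         # category proportions
    P_i = np.sum(counts * (counts - 1), axis=1) / (n * (n - 1))   # per-item agreement
    P_bar, P_e = P_i.mean(), np.sum(p_j ** 2)
    return (P_bar - P_e) / (1 - P_e)
```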
Speckle reduction in echocardiography by temporal compounding and anisotropic diffusion filtering
NASA Astrophysics Data System (ADS)
Giraldo-Guzmán, Jader; Porto-Solano, Oscar; Cadena-Bonfanti, Alberto; Contreras-Ortiz, Sonia H.
2015-01-01
Echocardiography is a medical imaging technique based on ultrasound signals that is used to evaluate heart anatomy and physiology. Echocardiographic images are affected by speckle, a type of multiplicative noise that obscures details of the structures and reduces the overall image quality. This paper shows an approach to enhance echocardiography using two processing techniques: temporal compounding and anisotropic diffusion filtering. We used twenty echocardiographic videos that include one or three cardiac cycles to test the algorithms. Two images from each cycle were aligned in space and averaged to obtain the compound images. These images were then processed using anisotropic diffusion filters to further improve their quality. The resultant images were evaluated using quality metrics and visual assessment by two medical doctors. The average total improvement in signal-to-noise ratio was up to 100.29% for videos with three cycles, and up to 32.57% for videos with one cycle.
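The second stage of such a pipeline is typically a Perona-Malik style diffusion, sketched generically below; the iteration count, conductance parameter and step size are illustrative, and the periodic boundary handling via np.roll is a simplification.

```python
# Perona-Malik anisotropic diffusion (generic sketch applied after temporal compounding).
import numpy as np

def anisotropic_diffusion(img, n_iter=15, kappa=30.0, gamma=0.15):
    u = img.astype(float).copy()
    c = lambda d: np.exp(-(d / kappa) ** 2)        # edge-stopping conductance
    for _ in range(n_iter):
        dn = np.roll(u, -1, axis=0) - u            # finite differences to the 4 neighbours
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u += gamma * (c(dn) * dn + c(ds) * ds + c(de) * de + c(dw) * dw)
    return u
```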
Johari, Masoumeh; Abdollahzadeh, Milad; Esmaeili, Farzad; Sakhamanesh, Vahideh
2018-01-01
Dental cone beam computed tomography (CBCT) images suffer from severe metal artifacts. These artifacts degrade the quality of the acquired image and in some cases make it unsuitable for use. Streaking artifacts and cavities around teeth are the main causes of degradation. In this article, we propose a new artifact reduction algorithm with three parallel components. The first component extracts teeth based on modeling the image histogram with a Gaussian mixture model. The streaking-artifact-reduction component reduces artifacts by converting the image into the polar domain and applying morphological filtering. The third component fills cavities through a simple but effective morphological filtering operation. Finally, the results of these three components are combined in a fusion step to create a visually good image which is more compatible with the human visual system. Results show that the proposed algorithm reduces artifacts in dental CBCT images and produces clean images.
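The first component can be pictured as a histogram-based segmentation with a Gaussian mixture; the sketch below uses scikit-learn and assumes the brightest mixture component corresponds to teeth/metal, which is a simplification rather than the paper's exact rule.

```python
# Histogram-based tooth extraction with a Gaussian mixture model (sketch).
import numpy as np
from sklearn.mixture import GaussianMixture

def extract_teeth_mask(slice_img, n_components=3):
    samples = slice_img.reshape(-1, 1).astype(float)
    gmm = GaussianMixture(n_components=n_components, random_state=0).fit(samples)
    labels = gmm.predict(samples).reshape(slice_img.shape)
    tooth_label = int(np.argmax(gmm.means_.ravel()))      # brightest component ~ teeth/metal (assumption)
    return labels == tooth_label
```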
Sosna, Jacob; Pedrosa, Ivan; Dewolf, William C; Mahallati, Houman; Lenkinski, Robert E; Rofsky, Neil M
2004-08-01
To qualitatively compare the image quality of torso phased-array 3-Tesla (3T) imaging of the prostate with that of endorectal 1.5-Tesla imaging. Twenty cases of torso phased-array prostate imaging performed at 3 Tesla with FSE T2-weighted images were evaluated by two readers independently for visualization of the posterior border (PB), seminal vesicles (SV), neurovascular bundles (NVB), and image quality rating (IQR). Studies were performed at a large field of view (FOV) of 25 cm (14 cases; 3TL) and a smaller FOV of 14 cm (19 cases; 3TS). A comparison was made to 20 consecutive cases of 1.5-T endorectal evaluation performed during the same time period. 3TL produced a significantly better image quality compared with the small FOV for PB (P = .0001), SV (P = .0001), and IQR (P = .0001). There was a marginally significant difference within the NVB category (P = .0535). 3TL produced an image of similar quality to image quality at 1.5 T for PB (P = .3893), SV (P = .8680), NVB (P = .2684), and IQR (P = .8599). Prostate image quality at 3T with a torso phased-array coil can be comparable with that of endorectal 1.5-T imaging. These findings suggest that additional options are now available for magnetic resonance imaging of the prostate gland.
FISH Oracle 2: a web server for integrative visualization of genomic data in cancer research
2014-01-01
Background A comprehensive view on all relevant genomic data is instrumental for understanding the complex patterns of molecular alterations typically found in cancer cells. One of the most effective ways to rapidly obtain an overview of genomic alterations in large amounts of genomic data is the integrative visualization of genomic events. Results We developed FISH Oracle 2, a web server for the interactive visualization of different kinds of downstream processed genomics data typically available in cancer research. A powerful search interface and a fast visualization engine provide a highly interactive visualization for such data. High quality image export enables the life scientist to easily communicate their results. A comprehensive data administration allows to keep track of the available data sets. We applied FISH Oracle 2 to published data and found evidence that, in colorectal cancer cells, the gene TTC28 may be inactivated in two different ways, a fact that has not been published before. Conclusions The interactive nature of FISH Oracle 2 and the possibility to store, select and visualize large amounts of downstream processed data support life scientists in generating hypotheses. The export of high quality images supports explanatory data visualization, simplifying the communication of new biological findings. A FISH Oracle 2 demo server and the software is available at http://www.zbh.uni-hamburg.de/fishoracle. PMID:24684958
Buhk, J-H; Groth, M; Sehner, S; Fiehler, J; Schmidt, N O; Grzyska, U
2013-09-01
To evaluate a novel algorithm for correcting beam hardening artifacts caused by metal implants in computed tomography performed on a C-arm angiography system equipped with a flat panel (FP-CT). 16 datasets of cerebral FP-CT acquisitions after coil embolization of brain aneurysms in the context of acute subarachnoid hemorrhage were reconstructed by applying a soft tissue kernel with and without a novel reconstruction filter for metal artifact correction. Image reading was performed in multiplanar reformations (MPR) in average mode on a dedicated radiological workstation in comparison to the preinterventional native multisection CT (MS-CT) scan serving as the anatomic gold standard. Two independent radiologists performed image scoring following a defined scale in direct comparison of the image data with and without artifact correction. For statistical analysis, a random intercept model was calculated. The inter-rater agreement was very high (ICC = 86.3 %). The soft tissue image quality and visualization of the CSF spaces at the level of the implants were substantially improved. The additional metal artifact correction algorithm did not induce impairment of the subjective image quality in any other brain regions. Adding metal artifact correction to FP-CT in an acute postinterventional setting helps to visualize the close vicinity of the aneurysm at a generally consistent image quality. © Georg Thieme Verlag KG Stuttgart · New York.
A virtual image chain for perceived image quality of medical display
NASA Astrophysics Data System (ADS)
Marchessoux, Cédric; Jung, Jürgen
2006-03-01
This paper describes a virtual image chain for medical display (project VICTOR, granted in the 5th framework program by the European Commission). The chain starts from raw data of an image digitizer (CR, DR) or synthetic patterns and covers image enhancement (MUSICA by Agfa) and both display possibilities, hardcopy (film on viewing box) and softcopy (monitor). A key feature of the chain is a completely image-wise approach. A first prototype is implemented in an object-oriented software platform. The display chain consists of several modules. Raw images are either taken from scanners (CR, DR) or from a pattern generator, in which the characteristics of CR/DR systems are introduced by their MTF and their dose-dependent Poisson noise. The image undergoes image enhancement and comes to display. For softcopy display, color and monochrome monitors are used in the simulation. The image is down-sampled. The non-linear response of a color monitor is taken into account by the GOG or S-curve model, whereas the DICOM Standard Grayscale Display Function is used for monochrome display. The MTF of the monitor is applied to the image in intensity levels. For hardcopy display, the combination of film, printer, lightbox and viewing condition is modeled. The image is up-sampled and the DICOM GSDF or a Kanamori look-up table is applied. An anisotropic model for the MTF of the printer is applied to the image in intensity levels. The density-dependent color (XYZ) of the hardcopy film is introduced by look-up tables. Finally, a human visual system model is applied to the intensity images (XYZ in terms of cd/m²) in order to eliminate non-visible differences. Comparison leads to visible differences, which are quantified by higher-order image quality metrics. A specific image viewer is used for the visualization of the intensity image and the visual difference maps.
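One module of such a chain, applying a display MTF to an intensity image, can be sketched as a frequency-domain multiplication. The Gaussian MTF model, the pixel pitch and the 50%-response frequency below are placeholders, not measured monitor or printer characteristics from the project.

```python
# Applying a (placeholder) display MTF to an intensity image in the frequency domain.
import numpy as np

def apply_mtf(intensity_img, pixel_pitch_mm=0.25, f50_cpmm=2.0):
    fy = np.fft.fftfreq(intensity_img.shape[0], d=pixel_pitch_mm)
    fx = np.fft.fftfreq(intensity_img.shape[1], d=pixel_pitch_mm)
    f = np.hypot(*np.meshgrid(fy, fx, indexing='ij'))           # radial frequency, cycles/mm
    mtf = np.exp(-np.log(2) * (f / f50_cpmm) ** 2)              # 50 % response at f50_cpmm (assumption)
    return np.real(np.fft.ifft2(np.fft.fft2(intensity_img) * mtf))
```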
The effect of image quality, repeated study, and assessment method on anatomy learning.
Fenesi, Barbara; Mackinnon, Chelsea; Cheng, Lucia; Kim, Joseph A; Wainman, Bruce C
2017-06-01
Two-dimensional (2D) images are consistently used to prepare anatomy students for handling real specimens. This study examined whether the quality of 2D images is a critical component in anatomy learning. The visual clarity and consistency of 2D anatomical images were systematically manipulated to produce low-quality and high-quality images of the human hand and human eye. On day 0, participants learned about each anatomical specimen from paper booklets using either low-quality or high-quality images, and then completed a comprehension test using either 2D images or three-dimensional (3D) cadaveric specimens. On day 1, participants relearned each booklet, and on day 2 participants completed a final comprehension test using either 2D images or 3D cadaveric specimens. The effect of image quality on learning varied according to anatomical content, with high-quality images having a greater effect on improving learning of hand anatomy than eye anatomy (high-quality vs. low-quality for hand anatomy P = 0.018; high-quality vs. low-quality for eye anatomy P = 0.247). Also, the benefit of high-quality images on hand anatomy learning was restricted to performance on short-answer (SA) questions immediately after learning (high-quality vs. low-quality on SA questions P = 0.018), but did not apply to performance on multiple-choice (MC) questions (high-quality vs. low-quality on MC questions P = 0.109) or after participants had an additional learning opportunity (24 hours later) with anatomy content (high vs. low on SA questions P = 0.643). This study underscores the limited impact of image quality on anatomy learning, and questions whether investment in enhancing the image quality of learning aids significantly promotes knowledge development. Anat Sci Educ 10: 249-261. © 2016 American Association of Anatomists.
Isolating contour information from arbitrary images
NASA Technical Reports Server (NTRS)
Jobson, Daniel J.
1989-01-01
Aspects of natural vision (physiological and perceptual) serve as a basis for attempting the development of a general processing scheme for contour extraction. Contour information is assumed to be central to visual recognition skills. While the scheme must be regarded as highly preliminary, initial results do compare favorably with the visual perception of structure. The scheme pays special attention to the construction of a smallest-scale circular difference-of-Gaussian (DOG) convolution, calibration of multiscale edge detection thresholds with the visual perception of grayscale boundaries, and contour/texture discrimination methods derived from fundamental assumptions of connectivity and the characteristics of printed text. Contour information is required to fall between a minimum connectivity limit and a maximum regional spatial density limit at each scale. Results support the idea that contour information, in images possessing good image quality, is carried by channels centered at about 10 cyc/deg and 30 cyc/deg. Further, lower spatial frequency channels appear to play a major role only in contour extraction from images with serious global image defects.
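A minimal sketch of the DOG-plus-connectivity idea is shown below: filter with a circular difference of Gaussians, threshold, and keep only connected components above a minimum size as a crude connectivity limit. The sigma ratio, threshold and size limit are illustrative values, not the calibrated parameters of the scheme.

```python
# Difference-of-Gaussian contour candidates with a simple connectivity check (sketch).
import numpy as np
from scipy.ndimage import gaussian_filter, label

def dog_contours(img, sigma=1.0, ratio=1.6, thresh=2.0, min_pixels=20):
    img = img.astype(float)
    dog = gaussian_filter(img, sigma) - gaussian_filter(img, ratio * sigma)
    edges = np.abs(dog) > thresh
    labels, _ = label(edges)                          # connected components of edge candidates
    sizes = np.bincount(labels.ravel())
    valid = np.nonzero(sizes >= min_pixels)[0]
    valid = valid[valid != 0]                         # drop the background label
    return np.isin(labels, valid)                     # boolean contour map
```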
An approach to integrate the human vision psychology and perception knowledge into image enhancement
NASA Astrophysics Data System (ADS)
Wang, Hui; Huang, Xifeng; Ping, Jiang
2009-07-01
Image enhancement is a very important image preprocessing technology, especially when the image is captured under poor imaging conditions or when dealing with high-bit-depth images. The beneficiary of image enhancement may be either a human observer or a computer vision process performing some kind of higher-level image analysis, such as target detection or scene understanding. One of the main objectives of image enhancement is to obtain a high-dynamic-range, high-contrast image for human perception or interpretation. It is therefore very useful to integrate empirical or statistical knowledge of human vision psychology and perception into image enhancement. Human vision psychology holds that the perception of, and response to, an intensity fluctuation δu of a visual signal are weighted by the background stimulus u, instead of being uniform. Three main laws describe this phenomenon in psychology and psychophysics: Weber's law, the Weber-Fechner law and Stevens's law. This paper integrates these three laws into a popular image enhancement algorithm named Adaptive Plateau Equalization (APE). Experiments were performed on high-bit-depth star images captured in night scenes and on infrared images, for both static images and video streams. For the jitter problem in video streams, the algorithm uses the difference between the current frame's plateau value and the previous frame's plateau value to correct the current frame's plateau value. To account for random noise, the pixel-value mapping process depends not only on the current pixel but also on the pixels in a window surrounding it, usually of size 3×3. The results of the improved algorithms are evaluated by entropy analysis and visual perception analysis. The experiments showed that the improved APE algorithms improved image quality: the target and the surrounding assistant targets could be identified easily, and the noise was not amplified much. For low-quality images, the improved algorithms increase the information entropy and improve the aesthetic quality of the image and the video stream, while for high-quality images they do not degrade image quality.
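The core of plateau equalization is a histogram equalization whose histogram is clipped at a plateau value before the cumulative mapping is built. The sketch below shows that core step for a high-bit-depth image; the adaptive plateau selection and the Weber/Fechner/Stevens weighting described above are not reproduced.

```python
# Plateau-limited histogram equalization for a high-bit-depth (e.g. 16-bit) image (sketch).
import numpy as np

def plateau_equalize(img, plateau, n_bins=65536):
    hist, _ = np.histogram(img.ravel(), bins=n_bins, range=(0, n_bins))
    hist = np.minimum(hist, plateau)                 # clip the histogram at the plateau value
    cdf = np.cumsum(hist).astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min() + 1e-12)
    idx = np.clip(img, 0, n_bins - 1).astype(int)
    return (cdf[idx] * 255).astype(np.uint8)         # map to an 8-bit display range
```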
Feature maps driven no-reference image quality prediction of authentically distorted images
NASA Astrophysics Data System (ADS)
Ghadiyaram, Deepti; Bovik, Alan C.
2015-03-01
Current blind image quality prediction models rely on benchmark databases comprised of singly and synthetically distorted images, thereby learning image features that are only adequate to predict human perceived visual quality on such inauthentic distortions. However, real world images often contain complex mixtures of multiple distortions. Rather than a) discounting the effect of these mixtures of distortions on an image's perceptual quality and considering only the dominant distortion or b) using features that are only proven to be efficient for singly distorted images, we deeply study the natural scene statistics of authentically distorted images, in different color spaces and transform domains. We propose a feature-maps-driven statistical approach which avoids any latent assumptions about the type of distortion(s) contained in an image, and focuses instead on modeling the remarkable consistencies in the scene statistics of real world images in the absence of distortions. We design a deep belief network that takes model-based statistical image features derived from a very large database of authentically distorted images as input and discovers good feature representations by generalizing over different distortion types, mixtures, and severities, which are later used to learn a regressor for quality prediction. We demonstrate the remarkable competence of our features for improving automatic perceptual quality prediction on a benchmark database and on the newly designed LIVE Authentic Image Quality Challenge Database and show that our approach of combining robust statistical features and the deep belief network dramatically outperforms the state-of-the-art.
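The kind of natural-scene-statistics feature map such models build on can be illustrated with mean-subtracted contrast-normalized (MSCN) coefficients: local mean and variance normalization of the luminance, whose statistics are highly regular for undistorted images. The Gaussian window size and stabilizing constant below are illustrative; the paper's full multi-colour-space, multi-domain feature set is far larger.

```python
# MSCN coefficient map, a basic natural-scene-statistics feature (sketch).
import numpy as np
from scipy.ndimage import gaussian_filter

def mscn(luma, sigma=7.0 / 6.0, c=1.0):
    luma = luma.astype(float)
    mu = gaussian_filter(luma, sigma)                          # local mean
    var = gaussian_filter(luma ** 2, sigma) - mu ** 2          # local variance
    return (luma - mu) / (np.sqrt(np.clip(var, 0, None)) + c)  # divisively normalized coefficients
```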
Quintas, Rodrigo C S; de França, Emmanuel R; de Petribú, Kátia C L; Ximenes, Ricardo A A; Quintas, Lóren F F M; Cavalcanti, Ernando L F; Kitamura, Marco A P; Magalhães, Kássia A A; Paiva, Késsia C F; Filho, Demócrito B Miranda
2014-04-01
The lipodystrophy syndrome is characterized by selective loss of subcutaneous fat on the face and extremities (lipoatrophy) and/or accumulation of fat around the neck, abdomen, and thorax (lipohypertrophy). The aim of this study was to assess the impact of polymethylmethacrylate facial treatment on quality of life, self-perceived facial image, and the severity of depressive symptoms in patients living with HIV/AIDS. A non-randomized before-and-after interventional study was developed. Fifty-one patients underwent facial filling. Self-perceived quality of life, facial image, and the degree of depressive symptoms were measured by the Short-Form 36 and HIV/AIDS-Targeted Quality of Life questionnaires, a visual analogue scale, and the Beck depression inventory, respectively, before and three months after treatment. Six of the eight domains of the Short-Form 36 and eight of the nine dimensions of the HIV/AIDS-Targeted Quality of Life questionnaire, together with the visual analogue scale and the Beck depression inventory scores, revealed a statistically significant improvement. The only adverse effects registered were edema and ecchymosis. The treatment of facial lipoatrophy improved the self-perceived quality of life and facial image as well as any depressive symptoms among patients with HIV/AIDS. © 2014 The International Society of Dermatology.
USDA-ARS?s Scientific Manuscript database
Current meat inspection in slaughter plants, for food safety and quality attributes including potential fecal contamination, is conducted through visual examination by human inspectors. A handheld fluorescence-based imaging device (HFID) was developed to be an assistive tool for human inspectors by ...
Optical gas imaging (OGI) cameras have the unique ability to exploit the electromagnetic properties of fugitive chemical vapors to make invisible gases visible. This ability is extremely useful for industrial facilities trying to mitigate product losses from escaping gas and fac...
Effects of Spatio-Temporal Aliasing on Out-the-Window Visual Systems
NASA Technical Reports Server (NTRS)
Sweet, Barbara T.; Stone, Leland S.; Liston, Dorion B.; Hebert, Tim M.
2014-01-01
Designers of out-the-window visual systems face a challenge when attempting to simulate the outside world as viewed from a cockpit. Many methodologies have been developed and adopted to aid in the depiction of particular scene features, or levels of static image detail. However, because aircraft move, it is necessary to also consider the quality of the motion in the simulated visual scene. When motion is introduced in the simulated visual scene, perceptual artifacts can become apparent. A particular artifact related to image motion, spatio-temporal aliasing, will be addressed. The causes of spatio-temporal aliasing will be discussed, and current knowledge regarding the impact of these artifacts on both motion perception and simulator task performance will be reviewed. Methods of reducing the impact of this artifact are also addressed.
Quality Scalability Aware Watermarking for Visual Content.
Bhowmik, Deepayan; Abhayaratne, Charith
2016-11-01
Scalable coding-based content adaptation poses serious challenges to traditional watermarking algorithms, which do not consider the scalable coding structure and hence cannot guarantee correct watermark extraction in the media consumption chain. In this paper, we propose a novel concept of scalable blind watermarking that ensures more robust watermark extraction at various compression ratios while not affecting the visual quality of the host media. The proposed algorithm generates a scalable and robust watermarked image code-stream that allows the user to constrain embedding distortion for target content adaptations. The watermarked image code-stream consists of hierarchically nested joint distortion-robustness coding atoms. The code-stream is generated by a new wavelet domain blind watermarking algorithm guided by a quantization-based binary tree. The code-stream can be truncated at any distortion-robustness atom to generate the watermarked image with the desired distortion-robustness requirements. A blind extractor is capable of extracting watermark data from the watermarked images. The algorithm is further extended to incorporate a bit-plane discarding-based quantization model used in scalable coding-based content adaptation, e.g., JPEG2000. This improves the robustness against quality scalability of JPEG2000 compression. The simulation results verify the feasibility of the proposed concept, its applications, and its improved robustness against quality scalable content adaptation. Our proposed algorithm also outperforms existing methods, showing a 35% improvement. In terms of robustness to quality scalable video content adaptation using Motion JPEG2000 and wavelet-based scalable video coding, the proposed method shows major improvement for video watermarking.
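The paper's nested distortion-robustness code-stream is not reproduced here, but the underlying idea of blind, quantization-based embedding in the wavelet domain can be sketched with a generic quantization index modulation (QIM) embed/extract pair; the subband choice, quantization step, and wavelet are all assumptions, not the paper's algorithm:

```python
import numpy as np
import pywt

def embed_qim(img, bits, delta=12.0, wavelet="haar"):
    """Blind embedding by quantisation index modulation (QIM) of one detail subband."""
    coeffs = pywt.wavedec2(np.asarray(img, float), wavelet, level=2)
    cH, cV, cD = coeffs[1]                      # coarsest-level detail subbands
    flat = cH.ravel().copy()
    for i, bit in enumerate(bits):              # one coefficient per watermark bit
        q = delta * np.round(flat[i] / delta)
        flat[i] = q + (delta / 4 if bit else -delta / 4)
    coeffs[1] = (flat.reshape(cH.shape), cV, cD)
    return pywt.waverec2(coeffs, wavelet)

def extract_qim(img, n_bits, delta=12.0, wavelet="haar"):
    """Blind extraction: the sign of the offset from the quantisation lattice encodes the bit."""
    cH = pywt.wavedec2(np.asarray(img, float), wavelet, level=2)[1][0].ravel()
    return [bool(c - delta * np.round(c / delta) > 0) for c in cH[:n_bits]]

# usage: marked = embed_qim(gray_image, [1, 0, 1, 1]); bits = extract_qim(marked, 4)
# (image dimensions assumed divisible by 4 so the decomposition round-trips exactly)
```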
Sunyit Visiting Faculty Research
2012-01-01
Deblurring images corrupted by mixed impulse plus Gaussian noise (Department of Mathematics, Syracuse University). This work studies a problem of image restoration in which observed images are contaminated by Gaussian and impulse noise. Existing methods in the literature are based on minimizing an objective function. Improvements in both PSNR and visual quality of IFASDA over a typical existing method are demonstrated.
Olcay, Ayhan; Guler, Ekrem; Karaca, Ibrahim Oguz; Omaygenc, Mehmet Onur; Kizilirmak, Filiz; Olgun, Erkam; Yenipinar, Esra; Cakmak, Huseyin Altug; Duman, Dursun
2015-04-01
Use of last fluoro hold (LFH) mode in fluoroscopy, which enables the last live image to be saved and displayed, could reduce radiation during percutaneous coronary intervention when compared with cine mode. No previous study compared coronary angiography radiation doses and image quality between LFH and conventional cine mode techniques. We compared cumulative dose-area product (DAP), cumulative air kerma, fluoroscopy time, contrast use, and interobserver variability of visual assessment between LFH angiography and conventional cine angiography techniques. Forty-six patients were prospectively enrolled into the LFH group and 82 patients into the cine angiography group according to operator decision. Mean cumulative DAP was higher in the cine group vs the LFH group (50058.98 ± 53542.71 mGy•cm² vs 11349.2 ± 8796.46 mGy•cm²; P<.001). Mean fluoroscopy times were higher in the cine group vs the LFH group (3.87 ± 5.08 minutes vs 1.66 ± 1.51 minutes; P<.01). Mean contrast use was higher in the cine group vs the LFH group (112.07 ± 43.79 cc vs 88.15 ± 23.84 cc; P<.001). The mean Cronbach's alpha for the visual estimates of the three operators did not differ statistically between the cine and LFH angiography groups (0.66680 ± 0.19309 vs 0.54193 ± 0.31046; P=.20). Radiation doses, contrast use, and fluoroscopy times are lower in fluoroscopic LFH angiography vs cine angiography. Interobserver variability of visual stenosis estimation between the three operators was not different between the cine and LFH groups. Fluoroscopic LFH images conventionally have inferior diagnostic quality when compared with cine coronary angiography, but with new angiographic systems with improved LFH image quality, these images may be adequate for diagnostic coronary angiography.
van der Jagt, M A; Brink, W M; Versluis, M J; Steens, S C A; Briaire, J J; Webb, A G; Frijns, J H M; Verbist, B M
2015-02-01
In many centers, MR imaging of the inner ear and auditory pathway performed on 1.5T or 3T systems is part of the preoperative work-up of cochlear implants. We investigated the applicability of clinical inner ear MR imaging at 7T and compared the visibility of inner ear structures and nerves within the internal auditory canal with images acquired at 3T. Thirteen patients with sensorineural hearing loss eligible for cochlear implantation underwent examinations on 3T and 7T scanners. Two experienced head and neck radiologists evaluated the 52 inner ear datasets. Twenty-four anatomic structures of the inner ear and 1 overall score for image quality were assessed by using a 4-point grading scale for the degree of visibility. The visibility of 11 of the 24 anatomic structures was rated higher on the 7T images. There was no significant difference in the visibility of 13 anatomic structures and the overall quality rating. A higher incidence of artifacts was observed in the 7T images. The gain in SNR at 7T yielded a more detailed visualization of many anatomic structures, especially delicate ones, despite the challenges accompanying MR imaging at a high magnetic field. © 2015 by American Journal of Neuroradiology.
Color image quality in projection displays: a case study
NASA Astrophysics Data System (ADS)
Strand, Monica; Hardeberg, Jon Y.; Nussbaum, Peter
2005-01-01
Recently the use of projection displays has increased dramatically in different applications such as digital cinema, home theatre, and business and educational presentations. Even if the color image quality of these devices has improved significantly over the years, it is still a common situation for users of projection displays that the projected colors differ significantly from the intended ones. The study presented in this paper attempts to analyze the color image quality of a large set of projection display devices, particularly investigating the variations in color reproduction. As a case study, a set of 14 projectors (LCD and DLP technology) at Gjøvik University College was tested under four different conditions: dark and light room, with and without using an ICC profile. To find out more about the importance of the illumination conditions in a room, and the degree of improvement when using an ICC profile, the results from the measurements were processed and analyzed. Eye-One Beamer from GretagMacbeth was used to make the profiles. The color image quality was evaluated both visually and by color difference calculations. The results from the analysis indicated large visual and colorimetric differences between the projectors. The DLP projectors generally have smaller color gamuts than the LCD projectors, and the color gamuts of older projectors are significantly smaller than those of newer ones. The amount of ambient light reaching the screen is of great importance for the visual impression: if too much reflected or other ambient light reaches the screen, the projected image becomes pale and has low contrast. When a profile is used, the differences in colors between the projectors get smaller and the colors appear more correct. For one device, the average ΔE*ab color difference when compared to a relative white reference was reduced from 22 to 11, for another from 13 to 6. Blue colors show the largest variations among the projection displays and are therefore harder to predict.
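The ΔE*ab figures quoted above are Euclidean distances in CIELAB (the CIE76 colour difference); a minimal sketch of that calculation, with made-up patch values, is:

```python
import numpy as np

def delta_e_ab(lab1, lab2):
    """CIE76 colour difference: Euclidean distance in CIELAB."""
    lab1, lab2 = np.asarray(lab1, float), np.asarray(lab2, float)
    return np.sqrt(np.sum((lab1 - lab2) ** 2, axis=-1))

# Hypothetical measured vs. reference patches (L*, a*, b*)
measured  = np.array([[62.0, 10.5, -8.0], [48.3, -22.1, 30.4]])
reference = np.array([[65.0,  8.0, -5.0], [50.0, -20.0, 33.0]])
print(delta_e_ab(measured, reference).mean())   # average ΔE*ab over the patches
```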
Facial identification in very low-resolution images simulating prosthetic vision.
Chang, M H; Kim, H S; Shin, J H; Park, K S
2012-08-01
Familiar facial identification is important to blind or visually impaired patients and can be achieved using a retinal prosthesis. Nevertheless, there are limitations in delivering facial images with a resolution sufficient to distinguish facial features, such as the eyes and nose, through the multichannel electrode arrays used in current visual prostheses. This study verifies the feasibility of familiar facial identification under low-resolution prosthetic vision and proposes an edge-enhancement method to deliver more visual information of higher quality. We first generated a contrast-enhanced image and an edge image by applying the Sobel edge detector and blocked each of them by averaging. Then, we subtracted the blocked edge image from the blocked contrast-enhanced image and produced a pixelized image imitating an array of phosphenes. Before subtraction, every gray value of the edge images was weighted as 50% (mode 2), 75% (mode 3) and 100% (mode 4). In mode 1, the facial image was blocked and pixelized with no further processing. The most successful identification was achieved with mode 3 at every resolution in terms of the identification index, which covers both accuracy and correct response time. We also found that the subjects recognized a distinctive face more accurately and faster than the other given facial images, even under low-resolution prosthetic vision. Every subject could identify familiar faces even in very low-resolution images, and the proposed edge-enhancement method appeared to contribute to intermediate-stage visual prostheses.
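A rough sketch of the described pipeline follows: a Sobel edge image is block-averaged and subtracted from the block-averaged intensity image to mimic a phosphene grid. The block size and the 0.75 edge weight (roughly mode 3) are assumptions, and the paper's separate contrast-enhancement step is omitted.

```python
import numpy as np
from scipy import ndimage

def block_average(img, block):
    """Average the image over non-overlapping blocks (simulates a phosphene grid)."""
    h, w = img.shape
    img = img[: h - h % block, : w - w % block]
    return img.reshape(img.shape[0] // block, block,
                       img.shape[1] // block, block).mean(axis=(1, 3))

def pixelize_with_edges(gray, block=16, edge_weight=0.75):
    """Subtract a weighted, blocked Sobel edge image from the blocked intensity image."""
    edges = np.hypot(ndimage.sobel(gray, axis=0), ndimage.sobel(gray, axis=1))
    edges = edges / (edges.max() + 1e-9)
    return np.clip(block_average(gray, block)
                   - edge_weight * block_average(edges, block), 0, 1)

# usage: gray is a float image scaled to [0, 1]; edge_weight=0.75 loosely mimics mode 3.
```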
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matsuzaki, Y; Jenkins, C; Yang, Y
Purpose: With the growing adoption of proton beam therapy there is an increasing need for effective and user-friendly tools for performing quality assurance (QA) measurements. The speed and versatility of spot-scanning proton beam (PB) therapy systems present unique challenges for traditional QA tools. To address these challenges a proof-of-concept system was developed to visualize, in real-time, the delivery of individual spots from a spot-scanning PB in order to perform QA measurements. Methods: The PB is directed toward a custom phantom with planar faces coated with a radioluminescent phosphor (Gd2O2S:Tb). As the proton beam passes through the phantom, visible light is emitted from the coating and collected by a nearby CMOS camera. The images are processed to determine the locations at which the beam impinges on each face of the phantom. By so doing, the location of each beam can be determined relative to the phantom. The cameras are also used to capture images of the laser alignment system. The phantom contains x-ray fiducials so that it can be easily located with kV imagers. Using this data several quality assurance parameters can be evaluated. Results: The proof-of-concept system was able to visualize discrete PB spots with energies ranging from 70 MeV to 220 MeV. Images were obtained with integration times ranging from 20 to 0.019 milliseconds. If not limited by data transmission, this would correspond to a frame rate of 52,000 fps. Such frame rates enabled visualization of individual spots in real time. Spot locations were found to be highly correlated (R² = 0.99) with the nozzle-mounted spot position monitor, indicating excellent spot positioning accuracy. Conclusion: The system was shown to be capable of imaging individual spots for all clinical beam energies. Future development will focus on extending the image processing software to provide automated results for a variety of QA tests.
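The exact image-processing chain is not given in the abstract, but locating bright scintillation spots in a camera frame typically reduces to thresholding, labelling, and centroiding; a minimal sketch (the threshold fraction is an assumption) is:

```python
import numpy as np
from scipy import ndimage

def spot_centroids(frame, threshold_frac=0.5):
    """Locate bright scintillation spots in a camera frame; return centroids as (row, col)."""
    frame = np.asarray(frame, float)
    mask = frame > threshold_frac * frame.max()       # keep only the brightest regions
    labels, n = ndimage.label(mask)                    # one label per connected spot
    return ndimage.center_of_mass(frame, labels, list(range(1, n + 1)))

# usage: frame is a 2D array from the CMOS camera; the centroids would then be mapped
# through the phantom geometry to beam positions.
```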
Ferré, Jean-Christophe; Petr, Jan; Bannier, Elise; Barillot, Christian; Gauvrit, Jean-Yves
2012-05-01
To compare 12-channel and 32-channel phased-array coils and to determine the optimal parallel imaging (PI) technique and factor for brain perfusion imaging using pulsed arterial spin labeling (PASL) at 3 Tesla (T). Twenty-seven healthy volunteers underwent 10 different PASL perfusion PICORE Q2TIPS scans at 3T using 12-channel and 32-channel coils without PI and with GRAPPA or mSENSE using factor 2. PI with factors 3 and 4 was used only with the 32-channel coil. Visual quality was assessed using four parameters. Quantitative analyses were performed using temporal noise, contrast-to-noise and signal-to-noise ratios (CNR, SNR). Compared with 12-channel acquisition, the scores for 32-channel acquisition were significantly higher for overall visual quality, lower for noise, and higher for SNR and CNR. With the 32-channel coil, the best artifact compromise was achieved with PI factor 2. Noise increased, and SNR and CNR decreased, with increasing PI factor. However, mSENSE 2 scores were not always significantly different from acquisition without PI. For PASL at 3T, the 32-channel coil provided better quality than the 12-channel coil. With the 32-channel coil, mSENSE 2 seemed to offer the best compromise for decreasing artifacts without significantly reducing SNR and CNR. Copyright © 2012 Wiley Periodicals, Inc.
Alvarez, Sergio A; Winner, Ellen; Hawley-Dolan, Angelina; Snapper, Leslie
2015-01-01
People with no arts background often misunderstand abstract art as requiring no skill. However, adults with no art background can discriminate paintings by abstract expressionists from superficially similar works by children and animals. We tested whether participants show different visual exploration when looking at paintings by artists versus those by children or animals. Participants sat at an eye tracker and viewed paintings by artists paired with "similar" paintings by children or animals, and were asked which they preferred and which was better. Mean duration of eye gaze fixations, total fixation time, and spatial extent of visual exploration were greater for the artist images than for the child or animal images in response to the quality question but not the preference question. Pupil dilation was greater for the artist images in response to both questions and greater in response to the quality than the preference question. Explicit selections of images paralleled total fixation times: participants selected at chance for preference, but selected the artist images above chance in response to quality. Results show that lay adults respond differently on both an implicit and an explicit measure when thinking about preference versus quality in art and discriminate abstract paintings by artists from superficially similar works by children and animals, despite the popular misconception by the average viewer that "my kid could have done that." © The Author(s) 2015.
New Software Developments for Quality Mesh Generation and Optimization from Biomedical Imaging Data
Yu, Zeyun; Wang, Jun; Gao, Zhanheng; Xu, Ming; Hoshijima, Masahiko
2013-01-01
In this paper we present a new software toolkit for generating and optimizing surface and volumetric meshes from three-dimensional (3D) biomedical imaging data, targeted at image-based finite element analysis of some biomedical activities in a single material domain. Our toolkit includes a series of geometric processing algorithms including surface re-meshing and quality-guaranteed tetrahedral mesh generation and optimization. All methods described have been encapsulated into a user-friendly graphical interface for easy manipulation and informative visualization of biomedical images and mesh models. Numerous examples are presented to demonstrate the effectiveness and efficiency of the described methods and toolkit. PMID:24252469
Adapting the ISO 20462 softcopy ruler method for online image quality studies
NASA Astrophysics Data System (ADS)
Burns, Peter D.; Phillips, Jonathan B.; Williams, Don
2013-01-01
In this paper we address the problem of image quality assessment with no-reference metrics, focusing on JPEG-corrupted images. In general, no-reference metrics are not able to measure distortions with the same performance across their possible range and across different image contents. The crosstalk between content and distortion signals influences human perception. We here propose two strategies to improve the correlation between subjective and objective quality data. The first strategy is based on grouping the images according to their spatial complexity. The second one is based on a frequency analysis. Both strategies are tested on two databases available in the literature. The results show an improvement in the correlations between no-reference metrics and psycho-visual data, evaluated in terms of the Pearson correlation coefficient.
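The first strategy, grouping images by spatial complexity before correlating objective scores with subjective data, can be sketched as follows; the complexity measure, the quantile-based grouping, and the number of groups are assumptions rather than the paper's exact procedure:

```python
import numpy as np
from scipy.stats import pearsonr

def groupwise_correlation(metric_scores, mos, complexity, n_groups=3):
    """Pearson correlation of metric vs. MOS, computed separately per spatial-complexity group."""
    metric_scores = np.asarray(metric_scores, float)
    mos = np.asarray(mos, float)
    complexity = np.asarray(complexity, float)
    edges = np.quantile(complexity, np.linspace(0, 1, n_groups + 1))
    results = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (complexity >= lo) & (complexity <= hi)   # images in this complexity band
        r, p = pearsonr(metric_scores[sel], mos[sel])
        results.append((r, p))
    return results
```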
Hirata, Kenichiro; Utsunomiya, Daisuke; Kidoh, Masafumi; Funama, Yoshinori; Oda, Seitaro; Yuki, Hideaki; Nagayama, Yasunori; Iyama, Yuji; Nakaura, Takeshi; Sakabe, Daisuke; Tsujita, Kenichi; Yamashita, Yasuyuki
2018-05-01
We aimed to evaluate the image quality performance of coronary CT angiography (CTA) under different settings of the forward-projected model-based iterative reconstruction solution (FIRST). Thirty patients undergoing coronary CTA were included. Each image was reconstructed using filtered back projection (FBP), adaptive iterative dose reduction 3D (AIDR-3D), and 2 model-based iterative reconstructions, FIRST-body and FIRST-cardiac sharp (CS). CT number and noise were measured in the coronary vessels and plaque. Subjective image-quality scores were obtained for noise and structure visibility. In the objective image analysis, FIRST-body produced the highest contrast-to-noise ratio (a statistically significant difference). Regarding subjective image quality, FIRST-CS had the highest score for structure visibility, although its image noise score was inferior to that of FIRST-body. In conclusion, FIRST provides significant improvements in objective and subjective image quality compared with FBP and AIDR-3D. FIRST-body effectively reduces image noise, but structure visibility with FIRST-CS was superior to FIRST-body.
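The contrast-to-noise ratio comparisons above are based on ROI statistics; one common definition (the study's exact ROI placement and formula may differ) is sketched below:

```python
import numpy as np

def contrast_to_noise_ratio(vessel_roi, background_roi):
    """CNR = (mean vessel HU - mean background HU) / background noise (SD)."""
    vessel_roi = np.asarray(vessel_roi, float)
    background_roi = np.asarray(background_roi, float)
    return (vessel_roi.mean() - background_roi.mean()) / background_roi.std(ddof=1)

# usage: pass the pixel values inside a coronary-lumen ROI and a nearby background ROI
# drawn on each reconstruction (FBP, AIDR-3D, FIRST-body, FIRST-CS) to compare them.
```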
Ko, Weon Jin; An, Pyeong; Ko, Kwang Hyun; Hahm, Ki Baik; Hong, Sung Pyo
2015-01-01
Arising from human curiosity in terms of the desire to look within the human body, endoscopy has undergone significant advances in modern medicine. Direct visualization of the gastrointestinal (GI) tract by traditional endoscopy was first introduced over 50 years ago, after which fairly rapid advancement from rigid esophagogastric scopes to flexible scopes and high-definition videoscopes has occurred. In an effort towards early detection of precancerous lesions in the GI tract, several high-technology imaging scopes have been developed, including narrow band imaging, autofluorescence imaging, magnified endoscopy, and confocal microendoscopy. However, these modern developments have resulted in fundamental imaging technology being skewed towards red-green-blue imaging, which has obscured the advantages of other endoscope techniques. In this review article, we describe the importance of image quality analysis, using a survey to consider the diversity of endoscope system selection, in order to better achieve diagnostic and therapeutic goals. The ultimate aims can be achieved through the adoption of modern endoscopy systems that obtain high image quality. PMID:26473119
Multi-scale image segmentation method with visual saliency constraints and its application
NASA Astrophysics Data System (ADS)
Chen, Yan; Yu, Jie; Sun, Kaimin
2018-03-01
Object-based image analysis methods have many advantages over pixel-based methods, so they are a current research hotspot. Obtaining image objects by multi-scale image segmentation is essential for object-based image analysis. The currently popular image segmentation methods mainly share the bottom-up segmentation principle, which is simple to realize and yields accurate object boundaries. However, the macro statistical characteristics of image areas are difficult to take into account, and fragmented (over-segmented) results are difficult to avoid. In addition, when it comes to information extraction, target recognition, and other applications, image targets are not equally important: some specific targets or target groups with particular features deserve more attention than the others. To avoid over-segmentation and highlight the targets of interest, this paper proposes a multi-scale image segmentation method with visual saliency constraints. Visual saliency theory and a typical feature extraction method are adopted to obtain the visual saliency information, especially the macroscopic information to be analyzed. The visual saliency information is used as a distribution map of homogeneity weight, where each pixel is given a weight. This weight acts as one of the merging constraints in the multi-scale image segmentation. As a result, pixels that macroscopically belong to the same object but are locally different can more likely be assigned to the same object. In addition, due to the constraint of the visual saliency model, the influence of local versus macroscopic characteristics can be well controlled during the segmentation process for different objects. These controls improve the completeness of visually salient areas in the segmentation results while diluting the controlling effect for non-salient background areas. Experiments show that this method works better for texture image segmentation than traditional multi-scale image segmentation methods, and that it can give priority control to the salient objects of interest. The method has been used in image quality evaluation, scattered residential area extraction, sparse forest extraction, and other applications to verify its validity. All applications showed good results.
Method and Apparatus for Evaluating the Visual Quality of Processed Digital Video Sequences
NASA Technical Reports Server (NTRS)
Watson, Andrew B. (Inventor)
2002-01-01
A Digital Video Quality (DVQ) apparatus and method that incorporate a model of human visual sensitivity to predict the visibility of artifacts. The DVQ method and apparatus are used for the evaluation of the visual quality of processed digital video sequences and for adaptively controlling the bit rate of the processed digital video sequences without compromising the visual quality. The DVQ apparatus minimizes the required amount of memory and computation. The input to the DVQ apparatus is a pair of color image sequences: an original (R) non-compressed sequence, and a processed (T) sequence. Both sequences (R) and (T) are sampled, cropped, and subjected to color transformations. The sequences are then subjected to blocking and discrete cosine transformation, and the results are transformed to local contrast. The next step is a time filtering operation which implements the human sensitivity to different time frequencies. The results are converted to threshold units by dividing each discrete cosine transform coefficient by its respective visual threshold. At the next stage the two sequences are subtracted to produce an error sequence. The error sequence is subjected to a contrast masking operation, which also depends upon the reference sequence (R). The masked errors can be pooled in various ways to illustrate the perceptual error over various dimensions, and the pooled error can be converted to a visual quality measure.
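A toy sketch of the threshold-normalization and pooling steps described above, for a single pair of blocks: the DCT-domain error is divided by per-frequency visual thresholds and then Minkowski-pooled. The local-contrast conversion, temporal filtering, and contrast-masking stages are omitted, and the threshold values and pooling exponent are assumptions, not the patent's parameters.

```python
import numpy as np
from scipy.fft import dctn

def jnd_normalized_error(ref_block, test_block, thresholds, beta=4.0):
    """Express a DCT-domain error in threshold (JND) units, then pool with a Minkowski sum."""
    err = dctn(np.asarray(test_block, float), norm="ortho") \
        - dctn(np.asarray(ref_block, float), norm="ortho")
    jnd_units = err / thresholds          # divide each coefficient by its visual threshold
    return (np.abs(jnd_units) ** beta).sum() ** (1.0 / beta)

# usage: ref_block and test_block are matching 8x8 luminance blocks; `thresholds` is an
# 8x8 array of per-frequency visibility thresholds (values assumed, e.g. from a CSF model):
# score = jnd_normalized_error(ref_block, test_block, np.full((8, 8), 0.5))
```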
Benz, Dominik C; Gräni, Christoph; Mikulicic, Fran; Vontobel, Jan; Fuchs, Tobias A; Possner, Mathias; Clerc, Olivier F; Stehli, Julia; Gaemperli, Oliver; Pazhenkottil, Aju P; Buechel, Ronny R; Kaufmann, Philipp A
The clinical utility of a latest generation iterative reconstruction algorithm (adaptive statistical iterative reconstruction [ASiR-V]) has yet to be elucidated for coronary computed tomography angiography (CCTA). This study evaluates the impact of ASiR-V on signal, noise and image quality in CCTA. Sixty-five patients underwent clinically indicated CCTA on a 256-slice CT scanner using an ultralow-dose protocol. Data sets from each patient were reconstructed at 6 different levels of ASiR-V. Signal intensity was measured by placing a region of interest in the aortic root, LMA, and RCA. Similarly, noise was measured in the aortic root. Image quality was visually assessed by 2 readers. Median radiation dose was 0.49 mSv. Image noise decreased with increasing levels of ASiR-V resulting in a significant increase in signal-to-noise ratio in the RCA and LMA (P < 0.001). Correspondingly, image quality significantly increased with higher levels of ASiR-V (P < 0.001). ASiR-V yields substantial noise reduction and improved image quality enabling introduction of ultralow-dose CCTA.
Quality Improvement of Liver Ultrasound Images Using Fuzzy Techniques.
Bayani, Azadeh; Langarizadeh, Mostafa; Radmard, Amir Reza; Nejad, Ahmadreza Farzaneh
2016-12-01
Liver ultrasound images are commonly used to diagnose diffuse liver diseases such as fatty liver. However, the low quality of such images makes it difficult to analyze them and diagnose diseases. The purpose of this study, therefore, was to improve the contrast and quality of liver ultrasound images. In this study, a number of fuzzy-logic-based image contrast enhancement algorithms were applied, using Matlab2013b, to liver ultrasound images in which the kidney is visible: contrast improvement using a fuzzy intensification operator, contrast improvement applying fuzzy image histogram hyperbolization, and contrast improvement by fuzzy IF-THEN rules. Judged by the Mean Squared Error and Peak Signal to Noise Ratio obtained from different images, the fuzzy methods provided better results than the histogram equalization method, improving both the contrast and visual quality of the images and the results of liver segmentation algorithms applied to them. Comparison of the four algorithms revealed the power of fuzzy logic in improving image contrast compared with traditional image processing algorithms. Moreover, the contrast improvement algorithm based on a fuzzy intensification operator was selected as the strongest algorithm considering the measured indicators. This method can also be used in future studies on other ultrasound images for quality improvement and other image processing and analysis applications.
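Of the three fuzzy approaches compared above, the intensification (INT) operator is the simplest to sketch; a minimal version (the iteration count is an assumption, and the simple min-max fuzzification is one of several possible choices) is:

```python
import numpy as np

def fuzzy_intensification(gray, iterations=1):
    """Contrast enhancement with the classical fuzzy INT operator.

    gray: uint8 or float image; returns a float image in [0, 1].
    """
    g = np.asarray(gray, float)
    mu = (g - g.min()) / (g.max() - g.min() + 1e-9)       # fuzzify to [0, 1]
    for _ in range(iterations):
        mu = np.where(mu <= 0.5, 2 * mu ** 2, 1 - 2 * (1 - mu) ** 2)  # INT operator
    return mu                                             # defuzzified enhanced image

# usage: enhanced = fuzzy_intensification(ultrasound_slice, iterations=2)
```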
Digital to analog conversion and visual evaluation of Thematic Mapper data
McCord, James R.; Binnie, Douglas R.; Seevers, Paul M.
1985-01-01
As a part of the National Aeronautics and Space Administration Landsat D Image Data Quality Analysis Program, the Earth Resources Observation Systems Data Center (EDC) developed procedures to optimize the visual information content of Thematic Mapper data and evaluate the resulting photographic products by visual interpretation. A digital-to-analog transfer function was developed which would properly place the digital values on the most useable portion of a film response curve. Individual black-and-white transparencies generated using the resulting look-up tables were utilized in the production of color-composite images with varying band combinations. Four experienced photointerpreters ranked 2-cm-diameter (0.75 inch) chips of selected image features of each band combination for ease of interpretability. A nonparametric rank-order test determined the significance of interpreter preference for the band combinations.
Compressed-Sensing Multi-Spectral Imaging of the Post-Operative Spine
Worters, Pauline W.; Sung, Kyunghyun; Stevens, Kathryn J.; Koch, Kevin M.; Hargreaves, Brian A.
2012-01-01
Purpose: To apply compressed sensing (CS) to in vivo multi-spectral imaging (MSI), which uses additional encoding to avoid MRI artifacts near metal, and to demonstrate the feasibility of CS-MSI in post-operative spinal imaging. Materials and Methods: Thirteen subjects referred for spinal MRI were examined using T2-weighted MSI. A CS undersampling factor was first determined using a structural similarity index as a metric for image quality. Next, these fully sampled datasets were retrospectively undersampled using a variable-density random sampling scheme and reconstructed using an iterative soft-thresholding method. The fully- and under-sampled images were compared using a 5-point scale. Prospectively undersampled CS-MSI data were also acquired from two subjects to ensure that the prospective random sampling did not affect the image quality. Results: A two-fold outer reduction factor was deemed feasible for the spinal datasets. CS-MSI images were shown to be equivalent to or better than the original MSI images in all categories: nerve visualization, p = 0.00018; image artifact, p = 0.00031; image quality, p = 0.0030. No alteration of image quality and T2 contrast was observed from prospectively undersampled CS-MSI. Conclusion: This study shows that the inherently sparse nature of MSI data allows modest undersampling followed by CS reconstruction with no loss of diagnostic quality. PMID:22791572
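The reconstruction principle, iterative soft-thresholding of an undersampled acquisition, can be illustrated on a toy one-dimensional sparse-recovery problem; this is not the MSI pipeline itself, and the sensing matrix, sparsity level, and regularization weight below are invented:

```python
import numpy as np

def ista(A, y, lam=0.05, n_iter=200):
    """Iterative soft-thresholding: minimize 0.5*||Ax - y||^2 + lam*||x||_1."""
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        z = x - step * (A.T @ (A @ x - y))          # gradient step on the data term
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # soft threshold
    return x

# toy usage: recover a sparse vector from random undersampled measurements
rng = np.random.default_rng(0)
x_true = np.zeros(256)
x_true[rng.choice(256, 10, replace=False)] = rng.standard_normal(10)
A = rng.standard_normal((96, 256)) / np.sqrt(96)
x_hat = ista(A, A @ x_true, lam=0.02)
```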
Computer-aided diagnosis based on enhancement of degraded fundus photographs.
Jin, Kai; Zhou, Mei; Wang, Shaoze; Lou, Lixia; Xu, Yufeng; Ye, Juan; Qian, Dahong
2018-05-01
Retinal imaging is an important and effective tool for detecting retinal diseases. However, degraded images caused by the aberrations of the eye can disguise lesions, so that a diseased eye can be mistakenly diagnosed as normal. In this work, we propose a new image enhancement method to improve the quality of degraded fundus images. In this method, the image is converted from the input RGB colour space to the LAB colour space and each normalized component is then enhanced using contrast-limited adaptive histogram equalization. Human visual system (HVS)-based fundus image quality assessment, combined with diagnosis by experts, is used to evaluate the enhancement. The study included 191 degraded-quality fundus photographs of 143 subjects with optic media opacity. Objective quality assessment of image enhancement (range: 0-1) indicated that our method improved colour retinal image quality from an average of 0.0773 (variance 0.0801) to an average of 0.3973 (variance 0.0756). Following enhancement, areas under the curve (AUC) were 0.996 for the glaucoma classifier, 0.989 for the diabetic retinopathy (DR) classifier, 0.975 for the age-related macular degeneration (AMD) classifier and 0.979 for the classifier for other retinal diseases. This relatively simple method for enhancing degraded-quality fundus images achieves superior image enhancement, as demonstrated in a qualitative HVS-based image quality assessment. This retinal image enhancement may, therefore, be employed to assist ophthalmologists in more efficient screening of retinal diseases and the development of computer-aided diagnosis. © 2017 Acta Ophthalmologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.
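A simplified sketch of the enhancement step described above, applying CLAHE to the lightness channel after an RGB-to-LAB conversion using OpenCV; the paper enhances each normalized component, and the clip limit and tile size here are assumptions:

```python
import cv2

def enhance_fundus(bgr):
    """Enhance a degraded fundus photograph: CLAHE on the lightness channel in LAB space."""
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))   # parameters assumed
    lab = cv2.merge((clahe.apply(l), a, b))
    return cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)

# usage: enhanced = enhance_fundus(cv2.imread("fundus.jpg"))
```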
ERIC Educational Resources Information Center
Wang, Lihui; Lawson, Michael J.; Curtis, David D.
2015-01-01
Imagery training has been shown to improve reading comprehension. Recent research has also shown that the quality of visual mental imagery used is important for reading comprehension. A review of literature shows that there has been relatively little detailed research on the quality of imagery used by learners, especially in the case of students…
Lv, Peijie; Liu, Jie; Chai, Yaru; Yan, Xiaopeng; Gao, Jianbo; Dong, Junqiang
2017-01-01
To evaluate the feasibility, image quality, and radiation dose of automatic spectral imaging protocol selection (ASIS) and adaptive statistical iterative reconstruction (ASIR) with reduced contrast agent dose in abdominal multiphase CT. One hundred and sixty patients were randomly divided into two scan protocols (n = 80 each): protocol A, 120 kVp/450 mgI/kg, filtered back projection algorithm (FBP); protocol B, spectral CT imaging with ASIS and 40 to 70 keV monochromatic images generated per 300 mgI/kg, ASIR algorithm. Quantitative parameters (image noise and contrast-to-noise ratios [CNRs]) and qualitative visual parameters (image noise, small structures, organ enhancement, and overall image quality) were compared. Monochromatic images at 50 keV and 60 keV provided similar or lower image noise, but higher contrast and overall image quality as compared with 120-kVp images. Despite the higher image noise, 40-keV images showed similar overall image quality compared to 120-kVp images. Radiation dose did not differ between the two protocols, while contrast agent dose in protocol B was reduced by 33%. Application of ASIR and ASIS to monochromatic imaging from 40 to 60 keV allowed contrast agent dose reduction with adequate image quality and without increasing radiation dose compared to 120 kVp with FBP. • Automatic spectral imaging protocol selection provides appropriate scan protocols. • Abdominal CT is feasible using spectral imaging and 300 mgI/kg contrast agent. • 50-keV monochromatic images with 50% ASIR provide optimal image quality.
Mendoza, Patricia; d'Anjou, Marc-André; Carmel, Eric N; Fournier, Eric; Mai, Wilfried; Alexander, Kate; Winter, Matthew D; Zwingenberger, Allison L; Thrall, Donald E; Theoret, Christine
2014-01-01
Understanding radiographic anatomy and the effects of varying patient and radiographic tube positioning on image quality can be a challenge for students. The purposes of this study were to develop and validate a novel technique for creating simulated radiographs using computed tomography (CT) datasets. A DICOM viewer (ORS Visual) plug-in was developed with the ability to move and deform cuboidal volumetric CT datasets, and to produce images simulating the effects of tube-patient-detector distance and angulation. Computed tomographic datasets were acquired from two dogs, one cat, and one horse. Simulated radiographs of different body parts (n = 9) were produced using different angles to mimic conventional projections, before actual digital radiographs were obtained using the same projections. These studies (n = 18) were then submitted to 10 board-certified radiologists who were asked to score visualization of anatomical landmarks, depiction of patient positioning, realism of distortion/magnification, and image quality. No significant differences between simulated and actual radiographs were found for anatomic structure visualization and patient positioning in the majority of body parts. For the assessment of radiographic realism, no significant differences were found between simulated and digital radiographs for canine pelvis, equine tarsus, and feline abdomen body parts. Overall, image quality and contrast resolution of simulated radiographs were considered satisfactory. Findings from the current study indicated that radiographs simulated using this new technique are comparable to actual digital radiographs. Further studies are needed to apply this technique in developing interactive tools for teaching radiographic anatomy and the effects of varying patient and tube positioning. © 2013 American College of Veterinary Radiology.
Blew, Robert M; Lee, Vinson R; Farr, Joshua N; Schiferl, Daniel J; Going, Scott B
2014-02-01
Peripheral quantitative computed tomography (pQCT) is an essential tool for assessing bone parameters of the limbs, but subject movement and its impact on image quality remains a challenge to manage. The current approach to determine image viability is by visual inspection, but pQCT lacks a quantitative evaluation. Therefore, the aims of this study were to (1) examine the reliability of a qualitative visual inspection scale and (2) establish a quantitative motion assessment methodology. Scans were performed on 506 healthy girls (9-13 years) at diaphyseal regions of the femur and tibia. Scans were rated for movement independently by three technicians using a linear, nominal scale. Quantitatively, a ratio of movement to limb size (%Move) provided a measure of movement artifact. A repeat-scan subsample (n = 46) was examined to determine %Move's impact on bone parameters. Agreement between measurers was strong (intraclass correlation coefficient = 0.732 for tibia, 0.812 for femur), but greater variability was observed in scans rated 3 or 4, the delineation between repeat and no repeat. The quantitative approach found ≥95% of subjects had %Move <25%. Comparison of initial and repeat scans by groups above and below 25% initial movement showed significant differences in the >25% grouping. A pQCT visual inspection scale can be a reliable metric of image quality, but technicians may periodically mischaracterize subject motion. The presented quantitative methodology yields more consistent movement assessment and could unify procedure across laboratories. Data suggest a delineation of 25% movement for determining whether a diaphyseal scan is viable or requires repeat.
An efficient visualization method for analyzing biometric data
NASA Astrophysics Data System (ADS)
Rahmes, Mark; McGonagle, Mike; Yates, J. Harlan; Henning, Ronda; Hackett, Jay
2013-05-01
We introduce a novel application for biometric data analysis. This technology can be used as part of a unique and systematic approach designed to augment existing processing chains. Our system provides image quality control and analysis capabilities. We show how analysis and efficient visualization are used as part of an automated process. The goal of this system is to provide a unified platform for the analysis of biometric images that reduces manual effort and increases the likelihood of a match being brought to an examiner's attention, whether from a manual or a lights-out application. We discuss the functionality of FeatureSCOPE™, which provides an efficient tool for feature analysis and quality control of biometric extracted features. Biometric databases must be checked for accuracy for a large volume of data attributes. Our solution accelerates the review of features by up to a factor of 100. Qualitative results and cost reduction are demonstrated using efficient parallel visual review for quality control. Our process automatically sorts and filters features for examination, and packs these into a condensed view. An analyst can then rapidly page through screens of features and flag and annotate outliers as necessary.
NASA Astrophysics Data System (ADS)
Gong, Rui; Xu, Haisong; Wang, Binyu; Luo, Ming Ronnier
2012-08-01
The image quality of two active-matrix organic light-emitting diode (AMOLED) smartphone displays and two in-plane switching (IPS) displays was visually assessed at two levels of ambient lighting corresponding to indoor and outdoor applications, respectively. Naturalness, colorfulness, brightness, contrast, sharpness, and overall image quality were evaluated in a psychophysical experiment using the categorical judgment method with test images selected from different application categories. The experimental results show that the AMOLED displays perform better on colorfulness because of their wide color gamut, while the high pixel resolution and high peak luminance of the IPS panels help the perception of brightness, contrast, and sharpness. Further statistical analysis using ANOVA indicates that ambient lighting levels have significant influences on the attributes of brightness and contrast.
NASA Technical Reports Server (NTRS)
Full, William E.; Eppler, Duane T.
1993-01-01
The effectiveness of multichannel Wiener filters in improving images obtained with passive microwave systems was investigated by applying Wiener filters to passive microwave images of first-year sea ice. Four major parameters which define the filter were varied: the lag or pixel offset between the original and the desired scenes, the filter length, the number of lines in the filter, and the weight applied to the empirical correlation functions. The effect of each variable on the image quality was assessed by visually comparing the results. It was found that the application of multichannel Wiener theory to passive microwave images of first-year sea ice resulted in visually sharper images with enhanced textural features and less high-frequency noise. However, Wiener filters induced a slight blocky grain to the image and could produce a type of ringing along scan lines traversing sharp intensity contrasts.
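scipy's built-in Wiener filter is a single-channel, locally adaptive variant rather than the multichannel, lag-offset formulation studied above, but it illustrates the basic smoothing idea; the window size is an assumption:

```python
import numpy as np
from scipy.signal import wiener

def denoise_passive_microwave(image, window=5):
    """Apply a local adaptive Wiener filter to a 2D brightness-temperature image."""
    return wiener(np.asarray(image, float), mysize=window)

# usage: filtered = denoise_passive_microwave(tb_image, window=7)
```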
Measuring saliency in images: which experimental parameters for the assessment of image quality?
NASA Astrophysics Data System (ADS)
Fredembach, Clement; Woolfe, Geoff; Wang, Jue
2012-01-01
Predicting which areas of an image are perceptually salient or attended to has become an essential pre-requisite of many computer vision applications. Because observers are notoriously unreliable in remembering where they look a posteriori, and because asking where they look while observing the image necessarily influences the results, ground truth about saliency and visual attention has to be obtained by gaze tracking methods. From the early work of Buswell and Yarbus to the most recent forays in computer vision there has been, perhaps unfortunately, little agreement on standardisation of eye tracking protocols for measuring visual attention. As the number of parameters involved in experimental methodology can be large, their individual influence on the final results is not well understood. Consequently, the performance of saliency algorithms, when assessed by correlation techniques, varies greatly across the literature. In this paper, we concern ourselves with the problem of image quality. Specifically: where people look when judging images. We show that in this case, the performance gap between existing saliency prediction algorithms and experimental results is significantly larger than otherwise reported. To understand this discrepancy, we first devise an experimental protocol that is adapted to the task of measuring image quality. In a second step, we compare our experimental parameters with those of existing methods and show that a lot of the variability can directly be ascribed to these differences in experimental methodology and choice of variables. In particular, the choice of a task, e.g., judging image quality vs. free viewing, has a great impact on measured saliency maps, suggesting that even for a mildly cognitive task, ground truth obtained by free viewing does not adapt well. Careful analysis of the prior art also reveals that systematic bias can occur depending on instrumental calibration and the choice of test images. We conclude this work by proposing a set of parameters, tasks and images that can be used to compare the various saliency prediction methods in a manner that is meaningful for image quality assessment.
A fast and automatic fusion algorithm for unregistered multi-exposure image sequence
NASA Astrophysics Data System (ADS)
Liu, Yan; Yu, Feihong
2014-09-01
The human visual system (HVS) can perceive all the brightness levels of a scene through visual adaptation. However, the dynamic range of most commercial digital cameras and display devices is smaller than that of the human eye, which implies that low dynamic range (LDR) images captured by a normal digital camera may lose image details. We propose an efficient approach to high dynamic range (HDR) image fusion that copes with image displacement and image blur degradation in a computationally efficient manner, which is suitable for implementation on mobile devices. The various image registration algorithms proposed in the previous literature are unable to meet the efficiency and performance requirements of mobile-device applications. In this paper, we selected the Oriented FAST and Rotated BRIEF (ORB) detector to extract local image structures. The descriptor used in a multi-exposure image fusion algorithm has to be fast and robust to illumination variations and geometric deformations, and the ORB descriptor is the best candidate for our algorithm. Further, we apply an improved RANdom SAmple Consensus (RANSAC) algorithm to reject incorrect matches. For the fusion of images, a new approach based on the Stationary Wavelet Transform (SWT) is used. The experimental results demonstrate that the proposed algorithm generates high-quality images at low computational cost. Comparisons with a number of other feature matching methods show that our method achieves better performance.
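A condensed sketch of the described chain for two grayscale exposures: ORB feature matching, a RANSAC-filtered homography, warping, and a simple stationary-wavelet fusion rule (averaged approximation, max-magnitude details). The wavelet, fusion rule, and parameter values are assumptions, and image dimensions are assumed even for the single-level SWT.

```python
import cv2
import numpy as np
import pywt

def register_and_fuse(ref_gray, mov_gray):
    """Align mov_gray to ref_gray with ORB + RANSAC, then fuse via a single-level SWT."""
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(ref_gray, None)
    k2, d2 = orb.detectAndCompute(mov_gray, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    src = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)     # reject outlier matches
    aligned = cv2.warpPerspective(mov_gray, H, ref_gray.shape[::-1])

    # Stationary wavelet fusion: average the approximation, keep max-magnitude details.
    (a_r, (h_r, v_r, d_r)) = pywt.swt2(np.float64(ref_gray), "db2", level=1)[0]
    (a_m, (h_m, v_m, d_m)) = pywt.swt2(np.float64(aligned), "db2", level=1)[0]

    def pick(x, y):
        return np.where(np.abs(x) >= np.abs(y), x, y)

    fused = ((a_r + a_m) / 2, (pick(h_r, h_m), pick(v_r, v_m), pick(d_r, d_m)))
    return pywt.iswt2([fused], "db2")

# usage: both inputs are uint8 grayscale exposures of the same scene.
```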
Multiple Image Arrangement for Subjective Quality Assessment
NASA Astrophysics Data System (ADS)
Wang, Yan; Zhai, Guangtao
2017-12-01
Subjective quality assessment serves as the foundation for almost all visual quality related research. The size of image quality databases has expanded from dozens to thousands of images in the last decades. Since each subjective rating has to be averaged over quite a few participants, the ever-increasing size of these databases calls for an evolution of existing subjective test methods. Traditional single/double stimulus based approaches are being replaced by multiple image tests, where several distorted versions of the original image are displayed and rated at once. This naturally raises the question of how to arrange those multiple images on screen during the test. In this paper, we answer this question by performing subjective viewing tests with an eye tracker for different types of arrangements. Our research indicates that an isometric arrangement imposes less strain on participants and yields a more uniform distribution of eye fixations and movements, and is therefore expected to generate more reliable subjective ratings.
NASA Technical Reports Server (NTRS)
Young, L. R.
1975-01-01
Preliminary tests and evaluation are presented of pilot performance during landing (flight paths) using computer generated images (video tapes). Psychophysiological factors affecting pilot visual perception were measured. A turning flight maneuver (pitch and roll) was specifically studied using a training device, and the scaling laws involved were determined. Also presented are medical studies (abstracts) on human response to gravity variations without visual cues, acceleration stimuli effects on the semicircular canals, and neurons affecting eye movements, and vestibular tests.
Characteristics of flight simulator visual systems
NASA Technical Reports Server (NTRS)
Statler, I. C. (Editor)
1981-01-01
The physical parameters of the flight simulator visual system that characterize the system and determine its fidelity are identified and defined. The characteristics of visual simulation systems are discussed in terms of the basic categories of spatial, energy, and temporal properties corresponding to the three fundamental quantities of length, mass, and time. Each of these parameters is further addressed in relation to its effect, its appropriate units or descriptors, methods of measurement, and its use or importance to image quality.
Strauss, Rupert W; Krieglstein, Tina R; Priglinger, Siegfried G; Reis, Werner; Ulbig, Michael W; Kampik, Anselm; Neubauer, Aljoscha S
2007-11-01
To establish a set of quality parameters for grading image quality and apply those to evaluate the fundus image quality obtained by a new scanning digital ophthalmoscope (SDO) compared with standard slide photography. On visual analogue scales a total of eight image characteristics were defined: overall quality, contrast, colour brilliance, focus (sharpness), resolution and details, noise, artefacts and validity of clinical assessment. Grading was repeated after 4 months to assess repeatability. Fundus images of 23 patients imaged digitally by SDO and by Zeiss 450FF fundus camera using Kodak film were graded side-by-side by three graders. Lens opacity was quantified with the Interzeag Lens Opacity Meter 701. For all of the eight scales of image quality, good repeatability within the graders (mean Kendall's W 0.69) was obtained after 4 months. Inter-grader agreement ranged between 0.31 and 0.66. Despite the SDO's limited nominal image resolution of 720 x 576 pixels, the Zeiss FF 450 camera performed better in only two of the subscales - noise (p = 0.001) and artefacts (p = 0.01). Lens opacities significantly influenced only the two subscales 'resolution' and 'details', which deteriorated with increasing media opacities for both imaging systems. Distinct scales to grade image characteristics of different origin were developed and validated. Overall SDO digital imaging was found to provide fundus pictures of a similarly high level of quality as expert photography on slides.
Quality assessment of color images based on the measure of just noticeable color difference
NASA Astrophysics Data System (ADS)
Chou, Chun-Hsien; Hsu, Yun-Hsiang
2014-01-01
Accurate assessment of the quality of color images is an important step in many image processing systems that convey visual information through reproduced images. An accurate objective image quality assessment (IQA) method is expected to give results that agree closely with subjective assessment. To assess the quality of color images, many approaches simply apply a metric designed for gray-scale images to each of the three color channels of the color image, neglecting the correlation among the channels. In this paper, a metric for assessing the quality of color images is proposed, in which a model of variable just-noticeable color difference (VJNCD) is employed to estimate the visibility thresholds of distortion inherent in each color pixel. With the estimated visibility thresholds of distortion, the proposed metric measures the average perceptible distortion in terms of the quantized distortion according to a perceptual error map similar to that defined by the National Bureau of Standards (NBS) for converting the color difference enumerated by CIEDE2000 into an objective score of perceptual quality. The perceptual error map in this case is designed for each pixel according to the visibility threshold estimated by the VJNCD model. The performance of the proposed metric is verified by assessing the test images in the LIVE database and is compared with those of many well-known IQA metrics. Experimental results indicate that the proposed metric is an effective IQA method that can accurately predict the quality of color images in terms of the correlation between objective scores and subjective evaluations.
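The VJNCD model and the NBS-style perceptual error map are not reproduced here, but the idea of gating colour error by a per-pixel just-noticeable difference before pooling can be sketched as follows; a constant JND map is a crude stand-in for the model's output:

```python
import numpy as np
from skimage import color

def perceptible_distortion(ref_rgb, test_rgb, jnd_map):
    """Pool only the colour errors that exceed a per-pixel just-noticeable-difference threshold.

    ref_rgb, test_rgb: float RGB images in [0, 1]; jnd_map: per-pixel JND in ΔE00 units.
    """
    de = color.deltaE_ciede2000(color.rgb2lab(ref_rgb), color.rgb2lab(test_rgb))
    visible = np.maximum(de - jnd_map, 0.0)        # distortion below threshold is ignored
    return visible.mean()

# usage: a constant map such as np.full(ref_rgb.shape[:2], 2.3) approximates a uniform JND;
# a VJNCD-style model would instead supply a spatially varying threshold per pixel.
```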
Exploring an optimal wavelet-based filter for cryo-ET imaging.
Huang, Xinrui; Li, Sha; Gao, Song
2018-02-07
Cryo-electron tomography (cryo-ET) is one of the most advanced technologies for the in situ visualization of molecular machines by producing three-dimensional (3D) biological structures. However, cryo-ET imaging has two serious disadvantages, low dose and low image contrast, which result in high-resolution information being obscured by noise and in degraded image quality, causing errors in biological interpretation. The purpose of this research is to explore an optimal wavelet denoising technique to reduce noise in cryo-ET images. We perform tests using simulation data and design a filter using the optimum selected wavelet parameters (three-level decomposition, level-1 details zeroed out, subband-dependent threshold, soft thresholding and a spline-based discrete dyadic wavelet transform (DDWT)), which we call a modified wavelet shrinkage filter; this filter is suitable for noisy cryo-ET data. When testing with real cryo-ET experimental data, higher quality images and more accurate measures of a biological structure can be obtained with the modified wavelet shrinkage filter compared with conventional processing. Because the proposed method provides an inherent advantage when dealing with cryo-ET images, it can extend the current state-of-the-art technology in assisting all aspects of cryo-ET studies: visualization, reconstruction, structural analysis, and interpretation.
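The sketch below captures the general recipe named in the abstract (three-level decomposition, level-1 details zeroed, soft thresholding) using PyWavelets; the biorthogonal wavelet and the single universal threshold are simplifying assumptions in place of the spline-based DDWT and subband-dependent thresholds of the authors' modified filter.

```python
import numpy as np
import pywt

def modified_wavelet_shrink(image, wavelet="bior2.2", levels=3):
    """Zero the finest detail subbands and soft-threshold the rest."""
    coeffs = pywt.wavedec2(image, wavelet, level=levels)
    # coeffs = [cA3, (cH3, cV3, cD3), (cH2, cV2, cD2), (cH1, cV1, cD1)]
    sigma = np.median(np.abs(coeffs[-1][2])) / 0.6745        # noise estimate
    thr = sigma * np.sqrt(2.0 * np.log(image.size))          # universal threshold
    new_coeffs = [coeffs[0]]
    for i, subbands in enumerate(coeffs[1:], start=1):
        if i == levels:   # last entry holds the finest (level-1) details
            new_coeffs.append(tuple(np.zeros_like(c) for c in subbands))
        else:
            new_coeffs.append(tuple(pywt.threshold(c, thr, mode="soft")
                                    for c in subbands))
    return pywt.waverec2(new_coeffs, wavelet)
```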
Recursive search method for the image elements of functionally defined surfaces
NASA Astrophysics Data System (ADS)
Vyatkin, S. I.
2017-05-01
This paper addresses the synthesis of high-quality images in real time and a technique for specifying three-dimensional objects on the basis of perturbation functions. A recursive search method for the image elements of functionally defined objects using graphics processing units is proposed. The advantages of this approach over the frame-buffer visualization method are shown.
Purgative bowel cleansing combined with simethicone improves capsule endoscopy imaging.
Wei, Wei; Ge, Zhi-Zheng; Lu, Hong; Gao, Yun-Jie; Hu, Yun-Biao; Xiao, Shu-Dong
2008-01-01
To evaluate the effects of the various methods of small bowel preparation on the quality of visualization of the small bowel and the gastrointestinal transit time of capsule endoscopy (CE). Ninety patients referred for CE were prospectively randomized to three equal groups according to the preparation used: (a) a control group, in which patients were requested to drink 1 L of clear liquids only, 12 h before the examination; (b) a purgative group, in which patients were requested to ingest 1 L of a polyethylene glycol (PEG)/electrolyte solution only, 12 h before the examination; or (c) a purgative combined with simethicone group (P-S group), in which patients were requested to ingest 1 L of PEG, 12 h before the examination, and 300 mg of simethicone, 20 min before the examination. Effects of the different bowel preparations on the gastric transit time (GTT), small bowel transit time (SBTT), examination completion rate, quality of images of the entire small intestine, and cleansing of the proximal small bowel and distal ileum were evaluated. The number of patients with "adequate" cleansing of the entire small intestine was 17 in the P-S group, 12 in the purgative group, and seven in the control group (P= 0.002). The P-S group had significantly better image quality than the control group (P= 0.001). The P-S group had significantly better image quality for the proximal small bowel (segment A [Seg A]) than the control group (P= 0.0001). Both the P-S group (P= 0.0001) and the purgative group (P= 0.0002) had significantly better image quality for the distal ileum (segment B [Seg B]) than the control group; the P-S group had significantly better image quality than the purgative group as well (P= 0.0121). Gastrointestinal transit time was not different among the three groups, nor was the examination completion rate. Purgative bowel cleansing combined with simethicone before CE improved the quality of imaging of the entire small bowel as well as the visualization of the mucosa in the proximal and distal small intestine.
Otto, Kristen J; Hapner, Edie R; Baker, Michael; Johns, Michael M
2006-02-01
Advances in commercial video technology have improved office-based laryngeal imaging. This study investigates the perceived image quality of a true high-definition (HD) video camera and the effect of magnification on laryngeal videostroboscopy. We performed a prospective, dual-armed, single-blinded analysis of a standard laryngeal videostroboscopic examination comparing 3 separate add-on camera systems: a 1-chip charge-coupled device (CCD) camera, a 3-chip CCD camera, and a true 720p (progressive scan) HD camera. Displayed images were controlled for magnification and image size (20-inch [50-cm] display, red-green-blue, and S-video cable for 1-chip and 3-chip cameras; digital visual interface cable and HD monitor for HD camera). Ten blinded observers were then asked to rate the following 5 items on a 0-to-100 visual analog scale: resolution, color, ability to see vocal fold vibration, sense of depth perception, and clarity of blood vessels. Eight unblinded observers were then asked to rate the difference in perceived resolution and clarity of laryngeal examination images when displayed on a 10-inch (25-cm) monitor versus a 42-inch (105-cm) monitor. A visual analog scale was used. These monitors were controlled for actual resolution capacity. For each item evaluated, randomized block design analysis demonstrated that the 3-chip camera scored significantly better than the 1-chip camera (p < .05). For the categories of color and blood vessel discrimination, the 3-chip camera scored significantly better than the HD camera (p < .05). For magnification alone, observers rated the 42-inch monitor statistically better than the 10-inch monitor. The expense of new medical technology must be judged against its added value. This study suggests that HD laryngeal imaging may not add significant value over currently available video systems, in perceived image quality, when a small monitor is used. Although differences in clarity between standard and HD cameras may not be readily apparent on small displays, a large display size coupled with HD technology may impart improved diagnosis of subtle vocal fold lesions and vibratory anomalies.
USDA-ARS?s Scientific Manuscript database
Cracks in the egg shell pose a food safety risk. In particular, eggs with very fine, hairline cracks (micro-cracks) often go undetected during the grading process because they are almost impossible to detect visually. A modified pressure imaging system was developed to detect eggs with micro-crack...
Moreno-Martínez, Francisco Javier; Montoro, Pedro R.
2012-01-01
This work presents a new set of 360 high quality colour images belonging to 23 semantic subcategories. Two hundred and thirty-six Spanish speakers named the items and also provided data on seven relevant psycholinguistic variables: age of acquisition, familiarity, manipulability, name agreement, typicality and visual complexity. Furthermore, we also present lexical frequency data derived from Internet search hits. Apart from the high number of variables evaluated, each of which is known to affect stimulus processing, this new set presents important advantages over other similar image corpora: (a) this corpus presents a broad number of subcategories and images; for example, this will permit researchers to select stimuli of appropriate difficulty as required (e.g., to deal with problems derived from ceiling effects); (b) the use of coloured stimuli provides a more realistic, ecologically valid representation of real-life objects. In sum, this set of stimuli provides a useful tool for research on visual object- and word-processing, both in neurological patients and in healthy controls. PMID:22662166
Hirokawa, Yuusuke; Isoda, Hiroyoshi; Maetani, Yoji S; Arizono, Shigeki; Shimada, Kotaro; Togashi, Kaori
2008-10-01
The purpose of this study was to evaluate the effectiveness of the periodically rotated overlapping parallel lines with enhanced reconstruction (PROPELLER [BLADE in MR systems from Siemens Medical Solutions]) technique with respiratory compensation for motion correction, image noise reduction, improved sharpness of the liver edge, and overall image quality of the upper abdomen. Twenty healthy adult volunteers with a mean age of 28 years (age range, 23-42 years) underwent upper abdominal MRI with a 1.5-T scanner. For each subject, fat-saturated T2-weighted turbo spin-echo (TSE) sequences with respiratory compensation (prospective acquisition correction [PACE]) were performed with and without the BLADE technique. Ghosting artifact, artifacts other than ghosting (such as those from respiratory motion and bowel movement), sharpness of the liver edge, image noise, and overall image quality were evaluated visually by three radiologists using a 5-point scale for qualitative analysis. The Wilcoxon signed rank test was used to determine whether a significant difference existed between images with and without BLADE. A p value less than 0.05 was considered statistically significant. In the BLADE images, image artifacts, sharpness of the liver edge, image noise, and overall image quality were significantly improved (p < 0.001). With the BLADE technique, T2-weighted TSE images of the upper abdomen provided reduced image artifacts, including ghosting artifact, reduced image noise, and better overall image quality.
NASA Astrophysics Data System (ADS)
Yao, Juncai; Liu, Guizhong
2017-03-01
In order to achieve a higher image compression ratio and improve the visual perception of the decompressed image, a novel color image compression scheme based on the contrast sensitivity characteristics of the human visual system (HVS) is proposed. In the proposed scheme, the image is first converted into the YCrCb color space and divided into sub-blocks. Afterwards, the discrete cosine transform is carried out for each sub-block, and three quantization matrices are built to quantize the frequency spectrum coefficients of the images by combining the contrast sensitivity characteristics of the HVS. The Huffman algorithm is used to encode the quantized data. The inverse process reconstructs the decompressed color image. Simulations were carried out for two color images. The results show that the average structural similarity index measurement (SSIM) and peak signal-to-noise ratio (PSNR) at a comparable compression ratio could be increased by 2.78% and 5.48%, respectively, compared with JPEG (Joint Photographic Experts Group) compression. The results indicate that the proposed compression algorithm is feasible and effective, achieving a higher compression ratio while maintaining encoding and image quality, and can fully meet the needs of storage and transmission of color images in daily life.
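A minimal sketch of the block-DCT quantization stage is given below; the single 8x8 quantization matrix passed in stands in for the three HVS-derived matrices described in the abstract, and Huffman entropy coding of the quantized coefficients is omitted.

```python
import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):
    return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

def idct2(block):
    return idct(idct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

def quantize_channel(channel, q_matrix):
    """Blockwise 8x8 DCT, quantize, dequantize, inverse DCT (one channel).

    q_matrix is any 8x8 quantization matrix; an HVS-based matrix, as in
    the paper, would weight coefficients by contrast sensitivity.
    Edge blocks smaller than 8x8 are left unprocessed (zeros) here.
    """
    h, w = channel.shape
    out = np.zeros((h, w), dtype=float)
    for y in range(0, h - h % 8, 8):
        for x in range(0, w - w % 8, 8):
            block = channel[y:y + 8, x:x + 8].astype(float) - 128.0
            q = np.round(dct2(block) / q_matrix)         # quantization
            out[y:y + 8, x:x + 8] = idct2(q * q_matrix) + 128.0
    return out
```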
Water and fat separation in real-time MRI of joint movement with phase-sensitive bSSFP.
Mazzoli, Valentina; Nederveen, Aart J; Oudeman, Jos; Sprengers, Andre; Nicolay, Klaas; Strijkers, Gustav J; Verdonschot, Nico
2017-07-01
To introduce a method for obtaining fat-suppressed images in real-time MRI of moving joints at 3 Tesla (T) using a bSSFP sequence with phase detection to enhance visualization of soft tissue structures during motion. The wrist and knee of nine volunteers were imaged with a real-time bSSFP sequence while performing dynamic tasks. With an appropriate choice of sequence timing parameters, water and fat pixels show out-of-phase behavior, which was exploited to reconstruct water and fat images. Additionally, a 2-point Dixon sequence was used for dynamic imaging of the joints, and the resulting water and fat images were compared with our proposed method. The joints could be visualized with good water-fat separation and signal-to-noise ratio (SNR), while maintaining a relatively high temporal resolution (5 fps in knee imaging and 10 fps in wrist imaging). The proposed method produced images of moving joints with higher SNR and higher image quality when compared with the Dixon method. Water-fat separation is feasible in real-time MRI of the moving knee and wrist at 3 T. PS-bSSFP offers movies with higher SNR and higher diagnostic quality when compared with Dixon scans. Magn Reson Med 78:58-68, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
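For illustration only, the idealized two-point arithmetic below shows how in-phase and out-of-phase images separate water and fat; the paper's phase-sensitive bSSFP reconstruction works on complex data with phase detection, which this magnitude-only sketch does not reproduce.

```python
import numpy as np

def two_point_separation(in_phase, out_phase):
    """Idealized water/fat separation from in-/opposed-phase images."""
    water = 0.5 * (in_phase + out_phase)
    fat = 0.5 * (in_phase - out_phase)
    return water, fat

# toy check: recover known water and fat maps
water_true = np.random.rand(64, 64)
fat_true = np.random.rand(64, 64)
w, f = two_point_separation(water_true + fat_true, water_true - fat_true)
```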
Hans, P; Grant, A J; Laitt, R D; Ramsden, R T; Kassner, A; Jackson, A
1999-08-01
Cochlear implantation requires introduction of a stimulating electrode array into the scala vestibuli or scala tympani. Although these structures can be separately identified on many high-resolution scans, it is often difficult to ascertain whether these channels are patent throughout their length. The aim of this study was to determine whether an optimized combination of an imaging protocol and a visualization technique allows routine 3D rendering of the scala vestibuli and scala tympani. A submillimeter T2 fast spin-echo imaging sequence was designed to optimize the performance of 3D visualization methods. The spatial resolution was determined experimentally using primary images and 3D surface and volume renderings from eight healthy subjects. These data were used to develop the imaging sequence and to compare the quality and signal-to-noise dependency of four data visualization algorithms: maximum intensity projection, ray casting with transparent voxels, ray casting with opaque voxels, and isosurface rendering. The ability of these methods to produce 3D renderings of the scala tympani and scala vestibuli was also examined. The imaging technique was used in five patients with sensorineural deafness. Visualization techniques produced optimal results in combination with an isotropic volume imaging sequence. Clinicians preferred the isosurface-rendered images to other 3D visualizations. Both isosurface and ray casting displayed the scala vestibuli and scala tympani throughout their length. Abnormalities were shown in three patients, and in one of these, a focal occlusion of the scala tympani was confirmed at surgery. Three-dimensional images of the scala vestibuli and scala tympani can be routinely produced. The combination of an MR sequence optimized for use with isosurface rendering or ray-casting algorithms can produce 3D images with greater spatial resolution and anatomic detail than has been possible previously.
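Two of the visualization steps compared in the study, maximum intensity projection and isosurface extraction, can be sketched with NumPy and scikit-image as below; the iso level is an assumed placeholder that would be set near the fluid signal of the scalae on the heavily T2-weighted volume.

```python
import numpy as np
from skimage import measure

def max_intensity_projection(volume, axis=0):
    """Maximum intensity projection of a 3D volume along one axis."""
    return volume.max(axis=axis)

def isosurface_mesh(volume, level):
    """Isosurface (vertices, faces) via marching cubes; `level` is assumed."""
    verts, faces, _normals, _values = measure.marching_cubes(volume, level=level)
    return verts, faces
```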
NASA Astrophysics Data System (ADS)
Qiu, Guoping; Kheiri, Ahmed
2011-01-01
Current subjective image quality assessments have been developed in laboratory environments, under controlled conditions, and are dependent on the participation of limited numbers of observers. In this research, with the help of Web 2.0 and social media technology, a new method for building a subjective image quality metric has been developed in which the observers are Internet users. A website with a simple user interface that enables Internet users from anywhere at any time to vote for the better quality version of a pair of the same image has been constructed. Users' votes are recorded and used to rank the images according to their perceived visual qualities. We have developed three rank aggregation algorithms to process the recorded pair comparison data: the first uses a naive approach, the second employs a Condorcet method, and the third uses Dykstra's extension of the Bradley-Terry method. The website has been collecting data for about three months and has accumulated over 10,000 votes at the time of writing this paper. Results show that the Internet and its allied technologies such as crowdsourcing offer a promising new paradigm for image and video quality assessment, where hundreds of thousands of Internet users can contribute to building more robust image quality metrics. We have made Internet user generated social image quality (SIQ) data of a public image database available online (http://www.hdri.cs.nott.ac.uk/siq/) to provide the image quality research community with a new source of ground truth data. The website continues to collect votes and will include more public image databases and will also be extended to include videos to collect social video quality (SVQ) data. All data will be made publicly available on the website in due course.
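As a sketch of how pairwise votes can be turned into a quality ranking, the snippet below fits a plain Bradley-Terry model with Hunter's MM updates; the naive and Condorcet schemes, and Dykstra's extension actually used in the paper, are not reproduced.

```python
import numpy as np

def bradley_terry(wins, n_iter=200, tol=1e-9):
    """Bradley-Terry strengths from wins[i, j] = votes preferring i over j."""
    n = wins.shape[0]
    comparisons = wins + wins.T                    # total i-vs-j votes
    p = np.ones(n) / n
    for _ in range(n_iter):
        denom = (comparisons / (p[:, None] + p[None, :] + 1e-12)).sum(axis=1)
        p_new = wins.sum(axis=1) / np.maximum(denom, 1e-12)
        p_new /= p_new.sum()
        if np.abs(p_new - p).max() < tol:
            return p_new
        p = p_new
    return p

# ranking = np.argsort(-bradley_terry(vote_matrix))   # best image first
```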
Shenoy, Shailesh M
2016-07-01
A challenge in any imaging laboratory, especially one that uses modern techniques, is to achieve a sustainable and productive balance between using open source and commercial software to perform quantitative image acquisition, analysis and visualization. In addition to considering the expense of software licensing, one must consider factors such as the quality and usefulness of the software's support, training and documentation. Also, one must consider the reproducibility with which multiple people generate results using the same software to perform the same analysis, how one may distribute their methods to the community using the software and the potential for achieving automation to improve productivity.
A Perceptually Weighted Rank Correlation Indicator for Objective Image Quality Assessment
NASA Astrophysics Data System (ADS)
Wu, Qingbo; Li, Hongliang; Meng, Fanman; Ngan, King N.
2018-05-01
In the field of objective image quality assessment (IQA), Spearman's ρ and Kendall's τ are the two most popular rank correlation indicators, which straightforwardly assign uniform weight to all quality levels and assume that each pair of images is sortable. They are successful for measuring the average accuracy of an IQA metric in ranking multiple processed images. However, two important perceptual properties are ignored by them as well. Firstly, the sorting accuracy (SA) for high-quality images is usually more important than that for poor-quality ones in many real-world applications, where only the top-ranked images would be pushed to the users. Secondly, due to the subjective uncertainty in making judgements, two perceptually similar images are usually hardly sortable, and their ranks do not contribute to the evaluation of an IQA metric. To more accurately compare different IQA algorithms, we explore a perceptually weighted rank correlation indicator in this paper, which rewards the capability of correctly ranking high-quality images and suppresses the attention towards insensitive rank mistakes. More specifically, we focus on activating 'valid' pairwise comparisons of image quality, whose difference exceeds a given sensory threshold (ST). Meanwhile, each image pair is assigned a unique weight, which is determined by both the quality level and the rank deviation. By modifying the perception threshold, we can illustrate the sorting accuracy with a more sophisticated SA-ST curve, rather than a single rank correlation coefficient. The proposed indicator offers a new insight for interpreting visual perception behaviors. Furthermore, the applicability of our indicator is validated in recommending robust IQA metrics for both degraded and enhanced image data.
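A simplified version of the 'valid pair' idea can be sketched as below: pairs whose subjective scores differ by less than a sensory threshold are skipped, and each remaining pair is weighted toward high-quality content; the exact weighting in the paper (combining quality level and rank deviation) differs from this assumption.

```python
import numpy as np

def weighted_rank_agreement(subjective, objective, st=5.0):
    """Kendall-style agreement restricted to perceptually sortable pairs."""
    s = np.asarray(subjective, dtype=float)
    o = np.asarray(objective, dtype=float)
    num = den = 0.0
    for i in range(len(s)):
        for j in range(i + 1, len(s)):
            if abs(s[i] - s[j]) <= st:
                continue                     # below the sensory threshold
            w = max(s[i], s[j])              # emphasize high-quality images
            concordant = np.sign(s[i] - s[j]) == np.sign(o[i] - o[j])
            num += w if concordant else -w
            den += w
    return num / den if den > 0 else 0.0
```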
Honda, O; Yanagawa, M; Inoue, A; Kikuyama, A; Yoshida, S; Sumikawa, H; Tobino, K; Koyama, M; Tomiyama, N
2011-04-01
We investigated the image quality of multiplanar reconstruction (MPR) using adaptive statistical iterative reconstruction (ASIR). Inflated and fixed lungs were scanned with a garnet detector CT in high-resolution mode (HR mode) or non-high-resolution (HR) mode, and MPR images were then reconstructed. Observers compared 15 MPR images of ASIR (40%) and ASIR (80%) with those of ASIR (0%), and assessed image quality using a visual five-point scale (1, definitely inferior; 5, definitely superior), with particular emphasis on normal pulmonary structures, artefacts, noise and overall image quality. The mean overall image quality scores in HR mode were 3.67 with ASIR (40%) and 4.97 with ASIR (80%). Those in non-HR mode were 3.27 with ASIR (40%) and 3.90 with ASIR (80%). The mean artefact scores in HR mode were 3.13 with ASIR (40%) and 3.63 with ASIR (80%), but those in non-HR mode were 2.87 with ASIR (40%) and 2.53 with ASIR (80%). The mean scores of the other parameters were greater than 3, whereas those in HR mode were higher than those in non-HR mode. There were significant differences between ASIR (40%) and ASIR (80%) in overall image quality (p<0.01). Contrast medium in the injection syringe was scanned to analyse image quality; ASIR did not suppress the severe artefacts of contrast medium. In general, MPR image quality with ASIR (80%) was superior to that with ASIR (40%). However, there was an increased incidence of artefacts by ASIR when CT images were obtained in non-HR mode.
Model-Based Referenceless Quality Metric of 3D Synthesized Images Using Local Image Description.
Gu, Ke; Jakhetiya, Vinit; Qiao, Jun-Fei; Li, Xiaoli; Lin, Weisi; Thalmann, Daniel
2017-07-28
New challenges have arisen along with the emergence of 3D-related technologies such as virtual reality (VR), augmented reality (AR), and mixed reality (MR). Free viewpoint video (FVV), with applications in remote surveillance, remote education, and other areas based on the flexible selection of direction and viewpoint, has been perceived as the development direction of next-generation video technologies and has drawn wide attention from researchers. Since FVV images are synthesized via a depth image-based rendering (DIBR) procedure in a "blind" environment (without reference images), a reliable real-time blind quality evaluation and monitoring system is urgently required. However, existing assessment metrics do not render human judgments faithfully, mainly because of the geometric distortions generated by DIBR. To this end, this paper proposes a novel referenceless quality metric for DIBR-synthesized images using an autoregression (AR)-based local image description. It was found that, after AR prediction, the reconstruction error between a DIBR-synthesized image and its AR-predicted image can accurately capture the geometric distortion. Visual saliency is then leveraged to improve the proposed blind quality metric by a sizable margin. Experiments validate the superiority of our no-reference quality method compared with prevailing full-, reduced- and no-reference models.
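The core idea, that AR prediction error highlights DIBR geometry distortion, can be sketched as below with a single global AR model fitted over the 8-neighbourhood; the paper's local (per-region) description, saliency weighting, and final pooling into a score are omitted.

```python
import numpy as np

def ar_prediction_error(image):
    """Absolute error of predicting each pixel from its 8 neighbours
    with one least-squares AR model (a simplified, global variant)."""
    img = image.astype(float)
    h, w = img.shape
    offsets = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)
               if (dy, dx) != (0, 0)]
    centre = img[1:h - 1, 1:w - 1].ravel()
    X = np.stack([img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx].ravel()
                  for dy, dx in offsets], axis=1)
    coef, *_ = np.linalg.lstsq(X, centre, rcond=None)   # AR coefficients
    err = np.zeros_like(img)
    err[1:h - 1, 1:w - 1] = np.abs(centre - X @ coef).reshape(h - 2, w - 2)
    return err
```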
Nölte, Ingo S; Gerigk, Lars; Al-Zghloul, Mansour; Groden, Christoph; Kerl, Hans U
2012-03-01
Deep-brain stimulation (DBS) of the internal globus pallidus (GPi) has shown remarkable therapeutic benefits for treatment-resistant neurological disorders including dystonia and Parkinson's disease (PD). The success of the DBS is critically dependent on the reliable visualization of the GPi. The aim of the study was to evaluate promising 3.0 Tesla magnetic resonance imaging (MRI) methods for pre-stereotactic visualization of the GPi using a standard installation protocol. MRI at 3.0 T of nine healthy individuals and of one patient with PD was acquired (FLAIR, T1-MPRAGE, T2-SPACE, T2*-FLASH2D, susceptibility-weighted imaging mapping (SWI)). Image quality and visualization of the GPi for each sequence were assessed by two neuroradiologists independently using a 6-point scale. Axial, coronal, and sagittal planes of the T2*-FLASH2D images were compared. Inter-rater reliability, contrast-to-noise ratios (CNR) and signal-to-noise ratios (SNR) for the GPi were determined. For illustration, axial T2*-FLASH2D images were fused with a section schema of the Schaltenbrand-Wahren stereotactic atlas. The GPi was best and reliably visualized in axial and to a lesser degree on coronal T2*-FLASH2D images. No major artifacts in the GPi were observed in any of the sequences. SWI offered a significantly higher CNR for the GPi compared to standard T2-weighted imaging using the standard parameters. The fusion of the axial T2*-FLASH2D images and the atlas projected the GPi clearly in the boundaries of the section schema. Using a standard installation protocol at 3.0 T T2*-FLASH2D imaging (particularly axial view) provides optimal and reliable delineation of the GPi.
Beyond image quality: designing engaging interactions with digital products
NASA Astrophysics Data System (ADS)
de Ridder, Huib; Rozendaal, Marco C.
2008-02-01
Ubiquitous computing (or Ambient Intelligence) promises a world in which information is available anytime, anywhere, and with which humans can interact in a natural, multimodal way. In such a world, perceptual image quality remains an important criterion, since most information will be displayed visually, but other criteria such as enjoyment, fun, engagement and hedonic quality are emerging. This paper deals with engagement, the intrinsically enjoyable readiness to put more effort into exploring and/or using a product than strictly required, thus attracting and keeping the user's attention for a longer period of time. The impact of the experienced richness of an interface, both visually and in the degree of possible manipulations, was investigated in a series of experiments employing game-like user interfaces. This resulted in the extension of an existing conceptual framework relating engagement to richness by means of two intermediating variables, namely experienced challenge and sense of control. Predictions from this revised framework are evaluated against results of an earlier experiment assessing the ergonomic and hedonic qualities of interactive media. Test material consisted of interactive CD-ROMs containing presentations of three companies for future customers.
Design of Restoration Method Based on Compressed Sensing and TwIST Algorithm
NASA Astrophysics Data System (ADS)
Zhang, Fei; Piao, Yan
2018-04-01
To effectively improve the subjective and objective quality of degraded images at low sampling rates while saving storage space and reducing computational complexity, this paper proposes a joint restoration algorithm combining compressed sensing and two-step iterative shrinkage/thresholding (TwIST). The algorithm applies the TwIST algorithm, originally used in image restoration, within the compressed sensing framework. A small amount of sparse high-frequency information is then obtained in the frequency domain, and the TwIST algorithm based on compressed sensing theory is used to accurately reconstruct the high-frequency image. The experimental results show that the proposed algorithm achieves better subjective visual effects and objective quality while accurately restoring degraded images.
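To make the reconstruction step concrete, the sketch below recovers a sparse signal from compressed measurements with a plain one-step iterative shrinkage (ISTA) loop; this is a simplified stand-in for the two-step TwIST iteration named in the abstract.

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding (shrinkage) operator."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista_reconstruct(A, y, lam=0.05, n_iter=500):
    """Sparse recovery from y = A @ x by iterative shrinkage-thresholding."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1 / Lipschitz constant of A^T A
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft(x - step * A.T @ (A @ x - y), lam * step)
    return x

# toy usage: 25% random measurements of a 10-sparse signal
rng = np.random.default_rng(0)
x_true = np.zeros(200)
x_true[rng.choice(200, 10, replace=False)] = 1.0
A = rng.standard_normal((50, 200)) / np.sqrt(50)
x_hat = ista_reconstruct(A, A @ x_true)
```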
Ahn, Hye Shin; Kim, Sun Mi; Jang, Mijung; Yun, Bo La; Kim, Bohyoung; Ko, Eun Sook; Han, Boo-Kyung; Chang, Jung Min; Yi, Ann; Cho, Nariya; Moon, Woo Kyung; Choi, Hye Young
2014-01-01
To compare new full-field digital mammography (FFDM) with and without use of an advanced post-processing algorithm to improve image quality, lesion detection, diagnostic performance, and priority rank. During a 22-month period, we prospectively enrolled 100 cases of specimen FFDM mammography (Brestige®), which was performed alone or in combination with a post-processing algorithm developed by the manufacturer: group A (SMA), specimen mammography without application of "Mammogram enhancement ver. 2.0"; group B (SMB), specimen mammography with application of "Mammogram enhancement ver. 2.0". Two sets of specimen mammographies were randomly reviewed by five experienced radiologists. Image quality, lesion detection, diagnostic performance, and priority rank with regard to image preference were evaluated. Three aspects of image quality (overall quality, contrast, and noise) of the SMB were significantly superior to those of SMA (p < 0.05). SMB was significantly superior to SMA for visualizing calcifications (p < 0.05). Diagnostic performance, as evaluated by cancer score, was similar between SMA and SMB. SMB was preferred to SMA by four of the five reviewers. The post-processing algorithm may improve image quality with better image preference in FFDM than without use of the software.
The use of vision-based image quality metrics to predict low-light performance of camera phones
NASA Astrophysics Data System (ADS)
Hultgren, B.; Hertel, D.
2010-01-01
Small digital camera modules such as those in mobile phones have become ubiquitous. Their low-light performance is of utmost importance since a high percentage of images are made under low lighting conditions where image quality failure may occur due to blur, noise, and/or underexposure. These modes of image degradation are not mutually exclusive: they share common roots in the physics of the imager, the constraints of image processing, and the general trade-off situations in camera design. A comprehensive analysis of failure modes is needed in order to understand how their interactions affect overall image quality. Low-light performance is reported for DSLR, point-and-shoot, and mobile phone cameras. The measurements target blur, noise, and exposure error. Image sharpness is evaluated from three different physical measurements: static spatial frequency response, handheld motion blur, and statistical information loss due to image processing. Visual metrics for sharpness, graininess, and brightness are calculated from the physical measurements, and displayed as orthogonal image quality metrics to illustrate the relative magnitude of image quality degradation as a function of subject illumination. The impact of each of the three sharpness measurements on overall sharpness quality is displayed for different light levels. The power spectrum of the statistical information target is a good representation of natural scenes, thus providing a defined input signal for the measurement of power-spectrum based signal-to-noise ratio to characterize overall imaging performance.
Subjective evaluations of integer cosine transform compressed Galileo solid state imagery
NASA Technical Reports Server (NTRS)
Haines, Richard F.; Gold, Yaron; Grant, Terry; Chuang, Sherry
1994-01-01
This paper describes a study conducted for the Jet Propulsion Laboratory, Pasadena, California, using 15 evaluators from 12 institutions involved in the Galileo Solid State Imaging (SSI) experiment. The objective of the study was to determine the impact of integer cosine transform (ICT) compression using specially formulated quantization (q) tables and compression ratios on acceptability of the 800 x 800 x 8 monochromatic astronomical images as evaluated visually by Galileo SSI mission scientists. Fourteen different images in seven image groups were evaluated. Each evaluator viewed two versions of the same image side by side on a high-resolution monitor; each was compressed using a different q level. First the evaluators selected the image with the highest overall quality to support them in their visual evaluations of image content. Next they rated each image using a scale from one to five indicating its judged degree of usefulness. Up to four preselected types of images with and without noise were presented to each evaluator.
A framework for small infrared target real-time visual enhancement
NASA Astrophysics Data System (ADS)
Sun, Xiaoliang; Long, Gucan; Shang, Yang; Liu, Xiaolin
2015-03-01
This paper proposes a framework for real-time visual enhancement of small infrared targets. The framework consists of three parts: energy accumulation for small infrared target enhancement, noise suppression, and weighted fusion. A dynamic-programming-based track-before-detection algorithm is adopted in the energy accumulation step to detect the target accurately and enhance the target's intensity notably. In the noise suppression step, the target region is weighted by a Gaussian mask according to the target's Gaussian shape. In order to fuse the processed target region and the unprocessed background smoothly, the intensity in the target region is treated as the weight in the fusion. Experiments on real small infrared target images indicate that the proposed framework enhances the small infrared target markedly and improves the image's visual quality notably. The proposed framework outperforms traditional algorithms in enhancing small infrared targets, especially for images in which the target is hardly visible.
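Assuming the target has already been located (the track-before-detection step), the fusion stage can be sketched as below: a Gaussian mask matched to the target's shape both suppresses surrounding noise and serves as the blending weight between the enhanced region and the untouched background.

```python
import numpy as np

def enhance_target(image, centre, sigma=3.0, gain=4.0):
    """Boost a small target region and fuse it smoothly with the background.

    `centre` (row, col) is assumed to come from a detection stage; the
    gain and mask width are illustrative choices, not the paper's values.
    """
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = centre
    mask = np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2.0 * sigma ** 2))
    enhanced = np.minimum(image * gain, image.max())   # clipped intensity boost
    return mask * enhanced + (1.0 - mask) * image      # weighted fusion
```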
Image resolution enhancement via image restoration using neural network
NASA Astrophysics Data System (ADS)
Zhang, Shuangteng; Lu, Yihong
2011-04-01
Image super-resolution aims to obtain a high-quality image at a resolution that is higher than that of the original coarse one. This paper presents a new neural network-based method for image super-resolution. In this technique, super-resolution is treated as an inverse problem. An observation model that closely follows the physical image acquisition process is established to solve the problem. Based on this model, a cost function is created and minimized by a Hopfield neural network to produce high-resolution images from the corresponding low-resolution ones. Unlike some other single-frame super-resolution techniques, this technique takes point-spread-function blurring as well as additive noise into consideration, and therefore generates high-resolution images with more preserved or restored image detail. Experimental results demonstrate that the high-resolution images obtained by this technique have a very high quality in terms of PSNR and are visually more pleasing.
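The observation model and cost function can be sketched as below, with a Gaussian point spread function and decimation as the degradation and plain gradient descent standing in for the Hopfield-network minimization; regularization is omitted.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def degrade(x, factor=2, psf_sigma=1.0):
    """Observation model: PSF blur followed by decimation (noise omitted)."""
    return gaussian_filter(x, psf_sigma)[::factor, ::factor]

def super_resolve(y, factor=2, psf_sigma=1.0, n_iter=200, lr=1.0):
    """Minimize ||y - D H x||^2 by gradient descent from an upsampled guess."""
    x = np.kron(y, np.ones((factor, factor)))        # initial high-res estimate
    for _ in range(n_iter):
        r = degrade(x, factor, psf_sigma) - y        # residual in low-res space
        r_up = np.zeros_like(x)
        r_up[::factor, ::factor] = r                 # adjoint of decimation
        x -= lr * gaussian_filter(r_up, psf_sigma)   # adjoint of symmetric blur
    return x
```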
Wavefront sensorless adaptive optics ophthalmoscopy in the human eye
Hofer, Heidi; Sredar, Nripun; Queener, Hope; Li, Chaohong; Porter, Jason
2011-01-01
Wavefront sensor noise and fidelity place a fundamental limit on achievable image quality in current adaptive optics ophthalmoscopes. Additionally, the wavefront sensor ‘beacon’ can interfere with visual experiments. We demonstrate real-time (25 Hz), wavefront sensorless adaptive optics imaging in the living human eye with image quality rivaling that of wavefront sensor based control in the same system. A stochastic parallel gradient descent algorithm directly optimized the mean intensity in retinal image frames acquired with a confocal adaptive optics scanning laser ophthalmoscope (AOSLO). When imaging through natural, undilated pupils, both control methods resulted in comparable mean image intensities. However, when imaging through dilated pupils, image intensity was generally higher following wavefront sensor-based control. Despite the typically reduced intensity, image contrast was higher, on average, with sensorless control. Wavefront sensorless control is a viable option for imaging the living human eye and future refinements of this technique may result in even greater optical gains. PMID:21934779
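The control loop can be sketched as below: a stochastic parallel gradient descent update of the deformable-mirror command vector that maximizes a scalar image-quality metric, here assumed to be the mean intensity of the acquired frame as in the abstract; the metric function itself is hardware-specific and left as a placeholder.

```python
import numpy as np

def spgd(metric, n_actuators, gain=0.3, perturb=0.05, n_iter=500, seed=0):
    """Stochastic parallel gradient descent on mirror commands.

    `metric(u)` must return the image-quality figure (e.g. mean frame
    intensity) measured with command vector `u`; it is assumed here.
    """
    rng = np.random.default_rng(seed)
    u = np.zeros(n_actuators)
    for _ in range(n_iter):
        du = perturb * rng.choice([-1.0, 1.0], size=n_actuators)  # Bernoulli perturbation
        dj = metric(u + du) - metric(u - du)                      # two-sided metric change
        u += gain * dj * du                                       # parallel gradient step
    return u
```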
Shilemay, Moshe; Rozban, Daniel; Levanon, Assaf; Yitzhaky, Yitzhak; Kopeika, Natan S; Yadid-Pecht, Orly; Abramovich, Amir
2013-03-01
Inexpensive millimeter-wavelength (MMW) optical digital imaging poses a challenge for evaluating imaging performance and image quality because of the large electromagnetic wavelengths and pixel sensor sizes, which are 2 to 3 orders of magnitude larger than those of ordinary thermal or visual imaging systems, and also because of the noisiness of the inexpensive glow discharge detectors that compose the focal-plane array. This study quantifies the performance of this MMW imaging system. Its point-spread function and modulation transfer function were investigated. The experimental results and the analysis indicate that the image quality of this MMW imaging system is limited mostly by noise, and the blur is dominated by the pixel sensor size. Therefore, the MMW image might be improved by oversampling, provided that noise reduction is achieved. A demonstration of MMW image improvement through oversampling is presented.
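For reference, the modulation transfer function can be obtained from a measured point-spread function as its normalized Fourier magnitude; the sketch below does this for a 2-D PSF image (a generic relation, not the authors' specific measurement procedure).

```python
import numpy as np

def mtf_from_psf(psf):
    """MTF as the normalized |FFT| of an energy-normalized PSF."""
    psf = psf / psf.sum()
    mtf = np.abs(np.fft.fftshift(np.fft.fft2(psf)))
    return mtf / mtf.max()

# e.g. a horizontal cut through the zero-frequency centre:
# cut = mtf_from_psf(psf_image)[psf_image.shape[0] // 2, :]
```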
Li, Y; Zheng, G; Lin, H
2014-12-18
To develop a new kind of dental radiographic image quality indicator (IQI) for assessing the internal quality of cast metallic restorations, which influences their service life. The radiographic image quality indicator method was used to evaluate the depth of defect regions and the internal quality of 127 cast metallic restorations, and its accuracy was compared with that of the conventional callipers method. Among the 127 cast metallic restorations, nine were found to have a thickness of less than 0.7 mm in the occlusal defect regions of the 26 cast metallic crowns or bridges, the thinnest being only 0.2 mm. The data measured with the image quality indicator were consistent with those measured by conventional gauging. Among 56 porcelain crowns or bridges, two metal inner crowns were found to have a thickness of less than 0.3 mm. The thickness of the cast removable partial dentures was more than 1.0 mm, and no thinner regions were found. In one titanium partial denture, the X-ray image of the clasp was not uniform, indicating internal porosity defects in the clasp. A dedicated dental image quality indicator can overcome the visual error problems caused by different observation backgrounds and can estimate the depth of defect regions in the casting.
Zucker, Evan J; Cheng, Joseph Y; Haldipur, Anshul; Carl, Michael; Vasanawala, Shreyas S
2018-01-01
To assess the feasibility and performance of conical k-space trajectory free-breathing ultrashort echo time (UTE) chest magnetic resonance imaging (MRI) versus four-dimensional (4D) flow, and the effects of 50% data subsampling and soft-gated motion correction. Thirty-two consecutive children who underwent both 4D flow and UTE ferumoxytol-enhanced chest MR (mean age: 5.4 years, range: 6 days to 15.7 years) in one 3T exam were recruited. From UTE k-space data, three image sets were reconstructed: 1) one with all data, 2) one using the first 50% of data, and 3) a final set with soft-gated motion correction, leveraging the signal magnitude immediately after each excitation. Two radiologists independently scored image quality of anatomical landmarks on a 5-point scale in blinded fashion. Ratings were compared using Wilcoxon rank-sum, Wilcoxon signed-ranks, and Kruskal-Wallis tests. Interobserver agreement was assessed with the intraclass correlation coefficient (ICC). For fully sampled UTE, mean scores for all structures were ≥4 (good-excellent). Full UTE surpassed 4D flow for lungs and airways (P < 0.001), with similar pulmonary artery (PA) quality (P = 0.62). 50% subsampling only slightly degraded all landmarks (P < 0.001), as did motion correction. Subsegmental PA visualization was possible in >93% of scans for all techniques (P = 0.27). Interobserver agreement was excellent for combined scores (ICC = 0.83). High-quality free-breathing conical UTE chest MR is feasible, surpassing 4D flow for lungs and airways, with equivalent PA visualization. Data subsampling only mildly degraded images, favoring shorter scan times. Soft-gated motion correction did not improve overall image quality. Level of Evidence: 2. Technical Efficacy: Stage 2. J. Magn. Reson. Imaging 2018;47:200-209. © 2017 International Society for Magnetic Resonance in Medicine.
Improving image quality in laboratory x-ray phase-contrast imaging
NASA Astrophysics Data System (ADS)
De Marco, F.; Marschner, M.; Birnbacher, L.; Viermetz, M.; Noël, P.; Herzen, J.; Pfeiffer, F.
2017-03-01
Grating-based X-ray phase-contrast (gbPC) is known to provide significant benefits for biomedical imaging. To investigate these benefits, a high-sensitivity gbPC micro-CT setup for small (≈5 cm) biological samples has been constructed. Unfortunately, high differential-phase sensitivity leads to an increased magnitude of data processing artifacts, limiting the quality of tomographic reconstructions. Most importantly, processing of phase-stepping data with incorrect stepping positions can introduce artifacts resembling Moiré fringes to the projections. Additionally, the focal spot size of the X-ray source limits the resolution of tomograms. Here we present a set of algorithms to minimize artifacts, increase resolution and improve the visual impression of projections and tomograms from the examined setup. We assessed two algorithms for artifact reduction: Firstly, a correction algorithm exploiting correlations of the artifacts and differential-phase data was developed and tested. Artifacts were reliably removed without compromising image data. Secondly, we implemented a new algorithm for flat-field selection, which was shown to exclude flat-fields with strong artifacts. Both procedures successfully improved the image quality of projections and tomograms. Deconvolution of all projections of a CT scan can minimize blurring introduced by the finite size of the X-ray source focal spot. Application of the Richardson-Lucy deconvolution algorithm to gbPC-CT projections resulted in an improved resolution of phase-contrast tomograms. Additionally, we found that nearest-neighbor interpolation of projections can improve the visual impression of very small features in phase-contrast tomograms. In conclusion, we achieved an increase in image resolution and quality for the investigated setup, which may lead to improved detection of very small sample features, thereby maximizing the setup's utility.
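The deconvolution step can be sketched with scikit-image's Richardson-Lucy implementation; the Gaussian kernel below is an assumed stand-in for the measured focal-spot point spread function of the setup.

```python
import numpy as np
from skimage import restoration

def deblur_projection(projection, spot_sigma=1.5, iterations=30):
    """Richardson-Lucy deconvolution of one projection with a Gaussian PSF."""
    size = int(6 * spot_sigma) | 1                 # odd kernel size
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * spot_sigma ** 2))
    psf /= psf.sum()
    return restoration.richardson_lucy(projection, psf,
                                       num_iter=iterations, clip=False)
```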
New software developments for quality mesh generation and optimization from biomedical imaging data.
Yu, Zeyun; Wang, Jun; Gao, Zhanheng; Xu, Ming; Hoshijima, Masahiko
2014-01-01
In this paper we present a new software toolkit for generating and optimizing surface and volumetric meshes from three-dimensional (3D) biomedical imaging data, targeted at image-based finite element analysis of some biomedical activities in a single material domain. Our toolkit includes a series of geometric processing algorithms including surface re-meshing and quality-guaranteed tetrahedral mesh generation and optimization. All methods described have been encapsulated into a user-friendly graphical interface for easy manipulation and informative visualization of biomedical images and mesh models. Numerous examples are presented to demonstrate the effectiveness and efficiency of the described methods and toolkit. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Multiresolution generalized N dimension PCA for ultrasound image denoising
2014-01-01
Background Ultrasound images are usually affected by speckle noise, which is a type of random multiplicative noise. Thus, reducing speckle and improving image visual quality are vital to obtaining a better diagnosis. Method In this paper, a novel noise reduction method for medical ultrasound images, called multiresolution generalized N dimension PCA (MR-GND-PCA), is presented. In this method, the Gaussian pyramid and multiscale image stacks on each level are built first. GND-PCA, a multilinear subspace learning method, is used for denoising. The levels are then combined via Laplacian pyramids to obtain the final denoised image. Results The proposed method is tested with synthetically speckled and real ultrasound images, and quality evaluation metrics, including MSE, SNR and PSNR, are used to evaluate its performance. Conclusion Experimental results show that the proposed method achieved the lowest noise interference and improved image quality by reducing noise and preserving the structure. Our method is also robust for images with a much higher level of speckle noise. For clinical images, the results show that MR-GND-PCA can reduce speckle and preserve resolvable details. PMID:25096917
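For completeness, the three evaluation metrics named in the abstract can be computed directly as below (PSNR assumes 8-bit images unless another peak value is given).

```python
import numpy as np

def mse(ref, test):
    return np.mean((ref.astype(float) - test.astype(float)) ** 2)

def snr(ref, test):
    """Signal-to-noise ratio in dB, treating the reference as the signal."""
    return 10.0 * np.log10(np.mean(ref.astype(float) ** 2) / mse(ref, test))

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB."""
    return 10.0 * np.log10(peak ** 2 / mse(ref, test))
```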
NASA Astrophysics Data System (ADS)
Müller, Henning; Kalpathy-Cramer, Jayashree; Kahn, Charles E., Jr.; Hersh, William
2009-02-01
Content-based visual information (or image) retrieval (CBIR) has been an extremely active research domain within medical imaging over the past ten years, with the goal of improving the management of visual medical information. Many technical solutions have been proposed, and application scenarios for image retrieval as well as image classification have been set up. However, in contrast to medical information retrieval using textual methods, visual retrieval has only rarely been applied in clinical practice. This is despite the large amount and variety of visual information produced in hospitals every day. This information overload imposes a significant burden upon clinicians, and CBIR technologies have the potential to help the situation. However, in order for CBIR to become an accepted clinical tool, it must demonstrate a higher level of technical maturity than it has to date. Since 2004, the ImageCLEF benchmark has included a task for the comparison of visual information retrieval algorithms for medical applications. In 2005, a task for medical image classification was introduced and both tasks have been run successfully for the past four years. These benchmarks allow an annual comparison of visual retrieval techniques based on the same data sets and the same query tasks, enabling the meaningful comparison of various retrieval techniques. The datasets used from 2004-2007 contained images and annotations from medical teaching files. In 2008, however, the dataset used was made up of 67,000 images (along with their associated figure captions and the full text of their corresponding articles) from two Radiological Society of North America (RSNA) scientific journals. This article describes the results of the medical image retrieval task of the ImageCLEF 2008 evaluation campaign. We compare the retrieval results of both visual and textual information retrieval systems from 15 research groups on the aforementioned data set. The results show clearly that, currently, visual retrieval alone does not achieve the performance necessary for real-world clinical applications. Most of the common visual retrieval techniques have a MAP (Mean Average Precision) of around 2-3%, which is much lower than that achieved using textual retrieval (MAP=29%). Advanced machine learning techniques, together with good training data, have been shown to improve the performance of visual retrieval systems in the past. Multimodal retrieval (basing retrieval on both visual and textual information) can achieve better results than purely visual, but only when carefully applied. In many cases, multimodal retrieval systems performed even worse than purely textual retrieval systems. On the other hand, some multimodal retrieval systems demonstrated significantly increased early precision, which has been shown to be a desirable behavior in real-world systems.
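The headline retrieval measure quoted above, mean average precision, can be computed from ranked result lists as in the sketch below (a standard definition, independent of any particular ImageCLEF run format).

```python
def average_precision(ranked_ids, relevant_ids):
    """Average precision for one query's ranked result list."""
    relevant = set(relevant_ids)
    hits, score = 0, 0.0
    for rank, doc_id in enumerate(ranked_ids, start=1):
        if doc_id in relevant:
            hits += 1
            score += hits / rank          # precision at this recall point
    return score / len(relevant) if relevant else 0.0

def mean_average_precision(runs, ground_truth):
    """MAP over all queries; both arguments map query id -> list of ids."""
    aps = [average_precision(runs[q], ground_truth[q]) for q in ground_truth]
    return sum(aps) / len(aps)
```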
A database for assessment of effect of lossy compression on digital mammograms
NASA Astrophysics Data System (ADS)
Wang, Jiheng; Sahiner, Berkman; Petrick, Nicholas; Pezeshk, Aria
2018-03-01
With widespread use of screening digital mammography, efficient storage of the vast amounts of data has become a challenge. While lossless image compression causes no risk to the interpretation of the data, it does not allow for high compression rates. Lossy compression and the associated higher compression ratios are therefore more desirable. The U.S. Food and Drug Administration (FDA) currently interprets the Mammography Quality Standards Act as prohibiting lossy compression of digital mammograms for primary image interpretation, image retention, or transfer to the patient or her designated recipient. Previous work has used reader studies to determine proper usage criteria for evaluating lossy image compression in mammography, and utilized different measures and metrics to characterize medical image quality. The drawback of such studies is that they rely on a threshold on compression ratio as the fundamental criterion for preserving the quality of images. However, compression ratio is not a useful indicator of image quality. On the other hand, many objective image quality metrics (IQMs) have shown excellent performance for natural image content for consumer electronic applications. In this paper, we create a new synthetic mammogram database with several unique features. We compare and characterize the impact of image compression on several clinically relevant image attributes such as perceived contrast and mass appearance for different kinds of masses. We plan to use this database to develop a new objective IQM for measuring the quality of compressed mammographic images to help determine the allowed maximum compression for different kinds of breasts and masses in terms of visual and diagnostic quality.
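As an example of the kind of full-reference objective IQM the authors intend to build on (not the new metric proposed in the paper), SSIM and PSNR for an original/compressed pair can be computed with scikit-image:

```python
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def compare_pair(original, compressed):
    """Full-reference quality of a compressed image against its original."""
    data_range = float(original.max() - original.min())
    ssim = structural_similarity(original, compressed, data_range=data_range)
    psnr = peak_signal_noise_ratio(original, compressed, data_range=data_range)
    return ssim, psnr

# usage (hypothetical arrays): ssim, psnr = compare_pair(ref_img, jpeg2000_img)
```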
Bij de Vaate, A J M; Brölmann, H A M; van der Slikke, J W; Emanuel, M H; Huirne, J A F
2010-04-01
To compare gel instillation sonohysterography (GIS) with saline contrast sonohysterography (SCSH) as diagnostic methods for the evaluation of the uterine cavity. A prospective cohort study was performed at the Department of Obstetrics and Gynecology of the VU University Medical Center, Amsterdam, between September 2007 and April 2008. We included 65 women suspected of having an intrauterine abnormality with an indication for SCSH/GIS. First SCSH and subsequently GIS were performed in all women. Distension of the uterine cavity, image quality, visualization of intrauterine abnormalities and pain experienced on a visual analog scale (VAS score) were recorded for both procedures. The mean distension with GIS was 9.0 mm and with SCSH it was 8.5 mm (P = 0.15). The mean image quality, on a scale from 0 to 5, for SCSH was 4.0 and for GIS it was 3.6 (P = 0.01). No difference was found for the visualization of intrauterine abnormalities, and the VAS scores for pain experienced on SCSH and GIS were 1.5 and 1.6, respectively (P = 0.62). The image quality of SCSH is slightly better than that of GIS. This difference is likely to be attributable to the presence of air bubbles in the gel. The small difference in uterine cavity distension in favor of GIS and comparable stable distension during at least 4 min make GIS a suitable alternative for SCSH if air bubbles can be prevented. Copyright 2009 ISUOG. Published by John Wiley & Sons, Ltd.
Task-selective memory effects for successfully implemented encoding strategies.
Leshikar, Eric D; Duarte, Audrey; Hertzog, Christopher
2012-01-01
Previous behavioral evidence suggests that instructed strategy use benefits associative memory formation in paired associate tasks. Two such effective encoding strategies--visual imagery and sentence generation--facilitate memory through the production of different types of mediators (e.g., mental images and sentences). Neuroimaging evidence suggests that regions of the brain support memory reflecting the mental operations engaged at the time of study. That work, however, has not taken into account self-reported encoding task success (i.e., whether participants successfully generated a mediator). It is unknown, therefore, whether task-selective memory effects specific to each strategy might be found when encoding strategies are successfully implemented. In this experiment, participants studied pairs of abstract nouns under either visual imagery or sentence generation encoding instructions. At the time of study, participants reported their success at generating a mediator. Outside of the scanner, participants further reported the quality of the generated mediator (e.g., images, sentences) for each word pair. We observed task-selective memory effects for visual imagery in the left middle occipital gyrus, the left precuneus, and the lingual gyrus. No such task-selective effects were observed for sentence generation. Intriguingly, activity at the time of study in the left precuneus was modulated by the self-reported quality (vividness) of the generated mental images with greater activity for trials given higher ratings of quality. These data suggest that regions of the brain support memory in accord with the encoding operations engaged at the time of study.
SU-D-BRF-04: Digital Tomosynthesis for Improved Daily Setup in Treatment of Liver Lesions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Armstrong, H; Jones, B; Miften, M
Purpose: Daily localization of liver lesions with cone-beam CT (CBCT) is difficult due to poor image quality caused by scatter, respiratory motion, and the lack of radiographic contrast between the liver parenchyma and the lesion(s). Digital tomosynthesis (DTS) is investigated as a modality to improve liver visualization and lesion/parenchyma contrast for daily setup. Methods: An in-house tool was developed to generate DTS images using a point-by-point filtered back-projection method from on-board CBCT projection data. DTS image planes are generated in a user-defined orientation to visualize the anatomy at various depths. Reference DTS images are obtained from forward projection of the planning CT dataset at each projection angle. The CBCT DTS image set can then be registered to the reference DTS image set as a means of localization. Contour data from the planning CT's associated RT Structure file are forward projected similarly to the planning CT data. DTS images are created for each contoured structure, which can then be overlaid onto the DTS images for organ volume visualization. Results: High-resolution DTS images generated from CBCT projections show fine anatomical detail, including small blood vessels, within the patient. However, the reference DTS images generated from forward projection of the planning CT lack this level of detail due to the low resolution of the CT voxels as compared to the pixel size in the projection images; typically 1 mm by 1 mm by 3 mm (lat, vrt, lng) for the planning CT vs. 0.4 mm by 0.4 mm for CBCT projections. Overlaying the contours onto the DTS image allows for visualization of structures of interest. Conclusion: The ability to generate DTS images over a limited range of projection angles allows for a reduction in the amount of respiratory motion within each acquisition. DTS may provide improved visualization of structures and lesions as compared to CBCT for highly mobile tumors.
NASA Astrophysics Data System (ADS)
Garcia, J.; Hidalgo, S. S.; Solis, S. E.; Vazquez, D.; Nuñez, J.; Rodriguez, A. O.
2012-10-01
Susceptibility artifacts can degrade magnetic resonance image quality. Electrodes are an important source of artifacts when performing brain imaging. A dedicated phantom was built using a depth electrode to study the susceptibility effects under different pulse sequences. T2-weighted images were acquired with both gradient- and spin-echo sequences. The spin-echo sequences can significantly attenuate the susceptibility artifacts, allowing straightforward visualization of the regions surrounding the electrode.
The effect of texture granularity on texture synthesis quality
NASA Astrophysics Data System (ADS)
Golestaneh, S. Alireza; Subedar, Mahesh M.; Karam, Lina J.
2015-09-01
Natural and artificial textures occur frequently in images and in video sequences. Image/video coding systems based on texture synthesis can make use of a reliable texture synthesis quality assessment method in order to improve the compression performance in terms of perceived quality and bit-rate. Existing objective visual quality assessment methods do not perform satisfactorily when predicting the synthesized texture quality. In our previous work, we showed that texture regularity can be used as an attribute for estimating the quality of synthesized textures. In this paper, we study the effect of another texture attribute, namely texture granularity, on the quality of synthesized textures. For this purpose, subjective studies are conducted to assess the quality of synthesized textures with different levels (low, medium, high) of perceived texture granularity using different types of texture synthesis methods.
JPEG vs. JPEG 2000: an objective comparison of image encoding quality
NASA Astrophysics Data System (ADS)
Ebrahimi, Farzad; Chamik, Matthieu; Winkler, Stefan
2004-11-01
This paper describes an objective comparison of the image quality of different encoders. Our approach is based on estimating the visual impact of compression artifacts on perceived quality. We present a tool that measures these artifacts in an image and uses them to compute a prediction of the Mean Opinion Score (MOS) obtained in subjective experiments. We show that the MOS predictions by our proposed tool are a better indicator of perceived image quality than PSNR, especially for highly compressed images. For the encoder comparison, we compress a set of 29 test images with two JPEG encoders (Adobe Photoshop and IrfanView) and three JPEG2000 encoders (JasPer, Kakadu, and IrfanView) at various compression ratios. We compute blockiness, blur, and MOS predictions as well as PSNR of the compressed images. Our results show that the IrfanView JPEG encoder produces consistently better images than the Adobe Photoshop JPEG encoder at the same data rate. The differences between the JPEG2000 encoders in our test are less pronounced; JasPer comes out as the best codec, closely followed by IrfanView and Kakadu. Comparing the JPEG- and JPEG2000-encoding quality of IrfanView, we find that JPEG has a slight edge at low compression ratios, while JPEG2000 is the clear winner at medium and high compression ratios.
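The PSNR figures reported above follow the standard definition; a minimal sketch is given below. The blockiness, blur, and MOS-prediction metrics are specific to the authors' tool and are not reproduced here.

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio (dB) between a reference image and its
    compressed version, both given on the same intensity scale."""
    mse = np.mean((np.asarray(reference, float) - np.asarray(test, float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```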
Standardized Uptake Value Ratio-Independent Evaluation of Brain Amyloidosis.
Chincarini, Andrea; Sensi, Francesco; Rei, Luca; Bossert, Irene; Morbelli, Silvia; Guerra, Ugo Paolo; Frisoni, Giovanni; Padovani, Alessandro; Nobili, Flavio
2016-10-18
The assessment of in vivo 18F images targeting amyloid deposition is currently carried out by visual rating with an optional quantification based on standardized uptake value ratio (SUVr) measurements. We target the difficulties of image reading and possible shortcomings of the SUVr methods by validating a new semi-quantitative approach named ELBA. ELBA involves minimal image preprocessing and does not rely on small, specific regions of interest (ROIs). It evaluates the whole brain and delivers a geometrical/intensity score to be used for ranking and dichotomic assessment. The method was applied to 18F-florbetapir images from the ADNI database. Five expert readers provided visual assessment in blind and open sessions. The longitudinal trend and the comparison to SUVr measurements were also evaluated. ELBA performed with area under the ROC curve (AUC) = 0.997 versus the visual assessment. The score was significantly correlated to the SUVr values (r = 0.86, p < 10^-4). The longitudinal analysis estimated a test/retest error of ≃2.3%. The cohort and longitudinal analyses suggest that the ELBA method accurately ranks the brain amyloid burden. The expert readers confirmed its relevance in aiding the visual assessment in a significant number (85) of difficult cases. Despite the good performance, poor and uneven image quality constitutes the major limitation.
De Crop, An; Bacher, Klaus; Van Hoof, Tom; Smeets, Peter V; Smet, Barbara S; Vergauwen, Merel; Kiendys, Urszula; Duyck, Philippe; Verstraete, Koenraad; D'Herde, Katharina; Thierens, Hubert
2012-01-01
To determine the correlation between the clinical and physical image quality of chest images by using cadavers embalmed with the Thiel technique and a contrast-detail phantom. The use of human cadavers fulfilled the requirements of the institutional ethics committee. Clinical image quality was assessed by using three human cadavers embalmed with the Thiel technique, which results in excellent preservation of the flexibility and plasticity of organs and tissues. As a result, lungs can be inflated during image acquisition to simulate the pulmonary anatomy seen on a chest radiograph. Both contrast-detail phantom images and chest images of the Thiel-embalmed bodies were acquired with an amorphous silicon flat-panel detector. Tube voltage (70, 81, 90, 100, 113, 125 kVp), copper filtration (0.1, 0.2, 0.3 mm Cu), and exposure settings (200, 280, 400, 560, 800 speed class) were altered to simulate different quality levels. Four experienced radiologists assessed the image quality by using a visual grading analysis (VGA) technique based on European Quality Criteria for Chest Radiology. The phantom images were scored manually and automatically with use of dedicated software, both resulting in an inverse image quality figure (IQF). Spearman rank correlations between inverse IQFs and VGA scores were calculated. A statistically significant correlation (r = 0.80, P < .01) was observed between the VGA scores and the manually obtained inverse IQFs. Comparison of the VGA scores and the automated evaluated phantom images showed an even better correlation (r = 0.92, P < .001). The results support the value of contrast-detail phantom analysis for evaluating clinical image quality in chest radiography. © RSNA, 2011.
Dong, Chun-wang; Zhu, Hong-kai; Zhao, Jie-wen; Jiang, Yong-wen; Yuan, Hai-bo; Chen, Quan-sheng
2017-01-01
Tea is one of the three major beverages in the world. In China, green tea has the largest consumption, and needle-shaped green tea, such as Maofeng tea and Sparrow Tongue tea, accounts for more than 40% of green tea (Zhu et al., 2017). The appearance of green tea is one of the important indexes in the evaluation of green tea quality. Especially in market transactions, the price of tea is usually determined by its appearance (Zhou et al., 2012). Human sensory evaluation is usually conducted by experts and is easily affected by factors such as lighting, experience, and psychological and visual state. Moreover, although people can distinguish slight differences between similar colors or textures, the specific grade of a tea is hard to determine (Chen et al., 2008). Because human descriptions of color and texture are qualitative, it is difficult to evaluate sensory quality accurately, objectively, and in a standardized manner. Color is an important visual property of a computer image (Xie et al., 2014; Khulal et al., 2016); texture is a visual expression of how image grayscale and color change with spatial position, and can be used to describe the roughness and directivity of an object's surface (Sanaeifar et al., 2016). Researchers have already used computer vision image technologies to identify the varieties, grades, and origins of tea (Chen et al., 2008; Xie et al., 2014; Zhu et al., 2017), mostly using crush, tear, and curl (CTC) red (green) broken tea, curly green tea (Bilochun tea), and flat-type green tea (West Lake Dragon-well green tea) as the information sources. However, the aim of that research was to establish qualitative evaluation methods for tea quality (Fu et al., 2013). There is little literature on the sensory evaluation of the appearance quality of needle-shaped green tea, and in particular on quantitative evaluation models (Zhou et al., 2012; Zhu et al., 2017). PMID:28585431
Image Data Compression Having Minimum Perceptual Error
NASA Technical Reports Server (NTRS)
Watson, Andrew B. (Inventor)
1997-01-01
A method is presented for performing color or grayscale image compression that eliminates redundant and invisible image components. The image compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The quantization matrix incorporates visual masking by luminance and contrast techniques, resulting in a minimum perceptual error for any given bit rate, or a minimum bit rate for a given perceptual error.
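A hedged sketch of the generic block-DCT quantization step that such a scheme builds on is shown below; the quantization matrix here is an arbitrary placeholder, not the perceptually optimized luminance/contrast-masking matrix of the invention.

```python
import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):
    return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

def idct2(block):
    return idct(idct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

def quantize_blocks(image, qmatrix):
    """Quantize each full 8x8 block's DCT coefficients by `qmatrix`; larger
    entries discard more of the (perceptually less visible) detail."""
    h, w = image.shape
    out = np.array(image, dtype=float)
    for i in range(0, h - h % 8, 8):
        for j in range(0, w - w % 8, 8):
            coeffs = dct2(image[i:i + 8, j:j + 8].astype(float))
            quantized = np.round(coeffs / qmatrix)            # lossy step
            out[i:i + 8, j:j + 8] = idct2(quantized * qmatrix)
    return out

# Placeholder quantization matrix (coarser for higher frequencies).
Q = 8.0 + 4.0 * np.add.outer(np.arange(8), np.arange(8))
decoded = quantize_blocks(np.random.rand(64, 64) * 255, Q)
```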
Multiresolution image gathering and restoration
NASA Technical Reports Server (NTRS)
Fales, Carl L.; Huck, Friedrich O.; Alter-Gartenberg, Rachel; Rahman, Zia-Ur
1992-01-01
In this paper we integrate multiresolution decomposition with image gathering and restoration. This integration leads to a Wiener-matrix filter that accounts for the aliasing, blurring, and noise in image gathering, together with the digital filtering and decimation in signal decomposition. Moreover, as implemented here, the Wiener-matrix filter completely suppresses the blurring and raster effects of the image-display device. We demonstrate that this filter can significantly improve the fidelity and visual quality produced by conventional image reconstruction. The extent of this improvement, in turn, depends on the design of the image-gathering device.
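As a much simpler stand-in for the Wiener-matrix filter described above (which jointly models acquisition, decomposition, and display), here is a classical scalar frequency-domain Wiener restoration sketch; parameter names are illustrative.

```python
import numpy as np

def wiener_restore(degraded, psf, noise_to_signal=0.01):
    """Classical frequency-domain Wiener restoration:
    F_hat = conj(H) / (|H|^2 + K) * G, with K a scalar noise-to-signal ratio."""
    G = np.fft.fft2(degraded)
    H = np.fft.fft2(psf, s=degraded.shape)   # zero-padded transfer function
    W = np.conj(H) / (np.abs(H) ** 2 + noise_to_signal)
    return np.real(np.fft.ifft2(W * G))
```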
Edge directed image interpolation with Bamberger pyramids
NASA Astrophysics Data System (ADS)
Rosiles, Jose Gerardo
2005-08-01
Image interpolation is a standard feature in digital image editing software, digital camera systems, and printers. Classical methods for resizing produce blurred images of unacceptable quality. Bamberger pyramids and filter banks have been successfully used for texture and image analysis; they provide excellent multiresolution and directional selectivity. In this paper we present an edge-directed image interpolation algorithm which takes advantage of the simultaneous spatial-directional edge localization at the subband level. The proposed algorithm outperforms classical schemes such as bilinear and bicubic interpolation from both the visual and numerical points of view.
Pre-processing SAR image stream to facilitate compression for transport on bandwidth-limited-link
Rush, Bobby G.; Riley, Robert
2015-09-29
Pre-processing is applied to a raw VideoSAR (or similar near-video rate) product to transform the image frame sequence into a product that resembles more closely the type of product for which conventional video codecs are designed, while sufficiently maintaining utility and visual quality of the product delivered by the codec.
Improving the visualization of 3D ultrasound data with 3D filtering
NASA Astrophysics Data System (ADS)
Shamdasani, Vijay; Bae, Unmin; Managuli, Ravi; Kim, Yongmin
2005-04-01
3D ultrasound imaging is quickly gaining widespread clinical acceptance as a visualization tool that allows clinicians to obtain unique views not available with traditional 2D ultrasound imaging and an accurate understanding of patient anatomy. The ability to acquire, manipulate and interact with the 3D data in real time is an important feature of 3D ultrasound imaging. Volume rendering is often used to transform the 3D volume into 2D images for visualization. Unlike computed tomography (CT) and magnetic resonance imaging (MRI), volume rendering of 3D ultrasound data creates noisy images in which surfaces cannot be readily discerned due to speckles and low signal-to-noise ratio. The degrading effect of speckles is especially severe when gradient shading is performed to add depth cues to the image. Several researchers have reported that smoothing the pre-rendered volume with a 3D convolution kernel, such as 5x5x5, can significantly improve the image quality, but at the cost of decreased resolution. In this paper, we have analyzed the reasons for the improvement in image quality with 3D filtering and determined that the improvement is due to two effects. The filtering reduces speckles in the volume data, which leads to (1) more accurate gradient computation and better shading and (2) decreased noise during compositing. We have found that applying a moderate-size smoothing kernel (e.g., 7x7x7) to the volume data before gradient computation combined with some smoothing of the volume data (e.g., with a 3x3x3 lowpass filter) before compositing yielded images with good depth perception and no appreciable loss in resolution. Providing the clinician with the flexibility to control both of these effects (i.e., shading and compositing) independently could improve the visualization of the 3D ultrasound data. Introducing this flexibility into the ultrasound machine requires 3D filtering to be performed twice on the volume data, once before gradient computation and again before compositing. 3D filtering of an ultrasound volume containing millions of voxels requires a large amount of computation, and doing it twice decreases the number of frames that can be visualized per second. To address this, we have developed several techniques to make computation efficient. For example, we have used the moving average method to filter a 128x128x128 volume with a 3x3x3 boxcar kernel in 17 ms on a single MAP processor running at 400 MHz. The same methods reduced the computing time on a Pentium 4 running at 3 GHz from 110 ms to 62 ms. We believe that our proposed method can improve 3D ultrasound visualization without sacrificing resolution and incurring an excessive computing time.
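A minimal sketch of the two-stage smoothing strategy described above, using SciPy's uniform (boxcar) filter; the kernel sizes are the illustrative values mentioned in the abstract, and the function name is hypothetical.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def preprocess_for_rendering(volume, shading_kernel=7, compositing_kernel=3):
    """Smooth the ultrasound volume twice: a larger boxcar before gradient
    (shading) computation and a smaller one before compositing."""
    vol_shade = uniform_filter(volume.astype(float), size=shading_kernel)
    gradients = np.gradient(vol_shade)                     # used for shading normals
    vol_composite = uniform_filter(volume.astype(float), size=compositing_kernel)
    return gradients, vol_composite

gradients, smoothed = preprocess_for_rendering(np.random.rand(128, 128, 128))
```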
Combined use of iterative reconstruction and monochromatic imaging in spinal fusion CT images.
Wang, Fengdan; Zhang, Yan; Xue, Huadan; Han, Wei; Yang, Xianda; Jin, Zhengyu; Zwar, Richard
2017-01-01
Spinal fusion surgery is an important procedure for treating spinal diseases and computed tomography (CT) is a critical tool for postoperative evaluation. However, CT image quality is considerably impaired by metal artifacts and image noise. To explore whether metal artifacts and image noise can be reduced by combining two technologies, adaptive statistical iterative reconstruction (ASIR) and monochromatic imaging generated by gemstone spectral imaging (GSI) dual-energy CT. A total of 51 patients with 318 spinal pedicle screws were prospectively scanned by dual-energy CT using fast kV-switching GSI between 80 and 140 kVp. Monochromatic GSI images at 110 keV were reconstructed either without or with various levels of ASIR (30%, 50%, 70%, and 100%). The quality of five sets of images was objectively and subjectively assessed. With objective image quality assessment, metal artifacts decreased when increasing levels of ASIR were applied (P < 0.001). Moreover, adding ASIR to GSI also decreased image noise (P < 0.001) and improved the signal-to-noise ratio (P < 0.001). The subjective image quality analysis showed good inter-reader concordance, with intra-class correlation coefficients between 0.89 and 0.99. The visualization of peri-implant soft tissue was improved at higher ASIR levels (P < 0.001). Combined use of ASIR and GSI decreased image noise and improved image quality in post-spinal fusion CT scans. Optimal results were achieved with ASIR levels ≥70%. © The Foundation Acta Radiologica 2016.
Sliding window adaptive histogram equalization of intraoral radiographs: effect on image quality.
Sund, T; Møystad, A
2006-05-01
To investigate whether contrast enhancement by non-interactive, sliding window adaptive histogram equalization (SWAHE) can enhance the image quality of intraoral radiographs in the dental clinic. Three dentists read 22 periapical and 12 bitewing storage phosphor (SP) radiographs. For the periapical readings they graded the quality of the examination with regard to visually locating the root apex. For the bitewing readings they registered all occurrences of approximal caries on a confidence scale. Each reading was first done on an unprocessed radiograph ("single-view"), and then re-done with the image processed with SWAHE displayed beside the unprocessed version ("twin-view"). The processing parameters for SWAHE were the same for all the images. For the periapical examinations, twin-view was judged to raise the image quality for 52% of those cases where the single-view quality was below the maximum. For the bitewing radiographs, there was a change of caries classification (both positive and negative) with twin-view in 19% of the cases, but with only a 3% net increase in the total number of caries registrations. For both examinations interobserver variance was unaffected. Non-interactive SWAHE applied to dental SP radiographs produces a supplemental contrast enhanced image which in twin-view reading improves the image quality of periapical examinations. SWAHE also affects caries diagnosis of bitewing images, and further study using a gold standard is warranted.
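A literal (and deliberately slow) sketch of sliding window adaptive histogram equalization, for intuition only; production SWAHE implementations update the local histogram incrementally, and the window size here is an assumption rather than the study's parameter.

```python
import numpy as np

def swahe(image, window=33, levels=256):
    """Sliding window adaptive histogram equalization: each output pixel is the
    rank of the centre pixel within its local window, rescaled to the full
    grey range (equivalent to evaluating the local histogram's CDF)."""
    pad = window // 2
    padded = np.pad(image, pad, mode="reflect")
    out = np.empty(image.shape, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            local = padded[i:i + window, j:j + window]
            rank = np.count_nonzero(local <= image[i, j])
            out[i, j] = (levels - 1) * rank / local.size
    return out.astype(np.uint8)

# Twin-view idea: display the processed image beside the unprocessed original.
radiograph = (np.random.rand(64, 64) * 255).astype(np.uint8)
enhanced = swahe(radiograph)
```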
Huber, Timothy C; Krishnaraj, Arun; Monaghan, Dayna; Gaskin, Cree M
2018-05-18
Due to mandates from recent legislation, clinical decision support (CDS) software is being adopted by radiology practices across the country. This software provides imaging study decision support for referring providers at the point of order entry. CDS systems produce a large volume of data, providing opportunities for research and quality improvement. In order to better visualize and analyze trends in this data, an interactive data visualization dashboard was created using a commercially available data visualization platform. Following the integration of a commercially available clinical decision support product into the electronic health record, a dashboard was created using a commercially available data visualization platform (Tableau, Seattle, WA). Data generated by the CDS were exported from the data warehouse, where they were stored, into the platform. This allowed for real-time visualization of the data generated by the decision support software. The creation of the dashboard allowed the output from the CDS platform to be more easily analyzed and facilitated hypothesis generation. Integrating data visualization tools into clinical decision support tools allows for easier data analysis and can streamline research and quality improvement efforts.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, A; Paysan, P; Brehm, M
2016-06-15
Purpose: To improve CBCT image quality for image-guided radiotherapy by applying advanced reconstruction algorithms to overcome scatter, noise, and artifact limitations. Methods: CBCT is used extensively for patient setup in radiotherapy. However, image quality generally falls short of diagnostic CT, limiting soft-tissue based positioning and potential applications such as adaptive radiotherapy. The conventional TrueBeam CBCT reconstructor uses a basic scatter correction and FDK reconstruction, resulting in residual scatter artifacts, suboptimal image noise characteristics, and other artifacts such as cone-beam artifacts. We have developed an advanced scatter correction that uses a finite-element solver (AcurosCTS) to model the behavior of photons as they pass (and scatter) through the object. Furthermore, iterative reconstruction is applied to the scatter-corrected projections, enforcing data consistency with statistical weighting and applying an edge-preserving image regularizer to reduce image noise. The combined algorithms have been implemented on a GPU. CBCT projections from clinically operating TrueBeam systems have been used to compare image quality between the conventional and improved reconstruction methods. Planning CT images of the same patients have also been compared. Results: The advanced scatter correction removes shading and inhomogeneity artifacts, reducing the scatter artifact from 99.5 HU to 13.7 HU in a typical pelvis case. Iterative reconstruction provides further benefit by reducing image noise and eliminating streak artifacts, thereby improving soft-tissue visualization. In a clinical head and pelvis CBCT, the noise was reduced by 43% and 48%, respectively, with no change in spatial resolution (assessed visually). Additional benefits include reduction of cone-beam artifacts and reduction of metal artifacts due to intrinsic downweighting of corrupted rays. Conclusion: The combination of an advanced scatter correction with iterative reconstruction substantially improves CBCT image quality. It is anticipated that clinically acceptable reconstruction times will result from a multi-GPU implementation (the algorithms are under active development and not yet commercially available). All authors are employees of and (may) own stock of Varian Medical Systems.
Isoda, Hiroyoshi; Furuta, Akihiro; Togashi, Kaori
2015-01-01
Background A 3 Tesla (3 T) magnetic resonance (MR) scanner is a promising tool for upper abdominal MR angiography. However, there is no report focused on the image quality of non-contrast-enhanced MR portography and hepatic venography at 3 T. Purpose To compare and evaluate images of non-contrast-enhanced MR portography and hepatic venography with time-spatial labeling inversion pulses (Time-SLIP) at 1.5 Tesla (1.5 T) and 3 T. Material and Methods Twenty-five healthy volunteers were examined using respiratory-triggered three-dimensional balanced steady-state free-precession (bSSFP) with Time-SLIP. For portography, we used one tagging pulse (selective inversion recovery) and one non-selective inversion recovery pulse; for venography, two tagging pulses were used. The relative signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) were quantified, and the quality of visualization was evaluated. Results The CNRs of the main portal vein, right portal vein, and left portal vein at 3 T were better than at 1.5 T. The image quality scores for the portal branches of segments 4, 5, and 8 were significantly higher at 3 T than at 1.5 T. The CNR of the right hepatic vein (RHV) at 3 T was significantly lower than at 1.5 T. The image quality scores of RHV and the middle hepatic vein were higher at 1.5 T than at 3 T. For RHV visualization, the difference was statistically significant. Conclusion Non-contrast-enhanced MR portography with Time-SLIP at 3 T significantly improved visualization of the peripheral branch in healthy volunteers compared with 1.5 T. Non-contrast-enhanced MR hepatic venography at 1.5 T was better than at 3 T. PMID:26019890
DOE Office of Scientific and Technical Information (OSTI.GOV)
Faddegon, Bruce A.; Wu, Vincent; Pouliot, Jean
2008-12-15
Megavoltage cone beam computed tomography (MVCBCT) is routinely used for visualizing anatomical structures and implanted fiducials for patient positioning in radiotherapy. MVCBCT using a 6 MV treatment beam with a high atomic number (Z) target and flattening filter in the beamline, as done conventionally, has lower image quality than can be achieved with an MV beam due to heavy filtration of the low-energy bremsstrahlung. The unflattened beam of a low-Z target has an abundance of diagnostic-energy photons, detected with modern flat-panel detectors with much higher efficiency given the same dose to the patient. This principle guided the development of a new megavoltage imaging beamline (IBL) for a commercial radiotherapy linear accelerator. A carbon target was placed in one of the electron primary scattering foil slots on the target-foil slide. A PROM on a function controller board was programmed to put the carbon target in place for MVCBCT. A low accelerating potential of 4.2 MV was used for the IBL to restrict leakage of primary electrons through the target such that dose from x rays dominated the signal in the monitor chamber and the patient surface dose. Results from phantom and cadaver images demonstrated that the IBL had much improved image quality over the treatment beam. For similar imaging dose, the IBL improved the contrast-to-noise ratio by as much as a factor of 3 in soft tissue over that of the treatment beam. The IBL increased the spatial resolution by about a factor of 2, allowing the visualization of finer anatomical details. Images of the cadaver contained useful information with doses as low as 1 cGy. The IBL may be installed on certain models of linear accelerators without mechanical modification and results in significant improvement in the image quality with the same dose, or images of the same quality with less than one-third of the dose.
Infrared and visible image fusion scheme based on NSCT and low-level visual features
NASA Astrophysics Data System (ADS)
Li, Huafeng; Qiu, Hongmei; Yu, Zhengtao; Zhang, Yafei
2016-05-01
Multi-scale transform (MST) is an efficient tool for image fusion. Recently, many fusion methods have been developed based on different MSTs, and they have shown potential application in many fields. In this paper, we propose an effective infrared and visible image fusion scheme in the nonsubsampled contourlet transform (NSCT) domain, in which the NSCT is first employed to decompose each of the source images into a series of high-frequency subbands and one low-frequency subband. To improve the fusion performance, we designed two new activity measures for fusion of the lowpass subbands and the highpass subbands. These measures are based on the fact that the human visual system (HVS) perceives image quality mainly according to some of its low-level features. Then, the selection principles for the different subbands are presented based on the corresponding activity measures. Finally, the merged subbands are constructed according to the selection principles, and the final fused image is produced by applying the inverse NSCT to these merged subbands. Experimental results demonstrate the effectiveness and superiority of the proposed method over state-of-the-art fusion methods in terms of both visual effect and objective evaluation results.
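A rough sketch of multi-scale fusion is given below, using a discrete wavelet transform (PyWavelets) as a simpler stand-in for the NSCT, with the classic average/max-absolute selection rules rather than the paper's HVS-inspired activity measures; all names are illustrative.

```python
import numpy as np
import pywt

def fuse_wavelet(ir, vis, wavelet="db2", levels=3):
    """Fuse two registered, same-size images: average the approximation
    (lowpass) bands and keep the larger-magnitude detail (highpass)
    coefficients at every scale and orientation."""
    c_ir = pywt.wavedec2(ir.astype(float), wavelet, level=levels)
    c_vis = pywt.wavedec2(vis.astype(float), wavelet, level=levels)
    fused = [(c_ir[0] + c_vis[0]) / 2.0]                       # lowpass: average
    for d_ir, d_vis in zip(c_ir[1:], c_vis[1:]):
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(d_ir, d_vis)))      # highpass: max-abs
    return pywt.waverec2(fused, wavelet)

fused = fuse_wavelet(np.random.rand(128, 128), np.random.rand(128, 128))
```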
Spread spectrum image watermarking based on perceptual quality metric.
Zhang, Fan; Liu, Wenyu; Lin, Weisi; Ngan, King Ngi
2011-11-01
Efficient image watermarking calls for full exploitation of the perceptual distortion constraint. Second-order statistics of visual stimuli are regarded as critical features for perception. This paper proposes a second-order statistics (SOS)-based image quality metric, which considers the texture masking effect and the contrast sensitivity in Karhunen-Loève transform domain. Compared with the state-of-the-art metrics, the quality prediction by SOS better correlates with several subjectively rated image databases, in which the images are impaired by the typical coding and watermarking artifacts. With the explicit metric definition, spread spectrum watermarking is posed as an optimization problem: we search for a watermark to minimize the distortion of the watermarked image and to maximize the correlation between the watermark pattern and the spread spectrum carrier. The simple metric guarantees the optimal watermark a closed-form solution and a fast implementation. The experiments show that the proposed watermarking scheme can take full advantage of the distortion constraint and improve the robustness in return.
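A generic additive spread-spectrum embedding/detection sketch in the global DCT domain follows; the coefficient band, strength, and key are illustrative assumptions, and the paper's SOS-metric-constrained optimization is not reproduced.

```python
import numpy as np
from scipy.fftpack import dct, idct

BAND = np.arange(1000, 1000 + 1024)   # illustrative mid-frequency coefficient band

def embed(image, key=0, strength=2.0):
    """Add a pseudo-random +/-1 carrier (seeded by `key`) to global DCT
    coefficients of an image with at least ~2000 pixels."""
    carrier = np.random.default_rng(key).choice([-1.0, 1.0], size=BAND.size)
    coeffs = dct(dct(image.astype(float), axis=0, norm="ortho"), axis=1, norm="ortho")
    coeffs.ravel()[BAND] += strength * carrier
    return idct(idct(coeffs, axis=0, norm="ortho"), axis=1, norm="ortho")

def detect(marked, original, key=0):
    """Correlate the coefficient differences with the regenerated carrier."""
    diff = dct(dct(marked - original.astype(float), axis=0, norm="ortho"),
               axis=1, norm="ortho").ravel()[BAND]
    carrier = np.random.default_rng(key).choice([-1.0, 1.0], size=BAND.size)
    return float(np.dot(diff, carrier) / BAND.size)

img = np.random.rand(128, 128) * 255
print(detect(embed(img, key=7), img, key=7))   # close to the embedding strength
```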
Super Resolution Algorithm for CCTVs
NASA Astrophysics Data System (ADS)
Gohshi, Seiichi
2015-03-01
Recently, security cameras and CCTV systems have become an important part of our daily lives. The rising demand for such systems has created business opportunities in this field, especially in big cities. Analogue CCTV systems are being replaced by digital systems, and HDTV CCTV has become quite common. HDTV CCTV can achieve images with high contrast and decent quality if they are clicked in daylight. However, the quality of an image clicked at night does not always have sufficient contrast and resolution because of poor lighting conditions. CCTV systems depend on infrared light at night to compensate for insufficient lighting conditions, thereby producing monochrome images and videos. However, these images and videos do not have high contrast and are blurred. We propose a nonlinear signal processing technique that significantly improves visual and image qualities (contrast and resolution) of low-contrast infrared images. The proposed method enables the use of infrared cameras for various purposes such as night shot and poor lighting environments under poor lighting conditions.
Visual just noticeable differences
NASA Astrophysics Data System (ADS)
Nankivil, Derek; Chen, Minghan; Wooley, C. Benjamin
2018-02-01
A visual just noticeable difference (VJND) is the amount of change in either an image (e.g. a photographic print) or in vision (e.g. due to a change in refractive power of a vision correction device or visually coupled optical system) that is just noticeable when compared with the prior state. Numerous theoretical and clinical studies have been performed to determine the amount of change in various visual inputs (power, spherical aberration, astigmatism, etc.) that result in a just noticeable visual change. Each of these approaches, in defining a VJND, relies on the comparison of two visual stimuli. The first stimulus is the nominal or baseline state and the second is the perturbed state that results in a VJND. Using this commonality, we converted each result to the change in the area of the modulation transfer function (AMTF) to provide a more fundamental understanding of what results in a VJND. We performed an analysis of the wavefront criteria from basic optics, the image quality metrics, and clinical studies testing various visual inputs, showing that fractional changes in AMTF resulting in one VJND range from 0.025 to 0.075. In addition, cycloplegia appears to desensitize the human visual system so that a much larger change in the retinal image is required to give a VJND. This finding may be of great import for clinical vision tests. Finally, we present applications of the VJND model for the determination of threshold ocular aberrations and manufacturing tolerances of visually coupled optical systems.
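A small sketch of the quantity the study converges on, the fractional change in the area under the MTF (AMTF), is shown below; the Gaussian MTF shapes are purely illustrative.

```python
import numpy as np

def amtf(freq, mtf):
    """Area under a sampled MTF curve (trapezoid rule)."""
    return float(np.sum(0.5 * (mtf[1:] + mtf[:-1]) * np.diff(freq)))

def vjnd_fraction(freq, mtf_baseline, mtf_perturbed):
    """Fractional AMTF change between baseline and perturbed states; per the
    study, roughly 0.025-0.075 corresponds to one VJND."""
    a0, a1 = amtf(freq, mtf_baseline), amtf(freq, mtf_perturbed)
    return abs(a1 - a0) / a0

f = np.linspace(0, 60, 121)   # spatial frequency samples
print(round(vjnd_fraction(f, np.exp(-(f / 25.0) ** 2), np.exp(-(f / 23.5) ** 2)), 3))
```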
Conceptual design study for an advanced cab and visual system, volume 1
NASA Technical Reports Server (NTRS)
Rue, R. J.; Cyrus, M. L.; Garnett, T. A.; Nachbor, J. W.; Seery, J. A.; Starr, R. L.
1980-01-01
A conceptual design study was conducted to define requirements for an advanced cab and visual system. The rotorcraft system integration simulator is for engineering studies in the area of mission associated vehicle handling qualities. Principally a technology survey and assessment of existing and proposed simulator visual display systems, image generation systems, modular cab designs, and simulator control station designs were performed and are discussed. State of the art survey data were used to synthesize a set of preliminary visual display system concepts of which five candidate display configurations were selected for further evaluation. Basic display concepts incorporated in these configurations included: real image projection, using either periscopes, fiber optic bundles, or scanned laser optics; and virtual imaging with helmet mounted displays. These display concepts were integrated in the study with a simulator cab concept employing a modular base for aircraft controls, crew seating, and instrumentation (or other) displays. A simple concept to induce vibration in the various modules was developed and is described. Results of evaluations and trade offs related to the candidate system concepts are given, along with a suggested weighting scheme for numerically comparing visual system performance characteristics.
Bode, Stefan; Bennett, Daniel; Sewell, David K; Paton, Bryan; Egan, Gary F; Smith, Philip L; Murawski, Carsten
2018-03-01
According to sequential sampling models, perceptual decision-making is based on accumulation of noisy evidence towards a decision threshold. The speed with which a decision is reached is determined by both the quality of incoming sensory information and random trial-by-trial variability in the encoded stimulus representations. To investigate those decision dynamics at the neural level, participants made perceptual decisions while functional magnetic resonance imaging (fMRI) was conducted. On each trial, participants judged whether an image presented under conditions of high, medium, or low visual noise showed a piano or a chair. Higher stimulus quality (lower visual noise) was associated with increased activation in bilateral medial occipito-temporal cortex and ventral striatum. Lower stimulus quality was related to stronger activation in posterior parietal cortex (PPC) and dorsolateral prefrontal cortex (DLPFC). When stimulus quality was fixed, faster response times were associated with a positive parametric modulation of activation in medial prefrontal and orbitofrontal cortex, while slower response times were again related to more activation in PPC, DLPFC and insula. Our results suggest that distinct neural networks were sensitive to the quality of stimulus information, and to trial-to-trial variability in the encoded stimulus representations, but that reaching a decision was a consequence of their joint activity. Copyright © 2018 Elsevier Ltd. All rights reserved.
Effect of tone mapping operators on visual attention deployment
NASA Astrophysics Data System (ADS)
Narwaria, Manish; Perreira Da Silva, Matthieu; Le Callet, Patrick; Pepion, Romuald
2012-10-01
High Dynamic Range (HDR) images/videos require the use of a tone mapping operator (TMO) when visualized on Low Dynamic Range (LDR) displays. From an artistic intention point of view, TMOs are not necessarily transparent and might induce different behavior to view the content. In this paper, we investigate and quantify how TMOs modify visual attention (VA). To that end both objective and subjective tests in the form of eye-tracking experiments have been conducted on several still image content that have been processed by 11 different TMOs. Our studies confirm that TMOs can indeed modify human attention and fixation behavior significantly. Therefore our studies suggest that VA needs consideration for evaluating the overall perceptual impact of TMOs on HDR content. Since the existing studies so far have only considered the quality or aesthetic appeal angle, this study brings in a new perspective regarding the importance of VA in HDR content processing for visualization on LDR displays.
NASA Astrophysics Data System (ADS)
Wu, Wei; Zhao, Dewei; Zhang, Huan
2015-12-01
Super-resolution image reconstruction is an effective method to improve image quality and has important research significance in the field of image processing. However, the choice of the dictionary directly affects the efficiency of image reconstruction. Sparse representation theory is introduced into the nearest-neighbor selection problem. Building on sparse-representation-based super-resolution reconstruction, a super-resolution image reconstruction algorithm based on a multi-class dictionary is analyzed. This method avoids the redundancy problem of training only a single over-complete dictionary, makes the sub-dictionaries more representative, and replaces the traditional Euclidean distance computation to improve the quality of the whole reconstructed image. In addition, non-local self-similarity regularization is introduced to address the ill-posed problem. Experimental results show that the algorithm achieves much better results than state-of-the-art algorithms in terms of both PSNR and visual perception.
Optimizing MR imaging-guided navigation for focused ultrasound interventions in the brain
NASA Astrophysics Data System (ADS)
Werner, B.; Martin, E.; Bauer, R.; O'Gorman, R.
2017-03-01
MR imaging during transcranial MR imaging-guided Focused Ultrasound surgery (tcMRIgFUS) is challenging due to the complex ultrasound transducer setup and the water bolus used for acoustic coupling. Achievable image quality in the tcMRIgFUS setup using the standard body coil is significantly inferior to current neuroradiologic standards. As a consequence, MR image guidance for precise navigation in functional neurosurgical interventions using tcMRIgFUS is basically limited to the acquisition of MR coordinates of salient landmarks such as the anterior and posterior commissure for aligning a stereotactic atlas. Here, we show how improved MR image quality provided by a custom built MR coil and optimized MR imaging sequences can support imaging-guided navigation for functional tcMRIgFUS neurosurgery by visualizing anatomical landmarks that can be integrated into the navigation process to accommodate for patient specific anatomy.
NASA Astrophysics Data System (ADS)
Bresnahan, Patricia A.; Pukinskis, Madeleine; Wiggins, Michael
1999-03-01
Image quality assessment systems differ greatly with respect to the number and types of images they need to evaluate, and their overall architectures. Managers of these systems, however, all need to be able to tune and evaluate system performance, requirements often overlooked or under-designed during project planning. Performance tuning tools allow users to define acceptable quality standards for image features and attributes by adjusting parameter settings. Performance analysis tools allow users to evaluate and/or predict how well a system performs in a given parameter state. While image assessment algorithms are becoming quite sophisticated, duplicating or surpassing the human decision-making process in their speed and reliability, they often require a greater investment in 'training' or fine tuning of parameters in order to achieve optimum performance. This process may involve the analysis of hundreds or thousands of images, generating a large database of files and statistics that can be difficult to sort through and interpret. Compounding the difficulty is the fact that personnel charged with tuning and maintaining the production system may not have the statistical or analytical background required for the task. Meanwhile, hardware innovations have greatly increased the volume of images that can be handled in a given time frame, magnifying the consequences of running a production site with an inadequately tuned system. In this paper, some general requirements for a performance evaluation and tuning data visualization system are discussed. A custom-engineered solution to the tuning and evaluation problem is then presented, developed within the context of a high-volume image quality assessment, data entry, OCR, and image archival system. A key factor influencing the design of the system was the context-dependent definition of image quality, as perceived by a human interpreter. This led to the development of a five-level, hierarchical approach to image quality evaluation. Lower-level pass-fail conditions and decision rules were coded into the system. Higher-level image quality states were defined by allowing the users to interactively adjust the system's sensitivity to various image attributes by manipulating graphical controls. Results were presented in easily interpreted bar graphs. These graphs were mouse-sensitive, allowing the user to more fully explore the subsets of data indicated by various color blocks. In order to simplify the performance evaluation and tuning process, users could choose to view the results of (1) the existing system parameter state, (2) any arbitrary parameter values they chose, or (3) a quasi-optimum parameter state, derived by applying a decision rule to a large set of possible parameter states. Giving managers easy-to-use tools for defining the more subjective aspects of quality resulted in a system that responded to contextual cues that are difficult to hard-code. It had the additional advantage of allowing the definition of quality to evolve over time, as users became more knowledgeable as to the strengths and limitations of an automated quality inspection system.
Heggen, Kristin Livelten; Pedersen, Hans Kristian; Andersen, Hilde Kjernlie; Martinsen, Anne Catrine T
2016-01-01
Background Iterative reconstruction can reduce image noise and thereby facilitate dose reduction. Purpose To evaluate qualitative and quantitative image quality for full dose and dose reduced head computed tomography (CT) protocols reconstructed using filtered back projection (FBP) and adaptive statistical iterative reconstruction (ASIR). Material and Methods Fourteen patients undergoing follow-up head CT were included. All patients underwent full dose (FD) exam and subsequent 15% dose reduced (DR) exam, reconstructed using FBP and 30% ASIR. Qualitative image quality was assessed using visual grading characteristics. Quantitative image quality was assessed using ROI measurements in cerebrospinal fluid (CSF), white matter, peripheral and central gray matter. Additionally, quantitative image quality was measured in Catphan and vendor’s water phantom. Results There was no significant difference in qualitative image quality between FD FBP and DR ASIR. Comparing same scan FBP versus ASIR, a noise reduction of 28.6% in CSF and between −3.7 and 3.5% in brain parenchyma was observed. Comparing FD FBP versus DR ASIR, a noise reduction of 25.7% in CSF, and −7.5 and 6.3% in brain parenchyma was observed. Image contrast increased in ASIR reconstructions. Contrast-to-noise ratio was improved in DR ASIR compared to FD FBP. In phantoms, noise reduction was in the range of 3 to 28% with image content. Conclusion There was no significant difference in qualitative image quality between full dose FBP and dose reduced ASIR. CNR improved in DR ASIR compared to FD FBP mostly due to increased contrast, not reduced noise. Therefore, we recommend using caution if reducing dose and applying ASIR to maintain image quality. PMID:27583169
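A minimal sketch of the ROI-based quantities reported above (percent noise reduction and contrast-to-noise ratio) is shown below; the HU values are hypothetical, and CNR definitions vary between studies.

```python
import numpy as np

def cnr(roi_tissue, roi_background):
    """Contrast-to-noise ratio: mean ROI difference over background SD."""
    return abs(np.mean(roi_tissue) - np.mean(roi_background)) / np.std(roi_background)

def noise_reduction_pct(sd_fbp, sd_asir):
    """Percent reduction in image noise (ROI standard deviation)."""
    return 100.0 * (sd_fbp - sd_asir) / sd_fbp

grey = np.random.normal(38.0, 4.0, 200)   # hypothetical grey-matter ROI (HU)
csf = np.random.normal(8.0, 5.0, 200)     # hypothetical CSF ROI (HU)
print(round(cnr(grey, csf), 2), round(noise_reduction_pct(5.0, 3.6), 1))
```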
Honda, O; Yanagawa, M; Inoue, A; Kikuyama, A; Yoshida, S; Sumikawa, H; Tobino, K; Koyama, M; Tomiyama, N
2011-01-01
Objective We investigated the image quality of multiplanar reconstruction (MPR) using adaptive statistical iterative reconstruction (ASIR). Methods Inflated and fixed lungs were scanned with a garnet detector CT in high-resolution mode (HR mode) or non-high-resolution (non-HR) mode, and MPR images were then reconstructed. Observers compared 15 MPR images of ASIR (40%) and ASIR (80%) with those of ASIR (0%), and assessed image quality using a visual five-point scale (1, definitely inferior; 5, definitely superior), with particular emphasis on normal pulmonary structures, artefacts, noise and overall image quality. Results The mean overall image quality scores in HR mode were 3.67 with ASIR (40%) and 4.97 with ASIR (80%). Those in non-HR mode were 3.27 with ASIR (40%) and 3.90 with ASIR (80%). The mean artefact scores in HR mode were 3.13 with ASIR (40%) and 3.63 with ASIR (80%), but those in non-HR mode were 2.87 with ASIR (40%) and 2.53 with ASIR (80%). The mean scores of the other parameters were greater than 3, whereas those in HR mode were higher than those in non-HR mode. There were significant differences between ASIR (40%) and ASIR (80%) in overall image quality (p<0.01). Contrast medium in the injection syringe was scanned to analyse image quality; ASIR did not suppress the severe artefacts of contrast medium. Conclusion In general, MPR image quality with ASIR (80%) was superior to that with ASIR (40%). However, there was an increased incidence of artefacts by ASIR when CT images were obtained in non-HR mode. PMID:21081572
NASA Astrophysics Data System (ADS)
Ma, Dan; Liu, Jun; Chen, Kai; Li, Huali; Liu, Ping; Chen, Huijuan; Qian, Jing
2016-04-01
In remote sensing fusion, the spatial details of a panchromatic (PAN) image and the spectral information of multispectral (MS) images are transferred into the fused image according to the characteristics of the human visual system. Thus, a remote sensing image fusion quality assessment called the feature-based fourth-order correlation coefficient (FFOCC) is proposed. FFOCC is based on the feature-based coefficient concept. Spatial features related to the spatial details of the PAN image and spectral features related to the spectral information of the MS images are first extracted from the fused image. Then, the fourth-order correlation coefficient between the spatial and spectral features is calculated and treated as the assessment result. FFOCC was then compared with existing, widely used indices, such as the Erreur Relative Globale Adimensionnelle de Synthèse (ERGAS) and the quality-with-no-reference index. Results of the fusion and distortion experiments indicate that FFOCC is consistent with subjective evaluation. FFOCC significantly outperforms the other indices in evaluating fusion images that are produced by different fusion methods and that are distorted in spatial and spectral features by blurring, adding noise, and changing intensity. All the findings indicate that the proposed method is an objective and effective quality assessment for remote sensing image fusion.
Lv, Peijie; Liu, Jie; Zhang, Rui; Jia, Yan
2015-01-01
Objective To assess the lesion conspicuity and image quality in CT evaluation of small (≤ 3 cm) hepatocellular carcinomas (HCCs) using automatic tube voltage selection (ATVS) and automatic tube current modulation (ATCM) with or without iterative reconstruction. Materials and Methods One hundred and five patients with 123 HCC lesions were included. Fifty-seven patients were scanned using both ATVS and ATCM and images were reconstructed using either filtered back-projection (FBP) (group A1) or sinogram-affirmed iterative reconstruction (SAFIRE) (group A2). Forty-eight patients were imaged using only ATCM, with a fixed tube potential of 120 kVp and FBP reconstruction (group B). Quantitative parameters (image noise in Hounsfield unit and contrast-to-noise ratio of the aorta, the liver, and the hepatic tumors) and qualitative visual parameters (image noise, overall image quality, and lesion conspicuity as graded on a 5-point scale) were compared among the groups. Results Group A2 scanned with the automatically chosen 80 kVp and 100 kVp tube voltages ranked the best in lesion conspicuity and subjective and objective image quality (p values ranging from < 0.001 to 0.004) among the three groups, except for overall image quality between group A2 and group B (p = 0.022). Group A1 showed higher image noise (p = 0.005) but similar lesion conspicuity and overall image quality as compared with group B. The radiation dose in group A was 19% lower than that in group B (p = 0.022). Conclusion CT scanning with combined use of ATVS and ATCM and image reconstruction with SAFIRE algorithm provides higher lesion conspicuity and better image quality for evaluating small hepatic HCCs with radiation dose reduction. PMID:25995682
Schlieren technique in soap film flows
NASA Astrophysics Data System (ADS)
Auliel, M. I.; Hebrero, F. Castro; Sosa, R.; Artana, G.
2017-05-01
We propose the use of the Schlieren technique as a tool to analyse flows in soap film tunnels. The technique enables visualization of perturbations of the film produced by the interposition of an object in the flow. The variations of intensity in the image are produced as a consequence of the deviations of the light beam traversing the deformed surfaces of the film. The quality of the Schlieren image is compared to images produced by the conventional interferometric technique. The analysis of Schlieren images of a cylinder wake flow indicates that this technique enables easy visualization of vortex centers. Post-processing of series of two successive images of a grid turbulent flow with a dense motion estimator is used to derive the velocity fields. The results obtained with this self-seeded flow show good agreement with the statistical properties of 2D turbulent flows reported in the literature.
Analyser-based mammography using single-image reconstruction.
Briedis, Dahliyani; Siu, Karen K W; Paganin, David M; Pavlov, Konstantin M; Lewis, Rob A
2005-08-07
We implement an algorithm that is able to decode a single analyser-based x-ray phase-contrast image of a sample, converting it into an equivalent conventional absorption-contrast radiograph. The algorithm assumes the projection approximation for x-ray propagation in a single-material object embedded in a substrate of approximately uniform thickness. Unlike the phase-contrast images, which have both directional bias and a bias towards edges present in the sample, the reconstructed images are directly interpretable in terms of the projected absorption coefficient of the sample. The technique was applied to a Leeds TOR[MAM] phantom, which is designed to test mammogram quality by the inclusion of simulated microcalcifications, filaments and circular discs. This phantom was imaged at varying doses using three modalities: analyser-based synchrotron phase-contrast images converted to equivalent absorption radiographs using our algorithm, slot-scanned synchrotron imaging and imaging using a conventional mammography unit. Features in the resulting images were then assigned a quality score by volunteers. The single-image reconstruction method achieved higher scores at equivalent and lower doses than the conventional mammography images, but no improvement of visualization of the simulated microcalcifications, and some degradation in image quality at reduced doses for filament features.
Regional Principal Color Based Saliency Detection
Lou, Jing; Ren, Mingwu; Wang, Huan
2014-01-01
Saliency detection is widely used in many visual applications such as image segmentation, object recognition, and classification. In this paper, we introduce a new method to detect salient objects in natural images. The approach is based on a regional principal color contrast model, which incorporates low-level and medium-level visual cues. The method allows a simple computation of color features and two categories of spatial relationships to build a saliency map, achieving higher F-measure rates. At the same time, we present an interpolation approach to evaluate the resulting curves and analyze parameter selection. Our method enables effective computation on images of arbitrary resolution. Experimental results on a saliency database show that our approach produces high-quality saliency maps and performs favorably against ten saliency detection algorithms. PMID:25379960
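A much-simplified global color-contrast saliency sketch (distance of each pixel from the mean image colour in CIELab) is shown below for intuition only; the paper's regional principal-color model additionally uses region-level cues and spatial relationships, and the function name is hypothetical.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.color import rgb2lab

def global_color_contrast_saliency(rgb):
    """Per-pixel distance of a slightly blurred CIELab image from the mean
    image colour, normalised to [0, 1]."""
    lab = rgb2lab(rgb)
    blurred = np.stack([gaussian_filter(lab[..., c], sigma=2) for c in range(3)], axis=-1)
    dist = np.linalg.norm(blurred - lab.reshape(-1, 3).mean(axis=0), axis=-1)
    return (dist - dist.min()) / (dist.max() - dist.min() + 1e-12)

saliency = global_color_contrast_saliency(np.random.rand(64, 64, 3))
```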
The potential of pigeons as surrogate observers in medical image perception studies
NASA Astrophysics Data System (ADS)
Krupinski, Elizabeth A.; Levenson, Richard M.; Navarro, Victor; Wasserman, Edward A.
2016-03-01
Assessment of medical image quality and how changes in image appearance impact performance are critical, but assessment can be expensive and time-consuming. Could an animal (pigeon) observer with well-known visual skills and a documented ability to distinguish complex visual stimuli serve as a surrogate for the human observer? Using sets of whole slide pathology images (WSI) and mammographic images, we trained pigeons (cohorts of 4) to detect and/or classify lesions in medical images. Standard training methods were used. A chamber equipped with a 15-inch display with a resistive touchscreen was used to display the images and record responses (pecks). Pigeon pellets were dispensed for correct responses. The pigeons readily learned to distinguish benign from malignant breast cancer histopathology in WSI (mean % correct responses rose from 50% to 85% over 15 days) and generalized readily from 4X to 10X and 20X magnifications; to detect microcalcifications (mean % correct responses rose from 50% to over 85% over 25 days); to distinguish benign from malignant breast masses (3 of 4 birds learned this task to around 80% and 60% over 10 days); and to ignore compression artifacts in WSI (performance with uncompressed slides averaged 95% correct; 15:1 and 27:1 compression slides averaged 92% and 90% correct). Pigeon models may help us better understand medical image perception and may be useful in quality assessment by serving as surrogate observers for certain types of studies.
Improved JPEG anti-forensics with better image visual quality and forensic undetectability.
Singh, Gurinder; Singh, Kulbir
2017-08-01
There is an immediate need to validate the authenticity of digital images due to the availability of powerful image processing tools that can easily manipulate digital image information without leaving any traces. Digital image forensics most often employs tampering detectors based on JPEG compression. Therefore, to evaluate the competency of JPEG forensic detectors, an anti-forensic technique is required. In this paper, two improved JPEG anti-forensic techniques are proposed to remove the blocking artifacts left by JPEG compression in both the spatial and DCT domains. In the proposed framework, the grainy noise left by perceptual histogram smoothing in the DCT domain can be reduced significantly by applying the proposed de-noising operation. Two types of denoising algorithms are proposed: one is based on a constrained minimization of the total-variation energy, and the other on a normalized weighting function. Subsequently, an improved TV-based deblocking operation is proposed to eliminate the blocking artifacts in the spatial domain. Then, a decalibration operation is applied to bring the processed image statistics back to their standard position. The experimental results show that the proposed anti-forensic approaches outperform the existing state-of-the-art techniques in achieving an enhanced tradeoff between image visual quality and forensic undetectability, but with high computational cost. Copyright © 2017 Elsevier B.V. All rights reserved.
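A sketch of the generic total-variation denoising step such a pipeline relies on, using scikit-image's Chambolle TV denoiser; the weight is an assumption, and the paper's constrained formulations and decalibration step are not reproduced.

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

def tv_smooth(decompressed_gray, weight=0.08):
    """Apply Chambolle total-variation denoising to a decompressed grayscale
    JPEG image to suppress blocking artifacts and grainy noise."""
    img = np.asarray(decompressed_gray, dtype=float) / 255.0
    smoothed = denoise_tv_chambolle(img, weight=weight)
    return (np.clip(smoothed, 0.0, 1.0) * 255.0).astype(np.uint8)

cleaned = tv_smooth((np.random.rand(64, 64) * 255).astype(np.uint8))
```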
Hahn, Wolfram; Fricke-Zech, Susanne; Fialka-Fricke, Julia; Dullin, Christian; Zapf, Antonia; Gruber, Rudolf; Sennhenn-kirchner, Sabine; Kubein-Meesenburg, Dietmar; Sadat-Khonsari, Reza
2009-09-01
An investigation was conducted to compare the image quality of prototype flat-panel volume computed tomography (fpVCT) and multislice computed tomography (MSCT) of suture structures. Bone samples were taken from the midpalatal suture of 5 young (16 weeks) and 5 old (200 weeks) Sus scrofa domestica and fixed in formalin solution. An fpVCT prototype and an MSCT were used to obtain images of the specimens. The facial reformations were assessed by 4 observers using a 1 (excellent) to 5 (poor) rating scale for the weighted criteria visualization of the suture structure. A linear mixed model was used for statistical analysis. Results with P < .05 were considered to be statistically significant. The visualization of the suture of young specimens was significantly better than that of older animals (P < .001). The visualization of the suture with fpVCT was significantly better than that with MSCT (P < .001). Compared with MSCT, fpVCT produces superior results in the visualization of the midpalatal suture in a Sus scrofa domestica model.
NASA Astrophysics Data System (ADS)
Al Hadhrami, Tawfik; Wang, Qi; Grecos, Christos
2012-06-01
When natural disasters or other large-scale incidents occur, obtaining accurate and timely information on the developing situation is vital to effective disaster recovery operations. High-quality video streams and high-resolution images, if available in real time, would provide an invaluable source of current situation reports to the incident management team. Meanwhile, a disaster often causes significant damage to the communications infrastructure. Therefore, another essential requirement for disaster management is the ability to rapidly deploy a flexible incident area communication network. Such a network would facilitate the transmission of real-time video streams and still images from the disrupted area to remote command and control locations. In this paper, a comprehensive end-to-end video/image transmission system between an incident area and a remote control centre is proposed and implemented, and its performance is experimentally investigated. In this study a hybrid multi-segment communication network is designed that seamlessly integrates terrestrial wireless mesh networks (WMNs), distributed wireless visual sensor networks, an airborne platform with video camera balloons, and a Digital Video Broadcasting- Satellite (DVB-S) system. By carefully integrating all of these rapidly deployable, interworking and collaborative networking technologies, we can fully exploit the joint benefits provided by WMNs, WSNs, balloon camera networks and DVB-S for real-time video streaming and image delivery in emergency situations among the disaster hit area, the remote control centre and the rescue teams in the field. The whole proposed system is implemented in a proven simulator. Through extensive simulations, the real-time visual communication performance of this integrated system has been numerically evaluated, towards a more in-depth understanding in supporting high-quality visual communications in such a demanding context.
Evaluation of image deblurring methods via a classification metric
NASA Astrophysics Data System (ADS)
Perrone, Daniele; Humphreys, David; Lamb, Robert A.; Favaro, Paolo
2012-09-01
The performance of single image deblurring algorithms is typically evaluated via a certain discrepancy measure between the reconstructed image and the ideal sharp image. The choice of metric, however, has been a source of debate and has also led to alternative metrics based on human visual perception. While fixed metrics may fail to capture some small but visible artifacts, perception-based metrics may favor reconstructions with artifacts that are visually pleasant. To overcome these limitations, we propose to assess the quality of reconstructed images via a task-driven metric. In this paper we consider object classification as the task and therefore use the rate of classification as the metric to measure deblurring performance. In our evaluation we use data with different types of blur in two cases: Optical Character Recognition (OCR), where the goal is to recognise characters in a black and white image, and object classification with no restrictions on pose, illumination and orientation. Finally, we show how off-the-shelf classification algorithms benefit from working with deblurred images.
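The task-driven metric itself is simple to express in code. The sketch below assumes a hypothetical classifier object exposing a predict(image) method; only the scoring logic reflects the idea described above.

```python
def classification_rate_metric(classifier, deblurred_images, labels):
    """Task-driven quality score: the fraction of deblurred images that a fixed,
    pre-trained classifier labels correctly. `classifier.predict` is a
    hypothetical interface, not a specific library call."""
    correct = sum(classifier.predict(img) == y for img, y in zip(deblurred_images, labels))
    return correct / len(labels)
```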
NASA Astrophysics Data System (ADS)
Liu, Deyang; An, Ping; Ma, Ran; Yang, Chao; Shen, Liquan; Li, Kai
2016-07-01
Three-dimensional (3-D) holoscopic imaging, also known as integral imaging, light field imaging, or plenoptic imaging, can provide natural and fatigue-free 3-D visualization. However, a large amount of data is required to represent the 3-D holoscopic content. Therefore, efficient coding schemes for this particular type of image are needed. A 3-D holoscopic image coding scheme with kernel-based minimum mean square error (MMSE) estimation is proposed. In the proposed scheme, the coding block is predicted by an MMSE estimator under statistical modeling. In order to obtain the signal statistical behavior, kernel density estimation (KDE) is utilized to estimate the probability density function of the statistical model. As bandwidth estimation (BE) is a key issue in the KDE problem, we also propose a BE method based on the kernel trick. The experimental results demonstrate that the proposed scheme can achieve a better rate-distortion performance and a better visual rendering quality.
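For a Gaussian product kernel, the MMSE (conditional-mean) prediction under a KDE model reduces to a Nadaraya-Watson style estimator. The sketch below illustrates that reduction in one dimension, with Silverman's rule as a stand-in bandwidth; it is not the kernel-trick bandwidth estimator proposed in the paper.

```python
import numpy as np

def silverman_bandwidth(x):
    """Silverman's rule of thumb, used here only as a simple bandwidth stand-in."""
    n = len(x)
    return 1.06 * np.std(x) * n ** (-1 / 5)

def kde_mmse_predict(x_train, y_train, x_query, h=None):
    """MMSE prediction E[y|x] under a Gaussian KDE model of (x, y); with a Gaussian
    product kernel this is the Nadaraya-Watson estimator. Inputs are 1-D arrays."""
    h = silverman_bandwidth(x_train) if h is None else h
    d = x_query[:, None] - x_train[None, :]        # pairwise query-train distances
    w = np.exp(-0.5 * (d / h) ** 2)                # Gaussian kernel weights
    return (w * y_train[None, :]).sum(axis=1) / (w.sum(axis=1) + 1e-12)
```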
Bradley, S
1995-01-01
The author explains why pictures have such impact. Images catch people's attention and to some extent can substitute for written words. They can be either still images like posters and flipcharts, three-dimensional images such as models or puppets, or they can show live events through drama, film, and video. Each of these is considered a visual aid when used as a teaching tool. When choosing visual aids, it is important to know which audience is being addressed and why, and to choose the visual aid which is most appropriate for the occasion. It is very important to pre-test pictures, especially when they will be used on their own without a facilitator to help participants analyze them. While some visual aids, such as maps and diagrams, are understood by everyone, people in some remote areas where there are very few books or papers may find pictures hard to understand. Facilitators are crucial to the successful use of visual aids. It is therefore very important that facilitators receive quality training. Well-trained facilitators from the local area will be more aware of local culture and concerns, and may be more trusted by participants. Poor training must be avoided. Finally, even though pictures can be misinterpreted, visual aids can make teaching and learning more enjoyable for many people. For people who find reading or speaking out difficult, the use of pictures may be the only way they can participate in discussions and decisions.
Improving human object recognition performance using video enhancement techniques
NASA Astrophysics Data System (ADS)
Whitman, Lucy S.; Lewis, Colin; Oakley, John P.
2004-12-01
Atmospheric scattering causes significant degradation in the quality of video images, particularly when imaging over long distances. The principal problem is the reduction in contrast due to scattered light. It is known that when the scattering particles are not too large compared with the imaging wavelength (i.e. Mie scattering) then high spatial resolution information may be contained within a low-contrast image. Unfortunately this information is not easily perceived by a human observer, particularly when using a standard video monitor. A secondary problem is the difficulty of achieving a sharp focus since automatic focus techniques tend to fail in such conditions. Recently several commercial colour video processing systems have become available. These systems use various techniques to improve image quality in low contrast conditions whilst retaining colour content. These systems produce improvements in subjective image quality in some situations, particularly in conditions of haze and light fog. There is also some evidence that video enhancement leads to improved automatic target recognition (ATR) performance when used as a pre-processing stage. Psychological literature indicates that low contrast levels generally lead to a reduction in the performance of human observers in carrying out simple visual tasks. The aim of this paper is to present the results of an empirical study on object recognition in adverse viewing conditions. The chosen visual task was vehicle number plate recognition at long ranges (500 m and beyond). Two different commercial video enhancement systems are evaluated using the same protocol. The results show an increase in effective range with some differences between the different enhancement systems.
Influence of physical parameters on radiation protection and image quality in intra-oral radiology
NASA Astrophysics Data System (ADS)
Belinato, W.; Souza, D. N.
2011-10-01
In the world of diagnostic imaging, radiography is an important supplementary method for dental diagnosis. In radiology, special attention must be paid to the radiological protection of patients and health professionals, and also to image quality for correct diagnosis. In Brazil, the national rules governing the operation of medical and dental radiology were specified in 1998 by the National Sanitary Surveillance Agency, complemented in 2005 by the guide "Medical radiology: security and performance of equipment." In this study, quality control tests were performed in public clinics with dental X-ray equipment in the State of Sergipe, Brazil, with consideration of the physical parameters that influence radiological protection and also the quality of images taken in intra-oral radiography. The accuracy of the exposure time was considered acceptable for equipment with digital timers. Exposure times and focal-spot size variations can lead to increased entrance dose. Increased dose has also been associated with visual processing of radiographic film, which often requires repeating the radiographic examination.
NASA Technical Reports Server (NTRS)
Martin, D. S.; Wang, L.; Laurie, S. S.; Lee, S. M. C.; Fleischer, A. C.; Gibson, C. R.; Stenger, M. B.
2017-01-01
We will address the Human Factors and Performance Team risk, "Risk of performance errors due to training deficiencies," by improving the JIT training materials for ultrasound and OCT imaging by providing advanced guidance in a detailed, timely, and user-friendly manner. Specifically, we will (1) develop an audio-visual tutorial using AR that guides non-experts through an abdominal trauma ultrasound protocol; (2) develop an audio-visual tutorial using AR to guide an untrained operator through the acquisition of OCT images; (3) evaluate the quality of abdominal ultrasound and OCT images acquired by untrained operators using AR guidance compared to images acquired using traditional JIT techniques (laptop-based training conducted before image acquisition); and (4) compare the time required to complete imaging studies using AR tutorials with images acquired using current JIT practices to identify areas for time efficiency improvements. Two groups of subjects will be recruited to participate in this study. Operator-subjects, without previous experience in ultrasound or OCT, will be asked to perform both procedures using either the JIT training with AR technology or the traditional JIT training via laptop. Images acquired by inexperienced operator-subjects will be scored by experts in that imaging modality for diagnostic and research quality; experts will be blinded to the form of JIT used to acquire the images. Operator-subjects also will be asked to submit feedback on the training modules used during the scans to improve future training modules. Scanned-subjects will be a small group of individuals from whom all images will be acquired.
NASA Technical Reports Server (NTRS)
Martin, David S.; Wang, Lui; Laurie, Steven S.; Lee, Stuart M. C.; Stenger, Michael B.
2017-01-01
We will address the Human Factors and Performance Team risk, "Risk of performance errors due to training deficiencies," by improving the JIT training materials for ultrasound and OCT imaging by providing advanced guidance in a detailed, timely, and user-friendly manner. Specifically, we will (1) develop an audio-visual tutorial using AR that guides non-experts through an abdominal trauma ultrasound protocol; (2) develop an audio-visual tutorial using AR to guide an untrained operator through the acquisition of OCT images; (3) evaluate the quality of abdominal ultrasound and OCT images acquired by untrained operators using AR guidance compared to images acquired using traditional JIT techniques (laptop-based training conducted before image acquisition); and (4) compare the time required to complete imaging studies using AR tutorials with images acquired using current JIT practices to identify areas for time efficiency improvements. Two groups of subjects will be recruited to participate in this study. Operator-subjects, without previous experience in ultrasound or OCT, will be asked to perform both procedures using either the JIT training with AR technology or the traditional JIT training via laptop. Images acquired by inexperienced operator-subjects will be scored by experts in that imaging modality for diagnostic and research quality; experts will be blinded to the form of JIT used to acquire the images. Operator-subjects also will be asked to submit feedback on the training modules used during the scans to improve future training modules. Scanned-subjects will be a small group of individuals from whom all images will be acquired.
Hamm, Klaus D; Surber, Gunnar; Schmücking, Michael; Wurm, Reinhard E; Aschenbach, Rene; Kleinert, Gabriele; Niesen, A; Baum, Richard P
2004-11-01
Innovative new software solutions may enable image fusion to produce the desired data superposition for precise target definition and follow-up studies in radiosurgery/stereotactic radiotherapy in patients with intracranial lesions. The aim is to integrate the anatomical and functional information completely into the radiation treatment planning and to achieve an exact comparison for follow-up examinations. Special conditions and advantages of BrainLAB's fully automatic image fusion system are evaluated and described for this purpose. In 458 patients, the radiation treatment planning and some follow-up studies were performed using an automatic image fusion technique involving the use of different imaging modalities. Each fusion was visually checked and corrected as necessary. The computerized tomography (CT) scans for radiation treatment planning (slice thickness 1.25 mm), as well as stereotactic angiography for arteriovenous malformations, were acquired using head fixation with stereotactic arc or, in the case of stereotactic radiotherapy, with a relocatable stereotactic mask. Different magnetic resonance (MR) imaging sequences (T1, T2, and fluid-attenuated inversion-recovery images) and positron emission tomography (PET) scans were obtained without head fixation. Fusion results and the effects on radiation treatment planning and follow-up studies were analyzed. The precision level of the results of the automatic fusion depended primarily on the image quality, especially the slice thickness and the field homogeneity when using MR images, as well as on patient movement during data acquisition. Fully automated image fusion of different MR, CT, and PET studies was performed for each patient. Only in a few cases was it necessary to correct the fusion manually after visual evaluation. These corrections were minor and did not materially affect treatment planning. High-quality fusion of thin slices of a region of interest with a complete head data set could be performed easily. The target volume for radiation treatment planning could be accurately delineated using multimodal information provided by CT, MR, angiography, and PET studies. The fusion of follow-up image data sets yielded results that could be successfully compared and quantitatively evaluated. Depending on the quality of the originally acquired image, automated image fusion can be a very valuable tool, allowing for fast (approximately 1-2 minutes) and precise fusion of all relevant data sets. Fused multimodality imaging improves the target volume definition for radiation treatment planning. High-quality follow-up image data sets should be acquired for image fusion to provide exactly comparable slices and volumetric results that will contribute to quality control.
Preclinical imaging characteristics and quantification of Platinum-195m SPECT.
Aalbersberg, E A; de Wit-van der Veen, B J; Zwaagstra, O; Codée-van der Schilden, K; Vegt, E; Vogel, Wouter V
2017-08-01
In vivo biodistribution imaging of platinum-based compounds may allow better patient selection for treatment with chemo(radio)therapy. Radiolabeling with Platinum-195m (195mPt) allows SPECT imaging without altering the chemical structure or biological activity of the compound. We have assessed the feasibility of 195mPt SPECT imaging in mice, with the aim to determine the image quality and accuracy of quantification for current preclinical imaging equipment. Enriched (>96%) 194Pt was irradiated in the High Flux Reactor (HFR) in Petten, The Netherlands (NRG). A 0.05 M HCl 195mPt solution with a specific activity of 33 MBq/mg was obtained. Image quality was assessed for the NanoSPECT/CT (Bioscan Inc., Washington DC, USA) and U-SPECT+/CT (MILabs BV, Utrecht, the Netherlands) scanners. A radioactivity-filled rod phantom (rod diameter 0.85-1.7 mm) filled with 1 MBq 195mPt was scanned with different acquisition durations (10-120 min). Four healthy mice were injected intravenously with 3-4 MBq 195mPt. Mouse images were acquired with the NanoSPECT for 120 min at 0, 2, 4, or 24 h after injection. Organs were delineated to quantify 195mPt concentrations. Immediately after scanning, the mice were sacrificed, and the platinum concentration was determined in organs using a gamma counter and graphite furnace atomic absorption spectroscopy (GF-AAS) as reference standards. A 30-min acquisition of the phantom provided visually adequate image quality for both scanners. The smallest visible rods were 0.95 mm in diameter on the NanoSPECT and 0.85 mm in diameter on the U-SPECT+. The image quality in mice was visually adequate. Uptake was seen in the kidneys with excretion to the bladder, and in the liver, blood, and intestine. No uptake was seen in the brain. The Spearman correlation between SPECT and gamma counter was 0.92, between SPECT and GF-AAS it was 0.84, and between GF-AAS and gamma counter it was 0.97 (all p < 0.0001). Preclinical 195mPt SPECT is feasible with acceptable tracer doses and acquisition times, and provides good image quality and accurate signal quantification.
Case studies in machine vision integration
NASA Astrophysics Data System (ADS)
Ahlers, Rolf-Juergen
1991-09-01
Many countries in the world, e.g. Germany and Japan, depend on high export rates. It is therefore necessary for them to strive for a high degree of quality in the products and processes exported. The example of Japan shows in a significant manner that a competitor should not be feared just because he can offer cheaper products. They become a "source of danger" when these products also achieve a high degree of quality. Thus, survival in the market depends on the ability to recognize the implications of technical and economic developments, to draw the perhaps unpopular conclusions for production, and to make the right decisions. This particularly applies to measurement and inspection equipment for quality control. Here, besides electro-optical sensors in general, image processing systems play an important role because they can emulate the conventional form of visual inspection by a human operator — i.e., the methods used in industry when dealing with quality inspection and control. In combination with precision indexing tables and industrial robots, image processing systems can be extended to new fields of application. The great awareness of the potential applications of vision and image processing systems has led to a variety of realized applications, some of which will be described below under three topics: • electro-optical measurement systems, • automation of visual inspection tasks, and • robot guidance.
NASA Astrophysics Data System (ADS)
Hayakawa, Tomohiko; Moko, Yushi; Morishita, Kenta; Ishikawa, Masatoshi
2018-04-01
In this paper, we propose a pixel-wise deblurring imaging (PDI) system based on active vision for compensation of the blur caused by high-speed one-dimensional motion between a camera and a target. The optical axis is controlled by back-and-forth motion of a galvanometer mirror to compensate for this motion. The high-spatial-resolution images captured by our system during high-speed motion are useful for efficient and precise visual inspection, such as visually identifying abnormal parts of a tunnel surface to prevent accidents; hence, we applied the PDI system to structural health monitoring. By mounting the system onto a vehicle in a tunnel, we confirmed significant improvement in image quality for submillimeter black-and-white stripes and real tunnel-surface cracks at a speed of 100 km/h.
High resolution iridocorneal angle imaging system by axicon lens assisted gonioscopy.
Perinchery, Sandeep Menon; Shinde, Anant; Fu, Chan Yiu; Jeesmond Hong, Xun Jie; Baskaran, Mani; Aung, Tin; Murukeshan, Vadakke Matham
2016-07-29
Direct visualization and assessment of the iridocorneal angle (ICA) region with high resolution is important for the clinical evaluation of glaucoma. However, the current clinical imaging systems for ICA do not provide sufficient structural details due to their poor resolution. The key challenges in achieving high quality ICA imaging are its location in the anterior region of the eye and the occurrence of total internal reflection due to the refractive index difference between the cornea and air. Here, we report an indirect axicon assisted gonioscopy imaging probe with white light illumination. The results obtained with this probe show significantly improved visualization of structures in the ICA, including the TM region, compared with currently available tools. The probe can reveal critical details of the ICA and is expected to aid clinical management by providing information that is complementary to angle photography and gonioscopy.
High resolution iridocorneal angle imaging system by axicon lens assisted gonioscopy
Perinchery, Sandeep Menon; Shinde, Anant; Fu, Chan Yiu; Jeesmond Hong, Xun Jie; Baskaran, Mani; Aung, Tin; Murukeshan, Vadakke Matham
2016-01-01
Direct visualization and assessment of the iridocorneal angle (ICA) region with high resolution is important for the clinical evaluation of glaucoma. However, the current clinical imaging systems for ICA do not provide sufficient structural details due to their poor resolution. The key challenges in achieving high quality ICA imaging are its location in the anterior region of the eye and the occurrence of total internal reflection due to the refractive index difference between the cornea and air. Here, we report an indirect axicon assisted gonioscopy imaging probe with white light illumination. The results obtained with this probe show significantly improved visualization of structures in the ICA, including the TM region, compared with currently available tools. The probe can reveal critical details of the ICA and is expected to aid clinical management by providing information that is complementary to angle photography and gonioscopy. PMID:27471000
High resolution iridocorneal angle imaging system by axicon lens assisted gonioscopy
NASA Astrophysics Data System (ADS)
Perinchery, Sandeep Menon; Shinde, Anant; Fu, Chan Yiu; Jeesmond Hong, Xun Jie; Baskaran, Mani; Aung, Tin; Murukeshan, Vadakke Matham
2016-07-01
Direct visualization and assessment of the iridocorneal angle (ICA) region with high resolution is important for the clinical evaluation of glaucoma. However, the current clinical imaging systems for ICA do not provide sufficient structural details due to their poor resolution. The key challenges in achieving high quality ICA imaging are its location in the anterior region of the eye and the occurrence of total internal reflection due to the refractive index difference between the cornea and air. Here, we report an indirect axicon assisted gonioscopy imaging probe with white light illumination. The results obtained with this probe show significantly improved visualization of structures in the ICA, including the TM region, compared with currently available tools. The probe can reveal critical details of the ICA and is expected to aid clinical management by providing information that is complementary to angle photography and gonioscopy.
Global motion compensated visual attention-based video watermarking
NASA Astrophysics Data System (ADS)
Oakes, Matthew; Bhowmik, Deepayan; Abhayaratne, Charith
2016-11-01
Imperceptibility and robustness are two key but complementary requirements of any watermarking algorithm. Low-strength watermarking yields high imperceptibility but exhibits poor robustness. High-strength watermarking schemes achieve good robustness but often suffer from embedding distortions resulting in poor visual quality in host media. This paper proposes a unique video watermarking algorithm that offers a fine balance between imperceptibility and robustness using motion compensated wavelet-based visual attention model (VAM). The proposed VAM includes spatial cues for visual saliency as well as temporal cues. The spatial modeling uses the spatial wavelet coefficients while the temporal modeling accounts for both local and global motion to arrive at the spatiotemporal VAM for video. The model is then used to develop a video watermarking algorithm, where a two-level watermarking weighting parameter map is generated from the VAM saliency maps using the saliency model and data are embedded into the host image according to the visual attentiveness of each region. By avoiding higher strength watermarking in the visually attentive region, the resulting watermarked video achieves high perceived visual quality while preserving high robustness. The proposed VAM outperforms the state-of-the-art video visual attention methods in joint saliency detection and low computational complexity performance. For the same embedding distortion, the proposed visual attention-based watermarking achieves up to 39% (nonblind) and 22% (blind) improvement in robustness against H.264/AVC compression, compared to existing watermarking methodology that does not use the VAM. The proposed visual attention-based video watermarking results in visual quality similar to that of low-strength watermarking and a robustness similar to those of high-strength watermarking.
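A toy version of saliency-aware embedding is sketched below: watermark bits are added to a level-1 wavelet detail band with a weaker strength where a saliency map marks the region as visually attentive. The two-level weighting, the Haar wavelet and the strength values are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np
import pywt  # PyWavelets

def embed_saliency_aware(host, watermark_bits, saliency, alpha_low=2.0, alpha_high=0.5):
    """Toy saliency-aware additive watermarking. `host` and `saliency` are 2-D arrays
    of the same size; `watermark_bits` is a 1-D sequence of 0/1 bits. Parameter names
    and values are illustrative."""
    cA, (cH, cV, cD) = pywt.dwt2(host.astype(float), 'haar')
    # Downsample the saliency map to the sub-band grid and pick a per-coefficient strength:
    # weaker embedding where the region is visually attentive.
    sal = saliency[::2, ::2][:cH.shape[0], :cH.shape[1]]
    strength = np.where(sal > np.median(sal), alpha_high, alpha_low)
    # Spread the bit sequence (as +/-1) over the horizontal detail band.
    bits = np.resize(np.where(np.asarray(watermark_bits) > 0, 1.0, -1.0), cH.shape)
    cH_marked = cH + strength * bits
    return pywt.idwt2((cA, (cH_marked, cV, cD)), 'haar')
```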
Perceptual compression of magnitude-detected synthetic aperture radar imagery
NASA Technical Reports Server (NTRS)
Gorman, John D.; Werness, Susan A.
1994-01-01
A perceptually-based approach for compressing synthetic aperture radar (SAR) imagery is presented. Key components of the approach are a multiresolution wavelet transform, a bit allocation mask based on an empirical human visual system (HVS) model, and hybrid scalar/vector quantization. Specifically, wavelet shrinkage techniques are used to segregate wavelet transform coefficients into three components: local means, edges, and texture. Each of these three components is then quantized separately according to a perceptually-based bit allocation scheme. Wavelet coefficients associated with local means and edges are quantized using high-rate scalar quantization while texture information is quantized using low-rate vector quantization. The impact of the perceptually-based multiresolution compression algorithm on visual image quality, impulse response, and texture properties is assessed for fine-resolution magnitude-detected SAR imagery; excellent image quality is found at bit rates at or above 1 bpp along with graceful performance degradation at rates below 1 bpp.
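A rough illustration of coefficient segregation and class-dependent quantization is given below using PyWavelets. The threshold that separates "edge" from "texture" coefficients and the two step sizes are placeholders; the paper's HVS-derived bit allocation and vector quantizer are not reproduced.

```python
import numpy as np
import pywt

def segregate_and_quantize(img, wavelet='db4', level=3, edge_thresh=None,
                           step_edge=0.5, step_texture=4.0):
    """Sketch of perceptually weighted quantization: approximation ("local mean")
    coefficients are left untouched, large detail coefficients ("edges") get a fine
    quantization step, small ones ("texture") a coarse step. Thresholds and step
    sizes are illustrative placeholders."""
    coeffs = pywt.wavedec2(img.astype(float), wavelet, level=level)
    out = [coeffs[0]]                                 # local means: kept as-is here
    for (cH, cV, cD) in coeffs[1:]:
        bands = []
        for c in (cH, cV, cD):
            t = edge_thresh if edge_thresh is not None else np.std(c)
            step = np.where(np.abs(c) > t, step_edge, step_texture)
            bands.append(np.round(c / step) * step)   # uniform quantization per class
        out.append(tuple(bands))
    return pywt.waverec2(out, wavelet)
```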
A Practical and Portable Solid-State Electronic Terahertz Imaging System
Smart, Ken; Du, Jia; Li, Li; Wang, David; Leslie, Keith; Ji, Fan; Li, Xiang Dong; Zeng, Da Zhang
2016-01-01
A practical compact solid-state terahertz imaging system is presented. Various beam guiding architectures were explored and hardware performance assessed to improve its compactness, robustness, multi-functionality and simplicity of operation. The system performance is evaluated and discussed in terms of image resolution, signal-to-noise ratio, and electronic signal modulation versus an optical chopper. The system can be conveniently switched between transmission and reflection mode according to the application. A range of imaging application scenarios was explored and images of high visual quality were obtained in both transmission and reflection mode. PMID:27110791
Dowse, Ros; Ramela, Thato; Barford, Kirsty-Lee; Browne, Sara
2010-09-01
The side effects of antiretroviral (ARV) therapy are linked to altered quality of life and adherence. Poor adherence has also been associated with low health-literacy skills, with an uninformed patient more likely to make ARV-related decisions that compromise the efficacy of the treatment. Low literacy skills disempower patients in interactions with healthcare providers and preclude the use of existing written patient information materials, which are generally written at a high reading level. Visual images or pictograms used as a counselling tool or included in patient information leaflets have been shown to improve patients' knowledge, particularly in low-literate groups. The objective of this study was to design visuals or pictograms illustrating various ARV side effects and to evaluate them in a low-literate South African Xhosa population. Core images were generated either from a design workshop or from posed photos or images from textbooks. The research team worked closely with a graphic artist. Initial versions of the images were discussed and assessed in group discussions, and then modified and eventually evaluated quantitatively in individual interviews with 40 participants who each had a maximum of 10 years of schooling. The familiarity of the human body, its facial expressions, postures and actions contextualised the information and contributed to the participants' understanding. Visuals that were simple, had a clear central focus and reflected familiar body experiences (e.g. vomiting) were highly successful. The introduction of abstract elements (e.g. fever) and metaphorical images (e.g. nightmares) presented problems for interpretation, particularly to those with the lowest educational levels. We recommend that such visual images should be designed in collaboration with the target population and a graphic artist, taking cognisance of the audience's literacy skills and culture, and should employ a multistage iterative process of modification and evaluation.
NASA Astrophysics Data System (ADS)
Prades, Cristina; García-Olmo, Juan; Romero-Prieto, Tomás; García de Ceca, José L.; López-Luque, Rafael
2010-06-01
The procedures used today to characterize cork plank for the manufacture of cork bottle stoppers continue to be based on a traditional, manual method that is highly subjective. Furthermore, there is no specific legislation regarding cork classification. The objective of this viability study is to assess the potential of near-infrared spectroscopy (NIRS) technology for characterizing cork plank according to the following variables: aspect or visual quality, porosity, moisture and geographical origin. In order to calculate the porosity coefficient, an image analysis program was specifically developed in Visual Basic language for a desktop scanner. A set comprising 170 samples from two geographical areas of Andalusia (Spain) was classified into eight quality classes by visual inspection. Spectra were obtained in the transverse and tangential sections of the cork planks using an NIRSystems 6500 SY II reflectance spectrophotometer. The quantitative calibrations showed cross-validation coefficients of determination of 0.47 for visual quality, 0.69 for porosity and 0.66 for moisture. The results obtained using NIRS technology are promising considering the heterogeneity and variability of a natural product such as cork in spite of the fact that the standard error of cross validation (SECV) in the quantitative analysis is greater than the standard error of laboratory (SEL) for the three variables. The qualitative analysis regarding geographical origin achieved very satisfactory results. Applying these methods in industry will permit quality control procedures to be automated, as well as establishing correlations between the different classification systems currently used in the sector. These methods can be implemented in the cork chain of custody certification and will also provide a certainly more objective tool for assessing the economic value of the product.
Image feature extraction based on the camouflage effectiveness evaluation
NASA Astrophysics Data System (ADS)
Yuan, Xin; Lv, Xuliang; Li, Ling; Wang, Xinzhu; Zhang, Zhi
2018-04-01
The key step in camouflage effectiveness evaluation is combining human visual physiological and psychological characteristics to select effective evaluation indexes. Building on previous comprehensive camouflage evaluation methods, this paper selects suitable indexes informed by image quality awareness and optimizes them against human subjective perception, thereby refining the theory of index extraction.
Assessment of gunshot bullet injuries with the use of magnetic resonance imaging.
Hess, U; Harms, J; Schneider, A; Schleef, M; Ganter, C; Hannig, C
2000-10-01
Magnetic resonance imaging (MRI) is rarely used for preoperative assessment of shotgun injuries because of concerns about displacing the possibly ferromagnetic foreign body within the surrounding tissue. A total of 56 different projectiles underwent MRI testing for ferromagnetism and imaging quality in vitro and in pig carcasses with a commercially available 1.5-T MRI scanner. Image quality was compared with that of computed tomographic scans. Projectiles with ferromagnetic properties can be distinguished easily from nonferromagnetic ones by pretesting the motion of an identical projectile within the MRI coil. When ferromagnetic projectiles were excluded, MRI yielded more precise images than the other imaging techniques. Projectile localization and associated soft tissue injuries were visualized without artifacts in all cases. When ferromagnetic foreign bodies are excluded by pretesting their properties within the MRI with a comparative projectile, MRI is an excellent imaging procedure for assessing the extent of injury and planning surgical removal.
Infrared and visible image fusion method based on saliency detection in sparse domain
NASA Astrophysics Data System (ADS)
Liu, C. H.; Qi, Y.; Ding, W. R.
2017-06-01
Infrared and visible image fusion is a key problem in the field of multi-sensor image fusion. To better preserve the significant information of the infrared and visible images in the final fused image, the saliency maps of the source images are introduced into the fusion procedure. Firstly, under the framework of the joint sparse representation (JSR) model, the global and local saliency maps of the source images are obtained based on sparse coefficients. Then, a saliency detection model is proposed, which combines the global and local saliency maps to generate an integrated saliency map. Finally, a weighted fusion algorithm based on the integrated saliency map is developed to perform the fusion. The experimental results show that our method is superior to the state-of-the-art methods in terms of several universal quality evaluation indexes as well as visual quality.
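The final weighting step described above amounts to a per-pixel convex combination driven by the saliency maps. A minimal sketch follows; it assumes the saliency maps are already computed and says nothing about the joint sparse representation stage.

```python
import numpy as np

def saliency_weighted_fusion(ir, vis, sal_ir, sal_vis, eps=1e-12):
    """Per-pixel weighted fusion of co-registered infrared and visible images,
    driven by their (precomputed) saliency maps. All inputs are 2-D arrays of
    the same shape."""
    w_ir = sal_ir / (sal_ir + sal_vis + eps)   # normalised infrared weight in [0, 1]
    return w_ir * ir + (1.0 - w_ir) * vis
```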
WELDSMART: A vision-based expert system for quality control
NASA Technical Reports Server (NTRS)
Andersen, Kristinn; Barnett, Robert Joel; Springfield, James F.; Cook, George E.
1992-01-01
This work was aimed at exploring means for utilizing computer technology in quality inspection and evaluation. Inspection of metallic welds was selected as the main application for this development and primary emphasis was placed on visual inspection, as opposed to other inspection methods, such as radiographic techniques. Emphasis was placed on methodologies with the potential for use in real-time quality control systems. Because quality evaluation is somewhat subjective, despite various efforts to classify discontinuities and standardize inspection methods, the task of using a computer for both inspection and evaluation was not trivial. The work started out with a review of the various inspection techniques that are used for quality control in welding. Among other observations from this review was the finding that most weld defects result in abnormalities that may be seen by visual inspection. This supports the approach of emphasizing visual inspection for this work. Quality control consists of two phases: (1) identification of weld discontinuities (some of which may be severe enough to be classified as defects), and (2) assessment or evaluation of the weld based on the observed discontinuities. Usually the latter phase results in a pass/fail judgement for the inspected piece. It is the conclusion of this work that the first of the above tasks, identification of discontinuities, is the most challenging one. It calls for sophisticated image processing and image analysis techniques, and frequently ad hoc methods have to be developed to identify specific features in the weld image. The difficulty of this task is generally not due to limited computing power. In most cases it was found that a modest personal computer or workstation could carry out most computations in a reasonably short time period. Rather, the algorithms and methods necessary for identifying weld discontinuities were in some cases limited. The fact that specific techniques were finally developed and successfully demonstrated to work illustrates that the general approach taken here appears to be promising for commercial development of computerized quality inspection systems. Inspection based on these techniques may be used to supplement or substitute for more elaborate inspection methods, such as x-ray inspections.
Video conference quality assessment based on cooperative sensing of video and audio
NASA Astrophysics Data System (ADS)
Wang, Junxi; Chen, Jialin; Tian, Xin; Zhou, Cheng; Zhou, Zheng; Ye, Lu
2015-12-01
This paper presents a method for video conference quality assessment based on cooperative sensing of video and audio. In this method, a proposed video quality evaluation method is used to assess video frame quality: each frame is separated into a noise image and a filtered image by bilateral filtering, which resembles the low-pass characteristic of human vision. The audio frames are evaluated with the PEAQ algorithm. The two results are integrated to evaluate the overall video conference quality. A video conference database was built to test the performance of the proposed method. The objective results correlate well with mean opinion scores (MOS), indicating that the proposed method is effective for assessing video conference quality.
NASA Astrophysics Data System (ADS)
Hagen, Charlotte K.; Maghsoudlou, Panagiotis; Totonelli, Giorgia; Diemoz, Paul C.; Endrizzi, Marco; Rigon, Luigi; Menk, Ralf-Hendrik; Arfelli, Fulvia; Dreossi, Diego; Brun, Emmanuel; Coan, Paola; Bravin, Alberto; de Coppi, Paolo; Olivo, Alessandro
2015-12-01
Acellular scaffolds obtained via decellularization are a key instrument in regenerative medicine both per se and to drive the development of future-generation synthetic scaffolds that could become available off-the-shelf. In this framework, imaging is key to the understanding of the scaffolds’ internal structure as well as their interaction with cells and other organs, including ideally post-implantation. Scaffolds of a wide range of intricate organs (esophagus, lung, liver and small intestine) were imaged with x-ray phase contrast computed tomography (PC-CT). Image quality was sufficiently high to visualize scaffold microarchitecture and to detect major anatomical features, such as the esophageal mucosal-submucosal separation, pulmonary alveoli and intestinal villi. These results are a long-sought step for the field of regenerative medicine; until now, histology and scanning electron microscopy have been the gold standard to study the scaffold structure. However, they are both destructive: hence, they are not suitable for imaging scaffolds prior to transplantation, and have no prospect for post-transplantation use. PC-CT, on the other hand, is non-destructive, 3D and fully quantitative. Importantly, not only do we demonstrate achievement of high image quality at two different synchrotron facilities, but also with commercial x-ray equipment, which makes the method available to any research laboratory.
High-resolution MRI of cranial nerves in posterior fossa at 3.0 T.
Guo, Zi-Yi; Chen, Jing; Liang, Qi-Zhou; Liao, Hai-Yan; Cheng, Qiong-Yue; Fu, Shui-Xi; Chen, Cai-Xiang; Yu, Dan
2013-02-01
To evaluate the influence of high-resolution imaging obtainable with the higher field strength of 3.0 T on the visualization of the brain nerves in the posterior fossa. In total, 20 nerves were investigated on MRI in each of 12 volunteers and compared across fast spin echo (FSE) sequences with 5 mm and 2 mm section thicknesses and gradient recalled echo (GRE) sequences, all acquired with a 3.0-T scanner. The MR images were evaluated by three independent readers who rated image quality according to depiction of anatomic detail and contrast with use of a rating scale. In general, decreasing the slice thickness produced a significant increase in the detection of nerves as well as in image quality. Comparing FSE and GRE imaging, the course of brain nerves and brainstem vessels was visualized best with use of the three-dimensional (3D) pulse sequence. The comparison revealed the clear advantage of a thin section. The increased resolution enabled immediate identification of all brainstem nerves. The GRE sequence most distinctly and confidently depicted the pertinent structures and enables 3D reconstruction to illustrate the complex relations of the brainstem. Copyright © 2013 Hainan Medical College. Published by Elsevier B.V. All rights reserved.
Mobile medical visual information retrieval.
Depeursinge, Adrien; Duc, Samuel; Eggel, Ivan; Müller, Henning
2012-01-01
In this paper, we propose mobile access to peer-reviewed medical information based on textual search and content-based visual image retrieval. Web-based interfaces designed for limited screen space were developed to query via web services a medical information retrieval engine optimizing the amount of data to be transferred in wireless form. Visual and textual retrieval engines with state-of-the-art performance were integrated. Results obtained show a good usability of the software. Future use in clinical environments has the potential of increasing quality of patient care through bedside access to the medical literature in context.
Web-based visualization of very large scientific astronomy imagery
NASA Astrophysics Data System (ADS)
Bertin, E.; Pillay, R.; Marmo, C.
2015-04-01
Visualizing and navigating through large astronomy images from a remote location with current astronomy display tools can be a frustrating experience in terms of speed and ergonomics, especially on mobile devices. In this paper, we present a high performance, versatile and robust client-server system for remote visualization and analysis of extremely large scientific images. Applications of this work include survey image quality control, interactive data query and exploration, citizen science, as well as public outreach. The proposed software is entirely open source and is designed to be generic and applicable to a variety of datasets. It provides access to floating point data at terabyte scales, with the ability to precisely adjust image settings in real-time. The proposed clients are light-weight, platform-independent web applications built on standard HTML5 web technologies and compatible with both touch and mouse-based devices. We put the system to the test, assess its performance, and show that a single server can comfortably handle more than a hundred simultaneous users accessing full precision 32-bit astronomy data.
Perceptual Contrast Enhancement with Dynamic Range Adjustment
Zhang, Hong; Li, Yuecheng; Chen, Hao; Yuan, Ding; Sun, Mingui
2013-01-01
In recent years, although great efforts have been made to improve its performance, few histogram equalization (HE) methods take human visual perception (HVP) into account explicitly. The human visual system (HVS) is more sensitive to edges than to brightness. This paper exploits this property and develops a perceptual contrast enhancement approach with dynamic range adjustment through histogram modification. The use of perceptual contrast connects the image enhancement problem with the HVS. To pre-condition the input image before the HE procedure is applied, a perceptual contrast map (PCM) is constructed based on a modified Difference of Gaussians (DOG) algorithm. As a result, the contrast of the image is sharpened and high-frequency noise is suppressed. A modified Clipped Histogram Equalization (CHE) is also developed, which improves visual quality by automatically detecting the dynamic range of the image with improved perceptual contrast. Experimental results show that the new HE algorithm outperforms several state-of-the-art algorithms in improving perceptual contrast and enhancing details. In addition, the new algorithm is simple to implement, making it suitable for real-time applications. PMID:24339452
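A compact sketch of the two ingredients, a DOG-based perceptual contrast map and a clipped histogram equalization, is shown below. The sigmas, clip fraction and 8-bit range are illustrative assumptions, and the paper's pre-conditioning of the input with the PCM before HE is not reproduced.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def perceptual_contrast_map(img, sigma1=1.0, sigma2=2.0):
    """Difference-of-Gaussians response magnitude as a simple perceptual contrast map."""
    dog = gaussian_filter(img.astype(float), sigma1) - gaussian_filter(img.astype(float), sigma2)
    return np.abs(dog)

def clipped_histogram_equalization(img, clip_frac=0.01, n_bins=256):
    """Plain clipped HE for an image with values in [0, 255]: histogram bins above
    the clip limit are truncated and the excess redistributed before building the
    intensity mapping."""
    hist, edges = np.histogram(img, bins=n_bins, range=(0, 255))
    clip = max(1, int(clip_frac * img.size))
    excess = np.maximum(hist - clip, 0).sum()
    hist = np.minimum(hist, clip) + excess // n_bins
    cdf = np.cumsum(hist).astype(float)
    cdf = 255 * (cdf - cdf[0]) / (cdf[-1] - cdf[0] + 1e-12)
    return np.interp(img.ravel(), edges[:-1], cdf).reshape(img.shape)
```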
Course for undergraduate students: analysis of the retinal image quality of a human eye model
NASA Astrophysics Data System (ADS)
del Mar Pérez, Maria; Yebra, Ana; Fernández-Oliveras, Alicia; Ghinea, Razvan; Ionescu, Ana M.; Cardona, Juan C.
2014-07-01
In the teaching of Vision Physics or Physiological Optics, knowledge and analysis of the aberrations of the human eye are of great interest, since this information allows a proper evaluation of retinal image quality. The objective of the present work is for students to acquire the competencies required to evaluate the optical quality of the human visual system for emmetropic and ametropic eyes, both with and without optical compensation. For this purpose, an optical system corresponding to the Navarro-Escudero eye model, which allows the aberrations of this eye model to be calculated and evaluated under different ametropic conditions, was developed using the OSLO LT software. The optical quality of the visual system is assessed through determination of the third- and fifth-order aberration coefficients, the spot diagram, wavefront analysis, and calculation of the Point Spread Function and the Modulation Transfer Function for ametropic individuals with myopia or hyperopia, both with and without optical compensation. This course is expected to be of great interest to students of Optics and Optometry, of the final years of Physics, or of medical sciences related to human vision.
Adaptive Optics Optical Coherence Tomography in Glaucoma
Dong, Zachary M.; Wollstein, Gadi; Wang, Bo; Schuman, Joel S.
2016-01-01
Since the introduction of commercial optical coherence tomography (OCT) systems, the ophthalmic imaging modality has rapidly expanded; it has changed the paradigm of visualization of the retina and revolutionized the management and diagnosis of neuro-retinal diseases, including glaucoma. OCT remains a dynamic and evolving imaging modality, growing from time-domain OCT to the improved spectral-domain OCT, adapting novel image analysis and processing methods, and onto the newer swept-source OCT and the implementation of adaptive optics (AO) into OCT. The incorporation of AO into ophthalmic imaging modalities has enhanced OCT by improving image resolution and quality, particularly in the posterior segment of the eye. Although OCT previously captured in-vivo cross-sectional images with unparalleled high resolution in the axial direction, monochromatic aberrations of the eye limit transverse or lateral resolution to about 15-20 μm and reduce overall image quality. In pairing AO technology with OCT, it is now possible to obtain diffraction-limited resolution images of the optic nerve head and retina in three dimensions, increasing resolution down to a theoretical 3 μm³. It is now possible to visualize discrete structures within the posterior eye, such as photoreceptors, retinal nerve fiber layer bundles, the lamina cribrosa, and other structures relevant to glaucoma. Despite its limitations and barriers to widespread commercialization, the expanding role of AO in OCT is propelling this technology into clinical trials and onto becoming an invaluable modality in the clinician's arsenal. PMID:27916682
Dai, Weiying; Soman, Salil; Hackney, David B.; Wong, Eric T.; Robson, Philip M.; Alsop, David C.
2017-01-01
Functional imaging provides hemodynamic and metabolic information and is increasingly being incorporated into clinical diagnostic and research studies. Typically functional images have reduced signal-to-noise ratio and spatial resolution compared to other non-functional cross sectional images obtained as part of a routine clinical protocol. We hypothesized that enhancing visualization and interpretation of functional images with anatomic information could provide preferable quality and superior diagnostic value. In this work, we implemented five methods (frequency addition, frequency multiplication, wavelet transform, non-subsampled contourlet transform and intensity-hue-saturation) and a newly proposed ShArpening by Local Similarity with Anatomic images (SALSA) method to enhance the visualization of functional images, while preserving the original functional contrast and quantitative signal intensity characteristics over larger spatial scales. Arterial spin labeling blood flow MR images of the brain were visualization enhanced using anatomic images with multiple contrasts. The algorithms were validated on a numerical phantom and their performance on images of brain tumor patients were assessed by quantitative metrics and neuroradiologist subjective ratings. The frequency multiplication method had the lowest residual error for preserving the original functional image contrast at larger spatial scales (55%–98% of the other methods with simulated data and 64%–86% with experimental data). It was also significantly more highly graded by the radiologists (p<0.005 for clear brain anatomy around the tumor). Compared to other methods, the SALSA provided 11%–133% higher similarity with ground truth images in the simulation and showed just slightly lower neuroradiologist grading score. Most of these monochrome methods do not require any prior knowledge about the functional and anatomic image characteristics, except the acquired resolution. Hence, automatic implementation on clinical images should be readily feasible. PMID:27723582
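For orientation, the "frequency addition" family of methods can be sketched as adding high-pass anatomic detail to the functional image while leaving its low frequencies untouched. The code below only illustrates that generic idea under assumed parameters; it is not the SALSA method proposed in the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def frequency_addition_enhance(functional, anatomic, sigma=3.0, gain=0.5):
    """Illustrative frequency-addition style enhancement: high-spatial-frequency
    detail from a co-registered anatomic image is added to the functional image,
    leaving its low-frequency (quantitative) content untouched. sigma and gain
    are illustrative assumptions."""
    anatomic = anatomic.astype(float)
    anat_high = anatomic - gaussian_filter(anatomic, sigma)   # high-pass anatomic detail
    return functional.astype(float) + gain * anat_high
```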
SU-F-J-72: A Clinical Usable Integrated Contouring Quality Evaluation Software for Radiotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, S; Dolly, S; Cai, B
Purpose: To introduce the Auto Contour Evaluation (ACE) software, a clinically usable, user-friendly, efficient, all-in-one toolbox for automatically identifying common contouring errors in radiotherapy treatment planning using supervised machine learning techniques. Methods: ACE is developed with C# using the Microsoft .Net framework and Windows Presentation Foundation (WPF) for elegant GUI design and smooth GUI transition animations through the integration of graphics engines and high dots per inch (DPI) settings on modern high resolution monitors. The industrial standard software design pattern, the Model-View-ViewModel (MVVM) pattern, is chosen as the major architecture of ACE for neat coding structure, deep modularization, easy maintainability and seamless communication with other clinical software. ACE consists of 1) a patient data importing module integrated with the clinical patient database server, 2) a module that simultaneously displays 2D DICOM images and RT structures, 3) a 3D RT structure visualization module using the Visualization Toolkit (VTK) library and 4) a contour evaluation module using supervised pattern recognition algorithms to detect contouring errors and display detection results. ACE relies on supervised learning algorithms to handle all image processing and data processing jobs. Implementations of the related algorithms are powered by the Accord.Net scientific computing library for better efficiency and effectiveness. Results: ACE can take a patient's CT images and RT structures from commercial treatment planning software via direct user input or from the patient database. All functionalities, including 2D and 3D image visualization and RT contour error detection, have been demonstrated with real clinical patient cases. Conclusion: ACE implements supervised learning algorithms and combines image processing and graphical visualization modules for RT contour verification. ACE has great potential for automated radiotherapy contouring quality verification. Structured with the MVVM pattern, it is highly maintainable and extensible, and supports smooth connections with other clinical software tools.
Generation of complementary sampled phase-only holograms.
Tsang, P W M; Chow, Y T; Poon, T-C
2016-10-03
If an image is uniformly down-sampled into a sparse form and converted into a hologram, the phase component alone will be adequate to reconstruct the image. However, the appearance of the reconstructed image is degraded with numerous empty holes. In this paper, we present a low-complexity, non-iterative solution to this problem. Briefly, two phase-only holograms are generated for an image, each based on a different down-sampling lattice. Subsequently, the holograms are displayed alternately at a high frame rate. The reconstructed images of the two holograms will appear to be a single, densely sampled image with enhanced visual quality.
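The complementary-sampling idea can be prototyped with a simple Fourier hologram, as sketched below. The checkerboard lattice, random-phase diffuser and FFT-based hologram are illustrative assumptions standing in for the paper's uniform down-sampling and hologram generation pipeline.

```python
import numpy as np

def complementary_phase_holograms(img):
    """Sketch of the complementary-sampling idea with a Fourier hologram: the image
    is down-sampled on two interleaved (complementary) lattices, each sparse image is
    converted to a phase-only hologram, and the two holograms are intended to be
    displayed alternately at a high frame rate. img: 2-D float array."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    lattice_a = ((yy + xx) % 2 == 0)                          # checkerboard lattice
    lattice_b = ~lattice_a                                    # its complement
    rng = np.random.default_rng(0)
    diffuser = np.exp(1j * 2 * np.pi * rng.random((h, w)))    # random phase diffuser
    holograms = []
    for lattice in (lattice_a, lattice_b):
        field = np.fft.fft2(img * lattice * diffuser)
        holograms.append(np.exp(1j * np.angle(field)))        # keep the phase only
    return holograms

def reconstruct(hologram):
    """Numerical reconstruction of one phase-only Fourier hologram."""
    return np.abs(np.fft.ifft2(hologram))
```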
Improved medical image fusion based on cascaded PCA and shift invariant wavelet transforms.
Reena Benjamin, J; Jayasree, T
2018-02-01
In the medical field, radiologists need more informative and high-quality medical images to diagnose diseases. Image fusion plays a vital role in the field of biomedical image analysis. It aims to integrate the complementary information from multimodal images, producing a new composite image which is expected to be more informative for visual perception than any of the individual input images. The main objective of this paper is to improve the information, to preserve the edges and to enhance the quality of the fused image using cascaded principal component analysis (PCA) and shift invariant wavelet transforms. A novel image fusion technique based on cascaded PCA and shift invariant wavelet transforms is proposed in this paper. PCA in spatial domain extracts relevant information from the large dataset based on eigenvalue decomposition, and the wavelet transform operating in the complex domain with shift invariant properties brings out more directional and phase details of the image. The significance of maximum fusion rule applied in dual-tree complex wavelet transform domain enhances the average information and morphological details. The input images of the human brain of two different modalities (MRI and CT) are collected from whole brain atlas data distributed by Harvard University. Both MRI and CT images are fused using cascaded PCA and shift invariant wavelet transform method. The proposed method is evaluated based on three main key factors, namely structure preservation, edge preservation, contrast preservation. The experimental results and comparison with other existing fusion methods show the superior performance of the proposed image fusion framework in terms of visual and quantitative evaluations. In this paper, a complex wavelet-based image fusion has been discussed. The experimental results demonstrate that the proposed method enhances the directional features as well as fine edge details. Also, it reduces the redundant details, artifacts, distortions.
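The PCA stage of such a fusion pipeline is classically implemented by taking the leading eigenvector of the 2x2 covariance of the two source images as the fusion weights. The sketch below shows only that stage, assuming two co-registered grayscale inputs; the cascaded shift-invariant wavelet stage is omitted.

```python
import numpy as np

def pca_fusion_weights(img1, img2):
    """Classic PCA fusion step: the leading eigenvector of the 2x2 covariance matrix
    of the two source images gives the normalised fusion weights."""
    data = np.stack([img1.ravel().astype(float), img2.ravel().astype(float)])
    cov = np.cov(data)
    vals, vecs = np.linalg.eigh(cov)       # eigh returns ascending eigenvalues
    v = np.abs(vecs[:, -1])                # leading eigenvector
    w = v / v.sum()
    return w[0], w[1]

def pca_fuse(img1, img2):
    """Weighted combination of the two (co-registered) source images."""
    w1, w2 = pca_fusion_weights(img1, img2)
    return w1 * img1 + w2 * img2
```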
Developing and evaluating a target-background similarity metric for camouflage detection.
Lin, Chiuhsiang Joe; Chang, Chi-Chan; Liu, Bor-Shong
2014-01-01
Measurement of camouflage performance is of fundamental importance for military stealth applications. The goal of camouflage assessment algorithms is to automatically assess the effect of camouflage in agreement with human detection responses. In a previous study, we found that the Universal Image Quality Index (UIQI) correlated well with psychophysical measures and could potentially serve as a camouflage assessment tool. In this study, we sought to relate camouflage similarity indexes to psychophysical results. We compare several image quality indexes for computational evaluation of camouflage effectiveness, and present the results of an extensive human visual experiment conducted to evaluate the performance of several camouflage assessment algorithms and analyze the strengths and weaknesses of these algorithms. The experimental data demonstrate the effectiveness of the approach, and the correlation coefficient of the UIQI was higher than those of the other methods. This approach was highly correlated with the human target-searching results. It also showed that this method is an objective and effective camouflage performance evaluation method because it considers the human visual system and image structure, which makes it consistent with the subjective evaluation results.
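For reference, the Universal Image Quality Index of Wang and Bovik, which the study correlates with human detection results, reduces to the expression below; in practice it is computed in small sliding windows (here over target and background patches) and averaged over the image. The single-window implementation is a simplification of how the study applies it.

```python
import numpy as np

def uiqi(x, y, eps=1e-12):
    """Universal Image Quality Index between two patches x and y:
    combines correlation, luminance similarity and contrast similarity."""
    x = x.astype(float).ravel()
    y = y.astype(float).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 4 * cov * mx * my / ((vx + vy) * (mx**2 + my**2) + eps)
```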
Anastasi, Giuseppe; Cutroneo, Giuseppina; Bruschetta, Daniele; Trimarchi, Fabio; Ielitro, Giuseppe; Cammaroto, Simona; Duca, Antonio; Bramanti, Placido; Favaloro, Angelo; Vaccarino, Gianluigi; Milardi, Demetrio
2009-11-01
We have applied high-quality medical imaging techniques to study the structure of the human ankle. Direct volume rendering, using specific algorithms, transforms conventional two-dimensional (2D) magnetic resonance image (MRI) series into 3D volume datasets. This tool allows high-definition visualization of single or multiple structures for diagnostic, research, and teaching purposes. No other image reformatting technique so accurately highlights each anatomic relationship and preserves soft tissue definition. Here, we used this method to study the structure of the human ankle to analyze tendon-bone-muscle relationships. We compared ankle MRI and computerized tomography (CT) images from 17 healthy volunteers, aged 18-30 years (mean 23 years). An additional subject had a partial rupture of the Achilles tendon. The MRI images demonstrated superiority in overall quality of detail compared to the CT images. The MRI series accurately rendered soft tissue and bone in simultaneous image acquisition, whereas CT required several window-reformatting algorithms, with loss of image data quality. We obtained high-quality digital images of the human ankle that were sufficiently accurate for surgical and clinical intervention planning, as well as for teaching human anatomy. Our approach demonstrates that complex anatomical structures such as the ankle, which is rich in articular facets and ligaments, can be easily studied non-invasively using MRI data.
Low-Dose CT of the Paranasal Sinuses: Minimizing X-Ray Exposure with Spectral Shaping.
Wuest, Wolfgang; May, Matthias; Saake, Marc; Brand, Michael; Uder, Michael; Lell, Michael
2016-11-01
Shaping the energy spectrum of the X-ray beam has been shown to be beneficial in low-dose CT. This study's aim was to investigate the dose and image quality of tin filtration at 100 kV for pre-operative planning in low-dose paranasal CT imaging in a large patient cohort. In a prospective trial, 129 patients were included. 64 patients were randomly assigned to the study protocol (100 kV with additional tin filtration, 150 mAs, 192 × 0.6-mm slice collimation) and 65 patients to the standard low-dose protocol (100 kV, 50 mAs, 128 × 0.6-mm slice collimation). To assess image quality, subjective parameters were evaluated using a five-point scale. This scale was applied to overall image quality and to the contour delineation of critical anatomical structures. All scans were of diagnostic image quality. Bony structures were of good diagnostic image quality in both groups; soft tissues were of only sufficient diagnostic image quality in the study group because of a higher level of noise. Radiation exposure was very low in both groups, but significantly lower in the study group (CTDIvol 1.2 mGy vs. 4.4 mGy, p < 0.001). Spectral optimization (tin filtration at 100 kV) allows for visualization of the paranasal sinus with sufficient image quality at a very low radiation exposure. • Spectral optimization (tin filtration) is beneficial to low-dose parasinus CT • Tin filtration at 100 kV yields sufficient image quality for pre-operative planning • Diagnostic parasinus CT can be performed with an effective dose <0.05 mSv.
Visual Typo Correction by Collocative Optimization: A Case Study on Merchandize Images.
Wei, Xiao-Yong; Yang, Zhen-Qun; Ngo, Chong-Wah; Zhang, Wei
2014-02-01
Near-duplicate retrieval (NDR) in merchandize images is of great importance to many online applications on e-Commerce websites. In applications where the response-time requirement is critical, however, the conventional techniques developed for general-purpose NDR are limited, because expensive post-processing such as spatial verification or hashing is usually employed to compensate for the quantization errors among the visual words used for the images. In this paper, we argue that most of the errors are introduced by the quantization process, in which the visual words are considered individually, ignoring the contextual relations among words. We propose a "spelling or phrase correction"-like process for NDR, which extends the concept of collocations to the visual domain for modeling the contextual relations. Binary quadratic programming is used to enforce the contextual consistency of the words selected for an image, so that the errors (typos) are eliminated and the quality of the quantization process is improved. The experimental results show that the proposed method can improve the efficiency of NDR by reducing the vocabulary size by 1000%, and that, under the scenario of merchandize image NDR, the expensive local-interest-point feature used in conventional approaches can be replaced by a color-moment feature, which reduces the time cost by 9202% while maintaining performance comparable to the state-of-the-art methods.
Visual communication with retinex coding.
Huck, F O; Fales, C L; Davis, R E; Alter-Gartenberg, R
2000-04-10
Visual communication with retinex coding seeks to suppress the spatial variation of the irradiance (e.g., shadows) across natural scenes and preserve only the spatial detail and the reflectance (or the lightness) of the surface itself. The separation of reflectance from irradiance begins with nonlinear retinex coding that sharply and clearly enhances edges and preserves their contrast, and it ends with a Wiener filter that restores images from this edge and contrast information. An approximate small-signal model of image gathering with retinex coding is found to consist of the familiar difference-of-Gaussian bandpass filter and a locally adaptive automatic-gain control. A linear representation of this model is used to develop expressions within the small-signal constraint for the information rate and the theoretical minimum data rate of the retinex-coded signal and for the maximum-realizable fidelity of the images restored from this signal. Extensive computations and simulations demonstrate that predictions based on these figures of merit correlate closely with perceptual and measured performance. Hence these predictions can serve as a general guide for the design of visual communication channels that produce images with a visual quality that consistently approaches the best possible sharpness, clarity, and reflectance constancy, even for nonuniform irradiances. The suppression of shadows in the restored image is found to be constrained inherently more by the sharpness of their penumbra than by their depth.
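The small-signal model described above (a difference-of-Gaussian bandpass filter followed by a locally adaptive automatic gain control) can be sketched in a few lines; the filter scales and the particular gain-control form below are illustrative assumptions, not the authors' calibrated parameters.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_bandpass(image, sigma_center=1.0, sigma_surround=4.0):
    """Difference-of-Gaussian bandpass response (hypothetical scales)."""
    return gaussian_filter(image, sigma_center) - gaussian_filter(image, sigma_surround)

def retinex_small_signal(image, sigma_center=1.0, sigma_surround=4.0, eps=1e-3):
    """Bandpass response divided by a local estimate of its own magnitude,
    a simple form of locally adaptive automatic gain control."""
    band = dog_bandpass(image, sigma_center, sigma_surround)
    gain = gaussian_filter(np.abs(band), sigma_surround) + eps
    return band / gain
```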
Aziz, A; Dar, P; Hughes, F; Solorzano, C; Muller, M M; Salmon, C; Salmon, M; Benfield, N
2018-01-12
To evaluate the quality of ultrasound images obtained with cassava flour slurry (CFS) compared with conventional gel in order to determine objectively whether CFS could be a true low-cost alternative. Blinded non-inferiority trial. Obstetrical ultrasound unit in an academic medical centre. Women with a singleton pregnancy, undergoing anatomy ultrasounds. Thirty pregnant women had standard biometry measures obtained with CFS and conventional gel. Images were compared side-by-side in random order by two blinded sonologists and rated for image resolution, detail and total image quality using a 10-cm visual analogue scale. Ratings were compared using paired t-tests. Participant and sonographer experience was measured using five-point Likert scales. Image resolution, detail, and total image quality. Participant experience of gel regarding irritation, messiness, and ease of removal. We found no significant difference between perceived image quality obtained with CFS (mean = 6.2, SD = 1.2) and commercial gel (mean = 6.4, SD = 1.2) [t (28) = -1.1; P = 0.3]. Images were not rated significantly differently for either reviewer in any measure, any standardized image or any view of a specific anatomic structure. All five sonographers rated CFS as easy to obtain clear images and easy for patient and machine cleanup. Only one participant reported itching with CFS. CFS produces comparable image quality to commercial ultrasound gel. The dissemination of these results and the simple CFS recipe could significantly increase access to ultrasound for screening, monitoring and diagnostic purposes in resource-limited settings. This study was internally funded by our department. Low-cost homemade cassava flour slurry creates images equal to commercial ultrasound gel, improving access. © 2018 Royal College of Obstetricians and Gynaecologists.
NASA Astrophysics Data System (ADS)
Sampat, Nitin; Grim, John F.; O'Hara, James E.
1998-04-01
The digital camera market is growing at an explosive rate. At the same time, the quality of photographs printed on ink-jet printers continues to improve. Most consumer cameras are designed with the monitor, not the printer, as the target output device. When users print images from a camera, they need to optimize the camera and printer combination in order to maximize image quality. We describe the details of one such method for improving image quality using an AGFA digital camera and ink-jet printer combination. Using Adobe PhotoShop, we generated optimum red, green and blue transfer curves that match the scene content to the printer's output capabilities. Application of these curves to the original digital image resulted in a print with more shadow detail, no loss of highlight detail, a smoother tone scale, and more saturated colors. The corrected prints also exhibited an improved tonal scale and were visually more pleasing than those captured and printed without any 'correction'. While we report the results for one camera-printer combination, we tested this technique on numerous digital camera and printer combinations and in each case produced a better-looking image. We also discuss the problems we encountered in implementing this technique.
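The correction described above amounts to applying per-channel tone (transfer) curves to the image before printing. A hedged sketch, assuming the curves are available as control points exported from an editing tool such as Photoshop (the control-point values below are hypothetical):

```python
import numpy as np

def apply_transfer_curves(rgb, curves):
    """Apply per-channel transfer curves given as (input, output) control
    points in 0-255 code values, interpolated linearly between points."""
    out = np.empty(rgb.shape, dtype=float)
    for c, (xs, ys) in enumerate(curves):
        out[..., c] = np.interp(rgb[..., c], xs, ys)
    return np.clip(out, 0, 255).astype(np.uint8)

# Hypothetical curve that lifts shadows slightly and protects highlights,
# applied identically to all three channels:
curve = ([0, 64, 128, 192, 255], [0, 80, 140, 200, 255])
curves = [curve, curve, curve]
```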
Cost-effective handling of digital medical images in the telemedicine environment.
Choong, Miew Keen; Logeswaran, Rajasvaran; Bister, Michel
2007-09-01
This paper concentrates on strategies for less costly handling of medical images. Aspects of digitization using conventional digital cameras, lossy compression with good diagnostic quality, and visualization through less costly monitors are discussed. For the digitization of film-based media, a subjective evaluation of the suitability of digital cameras as an alternative to the digitizer was undertaken. To save on storage, bandwidth and transmission time, the acceptable degree of compression with diagnostically no loss of important data was studied through randomized double-blind tests of subjective image quality when compression noise was kept lower than the inherent noise. A diagnostic experiment was undertaken to evaluate normal low-cost computer monitors as viable viewing displays for clinicians. The results show that images of X-ray films captured with a conventional digital camera were diagnostically similar to those from the expensive digitizer. Lossy compression, when used moderately with the imaging-noise-to-compression-noise ratio (ICR) greater than four, can bring about image improvement with better diagnostic quality than the original image. Statistical analysis shows that there is no diagnostic difference between expensive high-quality monitors and conventional computer monitors. The results presented show good potential for implementing the proposed strategies to promote widespread cost-effective telemedicine and digital medical environments. © 2006 Elsevier Ireland Ltd.
Exline, David L; Wallace, Christie; Roux, Claude; Lennard, Chris; Nelson, Matthew P; Treado, Patrick J
2003-09-01
Chemical imaging technology is a rapid examination technique that combines molecular spectroscopy and digital imaging, providing information on morphology, composition, structure, and concentration of a material. Among many other applications, chemical imaging offers an array of novel analytical testing methods, which limits sample preparation and provides high-quality imaging data essential in the detection of latent fingerprints. Luminescence chemical imaging and visible absorbance chemical imaging have been successfully applied to ninhydrin, DFO, cyanoacrylate, and luminescent dye-treated latent fingerprints, demonstrating the potential of this technology to aid forensic investigations. In addition, visible absorption chemical imaging has been applied successfully to visualize untreated latent fingerprints.
Motion-blur-compensated structural health monitoring system for tunnels at a speed of 100 km/h
NASA Astrophysics Data System (ADS)
Hayakawa, Tomohiko; Ishikawa, Masatoshi
2017-04-01
High-quality images of tunnel surfaces are necessary for visual judgment of abnormal parts. Hence, we propose a vehicle-mounted monitoring system in which motion blur is compensated by the back-and-forth motion of a galvanometer mirror to offset the vehicle speed, prolong the exposure time, and capture sharp images including detailed textures. In experiments with the vehicle-mounted system, we confirmed significant improvements in image quality for few-millimetre-sized ordered black-and-white stripes and cracks, by means of motion-blur compensation and prolonged exposure time, under the maximum speed allowed in Japan in a standard highway tunnel.
[The 18F-FDG myocardial metabolic imaging in twenty seven pilots with regular aerobic training].
Fang, Ting-Zheng; Zhu, Jia-Rui; Chuan, Ling; Zhao, Wen-Rui; Xu, Gen-Xiang; Yang, Min-Fu; He, Zuo-Xiang
2009-02-01
To evaluate the characteristics of myocardial (18)F-FDG imaging in pilots with regular aerobic exercise training. Twenty-seven healthy male pilots with regular aerobic exercise training were included in this study. The subjects were divided into a fasting (n = 17) or non-fasting group (n = 10). Fluorine-18-labeled deoxyglucose and Tc-99m-sestamibi dual-nuclide myocardial imaging were obtained at rest and at the target heart rate during a bicycle ergometer test. The exercise and rest myocardial perfusion images were analyzed for the presence of myocardial ischemia. The myocardial metabolism images were analyzed visually and semi-quantitatively using a 17-segment model. The submaximal heart rate (195 - age) was achieved in all subjects. There was no myocardial ischemia on any perfusion image. In the visual qualitative analyses, four myocardial metabolism images failed in the fasting group while one failed in the non-fasting group (P > 0.05). In the visual semi-quantitative analyses, myocardial metabolism imaging scores at rest or during exercise were similar between the two groups in all segments (P > 0.05). In the fasting group, the myocardial metabolism imaging scores during exercise were significantly higher than those at rest in 6 segments (P < 0.05). In the non-fasting group, the scores of 3 exercise myocardial metabolism images were significantly higher than those at rest (P < 0.05). Satisfactory, high-quality myocardial metabolism imaging could be obtained under fasting and exercise conditions in subjects with regular aerobic exercise.
Reljin, Branimir; Milosević, Zorica; Stojić, Tomislav; Reljin, Irini
2009-01-01
Two methods for the segmentation and visualization of microcalcifications in digital or digitized mammograms are described. The first method is based on modern mathematical morphology, while the second uses a multifractal approach. In the first method, an appropriate combination of morphological operations yields high local contrast enhancement, followed by significant suppression of background tissue, irrespective of its radiological density. Through an iterative procedure, this method strongly emphasizes only small bright details, the possible microcalcifications. In the multifractal approach, corresponding multifractal "images" are created from the initial mammogram, from which a radiologist has the freedom to change the level of segmentation. An appropriate user-friendly computer-aided visualization (CAV) system embedding the two methods has been realized. The interactive approach enables the physician to control the level and the quality of segmentation. The suggested methods were tested on mammograms from the MIAS database as a gold standard, and on images from clinical practice, using digitized films and digital images from a full-field digital mammography system.
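A white top-hat transform is one standard morphological way to emphasize only small bright details while suppressing the slowly varying background tissue; the sketch below is a generic stand-in for the authors' specific operator chain, with an illustrative structuring-element size.

```python
import numpy as np
from scipy.ndimage import white_tophat

def enhance_microcalcifications(mammogram, size=9):
    """White top-hat: keeps bright structures smaller than the structuring
    element and removes the background, then normalises for display."""
    detail = white_tophat(mammogram.astype(float), size=size)
    return detail / (detail.max() + 1e-12)
```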
Katsura, Masaki; Sato, Jiro; Akahane, Masaaki; Mise, Yoko; Sumida, Kaoru; Abe, Osamu
2017-08-01
To compare image quality characteristics of high-resolution computed tomography (HRCT) in the evaluation of interstitial lung disease using three different reconstruction methods: model-based iterative reconstruction (MBIR), adaptive statistical iterative reconstruction (ASIR), and filtered back projection (FBP). Eighty-nine consecutive patients with interstitial lung disease underwent standard-of-care chest CT with 64-row multi-detector CT. HRCT images were reconstructed in 0.625-mm contiguous axial slices using FBP, ASIR, and MBIR. Two radiologists independently assessed the images in a blinded manner for subjective image noise, streak artifacts, and visualization of normal and pathologic structures. Objective image noise was measured in the lung parenchyma. Spatial resolution was assessed by measuring the modulation transfer function (MTF). MBIR offered significantly lower objective image noise (22.24±4.53, P<0.01 among all pairs, Student's t-test) compared with ASIR (39.76±7.41) and FBP (51.91±9.71). MTF (spatial resolution) was increased using MBIR compared with ASIR and FBP. MBIR showed improvements in visualization of normal and pathologic structures over ASIR and FBP, while ASIR was rated quite similarly to FBP. MBIR significantly improved subjective image noise (P<0.01 among all pairs, the sign test), and streak artifacts (P<0.01 each for MBIR vs. the other 2 image data sets). MBIR provides high-quality HRCT images for interstitial lung disease by reducing image noise and streak artifacts and improving spatial resolution compared with ASIR and FBP. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Radun, Jenni E.; Virtanen, Toni; Olives, Jean-Luc; Vaahteranoksa, Mikko; Vuori, Tero; Nyman, Göte
2007-01-01
We present an effective method for comparing subjective audiovisual quality and the features related to the quality changes of different video cameras. Both quantitative estimation of overall quality and qualitative description of critical quality features are achieved by the method. The aim was to combine two image quality evaluation methods, the quantitative Absolute Category Rating (ACR) method with hidden reference removal and the qualitative Interpretation-Based Quality (IBQ) method, in order to see how they complement each other in audiovisual quality estimation tasks. 26 observers estimated the audiovisual quality of six different cameras, mainly mobile phone video cameras. In order to achieve an efficient subjective estimation of audiovisual quality, only two contents with different quality requirements were recorded with each camera. The results show that the subjectively important quality features were more related to the overall estimations of cameras' visual video quality than to the features related to sound. The data demonstrated two significant quality dimensions related to visual quality: darkness and sharpness. We conclude that the qualitative methodology can complement quantitative quality estimations also with audiovisual material. The IBQ approach is valuable especially when the induced quality changes are multidimensional.
McManus, I C; Stöver, Katharina; Kim, Do
2011-01-01
In Art and Visual Perception, Rudolf Arnheim, following on from Denman Ross's A Theory of Pure Design, proposed a Gestalt theory of visual composition. The current paper assesses a physicalist interpretation of Arnheim's theory, calculating an image's centre of mass (CoM). Three types of data are used: a large, representative collection of art photographs of recognised quality; croppings by experts and non-experts of photographs; and Ross and Arnheim's procedure of placing a frame around objects such as Arnheim's two black disks. Compared with control images, the CoM of art photographs was closer to an axis (horizontal, vertical, or diagonal), as was the case for photographic croppings. However, stronger, within-image, paired comparison studies, comparing art photographs with the CoM moved on or off an axis (the 'gamma-ramp study'), or comparing adjacent croppings on or off an axis (the 'spider-web study'), showed no support for the Arnheim-Ross theory. Finally, studies moving a frame around two disks, of different size, greyness, or background, did not support Arnheim's Gestalt theory. Although the detailed results did not support the Arnheim-Ross theory, several significant results were found which clearly require explanation by any adequate theory of the aesthetics of visual composition.
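Under the physicalist reading tested in the paper, an image's centre of mass and its distance to the nearest compositional axis can be computed directly. The sketch below weights by luminance, which is one possible choice (weighting by darkness, i.e. inverted intensity, is an equally plausible reading of "visual weight"); it is an illustration, not the authors' exact measure.

```python
import numpy as np

def centre_of_mass(gray):
    """Intensity-weighted centre of mass in normalised [0, 1] image coordinates."""
    g = gray.astype(float)
    yy, xx = np.mgrid[0:g.shape[0], 0:g.shape[1]]
    total = g.sum()
    cy = (yy * g).sum() / total / (g.shape[0] - 1)
    cx = (xx * g).sum() / total / (g.shape[1] - 1)
    return cx, cy

def distance_to_nearest_axis(cx, cy):
    """Distance of the CoM to the closest of the horizontal, vertical and
    two diagonal axes of the frame (all passing through the centre)."""
    dx, dy = cx - 0.5, cy - 0.5
    return min(abs(dx), abs(dy),
               abs(dx - dy) / np.sqrt(2), abs(dx + dy) / np.sqrt(2))
```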
Foveated model observers to predict human performance in 3D images
NASA Astrophysics Data System (ADS)
Lago, Miguel A.; Abbey, Craig K.; Eckstein, Miguel P.
2017-03-01
We evaluate whether 3D search requires model observers that take into account peripheral human visual processing (foveated models) to predict human observer performance. We show that two different 3D tasks, free search and location-known detection, influence the relative human visual detectability of two signals of different sizes in synthetic backgrounds mimicking the noise found in 3D digital breast tomosynthesis. One of the signals resembled a microcalcification (a small, bright sphere), while the other was designed to look like a mass (a larger Gaussian blob). We evaluated current standard model observers (Hotelling; channelized Hotelling; non-prewhitening matched filter with eye filter, NPWE; and non-prewhitening matched filter, NPW) and showed that they incorrectly predict the relative detectability of the two signals in 3D search. We propose a new model observer (3D Foveated Channelized Hotelling Observer) that incorporates the properties of the visual system over a large visual field (fovea and periphery). We show that the foveated model observer can accurately predict the rank order of detectability of the signals in 3D images for each task. Together, these results motivate the use of a new generation of foveated model observers for predicting image quality for search tasks in 3D imaging modalities such as digital breast tomosynthesis or computed tomography.
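For context, the non-prewhitening observers named above are defined by a simple template correlation; the sketch below gives the generic location-known-exactly form (the study's 3D, search-based implementations are more involved), with the eye filter supplied as a Fourier-domain array laid out to match np.fft.fft2 output.

```python
import numpy as np

def npw_statistic(image, signal):
    """Non-prewhitening matched-filter test statistic: correlation of the
    known signal template with the image at the known location."""
    return float(np.sum(image * signal))

def npwe_statistic(image, signal, eye_filter):
    """NPWE variant: image and template are both passed through an eye
    filter (2D Fourier-domain array) before correlation."""
    def filt(a):
        return np.real(np.fft.ifft2(np.fft.fft2(a) * eye_filter))
    return float(np.sum(filt(image) * filt(signal)))
```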
Negative emotion boosts quality of visual working memory representation.
Xie, Weizhen; Zhang, Weiwei
2016-08-01
Negative emotion impacts a variety of cognitive processes, including working memory (WM). The present study investigated whether negative emotion modulated WM capacity (quantity) or resolution (quality), 2 independent limits on WM storage. In Experiment 1, observers tried to remember several colors over a 1-s delay and then recalled the color of a randomly picked memory item by clicking a best-matching color on a continuous color wheel. On each trial, before the visual WM task, 1 of 3 emotion conditions (negative, neutral, or positive) was induced by having observers rate the valence of an International Affective Picture System image. Visual WM under negative emotion showed enhanced resolution compared with the neutral and positive conditions, whereas the number of retained representations was comparable across the 3 emotion conditions. These effects were generalized to closed-contour shapes in Experiment 2. To isolate the locus of these effects, Experiment 3 adopted an iconic memory version of the color recall task by eliminating the 1-s retention interval. No significant change in the quantity or quality of iconic memory was observed, suggesting that the resolution effects in the first 2 experiments were critically dependent on the need to retain memory representations over a short period of time. Taken together, these results suggest that negative emotion selectively boosts visual WM quality, supporting the dissociable nature of the quantitative and qualitative aspects of visual WM representation. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Scientific Visualization Made Easy for the Scientist
NASA Astrophysics Data System (ADS)
Westerhoff, M.; Henderson, B.
2002-12-01
amira® is an application program used for creating 3D visualizations and geometric models of 3D image data sets from various application areas, e.g. medicine, biology, biochemistry, chemistry, physics, and engineering. It has demonstrated significant adoption in the marketplace since becoming commercially available in 2000. The rapid adoption has expanded the features being requested by the user base and broadened the scope of the amira product offering. The amira product offering includes amira Standard; amiraDev™, used to extend the product capabilities by users; amiraMol™, used for molecular visualization; amiraDeconv™, used to improve the quality of image data; and amiraVR™, used in immersive VR environments. amira allows the user to construct a visualization tailored to his or her needs without requiring any programming knowledge. It also allows 3D objects to be represented as grids suitable for numerical simulations, notably as triangular surfaces and volumetric tetrahedral grids. The amira application also provides methods to generate such grids from voxel data representing an image volume, and it includes a general-purpose interactive 3D viewer. amiraDev provides an application-programming interface (API) that allows the user to add new components by C++ programming. amira supports many import formats, including a 'raw' format allowing immediate access to native uniform data sets. amira uses the power and speed of the OpenGL® and Open Inventor™ graphics libraries and 3D graphics accelerators to give access to over 145 modules, enabling you to process, probe, analyze and visualize your data. The amiraMol extension adds powerful tools for molecular visualization to the existing amira platform. amiraMol contains support for standard molecular file formats, and tools for visualization and analysis of static molecules as well as molecular trajectories (time series). amiraDeconv adds tools for the deconvolution of 3D microscopic images. Deconvolution is the process of increasing image quality and resolution by computationally compensating for artifacts of the recording process. amiraDeconv supports 3D wide-field microscopy as well as 3D confocal microscopy. It offers both non-blind and blind image deconvolution algorithms. Non-blind deconvolution uses an individually measured point spread function, while blind algorithms work on the basis of only a few recording parameters (such as numerical aperture or zoom factor). amiraVR is a specialized and extended version of the amira visualization system which is dedicated to use in immersive installations, such as large-screen stereoscopic projections, CAVE® or Holobench® systems. Among other features, it supports multi-threaded multi-pipe rendering, head-tracking, advanced 3D interaction concepts, and 3D menus allowing interaction with any amira object in the same way as on the desktop. With its unique set of features, amiraVR represents both a VR (Virtual Reality) ready application for scientific and medical visualization in immersive environments, and a development platform that allows building VR applications.
Objective measurement of the optical image quality in the human eye
NASA Astrophysics Data System (ADS)
Navarro, Rafael M.
2001-05-01
This communication reviews recent studies on the optical performance of the human eye. Although the retinal image cannot be recorded directly, different objective methods have been developed that permit the determination of optical quality parameters, such as the Point Spread Function (PSF), the Modulation Transfer Function (MTF), the geometrical ray aberrations or the wavefront distortions, in the living human eye. These methods have been applied in both basic and applied research, including the measurement of the optical performance of the eye across the visual field, the optical quality of eyes with intraocular lens implants, the aberrations induced by LASIK refractive surgery, and the manufacture of customized phase plates to compensate the wavefront aberration of the eye.
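The PSF and MTF mentioned above are linked by a Fourier transform; a minimal sketch of obtaining the MTF from a sampled PSF:

```python
import numpy as np

def mtf_from_psf(psf):
    """Modulation transfer function as the normalised magnitude of the
    Fourier transform of the (energy-normalised) point spread function."""
    otf = np.fft.fftshift(np.fft.fft2(psf / psf.sum()))
    return np.abs(otf) / np.abs(otf).max()
```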
Total generalized variation-regularized variational model for single image dehazing
NASA Astrophysics Data System (ADS)
Shu, Qiao-Ling; Wu, Chuan-Sheng; Zhong, Qiu-Xiang; Liu, Ryan Wen
2018-04-01
Imaging quality is often significantly degraded under hazy weather conditions. The purpose of this paper is to recover the latent sharp image from its hazy version. It is well known that accurate estimation of depth information can assist in improving dehazing performance. In this paper, a detail-preserving variational model was proposed to simultaneously estimate the haze-free image and the depth map. In particular, the total variation (TV) and total generalized variation (TGV) regularizers were introduced to constrain the haze-free image and the depth map, respectively. The resulting nonsmooth optimization problem was efficiently solved using the alternating direction method of multipliers (ADMM). Comprehensive experiments have been conducted on realistic datasets to compare the proposed method with several state-of-the-art dehazing methods. The results illustrate the superior performance of the proposed method in terms of visual quality evaluation.
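The abstract does not spell out the full functional; as a hedged sketch, the standard atmospheric-scattering data term used throughout the dehazing literature and the two regularizers named above can be written as follows (the paper's exact coupling of the terms and its weights may differ):

```latex
% Hazy image formation (standard atmospheric-scattering model):
I(x) = J(x)\,t(x) + A\bigl(1 - t(x)\bigr)

% Total variation of the haze-free image J:
\mathrm{TV}(J) = \int_\Omega \lvert \nabla J \rvert \,\mathrm{d}x

% Second-order total generalized variation of the transmission/depth map t:
\mathrm{TGV}_\alpha^2(t) = \min_{w}\;
  \alpha_1 \int_\Omega \lvert \nabla t - w \rvert \,\mathrm{d}x
  + \alpha_0 \int_\Omega \lvert \mathcal{E}(w) \rvert \,\mathrm{d}x,
\qquad \mathcal{E}(w) = \tfrac{1}{2}\bigl(\nabla w + \nabla w^{\top}\bigr)
```

Here I is the hazy observation, J the latent sharp image, t the transmission (a monotone function of scene depth) and A the global airlight.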
Multi-view 3D echocardiography compounding based on feature consistency
NASA Astrophysics Data System (ADS)
Yao, Cheng; Simpson, John M.; Schaeffter, Tobias; Penney, Graeme P.
2011-09-01
Echocardiography (echo) is a widely available method to obtain images of the heart; however, echo can suffer due to the presence of artefacts, high noise and a restricted field of view. One method to overcome these limitations is to use multiple images, using the 'best' parts from each image to produce a higher quality 'compounded' image. This paper describes our compounding algorithm which specifically aims to reduce the effect of echo artefacts as well as improving the signal-to-noise ratio, contrast and extending the field of view. Our method weights image information based on a local feature coherence/consistency between all the overlapping images. Validation has been carried out using phantom, volunteer and patient datasets consisting of up to ten multi-view 3D images. Multiple sets of phantom images were acquired, some directly from the phantom surface, and others by imaging through hard and soft tissue mimicking material to degrade the image quality. Our compounding method is compared to the original, uncompounded echocardiography images, and to two basic statistical compounding methods (mean and maximum). Results show that our method is able to take a set of ten images, degraded by soft and hard tissue artefacts, and produce a compounded image of equivalent quality to images acquired directly from the phantom. Our method on phantom, volunteer and patient data achieves almost the same signal-to-noise improvement as the mean method, while simultaneously almost achieving the same contrast improvement as the maximum method. We show a statistically significant improvement in image quality by using an increased number of images (ten compared to five), and visual inspection studies by three clinicians showed very strong preference for our compounded volumes in terms of overall high image quality, large field of view, high endocardial border definition and low cavity noise.
NASA Astrophysics Data System (ADS)
Umehara, Kensuke; Ota, Junko; Ishimaru, Naoki; Ohno, Shunsuke; Okamoto, Kentaro; Suzuki, Takanori; Shirai, Naoki; Ishida, Takayuki
2017-02-01
Single image super-resolution (SR) methods can generate a high-resolution (HR) image from a low-resolution (LR) image by enhancing image resolution. In medical imaging, HR images are expected to provide a more accurate diagnosis with the practical application of HR displays. In recent years, the super-resolution convolutional neural network (SRCNN), one of the state-of-the-art deep-learning-based SR methods, has been proposed in computer vision. In this study, we applied and evaluated the SRCNN scheme to improve the image quality of magnified images in chest radiographs. For evaluation, a total of 247 chest X-rays were sampled from the JSRT database and divided into 93 training cases without nodules and 152 test cases with lung nodules. The SRCNN was trained using the training dataset. With the trained SRCNN, HR images were reconstructed from the LR ones. We compared the image quality of the SRCNN and conventional image interpolation methods: nearest-neighbor, bilinear and bicubic interpolation. For quantitative evaluation, we measured two image quality metrics, peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). In the SRCNN scheme, PSNR and SSIM were significantly higher than those of the three interpolation methods (p<0.001). Visual assessment confirmed that the SRCNN produced much sharper edges than the conventional interpolation methods without any obvious artifacts. These preliminary results indicate that the SRCNN scheme significantly outperforms conventional interpolation algorithms for enhancing image resolution, and that the use of the SRCNN can yield substantial improvement in the image quality of magnified images in chest radiographs.
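The SRCNN referred to above is, in its original form, a three-layer convolutional network (9-1-5 kernels with 64 and 32 filters) applied to a bicubically upscaled input. The study's exact training configuration is not given in the abstract, so the Keras sketch below is only architectural.

```python
from tensorflow.keras import layers, models

def build_srcnn():
    """Three-layer SRCNN in the original 9-1-5 configuration; input is a
    bicubically upscaled single-channel low-resolution image."""
    inp = layers.Input(shape=(None, None, 1))
    x = layers.Conv2D(64, 9, padding="same", activation="relu")(inp)  # patch extraction
    x = layers.Conv2D(32, 1, padding="same", activation="relu")(x)    # non-linear mapping
    out = layers.Conv2D(1, 5, padding="same")(x)                      # reconstruction
    return models.Model(inp, out)

model = build_srcnn()
model.compile(optimizer="adam", loss="mse")   # MSE objective, i.e. PSNR-oriented
```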
Retinex based low-light image enhancement using guided filtering and variational framework
NASA Astrophysics Data System (ADS)
Zhang, Shi; Tang, Gui-jin; Liu, Xiao-hua; Luo, Su-huai; Wang, Da-dong
2018-03-01
A new image enhancement algorithm based on Retinex theory is proposed to address the poor visual quality of images captured in low-light conditions. First, the image is converted from the RGB color space to the HSV color space to obtain the V channel. Next, the illumination is estimated on the V channel by guided filtering and by a variational framework, and the two estimates are combined into a new illumination according to the average gradient. The new reflectance is calculated using the V channel and the new illumination. A new V channel, obtained by multiplying the new illumination and reflectance, is then processed with contrast-limited adaptive histogram equalization (CLAHE). Finally, the new image in HSV space is converted back to RGB space to obtain the enhanced image. Experimental results show that the proposed method has better subjective and objective quality than existing methods.
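A simplified single-illumination variant of this pipeline can be written with OpenCV (cv2.ximgproc requires the opencv-contrib package); the gamma step below stands in for the paper's fusion of guided-filter and variational illumination estimates, and all parameters are illustrative.

```python
import cv2
import numpy as np

def enhance_low_light(bgr, radius=15, eps=1e-2, gamma=0.6, clip=2.0):
    """Guided-filter illumination on V, Retinex reflectance, gamma-brightened
    recombination, then CLAHE -- a simplified sketch of the described pipeline."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    v = hsv[..., 2].astype(np.float32) / 255.0
    illum = cv2.ximgproc.guidedFilter(v, v, radius, eps)        # smooth illumination
    illum = np.clip(illum, 1e-3, 1.0)
    reflectance = np.clip(v / illum, 0.0, 1.0)                  # Retinex decomposition
    new_v = np.clip((illum ** gamma) * reflectance, 0.0, 1.0)   # brightened recombination
    clahe = cv2.createCLAHE(clipLimit=clip, tileGridSize=(8, 8))
    hsv[..., 2] = clahe.apply((new_v * 255).astype(np.uint8))
    return cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)
```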
Paolicchi, Fabio; Faggioni, Lorenzo; Bastiani, Luca; Molinaro, Sabrina; Puglioli, Michele; Caramella, Davide; Bartolozzi, Carlo
2014-06-01
The purpose of this study was to assess the radiation dose and image quality of pediatric head CT examinations before and after radiologic staff training. Outpatients 1 month to 14 years old underwent 215 unenhanced head CT examinations before and after intensive training of staff radiologists and technologists in optimization of CT technique. Patients were divided into three age groups (0-4, 5-9, and 10-14 years), and CT dose index, dose-length product, tube voltage, and tube current-rotation time product values before and after training were retrieved from the hospital PACS. Gray matter conspicuity and contrast-to-noise ratio before and after training were calculated, and subjective image quality in terms of artifacts, gray-white matter differentiation, noise, visualization of posterior fossa structures, and need for repeat CT examination was visually evaluated by three neuroradiologists. The median CT dose index and dose-length product values were significantly lower after than before training in all age groups (27 mGy and 338 mGy ∙ cm vs 107 mGy and 1444 mGy ∙ cm in the 0- to 4-year-old group, 41 mGy and 483 mGy ∙ cm vs 68 mGy and 976 mGy ∙ cm in the 5- to 9-year-old group, and 51 mGy and 679 mGy ∙ cm vs 107 mGy and 1480 mGy ∙ cm in the 10- to 14-year-old group; p < 0.001). The tube voltage and tube current-time values after training were significantly lower than the levels before training (p < 0.001). Subjective posttraining image quality was not inferior to pretraining levels for any item except noise (p < 0.05), which, however, was never diagnostically unacceptable. Radiologic staff training can be effective in reducing radiation dose while preserving diagnostic image quality in pediatric head CT examinations.
NASA Astrophysics Data System (ADS)
Melli, S. Ali; Wahid, Khan A.; Babyn, Paul; Cooper, David M. L.; Gopi, Varun P.
2016-12-01
Synchrotron X-ray Micro Computed Tomography (Micro-CT) is an imaging technique which is increasingly used for non-invasive in vivo preclinical imaging. However, it often requires a large number of projections from many different angles to reconstruct high-quality images, leading to high radiation doses and long scan times. To utilize this imaging technique further for in vivo imaging, reconstruction algorithms are needed that reduce the radiation dose and scan time without reducing reconstructed image quality. This research is focused on using a combination of gradient-based Douglas-Rachford splitting and discrete wavelet packet shrinkage image denoising methods to design an algorithm for the reconstruction of large-scale reduced-view synchrotron Micro-CT images with acceptable quality metrics. These quality metrics are computed by comparing the reconstructed images with a high-dose reference image reconstructed from 1800 equally spaced projections spanning 180°. Visual and quantitative performance assessment of a synthetic head phantom and a femoral cortical bone sample, imaged in the biomedical imaging and therapy bending magnet beamline at the Canadian Light Source, demonstrates that the proposed algorithm is superior to the existing reconstruction algorithms. Using the proposed reconstruction algorithm to reduce the number of projections in synchrotron Micro-CT is an effective way to reduce the overall radiation dose and scan time, which improves in vivo imaging protocols.
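Of the two ingredients named above, the wavelet-packet shrinkage step is straightforward to sketch with PyWavelets (the gradient-based Douglas-Rachford splitting and the CT system model are omitted; the wavelet, decomposition level and threshold are illustrative, not the authors' settings).

```python
import pywt

def wavelet_packet_shrink(image, wavelet="db4", level=2, thresh=0.05):
    """Soft-threshold shrinkage of a 2D wavelet-packet decomposition,
    keeping the pure approximation band untouched."""
    wp = pywt.WaveletPacket2D(data=image, wavelet=wavelet, maxlevel=level)
    for node in wp.get_level(level):
        if node.path != "a" * level:     # leave the approximation band as-is
            node.data = pywt.threshold(node.data, thresh, mode="soft")
    return wp.reconstruct(update=False)
```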
CNN Based Retinal Image Upscaling Using Zero Component Analysis
NASA Astrophysics Data System (ADS)
Nasonov, A.; Chesnakov, K.; Krylov, A.
2017-05-01
The aim of the paper is to obtain high-quality image upscaling for noisy images that are typical in medical image processing. A new training scenario for a convolutional neural network based image upscaling method is proposed. Its main idea is a novel dataset preparation method for deep learning. The dataset contains pairs of noisy low-resolution images and corresponding noiseless high-resolution images. To achieve better results at edges and textured areas, Zero Component Analysis is applied to these images. The upscaling results are compared with other state-of-the-art methods such as DCCI, SI-3 and SRCNN on noisy medical ophthalmological images. Objective evaluation of the results confirms the high quality of the proposed method. Visual analysis shows that fine details and structures such as blood vessels are preserved, the noise level is reduced, and no artifacts or non-existing details are added. These properties are essential in establishing a retinal diagnosis, so the proposed algorithm is recommended for use in real medical applications.
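Zero Component Analysis here refers to ZCA whitening of the training data; a minimal sketch of the transform applied to flattened image patches (epsilon is a small regularization constant, and patch handling is an assumption of this illustration):

```python
import numpy as np

def zca_whiten(patches, eps=1e-5):
    """ZCA whitening of flattened patches (rows = samples): decorrelate and
    rescale the data while staying as close as possible to the original space."""
    x = patches - patches.mean(axis=0)
    cov = np.cov(x, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    w = vecs @ np.diag(1.0 / np.sqrt(vals + eps)) @ vecs.T
    return x @ w
```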
Yanagawa, Masahiro; Hata, Akinori; Honda, Osamu; Kikuchi, Noriko; Miyata, Tomo; Uranishi, Ayumi; Tsukagoshi, Shinsuke; Tomiyama, Noriyuki
2018-05-29
To compare the image quality of the lungs between ultra-high-resolution CT (U-HRCT) and conventional area detector CT (AD-CT) images. Image data of slit phantoms (0.35, 0.30, and 0.15 mm) and 11 cadaveric human lungs were acquired by both U-HRCT and AD-CT devices. U-HRCT images were obtained with three acquisition modes: normal mode (U-HRCT N : 896 channels, 0.5 mm × 80 rows; 512 matrix), super-high-resolution mode (U-HRCT SHR : 1792 channels, 0.25 mm × 160 rows; 1024 matrix), and volume mode (U-HRCT SHR-VOL : non-helical acquisition with U-HRCT SHR ). AD-CT images were obtained with the same conditions as U-HRCT N . Three independent observers scored normal anatomical structures (vessels and bronchi), abnormal CT findings (faint nodules, solid nodules, ground-glass opacity, consolidation, emphysema, interlobular septal thickening, intralobular reticular opacities, bronchovascular bundle thickening, bronchiectasis, and honeycombing), noise, artifacts, and overall image quality on a 3-point scale (1 = worst, 2 = equal, 3 = best) compared with U-HRCT N . Noise values were calculated quantitatively. U-HRCT could depict a 0.15-mm slit. Both U-HRCT SHR and U-HRCT SHR-VOL significantly improved visualization of normal anatomical structures and abnormal CT findings, except for intralobular reticular opacities and reduced artifacts, compared with AD-CT (p < 0.014). Visually, U-HRCT SHR-VOL has less noise than U-HRCT SHR and AD-CT (p < 0.00001). Quantitative noise values were significantly higher in the following order: U-HRCT SHR (mean, 30.41), U-HRCT SHR-VOL (26.84), AD-CT (16.03), and U-HRCT N (15.14) (p < 0.0001). U-HRCT SHR and U-HRCT SHR-VOL resulted in significantly higher overall image quality than AD-CT and were almost equal to U-HRCT N (p < 0.0001). Both U-HRCT SHR and U-HRCT SHR-VOL can provide higher image quality than AD-CT, while U-HRCT SHR-VOL was less noisy than U-HRCT SHR . • Ultra-high-resolution CT (U-HRCT) can improve spatial resolution. • U-HRCT can reduce streak and dark band artifacts. • U-HRCT can provide higher image quality than conventional area detector CT. • In U-HRCT, the volume mode is less noisy than the super-high-resolution mode. • U-HRCT may provide more detailed information about the lung anatomy and pathology.
Jadidi, Masoud; Båth, Magnus; Nyrén, Sven
2018-04-09
To compare the quality of images obtained with two protocols with different acquisition times, and the influence of image post-processing, in a chest digital tomosynthesis (DTS) system. 20 patients with suspected lung cancer were imaged with chest X-ray equipment with a tomosynthesis option. Two examination protocols with different acquisition times (6.3 and 12 s) were performed on each patient. Both protocols were presented with two different types of image post-processing (standard DTS processing and more advanced processing optimised for chest radiography). Thus, 4 series from each patient, altogether 80 series, were presented anonymously and in random order. Five observers rated the quality of the reconstructed section images according to predefined quality criteria in three different classes. Visual grading characteristics (VGC) was used to analyse the data and the area under the VGC curve (AUC VGC) was used as the figure-of-merit. The 12 s protocol and the standard DTS processing were used as references in the analyses. The protocol with 6.3 s acquisition time had a statistically significant advantage over the vendor-recommended protocol with 12 s acquisition time for the classes of criteria Demarcation (AUC VGC = 0.56, p = 0.009) and Disturbance (AUC VGC = 0.58, p < 0.001). A similar value of AUC VGC was found for the class Structure (definition of bone structures in the spine) (0.56), but it could not be statistically separated from 0.5 (p = 0.21). For the image processing, the VGC analysis showed a small but statistically significant advantage of the standard DTS processing over the more advanced processing for the classes of criteria Demarcation (AUC VGC = 0.45, p = 0.017) and Disturbance (AUC VGC = 0.43, p = 0.005). A similar value of AUC VGC was found for the class Structure (0.46), but it could not be statistically separated from 0.5 (p = 0.31). The study indicates that the protocol with 6.3 s acquisition time yields slightly better image quality than the vendor-recommended protocol with 12 s acquisition time for several anatomical structures. Furthermore, the standard gradation processing (the vendor-recommended post-processing for DTS) yields a small advantage over the gradation processing/multiobjective frequency processing/flexible noise control processing in terms of image quality for all classes of criteria. Advances in knowledge: The study shows that image quality may be strongly affected by the selection of the DTS protocol and that the vendor-recommended protocol may not always be the optimal choice.
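The AUC VGC figure-of-merit is the area under the visual grading characteristics curve; a common non-parametric estimate of this area from the two sets of ordinal ratings is the normalised Mann-Whitney U statistic, sketched below (the study's own VGC software may use a different estimator or resampling scheme).

```python
from scipy.stats import mannwhitneyu

def auc_vgc(ratings_test, ratings_reference):
    """Non-parametric AUC estimate: the probability that a randomly chosen
    rating of the test protocol exceeds a randomly chosen rating of the
    reference protocol, with ties counted as one half."""
    u, _ = mannwhitneyu(ratings_test, ratings_reference, alternative="two-sided")
    return u / (len(ratings_test) * len(ratings_reference))
```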
Color visual simulation applications at the Defense Mapping Agency
NASA Astrophysics Data System (ADS)
Simley, J. D.
1984-09-01
The Defense Mapping Agency (DMA) produces the Digital Landmass System database to provide culture and terrain data in support of numerous aircraft simulators. In order to conduct database and simulation quality control and requirements analysis, DMA has developed the Sensor Image Simulator, which can rapidly generate visual and radar static-scene digital simulations. The use of color in visual simulation allows the clear portrayal of both landcover and terrain data, whereas the initial black-and-white capabilities were restricted in this role and thus found limited use. Color visual simulation has many uses in analysis to help determine the applicability of current and prototype data structures to better meet user requirements. Color visual simulation is also significant in quality control, since anomalies can be more easily detected in natural-appearing forms of the data. The realism and efficiency possible with advanced processing and display technology, along with accurate data, make color visual simulation a highly effective medium for the presentation of geographic information. As a result, digital visual simulation is finding increased potential as a special-purpose cartographic product. These applications are discussed and related simulation examples are presented.
[Magnetic resonance imaging of brain tumors].
Prayer, Daniela; Brugger, P C
2002-01-01
In the investigation of intracranial tumors, different MR-related methods not only permit morphological visualization of lesions but also give insights into their metabolism, resulting in information about the biological qualities of the respective tumor. Magnetic resonance protocols are selected based on the type and timing of onset of clinical signs. Combined information from imaging studies and spectroscopy facilitates the differential diagnosis between blastomatous and non-blastomatous lesions before and after therapy.
Nonlinear Multiscale Transformations: From Synchronization to Error Control
2001-07-01
transformation (plus the quantization step) has taken place, a lossless Lempel-Ziv compression algorithm is applied to reduce the size of the transformed... compressed data are all very close; however, the visual quality of the reconstructed image is significantly better for the EC compression algorithm ...used in recent times in the first step of transform coding algorithms for image compression. Ideally, a multiscale transformation allows for an
Flow visualization for investigating stator losses in a multistage axial compressor
NASA Astrophysics Data System (ADS)
Smith, Natalie R.; Key, Nicole L.
2015-05-01
The methodology and implementation of a powder-paint-based flow visualization technique, along with the illuminated flow physics, are presented in detail for application in a three-stage axial compressor. While flow visualization often accompanies detailed studies, the turbomachinery literature lacks a comprehensive study which both utilizes flow visualization to interrogate the flow field and explains the intricacies of execution. Lessons learned for obtaining high-quality images of surface flow patterns are discussed in this study. Fluorescent paint is used to provide clear, high-contrast pictures of the recirculation regions on shrouded vane rows. An edge-finding image processing procedure is implemented to provide a quantitative measure of vane-to-vane variability in flow separation, which is approximately 7% of the suction surface length for Stator 1. Results include images of vane suction-side corner separations from all three stages at three loading conditions. Additionally, streakline patterns obtained experimentally are compared with those calculated from computational models. Flow physics associated with vane clocking and increased rotor tip clearance, and their implications for stator loss, are also investigated with this flow visualization technique. With increased rotor tip clearance, the vane surface flow patterns show a shift to larger separations and more radial flow at the tip. Finally, the effects of instrumentation on the flow field are highlighted.
Platiša, Ljiljana; Brantegem, Leen Van; Kumcu, Asli; Ducatelle, Richard; Philips, Wilfried
2017-04-01
Despite the current rapid advance in technologies for whole slide imaging, there is still no scientific consensus on the recommended methodology for image quality assessment of digital pathology slides. For medical images in general, it has been recommended to assess image quality in terms of doctors' success rates in performing a specific clinical task while using the images (clinical image quality, cIQ). However, digital pathology is a new modality, and even identifying the appropriate task is difficult. In an alternative common approach, humans are asked to do a simpler task such as rating overall image quality (perceived image quality, pIQ), but that involves the risk of non-clinically relevant findings due to an unknown relationship between pIQ and cIQ. In this study, we explored three different experimental protocols: (1) conducting a clinical task (detecting inclusion bodies), (2) rating image similarity and preference, and (3) rating the overall image quality. Additionally, within protocol 1, overall quality ratings were also collected (task-aware pIQ). The experiments were done by diagnostic veterinary pathologists in the context of evaluating the quality of hematoxylin and eosin-stained digital pathology slides of animal tissue samples under several common image alterations: additive noise, blurring, change in gamma, change in color saturation, and JPG compression. While our experiments were small in scale, which prevents drawing strong conclusions, the results suggest the need to define a clinical task. Importantly, the pIQ data collected under protocols 2 and 3 did not always rank the image alterations the same as their cIQ from protocol 1, warning against using conventional pIQ to predict cIQ. At the same time, there was a correlation between the cIQ and task-aware pIQ ratings from protocol 1, suggesting that the clinical experiment context (set by specifying the clinical task) may affect human visual attention and bring focus to their criteria of image quality. Further research is needed to assess whether and for which purposes (e.g., preclinical testing) task-aware pIQ ratings could substitute for cIQ for a given clinical task.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sandoval, D; Mlady, G; Selwyn, R
Purpose: To bring together radiologists, technologists, and physicists to utilize post-processing techniques in digital radiography (DR) in order to optimize image acquisition and improve image quality. Methods: Sub-optimal images acquired on a new General Electric (GE) DR system were flagged for follow-up by radiologists and reviewed by technologists and medical physicists. Various exam types from adult musculoskeletal (n=35), adult chest (n=4), and pediatric (n=7) imaging were chosen for review, and a total of 673 images were reviewed. These images were processed using five customized algorithms provided by GE. An image score sheet was created allowing the radiologist to assign a numeric score to each of the processed images, which allowed objective comparison to the original images. Each image was scored on seven properties: 1) overall image look, 2) soft tissue contrast, 3) high contrast, 4) latitude, 5) tissue equalization, 6) edge enhancement, and 7) visualization of structures. Additional space was provided for comments not captured by the scoring categories. Radiologists scored the images from 1 to 10, with 1 being non-diagnostic quality and 10 being superior diagnostic quality. Scores for each custom algorithm for each image set were summed, and the algorithm with the highest score for each image set was then set as the default processing. Results: The number of images placed into the PACS “QC folder” for image-processing reasons decreased. Overall feedback from radiologists was that image quality for these studies had improved. All default processing for these image types was changed to the new algorithm. Conclusion: This work is an example of the collaboration between radiologists, technologists, and physicists at the University of New Mexico to add value to the radiology department. The significant amount of work required to prepare the processing algorithms and to reprocess and score the images was eagerly taken on by all team members in order to produce better quality images and improve patient care.
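A minimal sketch of the score aggregation described above, with a hypothetical data layout (algorithm name, image identifier, and the seven 1-10 property ratings from one reader):

```python
# Sum the seven property scores per algorithm and pick the highest-scoring
# algorithm as the default processing; all values shown are hypothetical.
from collections import defaultdict

ratings = [
    ("algo_A", "wrist_01", [8, 7, 8, 7, 8, 7, 8]),
    ("algo_B", "wrist_01", [6, 6, 7, 6, 6, 5, 6]),
]

totals = defaultdict(int)
for algorithm, image_id, scores in ratings:
    totals[algorithm] += sum(scores)          # aggregate across properties and images

best = max(totals, key=totals.get)            # algorithm chosen as default processing
print(best, dict(totals))
```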
NASA Astrophysics Data System (ADS)
Ota, Junko; Umehara, Kensuke; Ishimaru, Naoki; Ohno, Shunsuke; Okamoto, Kentaro; Suzuki, Takanori; Shirai, Naoki; Ishida, Takayuki
2017-02-01
As the capability of high-resolution displays grows, high-resolution images are often required in Computed Tomography (CT). However, acquiring high-resolution images takes a higher radiation dose and a longer scanning time. In this study, we applied the Sparse-coding-based Super-Resolution (ScSR) method to generate high-resolution images without increasing the radiation dose. We prepared an over-complete dictionary that learned the mapping between low- and high-resolution patches and sought a sparse representation of each patch of the low-resolution input. These coefficients were used to generate the high-resolution output. For evaluation, 44 CT cases were used as the test dataset. We up-sampled images by a factor of 2 or 4 and compared the image quality of the ScSR scheme against bilinear and bicubic interpolation, the traditional interpolation schemes. We also compared the image quality obtained with three learning datasets: a total of 45 CT images, 91 non-medical images, and 93 chest radiographs were used for dictionary preparation. Image quality was evaluated by measuring peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). The differences in PSNR and SSIM between the ScSR method and the interpolation methods were statistically significant. Visual assessment confirmed that the ScSR method generated sharp high-resolution images, whereas the conventional interpolation methods generated over-smoothed images. Comparing the three training datasets, there was no significant difference among the CT, chest radiograph, and non-medical datasets. These results suggest that ScSR provides a robust approach for up-sampling CT images and yields substantially higher image quality in the enlarged images.
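The reconstruction step of sparse-coding super-resolution can be sketched as follows, assuming a coupled dictionary pair has already been learned; the dictionary learning and the feature extraction used by ScSR are omitted, and the dictionaries below are random placeholders.

```python
# Sketch: sparse-code each low-resolution patch against the LR dictionary and
# apply the same codes to the coupled HR dictionary (placeholder dictionaries).
import numpy as np
from sklearn.decomposition import SparseCoder

n_atoms, lr_dim, hr_dim = 512, 25, 100    # e.g. 5x5 LR and 10x10 HR patches (hypothetical)
D_lr = np.random.randn(n_atoms, lr_dim)   # placeholder for the learned LR dictionary
D_lr /= np.linalg.norm(D_lr, axis=1, keepdims=True)
D_hr = np.random.randn(n_atoms, hr_dim)   # placeholder for the coupled HR dictionary

coder = SparseCoder(dictionary=D_lr, transform_algorithm="omp",
                    transform_n_nonzero_coefs=5)

lr_patches = np.random.randn(16, lr_dim)  # vectorized low-resolution input patches
alpha = coder.transform(lr_patches)       # sparse codes w.r.t. the LR dictionary
hr_patches = alpha @ D_hr                 # same codes applied to the HR dictionary
print(hr_patches.shape)                   # (16, hr_dim): reconstructed HR patches
```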
Image data compression having minimum perceptual error
NASA Technical Reports Server (NTRS)
Watson, Andrew B. (Inventor)
1995-01-01
A method for performing image compression that eliminates redundant and invisible image components is described. The image compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The present invention adapts or customizes the quantization matrix to the image being compressed. The quantization matrix incorporates visual masking by luminance and contrast techniques and an error pooling technique, resulting in the minimum perceptual error for any given bit rate, or the minimum bit rate for a given perceptual error.
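The quantization step this describes can be illustrated with a short sketch. This is not the patented image-adapted matrix; the standard JPEG luminance table is used only as a stand-in, and the block values are hypothetical.

```python
# Sketch: DCT-domain quantization of one 8x8 image block with a fixed matrix.
import numpy as np
from scipy.fft import dctn, idctn

Q = np.array([[16, 11, 10, 16, 24, 40, 51, 61],
              [12, 12, 14, 19, 26, 58, 60, 55],
              [14, 13, 16, 24, 40, 57, 69, 56],
              [14, 17, 22, 29, 51, 87, 80, 62],
              [18, 22, 37, 56, 68, 109, 103, 77],
              [24, 35, 55, 64, 81, 104, 113, 92],
              [49, 64, 78, 87, 103, 121, 120, 101],
              [72, 92, 95, 98, 112, 100, 103, 99]], dtype=float)

block = np.random.randint(0, 256, (8, 8)).astype(float)     # hypothetical pixel block
coeffs = dctn(block - 128.0, norm="ortho")                   # forward DCT, level-shifted
quantized = np.round(coeffs / Q)                             # coarser where Q is large
reconstructed = idctn(quantized * Q, norm="ortho") + 128.0   # decoder-side approximation
```

Larger entries of Q discard more of the corresponding (less visible) frequency components, which is what a perceptually optimized matrix tunes per image.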
A novel weighted-direction color interpolation
NASA Astrophysics Data System (ADS)
Tao, Jin-you; Yang, Jianfeng; Xue, Bin; Liang, Xiaofen; Qi, Yong-hong; Wang, Feng
2013-08-01
A digital camera captures images by covering the sensor surface with a color filter array (CFA), so only one color sample is obtained at each pixel location. Demosaicking is the process of estimating the missing color components of each pixel to obtain a full-resolution image. In this paper, a new algorithm based on edge adaptivity and different weighting factors is proposed. Our method can effectively suppress undesirable artifacts. Experimental results based on Kodak images show that the proposed algorithm obtains higher-quality images than other methods in both numerical and visual terms.
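To make the idea of direction-weighted interpolation concrete, here is a simplified stand-in (not the proposed algorithm, which the abstract does not fully specify): green estimation at a red/blue site of an RGGB Bayer mosaic, weighted toward the direction with the smaller gradient.

```python
# Sketch: gradient-weighted green interpolation at a non-green Bayer pixel.
import numpy as np

def green_at_rb(cfa, y, x):
    """Estimate G at a red/blue pixel (y, x) of a single-channel Bayer mosaic."""
    gh = abs(cfa[y, x - 1] - cfa[y, x + 1])      # horizontal gradient of green neighbors
    gv = abs(cfa[y - 1, x] - cfa[y + 1, x])      # vertical gradient of green neighbors
    eps = 1e-6
    wh, wv = 1.0 / (gh + eps), 1.0 / (gv + eps)  # weight the smoother direction more
    est_h = 0.5 * (cfa[y, x - 1] + cfa[y, x + 1])
    est_v = 0.5 * (cfa[y - 1, x] + cfa[y + 1, x])
    return (wh * est_h + wv * est_v) / (wh + wv)

cfa = np.random.rand(8, 8)            # hypothetical mosaic values in [0, 1]
print(green_at_rb(cfa, 2, 2))         # (2, 2) is a red site in an RGGB layout
```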
NASA Astrophysics Data System (ADS)
Siddiqui, Khan M.; Siegel, Eliot L.; Reiner, Bruce I.; Johnson, Jeffrey P.
2005-04-01
The authors identify a fundamental disconnect between the ways in which industry and radiologists assess and even discuss product performance. What is needed is a quantitative methodology that can assess both subjective image quality and observer task performance. In this study, we propose and evaluate the use of a visual discrimination model (VDM) that assesses just-noticeable differences (JNDs) to serve this purpose. The study compares radiologists' subjective perceptions of image quality of computed tomography (CT) and computed radiography (CR) images with quantitative measures of peak signal-to-noise ratio (PSNR) and JNDs as measured by a VDM. The study included 4 CT and 6 CR studies with compression ratios ranging from lossless to 90:1 (a total of 80 sets of images were generated [n = 1,200]). Eleven radiologists reviewed the images, rated them in terms of overall quality and readability, and identified images not acceptable for interpretation. Normalized reader scores were correlated with compression ratio, objective PSNR, and mean JND values. Results indicated a significantly higher correlation between observer performance and JND values than with PSNR. These results support the use of the VDM not only for the threshold discriminations for which it was calibrated, but also as a general image quality metric. The VDM is a highly promising, reproducible, and reliable adjunct or even alternative to human observer studies for research or for establishing clinical guidelines for image compression, dose reduction, and the evaluation of various display technologies.
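The correlation analysis described above amounts to comparing rank correlations between reader scores and each candidate metric; a minimal sketch with entirely hypothetical numbers:

```python
# Sketch: rank-correlate normalized reader scores with candidate metrics
# (all values are hypothetical placeholders, one per compression level).
import numpy as np
from scipy.stats import spearmanr

reader_scores = np.array([0.95, 0.90, 0.82, 0.70, 0.55, 0.40])  # hypothetical
psnr_values = np.array([48.0, 44.5, 41.2, 37.8, 33.1, 29.4])    # hypothetical PSNR (dB)
mean_jnd = np.array([0.5, 1.1, 2.0, 3.4, 5.2, 7.9])             # hypothetical VDM output

rho_psnr, _ = spearmanr(reader_scores, psnr_values)
rho_jnd, _ = spearmanr(reader_scores, mean_jnd)
print(f"PSNR vs readers: rho={rho_psnr:.2f}; JND vs readers: rho={rho_jnd:.2f}")
```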
NASA Technical Reports Server (NTRS)
Siostrzonek, Peter; Zangeneh, Massoud; Gossinger, Heinz; Lang, Wilfried; Rosenmayr, Georg; Heinz, Gottfried; Stumpflen, Andreas; Zeiler, Karl; Schwarz, Martin; Mosslacher, Herbert
1991-01-01
Presence of a patent foramen ovale may indicate paradoxic embolism in patients with otherwise unexplained embolic disease. Transthoracic contrast echocardiography has been used as a simple technique for detecting patent foramen ovale. However, particularly in patients with poor transthoracic image quality, presence of a patent foramen ovale might be missed. Transesophageal contrast echocardiography provides superior visualization of the atrial septum and therefore is believed to improve diagnostic accuracy. The present study investigates the influence of image quality on the detection of a patent foramen ovale by both transthoracic and transesophageal contrast echocardiography.
Murakoshi, Takuma; Masuda, Tomohiro; Utsumi, Ken; Tsubota, Kazuo; Wada, Yuji
2013-01-01
Previous studies have reported the effects of the statistics of luminance distribution on visual freshness perception using pictures that included the degradation process of food samples. However, these studies did not examine the effect of individual differences between the same kinds of food. Here we elucidate whether luminance distribution continues to have a significant effect on visual freshness perception even when the visual stimuli include individual differences in addition to the degradation process of foods. We took pictures of the degradation of three fish over 3.29 hours in a controlled environment, then cropped square patches of their eyes from the original images as visual stimuli. Eleven participants performed paired comparison tests judging the visual freshness of the fish eyes at three points of degradation. Perceived freshness scores (PFS) were calculated using the Bradley-Terry model for each image. The ANOVA revealed that the PFS for each fish decreased as the degradation time increased; however, the differences in PFS between individual fish were larger for the shorter degradation time and smaller for the longer degradation time. A multiple linear regression analysis was conducted in order to determine the relative importance of the statistics of the luminance distribution of the stimulus images in predicting PFS. The results show that the standard deviation and skewness of the luminance distribution have a significant influence on PFS. These results show that even when foodstuffs contain individual differences, visual freshness perception and changes in luminance distribution correlate with degradation time.
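Fitting Bradley-Terry scores from paired-comparison data can be sketched with the classic minorization-maximization update; the win counts below are hypothetical, not the study's data.

```python
# Sketch: Bradley-Terry strengths from a paired-comparison tally, where
# wins[i][j] is how often stimulus i was judged fresher than stimulus j.
import numpy as np

wins = np.array([[0, 8, 10],
                 [3, 0, 9],
                 [1, 2, 0]], dtype=float)   # hypothetical tallies for three images

n = wins + wins.T                            # comparisons per pair
p = np.ones(wins.shape[0])                   # initial strengths

for _ in range(200):                         # MM iterations
    for i in range(len(p)):
        denom = sum(n[i, j] / (p[i] + p[j]) for j in range(len(p)) if j != i)
        p[i] = wins[i].sum() / denom
    p /= p.sum()                             # normalize for identifiability

print(p)                                     # relative perceived-freshness scores
```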
Performance evaluation of objective quality metrics for HDR image compression
NASA Astrophysics Data System (ADS)
Valenzise, Giuseppe; De Simone, Francesca; Lauga, Paul; Dufaux, Frederic
2014-09-01
Due to the much larger luminance and contrast characteristics of high dynamic range (HDR) images, well-known objective quality metrics, widely used for the assessment of low dynamic range (LDR) content, cannot be directly applied to HDR images in order to predict their perceptual fidelity. To overcome this limitation, advanced fidelity metrics, such as HDR-VDP, have been proposed to accurately predict visually significant differences. However, their complex calibration may make them difficult to use in practice. A simpler approach consists of computing arithmetic or structural fidelity metrics, such as PSNR and SSIM, on perceptually encoded luminance values, but the performance of quality prediction in this case has not been clearly studied. In this paper, we aim to provide a better understanding of the limits and the potential of this approach by means of a subjective study. We compare the performance of HDR-VDP to that of PSNR and SSIM computed on perceptually encoded luminance values when considering compressed HDR images. Our results show that these simpler metrics can be effectively employed to assess image fidelity for applications such as HDR image compression.
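A sketch of the simpler approach described above: compute PSNR and SSIM on perceptually encoded luminance. A plain log encoding is used here as a stand-in for the perceptual encodings discussed (whose exact transfer functions are not given in the text), and the luminance maps are hypothetical.

```python
# Sketch: fidelity metrics on encoded HDR luminance (log encoding as a stand-in).
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def encode(lum, lmin=1e-2, lmax=1e4):
    """Map absolute luminance (cd/m^2) to [0, 1] via a log encoding."""
    lum = np.clip(lum, lmin, lmax)
    return (np.log10(lum) - np.log10(lmin)) / (np.log10(lmax) - np.log10(lmin))

reference = np.random.uniform(1e-2, 1e4, (256, 256))            # hypothetical HDR luminance
compressed = reference * np.random.uniform(0.9, 1.1, reference.shape)

ref_enc, cmp_enc = encode(reference), encode(compressed)
print(peak_signal_noise_ratio(ref_enc, cmp_enc, data_range=1.0))
print(structural_similarity(ref_enc, cmp_enc, data_range=1.0))
```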
Full-frame video stabilization with motion inpainting.
Matsushita, Yasuyuki; Ofek, Eyal; Ge, Weina; Tang, Xiaoou; Shum, Heung-Yeung
2006-07-01
Video stabilization is an important video enhancement technology which aims at removing annoying shaky motion from videos. We propose a practical and robust approach of video stabilization that produces full-frame stabilized videos with good visual quality. While most previous methods end up with producing smaller size stabilized videos, our completion method can produce full-frame videos by naturally filling in missing image parts by locally aligning image data of neighboring frames. To achieve this, motion inpainting is proposed to enforce spatial and temporal consistency of the completion in both static and dynamic image areas. In addition, image quality in the stabilized video is enhanced with a new practical deblurring algorithm. Instead of estimating point spread functions, our method transfers and interpolates sharper image pixels of neighboring frames to increase the sharpness of the frame. The proposed video completion and deblurring methods enabled us to develop a complete video stabilizer which can naturally keep the original image quality in the stabilized videos. The effectiveness of our method is confirmed by extensive experiments over a wide variety of videos.
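For context, a basic trajectory-smoothing stabilizer is sketched below. It covers only the motion-estimation and smoothing stages and leaves empty frame borders, which is precisely the problem the full-frame completion above is designed to solve; the file name and smoothing radius are hypothetical.

```python
# Sketch: estimate inter-frame motion and smooth the camera trajectory (OpenCV).
import cv2
import numpy as np

cap = cv2.VideoCapture("shaky.mp4")                   # hypothetical input video
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

motions = []                                           # per-frame (dx, dy, d_angle)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                  qualityLevel=0.01, minDistance=30)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    good = status.ravel() == 1
    m, _ = cv2.estimateAffinePartial2D(pts[good], nxt[good])
    if m is None:                                      # fall back to no motion
        m = np.eye(2, 3)
    motions.append((m[0, 2], m[1, 2], np.arctan2(m[1, 0], m[0, 0])))
    prev_gray = gray

trajectory = np.cumsum(motions, axis=0)                # accumulated camera path
kernel = np.ones(31) / 31                              # moving-average smoothing window
smoothed = np.column_stack([np.convolve(trajectory[:, i], kernel, mode="same")
                            for i in range(3)])
corrections = smoothed - trajectory                    # warp each frame by this offset
```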
Kang, Deqiang; Hua, Haiqin; Peng, Nan; Zhao, Jing; Wang, Zhiqun
2017-04-01
We aim to improve the image quality of coronary computed tomography angiography (CCTA) by using a personalized weight- and height-dependent scan trigger threshold. This study was divided into two parts. First, we analyzed data from 100 scheduled CCTA examinations acquired using a body mass index-dependent Smart Prep sequence (trigger threshold ranging from 80 HU to 250 HU based on body mass index). By identifying cases with high image quality, a linear regression equation was established to determine the correlation among the Smart Prep threshold, height, and body weight. Furthermore, a quick-reference table of weight- and height-dependent Smart Prep thresholds was generated for CCTA scanning. Second, to evaluate the effectiveness of the new individualized threshold method, an additional 100 consecutive patients were divided into two groups: an individualized group (n = 50) with the weight- and height-dependent threshold and a control group (n = 50) with the conventional constant threshold of 150 HU. Image quality was compared between the two groups by measuring the enhancement in the coronary arteries, aorta, left and right ventricles, and inferior vena cava; image quality scores based on visual inspection were also compared. The regression equation relating the Smart Prep threshold (K, HU), height (H, cm), and body weight (BW, kg) was K = 0.811 × H + 1.917 × BW - 99.341. Compared with the control group, the individualized group showed an average overall increase of 12.30% in enhancement of the left main coronary artery, 12.94% in the proximal right coronary artery, and 10.6% in the aorta. Correspondingly, the contrast-to-noise ratios increased by 26.03%, 27.08%, and 23.17%, respectively, and the contrast between the aorta and left ventricle increased by 633.1%. Meanwhile, the individualized group showed an average overall decrease of 22.7% in enhancement of the right ventricle and 32.7% in the inferior vena cava. There was no significant difference in image noise between the two groups (P > .05). On visual inspection, the image quality score of the individualized group was higher than that of the control group. Using a personalized weight- and height-dependent Smart Prep threshold to adjust the scan trigger time can significantly improve the image quality of CCTA.
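The reported regression, wrapped as a small helper for illustration; clamping the result to the 80-250 HU range used in the first part of the study is an assumption on my part, not stated in the abstract.

```python
# Sketch: weight- and height-dependent Smart Prep trigger threshold.
def smart_prep_threshold(height_cm: float, weight_kg: float) -> float:
    """Trigger threshold K (HU) from the reported regression equation."""
    k = 0.811 * height_cm + 1.917 * weight_kg - 99.341
    return min(max(k, 80.0), 250.0)     # assumed clamp to the studied threshold range

print(smart_prep_threshold(170, 70))    # ~173 HU for a 170 cm, 70 kg patient
```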