Task-based measures of image quality and their relation to radiation dose and patient risk
Barrett, Harrison H.; Myers, Kyle J.; Hoeschen, Christoph; Kupinski, Matthew A.; Little, Mark P.
2015-01-01
The theory of task-based assessment of image quality is reviewed in the context of imaging with ionizing radiation, and objective figures of merit (FOMs) for image quality are summarized. The variation of the FOMs with the task, the observer and especially with the mean number of photons recorded in the image is discussed. Then various standard methods for specifying radiation dose are reviewed and related to the mean number of photons in the image and hence to image quality. Current knowledge of the relation between local radiation dose and the risk of various adverse effects is summarized, and some graphical depictions of the tradeoffs between image quality and risk are introduced. Then various dose-reduction strategies are discussed in terms of their effect on task-based measures of image quality. PMID:25564960
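The dependence of task-based figures of merit on photon count can be illustrated with the linear (Hotelling) observer, whose SNR is a standard FOM in this framework. The sketch below uses a synthetic signal difference and covariance, not data from the paper:

```python
import numpy as np

# Hotelling-observer SNR for a signal-known-exactly detection task:
# SNR^2 = delta_s^T K^{-1} delta_s. Signal and covariance are illustrative.
n_pix = 16
delta_s = np.ones(n_pix)          # mean signal difference (hypothetical)
K = np.eye(n_pix) * 4.0           # image covariance (Poisson-like variance)

snr = np.sqrt(delta_s @ np.linalg.solve(K, delta_s))
print(snr)  # 2.0

# Halving the covariance (e.g. doubling the recorded photon count in a
# quantum-limited image) raises the SNR by sqrt(2):
snr_double = np.sqrt(delta_s @ np.linalg.solve(K / 2.0, delta_s))
```

This makes the paper's central tradeoff concrete: the FOM grows with the mean number of recorded photons, which is what ties image quality to dose.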
NASA Astrophysics Data System (ADS)
Tingberg, Anders Martin
Optimisation in diagnostic radiology requires accurate methods for determination of patient absorbed dose and clinical image quality. Simple methods for evaluation of clinical image quality are at present scarce, and this project aims at developing such methods. Two methods are used and further developed: fulfillment of image criteria (IC) and visual grading analysis (VGA). Clinical image quality descriptors are defined based on these two methods: image criteria score (ICS) and visual grading analysis score (VGAS), respectively. For both methods the basis is the Image Criteria of the "European Guidelines on Quality Criteria for Diagnostic Radiographic Images". Both methods have proved to be useful for evaluation of clinical image quality. The two methods complement each other: IC is an absolute method, which means that the quality of images of different patients and produced with different radiographic techniques can be compared with each other. The separating power of IC is, however, weaker than that of VGA. VGA is the best method for comparing images produced with different radiographic techniques and has strong separating power, but the results are relative, since the quality of an image is compared to the quality of a reference image. The usefulness of the two methods has been verified by comparing the results from both of them with results from a generally accepted method for evaluation of clinical image quality, receiver operating characteristics (ROC). The results of the comparison between the two methods based on visibility of anatomical structures and the method based on detection of pathological structures (free-response forced error) indicate that the former two methods can be used for evaluation of clinical image quality as efficiently as the method based on ROC. More studies are, however, needed for us to be able to draw a general conclusion, including studies of other organs, using other radiographic techniques, etc.
The results of the experimental evaluation of clinical image quality are compared with physical quantities calculated with a theoretical model based on a voxel phantom, and correlations are found. The results demonstrate that the computer model can be a useful tool in planning further experimental studies.
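The VGAS descriptor defined above is, in its usual form, simply the mean of the graded ratings over all images and observers. A minimal sketch with invented ratings:

```python
# Visual grading analysis score (VGAS): mean of the ratings given by all
# observers to all images. The ratings below are invented for illustration.
ratings = [
    [3, 4, 4],   # observer 1's grades for three images (1-5 scale)
    [2, 4, 5],   # observer 2's grades for the same images
]
vgas = sum(sum(r) for r in ratings) / (len(ratings) * len(ratings[0]))
print(round(vgas, 2))  # 3.67
```

Because the score is relative to the grading scale and reference, VGAS values from different studies are comparable only when the scale and reference images match.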
INCITS W1.1 development update: appearance-based image quality standards for printers
NASA Astrophysics Data System (ADS)
Zeise, Eric K.; Rasmussen, D. René; Ng, Yee S.; Dalal, Edul; McCarthy, Ann; Williams, Don
2008-01-01
In September 2000, INCITS W1 (the U.S. representative of ISO/IEC JTC1/SC28, the standardization committee for office equipment) was chartered to develop an appearance-based image quality standard. (1),(2) The resulting W1.1 project is based on a proposal (3) that perceived image quality can be described by a small set of broad-based attributes. There are currently six ad hoc teams, each working towards the development of standards for evaluation of perceptual image quality of color printers for one or more of these image quality attributes. This paper summarizes the work in progress of the teams addressing the attributes of Macro-Uniformity, Colour Rendition, Gloss & Gloss Uniformity, Text & Line Quality and Effective Resolution.
Objective quality assessment for multiexposure multifocus image fusion.
Hassen, Rania; Wang, Zhou; Salama, Magdy M A
2015-09-01
There has been a growing interest in image fusion technologies, but how to objectively evaluate the quality of fused images has not been fully understood. Here, we propose a method for objective quality assessment of multiexposure multifocus image fusion based on the evaluation of three key factors of fused image quality: 1) contrast preservation; 2) sharpness; and 3) structure preservation. Subjective experiments are conducted to create an image fusion database, based on which performance evaluation shows that the proposed fusion quality index correlates well with subjective scores and gives a significant improvement over existing fusion quality measures.
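One of the three factors above, sharpness, is often approximated by average gradient magnitude. The sketch below is an illustrative stand-in, not the paper's exact sharpness term:

```python
import numpy as np

# Crude sharpness score: mean gradient magnitude of the image.
# Function name and test images are illustrative, not from the paper.
def sharpness(img):
    gy, gx = np.gradient(img.astype(float))
    return float(np.mean(np.hypot(gx, gy)))

flat = np.full((8, 8), 0.5)                 # featureless patch
edge = np.zeros((8, 8)); edge[:, 4:] = 1.0  # hard vertical edge
assert sharpness(edge) > sharpness(flat)    # edges raise the score
```

A full fusion quality index would combine such a term with contrast- and structure-preservation terms computed against the source images.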
Retinal image quality assessment based on image clarity and content
NASA Astrophysics Data System (ADS)
Abdel-Hamid, Lamiaa; El-Rafei, Ahmed; El-Ramly, Salwa; Michelson, Georg; Hornegger, Joachim
2016-09-01
Retinal image quality assessment (RIQA) is an essential step in automated screening systems to avoid misdiagnosis caused by processing poor quality retinal images. A no-reference transform-based RIQA algorithm is introduced that assesses images based on five clarity and content quality issues: sharpness, illumination, homogeneity, field definition, and content. Transform-based RIQA algorithms have the advantage of considering retinal structures while being computationally inexpensive. Wavelet-based features are proposed to evaluate the sharpness and overall illumination of the images. A retinal saturation channel is designed and used along with wavelet-based features for homogeneity assessment. The presented sharpness and illumination features are utilized to assure adequate field definition, whereas color information is used to exclude nonretinal images. Several publicly available datasets of varying quality grades are utilized to evaluate the feature sets resulting in area under the receiver operating characteristic curve above 0.99 for each of the individual feature sets. The overall quality is assessed by a classifier that uses the collective features as an input vector. The classification results show superior performance of the algorithm in comparison to other methods from literature. Moreover, the algorithm addresses efficiently and comprehensively various quality issues and is suitable for automatic screening systems.
Uji, Akihito; Ooto, Sotaro; Hangai, Masanori; Arichika, Shigeta; Yoshimura, Nagahisa
2013-01-01
Purpose: To investigate the effect of B-spline-based elastic image registration on adaptive optics scanning laser ophthalmoscopy (AO-SLO)-assisted capillary visualization. Methods: AO-SLO videos were acquired from parafoveal areas in the eyes of healthy subjects and patients with various diseases. The image quality of capillary images constructed from AO-SLO videos using motion contrast enhancement was compared before and after B-spline-based elastic (nonlinear) image registration performed using ImageJ. For objective comparison of image quality, contrast-to-noise ratios (CNRs) for vessel images were calculated. For subjective comparison, experienced ophthalmologists ranked images on a 5-point scale. Results: All AO-SLO videos were successfully stabilized by elastic image registration. CNR was significantly higher in capillary images stabilized by elastic image registration than in those stabilized without registration. The average ratio of CNR in images with elastic image registration to CNR in images without it was 2.10 ± 1.73, with no significant difference in the ratio between patients and healthy subjects. The improvement in image quality was also supported by expert comparison. Conclusions: The use of B-spline-based elastic image registration in AO-SLO-assisted capillary visualization was effective for enhancing image quality both objectively and subjectively. PMID:24265796
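The CNR used for the objective comparison is commonly defined as the absolute difference between vessel and background means divided by the background standard deviation; the paper's exact ROI definitions may differ. A generic sketch:

```python
import numpy as np

# Contrast-to-noise ratio: |mean_signal - mean_background| / background std.
def cnr(signal_roi, background_roi):
    signal_roi = np.asarray(signal_roi, float)
    background_roi = np.asarray(background_roi, float)
    return abs(signal_roi.mean() - background_roi.mean()) / background_roi.std()

bg = np.array([10.0, 12.0, 8.0, 10.0])       # background: mean 10, std sqrt(2)
vessel = np.array([14.0, 16.0, 15.0, 15.0])  # vessel: mean 15
print(round(cnr(vessel, bg), 3))  # 3.536
```

Registration raises CNR here because averaging well-aligned frames suppresses background noise without blurring the vessel signal.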
NASA Astrophysics Data System (ADS)
Dong, Jian; Kudo, Hiroyuki
2017-03-01
Compressed sensing (CS) is attracting growing interest in sparse-view computed tomography (CT) image reconstruction. The most standard CS approach is total variation (TV) minimization. However, images reconstructed by TV often suffer from distortions, especially in reconstructions of practical CT images, in the form of patchy artifacts, improperly serrated edges, and loss of image texture. Most existing CS approaches, including TV, improve image quality by applying linear transforms to the object image, but linear transforms usually fail to account for discontinuities such as edges and textures, which is considered the key reason for these distortions. Nonlinear-filter-based image processing has long been studied, and it is well established that nonlinear filters outperform linear filters in tasks such as denoising. The median root prior was first utilized by Alenius as a nonlinear transform in CT image reconstruction, with significant gains; subsequently, Zhang developed a nonlocal-means-based CS. It is gradually becoming clear that nonlinear-transform-based CS improves image quality beyond what linear-transform-based CS achieves; to the best of our knowledge, however, this has not been clearly established in any previous paper. In this work, we investigated the image quality differences between conventional TV minimization and nonlinear-sparsifying-transform-based CS, as well as among different nonlinear sparsifying transforms, in sparse-view CT image reconstruction. Additionally, we accelerated the implementation of the nonlinear-sparsifying-transform-based CS algorithm.
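The TV seminorm that standard CS reconstruction minimizes can be sketched as the sum of isotropic gradient magnitudes computed with forward differences; boundary handling below is a simplifying choice:

```python
import numpy as np

# Isotropic total variation of an image via forward differences.
def total_variation(img):
    dx = np.diff(img, axis=1)   # horizontal differences
    dy = np.diff(img, axis=0)   # vertical differences
    # combine on the common interior grid
    return float(np.sqrt(dx[:-1, :] ** 2 + dy[:, :-1] ** 2).sum())

flat = np.ones((8, 8))
noisy = flat + np.random.default_rng(0).normal(0, 0.1, (8, 8))
assert total_variation(flat) == 0.0   # piecewise-constant images cost nothing
assert total_variation(noisy) > 0.0   # noise and texture are penalised alike
```

The last line is exactly the failure mode discussed above: TV cannot distinguish noise from genuine texture, which motivates nonlinear sparsifying transforms.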
MO-C-18A-01: Advances in Model-Based 3D Image Reconstruction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, G; Pan, X; Stayman, J
2014-06-15
Recent years have seen the emergence of CT image reconstruction techniques that exploit physical models of the imaging system, photon statistics, and even the patient to achieve improved 3D image quality and/or reduction of radiation dose. With numerous advantages in comparison to conventional 3D filtered backprojection, such techniques bring a variety of challenges as well, including: a demanding computational load associated with sophisticated forward models and iterative optimization methods; nonlinearity and nonstationarity in image quality characteristics; a complex dependency on multiple free parameters; and the need to understand how best to incorporate prior information (including patient-specific prior images) within the reconstruction process. The advantages, however, are even greater: for example, improved image quality; reduced dose; robustness to noise and artifacts; task-specific reconstruction protocols; suitability to novel CT imaging platforms and noncircular orbits; and incorporation of known characteristics of the imager and patient that are conventionally discarded. This symposium features experts in 3D image reconstruction, image quality assessment, and the translation of such methods to emerging clinical applications. Dr. Chen will address novel methods for the incorporation of prior information in 3D and 4D CT reconstruction techniques. Dr. Pan will show recent advances in optimization-based reconstruction that enable potential reduction of dose and sampling requirements. Dr. Stayman will describe a "task-based imaging" approach that leverages models of the imaging system and patient in combination with a specification of the imaging task to optimize both the acquisition and reconstruction process. Dr. Samei will describe the development of methods for image quality assessment in such nonlinear reconstruction techniques and the use of these methods to characterize and optimize image quality and dose in a spectrum of clinical applications. Learning Objectives: Learn the general methodologies associated with model-based 3D image reconstruction. Learn the potential advantages in image quality and dose associated with model-based image reconstruction. Learn the challenges associated with computational load and image quality assessment for such reconstruction methods. Learn how the imaging task can be incorporated as a means to drive optimal image acquisition and reconstruction techniques. Learn how model-based reconstruction methods can incorporate prior information to improve image quality, ease sampling requirements, and reduce dose.
Image aesthetic quality evaluation using convolution neural network embedded learning
NASA Astrophysics Data System (ADS)
Li, Yu-xin; Pu, Yuan-yuan; Xu, Dan; Qian, Wen-hua; Wang, Li-peng
2017-11-01
An embedded-learning convolutional neural network (ELCNN) based on image content is proposed in this paper to evaluate image aesthetic quality. Our approach can not only cope with small-scale data but also score image aesthetic quality. First, we compare AlexNet and VGG_S to determine which is more suitable for this image aesthetic quality evaluation task. Second, to further boost classification performance, we use image content to train aesthetic quality classification models; however, this makes the training samples smaller still, and a single round of fine-tuning cannot make full use of the small-scale data set. Third, to solve this problem, we propose two successive rounds of fine-tuning based on the aesthetic quality labels and the content labels, respectively; the classification probability of the trained CNN models is then used to evaluate image aesthetic quality. The experiments are carried out on the small-scale Photo Quality data set. The results show that the classification accuracy of our approach is higher than that of existing image aesthetic quality evaluation approaches.
NASA Astrophysics Data System (ADS)
Mubarok, S.; Lubis, L. E.; Pawiro, S. A.
2016-03-01
A compromise between radiation dose and image quality is essential in CT imaging. The CT dose index (CTDI) is currently the primary dosimetric formalism in CT, while low- and high-contrast resolution are aspects indicating image quality. This study aimed to estimate CTDIvol and image quality measures over a range of exposure parameter variations. CTDI measurements were performed using a PMMA (polymethyl methacrylate) phantom of 16 cm diameter, while the image quality test was conducted using a Catphan® 600 phantom. CTDI measurements were carried out according to the IAEA TRS 457 protocol using axial scan mode, under varied tube voltage, collimation or slice thickness, and tube current. The image quality test was conducted under the same exposure parameters as the CTDI measurements. An Android™-based application was also developed as part of this study; it estimates CTDIvol with a maximum difference from the actual CTDIvol measurement of 8.97%. Image quality can also be estimated through the CNR parameter, with a maximum difference from the actual CNR measurement of 21.65%.
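The CTDIvol the software estimates follows the standard formalism: the weighted CTDI combines centre and peripheral chamber readings in the PMMA phantom, and division by pitch gives CTDIvol for helical scans. The numbers below are illustrative, not from the study:

```python
# Weighted CTDI: one third centre reading plus two thirds peripheral average.
def ctdi_w(ctdi_center, ctdi_periph):
    return ctdi_center / 3.0 + 2.0 * ctdi_periph / 3.0

# Volume CTDI: weighted CTDI divided by pitch (pitch = 1 for axial scans).
def ctdi_vol(ctdi_center, ctdi_periph, pitch=1.0):
    return ctdi_w(ctdi_center, ctdi_periph) / pitch

print(round(ctdi_vol(30.0, 36.0, pitch=1.0), 2))  # 34.0 (mGy)
```

An app of the kind described would evaluate these expressions with coefficients fitted to the measured dependence on kV, collimation, and mAs.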
O'Loughlin, Declan; Oliveira, Bárbara L; Elahi, Muhammad Adnan; Glavin, Martin; Jones, Edward; Popović, Milica; O'Halloran, Martin
2017-12-06
Inaccurate estimation of average dielectric properties can have a tangible impact on microwave radar-based breast images. Despite this, recent patient imaging studies have used a fixed estimate although this is known to vary from patient to patient. Parameter search algorithms are a promising technique for estimating the average dielectric properties from the reconstructed microwave images themselves without additional hardware. In this work, qualities of accurately reconstructed images are identified from point spread functions. As the qualities of accurately reconstructed microwave images are similar to the qualities of focused microscopic and photographic images, this work proposes the use of focal quality metrics for average dielectric property estimation. The robustness of the parameter search is evaluated using experimental dielectrically heterogeneous phantoms on the three-dimensional volumetric image. Based on a very broad initial estimate of the average dielectric properties, this paper shows how these metrics can be used as suitable fitness functions in parameter search algorithms to reconstruct clear and focused microwave radar images.
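A focal quality metric of the kind proposed here can be illustrated with the variance of the Laplacian, a common focus measure in photography; this is an illustrative stand-in, not one of the specific metrics evaluated in the paper:

```python
import numpy as np

# Variance-of-Laplacian focus measure: higher for focused (sharp) images.
def laplacian_variance(img):
    k = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], float)  # Laplacian kernel
    pad = np.pad(img.astype(float), 1, mode="edge")
    win = np.lib.stride_tricks.sliding_window_view(pad, (3, 3))
    lap = (win * k).sum(axis=(2, 3))
    return float(lap.var())

sharp = np.zeros((8, 8)); sharp[:, 4:] = 1.0          # hard edge (in focus)
blurred = np.linspace(0, 1, 8)[None, :].repeat(8, 0)  # smooth ramp (defocused)
assert laplacian_variance(sharp) > laplacian_variance(blurred)
```

In a parameter search, such a metric would serve as the fitness function: candidate average dielectric properties are scored by how focused the resulting reconstruction is.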
NASA Astrophysics Data System (ADS)
Dostal, P.; Krasula, L.; Klima, M.
2012-06-01
Various image processing techniques in multimedia technology are optimized using the visual attention feature of the human visual system. Because of spatial non-uniformity, different locations in an image differ in importance for the perception of the image. In other words, perceived image quality depends mainly on the quality of important locations, known as regions of interest (ROI). The performance of such techniques is measured by subjective evaluation or by objective image quality criteria. Many state-of-the-art objective metrics are based on HVS properties: SSIM and MS-SSIM are based on image structural information, VIF on the information the human brain can ideally gain from the reference image, and FSIM utilizes low-level features to assign different importance to each location in the image. Still, none of these objective metrics utilizes an analysis of regions of interest. We address the question of whether these objective metrics can effectively evaluate images reconstructed by processing techniques based on ROI analysis utilizing high-level features. In this paper the authors show that state-of-the-art objective metrics do not correlate well with subjective evaluation when demosaicing based on ROI analysis is used for reconstruction. The ROI were computed from "ground truth" visual attention data. An algorithm combining two known demosaicing techniques on the basis of ROI location is proposed, reconstructing the ROI at fine quality while the rest of the image is reconstructed at low quality. The color image reconstructed by this ROI approach was compared with selected demosaicing techniques by objective criteria and subjective testing. The qualitative comparison of the objective and subjective results indicates that state-of-the-art objective metrics are still not suitable for evaluating image processing techniques based on ROI analysis, and new criteria are needed.
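For reference, the SSIM index mentioned above reduces, in its single-window (global) form, to the expression below; practical SSIM and MS-SSIM average this statistic over a sliding window rather than computing it once per image:

```python
import numpy as np

# Global (single-window) SSIM with the standard stabilising constants
# c1 = (0.01 L)^2 and c2 = (0.03 L)^2 for dynamic range L.
def ssim_global(x, y, L=255.0):
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

img = np.random.default_rng(0).uniform(0, 255, (16, 16))
assert abs(ssim_global(img, img) - 1.0) < 1e-9  # identical images score 1
```

The windowed averaging is uniform across the image, which is exactly why such metrics cannot weight ROI quality more heavily than background quality.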
NASA Astrophysics Data System (ADS)
Zhang, Zhen; Chen, Siqing; Zheng, Huadong; Sun, Tao; Yu, Yingjie; Gao, Hongyue; Asundi, Anand K.
2017-06-01
Computer holography has made notable progress in recent years. The point-based and slice-based methods are the chief algorithms for generating holograms in holographic display. Although both methods have been validated numerically and optically, the differences in imaging quality between them have not been specifically analyzed. In this paper, we analyze the imaging quality of computer-generated phase holograms generated by point-based Fresnel zone plates (PB-FZP), the point-based Fresnel diffraction algorithm (PB-FDA) and the slice-based Fresnel diffraction algorithm (SB-FDA). The calculation formulas and hologram generation with the three methods are demonstrated. In order to suppress speckle noise, sequential phase-only holograms are generated in our work. Numerically and experimentally reconstructed images are also exhibited. By comparing imaging quality, the merits and drawbacks of the three methods are analyzed, and conclusions are drawn.
Blind image quality assessment without training on human opinion scores
NASA Astrophysics Data System (ADS)
Mittal, Anish; Soundararajan, Rajiv; Muralidhar, Gautam S.; Bovik, Alan C.; Ghosh, Joydeep
2013-03-01
We propose a family of image quality assessment (IQA) models based on natural scene statistics (NSS) that can predict the subjective quality of a distorted image without reference to a corresponding distortionless image and without any training on human opinion scores of distorted images. These 'completely blind' models compete well with standard non-blind image quality indices in terms of subjective predictive performance when tested on the large publicly available 'LIVE' Image Quality database.
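NSS-based blind models of this kind typically start from mean-subtracted contrast-normalized (MSCN) coefficients, whose statistics deviate from those of natural scenes when an image is distorted. The sketch below uses a 3x3 box window in place of the Gaussian weighting used in practice:

```python
import numpy as np

# MSCN coefficients: normalise each pixel by its local mean and deviation.
def mscn(img, eps=1.0):
    img = img.astype(float)
    pad = np.pad(img, 1, mode="edge")
    # local statistics over a 3x3 neighbourhood (box window approximation)
    windows = np.lib.stride_tricks.sliding_window_view(pad, (3, 3))
    mu = windows.mean(axis=(2, 3))
    sigma = windows.std(axis=(2, 3))
    return (img - mu) / (sigma + eps)

img = np.arange(25, dtype=float).reshape(5, 5)  # smooth ramp test image
coeffs = mscn(img)
print(coeffs.shape)  # (5, 5)
```

A completely blind model then fits a statistical model (e.g. a generalized Gaussian) to such coefficients and scores distance from the natural-scene fit, with no opinion scores involved.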
NASA Astrophysics Data System (ADS)
Lin, Yuan; Choudhury, Kingshuk R.; McAdams, H. Page; Foos, David H.; Samei, Ehsan
2014-03-01
We previously proposed a novel image-based quality assessment technique [1] to assess the perceptual quality of clinical chest radiographs. In this paper, an observer study was designed and conducted to systematically validate this technique. Ten metrics were involved in the observer study: lung grey level, lung detail, lung noise, rib-lung contrast, rib sharpness, mediastinum detail, mediastinum noise, mediastinum alignment, subdiaphragm-lung contrast, and subdiaphragm area. For each metric, three tasks were successively presented to the observers. In each task, six ROI images were randomly presented in a row and observers were asked to rank the images based only on a designated quality, disregarding the other qualities. A range slider above the images allowed observers to indicate the acceptable range based on the corresponding perceptual attribute. Five board-certified radiologists from Duke participated in this observer study, on a DICOM-calibrated diagnostic display workstation and under low ambient lighting. The observer data were analyzed in terms of the correlations between the observer ranking orders and the algorithmic ranking orders. Based on the collected acceptable ranges, quality consistency ranges were statistically derived. The observer study showed that, for each metric, the averaged ranking orders of the participating observers were strongly correlated with the algorithmic orders. For lung grey level, the observer ranking orders accorded completely with the algorithmic ranking orders. The quality consistency ranges derived from this observer study were close to those derived from our previous study. The observer study indicates that the proposed image-based quality assessment technique provides a robust reflection of the perceptual image quality of clinical chest radiographs. The derived quality consistency ranges can be used to automatically predict the acceptability of a clinical chest radiograph.
Image quality assessment metric for frame accumulated image
NASA Astrophysics Data System (ADS)
Yu, Jianping; Li, Gang; Wang, Shaohui; Lin, Ling
2018-01-01
Medical image quality determines the accuracy of diagnosis, and gray-scale resolution is an important parameter of image quality. However, current objective metrics are not well suited to assessing medical images obtained by frame accumulation: they pay little attention to gray-scale resolution, being based mainly on spatial resolution and limited to the 256 gray levels of existing display devices. This paper therefore proposes a metric, the "mean signal-to-noise ratio" (MSNR), based on signal-to-noise, to evaluate frame-accumulated medical image quality more reasonably. We demonstrate its potential application through a series of images under a constant illumination signal. The mean of a sufficiently large number of images was taken as the reference image. Several groups of images with different numbers of accumulated frames were formed, and their MSNR values were calculated. The experimental results show that, compared with other quality assessment methods, the metric is simpler, more effective, and more suitable for assessing frame-accumulated images that surpass the gray scale and precision of the original image.
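The benefit of frame accumulation that such a metric is designed to capture can be sketched directly: averaging N noisy frames of a constant scene raises SNR roughly by sqrt(N). The `snr` function below is a generic mean signal-to-noise computation against a known reference, not the paper's exact index:

```python
import numpy as np

rng = np.random.default_rng(42)
truth = np.full((32, 32), 100.0)   # constant illumination scene

# Generic SNR of an image against a known reference.
def snr(image, reference):
    noise = image - reference
    return reference.mean() / noise.std()

# 64 frames with additive Gaussian noise, sigma = 10.
frames = truth + rng.normal(0, 10, size=(64, 32, 32))
snr_1 = snr(frames[0], truth)            # single frame: SNR ~ 10
snr_64 = snr(frames.mean(axis=0), truth) # 64-frame average: SNR ~ 80
assert snr_64 > snr_1                    # accumulation suppresses noise
```

The averaged image also carries sub-integer grey-level information, which is why the paper argues that metrics tied to 256 display levels undervalue frame-accumulated images.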
Fusion and quality analysis for remote sensing images using contourlet transform
NASA Astrophysics Data System (ADS)
Choi, Yoonsuk; Sharifahmadian, Ershad; Latifi, Shahram
2013-05-01
Recent developments in remote sensing technologies have provided various images with high spatial and spectral resolutions. However, multispectral images have low spatial resolution and panchromatic images have low spectral resolution. Therefore, image fusion techniques are necessary to improve the spatial resolution of spectral images by injecting spatial details of high-resolution panchromatic images. The objective of image fusion is to provide useful information by improving the spatial resolution and the spectral information of the original images. The fusion results can be utilized in various applications, such as military, medical imaging, and remote sensing. This paper addresses two issues in image fusion: i) image fusion method and ii) quality analysis of fusion results. First, a new contourlet-based image fusion method is presented, which is an improvement over the wavelet-based fusion. This fusion method is then applied to a case study to demonstrate its fusion performance. Fusion framework and scheme used in the study are discussed in detail. Second, quality analysis for the fusion results is discussed. We employed various quality metrics in order to analyze the fusion results both spatially and spectrally. Our results indicate that the proposed contourlet-based fusion method performs better than the conventional wavelet-based fusion methods.
Image Quality Assessment Using the Joint Spatial/Spatial-Frequency Representation
NASA Astrophysics Data System (ADS)
Beghdadi, Azeddine; Iordache, Răzvan
2006-12-01
This paper demonstrates the usefulness of spatial/spatial-frequency representations in image quality assessment by introducing a new image dissimilarity measure based on the 2D Wigner-Ville distribution (WVD). The properties of the 2D WVD are briefly reviewed, and the important issue of choosing the analytic image is emphasized. The WVD-based measure is shown to correlate with subjective human evaluation, which is the premise for an image quality assessor developed on this principle.
NASA Astrophysics Data System (ADS)
Wu, Z.; Luo, Z.; Zhang, Y.; Guo, F.; He, L.
2018-04-01
A Modulation Transfer Function (MTF)-based fuzzy comprehensive evaluation method is proposed in this paper for evaluating high-resolution satellite image quality. To establish the factor set, two MTF features and seven radiometric features were extracted from the knife-edge region of each image patch: Nyquist frequency, MTF0.5, entropy, peak signal-to-noise ratio (PSNR), average difference, edge intensity, average gradient, contrast, and ground sample distance (GSD). After analyzing the statistical distributions of these features, a fuzzy evaluation threshold table and fuzzy evaluation membership functions were established. Experiments on comprehensive quality assessment of different natural and artificial objects were performed with GF2 image patches. The results showed that the calibration field image has the highest quality score. The water image is closest in quality to the calibration field image; the building image is slightly poorer than the water image but much better than the farmland image. To test the influence of different features on quality evaluation, experiments with different weights were run on GF2 and SPOT7 images. The results showed that different weights yield different evaluation outcomes. When the weights emphasize edge features and GSD, the image quality of GF2 is better than that of SPOT7; however, when MTF and PSNR are the main factors, the image quality of SPOT7 is better than that of GF2.
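The fuzzy comprehensive evaluation step can be sketched as a weighted composition of per-feature membership vectors over quality grades. Features, grades, weights, and numbers below are illustrative only, not the paper's fitted values:

```python
import numpy as np

# Each row: a feature's membership degrees over grades (good, fair, poor).
memberships = np.array([
    [0.7, 0.3, 0.0],   # MTF at Nyquist (illustrative)
    [0.4, 0.5, 0.1],   # PSNR (illustrative)
    [0.6, 0.3, 0.1],   # edge intensity (illustrative)
])
weights = np.array([0.5, 0.3, 0.2])          # relative importance of features
grade_distribution = weights @ memberships   # fuzzy composition (weighted mean)
best_grade = int(np.argmax(grade_distribution))
print(grade_distribution.round(2), best_grade)
```

Changing `weights` shifts the final grade, which is the mechanism behind the GF2-versus-SPOT7 reversal reported above.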
McCord, Layne K; Scarfe, William C; Naylor, Rachel H; Scheetz, James P; Silveira, Anibal; Gillespie, Kevin R
2007-05-01
The objectives of this study were to compare the effect of JPEG 2000 compression of hand-wrist radiographs on observers' qualitative assessment of image quality and to compare that with a software-derived quantitative image quality index. Fifteen hand-wrist radiographs were digitized and saved as TIFF and as JPEG 2000 images at 4 levels of compression (20:1, 40:1, 60:1, and 80:1). The images, including rereads, were viewed by 13 orthodontic residents who rated image quality on a scale of 1 to 5. A quantitative analysis was also performed using readily available software based on the human visual system (Image Quality Measure Computer Program, version 6.2, Mitre, Bedford, Mass). ANOVA was used to determine the optimal compression level (P ≤ .05). When we compared subjective indexes, JPEG compression greater than 60:1 significantly reduced image quality. When we used quantitative indexes, the JPEG 2000 images had lower quality at all compression ratios compared with the original TIFF images. There was excellent correlation (R² > 0.92) between the qualitative and quantitative indexes. Image Quality Measure indexes are more sensitive than subjective image quality assessments in quantifying image degradation with compression. There is potential for this software-based quantitative method in determining the optimal compression ratio for any image without the use of subjective raters.
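A simple software-derived fidelity index of the kind used for such comparisons can be sketched with PSNR; note that the Image Quality Measure program cited above uses an HVS model rather than plain PSNR, so this is a generic illustration:

```python
import numpy as np

# Peak signal-to-noise ratio between an original and a compressed image.
def psnr(original, compressed, max_val=255.0):
    mse = np.mean((original.astype(float) - compressed.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(max_val ** 2 / mse)

orig = np.zeros((4, 4))
degraded = orig + 1.0   # uniform error of one grey level
print(round(psnr(orig, degraded), 2))  # 48.13 (dB)
```

Plotting such an index against compression ratio gives exactly the kind of quantitative curve the study correlates with the residents' subjective ratings.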
Computer-aided diagnosis based on enhancement of degraded fundus photographs.
Jin, Kai; Zhou, Mei; Wang, Shaoze; Lou, Lixia; Xu, Yufeng; Ye, Juan; Qian, Dahong
2018-05-01
Retinal imaging is an important and effective tool for detecting retinal diseases. However, degraded images caused by the aberrations of the eye can disguise lesions, so that a diseased eye may be mistakenly diagnosed as normal. In this work, we propose a new image enhancement method to improve the quality of degraded fundus images. The image is converted from the input RGB colour space to LAB colour space, and each normalized component is then enhanced using contrast-limited adaptive histogram equalization (CLAHE). Human visual system (HVS)-based fundus image quality assessment, combined with diagnosis by experts, is used to evaluate the enhancement. The study included 191 degraded-quality fundus photographs of 143 subjects with optic media opacity. Objective quality assessment of image enhancement (range: 0-1) indicated that our method improved colour retinal image quality from an average of 0.0773 (variance 0.0801) to an average of 0.3973 (variance 0.0756). Following enhancement, areas under the curve (AUC) were 0.996 for the glaucoma classifier, 0.989 for the diabetic retinopathy (DR) classifier, 0.975 for the age-related macular degeneration (AMD) classifier and 0.979 for the classifier for other retinal diseases. This relatively simple method for enhancing degraded-quality fundus images achieves superior image enhancement, as demonstrated in a qualitative HVS-based image quality assessment. The enhancement may therefore be employed to assist ophthalmologists in more efficient screening for retinal diseases and in the development of computer-aided diagnosis. © 2017 Acta Ophthalmologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.
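The equalization step can be illustrated with plain (global) histogram equalization of a single channel; the method above applies the contrast-limited, tiled variant (CLAHE) to normalized LAB components, whose clipping and tiling are omitted here for brevity:

```python
import numpy as np

# Global histogram equalisation of an 8-bit channel via the CDF remapping.
def hist_equalize(channel):
    channel = channel.astype(np.uint8)
    hist = np.bincount(channel.ravel(), minlength=256)
    cdf = hist.cumsum()
    # map the occupied intensity range onto the full 0-255 range
    cdf = (cdf - cdf.min()) * 255 / (cdf.max() - cdf.min())
    return cdf[channel].astype(np.uint8)

# Low-contrast channel clustered around mid-grey, as in a hazy fundus image.
low_contrast = np.clip(np.random.default_rng(1).normal(128, 5, (16, 16)), 0, 255)
enhanced = hist_equalize(low_contrast)
assert enhanced.std() > low_contrast.std()  # contrast is stretched
```

CLAHE improves on this by equalizing within local tiles and clipping the histogram, which limits the noise amplification that global equalization can cause.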
NASA Astrophysics Data System (ADS)
Pua, Rizza; Park, Miran; Wi, Sunhee; Cho, Seungryong
2016-12-01
We propose a hybrid metal artifact reduction (MAR) approach for computed tomography (CT) that is computationally more efficient than a fully iterative reconstruction method, but at the same time achieves superior image quality to the interpolation-based in-painting techniques. Our proposed MAR method, an image-based artifact subtraction approach, utilizes an intermediate prior image reconstructed via PDART to recover the background information underlying the high density objects. For comparison, prior images generated by total-variation minimization (TVM) algorithm, as a realization of fully iterative approach, were also utilized as intermediate images. From the simulation and real experimental results, it has been shown that PDART drastically accelerates the reconstruction to an acceptable quality of prior images. Incorporating PDART-reconstructed prior images in the proposed MAR scheme achieved higher quality images than those by a conventional in-painting method. Furthermore, the results were comparable to the fully iterative MAR that uses high-quality TVM prior images.
Assessing product image quality for online shopping
NASA Astrophysics Data System (ADS)
Goswami, Anjan; Chung, Sung H.; Chittar, Naren; Islam, Atiq
2012-01-01
Assessing product-image quality is important in the context of online shopping. A high-quality image that conveys more information about a product can boost the buyer's confidence and attract more attention. However, the notion of image quality for product images is not the same as in other domains. The perceived quality of product images depends not only on various photographic quality features but also on high-level features such as the clarity of the foreground or the goodness of the background. In this paper, we define a notion of product-image quality based on such features. We conduct a crowd-sourced experiment to collect user judgments on thousands of eBay images. We formulate a multi-class classification problem for modeling image quality, classifying images into good, fair and poor quality based on the guided perceptual notions from the judges. We also conduct regression experiments using the average crowd-sourced human judgment as the target, computing a pseudo-regression score from the expected average of the predicted classes as well as a score from the regression technique. We design many experiments with various sampling and voting schemes on the crowd-sourced data and construct various experimental image quality models. Most of our models have reasonable accuracy (greater than or equal to 70%) on the test data set. We observe that our computed image quality score has a high rank correlation (0.66) with the average votes from the crowd-sourced human judgments.
Naturalness and interestingness of test images for visual quality evaluation
NASA Astrophysics Data System (ADS)
Halonen, Raisa; Westman, Stina; Oittinen, Pirkko
2011-01-01
Balanced and representative test images are needed to study perceived visual quality in various application domains. This study investigates naturalness and interestingness as image quality attributes in the context of test images. Taking a top-down approach we aim to find the dimensions which constitute naturalness and interestingness in test images and the relationship between these high-level quality attributes. We compare existing collections of test images (e.g. Sony sRGB images, ISO 12640 images, Kodak images, Nokia images and test images developed within our group) in an experiment combining quality sorting and structured interviews. Based on the data gathered we analyze the viewer-supplied criteria for naturalness and interestingness across image types, quality levels and judges. This study advances our understanding of subjective image quality criteria and enables the validation of current test images, furthering their development.
Predicting perceptual quality of images in realistic scenario using deep filter banks
NASA Astrophysics Data System (ADS)
Zhang, Weixia; Yan, Jia; Hu, Shiyong; Ma, Yang; Deng, Dexiang
2018-03-01
Classical image perceptual quality assessment models usually resort to natural scene statistics methods, which are based on the assumption that certain reliable statistical regularities hold for undistorted images and are corrupted by introduced distortions. However, these models usually fail to accurately predict the degradation severity of images in realistic scenarios, where complex, multiple, and interacting authentic distortions appear. We propose a quality prediction model based on a convolutional neural network. Quality-aware features extracted from the filter banks of multiple convolutional layers are aggregated into the image representation. Furthermore, an easy-to-implement and effective feature selection strategy is used to further refine the image representation, and finally a linear support vector regression model is trained to map the image representation to the images' subjective perceptual quality scores. Experimental results on benchmark databases demonstrate the effectiveness and generalizability of the proposed model.
Model-Based Referenceless Quality Metric of 3D Synthesized Images Using Local Image Description.
Gu, Ke; Jakhetiya, Vinit; Qiao, Jun-Fei; Li, Xiaoli; Lin, Weisi; Thalmann, Daniel
2017-07-28
New challenges have emerged along with 3D-related technologies such as virtual reality (VR), augmented reality (AR), and mixed reality (MR). Free viewpoint video (FVV), with applications in remote surveillance, remote education, etc. based on the flexible selection of direction and viewpoint, has been perceived as the development direction of next-generation video technologies and has drawn wide research attention. Since FVV images are synthesized via a depth image-based rendering (DIBR) procedure in a "blind" environment (without reference images), a reliable real-time blind quality evaluation and monitoring system is urgently required. Existing assessment metrics, however, do not reflect human judgments faithfully, mainly because of the geometric distortions generated by DIBR. To this end, this paper proposes a novel referenceless quality metric for DIBR-synthesized images using autoregression (AR)-based local image description. We found that, after AR prediction, the reconstruction error between a DIBR-synthesized image and its AR-predicted image can accurately capture the geometric distortion. Visual saliency is then leveraged to improve the proposed blind quality metric by a sizable margin. Experiments validate the superiority of our no-reference quality method compared with prevailing full-, reduced- and no-reference models.
NASA Astrophysics Data System (ADS)
Krishnaswami, Venkataraman; De Luca, Giulia M. R.; Breedijk, Ronald M. P.; Van Noorden, Cornelis J. F.; Manders, Erik M. M.; Hoebe, Ron A.
2017-02-01
Fluorescence microscopy is an important tool in biomedical imaging. An inherent trade-off lies between image quality and photodamage. Recently, we have introduced rescan confocal microscopy (RCM) that improves the lateral resolution of a confocal microscope down to 170 nm. Previously, we have demonstrated that with controlled-light exposure microscopy, spatial control of illumination reduces photodamage without compromising image quality. Here, we show that the combination of these two techniques leads to high resolution imaging with reduced photodamage without compromising image quality. Implementation of spatially-controlled illumination was carried out in RCM using a line scanning-based approach. Illumination is spatially-controlled for every line during imaging with the help of a prediction algorithm that estimates the spatial profile of the fluorescent specimen. The estimation is based on the information available from previously acquired line images. As a proof-of-principle, we show images of N1E-115 neuroblastoma cells, obtained by this new setup with reduced illumination dose, improved resolution and without compromising image quality.
Zhang, Jinpeng; Zhang, Lichi; Xiang, Lei; Shao, Yeqin; Wu, Guorong; Zhou, Xiaodong; Shen, Dinggang; Wang, Qian
2017-01-01
It is fundamentally important to fuse the brain atlas from magnetic resonance (MR) images for many imaging-based studies. Most existing works focus on fusing the atlases from high-quality MR images. However, for low-quality diagnostic images (i.e., with high inter-slice thickness), the problem of atlas fusion has not been addressed yet. In this paper, we intend to fuse the brain atlas from the high-thickness diagnostic MR images that are prevalent for clinical routines. The main idea of our works is to extend the conventional groupwise registration by incorporating a novel super-resolution strategy. The contribution of the proposed super-resolution framework is two-fold. First, each high-thickness subject image is reconstructed to be isotropic by the patch-based sparsity learning. Then, the reconstructed isotropic image is enhanced for better quality through the random-forest-based regression model. In this way, the images obtained by the super-resolution strategy can be fused together by applying the groupwise registration method to construct the required atlas. Our experiments have shown that the proposed framework can effectively solve the problem of atlas fusion from the low-quality brain MR images. PMID:29062159
Bayesian framework inspired no-reference region-of-interest quality measure for brain MRI images
Osadebey, Michael; Pedersen, Marius; Arnold, Douglas; Wendel-Mitoraj, Katrina
2017-01-01
We describe a postacquisition, attribute-based quality assessment method for brain magnetic resonance imaging (MRI) images. It is based on the application of Bayes' theorem to the relationship between entropy and image quality attributes. The entropy feature image of a slice is segmented into low- and high-entropy regions. For each entropy region, there are three separate observations of the contrast, standard deviation, and sharpness quality attributes. The quality index for a quality attribute is the posterior probability of an entropy region given the corresponding region in a feature image where the quality attribute is observed. The prior belief in each entropy region is determined from the normalized total clique potential (TCP) energy of the slice. For TCP below a predefined threshold, the prior probability for a region is determined by the deviation of its percentage composition in the slice from a standard normal distribution built from 250 MRI volumes provided by the Alzheimer's Disease Neuroimaging Initiative. For TCP above the threshold, the prior is computed using a mathematical model that describes the TCP-noise level relationship in brain MRI images. Our proposed method assesses the image quality of each entropy region and of the global image. Experimental results demonstrate good correlation with the subjective opinions of radiologists for different types and levels of quality distortion. PMID:28630885
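The Bayesian step described above, a posterior probability for each entropy region given an observed quality attribute, can be illustrated with a minimal sketch. The priors and likelihoods below are made-up numbers for illustration, not values from the paper:

```python
def region_posterior(priors, likelihoods):
    """Posterior P(region | attribute observation) via Bayes' rule.

    priors      -- prior belief in each entropy region (e.g. from TCP energy)
    likelihoods -- P(observed attribute region | entropy region)
    """
    joint = [p * l for p, l in zip(priors, likelihoods)]
    z = sum(joint)                      # normalizing evidence term
    return [j / z for j in joint]

# Hypothetical numbers: prior for [low-entropy, high-entropy] regions and
# likelihood of the observed contrast region under each.
post = region_posterior([0.3, 0.7], [0.8, 0.2])
```

In the paper, such a posterior serves as the quality index for one attribute in one entropy region; the priors come from the TCP-based models described above.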
No-reference multiscale blur detection tool for content based image retrieval
NASA Astrophysics Data System (ADS)
Ezekiel, Soundararajan; Stocker, Russell; Harrity, Kyle; Alford, Mark; Ferris, David; Blasch, Erik; Gorniak, Mark
2014-06-01
In recent years, digital cameras have been widely used for image capture; they are embedded in cell phones, laptops, tablets, webcams, etc. Image quality is an important component of digital image analysis. To assess the image quality of these mobile products, a standard image is normally required as a reference, in which case root mean square error and peak signal-to-noise ratio can be used to measure image quality. These methods are not applicable, however, when no reference image exists. In our approach, a discrete wavelet transformation is applied to the blurred image, decomposing it into an approximation image and three detail sub-images, namely the horizontal, vertical, and diagonal images. We then assess image quality by measuring noise in the detail images and blur in the approximation image: noise mean and noise ratio are computed from the detail images, and blur mean and blur ratio from the approximation image. The Multi-scale Blur Detection (MBD) metric thus provides an assessment of both noise and blur content. These values are weighted based on a linear regression against full-reference quality values. From these statistics, we can estimate image quality without needing a reference image. We test the validity of the obtained weights by R2 analysis, and by using them to estimate the quality of an image with a known quality measure. The results show that our method provides acceptable results for images containing low to mid noise levels and blur content.
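The decomposition and statistics described above can be sketched without a reference image. This is an illustrative reimplementation under our own assumptions, not the authors' code: it uses a one-level Haar transform, takes the mean absolute detail coefficient as a noise measure, and uses the mean gradient magnitude of the approximation image as a blur proxy; the exact ratio statistics of the MBD metric are not reproduced.

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar transform: approximation + 3 detail sub-images."""
    img = img[:img.shape[0] // 2 * 2, :img.shape[1] // 2 * 2]  # even size
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # rows: low-pass
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # rows: high-pass
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0      # approximation image
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0      # horizontal detail
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0      # vertical detail
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0      # diagonal detail
    return LL, LH, HL, HH

def noise_blur_stats(img):
    """Noise measure from detail sub-images, blur proxy from approximation."""
    LL, LH, HL, HH = haar_dwt2(img)
    noise_mean = np.mean(np.abs(np.stack([LH, HL, HH])))
    gy, gx = np.gradient(LL)                  # sharp edges -> large gradients
    blur_mean = np.mean(np.hypot(gx, gy))
    return noise_mean, blur_mean
```

A multi-scale version, as the MBD name suggests, would repeat the decomposition on the approximation image.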
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dolly, S; Mutic, S; Anastasio, M
Purpose: Traditionally, image quality in radiation therapy is assessed subjectively or by utilizing physically-based metrics. Some model observers exist for task-based medical image quality assessment, but almost exclusively for diagnostic imaging tasks. As opposed to disease diagnosis, the task for image observers in radiation therapy is to utilize the available images to design and deliver a radiation dose which maximizes patient disease control while minimizing normal tissue damage. The purpose of this study was to design and implement a new computer simulation model observer to enable task-based image quality assessment in radiation therapy. Methods: A modular computer simulation framework was developed to resemble the radiotherapy observer by simulating an end-to-end radiation therapy treatment. Given images and the ground-truth organ boundaries from a numerical phantom as inputs, the framework simulates an external beam radiation therapy treatment and quantifies patient treatment outcomes using the previously defined therapeutic operating characteristic (TOC) curve. As a preliminary demonstration, TOC curves were calculated for various CT acquisition and reconstruction parameters, with the goal of assessing and optimizing simulation CT image quality for radiation therapy. Sources of randomness and bias within the system were analyzed. Results: The relationship between CT imaging dose and patient treatment outcome was objectively quantified in terms of a single value, the area under the TOC (AUTOC) curve. The AUTOC decreases more rapidly for low-dose imaging protocols. AUTOC variation introduced by the dose optimization algorithm was approximately 0.02%, at the 95% confidence interval. Conclusion: A model observer has been developed and implemented to assess image quality based on radiation therapy treatment efficacy. It enables objective determination of appropriate imaging parameter values (e.g. imaging dose). Framework flexibility allows for incorporation of additional modules to include any aspect of the treatment process, and therefore has great potential for both assessment and optimization within radiation therapy.
Light-leaking region segmentation of FOG fiber based on quality evaluation of infrared image
NASA Astrophysics Data System (ADS)
Liu, Haoting; Wang, Wei; Gao, Feng; Shan, Lianjie; Ma, Yuzhou; Ge, Wenqian
2014-07-01
To improve the assembly reliability of the fiber optic gyroscope (FOG), a light leakage detection system and method is developed. First, an agile movement control platform is designed to implement pose control of the FOG optical path component in six degrees of freedom (DOF). Second, an infrared camera is employed to capture working-state images of the corresponding fibers in the optical path component after the manual assembly of the FOG, so that the entire light transmission process of key sections in the light path can be recorded. Third, a region segmentation method based on image quality evaluation is developed for the light leakage images. In contrast to traditional methods, image quality metrics, including region contrast, edge blur, and image noise level, are first used to characterize the infrared image; robust segmentation algorithms, including graph cut and flood fill, are then applied for region segmentation according to the specific image quality. Finally, after segmentation of the light leakage region, typical light leakage types, such as point defects, wedge defects, and surface defects, can be identified. The image-quality-based method dramatically improves the applicability of the proposed system. Many experimental results have proved the validity and effectiveness of this method.
Shao, Feng; Li, Kemeng; Lin, Weisi; Jiang, Gangyi; Yu, Mei; Dai, Qionghai
2015-10-01
Quality assessment of 3D images encounters more challenges than its 2D counterparts. Directly applying 2D image quality metrics is not the solution. In this paper, we propose a new full-reference quality assessment for stereoscopic images by learning binocular receptive field properties to be more in line with human visual perception. To be more specific, in the training phase, we learn a multiscale dictionary from the training database, so that the latent structure of images can be represented as a set of basis vectors. In the quality estimation phase, we compute sparse feature similarity index based on the estimated sparse coefficient vectors by considering their phase difference and amplitude difference, and compute global luminance similarity index by considering luminance changes. The final quality score is obtained by incorporating binocular combination based on sparse energy and sparse complexity. Experimental results on five public 3D image quality assessment databases demonstrate that in comparison with the most related existing methods, the devised algorithm achieves high consistency with subjective assessment.
Automatic retinal interest evaluation system (ARIES).
Yin, Fengshou; Wong, Damon Wing Kee; Yow, Ai Ping; Lee, Beng Hai; Quan, Ying; Zhang, Zhuo; Gopalakrishnan, Kavitha; Li, Ruoying; Liu, Jiang
2014-01-01
In recent years, there has been increasing interest in the use of automatic computer-based systems for the detection of eye diseases such as glaucoma, age-related macular degeneration and diabetic retinopathy. However, in practice, retinal image quality is a big concern as automatic systems without consideration of degraded image quality will likely generate unreliable results. In this paper, an automatic retinal image quality assessment system (ARIES) is introduced to assess both image quality of the whole image and focal regions of interest. ARIES achieves 99.54% accuracy in distinguishing fundus images from other types of images through a retinal image identification step in a dataset of 35342 images. The system employs high level image quality measures (HIQM) to perform image quality assessment, and achieves areas under curve (AUCs) of 0.958 and 0.987 for whole image and optic disk region respectively in a testing dataset of 370 images. ARIES acts as a form of automatic quality control which ensures good quality images are used for processing, and can also be used to alert operators of poor quality images at the time of acquisition.
Remote Sensing Image Quality Assessment Experiment with Post-Processing
NASA Astrophysics Data System (ADS)
Jiang, W.; Chen, S.; Wang, X.; Huang, Q.; Shi, H.; Man, Y.
2018-04-01
This paper briefly describes a post-processing influence assessment experiment comprising three steps: physical simulation, image processing, and image quality assessment. The physical simulation models a sampled imaging system in the laboratory; the imaging system parameters are measured, and the digital images serving as image-processing input are produced by this imaging system with those parameters. The gathered optically sampled images are processed by three digital image processes: calibration pre-processing, lossy compression at different compression ratios, and image post-processing with different kernels. The image quality assessment method used is just-noticeable-difference (JND) subjective assessment based on ISO 20462; through subjective assessment of the gathered and processed images, the influence of different imaging parameters and of post-processing on image quality can be determined. The six sets of JND subjective assessment data validate each other. The main conclusions are: image post-processing can improve image quality; it can improve image quality even with lossy compression, although image quality improves less at higher compression ratios; and with our image post-processing method, image quality is better when the camera MTF lies within a small range.
Wang, Junqiang; Wang, Yu; Zhu, Gang; Chen, Xiangqian; Zhao, Xiangrui; Qiao, Huiting; Fan, Yubo
2018-06-01
Spatial positioning accuracy is a key issue in a computer-assisted orthopaedic surgery (CAOS) system. Since intraoperative fluoroscopic images are one of the most important inputs to the CAOS system, their quality should have a significant influence on the system's accuracy, but the regularities and mechanism of this influence have yet to be studied. Two typical spatial positioning methods - a C-arm calibration-based method and a bi-planar positioning method - are used to study the influence of different image quality parameters, such as resolution, distortion, contrast and signal-to-noise ratio, on positioning accuracy. The error propagation rules of image error in the different spatial positioning methods are analyzed by the Monte Carlo method. Correlation analysis showed that resolution and distortion had a significant influence on spatial positioning accuracy; the C-arm calibration-based method was more sensitive to image distortion, while the bi-planar positioning method was more susceptible to image resolution. Image contrast and signal-to-noise ratio had no significant influence on spatial positioning accuracy. The Monte Carlo analysis showed that, in general, the bi-planar positioning method was more sensitive to image quality than the C-arm calibration-based method. The quality of intraoperative fluoroscopic images is thus a key issue in the spatial positioning accuracy of a CAOS system. Although the two positioning methods have very similar mathematical principles, they showed different sensitivities to different image quality parameters. The results of this research may help to create a realistic standard for intraoperative fluoroscopic images for CAOS systems. Copyright © 2018 John Wiley & Sons, Ltd.
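The Monte Carlo error-propagation idea used above can be illustrated with a deliberately simplified sketch: perturb a detected 2-D image point with Gaussian noise of a given standard deviation and report the mean resulting positioning error. The real study propagates errors through full C-arm calibration and bi-planar models; this toy version, with names of our own choosing, shows only the sampling idea.

```python
import numpy as np

def monte_carlo_positioning_error(pixel_sigma, n_trials=10000, seed=0):
    """Mean Euclidean positioning error when a detected 2-D image point is
    perturbed by isotropic Gaussian noise of std pixel_sigma (in pixels).

    For isotropic 2-D Gaussian noise the error is Rayleigh-distributed,
    with mean pixel_sigma * sqrt(pi / 2); the sample mean should approach
    that value as n_trials grows.
    """
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, pixel_sigma, size=(n_trials, 2))
    errors = np.linalg.norm(noise, axis=1)    # per-trial positioning error
    return errors.mean()
```

A study-grade version would push each perturbed point through the positioning model (calibration or bi-planar triangulation) before measuring the 3-D error.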
Image quality evaluation of full reference algorithm
NASA Astrophysics Data System (ADS)
He, Nannan; Xie, Kai; Li, Tong; Ye, Yushan
2018-03-01
Image quality evaluation is a classic research topic; the goal is to design algorithms whose evaluation values are consistent with subjective perception. This paper introduces several typical full-reference objective evaluation methods: mean squared error (MSE), peak signal-to-noise ratio (PSNR), the structural similarity image metric (SSIM) and feature similarity (FSIM). The different evaluation methods are tested in Matlab, and their advantages and disadvantages are obtained by analysis and comparison. MSE and PSNR are simple, but they do not incorporate human visual system (HVS) characteristics into the evaluation, so their results are not ideal. SSIM correlates well with perception and is simple to compute, because it brings the human visual effect into the evaluation; however, the SSIM method rests on a hypothesis, which limits its results. The FSIM method can be used to test both gray and colour images, and its results are better. Experimental results show that the image quality evaluation algorithm based on FSIM is the most accurate.
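The full-reference metrics surveyed above are easy to state concretely. The sketch below (our own, in Python rather than the paper's Matlab) implements MSE, PSNR and a single-window simplification of SSIM; standard SSIM averages the same expression over local windows, and FSIM is omitted because it requires phase-congruency and gradient maps.

```python
import numpy as np

def mse(x, y):
    """Mean squared error between two images of equal shape."""
    return np.mean((x.astype(np.float64) - y.astype(np.float64)) ** 2)

def psnr(x, y, peak=255.0):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    e = mse(x, y)
    return np.inf if e == 0 else 10.0 * np.log10(peak ** 2 / e)

def ssim_global(x, y, peak=255.0, k1=0.01, k2=0.03):
    """SSIM evaluated on the whole image as one window.

    Standard SSIM computes this expression over small local windows and
    averages the results; the constants c1, c2 stabilize the divisions.
    """
    x, y = x.astype(np.float64), y.astype(np.float64)
    c1, c2 = (k1 * peak) ** 2, (k2 * peak) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()          # covariance
    return ((2 * mx * my + c1) * (2 * cxy + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

As the abstract notes, MSE and PSNR ignore structure entirely, which is visible here: they depend only on the pixelwise error, while SSIM compares luminance, contrast and covariance terms.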
Brown, Ryan; Storey, Pippa; Geppert, Christian; McGorty, KellyAnne; Leite, Ana Paula Klautau; Babb, James; Sodickson, Daniel K; Wiggins, Graham C; Moy, Linda
2013-11-01
To evaluate the image quality of T1-weighted fat-suppressed breast MRI at 7 T and to compare 7-T and 3-T images. Seventeen subjects were imaged using a 7-T bilateral transmit-receive coil and a 3D gradient echo sequence with adiabatic inversion-based fat suppression (FS). Images were graded on a five-point scale and quantitatively assessed through signal-to-noise ratio (SNR), fibroglandular/fat contrast and signal uniformity measurements. Image scores at 7 and 3 T were similar on standard-resolution images (1.1 × 1.1 × 1.1-1.6 mm³), indicating that high-quality breast imaging with clinical parameters can be performed at 7 T. The 7-T SNR advantage was underscored on 0.6-mm isotropic images, where image quality was significantly greater than at 3 T (4.2 versus 3.1, P ≤ 0.0001). Fibroglandular/fat contrast was more than two times higher at 7 T than at 3 T, owing to effective adiabatic inversion-based FS and the inherent 7-T signal advantage. Signal uniformity was comparable at 7 and 3 T (P < 0.05). Similar 7-T image quality was observed in all subjects, indicating robustness against anatomical variation. The 7-T bilateral transmit-receive coil and adiabatic inversion-based FS technique produce image quality that is as good as or better than at 3 T. • High image quality bilateral breast MRI is achievable with clinical parameters at 7 T. • 7-T high-resolution imaging improves delineation of subtle soft tissue structures. • Adiabatic-based fat suppression provides excellent fibroglandular/fat contrast at 7 T. • 7- and 3-T 3D T1-weighted gradient-echo images have similar signal uniformity. • The 7-T dual solenoid coil enables bilateral imaging without compromising uniformity.
Shuman, William P; Chan, Keith T; Busey, Janet M; Mitsumori, Lee M; Choi, Eunice; Koprowicz, Kent M; Kanal, Kalpana M
2014-12-01
To investigate whether reduced-radiation-dose liver computed tomography (CT) images reconstructed with model-based iterative reconstruction (MBIR) might compromise depiction of clinically relevant findings or might have decreased image quality when compared with clinical standard-radiation-dose CT images reconstructed with adaptive statistical iterative reconstruction (ASIR). With institutional review board approval, informed consent, and HIPAA compliance, 50 patients (39 men, 11 women) who underwent liver CT were prospectively included. After a portal venous pass with ASIR images, a 60% reduced radiation dose pass was added with MBIR images. One reviewer scored ASIR image quality and marked findings. Two additional independent reviewers noted whether marked findings were present on MBIR images and assigned scores for relative conspicuity, spatial resolution, image noise, and image quality. Liver and aorta Hounsfield units and image noise were measured. Volume CT dose index and size-specific dose estimate (SSDE) were recorded. Qualitative reviewer scores were summarized. Formal statistical inference for signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), volume CT dose index, and SSDE was made (paired t tests), with Bonferroni adjustment. Two independent reviewers identified all 136 ASIR image findings (n = 272) on MBIR images, scoring them as equal or better for conspicuity, spatial resolution, and image noise in 94.1% (256 of 272), 96.7% (263 of 272), and 99.3% (270 of 272), respectively.
In 50 image sets, two reviewers (n = 100) scored overall image quality as sufficient or good with MBIR in 99% (99 of 100). Liver SNR was significantly greater for MBIR (10.8 ± 2.5 [standard deviation] vs 7.7 ± 1.4, P < .001); there was no difference in CNR (2.5 ± 1.4 vs 2.4 ± 1.4, P = .45). For ASIR and MBIR, respectively, volume CT dose index was 15.2 mGy ± 7.6 versus 6.2 mGy ± 3.6; SSDE was 16.4 mGy ± 6.6 versus 6.7 mGy ± 3.1 (P < .001). Liver CT images reconstructed with MBIR may allow up to 59% radiation dose reduction compared with the dose with ASIR, without compromising depiction of findings or image quality. © RSNA, 2014.
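The SNR and CNR figures reported above follow common region-of-interest (ROI) definitions, which can be stated in a few lines. The exact ROI placement and noise definition used in the study are not given here, so treat these as generic textbook definitions rather than the authors' protocol:

```python
import numpy as np

def snr(roi):
    """Signal-to-noise ratio of a region of interest: mean / std."""
    roi = np.asarray(roi, dtype=np.float64)
    return roi.mean() / roi.std()

def cnr(roi_a, roi_b, background):
    """Contrast-to-noise ratio: absolute difference of ROI means divided
    by the noise (std) measured in a background region."""
    roi_a = np.asarray(roi_a, dtype=np.float64)
    roi_b = np.asarray(roi_b, dtype=np.float64)
    background = np.asarray(background, dtype=np.float64)
    return abs(roi_a.mean() - roi_b.mean()) / background.std()
```

In a liver CT context the two ROIs might be liver parenchyma and a lesion, with noise taken from a uniform region; other studies use the standard deviation of one of the ROIs themselves as the noise term.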
NASA Astrophysics Data System (ADS)
Ma, Dan; Liu, Jun; Chen, Kai; Li, Huali; Liu, Ping; Chen, Huijuan; Qian, Jing
2016-04-01
In remote sensing fusion, the spatial details of a panchromatic (PAN) image and the spectral information of multispectral (MS) images are transferred into the fused images according to the characteristics of the human visual system. We therefore propose a remote sensing image fusion quality assessment called the feature-based fourth-order correlation coefficient (FFOCC). Spatial features related to the spatial details of the PAN image and spectral features related to the spectral information of the MS images are first extracted from the fused image. Then the fourth-order correlation coefficient between the spatial and spectral features is calculated and treated as the assessment result. FFOCC was compared with existing, widely used indices such as the Erreur Relative Globale Adimensionnelle de Synthèse and quality with no reference. Results of the fusion and distortion experiments indicate that FFOCC is consistent with subjective evaluation and significantly outperforms the other indices in evaluating fused images that are produced by different fusion methods or that are distorted in their spatial and spectral features by blurring, added noise, and intensity changes. All the findings indicate that the proposed method is an objective and effective quality assessment for remote sensing image fusion.
Aurumskjöld, Marie-Louise; Ydström, Kristina; Tingberg, Anders; Söderberg, Marcus
2017-01-01
The number of computed tomography (CT) examinations is increasing, leading to an increase in total patient exposure. It is therefore important to optimize CT scan imaging conditions in order to reduce the radiation dose. The introduction of iterative reconstruction methods has enabled an improvement in image quality and a reduction in radiation dose. To investigate how image quality depends on the reconstruction method, and to discuss the patient dose reduction achievable with hybrid and model-based iterative reconstruction. An image quality phantom (Catphan® 600) and an anthropomorphic torso phantom were examined on a Philips Brilliance iCT. Image quality was evaluated in terms of CT numbers, noise, noise power spectra (NPS), contrast-to-noise ratio (CNR), low-contrast resolution, and spatial resolution for different scan parameters and dose levels. The images were reconstructed using filtered back projection (FBP) and different settings of the hybrid (iDose⁴) and model-based (IMR) iterative reconstruction methods. iDose⁴ decreased the noise by 15-45% compared with FBP, depending on the level of iDose⁴. IMR reduced the noise even further, by 60-75% compared with FBP. The results are independent of dose. The NPS showed changes in the noise distribution for the different reconstruction methods. The low-contrast resolution and CNR were improved with iDose⁴, and the improvement was even greater with IMR. There is great potential to reduce noise, and thereby improve image quality, by using hybrid or, in particular, model-based iterative reconstruction methods, or to lower the radiation dose while maintaining image quality. © The Foundation Acta Radiologica 2016.
Coupled dictionary learning for joint MR image restoration and segmentation
NASA Astrophysics Data System (ADS)
Yang, Xuesong; Fan, Yong
2018-03-01
To achieve better segmentation of MR images, image restoration is typically used as a preprocessing step, especially for low-quality MR images. Recent studies have demonstrated that dictionary learning methods can achieve promising performance for both image restoration and image segmentation. These methods typically learn paired dictionaries of image patches from different sources and use a common sparse representation to characterize paired image patches, such as low-quality image patches and their high-quality counterparts for image restoration, and image patches and their corresponding segmentation labels for image segmentation. Since learning these dictionaries jointly in a unified framework may improve image restoration and segmentation simultaneously, we propose a coupled dictionary learning method to concurrently learn dictionaries for joint image restoration and image segmentation based on sparse representations in a multi-atlas image segmentation framework. In particular, three dictionaries, including a dictionary of low-quality image patches, a dictionary of high-quality image patches, and a dictionary of segmentation label patches, are learned in a unified framework so that the learned dictionaries for image restoration and segmentation can benefit each other. Our method has been evaluated for segmenting the hippocampus in MR T1 images collected with scanners of different magnetic field strengths. The experimental results demonstrate that our method achieved better image restoration and segmentation performance than state-of-the-art dictionary learning and sparse representation based image restoration and image segmentation methods.
No-reference image quality assessment based on statistics of convolution feature maps
NASA Astrophysics Data System (ADS)
Lv, Xiaoxin; Qin, Min; Chen, Xiaohui; Wei, Guo
2018-04-01
We propose a Convolutional Feature Maps (CFM) driven approach to accurately predict image quality. Our motivation is based on the finding that Natural Scene Statistics (NSS) features computed on convolution feature maps are significantly sensitive to the degree of distortion in an image. In our method, a Convolutional Neural Network (CNN) is trained to obtain kernels for generating the CFM. We design a forward NSS layer which operates on the CFM to better extract NSS features. The quality-aware features derived from the output of the NSS layer are effective in describing the type and degree of distortion an image has suffered. Finally, a Support Vector Regression (SVR) is employed in our No-Reference Image Quality Assessment (NR-IQA) model to predict the subjective quality score of a distorted image. Experiments conducted on two public databases demonstrate that the performance of the proposed method is competitive with state-of-the-art NR-IQA methods.
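The NSS-features-plus-SVR pipeline can be sketched with scikit-learn. This is a simplified stand-in for the paper's method: the four summary statistics below replace the forward NSS layer, and the feature maps and scores are synthetic.

```python
import numpy as np
from sklearn.svm import SVR

def nss_features(feature_map):
    """Simple NSS-style summary statistics of a convolution feature
    map (mean, standard deviation, skewness, kurtosis). A simplified
    stand-in for the paper's forward NSS layer."""
    x = np.asarray(feature_map, dtype=float).ravel()
    m, s = x.mean(), x.std()
    z = (x - m) / (s + 1e-8)
    return np.array([m, s, (z ** 3).mean(), (z ** 4).mean()])

# Synthetic "feature maps" whose spread grows with the quality score
rng = np.random.default_rng(1)
maps = [rng.normal(0, 1 + 0.5 * i, (16, 16)) for i in range(20)]
scores = np.linspace(1, 5, 20)                 # synthetic subjective scores

X = np.stack([nss_features(m) for m in maps])  # quality-aware features
model = SVR(kernel="rbf").fit(X, scores)       # regress features -> score
pred = model.predict(X)
```

In the actual method the kernels generating the feature maps come from a trained CNN; here the regression step alone is illustrated.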
Improving best-phase image quality in cardiac CT by motion correction with MAM optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rohkohl, Christopher; Bruder, Herbert; Stierstorfer, Karl
2013-03-15
Purpose: Research in image reconstruction for cardiac CT aims at using motion correction algorithms to improve the image quality of the coronary arteries. The key to those algorithms is motion estimation, which is currently based on 3-D/3-D registration to align the structures of interest in images acquired in multiple heart phases. The need for an extended scan data range covering several heart phases is critical in terms of radiation dose to the patient and limits the clinical potential of the method. Furthermore, literature reports only slight quality improvements of the motion corrected images when compared to the most quiet phase (best-phase) that was actually used for motion estimation. In this paper a motion estimation algorithm is proposed which does not require an extended scan range but works with a short scan data interval, and which markedly improves the best-phase image quality. Methods: Motion estimation is based on the definition of motion artifact metrics (MAM) to quantify motion artifacts in a 3-D reconstructed image volume. The authors use two different MAMs, entropy and positivity. By adjusting the motion field parameters, the MAM of the resulting motion-compensated reconstruction is optimized using a gradient descent procedure. In this way motion artifacts are minimized. For a fast and practical implementation, only analytical methods are used for motion estimation and compensation. Both the MAM-optimization and a 3-D/3-D registration-based motion estimation algorithm were investigated by means of a computer-simulated vessel with a cardiac motion profile. Image quality was evaluated using normalized cross-correlation (NCC) with the ground truth template and root-mean-square deviation (RMSD). Four coronary CT angiography patient cases were reconstructed to evaluate the clinical performance of the proposed method.
Results: For the MAM-approach, the best-phase image quality could be improved for all investigated heart phases, with a maximum improvement of the NCC value by 100% and of the RMSD value by 81%. The corresponding maximum improvements for the registration-based approach were 20% and 40%. In phases with very rapid motion the registration-based algorithm obtained better image quality, while the image quality of the MAM algorithm was superior in phases with less motion. The image quality improvement of the MAM optimization was visually confirmed for the different clinical cases. Conclusions: The proposed method allows a software-based best-phase image quality improvement in coronary CT angiography. A short scan data interval at the target heart phase is sufficient; no additional scan data in other cardiac phases are required. The algorithm is therefore directly applicable to any standard cardiac CT acquisition protocol.
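The two motion artifact metrics named above, entropy and positivity, admit straightforward definitions. A hedged sketch (the paper's exact histogram binning and normalizations are not given here, so these are assumptions):

```python
import numpy as np

def entropy_mam(volume, bins=256):
    """Entropy of the gray-value histogram. Motion artifacts tend to
    spread the histogram, so a lower entropy indicates fewer
    artifacts; the optimizer minimizes this value."""
    hist, _ = np.histogram(volume, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                       # ignore empty bins (0*log 0 := 0)
    return float(-np.sum(p * np.log(p)))

def positivity_mam(volume):
    """Penalizes negative attenuation values, which are physically
    implausible and typical of motion (and other) artifacts."""
    return float(np.sum(np.minimum(volume, 0.0) ** 2))
```

Gradient-descent optimization of the motion field parameters would then repeatedly run a motion-compensated reconstruction and evaluate one of these metrics on the result.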
A model for a PC-based, universal-format, multimedia digitization system: moving beyond the scanner.
McEachen, James C; Cusack, Thomas J; McEachen, John C
2003-08-01
Digitizing images for case presentations from hardcopy films, slides, photographs, negatives, books, and videos can be a challenging task. Scanners and digital cameras have become standard tools of the trade. Unfortunately, using these devices to digitize multiple images in many different media formats can be a time-consuming and, in some cases, unachievable process. The authors' goal was to create a PC-based solution for digitizing multiple media formats in a timely fashion while maintaining adequate image presentation quality. The authors' PC-based solution makes use of off-the-shelf hardware, including a digital document camera (DDC), a VHS video player, and a video-editing kit. With the assistance of five staff radiologists, the authors examined the quality of multiple image types digitized with this equipment. The authors also quantified the speed of digitization of various types of media using the DDC and the video-editing kit. With regard to image quality, the five staff radiologists rated the digitized angiography, CT, and MR images as adequate to excellent for use in teaching files and case presentations. The digitized plain films received an average rating of adequate. As for performance, the authors realized a 68% improvement in the time required to digitize hardcopy films using the DDC instead of a professional-quality scanner. The PC-based solution provides a means for digitizing multiple images from many different types of media in a timely fashion while maintaining adequate image presentation quality.
Hirata, Kenichiro; Utsunomiya, Daisuke; Kidoh, Masafumi; Funama, Yoshinori; Oda, Seitaro; Yuki, Hideaki; Nagayama, Yasunori; Iyama, Yuji; Nakaura, Takeshi; Sakabe, Daisuke; Tsujita, Kenichi; Yamashita, Yasuyuki
2018-05-01
We aimed to evaluate the image quality performance of coronary CT angiography (CTA) under different settings of the forward-projected model-based iterative reconstruction solution (FIRST). Thirty patients undergoing coronary CTA were included. Each image was reconstructed using filtered back projection (FBP), adaptive iterative dose reduction 3D (AIDR-3D), and two model-based iterative reconstructions, FIRST-body and FIRST-cardiac sharp (CS). CT number and noise were measured in the coronary vessels and plaque. Subjective image-quality scores were obtained for noise and structure visibility. In the objective image analysis, FIRST-body produced the significantly highest contrast-to-noise ratio. Regarding subjective image quality, FIRST-CS had the highest score for structure visibility, although its image noise score was inferior to that of FIRST-body. In conclusion, FIRST provides significant improvements in objective and subjective image quality compared with FBP and AIDR-3D. FIRST-body effectively reduces image noise, but the structure visibility with FIRST-CS was superior to that of FIRST-body.
Blind image quality assessment based on aesthetic and statistical quality-aware features
NASA Astrophysics Data System (ADS)
Jenadeleh, Mohsen; Masaeli, Mohammad Masood; Moghaddam, Mohsen Ebrahimi
2017-07-01
The main goal of image quality assessment (IQA) methods is the emulation of human perceptual image quality judgments. Therefore, the correlation of these methods' objective scores with human perceptual scores is considered their performance metric. Human judgment of image quality implicitly includes many factors, such as aesthetics, semantics, context, and various types of visual distortion. The main idea of this paper is to use a host of features that are commonly employed in image aesthetics assessment in order to improve the accuracy of blind image quality assessment (BIQA) methods. We propose an approach that enriches the features of BIQA methods by integrating a host of aesthetics image features with natural image statistics features derived from multiple domains. The proposed features have been used to augment five different state-of-the-art BIQA methods that use natural scene statistics features. Experiments were performed on seven benchmark image quality databases. The experimental results showed significant improvement in the accuracy of the methods.
Image quality metrics for volumetric laser displays
NASA Astrophysics Data System (ADS)
Williams, Rodney D.; Donohoo, Daniel
1991-08-01
This paper addresses the extensions to image quality metrics and the related human factors research that are needed to establish baseline standards for emerging volume display technologies. The existing and recently developed technologies for multiplanar volume displays are reviewed with an emphasis on basic human visual issues. Human factors image quality metrics and guidelines are needed to firmly establish this technology in the marketplace. The human visual requirements and the display design tradeoffs for these prototype laser-based volume displays are addressed, and several critical image quality issues are identified for further research. The American National Standard for Human Factors Engineering of Visual Display Terminal Workstations (ANSI/HFS 100) and other international standards (ISO, DIN) can serve as a starting point, but this research base must be extended to provide new image quality metrics for this new technology of volume displays.
Beam Characterization at the Neutron Radiography Facility
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sarah Morgan; Jeffrey King
The quality of a neutron imaging beam directly impacts the quality of radiographic images produced using that beam. Fully characterizing a neutron beam, including determination of the beam’s effective length-to-diameter ratio, neutron flux profile, energy spectrum, image quality, and beam divergence, is vital for producing quality radiographic images. This project characterized the east neutron imaging beamline at the Idaho National Laboratory Neutron Radiography Reactor (NRAD). The experiments which measured the beam’s effective length-to-diameter ratio and image quality are based on American Society for Testing and Materials (ASTM) standards. An analysis of the image produced by a calibrated phantom measured the beam divergence. The energy spectrum measurements consist of a series of foil irradiations using a selection of activation foils, compared to the results produced by a Monte Carlo n-Particle (MCNP) model of the beamline. Improvement of the existing NRAD MCNP beamline model includes validation of the model’s energy spectrum and the development of enhanced image simulation methods. The image simulation methods predict the radiographic image of an object based on the foil reaction rate data obtained by placing a model of the object in front of the image plane in an MCNP beamline model.
Shieh, Chun-Chien; Kipritidis, John; O’Brien, Ricky T.; Kuncic, Zdenka; Keall, Paul J.
2014-01-01
Purpose: Respiratory signal, binning method, and reconstruction algorithm are three major controllable factors affecting image quality in thoracic 4D cone-beam CT (4D-CBCT), which is widely used in image guided radiotherapy (IGRT). Previous studies have investigated each of these factors individually, but no integrated sensitivity analysis has been performed. In addition, projection angular spacing is also a key factor in reconstruction, but how it affects image quality is not obvious. An investigation of the impacts of these four factors on image quality can help determine the most effective strategy in improving 4D-CBCT for IGRT. Methods: Fourteen 4D-CBCT patient projection datasets with various respiratory motion features were reconstructed with the following controllable factors: (i) respiratory signal (real-time position management, projection image intensity analysis, or fiducial marker tracking), (ii) binning method (phase, displacement, or equal-projection-density displacement binning), and (iii) reconstruction algorithm [Feldkamp–Davis–Kress (FDK), McKinnon–Bates (MKB), or adaptive-steepest-descent projection-onto-convex-sets (ASD-POCS)]. The image quality was quantified using signal-to-noise ratio (SNR), contrast-to-noise ratio, and edge-response width in order to assess noise/streaking and blur. The SNR values were also analyzed with respect to the maximum, mean, and root-mean-squared-error (RMSE) projection angular spacing to investigate how projection angular spacing affects image quality. Results: The choice of respiratory signals was found to have no significant impact on image quality. Displacement-based binning was found to be less prone to motion artifacts compared to phase binning in more than half of the cases, but was shown to suffer from large interbin image quality variation and large projection angular gaps. Both MKB and ASD-POCS resulted in noticeably improved image quality almost 100% of the time relative to FDK. 
In addition, SNR values were found to increase with decreasing RMSE values of projection angular gaps with strong correlations (r ≈ −0.7) regardless of the reconstruction algorithm used. Conclusions: Based on the authors’ results, displacement-based binning methods, better reconstruction algorithms, and the acquisition of even projection angular views are the most important factors to consider for improving thoracic 4D-CBCT image quality. In view of the practical issues with displacement-based binning and the fact that projection angular spacing is not currently directly controllable, development of better reconstruction algorithms represents the most effective strategy for improving image quality in thoracic 4D-CBCT for IGRT applications at the current stage. PMID:24694143
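The projection angular spacing statistics analyzed above can be computed directly from the sorted projection angles. A sketch; taking the RMSE against the ideal uniform spacing is an assumption about the paper's exact definition:

```python
import numpy as np

def angular_gap_stats(angles_deg):
    """Maximum, mean, and root-mean-square projection angular spacing
    for a set of projection angles (degrees)."""
    gaps = np.diff(np.sort(angles_deg))
    return gaps.max(), gaps.mean(), np.sqrt(np.mean(gaps ** 2))

def gap_rmse(angles_deg, arc=360.0):
    """RMSE of the angular gaps relative to the ideal uniform spacing
    over the acquisition arc; 0 means perfectly even sampling."""
    gaps = np.diff(np.sort(angles_deg))
    ideal = arc / len(angles_deg)
    return np.sqrt(np.mean((gaps - ideal) ** 2))
```

Under the reported correlation (r ≈ −0.7), bins whose projections cluster (large `gap_rmse`) would tend to have lower SNR, motivating the acquisition of evenly spaced projection views.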
Wakui, Takashi; Matsumoto, Tsuyoshi; Matsubara, Kenta; Kawasaki, Tomoyuki; Yamaguchi, Hiroshi; Akutsu, Hidenori
2017-10-01
We propose an image analysis method for quality evaluation of human pluripotent stem cells based on biologically interpretable features. It is important to maintain the undifferentiated state of induced pluripotent stem cells (iPSCs) while culturing the cells during propagation. Cell culture experts visually select good quality cells exhibiting the morphological features characteristic of undifferentiated cells. Experts have empirically determined that these features comprise prominent and abundant nucleoli, less intercellular spacing, and fewer differentiating cellular nuclei. We quantified these features based on experts' visual inspection of phase contrast images of iPSCs and found that these features are effective for evaluating iPSC quality. We then developed an iPSC quality evaluation method using an image analysis technique. The method allowed accurate classification, equivalent to visual inspection by experts, of three iPSC cell lines.
Blind CT image quality assessment via deep learning strategy: initial study
NASA Astrophysics Data System (ADS)
Li, Sui; He, Ji; Wang, Yongbo; Liao, Yuting; Zeng, Dong; Bian, Zhaoying; Ma, Jianhua
2018-03-01
Computed Tomography (CT) is one of the most important medical imaging modalities. CT images can be used to assist in the detection and diagnosis of lesions and to facilitate follow-up treatment. However, CT images are vulnerable to noise, which has two major intrinsic sources: X-ray photon statistics and the electronic noise background. It is therefore necessary to perform image quality assessment (IQA) in CT imaging before diagnosis and treatment. Most existing CT image IQA methods are based on human observer studies; however, these are impractical in the clinic because they are complex and time-consuming. In this paper, we present a blind CT image quality assessment via a deep learning strategy. A database of 1500 CT images was constructed, containing 300 high-quality images and 1200 corresponding noisy images; specifically, the high-quality images were used to simulate the corresponding noisy images at four different dose levels. The images were then scored by experienced radiologists on a five-point scale for the following attributes: image noise, artifacts, edge and structure, overall image quality, and tumor size and boundary estimation. We trained a network to learn the non-linear map from CT images to subjective evaluation scores; the pre-trained model then yields a predicted score for a test image. To demonstrate the performance of the deep learning network for IQA, two correlation coefficients are utilized: the Pearson Linear Correlation Coefficient (PLCC) and the Spearman Rank Order Correlation Coefficient (SROCC). The experimental results demonstrate that the presented deep-learning-based IQA strategy can be used for CT image quality assessment.
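Both evaluation coefficients are available in SciPy. A minimal sketch with hypothetical predicted and subjective score vectors:

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

# Hypothetical predicted scores vs. radiologists' subjective scores
predicted  = np.array([4.1, 3.0, 2.2, 4.8, 1.5, 3.6])
subjective = np.array([4.0, 3.2, 2.0, 5.0, 1.0, 3.5])

plcc, _ = pearsonr(predicted, subjective)    # linear agreement
srocc, _ = spearmanr(predicted, subjective)  # rank-order agreement
```

PLCC measures how linearly the predictions track the subjective scores, while SROCC only checks that the ranking is preserved; IQA papers typically report both.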
Recent progress in the development of ISO 19751
NASA Astrophysics Data System (ADS)
Farnand, Susan P.; Dalal, Edul N.; Ng, Yee S.
2006-01-01
A small number of general visual attributes have been recognized as essential in describing image quality. These include micro-uniformity, macro-uniformity, colour rendition, text and line quality, gloss, sharpness, and spatial adjacency or temporal adjacency attributes. The multiple-part International Standard discussed here was initiated by the INCITS W1 committee on the standardization of office equipment to address the need for unambiguously documented procedures and methods, which are widely applicable over the multiple printing technologies employed in office applications, for the appearance-based evaluation of these visually significant attributes of printed image quality [1,2]. The resulting proposed International Standard, for which ISO/IEC WD 19751-1 [3] presents an overview and an outline of the overall procedure and common methods, is based on a proposal predicated on the idea that image quality could be described by a small set of broad-based attributes [4]. Five ad hoc teams were established (now six, since a sharpness team is in the process of being formed) to generate standards for one or more of these image quality attributes. Updates on the colour rendition, text and line quality, and gloss attributes are provided.
Quality Improvement of Liver Ultrasound Images Using Fuzzy Techniques.
Bayani, Azadeh; Langarizadeh, Mostafa; Radmard, Amir Reza; Nejad, Ahmadreza Farzaneh
2016-12-01
Liver ultrasound images are widely used to diagnose diffuse liver diseases such as fatty liver. However, the low quality of such images makes it difficult to analyze them and diagnose diseases. The purpose of this study, therefore, was to improve the contrast and quality of liver ultrasound images. A number of image contrast enhancement algorithms based on fuzzy logic were applied, using Matlab 2013b, to liver ultrasound images in which the kidney is visible: contrast improvement using a fuzzy intensification operator, contrast improvement applying fuzzy image histogram hyperbolization, and contrast improvement by fuzzy IF-THEN rules. Measured by the Mean Squared Error and Peak Signal to Noise Ratio obtained from different images, the fuzzy methods provided better results; compared with the histogram equalization method, their implementation improved both the contrast and visual quality of the images and the results of liver segmentation algorithms. Comparison of the four algorithms revealed the power of fuzzy logic in improving image contrast compared with traditional image processing algorithms. Moreover, the contrast improvement algorithm based on a fuzzy intensification operator was selected as the strongest algorithm considering the measured indicators. This method can also be used in future studies on other ultrasound images for quality improvement and other image processing and analysis applications.
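The fuzzy intensification operator singled out above has a classic closed form: gray levels are mapped to membership values in [0, 1] and pushed away from the 0.5 crossover. A sketch, together with the MSE/PSNR measures used for evaluation (the normalization step and iteration count are assumptions):

```python
import numpy as np

def fuzzy_intensification(image, iterations=1):
    """Classic fuzzy INT operator: map gray levels to memberships,
    push them away from the 0.5 crossover, leave them in [0, 1]."""
    mu = (image - image.min()) / (image.max() - image.min() + 1e-8)
    for _ in range(iterations):
        mu = np.where(mu <= 0.5, 2 * mu ** 2, 1 - 2 * (1 - mu) ** 2)
    return mu

def mse_psnr(original, enhanced, peak=1.0):
    """Mean Squared Error and Peak Signal-to-Noise Ratio between two
    images scaled to [0, peak]."""
    mse = float(np.mean((original - enhanced) ** 2))
    psnr = 10 * np.log10(peak ** 2 / mse) if mse > 0 else float("inf")
    return mse, psnr
```

Each pass of the INT operator darkens pixels below the crossover and brightens those above it, which is what stretches the contrast.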
Bellesi, Luca; Wyttenbach, Rolf; Gaudino, Diego; Colleoni, Paolo; Pupillo, Francesco; Carrara, Mauro; Braghetti, Antonio; Puligheddu, Carla; Presilla, Stefano
2017-01-01
The aim of this work was to evaluate the detection of low-contrast objects and image quality in computed tomography (CT) phantom images acquired at different tube loadings (i.e. mAs) and reconstructed with different algorithms, in order to find appropriate settings to reduce the dose to the patient without any detriment to the image. Images of supraslice low-contrast objects of a CT phantom were acquired using different mAs values. Images were reconstructed using filtered back projection (FBP), hybrid, and iterative model-based methods. Image quality parameters were evaluated in terms of modulation transfer function, noise, and uniformity using two software resources. For the definition of low-contrast detectability, studies based on both human (i.e. four-alternative forced-choice test) and model observers were performed across the various images. Compared to FBP, image quality parameters were improved by using iterative reconstruction (IR) algorithms. In particular, IR model-based methods provided a 60% noise reduction and a 70% dose reduction, preserving image quality and low-contrast detectability for human radiological evaluation. According to the model observer, the diameters of the minimum detectable detail were around 2 mm (up to 100 mAs); below 100 mAs, the model observer was unable to provide a result. IR methods improve CT protocol quality, providing a potential dose reduction while maintaining good image detectability. A model observer can in principle be useful to assist human performance in CT low-contrast detection tasks and in dose optimisation.
Content dependent selection of image enhancement parameters for mobile displays
NASA Astrophysics Data System (ADS)
Lee, Yoon-Gyoo; Kang, Yoo-Jin; Kim, Han-Eol; Kim, Ka-Hee; Kim, Choon-Woo
2011-01-01
Mobile devices such as cellular phones and portable multimedia players capable of playing terrestrial digital multimedia broadcasting (T-DMB) content have been introduced into the consumer market. In this paper, a content-dependent image quality enhancement method for sharpness, colorfulness, and noise reduction is presented to improve perceived image quality on mobile displays. Human visual experiments are performed to analyze viewers' preferences. The relationship between the objective measures and the optimal values of the image control parameters is modeled by simple lookup tables based on the results of the human visual experiments. Content-dependent values of the image control parameters are then determined from the calculated measures and the predetermined lookup tables. Experimental results indicate that dynamic selection of image control parameters yields better image quality.
Morimoto, Linda Nayeli; Kamaya, Aya; Boulay-Coletta, Isabelle; Fleischmann, Dominik; Molvin, Lior; Tian, Lu; Fisher, George; Wang, Jia; Willmann, Jürgen K
2017-09-01
To compare image quality and lesion conspicuity of reduced dose (RD) CT with model-based iterative reconstruction (MBIR) compared to standard dose (SD) CT in patients undergoing oncological follow-up imaging. Forty-four cancer patients who had a staging SD CT within 12 months were prospectively included to undergo a weight-based RD CT with MBIR. Radiation dose was recorded and tissue attenuation and image noise of four tissue types were measured. Reproducibility of target lesion size measurements of up to 5 target lesions per patient were analyzed. Subjective image quality was evaluated for three readers independently utilizing 4- or 5-point Likert scales. Median radiation dose reduction was 46% using RD CT (P < 0.01). Median image noise across all measured tissue types was lower (P < 0.01) in RD CT. Subjective image quality for RD CT was higher (P < 0.01) in regard to image noise and overall image quality; however, there was no statistically significant difference regarding image sharpness (P = 0.59). There were subjectively more artifacts on RD CT (P < 0.01). Lesion conspicuity was subjectively better in RD CT (P < 0.01). Repeated target lesion size measurements were highly reproducible both on SD CT (ICC = 0.987) and RD CT (ICC = 0.97). RD CT imaging with MBIR provides diagnostic imaging quality and comparable lesion conspicuity on follow-up exams while allowing dose reduction by a median of 46% compared to SD CT imaging.
Ravì, Daniele; Szczotka, Agnieszka Barbara; Shakir, Dzhoshkun Ismail; Pereira, Stephen P; Vercauteren, Tom
2018-06-01
Probe-based confocal laser endomicroscopy (pCLE) is a recent imaging modality that allows performing in vivo optical biopsies. The design of pCLE hardware, and its reliance on an optical fibre bundle, fundamentally limits the image quality, with a few tens of thousands of fibres, each acting as the equivalent of a single-pixel detector, assembled into a single fibre bundle. Video registration techniques can be used to estimate high-resolution (HR) images by exploiting the temporal information contained in a sequence of low-resolution (LR) images. However, the alignment of LR frames, required for the fusion, is computationally demanding and prone to artefacts. In this work, we propose a novel synthetic data generation approach to train exemplar-based Deep Neural Networks (DNNs). HR pCLE images with enhanced quality are recovered by models trained on pairs of estimated HR images (generated by the video registration algorithm) and realistic synthetic LR images. The performance of three different state-of-the-art DNN techniques was analysed on a Smart Atlas database of 8806 images from 238 pCLE video sequences. The results were validated through an extensive image quality assessment that takes into account different quality scores, including a Mean Opinion Score (MOS). Results indicate that the proposed solution produces an effective improvement in the quality of the reconstructed image. The proposed training strategy and associated DNNs allow us to perform convincing super-resolution of pCLE images.
Blind image quality assessment via probabilistic latent semantic analysis.
Yang, Xichen; Sun, Quansen; Wang, Tianshu
2016-01-01
We propose a blind image quality assessment that is highly unsupervised and training-free. The new method is based on the hypothesis that the effect caused by distortion can be expressed by certain latent characteristics. Combined with probabilistic latent semantic analysis, the latent characteristics can be discovered by applying a topic model over a visual word dictionary. Four distortion-affected features are extracted to form the visual words in the dictionary: (1) the block-based local histogram; (2) the block-based local mean value; (3) the mean value of contrast within a block; (4) the variance of contrast within a block. Based on the dictionary, the latent topics in the images can be discovered. The discrepancy between the frequency of the topics in an unfamiliar image and that in a large number of pristine images is used to measure the image quality. Experimental results for four open databases show that the newly proposed method correlates well with human subjective judgments of diversely distorted images.
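The four block-based features enumerated above can be sketched as follows; using the local gradient magnitude as the contrast measure is an assumption, since the paper's exact contrast definition is not given here:

```python
import numpy as np

def block_features(image, block=8):
    """Per-block feature vectors used to build the visual-word
    dictionary: (1) local histogram, (2) local mean, (3) mean
    contrast, (4) contrast variance. Image values assumed in [0, 1];
    contrast approximated by the local gradient magnitude."""
    h, w = image.shape
    feats = []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            b = image[i:i + block, j:j + block]
            hist, _ = np.histogram(b, bins=8, range=(0.0, 1.0))
            gy, gx = np.gradient(b)
            contrast = np.hypot(gx, gy)
            feats.append(np.concatenate([
                hist / hist.sum(),   # (1) block-based local histogram
                [b.mean()],          # (2) block-based local mean
                [contrast.mean()],   # (3) mean contrast within block
                [contrast.var()],    # (4) contrast variance within block
            ]))
    return np.stack(feats)
```

These per-block vectors would then be quantized into visual words, over which the pLSA topic model is fitted.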
The effect of input data transformations on object-based image analysis
LIPPITT, CHRISTOPHER D.; COULTER, LLOYD L.; FREEMAN, MARY; LAMANTIA-BISHOP, JEFFREY; PANG, WYSON; STOW, DOUGLAS A.
2011-01-01
The effect of using spectral transform images as input data on segmentation quality and its potential effect on products generated by object-based image analysis are explored in the context of land cover classification in Accra, Ghana. Five image data transformations are compared to untransformed spectral bands in terms of their effect on segmentation quality and final product accuracy. The relationship between segmentation quality and product accuracy is also briefly explored. Results suggest that input data transformations can aid in the delineation of landscape objects by image segmentation, but the effect is idiosyncratic to the transformation and object of interest. PMID:21673829
Eller, Achim; Wuest, Wolfgang; Scharf, Michael; Brand, Michael; Achenbach, Stephan; Uder, Michael; Lell, Michael M
2013-12-01
To evaluate an automated attenuation-based kV-selection in computed tomography of the chest with respect to radiation dose and image quality, compared to a standard 120 kV protocol. 104 patients were examined using a 128-slice scanner. Fifty examinations (58 ± 15 years, study group) were performed using the automated adaptation of tube potential (100-140 kV), based on the attenuation profile of the scout scan, and 54 examinations (62 ± 14 years, control group) with fixed 120 kV. The estimated CT dose index (CTDI) of the software-proposed setting was compared with the 120 kV protocol. After the scan, CTDI volume (CTDIvol) and dose length product (DLP) were recorded. Objective image quality was assessed by region of interest (ROI) measurements, and subjective image quality by two observers with a 4-point scale (3--excellent, 0--not diagnostic). The algorithm selected 100 kV in 78% and 120 kV in 22% of cases. Overall CTDIvol reduction was 26.6% (34% in the 100 kV group); overall DLP reduction was 22.8% (32.1% in the 100 kV group) (all p<0.001). Subjective image quality was excellent in both groups. The attenuation-based kV-selection algorithm enables a relevant dose reduction (~27%) in chest CT while keeping image quality at a high level. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
No-reference quality assessment based on visual perception
NASA Astrophysics Data System (ADS)
Li, Junshan; Yang, Yawei; Hu, Shuangyan; Zhang, Jiao
2014-11-01
The visual quality assessment of images/videos is an ongoing hot research topic, which has become more and more important for numerous image and video processing applications with the rapid development of digital imaging and communication technologies. The goal of image quality assessment (IQA) algorithms is to automatically assess the quality of images/videos in agreement with human quality judgments. Up to now, two kinds of models have been used for IQA, namely full-reference (FR) and no-reference (NR) models. For FR models, IQA algorithms interpret image quality as fidelity or similarity with a perfect image in some perceptual space. However, the reference image is not available in many practical applications, and a NR IQA approach is desired. Considering natural vision as optimized by millions of years of evolutionary pressure, many methods attempt to achieve consistency in quality prediction by modeling salient physiological and psychological features of the human visual system (HVS). To reach this goal, researchers try to simulate the HVS with image sparsity coding and supervised machine learning, two main features of the HVS: a typical HVS captures scenes by sparsity coding and uses experienced knowledge to perceive objects. In this paper, we propose a novel IQA approach based on visual perception. Firstly, a standard model of the HVS is studied and analyzed, and the sparse representation of the image is accomplished with the model; then, the mapping between sparse codes and subjective quality scores is trained with the regression technique of least squares support vector machine (LS-SVM), yielding a regressor that can predict image quality; finally, the visual quality metric of an image is predicted with the trained regressor.
We validate the performance of the proposed approach on the Laboratory for Image and Video Engineering (LIVE) database, which contains the following distortion types: 227 JPEG2000 images, 233 JPEG images, 174 White Noise images, 174 Gaussian Blur images, and 174 Fast Fading images. The database includes a subjective differential mean opinion score (DMOS) for each image. The experimental results show that the proposed approach not only can assess the quality of many kinds of distorted images, but also exhibits superior accuracy and monotonicity.
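The LS-SVM regression step that maps features to quality scores can be sketched with the standard LS-SVM dual system. The RBF kernel, its width, and the regularization constant below are illustrative choices, and a toy 1-D target stands in for the sparse-code features and DMOS values of the paper:

```python
import numpy as np

def rbf(X, Z, sigma=0.2):
    # Gaussian RBF kernel matrix between row-vector sets X and Z
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_fit(X, y, gamma=100.0):
    """Solve the LS-SVM dual system [[0, 1^T], [1, K + I/gamma]] [b; a] = [0; y]."""
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf(X, X) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    return sol[0], sol[1:]              # bias b, dual coefficients a

def lssvm_predict(X_train, X_new, b, a):
    return rbf(X_new, X_train) @ a + b

# toy stand-in: features X -> "quality scores" y
X = np.linspace(0, 1, 20).reshape(-1, 1)
y = np.sin(2 * np.pi * X[:, 0])
b, a = lssvm_fit(X, y)
pred = lssvm_predict(X, X, b, a)
print(pred.shape)   # (20,)
```

In the paper's setting, `X` would hold the sparse codes and `y` the DMOS values.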
Imaging study of using radiopharmaceuticals labeled with cyclotron-produced 99mTc.
Hou, X; Tanguay, J; Vuckovic, M; Buckley, K; Schaffer, P; Bénard, F; Ruth, T J; Celler, A
2016-12-07
Cyclotron-produced 99mTc (CPTc) has been recognized as an attractive and practical substitute for reactor/generator-based 99mTc. However, the small amount of 92-98Mo in the irradiation of enriched 100Mo could lead to the production of other radioactive technetium isotopes (Tc-impurities) which cannot be chemically separated. Thus, these impurities could contribute to patient dose and affect image quality. The potential radiation dose caused by these Tc-impurities produced using different targets, irradiation conditions, and corresponding to different injection times has been investigated, leading us to create dose-based limits on these parameters for producing clinically acceptable CPTc. However, image quality has not been considered. The aim of the present work is to provide a comprehensive and quantitative analysis of image quality for CPTc. The impact of Tc-impurities in CPTc on image resolution, background noise, and contrast is investigated by performing both Monte-Carlo simulations and phantom experiments. Various targets, irradiation, and acquisition conditions are employed to investigate the image-based limits of CPTc production parameters. Additionally, the relationship between patient dose and image quality of CPTc samples is studied. Only those samples which meet both dose- and image-based limits should be accepted in future clinical studies.
Synthesized view comparison method for no-reference 3D image quality assessment
NASA Astrophysics Data System (ADS)
Luo, Fangzhou; Lin, Chaoyi; Gu, Xiaodong; Ma, Xiaojun
2018-04-01
We develop a no-reference image quality assessment metric to evaluate the quality of synthesized view rendered from the Multi-view Video plus Depth (MVD) format. Our metric is named Synthesized View Comparison (SVC), which is designed for real-time quality monitoring at the receiver side in a 3D-TV system. The metric utilizes the virtual views in the middle which are warped from left and right views by Depth-image-based rendering algorithm (DIBR), and compares the difference between the virtual views rendered from different cameras by Structural SIMilarity (SSIM), a popular 2D full-reference image quality assessment metric. The experimental results indicate that our no-reference quality assessment metric for the synthesized images has competitive prediction performance compared with some classic full-reference image quality assessment metrics.
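The core of the SVC metric, comparing two virtual middle views with SSIM, can be sketched as below. For brevity a single-window (global-statistics) SSIM replaces the windowed SSIM of the paper, and random arrays stand in for the DIBR-warped views:

```python
import numpy as np

def ssim_global(x, y, data_range=255.0):
    # Single-window SSIM (global statistics): a simplification of the
    # windowed SSIM used in the paper, kept dependency-free.
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cxy + c2) /
            ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))

def svc_score(virt_from_left, virt_from_right):
    # SVC: score the synthesized middle view by comparing the two virtual
    # views warped from the left and right cameras against each other.
    return ssim_global(virt_from_left, virt_from_right)

rng = np.random.default_rng(1)
base = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
noisy = np.clip(base + rng.normal(0, 10, base.shape), 0, 255)
print(round(svc_score(base, base), 4))   # 1.0 for identical views
print(svc_score(base, noisy) < 1.0)      # True
```

The DIBR warping step itself, which produces the two virtual views, is outside this sketch.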
NASA Astrophysics Data System (ADS)
Shen, Xia; Bai, Yan-Feng; Qin, Tao; Han, Shen-Sheng
2008-11-01
Factors influencing the quality of lensless ghost imaging are investigated. According to the experimental results, we find that the imaging quality is determined by the number of independent sub light sources on the imaging plane of the reference arm. A qualitative picture based on advanced wave optics is presented to explain the physics behind the experimental phenomena. The present results will be helpful to provide a basis for improving the quality of ghost imaging systems in future works.
Energy Efficient Image/Video Data Transmission on Commercial Multi-Core Processors
Lee, Sungju; Kim, Heegon; Chung, Yongwha; Park, Daihee
2012-01-01
In transmitting image/video data over Video Sensor Networks (VSNs), energy consumption must be minimized while maintaining high image/video quality. Although image/video compression is well known for its efficiency and usefulness in VSNs, the excessive costs associated with encoding computation and complexity still hinder its adoption for practical use. However, it is anticipated that high-performance handheld multi-core devices will be used as VSN processing nodes in the near future. In this paper, we propose a way to improve the energy efficiency of image and video compression with multi-core processors while maintaining the image/video quality. We improve the compression efficiency at the algorithmic level or derive the optimal parameters for the combination of a machine and compression based on the tradeoff between the energy consumption and the image/video quality. Based on experimental results, we confirm that the proposed approach can improve the energy efficiency of the straightforward approach by a factor of 2∼5 without compromising image/video quality. PMID:23202181
NASA Astrophysics Data System (ADS)
Dolly, Steven R.; Anastasio, Mark A.; Yu, Lifeng; Li, Hua
2017-03-01
In current radiation therapy practice, image quality is still assessed subjectively or by utilizing physically-based metrics. Recently, a methodology for objective task-based image quality (IQ) assessment in radiation therapy was proposed by Barrett et al.1 In this work, we present a comprehensive implementation and evaluation of this new IQ assessment methodology. A modular simulation framework was designed to perform an automated, computer-simulated end-to-end radiation therapy treatment. A fully simulated framework was created that utilizes new learning-based stochastic object models (SOMs) to obtain known organ boundaries, generates a set of images directly from the numerical phantoms created with the SOMs, and automates the image segmentation and treatment planning steps of a radiation therapy workflow. By use of this computational framework, therapeutic operating characteristic (TOC) curves can be computed and the area under the TOC curve (AUTOC) can be employed as a figure-of-merit to guide optimization of different components of the treatment planning process. The developed computational framework is employed to optimize X-ray CT pre-treatment imaging. We demonstrate that use of the radiation therapy-based IQ measures leads to different imaging parameters than those obtained by use of physically-based measures.
A CNN based neurobiology inspired approach for retinal image quality assessment.
Mahapatra, Dwarikanath; Roy, Pallab K; Sedai, Suman; Garnavi, Rahil
2016-08-01
Retinal image quality assessment (IQA) algorithms use different hand crafted features for training classifiers without considering the working of the human visual system (HVS) which plays an important role in IQA. We propose a convolutional neural network (CNN) based approach that determines image quality using the underlying principles behind the working of the HVS. CNNs provide a principled approach to feature learning and hence higher accuracy in decision making. Experimental results demonstrate the superior performance of our proposed algorithm over competing methods.
Process perspective on image quality evaluation
NASA Astrophysics Data System (ADS)
Leisti, Tuomas; Halonen, Raisa; Kokkonen, Anna; Weckman, Hanna; Mettänen, Marja; Lensu, Lasse; Ritala, Risto; Oittinen, Pirkko; Nyman, Göte
2008-01-01
The psychological complexity of multivariate image quality evaluation makes it difficult to develop general image quality metrics. Quality evaluation includes several mental processes, and ignoring these processes and using only a few test images can lead to biased results. By using a qualitative/quantitative (Interpretation Based Quality, IBQ) methodology, we examined the process of pair-wise comparison in a setting where the quality of images printed by a laser printer on different paper grades was evaluated. The test image consisted of a picture of a table covered with several objects. Three other images were also used: photographs of a woman, a cityscape and a countryside. In addition to the pair-wise comparisons, observers (N=10) were interviewed about the subjective quality attributes they used in making their quality decisions. An examination of the individual pair-wise comparisons revealed serious inconsistencies in observers' evaluations of the test image content, but not of the other contents. The qualitative analysis showed that this inconsistency was due to the observers' focus of attention. The lack of an easily recognizable context in the test image may have contributed to this inconsistency. To obtain reliable knowledge of the effect of image context or attention on subjective image quality, a qualitative methodology is needed.
“Lucky Averaging”: Quality improvement on Adaptive Optics Scanning Laser Ophthalmoscope Images
Huang, Gang; Zhong, Zhangyi; Zou, Weiyao; Burns, Stephen A.
2012-01-01
Adaptive optics (AO) has greatly improved retinal image resolution. However, even with AO, temporal and spatial variations in image quality still occur due to wavefront fluctuations, intra-frame focus shifts and other factors. As a result, aligning and averaging images can produce a mean image that has lower resolution or contrast than the best images within a sequence. To address this, we propose an image post-processing scheme called “lucky averaging”, analogous to lucky imaging (Fried, 1978), based on computing the best local contrast over time. Results from eye data demonstrate improvements in image quality. PMID:21964097
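A minimal sketch of the lucky-averaging idea follows, assuming patch-wise RMS contrast as the local quality measure and a fixed keep fraction; both are our choices for illustration, not necessarily the authors' exact criterion:

```python
import numpy as np

def lucky_average(frames, patch=8, keep=0.5):
    """For each patch location, rank frames by local RMS contrast and
    average only the best fraction `keep` of them."""
    t, h, w = frames.shape
    out = np.zeros((h, w))
    k = max(1, int(round(keep * t)))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            stack = frames[:, i:i + patch, j:j + patch]
            contrast = stack.std(axis=(1, 2))   # RMS contrast per frame
            best = np.argsort(contrast)[-k:]    # k locally sharpest frames
            out[i:i + patch, j:j + patch] = stack[best].mean(axis=0)
    return out

rng = np.random.default_rng(2)
frames = rng.normal(100, 5, size=(10, 32, 32))   # stand-in image sequence
avg = lucky_average(frames)
print(avg.shape)   # (32, 32)
```

Registration (alignment) of the frames is assumed to have been done beforehand, as in the paper.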
NASA Astrophysics Data System (ADS)
Shen, Qian; Bai, Yanfeng; Shi, Xiaohui; Nan, Suqin; Qu, Lijie; Li, Hengxing; Fu, Xiquan
2017-07-01
The difference in imaging quality between different ghost imaging schemes is studied by using the coherent-mode representation of partially coherent fields. It is shown that the difference mainly relies on changes in the distribution of the decomposition coefficients of the imaged object when the light source is fixed. For a newly designed imaging scheme, one only needs to obtain the distribution of the decomposition coefficients and compare it with that of an existing imaging system in order to predict imaging quality. By choosing several typical ghost imaging systems, we theoretically and experimentally verify our results.
Oriented modulation for watermarking in direct binary search halftone images.
Guo, Jing-Ming; Su, Chang-Cheng; Liu, Yun-Fu; Lee, Hua; Lee, Jiann-Der
2012-09-01
In this paper, a halftoning-based watermarking method is presented. This method enables high pixel-depth watermark embedding while maintaining high image quality. The technique is capable of embedding watermarks with pixel depths of up to 3 bits without causing prominent degradation of image quality. To achieve high image quality, parallel oriented high-efficiency direct binary search (DBS) halftoning is selected for integration with the proposed orientation modulation (OM) method. The OM method utilizes different halftone texture orientations to carry different watermark data. In the decoder, least-mean-square-trained filters are applied for feature extraction from watermarked images in the frequency domain, and a naïve Bayes classifier is used to analyze the extracted features and ultimately decode the watermark data. Experimental results show that the DBS-based OM encoding method maintains a high degree of image quality and achieves the processing efficiency and robustness required for printing applications.
Aurumskjöld, Marie-Louise; Söderberg, Marcus; Stålhammar, Fredrik; von Steyern, Kristina Vult; Tingberg, Anders; Ydström, Kristina
2018-06-01
Background In pediatric patients, computed tomography (CT) is important in the medical chain of diagnosing and monitoring various diseases. Because children are more radiosensitive than adults, they require minimal radiation exposure. One way to achieve this goal is to implement new technical solutions, like iterative reconstruction. Purpose To evaluate the potential of a new, iterative, model-based method for reconstructing (IMR) pediatric abdominal CT at a low radiation dose and determine whether it maintains or improves image quality, compared to the current reconstruction method. Material and Methods Forty pediatric patients underwent abdominal CT. Twenty patients were examined with the standard dose settings and 20 patients were examined with a 32% lower radiation dose. Images from the standard examinations were reconstructed with a hybrid iterative reconstruction method (iDose4), and images from the low-dose examinations were reconstructed with both iDose4 and IMR. Image quality was evaluated subjectively by three observers, according to modified EU image quality criteria, and evaluated objectively based on the noise observed in liver images. Results Visual grading characteristics analyses showed no difference in image quality between the standard dose examination reconstructed with iDose4 and the low dose examination reconstructed with IMR. IMR showed lower image noise in the liver compared to iDose4 images. Inter- and intra-observer variance was low: the intraclass coefficient was 0.66 (95% confidence interval = 0.60-0.71) for the three observers. Conclusion IMR provided image quality equivalent or superior to the standard iDose4 method for evaluating pediatric abdominal CT, even with a 32% dose reduction.
NASA Astrophysics Data System (ADS)
Yan, Hao; Cervino, Laura; Jia, Xun; Jiang, Steve B.
2012-04-01
While compressed sensing (CS)-based algorithms have been developed for the low-dose cone beam CT (CBCT) reconstruction, a clear understanding of the relationship between the image quality and imaging dose at low-dose levels is needed. In this paper, we qualitatively investigate this subject in a comprehensive manner with extensive experimental and simulation studies. The basic idea is to plot both the image quality and imaging dose together as functions of the number of projections and mAs per projection over the whole clinically relevant range. On this basis, a clear understanding of the tradeoff between the image quality and imaging dose can be achieved and optimal low-dose CBCT scan protocols can be developed to maximize the dose reduction while minimizing the image quality loss for various imaging tasks in image-guided radiation therapy (IGRT). Main findings of this work include (1) under the CS-based reconstruction framework, image quality has little degradation over a large range of dose variation. Image quality degradation becomes evident when the imaging dose (approximated with the x-ray tube load) is decreased below 100 total mAs. An imaging dose lower than 40 total mAs leads to a dramatic image degradation, and thus should be used cautiously. Optimal low-dose CBCT scan protocols likely fall in the dose range of 40-100 total mAs, depending on the specific IGRT applications. (2) Among different scan protocols at a constant low-dose level, the super sparse-view reconstruction with the projection number less than 50 is the most challenging case, even with strong regularization. Better image quality can be acquired with low mAs protocols. (3) The optimal scan protocol is the combination of a medium number of projections and a medium level of mAs/view. This is more evident when the dose is around 72.8 total mAs or below and when the ROI is a low-contrast or high-resolution object. Based on our results, the optimal number of projections is around 90 to 120. 
(4) The clinically acceptable lowest imaging dose level is task dependent. In our study, 72.8 mAs is a safe dose level for visualizing low-contrast objects, while 12.2 total mAs is sufficient for detecting high-contrast objects of diameter greater than 3 mm.
Wood, Tim J; Moore, Craig S; Horsfield, Carl J; Saunderson, John R; Beavis, Andrew W
2015-01-01
The purpose of this study was to develop size-based radiotherapy kilovoltage cone beam CT (CBCT) protocols for the pelvis. Image noise was measured in an elliptical phantom of varying size for a range of exposure factors. Based on a previously defined "small pelvis" reference patient and CBCT protocol, appropriate exposure factors for small, medium, large and extra-large patients were derived which approximate the image noise behaviour observed on a Philips CT scanner (Philips Medical Systems, Best, Netherlands) with automatic exposure control (AEC). Selection criteria, based on maximum tube current-time product per rotation selected during the radiotherapy treatment planning scan, were derived based on an audit of patient size. It has been demonstrated that 110 kVp yields acceptable image noise for reduced patient dose in pelvic CBCT scans of small, medium and large patients, when compared with manufacturer's default settings (125 kVp). Conversely, extra-large patients require increased exposure factors to give acceptable images. 57% of patients in the local population now receive much lower radiation doses, whereas 13% require higher doses (but now yield acceptable images). The implementation of size-based exposure protocols has significantly reduced radiation dose to the majority of patients with no negative impact on image quality. Increased doses are required on the largest patients to give adequate image quality. The development of size-based CBCT protocols that use the planning CT scan (with AEC) to determine which protocol is appropriate ensures adequate image quality whilst minimizing patient radiation dose.
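The selection logic, choosing a CBCT protocol from the maximum mAs per rotation selected by the planning scan's AEC, might look like the following. The thresholds and protocol settings here are hypothetical placeholders for illustration, not the audited values derived in the study:

```python
def select_cbct_protocol(max_planning_mAs):
    """Map the maximum tube current-time product per rotation chosen by the
    planning CT's AEC onto a size-based CBCT protocol. All thresholds and
    settings below are hypothetical placeholders, not the study's values."""
    if max_planning_mAs < 150:
        return "small: 110 kVp, reduced mAs"
    if max_planning_mAs < 250:
        return "medium: 110 kVp"
    if max_planning_mAs < 350:
        return "large: 110 kVp, increased mAs"
    return "extra-large: 125 kVp, increased mAs"

print(select_cbct_protocol(120))   # small: 110 kVp, reduced mAs
print(select_cbct_protocol(400))   # extra-large: 125 kVp, increased mAs
```

The design point worth noting is that the planning CT's AEC output serves as a free patient-size surrogate, so no extra measurement is needed at CBCT time.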
Brown, Ryan; Storey, Pippa; Geppert, Christian; McGorty, KellyAnne; Leite, Ana Paula Klautau; Babb, James; Sodickson, Daniel K.; Wiggins, Graham C.; Moy, Linda
2014-01-01
Objectives To evaluate the image quality of T1-weighted fat-suppressed breast MRI at 7 T, and to compare 7-T and 3-T images. Methods Seventeen subjects were imaged using a 7-T bilateral transmit-receive coil and adiabatic inversion-based fat suppression (FS). Images were graded on a five-point scale and quantitatively assessed through signal-to-noise ratio (SNR), fibroglandular/fat contrast and signal uniformity measurements. Results Image scores at 7 T and 3 T were similar on standard-resolution images (1.1 × 1.1 × 1.1-1.6 mm³), indicating that high-quality breast imaging with clinical parameters can be performed at 7 T. The 7-T SNR advantage was underscored on 0.6-mm isotropic images, where image quality was significantly greater than at 3 T (4.2 versus 3.1, P≤0.0001). Fibroglandular/fat contrast was more than two times higher at 7 T than at 3 T, owing to effective adiabatic inversion-based FS and the inherent 7-T signal advantage. Signal uniformity was comparable at 7 T and 3 T (P<0.05). Similar 7-T image quality was observed in all subjects, indicating robustness against anatomical variation. Conclusion The 7-T bilateral transmit-receive coil and adiabatic inversion-based FS technique mitigate the impact of high-field heterogeneity to produce image quality that is as good as or better than at 3 T. PMID:23896763
Image quality improvement in cone-beam CT using the super-resolution technique.
Oyama, Asuka; Kumagai, Shinobu; Arai, Norikazu; Takata, Takeshi; Saikawa, Yusuke; Shiraishi, Kenshiro; Kobayashi, Takenori; Kotoku, Jun'ichi
2018-04-05
This study was conducted to improve cone-beam computed tomography (CBCT) image quality using the super-resolution technique, a method of inferring a high-resolution image from a low-resolution image. The technique uses two matrices, so-called dictionaries, constructed respectively from high-resolution and low-resolution image bases. In this study, a CBCT image, as a low-resolution image, is represented as a linear combination of atoms, the image bases in the low-resolution dictionary. The corresponding super-resolution image is inferred by multiplying the coefficients with the high-resolution dictionary atoms extracted from planning CT images. To evaluate the proposed method, we computed the root mean square error (RMSE) and structural similarity (SSIM). The resulting RMSE and SSIM between the super-resolution images and the planning CT images were, respectively, as much as 0.81 and 1.29 times better than those obtained without the super-resolution technique. In summary, we used the super-resolution technique to improve CBCT image quality.
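The patch-level reconstruction step can be sketched as follows. Random matrices stand in for the coupled dictionaries (which the paper learns from planning-CT/CBCT patch pairs), and plain least squares replaces the sparse coding of the paper purely for brevity:

```python
import numpy as np

rng = np.random.default_rng(3)
n_atoms, lr_dim, hr_dim = 20, 16, 64   # e.g. 4x4 LR patches, 8x8 HR patches

# Coupled dictionaries: column k of D_lr and D_hr describes the same atom
# at the two resolutions (random stand-ins for learned dictionaries).
D_lr = rng.normal(size=(lr_dim, n_atoms))
D_hr = rng.normal(size=(hr_dim, n_atoms))

def super_resolve_patch(lr_patch):
    """Code the LR patch over D_lr, then synthesize the HR patch with D_hr
    using the same coefficients (least squares in place of sparse coding)."""
    coef, *_ = np.linalg.lstsq(D_lr, lr_patch, rcond=None)
    return D_hr @ coef

lr = rng.normal(size=lr_dim)
hr = super_resolve_patch(lr)
print(hr.shape)   # (64,)
```

The full image is then assembled by tiling or averaging the reconstructed high-resolution patches.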
2013-01-01
Background In an ongoing study of racial/ethnic disparities in breast cancer stage at diagnosis, we consented patients to allow us to review their mammogram images, in order to examine the potential role of mammogram image quality on this disparity. Methods In a population-based study of urban breast cancer patients, a single breast imaging specialist (EC) performed a blinded review of the index mammogram that prompted diagnostic follow-up, as well as recent prior mammograms performed approximately one or two years prior to the index mammogram. Seven indicators of image quality were assessed on a five-point Likert scale, where 4 and 5 represented good and excellent quality. These included 3 technologist-associated image quality (TAIQ) indicators (positioning, compression, sharpness), and 4 machine associated image quality (MAIQ) indicators (contrast, exposure, noise and artifacts). Results are based on 494 images examined for 268 patients, including 225 prior images. Results Whereas MAIQ was generally high, TAIQ was more variable. In multivariable models of sociodemographic predictors of TAIQ, less income was associated with lower TAIQ (p < 0.05). Among prior mammograms, lower TAIQ was subsequently associated with later stage at diagnosis, even after adjusting for multiple patient and practice factors (OR = 0.80, 95% CI: 0.65, 0.99). Conclusions Considerable gains could be made in terms of increasing image quality through better positioning, compression and sharpness, gains that could impact subsequent stage at diagnosis. PMID:23621946
Optimized OFDM Transmission of Encrypted Image Over Fading Channel
NASA Astrophysics Data System (ADS)
Eldin, Salwa M. Serag
2014-11-01
This paper compares the quality of diffusion-based and permutation-based encrypted image transmission using orthogonal frequency division multiplexing (OFDM) over a wireless fading channel. Sensitivity to carrier frequency offsets (CFOs) is one of the limitations of OFDM transmission, and it is compensated for here. Different OFDM diffusions are investigated to optimize encrypted image transmission. The peak signal-to-noise ratio between the original image and the decrypted image is used to evaluate the received image quality. Chaotic encrypted images modulated with CFO-compensated FFT-OFDM were found to give outstanding performance compared with other encryption and modulation techniques.
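The PSNR criterion used to score the received images is standard and can be computed as:

```python
import numpy as np

def psnr(original, received, peak=255.0):
    """Peak signal-to-noise ratio in dB between the original image and the
    decrypted image recovered at the receiver."""
    mse = np.mean((original.astype(float) - received.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

a = np.full((8, 8), 100.0)
b = a + 5.0                   # uniform error of 5 grey levels -> MSE = 25
print(round(psnr(a, b), 2))   # 10*log10(255^2/25) = 34.15
```

Higher PSNR at the receiver indicates better preservation of the decrypted image under the fading channel and CFO conditions studied.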
Image registration assessment in radiotherapy image guidance based on control chart monitoring.
Xia, Wenyao; Breen, Stephen L
2018-04-01
Image guidance with cone beam computed tomography in radiotherapy can guarantee the precision and accuracy of patient positioning prior to treatment delivery. During the image guidance process, operators must expend considerable effort to evaluate the image guidance quality before correcting a patient's position. This work proposes an image registration assessment method based on control chart monitoring to reduce the effort required of the operator. Using a control chart plotted from the daily registration scores of each patient, the proposed method can quickly detect both alignment errors and image quality inconsistency. The proposed method therefore provides a clear guideline for operators to identify unacceptable image quality and unacceptable image registration with minimal effort. Experimental results on control charts from a clinical database of 10 patients undergoing prostate radiotherapy demonstrate that the proposed method can quickly identify out-of-control signals and find the special causes of out-of-control registration events.
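A minimal sketch of control-chart monitoring of daily registration scores follows, assuming a basic Shewhart individuals chart with limits estimated from a baseline of early sessions; the paper's exact charting scheme may differ:

```python
import numpy as np

def control_chart_flags(scores, baseline_n=5, n_sigma=3.0):
    """Flag sessions whose registration score falls outside
    mean +/- n_sigma of the first `baseline_n` sessions."""
    base = np.asarray(scores[:baseline_n], dtype=float)
    mu, sd = base.mean(), base.std(ddof=1)
    lo, hi = mu - n_sigma * sd, mu + n_sigma * sd
    return [not (lo <= s <= hi) for s in scores]

# hypothetical daily registration scores; session 7 is anomalous
daily = [0.91, 0.93, 0.92, 0.90, 0.94, 0.92, 0.55, 0.93]
print(control_chart_flags(daily))   # only the 0.55 session is flagged
```

An out-of-control signal prompts the operator to inspect that session for alignment error or degraded image quality rather than reviewing every registration manually.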
NASA Astrophysics Data System (ADS)
Yu, Xuelian; Chen, Qian; Gu, Guohua; Ren, Jianle; Sui, Xiubao
2015-02-01
Designing an objective quality assessment for color-fused images is a very demanding and challenging task. We propose four no-reference metrics based on human visual system characteristics for objectively evaluating the quality of false-color fusion images. The perceived edge metric (PEM) is defined based on a visual perception model and the color image gradient similarity between the fused image and the source images. The perceptual contrast metric (PCM) is established by associating multi-scale contrast and a contrast sensitivity filter (CSF) varying with the color components. A linear combination of the standard deviation and mean value over the fused image constructs the image colorfulness metric (ICM). The color comfort metric (CCM) is designed from the average saturation and the ratio of pixels with high and low saturation. The qualitative and quantitative experimental results demonstrate that the proposed metrics agree well with subjective perception.
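The ICM is described only as a linear combination of the standard deviation and mean over the fused image; a sketch in the spirit of the well-known Hasler-Süsstrunk colorfulness measure (whose 0.3 weight we assume for illustration, since the paper's weights are not given here) is:

```python
import numpy as np

def colorfulness(rgb):
    """Colorfulness as a linear combination of the standard deviation and
    mean of the rg / yb opponent channels (Hasler-Susstrunk style)."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    rg, yb = r - g, 0.5 * (r + g) - b
    sigma = np.hypot(rg.std(), yb.std())   # combined opponent-channel spread
    mu = np.hypot(rg.mean(), yb.mean())    # combined opponent-channel mean
    return sigma + 0.3 * mu

grey = np.full((16, 16, 3), 128, dtype=np.uint8)
print(colorfulness(grey))   # 0.0 for an achromatic image
```

A fused image with vivid, varied false colors scores high; a washed-out fusion scores near zero.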
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shieh, Chun-Chien; Kipritidis, John; O’Brien, Ricky T.
Purpose: Respiratory signal, binning method, and reconstruction algorithm are three major controllable factors affecting image quality in thoracic 4D cone-beam CT (4D-CBCT), which is widely used in image guided radiotherapy (IGRT). Previous studies have investigated each of these factors individually, but no integrated sensitivity analysis has been performed. In addition, projection angular spacing is also a key factor in reconstruction, but how it affects image quality is not obvious. An investigation of the impacts of these four factors on image quality can help determine the most effective strategy in improving 4D-CBCT for IGRT. Methods: Fourteen 4D-CBCT patient projection datasets withmore » various respiratory motion features were reconstructed with the following controllable factors: (i) respiratory signal (real-time position management, projection image intensity analysis, or fiducial marker tracking), (ii) binning method (phase, displacement, or equal-projection-density displacement binning), and (iii) reconstruction algorithm [Feldkamp–Davis–Kress (FDK), McKinnon–Bates (MKB), or adaptive-steepest-descent projection-onto-convex-sets (ASD-POCS)]. The image quality was quantified using signal-to-noise ratio (SNR), contrast-to-noise ratio, and edge-response width in order to assess noise/streaking and blur. The SNR values were also analyzed with respect to the maximum, mean, and root-mean-squared-error (RMSE) projection angular spacing to investigate how projection angular spacing affects image quality. Results: The choice of respiratory signals was found to have no significant impact on image quality. Displacement-based binning was found to be less prone to motion artifacts compared to phase binning in more than half of the cases, but was shown to suffer from large interbin image quality variation and large projection angular gaps. Both MKB and ASD-POCS resulted in noticeably improved image quality almost 100% of the time relative to FDK. 
In addition, SNR values were found to increase with decreasing RMSE values of projection angular gaps with strong correlations (r ≈ −0.7) regardless of the reconstruction algorithm used. Conclusions: Based on the authors’ results, displacement-based binning methods, better reconstruction algorithms, and the acquisition of even projection angular views are the most important factors to consider for improving thoracic 4D-CBCT image quality. In view of the practical issues with displacement-based binning and the fact that projection angular spacing is not currently directly controllable, development of better reconstruction algorithms represents the most effective strategy for improving image quality in thoracic 4D-CBCT for IGRT applications at the current stage.
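The angular-spacing statistics analyzed in this abstract (maximum, mean, and RMSE projection angular gaps) can be computed as in the following sketch; the function name and the wrap-around handling of a full 360° scan are our assumptions, not the authors' code:

```python
import numpy as np

def angular_gap_stats(angles_deg):
    """Maximum, mean, and RMSE of the gaps between sorted projection
    angles, including the wrap-around gap of a full 360-degree scan."""
    a = np.sort(np.asarray(angles_deg, dtype=float) % 360.0)
    gaps = np.diff(np.concatenate([a, [a[0] + 360.0]]))
    return gaps.max(), gaps.mean(), float(np.sqrt(np.mean(gaps ** 2)))

# Evenly spaced views: every statistic equals the nominal 2-degree spacing.
mx, mean_gap, rmse = angular_gap_stats(np.arange(0.0, 360.0, 2.0))
```

For unevenly sampled scans the RMSE rises above the mean gap, which is the behaviour the reported r ≈ −0.7 correlation exploits.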
Impact of image quality on OCT angiography based quantitative measurements.
Al-Sheikh, Mayss; Ghasemi Falavarjani, Khalil; Akil, Handan; Sadda, SriniVas R
2017-01-01
To study the impact of image quality on quantitative measurements and the frequency of segmentation error with optical coherence tomography angiography (OCTA). Seventeen eyes of 10 healthy individuals were included in this study. OCTA was performed using a swept-source device (Triton, Topcon). Each subject underwent three scanning sessions 1-2 min apart; the first two scans were obtained under standard conditions, and for the third session the image quality index was reduced by application of a topical ointment. En face OCTA images of the retinal vasculature were generated using the default segmentation for the superficial and deep retinal layers (SRL, DRL). The intraclass correlation coefficient (ICC) was used as a measure of repeatability. The frequency of segmentation error, motion artifact, banding artifact and projection artifact was also compared among the three sessions. The frequency of segmentation error and motion artifact was statistically similar between high and low image quality sessions (P = 0.707 and P = 1, respectively). However, the frequency of projection and banding artifacts was higher with lower image quality. The vessel density in the SRL was highly repeatable in the high image quality sessions (ICC = 0.8); however, repeatability was low when comparing the high and low image quality measurements (ICC = 0.3). In the DRL, the repeatability of the vessel density measurements was fair in the high quality sessions (ICC = 0.6 and ICC = 0.5, with and without automatic artifact removal, respectively) and poor when comparing high and low image quality sessions (ICC = 0.3 and ICC = 0.06, with and without automatic artifact removal, respectively). The frequency of artifacts is higher and the repeatability of the measurements is lower with lower image quality. The impact of the image quality index should always be considered in OCTA-based quantitative measurements.
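For readers unfamiliar with the repeatability statistic used here, a minimal one-way random-effects ICC can be sketched as follows; the paper does not state which ICC form it used, so this is illustrative:

```python
import numpy as np

def icc_oneway(data):
    """One-way random-effects ICC(1,1): data is (subjects, sessions)."""
    x = np.asarray(data, dtype=float)
    n, k = x.shape
    subj_means = x.mean(axis=1)
    msb = k * np.sum((subj_means - x.mean()) ** 2) / (n - 1)      # between subjects
    msw = np.sum((x - subj_means[:, None]) ** 2) / (n * (k - 1))  # within subjects
    return (msb - msw) / (msb + (k - 1) * msw)

# Identical repeat scans give perfect repeatability; disagreement lowers the ICC.
perfect = icc_oneway([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]])
noisy = icc_oneway([[1.0, 2.0], [2.0, 1.0], [3.0, 3.0]])
```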
The use of vision-based image quality metrics to predict low-light performance of camera phones
NASA Astrophysics Data System (ADS)
Hultgren, B.; Hertel, D.
2010-01-01
Small digital camera modules such as those in mobile phones have become ubiquitous. Their low-light performance is of utmost importance since a high percentage of images are made under low lighting conditions where image quality failure may occur due to blur, noise, and/or underexposure. These modes of image degradation are not mutually exclusive: they share common roots in the physics of the imager, the constraints of image processing, and the general trade-off situations in camera design. A comprehensive analysis of failure modes is needed in order to understand how their interactions affect overall image quality. Low-light performance is reported for DSLR, point-and-shoot, and mobile phone cameras. The measurements target blur, noise, and exposure error. Image sharpness is evaluated from three different physical measurements: static spatial frequency response, handheld motion blur, and statistical information loss due to image processing. Visual metrics for sharpness, graininess, and brightness are calculated from the physical measurements, and displayed as orthogonal image quality metrics to illustrate the relative magnitude of image quality degradation as a function of subject illumination. The impact of each of the three sharpness measurements on overall sharpness quality is displayed for different light levels. The power spectrum of the statistical information target is a good representation of natural scenes, thus providing a defined input signal for the measurement of power-spectrum based signal-to-noise ratio to characterize overall imaging performance.
Zhang, Xuming; Ren, Jinxia; Huang, Zhiwen; Zhu, Fei
2016-01-01
Multimodal medical image fusion (MIF) plays an important role in clinical diagnosis and therapy. Existing MIF methods tend to introduce artifacts, lead to loss of image details or produce low-contrast fused images. To address these problems, a novel spiking cortical model (SCM) based MIF method has been proposed in this paper. The proposed method can generate high-quality fused images using the weighting fusion strategy based on the firing times of the SCM. In the weighting fusion scheme, the weight is determined by combining the entropy information of pulse outputs of the SCM with the Weber local descriptor operating on the firing mapping images produced from the pulse outputs. The extensive experiments on multimodal medical images show that compared with the numerous state-of-the-art MIF methods, the proposed method can preserve image details very well and avoid the introduction of artifacts effectively, and thus it significantly improves the quality of fused images in terms of human vision and objective evaluation criteria such as mutual information, edge preservation index, structural similarity based metric, fusion quality index, fusion similarity metric and standard deviation. PMID:27649190
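Of the objective criteria listed, mutual information is straightforward to sketch from the joint grey-level histogram; the bin count and implementation details below are our choices, not the paper's:

```python
import numpy as np

def mutual_information(img_a, img_b, bins=64):
    """Mutual information (in nats) between two images, estimated from
    their joint grey-level histogram."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of img_a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of img_b
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
img = rng.random((64, 64))
self_mi = mutual_information(img, img)                     # maximal: equals the entropy
cross_mi = mutual_information(img, rng.random((64, 64)))   # independent images
```

A fused image that shares more information with its sources scores higher on this metric.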
Shaw, Leslee J; Blankstein, Ron; Jacobs, Jill E; Leipsic, Jonathon A; Kwong, Raymond Y; Taqueti, Viviany R; Beanlands, Rob S B; Mieres, Jennifer H; Flamm, Scott D; Gerber, Thomas C; Spertus, John; Di Carli, Marcelo F
2017-12-01
The aims of the current statement are to refine the definition of quality in cardiovascular imaging and to propose novel methodological approaches to inform the demonstration of quality in imaging in future clinical trials and registries. We propose defining quality in cardiovascular imaging using an analytical framework put forth by the Institute of Medicine whereby quality was defined as testing being safe, effective, patient-centered, timely, equitable, and efficient. The implications of each of these components of quality health care are as essential for cardiovascular imaging as they are for other areas within health care. Our proposed statement may serve as the foundation for integrating these quality indicators into establishing designations of quality laboratory practices and developing standards for value-based payment reform for imaging services. We also include recommendations for future clinical research to fulfill quality aims within cardiovascular imaging, including clinical hypotheses of improving patient outcomes, the importance of health status as an end point, and deferred testing options. Future research should evolve to define novel methods optimized for the role of cardiovascular imaging for detecting disease and guiding treatment and to demonstrate the role of cardiovascular imaging in facilitating healthcare quality. © 2017 American Heart Association, Inc.
NASA Astrophysics Data System (ADS)
Melli, S. Ali; Wahid, Khan A.; Babyn, Paul; Cooper, David M. L.; Gopi, Varun P.
2016-12-01
Synchrotron X-ray Micro Computed Tomography (Micro-CT) is an imaging technique which is increasingly used for non-invasive in vivo preclinical imaging. However, it often requires a large number of projections from many different angles to reconstruct high-quality images leading to significantly high radiation doses and long scan times. To utilize this imaging technique further for in vivo imaging, we need to design reconstruction algorithms that reduce the radiation dose and scan time without reduction of reconstructed image quality. This research is focused on using a combination of gradient-based Douglas-Rachford splitting and discrete wavelet packet shrinkage image denoising methods to design an algorithm for reconstruction of large-scale reduced-view synchrotron Micro-CT images with acceptable quality metrics. These quality metrics are computed by comparing the reconstructed images with a high-dose reference image reconstructed from 1800 equally spaced projections spanning 180°. Visual and quantitative-based performance assessment of a synthetic head phantom and a femoral cortical bone sample imaged in the biomedical imaging and therapy bending magnet beamline at the Canadian Light Source demonstrates that the proposed algorithm is superior to the existing reconstruction algorithms. Using the proposed reconstruction algorithm to reduce the number of projections in synchrotron Micro-CT is an effective way to reduce the overall radiation dose and scan time which improves in vivo imaging protocols.
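The wavelet-packet shrinkage step mentioned above is typically a thresholding rule applied to transform coefficients; the soft-thresholding operator below is a generic sketch, not necessarily the exact rule used by the authors:

```python
import numpy as np

def soft_threshold(coeffs, lam):
    """Soft-thresholding: shrink coefficient magnitudes by lam and zero
    out anything smaller, suppressing noise in the transform domain."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - lam, 0.0)

shrunk = soft_threshold(np.array([-3.0, -0.5, 0.2, 2.0]), lam=1.0)
```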
Comprehensive model for predicting perceptual image quality of smart mobile devices.
Gong, Rui; Xu, Haisong; Luo, M R; Li, Haifeng
2015-01-01
An image quality model for smart mobile devices was proposed based on visual assessments of several image quality attributes. A series of psychophysical experiments were carried out on two kinds of smart mobile devices, i.e., smart phones and tablet computers, in which naturalness, colorfulness, brightness, contrast, sharpness, clearness, and overall image quality were visually evaluated under three lighting environments via categorical judgment method for various application types of test images. On the basis of Pearson correlation coefficients and factor analysis, the overall image quality could first be predicted by its two constituent attributes with multiple linear regression functions for different types of images, respectively, and then the mathematical expressions were built to link the constituent image quality attributes with the physical parameters of smart mobile devices and image appearance factors. The procedure and algorithms were applicable to various smart mobile devices, different lighting conditions, and multiple types of images, and performance was verified by the visual data.
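The multiple-linear-regression step linking overall quality to its constituent attributes can be sketched with an ordinary least-squares fit; the variable names and synthetic data are illustrative:

```python
import numpy as np

def fit_quality_model(attributes, overall):
    """Least-squares fit: overall quality as a linear combination of
    constituent attribute ratings plus an intercept."""
    X = np.column_stack([np.ones(len(overall)), attributes])
    coef, *_ = np.linalg.lstsq(X, overall, rcond=None)
    return coef  # [intercept, weight_1, weight_2, ...]

# Synthetic check: a known model overall = 1 + 2*a1 + 3*a2 is recovered.
rng = np.random.default_rng(1)
attrs = rng.random((20, 2))
coef = fit_quality_model(attrs, 1.0 + attrs @ np.array([2.0, 3.0]))
```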
Analyser-based mammography using single-image reconstruction.
Briedis, Dahliyani; Siu, Karen K W; Paganin, David M; Pavlov, Konstantin M; Lewis, Rob A
2005-08-07
We implement an algorithm that is able to decode a single analyser-based x-ray phase-contrast image of a sample, converting it into an equivalent conventional absorption-contrast radiograph. The algorithm assumes the projection approximation for x-ray propagation in a single-material object embedded in a substrate of approximately uniform thickness. Unlike the phase-contrast images, which have both directional bias and a bias towards edges present in the sample, the reconstructed images are directly interpretable in terms of the projected absorption coefficient of the sample. The technique was applied to a Leeds TOR[MAM] phantom, which is designed to test mammogram quality by the inclusion of simulated microcalcifications, filaments and circular discs. This phantom was imaged at varying doses using three modalities: analyser-based synchrotron phase-contrast images converted to equivalent absorption radiographs using our algorithm, slot-scanned synchrotron imaging and imaging using a conventional mammography unit. Features in the resulting images were then assigned a quality score by volunteers. The single-image reconstruction method achieved higher scores at equivalent and lower doses than the conventional mammography images, but showed no improvement in visualization of the simulated microcalcifications and some degradation in image quality at reduced doses for filament features.
NASA Astrophysics Data System (ADS)
Moore, Craig S.; Wood, Tim J.; Saunderson, John R.; Beavis, Andrew W.
2017-09-01
The use of computer simulated digital x-radiographs for optimisation purposes has become widespread in recent years. To make these optimisation investigations effective, it is vital that simulated radiographs contain accurate anatomical and system noise. Computer algorithms that simulate radiographs based solely on the incident detector x-ray intensity (‘dose’) have been reported extensively in the literature. However, while it has been established for digital mammography that x-ray beam quality is an important factor when modelling noise in simulated images, there are no such studies for diagnostic imaging of the chest, abdomen and pelvis. This study investigates the influence of beam quality on image noise in a digital radiography (DR) imaging system, and incorporates these effects into a digitally reconstructed radiograph (DRR) computer simulator. Image noise was measured on a real DR imaging system as a function of dose (absorbed energy) over a range of clinically relevant beam qualities. Simulated ‘absorbed energy’ and ‘beam quality’ DRRs were then created for each patient and tube voltage under investigation. Simulated noise images, corrected for dose and beam quality, were subsequently produced from the absorbed energy and beam quality DRRs, using the measured noise, absorbed energy and beam quality relationships. The noise images were superimposed onto the noiseless absorbed energy DRRs to create the final images. Signal-to-noise measurements in simulated chest, abdomen and spine images were within 10% of the corresponding measurements in real images. This compares favourably to our previous algorithm, where images corrected for dose only were all within 20%.
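The noise-superposition step can be caricatured as below; the square-root dose dependence and the beam_quality_factor multiplier are illustrative assumptions standing in for the measured noise/dose/beam-quality relationships:

```python
import numpy as np

def add_simulated_noise(drr, dose_scale, beam_quality_factor=1.0, seed=0):
    """Superimpose dose-dependent Gaussian noise on a noiseless DRR.
    Noise sigma falls as the square root of dose, so quadrupling the
    dose halves the noise."""
    rng = np.random.default_rng(seed)
    sigma = beam_quality_factor * np.sqrt(drr / dose_scale)
    return drr + rng.normal(size=drr.shape) * sigma

drr = np.full((32, 32), 100.0)   # uniform noiseless absorbed-energy DRR
low_dose = add_simulated_noise(drr, dose_scale=1.0)
high_dose = add_simulated_noise(drr, dose_scale=16.0)
```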
Balter, James M; Antonuk, Larry E
2008-01-01
In-room radiography is not a new concept for image-guided radiation therapy. Rapid advances in technology, however, have made this positioning method convenient, and thus radiograph-based positioning has propagated widely. The paradigms for quality assurance of radiograph-based positioning include imager performance, systems integration, infrastructure, procedure documentation and testing, and support for positioning strategy implementation.
Quality assessment for color reproduction using a blind metric
NASA Astrophysics Data System (ADS)
Bringier, B.; Quintard, L.; Larabi, M.-C.
2007-01-01
This paper deals with image quality assessment, a field that nowadays plays an important role in various image processing applications. A number of objective image quality metrics, which may or may not correlate with subjective quality, have been developed during the last decade. Two categories of metrics can be distinguished: full-reference and no-reference. A full-reference metric tries to evaluate the distortion introduced to an image with regard to a reference. A no-reference approach attempts to model the judgment of image quality in a blind way. Unfortunately, a universal image quality model is not on the horizon, and empirical models established on psychophysical experimentation are generally used. In this paper, we focus only on the second category to evaluate the quality of color reproduction, where a blind metric based on human visual system modeling is introduced. The objective results are validated by single-media and cross-media subjective tests.
Santhi, B; Dheeptha, B
2016-01-01
The field of telemedicine has gained immense momentum, owing to the need for transmitting patients' information securely. This paper puts forth a unique method for embedding data in medical images, based on edge-based embedding and XOR coding. The algorithm proposes a novel key generation technique that utilizes the design of a sudoku puzzle to enhance the security of the transmitted message. Only the edge blocks of the cover image are utilized to embed the payloads. The least significant bits of the pixel values are changed by XOR coding depending on the data to be embedded and the key generated. Hence the distortion in the stego image is minimized and the information is retrieved accurately. Data is embedded in the RGB planes of the cover image, thus increasing its embedding capacity. Peak signal-to-noise ratio (PSNR), mean square error (MSE), universal image quality index (UIQI) and correlation coefficient (R) are the image quality measures that have been used to analyze the quality of the stego image. It is evident from the results that the proposed technique outperforms the former methodologies.
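Two of the quoted quality measures, MSE and PSNR, are standard and can be sketched directly; the single flipped least-significant bit below mimics the kind of per-pixel change XOR embedding makes:

```python
import numpy as np

def mse(cover, stego):
    """Mean square error between cover and stego images."""
    return float(np.mean((cover.astype(float) - stego.astype(float)) ** 2))

def psnr(cover, stego, peak=255.0):
    """Peak signal-to-noise ratio in dB; identical images give infinity."""
    m = mse(cover, stego)
    return float("inf") if m == 0 else 10.0 * np.log10(peak ** 2 / m)

cover = np.zeros((8, 8), dtype=np.uint8)
stego = cover.copy()
stego[0, 0] ^= 1                 # one least-significant bit flipped
quality = psnr(cover, stego)     # roughly 66 dB: imperceptible distortion
```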
Investigation of cone-beam CT image quality trade-off for image-guided radiation therapy
NASA Astrophysics Data System (ADS)
Bian, Junguo; Sharp, Gregory C.; Park, Yang-Kyun; Ouyang, Jinsong; Bortfeld, Thomas; El Fakhri, Georges
2016-05-01
It is well-known that projections acquired over an angular range slightly over 180° (a so-called short scan) are sufficient for fan-beam reconstruction. However, due to practical imaging conditions (projection data and reconstruction image discretization, physical factors, and data noise), short-scan reconstructions may have different appearances and properties from full-scan (360°) reconstructions. Nevertheless, short-scan configurations have been used in applications such as cone-beam CT (CBCT) for head-neck-cancer image-guided radiation therapy (IGRT), which requires only a small field of view, because of the potential for reduced imaging time and dose. In this work, we studied the image quality trade-off for full, short, and full/short scan configurations with both conventional filtered-backprojection (FBP) reconstruction and iterative reconstruction algorithms based on total-variation (TV) minimization for head-neck-cancer IGRT. Anthropomorphic and Catphan phantoms were scanned at different exposure levels with a clinical scanner used in IGRT. Both visualization- and numerical-metric-based evaluation studies were performed. The results indicate that the optimal exposure level and number of views are in the middle range for both FBP and TV-based iterative algorithms, and the optimization is object-dependent and task-dependent. The optimal number of views decreases with the total exposure level for both FBP and TV-based algorithms. The results also indicate slight differences between FBP and TV-based iterative algorithms in the image quality trade-off: FBP seems to favor a larger number of views, while the TV-based algorithm is more robust to different data conditions (number of views and exposure levels) than the FBP algorithm. These studies can provide a general guideline for image-quality optimization for CBCT used in IGRT and other applications.
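The objective the TV-based iterative algorithm minimises includes the image's total variation; a discrete isotropic version can be sketched as follows (the discretisation choice is ours):

```python
import numpy as np

def total_variation(img):
    """Isotropic total variation: sum of local gradient magnitudes.
    Noise and streaks raise it; piecewise-constant images keep it low."""
    dx = np.diff(img, axis=1)[:-1, :]
    dy = np.diff(img, axis=0)[:, :-1]
    return float(np.sum(np.sqrt(dx ** 2 + dy ** 2)))

flat = np.zeros((16, 16))
edge = np.zeros((16, 16))
edge[:, 8:] = 1.0                # a single vertical edge
tv_flat = total_variation(flat)
tv_edge = total_variation(edge)  # 15 unit-magnitude gradients
```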
Moore, Craig S; Horsfield, Carl J; Saunderson, John R; Beavis, Andrew W
2015-01-01
Objective: The purpose of this study was to develop size-based radiotherapy kilovoltage cone beam CT (CBCT) protocols for the pelvis. Methods: Image noise was measured in an elliptical phantom of varying size for a range of exposure factors. Based on a previously defined “small pelvis” reference patient and CBCT protocol, appropriate exposure factors for small, medium, large and extra-large patients were derived which approximate the image noise behaviour observed on a Philips CT scanner (Philips Medical Systems, Best, Netherlands) with automatic exposure control (AEC). Selection criteria, based on maximum tube current–time product per rotation selected during the radiotherapy treatment planning scan, were derived based on an audit of patient size. Results: It has been demonstrated that 110 kVp yields acceptable image noise for reduced patient dose in pelvic CBCT scans of small, medium and large patients, when compared with manufacturer's default settings (125 kVp). Conversely, extra-large patients require increased exposure factors to give acceptable images. 57% of patients in the local population now receive much lower radiation doses, whereas 13% require higher doses (but now yield acceptable images). Conclusion: The implementation of size-based exposure protocols has significantly reduced radiation dose to the majority of patients with no negative impact on image quality. Increased doses are required on the largest patients to give adequate image quality. Advances in knowledge: The development of size-based CBCT protocols that use the planning CT scan (with AEC) to determine which protocol is appropriate ensures adequate image quality whilst minimizing patient radiation dose. PMID:26419892
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Chuan, E-mail: chuan.huang@stonybrookmedicine.edu; Department of Radiology, Harvard Medical School, Boston, Massachusetts 02115; Departments of Radiology, Psychiatry, Stony Brook Medicine, Stony Brook, New York 11794
2015-02-15
Purpose: Degradation of image quality caused by cardiac and respiratory motions hampers the diagnostic quality of cardiac PET. It has been shown that improved diagnostic accuracy of myocardial defect can be achieved by tagged MR (tMR) based PET motion correction using simultaneous PET-MR. However, one major hurdle for the adoption of tMR-based PET motion correction in the PET-MR routine is the long acquisition time needed for the collection of fully sampled tMR data. In this work, the authors propose an accelerated tMR acquisition strategy using parallel imaging and/or compressed sensing and assess the impact on the tMR-based motion corrected PET using phantom and patient data. Methods: Fully sampled tMR data were acquired simultaneously with PET list-mode data on two simultaneous PET-MR scanners for a cardiac phantom and a patient. Parallel imaging and compressed sensing were retrospectively performed by GRAPPA and kt-FOCUSS algorithms with various acceleration factors. Motion fields were estimated using nonrigid B-spline image registration from both the accelerated and fully sampled tMR images. The motion fields were incorporated into a motion corrected ordered subset expectation maximization reconstruction algorithm with motion-dependent attenuation correction. Results: Although tMR acceleration introduced image artifacts into the tMR images for both phantom and patient data, motion corrected PET images yielded similar image quality as those obtained using the fully sampled tMR images for low to moderate acceleration factors (<4). Quantitative analysis of myocardial defect contrast over ten independent noise realizations showed similar results. It was further observed that although the image quality of the motion corrected PET images deteriorates for high acceleration factors, the images were still superior to the images reconstructed without motion correction.
Conclusions: Accelerated tMR images obtained with more than 4 times acceleration can still provide relatively accurate motion fields and yield tMR-based motion corrected PET images with similar image quality as those reconstructed using fully sampled tMR data. The reduction of tMR acquisition time makes it more compatible with routine clinical cardiac PET-MR studies.
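Retrospective acceleration of the kind described can be emulated by masking phase-encode lines of fully sampled k-space. The regular pattern with a fully sampled centre below is a common parallel-imaging convention and only an illustration of the idea, not the paper's GRAPPA/kt-FOCUSS sampling:

```python
import numpy as np

def undersample_kspace(kspace, acceleration, center_lines=8):
    """Keep every `acceleration`-th phase-encode line plus a fully
    sampled central block, zeroing the rest."""
    n = kspace.shape[0]
    mask = np.zeros(n, dtype=bool)
    mask[::acceleration] = True
    mid = n // 2
    mask[mid - center_lines // 2: mid + center_lines // 2] = True
    out = np.where(mask[:, None], kspace, 0)
    return out, mask

k = np.ones((64, 64), dtype=complex)
under, mask = undersample_kspace(k, acceleration=4)
fraction_sampled = mask.mean()   # 22 of 64 lines retained here
```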
Asbach, Patrick; Hein, Patrick A; Stemmer, Alto; Wagner, Moritz; Huppertz, Alexander; Hamm, Bernd; Taupitz, Matthias; Klessen, Christian
2008-01-01
To evaluate soft tissue contrast and image quality of a respiratory-triggered echo-planar imaging based diffusion-weighted sequence (EPI-DWI) with different b values for magnetic resonance imaging (MRI) of the liver. Forty patients were examined. Quantitative and qualitative evaluation of contrast was performed. Severity of artifacts and overall image quality in comparison with a T2w turbo spin-echo (T2-TSE) sequence were scored. The liver-spleen contrast was significantly higher (P < 0.05) for the EPI-DWI compared with the T2-TSE sequence (0.47 +/- 0.11 (b50); 0.48 +/- 0.13 (b300); 0.47 +/- 0.13 (b600) vs 0.38 +/- 0.11). Liver-lesion contrast strongly depends on the b value of the DWI sequence and decreased with higher b values (b50, 0.47 +/- 0.19; b300, 0.40 +/- 0.20; b600, 0.28 +/- 0.23). Severity of artifacts and overall image quality were comparable to the T2-TSE sequence when using a low b value (P > 0.05), artifacts increased and image quality decreased with higher b values (P < 0.05). Respiratory-triggered EPI-DWI of the liver is feasible because good image quality and favorable soft tissue contrast can be achieved.
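The contrast figures quoted (e.g. a liver-spleen contrast of 0.47) are consistent with a relative-contrast definition of the form below; the paper does not spell out its formula, so this is an assumption:

```python
def relative_contrast(signal_a, signal_b):
    """Relative contrast between two tissues: (a - b) / (a + b)."""
    return (signal_a - signal_b) / (signal_a + signal_b)

# A lesion twice as bright as the surrounding liver gives contrast 1/3.
c = relative_contrast(2.0, 1.0)
```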
NASA Astrophysics Data System (ADS)
Zhang, Xueliang; Feng, Xuezhi; Xiao, Pengfeng; He, Guangjun; Zhu, Liujun
2015-04-01
Segmentation of remote sensing images is a critical step in geographic object-based image analysis. Evaluating the performance of segmentation algorithms is essential to identify effective segmentation methods and optimize their parameters. In this study, we propose region-based precision and recall measures and use them to compare two image partitions for the purpose of evaluating segmentation quality. The two measures are calculated based on region overlapping and presented as a point or a curve in a precision-recall space, which can indicate segmentation quality in both geometric and arithmetic respects. Furthermore, the precision and recall measures are combined by using four different methods. We examine and compare the effectiveness of the combined indicators through geometric illustration, in an effort to reveal segmentation quality clearly and capture the trade-off between the two measures. In the experiments, we adopted the multiresolution segmentation (MRS) method for evaluation. The proposed measures are compared with four existing discrepancy measures to further confirm their capabilities. Finally, we suggest using a combination of the region-based precision-recall curve and the F-measure for supervised segmentation evaluation.
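The region-based precision and recall, and one way of combining them into an F-measure, can be sketched on pixel sets as follows; the authors' four combination methods are not reproduced here:

```python
def region_precision_recall(segment, reference):
    """Precision and recall from the overlap of two pixel sets."""
    overlap = len(segment & reference)
    return overlap / len(segment), overlap / len(reference)

def f_measure(precision, recall, alpha=0.5):
    """Weighted harmonic mean; alpha = 0.5 is the balanced F1 score."""
    return 1.0 / (alpha / precision + (1.0 - alpha) / recall)

segment = {(0, 0), (0, 1), (1, 0)}
reference = {(0, 0), (0, 1), (1, 1), (2, 2)}
p, r = region_precision_recall(segment, reference)
f1 = f_measure(p, r)
```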
NASA Astrophysics Data System (ADS)
Wang, Weibao; Overall, Gary; Riggs, Travis; Silveston-Keith, Rebecca; Whitney, Julie; Chiu, George; Allebach, Jan P.
2013-01-01
Assessment of macro-uniformity is a capability that is important for the development and manufacture of printer products. Our goal is to develop a metric that will predict macro-uniformity, as judged by human subjects, by scanning and analyzing printed pages. We consider two different machine learning frameworks for the metric: linear regression and the support vector machine. We have implemented the image quality ruler, based on the recommendations of the INCITS W1.1 macro-uniformity team. Using 12 subjects at Purdue University and 20 subjects at Lexmark, evenly balanced with respect to gender, we conducted subjective evaluations with a set of 35 uniform b/w prints from seven different printers with five levels of tint coverage. Our results suggest that the image quality ruler method provides a reliable means to assess macro-uniformity. We then defined and implemented separate features to measure graininess, mottle, large area variation, jitter, and large-scale non-uniformity. The algorithms that we used are largely based on ISO image quality standards. Finally, we used these features computed for a set of test pages and the subjects' image quality ruler assessments of these pages to train the two different predictors - one based on linear regression and the other based on the support vector machine (SVM). Using five-fold cross-validation, we confirmed the efficacy of our predictor.
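The five-fold cross-validation used to verify the predictors can be sketched generically; 35 samples matches the print set described, but the split itself is illustrative:

```python
import numpy as np

def five_fold_splits(n_samples, seed=0):
    """Shuffle indices and return (train, test) index arrays for 5 folds."""
    idx = np.random.default_rng(seed).permutation(n_samples)
    folds = np.array_split(idx, 5)
    return [(np.concatenate(folds[:i] + folds[i + 1:]), folds[i])
            for i in range(5)]

splits = five_fold_splits(35)            # the 35 uniform test prints
test_sizes = [len(test) for _, test in splits]
```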
Image Fusion Algorithms Using Human Visual System in Transform Domain
NASA Astrophysics Data System (ADS)
Vadhi, Radhika; Swamy Kilari, Veera; Samayamantula, Srinivas Kumar
2017-08-01
The aim of digital image fusion is to combine the important visual parts from various sources to improve the visual quality of the image. The fused image has higher visual quality than any of the source images. In this paper, Human Visual System (HVS) weights are used in the transform domain to select appropriate information from the various source images and then to attain a fused image. This process involves two main steps. First, the DWT is applied to the registered source images. Then, qualitative sub-bands are identified using HVS weights. Hence, qualitative sub-bands are selected from different sources to form a high-quality HVS-based fused image. The quality of the HVS-based fused image is evaluated with general fusion metrics. The results show its superiority over state-of-the-art multiresolution transforms (MRT) such as the Discrete Wavelet Transform (DWT), Stationary Wavelet Transform (SWT), Contourlet Transform (CT), and Non-Subsampled Contourlet Transform (NSCT) using the maximum-selection fusion rule.
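The maximum-selection fusion rule named at the end operates per coefficient after the forward transform; a minimal sketch on two coefficient arrays (the transform itself is omitted):

```python
import numpy as np

def max_selection_fuse(coeffs_a, coeffs_b):
    """Keep, at each position, the coefficient of larger magnitude,
    on the premise that stronger coefficients carry salient detail."""
    take_a = np.abs(coeffs_a) >= np.abs(coeffs_b)
    return np.where(take_a, coeffs_a, coeffs_b)

a = np.array([[3.0, -1.0], [0.5, 2.0]])
b = np.array([[-2.0, 4.0], [1.0, -2.0]])
fused = max_selection_fuse(a, b)
```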
Du, Weiqi; Zhang, Gaofei; Ye, Liangchen
2016-01-01
Micromirror-based scanning displays have been the focus of a variety of applications. Lissajous scanning displays have advantages in terms of power consumption; however, the image quality is not good enough. The main reason for this is the varying size and the contrast ratio of pixels at different positions of the image. In this paper, the Lissajous scanning trajectory is analyzed and a new method based on the diamond pixel is introduced to Lissajous displays. The optical performance of micromirrors is discussed. A display system demonstrator is built, and tests of resolution and contrast ratio are conducted. The test results show that the new Lissajous scanning method can be used in displays by using diamond pixels and image quality remains stable at different positions. PMID:27187390
The Importance of Quality in Ventilation-Perfusion Imaging.
Mann, April; DiDea, Mario; Fournier, France; Tempesta, Daniel; Williams, Jessica; LaFrance, Norman
2018-06-01
As the health care environment continues to change and morph into a system focusing on increased quality and evidence-based outcomes, nuclear medicine technologists must be reminded that they play a critical role in achieving high-quality, interpretable images used to drive patient care, treatment, and best possible outcomes. A survey performed by the Quality Committee of the Society of Nuclear Medicine and Molecular Imaging Technologist Section demonstrated that a clear knowledge gap exists among technologists regarding their understanding of quality, how it is measured, and how it should be achieved by all practicing technologists regardless of role and education level. Understanding of these areas within health care, in conjunction with the growing emphasis on evidence-based outcomes, quality measures, and patient satisfaction, will ultimately elevate the role of nuclear medicine technologists today and into the future. The nuclear medicine role now requires technologists to demonstrate patient assessment skills, practice safety procedures with regard to staff and patients, provide patient education and instruction, and provide physicians with information to assist with the interpretation and outcome of the study. In addition, the technologist must be able to evaluate images by performing technical analysis, knowing the demonstrated anatomy and pathophysiology, and assessing overall quality. Technologists must also be able to triage and understand the disease processes being evaluated and how nuclear medicine diagnostic studies may drive care and treatment. Therefore, it is imperative that nuclear medicine technologists understand their role in the achievement of a high-quality, interpretable study by applying quality principles and understanding and using imaging techniques beyond just basic protocols for every type of disease or system being imaged. This article focuses on quality considerations related to ventilation-perfusion imaging. 
It provides insight on appropriate imaging techniques and protocols, true imaging variants and tracer distributions versus artifacts that may result in a lower-quality or misinterpreted study, and the use of SPECT and SPECT/CT as an alternative providing a high-quality, interpretable study with better diagnostic accuracy and fewer nondiagnostic procedures than historical planar imaging. © 2018 by the Society of Nuclear Medicine and Molecular Imaging.
Gradient Magnitude Similarity Deviation: A Highly Efficient Perceptual Image Quality Index.
Xue, Wufeng; Zhang, Lei; Mou, Xuanqin; Bovik, Alan C
2014-02-01
Faithfully evaluating the perceptual quality of output images is an important task in many applications, such as image compression, image restoration, and multimedia streaming. A good image quality assessment (IQA) model should not only deliver high prediction accuracy, but also be computationally efficient. The efficiency of IQA metrics is becoming particularly important due to the increasing proliferation of high-volume visual data in high-speed networks. We present a new effective and efficient IQA model, called gradient magnitude similarity deviation (GMSD). Image gradients are sensitive to image distortions, and different local structures in a distorted image suffer different degrees of degradation. This motivates us to explore the use of the global variation of a gradient-based local quality map for overall image quality prediction. We find that the pixel-wise gradient magnitude similarity (GMS) between the reference and distorted images, combined with a novel pooling strategy - the standard deviation of the GMS map - can accurately predict perceptual image quality. The resulting GMSD algorithm is much faster than most state-of-the-art IQA methods, and delivers highly competitive prediction accuracy. MATLAB source code of GMSD can be downloaded at http://www4.comp.polyu.edu.hk/~cslzhang/IQA/GMSD/GMSD.htm.
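The GMS map and its standard-deviation pooling are compact enough to sketch directly. The version below uses Prewitt gradients and a stabilizing constant c, but it skips the 2x downsampling pre-processing of the published algorithm, and the value of c is only an assumption for 8-bit images.

```python
import numpy as np

def _filt3(img, k):
    """Valid-mode 3x3 filtering via shifted slices (no padding)."""
    H, W = img.shape
    out = np.zeros((H - 2, W - 2))
    for i in range(3):
        for j in range(3):
            out += k[i, j] * img[i:i + H - 2, j:j + W - 2]
    return out

def gmsd(ref, dist, c=170.0):
    """Gradient magnitude similarity deviation: the standard deviation
    of the pixel-wise GMS map between reference and distorted images."""
    kx = np.array([[1, 0, -1], [1, 0, -1], [1, 0, -1]]) / 3.0  # Prewitt
    mr = np.hypot(_filt3(ref, kx), _filt3(ref, kx.T))
    md = np.hypot(_filt3(dist, kx), _filt3(dist, kx.T))
    gms = (2.0 * mr * md + c) / (mr ** 2 + md ** 2 + c)
    return float(gms.std())
```

Identical images give a GMS map of all ones, hence GMSD = 0; larger values indicate more (and more spatially varied) distortion.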
Despeckle filtering software toolbox for ultrasound imaging of the common carotid artery.
Loizou, Christos P; Theofanous, Charoula; Pantziaris, Marios; Kasparis, Takis
2014-04-01
Ultrasound imaging of the common carotid artery (CCA) is a non-invasive tool used in medicine to assess the severity of atherosclerosis and monitor its progression through time. It is also used in border detection and texture characterization of the atherosclerotic carotid plaque in the CCA, and in the identification and measurement of the intima-media thickness (IMT) and the lumen diameter, all of which are very important in the assessment of cardiovascular disease (CVD). Visual perception, however, is hindered by speckle, a multiplicative noise that degrades the quality of ultrasound B-mode imaging. Noise reduction is therefore essential for improving the visual observation quality, or as a pre-processing step for further automated analysis such as image segmentation of the IMT and the atherosclerotic carotid plaque in ultrasound images. To facilitate this pre-processing step, we have developed in MATLAB(®) a unified toolbox that integrates image despeckle filtering (IDF), texture analysis and image quality evaluation techniques to automate the pre-processing and complement the disease evaluation in ultrasound CCA images. The proposed software is based on a graphical user interface (GUI) and incorporates image intensity normalization, 10 different despeckle filtering techniques (DsFlsmv, DsFwiener, DsFlsminsc, DsFkuwahara, DsFgf, DsFmedian, DsFhmedian, DsFad, DsFnldif, DsFsrad), 65 texture features, 15 quantitative image quality metrics and objective image quality evaluation. The software is publicly available in an executable form, which can be downloaded from http://www.cs.ucy.ac.cy/medinfo/. It was validated on 100 ultrasound images of the CCA by comparing its results with quantitative visual analysis performed by a medical expert. It was observed that the despeckle filters DsFlsmv and DsFhmedian improved image quality perception (based on the expert's assessment and the image texture and quality metrics).
It is anticipated that the system could help the physician in the assessment of cardiovascular image analysis. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
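The DsF filters named above are specific MATLAB implementations in the toolbox; as a generic stand-in, the sketch below implements a Lee-type local-statistics despeckle filter, which blends each pixel toward its local mean in proportion to how much of the local variance is attributed to noise. The `noise_var` parameter is an assumption, not a toolbox value.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def lee_despeckle(img, win=5, noise_var=0.05):
    """Lee-type filter: output = mean + k*(x - mean), where the gain k
    shrinks toward 0 in flat (noise-dominated) regions and toward 1
    near edges (signal-dominated regions)."""
    pad = win // 2
    padded = np.pad(img, pad, mode='reflect')
    windows = sliding_window_view(padded, (win, win))
    local_mean = windows.mean(axis=(-1, -2))
    local_var = windows.var(axis=(-1, -2))
    k = np.clip((local_var - noise_var) / (local_var + 1e-12), 0.0, 1.0)
    return local_mean + k * (img - local_mean)
```

On a homogeneous noisy region this reduces variance strongly while leaving constant areas untouched, which is the behavior speckle filters are evaluated on.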
Single-channel stereoscopic ophthalmology microscope based on TRD
NASA Astrophysics Data System (ADS)
Radfar, Edalat; Park, Jihoon; Lee, Sangyeob; Ha, Myungjin; Yu, Sungkon; Jang, Seulki; Jung, Byungjo
2016-03-01
A stereoscopic imaging modality was developed for application in ophthalmology surgical microscopes. A previous study introduced a single-channel stereoscopic video imaging modality based on a transparent rotating deflector (SSVIM-TRD), in which two different viewing angles, and hence image disparity, are generated by imaging through a transparent rotating deflector (TRD) mounted on a stepping motor and placed in a lens system. In this case, the image disparity is a function of the refractive index and the rotation angle of the TRD. The real-time single-channel stereoscopic ophthalmology microscope (SSOM) based on the TRD improves on the earlier system in its real-time control and programming, imaging speed, and illumination method. Image quality assessments were performed to investigate image quality and stability during TRD operation. The results showed no significant difference in image quality in terms of the stability of the structural similarity (SSIM) index. A subjective analysis performed with 15 blinded observers to evaluate depth perception showed a significant improvement in depth perception capability. Together with these evaluation results, preliminary rabbit eye imaging showed that the SSOM could be utilized as an ophthalmic operating microscope to overcome some of the limitations of conventional ones.
Image quality assessment for CT used on small animals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cisneros, Isabela Paredes, E-mail: iparedesc@unal.edu.co; Agulles-Pedrós, Luis, E-mail: lagullesp@unal.edu.co
2016-07-01
Image acquisition on a CT scanner is nowadays necessary in almost any kind of medical study. Its purpose, to produce anatomical images with the best achievable quality, implies the highest diagnostic radiation exposure to patients. Image quality can be measured quantitatively based on parameters such as noise, uniformity and resolution. This measure allows the determination of optimal parameters of operation for the scanner in order to get the best diagnostic image. A human Philips CT scanner is the first one intended exclusively for veterinary use in Colombia. The aim of this study was to measure the CT image quality parameters using an acrylic phantom and then, using the computational tool MATLAB, determine these parameters as a function of current value and window of visualization, in order to reduce dose delivery by keeping the appropriate image quality.
NASA Astrophysics Data System (ADS)
Chu, Qiuhui; Shen, Yijie; Yuan, Meng; Gong, Mali
2017-12-01
Segmented Planar Imaging Detector for Electro-Optical Reconnaissance (SPIDER) is a cutting-edge electro-optical imaging technology for realizing miniaturized, planar imaging systems. In this paper, the principle of SPIDER is numerically demonstrated based on partially coherent light theory, and a novel concept of an adjustable-baseline-pairing SPIDER system is further proposed. Based on the simulation results, it is verified that the imaging quality can be effectively improved by adjusting the Nyquist sampling density, optimizing the baseline pairing method and increasing the spectral channels of the demultiplexer. An adjustable baseline pairing algorithm is therefore established for further enhancing the image quality, and the optimal design procedure of SPIDER for arbitrary targets is also summarized. The SPIDER system with the adjustable baseline pairing method can broaden its applications and reduce cost while maintaining the same imaging quality.
Nguyen, Dat Tien; Park, Kang Ryoung
2016-07-21
With higher demand from users, surveillance systems are currently being designed to provide more information about the observed scene, such as the appearance of objects, types of objects, and other information extracted from detected objects. Although the recognition of the gender of an observed human can be easily performed using human perception, it remains a difficult task when using computer vision system images. In this paper, we propose a new human gender recognition method that can be applied to surveillance systems based on quality assessment of human areas in visible light and thermal camera images. Our research is novel in the following two ways: First, we utilize the combination of visible light and thermal images of the human body for a recognition task based on quality assessment. We propose a quality measurement method to assess the quality of image regions so as to remove the effects of background regions in the recognition system. Second, by combining the features extracted using the histogram of oriented gradients (HOG) method and the measured qualities of image regions, we form a new image feature, called the weighted HOG (wHOG), which is used for efficient gender recognition. Experimental results show that our method produces more accurate estimation results than the state-of-the-art recognition method that uses human body images.
Benefits of utilizing CellProfiler as a characterization tool for U–10Mo nuclear fuel
DOE Office of Scientific and Technical Information (OSTI.GOV)
Collette, R.; Douglas, J.; Patterson, L.
2015-07-15
Automated image processing techniques have the potential to aid in the performance evaluation of nuclear fuels by eliminating judgment calls that may vary from person-to-person or sample-to-sample. Analysis of in-core fuel performance is required for design and safety evaluations related to almost every aspect of the nuclear fuel cycle. This study presents a methodology for assessing the quality of uranium–molybdenum fuel images and describes image analysis routines designed for the characterization of several important microstructural properties. The analyses are performed in CellProfiler, an open-source program designed to enable biologists without training in computer vision or programming to automatically extract cellular measurements from large image sets. The quality metric scores an image based on three parameters: the illumination gradient across the image, the overall focus of the image, and the fraction of the image that contains scratches. The metric presents the user with the ability to ‘pass’ or ‘fail’ an image based on a reproducible quality score. Passable images may then be characterized through a separate CellProfiler pipeline, which enlists a variety of common image analysis techniques. The results demonstrate the ability to reliably pass or fail images based on the illumination, focus, and scratch fraction of the image, followed by automatic extraction of morphological data with respect to fission gas voids, interaction layers, and grain boundaries. Highlights: • A technique is developed to score U–10Mo FIB-SEM image quality using CellProfiler. • The pass/fail metric is based on image illumination, focus, and area scratched. • Automated image analysis is performed in pipeline fashion to characterize images. • Fission gas void, interaction layer, and grain boundary coverage data is extracted. • Preliminary characterization results demonstrate consistency of the algorithm.
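The pass/fail idea, scoring illumination gradient and focus and then thresholding, can be sketched generically. The measures and tolerances below are illustrative assumptions rather than the CellProfiler pipeline's actual formulas, and the scratch-fraction test is omitted.

```python
import numpy as np

def quality_check(img, illum_tol=0.2, focus_tol=5.0):
    """Pass/fail an image on illumination gradient and focus,
    in the spirit of the quality-scoring pipeline described above.
    Thresholds are hypothetical."""
    h, w = img.shape
    # Illumination gradient: relative brightness difference between
    # the left/right and top/bottom halves of the image.
    lr = abs(img[:, :w // 2].mean() - img[:, w // 2:].mean())
    tb = abs(img[:h // 2].mean() - img[h // 2:].mean())
    illum = max(lr, tb) / (img.mean() + 1e-12)
    # Focus: variance of a 3x3 Laplacian response (low = blurry).
    lap = (-4 * img[1:-1, 1:-1] + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    focus = lap.var()
    return {'illumination': illum, 'focus': focus,
            'passed': illum < illum_tol and focus > focus_tol}
```

A sharp, evenly lit image passes both checks; a smooth brightness ramp fails on illumination (and on focus, since the Laplacian of a linear ramp is zero).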
Image quality assessment using deep convolutional networks
NASA Astrophysics Data System (ADS)
Li, Yezhou; Ye, Xiang; Li, Yong
2017-12-01
This paper proposes a method of accurately assessing image quality without a reference image by using a deep convolutional neural network. Existing training based methods usually utilize a compact set of linear filters for learning features of images captured by different sensors to assess their quality. These methods may not be able to learn the semantic features that are intimately related with the features used in human subject assessment. Observing this drawback, this work proposes training a deep convolutional neural network (CNN) with labelled images for image quality assessment. The ReLU in the CNN allows non-linear transformations for extracting high-level image features, providing a more reliable assessment of image quality than linear filters. To enable the neural network to take images of any arbitrary size as input, the spatial pyramid pooling (SPP) is introduced connecting the top convolutional layer and the fully-connected layer. In addition, the SPP makes the CNN robust to object deformations to a certain extent. The proposed method taking an image as input carries out an end-to-end learning process, and outputs the quality of the image. It is tested on public datasets. Experimental results show that it outperforms existing methods by a large margin and can accurately assess the image quality on images taken by different sensors of varying sizes.
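The spatial pyramid pooling layer that decouples input size from feature length can be illustrated outside any deep-learning framework. The sketch below max-pools a C×H×W activation map over 1×1, 2×2, and 4×4 grids; the pyramid levels are assumed, as the abstract does not specify them.

```python
import numpy as np

def spp(feature_map, levels=(1, 2, 4)):
    """Spatial pyramid pooling: max-pool a C x H x W feature map into an
    n x n grid at each level and concatenate, giving a fixed-length
    vector regardless of H and W."""
    C, H, W = feature_map.shape
    pooled = []
    for n in levels:
        hs = np.linspace(0, H, n + 1).astype(int)
        ws = np.linspace(0, W, n + 1).astype(int)
        for i in range(n):
            for j in range(n):
                cell = feature_map[:, hs[i]:hs[i + 1], ws[j]:ws[j + 1]]
                pooled.append(cell.max(axis=(1, 2)))
    return np.concatenate(pooled)
```

With levels (1, 2, 4) the output length is C·(1 + 4 + 16) = 21·C for any input size, which is what allows a fully-connected quality regressor to accept images of arbitrary dimensions.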
Imaging quality evaluation method of pixel coupled electro-optical imaging system
NASA Astrophysics Data System (ADS)
He, Xu; Yuan, Li; Jin, Chunqi; Zhang, Xiaohui
2017-09-01
With advancements in high-resolution imaging optical fiber bundle fabrication technology, traditional photoelectric imaging systems have become "flexible" with greatly reduced volume and weight. However, traditional image quality evaluation models are limited by the coupled discrete sampling effect of fiber-optic image bundles and charge-coupled device (CCD) pixels. This limitation substantially complicates the design, optimization, assembly, and image quality evaluation of the coupled discrete sampling imaging system. Based on the transfer process of a grayscale cosine-distributed optical signal through the fiber-optic image bundle and CCD, a mathematical model of the coupled modulation transfer function (coupled-MTF) is established. This model can serve as a basis for subsequent studies of the convergence and periodic oscillation characteristics of the function. We also propose the concept of the average coupled-MTF, which is consistent with the definition of the traditional MTF. Based on this concept, the relationships among core distance, core layer radius, and average coupled-MTF are investigated.
NASA Astrophysics Data System (ADS)
Jia, Huizhen; Sun, Quansen; Ji, Zexuan; Wang, Tonghan; Chen, Qiang
2014-11-01
The goal of no-reference/blind image quality assessment (NR-IQA) is to devise a perceptual model that can predict the quality of a distorted image as accurately as human opinion, in which feature extraction is an important issue. However, the features used in state-of-the-art "general purpose" NR-IQA algorithms are usually either natural scene statistics (NSS) based or perceptually relevant; therefore, the performance of these models is limited. To further improve NR-IQA performance, we propose a general purpose NR-IQA algorithm which combines NSS-based features with perceptually relevant features. The new method extracts features in both the spatial and gradient domains. In the spatial domain, we extract point-wise statistics of single pixel values, which are characterized by a generalized Gaussian distribution model to form the underlying features. In the gradient domain, statistical features based on neighboring gradient magnitude similarity are extracted. A mapping from features to quality scores is then learned using support vector regression. Experimental results on benchmark image databases demonstrate that the proposed algorithm correlates highly with human judgments of quality and leads to significant performance improvements over state-of-the-art methods.
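One of the NSS-based spatial features described, fitting a generalized Gaussian distribution (GGD) to point-wise pixel statistics, can be sketched with a standard moment-matching estimator. The grid search below is a simple assumed implementation, not the authors' exact fitting procedure.

```python
import math
import numpy as np

def ggd_shape(x):
    """Estimate the shape parameter of a zero-mean generalized Gaussian
    by moment matching: the sample ratio r = E[x^2] / E[|x|]^2 is matched
    against rho(a) = Gamma(1/a) * Gamma(3/a) / Gamma(2/a)^2 over a grid
    of candidate shapes a (rho is monotone, so argmin suffices)."""
    r = np.mean(x ** 2) / (np.mean(np.abs(x)) ** 2 + 1e-12)
    alphas = np.arange(0.1, 6.0, 0.001)
    rho = np.array([math.gamma(1 / a) * math.gamma(3 / a)
                    / math.gamma(2 / a) ** 2 for a in alphas])
    return float(alphas[np.argmin(np.abs(rho - r))])
```

A Gaussian sample should yield a shape near 2 and a Laplacian sample a shape near 1; heavier-tailed distributions of pixel statistics give smaller shapes, which is what makes this a useful NSS feature.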
Hultenmo, Maria; Caisander, Håkan; Mack, Karsten; Thilander-Klang, Anne
2016-06-01
The diagnostic image quality of 75 paediatric abdominal computed tomography (CT) examinations reconstructed with two different iterative reconstruction (IR) algorithms-adaptive statistical IR (ASiR™) and model-based IR (Veo™)-was compared. Axial and coronal images were reconstructed with 70 % ASiR with the Soft™ convolution kernel and with the Veo algorithm. The thickness of the reconstructed images was 2.5 or 5 mm depending on the scanning protocol used. Four radiologists graded the delineation of six abdominal structures and the diagnostic usefulness of the image quality. The Veo reconstruction significantly improved the visibility of most of the structures compared with ASiR in all subgroups of images. For coronal images, the Veo reconstruction resulted in significantly improved ratings of the diagnostic use of the image quality compared with the ASiR reconstruction. This was not seen for the axial images. The greatest improvement using Veo reconstruction was observed for the 2.5 mm coronal slices. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Chung, Kuo-Liang; Hsu, Tsu-Chun; Huang, Chi-Chao
2017-10-01
In this paper, we propose a novel and effective hybrid method, which joins conventional chroma subsampling and distortion-minimization-based luma modification together, to improve the quality of the reconstructed RGB full-color image. Assume the input RGB full-color image has been transformed to a YUV image prior to compression. For each 2×2 UV block, 4:2:0 subsampling is applied to determine the subsampled U and V components, U_s and V_s. Based on U_s, V_s, and the corresponding 2×2 original RGB block, a main theorem is provided to determine the ideally modified 2×2 luma block in constant time, such that the color peak signal-to-noise ratio (CPSNR) distortion between the original 2×2 RGB block and the reconstructed 2×2 RGB block is minimized in a globally optimal sense. Furthermore, the proposed hybrid method and the delivered theorem are adjusted to tackle digital time delay integration images and Bayer mosaic images, whose Bayer CFA structure has been widely used in modern commercial digital cameras. Based on the IMAX, Kodak, and screen content test image sets, the experimental results demonstrate that in high efficiency video coding, the proposed hybrid method substantially improves the quality of the reconstructed RGB images, in terms of CPSNR, visual effect, CPSNR-bitrate trade-off, and Bjøntegaard delta PSNR performance, when compared with existing chroma subsampling schemes.
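The 4:2:0 subsampling step and the CPSNR figure of merit it optimizes can be sketched as follows. The luma-modification theorem itself is not reproduced here, and the "average" subsampling variant is just one of several candidate schemes; it is used as an assumed example.

```python
import numpy as np

def subsample_420(U):
    """4:2:0 'average' subsampling: one chroma sample per 2x2 block."""
    return (U[0::2, 0::2] + U[0::2, 1::2]
            + U[1::2, 0::2] + U[1::2, 1::2]) / 4.0

def upsample(Us):
    """Nearest-neighbour reconstruction of the full-resolution plane."""
    return np.repeat(np.repeat(Us, 2, axis=0), 2, axis=1)

def cpsnr(rgb_ref, rgb_rec, peak=255.0):
    """Colour PSNR: one PSNR over all three channels jointly (dB)."""
    mse = np.mean((rgb_ref.astype(float) - rgb_rec.astype(float)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```

The hybrid method's goal can then be phrased as: choose the modified luma block that maximizes `cpsnr` between the original RGB block and the block reconstructed from the subsampled chroma.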
Song, Xiaoying; Huang, Qijun; Chang, Sheng; He, Jin; Wang, Hao
2016-12-01
To address the low compression efficiency of lossless compression and the low image quality of general near-lossless compression, this paper proposes a novel near-lossless compression algorithm based on adaptive spatial prediction for medical sequence images intended for diagnostic use. The proposed method employs adaptive block-size-based spatial prediction to predict blocks directly in the spatial domain, and a Lossless Hadamard Transform before quantization to improve the quality of the reconstructed images. The block-based prediction breaks the pixel neighborhood constraint and takes full advantage of the local spatial correlations found in medical images. The adaptive block size guarantees a more rational division of images and improved use of the local structure. The results indicate that the proposed algorithm can efficiently compress medical images and produces a better peak signal-to-noise ratio (PSNR) under the same pre-defined distortion than other near-lossless methods.
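A minimal illustration of the near-lossless principle, quantizing prediction residuals so that every pixel's reconstruction error is bounded by a pre-defined delta, is sketched below with a simple previous-pixel predictor. The paper's adaptive block-size prediction and Hadamard step are not reproduced; this only shows the error-bound mechanism.

```python
import numpy as np

def near_lossless_row(row, delta=2):
    """Near-lossless DPCM along one row: each pixel is predicted by the
    previous reconstructed pixel and the residual is quantized with step
    2*delta + 1, which bounds the per-pixel error by delta. delta = 0
    degenerates to lossless coding."""
    rec = np.empty_like(row)
    prev = 0
    for i, x in enumerate(row):
        r = int(x) - prev                               # prediction residual
        q = int(np.sign(r)) * ((abs(r) + delta) // (2 * delta + 1))
        prev = prev + q * (2 * delta + 1)               # decoder's value
        rec[i] = prev
    return rec
```

The coarser the step (larger delta), the fewer distinct residual symbols to entropy-code, which is the compression/quality trade-off the abstract refers to.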
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, C; Zhang, H; Chen, Y
Purpose: Recently, compressed sensing (CS) based iterative reconstruction (IR) methods have been receiving attention for reconstructing high-quality cone beam computed tomography (CBCT) images from sparsely sampled or noisy projections. The aim of this study is to develop a novel baseline algorithm called Mask Guided Image Reconstruction (MGIR), which can provide superior image quality for both low-dose 3DCBCT and 4DCBCT under a single mathematical framework. Methods: In MGIR, the unknown CBCT volume is mathematically modeled as a combination of two regions, where anatomical structures are 1) within the a priori defined mask and 2) outside the mask. We then update each part of the image alternately by solving minimization problems based on CS-type IR. For low-dose 3DCBCT, the former region is defined as the anatomically complex region, where the focus is on preserving edge information, while the latter region is defined as uniform in contrast and hence aggressively updated to remove noise and artifacts. In 4DCBCT, the regions are separated into a common static part and a moving part. The static and moving volumes are then updated with global and phase-sorted projections, respectively, to optimize the image quality of both parts simultaneously. Results: Examination of the MGIR algorithm showed that high-quality low-dose 3DCBCT and 4DCBCT images can be reconstructed without compromising image resolution, imaging dose or scanning time. For low-dose 3DCBCT, a clinically viable, high-resolution head-and-neck image can be obtained while cutting the dose by 83%. In 4DCBCT, excellent-quality images could be reconstructed while requiring no more projection data and imaging dose than a typical clinical 3DCBCT scan. Conclusion: The results show that the image quality of MGIR was superior to other published CS-based IR algorithms for both 4DCBCT and low-dose 3DCBCT. This makes our MGIR algorithm potentially useful in various on-line clinical applications. Provisional Patent: UF#15476; WGS Ref. No. U1198.70067US00.
Lin, Chi-Ying; Hsu, Bing-Cheng
2018-01-01
Waxing is an important aspect of automobile detailing, aimed at protecting the finish of the car and preventing rust. At present, this delicate work is conducted manually due to the need for iterative adjustments to achieve acceptable quality. This paper presents a robotic waxing system in which surface images are used to evaluate the quality of the finish. An RGB-D camera is used to build a point cloud that details the sheet metal components to enable path planning for a robot manipulator. The robot is equipped with a multi-axis force sensor to measure and control the forces involved in the application and buffing of wax. Images of sheet metal components that were waxed by experienced car detailers were analyzed using image processing algorithms. A Gaussian distribution function and its parameterized values were obtained from the images for use as a performance criterion in evaluating the quality of surfaces prepared by the robotic waxing system. Waxing force and dwell time were optimized using a mathematical model based on the image-based criterion used to measure waxing performance. Experimental results demonstrate the feasibility of the proposed robotic waxing system and image-based performance evaluation scheme. PMID:29757940
Mogol, Burçe Ataç; Gökmen, Vural
2014-05-01
Computer vision-based image analysis has been widely used in food industry to monitor food quality. It allows low-cost and non-contact measurements of colour to be performed. In this paper, two computer vision-based image analysis approaches are discussed to extract mean colour or featured colour information from the digital images of foods. These types of information may be of particular importance as colour indicates certain chemical changes or physical properties in foods. As exemplified here, the mean CIE a* value or browning ratio determined by means of computer vision-based image analysis algorithms can be correlated with acrylamide content of potato chips or cookies. Or, porosity index as an important physical property of breadcrumb can be calculated easily. In this respect, computer vision-based image analysis provides a useful tool for automatic inspection of food products in a manufacturing line, and it can be actively involved in the decision-making process where rapid quality/safety evaluation is needed. © 2013 Society of Chemical Industry.
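The mean CIE a* feature mentioned above can be computed from an sRGB image with the standard colorimetric pipeline. The D65 white point and sRGB matrix below are the usual published values; this is a generic sketch, not the paper's specific algorithm.

```python
import numpy as np

def mean_cie_a(rgb):
    """Mean CIE a* of an sRGB image (values in 0..255), via the standard
    sRGB -> linear RGB -> XYZ (D65) -> CIELAB conversion. Positive a*
    indicates redness (e.g. browning); negative indicates greenness."""
    c = rgb.reshape(-1, 3).astype(float) / 255.0
    # Inverse sRGB transfer function (gamma expansion).
    c = np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)
    M = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = c @ M.T
    # Normalize by the D65 white point, then apply the Lab function f.
    xyz /= np.array([0.95047, 1.0, 1.08883])
    f = np.where(xyz > (6 / 29) ** 3,
                 np.cbrt(xyz),
                 xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    a_star = 500.0 * (f[:, 0] - f[:, 1])
    return float(a_star.mean())
```

Tracking this value over baking or frying time is the kind of colour index the abstract correlates with acrylamide content.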
Warped document image correction method based on heterogeneous registration strategies
NASA Astrophysics Data System (ADS)
Tong, Lijing; Zhan, Guoliang; Peng, Quanyao; Li, Yang; Li, Yifan
2013-03-01
With the popularity of digital cameras and the growing need for digitalized document images, using digital cameras to digitalize documents has become an irresistible trend. However, warping of the document surface seriously degrades the performance of Optical Character Recognition (OCR) systems. To improve the warped document image's visual quality and the OCR rate, this paper proposes a warped document image correction method based on heterogeneous registration strategies. The method mosaics two warped images of the same document taken from different viewpoints. First, two feature points are selected in one image. Then the two feature points are registered in the other image based on the heterogeneous registration strategies. Finally, the two images are mosaicked, and the best mosaicked image is selected according to OCR recognition results. As a result, in the best mosaicked image the distortions are mostly removed and the OCR results are improved markedly. Experimental results show that the proposed method resolves the issue of warped document image correction more effectively.
Pixel-based speckle adjustment for noise reduction in Fourier-domain OCT images.
Zhang, Anqi; Xi, Jiefeng; Sun, Jitao; Li, Xingde
2017-03-01
Speckle resides in OCT signals and inevitably affects OCT image quality. In this work, we present a novel method for speckle noise reduction in Fourier-domain OCT images that utilizes the phase information of complex OCT data. In this method, the speckle area is first delineated pixelwise based on a phase-domain processing method and then adjusted using the results of wavelet shrinkage of the original image. A coefficient shrinkage method, such as wavelet or contourlet shrinkage, is applied afterwards to further suppress the speckle noise. Compared with conventional methods without speckle adjustment, the proposed method demonstrates a significant improvement in image quality.
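The coefficient-shrinkage stage mentioned above can be sketched with a one-level Haar transform and soft thresholding. This is a simplified illustration only: the paper's phase-domain speckle delineation and pixelwise adjustment are not reproduced, and the Haar basis stands in for whatever wavelet/contourlet family is actually used.

```python
import numpy as np

def haar2(x):
    """One-level 2D Haar transform (x must have even height and width)."""
    a, d = (x[0::2] + x[1::2]) / 2.0, (x[0::2] - x[1::2]) / 2.0
    ll, lh = (a[:, 0::2] + a[:, 1::2]) / 2.0, (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl, hh = (d[:, 0::2] + d[:, 1::2]) / 2.0, (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def ihaar2(ll, lh, hl, hh):
    """Exact inverse of haar2."""
    a = np.empty((ll.shape[0], 2 * ll.shape[1]))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    x = np.empty((2 * a.shape[0], a.shape[1]))
    x[0::2], x[1::2] = a + d, a - d
    return x

def soft(c, t):
    """Soft thresholding of detail coefficients."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def wavelet_denoise(img, t):
    """Shrink the detail subbands, keep the approximation, and invert."""
    ll, lh, hl, hh = haar2(img)
    return ihaar2(ll, soft(lh, t), soft(hl, t), soft(hh, t))

rng = np.random.default_rng(0)
x = rng.random((4, 4))
print(np.allclose(wavelet_denoise(x, 0.0), x))   # thresholding at 0 is lossless
```

Production code would use a multi-level transform from a wavelet library rather than this hand-rolled single level.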
Fully Convolutional Architecture for Low-Dose CT Image Noise Reduction
NASA Astrophysics Data System (ADS)
Badretale, S.; Shaker, F.; Babyn, P.; Alirezaie, J.
2017-10-01
One of the critical topics in medical low-dose Computed Tomography (CT) imaging is how best to maintain image quality. As image quality decreases with lower X-ray radiation dose, improving it is both extremely important and challenging. We propose a novel approach to denoise low-dose CT images. Our algorithm directly learns an end-to-end mapping from low-dose CT images to their denoised, normal-dose counterparts. Our method is based on a deep convolutional neural network with rectified linear units. By learning a range of low-level to high-level features from a low-dose image, the proposed algorithm is capable of creating a high-quality denoised image. We demonstrate the superiority of our technique by comparing the results with those of two other state-of-the-art methods in terms of peak signal-to-noise ratio, root-mean-square error, and a structural similarity index.
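Two of the evaluation metrics named above have simple closed forms. A minimal sketch (standard definitions, not the authors' code; SSIM is omitted because it needs windowed statistics):

```python
import numpy as np

def rmse(ref, img):
    """Root-mean-square error between a reference and a test image."""
    return float(np.sqrt(np.mean((ref - img) ** 2)))

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio in dB (higher is better)."""
    e = rmse(ref, img)
    return float("inf") if e == 0 else 20.0 * np.log10(peak / e)

ref = np.full((8, 8), 100.0)
noisy = ref + 10.0                   # constant error of 10 grey levels
print(rmse(ref, noisy))              # 10.0
print(round(psnr(ref, noisy), 2))    # 28.13
```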
Evaluation of imaging quality for flat-panel detector based low dose C-arm CT system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seo, Chang-Woo; Cha, Bo Kyung; Jeon, Sungchae
The image quality associated with the extent of the angle of gantry rotation, the number of projection views, and the X-ray radiation dose was investigated in a flat-panel detector (FPD) based C-arm cone-beam computed tomography (CBCT) system for medical applications. The prototype CBCT system acquired projections using an X-ray tube (A-132, Varian Inc.) with a rhenium-tungsten molybdenum target and a flat-panel a-Si X-ray detector (PaxScan 4030CB, Varian Inc.) with a 397 x 298 mm active area, 388 μm pixel pitch and 1024 x 768 pixels in 2 by 2 binning mode. X-ray imaging quality was compared across different projection-acquisition conditions using the Feldkamp, Davis, and Kress (FDK) reconstruction algorithm. In this work, head-and-dental (75 kVp/20 mA) and chest (90 kVp/25 mA) phantoms were used to evaluate the image quality. For the 3D reconstruction, 361 projections (30 fps x 12 s) were acquired during a 360 deg. gantry rotation at 1 deg. intervals. A Parker weighting function was applied to handle redundant data and improve reconstructed image quality in a mobile C-arm system with limited rotation angles. The reconstructed 3D images were compared qualitatively in terms of scan protocols (projection views, rotation angles and exposure dose). Furthermore, image quality will be evaluated with respect to X-ray dose and limited projection data for an FPD-based mobile C-arm CBCT system. (authors)
Son, Jung-Young; Saveljev, Vladmir V; Kim, Jae-Soon; Kim, Sung-Sik; Javidi, Bahram
2004-09-10
The viewing zone of autostereoscopic imaging systems that use lenticular, parallax-barrier, and microlens-array plates as the viewing-zone-forming optics is analyzed in order to verify the image-quality differences between different locations of the zone. The viewing zone consists of many subzones. The images seen at most of these subzones are composed of at least one image strip selected from the total number of different view images displayed. These different view images are not mixed but patched to form a complete image. This image patching deteriorates the quality of the image seen at different subzones. We attempt to quantify the quality of the image seen at these viewing subzones by taking the inverse of the number of different view images patched together at different subzones. Although the combined viewing zone can be extended to almost all of the front space of the imaging system, in reality it is limited mainly by the image quality.
Assessment of visual landscape quality using IKONOS imagery.
Ozkan, Ulas Yunus
2014-07-01
The assessment of visual landscape quality is important to the management of urban woodlands. Satellite remote sensing may be used for this purpose as a substitute for traditional survey techniques that are both labour-intensive and time-consuming. This study examines the association between the perceived visual landscape quality of urban woodlands and texture measures extracted from IKONOS satellite data, which feature 4-m spatial resolution and four spectral bands. The study was conducted in the woodlands of Istanbul (the most important element of the urban mosaic) lying along both shores of the Bosporus Strait. The visual quality assessment applied in this study is based on the perceptual approach and was performed via a survey of expressed preferences. For this purpose, representative photographs of real scenery were used to elicit observers' preferences. A slide show comprising 33 images was presented to a group of 153 volunteers (all undergraduate students), who were asked to rate the visual quality of each on a 10-point scale (1 for very low visual quality, 10 for very high). Average visual quality scores were calculated for each landscape. Texture measures were acquired using two methods: pixel-based and object-based. Pixel-based texture measures were extracted from the first principal component (PC1) image. Object-based texture measures were extracted using the original four bands. The association between image texture measures and perceived visual landscape quality was tested via Pearson's correlation coefficient. The analysis found a strong linear association between image texture measures and visual quality. The highest correlation coefficient was found between the standard deviation of gray levels (SDGL), one of the pixel-based texture measures, and visual quality (r = 0.82, P < 0.05).
The results showed that perceived visual quality of urban woodland landscapes can be estimated by using texture measures extracted from satellite data in combination with appropriate modelling techniques.
Park, Hyun Jeong; Lee, Jeong Min; Park, Sung Bin; Lee, Jong Beum; Jeong, Yoong Ki; Yoon, Jeong Hee
The purpose of this work was to evaluate the image quality, lesion conspicuity, and dose reduction provided by knowledge-based iterative model reconstruction (IMR) in computed tomography (CT) of the liver compared with hybrid iterative reconstruction (IR) and filtered back projection (FBP) in patients with hepatocellular carcinoma (HCC). Fifty-six patients with 61 HCCs who underwent multiphasic reduced-dose CT (RDCT; n = 33) or standard-dose CT (SDCT; n = 28) were retrospectively evaluated. Images reconstructed with FBP, hybrid IR (iDose), and IMR were evaluated for image quality using CT attenuation and image noise. Objective and subjective image quality of the RDCT and SDCT sets were independently assessed by 2 observers in a blinded manner. Image quality and lesion conspicuity were better with IMR for both RDCT and SDCT than with either FBP or IR (P < 0.001). The contrast-to-noise ratio of HCCs with IMR-RDCT was significantly higher than with IR-SDCT on the delayed phase (DP) (P < 0.001) and comparable on the arterial phase (P = 0.501). IMR-RDCT was significantly superior to FBP-SDCT (P < 0.001). Compared with IR-SDCT, IMR-RDCT was comparable in image sharpness and tumor conspicuity on the arterial phase, and superior in image quality, noise, and lesion conspicuity on the DP. With the use of IMR, a 27% reduction of effective dose was achieved with RDCT (12.7 ± 0.6 mSv) compared with SDCT (17.4 ± 1.1 mSv) without loss of image quality (P < 0.001). IMR provides better image quality and tumor conspicuity than FBP and IR, with considerable noise reduction. In addition, results with IMR-RDCT were at least comparable to IR-SDCT for the evaluation of HCCs.
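The contrast-to-noise ratio used to compare the reconstructions has a standard form. A minimal sketch (one common definition; the study may use a variant, e.g. dividing by pooled noise, and the HU samples below are hypothetical):

```python
import numpy as np

def cnr(lesion_roi, background_roi):
    """Contrast-to-noise ratio: absolute mean difference between lesion and
    background ROIs, divided by the background's standard deviation."""
    return float(abs(lesion_roi.mean() - background_roi.mean())
                 / background_roi.std())

lesion = np.array([120.0, 122.0, 118.0, 120.0])   # hypothetical HU samples
liver = np.array([80.0, 84.0, 76.0, 80.0])
print(round(cnr(lesion, liver), 3))               # 40 / sqrt(8) ~ 14.142
```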
Generalized watermarking attack based on watermark estimation and perceptual remodulation
NASA Astrophysics Data System (ADS)
Voloshynovskiy, Sviatoslav V.; Pereira, Shelby; Herrigel, Alexander; Baumgartner, Nazanin; Pun, Thierry
2000-05-01
Digital image watermarking has become a popular technique for authentication and copyright protection. For verifying the security and robustness of watermarking algorithms, specific attacks have to be applied to test them. In contrast to the known Stirmark attack, which degrades the quality of the image while destroying the watermark, this paper presents a new approach which is based on the estimation of a watermark and the exploitation of the properties of Human Visual System (HVS). The new attack satisfies two important requirements. First, image quality after the attack as perceived by the HVS is not worse than the quality of the stego image. Secondly, the attack uses all available prior information about the watermark and cover image statistics to perform the best watermark removal or damage. The proposed attack is based on a stochastic formulation of the watermark removal problem, considering the embedded watermark as additive noise with some probability distribution. The attack scheme consists of two main stages: (1) watermark estimation and partial removal by a filtering based on a Maximum a Posteriori (MAP) approach; (2) watermark alteration and hiding through addition of noise to the filtered image, taking into account the statistics of the embedded watermark and exploiting HVS characteristics. Experiments on a number of real world and computer generated images show the high efficiency of the proposed attack against known academic and commercial methods: the watermark is completely destroyed in all tested images without altering the image quality. The approach can be used against watermark embedding schemes that operate either in coordinate domain, or transform domains like Fourier, DCT or wavelet.
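The two-stage attack described above can be caricatured in a few lines. This is a deliberately crude sketch: a 3x3 mean filter stands in for the MAP denoising filter, and unshaped Gaussian noise stands in for the HVS-driven perceptual remodulation; the function names are illustrative.

```python
import numpy as np

def mean3(img):
    """3x3 mean filter with edge padding -- a crude stand-in for the
    MAP-based denoising filter used in the attack's first stage."""
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def attack(stego, strength=1.0, seed=0):
    """Stage 1: estimate the watermark as the denoising residual and remove it.
    Stage 2: re-add noise matched to the estimate's statistics (a simplified
    form of perceptual remodulation)."""
    denoised = mean3(stego)
    estimate = stego - denoised              # rough additive-watermark estimate
    rng = np.random.default_rng(seed)
    remod = rng.normal(0.0, estimate.std() + 1e-12, stego.shape)
    return denoised + strength * remod

flat = np.full((4, 4), 5.0)
print(attack(flat).shape)   # (4, 4)
```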
Progressive cone beam CT dose control in image-guided radiation therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yan Hao; Cervino, Laura; Jiang, Steve B.
2013-06-15
Purpose: Cone beam CT (CBCT) in image-guided radiotherapy (IGRT) offers a tremendous advantage for treatment guidance. The associated imaging dose is a clinical concern. One unique feature of CBCT-based IGRT is that the same patient is repeatedly scanned during a treatment course, and the contents of CBCT images at different fractions are similar. The authors propose a progressive dose control (PDC) scheme to utilize this temporal correlation for imaging dose reduction. Methods: A dynamic CBCT scan protocol, as opposed to the static one in the current clinical practice, is proposed to gradually reduce the imaging dose in each treatment fraction. The CBCT image from each fraction is processed by a prior-image based nonlocal means (PINLM) module to enhance its quality. The increasing amount of prior information from previous CBCT images prevents degradation of image quality due to the reduced imaging dose. Two proof-of-principle experiments have been conducted using measured phantom data and Monte Carlo simulated patient data with deformation. Results: In the measured phantom case, utilizing a prior image acquired at 0.4 mAs, PINLM is able to improve the image quality of a CBCT acquired at 0.2 mAs by reducing the noise level from 34.95 to 12.45 HU. In the synthetic patient case, acceptable image quality is maintained at four consecutive fractions with gradually decreasing exposure levels of 0.4, 0.1, 0.07, and 0.05 mAs. When compared with the standard low-dose protocol of 0.4 mAs for each fraction, an overall imaging dose reduction of more than 60% is achieved. Conclusions: PINLM-PDC is able to reduce CBCT imaging dose in IGRT utilizing the temporal correlations among the sequence of CBCT images while maintaining the quality.
Multi-view 3D echocardiography compounding based on feature consistency
NASA Astrophysics Data System (ADS)
Yao, Cheng; Simpson, John M.; Schaeffter, Tobias; Penney, Graeme P.
2011-09-01
Echocardiography (echo) is a widely available method to obtain images of the heart; however, echo can suffer due to the presence of artefacts, high noise and a restricted field of view. One method to overcome these limitations is to use multiple images, using the 'best' parts from each image to produce a higher quality 'compounded' image. This paper describes our compounding algorithm which specifically aims to reduce the effect of echo artefacts as well as improving the signal-to-noise ratio, contrast and extending the field of view. Our method weights image information based on a local feature coherence/consistency between all the overlapping images. Validation has been carried out using phantom, volunteer and patient datasets consisting of up to ten multi-view 3D images. Multiple sets of phantom images were acquired, some directly from the phantom surface, and others by imaging through hard and soft tissue mimicking material to degrade the image quality. Our compounding method is compared to the original, uncompounded echocardiography images, and to two basic statistical compounding methods (mean and maximum). Results show that our method is able to take a set of ten images, degraded by soft and hard tissue artefacts, and produce a compounded image of equivalent quality to images acquired directly from the phantom. Our method on phantom, volunteer and patient data achieves almost the same signal-to-noise improvement as the mean method, while simultaneously almost achieving the same contrast improvement as the maximum method. We show a statistically significant improvement in image quality by using an increased number of images (ten compared to five), and visual inspection studies by three clinicians showed very strong preference for our compounded volumes in terms of overall high image quality, large field of view, high endocardial border definition and low cavity noise.
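The three compounding strategies compared above (mean, maximum, and consistency-weighted) can be sketched for co-registered images. The `consistency` branch below is a toy stand-in for the paper's local feature-coherence weighting, using distance to the pixelwise median as the agreement measure; it is not the authors' algorithm.

```python
import numpy as np

def compound(stack, method="consistency"):
    """Compound a stack of co-registered images (N x H x W).
    'mean' boosts SNR, 'max' boosts contrast; 'consistency' weights each
    image by its local agreement with the others (toy version)."""
    if method == "mean":
        return stack.mean(axis=0)
    if method == "max":
        return stack.max(axis=0)
    # weight = inverse distance to the pixelwise median across images
    dist = np.abs(stack - np.median(stack, axis=0))
    w = 1.0 / (1.0 + dist)
    return (w * stack).sum(axis=0) / w.sum(axis=0)

stack = np.stack([np.ones((2, 2)), 3.0 * np.ones((2, 2))])
print(compound(stack, "mean"))   # 2.0 everywhere
```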
Assessing the quality of restored images in optical long-baseline interferometry
NASA Astrophysics Data System (ADS)
Gomes, Nuno; Garcia, Paulo J. V.; Thiébaut, Éric
2017-03-01
Assessing the quality of aperture synthesis maps is relevant for benchmarking image reconstruction algorithms, for the scientific exploitation of data from optical long-baseline interferometers, and for the design/upgrade of new/existing interferometric imaging facilities. Although metrics have been proposed in these contexts, no systematic study has been conducted on the selection of a robust metric for quality assessment. This article addresses the question: what is the best metric to assess the quality of a reconstructed image? It starts by considering several metrics and selecting a few based on general properties. Then, a variety of image reconstruction cases are considered. The observational scenarios are phase closure and phase referencing at the Very Large Telescope Interferometer (VLTI), for a combination of two, three, four and six telescopes. End-to-end image reconstruction is accomplished with the MIRA software, and several merit functions are put to test. It is found that convolution by an effective point spread function is required for proper image quality assessment. The effective angular resolution of the images is superior to naive expectation based on the maximum frequency sampled by the array. This is due to the prior information used in the aperture synthesis algorithm and to the nature of the objects considered. The ℓ1-norm is the most robust of all considered metrics, because being linear it is less sensitive to image smoothing by high regularization levels. For the cases considered, this metric allows the implementation of automatic quality assessment of reconstructed images, with a performance similar to human selection.
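The recommended procedure above (convolve the truth with an effective PSF, then take an ℓ1 distance) can be sketched directly. This is an illustrative implementation under stated assumptions: an odd-sized, edge-padded PSF, and a flux-normalised ℓ1 distance (the paper's exact normalisation may differ).

```python
import numpy as np

def blur(img, psf):
    """'Same'-size 2D filtering of the ground-truth image with an effective
    PSF (correlation form; identical to convolution for symmetric PSFs)."""
    ph, pw = psf.shape
    p = np.pad(img, ((ph // 2, ph // 2), (pw // 2, pw // 2)), mode="edge")
    out = np.zeros_like(img, dtype=float)
    for i in range(ph):
        for j in range(pw):
            out += psf[i, j] * p[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def l1_metric(reconstruction, truth, psf):
    """l1 distance to the PSF-blurred truth, normalised by the blurred
    truth's total absolute flux (lower is better)."""
    ref = blur(truth, psf)
    return float(np.abs(reconstruction - ref).sum() / np.abs(ref).sum())

truth = np.arange(16.0).reshape(4, 4)
delta = np.zeros((3, 3)); delta[1, 1] = 1.0    # identity PSF
print(l1_metric(truth, truth, delta))          # 0.0
```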
Performance evaluation of no-reference image quality metrics for face biometric images
NASA Astrophysics Data System (ADS)
Liu, Xinwei; Pedersen, Marius; Charrier, Christophe; Bours, Patrick
2018-03-01
The accuracy of face recognition systems is significantly affected by the quality of face sample images. Recently established standards propose several important aspects for the assessment of face sample quality. Many existing no-reference image quality metrics (IQMs) can assess natural image quality by taking into account image-based quality attributes similar to those introduced in the standards. However, whether such metrics can assess face sample quality is rarely considered. We evaluate the performance of 13 selected no-reference IQMs on face biometrics. The experimental results show that several of them can assess face sample quality in accordance with system performance. We also analyze the strengths and weaknesses of different IQMs, as well as why some of them fail to assess face sample quality. Retraining an original IQM using a face database can improve its performance. In addition, the contribution of this paper can be used for the evaluation of IQMs on other biometric modalities; furthermore, it can be used for the development of multimodality biometric IQMs.
High-quality compressive ghost imaging
NASA Astrophysics Data System (ADS)
Huang, Heyan; Zhou, Cheng; Tian, Tian; Liu, Dongqi; Song, Lijun
2018-04-01
We propose a high-quality compressive ghost imaging method based on projected Landweber regularization and a guided filter, which effectively reduces undersampling noise and improves resolution. In our scheme, the original object is reconstructed by decomposing the compressive reconstruction process into regularization and denoising steps, instead of solving a single minimization problem. The simulation and experimental results show that our method achieves high ghost imaging quality in terms of PSNR and visual observation.
Backward Registration Based Aspect Ratio Similarity (ARS) for Image Retargeting Quality Assessment.
Zhang, Yabin; Fang, Yuming; Lin, Weisi; Zhang, Xinfeng; Li, Leida
2016-06-28
During the past few years, various content-aware image retargeting operators have been proposed for image resizing. However, the lack of effective objective retargeting quality assessment metrics limits the further development of image retargeting techniques. Different from traditional Image Quality Assessment (IQA) metrics, the quality degradation during image retargeting is caused by artificial retargeting modifications, and the difficulty for Image Retargeting Quality Assessment (IRQA) lies in the alteration of the image resolution and content, which makes it impossible to directly evaluate the quality degradation as traditional IQA does. In this paper, we interpret image retargeting in a unified framework of resampling grid generation and forward resampling. We show that geometric change estimation is an efficient way to clarify the relationship between the images. We formulate the geometric change estimation as a backward registration problem with a Markov Random Field (MRF) and provide an effective solution. The geometric change provides evidence about how the original image is resized into the target image. Under the guidance of the geometric change, we develop a novel Aspect Ratio Similarity (ARS) metric to evaluate the visual quality of retargeted images by exploiting local block changes with a visual importance pooling strategy. Experimental results on the publicly available MIT RetargetMe and CUHK datasets demonstrate that the proposed ARS predicts the visual quality of retargeted images more accurately than state-of-the-art IRQA metrics.
Learning the manifold of quality ultrasound acquisition.
El-Zehiry, Noha; Yan, Michelle; Good, Sara; Fang, Tong; Zhou, S Kevin; Grady, Leo
2013-01-01
Ultrasound acquisition is a challenging task that requires simultaneous adjustment of several acquisition parameters (the depth, the focus, the frequency and its operation mode). If the acquisition parameters are not properly chosen, the resulting image will have a poor quality and will degrade the patient diagnosis and treatment workflow. Several hardware-based systems for autotuning the acquisition parameters have been previously proposed, but these solutions were largely abandoned because they failed to properly account for tissue inhomogeneity and other patient-specific characteristics. Consequently, in routine practice the clinician either uses population-based parameter presets or manually adjusts the acquisition parameters for each patient during the scan. In this paper, we revisit the problem of autotuning the acquisition parameters by taking a completely novel approach and producing a solution based on image analytics. Our solution is inspired by the autofocus capability of conventional digital cameras, but is significantly more challenging because the number of acquisition parameters is large and the determination of "good quality" images is more difficult to assess. Surprisingly, we show that the set of acquisition parameters which produce images that are favored by clinicians comprise a 1D manifold, allowing for a real-time optimization to maximize image quality. We demonstrate our method for acquisition parameter autotuning on several live patients, showing that our system can start with a poor initial set of parameters and automatically optimize the parameters to produce high quality images.
Image sharpness assessment based on wavelet energy of edge area
NASA Astrophysics Data System (ADS)
Li, Jin; Zhang, Hong; Zhang, Lei; Yang, Yifan; He, Lei; Sun, Mingui
2018-04-01
Image quality assessment is needed in multiple image processing areas, and blur is one of the key causes of image deterioration. Although effective full-reference image quality assessment metrics have been proposed in the past few years, no-reference methods remain an area of active research. This paper proposes a no-reference sharpness assessment method based on the wavelet transform which focuses on the edge area of the image. Based on two simple characteristics of the human visual system, weights are introduced to calculate the weighted log-energy of each wavelet subband. The final score is given by the ratio of high-frequency energy to total energy. The algorithm is tested on multiple databases. Compared with several state-of-the-art metrics, the proposed algorithm achieves better performance with lower runtime consumption.
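The core score (high-frequency log-energy over total log-energy) can be sketched with a single Haar decomposition. This strips out the paper's edge masking and vision-system weights, so it is an illustration of the energy ratio only, not the proposed metric.

```python
import numpy as np

def sharpness(img):
    """Ratio of high-frequency Haar log-energy to total log-energy
    (single level, no edge mask, no perceptual weights)."""
    a = (img[0::2] + img[1::2]) / 2.0
    d = (img[0::2] - img[1::2]) / 2.0
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    high = [(a[:, 0::2] - a[:, 1::2]) / 2.0,     # LH
            (d[:, 0::2] + d[:, 1::2]) / 2.0,     # HL
            (d[:, 0::2] - d[:, 1::2]) / 2.0]     # HH
    e_high = sum(np.log1p(c ** 2).sum() for c in high)
    e_total = e_high + np.log1p(ll ** 2).sum()
    return float(e_high / e_total)

sharp = (np.indices((8, 8)).sum(axis=0) % 2) * 100.0   # checkerboard: hard edges
blurred = np.full((8, 8), 50.0)                        # flat: no edges
print(sharpness(sharp) > sharpness(blurred))           # True
```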
Medical image enhancement using resolution synthesis
NASA Astrophysics Data System (ADS)
Wong, Tak-Shing; Bouman, Charles A.; Thibault, Jean-Baptiste; Sauer, Ken D.
2011-03-01
We introduce a post-processing approach to improve the quality of CT reconstructed images. The scheme is adapted from the resolution-synthesis (RS) interpolation algorithm. In this approach, we consider the input image, scanned at a particular dose level, as a degraded version of a high-quality image scanned at a high dose level. Image enhancement is achieved by predicting the high-quality image by classification-based linear regression. To improve the robustness of our scheme, we also apply the minimum description length principle to determine the optimal number of predictors, and ridge regression to regularize the design of the predictors. Experimental results show that our scheme is effective in reducing the noise in images reconstructed by filtered back projection without significant loss of image detail. Alternatively, our scheme can also be applied to reduce dose while maintaining image quality at an acceptable level.
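The ridge-regularized predictor design mentioned above has a closed form. A minimal sketch under simplified assumptions (one predictor instead of one per patch class, a toy 3-pixel neighbourhood, and hypothetical weights):

```python
import numpy as np

def fit_ridge(X, y, lam=1e-2):
    """Closed-form ridge regression: w = (X^T X + lam*I)^-1 X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Toy data: predict a 'high-dose' pixel from a 3-pixel low-dose neighbourhood.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
w_true = np.array([0.2, 0.6, 0.2])      # hypothetical smoothing weights
y = X @ w_true + 0.01 * rng.normal(size=100)
w = fit_ridge(X, y)
print(np.round(w, 2))
```

In the full scheme each local patch is first classified, and a separately trained predictor of this form is applied per class.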
NASA Astrophysics Data System (ADS)
Wu, Wei; Zhao, Dewei; Zhang, Huan
2015-12-01
Super-resolution image reconstruction is an effective method to improve image quality and has important research significance in the field of image processing. However, the choice of dictionary directly affects the efficiency of image reconstruction. Sparse representation theory is introduced into the nearest-neighbor selection problem, and a super-resolution image reconstruction algorithm based on a multi-class dictionary is analyzed. This method avoids the redundancy of training a single over-complete dictionary, makes each sub-dictionary more representative, and replaces the traditional Euclidean distance computation to improve the quality of the whole reconstructed image. In addition, non-local self-similarity regularization is introduced to handle the ill-posed problem. Experimental results show that the algorithm achieves much better results than state-of-the-art algorithms in terms of both PSNR and visual perception.
Rodríguez-Olivares, Ramón; El Faquir, Nahid; Rahhab, Zouhair; Maugenest, Anne-Marie; Van Mieghem, Nicolas M; Schultz, Carl; Lauritsch, Guenter; de Jaegere, Peter P T
2016-07-01
To study the determinants of image quality of rotational angiography using dedicated research prototype software for motion compensation without rapid ventricular pacing after the implantation of four commercially available catheter-based valves. Prospective observational study including 179 consecutive patients who underwent transcatheter aortic valve implantation (TAVI) with either the Medtronic CoreValve (MCS), Edwards SAPIEN valve (ESV), Boston Sadra Lotus (BSL) or St. Jude Portico valve (SJP), in whom rotational angiography (R-angio) with motion-compensated 3D image reconstruction was performed. Image quality was graded from 1 (excellent image quality) to 5 (strongly degraded), with a distinction between good (grades 1-2) and poor image quality (grades 3-5). Clinical (gender, body mass index, Agatston score, heart rate and rhythm, artifacts), procedural (valve type) and technical variables (isocentricity) were related to the image quality assessment. Image quality was good in 128 (72 %) and poor in 51 (28 %) patients. By univariable analysis, only valve type (BSL) and the presence of an artifact negatively affected image quality. By multivariable analysis (in which BMI was forced into the model), BSL valve (odds ratio 3.5, 95 % CI [1.3-9.6], p = 0.02), presence of an artifact (odds ratio 2.5, 95 % CI [1.2-5.4], p = 0.02) and BMI (odds ratio 1.1, 95 % CI [1.0-1.2], p = 0.04) were independent predictors of poor image quality. Rotational angiography with motion-compensated 3D image reconstruction using dedicated research prototype software offers good image quality for the evaluation of frame geometry after TAVI in the majority of patients. Valve type, presence of artifacts and higher BMI negatively affect image quality.
Benefits of utilizing CellProfiler as a characterization tool for U-10Mo nuclear fuel
Collette, R.; Douglas, J.; Patterson, L.; ...
2015-05-01
Automated image processing techniques have the potential to aid in the performance evaluation of nuclear fuels by eliminating judgment calls that may vary from person to person or sample to sample. Analysis of in-core fuel performance is required for design and safety evaluations related to almost every aspect of the nuclear fuel cycle. This study presents a methodology for assessing the quality of uranium-molybdenum fuel images and describes image analysis routines designed for the characterization of several important microstructural properties. The analyses are performed in CellProfiler, an open-source program designed to enable biologists without training in computer vision or programming to automatically extract cellular measurements from large image sets. The quality metric scores an image based on three parameters: the illumination gradient across the image, the overall focus of the image, and the fraction of the image that contains scratches. The metric presents the user with the ability to ‘pass’ or ‘fail’ an image based on a reproducible quality score. Passable images may then be characterized through a separate CellProfiler pipeline, which employs a variety of common image analysis techniques. The results demonstrate the ability to reliably pass or fail images based on the illumination, focus, and scratch fraction of the image, followed by automatic extraction of morphological data with respect to fission gas voids, interaction layers, and grain boundaries.
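A reproducible pass/fail gate on the three quality parameters can be sketched as follows. The thresholds and the Laplacian-variance focus measure are illustrative choices, not the study's calibrated values or CellProfiler's internals.

```python
import numpy as np

def focus_score(img):
    """Variance of a simple 4-neighbour Laplacian -- a standard focus measure
    (one plausible choice for the 'overall focus' parameter)."""
    lap = (-4.0 * img[1:-1, 1:-1] + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return float(lap.var())

def passes_quality(illumination_gradient, focus, scratch_fraction,
                   max_gradient=0.2, min_focus=10.0, max_scratch=0.1):
    """Pass/fail gate on the three parameters; thresholds are illustrative."""
    return (illumination_gradient <= max_gradient
            and focus >= min_focus
            and scratch_fraction <= max_scratch)

sharp = (np.indices((8, 8)).sum(axis=0) % 2) * 100.0   # high-contrast texture
flat = np.full((8, 8), 50.0)                           # featureless (defocused)
print(focus_score(sharp) > focus_score(flat))          # True
```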
Blind compressed sensing image reconstruction based on alternating direction method
NASA Astrophysics Data System (ADS)
Liu, Qinan; Guo, Shuxu
2018-04-01
To reconstruct the original image when the sparse basis is unknown, this paper proposes an image reconstruction method based on a blind compressed sensing model. In this model, the image signal is regarded as the product of a sparse coefficient matrix and a dictionary matrix. Based on existing blind compressed sensing theory, the optimal solution is obtained by an alternating minimization method. The proposed method addresses the difficulty of specifying a sparse basis in compressed sensing, suppresses noise, and improves the quality of the reconstructed image. This method ensures that the blind compressed sensing problem has a unique solution and can recover the original image signal from a complex environment with stronger self-adaptability. The experimental results show that the proposed blind compressed sensing reconstruction algorithm can recover high-quality image signals under under-sampling conditions.
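The alternating minimization idea (factor the signal into a dictionary and sparse codes, updating each in turn) can be caricatured without the compressive measurement operator. This sketch only shows the alternation pattern; the paper's model, constraints, and convergence safeguards are not reproduced, and `blind_factorise` is an illustrative name.

```python
import numpy as np

def soft(x, t):
    """Soft thresholding (the proximal operator of the l1 penalty)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def blind_factorise(Y, n_atoms=4, n_iter=50, lam=0.01, seed=0):
    """Alternately update sparse codes S and dictionary D so that Y ~ D @ S."""
    rng = np.random.default_rng(seed)
    D = rng.normal(size=(Y.shape[0], n_atoms))
    S = np.zeros((n_atoms, Y.shape[1]))
    for _ in range(n_iter):
        # sparse-code update: one proximal-gradient (ISTA) step
        step = 1.0 / np.linalg.norm(D, 2) ** 2
        S = soft(S + step * D.T @ (Y - D @ S), lam * step)
        # dictionary update: least squares, then renormalise the atoms
        # (rescaling S keeps the product D @ S unchanged)
        D = Y @ np.linalg.pinv(S)
        norms = np.linalg.norm(D, axis=0, keepdims=True) + 1e-12
        D, S = D / norms, S * norms.T
    return D, S

# Toy data generated from 4 atoms with sparse codes.
rng = np.random.default_rng(1)
Y = rng.normal(size=(8, 4)) @ np.where(rng.random((4, 20)) < 0.3,
                                       rng.normal(size=(4, 20)), 0.0)
D, S = blind_factorise(Y)
print(np.linalg.norm(Y - D @ S) < np.linalg.norm(Y))   # fit beats the S = 0 start
```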
Machine vision based quality inspection of flat glass products
NASA Astrophysics Data System (ADS)
Zauner, G.; Schagerl, M.
2014-03-01
This application paper presents a machine vision solution for the quality inspection of flat glass products. A contact image sensor (CIS) is used to generate digital images of the glass surfaces. The presented machine vision based quality inspection at the end of the production line aims to classify five different glass defect types. The defect images are usually characterized by very little `image structure', i.e. homogeneous regions without distinct image texture. Additionally, these defect images usually consist of only a few pixels. At the same time the appearance of certain defect classes can be very diverse (e.g. water drops). We used simple state-of-the-art image features such as histogram-based features (standard deviation, kurtosis, skewness), geometric features (form factor/elongation, eccentricity, Hu moments) and texture features (grey level run length matrix, co-occurrence matrix) to extract defect information. The main contribution of this work lies in the systematic evaluation of various machine learning algorithms to identify appropriate classification approaches for this specific class of images. In this way, the following machine learning algorithms were compared: decision tree (J48), random forest, JRip rules, naive Bayes, Support Vector Machine (multi class), neural network (multilayer perceptron) and k-Nearest Neighbour. We used a representative image database of 2300 defect images and applied cross validation for evaluation purposes.
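The histogram-based feature triple named above (standard deviation, skewness, kurtosis) is cheap to compute per defect region. A minimal sketch using the standardised-moment definitions (the paper does not specify which moment convention it uses):

```python
import numpy as np

def histogram_features(region):
    """Std. deviation, skewness and kurtosis of a region's grey levels,
    as standardised moments (kurtosis here is non-excess)."""
    x = np.asarray(region, dtype=float).ravel()
    mu, sigma = x.mean(), x.std()
    z = (x - mu) / (sigma + 1e-12)        # guard against flat regions
    return sigma, float((z ** 3).mean()), float((z ** 4).mean())

# A symmetric grey-level distribution has zero skewness.
print(histogram_features([1, 2, 3, 4, 5]))
```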
Xie, Shan Juan; Lu, Yu; Yoon, Sook; Yang, Jucheng; Park, Dong Sun
2015-01-01
Finger vein recognition has been considered one of the most promising biometrics for personal authentication. However, the capacities and percentages of finger tissues (e.g., bone, muscle, ligament, water, fat, etc.) vary from person to person. This often results in poor-quality finger vein images, thereby degrading the performance of finger vein recognition systems (FVRSs). In this paper, the intrinsic factors of finger tissue that cause poor-quality finger vein images are analyzed, and an intensity variation (IV) normalization method using guided filter based single scale retinex (GFSSR) is proposed for finger vein image enhancement. The experimental results on two public datasets demonstrate the effectiveness of the proposed method in enhancing image quality and finger vein recognition accuracy. PMID:26184226
Sun, Xiaofei; Shi, Lin; Luo, Yishan; Yang, Wei; Li, Hongpeng; Liang, Peipeng; Li, Kuncheng; Mok, Vincent C T; Chu, Winnie C W; Wang, Defeng
2015-07-28
Intensity normalization is an important preprocessing step in brain magnetic resonance image (MRI) analysis. During MR image acquisition, different scanners or parameters may be used for scanning different subjects or the same subject at different times, which can result in large intensity variations. This intensity variation greatly undermines the performance of subsequent MRI processing and population analysis, such as image registration, segmentation, and tissue volume measurement. In this work, we proposed a new histogram normalization method to reduce the intensity variation between MRIs obtained from different acquisitions. In our experiment, we scanned each subject twice on two different scanners using different imaging parameters. With noise estimation, the image with the lower noise level was determined and treated as the high-quality reference image. Then the histogram of the low-quality image was normalized to the histogram of the high-quality image. The normalization algorithm includes two main steps: (1) intensity scaling (IS), where, for the high-quality reference image, the intensities of the image are first rescaled to a range between the low intensity region (LIR) value and the high intensity region (HIR) value; and (2) histogram normalization (HN), where the histogram of the low-quality image as input image is stretched to match the histogram of the reference image, so that the intensity range in the normalized image also lies between LIR and HIR. We performed three sets of experiments to evaluate the proposed method, i.e., image registration, segmentation, and tissue volume measurement, and compared it with an existing intensity normalization method. The results validate that our histogram normalization framework achieves better results in all the experiments. It is also demonstrated that the brain template built with normalization preprocessing is of higher quality than the template built with no normalization processing.
We have proposed a histogram-based MRI intensity normalization method. The method can normalize scans which were acquired on different MRI units. We have validated that the method can greatly improve the image analysis performance. Furthermore, it is demonstrated that with the help of our normalization method, we can create a higher quality Chinese brain template.
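The two-step scheme described above (intensity scaling of the reference, then histogram normalization of the low-quality image) can be illustrated with a simplified quantile-matching sketch; the paper's exact stretching procedure is not given here, so rank-based matching is an assumption:

```python
def intensity_scale(img, lir, hir):
    """Step (1): rescale the reference image's intensities to [LIR, HIR]."""
    lo, hi = min(img), max(img)
    return [lir + (v - lo) * (hir - lir) / (hi - lo) for v in img]

def histogram_normalize(low_q, reference):
    """Step (2): map each low-quality pixel to the reference value of
    equal rank, so the output intensity range matches the reference.

    A simplified quantile-matching sketch of the paper's two-step scheme;
    the exact histogram-stretching details are assumptions.
    """
    ref_sorted = sorted(reference)
    n = len(ref_sorted)
    order = sorted(range(len(low_q)), key=lambda i: low_q[i])
    out = [0.0] * len(low_q)
    for rank, i in enumerate(order):
        # Scale the rank into the reference image's value distribution.
        out[i] = ref_sorted[round(rank * (n - 1) / (len(low_q) - 1))]
    return out
```

After normalization the low-quality image's intensities lie within the reference range, which is the property the paper relies on for registration and segmentation.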
Fuzzy Logic-based expert system for evaluating cake quality of freeze-dried formulations.
Trnka, Hjalte; Wu, Jian X; Van De Weert, Marco; Grohganz, Holger; Rantanen, Jukka
2013-12-01
Freeze-drying of peptide and protein-based pharmaceuticals is an increasingly important field of research. The diverse nature of these compounds, limited understanding of excipient functionality, and difficult-to-analyze quality attributes together with the increasing importance of the biosimilarity concept complicate the development phase of safe and cost-effective drug products. To streamline the development phase and to make high-throughput formulation screening possible, efficient solutions for analyzing critical quality attributes such as cake quality with minimal material consumption are needed. The aim of this study was to develop a fuzzy logic system based on image analysis (IA) for analyzing cake quality. Freeze-dried samples with different visual quality attributes were prepared in well plates. Imaging solutions together with image analytical routines were developed for extracting critical visual features such as the degree of cake collapse, glassiness, and color uniformity. On the basis of the IA outputs, a fuzzy logic system for analysis of these freeze-dried cakes was constructed. After this development phase, the system was tested with a new screening well plate. The developed fuzzy logic-based system was found to give comparable quality scores with visual evaluation, making high-throughput classification of cake quality possible. © 2013 Wiley Periodicals, Inc. and the American Pharmacists Association.
Husarik, Daniela B; Marin, Daniele; Samei, Ehsan; Richard, Samuel; Chen, Baiyu; Jaffe, Tracy A; Bashir, Mustafa R; Nelson, Rendon C
2012-08-01
The aim of this study was to compare the image quality of abdominal computed tomography scans in an anthropomorphic phantom acquired at different radiation dose levels where each raw data set is reconstructed with both a standard convolution filtered back projection (FBP) and a full model-based iterative reconstruction (MBIR) algorithm. An anthropomorphic phantom in 3 sizes was used with a custom-built liver insert simulating late hepatic arterial enhancement and containing hypervascular liver lesions of various sizes. Imaging was performed on a 64-section multidetector-row computed tomography scanner (Discovery CT750 HD; GE Healthcare, Waukesha, WI) at 3 different tube voltages for each patient size and 5 incrementally decreasing tube current-time products for each tube voltage. Quantitative analysis consisted of contrast-to-noise ratio calculations and image noise assessment. Qualitative image analysis was performed by 3 independent radiologists rating subjective image quality and lesion conspicuity. Contrast-to-noise ratio was significantly higher and mean image noise was significantly lower on MBIR images than on FBP images in all patient sizes, at all tube voltage settings, and all radiation dose levels (P < 0.05). Overall image quality and lesion conspicuity were rated higher for MBIR images compared with FBP images at all radiation dose levels. Image quality and lesion conspicuity on 25% to 50% dose MBIR images were rated equal to full-dose FBP images. This phantom study suggests that depending on patient size, clinically acceptable image quality of the liver in the late hepatic arterial phase can be achieved with MBIR at approximately 50% lower radiation dose compared with FBP.
An Underwater Color Image Quality Evaluation Metric.
Yang, Miao; Sowmya, Arcot
2015-12-01
Quality evaluation of underwater images is a key goal of underwater video image retrieval and intelligent processing. To date, no metric has been proposed for underwater color image quality evaluation (UCIQE). The special absorption and scattering characteristics of the water medium do not allow direct application of natural color image quality metrics, especially across different underwater environments. In this paper, subjective testing for underwater image quality was organized. The statistical distribution of the underwater image pixels in the CIELab color space related to subjective evaluation indicates that the sharpness and colorfulness factors correlate well with subjective image quality perception. Based on these findings, a new UCIQE metric, which is a linear combination of chroma, saturation, and contrast, is proposed to quantify the non-uniform color cast, blurring, and low contrast that characterize underwater engineering and monitoring images. Experiments are conducted to illustrate the performance of the proposed UCIQE metric and its capability to measure underwater image enhancement results. They show that the proposed metric has comparable performance to the leading natural color image quality metrics and the underwater grayscale image quality metrics available in the literature, and can predict with higher accuracy the relative amount of degradation for similar image content in underwater environments. Importantly, UCIQE is a simple and fast solution for real-time underwater video processing. The effectiveness of the presented measure is also demonstrated by subjective evaluation, which shows good correlation between the UCIQE and the subjective mean opinion score.
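The linear combination described above can be sketched as follows; the weights and the simple range-based contrast term are assumptions (the paper's exact coefficients and its CIELab conversion are not reproduced here):

```python
import statistics

# Assumed weights for the linear combination c1*sigma_c + c2*con_l + c3*mu_s;
# the paper's actual trained coefficients may differ.
C1, C2, C3 = 0.4680, 0.2745, 0.2576

def uciqe(chroma, luminance, saturation):
    """Linear combination of chroma std, luminance contrast and mean
    saturation, per the abstract's description of the UCIQE metric.

    Inputs are flat per-pixel lists of CIELab-derived quantities; the
    colour-space conversion itself is omitted in this sketch, and the
    min-max contrast term is a simplification.
    """
    sigma_c = statistics.pstdev(chroma)          # variation of chroma
    con_l = max(luminance) - min(luminance)      # simple contrast proxy
    mu_s = statistics.fmean(saturation)          # average saturation
    return C1 * sigma_c + C2 * con_l + C3 * mu_s
```

A greenish, hazy underwater frame scores low on all three terms, which is exactly the degradation pattern the metric is designed to penalize.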
Learning to rank for blind image quality assessment.
Gao, Fei; Tao, Dacheng; Gao, Xinbo; Li, Xuelong
2015-10-01
Blind image quality assessment (BIQA) aims to predict perceptual image quality scores without access to reference images. State-of-the-art BIQA methods typically require subjects to score a large number of images to train a robust model. However, subjective quality scores are imprecise, biased, and inconsistent, and it is challenging to obtain a large-scale database, or to extend existing databases, because of the inconvenience of collecting images, training the subjects, conducting subjective experiments, and realigning human quality evaluations. To combat these limitations, this paper explores and exploits preference image pairs (PIPs), such as "the quality of image Ia is better than that of image Ib", for training a robust BIQA model. The preference label, representing the relative quality of two images, is generally precise and consistent, and is not sensitive to image content, distortion type, or subject identity; such PIPs can be generated at a very low cost. The proposed BIQA method is one of learning to rank. We first formulate the problem of learning the mapping from the image features to the preference label as one of classification. In particular, we investigate the utilization of a multiple kernel learning algorithm based on group lasso to provide a solution. A simple but effective strategy to estimate perceptual image quality scores is then presented. Experiments show that the proposed BIQA method is highly effective and achieves a performance comparable with that of state-of-the-art BIQA algorithms. Moreover, the proposed method can be easily extended to new distortion categories.
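The pairwise formulation can be illustrated with a minimal stand-in: preference pairs are built as feature-difference vectors and fed to a simple linear perceptron. The paper itself uses multiple kernel learning with group lasso, which is not reproduced here; the perceptron is only a sketch of the "preference label as classification" idea:

```python
def make_pips(feats, scores):
    """Turn absolute quality scores into preference image pairs (PIPs).

    Each pair yields a feature-difference vector labelled +1 if the first
    image is the better one (and the mirrored pair labelled -1).
    """
    pairs = []
    for i in range(len(feats)):
        for j in range(len(feats)):
            if scores[i] > scores[j]:
                diff = [a - b for a, b in zip(feats[i], feats[j])]
                pairs.append((diff, 1))
                pairs.append(([-d for d in diff], -1))
    return pairs

def train_ranker(pairs, epochs=50):
    """Perceptron on feature differences: a stand-in for the paper's
    multiple-kernel-learning classifier."""
    w = [0.0] * len(pairs[0][0])
    for _ in range(epochs):
        for x, y in pairs:
            if y * sum(wi * xi for wi, xi in zip(w, x)) <= 0:
                w = [wi + y * xi for wi, xi in zip(w, x)]
    return w

def rank_score(w, x):
    """Higher score means the model prefers (ranks higher) this image."""
    return sum(wi * xi for wi, xi in zip(w, x))
```

Because only relative labels are needed, training data of this form can be generated cheaply, which is the core advantage the abstract claims.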
Human Visual System-Based Fundus Image Quality Assessment of Portable Fundus Camera Photographs.
Wang, Shaoze; Jin, Kai; Lu, Haitong; Cheng, Chuming; Ye, Juan; Qian, Dahong
2016-04-01
Telemedicine and the medical "big data" era in ophthalmology highlight the use of non-mydriatic ocular fundus photography, which has given rise to indispensable applications of portable fundus cameras. However, in the case of portable fundus photography, non-mydriatic image quality is more vulnerable to distortions, such as uneven illumination, color distortion, blur, and low contrast. Such distortions are called generic quality distortions. This paper proposes an algorithm capable of selecting images of fair generic quality that would be especially useful to assist inexperienced individuals in collecting meaningful and interpretable data with consistency. The algorithm is based on three characteristics of the human visual system: multi-channel sensation, just noticeable blur, and the contrast sensitivity function, used to detect illumination and color distortion, blur, and low contrast distortion, respectively. A total of 536 retinal images, 280 from proprietary databases and 256 from public databases, were graded independently by one senior and two junior ophthalmologists, such that three partial measures of quality and generic overall quality were classified into two categories. Binary classification was implemented by the support vector machine and the decision tree, and receiver operating characteristic (ROC) curves were obtained and plotted to analyze the performance of the proposed algorithm. The experimental results revealed that the generic overall quality classification achieved a sensitivity of 87.45% at a specificity of 91.66%, with an area under the ROC curve of 0.9452, indicating the value of applying the algorithm, which is based on the human visual system, to assess the image quality of non-mydriatic photography, especially for low-cost ophthalmological telemedicine applications.
An Approach to Improve the Quality of Infrared Images of Vein-Patterns
Lin, Chih-Lung
2011-01-01
This study develops an approach to improve the quality of infrared (IR) images of vein-patterns, which usually have noise, low contrast, low brightness and small objects of interest, thus requiring preprocessing to improve their quality. The main characteristics of the proposed approach are that no prior knowledge about the IR image is necessary and no parameters must be preset. Two main goals are sought: impulse noise reduction and adaptive contrast enhancement technologies. In our study, a fast median-based filter (FMBF) is developed as a noise reduction method. It is based on an IR imaging mechanism to detect the noisy pixels and on a modified median-based filter to remove the noisy pixels in IR images. FMBF has the advantage of a low computation load. In addition, FMBF can retain reasonably good edges and texture information when the size of the filter window increases. The most important advantage is that the peak signal-to-noise ratio (PSNR) caused by FMBF is higher than the PSNR caused by the median filter. A hybrid cumulative histogram equalization (HCHE) is proposed for adaptive contrast enhancement. HCHE can automatically generate a hybrid cumulative histogram (HCH) based on two different pieces of information about the image histogram. HCHE can improve the enhancement effect on hot objects rather than background. The experimental results are addressed and demonstrate that the proposed approach is feasible for use as an effective and adaptive process for enhancing the quality of IR vein-pattern images. PMID:22247674
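The two stages described in this abstract can be approximated with textbook building blocks: a plain median filter in place of the noise-detecting FMBF, and global cumulative-histogram equalisation in place of the hybrid HCHE. Both substitutions are simplifications of the authors' methods:

```python
def median_filter(img, w=3):
    """Windowed median filter on a 2-D grid of grey values.

    A plain stand-in for the paper's FMBF, which additionally detects
    noisy pixels before filtering to lower the computation load.
    """
    h, cols = len(img), len(img[0])
    r = w // 2
    out = [row[:] for row in img]
    for y in range(h):
        for x in range(cols):
            win = [img[j][i]
                   for j in range(max(0, y - r), min(h, y + r + 1))
                   for i in range(max(0, x - r), min(cols, x + r + 1))]
            win.sort()
            out[y][x] = win[len(win) // 2]
    return out

def equalize(img, levels=256):
    """Global histogram equalisation via the cumulative histogram.

    The paper's HCHE blends two histogram-derived mappings to favour hot
    objects over background; that weighting is omitted here.
    """
    flat = [v for row in img for v in row]
    hist = [0] * levels
    for v in flat:
        hist[v] += 1
    cdf, total, acc = [], len(flat), 0
    for c in hist:
        acc += c
        cdf.append(acc / total)
    return [[round(cdf[v] * (levels - 1)) for v in row] for row in img]
```

The median filter removes the isolated impulse-noise pixels the abstract describes, while equalisation stretches the low-contrast, low-brightness IR histogram across the full grey range.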
Kholmovski, Eugene G; Parker, Dennis L
2005-07-01
There is a considerable similarity between proton density-weighted (PDw) and T2-weighted (T2w) images acquired by dual echo fast spin-echo (FSE) sequences. The similarity manifests itself not only in image space as correspondence between intensities of PDw and T2w images, but also in phase space as consistency between phases of PDw and T2w images. Methods for improving the imaging efficiency and image quality of dual echo FSE sequences based on this feature have been developed. The total scan time of dual echo FSE acquisition may be reduced by as much as 25% by incorporating an estimate of the image phase from a fully sampled PDw image when reconstructing partially sampled T2w images. The quality of T2w images acquired using phased array coils may be significantly improved by using the developed noise reduction reconstruction scheme, which is based on the correspondence between the PDw and T2w image intensities and the consistency between the PDw and T2w image phases. Studies of phantom and human subject MRI data were performed to evaluate the effectiveness of the techniques.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smitherman, C; Chen, B; Samei, E
2014-06-15
Purpose: This work involved a comprehensive modeling of task-based performance of CT across a wide range of protocols. The approach was used for optimization and consistency of dose and image quality within a large multi-vendor clinical facility. Methods: 150 adult protocols from the Duke University Medical Center were grouped into sub-protocols with similar acquisition characteristics. A size based image quality phantom (Duke Mercury Phantom) was imaged using these sub-protocols for a range of clinically relevant doses on two CT manufacturer platforms (Siemens, GE). The images were analyzed to extract task-based image quality metrics such as the Task Transfer Function (TTF), Noise Power Spectrum, and Az based on designer nodule task functions. The data were analyzed in terms of the detectability of a lesion size/contrast as a function of dose, patient size, and protocol. A graphical user interface (GUI) was developed to predict image quality and dose to achieve a minimum level of detectability. Results: Image quality trends with variations in dose, patient size, and lesion contrast/size were evaluated and calculated data behaved as predicted. The GUI proved effective to predict the Az values representing radiologist confidence for a targeted lesion, patient size, and dose. As an example, an abdomen pelvis exam for the GE scanner, with a task size/contrast of 5-mm/50-HU, and an Az of 0.9 requires a dose of 4.0, 8.9, and 16.9 mGy for patient diameters of 25, 30, and 35 cm, respectively. For a constant patient diameter of 30 cm, the minimum detected lesion size at those dose levels would be 8.4, 5, and 3.9 mm, respectively. Conclusion: The designed CT protocol optimization platform can be used to evaluate minimum detectability across dose levels and patient diameters. The method can be used to improve individual protocols as well as to improve protocol consistency across CT scanners.
Tan, T J; Lau, Kenneth K; Jackson, Dana; Ardley, Nicholas; Borasu, Adina
2017-04-01
The purpose of this study was to assess the efficacy of model-based iterative reconstruction (MBIR), statistical iterative reconstruction (SIR), and filtered back projection (FBP) image reconstruction algorithms in the delineation of ureters and overall image quality on non-enhanced computed tomography of the renal tracts (NECT-KUB). This was a prospective study of 40 adult patients who underwent NECT-KUB for investigation of ureteric colic. Images were reconstructed using FBP, SIR, and MBIR techniques and individually and randomly assessed by two blinded radiologists. Parameters measured were overall image quality, presence of ureteric calculus, presence of hydronephrosis or hydroureters, image quality of each ureteric segment, total length of ureters unable to be visualized, attenuation values of image noise, and retroperitoneal fat content for each patient. There were no diagnostic discrepancies between image reconstruction modalities for urolithiasis. Overall image quality and the quality of each ureteric segment were superior using MBIR (67.5 % rated as 'Good to Excellent' vs. 25 % for SIR and 2.5 % for FBP). The lengths of non-visualized ureteric segments were shortest using MBIR (55.0 % measured 'less than 5 cm' vs. 33.8 % for SIR and 10 % for FBP). MBIR was able to reduce overall image noise by up to 49.36 % over SIR and 71.02 % over FBP. The MBIR technique improves overall image quality and visualization of the ureters over FBP and SIR.
Adapting the ISO 20462 softcopy ruler method for online image quality studies
NASA Astrophysics Data System (ADS)
Burns, Peter D.; Phillips, Jonathan B.; Williams, Don
2013-01-01
In this paper we address the problem of no-reference image quality assessment, focusing on JPEG-corrupted images. In general, no-reference metrics cannot measure distortions with the same performance across their full range and across different image contents: the crosstalk between content and distortion signals influences human perception. We propose two strategies to improve the correlation between subjective and objective quality data. The first strategy groups images according to their spatial complexity; the second is based on a frequency analysis. Both strategies are tested on two databases available in the literature. The results show an improvement in the correlation between no-reference metrics and psycho-visual data, evaluated in terms of the Pearson correlation coefficient.
NASA Astrophysics Data System (ADS)
Zhou, Yi; Li, Qi
2017-01-01
A dual-axis reflective continuous-wave terahertz (THz) confocal scanning polarization imaging system was adopted. THz polarization imaging experiments on gaps on film and on the metallic letters "BeLLE" were carried out. The imaging results indicate that THz polarization imaging is sensitive to tilted or wide flat gaps, suggesting that it can detect edges and stains. An image fusion method based on digital image processing was proposed to improve the imaging quality of the metallic letters "BeLLE." Both objective and subjective evaluation confirm that this method improves the imaging quality.
Information retrieval based on single-pixel optical imaging with quick-response code
NASA Astrophysics Data System (ADS)
Xiao, Yin; Chen, Wen
2018-04-01
Quick-response (QR) code technique is combined with ghost imaging (GI) to recover original information with high quality. An image is first transformed into a QR code. Then the QR code is treated as an input image in the input plane of a ghost imaging setup. After measurements, the traditional correlation algorithm of ghost imaging is utilized to reconstruct an image (in QR code form) with low quality. With this low-quality image as an initial guess, a Gerchberg-Saxton-like algorithm is used to improve its contrast, which is in effect a post-processing step. Taking advantage of the high error correction capability of QR codes, the original information can be recovered with high quality. Compared to the previous method, our method can obtain a high-quality image with comparatively fewer measurements, which means that the time-consuming post-processing procedure can be avoided to some extent. In addition, for conventional ghost imaging, the larger the image size is, the more measurements are needed. However, for our method, images of different sizes can be converted into QR codes of the same small size by using a QR generator. Hence, for larger images, the time required to recover the original information with high quality is dramatically reduced. Our method also makes it easy to recover a color image in a ghost imaging setup, because it is not necessary to divide the color image into three channels and recover them separately.
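The correlation reconstruction step of ghost imaging mentioned above can be sketched as follows; the QR encoding/decoding stages and the Gerchberg-Saxton-like refinement are outside this sketch, and the pattern count is an arbitrary choice:

```python
import random

def ghost_image(obj, n_patterns=4000, seed=0):
    """Correlation-based GI reconstruction: G_k = <B*I_k> - <B><I_k>.

    `obj` is a flat binary object (in the paper, a QR-code image whose
    error correction later cleans up this noisy estimate). Each random
    illumination pattern produces one bucket value from a single-pixel
    detector; correlating bucket and pattern recovers the object.
    """
    rng = random.Random(seed)
    npix = len(obj)
    sum_b = 0.0
    sum_i = [0.0] * npix
    sum_bi = [0.0] * npix
    for _ in range(n_patterns):
        pat = [rng.random() for _ in range(npix)]      # random illumination
        bucket = sum(p * o for p, o in zip(pat, obj))  # single-pixel detector
        sum_b += bucket
        for k in range(npix):
            sum_i[k] += pat[k]
            sum_bi[k] += bucket * pat[k]
    m = n_patterns
    return [sum_bi[k] / m - (sum_b / m) * (sum_i[k] / m) for k in range(npix)]
```

The estimate is noisy at any finite number of patterns, which is exactly why the paper leans on the QR code's error correction to recover the payload exactly.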
Evaluation of portable CT scanners for otologic image-guided surgery
Balachandran, Ramya; Schurzig, Daniel; Fitzpatrick, J Michael; Labadie, Robert F
2011-01-01
Purpose Portable CT scanners are beneficial for diagnosis in the intensive care unit, emergency room, and operating room. Portable fixed-base versus translating-base CT systems were evaluated for otologic image-guided surgical (IGS) applications based on geometric accuracy and utility for percutaneous cochlear implantation. Methods Five cadaveric skulls were fitted with fiducial markers and scanned using both a translating-base, 8-slice CT scanner (CereTom®) and a fixed-base, flat-panel, volume-CT (fpVCT) scanner (Xoran xCAT®). Images were analyzed for: (a) subjective quality (i.e. noise), (b) consistency of attenuation measurements (Hounsfield units) across similar tissue, and (c) geometric accuracy of fiducial marker positions. The utility of these scanners in clinical IGS cases was tested. Results Five cadaveric specimens were scanned using each of the scanners. The translating-base, 8-slice CT scanner had spatially consistent Hounsfield units, and the image quality was subjectively good. However, because of movement variations during scanning, the geometric accuracy of fiducial marker positions was low. The fixed-base, fpVCT system had high spatial resolution, but the images were noisy and had spatially inconsistent attenuation measurements; while the geometric representation of the fiducial markers was highly accurate. Conclusion Two types of portable CT scanners were evaluated for otologic IGS. The translating-base, 8-slice CT scanner provided better image quality than a fixed-base, fpVCT scanner. However, the inherent error in three-dimensional spatial relationships by the translating-based system makes it suboptimal for otologic IGS use. PMID:21779768
Optical classification for quality and defect analysis of train brakes
NASA Astrophysics Data System (ADS)
Glock, Stefan; Hausmann, Stefan; Gerke, Sebastian; Warok, Alexander; Spiess, Peter; Witte, Stefan; Lohweg, Volker
2009-06-01
In this paper we present an optical measurement system approach for quality analysis of brakes which are used in high-speed trains. The brakes consist of so-called brake discs and pads. In a deceleration process the discs are heated up to 500°C. The quality measure is based on the fact that the heated brake discs should not develop hot spots inside the brake material; instead, the brake disc should be heated homogeneously by the deceleration. Therefore, it makes sense to analyze the number of hot spots and their relative gradients to create a quality measure for train brakes. In this contribution we present a new approach for a quality measurement system which is based on image analysis and classification of infrared-based heat images. Brake images which are represented in pseudo-color are first transformed into a linear grayscale space via a hue-saturation-intensity (HSI) transform. This transform is necessary for the following gradient analysis, which is based on grayscale gradient filters. Furthermore, different features based on Haralick's measures are generated from the grayscale and gradient images. A Fuzzy-Pattern-Classifier is then used for the classification of good and bad brakes. It has to be pointed out that the classifier returns a score value between 0 and 100% good quality for each brake. This guarantees that not only can good and bad brakes be distinguished, but their quality can also be graded. The results show that all critical thermal patterns of train brakes can be sensed and verified.
A Regression-Based Family of Measures for Full-Reference Image Quality Assessment
NASA Astrophysics Data System (ADS)
Oszust, Mariusz
2016-12-01
The advances in the development of imaging devices have resulted in the need for automatic quality evaluation of displayed visual content in a way that is consistent with human visual perception. In this paper, an approach to full-reference image quality assessment (IQA) is proposed, in which several IQA measures, representing different approaches to modelling human visual perception, are efficiently combined in order to produce an objective quality evaluation of examined images which is highly correlated with the evaluation provided by human subjects. In the paper, an optimisation problem of selecting several IQA measures for creating a regression-based IQA hybrid measure, or multimeasure, is defined and solved using a genetic algorithm. Experimental evaluation on the four largest IQA benchmarks reveals that the multimeasures obtained using the proposed approach outperform state-of-the-art full-reference IQA techniques, including other recently developed fusion approaches.
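Once a set of component measures has been selected (by the paper's genetic algorithm or otherwise), fitting the regression-based multimeasure reduces to ordinary least squares. A minimal sketch with an intercept term, solving the normal equations directly; the paper's regression form and selection step are not reproduced:

```python
def fit_multimeasure(component_scores, mos):
    """Least-squares weights combining per-image scores of several IQA
    measures into one 'multimeasure'.

    component_scores: per-image rows [m1, m2, ...] of component measures;
    mos: subjective scores to regress onto. Solves (X^T X) w = X^T y by
    Gauss-Jordan elimination, with a leading intercept column.
    """
    rows = [[1.0] + r for r in component_scores]
    k = len(rows[0])
    # Augmented normal-equation system [X^T X | X^T y].
    a = [[sum(r[i] * r[j] for r in rows) for j in range(k)] +
         [sum(r[i] * y for r, y in zip(rows, mos))] for i in range(k)]
    for col in range(k):
        piv = max(range(col, k), key=lambda r2: abs(a[r2][col]))
        a[col], a[piv] = a[piv], a[col]
        for r2 in range(k):
            if r2 != col and a[col][col]:
                f = a[r2][col] / a[col][col]
                a[r2] = [x - f * y for x, y in zip(a[r2], a[col])]
    return [a[i][k] / a[i][i] for i in range(k)]

def predict(w, row):
    """Multimeasure score for one image's component-measure row."""
    return w[0] + sum(wi * xi for wi, xi in zip(w[1:], row))
```

In practice the component scores would come from existing full-reference metrics; here they are arbitrary numbers purely to exercise the regression.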
Single image super-resolution reconstruction algorithm based on edge selection
NASA Astrophysics Data System (ADS)
Zhang, Yaolan; Liu, Yijun
2017-05-01
Super-resolution (SR) has become increasingly important, because it can generate high-quality high-resolution (HR) images from low-resolution (LR) input images. At present, much work concentrates on developing sophisticated image priors to improve image quality, while paying much less attention to estimating and incorporating the blur model, which can also affect the reconstruction results. We present a new reconstruction method based on edge selection. This method takes full account of the factors that affect blur kernel estimation and accurately estimates the blur process. Compared with state-of-the-art methods, our method achieves comparable performance.
Pixel-based speckle adjustment for noise reduction in Fourier-domain OCT images
Zhang, Anqi; Xi, Jiefeng; Sun, Jitao; Li, Xingde
2017-01-01
Speckle resides in OCT signals and inevitably affects OCT image quality. In this work, we present a novel method for speckle noise reduction in Fourier-domain OCT images, which utilizes the phase information of complex OCT data. In this method, the speckle area is first delineated pixelwise based on a phase-domain processing method and then adjusted using the results of wavelet shrinkage of the original image. A coefficient shrinkage method such as wavelet or contourlet is applied afterwards to further suppress the speckle noise. Compared with conventional methods without speckle adjustment, the proposed method demonstrates significant improvement in image quality. PMID:28663860
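The coefficient-shrinkage stage can be illustrated with a one-level Haar transform and soft thresholding; the paper's phase-based pixelwise speckle delineation is a separate step not reproduced here, and the threshold value is an assumption:

```python
import math

def haar_shrink(signal, thresh):
    """One-level Haar transform, soft-threshold the detail coefficients,
    then inverse transform (signal length assumed even).

    A generic sketch of the coefficient-shrinkage stage; wavelet choice
    and threshold are illustrative, not the paper's settings.
    """
    s = 1 / math.sqrt(2)
    approx = [(a + b) * s for a, b in zip(signal[::2], signal[1::2])]
    detail = [(a - b) * s for a, b in zip(signal[::2], signal[1::2])]
    # Soft thresholding: shrink small (noise-dominated) details to zero.
    detail = [math.copysign(max(abs(d) - thresh, 0.0), d) for d in detail]
    out = []
    for a, d in zip(approx, detail):
        out += [(a + d) * s, (a - d) * s]
    return out
```

Small high-frequency fluctuations (the speckle-like component) are suppressed while the low-frequency structure passes through unchanged.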
Image quality prediction - An aid to the Viking lander imaging investigation on Mars
NASA Technical Reports Server (NTRS)
Huck, F. O.; Wall, S. D.
1976-01-01
Image quality criteria and image quality predictions are formulated for the multispectral panoramic cameras carried by the Viking Mars landers. Image quality predictions are based on expected camera performance, Mars surface radiance, and lighting and viewing geometry (fields of view, Mars lander shadows, solar day-night alternation), and are needed for diagnosis of camera performance, for arriving at a preflight imaging strategy, and for revision of that strategy should the need arise. Landing considerations, camera control instructions, camera control logic, aspects of the imaging process (spectral response, spatial response, sensitivity), and likely problems are discussed. Major concerns include: degradation of camera response by isotope radiation, uncertainties in lighting and viewing geometry and in landing site local topography, contamination of the camera window by dust abrasion, and initial errors in assigning camera dynamic ranges (gains and offsets).
Coding and transmission of subband coded images on the Internet
NASA Astrophysics Data System (ADS)
Wah, Benjamin W.; Su, Xiao
2001-09-01
Subband-coded images can be transmitted over the Internet using either the TCP or the UDP protocol. Delivery by TCP gives superior decoding quality but with very long delays when the network is unreliable, whereas delivery by UDP has negligible delays but degraded quality when packets are lost. Although images are currently delivered over the Internet by TCP, we study in this paper the use of UDP to deliver multi-description reconstruction-based subband-coded images. First, in order to facilitate recovery from UDP packet losses, we propose a joint sender-receiver approach for designing optimized reconstruction-based subband transforms (ORB-ST) in multi-description coding (MDC). Second, we carefully evaluate the delay-quality trade-offs between the TCP delivery of single-description coded (SDC) images and the UDP and combined TCP/UDP delivery of MDC images. Experimental results show that our proposed ORB-ST performs well in real Internet tests, and UDP and combined TCP/UDP delivery of MDC images provide a range of attractive alternatives to TCP delivery.
Improved image decompression for reduced transform coding artifacts
NASA Technical Reports Server (NTRS)
Orourke, Thomas P.; Stevenson, Robert L.
1994-01-01
The perceived quality of images reconstructed from low bit rate compression is severely degraded by the appearance of transform coding artifacts. This paper proposes a method for producing higher quality reconstructed images based on a stochastic model for the image data. Quantization (scalar or vector) partitions the transform coefficient space and maps all points in a partition cell to a representative reconstruction point, usually taken as the centroid of the cell. The proposed image estimation technique selects the reconstruction point within the quantization partition cell which results in a reconstructed image which best fits a non-Gaussian Markov random field (MRF) image model. This approach results in a convex constrained optimization problem which can be solved iteratively. At each iteration, the gradient projection method is used to update the estimate based on the image model. In the transform domain, the resulting coefficient reconstruction points are projected to the particular quantization partition cells defined by the compressed image. Experimental results will be shown for images compressed using scalar quantization of block DCT and using vector quantization of subband wavelet transform. The proposed image decompression provides a reconstructed image with reduced visibility of transform coding artifacts and superior perceived quality.
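The alternation described above (a gradient step on the image prior, then projection of the transform coefficients back into their quantization cells) can be sketched in a few lines. For illustration this uses a single 8x8 block, uniform scalar quantization, and a quadratic Laplacian smoothness prior in place of the paper's non-Gaussian MRF model; these are assumptions, not the paper's exact configuration.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix (rows are basis vectors)."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] /= np.sqrt(2)
    return C * np.sqrt(2.0 / n)

def project_to_cells(coeffs, q, step):
    """Clip DCT coefficients back into their quantization cells
    [(q-0.5)*step, (q+0.5)*step], where q are the quantization indices."""
    return np.clip(coeffs, (q - 0.5) * step, (q + 0.5) * step)

def decompress(q, step, n_iter=20, lam=0.2):
    """Iterative decoding: smoothing gradient step + cell projection."""
    C = dct_matrix(q.shape[0])
    x = C.T @ (q * step) @ C              # standard centroid reconstruction
    for _ in range(n_iter):
        # gradient step on a quadratic smoothness prior (discrete Laplacian)
        lap = np.zeros_like(x)
        lap[1:-1, 1:-1] = (x[:-2, 1:-1] + x[2:, 1:-1] +
                           x[1:-1, :-2] + x[1:-1, 2:] - 4 * x[1:-1, 1:-1])
        x = x + lam * lap
        # project back into the quantization constraint set
        coeffs = project_to_cells(C @ x @ C.T, q, step)
        x = C.T @ coeffs @ C
    return x
```

Because the constraint set is convex, every iterate remains a valid decoding of the compressed data: the final coefficients still lie inside their quantization cells.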
GPU-accelerated Kernel Regression Reconstruction for Freehand 3D Ultrasound Imaging.
Wen, Tiexiang; Li, Ling; Zhu, Qingsong; Qin, Wenjian; Gu, Jia; Yang, Feng; Xie, Yaoqin
2017-07-01
Volume reconstruction plays an important role in improving reconstructed volumetric image quality in freehand three-dimensional (3D) ultrasound imaging. By utilizing the capability of a programmable graphics processing unit (GPU), a real-time incremental volume reconstruction can be achieved at a speed of 25-50 frames per second (fps). After incremental reconstruction and visualization, hole-filling is performed on the GPU to fill remaining empty voxels. However, traditional pixel-nearest-neighbor hole-filling fails to reconstruct volumes with high image quality. In contrast, kernel regression provides an accurate volume reconstruction method for 3D ultrasound imaging, but at the cost of heavy computational complexity. In this paper, a GPU-based fast kernel regression method is proposed for high-quality volume reconstruction after the incremental reconstruction of freehand ultrasound data. The experimental results show that improved image quality, with speckle reduction and detail preservation, can be obtained with a kernel window size of [Formula: see text] and a kernel bandwidth of 1.0. The computational performance of the proposed GPU-based method can be over 200 times faster than that on a central processing unit (CPU), and the volume with 50 million voxels in our experiment can be reconstructed within 10 seconds.
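As a rough CPU illustration of the kernel-regression idea (the paper's contribution is the GPU acceleration, which this sketch does not attempt), empty voxels can be filled with a Gaussian-weighted (Nadaraya-Watson) average of the filled voxels in a small cubic window. The window radius and bandwidth below are illustrative choices, not the paper's parameters.

```python
import numpy as np

def kernel_fill(volume, filled_mask, bandwidth=1.0, radius=2):
    """Fill empty voxels (filled_mask == False) with a Gaussian-weighted
    average of filled neighbors within a (2*radius+1)^3 window."""
    out = volume.copy()
    offsets = [(i, j, k)
               for i in range(-radius, radius + 1)
               for j in range(-radius, radius + 1)
               for k in range(-radius, radius + 1)]
    for idx in np.argwhere(~filled_mask):
        num = den = 0.0
        for off in offsets:
            p = idx + off
            if np.any(p < 0) or np.any(p >= volume.shape):
                continue
            if filled_mask[tuple(p)]:
                d2 = off[0]**2 + off[1]**2 + off[2]**2
                w = np.exp(-d2 / (2.0 * bandwidth**2))
                num += w * volume[tuple(p)]
                den += w
        if den > 0:
            out[tuple(idx)] = num / den
    return out
```

The per-voxel independence of this loop is exactly what makes the method a good fit for a GPU: each empty voxel can be assigned to one thread.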
DOE Office of Scientific and Technical Information (OSTI.GOV)
Samei, Ehsan, E-mail: samei@duke.edu; Lin, Yuan; Choudhury, Kingshuk R.
Purpose: The authors previously proposed an image-based technique [Y. Lin et al., Med. Phys. 39, 7019–7031 (2012)] to assess the perceptual quality of clinical chest radiographs. In this study, an observer study was designed and conducted to validate the output of the program against rankings by expert radiologists and to establish the ranges of the output values that reflect acceptable image appearance, so that the program output can be used for image quality optimization and tracking. Methods: Using an IRB-approved protocol, 2500 clinical chest radiographs (PA/AP) were collected from our clinical operation. The images were processed through our perceptual quality assessment program to measure their appearance in terms of ten metrics of perceptual image quality: lung gray level, lung detail, lung noise, rib–lung contrast, rib sharpness, mediastinum detail, mediastinum noise, mediastinum alignment, subdiaphragm–lung contrast, and subdiaphragm area. From the results, for each targeted appearance attribute/metric, 18 images were selected such that the images presented a relatively constant appearance with respect to all metrics except the targeted one. The images were then incorporated into a graphical user interface, which displayed them in three panels of six in random order. Using a DICOM-calibrated diagnostic display workstation under low ambient lighting conditions, each of five participating attending chest radiologists was tasked to spatially order the images based only on the targeted appearance attribute, regardless of the other qualities. Once the images were ordered, the observer also indicated the range of image appearances that he/she considered clinically acceptable. The observer data were analyzed in terms of the correlations between the observer and algorithmic rankings and interobserver variability. An observer-averaged acceptable image appearance was also statistically derived for each quality attribute based on the collected individual acceptable ranges.
Results: The observer study indicated that, for each image quality attribute, the averaged observer ranking strongly correlated with the algorithmic ranking (linear correlation coefficient R > 0.92), with the highest correlation (R = 1) for lung gray level and the lowest (R = 0.92) for mediastinum noise. There was a strong concordance between the observers in terms of their rankings (i.e., Kendall's tau agreement > 0.84). The observers also generally indicated similar tolerance and preference levels in terms of acceptable ranges, as 85% of the values were close to the overall tolerance or preference levels and the differences were smaller than 0.15. Conclusions: The observer study indicates that the previously proposed technique provides a robust reflection of the perceptual image quality in clinical images. The results established the range of algorithmic outputs for each metric that can be used to quantitatively assess and qualify the appearance quality of clinical chest radiographs.
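The rank-agreement statistic reported here, Kendall's tau, can be computed directly from its definition as the normalized excess of concordant over discordant pairs. A minimal sketch for tie-free rankings (production code would use an existing statistics library, which also handles ties):

```python
import numpy as np

def kendall_tau(a, b):
    """Kendall's tau for two tie-free rankings: (#concordant - #discordant)
    pairs, divided by the total number of pairs n*(n-1)/2."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    n = a.size
    s = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            s += np.sign((a[i] - a[j]) * (b[i] - b[j]))
    return 2.0 * s / (n * (n - 1))
```

Identical rankings give tau = 1, reversed rankings give tau = -1, so the reported agreement above 0.84 indicates near-identical orderings across observers.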
Infrared and visible image fusion method based on saliency detection in sparse domain
NASA Astrophysics Data System (ADS)
Liu, C. H.; Qi, Y.; Ding, W. R.
2017-06-01
Infrared and visible image fusion is a key problem in the field of multi-sensor image fusion. To better preserve the significant information of the infrared and visible images in the final fused image, the saliency maps of the source images are introduced into the fusion procedure. Firstly, under the framework of the joint sparse representation (JSR) model, the global and local saliency maps of the source images are obtained from the sparse coefficients. Then, a saliency detection model is proposed, which combines the global and local saliency maps to generate an integrated saliency map. Finally, a weighted fusion algorithm based on the integrated saliency map is developed to perform the fusion process. The experimental results show that our method is superior to state-of-the-art methods in terms of several universal quality evaluation indexes, as well as in visual quality.
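The final weighted-fusion step can be sketched in a few lines, assuming the integrated saliency maps have already been computed (the paper derives them from JSR sparse coefficients; the `local_contrast_saliency` helper below is only a crude stand-in for illustration):

```python
import numpy as np

def saliency_weighted_fusion(ir, vis, sal_ir, sal_vis, eps=1e-8):
    """Pixel-wise weighted fusion driven by the saliency maps of the
    infrared and visible source images."""
    w_ir = sal_ir / (sal_ir + sal_vis + eps)
    return w_ir * ir + (1.0 - w_ir) * vis

def local_contrast_saliency(img):
    """Crude saliency proxy: absolute deviation from the global mean
    (illustrative only; not the paper's JSR-based detection model)."""
    return np.abs(img - img.mean())
```

Pixels that are salient only in the infrared image are copied mostly from it, which is how hot targets survive into the fused result while visible-band texture is kept elsewhere.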
Prostate seed implant quality assessment using MR and CT image fusion.
Amdur, R J; Gladstone, D; Leopold, K A; Harris, R D
1999-01-01
After a seed implant of the prostate, computerized tomography (CT) is ideal for determining seed distribution, but soft tissue anatomy is frequently not well visualized. Magnetic resonance (MR) images soft tissue anatomy well, but seed visualization is problematic. We describe a method of fusing CT and MR images to exploit the advantages of both of these modalities when assessing the quality of a prostate seed implant. Eleven consecutive prostate seed implant patients were imaged with axial MR and CT scans. MR and CT images were fused in three dimensions using the Pinnacle 3.0 version of the ADAC treatment planning system. The urethra and bladder base were used to "line up" MR and CT image sets during image fusion. Alignment was accomplished using translation and rotation in the three orthogonal planes. Accuracy of image fusion was evaluated by calculating the maximum deviation in millimeters between the center of the urethra on axial MR versus CT images. Implant quality was determined by comparing dosimetric results to previously set parameters. Image fusion was performed with a high degree of accuracy. When lining up the urethra and base of bladder, the maximum difference in axial position of the urethra between MR and CT averaged 2.5 mm (range 1.3-4.0 mm, SD 0.9 mm). By projecting CT-derived dose distributions over MR images of soft tissue structures, qualitative and quantitative evaluation of implant quality is straightforward. The image-fusion process we describe provides a sophisticated way of assessing the quality of a prostate seed implant. Commercial software makes the process time-efficient and available to any clinical practice with a high-quality treatment planning system. While we use MR to image soft tissue structures, the process could be used with any imaging modality that is able to visualize the prostatic urethra (e.g., ultrasound).
Feature maps driven no-reference image quality prediction of authentically distorted images
NASA Astrophysics Data System (ADS)
Ghadiyaram, Deepti; Bovik, Alan C.
2015-03-01
Current blind image quality prediction models rely on benchmark databases comprised of singly and synthetically distorted images, thereby learning image features that are only adequate to predict human perceived visual quality on such inauthentic distortions. However, real world images often contain complex mixtures of multiple distortions. Rather than a) discounting the effect of these mixtures of distortions on an image's perceptual quality and considering only the dominant distortion or b) using features that are only proven to be efficient for singly distorted images, we deeply study the natural scene statistics of authentically distorted images, in different color spaces and transform domains. We propose a feature-maps-driven statistical approach which avoids any latent assumptions about the type of distortion(s) contained in an image, and focuses instead on modeling the remarkable consistencies in the scene statistics of real world images in the absence of distortions. We design a deep belief network that takes model-based statistical image features derived from a very large database of authentically distorted images as input and discovers good feature representations by generalizing over different distortion types, mixtures, and severities, which are later used to learn a regressor for quality prediction. We demonstrate the remarkable competence of our features for improving automatic perceptual quality prediction on a benchmark database and on the newly designed LIVE Authentic Image Quality Challenge Database and show that our approach of combining robust statistical features and the deep belief network dramatically outperforms the state-of-the-art.
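One widely used natural-scene-statistics building block in this family of blind IQA models is the mean-subtracted, contrast-normalized (MSCN) coefficient map, whose statistics are remarkably regular for pristine images and perturbed by distortion. The sketch below is a generic illustration of that idea, not the paper's exact feature set, and uses a simple box window where a Gaussian window is more common:

```python
import numpy as np

def mscn(image, window=7, c=1.0):
    """Mean-subtracted, contrast-normalized (MSCN) coefficients with a
    box window (illustrative; Gaussian weighting is the usual choice)."""
    def box_filter(x):
        k = np.ones((window, window)) / window**2
        pad = window // 2
        xp = np.pad(x, pad, mode='reflect')
        out = np.zeros_like(x)
        h, w = x.shape
        for i in range(h):
            for j in range(w):
                out[i, j] = np.sum(xp[i:i + window, j:j + window] * k)
        return out
    img = image.astype(float)
    mu = box_filter(img)                                 # local mean
    var = np.maximum(box_filter(img**2) - mu**2, 0.0)    # local variance
    return (img - mu) / (np.sqrt(var) + c)
```

Statistics of such maps (e.g., fitted generalized-Gaussian parameters) are the kind of model-based features a quality regressor can be trained on.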
A visual grading study for different administered activity levels in bone scintigraphy.
Gustafsson, Agnetha; Karlsson, Henrik; Nilsson, Kerstin A; Geijer, Håkan; Olsson, Anna
2015-05-01
The aim of the study is to assess administered activity levels versus visually assessed image quality using visual grading regression (VGR), including an assessment of newly stated image criteria for whole-body bone scintigraphy. A total of 90 patients were included and grouped into three levels of administered activity: 400, 500 and 600 MBq. Six clinical image criteria regarding image quality were formulated by experienced nuclear medicine physicians. Visual grading was performed on all images, where three physicians rated the fulfilment of the image criteria on a four-step ordinal scale. The results were analysed using VGR. A count analysis was also made, in which the total number of counts in both views was registered. The administered activity of 600 MBq gives significantly better image quality than 400 MBq in five of six criteria (P<0.05). Comparing 600 MBq to 500 MBq, four of six criteria show significantly better image quality (P<0.05). The administered activity of 500 MBq gives no significantly better image quality than 400 MBq (P>0.05). The count analysis shows that none of the three levels of administered activity fulfils the recommendations of the EANM. There was a significant improvement in perceived image quality using an activity level of 600 MBq compared to lower activity levels in whole-body bone scintigraphy for the gamma camera equipment and set-up used in this study. This type of visual grading study seems to be a valuable tool and is easy to implement in the clinical environment. © 2014 Scandinavian Society of Clinical Physiology and Nuclear Medicine. Published by John Wiley & Sons Ltd.
Neuhaus, Victor; Große Hokamp, Nils; Abdullayev, Nuran; Maus, Volker; Kabbasch, Christoph; Mpotsaris, Anastasios; Maintz, David; Borggrefe, Jan
2018-03-01
To compare the image quality of virtual monoenergetic images and polyenergetic images reconstructed from dual-layer detector CT angiography (DLCTA). Thirty patients who underwent DLCTA of the head and neck were retrospectively identified, and polyenergetic as well as virtual monoenergetic images (40 to 120 keV) were reconstructed. Signals (± SD) of the cervical and cerebral vessels, the lateral pterygoid muscle and the air surrounding the head were measured to calculate the CNR and SNR. In addition, subjective image quality was assessed using a 5-point Likert scale. Student's t-test and the Wilcoxon test were used to determine statistical significance. Although noise increased at lower keV, the CNR (p < 0.02) and SNR (p > 0.05) of the cervical, petrous and intracranial vessels were improved in virtual monoenergetic images at 40 keV compared to polyenergetic images; virtual monoenergetic images at 45 keV were also rated superior regarding vascular contrast, assessment of arteries close to the skull base and small arterial branches (p < 0.0001 each). Compared to polyenergetic images, virtual monoenergetic images reconstructed from DLCTA at low energies (40 to 45 keV) improve the objective and subjective image quality of extra- and intracranial vessels and facilitate assessment of vessels close to the skull base and of small arterial branches. • Virtual monoenergetic images greatly improve attenuation, while noise only slightly increases. • Virtual monoenergetic images show superior contrast-to-noise ratios compared to polyenergetic images. • Virtual monoenergetic images significantly improve image quality at low keV.
An exposure indicator for digital radiography: AAPM Task Group 116 (executive summary).
Shepard, S Jeff; Wang, Jihong; Flynn, Michael; Gingold, Eric; Goldman, Lee; Krugh, Kerry; Leong, David L; Mah, Eugene; Ogden, Kent; Peck, Donald; Samei, Ehsan; Wang, Jihong; Willis, Charles E
2009-07-01
Digital radiographic imaging systems, such as those using photostimulable storage phosphor, amorphous selenium, amorphous silicon, CCD, and MOSFET technology, can produce adequate image quality over a much broader range of exposure levels than that of screen/film imaging systems. In screen/film imaging, the final image brightness and contrast are indicative of over- and underexposure. In digital imaging, brightness and contrast are often determined entirely by digital postprocessing of the acquired image data. Overexposures and underexposures are not readily recognizable. As a result, patient dose has a tendency to gradually increase over time after a department converts from screen/film-based imaging to digital radiographic imaging. The purpose of this report is to recommend a standard indicator which reflects the radiation exposure that is incident on a detector after every exposure event and that reflects the noise levels present in the image data. The intent is to facilitate the production of consistent, high quality digital radiographic images at acceptable patient doses. This should be based not on image optical density or brightness but on feedback regarding the detector exposure provided and actively monitored by the imaging system. A standard beam calibration condition is recommended that is based on RQA5 but uses filtration materials that are commonly available and simple to use. Recommendations on clinical implementation of the indices to control image quality and patient dose are derived from historical tolerance limits and presented as guidelines.
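The recommended exposure feedback is commonly expressed, in the related IEC 62494-1 standard, as a deviation index comparing the indicated detector air kerma with a target value; the formula below is taken from that standard as an assumption, not quoted from this report:

```python
import math

def deviation_index(k_ind, k_tgt):
    """Deviation index DI = 10*log10(K_ind / K_tgt): 0 at the target
    exposure, about +1 for ~26% overexposure, -1 for ~21% underexposure."""
    return 10.0 * math.log10(k_ind / k_tgt)
```

Because DI is logarithmic and independent of postprocessing, it makes over- and underexposure visible again in exactly the way the report argues brightness no longer does.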
Adaptive compressed sensing of remote-sensing imaging based on the sparsity prediction
NASA Astrophysics Data System (ADS)
Yang, Senlin; Li, Xilong; Chong, Xin
2017-10-01
Conventional compressive sensing works with non-adaptive linear projections, and the number of measurements is usually set empirically; as a result, the quality of image reconstruction often suffers. Firstly, block-based compressed sensing (BCS) with the conventional selection of compressive measurements is reviewed. Then an estimation method for the sparsity of an image is proposed based on the two-dimensional discrete cosine transform (2D DCT). With an energy threshold given beforehand, the DCT coefficients are processed with both energy normalization and sorting in descending order, and the sparsity of the image is obtained from the proportion of dominant coefficients. Finally, simulation results show that the method can estimate the sparsity of an image effectively and provides an active basis for selecting the number of compressive measurements. The results also show that, since the number of measurements is selected on the basis of the sparsity estimated with the given energy threshold, the proposed method can ensure the quality of image reconstruction.
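The sparsity-estimation step can be sketched directly from the description above: take the 2D DCT, normalize and sort the coefficient energies in descending order, and count how many coefficients are needed to reach the energy threshold. The orthonormal DCT construction and the 0.99 default threshold are illustrative assumptions.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (rows are basis vectors)."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] /= np.sqrt(2)
    return C * np.sqrt(2.0 / n)

def estimate_sparsity(image, energy_threshold=0.99):
    """Fraction of 2D-DCT coefficients needed to retain the given share
    of image energy: a proxy for the image's sparsity level."""
    n, m = image.shape
    coeffs = dct_matrix(n) @ image @ dct_matrix(m).T
    energy = np.sort(coeffs.ravel()**2)[::-1]    # sort in descending order
    energy = energy / energy.sum()               # energy normalization
    k = np.searchsorted(np.cumsum(energy), energy_threshold) + 1
    return k / energy.size
```

A sparser image (smaller returned fraction) can then be assigned fewer compressive measurements, which is the adaptive selection the paper argues for.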
Reduced reference image quality assessment via sub-image similarity based redundancy measurement
NASA Astrophysics Data System (ADS)
Mou, Xuanqin; Xue, Wufeng; Zhang, Lei
2012-03-01
Reduced-reference (RR) image quality assessment (IQA) has been attracting much attention from researchers for its consistency with human perception and its flexibility in practice. A promising RR metric should be able to predict the perceptual quality of an image accurately while using as few features as possible. In this paper, a novel RR metric is presented whose novelty lies in two aspects. Firstly, it measures image redundancy by calculating the so-called sub-image similarity (SIS), and image quality is measured by comparing the SIS between the reference image and the test image. Secondly, the SIS is computed from the ratios of non-shift edges (NSE) between pairs of sub-images. Experiments on two IQA databases (the LIVE and CSIQ databases) show that, using only 6 features, the proposed metric works very well, with high correlations between the subjective and objective scores. In particular, it works consistently well across all the distortion types.
[An improved medical image fusion algorithm and quality evaluation].
Chen, Meiling; Tao, Ling; Qian, Zhiyu
2009-08-01
Medical image fusion is of very important value for application in medical image analysis and diagnosis. In this paper, the conventional method of wavelet fusion is improved and a new algorithm of medical image fusion is presented, in which the high-frequency and low-frequency coefficients are treated separately. When high-frequency coefficients are chosen, the regional edge intensities of each sub-image are calculated to realize adaptive fusion. The choice of low-frequency coefficients is based on the edges of the images, so that the fused image preserves all useful information and appears more distinct. We apply the conventional and the improved fusion algorithms based on the wavelet transform to fuse two images of the human body, and we also evaluate the fusion results through a quality evaluation method. Experimental results show that this algorithm can effectively retain the detailed information of the original images and enhance their edge and texture features. This new algorithm is better than the conventional fusion algorithm based on the wavelet transform.
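A minimal single-level sketch of wavelet-domain fusion, using a Haar transform and a maximum-magnitude rule for the high-frequency bands. This simplification stands in for, and does not reproduce, the paper's regional-edge-intensity rule and edge-based low-frequency selection.

```python
import numpy as np

def haar2(x):
    """One-level 2D Haar DWT; returns (LL, LH, HL, HH)."""
    a = (x[0::2] + x[1::2]) / 2.0          # row averages
    d = (x[0::2] - x[1::2]) / 2.0          # row differences
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def ihaar2(LL, LH, HL, HH):
    """Inverse of haar2."""
    a = np.empty((LL.shape[0], 2 * LL.shape[1]))
    a[:, 0::2], a[:, 1::2] = LL + LH, LL - LH
    d = np.empty_like(a)
    d[:, 0::2], d[:, 1::2] = HL + HH, HL - HH
    x = np.empty((2 * a.shape[0], a.shape[1]))
    x[0::2], x[1::2] = a + d, a - d
    return x

def fuse(img1, img2):
    """Average the low-pass band; pick the larger-magnitude high-pass
    coefficient (a crude stand-in for a regional edge-intensity rule)."""
    b1, b2 = haar2(img1), haar2(img2)
    LL = (b1[0] + b2[0]) / 2.0
    highs = [np.where(np.abs(h1) >= np.abs(h2), h1, h2)
             for h1, h2 in zip(b1[1:], b2[1:])]
    return ihaar2(LL, *highs)
```

Keeping the stronger high-frequency coefficient at each location is what carries the sharper edges of either source image into the fused result.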
An adaptive block-based fusion method with LUE-SSIM for multi-focus images
NASA Astrophysics Data System (ADS)
Zheng, Jianing; Guo, Yongcai; Huang, Yukun
2016-09-01
Because of the lens's limited depth of field, digital cameras are incapable of acquiring an all-in-focus image of objects at varying distances in a scene. Multi-focus image fusion can effectively solve this problem, but block-based multi-focus fusion methods often suffer from blocking artifacts. An adaptive block-based fusion method based on lifting undistorted-edge structural similarity (LUE-SSIM) is put forward. In this method, the image quality metric LUE-SSIM is first proposed, which utilizes characteristics of the human visual system (HVS) and structural similarity (SSIM) to make the metric consistent with human visual perception. A particle swarm optimization (PSO) algorithm, with LUE-SSIM as its objective function, is used to optimize the block size for constructing the fused image. Experimental results on the LIVE image database show that LUE-SSIM outperforms SSIM on quality assessment of Gaussian defocus blur images. Besides, a multi-focus image fusion experiment is carried out to verify the proposed fusion method in terms of visual and quantitative evaluation. The results show that the proposed method performs better than some other block-based methods, especially in reducing the blocking artifacts of the fused image, and it effectively preserves the undistorted edge details in the focused regions of the source images.
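The block-based selection idea can be sketched with block variance as the focus measure. The paper instead tunes the block size with PSO under its LUE-SSIM metric, which this simplification does not attempt; the fixed block size here is an illustrative assumption.

```python
import numpy as np

def block_fuse(img1, img2, block=8):
    """For each block, copy the block from whichever source image has the
    larger variance, a simple proxy for being in focus."""
    out = np.empty_like(img1, dtype=float)
    h, w = img1.shape
    for i in range(0, h, block):
        for j in range(0, w, block):
            b1 = img1[i:i + block, j:j + block]
            b2 = img2[i:i + block, j:j + block]
            out[i:i + block, j:j + block] = b1 if b1.var() >= b2.var() else b2
    return out
```

The blocking artifacts the paper targets arise exactly when this hard per-block choice disagrees across a focus boundary, which is why an optimized (and adaptive) block size helps.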
A comparative study of multi-focus image fusion validation metrics
NASA Astrophysics Data System (ADS)
Giansiracusa, Michael; Lutz, Adam; Messer, Neal; Ezekiel, Soundararajan; Alford, Mark; Blasch, Erik; Bubalo, Adnan; Manno, Michael
2016-05-01
Fusion of visual information from multiple sources is relevant for security, transportation, and safety applications. Image fusion can be particularly useful when fusing imagery data from multiple levels of focus. Different focus levels create different visual qualities in different regions of the imagery, which can provide much more visual information to analysts when fused. Multi-focus image fusion would benefit users through automation, which requires evaluating the fused images to determine whether the focused regions of each source image have been properly fused. Many no-reference metrics, such as information-theory-based, image-feature-based and structural-similarity-based ones, have been developed for such comparisons. However, accurate assessment of visual quality is hard to scale, and these metrics must be validated for different types of applications. To this end, human-perception-based validation methods have been developed, particularly using receiver operating characteristic (ROC) curves and the area under them (AUC). Our study uses these to analyze the effectiveness of no-reference image fusion metrics applied to multi-resolution fusion methods, in order to determine which should be used when dealing with multi-focus data. Preliminary results show that the Tsallis, SF, and spatial frequency metrics are consistent with the image quality and peak signal-to-noise ratio (PSNR).
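The spatial frequency (SF) metric mentioned in the results is simple to state: the root-mean-square of the row and column first differences of the image. A minimal sketch (the exact normalization varies slightly between papers):

```python
import numpy as np

def spatial_frequency(img):
    """Spatial frequency sqrt(RF^2 + CF^2) from row/column first
    differences; higher values suggest richer detail in a fused image."""
    img = img.astype(float)
    rf2 = np.mean(np.diff(img, axis=1) ** 2)   # row (horizontal) frequency
    cf2 = np.mean(np.diff(img, axis=0) ** 2)   # column (vertical) frequency
    return np.sqrt(rf2 + cf2)
```

As a no-reference metric it needs only the fused image itself, which is what makes it attractive for automated fusion pipelines.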
Golestaneh, S Alireza; Karam, Lina
2016-08-24
Perceptual image quality assessment (IQA) attempts to use computational models to estimate image quality in accordance with subjective evaluations. Reduced-reference (RR) IQA methods make use of partial information or features extracted from the reference image for estimating the quality of distorted images. Finding a balance between the number of RR features and the accuracy of the estimated image quality is essential and important in IQA. In this paper we propose a training-free, low-cost RR IQA method that requires a very small number of RR features (6 RR features). The proposed RR IQA algorithm is based on the discrete wavelet transform (DWT) of locally weighted gradient magnitudes. We apply the human visual system's contrast sensitivity and neighborhood gradient information to weight the gradient magnitudes in a locally adaptive manner. The RR features are computed by measuring the entropy of each DWT subband, for each scale, and pooling the subband entropies along all orientations, resulting in L RR features (one average entropy per scale) for an L-level DWT. Extensive experiments performed on seven large-scale benchmark databases demonstrate that the proposed RR IQA method delivers highly competitive performance as compared to state-of-the-art RR IQA models as well as full-reference ones, for both natural and texture images. The MATLAB source code of REDLOG and the evaluation results are publicly available online at http://lab.engineering.asu.edu/ivulab/software/redlog/.
Neuroradiology Using Secure Mobile Device Review.
Randhawa, Privia A; Morrish, William; Lysack, John T; Hu, William; Goyal, Mayank; Hill, Michael D
2016-04-05
Image review on computer-based workstations has made film-based review outdated. Despite advances in technology, the lack of portability of digital workstations is an inherent disadvantage. As such, we sought to determine whether the quality of image review on a handheld device is adequate for routine clinical use. Six CT/CTA cases and six MR/MRA cases were independently reviewed by three neuroradiologists in varying environments: in high and low ambient light using a handheld device, and on a traditional imaging workstation in ideal conditions. On the first review (using a handheld device in high ambient light), a preliminary diagnosis for each case was made. Upon changes in review conditions, the neuroradiologists were asked if any additional features were seen that changed their initial diagnoses. Reviewers were also asked to comment on overall clinical quality and whether the handheld display was of acceptable quality for image review. After the initial CT review in high ambient light, additional findings were reported in 2 of 18 instances on subsequent reviews. Similarly, additional findings were identified in 4 of 18 instances after the initial MR review in high ambient lighting. Only one of these six additional findings contributed to the diagnosis made on the initial preliminary review. Use of a handheld device for image review is of adequate diagnostic quality based on image contrast, sharpness of structures, visible artefacts and overall display quality. Although reviewers were comfortable with using this technology, a handheld device with a larger screen may be diagnostically superior.
Image degradation characteristics and restoration based on regularization for diffractive imaging
NASA Astrophysics Data System (ADS)
Zhi, Xiyang; Jiang, Shikai; Zhang, Wei; Wang, Dawei; Li, Yun
2017-11-01
The diffractive membrane optical imaging system is an important development trend for ultra-large-aperture, lightweight space cameras. However, physics-based diffractive imaging degradation characteristics and corresponding image restoration methods have received comparatively little study. In this paper, the model of image quality degradation for the diffractive imaging system is first deduced mathematically from diffraction theory, and the degradation characteristics are analyzed. On this basis, a novel regularization model of image restoration that contains multiple prior constraints is established. After that, a solution approach for the resulting equation, which involves coexisting multiple norms and multiple regularization (prior) parameters, is presented. Subsequently, a space-variant-PSF image restoration method for large-aperture diffractive imaging systems is proposed, combined with a block decomposition into isoplanatic regions. Experimentally, the proposed algorithm demonstrates multi-objective improvement, including MTF enhancement, dispersion correction, noise and artifact suppression, and detail preservation, and produces satisfactory visual quality. This can provide a scientific basis for applications and has potential prospects for future space applications of diffractive membrane imaging technology.
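For the shift-invariant case, a single-prior Tikhonov-regularized restoration reduces to a closed-form frequency-domain filter; this sketch is a much-simplified stand-in for the paper's multi-prior, space-variant model (which applies per-block restoration within isoplanatic regions):

```python
import numpy as np

def tikhonov_deconv(blurred, psf, lam=1e-2):
    """Frequency-domain Tikhonov-regularized deconvolution:
    X = conj(H) * Y / (|H|^2 + lam), with H the PSF's transfer function."""
    H = np.fft.fft2(psf, s=blurred.shape)   # zero-pad PSF to image size
    Y = np.fft.fft2(blurred)
    X = np.conj(H) * Y / (np.abs(H) ** 2 + lam)
    return np.real(np.fft.ifft2(X))
```

The regularization parameter lam trades noise amplification against sharpness, which is precisely the balance the paper's multiple prior parameters tune per constraint.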
Information recovery through image sequence fusion under wavelet transformation
NASA Astrophysics Data System (ADS)
He, Qiang
2010-04-01
Remote sensing is widely applied to provide information about areas with limited ground access, with applications such as assessing the destruction from natural disasters and planning relief and recovery operations. However, the collection of aerial digital images is constrained by bad weather, atmospheric conditions, and unstable cameras or camcorders. Therefore, how to recover information from low-quality remote sensing images and how to enhance image quality become very important for many visual understanding tasks, such as feature detection, object segmentation, and object recognition. The quality of remote sensing imagery can be improved through meaningful combination of images captured from different sensors or under different conditions through information fusion. Here we particularly address information fusion of remote sensing images under multi-resolution analysis of the employed image sequences. Image fusion recovers complete information by integrating multiple images captured from the same scene. Through image fusion, a new image with higher resolution, and more perceptible to human and machine, is created from a time series of low-quality images based on image registration between different video frames.
Kiely, Daniel J; Stephanson, Kirk; Ross, Sue
2011-10-01
Low-cost laparoscopic box trainers built using home computers and webcams may provide residents with a useful tool for practice at home. This study set out to evaluate the image quality of low-cost laparoscopic box trainers compared with a commercially available model. Five low-cost laparoscopic box trainers including the components listed were compared in random order to one commercially available box trainer: A (high-definition USB 2.0 webcam, PC laptop), B (Firewire webcam, Mac laptop), C (high-definition USB 2.0 webcam, Mac laptop), D (standard USB webcam, PC desktop), E (Firewire webcam, PC desktop), and F (the TRLCD03 3-DMEd Standard Minimally Invasive Training System). Participants observed still image quality and performed a peg transfer task using each box trainer. Participants rated still image quality, image quality with motion, and whether the box trainer had sufficient image quality to be useful for training. Sixteen residents in obstetrics and gynecology took part in the study. The box trainers showing no statistically significant difference from the commercially available model were A, B, C, D, and E for still image quality; A for image quality with motion; and A and B for usefulness of the simulator based on image quality. The cost of the box trainers A-E is approximately $100 to $160 each, not including a computer or laparoscopic instruments. Laparoscopic box trainers built from a high-definition USB 2.0 webcam with a PC (box trainer A) or from a Firewire webcam with a Mac (box trainer B) provide image quality comparable with a commercial standard.
Information Hiding: an Annotated Bibliography
1999-04-13
parameters needed for reconstruction are enciphered using DES. The encrypted image is hidden in a cover image. [153] 074115, 'Watermarking algorithm ...authors present a block-based watermarking algorithm for digital images. The DCT of the block is increased by a certain value. Quality control is ...includes evaluation of the watermark robustness and the subjective visual image quality. Two algorithms use the frequency domain while the two others use
Cone-beam x-ray luminescence computed tomography based on x-ray absorption dosage
NASA Astrophysics Data System (ADS)
Liu, Tianshuai; Rong, Junyan; Gao, Peng; Zhang, Wenli; Liu, Wenlei; Zhang, Yuanke; Lu, Hongbing
2018-02-01
With the advances of x-ray excitable nanophosphors, x-ray luminescence computed tomography (XLCT) has become a promising hybrid imaging modality. In particular, a cone-beam XLCT (CB-XLCT) system has demonstrated its potential in in vivo imaging with the advantage of fast imaging speed over other XLCT systems. Currently, the imaging models of most XLCT systems assume that nanophosphors emit light based on the intensity distribution of x-ray within the object, not completely reflecting the nature of the x-ray excitation process. To improve the imaging quality of CB-XLCT, an imaging model that adopts an excitation model of nanophosphors based on x-ray absorption dosage is proposed in this study. To solve the ill-posed inverse problem, a reconstruction algorithm that combines the adaptive Tikhonov regularization method with the imaging model is implemented for CB-XLCT reconstruction. Numerical simulations and phantom experiments indicate that compared with the traditional forward model based on x-ray intensity, the proposed dose-based model could improve the image quality of CB-XLCT significantly in terms of target shape, localization accuracy, and image contrast. In addition, the proposed model behaves better in distinguishing closer targets, demonstrating its advantage in improving spatial resolution.
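The ill-posed inverse problem can be written as a linear system Ax = b and stabilized with Tikhonov regularization. The sketch below shows plain Tikhonov via the normal equations, plus a crude loop that shrinks the regularization weight to mimic an "adaptive" schedule; the paper's actual adaptive rule is not reproduced here, and the schedule shown is an assumption:

```python
import numpy as np

def tikhonov(A, b, lam):
    """Solve min ||Ax - b||^2 + lam * ||x||^2 via the normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

def adaptive_tikhonov(A, b, lam0=1.0, shrink=0.5, iters=5):
    """Illustrative 'adaptive' variant: re-solve while shrinking lambda."""
    x, lam = None, lam0
    for _ in range(iters):
        x = tikhonov(A, b, lam)
        lam *= shrink
    return x
```

For the real CB-XLCT system, A would be the dose-based forward model mapping nanophosphor concentration to boundary measurements.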
Yoo, Boyeol; Son, Kihong; Pua, Rizza; Kim, Jinsung; Solodov, Alexander; Cho, Seungryong
2016-10-01
With the increased use of computed tomography (CT) in clinics, dose reduction is the most important feature people seek when considering new CT techniques or applications. We developed an intensity-weighted region-of-interest (IWROI) imaging method in an exact half-fan geometry to reduce the imaging radiation dose to patients in cone-beam CT (CBCT) for image-guided radiation therapy (IGRT). While dose reduction is highly desirable, preserving the high-quality images of the ROI is also important for target localization in IGRT. An intensity-weighting (IW) filter made of copper was mounted in place of a bowtie filter on the X-ray tube unit of an on-board imager (OBI) system such that the filter can substantially reduce radiation exposure to the outer ROI. In addition to mounting the IW filter, the lead-blade collimation of the OBI was adjusted to produce an exact half-fan scanning geometry for a further reduction of the radiation dose. The chord-based rebinned backprojection-filtration (BPF) algorithm in circular CBCT was implemented for image reconstruction, and a humanoid pelvis phantom was used for the IWROI imaging experiment. The IWROI image of the phantom was successfully reconstructed after beam-quality correction, and it was registered to the reference image within an acceptable level of tolerance. Dosimetric measurements revealed that the dose is reduced by approximately 61% in the inner ROI and by 73% in the outer ROI compared to the conventional bowtie filter-based half-fan scan. The IWROI method substantially reduces the imaging radiation dose and provides reconstructed images with an acceptable level of quality for patient setup and target localization. The proposed half-fan-based IWROI imaging technique can add a valuable option to CBCT in IGRT applications.
Quality assurance in mammography: College of Radiology Survey in Malaysia.
Ho, E L M; Ng, K H; Wong, J H D; Wang, H B
2006-06-01
Malaysia's mammography QA practice was surveyed based on the Malaysian Ministry of Health and the American College of Radiology (ACR) requirements. Data on the mammography unit, processor, image receptor, exposure factors, mean glandular dose (MGD), sensitometry, image quality and viewbox luminance were obtained. Mean developer temperature and cycle time were 34.1 ± 1.8 °C and 107.7 ± 33.2 seconds. Mean base+fog level, speed index and contrast index were 0.20 ± 0.01, 1.20 ± 0.01 and 1.33 ± 0.26, respectively. Eighty-six percent of the fifty centres passed the image quality test, while 12.5% complied with the ACR-recommended viewbox luminance. Average MGD was 1.0 ± 0.4 mGy. Malaysia is on the right track for QA but with room for total quality improvement.
Sakurai, T; Kawamata, R; Kozai, Y; Kaku, Y; Nakamura, K; Saito, M; Wakao, H; Kashima, I
2010-05-01
The aim of the study was to clarify the change in image quality upon X-ray dose reduction and to re-analyse the possibility of X-ray dose reduction in photostimulable phosphor luminescence (PSPL) X-ray imaging systems. In addition, the study attempted to verify the usefulness of multiobjective frequency processing (MFP) and flexible noise control (FNC) for X-ray dose reduction. Three PSPL X-ray imaging systems were used in this study. Modulation transfer function (MTF), noise equivalent number of quanta (NEQ) and detective quantum efficiency (DQE) were evaluated to compare the basic physical performance of each system. Subjective visual evaluation of diagnostic ability for normal anatomical structures was performed. The NEQ, DQE and diagnostic ability were evaluated at base X-ray dose, and 1/3, 1/10 and 1/20 of the base X-ray dose. The MTF of the systems did not differ significantly. The NEQ and DQE did not necessarily depend on the pixel size of the system. The images from all three systems had a higher diagnostic utility compared with conventional film images at the base and 1/3 X-ray doses. The subjective image quality was better at the base X-ray dose than at 1/3 of the base dose in all systems. The MFP and FNC-processed images had a higher diagnostic utility than the images without MFP and FNC. The use of PSPL imaging systems may allow a reduction in the X-ray dose to one-third of that required for conventional film. It is suggested that MFP and FNC are useful for radiation dose reduction.
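The physical metrics named above are linked by standard definitions: NEQ(f) = S^2 * MTF(f)^2 / NPS(f) and DQE(f) = NEQ(f) / q, where S is the large-area signal, NPS the noise power spectrum, and q the incident photon fluence. A small sketch of these generic textbook relations (not the specific measurement procedure used for the PSPL systems):

```python
import numpy as np

def neq(mtf, nps, signal):
    """Noise-equivalent quanta: NEQ(f) = S^2 * MTF(f)^2 / NPS(f)."""
    return (signal ** 2) * mtf ** 2 / nps

def dqe(mtf, nps, signal, q):
    """Detective quantum efficiency: DQE(f) = NEQ(f) / q,
    with q the incident photon fluence (quanta per unit area)."""
    return neq(mtf, nps, signal) / q
```

Evaluating these at the reduced exposures (1/3, 1/10, 1/20 of the base dose) amounts to recomputing NPS and q at each dose level.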
Quality assurance of multiport image-guided minimally invasive surgery at the lateral skull base.
Nau-Hermes, Maria; Schmitt, Robert; Becker, Meike; El-Hakimi, Wissam; Hansen, Stefan; Klenzner, Thomas; Schipper, Jörg
2014-01-01
For multiport image-guided minimally invasive surgery at the lateral skull base a quality management is necessary to avoid the damage of closely spaced critical neurovascular structures. So far there is no standardized method applicable independently from the surgery. Therefore, we adapt a quality management method, the quality gates (QG), which is well established in, for example, the automotive industry and apply it to multiport image-guided minimally invasive surgery. QG divide a process into different sections. Passing between sections can only be achieved if previously defined requirements are fulfilled which secures the process chain. An interdisciplinary team of otosurgeons, computer scientists, and engineers has worked together to define the quality gates and the corresponding criteria that need to be fulfilled before passing each quality gate. In order to evaluate the defined QG and their criteria, the new surgery method was applied with a first prototype at a human skull cadaver model. We show that the QG method can ensure a safe multiport minimally invasive surgical process at the lateral skull base. Therewith, we present an approach towards the standardization of quality assurance of surgical processes.
Learning a No-Reference Quality Assessment Model of Enhanced Images With Big Data.
Gu, Ke; Tao, Dacheng; Qiao, Jun-Fei; Lin, Weisi
2018-04-01
In this paper, we investigate the problem of image quality assessment (IQA) and enhancement via machine learning. This issue has long attracted wide attention in the computational intelligence and image processing communities, since, for many practical applications, e.g., object detection and recognition, raw images usually need to be appropriately enhanced to raise the visual quality (e.g., visibility and contrast). In fact, proper enhancement can noticeably improve the quality of input images, even beyond that of the originally captured images, which are generally thought to be of the best quality. This paper presents two main contributions. The first is a new no-reference (NR) IQA model. Given an image, our quality measure first extracts 17 features through analysis of contrast, sharpness, brightness and more, and then yields a measure of visual quality using a regression module, which is learned with big-data training samples much larger than the relevant image data sets. Experiments on nine data sets validate the superiority and efficiency of our blind metric compared with typical state-of-the-art full-reference, reduced-reference and NR IQA methods. The second contribution is a robust image enhancement framework based on quality optimization. For an input image, guided by the proposed NR-IQA measure, we conduct histogram modification to successively rectify image brightness and contrast to a proper level. Thorough tests demonstrate that our framework can effectively enhance natural images, low-contrast images, low-light images, and dehazed images. The source code will be released at https://sites.google.com/site/guke198701/publications.
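The NR-IQA recipe of "hand-crafted features plus a learned regressor" can be sketched with three toy features and ordinary least squares standing in for the paper's 17 features and big-data-trained regression module; every feature and function name here is illustrative:

```python
import numpy as np

def features(img):
    """Toy quality features: global contrast, brightness, and gradient sharpness."""
    g = img.astype(float)
    gy, gx = np.gradient(g)
    return np.array([g.std(),                   # contrast
                     g.mean(),                  # brightness
                     np.hypot(gx, gy).mean()])  # sharpness

def train_quality_model(imgs, scores):
    """Least-squares regression from features to subjective scores,
    a stand-in for the paper's learned regression module."""
    X = np.stack([features(i) for i in imgs])
    X = np.column_stack([X, np.ones(len(imgs))])   # bias term
    w, *_ = np.linalg.lstsq(X, np.asarray(scores, float), rcond=None)
    return w

def predict_quality(w, img):
    """Blind quality prediction for a new image."""
    x = np.append(features(img), 1.0)
    return float(x @ w)
```

A real system would replace the linear model with a regressor trained on large-scale subjective-score data.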
NASA Astrophysics Data System (ADS)
Miwa, Shotaro; Kage, Hiroshi; Hirai, Takashi; Sumi, Kazuhiko
We propose a probabilistic face recognition algorithm for access control systems (ACSs). Compared with existing ACSs that use low-cost IC cards, face recognition offers advantages in usability and security: it does not require people to hold cards over scanners, and it does not accept impostors holding authorized cards. Face recognition therefore attracts more interest in security markets than IC cards. But in markets where low-cost ACSs exist, price competition matters, and there are limits on the quality of available cameras and image control. ACSs using face recognition are therefore required to handle much lower quality images, such as defocused or poorly gain-controlled images, than high-security systems such as immigration control. To tackle such image quality problems, we developed a face recognition algorithm based on a probabilistic model that combines a variety of image-difference features, trained by Real AdaBoost, with their prior probability distributions. This makes it possible to evaluate and use only reliable features among the trained ones during each authentication, and to achieve high recognition rates. A field evaluation using a pseudo access control system installed in our office shows that the proposed system achieves a consistently high recognition rate independent of face image quality, with an EER (equal error rate) about four times lower under a variety of image conditions than that of the system without prior probability distributions. Using image-difference features without prior probabilities, by contrast, is sensitive to image quality. We also evaluated PCA, which has worse but constant performance because of its general optimization over all the data. Compared with PCA, Real AdaBoost without prior distributions performs twice as well under good image conditions but degrades to PCA-level performance under poor image conditions.
Gloss uniformity measurement update for ISO/IEC 19751
NASA Astrophysics Data System (ADS)
Ng, Yee S.; Cui, Chengwu; Kuo, Chunghui; Maggard, Eric; Mashtare, Dale; Morris, Peter
2005-01-01
To address the standardization of perceptually based image quality for printing systems, ISO/IEC JTC1/SC28, the standardization committee for office equipment, chartered the W1.1 project with responsibility for drafting a proposal for an international standard for the evaluation of printed image quality [1]. An ISO draft standard, ISO/WD 19751-1, Office Equipment - Appearance-based image quality standards for printers - Part 1: Overview, Procedure and Common Methods, 2004 [2], gives an overview of this multi-part appearance-based image quality standard. One task of the ISO 19751 multi-part standard is to address appearance-based gloss and gloss uniformity (in ISO 19751-2). This paper summarizes the current status and technical progress since the last two updates [3, 4]. In particular, we discuss our attempt to include the 75-degree gloss (G75) objective measurement [5] in differential gloss and within-page gloss uniformity. The result of a round-robin experiment involving objective measurement of differential gloss using the G60 and G75 gloss measurement geometries is described. The results of two perceptual round-robin experiments, on the effect of haze on the perception of gloss and on gloss artifacts (gloss streaks/bands, gloss graininess/mottle), are also discussed.
Image resolution enhancement via image restoration using neural network
NASA Astrophysics Data System (ADS)
Zhang, Shuangteng; Lu, Yihong
2011-04-01
Image super-resolution aims to obtain a high-quality image at a resolution higher than that of the original coarse one. This paper presents a new neural network-based method for image super-resolution. In this technique, super-resolution is treated as an inverse problem. An observation model that closely follows the physical image acquisition process is established to solve the problem. Based on this model, a cost function is created and minimized by a Hopfield neural network to produce high-resolution images from the corresponding low-resolution ones. Unlike some other single-frame super-resolution techniques, this technique takes point-spread-function blurring as well as additive noise into consideration, and therefore generates high-resolution images with better preserved or restored image detail. Experimental results demonstrate that the high-resolution images obtained by this technique have very high quality in terms of PSNR and are visually more pleasing.
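The inverse-problem view can be sketched as gradient descent on the data-fidelity energy E(x) = ||y - DHx||^2, here with a 2x2 box blur as H and decimation as D; the paper minimizes an analogous cost (including a regularizer) with a Hopfield network, which is not reproduced in this sketch:

```python
import numpy as np

def blur_downsample(x, factor=2):
    """Forward model: box blur (H) followed by decimation (D), via mean pooling."""
    h, w = x.shape
    return x.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def upsample_T(y, factor=2):
    """Adjoint of blur_downsample for the box kernel: spread each value back."""
    return np.repeat(np.repeat(y, factor, 0), factor, 1) / factor ** 2

def super_resolve(y, factor=2, lr=1.0, iters=200):
    """Gradient descent on E(x) = ||y - DHx||^2, starting from a flat estimate."""
    x = np.zeros((y.shape[0] * factor, y.shape[1] * factor))
    for _ in range(iters):
        r = blur_downsample(x, factor) - y   # residual in low-resolution space
        x -= lr * upsample_T(r, factor)      # step along the gradient D^T H^T r
    return x
```

Because the system is underdetermined, a practical solver adds a smoothness prior; this sketch only enforces data consistency.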
Wavefront sensorless adaptive optics ophthalmoscopy in the human eye
Hofer, Heidi; Sredar, Nripun; Queener, Hope; Li, Chaohong; Porter, Jason
2011-01-01
Wavefront sensor noise and fidelity place a fundamental limit on achievable image quality in current adaptive optics ophthalmoscopes. Additionally, the wavefront sensor ‘beacon’ can interfere with visual experiments. We demonstrate real-time (25 Hz), wavefront sensorless adaptive optics imaging in the living human eye with image quality rivaling that of wavefront sensor based control in the same system. A stochastic parallel gradient descent algorithm directly optimized the mean intensity in retinal image frames acquired with a confocal adaptive optics scanning laser ophthalmoscope (AOSLO). When imaging through natural, undilated pupils, both control methods resulted in comparable mean image intensities. However, when imaging through dilated pupils, image intensity was generally higher following wavefront sensor-based control. Despite the typically reduced intensity, image contrast was higher, on average, with sensorless control. Wavefront sensorless control is a viable option for imaging the living human eye and future refinements of this technique may result in even greater optical gains. PMID:21934779
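A stochastic parallel gradient descent loop of the kind described can be sketched as follows, with a toy quadratic "sharpness" metric standing in for the mean retinal-image intensity; the gain, perturbation size, and metric are illustrative assumptions:

```python
import numpy as np

def spgd(metric, n_modes, gain=0.5, perturb=0.1, iters=500, seed=0):
    """Stochastic parallel gradient descent: apply random +/- perturbations to
    all control channels at once and step along the estimated metric gradient."""
    rng = np.random.default_rng(seed)
    u = np.zeros(n_modes)                       # mirror command vector
    for _ in range(iters):
        d = perturb * rng.choice([-1.0, 1.0], size=n_modes)
        dJ = metric(u + d) - metric(u - d)      # two-sided metric difference
        u += gain * dJ * d                      # parallel gradient estimate
    return u

# Toy image-sharpness metric, peaked at the aberration-cancelling command.
target = np.array([0.3, -0.2, 0.1])
sharpness = lambda u: -np.sum((u - target) ** 2)
```

In the real instrument, `metric` would be the mean intensity of the latest AOSLO frame, and `u` the deformable-mirror actuator or modal coefficients.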
Restoration of color in a remote sensing image and its quality evaluation
NASA Astrophysics Data System (ADS)
Zhang, Zuxun; Li, Zhijiang; Zhang, Jianqing; Wang, Zhihe
2003-09-01
This paper focuses on the restoration of color remote sensing images (including airborne photographs), and a complete approach is recommended. It proposes that two main aspects be addressed in restoring a remote sensing image: restoration of spatial information and restoration of photometric information. In this proposal, the restoration of spatial information is performed by using the modulation transfer function (MTF) as the degradation function, where the MTF is obtained by measuring the edge curve of the original image. The restoration of photometric information is performed by an improved local maximum entropy algorithm. Moreover, a valid approach to processing color remote sensing images is recommended: the color image is split into three monochromatic images corresponding to the three visible-light bands, and the three images are synthesized after being processed separately under psychological color vision constraints. Finally, three novel evaluation variables based on image restoration are introduced to evaluate restoration quality in terms of both spatial and photometric restoration. An evaluation is provided at the end.
The role of Imaging and Radiation Oncology Core for precision medicine era of clinical trial
Rosen, Mark
2017-01-01
Imaging and Radiation Oncology Core (IROC) services have been established for the quality assurance (QA) of imaging and radiotherapy (RT) in NCI's National Clinical Trials Network (NCTN) for any trials that contain imaging or RT. The randomized clinical trial is the gold standard for evidence-based medicine. QA ensures data quality, preventing noise from inferior treatments from obscuring clinical trial outcomes, and has also been found to be cost-effective. IROC has made great progress in multi-institution standardization and is expected to lead QA standardization and QA science in imaging and RT, and to advance quality data analysis with big data in the future. QA is of paramount importance in the era of precision medicine, when individualized decision making may depend on the quality and accuracy of RT and imaging. PMID:29218265
Genetic algorithm optimization of DWT-DCT based image watermarking
NASA Astrophysics Data System (ADS)
Budiman, Gelar; Novamizanti, Ledya; Iwut, Iwan
2017-01-01
Data hiding in image content is essential for establishing image ownership. The two-dimensional discrete wavelet transform (DWT) and discrete cosine transform (DCT) are proposed as the transform methods in this paper. First, the host image in RGB color space is converted to a selected color space, and the layer in which the watermark is embedded can also be selected. Next, a 2D DWT transforms the selected layer into four subbands, of which one is selected. A block-based 2D DCT then transforms the selected subband. A binary watermark is embedded in the AC coefficients of each block after zigzag ordering and range-based pixel selection. A delta parameter replacing pixels in each range represents the embedded bit: +delta represents bit "1" and -delta represents bit "0". The parameters optimized by a genetic algorithm (GA) are the selected color space, layer, DWT subband, block size, embedding range, and delta. Simulation results show that the GA can determine parameters that achieve optimal imperceptibility and robustness under any watermarked-image condition, whether attacked or not. The DWT step in DCT-based image watermarking, optimized by GA, improves watermarking performance. Under five attacks (JPEG 50%, resize 50%, histogram equalization, salt-and-pepper noise, and additive noise with variance 0.01), the proposed method achieves perfect watermark recovery with BER = 0, and the watermarked image quality in terms of PSNR is also about 5 dB higher than that of the previous method.
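The embedding chain (one-level DWT, block DCT, ±delta on an AC coefficient) can be sketched as follows, using a Haar DWT and 4x4 blocks for brevity; color-space selection, zigzag ordering, the embedding-range rule, and the GA search are omitted, and the choice of coefficient (1,1) is an illustrative assumption:

```python
import numpy as np

def haar_dwt2(x):
    """One-level 2-D Haar DWT; returns (LL, LH, HL, HH)."""
    a = (x[0::2] + x[1::2]) / 2
    d = (x[0::2] - x[1::2]) / 2
    LL = (a[:, 0::2] + a[:, 1::2]) / 2
    LH = (a[:, 0::2] - a[:, 1::2]) / 2
    HL = (d[:, 0::2] + d[:, 1::2]) / 2
    HH = (d[:, 0::2] - d[:, 1::2]) / 2
    return LL, LH, HL, HH

def dct_matrix(n):
    """Orthonormal DCT-II matrix."""
    k, i = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    C = np.sqrt(2 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    C[0] = np.sqrt(1 / n)
    return C

def embed_bits(LL, bits, delta=4.0, block=4):
    """Set one AC coefficient per DCT block to +delta (bit 1) or -delta (bit 0)."""
    C = dct_matrix(block)
    out = LL.copy()
    nb = LL.shape[1] // block
    for idx, b in enumerate(bits):
        r, c = (idx // nb) * block, (idx % nb) * block
        B = C @ out[r:r+block, c:c+block] @ C.T    # forward 2-D DCT
        B[1, 1] = delta if b else -delta           # mark an AC coefficient
        out[r:r+block, c:c+block] = C.T @ B @ C    # inverse 2-D DCT
    return out

def extract_bits(LL, n_bits, block=4):
    """Recover bits from the sign of the marked AC coefficient."""
    C = dct_matrix(block)
    nb = LL.shape[1] // block
    bits = []
    for idx in range(n_bits):
        r, c = (idx // nb) * block, (idx % nb) * block
        B = C @ LL[r:r+block, c:c+block] @ C.T
        bits.append(1 if B[1, 1] > 0 else 0)
    return bits
```

The GA in the paper would search over the subband, block size, range, and delta used here as fixed defaults.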
An image-based approach to understanding the physics of MR artifacts.
Morelli, John N; Runge, Val M; Ai, Fei; Attenberger, Ulrike; Vu, Lan; Schmeets, Stuart H; Nitz, Wolfgang R; Kirsch, John E
2011-01-01
As clinical magnetic resonance (MR) imaging becomes more versatile and more complex, it is increasingly difficult to develop and maintain a thorough understanding of the physical principles that govern the changing technology. This is particularly true for practicing radiologists, whose primary obligation is to interpret clinical images and not necessarily to understand complex equations describing the underlying physics. Nevertheless, the physics of MR imaging plays an important role in clinical practice because it determines image quality, and suboptimal image quality may hinder accurate diagnosis. This article provides an image-based explanation of the physics underlying common MR imaging artifacts, offering simple solutions for remedying each type of artifact. Solutions that have emerged from recent technologic advances with which radiologists may not yet be familiar are described in detail. Types of artifacts discussed include those resulting from voluntary and involuntary patient motion, magnetic susceptibility, magnetic field inhomogeneities, gradient nonlinearity, standing waves, aliasing, chemical shift, and signal truncation. With an improved awareness and understanding of these artifacts, radiologists will be better able to modify MR imaging protocols so as to optimize clinical image quality, allowing greater confidence in diagnosis. Copyright © RSNA, 2011.
Ju, Yun Hye; Lee, Geewon; Lee, Ji Won; Hong, Seung Baek; Suh, Young Ju; Jeong, Yeon Joo
2018-05-01
Background: Reducing radiation dose inevitably increases image noise; thus, it is important in low-dose computed tomography (CT) to maintain image quality and lesion detection performance. Purpose: To assess the image quality and lesion conspicuity of ultra-low-dose CT with model-based iterative reconstruction (MBIR) and to determine a suitable protocol for lung screening CT. Material and Methods: A total of 120 heavy smokers underwent lung screening CT and were randomly and equally assigned to one of five groups: group 1 = 120 kVp, 25 mAs, with FBP reconstruction; group 2 = 120 kVp, 10 mAs, with MBIR; group 3 = 100 kVp, 15 mAs, with MBIR; group 4 = 100 kVp, 10 mAs, with MBIR; and group 5 = 100 kVp, 5 mAs, with MBIR. Two radiologists evaluated intergroup differences with respect to radiation dose, image noise, image quality, and lesion conspicuity using the Kruskal-Wallis test and the Chi-square test. Results: Effective doses were 61-87% lower in groups 2-5 than in group 1. Image noise in groups 1 and 5 was significantly higher than in the other groups (P < 0.001). Overall image quality was best in group 1, but the diagnostic acceptability of overall image quality in groups 1-3 did not differ significantly (all P values > 0.05). Lesion conspicuity was similar in groups 1-4 but significantly poorer in group 5. Conclusion: Lung screening CT with MBIR obtained at 100 kVp and 15 mAs enables a ~60% reduction in radiation dose versus low-dose CT while maintaining image quality and lesion conspicuity.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Christianson, O; Winslow, J; Samei, E
2014-06-15
Purpose: One of the principal challenges of clinical imaging is to achieve an ideal balance between image quality and radiation dose across multiple CT models. The number of scanners and protocols at large medical centers necessitates an automated quality assurance program to facilitate this objective. Therefore, the goal of this work was to implement an automated CT image quality and radiation dose monitoring program based on actual patient data and to use this program to assess the consistency of protocols across CT scanner models. Methods: Patient CT scans are routed to a HIPAA-compliant quality assurance server. CTDI, extracted using optical character recognition, and patient size, measured from the localizers, are used to calculate the size-specific dose estimate (SSDE). A previously validated noise measurement algorithm determines the noise in uniform areas of the image across the scanned anatomy to generate a global noise level (GNL). Using this program, 2358 abdominopelvic scans acquired on three commercial CT scanners were analyzed. Median SSDE and GNL were compared across scanner models, and trends in SSDE and GNL with patient size were used to determine the impact of differing automatic exposure control (AEC) algorithms. Results: There was a significant difference in both SSDE and GNL across scanner models (9-33% and 15-35% for SSDE and GNL, respectively). Adjusting all protocols to achieve the same image noise would reduce patient dose by 27-45% depending on scanner model. Additionally, differences in AEC methodologies across vendors resulted in disparate relationships of SSDE and GNL with patient size. Conclusion: The difference in noise across scanner models indicates that protocols are not optimally matched to achieve consistent image quality. Our results indicate a substantial possibility for dose reduction while achieving more consistent image appearance. Finally, the difference in AEC methodologies suggests the need for size-specific CT protocols to minimize variability in image quality across CT vendors.
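The SSDE computation referenced above combines CTDIvol with a size-dependent conversion factor; AAPM Report 204 fits f(Dw) = a * exp(-b * Dw) to the tabulated factors for the 32-cm body phantom. The coefficients below are quoted from that report and should be verified against it before any clinical use:

```python
import math

def ssde(ctdi_vol_mgy, water_equiv_diameter_cm):
    """Size-specific dose estimate: SSDE = f(Dw) * CTDIvol, with the
    exponential conversion-factor fit for the 32-cm reference phantom."""
    a, b = 3.704369, 0.03671937   # AAPM Report 204 fit coefficients
    f = a * math.exp(-b * water_equiv_diameter_cm)
    return f * ctdi_vol_mgy
```

Note that f is about 1 near Dw ~ 36 cm and exceeds 1 for smaller patients, so small patients receive a higher dose than CTDIvol alone suggests.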
Body image and college women's quality of life: The importance of being self-compassionate.
Duarte, Cristiana; Ferreira, Cláudia; Trindade, Inês A; Pinto-Gouveia, José
2015-06-01
This study explored self-compassion as a mediator between body dissatisfaction, social comparison based on body image and quality of life in 662 female college students. Path analysis revealed that while controlling for body mass index, self-compassion mediated the impact of body dissatisfaction and unfavourable social comparisons on psychological quality of life. The path model accounted for 33 per cent of psychological quality of life variance. Findings highlight the importance of self-compassion as a mechanism that may operate on the association between negative body image evaluations and young women's quality of life. © The Author(s) 2015.
Quality based approach for adaptive face recognition
NASA Astrophysics Data System (ADS)
Abboud, Ali J.; Sellahewa, Harin; Jassim, Sabah A.
2009-05-01
Recent advances in biometric technology have pushed towards more robust and reliable systems. We aim to build systems that have low recognition errors and are less affected by variation in recording conditions. Recognition errors are often attributed to the use of low-quality biometric samples. Hence, there is a need to develop new intelligent techniques and strategies to automatically measure/quantify the quality of biometric image samples and, if necessary, restore image quality according to the needs of the intended application. In this paper, we present no-reference image quality measures in the spatial domain that have an impact on face recognition. The first is called the symmetrical adaptive local quality index (SALQI) and the second is called middle halve (MH). An adaptive strategy has also been developed to select the best way to restore image quality, called symmetrical adaptive histogram equalization (SAHE). The main benefits of using quality measures for the adaptive strategy are: (1) avoidance of excessive, unnecessary enhancement procedures that may cause undesired artifacts, and (2) reduced computational complexity, which is essential for real-time applications. We test the success of the proposed measures and adaptive approach on a wavelet-based face recognition system that uses the nearest neighbor classifier. We demonstrate noticeable improvements in the performance of the adaptive face recognition system over the corresponding non-adaptive scheme.
Quality assessment of color images based on the measure of just noticeable color difference
NASA Astrophysics Data System (ADS)
Chou, Chun-Hsien; Hsu, Yun-Hsiang
2014-01-01
Accurate assessment of the quality of color images is an important step in many image processing systems that convey visual information about the reproduced images. An accurate objective image quality assessment (IQA) method is expected to give results that agree closely with subjective assessment. To assess the quality of color images, many approaches simply apply a metric designed for gray-scale images to each of the three color channels, neglecting the correlation among the channels. In this paper, a metric for assessing color image quality is proposed in which a model of the variable just-noticeable color difference (VJNCD) is employed to estimate the visibility threshold of the distortion at each color pixel. With the estimated visibility thresholds, the proposed metric measures the average perceptible distortion in terms of the quantized distortion, according to a perceptual error map similar to that defined by the National Bureau of Standards (NBS) for converting the color difference computed by CIEDE2000 into an objective score of perceptual quality. The perceptual error map in this case is designed for each pixel according to the visibility threshold estimated by the VJNCD model. The performance of the proposed metric is verified by assessing the test images in the LIVE database and is compared with those of many well-known IQA metrics. Experimental results indicate that the proposed metric is an effective IQA method that can accurately predict the quality of color images in terms of the correlation between objective scores and subjective evaluations.
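The thresholding idea (only distortion exceeding the per-pixel just-noticeable color difference counts) can be sketched as follows, with plain Euclidean distance in a Lab-like space standing in for CIEDE2000 and the NBS-style quantization omitted; both simplifications are assumptions of this sketch:

```python
import numpy as np

def jnd_quality(ref, test, jnd_map):
    """Average perceptible distortion: per-pixel colour differences below the
    just-noticeable threshold (jnd_map) count as zero."""
    diff = ref.astype(float) - test.astype(float)
    de = np.sqrt((diff ** 2).sum(axis=-1))        # stand-in for CIEDE2000
    perceptible = np.maximum(de - jnd_map, 0.0)   # clip sub-threshold error
    return perceptible.mean()
```

A higher score means more visible distortion; a VJNCD model would supply `jnd_map` per pixel rather than a constant.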
Spread spectrum image watermarking based on perceptual quality metric.
Zhang, Fan; Liu, Wenyu; Lin, Weisi; Ngan, King Ngi
2011-11-01
Efficient image watermarking calls for full exploitation of the perceptual distortion constraint. Second-order statistics of visual stimuli are regarded as critical features for perception. This paper proposes a second-order statistics (SOS)-based image quality metric, which considers the texture masking effect and the contrast sensitivity in Karhunen-Loève transform domain. Compared with the state-of-the-art metrics, the quality prediction by SOS better correlates with several subjectively rated image databases, in which the images are impaired by the typical coding and watermarking artifacts. With the explicit metric definition, spread spectrum watermarking is posed as an optimization problem: we search for a watermark to minimize the distortion of the watermarked image and to maximize the correlation between the watermark pattern and the spread spectrum carrier. The simple metric guarantees the optimal watermark a closed-form solution and a fast implementation. The experiments show that the proposed watermarking scheme can take full advantage of the distortion constraint and improve the robustness in return.
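The Karhunen-Loève transform underlying the SOS metric is, in essence, a principal component analysis of image patches. A minimal, generic sketch of computing a KLT basis is shown below (toy data; this is not the SOS metric itself, and the patch construction is an assumption for illustration):

```python
import numpy as np

def klt_basis(patches):
    """Karhunen-Loeve transform basis: eigenvectors of the patch
    covariance matrix, ordered by decreasing eigenvalue (variance)."""
    X = patches - patches.mean(axis=0)
    cov = X.T @ X / (len(X) - 1)
    w, V = np.linalg.eigh(cov)          # ascending eigenvalues
    order = np.argsort(w)[::-1]         # reorder to descending variance
    return w[order], V[:, order]

# toy patches with strongly correlated dimensions, so variance
# concentrates in the first KLT component
rng = np.random.default_rng(6)
t = rng.standard_normal((500, 1))
patches = np.hstack([t, 0.9 * t, 0.8 * t]) + 0.05 * rng.standard_normal((500, 3))
w, V = klt_basis(patches)
```

Because the toy patches are nearly rank-one, almost all variance lands on the leading KLT component, which is the decorrelation property such metrics exploit.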
Evaluation of fluorophores for optimal performance in localization-based super-resolution imaging
Dempsey, Graham T.; Vaughan, Joshua C.; Chen, Kok Hao; Bates, Mark; Zhuang, Xiaowei
2011-01-01
One approach to super-resolution fluorescence imaging uses sequential activation and localization of individual fluorophores to achieve high spatial resolution. Essential to this technique is the choice of fluorescent probes — the properties of the probes, including photons per switching event, on/off duty cycle, photostability, and number of switching cycles, largely dictate the quality of super-resolution images. While many probes have been reported, a systematic characterization of the properties of these probes and their impact on super-resolution image quality has been described in only a few cases. Here, we quantitatively characterized the switching properties of 26 organic dyes and directly related these properties to the quality of super-resolution images. This analysis provides a set of guidelines for characterization of super-resolution probes and a resource for selecting probes based on performance. Our evaluation identified several photoswitchable dyes with good to excellent performance in four independent spectral ranges, with which we demonstrated low crosstalk, four-color super-resolution imaging. PMID:22056676
Toward a perceptual image quality assessment of color quantized images
NASA Astrophysics Data System (ADS)
Frackiewicz, Mariusz; Palus, Henryk
2018-04-01
Color image quantization is an important operation in the field of color image processing. In this paper, we consider new perceptual image quality metrics for the assessment of quantized images. These metrics, e.g., DSCSI, MDSIs, MDSIm, and HPSI, achieve the highest correlation coefficients with MOS in tests on six publicly available image databases. The research was limited to images distorted by two types of compression: JPG and JPG2K. Statistical analysis of the correlation coefficients based on the Friedman test and post-hoc procedures showed that the differences between the four new perceptual metrics are not statistically significant.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bai, T; UT Southwestern Medical Center, Dallas, TX; Yan, H
2014-06-15
Purpose: To develop a 3D dictionary learning based statistical reconstruction algorithm on graphics processing units (GPU) to improve the quality of low-dose cone beam CT (CBCT) imaging with high efficiency. Methods: A 3D dictionary containing 256 small volumes (atoms) of 3x3x3 voxels was trained from a high-quality volume image. During reconstruction, we utilized a Cholesky-decomposition-based orthogonal matching pursuit algorithm to find a sparse representation on this dictionary basis for each patch in the reconstructed image, in order to regularize the image quality. To accelerate the time-consuming sparse coding in the 3D case, we implemented our algorithm in a parallel fashion by taking advantage of the tremendous computational power of the GPU. Evaluations were performed on a head-neck patient case. FDK reconstruction with the full dataset of 364 projections was used as the reference. We compared the proposed 3D dictionary learning based method with a tight frame (TF) based one using a subset of 121 projections. The image quality under different resolutions in the z-direction, with or without statistical weighting, was also studied. Results: Compared to the TF-based CBCT reconstruction, our experiments indicated that 3D dictionary learning based CBCT reconstruction is able to recover finer structures, remove more streaking artifacts, and is less susceptible to blocky artifacts. It was also observed that the statistical reconstruction approach is sensitive to inconsistency between the forward and backward projection operations in parallel computing. Using a high spatial resolution along the z-direction helps improve the algorithm's robustness. Conclusion: The 3D dictionary learning based CBCT reconstruction algorithm is able to sense structural information while suppressing noise, and hence achieves high-quality reconstruction.
The GPU realization of the whole algorithm offers a significant efficiency enhancement, making this algorithm more feasible for potential clinical application. A high z-resolution is preferred to stabilize statistical iterative reconstruction. This work was supported in part by NIH (1R01CA154747-01), NSFC (No. 61172163), the Research Fund for the Doctoral Program of Higher Education of China (No. 20110201110011), and the China Scholarship Council.
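The sparse-coding step described above, finding a few dictionary atoms to represent each patch, can be illustrated with a minimal orthogonal matching pursuit sketch. This is a plain least-squares variant, not the paper's Cholesky-based GPU implementation; the orthonormal toy dictionary and patch sizes below are assumptions chosen so the example is deterministic:

```python
import numpy as np

def omp(D, x, k):
    """Greedy orthogonal matching pursuit: approximate x using at most k
    atoms (columns) of the dictionary D (columns assumed unit-norm)."""
    residual = x.astype(float).copy()
    support = []
    coef = np.zeros(D.shape[1])
    for _ in range(k):
        # pick the atom most correlated with the current residual
        idx = int(np.argmax(np.abs(D.T @ residual)))
        if idx not in support:
            support.append(idx)
        # re-fit coefficients on the selected support by least squares
        sol, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        coef[:] = 0.0
        coef[support] = sol
        residual = x - D @ coef
    return coef

# toy dictionary: an orthonormal basis, so 2-sparse recovery is exact
rng = np.random.default_rng(0)
D, _ = np.linalg.qr(rng.standard_normal((8, 8)))
x = 2.0 * D[:, 2] - 1.5 * D[:, 6]      # a "patch" built from two atoms
coef = omp(D, x, k=2)
```

In the paper's setting the patch would be a vectorized 3x3x3 volume and the dictionary an overcomplete set of 256 learned atoms; the greedy selection and support re-fitting proceed the same way.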
Kwon, Heejin; Reid, Scott; Kim, Dongeun; Lee, Sangyun; Cho, Jinhan; Oh, Jongyeong
2018-01-04
This study aimed to evaluate the image quality and diagnostic performance of a recently developed navigated three-dimensional magnetic resonance cholangiopancreatography (3D-MRCP) with compressed sensing (CS) based on parallel imaging (PI), compared with conventional 3D-MRCP with PI only, in patients with abnormal bile duct dilatation. This institutional review board-approved study included 45 consecutive patients [non-malignant common bile duct lesions (n = 21) and malignant common bile duct lesions (n = 24)] who underwent MRCP of the abdomen to evaluate bile duct dilatation. All patients were imaged at 3T (MR 750, GE Healthcare, Waukesha, WI), including two kinds of 3D-MRCP using 352 × 288 matrices with and without CS based on PI. Two radiologists independently and blindly assessed randomized images. CS acceleration reduced the acquisition time from an average of 5 min 6 s to a total of 2 min 56 s. Image quality of all CS cine images was significantly higher than that of standard cine MR images for all quantitative measurements. Diagnostic accuracy for benign and malignant lesions differed statistically between standard and CS 3D-MRCP. The overall image quality and diagnostic accuracy in biliary obstruction evaluation demonstrate that CS-accelerated 3D-MRCP sequences can provide superior diagnostic information in 42.5% less time. This has the potential to reduce motion-related artifacts and improve diagnostic efficacy.
Application of furniture images selection based on neural network
NASA Astrophysics Data System (ADS)
Wang, Yong; Gao, Wenwen; Wang, Ying
2018-05-01
In the construction of a furniture image database of 2 million images, to address the problem of low database quality, a combination of CNN and metric learning is proposed, which makes it possible to quickly and accurately remove duplicate and irrelevant samples from the furniture image database. This addresses the problems that existing image screening methods are complex, insufficiently accurate, and time-consuming. After the data quality is improved, the deep learning algorithm achieves excellent image matching ability in actual furniture retrieval applications.
Facial motion parameter estimation and error criteria in model-based image coding
NASA Astrophysics Data System (ADS)
Liu, Yunhai; Yu, Lu; Yao, Qingdong
2000-04-01
Model-based image coding has received extensive attention due to its high subjective image quality and low bit-rates. However, the estimation of object motion parameters is still a difficult problem, and there are no proper error criteria for quality assessment that are consistent with visual properties. This paper presents an algorithm for facial motion parameter estimation based on feature point correspondence and gives motion parameter error criteria. The facial motion model comprises three parts. The first part is the global 3-D rigid motion of the head, the second part is non-rigid translational motion in the jaw area, and the third part consists of local non-rigid expression motion in the eye and mouth areas. The feature points are automatically selected by a function of edges, brightness, and end-nodes outside the blocks of the eyes and mouth. The number of feature points is adjusted adaptively. The jaw translational motion is tracked by the changes in the positions of the jaw feature points. The areas of non-rigid expression motion can be rebuilt using a block-pasting method. An approach for estimating motion parameter error based on the quality of the reconstructed image is suggested, and an area error function and an error function of the contour transition-turn rate are used as quality criteria. The criteria properly reflect the image geometric distortion caused by errors in the estimated motion parameters.
Integrating image quality in 2nu-SVM biometric match score fusion.
Vatsa, Mayank; Singh, Richa; Noore, Afzel
2007-10-01
This paper proposes an intelligent 2nu-support vector machine based match score fusion algorithm to improve the performance of face and iris recognition by integrating the quality of images. The proposed algorithm applies redundant discrete wavelet transform to evaluate the underlying linear and non-linear features present in the image. A composite quality score is computed to determine the extent of smoothness, sharpness, noise, and other pertinent features present in each subband of the image. The match score and the corresponding quality score of an image are fused using 2nu-support vector machine to improve the verification performance. The proposed algorithm is experimentally validated using the FERET face database and the CASIA iris database. The verification performance and statistical evaluation show that the proposed algorithm outperforms existing fusion algorithms.
SkySat-1: very high-resolution imagery from a small satellite
NASA Astrophysics Data System (ADS)
Murthy, Kiran; Shearn, Michael; Smiley, Byron D.; Chau, Alexandra H.; Levine, Josh; Robinson, M. Dirk
2014-10-01
This paper presents details of the SkySat-1 mission, the first microsatellite-class commercial earth-observation system to generate sub-meter resolution panchromatic imagery, in addition to sub-meter resolution 4-band pan-sharpened imagery. SkySat-1 was built and launched for an order of magnitude lower cost than similarly performing missions. The low-cost design enables the deployment of a large imaging constellation that can provide imagery with both high temporal resolution and high spatial resolution. One key enabler of the SkySat-1 mission was simplifying the spacecraft design and instead relying on ground-based image processing to achieve high performance at the system level. The imaging instrument consists of a custom-designed high-quality optical telescope and commercially available high frame rate CMOS image sensors. While each individually captured raw image frame shows moderate quality, ground-based image processing algorithms improve the raw data by combining data from multiple frames to boost image signal-to-noise ratio (SNR) and decrease the ground sample distance (GSD) in a process Skybox calls "digital TDI". Careful quality assessment and tuning of the spacecraft, payload, and algorithms was necessary to generate high-quality panchromatic, multispectral, and pan-sharpened imagery. Furthermore, the framing sensor configuration enabled the first commercial High-Definition full-frame rate panchromatic video to be captured from space, with approximately 1 meter ground sample distance. Details of the SkySat-1 imaging instrument and ground-based image processing system are presented, as well as an overview of the work involved in calibrating and validating the system. Examples of raw and processed imagery are shown, and the raw imagery is compared to pre-launch simulated imagery used to tune the image processing algorithms.
Miraglia, Roberto; Maruzzelli, Luigi; Cortis, Kelvin; Tafaro, Corrado; Gerasia, Roberta; Parisi, Carmelo; Luca, Angelo
2015-08-01
To determine whether the use of a low-dose acquisition protocol (LDP) in digital subtraction angiography during transjugular intrahepatic portosystemic shunt (TIPS) creation/revision results in significant reduction of patient radiation exposure and adequate image quality, as compared to a default reference standard-dose acquisition protocol (SDP). Two angiographic runs were performed during TIPS creation/revision: the first following catheterization of the portal venous system and the second after stent deployment/angioplasty. Constant field of view, object to image-detector distance, and source to image-receptor distance were maintained in each patient during the two angiographic runs. 17 consecutive adult patients who underwent TIPS creation (n = 11) or TIPS revision (n = 6) from December 2013 to March 2014 were considered eligible for this single centre prospective study. In each patient, the LDP and the SDP were used in a random order for the two runs, with each patient serving as his/her own control. The dose-area product (DAP) was calculated for each image and compared. Image quality was graded by two interventional radiologists other than the operator. In all runs acquired with the LDP, image quality was considered adequate for a successful procedural outcome. The DAP per image of the LDP was lower than the DAP per image of the SDP in all patients. The mean reduction in DAP per image was 75.24% ± 5.7% (p < 0.001). Radiation exposure during TIPS creation/revision was significantly reduced by selecting a LDP in our flat-panel detector-based system, while maintaining adequate image quality.
Evaluation method based on the image correlation for laser jamming image
NASA Astrophysics Data System (ADS)
Che, Jinxi; Li, Zhongmin; Gao, Bo
2013-09-01
The evaluation of jamming effectiveness against infrared imaging systems is an important part of electro-optical countermeasures. Infrared imaging devices are widely used in the military in searching, tracking, guidance, and many other fields. At the same time, with the continuous development of laser technology, research on laser interference and damage effects has advanced continuously, and lasers have been used to disturb infrared imaging devices. Therefore, evaluating the effect of laser jamming on infrared imaging systems has become a meaningful problem to be solved. The information that an infrared imaging system ultimately presents to the user is an image, so jamming effectiveness can be evaluated from the standpoint of image quality assessment. An image contains two kinds of information, light amplitude and light phase, so image correlation can accurately capture the difference between the original image and the disturbed image. In this paper, the evaluation method of digital image correlation, the image quality assessment method based on the Fourier transform, the image quality estimation method based on error statistics, and the evaluation method based on peak signal-to-noise ratio are analysed, along with the advantages and disadvantages of each method. Moreover, infrared disturbed images from an experiment in which a thermal infrared imager was jammed by a laser were analysed using these methods. The results show that the methods can well reflect the jamming effect of a laser on an infrared imaging system. Furthermore, the evaluation results are in good agreement with subjective visual evaluation, and the methods provide good repeatability and convenient quantitative analysis. The feasibility of the methods for evaluating jamming effect was thus demonstrated.
These results provide a useful reference for the study and development of electro-optical countermeasure equipment and effectiveness evaluation.
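Two of the full-reference scores discussed here, peak signal-to-noise ratio and the correlation between the original and the disturbed image, can be sketched in a few lines. This is a generic formulation assuming 8-bit images, not the authors' exact implementation, and the synthetic frames below are made-up data:

```python
import numpy as np

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio (dB) between reference and test images."""
    mse = np.mean((ref.astype(float) - img.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def correlation(ref, img):
    """Normalized correlation coefficient between the two images."""
    a = ref.astype(float) - ref.mean()
    b = img.astype(float) - img.mean()
    return float(np.sum(a * b) / np.sqrt(np.sum(a ** 2) * np.sum(b ** 2)))

# toy example: a synthetic frame and a noise-disturbed copy; stronger
# jamming noise drives both scores down
rng = np.random.default_rng(1)
original = rng.integers(0, 256, size=(64, 64)).astype(float)
disturbed = np.clip(original + rng.normal(0, 25, size=(64, 64)), 0, 255)
```

An undisturbed image scores infinite PSNR and unit correlation; increasing the disturbance lowers both, which is what makes these scores usable as jamming-effect measures.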
NASA Astrophysics Data System (ADS)
Wu, Xiaojun; Wu, Yumei; Wen, Peizhi
2018-03-01
To obtain information on the outer surface of a cylindrical object, we propose a catadioptric panoramic imaging system based on the principle of uniform spatial resolution for vertical scenes. First, the influence of the projection-equation coefficients on the spatial resolution and astigmatism of the panoramic system is discussed. Through parameter optimization, we obtain appropriate coefficients for the projection equation, so that the imaging quality of the entire imaging system can reach an optimum. Finally, the system projection equation is calibrated, and an undistorted rectangular panoramic image is obtained using the cylindrical-surface projection expansion method. The proposed 360-deg panoramic imaging device overcomes the shortcomings of existing surface panoramic imaging methods, and it has the advantages of low cost, simple structure, high imaging quality, and small distortion. The experimental results show the effectiveness of the proposed method.
An Optimal Partial Differential Equations-based Stopping Criterion for Medical Image Denoising.
Khanian, Maryam; Feizi, Awat; Davari, Ali
2014-01-01
Improving the quality of medical images before and after surgery is necessary for beginning and speeding up the recovery process. Partial differential equation (PDE)-based models have become a powerful and well-known tool in different areas of image processing, such as denoising, multiscale image analysis, edge detection, and other fields of image processing and computer vision. In this paper, an algorithm for medical image denoising using an anisotropic diffusion filter with a convenient stopping criterion is presented. In this regard, the paper introduces two strategies: utilizing the efficient explicit method, together with an effective software technique for solving the anisotropic diffusion filter, which is otherwise mathematically unstable; and proposing an automatic stopping criterion that takes into consideration only the input image, as opposed to other stopping criteria, while also accounting for the quality of the denoised image, ease of use, and runtime. Various medical images are examined to confirm the claim.
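For reference, the explicit anisotropic diffusion scheme discussed above can be sketched in the classic Perona-Malik form. This is a textbook formulation with periodic borders for brevity, not the paper's exact filter or stopping criterion; `kappa` and `dt` are assumed values, with `dt <= 0.25` keeping the explicit update stable:

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=30.0, dt=0.2):
    """Explicit Perona-Malik diffusion with a 4-neighbour stencil.
    Periodic borders (np.roll) are used purely to keep the sketch short."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # differences to the four neighbours
        diffs = [np.roll(u, s, axis) - u
                 for axis in (0, 1) for s in (-1, 1)]
        # edge-stopping function g(d) = exp(-(d/kappa)^2) damps diffusion
        # across strong edges while smoothing homogeneous regions
        u = u + dt * sum(np.exp(-(d / kappa) ** 2) * d for d in diffs)
    return u

# toy example: denoise a noisy gradient image
rng = np.random.default_rng(2)
clean = np.tile(np.linspace(0.0, 100.0, 64), (64, 1))
noisy = clean + rng.normal(0.0, 10.0, size=(64, 64))
denoised = anisotropic_diffusion(noisy)
```

Each iteration moves intensity between neighbouring pixels in proportion to their difference, scaled down where the difference is large, so noise is smoothed while edges are preserved; the paper's contribution is deciding automatically when to stop iterating.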
Application of near-infrared image processing in agricultural engineering
NASA Astrophysics Data System (ADS)
Chen, Ming-hong; Zhang, Guo-ping; Xia, Hongxing
2009-07-01
Recently, with the development of computer technology, the field of application of near-infrared (NIR) image processing has become much wider. In this paper, the technical characteristics and development of modern NIR imaging and NIR spectroscopy analysis are introduced. Applications and studies of NIR image processing in agricultural engineering in recent years are reviewed, based on the application principles and development characteristics of near-infrared imaging. NIR imaging is very useful for the nondestructive inspection of the external and internal quality of agricultural products. Near-infrared spectroscopy is also important for detecting stored-grain insects. Computer vision detection based on NIR imaging can help manage food logistics. The application of NIR imaging has promoted the quality management of agricultural products. Finally, suggestions and prospects for further research fields of NIR imaging in agricultural engineering are put forward.
Imaging with a small number of photons
Morris, Peter A.; Aspden, Reuben S.; Bell, Jessica E. C.; Boyd, Robert W.; Padgett, Miles J.
2015-01-01
Low-light-level imaging techniques have application in many diverse fields, ranging from biological sciences to security. A high-quality digital camera based on a multi-megapixel array will typically record an image by collecting of order 10^5 photons per pixel, but by how much could this photon flux be reduced? In this work we demonstrate a single-photon imaging system based on a time-gated intensified camera from which the image of an object can be inferred from very few detected photons. We show that a ghost-imaging configuration, where the image is obtained from photons that have never interacted with the object, is a useful approach for obtaining images with high signal-to-noise ratios. The use of heralded single photons ensures that the background counts can be virtually eliminated from the recorded images. By applying principles of image compression and associated image reconstruction, we obtain high-quality images of objects from raw data formed from an average of fewer than one detected photon per image pixel. PMID:25557090
High-quality JPEG compression history detection for fake uncompressed images
NASA Astrophysics Data System (ADS)
Zhang, Rong; Wang, Rang-Ding; Guo, Li-Jun; Jiang, Bao-Chuan
2017-05-01
Authenticity is one of the most important evaluation factors of images for photography competitions or journalism. Unusual compression history of an image often implies the illicit intent of its author. Our work aims at distinguishing real uncompressed images from fake uncompressed images that are saved in uncompressed formats but have been previously compressed. To detect the potential image JPEG compression, we analyze the JPEG compression artifacts based on the tetrolet covering, which corresponds to the local image geometrical structure. Since the compression can alter the structure information, the tetrolet covering indexes may be changed if a compression is performed on the test image. Such changes can provide valuable clues about the image compression history. To be specific, the test image is first compressed with different quality factors to generate a set of temporary images. Then, the test image is compared with each temporary image block-by-block to investigate whether the tetrolet covering index of each 4×4 block is different between them. The percentages of the changed tetrolet covering indexes corresponding to the quality factors (from low to high) are computed and used to form the p-curve, the local minimum of which may indicate the potential compression. Our experimental results demonstrate the advantage of our method to detect JPEG compressions of high quality, even the highest quality factors such as 98, 99, or 100 of the standard JPEG compression, from uncompressed-format images. At the same time, our detection algorithm can accurately identify the corresponding compression quality factor.
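The final detection step, locating a local minimum in the curve of changed-index percentages across recompression quality factors, can be sketched generically. The tetrolet-covering computation itself is the paper's contribution and is omitted; the percentages below are made up for illustration:

```python
def local_minima(p_curve):
    """Indices where the curve dips below both neighbours. In the paper's
    setting, a pronounced dip at quality factor q suggests the test image,
    although saved in an uncompressed format, was previously
    JPEG-compressed at quality q."""
    return [i for i in range(1, len(p_curve) - 1)
            if p_curve[i] < p_curve[i - 1] and p_curve[i] < p_curve[i + 1]]

# hypothetical percentages of changed tetrolet covering indexes for
# recompression quality factors 90..100 (index 0 -> QF 90)
p = [41.0, 39.5, 38.0, 36.2, 12.4, 35.0, 33.1, 31.0, 28.8, 26.5, 24.0]
dips = local_minima(p)   # the dip at index 4 (QF 94) flags prior compression
```

Recompressing at the original quality factor changes far fewer block indexes than recompressing at neighbouring factors, which is why the dip, rather than the overall level of the curve, carries the evidence.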
Machine Vision-Based Measurement Systems for Fruit and Vegetable Quality Control in Postharvest.
Blasco, José; Munera, Sandra; Aleixos, Nuria; Cubero, Sergio; Molto, Enrique
Individual items of any agricultural commodity differ from each other in terms of colour, shape or size. Furthermore, as they are living things, they change their quality attributes over time, thereby making the development of accurate automatic inspection machines a challenging task. Machine vision-based systems and new optical technologies make it feasible to create non-destructive control and monitoring tools for quality assessment to ensure adequate accomplishment of food standards. Such systems are much faster than any manual non-destructive examination of fruit and vegetable quality, thus allowing the whole production to be inspected with objective and repeatable criteria. Moreover, current technology makes it possible to inspect the fruit in spectral ranges beyond the sensitivity of the human eye, for instance in the ultraviolet and near-infrared regions. Machine vision-based applications require the use of multiple technologies and knowledge, ranging from those related to image acquisition (illumination, cameras, etc.) to the development of algorithms for spectral image analysis. Machine vision-based systems for inspecting fruit and vegetables are targeted towards different purposes, from in-line sorting into commercial categories to the detection of contaminants or the distribution of specific chemical compounds on the product's surface. This chapter summarises the current state of the art in these techniques, starting with systems based on colour images for the inspection of conventional colour, shape or external defects, and then goes on to consider recent developments in spectral image analysis for internal quality assessment and contaminant detection.
Quality Scalability Aware Watermarking for Visual Content.
Bhowmik, Deepayan; Abhayaratne, Charith
2016-11-01
Scalable coding-based content adaptation poses serious challenges to traditional watermarking algorithms, which do not consider the scalable coding structure and hence cannot guarantee correct watermark extraction in the media consumption chain. In this paper, we propose a novel concept of scalable blind watermarking that ensures more robust watermark extraction at various compression ratios while not affecting the visual quality of the host media. The proposed algorithm generates a scalable and robust watermarked image code-stream that allows the user to constrain embedding distortion for target content adaptations. The watermarked image code-stream consists of hierarchically nested joint distortion-robustness coding atoms. The code-stream is generated by a new wavelet-domain blind watermarking algorithm guided by a quantization-based binary tree. The code-stream can be truncated at any distortion-robustness atom to generate the watermarked image with the desired distortion-robustness requirements. A blind extractor is capable of extracting watermark data from the watermarked images. The algorithm is further extended to incorporate a bit-plane discarding-based quantization model used in scalable coding-based content adaptation, e.g., JPEG2000. This improves the robustness against the quality scalability of JPEG2000 compression. The simulation results verify the feasibility of the proposed concept, its applications, and its improved robustness against quality scalable content adaptation. Our proposed algorithm also outperforms existing methods, showing a 35% improvement. In terms of robustness to quality scalable video content adaptation using Motion JPEG2000 and wavelet-based scalable video coding, the proposed method shows a major improvement for video watermarking.
de Lasarte, Marta; Pujol, Jaume; Arjona, Montserrat; Vilaseca, Meritxell
2007-01-10
We present an optimized linear algorithm for the spatial nonuniformity correction of a CCD color camera's imaging system and the experimental methodology developed for its implementation. We assess the influence of the algorithm's variables on the quality of the correction, that is, the dark image, the base correction image, and the reference level, and the range of application of the correction using a uniform radiance field provided by an integrator cube. The best spatial nonuniformity correction is achieved by having a nonzero dark image, by using an image with a mean digital level placed in the linear response range of the camera as the base correction image and taking the mean digital level of the image as the reference digital level. The response of the CCD color camera's imaging system to the uniform radiance field shows a high level of spatial uniformity after the optimized algorithm has been applied, which also allows us to achieve a high-quality spatial nonuniformity correction of captured images under different exposure conditions.
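A linear spatial nonuniformity correction of the kind described can be sketched as follows. This is a generic flat-field formulation with toy data; the paper's exact algorithm, variable handling, and reference-level choices may differ, though the recommended reference level (the mean digital level of the image) is used:

```python
import numpy as np

def nonuniformity_correct(raw, dark, base, ref_level=None):
    """Linear spatial nonuniformity correction: subtract the dark image,
    then rescale by a reference level over the dark-subtracted base
    correction image acquired under a uniform radiance field."""
    gain = base.astype(float) - dark
    if ref_level is None:
        # use the mean digital level of the image as the reference level
        ref_level = gain.mean()
    return (raw.astype(float) - dark) * ref_level / gain

# toy example: a uniform scene viewed through a nonuniform pixel gain
rng = np.random.default_rng(3)
gain_map = 1.0 + 0.2 * rng.random((32, 32))
dark = np.full((32, 32), 5.0)
raw = dark + gain_map * 100.0       # uniform scene, nonuniform response
base = dark + gain_map * 120.0      # flat-field (integrator cube) exposure
corrected = nonuniformity_correct(raw, dark, base)
```

Because the pixel-wise gain cancels in the ratio, a uniform radiance field maps back to a spatially uniform corrected image, which is exactly the behaviour the paper evaluates.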
Optimisation of radiation dose and image quality in mobile neonatal chest radiography.
Hinojos-Armendáriz, V I; Mejía-Rosales, S J; Franco-Cabrera, M C
2018-05-01
To optimise the radiation dose and image quality for chest radiography in the neonatal intensive care unit (NICU) by increasing the mean beam energy. Two techniques for the acquisition of NICU AP chest X-ray images were compared for image quality and radiation dose. 73 images were acquired using a standard technique (56 kV, 3.2 mAs and no additional filtration) and 90 images with a new technique (62 kV, 2 mAs and 2 mm Al filtration). The entrance surface air kerma (ESAK) was measured using a phantom and compared between the techniques and against established diagnostic reference levels (DRL). Images were evaluated using seven image quality criteria independently by three radiologists. Image quality and radiation dose were compared statistically between the standard and new techniques. The maximum ESAK for the new technique was 40.20 μGy, 43.7% of the ESAK of the standard technique. Statistical evaluation demonstrated no significant differences in image quality between the two acquisition techniques. Based on the techniques and acquisition factors investigated within this study, it is possible to lower the radiation dose without any significant effects on image quality by adding filtration (2 mm Al) and increasing the tube potential. Such steps are relatively simple to undertake and as such, other departments should consider testing and implementing this dose reduction strategy within clinical practice where appropriate. Copyright © 2017 The College of Radiographers. Published by Elsevier Ltd. All rights reserved.
2013-01-01
Background: Molecular imaging using magnetic nanoparticles (MNPs)—magnetic particle imaging (MPI)—has attracted interest for the early diagnosis of cancer and cardiovascular disease. However, because a steep local magnetic field distribution is required to obtain a defined image, sophisticated hardware is needed. It is therefore desirable to realize excellent image quality even with low-performance hardware. In this study, the spatial resolution of MPI was evaluated using an image reconstruction method based on the correlation information of the magnetization signal in the time domain and by applying MNP samples made from biocompatible ferucarbotran with adjusted particle diameters. Methods: The magnetization characteristics and particle diameters of four types of MNP samples made from ferucarbotran were evaluated. A numerical analysis based on our proposed method, which calculates the image intensity from correlation information between the magnetization signal generated by the MNPs and the system function, was attempted, and the obtained image quality was compared with that of the prototype in terms of image resolution and image artifacts. Results: MNP samples obtained by adjusting ferucarbotran showed properties superior to conventional ferucarbotran samples, and numerical analysis showed that the same image quality could be obtained using a gradient magnetic field generator with 0.6 times the performance. However, because the proposed method theoretically introduces some image blurring, an algorithm to improve performance will be required. Conclusions: MNP samples obtained by adjusting ferucarbotran showed magnetizing properties superior to conventional ferucarbotran samples, and by using such samples, comparable image quality (spatial resolution) could be obtained with a lower gradient magnetic field intensity. PMID:23734917
Acquisition performance of LAPAN-A3/IPB multispectral imager in real-time mode of operation
NASA Astrophysics Data System (ADS)
Hakim, P. R.; Permala, R.; Jayani, A. P. S.
2018-05-01
The LAPAN-A3/IPB satellite was launched in June 2016, and its multispectral imager has been producing images covering Indonesia. In order to improve its support for remote sensing applications, the imager should produce images of high quality and quantity. To increase the quantity of LAPAN-A3/IPB multispectral images captured, image acquisition can be executed in real-time mode from the LAPAN ground station in Bogor when the satellite passes over western Indonesia. This research analyses the performance of LAPAN-A3/IPB multispectral imager acquisition in real-time mode, in terms of image quality and quantity, under the assumption of several on-board and ground segment limitations. Results show that in real-time operation mode, the LAPAN-A3/IPB multispectral imager can produce twice as much image coverage as in recorded mode. However, the images produced in real-time mode have slightly degraded quality due to the image compression involved. Based on the analyses performed in this research, it is recommended to use real-time acquisition mode whenever possible, except in circumstances that strictly do not allow any quality degradation of the images produced.
Impact on dose and image quality of a software-based scatter correction in mammography.
Monserrat, Teresa; Prieto, Elena; Barbés, Benigno; Pina, Luis; Elizalde, Arlette; Fernández, Belén
2018-06-01
Background In 2014, Siemens developed a new software-based scatter correction (Progressive Reconstruction Intelligently Minimizing Exposure [PRIME]), enabling grid-less digital mammography. Purpose To compare doses and image quality between PRIME (grid-less) and standard (with anti-scatter grid) modes. Material and Methods Contrast-to-noise ratio (CNR) was measured for various polymethylmethacrylate (PMMA) thicknesses, and the dose values provided by the mammography unit were recorded. CDMAM phantom images were acquired for various PMMA thicknesses and the inverse Image Quality Figure (IQF_inv) was calculated. Values of entrance surface air kerma (ESAK) and average glandular dose (AGD) were obtained from the DICOM headers for a total of 1088 pairs of clinical cases. Two experienced radiologists subjectively compared the image quality of a total of 149 pairs of clinical cases. Results CNR values were higher and doses were lower in PRIME mode for all thicknesses. IQF_inv values in PRIME mode were lower for all thicknesses except 40 mm of PMMA equivalent, for which IQF_inv was slightly greater in PRIME mode. A mean reduction of 10% in ESAK and 12% in AGD was obtained in PRIME mode with respect to standard mode. The clinical image quality of PRIME and standard acquisitions was similar in most cases (84% for the first radiologist and 67% for the second). Conclusion The use of PRIME software reduces, on average, the radiation dose to the breast without affecting image quality. This reduction is greater for thinner and denser breasts.
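The contrast-to-noise ratio used in this study is a standard physical metric. A minimal sketch of how CNR might be computed from two regions of interest (the exact ROI definitions and sizes used by the authors are not given in the abstract, so the arrays below are illustrative):

```python
import numpy as np

def cnr(signal_roi, background_roi):
    """Contrast-to-noise ratio: ROI mean difference over background noise."""
    signal_roi = np.asarray(signal_roi, dtype=float)
    background_roi = np.asarray(background_roi, dtype=float)
    return abs(signal_roi.mean() - background_roi.mean()) / background_roi.std()

rng = np.random.default_rng(0)
bg = rng.normal(100.0, 5.0, size=(64, 64))    # background ROI: mean 100, sigma 5
sig = rng.normal(120.0, 5.0, size=(64, 64))   # signal ROI: mean 120
print(round(cnr(sig, bg), 1))                 # approximately 4 for these parameters
```

A higher CNR at equal or lower dose, as reported for PRIME mode, is the usual figure of merit in such phantom comparisons.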
Scanning electron microscope image signal-to-noise ratio monitoring for micro-nanomanipulation.
Marturi, Naresh; Dembélé, Sounkalo; Piat, Nadine
2014-01-01
As an imaging system, the scanning electron microscope (SEM) plays an important role in autonomous micro-nanomanipulation applications. In the sub-micrometer range and at high scanning speeds, the images produced by the SEM are noisy and need to be evaluated or corrected beforehand. In this article, the quality of images produced by a tungsten-gun SEM is evaluated by quantifying the image signal-to-noise ratio (SNR). To determine the SNR, an efficient online monitoring method is developed based on nonlinear filtering of a single image. Using this method, the quality of images produced by a tungsten-gun SEM is monitored under different experimental conditions. The results demonstrate the method's efficiency in SNR quantification and illustrate the evolution of imaging quality in the SEM. © 2014 Wiley Periodicals, Inc.
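The abstract states that SNR is estimated from a single image via nonlinear filtering, but the exact estimator is not specified. A simplified residual-based sketch, assuming a median filter as the nonlinear signal estimator (a stand-in, not necessarily the paper's filter):

```python
import numpy as np

def estimate_snr(image, k=3):
    """Rough single-image SNR: median-filtered image as the signal estimate,
    the residual as the noise estimate (simplified stand-in for the paper's
    nonlinear-filtering method)."""
    img = np.asarray(image, dtype=float)
    pad = k // 2
    padded = np.pad(img, pad, mode="reflect")
    # naive k x k median filter via sliding windows
    win = np.lib.stride_tricks.sliding_window_view(padded, (k, k))
    signal = np.median(win, axis=(-2, -1))
    noise = img - signal
    return signal.var() / noise.var()

rng = np.random.default_rng(1)
clean = np.tile(np.linspace(0, 200, 64), (64, 1))   # smooth ramp "scene"
low_noise = clean + rng.normal(0, 10, clean.shape)  # sigma 10
high_noise = clean + rng.normal(0, 40, clean.shape) # sigma 40: faster scan, say
print(estimate_snr(low_noise) > estimate_snr(high_noise))  # True
```

As in the paper's monitoring scenario, the estimate decreases as acquisition noise grows, so it can be tracked online across scan settings.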
A quality quantitative method of silicon direct bonding based on wavelet image analysis
NASA Astrophysics Data System (ADS)
Tan, Xiao; Tao, Zhi; Li, Haiwang; Xu, Tiantong; Yu, Mingxing
2018-04-01
The rapid development of MEMS (micro-electro-mechanical systems) has received significant attention from researchers in various fields and subjects. In particular, the MEMS fabrication process is elaborate and, as such, has been the focus of extensive research inquiries. However, in MEMS fabrication, component bonding is difficult to achieve and requires a complex approach. Thus, improvements in bonding quality are relatively important objectives. A higher quality bond can only be achieved with improved measurement and testing capabilities. The traditional testing methods mainly include infrared testing, tensile testing, and strength testing, but using these methods to measure bond quality often results in low efficiency or destructive analysis. Therefore, this paper focuses on the development of a precise, nondestructive visual testing method based on wavelet image analysis that is shown to be highly effective in practice. The process of wavelet image analysis includes wavelet image denoising, wavelet image enhancement, and contrast enhancement, and as an end result, can display an image with low background noise. In addition, because the wavelet analysis software was developed with MATLAB, it can reveal the bonding boundaries and bonding rates to precisely indicate the bond quality at all locations on the wafer. This work also presents a set of orthogonal experiments that consist of three prebonding factors, the prebonding temperature, the positive pressure value and the prebonding time, which are used to analyze the prebonding quality. This method was used to quantify the quality of silicon-to-silicon wafer bonding, yielding standard treatment quantities that could be practical for large-scale use.
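The abstract names the wavelet pipeline (denoising, enhancement, contrast) without detailing the wavelet family or thresholding rule. As an illustrative sketch, a single-level 2-D Haar decomposition with soft-thresholding of the detail bands (the paper's MATLAB implementation may differ):

```python
import numpy as np

def haar2(img):
    """Single-level 2-D Haar transform: approximation + 3 detail bands."""
    a = (img[0::2, 0::2] + img[0::2, 1::2] + img[1::2, 0::2] + img[1::2, 1::2]) / 4
    h = (img[0::2, 0::2] - img[0::2, 1::2] + img[1::2, 0::2] - img[1::2, 1::2]) / 4
    v = (img[0::2, 0::2] + img[0::2, 1::2] - img[1::2, 0::2] - img[1::2, 1::2]) / 4
    d = (img[0::2, 0::2] - img[0::2, 1::2] - img[1::2, 0::2] + img[1::2, 1::2]) / 4
    return a, h, v, d

def ihaar2(a, h, v, d):
    """Exact inverse of haar2."""
    out = np.empty((a.shape[0] * 2, a.shape[1] * 2))
    out[0::2, 0::2] = a + h + v + d
    out[0::2, 1::2] = a - h + v - d
    out[1::2, 0::2] = a + h - v - d
    out[1::2, 1::2] = a - h - v + d
    return out

def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def wavelet_denoise(img, t=5.0):
    """Zero small detail coefficients, keep the approximation band."""
    a, h, v, d = haar2(np.asarray(img, dtype=float))
    return ihaar2(a, soft(h, t), soft(v, t), soft(d, t))

rng = np.random.default_rng(2)
clean = np.full((16, 16), 100.0)                # uniform bonded region
noisy = clean + rng.normal(0, 10, clean.shape)
denoised = wavelet_denoise(noisy, t=15.0)
print(np.mean((denoised - clean) ** 2) < np.mean((noisy - clean) ** 2))  # True
```

Thresholding the detail bands suppresses background noise while the approximation band preserves the bonding boundaries the method needs to reveal.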
NASA Astrophysics Data System (ADS)
McCracken, Katherine E.; Angus, Scott V.; Reynolds, Kelly A.; Yoon, Jeong-Yeol
2016-06-01
Smartphone image-based sensing of microfluidic paper analytical devices (μPADs) offers low-cost and mobile evaluation of water quality. However, consistent quantification is a challenge due to variable environmental, paper, and lighting conditions, especially across large multi-target μPADs. Compensations must be made for variations between images to achieve reproducible results without a separate lighting enclosure. We thus developed a simple method using triple-reference point normalization and a fast-Fourier transform (FFT)-based pre-processing scheme to quantify consistent reflected light intensity signals under variable lighting and channel conditions. This technique was evaluated using various light sources, lighting angles, imaging backgrounds, and imaging heights. Further testing evaluated its handling of absorbance, quenching, and relative scattering intensity measurements from assays detecting four water contaminants - Cr(VI), total chlorine, caffeine, and E. coli K12 - at similar wavelengths using the green channel of RGB images. Between assays, this algorithm reduced error from μPAD surface inconsistencies and cross-image lighting gradients. Although the algorithm could not completely remove the anomalies arising from point shadows within channels or some non-uniform background reflections, it still afforded order-of-magnitude quantification and stable assay specificity under these conditions, offering one route toward improving smartphone quantification of μPAD assays for in-field water quality monitoring.
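The core idea of reference-point normalization can be sketched simply: fit a mapping between the measured intensities of known reference patches and their nominal values, then apply it to the assay readings. The patch values and the linear (rather than piecewise) fit below are illustrative assumptions, not the paper's exact scheme:

```python
import numpy as np

def normalize_intensity(measured, ref_measured, ref_true):
    """Map measured intensities into a reference frame via a least-squares
    linear fit through reference patches (here three: white/gray/black)."""
    slope, intercept = np.polyfit(ref_measured, ref_true, 1)
    return slope * np.asarray(measured, dtype=float) + intercept

# Under dim lighting the same patches read low; the fit undoes the shift.
ref_true = np.array([255.0, 128.0, 0.0])   # nominal white / gray / black
ref_dim = np.array([200.0, 100.0, 0.0])    # same patches imaged in dim light
print(round(normalize_intensity(150.0, ref_dim, ref_true), 1))
```

Because the correction is computed per image, assay intensities from photos taken under different light sources and angles land on a common scale before quantification.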
Practical Considerations for Optic Nerve Estimation in Telemedicine
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karnowski, Thomas Paul; Aykac, Deniz; Chaum, Edward
The projected increase in diabetes in the United States and worldwide has created a need for broad-based, inexpensive screening for diabetic retinopathy (DR), an eye disease which can lead to vision impairment. A telemedicine network with retina cameras and automated quality control, physiological feature location, and lesion/anomaly detection is a low-cost way of achieving broad-based screening. In this work we report on the effect of quality estimation on an optic nerve (ON) detection method with a confidence metric. We report on an improvement of the fusion technique using a data set from an ophthalmologist's practice, then show the results of the method as a function of image quality on a set of images from an online telemedicine network collected in spring 2009 and another broad-based screening program. We show that the fusion method, combined with quality estimation processing, can improve detection performance and also provide a method for utilizing a physician-in-the-loop for images that may exceed the capabilities of automated processing.
Trigram-based algorithms for OCR result correction
NASA Astrophysics Data System (ADS)
Bulatov, Konstantin; Manzhikov, Temudzhin; Slavin, Oleg; Faradjev, Igor; Janiszewski, Igor
2017-03-01
In this paper we consider the task of improving optical character recognition (OCR) results for document fields on low- and average-quality images using N-gram models. Cyrillic fields of the Russian Federation internal passport are analyzed as an example. Two approaches are presented: the first is based on the hypothesis that a symbol depends on its two adjacent symbols, and the second is based on calculation of marginal distributions and Bayesian network computation. A comparison of the algorithms and experimental results within a real document OCR system are presented; it is shown that document field OCR accuracy can be improved by more than 6% for low-quality images.
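The first approach, rescoring OCR hypotheses with a character trigram model, can be sketched with a toy example. The trigram table, floor probability, and score combination below are illustrative assumptions, not the paper's trained model:

```python
# Toy character-trigram log-probabilities ("^"/"$" mark word boundaries).
# Values are invented for illustration, not from the paper's corpus.
TRIGRAMS = {"^th": -0.5, "the": -0.3, "he$": -0.4,
            "^tn": -6.0, "tne": -6.0, "ne$": -2.0}
FLOOR = -8.0  # log-probability assigned to unseen trigrams

def trigram_score(word):
    padded = "^" + word + "$"
    return sum(TRIGRAMS.get(padded[i:i + 3], FLOOR)
               for i in range(len(padded) - 2))

def correct(candidates):
    """candidates: list of (string, ocr_log_confidence).
    Pick the best combined OCR + language-model score."""
    return max(candidates, key=lambda c: c[1] + trigram_score(c[0]))[0]

# OCR slightly prefers the garbled "tne"; the trigram model overrules it.
print(correct([("tne", -0.1), ("the", -0.9)]))   # -> "the"
```

A full field-level corrector would run this jointly over all character positions (e.g., with a beam or Viterbi search), which is where the paper's marginal-distribution variant comes in.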
DOE Office of Scientific and Technical Information (OSTI.GOV)
Niu, S; Zhang, Y; Ma, J
Purpose: To investigate iterative reconstruction via prior image constrained total generalized variation (PICTGV) for spectral computed tomography (CT) using fewer projections while achieving greater image quality. Methods: The proposed PICTGV method is formulated as an optimization problem that balances data fidelity and the prior image constrained total generalized variation of reconstructed images in one framework. The PICTGV method is based on structure correlations among images in the energy domain and uses high-quality images to guide the reconstruction of energy-specific images. In the PICTGV method, the high-quality image is reconstructed from all detector-collected X-ray signals and is referred to as the broad-spectrum image. Distinct from existing reconstruction methods applied to images with a first-order derivative, a higher-order derivative of the images is incorporated into the PICTGV method. An alternating optimization algorithm is used to minimize the PICTGV objective function. We evaluate the performance of PICTGV in suppressing noise and artifacts using phantom studies and compare the method with the conventional filtered back-projection method as well as a TGV-based method without a prior image. Results: On the digital phantom, the proposed method outperforms the existing TGV method in terms of noise reduction, artifact suppression, and edge detail preservation. Compared to that obtained by the TGV-based method without a prior image, the relative root mean square error in the images reconstructed by the proposed method is reduced by over 20%. Conclusion: The authors propose an iterative reconstruction via prior image constrained total generalized variation for spectral CT. We have also developed an alternating optimization algorithm and numerically demonstrated the merits of our approach. Results show that the proposed PICTGV method outperforms the TGV method for spectral CT.
De Crop, An; Bacher, Klaus; Van Hoof, Tom; Smeets, Peter V; Smet, Barbara S; Vergauwen, Merel; Kiendys, Urszula; Duyck, Philippe; Verstraete, Koenraad; D'Herde, Katharina; Thierens, Hubert
2012-01-01
To determine the correlation between the clinical and physical image quality of chest images by using cadavers embalmed with the Thiel technique and a contrast-detail phantom. The use of human cadavers fulfilled the requirements of the institutional ethics committee. Clinical image quality was assessed by using three human cadavers embalmed with the Thiel technique, which results in excellent preservation of the flexibility and plasticity of organs and tissues. As a result, lungs can be inflated during image acquisition to simulate the pulmonary anatomy seen on a chest radiograph. Both contrast-detail phantom images and chest images of the Thiel-embalmed bodies were acquired with an amorphous silicon flat-panel detector. Tube voltage (70, 81, 90, 100, 113, 125 kVp), copper filtration (0.1, 0.2, 0.3 mm Cu), and exposure settings (200, 280, 400, 560, 800 speed class) were altered to simulate different quality levels. Four experienced radiologists assessed the image quality by using a visual grading analysis (VGA) technique based on European Quality Criteria for Chest Radiology. The phantom images were scored manually and automatically with use of dedicated software, both resulting in an inverse image quality figure (IQF). Spearman rank correlations between inverse IQFs and VGA scores were calculated. A statistically significant correlation (r = 0.80, P < .01) was observed between the VGA scores and the manually obtained inverse IQFs. Comparison of the VGA scores and the automated evaluated phantom images showed an even better correlation (r = 0.92, P < .001). The results support the value of contrast-detail phantom analysis for evaluating clinical image quality in chest radiography. © RSNA, 2011.
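The inverse image quality figure used here summarizes a contrast-detail phantom reading in one number. A sketch using the common CDMAM-style convention, 100 divided by the sum over contrast levels of (contrast × smallest detected detail diameter); the contrast and threshold values below are illustrative:

```python
def iqf_inv(threshold_pairs):
    """Inverse image quality figure for contrast-detail phantoms, in the
    commonly used form: 100 / sum_i (contrast_i * min detected diameter_i).
    Higher values indicate better image quality."""
    total = sum(contrast * diameter for contrast, diameter in threshold_pairs)
    return 100.0 / total

# (contrast, smallest detected diameter in mm) per phantom row -- toy values
good = [(0.5, 0.1), (1.0, 0.08), (2.0, 0.06)]   # small thresholds: good image
bad = [(0.5, 0.3), (1.0, 0.2), (2.0, 0.1)]      # larger thresholds: worse image
print(iqf_inv(good) > iqf_inv(bad))              # True
```

Correlating this physical score with VGA readings on the Thiel-embalmed bodies is exactly the Spearman analysis reported in the abstract.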
Face detection on distorted images using perceptual quality-aware features
NASA Astrophysics Data System (ADS)
Gunasekar, Suriya; Ghosh, Joydeep; Bovik, Alan C.
2014-02-01
We quantify the degradation in performance of a popular and effective face detector when human-perceived image quality is degraded by distortions due to additive white Gaussian noise, Gaussian blur or JPEG compression. It is observed that, within a certain range of perceived image quality, a modest increase in image quality can drastically improve face detection performance. These results can be used to guide resource or bandwidth allocation in a communication/delivery system that is associated with face detection tasks. A new face detector based on QualHOG features is also proposed that augments face-indicative HOG features with perceptual quality-aware spatial Natural Scene Statistics (NSS) features, yielding improved tolerance against image distortions. The new detector provides statistically significant improvements over a strong baseline on a large database of face images representing a wide range of distortions. To facilitate this study, we created a new Distorted Face Database, containing face and non-face patches from images impaired by a variety of common distortion types and levels. This new dataset is available for download and further experimentation at www.ideal.ece.utexas.edu/˜suriya/DFD/.
NASA Astrophysics Data System (ADS)
Yang, Feng; Zhang, Xiaofang; Huang, Yu; Hao, Weiwei; Guo, Baiwei
2012-11-01
Satellite platform vibration degrades image quality, so it is necessary to study its influence on image quality. The forms of satellite platform vibration are linear vibration, sinusoidal vibration, and random vibration. Based on Matlab and Zemax, a simulation system has been developed for simulating the impact of satellite platform vibration on image quality. Dynamic Data Exchange is used for communication between Matlab and Zemax. The sinusoidal vibration data are produced by a sinusoidal curve with specific amplitude and frequency. The random vibration data are obtained by combining sinusoidal signals of 10 Hz, 100 Hz, and 200 Hz frequency and 100, 12, and 1.9 amplitude with zero-mean white noise. The satellite platform vibration data produced by Matlab are added to the optical system, and its point spread function (PSF) is obtained with Zemax. A blurred image is obtained by convolving the PSF with the original image. The sharpness of the original image and the blurred image is evaluated using the average gradient of the image gray values. The impact of sinusoidal and random vibration in six degrees of freedom on image quality is simulated. The simulation results reveal that decenter in the X, Y, and Z directions and tilt about the Z direction have little effect on image quality, while tilt about the X and Y directions seriously degrades image quality. Thus, it can be concluded that correcting the error of satellite platform vibration by FSM is a viable and effective way.
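The two numerical steps of this pipeline, blurring by PSF convolution and scoring sharpness by the average gray-level gradient, can be sketched directly. The Gaussian PSF below is a stand-in for the vibration-derived PSF that Zemax would supply:

```python
import numpy as np

def convolve2d(img, psf):
    """Direct 2-D 'same'-size convolution of an image with a PSF kernel."""
    kh, kw = psf.shape
    pad = ((kh // 2, kh - 1 - kh // 2), (kw // 2, kw - 1 - kw // 2))
    padded = np.pad(img, pad, mode="edge")
    win = np.lib.stride_tricks.sliding_window_view(padded, psf.shape)
    return np.einsum("ijkl,kl->ij", win, psf[::-1, ::-1])

def average_gradient(img):
    """Mean gray-level gradient magnitude: the sharpness score in the abstract."""
    gx = np.diff(img, axis=1)[:-1, :]
    gy = np.diff(img, axis=0)[:, :-1]
    return np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0))

# Hypothetical Gaussian PSF standing in for the vibration-derived one.
x = np.arange(-2, 3)
g = np.exp(-x ** 2 / 2.0)
psf = np.outer(g, g)
psf /= psf.sum()

rng = np.random.default_rng(4)
img = rng.uniform(0, 255, (32, 32))            # textured test scene
blurred = convolve2d(img, psf)
print(average_gradient(blurred) < average_gradient(img))  # True: blur lowers the score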
Partovi, Sasan; Kohan, Andres; Gaeta, Chiara; Rubbert, Christian; Vercher-Conejero, Jose L; Jones, Robert S; O'Donnell, James K; Wojtylak, Patrick; Faulhaber, Peter
2013-01-01
The purpose of this study is to systematically evaluate the usefulness of Positron emission tomography/Magnetic resonance imaging (PET/MRI) images in a clinical setting by assessing the image quality of Positron emission tomography (PET) images using a three-segment MR attenuation correction (MRAC) versus the standard CT attenuation correction (CTAC). We prospectively studied 48 patients who had their clinically scheduled FDG-PET/CT followed by an FDG-PET/MRI. Three nuclear radiologists evaluated the image quality of CTAC vs. MRAC using a Likert scale (five-point scale). A two-sided, paired t-test was performed for comparison purposes. The image quality was further assessed by categorizing it as acceptable (equal to 4 and 5 on the five-point Likert scale) or unacceptable (equal to 1, 2, and 3 on the five-point Likert scale) quality using the McNemar test. When assessing the image quality using the Likert scale, one reader observed a significant difference between CTAC and MRAC (p=0.0015), whereas the other readers did not observe a difference (p=0.8924 and p=0.1880, respectively). When performing the grouping analysis, no significant difference was found between CTAC vs. MRAC for any of the readers (p=0.6137 for reader 1, p=1 for reader 2, and p=0.8137 for reader 3). All three readers more often reported artifacts on the MRAC images than on the CTAC images. There was no clinically significant difference in quality between PET images generated on a PET/MRI system and those from a Positron emission tomography/Computed tomography (PET/CT) system. PET images using the automatic three-segment MR attenuation method provided diagnostic image quality. However, future research regarding the image quality obtained using different MR attenuation-based methods is warranted before PET/MRI can be used clinically.
Design of Restoration Method Based on Compressed Sensing and TwIST Algorithm
NASA Astrophysics Data System (ADS)
Zhang, Fei; Piao, Yan
2018-04-01
In order to effectively improve the subjective and objective quality of degraded images at low sampling rates, while saving storage space and reducing computational complexity, this paper proposes a joint restoration algorithm combining compressed sensing and two-step iterative shrinkage/thresholding (TwIST). The algorithm applies the TwIST algorithm, used in image restoration, to compressed sensing theory. A small amount of sparse high-frequency information is obtained in the frequency domain, and the TwIST algorithm based on compressed sensing theory is then used to accurately reconstruct the high-frequency image. The experimental results show that the proposed algorithm achieves better subjective visual quality and objective quality of degraded images while accurately restoring them.
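TwIST accelerates the basic iterative shrinkage/thresholding (IST) update with a two-step recursion. A minimal IST sketch of the core step on a toy compressed-sensing problem; the matrix, regularization weight, and iteration count are illustrative, not from the paper:

```python
import numpy as np

def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ist(A, y, lam=0.05, n_iter=500):
    """Iterative shrinkage/thresholding for min ||y - Ax||^2 / 2 + lam * ||x||_1.
    (TwIST accelerates exactly this update with a two-step recursion.)"""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft(x + A.T @ (y - A @ x) / L, lam / L)
    return x

rng = np.random.default_rng(3)
A = rng.normal(size=(40, 100)) / np.sqrt(40)       # compressed-sensing matrix
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [1.0, -1.5, 2.0]             # sparse "high-frequency" signal
y = A @ x_true                                     # 40 measurements of 100 unknowns
x_hat = ist(A, y)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))  # small relative error
```

The sparse vector is recovered from fewer measurements than unknowns, which is the compressed-sensing property the restoration scheme relies on.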
Cone-beam x-ray luminescence computed tomography based on x-ray absorption dosage.
Liu, Tianshuai; Rong, Junyan; Gao, Peng; Zhang, Wenli; Liu, Wenlei; Zhang, Yuanke; Lu, Hongbing
2018-02-01
With the advances of x-ray excitable nanophosphors, x-ray luminescence computed tomography (XLCT) has become a promising hybrid imaging modality. In particular, a cone-beam XLCT (CB-XLCT) system has demonstrated its potential in in vivo imaging with the advantage of fast imaging speed over other XLCT systems. Currently, the imaging models of most XLCT systems assume that nanophosphors emit light based on the intensity distribution of x-ray within the object, not completely reflecting the nature of the x-ray excitation process. To improve the imaging quality of CB-XLCT, an imaging model that adopts an excitation model of nanophosphors based on x-ray absorption dosage is proposed in this study. To solve the ill-posed inverse problem, a reconstruction algorithm that combines the adaptive Tikhonov regularization method with the imaging model is implemented for CB-XLCT reconstruction. Numerical simulations and phantom experiments indicate that compared with the traditional forward model based on x-ray intensity, the proposed dose-based model could improve the image quality of CB-XLCT significantly in terms of target shape, localization accuracy, and image contrast. In addition, the proposed model behaves better in distinguishing closer targets, demonstrating its advantage in improving spatial resolution. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
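The reconstruction combines the dose-based forward model with adaptive Tikhonov regularization. The Tikhonov building block itself is standard and can be sketched in closed form; the fixed regularization weight below is an illustrative simplification of the paper's adaptive choice:

```python
import numpy as np

def tikhonov_solve(A, b, lam=0.1):
    """Tikhonov-regularized least squares:
    x = argmin ||Ax - b||^2 + lam * ||x||^2 = (A^T A + lam I)^{-1} A^T b.
    (The paper adapts lam; here it is fixed for illustration.)"""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# Toy overdetermined system standing in for the CB-XLCT imaging model.
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
x_true = np.array([1.0, 2.0])
b = A @ x_true
print(tikhonov_solve(A, b, lam=1e-8))   # close to x_true for tiny lam
```

Larger lam trades fidelity for stability, which is what makes the ill-posed XLCT inverse problem tractable.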
Complex adaptation-based LDR image rendering for 3D image reconstruction
NASA Astrophysics Data System (ADS)
Lee, Sung-Hak; Kwon, Hyuk-Ju; Sohng, Kyu-Ik
2014-07-01
A low-dynamic tone-compression technique is developed for realistic image rendering that can make three-dimensional (3D) images similar to realistic scenes by overcoming brightness dimming in the 3D display mode. The 3D surround provides varying conditions for image quality, illuminant adaptation, contrast, gamma, color, sharpness, and so on. In general, gain/offset adjustment, gamma compensation, and histogram equalization have performed well in contrast compression; however, as a result of signal saturation and clipping effects, image details are removed and information is lost on bright and dark areas. Thus, an enhanced image mapping technique is proposed based on space-varying image compression. The performance of contrast compression is enhanced with complex adaptation in a 3D viewing surround combining global and local adaptation. Evaluating local image rendering in view of tone and color expression, noise reduction, and edge compensation confirms that the proposed 3D image-mapping model can compensate for the loss of image quality in the 3D mode.
Probabilistic sparse matching for robust 3D/3D fusion in minimally invasive surgery.
Neumann, Dominik; Grbic, Sasa; John, Matthias; Navab, Nassir; Hornegger, Joachim; Ionasec, Razvan
2015-01-01
Classical surgery is being overtaken by minimally invasive and transcatheter procedures. As there is no direct view or access to the affected anatomy, advanced imaging techniques such as 3D C-arm computed tomography (CT) and C-arm fluoroscopy are routinely used in clinical practice for intraoperative guidance. However, due to constraints regarding acquisition time and device configuration, intraoperative modalities have limited soft tissue image quality and reliable assessment of the cardiac anatomy typically requires contrast agent, which is harmful to the patient and requires complex acquisition protocols. We propose a probabilistic sparse matching approach to fuse high-quality preoperative CT images and nongated, noncontrast intraoperative C-arm CT images by utilizing robust machine learning and numerical optimization techniques. Thus, high-quality patient-specific models can be extracted from the preoperative CT and mapped to the intraoperative imaging environment to guide minimally invasive procedures. Extensive quantitative experiments on 95 clinical datasets demonstrate that our model-based fusion approach has an average execution time of 1.56 s, while the accuracy of 5.48 mm between the anchor anatomy in both images lies within expert user confidence intervals. In direct comparison with image-to-image registration based on an open-source state-of-the-art medical imaging library and a recently proposed quasi-global, knowledge-driven multi-modal fusion approach for thoracic-abdominal images, our model-based method exhibits superior performance in terms of registration accuracy and robustness with respect to both target anatomy and anchor anatomy alignment errors.
X-ray imaging with amorphous silicon active matrix flat-panel imagers (AMFPIs)
NASA Astrophysics Data System (ADS)
El-Mohri, Youcef; Antonuk, Larry E.; Jee, Kyung-Wook; Maolinbay, Manat; Rong, Xiujiang; Siewerdsen, Jeffrey H.; Verma, Manav; Zhao, Qihua
1997-07-01
Recent advances in thin-film electronics technology have opened the way for the use of flat-panel imagers in a number of medical imaging applications. These novel imagers offer real-time digital readout capabilities (~30 frames per second), radiation hardness (>10⁶ cGy), large area (30 × 40 cm²) and compactness (~1 cm). Such qualities make them strong candidates for the replacement of conventional x-ray imaging technologies such as film-screen and image-intensifier systems. In this report, the qualities and potential of amorphous silicon based active matrix flat-panel imagers are outlined for various applications such as radiation therapy, radiography, fluoroscopy and mammography.
Bremicker, K; Gosch, D; Kahn, T; Borte, G
2015-11-01
Chest radiography is the most common diagnostic modality in intensive care units, with new mobile flat-panels gaining more attention and availability in addition to the already used storage phosphor plates. Purpose: comparison of the image quality of mobile flat-panels and a needle-image plate storage phosphor system for bedside chest radiography. Retrospective analysis of 84 bedside chest radiographs of 42 intensive care patients (20 women, 22 men, average age: 65 years). All images were acquired during daily routine. For each patient, two images were analyzed, one from each system mentioned above. Two blinded radiologists evaluated the image quality based on ten criteria (e.g., diaphragm, heart contour, tracheal bifurcation, thoracic spine, lung structure, consolidations, foreign material, and overall impression) using a 5-point visibility scale (1 = excellent, 5 = not usable). There was no significant difference between the image quality of the two systems (p < 0.05). Overall some anatomical structures such as the diaphragm, heart, pulmonary consolidations and foreign material were considered of higher diagnostic quality compared to others, e.g., tracheal bifurcation and thoracic spine. Mobile flat-panels achieve image quality as good as that of needle-image plate storage phosphor systems. In addition, they allow immediate evaluation of the image quality, but in return are much more expensive in terms of purchase and maintenance.
Chen, Ying; Liu, Yuanning; Zhu, Xiaodong; Chen, Huiling; He, Fei; Pang, Yutong
2014-01-01
For building a new iris template, this paper proposes a strategy to fuse different portions of iris based on machine learning method to evaluate local quality of iris. There are three novelties compared to previous work. Firstly, the normalized segmented iris is divided into multitracks and then each track is estimated individually to analyze the recognition accuracy rate (RAR). Secondly, six local quality evaluation parameters are adopted to analyze texture information of each track. Besides, particle swarm optimization (PSO) is employed to get the weights of these evaluation parameters and corresponding weighted coefficients of different tracks. Finally, all tracks' information is fused according to the weights of different tracks. The experimental results based on subsets of three public and one private iris image databases demonstrate three contributions of this paper. (1) Our experimental results prove that partial iris image cannot completely replace the entire iris image for iris recognition system in several ways. (2) The proposed quality evaluation algorithm is a self-adaptive algorithm, and it can automatically optimize the parameters according to iris image samples' own characteristics. (3) Our feature information fusion strategy can effectively improve the performance of iris recognition system.
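The final fusion step, combining per-track match information with quality-derived weights, can be sketched as a weighted score fusion. In the paper the weights come from PSO over the six local-quality parameters; the fixed weights and scores below are toy values:

```python
import numpy as np

def fuse_track_scores(track_scores, weights):
    """Weighted fusion of per-track iris match scores (weights normalized to
    sum to 1; in the paper they would be optimized by PSO, here fixed)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return float(np.dot(w, np.asarray(track_scores, dtype=float)))

# Toy scenario: track 2 is high quality (well-separated genuine/impostor
# scores), track 0 is unreliable; up-weighting track 2 widens the margin.
genuine = [0.55, 0.60, 0.95]
impostor = [0.50, 0.40, 0.10]
flat = fuse_track_scores(genuine, [1, 1, 1]) - fuse_track_scores(impostor, [1, 1, 1])
tuned = fuse_track_scores(genuine, [1, 2, 6]) - fuse_track_scores(impostor, [1, 2, 6])
print(tuned > flat)   # True: quality-aware weights improve class separation
```

This is the sense in which the abstract's fusion strategy "effectively improves the performance" relative to treating all tracks equally.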
DOE Office of Scientific and Technical Information (OSTI.GOV)
Solomon, Justin, E-mail: justin.solomon@duke.edu; Wilson, Joshua; Samei, Ehsan
2015-08-15
Purpose: The purpose of this work was to assess the inherent image quality characteristics of a new multidetector computed tomography system in terms of noise, resolution, and detectability index as a function of image acquisition and reconstruction for a range of clinically relevant settings. Methods: A multisized image quality phantom (37, 30, 23, 18.5, and 12 cm physical diameter) was imaged on a SOMATOM Force scanner (Siemens Medical Solutions) under variable dose, kVp, and tube current modulation settings. Images were reconstructed with filtered back projection (FBP) and with advanced modeled iterative reconstruction (ADMIRE) with iterative strengths of 3, 4, and 5. Image quality was assessed in terms of the noise power spectrum (NPS), task transfer function (TTF), and detectability index for a range of detection tasks (contrasts of approximately 45, 90, 300, −900, and 1000 HU, and 2–20 mm diameter) based on a non-prewhitening matched filter model observer with eye filter. Results: Image noise magnitude decreased with decreasing phantom size, increasing dose, and increasing ADMIRE strength, offering up to 64% noise reduction relative to FBP. Noise texture in terms of the NPS was similar between FBP and ADMIRE (<5% shift in peak frequency). The resolution, based on the TTF, improved with increased ADMIRE strength by an average of 15% in the TTF 50% frequency for ADMIRE-5. The detectability index increased with increasing dose and ADMIRE strength by an average of 55%, 90%, and 163% for ADMIRE 3, 4, and 5, respectively. Assessing the impact of mA modulation for a fixed average dose over the length of the phantom, detectability was up to 49% lower in smaller phantom sections and up to 26% higher in larger phantom sections for the modulated scan compared to a fixed tube current scan. Overall, the detectability exhibited less variability with phantom size for modulated scans compared to fixed tube current scans.
Conclusions: Image quality increased with increasing dose and decreasing phantom size. The CT system exhibited nonlinear noise and resolution properties, especially at very low doses, large phantom sizes, and for low-contrast objects. Objective image quality metrics generally increased with increasing dose and ADMIRE strength, and with decreasing phantom size. The ADMIRE algorithm could offer comparable image quality at reduced doses or improved image quality at the same dose. The use of tube current modulation resulted in more consistent image quality with changing phantom size.
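The non-prewhitening matched-filter observer with eye filter (NPWE) ties the measured NPS and TTF together into a single detectability index. A simplified 1-D radial-frequency sketch, with toy task, TTF, eye-filter, and NPS curves (the study uses 2-D measured quantities):

```python
import numpy as np

def detectability_npwe(task_w, ttf, nps, eye, df):
    """NPWE detectability in a simplified 1-D radial form:
    d'^2 = [sum W^2 TTF^2 E^2 df]^2 / [sum W^2 TTF^2 E^4 NPS df]."""
    num = (np.sum(task_w ** 2 * ttf ** 2 * eye ** 2) * df) ** 2
    den = np.sum(task_w ** 2 * ttf ** 2 * eye ** 4 * nps) * df
    return np.sqrt(num / den)

f = np.linspace(0.01, 1.5, 150)                  # spatial frequency (toy units)
df = f[1] - f[0]
task = 50.0 * np.exp(-(np.pi * 3.0 * f) ** 2)    # low-contrast disk task (toy)
ttf = np.exp(-(f / 0.8) ** 2)                    # system resolution (toy)
eye = f * np.exp(-1.5 * f)                       # band-pass eye filter (toy)
nps_low = 5.0 * np.ones_like(f)                  # lower-noise protocol
nps_high = 20.0 * np.ones_like(f)                # higher-noise protocol
print(detectability_npwe(task, ttf, nps_low, eye, df) >
      detectability_npwe(task, ttf, nps_high, eye, df))   # True: less noise, higher d'
```

With white noise, quadrupling the NPS halves d', mirroring the dose dependence of detectability reported in the abstract.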
Application of Sensor Fusion to Improve Uav Image Classification
NASA Astrophysics Data System (ADS)
Jabari, S.; Fathollahi, F.; Zhang, Y.
2017-08-01
Image classification is one of the most important tasks of remote sensing projects, including those based on UAV images. Improving the quality of UAV images directly affects the classification results and can save a huge amount of time and effort in this area. In this study, we show that sensor fusion can improve image quality, which in turn increases the accuracy of image classification. Here, we tested two sensor fusion configurations by using a panchromatic (Pan) camera along with either a colour camera or a four-band multi-spectral (MS) camera. We use the Pan camera to benefit from its higher sensitivity and the colour or MS camera to benefit from its spectral properties. The resulting images are then compared to the ones acquired by a high resolution single Bayer-pattern colour camera (here referred to as HRC). We assessed the quality of the output images by performing image classification tests. The outputs prove that the proposed sensor fusion configurations can achieve higher accuracies compared to the images of the single Bayer-pattern colour camera. Therefore, incorporating a Pan camera on-board in UAV missions and performing image fusion can help achieve higher quality images and accordingly higher accuracy classification results.
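The abstract does not specify which pan-sharpening algorithm fuses the Pan and colour/MS bands. A common baseline, the Brovey transform, serves as a sketch of the idea: scale each (upsampled, co-registered) multispectral band so the fused intensity matches the sharper Pan band:

```python
import numpy as np

def brovey_pansharpen(ms, pan, eps=1e-6):
    """Brovey-transform pan-sharpening (a common baseline, not necessarily the
    paper's method): scale each MS band by pan / MS intensity.
    ms: (bands, H, W) upsampled multispectral; pan: (H, W) panchromatic."""
    intensity = ms.mean(axis=0)
    return ms * (pan / (intensity + eps))

rng = np.random.default_rng(5)
pan = rng.uniform(50, 200, (8, 8))                  # high-resolution Pan band
ms = np.stack([0.9 * pan, 1.0 * pan, 1.1 * pan])    # toy co-registered MS bands
fused = brovey_pansharpen(ms, pan)
print(np.allclose(fused.mean(axis=0), pan, atol=1e-3))   # fused intensity == Pan
```

The fused product keeps the MS band ratios (spectral information) while inheriting the spatial detail of the Pan channel, which is what drives the classification gains reported.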
Content-based quality evaluation of color images: overview and proposals
NASA Astrophysics Data System (ADS)
Tremeau, Alain; Richard, Noel; Colantoni, Philippe; Fernandez-Maloigne, Christine
2003-12-01
The automatic prediction of perceived quality from image data in general, and the assessment of particular image characteristics or attributes that may need improvement in particular, is becoming an increasingly important part of intelligent imaging systems. The purpose of this paper is to propose that the color imaging community develop a software package, available on the internet, to help users select the approach best suited to a given application. The ultimate goal of this project is to propose, and then implement, an open and unified color imaging system that sets up a favourable context for the evaluation and analysis of color imaging processes. Many different methods for measuring the performance of a process have been proposed by different researchers. In this paper, we discuss the advantages and shortcomings of the main analysis criteria and performance measures currently in use. The aim is not to establish a harsh competition between algorithms or processes, but rather to test and compare the efficiency of methodologies, first to highlight the strengths and weaknesses of a given algorithm or methodology on a given image type, and second to make these results publicly available. This paper focuses on two important unsolved problems. Why is it so difficult to select a color space that gives better results than another? Why is it so difficult to select an image quality metric that agrees better with the judgment of the human visual system than another? Several methods used either in color imaging or in image quality assessment are discussed. Proposals for content-based image measures and means of developing a standard test suite are then presented. We advocate an evaluation protocol based on an automated procedure; this is the ultimate goal of our proposal.
Enhancement of low light level images using color-plus-mono dual camera.
Jung, Yong Ju
2017-05-15
In digital photography, improving image quality in low-light shooting is one of users' key needs. Unfortunately, conventional smartphone cameras that use a single, small image sensor cannot provide satisfactory quality in low light level images. A color-plus-mono dual camera, consisting of two horizontally separated image sensors that simultaneously capture a color and a mono image pair of the same scene, can be useful for improving the quality of low light level images. However, incorrect image fusion between the color and mono image pair can also have negative effects, such as the introduction of severe visual artifacts in the fused images. This paper proposes a selective image fusion technique that applies adaptive guided-filter-based denoising and selective detail transfer only to those pixels deemed reliable with respect to binocular image fusion. We employ a dissimilarity measure and binocular just-noticeable-difference (BJND) analysis to identify unreliable pixels that are likely to cause visual artifacts during image fusion via joint color image denoising and detail transfer from the mono image. By constructing an experimental color-plus-mono camera system, we demonstrate that the BJND-aware denoising and selective detail transfer improve image quality in low-light shooting.
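The selective-transfer idea can be sketched very simply: add mono detail to the colour luminance only where the two sensors agree well enough. The threshold below is a crude stand-in for the paper's BJND analysis, and the global mean is a stand-in for its guided-filter low-pass; both are illustrative assumptions, not the authors' pipeline:

```python
# Selective detail transfer sketch: mono detail (mono minus a low-pass base)
# is added to the colour luminance only where the colour/mono dissimilarity
# is below a visibility threshold; unreliable pixels keep the colour value.

def selective_fuse(lum_color, mono, threshold=10.0):
    """lum_color, mono: flat lists of luminance values (same length)."""
    base = sum(mono) / len(mono)           # crude low-pass stand-in
    fused = []
    for c, m in zip(lum_color, mono):
        if abs(c - m) <= threshold:        # reliable pixel: transfer detail
            fused.append(c + (m - base))
        else:                              # unreliable: keep colour as-is
            fused.append(c)
    return fused

# third pixel disagrees strongly (e.g. occlusion) and is left untouched
fused = selective_fuse([100.0, 102.0, 150.0], [98.0, 104.0, 90.0])
```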
Baradez, Marc-Olivier; Marshall, Damian
2011-01-01
The transition from traditional culture methods towards bioreactor based bioprocessing to produce cells in commercially viable quantities for cell therapy applications requires the development of robust methods to ensure the quality of the cells produced. Standard methods for measuring cell quality parameters such as viability provide only limited information making process monitoring and optimisation difficult. Here we describe a 3D image-based approach to develop cell distribution maps which can be used to simultaneously measure the number, confluency and morphology of cells attached to microcarriers in a stirred tank bioreactor. The accuracy of the cell distribution measurements is validated using in silico modelling of synthetic image datasets and is shown to have an accuracy >90%. Using the cell distribution mapping process and principal component analysis we show how cell growth can be quantitatively monitored over a 13 day bioreactor culture period and how changes to manufacture processes such as initial cell seeding density can significantly influence cell morphology and the rate at which cells are produced. Taken together, these results demonstrate how image-based analysis can be incorporated in cell quality control processes facilitating the transition towards bioreactor based manufacture for clinical grade cells. PMID:22028809
Pizarro, Ricardo A; Cheng, Xi; Barnett, Alan; Lemaitre, Herve; Verchinski, Beth A; Goldman, Aaron L; Xiao, Ena; Luo, Qian; Berman, Karen F; Callicott, Joseph H; Weinberger, Daniel R; Mattay, Venkata S
2016-01-01
High-resolution three-dimensional magnetic resonance imaging (3D-MRI) is being increasingly used to delineate morphological changes underlying neuropsychiatric disorders. Unfortunately, artifacts frequently compromise the utility of 3D-MRI, yielding irreproducible results through both type I and type II errors. It is therefore critical to screen 3D-MRIs for artifacts before use. Currently, quality assessment involves slice-wise visual inspection of 3D-MRI volumes, a procedure that is both subjective and time consuming. Automating the quality rating of 3D-MRI could improve the efficiency and reproducibility of the procedure. The present study is one of the first efforts to apply a support vector machine (SVM) algorithm to the quality assessment of structural brain images, using global and region-of-interest (ROI) automated image quality features developed in-house. SVM is a supervised machine-learning algorithm that can predict the category of test datasets based on knowledge acquired from a learning dataset. The performance (accuracy) of the automated SVM approach was assessed by comparing the SVM-predicted quality labels to investigator-determined quality labels. The accuracy for classifying 1457 3D-MRI volumes from our database using the SVM approach was approximately 80%. These results are promising and illustrate the possibility of using SVM as an automated quality assessment tool for 3D-MRI.
Applying image quality in cell phone cameras: lens distortion
NASA Astrophysics Data System (ADS)
Baxter, Donald; Goma, Sergio R.; Aleksic, Milivoje
2009-01-01
This paper describes the framework used in one of the pilot studies run under the I3A CPIQ initiative to quantify overall image quality in cell-phone cameras. The framework is based on a multivariate formalism which tries to predict overall image quality from individual image quality attributes, and was validated in a CPIQ pilot program. The pilot study focuses on image quality distortions introduced in the optical path of a cell-phone camera, which may or may not be corrected in the image processing path. The assumption is that the captured image is JPEG compressed and the cell-phone camera is set to 'auto' mode. Because the framework requires the individual attributes to be relatively perceptually orthogonal, the attributes used in the pilot study are lens geometric distortion (LGD) and lateral chromatic aberration (LCA). The goal of this paper is to present the framework of this pilot project, from the definition of the individual attributes up to their quantification in JNDs of quality, a requirement of the multivariate formalism; both objective and subjective evaluations were therefore used. A major distinction of the objective part from the 'DSC imaging world' is that the LCA/LGD distortions found in cell-phone cameras rarely exhibit radial behavior, so a radial mapping/model cannot be used in this case.
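The multivariate formalism combines per-attribute quality losses, each expressed in JNDs, into a single overall quality loss. A minimal sketch using a plain Minkowski combination (the exponent here is illustrative; the CPIQ/Keelan formalism uses a more refined variable-power combination):

```python
# Combine per-attribute quality losses (in JNDs) into an overall loss via a
# Minkowski metric. With p > 1, one dominant attribute largely determines
# overall quality, which matches how observers judge mixed distortions.

def overall_jnd_loss(attribute_losses, p=2.0):
    """attribute_losses: JND losses for perceptually orthogonal attributes."""
    return sum(d ** p for d in attribute_losses) ** (1.0 / p)

# e.g. lens geometric distortion costs 3 JNDs and lateral chromatic
# aberration costs 4 JNDs (hypothetical numbers):
loss = overall_jnd_loss([3.0, 4.0])   # -> 5.0 for p = 2
```

The perceptual-orthogonality requirement mentioned above is what justifies combining the attributes independently in a metric like this.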
MTF evaluation of in-line phase contrast imaging system
NASA Astrophysics Data System (ADS)
Sun, Xiaoran; Gao, Feng; Zhao, Huijuan; Zhang, Limin; Li, Jiao; Zhou, Zhongxing
2017-02-01
X-ray phase contrast imaging (XPCI) is a novel method that exploits the phase shift of the incident X-ray to form an image. Various XPCI methods have been proposed, among which in-line phase contrast imaging (IL-PCI) is regarded as one of the most promising for clinical use. The contrast of an interface is enhanced by the introduction of boundary fringes in XPCI, so contrast is generally used to evaluate XPCI image quality. But contrast is an aggregate index and does not reflect image quality information across the frequency range. The modulation transfer function (MTF), which is the Fourier transform of the system point spread function, is the recognized metric for characterizing the spatial response of conventional X-ray imaging systems. In this work, MTF is introduced into the image quality evaluation of the IL-PCI system. Numerous simulations based on Fresnel-Kirchhoff diffraction theory were performed with varying system settings, and the corresponding MTFs were calculated for comparison. The results show that MTF provides more comprehensive image quality information than contrast in IL-PCI.
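The MTF definition used above (magnitude of the Fourier transform of the point/line spread function, normalized to 1 at zero frequency) can be computed directly. A self-contained sketch on a synthetic Gaussian line-spread function; the width and sampling are illustrative, not the IL-PCI system's:

```python
import cmath
import math

# MTF as the normalized DFT magnitude of a sampled line-spread function.
# A wider LSF (more blur) gives a faster-falling MTF.

def mtf_from_lsf(lsf):
    n = len(lsf)
    spectrum = [abs(sum(lsf[x] * cmath.exp(-2j * math.pi * k * x / n)
                        for x in range(n)))
                for k in range(n)]
    return [m / spectrum[0] for m in spectrum]   # normalize: MTF(0) = 1

sigma, n = 2.0, 64
lsf = [math.exp(-0.5 * ((x - n / 2) / sigma) ** 2) for x in range(n)]
mtf = mtf_from_lsf(lsf)   # monotonically falls off toward Nyquist
```

Because the magnitude of the DFT is shift-invariant, centering the LSF does not affect the resulting MTF.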
Color Retinal Image Enhancement Based on Luminosity and Contrast Adjustment.
Zhou, Mei; Jin, Kai; Wang, Shaoze; Ye, Juan; Qian, Dahong
2018-03-01
Many common eye diseases and cardiovascular diseases can be diagnosed through retinal imaging. However, due to uneven illumination, image blurring, and low contrast, retinal images with poor quality are not useful for diagnosis, especially in automated image analysis systems. Here, we propose a new image enhancement method to improve color retinal image luminosity and contrast. A luminance gain matrix, obtained by gamma correction of the value channel in the HSV (hue, saturation, value) color space, is used to enhance each of the R, G, and B (red, green, and blue) channels. Contrast is then enhanced in the luminosity channel of L*a*b* color space by CLAHE (contrast-limited adaptive histogram equalization). Image enhancement by the proposed method is compared to other methods by evaluating quality scores of the enhanced images. The performance of the method is mainly validated on a dataset of 961 poor-quality retinal images. Quality assessment (range 0-1) of image enhancement on this poor-quality dataset indicated that our method improved color retinal image quality from an average of 0.0404 (standard deviation 0.0291) to an average of 0.4565 (standard deviation 0.1000). The proposed method achieves superior image enhancement compared to contrast enhancement in other color spaces or by other related methods, while simultaneously preserving image naturalness. This method of color retinal image enhancement may be employed to assist ophthalmologists in more efficient screening of retinal diseases and in the development of improved automated image analysis for clinical diagnosis.
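The gain-matrix step can be sketched per pixel: the HSV value channel is simply max(R, G, B), so gamma-correcting it and applying the resulting gain equally to all three channels brightens the pixel while preserving channel ratios (hue). This is a simplified sketch only; the gamma value is illustrative and the paper's CLAHE stage is omitted:

```python
# Luminosity enhancement via a gamma-derived gain on the HSV value channel,
# applied uniformly to R, G, B. Equal gain on all channels preserves the
# R:G:B ratios, i.e. the pixel's hue and saturation.

def enhance_pixel(r, g, b, gamma=2.2):
    v = max(r, g, b)                   # HSV value channel
    if v == 0:
        return (0.0, 0.0, 0.0)
    v_new = 255.0 * (v / 255.0) ** (1.0 / gamma)   # gamma correction
    gain = v_new / v                               # per-pixel luminance gain
    return tuple(min(255.0, c * gain) for c in (r, g, b))

bright = enhance_pixel(40.0, 60.0, 20.0)   # a dark pixel gets boosted
```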
The Hyperspectral Imager for the Coastal Ocean (HICO) offers the coastal environmental monitoring community an unprecedented opportunity to observe changes in coastal and estuarine water quality across a range of spatial scales not feasible with traditional field-based monitoring...
Lee, Kai-Hui; Chiu, Pei-Ling
2013-10-01
Conventional visual cryptography (VC) suffers from a pixel-expansion problem, or an uncontrollable display-quality problem for recovered images, and lacks a general approach to constructing visual secret sharing schemes for general access structures. We propose a general and systematic approach to address these issues without sophisticated codebook design. This approach can be used for binary secret images in non-computer-aided decryption environments. To avoid pixel expansion, we design a set of column vectors to encrypt secret pixels rather than using the conventional VC-based approach. We begin by formulating a mathematical model of the VC construction problem to find the column vectors for the optimal VC construction, after which we develop a simulated-annealing-based algorithm to solve the problem. The experimental results show that the display quality of the recovered image is superior to that reported in previous papers.
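To illustrate the column-vector idea without pixel expansion, here is the simplest size-invariant (2, 2) probabilistic scheme: each secret pixel is encoded by one randomly chosen column vector, identical bits for white and complementary bits for black. This is only a textbook-style illustration of non-expanding VC, not the paper's optimized simulated-annealing construction:

```python
import random

# Size-invariant (2,2) probabilistic visual secret sharing: stacking the two
# shares (bitwise OR, as with transparencies) recovers every black pixel
# exactly, while a white pixel stays white with probability 1/2. Each share
# alone is a uniformly random bit pattern and reveals nothing.

def share_pixel(secret_bit, rng):
    a = rng.randint(0, 1)
    return (a, a) if secret_bit == 0 else (a, 1 - a)   # 1 = black

def encrypt(secret, rng=None):
    rng = rng or random.Random(0)       # fixed seed for reproducibility
    pairs = [share_pixel(p, rng) for p in secret]
    return [p[0] for p in pairs], [p[1] for p in pairs]

secret = [1, 1, 0, 1, 0]
s1, s2 = encrypt(secret)
stacked = [a | b for a, b in zip(s1, s2)]   # black pixels always recovered
```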
NASA Astrophysics Data System (ADS)
Hu, Junbao; Meng, Xin; Wei, Qi; Kong, Yan; Jiang, Zhilong; Xue, Liang; Liu, Fei; Liu, Cheng; Wang, Shouyu
2018-03-01
Wide-field microscopy is commonly used for sample observation in biological research and medical diagnosis. However, the tilt error induced by oblique placement of the image recorder or the sample, as well as inclination of the optical path, often deteriorates imaging quality. To eliminate tilt in microscopy, a numerical tilt-compensation technique based on wavefront sensing using the transport-of-intensity-equation method is proposed in this paper. Both numerical simulations and practical experiments prove that the proposed technique not only accurately determines the tilt angle with a simple setup and procedure, but also compensates the tilt error to improve imaging quality even in cases of large tilt. Considering its simple system and operation, as well as its capability for image quality improvement, we believe the proposed method can be applied for tilt compensation in optical microscopy.
3D conditional generative adversarial networks for high-quality PET image estimation at low dose.
Wang, Yan; Yu, Biting; Wang, Lei; Zu, Chen; Lalush, David S; Lin, Weili; Wu, Xi; Zhou, Jiliu; Shen, Dinggang; Zhou, Luping
2018-07-01
Positron emission tomography (PET) is a widely used imaging modality, providing insight into both the biochemical and physiological processes of the human body. Usually, a full-dose radioactive tracer is required to obtain high-quality PET images for clinical needs, which inevitably raises concerns about potential health hazards. On the other hand, dose reduction may increase noise in the reconstructed PET images, degrading image quality to a certain extent. In this paper, to reduce radiation exposure while maintaining high PET image quality, we propose a novel method based on 3D conditional generative adversarial networks (3D c-GANs) to estimate high-quality full-dose PET images from low-dose ones. Generative adversarial networks (GANs) comprise a generator network and a discriminator network which are trained simultaneously, each with the goal of beating the other. As in other conditional GANs, in the proposed 3D c-GANs we condition the model on an input low-dose PET image and generate a corresponding output full-dose PET image. Specifically, to preserve the underlying information shared between the low-dose and full-dose PET images, a 3D U-net-like deep architecture that combines hierarchical features through skip connections is designed as the generator network to synthesize the full-dose image. To guarantee that the synthesized PET image is close to the real one, we take the estimation error loss into account in addition to the discriminator feedback when training the generator network. Furthermore, a concatenated 3D c-GANs based progressive refinement scheme is also proposed to further improve the quality of the estimated images. Validation was performed on a real human brain dataset including both normal subjects and subjects diagnosed with mild cognitive impairment (MCI).
Experimental results show that our proposed 3D c-GANs method outperforms the benchmark methods and achieves much better performance than the state-of-the-art methods in both qualitative and quantitative measures. Copyright © 2018 Elsevier Inc. All rights reserved.
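The generator objective described above (discriminator feedback plus an estimation-error term) corresponds to a standard conditional-GAN loss. A hedged sketch, with x the low-dose input, y the full-dose target, and λ an illustrative weighting; the abstract does not state which norm the estimation-error loss uses, so the L1 norm shown here is the common image-to-image choice rather than the paper's confirmed one:

```latex
\mathcal{L}_{G} \;=\; \mathbb{E}_{x}\!\left[-\log D\bigl(x,\, G(x)\bigr)\right]
\;+\; \lambda\,\mathbb{E}_{x,y}\!\left[\bigl\lVert\, y - G(x) \,\bigr\rVert_{1}\right]
```

The first term rewards the generator for fooling the discriminator D on the conditioned pair, while the second keeps the synthesized full-dose volume voxel-wise close to the real one.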
NASA Astrophysics Data System (ADS)
Dahl, Jeremy J.; Pinton, Gianmarco F.; Lediju, Muyinatu; Trahey, Gregg E.
2011-03-01
In the last 20 years, the number of suboptimal and inadequate ultrasound exams has increased. This trend has been linked to the increasing population of overweight and obese individuals. The primary causes of image degradation in these individuals are often attributed to phase aberration and clutter. Phase aberration degrades image quality by distorting the transmitted and received pressure waves, while clutter degrades image quality by introducing incoherent acoustical interference into the received pressure wavefront. Although significant research efforts have pursued the correction of image degradation due to phase aberration, few efforts have characterized or corrected image degradation due to clutter. We have developed a novel imaging technique that is capable of differentiating ultrasonic signals corrupted by acoustical interference. The technique, named short-lag spatial coherence (SLSC) imaging, is based on the spatial coherence of the received ultrasonic wavefront at small spatial distances across the transducer aperture. We demonstrate comparative B-mode and SLSC images using full-wave simulations that include the effects of clutter and show that SLSC imaging generates contrast-to-noise ratios (CNR) and signal-to-noise ratios (SNR) that are significantly better than B-mode imaging under noise-free conditions. In the presence of noise, SLSC imaging significantly outperforms conventional B-mode imaging in all image quality metrics. We demonstrate the use of SLSC imaging in vivo and compare B-mode and SLSC images of human thyroid and liver.
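The SLSC image value at a pixel is built from the normalized cross-correlation of per-element received signals, averaged at each small lag and summed over the first few lags. A self-contained sketch on synthetic channel data (the lag count and signal values are illustrative, and real SLSC operates on windowed, focused RF data):

```python
import math

# Short-lag spatial coherence (SLSC): average the normalized cross-correlation
# of receive-channel signals at each element lag m, then sum over short lags.
# Coherent echoes keep correlation near 1; incoherent clutter does not.

def ncc(a, b):
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a) * sum(y * y for y in b))
    return num / den if den else 0.0

def slsc(channels, max_lag):
    total = 0.0
    n = len(channels)
    for m in range(1, max_lag + 1):
        r = sum(ncc(channels[i], channels[i + m]) for i in range(n - m))
        total += r / (n - m)   # mean correlation at lag m
    return total

# perfectly coherent wavefront across 8 elements: each lag contributes 1.0
coherent = [[1.0, 2.0, -1.0, 0.5]] * 8
value = slsc(coherent, max_lag=3)   # -> 3.0
```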
NASA Astrophysics Data System (ADS)
Vogt, William C.; Jia, Congxian; Wear, Keith A.; Garra, Brian S.; Pfefer, T. Joshua
2017-03-01
As Photoacoustic Tomography (PAT) matures and undergoes clinical translation, objective performance test methods are needed to facilitate device development, regulatory clearance and clinical quality assurance. For mature medical imaging modalities such as CT, MRI, and ultrasound, tissue-mimicking phantoms are frequently incorporated into consensus standards for performance testing. A well-validated set of phantom-based test methods is needed for evaluating performance characteristics of PAT systems. To this end, we have constructed phantoms using a custom tissue-mimicking material based on PVC plastisol with tunable, biologically-relevant optical and acoustic properties. Each phantom is designed to enable quantitative assessment of one or more image quality characteristics including 3D spatial resolution, spatial measurement accuracy, ultrasound/PAT co-registration, uniformity, penetration depth, geometric distortion, sensitivity, and linearity. Phantoms contained targets including high-intensity point source targets and dye-filled tubes. This suite of phantoms was used to measure the dependence of performance of a custom PAT system (equipped with four interchangeable linear array transducers of varying design) on design parameters (e.g., center frequency, bandwidth, element geometry). Phantoms also allowed comparison of image artifacts, including surface-generated clutter and bandlimited sensing artifacts. Results showed that transducer design parameters create strong variations in performance including a trade-off between resolution and penetration depth, which could be quantified with our method. This study demonstrates the utility of phantom-based image quality testing in device performance assessment, which may guide development of consensus standards for PAT systems.
Cartographic quality of ERTS-1 images
NASA Technical Reports Server (NTRS)
Welch, R. I.
1973-01-01
Analyses of simulated and operational ERTS images have provided initial estimates of resolution, ground resolution, detectability thresholds and other measures of image quality of interest to earth scientists and cartographers. Based on these values, including an approximate ground resolution of 250 meters for both RBV and MSS systems, the ERTS-1 images appear suited to the production and/or revision of planimetric and photo maps of 1:500,000 scale and smaller for which map accuracy standards are compatible with the imaged detail. Thematic mapping, although less constrained by map accuracy standards, will be influenced by measurement thresholds and errors which have yet to be accurately determined for ERTS images. This study also indicates the desirability of establishing a quantitative relationship between image quality values and map products which will permit both engineers and cartographers/earth scientists to contribute to the design requirements of future satellite imaging systems.
NASA Astrophysics Data System (ADS)
Cesmeli, Erdogan; Berry, Joel L.; Carr, J. J.
2005-04-01
Proliferation of coronary stent deployment for the treatment of coronary heart disease (CHD) creates a need for imaging-based follow-up examinations to assess patency. Technological improvements in multi-detector computed tomography (MDCT) make it a potential non-invasive alternative to coronary catheterization for evaluation of stent patency; however, image quality with MDCT varies based on the size and composition of the stent. We studied the role of tube focal spot size and power in the optimization of image quality in a stationary phantom. A standard uniform physical phantom with a tubular insert was used, in which coronary stents (4 mm in diameter) were deployed in a tube filled with contrast to simulate a typical imaging condition observed in clinical practice. We utilized different commercially available stents and scanned them with different tube voltage and current settings (LightSpeed Pro16, GE Healthcare Technologies, Waukesha, WI, USA). The scanner uses a different focal spot size depending on the power load, which allowed us to assess the combined effect of focal spot size and power. A radiologist evaluated the resulting images in terms of image quality and artifacts. For all stents, we found that the small focal spot size yielded better image quality and fewer artifacts. In general, higher power capability for a given focal spot size improved the signal-to-noise ratio in the images, allowing improved assessment. Our preliminary study in a non-moving phantom suggests that a CT scanner that can deliver the same power on a small focal spot is better suited to an optimized scan protocol for reliable stent assessment.
NASA Astrophysics Data System (ADS)
Lee, Youngjin; Lee, Amy Candy; Kim, Hee-Joung
2016-09-01
Recently, significant effort has been devoted to the development of photon-counting detectors (PCDs) based on CdTe for X-ray imaging applications. The motivation for developing PCDs is higher image quality. In particular, the K-edge subtraction (KES) imaging technique using a PCD can improve image quality and is useful for increasing the contrast resolution of a target material by utilizing a contrast agent. Building on this technique, we present an improved K-edge log-subtraction (KELS) imaging technique. The KELS technique based on PCDs can be realized using different subtraction energy widths of the energy window. In this study, the effects of the KELS technique and the subtraction energy width were investigated with respect to contrast, standard deviation, and contrast-to-noise ratio (CNR) using a Monte Carlo simulation. We simulated a PCD X-ray imaging system based on CdTe and a polymethylmethacrylate (PMMA) phantom containing various iodine contrast agents. To acquire KELS images, images of the phantom were acquired at energy ranges above and below the iodine K-edge absorption energy (33.2 keV). According to the results, contrast and standard deviation decreased as the subtraction energy width of the energy window increased. Also, the CNR of the KELS images was higher than that of images acquired using the whole energy range. In particular, the maximum CNR improvements over whole-energy-range imaging for 1, 2, and 3 mm diameter iodine contrast agents were factors of 11.33, 8.73, and 8.29, respectively. Additionally, the optimum subtraction energy widths were 5, 4, and 3 keV for the 1, 2, and 3 mm diameter contrast agents, respectively. In conclusion, we successfully established an improved KELS imaging technique with an optimized subtraction energy width of the energy window, and based on our results, we recommend this technique for high image quality.
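The core K-edge log-subtraction operation is a per-pixel difference of log-intensities from the below- and above-K-edge energy windows, scored by CNR against background. A minimal sketch on synthetic intensities (all values below are made up; iodine attenuates far more strongly just above its 33.2 keV K-edge, so iodine-containing pixels show a large log difference):

```python
import math

# K-edge log-subtraction: subtract log-intensities acquired just above and
# below the contrast agent's K-edge, then score the target region against
# background with a contrast-to-noise ratio.

def kels(i_below, i_above):
    return [math.log(b) - math.log(a) for b, a in zip(i_below, i_above)]

def cnr(target, background):
    mt = sum(target) / len(target)
    mb = sum(background) / len(background)
    var = sum((x - mb) ** 2 for x in background) / len(background)
    return abs(mt - mb) / math.sqrt(var) if var else float("inf")

# first two pixels: background (little spectral change across the K-edge);
# last two: iodine target (strong attenuation above the K-edge)
sub = kels(i_below=[900.0, 880.0, 400.0, 410.0],
           i_above=[850.0, 840.0, 120.0, 125.0])
score = cnr(target=sub[2:], background=sub[:2])
```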
Webster, G J; Kilgallon, J E; Ho, K F; Rowbottom, C G; Slevin, N J; Mackay, R I
2009-06-01
Uncertainty and inconsistency are observed in target volume delineation in the head and neck for radiotherapy treatment planning based only on CT imaging. Alternative modalities such as MRI have previously been incorporated into the delineation process to provide additional anatomical information. This work aims to improve on previous studies by combining good image quality with precise patient immobilisation in order to maintain patient position between scans. MR images were acquired using quadrature coils placed over the head and neck while the patient was immobilised in the treatment position using a five-point thermoplastic shell. The MR and CT images were automatically fused in the Pinnacle treatment planning system using Syntegra software. Image quality, distortion, and the accuracy of image registration using patient anatomy were evaluated. Image quality was found to be superior to that acquired using the body coil, while distortion was < 1.0 mm up to a radius of 8.7 cm from the scan centre. Image registration accuracy was found to be 2.2 mm (+/- 0.9 mm) and < 3.0 degrees (n = 6). A novel MRI technique that combines good image quality with patient immobilisation has been developed and is now in clinical use. The scan duration of approximately 15 min has been well tolerated by all patients.
Perez-Ponce, Hector; Daul, Christian; Wolf, Didier; Noel, Alain
2013-08-01
In mammography, image quality assessment has to be directly related to the detectability of breast cancer indicators (e.g. microcalcifications). Recently, we proposed an X-ray source/digital detector (XRS/DD) model leading to such an assessment. This model simulates very realistic contrast-detail phantom (CDMAM) images, yielding gold-disc (representing microcalcifications) detectability thresholds that are very close to those of real images taken under the simulated acquisition conditions. The detection step was performed with a mathematical observer. The aim of this contribution is to include human observers in the disc detection process in real and virtual images to validate the simulation framework based on the XRS/DD model. Mathematical criteria (contrast-detail curves, image quality factor, etc.) are used to assess and to compare, from a statistical point of view, the cancer indicator detectability in real and virtual images. The quantitative results given in this paper show that the images simulated by the XRS/DD model are useful for image quality assessment under all studied exposure conditions, using either human or automated scoring. This paper also confirms that with the XRS/DD model the image quality assessment can be automated and the total procedure time drastically reduced. Compared to standard quality assessment methods, the number of images to be acquired is divided by a factor of eight. Copyright © 2012 IPEM. Published by Elsevier Ltd. All rights reserved.
Hauser, Nik; Wang, Zhentian; Kubik-Huch, Rahel A; Trippel, Mafalda; Singer, Gad; Hohl, Michael K; Roessl, Ewald; Köhler, Thomas; van Stevendaal, Udo; Wieberneit, Nataly; Stampanoni, Marco
2014-03-01
Differential phase contrast and scattering-based x-ray mammography has the potential to provide additional and complementary clinically relevant information compared with absorption-based mammography. The purpose of our study was to provide a first statistical evaluation of the imaging capabilities of the new technique compared with digital absorption mammography. We investigated non-fixed mastectomy samples of 33 patients with invasive breast cancer, using grating-based differential phase contrast mammography (mammoDPC) with a conventional, low-brilliance x-ray tube. We simultaneously recorded absorption, differential phase contrast, and small-angle scattering signals that were combined into novel high-frequency-enhanced images with a dedicated image fusion algorithm. Six international, expert breast radiologists evaluated clinical digital and experimental mammograms in a 2-part blinded, prospective independent reader study. The results were statistically analyzed in terms of image quality and clinical relevance. The results of the comparison of mammoDPC with clinical digital mammography revealed the general quality of the images to be significantly superior (P < 0.001); sharpness, lesion delineation, as well as the general visibility of calcifications to be significantly more assessable (P < 0.001); and delineation of anatomic components of the specimens (surface structures) to be significantly sharper (P < 0.001). Spiculations were significantly better identified, and the overall clinically relevant information provided by mammoDPC was judged to be superior (P < 0.001). Our results demonstrate that complementary information provided by phase and scattering enhanced mammograms obtained with the mammoDPC approach deliver images of generally superior quality. This technique has the potential to improve radiological breast diagnostics.
Kang, Xu; Liu, Liang; Ma, Huadong
2017-01-01
Monitoring the status of urban environments, which provides fundamental information for a city, yields crucial insights into various fields of urban research. Recently, with the popularity of smartphones and vehicles equipped with onboard sensors, a people-centric scheme, namely “crowdsensing”, for city-scale environment monitoring is emerging. This paper proposes a data correlation based crowdsensing approach for fine-grained urban environment monitoring. To demonstrate urban status, we generate sensing images via crowdsensing network, and then enhance the quality of sensing images via data correlation. Specifically, to achieve a higher quality of sensing images, we not only utilize temporal correlation of mobile sensing nodes but also fuse the sensory data with correlated environment data by introducing a collective tensor decomposition approach. Finally, we conduct a series of numerical simulations and a real dataset based case study. The results validate that our approach outperforms the traditional spatial interpolation-based method. PMID:28054968
Qu, Yufu; Zou, Zhaofan
2017-10-16
Photographic images taken in foggy or hazy weather (hazy images) exhibit poor visibility and detail because of scattering and attenuation of light caused by suspended particles, and therefore, image dehazing has attracted considerable research attention. The current polarization-based dehazing algorithms strongly rely on the presence of a "sky area", and thus, the selection of model parameters is susceptible to external interference of high-brightness objects and strong light sources. In addition, the noise of the restored image is large. In order to solve these problems, we propose a polarization-based dehazing algorithm that does not rely on the sky area ("non-sky"). First, a linear polarizer is used to collect three polarized images. The maximum- and minimum-intensity images are then obtained by calculation, assuming the polarization of light emanating from objects is negligible in most scenarios involving non-specular objects. Subsequently, the polarization difference of the two images is used to determine a sky area and calculate the infinite atmospheric light value. Next, using the global features of the image, and based on the assumption that the airlight and object radiance are irrelevant, the degree of polarization of the airlight (DPA) is calculated by solving for the optimal solution of the correlation coefficient equation between airlight and object radiance; the optimal solution is obtained by setting the right-hand side of the equation to zero. Then, the hazy image is subjected to dehazing. Subsequently, a filtering denoising algorithm, which combines the polarization difference information and block-matching and 3D (BM3D) filtering, is designed to filter the image smoothly. Our experimental results show that the proposed polarization-based dehazing algorithm does not depend on whether the image includes a sky area and does not require complex models. 
Moreover, except in scenarios with specular objects, the dehazed images are superior to those obtained with the methods of Tarel, Fattal, Ren, and Berman according to the no-reference quality assessment (NRQA), the blind/referenceless image spatial quality evaluator (BRISQUE), the blind anisotropic quality index (AQI), and the indicator e.
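The recovery step described above builds on the standard polarization-difference haze model (Schechner-style). A minimal sketch of that model only, not the paper's full non-sky pipeline; the known DPA and infinite atmospheric light are assumptions here, whereas estimating them is exactly what the paper contributes:

```python
import numpy as np

def polarization_dehaze(i_max, i_min, dpa, a_inf):
    """Polarization-difference dehazing (illustrative sketch).

    i_max, i_min : polarizer images with maximum / minimum airlight
    dpa          : degree of polarization of the airlight (assumed known)
    a_inf        : infinite atmospheric light value (assumed known)
    """
    i_total = i_max + i_min                 # total intensity image
    airlight = (i_max - i_min) / dpa        # airlight map from the DPA
    t = np.clip(1.0 - airlight / a_inf, 1e-3, 1.0)  # transmission map
    radiance = (i_total - airlight) / t     # dehazed scene radiance
    return radiance, t
```

The paper additionally denoises the result with a BM3D-based filter guided by the polarization difference, which this sketch omits.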
PSF reconstruction for Compton-based prompt gamma imaging
NASA Astrophysics Data System (ADS)
Jan, Meei-Ling; Lee, Ming-Wei; Huang, Hsuan-Ming
2018-02-01
Compton-based prompt gamma (PG) imaging has been proposed for in vivo range verification in proton therapy. However, several factors degrade the image quality of PG images, some of which are due to inherent properties of a Compton camera such as spatial resolution and energy resolution. Moreover, Compton-based PG imaging has a spatially variant resolution loss. In this study, we investigate the performance of the list-mode ordered subset expectation maximization algorithm with a shift-variant point spread function (LM-OSEM-SV-PSF) model. We also evaluate how well the PG images reconstructed using an SV-PSF model reproduce the distal falloff of the proton beam. The SV-PSF parameters were estimated from simulation data of point sources at various positions. Simulated PGs were produced in a water phantom irradiated with a proton beam. Compared to the LM-OSEM algorithm, the LM-OSEM-SV-PSF algorithm improved the quality of the reconstructed PG images and the estimation of PG falloff positions. In addition, the 4.44 and 5.25 MeV PG emissions can be accurately reconstructed using the LM-OSEM-SV-PSF algorithm. However, for the 2.31 and 6.13 MeV PG emissions, the LM-OSEM-SV-PSF reconstruction provides limited improvement. We also found that the LM-OSEM algorithm followed by a shift-variant Richardson-Lucy deconvolution could reconstruct images with quality visually similar to the LM-OSEM-SV-PSF-reconstructed images, while requiring shorter computation time.
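The deconvolution alternative mentioned at the end uses a shift-variant Richardson-Lucy scheme; a shift-invariant 1-D sketch illustrates the multiplicative update it is built on (names and the simple boundary handling are illustrative, not the paper's implementation):

```python
import numpy as np

def richardson_lucy_1d(measured, psf, n_iter=100):
    """Shift-invariant Richardson-Lucy deconvolution (1-D sketch)."""
    psf = psf / psf.sum()          # normalize the PSF
    psf_flipped = psf[::-1]        # adjoint of convolution
    # Flat, positive initial estimate.
    estimate = np.full_like(measured, measured.mean())
    for _ in range(n_iter):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = measured / np.maximum(blurred, 1e-12)  # avoid divide-by-zero
        estimate *= np.convolve(ratio, psf_flipped, mode="same")
    return estimate
```

The shift-variant version replaces the single PSF with position-dependent kernels estimated from point-source simulations, as described in the abstract.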
Real-time Internet connections: implications for surgical decision making in laparoscopy.
Broderick, T J; Harnett, B M; Doarn, C R; Rodas, E B; Merrell, R C
2001-08-01
To determine whether a low-bandwidth Internet connection can provide adequate image quality to support remote real-time surgical consultation. Telemedicine has been used to support care at a distance through the use of expensive equipment and broadband communication links. In the past, the operating room has been an isolated environment that has been relatively inaccessible for real-time consultation. Recent technological advances have permitted videoconferencing over low-bandwidth, inexpensive Internet connections. If these connections are shown to provide adequate video quality for surgical applications, low-bandwidth telemedicine will open the operating room environment to remote real-time surgical consultation. Surgeons performing a laparoscopic cholecystectomy in Ecuador or the Dominican Republic shared real-time laparoscopic images with a panel of surgeons at the parent university through a dial-up Internet account. The connection permitted video and audio teleconferencing to support real-time consultation as well as the transmission of real-time images and store-and-forward images for observation by the consultant panel. A total of six live consultations were analyzed. In addition, paired local and remote images were "grabbed" from the video feed during these laparoscopic cholecystectomies. Nine of these paired images were then placed into a Web-based tool designed to evaluate the effect of transmission on image quality. The authors showed for the first time the ability to identify critical anatomic structures in laparoscopy over a low-bandwidth connection via the Internet. The consultant panel of surgeons correctly remotely identified biliary and arterial anatomy during six laparoscopic cholecystectomies. Within the Web-based questionnaire, 15 surgeons could not blindly distinguish the quality of local and remote laparoscopic images. Low-bandwidth, Internet-based telemedicine is inexpensive, effective, and almost ubiquitous. 
Use of these inexpensive, portable technologies will allow sharing of surgical procedures and decisions regardless of location. Internet telemedicine consistently supported real-time intraoperative consultation in laparoscopic surgery. The implications are broad with respect to quality improvement and diffusion of knowledge as well as for basic consultation.
Colometer: a real-time quality feedback system for screening colonoscopy.
Filip, Dobromir; Gao, Xuexin; Angulo-Rodríguez, Leticia; Mintchev, Martin P; Devlin, Shane M; Rostom, Alaa; Rosen, Wayne; Andrews, Christopher N
2012-08-28
To investigate the performance of a new software-based colonoscopy quality assessment system. The software-based system employs a novel image processing algorithm which detects the levels of image clarity, withdrawal velocity, and level of the bowel preparation in a real-time fashion from live video signal. Threshold levels of image blurriness and the withdrawal velocity below which the visualization could be considered adequate have initially been determined arbitrarily by review of sample colonoscopy videos by two experienced endoscopists. Subsequently, an overall colonoscopy quality rating was computed based on the percentage of the withdrawal time with adequate visualization (scored 1-5; 1, when the percentage was 1%-20%; 2, when the percentage was 21%-40%, etc.). In order to test the proposed velocity and blurriness thresholds, screening colonoscopy withdrawal videos from a specialized ambulatory colon cancer screening center were collected, automatically processed and rated. Quality ratings on the withdrawal were compared to the insertion in the same patients. Then, 3 experienced endoscopists reviewed the collected videos in a blinded fashion and rated the overall quality of each withdrawal (scored 1-5; 1, poor; 3, average; 5, excellent) based on 3 major aspects: image quality, colon preparation, and withdrawal velocity. The automated quality ratings were compared to the averaged endoscopist quality ratings using Spearman correlation coefficient. Fourteen screening colonoscopies were assessed. Adenomatous polyps were detected in 4/14 (29%) of the collected colonoscopy video samples. As a proof of concept, the Colometer software rated colonoscope withdrawal as having better visualization than the insertion in the 10 videos which did not have any polyps (average percent time with adequate visualization: 79% ± 5% for withdrawal and 50% ± 14% for insertion, P < 0.01). Withdrawal times during which no polyps were removed ranged from 4-12 min. 
The median quality rating from the automated system and the reviewers was 3.45 [interquartile range (IQR), 3.1-3.68] and 3.00 (IQR, 2.33-3.67), respectively, for all colonoscopy video samples. The automated rating revealed a strong correlation with the reviewers' rating (ρ = 0.65, P = 0.01). There was good correlation between the automated overall quality rating and the mean endoscopist withdrawal-speed rating (Spearman r = 0.59, P = 0.03). There was no correlation of the automated overall quality rating with the mean endoscopists' image quality rating (Spearman r = 0.41, P = 0.15). The results from a novel automated real-time colonoscopy quality feedback system strongly agreed with the endoscopists' quality assessments. Further study is required to validate this approach.
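The Spearman statistics above are plain rank correlations; a minimal sketch for tie-free data (the study's 1-5 scales contain ties, which would require average ranks, so this is illustrative only):

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation for data without ties (sketch)."""
    # Double argsort converts values to 0-based ranks.
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    # Pearson correlation of the ranks.
    return float((rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry)))
```

In practice one would use a library routine that handles ties (e.g. SciPy's `spearmanr`); this sketch just makes the rank-then-correlate idea concrete.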
Common-mask guided image reconstruction (c-MGIR) for enhanced 4D cone-beam computed tomography
NASA Astrophysics Data System (ADS)
Park, Justin C.; Zhang, Hao; Chen, Yunmei; Fan, Qiyong; Li, Jonathan G.; Liu, Chihray; Lu, Bo
2015-12-01
Compared to 3D cone beam computed tomography (3D CBCT), the image quality of commercially available four-dimensional (4D) CBCT is severely impaired due to the insufficient amount of projection data available for each phase. Since the traditional Feldkamp-Davis-Kress (FDK)-based algorithm is infeasible for reconstructing high quality 4D CBCT images with limited projections, investigators have developed several compressed-sensing (CS) based algorithms to improve image quality. The aim of this study is to develop a novel algorithm which can provide better image quality than the FDK and other CS-based algorithms with limited projections. We named this algorithm ‘the common mask guided image reconstruction’ (c-MGIR). In c-MGIR, the unknown CBCT volume is mathematically modeled as a combination of phase-specific motion vectors and phase-independent static vectors. The common-mask matrix, which is the key concept behind the c-MGIR algorithm, separates the common static part across all phase images from the possible moving part in each phase image. The moving and static parts of the volume are then alternately updated by solving two sub-minimization problems iteratively. As the novel mathematical transformation allows the static and moving volumes to be updated (during each iteration) with the global projections and the well-solved static volume, respectively, the algorithm is able to reduce noise and under-sampling artifacts (an issue faced by other algorithms) to the maximum extent. To evaluate the performance of the proposed c-MGIR, we utilized imaging data from both numerical phantoms and a lung cancer patient. The quality of the images reconstructed with c-MGIR was compared with that of (1) the standard FDK algorithm, (2) a conventional total variation (CTV) based algorithm, (3) the prior image constrained compressed sensing (PICCS) algorithm, and (4) the motion-map constrained image reconstruction (MCIR) algorithm.
To improve the efficiency of the algorithm, the code was implemented on a graphics processing unit (GPU) for parallel processing. The root mean square errors (RMSEs) between the ground truth and the reconstructed volumes of the numerical phantom decreased in the order FDK, CTV, PICCS, MCIR, and c-MGIR for all phases. Specifically, the means and standard deviations of the RMSE for FDK, CTV, PICCS, MCIR, and c-MGIR across all phases were 42.64% ± 6.5%, 3.63% ± 0.83%, 1.31% ± 0.09%, 0.86% ± 0.11%, and 0.52% ± 0.02%, respectively. The image quality of the patient case also indicated the superiority of c-MGIR over the other algorithms. The results indicated that clinically viable 4D CBCT images can be reconstructed while requiring no more projection data than a typical clinical 3D CBCT scan. This makes c-MGIR a potential online reconstruction algorithm for 4D CBCT, which can provide much better image quality than other available algorithms while requiring less dose and potentially less scanning time.
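The RMSE figures quoted above are percentages; a sketch of a range-normalized percent RMSE (the exact normalization used in the paper is an assumption):

```python
import numpy as np

def rmse_percent(recon, truth):
    """RMSE relative to the ground-truth intensity range, in percent."""
    err = np.sqrt(np.mean((recon - truth) ** 2))
    return 100.0 * err / (truth.max() - truth.min())
```

Computed per phase against the phase's ground-truth volume, this yields one number per reconstruction algorithm and phase, which is how the table of means and standard deviations above would be assembled.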
Emerging Techniques for Dose Optimization in Abdominal CT
Platt, Joel F.; Goodsitt, Mitchell M.; Al-Hawary, Mahmoud M.; Maturen, Katherine E.; Wasnik, Ashish P.; Pandya, Amit
2014-01-01
Recent advances in computed tomographic (CT) scanning techniques such as automated tube current modulation (ATCM), optimized x-ray tube voltage, and better use of iterative image reconstruction have allowed maintenance of good CT image quality with reduced radiation dose. ATCM varies the tube current during scanning to account for differences in patient attenuation, ensuring a more homogeneous image quality, although selection of the appropriate image quality parameter is essential for achieving optimal dose reduction. Reducing the x-ray tube voltage is best suited for evaluating iodinated structures, since the effective energy of the x-ray beam will be closer to the k-edge of iodine, resulting in a higher attenuation for the iodine. The optimal kilovoltage for a CT study should be chosen on the basis of the imaging task and patient habitus. The aim of iterative image reconstruction is to identify factors that contribute to noise on CT images with the use of statistical models of noise (statistical iterative reconstruction) and selective removal of noise to improve image quality. The degree of noise suppression achieved with statistical iterative reconstruction can be customized to minimize the effect of altered image quality on CT images. Unlike statistical iterative reconstruction, model-based iterative reconstruction algorithms model both the statistical noise and the physical acquisition process, allowing CT to be performed with a further reduction in radiation dose without an increase in image noise or loss of spatial resolution. Understanding these recently developed scanning techniques is essential for optimization of imaging protocols designed to achieve the desired image quality with a reduced dose. © RSNA, 2014 PMID:24428277
Sun, Jihang; Yu, Tong; Liu, Jinrong; Duan, Xiaomin; Hu, Di; Liu, Yong; Peng, Yun
2017-03-16
Model-based iterative reconstruction (MBIR) is a promising reconstruction method that can improve CT image quality at low radiation dose. The purpose of this study was to demonstrate the advantage of using MBIR for noise reduction and image quality improvement in low-dose chest CT for children with necrotizing pneumonia, over the adaptive statistical iterative reconstruction (ASIR) and conventional filtered back-projection (FBP) techniques. Twenty-six children with necrotizing pneumonia (aged 2 months to 11 years) who underwent standard-of-care low-dose CT scans were included. Thinner-slice (0.625 mm) images were retrospectively reconstructed using the MBIR, ASIR and conventional FBP techniques. Image noise and signal-to-noise ratio (SNR) for these thin-slice images were measured and statistically analyzed using ANOVA. Two radiologists independently analyzed the image quality for detecting necrotic lesions, and results were compared using Friedman's test. Radiation dose for the overall patient population was 0.59 mSv. There was a significant improvement in the high-density and low-contrast resolution of the MBIR reconstruction, resulting in more detection and better identification of necrotic lesions (38 lesions in 0.625 mm MBIR images vs. 29 lesions in 0.625 mm FBP images). The subjective display scores (mean ± standard deviation) for the detection of necrotic lesions were 5.0 ± 0.0, 2.8 ± 0.4 and 2.5 ± 0.5 with MBIR, ASIR and FBP reconstruction, respectively, and the respective objective image noise was 13.9 ± 4.0 HU, 24.9 ± 6.6 HU and 33.8 ± 8.7 HU. Relative to FBP, image noise decreased by 58.9% with MBIR and by 26.3% with ASIR. Additionally, the SNR of MBIR images was significantly higher than that of FBP and ASIR images.
The quality of chest CT images in children with necrotizing pneumonia was significantly improved by the MBIR technique as compared with the ASIR and FBP reconstructions, providing a more confident and accurate diagnosis of necrotizing pneumonia.
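The percentage reductions quoted can be checked directly from the reported noise values (13.9, 24.9 and 33.8 HU); the arithmetic below shows that both figures are reductions relative to FBP:

```python
# Reported mean objective image noise (HU) per reconstruction method.
noise = {"MBIR": 13.9, "ASIR": 24.9, "FBP": 33.8}

def reduction_pct(method, baseline):
    """Percent noise reduction of `method` relative to `baseline`."""
    return 100.0 * (noise[baseline] - noise[method]) / noise[baseline]

mbir_vs_fbp = round(reduction_pct("MBIR", "FBP"), 1)  # 58.9
asir_vs_fbp = round(reduction_pct("ASIR", "FBP"), 1)  # 26.3
```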
Toward objective image quality metrics: the AIC Eval Program of the JPEG
NASA Astrophysics Data System (ADS)
Richter, Thomas; Larabi, Chaker
2008-08-01
Objective quality assessment of lossy image compression codecs is an important part of the recent call of the JPEG for Advanced Image Coding. The target of the AIC ad-hoc group is twofold: first, to receive state-of-the-art still image codecs and to propose suitable technology for standardization; and second, to study objective image quality metrics to evaluate the performance of such codecs. Even though the performance of an objective metric is defined by how well it predicts the outcome of a subjective assessment, one can also study the usefulness of a metric indirectly, in a non-traditional way, namely by measuring the subjective quality improvement of a codec that has been optimized for a specific objective metric. This approach shall be demonstrated here on the recently proposed HDPhoto format [14] introduced by Microsoft and an SSIM-tuned [17] version of it by one of the authors. We compare these two implementations with JPEG [1] in two variations and with a visually and PSNR-optimal JPEG2000 [13] implementation. To this end, we use subjective and objective tests based on the multiscale SSIM and a new DCT-based metric.
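The multiscale SSIM used in the tests builds on the basic SSIM index. A single-window (global-statistics) sketch of that index is below; real implementations average SSIM over local windows and, for the multiscale variant, over resolutions, so this is illustrative only:

```python
import numpy as np

def ssim_global(x, y, data_range=255.0, k1=0.01, k2=0.03):
    """SSIM computed from global image statistics (single-window sketch)."""
    c1, c2 = (k1 * data_range) ** 2, (k2 * data_range) ** 2  # stabilizers
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    # Combined luminance / contrast-structure form of the SSIM index.
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

An SSIM-tuned encoder, as described in the abstract, would use such a metric (in its windowed form) as the distortion measure inside rate-distortion optimization.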
Depth image enhancement using perceptual texture priors
NASA Astrophysics Data System (ADS)
Bang, Duhyeon; Shim, Hyunjung
2015-03-01
A depth camera is widely used in various applications because it provides a depth image of the scene in real time. However, owing to limited power consumption, depth cameras exhibit severe noise and cannot provide high-quality 3D data. Although a smoothness prior is often employed to suppress the depth noise, it discards geometric details, degrading the distance resolution and hindering realism in 3D content. In this paper, we propose a perception-based depth image enhancement technique that automatically recovers the depth details of various textures, using a statistical framework inspired by the human mechanism of perceiving surface details through texture priors. We construct a database of high-quality normals. Based on recent studies in human visual perception (HVP), we select pattern density as the primary feature for classifying textures. Using the classification results, we match and substitute the noisy input normals with high-quality normals from the database. As a result, our method provides a high-quality depth image that preserves surface details. We expect this work to be effective in enhancing the details of depth images from 3D sensors and in providing a high-fidelity virtual reality experience.
Brodin, N. Patrik; Guha, Chandan; Tomé, Wolfgang A.
2015-01-01
Modern pre-clinical radiation therapy (RT) research requires high precision and accurate dosimetry to facilitate the translation of research findings into clinical practice. Several systems are available that provide precise delivery and on-board imaging capabilities, highlighting the need for a quality management program (QMP) to ensure consistent and accurate radiation dose delivery. An ongoing, simple, and efficient QMP for image-guided robotic small animal irradiators used in pre-clinical RT research is described. Protocols were developed and implemented to assess the dose output constancy (based on the AAPM TG-61 protocol), cone-beam computed tomography (CBCT) image quality and object representation accuracy (using a custom-designed imaging phantom), CBCT-guided target localization accuracy, and consistency of the CBCT-based dose calculation. To facilitate an efficient read-out and limit the user dependence of the QMP data analysis, a semi-automatic image analysis and data representation program was developed using the technical computing software MATLAB. The results of the first six months' experience using the suggested QMP for a Small Animal Radiation Research Platform (SARRP) are presented, with data collected on a bi-monthly basis. The dosimetric output constancy was established to be within ±1%, the consistency of the image resolution was within ±0.2 mm, the accuracy of CBCT-guided target localization was within ±0.5 mm, and dose calculation consistency was within ±2 s (±3%) per treatment beam. Based on these results, this simple quality assurance program allows for the detection of inconsistencies in dosimetric or imaging parameters that are beyond the acceptable variability for a reliable and accurate pre-clinical RT system, on a monthly or bi-monthly basis. PMID:26425981
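The tolerance bands above lend themselves to a simple automated pass/fail check; a sketch with hypothetical metric names (this is not the MATLAB program the paper describes):

```python
# QMP tolerance bands taken from the text; key names are hypothetical.
TOLERANCES = {
    "dose_output_pct": 1.0,   # dose output constancy, within ±1%
    "resolution_mm": 0.2,     # image resolution consistency, within ±0.2 mm
    "localization_mm": 0.5,   # CBCT-guided target localization, within ±0.5 mm
}

def qa_pass(measured_deviation):
    """Return, per metric, whether the measured deviation is within tolerance."""
    return {k: abs(measured_deviation[k]) <= tol for k, tol in TOLERANCES.items()}
```

Running such a check on each bi-monthly measurement set would flag any parameter drifting beyond the acceptable variability.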
NASA Astrophysics Data System (ADS)
Han, Xiao; Pearson, Erik; Pelizzari, Charles; Al-Hallaq, Hania; Sidky, Emil Y.; Bian, Junguo; Pan, Xiaochuan
2015-06-01
A kilovoltage (kV) cone-beam computed tomography (CBCT) unit mounted onto a linear accelerator treatment system, often referred to as an on-board imager (OBI), plays an increasingly important role in image-guided radiation therapy. While the FDK algorithm is currently used for reconstructing images from clinical OBI data, optimization-based reconstruction has also been investigated for OBI CBCT. An optimization-based reconstruction involves numerous parameters, which can significantly impact reconstruction properties (or utility). The success of an optimization-based reconstruction for a particular class of practical applications thus relies strongly on the appropriate selection of parameter values. In this work, we focus on tailoring constrained-TV-minimization-based reconstruction, an optimization-based reconstruction previously shown to have potential for CBCT imaging conditions of practical interest, to OBI imaging through appropriate selection of parameter values. In particular, for real data of phantoms and a patient collected with OBI CBCT, we first devise utility metrics specific to OBI quality-assurance tasks and then apply them to guide the selection of parameter values in constrained-TV-minimization-based reconstruction. The study results show that the reconstructions improve upon clinical FDK reconstruction in both visual and quantitative assessments in terms of the devised utility metrics.
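The objective that constrained-TV-minimization reconstruction penalizes is the image's total variation; a sketch of the isotropic TV of a 2-D slice (discretization details are an assumption):

```python
import numpy as np

def total_variation(img):
    """Isotropic total variation of a 2-D image (sketch).

    Sums the Euclidean norm of the forward-difference gradient over
    the pixels where both differences are defined.
    """
    gx = np.diff(img, axis=1)  # horizontal forward differences
    gy = np.diff(img, axis=0)  # vertical forward differences
    return float(np.sqrt(gx[:-1, :] ** 2 + gy[:, :-1] ** 2).sum())
```

A constrained-TV reconstruction seeks the image minimizing this quantity subject to a data-fidelity constraint on the projections; the parameter tailoring the abstract describes concerns that constraint and related algorithm settings.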
Comparing image quality of print-on-demand books and photobooks from web-based vendors
NASA Astrophysics Data System (ADS)
Phillips, Jonathan; Bajorski, Peter; Burns, Peter; Fredericks, Erin; Rosen, Mitchell
2010-01-01
Because of the emergence of e-commerce and developments in print engines designed for economical output of very short runs, there are increased business opportunities and consumer options for print-on-demand books and photobooks. The current state of these printing modes allows for direct uploading of book files via the web, printing on nonoffset printers, and distributing by standard parcel or mail delivery services. The goal of this research is to assess the image quality of print-on-demand books and photobooks produced by various Web-based vendors and to identify correlations between psychophysical results and objective metrics. Six vendors were identified for one-off (single-copy) print-on-demand books, and seven vendors were identified for photobooks. Participants rank ordered overall quality of a subset of individual pages from each book, where the pages included text, photographs, or a combination of the two. Observers also reported overall quality ratings and price estimates for the bound books. Objective metrics of color gamut, color accuracy, accuracy of International Color Consortium profile usage, eye-weighted root mean square L*, and cascaded modulation transfer acutance were obtained and compared to the observer responses. We introduce some new methods for normalizing data as well as for strengthening the statistical significance of the results. Our approach includes the use of latent mixed-effect models. We found statistically significant correlation with overall image quality and some of the spatial metrics, but correlations between psychophysical results and other objective metrics were weak or nonexistent. Strong correlation was found between psychophysical results of overall quality assessment and estimated price associated with quality. The photobook set of vendors reached higher image-quality ratings than the set of print-on-demand vendors. However, the photobook set had higher image-quality variability.
Validation of no-reference image quality index for the assessment of digital mammographic images
NASA Astrophysics Data System (ADS)
de Oliveira, Helder C. R.; Barufaldi, Bruno; Borges, Lucas R.; Gabarda, Salvador; Bakic, Predrag R.; Maidment, Andrew D. A.; Schiabel, Homero; Vieira, Marcelo A. C.
2016-03-01
To ensure optimal clinical performance of digital mammography, it is necessary to obtain images with high spatial resolution and low noise while keeping radiation exposure as low as possible. These requirements directly affect the interpretation of radiologists. The quality of a digital image should be assessed using objective measurements. In general, these methods measure the similarity between a degraded image and an ideal image without degradation (ground truth), used as a reference. These methods are called Full-Reference Image Quality Assessment (FR-IQA). However, for digital mammography, an image without degradation is not available in clinical practice; thus, an objective method to assess the quality of mammograms must operate without a reference. The purpose of this study is to present a Normalized Anisotropic Quality Index (NAQI), based on the Rényi entropy in the pseudo-Wigner domain, to assess mammography images in terms of spatial resolution and noise without any reference. The method was validated using synthetic images generated with an anthropomorphic software breast phantom, as well as clinical exposures of anthropomorphic physical breast phantoms and patient mammograms. The results reported by this no-reference index follow the same behavior as other well-established full-reference metrics, e.g., the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM). Reductions of 50% in the radiation dose in phantom images translated into a decrease of 4 dB in the PSNR, 25% in the SSIM, and 33% in the NAQI, evidencing that the proposed metric is sensitive to the noise resulting from dose reduction. The clinical results showed that images reduced to 53% and 30% of the standard radiation dose exhibited reductions of 15% and 25% in the NAQI, respectively.
Thus, this index may be used in clinical practice as an image quality indicator to improve quality assurance programs in mammography; moreover, the proposed method reduces inter-observer subjectivity in the reporting of image quality assessment.
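For readers unfamiliar with the full-reference baselines mentioned above, the PSNR can be sketched in a few lines of NumPy (a minimal illustrative sketch, not the authors' code; the toy images and noise levels are invented for the demo):

```python
import numpy as np

def psnr(reference: np.ndarray, degraded: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between a reference and a degraded image."""
    mse = np.mean((reference.astype(np.float64) - degraded.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

# Toy demo: noise grows as dose drops, so PSNR falls -- the trend that a
# no-reference index like NAQI must track without access to ground truth.
rng = np.random.default_rng(0)
clean = np.full((64, 64), 128.0)
full_dose = clean + rng.normal(0.0, 4.0, clean.shape)
half_dose = clean + rng.normal(0.0, 8.0, clean.shape)  # noisier acquisition
assert psnr(clean, half_dose) < psnr(clean, full_dose)
```

The point of a no-reference index is precisely that the `clean` reference used here is unavailable for clinical mammograms.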
Focus measure method based on the modulus of the gradient of the color planes for digital microscopy
NASA Astrophysics Data System (ADS)
Hurtado-Pérez, Román; Toxqui-Quitl, Carina; Padilla-Vivanco, Alfonso; Aguilar-Valdez, J. Félix; Ortega-Mendoza, Gabriel
2018-02-01
The modulus of the gradient of the color planes (MGC) is implemented to transform multichannel information to a grayscale image. This digital technique is used in two applications: (a) focus measurement during the autofocusing (AF) process and (b) extending the depth of field (EDoF) by means of multifocus image fusion. In the first case, the MGC procedure is based on an edge detection technique and is implemented in over 15 focus metrics typically used in digital microscopy. The MGC approach is tested on color images of histological sections for the selection of in-focus images. An appealing attribute of all the AF metrics working in the MGC space is their monotonic behavior even up to a magnification of 100×. An advantage of the MGC method is its computational simplicity and inherent parallelism. In the second application, a multifocus image fusion algorithm based on the MGC approach has been implemented on graphics processing units (GPUs). The resulting fused images are evaluated using a nonreference image quality metric. The proposed fusion method yields a high-quality fused image even under faulty illumination during image acquisition. Finally, the three-dimensional visualization of the in-focus image is shown.
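One plausible reading of the MGC transform (a hedged sketch, not the authors' implementation; the pooling over channels is our assumption) combines per-channel gradient magnitudes into a single grayscale map, on which a focus metric is then evaluated:

```python
import numpy as np

def mgc(image: np.ndarray) -> np.ndarray:
    """Modulus of the gradient of the color planes: pool the per-channel
    (R, G, B) gradient energy into one grayscale edge map."""
    img = image.astype(np.float64)
    gy, gx = np.gradient(img, axis=(0, 1))           # per-channel partials
    return np.sqrt((gx ** 2 + gy ** 2).sum(axis=2))  # combine over channels

def focus_measure(image: np.ndarray) -> float:
    """A simple autofocus metric in the MGC space: mean gradient energy.
    Sharper (in-focus) frames score higher, giving the monotonic behavior
    an autofocus search relies on."""
    return float(np.mean(mgc(image) ** 2))
```

In an AF loop, `focus_measure` would be evaluated per frame of a through-focus stack and the frame with the maximum score selected.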
Bernatowicz, K; Keall, P; Mishra, P; Knopf, A; Lomax, A; Kipritidis, J
2015-01-01
Prospective respiratory-gated 4D CT has been shown to reduce tumor image artifacts by up to 50% compared to conventional 4D CT. However, to date no studies have quantified the impact of gated 4D CT on normal lung tissue imaging, which is important in performing dose calculations based on accurate estimates of lung volume and structure. To determine the impact of gated 4D CT on thoracic image quality, the authors developed a novel simulation framework incorporating a realistic deformable digital phantom driven by patient tumor motion patterns. Based on this framework, the authors test the hypothesis that respiratory-gated 4D CT can significantly reduce lung imaging artifacts. Our simulation framework synchronizes the 4D extended cardiac torso (XCAT) phantom with tumor motion data in a quasi-real-time fashion, allowing simulation of three 4D CT acquisition modes featuring different levels of respiratory feedback: (i) "conventional" 4D CT that uses a constant imaging and couch-shift frequency, (ii) "beam paused" 4D CT that interrupts imaging to avoid oversampling at a given couch position and respiratory phase, and (iii) "respiratory-gated" 4D CT that triggers acquisition only when the respiratory motion fulfills phase-specific displacement gating windows based on prescan breathing data. Our framework generates a set of ground truth comparators, representing the average XCAT anatomy during beam-on for each of ten respiratory phase bins. Based on this framework, the authors simulated conventional, beam-paused, and respiratory-gated 4D CT images using tumor motion patterns from seven lung cancer patients across 13 treatment fractions, with a simulated 5.5 cm³ spherical lesion. Normal lung tissue image quality was quantified by comparing simulated and ground truth images in terms of overall mean square error (MSE) intensity difference, threshold-based lung volume error, and fractional false positive/false negative rates.
Averaged across all simulations and phase bins, respiratory gating reduced overall thoracic MSE by 46% compared to conventional 4D CT (p ≈ 10⁻¹⁹). Gating leads to small but significant (p < 0.02) reductions in lung volume errors (from 1.8% to 1.4%), false positives (from 4.0% to 2.6%), and false negatives (from 2.7% to 1.3%). These percentage reductions correspond to gating reducing image artifacts by 24-90 cm³ of lung tissue. Similar to earlier studies, gating reduced patient image dose by up to 22%, but with scan time increased by up to 135%. Beam paused 4D CT did not significantly impact normal lung tissue image quality, but did yield similar dose reductions as respiratory gating, without the added cost in scanning time. For a typical 6 L lung, respiratory-gated 4D CT can reduce image artifacts affecting up to 90 cm³ of normal lung tissue compared to conventional acquisition. This image improvement could have important implications for dose calculations based on 4D CT. Where image quality is less critical, beam paused 4D CT is a simple strategy to reduce imaging dose without sacrificing acquisition time.
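The three normal-tissue figures of merit named above (overall MSE, threshold-based lung volume error, fractional false-positive/false-negative rates) reduce to a few array operations. The sketch below is illustrative only; the -500 HU lung threshold and the voxel size are our assumptions, not values from the study:

```python
import numpy as np

def mse(sim: np.ndarray, truth: np.ndarray) -> float:
    """Overall mean-square intensity difference between simulated and ground-truth volumes."""
    return float(np.mean((sim - truth) ** 2))

def lung_volume_metrics(sim, truth, thresh=-500.0, voxel_cm3=0.001):
    """Threshold-based lung volume error (cm^3) and fractional false-positive /
    false-negative rates; lung voxels are assumed to lie below `thresh` HU."""
    sim_lung, true_lung = sim < thresh, truth < thresh
    vol_err = abs(int(sim_lung.sum()) - int(true_lung.sum())) * voxel_cm3
    fp = float(np.mean(sim_lung & ~true_lung))  # labeled lung, actually tissue
    fn = float(np.mean(~sim_lung & true_lung))  # lung missed by the simulation
    return vol_err, fp, fn
```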
Hojjati, Mojgan; Van Hedent, Steven; Rassouli, Negin; Tatsuoka, Curtis; Jordan, David; Dhanantwari, Amar; Rajiah, Prabhakar
2017-11-01
To evaluate the image quality of routine diagnostic images generated by a novel detector-based spectral detector CT (SDCT) and compare it with CT images obtained from a conventional scanner with an energy-integrating detector (Brilliance iCT). Routine diagnostic (conventional/polyenergetic) images are non-material-specific images that resemble single-energy images obtained at the same radiation dose. METHODS: ACR guideline-based phantom evaluations were performed on both SDCT and iCT for the CT adult body protocol. Retrospective analysis was performed on 50 abdominal CT scans from each scanner. Identical ROIs were placed at multiple locations in the abdomen, and attenuation, noise, SNR, and CNR were measured. Subjective image quality analysis on a 5-point Likert scale was performed by 2 readers for enhancement, noise, and image quality. In phantom studies, SDCT images met the ACR requirements for CT number and deviation, CNR and effective radiation dose. In patients, the qualitative scores were significantly higher for the SDCT than the iCT, including enhancement (4.79 ± 0.38 vs. 4.60 ± 0.51, p = 0.005), noise (4.63 ± 0.42 vs. 4.29 ± 0.50, p < 0.001), and quality (4.85 ± 0.32 vs. 4.57 ± 0.50, p < 0.001). The SNR was higher in SDCT than iCT for liver (7.4 ± 4.2 vs. 7.2 ± 5.3, p = 0.662), spleen (8.6 ± 4.1 vs. 7.4 ± 3.5, p = 0.152), kidney (11.1 ± 6.3 vs. 8.7 ± 5.0, p = 0.033), pancreas (6.90 ± 3.45 vs. 6.11 ± 2.64, p = 0.303), and aorta (14.2 ± 6.2 vs. 11.0 ± 4.9, p = 0.007), but was slightly lower in the lumbar vertebra (7.7 ± 4.2 vs. 7.8 ± 4.5, p = 0.937). The CNR of the SDCT was also higher than that of the iCT for all abdominal organs. Image quality of routine diagnostic images from the SDCT is comparable to that of a conventional CT scanner with energy-integrating detectors, making it suitable for diagnostic purposes.
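The ROI statistics reported above follow standard definitions; a minimal sketch is given below (exact noise-normalization conventions vary between studies, so treat this as one common choice rather than the authors' formula):

```python
import numpy as np

def roi_snr(roi: np.ndarray) -> float:
    """Signal-to-noise ratio of a region of interest:
    mean attenuation divided by its standard deviation (the ROI noise)."""
    return float(roi.mean() / roi.std(ddof=1))

def roi_cnr(organ: np.ndarray, background: np.ndarray) -> float:
    """Contrast-to-noise ratio: organ-vs-background contrast
    normalized by the background noise."""
    return float(abs(organ.mean() - background.mean()) / background.std(ddof=1))
```

In a study like this one, `organ` and `background` would be the pixel values inside matched ROIs placed on, e.g., liver parenchyma and adjacent fat.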
Quality control management and communication between radiologists and technologists.
Nagy, Paul G; Pierce, Benjamin; Otto, Misty; Safdar, Nabile M
2008-06-01
The greatest barrier to quality control (QC) in the digital imaging environment is the lack of communication and documentation between those who interpret images and those who acquire them. Paper-based QC methods are insufficient in a digital image management system. Problem work flow must be incorporated into reengineering efforts when migrating to a digital practice. The authors implemented a Web-based QC feedback tool to document and facilitate the communication of issues identified by radiologists. The goal was to promote a responsive and constructive tool that contributes to a culture of quality. The hypothesis was that by making it easier for radiologists to submit quality issues, the number of QC issues submitted would increase. The authors integrated their Web-based quality tracking system with a clinical picture archiving and communication system so that radiologists could report quality issues without disrupting clinical work flow. Graphical dashboarding techniques aid supervisors in using this database to identify the root causes of different types of issues. Over the initial 12-month rollout period, starting in the general section, the authors recorded 20 times more QC issues submitted by radiologists, accompanied by a rise in technologists' responsiveness to QC issues. For technologists with high numbers of QC issues, the incorporation of data from this tracking system proved useful in performance appraisals and in driving individual improvement. This tool is an example of the types of information technology innovations that can be leveraged to support QC in the digital imaging environment. Initial data suggest that the result is not only an improvement in quality but higher levels of satisfaction for both radiologists and technologists.
Multi-Sensor Fusion of Infrared and Electro-Optic Signals for High Resolution Night Images
Huang, Xiaopeng; Netravali, Ravi; Man, Hong; Lawrence, Victor
2012-01-01
Electro-optic (EO) image sensors exhibit high resolution and low noise levels at daytime, but they do not work in dark environments. Infrared (IR) image sensors exhibit poor resolution and cannot separate objects with similar temperatures. Therefore, we propose a novel framework for IR image enhancement based on information (e.g., edges) from EO images, which improves the resolution of IR images and helps us distinguish objects at night. Our framework superimposes/blends the edges of the EO image onto the corresponding transformed IR image to improve its resolution. In this framework, we adopt the theoretical point spread function (PSF) proposed by Hardie et al. for the IR image, which has the modulation transfer function (MTF) of a uniform detector array and the incoherent optical transfer function (OTF) of diffraction-limited optics. In addition, we design an inverse filter for the proposed PSF and use it for the IR image transformation. The framework requires four main steps: (1) inverse filter-based IR image transformation; (2) EO image edge detection; (3) registration; and (4) blending/superimposing of the obtained image pair. Simulation results show both blended and superimposed IR images, and demonstrate that blended IR images have better quality than the superimposed images. Additionally, based on the same steps, simulation results show a blended IR image of better quality when only the original IR image is available. PMID:23112602
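Steps (2) and (4) of the pipeline, EO edge detection and blending, can be sketched as follows. This is a simplified stand-in using a Sobel detector and linear alpha-blending; the original work's PSF-based inverse filtering and registration steps are omitted, and the blend weight is an invented parameter:

```python
import numpy as np

def sobel_edges(img: np.ndarray) -> np.ndarray:
    """Sobel gradient magnitude (a stand-in for the EO edge-detection step)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    pad = np.pad(img.astype(float), 1, mode="edge")
    gx = np.zeros_like(img, float)
    gy = np.zeros_like(img, float)
    for i in range(3):                 # correlate with both Sobel kernels
        for j in range(3):
            win = pad[i:i + img.shape[0], j:j + img.shape[1]]
            gx += kx[i, j] * win
            gy += ky[i, j] * win
    return np.hypot(gx, gy)

def blend_ir_with_eo_edges(ir: np.ndarray, eo: np.ndarray, alpha: float = 0.3):
    """Blend normalized EO edges into the (already registered) IR image,
    assuming 8-bit intensity ranges."""
    edges = sobel_edges(eo)
    edges /= edges.max() or 1.0        # guard against an all-flat EO frame
    return (1 - alpha) * ir + alpha * 255.0 * edges
```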
Clinical decision making using teleradiology in urology.
Lee, B R; Allaf, M; Moore, R; Bohlman, M; Wang, G M; Bishoff, J T; Jackman, S V; Cadeddu, J A; Jarrett, T W; Khazan, R; Kavoussi, L R
1999-01-01
Using a personal computer-based teleradiology system, we compared accuracy, confidence, and diagnostic ability in the interpretation of digitized radiographs to determine if teleradiology-imported studies convey sufficient information to make relevant clinical decisions involving urology. Variables of diagnostic accuracy, confidence, image quality, interpretation, and the impact of clinical decisions made after viewing digitized radiographs were compared with those of original radiographs. We evaluated 956 radiographs that included 94 IV pyelograms, four voiding cystourethrograms, and two nephrostograms. The radiographs were digitized and transferred over an Ethernet network to a remote personal computer-based viewing station. The digitized images were viewed by urologists and graded according to confidence in making a diagnosis, image quality, diagnostic difficulty, clinical management based on the image itself, and brief patient history. The hard-copy radiographs were then interpreted immediately afterward, and diagnostic decisions were reassessed. All analog radiographs were reviewed by an attending radiologist. Ninety-seven percent of the decisions made from the digitized radiographs did not change after reviewing conventional radiographs of the same case. When comparing the variables of clinical confidence, quality of the film on the teleradiology system versus analog films, and diagnostic difficulty, we found no statistical difference (p > .05) between the two techniques. Overall accuracy in interpreting the digitized images on the teleradiology system was 88% by urologists compared with that of the attending radiologist's interpretation of the analog radiographs. However, urologists detected findings on five (5%) analog radiographs that had been previously unreported by the radiologist. 
Viewing radiographs transmitted to a personal computer-based viewing station is an appropriate means of reviewing films with sufficient quality on which to base clinical decisions. Our focus was whether decisions made after viewing the transmitted radiographs would change after viewing the hard-copy images of the same case. In 97% of the cases, the decision did not change. In those cases in which management was altered, recommendation of further imaging studies was the most common factor.
Chen, Qing; Xu, Pengfei; Liu, Wenzhong
2016-01-01
Computer vision as a fast, low-cost, noncontact, and online monitoring technology has been an important tool to inspect product quality, particularly on a large-scale assembly production line. However, the current industrial vision system is far from satisfactory in the intelligent perception of complex grain images, comprising a large number of local homogeneous fragmentations or patches without distinct foreground and background. We attempt to solve this problem based on the statistical modeling of spatial structures of grain images. We present a physical explanation in advance to indicate that the spatial structures of the complex grain images are subject to a representative Weibull distribution according to the theory of sequential fragmentation, which is well known in the continued comminution of ore grinding. To delineate the spatial structure of the grain image, we present a method of multiscale and omnidirectional Gaussian derivative filtering. Then, a product quality classifier based on sparse multikernel–least squares support vector machine is proposed to solve the low-confidence classification problem of imbalanced data distribution. The proposed method is applied on the assembly line of a food-processing enterprise to classify (or identify) automatically the production quality of rice. The experiments on the real application case, compared with the commonly used methods, illustrate the validity of our method. PMID:26986726
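The claim that grain-image spatial structure follows a Weibull distribution can be checked empirically with a Weibull probability-plot fit. This is a generic sketch under the assumption that positive filter responses are the fitted quantity; the least-squares estimator is our choice, not the paper's:

```python
import numpy as np

def weibull_fit(samples: np.ndarray):
    """Estimate Weibull shape k and scale lam by least squares on the
    Weibull plot: ln(-ln(1 - F)) = k * ln(x) - k * ln(lam)."""
    x = np.sort(samples[samples > 0])
    n = x.size
    F = (np.arange(1, n + 1) - 0.5) / n      # empirical CDF (median ranks)
    y = np.log(-np.log(1.0 - F))
    k, b = np.polyfit(np.log(x), y, 1)       # slope = k, intercept = -k ln(lam)
    return k, np.exp(-b / k)
```

A straight Weibull plot (and a fitted k close to the value predicted by sequential-fragmentation theory) would support the paper's statistical model.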
Kamesh Iyer, Srikant; Tasdizen, Tolga; Likhite, Devavrat; DiBella, Edward
2016-01-01
Purpose: Rapid reconstruction of undersampled multicoil MRI data with iterative constrained reconstruction methods is a challenge. The authors sought to develop a new substitution-based variable splitting algorithm for faster reconstruction of multicoil cardiac perfusion MRI data. Methods: The new method, split Bregman multicoil accelerated reconstruction technique (SMART), uses a combination of split Bregman-based variable splitting and iterative reweighting techniques to achieve fast convergence. Total variation constraints are used along the spatial and temporal dimensions. The method is tested on nine ECG-gated dog perfusion datasets, acquired with a 30-ray golden ratio radial sampling pattern, and ten ungated human perfusion datasets, acquired with a 24-ray golden ratio radial sampling pattern. Image quality and reconstruction speed are evaluated and compared to a gradient descent (GD) implementation and to multicoil k-t SLR, a reconstruction technique that uses a combination of sparsity and low-rank constraints. Results: Comparisons based on a blur metric and visual inspection showed that SMART images had lower blur and better texture than the GD implementation. On average, the GD-based images had an ∼18% higher blur metric than SMART images. Reconstruction of dynamic contrast enhanced (DCE) cardiac perfusion images using the SMART method was ∼6 times faster than standard gradient descent methods. k-t SLR and SMART produced images with comparable image quality, though SMART was ∼6.8 times faster than k-t SLR. Conclusions: The SMART method is a promising approach to rapidly reconstruct good quality multicoil images from undersampled DCE cardiac perfusion data. PMID:27036592
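The spatial and temporal total variation constraints used by SMART have a simple discrete form. The sketch below shows the anisotropic TV of a dynamic series `x[t, y, x]`; it is illustrative only, since SMART minimizes this quantity inside a split Bregman iteration that is not reproduced here:

```python
import numpy as np

def total_variation(x: np.ndarray) -> float:
    """Anisotropic total variation over the temporal and spatial axes of a
    dynamic image series x[t, y, x]: the sum of absolute finite differences.
    Noisy or flickering series score high; piecewise-smooth series score low."""
    tv_t = np.abs(np.diff(x, axis=0)).sum()  # temporal differences
    tv_y = np.abs(np.diff(x, axis=1)).sum()  # vertical spatial differences
    tv_x = np.abs(np.diff(x, axis=2)).sum()  # horizontal spatial differences
    return float(tv_t + tv_y + tv_x)
```

A TV-constrained reconstruction trades data fidelity against this penalty, which suppresses incoherent undersampling artifacts while preserving edges.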
NASA Astrophysics Data System (ADS)
Heisler, Morgan; Lee, Sieun; Mammo, Zaid; Jian, Yifan; Ju, Myeong Jin; Miao, Dongkai; Raposo, Eric; Wahl, Daniel J.; Merkur, Andrew; Navajas, Eduardo; Balaratnasingam, Chandrakumar; Beg, Mirza Faisal; Sarunic, Marinko V.
2017-02-01
High quality visualization of the retinal microvasculature can improve our understanding of the onset and development of retinal vascular diseases, which are a major cause of visual morbidity and are increasing in prevalence. Optical Coherence Tomography Angiography (OCT-A) images are acquired over multiple seconds and are particularly susceptible to motion artifacts, which are more prevalent when imaging patients with pathology whose ability to fixate is limited. Multiple OCT-A images can be acquired sequentially for the purpose of removing motion artifact and increasing the contrast of the vascular network through averaging. Due to the motion artifacts, a robust registration pipeline is needed before feature-preserving image averaging can be performed. In this report, we present a novel GPU-accelerated pipeline for acquisition, processing, segmentation, and registration of multiple, sequentially acquired OCT-A images that corrects the motion artifacts in individual images for the purpose of averaging. High performance computing, blending CPU and GPU, was introduced to accelerate processing in order to provide high quality visualization of the retinal microvasculature and to enable more accurate quantitative analysis in a clinically useful time frame. Specifically, image discontinuities caused by rapid micro-saccadic movements and image warping due to smoother reflex movements were corrected by strip-wise affine registration estimated using Scale Invariant Feature Transform (SIFT) keypoints and subsequent local similarity-based non-rigid registration. These techniques improve the image quality, increasing the value for clinical diagnosis and extending the range of patients for whom high quality OCT-A images can be acquired.
Healy, Sinead; McMahon, Jill; Owens, Peter; Dockery, Peter; FitzGerald, Una
2018-02-01
Image segmentation is often imperfect, particularly in complex image sets such as z-stack micrographs of slice cultures, and there is a need for sufficient detail on the parameters used in quantitative image analysis to allow independent repeatability and appraisal. For the first time, we have critically evaluated, quantified and validated the performance of different segmentation methodologies using z-stack images of ex vivo glial cells. The BioVoxxel toolbox plugin, available in FIJI, was used to measure the relative quality, accuracy, specificity and sensitivity of 16 global and 9 local automatic thresholding algorithms. Automatic thresholding yields improved binary representation of glial cells compared with the conventional user-chosen single-threshold approach for confocal z-stacks acquired from ex vivo slice cultures. The performance of threshold algorithms varies considerably in quality, specificity, accuracy and sensitivity, with entropy-based thresholds scoring highest for fluorescent staining. We have used the BioVoxxel toolbox to correctly and consistently select the best automated threshold algorithm to segment z-projected images of ex vivo glial cells for downstream digital image analysis and to define segmentation quality. The automated OLIG2 cell count was validated using stereology. As image segmentation and feature extraction can quite critically affect the performance of successive steps in the image analysis workflow, it is becoming increasingly necessary to consider the quality of digital segmentation methodologies. Here, we have applied, validated and extended an existing performance-check methodology in the BioVoxxel toolbox to z-projected images of ex vivo glial cells. Copyright © 2017 Elsevier B.V. All rights reserved.
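As a concrete example of one global algorithm that such a comparison would rank, Otsu's method picks the threshold maximizing between-class variance over an intensity histogram. This is a generic NumPy sketch, not the BioVoxxel implementation:

```python
import numpy as np

def otsu_threshold(image: np.ndarray, nbins: int = 256) -> float:
    """Otsu's automatic global threshold: choose the histogram bin that
    maximizes the between-class variance of foreground vs. background."""
    counts, edges = np.histogram(image.ravel(), bins=nbins)
    centers = (edges[:-1] + edges[1:]) / 2.0
    w0 = np.cumsum(counts).astype(float)        # class-0 (below) weight
    w1 = w0[-1] - w0                            # class-1 (above) weight
    m0 = np.cumsum(counts * centers)            # class-0 cumulative mass
    mu0 = m0 / np.where(w0 == 0, 1, w0)         # class means (guard /0)
    mu1 = (m0[-1] - m0) / np.where(w1 == 0, 1, w1)
    between = w0 * w1 * (mu0 - mu1) ** 2        # between-class variance
    return float(centers[np.argmax(between)])
```

Entropy-based algorithms (the ones scoring highest above) replace the between-class variance criterion with an entropy criterion over the same histogram.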
Choi, Se Y; Ahn, Seung H; Choi, Jae D; Kim, Jung H; Lee, Byoung-Il; Kim, Jeong-In
2016-01-01
Objective: The purpose of this study was to compare CT image quality for evaluating urolithiasis using filtered back projection (FBP), statistical iterative reconstruction (IR) and knowledge-based iterative model reconstruction (IMR) across various scan parameters and radiation doses. Methods: A 5 × 5 × 5 mm³ uric acid stone was placed in a physical human phantom at the level of the pelvis. 3 tube voltages (120, 100 and 80 kV) and 4 current-time products (100, 70, 30 and 15 mAs) were implemented in 12 scans. Each scan was reconstructed with FBP, statistical IR (Levels 5-7) and knowledge-based IMR (soft-tissue Levels 1-3). The radiation dose, objective image quality and signal-to-noise ratio (SNR) were evaluated, and subjective assessments were performed. Results: The effective doses ranged from 0.095 to 2.621 mSv. Knowledge-based IMR showed better objective image noise and SNR than did FBP and statistical IR. The subjective image noise of FBP was worse than that of statistical IR and knowledge-based IMR. The subjective assessment scores deteriorated beyond a break point of 100 kV and 30 mAs. Conclusion: At the setting of 100 kV and 30 mAs, the radiation dose can be decreased by approximately 84% while maintaining the subjective image assessment scores. Advances in knowledge: Patients with urolithiasis can be evaluated with ultralow-dose non-enhanced CT using a knowledge-based IMR algorithm at a substantially reduced radiation dose with imaging quality preserved, thereby minimizing the risks of radiation exposure while providing clinically relevant diagnostic benefits for patients. PMID:26577542
Photodiode area effect on performance of X-ray CMOS active pixel sensors
NASA Astrophysics Data System (ADS)
Kim, M. S.; Kim, Y.; Kim, G.; Lim, K. T.; Cho, G.; Kim, D.
2018-02-01
Compared to conventional TFT-based X-ray imaging devices, CMOS-based X-ray image sensors are considered next-generation because they can be manufactured with very small pixel pitches and can acquire images at high speed. In addition, CMOS-based sensors have the advantage of integrating various functional circuits within the sensor. Image quality can also be improved by the high fill factor attainable in large pixels. If the subject is small, however, the pixel size must be reduced accordingly; moreover, the fill factor must be reduced to accommodate various functional circuits within the pixel. In this study, 3T-APS (active pixel sensor) devices with photodiodes of four different sizes were fabricated and evaluated. It is well known that a larger photodiode leads to improved overall performance. Nonetheless, once the photodiode area exceeds 1000 μm², the performance gain from further increases in photodiode size diminishes. As a result, considering the fill factor, a pixel pitch greater than 32 μm is not necessary to achieve high image quality. In addition, poor image quality is to be expected unless special sensor-design techniques are employed for sensors with a pixel pitch of 25 μm or less.
Favazza, Christopher P; Fetterly, Kenneth A; Hangiandreou, Nicholas J; Leng, Shuai; Schueler, Beth A
2015-01-01
Evaluation of flat-panel angiography equipment through conventional image quality metrics is limited by the scope of standard spatial-domain image quality metrics, such as contrast-to-noise ratio and spatial resolution, or by restricted access to the data needed to calculate Fourier-domain measurements, such as the modulation transfer function, noise power spectrum, and detective quantum efficiency. Observer models have been shown capable of overcoming these limitations and are able to comprehensively evaluate medical-imaging systems. We present a spatial-domain channelized Hotelling observer model to calculate the detectability index (DI) of disks of different sizes and to compare the performance of different imaging conditions and angiography systems. When appropriate, changes in DIs were compared to expectations based on the classical Rose model of signal detection to assess linearity of the model with quantum signal-to-noise ratio (SNR) theory. For these experiments, the estimated uncertainty of the DIs was less than 3%, allowing for precise comparison of imaging systems or conditions. For most experimental variables, DI changes were linear with expectations based on quantum SNR theory. DIs calculated for the smallest objects demonstrated nonlinearity with quantum SNR theory due to system blur. Two angiography systems with different detector element sizes were shown to perform similarly across the majority of the detection tasks.
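A textbook form of the channelized Hotelling observer detectability index can be sketched as follows (a hedged sketch: the channel design and covariance handling here are generic choices, not necessarily the authors' exact implementation):

```python
import numpy as np

def detectability_index(signal_imgs, noise_imgs, channels):
    """Channelized Hotelling observer DI: d' = sqrt(dv^T S^{-1} dv), where
    dv is the mean difference of channel outputs between signal-present and
    signal-absent images and S is the average intra-class channel covariance.
    `signal_imgs`/`noise_imgs` are (n_images, n_pixels); `channels` is
    (n_pixels, n_channels) with at least two channels."""
    vs = signal_imgs @ channels            # channel outputs, signal present
    vn = noise_imgs @ channels             # channel outputs, signal absent
    dv = vs.mean(0) - vn.mean(0)
    S = 0.5 * (np.atleast_2d(np.cov(vs, rowvar=False))
               + np.atleast_2d(np.cov(vn, rowvar=False)))
    return float(np.sqrt(dv @ np.linalg.solve(S, dv)))
```

In practice the channels would be, e.g., Gabor or Laguerre-Gauss templates matched to the disk sizes of the detection task.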
CNN-Based Retinal Image Upscaling Using Zero Component Analysis
NASA Astrophysics Data System (ADS)
Nasonov, A.; Chesnakov, K.; Krylov, A.
2017-05-01
The aim of the paper is to obtain high-quality image upscaling for noisy images that are typical in medical image processing. A new training scenario for a convolutional neural network based image upscaling method is proposed. Its main idea is a novel dataset preparation method for deep learning. The dataset contains pairs of noisy low-resolution images and corresponding noiseless high-resolution images. To achieve better results at edges and textured areas, Zero Component Analysis is applied to these images. The upscaling results are compared with other state-of-the-art methods such as DCCI, SI-3 and SRCNN on noisy medical ophthalmological images. Objective evaluation of the results confirms the high quality of the proposed method. Visual analysis shows that fine details and structures such as blood vessels are preserved, the noise level is reduced, and no artifacts or non-existing details are added. These properties are essential in establishing a retinal diagnosis, so the proposed algorithm is recommended for use in real medical applications.
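Zero Component Analysis (ZCA) whitening, the preprocessing step named in the title, decorrelates the data while staying as close as possible to the original images. A minimal sketch (the epsilon regularizer and the rows-as-samples data layout are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def zca_whiten(X: np.ndarray, eps: float = 1e-5) -> np.ndarray:
    """ZCA whitening: W = E (D + eps)^{-1/2} E^T from the eigendecomposition
    of the sample covariance. Unlike plain PCA whitening, the symmetric W
    keeps the whitened data maximally similar to the input."""
    Xc = X - X.mean(axis=0)                      # center each feature
    cov = Xc.T @ Xc / (X.shape[0] - 1)
    eigval, eigvec = np.linalg.eigh(cov)
    W = eigvec @ np.diag(1.0 / np.sqrt(eigval + eps)) @ eigvec.T
    return Xc @ W
```

Applied to image patches before CNN training, this flattens the second-order statistics so the network spends capacity on structure (edges, vessels) rather than on redundant low-frequency correlations.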
Wellenberg, Ruud H H; Boomsma, Martijn F; van Osch, Jochen A C; Vlassenbroek, Alain; Milles, Julien; Edens, Mireille A; Streekstra, Geert J; Slump, Cornelis H; Maas, Mario
To quantify the combined use of iterative model-based reconstruction (IMR) and orthopaedic metal artefact reduction (O-MAR) in reducing metal artefacts and improving image quality in a total hip arthroplasty phantom. Scans acquired at several dose levels and kVps were reconstructed with filtered back-projection (FBP), iterative reconstruction (iDose) and IMR, with and without O-MAR. Computed tomography (CT) numbers, noise levels, signal-to-noise ratios and contrast-to-noise ratios were analysed. Iterative model-based reconstruction results in overall improved image quality compared to iDose and FBP (P < 0.001). Orthopaedic metal artefact reduction is most effective in reducing severe metal artefacts, improving CT number accuracy by 50%, 60%, and 63% (P < 0.05) and reducing noise by 1%, 62%, and 85% (P < 0.001), while improving signal-to-noise ratios by 27%, 47%, and 46% (P < 0.001) and contrast-to-noise ratios by 16%, 25%, and 19% (P < 0.001) with FBP, iDose, and IMR, respectively. The combined use of IMR and O-MAR strongly improves overall image quality and strongly reduces metal artefacts in the CT imaging of a total hip arthroplasty phantom.
Study of Image Qualities From 6D Robot-Based CBCT Imaging System of Small Animal Irradiator.
Sharma, Sunil; Narayanasamy, Ganesh; Clarkson, Richard; Chao, Ming; Moros, Eduardo G; Zhang, Xin; Yan, Yulong; Boerma, Marjan; Paudel, Nava; Morrill, Steven; Corry, Peter; Griffin, Robert J
2017-01-01
To assess the quality of cone beam computed tomography images obtained by a robotic arm-based and image-guided small animal conformal radiation therapy device. The small animal conformal radiation therapy device is equipped with a 40 to 225 kV X-ray tube mounted on a custom-made gantry, a 1024 × 1024-pixel flat-panel detector (200 μm resolution), and a programmable 6-degree-of-freedom robot for cone beam computed tomography imaging and conformal delivery of radiation doses. A series of 2-dimensional radiographic projection images were recorded in cone beam mode by placing and rotating microcomputed tomography phantoms on the "palm" of the robotic arm. Reconstructed images were studied for image quality (spatial resolution, image uniformity, computed tomography number linearity, voxel noise, and artifacts). Geometric accuracy was measured to be 2%, corresponding to 0.7 mm accuracy on a Shelley microcomputed tomography QA phantom. Qualitative resolution of reconstructed axial computed tomography slices using the resolution coils was within 200 μm. Quantitative spatial resolution was found to be 3.16 lp/mm. Uniformity of the system was measured to within 34 Hounsfield units on a QRM microcomputed tomography water phantom. Computed tomography numbers measured using the linearity plate were linear with material density (R² > 0.995). Cone beam computed tomography images of the QRM multidisk phantom had minimal artifacts. Results showed that the small animal conformal radiation therapy device is capable of producing high-quality cone beam computed tomography images for precise and conformal small animal dose delivery. With its high-caliber imaging capabilities, the small animal conformal radiation therapy device is a powerful tool for small animal research.
Cardio-PACs: a new opportunity
NASA Astrophysics Data System (ADS)
Heupler, Frederick A., Jr.; Thomas, James D.; Blume, Hartwig R.; Cecil, Robert A.; Heisler, Mary
2000-05-01
It is now possible to replace film-based image management in the cardiac catheterization laboratory with a Cardiology Picture Archiving and Communication System (Cardio-PACS) based on digital imaging technology. The first step in the conversion process is installation of a digital image acquisition system that is capable of generating high-quality DICOM-compatible images. The next three steps, which are the subject of this presentation, involve image display, distribution, and storage. Clinical requirements and associated cost considerations for these three steps are listed below: Image display: (1) Image quality equal to film, with DICOM format, lossless compression, image processing, desktop PC-based with color monitor, and physician-friendly imaging software; (2) Performance specifications include: acquire 30 frames/sec; replay 15 frames/sec; access to file server 5 seconds, and to archive 5 minutes; (3) Compatibility of image file, transmission, and processing formats; (4) Image manipulation: brightness, contrast, gray scale, zoom, biplane display, and quantification; (5) User-friendly control of image review. Image distribution: (1) Standard IP-based network between cardiac catheterization laboratories, file server, long-term archive, review stations, and remote sites; (2) Non-proprietary formats; (3) Bidirectional distribution. Image storage: (1) CD-ROM vs disk vs tape; (2) Verification of data integrity; (3) User-designated storage capacity for catheterization laboratory, file server, long-term archive. Costs: (1) Image acquisition equipment, file server, long-term archive; (2) Network infrastructure; (3) Review stations and software; (4) Maintenance and administration; (5) Future upgrades and expansion; (6) Personnel.
Quantitative metrics for assessment of chemical image quality and spatial resolution
Kertesz, Vilmos; Cahill, John F.; Van Berkel, Gary J.
2016-02-28
Rationale: Currently, objective/quantitative descriptions of the quality and spatial resolution of mass spectrometry derived chemical images are not standardized. Development of these standardized metrics is required to objectively describe chemical imaging capabilities of existing and/or new mass spectrometry imaging technologies. Such metrics would allow unbiased judgment of intra-laboratory advancement and/or inter-laboratory comparison for these technologies if used together with standardized surfaces. Methods: We developed two image metrics, viz., chemical image contrast (ChemIC) based on signal-to-noise related statistical measures on chemical image pixels and corrected resolving power factor (cRPF) constructed from statistical analysis of mass-to-charge chronograms across features of interest in an image. These metrics, quantifying chemical image quality and spatial resolution, respectively, were used to evaluate chemical images of a model photoresist patterned surface collected using a laser ablation/liquid vortex capture mass spectrometry imaging system under different instrument operational parameters. Results: The calculated ChemIC and cRPF metrics determined in an unbiased fashion the relative ranking of chemical image quality obtained with the laser ablation/liquid vortex capture mass spectrometry imaging system. These rankings were used to show that both chemical image contrast and spatial resolution deteriorated with increasing surface scan speed, increased lane spacing and decreasing size of surface features. Conclusions: ChemIC and cRPF, respectively, were developed and successfully applied for the objective description of chemical image quality and spatial resolution of chemical images collected from model surfaces using a laser ablation/liquid vortex capture mass spectrometry imaging system.
Saha, Sajib Kumar; Fernando, Basura; Cuadros, Jorge; Xiao, Di; Kanagasingam, Yogesan
2018-04-27
Fundus images obtained in a telemedicine program are acquired at different sites by people with varying levels of experience. This results in a relatively high percentage of images that are later marked as unreadable by graders. Unreadable images require a recapture, which is time- and cost-intensive. An automated method that determines image quality during acquisition is an effective alternative. To that end, we describe here an automated method for the assessment of image quality in the context of diabetic retinopathy (DR). The method applies machine learning techniques to assess the image and assign it to an 'accept' or 'reject' category; a 'reject' image requires a recapture. A deep convolutional neural network is trained to grade the images automatically. A large representative set of 7000 colour fundus images, obtained from EyePACS and made available by the California Healthcare Foundation, was used for the experiment. Three retinal image analysis experts categorised these images into 'accept' and 'reject' classes based on a precise definition of image quality in the context of DR. The network was trained using 3428 images. The method shows an accuracy of 100% in categorising 'accept' and 'reject' images, about 2% higher than a traditional machine learning method. In a clinical trial, the proposed method showed 97% agreement with a human grader. The method can easily be incorporated into the fundus image capturing system in the acquisition centre and can guide the photographer on whether a recapture is necessary.
Ryu, Young Jin; Choi, Young Hun; Cheon, Jung-Eun; Ha, Seongmin; Kim, Woo Sun; Kim, In-One
2016-03-01
CT of pediatric phantoms can provide useful guidance for the optimization of knowledge-based iterative reconstruction CT. To compare radiation dose and image quality of CT images obtained at different radiation doses and reconstructed with knowledge-based iterative reconstruction, hybrid iterative reconstruction and filtered back-projection, we scanned a 5-year-old anthropomorphic phantom at seven levels of radiation. We then reconstructed CT data with knowledge-based iterative reconstruction (iterative model reconstruction [IMR] levels 1, 2 and 3; Philips Healthcare, Andover, MA), hybrid iterative reconstruction (iDose(4) levels 3 and 7; Philips Healthcare, Andover, MA) and filtered back-projection. The noise, signal-to-noise ratio and contrast-to-noise ratio were calculated. We evaluated low-contrast resolution and detectability using low-contrast targets, and subjective and objective spatial resolution using the line pairs and wire. With radiation at 100 peak kVp and 100 mAs (3.64 mSv), the relative doses ranged from 5% (0.19 mSv) to 150% (5.46 mSv). Lower noise and higher signal-to-noise ratio, contrast-to-noise ratio and objective spatial resolution were generally achieved in ascending order of filtered back-projection, iDose(4) levels 3 and 7, and IMR levels 1, 2 and 3, at all radiation dose levels. Compared with filtered back-projection at 100% dose, similar noise levels were obtained on IMR level 2 images at 24% dose and iDose(4) level 3 images at 50% dose. Regarding low-contrast resolution, low-contrast detectability and objective spatial resolution, IMR level 2 images at 24% dose showed image quality comparable to filtered back-projection at 100% dose. Subjective spatial resolution was not greatly affected by the reconstruction algorithm. Reduced-dose IMR obtained at 0.92 mSv (24%) showed image quality similar to routine-dose filtered back-projection obtained at 3.64 mSv (100%), as did half-dose iDose(4) obtained at 1.81 mSv.
Wang, Jianji; Zheng, Nanning
2013-09-01
Fractal image compression (FIC) is an image coding technology based on the local similarity of image structure. It is widely used in many fields such as image retrieval, image denoising, image authentication, and encryption. FIC, however, suffers from high computational complexity in encoding. Although many schemes have been published to speed up encoding, they do not easily satisfy the encoding-time or reconstructed-image-quality requirements. In this paper, a new FIC scheme is proposed based on the fact that the affine similarity between two blocks in FIC is equivalent to the absolute value of Pearson's correlation coefficient (APCC) between them. First, all blocks in the range and domain pools are chosen and classified using an APCC-based block classification method to increase the matching probability. Second, by sorting the domain blocks with respect to the APCCs between these domain blocks and a preset block in each class, the matching domain block for a range block can be searched within the selected domain set in which these APCCs are closer to the APCC between the range block and the preset block. Experimental results show that the proposed scheme can significantly speed up the encoding process in FIC while preserving the reconstructed image quality well.
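The equivalence this abstract rests on — affine block similarity in FIC reducing to the absolute Pearson correlation coefficient — can be illustrated with a minimal pure-Python sketch. This is not the authors' implementation; the function name `apcc` and the flattened-block representation are assumptions for illustration.

```python
import math

def apcc(block_a, block_b):
    """Absolute value of Pearson's correlation coefficient between two
    equally sized image blocks, given as flattened pixel lists."""
    n = len(block_a)
    mean_a = sum(block_a) / n
    mean_b = sum(block_b) / n
    cov = sum((a - mean_a) * (b - mean_b) for a, b in zip(block_a, block_b))
    var_a = sum((a - mean_a) ** 2 for a in block_a)
    var_b = sum((b - mean_b) ** 2 for b in block_b)
    if var_a == 0 or var_b == 0:
        return 0.0  # a flat block carries no structure to match
    return abs(cov) / math.sqrt(var_a * var_b)

# A range block that is an affine transform (contrast scale + brightness
# offset) of a domain block is a perfect fractal match: APCC == 1.
domain_block = [10, 20, 30, 40]
range_block = [2 * p + 5 for p in domain_block]
print(apcc(domain_block, range_block))  # 1.0
```

Because an affine map b = s·a + o leaves the correlation structure unchanged (up to sign, which the absolute value removes), sorting domain blocks by APCC against a preset block is a cheap proxy for the collage-matching error.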
Evaluation of image quality in terahertz pulsed imaging using test objects.
Fitzgerald, A J; Berry, E; Miles, R E; Zinovev, N N; Smith, M A; Chamberlain, J M
2002-11-07
As with other imaging modalities, the performance of terahertz (THz) imaging systems is limited by factors of spatial resolution, contrast and noise. The purpose of this paper is to introduce test objects and image analysis methods to evaluate and compare THz image quality in a quantitative and objective way, so that alternative terahertz imaging system configurations and acquisition techniques can be compared, and the range of image parameters can be assessed. Two test objects were designed and manufactured, one to determine the modulation transfer function (MTF) and the other to derive image signal-to-noise ratio (SNR) at a range of contrasts. As expected, the higher THz frequencies had larger MTFs and better spatial resolution, as determined by the spatial frequency at which the MTF dropped below the 20% threshold. Image SNR was compared for time-domain and frequency-domain image parameters, and time-delay-based images consistently demonstrated higher SNR than intensity-based parameters such as relative transmittance, because the latter are more strongly affected by sources of noise in the THz system such as laser fluctuations and detector shot noise.
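An MTF measured from a test object is conventionally the normalized Fourier magnitude of the line spread function (LSF). The paper's actual processing chain is not specified; this is a generic sketch (pure Python, hypothetical function name, naive DFT for clarity):

```python
import math

def mtf_from_lsf(lsf):
    """Normalized magnitude of the DFT of a sampled line spread function.
    Returns MTF values for frequency indices 0 .. n//2."""
    n = len(lsf)
    mags = []
    for k in range(n // 2 + 1):
        re = sum(lsf[i] * math.cos(2 * math.pi * k * i / n) for i in range(n))
        im = sum(-lsf[i] * math.sin(2 * math.pi * k * i / n) for i in range(n))
        mags.append(math.hypot(re, im))
    return [m / mags[0] for m in mags]  # normalize so MTF(0) = 1

# A delta-like (perfectly sharp) LSF gives a flat MTF; a broadened LSF
# rolls off toward higher spatial frequencies.
print(mtf_from_lsf([1.0, 0.0, 0.0, 0.0]))  # [1.0, 1.0, 1.0]
```

The 20% criterion used in the paper then corresponds to the first frequency at which this normalized curve drops below 0.2.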
Morchel, Herman; Ogedegbe, Chinwe; Chaplin, William; Cheney, Brianna; Zakharchenko, Svetlana; Misch, David; Schwartz, Matthew; Feldman, Joseph; Kaul, Sanjeev
2018-03-01
To determine if physicians trained in ultrasound interpretation perceive a difference in image quality and usefulness between Extended Focused Assessment with Sonography ultrasound examinations performed at bedside in a hospital vs. those performed by emergency medical technicians minimally trained in medical ultrasound in a moving ambulance and transmitted to the hospital via a novel wireless system. In particular, we sought to demonstrate that useful images could be obtained from patients in less than optimal imaging conditions; that is, while they were in transport. Emergency medical technicians performed the examinations during transport of blunt trauma patients. Upon patient arrival at the hospital, a bedside Extended Focused Assessment with Sonography examination was performed by a physician. Both examinations were recorded and later reviewed by physicians trained in ultrasound interpretation. Data were collected on 20 blunt trauma patients over a period of 13 months. Twenty ultrasound-trained physicians blindly compared transmitted vs. bedside images using 11 Questionnaire for User Interaction Satisfaction scales. Four paired-samples t-tests were conducted to assess mean differences between ratings for ambulatory and base images. Although the average rating across all subjects and raters tended to be slightly higher in the base condition than in the ambulatory condition, none of these differences was statistically significant. These results suggest that the quality of the ambulatory images was viewed as essentially as good as that of the base images.
Softcopy quality ruler method: implementation and validation
NASA Astrophysics Data System (ADS)
Jin, Elaine W.; Keelan, Brian W.; Chen, Junqing; Phillips, Jonathan B.; Chen, Ying
2009-01-01
A softcopy quality ruler method was implemented for the International Imaging Industry Association (I3A) Camera Phone Image Quality (CPIQ) Initiative. This work extends ISO 20462 Part 3 by creating reference digital images of known subjective image quality, complementing the hardcopy Standard Reference Stimuli (SRS). The softcopy ruler method was developed using images from a Canon EOS 1Ds Mark II D-SLR digital still camera (DSC) and a Kodak P880 point-and-shoot DSC. Images were viewed on an Apple 30-inch Cinema Display at a viewing distance of 34 inches. Ruler images were made for 16 scenes. Thirty ruler images were generated for each scene, representing ISO 20462 Standard Quality Scale (SQS) values of approximately 2 to 31 in increments of one just noticeable difference (JND), by adjusting the system modulation transfer function (MTF). A Matlab GUI was developed to display the ruler and test images side by side, with a user-adjustable ruler level controlled by a slider. A validation study was performed at Kodak, Vista Point Technology, and Aptina Imaging, in which all three companies set up a similar viewing lab to run the softcopy ruler method. The results show that the three sets of data are in reasonable agreement with each other, with differences within the range expected from observer variability. Compared to previous implementations of the quality ruler, the slider-based user interface allows approximately 2x faster assessments with 21.6% better precision.
Karnowski, T P; Aykac, D; Giancardo, L; Li, Y; Nichols, T; Tobin, K W; Chaum, E
2011-01-01
The automated detection of diabetic retinopathy and other eye diseases in images of the retina has great promise as a low-cost method for broad-based screening. Many systems in the literature which perform automated detection include a quality estimation step and physiological feature detection, including the vascular tree and the optic nerve / macula location. In this work, we study the robustness of an automated disease detection method with respect to the accuracy of the optic nerve location and the quality of the images obtained as judged by a quality estimation algorithm. The detection algorithm features microaneurysm and exudate detection followed by feature extraction on the detected population to describe the overall retina image. Labeled images of retinas ground-truthed to disease states are used to train a supervised learning algorithm to identify the disease state of the retina image and exam set. Under the restrictions of high confidence optic nerve detections and good quality imagery, the system achieves a sensitivity and specificity of 94.8% and 78.7% with area-under-curve of 95.3%. Analysis of the effect of constraining quality and the distinction between mild non-proliferative diabetic retinopathy, normal retina images, and more severe disease states is included.
Video conference quality assessment based on cooperative sensing of video and audio
NASA Astrophysics Data System (ADS)
Wang, Junxi; Chen, Jialin; Tian, Xin; Zhou, Cheng; Zhou, Zheng; Ye, Lu
2015-12-01
This paper presents a method for video conference quality assessment based on cooperative sensing of video and audio. In this method, a proposed video quality evaluation method is used to assess video frame quality. Each video frame is divided into a noise image and a filtered image by bilateral filtering, which, like the human visual system, acts as a low-pass filter. The audio frames are evaluated with the PEAQ algorithm. The two results are combined to evaluate overall video conference quality. A video conference database was built to test the performance of the proposed method. The objective results correlate well with MOS, from which we conclude that the proposed method is effective in assessing video conference quality.
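The split into a filtered image and a noise image can be sketched with a bilateral filter in 1-D (illustrative only; the paper's 2-D kernel sizes and sigmas are not given, so the parameter values here are assumptions):

```python
import math

def bilateral_1d(sig, sigma_s=1.0, sigma_r=10.0, radius=2):
    """1-D bilateral filter: each output sample is a weighted average whose
    weights combine spatial closeness and intensity similarity, so noise is
    smoothed while strong edges survive."""
    out = []
    for i, v in enumerate(sig):
        wsum = vsum = 0.0
        for j in range(max(0, i - radius), min(len(sig), i + radius + 1)):
            w = math.exp(-((i - j) ** 2) / (2 * sigma_s ** 2)
                         - ((v - sig[j]) ** 2) / (2 * sigma_r ** 2))
            wsum += w
            vsum += w * sig[j]
        out.append(vsum / wsum)
    return out

# The "noise image" of the method is then the residual: signal - filtered.
signal = [0, 0, 0, 100, 100, 100]
filtered = bilateral_1d(signal)
noise = [s - f for s, f in zip(signal, filtered)]
```

Because intensity-dissimilar neighbors get near-zero weight, the step edge in `signal` is preserved almost exactly, which is what makes the residual a plausible noise estimate.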
USDA-ARS's Scientific Manuscript database
New, non-destructive sensing techniques for fast and more effective quality assessment of fruits and vegetables are needed to meet the ever-increasing consumer demand for better, more consistent and safer food products. Over the past 15 years, hyperspectral imaging has emerged as a new generation of...
Fritscher, Karl; Grunerbl, Agnes; Hanni, Markus; Suhm, Norbert; Hengg, Clemens; Schubert, Rainer
2009-10-01
Currently, conventional X-ray and CT images, as well as invasive methods performed during the surgical intervention, are used to judge the local quality of a fractured proximal femur. However, these approaches are either dependent on the surgeon's experience or cannot assist diagnostic and planning tasks preoperatively. Therefore, this work proposes a method for the individual analysis of local bone quality in the proximal femur based on model-based analysis of CT and X-ray images of femur specimens. A combined representation of the shape and spatial intensity distribution of an object, together with different statistical approaches for dimensionality reduction, is used to create a statistical appearance model for assessing local bone quality in CT and X-ray images. The developed algorithms are tested and evaluated on 28 femur specimens. It is shown that the tools and algorithms presented here are highly adequate to automatically and objectively predict bone mineral density values as well as a biomechanical parameter of the bone that can be measured intraoperatively.
[Phantom studies of ultrasound equipment for quality improvement in breast diagnosis].
Madjar, H; Mundinger, A; Lattermann, U; Gufler, H; Prömpeler, H J
1996-04-01
According to the German guidelines for quality control of ultrasonic equipment, the following conditions are required for breast ultrasound: a transducer frequency between 5-7.5 MHz and a minimum field of view of 5 cm. Satisfactory images must be obtained at depths between 0.5 and 4 cm, with a wide tolerance of the focal zones. This allows the use of poor-quality equipment that does not produce satisfactory image quality, and it excludes a number of high-frequency, high-resolution transducers with a field of view below 5 cm. This study with a test phantom was performed to define image quality objectively. Sixteen ultrasound instruments in different price categories were used to perform standardized examinations on a breast phantom model 550 (ATS Laboratories, Bridgeport, USA). Contrast and spatial resolution at different penetration depths were investigated on cyst phantoms of 1-4 mm diameter and wire targets with defined distances between 0.5 and 3 mm. Four investigators rated the images. A positive correlation was seen between price category and image quality. This study demonstrates that transducer frequency and image geometry alone do not allow sufficient quality control. An improvement of ultrasound diagnosis is only possible if equipment guidelines are based on standard examinations with test phantoms.
Wilkins, Ruth; Flegal, Farrah; Knoll, Joan H.M.; Rogan, Peter K.
2017-01-01
Accurate digital image analysis of abnormal microscopic structures relies on high-quality images and on minimizing the rates of false positive (FP) and negative objects in images. Cytogenetic biodosimetry detects dicentric chromosomes (DCs) that arise from exposure to ionizing radiation, and determines the radiation dose received based on DC frequency. Improvements in automated DC recognition increase the accuracy of dose estimates by reclassifying FP DCs as monocentric chromosomes or chromosome fragments. We also present image segmentation methods to rank high-quality digital metaphase images and eliminate suboptimal metaphase cells. A set of chromosome morphology segmentation methods selectively filtered out FP DCs arising primarily from sister chromatid separation, chromosome fragmentation, and cellular debris. This reduced FPs by an average of 55% and was highly specific to these abnormal structures (≥97.7%) in three samples. Additional filters selectively removed images with incomplete, highly overlapped, or missing metaphase cells, or with poor overall chromosome morphologies that increased FP rates. Image selection is optimized and FP DCs are minimized by combining multiple feature-based segmentation filters and a novel image sorting procedure based on the known distribution of chromosome lengths. Applying the same image segmentation filtering procedures to both calibration and test samples reduced the average dose estimation error from 0.4 Gy to <0.2 Gy, obviating the need to first manually review these images. This reliable and scalable solution enables batch processing for multiple samples of unknown dose, and meets current requirements for triage radiation biodosimetry of high-quality metaphase cell preparations. PMID:29026522
Lee, Ki Baek
2018-01-01
Objective: To describe the quantitative image quality and histogram-based evaluation of an iterative reconstruction (IR) algorithm in chest computed tomography (CT) scans at low-to-ultralow CT radiation dose levels. Materials and Methods: In an adult anthropomorphic phantom, chest CT scans were performed with 128-section dual-source CT at 70, 80, 100, 120, and 140 kVp, at the reference dose level (3.4 mGy in volume CT dose index [CTDIvol]) and at 30%-, 60%-, and 90%-reduced levels (2.4, 1.4, and 0.3 mGy). The CT images were reconstructed using filtered back projection (FBP) and the IR algorithm with strengths 1, 3, and 5. Image noise, signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR) were statistically compared between different dose levels, tube voltages, and reconstruction algorithms. Moreover, histograms of subtraction images before and after standardization in the x- and y-axes were visually compared. Results: Compared with FBP images, IR images with strengths 1, 3, and 5 demonstrated image noise reduction of up to 49.1%, SNR increase of up to 100.7%, and CNR increase of up to 67.3%. Noteworthy image quality degradations on IR images, including a 184.9% increase in image noise, a 63.0% decrease in SNR, and a 51.3% decrease in CNR, were shown between the 60%- and 90%-reduced levels of radiation dose (p < 0.0001). Subtraction histograms between FBP and IR images showed progressively increased dispersion with increased IR strength and increased dose reduction. After standardization, the histograms appeared deviated and ragged between FBP images and IR images with strength 3 or 5, but almost normally distributed between FBP images and IR images with strength 1. Conclusion: The IR algorithm may be used to save radiation dose without substantial image quality degradation in chest CT scanning of the adult anthropomorphic phantom, down to approximately 1.4 mGy in CTDIvol (60% reduced dose). PMID:29354008
Pandey, Anil K; Bisht, Chandan S; Sharma, Param D; ArunRaj, Sreedharan Thankarajan; Taywade, Sameer; Patel, Chetan; Bal, Chandrashekhar; Kumar, Rakesh
2017-11-01
(99m)Tc-methylene diphosphonate ((99m)Tc-MDP) bone scintigraphy images have a limited number of counts per pixel. A noise filtering method based on local statistics of the image produces better results than a linear filter; however, the mask size has a significant effect on image quality. In this study, we identified the optimal mask size that yields a good smooth bone scan image. Forty-four bone scan images were processed using mask sizes of 3, 5, 7, 9, 11, 13, and 15 pixels. The input and processed images were reviewed in two steps. In the first step, the images were inspected, and the mask sizes that produced images with significant loss of clinical detail in comparison with the input image were excluded. In the second step, the image quality of the 40 sets of images (each set comprising the input image and its corresponding three processed images with 3-, 5-, and 7-pixel masks) was assessed by two nuclear medicine physicians. They selected one good smooth image from each set. The image quality was also assessed quantitatively with a line profile. Fisher's exact test was used to find statistically significant differences between the image quality obtained with the 5- and 7-pixel masks at a 5% cut-off. A statistically significant difference was found between the images processed with the 5- and 7-pixel masks (P=0.00528). The identified optimal mask size to produce a good smooth image was 7 pixels. The best mask size for the Lee filter was thus found to be 7×7 pixels, which yielded (99m)Tc-MDP bone scan images with the highest acceptable smoothness.
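A local-statistics filter of the kind studied here can be sketched as follows. This is the generic textbook (Lee-type) formulation in pure Python, not the clinical implementation; the parameter `noise_var` is an assumed input that would be estimated from a flat region in practice.

```python
def lee_filter(img, mask=7, noise_var=25.0):
    """Local-statistics noise filter (Lee-type). Within each mask-sized
    window the output is pulled toward the local mean; the gain k is near 0
    in flat regions (strong smoothing) and near 1 where local variance is
    high, so edges and hot spots are preserved."""
    h, w = len(img), len(img[0])
    r = mask // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[yy][xx]
                    for yy in range(max(0, y - r), min(h, y + r + 1))
                    for xx in range(max(0, x - r), min(w, x + r + 1))]
            m = sum(vals) / len(vals)
            var = sum((v - m) ** 2 for v in vals) / len(vals)
            k = max(0.0, var - noise_var) / var if var > 0 else 0.0
            out[y][x] = m + k * (img[y][x] - m)
    return out
```

Larger masks average over more pixels and so smooth more aggressively, at the cost of clinical detail — the trade-off behind the 7-pixel optimum reported above.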
The impact of skull bone intensity on the quality of compressed CT neuro images
NASA Astrophysics Data System (ADS)
Kowalik-Urbaniak, Ilona; Vrscay, Edward R.; Wang, Zhou; Cavaro-Menard, Christine; Koff, David; Wallace, Bill; Obara, Boguslaw
2012-02-01
The increasing use of technologies such as CT and MRI, along with continuing improvement in their resolution, has contributed to the explosive growth of digital image data being generated. Medical communities around the world have recognized the need for efficient storage, transmission and display of medical images. For example, the Canadian Association of Radiologists (CAR) has recommended compression ratios for various modalities and anatomical regions to be employed with lossy JPEG and JPEG2000 compression in order to preserve diagnostic quality. Here we investigate the effects of the sharp skull edges present in CT neuro images on JPEG and JPEG2000 lossy compression. We conjecture that this atypical effect is caused by the sharp edges between the skull bone and the background regions, as well as between the skull bone and the interior regions. These strong edges create large wavelet coefficients that consume an unnecessarily large number of bits in JPEG2000 compression because of its bitplane coding scheme, and thus result in reduced quality in the interior region, which contains most of the diagnostic information in the image. To validate the conjecture, we investigate a segmentation-based compression algorithm built on simple thresholding and morphological operators. As expected, quality is improved in terms of PSNR as well as the structural similarity (SSIM) image quality measure and its multiscale (MS-SSIM) and information-weighted (IW-SSIM) versions. This study not only supports our conjecture, but also provides a solution to improve the performance of JPEG and JPEG2000 compression for specific types of CT images.
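The segmentation step described above — simple thresholding plus morphological operators to isolate the skull — might look like this minimal sketch (pure Python; the threshold value, the single dilation step, and the function name are assumptions, not the paper's algorithm):

```python
def skull_mask(img, bone_threshold=300):
    """Thresholding + one morphological dilation: mark high-intensity
    (bone) pixels, then grow the mask by one pixel so it safely covers the
    sharp skull/background and skull/interior edges before region-wise
    coding."""
    h, w = len(img), len(img[0])
    bone = [[img[y][x] >= bone_threshold for x in range(w)] for y in range(h)]
    # Dilation with a 3x3 structuring element: a pixel is set if any
    # neighbor (including itself) is set.
    return [[any(bone[yy][xx]
                 for yy in range(max(0, y - 1), min(h, y + 2))
                 for xx in range(max(0, x - 1), min(w, x + 2)))
             for x in range(w)] for y in range(h)]
```

With such a mask, the skull region can be coded separately so its large wavelet coefficients no longer starve the diagnostically important interior of bits.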
Quality evaluation of no-reference MR images using multidirectional filters and image statistics.
Jang, Jinseong; Bang, Kihun; Jang, Hanbyol; Hwang, Dosik
2018-09-01
This study aimed to develop a fully automatic, no-reference image-quality assessment (IQA) method for MR images. New quality-aware features were obtained by applying multidirectional filters to MR images and examining the feature statistics. A histogram of these features was then fitted to a generalized Gaussian distribution function, whose shape parameter takes different values depending on the type of distortion in the MR image. Standard feature statistics were established through a training process based on high-quality MR images without distortion. Subsequently, the feature statistics of a test MR image were calculated and compared with the standards. The quality score was calculated as the difference between the shape parameters of the test image and the undistorted standard images. The proposed IQA method showed a >0.99 correlation with conventional full-reference assessment methods; accordingly, it yielded the best performance among no-reference IQA methods for images containing six types of synthetic, MR-specific distortions. In addition, for authentically distorted images, the proposed method yielded the highest correlation with subjective assessments by human observers, demonstrating its superior performance over other no-reference IQAs. The proposed IQA was designed to consider MR-specific features and outperformed other no-reference IQAs designed mainly for photographic images. Magn Reson Med 80:914-924, 2018. © 2018 International Society for Magnetic Resonance in Medicine.
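Fitting a generalized Gaussian to a feature histogram is commonly done by moment matching: the ratio E[|x|]²/E[x²] of a zero-mean generalized Gaussian depends only on the shape parameter, so the empirical ratio can be inverted numerically. A sketch of that standard approach (not necessarily the authors' estimator; function names are assumptions):

```python
import math

def ggd_ratio(beta):
    """Theoretical E[|x|]^2 / E[x^2] of a zero-mean generalized Gaussian;
    it is monotonically increasing in the shape parameter beta."""
    return (math.gamma(2 / beta) ** 2
            / (math.gamma(1 / beta) * math.gamma(3 / beta)))

def estimate_shape(samples):
    """Moment-matching shape estimate: compute the empirical ratio, then
    invert ggd_ratio by bisection on a plausible beta range."""
    n = len(samples)
    r = (sum(abs(s) for s in samples) / n) ** 2 / (sum(s * s for s in samples) / n)
    lo, hi = 0.05, 10.0
    for _ in range(80):
        mid = (lo + hi) / 2
        if ggd_ratio(mid) < r:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

beta = 2 recovers the Gaussian (ratio 2/π) and beta = 1 the Laplacian (ratio 1/2); a distortion that changes the feature histogram shifts the fitted beta away from the statistics learned on high-quality images, which is what the quality score measures.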
Speckle reduction in echocardiography by temporal compounding and anisotropic diffusion filtering
NASA Astrophysics Data System (ADS)
Giraldo-Guzmán, Jader; Porto-Solano, Oscar; Cadena-Bonfanti, Alberto; Contreras-Ortiz, Sonia H.
2015-01-01
Echocardiography is a medical imaging technique based on ultrasound signals that is used to evaluate heart anatomy and physiology. Echocardiographic images are affected by speckle, a type of multiplicative noise that obscures details of the structures and reduces overall image quality. This paper shows an approach to enhance echocardiography using two processing techniques: temporal compounding and anisotropic diffusion filtering. We used twenty echocardiographic videos that include one or three cardiac cycles to test the algorithms. Two images from each cycle were aligned in space and averaged to obtain the compound images. These images were then processed using anisotropic diffusion filters to further improve their quality. Resultant images were evaluated using quality metrics and visual assessment by two medical doctors. The average total improvement in signal-to-noise ratio was up to 100.29% for videos with three cycles, and up to 32.57% for videos with one cycle.
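Temporal compounding itself is just a pixel-wise average of spatially aligned frames; a minimal sketch follows (illustrative only — spatial alignment is assumed to have been done already, and the function name is hypothetical):

```python
def temporal_compound(frames):
    """Pixel-wise average of spatially aligned frames (lists of rows).
    For N frames with uncorrelated zero-mean noise, the noise variance
    drops by a factor of N, i.e. amplitude SNR improves by about sqrt(N)."""
    n = len(frames)
    h, w = len(frames[0]), len(frames[0][0])
    return [[sum(f[y][x] for f in frames) / n for x in range(w)]
            for y in range(h)]

# Two aligned 1x2 frames with opposite noise realizations average out.
print(temporal_compound([[[1, 3]], [[3, 1]]]))  # [[2.0, 2.0]]
```

Because speckle decorrelates between cardiac cycles while the anatomy repeats, averaging corresponding frames suppresses speckle more than it blurs structure; the diffusion filter then smooths the residual noise.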
Qualification process of CR system and quantification of digital image quality
NASA Astrophysics Data System (ADS)
Garnier, P.; Hun, L.; Klein, J.; Lemerle, C.
2013-01-01
CEA Valduc uses several X-ray generators to carry out many inspections: void search, welding expertise, gap measurements, etc. Most of these inspections are carried out on silver-based plates. For several years, CEA Valduc has been qualifying new devices such as digital plates and CCD/flat-panel detectors. On one hand, this technological orientation anticipates the assumed, eventual disappearance of silver-based plates; on the other hand, it keeps our expertise up to date. The main improvement brought by digital plates is the continuous progress in measurement accuracy, especially with image data processing. It is now common to measure defect thickness or depth position within a part. In such applications, image data processing yields complementary information compared with scanned silver-based plates. The scanning procedure is harmful to measurements: it degrades resolution, adds numerical noise, and is time-consuming. Digital plates eliminate the scanning procedure and increase resolution. It is nonetheless difficult to define a single criterion for the quality of digital images. A procedure has to be defined to estimate the quality of the digital data itself; the impact of the scanning device and the configuration parameters must also be taken into account. This presentation deals with the qualification process developed by CEA Valduc for digital plates (DUR-NDT), based on quantitative criteria chosen to define a direct numerical image quality that can be compared with scanned silver-based pictures and the classical optical density. The versatility of the X-ray parameters (tube voltage, intensity, exposure time) is also discussed. The aim is to transfer CEA Valduc's years of experience with silver-based plate inspection to these new digital plate supports. This is an industrial stake.
Dependence of Adaptive Cross-correlation Algorithm Performance on the Extended Scene Image Quality
NASA Technical Reports Server (NTRS)
Sidick, Erkin
2008-01-01
Recently, we reported an adaptive cross-correlation (ACC) algorithm to estimate with high accuracy shifts as large as several pixels between two extended-scene sub-images captured by a Shack-Hartmann wavefront sensor. It determines the positions of all extended-scene image cells relative to a reference cell in the same frame using an FFT-based iterative image-shifting algorithm. It works with both point-source spot images and extended-scene images. We have demonstrated previously, based on some measured images, that the ACC algorithm can determine image shifts with an accuracy as high as 0.01 pixel for shifts as large as 3 pixels, and yields similar results for both point-source spot images and extended-scene images. The shift estimate accuracy of the ACC algorithm depends on illumination level, background, and scene content, in addition to the amount of the shift between two image cells. In this paper we investigate how the performance of the ACC algorithm depends on the quality and the frequency content of extended-scene images captured by a Shack-Hartmann camera. We also compare the performance of the ACC algorithm with those of several other approaches, and introduce a failsafe criterion for ACC-based extended-scene Shack-Hartmann sensors.
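The FFT-based shift estimation at the heart of such algorithms can be illustrated with a generic phase-correlation sketch. This is not the authors' ACC implementation; the parabolic sub-pixel refinement below is a standard textbook stand-in:

```python
import numpy as np

def estimate_shift(ref, img):
    """Estimate the (dy, dx) translation of `img` relative to `ref` by
    FFT cross-correlation with parabolic sub-pixel peak refinement.
    Generic sketch, not the ACC algorithm itself."""
    corr = np.fft.ifft2(np.conj(np.fft.fft2(ref)) * np.fft.fft2(img)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    shift = []
    for axis, p in enumerate(peak):
        n = corr.shape[axis]
        lo, hi = list(peak), list(peak)
        lo[axis], hi[axis] = (p - 1) % n, (p + 1) % n   # wrap-around neighbours
        cm, c0, cp = corr[tuple(lo)], corr[peak], corr[tuple(hi)]
        denom = cm - 2.0 * c0 + cp
        s = p + (0.5 * (cm - cp) / denom if denom != 0 else 0.0)
        shift.append(s - n if s > n / 2 else s)         # map to signed shift
    return tuple(shift)
```

For a pure cyclic shift the integer peak is exact and the parabolic correction stays below half a pixel, so the estimate recovers the true displacement.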
TH-B-207B-00: Pediatric Image Quality Optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
This imaging educational program will focus on solutions to common pediatric image quality optimization challenges. The speakers will present collective knowledge on best practices in pediatric imaging from their experience at dedicated children’s hospitals. One of the most commonly encountered pediatric imaging requirements for the non-specialist hospital is pediatric CT in the emergency room setting. Thus, this educational program will begin with optimization of pediatric CT in the emergency department. Though pediatric cardiovascular MRI may be less common in the non-specialist hospitals, low pediatric volumes and unique cardiovascular anatomy make optimization of these techniques difficult. Therefore, our second speaker will review best practices in pediatric cardiovascular MRI based on experiences from a children’s hospital with a large volume of cardiac patients. Learning Objectives: To learn techniques for optimizing radiation dose and image quality for CT of children in the emergency room setting. To learn solutions for consistently high quality cardiovascular MRI of children.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, R; Jee, K; Sharp, G
Purpose: Proton radiography, which images the patient with the same type of particles with which they are to be treated, is a promising approach for image guidance and reduction of range uncertainties. This study aimed to realize quality proton radiography by measuring dose rate functions (DRF) in the time domain using a single flat panel and retrieving the water equivalent path length (WEPL) from them. Methods: An amorphous silicon flat panel (PaxScan™ 4030CB, Varian Medical Systems, Inc., Palo Alto, CA) was placed behind phantoms to measure DRFs from a proton beam modulated by the modulator wheel. To retrieve WEPL and RSP, calibration models based on the intensity of DRFs only, the root mean square (RMS) of DRFs only, and the intensity-weighted RMS were tested. The quality of the obtained WEPL images (in terms of spatial resolution and level of detail) and the accuracy of the WEPL were compared. Results: RSPs for most of the Gammex phantom inserts were retrieved within ± 1% error by the calibration models based on the RMS and the intensity-weighted RMS. The mean percentage error for all inserts was reduced from 1.08% to 0.75% by matching intensity in the calibration model. In specific cases such as the insert with a titanium rod, the calibration model based on RMS only fails, while that based on the intensity-weighted RMS remains valid. The quality of the retrieved WEPL images was significantly improved for calibration models including intensity matching. Conclusion: For the first time, a flat panel, which is readily available in the beamline for image guidance, was used to acquire quality proton radiography with WEPL accurately retrieved from it. This technique is promising for image-guided proton therapy as well as patient-specific RSP determination to reduce beam range uncertainties.
Gordon, J. J.; Gardner, J. K.; Wang, S.; Siebers, J. V.
2012-01-01
Purpose: This work uses repeat images of intensity modulated radiation therapy (IMRT) fields to quantify fluence anomalies (i.e., delivery errors) that can be reliably detected in electronic portal images used for IMRT pretreatment quality assurance. Methods: Repeat images of 11 clinical IMRT fields are acquired on a Varian Trilogy linear accelerator at energies of 6 MV and 18 MV. Acquired images are corrected for output variations and registered to minimize the impact of linear accelerator and electronic portal imaging device (EPID) positioning deviations. Detection studies are performed in which rectangular anomalies of various sizes are inserted into the images. The performance of detection strategies based on pixel intensity deviations (PIDs) and gamma indices is evaluated using receiver operating characteristic analysis. Results: Residual differences between registered images are due to interfraction positional deviations of jaws and multileaf collimator leaves, plus imager noise. Positional deviations produce large intensity differences that degrade anomaly detection. Gradient effects are suppressed in PIDs using gradient scaling. Background noise is suppressed using median filtering. In the majority of images, PID-based detection strategies can reliably detect fluence anomalies of ≥5% in ∼1 mm2 areas and ≥2% in ∼20 mm2 areas. Conclusions: The ability to detect small dose differences (≤2%) depends strongly on the level of background noise. This in turn depends on the accuracy of image registration, the quality of the reference image, and field properties. The longer term aim of this work is to develop accurate and reliable methods of detecting IMRT delivery errors and variations. The ability to resolve small anomalies will allow the accuracy of advanced treatment techniques, such as image guided, adaptive, and arc therapies, to be quantified. PMID:22894421
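A minimal sketch of a PID-style detection strategy as described above, assuming a fractional pixel-intensity-deviation map, a brute-force median filter for background-noise suppression, and an arbitrary 2% threshold (the paper's registration and gradient-scaling steps are omitted):

```python
import numpy as np

def detect_anomalies(measured, reference, threshold=0.02, k=3):
    """Flag pixels whose fractional intensity deviation from the
    reference exceeds `threshold`, after median filtering the
    deviation map to suppress uncorrelated background noise.
    Illustrative sketch; threshold and filter size are assumptions."""
    pid = (measured - reference) / np.maximum(reference, 1e-9)
    h, w = pid.shape
    filtered = np.empty_like(pid)
    r = k // 2
    for i in range(h):                      # brute-force k x k median filter
        for j in range(w):                  # (edges handled by clipping)
            win = pid[max(i - r, 0):i + r + 1, max(j - r, 0):j + r + 1]
            filtered[i, j] = np.median(win)
    return np.abs(filtered) >= threshold
```

With a 5% anomaly inserted into a uniform reference, the interior of the anomalous region is flagged while clean background pixels are not.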
Fatima, A; Kulkarni, V K; Banda, N R; Agrawal, A K; Singh, B; Sarkar, P S; Tripathi, S; Shripathi, T; Kashyap, Y; Sinha, A
2016-01-01
Application of high-resolution synchrotron micro-imaging to microdefect studies of restored dental samples. The purpose of this study was to identify and compare defects in restorations made with two different resin systems on tooth samples, using synchrotron-based micro-imaging techniques, namely phase contrast imaging (PCI) and micro-computed tomography (MCT). The acquired image quality was also compared with the routinely used RVG (radiovisiograph). Crowns of human tooth samples were fractured mechanically, involving only enamel and dentin without exposure of the pulp chamber, and were divided into two groups according to the restorative composite material used. Group A samples were restored using a submicron hybrid composite material and Group B samples were restored using a nano-hybrid restorative composite material. Synchrotron-based PCI and MCT were performed to visualize the tooth structure, the composite resin, and their interface. Quantitative and qualitative comparison of the phase-contrast and absorption-contrast images, together with MCT of the restored tooth samples, shows a comparatively large number of voids in Group A. Quality assessment of dental restorations using synchrotron-based micro-imaging suggests that the nano-hybrid restorations (Group B) are better than the hybrid restorations (Group A).
NASA Astrophysics Data System (ADS)
Kim, Moon Sung; Lee, Kangjin; Chao, Kaunglin; Lefcourt, Alan; Cho, Byung-Kwan; Jun, Won
We developed a push-broom, line-scan imaging system capable of simultaneous measurements of reflectance and fluorescence. The system allows multitasking inspection of quality and safety attributes of apples, owing to its ability to capture fluorescence and reflectance simultaneously and its selectable multispectral bands. This suggests a multitasking image-based inspection system for online applications, in which a single imaging device performs a multitude of both safety and quality inspections. The presented multitask inspection approach may provide an economically viable means for food processing industries to meet their dynamic and specific online inspection and sorting needs.
NASA Astrophysics Data System (ADS)
Zhang, Rongxiao; Jee, Kyung-Wook; Cascio, Ethan; Sharp, Gregory C.; Flanz, Jacob B.; Lu, Hsiao-Ming
2018-01-01
Proton radiography, which images patients with the same type of particles as those with which they are to be treated, is a promising approach to image guidance and water equivalent path length (WEPL) verification in proton radiation therapy. We have shown recently that proton radiographs could be obtained by measuring time-resolved dose rate functions (DRFs) using an x-ray amorphous silicon flat panel. The WEPL values were derived solely from the root-mean-square (RMS) of DRFs, while the intensity information in the DRFs was filtered out. In this work, we explored the use of such intensity information for potential improvement in WEPL accuracy and imaging quality. Three WEPL derivation methods based on, respectively, the RMS only, the intensity only, and the intensity-weighted RMS were tested and compared in terms of the quality of obtained radiograph images and the accuracy of WEPL values. A Gammex CT calibration phantom containing inserts made of various tissue substitute materials with independently measured relative stopping powers (RSP) was used to assess the imaging performances. Improved image quality with enhanced interfaces was achieved while preserving the accuracy by using intensity information in the calibration. Other objects, including an anthropomorphic head phantom, a proton therapy range compensator, a frozen lamb’s head and an ‘image quality phantom’ were also imaged. Both the RMS only and the intensity-weighted RMS methods derived RSPs within ± 1% for most of the Gammex phantom inserts, with a mean absolute percentage error of 0.66% for all inserts. In the case of the insert with a titanium rod, the method based on RMS completely failed, whereas that based on the intensity-weighted RMS was qualitatively valid. The use of intensity greatly enhanced the interfaces between different materials in the obtained WEPL images, suggesting the potential for image guidance in areas such as patient positioning and tumor tracking by proton radiography.
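The per-pixel calibration features discussed above (RMS of the DRF, its intensity, and an intensity-weighted combination) can be sketched as follows; the weighting exponent and the exact form of the combination are assumptions, since the abstract does not give the authors' formula:

```python
import numpy as np

def drf_features(drf, dt=1.0, alpha=0.5):
    """Compute three candidate calibration features from a time-resolved
    dose-rate function (DRF): its RMS, its intensity (time integral),
    and an intensity-weighted RMS. The exponent `alpha` and the product
    form are assumed free parameters, not the authors' values."""
    drf = np.asarray(drf, dtype=float)
    rms = np.sqrt(np.mean(drf ** 2))          # RMS of the DRF samples
    intensity = np.sum(drf) * dt              # integrated signal
    weighted = rms * intensity ** alpha       # assumed combination
    return rms, intensity, weighted
```

In a calibration workflow, each feature would be mapped to WEPL via measurements of phantoms with known water-equivalent thickness.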
Computer-generated holographic near-eye display system based on LCoS phase only modulator
NASA Astrophysics Data System (ADS)
Sun, Peng; Chang, Shengqian; Zhang, Siman; Xie, Ting; Li, Huaye; Liu, Siqi; Wang, Chang; Tao, Xiao; Zheng, Zhenrong
2017-09-01
Augmented reality (AR) technology has been applied in various areas, such as large-scale manufacturing, national defense, healthcare, movies and mass media. An important way to realize AR display is the computer-generated hologram (CGH), which suffers from low image quality and heavy computational cost. Meanwhile, the diffraction of a Liquid Crystal on Silicon (LCoS) modulator has a negative effect on image quality. In this paper, a modified algorithm based on the traditional Gerchberg-Saxton (GS) algorithm is proposed to improve image quality, and a new method of constructing the experimental system is used to broaden the field of view (FOV). In the experiment, undesired zero-order diffracted light was eliminated and a high-definition 2D image was acquired with the FOV broadened to 36.1 degrees. We have also done pilot research on 3D reconstruction with a tomography algorithm based on Fresnel diffraction. With the same experimental system, experimental results demonstrate the feasibility of 3D reconstruction. These modifications are effective and efficient, and may provide a better solution for AR realization.
Ziegler, Susanne; Jakoby, Bjoern W; Braun, Harald; Paulus, Daniel H; Quick, Harald H
2015-12-01
In integrated PET/MR hybrid imaging, the evaluation of PET performance characteristics according to the NEMA standard NU 2-2007 is challenging because of incomplete MR-based attenuation correction (AC) for phantom imaging. In this study, a strategy for CT-based AC of the NEMA image quality (IQ) phantom is assessed. The method is systematically evaluated in NEMA IQ phantom measurements on an integrated PET/MR system. NEMA IQ measurements were performed on the integrated 3.0 Tesla PET/MR hybrid system (Biograph mMR, Siemens Healthcare). AC of the NEMA IQ phantom was realized by an MR-based and by a CT-based method. The suggested CT-based AC uses a template μ-map of the NEMA IQ phantom and a phantom holder for exact repositioning of the phantom on the system's patient table. The PET image quality parameters contrast recovery, background variability, and signal-to-noise ratio (SNR) were determined and compared for both phantom AC methods. Reconstruction parameters of an iterative 3D OP-OSEM reconstruction were optimized for the highest lesion SNR in NEMA IQ phantom imaging. Using a CT-based NEMA IQ phantom μ-map on the PET/MR system is straightforward and allowed accurate NEMA IQ measurements to be performed on the hybrid system. MR-based AC was determined to be insufficient for PET quantification in the tested NEMA IQ phantom because only photon attenuation caused by the MR-visible phantom filling, but not by the phantom housing, is considered. Using the suggested CT-based AC, the highest SNR in this phantom experiment for small lesions (≤13 mm) was obtained with 3 iterations, 21 subsets and 4 mm Gaussian filtering. This study suggests CT-based AC for the NEMA IQ phantom when performing PET NEMA IQ measurements on an integrated PET/MR hybrid system. The superiority of CT-based AC for this phantom is demonstrated by comparison to measurements using MR-based AC.
Furthermore, optimized PET image reconstruction parameters are provided for the highest lesion SNR in NEMA IQ phantom measurements.
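The NEMA NU 2-style figures named above (percent contrast recovery for a hot sphere, percent background variability, and lesion SNR) can be computed from ROI statistics roughly as follows; this is a simplified single-sphere sketch of the standard definitions, not the exact procedure of the study:

```python
import numpy as np

def nema_iq_metrics(sphere_mean, bkg_means, activity_ratio):
    """NEMA NU 2-style figures for one hot sphere: percent contrast
    recovery, percent background variability, and a lesion SNR.
    `bkg_means` are background-ROI means; `activity_ratio` is the
    sphere-to-background activity concentration ratio."""
    c_b = np.mean(bkg_means)
    sd_b = np.std(bkg_means, ddof=1)
    contrast = 100.0 * (sphere_mean / c_b - 1.0) / (activity_ratio - 1.0)
    variability = 100.0 * sd_b / c_b
    snr = (sphere_mean - c_b) / sd_b
    return contrast, variability, snr
```

For example, a sphere ROI mean of 3.0 over a unit background with a 4:1 activity ratio gives about 67% contrast recovery.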
Rosenkrantz, Andrew B; Johnson, Evan; Sanger, Joseph J
2015-10-01
This article presents our local experience in the implementation of a real-time web-based system for reporting and tracking quality issues relating to abdominal imaging examinations. This system allows radiologists to electronically submit examination quality issues during clinical readouts. The submitted information is e-mailed to a designate for the given modality for further follow-up; the designate may subsequently enter text describing their response or action taken, which is e-mailed back to the radiologist. Review of 558 entries over a 6-year period demonstrated documentation of a broad range of examination quality issues, including specific issues relating to protocol deviation, post-processing errors, positioning errors, artifacts, and IT concerns. The most common issues varied among US, CT, MRI, radiography, and fluoroscopy. In addition, the most common issues resulting in a patient recall for repeat imaging (generally related to protocol deviation in MRI and US) were identified. In addition to submitting quality problems, radiologists also commonly used the tool to provide recognition of a well-performed examination. An electronic log of actions taken in response to radiologists' submissions indicated that both positive and negative feedback were commonly communicated to the performing technologist. Information generated using the tool can be used to guide subsequent quality improvement initiatives within a practice, including continued protocol standardization as well as education of technologists in the optimization of abdominal imaging examinations.
Toward a perceptual video-quality metric
NASA Astrophysics Data System (ADS)
Watson, Andrew B.
1998-07-01
The advent of widespread distribution of digital video creates a need for automated methods for evaluating the visual quality of digital video. This is particularly so since most digital video is compressed using lossy methods, which involve the controlled introduction of potentially visible artifacts. Compounding the problem is the bursty nature of digital video, which requires adaptive bit allocation based on visual quality metrics, and the economic need to reduce bit-rate to the lowest level that yields acceptable quality. In previous work, we have developed visual quality metrics for evaluating, controlling, and optimizing the quality of compressed still images. These metrics incorporate simplified models of human visual sensitivity to spatial and chromatic visual signals. Here I describe a new video quality metric that is an extension of these still image metrics into the time domain. Like the still image metrics, it is based on the Discrete Cosine Transform. An effort has been made to minimize the amount of memory and computation required by the metric, in order that it might be applied in the widest range of applications. To calibrate the basic sensitivity of this metric to spatial and temporal signals we have made measurements of visual thresholds for temporally varying samples of DCT quantization noise.
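The DCT-based error pooling such a metric relies on can be sketched as below: blockwise DCT of the reference/test difference, coefficients scaled by visibility thresholds, then Minkowski pooling. The flat default weights and the pooling exponent are placeholders, not the calibrated human-sensitivity model described in the paper:

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix."""
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * x + 1) * k / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

def dct_error_score(ref_frame, test_frame, weights=None, n=8):
    """Perceptually inspired error: blockwise DCT of the frame
    difference, coefficients divided by visibility thresholds
    (`weights`), then Minkowski-pooled with exponent 4. Flat default
    weights are a placeholder, not a calibrated CSF."""
    if weights is None:
        weights = np.ones((n, n))
    c = dct_matrix(n)
    diff = ref_frame - test_frame
    h, w = diff.shape
    errs = []
    for i in range(0, h - h % n, n):
        for j in range(0, w - w % n, n):
            coeffs = c @ diff[i:i + n, j:j + n] @ c.T   # 2D DCT of block
            errs.append(np.abs(coeffs / weights))
    return np.linalg.norm(np.concatenate([e.ravel() for e in errs]), 4)
```

A full video metric would add temporal filtering and per-frequency thresholds; identical frames score zero, and any distortion yields a positive score.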
Erb-Eigner, Katharina; Taupitz, Matthias; Asbach, Patrick
2016-01-01
The purpose of this study was to compare contrast and image quality of whole-body equilibrium-phase high-spatial-resolution MR angiography using a non-protein-binding unspecific extracellular gadolinium-based contrast medium with that of two contrast media with different protein-binding properties. 45 patients were examined using either 15 mL of gadobutrol (non-protein-binding, n = 15), 32 mL of gadobenate dimeglumine (weakly protein binding, n = 15) or 11 mL gadofosveset trisodium (protein binding, n = 15) followed by equilibrium-phase high-spatial-resolution MR-angiography of four consecutive anatomic regions. The time elapsed between the contrast injection and the beginning of the equilibrium-phase image acquisition in the respective region was measured and was up to 21 min. Signal intensity was measured in two vessels per region and in muscle tissue. Relative contrast (RC) values were calculated. Vessel contrast, artifacts and image quality were rated by two radiologists in consensus on a five-point scale. Compared with gadobutrol, gadofosveset trisodium revealed significantly higher RC values only when acquired later than 15 min after bolus injection. Otherwise, no significant differences between the three contrast media were found regarding vascular contrast and image quality. Equilibrium-phase high-spatial-resolution MR-angiography using a weakly protein-binding or even non-protein-binding contrast medium is equivalent to using a stronger protein-binding contrast medium when image acquisition is within the first 15 min after contrast injection, and allows depiction of the vasculature with high contrast and image quality. The protein-binding contrast medium was superior for imaging only later than 15 min after contrast medium injection. Copyright © 2015 John Wiley & Sons, Ltd.
CAMEL: concept annotated image libraries
NASA Astrophysics Data System (ADS)
Natsev, Apostol; Chadha, Atul; Soetarman, Basuki; Vitter, Jeffrey S.
2001-01-01
The problem of content-based image searching has received considerable attention in the last few years. Thousands of images are now available on the Internet, and many important applications require searching of images in domains such as E-commerce, medical imaging, weather prediction, satellite imagery, and so on. Yet, content-based image querying is still largely unestablished as a mainstream field, nor is it widely used by search engines. We believe that two of the major hurdles behind this poor acceptance are poor retrieval quality and poor usability.
East, James E; Vleugels, Jasper L; Roelandt, Philip; Bhandari, Pradeep; Bisschops, Raf; Dekker, Evelien; Hassan, Cesare; Horgan, Gareth; Kiesslich, Ralf; Longcroft-Wheaton, Gaius; Wilson, Ana; Dumonceau, Jean-Marc
2016-11-01
Background and aim: This technical review is an official statement of the European Society of Gastrointestinal Endoscopy (ESGE). It addresses the utilization of advanced endoscopic imaging in gastrointestinal (GI) endoscopy. Methods: This technical review is based on a systematic literature search to evaluate the evidence supporting the use of advanced endoscopic imaging throughout the GI tract. Technologies considered include narrowed-spectrum endoscopy (narrow band imaging [NBI]; flexible spectral imaging color enhancement [FICE]; i-Scan digital contrast [I-SCAN]), autofluorescence imaging (AFI), and confocal laser endomicroscopy (CLE). The Grading of Recommendations Assessment, Development and Evaluation (GRADE) system was adopted to define the strength of recommendation and the quality of evidence. Main recommendations: 1. We suggest advanced endoscopic imaging technologies improve mucosal visualization and enhance fine structural and microvascular detail. Expert endoscopic diagnosis may be improved by advanced imaging, but as yet in community-based practice no technology has been shown consistently to be diagnostically superior to current practice with high definition white light. (Low quality evidence.) 2. We recommend the use of validated classification systems to support the use of optical diagnosis with advanced endoscopic imaging in the upper and lower GI tracts (strong recommendation, moderate quality evidence). 3. We suggest that training improves performance in the use of advanced endoscopic imaging techniques and that it is a prerequisite for use in clinical practice. A learning curve exists and training alone does not guarantee sustained high performances in clinical practice. (Weak recommendation, low quality evidence.) Conclusion: Advanced endoscopic imaging can improve mucosal visualization and endoscopic diagnosis; however it requires training and the use of validated classification systems. © Georg Thieme Verlag KG Stuttgart · New York.
TU-EF-204-02: High Quality and Sub-mSv Cerebral CT Perfusion Imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Ke; Niu, Kai; Wu, Yijing
2015-06-15
Purpose: CT Perfusion (CTP) imaging is of great importance in acute ischemic stroke management due to its potential to detect hypoperfused yet salvageable tissue and distinguish it from definitely unsalvageable tissue. However, current CTP imaging suffers from poor image quality and high radiation dose (up to 5 mSv). The purpose of this work was to demonstrate that technical innovations such as Prior Image Constrained Compressed Sensing (PICCS) have the potential to address these challenges and achieve high quality and sub-mSv CTP imaging. Methods: (1) A spatial-temporal 4D cascaded system model was developed to identify the bottlenecks in the current CTP technology; (2) A task-based framework was developed to optimize the CTP system parameters; (3) Guided by (1) and (2), PICCS was customized for the reconstruction of CTP source images. Digital anthropomorphic perfusion phantoms, animal studies, and preliminary human subject studies were used to validate and evaluate the potential of using these innovations to advance the CTP technology. Results: The 4D cascaded model was validated in both phantom and canine stroke models. Based upon this cascaded model, it has been discovered that, as long as the spatial resolution and noise properties of the 4D source CT images are given, the 3D MTF and NPS of the final CTP maps can be analytically derived for a given set of processing methods and parameters. The cascaded model analysis also identified that the most critical technical factor in CTP is how to acquire and reconstruct high quality source images; it has very little to do with the denoising techniques often used after parametric perfusion calculations. This explained why PICCS resulted in a five-fold dose reduction or substantial improvement in image quality. Conclusion: Technical innovations generated promising results towards achieving high quality and sub-mSv CTP imaging for reliable and safe assessment of acute ischemic strokes. K. Li, K. Niu, Y. Wu: Nothing to disclose. G.-H. Chen: Research funded, GE Healthcare; Research funded, Siemens AX.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, A; Stayman, J; Otake, Y
Purpose: To address the challenges of image quality, radiation dose, and reconstruction speed in intraoperative cone-beam CT (CBCT) for neurosurgery by combining model-based image reconstruction (MBIR) with accelerated algorithmic and computational methods. Methods: Preclinical studies involved a mobile C-arm for CBCT imaging of two anthropomorphic head phantoms that included simulated imaging targets (ventricles, soft-tissue structures/bleeds) and neurosurgical procedures (deep brain stimulation (DBS) electrode insertion) for assessment of image quality. The penalized likelihood (PL) framework was used for MBIR, incorporating a statistical model with image regularization via an edge-preserving penalty. To accelerate PL reconstruction, the ordered-subset, separable quadratic surrogates (OS-SQS) algorithm was modified to incorporate Nesterov's method and implemented on a multi-GPU system. A fair comparison of image quality between PL and conventional filtered backprojection (FBP) was performed by selecting reconstruction parameters that provided matched low-contrast spatial resolution. Results: CBCT images of the head phantoms demonstrated that PL reconstruction improved image quality (∼28% higher CNR) even at half the radiation dose (3.3 mGy) compared to FBP. A combination of Nesterov's method and fast projectors yielded a PL reconstruction run-time of 251 sec (cf., 5729 sec for OS-SQS, 13 sec for FBP). Insertion of a DBS electrode resulted in severe metal artifact streaks in FBP reconstructions, whereas PL was intrinsically robust against metal artifact. The combination of noise and artifact was reduced from 32.2 HU in FBP to 9.5 HU in PL, thereby providing better assessment of device placement and potential complications. Conclusion: The methods can be applied to intraoperative CBCT for guidance and verification of neurosurgical procedures (DBS electrode insertion, biopsy, tumor resection) and detection of complications (intracranial hemorrhage).
Significant improvement in image quality, dose reduction, and reconstruction time of ∼4 min will enable practical deployment of low-dose C-arm CBCT within the operating room. AAPM Research Seed Funding (2013-2014); NIH Fellowship F32EB017571; Siemens Healthcare (XP Division)
Prasarn, Mark L; Coyne, Ellen; Schreck, Michael; Rodgers, Jamie D; Rechtine, Glenn R
2013-07-15
Cadaveric imaging study. We sought to compare the fluoroscopic images produced by 4 different fluoroscopes for image quality and radiation exposure when used for imaging the spine. There are no previous published studies comparing mobile C-arm machines commonly used in clinical practice for imaging the spine. Anterior-posterior and lateral images of the cervical, thoracic, and lumbar spine were obtained from a cadaver placed supine on a radiolucent table. The fluoroscopy units used for the study included (1) GE OEC 9900 Elite (2010 model; General Electric Healthcare, Waukesha, WI), (2) Philips BV Pulsera (2009 model; Philips Healthcare, Andover, MA), (3) Philips BV Pulsera (2010 model; Philips Healthcare, Andover, MA), and (4) Siemens Arcadis Avantic (2010 model; Siemens Medical Solutions, Malvern, PA). The images were then downloaded, placed into a randomizer program, and evaluated by a group of spine surgeons and neuroradiologists independently. The reviewers, who were blinded to the fluoroscope the images were from, ranked them from best to worst using a numeric system. In addition, the images were rated according to a quality scale from 1 to 5, with 1 representing the best image quality. The radiation exposure level for the fluoroscopy units was also compared and was based on energy emission. According to the mean values for rank, the following order of best to worst was observed: (1) GE OEC > (2) Philips 2010 > (3) Philips 2009 > (4) Siemens. The exact same order was found when examining the image quality ratings. When comparing the radiation exposure level difference, it was observed that the OEC was the lowest, and there was a minimum 30% decrease in energy emission from the OEC versus the other C-arms studied. This is the first time that the spine image quality and radiation exposure of commonly used C-arm machines have been compared. The OEC was ranked the best, produced the best quality images, and had the least amount of radiation.
High Density Aerial Image Matching: State-of-the-Art and Future Prospects
NASA Astrophysics Data System (ADS)
Haala, N.; Cavegn, S.
2016-06-01
Ongoing innovations in matching algorithms are continuously improving the quality of geometric surface representations generated automatically from aerial images. This development motivated the launch of the joint ISPRS/EuroSDR project "Benchmark on High Density Aerial Image Matching", which aims at the evaluation of photogrammetric 3D data capture in view of the current developments in dense multi-view stereo-image matching. Originally, the test aimed at image-based DSM computation from conventional aerial image flights for different land-use and image block configurations. The second phase then put an additional focus on high-quality, high-resolution 3D geometric data capture in complex urban areas. This includes both the extension of the test scenario to oblique aerial image flights and the generation of filtered point clouds as additional output of the respective multi-view reconstruction. The paper uses the preliminary outcomes of the benchmark to demonstrate the state of the art in airborne image matching, with a special focus on high-quality geometric data capture in urban scenarios.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sandoval, D; Mlady, G; Selwyn, R
Purpose: To bring together radiologists, technologists, and physicists to utilize post-processing techniques in digital radiography (DR) in order to optimize image acquisition and improve image quality. Methods: Sub-optimal images acquired on a new General Electric (GE) DR system were flagged for follow-up by radiologists and reviewed by technologists and medical physicists. Various exam types from adult musculoskeletal (n=35), adult chest (n=4), and pediatric (n=7) were chosen for review. 673 total images were reviewed. These images were processed using five customized algorithms provided by GE. An image score sheet was created allowing the radiologist to assign a numeric score to each of the processed images; this allowed for objective comparison to the original images. Each image was scored based on seven properties: 1) overall image look, 2) soft tissue contrast, 3) high contrast, 4) latitude, 5) tissue equalization, 6) edge enhancement, 7) visualization of structures. Additional space allowed for comments not captured in the scoring categories. Radiologists scored the images from 1 to 10, with 1 being non-diagnostic quality and 10 being superior diagnostic quality. Scores for each custom algorithm for each image set were summed. The algorithm with the highest score for each image set was then set as the default processing. Results: Images placed into the PACS “QC folder” for image processing reasons decreased. Feedback from radiologists was, overall, that image quality for these studies had improved. All default processing for these image types was changed to the new algorithm. Conclusion: This work is an example of the collaboration between radiologists, technologists, and physicists at the University of New Mexico to add value to the radiology department.
The significant amount of work required to prepare the processing algorithms and to reprocess and score the images was taken on by all team members in order to produce better-quality images and improve patient care.
NASA Astrophysics Data System (ADS)
Mazza, F.; Da Silva, M. P.; Le Callet, P.; Heynderickx, I. E. J.
2015-03-01
Multimedia quality assessment has been an important research topic during the last decades. The original focus on artifact visibility has been extended over the years to aspects such as image aesthetics, interestingness, and memorability. More recently, Fedorovskaya proposed the concept of 'image psychology', which focuses on additional quality dimensions related to human content processing. While these additional dimensions are very valuable in understanding preferences, it is very hard to define, isolate, and measure their effect on quality. In this paper we continue our research on face pictures, investigating which image factors influence context perception. We collected perceived fit of a set of images to various content categories. These categories were selected based on current typologies in social networks. Logistic regression was adopted to model category fit based on image features. In this model we used both low-level and high-level features, the latter being complex features related to image content. To extract these high-level features we relied on crowdsourcing, since computer vision algorithms are not yet sufficiently accurate for the features we needed. Our results underline the importance of some high-level content features, e.g. the dress of the portrayed person and the scene setting, in categorizing images.
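The category-fit modeling step can be sketched as a logistic model over image features. A minimal NumPy illustration with made-up weights and feature values (the paper learns its coefficients by logistic regression over crowdsourced high-level features; nothing below comes from the study itself):

```python
import numpy as np

# Hypothetical logistic model of "category fit" from image features.
# Weights, bias, and features are invented for illustration only.
def category_fit_probability(features, weights, bias):
    z = np.dot(features, weights) + bias
    return 1.0 / (1.0 + np.exp(-z))

weights = np.array([1.2, -0.8, 2.0])   # hypothetical learned coefficients
bias = -0.5
features = np.array([0.9, 0.1, 0.7])   # e.g. dress formality, clutter, scene setting
p = category_fit_probability(features, weights, bias)
```

The output `p` is the modeled probability that the image fits the category; thresholding it at 0.5 gives a hard assignment.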
NASA Astrophysics Data System (ADS)
Lin, Qingyang; Andrew, Matthew; Thompson, William; Blunt, Martin J.; Bijeljic, Branko
2018-05-01
Non-invasive laboratory-based X-ray microtomography has been widely applied in many industrial and research disciplines. However, the main barrier to the use of laboratory systems compared to a synchrotron beamline is the much longer image acquisition time (hours per scan, compared to seconds or minutes at a synchrotron), which limits their application to dynamic in situ processes. Most laboratory X-ray microtomography is therefore limited to static imaging; relatively fast imaging (tens of minutes per scan) can only be achieved by sacrificing image quality, e.g. by reducing exposure time or the number of projections. To alleviate this barrier, we introduce an optimized implementation of a well-known iterative reconstruction algorithm that allows users to reconstruct tomographic images with reasonable image quality from lower X-ray signal counts and fewer projections than conventional methods require. Quantitative analysis and comparison between the iterative and the conventional filtered back-projection reconstruction algorithms was performed using a sandstone rock sample with and without liquid phases in the pore space. Overall, by implementing the iterative reconstruction algorithm, the required image acquisition time for samples such as this, with sparse object structure, can be reduced by a factor of up to 4 without measurable loss of sharpness or signal-to-noise ratio.
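The trade-off described above can be illustrated with a generic iterative reconstruction in the Landweber/SIRT family. The toy NumPy sketch below (not the authors' optimized implementation; the system, sizes, and data are made up) shows the data residual being driven down even with fewer projection measurements than unknowns:

```python
import numpy as np

# Toy sketch of iterative tomographic reconstruction via the SIRT/Landweber
# update x <- x + step * A^T (b - A x), with fewer "rays" than unknowns
# to mimic sparse projection sampling.
rng = np.random.default_rng(0)
n = 16                       # 4x4 image, flattened
x_true = rng.random(n)
A = rng.random((10, n))      # 10 hypothetical ray sums over 16 unknowns
b = A @ x_true               # noiseless projection data

def sirt(A, b, n_iter=50000):
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # step ensuring convergence
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x += step * A.T @ (b - A @ x)
    return x

x_rec = sirt(A, b)
residual = np.linalg.norm(A @ x_rec - b) / np.linalg.norm(b)
```

With an underdetermined system the solution is not unique, but the projection residual still converges toward zero, which is the sense in which iterative methods tolerate fewer projections.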
Improvement of Shear Wave Motion Detection Using Harmonic Imaging in Healthy Human Liver.
Amador, Carolina; Song, Pengfei; Meixner, Duane D; Chen, Shigao; Urban, Matthew W
2016-05-01
Quantification of liver elasticity is a major application of shear wave elasticity imaging (SWEI) for non-invasive assessment of liver fibrosis stage. SWEI measurements can be strongly affected by ultrasound image quality. Ultrasound harmonic imaging has been shown to significantly improve ultrasound image quality and, consequently, SWEI measurements, as previously illustrated in cardiac SWEI. The purpose of this study was to evaluate liver shear wave particle displacement detection and shear wave velocity (SWV) measurements with fundamental and filter-based harmonic ultrasound imaging. In a cohort of 17 patients with no history of liver disease, a 2.9-fold increase in maximum shear wave displacement, a 0.11 m/s decrease in the overall interquartile range and median SWV, and a 17.6% increase in the success rate of SWV measurements were obtained when filter-based harmonic imaging was used instead of fundamental imaging. Copyright © 2016 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
Priori mask guided image reconstruction (p-MGIR) for ultra-low dose cone-beam computed tomography
NASA Astrophysics Data System (ADS)
Park, Justin C.; Zhang, Hao; Chen, Yunmei; Fan, Qiyong; Kahler, Darren L.; Liu, Chihray; Lu, Bo
2015-11-01
Recently, the compressed sensing (CS) based iterative reconstruction method has received attention because of its ability to reconstruct cone beam computed tomography (CBCT) images of good quality from sparsely sampled or noisy projections, thus enabling dose reduction. However, some challenges remain. In particular, there is always a tradeoff between image resolution and noise/streak-artifact reduction, governed by the amount of regularization weighting that is applied uniformly across the CBCT volume. The purpose of this study is to develop a novel low-dose CBCT reconstruction algorithm framework called priori mask guided image reconstruction (p-MGIR) that allows reconstruction of high-quality low-dose CBCT images while preserving image resolution. In p-MGIR, the unknown CBCT volume is mathematically modeled as a combination of two regions: (1) where anatomical structures are complex, and (2) where intensities are relatively uniform. The priori mask, the key concept of the p-MGIR algorithm, is defined as the matrix that distinguishes between the two CBCT regions: where resolution needs to be preserved and where streaks or noise need to be suppressed. We then alternately update each part of the image by iteratively solving two sub-minimization problems, where one minimization focuses on preserving the edge information of the first part while the other concentrates on removing noise and artifacts from the latter part. To evaluate the performance of the p-MGIR algorithm, a numerical head-and-neck phantom, a Catphan 600 physical phantom, and a clinical head-and-neck cancer case were used for analysis. The results were compared with those of the standard Feldkamp-Davis-Kress algorithm as well as conventional CS-based algorithms. Examination of the p-MGIR algorithm showed that high-quality low-dose CBCT images can be reconstructed without compromising image resolution.
For both the phantom and patient cases, p-MGIR achieved a clinically reasonable image with 60 projections. Therefore, a clinically viable, high-resolution head-and-neck CBCT image can be obtained while cutting the dose by 83%. Moreover, the image quality obtained using p-MGIR is better than that obtained using the other algorithms. In this work, we propose a novel low-dose CBCT reconstruction algorithm called p-MGIR, which can potentially be used as the reconstruction algorithm for low-dose CBCT scans.
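The two-region idea behind p-MGIR can be illustrated in one dimension: a binary prior mask switches a smoothness penalty on only where the signal is assumed uniform, so noise is suppressed there while an edge in the "structure" region is left untouched. The sketch below is a hypothetical toy objective solved by gradient descent, not the paper's CBCT solver:

```python
import numpy as np

# Toy masked-regularization denoising: minimize
#   ||x - y||^2 + lam * sum_i mask_i * (x_{i+1} - x_i)^2
# where mask_i = 1 only in the assumed-uniform region.
def masked_smooth(y, mask, lam=5.0, n_iter=4000, step=0.02):
    x = y.copy()
    for _ in range(n_iter):
        d = np.diff(x)
        g = 2.0 * (x - y)             # gradient of the data-fidelity term
        g[:-1] -= 2.0 * lam * mask * d  # gradient of the masked smoothness term
        g[1:] += 2.0 * lam * mask * d
        x -= step * g
    return x

rng = np.random.default_rng(1)
clean = np.concatenate([np.zeros(5), np.ones(15)])   # sharp edge at index 5
y = clean.copy()
y[10:] += 0.1 * rng.standard_normal(10)              # noise in the flat tail
mask = np.zeros(19)
mask[10:] = 1.0                                      # penalize differences only in the tail
x = masked_smooth(y, mask)
```

In the unmasked region the solution equals the data exactly (edge preserved); in the masked region the squared differences shrink (noise suppressed), mirroring the edge-preserving/noise-removing split of the two sub-minimizations.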
Retinex based low-light image enhancement using guided filtering and variational framework
NASA Astrophysics Data System (ADS)
Zhang, Shi; Tang, Gui-jin; Liu, Xiao-hua; Luo, Su-huai; Wang, Da-dong
2018-03-01
A new image enhancement algorithm based on Retinex theory is proposed to address the poor visual quality of images captured in low-light conditions. First, an image is converted from the RGB color space to the HSV color space to obtain the V channel. Next, the illumination is estimated on the V channel by guided filtering and by a variational framework, and the two estimates are combined into a new illumination based on average gradient. The new reflectance is calculated from the V channel and the new illumination. A new V channel, obtained by multiplying the new illumination and reflectance, is then processed with contrast limited adaptive histogram equalization (CLAHE). Finally, the image in HSV space is converted back to RGB space to obtain the enhanced image. Experimental results show that the proposed method has better subjective and objective quality than existing methods.
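A minimal single-channel sketch of the Retinex decomposition step follows, assuming a box blur as a stand-in for the guided-filter/variational illumination estimate and omitting the CLAHE stage (the gamma value and test data are illustrative, not from the paper):

```python
import numpy as np

# Single-scale Retinex sketch on a grayscale "V channel":
# illumination ~ blurred image, reflectance = V / illumination,
# then recombine with a gamma-lifted (brightened) illumination.
def box_blur(img, radius=2):
    """Mean filter with edge padding, used here as a crude illumination estimate."""
    k = 2 * radius + 1
    padded = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def retinex_enhance(v, eps=1e-6):
    illumination = box_blur(v)
    reflectance = v / (illumination + eps)
    new_v = reflectance * np.power(illumination, 0.5)  # lift dark illumination
    return np.clip(new_v, 0.0, 1.0)

v = np.full((8, 8), 0.04)          # uniformly dark patch in [0, 1]
enhanced = retinex_enhance(v)
```

On a dark patch the gamma-lifted illumination brightens the result while the reflectance term carries the (here trivial) detail.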
Chang, Ni-Bin; Bai, Kaixu; Chen, Chi-Farn
2017-10-01
Monitoring water quality changes in lakes, reservoirs, estuaries, and coastal waters is critical in response to the needs of sustainable development. This study develops a remote sensing-based multiscale modeling system by integrating multi-sensor satellite data merging and image reconstruction algorithms in support of feature extraction with machine learning, leading to automated continuous water quality monitoring in environmentally sensitive regions. This new Earth observation platform, termed "cross-mission data merging and image reconstruction with machine learning" (CDMIM), is capable of merging multiple satellite imageries to provide daily water quality monitoring through a series of image processing, enhancement, reconstruction, and data mining/machine learning techniques. Two existing key algorithms, the Spectral Information Adaptation and Synthesis Scheme (SIASS) and SMart Information Reconstruction (SMIR), are highlighted to support feature extraction and content-based mapping. Whereas SIASS supports merging images collected from cross-mission satellite sensors, SMIR can overcome data gaps by reconstructing the information of value-missing pixels due to impacts such as cloud obstruction. Practical implementation of CDMIM was assessed by predicting water quality over seasons in terms of the concentrations of nutrients and chlorophyll-a, as well as water clarity, in Lake Nicaragua, providing synergistic efforts to better monitor the aquatic environment and offering insightful lake watershed management strategies. Copyright © 2017 Elsevier Ltd. All rights reserved.
Underwater image enhancement through depth estimation based on random forest
NASA Astrophysics Data System (ADS)
Tai, Shen-Chuan; Tsai, Ting-Chou; Huang, Jyun-Han
2017-11-01
Light absorption and scattering in underwater environments can result in low-contrast images with a distinct color cast. This paper proposes a systematic framework for the enhancement of underwater images. Light transmission is estimated using the random forest algorithm. RGB values, luminance, color difference, blurriness, and the dark channel are treated as features in training and estimation. Transmission is calculated using an ensemble machine learning algorithm to deal with a variety of conditions encountered in underwater environments. A color compensation and contrast enhancement algorithm based on depth information was also developed with the aim of improving the visual quality of underwater images. Experimental results demonstrate that the proposed scheme outperforms existing methods with regard to subjective visual quality as well as objective measurements.
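Once per-pixel transmission has been estimated (by the random forest in the paper; simply assumed given here), the standard underwater/haze attenuation model can be inverted. A hypothetical NumPy sketch of that recovery step, with made-up values:

```python
import numpy as np

# Image-formation inversion: observed I = J * t + B * (1 - t), where J is the
# scene radiance, t the per-pixel transmission (assumed already estimated,
# e.g. by the paper's random-forest regressor), and B the background light.
def recover_radiance(I, t, B, t_min=0.1):
    t = np.maximum(t, t_min)          # floor t to avoid amplifying noise
    return (I - B) / t + B

B = 0.8                               # hypothetical background light
J_true = np.array([[0.2, 0.6], [0.4, 0.1]])
t = np.array([[0.5, 0.9], [0.7, 0.3]])
I = J_true * t + B * (1 - t)          # synthesize an attenuated observation
J_rec = recover_radiance(I, t, B)
```

With the exact transmission, the inversion recovers the radiance exactly; in practice the quality of `t` (the estimation problem the paper addresses) dominates the result.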
Sinkó, József; Kákonyi, Róbert; Rees, Eric; Metcalf, Daniel; Knight, Alex E.; Kaminski, Clemens F.; Szabó, Gábor; Erdélyi, Miklós
2014-01-01
Localization-based super-resolution microscopy image quality depends on several factors: dye choice and labeling strategy, microscope quality, user-defined parameters such as frame rate and number of frames, and the image processing algorithm. Experimental optimization of these parameters can be time-consuming and expensive, so we present TestSTORM, a simulator that can be used to optimize these steps. TestSTORM users can select from among four different structures with specific patterns, dye, and acquisition parameters. Example results are shown, and the results for the vesicle pattern are compared with experimental data. Moreover, image stacks can be generated for further evaluation using localization algorithms, offering a tool for further software development. PMID:24688813
DOE Office of Scientific and Technical Information (OSTI.GOV)
Samei, Ehsan, E-mail: samei@duke.edu; Richard, Samuel
2015-01-15
Purpose: Different computed tomography (CT) reconstruction techniques offer different image quality attributes of resolution and noise, challenging the ability to compare their dose reduction potential against each other. The purpose of this study was to evaluate and compare the task-based imaging performance of CT systems to enable the assessment of the dose performance of a model-based iterative reconstruction (MBIR) against that of an adaptive statistical iterative reconstruction (ASIR) and a filtered back projection (FBP) technique. Methods: The ACR CT phantom (model 464) was imaged across a wide range of mA settings on a 64-slice CT scanner (GE Discovery CT750 HD, Waukesha, WI). Based on previous work, resolution was evaluated in terms of a task-based modulation transfer function (MTF) using a circular-edge technique and images from the contrast inserts located in the ACR phantom. Noise performance was assessed in terms of the noise-power spectrum (NPS) measured from the uniform section of the phantom. The task-based MTF and NPS were combined with a task function to yield a task-based estimate of imaging performance, the detectability index (d′). The detectability index was computed as a function of dose for two imaging tasks corresponding to the detection of a relatively small and a relatively large feature (1.5 and 25 mm, respectively). The performance of MBIR in terms of d′ was compared with that of ASIR and FBP to assess its dose reduction potential. Results: Results indicated that MBIR exhibits variable spatial resolution with respect to object contrast and noise while significantly reducing image noise. The NPS measurements for MBIR indicated a noise texture with a low-pass quality compared to the typical mid-pass noise found in FBP-based CT images. At comparable dose, the d′ for MBIR was higher than those of FBP and ASIR by at least 61% and 19% for the small-feature and large-feature tasks, respectively.
Compared to FBP and ASIR, MBIR indicated a 46%–84% dose reduction potential, depending on task, without compromising the modeled detection performance. Conclusions: The presented methodology based on ACR phantom measurements extends current possibilities for the assessment of CT image quality under the complex resolution and noise characteristics exhibited by statistical and iterative reconstruction algorithms. The findings further suggest that MBIR can potentially make better use of the projection data to reduce CT dose by approximately a factor of 2. Alternatively, if the dose is held unchanged, it can improve image quality by different amounts for different tasks.
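The detectability index can be sketched with synthetic curves. The toy NumPy example below uses a prewhitening form, d′² = ∫ W²·MTF²/NPS df, and illustrates the quantum-limited scaling d′ ∝ √dose; all curves are invented stand-ins, not the ACR phantom measurements:

```python
import numpy as np

# Prewhitening detectability index from a task function W(f), a task-based
# MTF, and an NPS, all on a uniform 1-D frequency grid (toy models only).
def detectability(freqs, W, MTF, NPS):
    df = freqs[1] - freqs[0]
    return np.sqrt(np.sum((W ** 2) * (MTF ** 2) / NPS) * df)

freqs = np.linspace(0.01, 1.0, 200)          # spatial frequency, cycles/mm
task = np.exp(-(freqs / 0.3) ** 2)           # hypothetical detection task
MTF = 1.0 / (1.0 + (freqs / 0.5) ** 2)       # toy resolution model
NPS = 0.01 / (1.0 + freqs)                   # toy noise-power spectrum at reference dose

d_ref = detectability(freqs, task, MTF, NPS)
d_double_dose = detectability(freqs, task, MTF, NPS / 2.0)  # doubling dose halves NPS
```

In this model, halving the NPS (i.e. doubling dose in the quantum-limited regime) multiplies d′ by exactly √2, which is the dose-to-detectability link the study exploits.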
Wang, Yali; Hamal, Preeti; You, Xiaofang; Mao, Haixia; Li, Fei; Sun, Xiwen
2017-01-01
The aim of this study was to assess whether CT imaging using an ultra-high-resolution CT (UHRCT) scan with a small scan field of view (FOV) provides higher image quality and helps to shorten the follow-up period compared with a conventional high-resolution CT (CHRCT) scan. We identified patients with at least one pulmonary nodule at our hospital from July 2015 to November 2015. CHRCT and UHRCT scans were conducted in all enrolled patients. Three experienced radiologists evaluated the image quality using a 5-point score and made diagnoses. The paired images were displayed side by side in random order, with annotations of scan information removed. The following parameters were assessed: image quality, diagnostic confidence of radiologists, follow-up recommendations, and diagnostic accuracy. A total of 52 patients (62 nodules) were included in this study. The UHRCT scan provided better image quality regarding the margin of nodules and the solid internal component compared to CHRCT (P < 0.05). Readers had higher diagnostic confidence based on the UHRCT images than on the CHRCT images (P < 0.05). The follow-up recommendations were significantly different between UHRCT and CHRCT images (P < 0.05). Compared with the surgical pathological findings, UHRCT had a relatively higher diagnostic accuracy than CHRCT, although the difference was not significant (P > 0.05). These findings suggest that the UHRCT prototype scanner provides better image quality for subsolid nodules compared to CHRCT and contributes significantly to shortening patients' follow-up period. PMID:28231320
De Crop, An; Casselman, Jan; Van Hoof, Tom; Dierens, Melissa; Vereecke, Elke; Bossu, Nicolas; Pamplona, Jaime; D'Herde, Katharina; Thierens, Hubert; Bacher, Klaus
2015-08-01
Metal artifacts may negatively affect radiologic assessment in the oral cavity. The aim of this study was to evaluate different metal artifact reduction techniques for artifacts induced by dental hardware in CT scans of the oral cavity. Clinical image quality was assessed using a Thiel-embalmed cadaver. A Catphan phantom and a polymethylmethacrylate (PMMA) phantom were used to evaluate physical-technical image quality parameters such as artifact area, artifact index (AI), and contrast detail (IQFinv). Metal cylinders were inserted in each phantom to create metal artifacts. CT images of both phantoms and the Thiel-embalmed cadaver were acquired on a multislice CT scanner using 80, 100, 120, and 140 kVp; model-based iterative reconstruction (Veo); and synthesized monochromatic keV images with and without metal artifact reduction software (MARs). Four radiologists assessed the clinical image quality using an image criteria score (ICS). A significant influence of increasing kVp and of the use of Veo on clinical image quality was found (p = 0.007 and p = 0.014, respectively). Application of MARs resulted in a smaller artifact area (p < 0.05); however, MARs-reconstructed images yielded lower ICS. Of all investigated techniques, Veo appears to be the most promising, with a significant improvement of both clinical and physical-technical image quality without adversely affecting contrast detail. MARs reconstruction in CT images of the oral cavity to reduce dental-hardware metal artifacts is not sufficient and may even adversely influence image quality.
JPEG vs. JPEG 2000: an objective comparison of image encoding quality
NASA Astrophysics Data System (ADS)
Ebrahimi, Farzad; Chamik, Matthieu; Winkler, Stefan
2004-11-01
This paper describes an objective comparison of the image quality of different encoders. Our approach is based on estimating the visual impact of compression artifacts on perceived quality. We present a tool that measures these artifacts in an image and uses them to compute a prediction of the Mean Opinion Score (MOS) obtained in subjective experiments. We show that the MOS predictions by our proposed tool are a better indicator of perceived image quality than PSNR, especially for highly compressed images. For the encoder comparison, we compress a set of 29 test images with two JPEG encoders (Adobe Photoshop and IrfanView) and three JPEG2000 encoders (JasPer, Kakadu, and IrfanView) at various compression ratios. We compute blockiness, blur, and MOS predictions as well as PSNR of the compressed images. Our results show that the IrfanView JPEG encoder produces consistently better images than the Adobe Photoshop JPEG encoder at the same data rate. The differences between the JPEG2000 encoders in our test are less pronounced; JasPer comes out as the best codec, closely followed by IrfanView and Kakadu. Comparing the JPEG- and JPEG2000-encoding quality of IrfanView, we find that JPEG has a slight edge at low compression ratios, while JPEG2000 is the clear winner at medium and high compression ratios.
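PSNR, the objective baseline the MOS predictions are compared against, has a standard closed form. A minimal self-contained definition for 8-bit images (the test values are illustrative):

```python
import numpy as np

# PSNR in dB for 8-bit images: 10 * log10(peak^2 / MSE).
def psnr(ref, test, peak=255.0):
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

ref = np.zeros((4, 4), dtype=np.uint8)
degraded = np.full((4, 4), 16, dtype=np.uint8)   # uniform error of 16 grey levels
```

Identical images give infinite PSNR; a uniform 16-level error gives roughly 24 dB, illustrating why PSNR tracks pixel error rather than the perceived blockiness and blur the paper's tool measures.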
2012-05-01
[Fragmentary record] Kilovoltage (kV) cone-beam CT (CBCT) is employed for guiding treatment; high-quality CBCT images are important for achieving improved treatment effect and are necessary for successful adaptive RT of prostate cancer. Surviving citation fragment: X. Han, E. Pearson, J. Bian, S. Cho, E. Y..., "...study of low-dose intra-operative cone-beam CT for image-guided surgery," Proc. SPIE, 7961, 79615P, 2011.
NASA Astrophysics Data System (ADS)
Szczepura, Katy; Thompson, John; Manning, David
2017-03-01
In computed tomography, Hounsfield units (HU) are used as an indicator of tissue type based on the linear attenuation coefficients of the tissue. HU accuracy is essential whenever this metric is used in any form to support diagnosis. In hybrid imaging, such as SPECT/CT and PET/CT, the information is also used for attenuation correction (AC) of the emission images. This work investigates the HU accuracy of nodules of known size and HU, comparing diagnostic-quality (DQ) images with the images used for AC.
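The HU scale itself is fixed by definition (water at 0 HU, air at −1000 HU), which is what an accuracy check verifies against measured attenuation. A minimal sketch; the attenuation value below is illustrative, not from the study:

```python
# Hounsfield-unit rescaling: HU = 1000 * (mu - mu_water) / (mu_water - mu_air).
# With mu_air taken as 0, this is the familiar 1000 * (mu - mu_water) / mu_water.
def to_hu(mu, mu_water, mu_air=0.0):
    return 1000.0 * (mu - mu_water) / (mu_water - mu_air)

mu_water = 0.19                    # hypothetical linear attenuation coefficient, 1/cm
hu_water = to_hu(0.19, mu_water)   # water -> 0 HU
hu_air = to_hu(0.0, mu_water)      # air -> -1000 HU
```

A nodule of known composition should therefore land at a predictable HU; deviations in the AC images are exactly the inaccuracy this work quantifies.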
NASA Astrophysics Data System (ADS)
Damera-Venkata, Niranjan; Yen, Jonathan
2003-01-01
The visually significant two-dimensional barcode (VSB), developed by Shaked et al., is a method for designing an information-carrying two-dimensional barcode that has the appearance of a given graphical entity, such as a company logo. Encoding and decoding information with the VSB uses a base image with very few gray levels (typically only two), which generally requires the image histogram to be bimodal. For continuous-tone images such as digital photographs of individuals, the representation of tone, or "shades of gray", is not only important for a pleasing rendition of the face; in most cases the VSB renders such images unrecognizable due to its inability to represent true gray-tone variations. This paper extends the concept of the VSB to an image bar code (IBC). We enable the encoding and subsequent decoding of information embedded in the hardcopy version of continuous-tone base images such as those acquired with a digital camera. The encoding-decoding process is modeled as robust data transmission through a noisy print-scan channel that is explicitly modeled. The IBC supports a high information capacity that differentiates it from common hardcopy watermarks. The reason for the improved image quality over the VSB is a joint encoding/halftoning strategy based on a modified version of block error diffusion. Encoder stability, image quality versus information capacity tradeoffs, and decoding issues with and without explicit knowledge of the base image are discussed.
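Block error diffusion generalizes scalar error diffusion to pixel blocks. As a point of reference, here is a plain Floyd-Steinberg scalar error-diffusion sketch (illustrative only; the paper's method diffuses quantization error over blocks jointly with data embedding):

```python
import numpy as np

# Floyd-Steinberg error diffusion: binarize a [0, 1] grayscale image while
# pushing each pixel's quantization error onto unprocessed neighbors.
def error_diffuse(img):
    f = img.astype(float)
    h, w = f.shape
    out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            new = 1.0 if f[y, x] >= 0.5 else 0.0
            err = f[y, x] - new
            out[y, x] = new
            if x + 1 < w:
                f[y, x + 1] += err * 7 / 16
            if y + 1 < h and x - 1 >= 0:
                f[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:
                f[y + 1, x] += err * 5 / 16
            if y + 1 < h and x + 1 < w:
                f[y + 1, x + 1] += err * 1 / 16
    return out

gray = np.full((16, 16), 0.25)
halftone = error_diffuse(gray)
```

Error diffusion approximately preserves mean tone: a constant 0.25 input yields a binary pattern with roughly a quarter of the pixels on, which is the property the block variant retains while carrying embedded data.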
Telemedicine-based system for quality management and peer review in radiology.
Morozov, Sergey; Guseva, Ekaterina; Ledikhova, Natalya; Vladzymyrskyy, Anton; Safronov, Dmitry
2018-06-01
Quality assurance is a key component of modern radiology. A telemedicine-based quality assurance system helps to overcome the "scoring" approach and makes quality control more accessible and objective. A concept for quality assurance in radiology is developed, realized as a set of strategies, actions, and tools. The latter is based on telemedicine-based peer review of 23,199 computed tomography (CT) and magnetic resonance imaging (MRI) examinations. The concept of the quality management system in radiology represents a chain of actions: "discrepancy evaluation - routine support - quality improvement activity - discrepancy evaluation". It is realized through an audit methodology, telemedicine, e-learning, and other technologies. After a year of systematic telemedicine-based peer review, the authors estimated that clinically significant discrepancies were detected in 6% of all cases, while clinically insignificant ones were found in 19% of cases. Problems appear most often in musculoskeletal records; 80% of those examinations have diagnostic or technical imperfections. The presence of routine telemedicine support and personalized e-learning improved diagnostic quality, and the level of discrepancies decreased significantly (p < 0.05). The telemedicine-based peer review system improves the effectiveness of a radiology department network. • The "scoring" approach to radiologists' performance assessment must be changed. • Telemedicine peer review and personalized e-learning significantly decrease the number of discrepancies. • Teleradiology allows linking all primary-level hospitals into a common peer review network.
Zhang, Fan; Zhang, Xinhong
2011-01-01
Most classification, quality evaluation, and grading of flue-cured tobacco leaves is performed manually, relying on the judgmental experience of experts and inevitably limited by personal, physical, and environmental factors. The classification and quality evaluation are therefore subjective and experience-based. In this paper, an automatic classification method for tobacco leaves based on digital image processing and fuzzy set theory is presented. A grading system based on image processing techniques was developed for automatically inspecting and grading flue-cured tobacco leaves. This system uses machine vision for the extraction and analysis of color, size, shape, and surface texture. Fuzzy comprehensive evaluation provides a high level of confidence in decision making based on fuzzy logic. A neural network is used to estimate and forecast the membership functions of the tobacco-leaf features in the fuzzy sets. The experimental results of the two-level fuzzy comprehensive evaluation (FCE) show a classification accuracy of about 94% for the trained tobacco leaves and about 72% for non-trained tobacco leaves. We believe that fuzzy comprehensive evaluation is a viable approach to automatic classification and quality evaluation of tobacco leaves. PMID:22163744
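The FCE decision step can be sketched with a weighted-average fuzzy operator over made-up membership values (the paper estimates the memberships with a neural network; the weights, grades, and numbers below are purely illustrative):

```python
import numpy as np

# Fuzzy comprehensive evaluation: B = w . R, where rows of R hold the
# membership of each feature (color, size, shape, texture) in each quality
# grade, and w weights the features; the grade is chosen by max membership.
w = np.array([0.4, 0.2, 0.2, 0.2])      # hypothetical feature weights
R = np.array([                           # rows: features; columns: grades A/B/C
    [0.7, 0.2, 0.1],
    [0.5, 0.3, 0.2],
    [0.3, 0.5, 0.2],
    [0.6, 0.3, 0.1],
])
B = w @ R                                # composite membership per grade
grade = ["A", "B", "C"][int(np.argmax(B))]
```

Because each membership row and the weight vector sum to one, the composite B is itself a distribution over grades, and argmax gives the crisp classification.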
High quality image-pair-based deblurring method using edge mask and improved residual deconvolution
NASA Astrophysics Data System (ADS)
Cui, Guangmang; Zhao, Jufeng; Gao, Xiumin; Feng, Huajun; Chen, Yueting
2017-04-01
Image deconvolution is a challenging task in the field of image processing. Using an image pair can yield a better restored image than deblurring from a single blurred image. In this paper, a high-quality image-pair-based deblurring method is presented using an improved Richardson-Lucy (RL) algorithm and a gain-controlled residual deconvolution technique. The input image pair comprises a non-blurred noisy image and a blurred image captured of the same scene. With the estimated blur kernel, an improved RL deblurring method based on an edge mask is introduced to obtain a preliminary deblurring result with effective ringing suppression and detail preservation. This preliminary result then serves as the basic latent image, and gain-controlled residual deconvolution is used to recover the residual image. A saliency weight map is computed as the gain map to further control ringing effects around edge areas during residual deconvolution. The final deblurring result is obtained by adding the preliminary deblurring result to the recovered residual image. An optical experimental vibration platform was set up to verify the applicability and performance of the proposed algorithm. Experimental results demonstrate that the proposed deblurring framework achieves superior performance in both subjective and objective assessments and has wide application in many image deblurring fields.
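For reference, this is the plain Richardson-Lucy iteration that the paper's edge-mask variant builds on, in a 1-D toy setting (the kernel and data are made up; no edge mask or residual step is included):

```python
import numpy as np

# Richardson-Lucy deconvolution: a multiplicative update that refines an
# estimate of the latent signal given the blurred data and a known kernel.
def convolve(signal, kernel):
    return np.convolve(signal, kernel, mode="same")

def richardson_lucy(blurred, kernel, n_iter=100, eps=1e-12):
    estimate = np.full_like(blurred, blurred.mean())  # flat initial guess
    kernel_mirror = kernel[::-1]                      # adjoint of the blur
    for _ in range(n_iter):
        ratio = blurred / (convolve(estimate, kernel) + eps)
        estimate = estimate * convolve(ratio, kernel_mirror)
    return estimate

kernel = np.array([0.25, 0.5, 0.25])     # normalized toy blur kernel
x = np.zeros(31)
x[15] = 1.0                              # a single bright point
blurred = convolve(x, kernel)
restored = richardson_lucy(blurred, kernel)
```

The multiplicative form conserves the total intensity of the data while sharpening the blurred point back toward a spike; ringing around strong edges is the failure mode the paper's edge mask targets.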
Image Quality Assessment Based on Local Linear Information and Distortion-Specific Compensation.
Wang, Hanli; Fu, Jie; Lin, Weisi; Hu, Sudeng; Kuo, C-C Jay; Zuo, Lingxuan
2016-12-14
Image Quality Assessment (IQA) is a fundamental yet constantly developing task for computer vision and image processing. Most IQA evaluation mechanisms are based on the agreement between objective estimates and subjective judgments. Each image distortion type has its own property correlated with human perception, but this intrinsic property may not be fully exploited by existing IQA methods. In this paper, we make two main contributions to the IQA field. First, a novel IQA method is developed based on a local linear model that examines the distortion between the reference and the distorted images for better alignment with human visual experience. Second, a distortion-specific compensation strategy is proposed to offset the negative effect on IQA modeling caused by different image distortion types. These score offsets are learned from several known distortion types, and for an image with an unknown distortion type, a Convolutional Neural Network (CNN) based method is proposed to compute the score offset automatically. Finally, an integrated IQA metric is proposed by combining these two ideas. Extensive experiments verify the proposed metric, demonstrating that the local linear model is useful in human perception modeling, especially for individual image distortions, and that the overall IQA method outperforms several state-of-the-art IQA approaches.
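The local-linear idea can be sketched as fitting an affine map from reference to distorted pixels and treating the residual as a distortion measure (illustrative only; the paper's model works on local patches and adds learned distortion-specific offsets):

```python
import numpy as np

# Fit dist ~ a * ref + b and measure the RMS residual: purely affine changes
# (gain/offset, i.e. contrast/brightness) leave no residual, while structural
# distortions do.
def local_linear_residual(ref, dist):
    a, b = np.polyfit(ref.ravel(), dist.ravel(), 1)
    pred = a * ref + b
    return float(np.sqrt(np.mean((dist - pred) ** 2)))

ref = np.linspace(0.0, 1.0, 64).reshape(8, 8)
contrast_change = 0.8 * ref + 0.1          # pure gain/offset: fits exactly
structural = ref + 0.05 * np.sin(37.0 * ref)  # nonlinear distortion: residual remains
```

A distortion that is merely a contrast change scores near zero, while structural change leaves a measurable residual, which is the perceptual distinction the local linear model exploits.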
Characteristics of knowledge content in a curated online evidence library.
Varada, Sowmya; Lacson, Ronilda; Raja, Ali S; Ip, Ivan K; Schneider, Louise; Osterbur, David; Bain, Paul; Vetrano, Nicole; Cellini, Jacqueline; Mita, Carol; Coletti, Margaret; Whelan, Julia; Khorasani, Ramin
2018-05-01
To describe types of recommendations represented in a curated online evidence library, report on the quality of evidence-based recommendations pertaining to diagnostic imaging exams, and assess underlying knowledge representation. The evidence library is populated with clinical decision rules, professional society guidelines, and locally developed best practice guidelines. Individual recommendations were graded based on a standard methodology and compared using chi-square test. Strength of evidence ranged from grade 1 (systematic review) through grade 5 (recommendations based on expert opinion). Finally, variations in the underlying representation of these recommendations were identified. The library contains 546 individual imaging-related recommendations. Only 15% (16/106) of recommendations from clinical decision rules were grade 5 vs 83% (526/636) from professional society practice guidelines and local best practice guidelines that cited grade 5 studies (P < .0001). Minor head trauma, pulmonary embolism, and appendicitis were topic areas supported by the highest quality of evidence. Three main variations in underlying representations of recommendations were "single-decision," "branching," and "score-based." Most recommendations were grade 5, largely because studies to test and validate many recommendations were absent. Recommendation types vary in amount and complexity and, accordingly, the structure and syntax of statements they generate. However, they can be represented in single-decision, branching, and score-based representations. In a curated evidence library with graded imaging-based recommendations, evidence quality varied widely, with decision rules providing the highest-quality recommendations. The library may be helpful in highlighting evidence gaps, comparing recommendations from varied sources on similar clinical topics, and prioritizing imaging recommendations to inform clinical decision support implementation.
A real time quality control application for animal production by image processing.
Sungur, Cemil; Özkan, Halil
2015-11-01
Standards of hygiene and health are of major importance in food production, and quality control has become obligatory in this field. Thanks to rapidly developing technologies, automatic and safe quality control of food production is now possible. For this purpose, image-processing-based quality control systems used in industrial applications are being employed to analyze the quality of food products. In this study, quality control of chicken (Gallus domesticus) eggs was achieved using a real-time image-processing technique. In order to execute the quality control processes, a conveying mechanism was used. Eggs passing on a conveyor belt were continuously photographed in real time by cameras located above the belt. The images obtained were processed by various methods and techniques: using digital instrumentation, the volume of the eggs was measured, broken/cracked eggs were separated out, and dirty eggs were identified. In accordance with international standards for classifying egg quality, the class of separated eggs was determined through a fuzzy implication model. According to tests carried out on thousands of eggs, a quality control process with an accuracy of 98% was possible. © 2014 Society of Chemical Industry.
Reznicek, Lukas; Klein, Thomas; Wieser, Wolfgang; Kernt, Marcus; Wolf, Armin; Haritoglou, Christos; Kampik, Anselm; Huber, Robert; Neubauer, Aljoscha S
2014-06-01
To investigate the image quality of wide-angle cross-sectional and reconstructed fundus images based on ultra-megahertz swept-source Fourier domain mode locking (FDML) OCT compared to current generation diagnostic devices. A 1,050 nm swept-source FDML OCT system was constructed running at a 1.68 MHz A-scan rate and covering approximately a 70° field of view. Twelve normal eyes were imaged with the device applying an isotropically dense sampling protocol (1,900 × 1,900 A-scans) with a fill factor of 100 %. The obtained OCT scan image quality was compared with two commercial OCT systems (Heidelberg Spectralis and Stratus OCT) in the same 12 eyes. Reconstructed en-face fundus images from the same FDML-OCT data set were compared to color fundus, infrared and ultra-wide-field scanning laser images (SLO). Comparison of cross-sectional scans showed a high overall image quality of the 15× averaged FDML images at 1.68 MHz [overall quality grading score: 8.42 ± 0.52, range 0 (bad) to 10 (excellent)], comparable to current spectral-domain OCTs (overall quality grading score: 8.83 ± 0.39, p = 0.731). On FDML OCT, a dense 3D data set was obtained covering the central and mid-peripheral retina as well. The reconstructed FDML OCT en-face fundus images had high image quality comparable to the scanning laser ophthalmoscope (SLO), as judged from retinal structures such as vessels and optic disc. The overall grading score was 8.36 ± 0.51 for FDML OCT vs 8.27 ± 0.65 for SLO (p = 0.717). Ultra-wide-field megahertz 3D FDML OCT at 1.68 MHz is feasible, and provides cross-sectional image quality comparable to current spectral-domain OCT devices. In addition, reconstructed en-face visualization of fundus images results in a wide-field view with high image quality compared with currently available fundus imaging devices.
The improvement of >30× in imaging speed over commercial spectral-domain OCT technology enables high-density scan protocols, yielding a data set for high-quality cross-sectional and en-face images of the posterior segment.
NASA Astrophysics Data System (ADS)
Leihong, Zhang; Zilan, Pan; Luying, Wu; Xiuhua, Ma
2016-11-01
To address the problems that large images can hardly be retrieved under stringent hardware restrictions and that the security level is low, a method based on compressive ghost imaging (CGI) with the Fast Fourier Transform (FFT), named FFT-CGI, is proposed. Initially, the information is encrypted by the sender with the FFT, and the FFT-coded image is encrypted by the CGI system with a secret key. The receiver then decrypts the image with the aid of compressive sensing (CS) and the FFT. Simulation results are given to verify the feasibility, security, and compression performance of the proposed encryption scheme. The experiments suggest that the method can improve the quality of large images compared with conventional ghost imaging and achieve imaging of large-sized images; furthermore, the amount of transmitted data is greatly reduced because of the combination of compressive sensing and the FFT, and the security level of ghost imaging is improved, as assessed through ciphertext-only attack (COA), chosen-plaintext attack (CPA), and noise attack. This technique can be immediately applied to encryption and data storage, with the advantages of high security, fast transmission, and high quality of the reconstructed information.
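The FFT coding stage of FFT-CGI is exactly invertible by the receiver. The toy roundtrip below (a textbook radix-2 FFT on a 1-D signal, not the paper's 2-D CGI pipeline, which is not modeled here) illustrates why the FFT layer itself adds no information loss:

```python
import cmath

def fft(x):
    """Recursive radix-2 Cooley-Tukey FFT (length must be a power of two)."""
    n = len(x)
    if n == 1:
        return x[:]
    even = fft(x[0::2])
    odd = fft(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        tw = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + tw
        out[k + n // 2] = even[k] - tw
    return out

def ifft(x):
    """Inverse FFT via the conjugation trick."""
    n = len(x)
    conj = [v.conjugate() for v in x]
    return [v.conjugate() / n for v in fft(conj)]

signal = [1.0, 2.0, 0.0, -1.0, 3.0, 0.5, -2.0, 1.5]
coded = fft([complex(v) for v in signal])     # the "FFT-coded" stage
recovered = ifft(coded)                        # receiver inverts it exactly
```

In FFT-CGI the security comes from the subsequent CGI stage with its secret measurement key, not from the FFT itself.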
Significance of perceptually relevant image decolorization for scene classification
NASA Astrophysics Data System (ADS)
Viswanathan, Sowmya; Divakaran, Govind; Soman, Kutti Padanyl
2017-11-01
Color images contain luminance and chrominance components representing the intensity and color information, respectively. The objective of this paper is to show the significance of incorporating chrominance information into the task of scene classification. An improved color-to-grayscale image conversion algorithm that effectively incorporates chrominance information is proposed using the color-to-gray structure similarity index and singular value decomposition to improve the perceptual quality of the converted grayscale images. Experimental results based on an image quality assessment for image decolorization and its success rate (using the Cadik and COLOR250 datasets) show that the proposed image decolorization technique performs better than eight existing benchmark algorithms. In the second part of the paper, the effectiveness of incorporating the chrominance component into scene classification tasks is demonstrated using a deep belief network-based image classification system developed using dense scale-invariant feature transforms. The amount of chrominance information incorporated by the proposed image decolorization technique is confirmed by the improvement in overall scene classification accuracy. Moreover, overall scene classification performance improved further when the models obtained using the proposed method and conventional decolorization methods were combined.
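As a hedged sketch of why contrast-aware decolorization matters: the toy search below replaces the paper's CGS-SSIM/SVD optimization with a simple grid search for fixed channel weights that maximize grayscale variance. It is enough to show how isoluminant colors defeat the standard luminance weights:

```python
def to_gray(pixels, w):
    """Weighted RGB -> gray for a flat list of (r, g, b) tuples in [0, 1]."""
    return [w[0] * r + w[1] * g + w[2] * b for r, g, b in pixels]

def variance(vals):
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals) / len(vals)

def best_decolorization(pixels, step=0.1):
    """Grid-search non-negative channel weights (summing to 1) that maximize
    grayscale contrast; variance is a crude stand-in for the paper's
    color-to-gray structure similarity criterion."""
    best_w, best_v = None, -1.0
    n = int(round(1 / step))
    for i in range(n + 1):
        for j in range(n + 1 - i):
            w = (i * step, j * step, 1.0 - (i + j) * step)
            v = variance(to_gray(pixels, w))
            if v > best_v:
                best_w, best_v = w, v
    return best_w

# Two roughly isoluminant colors: the standard luminance weights map them to
# nearly the same gray, while contrast-aware weights keep them separated.
pixels = [(1.0, 0.0, 0.0), (0.0, 0.5, 0.1)] * 4
w = best_decolorization(pixels)
luma = (0.299, 0.587, 0.114)
```

With the luminance weights both colors land near gray 0.30; the searched weights spread them apart, preserving the structure a downstream classifier needs.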
Neubauer, Aljoscha S; Rothschuh, Antje; Ulbig, Michael W; Blum, Marcus
2008-03-01
Grading diabetic retinopathy in clinical trials is frequently based on 7-field stereo photography of the fundus in diagnostic mydriasis. In terms of image quality, the FF450(plus) camera (Carl Zeiss Meditec AG, Jena, Germany) defines a high-quality reference. The aim of the study was to investigate whether the fully digital fundus camera Visucam(PRO NM) could serve as an alternative in clinical trials requiring 7-field stereo photography. A total of 128 eyes of diabetes patients were enrolled in the randomized, controlled, prospective trial. Seven-field stereo photography was performed with the Visucam(PRO NM) and the FF450(plus) camera, in random order, both in diagnostic mydriasis. The resulting 256 image sets from the two camera systems were graded for retinopathy levels and image quality (on a scale of 1-5), anonymized and blinded to the image source. On FF450(plus) stereoscopic imaging, 20% of the patients had no or mild diabetic retinopathy (ETDRS level < or = 20) and 29% had no macular oedema. No patient had to be excluded as a result of image quality. Retinopathy level did not influence the quality of grading or of images. Excellent overall correspondence was obtained between the two fundus cameras regarding retinopathy levels (kappa 0.87) and macular oedema (kappa 0.80). In diagnostic mydriasis the image quality of the Visucam was graded as slightly better than that of the FF450(plus) (2.20 versus 2.41; p < 0.001), especially for pupils < 7 mm in mydriasis. The non-mydriatic Visucam(PRO NM) offers good image quality and is suitable as a more cost-efficient and easy-to-operate camera for applications and clinical trials requiring 7-field stereo photography.
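The inter-camera agreement figures above are Cohen's kappa values. A minimal computation, on a hypothetical 2×2 agreement table (the study's actual tables are not given in the abstract), looks like:

```python
def cohens_kappa(table):
    """Cohen's kappa for a square inter-rater agreement table;
    table[i][j] counts cases rated category i on camera A and j on camera B."""
    n = sum(sum(row) for row in table)
    po = sum(table[i][i] for i in range(len(table))) / n          # observed agreement
    pe = sum(                                                      # chance agreement
        sum(table[i]) * sum(row[i] for row in table)
        for i in range(len(table))
    ) / (n * n)
    return (po - pe) / (1 - pe)

# Hypothetical agreement table for 128 eyes (oedema present/absent on both
# cameras), constructed to give kappa near the study's reported 0.80-0.87.
table = [[35, 3],
         [4, 86]]
kappa = cohens_kappa(table)
```

Kappa discounts the agreement expected by chance, which is why it is preferred over raw percent agreement for paired grading studies like this one.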
[Quality control of laser imagers].
Winkelbauer, F; Ammann, M; Gerstner, N; Imhof, H
1992-11-01
Multiformat imagers based on laser systems are used for documentation in an increasing number of investigations. The specific problems of quality control are explained, and the consistency of film processing is investigated in two imager configurations: one with a directly connected film processing unit (Machine 1: 3M-Laser-Imager-Plus M952 with connected 3M film processor, 3M IRB film, 3M-XPM X-ray chemical mixer, 3M developer and fixer) and one without (Machine 2: 3M-Laser-Imager-Plus M952 with separate DuPont-Cronex film processor, Kodak IR film, Kodak automixer, Kodak developer and fixer). In our checks based on DIN 6868 and ÖNORM S 5240, the system with the directly connected film processing unit showed film-processing consistency within the DIN and ÖNORM limits. The film-processing checks demanded by DIN 6868 could therefore be performed at longer intervals for such systems. Systems with conventional darkroom processing showed clearly increased fluctuation; for them, the required daily control is essential to guarantee timely corrective action and constant documentation quality.
Photoacoustic image reconstruction via deep learning
NASA Astrophysics Data System (ADS)
Antholzer, Stephan; Haltmeier, Markus; Nuster, Robert; Schwab, Johannes
2018-02-01
Applying standard algorithms to sparse data problems in photoacoustic tomography (PAT) yields low-quality images containing severe under-sampling artifacts. To some extent, these artifacts can be reduced by iterative image reconstruction algorithms, which allow prior knowledge such as smoothness, total variation (TV) or sparsity constraints to be included. These algorithms tend to be time consuming, as the forward and adjoint problems have to be solved repeatedly. Further, iterative algorithms have additional drawbacks; for example, the reconstruction quality strongly depends on a priori model assumptions about the objects to be recovered, which are often not strictly satisfied in practical applications. To overcome these issues, in this paper we develop direct and efficient reconstruction algorithms based on deep learning. As opposed to iterative algorithms, we apply a convolutional neural network whose parameters are trained on a set of training data before the reconstruction process. For actual image reconstruction, a single evaluation of the trained network yields the desired result. Our numerical results (using two different network architectures) demonstrate that the proposed deep learning approach reconstructs images with a quality comparable to state-of-the-art iterative reconstruction methods.
McDonald, James E; Kessler, Marcus M; Hightower, Jeremy L; Henry, Susan D; Deloney, Linda A
2013-12-01
With increasing volumes of complex imaging cases and rising economic pressure on physician staffing, timely reporting will become progressively challenging. Current and planned iterations of PACS and electronic medical record systems do not offer workflow management tools to coordinate delivery of imaging interpretations with the needs of the patient and ordering physician. The adoption of a server-based enterprise collaboration software system by our Division of Nuclear Medicine has significantly improved our efficiency and quality of service.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boellaard, Ronald, E-mail: r.boellaard@vumc.nl; European Association of Nuclear Medicine Research Ltd., Vienna 1060; European Association of Nuclear Medicine Physics Committee, Vienna 1060
2015-10-15
Purpose: Integrated positron emission tomography/magnetic resonance (PET/MR) systems derive the PET attenuation correction (AC) from dedicated MR sequences. While MR-AC performs reasonably well in clinical patient imaging, it may fail for phantom-based quality control (QC). The authors assess the applicability of different protocols for PET QC in multicenter PET/MR imaging. Methods: The National Electrical Manufacturers Association NU 2 2007 image quality phantom was imaged on three combined PET/MR systems: a Philips Ingenuity TF PET/MR, a Siemens Biograph mMR, and a GE SIGNA PET/MR (prototype) system. The phantom was filled according to the EANM FDG-PET/CT guideline 1.0 and scanned for 5 min over 1 bed. Two MR-AC imaging protocols were tested: standard clinical procedures and a dedicated protocol for phantom tests. Depending on the system, the dedicated phantom protocol employs a two-class (water and air) segmentation of the MR data or a CT-based template. Differences in attenuation- and SUV recovery coefficients (RC) are reported. PET/CT-based simulations were performed to reproduce the various artifacts seen in the AC maps (μ-map) and their impact on the accuracy of phantom-based QC. Results: Clinical MR-AC protocols caused substantial errors and artifacts in the AC maps, resulting in underestimations of the reconstructed PET activity of up to 27%, depending on the PET/MR system. Using dedicated phantom MR-AC protocols, PET bias was reduced to −8%. Mean and max SUV RC met EARL multicenter PET performance specifications for most contrast objects, but only when using the dedicated phantom protocol. Simulations confirmed the bias in experimental data to be caused by incorrect AC maps resulting from the use of clinical MR-AC protocols.
Conclusions: Phantom-based quality control of PET/MR systems in a multicenter, multivendor setting may be performed with sufficient accuracy, but only when dedicated phantom acquisition and processing protocols are used for attenuation correction.
NASA Astrophysics Data System (ADS)
Umehara, Kensuke; Ota, Junko; Ishimaru, Naoki; Ohno, Shunsuke; Okamoto, Kentaro; Suzuki, Takanori; Shirai, Naoki; Ishida, Takayuki
2017-02-01
Single-image super-resolution (SR) methods can generate a high-resolution (HR) image from a low-resolution (LR) one by enhancing image resolution. In medical imaging, HR images are expected to provide a more accurate diagnosis with the practical application of HR displays. In recent years, the super-resolution convolutional neural network (SRCNN), one of the state-of-the-art deep-learning-based SR methods, has been proposed in computer vision. In this study, we applied and evaluated the SRCNN scheme to improve the image quality of magnified images in chest radiographs. For evaluation, a total of 247 chest X-rays were sampled from the JSRT database and divided into 93 training cases without nodules and 152 test cases with lung nodules. The SRCNN was trained using the training dataset, and with the trained SRCNN the HR image was reconstructed from the LR one. We compared the image quality of the SRCNN and conventional image interpolation methods: nearest-neighbor, bilinear and bicubic interpolation. For quantitative evaluation, we measured two image quality metrics: peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). For the SRCNN scheme, PSNR and SSIM were significantly higher than those of the three interpolation methods (p<0.001). Visual assessment confirmed that the SRCNN produced much sharper edges than conventional interpolation methods, without any obvious artifacts. These preliminary results indicate that the SRCNN scheme significantly outperforms conventional interpolation algorithms for enhancing image resolution, and that its use can yield a substantial improvement in the image quality of magnified chest radiographs.
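Of the two metrics used above, PSNR follows directly from the mean squared error between the reference and the reconstructed image. A minimal sketch (SSIM, which requires local window statistics, is omitted; the pixel values are hypothetical):

```python
import math

def psnr(reference, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two equally sized images,
    here represented as flat lists of pixel values."""
    mse = sum((r - t) ** 2 for r, t in zip(reference, test)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

reference = [100, 120, 130, 140]
test = [102, 118, 132, 138]   # every pixel off by 2 -> MSE = 4
value = psnr(reference, test)  # 10*log10(255^2 / 4), about 42.1 dB
```

Higher PSNR means the super-resolved image deviates less from the ground-truth HR image, which is how the SRCNN and interpolation outputs were ranked.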
Abiraj, Keelara; Jaccard, Hugues; Kretzschmar, Martin; Helm, Lothar; Maecke, Helmut R
2008-07-28
Dimeric peptidic vectors, obtained by the divalent grafting of bombesin analogues on a newly synthesized DOTA-based prochelator, showed improved qualities as tumor targeted imaging probes in comparison to their monomeric analogues.
Ghost detection and removal based on super-pixel grouping in exposure fusion
NASA Astrophysics Data System (ADS)
Jiang, Shenyu; Xu, Zhihai; Li, Qi; Chen, Yueting; Feng, Huajun
2014-09-01
A novel multi-exposure image fusion method for dynamic scenes is proposed. The commonly used techniques for high dynamic range (HDR) imaging are based on the combination of multiple differently exposed images of the same scene. The drawback of these methods is that ghosting artifacts are introduced into the final HDR image if the scene is not static. In this paper, a super-pixel-grouping-based method is proposed to detect ghosts in the image sequences. We introduce the zero-mean normalized cross correlation (ZNCC) as a measure of similarity between a given exposure image and the reference. The calculation of ZNCC is implemented at the super-pixel level, and super-pixels that have low correlation with the reference are excluded by adjusting the weight maps for fusion. Without any prior information on the camera response function or exposure settings, the proposed method generates low dynamic range (LDR) images which can be shown on conventional display devices directly, with details preserved and ghost effects reduced. Experimental results show that the proposed method generates high-quality images which have fewer ghost artifacts and provide better visual quality than previous approaches.
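The ZNCC similarity measure underlying the ghost detection can be sketched in a few lines. Because it subtracts the mean and normalizes by the standard deviation, it is insensitive to the gain/offset differences between exposures, which is exactly why it suits this task. The patch values below are hypothetical:

```python
import math

def zncc(a, b):
    """Zero-mean normalized cross-correlation between two equally sized
    patches (flat lists); 1.0 = identical up to gain/offset, -1.0 = inverted."""
    ma = sum(a) / len(a)
    mb = sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a))
    db = math.sqrt(sum((y - mb) ** 2 for y in b))
    return num / (da * db)

reference = [10, 20, 30, 40, 50, 60]
same_scene = [12, 22, 33, 41, 52, 61]   # same structure, different exposure
ghosted = [60, 50, 10, 20, 40, 30]      # a moving object scrambles the patch
```

In the paper's scheme, super-pixels whose ZNCC against the reference falls below a threshold are down-weighted in the fusion weight maps.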
Simulation Study of Effects of the Blind Deconvolution on Ultrasound Image
NASA Astrophysics Data System (ADS)
He, Xingwu; You, Junchen
2018-03-01
Ultrasonic image restoration is an essential subject in medical ultrasound imaging. However, without sufficient and precise system knowledge, traditional image restoration methods based on prior knowledge of the system often fail to improve image quality. In this paper, we use simulated ultrasound images to assess the effectiveness of the blind deconvolution method for ultrasound image restoration. Experimental results demonstrate that blind deconvolution can be applied to ultrasound image restoration and, compared with traditional image restoration methods, achieves satisfactory restoration results without precise prior knowledge. Even with an inaccurate small initial PSF, the results show that blind deconvolution can improve the overall quality of ultrasound images, yielding much better SNR and image resolution. We also report the time consumption of these methods, which shows no significant increase on a GPU platform.
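A common concrete instance of such deconvolution is the Richardson-Lucy iteration. The 1-D sketch below uses a known PSF, i.e. the non-blind core that blind variants alternate with a PSF-update step, so it illustrates the principle rather than the blind method itself; signal and PSF are toy values:

```python
def convolve_same(x, h):
    """1-D convolution with zero padding, output the same length as x
    (h is assumed to have odd length)."""
    half = len(h) // 2
    out = []
    for i in range(len(x)):
        s = 0.0
        for j, hv in enumerate(h):
            k = i + j - half
            if 0 <= k < len(x):
                s += hv * x[k]
        out.append(s)
    return out

def richardson_lucy(blurred, psf, iterations=100, eps=1e-12):
    """Classical Richardson-Lucy deconvolution with a known PSF.
    (psf here is symmetric, so correlation equals convolution.)"""
    estimate = [1.0 / len(blurred)] * len(blurred)  # flat, positive start
    for _ in range(iterations):
        predicted = convolve_same(estimate, psf)
        ratio = [b / (p + eps) for b, p in zip(blurred, predicted)]
        correction = convolve_same(ratio, psf)
        estimate = [e * c for e, c in zip(estimate, correction)]
    return estimate

psf = [0.25, 0.5, 0.25]
sharp = [0.0, 0.0, 1.0, 0.0, 0.0]       # an ideal point target
blurred = convolve_same(sharp, psf)     # the blurred observation
restored = richardson_lucy(blurred, psf)
```

The iteration progressively re-concentrates the blurred point back toward a spike, which is the sharpening effect the abstract reports; a blind variant would additionally re-estimate `psf` between iterations starting from an inaccurate guess.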
Steer-PROP: a GRASE-PROPELLER sequence with interecho steering gradient pulses.
Srinivasan, Girish; Rangwala, Novena; Zhou, Xiaohong Joe
2018-05-01
This study demonstrates a novel PROPELLER (periodically rotated overlapping parallel lines with enhanced reconstruction) pulse sequence, termed Steer-PROP, based on gradient and spin echo (GRASE), to reduce the imaging times and address phase errors inherent to GRASE. The study also illustrates the feasibility of using Steer-PROP as an alternative to single-shot echo planar imaging (SS-EPI) to produce distortion-free diffusion images in all imaging planes. Steer-PROP uses a series of blip gradient pulses to produce N (N = 3-5) adjacent k-space blades in each repetition time, where N is the number of gradient echoes in a GRASE sequence. This sampling strategy enables a phase correction algorithm to systematically address the GRASE phase errors as well as the motion-induced phase inconsistency. Steer-PROP was evaluated on phantoms and healthy human subjects at both 1.5T and 3.0T for T 2 - and diffusion-weighted imaging. Steer-PROP produced similar image quality to conventional PROPELLER based on fast spin echo (FSE), while taking only a fraction (e.g., 1/3) of the scan time. The robustness against motion in Steer-PROP was comparable to that of FSE-based PROPELLER. Using Steer-PROP, high quality and distortion-free diffusion images were obtained from human subjects in all imaging planes, demonstrating a considerable advantage over SS-EPI. The proposed Steer-PROP sequence can substantially reduce the scan times compared with FSE-based PROPELLER while achieving adequate image quality. The novel k-space sampling strategy in Steer-PROP not only enables an integrated phase correction method that addresses various sources of phase errors, but also minimizes the echo spacing compared with alternative sampling strategies. Steer-PROP can also be a viable alternative to SS-EPI to decrease image distortion in all imaging planes. Magn Reson Med 79:2533-2541, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
Smartphone-based grading of apple quality
NASA Astrophysics Data System (ADS)
Li, Xianglin; Li, Ting
2018-02-01
Apple quality grading is a critical issue in the apple industry, which is an economic pillar of many countries. Manual grading is inefficient and inaccurate. Here we propose a portable, convenient, real-time, and low-cost method for grading apples. Color images of the apples were collected with a smartphone, and the grade of each sampled apple was assessed by a customized smartphone app, which translates the RGB color values of the apple into a color grade and the outline of the apple image into a weight grade. The algorithms are based on models built from a large number of apple images at different grades. The apple grades evaluated by the smartphone are in accordance with the actual data. This study demonstrates the potential of smartphones for apple quality grading and online monitoring at the gathering and transportation stages of the apple industry.
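The app's trained grading model is not specified in the abstract; the toy rule below, with hypothetical thresholds and grade labels, only illustrates the kind of RGB-to-color-grade mapping described:

```python
def color_grade(pixels):
    """Toy color grade from the mean red fraction of apple pixels.
    Thresholds and grade labels are hypothetical illustrations,
    not the app's actual trained model."""
    red_fraction = sum(r / (r + g + b) for r, g, b in pixels) / len(pixels)
    if red_fraction > 0.5:
        return "A"
    if red_fraction > 0.4:
        return "B"
    return "C"

# Hypothetical mean RGB samples from segmented apple regions.
ripe = [(200, 60, 40), (210, 70, 50), (190, 55, 45)]
unripe = [(120, 150, 60), (110, 160, 70), (100, 140, 80)]
```

A weight grade would analogously be derived from the pixel area enclosed by the detected apple edge, calibrated against apples of known weight.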
Improved decryption quality and security of a joint transform correlator-based encryption system
NASA Astrophysics Data System (ADS)
Vilardy, Juan M.; Millán, María S.; Pérez-Cabré, Elisabet
2013-02-01
Some image encryption systems based on modified double random phase encoding and joint transform correlator architecture produce low quality decrypted images and are vulnerable to a variety of attacks. In this work, we analyse the algorithm of some reported methods that optically implement the double random phase encryption in a joint transform correlator. We show that it is possible to significantly improve the quality of the decrypted image by introducing a simple nonlinear operation in the encrypted function that contains the joint power spectrum. This nonlinearity also makes the system more resistant to chosen-plaintext attacks. We additionally explore the system resistance against this type of attack when a variety of probability density functions are used to generate the two random phase masks of the encryption-decryption process. Numerical results are presented and discussed.
O'Brien, Kieran; Daducci, Alessandro; Kickler, Nils; Lazeyras, Francois; Gruetter, Rolf; Feiweier, Thorsten; Krueger, Gunnar
2013-08-01
Clinical use of Stejskal-Tanner diffusion-weighted images is hampered by the geometric distortions that result from the large residual 3-D eddy current field induced by the diffusion-weighting gradients. In this work, we aimed to predict, using linear response theory, the residual 3-D eddy current field required for geometric distortion correction based on phantom eddy current field measurements. The predicted 3-D eddy current field induced by the diffusion-weighting gradients was able to reduce the root mean square error of the residual eddy current field to ~1 Hz. The model's performance was tested on diffusion-weighted images of four normal volunteers; following distortion correction, the Stejskal-Tanner diffusion-weighted images were found to be comparable in quality to image-registration-based corrections (FSL) at low b-values. Unlike registration techniques, the correction was not hindered by low SNR at high b-values, and it resulted in improved image quality relative to FSL. Characterization of the 3-D eddy current field with linear response theory enables the prediction of the 3-D eddy current field required to correct eddy-current-induced geometric distortions for a wide range of clinical and high b-value protocols.
Tseng, Hsin-Wu; Fan, Jiahua; Kupinski, Matthew A.
2016-01-01
The use of a channelization mechanism on model observers not only makes mimicking human visual behavior possible, but also reduces the amount of image data needed to estimate the model observer parameters. The channelized Hotelling observer (CHO) and channelized scanning linear observer (CSLO) have recently been used to assess CT image quality for detection tasks and combined detection/estimation tasks, respectively. Although the use of channels substantially reduces the amount of data required to compute image quality, the number of scans required for CT imaging is still not practical for routine use. It is our desire to further reduce the number of scans required to make CHO or CSLO an image quality tool for routine and frequent system validations and evaluations. This work explores different data-reduction schemes and designs an approach that requires only a few CT scans. Three different kinds of approaches are included in this study: a conventional CHO/CSLO technique with a large sample size, a conventional CHO/CSLO technique with fewer samples, and an approach that we will show requires fewer samples to mimic conventional performance with a large sample size. The mean value and standard deviation of areas under ROC/EROC curve were estimated using the well-validated shuffle approach. The results indicate that an 80% data reduction can be achieved without loss of accuracy. This substantial data reduction is a step toward a practical tool for routine-task-based QA/QC CT system assessment. PMID:27493982
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bernatowicz, K., E-mail: kingab@student.ethz.ch; Knopf, A.; Lomax, A.
Purpose: Prospective respiratory-gated 4D CT has been shown to reduce tumor image artifacts by up to 50% compared to conventional 4D CT. However, to date no studies have quantified the impact of gated 4D CT on normal lung tissue imaging, which is important in performing dose calculations based on accurate estimates of lung volume and structure. To determine the impact of gated 4D CT on thoracic image quality, the authors developed a novel simulation framework incorporating a realistic deformable digital phantom driven by patient tumor motion patterns. Based on this framework, the authors test the hypothesis that respiratory-gated 4D CT can significantly reduce lung imaging artifacts. Methods: Our simulation framework synchronizes the 4D extended cardiac torso (XCAT) phantom with tumor motion data in a quasi real-time fashion, allowing simulation of three 4D CT acquisition modes featuring different levels of respiratory feedback: (i) “conventional” 4D CT that uses a constant imaging and couch-shift frequency, (ii) “beam paused” 4D CT that interrupts imaging to avoid oversampling at a given couch position and respiratory phase, and (iii) “respiratory-gated” 4D CT that triggers acquisition only when the respiratory motion fulfills phase-specific displacement gating windows based on prescan breathing data. Our framework generates a set of ground truth comparators, representing the average XCAT anatomy during beam-on for each of ten respiratory phase bins. Based on this framework, the authors simulated conventional, beam-paused, and respiratory-gated 4D CT images using tumor motion patterns from seven lung cancer patients across 13 treatment fractions, with a simulated 5.5 cm³ spherical lesion. Normal lung tissue image quality was quantified by comparing simulated and ground truth images in terms of overall mean square error (MSE) intensity difference, threshold-based lung volume error, and fractional false positive/false negative rates.
Results: Averaged across all simulations and phase bins, respiratory-gating reduced overall thoracic MSE by 46% compared to conventional 4D CT (p ∼ 10⁻¹⁹). Gating leads to small but significant (p < 0.02) reductions in lung volume errors (1.8%–1.4%), false positives (4.0%–2.6%), and false negatives (2.7%–1.3%). These percentage reductions correspond to gating reducing image artifacts by 24–90 cm³ of lung tissue. Similar to earlier studies, gating reduced patient image dose by up to 22%, but with scan time increased by up to 135%. Beam paused 4D CT did not significantly impact normal lung tissue image quality, but did yield similar dose reductions as for respiratory-gating, without the added cost in scanning time. Conclusions: For a typical 6 L lung, respiratory-gated 4D CT can reduce image artifacts affecting up to 90 cm³ of normal lung tissue compared to conventional acquisition. This image improvement could have important implications for dose calculations based on 4D CT. Where image quality is less critical, beam paused 4D CT is a simple strategy to reduce imaging dose without sacrificing acquisition time.
Guziński, Maciej; Waszczuk, Łukasz; Sąsiadek, Marek J
2016-10-01
To evaluate a head CT protocol developed to improve visibility of the brainstem and cerebellum, lower bone-related artefacts in the posterior fossa and maintain patient radioprotection. A paired comparison of head CT performed without Adaptive Statistical Iterative Reconstruction (ASiR) and a clinically indicated follow-up with 40 % ASiR was acquired in one group of 55 patients. Patients were scanned in the axial mode with different scanner settings for the brain and the posterior fossa. Objective image quality analysis was performed with the signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR). Subjective image quality analysis was based on brain structure visibility and evaluation of the artefacts. We achieved a 19 % reduction of total DLP and significantly better image quality of posterior fossa structures. SNR for white and grey matter in the cerebellum was 34 % and 36 % higher, respectively, CNR was improved by 142 % and subjective scores were better for images with ASiR. When imaging parameters are set independently for the brain and the posterior fossa, ASiR has great potential to improve CT performance: image quality of the brainstem and cerebellum is improved, and the radiation dose for the brain as well as the total radiation dose are reduced. • With ASiR it is possible to lower the radiation dose or improve image quality • Sequential imaging allows setting scan parameters for the brain and posterior fossa independently • We improved visibility of brainstem structures and decreased radiation dose • Total radiation dose (DLP) was decreased by 19 %.
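The objective metrics used above have simple definitions: SNR is the mean ROI value over its standard deviation, and CNR is the absolute tissue contrast normalized by noise. A sketch with hypothetical HU samples (the ROI placement and the use of a homogeneous region for the noise estimate are assumptions, since conventions vary between studies):

```python
import math

def mean_std(roi):
    """Mean and (population) standard deviation of ROI pixel values."""
    m = sum(roi) / len(roi)
    sd = math.sqrt(sum((v - m) ** 2 for v in roi) / len(roi))
    return m, sd

def snr(roi):
    """Signal-to-noise ratio: mean ROI value over its standard deviation."""
    m, sd = mean_std(roi)
    return m / sd

def cnr(roi_a, roi_b, noise_roi):
    """Contrast-to-noise ratio between two tissue ROIs, normalized by the
    standard deviation of a homogeneous reference region."""
    ma, _ = mean_std(roi_a)
    mb, _ = mean_std(roi_b)
    _, sd = mean_std(noise_roi)
    return abs(ma - mb) / sd

# Hypothetical HU samples: grey matter, white matter, and a uniform region.
grey = [38.0, 40.0, 42.0, 40.0]
white = [28.0, 30.0, 32.0, 30.0]
uniform = [0.0, 2.0, -2.0, 0.0]
```

Iterative reconstruction such as ASiR lowers the noise term (the standard deviation) at matched dose, which is how it raises both SNR and CNR in comparisons like the one reported.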
Mann, April; Farrell, Mary Beth; Williams, Jessica; Basso, Danny
2017-06-01
In 2015, the Society of Nuclear Medicine and Molecular Imaging Technologist Section (SNMMI-TS) launched a multiyear quality initiative to help prepare the technologist workforce for an evidence-based health-care delivery system that focuses on quality. To best implement the quality strategy, the SNMMI-TS first surveyed technologists to ascertain their perception of quality and current measurement of quality indicators. Methods: An internet survey was sent to 27,989 e-mail contacts. Questions related to demographic data, perceptions of quality, quality measurement, and opinions on the minimum level of education are discussed in this article. Results: A total of 4,007 (14.3%) responses were received. When asked to list 3 words or phrases that represent quality, there was a plethora of different responses. The top 3 responses were image quality, quality control, and technologist education or competency. Surveying patient satisfaction was the most common quality measure (80.9%), followed by evaluation of image quality (78.2%). Evaluation of image quality (90.3%) and equipment functionality (89.4%) were considered the most effective measures. Technologists' differentiation between quality, quality improvement, quality control, quality assurance, and quality assessment seemed ambiguous. Respondents were confident in their ability to assess and improve quality at their workplace (91.9%) and agreed their colleagues were committed to delivering quality work. Of note, 70.7% of respondents believed that quality is directly related to the technologist's level of education. Correspondingly, respondents felt there should be a minimum level of education (99.5%) and that certification or registry should be required (74.4%). Most respondents (59.6%) felt that a Bachelor's degree should be the minimum level of education, followed by an Associate's degree (40.4%).
Conclusion: To best help nuclear medicine technologists provide quality care, the SNMMI-TS queried technologists to discern perceptions of quality in nuclear medicine. The results show that technologists believe image quality and quality control are the most important determinants. Most respondents felt that quality is directly related to the level of education of the technologist acquiring the scan. However, the responses obtained also demonstrated variation in perception of what represents quality. The SNMMI-TS can use the results of the study as a benchmark of current technologists' knowledge and performance of quality measures and target educational programs to improve the quality of nuclear medicine and molecular imaging. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.
Applications based on restored satellite images
NASA Astrophysics Data System (ADS)
Arbel, D.; Levin, S.; Nir, M.; Bhasteker, I.
2005-08-01
Satellites orbit the earth and obtain imagery of the ground below. The quality of satellite images is affected by the properties of the atmospheric imaging path, which degrade the image by blurring it and reducing its contrast. Applications involving satellite images are many and varied. Imaging systems also differ technologically and in their physical and optical characteristics, such as sensor type, resolution, field of view (FOV), spectral range of the acquiring channels (from the visible to the thermal IR, TIR), platform (aircraft and/or spacecraft), altitude above the ground surface, etc. Obtaining good-quality satellite images is important because of the variety of applications based on them: the higher the quality of the recorded image, the more information can be extracted from it. The restoration process depends on gathering extensive data about the atmospheric medium and its characterization. In return, it contributes to the applications based on those restorations, i.e., satellite communication, warfare against long-distance missiles, geographical, agricultural, and economic applications, intelligence, security, military uses, etc. Several ways of using restored Landsat 7 enhanced thematic mapper plus (ETM+) satellite images are suggested and presented here, in particular the use of the restoration results for a few potential geographical applications such as color classification and mapping (road and street localization).
Optimized multiple linear mappings for single image super-resolution
NASA Astrophysics Data System (ADS)
Zhang, Kaibing; Li, Jie; Xiong, Zenggang; Liu, Xiuping; Gao, Xinbo
2017-12-01
Learning piecewise linear regression has been recognized as an effective approach to example-learning-based single image super-resolution (SR) in the literature. In this paper, we employ an expectation-maximization (EM) algorithm to further improve the SR performance of our previous multiple linear mappings (MLM)-based SR method. In the training stage, the proposed method starts with a set of linear regressors obtained by the MLM-based method, and then jointly optimizes the clustering results and the low- and high-resolution subdictionary pairs for the regression functions using the metric of reconstruction error. In the test stage, we select the optimal regressor for SR reconstruction by accumulating the reconstruction errors of the m nearest neighbors in the training set. Thorough experiments carried out on six publicly available datasets demonstrate that the proposed SR method can yield high-quality images with finer details and sharper edges in terms of both quantitative and perceptual image quality assessments.
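The test-stage selection rule described above can be sketched in numpy (toy data; the sizes, names, and random regressors are illustrative assumptions, not the paper's trained model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (sizes and data are illustrative, not from the paper):
# K piecewise linear regressors, each mapping a low-resolution patch
# vector (d_lo) to a high-resolution patch vector (d_hi).
K, d_lo, d_hi, n_train = 3, 16, 64, 200
regressors = [rng.normal(size=(d_hi, d_lo)) for _ in range(K)]
train_lo = rng.normal(size=(n_train, d_lo))
train_hi = rng.normal(size=(n_train, d_hi))

def select_regressor(patch, m=9):
    """Pick the regressor whose reconstruction error, accumulated over
    the m nearest training patches of the query, is smallest."""
    dists = np.linalg.norm(train_lo - patch, axis=1)
    nearest = np.argsort(dists)[:m]
    errs = [
        sum(np.linalg.norm(train_hi[n] - R @ train_lo[n]) for n in nearest)
        for R in regressors
    ]
    return int(np.argmin(errs))

patch = rng.normal(size=d_lo)
k_best = select_regressor(patch)
hi_patch = regressors[k_best] @ patch  # SR reconstruction of this patch
```

In the actual method the regressors come from the EM-refined clustering rather than random matrices; only the accumulate-and-argmin selection logic is shown here.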
[A new concept for integration of image databanks into a comprehensive patient documentation].
Schöll, E; Holm, J; Eggli, S
2001-05-01
Image processing and archiving are of increasing importance in the practice of modern medicine. Particularly due to the introduction of computer-based investigation methods, physicians are dealing with a wide variety of analogue and digital picture archives. On the other hand, clinical information is stored in various text-based information systems without integration of image components. The link between such traditional medical databases and picture archives is a prerequisite for efficient data management as well as for continuous quality control and medical education. At the Department of Orthopedic Surgery, University of Berne, a software program was developed to create a complete multimedia electronic patient record. The client-server system contains all patients' data, questionnaire-based quality control, and a digital picture archive. Different interfaces guarantee the integration into the hospital's data network. This article describes our experiences in the development and introduction of a comprehensive image archiving system at a large orthopedic center.
Lin, Ming-Fang; Chen, Chia-Yuen; Lee, Yuan-Hao; Li, Chia-Wei; Gerweck, Leo E; Wang, Hao; Chan, Wing P
2018-01-01
Background Multiple rounds of head computed tomography (CT) scans increase the risk of radiation-induced lens opacification. Purpose To investigate the effects of CT eye shielding and topogram-based tube current modulation (TCM) on the radiation dose received by the lens and the image quality of nasal and periorbital imaging. Material and Methods An anthropomorphic phantom was CT-scanned using either automatic tube current modulation or a fixed tube current. The lens radiation dose was estimated using cropped Gafchromic films irradiated with or without a shield over the orbit. Image quality, assessed using regions of interest drawn on the bilateral extraorbital areas and the nasal bone with a water-based marker, was evaluated using both the signal-to-noise ratio (SNR) and the contrast-to-noise ratio (CNR). Two CT specialists independently assessed image artifacts using a three-point Likert scale. Results The estimated radiation dose received by the lens was significantly lower when barium sulfate or bismuth-antimony shields were used in conjunction with a fixed tube current (22.0% and 35.6% reduction, respectively). Topogram-based TCM mitigated the beam hardening-associated artifacts of bismuth-antimony and barium sulfate shields. This increased the SNR by 21.6% in the extraorbital region and the CNR by 7.2% between the nasal bones and extraorbital regions. The combination of topogram-based TCM and barium sulfate or bismuth-antimony shields reduced lens doses by 12.2% and 27.2%, respectively. Conclusion Image artifacts induced by the bismuth-antimony shield at a fixed tube current for lenticular radioprotection were significantly reduced by topogram-based TCM, which increased the SNR of the anthropomorphic nasal bones and periorbital tissues.
NASA Astrophysics Data System (ADS)
Agarwal, Smriti; Singh, Dharmendra
2016-04-01
Millimeter wave (MMW) frequency has emerged as an efficient tool for different stand-off imaging applications. In this paper, we have dealt with a novel MMW imaging application, i.e., non-invasive packaged-goods quality estimation for industrial quality monitoring. An active MMW imaging radar operating at 60 GHz has been ingeniously designed for concealed fault estimation. Ceramic tiles covered with commonly used packaging cardboard were used as concealed targets for undercover fault classification. A comparison of computer vision-based state-of-the-art feature extraction techniques, viz., discrete Fourier transform (DFT), wavelet transform (WT), principal component analysis (PCA), gray-level co-occurrence texture (GLCM), and histogram of oriented gradients (HOG), has been carried out with respect to their capability to generate efficient and differentiable feature vectors for undercover target fault classification. An extensive number of experiments were performed with different ceramic tile fault configurations, viz., vertical crack, horizontal crack, random crack, and diagonal crack, along with non-faulty tiles. Further, an independent algorithm validation was done, demonstrating classification accuracies of 80, 86.67, 73.33, and 93.33 % for DFT, WT, PCA, GLCM, and HOG feature-based artificial neural network (ANN) classifier models, respectively. Classification results show good capability of the HOG feature extraction technique for non-destructive quality inspection with appreciably low false alarm rates compared to the other techniques. Thereby, a robust and optimal image feature-based neural network classification model has been proposed for non-invasive, automatic fault monitoring in support of financially and commercially competitive industrial growth.
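As a rough illustration of why gradient-orientation histograms separate cracked from intact tiles, here is a simplified HOG-style descriptor in plain numpy (illustrative only; the paper's HOG implementation and ANN classifier are not reproduced):

```python
import numpy as np

def hog_like_descriptor(img, n_bins=9, cell=8):
    """Simplified HOG-style descriptor: per-cell histograms of gradient
    orientations, weighted by gradient magnitude (no block overlap)."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0  # unsigned orientation
    h, w = img.shape
    feats = []
    for i in range(0, h - cell + 1, cell):
        for j in range(0, w - cell + 1, cell):
            a = ang[i:i + cell, j:j + cell].ravel()
            m = mag[i:i + cell, j:j + cell].ravel()
            hist, _ = np.histogram(a, bins=n_bins, range=(0, 180), weights=m)
            feats.append(hist)
    f = np.concatenate(feats)
    return f / (np.linalg.norm(f) + 1e-12)  # global normalization

# Two toy 16x16 "tiles": one smooth (no fault), one with a vertical crack.
smooth = np.ones((16, 16))
cracked = np.ones((16, 16)); cracked[:, 8:] = 0.0
f_smooth = hog_like_descriptor(smooth)
f_crack = hog_like_descriptor(cracked)
```

The crack produces strong gradient energy at one orientation, giving a clearly nonzero descriptor, while the intact tile's descriptor stays at zero; a downstream classifier separates such vectors easily.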
NASA Astrophysics Data System (ADS)
Geniusz, Malwina
2017-09-01
The best treatment for cataract patients, which allows clear vision to be restored, is implantation of an artificial intraocular lens (IOL). The image quality of the lens has a significant impact on the quality of the patient's vision. After long exposure of the implant to the aqueous environment, defects appear in the artificial lens. The defects generated in the IOL have different refractive indices. For example, the glistening phenomenon is based on light scattering by oval microvacuoles filled with aqueous humor, whose refractive index is about 1.34. Calcium deposits are another example of lens defects and can be characterized by a refractive index of 1.63. In the presented studies it was calculated how the difference between the refractive index of the defect and the refractive index of the lens material affects the image quality. The OpticStudio Professional program (from Radiant Zemax, LLC) was used to construct a numerical model of the eye with an IOL and to calculate the characteristics of the retinal image. Retinal image quality was described by characteristics such as the Point Spread Function (PSF) and the Optical Transfer Function with amplitude and phase. The results show a strong correlation between the refractive-index difference and retinal image quality.
Improved medical image fusion based on cascaded PCA and shift invariant wavelet transforms.
Reena Benjamin, J; Jayasree, T
2018-02-01
In the medical field, radiologists need more informative and high-quality medical images to diagnose diseases. Image fusion plays a vital role in the field of biomedical image analysis. It aims to integrate the complementary information from multimodal images, producing a new composite image which is expected to be more informative for visual perception than any of the individual input images. The main objective of this paper is to improve the information content, preserve the edges, and enhance the quality of the fused image using cascaded principal component analysis (PCA) and shift-invariant wavelet transforms. A novel image fusion technique based on cascaded PCA and shift-invariant wavelet transforms is proposed in this paper. PCA in the spatial domain extracts relevant information from a large dataset based on eigenvalue decomposition, and the wavelet transform operating in the complex domain with shift-invariant properties brings out more directional and phase details of the image. The maximum fusion rule applied in the dual-tree complex wavelet transform domain enhances the average information and morphological details. The input images of the human brain in two different modalities (MRI and CT) are collected from the whole-brain atlas data distributed by Harvard University. Both MRI and CT images are fused using the cascaded PCA and shift-invariant wavelet transform method. The proposed method is evaluated on three main key factors, namely structure preservation, edge preservation, and contrast preservation. The experimental results and comparison with other existing fusion methods show the superior performance of the proposed image fusion framework in both visual and quantitative evaluations, demonstrating that the method enhances directional features as well as fine edge details while reducing redundant details, artifacts, and distortions.
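The PCA stage of such a fusion pipeline can be sketched as follows (a minimal numpy sketch of eigenvalue-based PCA weighting only; the cascaded shift-invariant wavelet stage and the dual-tree complex wavelet maximum rule are not shown):

```python
import numpy as np

def pca_fusion(img_a, img_b):
    """PCA-weighted fusion: the weights are the (normalized) components
    of the principal eigenvector of the 2x2 covariance matrix of the two
    registered source images."""
    data = np.stack([img_a.ravel(), img_b.ravel()])
    cov = np.cov(data)
    vals, vecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    v = np.abs(vecs[:, -1])            # principal component
    w = v / v.sum()                    # nonnegative weights summing to 1
    return w[0] * img_a + w[1] * img_b

# Hypothetical registered slices standing in for an MRI/CT pair.
rng = np.random.default_rng(1)
mri = rng.random((32, 32))
ct = rng.random((32, 32))
fused = pca_fusion(mri, ct)
```

Because the weights are nonnegative and sum to one, each fused pixel is a convex combination of the two source pixels, so the fused image stays within the source intensity range.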
Computer-generated holograms and diffraction gratings in optical security applications
NASA Astrophysics Data System (ADS)
Stepien, Pawel J.
2000-04-01
The term 'computer-generated hologram' (CGH) describes a diffractive structure strictly calculated and recorded to diffract light in a desired way. The CGH surface profile is a result of wavefront calculation rather than of interference. CGHs are able to form 2D and 3D images. Optically variable devices (OVDs) composed of diffractive gratings are often used in security applications. There are various types of optically and digitally recorded gratings in security applications. Grating-based OVDs are used to record bright 2D images with a limited range of cinematic effects. These effects result from various orientations or densities of the recorded gratings. It is difficult to record high-quality OVDs of 3D objects using gratings. Stereograms and analogue rainbow holograms offer 3D imaging, but they are darker and have lower resolution than grating OVDs. CGH-based OVDs contain an unlimited range of cinematic effects and high-quality 3D images. Images recorded using CGHs are usually noisier than grating-based OVDs because of numerical inaccuracies in CGH calculation and mastering. CGH-based OVDs enable smooth integration of hidden and machine-readable features within an OVD design.
Favazza, Christopher P.; Fetterly, Kenneth A.; Hangiandreou, Nicholas J.; Leng, Shuai; Schueler, Beth A.
2015-01-01
Evaluation of flat-panel angiography equipment through conventional image quality metrics is limited by the scope of standard spatial-domain image quality metrics, such as contrast-to-noise ratio and spatial resolution, or by restricted access to appropriate data for calculating Fourier-domain measurements, such as the modulation transfer function, noise power spectrum, and detective quantum efficiency. Observer models have been shown capable of overcoming these limitations and are able to comprehensively evaluate medical-imaging systems. We present a spatial-domain channelized Hotelling observer model to calculate the detectability index (DI) of different-sized disk objects and compare the performance of different imaging conditions and angiography systems. When appropriate, changes in DIs were compared to expectations based on the classical Rose model of signal detection to assess linearity of the model with quantum signal-to-noise ratio (SNR) theory. For these experiments, the estimated uncertainty of the DIs was less than 3%, allowing for precise comparison of imaging systems or conditions. For most experimental variables, DI changes were linear with expectations based on quantum SNR theory. DIs calculated for the smallest objects demonstrated nonlinearity with quantum SNR theory due to system blur. Two angiography systems with different detector element sizes were shown to perform similarly across the majority of the detection tasks. PMID:26158086
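A channelized Hotelling observer of the kind described can be sketched in a few lines of numpy (toy 8x8 images, a hypothetical 3-channel Gaussian channel bank, and synthetic white noise; none of the experimental details are from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy ensemble: 8x8 images, signal-present vs. signal-absent classes.
n, side = 400, 8
signal = np.zeros((side, side))
signal[3:5, 3:5] = 1.0                      # small disk-like object
absent = rng.normal(size=(n, side * side))
present = rng.normal(size=(n, side * side)) + signal.ravel()

# Hypothetical channel bank: three radial Gaussian channels.
xx, yy = np.meshgrid(np.arange(side) - 3.5, np.arange(side) - 3.5)
r2 = xx**2 + yy**2
channels = np.stack([np.exp(-r2 / (2 * s**2)).ravel() for s in (0.5, 1.0, 2.0)])

# Channel outputs, Hotelling template, and detectability index d'.
v_a, v_p = absent @ channels.T, present @ channels.T
S = 0.5 * (np.cov(v_a.T) + np.cov(v_p.T))   # mean channel covariance
dv = v_p.mean(axis=0) - v_a.mean(axis=0)    # mean channel-signal difference
w_hot = np.linalg.solve(S, dv)              # Hotelling template (channel space)
d_prime = float(np.sqrt(dv @ w_hot))
```

The detectability index here is the standard channelized Hotelling form d′² = Δv̄ᵀ S⁻¹ Δv̄; real evaluations use many more channels and measured image ensembles.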
NASA Astrophysics Data System (ADS)
Ota, Junko; Umehara, Kensuke; Ishimaru, Naoki; Ohno, Shunsuke; Okamoto, Kentaro; Suzuki, Takanori; Shirai, Naoki; Ishida, Takayuki
2017-02-01
As the capability of high-resolution displays grows, high-resolution images are often required in Computed Tomography (CT). However, acquiring high-resolution images takes a higher radiation dose and a longer scanning time. In this study, we applied the Sparse-coding-based Super-Resolution (ScSR) method to generate high-resolution images without increasing the radiation dose. We prepared an over-complete dictionary that learned the mapping between low- and high-resolution patches and sought a sparse representation of each patch of the low-resolution input. These coefficients were used to generate the high-resolution output. For evaluation, 44 CT cases were used as the test dataset. We up-sampled images 2 or 4 times and compared the image quality of the ScSR scheme with that of bilinear and bicubic interpolation, the traditional interpolation schemes. We also compared the image quality obtained with three learning datasets: a total of 45 CT images, 91 non-medical images, and 93 chest radiographs (CXR) were used for dictionary preparation, respectively. Image quality was evaluated by measuring the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). The differences in PSNR and SSIM between the ScSR method and the interpolation methods were statistically significant. Visual assessment confirmed that the ScSR method generated high-resolution images with sharpness, whereas the conventional interpolation methods generated over-smoothed images. Comparing the three training datasets, there was no significant difference between the CT, CXR, and non-medical datasets. These results suggest that ScSR provides a robust approach for up-sampling CT images and yields substantially higher image quality in the enlarged images.
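The two evaluation metrics can be computed as follows (a minimal numpy sketch; `ssim_global` is a single-window simplification of SSIM, which is normally averaged over local windows):

```python
import numpy as np

def psnr(ref, test, data_range=255.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(data_range**2 / mse)

def ssim_global(x, y, data_range=255.0):
    """Single-window (global) SSIM; full SSIM averages local windows."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    x, y = x.astype(float), y.astype(float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cxy + c2)) / \
           ((mx**2 + my**2 + c1) * (vx + vy + c2))

# Synthetic reference and degraded image (stand-ins for CT slices).
rng = np.random.default_rng(3)
ref = rng.integers(0, 256, size=(64, 64)).astype(float)
noisy = np.clip(ref + rng.normal(scale=5, size=ref.shape), 0, 255)
p_db = psnr(ref, noisy)
s_val = ssim_global(ref, noisy)
```

A perfect reconstruction gives infinite PSNR and SSIM of 1; heavier degradation drives PSNR down and SSIM toward 0.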
FIVQ algorithm for interference hyper-spectral image compression
NASA Astrophysics Data System (ADS)
Wen, Jia; Ma, Caiwen; Zhao, Junsuo
2014-07-01
Based on the improved vector quantization (IVQ) algorithm [1] proposed in 2012, this paper proposes a further improved vector quantization (FIVQ) algorithm for LASIS (Large Aperture Static Imaging Spectrometer) interference hyper-spectral image compression. To obtain better image quality, the IVQ algorithm takes both the mean values and the VQ indices as the encoding rules. Although the IVQ algorithm improves both the bit rate and the image quality, it can be further improved to achieve a much lower bit rate for LASIS interference patterns, whose special optical characteristics arise from the push-and-sweep LASIS imaging principle. In the proposed FIVQ algorithm, the neighbors of each encoding block of the interference pattern image that uses the mean-value rule are checked to determine whether they have the same mean value as the current block. Experiments show that the proposed FIVQ algorithm achieves a lower bit rate than the IVQ algorithm for LASIS interference hyper-spectral sequences.
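The mean-value/VQ hybrid idea, including FIVQ's check of whether a neighboring block repeats the current mean, can be illustrated with a toy encoder (an assumed token scheme for illustration, not the published IVQ/FIVQ bitstream):

```python
import numpy as np

def encode_blocks(img, codebook, block=4, flat_tol=1.0):
    """Toy mean-value/VQ hybrid encoder. Near-uniform blocks are coded
    by their mean; in the FIVQ spirit, a block whose mean repeats the
    previous block's mean is flagged with a cheaper 'same' token.
    Textured blocks fall back to a VQ codebook index."""
    h, w = img.shape
    tokens, prev_mean = [], None
    for i in range(0, h, block):
        for j in range(0, w, block):
            b = img[i:i + block, j:j + block]
            if b.std() < flat_tol:               # mean-value rule
                m = round(float(b.mean()))
                if prev_mean is not None and m == prev_mean:
                    tokens.append(("same",))     # cheapest token
                else:
                    tokens.append(("mean", m))
                prev_mean = m
            else:                                # VQ rule
                errs = [np.abs(b - c).sum() for c in codebook]
                tokens.append(("vq", int(np.argmin(errs))))
                prev_mean = None
    return tokens

# Toy 8x8 image: three flat blocks followed by one textured block.
img = np.zeros((8, 8)); img[4:, 4:] = np.arange(16).reshape(4, 4)
codebook = [np.zeros((4, 4)), np.arange(16).reshape(4, 4)]
tokens = encode_blocks(img, codebook)
```

Runs of flat blocks with identical means collapse into a sequence of `("same",)` tokens, which is where the bit-rate saving over plain mean/VQ coding comes from.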
A simple parametric model observer for quality assurance in computer tomography
NASA Astrophysics Data System (ADS)
Anton, M.; Khanin, A.; Kretz, T.; Reginatto, M.; Elster, C.
2018-04-01
Model observers are mathematical classifiers that are used for the quality assessment of imaging systems such as computer tomography. The quality of the imaging system is quantified by means of the performance of a selected model observer. For binary classification tasks, the performance of the model observer is defined by the area under its ROC curve (AUC). Typically, the AUC is estimated by applying the model observer to a large set of training and test data. However, the recording of these large data sets is not always practical for routine quality assurance. In this paper we propose as an alternative a parametric model observer that is based on a simple phantom, and we provide a Bayesian estimation of its AUC. It is shown that a limited number of repeatedly recorded images (10–15) is already sufficient to obtain results suitable for the quality assessment of an imaging system. A MATLAB® function is provided for the calculation of the results. The performance of the proposed model observer is compared to that of the established channelized Hotelling observer and the nonprewhitening matched filter for simulated images as well as for images obtained from a low-contrast phantom on an x-ray tomography scanner. The results suggest that the proposed parametric model observer, along with its Bayesian treatment, can provide an efficient, practical alternative for the quality assessment of CT imaging systems.
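For a binary task, the AUC can be estimated nonparametrically from model-observer outputs via the Mann-Whitney statistic (a sketch with synthetic scores; the paper's Bayesian estimation of the AUC is not reproduced):

```python
import numpy as np

def auc_mann_whitney(scores_neg, scores_pos):
    """Nonparametric AUC estimate: the probability that a randomly
    chosen signal-present score exceeds a signal-absent one, with ties
    counted as 1/2."""
    neg = np.asarray(scores_neg, float)[:, None]
    pos = np.asarray(scores_pos, float)[None, :]
    return float((pos > neg).mean() + 0.5 * (pos == neg).mean())

# Toy model-observer outputs for ~12 repeated phantom images per class,
# mimicking the small-sample regime discussed in the abstract.
rng = np.random.default_rng(4)
absent = rng.normal(0.0, 1.0, size=12)
present = rng.normal(1.5, 1.0, size=12)
auc = auc_mann_whitney(absent, present)
```

With only 10-15 images per class this point estimate is noisy, which is exactly the motivation for the parametric, Bayesian treatment the abstract proposes.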
Global quality imaging: improvement actions.
Lau, Lawrence S; Pérez, Maria R; Applegate, Kimberly E; Rehani, Madan M; Ringertz, Hans G; George, Robert
2011-05-01
Workforce shortage, workload increase, workplace changes, and budget challenges are emerging issues around the world, which could place quality imaging at risk. It is important for imaging stakeholders to collaborate, ensure patient safety, improve the quality of care, and address these issues. There is no single panacea. A range of improvement measures, strategies, and actions are required. Examples of improvement actions supporting the 3 quality measures are described under 5 strategies: conducting research, promoting awareness, providing education and training, strengthening infrastructure, and implementing policies. The challenge is to develop long-term, cost-effective, system-based improvement actions that will bring better outcomes and underpin a sustainable future for quality imaging. In an imaging practice, these actions will result in selecting the right procedure (justification), using the right dose (optimization), and preventing errors along the patient journey. To realize this vision and implement these improvement actions, a range of expertise and adequate resources are required. Stakeholders should collaborate and work together. In today's globalized environment, collaboration is strength and provides synergy to achieve better outcomes and greater success. Copyright © 2011 American College of Radiology. Published by Elsevier Inc. All rights reserved.
Vatsa, Mayank; Singh, Richa; Noore, Afzel
2008-08-01
This paper proposes algorithms for iris segmentation, quality enhancement, match score fusion, and indexing to improve both the accuracy and the speed of iris recognition. A curve evolution approach is proposed to effectively segment a nonideal iris image using the modified Mumford-Shah functional. Different enhancement algorithms are concurrently applied on the segmented iris image to produce multiple enhanced versions of the iris image. A support-vector-machine-based learning algorithm selects locally enhanced regions from each globally enhanced image and combines these good-quality regions to create a single high-quality iris image. Two distinct features are extracted from the high-quality iris image. The global textural feature is extracted using the 1-D log polar Gabor transform, and the local topological feature is extracted using Euler numbers. An intelligent fusion algorithm combines the textural and topological matching scores to further improve the iris recognition performance and reduce the false rejection rate, whereas an indexing algorithm enables fast and accurate iris identification. The verification and identification performance of the proposed algorithms is validated and compared with other algorithms using the CASIA Version 3, ICE 2005, and UBIRIS iris databases.
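Match-score fusion of the kind described can be illustrated with a simple normalized sum rule (hypothetical scores; the paper's actual fusion is a learned combination, shown here as a fixed-weight stand-in):

```python
import numpy as np

def min_max_norm(scores):
    """Map raw match scores to [0, 1]. A real system would fix the
    normalization range from training data rather than per batch."""
    s = np.asarray(scores, float)
    return (s - s.min()) / (s.max() - s.min() + 1e-12)

def sum_rule_fusion(textural, topological, w=0.6):
    """Weighted sum-rule fusion of two normalized score sets; the weight
    w is an illustrative assumption, not a value from the paper."""
    return w * min_max_norm(textural) + (1 - w) * min_max_norm(topological)

# Hypothetical raw match scores from the two feature matchers.
gabor_scores = [12.0, 35.5, 8.2, 40.1]   # textural (Gabor) matcher
euler_scores = [0.2, 0.8, 0.1, 0.9]      # topological (Euler) matcher
fused = sum_rule_fusion(gabor_scores, euler_scores)
```

The candidate ranked highest by both matchers stays on top after fusion, while disagreements between the two matchers are smoothed out by the weighting.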
Diagnostic ultrasound at MACH 20: retroperitoneal and pelvic imaging in space.
Jones, J A; Sargsyan, A E; Barr, Y R; Melton, S; Hamilton, D R; Dulchavsky, S A; Whitson, P A
2009-07-01
An operationally available diagnostic imaging capability augments spaceflight medical support by facilitating the diagnosis, monitoring and treatment of medical or surgical conditions, by improving medical outcomes and, thereby, by lowering medical mission impacts and the probability of crew evacuation due to medical causes. Microgravity-related physiological changes occurring during spaceflight can affect the genitourinary system and potentially cause conditions such as urinary retention or nephrolithiasis for which ultrasonography (U/S) would be a useful diagnostic tool. This study describes the first genitourinary ultrasound examination conducted in space, and evaluates image quality, frame rate, resolution requirements, real-time remote guidance of nonphysician crew medical officers and evaluation of on-orbit tools that can augment image acquisition. A nonphysician crew medical officer (CMO) astronaut, with minimal training in U/S, performed a self-examination of the genitourinary system onboard the International Space Station, using a Philips/ATL Model HDI-5000 ultrasound imaging unit located in the International Space Station Human Research Facility. The CMO was remotely guided by voice commands from experienced, earth-based sonographers stationed in Mission Control Center in Houston. The crewmember, with guidance, was able to acquire all of the target images. Real-time and still U/S images received at Mission Control Center in Houston were of sufficient quality for the images to be diagnostic for multiple potential genitourinary applications. Microgravity-based ultrasound imaging can provide diagnostic quality images of the retroperitoneum and pelvis, offering improved diagnosis and treatment for onboard medical contingencies. Successful completion of complex sonographic examinations can be obtained even with minimally trained nonphysician ultrasound operators, with the assistance of ground-based real-time guidance.
Verification technology of remote sensing camera satellite imaging simulation based on ray tracing
NASA Astrophysics Data System (ADS)
Gu, Qiongqiong; Chen, Xiaomei; Yang, Deyun
2017-08-01
Remote sensing satellite camera imaging simulation technology is broadly used to evaluate satellite imaging quality and to test data application systems, but the simulation precision is difficult to verify. In this paper, we propose an experimental simulation verification method based on comparing results as test parameters are varied. For the simulation model based on ray tracing, the experiment verifies the model precision by changing the types of devices, which correspond to the parameters of the model. The experimental results show that the similarity between the ray-tracing imaging model and the experimental image is 91.4%, indicating that the model simulates the remote sensing satellite imaging system very well.
Banić, Nikola; Lončarić, Sven
2015-11-01
Removing the influence of illumination on image colors and adjusting the brightness across the scene are important image enhancement problems. This is achieved by applying adequate color constancy and brightness adjustment methods. One of the earliest models to deal with both of these problems was the Retinex theory. Some of the Retinex implementations tend to give high-quality results by performing local operations, but they are computationally relatively slow. One of the recent Retinex implementations is light random sprays Retinex (LRSR). In this paper, a new method is proposed for brightness adjustment and color correction that overcomes the main disadvantages of LRSR. There are three main contributions of this paper. First, a concept of memory sprays is proposed to reduce the number of LRSR's per-pixel operations to a constant regardless of the parameter values, thereby enabling a fast Retinex-based local image enhancement. Second, an effective remapping of image intensities is proposed that results in significantly higher quality. Third, the problem of LRSR's halo effect is significantly reduced by using an alternative illumination processing method. The proposed method enables a fast Retinex-based image enhancement by processing Retinex paths in a constant number of steps regardless of the path size. Due to the halo effect removal and remapping of the resulting intensities, the method outperforms many of the well-known image enhancement methods in terms of resulting image quality. The results are presented and discussed. It is shown that the proposed method outperforms most of the tested methods in terms of image brightness adjustment, color correction, and computational speed.
Remote sensing fusion based on guided image filtering
NASA Astrophysics Data System (ADS)
Zhao, Wenfei; Dai, Qinling; Wang, Leiguang
2015-12-01
In this paper, we propose a novel remote sensing fusion approach based on guided image filtering. The fused images preserve the spectral features of the original multispectral (MS) images well while enhancing the spatial detail. Four quality assessment indexes are also introduced to evaluate the fusion effect in comparison with other fusion methods. Experiments were carried out on Gaofen-2, QuickBird, WorldView-2, and Landsat-8 images, and the results show excellent performance of the proposed method.
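The guided image filter at the core of such methods, in its standard local-linear-model formulation, can be sketched in numpy (edge-padded box filtering via integral images; the radius and regularization values are illustrative):

```python
import numpy as np

def box(x, r):
    """Mean filter over a (2r+1)x(2r+1) window using an integral image,
    with edge padding so the output matches the input size."""
    pad = np.pad(x, r, mode="edge")
    c = pad.cumsum(0).cumsum(1)
    c = np.pad(c, ((1, 0), (1, 0)))              # zero row/col for sums
    h, w = x.shape
    k = 2 * r + 1
    s = c[k:, k:] - c[:h, k:] - c[k:, :w] + c[:h, :w]
    return s / k**2

def guided_filter(I, p, r=2, eps=1e-3):
    """Guided filter: output q = a*I + b, with a, b from a local linear
    regression of the input p on the guide I in each window."""
    mI, mp = box(I, r), box(p, r)
    var_I = box(I * I, r) - mI * mI
    cov_Ip = box(I * p, r) - mI * mp
    a = cov_Ip / (var_I + eps)
    b = mp - a * mI
    return box(a, r) * I + box(b, r)
```

In fusion pipelines the filter is typically used to smooth per-band weight maps under the guidance of the source images, so that fusion weights follow image edges rather than cutting across them.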
Multiresolution generalized N dimension PCA for ultrasound image denoising
2014-01-01
Background Ultrasound images are usually affected by speckle noise, which is a type of random multiplicative noise. Thus, reducing speckle and improving image visual quality are vital to obtaining a better diagnosis. Method In this paper, a novel noise reduction method for medical ultrasound images, called multiresolution generalized N dimension PCA (MR-GND-PCA), is presented. In this method, the Gaussian pyramid and multiscale image stacks on each level are built first. GND-PCA, a multilinear subspace learning method, is used for denoising. The levels are then combined via Laplacian pyramids to obtain the final denoised image. Results The proposed method is tested with synthetically speckled and real ultrasound images, and quality evaluation metrics, including MSE, SNR and PSNR, are used to evaluate its performance. Conclusion Experimental results show that the proposed method achieved the lowest noise interference and improved image quality by reducing noise and preserving structure. The method is also robust for images with much higher levels of speckle noise. For clinical images, the results show that MR-GND-PCA can reduce speckle and preserve resolvable details. PMID:25096917
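The Gaussian/Laplacian pyramid machinery that the method builds on can be sketched in plain numpy (nearest-neighbor upsampling and a 5-tap binomial kernel are simplifications of mine; the GND-PCA denoising step itself is not reproduced):

```python
import numpy as np

KERNEL = np.array([1., 4., 6., 4., 1.]) / 16.0  # binomial smoothing kernel

def smooth(img):
    """Separable 5-tap smoothing with edge padding."""
    p = np.pad(img, 2, mode="edge")
    tmp = np.apply_along_axis(lambda r: np.convolve(r, KERNEL, "valid"), 1, p)
    return np.apply_along_axis(lambda c: np.convolve(c, KERNEL, "valid"), 0, tmp)

def gaussian_pyramid(img, levels=3):
    pyr = [img]
    for _ in range(levels - 1):
        pyr.append(smooth(pyr[-1])[::2, ::2])   # blur, then decimate
    return pyr

def upsample(img, shape):
    """Nearest-neighbor 2x upsampling, cropped to a target shape."""
    return np.kron(img, np.ones((2, 2)))[:shape[0], :shape[1]]

def laplacian_pyramid(img, levels=3):
    g = gaussian_pyramid(img, levels)
    lap = [g[i] - upsample(g[i + 1], g[i].shape) for i in range(levels - 1)]
    return lap + [g[-1]]                         # coarsest level kept as-is

def reconstruct(lap):
    img = lap[-1]
    for level in reversed(lap[:-1]):
        img = level + upsample(img, level.shape)
    return img

rng = np.random.default_rng(6)
demo = rng.random((16, 16))
rec = reconstruct(laplacian_pyramid(demo, levels=3))
```

With matching up/downsampling operators the Laplacian pyramid is exactly invertible, which is what lets the denoised levels be recombined into the final image without introducing reconstruction error of its own.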
Task-based statistical image reconstruction for high-quality cone-beam CT
NASA Astrophysics Data System (ADS)
Dang, Hao; Webster Stayman, J.; Xu, Jennifer; Zbijewski, Wojciech; Sisniega, Alejandro; Mow, Michael; Wang, Xiaohui; Foos, David H.; Aygun, Nafi; Koliatsos, Vassilis E.; Siewerdsen, Jeffrey H.
2017-11-01
Task-based analysis of medical imaging performance underlies many ongoing efforts in the development of new imaging systems. In statistical image reconstruction, regularization is often formulated to encourage smoothness and/or sharpness (e.g. a linear, quadratic, or Huber penalty) but without explicit formulation of the task. We propose an alternative regularization approach in which a spatially varying penalty is determined that maximizes task-based imaging performance at every location in a 3D image. We apply the method to model-based image reconstruction (MBIR; viz., penalized weighted least-squares, PWLS) in cone-beam CT (CBCT) of the head, focusing on the task of detecting a small, low-contrast intracranial hemorrhage (ICH), and we test the performance of the algorithm in the context of a recently developed CBCT prototype for point-of-care imaging of brain injury. Theoretical predictions of local spatial resolution and noise are computed via an optimization by which regularization (specifically, the quadratic penalty strength) is allowed to vary throughout the image to maximize the local task-based detectability index (d′). Simulation studies and test-bench experiments were performed using an anthropomorphic head phantom. Three PWLS implementations were tested: a conventional (constant) penalty; a certainty-based penalty derived to enforce a constant point-spread function (PSF); and the task-based penalty derived to maximize local detectability at each location. Conventional (constant) regularization exhibited a fairly strong degree of spatial variation in d′, and the certainty-based method achieved a uniform PSF, but each exhibited a reduction in detectability compared to the task-based method, which improved detectability by up to ~15%. The improvement was strongest in areas of high attenuation (skull base), where the conventional and certainty-based methods tended to over-smooth the data.
The task-driven reconstruction method presents a promising regularization method in MBIR by explicitly incorporating task-based imaging performance as the objective. The results demonstrate improved ICH conspicuity and support the development of high-quality CBCT systems.
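The PWLS objective with a spatially varying quadratic penalty, as referenced above, has the standard form (notation is mine, not taken verbatim from the paper):

```latex
\hat{\mathbf{x}} = \arg\min_{\mathbf{x}}
  \;\|\mathbf{y} - \mathbf{A}\mathbf{x}\|_{\mathbf{W}}^{2}
  \;+\; \sum_{j} \beta_{j} \sum_{k \in \mathcal{N}_{j}} w_{jk}\,(x_{j} - x_{k})^{2}
```

where $\mathbf{A}$ is the forward projector, $\mathbf{W}$ the diagonal statistical weighting, and $\mathcal{N}_{j}$ the neighborhood of voxel $j$. A conventional penalty holds $\beta_{j} = \beta$ constant; the task-based approach instead chooses each local strength $\beta_{j}$ to maximize the local detectability index $d'$.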
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, D; Kang, S; Kim, T
2014-06-01
Purpose: In this paper, we implemented four-dimensional (4D) digital tomosynthesis (DTS) imaging based on an algebraic image reconstruction technique and a total-variation minimization method in order to compensate for the undersampled projection data and improve the image quality. Methods: The projection data were acquired, assuming the cone-beam computed tomography geometry of a linear accelerator, by Monte Carlo simulation and an in-house 4D digital phantom generation program. We performed 4D DTS based upon the simultaneous algebraic reconstruction technique (SART), an iterative image reconstruction technique, together with a total-variation minimization method (TVMM). To verify the effectiveness of this reconstruction algorithm, we performed systematic simulation studies to investigate the imaging performance. Results: The 4D DTS algorithm based upon SART and TVMM seems to give better results than the existing method, filtered backprojection. Conclusion: The advanced image reconstruction algorithm for 4D DTS would be useful for validating intra-fraction motion during radiation therapy. In addition, it may enable real-time imaging for adaptive radiation therapy. This research was supported by the Leading Foreign Research Institute Recruitment Program (Grant No. 2009-00420) and the Basic Atomic Energy Research Institute (BAERI) (Grant No. 2009-0078390) through the National Research Foundation of Korea (NRF), funded by the Ministry of Science, ICT and Future Planning (MSIP).
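The SART update at the core of the method can be sketched on a toy system (numpy; the 3x3 matrix stands in for a real projection geometry, and the TV-minimization step is omitted):

```python
import numpy as np

def sart(A, b, n_iter=50, lam=1.0):
    """Simultaneous algebraic reconstruction technique (SART):
    x <- x + lam * V^-1 A^T W (b - A x),
    where W and V hold the reciprocal row and column sums of A."""
    A = np.asarray(A, float)
    row_sum, col_sum = A.sum(axis=1), A.sum(axis=0)
    W = np.where(row_sum > 0, 1.0 / row_sum, 0.0)
    V = np.where(col_sum > 0, 1.0 / col_sum, 0.0)
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + lam * V * (A.T @ (W * (b - A @ x)))
    return x

# Tiny toy "projection" system (illustrative, not a real CT geometry).
A = np.array([[1., 1., 0.],
              [0., 1., 1.],
              [1., 0., 1.]])
x_true = np.array([1., 2., 3.])
x_rec = sart(A, A @ x_true, n_iter=200)
```

In the 4D DTS setting each SART pass over the undersampled projections would be interleaved with a TV-minimization step that suppresses the streak artifacts the sparse sampling leaves behind.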
Wachman, Elliot S; Geyer, Stanley J; Recht, Joel M; Ward, Jon; Zhang, Bill; Reed, Murray; Pannell, Chris
2014-05-01
An acousto-optic tunable filter (AOTF)-based multispectral imaging microscope system allows the combination of cellular morphology and multiple biomarker stains on a single microscope slide. We describe advances in AOTF technology that have greatly improved spectral purity, field uniformity, and image quality. A multispectral imaging bright-field microscope using these advances demonstrates pathology results with great potential for clinical use.
Using collective expert judgements to evaluate quality measures of mass spectrometry images.
Palmer, Andrew; Ovchinnikova, Ekaterina; Thuné, Mikael; Lavigne, Régis; Guével, Blandine; Dyatlov, Andrey; Vitek, Olga; Pineau, Charles; Borén, Mats; Alexandrov, Theodore
2015-06-15
Imaging mass spectrometry (IMS) is a maturing technique of molecular imaging. Confidence in the reproducible quality of IMS data is essential for its integration into routine use. However, the predominant method for assessing quality is visual examination, a time-consuming, unstandardized and non-scalable approach. So far, the problem of assessing quality has only been marginally addressed, and existing measures do not account for the spatial information of IMS data. Importantly, no approach exists for unbiased evaluation of potential quality measures. We propose a novel approach for evaluating potential measures by creating a gold-standard set using collective expert judgements, upon which we evaluated image-based measures. To produce a gold standard, we engaged 80 IMS experts, each rating the relative quality of 52 pairs of ion images from MALDI-TOF IMS datasets of rat brain coronal sections. Experts' optional feedback on their expertise, the task and the survey showed that (i) they had diverse backgrounds and sufficient expertise, (ii) the task was properly understood, and (iii) the survey was comprehensible. A moderate inter-rater agreement was achieved, with a Krippendorff's alpha of 0.5. A gold-standard set of 634 pairs of images with accompanying ratings was constructed and showed a high agreement of 0.85. Eight families of potential measures with a range of parameters and statistical descriptors, 143 in total, were evaluated. Both signal-to-noise and spatial chaos-based measures performed well, with correlations of 0.7 to 0.9 with the gold-standard ratings. Moreover, we showed that a composite measure with linear coefficients (trained on the gold standard with regularized least-squares optimization and the lasso) showed a strong linear correlation of 0.94 and an accuracy of 0.98 in predicting which image in a pair was of higher quality.
The anonymized data collected from the survey and the Matlab source code for data processing can be found at: https://github.com/alexandrovteam/IMS_quality. © The Author 2015. Published by Oxford University Press.
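The composite-measure training and pairwise evaluation described above can be sketched with a closed-form regularized least-squares fit and a sign-agreement accuracy on image pairs. The feature matrices below are hypothetical stand-ins for the paper's 143 individual quality measures, and the lasso variant is omitted:

```python
import numpy as np

def ridge_fit(X, y, lam=0.1):
    # closed-form regularized least squares: w = (X^T X + lam I)^{-1} X^T y
    n = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ y)

def pairwise_accuracy(feats_a, feats_b, gold, w):
    # gold[i] = +1 if the first image of pair i was judged higher quality,
    # -1 otherwise; the composite score is a linear combination of measures
    pred = np.sign((feats_a - feats_b) @ w)
    return float(np.mean(pred == gold))
```

Training on feature *differences* within each rated pair, as here, directly targets the "which image is better" question that the gold standard answers.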
Effect of using different cover image quality to obtain robust selective embedding in steganography
NASA Astrophysics Data System (ADS)
Abdullah, Karwan Asaad; Al-Jawad, Naseer; Abdulla, Alan Anwer
2014-05-01
One common type of steganography conceals an image as a secret message inside another image, normally called the cover image; the resulting image is called the stego image. The aim of this paper is to investigate the effect of using cover images of different quality, and to analyse the use of different bit-planes in terms of robustness against well-known active attacks such as gamma adjustment, statistical filters, and linear spatial filters. The secret messages are embedded in a higher bit-plane, i.e. other than the Least Significant Bit (LSB), in order to resist active attacks. The embedding process is performed in three major steps: first, the embedding algorithm selectively identifies useful areas (blocks) for embedding based on their lighting conditions; second, the most useful blocks are nominated for embedding based on their entropy and average; third, the right bit-plane is selected for embedding. This block selection scatters the secret message(s) randomly around the cover image. Different tests were performed to select a proper block size, which depends on the nature of the cover image used. Our proposed method suggests a suitable embedding bit-plane as well as the right blocks for embedding. Experimental results demonstrate that the quality of the cover image affects the outcome when the stego image is subjected to different active attacks. Although the secret messages are embedded in a higher bit-plane, they cannot be recognised visually within the stego image.
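The core operations described above, block entropy as a selection criterion and writing a bit into a chosen higher bit-plane, can be sketched as follows. The entropy-only selection and the specific plane number are illustrative; the paper also uses block averages and lighting conditions:

```python
import numpy as np

def block_entropy(block):
    # Shannon entropy of an 8-bit block; busier blocks hide changes better
    hist, _ = np.histogram(block, bins=256, range=(0, 256))
    p = hist[hist > 0] / block.size
    return float(-(p * np.log2(p)).sum())

def embed_bit(pixel, bit, plane):
    # write `bit` into the given bit-plane (plane 0 would be the LSB;
    # the paper embeds in a higher plane to resist active attacks)
    return (int(pixel) & ~(1 << plane)) | (int(bit) << plane)

def extract_bit(pixel, plane):
    return (int(pixel) >> plane) & 1
```

Embedding above the LSB changes pixel values by more (e.g. ±8 at plane 3), which is why restricting the change to high-entropy blocks matters for invisibility.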
Sparse representations via learned dictionaries for x-ray angiogram image denoising
NASA Astrophysics Data System (ADS)
Shang, Jingfan; Huang, Zhenghua; Li, Qian; Zhang, Tianxu
2018-03-01
X-ray angiogram image denoising is an active research topic in the field of computer vision. In particular, the denoising performance of many existing methods has been greatly improved by the wide use of nonlocal similar patches. However, nonlocal self-similar (NSS) patch-based methods can still be improved and extended. In this paper, we propose an image denoising model based on the sparsity of NSS patches to obtain high denoising performance and high-quality images. In order to represent the NSS patches sparsely at every location of the image and solve the image denoising model more efficiently, we learn dictionaries as a global image prior with the K-SVD algorithm over the image being processed; then the alternating direction method of multipliers (ADMM) is used to solve the image denoising model. The results of extensive synthetic experiments demonstrate that, owing to the dictionaries learned by the K-SVD algorithm, the sparsely augmented Lagrangian image denoising (SALID) model achieves state-of-the-art denoising performance and higher-quality images. Moreover, we also give some denoising results on clinical X-ray angiogram images.
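The sparse-coding step at the heart of K-SVD dictionary learning is usually Orthogonal Matching Pursuit; a minimal version is sketched below. This is the generic OMP algorithm (unit-norm atoms assumed), not the paper's full SALID pipeline:

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: greedily select up to k atoms of
    dictionary D (columns assumed unit-norm) to approximate signal y."""
    residual = y.astype(float).copy()
    idx = []
    coef = np.zeros(0)
    for _ in range(k):
        # pick the atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in idx:
            idx.append(j)
        # re-fit all selected atoms jointly (the "orthogonal" step)
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        residual = y - D[:, idx] @ coef
    x = np.zeros(D.shape[1])
    x[idx] = coef
    return x
```

In a denoising setting, each noisy patch is sparse-coded this way and reconstructed as D @ x, which discards the noise component that the few selected atoms cannot represent.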
NASA Astrophysics Data System (ADS)
Ang, W. C.; Hashim, S.; Karim, M. K. A.; Bahruddin, N. A.; Salehhon, N.; Musa, Y.
2017-05-01
The widespread use of computed tomography (CT) has increased medical radiation exposure and cancer risk. We aimed to evaluate the impact of AIDR 3D in CT abdomen-pelvis examinations in terms of image quality and radiation dose at a low-dose (LD) setting compared to the standard dose (STD) with filtered back projection (FBP) reconstruction. We retrospectively reviewed the images of 40 patients who underwent CT abdomen-pelvis using an 80-slice CT scanner. Group 1 patients (n=20, mean age 41 ± 17 years) were scanned at LD with AIDR 3D reconstruction and Group 2 patients (n=20, mean age 52 ± 21 years) were scanned at STD with FBP reconstruction. Objective image noise was assessed by region-of-interest (ROI) measurements in the liver and aorta as the standard deviation (SD) of the attenuation value (in Hounsfield units, HU), while subjective image quality was evaluated by two radiologists. Statistical analysis was used to compare the scan length, CT dose index volume (CTDIvol) and image quality of both patient groups. Although both groups had similar mean scan lengths, CTDIvol decreased significantly, by 38%, in LD CT compared to STD CT (p<0.05). Objective and subjective image quality were statistically improved with AIDR 3D (p<0.05). In conclusion, AIDR 3D enables a significant dose reduction of 38% with superior image quality in LD CT abdomen-pelvis.
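The objective noise metric used above, the standard deviation of HU values inside a circular ROI, is straightforward to compute; a minimal sketch (the ROI placement in the liver and aorta is study-specific):

```python
import numpy as np

def roi_stats(img_hu, cy, cx, r):
    """Mean and SD of HU values inside a circular ROI centered at
    (cy, cx) with radius r (pixels); the SD is the noise metric."""
    yy, xx = np.ogrid[:img_hu.shape[0], :img_hu.shape[1]]
    mask = (yy - cy) ** 2 + (xx - cx) ** 2 <= r * r
    vals = img_hu[mask]
    return float(vals.mean()), float(vals.std())
```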
Locally Enhanced Image Quality with Tunable Hybrid Metasurfaces
NASA Astrophysics Data System (ADS)
Shchelokova, Alena V.; Slobozhanyuk, Alexey P.; Melchakova, Irina V.; Glybovski, Stanislav B.; Webb, Andrew G.; Kivshar, Yuri S.; Belov, Pavel A.
2018-01-01
Metasurfaces represent a new paradigm in artificial subwavelength structures due to their potential to overcome many challenges typically associated with bulk metamaterials. The ability to make very thin structures and change their properties dynamically makes metasurfaces an exceptional meta-optics platform for engineering advanced electromagnetic and photonic metadevices. Here, we suggest and demonstrate experimentally a tunable metasurface capable of enhancing significantly the local image quality in magnetic resonance imaging. We present a design of the hybrid metasurface based on electromagnetically coupled dielectric and metallic elements. We demonstrate how to tailor the spectral characteristics of the metasurface eigenmodes by changing dynamically the effective permittivity of the structure. By maximizing a coupling between metasurface eigenmodes and transmitted and received fields in the magnetic resonance imaging (MRI) system, we enhance the device sensitivity that results in a substantial improvement of the image quality.
Design of a prototype tri-electrode ion-chamber for megavoltage X-ray imaging
NASA Astrophysics Data System (ADS)
Samant, Sanjiv S.; Gopal, Arun; Jain, Jinesh; Xia, Junyi; DiBianca, Frank A.
2007-04-01
High-energy (megavoltage) X-ray imaging is widely used in industry (e.g., aerospace, construction, material sciences) as well as in health care (radiation therapy). One of the fundamental problems with megavoltage imaging is poor contrast and spatial resolution in the detected images due to the dominance of Compton scattering at megavoltage X-ray energies. Therefore, although megavoltage X-rays can be used to image highly attenuating objects that cannot be imaged at kilovoltage energies, the former does not provide the high image quality that is associated with the latter. A high-contrast, high-spatial-resolution detector for high-energy X-ray fields, called the kinestatic charge detector (KCD), is presented here. The KCD is a tri-electrode ion-chamber based on a highly pressurized noble gas. The KCD operates in conjunction with a strip-collimated X-ray beam (for high scatter rejection) to scan across the imaging field. Its thick detector design and unique operating principle provide enhanced charge signal integration for high-quality imaging (quantum efficiency ~50%) despite the unfavorable implications of high-energy X-ray interactions for image quality. The proposed design for a large-field prototype KCD includes a cylindrical pressure chamber along with 576 signal-collecting electrodes capable of resolving spatial frequencies of 2 mm⁻¹. The collecting electrodes are routed out of the chamber through the flat end-cap, thereby optimizing the mechanical strength of the chamber. This article highlights the simplified design of the chamber, which uses minimal components for simple assembly. In addition, fundamental imaging measurements and estimates of ion recombination performed on a proof-of-principle test chamber are presented. The imaging performance of the prototype KCD was found to be an order of magnitude greater than that of commercial phosphor-screen-based flat-panel systems, demonstrating the potential for high-quality megavoltage imaging in a variety of industrial applications.
Goal-oriented evaluation of binarization algorithms for historical document images
NASA Astrophysics Data System (ADS)
Obafemi-Ajayi, Tayo; Agam, Gady
2013-01-01
Binarization is of significant importance in document analysis systems. It is an essential first step, prior to further stages such as Optical Character Recognition (OCR), document segmentation, or enhancement of readability of the document after some restoration stages. Hence, proper evaluation of binarization methods to verify their effectiveness is of great value to the document analysis community. In this work, we perform a detailed goal-oriented evaluation of image quality assessment of the 18 binarization methods that participated in the DIBCO 2011 competition using the 16 historical document test images used in the contest. We are interested in the image quality assessment of the outputs generated by the different binarization algorithms as well as the OCR performance, where possible. We compare our evaluation of the algorithms based on human perception of quality to the DIBCO evaluation metrics. The results obtained provide an insight into the effectiveness of these methods with respect to human perception of image quality as well as OCR performance.
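One of the standard DIBCO evaluation metrics referred to above is the pixel-level F-measure against a ground-truth binarization; a minimal sketch (the foreground/background polarity convention here is an assumption):

```python
import numpy as np

def binarization_f_measure(pred, gt):
    """Pixel-level F-measure between a predicted binarization and the
    ground truth (convention: True marks a text/foreground pixel)."""
    tp = np.sum(pred & gt)
    fp = np.sum(pred & ~gt)
    fn = np.sum(~pred & gt)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

A goal-oriented evaluation like the one in this paper then asks how well such pixel-level scores track downstream OCR performance and human quality judgements.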
Application of Oversampling to obtain the MTF of Digital Radiology Equipment.
NASA Astrophysics Data System (ADS)
Narváez, M.; Graffigna, J. P.; Gómez, M. E.; Romo, R.
2016-04-01
Within the objectives of the project Medical Image Processing for Quality Assessment of X-Ray Imaging, the present research work is aimed at developing a phantom X-ray image and its associated processing algorithms in order to evaluate the image quality rendered by digital X-ray equipment. These tools are used to measure various image parameters, among which spatial resolution is a fundamental property that can be characterized by the Modulation Transfer Function (MTF) of an imaging system [1]. After a thorough survey of the literature on imaging quality control in digital radiography in Argentine and international publications, it was decided to adopt for this work the norm IEC 62220-1:2003, which recommends using an image of an edge as the testing method. In order to obtain the characterizing MTF, a protocol was designed to unify the conditions under which the images are acquired for later evaluation. The protocol involves acquiring a radiographic image with a specific reference technique, i.e. fixed voltage, current, exposure time, focus-to-detector distance, and other reference parameters, and reading the image through a computed radiography or direct digital radiography system. The contribution of the work stems from the fact that, even though the traditional way of evaluating X-ray image quality has relied mostly on subjective methods, this work presents an objective evaluation tool for the images obtained with a given piece of equipment, followed by a contrastive analysis with the renderings from other X-ray imaging sets. Once the images were obtained, the specific calculations were carried out. Finally, we present the results obtained on different equipment.
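The edge method recommended by IEC 62220-1 reduces to a short pipeline: differentiate the (oversampled) edge spread function into a line spread function, then take the normalized magnitude of its Fourier transform. A minimal sketch (the windowing choice and sampling details are simplifications of the norm's procedure):

```python
import numpy as np

def mtf_from_esf(esf, pixel_pitch_mm):
    """Estimate the MTF from a 1D edge spread function (ESF).

    Returns spatial frequencies (cycles/mm) and the normalized MTF.
    """
    lsf = np.gradient(esf)                 # ESF -> line spread function
    lsf = lsf * np.hanning(lsf.size)       # taper noise at the tails
    mtf = np.abs(np.fft.rfft(lsf))
    mtf = mtf / mtf[0]                     # normalize to 1 at DC
    freqs = np.fft.rfftfreq(lsf.size, d=pixel_pitch_mm)
    return freqs, mtf
```

In practice the ESF is built by projecting pixels across a slightly tilted edge, which oversamples the profile well beyond the native pixel pitch.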
Low-dose CT reconstruction with patch based sparsity and similarity constraints
NASA Astrophysics Data System (ADS)
Xu, Qiong; Mou, Xuanqin
2014-03-01
With the rapid growth of CT-based medical applications, low-dose CT reconstruction is becoming more and more important to human health. Compared with other methods, statistical iterative reconstruction (SIR) usually performs better in the low-dose case. However, the reconstructed image quality of SIR depends strongly on the prior-based regularization, because low-dose data are insufficient. The frequently used regularization is developed from pixel-based priors, such as smoothness between adjacent pixels. This kind of pixel-based constraint cannot distinguish noise from structure effectively. Recently, patch-based methods, such as dictionary learning and non-local means filtering, have outperformed the conventional pixel-based methods. A patch is a small area of the image that expresses its structural information. In this paper, we propose to use patch-based constraints to improve the image quality of low-dose CT reconstruction. In the SIR framework, both patch-based sparsity and patch-based similarity are considered in the regularization term. On one hand, patch-based sparsity is addressed by sparse representation and dictionary learning methods; on the other hand, patch-based similarity is addressed by a non-local means filtering method. We conducted a real-data experiment to evaluate the proposed method. The experimental results validate that this method can lead to better images with less noise and more detail than other methods in low-count and few-view cases.
Color dithering methods for LEGO-like 3D printing
NASA Astrophysics Data System (ADS)
Sun, Pei-Li; Sie, Yuping
2015-01-01
Color dithering methods for LEGO-like 3D printing are proposed in this study. The first method works for building with opaque color bricks. It is a modification of classic error diffusion. Many color primaries can be chosen; however, RGBYKW is recommended, as its image quality is good and the number of color primaries is limited. For translucent color bricks, multi-layer color building can enhance the image quality significantly. A LUT-based method is proposed to speed up the dithering process and make the color distribution even smoother. Simulation results show the proposed multi-layer dithering method can substantially improve the image quality of LEGO-like 3D printing.
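The classic error diffusion that the paper modifies can be sketched as Floyd-Steinberg diffusion onto a fixed brick palette. The RGBYKW-style palette and the nearest-color rule below are illustrative assumptions; the paper's opaque-brick and multi-layer variants differ:

```python
import numpy as np

def dither_to_palette(img, palette):
    """Floyd-Steinberg error diffusion of an RGB image onto a fixed
    palette (weights 7/16, 3/16, 5/16, 1/16 to unprocessed neighbors)."""
    out = img.astype(float).copy()
    res = np.empty_like(out)
    h, w, _ = out.shape
    for y in range(h):
        for x in range(w):
            old = out[y, x]
            # snap to the nearest palette color
            new = palette[np.argmin(((palette - old) ** 2).sum(axis=1))]
            res[y, x] = new
            err = old - new
            # push the quantization error onto future pixels
            if x + 1 < w:
                out[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    out[y + 1, x - 1] += err * 3 / 16
                out[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    out[y + 1, x + 1] += err * 1 / 16
    return res
```

Because the diffused error is conserved locally, average color is approximately preserved even though every output pixel is forced to one of the few brick colors.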
The Quality of In Vivo Upconversion Fluorescence Signals Inside Different Anatomic Structures.
Wang, Lijiang; Draz, Mohamed Shehata; Wang, Wei; Liao, Guodong; Xu, Yuhong
2015-02-01
Fluorescence imaging is a broadly interesting and rapidly growing strategy for non-invasive clinical applications. However, because of interference from light scattering, absorbance, and tissue autofluorescence, the images can exhibit low sensitivity and poor quality. Upconversion fluorescence imaging, which is based on the use of near-infrared (NIR) light for excitation, has recently been introduced as an improved approach to minimize the effects of light scattering and tissue autofluorescence. This strategy is promising for ultrasensitive and deep-tissue imaging applications. However, the emitted upconversion fluorescence signals are primarily in the visible range and are likely to be absorbed and scattered by tissues. Therefore, different anatomic structures could impose various effects on the quality of the images. In this study, we used upconversion-core/silica-shell nanoprobes to evaluate the quality of upconversion fluorescence at different anatomic locations in athymic nude mice. The nanoprobe contained an upconversion core, which was green (β-NaYF4:Yb3+/Ho3+) or red (β-NaYF4:Yb3+/Er3+), and a nonporous silica shell to allow for multicolor imaging. High-quality upconversion fluorescence signals were detected with signal-to-noise ratios of up to 170 at tissue depths of up to 1.0 cm when a 980 nm laser excitation source and a bandpass emission filter were used. The presence of dense tissue structures along the imaging path reduced the signal intensity and image quality, and nanoprobes with longer-wavelength emission spectra were therefore preferable. This study offers a detailed analysis of the quality of upconversion signals in vivo inside different anatomic structures. Such information could be essential for the analysis of upconversion fluorescence images in in vivo biodiagnostic and microbial tracking applications.
A review of image quality assessment methods with application to computational photography
NASA Astrophysics Data System (ADS)
Maître, Henri
2015-12-01
Image quality assessment has long been of major importance for several domains of the image industry, for instance restoration, communication and coding. New application fields are opening today with the increase of embedded processing power in cameras and the emergence of computational photography: automatic tuning, image selection, image fusion, image database building, etc. We review the literature on image quality evaluation. We pay attention to the very different underlying hypotheses and results of the existing methods. We explain why they differ and for which applications they may be beneficial. We also underline their limits, especially for possible use in the novel domain of computational photography. Having been developed to address different objectives, they propose answers on different aspects, which makes them sometimes complementary. However, they all remain limited in their capability to challenge the human expert, the said or unsaid ultimate goal. We consider the methods which are based on retrieving the parameters of a signal, mostly in spectral analysis; then we explore the more global methods that qualify image quality in terms of noticeable defects or degradations, as is popular in the compression domain; in a third field, the image acquisition process is considered as a channel between the source and the receiver, allowing use of the tools of information theory to qualify the system in terms of entropy and information capacity. However, these different approaches hardly address the most difficult part of the task, which is to measure the quality of a photograph in terms of aesthetic properties. To help address this problem, at the intersection of philosophy, biology and psychology, we propose a brief review of the literature on qualifying beauty, present attempts to adapt these concepts to visual patterns, and initiate a reflection on what could be done in the field of photography.
Infrared image enhancement using H(infinity) bounds for surveillance applications.
Qidwai, Uvais
2008-08-01
In this paper, two algorithms have been presented to enhance the infrared (IR) images. Using the autoregressive moving average model structure and H(infinity) optimal bounds, the image pixels are mapped from the IR pixel space into normal optical image space, thus enhancing the IR image for improved visual quality. Although H(infinity)-based system identification algorithms are very common now, they are not quite suitable for real-time applications owing to their complexity. However, many variants of such algorithms are possible that can overcome this constraint. Two such algorithms have been developed and implemented in this paper. Theoretical and algorithmic results show remarkable enhancement in the acquired images. This will help in enhancing the visual quality of IR images for surveillance applications.
90Y Liver Radioembolization Imaging Using Amplitude-Based Gated PET/CT.
Osborne, Dustin R; Acuff, Shelley; Neveu, Melissa; Kaman, Austin; Syed, Mumtaz; Fu, Yitong
2017-05-01
The usage of PET/CT to monitor patients with hepatocellular carcinoma following 90Y radioembolization has increased; however, image quality is often poor because of low count efficiency and respiratory motion. Motion can be corrected using gating techniques, but at the expense of additional image noise. Amplitude-based gating has been shown to improve quantification in FDG PET, but few have used this technique in 90Y liver imaging. The patients shown in this work indicate that amplitude-based gating can be used in 90Y PET/CT liver imaging to provide motion-corrected images with higher estimates of activity concentration that may improve posttherapy dosimetry.
Quality grading of Atlantic salmon (Salmo salar) by computer vision.
Misimi, E; Erikson, U; Skavhaug, A
2008-06-01
In this study, we present a promising method for computer vision-based quality grading of whole Atlantic salmon (Salmo salar). Using computer vision, it was possible to differentiate among quality grades of Atlantic salmon based on the external geometrical information contained in the fish images. Initially, before image acquisition, the fish were subjectively graded and labeled into grading classes by a qualified human inspector in the processing plant. Prior to classification, the salmon images were segmented into binary images, and then feature extraction was performed on the geometrical parameters of the fish from the grading classes. The classification algorithm was a threshold-based classifier designed using linear discriminant analysis. The performance of the classifier was tested using the leave-one-out cross-validation method, and the classification results showed good agreement between the classification done by the human inspectors and by computer vision. The computer vision-based method correctly classified 90% of the salmon from the data set, as compared with the classification by the human inspector. Overall, it was shown that computer vision can be used as a powerful tool to grade Atlantic salmon into quality grades in a fast and nondestructive manner with a relatively simple classifier algorithm. The low cost of implementing today's advanced computer vision solutions makes this method feasible for industrial purposes in fish plants, as it can replace the manual labor on which grading tasks still rely.
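The two-class threshold classifier built with linear discriminant analysis and tested by leave-one-out cross-validation, as described above, can be sketched generically (the geometric features, regularization constant, and two-class restriction are illustrative assumptions):

```python
import numpy as np

def fit_lda(X0, X1):
    """Two-class Fisher discriminant: w = Sw^{-1} (m1 - m0), with the
    decision threshold at the projected midpoint of the class means."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0.T) * (len(X0) - 1) + np.cov(X1.T) * (len(X1) - 1)
    Sw = Sw + 1e-6 * np.eye(X0.shape[1])   # regularize for small samples
    w = np.linalg.solve(Sw, m1 - m0)
    t = w @ (m0 + m1) / 2
    return w, t

def loo_accuracy(X, y):
    """Leave-one-out cross-validation of the threshold classifier."""
    correct = 0
    for i in range(len(y)):
        m = np.arange(len(y)) != i
        w, t = fit_lda(X[m][y[m] == 0], X[m][y[m] == 1])
        correct += int((X[i] @ w > t) == (y[i] == 1))
    return correct / len(y)
```

Leave-one-out is a natural choice here because graded fish samples are expensive to label, and it uses all but one sample for training in every fold.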
A knowledge-based framework for image enhancement in aviation security.
Singh, Maneesha; Singh, Sameer; Partridge, Derek
2004-12-01
The main aim of this paper is to present a knowledge-based framework for automatically selecting the best image enhancement algorithm from several available algorithms on a per-image basis, in the context of X-ray images of airport luggage. The approach involves a system that learns to map image features representing viewability to one or more chosen enhancement algorithms. Viewability measures have been developed to provide an automatic check on the quality of the enhanced image, i.e., is it really enhanced? The choice is based on ground-truth information generated by human X-ray screening experts. For a new image, such a system predicts the best-suited enhancement algorithm. Our research details the various characteristics of the knowledge-based system and shows extensive results on real images.
NASA Technical Reports Server (NTRS)
Blackwell, R. J.
1982-01-01
Remote sensing data analysis for water quality monitoring is evaluated. Data analysis and image processing techniques are applied to LANDSAT remote sensing data to produce an effective operational tool for lake water quality surveying and monitoring. Digital image processing and analysis techniques were designed, developed, tested, and applied to LANDSAT multispectral scanner (MSS) data and conventional surface-acquired data. Utilization of these techniques facilitates the surveying and monitoring of large numbers of lakes in an operational manner. Supervised multispectral classification, when used in conjunction with surface-acquired water quality indicators, is used to characterize water body trophic status. Unsupervised multispectral classification, when interpreted by lake scientists familiar with a specific water body, yields classifications of equal validity with supervised methods and in a more cost-effective manner. Image data base technology is used to great advantage in characterizing other contributing effects on water quality. These effects include drainage basin configuration, terrain slope, soil, precipitation, and land cover characteristics.
Pickhardt, Perry J; Lubner, Meghan G; Kim, David H; Tang, Jie; Ruma, Julie A; del Rio, Alejandro Muñoz; Chen, Guang-Hong
2012-12-01
The purpose of this study was to report preliminary results of an ongoing prospective trial of ultralow-dose abdominal MDCT. Imaging with standard-dose contrast-enhanced (n = 21) and unenhanced (n = 24) clinical abdominal MDCT protocols was immediately followed by ultralow-dose imaging of a matched series of 45 consecutively registered adults (mean age, 57.9 years; mean body mass index, 28.5). The ultralow-dose images were reconstructed with filtered back projection (FBP), adaptive statistical iterative reconstruction (ASIR), and model-based iterative reconstruction (MBIR). Standard-dose series were reconstructed with FBP (reference standard). Image noise was measured at multiple predefined sites. Two blinded abdominal radiologists interpreted randomly presented ultralow-dose images for multilevel subjective image quality (5-point scale) and depiction of organ-based focal lesions. Mean dose reduction relative to the standard series was 74% (median, 78%; range, 57-88%; mean effective dose, 1.90 mSv). Mean multiorgan image noise for low-dose MBIR was 14.7 ± 2.6 HU, significantly lower than standard-dose FBP (28.9 ± 9.9 HU), low-dose FBP (59.2 ± 23.3 HU), and ASIR (45.6 ± 14.1 HU) (p < 0.001). The mean subjective image quality score for low-dose MBIR (3.0 ± 0.5) was significantly higher than for low-dose FBP (1.6 ± 0.7) and ASIR (1.8 ± 0.7) (p < 0.001). Readers identified 213 focal noncalcific lesions with standard-dose FBP. Pooled lesion detection was higher for low-dose MBIR (79.3% [169/213]) compared with low-dose FBP (66.2% [141/213]) and ASIR (62.0% [132/213]) (p < 0.05). MBIR shows great potential for substantially reducing radiation doses at routine abdominal CT. Both FBP and ASIR are limited in this regard owing to reduced image quality and diagnostic capability. Further investigation is needed to determine the optimal dose level for MBIR that maintains adequate diagnostic performance. 
In general, objective and subjective image quality measurements do not necessarily correlate with diagnostic performance at ultralow-dose CT.
Real-Time Internet Connections: Implications for Surgical Decision Making in Laparoscopy
Broderick, Timothy J.; Harnett, Brett M.; Doarn, Charles R.; Rodas, Edgar B.; Merrell, Ronald C.
2001-01-01
Objective To determine whether a low-bandwidth Internet connection can provide adequate image quality to support remote real-time surgical consultation. Summary Background Data Telemedicine has been used to support care at a distance through the use of expensive equipment and broadband communication links. In the past, the operating room has been an isolated environment that has been relatively inaccessible for real-time consultation. Recent technological advances have permitted videoconferencing over low-bandwidth, inexpensive Internet connections. If these connections are shown to provide adequate video quality for surgical applications, low-bandwidth telemedicine will open the operating room environment to remote real-time surgical consultation. Methods Surgeons performing a laparoscopic cholecystectomy in Ecuador or the Dominican Republic shared real-time laparoscopic images with a panel of surgeons at the parent university through a dial-up Internet account. The connection permitted video and audio teleconferencing to support real-time consultation as well as the transmission of real-time images and store-and-forward images for observation by the consultant panel. A total of six live consultations were analyzed. In addition, paired local and remote images were “grabbed” from the video feed during these laparoscopic cholecystectomies. Nine of these paired images were then placed into a Web-based tool designed to evaluate the effect of transmission on image quality. Results The authors showed for the first time the ability to identify critical anatomic structures in laparoscopy over a low-bandwidth connection via the Internet. The consultant panel of surgeons correctly remotely identified biliary and arterial anatomy during six laparoscopic cholecystectomies. Within the Web-based questionnaire, 15 surgeons could not blindly distinguish the quality of local and remote laparoscopic images. 
Conclusions Low-bandwidth, Internet-based telemedicine is inexpensive, effective, and almost ubiquitous. Use of these inexpensive, portable technologies will allow sharing of surgical procedures and decisions regardless of location. Internet telemedicine consistently supported real-time intraoperative consultation in laparoscopic surgery. The implications are broad with respect to quality improvement and diffusion of knowledge as well as for basic consultation. PMID:11505061
Multiple Image Arrangement for Subjective Quality Assessment
NASA Astrophysics Data System (ADS)
Wang, Yan; Zhai, Guangtao
2017-12-01
Subjective quality assessment serves as the foundation for almost all research related to visual quality. The size of image quality databases has expanded from dozens to thousands of images over recent decades. Since each subjective rating therein has to be averaged over quite a few participants, the ever-increasing size of these databases calls for an evolution of existing subjective test methods. Traditional single- and double-stimulus approaches are being replaced by multiple-image tests, in which several distorted versions of the original image are displayed and rated at once. This naturally raises the question of how to arrange those multiple images on screen during the test. In this paper, we address this question by performing subjective viewing tests with an eye tracker for different types of arrangements. Our research indicates that an isometric arrangement imposes less strain on participants and produces a more uniform distribution of eye fixations and movements, and is therefore expected to generate more reliable subjective ratings.
NASA Astrophysics Data System (ADS)
Flores, Jorge L.; García-Torales, G.; Ponce Ávila, Cristina
2006-08-01
This paper describes an in situ image recognition system designed to inspect the quality standards of chocolate pops during their production. The essence of the recognition system is the localization of events (i.e., defects) in the input images that affect the quality standards of the pops. To this end, processing modules based on correlation filtering and image segmentation are employed to measure the quality standards. We designed the correlation filter and defined a set of features from the correlation plane. The desired values of these parameters are obtained by exploiting information about the objects to be rejected, in order to find the optimal discrimination capability of the system. Using this set of features, a pop can be correctly classified. The efficacy of the system has been tested thoroughly under laboratory conditions using at least 50 images containing three different types of possible defects.
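The correlation-plane matching described in this abstract can be illustrated with a minimal normalized cross-correlation sketch. This is not the authors' filter design; the toy image, template, and "defect" location are invented for illustration:

```python
import numpy as np

def normalized_cross_correlation(image, template):
    """Correlate a zero-mean template with every same-size window of the image.

    Returns a correlation plane whose peak marks the best template match.
    Real inspection systems typically use FFT-based correlation and more
    elaborate filter designs; this brute-force version shows the idea.
    """
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    out = np.full((image.shape[0] - th + 1, image.shape[1] - tw + 1), -1.0)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            w = image[i:i + th, j:j + tw]
            w = w - w.mean()
            w_norm = np.sqrt((w ** 2).sum())
            if w_norm > 0 and t_norm > 0:
                out[i, j] = (w * t).sum() / (w_norm * t_norm)
    return out

# Toy example: locate a small bright "defect" patch in a dark image.
img = np.zeros((8, 8))
img[3:5, 5:7] = [[0.9, 1.0], [1.0, 1.0]]
tmpl = np.array([[0.9, 1.0], [1.0, 1.0]])
plane = normalized_cross_correlation(img, tmpl)
peak = np.unravel_index(np.argmax(plane), plane.shape)
print(peak)  # (3, 5): the defect location
```

Features extracted from this correlation plane (peak height, peak sharpness, sidelobe levels) are the kind of quantities an inspection system can threshold to accept or reject a part.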
Boissin, Constance; Blom, Lisa; Wallis, Lee; Laflamme, Lucie
2017-02-01
Mobile health has promising potential in improving healthcare delivery by facilitating access to expert advice. Enabling experts to review images on their smartphone or tablet may save valuable time. This study aims at assessing whether images viewed by medical specialists on handheld devices such as smartphones and tablets are perceived to be of comparable quality as when viewed on a computer screen. This was a prospective study comparing the perceived quality of 18 images on three different display devices (smartphone, tablet and computer) by 27 participants (4 burn surgeons and 23 emergency medicine specialists). The images, presented in random order, covered clinical (dermatological conditions, burns, ECGs and X-rays) and non-clinical subjects and their perceived quality was assessed using a 7-point Likert scale. Differences in devices' quality ratings were analysed using linear regression models for clustered data adjusting for image type and participants' characteristics (age, gender and medical specialty). Overall, the images were rated good or very good in most instances and more so for the smartphone (83.1%, mean score 5.7) and tablet (78.2%, mean 5.5) than for a standard computer (70.6%, mean 5.2). Both handheld devices had significantly higher ratings than the computer screen, even after controlling for image type and participants' characteristics. Nearly all experts expressed that they would be comfortable using smartphones (n=25) or tablets (n=26) for image-based teleconsultation. This study suggests that handheld devices could be a substitute for computer screens for teleconsultation by physicians working in emergency settings. Published by the BMJ Publishing Group Limited.
Parallax barrier engineering for image quality improvement in an autostereoscopic 3D display.
Kim, Sung-Kyu; Yoon, Ki-Hyuk; Yoon, Seon Kyu; Ju, Heongkyu
2015-05-18
We present an image quality improvement in a parallax barrier (PB)-based multiview autostereoscopic 3D display system under real-time tracking of the positions of a viewer's eyes. The system exploits a parallax barrier engineered to offer significantly improved quality of three-dimensional images for a moving viewer without eyewear under dynamic eye tracking. The improvements include enhanced uniformity of image brightness, reduced point crosstalk, and no pseudoscopic effects. We control the ratio between two parameters, the pixel size and the aperture of a parallax barrier slit, to improve the uniformity of image brightness at a viewing zone. The eye tracking, which monitors the positions of a viewer's eyes, enables the pixel data control software to turn on only the pixels for view images near the viewer's eyes (the other pixels are turned off), thus reducing point crosstalk. The software, combined with eye tracking, provides the correct images for the respective eyes and therefore produces no pseudoscopic effects at zone boundaries. The viewing zone can span an area larger than the central viewing zone offered by a conventional PB-based multiview autostereoscopic 3D display without eye tracking. Our 3D display system also provides multiple views for motion parallax under eye tracking. More importantly, we demonstrate a substantial reduction of point crosstalk at the viewing zone, to a level comparable to that of a commercialized eyewear-assisted 3D display system. The multiview autostereoscopic 3D display presented here can greatly mitigate the point crosstalk problem, one of the critical factors that has made it difficult for previous multiview autostereoscopic 3D display technologies to replace their eyewear-assisted counterparts.
Image quality assessment for teledermatology: from consumer devices to a dedicated medical device
NASA Astrophysics Data System (ADS)
Amouroux, Marine; Le Cunff, Sébastien; Haudrechy, Alexandre; Blondel, Walter
2017-03-01
An aging population as well as the growing incidence of type 2 diabetes induce a growing incidence of chronic skin disorders. In the meantime, a chronic shortage of dermatologists leaves some areas underserved. Remote triage and assistance to homecare nurses (known as "teledermatology") appear to be promising solutions to provide dermatological evaluation in a reasonable time to patients wherever they live. Nowadays, teledermatology is often based on consumer devices (digital tablets, smartphones, webcams) whose photobiological and electrical safety levels do not match those of medical devices. The American Telemedicine Association (ATA) has published recommendations on quality standards for teledermatology. This "quick guide" does not address the issue of image quality, which is critical in domestic environments where lighting is rarely reproducible. Standardized approaches to image quality would allow clinical trial comparison, calibration, manufacturing quality control and quality assurance during clinical use. Therefore, we defined several critical metrics using calibration charts (color and resolution charts) to assess image quality attributes such as resolution, lighting uniformity, color repeatability and discrimination of key color pairs. Using these metrics, we compared the quality of images produced by several medical devices (handheld and video-dermoscopes) as well as by consumer devices (digital tablets and cameras) widely used in dermatology practice. Since diagnostic accuracy may be impaired by low-quality images, this study highlights that, from an optical point of view, teledermatology should only be performed using medical devices. Furthermore, a dedicated medical device should probably be developed for the follow-up over time of skin lesions often managed in teledermatology, such as chronic wounds, which require i) noncontact imaging of ii) large areas of skin surface, two criteria that cannot be met using dermoscopes.
Parts-based stereoscopic image assessment by learning binocular manifold color visual properties
NASA Astrophysics Data System (ADS)
Xu, Haiyong; Yu, Mei; Luo, Ting; Zhang, Yun; Jiang, Gangyi
2016-11-01
Existing stereoscopic image quality assessment (SIQA) methods are mostly based on luminance information and do not sufficiently consider color information. Color is, in fact, one of the important factors that affect human visual perception, and nonnegative matrix factorization (NMF) and manifold learning are in line with human visual perception. We propose an SIQA method based on learning binocular manifold color visual properties. To be more specific, in the training phase, a feature detector is created based on NMF with manifold regularization by considering color information, which not only allows a parts-based manifold representation of an image, but also manifests localized color visual properties. In the quality estimation phase, visually important regions are selected by considering different degrees of human visual attention, and feature vectors are extracted using the feature detector. The feature similarity index is then calculated, and the parts-based manifold color feature energy (PMCFE) for each view is defined from the color feature vectors. The final quality score is obtained by a binocular combination based on PMCFE. Experimental results on the LIVE I and LIVE II 3-D IQA databases demonstrate that the proposed method can achieve much higher consistency with subjective evaluations than state-of-the-art SIQA methods.
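The parts-based factorization at the core of this method can be sketched with plain NMF via multiplicative updates. The paper's manifold-regularization and color-specific terms are omitted, and the matrix below is toy data:

```python
import numpy as np

def nmf(V, rank, iters=500, seed=0):
    """Plain NMF by Lee-Seung multiplicative updates: V ≈ W @ H, all non-negative.

    A minimal sketch of the parts-based representation idea; the published
    method adds a manifold-regularization term to these updates.
    """
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank)) + 1e-3
    H = rng.random((rank, n)) + 1e-3
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-12)  # update coefficients
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)  # update basis ("parts")
    return W, H

# Toy non-negative data with two underlying "parts" (exactly rank 2).
V = np.array([[1.0, 0.0, 1.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
W, H = nmf(V, rank=2)
err = np.linalg.norm(V - W @ H)
print(err < 0.3)  # the rank-2 matrix is fit closely
```

In the paper's setting, the columns of `W` play the role of learned localized (color) visual parts, and the coefficients in `H` are the features extracted per region.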
Sparsity-based multi-height phase recovery in holographic microscopy
NASA Astrophysics Data System (ADS)
Rivenson, Yair; Wu, Yichen; Wang, Hongda; Zhang, Yibo; Feizi, Alborz; Ozcan, Aydogan
2016-11-01
High-resolution imaging of densely connected samples such as pathology slides using digital in-line holographic microscopy requires the acquisition of several holograms, e.g., at >6-8 different sample-to-sensor distances, to achieve robust phase recovery and coherent imaging of specimens. Reducing the number of these holographic measurements would normally result in reconstruction artifacts and loss of image quality, which would be detrimental especially for biomedical and diagnostics-related applications. Inspired by the fact that most natural images are sparse in some domain, here we introduce a sparsity-based phase reconstruction technique implemented in the wavelet domain to achieve at least a 2-fold reduction in the number of holographic measurements for coherent imaging of densely connected samples with minimal impact on the reconstructed image quality, quantified using the structural similarity index. We demonstrated the success of this approach by imaging Papanicolaou smears and breast cancer tissue slides over a large field of view of ~20 mm² using two in-line holograms acquired at different sample-to-sensor distances and processed using sparsity-based multi-height phase recovery. This phase recovery approach that makes use of sparsity can also be extended to other coherent imaging schemes, involving, e.g., multiple illumination angles or wavelengths, to increase the throughput and speed of coherent imaging.
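The structural similarity index used here as the quality metric can be computed in its simplest single-window form as follows. Library implementations (e.g. `skimage.metrics.structural_similarity`) use local sliding windows; this global version only illustrates the formula:

```python
import numpy as np

def global_ssim(x, y, data_range=1.0):
    """Single-window SSIM over the whole image (no sliding window).

    SSIM = ((2*mu_x*mu_y + c1)(2*cov_xy + c2)) /
           ((mu_x^2 + mu_y^2 + c1)(var_x + var_y + c2))
    with the standard constants c1 = (0.01*L)^2, c2 = (0.03*L)^2.
    """
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(1)
ref = rng.random((32, 32))
same = global_ssim(ref, ref)  # identical images score 1.0
noisy = global_ssim(ref, np.clip(ref + 0.2 * rng.standard_normal(ref.shape), 0, 1))
print(round(same, 3), noisy < same)
```

A reconstruction from fewer holograms that keeps SSIM close to the full-measurement reconstruction is the success criterion the abstract describes.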
Bannas, Peter; Li, Yinsheng; Motosugi, Utaroh; Li, Ke; Lubner, Meghan; Chen, Guang-Hong; Pickhardt, Perry J
2016-07-01
To assess the effect of the prior-image-constrained compressed-sensing-based metal-artefact-reduction (PICCS-MAR) algorithm on streak-artefact reduction and 2D and 3D image quality improvement in patients with total hip arthroplasty (THA) undergoing CT colonography (CTC). PICCS-MAR was applied to filtered-back-projection (FBP)-reconstructed DICOM CTC images in 52 patients with THA (unilateral, n = 30; bilateral, n = 22). For the FBP and PICCS-MAR series, ROI measurements of CT numbers were obtained at predefined levels for fat, muscle, air, and the most severe artefact. Two radiologists independently reviewed 2D and 3D CTC images and graded artefacts and image quality using a five-point scale (1 = severe streak/no diagnostic confidence, 5 = no streak/excellent image quality, high confidence). Results were compared using paired and unpaired t-tests and Wilcoxon signed-rank and Mann-Whitney tests. Streak artefacts and image quality scores for FBP versus PICCS-MAR 2D images (median: 1 vs. 3 and 2 vs. 3, respectively) and 3D images (median: 2 vs. 4 and 3 vs. 4, respectively) showed significant improvement after PICCS-MAR (all P < 0.001). PICCS-MAR significantly improved the accuracy of mean CT numbers for fat, muscle and the area with the most severe artefact (all P < 0.001). PICCS-MAR substantially reduces streak artefacts related to THA on DICOM images, thereby enhancing visualization of anatomy on 2D and 3D CTC images and increasing diagnostic confidence. • PICCS-MAR significantly reduces streak artefacts associated with total hip arthroplasty on 2D and 3D CTC. • PICCS-MAR significantly improves 2D and 3D CTC image quality and diagnostic confidence. • PICCS-MAR can be applied retrospectively to DICOM images from single-kVp CT.
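The paired Wilcoxon signed-rank comparison used in this study can be reproduced on made-up data. The scores below are invented and only mirror the shape of the analysis (paired five-point ratings per case, two reconstructions):

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical paired five-point quality scores for the same 12 cases,
# reconstructed with FBP vs. an artefact-reduction algorithm (toy data).
fbp  = np.array([1, 2, 1, 2, 1, 3, 2, 1, 2, 2, 1, 2])
marr = np.array([3, 4, 3, 3, 3, 4, 4, 3, 3, 4, 3, 3])

# Paired, non-parametric test appropriate for ordinal Likert-type scores.
stat, p = wilcoxon(fbp, marr)
print(p < 0.05)  # the improvement is significant for this toy sample
```

A paired test is the right choice here because both scores come from the same patient image; the Mann-Whitney test in the abstract handles the unpaired comparisons.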
High-quality infrared imaging with graphene photodetectors at room temperature.
Guo, Nan; Hu, Weida; Jiang, Tao; Gong, Fan; Luo, Wenjin; Qiu, Weicheng; Wang, Peng; Liu, Lu; Wu, Shiwei; Liao, Lei; Chen, Xiaoshuang; Lu, Wei
2016-09-21
Graphene, a two-dimensional material, is expected to enable broad-spectrum and high-speed photodetection because of its gapless band structure, ultrafast carrier dynamics and high mobility. We demonstrate multispectral active infrared imaging using a graphene photodetector based on hybrid response mechanisms at room temperature. High-quality images with optical resolutions of 418 nm, 657 nm and 877 nm and close-to-theoretical-limit Michelson contrasts of 0.997, 0.994, and 0.996 were acquired in imaging measurements with 565 nm, 1550 nm, and 1815 nm light, respectively, using an unbiased graphene photodetector. Importantly, by carefully analyzing the results of Raman mapping and numerical simulations of the response process, the formation of hybrid photocurrents in graphene detectors is attributed to the synergistic action of photovoltaic and photo-thermoelectric effects. This initial application to infrared imaging will help promote the development of high-performance graphene-based infrared multispectral detectors.
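The Michelson contrast figure of merit quoted above is simple to compute; values close to 1 mean near-perfect contrast. A small sketch with an invented bar-pattern image:

```python
import numpy as np

def michelson_contrast(image):
    """Michelson contrast C = (Imax - Imin) / (Imax + Imin)."""
    imax, imin = float(image.max()), float(image.min())
    return (imax - imin) / (imax + imin)

# Toy line-pair image: bright bars of 1.0 on a faint background of 0.002.
bars = np.full((4, 8), 0.002)
bars[:, ::2] = 1.0
print(round(michelson_contrast(bars), 3))  # 0.996
```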
Automated Construction of Coverage Catalogues of Aster Satellite Image for Urban Areas of the World
NASA Astrophysics Data System (ADS)
Miyazaki, H.; Iwao, K.; Shibasaki, R.
2012-07-01
We developed an algorithm to determine a combination of satellite images according to observation extent and image quality. The algorithm tests whether each image is necessary for completing coverage of the search extent; the tests exclude unnecessary images of low quality and preserve necessary images of good quality. The search conditions for the satellite images can be extended, meaning that the catalogue can be constructed for the specific periods required for time-series analysis. We applied the method to a database of metadata of ASTER satellite images archived in GEO Grid of the National Institute of Advanced Industrial Science and Technology (AIST), Japan. As an index of populated places with geographical coordinates, we used a database of 3372 populated places with populations of more than 0.1 million, retrieved from GRUMP Settlement Points, a global gazetteer of cities that associates the geographical names of populated places with geographical coordinates and population data. From the coordinates of the populated places, 3372 extents were generated with radii of 30 km, half the swath of ASTER satellite images. By merging extents overlapping each other, they were assembled into 2214 extents. As a result, we acquired combinations of good quality for 1244 extents, combinations of low quality for 96 extents, and incomplete combinations for 611 extents. Further improvements would be expected by introducing pixel-based cloud assessment and pixel-value correction over seasonal variations.
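The merging of overlapping place-centred extents can be sketched as a union-find grouping of circles whose centres lie within twice the radius of each other. This is a planar simplification with invented coordinates; real geographic extents would need great-circle distances:

```python
def merge_extents(centers, radius):
    """Group circular extents that overlap (centre distance < 2 * radius)."""
    parent = list(range(len(centers)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    for i in range(len(centers)):
        for j in range(i + 1, len(centers)):
            dx = centers[i][0] - centers[j][0]
            dy = centers[i][1] - centers[j][1]
            if dx * dx + dy * dy < (2 * radius) ** 2:
                union(i, j)

    groups = {}
    for i in range(len(centers)):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

# Three places (km coordinates): the first two within 60 km, the third isolated.
extents = merge_extents([(0, 0), (50, 0), (200, 200)], radius=30)
print(sorted(len(g) for g in extents))  # [1, 2]
```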
NASA Astrophysics Data System (ADS)
Zhang, Leihong; Pan, Zilan; Liang, Dong; Ma, Xiuhua; Zhang, Dawei
2015-12-01
An optical encryption method based on compressive ghost imaging (CGI) with double random-phase encoding (DRPE), named DRPE-CGI, is proposed. The information is first encrypted by the sender with DRPE, and the DRPE-coded image is then encrypted by a computational ghost-imaging system with a secret key. The key of N random-phase vectors is generated by the sender and shared with the receiver, who is the authorized user. The receiver decrypts the DRPE-coded image with the key, with the aid of CGI and a compressive sensing technique, and then reconstructs the original information by DRPE decoding. The experiments suggest that cryptanalysts cannot obtain any useful information about the original image even if they eavesdrop on 60% of the key at a given time, so the security of DRPE-CGI is higher than that of conventional ghost imaging. Furthermore, this method can reduce the information quantity by 40% compared with ghost imaging while the quality of the reconstructed information remains the same. It can also improve the quality of the reconstructed plaintext information compared with DRPE-GI for the same number of samplings. This technique can be immediately applied to encryption and data storage with the advantages of high security, fast transmission, and high quality of reconstructed information.
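The DRPE stage of this scheme is the textbook double random-phase encoding: a random phase mask in the spatial domain, a Fourier transform, a second random phase mask, and an inverse transform. The compressive ghost-imaging stage is omitted in this sketch:

```python
import numpy as np

def drpe_encrypt(img, phase1, phase2):
    """Double random-phase encoding of a real-valued image.

    C = IFFT( FFT(I * exp(i*phase1)) * exp(i*phase2) )
    """
    field = img * np.exp(1j * phase1)
    spectrum = np.fft.fft2(field) * np.exp(1j * phase2)
    return np.fft.ifft2(spectrum)

def drpe_decrypt(cipher, phase1, phase2):
    """Invert both phase masks with the shared keys."""
    spectrum = np.fft.fft2(cipher) * np.exp(-1j * phase2)
    return np.fft.ifft2(spectrum) * np.exp(-1j * phase1)

rng = np.random.default_rng(0)
img = rng.random((16, 16))
p1 = 2 * np.pi * rng.random(img.shape)  # key mask 1
p2 = 2 * np.pi * rng.random(img.shape)  # key mask 2
cipher = drpe_encrypt(img, p1, p2)
recovered = drpe_decrypt(cipher, p1, p2).real
print(np.allclose(recovered, img))  # exact recovery with the correct keys
```

Without both phase keys the ciphertext is statistically white, which is what makes the subsequent ghost-imaging encryption layer in the paper compose cleanly with DRPE.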
NASA Astrophysics Data System (ADS)
Dang, H.; Wang, A. S.; Sussman, Marc S.; Siewerdsen, J. H.; Stayman, J. W.
2014-09-01
Sequential imaging studies are conducted in many clinical scenarios. Prior images from previous studies contain a great deal of patient-specific anatomical information and can be used in conjunction with subsequent imaging acquisitions to maintain image quality while enabling radiation dose reduction (e.g., through sparse angular sampling, reduction in fluence, etc). However, patient motion between images in such sequences results in misregistration between the prior image and current anatomy. Existing prior-image-based approaches often include only a simple rigid registration step that can be insufficient for capturing complex anatomical motion, introducing detrimental effects in subsequent image reconstruction. In this work, we propose a joint framework that estimates the 3D deformation between an unregistered prior image and the current anatomy (based on a subsequent data acquisition) and reconstructs the current anatomical image using a model-based reconstruction approach that includes regularization based on the deformed prior image. This framework is referred to as deformable prior image registration, penalized-likelihood estimation (dPIRPLE). Central to this framework is the inclusion of a 3D B-spline-based free-form-deformation model into the joint registration-reconstruction objective function. The proposed framework is solved using a maximization strategy whereby alternating updates to the registration parameters and image estimates are applied allowing for improvements in both the registration and reconstruction throughout the optimization process. Cadaver experiments were conducted on a cone-beam CT testbench emulating a lung nodule surveillance scenario. 
Superior reconstruction accuracy and image quality were demonstrated using the dPIRPLE algorithm as compared to more traditional reconstruction methods including filtered backprojection, penalized-likelihood estimation (PLE), prior image penalized-likelihood estimation (PIPLE) without registration, and prior image penalized-likelihood estimation with rigid registration of a prior image (PIRPLE) over a wide range of sampling sparsity and exposure levels.
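The alternating registration/reconstruction structure of dPIRPLE can be conveyed with a deliberately simplified 1-D analogue: alternate between (a) estimating the shift that registers the prior to the current image estimate and (b) re-estimating the image with a penalty toward the registered prior. Everything below (the quadratic objective, integer shifts, the data) is illustrative; the paper uses 3-D B-spline deformations and a penalized-likelihood CT forward model:

```python
import numpy as np

rng = np.random.default_rng(0)
truth = np.zeros(32)
truth[10:14] = 1.0                        # current anatomy
prior = np.roll(truth, 5)                 # misregistered prior image
y = truth + 0.05 * rng.standard_normal(32)  # noisy "measurement"
lam = 0.5                                 # prior-penalty strength
x = y.copy()                              # initial image estimate

for _ in range(10):
    # (a) registration step: best integer shift of the prior onto x
    r = min(range(-8, 9), key=lambda s: np.sum((x - np.roll(prior, s)) ** 2))
    # (b) reconstruction step: closed-form minimizer of
    #     ||x - y||^2 + lam * ||x - roll(prior, r)||^2
    x = (y + lam * np.roll(prior, r)) / (1 + lam)

print(r)  # -5: the estimated shift undoes the prior's misregistration
```

As in dPIRPLE, each step improves the other's input: a better-registered prior regularizes the reconstruction, and a better reconstruction sharpens the registration.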
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wells, J; Zhang, L; Samei, E
Purpose: To develop and validate more robust methods for automated lung, spine, and hardware detection in AP/PA chest images. This work is part of a continuing effort to automatically characterize the perceptual image quality of clinical radiographs. [Y. Lin et al. Med. Phys. 39, 7019–7031 (2012)] Methods: Our previous implementation of lung/spine identification was applicable to only one vendor. A more generalized routine was devised based on three primary components: lung boundary detection, fuzzy c-means (FCM) clustering, and a clinically-derived lung pixel probability map. Boundary detection was used to constrain the lung segmentations. FCM clustering produced grayscale- and neighborhood-based pixel classification probabilities, which were weighted by the clinically-derived probability maps to generate a final lung segmentation. Lung centerlines were set along the left-right lung midpoints. Spine centerlines were estimated as a weighted average of body-contour, lateral-lung-contour, and intensity-based centerline estimates. Centerline estimation was tested on 900 clinical AP/PA chest radiographs which included inpatient/outpatient, upright/bedside, men/women, and adult/pediatric images from multiple imaging systems. Our previous implementation also did not account for the presence of medical hardware (pacemakers, wires, implants, staples, stents, etc.), potentially biasing image quality analysis. A hardware detection algorithm was developed using a gradient-based thresholding method. The training and testing paradigm used a set of 48 images from which 1920 51×51-pixel ROIs with hardware and 1920 ROIs without hardware were manually selected. Results: Acceptable lung centerlines were generated in 98.7% of radiographs, while spine centerlines were acceptable in 99.1% of radiographs. Following threshold optimization, the hardware detection software yielded average true-positive and true-negative rates of 92.7% and 96.9%, respectively.
Conclusion: Updated segmentation and centerline estimation methods, in addition to new gradient-based hardware detection software, provide improved data integrity control and error-checking for automated clinical chest image quality characterization across multiple radiography systems.
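The gradient-based thresholding idea behind the hardware detector can be sketched as follows. The feature (fraction of high-gradient pixels), the threshold value, and the simulated ROIs are all invented; the abstract does not give the published method's exact features or thresholds:

```python
import numpy as np

def hardware_score(roi):
    """Fraction of ROI pixels whose gradient magnitude exceeds a threshold.

    Metallic hardware (wires, staples, pacemakers) produces sharp,
    high-contrast edges, so this fraction is larger in ROIs with hardware.
    The 0.25 threshold is illustrative only.
    """
    gy, gx = np.gradient(roi.astype(float))
    mag = np.hypot(gx, gy)
    return float((mag > 0.25).mean())

rng = np.random.default_rng(0)
soft_tissue = 0.5 + 0.02 * rng.standard_normal((51, 51))  # smooth anatomy
with_wire = soft_tissue.copy()
with_wire[25, :] = 1.0                                     # bright "wire" row
print(hardware_score(soft_tissue) < hardware_score(with_wire))  # True
```

A detector like this would be tuned on labeled ROIs (as in the 1920 with / 1920 without training set) by sweeping the threshold to trade off true-positive against true-negative rates.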
2011-05-01
for the research in the next year. The aims in the next year include further development of the prior image-based, narrowly collimated CBCT imaging...further investigation planned for the next year. BODY 1 Research Accomplishments 1.1 Implement narrow beam collimation for CBCT ROI imaging I have...noise level to mimic different mAs used in clinical and research modes of the CBCT system. Based upon experiences with the numerical phantom, I designed
Investigations of image fusion
NASA Astrophysics Data System (ADS)
Zhang, Zhong
1999-12-01
The objective of image fusion is to combine information from multiple images of the same scene. The result of image fusion is a single image that is more suitable for human visual perception or further image processing tasks. In this thesis, a region-based fusion algorithm using the wavelet transform is proposed. The identification of important features in each image, such as edges and regions of interest, is used to guide the fusion process. The idea of multiscale grouping is also introduced, and a generic image fusion framework based on multiscale decomposition is studied. The framework includes all of the existing multiscale-decomposition-based fusion approaches we found in the literature that did not assume a statistical model for the source images. Comparisons indicate that our framework includes some new approaches which outperform the existing approaches for the cases we consider. Registration must precede our fusion algorithms, so we propose a hybrid scheme which uses both feature-based and intensity-based methods. The idea of robust estimation of optical flow from time-varying images is employed with a coarse-to-fine multi-resolution approach and feature-based registration to overcome some of the limitations of intensity-based schemes. Experiments show that this approach is robust and efficient. Assessing image fusion performance in a real application is a complicated issue. In this dissertation, a mixture probability density function model is used in conjunction with the Expectation-Maximization algorithm to model histograms of edge intensity. Some new techniques are proposed for estimating the quality of a noisy image of a natural scene. Such quality measures can be used to guide the fusion. Finally, we study fusion of images obtained from several copies of a new type of camera developed for video surveillance.
Our techniques increase the capability and reliability of the surveillance system and provide an easy way to obtain 3-D information of objects in the space monitored by the system.
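A minimal instance of multiscale-decomposition-based fusion is the "choose-max" rule on wavelet detail coefficients: average the coarse approximation bands, keep the larger-magnitude detail coefficient from either source. This sketch uses a one-level Haar transform and is a common baseline rule, not the region-based algorithm of the thesis:

```python
import numpy as np

def haar2d(x):
    """One level of a 2-D Haar transform: returns (LL, LH, HL, HH) bands."""
    a = (x[0::2] + x[1::2]) / 2   # row averages
    d = (x[0::2] - x[1::2]) / 2   # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2
    lh = (a[:, 0::2] - a[:, 1::2]) / 2
    hl = (d[:, 0::2] + d[:, 1::2]) / 2
    hh = (d[:, 0::2] - d[:, 1::2]) / 2
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Exact inverse of haar2d."""
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    x = np.empty((a.shape[0] * 2, a.shape[1]))
    x[0::2], x[1::2] = a + d, a - d
    return x

def fuse(img1, img2):
    """Choose-max fusion: average LL bands, keep larger detail coefficients."""
    c1, c2 = haar2d(img1), haar2d(img2)
    ll = (c1[0] + c2[0]) / 2
    details = [np.where(np.abs(a) >= np.abs(b), a, b)
               for a, b in zip(c1[1:], c2[1:])]
    return ihaar2d(ll, *details)

# Sanity check: fusing an image with itself must reproduce it exactly.
rng = np.random.default_rng(0)
img = rng.random((8, 8))
print(np.allclose(fuse(img, img), img))  # True
```

The region-based framework in the thesis generalizes this by deciding per region (edges, regions of interest) rather than per coefficient which source to trust.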
Tight-frame based iterative image reconstruction for spectral breast CT
Zhao, Bo; Gao, Hao; Ding, Huanjun; Molloi, Sabee
2013-01-01
Purpose: To investigate a tight-frame based iterative reconstruction (TFIR) technique for spectral breast computed tomography (CT) that uses fewer projections while achieving greater image quality. Methods: The experimental data were acquired with a fan-beam breast CT system based on a cadmium zinc telluride photon-counting detector. The images were reconstructed with a varying number of projections using the TFIR and filtered backprojection (FBP) techniques, and the image quality of the two techniques was evaluated. Spatial resolution was evaluated using a high-resolution phantom, and the contrast-to-noise ratio (CNR) was evaluated using a postmortem breast sample. The postmortem breast samples were decomposed into water, lipid, and protein contents based on images reconstructed from TFIR with 204 projections and FBP with 614 projections. The volumetric fractions of water, lipid, and protein from the image-based measurements in both TFIR and FBP were compared to chemical analysis. Results: The spatial resolution and CNR were comparable for the images reconstructed by TFIR with 204 projections and FBP with 614 projections. Both reconstruction techniques provided accurate quantification of the water, lipid, and protein composition of the breast tissue when compared with data from the reference-standard chemical analysis. Conclusions: Accurate breast tissue decomposition can be achieved with threefold fewer projection images by the TFIR technique without any reduction in image spatial resolution or CNR. This can result in a two-thirds reduction of the patient dose in a multislit, multislice spiral CT system, in addition to reduced scanning time. PMID:23464320
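The CNR comparison in this study uses a standard figure of merit. One common definition (mean signal difference over background noise) can be computed as follows; the paper does not spell out its exact formula, so treat this variant and the toy ROI values as illustrative:

```python
import numpy as np

def cnr(signal_roi, background_roi):
    """Contrast-to-noise ratio: |mean difference| over background noise."""
    return abs(signal_roi.mean() - background_roi.mean()) / background_roi.std()

# Toy ROIs in arbitrary CT-number-like units.
rng = np.random.default_rng(2)
background = 100 + 5 * rng.standard_normal(1000)  # background tissue ROI
lesion = 120 + 5 * rng.standard_normal(1000)      # contrast object ROI
print(cnr(lesion, background) > 3)  # well above the Rose criterion
```

Matching this metric (and the resolution-phantom MTF) between TFIR at 204 projections and FBP at 614 projections is what justifies the threefold projection reduction claimed above.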
Anastasi, Giuseppe; Cutroneo, Giuseppina; Bruschetta, Daniele; Trimarchi, Fabio; Ielitro, Giuseppe; Cammaroto, Simona; Duca, Antonio; Bramanti, Placido; Favaloro, Angelo; Vaccarino, Gianluigi; Milardi, Demetrio
2009-11-01
We have applied high-quality medical imaging techniques to study the structure of the human ankle. Direct volume rendering, using specific algorithms, transforms conventional two-dimensional (2D) magnetic resonance image (MRI) series into 3D volume datasets. This tool allows high-definition visualization of single or multiple structures for diagnostic, research, and teaching purposes. No other image reformatting technique so accurately highlights each anatomic relationship and preserves soft tissue definition. Here, we used this method to study the structure of the human ankle to analyze tendon-bone-muscle relationships. We compared ankle MRI and computerized tomography (CT) images from 17 healthy volunteers, aged 18-30 years (mean 23 years). An additional subject had a partial rupture of the Achilles tendon. The MRI images demonstrated superiority in overall quality of detail compared to the CT images. The MRI series accurately rendered soft tissue and bone in simultaneous image acquisition, whereas CT required several window-reformatting algorithms, with loss of image data quality. We obtained high-quality digital images of the human ankle that were sufficiently accurate for surgical and clinical intervention planning, as well as for teaching human anatomy. Our approach demonstrates that complex anatomical structures such as the ankle, which is rich in articular facets and ligaments, can be easily studied non-invasively using MRI data. PMID:19678857
Color standardization and optimization in whole slide imaging.
Yagi, Yukako
2011-03-30
Standardization and validation of the color displayed by digital slides is an important aspect of digital pathology implementation. While the most common reason for color variation is variance in the protocols and practices of the histology lab, the color displayed can also be affected by variation in capture parameters (for example, illumination and filters), image processing, and display factors in the digital systems themselves. We have been developing techniques for color validation and optimization along two paths. The first was based on two standard slides that are scanned and displayed by the imaging system in question. In this approach, one slide is embedded with nine filters with colors selected especially for H&E stained slides (resembling a tiny Macbeth color chart); the specific colors of the nine filters were determined in our previous study and modified for whole slide imaging (WSI). The other slide is an H&E stained mouse embryo. Both of these slides were scanned and the displayed images were compared to a standard. The second approach was based on our previous multispectral imaging research. As a first step, the two-slide method above was used to identify inaccurate display of color and its cause, and to understand the importance of accurate color in digital pathology. We have also improved the multispectral-based algorithm for more consistent results in stain standardization. In the near future, the results of the two-slide and multispectral techniques can be combined and will be made widely available. We have been conducting a series of research and development projects to improve image quality and establish image quality standardization. This paper discusses one of the most important aspects of image quality: color.
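Comparing a scanned chart patch against its reference value, as in the two-slide approach, typically reduces to a color-difference metric. The simplest is CIE76 ΔE (Euclidean distance in CIELAB); the patch values and the "acceptable" threshold below are illustrative, and modern workflows often prefer the more perceptually uniform CIEDE2000:

```python
import numpy as np

def delta_e_cie76(lab1, lab2):
    """CIE76 color difference: Euclidean distance between two CIELAB triples."""
    return float(np.linalg.norm(np.asarray(lab1, float) - np.asarray(lab2, float)))

# Reference chart patch vs. the same patch as rendered by a scanner (toy values).
reference = (53.2, 80.1, 67.2)  # CIELAB of a red chart patch
rendered  = (55.0, 78.0, 65.0)
de = delta_e_cie76(reference, rendered)
print(de < 5.0)  # below a commonly used acceptability threshold of ~5 dE units
```

Running this check over all nine embedded filter colors gives a per-system color-accuracy report that can flag the capture, processing, or display stage responsible for a deviation.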
Memory preservation made prestigious but easy
NASA Astrophysics Data System (ADS)
Fageth, Reiner; Debus, Christina; Sandhaus, Philipp
2011-01-01
Preserving memories through story-telling, using either photo books for multiple images or high-quality products such as single images printed on canvas or mounted on acrylic to create wall decorations, is gradually becoming more popular than classical 4×6 prints and classical silver halide posters. Digital printing via electrophotography and ink jet is increasingly replacing classical silver halide technology as the dominant production technology for these kinds of products. Maintaining a consistent and comparable quality of output is becoming more challenging than with silver halide paper, for both prints and posters. This paper describes a unique approach that combines desktop-based software to initiate a compelling project with online capabilities to finalize and optimize that project in an online environment through a community process. A comparison of consumer behavior between online and desktop-based solutions for generating photo books will be presented.
Digital Light Processing update: status and future applications
NASA Astrophysics Data System (ADS)
Hornbeck, Larry J.
1999-05-01
Digital Light Processing (DLP) projection displays based on the Digital Micromirror Device (DMD) were introduced to the market in 1996. Less than 3 years later, DLP-based projectors are found in such diverse applications as mobile, conference room, video wall, home theater, and large-venue projection. They provide high-quality, seamless, all-digital images that have exceptional stability as well as freedom from both flicker and image lag. Marked improvements have been made in the image quality of DLP-based projection displays, including brightness, resolution, contrast ratio, and border image. DLP-based mobile projectors that weighed about 27 pounds in 1996 now weigh only about 7 pounds. This weight reduction has been responsible for the definition of an entirely new projector class, the ultraportable. New applications are being developed for this important new projection display technology; these include digital photofinishing for high process speed minilab and maxilab applications and DLP Cinema for the digital delivery of films to audiences around the world. This paper describes the status of DLP-based projection display technology, including its manufacturing, performance improvements, and new applications, with emphasis on DLP Cinema.
The remote sensing image segmentation mean shift algorithm parallel processing based on MapReduce
NASA Astrophysics Data System (ADS)
Chen, Xi; Zhou, Liqing
2015-12-01
With the development of satellite remote sensing technology and the rapid growth of remote sensing image data, traditional remote sensing image segmentation techniques cannot meet the processing and storage requirements of massive remote sensing images. This article applies cloud computing and parallel computing technology to the remote sensing image segmentation process and builds a cheap and efficient computer cluster that uses parallel processing to implement the mean shift segmentation algorithm on the MapReduce model. This approach not only ensures the quality of remote sensing image segmentation but also improves segmentation speed, better meeting real-time requirements. The MapReduce-based parallel mean shift segmentation algorithm therefore has both practical significance and realization value.
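The mean shift clustering at the core of the record above can be sketched compactly. The fragment below is a hedged, minimal 1-D illustration (not the paper's implementation): each per-tile call plays the role of a MapReduce "map" task and the final concatenation the "reduce" step; the tile contents and bandwidth are invented example values.

```python
# Minimal mean shift sketch: shift every point toward the mean of its
# neighbours within `bandwidth` until convergence (illustrative only).
def mean_shift(points, bandwidth=2.0, iters=100):
    modes = list(points)
    for _ in range(iters):
        new = []
        for x in modes:
            near = [p for p in points if abs(p - x) <= bandwidth]
            new.append(sum(near) / len(near))
        if all(abs(a - b) < 1e-6 for a, b in zip(new, modes)):
            break
        modes = new
    return modes

def segment(tiles, bandwidth=2.0):
    # "Map": mean shift per image tile; "Reduce": concatenate converged modes.
    return [round(m, 3) for tile in tiles for m in mean_shift(tile, bandwidth)]

modes = segment([[10, 11, 12, 50, 51], [49, 50, 90, 91, 92]])
```

In a real MapReduce job each tile would be handled by a separate mapper and the converged modes merged in the reducer; here the partitioning is only simulated in-process.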
Bueno, Juan M; Skorsetz, Martin; Palacios, Raquel; Gualda, Emilio J; Artal, Pablo
2014-01-01
Despite the inherent confocality and optical sectioning capabilities of multiphoton microscopy, three-dimensional (3-D) imaging of thick samples is limited by specimen-induced aberrations. The combination of immersion objectives and sensorless adaptive optics (AO) techniques has been suggested to overcome this difficulty. However, a complex plane-by-plane correction of aberrations is required, and its performance depends on a set of image-based merit functions. We propose here an alternative approach to increase penetration depth in 3-D multiphoton microscopy imaging. It is based on the manipulation of the spherical aberration (SA) of the incident beam with an AO device while performing fast tomographic multiphoton imaging. When inducing SA, the image quality at best focus is reduced; however, better quality images are obtained from deeper planes within the sample. This is a compromise that enables registration of improved 3-D multiphoton images using nonimmersion objectives. Examples of ocular tissues and nonbiological samples providing different types of nonlinear signal are presented. The implementation of this technique in a future clinical instrument might provide a better visualization of corneal structures in living eyes.
Optimization of oncological 18F-FDG PET/CT imaging based on a multiparameter analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Menezes, Vinicius O., E-mail: vinicius@radtec.com.br; Machado, Marcos A. D.; Queiroz, Cleiton C.
2016-02-15
Purpose: This paper describes a method to achieve consistent clinical image quality in 18F-FDG scans accounting for patient habitus, dose regimen, image acquisition, and processing techniques. Methods: Oncological PET/CT scan data for 58 subjects were evaluated retrospectively to derive analytical curves that predict image quality. Patient noise equivalent count rate and coefficient of variation (CV) were used as metrics in the analysis. Optimized acquisition protocols were identified and prospectively applied to 179 subjects. Results: The adoption of different schemes for three body mass ranges (<60 kg, 60–90 kg, >90 kg) allows improved image quality with both point spread function and ordered-subsets expectation maximization-3D reconstruction methods. The application of this methodology showed that CV improved significantly (p < 0.0001) in clinical practice. Conclusions: Consistent oncological PET/CT image quality on a high-performance scanner was achieved from an analysis of the relations existing between dose regimen, patient habitus, acquisition, and processing techniques. The proposed methodology may be used by PET/CT centers to develop protocols to standardize PET/CT imaging procedures and achieve better patient management and cost-effective operations.
A hybrid reconstruction algorithm for fast and accurate 4D cone-beam CT imaging.
Yan, Hao; Zhen, Xin; Folkerts, Michael; Li, Yongbao; Pan, Tinsu; Cervino, Laura; Jiang, Steve B; Jia, Xun
2014-07-01
4D cone beam CT (4D-CBCT) has been utilized in radiation therapy to provide 4D image guidance in the lung and upper abdomen area. However, clinical application of 4D-CBCT is currently limited due to the long scan time and low image quality. The purpose of this paper is to develop a new 4D-CBCT reconstruction method that restores volumetric images based on the 1-min scan data acquired with a standard 3D-CBCT protocol. The model optimizes a deformation vector field that deforms a patient-specific planning CT (p-CT), so that the calculated 4D-CBCT projections match measurements. A forward-backward splitting (FBS) method is developed to solve the optimization problem. It splits the original problem into two well-studied subproblems, i.e., image reconstruction and deformable image registration. By iteratively solving the two subproblems, FBS gradually yields correct deformation information, while maintaining high image quality. The whole workflow is implemented on a graphics processing unit to improve efficiency. Comprehensive evaluations have been conducted on a moving phantom and three real patient cases regarding the accuracy and quality of the reconstructed images, as well as the algorithm robustness and efficiency. The proposed algorithm reconstructs 4D-CBCT images from highly under-sampled projection data acquired with 1-min scans. Regarding anatomical structure location accuracy, an average difference of 0.204 mm and a maximum difference of 0.484 mm are found for the phantom case, and maximum differences of 0.3-0.5 mm are observed for patients 1-3. As for the image quality, intensity errors below 5 and 20 HU compared to the planning CT are achieved for the phantom and the patient cases, respectively. Signal-to-noise ratio values are improved by 12.74 and 5.12 times compared to results from the FDK algorithm using the 1-min data and 4-min data, respectively. The computation time of the algorithm on an NVIDIA GTX590 card is 1-1.5 min per phase.
High-quality 4D-CBCT imaging based on the clinically standard 1-min 3D CBCT scanning protocol is feasible via the proposed hybrid reconstruction algorithm.
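Forward-backward splitting, the optimization engine named in the record above, alternates a gradient ("forward") step on a smooth data-fidelity term with a proximal ("backward") step on a non-smooth term. As a hedged, generic illustration of that structure only, the sketch below solves a toy l1-regularized least-squares problem; it is not the paper's reconstruction/registration pair, and all matrices and parameters are invented examples.

```python
# Generic forward-backward splitting (proximal gradient) on
#   min_x 0.5*||A x - b||^2 + lam*||x||_1   (toy problem, illustrative only)
def fbs(A, b, lam=0.1, step=0.1, iters=500):
    n = len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        # forward step: gradient of the smooth term 0.5*||Ax - b||^2
        r = [sum(a * xi for a, xi in zip(row, x)) - bi for row, bi in zip(A, b)]
        g = [sum(A[i][j] * r[i] for i in range(len(A))) for j in range(n)]
        z = [xi - step * gi for xi, gi in zip(x, g)]
        # backward step: proximal map of lam*||x||_1 (soft-thresholding)
        t = step * lam
        x = [max(abs(v) - t, 0.0) * (1.0 if v > 0 else -1.0) for v in z]
    return x

x = fbs([[1.0, 0.0], [0.0, 1.0]], [1.0, 0.05])  # identity A: prox shrinks b
```

With A as the identity, the iteration converges to the soft-thresholded data, which makes the two-step structure easy to verify by hand before swapping in harder subproblems.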
NASA Astrophysics Data System (ADS)
Lehmann, Thomas M.; Guld, Mark O.; Thies, Christian; Fischer, Benedikt; Keysers, Daniel; Kohnen, Michael; Schubert, Henning; Wein, Berthold B.
2003-05-01
Picture archiving and communication systems (PACS) aim to efficiently provide the radiologists with all images in a suitable quality for diagnosis. Modern standards for digital imaging and communication in medicine (DICOM) comprise alphanumerical descriptions of study, patient, and technical parameters. Currently, this is the only information used to select relevant images within PACS. Since textual descriptions insufficiently describe the great variety of details in medical images, content-based image retrieval (CBIR) is expected to have a strong impact when integrated into PACS. However, existing CBIR approaches usually are limited to a distinct modality, organ, or diagnostic study. In this state-of-the-art report, we present first results implementing a general approach to content-based image retrieval in medical applications (IRMA) and discuss its integration into PACS environments. Usually, a PACS consists of a DICOM image server and several DICOM-compliant workstations, which are used by radiologists for reading the images and reporting the findings. Basic IRMA components are the relational database, the scheduler, and the web server, which all may be installed on the DICOM image server, and the IRMA daemons running on distributed machines, e.g., the radiologists' workstations. These workstations can also host the web-based front-ends of IRMA applications. Integrating CBIR and PACS, a special focus is put on (a) location and access transparency for data, methods, and experiments, (b) replication transparency for methods in development, (c) concurrency transparency for job processing and feature extraction, (d) system transparency at method implementation time, and (e) job distribution transparency when issuing a query. Transparent integration will have a certain impact on diagnostic quality supporting both evidence-based medicine and case-based reasoning.
Wang, Xiao-Ping; Zhu, Xiao-Mei; Zhu, Yin-Su; Liu, Wang-Yan; Yang, Xiao-Han; Huang, Wei-Wei; Xu, Yi; Tang, Li-Jun
2018-07-01
The present study included a total of 111 consecutive patients who had undergone coronary computed tomography (CT) angiography, using a first-generation dual-source CT with automatic tube potential selection and tube current modulation. Body weight (BW) and body mass index (BMI) were recorded prior to CT examinations. Image noise and attenuation of the proximal ascending aorta (AA) and descending aorta (DA) at the middle level of the left ventricle were measured. Correlations between BW, BMI and objective image quality were evaluated using linear regression. In addition, two subgroups based on BMI (≤25 and >25 kg/m²) were analyzed. Subjective image quality, image noise, the signal-to-noise ratio (SNR) and the contrast-to-noise ratio (CNR) were compared between the two subgroups. The image noise of the AA increased with BW and BMI (BW: r=0.453, P<0.001; BMI: r=0.545, P<0.001). The CNR and SNR of the AA were inversely correlated with BW and BMI, respectively. The image noise, CNR and SNR of the DA exhibited similar associations with BW and BMI. The BMI >25 kg/m² group had a significant increase in image noise (33.1±6.9 vs. 27.8±4.0 HU, P<0.05) and a significant reduction in CNR and SNR, when compared with the BMI ≤25 kg/m² group (CNR: 18.9±4.3 vs. 16.1±3.7, P<0.05; SNR: 16.0±3.8 vs. 13.6±3.2, P<0.05). Patients with a BMI of ≤25 kg/m² had more coronary artery segments scored as excellent, compared with patients with a BMI of >25 kg/m² (P=0.02). In conclusion, this method is not able to achieve a consistent objective image quality across the entire patient population; the impact of BW and BMI on objective image quality was not completely eliminated. BMI-based adjustment of the tube potential may achieve a more consistent image quality compared with automatic tube potential selection, particularly in patients with a larger body habitus.
In-situ quality monitoring during laser brazing
NASA Astrophysics Data System (ADS)
Ungers, Michael; Fecker, Daniel; Frank, Sascha; Donst, Dmitri; Märgner, Volker; Abels, Peter; Kaierle, Stefan
Laser brazing of zinc coated steel is a widely established manufacturing process in the automotive sector, where high quality requirements must be fulfilled. The strength, impermeability and surface appearance of the joint are particularly important for judging its quality. The development of an on-line quality control system is highly desired by the industry. This paper presents recent work on the development of such a system, which consists of two cameras operating in different spectral ranges. For the evaluation of the system, seam imperfections are created artificially during experiments. Finally, image processing algorithms for monitoring process parameters based on the captured images are presented.
Bayesian denoising in digital radiography: a comparison in the dental field.
Frosio, I; Olivieri, C; Lucchese, M; Borghese, N A; Boccacci, P
2013-01-01
We compared two Bayesian denoising algorithms for digital radiographs, based on Total Variation regularization and wavelet decomposition. The comparison was performed on simulated radiographs with different photon counts and frequency content and on real dental radiographs. Four different quality indices were considered to quantify the quality of the filtered radiographs. The experimental results suggested that Total Variation is more suited to preserve fine anatomical details, whereas wavelets produce images of higher quality at global scale; they also highlighted the need for more reliable image quality indices.
Vos, Eline K; Sambandamurthy, Sriram; Kamel, Maged; McKenney, Robert; van Uden, Mark J; Hoeks, Caroline M A; Yakar, Derya; Scheenen, Tom W J; Fütterer, Jurgen J
2014-01-01
The objectives of this study were to test the feasibility of an investigational dual-channel next-generation endorectal coil (NG-ERC) in vivo, to quantitatively assess signal-to-noise ratio (SNR), and to get an impression of image quality compared with the current clinically available single-loop endorectal coil (ERC) for prostate magnetic resonance imaging at both 1.5 and 3 T. The study was approved by the institutional review board, and written informed consent was obtained from all patients. In total, 8 consecutive patients with prostate cancer underwent a local staging magnetic resonance examination with the successive use of both coils in 1 session (4 patients at 1.5 T and 4 other patients at 3 T). Quantitative comparison of both coils was performed for the apex, mid-gland and base levels at both field strengths by calculating SNR profiles in the axial plane on an imaginary line in the anteroposterior direction perpendicular to the coil surface. Two radiologists independently assessed the image quality of the T2-weighted and apparent diffusion coefficient maps calculated from diffusion-weighted imaging using a 5-point scale. Improvement of geometric distortion on diffusion-weighted imaging with the use of parallel imaging was explored. Statistical analysis included a paired Wilcoxon signed rank test for SNR and image quality evaluation as well as κ statistics for interobserver agreement. No adverse events were reported. The SNR was higher for the NG-ERC compared with the ERC up to a distance of approximately 40 mm from the surface of the coil at 1.5 T (P < 0.0001 for the apex, the mid-gland, and the base) and approximately 17 mm (P = 0.015 at the apex level) and 30 mm at 3 T (P < 0.0001 for the mid-gland and base). Beyond this distance, the SNR profiles of both coils were comparable. Overall, T2-weighted image quality was considered better for NG-ERC at both field strengths. 
Quality of apparent diffusion coefficient maps with the use of parallel imaging was rated superior with the NG-ERC at 3 T. The investigational NG-ERC for prostate imaging outperforms the current clinically available ERC in terms of SNR and is feasible for continued development for future use as the next generation endorectal coil for prostate imaging in clinical practice.
Image reconstructions from super-sampled data sets with resolution modeling in PET imaging.
Li, Yusheng; Matej, Samuel; Metzler, Scott D
2014-12-01
Spatial resolution in positron emission tomography (PET) is still a limiting factor in many imaging applications. To improve the spatial resolution for an existing scanner with fixed crystal sizes, mechanical movements such as scanner wobbling and object shifting have been considered for PET systems. Multiple acquisitions from different positions can provide complementary information and increased spatial sampling. The objective of this paper is to explore an efficient and useful reconstruction framework to reconstruct super-resolution images from super-sampled low-resolution data sets. The authors introduce a super-sampling data acquisition model based on the physical processes with tomographic, downsampling, and shifting matrices as its building blocks. Based on the model, we extend the MLEM and Landweber algorithms to reconstruct images from super-sampled data sets. The authors also derive a backprojection-filtration-like (BPF-like) method for the super-sampling reconstruction. Furthermore, they explore variant methods for super-sampling reconstructions: the separate super-sampling resolution-modeling reconstruction and the reconstruction without downsampling to further improve image quality at the cost of more computation. The authors use simulated reconstruction of a resolution phantom to evaluate the three types of algorithms with different super-samplings at different count levels. Contrast recovery coefficient (CRC) versus background variability, as an image-quality metric, is calculated at each iteration for all reconstructions. The authors observe that all three algorithms can significantly and consistently achieve increased CRCs at fixed background variability and reduce background artifacts with super-sampled data sets at the same count levels. For the same super-sampled data sets, the MLEM method achieves better image quality than the Landweber method, which in turn achieves better image quality than the BPF-like method. 
The authors also demonstrate that reconstructions from super-sampled data sets using a fine system matrix yield improved image quality compared to reconstructions using a coarse system matrix. Super-sampling reconstructions at different count levels showed that more spatial-resolution improvement can be obtained with higher counts at larger iteration numbers. The authors developed a super-sampling reconstruction framework that can reconstruct super-resolution images from the super-sampled data sets simultaneously with known acquisition motion. Super-sampling PET acquisition using the proposed algorithms provides an effective and economical way to improve image quality for PET imaging, which has important implications for preclinical and clinical region-of-interest PET imaging applications.
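The acquisition model described above stacks tomographic, downsampling, and shifting matrices into one system matrix, after which a standard iterative update applies. Below is a hedged toy sketch of the MLEM variant: two shifted low-resolution samplings of a 4-pixel object are stacked into a single matrix H and reconstructed with the multiplicative MLEM update. H, the object, and the iteration count are invented for illustration and bear no relation to the authors' scanner model.

```python
# Toy MLEM on super-sampled data: H stacks two shifted downsamplings.
def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def mlem(H, y, n_iter=1000):
    Ht = [list(col) for col in zip(*H)]          # transpose of H
    sens = matvec(Ht, [1.0] * len(y))            # sensitivity image H^T 1
    x = [1.0] * len(H[0])
    for _ in range(n_iter):
        proj = matvec(H, x)                      # forward projection
        ratio = [yi / max(p, 1e-12) for yi, p in zip(y, proj)]
        back = matvec(Ht, ratio)                 # back-projection of the ratio
        x = [xi * b / max(s, 1e-12) for xi, b, s in zip(x, back, sens)]
    return x

H = [[1, 1, 0, 0], [0, 0, 1, 1],   # acquisition 1: pairwise downsampling
     [0, 1, 1, 0], [0, 0, 0, 1]]   # acquisition 2: shifted by one pixel
y = matvec(H, [1.0, 2.0, 3.0, 4.0])              # noiseless "measurements"
est = mlem(H, y)
```

Because the two shifted samplings together make the toy system uniquely solvable, the iteration recovers the original 4-pixel object, illustrating how complementary shifted acquisitions add information that a single downsampled view lacks.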
Printing system perceptual-based gloss and gloss uniformity standard (INCITS W1.1)
NASA Astrophysics Data System (ADS)
Ng, Yee S.; Cui, Luke C.; Kuo, Chung-Hui; Maggard, Eric; Mashtare, Dale; Morris, Peter; Viola, Michael
2003-12-01
To address the standardization issues of perceptually based image quality for printing systems, ISO/IEC JTC1/SC28, the standardization committee for office equipment, chartered the W1.1 project with the responsibility of drafting a proposal for an international standard for the evaluation of printed image quality. One of the W1.1 task teams is chartered to address the issue of "Gloss and Gloss Uniformity". This paper summarizes the current status and technical progress of this ad hoc team in 2003.
NASA Astrophysics Data System (ADS)
Unaldi, Numan; Asari, Vijayan K.; Rahman, Zia-ur
2009-05-01
Recently we proposed a wavelet-based dynamic range compression algorithm to improve the visual quality of digital images captured from high dynamic range scenes with non-uniform lighting conditions. The fast image enhancement algorithm that provides dynamic range compression, while preserving the local contrast and tonal rendition, is also a good candidate for real time video processing applications. Although the colors of the enhanced images produced by the proposed algorithm are consistent with the colors of the original image, the proposed algorithm fails to produce color constant results for some "pathological" scenes that have very strong spectral characteristics in a single band. The linear color restoration process is the main reason for this drawback. Hence, a different approach is required for the final color restoration process. In this paper the latest version of the proposed algorithm, which deals with this issue is presented. The results obtained by applying the algorithm to numerous natural images show strong robustness and high image quality.
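The linear color restoration that the record above identifies as the weak point is, in its common form, a per-channel rescaling by the ratio of enhanced to original luminance. The sketch below illustrates that step only, with a toy square-root curve standing in for the wavelet-based dynamic range compression; the curve and constants are assumptions, not the authors' algorithm.

```python
# Linear colour restoration after luminance-only enhancement (illustrative).
def enhance_pixel(r, g, b, compress=lambda L: L ** 0.5):
    L = (r + g + b) / 3.0                        # simple luminance estimate
    L_out = compress(L / 255.0) * 255.0          # toy dynamic range compression
    ratio = L_out / L if L > 0 else 0.0          # linear restoration factor
    return tuple(min(255.0, c * ratio) for c in (r, g, b))

out = enhance_pixel(30.0, 60.0, 90.0)            # dark pixel gets brightened
```

Because every channel is multiplied by the same ratio, channel proportions (and thus hue) are preserved for typical scenes; that same linearity is what fails on scenes dominated by a single spectral band, motivating the nonlinear restoration discussed above.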
Koppers, Lars; Wormer, Holger; Ickstadt, Katja
2017-08-01
The quality and authenticity of images is essential for data presentation, especially in the life sciences. Questionable images may often be a first indicator for questionable results, too. Therefore, a tool that uses mathematical methods to detect suspicious images in large image archives can be a helpful instrument to improve quality assurance in publications. As a first step towards a systematic screening tool, especially for journal editors and other staff members who are responsible for quality assurance, such as laboratory supervisors, we propose a basic classification of image manipulation. Based on this classification, we developed and explored some simple algorithms to detect copied areas in images. Using an artificial image and two examples of previously published modified images, we apply quantitative methods such as pixel-wise comparison, a nearest neighbor and a variance algorithm to detect copied-and-pasted areas or duplicated images. We show that our algorithms are able to detect some simple types of image alteration, such as copying and pasting background areas. The variance algorithm detects not only identical, but also very similar areas that differ only by brightness. Further types could, in principle, be implemented in a standardized scanning routine. We detected the copied areas in a proven case of image manipulation in Germany and showed the similarity of two images in a retracted paper from the Kato labs, which has been widely discussed on sites such as PubPeer and Retraction Watch.
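One of the simple detectors described above, pixel-wise comparison of copied areas, can be sketched as exact block matching: hash every k×k block and flag identical blocks at different positions. This is an assumed, minimal illustration rather than the authors' code; a brightness-tolerant variant in the spirit of their variance algorithm would compare mean-subtracted blocks instead.

```python
# Flag duplicated k x k blocks in a grayscale image (illustrative sketch).
def find_duplicate_blocks(img, k=2):
    h, w = len(img), len(img[0])
    seen, dupes = {}, []
    for r in range(h - k + 1):
        for c in range(w - k + 1):
            block = tuple(tuple(img[r + i][c:c + k]) for i in range(k))
            if block in seen:
                dupes.append((seen[block], (r, c)))   # (first, duplicate)
            else:
                seen[block] = (r, c)
    return dupes

img = [[1, 2, 9, 1, 2],
       [3, 4, 9, 3, 4],
       [5, 6, 7, 8, 9]]
hits = find_duplicate_blocks(img)  # the 2x2 patch at (0,0) reappears at (0,3)
```

On real images one would skip overlapping self-matches and flat background regions, e.g. by requiring a minimum block variance before flagging a pair.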
USDA-ARS?s Scientific Manuscript database
Line-scan-based hyperspectral imaging techniques have often served as a research tool to develop rapid multispectral methods based on only a few spectral bands for rapid online applications. With continuing technological advances and greater accessibility to and availability of optoelectronic imagin...
Imaging through atmospheric turbulence for laser based C-RAM systems: an analytical approach
NASA Astrophysics Data System (ADS)
Buske, Ivo; Riede, Wolfgang; Zoz, Jürgen
2013-10-01
High Energy Laser weapons (HEL) have unique attributes which distinguish them from the limitations of kinetic energy weapons. The HEL weapon engagement process typically starts with identifying the target and selecting the aim point on the target through a high magnification telescope. One scenario for such a HEL system is the countermeasure against rocket, artillery or mortar (RAM) objects to protect ships, camps or other infrastructure from terrorist attacks. For target identification, and especially to resolve the aim point, it is essential to ensure high resolution imaging of RAM objects. Throughout the ballistic flight phase, knowledge of the expected imaging quality is important to estimate and evaluate the countermeasure system performance. Image quality is mainly influenced by unavoidable atmospheric turbulence. Analytical calculations were performed to analyze and evaluate image quality parameters during the approach of a RAM object. Kolmogorov turbulence theory was used to determine the atmospheric coherence length and the isoplanatic angle. The image acquisition distinguishes between long and short exposure times to characterize tip/tilt image shift and the impact of higher-order turbulence fluctuations. Two different observer positions are considered to show the influence of the selected sensor site. Furthermore, two different turbulence strengths are investigated to point out the effect of climate or weather conditions. It is well known that atmospheric turbulence degrades image sharpness and creates blurred images. Investigations estimate the effectiveness of simple tip/tilt systems or low-order adaptive optics for laser based C-RAM systems.
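For a uniform Cn² path, the two Kolmogorov-theory quantities the paper computes, the atmospheric coherence length (Fried parameter r0) and the isoplanatic angle θ0, reduce to closed forms. The sketch below evaluates the standard plane-wave expressions; the wavelength, Cn², and path length are illustrative values, not the paper's scenario.

```python
# Plane-wave Fried parameter and isoplanatic angle for constant Cn^2
# along a horizontal path (standard Kolmogorov-theory formulas).
import math

def fried_parameter(wavelength, cn2, path_length):
    """r0 = [0.423 k^2 Cn^2 L]^(-3/5) for constant Cn^2 (plane wave)."""
    k = 2 * math.pi / wavelength
    return (0.423 * k**2 * cn2 * path_length) ** (-3 / 5)

def isoplanatic_angle(wavelength, cn2, path_length):
    """theta0 = [2.914 k^2 * integral of Cn^2 z^(5/3) dz]^(-3/5);
    for constant Cn^2 the integral is Cn^2 * (3/8) * L^(8/3)."""
    k = 2 * math.pi / wavelength
    integral = cn2 * (3 / 8) * path_length ** (8 / 3)
    return (2.914 * k**2 * integral) ** (-3 / 5)

r0 = fried_parameter(1.064e-6, 1e-14, 1000.0)      # ~5 cm for these inputs
theta0 = isoplanatic_angle(1.064e-6, 1e-14, 1000.0)
```

For this moderate-turbulence example (Cn² = 1e-14 m^(-2/3) over a 1 km path at 1.064 µm), r0 comes out near 5 cm and θ0 in the tens of microradians, which is why aim-point imaging over such paths needs at least tip/tilt correction.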
A Comparison of Image Quality and Radiation Exposure Between the Mini C-Arm and the Standard C-Arm.
van Rappard, Juliaan R M; Hummel, Willy A; de Jong, Tijmen; Mouës, Chantal M
2018-04-01
The use of intraoperative fluoroscopy has become mandatory in osseous hand surgery. Due to its overall practicality, the mini C-arm has gained popularity among hand surgeons over the standard C-arm. This study compares image quality and radiation exposure for patient and staff between the mini C-arm and the standard C-arm, both with flat panel technology. An observer-based subjective image quality study was performed using a contrast detail (CD) phantom. Five independent observers were asked to determine the smallest circles discernable to them. The results were plotted in a graph, forming a CD curve. From each curve, an image quality figure (IQF) was derived. A lower IQF equates to a better image quality. The patients' entrance skin dose was measured, and to obtain more information about the staff exposure dose, a perspex hand phantom was used. The scatter radiation was measured at various distances and angles relative to a central point on the detector. The IQF was significantly lower for the mini C-arm, resulting in better image quality. The patients' entrance dose was 10 times higher for the mini C-arm compared with the standard C-arm, and the scatter radiation was threefold higher. Due to its improved image quality and overall practicality, the mini C-arm is recommended for hand surgical procedures. To ensure that surgeons' radiation exposure does not exceed safety limits, radiation exposure when using mini C-arms with flat panel technology should be monitored during surgery in a future clinical study.
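The image quality figure derived from the contrast-detail curve can be written down compactly. One common convention (hedged; exact definitions vary by phantom and study) sums, over contrast levels, the contrast times the smallest discernable detail diameter, so that lower values mean better image quality:

```python
# IQF = sum_i C_i * D_i,min  (one common contrast-detail convention;
# the normalisation used in the study above may differ).
def iqf(contrasts, min_diameters):
    return sum(c * d for c, d in zip(contrasts, min_diameters))

# An observer who discerns smaller circles at every contrast scores lower:
good = iqf([0.01, 0.02, 0.04], [2.0, 1.0, 0.5])
bad = iqf([0.01, 0.02, 0.04], [4.0, 2.0, 1.0])
```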
Application of machine learning for the evaluation of turfgrass plots using aerial images
NASA Astrophysics Data System (ADS)
Ding, Ke; Raheja, Amar; Bhandari, Subodh; Green, Robert L.
2016-05-01
Historically, investigation of turfgrass characteristics has been limited to visual ratings. Although relevant information may result from such evaluations, final inferences may be questionable because of the subjective nature in which the data is collected. Recent advances in computer vision techniques allow researchers to objectively measure turfgrass characteristics such as percent ground cover, turf color, and turf quality from digital images. This paper focuses on developing a methodology for automated assessment of turfgrass quality from aerial images. Images of several turfgrass plots of varying quality were gathered using a camera mounted on an unmanned aerial vehicle. The quality of these plots was also evaluated based on visual ratings. The goal was to use the aerial images to generate quality evaluations on a regular basis for the optimization of water treatment. Aerial images are used to train a neural network so that appropriate features such as intensity, color, and texture of the turfgrass are extracted from these images. A neural network is a nonlinear classifier commonly used in machine learning. The output of the trained neural network model is the rating of the grass, which is compared to the visual ratings. Currently, the quality and the color of turfgrass, measured as the greenness of the grass, are evaluated. The textures are calculated using the Gabor filter and co-occurrence matrix. Other classifiers such as support vector machines and simpler linear regression models such as Ridge regression and LARS regression are also used. The performance of each model is compared. The results show encouraging potential for using machine learning techniques for the evaluation of turfgrass quality and color.
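Of the texture measures mentioned above, the co-occurrence matrix is the easiest to sketch. The fragment below builds a gray-level co-occurrence matrix for horizontal neighbors and computes its contrast statistic; it is an assumed, simplified illustration (a single offset, no Gabor filtering), not the study's feature pipeline.

```python
# Gray-level co-occurrence matrix (horizontal neighbours) and its contrast.
def glcm(img, levels):
    m = [[0] * levels for _ in range(levels)]
    for row in img:
        for a, b in zip(row, row[1:]):
            m[a][b] += 1
    total = sum(sum(r) for r in m)
    return [[v / total for v in r] for r in m]

def contrast(m):
    # large when neighbouring pixels differ a lot (rough texture)
    return sum((i - j) ** 2 * p for i, r in enumerate(m) for j, p in enumerate(r))

smooth = [[0, 0, 0, 0]] * 4                  # uniform patch
rough = [[0, 3, 0, 3], [3, 0, 3, 0]] * 2     # alternating patch
c_smooth = contrast(glcm(smooth, 4))
c_rough = contrast(glcm(rough, 4))
```

Statistics like this, computed per plot, become one entry in the feature vector fed to the neural network or regression model alongside color and intensity features.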
NASA Astrophysics Data System (ADS)
Srivastava, Vishal; Dalal, Devjyoti; Kumar, Anuj; Prakash, Surya; Dalal, Krishna
2018-06-01
Moisture content is an important feature of fruits and vegetables. Since about 80% of an apple's content is water, decreasing the moisture content degrades the quality of apples (Golden Delicious). The computational and texture features of the apples were extracted from optical coherence tomography (OCT) images. A support vector machine with a Gaussian kernel was used to perform automated classification. To evaluate the quality of wax-coated apples during storage in vivo, our proposed method opens up the possibility of fully automated quantitative analysis based on the morphological features of apples. Our results demonstrate that the analysis of the computational and texture features of OCT images may be a good non-destructive method for assessing the quality of apples.
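A support vector machine with a Gaussian (RBF) kernel, as used above, can be illustrated with a bare-bones dual solver. The sketch below trains a bias-free soft-margin SVM by projected gradient ascent on the dual; the toy data, the hyperparameters, and the omission of the bias term are simplifying assumptions for illustration, not the study's setup (which would use a full QP solver on OCT-derived features).

```python
import math

def rbf(a, b, gamma=0.5):
    # Gaussian kernel exp(-gamma * ||a - b||^2)
    return math.exp(-gamma * sum((x - y) ** 2 for x, y in zip(a, b)))

def train_svm(X, y, C=10.0, lr=0.1, iters=500):
    # Projected gradient ascent on the (bias-free) SVM dual: 0 <= alpha <= C.
    n = len(X)
    K = [[rbf(X[i], X[j]) for j in range(n)] for i in range(n)]
    alpha = [0.0] * n
    for _ in range(iters):
        for i in range(n):
            g = 1.0 - y[i] * sum(alpha[j] * y[j] * K[i][j] for j in range(n))
            alpha[i] = min(max(alpha[i] + lr * g, 0.0), C)
    return alpha

def predict(X, y, alpha, x):
    s = sum(a * yi * rbf(xi, x) for a, yi, xi in zip(alpha, y, X))
    return 1 if s >= 0 else -1

X = [(0.0, 0.0), (0.0, 1.0), (5.0, 5.0), (5.0, 6.0)]   # two toy "classes"
y = [-1, -1, 1, 1]
alpha = train_svm(X, y)
```

In practice each point would be a vector of computational and texture features per apple, and a library solver (with bias term and cross-validated C, gamma) would replace this hand-rolled loop.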
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deng, J.
This imaging educational program will focus on solutions to common pediatric image quality optimization challenges. The speakers will present collective knowledge on best practices in pediatric imaging from their experience at dedicated children’s hospitals. One of the most commonly encountered pediatric imaging requirements for the non-specialist hospital is pediatric CT in the emergency room setting. Thus, this educational program will begin with optimization of pediatric CT in the emergency department. Though pediatric cardiovascular MRI may be less common in the non-specialist hospitals, low pediatric volumes and unique cardiovascular anatomy make optimization of these techniques difficult. Therefore, our second speaker will review best practices in pediatric cardiovascular MRI based on experiences from a children’s hospital with a large volume of cardiac patients. Learning Objectives: To learn techniques for optimizing radiation dose and image quality for CT of children in the emergency room setting. To learn solutions for consistently high quality cardiovascular MRI of children.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, L; Shen, C; Wang, J
Purpose: To reduce cone beam CT (CBCT) imaging dose, we previously proposed a progressive dose control (PDC) scheme to employ temporal correlation between CBCT images at different fractions for image quality enhancement. A temporal non-local means (TNLM) method was developed to enhance the quality of a new low-dose CBCT using an existing high-quality CBCT. To enhance a voxel value, the TNLM method searches for similar voxels in a window. Due to patient deformation between the two CBCTs, a large searching window was required, reducing image quality and computational efficiency. This abstract proposes a deformation-assisted TNLM (DA-TNLM) method to solve this problem. Methods: For a low-dose CBCT to be enhanced using a high-quality CBCT, we first performed deformable image registration between the low-dose CBCT and the high-quality CBCT to approximately establish voxel correspondence between the two. A searching window for a voxel was then set based on the deformation vector field. Specifically, the search window for each voxel was shifted by the deformation vector. A TNLM step was then applied using only voxels within this determined window to correct image intensity in the low-dose CBCT. Results: We have tested the proposed scheme on simulated CIRS phantom data and real patient data. The CIRS phantom was scanned on a Varian on-board imaging CBCT system with the couch shifted and the dose reduced each time. The real patient data were acquired in four fractions with the dose reduced from the standard CBCT dose to 12.5% of the standard dose. It was found that the DA-TNLM method can reduce total dose by over 75% on average in the first four fractions. Conclusion: We have developed a PDC scheme which can enhance the quality of images scanned at low dose using a DA-TNLM method. Tests in phantom and patient studies demonstrated promising results.
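The deformation-assisted temporal non-local means step can be sketched in one dimension: each noisy voxel is replaced by a similarity-weighted average of prior high-quality voxels, with the search window re-centred by the (given) deformation vector field. All arrays and parameters below are invented illustrations, not the abstract's 3-D implementation.

```python
import math

def da_tnlm(noisy, prior, dvf, window=2, h=10.0):
    # Temporal NLM with the search window shifted by the deformation vector.
    out = []
    for i, v in enumerate(noisy):
        c = i + dvf[i]                           # deformation-shifted centre
        lo, hi = max(0, c - window), min(len(prior), c + window + 1)
        wsum = vsum = 0.0
        for j in range(lo, hi):
            w = math.exp(-((v - prior[j]) ** 2) / (h * h))
            wsum += w
            vsum += w * prior[j]
        out.append(vsum / wsum)
    return out

# Noisy fraction resembles the prior with noise; zero deformation here.
restored = da_tnlm([2.0, -1.0, 98.0, 101.0], [0.0, 0.0, 100.0, 100.0],
                   [0, 0, 0, 0], window=1)
```

The payoff of the deformation assistance is that `window` can stay small even when anatomy has shifted, since `dvf` already points each voxel at the right neighbourhood in the prior image.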
Koplay, Mustafa; Celik, Mahmut; Avcı, Ahmet; Erdogan, Hasan; Demir, Kenan; Sivri, Mesut; Nayman, Alaaddin
2015-01-01
We aimed to report the image quality, the relationship between heart rate and image quality, the amount of contrast agent given to the patients, and the radiation doses in coronary CT angiography (CTA) obtained using the high-pitch prospectively ECG-gated "Flash Spiral" technique (method A) or the retrospectively ECG-gated technique (method B) on 128×2-slice dual-source CT. A total of 110 patients who were evaluated with the method A and method B techniques on a 128×2-detector dual-source CT device were included in the study. Patients were divided into three groups based on their heart rates during the procedure, and the relationship between heart rate and image quality was evaluated. The relationships among heart rate, gender and the radiation dose received by the patients were compared. A total of 1760 segments were evaluated in terms of image quality. Comparison of heart rate and image quality revealed a significant difference between the <60 beats/min and >75 beats/min groups, whereas the <60 beats/min and 60-75 beats/min groups did not differ significantly. The average effective dose for coronary CTA was calculated as 1.11 mSv (0.47-2.01 mSv) for method A and 8.22 mSv (2.19-12.88 mSv) for method B. Method A provided high-quality images at doses as low as <1 mSv in selected patients with low heart rates, with a high negative predictive value to rule out coronary artery disease. Although method B increases the effective dose, it provides images of high diagnostic quality for patients with a high heart rate or arrhythmia, which make it difficult to obtain images.
Defect inspection in hot slab surface: multi-source CCD imaging based fuzzy-rough sets method
NASA Astrophysics Data System (ADS)
Zhao, Liming; Zhang, Yi; Xu, Xiaodong; Xiao, Hong; Huang, Chao
2016-09-01
To provide an accurate surface-defect inspection method and to make robust, automated delineation of image regions of interest (ROIs) a reality on the production line, a multi-source CCD-imaging-based fuzzy-rough sets method is proposed for hot slab surface quality assessment. The presented method and the devised system are applicable to surface quality inspection of strip, billet, slab and similar products. In this work we exploit the complementary advantages of two common machine vision (MV) systems: line-array CCD traditional scanning imaging (LS-imaging) and area-array CCD laser three-dimensional (3D) scanning imaging (AL-imaging). By establishing a fuzzy-rough sets model in the detection system, the seeds for relative fuzzy connectedness (RFC) delineation of ROIs can be placed adaptively; the model introduces upper and lower approximation sets for ROI definition, from which the boundary region can be delineated by an RFC region-competition classification mechanism. To our knowledge, this is the first attempt to apply a multi-source CCD-imaging-based fuzzy-rough sets strategy to continuous-casting slab surface defect inspection, allowing automated AI algorithms and powerful ROI delineation strategies to be applied in the MV inspection field.
Onboard TDI stage estimation and calibration using SNR analysis
NASA Astrophysics Data System (ADS)
Haghshenas, Javad
2017-09-01
The electro-optical design of a push-broom space camera for a low Earth orbit (LEO) remote sensing satellite is performed based on noise analysis of TDI sensors for very high GSDs and low-light-level missions. It is well established that the CCD TDI mode of operation provides increased photosensitivity relative to a linear CCD array without sacrificing spatial resolution. However, for satellite imaging, in order to exploit the advantages that the TDI mode offers, attention must be given to the parameters that affect the image quality of TDI sensors, such as jitter, vibration and noise. A predefined number of TDI stages may not satisfy the image quality requirement of the satellite camera. Furthermore, in order to use the whole dynamic range of the sensor, the imager must be able to set the TDI stage count for every shot based on the affecting parameters. This paper deals with optimal estimation and setting of the stages based on tradeoffs among MTF, noise and SNR. On-board SNR estimation is simulated using atmospheric analysis based on the MODTRAN algorithm in the PcModWin software. From the noise models, we propose a formulation that estimates the TDI stage count so as to satisfy the system SNR requirement; the MTF requirement must be satisfied in the same manner. A proper combination of both parameters guarantees full use of the dynamic range along with high SNR and image quality.
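The stage-estimation idea can be illustrated with a simple shot-noise-limited CCD model: signal and dark charge grow linearly with the stage count N, read noise is added once, and the smallest N meeting the SNR requirement is chosen. All parameter values below are hypothetical defaults, not the paper's actual formulation.

```python
import math

def tdi_stages_for_snr(sig_e, snr_req, dark_e=10.0, read_noise_e=30.0,
                       full_well_e=100000.0, max_stages=96):
    """Smallest TDI stage count N meeting an SNR requirement (illustrative).

    sig_e  : signal electrons accumulated per stage per pixel
    dark_e : dark-current electrons per stage
    Assumes Poisson shot noise on signal and dark charge plus a fixed
    output read noise; returns None if no N satisfies the requirement
    before saturation.
    """
    for n in range(1, max_stages + 1):
        if n * sig_e > full_well_e:        # full-well saturation: stop
            return None
        snr = n * sig_e / math.sqrt(n * sig_e + n * dark_e + read_noise_e ** 2)
        if snr >= snr_req:
            return n
    return None
```

In a flight system this per-shot estimate would be traded off against the motion-MTF penalty of additional stages, as the abstract notes.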
Deterministic compressive sampling for high-quality image reconstruction of ultrasound tomography.
Huy, Tran Quang; Tue, Huynh Huu; Long, Ton That; Duc-Tan, Tran
2017-05-25
Ultrasound tomography is a well-known diagnostic imaging modality developed for the detection, without ionizing radiation, of very small tumors whose sizes are smaller than the wavelength of the incident pressure wave, in contrast to the current gold standard, X-ray mammography. Based on the inverse scattering technique, ultrasound tomography uses material properties such as sound contrast or attenuation to detect small targets. The Distorted Born Iterative Method (DBIM), based on the first-order Born approximation, is an efficient diffraction tomography approach. One of the challenges for high-quality reconstruction is obtaining many measurements from a limited number of transmitters and receivers. Given that biomedical images are often sparse, the compressed sensing (CS) technique can therefore be applied effectively to ultrasound tomography, reducing the number of transmitters and receivers while maintaining high reconstruction quality. Several existing CS approaches place the measurement locations randomly; however, such random configurations are relatively difficult to implement in practice. Instead, we adopt a methodology that determines the locations of the measurement devices deterministically. For this, we develop a novel DCS-DBIM algorithm that is highly applicable in practice, building on the deterministic compressed sensing (DCS) technique introduced by the authors a few years ago, with the image reconstruction implemented using l 1 regularization. Simulation results demonstrate the high performance of the proposed approach: with the normalized error reduced by approximately 90% compared to the conventional approach, the new approach saves half the number of measurements and uses only two iterations. The universal image quality index is also evaluated to confirm the efficiency of the proposed approach.
Numerical simulation results indicate that CS and DCS techniques offer equivalent image reconstruction quality with simpler practical implementation. It would be a very promising approach in practical applications of modern biomedical imaging technology.
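The l1-regularized solve used inside each DBIM iteration can be illustrated with a generic iterative shrinkage-thresholding (ISTA) sketch for min ||Ax-b||²/2 + λ||x||₁. The operator A, data b and parameter values here are placeholders, not the paper's actual scattering operators.

```python
import numpy as np

def ista(A, b, lam=0.1, n_iter=200):
    """Iterative shrinkage-thresholding for min ||Ax-b||^2/2 + lam*||x||_1
    (a textbook sketch of the l1-regularized step, not the paper's solver)."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - b)              # gradient of the data-fidelity term
        z = x - g / L                      # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x
```

The soft-thresholding step is what drives small coefficients to exactly zero, which is why sparse scenes can be recovered from fewer transmitter/receiver positions.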
Automated retinal image quality assessment on the UK Biobank dataset for epidemiological studies.
Welikala, R A; Fraz, M M; Foster, P J; Whincup, P H; Rudnicka, A R; Owen, C G; Strachan, D P; Barman, S A
2016-04-01
Morphological changes in the retinal vascular network are associated with future risk of many systemic and vascular diseases. However, uncertainty over the presence and nature of some of these associations exists. Analysis of data from large population based studies will help to resolve these uncertainties. The QUARTZ (QUantitative Analysis of Retinal vessel Topology and siZe) retinal image analysis system allows automated processing of large numbers of retinal images. However, an image quality assessment module is needed to achieve full automation. In this paper, we propose such an algorithm, which uses the segmented vessel map to determine the suitability of retinal images for use in the creation of vessel morphometric data suitable for epidemiological studies. This includes an effective 3-dimensional feature set and support vector machine classification. A random subset of 800 retinal images from UK Biobank (a large prospective study of 500,000 middle aged adults; where 68,151 underwent retinal imaging) was used to examine the performance of the image quality algorithm. The algorithm achieved a sensitivity of 95.33% and a specificity of 91.13% for the detection of inadequate images. The strong performance of this image quality algorithm will make rapid automated analysis of vascular morphometry feasible on the entire UK Biobank dataset (and other large retinal datasets), with minimal operator involvement, and at low cost. Copyright © 2016 Elsevier Ltd. All rights reserved.
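The two figures reported for the quality module, sensitivity and specificity for detecting inadequate images, follow directly from the confusion matrix of a binary classifier. A minimal sketch, assuming label 1 means "inadequate" (the paper's classifier is an SVM over a segmented-vessel feature set):

```python
import numpy as np

def sensitivity_specificity(y_true, y_pred):
    """Sensitivity and specificity of a binary 'inadequate image' detector
    (label 1 = inadequate).  Pure-numpy illustration of the reported metrics."""
    y_true = np.asarray(y_true, bool)
    y_pred = np.asarray(y_pred, bool)
    tp = np.sum(y_true & y_pred)       # inadequate images correctly flagged
    tn = np.sum(~y_true & ~y_pred)     # adequate images correctly passed
    fn = np.sum(y_true & ~y_pred)
    fp = np.sum(~y_true & y_pred)
    return tp / (tp + fn), tn / (tn + fp)
```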
Automatic detection system of shaft part surface defect based on machine vision
NASA Astrophysics Data System (ADS)
Jiang, Lixing; Sun, Kuoyuan; Zhao, Fulai; Hao, Xiangyang
2015-05-01
Surface physical damage detection is an important part of shaft part quality inspection, and the traditional detection methods rely mostly on human visual identification, which has many disadvantages such as low efficiency and poor reliability. In order to improve the automation level of shaft part quality inspection and help establish a relevant industry quality standard, a machine vision inspection system connected to an MCU was designed to realize surface detection of shaft parts. The system adopts a monochrome line-scan digital camera and uses dark-field, forward illumination to acquire images with high contrast. After image filtering and enhancement, the images are segmented into binary images by the maximum between-class variance method; the main contours are then extracted based on aspect-ratio and area criteria, and the coordinates of the center of gravity of each defect area, i.e. the locating point coordinates, are calculated. Finally, the defect areas are marked by a coding pen communicating with the MCU. Experiments show that no defect was missed and the false alarm rate was lower than 5%, demonstrating that the designed system meets the demands of on-line real-time detection of shaft parts.
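The maximum between-class variance segmentation named above is the classical Otsu method: choose the threshold that maximizes w0·w1·(μ0-μ1)². A textbook numpy sketch (the production system would operate on the filtered line-scan images):

```python
import numpy as np

def otsu_threshold(img, nbins=256):
    """Maximum between-class-variance (Otsu) threshold of a grayscale image.

    Returns the histogram-bin centre that maximizes the between-class
    variance sigma_b^2 = (mu_t*w0 - mu)^2 / (w0*(1-w0))."""
    hist, edges = np.histogram(img.ravel(), bins=nbins)
    p = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                     # class-0 probability up to each bin
    mu = np.cumsum(p * centers)           # cumulative first moment
    mu_t = mu[-1]                         # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * (1 - w0))
    sigma_b[~np.isfinite(sigma_b)] = 0.0  # empty-class splits are invalid
    return centers[np.argmax(sigma_b)]
```

Pixels above the returned threshold form the defect candidates, which are then filtered by aspect ratio and area as the abstract describes.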
Li, Dongming; Sun, Changming; Yang, Jinhua; Liu, Huan; Peng, Jiaqi; Zhang, Lijuan
2017-04-06
An adaptive optics (AO) system provides real-time compensation for atmospheric turbulence. However, an AO image is usually of poor contrast because of the nature of the imaging process, meaning that the image contains information coming from both out-of-focus and in-focus planes of the object, which also brings about a loss in quality. In this paper, we present a robust multi-frame adaptive optics image restoration algorithm via maximum likelihood estimation. Our proposed algorithm uses a maximum likelihood method with image regularization as the basic principle, and constructs the joint log likelihood function for multi-frame AO images based on a Poisson distribution model. To begin with, a frame selection method based on image variance is applied to the observed multi-frame AO images to select images with better quality to improve the convergence of a blind deconvolution algorithm. Then, by combining the imaging conditions and the AO system properties, a point spread function estimation model is built. Finally, we develop our iterative solutions for AO image restoration addressing the joint deconvolution issue. We conduct a number of experiments to evaluate the performances of our proposed algorithm. Experimental results show that our algorithm produces accurate AO image restoration results and outperforms the current state-of-the-art blind deconvolution methods.
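The Poisson joint log-likelihood described above has, in the un-regularized case, the classical multi-frame Richardson-Lucy fixed point. A 1-D sketch with known PSFs follows; the paper's algorithm additionally estimates the PSFs blindly and adds image regularization, so this is only the core update rule.

```python
import numpy as np

def multiframe_rl(frames, psfs, n_iter=30):
    """Multi-frame Richardson-Lucy update maximizing the joint Poisson
    log-likelihood over frames (un-regularized 1-D sketch; PSFs assumed
    known and normalized to unit sum)."""
    x = np.full_like(frames[0], np.mean(frames), dtype=float)
    for _ in range(n_iter):
        ratio_sum = np.zeros_like(x)
        for y, h in zip(frames, psfs):
            est = np.convolve(x, h, mode="same") + 1e-12   # model prediction
            # correlate the data/model ratio with the flipped PSF
            ratio_sum += np.convolve(y / est, h[::-1], mode="same")
        x *= ratio_sum / len(frames)       # multiplicative, non-negative update
    return x
```

The multiplicative form keeps the estimate non-negative, which is one reason Poisson-model deconvolution suits photon-limited AO frames.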
Score-Level Fusion of Phase-Based and Feature-Based Fingerprint Matching Algorithms
NASA Astrophysics Data System (ADS)
Ito, Koichi; Morita, Ayumi; Aoki, Takafumi; Nakajima, Hiroshi; Kobayashi, Koji; Higuchi, Tatsuo
This paper proposes an efficient fingerprint recognition algorithm combining phase-based image matching and feature-based matching. In our previous work, we have already proposed an efficient fingerprint recognition algorithm using Phase-Only Correlation (POC), and developed commercial fingerprint verification units for access control applications. The use of Fourier phase information of fingerprint images makes it possible to achieve robust recognition for weakly impressed, low-quality fingerprint images. This paper presents an idea of improving the performance of POC-based fingerprint matching by combining it with feature-based matching, where feature-based matching is introduced in order to improve recognition efficiency for images with nonlinear distortion. Experimental evaluation using two different types of fingerprint image databases demonstrates efficient recognition performance of the combination of the POC-based algorithm and the feature-based algorithm.
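Phase-only correlation, the core of the matching described above, is the inverse FFT of the normalized cross-phase spectrum; for two images related by a translation, the POC surface has a sharp peak at the displacement. A standard-definition sketch (the paper's system adds band-limiting and feature-based refinements):

```python
import numpy as np

def poc(f, g):
    """Phase-only correlation surface of two same-size images.

    Keeps only the Fourier phase of the cross-power spectrum, so the
    result is a near-delta peak at the translation taking g onto f."""
    F, G = np.fft.fft2(f), np.fft.fft2(g)
    R = F * np.conj(G)
    R /= np.abs(R) + 1e-12                 # discard magnitude, keep phase
    return np.real(np.fft.ifft2(R))
```

Discarding the magnitude is what makes the match robust to the weak, low-contrast impressions the abstract mentions, since contrast lives mostly in the Fourier magnitude.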
Quality assessment of butter cookies applying multispectral imaging
Andresen, Mette S; Dissing, Bjørn S; Løje, Hanne
2013-01-01
A method for characterization of butter cookie quality by assessing the surface browning and water content using multispectral images is presented. Based on evaluations of the browning of butter cookies, cookies were manually divided into groups. From this categorization, reference values were calculated for a statistical prediction model correlating multispectral images with a browning score. The browning score is calculated as a function of oven temperature and baking time. It is presented as a quadratic response surface. The investigated process window was the intervals 4–16 min and 160–200°C in a forced convection electrically heated oven. In addition to the browning score, a model for predicting the average water content based on the same images is presented. This shows how multispectral images of butter cookies may be used for the assessment of different quality parameters. Statistical analysis showed that the most significant wavelengths for browning predictions were in the interval 400–700 nm and the wavelengths significant for water prediction were primarily located in the near-infrared spectrum. The water prediction model was found to correctly estimate the average water content with an absolute error of 0.22%. From the images it was also possible to follow the browning and drying propagation from the cookie edge toward the center. PMID:24804036
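The quadratic response surface relating browning score to baking time and oven temperature can be fitted by ordinary least squares on the six quadratic basis terms. A sketch with an arbitrary coefficient ordering (the paper's multispectral regression model is more involved):

```python
import numpy as np

def fit_quadratic_surface(t, T, score):
    """Least-squares fit of score ~ 1 + t + T + t*T + t^2 + T^2,
    a quadratic response surface in baking time t and temperature T."""
    X = np.column_stack([np.ones_like(t), t, T, t * T, t**2, T**2])
    coef, *_ = np.linalg.lstsq(X, score, rcond=None)
    return coef

def predict(coef, t, T):
    """Evaluate the fitted quadratic surface."""
    return (coef[0] + coef[1] * t + coef[2] * T + coef[3] * t * T
            + coef[4] * t**2 + coef[5] * T**2)
```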
NASA Astrophysics Data System (ADS)
Guggenheim, James A.; Zhang, Edward Z.; Beard, Paul C.
2017-03-01
The planar Fabry-Pérot (FP) sensor provides high quality photoacoustic (PA) images, but beam walk-off limits sensitivity and thus penetration depth to ≈1 cm. Planoconcave microresonator sensors eliminate beam walk-off, enabling sensitivity to be increased by an order of magnitude whilst retaining the highly favourable frequency response and directional characteristics of the FP sensor. The first tomographic PA images obtained in a tissue-realistic phantom using the new sensors are described. These show that the microresonator sensors provide near identical image quality to the planar FP sensor but with significantly greater penetration depth (e.g. 2-3 cm) due to their higher sensitivity. This offers the prospect of whole-body small animal imaging and clinical imaging to depths previously unattainable using the planar FP sensor.
Cost-effective handling of digital medical images in the telemedicine environment.
Choong, Miew Keen; Logeswaran, Rajasvaran; Bister, Michel
2007-09-01
This paper concentrates on strategies for less costly handling of medical images. Aspects of digitization using conventional digital cameras, lossy compression with good diagnostic quality, and visualization through less costly monitors are discussed. For digitization of film-based media, subjective evaluation of the suitability of digital cameras as an alternative to the digitizer was undertaken. To save on storage, bandwidth and transmission time, the acceptable degree of compression with diagnostically no loss of important data was studied through randomized double-blind tests of the subjective image quality when compression noise was kept lower than the inherent noise. A diagnostic experiment was undertaken to evaluate normal low-cost computer monitors as viable viewing displays for clinicians. The results show that conventional digital camera images of X-ray films were diagnostically similar to those from the expensive digitizer. Lossy compression, when used moderately with the imaging noise to compression noise ratio (ICR) greater than four, can bring about image improvement with better diagnostic quality than the original image. Statistical analysis shows that there is no diagnostic difference between expensive high quality monitors and conventional computer monitors. The results presented show good potential in implementing the proposed strategies to promote widespread cost-effective telemedicine and digital medical environments. Copyright © 2006 Elsevier Ireland Ltd.
Fukumitsu, Nobuyoshi; Ishida, Masaya; Terunuma, Toshiyuki; Mizumoto, Masashi; Hashimoto, Takayuki; Moritake, Takashi; Okumura, Toshiyuki; Sakae, Takeji; Tsuboi, Koji; Sakurai, Hideyuki
2012-01-01
Reproducibility of computed tomography (CT) image quality in respiratory-gated radiation treatment planning is essential in radiotherapy of mobile tumors. Seven series of regular and six series of irregular respiratory motions were performed using a thorax dynamic phantom. For the regular respiratory motions, the respiratory cycle was varied from 2.5 to 4 s and the amplitude from 4 to 10 mm. For the irregular respiratory motions, a cycle of 2.5 to 4 s or an amplitude of 4 to 10 mm was added to the base pattern (3.5-s cycle, 6-mm amplitude) every three cycles. Images of the object were acquired six times using respiratory-gated data acquisition. The volume of the object was calculated, and volume reproducibility was judged from its variability; the registered images of the object were summed, and shape reproducibility was judged from the degree of overlap of the objects. The variability of volumes and shapes differed significantly as the respiratory cycle of the regular respiratory motions changed. For irregular respiratory motion, shape reproducibility was further degraded; the percentage of overlap among the six images was 35.26% in the group mixing 2.5- and 3.5-s cycles. Amplitude changes did not produce significant differences in the variability of volumes and shapes. Respiratory cycle changes reduced the reproducibility of image quality in respiratory-gated CT. PMID:22966173
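One natural reading of the reported overlap percentage is the fraction of voxels common to all repeated acquisitions relative to those present in any of them (intersection over union). A sketch under that assumption; the paper's exact definition may differ:

```python
import numpy as np

def overlap_percentage(masks):
    """Shape-reproducibility score for repeated gated acquisitions:
    voxels present in every binary mask as a percentage of voxels
    present in any mask (intersection over union x 100)."""
    stack = np.stack([np.asarray(m, bool) for m in masks])
    inter = np.logical_and.reduce(stack).sum()  # voxels in all masks
    union = np.logical_or.reduce(stack).sum()   # voxels in any mask
    return 100.0 * inter / union
```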
Application of side-oblique image-motion blur correction to Kuaizhou-1 agile optical images.
Sun, Tao; Long, Hui; Liu, Bao-Cheng; Li, Ying
2016-03-21
Given the recent development of agile optical satellites for rapid-response land observation, side-oblique image-motion (SOIM) detection and blur correction have become increasingly essential for improving the radiometric quality of side-oblique images. The Chinese small-scale agile mapping satellite Kuaizhou-1 (KZ-1) was developed by the Harbin Institute of Technology and launched for multiple emergency applications. Like other agile satellites, KZ-1 suffers from SOIM blur, particularly in captured images with large side-oblique angles. SOIM detection and blur correction are critical for improving the image radiometric accuracy. This study proposes a SOIM restoration method based on segmental point spread function detection. The segment region width is determined by satellite parameters such as speed, height, integration time, and side-oblique angle. The corresponding algorithms and a matrix form are proposed for SOIM blur correction. Radiometric objective evaluation indices are used to assess the restoration quality. Beijing regional images from KZ-1 are used as experimental data. The radiometric quality is found to increase greatly after SOIM correction. Thus, the proposed method effectively corrects image motion for KZ-1 agile optical satellites.
Shilemay, Moshe; Rozban, Daniel; Levanon, Assaf; Yitzhaky, Yitzhak; Kopeika, Natan S; Yadid-Pecht, Orly; Abramovich, Amir
2013-03-01
Inexpensive millimeter-wavelength (MMW) optical digital imaging raises a challenge of evaluating the imaging performance and image quality because of the large electromagnetic wavelengths and pixel sensor sizes, which are 2 to 3 orders of magnitude larger than those of ordinary thermal or visual imaging systems, and also because of the noisiness of the inexpensive glow discharge detectors that compose the focal-plane array. This study quantifies the performances of this MMW imaging system. Its point-spread function and modulation transfer function were investigated. The experimental results and the analysis indicate that the image quality of this MMW imaging system is limited mostly by the noise, and the blur is dominated by the pixel sensor size. Therefore, the MMW image might be improved by oversampling, given that noise reduction is achieved. Demonstration of MMW image improvement through oversampling is presented.
Defocusing effects of lensless ghost imaging and ghost diffraction with partially coherent sources
NASA Astrophysics Data System (ADS)
Zhou, Shuang-Xi; Sheng, Wei; Bi, Yu-Bo; Luo, Chun-Ling
2018-04-01
The defocusing effect is inevitable and degrades the image quality in the conventional optical imaging process significantly due to the close confinement of the imaging lens. Based on classical optical coherent theory and linear algebra, we develop a unified formula to describe the defocusing effects of both lensless ghost imaging (LGI) and lensless ghost diffraction (LGD) systems with a partially coherent source. Numerical examples are given to illustrate the influence of defocusing length on the quality of LGI and LGD. We find that the defocusing effects of the test and reference paths in the LGI or LGD systems are entirely different, while the LGD system is more robust against defocusing than the LGI system. Specifically, we find that the imaging process for LGD systems can be viewed as pinhole imaging, which may find applications in ultra-short-wave band imaging without imaging lenses, e.g. x-ray diffraction and γ-ray imaging.
A new evaluation method research for fusion quality of infrared and visible images
NASA Astrophysics Data System (ADS)
Ge, Xingguo; Ji, Yiguo; Tao, Zhongxiang; Tian, Chunyan; Ning, Chengda
2017-03-01
In order to objectively evaluate the fusion of infrared and visible images, a fusion evaluation method based on energy-weighted average structural similarity and an edge-information retention value is proposed to address the drawbacks of existing evaluation methods. The evaluation index of this method is given, and evaluation experiments on infrared and visible image fusion results under different algorithms and environments were carried out on the basis of this index. The experimental results show that the objective evaluation index obtained with this method is consistent with the subjective evaluation results, which shows that the method is a practical and effective fusion image quality evaluation method.
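The energy-weighted SSIM component of such an index can be sketched as follows: compute the structural similarity between the fused image and each source, then weight by each source's energy. This uses a simplified single-window SSIM and an illustrative reading of "energy-weighted"; real implementations use a sliding window and local weights.

```python
import numpy as np

def ssim_global(a, b, L=1.0):
    """Single-window structural similarity between two images
    (the standard SSIM formula evaluated over the whole image)."""
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2   # usual stabilizing constants
    ma, mb = a.mean(), b.mean()
    va, vb = a.var(), b.var()
    cov = ((a - ma) * (b - mb)).mean()
    return ((2 * ma * mb + c1) * (2 * cov + c2) /
            ((ma**2 + mb**2 + c1) * (va + vb + c2)))

def energy_weighted_ssim(fused, srcs):
    """Average of SSIM(fused, source) weighted by each source's energy
    (sum of squared intensities); an illustrative assumption, not the
    paper's exact index."""
    w = np.array([float((s**2).sum()) for s in srcs])
    w /= w.sum()
    return float(sum(wi * ssim_global(fused, s) for wi, s in zip(w, srcs)))
```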
The potential for neurovascular intravenous angiography using K-edge digital subtraction angiography
NASA Astrophysics Data System (ADS)
Schültke, E.; Fiedler, S.; Kelly, M.; Griebel, R.; Juurlink, B.; LeDuc, G.; Estève, F.; Le Bas, J.-F.; Renier, M.; Nemoz, C.; Meguro, K.
2005-08-01
Background: Catheterization of small-caliber blood vessels in the central nervous system can be extremely challenging. Alternatively, intravenous (i.v.) administration of contrast agent is minimally invasive and therefore carries a much lower risk for the patient. With conventional X-ray equipment, volumes of contrast agent that can be safely administered to the patient do not allow acquisition of high-quality images after i.v. injection, because the contrast bolus is extremely diluted by passage through the heart. However, synchrotron-based digital K-edge subtraction angiography does allow acquisition of high-quality images after i.v. administration of relatively small doses of contrast agent. Materials and methods: Eight adult male New Zealand rabbits were used for our experiments. Animals were submitted to both angiography with conventional X-ray equipment and synchrotron-based digital subtraction angiography. Results: With conventional X-ray equipment, no contrast was seen in either cerebral or spinal blood vessels after i.v. injection of iodinated contrast agent. However, using K-edge digital subtraction angiography, as little as 1 ml iodinated contrast agent, when administered as an i.v. bolus, yielded images of small-caliber blood vessels in the central nervous system (both brain and spinal cord). Conclusions: If it were possible to image blood vessels of the same diameter in the central nervous system of human patients, the synchrotron-based technique could yield high-quality images at a significantly lower risk for the patient than conventional X-ray imaging. Images could be acquired where catheterization of feeding blood vessels has proven impossible.
Contrast-based sensorless adaptive optics for retinal imaging.
Zhou, Xiaolin; Bedggood, Phillip; Bui, Bang; Nguyen, Christine T O; He, Zheng; Metha, Andrew
2015-09-01
Conventional adaptive optics ophthalmoscopes use wavefront sensing methods to characterize ocular aberrations for real-time correction. However, there are important situations in which the wavefront sensing step is susceptible to difficulties that affect the accuracy of the correction. To circumvent these, wavefront sensorless adaptive optics (or non-wavefront sensing AO; NS-AO) imaging has recently been developed and has been applied to point-scanning based retinal imaging modalities. In this study we show, for the first time, contrast-based NS-AO ophthalmoscopy for full-frame in vivo imaging of human and animal eyes. We suggest a robust image quality metric that could be used for any imaging modality, and test its performance against other metrics using (physical) model eyes.
Differential Binary Encoding Method for Calibrating Image Sensors Based on IOFBs
Fernández, Pedro R.; Lázaro-Galilea, José Luis; Gardel, Alfredo; Espinosa, Felipe; Bravo, Ignacio; Cano, Ángel
2012-01-01
Image transmission using incoherent optical fiber bundles (IOFBs) requires prior calibration to obtain the spatial in-out fiber correspondence necessary to reconstruct the image captured by the pseudo-sensor. This information is recorded in a Look-Up Table called the Reconstruction Table (RT), used later for reordering the fiber positions and reconstructing the original image. This paper presents a very fast method based on image-scanning using spaces encoded by a weighted binary code to obtain the in-out correspondence. The results demonstrate that this technique yields a remarkable reduction in processing time and the image reconstruction quality is very good compared to previous techniques based on spot or line scanning, for example. PMID:22666023
Pahn, Gregor; Skornitzke, Stephan; Schlemmer, Hans-Peter; Kauczor, Hans-Ulrich; Stiller, Wolfram
2016-01-01
Based on the guidelines from "Report 87: Radiation Dose and Image-quality Assessment in Computed Tomography" of the International Commission on Radiation Units and Measurements (ICRU), a software framework for automated quantitative image quality analysis was developed and its usability for a variety of scientific questions demonstrated. The extendable framework currently implements the calculation of the recommended Fourier image quality (IQ) metrics modulation transfer function (MTF) and noise-power spectrum (NPS), and additional IQ quantities such as noise magnitude, CT number accuracy, uniformity across the field-of-view, contrast-to-noise ratio (CNR) and signal-to-noise ratio (SNR) of simulated lesions for a commercially available cone-beam phantom. Sample image data were acquired with different scan and reconstruction settings on CT systems from different manufacturers. Spatial resolution is analyzed in terms of edge-spread function, line-spread-function, and MTF. 3D NPS is calculated according to ICRU Report 87, and condensed to 2D and radially averaged 1D representations. Noise magnitude, CT numbers, and uniformity of these quantities are assessed on large samples of ROIs. Low-contrast resolution (CNR, SNR) is quantitatively evaluated as a function of lesion contrast and diameter. Simultaneous automated processing of several image datasets allows for straightforward comparative assessment. The presented framework enables systematic, reproducible, automated and time-efficient quantitative IQ analysis. Consistent application of the ICRU guidelines facilitates standardization of quantitative assessment not only for routine quality assurance, but for a number of research questions, e.g. the comparison of different scanner models or acquisition protocols, and the evaluation of new technology or reconstruction methods. Copyright © 2015 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
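The noise-power spectrum computation at the heart of such a framework can be sketched as an ensemble average of mean-subtracted FFT periodograms over noise-only ROIs. This is a simplified 2-D version of the ICRU Report 87 recipe, with hypothetical parameter names:

```python
import numpy as np

def nps_2d(rois, px=1.0):
    """2-D noise-power spectrum from an ensemble of noise-only ROIs.

    rois : array (n_rois, ny, nx) of noise-only patches
    px   : pixel size (same units in both directions)
    Each ROI is mean-subtracted, its periodogram |FFT|^2 scaled by
    px^2/(nx*ny), and the result averaged over the ensemble."""
    rois = np.asarray(rois, float)
    n, ny, nx = rois.shape
    ft = np.fft.fft2(rois - rois.mean(axis=(1, 2), keepdims=True))
    return (np.abs(ft) ** 2).mean(axis=0) * (px * px) / (nx * ny)
```

By Parseval's theorem the NPS averaged over all frequencies equals the pixel variance times px², which gives a quick sanity check on any implementation.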
NASA Astrophysics Data System (ADS)
Matheus, B.; Verçosa, L. B.; Barufaldi, B.; Schiabel, H.
2014-03-01
With the absolute prevalence of digital images in mammography, several new tools have become available to radiologists, such as CAD schemes, digital zoom and contrast alteration. This work focuses on contrast variation and how radiologists react to these changes when asked to evaluate image quality. Three contrast-enhancing techniques were used in this study: conventional equalization, CCB correction [1] (a digitization correction) and value subtraction. A set of 100 images from available online mammographic databases was used in the tests. The tests consisted of presenting all four versions of an image (the original plus the three contrast-enhanced images) to a specialist, who was asked to rank them from best to worst quality for diagnosis. Analysis of the results demonstrated that CCB correction [1] produced better images in almost all cases. Equalization, which mathematically produces higher contrast, was considered the worst mammography image quality enhancement in the majority of cases (69.7%). The value-subtraction procedure produced images considered better than the original in 84% of cases. The tests indicate that, for the radiologist's perception, it seems more important to guarantee full visualization of nuances than to present a high-contrast image. Another observation is that the "ideal" scanner curve does not yield the best result for a mammographic image; the important contrast range is the middle of the histogram, where nodules and masses need to be seen and clearly distinguished.
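The conventional equalization baseline tested above is classical histogram equalization: map each gray level through the normalized cumulative histogram. A minimal sketch for 8-bit-range images (the study's CCB correction and value subtraction are separate techniques not shown here):

```python
import numpy as np

def equalize(img, nbins=256):
    """Classical histogram equalization of an image with values in [0, 256).

    Maps each pixel through the normalized cumulative histogram so the
    output gray levels are spread across the full [0, 255] range."""
    hist, edges = np.histogram(img.ravel(), bins=nbins, range=(0, 256))
    cdf = hist.cumsum().astype(float)
    cdf = (cdf - cdf[0]) / (cdf[-1] - cdf[0])          # normalize to [0, 1]
    return np.interp(img.ravel(), edges[:-1], cdf * 255.0).reshape(img.shape)
```

As the study found, maximizing contrast this way can flatten the mid-histogram nuances radiologists rely on, which is consistent with equalization ranking worst.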
TU-F-9A-01: Balancing Image Quality and Dose in Radiography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peck, D; Pasciak, A
2014-06-15
Emphasis is often placed on minimizing radiation dose in diagnostic imaging without fully considering the effect on image quality, especially those aspects that affect diagnostic accuracy. This session will include a patient-image-based review of diagnostic quantities important to radiologists in conventional radiography, including the effects of body habitus, age, positioning, and the clinical indication of the exam. The relationships between image quality, radiation dose, and radiation risk will be discussed, specifically addressing how these factors are affected by imaging protocols and acquisition parameters and techniques. The session will also discuss some of the actual and perceived radiation risk associated with diagnostic imaging. Even if the probability of radiation-induced cancer is small, the fear associated with radiation persists. Moreover, when a risk carries a benefit to an individual or to society, the risk may be justified with respect to that benefit. But how do you convey the risks and the benefits to people? This requires knowledge of how people perceive risk and how to communicate both the risk and the benefit to different populations. In this presentation, the sources of error in estimating radiation risk and some methods used to convey risk are reviewed. Learning Objectives: Understand the image quality metrics that are clinically relevant to radiologists. Understand how acquisition parameters and techniques affect image quality and radiation dose in conventional radiography. Understand the uncertainties in estimates of radiation risk from imaging exams. Learn some methods for effectively communicating radiation risk to the public.
An approach for quantitative image quality analysis for CT
NASA Astrophysics Data System (ADS)
Rahimi, Amir; Cochran, Joe; Mooney, Doug; Regensburger, Joe
2016-03-01
An objective and standardized approach to assessing the image quality of Computed Tomography (CT) systems is required in a wide variety of imaging processes to identify CT systems appropriate for a given application. We present an overview of the framework we have developed to help standardize and objectively assess CT image quality for different models of CT scanners used in security applications. Within this framework, we have developed methods to quantitatively measure metrics that should correlate with the feature identification, detection accuracy and precision, and image registration capabilities of CT machines, and to identify strengths and weaknesses in different CT imaging technologies for transportation security. To that end we have designed, developed and constructed phantoms that allow for systematic and repeatable measurement of roughly 88 image quality metrics, covering the modulation transfer function, noise equivalent quanta, noise power spectra, slice sensitivity profiles, streak artifacts, CT number uniformity, CT number consistency, object length accuracy, CT number path-length consistency, and object registration. Furthermore, we have developed a sophisticated MATLAB-based image analysis toolkit to analyze CT-generated images of the phantoms and report these metrics in a format that is standardized across the considered models of CT scanners, allowing for comparative image quality analysis within a CT model or between different CT models. In addition, we have developed a modified sparse principal component analysis (SPCA) method that, unlike standard principal component analysis (PCA), yields components with sparse loadings, used in conjunction with the Hotelling T2 statistical method to compare, qualify, and detect faults in the tested systems.
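The fault-detection step described above, sparse principal components combined with a Hotelling T2 statistic, can be sketched as follows. This uses standard scikit-learn SparsePCA rather than the authors' modified variant, and synthetic stand-in data in place of real scanner metric vectors; all names and parameter values are illustrative.

```python
import numpy as np
from sklearn.decomposition import SparsePCA

# Baseline: metric vectors from scans assumed fault-free (rows = scans,
# columns = image-quality metrics); here synthetic data with 3 latent factors.
rng = np.random.default_rng(0)
loadings = rng.normal(size=(3, 10))
baseline = rng.normal(size=(200, 3)) @ loadings + 0.1 * rng.normal(size=(200, 10))

spca = SparsePCA(n_components=3, alpha=0.5, random_state=0)
scores = spca.fit_transform(baseline)
var = scores.var(axis=0, ddof=1) + 1e-12       # per-component score variance

def hotelling_t2(x):
    """Hotelling T^2 of one metric vector in the sparse-component subspace."""
    s = spca.transform(x.reshape(1, -1))[0]
    return float(np.sum(s ** 2 / var))

# Empirical control limit from the baseline; vectors above it are flagged.
t2_limit = np.quantile([hotelling_t2(r) for r in baseline], 0.99)
```

A scanner whose metric vector drifts along a dominant variance direction then produces a T2 value well above `t2_limit` and is flagged as anomalous.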
USDA-ARS?s Scientific Manuscript database
An acousto-optic tunable filter-based hyperspectral microscope imaging method has potential for rapid identification of foodborne pathogenic bacteria from microcolonies at the single-cell level. We have successfully developed the method to acquire quality hyperspectral microscopic images from variou...