Science.gov

Sample records for adequate image quality

  1. Achieving adequate BMP's for stormwater quality management

    SciTech Connect

    Jones-Lee, A.; Lee, G.F.

    1994-12-31

    There is considerable controversy about the technical appropriateness and the cost-effectiveness of requiring cities to control contaminants in urban stormwater discharges to meet state water quality standards equivalent to US EPA numeric chemical water quality criteria. At this time and likely for the next 10 years, urban stormwater discharges will be exempt from regulation to achieve state water quality standards in receiving waters, owing to the high cost to cities of the management of contaminants in the stormwater runoff-discharge so as to prevent exceedances of water quality standards in the receiving waters. Instead of requiring the same degree of contaminant control for stormwater discharges as is required for point-source discharges of municipal and industrial wastewaters, those responsible for urban stormwater discharges will have to implement Best Management Practices (BMP's) for contaminant control. The recommended approach for implementation of BMP's involves the use of site-specific evaluations of what, if any, real problems (use impairment) are caused by stormwater-associated contaminants in the waters receiving that stormwater discharge. From this type of information BMP's can then be developed to control those contaminants in stormwater discharges that are, in fact, impairing the beneficial uses of receiving waters.

  2. Image quality analyzer

    NASA Astrophysics Data System (ADS)

    Lukin, V. P.; Botugina, N. N.; Emaleev, O. N.; Antoshkin, L. V.; Konyaev, P. A.

    2012-07-01

    An image quality analyzer (IQA), used as a device for analyzing the efficiency of adaptive optics, is described. The analyzer implements image quality estimation according to three different criteria: contrast, sharpness, and a spectral criterion. At present the analyzer is installed on the Big Solar Vacuum Telescope in routine operation, which allows the most contrasted images of the Sun to be selected during observations. It is further planned to use the analyzer as part of the ANGARA adaptive correction system.
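
    As an illustration of the three criteria mentioned above, the following sketch computes a simple contrast, sharpness and spectral score for a grayscale frame. The specific definitions (RMS contrast, mean gradient magnitude, high-frequency power fraction and its cutoff) are assumptions for illustration, not the analyzer's actual formulas.

```python
import numpy as np

def quality_criteria(img):
    """Illustrative contrast, sharpness, and spectral criteria for a 2-D
    grayscale frame; not the analyzer's exact definitions."""
    img = img.astype(float)
    # RMS contrast: standard deviation normalized by the mean intensity.
    contrast = img.std() / (img.mean() + 1e-12)
    # Sharpness: mean gradient magnitude (finite differences).
    gy, gx = np.gradient(img)
    sharpness = np.hypot(gx, gy).mean()
    # Spectral criterion: fraction of power above a radial frequency cutoff.
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    ny, nx = img.shape
    y, x = np.ogrid[:ny, :nx]
    r = np.hypot(y - ny / 2, x - nx / 2)
    cutoff = 0.25 * min(ny, nx) / 2          # assumed cutoff, purely illustrative
    spectral = spectrum[r > cutoff].sum() / spectrum.sum()
    return contrast, sharpness, spectral

# Frames with higher scores would be kept, e.g. to select the most
# contrasted solar images during an observing run.
```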

  3. Social image quality

    NASA Astrophysics Data System (ADS)

    Qiu, Guoping; Kheiri, Ahmed

    2011-01-01

    Current subjective image quality assessments have been developed in laboratory environments, under controlled conditions, and are dependent on the participation of limited numbers of observers. In this research, with the help of Web 2.0 and social media technology, a new method for building a subjective image quality metric has been developed where the observers are Internet users. A website with a simple user interface has been constructed that enables Internet users from anywhere at any time to vote for the better quality version within a pair of versions of the same image. Users' votes are recorded and used to rank the images according to their perceived visual qualities. We have developed three rank aggregation algorithms to process the recorded pair comparison data: the first uses a naive approach, the second employs a Condorcet method, and the third uses Dykstra's extension of the Bradley-Terry method. The website has been collecting data for about three months and had accumulated over 10,000 votes at the time of writing this paper. Results show that the Internet and its allied technologies such as crowdsourcing offer a promising new paradigm for image and video quality assessment where hundreds of thousands of Internet users can contribute to building more robust image quality metrics. We have made Internet user generated social image quality (SIQ) data of a public image database available online (http://www.hdri.cs.nott.ac.uk/siq/) to provide the image quality research community with a new source of ground truth data. The website continues to collect votes and will include more public image databases and will also be extended to include videos to collect social video quality (SVQ) data. All data will be publicly available on the website in due course.
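
    The Bradley-Terry style of rank aggregation mentioned above can be sketched with the standard iterative (minorization-maximization) update on a matrix of pairwise vote counts; Dykstra's extension and the Condorcet variant used by the authors are not reproduced here, and the vote counts shown are hypothetical.

```python
import numpy as np

def bradley_terry(wins, n_iter=200):
    """Estimate Bradley-Terry scores from a matrix of pairwise vote counts.
    wins[i, j] = number of votes saying image i looks better than image j.
    Standard iterative (MM) update; the paper's Dykstra extension is not
    reproduced here."""
    n = wins.shape[0]
    p = np.ones(n)
    for _ in range(n_iter):
        for i in range(n):
            num = wins[i].sum()                      # total wins of image i
            den = sum((wins[i, j] + wins[j, i]) / (p[i] + p[j])
                      for j in range(n) if j != i)
            p[i] = num / max(den, 1e-12)
        p /= p.sum()                                 # fix the overall scale
    return p

# Example with three images and hypothetical vote counts:
votes = np.array([[0, 7, 9],
                  [3, 0, 6],
                  [1, 4, 0]], dtype=float)
scores = bradley_terry(votes)
print(np.argsort(scores)[::-1])   # ranking from best to worst perceived quality
```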

  4. Do Foley Catheters Adequately Drain the Bladder? Evidence from CT Imaging Studies

    PubMed Central

    Avulova, Svetlana; Li, Valery J.; Khusid, Johnathan A.; Choi, Woo S.; Weiss, Jeffrey P.

    2015-01-01

    Introduction: The Foley catheter has been widely assumed to be an effective means of draining the bladder. However, recent studies have brought its efficacy into question. The objective of our study is to further assess the adequacy of the Foley catheter for complete drainage of the bladder. Materials and Methods: Consecutive catheterized patients were identified from a retrospective review of contrast-enhanced and non-contrast-enhanced computed tomographic (CT) abdomen and pelvis studies completed from 7/1/2011-6/30/2012. Residual urine volume (RUV) was measured using 5 mm axial CT sections as follows: the length (L) and width (W) of the bladder in the section with the greatest cross-sectional area were combined with the bladder height (H), as determined from multiplanar reformatted images, to calculate RUV by applying the formula for the volume (V) of a sphere in a cube: V = (π/6)*(L*W*H). Results: RUVs of 167 (mean age 67) consecutively catheterized men (n=72) and women (n=95) identified by CT abdomen and pelvis studies were calculated. The mean RUV was 13.2 mL (range: 0.0 mL-859.1 mL, standard deviation: 75.9 mL, margin of error at 95% confidence: 11.6 mL). Four (2.4%) catheterized patients had RUVs of >50 mL, two of whom had an improperly placed catheter tip noted on their CT reports. Conclusions: Previous studies have shown that up to 43% of catheterized patients had a RUV greater than 50 mL, suggesting inadequacy of bladder drainage via the Foley catheter. Our study indicated that the vast majority of patients with Foley catheters (97.6%) had adequately drained bladders with volumes of <50 mL. PMID:26200550
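
    The RUV formula quoted in the abstract translates directly into code; the sketch below simply evaluates V = (π/6)·L·W·H for bladder dimensions measured in centimeters (1 cm³ = 1 mL).

```python
from math import pi

def residual_urine_volume(length_cm, width_cm, height_cm):
    """Residual urine volume (mL) from CT bladder dimensions (cm),
    using the paper's formula V = (pi/6) * L * W * H."""
    return (pi / 6.0) * length_cm * width_cm * height_cm

# A bladder measuring 4 x 3 x 2 cm holds roughly 12.6 mL,
# well below the 50 mL threshold for adequate drainage.
print(round(residual_urine_volume(4, 3, 2), 1))
```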

  5. Perceptual image quality and telescope performance ranking

    NASA Astrophysics Data System (ADS)

    Lentz, Joshua K.; Harvey, James E.; Marshall, Kenneth H.; Salg, Joseph; Houston, Joseph B.

    2010-08-01

    Launch Vehicle Imaging Telescopes (LVIT) are expensive, high quality devices intended for improving the safety of vehicle personnel, ground support, civilians, and physical assets during launch activities. If allowed to degrade through the combination of wear, environmental factors, and ineffective or inadequate maintenance, these devices lose their ability to provide imagery of adequate quality to analysts for preventing catastrophic events such as the NASA Space Shuttle Challenger accident in 1986 and the Columbia disaster of 2003. A software tool incorporating aberrations and diffraction, developed for maintenance evaluation and modeling of telescope imagery, is presented. This tool provides MTF-based image quality metric outputs which are correlated to ascent imagery analysts' perception of image quality, allowing a prediction of the usefulness of imagery which would be produced by a telescope under different simulated conditions.

  6. Diet quality of Italian yogurt consumers: an application of the probability of adequate nutrient intake score (PANDiet).

    PubMed

    Mistura, Lorenza; D'Addezio, Laura; Sette, Stefania; Piccinelli, Raffaela; Turrini, Aida

    2016-01-01

    The diet quality of yogurt consumers and non-consumers was evaluated by applying the probability of adequate nutrient intake (PANDiet) index to a sample of adults and elderly from the Italian food consumption survey INRAN SCAI 2005-06. Overall, yogurt consumers had a significantly higher mean intake of energy, calcium and percentage of energy from total sugars, whereas the mean percentages of energy from total fat, saturated fatty acids and total carbohydrate were significantly (p < 0.01) lower than in non-consumers. The PANDiet index was significantly higher in yogurt consumers than in non-consumers (60.58 ± 0.33 vs. 58.58 ± 0.19, p < 0.001). The adequacy sub-score for 17 nutrients for which usual intake should be above the reference value was significantly higher among yogurt consumers. Calcium, potassium and riboflavin showed the largest percentage variation between consumers and non-consumers. Yogurt consumers were more likely to have adequate intakes of vitamins and minerals, and a higher quality score of the diet. PMID:26906103

  7. Colour imaging in the monitoring and documentation of choroidal naevi. Are Optomap colour images adequate for this purpose?

    PubMed

    Paul Brett, Jonathan; Lake, Amy; Downes, Susan

    2016-01-01

    An audit project was carried out to evaluate and compare three different imaging systems used to photograph choroidal naevi and to determine whether the Optos Optomap® can be used as the only colour image capture system for monitoring and documenting choroidal naevi. A further aim was to assess whether existing protocols could be improved to accurately document the position and appearance of choroidal naevi. Twenty patients with choroidal naevi were photographed on three different colour image capture systems. Colour images were taken on the Optomap® wide field P200MA camera, the Zeiss FF450plus® mydriatic camera and the Topcon TRC-NW6S®. All images were reviewed retrospectively by a medical retina consultant (SD) who completed a questionnaire to determine the most effective photographic system(s) in demonstrating the location of the naevi and the features of the condition. The Optomap® was the most effective in pinpointing the location of the naevus and the Zeiss FF450plus mydriatic camera best captured the features of the naevus. The non-mydriatic camera was rated the least satisfactory for both tasks. The location of the naevus on the retina should determine the choice of modality. If it is possible to photograph the lesion and include the optic disc or central macula, then the mydriatic camera is considered the best modality for recording both the position and features of the pathology. However, if it is not possible, because of the location, to include both the disc or central macula with the lesion in the same frame, then the Optomap® should be used to photograph the naevus to record its position, and ideally a colour image on the mydriatic camera should also be taken to record the appearance of the lesion. PMID:27253508

  8. Image Enhancement, Image Quality, and Noise

    NASA Technical Reports Server (NTRS)

    Rahman, Zia-ur; Jobson, Daniel J.; Woodell, Glenn A.; Hines, Glenn D.

    2005-01-01

    The Multiscale Retinex With Color Restoration (MSRCR) is a non-linear image enhancement algorithm that provides simultaneous dynamic range compression, color constancy and rendition. The overall effect is to brighten areas of poor contrast/lightness, but not at the expense of saturating areas of good contrast/brightness. The downside is that, given the poor signal-to-noise ratio that most image acquisition devices have in dark regions, noise can also be greatly enhanced, thus affecting overall image quality. In this paper, we discuss the impact of the MSRCR on the overall quality of an enhanced image as a function of the strength of shadows in the image, and as a function of the root-mean-square (RMS) signal-to-noise ratio (SNR) of the image.
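
    A minimal sketch of the retinex core operation is shown below; the full MSRCR averages several Gaussian scales and adds a color restoration step, so this single-scale, single-channel version is only meant to illustrate why noise in dark, low-SNR regions is amplified.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(img, sigma=80.0):
    """Single-scale retinex on one channel: log of the image minus the log
    of a Gaussian-blurred surround. The full MSRCR averages several scales
    and adds a color-restoration term; this is only the core operation."""
    img = img.astype(float) + 1.0                 # avoid log(0)
    surround = gaussian_filter(img, sigma)
    return np.log(img) - np.log(surround)

# In dark regions the signal sits close to the noise floor, so the log
# amplifies noise along with the scene content -- the SNR-dependent
# quality loss discussed in the paper.
```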

  9. Retinal Image Quality During Accommodation

    PubMed Central

    López-Gil, N.; Martin, J.; Liu, T.; Bradley, A.; Díaz-Muñoz, D.; Thibos, L.

    2013-01-01

    Purpose We asked whether retinal image quality is maximal during accommodation, or sub-optimal due to accommodative error, when subjects perform an acuity task. Methods Subjects viewed a monochromatic (552 nm), high-contrast letter target placed at various viewing distances. Wavefront aberrations of the accommodating eye were measured near the endpoint of an acuity staircase paradigm. Refractive state, defined as the optimum target vergence for maximising retinal image quality, was computed by through-focus wavefront analysis to find the power of the virtual correcting lens that maximizes the visual Strehl ratio. Results Despite changes in ocular aberrations and pupil size during binocular viewing, retinal image quality and visual acuity typically remain high for all target vergences. When accommodative errors lead to sub-optimal retinal image quality, acuity and measured image quality both decline. However, the effect of accommodative errors on visual acuity is mitigated by the pupillary constriction associated with accommodation and binocular convergence, and also by binocular summation of dissimilar retinal image blur. Under monocular viewing conditions some subjects displayed significant accommodative lag that reduced visual performance, an effect that was exacerbated by pharmacological dilation of the pupil. Conclusions Spurious measurement of accommodative error can be avoided when the image quality metric used to determine refractive state is compatible with the focusing criteria used by the visual system to control accommodation. Real focusing errors of the accommodating eye do not necessarily produce a reliably measurable loss of image quality or clinically significant loss of visual performance, probably because of the increased depth-of-focus due to pupil constriction. When retinal image quality is close to the maximum achievable (given the eye's higher-order aberrations), acuity is also near maximum. A combination of accommodative lag, reduced image quality, and reduced

  10. Quality assessment for hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Chen, Yuheng; Chen, Xinhua; Zhou, Jiankang; Shen, Weimin

    2014-11-01

    Image quality assessment is an essential value judgement approach for many applications. Multi- and hyperspectral imaging involves more assessment criteria than greyscale or RGB imaging, and its image quality assessment must cover a comprehensive set of evaluation factors. This paper presents an integrated spectral imaging quality assessment project, in which spectral-based, radiometric-based and spatial-based statistical evaluations for three hyperspectral imagers are carried out jointly. The spectral response function is derived from discrete monochromatic illumination images, and the spectral performance is deduced from its FWHM and spectral excursion values. The radiometric response of different spectral channels, under both on-ground and airborne imaging conditions, is judged by SNR computation based on local RMS extraction and statistics. The spatial response of the spectral imaging instrument is evaluated by MTF computation using the slanted-edge analysis method. This pioneering systematic work in hyperspectral imaging quality assessment was carried out with the help of several domestic collaborating institutions; it is significant for the development of on-ground and in-orbit instrument performance evaluation techniques and also serves as a reference for index demonstration and design optimization in instrument development.
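
    A rough sketch of the local-RMS SNR computation described above follows; the block size and the percentile used to isolate quasi-uniform tiles are assumptions, not the authors' parameters.

```python
import numpy as np

def local_rms_snr(band, block=16):
    """Rough SNR estimate for one spectral band: tile the image into blocks,
    take each block's mean and standard deviation, and treat a low percentile
    of the local RMS values as the noise level. Block size and percentile are
    assumptions, not the authors' values."""
    ny, nx = band.shape
    means, stds = [], []
    for y in range(0, ny - block + 1, block):
        for x in range(0, nx - block + 1, block):
            tile = band[y:y + block, x:x + block].astype(float)
            means.append(tile.mean())
            stds.append(tile.std())
    noise = np.percentile(stds, 10)       # quasi-uniform tiles dominate here
    signal = np.mean(means)
    return signal / max(noise, 1e-12)
```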

  11. Quality management in cardiopulmonary imaging.

    PubMed

    Kanne, Jeffrey P

    2011-02-01

    Increased scrutiny of the practice of medicine by government, insurance providers, and individual patients has led to a rapid growth of quality management programs in health care. Radiology is no exception to this trend, and quality management has become an important issue for individual radiologists as well as their respective practices. Quality control has been a mainstay of the practice of radiology for many years, with quality assurance and quality improvement both relative newcomers. This article provides an overview of quality management in the context of cardiopulmonary imaging and describes specific areas of cardiopulmonary radiology in which the components of a quality management program can be integrated. Specific quality components are discussed, and examples of quality initiatives are provided.

  12. Visual Limits To Image Quality

    NASA Astrophysics Data System (ADS)

    Granger, Edward M.

    1985-07-01

    Today's high speed computers, large and inexpensive memory devices, and high definition displays have opened up the area of electronic image processing. Computers are being used to compress, enhance, and geometrically correct a wide range of image related data. It is necessary to develop Image Quality Merit Factors (IQMF) that can be used to evaluate, compare, and specify imaging systems. A meaningful IQMF will have to include both the effects of the transfer function of the system and the noise introduced by the system. Most of the methods used to date have utilized linear system techniques to describe performance. In our work on the IQMF, we have found that it may be necessary to imitate the eye-brain combination in order to best describe the performance of an imaging system. This paper presents the idea that understanding the organization of, and the rivalry between, visual mechanisms may lead to new ways of considering photographic and electronic system image quality and the loss in image quality due to grain, halftones, and pixel noise.

  13. Image quality (IQ) guided multispectral image compression

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng; Chen, Genshe; Wang, Zhonghai; Blasch, Erik

    2016-05-01

    Image compression is necessary for data transportation, as it saves both transfer time and storage space. In this paper, we focus our discussion on lossy compression. There are many standard image formats and corresponding compression algorithms, for example, JPEG (DCT -- discrete cosine transform), JPEG 2000 (DWT -- discrete wavelet transform), BPG (better portable graphics) and TIFF (LZW -- Lempel-Ziv-Welch). The image quality (IQ) of the decompressed image is measured by numerical metrics such as root mean square error (RMSE), peak signal-to-noise ratio (PSNR), and the Structural Similarity (SSIM) index. Given an image and a specified IQ, we investigate how to select a compression method and its parameters to achieve the expected compression. Our scenario consists of 3 steps. The first step is to compress a set of images of interest by varying parameters and to compute their IQs for each compression method. The second step is to create several regression models per compression method after analyzing the IQ measurement versus the compression parameter over a number of compressed images. The third step is to compress the given image with the specified IQ using the selected compression method (JPEG, JPEG2000, BPG, or TIFF) according to the regressed models. The IQ may be specified by a compression ratio (e.g., 100), in which case we select the compression method with the highest IQ (SSIM or PSNR); or the IQ may be specified by an IQ metric (e.g., SSIM = 0.8, or PSNR = 50), in which case we select the compression method with the highest compression ratio. Our experiments, tested on thermal (long-wave infrared) images in gray scale, showed very promising results.
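
    The three-step scenario can be illustrated with JPEG alone, using Pillow for the round-trip and PSNR as the IQ metric; the quadratic regression, the quality grid and the stand-in image are assumptions made for this sketch, not the paper's actual models.

```python
import io
import numpy as np
from PIL import Image

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

def jpeg_roundtrip(gray, quality):
    """Compress a uint8 grayscale image to JPEG at the given quality and decode it."""
    buf = io.BytesIO()
    Image.fromarray(gray).save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return np.asarray(Image.open(buf))

# Step 1: compress over a range of quality settings and record the IQ of each.
qualities = np.arange(10, 96, 5)
train = np.random.randint(0, 256, (256, 256), dtype=np.uint8)   # stand-in image
iq = [psnr(train, jpeg_roundtrip(train, int(q))) for q in qualities]

# Step 2: regress IQ against the compression parameter (here a quadratic fit).
coeffs = np.polyfit(qualities, iq, deg=2)

# Step 3: for a target IQ, pick the lowest quality setting predicted to reach it
# (falls back to the lowest setting if the target is unreachable on this grid).
target_psnr = 35.0
predicted = np.polyval(coeffs, qualities)
chosen = qualities[np.argmax(predicted >= target_psnr)]
```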

  14. Fovea based image quality assessment

    NASA Astrophysics Data System (ADS)

    Guo, Anan; Zhao, Debin; Liu, Shaohui; Cao, Guangyao

    2010-07-01

    Humans are the ultimate receivers of the visual information contained in an image, so a reasonable method of image quality assessment (IQA) should follow the properties of the human visual system (HVS). In recent years, IQA methods based on HVS models have been slowly replacing classical schemes, such as mean squared error (MSE) and peak signal-to-noise ratio (PSNR). Structural similarity (SSIM), regarded as one of the most popular HVS-based methods of full-reference IQA, offers clear improvements in performance over traditional metrics; however, it does not perform well when the image structure is seriously degraded or masked by noise. In this paper, a new, efficient fovea-based structural similarity image quality assessment (FSSIM) is proposed. It adaptively enlarges the distortions in the attended positions and adjusts the relative importance of the three components in SSIM. FSSIM predicts the quality of an image in three steps. First, it computes the luminance, contrast and structure comparison terms; second, it computes the saliency map by extracting fovea information from the reference image using features of the HVS; third, it pools the above three terms according to the processed saliency map. Finally, the widely used LIVE IQA database is employed to evaluate the performance of FSSIM. Experimental results indicate that the consistency and correlation between FSSIM and the mean opinion score (MOS) are both clearly better than those of SSIM and PSNR.
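
    The pooling stage can be sketched as below: an SSIM map computed from box-filtered local statistics is averaged under a saliency (fovea) weight map. How that weight map is extracted from the reference image is the paper's contribution and is not reproduced here; the map is simply taken as an input.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def ssim_map(ref, dist, c1=(0.01 * 255) ** 2, c2=(0.03 * 255) ** 2, win=8):
    """Per-pixel SSIM map from local means/variances (box window)."""
    ref, dist = ref.astype(float), dist.astype(float)
    mu_r, mu_d = uniform_filter(ref, win), uniform_filter(dist, win)
    var_r = uniform_filter(ref * ref, win) - mu_r ** 2
    var_d = uniform_filter(dist * dist, win) - mu_d ** 2
    cov = uniform_filter(ref * dist, win) - mu_r * mu_d
    return ((2 * mu_r * mu_d + c1) * (2 * cov + c2) /
            ((mu_r ** 2 + mu_d ** 2 + c1) * (var_r + var_d + c2)))

def weighted_quality(ref, dist, saliency):
    """Pool the SSIM map with a saliency/fovea weight map so that distortions
    in attended regions count more; building the weight map itself is the
    paper's method and is not reproduced here."""
    q = ssim_map(ref, dist)
    w = saliency / (saliency.sum() + 1e-12)
    return float((q * w).sum())
```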

  15. Landsat image data quality studies

    NASA Technical Reports Server (NTRS)

    Schueler, C. F.; Salomonson, V. V.

    1985-01-01

    Preliminary results of the Landsat-4 Image Data Quality Analysis (LIDQA) program to characterize the data obtained using the Thematic Mapper (TM) instrument on board the Landsat-4 and Landsat-5 satellites are reported. TM design specifications were compared to the obtained data with respect to four criteria: spatial resolution; geometric fidelity; information content; and comparison with Multispectral Scanner (MSS) data. The overall performance of the TM was rated excellent despite minor instabilities and radiometric anomalies in the data. Spatial performance of the TM exceeded design specifications in terms of both image sharpness and geometric accuracy, and the image utility of the TM data was at least twice that of MSS data. The separability of alfalfa and sugar beet fields in a TM image is demonstrated.

  16. Feature maps driven no-reference image quality prediction of authentically distorted images

    NASA Astrophysics Data System (ADS)

    Ghadiyaram, Deepti; Bovik, Alan C.

    2015-03-01

    Current blind image quality prediction models rely on benchmark databases comprised of singly and synthetically distorted images, thereby learning image features that are only adequate to predict human perceived visual quality on such inauthentic distortions. However, real world images often contain complex mixtures of multiple distortions. Rather than a) discounting the effect of these mixtures of distortions on an image's perceptual quality and considering only the dominant distortion or b) using features that are only proven to be efficient for singly distorted images, we deeply study the natural scene statistics of authentically distorted images, in different color spaces and transform domains. We propose a feature-maps-driven statistical approach which avoids any latent assumptions about the type of distortion(s) contained in an image, and focuses instead on modeling the remarkable consistencies in the scene statistics of real world images in the absence of distortions. We design a deep belief network that takes model-based statistical image features derived from a very large database of authentically distorted images as input and discovers good feature representations by generalizing over different distortion types, mixtures, and severities, which are later used to learn a regressor for quality prediction. We demonstrate the remarkable competence of our features for improving automatic perceptual quality prediction on a benchmark database and on the newly designed LIVE Authentic Image Quality Challenge Database and show that our approach of combining robust statistical features and the deep belief network dramatically outperforms the state-of-the-art.

  17. Automated quality assessment in three-dimensional breast ultrasound images.

    PubMed

    Schwaab, Julia; Diez, Yago; Oliver, Arnau; Martí, Robert; van Zelst, Jan; Gubern-Mérida, Albert; Mourri, Ahmed Bensouda; Gregori, Johannes; Günther, Matthias

    2016-04-01

    Automated three-dimensional breast ultrasound (ABUS) is a valuable adjunct to x-ray mammography for breast cancer screening of women with dense breasts. High image quality is essential for proper diagnostics and computer-aided detection. We propose an automated image quality assessment system for ABUS images that detects artifacts at the time of acquisition. Therefore, we study three aspects that can corrupt ABUS images: the nipple position relative to the rest of the breast, the shadow caused by the nipple, and the shape of the breast contour on the image. Image processing and machine learning algorithms are combined to detect these artifacts based on 368 clinical ABUS images that have been rated manually by two experienced clinicians. At a specificity of 0.99, 55% of the images that were rated as low quality are detected by the proposed algorithms. The areas under the ROC curves of the single classifiers are 0.99 for the nipple position, 0.84 for the nipple shadow, and 0.89 for the breast contour shape. The proposed algorithms work fast and reliably, which makes them adequate for online evaluation of image quality during acquisition. The presented concept may be extended to further image modalities and quality aspects. PMID:27158633

  18. The utilization of orbital images as an adequate form of control of preserved areas. [Araguaia National Park, Brazil

    NASA Technical Reports Server (NTRS)

    Dejesusparada, N. (Principal Investigator); Dossantos, J. R.

    1981-01-01

    The synoptic view and the repetitive acquisition of LANDSAT imagery provide precise information, in real time, for monitoring preserved areas based on spectral, temporal and spatial properties. The purpose of this study was to monitor, with the use of multispectral imagery, the systematic annual burning which causes the degradation of ecosystems in the National Park of Araguaia. LANDSAT imagery of channels 5 (0.6 to 0.7 microns) and 7 (0.8 to 1.1 microns), at the scale of 1:250,000, was used to identify and delimit vegetation units and burned areas, based on the photointerpretation parameter of tonality. The results show that the gallery forest can be discriminated from the seasonally flooded 'campo cerrado', and that 4.14% of the study area was burned. It is concluded that LANDSAT images can be used for the implementation of environmental protection in national parks.

  19. Assessing product image quality for online shopping

    NASA Astrophysics Data System (ADS)

    Goswami, Anjan; Chung, Sung H.; Chittar, Naren; Islam, Atiq

    2012-01-01

    Assessing product-image quality is important in the context of online shopping. A high quality image that conveys more information about a product can boost the buyer's confidence and attract more attention. However, the notion of image quality for product-images is not the same as in other domains. The perceived quality of product-images depends not only on various photographic quality features but also on high level features such as the clarity of the foreground or the suitability of the background. In this paper, we define a notion of product-image quality based on such features. We conduct a crowd-sourced experiment to collect user judgments on thousands of eBay's images. We formulate a multi-class classification problem for modeling image quality by classifying images into good, fair and poor quality based on the guided perceptual notions from the judges. We also conduct experiments with regression using average crowd-sourced human judgments as the target. We compute a pseudo-regression score as the expected average of the predicted classes and also compute a score from the regression technique. We design many experiments with various sampling and voting schemes on the crowd-sourced data and construct various experimental image quality models. Most of our models have reasonable accuracies (greater than or equal to 70%) on the test data set. We observe that our computed image quality score has a high (0.66) rank correlation with the average votes from the crowd-sourced human judgments.

  20. An Underwater Color Image Quality Evaluation Metric.

    PubMed

    Yang, Miao; Sowmya, Arcot

    2015-12-01

    Quality evaluation of underwater images is a key goal of underwater video image retrieval and intelligent processing. To date, no metric has been proposed for underwater color image quality evaluation (UCIQE). The special absorption and scattering characteristics of the water medium do not allow direct application of natural color image quality metrics, especially across different underwater environments. In this paper, subjective testing for underwater image quality has been organized. The statistical distribution of underwater image pixels in the CIELab color space, related to subjective evaluation, indicates that sharpness and colorfulness correlate well with subjective image quality perception. Based on these findings, a new UCIQE metric, which is a linear combination of chroma, saturation, and contrast, is proposed to quantify the non-uniform color cast, blurring, and low contrast that characterize underwater engineering and monitoring images. Experiments are conducted to illustrate the performance of the proposed UCIQE metric and its capability to measure underwater image enhancement results. They show that the proposed metric has performance comparable to the leading natural color image quality metrics and the underwater grayscale image quality metrics available in the literature, and can predict with higher accuracy the relative amount of degradation for similar image content in underwater environments. Importantly, UCIQE is a simple and fast solution for real-time underwater video processing. The effectiveness of the presented measure is also demonstrated by subjective evaluation. The results show better correlation between the UCIQE and the subjective mean opinion score.
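
    A hedged sketch of a UCIQE-style score is shown below: chroma spread, luminance contrast and mean saturation in CIELab are combined linearly. The weights and the exact contrast and saturation definitions are placeholders, not the published coefficients.

```python
import numpy as np
from skimage import color

def uciqe_like(rgb, c1=0.4, c2=0.3, c3=0.3):
    """Illustrative underwater-style quality score for an RGB image (float in
    [0, 1]): a linear combination of chroma standard deviation, luminance
    contrast, and mean saturation in CIELab. The weights are placeholders,
    not the published UCIQE coefficients."""
    lab = color.rgb2lab(rgb)
    L, a, b = lab[..., 0], lab[..., 1], lab[..., 2]
    chroma = np.hypot(a, b)
    sigma_c = chroma.std()                              # colorfulness spread
    con_l = np.percentile(L, 99) - np.percentile(L, 1)  # luminance contrast
    saturation = chroma / (np.hypot(chroma, L) + 1e-12)
    return c1 * sigma_c + c2 * con_l + c3 * saturation.mean()
```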

  1. Estimation of adequate setup margins and threshold for position errors requiring immediate attention in head and neck cancer radiotherapy based on 2D image guidance

    PubMed Central

    2013-01-01

    Background We estimated sufficient setup margins for head-and-neck cancer (HNC) radiotherapy (RT) when 2D kV images are utilized for routine patient setup verification. As another goal, we estimated a threshold for displacements of the most important bony landmarks related to the target volumes that require immediate attention. Methods We analyzed 1491 orthogonal x-ray images utilized in RT treatment guidance for 80 HNC patients. We estimated overall setup errors and errors for four subregions to account for patient rotation and deformation: the vertebrae C1-2, C5-7, the occiput bone and the mandible. Setup margins were estimated for two 2D image guidance protocols: i) imaging at the first three fractions and weekly thereafter and ii) daily imaging. Two 2D image matching principles were investigated: i) matching to the vertebrae in the middle of the planning target volume (PTV) (MID_PTV) and ii) minimizing the maximal position error over the four subregions (MIN_MAX). The threshold for the position errors was calculated with two previously unpublished methods based on van Herk's formula and clinical data, by retaining a margin of 5 mm sufficient for each subregion. Results Sufficient setup margins to compensate for the displacements of the subregions were approximately two times larger than those needed to compensate for setup errors of a rigid target. Adequate margins varied from 2.7 mm to 9.6 mm depending on the subregions related to the target, the applied image guidance protocol and early correction of clinically important systematic 3D displacements of the subregions exceeding 4 mm. The MIN_MAX match resulted in smaller margins but caused an overall shift of 2.5 mm for the target center. Margins ≤ 5 mm were sufficient with the MID_PTV match only through application of daily 2D imaging and the 4 mm threshold for correcting systematic displacement of a subregion. Conclusions Adequate setup margins depend remarkably on the subregions related to the target volume. When the systematic 3D
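
    The margin recipe referred to as van Herk's formula is commonly quoted in the population-based form below, where Σ and σ are the standard deviations of the systematic and random setup errors over the patient group; the study applies this type of formula per bony subregion.

```latex
% Commonly quoted population-based van Herk margin recipe:
% \Sigma = SD of systematic setup errors, \sigma = SD of random setup errors.
M_{\mathrm{PTV}} = 2.5\,\Sigma + 0.7\,\sigma
```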

  2. Quality assurance methodology and applications to abdominal imaging PQI.

    PubMed

    Paushter, David M; Thomas, Stephen

    2016-03-01

    Quality assurance has increasingly become an integral part of medicine, with tandem goals of increasing patient safety and procedural quality, improving efficiency, lowering cost, and ultimately improving patient outcomes. This article reviews quality assurance methodology, ranging from the PDSA cycle to the application of lean techniques, aimed at operational efficiency, to continually evaluate and revise the health care environment. Alignment of goals for practices, hospitals, and healthcare organizations is critical, requiring clear objectives, adequate resources, and transparent reporting. In addition, there is a significant role played by regulatory bodies and oversight organizations in determining external benchmarks of quality, practice, and individual certification and reimbursement. Finally, practical application of quality principles to practice improvement projects in abdominal imaging will be presented.

  3. Automatic no-reference image quality assessment.

    PubMed

    Li, Hongjun; Hu, Wei; Xu, Zi-Neng

    2016-01-01

    No-reference image quality assessment aims to predict the visual quality of distorted images without examining the original image as a reference. Most no-reference image quality metrics proposed so far are designed for one or a set of predefined specific distortion types and are unlikely to generalize to images degraded by other types of distortion. There is a strong need for no-reference image quality assessment methods that are applicable to various distortions. In this paper, the authors propose a no-reference image quality assessment method based on a natural image statistics model in the wavelet transform domain. A generalized Gaussian density model is employed to summarize the marginal distribution of the wavelet coefficients of the test images, so that only the corresponding distribution parameters are needed for the evaluation of image quality. The proposed algorithm is tested on three large-scale benchmark databases. Experimental results demonstrate that the proposed algorithm is easy to implement and computationally efficient. Furthermore, our method can be applied to many well-known types of image distortions and achieves good prediction performance. PMID:27468398
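
    The generalized Gaussian fit to a wavelet subband can be sketched with the classic moment-ratio estimator, as below; this is a standard textbook estimator and not necessarily the exact procedure used by the authors.

```python
import numpy as np
from scipy.special import gamma
from scipy.optimize import brentq

def fit_ggd(coeffs):
    """Fit a zero-mean generalized Gaussian to wavelet subband coefficients
    using the moment-ratio method: match (E|x|)^2 / E[x^2] to
    Gamma(2/s)^2 / (Gamma(1/s) Gamma(3/s)) and solve for the shape s."""
    x = np.asarray(coeffs, dtype=float).ravel()
    rho = np.mean(np.abs(x)) ** 2 / np.mean(x ** 2)
    f = lambda s: gamma(2.0 / s) ** 2 / (gamma(1.0 / s) * gamma(3.0 / s)) - rho
    shape = brentq(f, 0.05, 10.0)                    # shape parameter
    scale = np.sqrt(np.mean(x ** 2) * gamma(1.0 / shape) / gamma(3.0 / shape))
    return shape, scale

# The (shape, scale) pairs from each subband would form the feature vector
# that a quality model of this kind evaluates.
```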

  4. JPEG2000 still image coding quality.

    PubMed

    Chen, Tzong-Jer; Lin, Sheng-Chieh; Lin, You-Chen; Cheng, Ren-Gui; Lin, Li-Hui; Wu, Wei

    2013-10-01

    This work compares the image quality produced by two popular JPEG2000 programs. The two medical image compression implementations are both based on JPEG2000, but they differ in interface, convenience, speed of computation, and characteristic options influenced by the encoder, quantization, tiling, etc. The differences in image quality and compression ratio are also affected by the modality and the compression algorithm implementation. Do they provide the same quality? The qualities of compressed medical images from two image compression programs named Apollo and JJ2000 were evaluated extensively using objective metrics. These algorithms were applied to three medical image modalities at various compression ratios ranging from 10:1 to 100:1. Following that, the quality of the reconstructed images was evaluated using five objective metrics. The Spearman rank correlation coefficients were measured for every metric between the two programs. We found that JJ2000 and Apollo exhibited indistinguishable image quality for all images evaluated using the above five metrics (r > 0.98, p < 0.001). It can be concluded that the image quality of the JJ2000 and Apollo algorithms is statistically equivalent for medical image compression. PMID:23589187

  5. Image quality scaling of electrophotographic prints

    NASA Astrophysics Data System (ADS)

    Johnson, Garrett M.; Patil, Rohit A.; Montag, Ethan D.; Fairchild, Mark D.

    2003-12-01

    Two psychophysical experiments were performed scaling overall image quality of black-and-white electrophotographic (EP) images. Six different printers were used to generate the images. There were six different scenes included in the experiment, representing photographs, business graphics, and test-targets. The two experiments were split into a paired-comparison experiment examining overall image quality, and a triad experiment judging overall similarity and dissimilarity of the printed images. The paired-comparison experiment was analyzed using Thurstone's Law, to generate an interval scale of quality, and with dual scaling, to determine the independent dimensions used for categorical scaling. The triad experiment was analyzed using multidimensional scaling to generate a psychological stimulus space. The psychophysical results indicated that the image quality was judged mainly along one dimension and that the relationships among the images can be described with a single dimension in most cases. Regression of various physical measurements of the images to the paired comparison results showed that a small number of physical attributes of the images could be correlated with the psychophysical scale of image quality. However, global image difference metrics did not correlate well with image quality.
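
    Thurstone's Case V scaling of the paired-comparison data can be sketched as follows: win proportions are converted to z-scores and averaged per stimulus to yield an interval quality scale. The vote-matrix layout is an assumption for illustration, not the authors' exact procedure.

```python
import numpy as np
from scipy.stats import norm

def thurstone_case_v(wins):
    """Interval quality scale from paired-comparison counts under Thurstone's
    Case V. wins[i, j] = how often print i was preferred over print j."""
    wins = wins.astype(float)
    totals = wins + wins.T
    with np.errstate(divide="ignore", invalid="ignore"):
        p = np.where(totals > 0, wins / totals, 0.5)
    np.fill_diagonal(p, 0.5)
    p = np.clip(p, 0.01, 0.99)          # avoid infinite z for unanimous pairs
    z = norm.ppf(p)
    scale = z.mean(axis=1)              # average z-score of each stimulus
    return scale - scale.min()          # anchor the lowest-quality print at 0
```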

  6. Imaging surveillance programs for women at high breast cancer risk in Europe: Are women from ethnic minority groups adequately included? (Review).

    PubMed

    Belkić, Karen; Cohen, Miri; Wilczek, Brigitte; Andersson, Sonia; Berman, Anne H; Márquez, Marcela; Vukojević, Vladana; Mints, Miriam

    2015-09-01

    Women from ethnic minority groups, including immigrants and refugees are reported to have low breast cancer (BC) screening rates. Active, culturally-sensitive outreach is vital for increasing participation of these women in BC screening programs. Women at high BC risk and who belong to an ethnic minority group are of special concern. Such women could benefit from ongoing trials aimed at optimizing screening strategies for early BC detection among those at increased BC risk. Considering the marked disparities in BC survival in Europe and its enormous and dynamic ethnic diversity, these issues are extremely timely for Europe. We systematically reviewed the literature concerning European surveillance studies that had imaging in the protocol and that targeted women at high BC risk. The aim of the present review was thereby to assess the likelihood that women at high BC risk from minority ethnic groups were adequately included in these surveillance programs. Twenty-seven research groups in Europe reported on their imaging surveillance programs for women at increased BC risk. The benefit of strategies such as inclusion of magnetic resonance imaging and/or more intensive screening was clearly documented for the participating women at increased BC risk. However, none of the reports indicated that sufficient outreach was performed to ensure that women at increased BC risk from minority ethnic groups were adequately included in these surveillance programs. On the basis of this systematic review, we conclude that the specific screening needs of ethnic minority women at increased BC risk have not yet been met in Europe. Active, culturally-sensitive outreach is needed to identify minority women at increased BC risk and to facilitate their inclusion in on-going surveillance programs. It is anticipated that these efforts would be most effective if coordinated with the development of European-wide, population-based approaches to BC screening. PMID:26134040

  8. WFC3 UVIS Image Quality

    NASA Astrophysics Data System (ADS)

    Dressel, Linda

    2009-07-01

    The UVIS imaging performance over the detector will be assessed periodically {every 4 months} in two passbands {F275W and F621M} to check for image stability. The field around star 58 in the open cluster NGC188 is the chosen target because it is sufficiently dense to provide good sampling over the FOV while providing enough isolated stars to permit accurate PSF {point spread function} measurement. It is available year-round and was used previously for ACS image quality assessment. The field is astrometric, and astrometric guide stars will be used, so that the plate scale and image orientation may also be determined if necessary {as in SMOV proposals 11436 and 11442}. Full frame images will be obtained at each of 4 POSTARG offset positions designed to improve sampling over the detector. This proposal is a periodic repeat {once every 4 months} of visits similar to those in SMOV proposal 11436 {activity ID WFC3-23}. The data will be analyzed using the code and techniques described in ISR WFC3 2008-40 {Hartig}. Profiles of encircled energy will be monitored and presented in an ISR. If an update to the SIAF is needed, {V2,V3} locations of stars will be obtained from the Flight Ops Sensors and Calibrations group at GSFC, the {V2,V3} of the reference pixel and the orientation of the detector will be determined by the WFC3 group, and the Telescopes group will update and deliver the SIAF to the PRDB branch. The specific PSF metrics to be examined are encircled energy for aperture diameters of 0.15, 0.20, 0.25, and 0.35 arcsec, FWHM, and sharpness. {See ISR WFC3 2008-40 tables 2 and 3 and preceding text.} About 20 stars distributed over the detector will be measured in each exposure for each filter. The mean, rms, and rms of the mean will be determined for each metric. The values determined from each of the 4 exposures per filter within a visit will be compared to each other to see to what extent they are affected by "breathing". Values will be compared from visit to visit, starting
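
    The encircled-energy metric can be sketched as below for a background-subtracted star cutout; the plate scale value is an assumed approximation, and the actual monitoring uses the code described in ISR WFC3 2008-40.

```python
import numpy as np

def encircled_energy(cutout, xc, yc, radius_arcsec, scale=0.04):
    """Fraction of a star's flux within a given aperture radius, from a
    background-subtracted cutout. scale is an assumed plate scale in
    arcsec/pixel, not the calibrated value used in the monitoring."""
    ny, nx = cutout.shape
    y, x = np.ogrid[:ny, :nx]
    r = np.hypot(x - xc, y - yc) * scale
    inside = cutout[r <= radius_arcsec].sum()
    return inside / cutout.sum()

# e.g. encircled energy within diameters of 0.15, 0.20, 0.25 and 0.35 arcsec
# would use radius_arcsec = 0.075, 0.10, 0.125 and 0.175 respectively.
```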

  9. WFC3 IR Image Quality

    NASA Astrophysics Data System (ADS)

    Dressel, Linda

    2009-07-01

    The IR imaging performance over the detector will be assessed periodically {every 4 months} in two passbands to check for image stability. The field around star 58 in the open cluster NGC188 is the chosen target because it is sufficiently dense to provide good sampling over the FOV while providing enough isolated stars to permit accurate PSF {point spread function} measurement. It is available year-round and was used previously for ACS image quality assessment. The field is astrometric, and astrometric guide stars will be used, so that the plate scale and image orientation may also be determined if necessary {as in SMOV proposals 11437 and 11443}. Full frame images will be obtained at each of 4 POSTARG offset positions designed to improve sampling over the detector in F098M, F105W, and F160W. The PSFs will be sampled at 4 positions with subpixel shifts in filters F164N and F127M. This proposal is a periodic repeat {once every 4 months} of the visits in SMOV proposal 11437 {activity ID WFC3-24}. The data will be analyzed using the code and techniques described in ISR WFC3 2008-41 {Hartig}. Profiles of encircled energy will be monitored and presented in an ISR. If an update to the SIAF is needed, {V2,V3} locations of stars will be obtained from the Flight Ops Sensors and Calibrations group at GSFC, the {V2,V3} of the reference pixel and the orientation of the detector will be determined by the WFC3 group, and the Telescopes group will update and deliver the SIAF to the PRDB branch. The specific PSF metrics to be examined are encircled energy for aperture diameters of 0.25, 0.37, and 0.60 arcsec, FWHM, and sharpness. {See ISR WFC3 2008-41 tables 2 and 3 and preceding text.} 20 stars distributed over the detector will be measured in each exposure for each filter. The mean, rms, and rms of the mean will be determined for each metric. The values determined from each of the 4 exposures per filter within a visit will be compared to each other to see to what extent they are affected

  10. Dose and diagnostic image quality in digital tomosynthesis imaging of facial bones in pediatrics

    NASA Astrophysics Data System (ADS)

    King, J. M.; Hickling, S.; Elbakri, I. A.; Reed, M.; Wrogemann, J.

    2011-03-01

    The purpose of this study was to evaluate the use of digital tomosynthesis (DT) for pediatric facial bone imaging. We compared the eye lens dose and diagnostic image quality of DT facial bone exams relative to digital radiography (DR) and computed tomography (CT), and investigated whether we could modify our current DT imaging protocol to reduce patient dose while maintaining sufficient diagnostic image quality. We measured the dose to the eye lens for all three modalities using high-sensitivity thermoluminescent dosimeters (TLDs) and an anthropomorphic skull phantom. To assess the diagnostic image quality of DT compared to the corresponding DR and CT images, we performed an observer study where the visibility of anatomical structures in the DT phantom images was rated on a four-point scale. We then acquired DT images at lower doses and had radiologists indicate whether the visibility of each structure was adequate for diagnostic purposes. For typical facial bone exams, we measured eye lens doses of 0.1-0.4 mGy for DR, 0.3-3.7 mGy for DT, and 26 mGy for CT. In general, facial bone structures were visualized better with DT than with DR, and the majority of structures were visualized well enough to avoid the need for CT. DT imaging provides high quality diagnostic images of the facial bones while delivering significantly lower doses to the lens of the eye compared to CT. In addition, we found that by adjusting the imaging parameters, the DT effective dose can be reduced by up to 50% while maintaining sufficient image quality.

  11. Body image and quality of life in a Spanish population

    PubMed Central

    Lobera, Ignacio Jáuregui; Ríos, Patricia Bolaños

    2011-01-01

    Purpose The aim of the current study was to analyze the psychometric properties, factor structure, and internal consistency of the Spanish version of the Body Image Quality of Life Inventory (BIQLI-SP) as well as its test–retest reliability. Further objectives were to analyze different relationships with key dimensions of psychosocial functioning (ie, self-esteem, presence of psychopathological symptoms, eating and body image-related problems, and perceived stress) and to evaluate differences in body image quality of life due to gender. Patients and methods The sample comprised 417 students without any psychiatric history, recruited from the Pablo de Olavide University and the University of Seville. There were 140 men (33.57%) and 277 women (66.43%), and the mean age was 21.62 years (standard deviation = 5.12). After obtaining informed consent from all participants, the following questionnaires were administered: BIQLI, Eating Disorder Inventory-2 (EDI-2), Perceived Stress Questionnaire (PSQ), Self-Esteem Scale (SES), and Symptom Checklist-90-Revised (SCL-90-R). Results The BIQLI-SP shows adequate psychometric properties, and it may be useful to determine the body image quality of life in different physical conditions. A more positive body image quality of life is associated with better self-esteem, better psychological wellbeing, and fewer eating-related dysfunctional attitudes, this being more evident among women. Conclusion The BIQLI-SP may be useful to determine the body image quality of life in different contexts with regard to dermatology, cosmetic and reconstructive surgery, and endocrinology, among others. In these fields of study, a new trend has emerged to assess body image-related quality of life. PMID:21403794

  12. Evaluation of overall setup accuracy and adequate setup margins in pelvic image-guided radiotherapy: Comparison of the male and female patients

    SciTech Connect

    Laaksomaa, Marko; Kapanen, Mika; Tulijoki, Tapio; Peltola, Seppo; Hyödynmaa, Simo; Kellokumpu-Lehtinen, Pirkko-Liisa

    2014-04-01

    We evaluated adequate setup margins for the radiotherapy (RT) of pelvic tumors based on overall position errors of bony landmarks. We also estimated the difference in setup accuracy between male and female patients. Finally, we compared patient rotation for 2 immobilization devices. The study cohort included 64 consecutive male and 64 consecutive female patients. Altogether, 1794 orthogonal setup images were analyzed. Observer-related deviation in image matching and the effect of patient rotation were explicitly determined. Overall systematic and random errors were calculated in 3 orthogonal directions. Anisotropic setup margins were evaluated based on residual errors after weekly image guidance. The van Herk formula was used to calculate the margins. Overall, 100 patients were immobilized with an in-house device. Patient rotation was compared against 28 patients immobilized with CIVCO's Kneefix and Feetfix. We found that the usually applied isotropic setup margin of 8 mm covered all the uncertainties related to patient setup for most RT treatments of the pelvis. However, margins of up to 10.3 mm were needed for female patients with very large pelvic target volumes centered either in the symphysis or in the sacrum and containing both of these structures. This was because the effect of rotation (p ≤ 0.02) and the observer variation in image matching (p ≤ 0.04) were significantly larger for the female patients than for the male patients. Even with daily image guidance, the required margins remained larger for the women. Patient rotations were largest about the lateral axes. The difference between the required margins was only 1 mm for the 2 immobilization devices. The largest component of overall systematic position error came from patient rotation. This emphasizes the need for rotation correction. Overall, larger position errors and setup margins were observed for the female patients with pelvic cancer than for the male patients.

  13. Combined terahertz imaging system for enhanced imaging quality

    NASA Astrophysics Data System (ADS)

    Dolganova, Irina N.; Zaytsev, Kirill I.; Metelkina, Anna A.; Yakovlev, Egor V.; Karasik, Valeriy E.; Yurchenko, Stanislav O.

    2016-06-01

    An improved terahertz (THz) imaging system is proposed for enhancing image quality. The imaging scheme includes a THz source and a detection system operated in both active and passive modes. In order to illuminate the object plane homogeneously, a THz beam reshaper is proposed. The form and internal structure of the reshaper were studied by numerical simulation. Using different test objects, we compare imaging quality in the active and passive THz imaging modes. Image contrast and modulation transfer functions reveal the drawbacks of the active and passive modes at high and low spatial frequencies, respectively. The experimental results confirm the benefit of combining both imaging modes into a hybrid one. The proposed algorithm for forming a hybrid THz image is an effective approach to retrieving the maximum information about the remote object.

  14. Optimization of synthetic aperture image quality

    NASA Astrophysics Data System (ADS)

    Moshavegh, Ramin; Jensen, Jonas; Villagomez-Hoyos, Carlos A.; Stuart, Matthias B.; Hemmsen, Martin Christian; Jensen, Jørgen Arendt

    2016-04-01

    Synthetic Aperture (SA) imaging produces high-quality images and velocity estimates of both slow and fast flow at high frame rates. However, grating lobe artifacts can appear both in transmission and reception. These affect the image quality and the frame rate. Therefore, optimization of the parameters affecting SA image quality is of great importance, and this paper proposes an advanced procedure for optimizing the parameters essential for acquiring optimal image quality while generating high resolution SA images. Optimization of the image quality is mainly performed based on measures such as the F-number, the number of emissions and the aperture size. They are considered the acquisition factors that contribute most to the quality of the high resolution images in SA. The image quality performance is therefore quantified in terms of the full-width at half maximum (FWHM) and the cystic resolution (CTR). The results of the study showed that SA imaging with only 32 emissions and a maximum sweep angle of 22 degrees yields a very good image quality compared with using 256 emissions and the full aperture size. Therefore, the number of emissions and the maximum sweep angle in SA can be optimized to reach a reasonably good performance and to increase the frame rate by lowering the required number of emissions. All the measurements are performed using the experimental SARUS scanner connected to a λ/2-pitch transducer. A wire phantom and a tissue-mimicking phantom containing anechoic cysts are scanned using the optimized parameters for the transducer. Measurements coincide with simulations.

  15. Automatic quality assessment of planetary images

    NASA Astrophysics Data System (ADS)

    Sidiropoulos, P.; Muller, J.-P.

    2015-10-01

    A significant fraction of planetary images are corrupted beyond the point that much scientific meaning can be extracted. For example, transmission errors result in missing data which is unrecoverable. The available planetary image datasets include many such "bad data", which both occupy valuable scientific storage resources and create false impressions about planetary image availability for specific planetary objects or target areas. In this work, we demonstrate a pipeline that we have developed to automatically assess the quality of planetary images. Additionally, this method discriminates between different types of image degradation, such as low-quality originating from camera flaws or low-quality triggered by atmospheric conditions, etc. Examples of quality assessment results for Viking Orbiter imagery will be also presented.

  16. Image Quality Ranking Method for Microscopy

    PubMed Central

    Koho, Sami; Fazeli, Elnaz; Eriksson, John E.; Hänninen, Pekka E.

    2016-01-01

    Automated analysis of microscope images is necessitated by the increased need for high-resolution follow up of events in time. Manually finding the right images to be analyzed, or eliminated from data analysis are common day-to-day problems in microscopy research today, and the constantly growing size of image datasets does not help the matter. We propose a simple method and a software tool for sorting images within a dataset, according to their relative quality. We demonstrate the applicability of our method in finding good quality images in a STED microscope sample preparation optimization image dataset. The results are validated by comparisons to subjective opinion scores, as well as five state-of-the-art blind image quality assessment methods. We also show how our method can be applied to eliminate useless out-of-focus images in a High-Content-Screening experiment. We further evaluate the ability of our image quality ranking method to detect out-of-focus images, by extensive simulations, and by comparing its performance against previously published, well-established microscopy autofocus metrics. PMID:27364703
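
    A classical baseline for the out-of-focus detection task is the variance-of-Laplacian focus measure sketched below; it is not the ranking method proposed in the paper, only a simple point of comparison of the kind the authors evaluate against.

```python
import numpy as np
from scipy.ndimage import laplace

def focus_score(img):
    """Variance of the Laplacian, a common autofocus/blur metric: out-of-focus
    frames score low. This is a classical baseline, not the ranking method
    proposed in the paper."""
    return float(laplace(img.astype(float)).var())

def rank_by_focus(images):
    """Sort a list of 2-D arrays from sharpest to blurriest."""
    return sorted(range(len(images)), key=lambda i: focus_score(images[i]),
                  reverse=True)
```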

  17. Image Quality Ranking Method for Microscopy

    NASA Astrophysics Data System (ADS)

    Koho, Sami; Fazeli, Elnaz; Eriksson, John E.; Hänninen, Pekka E.

    2016-07-01

    Automated analysis of microscope images is necessitated by the increased need for high-resolution follow up of events in time. Manually finding the right images to be analyzed, or eliminated from data analysis are common day-to-day problems in microscopy research today, and the constantly growing size of image datasets does not help the matter. We propose a simple method and a software tool for sorting images within a dataset, according to their relative quality. We demonstrate the applicability of our method in finding good quality images in a STED microscope sample preparation optimization image dataset. The results are validated by comparisons to subjective opinion scores, as well as five state-of-the-art blind image quality assessment methods. We also show how our method can be applied to eliminate useless out-of-focus images in a High-Content-Screening experiment. We further evaluate the ability of our image quality ranking method to detect out-of-focus images, by extensive simulations, and by comparing its performance against previously published, well-established microscopy autofocus metrics.

  18. End-to-end image quality assessment

    NASA Astrophysics Data System (ADS)

    Raventos, Joaquin

    2012-05-01

    An innovative computerized benchmarking approach (US patent pending, September 2011) is described, based on extensive application of photometry, geometrical optics, and digital media, using a randomized target for a standard observer to assess the image quality of video imaging systems at different daytime and low-light luminance levels. It takes into account the target's contrast and color characteristics, as well as the observer's visual acuity and dynamic response. This includes human vision as part of the "extended video imaging system" (EVIS), and allows image quality assessment by several standard observers simultaneously.

  19. Cartographic quality of ERTS-1 images

    NASA Technical Reports Server (NTRS)

    Welch, R. I.

    1973-01-01

    Analyses of simulated and operational ERTS images have provided initial estimates of resolution, ground resolution, detectability thresholds and other measures of image quality of interest to earth scientists and cartographers. Based on these values, including an approximate ground resolution of 250 meters for both RBV and MSS systems, the ERTS-1 images appear suited to the production and/or revision of planimetric and photo maps of 1:500,000 scale and smaller for which map accuracy standards are compatible with the imaged detail. Thematic mapping, although less constrained by map accuracy standards, will be influenced by measurement thresholds and errors which have yet to be accurately determined for ERTS images. This study also indicates the desirability of establishing a quantitative relationship between image quality values and map products which will permit both engineers and cartographers/earth scientists to contribute to the design requirements of future satellite imaging systems.

  20. How much image noise can be added in cardiac x-ray imaging without loss in perceived image quality?

    NASA Astrophysics Data System (ADS)

    Gislason-Lee, Amber J.; Kumcu, Asli; Kengyelics, Stephen M.; Rhodes, Laura A.; Davies, Andrew G.

    2015-03-01

    Dynamic X-ray imaging systems are used for interventional cardiac procedures to treat coronary heart disease. X-ray settings are controlled automatically by specially designed X-ray dose control mechanisms whose role is to ensure that an adequate level of image quality is maintained with an acceptable radiation dose to the patient. Current commonplace dose control designs quantify image quality by performing a simple technical measurement directly from the image. However, the utility of cardiac X-ray images lies in their interpretation by a cardiologist during an interventional procedure, rather than in a technical measurement. With the long-term goal of devising a clinically relevant image quality metric for an intelligent dose control system, we aim to investigate the relationship of image noise with clinical professionals' perception of dynamic image sequences. Computer-generated noise was added, in incremental amounts, to angiograms of five different patients selected to represent the range of adult cardiac patient sizes. A two-alternative forced-choice staircase experiment was used to determine the amount of noise which can be added to patient image sequences without changing image quality as perceived by clinical professionals. Twenty-five viewing sessions (five for each patient) were completed by thirteen observers. Results demonstrated scope to increase the noise of cardiac X-ray images by up to 21% +/- 8% before it is noticeable by clinical professionals. This indicates a potential for a 21% radiation dose reduction, since X-ray image noise and radiation dose are directly related; this would be beneficial to both patients and personnel.

  1. Continuous assessment of perceptual image quality

    NASA Astrophysics Data System (ADS)

    Hamberg, Roelof; de Ridder, Huib

    1995-12-01

    The study addresses whether subjects are able to assess the perceived quality of an image sequence continuously. To this end, a new method for assessing time-varying perceptual image quality is presented by which subjects continuously indicate the perceived strength of image quality by moving a slider along a graphical scale. The slider's position on this scale is sampled every second. In this way, temporal variations in quality can be monitored quantitatively, and a means is provided by which differences between, for example, alternative transmission systems can be analyzed in an informative way. The usability of this method is illustrated by an experiment in which, for a period of 815 s, subjects assessed the quality of still pictures comprising time-varying degrees of sharpness. Copyright (c) 1995 Optical Society of America

  2. Rendered virtual view image objective quality assessment

    NASA Astrophysics Data System (ADS)

    Lu, Gang; Li, Xiangchun; Zhang, Yi; Peng, Kai

    2013-08-01

    Research on rendered virtual view image (RVVI) objective quality assessment is important for integrated imaging systems and image quality assessment (IQA). Traditional IQA algorithms cannot be applied directly on the system receiver side due to inter-view displacement and the absence of an original reference. This study proposes a block-based neighbor reference (NbR) IQA framework for RVVI IQA. Neighbor views used for rendering are employed for quality assessment in the proposed framework. A symphonious factor handling noise and inter-view displacement is defined and applied to evaluate the contribution of the obtained quality index in each block pair. A three-stage experiment scheme is also presented to verify the proposed framework and evaluate its homogeneity performance when compared with full-reference IQA. Experimental results show the proposed framework is useful in RVVI objective quality assessment at the system receiver side and in benchmarking different rendering algorithms.

  3. Tradeoffs between image quality and dose.

    PubMed

    Seibert, J Anthony

    2004-10-01

    Image quality takes on different perspectives and meanings when associated with the concept of as low as reasonably achievable (ALARA), which is chiefly focused on radiation dose delivered as a result of a medical imaging procedure. ALARA is important because of the increased radiosensitivity of children to ionizing radiation and the desire to keep the radiation dose low. By the same token, however, image quality is also important because of the need to provide the necessary information in a radiograph in order to make an accurate diagnosis. Thus, there are tradeoffs to be considered between image quality and radiation dose, which is the main topic of this article. ALARA does not necessarily mean the lowest radiation dose, nor, when implemented, does it result in the least desirable radiographic images. With the recent widespread implementation of digital radiographic detectors and displays, a new level of flexibility and complexity confronts the technologist, physicist, and radiologist in optimizing the pediatric radiography exam. This is due to the separation of the acquisition, display, and archiving events that were previously combined by the screen-film detector, which allows for compensation for under- and overexposures, image processing, and on-line image manipulation. As explained in the article, different concepts must be introduced for a better understanding of the tradeoffs encountered when dealing with digital radiography and ALARA. In addition, there are many instances during the image acquisition/display/interpretation process in which image quality and associated dose can be compromised. This requires continuous diligence to quality control and feedback mechanisms to verify that the goals of image quality, dose and ALARA are achieved.

  4. Image Acquisition and Quality in Digital Radiography.

    PubMed

    Alexander, Shannon

    2016-09-01

    Medical imaging has undergone dramatic changes and technological breakthroughs since the introduction of digital radiography. This article presents information on the development of digital radiography and types of digital radiography systems. Aspects of image quality and radiation exposure control are highlighted as well. In addition, the article includes related workplace changes and medicolegal considerations in the digital radiography environment. PMID:27601691

  5. Image quality and automatic color equalization

    NASA Astrophysics Data System (ADS)

    Chambah, M.; Rizzi, A.; Saint Jean, C.

    2007-01-01

    In the professional movie field, image quality is mainly judged visually. In fact, experts and technicians judge and determine the quality of the film images during the calibration (post-production) process. As a consequence, the quality of a restored movie is also estimated subjectively by experts [26,27]. On the other hand, objective quality metrics do not necessarily correlate well with perceived quality [28]. Moreover, some measures assume that there exists a reference in the form of an "original" to compare to, which prevents their use in the digital restoration field, where often there is no reference to compare to. That is why subjective evaluation is the most used and most efficient approach up to now. But subjective assessment is expensive, time consuming and hence does not respond to the economic requirements of the field [29,25]. Thus, reliable automatic methods for visual quality assessment are needed in the field of digital film restoration. Ideally, a quality assessment system would perceive and measure image or video impairments just like a human being. The ACE method, for Automatic Color Equalization [1,2], is an algorithm for unsupervised enhancement of digital images. Like our visual system, ACE is able to adapt to widely varying lighting conditions and to extract visual information from the environment efficaciously. What we present in this paper is the use of ACE as the basis of a reference-free image quality metric. ACE output is an estimate of our visual perception of a scene. The assumption, tested in other papers [3,4], is that ACE, by enhancing images in the way our visual system will perceive them, increases their overall perceived quality. The basic idea proposed in this paper is that ACE output can differ from the input more or less according to the visual quality of the input image. In other words, an image appears good if it is near to the visual appearance we (estimate to) have of it. Conversely, bad-quality images will need "more filtering". Test

  6. Holographic projection with higher image quality.

    PubMed

    Qu, Weidong; Gu, Huarong; Tan, Qiaofeng

    2016-08-22

    The spatial resolution limited by the size of the spatial light modulator (SLM) in holographic projection can hardly be increased, and speckle noise always appears, inducing degradation of image quality. In this paper, a holographic projection with higher image quality is presented. The spatial resolution of the reconstructed image is twice that of the existing holographic projection, and speckles are suppressed well at the same time. Finally, the effectiveness of the holographic projection is verified in experiments. PMID:27557197

  7. Measurement and control of color image quality

    NASA Astrophysics Data System (ADS)

    Schneider, Eric; Johnson, Kate; Wolin, David

    1998-12-01

    Color hardcopy output is subject to many of the same image quality concerns as monochrome hardcopy output. Line and dot quality, uniformity, halftone quality, and the presence of bands, spots or deletions are just a few of the attributes shared by both color and monochrome output. Although measurement of color requires the use of specialized instrumentation, the techniques used to assess color-dependent image quality attributes on color hardcopy output are based on many of the same techniques as those used in monochrome image quality quantification. In this paper we present several different aspects of color quality assessment in both R&D and production environments, as well as several examples of color quality measurements similar to those currently being used at Hewlett-Packard to characterize color devices and to verify system performance. We then discuss some important considerations for choosing appropriate color quality measurement equipment for use in either R&D or production environments. Finally, we discuss the critical relationship between objective measurements and human perception.

  8. A database for spectral image quality

    NASA Astrophysics Data System (ADS)

    Le Moan, Steven; George, Sony; Pedersen, Marius; Blahová, Jana; Hardeberg, Jon Yngve

    2015-01-01

    We introduce a new image database dedicated to multi-/hyperspectral image quality assessment. A total of nine scenes representing pseudo-flat surfaces of different materials (textile, wood, skin, etc.) were captured by means of a 160-band hyperspectral system with a spectral range between 410 and 1000 nm. Five spectral distortions were designed, applied to the spectral images and subsequently compared in a psychometric experiment, in order to provide a basis for applications such as the evaluation of spectral image difference measures. The database can be downloaded freely from http://www.colourlab.no/cid.

  9. Monotonic correlation analysis of image quality measures for image fusion

    NASA Astrophysics Data System (ADS)

    Kaplan, Lance M.; Burks, Stephen D.; Moore, Richard K.; Nguyen, Quang

    2008-04-01

    The next generation of night vision goggles will fuse image-intensified and long-wave infrared imagery to create a hybrid image that will enable soldiers to better interpret their surroundings during nighttime missions. Paramount to the development of such goggles is the exploitation of image quality (IQ) measures to automatically determine the best image fusion algorithm for a particular task. This work introduces a novel monotonic correlation coefficient to investigate how well possible IQ features correlate to actual human performance, which is measured by a perception study. The paper demonstrates how monotonic correlation can identify worthy features that could be overlooked by traditional correlation values.

  10. Maximising image quality in small spaces.

    PubMed

    Alford, Arezoo; Brinkworth, Simon

    2015-06-01

    A Medical Illustration Department may need to set up a studio in a space that is not designed for that purpose. This joint paper describes the attempts of two separate trusts, University Hospitals Bristol NHS Foundation Trust (UHB) and Norfolk & Norwich University Hospitals (NNUH), to refurbish unusually small studio spaces of 4 m × 2 m. Each trust had a substantially different project budget and faced separate obstacles, but both had a shared aim: to maximise the limited studio space and enhance the quality of images produced. The outcome at both trusts is a significant improvement in image quality.

  11. Subjective matters: from image quality to image psychology

    NASA Astrophysics Data System (ADS)

    Fedorovskaya, Elena A.; De Ridder, Huib

    2013-03-01

    From the advent of digital imaging through several decades of studies, the human vision research community systematically focused on perceived image quality and digital artifacts due to resolution, compression, gamma, dynamic range, capture and reproduction noise, blur, etc., to help overcome existing technological challenges and shortcomings. Technological advances made digital images and digital multimedia nearly flawless in quality, and ubiquitous and pervasive in usage, provide us with the exciting but at the same time demanding possibility to turn to the domain of human experience including higher psychological functions, such as cognition, emotion, awareness, social interaction, consciousness and Self. In this paper we will outline the evolution of human centered multidisciplinary studies related to imaging and propose steps and potential foci of future research.

  12. Quantification of image quality using information theory.

    PubMed

    Niimi, Takanaga; Maeda, Hisatoshi; Ikeda, Mitsuru; Imai, Kuniharu

    2011-12-01

    The aims of the present study were to examine the usefulness of information theory in the visual assessment of image quality. We applied a first-order approximation of Shannon's information theory to compute information losses (IL). Images of a contrast-detail mammography (CDMAM) phantom were acquired with computed radiography for various radiation doses. Information content was defined as the entropy Σ p(i) log(1/p(i)), in which the detection probabilities p(i) were calculated from the distribution of detection rates of the CDMAM. IL was defined as the difference between the information content and the information obtained. IL decreased with increases in the disk diameters (P < 0.0001, ANOVA) and in the radiation doses (P < 0.002, F-test). Sums of IL, which we call total information losses (TIL), were closely correlated with the image quality figures (r = 0.985). TIL was dependent on the distribution of image reading ability of each examinee, even when the average reading ratio was the same in the group. TIL was shown to be sensitive to the observers' distribution of image readings and is expected to improve the evaluation of image quality.
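
    The entropy and information-loss quantities described above can be sketched as follows. The abstract does not specify how the "information obtained" term is estimated, so here it is simply passed in as a value; the detection-probability array and base-2 logarithm are illustrative assumptions rather than the authors' exact implementation.

      import numpy as np

      def entropy_bits(probs):
          """Shannon information content: sum_i p_i * log2(1 / p_i), in bits."""
          p = np.asarray(probs, dtype=float)
          p = p[p > 0]                      # ignore zero-probability outcomes
          return float(np.sum(p * np.log2(1.0 / p)))

      def information_loss(detection_probs, information_obtained):
          """IL = information content - information obtained (both in bits)."""
          return entropy_bits(detection_probs) - information_obtained

      # Illustrative detection-rate distribution over possible outcomes
      p = [0.6, 0.25, 0.1, 0.05]
      print(entropy_bits(p))                # information content, ~1.49 bits
      print(information_loss(p, 1.2))       # loss if observers extracted 1.2 bits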

  13. Characteristic functionals in imaging and image-quality assessment: tutorial.

    PubMed

    Clarkson, Eric; Barrett, Harrison H

    2016-08-01

    Characteristic functionals are one of the main analytical tools used to quantify the statistical properties of random fields and generalized random fields. The viewpoint taken here is that a random field is the correct model for the ensemble of objects being imaged by a given imaging system. In modern digital imaging systems, random fields are not used to model the reconstructed images themselves since these are necessarily finite dimensional. After a brief introduction to the general theory of characteristic functionals, many examples relevant to imaging applications are presented. The propagation of characteristic functionals through both a binned and list-mode imaging system is also discussed. Methods for using characteristic functionals and image data to estimate population parameters and classify populations of objects are given. These methods are based on maximum likelihood and maximum a posteriori techniques in spaces generated by sampling the relevant characteristic functionals through the imaging operator. It is also shown how to calculate a Fisher information matrix in this space. These estimators and classifiers, and the Fisher information matrix, can then be used for image quality assessment of imaging systems.

  14. Does resolution really increase image quality?

    NASA Astrophysics Data System (ADS)

    Tisse, Christel-Loïc; Guichard, Frédéric; Cao, Frédéric

    2008-02-01

    A general trend in the CMOS image sensor market is for increasing resolution (by having a larger number of pixels) while keeping a small form factor by shrinking photosite size. This article discusses the impact of this trend on some of the main attributes of image quality. The first example is image sharpness. A smaller pitch theoretically allows a larger limiting resolution which is derived from the Modulation Transfer Function (MTF). But recent sensor technologies (1.75μm, and soon 1.45μm) with typical aperture f/2.8 are clearly reaching the size of the diffraction blur spot. A second example is the impact on pixel light sensitivity and image sensor noise. For photonic noise, the Signal-to-Noise-Ratio (SNR) is typically a decreasing function of the resolution. To evaluate whether shrinking pixel size could be beneficial to the image quality, the tradeoff between spatial resolution and light sensitivity is examined by comparing the image information capacity of sensors with varying pixel size. A theoretical analysis that takes into consideration measured and predictive models of pixel performance degradation and improvement associated with CMOS imager technology scaling, is presented. This analysis is completed by a benchmarking of recent commercial sensors with different pixel technologies.
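
    For context on the claim that 1.75 μm and 1.45 μm pixels at f/2.8 approach the diffraction blur spot, the Airy-disk diameter can be compared with the pixel pitch. A minimal sketch using the standard 2.44 λ N expression, with the green wavelength chosen here as an assumption:

      def airy_disk_diameter_um(wavelength_um, f_number):
          """Diameter of the first Airy null (diffraction blur spot), in micrometres."""
          return 2.44 * wavelength_um * f_number

      # Green light at f/2.8: roughly a 3.8 um blur spot,
      # i.e. more than two 1.75 um pixels and well over two 1.45 um pixels.
      print(airy_disk_diameter_um(0.55, 2.8))   # ~3.76 um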

  15. Image Quality Indicator for Infrared Inspections

    NASA Technical Reports Server (NTRS)

    Burke, Eric

    2011-01-01

    The quality of images generated during an infrared thermal inspection depends on many system variables, settings, and parameters to include the focal length setting of the IR camera lens. If any relevant parameter is incorrect or sub-optimal, the resulting IR images will usually exhibit inherent unsharpness and lack of resolution. Traditional reference standards and image quality indicators (IQIs) are made of representative hardware samples and contain representative flaws of concern. These standards are used to verify that representative flaws can be detected with the current IR system settings. However, these traditional standards do not enable the operator to quantify the quality limitations of the resulting images, i.e. determine the inherent maximum image sensitivity and image resolution. As a result, the operator does not have the ability to optimize the IR inspection system prior to data acquisition. The innovative IQI described here eliminates this limitation and enables the operator to objectively quantify and optimize the relevant variables of the IR inspection system, resulting in enhanced image quality with consistency and repeatability in the inspection application. The IR IQI consists of various copper foil features of known sizes that are printed on a dielectric non-conductive board. The significant difference in thermal conductivity between the two materials ensures that each appears with a distinct grayscale or brightness in the resulting IR image. Therefore, the IR image of the IQI exhibits high contrast between the copper features and the underlying dielectric board, which is required to detect the edges of the various copper features. The copper features consist of individual elements of various shapes and sizes, or of element-pairs of known shapes and sizes and with known spacing between the elements creating the pair. For example, filled copper circles with various diameters can be used as individual elements to quantify the image sensitivity

  16. Quality evaluation of fruit by hyperspectral imaging

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This chapter presents new applications of hyperspectral imaging for measuring the optical properties of fruits and assessing their quality attributes. A brief overview is given of current techniques for measuring optical properties of turbid and opaque biological materials. Then a detailed descripti...

  17. Scene reduction for subjective image quality assessment

    NASA Astrophysics Data System (ADS)

    Lewandowska (Tomaszewska), Anna

    2016-01-01

    Evaluation of image quality is important for many image processing systems, such as those used for acquisition, compression, restoration, enhancement, or reproduction. Its measurement is often accompanied by user studies, in which a group of observers rank or rate the results of several algorithms. Such user studies, known as subjective image quality assessment experiments, can be very time consuming and do not guarantee conclusive results. This paper is intended to help design an efficient and rigorous quality assessment experiment. We propose a method of limiting the number of scenes that need to be tested, which can significantly reduce the experimental effort and still capture relevant scene-dependent effects. To achieve this, we employ a clustering technique and evaluate it on the basis of compactness and separation criteria. The correlation between the results obtained from a set of images in an initial database and the results obtained from the reduced experiment is analyzed. Finally, we propose a procedure for reducing the number of scenes in the initial set. Four different assessment techniques were tested: single stimulus, double stimulus, forced choice, and similarity judgments. We conclude that in most cases, 9 to 12 judgments per evaluated algorithm for a large scene collection is sufficient to reduce the initial set of images.

  18. Prediction of Viking lander camera image quality

    NASA Technical Reports Server (NTRS)

    Huck, F. O.; Burcher, E. E.; Jobson, D. J.; Wall, S. D.

    1976-01-01

    Formulations are presented that permit prediction of image quality as a function of camera performance, surface radiance properties, and lighting and viewing geometry. Predictions made for a wide range of surface radiance properties reveal that image quality depends strongly on proper camera dynamic range command and on favorable lighting and viewing geometry. Proper camera dynamic range commands depend mostly on the surface albedo that will be encountered. Favorable lighting and viewing geometries depend mostly on lander orientation with respect to the diurnal sun path over the landing site, and tend to be independent of surface albedo and illumination scattering function. Side lighting with low sun elevation angles (10 to 30 deg) is generally favorable for imaging spatial details and slopes, whereas high sun elevation angles are favorable for measuring spectral reflectances.

  19. Physical measures of image quality in mammography

    NASA Astrophysics Data System (ADS)

    Chakraborty, Dev P.

    1996-04-01

    A recently introduced method for quantitative analysis of images of the American College of Radiology (ACR) mammography accreditation phantom has been extended to include signal-to-noise ratio (SNR) measurements, and has been applied to survey the image quality of 54 mammography machines from 17 hospitals. Participants sent us phantom images to be evaluated for each mammography machine at their hospital. Each phantom was loaned to us for obtaining images of the wax insert plate on a reference machine at our institution. The images were digitized and analyzed to yield indices that quantified the image quality of the machines precisely. We have developed methods for normalizing for the variation of the individual speck sizes between different ACR phantoms, for the variation of the speck sizes within a microcalcification group, and for variations in the overall speeds of the mammography systems. In terms of the microcalcification SNR, the variability of the x-ray machines was 40.5% when no allowance was made for phantom or mAs variations. This dropped to 17.1% when phantom variability was accounted for, and to 12.7% when mAs variability was also allowed for. Our work shows the feasibility of practical, low-cost, objective and accurate evaluations, as a useful adjunct to the present ACR method.

  20. Medical Imaging Image Quality Assessment with Monte Carlo Methods

    NASA Astrophysics Data System (ADS)

    Michail, C. M.; Karpetas, G. E.; Fountos, G. P.; Kalyvas, N. I.; Martini, Niki; Koukou, Vaia; Valais, I. G.; Kandarakis, I. S.

    2015-09-01

    The aim of the present study was to assess the image quality of PET scanners through a thin layer chromatography (TLC) plane source. The source was simulated using a previously validated Monte Carlo model. The model was developed using the GATE MC package, and reconstructed images were obtained with the STIR software for tomographic image reconstruction, with cluster computing. The PET scanner simulated in this study was the GE DiscoveryST. A plane source consisting of a TLC plate was simulated by a layer of silica gel on aluminum (Al) foil substrates, immersed in an 18F-FDG bath solution (1 MBq). Image quality was assessed in terms of the Modulation Transfer Function (MTF). MTF curves were estimated from transverse reconstructed images of the plane source. Images were reconstructed by the maximum likelihood estimation (MLE)-OSMAPOSL algorithm. OSMAPOSL reconstruction was assessed by using various subsets (3 to 21) and iterations (1 to 20), as well as by using various beta (hyper) parameter values. MTF values were found to increase up to the 12th iteration and to remain almost constant thereafter. MTF improves by using lower beta values. The simulated PET evaluation method based on the TLC plane source can also be useful in research for the further development of PET and SPECT scanners through GATE simulations.
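
    The MTF figure of merit used above can be estimated from a one-dimensional profile across the reconstructed plane source. A minimal sketch under simplified assumptions (a line-spread function extracted from the transverse image, uniform sampling, normalization at zero frequency); this is not the authors' GATE/STIR pipeline.

      import numpy as np

      def mtf_from_lsf(lsf, pixel_mm):
          """Modulation transfer function from a 1-D line spread function.

          lsf      : samples across the reconstructed image of the plane source
          pixel_mm : pixel spacing in mm
          Returns (spatial frequencies in cycles/mm, MTF normalized to 1 at f = 0).
          """
          lsf = np.asarray(lsf, dtype=float)
          lsf = lsf - lsf.min()                    # remove background offset
          spectrum = np.abs(np.fft.rfft(lsf))
          mtf = spectrum / spectrum[0]             # normalize to DC
          freqs = np.fft.rfftfreq(lsf.size, d=pixel_mm)
          return freqs, mtf

      # Example: Gaussian LSF sampled on a 1 mm grid
      x = np.arange(-32, 32)
      f, m = mtf_from_lsf(np.exp(-x**2 / (2 * 3.0**2)), 1.0)
      print(f[:5], m[:5])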

  1. Detection of image quality metamers based on the metric for unified image quality

    NASA Astrophysics Data System (ADS)

    Miyata, Kimiyoshi; Tsumura, Norimichi

    2012-01-01

    In this paper, we introduce the concept of image quality metamerism as an expanded version of the metamerism defined in color science. The concept is used to unify different image quality attributes, and is applied to introduce a metric showing the degree of image quality metamerism for analyzing a cultural property. Our global goal is to build a metric to evaluate the total quality of images acquired by different imaging systems and observed under different viewing conditions. As a basic step towards this global goal, the metric consists of color, spectral and texture information in this research, and is applied to detect image quality metamers to investigate the cultural property. The property investigated is the oldest extant version of folding screen paintings that depict the thriving city of Kyoto, designated as a nationally important cultural property in Japan. Gold-colored areas, painted using colorants of higher granularity than those in other color areas of the property, are evaluated based on the metric, and the metric is then visualized as a map showing the possibility of an image quality metamer with respect to the reference pixel.

  2. Agreement between objective and subjective assessment of image quality in ultrasound abdominal aortic aneurism screening

    PubMed Central

    Wolstenhulme, S; Keeble, C; Moore, S; Evans, J A

    2015-01-01

    Objective: To investigate agreement between objective and subjective assessment of image quality of ultrasound scanners used for abdominal aortic aneurysm (AAA) screening. Methods: Nine ultrasound scanners were used to acquire longitudinal and transverse images of the abdominal aorta. 100 images were acquired per scanner from which 5 longitudinal and 5 transverse images were randomly selected. 33 practitioners scored 90 images blinded to the scanner type and subject characteristics and were required to state whether or not the images were of adequate diagnostic quality. Odds ratios were used to rank the subjective image quality of the scanners. For objective testing, three standard test objects were used to assess penetration and resolution and used to rank the scanners. Results: The subjective diagnostic image quality was ten times greater for the highest ranked scanner than for the lowest ranked scanner. It was greater at depths of <5.0 cm (odds ratio, 6.69; 95% confidence interval, 3.56, 12.57) than at depths of 15.1–20.0 cm. There was a larger range of odds ratios for transverse images than for longitudinal images. No relationship was seen between subjective scanner rankings and test object scores. Conclusion: Large variation was seen in the image quality when evaluated both subjectively and objectively. Objective scores did not predict subjective scanner rankings. Further work is needed to investigate the utility of both subjective and objective image quality measurements. Advances in knowledge: Ratings of clinical image quality and image quality measured using test objects did not agree, even in the limited scenario of AAA screening. PMID:25494526
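
    The odds ratios used above to rank subjective scanner quality can be computed from counts of images judged adequate and inadequate. A minimal sketch using the usual normal approximation on the log odds ratio; the counts are hypothetical and not the study's data.

      import math

      def odds_ratio_ci(a, b, c, d, z=1.96):
          """Odds ratio and 95% CI from a 2x2 table.

          a, b : adequate / inadequate counts for group 1
          c, d : adequate / inadequate counts for group 2
          """
          or_ = (a * d) / (b * c)
          se = math.sqrt(1/a + 1/b + 1/c + 1/d)       # SE of log odds ratio
          lo = math.exp(math.log(or_) - z * se)
          hi = math.exp(math.log(or_) + z * se)
          return or_, (lo, hi)

      # Hypothetical counts: 120/30 adequate/inadequate vs 80/70
      print(odds_ratio_ci(120, 30, 80, 70))   # OR = 3.5, CI ~ (2.1, 5.8)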

  3. High Image Quality Laser Color Printer

    NASA Astrophysics Data System (ADS)

    Nagao, Kimitoshi; Morimoto, Yoshinori

    1989-07-01

    A laser color printer has been developed to depict continuous-tone color images on photographic color film or color paper with high resolution and fidelity. We have used three lasers, He-Cd (441.6 nm), Ar+ (514.5 nm), and He-Ne (632.8 nm), for blue, green, and red exposures. We have employed a drum scanner for two-dimensional scanning. The maximum resolution of our system is 40 c/mm (80 lines/mm) and the accuracy of density reproduction is within 1.0 when measured in color difference, where most observers cannot distinguish the difference. The scanning artifacts and noise are diminished to a visually negligible level. The image quality of output images compares well to that of actual color photographs, and is suitable for photographic image simulations.

  4. Blind image quality assessment via deep learning.

    PubMed

    Hou, Weilong; Gao, Xinbo; Tao, Dacheng; Li, Xuelong

    2015-06-01

    This paper investigates how to blindly evaluate the visual quality of an image by learning rules from linguistic descriptions. Extensive psychological evidence shows that humans prefer to conduct evaluations qualitatively rather than numerically. The qualitative evaluations are then converted into the numerical scores to fairly benchmark objective image quality assessment (IQA) metrics. Recently, lots of learning-based IQA models are proposed by analyzing the mapping from the images to numerical ratings. However, the learnt mapping can hardly be accurate enough because some information has been lost in such an irreversible conversion from the linguistic descriptions to numerical scores. In this paper, we propose a blind IQA model, which learns qualitative evaluations directly and outputs numerical scores for general utilization and fair comparison. Images are represented by natural scene statistics features. A discriminative deep model is trained to classify the features into five grades, corresponding to five explicit mental concepts, i.e., excellent, good, fair, poor, and bad. A newly designed quality pooling is then applied to convert the qualitative labels into scores. The classification framework is not only much more natural than the regression-based models, but also robust to the small sample size problem. Thorough experiments are conducted on popular databases to verify the model's effectiveness, efficiency, and robustness. PMID:25122842

  5. Blind image quality assessment via deep learning.

    PubMed

    Hou, Weilong; Gao, Xinbo; Tao, Dacheng; Li, Xuelong

    2015-06-01

    This paper investigates how to blindly evaluate the visual quality of an image by learning rules from linguistic descriptions. Extensive psychological evidence shows that humans prefer to conduct evaluations qualitatively rather than numerically. The qualitative evaluations are then converted into the numerical scores to fairly benchmark objective image quality assessment (IQA) metrics. Recently, lots of learning-based IQA models are proposed by analyzing the mapping from the images to numerical ratings. However, the learnt mapping can hardly be accurate enough because some information has been lost in such an irreversible conversion from the linguistic descriptions to numerical scores. In this paper, we propose a blind IQA model, which learns qualitative evaluations directly and outputs numerical scores for general utilization and fair comparison. Images are represented by natural scene statistics features. A discriminative deep model is trained to classify the features into five grades, corresponding to five explicit mental concepts, i.e., excellent, good, fair, poor, and bad. A newly designed quality pooling is then applied to convert the qualitative labels into scores. The classification framework is not only much more natural than the regression-based models, but also robust to the small sample size problem. Thorough experiments are conducted on popular databases to verify the model's effectiveness, efficiency, and robustness.

  6. Dried fruits quality assessment by hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Serranti, Silvia; Gargiulo, Aldo; Bonifazi, Giuseppe

    2012-05-01

    Dried fruit products have different market values according to their quality. This quality is usually quantified in terms of freshness of the products, as well as the presence of contaminants (pieces of shell, husk, and small stones), defects, mould and decay. The combination of these parameters, in terms of relative presence, represents a fundamental set of attributes conditioning the human-sense-detectable attributes of dried fruits (visual appearance, organoleptic properties, etc.) and their overall quality as marketable products. Sorting-selection strategies exist, but they sometimes fail when a higher degree of detection is required, especially when they must discriminate between dried fruits of relatively small dimensions or perform an "early detection" of pathogen agents responsible for future mould and decay development. Surface characteristics of dried fruits can be investigated by hyperspectral imaging (HSI). In this paper, specific and "ad hoc" applications addressed to propose quality detection logics, adopting a hyperspectral imaging (HSI) based approach, are described, compared and critically evaluated. Reflectance spectra of selected dried fruits (hazelnuts) of different quality and characterized by the presence of different contaminants and defects have been acquired by a laboratory device equipped with two HSI systems working in two different spectral ranges: the visible-near infrared field (400-1000 nm) and the near infrared field (1000-1700 nm). The spectra have been processed and the results evaluated adopting both a simple and fast wavelength band ratio approach and a more sophisticated classification logic based on principal component analysis (PCA).
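
    The "simple and fast wavelength band ratio approach" mentioned above can be sketched as follows. The band indices, threshold value, and cube layout (rows x columns x bands) are illustrative assumptions, not the wavelengths chosen by the authors.

      import numpy as np

      def band_ratio_map(cube, band_a, band_b, eps=1e-6):
          """Per-pixel ratio of two spectral bands of a hyperspectral cube.

          cube   : reflectance array of shape (rows, cols, bands)
          band_a : index of the numerator band
          band_b : index of the denominator band
          """
          return cube[:, :, band_a] / (cube[:, :, band_b] + eps)

      def classify(cube, band_a, band_b, threshold):
          """Boolean defect mask: pixels whose band ratio exceeds the threshold."""
          return band_ratio_map(cube, band_a, band_b) > threshold

      # Illustrative use on a random cube (100 x 100 pixels, 160 bands)
      cube = np.random.rand(100, 100, 160)
      mask = classify(cube, band_a=45, band_b=120, threshold=1.2)
      print(mask.mean())   # fraction of pixels flagged as suspect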

  7. Objective assessment of image quality VI: imaging in radiation therapy

    NASA Astrophysics Data System (ADS)

    Barrett, Harrison H.; Kupinski, Matthew A.; Müeller, Stefan; Halpern, Howard J.; Morris, John C., III; Dwyer, Roisin

    2013-11-01

    Earlier work on objective assessment of image quality (OAIQ) focused largely on estimation or classification tasks in which the desired outcome of imaging is accurate diagnosis. This paper develops a general framework for assessing imaging quality on the basis of therapeutic outcomes rather than diagnostic performance. By analogy to receiver operating characteristic (ROC) curves and their variants as used in diagnostic OAIQ, the method proposed here utilizes the therapy operating characteristic or TOC curves, which are plots of the probability of tumor control versus the probability of normal-tissue complications as the overall dose level of a radiotherapy treatment is varied. The proposed figure of merit is the area under the TOC curve, denoted AUTOC. This paper reviews an earlier exposition of the theory of TOC and AUTOC, which was specific to the assessment of image-segmentation algorithms, and extends it to other applications of imaging in external-beam radiation treatment as well as in treatment with internal radioactive sources. For each application, a methodology for computing the TOC is presented. A key difference between ROC and TOC is that the latter can be defined for a single patient rather than a population of patients.
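
    The proposed figure of merit, the area under the TOC curve (AUTOC), can be computed numerically once the curve has been sampled. A minimal sketch using trapezoidal integration; the assumption that the curve is supplied as matched arrays of complication and tumor-control probabilities swept over dose is illustrative.

      import numpy as np

      def autoc(p_complication, p_tumor_control):
          """Area under the therapy operating characteristic (TOC) curve.

          p_complication  : probability of normal-tissue complications at each dose level
          p_tumor_control : probability of tumor control at the same dose levels
          """
          x = np.asarray(p_complication, dtype=float)
          y = np.asarray(p_tumor_control, dtype=float)
          order = np.argsort(x)                       # integrate along increasing x
          return float(np.trapz(y[order], x[order]))

      # Illustrative TOC: tumor control rises faster with dose than complications do
      dose = np.linspace(0, 1, 50)
      print(autoc(dose**3, 1 - np.exp(-4 * dose)))    # values closer to 1 are better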

  8. Improving secondary ion mass spectrometry image quality with image fusion.

    PubMed

    Tarolli, Jay G; Jackson, Lauren M; Winograd, Nicholas

    2014-12-01

    The spatial resolution of chemical images acquired with cluster secondary ion mass spectrometry (SIMS) is limited not only by the size of the probe utilized to create the images but also by detection sensitivity. As the probe size is reduced to below 1 μm, for example, a low signal in each pixel limits lateral resolution because of counting statistics considerations. Although it can be useful to implement numerical methods to mitigate this problem, here we investigate the use of image fusion to combine information from scanning electron microscope (SEM) data with chemically resolved SIMS images. The advantage of this approach is that the higher intensity and, hence, spatial resolution of the electron images can help to improve the quality of the SIMS images without sacrificing chemical specificity. Using a pan-sharpening algorithm, the method is illustrated using synthetic data, experimental data acquired from a metallic grid sample, and experimental data acquired from a lawn of algae cells. The results show that up to an order of magnitude increase in spatial resolution is possible to achieve. A cross-correlation metric is utilized for evaluating the reliability of the procedure.
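
    One common pan-sharpening scheme, intensity substitution with simple statistics matching, conveys the idea of injecting SEM detail into an upsampled SIMS ion image. This is a minimal sketch under those assumptions; the exact algorithm, resampling choice, and scaling are not necessarily the variant used by the authors.

      import numpy as np
      from scipy.ndimage import zoom

      def pan_sharpen(sims_lowres, sem_highres):
          """Fuse a low-resolution SIMS image with a co-registered SEM image.

          sims_lowres : 2-D ion image (chemical contrast, low spatial resolution)
          sem_highres : 2-D SEM image (high spatial resolution), same field of view
          Returns a SIMS image re-modulated by the SEM high-frequency detail.
          """
          factors = (sem_highres.shape[0] / sims_lowres.shape[0],
                     sem_highres.shape[1] / sims_lowres.shape[1])
          sims_up = zoom(sims_lowres.astype(float), factors, order=1)   # upsample SIMS

          # Match the SEM intensity statistics to the upsampled SIMS image
          sem = sem_highres.astype(float)
          sem_matched = (sem - sem.mean()) / (sem.std() + 1e-9) * sims_up.std() + sims_up.mean()

          # Extract and inject the high-frequency SEM detail
          sem_low = zoom(zoom(sem_matched, (1 / factors[0], 1 / factors[1]), order=1),
                         factors, order=1)
          return np.clip(sims_up + (sem_matched - sem_low), 0, None)

      # Illustrative use: 64x64 SIMS image fused with a 256x256 SEM image
      fused = pan_sharpen(np.random.rand(64, 64), np.random.rand(256, 256))
      print(fused.shape)   # (256, 256)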

  9. Improving secondary ion mass spectrometry image quality with image fusion.

    PubMed

    Tarolli, Jay G; Jackson, Lauren M; Winograd, Nicholas

    2014-12-01

    The spatial resolution of chemical images acquired with cluster secondary ion mass spectrometry (SIMS) is limited not only by the size of the probe utilized to create the images but also by detection sensitivity. As the probe size is reduced to below 1 μm, for example, a low signal in each pixel limits lateral resolution because of counting statistics considerations. Although it can be useful to implement numerical methods to mitigate this problem, here we investigate the use of image fusion to combine information from scanning electron microscope (SEM) data with chemically resolved SIMS images. The advantage of this approach is that the higher intensity and, hence, spatial resolution of the electron images can help to improve the quality of the SIMS images without sacrificing chemical specificity. Using a pan-sharpening algorithm, the method is illustrated using synthetic data, experimental data acquired from a metallic grid sample, and experimental data acquired from a lawn of algae cells. The results show that up to an order of magnitude increase in spatial resolution is possible to achieve. A cross-correlation metric is utilized for evaluating the reliability of the procedure. PMID:24912432

  10. Visual pattern degradation based image quality assessment

    NASA Astrophysics Data System (ADS)

    Wu, Jinjian; Li, Leida; Shi, Guangming; Lin, Weisi; Wan, Wenfei

    2015-08-01

    In this paper, we introduce a visual pattern degradation based full-reference (FR) image quality assessment (IQA) method. Research on visual recognition indicates that the human visual system (HVS) is highly adaptive in extracting visual structures for scene understanding. Existing structure degradation based IQA methods mainly take local luminance contrast to represent structure, and measure quality as degradation of luminance contrast. In this paper, we suggest that structure includes not only luminance contrast but also orientation information. Therefore, we analyze the orientation characteristic for structure description. Inspired by the orientation selectivity mechanism in the primary visual cortex, we introduce a novel visual pattern to represent the structure of a local region. Then, the quality is measured as the degradation of both luminance contrast and the visual pattern. Experimental results on five benchmark databases demonstrate that the proposed visual pattern can effectively represent visual structure and that the proposed IQA method performs better than existing IQA metrics.

  11. Damage and quality assessment in wheat by NIR hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Delwiche, Stephen R.; Kim, Moon S.; Dong, Yanhong

    2010-04-01

    Fusarium head blight is a fungal disease that affects the world's small grains, such as wheat and barley. Attacking the spikelets during development, the fungus causes a reduction of yield and grain of poorer processing quality. It also is a health concern because of the secondary metabolite, deoxynivalenol, which often accompanies the fungus. While chemical methods exist to measure the concentration of the mycotoxin and manual visual inspection is used to ascertain the level of Fusarium damage, research has been active in developing fast, optically based techniques that can assess this form of damage. In the current study a near-infrared (1000-1700 nm) hyperspectral image system was assembled and applied to Fusarium-damaged kernel recognition. With anticipation of an eventual multispectral imaging system design, 5 wavelengths were manually selected from a pool of 146 images as the most promising, such that when combined in pairs or triplets, Fusarium damage could be identified. We present the results of two pairs of wavelengths [(1199, 1474 nm) and (1315, 1474 nm)] whose reflectance values produced adequate separation of kernels of healthy appearance (i.e., asymptomatic condition) from kernels possessing Fusarium damage.

  12. Model-based quantification of image quality

    NASA Technical Reports Server (NTRS)

    Hazra, Rajeeb; Miller, Keith W.; Park, Stephen K.

    1989-01-01

    In 1982, Park and Schowengerdt published an end-to-end analysis of a digital imaging system quantifying three principal degradation components: (1) image blur - blurring caused by the acquisition system, (2) aliasing - caused by insufficient sampling, and (3) reconstruction blur - blurring caused by imperfect interpolative reconstruction. This analysis, which measures degradation as the square of the radiometric error, includes the sample-scene phase as an explicit random parameter and characterizes the image degradation caused by imperfect acquisition and reconstruction together with the effects of undersampling and random sample-scene phases. In a recent paper, Mitchell and Netravali displayed the visual effects of the above-mentioned degradations and presented a subjective analysis of their relative importance in determining image quality. The primary aim of the research is to use the analysis of Park and Schowengerdt to correlate their mathematical criteria for measuring image degradations with subjective visual criteria. Insight gained from this research can be exploited in the end-to-end design of optical systems, so that system parameters (transfer functions of the acquisition and display systems) can be designed relative to each other to obtain the best possible results using quantitative measurements.

  13. Enhancement and quality control of GOES images

    NASA Astrophysics Data System (ADS)

    Jentoft-Nilsen, Marit; Palaniappan, Kannappan; Hasler, A. Frederick; Chesters, Dennis

    1996-10-01

    The new generation of Geostationary Operational Environmental Satellites (GOES) has an imager instrument with five multispectral bands of high spatial resolution and very high dynamic range radiance measurements with 10-bit precision. A wide variety of environmental processes can be observed at unprecedented time scales using the new imager instrument. Quality assurance and feedback to the GOES project office is performed using rapid animation at high magnification, examining differences between successive frames, and applying radiometric and geometric correction algorithms. Missing or corrupted scanline data occur unpredictably due to noise in the ground-based receiving system. Smooth, high-resolution, noise-free animations can be recovered using automatic techniques even from scanline scratches affecting more than 25 percent of the dataset. Radiometric correction using the local solar zenith angle was applied to the visible channel to compensate for time-of-day illumination variations, producing gain-compensated movies that appear well lit from dawn to dusk and extending the interval of useful image observations by more than two hours. A time series of brightness histograms reveals some subtle quality control problems in the GOES channels related to rebinning of the radiance measurements. The human visual system is sensitive to only about half of the measured 10-bit dynamic range of intensity variations at a given point in a monochrome image. In order to effectively use the additional bits of precision and handle the high data rate, new enhancement techniques and visualization tools were developed. We have implemented interactive image enhancement techniques to selectively emphasize different subranges of the 10 bits of intensity levels. Improving navigational accuracy using registration techniques and geometric correction of scanline interleaving errors is a more difficult problem that is currently being investigated.
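
    The time-of-day radiometric correction described above amounts to dividing the visible channel by the cosine of the local solar zenith angle. A minimal sketch under simplified assumptions (solar declination and hour angle from day of year and local solar time, no atmospheric terms, a single angle for the whole scene):

      import numpy as np

      def solar_zenith_correction(visible, lat_deg, day_of_year, solar_hour):
          """Gain-compensate a visible-channel image for sun elevation.

          visible     : raw visible-channel counts or radiance (2-D array)
          lat_deg     : latitude of the scene centre in degrees
          day_of_year : 1..365
          solar_hour  : local solar time in hours (12 = solar noon)
          """
          decl = np.radians(23.44) * np.sin(2 * np.pi * (284 + day_of_year) / 365.0)
          hour_angle = np.radians(15.0 * (solar_hour - 12.0))
          lat = np.radians(lat_deg)
          cos_zenith = (np.sin(lat) * np.sin(decl) +
                        np.cos(lat) * np.cos(decl) * np.cos(hour_angle))
          cos_zenith = max(float(cos_zenith), 0.05)   # avoid blow-up near the terminator
          return visible / cos_zenith

      # Illustrative use: mid-latitude scene in the late afternoon
      img = np.random.rand(480, 640)
      print(solar_zenith_correction(img, lat_deg=35.0, day_of_year=172, solar_hour=17.5).max())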

  14. Standardised mortality ratio based on the sum of age and percentage total body surface area burned is an adequate quality indicator in burn care: An exploratory review.

    PubMed

    Steinvall, Ingrid; Elmasry, Moustafa; Fredrikson, Mats; Sjoberg, Folke

    2016-02-01

    Standardised Mortality Ratio (SMR) based on generic mortality predicting models is an established quality indicator in critical care. Burn-specific mortality models are preferred for the comparison among patients with burns as their predictive value is better. The aim was to assess whether the sum of age (years) and percentage total body surface area burned (which constitutes the Baux score) is acceptable in comparison to other more complex models, and to find out if data collected from a separate burn centre are sufficient for SMR based quality assessment. The predictive value of nine burn-specific models was tested by comparing values from the area under the receiver-operating characteristic curve (AUC) and a non-inferiority analysis using 1% as the limit (delta). SMR was analysed by comparing data from seven reference sources, including the North American National Burn Repository (NBR), with the observed mortality (years 1993-2012, n=1613, 80 deaths). The AUC values ranged between 0.934 and 0.976. The AUC 0.970 (95% CI 0.96-0.98) for the Baux score was non-inferior to the other models. SMR was 0.52 (95% CI 0.28-0.88) for the most recent five-year period compared with NBR based data. The analysis suggests that SMR based on the Baux score is eligible as an indicator of quality for setting standards of mortality in burn care. More advanced modelling only marginally improves the predictive value. The SMR can detect mortality differences in data from a single centre. PMID:26700877
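
    The two quantities discussed above, the Baux score (age plus percentage TBSA burned) and an SMR with a confidence interval, can be sketched as follows. The toy per-patient predicted risks and the exact Poisson interval are illustrative assumptions; the study compares observed mortality against published reference models such as the NBR, not against this sketch.

      from scipy.stats import chi2

      def baux_score(age_years, tbsa_percent):
          """Baux score: sum of age and percentage total body surface area burned."""
          return age_years + tbsa_percent

      def smr(observed_deaths, expected_deaths, alpha=0.05):
          """Standardised mortality ratio with an exact Poisson 95% CI."""
          o, e = observed_deaths, expected_deaths
          lower = chi2.ppf(alpha / 2, 2 * o) / (2 * e) if o > 0 else 0.0
          upper = chi2.ppf(1 - alpha / 2, 2 * (o + 1)) / (2 * e)
          return o / e, (lower, upper)

      # Illustrative cohort: predicted risks come from some reference mortality model
      predicted_risk = [0.05, 0.30, 0.80, 0.10]        # per-patient predicted mortality
      expected = sum(predicted_risk)                   # expected number of deaths
      print(baux_score(45, 30))                        # Baux score = 75
      print(smr(observed_deaths=1, expected_deaths=expected))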

  15. On pictures and stuff: image quality and material appearance

    NASA Astrophysics Data System (ADS)

    Ferwerda, James A.

    2014-02-01

    Realistic images are a puzzle because they serve as visual representations of objects while also being objects themselves. When we look at an image we are able to perceive both the properties of the image and the properties of the objects represented by the image. Research on image quality has typically focused on improving image properties (resolution, dynamic range, frame rate, etc.) while ignoring the issue of whether images are serving their role as visual representations. In this paper we describe a series of experiments that investigate how well images of different quality convey information about the properties of the objects they represent. In the experiments we focus on the effects that two image properties (contrast and sharpness) have on the ability of images to represent the gloss of depicted objects. We found that different experimental methods produced differing results. Specifically, when the stimulus images were presented using simultaneous pair comparison, observers were influenced by the surface properties of the images and conflated changes in image contrast and sharpness with changes in object gloss. On the other hand, when the stimulus images were presented sequentially, observers were able to disregard the image-plane properties and more accurately match the gloss of the objects represented by the different quality images. These findings suggest that in understanding image quality it is useful to distinguish between the quality of the imaging medium and the quality of the visual information represented by that medium.

  16. Image analysis for dental bone quality assessment using CBCT imaging

    NASA Astrophysics Data System (ADS)

    Suprijanto; Epsilawati, L.; Hajarini, M. S.; Juliastuti, E.; Susanti, H.

    2016-03-01

    Cone beam computerized tomography (CBCT) is one of the X-ray imaging modalities applied in dentistry. This modality can visualize the oral region in 3D and at high resolution. CBCT jaw images contain potentially useful information for the assessment of bone quality, which is often needed for pre-operative implant planning. We propose a comparison method based on the normalized histogram (NH) of the region of the inter-dental septum and premolar teeth. The NH characteristics of normal and abnormal bone conditions are then compared and analyzed. Four test parameters are proposed, i.e. the difference between the average intensities of teeth and bone (s), the ratio between the average intensities of bone and teeth (n), the difference between the peak values of teeth and bone in the NH (Δp), and the ratio between the NH ranges of teeth and bone (r). The results showed that n, s, and Δp have the potential to be classification parameters of dental calcium density.
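
    The four test parameters listed above can be sketched from intensity samples of the teeth and bone regions. The interpretation of the "peak value" as the histogram mode, the bin count, and the range definition are assumptions made here for illustration, since the abstract does not define them precisely.

      import numpy as np

      def nh_parameters(teeth_pixels, bone_pixels, bins=64):
          """Compute s, n, delta_p and r from teeth and bone intensity samples."""
          teeth = np.asarray(teeth_pixels, dtype=float)
          bone = np.asarray(bone_pixels, dtype=float)

          s = teeth.mean() - bone.mean()                 # difference of average intensities
          n = bone.mean() / teeth.mean()                 # ratio of average intensities

          rng = (min(teeth.min(), bone.min()), max(teeth.max(), bone.max()))
          h_teeth, edges = np.histogram(teeth, bins=bins, range=rng, density=True)
          h_bone, _ = np.histogram(bone, bins=bins, range=rng, density=True)
          centers = 0.5 * (edges[:-1] + edges[1:])
          delta_p = centers[h_teeth.argmax()] - centers[h_bone.argmax()]   # peak separation

          r = (teeth.max() - teeth.min()) / (bone.max() - bone.min())      # range ratio
          return s, n, delta_p, r

      # Illustrative intensity samples (teeth brighter than bone in CBCT)
      teeth = np.random.normal(1800, 120, 5000)
      bone = np.random.normal(1100, 200, 5000)
      print(nh_parameters(teeth, bone))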

  17. Do English NHS Microbiology laboratories offer adequate services for the diagnosis of UTI in children? Healthcare Quality Improvement Partnership (HQIP) Audit of Standard Operational Procedures.

    PubMed

    McNulty, Cliodna A M; Verlander, Neville Q; Moore, Philippa C L; Larcombe, James; Dudley, Jan; Banerjee, Jaydip; Jadresic, Lyda

    2015-09-01

    The National Institute of Care Excellence (NICE) 2007 guidance CG54, on urinary tract infection (UTI) in children, states that clinicians should use urgent microscopy and culture as the preferred method for diagnosing UTI in the hospital setting for severe illness in children under 3 years old and from the GP setting in children under 3 years old with intermediate risk of severe illness. NICE also recommends that all 'infants and children with atypical UTI (including non-Escherichia coli infections) should have renal imaging after a first infection'. We surveyed all microbiology laboratories in England with Clinical Pathology Accreditation to determine standard operating procedures (SOPs) for urgent microscopy, culture and reporting of children's urine and to ascertain whether the SOPs facilitate compliance with NICE guidance. We undertook a computer search in six microbiology laboratories in south-west England to determine urine submissions and urine reports in children under 3 years. Seventy-three per cent of laboratories (110/150) participated. Enterobacteriaceae that were not E. coli were reported only as coliforms (rather than non-E. coli coliforms) by 61% (67/110) of laboratories. Eighty-eight per cent of laboratories (97/110) provided urgent microscopy for hospital and 54% for general practice (GP) paediatric urines; 61% of laboratories (confidence interval 52-70%) cultured 1 μl volume of urine, which equates to one colony if the bacterial load is 10^6 c.f.u. l^-1. Only 22% (24/110) of laboratories reported non-E. coli coliforms and provided urgent microscopy for both hospital and GP childhood urines; only three laboratories also cultured a 5 μl volume of urine. Only one of six laboratories in our submission audit had a significant increase in urine submissions and urines reported from children less than 3 years old between the predicted pre-2007 level in the absence of guidance and the 2008 level following publication of the NICE guidance. Less than a

  18. Image quality characteristics of handheld display devices for medical imaging.

    PubMed

    Yamazaki, Asumi; Liu, Peter; Cheng, Wei-Chung; Badano, Aldo

    2013-01-01

    Handheld devices such as mobile phones and tablet computers have become widespread with thousands of available software applications. Recently, handhelds are being proposed as part of medical imaging solutions, especially in emergency medicine, where immediate consultation is required. However, handheld devices differ significantly from medical workstation displays in terms of display characteristics. Moreover, the characteristics vary significantly among device types. We investigate the image quality characteristics of various handheld devices with respect to luminance response, spatial resolution, spatial noise, and reflectance. We show that the luminance characteristics of the handheld displays are different from those of workstation displays complying with the grayscale standard target response, suggesting that luminance calibration might be needed. Our results also demonstrate that the spatial characteristics of handhelds can surpass those of medical workstation displays, particularly for recent generation devices. While a 5 mega-pixel monochrome workstation display has horizontal and vertical modulation transfer factors of 0.52 and 0.47 at the Nyquist frequency, the handheld displays released after 2011 can have values higher than 0.63 at the respective Nyquist frequencies. The noise power spectra for workstation displays are higher than 1.2 × 10⁻⁵ mm² at 1 mm⁻¹, while handheld displays have values lower than 3.7 × 10⁻⁶ mm². Reflectance measurements on some of the handheld displays are consistent with measurements for workstation displays with, in some cases, low specular and diffuse reflectance coefficients. The variability of the characterization results among devices due to the different technological features indicates that image quality varies greatly among handheld display devices. PMID:24236113

  20. The influence of statistical variations on image quality

    NASA Astrophysics Data System (ADS)

    Hultgren, Bror; Hertel, Dirk; Bullitt, Julian

    2006-01-01

    For more than thirty years imaging scientists have constructed metrics to predict psychovisually perceived image quality. Such metrics are based on a set of objectively measurable basis functions such as Noise Power Spectrum (NPS), Modulation Transfer Function (MTF), and characteristic curves of tone and color reproduction. Although these basis functions constitute a set of primitives that fully describe an imaging system from the standpoint of information theory, we found that in practical imaging systems the basis functions themselves are determined by system-specific primitives, i.e. technology parameters. In the example of a printer, MTF and NPS are largely determined by dot structure. In addition MTF is determined by color registration, and NPS by streaking and banding. Since any given imaging system is only a single representation of a class of more or less identical systems, the family of imaging systems and the single system are not described by a unique set of image primitives. For an image produced by a given imaging system, the set of image primitives describing that particular image will be a singular instantiation of the underlying statistical distribution of that primitive. If we know precisely the set of imaging primitives that describe the given image we should be able to predict its image quality. Since only the distributions are known, we can only predict the distribution in image quality for a given image as produced by the larger class of 'identical systems'. We will demonstrate the combinatorial effect of the underlying statistical variations in the image primitives on the objectively measured image quality of a population of printers as well as on the perceived image quality of a set of test images. We also will discuss the choice of test image sets and impact of scene content on the distribution of perceived image quality.

  1. Improving the Blanco Telescope's delivered image quality

    NASA Astrophysics Data System (ADS)

    Abbott, Timothy M. C.; Montane, Andrés; Tighe, Roberto; Walker, Alistair R.; Gregory, Brooke; Smith, R. Christopher; Cisternas, Alfonso

    2010-07-01

    The V. M. Blanco 4-m telescope at Cerro Tololo Inter-American Observatory is undergoing a number of improvements in preparation for the delivery of the Dark Energy Camera. The program includes upgrades with the potential to deliver gains in image quality and stability. To this end, we have renovated the support structure of the primary mirror, incorporating innovations to improve both the radial support performance and the registration of the mirror and telescope top end. The resulting opto-mechanical condition of the telescope is described. We also describe some improvements to the environmental control. Upgrades to the telescope control system and measurements of the dome environment are described in separate papers in this conference.

  2. Using short-wave infrared imaging for fruit quality evaluation

    NASA Astrophysics Data System (ADS)

    Zhang, Dong; Lee, Dah-Jye; Desai, Alok

    2013-12-01

    Quality evaluation of agricultural and food products is important for processing, inventory control, and marketing. Fruit size and surface quality are two important quality factors for high-quality fruit such as Medjool dates. Fruit size is usually measured by length, which can be done easily with simple image processing techniques. Surface quality evaluation, on the other hand, requires a more complicated design, both in image acquisition and in image processing. Skin delamination is considered a major factor that affects fruit quality and its value. This paper presents an efficient histogram analysis and image processing technique designed specifically for real-time surface quality evaluation of Medjool dates. This approach, based on short-wave infrared imaging, provides excellent image contrast between the fruit surface and delaminated skin, which allows significant simplification of the image processing algorithm and reduction of computational power requirements. The proposed quality grading method requires a very simple training procedure to obtain a gray scale image histogram for each quality level. Using histogram comparison, each date is assigned to one of four quality levels and an optimal threshold is calculated for segmenting skin delamination areas from the fruit surface. The percentage of the fruit surface with skin delamination can then be calculated for quality evaluation. This method has been implemented, used in commercial production, and proven to be efficient and accurate.
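
    The grading step described above can be sketched as follows; the chi-square histogram distance, the brighter-appearance assumption for delaminated skin and the function names are illustrative choices, not the exact method used in the paper.

        import numpy as np

        def grade_date(swir_image, level_histograms, bins=256):
            """Assign a SWIR fruit image to the closest quality level by histogram comparison."""
            h, _ = np.histogram(swir_image, bins=bins, range=(0, 255), density=True)
            # Chi-square distance to each stored quality-level template histogram.
            distances = {level: np.sum((h - ref) ** 2 / (h + ref + 1e-12))
                         for level, ref in level_histograms.items()}
            return min(distances, key=distances.get)

        def delamination_fraction(swir_image, threshold):
            """Fraction of fruit-surface pixels whose intensity indicates delaminated skin."""
            mask = swir_image > threshold   # delaminated skin assumed to appear brighter in SWIR
            return mask.mean()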

  3. Toward More Adequate Quantitative Instructional Research.

    ERIC Educational Resources Information Center

    VanSickle, Ronald L.

    1986-01-01

    Sets an agenda for improving instructional research conducted with classical quantitative experimental or quasi-experimental methodology. Includes guidelines regarding the role of a social perspective, adequate conceptual and operational definition, quality instrumentation, control of threats to internal and external validity, and the use of…

  4. Food quality assessment by NIR hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Whitworth, Martin B.; Millar, Samuel J.; Chau, Astor

    2010-04-01

    Near infrared reflectance (NIR) spectroscopy is well established in the food industry for rapid compositional analysis of bulk samples. NIR hyperspectral imaging provides new opportunities to measure the spatial distribution of components such as moisture and fat, and to identify and measure specific regions of composite samples. An NIR hyperspectral imaging system has been constructed for food research applications, incorporating a SWIR camera with a cooled 14 bit HgCdTe detector and N25E spectrograph (Specim Ltd, Finland). Samples are scanned in a pushbroom mode using a motorised stage. The system has a spectral resolution of 256 pixels covering a range of 970-2500 nm and a spatial resolution of 320 pixels covering a swathe adjustable from 8 to 300 mm. Images are acquired at a rate of up to 100 lines s⁻¹, enabling samples to be scanned within a few seconds. Data are captured using SpectralCube software (Specim) and analysed using ENVI and IDL (ITT Visual Information Solutions). Several food applications are presented. The strength of individual absorbance bands enables the distribution of particular components to be assessed. Examples are shown for detection of added gluten in wheat flour and to study the effect of processing conditions on fat distribution in chips/French fries. More detailed quantitative calibrations have been developed to study evolution of the moisture distribution in baguettes during storage at different humidities, to assess freshness of fish using measurements of whole cod and fillets, and for prediction of beef quality by identification and separate measurement of lean and fat regions.
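
    A minimal sketch of how a single-band component map might be pulled out of such a hypercube is given below; the band-selection approach, the pseudo-absorbance conversion and the example wavelength are assumptions for illustration only.

        import numpy as np

        def component_map(cube, wavelengths, band_nm):
            """Map of apparent absorbance at the band nearest `band_nm`.

            cube: reflectance hypercube of shape (lines, samples, bands)
            wavelengths: 1-D array of band centre wavelengths in nm
            """
            idx = int(np.argmin(np.abs(np.asarray(wavelengths) - band_nm)))
            reflectance = np.clip(cube[:, :, idx], 1e-4, None)   # avoid log of zero
            return -np.log10(reflectance)                        # pseudo-absorbance image

        # e.g. a water-related map could use a band near 1940 nm (illustrative choice)
        # moisture_map = component_map(cube, wavelengths, 1940.0)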

  5. The physical and psychological factors governing sound-image quality

    NASA Astrophysics Data System (ADS)

    Kurozumi, K.; Ohgushi, K.

    1984-03-01

    One of the most important psychological impressions produced by a conventional two-loudspeaker reproduction system is the localization of the sound image in the horizontal plane. The sound image is localized to some degree by varying the level and time differences of the two acoustic signals. Even if the sound image is localized in the same direction, different impressions, for example a feeling of the width of the sound image, are sometimes produced. These different impressions are described by the term sound-image quality. The purpose of this study is to find the psychological and physical factors governing sound-image quality. To begin with, the effect on sound-image quality of varying the cross-correlation coefficient for white noise is investigated. A number of studies were performed in which the relationship between the cross-correlation coefficient and the sound-image quality was investigated.
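
    For reference, the interchannel cross-correlation coefficient that such experiments vary can be computed at zero lag as in the following sketch; the function name and the example are purely illustrative.

        import numpy as np

        def channel_correlation(left, right):
            """Normalized cross-correlation coefficient (zero lag) between the two channel signals."""
            left = left - left.mean()
            right = right - right.mean()
            return float(np.dot(left, right) / np.sqrt(np.dot(left, left) * np.dot(right, right)))

        # Identical noise in both channels gives 1.0, independent noise gives a value near 0.
        rng = np.random.default_rng(0)
        noise = rng.standard_normal(48000)
        print(channel_correlation(noise, noise))                       # -> 1.0
        print(channel_correlation(noise, rng.standard_normal(48000)))  # -> close to 0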

  6. Reduced-reference image quality assessment using moment method

    NASA Astrophysics Data System (ADS)

    Yang, Diwei; Shen, Yuantong; Shen, Yongluo; Li, Hongwei

    2016-10-01

    Reduced-reference image quality assessment (RR IQA) aims to evaluate the perceptual quality of a distorted image using partial information from the corresponding reference image. In this paper, a novel RR IQA metric is proposed using the moment method. We claim that the first and second moments of the wavelet coefficients of natural images exhibit an approximately regular behavior that is disturbed by different types of distortions, and that this disturbance is relevant to human perception of quality. We measure the difference in these statistical parameters between the reference and distorted images to predict the visual quality degradation. The introduced IQA metric is easy to implement and has relatively low computational complexity. The experimental results on the Laboratory for Image and Video Engineering (LIVE) and Tampere Image Database (TID) image databases indicate that the proposed metric has good predictive performance.
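
    A minimal sketch of the idea, assuming PyWavelets and that the reference-side information consists of per-subband means and variances, might look as follows; the wavelet, decomposition depth and distance measure are placeholders rather than the authors' choices.

        import numpy as np
        import pywt  # PyWavelets

        def subband_moments(image, wavelet="db2", levels=3):
            """First and second moments (mean, variance) of wavelet coefficients per detail subband."""
            coeffs = pywt.wavedec2(image, wavelet, level=levels)
            moments = []
            for detail in coeffs[1:]:          # skip the approximation subband
                for band in detail:            # horizontal, vertical, diagonal details
                    moments.append((band.mean(), band.var()))
            return np.array(moments)

        def rr_quality_score(ref_moments, distorted_image, wavelet="db2", levels=3):
            """Predict degradation as the distance between reference and distorted subband moments."""
            dist_moments = subband_moments(distorted_image, wavelet, levels)
            return float(np.linalg.norm(ref_moments - dist_moments))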

  7. Quantitative image quality evaluation for cardiac CT reconstructions

    NASA Astrophysics Data System (ADS)

    Tseng, Hsin-Wu; Fan, Jiahua; Kupinski, Matthew A.; Balhorn, William; Okerlund, Darin R.

    2016-03-01

    Maintaining image quality in the presence of motion is always desirable and challenging in clinical Cardiac CT imaging. Different image-reconstruction algorithms are available on current commercial CT systems that attempt to achieve this goal. It is widely accepted that image-quality assessment should be task-based and involve specific tasks, observers, and associated figures of merits. In this work, we developed an observer model that performed the task of estimating the percentage of plaque in a vessel from CT images. We compared task performance of Cardiac CT image data reconstructed using a conventional FBP reconstruction algorithm and the SnapShot Freeze (SSF) algorithm, each at default and optimal reconstruction cardiac phases. The purpose of this work is to design an approach for quantitative image-quality evaluation of temporal resolution for Cardiac CT systems. To simulate heart motion, a moving coronary type phantom synchronized with an ECG signal was used. Three different percentage plaques embedded in a 3 mm vessel phantom were imaged multiple times under motion free, 60 bpm, and 80 bpm heart rates. Static (motion free) images of this phantom were taken as reference images for image template generation. Independent ROIs from the 60 bpm and 80 bpm images were generated by vessel tracking. The observer performed estimation tasks using these ROIs. Ensemble mean square error (EMSE) was used as the figure of merit. Results suggest that the quality of SSF images is superior to the quality of FBP images in higher heart-rate scans.
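
    The figure of merit itself is straightforward; a small sketch, with purely illustrative numbers, is shown below.

        import numpy as np

        def ensemble_mse(estimated_plaque_pct, true_plaque_pct):
            """Ensemble mean square error over repeated plaque-percentage estimates."""
            estimated = np.asarray(estimated_plaque_pct, dtype=float)
            return float(np.mean((estimated - float(true_plaque_pct)) ** 2))

        # e.g. estimates from repeated 60 bpm scans of a 50% plaque insert (illustrative numbers)
        print(ensemble_mse([46.0, 52.5, 55.0, 48.0], 50.0))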

  8. Research iris serial images quality assessment method based on HVS

    NASA Astrophysics Data System (ADS)

    Li, Zhi-hui; Zhang, Chang-hai; Ming, Xing; Zhao, Yong-hua

    2006-01-01

    Iris recognition can be widely used in security and customs, and it provides better security than recognition of other human features such as fingerprints and faces. Iris image quality is crucial to the recognition result, so reliable image quality assessment is necessary for evaluating iris image quality. However, there is no uniform criterion for image quality assessment. Image quality assessment includes objective and subjective evaluation methods; in practice, however, subjective evaluation is laborious and not effective for iris recognition, so objective evaluation should be used. Based on the multi-scale and selectivity characteristics of the human visual system (HVS) model, a new iris image quality assessment method is presented. In this method, the region of interest (ROI) is located, wavelet transform zero-crossings are used to find multi-scale edges, and a multi-scale fusion measure is used to assess iris image quality. In the experiments, objective and subjective evaluation methods are used to assess iris images. The results show that the method is effective for iris image quality assessment.

  9. Limitations to adaptive optics image quality in rodent eyes.

    PubMed

    Zhou, Xiaolin; Bedggood, Phillip; Metha, Andrew

    2012-08-01

    Adaptive optics (AO) retinal image quality of rodent eyes is inferior to that of human eyes, despite the promise of greater numerical aperture. This paradox challenges several assumptions commonly made in AO imaging, assumptions which may be invalidated by the very high power and dioptric thickness of the rodent retina. We used optical modeling to compare the performance of rat and human eyes under conditions that tested the validity of these assumptions. Results showed that AO image quality in the human eye is robust to positioning errors of the AO corrector and to differences in imaging depth and wavelength compared to the wavefront beacon. In contrast, image quality in the rat eye declines sharply with each of these manipulations, especially when imaging off-axis. However, some latitude does exist to offset these manipulations against each other to produce good image quality.

  10. The study of surgical image quality evaluation system by subjective quality factor method

    NASA Astrophysics Data System (ADS)

    Zhang, Jian J.; Xuan, Jason R.; Yang, Xirong; Yu, Honggang; Koullick, Edouard

    2016-03-01

    The GreenLight™ procedure is an effective and economical treatment for benign prostate hyperplasia (BPH); almost a million patients have been treated with GreenLight™ worldwide. During the surgical procedure, the surgeon or physician relies on the monitoring video system to survey and confirm the surgical progress. Several obstructions can greatly affect the image quality of the monitoring video, such as laser glare from tissue and body fluid, air bubbles and debris generated by tissue evaporation, and bleeding. In order to improve the physician's visual experience of a laser surgical procedure, the system performance parameters related to image quality need to be well defined. However, since image quality is the integrated set of perceptions of the overall degree of excellence of an image, or in other words the perceptually weighted combination of significant attributes (contrast, graininess, ...) of an image when considered in its marketplace or application, there is no standard definition of overall image or video quality, especially for the no-reference case (without a standard chart as reference). In this study, the Subjective Quality Factor (SQF) and acutance are used for no-reference image quality evaluation. Basic image quality parameters, such as sharpness, color accuracy, size of obstruction and transmission of obstruction, are used as subparameters to define the rating scale for image quality evaluation or comparison. Sample image groups were evaluated by human observers according to the rating scale. Surveys of physician groups were also conducted with lab-generated sample videos. The study shows that human subjective perception is a trustworthy way of evaluating image quality. More systematic investigation of the relationship between video quality and the image quality of each frame will be conducted in a future study.
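
    One common formulation of SQF integrates the system MTF over the logarithm of spatial frequency within a visually important band; the sketch below follows that idea, with the band limits and normalization chosen only for illustration and not taken from this study.

        import numpy as np

        def subjective_quality_factor(freqs_cpd, mtf, band=(3.0, 12.0)):
            """Approximate SQF: MTF integrated over log spatial frequency within a visually
            important band (the band limits here are placeholders, not values from the paper)."""
            freqs_cpd = np.asarray(freqs_cpd, dtype=float)
            mtf = np.asarray(mtf, dtype=float)
            sel = (freqs_cpd >= band[0]) & (freqs_cpd <= band[1])
            logf = np.log(freqs_cpd[sel])
            mtf_band = mtf[sel]
            # Trapezoidal integration; normalized so an ideal system (MTF = 1) scores 100.
            integral = np.sum(0.5 * (mtf_band[1:] + mtf_band[:-1]) * np.diff(logf))
            return 100.0 * integral / (logf[-1] - logf[0])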

  11. Dynamic flat panel detector versus image intensifier in cardiac imaging: dose and image quality

    NASA Astrophysics Data System (ADS)

    Vano, E.; Geiger, B.; Schreiner, A.; Back, C.; Beissel, J.

    2005-12-01

    The practical aspects of the dosimetric and imaging performance of a digital x-ray system for cardiology procedures were evaluated. The system was configured with an image intensifier (II) and later upgraded to a dynamic flat panel detector (FD). Entrance surface air kerma (ESAK) to phantoms of 16, 20, 24 and 28 cm of polymethyl methacrylate (PMMA) and the image quality of a test object were measured. Images were evaluated directly on the monitor and with numerical methods (noise and signal-to-noise ratio). Information contained in the DICOM header for dosimetry audit purposes was also tested. ESAK values per frame (or kerma rate) for the most commonly used cine and fluoroscopy modes for different PMMA thicknesses and for field sizes of 17 and 23 cm for II, and 20 and 25 cm for FD, produced similar results in the evaluated system with both technologies, ranging between 19 and 589 µGy/frame (cine) and 5 and 95 mGy min⁻¹ (fluoroscopy). Image quality for these dose settings was better for the FD version. The 'study dosimetric report' is comprehensive, and its numerical content is sufficiently accurate. There is potential in the future to set those systems with dynamic FD to lower doses than are possible in the current II versions, especially for digital cine runs, or to benefit from improved image quality.

  12. Can pictorial images communicate the quality of pain successfully?

    PubMed Central

    Knapp, Peter; Morley, Stephen; Stones, Catherine

    2015-01-01

    Chronic pain is common and difficult for patients to communicate to health professionals. It may include neuropathic elements which require specialised treatment. A little used approach to communicating the quality of pain is through the use of images. This study aimed to test the ability of a set of 12 images depicting different sensory pain qualities to successfully communicate those qualities. Images were presented to 25 student nurses and 38 design students. Students were asked to write down words or phrases describing the quality of pain they felt was being communicated by each image. They were asked to provide as many or as few as occurred to them. The images were extremely heterogeneous in their ability to convey qualities of pain accurately. Only 2 of the 12 images were correctly interpreted by more than 70% of the sample. There was a significant difference between the two student groups, with nurses being significantly better at interpreting the images than the design students. Clearly, attention needs to be given not only to the content of images designed to depict the sensory qualities of pain but also to the differing audiences who may use them. Education, verbal ability, ethnicity and a multiplicity of other factors may influence the understanding and use of such images. Considerable work is needed to develop a set of images which is sufficiently culturally appropriate and effective for general use. PMID:26516574

  13. Image quality assessment for CT used on small animals

    NASA Astrophysics Data System (ADS)

    Cisneros, Isabela Paredes; Agulles-Pedrós, Luis

    2016-07-01

    Image acquisition on a CT scanner is nowadays necessary in almost any kind of medical study. Its purpose, to produce anatomical images with the best achievable quality, implies the highest diagnostic radiation exposure to patients. Image quality can be measured quantitatively based on parameters such as noise, uniformity and resolution. This measurement allows the determination of optimal operating parameters for the scanner in order to get the best diagnostic image. A human Philips CT scanner is the first one in Colombia intended exclusively for veterinary use. The aim of this study was to measure the CT image quality parameters using an acrylic phantom and then, using MATLAB, determine these parameters as a function of current value and window of visualization, in order to reduce dose delivery while keeping appropriate image quality.
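
    A minimal sketch of the kind of ROI-based noise and uniformity measurement described here is shown below; the ROI sizes, positions and the uniformity definition are assumptions, not the protocol used in the study.

        import numpy as np

        def roi_mean_std(image, center, size=20):
            """Mean and standard deviation of a square ROI centred at (row, col)."""
            r, c = center
            half = size // 2
            roi = image[r - half:r + half, c - half:c + half]
            return float(roi.mean()), float(roi.std())

        def phantom_metrics(image, center_roi, peripheral_rois):
            """Noise (std in the central ROI) and uniformity (largest deviation of peripheral
            ROI means from the central ROI mean), in the image's pixel or HU units."""
            mean_c, noise = roi_mean_std(image, center_roi)
            deviations = [abs(roi_mean_std(image, p)[0] - mean_c) for p in peripheral_rois]
            return {"noise": noise, "uniformity": max(deviations)}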

  14. Improving the Quality of Imaging in the Emergency Department.

    PubMed

    Blackmore, C Craig; Castro, Alexandra

    2015-12-01

    Imaging is critical for the care of emergency department (ED) patients. However, much of the imaging performed for acute care today reflects overutilization, creating substantial cost without significant benefit. Further, the value of imaging is not easily defined, as imaging affects outcomes only indirectly, through interaction with treatment. Improving the quality, including the appropriateness, of emergency imaging requires understanding of how imaging contributes to patient care. The six-tier efficacy hierarchy of Fryback and Thornbury enables understanding of the value of imaging on multiple levels, ranging from technical efficacy to medical decision-making and higher-level patient and societal outcomes. The imaging efficacy hierarchy also allows definition of imaging quality through the Institute of Medicine (IOM) quality domains of safety, effectiveness, patient-centeredness, timeliness, efficiency, and equitability, and provides a foundation for quality improvement. In this article, the authors elucidate the Fryback and Thornbury framework to define the value of imaging in the ED and to relate emergency imaging to the IOM quality domains.

  15. Quaternion structural similarity: a new quality index for color images.

    PubMed

    Kolaman, Amir; Yadid-Pecht, Orly

    2012-04-01

    One of the most important issues for researchers developing image processing algorithms is image quality. Methodical quality evaluation, by showing images to several human observers, is slow, expensive, and highly subjective. On the other hand, a visual quality metric (VQM) is a fast, cheap, and objective tool for evaluating image quality. Although most VQMs are good at predicting the quality of an image degraded by a single degradation, they perform poorly for a combination of two degradations. An example of such a degradation is the color crosstalk (CTK) effect, which introduces blur together with desaturation. CTK is expected to become a bigger issue in image quality as the industry moves toward smaller sensors. In this paper, we develop a VQM that can better evaluate the quality of an image degraded by a combined blur/desaturation degradation and that performs as well as other VQMs on single degradations such as blur, compression, and noise. We show why standard scalar techniques are insufficient to measure a combined blur/desaturation degradation and explain why a vectorial approach is better suited. We introduce quaternion image processing (QIP), which is a true vectorial approach and has many uses in the fields of physics and engineering. Our new VQM is a vectorial expansion of structural similarity using QIP, which gives it its name: Quaternion Structural SIMilarity (QSSIM). We built a new database of a combined blur/desaturation degradation and conducted a quality survey with human subjects. An extensive comparison between QSSIM and other VQMs on several image quality databases, including our new database, shows the superiority of this new approach in predicting the visual quality of color images.

  16. Effect of image quality on calcification detection in digital mammography

    PubMed Central

    Warren, Lucy M.; Mackenzie, Alistair; Cooke, Julie; Given-Wilson, Rosalind M.; Wallis, Matthew G.; Chakraborty, Dev P.; Dance, David R.; Bosmans, Hilde; Young, Kenneth C.

    2012-01-01

    Purpose: This study aims to investigate if microcalcification detection varies significantly when mammographic images are acquired using different image qualities, including: different detectors, dose levels, and different image processing algorithms. An additional aim was to determine how the standard European method of measuring image quality using threshold gold thickness measured with a CDMAM phantom and the associated limits in current EU guidelines relate to calcification detection. Methods: One hundred and sixty two normal breast images were acquired on an amorphous selenium direct digital (DR) system. Microcalcification clusters extracted from magnified images of slices of mastectomies were electronically inserted into half of the images. The calcification clusters had a subtle appearance. All images were adjusted using a validated mathematical method to simulate the appearance of images from a computed radiography (CR) imaging system at the same dose, from both systems at half this dose, and from the DR system at quarter this dose. The original 162 images were processed with both Hologic and Agfa (Musica-2) image processing. All other image qualities were processed with Agfa (Musica-2) image processing only. Seven experienced observers marked and rated any identified suspicious regions. Free response operating characteristic (FROC) and ROC analyses were performed on the data. The lesion sensitivity at a nonlesion localization fraction (NLF) of 0.1 was also calculated. Images of the CDMAM mammographic test phantom were acquired using the automatic setting on the DR system. These images were modified to the additional image qualities used in the observer study. The images were analyzed using automated software. In order to assess the relationship between threshold gold thickness and calcification detection a power law was fitted to the data. Results: There was a significant reduction in calcification detection using CR compared with DR: the alternative FROC

  18. A new assessment method for image fusion quality

    NASA Astrophysics Data System (ADS)

    Li, Liu; Jiang, Wanying; Li, Jing; Yuchi, Ming; Ding, Mingyue; Zhang, Xuming

    2013-03-01

    Image fusion quality assessment plays a critically important role in the field of medical imaging. To evaluate image fusion quality effectively, many assessment methods have been proposed, including mutual information (MI), root mean square error (RMSE), and the universal image quality index (UIQI). However, these image fusion assessment methods do not reflect human visual inspection effectively. To address this problem, we propose in this paper a novel image fusion assessment method that combines the nonsubsampled contourlet transform (NSCT) with regional mutual information. In the proposed method, the source medical images are first decomposed into different levels by the NSCT. Then the maximum NSCT coefficients of the decomposed directional images at each level are obtained to compute the regional mutual information (RMI). Finally, multi-channel RMI is computed as the weighted sum of the RMI values obtained at the various levels of the NSCT. The advantage of the proposed method lies in the fact that the NSCT can represent image information using multiple directions and multiple scales and therefore conforms to the multi-channel characteristic of the human visual system, leading to outstanding image assessment performance. The experimental results using CT and MRI images demonstrate that the proposed assessment method outperforms assessment methods such as MI and the UIQI-based measure in evaluating image fusion quality, and that it can provide results consistent with human visual assessment.
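
    A simplified sketch of a regional mutual information computation, here applied block-wise directly to image intensities rather than to NSCT coefficients, is shown below to make the RMI idea concrete; it is not the authors' multi-channel weighting scheme.

        import numpy as np

        def mutual_information(a, b, bins=64):
            """Mutual information between two equally sized image regions via a joint histogram."""
            joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
            pxy = joint / joint.sum()
            px = pxy.sum(axis=1, keepdims=True)
            py = pxy.sum(axis=0, keepdims=True)
            nz = pxy > 0
            return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

        def regional_mi(source, fused, block=32):
            """Average mutual information computed block by block (one simple notion of 'regional')."""
            h, w = source.shape
            vals = [mutual_information(source[i:i + block, j:j + block],
                                       fused[i:i + block, j:j + block])
                    for i in range(0, h - block + 1, block)
                    for j in range(0, w - block + 1, block)]
            return float(np.mean(vals))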

  19. The effect of image sharpness on quantitative eye movement data and on image quality evaluation while viewing natural images

    NASA Astrophysics Data System (ADS)

    Vuori, Tero; Olkkonen, Maria

    2006-01-01

    The aim of the study is to test both customer image quality rating (subjective image quality) and physical measurement of user behavior (eye movement tracking) to find customer satisfaction differences among imaging technologies. A methodological aim is to find out whether eye movements could be used quantitatively in image quality preference studies. In general, we want to map objective, physically measurable image quality to subjective evaluations and eye movement data. We conducted a series of image quality tests in which the test subjects evaluated image quality while we recorded their eye movements. Results show that eye movement parameters change consistently according to the instructions given to the user and according to physical image quality, e.g. saccade duration increased with increasing blur. The results indicate that eye movement tracking could be used to differentiate the image quality evaluation strategies that users adopt. The results also show that eye movements would help in mapping between technological and subjective image quality. Furthermore, these results give some empirical emphasis to top-down perceptual processes in image quality perception and evaluation by showing differences between perceptual processes in situations where the cognitive task varies.

  20. Image processing system performance prediction and product quality evaluation

    NASA Technical Reports Server (NTRS)

    Stein, E. K.; Hammill, H. B. (Principal Investigator)

    1976-01-01

    The author has identified the following significant results. A new technique for image processing system performance prediction and product quality evaluation was developed. It was entirely objective, quantitative, and general, and should prove useful in system design and quality control. The technique and its application to determination of quality control procedures for the Earth Resources Technology Satellite NASA Data Processing Facility are described.

  1. Testing scanners for the quality of output images

    NASA Astrophysics Data System (ADS)

    Concepcion, Vicente P.; Nadel, Lawrence D.; D'Amato, Donald P.

    1995-01-01

    Document scanning is the means through which documents are converted to their digital image representation for electronic storage or distribution. Among the types of documents being scanned by government agencies are tax forms, patent documents, office correspondence, mail pieces, engineering drawings, microfilm, archived historical papers, and fingerprint cards. Increasingly, the resulting digital images are used as the input for further automated processing including: conversion to a full-text-searchable representation via machine printed or handwritten (optical) character recognition (OCR), postal zone identification, raster-to-vector conversion, and fingerprint matching. These diverse document images may be bi-tonal, gray scale, or color. Spatial sampling frequencies range from about 200 pixels per inch to over 1,000. The quality of the digital images can have a major effect on the accuracy and speed of any subsequent automated processing, as well as on any human-based processing which may be required. During imaging system design, there is, therefore, a need to specify the criteria by which image quality will be judged and, prior to system acceptance, to measure the quality of images produced. Unfortunately, there are few, if any, agreed-upon techniques for measuring document image quality objectively. In the output images, it is difficult to distinguish image degradation caused by the poor quality of the input paper or microfilm from that caused by the scanning system. We propose several document image quality criteria and have developed techniques for their measurement. These criteria include spatial resolution, geometric image accuracy (distortion), gray scale resolution and linearity, and temporal and spatial uniformity. The measurement of these criteria requires scanning one or more test targets along with computer-based analyses of the test target images.

  2. No-reference visual quality assessment for image inpainting

    NASA Astrophysics Data System (ADS)

    Voronin, V. V.; Frantc, V. A.; Marchuk, V. I.; Sherstobitov, A. I.; Egiazarian, K.

    2015-03-01

    Inpainting has received a lot of attention in recent years, and quality assessment is an important task for evaluating different image reconstruction approaches. In many cases inpainting methods introduce blur in sharp transitions and image contours when recovering large areas of missing pixels, and they often fail to recover curvy boundary edges. Quantitative metrics for inpainting results currently do not exist, and researchers use human comparisons to evaluate their methodologies and techniques. Most objective quality assessment methods rely on a reference image, which is often not available in inpainting applications. Usually researchers use subjective quality assessment by human observers, which is a difficult and time-consuming procedure. This paper focuses on a machine learning approach to no-reference visual quality assessment for image inpainting based on properties of human vision. Our method is based on the observation that Local Binary Patterns describe the local structural information of an image well. We use support vector regression, trained on images assessed by human observers, to predict the perceived quality of inpainted images. We demonstrate how our predicted quality value correlates with qualitative opinion in a human observer study. Results are shown on a human-scored dataset for different inpainting methods.
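
    A minimal sketch of the LBP-plus-regression pipeline, assuming scikit-image and scikit-learn and with all parameter values chosen only for illustration, could look like this.

        import numpy as np
        from skimage.feature import local_binary_pattern
        from sklearn.svm import SVR

        def lbp_features(gray_image, P=8, R=1.0):
            """Normalized histogram of uniform LBP codes as a structural feature vector."""
            codes = local_binary_pattern(gray_image, P, R, method="uniform")
            hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
            return hist

        # Training: features of inpainted images paired with mean opinion scores from observers.
        # X_train = np.array([lbp_features(img) for img in training_images])
        # model = SVR(kernel="rbf", C=10.0).fit(X_train, observer_scores)
        # predicted_quality = model.predict([lbp_features(test_image)])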

  3. Image quality assessment by preprocessing and full reference model combination

    NASA Astrophysics Data System (ADS)

    Bianco, S.; Ciocca, G.; Marini, F.; Schettini, R.

    2009-01-01

    This paper focuses on full-reference image quality assessment and presents different computational strategies aimed at improving the robustness and accuracy of some well-known and widely used state-of-the-art models, namely the Structural Similarity approach (SSIM) by Wang and Bovik and the S-CIELAB spatial-color model by Zhang and Wandell. We investigate the hypothesis that combining error images with a visual attention model could allow a better fit of the psycho-visual data of the LIVE Image Quality Assessment Database Release 2. We show that the proposed quality assessment metric correlates better with the experimental data.
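
    One simple way to combine an SSIM error map with an attention model is to pool the map with a saliency weight; the sketch below, using scikit-image and an externally supplied saliency map, illustrates that idea and is not the exact combination strategy evaluated in the paper.

        import numpy as np
        from skimage.metrics import structural_similarity

        def weighted_ssim(reference, distorted, saliency):
            """SSIM map pooled with a visual-attention (saliency) weight map."""
            _, ssim_map = structural_similarity(reference, distorted,
                                                data_range=reference.max() - reference.min(),
                                                full=True)
            weights = saliency / saliency.sum()   # saliency map assumed non-negative, same shape
            return float(np.sum(ssim_map * weights))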

  4. Method and tool for generating and managing image quality allocations through the design and development process

    NASA Astrophysics Data System (ADS)

    Sparks, Andrew W.; Olson, Craig; Theisen, Michael J.; Addiego, Chris J.; Hutchins, Tiffany G.; Goodman, Timothy D.

    2016-05-01

    Performance models for infrared imaging systems require image quality parameters; optical design engineers need image quality design goals; systems engineers develop image quality allocations to test imaging systems against. It is a challenge to maintain consistency and traceability amongst the various expressions of image quality. We present a method and parametric tool for generating and managing expressions of image quality during the system modeling, requirements specification, design, and testing phases of an imaging system design and development project.

  5. Interplay between JPEG-2000 image coding and quality estimation

    NASA Astrophysics Data System (ADS)

    Pinto, Guilherme O.; Hemami, Sheila S.

    2013-03-01

    Image quality and utility estimators aspire to quantify the perceptual resemblance and the usefulness of a distorted image when compared to a reference natural image, respectively. Image-coders, such as JPEG-2000, traditionally aspire to allocate the available bits to maximize the perceptual resemblance of the compressed image when compared to a reference uncompressed natural image. Specifically, this can be accomplished by allocating the available bits to minimize the overall distortion, as computed by a given quality estimator. This paper applies five image quality and utility estimators, SSIM, VIF, MSE, NICE and GMSE, within a JPEG-2000 encoder for rate-distortion optimization to obtain new insights on how to improve JPEG-2000 image coding for quality and utility applications, as well as to improve the understanding about the quality and utility estimators used in this work. This work develops a rate-allocation algorithm for arbitrary quality and utility estimators within the Post-Compression Rate-Distortion Optimization (PCRD-opt) framework in JPEG-2000 image coding. Performance of the JPEG-2000 image coder when used with a variety of utility and quality estimators is then assessed. The estimators fall into two broad classes, magnitude-dependent (MSE, GMSE and NICE) and magnitude-independent (SSIM and VIF). They further differ on their use of the low-frequency image content in computing their estimates. The impact of these computational differences is analyzed across a range of images and bit rates. In general, performance of the JPEG-2000 coder below 1.6 bits/pixel with any of these estimators is highly content dependent, with the most relevant content being the amount of texture in an image and whether the strongest gradients in an image correspond to the main contours of the scene. Above 1.6 bits/pixel, all estimators produce visually equivalent images. As a result, the MSE estimator provides the most consistent performance across all images, while specific

  6. A quantitative method for visual phantom image quality evaluation

    NASA Astrophysics Data System (ADS)

    Chakraborty, Dev P.; Liu, Xiong; O'Shea, Michael; Toto, Lawrence C.

    2000-04-01

    This work presents an image quality evaluation technique for uniform-background target-object phantom images. The Degradation-Comparison-Threshold (DCT) method involves degrading the image quality of a target-containing region with a blocking process and comparing the resulting image to a similarly degraded target-free region. The threshold degradation needed for 92% correct detection of the target region is the image quality measure of the target. Images of the American College of Radiology (ACR) mammography accreditation program phantom were acquired under varying x-ray conditions on a digital mammography machine. Five observers performed ACR and DCT evaluations of the images. A figure-of-merit (FOM) of an evaluation method was defined which takes into account measurement noise and the change of the measure as a function of x-ray exposure to the phantom. The FOM of the DCT method was 4.1 times that of the ACR method for the specks, 2.7 times better for the fibers and 1.4 times better for the masses. For the specks, inter-reader correlations on the same image set increased significantly from 87% for the ACR method to 97% for the DCT method. The viewing time per target for the DCT method was 3 - 5 minutes. The observed greater sensitivity of the DCT method could lead to more precise Quality Control (QC) testing of digital images, which should improve the sensitivity of the QC process to genuine image quality variations. Another benefit of the method is that it can measure the image quality of high-detectability target objects, which is impractical with existing methods.

  7. Perceived no reference image quality measurement for chromatic aberration

    NASA Astrophysics Data System (ADS)

    Lamb, Anupama B.; Khambete, Madhuri

    2016-03-01

    Today there is a need for no-reference (NR) objective perceived image quality measurement techniques, as conducting subjective experiments and making reference images available are very difficult tasks. Very few NR perceived image quality measurement algorithms are available for color distortions such as chromatic aberration (CA), color quantization with dither, and color saturation. We propose NR image quality assessment (NR-IQA) algorithms for images distorted with CA. CA is mostly observed in images taken with digital cameras that have higher sensor resolution and inexpensive lenses. We compared our metric's performance with two state-of-the-art NR blur techniques, one full-reference IQA technique and three general-purpose NR-IQA techniques, although the latter are not tailored for CA. We used the CA dataset in the TID-2013 color image database to evaluate performance. The proposed algorithms give comparable performance with state-of-the-art techniques in terms of performance parameters and outperform them in terms of monotonicity and computational complexity. We have also found that the proposed CA algorithm best predicts the perceived image quality of images distorted with realistic CA.

  8. Figure of Image Quality and Information Capacity in Digital Mammography

    PubMed Central

    Michail, Christos M.; Kalyvas, Nektarios E.; Valais, Ioannis G.; Fudos, Ioannis P.; Fountos, George P.; Dimitropoulos, Nikos; Kandarakis, Ioannis S.

    2014-01-01

    Objectives. In this work, a simple technique to assess the image quality characteristics of the postprocessed image is developed and an easy to use figure of image quality (FIQ) is introduced. This FIQ characterizes images in terms of resolution and noise. In addition, information capacity, defined within the context of Shannon's information theory, was used as an overall image quality index. Materials and Methods. A digital mammographic image was postprocessed with three digital filters. Resolution and noise were calculated via the Modulation Transfer Function (MTF), the coefficient of variation, and the figure of image quality. In addition, frequency-dependent parameters such as the noise power spectrum (NPS) and noise equivalent quanta (NEQ) were estimated and used to assess information capacity. Results. FIQs for the "raw image" data and the image processed with the "sharpen edges" filter were found to be 907.3 and 1906.1, respectively. The corresponding information capacity values were 60.86 × 10³ and 78.96 × 10³ bits/mm². Conclusion. It was found that, after the application of the postprocessing techniques (even commercial nondedicated software) on the raw digital mammograms, the MTF, NPS, and NEQ are improved for medium to high spatial frequencies, allowing smaller structures to be resolved in the final image. PMID:24895593
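
    A Shannon-type information capacity can be obtained by integrating log2(1 + SNR²) over the frequency plane, with the NEQ standing in for the squared signal-to-noise ratio; the following sketch assumes a radially averaged NEQ and is one common formulation, not necessarily the exact one used here.

        import numpy as np

        def information_capacity(freqs, neq):
            """Shannon-type information capacity (bits/mm^2) from a radially averaged NEQ(f).

            freqs: spatial frequencies in cycles/mm; neq: NEQ values at those frequencies.
            Assumes radial symmetry of the two-dimensional frequency plane.
            """
            freqs = np.asarray(freqs, dtype=float)
            neq = np.asarray(neq, dtype=float)
            integrand = 2.0 * np.pi * freqs * np.log2(1.0 + neq)   # polar integration element
            return float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(freqs)))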

  9. Scanner-based image quality measurement system for automated analysis of EP output

    NASA Astrophysics Data System (ADS)

    Kipman, Yair; Mehta, Prashant; Johnson, Kate

    2003-12-01

    Inspection of electrophotographic print cartridge quality and compatibility requires analysis of hundreds of pages on a wide population of printers and copiers. Although print quality inspection is often achieved through the use of anchor prints and densitometry, more comprehensive analysis and quantitative data is desired for performance tracking, benchmarking and failure mode analysis. Image quality measurement systems range in price and performance, image capture paths and levels of automation. In order to address the requirements of a specific application, careful consideration was made to print volume, budgetary limits, and the scope of the desired image quality measurements. A flatbed scanner-based image quality measurement system was selected to support high throughput, maximal automation, and sufficient flexibility for both measurement methods and image sampling rates. Using an automatic document feeder (ADF) for sample management, a half ream of prints can be measured automatically without operator intervention. The system includes optical character recognition (OCR) for automatic determination of target type for measurement suite selection. This capability also enables measurement of mixed stacks of targets since each sample is identified prior to measurement. In addition, OCR is used to read toner ID, machine ID, print count, and other pertinent information regarding the printing conditions and environment. This data is saved to a data file along with the measurement results for complete test documentation. Measurement methods were developed to replace current methods of visual inspection and densitometry. The features that were being analyzed visually could be addressed via standard measurement algorithms. Measurement of density proved to be less simple since the scanner is not a densitometer and anything short of an excellent estimation would be meaningless. In order to address the measurement of density, a transfer curve was built to translate the

  10. Dosimetry and image quality assessment in a direct radiography system

    PubMed Central

    Oliveira, Bruno Beraldo; de Oliveira, Marcio Alves; Paixão, Lucas; Teixeira, Maria Helena Araújo; Nogueira, Maria do Socorro

    2014-01-01

    Objective: To evaluate the mean glandular dose with a solid state detector and the image quality in a direct radiography system, utilizing phantoms. Materials and Methods: Irradiations were performed with automatic exposure control and polymethyl methacrylate slabs with different thicknesses to calculate glandular dose values. The image quality was evaluated by means of the structures visualized on the images of the phantoms. Results: Considering the uncertainty of the measurements, the mean glandular dose results are in agreement with the values provided by the equipment and with internationally adopted reference levels. Results obtained from images of the phantoms were in agreement with the reference values. Conclusion: The present study contributes to verify the equipment conformity as regards dose values and image quality. PMID:25741119
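
    Such evaluations typically convert the measured entrance kerma to mean glandular dose through tabulated conversion factors (the Dance formalism); the sketch below states that relation generically, with all factor values left to be looked up, and is an assumption rather than a quotation of this paper's method.

        def mean_glandular_dose(esak_mgy, g, c, s):
            """European-protocol-style estimate: MGD = K * g * c * s, where K is the entrance
            surface air kerma (without backscatter) and g, c, s are tabulated conversion factors
            (Dance et al.) for breast thickness, glandularity and spectrum. No factor values are
            hard-coded here; they must be looked up for the actual exposure conditions."""
            return esak_mgy * g * c * s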

  11. Image quality evaluation and control of computer-generated holograms

    NASA Astrophysics Data System (ADS)

    Yoshikawa, Hiroshi; Yamaguchi, Takeshi; Uetake, Hiroki

    2016-03-01

    The image quality of computer-generated holograms is usually evaluated subjectively. For example, the reconstructed image from the hologram is compared with that of other holograms, or evaluated by the double-stimulus impairment scale method in comparison with the original image. This paper proposes an objective image quality evaluation of a computer-generated hologram by evaluating both the diffraction efficiency and the peak signal-to-noise ratio. Theory and numerical experimental results are shown for Fourier transform transmission holograms of both amplitude and phase modulation. Results without the optimized random phase show that the amplitude transmission hologram gives a better peak signal-to-noise ratio, but the phase transmission hologram provides about 10 times higher diffraction efficiency than the amplitude type. As an optimized phase hologram, a kinoform is evaluated. In addition, we investigate controlling image quality by a non-linear operation.

  12. Applying image quality in cell phone cameras: lens distortion

    NASA Astrophysics Data System (ADS)

    Baxter, Donald; Goma, Sergio R.; Aleksic, Milivoje

    2009-01-01

    This paper describes the framework used in one of the pilot studies run under the I3A CPIQ initiative to quantify overall image quality in cell-phone cameras. The framework is based on a multivariate formalism which tries to predict overall image quality from individual image quality attributes and was validated in a CPIQ pilot program. The pilot study focuses on image quality distortions introduced in the optical path of a cell-phone camera, which may or may not be corrected in the image processing path. The assumption is that the captured image used is JPEG compressed and the cell-phone camera is set to 'auto' mode. As the framework requires the individual attributes to be relatively perceptually orthogonal, the attributes used in the pilot study are lens geometric distortion (LGD) and lateral chromatic aberrations (LCA). The goal of this paper is to present the framework of this pilot project, starting with the definition of the individual attributes up to their quantification in JNDs of quality, a requirement of the multivariate formalism; therefore both objective and subjective evaluations were used. A major distinction of the objective part from the 'DSC imaging world' is that the LCA/LGD distortions found in cell-phone cameras rarely exhibit radial behavior; therefore a radial mapping/modeling cannot be used in this case.

  13. Perceived quality of wood images influenced by the skewness of image histogram

    NASA Astrophysics Data System (ADS)

    Katsura, Shigehito; Mizokami, Yoko; Yaguchi, Hirohisa

    2015-08-01

    The shape of image luminance histograms is related to material perception. We investigated how the luminance histogram contributed to improvements in the perceived quality of wood images by examining various natural wood and adhesive vinyl sheets with printed wood grain. In the first experiment, we visually evaluated the perceived quality of wood samples. In addition, we measured the colorimetric parameters of the wood samples and calculated statistics of image luminance. The relationship between visual evaluation scores and image statistics suggested that skewness and kurtosis affected the perceived quality of wood. In the second experiment, we evaluated the perceived quality of wood images with altered luminance skewness and kurtosis using a paired comparison method. Our result suggests that wood images are more realistic if the skewness of the luminance histogram is slightly negative.
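
    The histogram statistics involved are simple to compute; the sketch below also shows one crude (and purely illustrative) way of shifting skewness for a paired-comparison stimulus.

        import numpy as np
        from scipy.stats import skew, kurtosis

        def luminance_shape_stats(luminance_image):
            """Skewness and kurtosis of the luminance histogram of a wood image."""
            values = np.asarray(luminance_image, dtype=float).ravel()
            return float(skew(values)), float(kurtosis(values))

        def shift_skewness(luminance_image, gamma):
            """Crude way to alter histogram skewness for a test stimulus: a gamma below 1
            brightens mid-tones and tends to push skewness negative (illustrative only)."""
            norm = np.asarray(luminance_image, dtype=float) / float(luminance_image.max())
            return norm ** gamma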

  14. A feature-enriched completely blind image quality evaluator.

    PubMed

    Lin Zhang; Lei Zhang; Bovik, Alan C

    2015-08-01

    Existing blind image quality assessment (BIQA) methods are mostly opinion-aware. They learn regression models from training images with associated human subjective scores to predict the perceptual quality of test images. Such opinion-aware methods, however, require a large number of training samples with associated human subjective scores and spanning a variety of distortion types. The BIQA models learned by opinion-aware methods often have weak generalization capability, thereby limiting their usability in practice. By comparison, opinion-unaware methods do not need human subjective scores for training, and thus have greater potential for good generalization capability. Unfortunately, thus far no opinion-unaware BIQA method has shown consistently better quality prediction accuracy than the opinion-aware methods. Here, we aim to develop an opinion-unaware BIQA method that can compete with, and perhaps outperform, the existing opinion-aware methods. By integrating the features of natural image statistics derived from multiple cues, we learn a multivariate Gaussian model of image patches from a collection of pristine natural images. Using the learned multivariate Gaussian model, a Bhattacharyya-like distance is used to measure the quality of each image patch, and then an overall quality score is obtained by average pooling. The proposed BIQA method does not need any distorted sample images nor subjective quality scores for training, yet extensive experiments demonstrate its superior quality-prediction performance to the state-of-the-art opinion-aware BIQA methods. The MATLAB source code of our algorithm is publicly available at www.comp.polyu.edu.hk/~cslzhang/IQA/ILNIQE/ILNIQE.htm.
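
    A highly simplified reading of this pipeline, fitting a multivariate Gaussian to pristine-patch features and scoring test patches with a Bhattacharyya-like distance, is sketched below; it is not the released MATLAB implementation, and all structural details are assumptions.

        import numpy as np

        def fit_pristine_model(pristine_patch_features):
            """Fit a multivariate Gaussian (mean, covariance) to features of pristine patches."""
            mu = pristine_patch_features.mean(axis=0)
            cov = np.cov(pristine_patch_features, rowvar=False)
            return mu, cov

        def patch_quality(patch_feature, patch_cov, mu, cov):
            """Bhattacharyya-like distance between the pristine model and one patch's statistics."""
            pooled = (cov + patch_cov) / 2.0
            diff = patch_feature - mu
            return float(np.sqrt(diff @ np.linalg.pinv(pooled) @ diff))

        def overall_score(patch_features, patch_covs, mu, cov):
            """Average pooling of per-patch distances gives the final quality score (higher = worse)."""
            return float(np.mean([patch_quality(f, c, mu, cov)
                                  for f, c in zip(patch_features, patch_covs)]))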

  16. The influence of novel CT reconstruction technique and ECG-gated technique on image quality and patient dose of cardiac computed tomography.

    PubMed

    Dyakov, I; Stoinova, V; Groudeva, V; Vassileva, J

    2015-07-01

    The aim of the present study was to compare image quality and patient dose in cardiac computed tomography angiography (CTA), in terms of volume computed tomography dose index (CTDIvol), dose length product (DLP) and effective dose, when changing from filtered back projection (FBP) to adaptive iterative dose reduction (AIDR) reconstruction techniques. A further aim was to implement prospective electrocardiogram (ECG) gating for patient dose reduction. The study was performed with the Aquilion ONE 320-row CT of Toshiba Medical Systems. Analysis of cardiac CT protocols was performed before and after integration of the new software. The AIDR technique showed more than 50 % reduction in CTDIvol values and 57 % in effective dose. The subjective evaluation of clinical images confirmed the adequate image quality acquired with the AIDR technique. The preliminary results indicated significant dose reduction when using prospective ECG gating while maintaining adequate diagnostic quality of the clinical images. PMID:25836680
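
    For reference, the dose quantities compared above are linked by simple conversions; a minimal sketch (the conversion coefficient k and the example scan values are assumed illustrative numbers, not values from this study):

        def dlp_mGy_cm(ctdi_vol_mGy, scan_length_cm):
            # Dose length product: CTDIvol integrated over the scanned length.
            return ctdi_vol_mGy * scan_length_cm

        def effective_dose_mSv(dlp, k=0.014):
            # Effective dose estimated as DLP times a region-specific coefficient k (mSv per mGy*cm);
            # 0.014 is a commonly quoted chest value, used here purely as an example.
            return dlp * k

        print(effective_dose_mSv(dlp_mGy_cm(20.0, 12.0)))  # ~3.4 mSv for this hypothetical scan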

  17. Optimization and image quality assessment of the alpha-image reconstruction algorithm: iterative reconstruction with well-defined image quality metrics

    NASA Astrophysics Data System (ADS)

    Lebedev, Sergej; Sawall, Stefan; Kuchenbecker, Stefan; Faby, Sebastian; Knaup, Michael; Kachelrieß, Marc

    2015-03-01

    The reconstruction of CT images with low noise and highest spatial resolution is a challenging task. Usually, a trade-off between at least these two demands has to be found, or several reconstructions with mutually exclusive properties, i.e. either low noise or high spatial resolution, have to be performed. Iterative reconstruction methods might be suitable tools to overcome these limitations and provide images of highest diagnostic quality with formerly mutually exclusive image properties. While image quality metrics like the modulation transfer function (MTF) or the point spread function (PSF) are well defined in the case of standard reconstructions, e.g. filtered backprojection, iterative algorithms lack these metrics. To overcome this issue, alternative methodologies such as model observers have been proposed recently to allow quantification of a usually task-dependent image quality metric [1]. As an alternative we recently proposed an iterative reconstruction method, the alpha-image reconstruction (AIR), providing well-defined image quality metrics on a per-voxel basis [2]. In particular, the AIR algorithm seeks to find weighting images, the alpha-images, that are used to blend between basis images with mutually exclusive image properties. The result is an image with highest diagnostic quality that provides high spatial resolution and a low noise level. As the estimation of the alpha-images is computationally demanding, we herein aim at optimizing this process and highlight the favorable properties of AIR using patient measurements.
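
    The blending idea behind AIR can be sketched as a per-voxel weighting between two basis reconstructions (this shows only the blending step; how the alpha-images are estimated is the computationally demanding part addressed in the paper):

        import numpy as np

        def alpha_blend(basis_sharp, basis_smooth, alpha):
            # basis_sharp:  high-resolution / high-noise basis reconstruction
            # basis_smooth: low-noise / low-resolution basis reconstruction
            # alpha:        per-voxel weighting image in [0, 1]
            alpha = np.clip(alpha, 0.0, 1.0)
            return alpha * basis_sharp + (1.0 - alpha) * basis_smooth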

  18. Pre-analytic process control: projecting a quality image.

    PubMed

    Serafin, Mark D

    2006-01-01

    Within the health-care system, the term "ancillary department" often describes the laboratory. Thus, laboratories may find it difficult to define their image and, with it, customer perception of department quality. Regulatory requirements give laboratories that so desire an elegant way to address image and perception issues--a comprehensive pre-analytic system solution. Since large laboratories use such systems--laboratory service manuals--I describe and illustrate the process for the benefit of smaller facilities. There exist resources to help even small laboratories produce a professional service manual--an elegant solution to image and customer perception of quality. PMID:17005095

  19. Objective assessment of phantom image quality in mammography: a feasibility study.

    PubMed

    Castellano Smith, A D; Castellano Smith, I A; Dance, D R

    1998-01-01

    The need for test objects in mammography quality control programmes to provide an objective measure of image quality pertinent to clinical problems is well documented. However, interobserver variations may be greater than the fluctuations in image quality that the quality control programme is seeking to detect. We have developed a computer algorithm to score a number of features in the Leeds TOR(MAX) mammography phantom. Threshold scoring techniques have been applied in the first instance; scoring schemes which utilize measures such as signal-to-noise ratio and modulation have also been formulated. This fully automatic algorithm has been applied to a set of 10 films which have been digitized at 25 microns resolution using a Joyce-Loebl scanning microdensitometer. The films were chosen retrospectively from quality control test films to demonstrate: (a) a range of optimized imaging systems, and (b) variation from the optimum. The performance of the algorithm has been compared with that of five experienced observers, and has been shown to be as consistent as individual observers, but more consistent than a pool of observers. Problems have been encountered with the detection of small details, indicating that a more sophisticated localization technique is desirable. The computer performs more successfully with the scoring scheme which utilizes the full imaging information available, rather than with the threshold-determined one. However, both the observers and the computer algorithm failed to identify the non-optimum films, suggesting that the sensitivity of the TOR(MAX) test object may not be adequate for modern mammography imaging systems. PMID:9534699

  20. The effect of image quality and forensic expertise in facial image comparisons.

    PubMed

    Norell, Kristin; Läthén, Klas Brorsson; Bergström, Peter; Rice, Allyson; Natu, Vaidehi; O'Toole, Alice

    2015-03-01

    Images of perpetrators in surveillance video footage are often used as evidence in court. In this study, identification accuracy in facial image comparisons was compared for forensic experts and untrained persons, together with the impact of image quality. Participants viewed thirty image pairs and were asked to rate the level of support garnered from their observations for concluding whether or not the two images showed the same person. Forensic experts reached their conclusions with significantly fewer errors than did untrained participants. They were also better than novices at determining when two high-quality images depicted the same person. Notably, lower image quality led to more careful conclusions by experts, but not by untrained participants. In summary, the untrained participants had more false negatives and false positives than experts, which in the latter case could lead to a higher risk of an innocent person being convicted when the witness is untrained. PMID:25537273

  1. Improving high resolution retinal image quality using speckle illumination HiLo imaging

    PubMed Central

    Zhou, Xiaolin; Bedggood, Phillip; Metha, Andrew

    2014-01-01

    Retinal image quality from flood-illumination adaptive optics (AO) ophthalmoscopes is adversely affected by out-of-focus light scatter due to the lack of confocality. This effect is more pronounced in small eyes, such as those of rodents, because the requisite high optical power confers a large dioptric thickness to the retina. A recently developed structured illumination microscopy (SIM) technique called HiLo imaging has been shown to reduce the effect of out-of-focus light scatter in flood-illumination microscopes and to produce pseudo-confocal images with significantly improved image quality. In this work, we adapted the HiLo technique to a flood AO ophthalmoscope and performed AO imaging in both (physical) model and live rat eyes. The improvement in image quality from HiLo imaging is shown both qualitatively and quantitatively using spatial spectral analysis. PMID:25136486
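
    A minimal sketch of the HiLo fusion step (the demodulation of the speckle frame is reduced here to a local-contrast weighting, and sigma/eta are assumed tuning parameters, not values from the paper):

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def hilo(uniform_img, speckle_img, sigma=4.0, eta=1.0):
            # Low-frequency, optically sectioned content: weight the uniform frame by the
            # local contrast of the speckle frame (in-focus regions retain speckle contrast).
            mean = gaussian_filter(speckle_img, sigma)
            var = gaussian_filter(speckle_img ** 2, sigma) - mean ** 2
            contrast = np.sqrt(np.clip(var, 0, None)) / (mean + 1e-9)
            lo = gaussian_filter(contrast * uniform_img, sigma)
            # High-frequency content taken directly from the uniform frame.
            hi = uniform_img - gaussian_filter(uniform_img, sigma)
            return eta * lo + hi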

  2. APQ-102 imaging radar digital image quality study

    NASA Technical Reports Server (NTRS)

    Griffin, C. R.; Estes, J. M.

    1982-01-01

    A modified APQ-102 sidelooking radar collected synthetic aperture radar (SAR) data which was digitized and recorded on wideband magnetic tape. These tapes were then ground processed into computer compatible tapes (CCT's). The CCT's may then be processed into high resolution radar images by software on the CYBER computer.

  3. Peripheral Aberrations and Image Quality for Contact Lens Correction

    PubMed Central

    Shen, Jie; Thibos, Larry N.

    2011-01-01

    Purpose Contact lenses reduced the degree of hyperopic field curvature present in myopic eyes, and rigid contact lenses reduced sphero-cylindrical image blur on the peripheral retina, but their effect on higher-order aberrations and overall optical quality of the eye in the peripheral visual field is still unknown. The purpose of our study was to evaluate peripheral wavefront aberrations and image quality across the visual field before and after contact lens correction. Methods A commercial Hartmann-Shack aberrometer was used to measure ocular wavefront errors in 5° steps out to 30° of eccentricity along the horizontal meridian in uncorrected eyes and when the same eyes were corrected with soft or rigid contact lenses. Wavefront aberrations and image quality were determined for the full elliptical pupil encountered in off-axis measurements. Results Ocular higher-order aberrations increase away from the fovea in the uncorrected eye. Third-order aberrations are larger and increase faster with eccentricity compared to the other higher-order aberrations. Contact lenses increase all higher-order aberrations except 3rd-order Zernike terms. Nevertheless, a net increase in image quality across the horizontal visual field for objects located at the foveal far point is achieved with rigid lenses, whereas soft contact lenses reduce image quality. Conclusions Second-order aberrations limit image quality more than higher-order aberrations in the periphery. Although second-order aberrations are reduced by contact lenses, the resulting gain in image quality is partially offset by increased amounts of higher-order aberrations. To fully realize the benefits of correcting higher-order aberrations in the peripheral field requires improved correction of second-order aberrations as well. PMID:21873925

  4. The influence of software filtering in digital mammography image quality

    NASA Astrophysics Data System (ADS)

    Michail, C.; Spyropoulou, V.; Kalyvas, N.; Valais, I.; Dimitropoulos, N.; Fountos, G.; Kandarakis, I.; Panayiotakis, G.

    2009-05-01

    Breast cancer is one of the most frequently diagnosed cancers among women. Several techniques have been developed to help in the early detection of breast cancer, such as conventional and digital x-ray mammography, positron and single-photon emission mammography, etc. A key advantage of digital mammography is that images can be manipulated as simple computer image files. Thus non-dedicated, commercially available image manipulation software can be employed to process and store the images. The image processing tools of the Photoshop (CS 2) software usually incorporate digital filters which may be used to reduce image noise, enhance contrast and increase spatial resolution. However, improving one image quality parameter may result in degradation of another. The aim of this work was to investigate the influence of three sharpening filters, named hereafter sharpen, sharpen more and sharpen edges, on image resolution and noise. Image resolution was assessed by means of the Modulation Transfer Function (MTF). In conclusion, it was found that the correct use of commercial non-dedicated software on digital mammograms may improve some aspects of image quality.

  5. Thematic Mapper image quality: Preliminary results

    NASA Technical Reports Server (NTRS)

    Wrigley, R. C.; Card, D. H.; Hlavka, C. A.; Likens, W. C.; Mertz, F. C.; Hall, J. R.

    1983-01-01

    Based on images analyzed so far, the band to band registration accuracy of the thematic mapper is very good. For bands within the same focal plane, the mean misregistrations are well within the specification, 0.2 pixels. For bands between the cooled and uncooled focal planes, there is a consistent mean misregistration of 0.5 pixels along-scan and 0.2-0.3 pixels across-scan. It exceeds the permitted 0.3 pixels for registration of bands between focal planes. If the mean misregistrations were removed by the data processing software, an analysis of the standard deviation of the misregistration indicates all band combinations would meet the registration specifications except for those including the thermal band. Analysis of the periodic noise in one image indicates a noise component in band 1 with a spatial frequency equivalent to 3.2 pixels in the along-scan direction.

  6. Validation of no-reference image quality index for the assessment of digital mammographic images

    NASA Astrophysics Data System (ADS)

    de Oliveira, Helder C. R.; Barufaldi, Bruno; Borges, Lucas R.; Gabarda, Salvador; Bakic, Predrag R.; Maidment, Andrew D. A.; Schiabel, Homero; Vieira, Marcelo A. C.

    2016-03-01

    To ensure optimal clinical performance of digital mammography, it is necessary to obtain images with high spatial resolution and low noise, keeping radiation exposure as low as possible. These requirements directly affect the interpretation of radiologists. The quality of a digital image should be assessed using objective measurements. In general, these methods measure the similarity between a degraded image and an ideal image without degradation (ground truth), used as a reference. These methods are called Full-Reference Image Quality Assessment (FR-IQA). However, for digital mammography, an image without degradation is not available in clinical practice; thus, an objective method to assess the quality of mammograms must operate without a reference. The purpose of this study is to present a Normalized Anisotropic Quality Index (NAQI), based on the Rényi entropy in the pseudo-Wigner domain, to assess mammography images in terms of spatial resolution and noise without any reference. The method was validated using synthetic images acquired through an anthropomorphic breast software phantom, and clinical exposures on anthropomorphic breast physical phantoms and patients' mammograms. The results reported by this no-reference index follow the same behavior as other well-established full-reference metrics, e.g., the peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM). A 50% reduction of the radiation dose in phantom images translated into a decrease of 4 dB in PSNR, 25% in SSIM and 33% in NAQI, evidencing that the proposed metric is sensitive to the noise resulting from dose reduction. The clinical results showed that images reduced to 53% and 30% of the standard radiation dose showed reductions of 15% and 25% in NAQI, respectively. Thus, this index may be used in clinical practice as an image quality indicator to improve quality assurance programs in mammography; hence, the proposed method reduces the subjectivity
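
    The full-reference baselines that the NAQI was validated against can be reproduced with standard tools; a minimal sketch using scikit-image (the file names are placeholders):

        from skimage import io, img_as_float
        from skimage.metrics import peak_signal_noise_ratio, structural_similarity

        ref = img_as_float(io.imread("full_dose_mammogram.png", as_gray=True))   # reference acquisition
        test = img_as_float(io.imread("half_dose_mammogram.png", as_gray=True))  # dose-reduced acquisition

        print("PSNR:", peak_signal_noise_ratio(ref, test, data_range=1.0))
        print("SSIM:", structural_similarity(ref, test, data_range=1.0))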

  7. Characterization of the image quality in neutron radioscopy

    NASA Astrophysics Data System (ADS)

    Brunner, J.; Engelhardt, M.; Frei, G.; Gildemeister, A.; Lehmann, E.; Hillenbach, A.; Schillinger, B.

    2005-04-01

    Neutron radioscopy, or dynamic neutron radiography, is a non-destructive testing method which has advanced considerably in recent years. Depending on the neutron flux, the object and the detector, a time resolution down to a few milliseconds is possible for single events. In the case of repetitive processes, the object can be synchronized with the detector and better statistics in the image can be achieved by adding radiographs of the same phase, with a time resolution down to 100 μs. By stepwise delaying the trigger signal, a radiography movie can be composed. Radiography images of a combustion engine and an injection nozzle were evaluated quantitatively by different methods in an attempt to characterize the image quality of an imaging system. The main factors which influence the image quality are listed and discussed.

  8. Method for image quality monitoring on digital television networks

    NASA Astrophysics Data System (ADS)

    Bretillon, Pierre; Baina, Jamal; Jourlin, Michel; Goudezeune, Gabriel

    1999-11-01

    This paper presents a method designed to monitor image quality. The emphasis here is on monitoring in digital television broadcasting networks, in order for providers to ensure a 'user-oriented' Quality of Service. Most objective image quality assessment methods are technically very difficult to apply in this context because of bandwidth limitations. We propose a parametric, reduced-reference method that relies on the evaluation of characteristic coding and transmission impairments with a set of features. We show that quality can be predicted with a satisfying correlation to a subjective evaluation by combining several impairment features in an appropriate model. The method has been implemented and tested in a range of situations on simulated and real DVB networks. This allows conclusions to be drawn on the usefulness of the approach and informs our future developments in quality-of-service monitoring for digital television.

  9. Average glandular dose and phantom image quality in mammography

    NASA Astrophysics Data System (ADS)

    Oliveira, M.; Nogueira, M. S.; Guedes, E.; Andrade, M. C.; Peixoto, J. E.; Joana, G. S.; Castro, J. G.

    2007-09-01

    Doses in mammography should be maintained as low as possible without reducing the high image quality needed for early detection of breast cancer. The breast is composed of tissues with very similar composition and densities. This increases the difficulty of detecting small changes in the normal anatomical structures which may be associated with breast cancer. To achieve the standards of definition and contrast for mammography, the quality and intensity of the X-ray beam, the breast positioning and compression, the film-screen system, and the film processing have to be in optimal operational conditions. This study sought to evaluate average glandular dose (AGD) and image quality on a standard phantom in 134 mammography units in the state of Minas Gerais, Brazil, between December 2004 and May 2006. AGDs were obtained by means of entrance kerma measured with TL LiF100 dosimeters on the phantom surface. Phantom images were obtained with automatic exposure technique, fixed 28 kV and a molybdenum anode-filter combination. The phantom used contained structures simulating tumoral masses, microcalcifications, fibers and low-contrast areas. High-resolution metallic meshes to assess image definition and a stepwedge to measure the image contrast index were also inserted in the phantom. The visualization of simulated structures, the mean optical density and the contrast index allowed the phantom image quality to be classified on a seven-point scale. The results showed that 54.5% of the facilities did not achieve the minimum performance level for image quality. This is mainly due to insufficient film processing, observed in 61.2% of the units. AGD varied from 0.41 to 2.73 mGy with a mean value of 1.32±0.44 mGy. In all optimal-quality phantom images, AGDs were in this range. Additionally, in 7.3% of the mammography units, the AGD constraint of 2 mGy was exceeded. One may conclude that patient dose levels and image quality are not in conformity with regulations in most of the facilities. This
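
    The AGD values above follow from the entrance kerma measured with the TL dosimeters and a kerma-to-glandular-dose conversion factor; a minimal sketch (the g-factor here is an assumed illustrative number; in practice it is tabulated against beam quality and breast thickness/composition):

        def average_glandular_dose_mGy(entrance_kerma_mGy, g_factor=0.19):
            # AGD = incident (entrance surface) air kerma times a Dance-type conversion factor.
            return entrance_kerma_mGy * g_factor

        print(average_glandular_dose_mGy(7.0))  # 1.33 mGy with these example numbers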

  10. Toward the development of an image quality tool for active millimeter wave imaging systems

    NASA Astrophysics Data System (ADS)

    Barber, Jeffrey; Weatherall, James C.; Greca, Joseph; Smith, Barry T.

    2015-05-01

    Preliminary design considerations for an image quality tool to complement millimeter wave imaging systems are presented. The tool is planned for use in confirming operating parameters, confirming continuity across imaging component design changes, and analyzing new components and detection algorithms. Potential embodiments of an image quality tool may contain materials that mimic human skin in order to provide a realistic signal return for testing, which may also help reduce or eliminate the need for mock passengers during developmental testing. Two candidate materials, a dielectric liquid and an iron-loaded epoxy, have been identified, and reflection measurements have been performed using laboratory systems in the range 18 - 40 GHz. Results show good agreement with both laboratory and literature data on human skin, particularly in the range of operation of two commercially available millimeter wave imaging systems. Issues related to the practical use of liquids and magnetic materials for image quality tools are discussed.

  11. Evaluation of image quality in computed radiography based mammography systems

    NASA Astrophysics Data System (ADS)

    Singh, Abhinav; Bhwaria, Vipin; Valentino, Daniel J.

    2011-03-01

    Mammography is the most widely accepted procedure for the early detection of breast cancer, and Computed Radiography (CR) is a cost-effective technology for digital mammography. We demonstrate that CR image quality is viable for digital mammography. The image quality of mammograms acquired using Computed Radiography technology was evaluated using the Modulation Transfer Function (MTF), Noise Power Spectrum (NPS) and Detective Quantum Efficiency (DQE). The measurements were made using a 28 kVp beam (RQA M-II) with 2 mm of Al as a filter and a target/filter combination of Mo/Mo. The acquired image bit depth was 16 bits and the pixel pitch for scanning was 50 microns. A step-wedge phantom (to measure the contrast-to-noise ratio (CNR)) and the CDMAM 3.4 contrast-detail phantom were also used to assess the image quality. The CNR values were observed at varying thicknesses of PMMA. The CDMAM 3.4 phantom results were plotted and compared to the EUREF acceptable and achievable values. The effect on image quality was measured using the physics metrics. A lower DQE was observed even with a higher MTF, possibly due to a higher noise component arising from the way the scanner was configured. The CDMAM phantom scores demonstrated contrast-detail performance comparable to the EUREF values. A cost-effective CR machine was optimized for high-resolution and high-contrast imaging.
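
    The three physics metrics used in this evaluation are linked by the usual DQE relation; a minimal sketch for linearized data (S is the measured large-area signal and q the incident photon fluence per unit area, both set by the RQA M-II beam conditions; values here are placeholders):

        import numpy as np

        def dqe(mtf, nps, large_area_signal, fluence_q):
            # DQE(f) = S^2 * MTF(f)^2 / (q * NPS(f)) for a linear detector with a
            # Poisson-limited input, where NPS is the unnormalized noise power spectrum.
            mtf = np.asarray(mtf, dtype=float)
            nps = np.asarray(nps, dtype=float)
            return (large_area_signal ** 2) * mtf ** 2 / (fluence_q * nps)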

  12. Analysis of image quality for laser display scanner test

    NASA Astrophysics Data System (ADS)

    Specht, H.; Kurth, S.; Billep, D.; Gessner, T.

    2009-02-01

    The scanning laser display technology is one of the most promising technologies for highly integrated projection display applications (e.g. in PDAs, mobile phones or head-mounted displays) due to its advantages in image quality, miniaturization level and low-cost potential. As several research teams have found during their investigations of laser scanning projection systems, the image quality of such systems is, apart from the laser source and video signal processing, crucially determined by the scan engine, including the MEMS scanner, driving electronics, scanning regime and synchronization. Even though a number of technical parameters can be measured with high accuracy, the test procedure is challenging because the influence of these parameters on image quality is often insufficiently understood. Thus, in many cases it is not clear how to define limiting values for characteristic parameters. In this paper the relationship between parameters characterizing the scan engine and their influence on image quality is discussed. These include scanner topography, the geometry of the path of light, and trajectory parameters. Understanding this enables a new methodology for testing and characterization of the scan engine, based on the evaluation of one or a series of projected test images. Because the evaluation process can easily be automated by digital image processing, this methodology has the potential to become integrated into the production process of laser displays.

  13. Use of a line-pair resolution phantom for comprehensive quality assurance of electronic portal imaging devices based on fundamental imaging metrics

    SciTech Connect

    Gopal, Arun; Samant, Sanjiv S.

    2009-06-15

    Image guided radiation therapy solutions based on megavoltage computed tomography (MVCT) involve the extension of electronic portal imaging devices (EPIDs) from their traditional role of weekly localization imaging and planar dose mapping to volumetric imaging for 3D setup and dose verification. To sustain the potential advantages of MVCT, EPIDs are required to provide improved levels of portal image quality. Therefore, it is vital that the performance of EPIDs in clinical use is maintained at an optimal level through regular and rigorous quality assurance (QA). Traditionally, portal imaging QA has been carried out by imaging calibrated line-pair and contrast resolution phantoms and obtaining arbitrarily defined QA indices that are usually dependent on imaging conditions and merely indicate relative trends in imaging performance. They are not adequately sensitive to all aspects of image quality, unlike fundamental imaging metrics such as the modulation transfer function (MTF), noise power spectrum (NPS), and detective quantum efficiency (DQE), which are widely used to characterize detector performance in radiographic imaging and would be ideal for QA purposes. However, due to the difficulty of performing conventional MTF measurements, they have not been used for routine clinical QA. The authors present a simple and quick QA methodology based on obtaining the MTF, NPS, and DQE of a megavoltage imager by imaging standard open fields and a bar-pattern QA phantom containing 2 mm thick tungsten line-pair bar resolution targets. Our bar-pattern based MTF measurement features a novel zero-frequency normalization scheme that eliminates the normalization errors typically associated with traditional bar-pattern measurements at megavoltage x-ray energies. The bar-pattern QA phantom and open-field images are used in conjunction with an automated image analysis algorithm that quickly computes the MTF, NPS, and DQE of an EPID system. Our approach combines the fundamental advantages of

  14. Faster, higher quality volume visualization for 3D medical imaging

    NASA Astrophysics Data System (ADS)

    Kalvin, Alan D.; Laine, Andrew F.; Song, Ting

    2008-03-01

    The two major volume visualization methods used in biomedical applications are Maximum Intensity Projection (MIP) and Volume Rendering (VR), both of which involve the process of creating sets of 2D projections from 3D images. We have developed a new method for very fast, high-quality volume visualization of 3D biomedical images, based on the fact that the inverse of this process (transforming 2D projections into a 3D image) is essentially equivalent to tomographic image reconstruction. This new method uses the 2D projections acquired by the scanner, thereby obviating the need for the two computationally expensive steps currently required in the complete process of biomedical visualization, that is, (i) reconstructing the 3D image from 2D projection data, and (ii) computing the set of 2D projections from the reconstructed 3D image. As well as improvements in computation speed, this method also results in improvements in visualization quality, and in the case of x-ray CT we can exploit this quality improvement to reduce radiation dosage. In this paper, we demonstrate the benefits of developing biomedical visualization techniques by directly processing the sensor data acquired by body scanners, rather than by processing the image data reconstructed from the sensor data. We show results of using this approach for volume visualization for tomographic modalities, such as x-ray CT, as well as for MRI.
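
    Of the two visualization methods named above, MIP is the simpler to state; a minimal sketch on a reconstructed volume (the random array is only a stand-in for real image data):

        import numpy as np

        def maximum_intensity_projection(volume, axis=0):
            # Project a 3D image by keeping the brightest voxel along the chosen axis.
            return np.max(volume, axis=axis)

        volume = np.random.rand(64, 256, 256)               # stand-in for a CT/MR volume
        print(maximum_intensity_projection(volume).shape)   # (256, 256)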

  15. An approach for quantitative image quality analysis for CT

    NASA Astrophysics Data System (ADS)

    Rahimi, Amir; Cochran, Joe; Mooney, Doug; Regensburger, Joe

    2016-03-01

    An objective and standardized approach to assessing the image quality of Computed Tomography (CT) systems is required in a wide variety of imaging processes to identify CT systems appropriate for a given application. We present an overview of the framework we have developed to help standardize and objectively assess CT image quality for different models of CT scanners used for security applications. Within this framework, we have developed methods to quantitatively measure metrics that should correlate with feature identification, detection accuracy and precision, and image registration capabilities of CT machines, and to identify strengths and weaknesses in different CT imaging technologies in transportation security. To that end we have designed, developed and constructed phantoms that allow for systematic and repeatable measurements of roughly 88 image quality metrics, representing modulation transfer function, noise equivalent quanta, noise power spectra, slice sensitivity profiles, streak artifacts, CT number uniformity, CT number consistency, object length accuracy, CT number path length consistency, and object registration. Furthermore, we have developed a sophisticated MATLAB-based image analysis tool kit to analyze CT-generated images of phantoms and report these metrics in a format that is standardized across the considered models of CT scanners, allowing for comparative image quality analysis within a CT model or between different CT models. In addition, we have developed a modified sparse principal component analysis (SPCA) method to generate a modified set of PCA components with sparse loadings, as compared to standard principal component analysis (PCA), in conjunction with the Hotelling T2 statistical analysis method to compare, qualify, and detect faults in the tested systems.

  16. Real-time computer treatment of THz passive device images with the high image quality

    NASA Astrophysics Data System (ADS)

    Trofimov, Vyacheslav A.; Trofimov, Vladislav V.

    2012-06-01

    We demonstrate real-time computer code that significantly improves the quality of images captured by a passive THz imaging system. The code is not designed only for passive THz devices: it can be applied to any such device and to active THz imaging systems as well. We applied our code to computer processing of images captured by four passive THz imaging devices manufactured by different companies. It should be stressed that computer processing of images produced by different companies usually requires different spatial filters. The performance of the current version of the computer code is greater than one image per second for a THz image having more than 5000 pixels and 24-bit number representation. Processing a single THz image produces about 20 images simultaneously, corresponding to various spatial filters. The computer code allows the number of pixels in processed images to be increased without noticeable reduction of image quality. The performance of the computer code can be increased many times by using parallel algorithms for processing the image. We develop original spatial filters which allow one to see objects with sizes less than 2 cm. The imagery is produced by passive THz imaging devices which capture images of objects hidden under opaque clothes. For images with high noise we develop an approach which suppresses the noise during computer processing and yields a good-quality image. To illustrate the efficiency of the developed approach, we demonstrate the detection of a liquid explosive, an ordinary explosive, a knife, a pistol, a metal plate, a CD, ceramics, chocolate and other objects hidden under opaque clothes. The results demonstrate the high efficiency of our approach for the detection of hidden objects and represent a very promising solution to the security problem.

  17. The influence of noise on image quality in phase-diverse coherent diffraction imaging

    NASA Astrophysics Data System (ADS)

    Wittler, H. P. A.; van Riessen, G. A.; Jones, M. W. M.

    2016-02-01

    Phase-diverse coherent diffraction imaging provides a route to high sensitivity and resolution with low radiation dose. To take full advantage of this, the characteristics and tolerable limits of measurement noise for high quality images must be understood. In this work we show the artefacts that manifest in images recovered from simulated data with noise of various characteristics in the illumination and diffraction pattern. We explore the limits at which images of acceptable quality can be obtained and suggest qualitative guidelines that would allow for faster data acquisition and minimize radiation dose.

  18. Exploratory survey of image quality on CR digital mammography imaging systems in Mexico.

    PubMed

    Gaona, E; Rivera, T; Arreola, M; Franco, J; Molina, N; Alvarez, B; Azorín, C G; Casian, G

    2014-01-01

    The purpose of this study was to assess the current status of image quality and dose in computed radiographic digital mammography (CRDM) systems. The study included CRDM systems of various models and manufacturers, for which dose and image quality comparisons were performed. Due to the recent rise in the use of digital radiographic systems in Mexico, CRDM systems are rapidly replacing conventional film-screen systems without any regard to quality control or image quality standards. The study was conducted in 65 mammography facilities which use CRDM systems in Mexico City and the surrounding States. The systems were tested as used clinically, meaning that the dose and beam qualities were selected using the automatic beam selection and photo-timed features. All systems surveyed generate laser film hardcopies for the radiologist to read on a scope or a mammographic high-luminance light box. It was found that 51 of the CRDM systems presented a variety of image artefacts and non-uniformities arising from inadequate acquisition and processing, as well as from the laser printer itself. Undisciplined alteration of image processing settings by the technologist was found to be a serious and prevalent problem in 42 facilities. Only four of them had an image QC program that is periodically monitored by a medical physicist. The Average Glandular Dose (AGD) in the surveyed systems was estimated to have a mean value of 2.4 mGy. New legislation is required to improve image quality in mammography and make screening mammography more efficient for the early detection of breast cancer. PMID:23938078

  20. No-reference image quality assessment for horizontal-path imaging scenarios

    NASA Astrophysics Data System (ADS)

    Rios, Carlos; Gladysz, Szymon

    2013-05-01

    There exist several image-enhancement algorithms and tasks associated with imaging through turbulence that depend on defining the quality of an image. Examples include: "lucky imaging", choosing the width of the inverse filter for image reconstruction, or stopping iterative deconvolution. We collected a number of image quality metrics found in the literature. Particularly interesting are the blind, "no-reference" metrics. We discuss ways of evaluating the usefulness of these metrics, even when a fully objective comparison is impossible because of the lack of a reference image. Metrics are tested on simulated and real data. Field data comes from experiments performed by the NATO SET 165 research group over a 7 km distance in Dayton, Ohio.
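
    One of the simplest blind metrics of the kind surveyed is gradient energy, often used as the frame-selection criterion in lucky imaging; a minimal sketch (a generic example, not necessarily one of the specific metrics the authors collected):

        import numpy as np

        def gradient_energy(img):
            # No-reference sharpness score: mean squared intensity gradient (larger = sharper).
            gy, gx = np.gradient(np.asarray(img, dtype=float))
            return float(np.mean(gx ** 2 + gy ** 2))

        def lucky_select(frames, keep_fraction=0.1):
            # Rank frames by sharpness and keep the best fraction (the "lucky" frames).
            scores = np.array([gradient_energy(f) for f in frames])
            n_keep = max(1, int(len(frames) * keep_fraction))
            best = np.argsort(scores)[::-1][:n_keep]
            return [frames[i] for i in best]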

  1. Perceived Image Quality Improvements from the Application of Image Deconvolution to Retinal Images from an Adaptive Optics Fundus Imager

    NASA Astrophysics Data System (ADS)

    Soliz, P.; Nemeth, S. C.; Erry, G. R. G.; Otten, L. J.; Yang, S. Y.

    Aim: The objective of this project was to apply an image restoration methodology based on wavefront measurements obtained with a Shack-Hartmann sensor and to evaluate the restored image quality based on medical criteria. Methods: Implementing an adaptive optics (AO) technique, a fundus imager was used to achieve low-order correction to images of the retina. The high-order correction was provided by deconvolution. A Shack-Hartmann wavefront sensor measures aberrations. The wavefront measurement is the basis for activating a deformable mirror. Image restoration to remove remaining aberrations is achieved by direct deconvolution using the point spread function (PSF) or by blind deconvolution. The PSF is estimated using measured wavefront aberrations. Direct application of classical deconvolution methods such as inverse filtering, Wiener filtering or iterative blind deconvolution (IBD) to the AO retinal images obtained from the adaptive optical imaging system is not satisfactory because of the very large image size, difficulty in modeling the system noise, and inaccuracy in PSF estimation. Our approach combines direct and blind deconvolution to exploit available system information, avoid non-convergence, and avoid time-consuming iterative processes. Results: The deconvolution was applied to human subject data and the resulting restored images were compared by a trained ophthalmic researcher. Qualitative analysis showed significant improvements. Neovascularization can be visualized with the adaptive optics device that cannot be resolved with the standard fundus camera. The individual nerve fiber bundles are easily resolved, as are melanin structures in the choroid. Conclusion: This project demonstrated that computer-enhanced, adaptive optics images have greater detail of anatomical and pathological structures.
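
    The direct-deconvolution step can be sketched with a classical Wiener filter once the PSF has been estimated from the Shack-Hartmann data (a simplified sketch: nsr is an assumed noise-to-signal constant, and the PSF is taken as given, origin-centred):

        import numpy as np

        def wiener_deconvolve(blurred, psf, nsr=1e-2):
            # Classical Wiener filter applied in the Fourier domain:
            #   restored = IFFT( conj(H) / (|H|^2 + NSR) * FFT(blurred) )
            # with H the transfer function of the zero-padded PSF.
            H = np.fft.fft2(psf, s=blurred.shape)
            G = np.fft.fft2(blurred)
            W = np.conj(H) / (np.abs(H) ** 2 + nsr)
            return np.real(np.fft.ifft2(W * G))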

  2. Radiation dose and image quality for paediatric interventional cardiology

    NASA Astrophysics Data System (ADS)

    Vano, E.; Ubeda, C.; Leyton, F.; Miranda, P.

    2008-08-01

    Radiation dose and image quality for paediatric protocols in a biplane x-ray system used for interventional cardiology have been evaluated. Entrance surface air kerma (ESAK) and image quality using a test object and polymethyl methacrylate (PMMA) phantoms have been measured for typical paediatric patient thicknesses (4-20 cm of PMMA). Images from fluoroscopy (low, medium and high) and cine modes have been archived in digital imaging and communications in medicine (DICOM) format. Signal-to-noise ratio (SNR), figure of merit (FOM), contrast (CO), contrast-to-noise ratio (CNR) and high-contrast spatial resolution (HCSR) have been computed from the images. Data on dose transferred to the DICOM header have been used to test the values of the dosimetric display at the interventional reference point. ESAK for fluoroscopy modes ranges from 0.15 to 36.60 µGy/frame when moving from 4 to 20 cm of PMMA. For cine, these values range from 2.80 to 161.10 µGy/frame. SNR, FOM, CO, CNR and HCSR are improved for the high fluoroscopy and cine modes and remain roughly constant across the different thicknesses. The cumulative dose at the interventional reference point was 25-45% higher than the skin dose for the vertical C-arm (depending on the phantom thickness). ESAK and numerical image quality parameters allow verification of the proper setting of the x-ray system. Knowing how the dose per frame increases with phantom thickness, together with the image quality parameters, will help cardiologists manage patient dose and allow them to select the best imaging acquisition mode during clinical procedures.
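
    The numerical image quality figures quoted above follow standard region-of-interest definitions; a minimal sketch of those definitions (ROI extraction from the DICOM frames and the dose value are placeholders):

        import numpy as np

        def snr(roi):
            return float(np.mean(roi) / np.std(roi))

        def cnr(roi_signal, roi_background):
            return float(abs(np.mean(roi_signal) - np.mean(roi_background)) / np.std(roi_background))

        def figure_of_merit(cnr_value, dose_per_frame_uGy):
            # FOM relates image quality to the dose spent obtaining it: CNR^2 per unit dose.
            return cnr_value ** 2 / dose_per_frame_uGy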

  3. Determination of pasture quality using airborne hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Pullanagari, R. R.; Kereszturi, G.; Yule, Ian J.; Irwin, M. E.

    2015-10-01

    Pasture quality is a critical determinant which influences animal performance (live weight gain, milk and meat production) and animal health. Assessment of pasture quality is therefore required to assist farmers with grazing planning and management, and with benchmarking between seasons and years. Traditionally, pasture quality is determined by field sampling, which is laborious, expensive and time-consuming, and the information is not available in real time. Hyperspectral remote sensing has the potential to quantify the biochemical composition of pasture accurately over wide areas in great spatial detail. In this study an airborne imaging spectrometer (AisaFENIX, Specim) was used, with a spectral range of 380-2500 nm and 448 spectral bands. A case study of a 600 ha hill country farm in New Zealand is used to illustrate the use of the system. Radiometric and atmospheric corrections, along with automated georectification of the imagery using a Digital Elevation Model (DEM), were applied to the raw images to convert them into geocoded reflectance images. Then a multivariate statistical method, partial least squares (PLS) regression, was applied to estimate pasture quality measures such as crude protein (CP) and metabolisable energy (ME) from canopy reflectance. The results from this study revealed that estimates of CP and ME had an R2 of 0.77 and 0.79, and an RMSECV of 2.97 and 0.81, respectively. By utilizing these regression models, spatial maps were created over the imaged area. These pasture quality maps can be used for adopting precision agriculture practices, which improve farm profitability and environmental sustainability.
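
    The PLS calibration step maps canopy reflectance spectra to the laboratory pasture-quality values; a minimal sketch with scikit-learn (the arrays, number of latent components and cross-validation scheme are placeholders, not the calibration actually used in the study):

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import cross_val_predict

        # X: (n_samples, 448) canopy reflectance spectra; y: lab-measured crude protein (% DM)
        X = np.random.rand(120, 448)
        y = 15.0 + 5.0 * np.random.rand(120)

        pls = PLSRegression(n_components=10)
        y_cv = cross_val_predict(pls, X, y, cv=10).ravel()
        rmsecv = float(np.sqrt(np.mean((y - y_cv) ** 2)))
        print("RMSECV:", rmsecv)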

  4. Achieving quality in cardiovascular imaging: proceedings from the American College of Cardiology-Duke University Medical Center Think Tank on Quality in Cardiovascular Imaging.

    PubMed

    Douglas, Pamela; Iskandrian, Ami E; Krumholz, Harlan M; Gillam, Linda; Hendel, Robert; Jollis, James; Peterson, Eric; Chen, Jersey; Masoudi, Frederick; Mohler, Emile; McNamara, Robert L; Patel, Manesh R; Spertus, John

    2006-11-21

    Cardiovascular imaging has enjoyed both rapid technological advances and sustained growth, yet less attention has been focused on quality than in other areas of cardiovascular medicine. To address this deficit, representatives from cardiovascular imaging societies, private payers, government agencies, the medical imaging industry, and experts in quality measurement met, and this report provides an overview of the discussions. A consensus definition of quality in imaging and a convergence of opinion on quality measures across imaging modalities was achieved and are intended to be the start of a process culminating in the development, dissemination, and adoption of quality measures for all cardiovascular imaging modalities.

  5. Presence capture cameras - a new challenge to the image quality

    NASA Astrophysics Data System (ADS)

    Peltoketo, Veli-Tapani

    2016-04-01

    Commercial presence capture cameras are coming to the market and a new era of visual entertainment is starting to take shape. Since true presence capture is still a very new technology, the technical solutions have only just passed the prototyping phase and vary considerably. Presence capture cameras still face the same quality issues as previous phases of digital imaging, but also numerous new ones. This work concentrates on the quality challenges of presence capture cameras. A camera system which can record 3D audio-visual reality as it is has to have several camera modules, several microphones and, in particular, technology which can synchronize the output of several sources into a seamless and smooth virtual reality experience. Several traditional quality features are still valid in presence capture cameras. Features like color fidelity, noise removal, resolution and dynamic range form the basis of virtual reality stream quality. However, the co-operation of several cameras brings a new dimension to these quality factors, and new quality features must also be validated. For example, how should the camera streams be stitched together into a 3D experience without noticeable errors, and how can the stitching be validated? The work describes quality factors which are still valid in presence capture cameras and assesses their importance. Moreover, new challenges of presence capture cameras are investigated from an image and video quality point of view. The work also considers how well current measurement methods can be applied to presence capture cameras.

  6. Compressed image quality metric based on perceptually weighted distortion.

    PubMed

    Hu, Sudeng; Jin, Lina; Wang, Hanli; Zhang, Yun; Kwong, Sam; Kuo, C-C Jay

    2015-12-01

    Objective quality assessment for compressed images is critical to various image compression systems that are essential in image delivery and storage. Although the mean squared error (MSE) is computationally simple, it may not accurately reflect the perceptual quality of compressed images, which is also affected dramatically by characteristics of the human visual system (HVS), such as the masking effect. In this paper, an image quality metric (IQM) is proposed based on perceptually weighted distortion in terms of the MSE. To capture the characteristics of the HVS, a randomness map is proposed to measure the masking effect, and a preprocessing scheme is proposed to simulate the processing that occurs in the initial part of the HVS. Since the masking effect depends strongly on structural randomness, the prediction error from the neighborhood under a statistical model is used to measure the significance of masking. Meanwhile, imperceptible high-frequency signal content can be removed by preprocessing with low-pass filters. The relation between the distortions before and after the masking effect is investigated, and a masking modulation model is proposed to simulate the masking effect after preprocessing. The performance of the proposed IQM is validated on six image databases with various compression distortions. The experimental results show that the proposed algorithm outperforms other benchmark IQMs. PMID:26415170
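
    The core idea, down-weighting squared errors where masking is strong before averaging, can be sketched as follows (the masking map here is a simple local-variance proxy, not the statistical randomness model proposed in the paper):

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def perceptually_weighted_mse(reference, distorted, sigma=2.0):
            ref = np.asarray(reference, dtype=float)
            err2 = (ref - np.asarray(distorted, dtype=float)) ** 2
            # Masking proxy: busy regions (high local variance) hide distortion better,
            # so their errors contribute less to the pooled score.
            mean = gaussian_filter(ref, sigma)
            var = np.clip(gaussian_filter(ref ** 2, sigma) - mean ** 2, 0, None)
            weights = 1.0 / (1.0 + var)
            return float(np.sum(weights * err2) / np.sum(weights))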

  7. Flattening filter removal for improved image quality of megavoltage fluoroscopy

    SciTech Connect

    Christensen, James D.; Kirichenko, Alexander; Gayou, Olivier

    2013-08-15

    Purpose: Removal of the linear accelerator (linac) flattening filter enables a high rate of dose deposition with reduced treatment time. When used for megavoltage imaging, an unflat beam has reduced primary beam scatter, resulting in sharper images. In fluoroscopic imaging mode, the unflat beam has a higher photon count per image frame, yielding a higher contrast-to-noise ratio. The authors' goal was to quantify the effects of an unflat beam on the image quality of megavoltage portal and fluoroscopic images. Methods: 6 MV projection images were acquired in fluoroscopic and portal modes using an electronic flat-panel imager. The effects of the flattening filter on the relative modulation transfer function (MTF) and contrast-to-noise ratio were quantified using the QC3 phantom. The impact of FF removal on the contrast-to-noise ratio of gold fiducial markers was also studied under various scatter conditions. Results: The unflat beam had improved contrast resolution, with up to a 40% increase in MTF contrast at the highest frequency measured (0.75 line pairs/mm). The contrast-to-noise ratio was increased, as expected from the increased photon flux. The visualization of fiducial markers was markedly better using the unflat beam under all scatter conditions, enabling visualization of thin gold fiducial markers, the thinnest of which was not visible using the flattened beam. Conclusions: The removal of the flattening filter from a clinical linac leads to quantifiable improvements in the image quality of megavoltage projection images. These gains enable observers to more easily visualize thin fiducial markers and track their motion on fluoroscopic images.

  8. Image-Processing Techniques for the Creation of Presentation-Quality Astronomical Images

    NASA Astrophysics Data System (ADS)

    Rector, Travis A.; Levay, Zoltan G.; Frattare, Lisa M.; English, Jayanne; Pu'uohau-Pummill, Kirk

    2007-02-01

    The quality of modern astronomical data and the agility of current image-processing software enable the visualization of data in a way that exceeds the traditional definition of an astronomical image. Two developments in particular have led to a fundamental change in how astronomical images can be assembled. First, the availability of high-quality multiwavelength and narrowband data allow for images that do not correspond to the wavelength sensitivity of the human eye, thereby introducing ambiguity in the usage and interpretation of color. Second, many image-processing software packages now use a layering metaphor that allows for any number of astronomical data sets to be combined into a color image. With this technique, images with as many as eight data sets have been produced. Each data set is intensity-scaled and colorized independently, creating an immense parameter space that can be used to assemble the image. Since such images are intended for data visualization, scaling and color schemes must be chosen that best illustrate the science. A practical guide is presented on how to use the layering metaphor to generate publication-ready astronomical images from as many data sets as desired. A methodology is also given on how to use intensity scaling, color, and composition to create contrasts in an image that highlight the scientific detail. Examples of image creation are discussed.

  9. Body image quality of life in eating disorders

    PubMed Central

    Jáuregui Lobera, Ignacio; Bolaños Ríos, Patricia

    2011-01-01

    Purpose: The objective was to examine how body image affects quality of life in an eating-disorder (ED) clinical sample, a non-ED clinical sample, and a nonclinical sample. We hypothesized that ED patients would show the worst body image quality of life. We also hypothesized that body image quality of life would have a stronger negative association with specific ED-related variables than with other psychological and psychopathological variables, mainly among ED patients. On the basis of previous studies, the influence of gender on the results was explored, too. Patients and methods: The final sample comprised 70 ED patients (mean age 22.65 ± 7.76 years; 59 women and 11 men), 106 patients with other psychiatric disorders (mean age 28.20 ± 6.52; 67 women and 39 men), and 135 university students (mean age 21.57 ± 2.58; 81 women and 54 men) with no psychiatric history. After informed consent had been obtained, the following questionnaires were administered: Body Image Quality of Life Inventory-Spanish version (BIQLI-SP), Eating Disorders Inventory-2 (EDI-2), Perceived Stress Questionnaire (PSQ), Self-Esteem Scale (SES), and Symptom Checklist-90-Revised (SCL-90-R). Results: The ED patients' ratings on the BIQLI-SP were the lowest and negatively scored (BIQLI-SP means: +20.18, +5.14, and -6.18 in the student group, the non-ED patient group, and the ED group, respectively). The effect of body image on quality of life was more negative in the ED group on all items of the BIQLI-SP. Body image quality of life was negatively associated with specific ED-related variables, more than with other psychological and psychopathological variables, but not especially among ED patients. Conclusion: Body image quality of life was affected not only by specific pathologies related to body image disturbances, but also by other psychopathological syndromes. Nevertheless, the greatest effect was related to ED, and seemed to be more negative among men. This finding is the

  10. Image quality testing of assembled IR camera modules

    NASA Astrophysics Data System (ADS)

    Winters, Daniel; Erichsen, Patrik

    2013-10-01

    Infrared (IR) camera modules for the LWIR (8-12 μm) that combine IR imaging optics with microbolometer focal plane array (FPA) sensors and readout electronics are increasingly becoming a mass-market product. At the same time, steady improvements in sensor resolution in the higher-priced markets raise the requirements on the imaging performance of objectives and on the proper alignment between objective and FPA. This puts pressure on camera manufacturers and system integrators to assess the image quality of finished camera modules in a cost-efficient and automated way for quality control or during end-of-line testing. In this paper we present recent development work done in the field of image quality testing of IR camera modules. This technology provides a wealth of additional information in contrast to more traditional test methods like the minimum resolvable temperature difference (MRTD), which gives only a subjective overall test result. Parameters that can be measured include image quality via the modulation transfer function (MTF), broadband or with various bandpass filters, on- and off-axis, and optical parameters such as effective focal length (EFL) and distortion. If the camera module allows refocusing of the optics, additional parameters like best focus plane, image plane tilt, auto-focus quality, chief ray angle, etc. can be characterized. Additionally, the homogeneity and response of the sensor with the optics can be characterized in order to calculate the appropriate tables for non-uniformity correction (NUC). The technology can also be used to control active alignment methods during mechanical assembly of optics to high-resolution sensors. Other important points that are discussed are the flexibility of the technology to test IR modules with different form factors and electrical interfaces and, last but not least, its suitability for fully automated measurements in mass production.
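
    The NUC tables mentioned above are typically per-pixel gain/offset pairs derived from two reference exposures; a minimal two-point sketch (the frames are placeholders for measured blackbody exposures at two temperatures):

        import numpy as np

        def two_point_nuc(cold_frame, hot_frame):
            # Per-pixel gain and offset mapping the raw response onto the frame-mean
            # response at the two reference temperatures.
            cold = np.asarray(cold_frame, dtype=float)
            hot = np.asarray(hot_frame, dtype=float)
            gain = (hot.mean() - cold.mean()) / np.clip(hot - cold, 1e-6, None)
            offset = cold.mean() - gain * cold
            return gain, offset

        def apply_nuc(raw_frame, gain, offset):
            return gain * np.asarray(raw_frame, dtype=float) + offset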

  11. Image quality-based adaptive illumination normalisation for face recognition

    NASA Astrophysics Data System (ADS)

    Sellahewa, Harin; Jassim, Sabah A.

    2009-05-01

    Automatic face recognition is a challenging task due to intra-class variations. Changes in lighting conditions during the enrolment and identification stages contribute significantly to these intra-class variations. A common approach to addressing the effects of such varying conditions is to pre-process the biometric samples in order to normalise intra-class variations. Histogram equalisation is a widely used illumination normalisation technique in face recognition. However, a recent study has shown that applying histogram equalisation to well-lit face images can lead to a decrease in recognition accuracy. This paper presents a dynamic approach to illumination normalisation based on face image quality. The quality of a given face image is measured in terms of its luminance distortion by comparing the image against a known reference face image. Histogram equalisation is applied to a probe image only if its luminance distortion is higher than a predefined threshold. We tested the proposed adaptive illumination normalisation method on the widely used Extended Yale Face Database B. Identification results demonstrate that our adaptive normalisation produces better identification accuracy than the conventional approach in which every image is normalised irrespective of the lighting conditions under which it was acquired.
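
    The decision rule described above can be sketched directly; here the luminance-distortion measure is reduced to an absolute mean-brightness difference against the reference, and the threshold is an assumed value (the paper's quality measure is more elaborate):

        import numpy as np
        from skimage import exposure

        def adaptive_equalise(probe, reference, threshold=0.08):
            # Equalise the probe only when its luminance deviates markedly from the
            # well-lit reference; otherwise leave it untouched.
            probe_f = np.asarray(probe, dtype=float) / 255.0
            ref_f = np.asarray(reference, dtype=float) / 255.0
            if abs(probe_f.mean() - ref_f.mean()) > threshold:
                return exposure.equalize_hist(probe_f)
            return probe_f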

  12. Mammography in New Zealand: radiation dose and image quality.

    PubMed

    Poletti, J L; Williamson, B D; Mitchell, A W

    1991-06-01

    The mean glandular doses to the breast, image quality and machine performance have been determined for all mammographic x-ray facilities in New Zealand during 1988-89. For 30 mm and 45 mm phantoms the mean doses per film were 1.03 +/- 0.56 mGy and 1.97 +/- 1.06 mGy. These doses are within international guidelines. Image quality (detection of simulated microcalcifications, and contrast-detail performance) was found to depend on the focal spot size/FFD combination, breast thickness, and film processing. The best machines could resolve 0.2 mm aluminium oxide specks with the contact technique. The use of a grid improved image quality, as did magnification. Extended-cycle film processing reduced doses, but the claimed improvement in image quality was not apparent from our data. The machine calibration parameters kVp, HVL and timer accuracy were in general within accepted tolerances. Automatic exposure controls in some cases gave poor control of film density with changing breast thickness. PMID:1747087

  13. SCID: full reference spatial color image quality metric

    NASA Astrophysics Data System (ADS)

    Ouni, S.; Chambah, M.; Herbin, M.; Zagrouba, E.

    2009-01-01

    The most widely used full-reference image quality assessments are error-based methods, performed with pixel-based difference metrics such as Delta E (ΔE), MSE, PSNR, etc. These metrics therefore define only a local fidelity of the color and do not correlate well with perceived image quality, because they omit the properties of the human visual system (HVS); thus they cannot be reliable predictors of perceived visual quality. All of these metrics compute differences pixel by pixel, whereas the human visual system is rather sensitive to global quality. In this paper, we present a novel full-reference color metric based on characteristics of the human visual system that considers the notion of adjacency. This metric, called SCID for Spatial Color Image Difference, is more perceptually correlated than other color differences such as Delta E. The suggested full-reference metric is generic and independent of the type of image distortion. It can be used in different applications such as compression, restoration, etc.

  14. Image quality, space-qualified UV interference filters

    NASA Technical Reports Server (NTRS)

    Mooney, Thomas A.

    1992-01-01

    The progress during the contract period is described. The project involved fabrication of image quality, space-qualified bandpass filters in the 200-350 nm spectral region. Ion-assisted deposition (IAD) was applied to produce stable, reasonably durable filter coatings on space compatible UV substrates. Thin film materials and UV transmitting substrates were tested for resistance to simulated space effects.

  15. Visual relevance of display image quality testing by photometric methods

    NASA Astrophysics Data System (ADS)

    Andren, Boerje; Breidne, Magnus; Hansson, L. A.; Persson, Bo

    1993-09-01

    The two major international test methods for evaluation of the image quality of video display terminals are the ISO 9241-3 international standard and the MPR test. In this paper we make an attempt to compare the visual relevance of these two test methods.

  16. Simultaneous Analysis and Quality Assurance for Diffusion Tensor Imaging

    PubMed Central

    Lauzon, Carolyn B.; Asman, Andrew J.; Esparza, Michael L.; Burns, Scott S.; Fan, Qiuyun; Gao, Yurui; Anderson, Adam W.; Davis, Nicole; Cutting, Laurie E.; Landman, Bennett A.

    2013-01-01

    Diffusion tensor imaging (DTI) enables non-invasive, cyto-architectural mapping of in vivo tissue microarchitecture through voxel-wise mathematical modeling of multiple magnetic resonance imaging (MRI) acquisitions, each differently sensitized to water diffusion. DTI computations are fundamentally estimation processes and are sensitive to noise and artifacts. Despite widespread adoption in the neuroimaging community, maintaining consistent DTI data quality remains challenging given the propensity for patient motion, artifacts associated with fast imaging techniques, and the possibility of hardware changes/failures. Furthermore, the quantity of data acquired per voxel, the non-linear estimation process, and numerous potential use cases complicate traditional visual data inspection approaches. Currently, quality inspection of DTI data has relied on visual inspection and individual processing in DTI analysis software programs (e.g. DTIPrep, DTI-studio). However, recent advances in applied statistical methods have yielded several different metrics to assess noise level, artifact propensity, quality of tensor fit, variance of estimated measures, and bias in estimated measures. To date, these metrics have been largely studied in isolation. Herein, we select complementary metrics for integration into an automatic DTI analysis and quality assurance pipeline. The pipeline completes in 24 hours, stores statistical outputs, and produces a graphical summary quality analysis (QA) report. We assess the utility of this streamlined approach for empirical quality assessment on 608 DTI datasets from pediatric neuroimaging studies. The efficiency and accuracy of quality analysis using the proposed pipeline is compared with quality analysis based on visual inspection. The unified pipeline is found to save a statistically significant amount of time (over 70%) while improving the consistency of QA between a DTI expert and a pool of research associates. Projection of QA metrics to a low

  17. Imaging through turbid media via sparse representation: imaging quality comparison of three projection matrices

    NASA Astrophysics Data System (ADS)

    Shao, Xiaopeng; Li, Huijuan; Wu, Tengfei; Dai, Weijia; Bi, Xiangli

    2015-05-01

    Incident light is scattered by the inhomogeneity of the refractive index in many materials, which greatly reduces the imaging depth and degrades the imaging quality. Many methods have been presented in recent years for solving this problem and imaging through a highly scattering medium, such as wavefront modulation and reconstruction techniques. An imaging method based on compressed sensing (CS) theory can decrease the computational complexity because it does not require the whole speckle pattern for reconstruction. One of the key premises of this method is that the object is sparse or can be sparsely represented. However, choosing a proper projection matrix is very important to the imaging quality. In this paper, we show that the transmission matrix (TM) of a scattering medium obeys a circular Gaussian distribution, which makes it possible to use a scattering medium as the measurement matrix in CS theory. In order to verify the performance of this method, a whole optical system is simulated. Various projection matrices are introduced to obtain a sparse representation of the object, including the fast Fourier transform (FFT) basis, the discrete cosine transform (DCT) basis and the discrete wavelet transform (DWT) basis, and their imaging performance is compared comprehensively. Simulation results show that for most targets, applying the discrete wavelet transform basis yields an image of good quality. This work can be applied to biomedical imaging and used to develop real-time imaging through highly scattering media.
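
    The reconstruction step described above can be illustrated with a small one-dimensional sketch: a Gaussian random matrix stands in for the scattering medium's transmission matrix, the DCT is used as the sparsifying basis, and the sparse coefficients are recovered with a basic iterative soft-thresholding (ISTA) solver. The dimensions, sparsity level and regularisation weight are illustrative assumptions, not values from the paper.

    ```python
    import numpy as np
    from scipy.fftpack import idct

    def ista(A, y, lam=0.05, n_iter=200):
        """Iterative soft-thresholding for min ||y - A s||^2 + lam * ||s||_1."""
        L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of A^T A
        s = np.zeros(A.shape[1])
        for _ in range(n_iter):
            s = s - A.T @ (A @ s - y) / L                         # gradient step
            s = np.sign(s) * np.maximum(np.abs(s) - lam / L, 0.0)  # soft threshold
        return s

    n, m = 256, 96                               # signal length, number of measurements
    rng = np.random.default_rng(0)

    # Object that is sparse in the DCT domain (8 non-zero coefficients)
    coeffs = np.zeros(n)
    coeffs[rng.choice(n, 8, replace=False)] = rng.normal(size=8)
    x = idct(coeffs, norm='ortho')

    # Gaussian random matrix as a stand-in for the scattering medium's TM
    Phi = rng.normal(size=(m, n)) / np.sqrt(m)
    y = Phi @ x                                  # speckle measurements

    # Sensing matrix acting on DCT coefficients: A = Phi * Psi, Psi = inverse DCT
    Psi = idct(np.eye(n), norm='ortho', axis=0)
    s_hat = ista(Phi @ Psi, y)
    x_hat = Psi @ s_hat                          # reconstructed object
    ```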

  18. A study of image quality for radar image processing. [synthetic aperture radar imagery

    NASA Technical Reports Server (NTRS)

    King, R. W.; Kaupp, V. H.; Waite, W. P.; Macdonald, H. C.

    1982-01-01

    Methods developed for image quality metrics are reviewed, with a focus on basic interpretation or recognition elements including: tone or color; shape; pattern; size; shadow; texture; site; association or context; and resolution. Seven metrics are believed to show promise as ways of characterizing the quality of an image: (1) the dynamic range of intensities in the displayed image; (2) the system signal-to-noise ratio; (3) the system spatial bandwidth or bandpass; (4) the system resolution or acutance; (5) the normalized mean-square error as a measure of geometric fidelity; (6) the perceptual mean square error; and (7) the radar threshold quality factor. Selected levels of degradation are applied to simulated synthetic aperture radar images to test the validity of these metrics.
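
    Three of the simpler metrics in this list can be written down directly. The sketch below (operating on hypothetical reference and degraded image arrays) computes the displayed dynamic range, the normalized mean-square error as a geometric-fidelity proxy, and a global signal-to-noise ratio; it is intended as an illustration, not the paper's exact definitions.

    ```python
    import numpy as np

    def dynamic_range_db(img):
        """Ratio of the maximum to the minimum non-zero displayed intensity, in dB."""
        img = np.asarray(img, dtype=float)
        return 10 * np.log10(img.max() / img[img > 0].min())

    def normalized_mse(reference, degraded):
        """Normalized mean-square error, used here as a geometric-fidelity proxy."""
        reference = np.asarray(reference, dtype=float)
        degraded = np.asarray(degraded, dtype=float)
        return np.mean((reference - degraded) ** 2) / np.mean(reference ** 2)

    def snr_db(reference, degraded):
        """Global signal-to-noise ratio, treating (degraded - reference) as noise."""
        reference = np.asarray(reference, dtype=float)
        noise = np.asarray(degraded, dtype=float) - reference
        return 10 * np.log10(np.mean(reference ** 2) / np.mean(noise ** 2))
    ```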

  19. Full-Reference Image Quality Assessment with Linear Combination of Genetically Selected Quality Measures

    PubMed Central

    2016-01-01

    Information carried by an image can be distorted by the different image processing steps introduced by different electronic means of storage and communication. Therefore, the development of algorithms that can automatically assess the quality of an image in a way that is consistent with human evaluation is important. In this paper, an approach to image quality assessment (IQA) is proposed in which the quality of a given image is evaluated jointly by several IQA measures. First, in order to obtain such joint models, an optimisation problem of IQA measure aggregation is defined, where a weighted sum of their outputs, i.e., objective scores, is used as the aggregation operator. The weight of each measure is then treated as a decision variable in a problem of minimising the root mean square error between the obtained objective scores and the subjective scores. Subjective scores reflect ground truth and involve evaluation of images by human observers. The optimisation problem is solved using a genetic algorithm, which also selects the measures used in the aggregation. The obtained multimeasures are evaluated on the four largest widely used image benchmarks and compared against state-of-the-art full-reference IQA approaches. The comparison reveals that the proposed approach outperforms the other competing measures. PMID:27341493

  20. Full-Reference Image Quality Assessment with Linear Combination of Genetically Selected Quality Measures.

    PubMed

    Oszust, Mariusz

    2016-01-01

    Information carried by an image can be distorted by the different image processing steps introduced by different electronic means of storage and communication. Therefore, the development of algorithms that can automatically assess the quality of an image in a way that is consistent with human evaluation is important. In this paper, an approach to image quality assessment (IQA) is proposed in which the quality of a given image is evaluated jointly by several IQA measures. First, in order to obtain such joint models, an optimisation problem of IQA measure aggregation is defined, where a weighted sum of their outputs, i.e., objective scores, is used as the aggregation operator. The weight of each measure is then treated as a decision variable in a problem of minimising the root mean square error between the obtained objective scores and the subjective scores. Subjective scores reflect ground truth and involve evaluation of images by human observers. The optimisation problem is solved using a genetic algorithm, which also selects the measures used in the aggregation. The obtained multimeasures are evaluated on the four largest widely used image benchmarks and compared against state-of-the-art full-reference IQA approaches. The comparison reveals that the proposed approach outperforms the other competing measures. PMID:27341493
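
    A toy version of the weighted-sum aggregation idea is sketched below: a small evolutionary search adjusts the weights of several objective IQA scores to minimise the RMSE against subjective scores, and weights driven to zero effectively deselect a measure. The synthetic data, population size and mutation scheme are placeholder assumptions, not the paper's actual genetic-algorithm configuration.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical data: objective scores of k IQA measures for n images (n x k),
    # plus the corresponding subjective (ground-truth) scores.
    n_images, n_measures = 120, 6
    objective = rng.normal(size=(n_images, n_measures))
    subjective = objective @ rng.uniform(0, 1, n_measures) + rng.normal(0, 0.1, n_images)

    def rmse(weights):
        """Root mean square error between the weighted sum and the subjective scores."""
        return np.sqrt(np.mean((objective @ weights - subjective) ** 2))

    # Minimal (mu + lambda) evolutionary search over non-negative weight vectors
    pop = rng.uniform(0, 1, size=(40, n_measures))
    for _ in range(200):
        children = np.clip(pop + rng.normal(0, 0.05, pop.shape), 0, None)  # mutation
        everyone = np.vstack([pop, children])
        fitness = np.array([rmse(w) for w in everyone])
        pop = everyone[np.argsort(fitness)[:40]]        # keep the best 40

    best = pop[0]
    print("selected weights:", np.round(best, 3), "RMSE:", round(rmse(best), 4))
    ```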

  1. Image quality specification and maintenance for airborne SAR

    NASA Astrophysics Data System (ADS)

    Clinard, Mark S.

    2004-08-01

    Specification, verification, and maintenance of image quality over the lifecycle of an operational airborne SAR begin with the specification for the system itself. Verification of image quality-oriented specification compliance can be enhanced by including a specification requirement that a vendor provide appropriate imagery at the various phases of the system life cycle. The nature and content of the imagery appropriate for each stage of the process depends on the nature of the test, the economics of collection, and the availability of techniques to extract the desired information from the data. At the earliest lifecycle stages, Concept and Technology Development (CTD) and System Development and Demonstration (SDD), the test set could include simulated imagery to demonstrate the mathematical and engineering concepts being implemented thus allowing demonstration of compliance, in part, through simulation. For Initial Operational Test and Evaluation (IOT&E), imagery collected from precisely instrumented test ranges and targets of opportunity consisting of a priori or a posteriori ground-truthed cultural and natural features are of value to the analysis of product quality compliance. Regular monitoring of image quality is possible using operational imagery and automated metrics; more precise measurements can be performed with imagery of instrumented scenes, when available. A survey of image quality measurement techniques is presented along with a discussion of the challenges of managing an airborne SAR program with the scarce resources of time, money, and ground-truthed data. Recommendations are provided that should allow an improvement in the product quality specification and maintenance process with a minimal increase in resource demands on the customer, the vendor, the operational personnel, and the asset itself.

  2. Evaluation of image quality of a new CCD-based system for chest imaging

    NASA Astrophysics Data System (ADS)

    Sund, Patrik; Kheddache, Susanne; Mansson, Lars G.; Bath, Magnus; Tylen, Ulf

    2000-04-01

    The Imix radiography system (Oy Imix Ab, Finland) consists of an intensifying screen, optics, and a CCD camera. An upgrade of this system (Imix 2000) with a red-emitting screen and new optics has recently been released. The image quality of Imix (original version), Imix 2000, and two storage-phosphor systems, Fuji FCR 9501 and Agfa ADC70, was evaluated in physical terms (DQE) and with visual grading of the visibility of anatomical structures in clinical images (141 kV). PA chest images of 50 healthy volunteers were evaluated by experienced radiologists. All images were evaluated on Siemens Simomed monitors, using the European Quality Criteria. The maximum DQE values for Imix, Imix 2000, Agfa and Fuji were 11%, 14%, 17% and 19%, respectively (141 kV, 5 μGy). In the visual grading, the observers ranked the systems in the following descending order: Fuji, Imix 2000, Agfa, and Imix. Thus, the upgrade to Imix 2000 resulted in higher DQE values and a significant improvement in clinical image quality. The visual grading agrees reasonably well with the DQE results; however, Imix 2000 received a better score than could be expected from the DQE measurements. Keywords: CCD Technique, Chest Imaging, Digital Radiography, DQE, Image Quality, Visual Grading Analysis
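
    For reference, DQE values such as those quoted above are conventionally derived from the presampled MTF and the measured noise power spectrum. A standard, IEC-style form, stated here as general background rather than as this study's exact procedure, is

    \[ \mathrm{DQE}(u) = \frac{\mathrm{MTF}^{2}(u)}{\bar{q}\,\mathrm{NNPS}(u)} \]

    where u is the spatial frequency, q̄ is the incident photon fluence per unit area, and NNPS is the noise power spectrum normalised by the squared large-area signal.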

  3. Comprehensive quality assurance phantom for cardiovascular imaging systems

    NASA Astrophysics Data System (ADS)

    Lin, Pei-Jan P.

    1998-07-01

    With the advent of x-ray tubes with high heat-loading capacity, high-frequency inverter-type generators, and the use of spectral shaping filters, the automatic brightness/exposure control (ABC) circuit logic employed in the new generation of angiographic imaging equipment has been significantly reprogrammed. These new angiographic imaging systems are designed to take advantage of the power train capabilities to yield higher-contrast images while maintaining, or lowering, the patient exposure. Since the emphasis of the imaging system design has been significantly altered, the system performance parameters of interest and the phantoms employed for quality assurance must also change in order to properly evaluate the imaging capability of cardiovascular imaging systems. A quality assurance (QA) phantom has been under development at this institution and was submitted to interested organizations such as the American Association of Physicists in Medicine (AAPM), the Society for Cardiac Angiography & Interventions (SCA&I), and the National Electrical Manufacturers Association (NEMA) for review and input. At the same time, in an effort to establish a unified standard phantom design for cardiac catheterization laboratories (CCL), SCA&I and NEMA formed a joint work group in early 1997 to develop a suitable phantom. The initial QA phantom design has since been accepted by the SCA&I-NEMA Joint Work Group (JWG) to serve as the base phantom from which a comprehensive QA phantom is being developed.

  4. Image-quality metrics for characterizing adaptive optics system performance.

    PubMed

    Brigantic, R T; Roggemann, M C; Bauer, K W; Welsh, B M

    1997-09-10

    Adaptive optics system (AOS) performance is a function of the system design, seeing conditions, and light level of the wave-front beacon. It is desirable to optimize the controllable parameters in an AOS to maximize some measure of performance. For this optimization to be useful, it is necessary that a set of image-quality metrics be developed that vary monotonically with the AOS performance under a wide variety of imaging environments. Accordingly, as conditions change, one can be confident that the computed metrics dictate appropriate system settings that will optimize performance. Three such candidate metrics are presented. The first is the Strehl ratio; the second is a novel metric that modifies the Strehl ratio by integration of the modulus of the average system optical transfer function to a noise-effective cutoff frequency at which some specified image spectrum signal-to-noise ratio level is attained; and the third is simply the cutoff frequency just mentioned. It is shown that all three metrics are correlated with the rms error (RMSE) between the measured image and the associated diffraction-limited image. Of these, the Strehl ratio and the modified Strehl ratio exhibit consistently high correlations with the RMSE across a broad range of conditions and system settings. Furthermore, under conditions that yield a constant average system optical transfer function, the modified Strehl ratio can still be used to delineate image quality, whereas the Strehl ratio cannot.
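
    A minimal numerical sketch of the first two metrics is given below, assuming the measured and diffraction-limited PSFs are available on the same grid. The cutoff is passed in as a fraction of the Nyquist frequency, whereas in the paper it is derived from an image-spectrum SNR criterion, and the exact normalisation conventions may differ.

    ```python
    import numpy as np

    def strehl_ratio(psf, psf_dl):
        """Strehl ratio: peak of the measured PSF over the peak of the
        diffraction-limited PSF, both normalised to unit total energy."""
        psf = psf / psf.sum()
        psf_dl = psf_dl / psf_dl.sum()
        return psf.max() / psf_dl.max()

    def modified_strehl(psf, psf_dl, cutoff_frac=0.5):
        """Modified Strehl: integral of |OTF| inside a circular cutoff,
        relative to the same integral for the diffraction-limited OTF."""
        otf = np.abs(np.fft.fftshift(np.fft.fft2(psf / psf.sum())))
        otf_dl = np.abs(np.fft.fftshift(np.fft.fft2(psf_dl / psf_dl.sum())))
        ny, nx = otf.shape
        yy, xx = np.mgrid[:ny, :nx]
        r = np.hypot(yy - ny // 2, xx - nx // 2)
        mask = r <= cutoff_frac * min(ny, nx) / 2   # circular low-pass region
        return otf[mask].sum() / otf_dl[mask].sum()
    ```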

  5. Underwater image quality enhancement through composition of dual-intensity images and Rayleigh-stretching.

    PubMed

    Abdul Ghani, Ahmad Shahrizan; Mat Isa, Nor Ashidi

    2014-01-01

    The quality of underwater images is poor due to the properties of water and its impurities, which attenuate light travelling through the water medium and result in low contrast, blur, inhomogeneous lighting, and color diminishing in underwater images. This paper proposes a method of enhancing underwater image quality. The proposed method consists of two stages. In the first stage, a contrast correction technique is applied: the modified Von Kries hypothesis is applied to the image, which is then stretched into two different intensity images around the average value with respect to the Rayleigh distribution. In the second stage, a color correction technique is applied: the image is first converted into the hue-saturation-value (HSV) color model, and modification of the color component increases the image color performance. Qualitative and quantitative analyses indicate that the proposed method outperforms other state-of-the-art methods in terms of contrast, details, and noise reduction.
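
    A minimal sketch of the Rayleigh-based stretching step is shown below for a single channel: the intensity histogram is remapped so that it follows a Rayleigh distribution. The dual-intensity composition, Von Kries step and HSV colour correction of the paper are not reproduced, and the scale parameter is an arbitrary assumption.

    ```python
    import numpy as np

    def rayleigh_stretch(channel, sigma=0.4):
        """Remap an 8-bit channel so that its intensity histogram follows a
        Rayleigh distribution (a common contrast-stretching step for
        underwater images)."""
        flat = channel.astype(float).ravel()
        # Empirical CDF value of every pixel (rank / N), kept strictly inside (0, 1)
        ranks = np.argsort(np.argsort(flat))
        cdf = (ranks + 0.5) / flat.size
        # Inverse CDF of the Rayleigh distribution: x = sigma * sqrt(-2 ln(1 - F))
        stretched = sigma * np.sqrt(-2.0 * np.log(1.0 - cdf))
        stretched = stretched / stretched.max() * 255.0
        return stretched.reshape(channel.shape).astype(np.uint8)
    ```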

  6. Underwater image quality enhancement through composition of dual-intensity images and Rayleigh-stretching.

    PubMed

    Abdul Ghani, Ahmad Shahrizan; Mat Isa, Nor Ashidi

    2014-01-01

    The quality of underwater images is poor due to the properties of water and its impurities, which attenuate light travelling through the water medium and result in low contrast, blur, inhomogeneous lighting, and color diminishing in underwater images. This paper proposes a method of enhancing underwater image quality. The proposed method consists of two stages. In the first stage, a contrast correction technique is applied: the modified Von Kries hypothesis is applied to the image, which is then stretched into two different intensity images around the average value with respect to the Rayleigh distribution. In the second stage, a color correction technique is applied: the image is first converted into the hue-saturation-value (HSV) color model, and modification of the color component increases the image color performance. Qualitative and quantitative analyses indicate that the proposed method outperforms other state-of-the-art methods in terms of contrast, details, and noise reduction. PMID:25674483

  7. Effects of task and image properties on visual-attention deployment in image-quality assessment

    NASA Astrophysics Data System (ADS)

    Alers, Hani; Redi, Judith; Liu, Hantao; Heynderickx, Ingrid

    2015-03-01

    It is important to understand how humans view images and how their behavior is affected by changes in the properties of the viewed images and the task they are given, particularly the task of scoring the image quality (IQ). This is a complex behavior that holds great importance for the field of image-quality research. This work builds upon 4 years of research work spanning three databases studying image-viewing behavior. Using eye-tracking equipment, it was possible to collect information on human viewing behavior of different kinds of stimuli and under different experimental settings. This work performs a cross-analysis on the results from all these databases using state-of-the-art similarity measures. The results strongly show that asking the viewers to score the IQ significantly changes their viewing behavior. Also muting the color saturation seems to affect the saliency of the images. However, a change in IQ was not consistently found to modify visual attention deployment, neither under free looking nor during scoring. These results are helpful in gaining a better understanding of image viewing behavior under different conditions. They also have important implications on work that collects subjective image-quality scores from human observers.

  8. Improvement of material decomposition and image quality in dual-energy radiography by reducing image noise

    NASA Astrophysics Data System (ADS)

    Lee, D.; Kim, Y.-s.; Choi, S.; Lee, H.; Choi, S.; Jo, B. D.; Jeon, P.-H.; Kim, H.; Kim, D.; Kim, H.; Kim, H.-J.

    2016-08-01

    Although digital radiography has been widely used for screening human anatomical structures in clinical situations, it has several limitations due to anatomical overlap. To resolve this problem, dual-energy imaging techniques, which provide a method for decomposing overlying anatomical structures, have been suggested as alternative imaging techniques. Previous studies have reported several dual-energy techniques, each resulting in a different image quality. In this study, we compared three dual-energy techniques: simple log subtraction (SLS), simple smoothing of a high-energy image (SSH), and anti-correlated noise reduction (ACNR), with respect to material thickness quantification and image quality. To evaluate dual-energy radiography, we conducted Monte Carlo simulation and experimental phantom studies. The Geant4 Application for Tomographic Emission (GATE) v6.0 and tungsten anode spectral model using interpolation polynomials (TASMIP) codes were used for the simulation studies, and digital radiography and human chest phantoms were used for the experimental studies. The results of the simulation study showed improved image contrast-to-noise ratio (CNR) and coefficient of variation (COV) values and improved bone thickness estimation accuracy when applying the ACNR and SSH methods. Furthermore, the chest phantom images showed better image quality with the SSH and ACNR methods than with the SLS method. In particular, the bone texture characteristics were well described when applying the SSH and ACNR methods. In conclusion, the SSH and ACNR methods improved the accuracy of material quantification and the image quality of dual-energy radiography compared to SLS. Our results can contribute to better diagnostic capabilities of dual-energy images and accurate material quantification in various clinical situations.
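
    As an illustration of the two simpler techniques compared above, the sketch below implements simple log subtraction and an SSH-style variant on hypothetical low- and high-energy image arrays. The weighting factor w (which in practice would be tuned so that the soft-tissue signal cancels, e.g. from the ratio of soft-tissue attenuation coefficients at the two energies) and the smoothing width are illustrative assumptions, and the ACNR step is not reproduced.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def simple_log_subtraction(low, high, w=0.5):
        """SLS: weighted subtraction of the log-transformed low- and high-energy
        images; with a suitable w this suppresses soft tissue and emphasises bone."""
        eps = 1e-6  # avoid log(0) in background pixels
        return np.log(high + eps) - w * np.log(low + eps)

    def ssh(low, high, w=0.5, sigma=2.0):
        """SSH-style variant: smooth the high-energy image before the subtraction
        to reduce the noise it contributes to the decomposed image."""
        eps = 1e-6
        return np.log(gaussian_filter(high.astype(float), sigma) + eps) - w * np.log(low + eps)
    ```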

  9. Effect of exercise supplementation on dipyridamole thallium-201 image quality

    SciTech Connect

    Stern, S.; Greenberg, I.D.; Corne, R.

    1991-08-01

    To determine the effect of different types of exercise supplementation on dipyridamole thallium image quality, 78 patients were prospectively randomized to one of three protocols: dipyridamole infusion alone, dipyridamole supplemented with isometric handgrip, and dipyridamole with low-level treadmill exercise. Heart-to-lung, heart-to-liver, and heart-to-adjacent infradiaphragmatic activity ratios were generated from anterior images acquired immediately following the test. Additionally, heart-to-total infradiaphragmatic activity was graded semiquantitatively. Results showed a significantly higher ratio of heart to subdiaphragmatic activity in the treadmill group as compared with dipyridamole alone (p < 0.001) and dipyridamole supplemented with isometric handgrip exercise (p < 0.001). No significant difference was observed between patients receiving the dipyridamole infusion, and dipyridamole supplemented with isometric handgrip exercise. The authors conclude that low-level treadmill exercise supplementation of dipyridamole infusion is an effective means of improving image quality. Supplementation with isometric handgrip does not improve image quality over dipyridamole alone.

  10. Metal artifact reduction and image quality evaluation of lumbar spine CT images using metal sinogram segmentation.

    PubMed

    Kaewlek, Titipong; Koolpiruck, Diew; Thongvigitmanee, Saowapak; Mongkolsuk, Manus; Thammakittiphan, Sastrawut; Tritrakarn, Siri-on; Chiewvit, Pipat

    2015-01-01

    Metal artifacts often appear in computed tomography (CT) images. In the case of lumbar spine CT, these artifacts obscure the images of critical organs and can affect the diagnosis, treatment, and follow-up care of the patient. One approach to metal artifact reduction is the sinogram completion method. A mixed-variable thresholding (MixVT) technique to identify the appropriate metal sinogram is proposed. This technique consists of four steps: 1) identify the metal objects in the image by using k-means clustering with soft cluster assignment; 2) transform the image into two sinograms, one for the metal object and one for the surrounding tissue, and find the boundary of the metal sinogram with the MixVT technique; 3) estimate new values for the missing data in the metal sinogram by linear interpolation from the surrounding-tissue sinogram; 4) reconstruct the modified sinogram using filtered back-projection and complete the result by adding the image of the metal object back into the reconstructed image. Quantitative and clinical image quality evaluation of the proposed technique demonstrated a significant improvement in image clarity and detail, which enhances the effectiveness of diagnosis and treatment.
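
    The generic sinogram-completion workflow behind this family of methods can be sketched with scikit-image's radon/iradon pair, as below. The metal is identified here with a plain intensity threshold rather than the MixVT boundary detection, the input is assumed to be a square image, and the interpolation is a simple per-angle linear fill, so this illustrates the overall idea rather than the paper's algorithm.

    ```python
    import numpy as np
    from skimage.transform import radon, iradon

    def mar_sinogram_completion(image, metal_threshold, theta=None):
        """Generic sinogram-completion metal artifact reduction:
        interpolate over the metal trace in the sinogram, reconstruct with
        filtered back-projection, then paste the metal object back in."""
        if theta is None:
            theta = np.linspace(0.0, 180.0, max(image.shape), endpoint=False)

        metal_mask = image > metal_threshold            # step 1: identify metal
        sino = radon(image, theta=theta, circle=False)  # step 2: forward project
        metal_trace = radon(metal_mask.astype(float), theta=theta, circle=False) > 1e-3

        corrected = sino.copy()                         # step 3: fill missing data
        rows = np.arange(sino.shape[0])
        for j in range(sino.shape[1]):                  # one column per angle
            bad = metal_trace[:, j]
            if bad.any() and not bad.all():
                corrected[bad, j] = np.interp(rows[bad], rows[~bad], sino[~bad, j])

        recon = iradon(corrected, theta=theta, circle=False,
                       output_size=image.shape[0])      # step 4: FBP (square image assumed)
        recon[metal_mask] = image[metal_mask]           # add the metal object back
        return recon
    ```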

  11. Image Quality Analysis of Various Gastrointestinal Endoscopes: Why Image Quality Is a Prerequisite for Proper Diagnostic and Therapeutic Endoscopy.

    PubMed

    Ko, Weon Jin; An, Pyeong; Ko, Kwang Hyun; Hahm, Ki Baik; Hong, Sung Pyo; Cho, Joo Young

    2015-09-01

    Arising from human curiosity in terms of the desire to look within the human body, endoscopy has undergone significant advances in modern medicine. Direct visualization of the gastrointestinal (GI) tract by traditional endoscopy was first introduced over 50 years ago, after which fairly rapid advancement from rigid esophagogastric scopes to flexible scopes and high definition videoscopes has occurred. In an effort towards early detection of precancerous lesions in the GI tract, several high-technology imaging scopes have been developed, including narrow band imaging, autofocus imaging, magnified endoscopy, and confocal microendoscopy. However, these modern developments have resulted in fundamental imaging technology being skewed towards red-green-blue and this technology has obscured the advantages of other endoscope techniques. In this review article, we have described the importance of image quality analysis using a survey to consider the diversity of endoscope system selection in order to better achieve diagnostic and therapeutic goals. The ultimate aims can be achieved through the adoption of modern endoscopy systems that obtain high image quality.

  12. Image Quality Analysis of Various Gastrointestinal Endoscopes: Why Image Quality Is a Prerequisite for Proper Diagnostic and Therapeutic Endoscopy

    PubMed Central

    Ko, Weon Jin; An, Pyeong; Ko, Kwang Hyun; Hahm, Ki Baik; Hong, Sung Pyo

    2015-01-01

    Arising from human curiosity in terms of the desire to look within the human body, endoscopy has undergone significant advances in modern medicine. Direct visualization of the gastrointestinal (GI) tract by traditional endoscopy was first introduced over 50 years ago, after which fairly rapid advancement from rigid esophagogastric scopes to flexible scopes and high definition videoscopes has occurred. In an effort towards early detection of precancerous lesions in the GI tract, several high-technology imaging scopes have been developed, including narrow band imaging, autofocus imaging, magnified endoscopy, and confocal microendoscopy. However, these modern developments have resulted in fundamental imaging technology being skewed towards red-green-blue and this technology has obscured the advantages of other endoscope techniques. In this review article, we have described the importance of image quality analysis using a survey to consider the diversity of endoscope system selection in order to better achieve diagnostic and therapeutic goals. The ultimate aims can be achieved through the adoption of modern endoscopy systems that obtain high image quality. PMID:26473119

  13. Spread spectrum image watermarking based on perceptual quality metric.

    PubMed

    Zhang, Fan; Liu, Wenyu; Lin, Weisi; Ngan, King Ngi

    2011-11-01

    Efficient image watermarking calls for full exploitation of the perceptual distortion constraint. Second-order statistics of visual stimuli are regarded as critical features for perception. This paper proposes a second-order statistics (SOS)-based image quality metric, which considers the texture masking effect and the contrast sensitivity in Karhunen-Loève transform domain. Compared with the state-of-the-art metrics, the quality prediction by SOS better correlates with several subjectively rated image databases, in which the images are impaired by the typical coding and watermarking artifacts. With the explicit metric definition, spread spectrum watermarking is posed as an optimization problem: we search for a watermark to minimize the distortion of the watermarked image and to maximize the correlation between the watermark pattern and the spread spectrum carrier. The simple metric guarantees the optimal watermark a closed-form solution and a fast implementation. The experiments show that the proposed watermarking scheme can take full advantage of the distortion constraint and improve the robustness in return.

  14. Quality assessment of butter cookies applying multispectral imaging.

    PubMed

    Andresen, Mette S; Dissing, Bjørn S; Løje, Hanne

    2013-07-01

    A method for characterizing butter cookie quality by assessing surface browning and water content from multispectral images is presented. Based on evaluations of the browning of butter cookies, the cookies were manually divided into groups. From this categorization, reference values were calculated for a statistical prediction model correlating multispectral images with a browning score. The browning score is calculated as a function of oven temperature and baking time and is presented as a quadratic response surface. The investigated process window covered baking times of 4-16 min and temperatures of 160-200°C in a forced-convection electrically heated oven. In addition to the browning score, a model for predicting the average water content based on the same images is presented. This shows how multispectral images of butter cookies may be used for the assessment of different quality parameters. Statistical analysis showed that the most significant wavelengths for browning prediction were in the interval 400-700 nm, while the wavelengths significant for water prediction were primarily located in the near-infrared spectrum. The water prediction model was found to estimate the average water content with an absolute error of 0.22%. From the images it was also possible to follow the browning and drying propagation from the cookie edge toward the center.

  15. Study on classification of pork quality using hyperspectral imaging technique

    NASA Astrophysics Data System (ADS)

    Zeng, Shan; Bai, Jun; Wang, Haibin

    2015-12-01

    Problems in discriminating chilled, thawed and spoiled pork by hyperspectral imaging, such as the selection of feature wavelengths, are addressed. First, based on hyperspectral image data of the test pork samples in the 400-1000 nm range, 30 important wavelengths were selected from 753 wavelengths by a K-medoids clustering algorithm based on manifold distance, and 8 feature wavelengths (454.4, 477.5, 529.3, 546.8, 568.4, 580.3, 589.9 and 781.2 nm) were then selected from these according to their discrimination value. Next, 8 texture features were extracted from each image at the 8 feature wavelengths by a two-dimensional Gabor wavelet transform and used as pork quality features. Finally, a pork quality classification model was built using the fuzzy C-means clustering algorithm. The feature-wavelength experiments showed that although hyperspectral images in adjacent bands have a strong linear correlation, they exhibit a significant non-linear manifold relationship across the entire band range. The K-medoids clustering algorithm based on manifold distance used in this paper to select the characteristic wavelengths is therefore more appropriate than traditional principal component analysis (PCA). From the classification results, we conclude that hyperspectral imaging technology can accurately distinguish among chilled, thawed and spoiled meat.

  16. Characterization of image quality and image-guidance performance of a preclinical microirradiator

    SciTech Connect

    Clarkson, R.; Lindsay, P. E.; Ansell, S.; Wilson, G.; Jelveh, S.; Hill, R. P.; Jaffray, D. A.

    2011-02-15

    Purpose: To assess image quality and image-guidance capabilities of a cone-beam CT based small-animal image-guided irradiation unit (micro-IGRT). Methods: A micro-IGRT system has been developed in collaboration with the authors' laboratory as a means to study the radiobiological effects of conformal radiation dose distributions in small animals. The system, the X-Rad 225Cx, consists of a 225 kVp x-ray tube and a flat-panel amorphous silicon detector mounted on a rotational C-arm gantry and is capable of both fluoroscopic x-ray and cone-beam CT imaging, as well as image-guided placement of the radiation beams. Image quality (voxel noise, modulation transfer, CT number accuracy, and geometric accuracy characteristics) was assessed using water cylinder and micro-CT test phantoms. Image guidance was tested by analyzing the dose delivered to radiochromic films fixed to BBs through the end-to-end process of imaging, targeting the center of the BB, and irradiation of the film/BB in order to compare the offset between the center of the field and the center of the BB. Image quality and geometric studies were repeated over a 5-7 month period to assess stability. Results: CT numbers reported were found to be linear (R² ≥ 0.998) and the noise for images of homogeneous water phantom was 30 HU at imaging doses of approximately 1 cGy (to water). The presampled MTF at 50% and 10% reached 0.64 and 1.35 mm⁻¹, respectively. Targeting accuracy by means of film irradiations was shown to have a mean displacement error of [Δx, Δy, Δz] = [-0.12, -0.05, -0.02] mm, with standard deviations of [0.02, 0.20, 0.17] mm. The system has proven to be stable over time, with both the image quality and image-guidance performance being reproducible for the duration of the studies. Conclusions: The micro-IGRT unit provides soft-tissue imaging of small-animal anatomy at acceptable imaging doses (≤1 cGy). The geometric accuracy and targeting systems permit dose placement with

  17. Characterization of image quality and image-guidance performance of a preclinical microirradiator

    PubMed Central

    Clarkson, R.; Lindsay, P. E.; Ansell, S.; Wilson, G.; Jelveh, S.; Hill, R. P.; Jaffray, D. A.

    2011-01-01

    Purpose: To assess image quality and image-guidance capabilities of a cone-beam CT based small-animal image-guided irradiation unit (micro-IGRT). Methods: A micro-IGRT system has been developed in collaboration with the authors’ laboratory as a means to study the radiobiological effects of conformal radiation dose distributions in small animals. The system, the X-Rad 225Cx, consists of a 225 kVp x-ray tube and a flat-panel amorphous silicon detector mounted on a rotational C-arm gantry and is capable of both fluoroscopic x-ray and cone-beam CT imaging, as well as image-guided placement of the radiation beams. Image quality (voxel noise, modulation transfer, CT number accuracy, and geometric accuracy characteristics) was assessed using water cylinder and micro-CT test phantoms. Image guidance was tested by analyzing the dose delivered to radiochromic films fixed to BB’s through the end-to-end process of imaging, targeting the center of the BB, and irradiation of the film∕BB in order to compare the offset between the center of the field and the center of the BB. Image quality and geometric studies were repeated over a 5–7 month period to assess stability. Results: CT numbers reported were found to be linear (R2≥0.998) and the noise for images of homogeneous water phantom was 30 HU at imaging doses of approximately 1 cGy (to water). The presampled MTF at 50% and 10% reached 0.64 and 1.35 mm−1, respectively. Targeting accuracy by means of film irradiations was shown to have a mean displacement error of [Δx,Δy,Δz]=[−0.12,−0.05,−0.02] mm, with standard deviations of [0.02, 0.20, 0.17] mm. The system has proven to be stable over time, with both the image quality and image-guidance performance being reproducible for the duration of the studies. Conclusions: The micro-IGRT unit provides soft-tissue imaging of small-animal anatomy at acceptable imaging doses (≤1 cGy). The geometric accuracy and targeting systems permit dose placement with submillimeter

  18. Optimizing 3D image quality and performance for stereoscopic gaming

    NASA Astrophysics Data System (ADS)

    Flack, Julien; Sanderson, Hugh; Pegg, Steven; Kwok, Simon; Paterson, Daniel

    2009-02-01

    The successful introduction of stereoscopic TV systems, such as Samsung's 3D Ready Plasma, requires high quality 3D content to be commercially available to the consumer. Console and PC games provide the most readily accessible source of high quality 3D content. This paper describes innovative developments in a generic, PC-based game driver architecture that addresses the two key issues affecting 3D gaming: quality and speed. At the heart of the quality issue are the same considerations that studios face when producing stereoscopic renders of CG movies: how best to perform the mapping from a geometric CG environment into the stereoscopic display volume. The major difference is that for game drivers this mapping cannot be choreographed by hand but must be calculated automatically in real time without significant impact on performance. Performance is a critical issue when dealing with gaming. Stereoscopic gaming has traditionally meant rendering the scene twice, with the associated performance overhead. An alternative approach is to render the scene from one virtual camera position and use information from the z-buffer to generate a stereo pair using Depth-Image-Based Rendering (DIBR). We analyze this trade-off in more detail and provide some results relating to both 3D image quality and render performance.
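
    The DIBR approach mentioned above can be sketched as a simple pixel-shift warp of a single rendered view using its depth buffer, as below. The disparity mapping, occlusion handling and hole filling are deliberately crude placeholder assumptions; production drivers use considerably more sophisticated mappings into the display volume.

    ```python
    import numpy as np

    def dibr_stereo_pair(color, depth, max_disparity=16):
        """Create left/right views from one rendered view (H x W x 3) and its
        depth buffer (H x W, normalised to [0, 1] with 1 = nearest); nearer
        pixels receive a larger horizontal disparity."""
        h, w = depth.shape
        disparity = (depth * max_disparity).astype(int)
        left, right = np.zeros_like(color), np.zeros_like(color)
        filled_l = np.zeros((h, w), bool)
        filled_r = np.zeros((h, w), bool)
        # Paint far-to-near so nearer pixels overwrite farther ones (occlusion)
        for d in range(max_disparity + 1):
            ys, xs = np.nonzero(disparity == d)
            xl = np.clip(xs + d // 2, 0, w - 1)
            xr = np.clip(xs - d // 2, 0, w - 1)
            left[ys, xl] = color[ys, xs];  filled_l[ys, xl] = True
            right[ys, xr] = color[ys, xs]; filled_r[ys, xr] = True
        # Crude hole filling: copy the nearest pixel from the left neighbour
        for view, filled in ((left, filled_l), (right, filled_r)):
            for y in range(h):
                for x in range(1, w):
                    if not filled[y, x]:
                        view[y, x] = view[y, x - 1]
        return left, right
    ```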

  19. DES exposure checker: Dark Energy Survey image quality control crowdsourcer

    NASA Astrophysics Data System (ADS)

    Melchior, Peter; Sheldon, Erin; Drlica-Wagner, Alex; Rykoff, Eli S.

    2015-11-01

    DES exposure checker renders science-grade images directly to a web browser and allows users to mark problematic features from a set of predefined classes, thus allowing image quality control for the Dark Energy Survey to be crowdsourced through its web application. Users can also generate custom labels to help identify previously unknown problem classes; generated reports are fed back to hardware and software experts to help mitigate and eliminate recognized issues. These problem reports allow rapid correction of artifacts that otherwise may be too subtle or infrequent to be recognized.

  20. 34 CFR 85.900 - Adequate evidence.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 34 Education 1 2010-07-01 2010-07-01 false Adequate evidence. 85.900 Section 85.900 Education Office of the Secretary, Department of Education GOVERNMENTWIDE DEBARMENT AND SUSPENSION (NONPROCUREMENT) Definitions § 85.900 Adequate evidence. Adequate evidence means information sufficient to support...

  1. 12 CFR 380.52 - Adequate protection.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 12 Banks and Banking 5 2012-01-01 2012-01-01 false Adequate protection. 380.52 Section 380.52... ORDERLY LIQUIDATION AUTHORITY Receivership Administrative Claims Process § 380.52 Adequate protection. (a... interest of a claimant, the receiver shall provide adequate protection by any of the following means:...

  2. 12 CFR 380.52 - Adequate protection.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 12 Banks and Banking 5 2013-01-01 2013-01-01 false Adequate protection. 380.52 Section 380.52... ORDERLY LIQUIDATION AUTHORITY Receivership Administrative Claims Process § 380.52 Adequate protection. (a... interest of a claimant, the receiver shall provide adequate protection by any of the following means:...

  3. 12 CFR 380.52 - Adequate protection.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 12 Banks and Banking 5 2014-01-01 2014-01-01 false Adequate protection. 380.52 Section 380.52... ORDERLY LIQUIDATION AUTHORITY Receivership Administrative Claims Process § 380.52 Adequate protection. (a... interest of a claimant, the receiver shall provide adequate protection by any of the following means:...

  4. 21 CFR 1404.900 - Adequate evidence.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 9 2010-04-01 2010-04-01 false Adequate evidence. 1404.900 Section 1404.900 Food and Drugs OFFICE OF NATIONAL DRUG CONTROL POLICY GOVERNMENTWIDE DEBARMENT AND SUSPENSION (NONPROCUREMENT) Definitions § 1404.900 Adequate evidence. Adequate evidence means information sufficient...

  5. SU-E-J-38: Improved DRR Image Quality Using Polyetheretherketone (PEEK) Fiducial in Image Guided Radiotherapy (IGRT)

    SciTech Connect

    Shen, S; Jacob, R; Popple, R; Duan, J; Wu, X; Cardan, R; Brezovich, I

    2015-06-15

    Purpose: Fiducial-based imaging is often used in IGRT. Traditional gold fiducial markers often produce substantial reconstruction artifacts, which result in poor DRR image quality for online kV-to-DRR matching. This study evaluated the image quality of PEEK fiducials in DRRs using static and moving phantoms. Methods: CT scans of gold and PEEK fiducials (both 1×3 mm) were acquired in a 22 cm cylindrical phantom filled with water. Image artifacts were evaluated using the maximum CT value deviation from water due to artifacts, the volume of artifacts within a 10×10 cm region of the central slice, and the maximum length of streak artifacts from the fiducial. DRR resolution was measured using the FWHM and FWTM. 4DCT of the PEEK fiducial was acquired with the phantom moving sinusoidally in the superior-inferior direction, and motion artifacts were assessed for various 4D phase angles. Results: The maximum CT value deviation was −174 for gold and −24 for PEEK. The volume of artifacts in a 10×10 cm, 3 mm thick slice was 0.369 cm3 for gold and 0.074 cm3 for PEEK. The maximum length of streak artifact was 80 mm for gold and 7 mm for PEEK. In the DRR, the FWHM was close to the actual value (1.0 mm for gold and 1.1 mm for PEEK), and the FWTM was 1.8 mm for gold and 1.3 mm for PEEK. A barrel-shaped motion artifact of the PEEK fiducial was noticeable in the free-breathing scan. The apparent PEEK length due to residual motion was in close agreement with the calculated length (13 mm for the 30–70 phase, 10 mm for the 40–60 phase). Conclusion: Streak artifacts on the planning CT associated with gold fiducials can be significantly reduced by using PEEK fiducials, while maintaining adequate kV image contrast. DRR image resolution at FWTM improved from 1.8 mm to 1.3 mm. Because of this improvement, we now routinely use PEEK for liver IGRT.

  6. Reproducibility of Mammography Units, Film Processing and Quality Imaging

    NASA Astrophysics Data System (ADS)

    Gaona, Enrique

    2003-09-01

    The purpose of this study was to carry out an exploratory survey of quality control problems in mammography units and film processors as a diagnosis of the current situation of mammography facilities. Measurements of reproducibility, optical density, optical difference and gamma index are included. Breast cancer is the most frequently diagnosed cancer and the second leading cause of cancer death among women in the Mexican Republic. Mammography is a radiographic examination specially designed for detecting breast pathology. We found that the reproducibility problems of the AEC are smaller than those of the film processors, since almost all processors fall outside the acceptable variation limits, which can affect mammographic image quality and the dose to the breast. Only four mammography units met the minimum phantom-image score established by the ACR and FDA.

  7. Exploring V1 by modeling the perceptual quality of images.

    PubMed

    Zhang, Fan; Jiang, Wenfei; Autrusseau, Florent; Lin, Weisi

    2014-01-24

    We propose an image quality model based on phase and amplitude differences between a reference and a distorted image. The proposed model is motivated by the fact that polar representations can separate visual information in a more independent and efficient manner than Cartesian representations in the primary visual cortex (V1). We subsequently estimate the model parameters from a large subjective data set using maximum likelihood methods. By comparing various model hypotheses on the functional form of the phase and amplitude terms, we find that: (a) discrimination of visual orientation is important for quality assessment, yet a coarse level of such discrimination seems sufficient; and (b) a product-based amplitude-phase combination before pooling is effective, suggesting an interesting viewpoint on the functional structure of the simple and complex cells in V1.
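
    A toy illustration of a product-based amplitude-phase combination is sketched below using a global Fourier decomposition; the paper's fitted model operates on local V1-like responses with estimated parameters, so every choice here (global FFT, the similarity functions, the exponent p) is an assumption for illustration only.

    ```python
    import numpy as np

    def phase_amplitude_quality(reference, distorted, p=0.5):
        """Toy quality score built from amplitude and phase differences in the
        Fourier domain, combined multiplicatively before pooling."""
        R = np.fft.fft2(reference.astype(float))
        D = np.fft.fft2(distorted.astype(float))
        amp_r, amp_d = np.abs(R), np.abs(D)
        # Amplitude similarity in [0, 1]
        amp_sim = (2 * amp_r * amp_d + 1e-6) / (amp_r**2 + amp_d**2 + 1e-6)
        # Phase similarity in [0, 1]: 1 when phases agree, 0 when opposite
        phase_sim = 0.5 * (1 + np.cos(np.angle(R) - np.angle(D)))
        # Product-based combination followed by pooling (mean over frequencies)
        return float(np.mean((amp_sim ** p) * (phase_sim ** (1 - p))))
    ```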

  8. A virtual image chain for perceived image quality of medical display

    NASA Astrophysics Data System (ADS)

    Marchessoux, Cédric; Jung, Jürgen

    2006-03-01

    This paper describes a virtual image chain for medical display (project VICTOR, granted in the 5th Framework Programme by the European Commission). The chain starts from raw data of an image digitizer (CR, DR) or from synthetic patterns and covers image enhancement (MUSICA by Agfa) and both display possibilities, hardcopy (film on a viewing box) and softcopy (monitor). A key feature of the chain is a complete image-wise approach. A first prototype is implemented in an object-oriented software platform. The display chain consists of several modules. Raw images are either taken from scanners (CR-DR) or from a pattern generator, in which the characteristics of DR-CR systems are introduced through their MTF and their dose-dependent Poisson noise. The image undergoes enhancement and is then passed to the display stage. For softcopy display, color and monochrome monitors are used in the simulation, and the image is down-sampled. The non-linear response of a color monitor is taken into account by the GOG or S-curve model, whereas the DICOM Grayscale Standard Display Function (GSDF) is used for monochrome display. The MTF of the monitor is applied to the image in intensity levels. For hardcopy display, the combination of film, printer, lightbox and viewing conditions is modeled. The image is up-sampled and the DICOM GSDF or a Kanamori look-up table is applied. An anisotropic model of the printer MTF is applied to the image in intensity levels. The density-dependent color (XYZ) of the hardcopy film is introduced by look-up tables. Finally, a human visual system model is applied to the intensity images (XYZ in terms of cd/m2) in order to eliminate non-visible differences. Comparison yields visible differences, which are quantified by higher-order image quality metrics. A dedicated image viewer is used for visualization of the intensity images and the visual difference maps.

  9. A quality assurance program for image quality of cone-beam CT guidance in radiation therapy

    SciTech Connect

    Bissonnette, Jean-Pierre; Moseley, Douglas J.; Jaffray, David A.

    2008-05-15

    The clinical introduction of volumetric x-ray image-guided radiotherapy systems necessitates formal commissioning of the hardware and of the image-guidance processes to be used, as well as drafting quality assurance (QA) programs for both. Satisfying both requirements provides confidence in the system's ability to manage geometric variations in patient setup and internal organ motion. As these systems become a routine clinical modality, the authors present data from their QA program tracking the image quality performance of ten volumetric systems over a period of 3 years. These data are subsequently used to establish evidence-based tolerances for a QA program. The volumetric imaging system used in this work combines a linear accelerator with a conventional x-ray tube and an amorphous silicon flat-panel detector mounted orthogonally to the accelerator central beam axis, in a cone-beam computed tomography (CBCT) configuration. In the spirit of AAPM Report No. 74, the present work presents the image quality portion of the QA program; the aspects of the QA protocol addressing imaging geometry have been presented elsewhere. Specifically, the authors present data demonstrating the high linearity of CT numbers, the uniformity of axial reconstructions, and the high-contrast spatial resolution (1-2 mm) of ten CBCT systems from two commercial vendors. They also present data accumulated over a period of several months demonstrating the long-term stability of the flat-panel detector and of distances measured on reconstructed volumetric images. Their tests demonstrate that each specific CBCT system has unique performance. In addition, scattered x rays are shown to influence the imaging performance in terms of spatial resolution, axial reconstruction uniformity, and the linearity of CT numbers.

  10. Digital TV image quality improvement considering distributions of edge characteristic

    NASA Astrophysics Data System (ADS)

    Hong, Sang-Gi; Kim, Jae-Chul; Park, Jong-Hyun

    2003-12-01

    Sharpness enhancement is a widely used technique for improving the perceptual quality of an image by emphasizing its high-frequency content. In this paper, a psychophysical experiment was conducted with 20 observers using simple linear unsharp masking for sharpness enhancement. The experimental results were analysed using z-scores and linear regression. Finally, using these results, we suggest an observer-preferred sharpness enhancement method for digital television.
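
    A minimal sketch of the linear unsharp masking used in the experiment is shown below for a single-channel 8-bit image; the Gaussian blur width and the gain (amount) are illustrative assumptions, not the values tested with the observers.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def unsharp_mask(image, sigma=1.5, amount=0.6):
        """Linear unsharp masking: add a scaled high-frequency residual
        (original minus Gaussian-blurred copy) back onto the image."""
        img = image.astype(float)
        blurred = gaussian_filter(img, sigma)
        sharpened = img + amount * (img - blurred)
        return np.clip(sharpened, 0, 255).astype(image.dtype)
    ```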

  11. Factors Affecting Image Quality in Near-field Ultra-wideband Radar Imaging for Biomedical Applications

    NASA Astrophysics Data System (ADS)

    Curtis, Charlotte

    Near-field ultra-wideband radar imaging has potential as a new breast imaging modality. While a number of reconstruction algorithms have been published with the goal of reducing undesired responses or clutter, an in-depth analysis of the dominant sources of clutter has not been conducted. In this thesis, time domain radar image reconstruction is demonstrated to be equivalent to frequency domain synthetic aperture radar. This reveals several assumptions inherent to the reconstruction algorithm related to radial spreading, point source antennas, and the independent summation of point scatterers. Each of these assumptions is examined in turn to determine which has the greatest impact on the resulting image quality and interpretation. In addition, issues related to heterogeneous and dispersive media are addressed. Variations in imaging parameters are tested by observing their influence on the system point spread function. Results are then confirmed by testing on simple and detailed simulation models, followed by data acquired from human volunteers. Recommended parameters are combined into a new imaging operator that is demonstrated to generate results comparable to a more accurate signal model, but with a 50 fold improvement in computational efficiency. Finally, the most significant factor affecting image quality is determined to be the estimate of tissue properties used to form the image.

  12. Incorporating detection tasks into the assessment of CT image quality

    NASA Astrophysics Data System (ADS)

    Scalzetti, E. M.; Huda, W.; Ogden, K. M.; Khan, M.; Roskopf, M. L.; Ogden, D.

    2006-03-01

    The purpose of this study was to compare traditional and task-dependent assessments of CT image quality. Chest CT examinations were obtained with a standard protocol for subjects participating in a lung cancer-screening project. Images were selected for patients whose weight ranged from 45 kg to 159 kg. Six ABR-certified radiologists subjectively ranked these images using a traditional six-point ranking scheme that ranged from 1 (inadequate) to 6 (excellent). Three subtle diagnostic tasks were identified: (1) a lung section containing a sub-centimeter nodule of ground-glass opacity in an upper lung; (2) a mediastinal section with a lymph node of soft tissue density in the mediastinum; (3) a liver section with a rounded low-attenuation lesion in the liver periphery. Each observer was asked to estimate the probability of detecting each type of lesion in the appropriate CT section using a six-point scale ranging from 1 (< 10%) to 6 (> 90%). Traditional and task-dependent measures of image quality were plotted as a function of patient weight. For the lung section, task-dependent evaluations were very similar to those obtained using the traditional scoring scheme, but with larger inter-observer differences. Task-dependent evaluations for the mediastinal section showed no obvious trend with subject weight, whereas the traditional score decreased from ~4.9 for smaller subjects to ~3.3 for the larger subjects. Task-dependent evaluations for the liver section showed a decreasing trend from ~4.1 for the smaller subjects to ~1.9 for the larger subjects, whereas the traditional evaluation had a markedly narrower range of scores. A task-dependent method of assessing CT image quality can be implemented with relative ease and is likely to be more meaningful in the clinical setting.

  13. Image quality criteria for wide-field x-ray imaging applications

    NASA Astrophysics Data System (ADS)

    Thompson, Patrick L.; Harvey, James E.

    1999-10-01

    For staring, wide-field applications, such as a solar x-ray imager, the severe off-axis aberrations of the classical Wolter Type-I grazing incidence x-ray telescope design drastically limit the 'resolution' near the solar limb. A specification on on-axis fractional encircled energy is thus not an appropriate image quality criterion for such wide-angle applications. A more meaningful image quality criterion would be a field-weighted-average measure of 'resolution.' Since surface scattering effects from residual optical fabrication errors are always substantial at these very short wavelengths, the field-weighted-average half-power radius is a far more appropriate measure of aerial resolution. If an ideal mosaic detector array is used in the focal plane, the finite pixel size provides a practical limit to system performance. Thus, the total number of aerial resolution elements enclosed by the operational field-of-view, expressed as a percentage of the number of ideal detector pixels, is a further improved image quality criterion. In this paper we describe the development of an image quality criterion for wide-field applications of grazing incidence x-ray telescopes, which leads to a new class of grazing incidence designs described in a following companion paper.

  14. Beef quality parameters estimation using ultrasound and color images

    PubMed Central

    2015-01-01

    Background Beef quality measurement is a complex task with high economic impact. There is high interest in obtaining automatic estimation of quality parameters in live cattle or post mortem. In this paper we set out to obtain beef quality estimates from the analysis of ultrasound (in vivo) and color images (post mortem), with the measurement of various parameters related to tenderness and amount of meat: rib eye area, percentage of intramuscular fat and backfat thickness or subcutaneous fat. Proposal An algorithm based on curve evolution is implemented to calculate the rib eye area. The backfat thickness is estimated from the profile of distances between two curves that limit the steak and the rib eye, previously detected. A model based on Support Vector Regression (SVR) is trained to estimate the intramuscular fat percentage. A set of features extracted from a region of interest, previously detected in both ultrasound and color images, was proposed. In all cases, a complete evaluation was performed with different databases including: color and ultrasound images acquired by a beef industry expert, intramuscular fat estimation obtained by an expert using commercial software, and chemical analysis. Conclusions The proposed algorithms show good results in calculating the rib eye area and the backfat thickness measure and profile. They are also promising in predicting the percentage of intramuscular fat. PMID:25734452
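
    The intramuscular-fat regression described above can be prototyped with scikit-learn's SVR. The feature matrix, targets, and hyperparameters below are placeholders, not the features or settings used by the authors:

    ```python
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVR

    # Placeholder data: rows are regions of interest, columns are texture/intensity
    # features; targets are intramuscular fat percentages from chemical analysis.
    rng = np.random.default_rng(0)
    X = rng.random((60, 12))
    y = 2.0 + 6.0 * rng.random(60)

    model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.2))
    model.fit(X, y)
    print(model.predict(X[:3]))   # predicted fat percentage for three ROIs
    ```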

  15. Objective assessment of image quality. IV. Application to adaptive optics

    PubMed Central

    Barrett, Harrison H.; Myers, Kyle J.; Devaney, Nicholas; Dainty, Christopher

    2008-01-01

    The methodology of objective assessment, which defines image quality in terms of the performance of specific observers on specific tasks of interest, is extended to temporal sequences of images with random point spread functions and applied to adaptive imaging in astronomy. The tasks considered include both detection and estimation, and the observers are the optimal linear discriminant (Hotelling observer) and the optimal linear estimator (Wiener). A general theory of first- and second-order spatiotemporal statistics in adaptive optics is developed. It is shown that the covariance matrix can be rigorously decomposed into three terms representing the effect of measurement noise, random point spread function, and random nature of the astronomical scene. Figures of merit are developed, and computational methods are discussed. PMID:17106464
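
    For the detection tasks mentioned above, the Hotelling observer reduces to a linear template and a scalar detectability. The short sketch below shows only that reduction; the full spatiotemporal covariance decomposition developed in the paper is not reproduced, and the variable names are illustrative.

    ```python
    import numpy as np

    def hotelling_detectability(mean_present, mean_absent, covariance):
        """Hotelling (optimal linear) observer.

        Template: w = K^{-1} (s1 - s0); detectability: SNR^2 = (s1 - s0)^T K^{-1} (s1 - s0).
        Images are flattened to vectors; covariance is the image covariance matrix K.
        """
        ds = mean_present.ravel() - mean_absent.ravel()
        w = np.linalg.solve(covariance, ds)   # Hotelling template
        snr2 = float(ds @ w)
        return w, np.sqrt(snr2)
    ```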

  16. Effects of characteristics of image quality in an immersive environment

    NASA Technical Reports Server (NTRS)

    Duh, Henry Been-Lirn; Lin, James J W.; Kenyon, Robert V.; Parker, Donald E.; Furness, Thomas A.

    2002-01-01

    Image quality issues such as field of view (FOV) and resolution are important for evaluating "presence" and simulator sickness (SS) in virtual environments (VEs). This research examined effects on postural stability of varying FOV, image resolution, and scene content in an immersive visual display. Two different scenes (a photograph of a fountain and a simple radial pattern) at two different resolutions were tested using six FOVs (30, 60, 90, 120, 150, and 180 deg.). Both postural stability, recorded by force plates, and subjective difficulty ratings varied as a function of FOV, scene content, and image resolution. Subjects exhibited more balance disturbance and reported more difficulty in maintaining posture in the wide-FOV, high-resolution, and natural scene conditions.

  17. ECG-synchronized DSA exposure control: improved cervicothoracic image quality

    SciTech Connect

    Kelly, W.M.; Gould, R.; Norman, D.; Brant-Zawadzki, M.; Cox, L.

    1984-10-01

    An electrocardiogram (ECG)-synchronized x-ray exposure sequence was used to acquire digital subtraction angiographic (DSA) images during 13 arterial injection studies of the aortic arch or carotid bifurcations. These gated images were compared with matched ungated DSA images acquired using the same technical factors, contrast material volume, and patient positioning. Subjective assessments by five experienced observers of edge definition, vessel conspicuousness, and overall diagnostic quality showed overall preference for one of the two acquisition methods in 69% of cases studied. Of these, the ECG-synchronized exposure series were rated superior in 76%. These results, as well as the relatively simple and inexpensive modifications required, suggest that routine use of ECG exposure control can facilitate improved arterial DSA evaluations of suspected cervicothoracic vascular disease.

  18. Objective assessment of image quality. IV. Application to adaptive optics.

    PubMed

    Barrett, Harrison H; Myers, Kyle J; Devaney, Nicholas; Dainty, Christopher

    2006-12-01

    The methodology of objective assessment, which defines image quality in terms of the performance of specific observers on specific tasks of interest, is extended to temporal sequences of images with random point spread functions and applied to adaptive imaging in astronomy. The tasks considered include both detection and estimation, and the observers are the optimal linear discriminant (Hotelling observer) and the optimal linear estimator (Wiener). A general theory of first- and second-order spatiotemporal statistics in adaptive optics is developed. It is shown that the covariance matrix can be rigorously decomposed into three terms representing the effect of measurement noise, random point spread function, and random nature of the astronomical scene. Figures of merit are developed, and computational methods are discussed.

  19. Objective assessment of image quality. IV. Application to adaptive optics.

    PubMed

    Barrett, Harrison H; Myers, Kyle J; Devaney, Nicholas; Dainty, Christopher

    2006-12-01

    The methodology of objective assessment, which defines image quality in terms of the performance of specific observers on specific tasks of interest, is extended to temporal sequences of images with random point spread functions and applied to adaptive imaging in astronomy. The tasks considered include both detection and estimation, and the observers are the optimal linear discriminant (Hotelling observer) and the optimal linear estimator (Wiener). A general theory of first- and second-order spatiotemporal statistics in adaptive optics is developed. It is shown that the covariance matrix can be rigorously decomposed into three terms representing the effect of measurement noise, random point spread function, and random nature of the astronomical scene. Figures of merit are developed, and computational methods are discussed. PMID:17106464

  20. Image gathering and digital restoration for fidelity and visual quality

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O.; Alter-Gartenberg, Rachel; Rahman, Zia-Ur

    1991-01-01

    The fidelity and resolution of the traditional Wiener restorations given in the prevalent digital processing literature can be significantly improved when the transformations between the continuous and discrete representations in image gathering and display are accounted for. However, the visual quality of these improved restorations also is more sensitive to the defects caused by aliasing artifacts, colored noise, and ringing near sharp edges. In this paper, these visual defects are characterized, and methods for suppressing them are presented. It is demonstrated how the visual quality of fidelity-maximized images can be improved when (1) the image-gathering system is specifically designed to enhance the performance of the image-restoration algorithm, and (2) the Wiener filter is combined with interactive Gaussian smoothing, synthetic high edge enhancement, and nonlinear tone-scale transformation. The nonlinear transformation is used primarily to enhance the spatial details that are often obscured when the normally wide dynamic range of natural radiance fields is compressed into the relatively narrow dynamic range of film and other displays.
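
    The Wiener restoration referred to above, in its basic frequency-domain form, is a short function. The sketch below is the textbook filter only; the refinements discussed in the paper (accounting for the image-gathering transformations, Gaussian smoothing, edge enhancement, tone-scale mapping) are not included, and the OTF and noise-to-signal ratio are assumed inputs.

    ```python
    import numpy as np

    def wiener_restore(degraded, otf, nsr):
        """Classical frequency-domain Wiener restoration.

        otf : optical transfer function of the image-gathering system (same shape
              as the image, DC term at element [0, 0])
        nsr : noise-to-signal power ratio, a scalar or an array over frequency
        """
        G = np.fft.fft2(degraded)
        W = np.conj(otf) / (np.abs(otf) ** 2 + nsr)
        return np.real(np.fft.ifft2(W * G))
    ```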

  1. Characterization of image quality for 3D scatter-corrected breast CT images

    NASA Astrophysics Data System (ADS)

    Pachon, Jan H.; Shah, Jainil; Tornai, Martin P.

    2011-03-01

    The goal of this study was to characterize the image quality of our dedicated, quasi-monochromatic spectrum, cone beam breast imaging system under scatter-corrected and uncorrected conditions for a variety of breast compositions. CT projections were acquired of a breast phantom containing two concentric sets of acrylic spheres that varied in size (1-8mm) based on their polar position. The breast phantom was filled with 3 different concentrations of methanol and water, simulating a range of breast densities (0.79-1.0g/cc); acrylic yarn was sometimes included to simulate connective tissue of a breast. For each phantom condition, 2D scatter was measured for all projection angles. Scatter-corrected and uncorrected projections were then reconstructed with an iterative ordered subsets convex algorithm. Reconstructed image quality was characterized using SNR and contrast analysis, followed by a human observer detection task for the spheres in the different concentric rings. Results show that scatter correction effectively reduces the cupping artifact and improves image contrast and SNR. Results from the observer study indicate that there was no statistical difference in the number or sizes of lesions observed in the scatter-corrected versus uncorrected images for all densities. Nonetheless, applying scatter correction for differing breast conditions improves overall image quality.
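
    The SNR and contrast analysis mentioned above can be expressed as simple region-of-interest statistics. A minimal sketch; the mask construction and the exact definitions used in the study are assumptions:

    ```python
    import numpy as np

    def sphere_metrics(recon, sphere_mask, background_mask):
        """Contrast, CNR and SNR of a sphere ROI against a local background ROI."""
        s = recon[sphere_mask].mean()
        b = recon[background_mask].mean()
        sigma = recon[background_mask].std()
        return {
            "contrast": (s - b) / b,
            "cnr": (s - b) / sigma,
            "snr": b / sigma,
        }
    ```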

  2. The level of detail required in a deformable phantom to accurately perform quality assurance of deformable image registration

    NASA Astrophysics Data System (ADS)

    Saenz, Daniel L.; Kim, Hojin; Chen, Josephine; Stathakis, Sotirios; Kirby, Neil

    2016-09-01

    The primary purpose of the study was to determine how detailed deformable image registration (DIR) phantoms need to be in order to adequately simulate human anatomy and accurately assess the quality of DIR algorithms. In particular, how many distinct tissues are required in a phantom to simulate complex human anatomy? Pelvis and head-and-neck patient CT images were used for this study as virtual phantoms. Two data sets from each site were analyzed. The virtual phantoms were warped to create two pairs consisting of undeformed and deformed images. Otsu’s method was employed to create additional segmented image pairs of n distinct soft tissue CT number ranges (fat, muscle, etc.). A realistic noise image was added to each image. Deformations were applied in MIM Software (MIM) and Velocity deformable multi-pass (DMP) and compared with the known warping. Images with more simulated tissue levels exhibit more contrast, enabling more accurate results. Deformation error (the magnitude of the vector difference between known and predicted deformation) was used as a metric to evaluate how many CT number gray levels are needed for a phantom to serve as a realistic patient proxy. Stabilization of the mean deformation error was reached by three soft tissue levels for Velocity DMP and MIM, though MIM exhibited a persisting difference in accuracy between the discrete images and the unprocessed image pair. A minimum detail of three levels allows a realistic patient proxy for use with Velocity and MIM deformation algorithms.
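
    A minimal sketch of the deformation-error metric defined above (magnitude of the vector difference between the known and predicted deformation vector fields), with array shapes and voxel spacing chosen for illustration. The commented lines show one way an n-level tissue segmentation could be produced with scikit-image's multi-Otsu thresholding; that tooling is an assumption, not necessarily what the study used.

    ```python
    import numpy as np
    # from skimage.filters import threshold_multiotsu
    # thresholds = threshold_multiotsu(ct_slice, classes=3)   # 3 soft-tissue levels
    # labels = np.digitize(ct_slice, bins=thresholds)

    def deformation_error_stats(known_dvf, predicted_dvf, spacing_mm=(1.0, 1.0, 1.0)):
        """Voxel-wise deformation error: magnitude of the vector difference between
        the known (applied) and algorithm-predicted deformation vector fields.

        Both DVFs have shape (nx, ny, nz, 3) in voxel units; spacing converts to mm.
        """
        diff_mm = (known_dvf - predicted_dvf) * np.asarray(spacing_mm)
        err = np.linalg.norm(diff_mm, axis=-1)
        return {"mean_mm": float(err.mean()),
                "max_mm": float(err.max()),
                "p95_mm": float(np.percentile(err, 95))}
    ```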

  3. The level of detail required in a deformable phantom to accurately perform quality assurance of deformable image registration.

    PubMed

    Saenz, Daniel L; Kim, Hojin; Chen, Josephine; Stathakis, Sotirios; Kirby, Neil

    2016-09-01

    The primary purpose of the study was to determine how detailed deformable image registration (DIR) phantoms need to be in order to adequately simulate human anatomy and accurately assess the quality of DIR algorithms. In particular, how many distinct tissues are required in a phantom to simulate complex human anatomy? Pelvis and head-and-neck patient CT images were used for this study as virtual phantoms. Two data sets from each site were analyzed. The virtual phantoms were warped to create two pairs consisting of undeformed and deformed images. Otsu's method was employed to create additional segmented image pairs of n distinct soft tissue CT number ranges (fat, muscle, etc.). A realistic noise image was added to each image. Deformations were applied in MIM Software (MIM) and Velocity deformable multi-pass (DMP) and compared with the known warping. Images with more simulated tissue levels exhibit more contrast, enabling more accurate results. Deformation error (the magnitude of the vector difference between known and predicted deformation) was used as a metric to evaluate how many CT number gray levels are needed for a phantom to serve as a realistic patient proxy. Stabilization of the mean deformation error was reached by three soft tissue levels for Velocity DMP and MIM, though MIM exhibited a persisting difference in accuracy between the discrete images and the unprocessed image pair. A minimum detail of three levels allows a realistic patient proxy for use with Velocity and MIM deformation algorithms. PMID:27494827

  4. Evaluation of scatter effects on image quality for breast tomosynthesis

    SciTech Connect

    Wu Gang; Mainprize, James G.; Boone, John M.; Yaffe, Martin J.

    2009-10-15

    Digital breast tomosynthesis uses a limited number (typically 10-20) of low-dose x-ray projections to produce a pseudo-three-dimensional volume tomographic reconstruction of the breast. The purpose of this investigation was to characterize and evaluate the effect of scattered radiation on the image quality for breast tomosynthesis. In a simulation, scatter point spread functions generated by a Monte Carlo simulation method were convolved over the breast projection to estimate the distribution of scatter for each angle of tomosynthesis projection. The results demonstrate that in the absence of scatter reduction techniques, images will be affected by cupping artifacts, and there will be reduced accuracy of attenuation values inferred from the reconstructed images. The effect of x-ray scatter on the contrast, noise, and lesion signal-difference-to-noise ratio (SDNR) in tomosynthesis reconstruction was measured as a function of the tumor size. When a with-scatter reconstruction was compared to one without scatter for a 5 cm compressed breast, the following results were observed. The contrast in the reconstructed central slice image of a tumorlike mass (14 mm in diameter) was reduced by 30%, the voxel value (inferred attenuation coefficient) was reduced by 28%, and the SDNR fell by 60%. The authors have quantified the degree to which scatter degrades the image quality over a wide range of parameters relevant to breast tomosynthesis, including x-ray beam energy, breast thickness, breast diameter, and breast composition. They also demonstrate, though, that even without a scatter rejection device, the contrast and SDNR in the reconstructed tomosynthesis slice are higher than those of conventional mammographic projection images acquired with a grid at an equivalent total exposure.
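
    A rough sketch of the scatter model described above: the scatter field is approximated by convolving the primary projection with a scatter point spread function, and the lesion SDNR is computed from region-of-interest statistics. The PSF, projections, and ROIs are placeholders; the Monte Carlo generation of the PSF is not shown.

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def estimate_scatter(primary_projection, scatter_psf):
        """Approximate the scatter field by convolving the primary projection with
        a scatter point spread function (e.g. one derived from Monte Carlo runs)."""
        return fftconvolve(primary_projection, scatter_psf, mode="same")

    def sdnr(lesion_roi, background_roi):
        """Signal-difference-to-noise ratio of a mass against its background."""
        return abs(background_roi.mean() - lesion_roi.mean()) / background_roi.std()
    ```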

  5. TU-B-19A-01: Image Registration II: TG132-Quality Assurance for Image Registration

    SciTech Connect

    Brock, K; Mutic, S

    2014-06-15

    AAPM Task Group 132 was charged with reviewing the current approaches and solutions for image registration in radiotherapy and with providing recommendations for quality assurance and quality control of these clinical processes. As the results of image registration are always used as the input to another process for planning or delivery, it is important for the user to understand and document the uncertainty associated with the algorithm in general and with the result of a specific registration. The recommendations of this task group, which at the time of abstract submission are currently being reviewed by the AAPM, include the following components. The user should understand the basic image registration techniques and methods of visualizing image fusion. The disclosure of basic components of the image registration by commercial vendors is critical in this respect. The physicists should perform end-to-end tests of imaging, registration, and planning/treatment systems if image registration is performed on a stand-alone system. A comprehensive commissioning process should be performed and documented by the physicist prior to clinical use of the system. As documentation is important to the safe implementation of this process, a request and report system should be integrated into the clinical workflow. Finally, a patient-specific QA practice should be established for efficient evaluation of image registration results. The implementation of these recommendations will be described and illustrated during this educational session. Learning Objectives: Highlight the importance of understanding the image registration techniques used in their clinic. Describe the end-to-end tests needed for stand-alone registration systems. Illustrate a comprehensive commissioning program using both phantom data and clinical images. Describe a request and report system to ensure communication and documentation. Demonstrate a clinically efficient patient-specific QA practice for evaluation of image registration results.

  6. SU-E-J-36: Comparison of CBCT Image Quality for Manufacturer Default Imaging Modes

    SciTech Connect

    Nelson, G

    2015-06-15

    Purpose CBCT is being increasingly used in patient setup for radiotherapy. Often the manufacturer default scan modes are used for performing these CBCT scans with the assumption that they are the best options. To quantitatively assess the image quality of these scan modes, all of the scan modes were tested, as well as the reconstruction algorithm options. Methods A CatPhan 504 phantom was scanned on a TrueBeam Linear Accelerator using the manufacturer scan modes (FSRT Head, Head, Image Gently, Pelvis, Pelvis Obese, Spotlight, & Thorax). The Head mode scan was then reconstructed multiple times with all filter options (Smooth, Standard, Sharp, & Ultra Sharp) and all Ring Suppression options (Disabled, Weak, Medium, & Strong). An open source ImageJ tool was created for analyzing the CatPhan 504 images. Results The MTF curve was primarily dictated by the voxel size and the filter used in the reconstruction algorithm. The filters also impact the image noise. The CNR was worst for the Image Gently mode, followed by FSRT Head and Head. The sharper the filter, the worse the CNR. HU varied significantly between scan modes. Pelvis Obese had lower than expected HU values, while the Image Gently mode had higher than expected HU values. If a therapist tried to use preset window and level settings, they would not show the desired tissue for some scan modes. Conclusion Knowing the image quality of the default scan modes will enable users to better optimize their setup CBCT. Evaluation of the scan mode image quality could improve setup efficiency and lead to better treatment outcomes.

  7. Image Quality of the Helioseismic and Magnetic Imager (HMI) Onboard the Solar Dynamics Observatory (SDO)

    NASA Technical Reports Server (NTRS)

    Wachter, R.; Schou, Jesper; Rabello-Soares, M. C.; Miles, J. W.; Duvall, T. L., Jr.; Bush, R. I.

    2011-01-01

    We describe the imaging quality of the Helioseismic and Magnetic Imager (HMI) onboard the Solar Dynamics Observatory (SDO) as measured during the ground calibration of the instrument. We describe the calibration techniques and report our results for the final configuration of HMI. We present the distortion, modulation transfer function, stray light, image shifts introduced by moving parts of the instrument, best focus, field curvature, and the relative alignment of the two cameras. We investigate the gain and linearity of the cameras, and present the measured flat field.

  8. Restoration of images degraded by underwater turbulence using structure tensor oriented image quality (STOIQ) metric.

    PubMed

    Kanaev, A V; Hou, W; Restaino, S R; Matt, S; Gładysz, S

    2015-06-29

    Recent advances in image processing for atmospheric propagation have provided a foundation for tackling the similar but perhaps more complex problem of underwater imaging, which is impaired by scattering and optical turbulence. As a result of these impairments underwater imagery suffers from excessive noise, blur, and distortion. The impact of underwater turbulence on light propagation becomes critical at longer distances as well as near thermocline and mixing layers. In this work, we demonstrate a method for restoration of underwater images that are severely degraded by underwater turbulence. The key element of the approach is derivation of a structure tensor oriented image quality metric, which is subsequently incorporated into a lucky patch image processing framework. The utility of the proposed image quality measure guided by local edge strength and orientation is emphasized by comparing the restoration results to an unsuccessful restoration obtained with equivalent processing utilizing a standard isotropic metric. Advantages of the proposed approach versus three other state-of-the-art image restoration techniques are demonstrated using data obtained in a laboratory water tank and in a natural environment underwater experiment. Quantitative comparison of the restoration results is performed via the structural similarity index measure and the normalized mutual information metric.

  9. Restoration of images degraded by underwater turbulence using structure tensor oriented image quality (STOIQ) metric.

    PubMed

    Kanaev, A V; Hou, W; Restaino, S R; Matt, S; Gładysz, S

    2015-06-29

    Recent advances in image processing for atmospheric propagation have provided a foundation for tackling the similar but perhaps more complex problem of underwater imaging, which is impaired by scattering and optical turbulence. As a result of these impairments underwater imagery suffers from excessive noise, blur, and distortion. The impact of underwater turbulence on light propagation becomes critical at longer distances as well as near thermocline and mixing layers. In this work, we demonstrate a method for restoration of underwater images that are severely degraded by underwater turbulence. The key element of the approach is derivation of a structure tensor oriented image quality metric, which is subsequently incorporated into a lucky patch image processing framework. The utility of the proposed image quality measure guided by local edge strength and orientation is emphasized by comparing the restoration results to an unsuccessful restoration obtained with equivalent processing utilizing a standard isotropic metric. Advantages of the proposed approach versus three other state-of-the-art image restoration techniques are demonstrated using data obtained in a laboratory water tank and in a natural environment underwater experiment. Quantitative comparison of the restoration results is performed via the structural similarity index measure and the normalized mutual information metric. PMID:26191716

  10. Color image quality in projection displays: a case study

    NASA Astrophysics Data System (ADS)

    Strand, Monica; Hardeberg, Jon Y.; Nussbaum, Peter

    2005-01-01

    Recently the use of projection displays has increased dramatically in different applications such as digital cinema, home theatre, and business and educational presentations. Even if the color image quality of these devices has improved significantly over the years, it is still a common situation for users of projection displays that the projected colors differ significantly from the intended ones. The study presented in this paper attempts to analyze the color image quality of a large set of projection display devices, particularly investigating the variations in color reproduction. As a case study, a set of 14 projectors (LCD and DLP technology) at Gjovik University College have been tested under four different conditions: dark and light room, with and without using an ICC-profile. To find out more about the importance of the illumination conditions in a room, and the degree of improvement when using an ICC-profile, the results from the measurements were processed and analyzed. Eye-One Beamer from GretagMacbeth was used to make the profiles. The color image quality was evaluated both visually and by color difference calculations. The results from the analysis indicated large visual and colorimetric differences between the projectors. The DLP projectors generally have smaller color gamuts than the LCD projectors. The color gamuts of older projectors are significantly smaller than those of newer ones. The amount of ambient light reaching the screen is of great importance for the visual impression. If too much reflected and other ambient light reaches the screen, the projected image gets pale and has low contrast. When using a profile, the differences in colors between the projectors get smaller and the colors appear more correct. For one device, the average ΔE*ab color difference when compared to a relative white reference was reduced from 22 to 11, for another from 13 to 6. Blue colors have the largest variations among the projection displays and make them
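
    The ΔE*ab values quoted above are CIE 1976 colour differences, i.e. Euclidean distances in CIELAB. A minimal sketch; the Lab triplets below are invented for illustration, not measurements from the study:

    ```python
    import numpy as np

    def delta_e_ab(lab1, lab2):
        """CIE 1976 colour difference: Delta E*ab = Euclidean distance in CIELAB."""
        return float(np.linalg.norm(np.asarray(lab1, float) - np.asarray(lab2, float)))

    # e.g. a measured projector patch against the intended (reference) colour
    print(delta_e_ab([62.0, 4.5, -18.0], [70.0, 0.0, -10.0]))
    ```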

  11. Color image quality in projection displays: a case study

    NASA Astrophysics Data System (ADS)

    Strand, Monica; Hardeberg, Jon Y.; Nussbaum, Peter

    2004-10-01

    Recently the use of projection displays has increased dramatically in different applications such as digital cinema, home theatre, and business and educational presentations. Even if the color image quality of these devices has improved significantly over the years, it is still a common situation for users of projection displays that the projected colors differ significantly from the intended ones. The study presented in this paper attempts to analyze the color image quality of a large set of projection display devices, particularly investigating the variations in color reproduction. As a case study, a set of 14 projectors (LCD and DLP technology) at Gjøvik University College have been tested under four different conditions: dark and light room, with and without using an ICC-profile. To find out more about the importance of the illumination conditions in a room, and the degree of improvement when using an ICC-profile, the results from the measurements were processed and analyzed. Eye-One Beamer from GretagMacbeth was used to make the profiles. The color image quality was evaluated both visually and by color difference calculations. The results from the analysis indicated large visual and colorimetric differences between the projectors. The DLP projectors generally have smaller color gamuts than the LCD projectors. The color gamuts of older projectors are significantly smaller than those of newer ones. The amount of ambient light reaching the screen is of great importance for the visual impression. If too much reflected and other ambient light reaches the screen, the projected image gets pale and has low contrast. When using a profile, the differences in colors between the projectors get smaller and the colors appear more correct. For one device, the average ΔE*ab color difference when compared to a relative white reference was reduced from 22 to 11, for another from 13 to 6. Blue colors have the largest variations among the projection displays and make them

  12. Patient dose and image quality from mega-voltage cone beam computed tomography imaging.

    PubMed

    Gayou, Olivier; Parda, David S; Johnson, Mark; Miften, Moyed

    2007-02-01

    The evolution of ever more conformal radiation delivery techniques makes the subject of accurate localization of increasing importance in radiotherapy. Several systems can be utilized including kilo-voltage and mega-voltage cone-beam computed tomography (MV-CBCT), CT on rail or helical tomography. One of the attractive aspects of mega-voltage cone-beam CT is that it uses the therapy beam along with an electronic portal imaging device to image the patient prior to the delivery of treatment. However, the use of a photon beam energy in the mega-voltage range for volumetric imaging degrades the image quality and increases the patient radiation dose. To optimize image quality and patient dose in MV-CBCT imaging procedures, a series of dose measurements in cylindrical and anthropomorphic phantoms using an ionization chamber, radiographic films, and thermoluminescent dosimeters was performed. Furthermore, the dependence of the contrast to noise ratio and spatial resolution of the image upon the dose delivered for a 20-cm-diam cylindrical phantom was evaluated. Depending on the anatomical site and patient thickness, we found that the minimum dose deposited in the irradiated volume was 5-9 cGy and the maximum dose was between 9 and 17 cGy for our clinical MV-CBCT imaging protocols. Results also demonstrated that for high contrast areas such as bony anatomy, low doses are sufficient for image registration and visualization of the three-dimensional boundaries between soft tissue and bony structures. However, as the difference in tissue density decreased, the dose required to identify soft tissue boundaries increased. Finally, the dose delivered by MV-CBCT was simulated using a treatment planning system (TPS), thereby allowing the incorporation of MV-CBCT dose in the treatment planning process. The TPS-calculated doses agreed well with measurements for a wide range of imaging protocols.

  13. Using full-reference image quality metrics for automatic image sharpening

    NASA Astrophysics Data System (ADS)

    Krasula, Lukas; Fliegel, Karel; Le Callet, Patrick; Klíma, Miloš

    2014-05-01

    Image sharpening is a post-processing technique employed for the artificial enhancement of the perceived sharpness by shortening the transitions between luminance levels or increasing the contrast on the edges. The greatest challenge in this area is to determine the level of perceived sharpness which is optimal for human observers. This task is complex because the enhancement is beneficial only up to a certain threshold. After reaching it, the quality of the resulting image drops due to the presence of annoying artifacts. Despite the effort dedicated to automatic sharpness estimation, none of the existing metrics is designed for localization of this threshold. Nevertheless, it is a very important step towards automatic image sharpening. In this work, the possible usage of full-reference image quality metrics for finding the optimal amount of sharpening is proposed and investigated. The intentionally over-sharpened "anchor image" was included in the calculation as the "anti-reference", and the final metric score was computed from the differences between the reference, processed, and anchor versions of the scene. Quality scores obtained from the subjective experiment were used to determine the optimal combination of partial metric values. Five popular fidelity metrics - SSIM, MS-SSIM, IW-SSIM, VIF, and FSIM - were tested. The performance of the proposed approach was then verified in the subjective experiment.
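
    A hedged sketch of the idea of scoring sharpened candidates against both the reference and an intentionally over-sharpened anchor, here using scikit-image's SSIM. The unsharp-mask parameters, the test image, and the way the two similarities are reported are illustrative assumptions; the paper derives the actual combination of metric values from subjective scores.

    ```python
    import numpy as np
    from skimage import data, img_as_float
    from skimage.filters import unsharp_mask
    from skimage.metrics import structural_similarity as ssim

    ref = img_as_float(data.camera())                              # reference image
    anchor = np.clip(unsharp_mask(ref, radius=3, amount=8.0), 0, 1)  # over-sharpened "anti-reference"

    for amount in np.linspace(0.0, 4.0, 9):
        candidate = np.clip(unsharp_mask(ref, radius=3, amount=amount), 0, 1)
        to_ref = ssim(candidate, ref, data_range=1.0)
        to_anchor = ssim(candidate, anchor, data_range=1.0)
        # the two partial scores would then be combined using weights fitted to
        # subjective quality data (the weighting itself is not reproduced here)
        print(f"amount={amount:.1f}  SSIM(ref)={to_ref:.3f}  SSIM(anchor)={to_anchor:.3f}")
    ```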

  14. Image quality evaluation of breast tomosynthesis with synchrotron radiation

    SciTech Connect

    Malliori, A.; Bliznakova, K.; Speller, R. D.; Horrocks, J. A.; Rigon, L.; Tromba, G.; Pallikarakis, N.

    2012-09-15

    Purpose: This study investigates the image quality of tomosynthesis slices obtained from several acquisition sets with synchrotron radiation using a breast phantom incorporating details that mimic various breast lesions, in a heterogeneous background. Methods: A complex breast phantom (MAMMAX) with a heterogeneous background and a thickness that corresponds to a 4.5 cm compressed breast with an average composition of 50% adipose and 50% glandular tissue was assembled using two commercial phantoms. Projection images using acquisition arcs of 24°, 32°, 40°, 48°, and 56° at an incident energy of 17 keV were obtained from the phantom with the synchrotron radiation for medical physics beamline at the ELETTRA Synchrotron Light Laboratory. The total mean glandular dose was set equal to 2.5 mGy. Tomograms were reconstructed with a simple multiple projection algorithm (MPA) and filtered MPA. In the latter case, a median filter, a sinc filter, and a combination of those two filters were applied to the experimental data prior to MPA reconstruction. Visual inspection, contrast to noise ratio, contrast, and artifact spread function were the figures of merit used in the evaluation of the visualisation and detection of low- and high-contrast breast features, as a function of the reconstruction algorithm and acquisition arc. To study the benefits of using monochromatic beams, single projection images at incident energies ranging from 14 to 27 keV were acquired with the same phantom and weighted to synthesize polychromatic images at a typical incident x-ray spectrum with a W target. Results: Filters were optimised to reconstruct features with different attenuation characteristics and dimensions. In the case of 6 mm low-contrast details, improved visual appearance as well as higher contrast to noise ratio and contrast values were observed for the two filtered MPA algorithms that exploit the sinc filter. These features are better visualized

  15. A hyperspectral imaging prototype for online quality evaluation of pickling cucumbers

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A hyperspectral imaging prototype was developed for online evaluation of external and internal quality of pickling cucumbers. The prototype had several new, unique features including simultaneous reflectance and transmittance imaging and inline, real time calibration of hyperspectral images of each ...

  16. Dedicated dental volumetric and total body multislice computed tomography: a comparison of image quality and radiation dose

    NASA Astrophysics Data System (ADS)

    Strocchi, Sabina; Colli, Vittoria; Novario, Raffaele; Carrafiello, Gianpaolo; Giorgianni, Andrea; Macchi, Aldo; Fugazzola, Carlo; Conte, Leopoldo

    2007-03-01

    The aim of this work is to compare the performances of a Xoran Technologies i-CAT Cone Beam CT for dental applications with those of a standard total body multislice CT (Toshiba Aquilion 64 multislice) used for dental examinations. Image quality and doses to patients have been compared for the three main i-CAT protocols, the Toshiba standard protocol and a Toshiba modified protocol. Images of two phantoms have been acquired: a standard CT quality control phantom and an Alderson Rando® anthropomorphic phantom. Image noise, signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR) and geometric accuracy have been considered. Clinical image quality was assessed. Effective dose and doses to the main head and neck organs were evaluated by means of thermoluminescent dosimeters (TLD-100) placed in the anthropomorphic phantom. A Quality Index (QI), defined as the ratio of the squared CNR to the effective dose, has been evaluated. The evaluated effective doses range from 0.06 mSv (i-CAT 10 s protocol) to 2.37 mSv (Toshiba standard protocol). The Toshiba modified protocol (halved tube current, higher pitch value) imparts a lower effective dose (0.99 mSv). The conventional CT device provides lower image noise and better SNR, but clinical effectiveness similar to that of the dedicated dental CT (comparable CNR and clinical judgment). Consequently, QI values are much higher for this second CT scanner. No geometric distortion has been observed with either device. As a conclusion, dental volumetric CT supplies adequate image quality for clinical purposes, at doses that are substantially lower than those imparted by a conventional CT device.
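
    The Quality Index defined above is a one-line formula; the sketch below simply evaluates QI = CNR²/E, pairing two invented CNR values with the effective doses quoted in the abstract to show how strongly the dose term drives the index.

    ```python
    def quality_index(cnr, effective_dose_msv):
        """Quality Index as defined in the abstract: QI = CNR^2 / effective dose."""
        return cnr ** 2 / effective_dose_msv

    # illustrative CNR values only (not taken from the study)
    print(quality_index(cnr=8.0, effective_dose_msv=0.06))   # dental CBCT-like dose
    print(quality_index(cnr=20.0, effective_dose_msv=2.37))  # multislice-CT-like dose
    ```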

  17. Image quality of a cone beam O-arm 3D imaging system

    NASA Astrophysics Data System (ADS)

    Zhang, Jie; Weir, Victor; Lin, Jingying; Hsiung, Hsiang; Ritenour, E. Russell

    2009-02-01

    The O-arm is a cone beam imaging system designed primarily to support orthopedic surgery and is also used for image-guided and vascular surgery. Using a gantry that can be opened or closed, the O-arm can function as a 2-dimensional (2D) fluoroscopy device or collect 3-dimensional (3D) volumetric imaging data like a CT system. Clinical applications of the O-arm in spine surgical procedures, assessment of pedicle screw position, and kyphoplasty procedures show that the O-arm 3D mode provides enhanced imaging information compared to radiographs or fluoroscopy alone. In this study, the image quality of an O-arm system was quantitatively evaluated. A 20 cm diameter CATPHAN 424 phantom was scanned using the pre-programmed head protocols: small/medium (120 kVp, 100 mAs), large (120 kVp, 128 mAs), and extra-large (120 kVp, 160 mAs) in 3D mode. The high resolution reconstruction mode (512×512×0.83 mm) was used to reconstruct images for the analysis of low- and high-contrast resolution and the noise power spectrum. The MTF was measured using the point spread function. The results show that the O-arm image is uniform but exhibits a noise pattern that cannot be removed by simply increasing the mAs. The high-contrast resolution of the O-arm system was approximately 9 lp/cm. The system has a 10% MTF at 0.45 mm. The low-contrast resolution could not be determined because of the noise pattern. For surgery, where the location of a structure matters more than a survey of all image details, the image quality of the O-arm is well accepted clinically.
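
    The MTF measurement mentioned above can be illustrated by Fourier-transforming a line profile through the measured point spread function. A minimal sketch, with the sampling distance and the 10% threshold as the only inputs; the actual measurement geometry of the O-arm study is not reproduced.

    ```python
    import numpy as np

    def mtf_from_lsf(lsf, pixel_mm):
        """MTF as the normalised magnitude of the Fourier transform of a line
        spread function (a profile through the measured point spread function)."""
        lsf = np.asarray(lsf, float)
        lsf = lsf - lsf.min()
        lsf = lsf / lsf.sum()
        mtf = np.abs(np.fft.rfft(lsf))
        mtf /= mtf[0]
        freqs = np.fft.rfftfreq(lsf.size, d=pixel_mm)   # cycles/mm
        return freqs, mtf

    def frequency_at(freqs, mtf, level=0.10):
        """First spatial frequency at which the MTF falls to the given level."""
        below = np.nonzero(mtf <= level)[0]
        return freqs[below[0]] if below.size else freqs[-1]
    ```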

  18. Assessment of image quality in x-ray radiography imaging using a small plasma focus device

    NASA Astrophysics Data System (ADS)

    Kanani, A.; Shirani, B.; Jabbari, I.; Mokhtari, J.

    2014-08-01

    This paper offers a comprehensive investigation of image quality parameters for a small plasma focus used as a pulsed hard x-ray source for radiography applications. A set of images was captured from some metal objects and electronic circuits using a low energy plasma focus at different voltages of the capacitor bank and different pressures of argon gas. The x-ray source focal spot of this device was measured to be about 0.6 mm using the penumbra imaging method. The image quality was studied through several parameters such as image contrast, line spread function (LSF) and modulation transfer function (MTF). Results showed that the contrast changes with variations in gas pressure. The best contrast was obtained at a pressure of 0.5 mbar and 3.75 kJ stored energy. The x-ray dose results from the device showed that about 0.6 mGy is sufficient to obtain acceptable images on the film. The measurements of the LSF and MTF parameters were carried out by means of a thin stainless steel wire 0.8 mm in diameter, and the cut-off frequency was found to be about 1.5 cycles/mm.

  19. Image quality in CT: From physical measurements to model observers.

    PubMed

    Verdun, F R; Racine, D; Ott, J G; Tapiovaara, M J; Toroi, P; Bochud, F O; Veldkamp, W J H; Schegerer, A; Bouwman, R W; Giron, I Hernandez; Marshall, N W; Edyvean, S

    2015-12-01

    Evaluation of image quality (IQ) in Computed Tomography (CT) is important to ensure that diagnostic questions are correctly answered, whilst keeping radiation dose to the patient as low as is reasonably possible. The assessment of individual aspects of IQ is already a key component of routine quality control of medical x-ray devices. These values together with standard dose indicators can be used to give rise to 'figures of merit' (FOM) to characterise the dose efficiency of the CT scanners operating in certain modes. The demand for clinically relevant IQ characterisation has naturally increased with the development of CT technology (detector efficiency, image reconstruction and processing), resulting in the adaptation and evolution of assessment methods. The purpose of this review is to present the spectrum of various methods that have been used to characterise image quality in CT: from objective measurements of physical parameters to clinically task-based approaches (i.e. the model observer (MO) approach), including the pure human observer approach. When combined with a dose indicator, a generalised dose efficiency index can be explored in a framework of system and patient dose optimisation. We will focus on the IQ methodologies that are required for dealing with standard reconstruction, but also with iterative reconstruction algorithms. With this concept the previously used FOM will be presented with a proposal to update them in order to make them relevant and up to date with technological progress. The MO that objectively assesses IQ for clinically relevant tasks represents the most promising method in terms of radiologist sensitivity performance and is therefore of most relevance in the clinical environment.

  20. Image quality in CT: From physical measurements to model observers.

    PubMed

    Verdun, F R; Racine, D; Ott, J G; Tapiovaara, M J; Toroi, P; Bochud, F O; Veldkamp, W J H; Schegerer, A; Bouwman, R W; Giron, I Hernandez; Marshall, N W; Edyvean, S

    2015-12-01

    Evaluation of image quality (IQ) in Computed Tomography (CT) is important to ensure that diagnostic questions are correctly answered, whilst keeping radiation dose to the patient as low as is reasonably possible. The assessment of individual aspects of IQ is already a key component of routine quality control of medical x-ray devices. These values together with standard dose indicators can be used to give rise to 'figures of merit' (FOM) to characterise the dose efficiency of the CT scanners operating in certain modes. The demand for clinically relevant IQ characterisation has naturally increased with the development of CT technology (detector efficiency, image reconstruction and processing), resulting in the adaptation and evolution of assessment methods. The purpose of this review is to present the spectrum of various methods that have been used to characterise image quality in CT: from objective measurements of physical parameters to clinically task-based approaches (i.e. the model observer (MO) approach), including the pure human observer approach. When combined with a dose indicator, a generalised dose efficiency index can be explored in a framework of system and patient dose optimisation. We will focus on the IQ methodologies that are required for dealing with standard reconstruction, but also with iterative reconstruction algorithms. With this concept the previously used FOM will be presented with a proposal to update them in order to make them relevant and up to date with technological progress. The MO that objectively assesses IQ for clinically relevant tasks represents the most promising method in terms of radiologist sensitivity performance and is therefore of most relevance in the clinical environment. PMID:26459319

  1. Spectral CT imaging in patients with Budd-Chiari syndrome: investigation of image quality.

    PubMed

    Su, Lei; Dong, Junqiang; Sun, Qiang; Liu, Jie; Lv, Peijie; Hu, Lili; Yan, Liangliang; Gao, Jianbo

    2014-11-01

    To assess the image quality of monochromatic imaging from spectral CT in patients with Budd-Chiari syndrome (BCS), fifty patients with BCS underwent spectral CT to generate conventional 140 kVp polychromatic images (group A) and monochromatic images at energy levels from 40 to 80 keV, plus 40 + 70 and 50 + 70 keV fusion images (group B), during the portal venous phase (PVP) and the hepatic venous phase (HVP). Two-sample t tests compared the vessel-to-liver contrast-to-noise ratio (CNR) and signal-to-noise ratio (SNR) for the portal vein (PV), hepatic vein (HV), and inferior vena cava. Readers' subjective evaluations of the image quality were recorded. The highest SNR values in group B occurred at 50 keV; the highest CNR values in group B occurred at 40 keV. Higher CNR and SNR values were obtained in group B in the PVP for the PV (SNR 18.39 ± 6.13 vs. 10.56 ± 3.31, CNR 7.81 ± 3.40 vs. 3.58 ± 1.31) and in the HVP for the HV (3.89 ± 2.08 vs. 1.27 ± 1.55); the lower image noise for group B was at 70 keV and 50 + 70 keV (15.54 ± 8.39 vs. 18.40 ± 4.97, P = 0.0004 and 18.97 ± 7.61 vs. 18.40 ± 4.97, P = 0.0691); the results show that the 50 + 70 keV fusion image quality was better than that of group A. Monochromatic energy levels of 40-70 keV and the 40 + 70 and 50 + 70 keV fusion images can increase vascular contrast, which will be helpful for the diagnosis of BCS; we selected the 50 + 70 keV fusion image as giving the best BCS images.

  2. Automated techniques for quality assurance of radiological image modalities

    NASA Astrophysics Data System (ADS)

    Goodenough, David J.; Atkins, Frank B.; Dyer, Stephen M.

    1991-05-01

    This paper will attempt to identify many of the important issues for quality assurance (QA) of radiological modalities. It is of course to be realized that QA can span many aspects of the diagnostic decision making process. These issues range from physical image performance levels through to the diagnostic decision of the radiologist. We will use as a model for automated approaches a program we have developed to work with computed tomography (CT) images. In an attempt to unburden the user, and in an effort to facilitate the performance of QA, we have been studying automated approaches. The ultimate utility of the system is its ability to render, in a safe and efficacious manner, decisions that are accurate, sensitive, specific, and possible within the economic constraints of modern health care delivery.

  3. SPOT4 HRVIR first in-flight image quality results

    NASA Astrophysics Data System (ADS)

    Kubik, Philippe; Breton, Eric; Meygret, Aime; Cabrieres, Bernard; Hazane, Philippe; Leger, Dominique

    1998-12-01

    The SPOT4 remote sensing satellite was successfully launched at the end of March 1998. It was designed first of all to guarantee continuity of SPOT services beyond the year 2000, but also to improve the mission. Its two cameras are now called HRVIR since a short-wave infrared (SWIR) spectral band has been added. Like their predecessor HRV cameras, they provide 20-meter multispectral and 10-meter monospectral images with a 60 km swath for nadir viewing. SPOT4's first two months of life in orbit were dedicated to the evaluation of its image quality performance. During this period of time, the CNES team used specific target programming in order to compute image correction parameters and estimate the performance, at system level, of the image processing chain. After a description of the SPOT4 system requirements and the new features of the HRVIR cameras, this paper focuses on the performance deduced from in-flight measurements, the methods used, and their accuracy: MTF measurements, refocusing, absolute calibration, signal-to-noise ratio, location, focal plane cartography, and dynamic disturbances.

  4. New strategy for image and video quality assessment

    NASA Astrophysics Data System (ADS)

    Ma, Qi; Zhang, Liming; Wang, Bin

    2010-01-01

    Image and video quality assessment (QA) is a critical issue in image and video processing applications. General full-reference (FR) QA criteria such as peak signal-to-noise ratio (PSNR) and mean squared error (MSE) do not accord well with human subjective assessment. Some QA indices that consider human visual sensitivity, such as mean structural similarity (MSSIM) with structural sensitivity, visual information fidelity (VIF) with statistical sensitivity, etc., were proposed in view of the differences between reference and distorted frames at a pixel or local level. However, they ignore the role of human visual attention (HVA). Recently, some new strategies with HVA have been proposed, but the methods used to extract visual attention are too complex for real-time realization. We take advantage of the phase spectrum of the quaternion Fourier transform (PQFT), a very fast algorithm we previously proposed, to extract saliency maps of color images or videos. Then we propose saliency-based methods for both image QA (IQA) and video QA (VQA) by adding weights related to saliency features to these original IQA or VQA criteria. Experimental results show that our saliency-based strategy can approach more closely to human subjective assessment compared with these original IQA or VQA methods and does not take more time because of the fast PQFT algorithm.
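
    A simplified illustration of the strategy described above: a phase-spectrum saliency map (a grey-scale, single-channel stand-in for the quaternion PQFT) is used to weight the squared error in a PSNR-style score. The smoothing sigma and the weighting scheme are assumptions made for illustration.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def phase_spectrum_saliency(gray):
        """Single-channel phase-spectrum saliency (a simplified, grey-scale cousin
        of the quaternion PQFT used by the authors)."""
        spectrum = np.fft.fft2(gray)
        phase_only = np.fft.ifft2(np.exp(1j * np.angle(spectrum)))
        return gaussian_filter(np.abs(phase_only) ** 2, sigma=3)

    def saliency_weighted_psnr(reference, distorted, saliency, peak=255.0):
        """PSNR in which each pixel's squared error is weighted by the saliency map."""
        w = saliency / saliency.sum()
        wmse = np.sum(w * (reference.astype(float) - distorted.astype(float)) ** 2)
        return 10.0 * np.log10(peak ** 2 / wmse)
    ```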

  5. An automated system for numerically rating document image quality

    SciTech Connect

    Cannon, M.; Kelly, P.; Iyengar, S.S.; Brener, N.

    1997-04-01

    As part of the Department of Energy document declassification program, the authors have developed a numerical rating system to predict the OCR error rate that they expect to encounter when processing a particular document. The rating algorithm produces a vector containing scores for different document image attributes such as speckle and touching characters. The OCR error rate for a document is computed from a weighted sum of the elements of the corresponding quality vector. The predicted OCR error rate will be used to screen documents that would not be handled properly with existing document processing products.
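
    The prediction step described above is a weighted sum over the document's quality vector. The attribute names, weights, and scores below are entirely hypothetical, chosen only to show the arithmetic:

    ```python
    import numpy as np

    # Hypothetical attribute names, weights and scores -- purely illustrative.
    attributes = ["speckle", "touching_characters", "broken_characters", "skew"]
    weights = np.array([0.8, 1.5, 1.2, 0.3])             # regression weights (made up)
    quality_vector = np.array([0.12, 0.05, 0.08, 0.01])  # scores for one page image

    predicted_error_rate = float(weights @ quality_vector)
    print(f"predicted OCR error rate: {predicted_error_rate:.3f}")
    ```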

  6. Virtual monochromatic imaging in dual-source dual-energy CT: Radiation dose and image quality

    SciTech Connect

    Yu Lifeng; Christner, Jodie A.; Leng Shuai; Wang Jia; Fletcher, Joel G.; McCollough, Cynthia H.

    2011-12-15

    Purpose: To evaluate the image quality of virtual monochromatic images synthesized from dual-source dual-energy computed tomography (CT) in comparison with conventional polychromatic single-energy CT for the same radiation dose. Methods: In dual-energy CT, besides the material-specific information, one may also synthesize monochromatic images at different energies, which can be used for routine diagnosis similar to conventional polychromatic single-energy images. In this work, the authors assessed whether virtual monochromatic images generated from dual-source CT scanners had an image quality similar to that of polychromatic single-energy images for the same radiation dose. First, the authors provided a theoretical analysis of the optimal monochromatic energy for either the minimum noise level or the highest iodine contrast to noise ratio (CNR) for a given patient size and dose partitioning between the low- and high-energy scans. Second, the authors performed an experimental study on a dual-source CT scanner to evaluate the noise and iodine CNR in monochromatic images. A thoracic phantom with three sizes of attenuating rings was used to represent four adult sizes. For each phantom size, three dose partitionings between the low-energy (80 kV) and the high-energy (140 kV) scans were used in the dual-energy scan. Monochromatic images at eight energies (40 to 110 keV) were generated for each scan. Phantoms were also scanned at each of the four polychromatic single energy (80, 100, 120, and 140 kV) with the same radiation dose. Results: The optimal virtual monochromatic energy depends on several factors: phantom size, partitioning of the radiation dose between low- and high-energy scans, and the image quality metrics to be optimized. With the increase of phantom size, the optimal monochromatic energy increased. With the increased percentage of radiation dose on the low energy scan, the optimal monochromatic energy decreased. When maximizing the iodine CNR in

  7. Image quality degradation and retrieval errors introduced by registration and interpolation of multispectral digital images

    SciTech Connect

    Henderson, B.G.; Borel, C.C.; Theiler, J.P.; Smith, B.W.

    1996-04-01

    Full utilization of multispectral data acquired by whiskbroom and pushbroom imagers requires that the individual channels be registered accurately. Poor registration introduces errors which can be significant, especially in high contrast areas such as boundaries between regions. We simulate the acquisition of multispectral imagery in order to estimate the errors that are introduced by co-registration of different channels and interpolation within the images. We compute the Modulation Transfer Function (MTF) and image quality degradation brought about by fractional pixel shifting and calculate errors in retrieved quantities (surface temperature and water vapor) that occur as a result of interpolation. We also present a method which might be used to estimate sensor platform motion for accurate registration of images acquired by a pushbroom scanner.

  8. Influence of slice overlap on positron emission tomography image quality

    NASA Astrophysics Data System (ADS)

    McKeown, Clare; Gillen, Gerry; Dempsey, Mary Frances; Findlay, Caroline

    2016-02-01

    PET scans use overlapping acquisition beds to correct for reduced sensitivity at bed edges. The optimum overlap size for the General Electric (GE) Discovery 690 has not been established. This study assesses how image quality is affected by slice overlap. Efficacy of 23% overlaps (recommended by GE) and 49% overlaps (maximum possible overlap) were specifically assessed. European Association of Nuclear Medicine (EANM) guidelines for calculating minimum injected activities based on overlap size were also reviewed. A uniform flood phantom was used to assess noise (coefficient of variation, (COV)) and voxel accuracy (activity concentrations, Bq ml-1). A NEMA (National Electrical Manufacturers Association) body phantom with hot/cold spheres in a background activity was used to assess contrast recovery coefficients (CRCs) and signal to noise ratios (SNR). Different overlap sizes and sphere-to-background ratios were assessed. COVs for 49% and 23% overlaps were 9% and 13% respectively. This increased noise was difficult to visualise on the 23% overlap images. Mean voxel activity concentrations were not affected by overlap size. No clinically significant differences in CRCs were observed. However, visibility and SNR of small, low contrast spheres (⩽13 mm diameter, 2:1 sphere to background ratio) may be affected by overlap size in low count studies if they are located in the overlap area. There was minimal detectable influence on image quality in terms of noise, mean activity concentrations or mean CRCs when comparing 23% overlap with 49% overlap. Detectability of small, low contrast lesions may be affected in low count studies—however, this is a worst-case scenario. The marginal benefits of increasing overlap from 23% to 49% are likely to be offset by increased patient scan times. A 23% overlap is therefore appropriate for clinical use. An amendment to EANM guidelines for calculating injected activities is also proposed which better reflects the effect overlap size has
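
    Two of the figures of merit used above have simple closed forms: the coefficient of variation for the flood phantom and the hot-sphere contrast recovery coefficient for the NEMA phantom. A minimal sketch; the ROI arrays and activity ratios are placeholders:

    ```python
    import numpy as np

    def coefficient_of_variation(uniform_roi):
        """Noise metric for the flood phantom: COV = standard deviation / mean."""
        return uniform_roi.std() / uniform_roi.mean()

    def contrast_recovery_coefficient(sphere_mean, background_mean, true_ratio):
        """Hot-sphere CRC: (measured ratio - 1) / (true ratio - 1),
        e.g. true_ratio = 2.0 for a 2:1 sphere-to-background activity ratio."""
        return (sphere_mean / background_mean - 1.0) / (true_ratio - 1.0)
    ```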

  9. Retinal Image Quality Assessment for Spaceflight-Induced Vision Impairment Study

    NASA Technical Reports Server (NTRS)

    Vu, Amanda Cadao; Raghunandan, Sneha; Vyas, Ruchi; Radhakrishnan, Krishnan; Taibbi, Giovanni; Vizzeri, Gianmarco; Grant, Maria; Chalam, Kakarla; Parsons-Wingerter, Patricia

    2015-01-01

    Long-term exposure to space microgravity poses significant risks for visual impairment. Evidence suggests such vision changes are linked to cephalad fluid shifts, prompting a need to directly quantify microgravity-induced retinal vascular changes. The quality of retinal images used for such vascular remodeling analysis, however, is dependent on imaging methodology. For our exploratory study, we hypothesized that retinal images captured using fluorescein imaging methodologies would be of higher quality in comparison to images captured without fluorescein. A semi-automated image quality assessment was developed using Vessel Generation Analysis (VESGEN) software and MATLAB® image analysis toolboxes. An analysis of ten images found that the fluorescein imaging modality provided a 36% increase in overall image quality (two-tailed p=0.089) in comparison to nonfluorescein imaging techniques.

  10. Digital processing to improve image quality in real-time neutron radiography

    NASA Astrophysics Data System (ADS)

    Fujine, Shigenori; Yoneda, Kenji; Kanda, Keiji

    1985-01-01

    Real-time neutron radiography (NTV) has been used for practical applications at the Kyoto University Reactor (KUR). At present, however, the direct image from the TV system is still poor in resolution and low in contrast. In this paper several image improvements, such as a frame-summing technique, are demonstrated that are effective in increasing image quality in neutron radiography. Image integration before the A/D converter has a beneficial effect on image quality, and the high quality image reveals details invisible in direct images, such as: small holes by a reversed image, defects in a neutron converter screen through a high quality image, a moving object in a contoured image, a slight difference between two low-contrast images by a subtraction technique, and so on. For the real-time application a contouring operation and an averaging approach can also be utilized effectively.
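
    The frame-summing and subtraction techniques mentioned above amount to very small operations on image stacks; for N statistically independent frames, averaging reduces random noise roughly as 1/sqrt(N). A minimal sketch with placeholder inputs:

    ```python
    import numpy as np

    def average_frames(frames):
        """Frame summing/averaging: the random noise falls roughly as 1/sqrt(N),
        so the SNR improves by about sqrt(N) for N independent frames."""
        stack = np.stack(frames, axis=0).astype(float)
        return stack.mean(axis=0)

    def subtract(image_a, image_b):
        """Subtraction used to reveal slight differences between two low-contrast images."""
        return image_a.astype(float) - image_b.astype(float)
    ```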

  11. A widefield fluorescence microscope with a linear image sensor for image cytometry of biospecimens: Considerations for image quality optimization

    NASA Astrophysics Data System (ADS)

    Hutcheson, Joshua A.; Majid, Aneeka A.; Powless, Amy J.; Muldoon, Timothy J.

    2015-09-01

    Linear image sensors have been widely used in numerous research and industry applications to provide continuous imaging of moving objects. Here, we present a widefield fluorescence microscope with a linear image sensor used to image translating objects for image cytometry. First, a calibration curve was characterized for a custom microfluidic chamber over a span of volumetric pump rates. Image data were also acquired using 15 μm fluorescent polystyrene spheres on a slide with a motorized translation stage in order to match linear translation speed with line exposure periods to preserve the image aspect ratio. Aspect ratios were then calculated after imaging to ensure quality control of image data. Fluorescent beads were imaged in suspension flowing through the microfluidic chamber while being pumped by a mechanical syringe pump at 16 μl min-1 with a line exposure period of 150 μs. The line period was selected to acquire images of fluorescent beads with a 40 dB signal-to-background ratio. A motorized translation stage was then used to transport conventional glass slides of stained cellular biospecimens. Whole blood collected from healthy volunteers was stained with 0.02% (w/v) proflavine hemisulfate and imaged to highlight leukocyte morphology with a 1.56 mm × 1.28 mm field of view (1540 ms total acquisition time). Oral squamous cells were also collected from healthy volunteers and stained with 0.01% (w/v) proflavine hemisulfate to demonstrate quantifiable subcellular features and an average nuclear-to-cytoplasmic ratio of 0.03 (n = 75), with a resolution of 0.31 μm per pixel.
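
    The aspect-ratio condition mentioned above amounts to advancing the object by one object-plane pixel per line readout. The numeric sketch below illustrates the relation; the pixel size and line period are taken from the abstract, while the measured stage speed is a hypothetical value.

        object_pixel_um = 0.31      # object-plane sampling quoted in the abstract (um per pixel)
        line_period_s = 150e-6      # line exposure period (s)

        # stage speed needed for a 1:1 aspect ratio (square pixels)
        required_speed_um_s = object_pixel_um / line_period_s
        print(f"required translation speed ~ {required_speed_um_s:.0f} um/s")

        # aspect-ratio error for a hypothetical measured speed
        measured_speed_um_s = 2000.0
        aspect_ratio = measured_speed_um_s * line_period_s / object_pixel_um
        print(f"aspect ratio ~ {aspect_ratio:.2f} (1.00 = preserved aspect ratio)")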

  12. A widefield fluorescence microscope with a linear image sensor for image cytometry of biospecimens: Considerations for image quality optimization

    SciTech Connect

    Hutcheson, Joshua A.; Majid, Aneeka A.; Powless, Amy J.; Muldoon, Timothy J.

    2015-09-15

    Linear image sensors have been widely used in numerous research and industry applications to provide continuous imaging of moving objects. Here, we present a widefield fluorescence microscope with a linear image sensor used to image translating objects for image cytometry. First, a calibration curve was characterized for a custom microfluidic chamber over a span of volumetric pump rates. Image data were also acquired using 15 μm fluorescent polystyrene spheres on a slide with a motorized translation stage in order to match linear translation speed with line exposure periods to preserve the image aspect ratio. Aspect ratios were then calculated after imaging to ensure quality control of image data. Fluorescent beads were imaged in suspension flowing through the microfluidic chamber while being pumped by a mechanical syringe pump at 16 μl min-1 with a line exposure period of 150 μs. The line period was selected to acquire images of fluorescent beads with a 40 dB signal-to-background ratio. A motorized translation stage was then used to transport conventional glass slides of stained cellular biospecimens. Whole blood collected from healthy volunteers was stained with 0.02% (w/v) proflavine hemisulfate and imaged to highlight leukocyte morphology with a 1.56 mm × 1.28 mm field of view (1540 ms total acquisition time). Oral squamous cells were also collected from healthy volunteers and stained with 0.01% (w/v) proflavine hemisulfate to demonstrate quantifiable subcellular features and an average nuclear-to-cytoplasmic ratio of 0.03 (n = 75), with a resolution of 0.31 μm per pixel.

  13. Quality Enhancement and Nerve Fibre Layer Artefacts Removal in Retina Fundus Images by Off Axis Imaging

    SciTech Connect

    Giancardo, Luca; Meriaudeau, Fabrice; Karnowski, Thomas Paul; Li, Yaquin; Tobin Jr, Kenneth William; Chaum, Edward

    2011-01-01

    Retinal fundus images acquired with non-mydriatic digital fundus cameras are a versatile tool for the diagnosis of various retinal diseases. Because of the ease of use of newer camera models and their relatively low cost, these cameras are employed worldwide by retina specialists to diagnose diabetic retinopathy and other degenerative diseases. Even with relative ease of use, the images produced by these systems sometimes suffer from reflectance artefacts mainly due to the nerve fibre layer (NFL) or other camera-lens-related reflections. We propose a technique that employs multiple fundus images acquired from the same patient to obtain a single higher-quality image without these reflectance artefacts. The removal of bright artefacts, and particularly of NFL reflectance, can have great benefits for the reduction of false positives in the detection of retinal lesions such as exudates, drusen and cotton wool spots by automatic systems or manual inspection. If enough redundant information is provided by the multiple images, this technique also compensates for suboptimal illumination. The fundus images are acquired in a straightforward but unorthodox manner, i.e. the stare point of the patient is changed between each shot but the camera is kept fixed. Between each shot, the apparent shape and position of all the retinal structures that do not exhibit isotropic reflectance (e.g. bright artefacts) change. This physical effect is exploited by our algorithm in order to extract the pixels belonging to the inner layers of the retina, hence obtaining a single artefact-free image.

  14. Open source database of images DEIMOS: extension for large-scale subjective image quality assessment

    NASA Astrophysics Data System (ADS)

    Vítek, Stanislav

    2014-09-01

    DEIMOS (Database of Images: Open Source) is an open-source database of images and video sequences for testing, verification and comparison of various image and/or video processing techniques such as compression, reconstruction and enhancement. This paper deals with an extension of the database that allows large-scale web-based subjective image quality assessment to be performed. The extension implements both an administrative and a client interface. The proposed system is aimed mainly at mobile communication devices and takes advantage of HTML5 technology; participants therefore do not need to install any application, and the assessment can be performed using a web browser. The assessment campaign administrator can select images from the large database and then apply rules defined by various test procedure recommendations. The standard test procedures may be fully customized and saved as a template. Alternatively, the administrator can define a custom test using images from the pool and other components, such as evaluation forms and ongoing questionnaires. The image sequence is delivered to the online client, e.g. a smartphone or tablet, as a fully automated assessment sequence, or the viewer can decide on the timing of the assessment if required. Environmental data and viewing conditions (e.g. illumination, vibrations, GPS coordinates, etc.) may be collected and subsequently analyzed.

  15. Human vision model for the objective evaluation of perceived image quality applied to MRI and image restoration

    NASA Astrophysics Data System (ADS)

    Salem, Kyle A.; Wilson, David L.

    2002-12-01

    We are developing a method to objectively quantify image quality and applying it to the optimization of interventional magnetic resonance imaging (iMRI). In iMRI, images are used for live-time guidance of interventional procedures such as the minimally invasive treatment of cancer. Hence, not only does one desire high quality images, but they must also be acquired quickly. In iMRI, images are acquired in the Fourier domain, or k-space, and this allows many creative ways to image quickly such as keyhole imaging where k-space is preferentially subsampled, yielding suboptimal images at very high frame rates. Other techniques include spiral, radial, and the combined acquisition technique. We have built a perceptual difference model (PDM) that incorporates various components of the human visual system. The PDM was validated using subjective image quality ratings by naive observers and task-based measures defined by interventional radiologists. Using the PDM, we investigated the effects of various imaging parameters on image quality and quantified the degradation due to novel imaging techniques. Results have provided significant information about imaging time versus quality tradeoffs aiding the MR sequence engineer. The PDM has also been used to evaluate other applications such as Dixon fat suppressed MRI and image restoration. In image restoration, the PDM has been used to evaluate the Generalized Minimal Residual (GMRES) image restoration method and to examine the ability to appropriately determine a stopping condition for such iterative methods. The PDM has been shown to be an objective tool for measuring image quality and can be used to determine the optimal methodology for various imaging applications.

  16. No-reference remote sensing image quality assessment using a comprehensive evaluation factor

    NASA Astrophysics Data System (ADS)

    Wang, Lin; Wang, Xu; Li, Xiao; Shao, Xiaopeng

    2014-05-01

    Conventional image quality assessment algorithms, such as Peak Signal to Noise Ratio (PSNR), Mean Square Error (MSE) and structural similarity (SSIM), need the original image as a reference. They are not applicable to remote sensing images, for which the original image cannot be assumed to be available. In this paper, a No-reference Image Quality Assessment (NRIQA) algorithm is presented to evaluate the quality of remote sensing images. Since blur and noise (including stripe noise) are the common distortion factors affecting remote sensing image quality, a comprehensive evaluation factor is modeled to assess blur and noise by analyzing the image visual properties for different incentives combined with SSIM based on the human visual system (HVS), and also to assess the stripe noise by using Phase Congruency (PC). The experimental results show that this algorithm is an accurate and reliable method for remote sensing image quality assessment.
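
    For context, the full-reference PSNR mentioned above can be computed for an 8-bit image as in the minimal sketch below; this is not part of the proposed no-reference method.

        import numpy as np

        def psnr(reference, test, peak=255.0):
            """Peak signal-to-noise ratio in dB between a reference and a test image."""
            mse = np.mean((np.asarray(reference, float) - np.asarray(test, float)) ** 2)
            return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)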

  17. Optimization of exposure in panoramic radiography while maintaining image quality using adaptive filtering.

    PubMed

    Svenson, Björn; Larsson, Lars; Båth, Magnus

    2016-01-01

    Objective: The purpose of the present study was to investigate the potential of using advanced external adaptive image processing for maintaining image quality while reducing exposure in dental panoramic storage phosphor plate (SPP) radiography. Materials and methods: Thirty-seven SPP radiographs of a skull phantom were acquired using a Scanora panoramic X-ray machine with various tube load, tube voltage, SPP sensitivity and filtration settings. The radiographs were processed using General Operator Processor (GOP) technology. Fifteen dentists, all within the dental radiology field, compared the structural image quality of each radiograph with a reference image on a 5-point rating scale in a visual grading characteristics (VGC) study. The reference image was acquired with the acquisition parameters commonly used in daily operation (70 kVp, 150 mAs and sensitivity class 200) and processed using the standard process parameters supplied by the modality vendor. Results: All GOP-processed images with similar (or higher) dose as the reference image resulted in higher image quality than the reference. All GOP-processed images with similar image quality as the reference image were acquired at a lower dose than the reference. This indicates that the external image processing improved the image quality compared with the standard processing. Regarding acquisition parameters, no strong dependency of the image quality on the radiation quality was seen and the image quality was mainly affected by the dose. Conclusions: The present study indicates that advanced external adaptive image processing may be beneficial in panoramic radiography for increasing the image quality of SPP radiographs or for reducing the exposure while maintaining image quality. PMID:26478956

  18. Comparison of no-reference image quality assessment machine learning-based algorithms on compressed images

    NASA Astrophysics Data System (ADS)

    Charrier, Christophe; Saadane, AbdelHakim; Fernandez-Maloigne, Christine

    2015-01-01

    No-reference image quality metrics are of fundamental interest as they can be embedded in practical applications. The main goal of this paper is to perform a comparative study of seven well-known no-reference learning-based image quality algorithms. To test the performance of these algorithms, three public databases are used. As a first step, the trial algorithms are compared when no new learning is performed. The second step investigates how the training set influences the results. The Spearman Rank Ordered Correlation Coefficient (SROCC) is utilized to measure and compare the performance. In addition, a hypothesis test is conducted to evaluate the statistical significance of the performance of each tested algorithm.
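
    The SROCC used to compare metric outputs with subjective scores can be computed directly; the following short sketch uses invented example values for illustration.

        import numpy as np
        from scipy.stats import spearmanr

        predicted_scores = np.array([3.1, 2.4, 4.0, 1.8, 3.6])   # hypothetical metric outputs
        subjective_mos   = np.array([3.0, 2.7, 4.2, 1.5, 3.9])   # hypothetical mean opinion scores

        srocc, p_value = spearmanr(predicted_scores, subjective_mos)
        print(f"SROCC = {srocc:.3f}, p = {p_value:.3f}")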

  19. Evaluation of scatter effects on image quality for breast tomosynthesis

    NASA Astrophysics Data System (ADS)

    Wu, Gang; Mainprize, James G.; Boone, John M.; Yaffe, Martin J.

    2007-03-01

    Digital breast tomosynthesis uses a limited number of low-dose x-ray projections to produce a three-dimensional (3D) tomographic reconstruction of the breast. The purpose of this investigation was to characterize and evaluate the effect of scatter radiation on image quality for breast tomosynthesis. Generated by a Monte Carlo simulation method, scatter point spread functions (PSF) were convolved over the field of view (FOV) to estimate the distribution of scatter for each angle of tomosynthesis projection. The results demonstrated that in the absence of scatter reduction techniques, the scatter-to-primary ratio (SPR) levels for the average breast are quite high (~0.4 at the centre of mass), and increased with increased breast thickness and with larger FOV. Associated with such levels of x-ray scatter are cupping artifacts, as well as reduced accuracy in reconstruction values. The effect of x-ray scatter on the contrast, noise, and signal-difference-to-noise ratio (SDNR) in tomosynthesis reconstruction was measured as a function of tumour size. For example, the contrast in the reconstructed central slice of a tumour-like mass (14 mm in diameter) was degraded by 30% while the inaccuracy of the voxel value was 28%, and the reduction of SDNR was 60%. We have quantified the degree to which scatter degrades the image quality over a wide range of parameters, including x-ray beam energy, breast thickness, breast diameter, and breast composition. However, even without a scatter rejection device, the contrast and SDNR in the reconstructed tomosynthesis slice is higher than that of conventional mammographic projection images acquired with a grid at an equivalent total exposure.
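
    One common form of the SDNR figure used above, computed from a lesion ROI and a background ROI in a reconstructed slice, is sketched below; the exact ROI definitions used in the study may differ.

        import numpy as np

        def sdnr(recon_slice, lesion_roi, background_roi):
            """Signal-difference-to-noise ratio between a lesion ROI and a background ROI.
            ROIs are tuples of slices, e.g. (np.s_[60:80], np.s_[60:80])."""
            lesion = recon_slice[lesion_roi]
            background = recon_slice[background_roi]
            return (lesion.mean() - background.mean()) / background.std(ddof=1)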

  20. Digital Image Processing Applied To Quality Assurance In Mineral Industry

    NASA Astrophysics Data System (ADS)

    Hamrouni, Zouheir; Ayache, Alain; Krey, Charlie J.

    1989-03-01

    In this paper, we present an application of vision in the domain of quality assurance in the talc mineral industry. By using image processing and computer vision means, the proposed real-time whiteness sensor system is intended to inspect the whiteness of the ground product and to manage the mixing of primary talcs before grinding, in order to obtain a final product with a predetermined whiteness. The system uses the robotic CCD microcamera MICAM (designed by our laboratory and presently manufactured), a microcomputer system based on the Motorola 68020 and real-time image processing boards. It has the following industrial specifications: high reliability, and whiteness determined with 0.3% precision on a scale of 25 levels. Because of the expected precision, we had to study carefully the lighting system, the type of image sensor and the associated electronics. The first software developed is able to process the whiteness of talcum powder; we have then conceived original algorithms to control the whiteness of rough talc, taking into account texture and shadows. The processing times of these algorithms are fully compatible with industrial rates. This system can be applied to other domains where a high-precision reflectance sensor is needed: the paper industry, paints, ...

  1. Beyond image quality: designing engaging interactions with digital products

    NASA Astrophysics Data System (ADS)

    de Ridder, Huib; Rozendaal, Marco C.

    2008-02-01

    Ubiquitous computing (or Ambient Intelligence) promises a world in which information is available anytime, anywhere and with which humans can interact in a natural, multimodal way. In such a world, perceptual image quality remains an important criterion, since most information will be displayed visually, but other criteria such as enjoyment, fun, engagement and hedonic quality are emerging. This paper deals with engagement, the intrinsically enjoyable readiness to put more effort into exploring and/or using a product than strictly required, thus attracting and keeping the user's attention for a longer period of time. The impact of the experienced richness of an interface, both visually and in the degree of possible manipulation, was investigated in a series of experiments employing game-like user interfaces. This resulted in the extension of an existing conceptual framework relating engagement to richness by means of two intermediating variables, namely experienced challenge and sense of control. Predictions from this revised framework are evaluated against the results of an earlier experiment assessing the ergonomic and hedonic qualities of interactive media. The test material consisted of interactive CD-ROMs containing presentations of three companies for future customers.

  2. Mutual information as a measure of image quality for 3D dynamic lung imaging with EIT

    PubMed Central

    Crabb, M G; Davidson, J L; Little, R; Wright, P; Morgan, A R; Miller, C A; Naish, J H; Parker, G J M; Kikinis, R; McCann, H; Lionheart, W R B

    2014-01-01

    We report on a pilot study of dynamic lung electrical impedance tomography (EIT) at the University of Manchester. Low-noise EIT data at 100 frames per second (fps) were obtained from healthy male subjects during controlled breathing, followed by magnetic resonance imaging (MRI) subsequently used for spatial validation of the EIT reconstruction. The torso surface in the MR image and electrode positions obtained using MRI fiducial markers informed the construction of a 3D finite element model extruded along the caudal-distal axis of the subject. Small changes in the boundary that occur during respiration were accounted for by incorporating the sensitivity with respect to boundary shape into a robust temporal difference reconstruction algorithm. EIT and MRI images were co-registered using the open source medical imaging software, 3D Slicer. A quantitative comparison of quality of different EIT reconstructions was achieved through calculation of the mutual information with a lung-segmented MR image. EIT reconstructions using a linear shape correction algorithm reduced boundary image artefacts, yielding better contrast of the lungs, and had 10% greater mutual information compared with a standard linear EIT reconstruction. PMID:24710978
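
    Mutual information between a co-registered EIT reconstruction and a lung-segmented MR image can be estimated from a joint histogram; the following is a minimal sketch in which the bin count and the resampling onto a common grid are assumptions.

        import numpy as np

        def mutual_information(img_a, img_b, bins=32):
            """Histogram-based mutual information between two images on the same grid."""
            joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
            p_xy = joint / joint.sum()
            p_x = p_xy.sum(axis=1, keepdims=True)   # marginal of img_a
            p_y = p_xy.sum(axis=0, keepdims=True)   # marginal of img_b
            nonzero = p_xy > 0
            return float(np.sum(p_xy[nonzero] * np.log(p_xy[nonzero] / (p_x @ p_y)[nonzero])))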

  3. Task-based measures of image quality and their relation to radiation dose and patient risk

    PubMed Central

    Barrett, Harrison H.; Myers, Kyle J.; Hoeschen, Christoph; Kupinski, Matthew A.; Little, Mark P.

    2015-01-01

    The theory of task-based assessment of image quality is reviewed in the context of imaging with ionizing radiation, and objective figures of merit (FOMs) for image quality are summarized. The variation of the FOMs with the task, the observer and especially with the mean number of photons recorded in the image is discussed. Then various standard methods for specifying radiation dose are reviewed and related to the mean number of photons in the image and hence to image quality. Current knowledge of the relation between local radiation dose and the risk of various adverse effects is summarized, and some graphical depictions of the tradeoffs between image quality and risk are introduced. Then various dose-reduction strategies are discussed in terms of their effect on task-based measures of image quality. PMID:25564960

  4. SENTINEL-2 image quality and level 1 processing

    NASA Astrophysics Data System (ADS)

    Meygret, Aimé; Baillarin, Simon; Gascon, Ferran; Hillairet, Emmanuel; Dechoz, Cécile; Lacherade, Sophie; Martimort, Philippe; Spoto, François; Henry, Patrice; Duca, Riccardo

    2009-08-01

    In the framework of the Global Monitoring for Environment and Security (GMES) programme, the European Space Agency (ESA) in partnership with the European Commission (EC) is developing the SENTINEL-2 optical imaging mission devoted to the operational monitoring of land and coastal areas. The Sentinel-2 mission is based on a twin-satellite configuration deployed in a polar sun-synchronous orbit and is designed to offer a unique combination of systematic global coverage with a wide field of view (290 km), a high revisit (5 days at the equator with two satellites), a high spatial resolution (10 m, 20 m and 60 m) and multi-spectral imagery (13 bands in the visible and the short wave infrared spectrum). SENTINEL-2 will ensure data continuity of the SPOT and LANDSAT multispectral sensors while accounting for future service evolution. This paper presents the main geometric and radiometric image quality requirements for the mission. The strong multi-spectral and multi-temporal registration requirements constrain the stability of the platform and the ground processing, which will automatically refine the geometric physical model through correlation techniques. The geolocation of the images will benefit from a worldwide reference data set made of SENTINEL-2 data strips geolocated through a global space-triangulation. This processing is detailed through the description of the level 1C production, which will provide users with ortho-images of top-of-atmosphere reflectances. The huge amount of data (1.4 Tbits per orbit) is also a challenge for the ground processing, which will produce all the acquired data at level 1C. Finally, we discuss the different geometric (line of sight, focal plane cartography, ...) and radiometric (relative and absolute camera sensitivity) in-flight calibration methods that will take advantage of the on-board sun diffuser and ground targets to meet the stringent mission requirements.

  5. Sparse Representation-Based Image Quality Index With Adaptive Sub-Dictionaries.

    PubMed

    Li, Leida; Cai, Hao; Zhang, Yabin; Lin, Weisi; Kot, Alex C; Sun, Xingming

    2016-08-01

    Distortions cause structural changes in digital images, leading to degraded visual quality. Dictionary-based sparse representation has been widely studied recently due to its ability to extract inherent image structures. At the same time, it can extract image features with slightly higher-level semantics. Intuitively, sparse representation can be used for image quality assessment, because visible distortions can cause significant changes to the sparse features. In this paper, a new sparse representation-based image quality assessment model is proposed based on the construction of adaptive sub-dictionaries. An overcomplete dictionary trained from natural images is employed to capture the structure changes between the reference and distorted images by sparse feature extraction via adaptive sub-dictionary selection. Based on the observation that image sparse features are invariant to weak degradations and the perceived image quality is generally influenced by diverse issues, three auxiliary quality features are added, including gradient, color, and luminance information. The proposed method is not sensitive to training images, so a universal dictionary can be adopted for quality evaluation. Extensive experiments on five public image quality databases demonstrate that the proposed method produces state-of-the-art results and performs consistently well across different image quality databases.

  6. NOTE: Development of a quality assurance protocol for peripheral subtraction imaging applications

    NASA Astrophysics Data System (ADS)

    Walsh, C.; Murphy, D.; O'Hare, N.

    2002-04-01

    Peripheral subtraction scanning is used to trace the blood vessels of upper and lower extremities. In some modern C-arm fluoroscopy systems this function is performed automatically. In this mode the system is programmed to advance and stop in a series of steps taking a mask image at each point. The system then repeats each step after the contrast agent has been injected, and produces a DSA image at each point. Current radiographic quality assurance protocols do not address this feature. This note reviews methods of measuring system vibration while images are being acquired in automated peripheral stepping. The effect on image quality pre- and post-image processing is assessed. Results show that peripheral stepping DSA does not provide the same degree of image quality as static DSA. In examining static test objects, the major cause of the reduction in image quality is misregistration due to vibration of the image intensifier during imaging.

  7. The image quality of ion computed tomography at clinical imaging dose levels

    SciTech Connect

    Hansen, David C.; Bassler, Niels; Sørensen, Thomas Sangild; Seco, Joao

    2014-11-01

    Purpose: Accurately predicting the range of radiotherapy ions in vivo is important for the precise delivery of dose in particle therapy. Range uncertainty is currently the single largest contribution to the dose margins used in planning and leads to a higher dose to normal tissue. The use of ion CT has been proposed as a method to improve the range uncertainty and thereby reduce dose to normal tissue of the patient. A wide variety of ions have been proposed and studied for this purpose, but no studies evaluate the image quality obtained with different ions in a consistent manner. However, imaging dose in ion CT is a concern which may limit the obtainable image quality. In addition, the imaging doses reported have not been directly comparable with x-ray CT doses due to the different biological impacts of ion radiation. The purpose of this work is to develop a robust methodology for comparing the image quality of ion CT with respect to particle therapy, taking into account different reconstruction methods and ion species. Methods: A comparison of different ions and energies was made. Ion CT projections were simulated for five different scenarios: protons at 230 and 330 MeV, helium ions at 230 MeV/u, and carbon ions at 430 MeV/u. Maps of the water equivalent stopping power were reconstructed using a weighted least squares method. The dose was evaluated via a quality-factor-weighted CT dose index called the CT dose equivalent index (CTDEI). Spatial resolution was measured by the modulation transfer function. This was done by a noise-robust fit to the edge spread function. Second, the image quality as a function of the number of scanning angles was evaluated for protons at 230 MeV. In the resolution study, the CTDEI was fixed to 10 mSv, similar to a typical x-ray CT scan. Finally, scans at a range of CTDEIs were done, to evaluate dose influence on reconstruction error. Results: All ions yielded accurate stopping power estimates, none of which were statistically
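
    As an illustration of the resolution measurement described above, an MTF can be derived from a sampled edge spread function; the sketch below uses plain differentiation rather than the noise-robust fit used in the study, so it is only a simplified illustration.

        import numpy as np

        def mtf_from_esf(esf, sample_spacing_mm):
            """Modulation transfer function from an oversampled edge spread function."""
            lsf = np.gradient(np.asarray(esf, dtype=float))   # line spread function
            lsf /= lsf.sum()                                  # normalise to unit area
            mtf = np.abs(np.fft.rfft(lsf))
            freqs = np.fft.rfftfreq(len(lsf), d=sample_spacing_mm)   # cycles/mm
            return freqs, mtf / mtf[0]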

  8. Neuron Image Analyzer: Automated and Accurate Extraction of Neuronal Data from Low Quality Images.

    PubMed

    Kim, Kwang-Min; Son, Kilho; Palmore, G Tayhas R

    2015-01-01

    Image analysis software is an essential tool used in neuroscience and neural engineering to evaluate changes in neuronal structure following extracellular stimuli. Both manual and automated methods in current use are severely inadequate at detecting and quantifying changes in neuronal morphology when the images analyzed have a low signal-to-noise ratio (SNR). This inadequacy derives from the fact that these methods often include data from non-neuronal structures or artifacts by simply tracing pixels with high intensity. In this paper, we describe Neuron Image Analyzer (NIA), a novel algorithm that overcomes these inadequacies by employing Laplacian of Gaussian filter and graphical models (i.e., Hidden Markov Model, Fully Connected Chain Model) to specifically extract relational pixel information corresponding to neuronal structures (i.e., soma, neurite). As such, NIA that is based on vector representation is less likely to detect false signals (i.e., non-neuronal structures) or generate artifact signals (i.e., deformation of original structures) than current image analysis algorithms that are based on raster representation. We demonstrate that NIA enables precise quantification of neuronal processes (e.g., length and orientation of neurites) in low quality images with a significant increase in the accuracy of detecting neuronal changes post-stimulation. PMID:26593337
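
    The Laplacian-of-Gaussian filtering step named above can be reproduced with standard tools; in the sketch below the scale parameter is an assumed value, and the graphical-model stages of NIA are not shown.

        import numpy as np
        from scipy import ndimage

        def log_response(image, sigma=2.0):
            """Laplacian-of-Gaussian response, highlighting blob- and ridge-like
            neuronal structures (soma, neurites) at scale sigma."""
            return ndimage.gaussian_laplace(np.asarray(image, dtype=float), sigma=sigma)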

  9. Neuron Image Analyzer: Automated and Accurate Extraction of Neuronal Data from Low Quality Images

    PubMed Central

    Kim, Kwang-Min; Son, Kilho; Palmore, G. Tayhas R.

    2015-01-01

    Image analysis software is an essential tool used in neuroscience and neural engineering to evaluate changes in neuronal structure following extracellular stimuli. Both manual and automated methods in current use are severely inadequate at detecting and quantifying changes in neuronal morphology when the images analyzed have a low signal-to-noise ratio (SNR). This inadequacy derives from the fact that these methods often include data from non-neuronal structures or artifacts by simply tracing pixels with high intensity. In this paper, we describe Neuron Image Analyzer (NIA), a novel algorithm that overcomes these inadequacies by employing Laplacian of Gaussian filter and graphical models (i.e., Hidden Markov Model, Fully Connected Chain Model) to specifically extract relational pixel information corresponding to neuronal structures (i.e., soma, neurite). As such, NIA that is based on vector representation is less likely to detect false signals (i.e., non-neuronal structures) or generate artifact signals (i.e., deformation of original structures) than current image analysis algorithms that are based on raster representation. We demonstrate that NIA enables precise quantification of neuronal processes (e.g., length and orientation of neurites) in low quality images with a significant increase in the accuracy of detecting neuronal changes post-stimulation. PMID:26593337

  10. Improving a DWT-based compression algorithm for high image-quality requirement of satellite images

    NASA Astrophysics Data System (ADS)

    Thiebaut, Carole; Latry, Christophe; Camarero, Roberto; Cazanave, Grégory

    2011-10-01

    Past and current optical Earth observation systems designed by CNES are using a fixed-rate data compression processing performed at a high rate in a pushbroom mode (also called scan-based mode). This process generates fixed-length data to the mass memory, and data downlink is performed at a fixed rate too. Because of on-board memory limitations and high data rate processing needs, the rate allocation procedure is performed over a small image area called a "segment". For both the PLEIADES compression algorithm and the CCSDS Image Data Compression recommendation, this rate allocation is realised by truncating to the desired rate a hierarchical bitstream of coded and quantized wavelet coefficients for each segment. Because the quantisation induced by truncation of the bit-plane description is the same for the whole segment, some parts of the segment have a poor image quality. These artefacts generally occur in low-energy areas within a segment of higher level of energy. In order to locally correct these areas, CNES has studied "exceptional processing" targeted for DWT-based compression algorithms. According to a criterion computed for each part of the segment (called a block), the wavelet coefficients can be amplified before bit-plane encoding. As with usual region-of-interest handling, these multiplied coefficients will be processed earlier by the encoder than in the nominal case (without exceptional processing). The image quality improvement brought by the exceptional processing has been confirmed by visual image analysis and fidelity criteria. The complexity of the proposed improvement for on-board application has also been analysed.
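
    The "exceptional processing" idea described above, amplifying wavelet coefficients in low-energy blocks so the encoder describes them earlier in the bitstream, can be sketched as follows; the block size, threshold and gain are illustrative values, not CNES parameters.

        import numpy as np

        def amplify_low_energy_blocks(coeff_plane, block=16, energy_threshold=1.0e3, gain=4.0):
            """Multiply wavelet coefficients inside low-energy blocks before bit-plane coding."""
            out = np.asarray(coeff_plane, dtype=float).copy()
            rows, cols = out.shape
            for i in range(0, rows, block):
                for j in range(0, cols, block):
                    blk = out[i:i + block, j:j + block]
                    if np.sum(blk ** 2) < energy_threshold:
                        blk *= gain     # in-place: amplified blocks are encoded earlier
            return out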

  11. Quality Assurance of Ultrasound Imaging Systems for Target Localization and Online Setup Corrections

    SciTech Connect

    Tome, Wolfgang A.; Orton, Nigel P.

    2008-05-01

    We describe quality assurance paradigms for ultrasound imaging systems for target localization (UISTL). To determine the absolute localization accuracy of a UISTL, an absolute coordinate system can be established in the treatment room and spherical targets at various depths can be localized. To test the ability of such a system to determine the magnitude of internal organ motion, a phantom that mimics the human male pelvic anatomy can be used to simulate different organ motion ranges. To assess the interuser variability of ultrasound (US) guidance, different experienced users can independently determine the daily organ shifts for the same patients for a number of consecutive fractions. The average accuracy for a UISTL for the localization of spherical targets at various depths has been found to be 0.57 ± 0.47 mm in each spatial dimension for various focal depths. For the phantom organ motion test it was found that the true organ motion could be determined to within 1.0 mm along each axis. The variability between different experienced users who localized the same 5 patients for five consecutive fractions was small in comparison to the indicated shifts. In addition to the quality assurance tests that address the ability of a UISTL to accurately localize a target, a thorough quality assurance program should also incorporate the following two aspects to ensure consistent and accurate localization in daily clinical use: (1) adequate training and performance monitoring of users of the US target localization system, and (2) prescreening of patients who may not be good candidates for US localization.

  12. Crowdsourcing quality control for Dark Energy Survey images

    DOE PAGES

    Melchior, P.

    2016-07-01

    We have developed a crowdsourcing web application for image quality control employed by the Dark Energy Survey. Dubbed the "DES exposure checker", it renders science-grade images directly to a web browser and allows users to mark problematic features from a set of predefined classes. Users can also generate custom labels and thus help identify previously unknown problem classes. User reports are fed back to hardware and software experts to help mitigate and eliminate recognized issues. We report on the implementation of the application and our experience with its over 100 users, the majority of which are professional or prospective astronomers but not data management experts. We discuss aspects of user training and engagement, and demonstrate how problem reports have been pivotal to rapidly correct artifacts which would likely have been too subtle or infrequent to be recognized otherwise. We conclude with a number of important lessons learned, suggest possible improvements, and recommend this collective exploratory approach for future astronomical surveys or other extensive data sets with a sufficiently large user base. We also release open-source code of the web application and host an online demo version at http://des-exp-checker.pmelchior.net

  13. Crowdsourcing quality control for Dark Energy Survey images

    NASA Astrophysics Data System (ADS)

    Melchior, P.; Sheldon, E.; Drlica-Wagner, A.; Rykoff, E. S.; Abbott, T. M. C.; Abdalla, F. B.; Allam, S.; Benoit-Lévy, A.; Brooks, D.; Buckley-Geer, E.; Carnero Rosell, A.; Carrasco Kind, M.; Carretero, J.; Crocce, M.; D'Andrea, C. B.; da Costa, L. N.; Desai, S.; Doel, P.; Evrard, A. E.; Finley, D. A.; Flaugher, B.; Frieman, J.; Gaztanaga, E.; Gerdes, D. W.; Gruen, D.; Gruendl, R. A.; Honscheid, K.; James, D. J.; Jarvis, M.; Kuehn, K.; Li, T. S.; Maia, M. A. G.; March, M.; Marshall, J. L.; Nord, B.; Ogando, R.; Plazas, A. A.; Romer, A. K.; Sanchez, E.; Scarpine, V.; Sevilla-Noarbe, I.; Smith, R. C.; Soares-Santos, M.; Suchyta, E.; Swanson, M. E. C.; Tarle, G.; Vikram, V.; Walker, A. R.; Wester, W.; Zhang, Y.

    2016-07-01

    We have developed a crowdsourcing web application for image quality control employed by the Dark Energy Survey. Dubbed the "DES exposure checker", it renders science-grade images directly to a web browser and allows users to mark problematic features from a set of predefined classes. Users can also generate custom labels and thus help identify previously unknown problem classes. User reports are fed back to hardware and software experts to help mitigate and eliminate recognized issues. We report on the implementation of the application and our experience with its over 100 users, the majority of which are professional or prospective astronomers but not data management experts. We discuss aspects of user training and engagement, and demonstrate how problem reports have been pivotal to rapidly correct artifacts which would likely have been too subtle or infrequent to be recognized otherwise. We conclude with a number of important lessons learned, suggest possible improvements, and recommend this collective exploratory approach for future astronomical surveys or other extensive data sets with a sufficiently large user base. We also release open-source code of the web application and host an online demo version at http://des-exp-checker.pmelchior.net.

  14. Imaging-based logics for ornamental stone quality chart definition

    NASA Astrophysics Data System (ADS)

    Bonifazi, Giuseppe; Gargiulo, Aldo; Serranti, Silvia; Raspi, Costantino

    2007-02-01

    Ornamental stone products are commercially classified on the market according to several factors related both to intrinsic lithologic characteristics and to their visible pictorial attributes. Sometimes the latter aspects prevail in the definition and assessment of quality criteria. Pictorial attributes are in any case also influenced by the working actions performed and the tools used to realize the final manufactured stone product. Stone surface finishing is a critical task because it can contribute to enhancing certain aesthetic features of the stone itself. The study was aimed at developing an innovative set of methodologies and techniques able to quantify the aesthetic quality level of stone products, taking into account both the physical and the aesthetic characteristics of the stones. In particular, the degree of polishing of the stone surfaces and the presence of defects have been evaluated by applying digital image processing strategies. Morphological and color parameters have been extracted by developing specific software architectures. Results showed that the proposed approaches make it possible to quantify the degree of polishing and to identify surface defects related to the intrinsic characteristics of the stone and/or the working actions performed.

  15. Development of Software to Model AXAF-I Image Quality

    NASA Technical Reports Server (NTRS)

    Ahmad, Anees; Hawkins, Lamar

    1996-01-01

    This draft final report describes the work performed under delivery order number 145 from May 1995 through August 1996. The scope of work included a number of software development tasks for the performance modeling of AXAF-I. A number of new capabilities and functions have been added to the GT software, which is the command-mode version of the GRAZTRACE software originally developed by MSFC. A structural data interface has been developed for the EAL (formerly SPAR) finite element analysis (FEA) program, which is being used by the MSFC Structural Analysis group for the analysis of AXAF-I. This interface utility can read the structural deformation file from EAL and other finite element analysis programs such as NASTRAN and COSMOS/M, and convert the data to a suitable format that can be used for deformation ray-tracing to predict the image quality for a distorted mirror. There is a provision in this utility to expand the data from finite element models assuming 180-degree symmetry. This utility has been used to predict image characteristics for the AXAF-I HRMA when subjected to gravity effects in the horizontal x-ray ground test configuration. The development of the metrology data processing interface software has also been completed. It can read the HDOS FITS format surface map files, manipulate and filter the metrology data, and produce a deformation file which can be used by GT for ray tracing of the mirror surface figure errors. This utility has been used to determine the optimum alignment (axial spacing and clocking) for the four pairs of AXAF-I mirrors. Based on this optimized alignment, the geometric images and effective focal lengths for the as-built mirrors were predicted to cross-check the results obtained by Kodak.

  16. Novel Card Games for Learning Radiographic Image Quality and Urologic Imaging in Veterinary Medicine.

    PubMed

    Ober, Christopher P

    2016-01-01

    Second-year veterinary students are often challenged by concepts in veterinary radiology, including the fundamentals of image quality and generation of differential lists. Four card games were developed to provide veterinary students with a supplemental means of learning about radiographic image quality and differential diagnoses in urogenital imaging. Students played these games and completed assessments of their subject knowledge before and after playing. The hypothesis was that playing each game would improve students' understanding of the topic area. For each game, students who played the game performed better on the post-test than students who did not play that game (all p<.01). For three of the four games, students who played each respective game demonstrated significant improvement in scores between the pre-test and the post-test (p<.002). The majority of students expressed that the games were both helpful and enjoyable. Educationally focused games can help students learn classroom and laboratory material. However, game design is important, as the game using the most passive learning process also demonstrated the weakest results. In addition, based on participants' comments, the games were very useful in improving student engagement in the learning process. Thus, use of games in the classroom and laboratory setting seems to benefit the learning process. PMID:26966984

  17. Novel Card Games for Learning Radiographic Image Quality and Urologic Imaging in Veterinary Medicine.

    PubMed

    Ober, Christopher P

    2016-01-01

    Second-year veterinary students are often challenged by concepts in veterinary radiology, including the fundamentals of image quality and generation of differential lists. Four card games were developed to provide veterinary students with a supplemental means of learning about radiographic image quality and differential diagnoses in urogenital imaging. Students played these games and completed assessments of their subject knowledge before and after playing. The hypothesis was that playing each game would improve students' understanding of the topic area. For each game, students who played the game performed better on the post-test than students who did not play that game (all p<.01). For three of the four games, students who played each respective game demonstrated significant improvement in scores between the pre-test and the post-test (p<.002). The majority of students expressed that the games were both helpful and enjoyable. Educationally focused games can help students learn classroom and laboratory material. However, game design is important, as the game using the most passive learning process also demonstrated the weakest results. In addition, based on participants' comments, the games were very useful in improving student engagement in the learning process. Thus, use of games in the classroom and laboratory setting seems to benefit the learning process.

  18. Quality Imaging - Comparison of CR Mammography with Screen-Film Mammography

    SciTech Connect

    Gaona, E.; Azorin Nieto, J.; Iran Diaz Gongora, J. A.; Arreola, M.; Casian Castellanos, G.; Perdigon Castaneda, G. M.; Franco Enriquez, J. G.

    2006-09-08

    The aim of this work is to compare the image quality of CR mammography images printed to film by a laser printer with that of screen-film mammography. Giotto and Elscintec dedicated mammography units with fully automatic exposure and a nominal large focal spot size of 0.3 mm were used for the image acquisition of phantoms in screen-film mammography. Four CR mammography units from two different manufacturers and three dedicated x-ray mammography units with fully automatic exposure and a nominal large focal spot size of 0.3 mm were used for the image acquisition of phantoms in CR mammography. The image quality tests included an assessment of system resolution, scoring of phantom images, artifacts, mean optical density and density difference (contrast). In this study, screen-film mammography with a quality control program offers a significantly greater level of image quality than CR mammography images printed on film.

  19. Asbestos/NESHAP adequately wet guidance

    SciTech Connect

    Shafer, R.; Throwe, S.; Salgado, O.; Garlow, C.; Hoerath, E.

    1990-12-01

    The Asbestos NESHAP requires facility owners and/or operators involved in demolition and renovation activities to control emissions of particulate asbestos to the outside air because no safe concentration of airborne asbestos has ever been established. The primary method used to control asbestos emissions is to adequately wet the Asbestos Containing Material (ACM) with a wetting agent prior to, during and after demolition/renovation activities. The purpose of the document is to provide guidance to asbestos inspectors and the regulated community on how to determine if friable ACM is adequately wet as required by the Asbestos NESHAP.

  20. Quantitative and qualitative image quality analysis of super resolution images from a low cost scanning laser ophthalmoscope

    NASA Astrophysics Data System (ADS)

    Murillo, Sergio; Echegaray, Sebastian; Zamora, Gilberto; Soliz, Peter; Bauman, Wendall

    2011-03-01

    The lurking epidemic of eye diseases caused by diabetes and aging will put more than 130 million Americans at risk of blindness by 2020. Screening has been touted as a means to prevent blindness by identifying those individuals at risk. However, the cost of most of today's commercial retinal imaging devices makes their use economically impractical for mass screening. Thus, low-cost devices are needed. With these devices, low cost often comes at the expense of image quality, with high levels of noise and distortion hindering the clinical evaluation of those retinas. A software-based super resolution (SR) reconstruction methodology that produces images with improved resolution and quality from multiple low resolution (LR) observations is introduced. The LR images are taken with a low-cost Scanning Laser Ophthalmoscope (SLO). The non-redundant information of these LR images is combined to produce a single image in an implementation that also removes noise and imaging distortions while preserving fine blood vessels and small lesions. The feasibility of using the resulting SR images for screening of eye diseases was tested using quantitative and qualitative assessments. Qualitatively, expert image readers evaluated their ability to detect clinically significant features on the SR images and compared their findings with those obtained from matching images of the same eyes taken with commercially available high-end cameras. Quantitatively, measures of image quality were calculated from SR images and compared to subject-matched images from a commercial fundus imager. Our results show that the SR images have indeed enough quality and spatial detail for screening purposes.

  1. Supervision of Student Teachers: How Adequate?

    ERIC Educational Resources Information Center

    Dean, Ken

    This study attempted to ascertain how adequately student teachers are supervised by college supervisors and supervising teachers. Questions to be answered were as follows: a) How do student teachers rate the adequacy of supervision given them by college supervisors and supervising teachers? and b) Are there significant differences between ratings…

  2. Small Rural Schools CAN Have Adequate Curriculums.

    ERIC Educational Resources Information Center

    Loustaunau, Martha

    The small rural school's foremost and largest problem is providing an adequate curriculum for students in a changing world. Often the small district cannot or is not willing to pay the per-pupil cost of curriculum specialists, specialized courses using expensive equipment no more than one period a day, and remodeled rooms to accommodate new…

  3. An Adequate Education Defined. Fastback 476.

    ERIC Educational Resources Information Center

    Thomas, M. Donald; Davis, E. E. (Gene)

    Court decisions historically have dealt with educational equity; now they are helping to establish "adequacy" as a standard in education. Legislatures, however, have been slow to enact remedies. One debate over education adequacy, though, is settled: Schools are not financed at an adequate level. This fastback is divided into three sections.…

  4. Funding the Formula Adequately in Oklahoma

    ERIC Educational Resources Information Center

    Hancock, Kenneth

    2015-01-01

    This report is a longitudinal, simulational study that looks at how the ratio of state support to local support affects the number of school districts that break the common school's funding formula, which in turn affects the equity of distribution to the common schools. After nearly two decades of adequately supporting the funding formula, Oklahoma…

  5. Fast T2-weighted MR imaging: impact of variation in pulse sequence parameters on image quality and artifacts.

    PubMed

    Li, Tao; Mirowitz, Scott A

    2003-09-01

    The purpose of this study was to quantitatively evaluate in a phantom model the practical impact of alteration of key imaging parameters on image quality and artifacts for the most commonly used fast T(2)-weighted MR sequences. These include fast spin-echo (FSE), single shot fast spin-echo (SSFSE), and spin-echo echo-planar imaging (EPI) pulse sequences. We developed a composite phantom with different T1 and T2 values, which was evaluated while stationary as well as during periodic motion. Experiments involved controlled variations in key parameters including effective TE, TR, echo spacing (ESP), receive bandwidth (BW), echo train length (ETL), and shot number (SN). Quantitative analysis consisted of signal-to-noise ratio (SNR), image nonuniformity, full-width-at-half-maximum (i.e., blurring or geometric distortion) and ghosting ratio. Among the fast T(2)-weighted sequences, EPI was most sensitive to alterations in imaging parameters. Among imaging parameters that we tested, effective TE, ETL, and shot number most prominently affected image quality and artifacts. Short T(2) objects were more sensitive to alterations in imaging parameters in terms of image quality and artifacts. Optimal clinical application of these fast T(2)-weighted imaging pulse sequences requires careful attention to selection of imaging parameters.
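
    The quantitative measures listed above can be computed from simple regions of interest; the definitions in the following sketch are common conventions and may differ in detail from those used in the study.

        import numpy as np

        def snr(image, signal_roi, noise_roi):
            """Signal-to-noise ratio from a signal ROI and a background (air) ROI."""
            return image[signal_roi].mean() / image[noise_roi].std(ddof=1)

        def ghosting_ratio(image, ghost_roi, background_roi, signal_roi):
            """Relative intensity of ghost artefacts along the phase-encode direction."""
            return (image[ghost_roi].mean() - image[background_roi].mean()) / image[signal_roi].mean()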

  6. Filling factor characteristics of masking phase-only hologram on the quality of reconstructed images

    NASA Astrophysics Data System (ADS)

    Deng, Yuanbo; Chu, Daping

    2016-03-01

    The present study evaluates how the filling factor of a mask applied to a phase-only hologram affects the corresponding reconstructed image. A square aperture with varying filling factor is applied to the phase-only hologram of the target image, and the average cross-section intensity profile of the reconstructed image is obtained and deconvolved with that of the target image to calculate the point spread function (PSF) of the image. The Lena image is used as the target image, and the RMSE and SSIM metrics are used to assess the quality of the reconstructed image. The results show that the PSF of the image agrees with the PSF of the Fourier transform of the mask, and that as the filling factor of the mask decreases, the width of the PSF increases and the quality of the reconstructed image drops. These characteristics could be used in practical situations where the phase-only hologram is confined or needs to be sliced or tiled.

  7. High-quality remote interactive imaging in the operating theatre

    NASA Astrophysics Data System (ADS)

    Grimstead, Ian J.; Avis, Nick J.; Evans, Peter L.; Bocca, Alan

    2009-02-01

    We present a high-quality display system that enables the remote access within an operating theatre of high-end medical imaging and surgical planning software. Currently, surgeons often use printouts from such software for reference during surgery; our system enables surgeons to access and review patient data in a sterile environment, viewing real-time renderings of MRI & CT data as required. Once calibrated, our system displays shades of grey in Operating Room lighting conditions (removing any gamma correction artefacts). Our system does not require any expensive display hardware, is unobtrusive to the remote workstation and works with any application without requiring additional software licenses. To extend the native 256 levels of grey supported by a standard LCD monitor, we have used the concept of "PseudoGrey" where slightly off-white shades of grey are used to extend the intensity range from 256 to 1,785 shades of grey. Remote access is facilitated by a customized version of UltraVNC, which corrects remote shades of grey for display in the Operating Room. The system is successfully deployed at Morriston Hospital, Swansea, UK, and is in daily use during Maxillofacial surgery. More formal user trials and quantitative assessments are being planned for the future.
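
    The PseudoGrey extension described above works by allowing the R, G and B channels to differ by at most one 8-bit step, so that several sub-shades exist between consecutive pure greys. The Python sketch below illustrates the idea with an assumed sub-step ordering; it is not the published PseudoGrey mapping or the calibration used in the deployed system.

        def pseudo_grey(value):
            """Map a grey value in [0, 1] to an RGB triple whose channels differ by
            at most one 8-bit step: 7 sub-steps per level gives 255 * 7 = 1785 shades."""
            level = max(0, min(1785, int(round(value * 1785))))
            base, sub = divmod(level, 7)
            # channel bumps ordered by increasing luminance contribution (B, R, R+B, G, ...)
            bumps = [(0, 0, 0), (0, 0, 1), (1, 0, 0), (1, 0, 1),
                     (0, 1, 0), (0, 1, 1), (1, 1, 0)]
            dr, dg, db = bumps[sub]
            return (min(base + dr, 255), min(base + dg, 255), min(base + db, 255))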

  8. Gradient Magnitude Similarity Deviation: A Highly Efficient Perceptual Image Quality Index.

    PubMed

    Xue, Wufeng; Zhang, Lei; Mou, Xuanqin; Bovik, Alan C

    2014-02-01

    It is an important task to faithfully evaluate the perceptual quality of output images in many applications, such as image compression, image restoration, and multimedia streaming. A good image quality assessment (IQA) model should not only deliver high quality prediction accuracy, but also be computationally efficient. The efficiency of IQA metrics is becoming particularly important due to the increasing proliferation of high-volume visual data in high-speed networks. We present a new effective and efficient IQA model, called gradient magnitude similarity deviation (GMSD). The image gradients are sensitive to image distortions, while different local structures in a distorted image suffer different degrees of degradations. This motivates us to explore the use of global variation of gradient based local quality map for overall image quality prediction. We find that the pixel-wise gradient magnitude similarity (GMS) between the reference and distorted images combined with a novel pooling strategy-the standard deviation of the GMS map-can predict accurately perceptual image quality. The resulting GMSD algorithm is much faster than most state-of-the-art IQA methods, and delivers highly competitive prediction accuracy. MATLAB source code of GMSD can be downloaded at http://www4.comp.polyu.edu.hk/~cslzhang/IQA/GMSD/GMSD.htm. PMID:26270911
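
    For reference, a minimal Python sketch of the GMS map and its standard-deviation pooling is given below; the choice of gradient operator, the stability constant and the omission of the initial downsampling step are simplifications, so it should not be taken as the authors' exact implementation.

        import numpy as np
        from scipy import ndimage

        def gmsd(reference, distorted, c=170.0):
            """Gradient magnitude similarity deviation between two grayscale images
            on a 0-255 scale (c is a stability constant; value assumed here)."""
            def gradient_magnitude(img):
                img = np.asarray(img, dtype=float)
                gx = ndimage.prewitt(img, axis=0)
                gy = ndimage.prewitt(img, axis=1)
                return np.hypot(gx, gy)

            m_r = gradient_magnitude(reference)
            m_d = gradient_magnitude(distorted)
            gms = (2.0 * m_r * m_d + c) / (m_r ** 2 + m_d ** 2 + c)   # local similarity map
            return float(gms.std())    # standard-deviation pooling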

  9. Lesion insertion in projection domain for computed tomography image quality assessment

    NASA Astrophysics Data System (ADS)

    Chen, Baiyu; Ma, Chi; Yu, Zhicong; Leng, Shuai; Yu, Lifeng; McCollough, Cynthia

    2015-03-01

    To perform task-based image quality assessment in CT, it is desirable to have a large number of realistic patient images with known diagnostic truth. One effective way to achieve this objective is to create hybrid images that combine patient images with simulated lesions. Because conventional hybrid images generated in the image domain fail to reflect the impact of scan and reconstruction parameters on lesion appearance, this study explored a projection-domain approach. Liver lesion models were forward projected according to the geometry of a commercial CT scanner to acquire lesion projections. The lesion projections were then inserted into patient projections (decoded from commercial CT raw data with the assistance of the vendor) and reconstructed to acquire hybrid images. To validate the accuracy of the forward projection geometry, simulated images reconstructed from the forward projections of a digital ACR phantom were compared to physically acquired ACR phantom images. To validate the hybrid images, lesion models were inserted into patient images and visually assessed. Results showed that the simulated phantom images and the physically acquired phantom images had great similarity in terms of HU accuracy and high-contrast resolution. The lesions in the hybrid images had a realistic appearance and merged naturally into the liver background. In addition, the inserted lesions demonstrated reconstruction-parameter-dependent appearance. Compared to the conventional image-domain approach, our method enables more realistic hybrid images for image quality assessment.

  10. Evaluation of image quality of MRI data for brain tumor surgery

    NASA Astrophysics Data System (ADS)

    Heckel, Frank; Arlt, Felix; Geisler, Benjamin; Zidowitz, Stephan; Neumuth, Thomas

    2016-03-01

    3D medical images are important components of modern medicine. Their usefulness for the physician depends on their quality, though. Only high-quality images allow accurate and reproducible diagnosis and appropriate support during treatment. We have analyzed 202 MRI images for brain tumor surgery in a retrospective study. Both an experienced neurosurgeon and an experienced neuroradiologist rated each available image with respect to its role in the clinical workflow, its suitability for this specific role, various image quality characteristics, and imaging artifacts. Our results show that MRI data acquired for brain tumor surgery does not always fulfill the required quality standards and that there is a significant disagreement between the surgeon and the radiologist, with the surgeon being more critical. Noise, resolution, as well as the coverage of anatomical structures were the most important criteria for the surgeon, while the radiologist was mainly disturbed by motion artifacts.

  11. Quality metric in matched Laplacian of Gaussian response domain for blind adaptive optics image deconvolution

    NASA Astrophysics Data System (ADS)

    Guo, Shiping; Zhang, Rongzhi; Yang, Yikang; Xu, Rong; Liu, Changhai; Li, Jisheng

    2016-04-01

    Adaptive optics (AO), in conjunction with subsequent postprocessing techniques, has markedly improved the resolution of turbulence-degraded images in ground-based astronomical observation and in the detection and identification of artificial space objects. However, important tasks involved in AO image postprocessing, such as frame selection, stopping iterative deconvolution, and algorithm comparison, commonly need manual intervention and cannot be performed automatically due to a lack of widely agreed-upon image quality metrics. In this work, based on the Laplacian of Gaussian (LoG) local contrast feature detection operator, we propose a LoG-domain matching operation to capture effective and universal image quality statistics. Further, we extract two no-reference quality assessment indices in the matched LoG domain that can be used for a variety of postprocessing tasks. Three typical space object images with distinct structural features are tested to verify the consistency of the proposed metric with perceptual image quality through subjective evaluation.
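
    A minimal Python sketch of the LoG building block referred to above is given below. The matched-filtering step and the two specific indices defined in the paper are not reproduced; the filter scale and the use of the mean and standard deviation of the absolute response are illustrative assumptions.

      import numpy as np
      from scipy.ndimage import gaussian_laplace

      def log_response_stats(img, sigma=2.0):
          # Laplacian-of-Gaussian response of a grayscale image and two
          # simple statistics that could feed a no-reference quality index.
          resp = gaussian_laplace(img.astype(float), sigma=sigma)
          mag = np.abs(resp)
          # Larger mean |LoG| response loosely indicates stronger local contrast;
          # the spread of the response is a crude proxy for structural richness.
          return float(mag.mean()), float(mag.std())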

  12. Digital mammography--DQE versus optimized image quality in clinical environment: an on site study

    NASA Astrophysics Data System (ADS)

    Oberhofer, Nadia; Fracchetti, Alessandro; Springeth, Margareth; Moroder, Ehrenfried

    2010-04-01

    The intrinsic quality of the detection systems of 7 different digital mammography units (5 direct radiography, DR; 2 computed radiography, CR), expressed by the DQE, has been compared with their image quality/dose performance in clinical use. DQE measurements followed IEC 62220-1-2, using a tungsten test object for MTF determination. For image quality assessment two different methods were applied: 1) measurement of the contrast-to-noise ratio (CNR) according to the European guidelines and 2) contrast-detail (CD) evaluation. The latter was carried out with the phantom CDMAM ver. 3.4 and the commercial software CDMAM Analyser ver. 1.1 (both Artinis) for automated image analysis. The overall image quality index IQFinv proposed by the software has been validated. Correspondence between the two methods was shown by establishing a linear correlation between CNR and IQFinv. All systems were optimized with respect to image quality and average glandular dose (AGD) within the constraints of automatic exposure control (AEC). For each unit, a good image quality level was defined by means of CD analysis, and the corresponding CNR value was taken as the target value. The goal was to achieve constant image quality (i.e., the target CNR value) at minimum dose for different PMMA-phantom thicknesses. All DR systems exhibited higher DQE and significantly better image quality than the CR systems. In general, switching, where available, to a target/filter combination with an x-ray spectrum of higher mean energy permitted dose savings at equal image quality. However, several systems did not allow the AEC to be modified so that the optimal radiographic technique could be applied in clinical use. The best image quality/dose ratio was achieved by a unit with an a-Se detector and W anode only recently available on the market.
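
    A minimal Python sketch of a CNR measurement of the kind referred to above is given below. The pooled-noise form shown is one common variant; the exact ROI placement and definition prescribed by the European guidelines are not reproduced here.

      import numpy as np

      def cnr(roi_object, roi_background):
          # Contrast-to-noise ratio from two regions of interest,
          # e.g. over a contrast object and over the adjacent background.
          m_obj, m_bg = np.mean(roi_object), np.mean(roi_background)
          s_obj, s_bg = np.std(roi_object), np.std(roi_background)
          noise = np.sqrt((s_obj ** 2 + s_bg ** 2) / 2.0)  # pooled noise (assumed form)
          return (m_obj - m_bg) / noise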

  13. Recent Developments in Hyperspectral Imaging for Assessment of Food Quality and Safety

    PubMed Central

    Huang, Hui; Liu, Li; Ngadi, Michael O.

    2014-01-01

    Hyperspectral imaging, which combines imaging and spectroscopic technology, is rapidly gaining ground as a non-destructive, real-time detection tool for food quality and safety assessment. Hyperspectral imaging can be used to simultaneously obtain large amounts of spatial and spectral information on the objects being studied. This paper provides a comprehensive review of the recent development of hyperspectral imaging applications for food and food products. The potential of hyperspectral imaging for food quality and safety control, and future work, are also discussed. PMID:24759119

  14. Quantitative measurement of holographic image quality using Adobe Photoshop

    NASA Astrophysics Data System (ADS)

    Wesly, E.

    2013-02-01

    Measurements of the characteristics of image holograms, with regard to diffraction efficiency and signal-to-noise ratio, are demonstrated using readily available digital cameras and image editing software. Illustrations and case studies, using currently available holographic recording materials, are presented.

  15. How do we watch images? A case of change detection and quality estimation

    NASA Astrophysics Data System (ADS)

    Radun, Jenni; Leisti, Tuomas; Virtanen, Toni; Nyman, Göte

    2012-01-01

    The most common tasks in subjective image estimation are change detection (a detection task) and image quality estimation (a preference task). We examined how the task influences gaze behavior by comparing detection and preference tasks. The eye movements of 16 naïve observers were recorded, with 8 observers in each task. The setting was a flicker paradigm, in which the observers see a non-manipulated image, a manipulated version of the image, and again the non-manipulated image, and then estimate the difference they perceived between them. The material was photographic, with different image distortions and contents. To examine the spatial distribution of fixations, we defined the regions of interest using a memory task and calculated information entropy to estimate how concentrated the fixations were on the image plane. The quality task was faster and needed fewer fixations, and its first eight fixations were more concentrated on certain image areas than in the change detection task. The bottom-up influences of the image also caused more variation in gaze behavior in the quality estimation task than in the change detection task. The results show that quality estimation is faster and that the regions of interest are emphasized more in certain images, compared with the change detection task, which is a scan task in which the whole image is always thoroughly examined. In conclusion, in subjective image estimation studies it is important to consider the task.

  16. Non-reference quality assessment of infrared images reconstructed by compressive sensing

    NASA Astrophysics Data System (ADS)

    Ospina-Borras, J. E.; Benitez-Restrepo, H. D.

    2015-01-01

    Infrared (IR) images are representations of the world and have natural features like images in the visible spectrum. As such, natural features from infrared images support image quality assessment (IQA). In this work, we compare the quality of a set of indoor and outdoor IR images reconstructed from measurement functions formed by linear combinations of their pixels. The reconstruction methods are: linear discrete cosine transform (DCT) acquisition, DCT augmented with total variation minimization, and a compressive sensing (CS) scheme. Peak signal-to-noise ratio (PSNR), three full-reference (FR) measures, and four no-reference (NR) IQA measures compute the quality of each reconstruction: multi-scale structural similarity (MSSIM), visual information fidelity (VIF), information fidelity criterion (IFC), sharpness identification based on local phase coherence (LPC-SI), blind/referenceless image spatial quality evaluator (BRISQUE), naturalness image quality evaluator (NIQE) and gradient singular value decomposition (GSVD), respectively. Each measure is compared to human scores that were obtained by a differential mean opinion score (DMOS) test. We observe that GSVD has the highest correlation coefficients of all NR measures, but all FR measures perform better. We use MSSIM to compare the reconstruction methods and find that the CS scheme produces a good-quality IR image using only 30000 random sub-samples and 1000 DCT coefficients (2%). In contrast, linear DCT provides higher correlation coefficients than the CS scheme by using all the pixels of the image and 31000 DCT coefficients (47%).
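
    Of the measures listed above, PSNR is the simplest to state; a minimal Python sketch follows (the peak value of 255 assumes 8-bit data).

      import numpy as np

      def psnr(ref, test, peak=255.0):
          # Peak signal-to-noise ratio between a reference and a test image.
          mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
          return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)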

  17. Quality assessment of stereoscopic 3D image compression by binocular integration behaviors.

    PubMed

    Lin, Yu-Hsun; Wu, Ja-Ling

    2014-04-01

    Objective approaches to 3D image quality assessment play a key role in the development of compression standards and various 3D multimedia applications. The quality assessment of 3D images faces more new challenges, such as asymmetric stereo compression, depth perception, and virtual view synthesis, than its 2D counterpart. In addition, the widely used 2D image quality metrics (e.g., PSNR and SSIM) cannot be directly applied to deal with these newly introduced challenges. This statement can be verified by the low correlation between the computed objective measures and the subjectively measured mean opinion scores (MOSs) when 3D images are the tested targets. In order to meet these newly introduced challenges, in this paper, besides traditional 2D image metrics, the binocular integration behaviors (binocular combination and binocular frequency integration) are utilized as the bases for measuring the quality of stereoscopic 3D images. The effectiveness of the proposed metrics is verified by conducting subjective evaluations on publicly available stereoscopic image databases. Experimental results show that significant consistency could be reached between the measured MOS and the proposed metrics, with a correlation coefficient of up to 0.88. Furthermore, we found that the proposed metrics can also address the quality assessment of synthesized color-plus-depth 3D images well. Therefore, it is our belief that binocular integration behaviors are important factors in the development of objective quality assessment for 3D images.

  18. An electron beam imaging system for quality assurance in IORT

    NASA Astrophysics Data System (ADS)

    Casali, F.; Rossi, M.; Morigi, M. P.; Brancaccio, R.; Paltrinieri, E.; Bettuzzi, M.; Romani, D.; Ciocca, M.; Tosi, G.; Ronsivalle, C.; Vignati, M.

    2004-01-01

    Intraoperative radiation therapy is a special radiotherapy technique that enables a high dose of radiation to be given in a single fraction during oncological surgery. The major stumbling block to the large-scale application of the technique is the transfer of the patient, with an open wound, from the operating room to the radiation therapy bunker, with the consequent organisational problems and increased risk of infection. To overcome these limitations, in the last few years a new kind of linear accelerator, the Novac 7, conceived for direct use in the surgical room, has become available. The Novac 7 can deliver electron beams of different energies (3, 5, 7 and 9 MeV) with a high dose rate (up to 20 Gy/min). The aim of this work, funded by ENEA in the framework of a research contract, is the development of an innovative system for on-line measurements of 2D dose distributions and electron beam characterisation before radiotherapy treatment with the Novac 7. The system is made up of the following components: (a) an electron-light converter; (b) a 14-bit cooled CCD camera; (c) a personal computer with ad hoc software for image acquisition and processing. The performance of the prototype has been characterised experimentally with different electron-light converters. Several tests have addressed the assessment of the detector response as a function of pulse number and electron beam energy. Finally, the experimental results concerning beam profiles have been compared with data acquired with other dosimetric techniques. The results show that the developed system is suitable for fast quality assurance measurements and verification of 2D dose distributions.

  19. No-Reference Image Quality Assessment for ZY3 Imagery in Urban Areas Using Statistical Model

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Cui, W. H.; Yang, F.; Wu, Z. C.

    2016-06-01

    More and more high-spatial-resolution satellite images are being produced as satellite technology improves. However, the quality of these images is not always satisfactory for applications. Owing to complicated atmospheric conditions and the complex radiative transfer involved in the imaging process, the images often suffer deterioration. In order to assess the quality of remote sensing images over urban areas, we propose a general-purpose image quality assessment method based on feature extraction and machine learning. We use two types of features at multiple scales: one derived from the shape of the histogram, the other from natural scene statistics based on the Generalized Gaussian Distribution (GGD). A 20-D feature vector is extracted for each scale and is assumed to capture the quality degradation characteristics of the RS image. We use an SVM to learn to predict image quality scores from these features. For evaluation, we constructed a medium-scale dataset for training and testing, with human subjects providing opinion scores for the degraded images. We used ZY3 satellite images over the Wuhan area (a city in China) to conduct the experiments. Experimental results show that the predicted scores correlate well with subjective perception.
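
    A minimal Python sketch of the GGD fit that natural-scene-statistics features of this kind are typically built on is given below (moment matching for the shape and scale parameters). The paper's full 20-D feature vector and the SVM training step are not reproduced, and the bracketing interval for the shape parameter is an assumption.

      import numpy as np
      from scipy.special import gamma
      from scipy.optimize import brentq

      def ggd_params(coeffs):
          # Estimate GGD shape (beta) and scale (alpha) from zero-mean
          # coefficients by matching the ratio E[|x|]^2 / E[x^2].
          x = np.asarray(coeffs, dtype=float).ravel()
          rho = np.mean(np.abs(x)) ** 2 / np.mean(x ** 2)

          def ratio(beta):
              return gamma(2.0 / beta) ** 2 / (gamma(1.0 / beta) * gamma(3.0 / beta)) - rho

          beta = brentq(ratio, 0.1, 10.0)   # shape parameter (assumed bracket)
          alpha = np.sqrt(np.mean(x ** 2) * gamma(1.0 / beta) / gamma(3.0 / beta))
          return beta, alpha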

  20. Quality evaluation of adaptive optical image based on DCT and Rényi entropy

    NASA Astrophysics Data System (ADS)

    Xu, Yuannan; Li, Junwei; Wang, Jing; Deng, Rong; Dong, Yanbing

    2015-04-01

    Adaptive optical telescopes play an increasingly important role in ground-based detection systems, and they produce so many images that a suitable quality evaluation method is needed to select good-quality images automatically and save manual effort. Adaptive optical images are, by nature, no-reference images. In this paper, a new logarithmic evaluation method for adaptive optical images, based on the discrete cosine transform (DCT) and Rényi entropy, is proposed. Using one- or two-dimensional DCT windows, the statistical properties of the Rényi entropy of images are studied. Directional Rényi entropy maps of an input image, each containing different information content, are obtained, and the mean values of the different directional Rényi entropy maps are calculated. For image quality evaluation, the directional Rényi entropies and their standard deviation over the region of interest are selected as indicators of image anisotropy. The standard deviation of the directional Rényi entropies is taken as the quality evaluation value for the adaptive optical image. Experimental results show that the quality ranking produced by the proposed method matches well with visual inspection.
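
    A minimal Python sketch of the DCT-plus-Rényi-entropy building blocks described above is given below. The block size, the entropy order alpha and the use of block-wise DCT energy distributions are illustrative assumptions; the paper's directional entropy maps are not reproduced.

      import numpy as np
      from scipy.fftpack import dct

      def renyi_entropy(p, alpha=3.0):
          # Rényi entropy (base 2) of a non-negative weight vector p, alpha != 1.
          p = np.asarray(p, dtype=float)
          p = p[p > 0]
          p = p / p.sum()
          return np.log2(np.sum(p ** alpha)) / (1.0 - alpha)

      def block_dct_renyi(img, block=8, alpha=3.0):
          # Rényi entropy of the normalized DCT energy in each image block;
          # returns the mean and the standard deviation over all blocks.
          h, w = img.shape
          ent = []
          for i in range(0, h - block + 1, block):
              for j in range(0, w - block + 1, block):
                  b = img[i:i + block, j:j + block].astype(float)
                  c = dct(dct(b, axis=0, norm='ortho'), axis=1, norm='ortho')
                  ent.append(renyi_entropy((c ** 2).ravel(), alpha))
          ent = np.array(ent)
          return float(ent.mean()), float(ent.std())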

  1. SU-E-I-43: Pediatric CT Dose and Image Quality Optimization

    SciTech Connect

    Stevens, G; Singh, R

    2014-06-01

    Purpose: To design an approach to optimize radiation dose and image quality for pediatric CT imaging, and to evaluate its expected performance. Methods: A methodology was designed to quantify relative image quality as a function of CT image acquisition parameters. Image contrast and image noise were used to indicate the expected conspicuity of objects, and a wide-cone system was used to minimize scan time for motion avoidance. A decision framework was designed to select acquisition parameters as a weighted combination of image quality and dose. Phantom tests were used to acquire images with multiple techniques to demonstrate expected contrast, noise and dose. Anthropomorphic phantoms with contrast inserts were imaged on a 160 mm CT system with tube voltage capabilities as low as 70 kVp. Previously acquired clinical images were used in conjunction with simulation tools to emulate images at different tube voltages and currents to assess human observer preferences. Results: Examination of image contrast, noise, dose and tube/generator capabilities indicates a clinical-task- and object-size-dependent optimization. Phantom experiments confirm that system modeling can be used to achieve the desired image quality and noise performance. Observer studies indicate that clinical utilization of this optimization requires a modified approach to achieve the desired performance. Conclusion: This work indicates the potential to optimize radiation dose and image quality for pediatric CT imaging. In addition, the methodology can be used in an automated parameter selection feature that can suggest techniques given a limited number of user inputs. G Stevens and R Singh are employees of GE Healthcare.

  2. An image-based technique to assess the perceptual quality of clinical chest radiographs

    SciTech Connect

    Lin Yuan; Luo Hui; Dobbins, James T. III; Page McAdams, H.; Wang, Xiaohui; Sehnert, William J.; Barski, Lori; Foos, David H.; Samei, Ehsan

    2012-11-15

    Purpose: Current clinical image quality assessment techniques mainly analyze image quality for the imaging system in terms of factors such as the capture system modulation transfer function, noise power spectrum, detective quantum efficiency, and the exposure technique. While these elements form the basic underlying components of image quality, when assessing a clinical image, radiologists seldom refer to these factors, but rather examine several specific regions of the displayed patient images, further impacted by a particular image processing method applied, to see whether the image is suitable for diagnosis. In this paper, the authors developed a novel strategy to simulate radiologists' perceptual evaluation process on actual clinical chest images. Methods: Ten regional based perceptual attributes of chest radiographs were determined through an observer study. Those included lung grey level, lung detail, lung noise, rib-lung contrast, rib sharpness, mediastinum detail, mediastinum noise, mediastinum alignment, subdiaphragm-lung contrast, and subdiaphragm area. Each attribute was characterized in terms of a physical quantity measured from the image algorithmically using an automated process. A pilot observer study was performed on 333 digital chest radiographs, which included 179 PA images with 10:1 ratio grids (set 1) and 154 AP images without grids (set 2), to ascertain the correlation between image perceptual attributes and physical quantitative measurements. To determine the acceptable range of each perceptual attribute, a preliminary quality consistency range was defined based on the preferred 80% of images in set 1. Mean value difference (μ1 - μ2) and variance ratio (σ1²/σ2²) were investigated to further quantify the differences between the selected two image sets. Results: The pilot observer study demonstrated that our regional based physical quantity metrics of chest radiographs correlated very well with

  3. Quality Assurance Needs for Modern Image-Based Radiotherapy: Recommendations From 2007 Interorganizational Symposium on 'Quality Assurance of Radiation Therapy: Challenges of Advanced Technology'

    SciTech Connect

    Williamson, Jeffrey F. Dunscombe, Peter B.; Sharpe, Michael B.; Thomadsen, Bruce R.; Purdy, James A.; Deye, James A.

    2008-05-01

    This report summarizes the consensus findings and recommendations emerging from the 2007 Symposium, 'Quality Assurance of Radiation Therapy: Challenges of Advanced Technology.' The Symposium was held in Dallas February 20-22, 2007. The 3-day program, which was sponsored jointly by the American Society for Therapeutic Radiology and Oncology (ASTRO), American Association of Physicists in Medicine (AAPM), and National Cancer Institute (NCI), included >40 invited speakers from the radiation oncology and industrial engineering/human factor communities and attracted nearly 350 attendees, mostly medical physicists. A summary of the major findings follows. The current process of developing consensus recommendations for prescriptive quality assurance (QA) tests remains valid for many of the devices and software systems used in modern radiotherapy (RT), although for some technologies, QA guidance is incomplete or out of date. The current approach to QA does not seem feasible for image-based planning, image-guided therapies, or computer-controlled therapy. In these areas, additional scientific investigation and innovative approaches are needed to manage risk and mitigate errors, including a better balance between mitigating the risk of catastrophic error and maintaining treatment quality, complementing the current device-centered QA perspective with a more process-centered approach, and broadening community participation in QA guidance formulation and implementation. Industrial engineers and human factor experts can make significant contributions toward advancing a broader, more process-oriented, risk-based formulation of RT QA. Healthcare administrators need to appropriately increase personnel and ancillary equipment resources, as well as capital resources, when new advanced technology RT modalities are implemented. The pace of formalizing clinical physics training must rapidly increase to provide an adequately trained physics workforce for advanced technology RT. The specific

  4. Local homogeneity combined with DCT statistics to blind noisy image quality assessment

    NASA Astrophysics Data System (ADS)

    Yang, Lingxian; Chen, Li; Chen, Heping

    2015-03-01

    In this paper a novel method for blind noisy image quality assessment is proposed. First, because the human visual system (HVS) is believed to be more sensitive to locally smooth areas in a noisy image, an adaptive local homogeneous block selection algorithm is proposed to construct a new homogeneous image, termed homogeneity blocks (HB), based on the characteristics of each pixel. Second, the discrete cosine transform (DCT) is applied to each HB, and the high-frequency components are used to evaluate the image noise level. Finally, a modified peak signal-to-noise ratio (MPSNR) image quality assessment approach is proposed, based on analysis of changes in the DCT kurtosis distribution and the noise level estimated above. Simulations show that the quality scores produced by the proposed algorithm correlate well with human perception of quality and are stable.
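
    A minimal Python sketch of the homogeneous-block-plus-DCT idea described above is given below. The block size, the fraction of blocks kept and the choice of high-frequency region are illustrative assumptions, not the authors' exact selection rule or their MPSNR formulation.

      import numpy as np
      from scipy.fftpack import dct

      def noise_from_homogeneous_blocks(img, block=8, keep=0.1):
          # Estimate a noise level from the smoothest blocks of an image,
          # using their high-frequency DCT energy.
          h, w = img.shape
          blocks, variances = [], []
          for i in range(0, h - block + 1, block):
              for j in range(0, w - block + 1, block):
                  b = img[i:i + block, j:j + block].astype(float)
                  blocks.append(b)
                  variances.append(b.var())
          # keep the most homogeneous blocks, where the HVS is most noise-sensitive
          order = np.argsort(variances)[:max(1, int(keep * len(blocks)))]
          # high-frequency corner of the DCT (indices with i + j >= block)
          mask = np.add.outer(np.arange(block), np.arange(block)) >= block
          hf_energy = []
          for idx in order:
              c = dct(dct(blocks[idx], axis=0, norm='ortho'), axis=1, norm='ortho')
              hf_energy.append(np.mean(c[mask] ** 2))
          return float(np.sqrt(np.mean(hf_energy)))  # crude noise-level estimate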

  5. INCITS W1.1 development update: appearance-based image quality standards for printers

    NASA Astrophysics Data System (ADS)

    Zeise, Eric K.; Rasmussen, D. René; Ng, Yee S.; Dalal, Edul; McCarthy, Ann; Williams, Don

    2008-01-01

    In September 2000, INCITS W1 (the U.S. representative of ISO/IEC JTC1/SC28, the standardization committee for office equipment) was chartered to develop an appearance-based image quality standard. (1),(2) The resulting W1.1 project is based on a proposal (3) that perceived image quality can be described by a small set of broad-based attributes. There are currently six ad hoc teams, each working towards the development of standards for evaluation of perceptual image quality of color printers for one or more of these image quality attributes. This paper summarizes the work in progress of the teams addressing the attributes of Macro-Uniformity, Colour Rendition, Gloss & Gloss Uniformity, Text & Line Quality and Effective Resolution.

  6. Quantitative metrics for assessment of chemical image quality and spatial resolution

    DOE PAGES

    Kertesz, Vilmos; Cahill, John F.; Van Berkel, Gary J.

    2016-02-28

    Rationale: Currently, objective/quantitative descriptions of the quality and spatial resolution of mass spectrometry derived chemical images are not standardized. Development of these standardized metrics is required to objectively describe chemical imaging capabilities of existing and/or new mass spectrometry imaging technologies. Such metrics would allow unbiased judgment of intra-laboratory advancement and/or inter-laboratory comparison for these technologies if used together with standardized surfaces. Methods: We developed two image metrics, viz., chemical image contrast (ChemIC) based on signal-to-noise related statistical measures on chemical image pixels and corrected resolving power factor (cRPF) constructed from statistical analysis of mass-to-charge chronograms across features of interest in an image. These metrics, quantifying chemical image quality and spatial resolution, respectively, were used to evaluate chemical images of a model photoresist patterned surface collected using a laser ablation/liquid vortex capture mass spectrometry imaging system under different instrument operational parameters. Results: The calculated ChemIC and cRPF metrics determined in an unbiased fashion the relative ranking of chemical image quality obtained with the laser ablation/liquid vortex capture mass spectrometry imaging system. These rankings were used to show that both chemical image contrast and spatial resolution deteriorated with increasing surface scan speed, increased lane spacing and decreasing size of surface features. Conclusions: ChemIC and cRPF, respectively, were developed and successfully applied for the objective description of chemical image quality and spatial resolution of chemical images collected from model surfaces using a laser ablation/liquid vortex capture mass spectrometry imaging system.

  7. Factors affecting computed tomography image quality for assessment of mechanical aortic valves.

    PubMed

    Suh, Young Joo; Kim, Young Jin; Hong, Yoo Jin; Lee, Hye-Jeong; Hur, Jin; Hong, Sae Rom; Im, Dong Jin; Kim, Yun Jung; Choi, Byoung Wook

    2016-06-01

    Evaluating mechanical valves with computed tomography (CT) can be problematic because artifacts from the metallic components of valves can hamper image quality. The purpose of this study was to determine factors affecting the image quality of cardiac CT to improve assessment of mechanical aortic valves. A total of 144 patients who underwent aortic valve replacement with mechanical valves (ten different types) and who underwent cardiac CT were included. Using a four-point grading system, the image quality of the CT scans was assessed for visibility of the valve leaflets and the subvalvular regions. Data regarding the type of mechanical valve, tube voltage, average heart rate (HR), and HR variability during CT scanning were compared between the non-diagnostic (overall image quality score ≤2) and diagnostic (overall image quality score >2) image quality groups. Logistic regression analyses were performed to identify predictors of non-diagnostic image quality. The percentage of valve types that incorporated a cobalt-chrome component (two types in total) and HR variability were significantly higher in the non-diagnostic image group than in the diagnostic group (P < 0.001 and P = 0.013, respectively). The average HR and tube voltage were not significantly different between the two groups (P > 0.05). Valve type was the only independent predictor of non-diagnostic quality. The CT image quality for patients with mechanical aortic valves differed significantly depending on the type of mechanical valve used and on the degree of HR variability.

  8. MTF as a quality measure for compressed images transmitted over computer networks

    NASA Astrophysics Data System (ADS)

    Hadar, Ofer; Stern, Adrian; Huber, Merav; Huber, Revital

    1999-12-01

    One result of the recent advances in different components of imaging systems technology is that these systems have become more resolution-limited and less noise-limited. The most useful tool utilized in the characterization of resolution-limited systems is the Modulation Transfer Function (MTF). The goal of this work is to use the MTF as an image quality measure for image compression implemented by the JPEG (Joint Photographic Experts Group) algorithm and for MPEG (Moving Picture Experts Group) compressed video streams transmitted through a lossy packet network. Although we realize that the MTF is not an ideal parameter with which to measure image quality after compression and transmission, because the process is neither linear nor shift-invariant, we examine the conditions under which it can be used as an approximate criterion for image quality. The advantage of using the MTF of the compression algorithm is that it can be easily combined with the overall MTF of the imaging system.
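
    One way to approximate an MTF-style measure for a compression stage is to compare radially averaged Fourier magnitude spectra of the compressed and original images. The Python sketch below illustrates that general idea only; it is not the authors' measurement procedure, and, as the abstract notes, it treats compression as approximately linear and shift-invariant.

      import numpy as np

      def compression_mtf(original, compressed, nbins=32):
          # Ratio of radially averaged Fourier magnitude spectra
          # (compressed / original), one value per radial frequency band.
          f_o = np.abs(np.fft.fftshift(np.fft.fft2(original.astype(float))))
          f_c = np.abs(np.fft.fftshift(np.fft.fft2(compressed.astype(float))))
          h, w = original.shape
          yy, xx = np.mgrid[0:h, 0:w]
          r = np.hypot(yy - h / 2.0, xx - w / 2.0)
          r = r / r.max()
          bins = np.linspace(0.0, 1.0, nbins + 1)
          mtf = []
          for lo, hi in zip(bins[:-1], bins[1:]):
              band = (r >= lo) & (r < hi)
              if band.any():
                  mtf.append(f_c[band].mean() / max(f_o[band].mean(), 1e-12))
          return np.array(mtf)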

  9. The study on the image quality of varied line spacing plane grating by computer simulation

    NASA Astrophysics Data System (ADS)

    Sun, Shouqiang; Zhang, Weiping; Liu, Lei; Yang, Qingyi

    2014-11-01

    Varied-line-spacing plane gratings have the features of self-focusing, reduced aberrations and ease of manufacture, and they are widely applied in synchrotron radiation, plasma physics, space astronomy and other fields. In studies of diffraction imaging, the optical path function is expanded into a Maclaurin series and the aberrations are expressed by the series coefficients; however, the coefficients are numerous and similar in form, and they do not directly reflect the overall image quality. This paper studies the diffraction imaging of varied-line-spacing plane gratings by computer simulation in order to provide a method for judging image quality visually. Light beams from object points on the same object plane are analyzed and simulated by ray tracing, and an evaluation function is set up that fully characterizes the image quality. In addition, based on the evaluation function, the best image plane is found by a search algorithm.

  10. Near-infrared hyperspectral imaging for quality analysis of agricultural and food products

    NASA Astrophysics Data System (ADS)

    Singh, C. B.; Jayas, D. S.; Paliwal, J.; White, N. D. G.

    2010-04-01

    Agricultural and food processing industries are always looking to implement real-time quality monitoring techniques as a part of good manufacturing practices (GMPs) to ensure high-quality and safety of their products. Near-infrared (NIR) hyperspectral imaging is gaining popularity as a powerful non-destructive tool for quality analysis of several agricultural and food products. This technique has the ability to analyse spectral data in a spatially resolved manner (i.e., each pixel in the image has its own spectrum) by applying both conventional image processing and chemometric tools used in spectral analyses. Hyperspectral imaging technique has demonstrated potential in detecting defects and contaminants in meats, fruits, cereals, and processed food products. This paper discusses the methodology of hyperspectral imaging in terms of hardware, software, calibration, data acquisition and compression, and development of prediction and classification algorithms and it presents a thorough review of the current applications of hyperspectral imaging in the analyses of agricultural and food products.

  11. A conceptual study of automatic and semi-automatic quality assurance techniques for ground image processing

    NASA Technical Reports Server (NTRS)

    1983-01-01

    This report summarizes the results of a study conducted by Engineering and Economics Research (EER), Inc. under NASA Contract Number NAS5-27513. The study involved the development of preliminary concepts for automatic and semiautomatic quality assurance (QA) techniques for ground image processing. A distinction is made between quality assessment and the more comprehensive quality assurance which includes decision making and system feedback control in response to quality assessment.

  12. Quality Index for Stereoscopic Images by Separately Evaluating Adding and Subtracting

    PubMed Central

    Yang, Jiachen; Lin, Yancong; Gao, Zhiqun; Lv, Zhihan; Wei, Wei; Song, Houbing

    2015-01-01

    The human visual system (HVS) plays an important role in stereo image quality perception. Therefore, there is considerable interest in how to exploit knowledge of visual perception in image quality assessment models. This paper proposes a full-reference metric for quality assessment of stereoscopic images based on the binocular difference channel and binocular summation channel. For a stereo pair, the binocular summation map and binocular difference map are computed first by adding and subtracting the left image and right image. Then the binocular summation is decoupled into two parts, namely additive impairments and detail losses. The quality of the binocular summation is obtained as the adaptive combination of the quality of the detail losses and the additive impairments, and is computed using the Contrast Sensitivity Function (CSF) and weighted multi-scale SSIM (MS-SSIM). Finally, the quality of the binocular summation and the binocular difference is integrated into an overall quality index. The experimental results indicate that, compared with existing metrics, the proposed metric is highly consistent with subjective quality assessment and is a robust measure. The results also indirectly support the hypothesis that binocular summation and binocular difference channels exist. PMID:26717412
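
    A minimal Python sketch of the binocular summation and difference maps that the metric starts from is given below; the subsequent CSF weighting, MS-SSIM computation and adaptive combination are not reproduced.

      import numpy as np

      def binocular_channels(left, right):
          # Binocular summation and difference maps of a stereo pair.
          left = left.astype(float)
          right = right.astype(float)
          return left + right, left - right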

  13. Quality Index for Stereoscopic Images by Separately Evaluating Adding and Subtracting.

    PubMed

    Yang, Jiachen; Lin, Yancong; Gao, Zhiqun; Lv, Zhihan; Wei, Wei; Song, Houbing

    2015-01-01

    The human visual system (HVS) plays an important role in stereo image quality perception. Therefore, there is considerable interest in how to exploit knowledge of visual perception in image quality assessment models. This paper proposes a full-reference metric for quality assessment of stereoscopic images based on the binocular difference channel and binocular summation channel. For a stereo pair, the binocular summation map and binocular difference map are computed first by adding and subtracting the left image and right image. Then the binocular summation is decoupled into two parts, namely additive impairments and detail losses. The quality of the binocular summation is obtained as the adaptive combination of the quality of the detail losses and the additive impairments, and is computed using the Contrast Sensitivity Function (CSF) and weighted multi-scale SSIM (MS-SSIM). Finally, the quality of the binocular summation and the binocular difference is integrated into an overall quality index. The experimental results indicate that, compared with existing metrics, the proposed metric is highly consistent with subjective quality assessment and is a robust measure. The results also indirectly support the hypothesis that binocular summation and binocular difference channels exist. PMID:26717412

  14. Application of wavelets to the evaluation of phantom images for mammography quality control.

    PubMed

    Alvarez, M; Pina, D R; Miranda, J R A; Duarte, S B

    2012-11-01

    The main goal of this work was to develop a methodology for the computerized analysis of American College of Radiology (ACR) mammographic phantom images, to be used in a quality control (QC) program for mammographic services. Discrete wavelet transform processing was applied to enhance the quality of images from the ACR mammographic phantom and to allow a lower dose for automatic evaluations of equipment performance in a QC program. Regions of interest (ROIs) containing phantom test objects (e.g., masses, fibers and specks) were selected for appropriate wavelet processing, which highlighted the characteristics of structures present in each ROI. To minimize false-positive detection, each ROI in the image was submitted to pattern recognition tests, which identified structural details of the selected test objects. Geometric and morphologic parameters of the processed test object images were used to quantify the final level of image quality. The final purpose of this work was to establish the main computational procedures for algorithms for quality evaluation of ACR phantom images. These procedures were implemented, and satisfactory agreement was obtained when the algorithm scores for image quality were compared with the results of assessments by three experienced radiologists. An exploratory study of a potential dose reduction was performed based on the radiologist scores and on the algorithm evaluation of images treated by wavelet processing. The results from the two methods were comparable, although the algorithm tended to indicate a smaller dose reduction than the evaluation by observers. Nevertheless, the objective and more precise criteria used by the algorithm to score image quality gave the computational result a higher degree of confidence. The developed algorithm demonstrates the potential of the wavelet image processing approach for objectively evaluating mammographic image quality in routine QC tests. The implemented computational procedures

  15. Fusion and quality analysis for remote sensing images using contourlet transform

    NASA Astrophysics Data System (ADS)

    Choi, Yoonsuk; Sharifahmadian, Ershad; Latifi, Shahram

    2013-05-01

    Recent developments in remote sensing technologies have provided various images with high spatial and spectral resolutions. However, multispectral images have low spatial resolution and panchromatic images have low spectral resolution. Therefore, image fusion techniques are necessary to improve the spatial resolution of spectral images by injecting spatial details of high-resolution panchromatic images. The objective of image fusion is to provide useful information by improving the spatial resolution and the spectral information of the original images. The fusion results can be utilized in various applications, such as military, medical imaging, and remote sensing. This paper addresses two issues in image fusion: i) image fusion method and ii) quality analysis of fusion results. First, a new contourlet-based image fusion method is presented, which is an improvement over the wavelet-based fusion. This fusion method is then applied to a case study to demonstrate its fusion performance. Fusion framework and scheme used in the study are discussed in detail. Second, quality analysis for the fusion results is discussed. We employed various quality metrics in order to analyze the fusion results both spatially and spectrally. Our results indicate that the proposed contourlet-based fusion method performs better than the conventional wavelet-based fusion methods.

  16. Scientific assessment of the quality of OSIRIS images

    NASA Astrophysics Data System (ADS)

    Tubiana, C.; Güttler, C.; Kovacs, G.; Bertini, I.; Bodewits, D.; Fornasier, S.; Lara, L.; La Forgia, F.; Magrin, S.; Pajola, M.; Sierks, H.; Barbieri, C.; Lamy, P. L.; Rodrigo, R.; Koschny, D.; Rickman, H.; Keller, H. U.; Agarwal, J.; A'Hearn, M. F.; Barucci, M. A.; Bertaux, J.-L.; Besse, S.; Boudreault, S.; Cremonese, G.; Da Deppo, V.; Davidsson, B.; Debei, S.; De Cecco, M.; El-Maarry, M. R.; Fulle, M.; Groussin, O.; Gutiérrez-Marques, P.; Gutiérrez, P. J.; Hoekzema, N.; Hofmann, M.; Hviid, S. F.; Ip, W.-H.; Jorda, L.; Knollenberg, J.; Kramm, J.-R.; Kührt, E.; Küppers, M.; Lazzarin, M.; Lopez Moreno, J. J.; Marzari, F.; Massironi, M.; Michalik, H.; Moissl, R.; Naletto, G.; Oklay, N.; Scholten, F.; Shi, X.; Thomas, N.; Vincent, J.-B.

    2015-11-01

    Context. OSIRIS, the scientific imaging system onboard the ESA Rosetta spacecraft, has been imaging the nucleus of comet 67P/Churyumov-Gerasimenko and its dust and gas environment since March 2014. The images serve different scientific goals, from morphology and composition studies of the nucleus surface, to the motion and trajectories of dust grains, the general structure of the dust coma, the morphology and intensity of jets, gas distribution, mass loss, and dust and gas production rates. Aims: We present the calibration of the raw images taken by OSIRIS and address the accuracy that we can expect in our scientific results based on the accuracy of the calibration steps that we have performed. Methods: We describe the pipeline that has been developed to automatically calibrate the OSIRIS images. Through a series of steps, radiometrically calibrated and distortion corrected images are produced and can be used for scientific studies. Calibration campaigns were run on the ground before launch and throughout the years in flight to determine the parameters that are used to calibrate the images and to verify their evolution with time. We describe how these parameters were determined and we address their accuracy. Results: We provide a guideline to the level of trust that can be put into the various studies performed with OSIRIS images, based on the accuracy of the image calibration.

  17. Image Quality and Radiation Dose for Prospectively Triggered Coronary CT Angiography: 128-Slice Single-Source CT versus First-Generation 64-Slice Dual-Source CT

    PubMed Central

    Gu, Jin; Shi, He-shui; Han, Ping; Yu, Jie; Ma, Gui-na; Wu, Sheng

    2016-01-01

    This study sought to compare the image quality and radiation dose of coronary computed tomography angiography (CCTA) from prospectively triggered 128-slice CT (128-MSCT) versus dual-source 64-slice CT (DSCT). The study was approved by the Medical Ethics Committee at Tongji Medical College of Huazhong University of Science and Technology. Eighty consecutive patients with stable heart rates lower than 70 bpm were enrolled. Forty patients were scanned with 128-MSCT, and the other 40 patients were scanned with DSCT. Two radiologists independently assessed the image quality in segments (diameter >1 mm) according to a three-point scale (1: excellent; 2: moderate; 3: insufficient). The CCTA radiation dose was calculated. Eighty patients with 526 segments in the 128-MSCT group and 544 segments in the DSCT group were evaluated. The image quality 1, 2 and 3 scores were 91.6%, 6.9% and 1.5%, respectively, for the 128-MSCT group and 97.6%, 1.7% and 0.7%, respectively, for the DSCT group, and there was a statistically significant inter-group difference (P ≤ 0.001). The effective doses were 3.0 mSv in the 128-MSCT group and 4.5 mSv in the DSCT group (P ≤ 0.001). Compared with DSCT, CCTA with prospectively triggered 128-MSCT had adequate image quality and a 33.3% lower radiation dose. PMID:27752040

  18. Optimization of image quality and dose for Varian aS500 electronic portal imaging devices (EPIDs)

    NASA Astrophysics Data System (ADS)

    McGarry, C. K.; Grattan, M. W. D.; Cosgrove, V. P.

    2007-12-01

    This study was carried out to investigate whether the electronic portal imaging (EPI) acquisition process could be optimized, and as a result tolerance and action levels be set for the PIPSPro QC-3V phantom image quality assessment. The aim of the optimization process was to reduce the dose delivered to the patient while maintaining a clinically acceptable image quality. This is of interest when images are acquired in addition to the planned patient treatment, rather than images being acquired using the treatment field during a patient's treatment. A series of phantoms were used to assess image quality for different acquisition settings relative to the baseline values obtained following acceptance testing. Eight Varian aS500 EPID systems on four matched Varian 600C/D linacs and four matched Varian 2100C/D linacs were compared for consistency of performance and images were acquired at the four main orthogonal gantry angles. Images were acquired using a 6 MV beam operating at 100 MU min-1 and the low-dose acquisition mode. Doses used in the comparison were measured using a Farmer ionization chamber placed at dmax in solid water. The results demonstrated that the number of reset frames did not have any influence on the image contrast, but the number of frame averages did. The expected increase in noise with corresponding decrease in contrast was also observed when reducing the number of frame averages. The optimal settings for the low-dose acquisition mode with respect to image quality and dose were found to be one reset frame and three frame averages. All patients at the Northern Ireland Cancer Centre are now imaged using one reset frame and three frame averages in the 6 MV 100 MU min-1 low-dose acquisition mode. Routine EPID QC contrast tolerance (±10) and action (±20) levels using the PIPSPro phantom based around expected values of 190 (Varian 600C/D) and 225 (Varian 2100C/D) have been introduced. The dose at dmax from electronic portal imaging has been reduced

  19. Comparison of retinal image quality with spherical and customized aspheric intraocular lenses

    PubMed Central

    Guo, Huanqing; Goncharov, Alexander V.; Dainty, Chris

    2012-01-01

    We hypothesize that an intraocular lens (IOL) with higher-order aspheric surfaces customized for an individual eye provides improved retinal image quality, despite the misalignments that accompany cataract surgery. To test this hypothesis, ray-tracing eye models were used to investigate 10 designs of mono-focal single lens IOLs with rotationally symmetric spherical, aspheric, and customized surfaces. Retinal image quality of pseudo-phakic eyes using these IOLs together with individual variations in ocular and IOL parameters, are evaluated using a Monte Carlo analysis. We conclude that customized lenses should give improved retinal image quality despite the random errors resulting from IOL insertion. PMID:22574257

  20. Testing the quality of images for permanent magnet desktop MRI systems using specially designed phantoms

    NASA Astrophysics Data System (ADS)

    Qiu, Jianfeng; Wang, Guozhu; Min, Jiao; Wang, Xiaoyan; Wang, Pengcheng

    2013-12-01

    Our aim was to measure the performance of desktop magnetic resonance imaging (MRI) systems using specially designed phantoms, by testing imaging parameters and analysing image quality. We designed multifunction phantoms with diameters of 18 and 60 mm for desktop MRI scanners in accordance with the American Association of Physicists in Medicine (AAPM) report no. 28. We scanned the phantoms with three permanent-magnet 0.5 T desktop MRI systems, measured the MRI image parameters, and analysed imaging quality by comparing the data with the AAPM criteria and Chinese national standards. Image parameters included: resonance frequency, high-contrast spatial resolution, low-contrast object detectability, slice thickness, geometrical distortion, signal-to-noise ratio (SNR), and image uniformity. The image parameters of the three desktop MRI machines could be measured using our specially designed phantoms, and most parameters were in line with MRI quality control criteria, including: resonance frequency, high-contrast spatial resolution, low-contrast object detectability, slice thickness, geometrical distortion, image uniformity and slice position accuracy. However, the SNR was significantly lower than in some references. Imaging tests and quality control are necessary for desktop MRI systems and should be performed with an applicable phantom and the corresponding standards.

  1. Impact of Computed Tomography Image Quality on Image-Guided Radiation Therapy Based on Soft Tissue Registration

    SciTech Connect

    Morrow, Natalya V.; Lawton, Colleen A.; Qi, X. Sharon; Li, X. Allen

    2012-04-01

    Purpose: In image-guided radiation therapy (IGRT), different computed tomography (CT) modalities with varying image quality are being used to correct for interfractional variations in patient set-up and anatomy changes, thereby reducing clinical target volume to the planning target volume (CTV-to-PTV) margins. We explore how CT image quality affects patient repositioning and CTV-to-PTV margins in soft tissue registration-based IGRT for prostate cancer patients. Methods and Materials: Four CT-based IGRT modalities used for prostate RT were considered in this study: MV fan beam CT (MVFBCT) (Tomotherapy), MV cone beam CT (MVCBCT) (MVision; Siemens), kV fan beam CT (kVFBCT) (CTVision, Siemens), and kV cone beam CT (kVCBCT) (Synergy; Elekta). Daily shifts were determined by manual registration to achieve the best soft tissue agreement. Effect of image quality on patient repositioning was determined by statistical analysis of daily shifts for 136 patients (34 per modality). Inter- and intraobserver variability of soft tissue registration was evaluated based on the registration of a representative scan for each CT modality with its corresponding planning scan. Results: Superior image quality with the kVFBCT resulted in reduced uncertainty in soft tissue registration during IGRT compared with other image modalities for IGRT. The largest interobserver variations of soft tissue registration were 1.1 mm, 2.5 mm, 2.6 mm, and 3.2 mm for kVFBCT, kVCBCT, MVFBCT, and MVCBCT, respectively. Conclusions: Image quality adversely affects the reproducibility of soft tissue-based registration for IGRT and necessitates a careful consideration of residual uncertainties in determining different CTV-to-PTV margins for IGRT using different image modalities.

  2. Differential gloss quality scale experiment update: an appearance-based image quality standard initiative (INCITS W1.1)

    NASA Astrophysics Data System (ADS)

    Ng, Yee S.; Kuo, Chunghui; Maggard, Eric; Mashtare, Dale; Morris, Peter; Farnand, Susan

    2007-01-01

    The surface characteristics of a printed sample command a parallel group of visual attributes that determine perceived image quality beyond color, and they manifest themselves through various perceived gloss features such as differential gloss, gloss granularity and gloss mottle. Extending the scope of ISO 19799, which covers a limited range of gloss levels and printing technologies, the objective of this study is to derive an appearance-based differential gloss quality scale ranging from very low to very high gloss levels, composed of various printing technology/substrate combinations. Three psychophysical experiment procedures were proposed, including the quality ruler method, pair comparison, and interval scaling with two anchor stimuli; the pair comparison procedure was subsequently dropped, after a preliminary trial study, because of concerns about experiment complexity and data consistency. In this paper, we compare the average quality scale obtained after mapping to the sharpness quality ruler with the average perceived differential gloss obtained via the interval scale. Our numerical analysis indicates a general inverse relationship between perceived image quality and the gloss variation in an image.

  3. Body image and college women's quality of life: The importance of being self-compassionate.

    PubMed

    Duarte, Cristiana; Ferreira, Cláudia; Trindade, Inês A; Pinto-Gouveia, José

    2015-06-01

    This study explored self-compassion as a mediator between body dissatisfaction, social comparison based on body image and quality of life in 662 female college students. Path analysis revealed that while controlling for body mass index, self-compassion mediated the impact of body dissatisfaction and unfavourable social comparisons on psychological quality of life. The path model accounted for 33 per cent of psychological quality of life variance. Findings highlight the importance of self-compassion as a mechanism that may operate on the association between negative body image evaluations and young women's quality of life.

  4. Application of image quality metamerism to investigate gold color area in cultural property

    NASA Astrophysics Data System (ADS)

    Miyata, Kimiyoshi; Tsumura, Norimichi

    2013-01-01

    A concept of image quality metamerism, as an expansion of the conventional metamerism defined in color science, is introduced and applied to segment similar color areas in a cultural property. Image quality metamerism can unify different image quality attributes through a proposed index that expresses the degree of image quality metamerism. As a basic research step, the index consists of color and texture information and is examined on a cultural property. The property investigated is a pair of folding screen paintings that depict the thriving city of Kyoto and are designated as a nationally important cultural property in Japan. Gold-colored areas, painted with colorants of higher granularity than other color areas, are evaluated locally with the image quality metamerism index; the index is then visualized as a map showing the likelihood that each pixel is an image quality metamer of a reference pixel set in the same image. This visualization amounts to a segmentation of areas where color is similar but texture differs. The experimental results showed that the proposed method is effective in identifying the gold-colored areas of the property.

  5. ANALYZING WATER QUALITY WITH IMAGES ACQUIRED FROM AIRBORNE SENSORS

    EPA Science Inventory

    Monitoring different parameters of water quality can be a time consuming and expensive activity. However, the use of airborne light-sensitive (optical) instruments may enhance the abilities of resource managers to monitor water quality in rivers in a timely and cost-effective ma...

  6. Metric-based no-reference quality assessment of heterogeneous document images

    NASA Astrophysics Data System (ADS)

    Nayef, Nibal; Ogier, Jean-Marc

    2015-01-01

    No-reference image quality assessment (NR-IQA) aims at computing an image quality score that best correlates with either human perceived image quality or an objective quality measure, without any prior knowledge of reference images. Although learning-based NR-IQA methods have achieved the best state-of-the-art results so far, those methods perform well only on the datasets on which they were trained. The datasets usually contain homogeneous documents, whereas in reality, document images come from different sources. It is unrealistic to collect training samples of images from every possible capturing device and every document type. Hence, we argue that a metric-based IQA method is more suitable for heterogeneous documents. We propose a NR-IQA method with the objective quality measure of OCR accuracy. The method combines distortion-specific quality metrics. The final quality score is calculated taking into account the proportions of, and the dependency among different distortions. Experimental results show that the method achieves competitive results with learning-based NR-IQA methods on standard datasets, and performs better on heterogeneous documents.

  7. Objectification of perceptual image quality for mobile video

    NASA Astrophysics Data System (ADS)

    Lee, Seon-Oh; Sim, Dong-Gyu

    2011-06-01

    This paper presents an objective video quality evaluation method for quantifying the subjective quality of digital mobile video. The proposed method aims to objectify the subjective quality by extracting edgeness and blockiness parameters. To evaluate the performance of the proposed algorithms, we carried out subjective video quality tests with the double-stimulus continuous quality scale method and obtained differential mean opinion score values for 120 mobile video clips. We then compared the performance of the proposed methods with that of existing methods in terms of the differential mean opinion score with 120 mobile video clips. Experimental results showed that the proposed methods were approximately 10% better than the edge peak signal-to-noise ratio of the J.247 method in terms of the Pearson correlation.

  8. Optimization of image quality in breast tomosynthesis using lumpectomy and mastectomy specimens

    NASA Astrophysics Data System (ADS)

    Timberg, Pontus; Ruschin, Mark; Båth, Magnus; Hemdal, Bengt; Andersson, Ingvar; Svahn, Tony; Mattsson, Sören; Tingberg, Anders

    2007-03-01

    The purpose of this study was to determine how image quality in breast tomosynthesis (BT) is affected when acquisition modes are varied, using human breast specimens containing malignant tumors and/or microcalcifications. Images of thirty-one breast lumpectomy and mastectomy specimens were acquired on a BT prototype based on a Mammomat Novation (Siemens) full-field digital mammography system. BT image acquisitions of the same specimens were performed varying the number of projections, angular range, and detector signal collection mode (binned and nonbinned in the scan direction). An enhanced filtered back projection reconstruction method was applied with constant settings of spectral and slice thickness filters. The quality of these images was evaluated via relative visual grading analysis (VGA) human observer performance experiments using image quality criteria. Results from the relative VGA study indicate that image quality increases with number of projections and angular range. A binned detector collecting mode results in less noise, but reduced resolution of structures. Human breast specimens seem to be suitable for comparing image sets in BT with image quality criteria.

  9. Toward a Blind Deep Quality Evaluator for Stereoscopic Images Based on Monocular and Binocular Interactions.

    PubMed

    Shao, Feng; Tian, Weijun; Lin, Weisi; Jiang, Gangyi; Dai, Qionghai

    2016-05-01

    During recent years, blind image quality assessment (BIQA) has been intensively studied with different machine learning tools. Existing BIQA metrics, however, are not designed for stereoscopic images. We believe this problem can be resolved by separating 3D images and capturing the essential attributes of images via deep neural networks. In this paper, we propose a blind deep quality evaluator (DQE) for stereoscopic images (denoted 3D-DQE) based on monocular and binocular interactions. The key technical steps in the proposed 3D-DQE are to train two separate 2D deep neural networks (2D-DNNs) on 2D monocular images and cyclopean images to model the processes of monocular and binocular quality prediction, and to combine the measured 2D monocular and cyclopean quality scores using different weighting schemes. Experimental results on four public 3D image quality assessment databases demonstrate that, in comparison with existing methods, the devised algorithm achieves highly consistent alignment with subjective assessment. PMID:26960225
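
    The final step described above is a weighted combination of the 2D monocular and cyclopean quality predictions. A minimal sketch of one such weighting scheme follows; the weight value and the simple averaging of the two monocular views are illustrative assumptions, not the paper's trained scheme.

    ```python
    def combine_3d_quality(q_mono_left, q_mono_right, q_cyclopean, w=0.5):
        """Combine 2D quality predictions into a stereoscopic quality score.

        A simple linear weighting between the averaged monocular scores and the
        cyclopean-image score; the weight w is a free parameter, not a value
        reported in the paper.
        """
        q_monocular = 0.5 * (q_mono_left + q_mono_right)
        return w * q_monocular + (1.0 - w) * q_cyclopean

    print(combine_3d_quality(0.72, 0.68, 0.80, w=0.4))
    ```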

  10. Wave aberration of human eyes and new descriptors of image optical quality and visual performance.

    PubMed

    Lombardo, Marco; Lombardo, Giuseppe

    2010-02-01

    The expansion of wavefront-sensing techniques has redefined the meaning of refractive error in clinical ophthalmology. Clinical aberrometers provide detailed measurements of the eye's wavefront aberration. The distribution and contribution of each higher-order aberration to the overall wavefront aberration in the individual eye can now be accurately determined and predicted. Using corneal or ocular wavefront sensors, studies have measured the interindividual and age-related changes in the wavefront aberration in the normal population, with the goal of optimizing refractive surgery outcomes for the individual. New objective optical-quality metrics should lead to better use and interpretation of the newly available information on aberrations of the eye. However, the first metrics introduced, based on sets of Zernike polynomials, are not completely suitable for describing visual quality because they do not relate directly to the quality of the retinal image. Thus, several approaches to describing the real, complex optical performance of human eyes have been implemented. These include objective metrics that quantify the quality of the optical wavefront in the plane of the pupil (i.e., pupil-plane metrics) and others that quantify the quality of the retinal image (i.e., image-plane metrics). These metrics are derived from the wavefront aberration information of the individual eye. This paper reviews recent knowledge of the wavefront aberration in human eyes and discusses the image-quality and optical-quality metrics and predictors that are now routinely calculated by wavefront-sensor software to describe the optical and image quality of the individual eye.
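
    As an example of a pupil-plane metric of the kind mentioned above, the RMS wavefront error can be computed directly from normalized Zernike coefficients. The sketch below assumes an orthonormal (ANSI/Noll-style) normalization, in which the total RMS is the root-sum-square of the coefficients; the coefficient values are illustrative, not measured data.

    ```python
    import numpy as np

    def rms_wavefront_error(zernike_coeffs_um, exclude_first_n=3):
        """Pupil-plane metric: RMS wavefront error from Zernike coefficients.

        Assumes orthonormal (e.g. ANSI/Noll) normalization, so the RMS equals
        the root-sum-square of the coefficients. Piston, tip and tilt (the
        first three modes) are excluded, as they do not blur the image.
        """
        c = np.asarray(zernike_coeffs_um, dtype=float)[exclude_first_n:]
        return float(np.sqrt(np.sum(c ** 2)))

    # Illustrative coefficients in micrometres (not from a real eye).
    coeffs = [0.0, 0.0, 0.0, 0.12, -0.05, 0.30, 0.08, 0.02, -0.01, 0.04]
    print(f"RMS wavefront error (excluding piston/tip/tilt): "
          f"{rms_wavefront_error(coeffs):.3f} um")
    ```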

  11. Survey of mammography practice in Croatia: equipment performance, image quality and dose.

    PubMed

    Faj, Dario; Posedel, Dario; Stimac, Damir; Ivezic, Zdravko; Kasabasic, Mladen; Ivkovic, Ana; Kubelka, Dragan; Ilakovac, Vesna; Brnic, Zoran; Bjelac, Olivera Ciraj

    2008-01-01

    A national audit of mammography equipment performance, image quality and dose has been conducted in Croatia. Film-processing parameters, optical density (OD), average glandular dose (AGD) to the standard breast, viewing conditions and image quality were examined using the TOR(MAM) test object. Average film gradient ranged from 2.6 to 3.7, with a mean of 3.1. Tube voltage used for imaging of the standard 45 mm polymethylmethacrylate phantom ranged from 24 to 34 kV, and OD ranged from 0.75 to 1.94 with a mean of 1.26. AGD to the standard breast ranged from 0.4 to 2.3 mGy with a mean of 1.1 mGy. In addition to the clinical conditions, the authors imaged the standard phantom under reference conditions, with 28 kV and an OD as close as possible to 1.5; AGD then ranged from 0.5 to 2.6 mGy with a mean of 1.3 mGy. Image viewing conditions were generally unsatisfactory, with ambient light up to 500 lx and most of the viewing boxes with luminance between 1000 and 2000 cd/m(2). TOR(MAM) scoring of images taken under clinical and reference conditions was done by local radiologists in local image viewing conditions and by a reference radiologist in good image viewing conditions. The importance of OD and image viewing conditions for diagnostic information was analysed. The survey showed that the main problem in Croatia is the lack of written quality assurance/quality control (QA/QC) procedures. Consequently, equipment performance, image quality and dose are unstable, and activities to improve image quality or to reduce the dose are not evidence-based. This survey also had an educational purpose, introducing in Croatia QC based on the European Commission Guidelines.

  12. Three-dimensional volumetric display of CT data: effect of scan parameters upon image quality.

    PubMed

    Ney, D R; Fishman, E K; Magid, D; Robertson, D D; Kawashima, A

    1991-01-01

    Of the many steps involved in producing high quality three-dimensional (3D) images of CT data, the data acquisition step is of greatest consequence. The principle of "garbage in, garbage out" applies to 3D imaging--bad scanning technique produces equally bad 3D images. We present a formal study of the effect of two basic scanning parameters, slice thickness and slice spacing, on image quality. Three standard test objects were studied using variable CT scanning parameters. The objects chosen were a bone phantom, a cadaver femur with a simulated 5 mm fracture gap, and a cadaver femur with a simulated 1 mm fracture gap. Each object was scanned at three collimations: 8, 4, and 2 mm. For each collimation, four sets of scans were performed using four slice intervals: 8, 4, 3, and 2 mm. The bone phantom was scanned in two positions: oriented perpendicular to the scanning plane and oriented 45 degrees from the scanning plane. Three-dimensional images of the resulting 48 sets of data were produced using volumetric rendering. Blind review of the resultant 48 data sets was performed by three reviewers rating five factors for each image. The images resulting from scans with thin collimation and small table increments proved to rate the highest in all areas. The data obtained using 2 mm slice intervals proved to rate the highest in perceived image quality. Three millimeter slice spacing with 4 mm collimation, which clinically provides a good compromise between image quality and acquisition time and dose, also produced good perceived image quality. The studies with 8 mm slice intervals provided the least detail and introduced the worst inaccuracies and artifacts and were not suitable for clinical use. Statistical analysis demonstrated that slice interval (i.e., table incrementation) was of primary importance and slice collimation was of secondary, although significant, importance in determining perceived 3D image quality.

  13. Do SE(II) electrons really degrade SEM image quality?

    PubMed

    Bernstein, Gary H; Carter, Andrew D; Joy, David C

    2013-01-01

    Generally, in scanning electron microscopy (SEM) imaging, it is desirable that a high-resolution image be composed mainly of those secondary electrons (SEs) generated by the primary electron beam, denoted SE(I). However, in conventional SEM imaging, other, often unwanted, signal components consisting of backscattered electrons (BSEs), and their associated SEs, denoted SE(II), are present; these signal components contribute a random background signal that degrades contrast, and therefore signal-to-noise ratio and resolution. Ideally, the highest resolution SEM image would consist only of the SE(I) component. In SEMs that use conventional pinhole lenses and their associated Everhart-Thornley detectors, the image is composed of several components, including SE(I), SE(II), and some BSE, depending on the geometry of the detector. Modern snorkel lens systems eliminate the BSEs, but not the SE(II)s. We present a microfabricated diaphragm for minimizing the unwanted SE(II) signal components. We present evidence of improved imaging using a microlithographically generated pattern of Au, about 500 nm thick, that blocks most of the undesired signal components, leaving an image composed mostly of SE(I)s. We refer to this structure as a "spatial backscatter diaphragm."

  14. TU-F-9A-01: Balancing Image Quality and Dose in Radiography

    SciTech Connect

    Peck, D; Pasciak, A

    2014-06-15

    Emphasis is often placed on minimizing radiation dose in diagnostic imaging without a complete consideration of the effect on image quality, especially those aspects that affect diagnostic accuracy. This session will include a patient image-based review of diagnostic quantities important to radiologists in conventional radiography, including the effects of body habitus, age, positioning, and the clinical indication of the exam. The relationships between image quality, radiation dose, and radiation risk will be discussed, specifically addressing how these factors are affected by imaging protocols and acquisition parameters and techniques. This session will also discuss some of the actual and perceived radiation risk associated with diagnostic imaging. Even if the probability of radiation-induced cancer is small, the fear associated with radiation persists. Moreover, when a risk carries a benefit to an individual or to society, the risk may be justified with respect to that benefit. But how do you convey the risks and the benefits to people? This requires knowledge of how people perceive risk and how to communicate the risk and the benefit to different populations. In this presentation the sources of error in estimating risk from radiation and some methods used to convey risk are reviewed. Learning Objectives: Understand the image quality metrics that are clinically relevant to radiologists. Understand how acquisition parameters and techniques affect image quality and radiation dose in conventional radiology. Understand the uncertainties in estimates of radiation risk from imaging exams. Learn some methods for effectively communicating radiation risk to the public.

  15. PLÉIADES Project: Assessment of Georeferencing Accuracy, Image Quality, Pansharpening Performence and Dsm/dtm Quality

    NASA Astrophysics Data System (ADS)

    Topan, Hüseyin; Cam, Ali; Özendi, Mustafa; Oruç, Murat; Jacobsen, Karsten; Taşkanat, Talha

    2016-06-01

    Pléiades 1A and 1B are twin optical satellites of the Optical and Radar Federated Earth Observation (ORFEO) program jointly run by France and Italy. They are the first European satellites with sub-meter resolution. Airbus DS (formerly Astrium Geo) runs the MyGIC (formerly Pléiades Users Group) program to validate Pléiades images worldwide for various application purposes. The authors conducted three projects: one within this program, the second supported by the BEU Scientific Research Project Program, and the third supported by TÜBİTAK. Georeferencing accuracy, image quality, pansharpening performance and Digital Surface Model/Digital Terrain Model (DSM/DTM) quality were investigated in these projects. For these purposes, triplet panchromatic (50 cm Ground Sampling Distance (GSD)) and VNIR (2 m GSD) Pléiades 1A images were investigated over the Zonguldak test site (Turkey), which is urbanised, mountainous and covered by dense forest. The georeferencing accuracy was estimated with a standard deviation in X and Y (SX, SY) in the range of 0.45 m by bias-corrected Rational Polynomial Coefficient (RPC) orientation, using ~170 Ground Control Points (GCPs). 3D standard deviations of ±0.44 m in X, ±0.51 m in Y, and ±1.82 m in Z were reached in spite of the very narrow angle of convergence by bias-corrected RPC orientation. The image quality was also investigated with respect to effective resolution, Signal to Noise Ratio (SNR) and blur coefficient. The effective resolution was estimated with a factor slightly below 1.0, meaning that the image quality corresponds to the nominal resolution of 50 cm. The blur coefficients were between 0.39 and 0.46 for the triplet panchromatic images, indicating satisfying image quality. SNR is in the range of other comparable spaceborne images, which may be caused by de-noising of the Pléiades images. The pansharpened images were generated by various methods, and are validated by most common statistical

  16. Image quality improvement in megavoltage cone beam CT using an imaging beam line and a sintered pixelated array system

    SciTech Connect

    Breitbach, Elizabeth K.; Maltz, Jonathan S.; Gangadharan, Bijumon; Bani-Hashemi, Ali; Anderson, Carryn M.; Bhatia, Sudershan K.; Stiles, Jared; Edwards, Drake S.; Flynn, Ryan T.

    2011-11-15

    Purpose: To quantify the improvement in megavoltage cone beam computed tomography (MVCBCT) image quality enabled by the combination of a 4.2 MV imaging beam line (IBL) with a carbon electron target and a detector system equipped with a novel sintered pixelated array (SPA) of translucent Gd2O2S ceramic scintillator. Clinical MVCBCT images are traditionally acquired with the same 6 MV treatment beam line (TBL) that is used for cancer treatment, a standard amorphous Si (a-Si) flat panel imager, and the Kodak Lanex Fast-B (LFB) scintillator. The IBL produces a greater fluence of keV-range photons than the TBL, to which the detector response is more optimal, and the SPA is a more efficient scintillator than the LFB. Methods: A prototype IBL + SPA system was installed on a Siemens Oncor linear accelerator equipped with the MVision(TM) image guided radiation therapy (IGRT) system. A SPA strip consisting of four neighboring tiles and measuring 40 cm by 10.96 cm in the crossplane and inplane directions, respectively, was installed in the flat panel imager. Head- and pelvis-sized phantom images were acquired at doses ranging from 3 to 60 cGy with three MVCBCT configurations: TBL + LFB, IBL + LFB, and IBL + SPA. Phantom image quality at each dose was quantified using the contrast-to-noise ratio (CNR) and modulation transfer function (MTF) metrics. Head and neck, thoracic, and pelvic (prostate) cancer patients were imaged with the three imaging system configurations at multiple doses ranging from 3 to 15 cGy. The systems were assessed qualitatively from the patient image data. Results: For head and neck and pelvis-sized phantom images, imaging doses of 3 cGy or greater, and relative electron densities of 1.09 and 1.48, the CNR average improvement factors for imaging system change of TBL + LFB to IBL + LFB, IBL + LFB to IBL + SPA, and TBL + LFB to IBL + SPA were 1.63 (p < 10^-8), 1.64 (p < 10^-13), 2.66 (p < 10^-9), respectively. For all imaging

  17. Nondestructive spectroscopic and imaging techniques for quality evaluation and assessment of fish and fish products.

    PubMed

    He, Hong-Ju; Wu, Di; Sun, Da-Wen

    2015-01-01

    Nowadays, people have increasingly realized the importance of high quality and nutritional value in the fish and fish products of their daily diet. Quality evaluation and assessment are expected to be conducted using rapid and nondestructive methods in order to satisfy both producers and consumers. During the past two decades, spectroscopic and imaging techniques have been developed to nondestructively estimate and measure quality attributes of fish and fish products. Among these noninvasive methods, visible/near-infrared (VIS/NIR) spectroscopy, computer/machine vision, and hyperspectral imaging have been regarded as powerful and effective analytical tools for fish quality analysis and control. VIS/NIR spectroscopy has been widely applied to determine intrinsic quality characteristics of fish samples, such as moisture, protein, fat, and salt. Computer/machine vision, on the other hand, mainly focuses on the estimation of external features such as color, weight, size, and surface defects. Recently, by incorporating both spectroscopy and imaging techniques in one system, hyperspectral imaging can not only measure the contents of different quality attributes simultaneously, but also obtain the spatial distribution of such attributes when the quality of fish samples is evaluated and measured. This paper systematically reviews the research advances of these three nondestructive optical techniques as applied to fish quality evaluation and determination, and discusses future trends in the development of nondestructive technologies for further quality characterization of fish and fish products.

  18. [Abdominal cure procedures. Adequate use of Nobecutan Spray].

    PubMed

    López Soto, Rosa María

    2009-12-01

    Open abdominal wounds complicated by infection and/or risk of eventration tend to become chronic and usually require frequent, prolonged care. Routine changing of bandages becomes one of the clearest risk factors for the deterioration of perilesional cutaneous integrity. This brings new complications which prolong the evolution of the process, causing an important deterioration in quality of life for the person who suffers it and a considerable increase in health costs. What is needed is a product and a procedure which control the risk of irritation, protect the skin, favor the patient's comfort, and shorten treatment requirements while lowering health care expenses. This report invites medical personnel to think seriously about the scientific rationale and treatment practice behind why and how to apply Nobecutan adequately, and it concludes by stating the benefits of the adequate use of this product. The objective of this report is to guarantee the adequate use of this product in the treatment of complicated abdominal wounds. The product responds to the needs present in these clinical cases, favoring appropriate isolation and protection of the skin while at the same time facilitating the placement and stability of the dressings and bandages used to treat the wounds. For this to happen, correct use of the product is essential; medical personnel must pay attention to the precautions and recommendations for proper application. The author's experience in the routine handling of this product over several years, as part of standardized care procedures for these wounds, corroborates its usefulness; the author considers the product to be highly effective while simple to apply, and finds that it helps provide quality care and optimizes the resources employed.

  19. Methodology for Quantitative Characterization of Fluorophore Photoswitching to Predict Superresolution Microscopy Image Quality.

    PubMed

    Bittel, Amy M; Nickerson, Andrew; Saldivar, Isaac S; Dolman, Nick J; Nan, Xiaolin; Gibbs, Summer L

    2016-01-01

    Single-molecule localization microscopy (SMLM) image quality and resolution strongly depend on the photoswitching properties of fluorophores used for sample labeling. Development of fluorophores with optimized photoswitching will considerably improve SMLM spatial and spectral resolution. Currently, evaluating fluorophore photoswitching requires protein-conjugation before assessment mandating specific fluorophore functionality, which is a major hurdle for systematic characterization. Herein, we validated polyvinyl alcohol (PVA) as a single-molecule environment to efficiently quantify the photoswitching properties of fluorophores and identified photoswitching properties predictive of quality SMLM images. We demonstrated that the same fluorophore photoswitching properties measured in PVA films and using antibody adsorption, a protein-conjugation environment analogous to labeled cells, were significantly correlated to microtubule width and continuity, surrogate measures of SMLM image quality. Defining PVA as a fluorophore photoswitching screening platform will facilitate SMLM fluorophore development and optimal image buffer assessment through facile and accurate photoswitching property characterization, which translates to SMLM fluorophore imaging performance.

  20. Methodology for Quantitative Characterization of Fluorophore Photoswitching to Predict Superresolution Microscopy Image Quality

    PubMed Central

    Bittel, Amy M.; Nickerson, Andrew; Saldivar, Isaac S.; Dolman, Nick J.; Nan, Xiaolin; Gibbs, Summer L.

    2016-01-01

    Single-molecule localization microscopy (SMLM) image quality and resolution strongly depend on the photoswitching properties of fluorophores used for sample labeling. Development of fluorophores with optimized photoswitching will considerably improve SMLM spatial and spectral resolution. Currently, evaluating fluorophore photoswitching requires protein-conjugation before assessment mandating specific fluorophore functionality, which is a major hurdle for systematic characterization. Herein, we validated polyvinyl alcohol (PVA) as a single-molecule environment to efficiently quantify the photoswitching properties of fluorophores and identified photoswitching properties predictive of quality SMLM images. We demonstrated that the same fluorophore photoswitching properties measured in PVA films and using antibody adsorption, a protein-conjugation environment analogous to labeled cells, were significantly correlated to microtubule width and continuity, surrogate measures of SMLM image quality. Defining PVA as a fluorophore photoswitching screening platform will facilitate SMLM fluorophore development and optimal image buffer assessment through facile and accurate photoswitching property characterization, which translates to SMLM fluorophore imaging performance. PMID:27412307

  1. Methodology for Quantitative Characterization of Fluorophore Photoswitching to Predict Superresolution Microscopy Image Quality

    NASA Astrophysics Data System (ADS)

    Bittel, Amy M.; Nickerson, Andrew; Saldivar, Isaac S.; Dolman, Nick J.; Nan, Xiaolin; Gibbs, Summer L.

    2016-07-01

    Single-molecule localization microscopy (SMLM) image quality and resolution strongly depend on the photoswitching properties of fluorophores used for sample labeling. Development of fluorophores with optimized photoswitching will considerably improve SMLM spatial and spectral resolution. Currently, evaluating fluorophore photoswitching requires protein-conjugation before assessment mandating specific fluorophore functionality, which is a major hurdle for systematic characterization. Herein, we validated polyvinyl alcohol (PVA) as a single-molecule environment to efficiently quantify the photoswitching properties of fluorophores and identified photoswitching properties predictive of quality SMLM images. We demonstrated that the same fluorophore photoswitching properties measured in PVA films and using antibody adsorption, a protein-conjugation environment analogous to labeled cells, were significantly correlated to microtubule width and continuity, surrogate measures of SMLM image quality. Defining PVA as a fluorophore photoswitching screening platform will facilitate SMLM fluorophore development and optimal image buffer assessment through facile and accurate photoswitching property characterization, which translates to SMLM fluorophore imaging performance.

  2. LANDSAT-4 image data quality analysis for energy related applications. [nuclear power plant sites

    NASA Technical Reports Server (NTRS)

    Wukelic, G. E. (Principal Investigator)

    1983-01-01

    No usable LANDSAT 4 TM data were obtained for the Hanford site in the Columbia Plateau region, but TM simulator data for a Virginia Electric Company nuclear power plant were used to test image processing algorithms. Principal component analyses of this data set clearly indicated that thermal plumes in surface waters used for reactor cooling would be discernible. Image processing and analysis programs were successfully tested using the 7-band Arkansas test scene, and preliminary analysis of TM data for the Savannah River Plant shows that current interactive image enhancement, analysis and integration techniques can be effectively used for LANDSAT 4 data. Thermal band data appear adequate for gross estimates of thermal changes occurring near operating nuclear facilities, especially in surface water bodies being used for reactor cooling purposes. Additional image processing software was written and tested which provides for more rapid and effective analysis of the 7-band TM data.

  3. Color image quality assessment with biologically inspired feature and machine learning

    NASA Astrophysics Data System (ADS)

    Deng, Cheng; Tao, Dacheng

    2010-07-01

    In this paper, we present a new no-reference quality assessment metric for color images using biologically inspired features (BIFs) and machine learning. In this metric, we first adopt a biologically inspired model to mimic the visual cortex and represent a color image with BIFs, which unify color units, intensity units and C1 units. Then, in order to reduce complexity and aid classification, the high-dimensional features are projected to a low-dimensional representation with manifold learning. Finally, a multiclass classification is performed on this new low-dimensional representation of the image, and the quality assessment is based on the learned classification result so that it agrees with that of human observers. Instead of computing a final score, our method classifies the quality according to the quality scale recommended by the ITU. Preliminary results show that the developed metric can achieve good quality evaluation performance.
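
    The pipeline described above (high-dimensional biologically inspired features, dimensionality reduction, then multiclass classification onto an ITU-style quality scale) can be sketched as follows. PCA is used here only as a simple stand-in for the paper's manifold-learning step, and the feature vectors and labels are random placeholders rather than real BIFs.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import SVC

    # Hypothetical BIF vectors (one row per image) and quality-class labels
    # 0 (bad) .. 4 (excellent), mimicking a five-grade ITU quality scale.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 512))
    y = rng.integers(0, 5, size=200)

    # Dimensionality reduction followed by multiclass classification.
    model = make_pipeline(PCA(n_components=20), SVC(kernel="rbf"))
    model.fit(X, y)
    print("Predicted quality classes:", model.predict(X[:3]))
    ```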

  4. Predicted image quality of a CMOS APS X-ray detector across a range of mammographic beam qualities

    NASA Astrophysics Data System (ADS)

    Konstantinidis, A.

    2015-09-01

    Digital X-ray detectors based on Complementary Metal-Oxide-Semiconductor (CMOS) Active Pixel Sensor (APS) technology were introduced in medical imaging applications in the early 2000s. In a previous study, the X-ray performance (i.e., presampling Modulation Transfer Function (pMTF), Normalized Noise Power Spectrum (NNPS), Signal-to-Noise Ratio (SNR) and Detective Quantum Efficiency (DQE)) of the Dexela 2923MAM CMOS APS X-ray detector was evaluated within the mammographic energy range using monochromatic synchrotron radiation (17-35 keV). In this study, image simulation was used to predict how the mammographic beam quality affects image quality. In particular, the experimentally measured monochromatic pMTF, NNPS and SNR parameters were combined with various mammographic spectral shapes (Molybdenum/Molybdenum (Mo/Mo), Rhodium/Rhodium (Rh/Rh), Tungsten/Aluminium (W/Al) and Tungsten/Rhodium (W/Rh) anode/filtration combinations at 28 kV). The image quality was measured in terms of the Contrast-to-Noise Ratio (CNR) using a synthetic breast phantom (4 cm thick with 50% glandularity). The results can be used to optimize the imaging conditions in order to minimize the patient's Mean Glandular Dose (MGD).
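
    The image quality figure of merit in this record is the contrast-to-noise ratio (CNR) measured on a synthetic phantom. A minimal sketch under the common definition CNR = |mean(ROI) - mean(background)| / std(background); the study's exact definition and phantom data may differ, and the synthetic image here is purely illustrative.

    ```python
    import numpy as np

    def contrast_to_noise_ratio(image, roi_mask, background_mask):
        """CNR of a region of interest against a uniform background region."""
        roi = image[roi_mask]
        bg = image[background_mask]
        return float(abs(roi.mean() - bg.mean()) / bg.std())

    # Tiny synthetic example: a bright insert in a noisy uniform background.
    rng = np.random.default_rng(0)
    img = rng.normal(100.0, 5.0, size=(64, 64))
    img[24:40, 24:40] += 20.0                       # the "lesion" insert
    roi = np.zeros_like(img, dtype=bool); roi[26:38, 26:38] = True
    bg = np.zeros_like(img, dtype=bool);  bg[:10, :10] = True
    print(f"CNR = {contrast_to_noise_ratio(img, roi, bg):.1f}")
    ```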

  5. Medical imaging using ionizing radiation: Optimization of dose and image quality in fluoroscopy

    SciTech Connect

    Jones, A. Kyle; Balter, Stephen; Rauch, Phillip; Wagner, Louis K.

    2014-01-15

    The 2012 Summer School of the American Association of Physicists in Medicine (AAPM) focused on optimization of the use of ionizing radiation in medical imaging. Day 2 of the Summer School was devoted to fluoroscopy and interventional radiology and featured seven lectures. These lectures have been distilled into a single review paper covering equipment specification and siting, equipment acceptance testing and quality control, fluoroscope configuration, radiation effects, dose estimation and measurement, and principles of flat panel computed tomography. This review focuses on modern fluoroscopic equipment and is comprised in large part of information not found in textbooks on the subject. While this review does discuss technical aspects of modern fluoroscopic equipment, it focuses mainly on the clinical use and support of such equipment, from initial installation through estimation of patient dose and management of radiation effects. This review will be of interest to those learning about fluoroscopy, to those wishing to update their knowledge of modern fluoroscopic equipment, to those wishing to deepen their knowledge of particular topics, such as flat panel computed tomography, and to those who support fluoroscopic equipment in the clinic.

  6. Subjective image quality comparison between two digital dental radiographic systems and conventional dental film

    PubMed Central

    Ajmal, Muhammed; Elshinawy, Mohamed I.

    2014-01-01

    Objectives Digital radiography has become an integral part of dentistry. Digital radiography does not require film or dark rooms, reduces X-ray doses, and instantly generates images. The aim of our study was to compare the subjective image quality of two digital dental radiographic systems with conventional dental film. Materials & methods A direct digital (DD) ‘Digital’ system by Sirona, a semi-direct (SD) digital system by Vista-scan, and Kodak ‘E’ speed dental X-ray films were selected for the study. Endodontically-treated extracted teeth (n = 25) were used in the study. Details of enamel, dentin, the dentino-enamel junction, root canal filling (gutta percha), and simulated apical pathology were investigated with the three radiographic systems. The data were subjected to statistical analyses to reveal differences in subjective image quality. Results Conventional dental X-ray film was superior to the digital systems. For the digital systems, DD imaging was superior to SD imaging. Conclusion Conventional film yielded superior image quality that was statistically significant in almost all aspects of comparison. Conventional film was followed in image quality by DD, and SD provided the lowest quality images. Conventional film is still considered the gold standard for diagnosing diseases affecting the jawbone. Recommendations Improved software and hardware for digital imaging systems are now available, and these improvements may now yield images that are comparable in quality to conventional film. However, we recommend that future studies use more observers and additional statistical methods to produce more definitive results. PMID:25382946

  7. Is a vegetarian diet adequate for children.

    PubMed

    Hackett, A; Nathan, I; Burgess, L

    1998-01-01

    The number of people who avoid eating meat is growing, especially among young people. Benefits to health from a vegetarian diet have been reported in adults, but it is not clear to what extent these benefits are due to diet or to other aspects of lifestyle. In children, concern has been expressed about the adequacy of vegetarian diets, especially with regard to growth. The risks and benefits seem to be related to the degree of restriction of the diet; anaemia is probably both the main and the most serious risk, but this also applies to omnivores. Vegan diets are more likely to be associated with malnutrition, especially if the diets are the result of authoritarian dogma. Overall, lacto-ovo-vegetarian children consume diets closer to recommendations than omnivores, and their pre-pubertal growth is at least as good. The simplest strategy when becoming vegetarian may involve reliance on vegetarian convenience foods, which are not necessarily superior in nutritional composition. The vegetarian sector of the food industry could do more to produce foods closer to recommendations. Vegetarian diets can be, but are not necessarily, adequate for children, provided vigilance is maintained, particularly to ensure variety. Identical comments apply to omnivorous diets. Three threats to the diet of children are too much reliance on convenience foods, lack of variety and lack of exercise.

  8. Second Harmonic Imaging improves Echocardiograph Quality on board the International Space Station

    NASA Technical Reports Server (NTRS)

    Garcia, Kathleen; Sargsyan, Ashot; Hamilton, Douglas; Martin, David; Ebert, Douglas; Melton, Shannon; Dulchavsky, Scott

    2008-01-01

    Ultrasound (US) capabilities have been part of the Human Research Facility (HRF) on board the International Space Station (ISS) since 2001. The US equipment on board the ISS includes a first-generation Tissue Harmonic Imaging (THI) option. Harmonic imaging (HI) uses the second harmonic response of the tissue to the ultrasound beam and produces robust tissue detail and signal. Since this is a first-generation THI, there are inherent limitations in tissue penetration. As a breakthrough technology, HI has extensively advanced the field of ultrasound. In cardiac applications, it drastically improves endocardial border detection and has become a common imaging modality. US images were captured and stored as JPEG stills from the ISS video downlink. US images with and without the harmonic imaging option were randomized and provided to volunteers without medical education or US skills for identification of the endocardial border. The results were processed and analyzed using applicable statistical calculations. Measurements made in US images using HI showed improved consistency and reproducibility among observers when compared to fundamental imaging. HI has been embraced by the imaging community at large as it improves the quality and data validity of US studies, especially in difficult-to-image cases. Even with the limitations of first-generation THI, HI improved the quality and measurability of many of the downlinked images from the ISS and should be an option utilized with cardiac imaging on board the ISS in all future space missions.

  9. Image and Diagnosis Quality of X-Ray Image Transmission via Cell Phone Camera: A Project Study Evaluating Quality and Reliability

    PubMed Central

    Heck, Andreas; Hadizadeh, Dariusch R.; Weber, Oliver; Gräff, Ingo; Burger, Christof; Montag, Mareen; Koerfer, Felix; Kabir, Koroush

    2012-01-01

    Introduction Developments in telemedicine have not produced any relevant benefits for orthopedics and trauma surgery to date. For the present project study, several parameters were examined during assessment of x-ray images, which had been photographed and transmitted via cell phone. Materials and Methods A total of 100 x-ray images of various body regions were photographed with a Nokia cell phone and transmitted via email or MMS. Next, the transmitted photographs were reviewed on a laptop computer by five medical specialists and assessed regarding quality and diagnosis. Results Due to their poor quality, the transmitted MMS images could not be evaluated and this path of transmission was therefore excluded. Mean size of transmitted x-ray email images was 394 kB (range: 265–590 kB, SD ±59), average transmission time was 3.29 min ±8 (CI 95%: 1.7–4.9). Applying a score from 1–10 (very poor - excellent), mean image quality was 5.8. In 83.2±4% (mean value ± SD) of cases (median 82; 80–89%), there was agreement between final diagnosis and assessment by the five medical experts who had received the images. However, there was a markedly low concurrence ratio in the thoracic area and in pediatric injuries. Discussion While the rate of accurate diagnosis and indication for surgery was high with a concurrence ratio of 83%, considerable differences existed between the assessed regions, with lowest values for thoracic images. Teleradiology is a cost-effective, rapid method which can be applied wherever wireless cell phone reception is available. In our opinion, this method is in principle suitable for clinical use, enabling the physician on duty to agree on appropriate measures with colleagues located elsewhere via x-ray image transmission on a cell phone. PMID:23082108

  10. Blind image quality assessment: a natural scene statistics approach in the DCT domain.

    PubMed

    Saad, Michele A; Bovik, Alan C; Charrier, Christophe

    2012-08-01

    We develop an efficient, general-purpose, blind/no-reference image quality assessment (NR-IQA) algorithm using a natural scene statistics (NSS) model of discrete cosine transform (DCT) coefficients. The algorithm is computationally appealing, given the availability of platforms optimized for DCT computation. The approach relies on a simple Bayesian inference model to predict image quality scores from extracted features, which are based on an NSS model of the image DCT coefficients: the estimated parameters of the model are used to form features that are indicative of perceptual quality. The resulting algorithm, which we name BLIINDS-II, requires minimal training and adopts a simple probabilistic model for score prediction. Given the features extracted from a test image, the quality score that maximizes the probability of the empirically determined inference model is chosen as the predicted quality score of that image. When tested on the LIVE IQA database, BLIINDS-II is shown to correlate highly with human judgments of quality, at a level that is competitive with the popular SSIM index.
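
    BLIINDS-II builds its features from an NSS model of block DCT coefficients. The reduced sketch below only computes block DCTs and simple per-block statistics of the AC coefficients as stand-ins for the paper's generalized-Gaussian model parameters; the block size and feature choices are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.fft import dctn

    def dct_nss_features(image, block=8):
        """Very reduced NSS-style features from block DCT coefficients.

        Returns the variance and kurtosis of the AC coefficients, averaged
        over all blocks; a stand-in for model-parameter features.
        """
        h, w = image.shape
        feats = []
        for i in range(0, h - block + 1, block):
            for j in range(0, w - block + 1, block):
                coeffs = dctn(image[i:i + block, j:j + block], norm="ortho")
                ac = coeffs.ravel()[1:]                    # drop the DC term
                var = ac.var()
                kurt = ((ac - ac.mean()) ** 4).mean() / (var ** 2 + 1e-12)
                feats.append((var, kurt))
        return np.mean(feats, axis=0)

    rng = np.random.default_rng(0)
    print(dct_nss_features(rng.normal(size=(64, 64))))
    ```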

  11. Dose reduction and image quality optimizations in CT of pediatric and adult patients: phantom studies

    NASA Astrophysics Data System (ADS)

    Jeon, P.-H.; Lee, C.-L.; Kim, D.-H.; Lee, Y.-J.; Jeon, S.-S.; Kim, H.-J.

    2014-03-01

    Multi-detector computed tomography (MDCT) can be used to easily and rapidly perform numerous acquisitions, possibly leading to a marked increase in the radiation dose to individual patients. Technical options dedicated to automatically adjusting the acquisition parameters according to the patient's size are of specific interest in pediatric radiology. A constant tube potential reduction can be achieved for adults and children while maintaining a constant detector energy fluence. To evaluate radiation dose, the weighted CT dose index (CTDIw) was calculated from the CT dose index (CTDI) measured using an ion chamber, and image noise and image contrast were measured from a scanned image to evaluate image quality. The dose-weighted contrast-to-noise ratio (CNRD) was calculated from the radiation dose, image noise, and image contrast measured from a scanned image. The noise derivative (ND) is a quality index for dose efficiency. X-ray spectra with tube voltages ranging from 80 to 140 kVp were used to compute the average photon energy. Image contrast and the corresponding contrast-to-noise ratio (CNR) were determined for lesions of soft tissue, muscle, bone, and iodine relative to a uniform water background; the iodine contrast increases at lower energy (the k-edge of iodine at 33 keV is closer to the beam energy), using mixed water-iodine contrast normalization (water 0; iodine 25, 100, 200, and 1000 HU, respectively). The proposed values correspond to high quality images and can be reduced if only high-contrast organs are assessed. The potential benefit of lowering the tube voltage is an improved CNRD, resulting in a lower radiation dose and optimization of image quality. Adjusting the tube potential in abdominal CT would be useful in current pediatric radiography, where the choice of X-ray techniques generally takes into account the size of the patient as well as the need to balance the conflicting requirements of diagnostic image quality and radiation dose
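
    The abstract introduces the dose-weighted contrast-to-noise ratio (CNRD) computed from contrast, noise, and dose. A minimal sketch, assuming the common convention CNRD = CNR / sqrt(dose) with CTDIw as the dose metric; the study's exact formula is not given in the abstract and the numbers below are illustrative only.

    ```python
    import numpy as np

    def cnrd(contrast_hu, noise_hu, ctdi_w_mgy):
        """Dose-weighted contrast-to-noise ratio (assumed convention).

        CNRD = CNR / sqrt(dose), so that CNRD stays constant for a
        quantum-limited system when only the tube current-time product changes.
        """
        cnr = contrast_hu / noise_hu
        return cnr / np.sqrt(ctdi_w_mgy)

    # Illustrative comparison: an iodinated lesion at a lower tube potential
    # shows higher contrast and noise at a lower CTDIw.
    print(cnrd(contrast_hu=40.0, noise_hu=10.0, ctdi_w_mgy=12.0))
    print(cnrd(contrast_hu=55.0, noise_hu=12.0, ctdi_w_mgy=9.0))
    ```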

  12. Improving best-phase image quality in cardiac CT by motion correction with MAM optimization

    SciTech Connect

    Rohkohl, Christopher; Bruder, Herbert; Stierstorfer, Karl; Flohr, Thomas

    2013-03-15

    Purpose: Research in image reconstruction for cardiac CT aims at using motion correction algorithms to improve the image quality of the coronary arteries. The key to those algorithms is motion estimation, which is currently based on 3-D/3-D registration to align the structures of interest in images acquired in multiple heart phases. The need for an extended scan data range covering several heart phases is critical in terms of radiation dose to the patient and limits the clinical potential of the method. Furthermore, literature reports only slight quality improvements of the motion corrected images when compared to the most quiet phase (best-phase) that was actually used for motion estimation. In this paper a motion estimation algorithm is proposed which does not require an extended scan range but works with a short scan data interval, and which markedly improves the best-phase image quality. Methods: Motion estimation is based on the definition of motion artifact metrics (MAM) to quantify motion artifacts in a 3-D reconstructed image volume. The authors use two different MAMs, entropy, and positivity. By adjusting the motion field parameters, the MAM of the resulting motion-compensated reconstruction is optimized using a gradient descent procedure. In this way motion artifacts are minimized. For a fast and practical implementation, only analytical methods are used for motion estimation and compensation. Both the MAM-optimization and a 3-D/3-D registration-based motion estimation algorithm were investigated by means of a computer-simulated vessel with a cardiac motion profile. Image quality was evaluated using normalized cross-correlation (NCC) with the ground truth template and root-mean-square deviation (RMSD). Four coronary CT angiography patient cases were reconstructed to evaluate the clinical performance of the proposed method. Results: For the MAM-approach, the best-phase image quality could be improved for all investigated heart phases, with a maximum
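
    The two motion artifact metrics (MAM) named above are entropy and positivity. The sketch below shows plausible forms of such metrics evaluated on a reconstructed volume; the exact definitions, units, and thresholds used in the paper are not reproduced here, and the synthetic data and HU cutoff are assumptions.

    ```python
    import numpy as np

    def entropy_mam(volume, bins=256):
        """Entropy of the gray-value histogram; motion artifacts (streaks,
        blur) spread intensities out, so lower entropy indicates fewer
        artifacts."""
        hist, _ = np.histogram(volume, bins=bins, density=True)
        p = hist[hist > 0]
        p = p / p.sum()
        return float(-(p * np.log(p)).sum())

    def positivity_mam(volume_hu):
        """Penalty on physically implausible values (assumed here as HU below
        -1000, i.e. below air), which motion artifacts typically introduce."""
        v = np.minimum(volume_hu + 1000.0, 0.0)   # nonzero only below -1000 HU
        return float(np.sum(v ** 2))

    rng = np.random.default_rng(0)
    vol = rng.normal(-950.0, 100.0, size=(32, 32, 32))   # synthetic HU values
    print(entropy_mam(vol), positivity_mam(vol))
    ```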

  13. Improved quality of intrafraction kilovoltage images by triggered readout of unexposed frames

    SciTech Connect

    Poulsen, Per Rugaard; Jonassen, Johnny; Jensen, Carsten; Schmidt, Mai Lykkegaard

    2015-11-15

    Purpose: The gantry-mounted kilovoltage (kV) imager of modern linear accelerators can be used for real-time tumor localization during radiation treatment delivery. However, the kV image quality often suffers from cross-scatter from the megavoltage (MV) treatment beam. This study investigates readout of unexposed kV frames as a means to improve the kV image quality in a series of experiments and a theoretical model of the observed image quality improvements. Methods: A series of fluoroscopic images were acquired of a solid water phantom with an embedded gold marker and an air cavity with and without simultaneous radiation of the phantom with a 6 MV beam delivered perpendicular to the kV beam with 300 and 600 monitor units per minute (MU/min). An in-house built device triggered readout of zero, one, or multiple unexposed frames between the kV exposures. The unexposed frames contained part of the MV scatter, consequently reducing the amount of MV scatter accumulated in the exposed frames. The image quality with and without unexposed frame readout was quantified as the contrast-to-noise ratio (CNR) of the gold marker and air cavity for a range of imaging frequencies from 1 to 15 Hz. To gain more insight into the observed CNR changes, the image lag of the kV imager was measured and used as input in a simple model that describes the CNR with unexposed frame readout in terms of the contrast, kV noise, and MV noise measured without readout of unexposed frames. Results: Without readout of unexposed kV frames, the quality of intratreatment kV images decreased dramatically with reduced kV frequencies due to MV scatter. The gold marker was only visible for imaging frequencies ≥3 Hz at 300 MU/min and ≥5 Hz for 600 MU/min. Visibility of the air cavity required even higher imaging frequencies. Readout of multiple unexposed frames ensured visibility of both structures at all imaging frequencies and a CNR that was independent of the kV frame rate. The image lag was 12.2%, 2

  14. Improving image quality in compressed ultrafast photography with a space- and intensity-constrained reconstruction algorithm

    NASA Astrophysics Data System (ADS)

    Zhu, Liren; Chen, Yujia; Liang, Jinyang; Gao, Liang; Ma, Cheng; Wang, Lihong V.

    2016-03-01

    The single-shot compressed ultrafast photography (CUP) camera is the fastest receive-only camera in the world. In this work, we introduce an external CCD camera and a space- and intensity-constrained (SIC) reconstruction algorithm to improve the image quality of CUP. The CCD camera takes a time-unsheared image of the dynamic scene. Unlike the previously used unconstrained algorithm, the proposed algorithm incorporates both spatial and intensity constraints, based on the additional prior information provided by the external CCD camera. First, a spatial mask is extracted from the time-unsheared image to define the zone of action. Second, an intensity threshold constraint is determined based on the similarity between the temporally projected image of the reconstructed datacube and the time-unsheared image taken by the external CCD. Both simulation and experimental studies showed that the SIC reconstruction improves the spatial resolution, contrast, and general quality of the reconstructed image.
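
    The SIC algorithm derives a spatial mask and an intensity threshold from the time-unsheared CCD image and applies them to the reconstructed datacube. A minimal sketch of those two constraints follows; the threshold fractions and the way the constraints enter the actual iterative solver are assumptions for illustration.

    ```python
    import numpy as np

    def sic_constraints(unsheared_img, datacube, mask_frac=0.05, thresh_frac=0.1):
        """Space and intensity constraints derived from the unsheared image."""
        # Spatial mask: pixels of the external CCD image above a small fraction
        # of its maximum define the "zone of action".
        mask = unsheared_img > mask_frac * unsheared_img.max()

        # Intensity constraint: the time-projected reconstruction should
        # resemble the unsheared image; suppress pixels whose projection falls
        # below a threshold derived from that image.
        projection = datacube.sum(axis=0)
        keep = projection > thresh_frac * unsheared_img.max()

        constrained = datacube * mask[None, :, :] * keep[None, :, :]
        return mask, constrained

    rng = np.random.default_rng(0)
    cube = rng.random((16, 32, 32))                       # (time, y, x)
    ccd = cube.sum(axis=0) + rng.normal(0, 0.1, (32, 32))
    mask, cube_c = sic_constraints(ccd, cube)
    print(mask.sum(), cube_c.shape)
    ```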

  15. Image quality assessment in panoramic dental radiography: a comparative study between conventional and digital systems.

    PubMed

    Sabarudin, Akmal; Tiau, Yu Jin

    2013-02-01

    This study was designed to compare and evaluate the diagnostic image quality of dental panoramic radiography between conventional and digital systems. Fifty-four panoramic images were collected and divided into three groups: conventional images, and digital images with and without post-processing. Each image was printed out and scored subjectively by two experienced dentists who were blinded to the exposure parameters and system protocols. The evaluation covered anatomical coverage and structures, density and image contrast. The overall image quality scores showed that the digital panoramic system with post-processing scored highest at 3.45±0.19, followed by the digital panoramic system without post-processing and the conventional panoramic system with corresponding scores of 3.33±0.33 and 2.06±0.40. In conclusion, images produced by the digital panoramic system are better in diagnostic image quality than those from the conventional panoramic system. Digital post-processing visualization can significantly improve diagnostic quality in terms of radiographic density and contrast.

  16. Rapid Assessment of Tablet Film Coating Quality by Multispectral UV Imaging.

    PubMed

    Klukkert, Marten; Wu, Jian X; Rantanen, Jukka; Rehder, Soenke; Carstensen, Jens M; Rades, Thomas; Leopold, Claudia S

    2016-08-01

    Chemical imaging techniques are beneficial for control of tablet coating layer quality as they provide spectral and spatial information and allow characterization of various types of coating defects. The purpose of this study was to assess the applicability of multispectral UV imaging for assessment of the coating layer quality of tablets. UV images were used to detect, characterize, and localize coating layer defects such as chipped parts, inhomogeneities, and cracks, as well as to evaluate the coating surface texture. Acetylsalicylic acid tablets were prepared on a rotary tablet press and coated with a polyvinyl alcohol-polyethylene glycol graft copolymer using a pan coater. It was demonstrated that the coating intactness can be assessed accurately and rapidly by UV imaging. The different types of coating defects could be differentiated and localized based on multivariate image analysis and Soft Independent Modeling by Class Analogy applied to the UV images. Tablets with an inhomogeneous coating texture could be identified and distinguished from those with a homogeneous surface texture. Consequently, UV imaging was shown to be well suited for monitoring tablet coating layer quality. UV imaging is a promising technique for fast quality control of tablet coatings because of its high data acquisition speed and its nondestructive analytical nature.

  17. Effect of masking phase-only holograms on the quality of reconstructed images.

    PubMed

    Deng, Yuanbo; Chu, Daping

    2016-04-20

    A phase-only hologram modulates the phase of the incident light and diffracts it efficiently with low energy loss because of its minimal absorption. Much research attention has been focused on how to generate phase-only holograms, but little work has been done to understand the effect and limitations of their partial implementation, possibly due to physical defects and constraints, in particular in practical situations where a phase-only hologram is confined or needs to be sliced or tiled. The present study simulates the effect of masking phase-only holograms on the quality of reconstructed images in three different scenarios with different filling factors, filling positions, and illumination intensity profiles. Quantitative analysis confirms that the image point spread function becomes wider and the image quality decreases, as expected, when the filling factor decreases, while the image quality remains the same for different filling positions. The width of the image point spread function derived from different filling factors shows behavior consistent with that measured directly from the reconstructed image, especially as the filling factor becomes small. Finally, mask profiles of different shapes and intensity distributions are shown to have more complicated effects on the image point spread function, which in turn affects the quality and textures of the reconstructed image. PMID:27140082

  18. The use of modern electronic flat panel devices for image guided radiation therapy:. Image quality comparison, intra fraction motion monitoring and quality assurance applications

    NASA Astrophysics Data System (ADS)

    Nill, S.; Stützel, J.; Häring, P.; Oelfke, U.

    2008-06-01

    With modern radiotherapy delivery techniques such as intensity modulated radiotherapy (IMRT), it is possible to deliver a more conformal dose distribution to the tumor while better sparing the organs at risk (OAR) compared to 3D conventional radiation therapy. Because of the theoretically high dose conformity achievable, it is very important to know the exact position of the target volume during the treatment. With more and more modern linear accelerators equipped with imaging devices, this is now almost possible. These imaging devices use energies between 120 kV and 6 MV, and therefore different detector systems are used, but the vast majority are amorphous silicon flat panel devices with different scintillator screens and build-up materials. The technical details and the image quality of these systems are discussed and first results of the comparison are presented. In addition, new methods for motion management and quality assurance procedures are briefly discussed.

  19. Fixed-quality/variable bit-rate on-board image compression for future CNES missions

    NASA Astrophysics Data System (ADS)

    Camarero, Roberto; Delaunay, Xavier; Thiebaut, Carole

    2012-10-01

    The huge improvements in resolution and dynamic range of current [1][2] and future CNES remote sensing missions (from 5 m/2.5 m in Spot5 to 70 cm in Pleiades) illustrate the increasing need for efficient on-board image compressors. Many techniques have been considered by CNES during the last years in order to go beyond the usual compression ratios: new image transforms or post-transforms [3][4], exceptional processing [5], selective compression [6]. However, even if significant improvements have been obtained, none of those techniques has challenged an essential drawback of current on-board compression schemes: the fixed rate (or compression ratio). This classical assumption provides highly predictable data volumes that simplify storage and transmission. On the other hand, it requires compressing every image segment (strip) of the scene into the same amount of data. Therefore, this fixed bit-rate is dimensioned on worst-case assessments to guarantee the quality requirements in all areas of the image, which is obviously not the most economical way of achieving the required image quality for every single segment. Thus, CNES has started a study to re-use existing compressors [7] in a fixed-quality/variable bit-rate mode. The main idea is to compute a local complexity metric in order to assign the optimum bit-rate that complies with the quality requirements. Consequently, complex areas are compressed less than simple ones, offering a better image quality for an equivalent global bit-rate. The "near-lossless bit-rate" of an image segment has proved to be an efficient image complexity estimator: it links quality criteria and bit-rates through a single theoretical relationship, so compression parameters can be computed automatically in accordance with the quality requirements. In addition, this complexity estimator could be implemented in a one-pass compression and truncation scheme.
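
    The study assigns each image segment a bit-rate derived from a complexity metric (the segment's near-lossless bit-rate). The sketch below uses a simpler complexity proxy, the entropy of horizontal prediction residuals, and a linear mapping to bits per pixel; both are illustrative substitutes for the CNES scheme, and the parameter values are assumptions.

    ```python
    import numpy as np

    def segment_bitrate(segment, quality_factor=0.5, min_bpp=0.5, max_bpp=4.0):
        """Assign a bit-rate (bits per pixel) to an image segment from a
        simple complexity proxy: the entropy of horizontal prediction
        residuals."""
        residuals = np.diff(segment.astype(np.int32), axis=1)
        hist, _ = np.histogram(residuals, bins=512, density=True)
        p = hist[hist > 0]
        p = p / p.sum()
        entropy = -(p * np.log2(p)).sum()        # rough bits per residual
        return float(np.clip(quality_factor * entropy, min_bpp, max_bpp))

    # A flat segment gets the minimum rate, a textured one a higher rate.
    rng = np.random.default_rng(0)
    flat = np.full((128, 128), 100, dtype=np.uint16)
    textured = rng.integers(0, 4096, size=(128, 128), dtype=np.uint16)
    print(segment_bitrate(flat), segment_bitrate(textured))
    ```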

  20. Objective assessment of image quality and dose reduction in CT iterative reconstruction

    SciTech Connect

    Vaishnav, J. Y. Jung, W. C.; Popescu, L. M.; Zeng, R.; Myers, K. J.

    2014-07-15

    Purpose: Iterative reconstruction (IR) algorithms have the potential to reduce radiation dose in CT diagnostic imaging. As these algorithms become available on the market, a standardizable method of quantifying the dose reduction that a particular IR method can achieve would be valuable. Such a method would assist manufacturers in making promotional claims about dose reduction, buyers in comparing different devices, physicists in independently validating the claims, and the United States Food and Drug Administration in regulating the labeling of CT devices. However, the nonlinear nature of commercially available IR algorithms poses challenges to objectively assessing image quality, a necessary step in establishing the amount of dose reduction that a given IR algorithm can achieve without compromising that image quality. This review paper seeks to consolidate information relevant to objectively assessing the quality of CT IR images, and thereby measuring the level of dose reduction that a given IR algorithm can achieve. Methods: The authors discuss task-based methods for assessing the quality of CT IR images and evaluating dose reduction. Results: The authors explain and review recent literature on signal detection and localization tasks in CT IR image quality assessment, the design of an appropriate phantom for these tasks, possible choices of observers (including human and model observers), and methods of evaluating observer performance. Conclusions: Standardizing the measurement of dose reduction is a problem of broad interest to the CT community and to public health. A necessary step in the process is the objective assessment of CT image quality, for which various task-based methods may be suitable. This paper attempts to consolidate recent literature that is relevant to the development and implementation of task-based methods for the assessment of CT IR image quality.

  1. Occupational and patient exposure as well as image quality for full spine examinations with the EOS imaging system

    SciTech Connect

    Damet, J.; Fournier, P.; Monnin, P.; Sans-Merce, M.; Verdun, F. R.; Baechler, S.; Ceroni, D.; Zand, T.

    2014-06-15

    Purpose: EOS (EOS imaging S.A, Paris, France) is an x-ray imaging system that uses slot-scanning technology in order to optimize the trade-off between image quality and dose. The goal of this study was to characterize the EOS system in terms of occupational exposure, organ doses to patients, and image quality for full spine examinations. Methods: Occupational exposure was determined by measuring the ambient dose equivalents in the radiological room during a standard full spine examination. Patient dosimetry was performed using anthropomorphic phantoms representing an adolescent and a five-year-old child. The organ doses were measured with thermoluminescent detectors and then used to calculate effective doses. Patient exposure with EOS was then compared to dose levels reported for conventional radiological systems. Image quality was assessed in terms of spatial resolution and of the different noise contributions in order to evaluate the detector performance of the system. The spatial-frequency signal transfer efficiency of the imaging system was quantified by the detective quantum efficiency (DQE). Results: The use of a protective apron is recommended when the medical staff or parents have to stand near the cubicle in the radiological room. The estimated effective dose to patients undergoing a full spine examination with the EOS system was 290 μSv for an adult and 200 μSv for a child. The MTF and NPS are nonisotropic, with higher values in the scanning direction; they are, in addition, energy-dependent but scanning-speed independent. The system was shown to be quantum-limited, with a maximum DQE of 13%. The relevance of the DQE for slot-scanning systems has been addressed. Conclusions: In summary, the estimated effective dose was 290 μSv for an adult, and the image quality remains comparable to that of conventional systems.

  2. TH-A-16A-01: Image Quality for the Radiation Oncology Physicist: Review of the Fundamentals and Implementation

    SciTech Connect

    Seibert, J; Imbergamo, P

    2014-06-15

    The expansion and integration of diagnostic imaging technologies such as On Board Imaging (OBI) and Cone Beam Computed Tomography (CBCT) into radiation oncology has required radiation oncology physicists to be responsible for and become familiar with assessing image quality. Unfortunately many radiation oncology physicists have had little or no training or experience in measuring and assessing image quality. Many physicists have turned to automated QA analysis software without having a fundamental understanding of image quality measures. This session will review the basic image quality measures of imaging technologies used in the radiation oncology clinic, such as low contrast resolution, high contrast resolution, uniformity, noise, and contrast scale, and how to measure and assess them in a meaningful way. Additionally a discussion of the implementation of an image quality assurance program in compliance with Task Group recommendations will be presented along with the advantages and disadvantages of automated analysis methods. Learning Objectives: Review and understanding of the fundamentals of image quality. Review and understanding of the basic image quality measures of imaging modalities used in the radiation oncology clinic. Understand how to implement an image quality assurance program and to assess basic image quality measures in a meaningful way.

  3. Investigation into the impact of tone reproduction on the perceived image quality of fine art reproductions

    NASA Astrophysics Data System (ADS)

    Farnand, Susan; Jiang, Jun; Frey, Franziska

    2012-01-01

    A project, supported by the Andrew W. Mellon Foundation, evaluating current practices in fine art image reproduction, determining the image quality generally achievable, and establishing a suggested framework for art image interchange was recently completed. (Information regarding the Mellon project and related work may be found at www.artimaging.rit.edu.) To determine the image quality currently being achieved, experimentation was conducted in which a set of objective targets and pieces of artwork in various media were imaged by participating museums and other cultural heritage institutions. Prints and images for display made from the delivered image files at the Rochester Institute of Technology were used as stimuli in psychometric testing in which observers were asked to evaluate the prints as reproductions of the original artwork and as stand-alone images. The results indicated that there were limited differences between assessments made with and without the original present for printed reproductions. For displayed images, the differences were more significant, with lower contrast images being ranked lower and higher contrast images generally ranked higher when the original was not present. This was true for experiments conducted both in a dimly lit laboratory and via the web, indicating that more than viewing conditions were driving this shift.

  4. Development and measurement of the goodness of test images for visual print quality evaluation

    NASA Astrophysics Data System (ADS)

    Halonen, Raisa; Nuutinen, Mikko; Asikainen, Reijo; Oittinen, Pirkko

    2010-01-01

    The aim of the study was to develop a test image for print quality evaluation to improve the current state of the art in testing the quality of digital printing. The image presented by the authors in EI09 portrayed a breakfast scene, the content of which could roughly be divided in four object categories: a woman, a table with objects, a landscape picture and a gray wall. The image was considered to have four main areas of improvement: the busyness of the image, the control of the color world, the salience of the object categories, and the naturalness of the event and the setting. To improve the first image, another test image was developed. Whereas several aspects were improved, the shortcomings of the new image found by visual testing and self-report were in the same four areas. To combine the insights of the two test images and to avoid their pitfalls, a third image was developed. The goodness of the three test images was measured in subjective tests. The third test image was found to address efficiently three of the four improvement areas, only the salience of the objects left a bit to be desired.

  5. Influence of partial k-space filling on the quality of magnetic resonance images*

    PubMed Central

    Jornada, Tiago da Silva; Murata, Camila Hitomi; Medeiros, Regina Bitelli

    2016-01-01

    Objective To study the influence that the scan percentage tool used in partial k-space acquisition has on the quality of images obtained with magnetic resonance imaging equipment. Materials and Methods A Philips 1.5 T magnetic resonance imaging scanner was used in order to obtain phantom images for quality control tests and images of the knee of an adult male. Results There were no significant variations in the uniformity and signal-to-noise ratios with the phantom images. However, analysis of the high-contrast spatial resolution revealed significant degradation when scan percentages of 70% and 85% were used in the acquisition of T1- and T2-weighted images, respectively. There was significant degradation when a scan percentage of 25% was used in T1- and T2-weighted in vivo images (p ≤ 0.01 for both). Conclusion The use of tools that limit the k-space is not recommended without knowledge of their effect on image quality. PMID:27403015

  6. Quality assessment of remote sensing image fusion using feature-based fourth-order correlation coefficient

    NASA Astrophysics Data System (ADS)

    Ma, Dan; Liu, Jun; Chen, Kai; Li, Huali; Liu, Ping; Chen, Huijuan; Qian, Jing

    2016-04-01

    In remote sensing fusion, the spatial details of a panchromatic (PAN) image and the spectral information of multispectral (MS) images are transferred into the fused image according to the characteristics of the human visual system. Thus, a remote sensing image fusion quality assessment called the feature-based fourth-order correlation coefficient (FFOCC) is proposed. FFOCC is based on the feature-based coefficient concept. Spatial features related to the spatial details of the PAN image and spectral features related to the spectral information of the MS images are first extracted from the fused image. Then, the fourth-order correlation coefficient between the spatial and spectral features is calculated and treated as the assessment result. FFOCC was then compared with existing, widely used indices such as the Erreur Relative Globale Adimensionnelle de Synthèse and the quality-with-no-reference index. Results of the fusion and distortion experiments indicate that FFOCC is consistent with subjective evaluation. FFOCC significantly outperforms the other indices in evaluating fusion images that are produced by different fusion methods and that are distorted in spatial and spectral features by blurring, adding noise, and changing intensity. All the findings indicate that the proposed method is an objective and effective quality assessment for remote sensing image fusion.
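
    As a rough illustration only: one possible way to compute a fourth-order correlation coefficient between a spatial and a spectral feature map is a normalized, centered fourth-order cross moment, sketched below. The exact definition used by FFOCC in the paper may differ; the function name and normalization are assumptions.

        import numpy as np

        def fourth_order_cc(spatial_feat: np.ndarray, spectral_feat: np.ndarray) -> float:
            """Illustrative fourth-order correlation between two feature maps:
            a normalized, centered fourth-order cross moment (not necessarily
            the FFOCC definition)."""
            x = spatial_feat.ravel() - spatial_feat.mean()
            y = spectral_feat.ravel() - spectral_feat.mean()
            num = np.mean((x ** 2) * (y ** 2))
            den = np.sqrt(np.mean(x ** 4) * np.mean(y ** 4))
            return float(num / den) if den > 0 else 0.0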

  7. Image forgery detection by means of no-reference quality metrics

    NASA Astrophysics Data System (ADS)

    Battisti, F.; Carli, M.; Neri, A.

    2012-03-01

    In this paper a methodology for digital image forgery detection by means of an unconventional use of image quality assessment is addressed. In particular, the presence of differences in the quality degradations impairing the images is used to reveal a mixture of different source patches. The rationale behind this work lies in the hypothesis that any image may be affected by artifacts, visible or not, caused by the processing steps: acquisition (i.e., lens distortion, acquisition sensor imperfections, analog to digital conversion, single sensor to color pattern interpolation), processing (i.e., quantization, storing, JPEG compression, sharpening, deblurring, enhancement), and rendering (i.e., image decoding, color/size adjustment). These defects are generally spatially localized and their strength strictly depends on the content. For these reasons they can be considered as a fingerprint of each digital image. The proposed approach relies on a combination of image quality assessment systems. The adopted no-reference metric does not require any information about the original image, thus allowing an efficient and stand-alone blind system for image forgery detection. The experimental results show the effectiveness of the proposed scheme.

  8. A cross-platform survey of CT image quality and dose from routine abdomen protocols and a method to systematically standardize image quality

    NASA Astrophysics Data System (ADS)

    Favazza, Christopher P.; Duan, Xinhui; Zhang, Yi; Yu, Lifeng; Leng, Shuai; Kofler, James M.; Bruesewitz, Michael R.; McCollough, Cynthia H.

    2015-11-01

    Through this investigation we developed a methodology to evaluate and standardize CT image quality from routine abdomen protocols across different manufacturers and models. The influence of manufacturer-specific automated exposure control systems on image quality was directly assessed to standardize performance across a range of patient sizes. We evaluated 16 CT scanners across our health system, including Siemens, GE, and Toshiba models. Using each practice's routine abdomen protocol, we measured spatial resolution, image noise, and scanner radiation output (CTDIvol). Axial and in-plane spatial resolutions were assessed through slice sensitivity profile (SSP) and modulation transfer function (MTF) measurements, respectively. Image noise and CTDIvol values were obtained for three different phantom sizes. SSP measurements demonstrated a bimodal distribution in slice widths: an average of 6.2 ± 0.2 mm using GE's 'Plus' mode reconstruction setting and 5.0 ± 0.1 mm for all other scanners. MTF curves were similar for all scanners. Average spatial frequencies at 50%, 10%, and 2% MTF values were 3.24 ± 0.37, 6.20 ± 0.34, and 7.84 ± 0.70 lp/cm, respectively. For all phantom sizes, image noise and CTDIvol varied considerably: 6.5-13.3 HU (noise) and 4.8-13.3 mGy (CTDIvol) for the smallest phantom; 9.1-18.4 HU and 9.3-28.8 mGy for the medium phantom; and 7.8-23.4 HU and 16.0-48.1 mGy for the largest phantom. Using these measurements and benchmark SSP, MTF, and image noise targets, CT image quality can be standardized across a range of patient sizes.
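
    The noise-standardization step can be illustrated with a small sketch that rescales scanner output toward a benchmark noise level, assuming the usual quantum-limited relation noise ∝ 1/sqrt(dose). The function name is illustrative and the sketch ignores kernel, patient-size, and AEC dependencies handled in the actual study.

        def ctdivol_for_target_noise(ctdivol_measured: float,
                                     noise_measured: float,
                                     noise_target: float) -> float:
            """Rescale a protocol's CTDIvol to hit a benchmark noise level,
            assuming noise scales as 1/sqrt(dose); real protocols also depend
            on reconstruction kernel, patient size, and AEC behaviour."""
            return ctdivol_measured * (noise_measured / noise_target) ** 2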

  9. Evaluating the impact of x-ray spectral shape on image quality in flat-panel CT breast imaging

    SciTech Connect

    Glick, Stephen J.; Thacker, Samta; Gong Xing; Liu, Bob

    2007-01-15

    In recent years, there has been an increasing interest in exploring the feasibility of dedicated computed tomography (CT) breast imaging using a flat-panel digital detector in a truncated cone-beam imaging geometry. Preliminary results are promising and it appears as if three-dimensional tomographic imaging of the breast has great potential for reducing the masking effect of superimposed parenchymal structure typically observed with conventional mammography. In this study, a mathematical framework used for determining optimal design and acquisition parameters for such a CT breast imaging system is described. The ideal observer signal-to-noise ratio (SNR) is used as a figure of merit, under the assumptions that the imaging system is linear and shift invariant. Computation of the ideal observer SNR used a parallel-cascade model to predict signal and noise propagation through the detector, as well as a realistic model of the lesion detection task in breast imaging. For all evaluations, the total mean glandular dose for a CT breast imaging study was constrained to be approximately equivalent to that of a two-view conventional mammography study. The framework presented was used to explore the effect of x-ray spectral shape across an extensive range of kVp settings, filter material types, and filter thicknesses. The results give an indication of how spectral shape can affect image quality in flat-panel CT breast imaging.
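
    For orientation, one commonly used form of the ideal-observer SNR for a signal-known-exactly detection task in a linear, shift-invariant system with stationary noise is shown below; the paper's exact cascaded-model expression may differ.

        % Hedged sketch: prewhitening ideal-observer SNR, where \Delta S is the
        % Fourier transform of the expected (system-blurred) lesion signal and
        % NPS the noise power spectrum of the reconstructed image.
        \[
          \mathrm{SNR}_{I}^{2} \;=\; \iint \frac{\lvert \Delta S(u,v)\rvert^{2}}{\mathrm{NPS}(u,v)}\,\mathrm{d}u\,\mathrm{d}v
        \]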

  10. Comparison of two methods for evaluation of image quality of lumbar spine radiographs

    NASA Astrophysics Data System (ADS)

    Tingberg, Anders; Bath, Magnus; Hakansson, Markus; Medin, Joakim; Sandborg, Michael; Alm-Carlsson, Gudrun; Mattsson, Sören; Mansson, Lars Gunnar

    2004-05-01

    The aim was to evaluate the image quality of clinical radiographs with two different methods, and to find correlations between the two methods. Based on fifteen lumbar spine radiographs, two new sets of images were created. A hybrid image set was created by adding two distributions of artificial lesions to each original image. The image quality parameters spatial resolution and noise were manipulated and a total of 210 hybrid images were created. A set of 105 disease-free images was created by applying the same combinations of spatial resolution and noise to the original images. The hybrid images were evaluated with the free-response forced error (FFE) experiment and the normal images with visual grading analysis (VGA) by nine experienced radiologists. The VGA study showed that images with low noise are preferred over images with higher noise levels. The alteration of the MTF had a limited influence on the VGA score. For the FFE study, the visibility of the lesions was independent of the spatial resolution and the noise level. In this study we found no correlation between the two methods, probably because the detectability of the artificial lesions was not influenced by the manipulations of noise level and resolution. Hence, the detection of lesions in lumbar spine radiography may not be a quantum-noise limited task. The results show the strength of the VGA technique in terms of detecting small changes in the two image quality parameters. The method is more robust and has higher statistical power than the ROC-related method and could therefore, in some cases, be more suitable for use in optimization studies.

  11. Importance of the grayscale in early assessment of image quality gains with iterative CT reconstruction

    NASA Astrophysics Data System (ADS)

    Noo, F.; Hahn, K.; Guo, Z.

    2016-03-01

    Iterative reconstruction methods have become an important research topic in X-ray computed tomography (CT), due to their ability to yield improvements in image quality in comparison with the classical filtered backprojection method. There are many ways to design an effective iterative reconstruction method. Moreover, for each design, there may be a large number of parameters that can be adjusted. Thus, early assessment of image quality, before clinical deployment, plays a large role in identifying and refining solutions. Currently, there are few publications reporting on early, task-based assessment of the image quality achieved with iterative reconstruction methods. We report here on such an assessment, and at the same time we illustrate the importance of the grayscale used for image display when conducting this type of assessment. Our results further support observations made by others that the edge-preserving penalty term used in iterative reconstruction is a key ingredient in improving image quality in terms of the detection task. Our results also provide a clear demonstration of an implication made in one of our previous publications, namely that the grayscale window plays an important role in image quality comparisons involving iterative CT reconstruction methods.

  12. Calibration and adaptation of ISO visual noise for I3A's Camera Phone Image Quality initiative

    NASA Astrophysics Data System (ADS)

    Baxter, Donald J.; Murray, Andrew

    2012-01-01

    The I3A Camera Phone Image Quality (CPIQ) visual noise metric described is a core image quality attribute of the wider I3A CPIQ consumer-oriented camera image quality score. This paper describes the selection of a suitable noise metric, the adaptation of the chosen ISO 15739 visual noise protocol for the challenges posed by cell phone cameras, and the mapping of the adapted protocol to subjective image quality loss using a published noise study. Via a simple study, visual noise metrics are shown to discriminate between different noise frequency shapes. The optical non-uniformities prevalent in cell phone cameras and the higher noise levels pose significant challenges to the ISO 15739 visual noise protocol. The non-uniformities are addressed using a frequency-based high-pass filter. Secondly, data clipping at high noise levels is avoided using a Johnson and Fairchild frequency-based luminance contrast sensitivity function (CSF). The final result is a visually based noise metric calibrated in quality-loss just noticeable differences (JNDs) using Aptina Imaging's subjectively calibrated image set.

  13. Design of a practical model-observer-based image quality assessment method for x-ray computed tomography imaging systems.

    PubMed

    Tseng, Hsin-Wu; Fan, Jiahua; Kupinski, Matthew A

    2016-07-01

    The use of a channelization mechanism on model observers not only makes mimicking human visual behavior possible, but also reduces the amount of image data needed to estimate the model observer parameters. The channelized Hotelling observer (CHO) and channelized scanning linear observer (CSLO) have recently been used to assess CT image quality for detection tasks and combined detection/estimation tasks, respectively. Although the use of channels substantially reduces the amount of data required to compute image quality, the number of scans required for CT imaging is still not practical for routine use. It is our desire to further reduce the number of scans required to make CHO or CSLO an image quality tool for routine and frequent system validations and evaluations. This work explores different data-reduction schemes and designs an approach that requires only a few CT scans. Three different kinds of approaches are included in this study: a conventional CHO/CSLO technique with a large sample size, a conventional CHO/CSLO technique with fewer samples, and an approach that we will show requires fewer samples to mimic conventional performance with a large sample size. The mean value and standard deviation of areas under ROC/EROC curve were estimated using the well-validated shuffle approach. The results indicate that an 80% data reduction can be achieved without loss of accuracy. This substantial data reduction is a step toward a practical tool for routine-task-based QA/QC CT system assessment.
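
    A compact sketch of a channelized Hotelling observer (CHO) detectability computation is given below. It assumes pre-computed channel profiles (e.g. Gabor or Laguerre-Gauss channels) and image samples flattened to vectors; it illustrates the standard CHO form rather than the specific data-reduction scheme developed in the paper.

        import numpy as np

        def cho_detectability(signal_imgs, background_imgs, channels):
            """CHO detectability index d'.
            signal_imgs, background_imgs: arrays of shape (n_images, n_pixels);
            channels: matrix of shape (n_pixels, n_channels).
            Channel covariance can be ill-conditioned if n_images is small."""
            vs = signal_imgs @ channels          # channelized signal-present data
            vb = background_imgs @ channels      # channelized signal-absent data
            dv = vs.mean(axis=0) - vb.mean(axis=0)
            S = 0.5 * (np.cov(vs, rowvar=False) + np.cov(vb, rowvar=False))
            w = np.linalg.solve(S, dv)           # Hotelling template in channel space
            return float(np.sqrt(dv @ w))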

  14. Image integrity and aesthetics: towards a more encompassing definition of visual quality

    NASA Astrophysics Data System (ADS)

    Redi, Judith A.; Heynderickx, Ingrid

    2012-03-01

    Visual quality is a multifaceted quantity that depends on multiple attributes of the image/video. According to Keelan's definition, artifactual attributes concern features of the image that, when visible, are annoying and compromise the integrity of the image. Aesthetic attributes, instead, depend on the observer's personal taste. Both types of attributes have been studied in the literature in relation to visual quality, but never in conjunction with each other. In this paper we perform a psychometric experiment to investigate how artifactual and aesthetic attributes interact, and how they affect viewing behavior. In particular, we studied to what extent the appearance of artifacts impacts the aesthetic quality of images. Our results indicate that image integrity does indeed influence the aesthetic quality scores. By means of an eye-tracker, we also recorded and analyzed the viewing behavior of our participants while they scored aesthetic quality. The results reveal that, when scoring aesthetic quality, viewing behavior departs significantly from natural free looking, as well as from the viewing behavior observed for integrity scoring.

  15. Adequate mathematical modelling of environmental processes

    NASA Astrophysics Data System (ADS)

    Chashechkin, Yu. D.

    2012-04-01

    In environmental observations and laboratory visualization, both large-scale flow components such as currents, jets, vortices, and waves and a fine structure are registered (different examples are given). Conventional mathematical modeling, both analytical and numerical, is directed mostly at the description of energetically important flow components; the role of fine structures still remains obscure. The variety of existing models makes it difficult to choose the most adequate one and to assess their mutual degree of correspondence. The goal of the talk is to give a detailed analysis of the kinematics and dynamics of flows. A difference between the concept of "motion", as a transformation of a vector space into itself with distance conservation, and the concept of "flow", as displacement and rotation of deformable "fluid particles", is underlined. Basic physical quantities of the flow, namely density, momentum, energy (entropy), and admixture concentration, are selected as physical parameters defined by the fundamental set, which includes the differential D'Alembert, Navier-Stokes, Fourier's and/or Fick's equations and a closing equation of state. All of them are observable and independent. Calculations of continuous Lie groups show that only the fundamental set is characterized by the ten-parameter Galilean group reflecting the basic principles of mechanics. The presented analysis demonstrates that conventionally used approximations dramatically change the symmetries of the governing equation sets, which leads to their incompatibility or even degeneration. The fundamental set is analyzed taking into account the condition of compatibility. The high order of the set indicates the complex structure of complete solutions corresponding to the physical structure of real flows. Analytical solutions of a number of problems, including flows induced by diffusion on topography and generation of periodic internal waves by compact sources in weakly dissipative media, as well as numerical solutions of the same

  16. COMPARISON OF WIRELESS DETECTORS FOR DIGITAL RADIOGRAPHY SYSTEMS: IMAGE QUALITY AND DOSE.

    PubMed

    Mourik, J E M; van der Tol, P; Veldkamp, W J H; Geleijns, J

    2016-06-01

    The purpose of this study was to compare dose and image quality of wireless detectors for digital chest radiography. Entrance dose at both the detector (EDD) and phantom (EPD) and image quality were measured for wireless detectors of seven different vendors. Both the local clinical protocols and a reference protocol were evaluated. In addition, effective dose was calculated. Main differences in clinical protocols involved tube voltage, tube current, the use of a small or large focus and the use of additional filtration. For the clinical protocols, large differences in EDD (1.4-11.8 µGy), EPD (13.9-80.2 µGy) and image quality (IQFinv: 1.4-4.1) were observed. Effective dose was <0.04 mSv for all protocols. Large differences in performance were observed between the seven different systems. Although effective dose is low, further improvement of imaging technology and acquisition protocols is warranted for optimisation of digital chest radiography.

  17. Hyperspectral laser-induced fluorescence imaging for assessing internal quality of kiwi fruit

    NASA Astrophysics Data System (ADS)

    Liu, Muhua; Liao, Yifeng; Zhou, Xiaomei

    2008-03-01

    This paper describes an experimental study on non-destructive methods for predicting the quality of kiwifruits using fluorescence imaging. The method is based on hyperspectral laser-induced fluorescence imaging in the region between 700 and 1110 nm, and estimates kiwifruit quality in terms of internal sugar content and firmness. A station for acquiring hyperspectral laser-induced fluorescence images was designed, with each component chosen carefully. The fluorescence images acquired by the station were pre-processed by selecting regions of interest (ROIs) of 50 × 100 pixels. A linear regression prediction method estimates the quality of the kiwifruit samples. The classification results show that the station and prediction model enable correct discrimination of kiwifruit internal sugar content and firmness with r = 98.5%, SEP = 0.4 and r = 99.9%, SEP = 0.62, respectively.

  18. Image processing developments and applications for water quality monitoring and trophic state determination

    NASA Technical Reports Server (NTRS)

    Blackwell, R. J.

    1982-01-01

    Remote sensing data analysis for water quality monitoring is evaluated. Data analysis and image processing techniques are applied to LANDSAT remote sensing data to produce an effective operational tool for lake water quality surveying and monitoring. Digital image processing and analysis techniques were designed, developed, tested, and applied to LANDSAT multispectral scanner (MSS) data and conventional surface-acquired data. Utilization of these techniques facilitates the surveying and monitoring of large numbers of lakes in an operational manner. Supervised multispectral classification, when used in conjunction with surface-acquired water quality indicators, is used to characterize water body trophic status. Unsupervised multispectral classification, when interpreted by lake scientists familiar with a specific water body, yields classifications of equal validity with supervised methods and in a more cost-effective manner. Image data base technology is used to great advantage in characterizing other contributing effects to water quality. These effects include drainage basin configuration, terrain slope, soil, precipitation, and land cover characteristics.

  19. JPEG vs. JPEG 2000: an objective comparison of image encoding quality

    NASA Astrophysics Data System (ADS)

    Ebrahimi, Farzad; Chamik, Matthieu; Winkler, Stefan

    2004-11-01

    This paper describes an objective comparison of the image quality of different encoders. Our approach is based on estimating the visual impact of compression artifacts on perceived quality. We present a tool that measures these artifacts in an image and uses them to compute a prediction of the Mean Opinion Score (MOS) obtained in subjective experiments. We show that the MOS predictions by our proposed tool are a better indicator of perceived image quality than PSNR, especially for highly compressed images. For the encoder comparison, we compress a set of 29 test images with two JPEG encoders (Adobe Photoshop and IrfanView) and three JPEG2000 encoders (JasPer, Kakadu, and IrfanView) at various compression ratios. We compute blockiness, blur, and MOS predictions as well as PSNR of the compressed images. Our results show that the IrfanView JPEG encoder produces consistently better images than the Adobe Photoshop JPEG encoder at the same data rate. The differences between the JPEG2000 encoders in our test are less pronounced; JasPer comes out as the best codec, closely followed by IrfanView and Kakadu. Comparing the JPEG- and JPEG2000-encoding quality of IrfanView, we find that JPEG has a slight edge at low compression ratios, while JPEG2000 is the clear winner at medium and high compression ratios.
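
    For reference, the PSNR baseline that the paper argues against can be computed as follows; this is the standard full-reference metric, not the authors' MOS-prediction tool.

        import numpy as np

        def psnr(reference: np.ndarray, test: np.ndarray, peak: float = 255.0) -> float:
            """Peak signal-to-noise ratio in dB between a reference image and a
            compressed version of it."""
            mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
            return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)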

  20. Fast stochastic Wiener filter for superresolution image restoration with information theoretic visual quality assessment

    NASA Astrophysics Data System (ADS)

    Yousef, Amr H.; Li, Jiang; Karim, Mohammad

    2012-05-01

    Super-resolution (SR) refers to reconstructing a single high-resolution (HR) image from a set of subsampled, blurred, and noisy low-resolution (LR) images. The reconstructed image suffers from degradations such as blur, aliasing, photo-detector noise, and registration and fusion errors. A Wiener filter can be used to remove artifacts and enhance the visual quality of the reconstructed images. In this paper, we introduce a new fast stochastic Wiener filter for SR reconstruction and restoration that can be implemented efficiently in the frequency domain. Our derivation depends on the continuous-discrete-continuous (CDC) model that represents most of the degradations encountered during the image-gathering and image-display processes. We incorporate a new parameter that accounts for LR image registration and fusion errors. We also speed up the performance of the filter by constraining it to work on small patches of the images. Besides this, we introduce two figures of merit, information rate and maximum realizable fidelity, which can be used to assess the visual quality of the resultant images. Simulations and experimental results demonstrate that the derived Wiener filter, implemented efficiently in the frequency domain, can reduce aliasing, blurring, and noise and result in a sharper reconstructed image. Quantitative assessment using the proposed figures of merit coincides with the visual qualitative assessment. Finally, we evaluated our filter against other SR techniques and its results were very competitive.
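
    For context, the textbook frequency-domain Wiener deconvolution filter is shown below; the stochastic SR filter derived in the paper extends this form with terms for subsampling/aliasing and registration and fusion error, so the expression here is only a baseline.

        % Hedged sketch: classical Wiener filter, with H the system transfer
        % function and \Phi_n, \Phi_s the noise and signal power spectra.
        \[
          W(u,v) \;=\; \frac{H^{*}(u,v)}{\lvert H(u,v)\rvert^{2} \;+\; \Phi_{n}(u,v)/\Phi_{s}(u,v)}
        \]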

  1. Impact of 4D image quality on the accuracy of target definition.

    PubMed

    Nielsen, Tine Bjørn; Hansen, Christian Rønn; Westberg, Jonas; Hansen, Olfred; Brink, Carsten

    2016-03-01

    Delineation accuracy of target shape and position depends on the image quality. This study investigates whether the image quality on standard 4D systems has an influence comparable to the overall delineation uncertainty. A moving lung target was imaged using a dynamic thorax phantom on three different 4D computed tomography (CT) systems and a 4D cone beam CT (CBCT) system using pre-defined clinical scanning protocols. Peak-to-peak motion and target volume were registered using rigid registration and automatic delineation, respectively. A spatial distribution of the imaging uncertainty was calculated as the distance deviation between the imaged target and the true target shape. The measured motions were smaller than actual motions. There were volume differences of the imaged target between respiration phases. Imaging uncertainties of >0.4 cm were measured in the motion direction which showed that there was a large distortion of the imaged target shape. Imaging uncertainties of standard 4D systems are of similar size as typical GTV-CTV expansions (0.5-1 cm) and contribute considerably to the target definition uncertainty. Optimising and validating 4D systems is recommended in order to obtain the most optimal imaged target shape.

  2. Self-induced thermal distortion effects on target image quality.

    PubMed

    Gebhardt, F G

    1972-06-01

    Experimental results are reported that show the effects of the self-induced thermal lens due to a high power laser beam on imaging or tracking systems viewing along the same propagation path. The thermal distortion effects of a wind are simulated with a low power (≲3 W) CO2 laser beam propagating through a cell of liquid CS2 moving across the beam. The resulting image distortion includes a warping effect analogous to the deflection of the CO2 beam, together with a pronounced demagnification of the central portion of the object. An active optical tracker is simulated with a He-Ne laser beam propagating collinearly with the CO2 beam. The He-Ne beam pattern returned from a specular target is distorted and sharply confined to the outline of the crescent-shaped CO2 beam. Simple ray optics models are used to provide qualitative explanations for the experimental results.

  3. Software to model AXAF-I image quality

    NASA Technical Reports Server (NTRS)

    Ahmad, Anees; Feng, Chen

    1995-01-01

    A modular user-friendly computer program for the modeling of grazing-incidence type x-ray optical systems has been developed. This comprehensive computer software GRAZTRACE covers the manipulation of input data, ray tracing with reflectivity and surface deformation effects, convolution with x-ray source shape, and x-ray scattering. The program also includes the capabilities for image analysis, detector scan modeling, and graphical presentation of the results. A number of utilities have been developed to interface the predicted Advanced X-ray Astrophysics Facility-Imaging (AXAF-I) mirror structural and thermal distortions with the ray-trace. This software is written in FORTRAN 77 and runs on a SUN/SPARC station. An interactive command mode version and a batch mode version of the software have been developed.

  4. Toward a No-Reference Image Quality Assessment Using Statistics of Perceptual Color Descriptors.

    PubMed

    Lee, Dohyoung; Plataniotis, Konstantinos N

    2016-08-01

    Analysis of the statistical properties of natural images has played a vital role in the design of no-reference (NR) image quality assessment (IQA) techniques. In this paper, we propose parametric models describing the general characteristics of chromatic data in natural images. They provide informative cues for quantifying visual discomfort caused by the presence of chromatic image distortions. The established models capture the correlation of chromatic data between spatially adjacent pixels by means of color invariance descriptors. The use of color invariance descriptors is inspired by their relevance to visual perception, since they provide less sensitive descriptions of image scenes against viewing geometry and illumination variations than luminances. In order to approximate the visual quality perception of chromatic distortions, we devise four parametric models derived from invariance descriptors representing independent aspects of color perception: 1) hue; 2) saturation; 3) opponent angle; and 4) spherical angle. The practical utility of the proposed models is examined by deploying them in our new general-purpose NR IQA metric. The metric initially estimates the parameters of the proposed chromatic models from an input image to constitute a collection of quality-aware features (QAF). Thereafter, a machine learning technique is applied to predict visual quality given a set of extracted QAFs. Experimentation performed on large-scale image databases demonstrates that the proposed metric correlates well with the provided subjective ratings of image quality over commonly encountered achromatic and chromatic distortions, indicating that it can be deployed on a wide variety of color image processing problems as a generalized IQA solution. PMID:27305678
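
    A toy sketch of the overall pipeline (chromatic feature extraction followed by a learned mapping to quality scores) is given below. The feature set here is only a stand-in: the paper fits parametric models to color-invariance descriptors (hue, saturation, opponent angle, spherical angle), whereas this sketch uses simple hue/saturation moments, and the regressor choice is an assumption.

        import numpy as np
        from sklearn.svm import SVR
        from skimage.color import rgb2hsv

        def chroma_features(img_rgb: np.ndarray) -> np.ndarray:
            """Toy quality-aware features: low-order statistics of hue and
            saturation (stand-in for the paper's parametric descriptor models)."""
            hsv = rgb2hsv(img_rgb)
            h, s = hsv[..., 0].ravel(), hsv[..., 1].ravel()
            return np.array([h.mean(), h.std(), s.mean(), s.std()])

        # Usage sketch (training images and MOS labels are assumed to exist):
        # X = np.stack([chroma_features(im) for im in training_images])
        # model = SVR().fit(X, training_mos)
        # predicted_quality = model.predict(chroma_features(test_image)[None, :])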

  5. Microtomographic imaging of multiphase flow in porous media: Validation of image analysis algorithms, and assessment of data representativeness and quality

    NASA Astrophysics Data System (ADS)

    Wildenschild, D.; Porter, M. L.

    2009-04-01

    Significant strides have been made in recent years in imaging fluid flow in porous media using x-ray computerized microtomography (CMT) with 1-20 micron resolution; however, difficulties remain in combining representative sample sizes with optimal image resolution and data quality; and in precise quantification of the variables of interest. Tomographic imaging was for many years focused on volume rendering and the more qualitative analyses necessary for rapid assessment of the state of a patient's health. In recent years, many highly quantitative CMT-based studies of fluid flow processes in porous media have been reported; however, many of these analyses are made difficult by the complexities in processing the resulting grey-scale data into reliable applicable information such as pore network structures, phase saturations, interfacial areas, and curvatures. Yet, relatively few rigorous tests of these analysis tools have been reported so far. The work presented here was designed to evaluate the effect of image resolution and quality, as well as the validity of segmentation and surface generation algorithms as they were applied to CMT images of (1) a high-precision glass bead pack and (2) gas-fluid configurations in a number of glass capillary tubes. Interfacial areas calculated with various algorithms were compared to actual interfacial geometries and we found very good agreement between actual and measured surface and interfacial areas. (The test images used are available for download at the website listed below). http://cbee.oregonstate.edu/research/multiphase_data/index.html

  6. Collection and processing data for high quality CCD images.

    SciTech Connect

    Doerry, Armin Walter

    2007-03-01

    Coherent Change Detection (CCD) with Synthetic Aperture Radar (SAR) images is a technique whereby very subtle temporal changes can be discerned in a target scene. However, optimal performance requires carefully matching data collection geometries and adjusting the processing to compensate for imprecision in the collection geometries. Tolerances in the precision of the data collection are discussed, and anecdotal advice is presented for optimum CCD performance. Processing considerations are also discussed.

  7. Calculating Contrast Stretching Variables in Order to Improve Dental Radiology Image Quality

    NASA Astrophysics Data System (ADS)

    Widodo, Haris B.; Soelaiman, Arief; Ramadhani, Yogi; Supriyanti, Retno

    2016-01-01

    Teeth are part of the body's digestive tract and serve to soften food so that it can be digested easily. One branch of science that is instrumental in the treatment and diagnosis of teeth is dental radiology. However, in reality many dental radiology images have low resolution, which hinders making a perfect diagnosis of dental disease. This research aims to improve low-resolution dental radiology images using image processing techniques. This paper discusses the use of the contrast stretching method to improve dental radiology image quality, especially with respect to the calculation of the variables of the contrast stretching method. The results show that contrast stretching is a promising method for improving image quality in a simple but efficient way.
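
    A minimal sketch of percentile-based linear contrast stretching is shown below. The percentile choices and function name are assumptions for illustration; the paper's specific calculation of the stretching variables may differ.

        import numpy as np

        def contrast_stretch(img: np.ndarray, low_pct: float = 2.0, high_pct: float = 98.0) -> np.ndarray:
            """Linear contrast stretching: map the [low_pct, high_pct] intensity
            percentiles to the full 8-bit output range and clip the rest."""
            lo, hi = np.percentile(img, [low_pct, high_pct])
            stretched = (img.astype(np.float64) - lo) / max(hi - lo, 1e-9)
            return np.clip(stretched * 255.0, 0, 255).astype(np.uint8)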

  8. New StatPhantom software for assessment of digital image quality

    NASA Astrophysics Data System (ADS)

    Gurvich, Victor A.; Davydenko, George I.

    2002-04-01

    The rapid development of digital imaging and computer networks, using Picture Archiving and Communication Systems (PACS) and DICOM-compatible devices, increases the requirements on the quality control process in medical imaging departments, but also provides new opportunities for the evaluation of image quality. The new StatPhantom software simplifies statistical techniques based on modern detection theory and ROC analysis, improving the accuracy and reliability of known methods and allowing statistical analysis to be implemented with phantoms of any design. In contrast to manual statistical methods, all calculations, analysis of results, and changes of test element positions in the image of the phantom are implemented by computer. This paper describes the user interface and functionality of the StatPhantom software, its opportunities and advantages in the assessment of various imaging modalities, and the diagnostic preference of an observer. The results obtained by conventional ROC analysis, manual, and computerized statistical methods are analyzed. Different designs of phantoms are considered.

  9. Real-time Strehl and image quality performance estimator at Paranal Observatory

    NASA Astrophysics Data System (ADS)

    Mawet, Dimitri; Smette, Alain; Sarazin, Marc S.; Kuntschner, Harald; Girard, Julien H.

    2014-08-01

    Here we describe a prototype Strehl and image quality performance estimator and its integration into Paranal operations, starting with UT4 and its suite of three infrared instruments: the adaptive optics-fed imager/spectrograph NACO (temporarily out of operations), the integral field unit SINFONI, and the wide-field imager HAWK-I. The real-time estimator processes the ambient conditions (seeing, coherence time, airmass, etc.) from the DIMM and the telescope Shack-Hartmann image analyzer to produce estimates of image quality and Strehl ratio every ~30 seconds. The estimate uses ad hoc instrumental models, based in part on the PAOLA adaptive optics simulator. We discuss the current performance of the estimator versus real IQ and Strehl measurements, its impact on service mode efficiency, prospects for full deployment at other UTs, its use for the adaptive optics facility (AOF), and the inclusion of the SLODAR-measured fine turbulence characteristics.

  10. Restoration of color in a remote sensing image and its quality evaluation

    NASA Astrophysics Data System (ADS)

    Zhang, Zuxun; Li, Zhijiang; Zhang, Jianqing; Wang, Zhihe

    2003-09-01

    This paper focuses on the restoration of color remote sensing images (including airborne photos). A complete approach is recommended. It proposes that two main aspects should be addressed in restoring a remote sensing image: restoration of spatial information and restoration of photometric information. In this proposal, the restoration of spatial information is performed by using the modulation transfer function (MTF) as the degradation function, where the MTF is obtained by measuring the edge curve of the original image. The restoration of photometric information is performed by an improved local maximum entropy algorithm. Furthermore, a valid approach to processing color remote sensing images is recommended: the color image is split into three monochromatic images corresponding to the three visible light bands, which are processed separately and then synthesized with a psychological color vision restriction. Finally, three novel evaluation variables based on image restoration are proposed to evaluate the restoration quality in terms of spatial restoration quality and photometric restoration quality. An evaluation is provided at the end.

  11. Image gathering and coding for digital restoration: Information efficiency and visual quality

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O.; John, Sarah; Mccormick, Judith A.; Narayanswamy, Ramkumar

    1989-01-01

    Image gathering and coding are commonly treated as tasks separate from each other and from the digital processing used to restore and enhance the images. The goal is to develop a method that allows us to assess quantitatively the combined performance of image gathering and coding for the digital restoration of images with high visual quality. Digital restoration is often interactive because visual quality depends on perceptual rather than mathematical considerations, and these considerations vary with the target, the application, and the observer. The approach is based on the theoretical treatment of image gathering as a communication channel (J. Opt. Soc. Am. A 2, 1644 (1985); 5, 285 (1988)). Initial results suggest that the practical upper limit of the information contained in the acquired image data ranges typically from approximately 2 to 4 binary information units (bifs) per sample, depending on the design of the image-gathering system. The associated information efficiency of the transmitted data (i.e., the ratio of information over data) ranges typically from approximately 0.3 to 0.5 bif per bit without coding to approximately 0.5 to 0.9 bif per bit with lossless predictive compression and Huffman coding. The visual quality that can be attained with interactive image restoration improves perceptibly as the available information increases to approximately 3 bifs per sample. However, the perceptual improvements that can be attained with further increases in information are very subtle and depend on the target and the desired enhancement.

  12. Assessment of objective image quality in digital radiography: noninvasive determination of the detective quantum efficiency

    NASA Astrophysics Data System (ADS)

    Kamm, Karl-Friedrich; Steiner, Reinhard; Tilkorn, Karl

    1996-04-01

    In order to determine an objective measure of a system's image quality, we developed a simple, non-invasive measurement procedure to determine the detective quantum efficiency of digital radiographic systems, especially image intensifier-TV systems. To this end we set up measurement procedures for the intensity transfer function (ITF) (also called the characteristic curve), the modulation transfer function (MTF), the noise power spectrum (NPS), and the low-frequency drop (LFD). The quantities ITF, MTF, NPS, and LFD are determined by the analysis of images of simple, standardized test objects (a slit, Al filters of different thickness, and a lead disk). The images are automatically evaluated by means of an Apple Macintosh workstation and the program NIH Image with some special extensions. The resulting quantities MTF, NPS, and LFD are combined to determine the noise equivalent quanta (NEQ) and the detective quantum efficiency (DQE). By means of this measurement procedure, quantities that describe objective image quality, such as NEQ and DQE, can be determined in a simple way. Only a set of 45 images is needed for the diagnosis of a system. This method provides a powerful analysis tool for image quality that is applicable in the field and can be run from a remote location. It may be used in a clinical environment (e.g., in constancy testing).
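
    The combination step can be summarized with the commonly used definitions below, valid for a linearized detector; the paper's exact normalization (including the ITF and LFD corrections) may differ.

        % Hedged sketch: \bar{S} is the mean large-area signal and \bar{q} the
        % incident photon fluence per unit area.
        \[
          \mathrm{NEQ}(f) \;=\; \frac{\bar{S}^{\,2}\,\mathrm{MTF}^{2}(f)}{\mathrm{NPS}(f)},
          \qquad
          \mathrm{DQE}(f) \;=\; \frac{\mathrm{NEQ}(f)}{\bar{q}}
        \]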

  13. Concepts for evaluation of image quality in digital radiology

    NASA Astrophysics Data System (ADS)

    Zscherpel, U.; Ewert, U.; Jechow, M.

    2012-05-01

    Concepts for digital image evaluation are presented for Computed Radiography (CR) and Digital Detector Arrays (DDAs) used for weld inspection. Precise DDA calibration yields an extraordinary increase in contrast sensitivity, of up to 10 times relative to film radiography. Restrictions in spatial resolution caused by the pixel size of the DDA are compensated by the increased contrast sensitivity. The first CR standards were published in 2005 to support the application of phosphor imaging plates in lieu of X-ray film, but they already need a revision based on the experiences reported by many users. One of the key concepts is the use of signal-to-noise ratio (SNR) measurements as an equivalent to the optical density of film and the film system class. The contrast sensitivity, measured by IQI visibility, depends on three essential parameters: the basic spatial resolution (SRb) of the radiographic image, the achieved signal-to-noise ratio (SNR), and the specific contrast (μeff, the effective attenuation coefficient). Knowing these three parameters for a given exposure condition, inspected material, and monitor viewing condition permits the calculation of the just visible IQI element. Furthermore, this enables the optimization of exposure conditions. The new ISO/FDIS 17636-2 describes the practice for digital radiography with CR and DDAs. It considers, for the first time, compensation principles derived from the three essential parameters. The consequences are described.

  14. Telescope polarization and image quality: Lyot coronagraph performance

    NASA Astrophysics Data System (ADS)

    Breckinridge, J. B.; Chipman, R. A.

    2016-07-01

    In this paper we apply a vector representation of physical optics, sometimes called polarization aberration theory, to study image formation in astronomical telescopes and instruments. We describe image formation in terms of interferometry and use the Fresnel polarization equations to show how light, upon propagation through an optical system, becomes partially polarized. We make the observation that orthogonally polarized light does not interfere to form an intensity image. We show how the two polarization aberrations (diattenuation and retardance) distort the system PSF, decrease transmittance, and increase unwanted background above that predicted using nonphysical scalar models. We apply the polarization aberration theory (PolAbT) described earlier (Breckinridge, Lam and Chipman, 2015, PASP 127, 445-468) to the fore-optics of the system designed for AFTA-WFIRST-CGI to obtain a performance estimate. Analysis of the open-literature design using PolAbT leads us to estimate that the WFIRST-CGI contrast will be in the 10^-5 regime at the occulting mask, much above the levels predicted by others (Krist, Nemati and Mennesson, 2016, JATIS 2, 011003). We remind the reader that: 1. polarizers are operators, not filters in the same sense as colored filters; 2. adaptive optics does not correct polarization aberrations; 3. calculations of both diattenuation and retardance are needed to model real-world telescope/coronagraph systems.

  15. High Fidelity System Modeling for High Quality Image Reconstruction in Clinical CT

    PubMed Central

    Do, Synho; Karl, William Clem; Singh, Sarabjeet; Kalra, Mannudeep; Brady, Tom; Shin, Ellie; Pien, Homer

    2014-01-01

    Today, while many researchers focus on improving the regularization term in IR algorithms, they pay less attention to improving the fidelity term. In this paper, we hypothesize that improving the fidelity term will further improve IR image quality in low-dose scanning, which typically causes more noise. The purpose of this paper is to systematically test and examine the role of high-fidelity system models using raw data in the performance of an iterative image reconstruction approach that minimizes an energy functional. We first isolated the fidelity term and analyzed the importance of using focal spot area modeling, flying focal spot location modeling, and active detector area modeling as opposed to just flying focal spot motion. We then compared images using different permutations of all three factors. Next, we tested the ability of the fidelity terms to retain signals upon application of the regularization term with all three factors. We then compared the differences between images generated by the proposed method and Filtered-Back-Projection. Lastly, we compared images of low-dose in vivo data using Filtered-Back-Projection, Iterative Reconstruction in Image Space, and the proposed method using raw data. The initial comparison of difference maps of the reconstructed images showed that the focal spot area model and the active detector area model also have significant impacts on the quality of the images produced. Upon application of the regularization term, images generated using all three factors were able to substantially decrease model mismatch error, artifacts, and noise. When the images generated by the proposed method were tested, conspicuity greatly increased, the noise standard deviation decreased by 90% in homogeneous regions, and resolution also greatly improved. In conclusion, improving the fidelity term to model clinical scanners is essential for generating higher-quality images in low-dose imaging. PMID:25390888
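
    As a generic reference point, an energy functional of the kind discussed (fidelity term plus regularization) is often written in the penalized weighted least-squares form below; the paper's exact functional and weighting may differ, and here A stands for the full system model (focal spot area, flying focal spot positions, active detector area).

        % Hedged sketch: y is the raw measurement vector, A the forward system
        % model, W a statistical weighting matrix, R the regularizer, and
        % \lambda the regularization strength.
        \[
          \hat{x} \;=\; \operatorname*{arg\,min}_{x}\; (y - A x)^{T} W\, (y - A x) \;+\; \lambda\, R(x)
        \]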

  16. Ultra-High-Resolution Computed Tomography of the Lung: Image Quality of a Prototype Scanner

    PubMed Central

    Kakinuma, Ryutaro; Moriyama, Noriyuki; Muramatsu, Yukio; Gomi, Shiho; Suzuki, Masahiro; Nagasawa, Hirobumi; Kusumoto, Masahiko; Aso, Tomohiko; Muramatsu, Yoshihisa; Tsuchida, Takaaki; Tsuta, Koji; Maeshima, Akiko Miyagi; Tochigi, Naobumi; Watanabe, Shun-ichi; Sugihara, Naoki; Tsukagoshi, Shinsuke; Saito, Yasuo; Kazama, Masahiro; Ashizawa, Kazuto; Awai, Kazuo; Honda, Osamu; Ishikawa, Hiroyuki; Koizumi, Naoya; Komoto, Daisuke; Moriya, Hiroshi; Oda, Seitaro; Oshiro, Yasuji; Yanagawa, Masahiro; Tomiyama, Noriyuki; Asamura, Hisao

    2015-01-01

    Purpose The image noise and image quality of a prototype ultra-high-resolution computed tomography (U-HRCT) scanner was evaluated and compared with those of conventional high-resolution CT (C-HRCT) scanners. Materials and Methods This study was approved by the institutional review board. A U-HRCT scanner prototype with 0.25 mm x 4 rows and operating at 120 mAs was used. The C-HRCT images were obtained using a 0.5 mm x 16 or 0.5 mm x 64 detector-row CT scanner operating at 150 mAs. Images from both scanners were reconstructed at 0.1-mm intervals; the slice thickness was 0.25 mm for the U-HRCT scanner and 0.5 mm for the C-HRCT scanners. For both scanners, the display field of view was 80 mm. The image noise of each scanner was evaluated using a phantom. U-HRCT and C-HRCT images of 53 images selected from 37 lung nodules were then observed and graded using a 5-point score by 10 board-certified thoracic radiologists. The images were presented to the observers randomly and in a blinded manner. Results The image noise for U-HRCT (100.87 ± 0.51 Hounsfield units [HU]) was greater than that for C-HRCT (40.41 ± 0.52 HU; P < .0001). The image quality of U-HRCT was graded as superior to that of C-HRCT (P < .0001) for all of the following parameters that were examined: margins of subsolid and solid nodules, edges of solid components and pulmonary vessels in subsolid nodules, air bronchograms, pleural indentations, margins of pulmonary vessels, edges of bronchi, and interlobar fissures. Conclusion Despite a larger image noise, the prototype U-HRCT scanner had a significantly better image quality than the C-HRCT scanners. PMID:26352144

  17. A potential hyperspectral remote sensing imager for water quality measurements

    NASA Astrophysics Data System (ADS)

    Zur, Yoav; Braun, Ofer; Stavitsky, David; Blasberger, Avigdor

    2003-04-01

    Utilization of panchromatic and multispectral remote sensing imagery is widespread and is becoming an established business for commercial suppliers of such imagery, like ISI and others. Some emerging technologies are being used to generate hyperspectral imagery (HSI) from aircraft as well as other platforms. The commercialization of such technology for remote sensing from space is still questionable and depends upon several parameters, including maturity, cost, market reception, and many others. HSI can be used in a variety of applications in agriculture, urban mapping, geology, and others. One outstanding potential use of HSI is for water quality monitoring, the subject studied in this paper. Water quality monitoring is becoming a major area of interest in HSI due to the increase in water demand around the globe. The ability to monitor water quality in real time with both spatial and temporal resolution is one of the advantages of remote sensing. This ability is not limited to measurements of oceans and inland water, but can be applied to drinking and irrigation water reservoirs as well. HSI in the UV-VNIR has the ability to measure a wide range of constituents that define water quality. Among the constituents that can be measured are the pigment concentrations of various algae, chlorophyll a and c, carotenoids, and phycocyanin, thus making it possible to identify the algal phyla. Other parameters that can be measured are TSS (total suspended solids), turbidity, BOD (biological oxygen demand), hydrocarbons, and oxygen demand. The study specifies the properties of such a space-borne device as they result from the spectral signatures and absorption bands of the constituents in question. Other parameters considered are the repetition of measurements, the spatial aspects of the sensor, and the SNR of the sensor in question.

  18. Effects of sparse sampling schemes on image quality in low-dose CT

    SciTech Connect

    Abbas, Sajid; Lee, Taewon; Cho, Seungryong; Shin, Sukyoung; Lee, Rena

    2013-11-15

    Purpose: Various scanning methods and image reconstruction algorithms are actively investigated for low-dose computed tomography (CT) that can potentially reduce a health risk related to radiation dose. Particularly, compressive-sensing (CS) based algorithms have been successfully developed for reconstructing images from sparsely sampled data. Although these algorithms have shown promise in low-dose CT, it has not been studied how sparse sampling schemes affect image quality in CS-based image reconstruction. In this work, the authors present several sparse-sampling schemes for low-dose CT, quantitatively analyze their data property, and compare effects of the sampling schemes on the image quality. Methods: Data properties of several sampling schemes are analyzed with respect to the CS-based image reconstruction using two measures: sampling density and data incoherence. The authors present five different sparse sampling schemes, and simulated those schemes to achieve a targeted dose reduction. Dose reduction factors of about 75% and 87.5%, compared to a conventional scan, were tested. A fully sampled circular cone-beam CT data set was used as a reference, and sparse sampling has been realized numerically based on the CBCT data. Results: It is found that both sampling density and data incoherence affect the image quality in the CS-based reconstruction. Among the sampling schemes the authors investigated, the sparse-view, many-view undersampling (MVUS)-fine, and MVUS-moving cases have shown promising results. These sampling schemes produced images with similar image quality compared to the reference image and their structure similarity index values were higher than 0.92 in the mouse head scan with 75% dose reduction. Conclusions: The authors found that in CS-based image reconstructions both sampling density and data incoherence affect the image quality, and suggest that a sampling scheme should be devised and optimized by use of these indicators. With this strategic
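
    As a rough illustration of the sparse-sampling idea described above, the sketch below selects a sparse-view subset from a fully sampled set of projection angles and scores a reconstruction against the fully sampled reference with the structure similarity index. It is a minimal sketch under assumed names and a 75% dose-reduction factor; it is not the authors' implementation, and the "reconstruction" here is just a perturbed copy of the reference used as a stand-in.

    ```python
    # Minimal sketch: pick a sparse-view subset and score a reconstruction
    # against the fully sampled reference with SSIM. Arrays are stand-ins.
    import numpy as np
    from skimage.metrics import structural_similarity as ssim

    def sparse_view_indices(n_views, dose_reduction=0.75):
        """Keep every k-th projection so that roughly (1 - dose_reduction)
        of the original views remain (25% of views for a 75% reduction)."""
        keep_fraction = 1.0 - dose_reduction
        step = max(1, int(round(1.0 / keep_fraction)))
        return np.arange(0, n_views, step)

    rng = np.random.default_rng(0)
    reference = rng.random((256, 256))                                   # fully sampled reconstruction
    reconstruction = reference + 0.02 * rng.standard_normal((256, 256))  # stand-in CS result

    idx = sparse_view_indices(n_views=360, dose_reduction=0.75)
    print(f"{idx.size} of 360 views retained")
    print("SSIM vs. reference:", ssim(reference, reconstruction, data_range=1.0))
    ```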

  19. Investigation of cone-beam CT image quality trade-off for image-guided radiation therapy.

    PubMed

    Bian, Junguo; Sharp, Gregory C; Park, Yang-Kyun; Ouyang, Jinsong; Bortfeld, Thomas; El Fakhri, Georges

    2016-05-01

    It is well-known that projections acquired over an angular range slightly over 180° (so-called short scan) are sufficient for fan-beam reconstruction. However, due to practical imaging conditions (projection data and reconstruction image discretization, physical factors, and data noise), the short-scan reconstructions may have different appearances and properties from the full-scan (scans over 360°) reconstructions. Nevertheless, short-scan configurations have been used in applications such as cone-beam CT (CBCT) for head-neck-cancer image-guided radiation therapy (IGRT) that only requires a small field of view due to the potential reduced imaging time and dose. In this work, we studied the image quality trade-off for full, short, and full/short scan configurations with both conventional filtered-backprojection (FBP) reconstruction and iterative reconstruction algorithms based on total-variation (TV) minimization for head-neck-cancer IGRT. Anthropomorphic and Catphan phantoms were scanned at different exposure levels with a clinical scanner used in IGRT. Both visualization- and numerical-metric-based evaluation studies were performed. The results indicate that the optimal exposure level and number of views are in the middle range for both FBP and TV-based iterative algorithms and the optimization is object-dependent and task-dependent. The optimal view numbers decrease with the total exposure levels for both FBP and TV-based algorithms. The results also indicate there are slight differences between FBP and TV-based iterative algorithms for the image quality trade-off: FBP seems to be more in favor of larger number of views while the TV-based algorithm is more robust to different data conditions (number of views and exposure levels) than the FBP algorithm. The studies can provide a general guideline for image-quality optimization for CBCT used in IGRT and other applications. PMID:27032676

  20. Investigation of cone-beam CT image quality trade-off for image-guided radiation therapy

    NASA Astrophysics Data System (ADS)

    Bian, Junguo; Sharp, Gregory C.; Park, Yang-Kyun; Ouyang, Jinsong; Bortfeld, Thomas; El Fakhri, Georges

    2016-05-01

    It is well-known that projections acquired over an angular range slightly over 180° (so-called short scan) are sufficient for fan-beam reconstruction. However, due to practical imaging conditions (projection data and reconstruction image discretization, physical factors, and data noise), the short-scan reconstructions may have different appearances and properties from the full-scan (scans over 360°) reconstructions. Nevertheless, short-scan configurations have been used in applications such as cone-beam CT (CBCT) for head-neck-cancer image-guided radiation therapy (IGRT) that only requires a small field of view due to the potential reduced imaging time and dose. In this work, we studied the image quality trade-off for full, short, and full/short scan configurations with both conventional filtered-backprojection (FBP) reconstruction and iterative reconstruction algorithms based on total-variation (TV) minimization for head-neck-cancer IGRT. Anthropomorphic and Catphan phantoms were scanned at different exposure levels with a clinical scanner used in IGRT. Both visualization- and numerical-metric-based evaluation studies were performed. The results indicate that the optimal exposure level and number of views are in the middle range for both FBP and TV-based iterative algorithms and the optimization is object-dependent and task-dependent. The optimal view numbers decrease with the total exposure levels for both FBP and TV-based algorithms. The results also indicate there are slight differences between FBP and TV-based iterative algorithms for the image quality trade-off: FBP seems to be more in favor of larger number of views while the TV-based algorithm is more robust to different data conditions (number of views and exposure levels) than the FBP algorithm. The studies can provide a general guideline for image-quality optimization for CBCT used in IGRT and other applications.

  1. Quality of Life, Body Image and Sexual Functioning in Bariatric Surgery Patients.

    PubMed

    Sarwer, David B; Steffen, Kristine J

    2015-11-01

    This article provides an overview of the literature on quality of life, body image and sexual behaviour in individuals with extreme obesity and who undergo bariatric surgery. Quality of life is a psychosocial construct that includes multiple domains, including health-related quality of life, weight-related quality of life, as well as other psychological constructs such as body image and sexual functioning. A large literature has documented the impairments in quality of life and these other domains in persons with obesity and extreme obesity in particular. These impairments are believed to play an influential role in the decision to undergo bariatric surgery. Individuals who undergo bariatric surgery typically report significant improvements in these and other areas of psychosocial functioning, often before they reach their maximum weight loss. The durability of these changes as patients maintain or regain weight, however, is largely unknown.

  2. Quality of Life, Body Image and Sexual Functioning in Bariatric Surgery Patients.

    PubMed

    Sarwer, David B; Steffen, Kristine J

    2015-11-01

    This article provides an overview of the literature on quality of life, body image and sexual behaviour in individuals with extreme obesity and who undergo bariatric surgery. Quality of life is a psychosocial construct that includes multiple domains, including health-related quality of life, weight-related quality of life, as well as other psychological constructs such as body image and sexual functioning. A large literature has documented the impairments in quality of life and these other domains in persons with obesity and extreme obesity in particular. These impairments are believed to play an influential role in the decision to undergo bariatric surgery. Individuals who undergo bariatric surgery typically report significant improvements in these and other areas of psychosocial functioning, often before they reach their maximum weight loss. The durability of these changes as patients maintain or regain weight, however, is largely unknown. PMID:26608946

  3. Enhancement of positron emission tomography-computed tomography image quality using the principle of stochastic resonance

    PubMed Central

    Pandey, Anil Kumar; Sharma, Sanjay Kumar; Sharma, Punit; Singh, Harmandeep; Patel, Chetan; Sarkar, Kaushik; Kumar, Rakesh; Bal, Chandra Sekhar

    2014-01-01

    Purpose: Acquisition of higher counts improves visual perception of positron emission tomography-computed tomography (PET-CT) images. Larger radiopharmaceutical doses (implying more radiation dose) are administered to acquire these counts in a short time period. However, diagnostic information does not increase after a certain threshold of counts. This study was conducted to develop a post-processing method based on the principle of “stochastic resonance” to improve visual perception of PET-CT images having the required threshold counts. Materials and Methods: PET-CT images (JPEG file format) with low, medium, and high counts in the image were included in this study. The image was corrupted with the addition of Poisson noise. The amplitude of the Poisson noise was adjusted by dividing each pixel by a constant: 1, 2, 4, 8, 16, or 32. The noise amplitude that gave the best image quality was selected on the basis of a high entropy of the output image and high values of the structural similarity index and feature similarity index. Visual perception of the images was evaluated by two nuclear medicine physicians. Results: The variation in structural and feature similarity of the images was not appreciable visually, but statistically the images deteriorated as the noise amplitude increased, while maintaining structural (above 70%) and feature (above 80%) similarity to the input images in all cases. We obtained the best image quality at noise amplitude “4”, at which 88% structural and 95% feature similarity to the input images was retained. Conclusion: This method of stochastic resonance can be used to improve the visual perception of PET-CT images. This can indirectly lead to a reduction of radiation dose. PMID:25400362
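
    One plausible reading of the noise-tuning loop described above is sketched below: Poisson noise whose relative amplitude grows with the divisor is added to the image, and the candidate with the highest output entropy that still retains high structural similarity to the input is kept. The noise model, the 0.70 similarity floor, and the stand-in image are assumptions for illustration, not the authors' code; the feature similarity index is omitted because it is not part of scikit-image.

    ```python
    # Sketch of a stochastic-resonance style noise sweep: add Poisson noise at
    # several amplitudes and keep the amplitude that maximizes output entropy
    # while preserving structural similarity. The noise model is an assumption.
    import numpy as np
    from skimage.measure import shannon_entropy
    from skimage.metrics import structural_similarity as ssim

    def add_scaled_poisson_noise(img, divisor, rng):
        """Draw Poisson counts around img/divisor and rescale, so larger
        divisors give relatively larger fluctuations."""
        lam = np.clip(img / divisor, 0, None)
        return np.clip(rng.poisson(lam) * divisor, 0, 255).astype(float)

    rng = np.random.default_rng(0)
    image = rng.integers(0, 256, size=(128, 128)).astype(float)   # stand-in PET-CT slice

    best = None
    for divisor in (1, 2, 4, 8, 16, 32):
        noisy = add_scaled_poisson_noise(image, divisor, rng)
        entropy = shannon_entropy(noisy)
        similarity = ssim(image, noisy, data_range=255.0)
        print(f"divisor={divisor:2d}  entropy={entropy:.2f}  ssim={similarity:.3f}")
        if similarity > 0.70 and (best is None or entropy > best[1]):
            best = (divisor, entropy)
    print("selected divisor:", best[0] if best else None)
    ```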

  4. Hyperspectral venous image quality assessment for optimum illumination range selection based on skin tone characteristics

    PubMed Central

    2014-01-01

    Background Subcutaneous vein localization is usually performed manually by medical staff to find a suitable vein in which to insert a catheter for medication delivery or blood sampling. The rule of thumb is to find a vein large and straight enough for the medication to flow inside the selected blood vessel without any obstruction. The problem of difficult peripheral venous access arises when a patient’s veins are not visible for reasons such as dark skin tone, presence of hair, high body fat or a dehydrated condition. Methods To enhance the visibility of veins, near-infrared imaging systems are used to assist medical staff in the vein localization process. Optimum illumination is crucial to obtain better image contrast and quality, taking into consideration the limited power and space on portable imaging systems. In this work a hyperspectral image quality assessment is performed to determine the optimum illumination range for a venous imaging system. A database of hyperspectral images from 80 subjects was created and subjects were divided into four different classes on the basis of their skin tone. In this paper the results of the hyperspectral image analyses are presented as a function of the skin tone of patients. For each patient, four mean images were constructed by averaging over spectral spans of 50 nm within the near-infrared range, i.e. 750–950 nm. Statistical quality measures were used to analyse these images. Conclusion It is concluded that the wavelength range of 800 to 850 nm serves as the optimum illumination range to obtain the best near-infrared venous image quality for each type of skin tone. PMID:25087016
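
    The sketch below illustrates, under assumed data shapes, how the four 50 nm mean images over 750-950 nm might be formed from a hyperspectral cube and scored with a simple contrast statistic. The wavelength grid, the stand-in cube, and the choice of Michelson contrast as the quality measure are assumptions, not the authors' protocol.

    ```python
    # Sketch: collapse a hyperspectral cube into four 50 nm mean images over
    # 750-950 nm and compute a simple contrast statistic for each window.
    import numpy as np

    wavelengths = np.arange(750, 951, 5)                # nm, one band every 5 nm (assumed)
    cube = np.random.rand(len(wavelengths), 240, 320)   # stand-in cube (bands, rows, cols)

    for lo, hi in [(750, 800), (800, 850), (850, 900), (900, 950)]:
        sel = (wavelengths >= lo) & (wavelengths < hi)
        mean_img = cube[sel].mean(axis=0)
        # Michelson-style contrast as one possible statistical quality measure.
        contrast = (mean_img.max() - mean_img.min()) / (mean_img.max() + mean_img.min())
        print(f"{lo}-{hi} nm: mean={mean_img.mean():.3f}, contrast={contrast:.3f}")
    ```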

  5. Analysis of imaging quality under the systematic parameters for thermal imaging system

    NASA Astrophysics Data System (ADS)

    Liu, Bin; Jin, Weiqi

    2009-07-01

    The integration of a thermal imaging system with a radar system can increase the range of target identification as well as strengthen the accuracy and reliability of detection; such integrated systems are a state-of-the-art and mainstream approach to searching for intruding targets and guarding homeland security. In operation, however, the thermal imaging system can produce degraded images, which may have serious consequences during search and detection. In this paper, we study and reveal why and how these degraded images occur, using the principles of light-wave propagation, and we establish a mathematical imaging model that describes the ray-transmission process. In a further analysis, we give special attention to the systematic parameters of the model, analysing in detail each parameter that can affect the imaging process and the way in which it does so. From this comprehensive study we obtain detailed information about how these parameters govern the diffraction phenomena. The analytical results were confirmed through comparison between experimental images and MATLAB-simulated images, while simulated images based on the revised parameters show good agreement with images acquired in reality.

  6. The display of photographic-quality images on the Web: a comparison of two technologies.

    PubMed

    Jao, C S; Hier, D B; Brint, S U

    1999-03-01

    Downloading medical images on the Web creates certain compromises. The tradeoff is between higher resolution and faster download times. As resolution increases, download times increase. High-resolution (photographic quality) electronic images can potentially play a key role in medical education and patient care. On the Internet, images are typically formatted as Graphics Interchange Format (GIF) or the Joint Photographic Experts Group (JPEG) files. However, these formats are associated with considerable data loss in both color depth and image resolution. Furthermore, these images are available in a single resolution and have no capability of allowing the user to adjust resolution as needed. Images in the photo compact disc (PCD) format have higher resolutions than GIF or JPEG, but suffer the disadvantage of large file sizes leading to long download times on the Web. Furthermore, native web browsers are not currently able to read PCD files. The FlashPix format (FPX) offers distinct advantages over the PCD, GIF, and JPEG formats for display of high-resolution images on the Web. A Java applet can be easily downloaded for viewing FPX images. FPX images are higher resolution than JPEG and GIF images. FPX images offer rich resolutions comparable to PCD images with shorter download times. PMID:10719505

  7. Quantifying variability within water samples: the need for adequate subsampling.

    PubMed

    Donohue, Ian; Irvine, Kenneth

    2008-01-01

    Accurate and precise determination of the concentration of nutrients and other substances in waterbodies is an essential requirement for supporting effective management and legislation. Owing primarily to logistic and financial constraints, however, national and regional agencies responsible for monitoring surface waters tend to quantify chemical indicators of water quality using a single sample from each waterbody, thus largely ignoring spatial variability. We show here that total sample variability, which comprises both analytical variability and within-sample heterogeneity, of a number of important chemical indicators of water quality (chlorophyll a, total phosphorus, total nitrogen, soluble molybdate-reactive phosphorus and dissolved inorganic nitrogen) varies significantly both over time and among determinands, and can be extremely high. Within-sample heterogeneity, whose mean contribution to total sample variability ranged between 62% and 100%, was significantly higher in samples taken from rivers compared with those from lakes, and was shown to be reduced by filtration. Our results show clearly that neither a single sample, nor even two sub-samples from that sample is adequate for the reliable, and statistically robust, detection of changes in the quality of surface waters. We recommend strongly that, in situations where it is practicable to take only a single sample from a waterbody, a minimum of three sub-samples are analysed from that sample for robust quantification of both the concentrations of determinands and total sample variability. PMID:17706740
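
    As a rough illustration of the variance partition implied above, the sketch below estimates how much of the total sample variability is attributable to within-sample heterogeneity versus analytical variability, from replicate analyses of several sub-samples. The data layout and the moment-based partition are illustrative assumptions, not the authors' protocol.

    ```python
    # Sketch: partition total sample variability into analytical variability and
    # within-sample heterogeneity from replicate sub-sample measurements.
    import numpy as np

    # rows = sub-samples from one water sample, cols = repeat analyses of each sub-sample
    replicates = np.array([
        [12.1, 12.3, 12.2],
        [14.0, 13.8, 14.1],
        [11.5, 11.7, 11.6],
    ])

    n_repeats = replicates.shape[1]
    analytical = replicates.var(axis=1, ddof=1).mean()            # within sub-sample
    between = replicates.mean(axis=1).var(ddof=1)                 # between sub-sample means
    heterogeneity = max(between - analytical / n_repeats, 0.0)    # moment estimator
    total = heterogeneity + analytical

    print(f"within-sample heterogeneity: {100 * heterogeneity / total:.0f}% of total variability")
    ```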

  8. Diagnostic image quality of hysterosalpingography: ionic versus non ionic water soluble iodinated contrast media

    PubMed Central

    Mohd Nor, H; Jayapragasam, KJ; Abdullah, BJJ

    2009-01-01

    Objective To compare the diagnostic image quality between three different water soluble iodinated contrast media in hysterosalpingography (HSG). Material and method In a prospective randomised study of 204 patients, the diagnostic quality of images obtained after hysterosalpingography was evaluated using Iopramide (106 patients) and Ioxaglate (98 patients). 114 patients who had undergone HSG examination using Iodamide were analysed retrospectively. Image quality was assessed by three radiologists independently based on an objective set of criteria. The results were statistically analysed using the Kruskal-Wallis and Mann-Whitney U tests. Results Visualisation of fimbrial rugae was significantly better with Iopramide and Ioxaglate than with Iodamide. All contrast media provided acceptable diagnostic image quality with regard to the uterine and fallopian tube outlines and peritoneal spill. Uterine opacification was noted to be too dense with all three contrast media and not optimal for the assessment of intrauterine pathology. A higher incidence of contrast intravasation was noted in the Iodamide group. Similarly, the number of patients diagnosed with bilateral blocked fallopian tubes was also higher in the Iodamide group. Conclusion HSG using low osmolar contrast media (Iopramide and Ioxaglate) demonstrated diagnostic image quality similar to HSG using conventional high osmolar contrast media (Iodamide). However, all three contrast media were found to be too dense for the detection of intrauterine pathology. The better visualisation of the fimbrial outline using Ioxaglate and Iopramide was attributed to the low viscosity of these contrast media. The increased incidence of contrast media intravasation and bilateral tubal blockage using Iodamide is probably related to its higher viscosity. PMID:21611058

  9. Image quality assessment of three cone beam CT machines using the SEDENTEXCT CT phantom

    PubMed Central

    Bamba, J; Araki, K; Endo, A; Okano, T

    2013-01-01

    Objectives: The SEDENTEXCT Project proposed quality assurance (QA) methods and introduced a QA image quality phantom. A new prototype was recently introduced that may be improved according to previous reports. The purpose of this study was to evaluate image quality in various protocols of three cone beam CT (CBCT) machines using the proposed QA phantom. Methods: Using three CBCT machines, nine image quality parameters, including image homogeneity (noise), uniformity, geometrical distortion, pixel intensity value, contrast resolution, spatial resolution [line pair (LP) chart, point spread function (PSF) and modulation transfer function (MTF)] and metal artefacts, were evaluated using the QA phantom proposed by SEDENTEXCT. Exposure parameters, slice thickness and field-of-view position were varied, for a total of 22 protocols. Results: Many protocols showed a uniform gray value distribution except in the minimum slice thickness images acquired using the 3D Accuitomo 80 (Morita, Kyoto, Japan) and Veraviewepocs 3Df (Morita). Noise levels differed among the protocols. There was no geometric distortion, and the pixel intensity values were correlated with the CT value. Low contrast resolution differed among the protocols, but high contrast resolution performed well in all. Many protocols showed that the maximum line pair was larger than 1 LP mm−1 but smaller than 3 LP mm−1. PSF and MTF did not correlate well with the pixel size. The measured metal artefact areas varied for each device. Conclusions: We studied the image quality of three CBCT machines using the SEDENTEXCT phantom. Image quality varied with exposure protocols and machines. PMID:23956235

  10. Image quality associated with the use of an MR-compatible incubator in neonatal neuroimaging

    PubMed Central

    O’Regan, K; Filan, P; Pandit, N; Maher, M; Fanning, N

    2012-01-01

    Objectives MRI in the neonate poses significant challenges associated with patient transport and monitoring, and the potential for diminished image quality owing to patient motion. The objective of this study was to evaluate the usefulness of a dedicated MR-compatible incubator with integrated radiofrequency coils in improving image quality of MRI studies of the brain acquired in term and preterm neonates using standard MRI equipment. Methods Subjective and objective analyses of image quality of neonatal brain MR examinations were performed before and after the introduction of an MR-compatible incubator. For all studies, the signal-to-noise ratio (SNR) was calculated, image quality was graded (1–3) and each was assessed for image artefact (e.g. motion). Student's t-test and the Mann–Whitney U-test were used to compare mean SNR values. Results 39 patients were included [mean gestational age 39 weeks (range 30–42 weeks); mean postnatal age 13 days (range 1–56 days); mean weight 3.5 kg (range 1.4–4.5 kg)]. Following the introduction of the MR-compatible incubator, diagnostic quality scans increased from 50 to 89% and motion artefact decreased from 73 to 44% of studies. SNR did not increase initially, but, when using MR sequences and parameters specifically tailored for neonatal brain imaging, SNR increased from 70 to 213 (p=0.001). Conclusion Use of an MR-compatible incubator in neonatal neuroimaging provides a safe environment for MRI of the neonate and also facilitates patient monitoring and transport. When specifically tailored MR protocols are used, this results in improved image quality. PMID:22457402

  11. NuSTAR on-ground calibration: I. Imaging quality

    NASA Astrophysics Data System (ADS)

    Westergaard, Niels J.; Madsen, Kristin K.; Brejnholt, Nicolai F.; Koglin, Jason E.; Christensen, Finn E.; Pivovaroff, Michael J.; Vogel, Julia K.

    2012-09-01

    The Nuclear Spectroscopic Telescope Array (NuSTAR) launched in June 2012 carries the first focusing hard X-ray (5-80 keV) telescope to orbit. The on-ground calibration was performed at the RaMCaF facility at Nevis, Columbia University. During the assembly of the telescopes, mechanical surface metrology provided surface maps of the reflecting surfaces. Several flight-coated mirrors were brought to BNL for scattering measurements. The information from both sources is fed to a raytracing code that is tested against the on-ground calibration data. The code is subsequently used for predicting the imaging properties for X-ray sources at infinite distance.

  12. SU-E-I-04: A Mammography Phantom to Measure Mean Glandular Dose and Image Quality

    SciTech Connect

    Lopez-Pineda, E; Ruiz-Trejo, C; E, Brandan M

    2014-06-01

    Purpose: To evaluate mean glandular dose (MGD) and image quality in a selection of mammography systems using a novel phantom based on thermoluminescent dosemeters and the ACR wax insert. Methods: The phantom consists of two acrylic, 19 cm diameter, 4.5 cm thick, semicircular modules, used in sequence. The image quality module contains the ACR insert and is used to obtain a quality control image under automatic exposure conditions. The dosimetric module carries 15 TLD-100 chips, some under Al foils, to determine air kerma and half-value layer. TL readings take place at our laboratory under controlled conditions. Calibration was performed using an ionization chamber and a Senographe 2000D unit for a variety of beam qualities, from 24 to 40 kV, Mo and Rh anodes and filters. Phantom MGD values agree, on average, within 3% with ionization chamber data, and their precision is better than 10% (k=1). Results: MGD and image quality have been evaluated in a selection of mammography units currently used in Mexican health services. The sample includes analog (screen/film), flexible digital (CR), and full-field digital image receptors. The highest MGD values are associated with the CR technology. The most common image quality failure is due to artifacts (dust, intensifying screen scratches, and processor marks for film/screen; laser reader defects for CR). Conclusion: The developed phantom permits MGD measurement without the need for a calibrated ionization chamber at the mammography site and can be used by a technician without the presence of a medical physicist. The results indicate the urgent need to establish quality control programs for mammography.

  13. The impact of skull bone intensity on the quality of compressed CT neuro images

    NASA Astrophysics Data System (ADS)

    Kowalik-Urbaniak, Ilona; Vrscay, Edward R.; Wang, Zhou; Cavaro-Menard, Christine; Koff, David; Wallace, Bill; Obara, Boguslaw

    2012-02-01

    The increasing use of technologies such as CT and MRI, along with a continuing improvement in their resolution, has contributed to the explosive growth of digital image data being generated. Medical communities around the world have recognized the need for efficient storage, transmission and display of medical images. For example, the Canadian Association of Radiologists (CAR) has recommended compression ratios for various modalities and anatomical regions to be employed by lossy JPEG and JPEG2000 compression in order to preserve diagnostic quality. Here we investigate the effects of the sharp skull edges present in CT neuro images on JPEG and JPEG2000 lossy compression. We conjecture that this atypical effect is caused by the sharp edges between the skull bone and the background regions as well as between the skull bone and the interior regions. These strong edges create large wavelet coefficients that consume an unnecessarily large number of bits in JPEG2000 compression because of its bitplane coding scheme, and thus result in reduced quality at the interior region, which contains most diagnostic information in the image. To validate the conjecture, we investigate a segmentation based compression algorithm based on simple thresholding and morphological operators. As expected, quality is improved in terms of PSNR as well as the structural similarity (SSIM) image quality measure, and its multiscale (MS-SSIM) and information-weighted (IW-SSIM) versions. This study not only supports our conjecture, but also provides a solution to improve the performance of JPEG and JPEG2000 compression for specific types of CT images.
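
    A minimal sketch of the kind of thresholding-plus-morphology segmentation mentioned above is given below: bright skull bone and the air background are masked off so that the diagnostically important interior can be treated separately at compression time. The HU thresholds, structuring element, and stand-in slice are assumptions, not the authors' algorithm.

    ```python
    # Sketch: separate skull bone and background from the interior of a CT neuro
    # slice with simple thresholding and morphological operators.
    import numpy as np
    from scipy import ndimage

    slice_hu = np.random.normal(40, 20, size=(512, 512))            # stand-in CT slice (HU)
    slice_hu[100:120, :] = 900                                       # fake "bone" band

    bone = slice_hu > 300                                            # bright skull bone
    bone = ndimage.binary_closing(bone, structure=np.ones((5, 5)))   # close small gaps
    bone = ndimage.binary_dilation(bone, iterations=2)               # pad the sharp edge

    background = slice_hu < -500                                     # air outside the head
    interior = ~(bone | background)                                  # diagnostic region

    print("interior fraction of the slice:", round(float(interior.mean()), 3))
    ```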

  14. Computer-aided image geometry analysis and subset selection for optimizing texture quality in photorealistic models

    NASA Astrophysics Data System (ADS)

    Sima, Aleksandra Anna; Bonaventura, Xavier; Feixas, Miquel; Sbert, Mateu; Howell, John Anthony; Viola, Ivan; Buckley, Simon John

    2013-03-01

    Photorealistic 3D models are used for visualization, interpretation and spatial measurement in many disciplines, such as cultural heritage, archaeology and geoscience. Using modern image- and laser-based 3D modelling techniques, it is normal to acquire more data than is finally used for 3D model texturing, as images may be acquired from multiple positions, with large overlap, or with different cameras and lenses. Such redundant image sets require sorting to restrict the number of images, increasing the processing efficiency and realism of models. However, selection of image subsets optimized for texturing purposes is an example of complex spatial analysis. Manual selection may be challenging and time-consuming, especially for models of rugose topography, where the user must account for occlusions and ensure coverage of all relevant model triangles. To address this, this paper presents a framework for computer-aided image geometry analysis and subset selection for optimizing texture quality in photorealistic models. The framework was created to offer algorithms for candidate image subset selection, whilst supporting refinement of subsets in an intuitive and visual manner. Automatic image sorting was implemented using algorithms originating in computer science and information theory, and variants of these were compared using multiple 3D models and covering image sets, collected for geological applications. The image subsets provided by the automatic procedures were compared to manually selected sets and their suitability for 3D model texturing was assessed. Results indicate that the automatic sorting algorithms are a promising alternative to manual methods. An algorithm based on a greedy solution to the weighted set-cover problem provided image sets closest to the quality and size of the manually selected sets. The improved automation and more reliable quality indicators make the photorealistic model creation workflow more accessible for application experts
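
    The greedy weighted set-cover selection mentioned in the abstract can be illustrated with the toy sketch below: each candidate image covers a set of model triangles and carries a cost (for example, inversely related to its viewing quality), and images are chosen to maximize newly covered triangles per unit cost. The data structures and cost definition are assumptions, not the authors' implementation.

    ```python
    # Sketch: greedy weighted set-cover for choosing an image subset that
    # textures all model triangles. Data are illustrative.
    def greedy_image_subset(coverage, cost, all_triangles):
        """coverage: dict image_id -> set of triangle ids the image can texture."""
        chosen, uncovered = [], set(all_triangles)
        while uncovered:
            # Best image = most newly covered triangles per unit cost.
            best = max(coverage, key=lambda i: len(coverage[i] & uncovered) / cost[i])
            gain = coverage[best] & uncovered
            if not gain:              # remaining triangles cannot be covered by any image
                break
            chosen.append(best)
            uncovered -= gain
        return chosen, uncovered

    coverage = {"img_a": {1, 2, 3, 4}, "img_b": {3, 4, 5}, "img_c": {5, 6}}
    cost = {"img_a": 1.0, "img_b": 0.5, "img_c": 0.8}
    subset, missed = greedy_image_subset(coverage, cost, all_triangles=range(1, 7))
    print("selected:", subset, "uncovered:", missed)
    ```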

  15. Evaluation of cassette-based digital radiography detectors using standardized image quality metrics: AAPM TG-150 Draft Image Detector Tests.

    PubMed

    Li, Guang; Greene, Travis C; Nishino, Thomas K; Willis, Charles E

    2016-09-08

    The purpose of this study was to evaluate several of the standardized image quality metrics proposed by the American Association of Physicists in Medicine (AAPM) Task Group 150. The task group suggested region-of-interest (ROI)-based techniques to measure nonuniformity, minimum signal-to-noise ratio (SNR), number of anomalous pixels, and modulation transfer function (MTF). This study evaluated the effects of ROI size and layout on the image metrics by using four different ROI sets, assessed result uncertainty by repeating measurements, and compared results with two commercially available quality control tools, namely the Carestream DIRECTVIEW Total Quality Tool (TQT) and the GE Healthcare Quality Assurance Process (QAP). Seven Carestream DRX-1C (CsI) detectors on mobile DR systems and four GE FlashPad detectors in radiographic rooms were tested. Images were analyzed using MATLAB software that had been previously validated and reported. Our values for signal and SNR nonuniformity and MTF agree with values published by other investigators. Our results show that ROI size affects nonuniformity and minimum SNR measurements, but not detection of anomalous pixels. Exposure geometry affects all tested image metrics except for the MTF. TG-150 metrics in general agree with the TQT, but agree with the QAP only for local and global signal nonuniformity. The difference in SNR nonuniformity and MTF values between the TG-150 and QAP may be explained by differences in the calculation of noise and acquisition beam quality, respectively. TG-150's SNR nonuniformity metrics are also more sensitive to detector nonuniformity compared to the QAP. Our results suggest that fixed ROI size should be used for consistency because nonuniformity metrics depend on ROI size. Ideally, detector tests should be performed at the exact calibration position. If not feasible, a baseline should be established from the mean of several repeated measurements. Our study indicates that the TG-150 tests can be
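
    For orientation, the sketch below computes ROI-based signal and SNR nonuniformity on a flat-field image in the spirit of the metrics evaluated in the study. The ROI size and the definition used here (maximum percentage deviation from the mean over the ROI grid) are assumptions rather than the task group's exact formulas.

    ```python
    # Sketch: ROI-based signal and SNR nonuniformity on a flat-field image.
    import numpy as np

    def roi_grid_stats(img, roi=64):
        means, snrs = [], []
        for r in range(0, img.shape[0] - roi + 1, roi):
            for c in range(0, img.shape[1] - roi + 1, roi):
                block = img[r:r + roi, c:c + roi]
                means.append(block.mean())
                snrs.append(block.mean() / block.std())
        return np.array(means), np.array(snrs)

    flat = np.random.normal(1000, 25, size=(2048, 2048))     # stand-in flat-field image
    means, snrs = roi_grid_stats(flat, roi=64)

    signal_nonuniformity = 100 * np.abs(means - means.mean()).max() / means.mean()
    snr_nonuniformity = 100 * np.abs(snrs - snrs.mean()).max() / snrs.mean()
    print(f"signal nonuniformity: {signal_nonuniformity:.2f}%")
    print(f"SNR nonuniformity: {snr_nonuniformity:.2f}%  (minimum ROI SNR {snrs.min():.1f})")
    ```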

  16. Evaluation of cassette-based digital radiography detectors using standardized image quality metrics: AAPM TG-150 Draft Image Detector Tests.

    PubMed

    Li, Guang; Greene, Travis C; Nishino, Thomas K; Willis, Charles E

    2016-01-01

    The purpose of this study was to evaluate several of the standardized image quality metrics proposed by the American Association of Physicists in Medicine (AAPM) Task Group 150. The task group suggested region-of-interest (ROI)-based techniques to measure nonuniformity, minimum signal-to-noise ratio (SNR), number of anomalous pixels, and modulation transfer function (MTF). This study evaluated the effects of ROI size and layout on the image metrics by using four different ROI sets, assessed result uncertainty by repeating measurements, and compared results with two commercially available quality control tools, namely the Carestream DIRECTVIEW Total Quality Tool (TQT) and the GE Healthcare Quality Assurance Process (QAP). Seven Carestream DRX-1C (CsI) detectors on mobile DR systems and four GE FlashPad detectors in radiographic rooms were tested. Images were analyzed using MATLAB software that had been previously validated and reported. Our values for signal and SNR nonuniformity and MTF agree with values published by other investigators. Our results show that ROI size affects nonuniformity and minimum SNR measurements, but not detection of anomalous pixels. Exposure geometry affects all tested image metrics except for the MTF. TG-150 metrics in general agree with the TQT, but agree with the QAP only for local and global signal nonuniformity. The difference in SNR nonuniformity and MTF values between the TG-150 and QAP may be explained by differences in the calculation of noise and acquisition beam quality, respectively. TG-150's SNR nonuniformity metrics are also more sensitive to detector nonuniformity compared to the QAP. Our results suggest that fixed ROI size should be used for consistency because nonuniformity metrics depend on ROI size. Ideally, detector tests should be performed at the exact calibration position. If not feasible, a baseline should be established from the mean of several repeated measurements. Our study indicates that the TG-150 tests can be

  17. Resolving the Southern African Large Telescope's image quality problems

    NASA Astrophysics Data System (ADS)

    O'Donoghue, Darragh E.; Crause, Lisa A.; O'Connor, James; Strümpfer, Francois; Strydom, Ockert J.; Sass, Craig; Brink, Janus D.; Plessis, Charl du; Wiid, Eben; Love, Jonathan

    2013-08-01

    Images obtained with the Southern African Large Telescope (SALT) during its commissioning phase in 2006 showed degradation due to a large focus gradient, astigmatism, and higher order optical aberrations. An extensive forensic investigation exonerated the primary mirror and the science instruments before pointing to the mechanical interface between the telescope and the spherical aberration corrector, the complex optical subassembly which corrects the spherical aberration introduced by the 11-m primary mirror. Having diagnosed the problem, a detailed repair plan was formulated and implemented when the corrector was removed from the telescope in April 2009. The problematic interface was replaced, and the four aspheric mirrors were optically tested and re-aligned. Individual mirror surface figures were confirmed to meet specification, and a full system test after the re-alignment yielded a root mean square wavefront error of 0.15 waves. The corrector was reinstalled in August 2010 and aligned with respect to the payload and primary mirror. Subsequent on-sky tests revealed spurious signals being sent to the tracker by the auto-collimator, the instrument that maintains the alignment of the corrector with respect to the primary mirror. After rectifying this minor issue, the telescope yielded uniform 1.1 arcsec star images over the full 10-arcmin field of view.

  18. New way for both quality enhancement of THz images and detection of concealed objects

    NASA Astrophysics Data System (ADS)

    Trofimov, Vyacheslav A.; Trofimov, Vladislav V.

    2015-08-01

    As is well known, a passive THz camera allows concealed objects to be seen without contact with a person, and the camera is not dangerous to the person. Obviously, the efficiency of using a passive THz camera depends on its temperature resolution. This characteristic determines what can be detected: the minimal size of the concealed object, the maximal detection distance, and the image quality. Computer processing of the THz image can improve the image quality many times over without any additional engineering effort, so developing modern computer codes for application to THz images is an urgent problem. With appropriate new methods one may expect a temperature resolution that allows a banknote in a person's pocket to be seen without any physical contact. Modern algorithms for computer processing of THz images also allow an object inside the human body to be seen through the temperature trace it leaves on the skin. This circumstance substantially enhances the opportunities for passive THz camera applications in counterterrorism. We developed a new real-time algorithm, based on the correlation function, for the detection of concealed objects by computer processing of passive THz images without viewing them. This algorithm allows a conclusion to be drawn about the presence of forbidden objects on the human body. To see such an object clearly, we propose a further algorithm that increases the image quality. The present approach to computer processing of THz images differs from the approaches we developed earlier. We have applied the new algorithms successfully to images captured by the passive THz camera TS4 manufactured by ThruVision Inc. The distance between the camera and the person varied from 4 to 10 metres.

  19. A survey on performance status of mammography machines: image quality and dosimetry studies using a standard mammography imaging phantom.

    PubMed

    Sharma, Reena; Sharma, Sunil Dutt; Mayya, Y S

    2012-07-01

    It is essential to perform quality control (QC) tests on mammography equipment in order to produce an appropriate image quality at a lower radiation dose to patients. Imaging and dosimetric measurements on 15 mammography machines located at the busiest radiology centres of Mumbai, India were carried out using a standard CIRS breast imaging phantom in order to assess the level of image quality and breast doses. The QC tests include evaluations of image quality and the mean glandular dose (MGD), which is derived from the breast entrance exposure, half-value layer (HVL), compressed breast thickness (CBT) and breast tissue composition. At the majority of the centres, film-processing and darkroom conditions were not found to be maintained as required to meet the technical development specifications for the mammography film in use, as recommended by the American College of Radiology (ACR). In most of the surveyed centres, the viewbox luminance and room illuminance conditions were not found to be in line with the mammography requirements recommended by the ACR. The measured HVL values of the machines were in the range of 0.27-0.39 mm aluminium (Al) with a mean value of 0.33±0.04 mm Al at 28 kV(p), following the recommendation provided by the ACR. The measured MGDs were in the range of 0.14-3.80 mGy with a mean value of 1.34 mGy. The measured MGDs vary from centre to centre by a factor of 27.14. Referring to patient doses and image quality, it was observed that only one mammography centre exceeded the recommended MGD of 3.0 mGy per view, with a value of 3.80 mGy, and at eight mammography centres the measured central background density (CBD) values for the mammography phantom image were found to be less than the recommended CBD limit value of 1.2-2.0 optical density. PMID:22090414

  20. Holographic imaging of crowded fields: high angular resolution imaging with excellent quality at very low cost

    NASA Astrophysics Data System (ADS)

    Schödel, R.; Yelda, S.; Ghez, A.; Girard, J. H.; Labadie, L.; Rebolo, R.; Pérez-Garrido, A.; Morris, M. R.

    2013-02-01

    We present a method for speckle holography that is optimized for crowded fields. Its two key features are an iterative improvement of the instantaneous point spread functions (PSFs) extracted from each speckle frame and the (optional) simultaneous use of multiple reference stars. In this way, high signal-to-noise ratio and accuracy can be achieved on the PSF for each short exposure, which results in sensitive, high-Strehl reconstructed images. We have tested our method with different instruments, on a range of targets, and from the N[10 μm] to the I[0.9 μm] band. In terms of PSF cosmetics, stability and Strehl ratio, holographic imaging can be equal, and even superior, to the capabilities of currently available adaptive optics (AO) systems, particularly at short near-infrared to optical wavelengths. It outperforms lucky imaging because it makes use of the entire PSF and reduces the need for frame selection, thus, leading to higher Strehl and improved sensitivity. Image reconstruction a posteriori, the possibility to use multiple reference stars and the fact that these reference stars can be rather faint means that holographic imaging offers a simple way to image large, dense stellar fields near the diffraction limit of large telescopes, similar to, but much less technologically demanding than, the capabilities of a multiconjugate AO system. The method can be used with a large range of already existing imaging instruments and can also be combined with AO imaging when the corrected PSF is unstable.

  1. Possible way for increasing the quality of imaging from THz passive device

    NASA Astrophysics Data System (ADS)

    Trofimov, Vyacheslav A.; Trofimov, Vladislav V.; Deng, Chao; Zhao, Yuan-meng; Zhang, Cun-lin; Zhang, Xin

    2011-11-01

    Using the passive THz imaging system developed by the CNU-THz laboratory, we captured passive THz images of the human body with forbidden objects hidden under opaque clothes. We demonstrate the possibility of significantly improving the quality of the image. Our approach is based on the application of spatial filters that we developed for computer processing of passive THz images. The THz imaging system is constructed in accordance with well-known passive THz imaging principles and THz quasi-optical theory. It contains a scanning mechanism with a detector whose central wavelength is approximately 1200 μm, a data acquisition card and a microcomputer. To obtain a clear image of the object we apply a sequence of spatial filters to the image and to its spectral transforms. Processing of images from the passive THz device is performed by computer code; the processing time for an image containing about 5000 pixels is less than 0.1 second. To illustrate the efficiency of the developed approach we detect a liquid explosive, a knife, a pistol and a metal plate hidden under opaque clothes. The results obtained demonstrate the high efficiency of our approach for the detection and recognition of hidden objects and are very promising for real security applications.

  2. Quality of images acquired with and without grid in digital mammography.

    PubMed

    Al Khalifah, Khaled H; Brindhaban, Ajit; Saeed, Raed A

    2014-01-01

    In this study, we assessed the quality of digital mammography images acquired with a grid and without a grid for different kVp values. A digital mammography system was used for acquisition of images of the CIRS Model 015 Mammography Accreditation Phantom. The images were obtained in the presence of the grid and then with the grid removed from the system. The energy of the X-rays was varied between 26 and 32 kVp. The images were evaluated by five senior radiologic technologists with extensive experience in mammography. Statistical analysis was carried out with the Mann-Whitney non-parametric test with the level of significance set at p = 0.05. The comparison between images obtained with a grid and without a grid indicated that, for the visibility of fibers, the non-grid images at 28 kVp were significantly (p = 0.032) better than the images acquired with a grid. At all other kVp values, the images were not statistically different regarding the visibility of fibers. For the visibility of specks and masses, the images did not show any significant differences at any of the kVp values of the study. Imaging with kVp higher than 30 requires a grid to improve the visibility of fibrous calcifications and specks. For the visibility of masses at 32 kVp, no statistically significant differences between the grid and non-grid images were found.
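
    The statistical comparison described above can be reproduced in outline with a Mann-Whitney U test, as sketched below; the score arrays are placeholders, not the study's data.

    ```python
    # Sketch: Mann-Whitney U comparison of grid vs. non-grid visibility scores
    # at one kVp setting, with significance at p = 0.05.
    from scipy.stats import mannwhitneyu

    grid_scores = [4, 4, 3, 4, 3]        # e.g. fiber-visibility scores with the grid
    non_grid_scores = [5, 4, 5, 4, 5]    # without the grid, 28 kVp (illustrative values)

    stat, p = mannwhitneyu(grid_scores, non_grid_scores, alternative="two-sided")
    print(f"U = {stat:.1f}, p = {p:.3f}, significant at 0.05: {p < 0.05}")
    ```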

  3. Radiation Dose Reduction Methods For Use With Fluoroscopic Imaging, Computers And Implications For Image Quality

    NASA Astrophysics Data System (ADS)

    Edmonds, E. W.; Hynes, D. M.; Rowlands, J. A.; Toth, B. D.; Porter, A. J.

    1988-06-01

    The use of a beam splitting device for medical gastro-intestinal fluoroscopy has demonstrated that clinical images obtained with a 100mm photofluorographic camera, and a 1024 X 1024 digital matrix with pulsed progressive readout acquisition techniques, are identical. In addition, it has been found that clinical images can be obtained with digital systems at dose levels lower than those possible with film. The use of pulsed fluoroscopy with intermittent storage of the fluoroscopic image has also been demonstrated to reduce the fluoroscopy part of the examination to very low dose levels, particularly when low repetition rates of about 2 frames per second (fps) are used. The use of digital methods reduces the amount of radiation required and also the heat generated by the x-ray tube. Images can therefore be produced using a very small focal spot on the x-ray tube, which can produce further improvement in the resolution of the clinical images.

  4. Closed-loop adaptive optics using a CMOS image quality metric sensor

    NASA Astrophysics Data System (ADS)

    Ting, Chueh; Rayankula, Aditya; Giles, Michael K.; Furth, Paul M.

    2006-08-01

    When compared to a Shack-Hartmann sensor, a CMOS image sharpness sensor has the advantage of reduced complexity in a closed-loop adaptive optics system. It also has the potential to be implemented as a smart sensor using VLSI technology. In this paper, we present a novel adaptive optics testbed that uses a CMOS sharpness imager built in the New Mexico State University (NMSU) Electro-Optics Research Laboratory (EORL). The adaptive optics testbed, which includes a CMOS image quality metric sensor and a 37-channel deformable mirror, has the capability to rapidly compensate higher-order phase aberrations. An experimental performance comparison of the pinhole image sharpness feedback method and the CMOS imager is presented. The experimental data shows that the CMOS sharpness imager works well in a closed-loop adaptive optics system. Its overall performance is better than that of the pinhole method, and it has a fast response time.
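
    The abstract does not spell out the sharpness metric computed by the sensor, so the sketch below uses one common choice, normalized intensity variance, which a closed-loop system could maximize; the Gaussian focal-spot stand-ins are likewise assumptions for illustration.

    ```python
    # Sketch: a normalized-variance image sharpness metric of the kind a
    # closed-loop adaptive optics controller could maximize.
    import numpy as np

    def sharpness(img):
        """Normalized variance: larger for a tighter, higher-contrast focal spot."""
        img = np.asarray(img, dtype=float)
        return img.var() / img.mean()

    y, x = np.mgrid[-64:64, -64:64]
    corrected = np.exp(-(x**2 + y**2) / (2 * 3.0**2))    # well-corrected PSF (stand-in)
    aberrated = np.exp(-(x**2 + y**2) / (2 * 12.0**2))   # aberrated PSF (stand-in)

    print("sharpness, corrected:", round(float(sharpness(corrected)), 3))
    print("sharpness, aberrated:", round(float(sharpness(aberrated)), 3))
    ```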

  5. An enhancement algorithm for low quality fingerprint image based on edge filter and Gabor filter

    NASA Astrophysics Data System (ADS)

    Xue, Jun-tao; Liu, Jie; Liu, Zheng-guang

    2009-07-01

    Owing to limitations of human factors and the collection environment, fingerprint images generally have low quality, especially a contaminated background. In this paper, an enhancement algorithm based on an edge filter and a Gabor filter is proposed for this kind of fingerprint image. Firstly, a gray-level-based algorithm is used to enhance the edges and segment the image. Then, a multilevel block-size method is used to extract the orientation field from the segmented fingerprint image. Finally, a Gabor filter is used to complete the enhancement of the fingerprint image. The experimental results show that the proposed enhancement algorithm is more effective than the normal Gabor filter algorithm. Fingerprint images enhanced by our algorithm show a better enhancement effect, which is helpful for subsequent work such as classification, feature extraction and identification.
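
    A minimal sketch of the final block-wise Gabor stage is shown below: a local ridge orientation is estimated per block and a Gabor filter tuned to that orientation is applied. The block size, filter frequency, and the simple gradient-based orientation estimate are assumptions, not the authors' exact parameters.

    ```python
    # Sketch: block-wise Gabor enhancement driven by a local orientation estimate.
    import numpy as np
    from skimage.filters import gabor

    def block_orientation(block):
        """Dominant gradient orientation from a structure-tensor style estimate."""
        gy, gx = np.gradient(block)
        return 0.5 * np.arctan2(2 * (gx * gy).sum(), (gx**2 - gy**2).sum())

    def enhance(fingerprint, block=32, frequency=0.15):
        out = np.zeros_like(fingerprint, dtype=float)
        for r in range(0, fingerprint.shape[0], block):
            for c in range(0, fingerprint.shape[1], block):
                patch = fingerprint[r:r + block, c:c + block]
                theta = block_orientation(patch) + np.pi / 2   # filter across the ridges
                real, _ = gabor(patch, frequency=frequency, theta=theta)
                out[r:r + block, c:c + block] = real
        return out

    fingerprint = np.random.rand(128, 128)    # stand-in low-quality fingerprint image
    print(enhance(fingerprint).shape)
    ```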

  6. VQone MATLAB toolbox: A graphical experiment builder for image and video quality evaluations.

    PubMed

    Nuutinen, Mikko; Virtanen, Toni; Rummukainen, Olli; Häkkinen, Jukka

    2016-03-01

    This article presents VQone, a graphical experiment builder, written as a MATLAB toolbox, developed for image and video quality ratings. VQone contains the main elements needed for the subjective image and video quality rating process. This includes building and conducting experiments and data analysis. All functions can be controlled through graphical user interfaces. The experiment builder includes many standardized image and video quality rating methods. Moreover, it enables the creation of new methods or modified versions from standard methods. VQone is distributed free of charge under the terms of the GNU general public license and allows code modifications to be made so that the program's functions can be adjusted according to a user's requirements. VQone is available for download from the project page (http://www.helsinki.fi/psychology/groups/visualcognition/).

  7. An image quality comparison of standard and dual-side read CR systems for pediatric radiology

    SciTech Connect

    Monnin, P.; Holzer, Z.; Wolf, R.; Neitzel, U.; Vock, P.; Gudinchet, F.; Verdun, F.R.

    2006-02-15

    An objective analysis of image quality parameters was performed for a computed radiography (CR) system using both standard single-side and prototype dual-side read plates. The pre-sampled modulation transfer function (MTF), noise power spectrum (NPS), and detective quantum efficiency (DQE) for the systems were determined at three different beam qualities representative of pediatric chest radiography, at an entrance detector air kerma of 5 μGy. The NPS and DQE measurements were realized under clinically relevant x-ray spectra for pediatric radiology, including x-ray scatter radiations. Compared to the standard single-side read system, the MTF for the dual-side read system is reduced, but this is offset by a significant decrease in image noise, resulting in a marked increase in DQE (+40%) in the low spatial frequency range. Thus, for the same image quality, the new technology permits the CR system to be used at a reduced dose level.
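
    For reference, the DQE reported in such measurements is commonly assembled from the presampled MTF and the normalized noise power spectrum as DQE(f) = MTF(f)^2 / (q · NNPS(f)); the sketch below applies this standard relation to toy curves. The MTF/NNPS shapes and the photon fluence value are assumptions, not this study's measured data.

    ```python
    # Sketch: DQE(f) = MTF(f)^2 / (q * NNPS(f)) with toy MTF and NNPS curves.
    import numpy as np

    f = np.linspace(0.05, 3.0, 60)            # spatial frequency, cycles/mm
    mtf = np.exp(-f / 2.5)                     # toy presampled MTF
    nnps = 1.0e-5 * (1 + 0.2 * f)              # toy NNPS, mm^2 (NPS / mean signal^2)
    q = 1.5e5                                  # incident photons per mm^2 (assumed fluence)

    dqe = mtf**2 / (q * nnps)
    for x in (0.5, 1.0, 2.0):
        print(f"DQE({x} cycles/mm) = {np.interp(x, f, dqe):.3f}")
    ```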

  8. SU-E-J-09: Image Quality Comparison and Dose Quantification for 2.5 MV

    SciTech Connect

    Stowe, M; DiCostanzo, D; Ayan, A; Woollard, J; Gupta, N

    2015-06-15

    Purpose: To compare the image quality of the 2.5MV imaging beam (2.5X-IMB) to that of a 6MV beam and to quantify the imaging dose of a 2.5X-IMB for constancy as specified by AAPM TG-142. Methods: The image quality of the 2.5X-IMB was compared to the 6MV imaging beam using the SNC ImagePro MV-QA phantom and the Varian supplied Las Vegas phantom (LVP). High resolution (1280×1280×16, 2 frames at 1.5MU/frame) and low resolution (640×640×16, 2 frames at 0.75MU/frame) images were compared for each phantom. MV-QA phantom images were evaluated quantitatively, and the LVP images were evaluated qualitatively. The imaging dose for 2.5X-IMB was quantified using the procedure outlined in TG51. PTWCC13-31013 chambers were used to measure a percent depth dose (PDD) curve for the 2.5X-IMB. All the factors described in TG51 were calculated using the 2.5X-IMB and a PTW30013 Farmer chamber. Results: A comparison between 2.5X-IMB and 6MV image quality was performed both visually and with DoseLab software. The optimal window and level were set for each image of the LVP by the user. Visual inspection showed greater contrast resolution with the 2.5MV beam, but no significant difference with the change in imaging resolution. DoseLab reported similar spatial resolutions between the two energies, but the contrast-to-noise ratio (CNR) was greater for 2.5MV. The PDDx(10cm) for a 10x10cm2 field was measured to be 51.5%. Although this PDD value is off the scale of Figure 4 in TG51, the trend of the curve corresponding to the PTW31003 (equivalent) chamber led to an approximate kQ value of 1.00. Conclusion: When compared to 6MV imaging, 2.5X-IMB results in a better CNR. At low resolution, the DoseLab results for the two energies are comparable, but visual analysis favors the 2.5X-IMB images. Imaging dose was quantified for the 2.5X-IMB after following the TG51 methodology with appropriate approximations.

  9. DTIPrep: quality control of diffusion-weighted images

    PubMed Central

    Oguz, Ipek; Farzinfar, Mahshid; Matsui, Joy; Budin, Francois; Liu, Zhexing; Gerig, Guido; Johnson, Hans J.; Styner, Martin

    2014-01-01

    In the last decade, diffusion MRI (dMRI) studies of the human and animal brain have been used to investigate a multitude of pathologies and drug-related effects in neuroscience research. Study after study identifies white matter (WM) degeneration as a crucial biomarker for all these diseases. The tool of choice for studying WM is dMRI. However, dMRI has inherently low signal-to-noise ratio and its acquisition requires a relatively long scan time; in fact, the high loads required occasionally stress scanner hardware past the point of physical failure. As a result, many types of artifacts implicate the quality of diffusion imagery. Using these complex scans containing artifacts without quality control (QC) can result in considerable error and bias in the subsequent analysis, negatively affecting the results of research studies using them. However, dMRI QC remains an under-recognized issue in the dMRI community as there are no user-friendly tools commonly available to comprehensively address the issue of dMRI QC. As a result, current dMRI studies often perform a poor job at dMRI QC. Thorough QC of dMRI will reduce measurement noise and improve reproducibility, and sensitivity in neuroimaging studies; this will allow researchers to more fully exploit the power of the dMRI technique and will ultimately advance neuroscience. Therefore, in this manuscript, we present our open-source software, DTIPrep, as a unified, user friendly platform for thorough QC of dMRI data. These include artifacts caused by eddy-currents, head motion, bed vibration and pulsation, venetian blind artifacts, as well as slice-wise and gradient-wise intensity inconsistencies. This paper summarizes a basic set of features of DTIPrep described earlier and focuses on newly added capabilities related to directional artifacts and bias analysis. PMID:24523693

  10. DTIPrep: quality control of diffusion-weighted images.

    PubMed

    Oguz, Ipek; Farzinfar, Mahshid; Matsui, Joy; Budin, Francois; Liu, Zhexing; Gerig, Guido; Johnson, Hans J; Styner, Martin

    2014-01-01

    In the last decade, diffusion MRI (dMRI) studies of the human and animal brain have been used to investigate a multitude of pathologies and drug-related effects in neuroscience research. Study after study identifies white matter (WM) degeneration as a crucial biomarker for all these diseases. The tool of choice for studying WM is dMRI. However, dMRI has inherently low signal-to-noise ratio and its acquisition requires a relatively long scan time; in fact, the high loads required occasionally stress scanner hardware past the point of physical failure. As a result, many types of artifacts implicate the quality of diffusion imagery. Using these complex scans containing artifacts without quality control (QC) can result in considerable error and bias in the subsequent analysis, negatively affecting the results of research studies using them. However, dMRI QC remains an under-recognized issue in the dMRI community as there are no user-friendly tools commonly available to comprehensively address the issue of dMRI QC. As a result, current dMRI studies often perform a poor job at dMRI QC. Thorough QC of dMRI will reduce measurement noise and improve reproducibility, and sensitivity in neuroimaging studies; this will allow researchers to more fully exploit the power of the dMRI technique and will ultimately advance neuroscience. Therefore, in this manuscript, we present our open-source software, DTIPrep, as a unified, user friendly platform for thorough QC of dMRI data. These include artifacts caused by eddy-currents, head motion, bed vibration and pulsation, venetian blind artifacts, as well as slice-wise and gradient-wise intensity inconsistencies. This paper summarizes a basic set of features of DTIPrep described earlier and focuses on newly added capabilities related to directional artifacts and bias analysis. PMID:24523693

  11. A Comparison of Image Quality Evaluation Techniques for Transmission X-Ray Microscopy

    SciTech Connect

    Bolgert, Peter J; /Marquette U. /SLAC

    2012-08-31

    Beamline 6-2c at Stanford Synchrotron Radiation Lightsource (SSRL) is capable of Transmission X-ray Microscopy (TXM) at 30 nm resolution. Raw images from the microscope must undergo extensive image processing before publication. Since typical data sets normally contain thousands of images, it is necessary to automate the image processing workflow as much as possible, particularly for the aligning and averaging of similar images. Currently we align images using the 'phase correlation' algorithm, which calculates the relative offset of two images by multiplying them in the frequency domain. For images containing high frequency noise, this algorithm will align noise with noise, resulting in a blurry average. To remedy this we multiply the images by a Gaussian function in the frequency domain, so that the algorithm ignores the high frequency noise while properly aligning the features of interest (FOI). The shape of the Gaussian is manually tuned by the user until the resulting average image is sharpest. To automatically optimize this process, it is necessary for the computer to evaluate the quality of the average image by quantifying its sharpness. In our research we explored two image sharpness metrics, the variance method and the frequency threshold method. The variance method uses the variance of the image as an indicator of sharpness while the frequency threshold method sums up the power in a specific frequency band. These metrics were tested on a variety of test images, containing both real and artificial noise. To apply these sharpness metrics, we designed and built a MATLAB graphical user interface (GUI) called 'Blur Master.' We found that it is possible for blurry images to have a large variance if they contain high amounts of noise. On the other hand, we found the frequency method to be quite reliable, although it is necessary to manually choose suitable limits for the frequency band. Further research must be performed to design an algorithm which
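
    A minimal sketch of the two sharpness metrics compared above, assuming NumPy is available; the function names, radial-band normalisation and default band limits are illustrative choices and not those of the 'Blur Master' GUI.

      import numpy as np

      def variance_sharpness(image):
          """Variance method: use the grey-level variance of the image as the sharpness score."""
          return np.asarray(image, dtype=np.float64).var()

      def frequency_band_sharpness(image, low_frac=0.1, high_frac=0.4):
          """Frequency-threshold method: fraction of spectral power falling inside a chosen
          radial frequency band (band limits given as fractions of the Nyquist frequency)."""
          img = np.asarray(image, dtype=np.float64)
          spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
          ny, nx = img.shape
          y, x = np.ogrid[:ny, :nx]
          r = np.hypot((y - ny / 2) / ny, (x - nx / 2) / nx)   # radial frequency of each pixel
          band = (r >= low_frac * 0.5) & (r <= high_frac * 0.5)
          return spectrum[band].sum() / spectrum.sum()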

  12. Effect of using different cover image quality to obtain robust selective embedding in steganography

    NASA Astrophysics Data System (ADS)

    Abdullah, Karwan Asaad; Al-Jawad, Naseer; Abdulla, Alan Anwer

    2014-05-01

    One of the common types of steganography is to conceal an image as a secret message in another image, normally called a cover image; the resulting image is called a stego image. The aim of this paper is to investigate the effect of using cover images of different quality, and also to analyse the use of different bit-planes in terms of robustness against well-known active attacks such as gamma, statistical filters, and linear spatial filters. The secret messages are embedded in a higher bit-plane, i.e. other than the Least Significant Bit (LSB), in order to resist active attacks. The embedding process is performed in three major steps: first, the embedding algorithm selectively identifies useful areas (blocks) for embedding based on their lighting conditions; second, the most useful blocks are nominated for embedding based on their entropy and average intensity; third, the right bit-plane is selected for embedding. This kind of block selection scatters the secret message(s) randomly around the cover image. Different tests have been performed for selecting a proper block size, which is related to the nature of the cover image used. Our proposed method suggests a suitable embedding bit-plane as well as the right blocks for the embedding. Experimental results demonstrate that the image quality of the cover images has an effect when the stego image is attacked by different active attacks. Although the secret messages are embedded in a higher bit-plane, they cannot be recognised visually within the stego image.
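
    A minimal sketch of the block-scoring and bit-plane embedding steps described above, assuming NumPy; the block size, the choice of bit-plane 3 and the helper names are illustrative and are not the paper's exact selection rules.

      import numpy as np

      def block_scores(cover, block=8):
          """Split the cover image into blocks and score each block by its entropy and mean brightness."""
          h, w = cover.shape
          scores = []
          for r in range(0, h - block + 1, block):
              for c in range(0, w - block + 1, block):
                  blk = cover[r:r + block, c:c + block]
                  hist, _ = np.histogram(blk, bins=256, range=(0, 256), density=True)
                  p = hist[hist > 0]
                  scores.append(((r, c), float(-(p * np.log2(p)).sum()), float(blk.mean())))
          return scores    # [(top-left corner, entropy, mean), ...]

      def embed_bit(pixel, bit, plane=3):
          """Write one message bit into the chosen bit-plane (plane 0 would be the LSB)."""
          mask = 1 << plane
          return (int(pixel) & ~mask) | (int(bit) << plane)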

  13. Optimization of image quality and patient dose in radiographs of paediatric extremities using direct digital radiography

    PubMed Central

    Ansell, C; Jerrom, C; Honey, I D

    2015-01-01

    Objective: The purpose of this study was to evaluate the effect of beam quality on the image quality (IQ) of ankle radiographs of paediatric patients in the age range of 0–1 year whilst maintaining constant effective dose (ED). Methods: Lateral ankle radiographs of an infant foot phantom were taken at a range of tube potentials (40.0–64.5 kVp) with and without 0.1-mm copper (Cu) filtration using a Trixell Pixium 4600 detector (Trixell, Moirans, France). ED to the patient was computed for the default exposure parameters using PCXMC v. 2.0 and was fixed for other beam qualities by modulating the tube current-time product. The contrast-to-noise ratio (CNR) was measured between the tibia and adjacent soft tissue. The IQ of the phantom images was assessed by three radiologists and a reporting radiographer. Four IQ criteria were defined, each with a scale of 1–3, giving a maximum score of 12. Finally, a service audit of clinical images at the default and optimum beam qualities was undertaken. Results: The measured CNR for the 40 kVp/no Cu image was 12.0 compared with 7.6 for the default mode (55 kVp/0.1 mm Cu). An improvement in the clinical IQ scores was also apparent at this lower beam quality. Conclusion: Lowering tube potential and removing filtration improved the clinical IQ of paediatric ankle radiographs in this age range. Advances in knowledge: There are currently no UK guidelines on exposure protocols for paediatric imaging using direct digital radiography. A lower beam quality will produce better IQ with no additional dose penalty for infant extremity imaging. PMID:25816115
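
    A minimal sketch of the CNR measurement described in the Methods, assuming NumPy and rectangular regions of interest; the ROI coordinates are hypothetical, and the published study may define the noise term differently (e.g. from a pooled standard deviation).

      import numpy as np

      def contrast_to_noise_ratio(image, bone_roi, tissue_roi):
          """CNR between a bone ROI (tibia) and an adjacent soft-tissue ROI.
          ROIs are (row_slice, col_slice) pairs; noise is the tissue ROI standard deviation."""
          img = np.asarray(image, dtype=np.float64)
          bone, tissue = img[bone_roi], img[tissue_roi]
          return abs(bone.mean() - tissue.mean()) / tissue.std()

      # Hypothetical placement over the tibia and neighbouring soft tissue:
      # cnr = contrast_to_noise_ratio(img, (slice(100, 140), slice(200, 240)),
      #                               (slice(100, 140), slice(260, 300)))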

  14. Dose sensitivity of three methods of image quality assessment in digital mammography

    NASA Astrophysics Data System (ADS)

    Hummel, Johann; Kaar, Marcus; Hoffmann, Rainer; Kaldarar, Heinrich; Semturs, Friedrich; Homolka, Peter; Figl, Michael

    2012-03-01

    Image quality assurance is one of the key issues in breast screening protocols. Although image quality can always be improved by increasing dose, this mechanism is restricted by limiting values given by the standards. Therefore, it is crucial for system adjustment to describe the dependency of the image quality parameters on small changes in dose. This dose sensitivity was tested for three image quality evaluation methods. The European protocol requires the use of the CDMAM phantom, which is a conventional contrast-detail phantom, while in North America the American College of Radiology (ACR) accreditation phantom is proposed. In contrast to these visual test methods, the German PAS 1054 phantom uses digital image processing to derive image quality parameters like the noise-equivalent number of quanta (NEQ). We varied the dose within the range of clinical use. For the ACR phantom the examined parameter was the number of detected objects. With the CDMAM phantom we chose the diameters 0.10, 0.13, 0.20, 0.31 and 0.5 mm and recorded the threshold thicknesses. With respect to the PAS 1054 measurements we evaluated the NEQ at typical spatial frequencies to calculate the relative changes. NEQ versus dose increment shows a linear relationship and can be described by a linear function (R = 0.998). Every current-time product increment can be detected. With the ACR phantom the number of detected objects increases only in the lower dose range and reaches saturation at about 100 mAs. The CDMAM can detect a 50% increase in dose confidently, although the parameter increase is not monotonic. We conclude that an NEQ-based method can be used as a simple and highly sensitive procedure for weekly quality assurance.
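
    A minimal sketch of the linearity check reported for the NEQ-based method, assuming NumPy; the function name and the use of a simple least-squares fit are illustrative assumptions.

      import numpy as np

      def neq_dose_response(mas, neq):
          """Fit NEQ versus tube loading (mAs) with a straight line and report the slope,
          intercept and Pearson correlation coefficient R."""
          mas = np.asarray(mas, dtype=np.float64)
          neq = np.asarray(neq, dtype=np.float64)
          slope, intercept = np.polyfit(mas, neq, 1)
          r = np.corrcoef(mas, neq)[0, 1]
          return slope, intercept, r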

  15. Adapting the ISO 20462 softcopy ruler method for online image quality studies

    NASA Astrophysics Data System (ADS)

    Burns, Peter D.; Phillips, Jonathan B.; Williams, Don

    2013-01-01

    In this paper we address the problem of image quality assessment with no-reference metrics, focusing on JPEG-corrupted images. In general, no-reference metrics do not measure distortions with the same performance across their possible range or across different image contents. The crosstalk between content and distortion signals influences human perception. We propose two strategies to improve the correlation between subjective and objective quality data. The first strategy is based on grouping the images according to their spatial complexity; the second is based on a frequency analysis. Both strategies are tested on two databases available in the literature. The results show an improvement in the correlations between no-reference metrics and psycho-visual data, evaluated in terms of the Pearson Correlation Coefficient.

  16. Evaluating the quality of images produced by soft X-ray units.

    PubMed

    Bradley, D A; Wong, C S; Ng, K H

    2000-01-01

    For broad-beam soft X-ray sources, assessment of the quality of images produced by such units is made complex by the low penetration capabilities of the radiation. In the present study we have tested the utility of several types of test tool, some of which have been fabricated by us, as part of an effort to evaluate several key image-defining parameters. These include the film characteristic, focal-spot size, image resolution and detail detectability. The two sources of X-rays used in the present studies were the University of Malaya flash X-ray device (UMFX1) and a more conventional soft X-ray tube (Softex, Tokyo), the latter operating at peak accelerating potentials of 20 kVp. We have established, for thin objects, that both systems produce images of comparable quality and, in particular, that objects can be resolved down to better than 45 µm. PMID:11003508

  17. Influence of x-ray pulse parameters on the image quality for moving objects in digital cardiac imaging.

    PubMed

    Guibelalde, Eduardo; Vano, Eliseo; Vaquero, Francisco; González, Luciano

    2004-10-01

    The image quality of a single frame in a modern cardiac imaging x-ray facility can be improved by adjusting the automatic pulse exposure parameters. The effects of acquisition rate on patient dose and the detectability of moving objects have been fully described in the scientific literature. However, the influence of automatic pulse exposure parameters is still to be determined. Images of a moving wheel (with lead wires) were acquired using an H5000 Philips Integris cardiac x-ray system. Poly(methylmethacrylate) plastic samples 20 and 30 cm thick were employed as the build-up phantom to simulate a patient. The images were obtained using preset clinical parameters for cardiac imaging procedures. The signal detectability and motion blur of a contrast bar at a transversal speed in the range of 100-150 mm/s were evaluated with a cine pulse width of 3, 5, 7, and 10 ms under automatic mA/kV regulation. Two levels of exposure at the image intensifier entrance were included in this study. Signal detectability was analyzed in terms of the signal-to-noise ratio (SNR) and the value of SNR²/entrance surface dose. The blurring was modeled as a Gaussian-shaped blurring function, and the motion blur was expressed in terms of the peak full width at half maximum and amplitude (apparent contrast) of the resolution functions. A contrast bar simulating a vessel in motion at the maximum velocities of typical cardiac structures was exposed. Severe loss of image quality occurred at pulse widths ≥7 ms. It is also shown that below 5 ms static nonlinearities, likely caused by the need to use a large focus for cine acquisition, dominate the blurring process.
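
    A minimal sketch of the Gaussian blurring model used to quantify motion blur, assuming NumPy and SciPy; the function names and initial guesses are illustrative, and the published analysis may parameterise the resolution function differently.

      import numpy as np
      from scipy.optimize import curve_fit

      def gaussian_blur_model(x, amplitude, centre, sigma, baseline):
          """Gaussian-shaped blurring (resolution) function."""
          return baseline + amplitude * np.exp(-(x - centre) ** 2 / (2 * sigma ** 2))

      def fwhm_and_contrast(positions, profile):
          """Fit the Gaussian model to a measured profile across the moving contrast bar and
          return its full width at half maximum (FWHM) and amplitude (apparent contrast)."""
          positions = np.asarray(positions, dtype=np.float64)
          profile = np.asarray(profile, dtype=np.float64)
          p0 = [profile.max() - profile.min(), positions[len(positions) // 2], 1.0, float(profile.min())]
          (amplitude, _, sigma, _), _ = curve_fit(gaussian_blur_model, positions, profile, p0=p0)
          return 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(sigma), amplitude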

  18. Influence of x-ray pulse parameters on the image quality for moving objects in digital cardiac imaging

    SciTech Connect

    Guibelalde, Eduardo; Vano, Eliseo; Vaquero, Francisco; Gonzalez, Luciano

    2004-10-01

    The image quality of a single frame in a modern cardiac imaging x-ray facility can be improved by adjusting the automatic pulse exposure parameters. The effects of acquisition rate on patient dose and the detectability of moving objects have been fully described in the scientific literature. However, the influence of automatic pulse exposure parameters is still to be determined. Images of a moving wheel (with lead wires) were acquired using an H5000 Philips Integris cardiac x-ray system. Poly(methylmethacrylate) plastic samples 20 and 30 cm thick were employed as the build-up phantom to simulate a patient. The images were obtained using preset clinical parameters for cardiac imaging procedures. The signal detectability and motion blur of a contrast bar at a transversal speed in the range of 100-150 mm/s were evaluated with a cine pulse width of 3, 5, 7, and 10 ms under automatic mA/kV regulation. Two levels of exposure at the image intensifier entrance were included in this study. Signal detectability was analyzed in terms of the signal-to-noise ratio (SNR) and the value of SNR²/entrance surface dose. The blurring was modeled as a Gaussian-shaped blurring function, and the motion blur was expressed in terms of the peak full width at half maximum and amplitude (apparent contrast) of the resolution functions. A contrast bar simulating a vessel in motion at the maximum velocities of typical cardiac structures was exposed. Severe loss of image quality occurred at pulse widths ≥7 ms. It is also shown that below 5 ms static nonlinearities, likely caused by the need to use a large focus for cine acquisition, dominate the blurring process.

  19. Coronary computed tomography angiography using ultra-low-dose contrast media: radiation dose and image quality.

    PubMed

    Komatsu, Sei; Kamata, Teruaki; Imai, Atsuko; Ohara, Tomoki; Takewa, Mitsuhiko; Ohe, Ryoko; Miyaji, Kazuaki; Yoshida, Junichi; Kodama, Kazuhisa

    2013-08-01

    The aim was to analyze the invasiveness and image quality of coronary CT angiography (CCTA) at 80 kV. We enrolled 181 patients with low body weight and low calcium level. Of these, 154 patients were randomly assigned to 1 of 3 groups: 280 HU/80 kV (n = 51); 350 HU/80 kV (n = 51); or 350 HU/120 kV (n = 52). The amount of contrast media (CM) was decided with a CT number-controlling system. Twenty-seven patients were excluded because of an invalid time density curve by timing bolus. The predicted amount of CM, volume CT dose index, dose-length product, effective dose, image noise, and 5-point image quality were measured. The amounts of CM for the 80 kV/280 HU, 80 kV/350 HU, and 120 kV/350 HU groups were 10 ± 4 mL, 15 ± 7 mL, and 30 ± 6 mL, respectively. Although image noise was greater at 80 than at 120 kV, there was no significant difference in image quality between 80 kV/350 HU and 120 kV/350 HU (p = 0.390). There was no significant difference in image quality between 80 kV/280 HU and 80 kV/350 HU (4.4 ± 0.7 vs. 4.7 ± 0.4, p = 0.056). The amount of CM and the effective dose were lower for 80 kV CCTA than for 120 kV CCTA. CCTA at 80 kV/280 HU may decrease the amount of CM and radiation dose necessary while maintaining image quality.

  20. Iterative Reconstruction Improves Both Objective and Subjective Image Quality in Acute Stroke CTP

    PubMed Central

    Flottmann, Fabian; Kabath, Jan; Illies, Till; Schneider, Tanja; Buhk, Jan-Hendrik; Fiehler, Jens; Kemmling, André

    2016-01-01

    Purpose Computed tomography perfusion (CTP) imaging in acute ischemic stroke (AIS) suffers from measurement errors due to image noise. The purpose of this study was to investigate if iterative reconstruction (IR) algorithms can be used to improve the diagnostic value of standard-dose CTP in AIS. Methods Twenty-three patients with AIS underwent CTP with standardized protocol and dose. Raw data were reconstructed with filtered back projection (FBP) and IR with intensity levels 3, 4, 5. Image quality was objectively (quantitative perfusion values, signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR)) and subjectively (overall image quality) assessed. Ischemic core and perfusion mismatch were visually rated. Discriminative power for tissue outcome prediction was determined by the area under the receiver operating characteristic curve (AUC) resulting from the overlap between follow-up infarct lesions and stepwise thresholded CTP maps. Results With increasing levels of IR, objective image quality (SNR and CNR in white matter and gray matter, elimination of error voxels) and subjective image quality improved. Using IR, mean transit time (MTT) was higher in ischemic lesions, while there was no significant change of cerebral blood volume (CBV) and cerebral blood flow (CBF). Visual assessments of perfusion mismatch changed in 4 patients, while the ischemic core remained constant in all cases. Discriminative power for infarct prediction as represented by AUC was not significantly changed in CBV, but increased in CBF and MTT (mean (95% CI)): 0.72 (0.67–0.76) vs. 0.74 (0.70–0.78) and 0.65 (0.62–0.67) vs 0.67 (0.64–0.70). Conclusion In acute stroke patients, IR improves objective and subjective image quality when applied to standard-dose CTP. This adds to the overall confidence of CTP in acute stroke triage. PMID:26930290

  1. Visualizing artifacts, meta-information, and quality parameters of image sequences

    NASA Astrophysics Data System (ADS)

    Uray, Peter; Mueller-Seelich, Heimo; Plaschzug, Walter; Haas, Werner

    1998-05-01

    This paper presents visualization methods for film quality parameters which are used in the course of semi-automatic film restoration. A central part is navigation in the time context by visualizing the temporal film structure. So-called 'time sections' take characteristic features (e.g. one pixel line or one column, motion information) from each image and map them to a column of a time section image. Typical dimensions of a time section image for a 100 minute movie are 500 by 150,000 pixels, where each image of the original sequence is represented by one column (500 by 1) of the time section image. As the width of such an image is too large to display in one piece on a computer monitor, a non-linear time scale is introduced. This allows the content of an interesting shot to be displayed in full detail while other shots are shown in a compressed view. The time line of a time section can be regarded as an array of 'temporal hyperlinks' modeling the temporal structure of a movie. The smallest temporal entity of annotation is given by shots (a continuous sequence of images) which can be combined hierarchically into scenes, acts, etc. or grouped by certain characteristics (e.g. artefact class). In addition, special quality parameters can be assigned to temporal entities such as shots, scenes and groups. These parameters can be visualized by icons that indicate quality on the non-linear timeline. Application examples for quality icons of each defect class are given, and the visual quality representation used for the restoration of a full-length movie is presented.
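
    A minimal sketch of how a time section image can be assembled, assuming NumPy and an iterable of greyscale frames; the function name and the choice of the central column are illustrative assumptions.

      import numpy as np

      def time_section(frames, column=None):
          """Take one pixel column from every frame and stack the columns side by side,
          so each frame of the sequence becomes one column of the time section image."""
          cols = []
          for frame in frames:                       # frames: iterable of 2-D greyscale images
              c = frame.shape[1] // 2 if column is None else column
              cols.append(frame[:, c])
          return np.stack(cols, axis=1)              # shape: (frame_height, number_of_frames)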

  2. Machine vision image quality measurement in cardiac x-ray imaging

    NASA Astrophysics Data System (ADS)

    Kengyelics, Stephen M.; Gislason-Lee, Amber; Keeble, Claire; Magee, Derek; Davies, Andrew G.

    2015-03-01

    The purpose of this work is to report on a machine vision approach for the automated measurement of x-ray image contrast of coronary arteries filled with iodine contrast media during interventional cardiac procedures. A machine vision algorithm was developed that creates a binary mask of the principal vessels of the coronary artery tree by thresholding a standard deviation map of the direction image of the cardiac scene derived using a Frangi filter. Using the mask, average contrast is calculated by fitting a Gaussian model to the greyscale profile orthogonal to the vessel centre line at a number of points along the vessel. The algorithm was applied to sections of single image frames from 30 left and 30 right coronary artery image sequences from different patients. Manual measurements of average contrast were also performed on the same images. A Bland-Altman analysis indicates good agreement between the two methods with 95% confidence intervals -0.046 to +0.048 with a mean bias of 0.001. The machine vision algorithm has the potential of providing real-time context sensitive information so that radiographic imaging control parameters could be adjusted on the basis of clinically relevant image content.
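
    A minimal sketch of the per-point contrast estimate, assuming NumPy and SciPy; the vessel-mask generation (Frangi filtering and thresholding) is omitted, and the Gaussian parameterisation shown here, with the vessel modelled as a dark dip on a brighter background, is an illustrative assumption.

      import numpy as np
      from scipy.optimize import curve_fit

      def vessel_profile_model(x, amplitude, centre, sigma, baseline):
          """Gaussian dip: an iodine-filled vessel appears darker than the background."""
          return baseline - amplitude * np.exp(-(x - centre) ** 2 / (2 * sigma ** 2))

      def vessel_contrast(profile):
          """Fit the Gaussian model to a greyscale profile taken orthogonal to the vessel
          centre line and return the fitted amplitude as the local contrast."""
          profile = np.asarray(profile, dtype=np.float64)
          x = np.arange(profile.size, dtype=np.float64)
          p0 = [profile.max() - profile.min(), profile.size / 2.0, 2.0, float(profile.max())]
          params, _ = curve_fit(vessel_profile_model, x, profile, p0=p0)
          return params[0]

    The average contrast for a vessel would then be the mean of such fitted amplitudes over a number of points along the centre line, as described in the abstract.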

  3. The effect of defect cluster size and interpolation on radiographic image quality

    NASA Astrophysics Data System (ADS)

    Töpfer, Karin; Yip, Kwok L.

    2011-03-01

    For digital X-ray detectors, the need to control factory yield and cost invariably leads to the presence of some defective pixels. Recently, a standard procedure was developed to identify such pixels for industrial applications. However, no quality standards exist in medical or industrial imaging regarding the maximum allowable number and size of detector defects. While the answer may be application specific, the minimum requirement for any defect specification is that the diagnostic quality of the images be maintained. A more stringent criterion is to keep any changes in the images due to defects below the visual threshold. Two highly sensitive image simulation and evaluation methods were employed to specify the fraction of allowable defects as a function of defect cluster size in general radiography. First, the most critical situation of the defect being located in the center of the disease feature was explored using image simulation tools and a previously verified human observer model, incorporating a channelized Hotelling observer. Detectability index d' was obtained as a function of defect cluster size for three different disease features on clinical lung and extremity backgrounds. Second, four concentrations of defects of four different sizes were added to clinical images with subtle disease features and then interpolated. Twenty observers evaluated the images against the original on a single display using a 2-AFC method, which was highly sensitive to small changes in image detail. Based on a 50% just-noticeable difference, the fraction of allowed defects was specified vs. cluster size.

  4. Elliptical Local Vessel Density: a Fast and Robust Quality Metric for Fundus Images

    SciTech Connect

    Giancardo, Luca; Chaum, Edward; Karnowski, Thomas Paul; Meriaudeau, Fabrice; Tobin Jr, Kenneth William; Abramoff, M.D.

    2008-01-01

    A great effort of the research community is geared towards the creation of an automatic screening system able to promptly detect diabetic retinopathy with the use of fundus cameras. In addition, there are some documented approaches to the problem of automatically judging the image quality. We propose a new set of features, independent of Field of View or resolution, to describe the morphology of the patient's vessels. Our initial results suggest that they can be used to estimate the image quality in a time one order of magnitude shorter than that of previous techniques.

  5. Image quality evaluation of LCDs based on novel RGBW sub-pixel structure

    NASA Astrophysics Data System (ADS)

    Kim, Sungjin; Kang, Dongwoo; Lee, Jinsang; Kim, Jaekyeom; Park, Yongmin; Han, Taeseong; Jung, Sooyeon; Yoo, Jang Jin; Lim, Moojong; Baek, Jongsang

    2015-01-01

    Many display manufacturers have studied the RGBW pixel structure, which adds a white sub-pixel to the RGB LCD, and have recently revealed UHD TVs based on this novel RGBW LCD. The RGBW LCD has 50% higher white luminance and 25% lower primary color luminance compared with the RGB LCD. In this paper, the image quality of RGBW and RGB LCDs is compared. Before evaluating them, TV broadcast video and IEC-62087 video were analyzed to select test video clips. To this end, a TV reference video was first collected from TV broadcast content in Korea. The analysis of the TV reference video suggested that the RGBW LCD could be expected to improve image quality, because most colors are distributed around the white point and the proportion of achromatic colors is high. RGB, RGBW and RGBW with wide color gamut (WCG) backlight unit (BLU) LCDs were prepared, and a series of visual assessments was conducted. As a result, the RGBW LCD obtained higher scores than the RGB LCD on four attributes ('Brightness', 'Naturalness', 'Contrast' and overall image quality), while its 'Colorfulness' score was not higher than that of the RGB LCD for the test still images. The overall image quality of the RGBW LCD in the TV reference video clips was also assessed higher than that of the RGB LCD. Additionally, the RGBW LCD using the WCG BLU shows better performance, especially in 'Colorfulness', than the RGBW LCD.

  6. VSI: a visual saliency-induced index for perceptual image quality assessment.

    PubMed

    Zhang, Lin; Shen, Ying; Li, Hongyu

    2014-10-01

    Perceptual image quality assessment (IQA) aims to use computational models to measure image quality consistently with subjective evaluations. Visual saliency (VS) has been widely studied by psychologists, neurobiologists, and computer scientists during the last decade to investigate which areas of an image attract the most attention of the human visual system. Intuitively, VS is closely related to IQA in that suprathreshold distortions can largely affect the VS maps of images. With this consideration, we propose a simple but very effective full-reference IQA method using VS. In our proposed IQA model, the role of VS is twofold. First, VS is used as a feature when computing the local quality map of the distorted image. Second, when pooling the quality score, VS is employed as a weighting function to reflect the importance of a local region. The proposed IQA index is called the visual saliency-induced index (VSI). Several prominent computational VS models have been investigated in the context of IQA and the best one is chosen for VSI. Extensive experiments performed on four large-scale benchmark databases demonstrate that the proposed IQA index VSI works better in terms of prediction accuracy than all the state-of-the-art IQA indices we could find, while maintaining a moderate computational complexity. The MATLAB source code of VSI and the evaluation results are publicly available online at http://sse.tongji.edu.cn/linzhang/IQA/VSI/VSI.htm. PMID:25122572
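
    A minimal sketch of the saliency-weighted pooling step described above, assuming NumPy; the full VSI index also combines saliency and gradient-similarity maps in a specific way that is not reproduced here.

      import numpy as np

      def saliency_weighted_pooling(local_quality_map, saliency_map):
          """Pool a local quality map into a single score using visual saliency as the
          per-pixel weighting function."""
          q = np.asarray(local_quality_map, dtype=np.float64)
          s = np.asarray(saliency_map, dtype=np.float64)
          return float((q * s).sum() / s.sum())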

  7. Tube focal spot size and power capability impact image quality in the evaluation of intracoronary stents

    NASA Astrophysics Data System (ADS)

    Cesmeli, Erdogan; Berry, Joel L.; Carr, J. J.

    2005-04-01

    Proliferation of coronary stent deployment for treatment of coronary heart disease (CHD) creates a need for imaging-based follow-up examinations to assess patency. Technological improvements in multi-detector computed tomography (MDCT) make it a potential non-invasive alternative to coronary catheterization for evaluation of stent patency; however, image quality with MDCT varies based on the size and composition of the stent. We studied the role of tube focal spot size and power in the optimization of image quality in a stationary phantom. A standard uniform physical phantom with a tubular insert was used, where coronary stents (4 mm in diameter) were deployed in a tube filled with contrast to simulate a typical imaging condition observed in clinical practice. We utilized different commercially available stents and scanned them with different tube voltage and current settings (LightSpeed Pro16, GE Healthcare Technologies, Waukesha, WI, USA). The scanner used different focal spot sizes depending on the power load and thus allowed us to assess the combined effect of focal spot size and power. A radiologist evaluated the resulting images in terms of image quality and artifacts. For all stents, we found that the small focal spot size yielded better image quality and reduced artifacts. In general, higher power capability for a given focal spot size improved the signal-to-noise ratio in the images, allowing improved assessment. Our preliminary study in a non-moving phantom suggests that a CT scanner that can deliver the same power on a small focal spot size is better suited to an optimized scan protocol for reliable stent assessment.

  8. Calibration and validation by professional observers of the Mission-Quality criterion for imaging systems design.

    PubMed

    Kattnig, Alain P; Primot, Jérôme

    2008-03-31

    Comparison of imaging systems remains a sensitive subject today because of the difficulty of merging radiometric and spatial dimensions into a single, easy-to-use parameter. By leaning explicitly on professional image users and their requirements, we show how to build such a criterion, called Mission-Quality. A specific observation campaign is described and its results are used to calibrate the criterion and provide a first proof of its adequacy.

  9. A review of image quality assessment methods with application to computational photography

    NASA Astrophysics Data System (ADS)

    Maître, Henri

    2015-12-01

    Image quality assessment has been of major importance for several domains of the image industry, for instance restoration or communication and coding. New application fields are opening up today with the increase of embedded processing power in cameras and the emergence of computational photography: automatic tuning, image selection, image fusion, image database building, etc. We review the literature on image quality evaluation. We pay attention to the very different underlying hypotheses and results of the existing approaches to the problem. We explain why they differ and for which applications they may be beneficial. We also underline their limits, especially for a possible use in the novel domain of computational photography. Being developed to address different objectives, they propose answers on different aspects, which makes them sometimes complementary. However, they all remain limited in their capability to challenge the human expert, the said or unsaid ultimate goal. We consider the methods which are based on retrieving the parameters of a signal, mostly in spectral analysis; then we explore the more global methods that qualify image quality in terms of noticeable defects or degradation, as popular in the compression domain; in a third field the image acquisition process is considered as a channel between the source and the receiver, allowing the tools of information theory to be used and the system to be qualified in terms of entropy and information capacity. However, these different approaches hardly attack the most difficult part of the task, which is to measure the quality of a photograph in terms of aesthetic properties. To help address this problem, in between Philosophy, Biology and Psychology, we propose a brief review of the literature on the problem of qualifying Beauty, present the attempts to adapt these concepts to visual patterns, and initiate a reflection on what could be done in the field of photography.

  10. Study on the Quality Control Methods of Cluster-Based Remote Sensing Image Processing

    NASA Astrophysics Data System (ADS)

    Xia, L.; Zhao, L. B.; Zhang, X. P.; Zhou, X. M.

    2013-05-01

    With advances in technology, modern surveying technology has developed rapidly and a series of high-tech instruments has been invented and developed. Many cluster-based remote sensing image processing systems, at home and abroad, have been produced and used in production, such as PixelFactory, PixelGrid, CIPS, etc. As these systems have been popularized and used, processing technology and procedures have changed from the old pattern, and traditional ways of quality control no longer meet the needs of the new technology. It is therefore of vital importance to find new procedural quality control methods for geographical spatial data quality certification under the new technology system. Based on the mainstream fast image processing systems PixelFactory and PixelGrid, this paper studies the nodes and methods of quality control during image processing, including image preprocessing, AT, DSM, DEM, ortho-rectification, mosaicking, etc. The feasibility of the proposed quality control methods is then verified on an actual production project. The conclusions can serve as references for the large-scale application of the new technology in surveying information systems.

  11. Development of an image processing system in splendid squid quality classification

    NASA Astrophysics Data System (ADS)

    Masunee, Niyada; Chaiprapat, Supapan; Waiyagan, Kriangkrai

    2013-07-01

    Agricultural products typically exhibit high variance in quality characteristics. To assure customer satisfaction and control manufacturing productivity, quality classification is necessary to screen off defective items and to grade the products. This article presents an application of image processing techniques to squid grading and defect discrimination. A preliminary study indicated that surface color was an efficient determinant for judging the quality of splendid squids. In this study, a computer vision system (CVS) was developed to examine the characteristics of splendid squids. Using image processing techniques, squids could be classified into three quality grades in accordance with an industry standard. The developed system first sifted through squid images to reject ones with black marks. Qualified squids were then graded on the proportions of white, pink, and red regions appearing on their bodies by using fuzzy logic. The system was evaluated on 100 images of squids at different quality levels. It was found that the accuracy obtained by the proposed technique was 95% compared with the sensory evaluation of an expert.
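
    A minimal sketch of a fuzzy-style grading rule driven by color-region proportions; the membership breakpoints and the mapping of colors to grades are invented for illustration and are not the paper's rules.

      def triangular(x, a, b, c):
          """Triangular fuzzy membership function peaking at b."""
          if x <= a or x >= c:
              return 0.0
          return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

      def grade_squid(white_frac, pink_frac, red_frac):
          """Pick the grade whose fuzzy membership is strongest for the measured proportions."""
          memberships = [
              (triangular(white_frac, 0.5, 0.8, 1.01), "Grade 1"),
              (triangular(pink_frac, 0.2, 0.5, 0.8), "Grade 2"),
              (triangular(red_frac, 0.2, 0.5, 1.01), "Grade 3"),
          ]
          return max(memberships)[1]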

  12. Applications of hyperspectral imaging in chicken meat safety and quality detection and evaluation: a review.

    PubMed

    Xiong, Zhenjie; Xie, Anguo; Sun, Da-Wen; Zeng, Xin-An; Liu, Dan

    2015-01-01

    Currently, the issue of food safety and quality is a great public concern. In order to satisfy the demands of consumers and obtain superior food quality, non-destructive and fast methods are required for quality evaluation. As one of these methods, the hyperspectral imaging (HSI) technique has emerged as a smart and promising analytical tool for quality evaluation purposes and has attracted much interest in the non-destructive analysis of different food products. With the main advantage of combining both spectroscopy and imaging, HSI has shown convincing potential for objective detection and evaluation of chicken meat quality. Moreover, developing a quality evaluation system based on HSI technology would bring economic benefits to the chicken meat industry. Therefore, in recent years, many studies have been conducted on using HSI technology for the safety and quality detection and evaluation of chicken meat. The aim of this review is thus to give a detailed overview of HSI and to focus on recently developed HSI methods for microbiological spoilage detection and quality classification of chicken meat. Moreover, the usefulness of the HSI technique for detecting fecal contamination and bone fragments on chicken carcasses is presented. Finally, some viewpoints on its future research and applicability in the modern poultry industry are proposed.

  13. Comparing image quality of print-on-demand books and photobooks from web-based vendors

    NASA Astrophysics Data System (ADS)

    Phillips, Jonathan; Bajorski, Peter; Burns, Peter; Fredericks, Erin; Rosen, Mitchell

    2010-01-01

    Because of the emergence of e-commerce and developments in print engines designed for economical output of very short runs, there are increased business opportunities and consumer options for print-on-demand books and photobooks. The current state of these printing modes allows for direct uploading of book files via the web, printing on nonoffset printers, and distributing by standard parcel or mail delivery services. The goal of this research is to assess the image quality of print-on-demand books and photobooks produced by various Web-based vendors and to identify correlations between psychophysical results and objective metrics. Six vendors were identified for one-off (single-copy) print-on-demand books, and seven vendors were identified for photobooks. Participants rank ordered overall quality of a subset of individual pages from each book, where the pages included text, photographs, or a combination of the two. Observers also reported overall quality ratings and price estimates for the bound books. Objective metrics of color gamut, color accuracy, accuracy of International Color Consortium profile usage, eye-weighted root mean square L*, and cascaded modulation transfer acutance were obtained and compared to the observer responses. We introduce some new methods for normalizing data as well as for strengthening the statistical significance of the results. Our approach includes the use of latent mixed-effect models. We found statistically significant correlation with overall image quality and some of the spatial metrics, but correlations between psychophysical results and other objective metrics were weak or nonexistent. Strong correlation was found between psychophysical results of overall quality assessment and estimated price associated with quality. The photobook set of vendors reached higher image-quality ratings than the set of print-on-demand vendors. However, the photobook set had higher image-quality variability.

  14. Effect of the glandular composition on digital breast tomosynthesis image quality and dose optimisation.

    PubMed

    Marques, T; Ribeiro, A; Di Maria, S; Belchior, A; Cardoso, J; Matela, N; Oliveira, N; Janeiro, L; Almeida, P; Vaz, P

    2015-07-01

    In image quality assessment for digital breast tomosynthesis (DBT), a breast phantom with an average percentage of 50 % glandular tissue is usually used, which may not be representative of the breast tissue composition of the women undergoing such examination. This work aims at studying the effect of the glandular composition of the breast on the image quality taking into consideration different sizes of lesions. Monte Carlo simulations were performed using the state-of-the-art computer program PENELOPE to validate the image acquisition system of the DBT equipment as well as to calculate the mean glandular dose for each projection image and for different breast compositions. The integrated PENELOPE imaging tool (PenEasy) was used to calculate, in mammography, for each clinical detection task the X-ray energy that maximises the figure of merit. All the 2D cranial-caudal projections for DBT were simulated and then underwent the reconstruction process applying the Simultaneous Algebraic Reconstruction Technique. Finally, through signal-to-noise ratio analysis, the image quality in DBT was assessed. PMID:25836692

  15. Effect of the glandular composition on digital breast tomosynthesis image quality and dose optimisation.

    PubMed

    Marques, T; Ribeiro, A; Di Maria, S; Belchior, A; Cardoso, J; Matela, N; Oliveira, N; Janeiro, L; Almeida, P; Vaz, P

    2015-07-01

    In image quality assessment for digital breast tomosynthesis (DBT), a breast phantom with an average percentage of 50 % glandular tissue is usually used, which may not be representative of the breast tissue composition of the women undergoing such examination. This work aims at studying the effect of the glandular composition of the breast on the image quality taking into consideration different sizes of lesions. Monte Carlo simulations were performed using the state-of-the-art computer program PENELOPE to validate the image acquisition system of the DBT equipment as well as to calculate the mean glandular dose for each projection image and for different breast compositions. The integrated PENELOPE imaging tool (PenEasy) was used to calculate, in mammography, for each clinical detection task the X-ray energy that maximises the figure of merit. All the 2D cranial-caudal projections for DBT were simulated and then underwent the reconstruction process applying the Simultaneous Algebraic Reconstruction Technique. Finally, through signal-to-noise ratio analysis, the image quality in DBT was assessed.

  16. Automating PACS quality control with the Vanderbilt image processing enterprise resource

    NASA Astrophysics Data System (ADS)

    Esparza, Michael L.; Welch, E. Brian; Landman, Bennett A.

    2012-02-01

    Precise image acquisition is an integral part of modern patient care and medical imaging research. Periodic quality control using standardized protocols and phantoms ensures that scanners are operating according to specifications, yet such procedures do not ensure that individual datasets are free from corruption, for example due to patient motion, transient interference, or physiological variability. If unacceptable artifacts are noticed during scanning, a technologist can repeat a procedure. Yet, substantial delays may be incurred if a problematic scan is not noticed until a radiologist reads the scans or an automated algorithm fails. Given the scores of slices in typical three-dimensional scans and the wide variety of potential use cases, a technologist cannot practically be expected to inspect all images. In large-scale research, automated pipeline systems have had great success in achieving high throughput. However, clinical and institutional workflows are largely based on DICOM and PACS technologies; these systems are not readily compatible with research systems due to security and privacy restrictions. Hence, quantitative quality control has been relegated to individual investigators and too often neglected. Herein, we propose a scalable system, the Vanderbilt Image Processing Enterprise Resource (VIPER), to integrate modular quality control and image analysis routines with a standard PACS configuration. This server unifies image processing routines across an institutional level and provides a simple interface so that investigators can collaborate to deploy new analysis technologies. VIPER integrates with high-performance computing environments and has successfully analyzed all standard scans from our institutional research center over the course of the last 18 months.

  17. LANDSAT-4/5 image data quality analysis

    NASA Technical Reports Server (NTRS)

    Malaret, E.; Bartolucci, L. A.; Lozano, D. F.; Anuta, P. E.; Mcgillem, C. D.

    1984-01-01

    A LANDSAT Thematic Mapper (TM) quality evaluation study was conducted to identify geometric and radiometric sensor errors in the post-launch environment. The study began with the launch of LANDSAT-4. Several error conditions were found, including band-to-band misregistration and detector-to-detector radiometric calibration errors. A similar analysis was made for the LANDSAT-5 Thematic Mapper and compared with results for LANDSAT-4. Remaining band-to-band misregistration was found to be within tolerances and detector-to-detector calibration errors were not severe. More coherent noise signals were observed in TM-5 than in TM-4, although the amplitude was generally less. The scan direction differences observed in TM-4 were still evident in TM-5. The largest effect was in Band 4, where nearly a one digital count difference was observed. Resolution estimation was carried out using roads in TM-5 for the primary focal plane bands rather than field edges as in TM-4. Estimates using roads gave better resolution. Thermal IR band calibration studies were conducted and new nonlinear calibration procedures were defined for TM-5. The overall conclusion is that there are no first-order errors in TM-5 and any remaining problems are second or third order.

  18. CID2013: a database for evaluating no-reference image quality assessment algorithms.

    PubMed

    Virtanen, Toni; Nuutinen, Mikko; Vaahteranoksa, Mikko; Oittinen, Pirkko; Hakkinen, Jukka

    2015-01-01

    This paper presents a new database, CID2013, to address the issue of using no-reference (NR) image quality assessment algorithms on images with multiple distortions. Current NR algorithms struggle to handle images with many concurrent distortion types, such as real photographic images captured by different digital cameras. The database consists of six image sets; on average, 30 subjects have evaluated 12-14 devices depicting eight different scenes for a total of 79 different cameras, 480 images, and 188 subjects (67% female). The subjective evaluation method was a hybrid absolute category rating-pair comparison developed for the study and presented in this paper. This method utilizes a slideshow of all images within a scene to allow the test images to work as references to each other. In addition to mean opinion score value, the images are also rated using sharpness, graininess, lightness, and color saturation scales. The CID2013 database contains images used in the experiments with the full subjective data plus extensive background information from the subjects. The database is made freely available for the research community. PMID:25494511

  19. A technique for multi-dimensional optimization of radiation dose, contrast dose, and image quality in CT imaging

    NASA Astrophysics Data System (ADS)

    Sahbaee, Pooyan; Abadi, Ehsan; Sanders, Jeremiah; Becchetti, Marc; Zhang, Yakun; Agasthya, Greeshma; Segars, Paul; Samei, Ehsan

    2016-03-01

    The purpose of this study was to substantiate the interdependency of image quality, radiation dose, and contrast material dose in CT, towards the patient-specific optimization of imaging protocols. The study deployed two phantom platforms. First, a variable-sized phantom containing an iodinated insert was imaged on a representative CT scanner at multiple CTDI values. The contrast and noise were measured from the reconstructed images for each phantom diameter. The contrast-to-noise ratio (CNR), which is linearly related to iodine concentration, was calculated for different iodine-concentration levels. Second, the analysis was extended to a recently developed suite of 58 virtual human models (5D-XCAT) with added contrast dynamics. Emulating a contrast-enhanced abdominal imaging procedure and targeting a peak enhancement in the aorta, each XCAT phantom was "imaged" using a CT simulation platform. 3D surfaces for each patient/size established the relationship between iodine concentration, dose, and CNR. The Sensitivity of Ratio (SR), defined as the ratio of the change in iodine concentration to the change in dose required to yield a constant change in CNR, was calculated and compared at high and low radiation dose for both phantom platforms. The results show that the sensitivity of CNR to iodine concentration is larger at high radiation dose (up to 73%). The SR results were strongly affected by the choice of radiation dose metric (CTDI or organ dose). Furthermore, the results showed that the presence of contrast material can have a profound impact on optimization results (up to 45%).
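
    A minimal sketch of one plausible reading of the Sensitivity of Ratio (SR) described above, assuming NumPy: the iodine change and the dose change that each produce the same change in CNR are estimated from local slopes and their ratio is reported. The function name and the linear-slope estimation are illustrative assumptions.

      import numpy as np

      def sensitivity_of_ratio(iodine, cnr_vs_iodine, dose, cnr_vs_dose):
          """SR = (iodine change for a unit CNR change) / (dose change for a unit CNR change),
          with both responses approximated by straight-line fits."""
          slope_iodine = np.polyfit(np.asarray(iodine, float), np.asarray(cnr_vs_iodine, float), 1)[0]
          slope_dose = np.polyfit(np.asarray(dose, float), np.asarray(cnr_vs_dose, float), 1)[0]
          return (1.0 / slope_iodine) / (1.0 / slope_dose)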

  20. New opportunities for quality enhancing of images captured by passive THz camera

    NASA Astrophysics Data System (ADS)

    Trofimov, Vyacheslav A.; Trofimov, Vladislav V.

    2014-10-01

    As is well known, a passive THz camera allows concealed objects to be seen without contact with a person, and the camera is not dangerous to the person being screened. Obviously, the efficiency of using a passive THz camera depends on its temperature resolution. This characteristic determines the possibilities of detecting a concealed object: the minimal size of the object, the maximal distance of detection, and the image quality. Computer processing of the THz image can improve the image quality many times over without any additional engineering effort. Developing modern computer codes for application to THz images is therefore an urgent problem. Using appropriate new methods, one may expect a temperature resolution that allows a banknote in a person's pocket to be seen without any real contact. Modern algorithms for computer processing of THz images also allow an object inside the human body to be seen via its temperature trace on the human skin. This circumstance substantially enhances the opportunities for passive THz camera applications in counterterrorism problems. We demonstrate the detection capabilities achieved at the present time, both for concealed objects and for clothing components, through computer processing of images captured by passive THz cameras manufactured by various companies. Another important result discussed in the paper is the observation of both THz radiation emitted by an incandescent lamp and an image reflected from a ceramic floor plate. We consider images produced by passive THz cameras manufactured by Microsemi Corp., ThruVision Corp., and Capital Normal University (Beijing, China). All algorithms for computer processing of the THz images considered in this paper were developed by the Russian part of the author list. Keywords: THz wave, passive imaging camera, computer processing, security screening, concealed and forbidden objects, reflected image, hand seeing, banknote seeing, ceramic floorplate, incandescent lamp.

  1. Benchmarking the performance of fixed-image receptor digital radiographic systems part 1: a novel method for image quality analysis.

    PubMed

    Lee, Kam L; Ireland, Timothy A; Bernardo, Michael

    2016-06-01

    This is the first part of a two-part study benchmarking the performance of fixed digital radiographic general X-ray systems. This paper concentrates on reporting findings related to the quantitative analysis techniques used to establish comparative image quality metrics. A systematic technical comparison of the evaluated systems is presented in part two of this study. A novel quantitative image quality analysis method is presented with technical considerations addressed for peer review. The novel method was applied to seven general radiographic systems with four different makes of radiographic image receptor (12 image receptors in total). For the System Modulation Transfer Function (sMTF), the use of a grid was found to reduce veiling glare and decrease roll-off. The major contributor to sMTF degradation was found to be focal spot blurring. For the System Normalised Noise Power Spectrum (sNNPS), it was found that all systems examined had similar sNNPS responses. A mathematical model is presented to explain how the use of a stationary grid may cause a difference between horizontal and vertical sNNPS responses.

  2. SU-E-I-94: Automated Image Quality Assessment of Radiographic Systems Using An Anthropomorphic Phantom

    SciTech Connect

    Wells, J; Wilson, J; Zhang, Y; Samei, E; Ravin, Carl E.

    2014-06-01

    Purpose: In a large, academic medical center, consistent radiographic imaging performance is difficult to routinely monitor and maintain, especially for a fleet consisting of multiple vendors, models, software versions, and numerous imaging protocols. Thus, an automated image quality control methodology has been implemented using routine image quality assessment with a physical, stylized anthropomorphic chest phantom. Methods: The “Duke” Phantom (Digital Phantom 07-646, Supertech, Elkhart, IN) was imaged twice on each of 13 radiographic units from a variety of vendors at 13 primary care clinics. The first acquisition used the clinical PA chest protocol to acquire the post-processed “FOR PRESENTATION” image. The second image was acquired without an antiscatter grid followed by collection of the “FOR PROCESSING” image. Manual CNR measurements were made from the largest and thickest contrast-detail inserts in the lung, heart, and abdominal regions of the phantom in each image. An automated image registration algorithm was used to estimate the CNR of the same insert using similar ROIs. Automated measurements were then compared to the manual measurements. Results: Automatic and manual CNR measurements obtained from “FOR PRESENTATION” images had average percent differences of 0.42%±5.18%, −3.44%±4.85%, and 1.04%±3.15% in the lung, heart, and abdominal regions, respectively; measurements obtained from “FOR PROCESSING” images had average percent differences of -0.63%±6.66%, −0.97%±3.92%, and −0.53%±4.18%, respectively. The maximum absolute difference in CNR was 15.78%, 10.89%, and 8.73% in the respective regions. In addition to CNR assessment of the largest and thickest contrast-detail inserts, the automated method also provided CNR estimates for all 75 contrast-detail inserts in each phantom image. Conclusion: Automated analysis of a radiographic phantom has been shown to be a fast, robust, and objective means for assessing radiographic
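
    A minimal sketch of the ROI-based CNR estimate and the percent-difference comparison described above, assuming NumPy; in the actual workflow the ROI locations come from automatically registering the phantom image to a template, a step omitted here, and the function names are illustrative.

      import numpy as np

      def roi_cnr(image, insert_roi, background_roi):
          """CNR of a contrast-detail insert against local background, with ROIs given as
          (row_slice, col_slice) pairs; noise is the background standard deviation."""
          img = np.asarray(image, dtype=np.float64)
          insert, background = img[insert_roi], img[background_roi]
          return abs(insert.mean() - background.mean()) / background.std()

      def percent_difference(automatic_cnr, manual_cnr):
          """Percent difference between automated and manual CNR measurements."""
          return 100.0 * (automatic_cnr - manual_cnr) / manual_cnr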

  3. Operation logic and functionality of automatic dose rate and image quality control of conventional fluoroscopy

    SciTech Connect

    Lin, Pei-Jan Paul

    2009-05-15

    A new generation of fluoroscopic imaging systems is equipped with spectral shaping filters complemented by sophisticated automatic dose rate and image quality control logic called a "fluoroscopy curve" or "trajectory". Such fluoroscopy curves were implemented first on cardiovascular angiographic imaging systems and are now available on conventional fluoroscopy equipment. This study aims to investigate the control logic operations under the fluoroscopy mode and the acquisition mode (equivalent to legacy spot filming) of a conventional fluoroscopy system of the type typically installed for upper and lower gastrointestinal examinations, interventional endoscopy laboratories, gastrointestinal laboratories, and pain clinics.

  4. Impact of contact lens zone geometry and ocular optics on bifocal retinal image quality

    PubMed Central

    Bradley, Arthur; Nam, Jayoung; Xu, Renfeng; Harman, Leslie; Thibos, Larry

    2014-01-01

    Purpose: To examine the separate and combined influences of zone geometry, pupil size, diffraction, apodisation and spherical aberration on the optical performance of concentric zonal bifocals. Methods: Zonal bifocal pupil functions representing the eye plus the ophthalmic correction were defined by interleaving wavefronts from separate optical zones of the bifocal. A two-zone design (a central circular inner zone surrounded by an annular outer zone bounded by the pupil) and a five-zone design (a central small circular zone surrounded by four concentric annuli) were configured with programmable zone geometry, wavefront phase and pupil transmission characteristics. Using computational methods, we examined the effects of diffraction, Stiles-Crawford apodisation, pupil size and spherical aberration on optical transfer functions for different target distances. Results: Apodisation alters the relative weighting of each zone, and thus the balance of near and distance optical quality. When spherical aberration is included, the effective distance correction, add power and image quality depend on zone geometry and Stiles-Crawford apodisation. When the outer zone width is narrow, diffraction limits the available image contrast when focused, but as the pupil dilates and the outer zone width increases, aberrations will limit the best achievable image quality. With two-zone designs, balancing near and distance image quality is not achieved with equal-area inner and outer zones. With significant levels of spherical aberration, multi-zone designs effectively become multifocals. Conclusion: Wave optics and pupil-varying ocular optics significantly affect the imaging capabilities of different optical zones of concentric bifocals. With two-zone bifocal designs, diffraction, pupil apodisation, spherical aberration, and zone size influence both the effective add power and the pupil size required to balance near and distance image quality. Five-zone bifocal designs achieve a high degree of

  5. Sci—Fri AM: Mountain — 02: A comparison of dose reduction methods on image quality for cone beam CT

    SciTech Connect

    Webb, R; Buckley, LA

    2014-08-15

    Modern radiotherapy uses highly conformal dose distributions and therefore relies on daily image guidance for accurate patient positioning. Kilovoltage cone beam CT is one technique that is routinely used for patient set-up and results in a high dose to the patient relative to planar imaging techniques. This study uses an Elekta Synergy linac equipped with XVI cone beam CT to investigate the impact of various imaging parameters on dose and image quality. Dose and image quality are assessed as functions of x-ray tube voltage, tube current and the number of projections in the scan. In each case, the dose measurements confirm that the dose increases as each parameter increases. The assessment of high-contrast resolution shows little dependence on changes to the imaging technique. However, low-contrast visibility suggests a trade-off between dose and image quality. Particularly for changes in tube potential, the dose increases much faster as a function of voltage than the corresponding increase in low-contrast image quality. This suggests using moderate values of the peak tube voltage (100 – 120 kVp), since higher values result in significant dose increases with little gain in image quality. Measurements also indicate that increasing tube current achieves the greatest degree of improvement in low-contrast visibility. The results of this study highlight the need to establish careful imaging protocols to limit dose to the patient and to limit changes to the imaging parameters to those cases where there is a clear clinical requirement for improved image quality.

  6. MEO based secured, robust, high capacity and perceptual quality image watermarking in DWT-SVD domain.

    PubMed

    Gunjal, Baisa L; Mali, Suresh N

    2015-01-01

    The aim of this paper is to present a multiobjective evolutionary optimizer (MEO) based, highly secure and strongly robust image watermarking technique using the discrete wavelet transform (DWT) and singular value decomposition (SVD). Many researchers have failed to achieve joint optimization of perceptual quality and robustness with high-capacity watermark embedding. Here, we achieved optimized peak signal to noise ratio (PSNR) and normalized correlation (NC) using MEO. Strong security is implemented through eight different security levels, including watermark scrambling by the Fibonacci-Lucas transformation (FLT). The Haar wavelet is selected for DWT decomposition to compare the practical performance of wavelets from different wavelet families. The technique is non-blind and tested with cover images of size 512x512 and a grey-scale watermark of size 256x256. The achieved perceptual quality in terms of PSNR is 79.8611 dB for Lena, 87.8446 dB for peppers and 93.2853 dB for lake images, varying the scale factor K1 from 1 to 5. All candidate images used for testing, namely the Lena, peppers and lake images, show exact recovery of the watermark, giving NC equal to 1. Robustness is tested against a variety of attacks on the watermarked image. The experimental demonstration proved that the proposed method gives NC greater than 0.96 for the majority of attacks under consideration. The performance of this technique is found to be superior to all existing hybrid image watermarking techniques under consideration. PMID:25830081
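
    The two quantities being optimized here, PSNR between the cover and watermarked images and normalized correlation (NC) between the embedded and extracted watermarks, can be computed as in the sketch below. This is a generic illustration of the metrics, not the authors' implementation; the random arrays stand in for real images.

```python
# Sketch of the two quality metrics named in the abstract (not the authors' code).
import numpy as np

def psnr(original, distorted, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images of equal size."""
    mse = np.mean((original.astype(np.float64) - distorted.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def normalized_correlation(w_original, w_extracted):
    """Normalized correlation between embedded and extracted watermarks."""
    a = w_original.astype(np.float64).ravel()
    b = w_extracted.astype(np.float64).ravel()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Random stand-ins for a 512x512 cover image and a 256x256 watermark.
rng = np.random.default_rng(0)
cover = rng.integers(0, 256, (512, 512)).astype(np.float64)
watermarked = np.clip(cover + rng.normal(0.0, 1.0, cover.shape), 0, 255)
watermark = rng.integers(0, 256, (256, 256))
print(psnr(cover, watermarked), normalized_correlation(watermark, watermark))
```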

  8. TRIIG - Time-lapse reproduction of images through interactive graphics. [digital processing of quality hard copy]

    NASA Technical Reports Server (NTRS)

    Buckner, J. D.; Council, H. W.; Edwards, T. R.

    1974-01-01

    Description of the hardware and software implementing the system of time-lapse reproduction of images through interactive graphics (TRIIG). The system produces a quality hard copy of processed images in a fast and inexpensive manner. This capability allows for optimal development of processing software through the rapid viewing of many image frames in an interactive mode. Three critical optical devices are used to reproduce an image: an Optronics photo reader/writer, the Adage Graphics Terminal, and Polaroid Type 57 high speed film. Typical sources of digitized images are observation satellites, such as ERTS or Mariner, computer coupled electron microscopes for high-magnification studies, or computer coupled X-ray devices for medical research.

  9. Current external beam radiation therapy quality assurance guidance: does it meet the challenges of emerging image-guided technologies?

    PubMed

    Palta, Jatinder R; Liu, Chihray; Li, Jonathan G

    2008-01-01

    The traditional prescriptive quality assurance (QA) programs that attempt to ensure the safety and reliability of traditional external beam radiation therapy are limited in their applicability to such advanced radiation therapy techniques as three-dimensional conformal radiation therapy, intensity-modulated radiation therapy, inverse treatment planning, stereotactic radiosurgery/radiotherapy, and image-guided radiation therapy. The conventional QA paradigm, illustrated by the American Association of Physicists in Medicine Radiation Therapy Committee Task Group 40 (TG-40) report, consists of developing a consensus menu of tests and device performance specifications from a generic process model that is assumed to apply to all clinical applications of the device. The complexity, variation in practice patterns, and level of automation of high-technology radiotherapy render this "one-size-fits-all" prescriptive QA paradigm ineffective or cost prohibitive if the high-probability error pathways of all possible clinical applications of the device are to be covered. The current approaches to developing comprehensive prescriptive QA protocols can be prohibitively time-consuming and cost-ineffective, and may sometimes fail to adequately safeguard patients. It therefore is important to evaluate the more formal error mitigation and process analysis methods of industrial engineering in order to focus available QA resources on process components that have a significant likelihood of compromising patient safety or treatment outcomes.

  10. SU-E-I-59: Image Quality and Dose Measurement for Partial Cone-Beam CT

    SciTech Connect

    Abouei, E; Ford, N

    2014-06-01

    Purpose: To characterize the performance of cone beam CT (CBCT) used in dentistry by quantitatively investigating image quality and radiation dose over different settings for partial rotation of the x-ray tube. Methods: Image quality and dose measurements were made on a variable field of view (FOV) dental CBCT unit (Carestream 9300). X-ray parameters for clinical settings were adjustable over 2–10 mA and 60–90 kVp with two optional voxel sizes, but the scan time was fixed for each FOV. Image quality was assessed by scanning a cylindrical poly-methyl methacrylate (PMMA) image quality phantom (SEDENTEXCT IQ); the images were then analyzed using ImageJ to calculate image quality parameters such as noise, uniformity, and contrast-to-noise ratio (CNR). A protocol proposed by SEDENTEXCT, dose index 1 (DI1), was applied to dose measurements obtained using a thimble ionization chamber and a cylindrical PMMA dose index phantom (SEDENTEXCT DI). Dose distributions were obtained using Gafchromic film. The phantoms were positioned in the FOV to imitate clinical positioning. Results: The image noise was 6–12.5%, which, when normalized to the difference of the mean voxel values of PMMA and air, was comparable between different FOVs. Uniformity was 93.5–99.7% across the images. CNR was 1.7–4.2 for LDPE and 6.3–14.3 for aluminum. Dose distributions were symmetric about the bisector of the rotation angle. For large and medium FOVs at 4 mA and 80–90 kVp, DI1 values were in the range of 1.26–3.23 mGy. DI1 values were between 1.01 and 1.93 mGy for the small FOV (5×5 cm²) at 4–5 mA and 75–84 kVp. Conclusion: Noise decreased with increasing kVp, and CNR increased, for each FOV. When FOV size increased, image noise increased and CNR decreased. DI1 values increased with increasing tube current (mA), tube voltage (kVp), and/or FOV. Funding for this project from NSERC Discovery grant, UBC Faculty of Dentistry Research Equipment Grant and UBC Faculty of
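
    The image quality parameters reported above (noise, uniformity and CNR) are typically derived from region-of-interest statistics in the reconstructed images; the sketch below shows one plausible set of definitions. The exact ROI placement and formulas are assumptions and may differ in detail from the SEDENTEXCT protocol.

```python
# Minimal sketch of ROI-based image quality metrics (assumed definitions, not the
# SEDENTEXCT protocol verbatim). Each argument is a NumPy array of voxel values.
import numpy as np

def noise_percent(pmma_roi, pmma_mean, air_mean):
    """Standard deviation in a uniform PMMA ROI, normalized to the PMMA-air difference."""
    return 100.0 * np.std(pmma_roi) / abs(pmma_mean - air_mean)

def uniformity_percent(centre_mean, peripheral_means):
    """Worst-case agreement between peripheral and central mean voxel values, in percent."""
    worst = max(abs(p - centre_mean) for p in peripheral_means)
    return 100.0 * (1.0 - worst / centre_mean)

def cnr(insert_roi, background_roi):
    """Contrast-to-noise ratio of a contrast insert (e.g. LDPE or aluminium) against PMMA."""
    return abs(np.mean(insert_roi) - np.mean(background_roi)) / np.std(background_roi)
```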

  11. Image simulation and a model of noise power spectra across a range of mammographic beam qualities

    SciTech Connect

    Mackenzie, Alistair; Dance, David R.; Young, Kenneth C.; Diaz, Oliver

    2014-12-15

    Purpose: The aim of this work is to create a model to predict the noise power spectra (NPS) for a range of mammographic radiographic factors. The noise model was necessary to degrade images acquired on one system to match the image quality of different systems for a range of beam qualities. Methods: Five detectors and x-ray systems [Hologic Selenia (ASEh), Carestream computed radiography CR900 (CRc), GE Essential (CSI), Carestream NIP (NIPc), and Siemens Inspiration (ASEs)] were characterized for this study. The signal transfer property was measured as the pixel value against absorbed energy per unit area (E) at a reference beam quality of 28 kV, Mo/Mo or 29 kV, W/Rh with 45 mm polymethyl methacrylate (PMMA) at the tube head. The contributions of the three noise sources (electronic, quantum, and structure) to the NPS were calculated by fitting a quadratic at each spatial frequency of the NPS against E. A quantum noise correction factor which was dependent on beam quality was quantified using a set of images acquired over a range of radiographic factors with different thicknesses of PMMA. The noise model was tested for images acquired at 26 kV, Mo/Mo with 20 mm PMMA and 34 kV, Mo/Rh with 70 mm PMMA for three detectors (ASEh, CRc, and CSI) over a range of exposures. The NPS were modeled with and without the noise correction factor and compared with the measured NPS. A previous method for adapting an image to appear as if acquired on a different system was modifi
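
    The quadratic decomposition described above can be written NPS(f, E) = a(f) + b(f)E + c(f)E², where a(f) is the electronic noise contribution, b(f)E the quantum noise contribution and c(f)E² the structure noise contribution. The sketch below fits these coefficients at every spatial frequency; the exposure levels and NPS values are placeholders, not data from the study.

```python
# Sketch of the per-frequency quadratic fit described in the abstract:
# NPS(f, E) = a(f) + b(f)*E + c(f)*E^2  (electronic, quantum, structure noise terms).
import numpy as np

def decompose_nps(energies, nps_measurements):
    """energies: shape (n_exposures,); nps_measurements: shape (n_exposures, n_freq).
    Returns (a, b, c), each of shape (n_freq,)."""
    coeffs = np.polyfit(energies, nps_measurements, deg=2)  # highest power first
    c, b, a = coeffs
    return a, b, c

# Hypothetical example: 5 exposure levels, 64 spatial-frequency bins.
E = np.array([10.0, 20.0, 40.0, 80.0, 160.0])   # absorbed energy per unit area (arb. units)
rng = np.random.default_rng(1)
nps = 0.5 + 0.02 * E[:, None] + 1e-5 * E[:, None] ** 2 + rng.normal(0.0, 0.01, (5, 64))
electronic, quantum, structure = decompose_nps(E, nps)
```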