Science.gov

Sample records for adequate image quality

  1. Achieving adequate BMPs for stormwater quality management

    SciTech Connect

    Jones-Lee, A.; Lee, G.F.

    1994-12-31

    There is considerable controversy about the technical appropriateness and the cost-effectiveness of requiring cities to control contaminants in urban stormwater discharges to meet state water quality standards equivalent to US EPA numeric chemical water quality criteria. At this time, and likely for the next 10 years, urban stormwater discharges will be exempt from regulation to achieve state water quality standards in receiving waters, owing to the high cost to cities of managing contaminants in stormwater runoff so as to prevent exceedances of water quality standards in the receiving waters. Instead of requiring the same degree of contaminant control for stormwater discharges as is required for point-source discharges of municipal and industrial wastewaters, those responsible for urban stormwater discharges will have to implement Best Management Practices (BMPs) for contaminant control. The recommended approach to implementing BMPs involves site-specific evaluations of what, if any, real problems (use impairments) are caused by stormwater-associated contaminants in the waters receiving the discharge. From this type of information, BMPs can then be developed to control those contaminants in stormwater discharges that are, in fact, impairing the beneficial uses of receiving waters.

  2. Looking for an adequate quality criterion for depth coding

    NASA Astrophysics Data System (ADS)

    Kerbiriou, Paul; Boisson, Guillaume

    2010-02-01

    This paper deals with 3DTV, and more specifically with 3D content transmission using a disparity-based format. In 3DTV, the problem of measuring the stereoscopic quality of 3D content remains open. Depth signal degradations due to 3DTV transmission induce new types of artifacts in the final rendered views. Whereas we have some experience with the issue of texture coding, the consequences of depth coding are largely unknown. In this paper we focus on that particular issue. For that purpose we considered LDV (Layered Depth Video) content and performed various encodings of its depth information - i.e. depth maps plus depth occlusion layers - using MPEG-4 Part 10 AVC/H.264 MVC. We investigate the impact of depth coding artifacts on the quality of the final views. To this end, we compute the correlation between depth coding errors and the quality of the synthesized views. The criteria used for synthesized views include MSE and structural criteria such as SSIM. The criteria used for depth maps also include a topological measure in 3D space (the Hausdorff distance). Correlations between the two criteria sets are presented, and trends as a function of quantization are also discussed.
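The Hausdorff distance mentioned above, applied to depth maps viewed as point sets in 3D, can be sketched as follows. This is a brute-force illustration for small maps, treating each pixel as a point (x, y, depth) with unscaled axes; the paper's exact coordinate scaling and implementation are not reproduced here, and `hausdorff_3d` is an illustrative name.

```python
# Hedged sketch: symmetric Hausdorff distance between two depth maps,
# treating each pixel as a 3D point (x, y, depth). Brute force; small maps only.
def hausdorff_3d(depth_a, depth_b):
    pts_a = [(x, y, d) for y, row in enumerate(depth_a) for x, d in enumerate(row)]
    pts_b = [(x, y, d) for y, row in enumerate(depth_b) for x, d in enumerate(row)]

    def directed(src, dst):
        # For each point in src, distance to the nearest point in dst; take the max.
        return max(
            min(sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5 for q in dst)
            for p in src
        )

    return max(directed(pts_a, pts_b), directed(pts_b, pts_a))
```

Identical maps give distance 0; a single coding error at one pixel contributes its 3D displacement to the maximum.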

  3. Do measures commonly used in body image research perform adequately with African American college women?

    PubMed

    Kashubeck-West, Susan; Coker, Angela D; Awad, Germine H; Stinson, Rebecca D; Bledman, Rashanta; Mintz, Laurie

    2013-07-01

    This study examines reliability and validity estimates for 3 widely used measures in body image research in a sample of African American college women (N = 278). Internal consistency estimates were adequate (α coefficients above .70) for all measures, and evidence of convergent and discriminant validity was found. Confirmatory factor analyses failed to replicate the hypothesized factor structures of these measures. Exploratory factor analyses indicated that 4 factors found for the Sociocultural Attitudes Toward Appearance Questionnaire were similar to the hypothesized subscales, with fewer items. The factors found for the Multidimensional Body-Self Relations Questionnaire-Appearance Scales and the Body Dissatisfaction subscale of the Eating Disorders Inventory-3 were not similar to the subscales developed by the scale authors. Validity and reliability evidence is discussed for the new factors. PMID:23731233

  4. Social image quality

    NASA Astrophysics Data System (ADS)

    Qiu, Guoping; Kheiri, Ahmed

    2011-01-01

    Current subjective image quality assessments have been developed in laboratory environments, under controlled conditions, and depend on the participation of limited numbers of observers. In this research, with the help of Web 2.0 and social media technology, a new method for building a subjective image quality metric has been developed in which the observers are Internet users. A website with a simple user interface was constructed that enables Internet users, from anywhere at any time, to vote for the better-quality version in a pair of versions of the same image. Users' votes are recorded and used to rank the images according to their perceived visual quality. We have developed three rank aggregation algorithms to process the recorded pair-comparison data: the first uses a naive approach, the second employs a Condorcet method, and the third uses Dykstra's extension of the Bradley-Terry method. The website has been collecting data for about three months and had accumulated over 10,000 votes at the time of writing. Results show that the Internet and its allied technologies, such as crowdsourcing, offer a promising new paradigm for image and video quality assessment, in which hundreds of thousands of Internet users can contribute to building more robust image quality metrics. We have made the Internet-user-generated social image quality (SIQ) data for a public image database available online (http://www.hdri.cs.nott.ac.uk/siq/) to provide the image quality research community with a new source of ground truth data. The website continues to collect votes; it will include more public image databases and will also be extended to videos, to collect social video quality (SVQ) data. All data will be made publicly available on the website in due course.
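Rank aggregation from pairwise votes of the kind described above can be sketched with the plain Bradley-Terry model (the paper uses Dykstra's extension; this simpler variant, fitted with Hunter's MM iteration, illustrates the idea). All names are illustrative; `wins[i][j]` counts votes preferring image i over image j.

```python
# Hedged sketch: plain Bradley-Terry scores from a pairwise-vote matrix,
# fitted by Hunter's MM iteration. Not the paper's exact algorithm.
def bradley_terry(wins, iters=200):
    n = len(wins)
    p = [1.0] * n  # latent "quality" parameters
    for _ in range(iters):
        new_p = []
        for i in range(n):
            w_i = sum(wins[i])  # total wins of image i
            denom = sum((wins[i][j] + wins[j][i]) / (p[i] + p[j])
                        for j in range(n) if j != i)
            new_p.append(w_i / denom if denom else p[i])
        total = sum(new_p)
        p = [x / total for x in new_p]  # normalize for numerical stability
    return p  # higher p => higher perceived quality
```

With two images where image 0 wins 9 of 10 votes, the fitted scores converge to roughly 0.9 vs. 0.1, matching the empirical win rate.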

  5. Diet quality of Italian yogurt consumers: an application of the probability of adequate nutrient intake score (PANDiet).

    PubMed

    Mistura, Lorenza; D'Addezio, Laura; Sette, Stefania; Piccinelli, Raffaela; Turrini, Aida

    2016-01-01

    Diet quality in yogurt consumers and non-consumers was evaluated by applying the probability of adequate nutrient intake (PANDiet) index to a sample of adults and elderly from the Italian food consumption survey INRAN SCAI 2005-06. Overall, yogurt consumers had a significantly higher mean intake of energy, calcium and percentage of energy from total sugars, whereas the mean percentages of energy from total fat, saturated fatty acids and total carbohydrate were significantly (p < 0.01) lower than in non-consumers. The PANDiet index was significantly higher in yogurt consumers than in non-consumers (60.58 ± 0.33 vs. 58.58 ± 0.19, p < 0.001). The adequacy sub-score for 17 nutrients, for which usual intake should be above the reference value, was significantly higher among yogurt consumers. Calcium, potassium and riboflavin showed the largest percentage variation between consumers and non-consumers. Yogurt consumers were more likely to have adequate intakes of vitamins and minerals, and a higher diet quality score. PMID:26906103
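The building block of a probability-of-adequacy index like the PANDiet can be sketched as the probability that usual intake exceeds the average requirement, assuming both are normally distributed. This is a hedged illustration only: the survey's actual distributions, reference values and the index's aggregation are not reproduced, and `prob_adequacy` and its arguments are hypothetical names.

```python
import math

# Hedged sketch: probability that usual intake of one nutrient exceeds the
# average requirement, with both modeled as independent normal variables.
def prob_adequacy(mean_intake, sd_intake, mean_req, sd_req):
    z = (mean_intake - mean_req) / math.sqrt(sd_intake ** 2 + sd_req ** 2)
    # Standard normal CDF via the error function.
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))
```

An index of this family averages such probabilities over nutrients; intake equal to the requirement yields probability 0.5, and higher intake pushes it toward 1.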

  6. Perceptual image quality and telescope performance ranking

    NASA Astrophysics Data System (ADS)

    Lentz, Joshua K.; Harvey, James E.; Marshall, Kenneth H.; Salg, Joseph; Houston, Joseph B.

    2010-08-01

    Launch Vehicle Imaging Telescopes (LVIT) are expensive, high-quality devices intended to improve the safety of vehicle personnel, ground support, civilians, and physical assets during launch activities. If allowed to degrade through a combination of wear, environmental factors, and ineffective or inadequate maintenance, these devices lose their ability to provide analysts with imagery of adequate quality to help prevent catastrophic events such as the NASA Space Shuttle Challenger accident in 1986 and the Columbia disaster of 2003. A software tool incorporating aberrations and diffraction, developed for maintenance evaluation and modeling of telescope imagery, is presented. This tool provides MTF-based image quality metric outputs that are correlated with ascent imagery analysts' perception of image quality, allowing a prediction of the usefulness of imagery that would be produced by a telescope under different simulated conditions.

  7. Evaluation of image quality

    NASA Technical Reports Server (NTRS)

    Pavel, M.

    1993-01-01

    This presentation outlines, in viewgraph format, a general approach to the evaluation of display system quality for aviation applications. The approach is based on the assumption that it is possible to develop a model of the display which captures most of its significant properties. The display characteristics should include spatial and temporal resolution, intensity quantizing effects, spatial sampling, delays, etc. The model must be sufficiently well specified to permit generation of stimuli that simulate the output of the display system. The first step in the evaluation of display quality is an analysis of the tasks to be performed using the display. For example, if a display is used by a pilot during a final approach, the aesthetic aspects of the display may be less relevant than its dynamic characteristics; the opposite task requirements may apply to imaging systems used for displaying navigation charts. Thus, display quality is defined with regard to one or more tasks. Given a set of relevant tasks, there are many ways to approach display evaluation, ranging from visual inspection and rapid evaluation to part-task simulation and full mission simulation. The work described is focused on two complementary approaches to rapid evaluation. The first is based on a model of the human visual system, which is used to predict performance on the selected tasks; this model-based approach permits very rapid and inexpensive evaluation of various design decisions. The second rapid evaluation approach employs specifically designed critical tests that embody many important characteristics of actual tasks; these are used in situations where a validated model is not available. These rapid evaluation tests are being implemented in a workstation environment.

  8. Image Enhancement, Image Quality, and Noise

    NASA Technical Reports Server (NTRS)

    Rahman, Zia-ur; Jobson, Daniel J.; Woodell, Glenn A.; Hines, Glenn D.

    2005-01-01

    The Multiscale Retinex With Color Restoration (MSRCR) is a non-linear image enhancement algorithm that provides simultaneous dynamic range compression, color constancy and rendition. The overall impact is to brighten areas of poor contrast/lightness, but not at the expense of saturating areas of good contrast/brightness. The downside is that, given the poor signal-to-noise ratio that most image acquisition devices have in dark regions, noise can also be greatly enhanced, affecting overall image quality. In this paper, we discuss the impact of the MSRCR on the overall quality of an enhanced image as a function of the strength of shadows in the image and of its root-mean-square (RMS) signal-to-noise ratio (SNR).
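The core retinex operation, before color restoration, is a multiscale log-ratio of the image to smoothed "surround" versions of itself. The sketch below is a deliberately simplified 1-D illustration: a box blur stands in for the Gaussian surrounds, and the scales, equal weights, and function names are assumptions, not the authors' implementation.

```python
import math

def box_blur(signal, radius):
    # Simple edge-clamped box blur standing in for a Gaussian surround.
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - radius), min(len(signal), i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def multiscale_retinex(signal, scales=(1, 2, 4)):
    # Average of log(image) - log(surround) over several surround scales.
    result = [0.0] * len(signal)
    for r in scales:
        surround = box_blur(signal, r)
        for i, (v, s) in enumerate(zip(signal, surround)):
            result[i] += (math.log(v) - math.log(s)) / len(scales)
    return result
```

A flat signal maps to zero everywhere, while an intensity step produces a positive response on the bright side and a negative one on the dark side, which is how the operator compresses dynamic range around edges (and also why noise in dark regions is amplified by the log).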

  9. Retinal Image Quality During Accommodation

    PubMed Central

    López-Gil, N.; Martin, J.; Liu, T.; Bradley, A.; Díaz-Muñoz, D.; Thibos, L.

    2013-01-01

    Purpose: We asked whether retinal image quality is maximal during accommodation, or sub-optimal due to accommodative error, when subjects perform an acuity task. Methods: Subjects viewed a monochromatic (552 nm), high-contrast letter target placed at various viewing distances. Wavefront aberrations of the accommodating eye were measured near the endpoint of an acuity staircase paradigm. Refractive state, defined as the optimum target vergence for maximizing retinal image quality, was computed by through-focus wavefront analysis to find the power of the virtual correcting lens that maximizes the visual Strehl ratio. Results: Despite changes in ocular aberrations and pupil size during binocular viewing, retinal image quality and visual acuity typically remained high for all target vergences. When accommodative errors led to sub-optimal retinal image quality, acuity and measured image quality both declined. However, the effect of accommodation errors on visual acuity was mitigated by the pupillary constriction associated with accommodation and binocular convergence, and by binocular summation of dissimilar retinal image blur. Under monocular viewing conditions some subjects displayed significant accommodative lag that reduced visual performance, an effect that was exacerbated by pharmacological dilation of the pupil. Conclusions: Spurious measurement of accommodative error can be avoided when the image quality metric used to determine refractive state is compatible with the focusing criteria used by the visual system to control accommodation. Real focusing errors of the accommodating eye do not necessarily produce a reliably measurable loss of image quality or a clinically significant loss of visual performance, probably because of the increased depth-of-focus due to pupil constriction. When retinal image quality is close to the maximum achievable (given the eye's higher-order aberrations), acuity is also near maximum.
A combination of accommodative lag, reduced image quality, and reduced

  10. Quality assessment for hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Chen, Yuheng; Chen, Xinhua; Zhou, Jiankang; Shen, Weimin

    2014-11-01

    Image quality assessment is an essential value-judgement approach for many applications. Multi- and hyperspectral imaging involves more evaluation criteria than grey-scale or RGB imaging, so its image quality assessment must cover a comprehensive set of factors. This paper presents an integrated spectral imaging quality assessment effort in which spectral, radiometric and spatial statistical analyses of three hyperspectral imagers are jointly performed. The spectral response function is derived from discrete-illumination images, and spectral performance is deduced from its FWHM and spectral excursion values. The radiometric response of each spectral channel, under both on-ground and airborne imaging conditions, is judged by SNR computation based on local RMS extraction and statistics. The spatial response of the instruments is evaluated by MTF computation using the slanted-edge analysis method. This systematic work in hyperspectral imaging quality assessment, carried out with the help of several leading domestic institutions, is significant for the development of on-ground and in-orbit instrument performance evaluation techniques and also serves as a reference for index demonstration and design optimization in instrument development.
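The local-RMS SNR computation described above can be illustrated roughly as follows: partition a nominally uniform image region into blocks, estimate each block's SNR as mean signal over the RMS of the residual, and summarize. The block size, the per-block mean/RMS estimate, and the median statistic are assumptions for illustration, not the authors' exact procedure.

```python
# Hedged sketch: per-block SNR (mean / RMS deviation) over an image given as a
# list of rows, summarized by the median block SNR.
def block_snr(image, block=2):
    snrs = []
    h, w = len(image), len(image[0])
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            vals = [image[y][x] for y in range(by, by + block)
                                for x in range(bx, bx + block)]
            mean = sum(vals) / len(vals)
            rms = (sum((v - mean) ** 2 for v in vals) / len(vals)) ** 0.5
            if rms > 0:
                snrs.append(mean / rms)
    snrs.sort()
    return snrs[len(snrs) // 2] if snrs else float("inf")  # median block SNR
```

A perfectly uniform region has no measurable noise (infinite SNR); in practice the statistic reflects the noise floor of the channel under test.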

  11. Foveated wavelet image quality index

    NASA Astrophysics Data System (ADS)

    Wang, Zhou; Bovik, Alan C.; Lu, Ligang; Kouloheris, Jack L.

    2001-12-01

    The human visual system (HVS) is highly non-uniform in sampling, coding, processing and understanding. The spatial resolution of the HVS is highest around the point of fixation (the foveation point) and decreases rapidly with increasing eccentricity. Currently, most image quality measurement methods are designed for uniform-resolution images, and these methods do not correlate well with perceived foveated image quality. Wavelet analysis provides a convenient way to examine localized spatial and frequency information simultaneously. We developed a new image quality metric, the foveated wavelet image quality index (FWQI), in the wavelet transform domain. FWQI considers multiple factors of the HVS, including the spatial variance of the contrast sensitivity function, the spatial variance of the local visual cut-off frequency, the variance of human visual sensitivity across wavelet subbands, and the influence of viewing distance on the display resolution and HVS features. FWQI can be employed for foveated region-of-interest (ROI) image coding and quality enhancement. We show its effectiveness by using it to guide optimal bit assignment in an embedded foveated image coding system. The coding system demonstrates very good coding performance and scalability in terms of both foveated objective and subjective quality measurement.
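The eccentricity-dependent visual cut-off frequency that foveated metrics rely on is commonly modeled with the Geisler-Perry contrast threshold form CT(f, e) = CT0 · exp(α·f·(e + e2)/e2). The sketch below uses the commonly quoted fitted constants as assumptions; the paper's exact model and parameter values are not reproduced here.

```python
import math

# Assumed Geisler-Perry style constants (commonly quoted fits, not from FWQI):
ALPHA = 0.106        # spatial frequency decay constant
E2 = 2.3             # half-resolution eccentricity (degrees)
CT0 = 1.0 / 64.0     # minimum contrast threshold

def cutoff_frequency(ecc_deg):
    # Highest spatial frequency (cycles/degree) whose contrast threshold is
    # still below 1.0 at eccentricity `ecc_deg`: solve CT(f, e) = 1 for f.
    return E2 * math.log(1.0 / CT0) / (ALPHA * (ecc_deg + E2))
```

At the fovea this gives roughly 40 cycles/degree, falling off rapidly with eccentricity, which is the non-uniformity a foveated wavelet metric exploits when weighting subband errors.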

  12. Video and image quality

    NASA Astrophysics Data System (ADS)

    Aldridge, Jim

    1995-09-01

    This paper presents some results of a UK government research programme into methods of improving the effectiveness of CCTV surveillance systems. The paper identifies the major components of video security systems and the primary causes of unsatisfactory images. A method is outlined for relating the picture-detail limitations imposed by each system component to overall system performance. The paper also points out some possible difficulties arising from the use of emerging new technology.

  13. Fovea based image quality assessment

    NASA Astrophysics Data System (ADS)

    Guo, Anan; Zhao, Debin; Liu, Shaohui; Cao, Guangyao

    2010-07-01

    Humans are the ultimate receivers of the visual information contained in an image, so a reasonable method of image quality assessment (IQA) should follow the properties of the human visual system (HVS). In recent years, IQA methods based on HVS models have been gradually replacing classical schemes such as mean squared error (MSE) and peak signal-to-noise ratio (PSNR). Structural similarity (SSIM), regarded as one of the most popular HVS-based methods of full-reference IQA, clearly improves on traditional metrics; however, it does not perform well when an image's structure is severely destroyed or masked by noise. In this paper, a new, efficient fovea-based structural similarity image quality assessment (FSSIM) is proposed. It adaptively enlarges the distortions at attended positions and adjusts the relative importance of the three components in SSIM. FSSIM predicts the quality of an image in three steps. First, it computes the luminance, contrast and structure comparison terms; second, it computes a saliency map by extracting fovea information from the reference image using features of the HVS; third, it pools the three terms according to the processed saliency map. Finally, the widely used LIVE IQA database is used to evaluate the performance of FSSIM. Experimental results indicate that the consistency and correlation between FSSIM and mean opinion scores (MOS) are both clearly better than those of SSIM and PSNR.
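The luminance, contrast and structure terms computed in the first step are those of standard SSIM. A minimal, windowless sketch follows, with the usual constants C1 = (0.01·L)², C2 = (0.03·L)², C3 = C2/2 applied globally rather than over sliding windows, so it illustrates the three comparison terms rather than FSSIM's full pipeline.

```python
# Hedged sketch: global SSIM over two flattened signals (no sliding window,
# no saliency weighting), showing the three terms FSSIM re-weights.
def ssim_global(x, y, L=255):
    n = len(x)
    C1, C2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    C3 = C2 / 2
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    sx, sy = vx ** 0.5, vy ** 0.5
    lum = (2 * mx * my + C1) / (mx * mx + my * my + C1)      # luminance term
    con = (2 * sx * sy + C2) / (vx + vy + C2)                # contrast term
    struct = (cov + C3) / (sx * sy + C3)                     # structure term
    return lum * con * struct
```

Identical signals score 1; any distortion in mean, variance, or correlation pulls one of the three factors below 1.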

  14. Landsat image data quality studies

    NASA Technical Reports Server (NTRS)

    Schueler, C. F.; Salomonson, V. V.

    1985-01-01

    Preliminary results of the Landsat-4 Image Data Quality Analysis (LIDQA) program to characterize data obtained using the Thematic Mapper (TM) instrument on board the Landsat-4 and Landsat-5 satellites are reported. The obtained data were compared to TM design specifications with respect to four criteria: spatial resolution, geometric fidelity, information content, and image quality relative to Multispectral Scanner (MSS) data. The overall performance of the TM was rated excellent despite minor instabilities and radiometric anomalies in the data. Spatial performance of the TM exceeded design specifications in terms of both image sharpness and geometric accuracy, and the image utility of the TM data was at least twice as high as that of MSS data. The separability of alfalfa and sugar beet fields in a TM image is demonstrated.

  15. Are the defined substrate-based methods adequate to determine the microbiological quality of natural recreational waters?

    PubMed

    Valente, Marta Sofia; Pedro, Paulo; Alonso, M Carmen; Borrego, Juan J; Dionísio, Lídia

    2010-03-01

    Monitoring the microbiological quality of water used for recreational activities is very important to public health. Although the sanitary quality of recreational marine waters can be evaluated by standard methods, these are time-consuming and require confirmation. For these reasons, faster and more sensitive methods, such as defined substrate-based technology, have been developed. In the present work, we compared the standard membrane filtration method, using Tergitol-TTC agar for total coliforms and Escherichia coli and Slanetz and Bartley agar for enterococci, with the IDEXX defined substrate technology for these faecal pollution indicators to determine the microbiological quality of natural recreational waters. The ISO 17994:2004 standard was used to compare the methods. The IDEXX test for total coliforms and E. coli, Colilert, showed higher values than those obtained by the standard method. The Enterolert test, for the enumeration of enterococci, showed lower values than the standard method. It may be concluded that more studies evaluating the precision and accuracy of the rapid tests are required before they can be applied to routine monitoring of marine and freshwater recreational bathing areas. The main advantages of these methods are that they are more specific, practicable and simpler than the standard methodology. PMID:20009243

  16. Quality metrics for sensor images

    NASA Technical Reports Server (NTRS)

    Ahumada, AL

    1993-01-01

    Methods are needed for evaluating the quality of augmented visual displays (AVID). Computational quality metrics will help summarize, interpolate, and extrapolate the results of human performance tests with displays. The FLM Vision group at NASA Ames has been developing computational models of visual processing and using them to develop computational metrics for similar problems. For example, display modeling systems use metrics to compare proposed displays, halftone optimization methods use metrics to evaluate the difference between a halftone and the original, and image compression methods minimize the predicted visibility of compression artifacts. The visual discrimination models take as input two arbitrary images A and B and compute an estimate of the probability that a human observer will report that A is different from B. If A is an image that one desires to display and B is the actual displayed image, such an estimate can be regarded as an image quality metric reflecting how well B approximates A. There are additional complexities associated with the problem of evaluating the quality of radar and IR enhanced displays for AVID tasks. One important problem is whether intruding obstacles are detectable in such displays. Although the discrimination model can handle detection situations by making B the original image A plus the intrusion, this detection model makes the inappropriate assumption that the observer knows where the intrusion will be. Effects of signal uncertainty need to be added to our models. A pilot needs to make decisions rapidly, so the models need to predict not just the probability of a correct decision, but the probability of a correct decision by the time the decision needs to be made; that is, they need to predict latency as well as accuracy. Luce and Green have generated models for auditory detection latencies. Similar models are needed for visual detection.
Most image quality models are designed for static imagery

  17. Automated quality assessment in three-dimensional breast ultrasound images.

    PubMed

    Schwaab, Julia; Diez, Yago; Oliver, Arnau; Martí, Robert; van Zelst, Jan; Gubern-Mérida, Albert; Mourri, Ahmed Bensouda; Gregori, Johannes; Günther, Matthias

    2016-04-01

    Automated three-dimensional breast ultrasound (ABUS) is a valuable adjunct to x-ray mammography for breast cancer screening of women with dense breasts. High image quality is essential for proper diagnostics and computer-aided detection. We propose an automated image quality assessment system for ABUS images that detects artifacts at the time of acquisition. To this end, we study three aspects that can corrupt ABUS images: the nipple position relative to the rest of the breast, the shadow caused by the nipple, and the shape of the breast contour in the image. Image processing and machine learning algorithms are combined to detect these artifacts, based on 368 clinical ABUS images that were rated manually by two experienced clinicians. At a specificity of 0.99, 55% of the images rated as low quality are detected by the proposed algorithms. The areas under the ROC curves of the single classifiers are 0.99 for the nipple position, 0.84 for the nipple shadow, and 0.89 for the breast contour shape. The proposed algorithms work quickly and reliably, which makes them adequate for online evaluation of image quality during acquisition. The presented concept may be extended to further imaging modalities and quality aspects. PMID:27158633
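The per-classifier areas under the ROC curve quoted above can be computed directly from scores and binary labels via the rank (Mann-Whitney) formulation: the probability that a randomly chosen positive case scores higher than a randomly chosen negative one. The sketch below is illustrative only; `roc_auc` and its inputs are not taken from the paper.

```python
# Hedged sketch: AUC by pairwise comparison of positive vs. negative scores
# (ties count half). O(n^2); fine for illustration.
def roc_auc(scores, labels):
    pairs = wins = 0
    for s_pos, l_pos in zip(scores, labels):
        if not l_pos:
            continue
        for s_neg, l_neg in zip(scores, labels):
            if l_neg:
                continue
            pairs += 1
            if s_pos > s_neg:
                wins += 1
            elif s_pos == s_neg:
                wins += 0.5
    return wins / pairs
```

A perfect classifier yields 1.0, a perfectly inverted one 0.0, and chance-level scoring about 0.5.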

  18. Feature maps driven no-reference image quality prediction of authentically distorted images

    NASA Astrophysics Data System (ADS)

    Ghadiyaram, Deepti; Bovik, Alan C.

    2015-03-01

    Current blind image quality prediction models rely on benchmark databases comprised of singly and synthetically distorted images, thereby learning image features that are only adequate to predict human perceived visual quality on such inauthentic distortions. However, real world images often contain complex mixtures of multiple distortions. Rather than a) discounting the effect of these mixtures of distortions on an image's perceptual quality and considering only the dominant distortion or b) using features that are only proven to be efficient for singly distorted images, we deeply study the natural scene statistics of authentically distorted images, in different color spaces and transform domains. We propose a feature-maps-driven statistical approach which avoids any latent assumptions about the type of distortion(s) contained in an image, and focuses instead on modeling the remarkable consistencies in the scene statistics of real world images in the absence of distortions. We design a deep belief network that takes model-based statistical image features derived from a very large database of authentically distorted images as input and discovers good feature representations by generalizing over different distortion types, mixtures, and severities, which are later used to learn a regressor for quality prediction. We demonstrate the remarkable competence of our features for improving automatic perceptual quality prediction on a benchmark database and on the newly designed LIVE Authentic Image Quality Challenge Database and show that our approach of combining robust statistical features and the deep belief network dramatically outperforms the state-of-the-art.
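A representative example of the model-based statistical image features such natural-scene-statistics approaches build on (not necessarily the exact feature set used here) is the mean-subtracted, contrast-normalized (MSCN) coefficient map, (I − μ)/(σ + C) with a local mean and deviation. The sketch below is a simplified 1-D version with a box window standing in for the usual Gaussian window; names and parameters are illustrative.

```python
# Hedged sketch: 1-D MSCN coefficients with an edge-clamped box window.
# C avoids division by zero in flat regions.
def mscn(signal, radius=1, C=1.0):
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - radius), min(len(signal), i + radius + 1)
        win = signal[lo:hi]
        mu = sum(win) / len(win)
        sigma = (sum((v - mu) ** 2 for v in win) / len(win)) ** 0.5
        out.append((signal[i] - mu) / (sigma + C))
    return out
```

For pristine natural images these coefficients are famously close to a unit normal; distortions of different types and severities perturb that distribution in characteristic ways, which is what the learned features pick up.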

  19. Image quality requirements for the digitization of photographic collections

    NASA Astrophysics Data System (ADS)

    Frey, Franziska S.; Suesstrunk, Sabine E.

    1996-02-01

    Managers of photographic collections in libraries and archives are exploring digital image database systems, but they usually have few sources of technical guidance and analysis available. Correctly digitizing photographs puts high demands on the imaging system and the human operators involved in the task. Pictures are very dense with information, requiring high-quality scanning procedures. In order to provide advice to libraries and archives seeking to digitize photographic collections, it is necessary to thoroughly understand the nature of the various originals and the purposes for digitization. Only with this understanding is it possible to choose adequate image quality for the digitization process. The higher the quality, the more expertise, time, and cost is likely to be involved in generating and delivering the image. Despite all the possibilities for endless copying, distributing, and manipulating of digital images, image quality choices made when the files are first created have the same 'finality' that they have in conventional photography. They will have a profound effect on project cost, the value of the final project to researchers, and the usefulness of the images as preservation surrogates. Image quality requirements therefore have to be established carefully before a digitization project starts.

  20. Image quality assessment in the low quality regime

    NASA Astrophysics Data System (ADS)

    Pinto, Guilherme O.; Hemami, Sheila S.

    2012-03-01

    Traditionally, image quality estimators have been designed and optimized to operate over the entire quality range of images in a database, from very low quality to visually lossless. However, if quality estimation is limited to a smaller quality range, their performance drops dramatically, and many image applications operate only over such a smaller range. This paper is concerned with one such range, the low-quality regime (LQR), defined as the interval of perceived quality scores in which there exists a linear relationship between perceived quality scores and perceived utility scores; it lies at the low-quality end of image databases. Using this definition, this paper describes a subjective experiment to determine the low-quality regime for databases of distorted images that include perceived quality scores but not perceived utility scores, such as CSIQ and LIVE. The performance of several image utility and quality estimators is evaluated in the low-quality regime, indicating that utility estimators can be successfully applied to estimate perceived quality in this regime. Omission of the lowest-frequency image content is shown to be crucial to the performance of both kinds of estimators. Additionally, this paper establishes an upper bound on the performance of quality estimators in the LQR, using a family of quality estimators based on VIF. The resulting optimal quality estimator indicates that estimating quality in the low-quality regime is robust to the exact frequency pooling weights, and that near-optimal performance can be achieved by a variety of estimators, provided that they substantially emphasize the appropriate frequency content.

  1. Assessing product image quality for online shopping

    NASA Astrophysics Data System (ADS)

    Goswami, Anjan; Chung, Sung H.; Chittar, Naren; Islam, Atiq

    2012-01-01

    Assessing product-image quality is important in the context of online shopping. A high-quality image that conveys more information about a product can boost the buyer's confidence and attract more attention. However, the notion of image quality for product images is not the same as in other domains. The perceived quality of product images depends not only on various photographic quality features but also on high-level features such as the clarity of the foreground or the goodness of the background. In this paper, we define a notion of product-image quality based on such features. We conduct a crowd-sourced experiment to collect user judgments on thousands of eBay's images. We formulate a multi-class classification problem for modeling image quality, classifying images into good, fair, and poor quality based on the guided perceptual notions of the judges. We also conduct regression experiments using the average crowd-sourced human judgment as the target. We compute a pseudo-regression score as the expected average of the predicted classes, as well as a score from the regression technique. We design many experiments with various sampling and voting schemes on the crowd-sourced data and construct various experimental image quality models. Most of our models have reasonable accuracies (greater than or equal to 70%) on the test data set. We observe that our computed image quality score has a high (0.66) rank correlation with the average votes from the crowd-sourced human judgments.
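The rank correlation quoted at the end of the abstract is typically Spearman's; a compact sketch follows, assuming no tied values (and not taken from the paper, which does not specify its implementation).

```python
# Hedged sketch: Spearman rank correlation between two score lists with no
# ties, via the classic 1 - 6*sum(d^2)/(n*(n^2-1)) formula.
def spearman(a, b):
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    ra, rb = ranks(a), ranks(b)
    n = len(a)
    d2 = sum((x - y) ** 2 for x, y in zip(ra, rb))
    return 1 - 6 * d2 / (n * (n * n - 1))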

  2. The utilization of orbital images as an adequate form of control of preserved areas. [Araguaia National Park, Brazil]

    NASA Technical Reports Server (NTRS)

    Dejesusparada, N. (Principal Investigator); Dossantos, J. R.

    1981-01-01

    The synoptic view and the repetitive acquisition of LANDSAT imagery provide precise information, in real time, for monitoring preserved areas based on spectral, temporal, and spatial properties. The purpose of this study was to monitor, with the use of multispectral imagery, the systematic annual burning which causes the degradation of ecosystems in the National Park of Araguaia. LANDSAT imagery from channels 5 (0.6 to 0.7 microns) and 7 (0.8 to 1.1 microns), at the scale of 1:250,000, was used to identify and delimit vegetation units and burned areas, based on the photointerpretation parameter of tonality. The results show that the gallery forest can be discriminated from the seasonally flooded 'campo cerrado', and that 4.14% of the study area was burned. The conclusions point out that LANDSAT images can be used for the implementation of environmental protection in national parks.

  3. Infrared image quality evaluation method without reference image

    NASA Astrophysics Data System (ADS)

    Yue, Song; Ren, Tingting; Wang, Chengsheng; Lei, Bo; Zhang, Zhijie

    2013-09-01

    Since infrared image quality depends on many factors, such as the optical performance and electrical noise of the thermal imager, image quality evaluation is an important issue that can contribute both to subsequent image processing and to improving thermal imager capability. There are two ways of evaluating infrared image quality: with or without a reference image. For real-time thermal images, the method without a reference image is preferred, because it is difficult to obtain a standard image. Although there are various kinds of evaluation methods, there is no general metric for image quality evaluation. This paper introduces a novel method to evaluate infrared images without a reference image from five aspects: noise, clarity, information volume and levels, information in the frequency domain, and the capability of automatic target recognition. Generally, the basic image quality is obtained from the first four aspects, and the quality of the target is acquired from the last aspect. The proposed method is tested on several infrared images captured by different thermal imagers; the indicators are calculated and compared with human visual assessments. The evaluation shows that this method successfully describes the characteristics of infrared images, and the results are consistent with the human visual system.

  4. Are Lateral Electronic Portal Images Adequate for Accurate On-Line Daily Targeting of the Prostate? Results of a Prospective Study

    SciTech Connect

    Lometti, Michael W.; Thurston, Damon; Aubin, Michele; Bock, Andrea; Verhey, Lynn; Lockhart, James M.; Bland, Roger; Pouliot, Jean; Roach, Mack

    2008-04-01

    The purpose of this report was to evaluate the magnitude of the error that would be introduced if only a lateral (LAT) portal image, as opposed to a pair of orthogonal images, were used to verify and correct daily setup errors and organ motion in external beam radiation therapy (EBRT) of prostate cancer. The 3-dimensional (3D) coordinates of gold markers from 12 consecutive prostate patients were reconstructed using a pair of orthogonal images. The data were re-analyzed using only the LAT images. Couch moves from the 2-dimensional (2D)-only data were compared with the complete 3D data set. The 2D-only data provided couch moves that differed on average from the 3D data by 2.3 ± 3.0, 0.0 ± 0.0, and 0.8 ± 1.0 mm in the LAT, AP, and SI directions, respectively. Along the AP and SI axes, the LAT image provided positional information similar to the orthogonal pair. The error along the LAT axis may be acceptable provided lateral margins are large enough. A LAT-only setup protocol reduces patient treatment times and increases patient throughput. In most circumstances, with exceptions such as morbidly obese patients, acquisition of only a LAT image for daily targeting of the prostate will provide adequate positional precision.

  5. An Underwater Color Image Quality Evaluation Metric.

    PubMed

    Yang, Miao; Sowmya, Arcot

    2015-12-01

    Quality evaluation of underwater images is a key goal of underwater video image retrieval and intelligent processing. To date, no metric has been proposed for underwater color image quality evaluation (UCIQE). The special absorption and scattering characteristics of the water medium do not allow direct application of natural color image quality metrics, especially across different underwater environments. In this paper, subjective testing for underwater image quality was organized. The statistical distribution of underwater image pixels in the CIELab color space related to subjective evaluation indicates that the sharpness and colorfulness factors correlate well with subjective image quality perception. Based on these observations, a new UCIQE metric, which is a linear combination of chroma, saturation, and contrast, is proposed to quantify the non-uniform color cast, blurring, and low contrast that characterize underwater engineering and monitoring images. Experiments are conducted to illustrate the performance of the proposed UCIQE metric and its capability to measure underwater image enhancement results. They show that the proposed metric has comparable performance to the leading natural color image quality metrics and the underwater grayscale image quality metrics available in the literature, and can predict with higher accuracy the relative amount of degradation for similar image content in underwater environments. Importantly, UCIQE is a simple and fast solution for real-time underwater video processing. The effectiveness of the presented measure is also demonstrated by subjective evaluation, which shows good correlation between the UCIQE and the subjective mean opinion scores. PMID:26513783
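
    A UCIQE-style linear combination can be sketched as below. The chroma, saturation, and contrast terms follow the abstract's description of the metric's structure, but the weights and the percentile-based luminance contrast measure are assumptions, not the paper's fitted values:

```python
import numpy as np

def uciqe_like(L, a, b, c1=0.468, c2=0.275, c3=0.258):
    """Sketch of a UCIQE-style score on CIELab channel arrays:
    weighted sum of chroma standard deviation, luminance contrast,
    and mean saturation. Weights here are illustrative assumptions."""
    chroma = np.hypot(a, b)
    sigma_c = chroma.std()
    # luminance contrast: spread between the darkest and brightest 1%
    lo, hi = np.percentile(L, [1, 99])
    con_l = hi - lo
    # saturation in CIELab, guarded against division by zero at black
    sat = chroma / np.maximum(np.hypot(chroma, L), 1e-9)
    mu_s = sat.mean()
    return float(c1 * sigma_c + c2 * con_l + c3 * mu_s)
```

    A perfectly uniform gray image scores zero on all three terms, while color cast, blur, and low contrast each pull a different term down, which is the motivation for the linear combination.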

  6. Wavelet based image quality self measurements

    NASA Astrophysics Data System (ADS)

    Al-Jawad, Naseer; Jassim, Sabah

    2010-04-01

    Noise is generally considered a degradation of image quality. Moreover, image quality is often judged by the appearance and clarity of image edges. The performance of most applications is affected by image quality and by the level of different types of degradation. In general, measuring image quality and identifying the type of noise or degradation is a key factor in raising application performance, and this task can be very challenging. The wavelet transform is nowadays widely used in different applications, which mostly benefit from wavelet localisation in the frequency domain. The coefficients of the high-frequency sub-bands in the wavelet domain are well represented by a Laplace histogram. In this paper we propose to use the Laplace distribution histogram to measure image quality and also to identify the type of degradation affecting a given image. Image quality and the level of degradation are usually measured against a reference image of reasonable quality; the Laplace distribution histogram instead provides a self-testing measurement of image quality. This measurement is based on constructing the theoretical Laplace distribution histogram of a high-frequency wavelet sub-band from the actual standard deviation, and then comparing it with the empirical Laplace distribution histogram. The comparison is performed using the histogram intersection method. All experiments are performed on the extended Yale database.
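
    The self-testing idea can be sketched as follows: build the theoretical Laplace histogram from the coefficients' own standard deviation, then compare it with the empirical histogram via histogram intersection. This is a minimal illustration, not the authors' implementation:

```python
import numpy as np

def laplace_quality_score(coeffs, n_bins=64):
    """Self-referential quality check for high-frequency wavelet
    coefficients: histogram intersection between the empirical
    histogram and the zero-mean Laplace pdf implied by the
    coefficients' own standard deviation."""
    coeffs = np.asarray(coeffs, dtype=float).ravel()
    b = coeffs.std() / np.sqrt(2.0)            # Laplace scale from std
    edges = np.linspace(coeffs.min(), coeffs.max(), n_bins + 1)
    emp, _ = np.histogram(coeffs, bins=edges, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    theo = np.exp(-np.abs(centers) / b) / (2.0 * b)  # Laplace pdf
    width = edges[1] - edges[0]
    # histogram intersection of the two (approximately unit-mass) pdfs
    return float(np.sum(np.minimum(emp, theo)) * width)
```

    Scores near 1 indicate coefficients that fit their own Laplace model well (undistorted content); departures from the model lower the intersection, flagging degradation.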

  7. Process perspective on image quality evaluation

    NASA Astrophysics Data System (ADS)

    Leisti, Tuomas; Halonen, Raisa; Kokkonen, Anna; Weckman, Hanna; Mettänen, Marja; Lensu, Lasse; Ritala, Risto; Oittinen, Pirkko; Nyman, Göte

    2008-01-01

    The psychological complexity of multivariate image quality evaluation makes it difficult to develop general image quality metrics. Quality evaluation includes several mental processes, and ignoring these processes and using only a few test images can lead to biased results. Using a qualitative/quantitative (Interpretation Based Quality, IBQ) methodology, we examined the process of pair-wise comparison in a setting where the quality of images printed by a laser printer on different paper grades was evaluated. The test image consisted of a picture of a table covered with several objects. Three other images were also used: photographs of a woman, a cityscape, and a countryside. In addition to the pair-wise comparisons, observers (N=10) were interviewed about the subjective quality attributes they used in making their quality decisions. An examination of the individual pair-wise comparisons revealed serious inconsistencies in the observers' evaluations for the test image content, but not for the other contents. The qualitative analysis showed that this inconsistency was due to the observers' focus of attention. The lack of an easily recognizable context in the test image may have contributed to this inconsistency. To obtain reliable knowledge of the effect of image context or attention on subjective image quality, a qualitative methodology is needed.

  8. Estimation of adequate setup margins and threshold for position errors requiring immediate attention in head and neck cancer radiotherapy based on 2D image guidance

    PubMed Central

    2013-01-01

    Background We estimated sufficient setup margins for head-and-neck cancer (HNC) radiotherapy (RT) when 2D kV images are utilized for routine patient setup verification. As a secondary goal, we estimated a threshold for the displacements of the most important bony landmarks related to the target volumes requiring immediate attention. Methods We analyzed 1491 orthogonal x-ray images utilized in RT treatment guidance for 80 HNC patients. We estimated overall setup errors and errors for four subregions to account for patient rotation and deformation: the vertebrae C1-2, C5-7, the occiput bone and the mandible. Setup margins were estimated for two 2D image guidance protocols: i) imaging at the first three fractions and weekly thereafter and ii) daily imaging. Two 2D image matching principles were investigated: i) matching to the vertebrae in the middle of the planning target volume (PTV) (MID_PTV) and ii) minimizing the maximal position error for the four subregions (MIN_MAX). The threshold for the position errors was calculated with two previously unpublished methods based on the van Herk formula and clinical data, by retaining a margin of 5 mm sufficient for each subregion. Results Sufficient setup margins to compensate for the displacements of the subregions were approximately two times larger than those needed to compensate for setup errors for a rigid target. Adequate margins varied from 2.7 mm to 9.6 mm depending on the subregions related to the target, the applied image guidance protocol, and early correction of clinically important systematic 3D displacements of the subregions exceeding 4 mm. The MIN_MAX match resulted in smaller margins but caused an overall shift of 2.5 mm for the target center. Margins ≤ 5 mm were sufficient with the MID_PTV match only through application of daily 2D imaging and a threshold of 4 mm to correct systematic displacement of a subregion. Conclusions Adequate setup margins depend remarkably on the subregions related to the target volume. When the systematic 3D
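
    The van Herk margin recipe referred to above is commonly written as M = 2.5Σ + 0.7σ, where Σ is the standard deviation of systematic errors and σ that of random errors. A direct transcription:

```python
def van_herk_margin(sigma_systematic, sigma_random):
    """Classic van Herk PTV margin recipe: M = 2.5 * Sigma + 0.7 * sigma,
    with Sigma the SD of systematic errors and sigma the SD of random
    errors, both in mm."""
    return 2.5 * sigma_systematic + 0.7 * sigma_random
```

    For example, systematic and random SDs of 2 mm and 3 mm give a margin of 7.1 mm, which is why reducing the systematic component (e.g., via early correction of displacements exceeding a threshold) shrinks margins much faster than reducing the random component.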

  9. Automatic no-reference image quality assessment.

    PubMed

    Li, Hongjun; Hu, Wei; Xu, Zi-Neng

    2016-01-01

    No-reference image quality assessment aims to predict the visual quality of distorted images without examining the original image as a reference. Most no-reference image quality metrics that have already been proposed are designed for one or a set of predefined specific distortion types and are unlikely to generalize to evaluating images degraded with other types of distortion. There is a strong need for no-reference image quality assessment methods that are applicable to various distortions. In this paper, the authors propose a no-reference image quality assessment method based on a natural image statistic model in the wavelet transform domain. A generalized Gaussian density model is employed to summarize the marginal distribution of wavelet coefficients of the test images, so that only a few model parameters are needed for the evaluation of image quality. The proposed algorithm is tested on three large-scale benchmark databases. Experimental results demonstrate that the proposed algorithm is easy to implement and computationally efficient. Furthermore, our method can be applied to many well-known types of image distortions, and achieves good prediction performance. PMID:27468398
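
    A standard way to fit the generalized Gaussian shape parameter is moment matching on the ratio E[x²]/E[|x|]² (Mallat's ratio method); the sketch below uses bisection and is an assumption about the fitting step, since the abstract does not specify the authors' exact procedure:

```python
import math

def ggd_shape(samples):
    """Estimate the generalized Gaussian shape parameter beta by
    moment matching: solve r(beta) = E[x^2] / E[|x|]^2, where
    r(b) = Gamma(1/b) * Gamma(3/b) / Gamma(2/b)**2 (decreasing in b)."""
    m1 = sum(abs(x) for x in samples) / len(samples)
    m2 = sum(x * x for x in samples) / len(samples)
    target = m2 / (m1 * m1)

    def r(b):
        return math.gamma(1.0 / b) * math.gamma(3.0 / b) / math.gamma(2.0 / b) ** 2

    lo, hi = 0.1, 10.0  # bisection bracket; r(0.1) >> r(10)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if r(mid) > target:
            lo = mid  # r too large -> beta must be larger
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

    Gaussian data (ratio π/2) recovers beta ≈ 2, and Laplacian data (ratio 2) recovers beta ≈ 1; the fitted (beta, scale) pair then serves as the feature vector for quality evaluation.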

  10. JPEG2000 still image coding quality.

    PubMed

    Chen, Tzong-Jer; Lin, Sheng-Chieh; Lin, You-Chen; Cheng, Ren-Gui; Lin, Li-Hui; Wu, Wei

    2013-10-01

    This work compares the image quality produced by two popular JPEG2000 programs. Two medical image compression implementations are both based on JPEG2000, but they differ regarding the interface, convenience, speed of computation, and characteristic options influenced by the encoder, quantization, tiling, etc. The differences in image quality and compression ratio are also affected by the modality and the compression algorithm implementation. Do they provide the same quality? The qualities of compressed medical images from two image compression programs named Apollo and JJ2000 were evaluated extensively using objective metrics. These algorithms were applied to three medical image modalities at various compression ratios ranging from 10:1 to 100:1. Following that, the quality of the reconstructed images was evaluated using five objective metrics. The Spearman rank correlation coefficients were measured under every metric for the two programs. We found that JJ2000 and Apollo exhibited indistinguishable image quality for all images evaluated using the above five metrics (r > 0.98, p < 0.001). It can be concluded that the image quality of the JJ2000 and Apollo algorithms is statistically equivalent for medical image compression. PMID:23589187
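
    The Spearman rank correlation used for the comparison can be computed as the Pearson correlation of tie-averaged ranks; a minimal self-contained version (an illustration, not the study's statistics package):

```python
def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of ranks,
    with average ranks assigned to ties."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        i = 0
        while i < len(order):
            j = i
            while j + 1 < len(order) and v[order[j + 1]] == v[order[i]]:
                j += 1  # extend over the tie group
            avg = (i + j) / 2.0 + 1.0  # average rank of the tie group
            for k in range(i, j + 1):
                r[order[k]] = avg
            i = j + 1
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5
```

    Because it operates on ranks, the coefficient is insensitive to monotone rescalings of either metric, which is why it suits comparing two programs' scores across compression ratios.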

  11. Cognitive issues in image quality measurement

    NASA Astrophysics Data System (ADS)

    de Ridder, Huib

    2001-01-01

    Designers of imaging systems, image processing algorithms, etc., usually take for granted that methods for assessing perceived image quality produce unbiased estimates of the viewers' quality impression. Quality judgments, however, are affected by the judgment strategies induced by the experimental procedures. In this paper the results of two experiments are presented illustrating the influence judgment strategies can have on quality judgments. The first experiment concerns contextual effects due to the composition of the stimulus sets. Subjects assessed the sharpness of two differently composed sets of blurred versions of one static image. The sharpness judgments for the blurred images present in both stimulus sets were found to be dependent on the composition of the set as well as the scaling technique employed. In the second experiment subjects assessed either the overall quality or the overall impairment of manipulated and standard JPEG-coded images containing two main artifacts. The results indicate a systematic difference between the quality and impairment judgments that could be interpreted as instruction-based differential weighting of the two artifacts. Again, some influence of scaling techniques was observed. The results of both experiments underscore the important role judgment strategies play in the psychophysical evaluation of image quality. Ignoring this influence on quality judgments may lead to invalid conclusions about the viewers' impression of image quality.

  12. Phase congruency assesses hyperspectral image quality

    NASA Astrophysics Data System (ADS)

    Shao, Xiaopeng; Zhong, Cheng

    2012-10-01

    Blind image quality assessment (QA) is a tough task, especially for hyperspectral imagery, which is degraded by noise, distortion, defocus, and other complex factors. Subjective hyperspectral imagery QA methods basically measure the degradation of the image in terms of human perceptual visual quality. Noise and blur, the measurement features that most strongly determine image quality, are employed to predict the objective quality of each band of hyperspectral imagery. We demonstrate a novel no-reference hyperspectral imagery QA model based on phase congruency (PC), which is a dimensionless quantity and provides an absolute measure of the significance of feature points. First, a log-Gabor wavelet is used to calculate the phase congruency of the frequencies of each band image; the relationship between noise and PC can be derived from this transformation under the assumption that the noise is additive. Second, a PC focus-measure evaluation model is proposed to evaluate blur caused by different amounts of defocus. Ratio and mean factors of the edge blur level and noise are defined to assess the quality of each band image. This image QA method obtains excellent correlation with subjective image quality scores without any reference. Finally, the PC information is utilized to improve the quality of some band images.

  13. Imaging surveillance programs for women at high breast cancer risk in Europe: Are women from ethnic minority groups adequately included? (Review).

    PubMed

    Belkić, Karen; Cohen, Miri; Wilczek, Brigitte; Andersson, Sonia; Berman, Anne H; Márquez, Marcela; Vukojević, Vladana; Mints, Miriam

    2015-09-01

    Women from ethnic minority groups, including immigrants and refugees, are reported to have low breast cancer (BC) screening rates. Active, culturally-sensitive outreach is vital for increasing participation of these women in BC screening programs. Women at high BC risk who belong to an ethnic minority group are of special concern. Such women could benefit from ongoing trials aimed at optimizing screening strategies for early BC detection among those at increased BC risk. Considering the marked disparities in BC survival in Europe and its enormous and dynamic ethnic diversity, these issues are extremely timely for Europe. We systematically reviewed the literature concerning European surveillance studies that had imaging in the protocol and that targeted women at high BC risk. The aim of the present review was thereby to assess the likelihood that women at high BC risk from minority ethnic groups were adequately included in these surveillance programs. Twenty-seven research groups in Europe reported on their imaging surveillance programs for women at increased BC risk. The benefit of strategies such as inclusion of magnetic resonance imaging and/or more intensive screening was clearly documented for the participating women at increased BC risk. However, none of the reports indicated that sufficient outreach was performed to ensure that women at increased BC risk from minority ethnic groups were adequately included in these surveillance programs. On the basis of this systematic review, we conclude that the specific screening needs of ethnic minority women at increased BC risk have not yet been met in Europe. Active, culturally-sensitive outreach is needed to identify minority women at increased BC risk and to facilitate their inclusion in ongoing surveillance programs. It is anticipated that these efforts would be most effective if coordinated with the development of European-wide, population-based approaches to BC screening. PMID:26134040

  14. Retinal image quality assessment using generic features

    NASA Astrophysics Data System (ADS)

    Fasih, Mahnaz; Langlois, J. M. Pierre; Ben Tahar, Houssem; Cheriet, Farida

    2014-03-01

    Retinal image quality assessment is an important step in automated eye disease diagnosis. Diagnosis accuracy is highly dependent on the quality of retinal images, because poor image quality might prevent the observation of significant eye features and disease manifestations. A robust algorithm is therefore required in order to evaluate the quality of images in a large database. We developed an algorithm for retinal image quality assessment based on generic features that is independent of segmentation methods. It exploits local sharpness and texture features by applying the cumulative probability of blur detection metric and the run-length encoding algorithm, respectively. The quality features are combined to evaluate the image's suitability for diagnosis purposes. Based on the recommendations of medical experts and our experience, we compared a global and a local approach. A support vector machine with radial basis functions was used as a nonlinear classifier in order to classify images into gradable and ungradable groups. We applied our methodology to 65 images of size 2592×1944 pixels that had been graded by a medical expert. The expert evaluated 38 images as gradable and 27 as ungradable. The results indicate very good agreement between the proposed algorithm's predictions and the medical expert's judgment: the sensitivity and specificity for the local approach are 92% and 94%, respectively. The algorithm demonstrates sufficient robustness to identify relevant images for automated diagnosis.

  15. Body image and quality of life in a Spanish population

    PubMed Central

    Lobera, Ignacio Jáuregui; Ríos, Patricia Bolaños

    2011-01-01

    Purpose The aim of the current study was to analyze the psychometric properties, factor structure, and internal consistency of the Spanish version of the Body Image Quality of Life Inventory (BIQLI-SP) as well as its test–retest reliability. Further objectives were to analyze different relationships with key dimensions of psychosocial functioning (i.e., self-esteem, presence of psychopathological symptoms, eating and body image-related problems, and perceived stress) and to evaluate differences in body image quality of life due to gender. Patients and methods The sample comprised 417 students without any psychiatric history, recruited from the Pablo de Olavide University and the University of Seville. There were 140 men (33.57%) and 277 women (66.43%), and the mean age was 21.62 years (standard deviation = 5.12). After obtaining informed consent from all participants, the following questionnaires were administered: BIQLI, Eating Disorder Inventory-2 (EDI-2), Perceived Stress Questionnaire (PSQ), Self-Esteem Scale (SES), and Symptom Checklist-90-Revised (SCL-90-R). Results The BIQLI-SP shows adequate psychometric properties, and it may be useful to determine the body image quality of life in different physical conditions. A more positive body image quality of life is associated with better self-esteem, better psychological wellbeing, and fewer eating-related dysfunctional attitudes, this being more evident among women. Conclusion The BIQLI-SP may be useful to determine the body image quality of life in different contexts with regard to dermatology, cosmetic and reconstructive surgery, and endocrinology, among others. In these fields of study, a new trend has emerged to assess body image-related quality of life. PMID:21403794

  16. Dose and diagnostic image quality in digital tomosynthesis imaging of facial bones in pediatrics

    NASA Astrophysics Data System (ADS)

    King, J. M.; Hickling, S.; Elbakri, I. A.; Reed, M.; Wrogemann, J.

    2011-03-01

    The purpose of this study was to evaluate the use of digital tomosynthesis (DT) for pediatric facial bone imaging. We compared the eye lens dose and diagnostic image quality of DT facial bone exams relative to digital radiography (DR) and computed tomography (CT), and investigated whether we could modify our current DT imaging protocol to reduce patient dose while maintaining sufficient diagnostic image quality. We measured the dose to the eye lens for all three modalities using high-sensitivity thermoluminescent dosimeters (TLDs) and an anthropomorphic skull phantom. To assess the diagnostic image quality of DT compared to the corresponding DR and CT images, we performed an observer study where the visibility of anatomical structures in the DT phantom images was rated on a four-point scale. We then acquired DT images at lower doses and had radiologists indicate whether the visibility of each structure was adequate for diagnostic purposes. For typical facial bone exams, we measured eye lens doses of 0.1-0.4 mGy for DR, 0.3-3.7 mGy for DT, and 26 mGy for CT. In general, facial bone structures were visualized better with DT than with DR, and the majority of structures were visualized well enough to avoid the need for CT. DT imaging provides high-quality diagnostic images of the facial bones while delivering significantly lower doses to the lens of the eye compared to CT. In addition, we found that by adjusting the imaging parameters, the DT effective dose can be reduced by up to 50% while maintaining sufficient image quality.

  17. Evaluation of overall setup accuracy and adequate setup margins in pelvic image-guided radiotherapy: Comparison of the male and female patients

    SciTech Connect

    Laaksomaa, Marko; Kapanen, Mika; Tulijoki, Tapio; Peltola, Seppo; Hyödynmaa, Simo; Kellokumpu-Lehtinen, Pirkko-Liisa

    2014-04-01

    We evaluated adequate setup margins for the radiotherapy (RT) of pelvic tumors based on the overall position errors of bony landmarks. We also estimated the difference in setup accuracy between the male and female patients. Finally, we compared patient rotation for 2 immobilization devices. The study cohort included 64 consecutive male and 64 consecutive female patients. Altogether, 1794 orthogonal setup images were analyzed. Observer-related deviation in image matching and the effect of patient rotation were explicitly determined. Overall systematic and random errors were calculated in 3 orthogonal directions. Anisotropic setup margins were evaluated based on residual errors after weekly image guidance. The van Herk formula was used to calculate the margins. Overall, 100 patients were immobilized with an in-house device; their rotation was compared against that of 28 patients immobilized with CIVCO's Kneefix and Feetfix. We found that the usually applied isotropic setup margin of 8 mm covered all the uncertainties related to patient setup for most RT treatments of the pelvis. However, margins of up to 10.3 mm were needed for the female patients with very large pelvic target volumes centered either in the symphysis or in the sacrum or containing both of these structures. This was because the effect of rotation (p ≤ 0.02) and the observer variation in image matching (p ≤ 0.04) were significantly larger for the female patients than for the male patients. Even with daily image guidance, the required margins remained larger for the women. Patient rotations were largest about the lateral axes. The difference between the required margins was only 1 mm for the 2 immobilization devices. The largest component of the overall systematic position error came from patient rotation, which emphasizes the need for rotation correction. Overall, larger position errors and setup margins were observed for the female patients with pelvic cancer than for the male patients.

  18. Seven challenges for image quality research

    NASA Astrophysics Data System (ADS)

    Chandler, Damon M.; Alam, Md M.; Phan, Thien D.

    2014-02-01

    Image quality assessment has been a topic of recent intense research due to its usefulness in a wide variety of applications. Owing in large part to efforts within the HVEI community, image-quality research has particularly benefited from improved models of visual perception. However, over the last decade, research in image quality has largely shifted from the previous broader objective of gaining a better understanding of human vision, to the current limited objective of better fitting the available ground-truth data. In this paper, we discuss seven open challenges in image quality research. These challenges stem from lack of complete perceptual models for: natural images; suprathreshold distortions; interactions between distortions and images; images containing multiple and nontraditional distortions; and images containing enhancements. We also discuss challenges related to computational efficiency. The objective of this paper is not only to highlight the limitations in our current knowledge of image quality, but to also emphasize the need for additional fundamental research in quality perception.

  19. Combined terahertz imaging system for enhanced imaging quality

    NASA Astrophysics Data System (ADS)

    Dolganova, Irina N.; Zaytsev, Kirill I.; Metelkina, Anna A.; Yakovlev, Egor V.; Karasik, Valeriy E.; Yurchenko, Stanislav O.

    2016-06-01

    An improved terahertz (THz) imaging system is proposed for enhancing image quality. The imaging scheme includes a THz source and a detection system operated in both active and passive modes. In order to illuminate the object plane homogeneously, a THz reshaper is proposed; the form and internal structure of the reshaper were studied by numerical simulation. Using different test objects, we compare imaging quality in the active and passive THz imaging modes. Imaging contrast and modulation transfer functions in the active and passive modes reveal their respective drawbacks at high and low spatial frequencies. The experimental results confirm the benefit of combining both imaging modes into a hybrid one. The proposed algorithm for forming the hybrid THz image is an effective approach to retrieving maximum information about the remote object.

  20. Optimization of synthetic aperture image quality

    NASA Astrophysics Data System (ADS)

    Moshavegh, Ramin; Jensen, Jonas; Villagomez-Hoyos, Carlos A.; Stuart, Matthias B.; Hemmsen, Martin Christian; Jensen, Jørgen Arendt

    2016-04-01

    Synthetic aperture (SA) imaging produces high-quality images and velocity estimates of both slow and fast flow at high frame rates. However, grating lobe artifacts can appear both in transmission and reception; these affect the image quality and the frame rate. Optimization of the parameters affecting the image quality of SA is therefore of great importance, and this paper proposes an advanced procedure for optimizing the parameters essential for acquiring optimal image quality while generating high-resolution SA images. Optimization of the image quality is mainly performed based on parameters such as the F-number, the number of emissions, and the aperture size, which are considered the acquisition factors that contribute most to the quality of high-resolution SA images. The performance is therefore quantified in terms of the full-width at half maximum (FWHM) and the cystic resolution (CTR). The results of the study showed that SA imaging with only 32 emissions and a maximum sweep angle of 22 degrees yields very good image quality compared with using 256 emissions and the full aperture size. The number of emissions and the maximum sweep angle in SA can therefore be optimized to reach reasonably good performance and to increase the frame rate by lowering the required number of emissions. All measurements are performed using the experimental SARUS scanner connected to a λ/2-pitch transducer. A wire phantom and a tissue-mimicking phantom containing anechoic cysts are scanned using the optimized parameters for the transducer. Measurements coincide with simulations.
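
    The FWHM figure of merit can be estimated from a 1-D point-spread profile by locating the half-maximum crossings; a simple sketch with linear interpolation (an illustration, not the SARUS processing chain):

```python
import numpy as np

def fwhm(profile, dx=1.0):
    """Full-width at half maximum of a 1-D point-spread profile,
    with linear interpolation at the half-max crossings."""
    p = np.asarray(profile, dtype=float)
    p = p - p.min()
    half = p.max() / 2.0
    above = np.where(p >= half)[0]
    left, right = above[0], above[-1]
    # interpolate on each side for sub-sample precision
    if left > 0:
        l = left - (p[left] - half) / (p[left] - p[left - 1])
    else:
        l = float(left)
    if right < len(p) - 1:
        r = right + (p[right] - half) / (p[right] - p[right + 1])
    else:
        r = float(right)
    return (r - l) * dx
```

    Applied to lateral and axial cuts through a wire-phantom point target, this yields the resolution numbers that are traded off against the number of emissions and sweep angle.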

  1. Automatic quality assessment of planetary images

    NASA Astrophysics Data System (ADS)

    Sidiropoulos, P.; Muller, J.-P.

    2015-10-01

    A significant fraction of planetary images are corrupted beyond the point that much scientific meaning can be extracted. For example, transmission errors result in missing data which are unrecoverable. The available planetary image datasets include many such "bad data", which both occupy valuable scientific storage resources and create false impressions about planetary image availability for specific planetary objects or target areas. In this work, we demonstrate a pipeline that we have developed to automatically assess the quality of planetary images. Additionally, this method discriminates between different types of image degradation, such as degradation originating from camera flaws versus degradation triggered by atmospheric conditions. Examples of quality assessment results for Viking Orbiter imagery will also be presented.
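
    A crude first-stage flag for unrecoverable missing data, such as transmission gaps, can be sketched as below; the fill value and threshold are illustrative assumptions, not the pipeline's actual criteria:

```python
import numpy as np

def missing_data_fraction(img, fill_value=0):
    """Fraction of pixels equal to an assumed transmission fill value."""
    img = np.asarray(img)
    return float(np.mean(img == fill_value))

def is_bad(img, threshold=0.2):
    """Flag an image whose missing-data fraction exceeds the threshold
    (the 20% cutoff here is an assumption for illustration)."""
    return missing_data_fraction(img) > threshold
```

    A real pipeline would go further, e.g. distinguishing dropped scan lines from genuinely dark terrain, but a fill-value census like this already removes the bulk of unusable frames cheaply.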

  2. Perceptual image quality: Effects of tone characteristics

    PubMed Central

    Delahunt, Peter B.; Zhang, Xuemei; Brainard, David H.

    2007-01-01

    Tone mapping refers to the conversion of luminance values recorded by a digital camera or other acquisition device to the luminance levels available from an output device, such as a monitor or a printer. Tone mapping can improve the appearance of rendered images. Although a variety of algorithms are available, there is little information about the image tone characteristics that produce pleasing images. We devised an experiment in which preferences for images with different tone characteristics were measured. The results indicate that there is a systematic relation between image tone characteristics and perceptual image quality for images containing faces. For these images, a mean face luminance level of 46–49 CIELAB L* units and a luminance standard deviation (taken over the whole image) of 18 CIELAB L* units produced the best renderings. This information is relevant for the design of tone-mapping algorithms, particularly as many images taken by digital camera users include faces. PMID:17235365
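For readers reproducing this kind of analysis, the luminance statistics above can be computed by converting sRGB pixels to CIE L* and taking the mean and standard deviation. The sketch below assumes the standard sRGB transfer curve and Rec. 709 luma weights; it is illustrative, not the authors' exact pipeline.

```python
import math

def srgb_to_linear(c):
    """Undo the sRGB transfer curve (c in [0, 1])."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def lightness(r, g, b):
    """CIE L* of an sRGB pixel (relative white point Y_n = 1)."""
    y = (0.2126 * srgb_to_linear(r) +
         0.7152 * srgb_to_linear(g) +
         0.0722 * srgb_to_linear(b))
    # below the CIE cutoff the cube root is replaced by a linear segment
    f = y ** (1.0 / 3.0) if y > 0.008856 else (903.3 * y + 16.0) / 116.0
    return 116.0 * f - 16.0

def l_star_stats(pixels):
    """Mean and standard deviation of L* over an iterable of (r, g, b)."""
    ls = [lightness(r, g, b) for r, g, b in pixels]
    mean = sum(ls) / len(ls)
    var = sum((l - mean) ** 2 for l in ls) / len(ls)
    return mean, math.sqrt(var)

# a mid-grey sRGB value of about 0.45 has L* inside the preferred 46-49 range
mean, sd = l_star_stats([(0.45, 0.45, 0.45)])
```

In practice the statistics would be taken over the detected face region (for the mean) and the whole image (for the standard deviation), as the abstract describes.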

  3. Image Quality Ranking Method for Microscopy.

    PubMed

    Koho, Sami; Fazeli, Elnaz; Eriksson, John E; Hänninen, Pekka E

    2016-01-01

    Automated analysis of microscope images is necessitated by the increased need for high-resolution follow-up of events in time. Manually finding the right images to analyze, or to eliminate from data analysis, is a common day-to-day problem in microscopy research today, and the constantly growing size of image datasets does not help matters. We propose a simple method and a software tool for sorting images within a dataset according to their relative quality. We demonstrate the applicability of our method in finding good-quality images in a STED microscope sample-preparation optimization image dataset. The results are validated by comparison to subjective opinion scores, as well as to five state-of-the-art blind image quality assessment methods. We also show how our method can be applied to eliminate useless out-of-focus images in a High-Content-Screening experiment. We further evaluate the ability of our image quality ranking method to detect out-of-focus images through extensive simulations and by comparing its performance against previously published, well-established microscopy autofocus metrics. PMID:27364703

  4. Image Quality Ranking Method for Microscopy

    PubMed Central

    Koho, Sami; Fazeli, Elnaz; Eriksson, John E.; Hänninen, Pekka E.

    2016-01-01

    Automated analysis of microscope images is necessitated by the increased need for high-resolution follow-up of events in time. Manually finding the right images to analyze, or to eliminate from data analysis, is a common day-to-day problem in microscopy research today, and the constantly growing size of image datasets does not help matters. We propose a simple method and a software tool for sorting images within a dataset according to their relative quality. We demonstrate the applicability of our method in finding good-quality images in a STED microscope sample-preparation optimization image dataset. The results are validated by comparison to subjective opinion scores, as well as to five state-of-the-art blind image quality assessment methods. We also show how our method can be applied to eliminate useless out-of-focus images in a High-Content-Screening experiment. We further evaluate the ability of our image quality ranking method to detect out-of-focus images through extensive simulations and by comparing its performance against previously published, well-established microscopy autofocus metrics. PMID:27364703

  5. Image Quality Ranking Method for Microscopy

    NASA Astrophysics Data System (ADS)

    Koho, Sami; Fazeli, Elnaz; Eriksson, John E.; Hänninen, Pekka E.

    2016-07-01

    Automated analysis of microscope images is necessitated by the increased need for high-resolution follow-up of events in time. Manually finding the right images to analyze, or to eliminate from data analysis, is a common day-to-day problem in microscopy research today, and the constantly growing size of image datasets does not help matters. We propose a simple method and a software tool for sorting images within a dataset according to their relative quality. We demonstrate the applicability of our method in finding good-quality images in a STED microscope sample-preparation optimization image dataset. The results are validated by comparison to subjective opinion scores, as well as to five state-of-the-art blind image quality assessment methods. We also show how our method can be applied to eliminate useless out-of-focus images in a High-Content-Screening experiment. We further evaluate the ability of our image quality ranking method to detect out-of-focus images through extensive simulations and by comparing its performance against previously published, well-established microscopy autofocus metrics.

  6. End-to-end image quality assessment

    NASA Astrophysics Data System (ADS)

    Raventos, Joaquin

    2012-05-01

    This paper presents an innovative computerized benchmarking approach (US patent pending, September 2011) based on extensive application of photometry, geometrical optics, and digital media, using a randomized target, that allows a standard observer to assess the image quality of video imaging systems at different daytime and low-light luminance levels. It takes into account the target's contrast and color characteristics, as well as the observer's visual acuity and dynamic response. This includes human vision as part of the "extended video imaging system" (EVIS), and allows image quality assessment by several standard observers simultaneously.

  7. MIQM: a multicamera image quality measure.

    PubMed

    Solh, Mashhour; AlRegib, Ghassan

    2012-09-01

    Although several subjective and objective quality assessment methods have been proposed in the literature for images and videos from single cameras, no comparable effort has been devoted to the quality assessment of multicamera images. With the increasing popularity of multiview applications, quality assessment of multicamera images and videos is becoming fundamental to the development of these applications. Image quality is affected by several factors, such as camera configuration, number of cameras, and the calibration process. In order to develop an objective metric specifically designed for multicamera systems, we identified and quantified two types of visual distortions in multicamera images: photometric distortions and geometric distortions. The relative distortion between individual camera scenes is a major factor in determining the overall perceived quality. In this paper, we show that such distortions can be translated into luminance, contrast, spatial motion, and edge-based structure components. We propose three different indices that can quantify these components. We provide examples to demonstrate the correlation among these components and the corresponding indices. Then, we combine these indices into one multicamera image quality measure (MIQM). Results and comparisons with other measures, such as peak signal-to-noise ratio, mean structural similarity, and visual information fidelity, show that MIQM outperforms other measures in capturing the perceptual fidelity of multicamera images. Finally, we verify the results against subjective evaluation. PMID:22645264
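Luminance and contrast components of this kind are commonly quantified with SSIM-style comparison terms. The sketch below is an illustrative stand-in, not the MIQM indices themselves; the stabilizing constants c1 and c2 are arbitrary choices.

```python
def luminance_contrast_indices(a, b, c1=1e-4, c2=9e-4):
    """SSIM-style luminance and contrast comparison between two
    co-located image patches a and b (flat lists of floats in [0, 1]).
    Both indices equal 1.0 for identical patches and fall toward 0
    as the patches' means and variances diverge."""
    n = len(a)
    mu_a, mu_b = sum(a) / n, sum(b) / n
    var_a = sum((x - mu_a) ** 2 for x in a) / n
    var_b = sum((x - mu_b) ** 2 for x in b) / n
    lum = (2 * mu_a * mu_b + c1) / (mu_a ** 2 + mu_b ** 2 + c1)
    con = (2 * (var_a * var_b) ** 0.5 + c2) / (var_a + var_b + c2)
    return lum, con

patch = [0.2, 0.4, 0.6, 0.8]
lum, con = luminance_contrast_indices(patch, patch)
# identical patches score 1.0 on both indices; a dimmed copy scores lower
```

In a multicamera setting such terms would be evaluated between overlapping regions of neighboring camera views rather than between an image and a pristine reference.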

  8. No-reference stereoscopic image quality assessment

    NASA Astrophysics Data System (ADS)

    Akhter, Roushain; Parvez Sazzad, Z. M.; Horita, Y.; Baltes, J.

    2010-02-01

    Display of stereo images is widely used to enhance the viewing experience of three-dimensional imaging and communication systems. In this paper, we propose a method for estimating the quality of stereoscopic images using segmented image features and disparity. The method is inspired by the human visual system. We believe the perceived distortion and disparity of any stereoscopic display are strongly dependent on local features, such as edge (non-plane) and non-edge (plane) areas. Therefore, a no-reference perceptual quality assessment is developed for JPEG-coded stereoscopic images based on segmented local features of artifacts and disparity. Local feature information, such as edge and non-edge area based relative disparity estimation, as well as the blockiness and blur within image blocks, is evaluated in this method. Two subjective stereo image databases are used to evaluate the performance of our method. The subjective experimental results indicate that our model has sufficient prediction performance.

  9. Rendered virtual view image objective quality assessment

    NASA Astrophysics Data System (ADS)

    Lu, Gang; Li, Xiangchun; Zhang, Yi; Peng, Kai

    2013-08-01

    Research on rendered virtual view image (RVVI) objective quality assessment is important for integrated imaging systems and image quality assessment (IQA). Traditional IQA algorithms cannot be applied directly on the system receiver side due to inter-view displacement and the absence of an original reference. This study proposes a block-based neighbor-reference (NbR) IQA framework for RVVI IQA. The neighbor views used for rendering are employed for quality assessment in the proposed framework. A symphonious factor handling noise and inter-view displacement is defined and applied to evaluate the contribution of the obtained quality index in each block pair. A three-stage experiment scheme is also presented to verify the proposed framework and evaluate its homogeneity performance when compared to full-reference IQA. Experimental results show the proposed framework is useful for RVVI objective quality assessment at the system receiver side and for benchmarking different rendering algorithms.

  10. Continuous assessment of perceptual image quality

    NASA Astrophysics Data System (ADS)

    Hamberg, Roelof; de Ridder, Huib

    1995-12-01

    The study addresses whether subjects are able to assess the perceived quality of an image sequence continuously. To this end, a new method for assessing time-varying perceptual image quality is presented by which subjects continuously indicate the perceived strength of image quality by moving a slider along a graphical scale. The slider's position on this scale is sampled every second. In this way, temporal variations in quality can be monitored quantitatively, and a means is provided by which differences between, for example, alternative transmission systems can be analyzed in an informative way. The usability of this method is illustrated by an experiment in which, for a period of 815 s, subjects assessed the quality of still pictures comprising time-varying degrees of sharpness. Copyright (c) 1995 Optical Society of America

  11. Quality measures in applications of image restoration.

    PubMed

    Kriete, A; Naim, M; Schafer, L

    2001-01-01

    We describe a new method for the estimation of image quality in image restoration applications. We demonstrate this technique on a simulated data set of fluorescent beads, in comparison with restoration by three different deconvolution methods. Both the number of iterations and a regularisation factor are varied to enforce changes in the resulting image quality. First, the data sets are directly compared by an accuracy measure. These values serve to validate the image quality descriptor, which is developed on the basis of optical information theory. This most general measure takes into account the spectral energies and the noise, weighted in a logarithmic fashion. It is demonstrated that this method is particularly helpful as a user-oriented method to control the output of iterative image restorations and to eliminate the guesswork in choosing a suitable number of iterations. PMID:11587324

  12. Toward clinically relevant standardization of image quality.

    PubMed

    Samei, Ehsan; Rowberg, Alan; Avraham, Ellie; Cornelius, Craig

    2004-12-01

    In recent years, notable progress has been made on standardization of medical image presentation through the definition and implementation of the Digital Imaging and Communications in Medicine (DICOM) Grayscale Standard Display Function (GSDF). In parallel, the American Association of Physicists in Medicine (AAPM) Task Group 18 has provided much-needed guidelines and tools for visual and quantitative assessment of medical display quality. In spite of these advances, however, there are still notable gaps in the effectiveness of DICOM GSDF in assuring consistent, high-quality display of medical images. In addition, the degree of correlation between display technical data and the diagnostic usability and performance of displays remains unclear. This article proposes three specific steps that DICOM, AAPM, and ACR may collectively take to bridge the gap between technical performance and clinical use: (1) DICOM does not provide means and acceptance criteria to evaluate the conformance of a display device to GSDF or to address other image quality characteristics. DICOM can expand beyond luminance response, extending the measurable, quantifiable elements of TG18 such as reflection and resolution. (2) In a large picture archiving and communication system (PACS) installation, it is critical to continually track the appropriate use and performance of multiple display devices. DICOM may help with this task by adding a Device Service Class to the standard to provide for communication and control of image quality parameters between applications and devices. (3) The question of the clinical significance of image quality metrics has rarely been addressed by prior efforts.
In cooperation with the AAPM, the American College of Radiology (ACR), and the Society for Computer Applications in Radiology (SCAR), DICOM may help to initiate research that will determine the clinical consequences of variations in image quality metrics (e.g., GSDF conformance) and to define what constitutes image quality from a

  13. Image quality evaluation using moving targets

    NASA Astrophysics Data System (ADS)

    Artmann, Uwe

    2013-03-01

    The basic concept of testing a digital imaging device is to reproduce a known target and to analyze the resulting image. This semi-reference approach can be used for various aspects of image quality. Each part of the imaging chain can influence the results: the lens, the sensor, the image processing and the target itself. The results are valid only for the complete system. If we want to test a single component, we have to make sure that we change only one and keep all the others constant. When testing mobile imaging devices, we run into the problem that hardly anything can be manually controlled by the tester. Manual exposure control is not available for most devices, the focus cannot be influenced, and hardly any settings for the image processing are available. Due to the limitations of the hardware, the image pipeline in the digital signal processor (DSP) of mobile imaging devices is a critical part of the image quality evaluation. The processing power of DSPs allows sharpening, tonal correction and noise reduction to be non-linear and adaptive. This makes it very hard to describe the behavior for an objective image quality evaluation. The image quality is highly influenced by the signal processing for noise and resolution, and the processing is the main reason for the loss of low-contrast fine details, the so-called texture blur. We present our experience in describing the image processing in more detail. All standardized test methods use a defined chart and require that the chart and the camera not be moved in any way during the test. In this paper, we present our results investigating the influence of chart movement during the test. Different structures, optimized for different aspects of image quality evaluation, are moved with a defined speed during the capturing process. The chart movement will change the input for the signal processing depending on the speed of the target during the test. The basic theoretical changes in the image will be the

  14. How much image noise can be added in cardiac x-ray imaging without loss in perceived image quality?

    NASA Astrophysics Data System (ADS)

    Gislason-Lee, Amber J.; Kumcu, Asli; Kengyelics, Stephen M.; Rhodes, Laura A.; Davies, Andrew G.

    2015-03-01

    Dynamic X-ray imaging systems are used for interventional cardiac procedures to treat coronary heart disease. X-ray settings are controlled automatically by specially designed X-ray dose control mechanisms whose role is to ensure that an adequate level of image quality is maintained with an acceptable radiation dose to the patient. Current commonplace dose control designs quantify image quality by performing a simple technical measurement directly on the image. However, the utility of cardiac X-ray images lies in their interpretation by a cardiologist during an interventional procedure, rather than in a technical measurement. With the long-term goal of devising a clinically relevant image quality metric for an intelligent dose control system, we aim to investigate the relationship of image noise with clinical professionals' perception of dynamic image sequences. Computer-generated noise was added, in incremental amounts, to angiograms of five different patients selected to represent the range of adult cardiac patient sizes. A two-alternative forced choice staircase experiment was used to determine the amount of noise which can be added to a patient's image sequences without changing image quality as perceived by clinical professionals. Twenty-five viewing sessions (five for each patient) were completed by thirteen observers. Results demonstrated scope to increase the noise of cardiac X-ray images by up to 21% +/- 8% before it is noticeable by clinical professionals. This indicates a potential for a 21% radiation dose reduction, since X-ray image noise and radiation dose are directly related; this would benefit both patients and personnel.
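An adaptive staircase of the kind described can be approximated, in its simplest form, by a 1-up/1-down rule. The sketch below is a simplified illustration with a simulated observer and noise levels expressed as integer percentages; the actual experiment used human observers and a two-alternative forced choice design.

```python
def staircase(respond, start, step, n_reversals=6):
    """Simple 1-up/1-down adaptive staircase: raise the noise level after
    each 'not noticeable' response, lower it after each 'noticeable' one,
    and stop after a fixed number of direction reversals. respond(level)
    returns True when the observer detects the added noise. The threshold
    estimate is the mean level at the reversal points."""
    level, last_dir, reversals, history = start, 0, 0, []
    while reversals < n_reversals:
        detected = respond(level)
        direction = -1 if detected else +1
        if last_dir and direction != last_dir:
            reversals += 1
            history.append(level)
        last_dir = direction
        level = max(0, level + direction * step)
    return sum(history) / len(history)

# simulated observer who notices noise only above 21% added noise
thr = staircase(lambda lvl: lvl > 21, start=5, step=2)
```

The staircase oscillates around the detection boundary, so the reversal mean lands between the last undetected and first detected levels.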

  15. Propagation, structural similarity, and image quality

    NASA Astrophysics Data System (ADS)

    Pérez, Jorge; Mas, David; Espinosa, Julián; Vázquez, Carmen; Illueca, Carlos

    2012-06-01

    Retinal image quality is usually analysed through parameters typical of instrumental optics, i.e., the PSF, the MTF and wavefront aberrations. Although these parameters are important, they are hard to translate into visual quality parameters, since human vision exhibits some tolerance to certain aberrations. This is particularly important in post-surgery eyes, where uncommon aberrations are induced and their effect on final image quality is not clear. Natural images usually show a strong dependency between one point and its neighbourhood. This fact aids image interpretation and should be considered when determining final image quality. The aim of this work is to propose an objective index which allows natural images on the retina to be compared and, from them, relevant information about the visual quality of a particular subject to be obtained. To this end, we propose individual eye modelling. The morphological data of the subject's eye are considered, and the light propagation through the ocular media is calculated by means of a Fourier-transform-based method. The retinal PSF so obtained is convolved with the natural scene under consideration, and the resulting image is compared with the ideal one by using the structural similarity index. The technique is applied to two eyes with a multifocal corneal profile (PresbyLasik) and can be used to determine the real extent of the achieved pseudoaccommodation.
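The core computation described, convolving a scene with the retinal PSF and scoring the result with a structural similarity index, can be sketched in one dimension. The PSF, scene, and single-window SSIM below are toy stand-ins for the paper's Fourier-propagated PSF and the full windowed SSIM map.

```python
def convolve(signal, psf):
    """Blur a 1-D scene with a point-spread function (edges truncated)."""
    half = len(psf) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, w in enumerate(psf):
            k = i + j - half
            if 0 <= k < len(signal):
                acc += w * signal[k]
        out.append(acc)
    return out

def ssim_global(x, y, c1=1e-4, c2=9e-4):
    """Single-window structural similarity between two equal-length signals."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return ((2 * mx * my + c1) * (2 * cov + c2) /
            ((mx * mx + my * my + c1) * (vx + vy + c2)))

scene = [0.0] * 8 + [1.0] * 8   # an ideal edge
psf = [0.25, 0.5, 0.25]         # toy normalised "retinal" PSF
retinal = convolve(scene, psf)
# SSIM of the blurred edge against the ideal one falls below 1.0
```

A stronger or more asymmetric PSF would push the index further from 1.0, which is what makes the comparison usable as a quality indicator.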

  16. No training blind image quality assessment

    NASA Astrophysics Data System (ADS)

    Chu, Ying; Mou, Xuanqin; Ji, Zhen

    2014-03-01

    State-of-the-art blind image quality assessment (IQA) methods generally extract perceptual features from training images and send them to a support vector machine (SVM) to learn a regression model, which can then be used to predict the quality scores of testing images. However, these methods need complicated training and learning, and the evaluation results are sensitive to image contents and learning strategies. In this paper, two novel blind IQA metrics requiring no training or learning are proposed. The new methods extract perceptual features, i.e., the shape consistency of conditional histograms, from the joint histograms of neighboring divisive normalization transform coefficients of distorted images, and then compare the length attribute of the extracted features with that of the reference and degraded images in the LIVE database. In the first method, a cluster center is found in the feature attribute space of the natural reference images, and the distance between the feature attribute of the distorted image and the cluster center is adopted as the quality label. The second method utilizes the feature attributes and subjective scores of all the images in the LIVE database to construct a dictionary, and the final quality score is calculated by interpolating the subjective scores of nearby words in the dictionary. Unlike traditional SVM-based blind IQA methods, the proposed metrics have explicit expressions, which reflect the relationships between the perceptual features and image quality well. Experimental results on publicly available databases such as LIVE, CSIQ and TID2008 show the effectiveness of the proposed methods, and their performance is fairly acceptable.

  17. Image Acquisition and Quality in Digital Radiography.

    PubMed

    Alexander, Shannon

    2016-09-01

    Medical imaging has undergone dramatic changes and technological breakthroughs since the introduction of digital radiography. This article presents information on the development of digital radiography and types of digital radiography systems. Aspects of image quality and radiation exposure control are highlighted as well. In addition, the article includes related workplace changes and medicolegal considerations in the digital radiography environment. PMID:27601691

  18. Holographic projection with higher image quality.

    PubMed

    Qu, Weidong; Gu, Huarong; Tan, Qiaofeng

    2016-08-22

    The spatial resolution, limited by the size of the spatial light modulator (SLM), can hardly be increased in holographic projection, and speckle noise always appears, degrading image quality. In this paper, holographic projection with higher image quality is presented. The spatial resolution of the reconstructed image is twice that of existing holographic projection, and speckles are suppressed well at the same time. Finally, the effectiveness of the holographic projection is verified in experiments. PMID:27557197

  19. Color image processing for date quality evaluation

    NASA Astrophysics Data System (ADS)

    Lee, Dah Jye; Archibald, James K.

    2010-01-01

    Many agricultural non-contact visual inspection applications use color image processing techniques because color is often a good indicator of product quality. Color evaluation is an essential step in the processing and inventory control of fruits and vegetables that directly affects profitability. Most color spaces, such as RGB and HSV, represent colors with three-dimensional data, which makes color image processing a challenging task. Since most agricultural applications only require analysis of a predefined set or range of colors, mapping these relevant colors to a small number of indexes allows simple and efficient color image processing for quality evaluation. This paper presents a simple but efficient color mapping and image processing technique designed specifically for real-time quality evaluation of Medjool dates. In contrast with more complex color image processing techniques, the proposed color mapping method makes it easy for a human operator to specify and adjust color-preference settings for different color groups representing distinct quality levels. Using this color mapping technique, the color image is first converted to a color map in which a single color index represents the color value of each pixel. Fruit maturity level is evaluated based on these color indices. A skin lamination threshold is then determined based on the fruit surface characteristics. This adaptive threshold is used to detect delaminated fruit skin and hence determine the fruit quality. This robust color grading technique has been used for real-time Medjool date grading.
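A color-index mapping of this kind can be illustrated as a nearest-reference-color lookup. The palette values below are hypothetical, not the operator-tuned settings used for Medjool dates.

```python
def build_color_map(reference_colors):
    """Return a function mapping an (r, g, b) pixel to the index of the
    nearest reference color (squared Euclidean distance in RGB)."""
    def nearest(pixel):
        return min(range(len(reference_colors)),
                   key=lambda i: sum((p - q) ** 2
                                     for p, q in zip(pixel,
                                                     reference_colors[i])))
    return nearest

# hypothetical quality palette, light amber (immature) to dark brown (mature)
palette = [(210, 160, 90), (160, 100, 50), (90, 50, 25)]
index_of = build_color_map(palette)
dark_pixel = (85, 55, 30)   # should map to the darkest palette entry
```

In a real grading pipeline the per-pixel indices would then be histogrammed per fruit to decide maturity class, mirroring the paper's use of color indices for maturity evaluation.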

  20. Color image attribute and quality measurements

    NASA Astrophysics Data System (ADS)

    Gao, Chen; Panetta, Karen; Agaian, Sos

    2014-05-01

    Color image quality measures have been used for many computer vision tasks. In practical applications, no-reference (NR) measures are desirable because reference images are not always accessible. However, only limited success has been achieved. Most existing NR quality assessments require that the type of image distortion be known a priori. In this paper, three NR color image attributes, colorfulness, sharpness and contrast, are quantified by new metrics. Using these metrics, a new Color Quality Measure (CQM), based on a linear combination of these three color image attributes, is presented. We evaluated the performance of several state-of-the-art no-reference measures for comparison purposes. Experimental results demonstrate that the CQM correlates well with evaluations obtained from human observers and that it operates in real time. The results also show that the presented CQM outperforms previous works with respect to ranking image quality among images containing the same or different content. Finally, the performance of the CQM is independent of distortion type, which is demonstrated in the experimental results.
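Of the three attributes, colorfulness has a well-known closed-form statistic due to Hasler and Süsstrunk, which may or may not match the new metric used here; the sketch below illustrates that classical version.

```python
import math

def colorfulness(pixels):
    """Hasler-Süsstrunk colorfulness statistic over (r, g, b) pixels with
    values in 0-255. Computed from the opponent channels rg = R - G and
    yb = (R + G)/2 - B as sqrt(sd_rg^2 + sd_yb^2)
    + 0.3*sqrt(mu_rg^2 + mu_yb^2)."""
    rg = [r - g for r, g, b in pixels]
    yb = [0.5 * (r + g) - b for r, g, b in pixels]
    n = len(pixels)

    def stats(xs):
        mu = sum(xs) / n
        return mu, math.sqrt(sum((x - mu) ** 2 for x in xs) / n)

    mu_rg, sd_rg = stats(rg)
    mu_yb, sd_yb = stats(yb)
    return (math.sqrt(sd_rg ** 2 + sd_yb ** 2) +
            0.3 * math.sqrt(mu_rg ** 2 + mu_yb ** 2))

grey = [(128, 128, 128)] * 4   # achromatic image: colorfulness is zero
vivid = [(255, 0, 0), (0, 255, 0), (0, 0, 255), (255, 255, 0)]
```

A CQM-style score would then combine such an attribute with sharpness and contrast terms via a weighted sum, with weights fitted to subjective data.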

  1. Computerized measurement of mammographic display image quality

    NASA Astrophysics Data System (ADS)

    Chakraborty, Dev P.; Sivarudrappa, Mahesh; Roehrig, Hans

    1999-05-01

    Since the video monitor is widely believed to be the weak link in the imaging chain, it is critical to include it in the total image quality evaluation. Yet most physical measurements of mammographic image quality are presently limited to measurements on the digital matrix, not the displayed image. A method is described to quantitatively measure the image quality of mammographic monitors using ACR phantom-based test patterns. The image of the test pattern is digitized using a charge-coupled device (CCD) camera, and the resulting image file is analyzed by an existing phantom analysis method (Computer Analysis of Mammography Phantom Images, CAMPI). The new method is called CCD-CAMPI, and it yields the signal-to-noise ratio (SNR) for an arbitrary target shape (e.g., speck, mass or fiber). In this work we show the feasibility of this idea for speck targets. Also performed were a physical image quality characterization of the monitor (so-called Fourier measures) and analysis by another template matching method due to Tapiovaara and Wagner (TW), which is closely related to CAMPI. The methods were applied to a MegaScan monitor. Test patterns containing a complete speck group superposed on a noiseless background were displayed on the monitor, and a series of CCD images were acquired. These images were subjected to CCD-CAMPI and TW analyses. It was found that the SNR values for the CCD-CAMPI method tracked those of the TW method, although the latter measurements were considerably less precise. The TW SNR measure was also about 25% larger than the CCD-CAMPI determination. These differences could be understood from the manner in which the two methods evaluate the noise. Overall accuracy of the CAMPI SNR determination was 4.1% for single images, expressed as a coefficient of variation. While the SNR measures are predictable from the Fourier measures, the number of images and effort required is prohibitive, and the approach is not suited to Quality Control (QC). Unlike the Fourier

  2. A database for spectral image quality

    NASA Astrophysics Data System (ADS)

    Le Moan, Steven; George, Sony; Pedersen, Marius; Blahová, Jana; Hardeberg, Jon Yngve

    2015-01-01

    We introduce a new image database dedicated to multi-/hyperspectral image quality assessment. A total of nine scenes representing pseudo-flat surfaces of different materials (textile, wood, skin, etc.) were captured by means of a 160-band hyperspectral system with a spectral range between 410 and 1000 nm. Five spectral distortions were designed, applied to the spectral images, and subsequently compared in a psychometric experiment, in order to provide a basis for applications such as the evaluation of spectral image difference measures. The database can be downloaded freely from http://www.colourlab.no/cid.

  3. Blind image quality assessment through anisotropy.

    PubMed

    Gabarda, Salvador; Cristóbal, Gabriel

    2007-12-01

    We describe an innovative methodology for determining the quality of digital images. The method is based on measuring the variance of the expected entropy of a given image upon a set of predefined directions. Entropy can be calculated on a local basis by using a spatial/spatial-frequency distribution as an approximation for a probability density function. The generalized Rényi entropy and the normalized pseudo-Wigner distribution (PWD) have been selected for this purpose. As a consequence, a pixel-by-pixel entropy value can be calculated, and therefore entropy histograms can be generated as well. The variance of the expected entropy is measured as a function of the directionality, and it has been taken as an anisotropy indicator. For this purpose, directional selectivity can be attained by using an oriented 1-D PWD implementation. Our main purpose is to show how such an anisotropy measure can be used as a metric to assess both the fidelity and quality of images. Experimental results show that an index such as this presents some desirable features that resemble those from an ideal image quality function, constituting a suitable quality index for natural images. Namely, in-focus, noise-free natural images have shown a maximum of this metric in comparison with other degraded, blurred, or noisy versions. This result provides a way of identifying in-focus, noise-free images from other degraded versions, allowing an automatic and nonreference classification of images according to their relative quality. It is also shown that the new measure is well correlated with classical reference metrics such as the peak signal-to-noise ratio. PMID:18059913
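The anisotropy idea, directional entropies whose spread across orientations indicates quality, can be illustrated in a much-simplified form: Shannon entropy of directional difference histograms standing in for the Rényi entropy of an oriented pseudo-Wigner distribution.

```python
import math

def entropy(values, bins=8):
    """Shannon entropy (bits) of a histogram over the given values."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins or 1.0   # constant input collapses to one bin
    counts = [0] * bins
    for v in values:
        counts[min(int((v - lo) / width), bins - 1)] += 1
    n = len(values)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

def anisotropy(img):
    """Variance of directional entropies, where differences are taken
    along the horizontal, vertical, and two diagonal directions. This is
    a simplified stand-in for the Rényi/pseudo-Wigner measure in the
    paper, intended only to show the variance-over-directions idea."""
    h, w = len(img), len(img[0])
    dirs = [(0, 1), (1, 0), (1, 1), (1, -1)]
    ents = []
    for dy, dx in dirs:
        diffs = [img[y + dy][x + dx] - img[y][x]
                 for y in range(h - dy)
                 for x in range(max(0, -dx), w - max(0, dx))]
        ents.append(entropy(diffs))
    mean = sum(ents) / len(ents)
    return sum((e - mean) ** 2 for e in ents) / len(ents)

flat = [[5] * 8 for _ in range(8)]                  # no structure at all
stripes = [[x % 2 for x in range(8)] for _ in range(8)]  # strongly oriented
```

A flat image has identical (zero) entropy in every direction, while oriented structure makes the directional entropies differ, which is the signature the measure exploits.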

  4. Monotonic correlation analysis of image quality measures for image fusion

    NASA Astrophysics Data System (ADS)

    Kaplan, Lance M.; Burks, Stephen D.; Moore, Richard K.; Nguyen, Quang

    2008-04-01

    The next generation of night vision goggles will fuse image-intensified and long-wave infrared imagery to create a hybrid image that will enable soldiers to better interpret their surroundings during nighttime missions. Paramount to the development of such goggles is the exploitation of image quality (IQ) measures to automatically determine the best image fusion algorithm for a particular task. This work introduces a novel monotonic correlation coefficient to investigate how well candidate IQ features correlate with actual human performance, as measured by a perception study. The paper demonstrates how monotonic correlation can identify worthy features that could be overlooked by traditional correlation values.
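The classical baseline for monotonic association is Spearman's rank correlation, i.e. the Pearson correlation of ranks. The paper's novel coefficient is not reproduced here; the sketch below shows the standard rank-based measure it is compared against in this kind of study.

```python
def rank(xs):
    """1-based average ranks; tied values share the mean of their ranks."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks.
    +1 for any strictly increasing relation, -1 for strictly decreasing."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) *
           sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

iq_feature = [0.1, 0.4, 0.2, 0.9]    # hypothetical IQ feature values
human_score = [1.0, 3.0, 2.0, 4.0]   # same ordering as the feature
```

Because only the ordering matters, any monotone distortion of the feature leaves the coefficient unchanged, which is exactly the property wanted when relating IQ features to human performance.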

  5. Lessons learned in WISE image quality

    NASA Astrophysics Data System (ADS)

    Kendall, Martha; Duval, Valerie G.; Larsen, Mark F.; Heinrichsen, Ingolf H.; Esplin, Roy W.; Shannon, Mark; Wright, Edward L.

    2010-08-01

    The Wide-Field Infrared Survey Explorer (WISE) mission launched in December of 2009 is a true success story. The mission is performing beyond expectations on-orbit and maintained cost and schedule throughout. How does such a thing happen? A team constantly focused on mission success is a key factor. Mission success is more than a program meeting its ultimate science goals; it is also meeting schedule and cost goals to avoid cancellation. The WISE program can attribute some of its success in achieving the image quality needed to meet science goals to lessons learned along the way. A requirement was missed in early decomposition, the absence of which would have adversely affected end-to-end system image quality. Fortunately, the ability of the cross-organizational team to focus on fixing the problem without pointing fingers or waiting for paperwork was crucial in achieving a timely solution. Asking layman questions early in the program could have revealed requirement flowdown misunderstandings between spacecraft control stability and image processing needs. Such is the lesson learned with the WISE spacecraft Attitude Determination & Control Subsystem (ADCS) jitter control and the image data reduction needs. Spacecraft motion can affect image quality in numerous ways. Something as seemingly benign as different terminology being used by teammates in separate groups working on data reduction, spacecraft ADCS, the instrument, mission operations, and the science proved to be a risk to system image quality. While the spacecraft was meeting the allocated jitter requirement, the drift rate variation need was not being met. This missing need was noticed about a year before launch, and with a dedicated team effort, an adjustment was made to the spacecraft ADCS control. WISE is meeting all image quality requirements on-orbit thanks to a diligent team noticing something was missing before it was too late and applying their best effort to find a solution.

  6. Assessment of adequate quality and collocation of reference measurements with space-borne hyperspectral infrared instruments to validate retrievals of temperature and water vapour

    NASA Astrophysics Data System (ADS)

    Calbet, X.

    2016-01-01

    A method is presented to assess whether a given reference ground-based point observation, typically a radiosonde measurement, is adequately collocated and sufficiently representative of space-borne hyperspectral infrared instrument measurements. Once this assessment is made, the ground-based data can be used to validate and potentially calibrate, with a high degree of accuracy, the hyperspectral retrievals of temperature and water vapour.

  7. Subjective matters: from image quality to image psychology

    NASA Astrophysics Data System (ADS)

    Fedorovskaya, Elena A.; De Ridder, Huib

    2013-03-01

    From the advent of digital imaging through several decades of studies, the human vision research community systematically focused on perceived image quality and digital artifacts due to resolution, compression, gamma, dynamic range, capture and reproduction noise, blur, etc., to help overcome existing technological challenges and shortcomings. Technological advances have made digital images and digital multimedia nearly flawless in quality, and ubiquitous and pervasive in usage, providing us with the exciting but at the same time demanding opportunity to turn to the domain of human experience, including higher psychological functions such as cognition, emotion, awareness, social interaction, consciousness and Self. In this paper we outline the evolution of human-centered multidisciplinary studies related to imaging and propose steps and potential foci of future research.

  8. Measuring image quality in overlapping areas of panoramic composed images

    NASA Astrophysics Data System (ADS)

    Mitjà, Carles; Bover, Toni; Escofet, Jaume

    2012-06-01

    Several professional photographic applications use the merging of consecutive overlapping images in order to obtain bigger files by means of stitching techniques, or an extended field of view (FOV) for panoramic images. All of those applications share the fact that the final composed image is obtained by overlapping the neighboring areas of consecutive individual images taken as a mosaic or a series of tiles over the scene, from the same point of view. Any individual image taken with a given lens can carry residual aberrations, and several of them will most probably affect the borders of the image frame. Furthermore, the amount of distortion aberration present in the images of a given lens will be reversed in position for the two overlapping areas of a pair of consecutive takes. Finally, the different images used in composing the final one have corresponding overlapping areas taken with different perspective. From all of the above, it can be derived that the software employed must remap all the pixel information in order to resize and match image features in those overlapping areas, providing a final composed image with the desired perspective projection. The work presented analyses two panoramic format images taken with a pair of lenses and composed by means of state-of-the-art stitching software. Then, a series of images is taken to cover an FOV three times the original lens FOV, the images are merged by means of software in common use in professional panoramic photography, and the final image quality is evaluated through a series of targets positioned in strategic locations over the whole field of view. That allows measuring the resulting resolution and modulation transfer function (MTF). The results are compared with the previous measurements on the original individual images.

  9. FFDM image quality assessment using computerized image texture analysis

    NASA Astrophysics Data System (ADS)

    Berger, Rachelle; Carton, Ann-Katherine; Maidment, Andrew D. A.; Kontos, Despina

    2010-04-01

    Quantitative measures of image quality (IQ) are routinely obtained during the evaluation of imaging systems. These measures, however, do not necessarily correlate with the IQ of the actual clinical images, which can also be affected by factors such as patient positioning. No quantitative method currently exists to evaluate clinical IQ. Therefore, we investigated the potential of using computerized image texture analysis to quantitatively assess IQ. Our hypothesis is that image texture features can be used to assess IQ as a measure of the image signal-to-noise ratio (SNR). To test feasibility, the "Rachel" anthropomorphic breast phantom (Model 169, Gammex RMI) was imaged with a Senographe 2000D FFDM system (GE Healthcare) using 220 unique exposure settings (target/filter, kV, and mAs combinations). The mAs were varied from 10%-300% of that required for an average glandular dose (AGD) of 1.8 mGy. A 2.5 cm² retroareolar region of interest (ROI) was segmented from each image. The SNR was computed from the ROIs segmented from images linear with dose (i.e., raw images) after flat-field and offset correction. Image texture features of skewness, coarseness, contrast, energy, homogeneity, and fractal dimension were computed from the Premium View™ postprocessed image ROIs. Multiple linear regression demonstrated a strong association between the computed image texture features and SNR (R²=0.92, p<=0.001). When including kV, target and filter as additional predictor variables, a stronger association with SNR was observed (R²=0.95, p<=0.001). The strong associations indicate that computerized image texture analysis can be used to measure image SNR and potentially aid in automating IQ assessment as a component of the clinical workflow. Further work is underway to validate our findings in larger clinical datasets.
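
    The feature-extraction-plus-regression idea above can be sketched as follows. This is a simplified stand-in, not the study's pipeline: only two first-order histogram features are computed (the study also used co-occurrence and fractal features), and all function names are illustrative.

```python
import numpy as np

def roi_features(roi, bins=64):
    """Two first-order texture features from an ROI's gray-level histogram.
    (A simplified stand-in for the paper's richer feature set.)"""
    x = roi.astype(np.float64).ravel()
    mu, sigma = x.mean(), x.std()
    skewness = np.mean(((x - mu) / sigma) ** 3) if sigma > 0 else 0.0
    p, _ = np.histogram(x, bins=bins)
    p = p / p.sum()
    energy = np.sum(p ** 2)          # concentration of the histogram
    return np.array([skewness, energy])

def fit_snr_model(features, snr):
    """Multiple linear regression of SNR on texture features;
    returns (coefficients, R^2)."""
    X = np.column_stack([np.ones(len(snr)), features])
    beta, *_ = np.linalg.lstsq(X, snr, rcond=None)
    pred = X @ beta
    ss_res = np.sum((snr - pred) ** 2)
    ss_tot = np.sum((snr - snr.mean()) ** 2)
    return beta, 1.0 - ss_res / ss_tot
```

    Adding kV, target, and filter as extra predictor columns, as the study did, only changes the `features` matrix passed in.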

  10. Geometric assessment of image quality using digital image registration techniques

    NASA Technical Reports Server (NTRS)

    Tisdale, G. E.

    1976-01-01

    Image registration techniques were developed to perform a geometric quality assessment of multispectral and multitemporal image pairs. Based upon LANDSAT tapes, accuracies to a small fraction of a pixel were demonstrated. Because it is insensitive to the choice of registration areas, the technique is well suited to performance in an automatic system. It may be implemented at megapixel-per-second rates using a commercial minicomputer in combination with a special purpose digital preprocessor.

  11. Image quality measures and their performance

    NASA Technical Reports Server (NTRS)

    Eskicioglu, Ahmet M.; Fisher, Paul S.; Chen, Si-Yuan

    1994-01-01

    A number of quality measures are evaluated for gray scale image compression. They are all bivariate, exploiting the differences between corresponding pixels in the original and degraded images. It is shown that although some numerical measures correlate well with the observers' response for a given compression technique, they are not reliable for an evaluation across different techniques. The two graphical measures (histograms and Hosaka plots), however, can be used to appropriately specify not only the amount, but also the type of degradation in reconstructed images.
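
    Two of the simplest bivariate pixel-difference measures of this kind are MSE and PSNR; a minimal sketch follows, assuming 8-bit gray scale (peak 255) and illustrative function names.

```python
import numpy as np

def mse(original, degraded):
    """Mean squared error between corresponding pixels."""
    o = original.astype(np.float64)
    d = degraded.astype(np.float64)
    return np.mean((o - d) ** 2)

def psnr(original, degraded, peak=255.0):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    err = mse(original, degraded)
    if err == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / err)
```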

  12. Quality evaluation of fruit by hyperspectral imaging

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This chapter presents new applications of hyperspectral imaging for measuring the optical properties of fruits and assessing their quality attributes. A brief overview is given of current techniques for measuring optical properties of turbid and opaque biological materials. Then a detailed descripti...

  13. Image Quality Indicator for Infrared Inspections

    NASA Technical Reports Server (NTRS)

    Burke, Eric

    2011-01-01

    The quality of images generated during an infrared thermal inspection depends on many system variables, settings, and parameters to include the focal length setting of the IR camera lens. If any relevant parameter is incorrect or sub-optimal, the resulting IR images will usually exhibit inherent unsharpness and lack of resolution. Traditional reference standards and image quality indicators (IQIs) are made of representative hardware samples and contain representative flaws of concern. These standards are used to verify that representative flaws can be detected with the current IR system settings. However, these traditional standards do not enable the operator to quantify the quality limitations of the resulting images, i.e. determine the inherent maximum image sensitivity and image resolution. As a result, the operator does not have the ability to optimize the IR inspection system prior to data acquisition. The innovative IQI described here eliminates this limitation and enables the operator to objectively quantify and optimize the relevant variables of the IR inspection system, resulting in enhanced image quality with consistency and repeatability in the inspection application. The IR IQI consists of various copper foil features of known sizes that are printed on a dielectric non-conductive board. The significant difference in thermal conductivity between the two materials ensures that each appears with a distinct grayscale or brightness in the resulting IR image. Therefore, the IR image of the IQI exhibits high contrast between the copper features and the underlying dielectric board, which is required to detect the edges of the various copper features. The copper features consist of individual elements of various shapes and sizes, or of element-pairs of known shapes and sizes and with known spacing between the elements creating the pair. For example, filled copper circles with various diameters can be used as individual elements to quantify the image sensitivity.

  14. Scene reduction for subjective image quality assessment

    NASA Astrophysics Data System (ADS)

    Lewandowska (Tomaszewska), Anna

    2016-01-01

    Evaluation of image quality is important for many image processing systems, such as those used for acquisition, compression, restoration, enhancement, or reproduction. Its measurement is often accompanied by user studies, in which a group of observers rank or rate results of several algorithms. Such user studies, known as subjective image quality assessment experiments, can be very time consuming and do not guarantee conclusive results. This paper is intended to help design an efficient and rigorous quality assessment experiment. We propose a method of limiting the number of scenes that need to be tested, which can significantly reduce the experimental effort and still capture relevant scene-dependent effects. To achieve this, we employ a clustering technique and evaluate it on the basis of compactness and separation criteria. The correlation between the results obtained from a set of images in an initial database and the results received from the reduced experiment is analyzed. Finally, we propose a procedure for reducing the number of scenes in the initial set. Four different assessment techniques were tested: single stimulus, double stimulus, forced choice, and similarity judgments. We conclude that in most cases, 9 to 12 judgments per evaluated algorithm for a large scene collection are sufficient to reduce the initial set of images.
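
    The scene-reduction step can be sketched with a small clustering routine that keeps one representative per cluster. The abstract does not specify the clustering algorithm; plain k-means, the choice of representative (scene nearest each centroid), and the function name are assumptions here.

```python
import numpy as np

def select_representative_scenes(features, k, iters=100, seed=0):
    """Cluster scene feature vectors with a basic k-means and return the
    index of the scene closest to each centroid, i.e. one representative
    scene per cluster."""
    rng = np.random.default_rng(seed)
    X = np.asarray(features, dtype=np.float64)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return [int(d[:, j].argmin()) for j in range(k)]
```

    Compactness and separation criteria can then be evaluated on the resulting labels to choose k.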

  15. Analysis of image quality based on perceptual preference

    NASA Astrophysics Data System (ADS)

    Xue, Liqin; Hua, Yuning; Zhao, Guangzhou; Qi, Yaping

    2007-11-01

    This paper deals with image quality analysis considering the impact of psychological factors involved in assessment. The attributes of the image quality requirement were partitioned according to visual perception characteristics, and preferences regarding image quality were obtained by the factor analysis method. The features of image quality that support subjective preference were identified. Image adequacy is shown to be the top requirement for improving display image quality. The approach will be beneficial to research on subjective quantitative methods of image quality assessment.

  16. Prediction of Viking lander camera image quality

    NASA Technical Reports Server (NTRS)

    Huck, F. O.; Burcher, E. E.; Jobson, D. J.; Wall, S. D.

    1976-01-01

    Formulations are presented that permit prediction of image quality as a function of camera performance, surface radiance properties, and lighting and viewing geometry. Predictions made for a wide range of surface radiance properties reveal that image quality depends strongly on proper camera dynamic range command and on favorable lighting and viewing geometry. Proper camera dynamic range commands depend mostly on the surface albedo that will be encountered. Favorable lighting and viewing geometries depend mostly on lander orientation with respect to the diurnal sun path over the landing site, and tend to be independent of surface albedo and illumination scattering function. Side lighting with low sun elevation angles (10 to 30 deg) is generally favorable for imaging spatial details and slopes, whereas high sun elevation angles are favorable for measuring spectral reflectances.

  17. Taking image quality factor into the OPC model tuning flow

    NASA Astrophysics Data System (ADS)

    Wang, Ching-Heng; Liu, Qingwei; Zhang, Liguo

    2007-03-01

    All OPC model builders are in search of a physically realistic model that is adequately calibrated and contains the information that can be used for process prediction and analysis of a given process. But there is still some unknown physics in the process, and wafer data sets are not perfect. In most cases, even using the average values of different empirical data sets will still admit inaccurate measurements into the model fitting process, which makes the fitting process more time consuming and may also cause a loss of convergence and stability. Image quality is one of the most worrisome obstacles faced by next-generation lithography. Nowadays, considerable effort is devoted to enhancing the contrast, as well as to understanding its impact on devices. It is a persistent problem for 193 nm micro-lithography and will remain with us for at least three generations, culminating with immersion lithography. This work weights different wafer data points with a weighting function. The weighting function depends on the normalized image log slope (NILS), which reflects the image quality. Using this approach, we can filter out erroneous process information and make the OPC model more accurate. CalibreWorkbench is the platform used in this study, which has been proven to have excellent performance on 0.13 um, 90 nm and 65 nm production and development model setups. Leveraging its automatic optical-tuning function, we practiced the best weighting approach to achieve the most efficient and convergent tuning flow.
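
    The NILS-based weighting can be sketched as a weighted least-squares fit in which low-contrast measurement points (low NILS) contribute less. The weighting function here, a simple clipped linear map, is an assumption; the paper does not publish its exact form, and the function name is illustrative.

```python
import numpy as np

def nils_weighted_fit(X, y, nils, floor=0.1):
    """Weighted least-squares model fit: each wafer measurement is
    weighted by its normalized image log slope (NILS), so noisy
    low-contrast points pull the model less. The clipped-linear
    weighting is illustrative, not the paper's function."""
    w = np.clip(np.asarray(nils, dtype=np.float64), floor, None)
    w = w / w.max()                      # normalize weights to [0, 1]
    sw = np.sqrt(w)                      # lstsq uses sqrt-weighted rows
    Xd = np.column_stack([np.ones(len(y)), X]) * sw[:, None]
    yd = np.asarray(y, dtype=np.float64) * sw
    beta, *_ = np.linalg.lstsq(Xd, yd, rcond=None)
    return beta
```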

  18. Physical measures of image quality in mammography

    NASA Astrophysics Data System (ADS)

    Chakraborty, Dev P.

    1996-04-01

    A recently introduced method for quantitative analysis of images of the American College of Radiology (ACR) mammography accreditation phantom has been extended to include signal-to-noise ratio (SNR) measurements, and has been applied to survey the image quality of 54 mammography machines from 17 hospitals. Participants sent us phantom images to be evaluated for each mammography machine at their hospital. Each phantom was loaned to us for obtaining images of the wax insert plate on a reference machine at our institution. The images were digitized and analyzed to yield indices that quantified the image quality of the machines precisely. We have developed methods for normalizing for the variation of the individual speck sizes between different ACR phantoms, for the variation of the speck sizes within a microcalcification group, and for variations in overall speeds of the mammography systems. In terms of the microcalcification SNR, the variability of the x-ray machines was 40.5% when no allowance was made for phantom or mAs variations. This dropped to 17.1% when phantom variability was accounted for, and to 12.7% when mAs variability was also allowed for. Our work shows the feasibility of practical, low-cost, objective and accurate evaluations, as a useful adjunct to the present ACR method.

  19. Naturalness and interestingness of test images for visual quality evaluation

    NASA Astrophysics Data System (ADS)

    Halonen, Raisa; Westman, Stina; Oittinen, Pirkko

    2011-01-01

    Balanced and representative test images are needed to study perceived visual quality in various application domains. This study investigates naturalness and interestingness as image quality attributes in the context of test images. Taking a top-down approach we aim to find the dimensions which constitute naturalness and interestingness in test images and the relationship between these high-level quality attributes. We compare existing collections of test images (e.g. Sony sRGB images, ISO 12640 images, Kodak images, Nokia images and test images developed within our group) in an experiment combining quality sorting and structured interviews. Based on the data gathered we analyze the viewer-supplied criteria for naturalness and interestingness across image types, quality levels and judges. This study advances our understanding of subjective image quality criteria and enables the validation of current test images, furthering their development.

  20. Quantitative statistical methods for image quality assessment.

    PubMed

    Dutta, Joyita; Ahn, Sangtae; Li, Quanzheng

    2013-01-01

    Quantitative measures of image quality and reliability are critical for both qualitative interpretation and quantitative analysis of medical images. While, in theory, it is possible to analyze reconstructed images by means of Monte Carlo simulations using a large number of noise realizations, the associated computational burden makes this approach impractical. Additionally, this approach is less meaningful in clinical scenarios, where multiple noise realizations are generally unavailable. The practical alternative is to compute closed-form analytical expressions for image quality measures. The objective of this paper is to review statistical analysis techniques that enable us to compute two key metrics: resolution (determined from the local impulse response) and covariance. The underlying methods include fixed-point approaches, which compute these metrics at a fixed point (the unique and stable solution) independent of the iterative algorithm employed, and iteration-based approaches, which yield results that are dependent on the algorithm, initialization, and number of iterations. We also explore extensions of some of these methods to a range of special contexts, including dynamic and motion-compensated image reconstruction. While most of the discussed techniques were developed for emission tomography, the general methods are extensible to other imaging modalities as well. In addition to enabling image characterization, these analysis techniques allow us to control and enhance imaging system performance. We review practical applications where performance improvement is achieved by applying these ideas to the contexts of both hardware (optimizing scanner design) and image reconstruction (designing regularization functions that produce uniform resolution or maximize task-specific figures of merit). PMID:24312148
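
    The resolution metric described above, determined from the local impulse response, can be illustrated for a toy linear operator: apply the reconstruction to a delta perturbation and measure the FWHM of the response. A 1-D smoothing kernel stands in for an actual tomographic reconstructor, and both function names are illustrative.

```python
import numpy as np

def local_impulse_response(recon, n, pos):
    """Local impulse response of a linear reconstruction operator:
    apply it to a delta at `pos` in an n-sample object. `recon` is any
    callable mapping an array to an array."""
    delta = np.zeros(n)
    delta[pos] = 1.0
    return recon(delta)

def fwhm(profile, spacing=1.0):
    """Full width at half maximum of a peaked 1-D profile, with linear
    interpolation of the half-maximum crossings."""
    profile = np.asarray(profile, dtype=np.float64)
    half = profile.max() / 2.0
    above = np.where(profile >= half)[0]
    left, right = above[0], above[-1]

    def cross(i, j):  # interpolate the half-max crossing between samples
        return i + (half - profile[i]) / (profile[j] - profile[i]) * (j - i)

    lo = cross(left - 1, left) if left > 0 else float(left)
    hi = cross(right + 1, right) if right < len(profile) - 1 else float(right)
    return (hi - lo) * spacing
```

    For an iteration-dependent algorithm, repeating this at several iteration counts traces how resolution evolves, which is exactly the fixed-point versus iteration-based distinction the paper reviews.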

  2. No-reference image quality metric based on image classification

    NASA Astrophysics Data System (ADS)

    Choi, Hyunsoo; Lee, Chulhee

    2011-12-01

    In this article, we present a new no-reference (NR) objective image quality metric based on image classification. We also propose a new blocking metric and a new blur metric. Both metrics are NR metrics since they need no information from the original image. The blocking metric was computed by considering that the visibility of horizontal and vertical blocking artifacts can change depending on background luminance levels. When computing the blur metric, we took into account the fact that blurring in edge regions is generally more sensitive to the human visual system. Since different compression standards usually produce different compression artifacts, we classified images into two classes using the proposed blocking metric: one class that contained blocking artifacts and another class that did not contain blocking artifacts. Then, we used different quality metrics based on the classification results. Experimental results show that each metric correlated well with subjective ratings, and the proposed NR image quality metric consistently provided good performance with various types of content and distortions.
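
    A minimal no-reference blockiness estimate in the same spirit can be sketched as the mean luminance jump across 8x8 block boundaries minus the mean jump inside blocks. This omits the article's luminance-dependent visibility weighting, and the function name is an assumption.

```python
import numpy as np

def blocking_metric(img, block=8):
    """Simple no-reference blockiness estimate: mean absolute
    horizontal-neighbor difference at block boundaries minus the mean
    difference inside blocks. Near zero for smooth content, large for
    visible blocking artifacts."""
    img = img.astype(np.float64)
    dh = np.abs(np.diff(img, axis=1))     # horizontal neighbor differences
    boundary = dh[:, block - 1::block]    # columns straddling block edges
    mask = np.ones(dh.shape[1], dtype=bool)
    mask[block - 1::block] = False
    interior = dh[:, mask]
    return boundary.mean() - interior.mean()
```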

  3. Agreement between objective and subjective assessment of image quality in ultrasound abdominal aortic aneurism screening

    PubMed Central

    Wolstenhulme, S; Keeble, C; Moore, S; Evans, J A

    2015-01-01

    Objective: To investigate agreement between objective and subjective assessment of image quality of ultrasound scanners used for abdominal aortic aneurysm (AAA) screening. Methods: Nine ultrasound scanners were used to acquire longitudinal and transverse images of the abdominal aorta. 100 images were acquired per scanner from which 5 longitudinal and 5 transverse images were randomly selected. 33 practitioners scored 90 images blinded to the scanner type and subject characteristics and were required to state whether or not the images were of adequate diagnostic quality. Odds ratios were used to rank the subjective image quality of the scanners. For objective testing, three standard test objects were used to assess penetration and resolution and used to rank the scanners. Results: The subjective diagnostic image quality was ten times greater for the highest ranked scanner than for the lowest ranked scanner. It was greater at depths of <5.0 cm (odds ratio, 6.69; 95% confidence interval, 3.56, 12.57) than at depths of 15.1–20.0 cm. There was a larger range of odds ratios for transverse images than for longitudinal images. No relationship was seen between subjective scanner rankings and test object scores. Conclusion: Large variation was seen in the image quality when evaluated both subjectively and objectively. Objective scores did not predict subjective scanner rankings. Further work is needed to investigate the utility of both subjective and objective image quality measurements. Advances in knowledge: Ratings of clinical image quality and image quality measured using test objects did not agree, even in the limited scenario of AAA screening. PMID:25494526
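
    The odds-ratio ranking above can be reproduced for a 2x2 table of adequate/inadequate ratings per scanner. The Woolf (log-scale) 95% confidence interval used here is a standard choice, assumed rather than taken from the paper.

```python
import math

def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 table
                 adequate  inadequate
    scanner A       a          b
    scanner B       c          d
    with a Woolf 95% confidence interval computed on the log scale."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi
```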

  4. Medical Imaging Image Quality Assessment with Monte Carlo Methods

    NASA Astrophysics Data System (ADS)

    Michail, C. M.; Karpetas, G. E.; Fountos, G. P.; Kalyvas, N. I.; Martini, Niki; Koukou, Vaia; Valais, I. G.; Kandarakis, I. S.

    2015-09-01

    The aim of the present study was to assess the image quality of PET scanners through a thin layer chromatography (TLC) plane source. The source was simulated using a previously validated Monte Carlo model. The model was developed using the GATE MC package, and reconstructed images were obtained with the STIR software for tomographic image reconstruction, with cluster computing. The PET scanner simulated in this study was the GE DiscoveryST. A plane source consisting of a TLC plate was simulated as a layer of silica gel on aluminum (Al) foil substrates, immersed in an 18F-FDG bath solution (1 MBq). Image quality was assessed in terms of the modulation transfer function (MTF). MTF curves were estimated from transverse reconstructed images of the plane source. Images were reconstructed by the maximum likelihood estimation (MLE)-OSMAPOSL algorithm. OSMAPOSL reconstruction was assessed using various subsets (3 to 21) and iterations (1 to 20), as well as various beta (hyper)parameter values. MTF values were found to increase up to the 12th iteration, whereas they remained almost constant thereafter. The MTF improves with lower beta values. The simulated PET evaluation method based on the TLC plane source can also be useful in research toward the further development of PET and SPECT scanners through GATE simulations.
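
    The MTF estimation step can be sketched as the normalized Fourier magnitude of a line spread function extracted from the reconstructed plane-source image; the tomographic reconstruction chain itself (GATE/STIR) is out of scope here, and the function name is illustrative.

```python
import numpy as np

def mtf_from_lsf(lsf, spacing=1.0):
    """Modulation transfer function as the normalized magnitude of the
    Fourier transform of a line spread function (a transverse profile
    through the reconstructed image of a plane/line source)."""
    lsf = np.asarray(lsf, dtype=np.float64)
    spectrum = np.abs(np.fft.rfft(lsf))
    freqs = np.fft.rfftfreq(len(lsf), d=spacing)
    return freqs, spectrum / spectrum[0]   # MTF(0) normalized to 1
```

    Computing this for profiles reconstructed at different iteration counts and beta values reproduces the kind of comparison reported in the abstract.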

  5. Image registration for DSA quality enhancement.

    PubMed

    Buzug, T M; Weese, J

    1998-01-01

    A generalized framework for histogram-based similarity measures is presented and applied to the image-enhancement task in digital subtraction angiography (DSA). The class of differentiable, strictly convex weighting functions is identified as suitable weightings of histograms for measuring the degree of clustering that goes along with registration. With respect to computation time, the energy similarity measure is the function of choice for the registration of mask and contrast image prior to subtraction. The robustness of the energy measure is studied for geometrical image distortions like rotation and scaling. Additionally, it is investigated how the histogram binning and inhomogeneous motion inside the templates influence the quality of the similarity measure. Finally, the registration success for the automated procedure is compared with the manually shift-corrected image pair of the head. PMID:9719851
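
    The energy similarity measure named above can be sketched as the sum of squared joint-histogram probabilities of the mask and contrast images: the joint histogram clusters, and the energy rises, as the two come into registration. Bin count and function name are illustrative assumptions.

```python
import numpy as np

def energy_similarity(mask, contrast, bins=32):
    """Energy of the joint gray-level histogram of mask and contrast
    images: sum of squared joint probabilities. Registration maximizes
    this value, since alignment concentrates the joint histogram into
    few bins (the convex weighting x**2 rewards clustering)."""
    h, _, _ = np.histogram2d(mask.ravel(), contrast.ravel(), bins=bins)
    p = h / h.sum()
    return float(np.sum(p ** 2))
```

    In a DSA registration loop, candidate shifts of the mask would be scored with this measure and the maximizing shift applied before subtraction.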

  6. Dried fruits quality assessment by hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Serranti, Silvia; Gargiulo, Aldo; Bonifazi, Giuseppe

    2012-05-01

    Dried fruit products have different market values according to their quality. Such quality is usually quantified in terms of the freshness of the products, as well as the presence of contaminants (pieces of shell, husk, and small stones), defects, mould and decay. The combination of these parameters, in terms of relative presence, represents a fundamental set of attributes conditioning the human-sense-detectable attributes of dried fruits (visual appearance, organoleptic properties, etc.) and their overall quality as marketable products. Sorting-selection strategies exist, but they sometimes fail when a higher degree of detection is required, especially when discriminating between dried fruits of relatively small dimensions and when aiming to perform an "early detection" of the pathogen agents responsible for future mould and decay development. Surface characteristics of dried fruits can be investigated by hyperspectral imaging (HSI). In this paper, specific "ad hoc" applications addressed to propose quality detection logics, adopting an HSI-based approach, are described, compared and critically evaluated. Reflectance spectra of selected dried fruits (hazelnuts) of different quality, characterized by the presence of different contaminants and defects, were acquired by a laboratory device equipped with two HSI systems working in two different spectral ranges: the visible-near infrared field (400-1000 nm) and the near infrared field (1000-1700 nm). The spectra were processed and the results evaluated adopting both a simple and fast wavelength band ratio approach and a more sophisticated classification logic based on principal component analysis (PCA).
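
    The "simple and fast wavelength band ratio approach" can be sketched as thresholding the ratio of reflectance at two diagnostic bands per pixel. The band indices and threshold below are illustrative placeholders; the paper selects them empirically for hazelnut defects.

```python
import numpy as np

def band_ratio_classify(cube, band_a, band_b, threshold):
    """Classify pixels of a hyperspectral cube (rows x cols x bands) as
    defective where the reflectance ratio between two diagnostic bands
    falls below a threshold. Band choice and threshold are illustrative,
    to be tuned on labeled samples."""
    ratio = cube[:, :, band_a] / np.maximum(cube[:, :, band_b], 1e-9)
    return ratio < threshold
```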

  7. Retinal image quality in the rodent eye.

    PubMed

    Artal, P; Herreros de Tejada, P; Muñoz Tedó, C; Green, D G

    1998-01-01

    Many rodents do not see well. For a target to be resolved by a rat or a mouse, it must subtend a visual angle of a degree or more. It is commonly assumed that this poor spatial resolving capacity is due to neural rather than optical limitations, but the quality of the retinal image has not been well characterized in these animals. We have modified a double-pass apparatus, initially designed for the human eye, so it could be used with rodents to measure the modulation transfer function (MTF) of the eye's optics. That is, the double-pass retinal image of a monochromatic (lambda = 632.8 nm) point source was digitized with a CCD camera. From these double-pass measurements, the single-pass MTF was computed under a variety of conditions of focus and with different pupil sizes. Even with the eye in best focus, the image quality in both rats and mice is exceedingly poor. With a 1-mm pupil, for example, the MTF in the rat had an upper limit of about 2.5 cycles/deg, rather than the 28 cycles/deg one would obtain if the eye were a diffraction-limited system. These images are about 10 times worse than the comparable retinal images in the human eye. Using our measurements of the optics and the published behavioral and electrophysiological contrast sensitivity functions (CSFs) of rats, we have calculated the CSF that the rat would have if it had perfect rather than poor optics. We find, interestingly, that diffraction-limited optics would produce only slight improvement overall. That is, in spite of retinal images which are of very low quality, the upper limit of visual resolution in rodents is neurally determined. Rats and mice seem to have eyes in which the optics and retina/brain are well matched. PMID:9682864
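
    The double-pass to single-pass MTF step can be sketched as follows: for equal entrance and exit pupils, the double-pass image is the autocorrelation of the single-pass PSF, so its spectrum is the squared single-pass MTF and the square root recovers it. The equal-pupil assumption and the function name are simplifications.

```python
import numpy as np

def single_pass_mtf(double_pass_psf):
    """Recover the single-pass MTF from a recorded double-pass PSF.
    With equal pupils on both passes, FFT(double-pass) = MTF_single**2,
    so the single-pass MTF is the square root of the normalized
    double-pass spectrum."""
    spectrum = np.abs(np.fft.fft2(np.asarray(double_pass_psf, float)))
    mtf_double = spectrum / spectrum[0, 0]   # normalize DC to 1
    return np.sqrt(mtf_double)
```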

  8. Image quality assessment and human visual system

    NASA Astrophysics Data System (ADS)

    Gao, Xinbo; Lu, Wen; Tao, Dacheng; Li, Xuelong

    2010-07-01

    This paper summarizes the state of the art of image quality assessment (IQA) and the human visual system (HVS). IQA provides an objective index or real value to measure the quality of a specified image. Since human beings are the ultimate receivers of visual information in practical applications, the most reliable IQA is to build a computational model that mimics the HVS. According to the properties and cognitive mechanisms of the HVS, the available HVS-based IQA methods can be divided into two categories, i.e., bionics methods and engineering methods. This paper briefly introduces the basic theories and development histories of the above two kinds of HVS-based IQA methods. Finally, some promising research issues are pointed out at the end of the paper.

  9. Retinal image quality, reading and myopia.

    PubMed

    Collins, Michael J; Buehren, Tobias; Iskander, D Robert

    2006-01-01

Analysis was undertaken of the retinal image characteristics of the best-spectacle corrected eyes of progressing myopes (n = 20, mean age = 22 years; mean spherical equivalent = -3.84 D) and a control group of emmetropes (n = 20, mean age = 23 years; mean spherical equivalent = 0.00 D) before and after a 2 h reading task. Retinal image quality was calculated based upon wavefront measurements taken with a Hartmann-Shack sensor with fixation on both a far (5.5 m) and near (individual reading distance) target. The visual Strehl ratio based on the optical transfer function (VSOTF) was significantly worse for the myopes prior to reading for both the far (p = 0.01) and near (p = 0.03) conditions. The myopic group showed significant reductions in various aspects of retinal image quality compared with the emmetropes, involving components of the modulation transfer function, phase transfer function and point spread function, often along the vertical meridian of the eye. The depth of focus of the myopes (0.54 D) was larger (p = 0.02) than that of the emmetropes (0.42 D), and the distribution of refractive power (away from the optimal sphero-cylinder) was greater in the myopic eyes (variance of distributions p < 0.05). We found evidence that the lead and lag of accommodation are influenced by the higher order aberrations of the eye (e.g. significant correlations between lead/lag and the peak of the visual Strehl ratio based on the MTF). This could indicate that the higher accommodation lags seen in myopes provide optimized retinal image characteristics. The interaction between low and high order aberrations of the eye plays a significant role in reducing the retinal image quality of myopic eyes compared with emmetropes. PMID:15913701

  10. Towards real-time image quality assessment

    NASA Astrophysics Data System (ADS)

    Geary, Bobby; Grecos, Christos

    2011-03-01

We introduce a real-time implementation and evaluation of a new fast, accurate, full-reference image quality metric. The popular general image quality metric known as the Structural Similarity Index Metric (SSIM) has been shown to be effective, efficient and useful, finding many practical and theoretical applications. Recently the authors proposed an enhanced version of the SSIM algorithm known as the Rotated Gaussian Discrimination Metric (RGDM). This approach uses a Gaussian-like discrimination function to evaluate local contrast and luminance. RGDM was inspired by an exploration of local statistical parameter variations in relation to variation of Mean Opinion Score (MOS) for a range of particular distortion types. In this paper we outline the salient features of the derivation of RGDM and show how analyses of local statistics by distortion type necessitate variation in the discrimination function width. Results on the LIVE image database show tight banding of the RGDM metric value when plotted against mean opinion score, indicating the usefulness of this metric. We then explore a number of strategies for algorithmic speed-up, including the application of integral images for patch-based computation optimisation, cost reduction for the evaluation of the discrimination function, and general loop unrolling. We also employ fast Single Instruction Multiple Data (SIMD) intrinsics and explore data-parallel decomposition on a multi-core Intel processor.
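RGDM itself is not specified in this abstract, but the SSIM baseline it extends is standard. A minimal per-patch SSIM sketch, using the usual constants C1 = (0.01L)² and C2 = (0.03L)²; windowing and pooling over the whole image are omitted:

```python
import statistics

def ssim_patch(x, y, dynamic_range=255.0):
    """SSIM for two equal-sized patches given as flat lists of intensities."""
    c1 = (0.01 * dynamic_range) ** 2
    c2 = (0.03 * dynamic_range) ** 2
    mx, my = statistics.fmean(x), statistics.fmean(y)
    vx = statistics.fmean((a - mx) ** 2 for a in x)          # variance of x
    vy = statistics.fmean((b - my) ** 2 for b in y)          # variance of y
    cov = statistics.fmean((a - mx) * (b - my) for a, b in zip(x, y))
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx * mx + my * my + c1) * (vx + vy + c2))

print(ssim_patch([10, 20, 30, 40], [10, 20, 30, 40]))  # identical patches -> 1.0
```

The integral-image speed-up mentioned above applies to exactly these local means and variances, which can be computed in constant time per window from summed-area tables.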

  11. Improving Secondary Ion Mass Spectrometry Image Quality with Image Fusion

    PubMed Central

    Tarolli, Jay G.; Jackson, Lauren M.; Winograd, Nicholas

    2014-01-01

    The spatial resolution of chemical images acquired with cluster secondary ion mass spectrometry (SIMS) is limited not only by the size of the probe utilized to create the images, but also by detection sensitivity. As the probe size is reduced to below 1 µm, for example, a low signal in each pixel limits lateral resolution due to counting statistics considerations. Although it can be useful to implement numerical methods to mitigate this problem, here we investigate the use of image fusion to combine information from scanning electron microscope (SEM) data with chemically resolved SIMS images. The advantage of this approach is that the higher intensity and, hence, spatial resolution of the electron images can help to improve the quality of the SIMS images without sacrificing chemical specificity. Using a pan-sharpening algorithm, the method is illustrated using synthetic data, experimental data acquired from a metallic grid sample, and experimental data acquired from a lawn of algae cells. The results show that up to an order of magnitude increase in spatial resolution is possible to achieve. A cross-correlation metric is utilized for evaluating the reliability of the procedure. PMID:24912432
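The paper's exact pan-sharpening algorithm is not given in the abstract; as an illustration of the general idea, a Brovey-style ratio modulation uses the high-resolution image (here the SEM channel) to inject spatial detail into the upsampled low-resolution chemical channel. The global-mean normalisation below is a simplification of the usual local low-pass reference:

```python
def pan_sharpen(low_res, pan):
    """Ratio-based pan-sharpening sketch: modulate the chemically specific
    low-resolution image by the relative detail of the panchromatic (SEM)
    image. Both inputs are 2-D lists of the same shape; 'low_res' is
    assumed to be already upsampled to the SEM pixel grid."""
    n, m = len(pan), len(pan[0])
    mean_pan = sum(sum(row) for row in pan) / (n * m)  # crude low-pass reference
    return [[low_res[i][j] * (pan[i][j] / mean_pan) for j in range(m)]
            for i in range(n)]

# A featureless pan image leaves the chemical image unchanged
print(pan_sharpen([[1, 3], [5, 7]], [[2, 2], [2, 2]]))  # -> [[1.0, 3.0], [5.0, 7.0]]
```

Where the SEM image is locally brighter than its mean, the fused SIMS intensity is boosted; chemical contrast between regions is preserved because each pixel is only rescaled, not replaced.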

  12. Damage and quality assessment in wheat by NIR hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Delwiche, Stephen R.; Kim, Moon S.; Dong, Yanhong

    2010-04-01

Fusarium head blight is a fungal disease that affects the world's small grains, such as wheat and barley. Attacking the spikelets during development, the fungus reduces yield and leaves grain of poorer processing quality. It is also a health concern because of the secondary metabolite, deoxynivalenol, which often accompanies the fungus. While chemical methods exist to measure the concentration of the mycotoxin and manual visual inspection is used to ascertain the level of Fusarium damage, research has been active in developing fast, optically based techniques that can assess this form of damage. In the current study a near-infrared (1000-1700 nm) hyperspectral image system was assembled and applied to Fusarium-damaged kernel recognition. With anticipation of an eventual multispectral imaging system design, five wavelengths were manually selected from a pool of 146 images as the most promising, such that when combined in pairs or triplets, Fusarium damage could be identified. We present the results of two pairs of wavelengths [(1199, 1474 nm) and (1315, 1474 nm)] whose reflectance values produced adequate separation of kernels of healthy appearance (i.e., asymptomatic condition) from kernels possessing Fusarium damage.
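The two-wavelength decision described above amounts to drawing a boundary in a 2-D reflectance space. A hypothetical sketch; the linear boundary, its coefficients, and the orientation of the inequality are placeholders, not the paper's fitted values:

```python
def classify_kernel(r_1199, r_1474, slope=1.0, intercept=0.0):
    """Separate sound from Fusarium-damaged kernels with a linear boundary
    in the (1199 nm, 1474 nm) reflectance plane. The boundary parameters
    here are illustrative placeholders only."""
    return "damaged" if r_1474 > slope * r_1199 + intercept else "sound"

print(classify_kernel(0.30, 0.50))  # above the placeholder boundary -> damaged
print(classify_kernel(0.50, 0.30))  # below it -> sound
```

In a multispectral instrument this is attractive because only the selected band-pair reflectances, not full spectra, need to be acquired per kernel.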

  13. Standardised mortality ratio based on the sum of age and percentage total body surface area burned is an adequate quality indicator in burn care: An exploratory review.

    PubMed

    Steinvall, Ingrid; Elmasry, Moustafa; Fredrikson, Mats; Sjoberg, Folke

    2016-02-01

    Standardised Mortality Ratio (SMR) based on generic mortality predicting models is an established quality indicator in critical care. Burn-specific mortality models are preferred for the comparison among patients with burns as their predictive value is better. The aim was to assess whether the sum of age (years) and percentage total body surface area burned (which constitutes the Baux score) is acceptable in comparison to other more complex models, and to find out if data collected from a separate burn centre are sufficient for SMR based quality assessment. The predictive value of nine burn-specific models was tested by comparing values from the area under the receiver-operating characteristic curve (AUC) and a non-inferiority analysis using 1% as the limit (delta). SMR was analysed by comparing data from seven reference sources, including the North American National Burn Repository (NBR), with the observed mortality (years 1993-2012, n=1613, 80 deaths). The AUC values ranged between 0.934 and 0.976. The AUC 0.970 (95% CI 0.96-0.98) for the Baux score was non-inferior to the other models. SMR was 0.52 (95% CI 0.28-0.88) for the most recent five-year period compared with NBR based data. The analysis suggests that SMR based on the Baux score is eligible as an indicator of quality for setting standards of mortality in burn care. More advanced modelling only marginally improves the predictive value. The SMR can detect mortality differences in data from a single centre. PMID:26700877
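Both quantities in the title are simple to compute once observed and model-expected deaths are available. A sketch with hypothetical numbers; the expected-death count would come from the Baux-based mortality model, which is not reproduced here:

```python
def baux_score(age_years: float, tbsa_percent: float) -> float:
    """Baux score: age (years) plus percentage total body surface area burned."""
    return age_years + tbsa_percent

def smr(observed_deaths: int, expected_deaths: float) -> float:
    """Standardised mortality ratio: observed deaths / model-expected deaths."""
    return observed_deaths / expected_deaths

print(baux_score(45, 30))  # a 45-year-old with 30% TBSA burned -> Baux score 75
print(smr(13, 25))         # 13 observed vs 25 expected deaths -> 0.52
```

An SMR below 1 indicates fewer deaths than the reference model predicts, which is how the study reads its 0.52 against NBR-based expectations.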

  14. Model-based quantification of image quality

    NASA Technical Reports Server (NTRS)

    Hazra, Rajeeb; Miller, Keith W.; Park, Stephen K.

    1989-01-01

In 1982, Park and Schowengerdt published an end-to-end analysis of a digital imaging system quantifying three principal degradation components: (1) image blur - blurring caused by the acquisition system, (2) aliasing - caused by insufficient sampling, and (3) reconstruction blur - blurring caused by the imperfect interpolative reconstruction. This analysis, which measures degradation as the square of the radiometric error, includes the sample-scene phase as an explicit random parameter and characterizes the image degradation caused by imperfect acquisition and reconstruction together with the effects of undersampling and random sample-scene phases. In a recent paper, Mitchell and Netravali displayed the visual effects of the above-mentioned degradations and presented a subjective analysis of their relative importance in determining image quality. The primary aim of the research is to use the analysis of Park and Schowengerdt to correlate their mathematical criteria for measuring image degradations with subjective visual criteria. Insight gained from this research can be exploited in the end-to-end design of optical systems, so that system parameters (transfer functions of the acquisition and display systems) can be designed relative to each other to obtain the best possible results using quantitative measurements.

  15. The mobile image quality survey game

    NASA Astrophysics Data System (ADS)

    Rasmussen, D. René

    2012-01-01

In this paper we discuss human assessment of the quality of photographic still images that are degraded in various manners relative to an original, for example due to compression or noise. In particular, we examine and present results from a technique where observers view images on a mobile device, perform pairwise comparisons, identify defects in the images, and interact with the display to indicate the location of the defects. The technique measures the response time and accuracy of the responses. By posing the survey in a form similar to a game and providing performance feedback to the observer, the technique attempts to increase the engagement of the observers and to avoid exhausting them, a factor that is often a problem for subjective surveys. The results are compared with the known physical magnitudes of the defects and with results from similar web-based surveys. The strengths and weaknesses of the technique are discussed. Possible extensions of the technique to video quality assessment are also discussed.

  16. Hyperspectral and multispectral imaging for evaluating food safety and quality

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Spectral imaging technologies have been developed rapidly during the past decade. This paper presents hyperspectral and multispectral imaging technologies in the area of food safety and quality evaluation, with an introduction, demonstration, and summarization of the spectral imaging techniques avai...

  17. On pictures and stuff: image quality and material appearance

    NASA Astrophysics Data System (ADS)

    Ferwerda, James A.

    2014-02-01

Realistic images are a puzzle because they serve as visual representations of objects while also being objects themselves. When we look at an image we are able to perceive both the properties of the image and the properties of the objects represented by the image. Research on image quality has typically focused on improving image properties (resolution, dynamic range, frame rate, etc.) while ignoring the issue of whether images are serving their role as visual representations. In this paper we describe a series of experiments that investigate how well images of different quality convey information about the properties of the objects they represent. In the experiments we focus on the effects that two image properties (contrast and sharpness) have on the ability of images to represent the gloss of depicted objects. We found that different experimental methods produced differing results. Specifically, when the stimulus images were presented using simultaneous pair comparison, observers were influenced by the surface properties of the images and conflated changes in image contrast and sharpness with changes in object gloss. On the other hand, when the stimulus images were presented sequentially, observers were able to disregard the image plane properties and more accurately match the gloss of the objects represented by the different quality images. These findings suggest that in understanding image quality it is useful to distinguish between the quality of the imaging medium and the quality of the visual information represented by that medium.

  18. Do English NHS Microbiology laboratories offer adequate services for the diagnosis of UTI in children? Healthcare Quality Improvement Partnership (HQIP) Audit of Standard Operational Procedures.

    PubMed

    McNulty, Cliodna A M; Verlander, Neville Q; Moore, Philippa C L; Larcombe, James; Dudley, Jan; Banerjee, Jaydip; Jadresic, Lyda

    2015-09-01

The National Institute for Health and Care Excellence (NICE) 2007 guidance CG54, on urinary tract infection (UTI) in children, states that clinicians should use urgent microscopy and culture as the preferred method for diagnosing UTI in the hospital setting for severe illness in children under 3 years old and from the GP setting in children under 3 years old with intermediate risk of severe illness. NICE also recommends that all 'infants and children with atypical UTI (including non-Escherichia coli infections) should have renal imaging after a first infection'. We surveyed all microbiology laboratories in England with Clinical Pathology Accreditation to determine standard operating procedures (SOPs) for urgent microscopy, culture and reporting of children's urine and to ascertain whether the SOPs facilitate compliance with NICE guidance. We undertook a computer search in six microbiology laboratories in south-west England to determine urine submissions and urine reports in children under 3 years. Seventy-three per cent of laboratories (110/150) participated. Enterobacteriaceae that were not E. coli were reported only as coliforms (rather than non-E. coli coliforms) by 61% (67/110) of laboratories. Eighty-eight per cent of laboratories (97/110) provided urgent microscopy for hospital and 54% for general practice (GP) paediatric urines; 61% of laboratories (confidence interval 52-70%) cultured a 1 μl volume of urine, which equates to one colony if the bacterial load is 10(6) c.f.u. l(-1). Only 22% (24/110) of laboratories reported non-E. coli coliforms and provided urgent microscopy for both hospital and GP childhood urines; only three laboratories also cultured a 5 μl volume of urine. Only one of six laboratories in our submission audit had a significant increase in urine submissions and urines reported from children less than 3 years old between the predicted pre-2007 level in the absence of guidance and the 2008 level following publication of the NICE guidance. Less than a

  19. Image analysis for dental bone quality assessment using CBCT imaging

    NASA Astrophysics Data System (ADS)

    Suprijanto; Epsilawati, L.; Hajarini, M. S.; Juliastuti, E.; Susanti, H.

    2016-03-01

Cone beam computerized tomography (CBCT) is one of the X-ray imaging modalities applied in dentistry. It can visualize the oral region in 3D and at high resolution. A CBCT jaw image carries potential information for the assessment of bone quality, which is often used for pre-operative implant planning. We propose a comparison method based on normalized histograms (NH) of the regions of the inter-dental septum and premolar teeth. Furthermore, the NH characteristics from normal and abnormal bone conditions are compared and analyzed. Four test parameters are proposed, i.e., the difference between teeth and bone average intensity (s), the ratio between bone and teeth average intensity (n) of the NH, the difference between teeth and bone peak value (Δp) of the NH, and the ratio between teeth and bone NH range (r). The results showed that n, s, and Δp have potential as classification parameters of dental calcium density.
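The four test parameters can be written down directly from histogram statistics of the two regions. The sketch below assumes the per-region statistics (mean intensities, NH peak values, NH ranges) have already been extracted from the CBCT image, which is the segmentation work the paper itself addresses:

```python
def nh_parameters(teeth_mean, bone_mean, teeth_peak, bone_peak,
                  teeth_range, bone_range):
    """The four CBCT bone-quality test parameters from normalized-histogram
    (NH) statistics of the teeth and inter-dental bone regions."""
    s = teeth_mean - bone_mean   # difference of average intensities
    n = bone_mean / teeth_mean   # ratio of average intensities
    dp = teeth_peak - bone_peak  # difference of NH peak values
    r = teeth_range / bone_range # ratio of NH ranges
    return s, n, dp, r

# Hypothetical region statistics, purely for illustration
print(nh_parameters(200.0, 100.0, 0.30, 0.20, 150.0, 100.0))
```

Because s, n, and Δp all compare bone against the (relatively stable) tooth signal in the same scan, they act as internally normalized measures, which is presumably why they discriminate calcium density better than raw intensities.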

  20. Stereoscopic image quality assessment using disparity-compensated view filtering

    NASA Astrophysics Data System (ADS)

    Song, Yang; Yu, Mei; Jiang, Gangyi; Shao, Feng; Peng, Zongju

    2016-03-01

Stereoscopic image quality assessment (IQA) plays a vital role in stereoscopic image/video processing systems. We propose a new quality assessment method for stereoscopic images that uses disparity-compensated view filtering (DCVF). First, because a stereoscopic image is composed of different frequency components, DCVF is designed to decompose it into high-pass and low-pass components. Then, the qualities of the different frequency components are acquired according to their phase congruency and coefficient distribution characteristics. Finally, support vector regression is utilized to establish a mapping model between the component qualities and subjective qualities, and stereoscopic image quality is calculated using this mapping model. Experiments on the LIVE 3-D IQA and NBU 3-D IQA databases demonstrate that the proposed method can evaluate stereoscopic image quality accurately. Compared with several state-of-the-art quality assessment methods, the proposed method is more consistent with human perception.

  1. Finger vein image quality evaluation using support vector machines

    NASA Astrophysics Data System (ADS)

    Yang, Lu; Yang, Gongping; Yin, Yilong; Xiao, Rongyang

    2013-02-01

    In an automatic finger-vein recognition system, finger-vein image quality is significant for segmentation, enhancement, and matching processes. In this paper, we propose a finger-vein image quality evaluation method using support vector machines (SVMs). We extract three features including the gradient, image contrast, and information capacity from the input image. An SVM model is built on the training images with annotated quality labels (i.e., high/low) and then applied to unseen images for quality evaluation. To resolve the class-imbalance problem in the training data, we perform oversampling for the minority class with random-synthetic minority oversampling technique. Cross-validation is also employed to verify the reliability and stability of the learned model. Our experimental results show the effectiveness of our method in evaluating the quality of finger-vein images, and by discarding low-quality images detected by our method, the overall finger-vein recognition performance is considerably improved.
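A minimal version of the minority-class oversampling step might look like the following: synthetic samples are placed at random points on the line segments joining a minority sample to one of its nearest neighbours. This is a generic SMOTE-style sketch, not the paper's exact variant, and the three-element feature vectors (gradient, contrast, information capacity) are only assumed from the abstract:

```python
import random

def smote(minority, n_new, k=2, seed=0):
    """Synthesize n_new minority-class samples by interpolating between a
    random minority sample and one of its k nearest neighbours."""
    rng = random.Random(seed)

    def sq_dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        neighbours = sorted((p for p in minority if p is not x),
                            key=lambda p: sq_dist(x, p))[:k]
        nb = rng.choice(neighbours)
        t = rng.random()  # interpolation fraction in [0, 1)
        synthetic.append([xi + t * (ni - xi) for xi, ni in zip(x, nb)])
    return synthetic

low_quality = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]  # hypothetical feature vectors
print(len(smote(low_quality, 5)))  # -> 5 synthetic samples
```

Balancing the classes this way keeps the SVM from trivially predicting the majority ("high quality") label for every image.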

  2. Image quality metrics for optical coherence angiography.

    PubMed

    Lozzi, Andrea; Agrawal, Anant; Boretsky, Adam; Welle, Cristin G; Hammer, Daniel X

    2015-07-01

We characterized image quality in optical coherence angiography (OCA) en face planes of the mouse cortical capillary network in terms of signal-to-noise ratio (SNR) and Weber contrast (Wc) through a novel mask-based segmentation method. The method was used to compare two adjacent B-scan processing algorithms, (1) average absolute difference (AAD) and (2) standard deviation (SD), while varying the number of lateral cross-sections acquired (also known as the gate length, N). AAD and SD are identical at N = 2 and exhibited similar image quality for N < 10. However, AAD is relatively less susceptible to bulk tissue motion artifact than SD. SNR and Wc were 15% and 35% higher for AAD from N = 25 to 100. In addition, data sets were acquired with two objective lenses of different magnifications to quantify the effect of lateral resolution on fine capillary detection. The lower-power objective yielded a significant mean broadening of 17% in full width at half maximum (FWHM) diameter. These results may guide study and device designs for OCA capillary and blood flow quantification. PMID:26203372
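The two B-scan processing algorithms compared in the study reduce, per pixel, to simple statistics over the N repeated acquisitions. A sketch with made-up intensity sequences:

```python
import statistics

def angiogram_pixel(repeats, method="aad"):
    """Motion contrast at one pixel from N repeated B-scan intensities.
    'aad' averages the absolute differences of adjacent pairs; 'sd' takes
    the standard deviation over the whole gate. At N = 2 the two measures
    coincide up to a constant factor."""
    if method == "aad":
        diffs = [abs(b - a) for a, b in zip(repeats, repeats[1:])]
        return sum(diffs) / len(diffs)
    return statistics.pstdev(repeats)

static = [10, 10, 10, 10]   # stationary tissue: no decorrelation
flowing = [10, 30, 5, 25]   # flowing blood: strong frame-to-frame fluctuation
print(angiogram_pixel(static), angiogram_pixel(flowing))  # 0.0 vs a large value
```

The robustness difference reported above is intuitive from this form: a single slow drift across the gate inflates the global standard deviation more than it inflates adjacent-pair differences.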

  3. Quality Control of Diffusion Weighted Images

    PubMed Central

    Liu, Zhexing; Wang, Yi; Gerig, Guido; Gouttard, Sylvain; Tao, Ran; Fletcher, Thomas; Styner, Martin

    2013-01-01

Diffusion Tensor Imaging (DTI) has become an important MRI procedure for investigating the integrity of white matter in the brain in vivo. DTI is estimated from a series of acquired Diffusion Weighted Imaging (DWI) volumes. DWI data suffer from inherently low SNR and an overall long scanning time across multiple directional encodings, with a correspondingly large risk of encountering several kinds of artifacts. These artifacts can be too severe for a correct and stable estimation of the diffusion tensor. Thus, a quality control (QC) procedure is absolutely necessary for DTI studies. Currently, routine DTI QC procedures are conducted manually by visually checking the DWI data set gradient by gradient and slice by slice. The results often suffer from low consistency across data sets, lack of agreement between experts, and the difficulty of judging motion artifacts by qualitative inspection. Additionally, considerable manpower is needed for this step due to the large number of images to QC, which is common for group comparison and longitudinal studies, especially with increasing numbers of diffusion gradient directions. We present a framework for automatic DWI QC. We developed a fully open-source tool, DTIPrep, which pipelines the QC steps with a detailed protocoling and reporting facility. This framework/tool has been successfully applied to several DTI studies with several hundred DWIs in our lab as well as collaborating labs in Utah and Iowa. In our studies, the tool provides a crucial piece for robust DTI analysis in brain white matter studies. PMID:24353379

  4. Image Quality Characteristics of Handheld Display Devices for Medical Imaging

    PubMed Central

    Yamazaki, Asumi; Liu, Peter; Cheng, Wei-Chung; Badano, Aldo

    2013-01-01

Handheld devices such as mobile phones and tablet computers have become widespread with thousands of available software applications. Recently, handhelds are being proposed as part of medical imaging solutions, especially in emergency medicine, where immediate consultation is required. However, handheld devices differ significantly from medical workstation displays in terms of display characteristics. Moreover, the characteristics vary significantly among device types. We investigate the image quality characteristics of various handheld devices with respect to luminance response, spatial resolution, spatial noise, and reflectance. We show that the luminance characteristics of the handheld displays are different from those of workstation displays complying with the grayscale standard target response, suggesting that luminance calibration might be needed. Our results also demonstrate that the spatial characteristics of handhelds can surpass those of medical workstation displays, particularly for recent-generation devices. While a 5-megapixel monochrome workstation display has horizontal and vertical modulation transfer factors of 0.52 and 0.47 at the Nyquist frequency, the handheld displays released after 2011 can have values higher than 0.63 at their respective Nyquist frequencies. The noise power spectra for workstation displays are higher than 1.2×10⁻⁵ mm² at 1 mm⁻¹, while handheld displays have values lower than 3.7×10⁻⁶ mm². Reflectance measurements on some of the handheld displays are consistent with measurements for workstation displays with, in some cases, low specular and diffuse reflectance coefficients. The variability of the characterization results among devices due to the different technological features indicates that image quality varies greatly among handheld display devices. PMID:24236113

  5. Quality assessment for spectral domain optical coherence tomography (OCT) images

    NASA Astrophysics Data System (ADS)

    Liu, Shuang; Paranjape, Amit S.; Elmaanaoui, Badr; Dewelle, Jordan; Rylander, H. Grady, III; Markey, Mia K.; Milner, Thomas E.

    2009-02-01

    Retinal nerve fiber layer (RNFL) thickness, a measure of glaucoma progression, can be measured in images acquired by spectral domain optical coherence tomography (OCT). The accuracy of RNFL thickness estimation, however, is affected by the quality of the OCT images. In this paper, a new parameter, signal deviation (SD), which is based on the standard deviation of the intensities in OCT images, is introduced for objective assessment of OCT image quality. Two other objective assessment parameters, signal to noise ratio (SNR) and signal strength (SS), are also calculated for each OCT image. The results of the objective assessment are compared with subjective assessment. In the subjective assessment, one OCT expert graded the image quality according to a three-level scale (good, fair, and poor). The OCT B-scan images of the retina from six subjects are evaluated by both objective and subjective assessment. From the comparison, we demonstrate that the objective assessment successfully differentiates between the acceptable quality images (good and fair images) and poor quality OCT images as graded by OCT experts. We evaluate the performance of the objective assessment under different quality assessment parameters and demonstrate that SD is the best at distinguishing between fair and good quality images. The accuracy of RNFL thickness estimation is improved significantly after poor quality OCT images are rejected by automated objective assessment using the SD, SNR, and SS.

  6. The influence of statistical variations on image quality

    NASA Astrophysics Data System (ADS)

    Hultgren, Bror; Hertel, Dirk; Bullitt, Julian

    2006-01-01

    For more than thirty years imaging scientists have constructed metrics to predict psychovisually perceived image quality. Such metrics are based on a set of objectively measurable basis functions such as Noise Power Spectrum (NPS), Modulation Transfer Function (MTF), and characteristic curves of tone and color reproduction. Although these basis functions constitute a set of primitives that fully describe an imaging system from the standpoint of information theory, we found that in practical imaging systems the basis functions themselves are determined by system-specific primitives, i.e. technology parameters. In the example of a printer, MTF and NPS are largely determined by dot structure. In addition MTF is determined by color registration, and NPS by streaking and banding. Since any given imaging system is only a single representation of a class of more or less identical systems, the family of imaging systems and the single system are not described by a unique set of image primitives. For an image produced by a given imaging system, the set of image primitives describing that particular image will be a singular instantiation of the underlying statistical distribution of that primitive. If we know precisely the set of imaging primitives that describe the given image we should be able to predict its image quality. Since only the distributions are known, we can only predict the distribution in image quality for a given image as produced by the larger class of 'identical systems'. We will demonstrate the combinatorial effect of the underlying statistical variations in the image primitives on the objectively measured image quality of a population of printers as well as on the perceived image quality of a set of test images. We also will discuss the choice of test image sets and impact of scene content on the distribution of perceived image quality.

  7. Using short-wave infrared imaging for fruit quality evaluation

    NASA Astrophysics Data System (ADS)

    Zhang, Dong; Lee, Dah-Jye; Desai, Alok

    2013-12-01

Quality evaluation of agricultural and food products is important for processing, inventory control, and marketing. Fruit size and surface quality are two important quality factors for high-quality fruit such as Medjool dates. Fruit size is usually measured by length, which can be done easily with simple image processing techniques. Surface quality evaluation, on the other hand, requires a more complicated design, both in image acquisition and in image processing. Skin delamination is considered a major factor that affects fruit quality and value. This paper presents an efficient histogram analysis and image processing technique designed specifically for real-time surface quality evaluation of Medjool dates. This approach, based on short-wave infrared imaging, provides excellent image contrast between the fruit surface and delaminated skin, which allows significant simplification of the image processing algorithm and reduces the computational power required. The proposed quality grading method requires a very simple training procedure to obtain a gray-scale image histogram for each quality level. Using histogram comparison, each date is assigned to one of four quality levels, and an optimal threshold is calculated for segmenting skin delamination areas from the fruit surface. The percentage of the fruit surface with skin delamination can then be calculated for quality evaluation. This method has been implemented for commercial production and proven to be efficient and accurate.
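The grading step can be sketched as nearest-reference-histogram classification. Histogram intersection is used below purely for illustration; the abstract does not specify the comparison function, and the reference histograms and level names are hypothetical:

```python
def grade(histogram, references):
    """Assign a fruit to the quality level whose reference histogram is most
    similar. 'references' maps level name -> reference histogram; all
    histograms are assumed normalized to the same bin layout."""
    def intersection(h1, h2):
        # Histogram intersection: one simple, common similarity measure
        return sum(min(a, b) for a, b in zip(h1, h2))
    return max(references, key=lambda lvl: intersection(histogram, references[lvl]))

# Hypothetical 3-bin reference histograms for two quality levels
refs = {"grade_A": [0.7, 0.2, 0.1], "grade_D": [0.1, 0.2, 0.7]}
print(grade([0.6, 0.3, 0.1], refs))  # -> grade_A
```

Training then amounts to averaging the histograms of hand-labeled samples per level, which matches the "very simple training procedure" claimed above.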

  8. Quality Prediction of Asymmetrically Distorted Stereoscopic 3D Images.

    PubMed

    Wang, Jiheng; Rehman, Abdul; Zeng, Kai; Wang, Shiqi; Wang, Zhou

    2015-11-01

    Objective quality assessment of distorted stereoscopic images is a challenging problem, especially when the distortions in the left and right views are asymmetric. Existing studies suggest that simply averaging the quality of the left and right views well predicts the quality of symmetrically distorted stereoscopic images, but generates substantial prediction bias when applied to asymmetrically distorted stereoscopic images. In this paper, we first build a database that contains both single-view and symmetrically and asymmetrically distorted stereoscopic images. We then carry out a subjective test, where we find that the quality prediction bias of the asymmetrically distorted images could lean toward opposite directions (overestimate or underestimate), depending on the distortion types and levels. Our subjective test also suggests that eye dominance effect does not have strong impact on the visual quality decisions of stereoscopic images. Furthermore, we develop an information content and divisive normalization-based pooling scheme that improves upon structural similarity in estimating the quality of single-view images. Finally, we propose a binocular rivalry-inspired multi-scale model to predict the quality of stereoscopic images from that of the single-view images. Our results show that the proposed model, without explicitly identifying image distortion types, successfully eliminates the prediction bias, leading to significantly improved quality prediction of the stereoscopic images. PMID:26087491

  9. LANDSAT-4 image data quality analysis

    NASA Technical Reports Server (NTRS)

    Anuta, P. E. (Principal Investigator)

    1982-01-01

Work done on evaluating the geometric and radiometric quality of early LANDSAT-4 sensor data is described. Band-to-band and channel-to-channel registration evaluations were carried out using a line correlator. Visual blink comparisons were run on an image display to observe band-to-band registration over 512 x 512 pixel blocks. The results indicate a 0.5-pixel line misregistration between the 1.55-1.75 and 2.08-2.35 micrometer bands and the first four bands. A misregistration of four 30-m lines and columns was also observed for the thermal IR band. Radiometric evaluation included mean and variance analysis of individual detectors and principal components analysis. Results indicate that detector bias for all bands is very close to or within tolerance. Bright spots were observed in the thermal IR band on an 18-line by 128-pixel grid. No explanation for this was pursued. The general overall quality of the TM was judged to be very high.

  10. Learning to rank for blind image quality assessment.

    PubMed

    Gao, Fei; Tao, Dacheng; Gao, Xinbo; Li, Xuelong

    2015-10-01

Blind image quality assessment (BIQA) aims to predict perceptual image quality scores without access to reference images. State-of-the-art BIQA methods typically require subjects to score a large number of images to train a robust model. However, subjective quality scores are imprecise, biased, and inconsistent, and it is challenging to obtain a large-scale database, or to extend existing databases, because of the inconvenience of collecting images, training the subjects, conducting subjective experiments, and realigning human quality evaluations. To combat these limitations, this paper explores and exploits preference image pairs (PIPs), such as "the quality of image Ia is better than that of image Ib", for training a robust BIQA model. The preference label, representing the relative quality of two images, is generally precise and consistent, and is not sensitive to image content, distortion type, or subject identity; such PIPs can be generated at very low cost. The proposed BIQA method is one of learning to rank. We first formulate the problem of learning the mapping from image features to the preference label as one of classification. In particular, we investigate the use of a multiple kernel learning algorithm based on group lasso to provide a solution. A simple but effective strategy to estimate perceptual image quality scores is then presented. Experiments show that the proposed BIQA method is highly effective and achieves performance comparable with that of state-of-the-art BIQA algorithms. Moreover, the proposed method can be easily extended to new distortion categories. PMID:25616080
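The preference-pair formulation above can be sketched with a plain logistic regression on feature differences (an illustration, not the paper's multiple kernel learning with group lasso; the features and the hidden "true" quality below are synthetic assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
n_images, n_feat = 200, 5
feats = rng.standard_normal((n_images, n_feat))
true_w = np.array([1.0, -0.5, 0.8, 0.0, 0.3])
quality = feats @ true_w                  # hidden perceptual quality score

# A preference pair "Ia is better than Ib" becomes the classification sample
# (f(Ia) - f(Ib), +1); swapping the pair flips the label.
ia = rng.integers(0, n_images, 2000)
ib = rng.integers(0, n_images, 2000)
keep = ia != ib
x = feats[ia[keep]] - feats[ib[keep]]
y = np.sign(quality[ia[keep]] - quality[ib[keep]])

# Gradient descent on the logistic loss; w . f(I) is the learned quality score.
w = np.zeros(n_feat)
for _ in range(500):
    margin = np.clip(y * (x @ w), -30.0, 30.0)
    grad = -(y / (1.0 + np.exp(margin))) @ x / len(y)
    w -= 0.5 * grad

# Pairwise ranking agreement of the learned scorer on fresh pairs.
ja = rng.integers(0, n_images, 1000)
jb = rng.integers(0, n_images, 1000)
ok = ja != jb
pred = np.sign((feats[ja[ok]] - feats[jb[ok]]) @ w)
truth = np.sign(quality[ja[ok]] - quality[jb[ok]])
acc = (pred == truth).mean()
print(acc)   # high pairwise agreement: the ranking is recovered
```

Note that only relative labels are ever used for training, which is what makes PIPs cheap to collect.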

  11. Food quality assessment by NIR hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Whitworth, Martin B.; Millar, Samuel J.; Chau, Astor

    2010-04-01

Near infrared reflectance (NIR) spectroscopy is well established in the food industry for rapid compositional analysis of bulk samples. NIR hyperspectral imaging provides new opportunities to measure the spatial distribution of components such as moisture and fat, and to identify and measure specific regions of composite samples. An NIR hyperspectral imaging system has been constructed for food research applications, incorporating a SWIR camera with a cooled 14 bit HgCdTe detector and N25E spectrograph (Specim Ltd, Finland). Samples are scanned in a pushbroom mode using a motorised stage. The system has a spectral resolution of 256 pixels covering a range of 970-2500 nm and a spatial resolution of 320 pixels covering a swathe adjustable from 8 to 300 mm. Images are acquired at a rate of up to 100 lines s⁻¹, enabling samples to be scanned within a few seconds. Data are captured using SpectralCube software (Specim) and analysed using ENVI and IDL (ITT Visual Information Solutions). Several food applications are presented. The strength of individual absorbance bands enables the distribution of particular components to be assessed. Examples are shown for detection of added gluten in wheat flour and to study the effect of processing conditions on fat distribution in chips/French fries. More detailed quantitative calibrations have been developed to study evolution of the moisture distribution in baguettes during storage at different humidities, to assess freshness of fish using measurements of whole cod and fillets, and for prediction of beef quality by identification and separate measurement of lean and fat regions.

  12. Retinal Image Quality during Accommodation in Adult Myopic Eyes

    PubMed Central

    Sreenivasan, Vidhyapriya; Aslakson, Emily; Kornaus, Andrew; Thibos, Larry N.

    2014-01-01

    Purpose Reduced retinal image contrast produced by accommodative lag is implicated with myopia development. Here, we measure accommodative error and retinal image quality from wavefront aberrations in myopes and emmetropes when they perform visually demanding and naturalistic tasks. Methods Wavefront aberrations were measured in 10 emmetropic and 11 myopic adults at three distances (100, 40, and 20 cm) while performing four tasks (monocular acuity, binocular acuity, reading, and movie watching). For the acuity tasks, measurements of wavefront error were obtained near the end point of the acuity experiment. Refractive state was defined as the target vergence that optimizes image quality using a visual contrast metric (VSMTF) computed from wavefront errors. Results Accommodation was most accurate (and image quality best) during binocular acuity whereas accommodation was least accurate (and image quality worst) while watching a movie. When viewing distance was reduced, accommodative lag increased and image quality (as quantified by VSMTF) declined for all tasks in both refractive groups. For any given viewing distance, computed image quality was consistently worse in myopes than in emmetropes, more so for the acuity than for reading/movie watching. Although myopes showed greater lags and worse image quality for the acuity experiments compared to emmetropes, acuity was not measurably worse in myopes compared to emmetropes. Conclusions Retinal image quality present when performing a visually demanding task (e.g., during clinical examination) is likely to be greater than for less demanding tasks (e.g., reading/movie watching). Although reductions in image quality lead to reductions in acuity, the image quality metric VSMTF is not necessarily an absolute indicator of visual performance because myopes achieved slightly better acuity than emmetropes despite showing greater lags and worse image quality. 
Reduced visual contrast in myopes compared to emmetropes is consistent

  13. Perceptual Quality Assessment for Multi-Exposure Image Fusion.

    PubMed

    Ma, Kede; Zeng, Kai; Wang, Zhou

    2015-11-01

    Multi-exposure image fusion (MEF) is considered an effective quality enhancement technique widely adopted in consumer electronics, but little work has been dedicated to the perceptual quality assessment of multi-exposure fused images. In this paper, we first build an MEF database and carry out a subjective user study to evaluate the quality of images generated by different MEF algorithms. There are several useful findings. First, considerable agreement has been observed among human subjects on the quality of MEF images. Second, no single state-of-the-art MEF algorithm produces the best quality for all test images. Third, the existing objective quality models for general image fusion are very limited in predicting perceived quality of MEF images. Motivated by the lack of appropriate objective models, we propose a novel objective image quality assessment (IQA) algorithm for MEF images based on the principle of the structural similarity approach and a novel measure of patch structural consistency. Our experimental results on the subjective database show that the proposed model well correlates with subjective judgments and significantly outperforms the existing IQA models for general image fusion. Finally, we demonstrate the potential application of the proposed model by automatically tuning the parameters of MEF algorithms. PMID:26068317
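The proposed MEF metric builds on the structural similarity principle. A minimal, single-window (global) SSIM is sketched below for reference; the paper's model additionally measures patch structural consistency across exposures, which this sketch does not attempt:

```python
import numpy as np

def ssim_global(x, y, L=255.0):
    """Single-window SSIM between two images with dynamic range L."""
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2   # standard stabilizing constants
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()          # covariance
    return ((2 * mx * my + c1) * (2 * cxy + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(0)
img = rng.uniform(0, 255, (64, 64))
noisy = np.clip(img + rng.normal(0, 25, img.shape), 0, 255)
print(ssim_global(img, img))    # ≈ 1.0 for identical images
print(ssim_global(img, noisy))  # drops below 1 as distortion grows
```

Practical SSIM implementations compute this per local window (e.g. an 11x11 Gaussian) and average the resulting map.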

  14. Quantitative image quality evaluation for cardiac CT reconstructions

    NASA Astrophysics Data System (ADS)

    Tseng, Hsin-Wu; Fan, Jiahua; Kupinski, Matthew A.; Balhorn, William; Okerlund, Darin R.

    2016-03-01

    Maintaining image quality in the presence of motion is always desirable and challenging in clinical Cardiac CT imaging. Different image-reconstruction algorithms are available on current commercial CT systems that attempt to achieve this goal. It is widely accepted that image-quality assessment should be task-based and involve specific tasks, observers, and associated figures of merits. In this work, we developed an observer model that performed the task of estimating the percentage of plaque in a vessel from CT images. We compared task performance of Cardiac CT image data reconstructed using a conventional FBP reconstruction algorithm and the SnapShot Freeze (SSF) algorithm, each at default and optimal reconstruction cardiac phases. The purpose of this work is to design an approach for quantitative image-quality evaluation of temporal resolution for Cardiac CT systems. To simulate heart motion, a moving coronary type phantom synchronized with an ECG signal was used. Three different percentage plaques embedded in a 3 mm vessel phantom were imaged multiple times under motion free, 60 bpm, and 80 bpm heart rates. Static (motion free) images of this phantom were taken as reference images for image template generation. Independent ROIs from the 60 bpm and 80 bpm images were generated by vessel tracking. The observer performed estimation tasks using these ROIs. Ensemble mean square error (EMSE) was used as the figure of merit. Results suggest that the quality of SSF images is superior to the quality of FBP images in higher heart-rate scans.
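The figure of merit above, ensemble mean square error (EMSE), is the average squared error of the observer's plaque-percentage estimates over an ensemble of noisy acquisitions. A toy template-matching "observer" is sketched here as a stand-in for the study's model; the plaque levels, noise model, and ROI profiles are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
levels = np.array([20.0, 50.0, 80.0])                    # true plaque percentages
templates = {p: p / 100.0 * np.ones(64) for p in levels}  # idealized ROI profiles

def observer_estimate(roi):
    """Return the template level with minimum mean-squared distance to roi."""
    dists = [np.mean((roi - templates[p]) ** 2) for p in levels]
    return levels[int(np.argmin(dists))]

truths, estimates = [], []
for _ in range(300):                           # ensemble of noisy acquisitions
    p = rng.choice(levels)
    roi = p / 100.0 + rng.normal(0, 0.05, 64)  # noisy measured ROI profile
    truths.append(p)
    estimates.append(observer_estimate(roi))

emse = np.mean((np.array(estimates) - np.array(truths)) ** 2)
print(emse)  # near 0 here; it rises as motion or noise degrades the ROIs
```

In the study, lower EMSE on the motion-tracked ROIs is what indicates better temporal resolution of the reconstruction.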

  15. Contrast sensitivity function calibration based on image quality prediction

    NASA Astrophysics Data System (ADS)

    Han, Yu; Cai, Yunze

    2014-11-01

    Contrast sensitivity functions (CSFs) describe visual stimuli based on their spatial frequency. However, CSF calibration is limited by the size of the sample collection and this remains an open issue. In this study, we propose an approach for calibrating CSFs that is based on the hypothesis that a precise CSF model can accurately predict image quality. Thus, CSF calibration is regarded as the inverse problem of image quality prediction according to our hypothesis. A CSF could be calibrated by optimizing the performance of a CSF-based image quality metric using a database containing images with known quality. Compared with the traditional method, this would reduce the work involved in sample collection dramatically. In the present study, we employed three image databases to optimize some existing CSF models. The experimental results showed that the performance of a three-parameter CSF model was better than that of other models. The results of this study may be helpful in CSF and image quality research.
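As a concrete example of the kind of parametric CSF whose coefficients such a calibration would fit, the classical Mannos-Sakrison form is sketched below (a well-known model chosen for illustration, not necessarily the three-parameter model the study found best):

```python
import numpy as np

def csf_mannos_sakrison(f):
    """Contrast sensitivity at spatial frequency f (cycles/degree)."""
    return 2.6 * (0.0192 + 0.114 * f) * np.exp(-(0.114 * f) ** 1.1)

freqs = np.array([0.5, 1, 2, 4, 8, 16, 32], dtype=float)
sens = csf_mannos_sakrison(freqs)
print(freqs[np.argmax(sens)])  # → 8.0: sensitivity peaks in the mid frequencies
```

Calibration in the sense described above would treat the constants (2.6, 0.0192, 0.114, 1.1) as free parameters and tune them so that a CSF-weighted quality metric best matches the scores in an image quality database.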

  16. Research iris serial images quality assessment method based on HVS

    NASA Astrophysics Data System (ADS)

    Li, Zhi-hui; Zhang, Chang-hai; Ming, Xing; Zhao, Yong-hua

    2006-01-01

    Iris recognition can be widely used in security and customs, and it provides superiority security than other human feature recognition such as fingerprint, face and so on. The iris image quality is crucial to recognition effect. Accordingly reliable image quality assessments are necessary for evaluating iris image quality. However, there haven't uniformly criterion to Image quality assessment. Image quality assessment have Objective and Subjective Evaluation methods, In practice, However Subjective Evaluation method is fussy and doesn't effective on iris recognition. Objective Evaluation method should be used in iris recognition. According to human visual system model (HVS) Multi-scale and selectivity characteristic, it presents a new iris Image quality assessment method. In the paper, ROI is found and wavelet transform zero-crossing is used to find Multi-scale edge, and Multi-scale fusion measure is used to assess iris image quality. In experiment, Objective and Subjective Evaluation methods are used to assess iris images. From the results, the method is effectively to iris image quality assessment.

  17. Automated FMV image quality assessment based on power spectrum statistics

    NASA Astrophysics Data System (ADS)

    Kalukin, Andrew

    2015-05-01

    Factors that degrade image quality in video and other sensor collections, such as noise, blurring, and poor resolution, also affect the spatial power spectrum of imagery. Prior research in human vision and image science from the last few decades has shown that the image power spectrum can be useful for assessing the quality of static images. The research in this article explores the possibility of using the image power spectrum to automatically evaluate full-motion video (FMV) imagery frame by frame. This procedure makes it possible to identify anomalous images and scene changes, and to keep track of gradual changes in quality as collection progresses. This article will describe a method to apply power spectral image quality metrics for images subjected to simulated blurring, blocking, and noise. As a preliminary test on videos from multiple sources, image quality measurements for image frames from 185 videos are compared to analyst ratings based on ground sampling distance. The goal of the research is to develop an automated system for tracking image quality during real-time collection, and to assign ratings to video clips for long-term storage, calibrated to standards such as the National Imagery Interpretability Rating System (NIIRS).
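A minimal sketch of the underlying measurement, assuming a radially averaged power spectrum per frame (the box blur and synthetic frame are stand-ins for real FMV degradations):

```python
import numpy as np

def radial_power_spectrum(img, nbins=20):
    """Radially averaged power spectrum of a 2-D image."""
    F = np.fft.fftshift(np.fft.fft2(img - img.mean()))
    power = np.abs(F) ** 2
    h, w = img.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h / 2, x - w / 2)
    bins = np.linspace(0, r.max(), nbins + 1)
    idx = np.clip(np.digitize(r.ravel(), bins) - 1, 0, nbins - 1)
    total = np.bincount(idx, weights=power.ravel(), minlength=nbins)
    count = np.maximum(np.bincount(idx, minlength=nbins), 1)
    return total / count

rng = np.random.default_rng(0)
sharp = rng.uniform(0.0, 1.0, (128, 128))
# Circular 5x5 box blur as a stand-in for a degraded frame.
blurred = sum(np.roll(np.roll(sharp, i, 0), j, 1)
              for i in range(-2, 3) for j in range(-2, 3)) / 25.0

ps_sharp = radial_power_spectrum(sharp)
ps_blurred = radial_power_spectrum(blurred)
print(ps_blurred[-5:].sum() < ps_sharp[-5:].sum())  # → True: blur collapses the high-frequency tail
```

Tracking a statistic of this tail frame by frame is one way to flag anomalous frames or gradual quality drift during collection.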

  18. The study of surgical image quality evaluation system by subjective quality factor method

    NASA Astrophysics Data System (ADS)

    Zhang, Jian J.; Xuan, Jason R.; Yang, Xirong; Yu, Honggang; Koullick, Edouard

    2016-03-01

The GreenLight™ procedure is an effective and economical treatment for benign prostatic hyperplasia (BPH); almost a million patients have been treated with GreenLight™ worldwide. During the surgical procedure, the surgeon or physician relies on the video monitoring system to survey and confirm surgical progress. A few obstructions can greatly affect the image quality of the monitoring video: laser glare from tissue and body fluid, air bubbles and debris generated by tissue evaporation, and bleeding, to name a few. In order to improve the physician's visual experience of a laser surgical procedure, the system performance parameters related to image quality need to be well defined. However, because image quality is the integrated set of perceptions of the overall degree of excellence of an image (the perceptually weighted combination of its significant attributes, such as contrast and graininess, considered in its marketplace or application), there is no standard definition of overall image or video quality, especially for the no-reference case (without a standard chart as reference). In this study, the subjective quality factor (SQF) and acutance are used for no-reference image quality evaluation. Basic image quality parameters, such as sharpness, color accuracy, and the size and transmission of obstructions, are used as subparameters to define the rating scale for image quality evaluation or comparison. Sample image groups were evaluated by human observers according to the rating scale, and surveys of physician groups were conducted with lab-generated sample videos. The study shows that human subjective perception is a trustworthy way of evaluating image quality. A more systematic investigation of the relationship between video quality and the image quality of each frame will be conducted in a future study.
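Acutance is one of the no-reference cues named above. A common simple proxy (an assumption here, not necessarily the study's exact definition) is the mean gradient magnitude of a frame: obstructions such as glare or bubbles blur local detail and lower it:

```python
import numpy as np

def acutance(img):
    """Mean gradient magnitude, a simple sharpness proxy."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy).mean()

rng = np.random.default_rng(0)
frame = rng.uniform(0, 255, (64, 64))
# Circular 3x3 box blur simulates a degraded (e.g. glare-obscured) frame.
soft = sum(np.roll(np.roll(frame, i, 0), j, 1)
           for i in (-1, 0, 1) for j in (-1, 0, 1)) / 9.0
print(acutance(frame) > acutance(soft))  # → True: blurring lowers acutance
```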

  19. Is image quality a function of contrast perception?

    NASA Astrophysics Data System (ADS)

    Haun, Andrew M.; Peli, Eli

    2013-03-01

In this retrospective we trace in broad strokes the development of image quality measures based on the study of the early stages of the human visual system (HVS), where contrast encoding is fundamental. We find that presenters at the Human Vision and Electronic Imaging meetings have frequently strived to find points of contact between the study of human contrast psychophysics and the development of computer vision and image quality algorithms. Progress has not always been made on these terms, although an indirect impact of vision science on more recent image quality metrics can be observed.

  20. New image quality assessment method using wavelet leader pyramids

    NASA Astrophysics Data System (ADS)

    Chen, Xiaolin; Yang, Xiaokang; Zheng, Shibao; Lin, Weiyao; Zhang, Rui; Zhai, Guangtao

    2011-06-01

In this paper, we propose a wavelet leader pyramid-based visual information fidelity method for image quality assessment. Motivated by the observations that the human visual system (HVS) is more sensitive to edge and contour regions and that human visual sensitivity varies with spatial frequency, we first introduce two-dimensional wavelet leader pyramids to robustly extract the multiscale information of edges. Based on the wavelet leader pyramids, we further propose a visual information fidelity metric that evaluates the quality of images by quantifying the information loss between the original and the distorted images. Experimental results show that our method outperforms many state-of-the-art image quality metrics.

  1. Image quality assessment for CT used on small animals

    NASA Astrophysics Data System (ADS)

    Cisneros, Isabela Paredes; Agulles-Pedrós, Luis

    2016-07-01

Image acquisition on a CT scanner is nowadays part of almost any kind of medical study. Its purpose, producing anatomical images with the best achievable quality, implies one of the highest diagnostic radiation exposures to patients. Image quality can be measured quantitatively based on parameters such as noise, uniformity, and resolution. These measurements allow the determination of optimal operating parameters for the scanner in order to obtain the best diagnostic image. A Philips human CT scanner is the first in Colombia intended exclusively for veterinary use. The aim of this study was to measure the CT image quality parameters using an acrylic phantom and then, using MATLAB, determine these parameters as a function of current value and visualization window, in order to reduce the delivered dose while keeping appropriate image quality.
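The ROI statistics named above (noise, uniformity) can be sketched on a synthetic uniform-phantom slice; the ROI positions and the uniformity definition follow common QA practice and are assumptions, not the study's exact protocol:

```python
import numpy as np

def roi_stats(img, cy, cx, half=10):
    """Mean and standard deviation inside a square ROI centred at (cy, cx)."""
    roi = img[cy - half:cy + half, cx - half:cx + half]
    return roi.mean(), roi.std()

# Synthetic water-equivalent slice: true value 0 HU, noise sigma = 5 HU.
rng = np.random.default_rng(0)
slice_hu = rng.normal(0.0, 5.0, (256, 256))

center_mean, noise = roi_stats(slice_hu, 128, 128)
edge_means = [roi_stats(slice_hu, cy, cx)[0]
              for cy, cx in [(30, 128), (226, 128), (128, 30), (128, 226)]]
uniformity = max(abs(m - center_mean) for m in edge_means)

print(f"noise = {noise:.1f} HU, uniformity deviation = {uniformity:.2f} HU")
```

Repeating this over a series of tube-current and window settings gives the quality-versus-dose curves the study is after.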

  2. Dynamic flat panel detector versus image intensifier in cardiac imaging: dose and image quality

    NASA Astrophysics Data System (ADS)

    Vano, E.; Geiger, B.; Schreiner, A.; Back, C.; Beissel, J.

    2005-12-01

The practical aspects of the dosimetric and imaging performance of a digital x-ray system for cardiology procedures were evaluated. The system was configured with an image intensifier (II) and later upgraded to a dynamic flat panel detector (FD). Entrance surface air kerma (ESAK) to phantoms of 16, 20, 24 and 28 cm of polymethyl methacrylate (PMMA) and the image quality of a test object were measured. Images were evaluated directly on the monitor and with numerical methods (noise and signal-to-noise ratio). Information contained in the DICOM header for dosimetry audit purposes was also tested. ESAK values per frame (or kerma rate) for the most commonly used cine and fluoroscopy modes for different PMMA thicknesses and for field sizes of 17 and 23 cm for II, and 20 and 25 cm for FD, produced similar results in the evaluated system with both technologies, ranging between 19 and 589 µGy/frame (cine) and 5 and 95 mGy min⁻¹ (fluoroscopy). Image quality for these dose settings was better for the FD version. The 'study dosimetric report' is comprehensive, and its numerical content is sufficiently accurate. There is potential in the future to set those systems with dynamic FD to lower doses than are possible in the current II versions, especially for digital cine runs, or to benefit from improved image quality.

  3. Improving the Quality of Imaging in the Emergency Department.

    PubMed

    Blackmore, C Craig; Castro, Alexandra

    2015-12-01

    Imaging is critical for the care of emergency department (ED) patients. However, much of the imaging performed for acute care today is overutilization, creating substantial cost without significant benefit. Further, the value of imaging is not easily defined, as imaging only affects outcomes indirectly, through interaction with treatment. Improving the quality, including appropriateness, of emergency imaging requires understanding of how imaging contributes to patient care. The six-tier efficacy hierarchy of Fryback and Thornbury enables understanding of the value of imaging on multiple levels, ranging from technical efficacy to medical decision-making and higher-level patient and societal outcomes. The imaging efficacy hierarchy also allows definition of imaging quality through the Institute of Medicine (IOM)'s quality domains of safety, effectiveness, patient-centeredness, timeliness, efficiency, and equitability and provides a foundation for quality improvement. In this article, the authors elucidate the Fryback and Thornbury framework to define the value of imaging in the ED and to relate emergency imaging to the IOM quality domains. PMID:26568040

  4. Quaternion structural similarity: a new quality index for color images.

    PubMed

    Kolaman, Amir; Yadid-Pecht, Orly

    2012-04-01

One of the most important issues for researchers developing image processing algorithms is image quality. Methodical quality evaluation, by showing images to several human observers, is slow, expensive, and highly subjective. On the other hand, a visual quality matrix (VQM) is a fast, cheap, and objective tool for evaluating image quality. Although most VQMs are good at predicting the quality of an image degraded by a single degradation, they perform poorly for a combination of two degradations. An example of such degradation is the color crosstalk (CTK) effect, which introduces blur with desaturation. CTK is expected to become a bigger issue in image quality as the industry moves toward smaller sensors. In this paper, we develop a VQM that better evaluates the quality of an image degraded by a combined blur/desaturation degradation and performs as well as other VQMs on single degradations such as blur, compression, and noise. We show why standard scalar techniques are insufficient to measure a combined blur/desaturation degradation and explain why a vectorial approach is better suited. We introduce quaternion image processing (QIP), which is a true vectorial approach with many uses in the fields of physics and engineering. Our new VQM is a vectorial expansion of structural similarity using QIP, which gave it its name: Quaternion Structural SIMilarity (QSSIM). We built a new database of a combined blur/desaturation degradation and conducted a quality survey with human subjects. An extensive comparison between QSSIM and other VQMs on several image quality databases, including our new database, shows the superiority of this new approach in predicting the visual quality of color images. PMID:22203713

  5. Effect of image quality on calcification detection in digital mammography

    SciTech Connect

    Warren, Lucy M.; Mackenzie, Alistair; Cooke, Julie; Given-Wilson, Rosalind M.; Wallis, Matthew G.; Chakraborty, Dev P.; Dance, David R.; Bosmans, Hilde; Young, Kenneth C.

    2012-06-15

    Purpose: This study aims to investigate if microcalcification detection varies significantly when mammographic images are acquired using different image qualities, including: different detectors, dose levels, and different image processing algorithms. An additional aim was to determine how the standard European method of measuring image quality using threshold gold thickness measured with a CDMAM phantom and the associated limits in current EU guidelines relate to calcification detection. Methods: One hundred and sixty two normal breast images were acquired on an amorphous selenium direct digital (DR) system. Microcalcification clusters extracted from magnified images of slices of mastectomies were electronically inserted into half of the images. The calcification clusters had a subtle appearance. All images were adjusted using a validated mathematical method to simulate the appearance of images from a computed radiography (CR) imaging system at the same dose, from both systems at half this dose, and from the DR system at quarter this dose. The original 162 images were processed with both Hologic and Agfa (Musica-2) image processing. All other image qualities were processed with Agfa (Musica-2) image processing only. Seven experienced observers marked and rated any identified suspicious regions. Free response operating characteristic (FROC) and ROC analyses were performed on the data. The lesion sensitivity at a nonlesion localization fraction (NLF) of 0.1 was also calculated. Images of the CDMAM mammographic test phantom were acquired using the automatic setting on the DR system. These images were modified to the additional image qualities used in the observer study. The images were analyzed using automated software. In order to assess the relationship between threshold gold thickness and calcification detection a power law was fitted to the data. 
Results: There was a significant reduction in calcification detection using CR compared with DR: the alternative FROC

  6. Effect of image quality on calcification detection in digital mammography

    PubMed Central

    Warren, Lucy M.; Mackenzie, Alistair; Cooke, Julie; Given-Wilson, Rosalind M.; Wallis, Matthew G.; Chakraborty, Dev P.; Dance, David R.; Bosmans, Hilde; Young, Kenneth C.

    2012-01-01

    Purpose: This study aims to investigate if microcalcification detection varies significantly when mammographic images are acquired using different image qualities, including: different detectors, dose levels, and different image processing algorithms. An additional aim was to determine how the standard European method of measuring image quality using threshold gold thickness measured with a CDMAM phantom and the associated limits in current EU guidelines relate to calcification detection. Methods: One hundred and sixty two normal breast images were acquired on an amorphous selenium direct digital (DR) system. Microcalcification clusters extracted from magnified images of slices of mastectomies were electronically inserted into half of the images. The calcification clusters had a subtle appearance. All images were adjusted using a validated mathematical method to simulate the appearance of images from a computed radiography (CR) imaging system at the same dose, from both systems at half this dose, and from the DR system at quarter this dose. The original 162 images were processed with both Hologic and Agfa (Musica-2) image processing. All other image qualities were processed with Agfa (Musica-2) image processing only. Seven experienced observers marked and rated any identified suspicious regions. Free response operating characteristic (FROC) and ROC analyses were performed on the data. The lesion sensitivity at a nonlesion localization fraction (NLF) of 0.1 was also calculated. Images of the CDMAM mammographic test phantom were acquired using the automatic setting on the DR system. These images were modified to the additional image qualities used in the observer study. The images were analyzed using automated software. In order to assess the relationship between threshold gold thickness and calcification detection a power law was fitted to the data. 
Results: There was a significant reduction in calcification detection using CR compared with DR: the alternative FROC

  7. Image quality and dose efficiency of high energy phase sensitive x-ray imaging: Phantom studies

    PubMed Central

    Wong, Molly Donovan; Wu, Xizeng; Liu, Hong

    2014-01-01

The goal of this preliminary study was to perform an image quality comparison of high energy phase sensitive imaging with low energy conventional imaging at similar radiation doses. The comparison was performed with the following phantoms: American College of Radiology (ACR), contrast-detail (CD), acrylic edge and tissue-equivalent. Visual comparison of the phantom images indicated comparable or improved image quality for all phantoms. Quantitative comparisons were performed through ACR and CD observer studies, both of which indicated higher image quality in the high energy phase sensitive images. The results of this study demonstrate the ability of high energy phase sensitive imaging to overcome existing challenges with the clinical implementation of phase contrast imaging and improve the image quality for a similar radiation dose as compared to conventional imaging near typical mammography energies. In addition, the results illustrate the capability of phase sensitive imaging to sustain the image quality improvement at high x-ray energies and for breast-simulating phantoms, both of which indicate the potential to benefit fields such as mammography. Future studies will continue to investigate the potential for dose reduction and image quality improvement provided by high energy phase sensitive contrast imaging. PMID:24865208

  8. Image processing system performance prediction and product quality evaluation

    NASA Technical Reports Server (NTRS)

    Stein, E. K.; Hammill, H. B. (Principal Investigator)

    1976-01-01

    The author has identified the following significant results. A new technique for image processing system performance prediction and product quality evaluation was developed. It was entirely objective, quantitative, and general, and should prove useful in system design and quality control. The technique and its application to determination of quality control procedures for the Earth Resources Technology Satellite NASA Data Processing Facility are described.

  9. A new assessment method for image fusion quality

    NASA Astrophysics Data System (ADS)

    Li, Liu; Jiang, Wanying; Li, Jing; Yuchi, Ming; Ding, Mingyue; Zhang, Xuming

    2013-03-01

Image fusion quality assessment plays a critically important role in the field of medical imaging. To evaluate image fusion quality effectively, many assessment methods have been proposed, including mutual information (MI), root mean square error (RMSE), and the universal image quality index (UIQI). These methods, however, do not reflect human visual inspection effectively. To address this problem, we propose a novel image fusion assessment method that combines the nonsubsampled contourlet transform (NSCT) with regional mutual information. In the proposed method, the source medical images are first decomposed into different levels by the NSCT. Then the maximum NSCT coefficients of the decomposed directional images at each level are used to compute the regional mutual information (RMI). Finally, multi-channel RMI is computed as the weighted sum of the RMI values obtained at the various levels of the NSCT. The advantage of the proposed method lies in the fact that the NSCT represents image information at multiple directions and scales and therefore conforms to the multi-channel characteristic of the human visual system, leading to outstanding image assessment performance. Experimental results using CT and MRI images demonstrate that the proposed method outperforms MI- and UIQI-based measures in evaluating image fusion quality and provides results consistent with human visual assessment.
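Mutual information is the building block of the RMI measure above. A histogram-based estimate between two grayscale images is sketched here; the paper computes it regionally on NSCT coefficient maps rather than globally on pixels, so this is only the core quantity:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Histogram estimate of the mutual information (in nats) between images a and b."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of b
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
img = rng.uniform(0, 1, (64, 64))
related = img + rng.normal(0, 0.1, img.shape)    # a degraded copy
unrelated = rng.uniform(0, 1, (64, 64))
print(mutual_information(img, related) > mutual_information(img, unrelated))  # → True
```

The histogram estimator is biased upward for small samples, which is one reason regional variants weight and pool MI values carefully.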

  10. Raman chemical imaging system for food safety and quality inspection

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Raman chemical imaging technique combines Raman spectroscopy and digital imaging to visualize composition and structure of a target, and it offers great potential for food safety and quality research. In this study, a laboratory-based Raman chemical imaging platform was designed and developed. The i...

  11. Study on the improvement of overall optical image quality via digital image processing

    NASA Astrophysics Data System (ADS)

    Tsai, Cheng-Mu; Fang, Yi Chin; Lin, Yu Chin

    2008-12-01

This paper studies the effects of improving overall optical image quality via digital image processing (DIP) and compares the enhanced optical image with the non-processed optical image. Seen from the optical system, the improvement of image quality has a great influence on chromatic and monochromatic aberration. However, overall image capture systems, such as cellphones and digital cameras, include not only the basic optical system but also many other components, such as the electronic circuit system and transducer system, whose quality can directly affect the image quality of the whole picture. Therefore, in this paper digital image processing technology is utilized to improve the overall image. Experiments show that the system modulation transfer function (MTF) obtained by applying the proposed DIP technology to a comparatively poor optical system can be comparable to, and possibly even superior to, the system MTF derived from a good optical system.

  12. Retinal image quality assessment through a visual similarity index

    NASA Astrophysics Data System (ADS)

    Pérez, Jorge; Espinosa, Julián; Vázquez, Carmen; Mas, David

    2013-04-01

    Retinal image quality is commonly analyzed through parameters inherited from instrumental optics. These parameters are defined for 'good optics', so they are hard to translate into visual quality metrics. Instead of using point or artificial functions, we propose a quality index that takes into account properties of natural images. Such images usually show strong local correlations that help to interpret the image. Our aim is to derive an objective index that quantifies the quality of vision by taking into account the local structure of the scene, instead of focusing on a particular aberration. As we show, this index correlates highly with visual acuity and allows inter-comparison of natural images around the retina. The usefulness of the index is proven through the analysis of real eyes before and after undergoing corneal surgery, which are usually hard to analyze with standard metrics.

  13. Testing scanners for the quality of output images

    NASA Astrophysics Data System (ADS)

    Concepcion, Vicente P.; Nadel, Lawrence D.; D'Amato, Donald P.

    1995-01-01

    Document scanning is the means through which documents are converted to their digital image representation for electronic storage or distribution. Among the types of documents being scanned by government agencies are tax forms, patent documents, office correspondence, mail pieces, engineering drawings, microfilm, archived historical papers, and fingerprint cards. Increasingly, the resulting digital images are used as the input for further automated processing, including conversion to a full-text-searchable representation via machine-printed or handwritten (optical) character recognition (OCR), postal zone identification, raster-to-vector conversion, and fingerprint matching. These diverse document images may be bi-tonal, gray scale, or color. Spatial sampling frequencies range from about 200 pixels per inch to over 1,000. The quality of the digital images can have a major effect on the accuracy and speed of any subsequent automated processing, as well as on any human-based processing which may be required. During imaging system design, there is, therefore, a need to specify the criteria by which image quality will be judged and, prior to system acceptance, to measure the quality of images produced. Unfortunately, there are few, if any, agreed-upon techniques for measuring document image quality objectively. In the output images, it is difficult to distinguish image degradation caused by the poor quality of the input paper or microfilm from that caused by the scanning system. We propose several document image quality criteria and have developed techniques for their measurement. These criteria include spatial resolution, geometric image accuracy (distortion), gray scale resolution and linearity, and temporal and spatial uniformity. The measurement of these criteria requires scanning one or more test targets, together with computer-based analyses of the test target images.

  14. No-reference visual quality assessment for image inpainting

    NASA Astrophysics Data System (ADS)

    Voronin, V. V.; Frantc, V. A.; Marchuk, V. I.; Sherstobitov, A. I.; Egiazarian, K.

    2015-03-01

    Inpainting has received a lot of attention in recent years, and quality assessment is an important task in evaluating different image reconstruction approaches. When recovering large areas of missing pixels, inpainting methods often introduce blur into sharp transitions and image contours, and often fail to recover curvy boundary edges. Quantitative metrics for inpainting results currently do not exist, and researchers rely on human comparisons to evaluate their methodologies and techniques. Most objective quality assessment methods rely on a reference image, which is typically not available in inpainting applications. Subjective quality assessment by human observers is therefore used instead, but it is a difficult and time-consuming procedure. This paper focuses on a machine learning approach to no-reference visual quality assessment for image inpainting based on properties of human vision. Our method is based on the observation that Local Binary Patterns (LBP) describe the local structural information of an image well. We use a support vector regression model, trained on images assessed by humans, to predict the perceived quality of inpainted images. We demonstrate how the predicted quality value correlates with qualitative opinion in a human observer study. Results are shown on a human-scored dataset for different inpainting methods.
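
    A minimal sketch of the LBP feature extraction described above; the regression stage (e.g. a support vector regressor trained on human scores) would consume these histograms. The 3x3 neighborhood and 256-bin layout are the classic formulation, used here as an illustrative assumption about the paper's exact variant:

```python
import numpy as np

def lbp_histogram(img):
    """3x3 Local Binary Pattern codes, returned as a normalized
    256-bin histogram (a simple structural feature vector)."""
    c = img[1:-1, 1:-1]                       # interior pixels (centers)
    code = np.zeros_like(c, dtype=np.uint8)
    # 8 neighbors, clockwise from top-left; each contributes one bit.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= ((nb >= c).astype(np.uint8) << bit)
    hist = np.bincount(code.ravel(), minlength=256).astype(float)
    return hist / hist.sum()
```

On a perfectly flat region every neighbor ties with its center, so all mass falls into code 255; structured or inpainting-blurred regions spread the histogram out.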

  15. Improvement of image quality by polarization mixing

    NASA Astrophysics Data System (ADS)

    Kasahara, Ryosuke; Itoh, Izumi; Hirai, Hideaki

    2014-03-01

    Information about the polarization of light is valuable because it contains information about the light source illuminating an object, the illumination angle, and the object material. However, polarization information strongly depends on the direction of the light source, and it is difficult to use a polarization image with various recognition algorithms outdoors because the angle of the sun varies. We propose an image enhancement method for utilizing polarization information in the many situations where the light source is not fixed. We take two approaches to overcome this problem. First, we compute an image that is the combination of a polarization image and the corresponding brightness image. Depending on the angle of the light source, the polarization contains no information about some scenes. Therefore, it is difficult to rely on polarization information alone in every scene for applications such as object detection. However, if we use a combination of a polarization image and a brightness image, the brightness image can compensate for the lack of scene information. The second approach is to find features that depend less on the direction of the light source. We propose a method for extracting scene features based on a calculation of the reflection model including polarization effects. A polarization camera that has micro-polarizers on each pixel of the image sensor was built and used for capturing images. We discuss examples that demonstrate the improved visibility of objects achieved by applying the proposed method, e.g., to lane markers on wet roads.
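
    For reference, a micro-polarizer camera of the kind described typically captures 0°/45°/90°/135° analyzer images, from which the linear Stokes parameters and the degree of linear polarization (DoLP) follow per pixel. This is the standard textbook construction, not code from the paper:

```python
import numpy as np

def degree_of_linear_polarization(i0, i45, i90, i135):
    """Linear Stokes parameters and degree of linear polarization from
    four polarizer-angle images (0, 45, 90, 135 degrees)."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
    s1 = i0 - i90                        # horizontal vs vertical
    s2 = i45 - i135                      # diagonal components
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-12)
    return s0, dolp
```

Fully linearly polarized light gives DoLP = 1, unpolarized light gives DoLP = 0; s0 is the brightness image that the paper combines with the polarization image.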

  16. Meat quality evaluation by hyperspectral imaging technique: an overview.

    PubMed

    Elmasry, Gamal; Barbin, Douglas F; Sun, Da-Wen; Allen, Paul

    2012-01-01

    During the last two decades, a number of methods have been developed to objectively measure meat quality attributes. Hyperspectral imaging technique as one of these methods has been regarded as a smart and promising analytical tool for analyses conducted in research and industries. Recently there has been a renewed interest in using hyperspectral imaging in quality evaluation of different food products. The main inducement for developing the hyperspectral imaging system is to integrate both spectroscopy and imaging techniques in one system to make direct identification of different components and their spatial distribution in the tested product. By combining spatial and spectral details together, hyperspectral imaging has proved to be a promising technology for objective meat quality evaluation. The literature presented in this paper clearly reveals that hyperspectral imaging approaches have a huge potential for gaining rapid information about the chemical structure and related physical properties of all types of meat. In addition to its ability for effectively quantifying and characterizing quality attributes of some important visual features of meat such as color, quality grade, marbling, maturity, and texture, it is able to measure multiple chemical constituents simultaneously without monotonous sample preparation. Although this technology has not yet been sufficiently exploited in meat process and quality assessment, its potential is promising. Developing a quality evaluation system based on hyperspectral imaging technology to assess the meat quality parameters and to ensure its authentication would bring economical benefits to the meat industry by increasing consumer confidence in the quality of the meat products. 
This paper provides a detailed overview of the recently developed approaches and latest research efforts exerted in hyperspectral imaging technology developed for evaluating the quality of different meat products and the possibility of its widespread

  17. Comorbidity Structure of Psychological Disorders in the Online e-PASS Data as Predictors of Psychosocial Adjustment Measures: Psychological Distress, Adequate Social Support, Self-Confidence, Quality of Life, and Suicidal Ideation

    PubMed Central

    Klein, Britt; Meyer, Denny

    2014-01-01

    Background A relative newcomer to the field of psychology, e-mental health has been gaining momentum and has been given considerable research attention. Although several aspects of e-mental health have been studied, 1 aspect has yet to receive attention: the structure of comorbidity of psychological disorders and their relationships with measures of psychosocial adjustment including suicidal ideation in online samples. Objective This exploratory study attempted to identify the structure of comorbidity of 21 psychological disorders assessed by an automated online electronic psychological assessment screening system (e-PASS). The resulting comorbidity factor scores were then used to assess the association between comorbidity factor scores and measures of psychosocial adjustments (ie, psychological distress, suicidal ideation, adequate social support, self-confidence in dealing with mental health issues, and quality of life). Methods A total of 13,414 participants were assessed using a complex online algorithm that resulted in primary and secondary Diagnostic and Statistical Manual of Mental Disorders (Fourth Edition, Text Revision) diagnoses for 21 psychological disorders on dimensional severity scales. The scores on these severity scales were used in a principal component analysis (PCA) and the resulting comorbidity factor scores were related to 4 measures of psychosocial adjustments. Results A PCA based on 17 of the 21 psychological disorders resulted in a 4-factor model of comorbidity: anxiety-depression consisting of all anxiety disorders, major depressive episode (MDE), and insomnia; substance abuse consisting of alcohol and drug abuse and dependency; body image–eating consisting of eating disorders, body dysmorphic disorder, and obsessive-compulsive disorders; depression–sleep problems consisting of MDE, insomnia, and hypersomnia. All comorbidity factor scores were significantly associated with psychosocial measures of adjustment (P<.001). They were

  18. Image quality assessment by preprocessing and full reference model combination

    NASA Astrophysics Data System (ADS)

    Bianco, S.; Ciocca, G.; Marini, F.; Schettini, R.

    2009-01-01

    This paper focuses on full-reference image quality assessment and presents different computational strategies aimed at improving the robustness and accuracy of several well-known and widely used state-of-the-art models, namely the Structural Similarity approach (SSIM) by Wang and Bovik and the S-CIELAB spatial-color model by Zhang and Wandell. We investigate the hypothesis that combining error images with a visual attention model allows a better fit of the psycho-visual data of the LIVE Image Quality Assessment Database Release 2. We show that the proposed quality assessment metric correlates better with the experimental data.

  19. Method and tool for generating and managing image quality allocations through the design and development process

    NASA Astrophysics Data System (ADS)

    Sparks, Andrew W.; Olson, Craig; Theisen, Michael J.; Addiego, Chris J.; Hutchins, Tiffany G.; Goodman, Timothy D.

    2016-05-01

    Performance models for infrared imaging systems require image quality parameters; optical design engineers need image quality design goals; systems engineers develop image quality allocations to test imaging systems against. It is a challenge to maintain consistency and traceability amongst the various expressions of image quality. We present a method and parametric tool for generating and managing expressions of image quality during the system modeling, requirements specification, design, and testing phases of an imaging system design and development project.

  20. Image quality assessment using Takagi-Sugeno-Kang fuzzy model

    NASA Astrophysics Data System (ADS)

    Đorđević, Dragana; Kukolj, Dragan; Schelkens, Peter

    2015-03-01

    The main aim of this paper is to present a non-linear image quality assessment model based on a fuzzy logic estimator, namely the Takagi-Sugeno-Kang fuzzy model. This image quality assessment model uses a clustered space of input objective metrics. The main advantages of the introduced quality model are the simplicity and understandability of its fuzzy rules. A third-order polynomial model was chosen as the reference model. The parameters of the Takagi-Sugeno-Kang fuzzy model are optimized in accordance with the criteria for mapping the selected set of input objective quality measures to the Mean Opinion Score (MOS) scale.
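
    A first-order Takagi-Sugeno-Kang model of the kind described can be sketched as follows. The Gaussian rule memberships and the toy rule parameters in the example are illustrative assumptions, not the paper's fitted model:

```python
import numpy as np

def tsk_predict(x, centers, widths, coeffs, intercepts):
    """First-order TSK inference: Gaussian rule memberships weight
    linear consequents; the output is their normalized weighted average."""
    # Firing strength of each rule: product of per-input Gaussian memberships.
    w = np.exp(-((x[None, :] - centers) ** 2 / (2 * widths ** 2)).sum(axis=1))
    # Linear consequent y_r = a_r . x + b_r for each rule r.
    y = coeffs @ x + intercepts
    return float((w * y).sum() / w.sum())
```

In a quality model, x would hold the objective metric values and the rule parameters would be fitted so the output tracks MOS.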

  1. Objective analysis of image quality of video image capture systems

    NASA Astrophysics Data System (ADS)

    Rowberg, Alan H.

    1990-07-01

    As Picture Archiving and Communication System (PACS) technology has matured, video image capture has become a common way of capturing digital images from many modalities. While digital interfaces, such as those which use the ACR/NEMA standard, will become more common in the future, and are preferred because of the accuracy of image transfer, video image capture will be the dominant method in the short term, and may continue to be used for some time because of the low cost and high speed often associated with such devices. Currently, virtually all installed systems use methods of digitizing the video signal that is produced for display on the scanner viewing console itself. A series of digital test images has been developed for display on either a GE CT9800 or a GE Signa MRI scanner. These images have been captured with each of five commercially available image capture systems, and the resultant images digitally transferred on floppy disk to a PC1286 computer containing Optimast image analysis software. Here the images can be displayed in a comparative manner for visual evaluation, in addition to being analyzed statistically. Each of the images has been designed to support certain tests, including noise, accuracy, linearity, gray scale range, stability, slew rate, and pixel alignment. These image capture systems vary widely in these characteristics, in addition to the presence or absence of other artifacts, such as shading and moire pattern. Other accessories such as video distribution amplifiers and noise filters can also add or modify artifacts seen in the captured images, often giving unusual results. Each image is described, together with the tests which were performed using them. One image contains alternating black and white lines, each one pixel wide, after equilibration strips ten pixels wide. While some systems have a slew rate fast enough to track this correctly, others blur it to an average shade of gray, and do not resolve the lines, or give

  2. Impact of image acquisition timing on image quality for dual energy contrast-enhanced breast tomosynthesis

    NASA Astrophysics Data System (ADS)

    Hill, Melissa L.; Mainprize, James G.; Puong, Sylvie; Carton, Ann-Katherine; Iordache, Razvan; Muller, Serge; Yaffe, Martin J.

    2012-03-01

    Dual-energy contrast-enhanced digital breast tomosynthesis (DE CE-DBT) image quality is affected by a large parameter space including the tomosynthesis acquisition geometry, imaging technique factors, the choice of reconstruction algorithm, and the subject breast characteristics. The influence of most of these factors on reconstructed image quality is well understood for DBT. However, due to the contrast agent uptake kinetics in CE imaging, the subject breast characteristics change over time, presenting a challenge for optimization. In this work we experimentally evaluate the sensitivity of the reconstructed image quality to the timing of the low-energy and high-energy images and to changes in iodine concentration during image acquisition. For four contrast uptake patterns, a variety of acquisition protocols were tested with different timing and geometry. The influence of the choice of reconstruction algorithm (SART or FBP) was also assessed. Image quality was evaluated in terms of the lesion signal-difference-to-noise ratio (LSDNR) in the central slice of DE CE-DBT reconstructions. Results suggest that for maximum image quality, the low- and high-energy image acquisitions should be made within one x-ray tube sweep, as separate low- and high-energy tube sweeps can degrade LSDNR. In terms of LSDNR per square-root dose, the image quality is nearly equal between SART reconstructions with 9 and 15 angular views, but using fewer angular views can result in a significant improvement in the quantitative accuracy of the reconstructions due to the shorter imaging time interval.
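
    The LSDNR figure of merit used above reduces, for a single slice, to a signal-difference-to-noise ratio between a lesion region and its background. A minimal sketch, with hypothetical region masks supplied by the caller:

```python
import numpy as np

def sdnr(image, lesion_mask, background_mask):
    """Signal-difference-to-noise ratio: |mean_lesion - mean_bg| / std_bg."""
    sig = image[lesion_mask].mean()
    bg = image[background_mask].mean()
    return abs(sig - bg) / image[background_mask].std()
```

In the paper's setting the masks would come from the known phantom geometry, and dose normalization divides SDNR by the square root of dose.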

  3. Interplay between JPEG-2000 image coding and quality estimation

    NASA Astrophysics Data System (ADS)

    Pinto, Guilherme O.; Hemami, Sheila S.

    2013-03-01

    Image quality and utility estimators aspire to quantify the perceptual resemblance and the usefulness of a distorted image when compared to a reference natural image, respectively. Image coders, such as JPEG-2000, traditionally aspire to allocate the available bits to maximize the perceptual resemblance of the compressed image when compared to a reference uncompressed natural image. Specifically, this can be accomplished by allocating the available bits to minimize the overall distortion, as computed by a given quality estimator. This paper applies five image quality and utility estimators, SSIM, VIF, MSE, NICE and GMSE, within a JPEG-2000 encoder for rate-distortion optimization to obtain new insights on how to improve JPEG-2000 image coding for quality and utility applications, as well as to improve the understanding of the quality and utility estimators used in this work. This work develops a rate-allocation algorithm for arbitrary quality and utility estimators within the Post-Compression Rate-Distortion Optimization (PCRD-opt) framework in JPEG-2000 image coding. Performance of the JPEG-2000 image coder when used with a variety of utility and quality estimators is then assessed. The estimators fall into two broad classes, magnitude-dependent (MSE, GMSE and NICE) and magnitude-independent (SSIM and VIF). They further differ in their use of the low-frequency image content in computing their estimates. The impact of these computational differences is analyzed across a range of images and bit rates. In general, performance of the JPEG-2000 coder below 1.6 bits/pixel with any of these estimators is highly content dependent, with the most relevant content being the amount of texture in an image and whether the strongest gradients in an image correspond to the main contours of the scene. Above 1.6 bits/pixel, all estimators produce visually equivalent images. 
As a result, the MSE estimator provides the most consistent performance across all images, while specific

  4. Digital Receptor Image Quality Evaluation: Effect of Different Filtration Schemes

    NASA Astrophysics Data System (ADS)

    Murphy, Simon; Christianson, Olav; Amurao, Maxwell; Samei, Ehsan

    2010-04-01

    The International Electrotechnical Commission provides a standard measurement methodology for performance intercomparison between imaging systems. Its formalism specifies beam quality based on the half-value layer attained by the target kVp and additional Al filtration. Similar beam quality may be attained more conveniently using a filtration combination of Cu and Al. This study aimed to compare the two filtration schemes by their effects on image quality in terms of signal-difference-to-noise ratio, spatial resolution, exposure index, noise power spectrum, modulation transfer function, and detective quantum efficiency. A comparative assessment of the images was performed by analyzing a commercially available image quality assessment phantom and by following the IEC 62220-3 formalism.

  5. Scanner-based image quality measurement system for automated analysis of EP output

    NASA Astrophysics Data System (ADS)

    Kipman, Yair; Mehta, Prashant; Johnson, Kate

    2003-12-01

    Inspection of electrophotographic print cartridge quality and compatibility requires analysis of hundreds of pages on a wide population of printers and copiers. Although print quality inspection is often achieved through the use of anchor prints and densitometry, more comprehensive analysis and quantitative data are desired for performance tracking, benchmarking and failure mode analysis. Image quality measurement systems range in price and performance, image capture paths and levels of automation. In order to address the requirements of a specific application, careful consideration was given to print volume, budgetary limits, and the scope of the desired image quality measurements. A flatbed scanner-based image quality measurement system was selected to support high throughput, maximal automation, and sufficient flexibility for both measurement methods and image sampling rates. Using an automatic document feeder (ADF) for sample management, a half ream of prints can be measured automatically without operator intervention. The system includes optical character recognition (OCR) for automatic determination of target type for measurement suite selection. This capability also enables measurement of mixed stacks of targets since each sample is identified prior to measurement. In addition, OCR is used to read toner ID, machine ID, print count, and other pertinent information regarding the printing conditions and environment. This data is saved to a data file along with the measurement results for complete test documentation. Measurement methods were developed to replace current methods of visual inspection and densitometry. The features that were being analyzed visually could be addressed via standard measurement algorithms. Measurement of density proved to be less simple since the scanner is not a densitometer and anything short of an excellent estimation would be meaningless. In order to address the measurement of density, a transfer curve was built to translate the

  6. Figure of Image Quality and Information Capacity in Digital Mammography

    PubMed Central

    Michail, Christos M.; Kalyvas, Nektarios E.; Valais, Ioannis G.; Fudos, Ioannis P.; Fountos, George P.; Dimitropoulos, Nikos; Kandarakis, Ioannis S.

    2014-01-01

    Objectives. In this work, a simple technique to assess the image quality characteristics of the postprocessed image is developed and an easy to use figure of image quality (FIQ) is introduced. This FIQ characterizes images in terms of resolution and noise. In addition, information capacity, defined within the context of Shannon's information theory, was used as an overall image quality index. Materials and Methods. A digital mammographic image was postprocessed with three digital filters. Resolution and noise were calculated via the Modulation Transfer Function (MTF), the coefficient of variation, and the figure of image quality. In addition, frequency dependent parameters such as the noise power spectrum (NPS) and noise equivalent quanta (NEQ) were estimated and used to assess information capacity. Results. FIQs for the “raw image” data and the image processed with the “sharpen edges” filter were found to be 907.3 and 1906.1, respectively. The information capacity values were 60.86 × 103 and 78.96 × 103 bits/mm2. Conclusion. It was found that, after the application of the postprocessing techniques (even commercial nondedicated software) on the raw digital mammograms, MTF, NPS, and NEQ are improved for medium to high spatial frequencies, leading to resolving smaller structures in the final image. PMID:24895593
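
    The information capacity idea can be sketched as a discrete Shannon integral over spatial frequency. The SNR²(f) = signal_power · MTF(f)² / NPS(f) form below is a simplified stand-in for the full NEQ-based expression, and the inputs are synthetic:

```python
import numpy as np

def information_capacity(freqs, mtf, nps, signal_power):
    """Shannon-style capacity: integrate log2(1 + SNR^2(f)) over spatial
    frequency, with SNR^2(f) = signal_power * MTF(f)^2 / NPS(f).
    Trapezoidal integration is done by hand for portability."""
    snr2 = signal_power * mtf**2 / nps
    vals = np.log2(1.0 + snr2)
    df = np.diff(freqs)
    return float(((vals[:-1] + vals[1:]) / 2 * df).sum())
```

With flat unit MTF and NPS over a unit frequency band and unit signal power, the integrand is log2(2) = 1 everywhere, so the capacity is exactly 1 bit per unit area; raising signal power (or NEQ) raises the capacity, matching the paper's observation that postprocessing which improves NEQ improves information capacity.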

  7. Image quality evaluation and control of computer-generated holograms

    NASA Astrophysics Data System (ADS)

    Yoshikawa, Hiroshi; Yamaguchi, Takeshi; Uetake, Hiroki

    2016-03-01

    The image quality of computer-generated holograms is usually evaluated subjectively. For example, the reconstructed image from the hologram is compared with other holograms, or evaluated by the double-stimulus impairment scale method against the original image. This paper proposes an objective image quality evaluation of a computer-generated hologram by evaluating both diffraction efficiency and peak signal-to-noise ratio. Theory and numerical experimental results are shown for Fourier transform transmission holograms of both amplitude and phase modulation. Results without the optimized random phase show that the amplitude transmission hologram gives a better peak signal-to-noise ratio, but the phase transmission hologram provides about 10 times higher diffraction efficiency than the amplitude type. As an optimized phase hologram, the kinoform is evaluated. In addition, we investigate controlling image quality by non-linear operations.
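
    The two proposed criteria are easy to state in code. A minimal sketch; the signal mask is a hypothetical region of the reconstruction plane, not the paper's geometry:

```python
import numpy as np

def psnr(reference, reconstructed, peak=1.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((reference - reconstructed) ** 2)
    return 10 * np.log10(peak**2 / mse)

def diffraction_efficiency(field, signal_mask):
    """Fraction of total diffracted power landing in the signal region
    of the reconstruction plane."""
    power = np.abs(field) ** 2
    return float(power[signal_mask].sum() / power.sum())
```

The paper's trade-off appears directly in these two numbers: an amplitude hologram may score higher on psnr while a phase hologram concentrates more power into the signal region.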

  8. Dosimetry and image quality assessment in a direct radiography system

    PubMed Central

    Oliveira, Bruno Beraldo; de Oliveira, Marcio Alves; Paixão, Lucas; Teixeira, Maria Helena Araújo; Nogueira, Maria do Socorro

    2014-01-01

    Objective To evaluate the mean glandular dose with a solid state detector and the image quality in a direct radiography system, utilizing phantoms. Materials and Methods Irradiations were performed with automatic exposure control and polymethyl methacrylate slabs with different thicknesses to calculate glandular dose values. The image quality was evaluated by means of the structures visualized on the images of the phantoms. Results Considering the uncertainty of the measurements, the mean glandular dose results are in agreement with the values provided by the equipment and with internationally adopted reference levels. Results obtained from images of the phantoms were in agreement with the reference values. Conclusion The present study contributes to verify the equipment conformity as regards dose values and image quality. PMID:25741119

  9. Contrast vs noise effects on image quality

    NASA Astrophysics Data System (ADS)

    Hadar, Ofer; Corse, N.; Rotman, Stanley R.; Kopeika, Norman S.

    1996-11-01

    Low-noise images are contrast-limited, and image restoration techniques can improve resolution significantly. However, as noise level increases, resolution improvements via image processing become more limited because image restoration increases noise. This research attempts to construct a reliable quantitative means of characterizing the perceptual difference between target and background. A method is suggested for evaluating the extent to which it is possible to discriminate an object which has merged with its surroundings, in noise-limited and contrast-limited images, i.e., how hard it would be for an observer to recognize the object against various backgrounds as a function of noise level. The suggested model is, to begin with, a first-order model, using a regular bar-chart target with additive uncorrelated Gaussian noise degraded by standard atmospheric blurring filters. The second phase will comprise a model dealing with higher-order images. This computational model relates the detectability or distinctness of the object to measurable parameters. It must also characterize human perceptual response, i.e., the model must develop metrics which are highly correlated with the ease or difficulty the human observer experiences in discerning the target from its background. This requirement can be fulfilled only by conducting psychophysical experiments quantitatively comparing the perceptual evaluations of the observers with the results of the mathematical model.
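
    The first-order setup described (bar-chart target, atmospheric-style blur, additive Gaussian noise) might be prototyped as below; the kernel width, bar period, and noise level are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Bar-chart target: alternating dark/bright vertical bars, 8-pixel period.
x = np.arange(128)
target = np.where((x // 4) % 2 == 0, 0.2, 0.8)
target = np.tile(target, (64, 1))

# Degrade: separable horizontal Gaussian blur plus additive Gaussian noise.
k = np.exp(-np.arange(-6, 7) ** 2 / (2 * 2.0 ** 2))
k /= k.sum()
blurred = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, target)
noisy = blurred + rng.normal(0.0, 0.05, blurred.shape)

def michelson(img):
    """Michelson contrast (max - min) / (max + min)."""
    return (img.max() - img.min()) / (img.max() + img.min())
```

Blur mixes neighboring bars and collapses the Michelson contrast, while the added noise then limits how much restoration can recover, which is the trade-off the abstract describes.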

  10. Objective image quality assessment based on support vector regression.

    PubMed

    Narwaria, Manish; Lin, Weisi

    2010-03-01

    Objective image quality estimation is useful in many visual processing systems, and is difficult to perform in line with human perception. The challenge lies in formulating effective features and fusing them into a single number to predict the quality score. In this brief, we propose a new approach to address the problem, with the use of singular vectors out of singular value decomposition (SVD) as features for quantifying major structural information in images, and then support vector regression (SVR) for automatic prediction of image quality. The feature selection with singular vectors is novel and general for gauging structural changes in images as a good representative of visual quality variations. The use of SVR exploits the advantages of machine learning, with the ability to learn complex data patterns for an effective and generalized mapping of features into a desired score, in contrast with the oft-utilized feature pooling process in the existing image quality estimators; this is to overcome the difficulty of model parameter determination for such a system to emulate the related, complex human visual system (HVS) characteristics. Experiments conducted with three independent databases confirm the effectiveness of the proposed system in predicting image quality with better alignment with the HVS's perception than the relevant existing work. The tests with untrained distortions and databases further demonstrate the robustness of the system and the importance of the feature selection. PMID:20100674
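
    One simple SVD-derived feature in this spirit is a distance between the singular-value spectra of a reference block and a distorted block. Note the paper itself uses singular vectors, so this is a simplified relative, not the authors' feature; an SVR would then be trained on such block features against subjective scores:

```python
import numpy as np

def svd_feature_distance(ref_block, dist_block):
    """Distance between the singular-value spectra of a reference and a
    distorted image block, a simple proxy for structural change."""
    s_ref = np.linalg.svd(ref_block, compute_uv=False)
    s_dist = np.linalg.svd(dist_block, compute_uv=False)
    return float(np.sqrt(((s_ref - s_dist) ** 2).sum()))
```

An undistorted block scores zero, and the distance grows with the severity of the structural change.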

  11. Imaging quality full chip verification for yield improvement

    NASA Astrophysics Data System (ADS)

    Yang, Qing; Zhou, CongShu; Quek, ShyueFong; Lu, Mark; Foong, YeeMei; Qiu, JianHong; Pandey, Taksh; Dover, Russell

    2013-04-01

    Basic image intensity parameters, like maximum and minimum intensity values (Imin and Imax), image logarithm slope (ILS), normalized image logarithm slope (NILS) and mask error enhancement factor (MEEF), are well known as indices of photolithography imaging quality. For full chip verification, hotspot detection is typically based on threshold values for line pinching or bridging. For image intensity parameters it is generally harder to quantify an absolute value to define where the process limit will occur, and at which process stage: lithography, etch or post-CMP. However it is easy to conclude that hot spots captured by image intensity parameters are more susceptible to process variation and very likely to impact yield. In addition, these image intensity hot spots can be missed by resist model verification, because the resist model is normally calibrated against wafer data on a single resist plane and is an empirical model that fits the resist critical dimension with a mathematical algorithm combined with optical calculation. Also, at the resolution enhancement technology (RET) development stage, a full chip imaging quality check is a method to qualify the RET solution, such as Optical Proximity Correction (OPC) performance. Adding full chip verification using image intensity parameters is also not as costly as adding one more resist model simulation. From a foundry yield improvement and cost saving perspective, it is valuable to quantify the imaging quality to find design hot spots and correctly define the inline process control margin. This paper studies the correlation between image intensity parameters and process weakness or catastrophic hard failures at different process stages. It also demonstrates how an OPC solution can improve full chip image intensity parameters. Rigorous 3D resist profile simulation across the full height of the resist stack was also performed to identify a correlation with the image intensity parameters. 
A methodology of post-OPC full
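As a rough illustration of the parameters discussed above, ILS and NILS can be computed from a one-dimensional aerial-image intensity profile. The sketch below is illustrative only: the function name, the discrete-gradient approach, and the toy exponential profile are assumptions, not the paper's implementation.

```python
import numpy as np

def ils_nils(intensity, x, edge_index, feature_width):
    """Image log slope (ILS) and normalized ILS (NILS) at a feature edge:
    ILS = |d(ln I)/dx| at the nominal edge position; NILS multiplies by the
    feature width so different feature sizes are directly comparable."""
    log_slope = np.gradient(np.log(intensity), x)
    ils = float(abs(log_slope[edge_index]))
    return ils, ils * feature_width

# toy aerial-image profile: exponential ramp, so the log slope is 3.0 everywhere
x = np.linspace(0.0, 1.0, 11)
intensity = np.exp(3.0 * x)
ils, nils = ils_nils(intensity, x, edge_index=5, feature_width=0.1)
print(ils, nils)  # 3.0 and 0.3 (up to floating point)
```

A higher NILS at a given feature indicates a steeper, more process-tolerant image; hotspot screening would then threshold NILS across the full chip.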

  12. Applying image quality in cell phone cameras: lens distortion

    NASA Astrophysics Data System (ADS)

    Baxter, Donald; Goma, Sergio R.; Aleksic, Milivoje

    2009-01-01

    This paper describes the framework used in one of the pilot studies run under the I3A CPIQ initiative to quantify overall image quality in cell-phone cameras. The framework is based on a multivariate formalism which predicts overall image quality from individual image quality attributes, and was validated in a CPIQ pilot program. The pilot study focuses on image quality distortions introduced in the optical path of a cell-phone camera, which may or may not be corrected in the image processing path. The assumption is that the captured image is JPEG compressed and the cell-phone camera is set to 'auto' mode. Because the framework requires the individual attributes to be relatively perceptually orthogonal, the attributes used in the pilot study are lens geometric distortion (LGD) and lateral chromatic aberration (LCA). The goal of this paper is to present the framework of this pilot project, starting with the definition of the individual attributes and ending with their quantification in JNDs of quality, a requirement of the multivariate formalism; both objective and subjective evaluations were therefore used. A major distinction from the 'DSC imaging world' on the objective side is that the LCA/LGD distortions found in cell-phone cameras rarely exhibit radial behavior, so a radial mapping/model cannot be used in this case.

  13. Image quality based x-ray dose control in cardiac imaging

    NASA Astrophysics Data System (ADS)

    Davies, Andrew G.; Kengyelics, Stephen M.; Gislason-Lee, Amber J.

    2015-03-01

    An automated closed-loop dose control system balances the radiation dose delivered to patients and the quality of images produced in cardiac x-ray imaging systems. Using computer simulations, this study compared two designs of automatic x-ray dose control in terms of the radiation dose and quality of images produced. The first design, common in x-ray systems today, maintained a constant dose rate at the image receptor. The second design maintained a constant image quality in the output images. A computer model represented patients as a polymethylmethacrylate phantom (which has similar x-ray attenuation to soft tissue) containing a detail representative of an artery filled with contrast medium. The model predicted the entrance surface dose to the phantom and the contrast-to-noise ratio of the detail as an index of image quality. Results showed that for the constant dose control system, phantom dose increased substantially with phantom size (a 5-fold increase between 20 cm and 30 cm phantom thicknesses), yet image quality decreased by 43% over the same range. For the constant quality control system, phantom dose increased at a greater rate with phantom thickness (a more than 10-fold increase between the 20 cm and 30 cm phantoms). Image quality based dose control could tailor the x-ray output to just achieve the quality required, which would reduce dose to patients in cases where the current dose control produces images of unnecessarily high quality. However, maintaining higher levels of image quality for large patients would result in a significant dose increase over current practice.
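The contrast-to-noise ratio used as the image quality index above can be sketched as follows. The ROI-based definition (detail mean minus background mean, divided by background noise) is a common convention and an assumption here, not necessarily the exact formulation used in the study.

```python
import numpy as np

def cnr(detail_roi, background_roi):
    """Contrast-to-noise ratio: difference of ROI means over background noise."""
    contrast = np.mean(detail_roi) - np.mean(background_roi)
    return float(abs(contrast) / np.std(background_roi))

# simulated ROIs: a contrast-filled artery detail over a uniform phantom region
rng = np.random.default_rng(0)
vessel = 120.0 + 5.0 * rng.standard_normal((32, 32))
background = 100.0 + 5.0 * rng.standard_normal((32, 32))
print(f"CNR = {cnr(vessel, background):.1f}")  # about 4 at this noise level
```

In a constant-quality control loop, the x-ray output would be adjusted until such a CNR value reaches a target, rather than holding the receptor dose rate fixed.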

  14. Raman chemical imaging technology for food safety and quality evaluation

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Raman chemical imaging combines Raman spectroscopy and digital imaging to visualize composition and morphology of a target. This technique offers great potential for food safety and quality research. Most commercial Raman instruments perform measurement at microscopic level, and the spatial range ca...

  15. Concentration, size, mean lifetime, and noise effects on image quality in luminescence optical tomography

    NASA Astrophysics Data System (ADS)

    Chang, Jenghwa; Graber, Harry L.; Barbour, Randall L.

    1997-08-01

    The impact of background lumiphore in luminescence optical tomography is examined. To demonstrate its effects, numerical simulations were performed to calculate the diffusion-regime limiting form of forward-problem solutions for a specific test medium. Image reconstructions were performed using a CGD algorithm with a rescaling technique and positivity constraints. In addition, we develop a modification to the basic algorithm that makes use of the maximum possible concentration in order to estimate the background concentration, and show that it improves image quality when background lumiphore is present. We conclude that the usual measure of background lumiphore's effect, which is the target-to-background lumiphore concentration ratio, is not adequate to define the contribution from the background lumiphore. The reason for this is that image quality is also a function of target size and location. An alternative measure that we find superior is described. The results indicate that the improved algorithm yields better image quality for low target-to-background ratios.

  16. A feature-enriched completely blind image quality evaluator.

    PubMed

    Lin Zhang; Lei Zhang; Bovik, Alan C

    2015-08-01

    Existing blind image quality assessment (BIQA) methods are mostly opinion-aware. They learn regression models from training images with associated human subjective scores to predict the perceptual quality of test images. Such opinion-aware methods, however, require a large number of training samples with associated human subjective scores, spanning a variety of distortion types. The BIQA models learned by opinion-aware methods often have weak generalization capability, thereby limiting their usability in practice. By comparison, opinion-unaware methods do not need human subjective scores for training, and thus have greater potential for good generalization capability. Unfortunately, thus far no opinion-unaware BIQA method has shown consistently better quality prediction accuracy than the opinion-aware methods. Here, we aim to develop an opinion-unaware BIQA method that can compete with, and perhaps outperform, the existing opinion-aware methods. By integrating features of natural image statistics derived from multiple cues, we learn a multivariate Gaussian model of image patches from a collection of pristine natural images. Using the learned multivariate Gaussian model, a Bhattacharyya-like distance is used to measure the quality of each image patch, and an overall quality score is then obtained by average pooling. The proposed BIQA method does not need any distorted sample images or subjective quality scores for training, yet extensive experiments demonstrate its superior quality-prediction performance relative to state-of-the-art opinion-aware BIQA methods. The MATLAB source code of our algorithm is publicly available at www.comp.polyu.edu.hk/~cslzhang/IQA/ILNIQE/ILNIQE.htm. PMID:25915960
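A minimal sketch of the Bhattacharyya-like distance between two multivariate Gaussian fits (a pristine reference model versus features from a test image) is shown below. The pooled-covariance form follows the NIQE family of metrics; IL-NIQE's actual feature extraction and per-patch pooling are considerably more involved, so treat this purely as an illustration of the distance itself.

```python
import numpy as np

def mvg_distance(mu_ref, cov_ref, mu_test, cov_test):
    """Bhattacharyya-like distance between two multivariate Gaussian fits,
    as used in NIQE-style blind quality indices: larger distance from the
    pristine model means lower predicted quality."""
    diff = mu_ref - mu_test
    pooled = (cov_ref + cov_test) / 2.0
    return float(np.sqrt(diff @ np.linalg.pinv(pooled) @ diff))

# toy 2-D feature space: identical fits give distance 0
mu = np.array([0.0, 1.0])
cov = np.eye(2)
print(mvg_distance(mu, cov, mu, cov))  # 0.0 (identical fits)
```

The pseudo-inverse guards against a singular pooled covariance when the feature dimension exceeds the number of patches.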

  17. A patient image-based technique to assess the image quality of clinical chest radiographs

    NASA Astrophysics Data System (ADS)

    Lin, Yuan; Samei, Ehsan; Luo, Hui; Dobbins, James T., III; McAdams, H. Page; Wang, Xiaohui; Sehnert, William J.; Barski, Lori; Foos, David H.

    2011-03-01

    Current clinical image quality assessment techniques mainly analyze image quality of the imaging system in terms of factors such as the capture system DQE and MTF, the exposure technique, and the particular image processing method and processing parameters. However, when assessing a clinical image, radiologists seldom refer to these factors; rather, they examine several specific regions of the image to see whether the image is suitable for diagnosis. In this work, we developed a new strategy to learn and simulate radiologists' evaluation process on actual clinical chest images. Based on this strategy, a preliminary study was conducted on 254 digital chest radiographs (38 AP without grids, 35 AP with 6:1 ratio grids and 151 PA with 10:1 ratio grids). First, ten region-based perceptual qualities were summarized through an observer study. Each quality was characterized in terms of a physical quantity measured from the image, and as a first step, the three physical quantities in the lung region were implemented algorithmically. A pilot observer study was performed to verify the correlation between the perceptual qualities and the physical quantitative qualities. The results demonstrated that our region-based metrics have promising performance for grading perceptual properties of chest radiographs.

  18. Analysis of the Effects of Image Quality on Digital Map Generation from Satellite Images

    NASA Astrophysics Data System (ADS)

    Kim, H.; Kim, D.; Kim, S.; Kim, T.

    2012-07-01

    High-resolution satellite images have been widely used to produce and update digital maps since they became widely available. It is well known that the accuracy of a digital map produced from satellite images is determined largely by the accuracy of geometric modelling. However, digital maps are made through a series of photogrammetric workflows; therefore their accuracy is also affected by the quality of the satellite images, such as image interpretability. For satellite images, parameters such as the Modulation Transfer Function (MTF), Signal to Noise Ratio (SNR) and Ground Sampling Distance (GSD) are used to represent image quality. Our previous research stressed that such quality parameters may not represent the quality of image products such as digital maps, and that parameters for image interpretability such as Ground Resolved Distance (GRD) and the National Imagery Interpretability Rating Scale (NIIRS) need to be considered. In this study, we analyzed the effects of image quality on the accuracy of digital maps produced from satellite images. QuickBird, IKONOS and KOMPSAT-2 imagery were used for the analysis as they have similar GSDs. We measured the various image quality parameters mentioned above from these images, then produced digital maps from them using a digital photogrammetric workstation. We analyzed the accuracy of the digital maps in terms of their location accuracy and their level of detail, and compared the correlation between the image quality parameters and map accuracy. The results of this study showed that GRD and NIIRS were more critical for map production than GSD, MTF or SNR.

  19. The influence of novel CT reconstruction technique and ECG-gated technique on image quality and patient dose of cardiac computed tomography.

    PubMed

    Dyakov, I; Stoinova, V; Groudeva, V; Vassileva, J

    2015-07-01

    The aim of the present study was to compare image quality and patient dose in cardiac computed tomography angiography (CTA), in terms of volume computed tomography dose index (CTDIvol), dose length product (DLP) and effective dose, when changing from filtered back projection (FBP) to adaptive iterative dose reduction (AIDR) reconstruction techniques. A further aim was to implement prospective electrocardiogram (ECG) gating for patient dose reduction. The study was performed with an Aquilion ONE 320-row CT scanner from Toshiba Medical Systems. Analysis of the cardiac CT protocols was performed before and after integration of the new software. The AIDR technique showed more than a 50 % reduction in CTDIvol values and 57 % in effective dose. The subjective evaluation of clinical images confirmed the adequate image quality acquired by the AIDR technique. The preliminary results indicated significant dose reduction when using prospective ECG gating while keeping adequate diagnostic quality of clinical images. PMID:25836680
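Effective dose figures like those above are conventionally estimated from the DLP with a region-specific conversion coefficient. The chest value k = 0.014 mSv/(mGy·cm) used as the default below is a commonly cited adult coefficient and is purely illustrative; it is not stated to be the coefficient used in this study.

```python
# DLP-to-effective-dose conversion: E = k * DLP, with k a region-specific
# coefficient in mSv/(mGy*cm). k = 0.014 (adult chest) is a commonly cited
# value, used here only as an illustrative default, not this study's choice.
def effective_dose_msv(dlp_mgy_cm, k=0.014):
    """Estimate effective dose (mSv) from dose length product (mGy*cm)."""
    return dlp_mgy_cm * k

print(effective_dose_msv(500.0))  # ~7 mSv for a DLP of 500 mGy*cm
```

A 57 % effective-dose reduction then corresponds directly to a 57 % DLP reduction at a fixed k.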

  20. Perceived quality of wood images influenced by the skewness of image histogram

    NASA Astrophysics Data System (ADS)

    Katsura, Shigehito; Mizokami, Yoko; Yaguchi, Hirohisa

    2015-08-01

    The shape of image luminance histograms is related to material perception. We investigated how the luminance histogram contributed to improvements in the perceived quality of wood images by examining various natural wood and adhesive vinyl sheets with printed wood grain. In the first experiment, we visually evaluated the perceived quality of wood samples. In addition, we measured the colorimetric parameters of the wood samples and calculated statistics of image luminance. The relationship between visual evaluation scores and image statistics suggested that skewness and kurtosis affected the perceived quality of wood. In the second experiment, we evaluated the perceived quality of wood images with altered luminance skewness and kurtosis using a paired comparison method. Our result suggests that wood images are more realistic if the skewness of the luminance histogram is slightly negative.
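The luminance statistics examined in this study can be computed directly from pixel values. A minimal sketch follows, using the standardized third and fourth moments; the excess-kurtosis convention (0 for a Gaussian) is an assumption on my part, as the abstract does not specify it.

```python
import numpy as np

def luminance_stats(image):
    """Skewness and excess kurtosis of an image's luminance distribution
    (standardized 3rd and 4th central moments; excess kurtosis is 0 for a
    Gaussian-shaped histogram)."""
    x = np.asarray(image, dtype=float).ravel()
    z = (x - x.mean()) / x.std()
    return float(np.mean(z ** 3)), float(np.mean(z ** 4) - 3.0)

# toy "glossy" luminance sample: squaring a positive-mean Gaussian yields a
# positively skewed distribution, mimicking a histogram with a bright tail
rng = np.random.default_rng(1)
skew, kurt = luminance_stats(rng.normal(0.4, 0.1, 100_000) ** 2)
print(f"skewness = {skew:.2f}, excess kurtosis = {kurt:.2f}")  # skew > 0
```

Altering an image toward slightly negative skewness, as the second experiment does, would shift mass toward the bright end of the histogram.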

  1. Image Quality Evaluation on ALOS/PRISM and AVNIR-2

    NASA Astrophysics Data System (ADS)

    Mukaida, Akira; Imoto, Naritoshi; Tadono, Takeo; Murakami, Hiroshi; Kawamoto, Sachi

    2008-11-01

    Image quality evaluation of ALOS (Advanced Land Observing Satellite) / PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) and AVNIR-2 (Advanced Visible and Near Infrared Radiometer 2) has been carried out during the operational phase. This report presents the results of the image quality evaluation in terms of MTF (Modulation Transfer Function) and SNR (Signal to Noise Ratio) for both PRISM and AVNIR-2. The SNR of PRISM images has increased following the updating of the radiometric correction and the implementation of a JPEG noise reduction filter. The results were within specification for both sensors.

  2. Effect of optical aberrations on image quality and visual performance

    NASA Astrophysics Data System (ADS)

    Ravikumar, Sowmya

    In addition to the effects of diffraction, retinal image quality in the human eye is degraded by optical aberrations. Although the paraxial geometric optics description of defocus consists of a simple blurred circle whose size determines the extent of blur, in reality the interactions between monochromatic and chromatic aberrations create a complex pattern of retinal image degradation. My thesis work hypothesizes that although both monochromatic and chromatic optical aberrations in general reduce image quality from best achievable, the underlying causes of retinal image quality degradation are characteristic of the nature of the aberration, its interactions with other aberrations as well as the composition of the stimulus. To establish a controlled methodology, a computational model of the retinal image with various levels of aberrations was used to create filters equivalent to those produced by real optical aberrations. Visual performance was measured psychophysically by using these special filters that separately modulated amplitude and phase in the retinal image. In order to include chromatic aberration into the optical interactions, a computational polychromatic model of the eye was created and validated. The model starts with monochromatic wavefront maps and derives a composite white light point-spread function whose quality was assessed using metrics of image quality. Finally, in order to assess the effectiveness of simultaneous multifocal intra-ocular lenses in correcting the eye's optical aberrations, a polychromatic computational model of a pseudophakic eye was constructed. This model incorporated the special chromatic properties unique to an eye corrected with hybrid refractive-diffractive optical elements. Results showed that normal optical aberrations reduced visual performance not only by reducing image contrast but also by altering the phase structure of the image. Longitudinal chromatic aberration had a greater effect on image quality in isolation

  3. Digital image quality measurements by objective and subjective methods from series of parametrically degraded images

    NASA Astrophysics Data System (ADS)

    Tachó, Aura; Mitjà, Carles; Martínez, Bea; Escofet, Jaume; Ralló, Miquel

    2013-11-01

    Many digital image applications, like the digitization of cultural heritage for preservation purposes, operate with compressed files in one or more of the image-viewing steps. For this kind of application, JPEG compression is one of the most widely used. Compression level, final file size and quality loss are parameters that must be managed optimally. Although this loss can be monitored by means of objective image quality measurements, the real challenge is knowing how it relates to the image quality perceived by observers. A pictorial image has been degraded by two different procedures. The first applies different levels of low-pass filtering by convolving the image with progressively broader Gaussian kernels. The second saves the original file at a series of JPEG compression levels. In both cases, the objective image quality measurement is done by analysis of the image power spectrum. In order to obtain a measure of perceived image quality, both series of degraded images are displayed on a computer screen organized in random pairs, and the observers are asked to choose the better image of each pair. Finally, a ranking is established by applying the Thurstone scaling method. The results obtained from the two measurements are compared with each other and with another objective measurement method, the slanted-edge test.
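Thurstone scaling from paired-comparison data can be sketched as below. This is the common Case V variant with a simple clipping guard for unanimous preferences; the abstract does not specify which variant the authors used, so the details here are assumptions.

```python
from statistics import NormalDist

def thurstone_case_v(wins):
    """Thurstone Case V scale values from a paired-comparison win matrix,
    where wins[i][j] is how often stimulus i was preferred over stimulus j.
    Each preference proportion is converted to a z-score (probit), and the
    scale value is the mean z-score of a stimulus against all others."""
    n = len(wins)
    inv_cdf = NormalDist().inv_cdf
    scale = []
    for i in range(n):
        z_sum = 0.0
        for j in range(n):
            if i == j:
                continue
            p = wins[i][j] / (wins[i][j] + wins[j][i])
            p = min(max(p, 0.01), 0.99)  # guard against unanimous 0/1 proportions
            z_sum += inv_cdf(p)
        scale.append(z_sum / n)  # self-comparison contributes z = 0
    return scale

# three degraded versions of one image, 20 comparisons per pair
wins = [[0, 15, 18],
        [5, 0, 14],
        [2, 6, 0]]
print(thurstone_case_v(wins))  # item 0 scores highest (least degraded)
```

The resulting scale values are on an interval scale, so only the ordering and spacing of the degraded images are meaningful, not the absolute numbers.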

  4. Total Quality Can Help Your District's Image.

    ERIC Educational Resources Information Center

    Cokeley, Sandra

    1996-01-01

    Describes how educators in the Pearl River School District, Pearl River, New York, have implemented Total Quality Management (TQM) principles to evaluate and improve their effectiveness. Includes two charts that depict key indicators of financial and academic performance and a seven-year profile of the district's budget, enrollment, diploma rate,…

  5. Optimization and image quality assessment of the alpha-image reconstruction algorithm: iterative reconstruction with well-defined image quality metrics

    NASA Astrophysics Data System (ADS)

    Lebedev, Sergej; Sawall, Stefan; Kuchenbecker, Stefan; Faby, Sebastian; Knaup, Michael; Kachelrieß, Marc

    2015-03-01

    The reconstruction of CT images with low noise and highest spatial resolution is a challenging task. Usually, a trade-off between at least these two demands has to be found, or several reconstructions with mutually exclusive properties, i.e. either low noise or high spatial resolution, have to be performed. Iterative reconstruction methods might be suitable tools to overcome these limitations and provide images of highest diagnostic quality with formerly mutually exclusive image properties. While image quality metrics like the modulation transfer function (MTF) or the point spread function (PSF) are well-defined in case of standard reconstructions, e.g. filtered backprojection, the iterative algorithms lack these metrics. To overcome this issue, alternate methodologies like model observers have been proposed recently to allow a quantification of a usually task-dependent image quality metric [1]. As an alternative, we recently proposed an iterative reconstruction method, the alpha-image reconstruction (AIR), providing well-defined image quality metrics on a per-voxel basis [2]. In particular, the AIR algorithm seeks to find weighting images, the alpha-images, that are used to blend between basis images with mutually exclusive image properties. The result is an image with highest diagnostic quality that provides a high spatial resolution and a low noise level. As the estimation of the alpha-images is computationally demanding, we herein aim at optimizing this process and highlight the favorable properties of AIR using patient measurements.
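The core blending step of AIR, weighting between basis images with mutually exclusive properties via a per-voxel alpha-image, can be sketched as follows. The estimation of the alpha-images themselves (the computationally demanding part the paper optimizes) is omitted; the toy alpha map below is an assumption for illustration.

```python
import numpy as np

def alpha_blend(sharp, smooth, alpha):
    """Per-voxel blend of two basis reconstructions: alpha = 1 keeps the
    high-resolution (noisier) image, alpha = 0 keeps the smooth low-noise one."""
    return alpha * sharp + (1.0 - alpha) * smooth

rng = np.random.default_rng(2)
smooth = np.full((8, 8), 50.0)                       # low-noise basis image
sharp = smooth + 10.0 * rng.standard_normal((8, 8))  # high-resolution, noisy
alpha = np.zeros((8, 8))
alpha[2:6, 2:6] = 1.0   # keep full detail only where structure/edges sit
blended = alpha_blend(sharp, smooth, alpha)
```

Because the blend is linear per voxel, image quality metrics of the result can be derived voxel-wise from the well-characterized basis reconstructions, which is what gives AIR its well-defined metrics.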

  6. Study of a water quality imager for coastal zone missions

    NASA Technical Reports Server (NTRS)

    Staylor, W. F.; Harrison, E. F.; Wessel, V. W.

    1975-01-01

    The present work surveys water quality user requirements and then determines the general characteristics of an orbiting imager (the Applications Explorer, or AE) dedicated to the measurement of water quality, which could be used as a low-cost means of testing advanced imager concepts and assessing the ability of imager techniques to meet the goals of a comprehensive water quality monitoring program. The proposed imager has four spectral bands, a spatial resolution of 25 meters, and swath width of 36 km with a pointing capability of 330 km. Silicon photodetector arrays, pointing systems, and several optical features are included. A nominal orbit of 500 km altitude at an inclination of 50 deg is recommended.

  7. Quality evaluation of extra high quality images based on key assessment word

    NASA Astrophysics Data System (ADS)

    Kameda, Masashi; Hayashi, Hidehiko; Akamatsu, Shigeru; Miyahara, Makoto M.

    2001-06-01

    An all-encompassing goal of our research is to develop an extra high quality imaging system which is able to convey a high-level artistic impression faithfully. We have defined the high order sensation as such a high-level artistic impression, and this high order sensation is supposed to be expressed by a combination of psychological factors that can be described by multiple assessment words. In order to pursue the quality factors that are important for the reproduction of the high order sensation, we have focused on the image quality evaluation of extra high quality images using assessment words that reflect the high order sensation. In this paper, we have obtained the hierarchical structure between the collected assessment words and the principles of European painting based on the conveyance model of the high order sensation, and we have determined a key assessment word, 'plasticity', which is able to evaluate the reproduction of the high order sensation more accurately. The results of subjective assessment experiments using the prototype of the developed extra high quality imaging system have shown that the obtained key assessment word 'plasticity' is the most appropriate assessment word for evaluating the image quality of extra high quality images quasi-quantitatively.

  8. Imaging quality assessment of multi-modal miniature microscope.

    PubMed

    Lee, Junwon; Rogers, Jeremy; Descour, Michael; Hsu, Elizabeth; Aaron, Jesse; Sokolov, Konstantin; Richards-Kortum, Rebecca

    2003-06-16

    We are developing a multi-modal miniature microscope (4M device) to image morphology and cytochemistry in vivo and provide better delineation of tumors. The 4M device is designed to be a complete microscope on a chip, including optical, micro-mechanical, and electronic components. It has advantages such as compact size and capability for microscopic-scale imaging. This paper presents an optics-only prototype 4M device, the very first imaging system made of sol-gel material. The microoptics used in the 4M device has a diameter of 1.3 mm. Metrology for assessing the imaging quality of the prototype device is presented. We describe causes of imaging performance degradation in order to improve the fabrication process. We built a multi-modal imaging test-bed to measure first-order properties and to assess the imaging quality of the 4M device. The 4M prototype has a field of view of 290 microm in diameter, a magnification of -3.9, a working distance of 250 microm and a depth of field of 29.6+/-6 microm. We report the modulation transfer function (MTF) of the 4M device as a quantitative metric of imaging quality. Based on the MTF data, we calculated a Strehl ratio of 0.59. In order to investigate the cause of imaging quality degradation, the surface characteristics of the lenses in 4M devices are measured and reported. We also imaged both polystyrene microspheres similar in size to epithelial cell nuclei and cervical cancer cells. Imaging results indicate that the 4M prototype can resolve cellular detail necessary for detection of precancer. PMID:19466016
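One common way to obtain a Strehl ratio from MTF data is the ratio of the areas (volumes, in 2-D) under the measured and diffraction-limited MTFs. Whether the authors used exactly this estimator is an assumption; the sketch below only illustrates the relation.

```python
import numpy as np

def strehl_from_mtf(mtf_measured, mtf_ideal):
    """Strehl ratio estimated as the ratio of the areas under the measured
    and diffraction-limited MTF curves (a standard approximation valid for
    modest aberrations)."""
    return float(np.sum(mtf_measured) / np.sum(mtf_ideal))

# toy 1-D cut: an aberrated MTF that is everywhere 60% of the ideal one
f = np.linspace(0.0, 1.0, 101)
mtf_ideal = 1.0 - f          # stand-in for a diffraction-limited MTF
mtf_real = 0.6 * mtf_ideal
print(strehl_from_mtf(mtf_real, mtf_ideal))  # 0.6
```

With equal frequency sampling for both curves, the sampling interval cancels, so plain sums suffice in place of explicit integrals.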

  9. Image gathering and restoration - Information and visual quality

    NASA Technical Reports Server (NTRS)

    Mccormick, Judith A.; Alter-Gartenberg, Rachel; Huck, Friedrich O.

    1989-01-01

    A method is investigated for optimizing the end-to-end performance of image gathering and restoration for visual quality. To achieve this objective, one must inevitably confront the problems that the visual quality of restored images depends on perceptual rather than mathematical considerations and that these considerations vary with the target, the application, and the observer. The method adopted in this paper is to optimize image gathering informationally and to restore images interactively to obtain the visually preferred trade-off among fidelity resolution, sharpness, and clarity. The results demonstrate that this method leads to significant improvements in the visual quality obtained by the traditional digital processing methods. These traditional methods allow a significant loss of visual quality to occur because they treat the design of the image-gathering system and the formulation of the image-restoration algorithm as two separate tasks and fail to account for the transformations between the continuous and the discrete representations in image gathering and reconstruction.

  10. The effect of image quality and forensic expertise in facial image comparisons.

    PubMed

    Norell, Kristin; Läthén, Klas Brorsson; Bergström, Peter; Rice, Allyson; Natu, Vaidehi; O'Toole, Alice

    2015-03-01

    Images of perpetrators in surveillance video footage are often used as evidence in court. In this study, identification accuracy in facial image comparisons was compared for forensic experts and untrained persons, along with the impact of image quality. Participants viewed thirty image pairs and were asked to rate the level of support garnered from their observations for concluding whether or not the two images showed the same person. Forensic experts reached their conclusions with significantly fewer errors than did untrained participants. They were also better than novices at determining when two high-quality images depicted the same person. Notably, lower image quality led to more careful conclusions by experts, but not by untrained participants. In summary, the untrained participants had more false negatives and false positives than the experts; the latter finding implies a higher risk of an innocent person being convicted when the comparison is made by an untrained witness. PMID:25537273

  11. Improving high resolution retinal image quality using speckle illumination HiLo imaging

    PubMed Central

    Zhou, Xiaolin; Bedggood, Phillip; Metha, Andrew

    2014-01-01

    Retinal image quality from flood illumination adaptive optics (AO) ophthalmoscopes is adversely affected by out-of-focus light scatter due to the lack of confocality. This effect is more pronounced in small eyes, such as those of rodents, because the requisite high optical power confers a large dioptric thickness to the retina. A recently developed structured illumination microscopy (SIM) technique called HiLo imaging has been shown to reduce the effect of out-of-focus light scatter in flood illumination microscopes and produce pseudo-confocal images with significantly improved image quality. In this work, we adapted the HiLo technique to a flood AO ophthalmoscope and performed AO imaging in both (physical) model and live rat eyes. The improvement in image quality from HiLo imaging is shown both qualitatively and quantitatively by using spatial spectral analysis. PMID:25136486
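A simplified sketch of the HiLo fusion step: low spatial frequencies are taken from the contrast-weighted speckle image (which carries the optical sectioning information), and high frequencies come from the uniform-illumination image, which is inherently sectioned at those frequencies. The Gaussian frequency split and the `eta` scaling parameter below are illustrative assumptions; published HiLo implementations differ in the contrast-weighting and scaling details, which are omitted here.

```python
import numpy as np

def gaussian_lowpass(img, sigma):
    """Low-pass filter via a Gaussian transfer function in the Fourier domain."""
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    h = np.exp(-2.0 * (np.pi * sigma) ** 2 * (fx ** 2 + fy ** 2))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * h))

def hilo_fuse(uniform_img, weighted_speckle_img, sigma=3.0, eta=1.0):
    """Fuse low frequencies of the (contrast-weighted) speckle image with
    high frequencies of the uniform image into a pseudo-confocal result."""
    lo = gaussian_lowpass(weighted_speckle_img, sigma)
    hi = uniform_img - gaussian_lowpass(uniform_img, sigma)
    return eta * lo + hi
```

The complementary split means a well-chosen `eta` makes the transfer across the crossover frequency seamless.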

  12. Peripheral Aberrations and Image Quality for Contact Lens Correction

    PubMed Central

    Shen, Jie; Thibos, Larry N.

    2011-01-01

    Purpose Contact lenses reduced the degree of hyperopic field curvature present in myopic eyes and rigid contact lenses reduced sphero-cylindrical image blur on the peripheral retina, but their effect on higher order aberrations and overall optical quality of the eye in the peripheral visual field is still unknown. The purpose of our study was to evaluate peripheral wavefront aberrations and image quality across the visual field before and after contact lens correction. Methods A commercial Hartmann-Shack aberrometer was used to measure ocular wavefront errors in 5° steps out to 30° of eccentricity along the horizontal meridian in uncorrected eyes and when the same eyes were corrected with soft or rigid contact lenses. Wavefront aberrations and image quality were determined for the full elliptical pupil encountered in off-axis measurements. Results Ocular higher-order aberrations increase away from fovea in the uncorrected eye. Third-order aberrations are larger and increase faster with eccentricity compared to the other higher-order aberrations. Contact lenses increase all higher-order aberrations except 3rd-order Zernike terms. Nevertheless, a net increase in image quality across the horizontal visual field for objects located at the foveal far point is achieved with rigid lenses, whereas soft contact lenses reduce image quality. Conclusions Second order aberrations limit image quality more than higher-order aberrations in the periphery. Although second-order aberrations are reduced by contact lenses, the resulting gain in image quality is partially offset by increased amounts of higher-order aberrations. To fully realize the benefits of correcting higher-order aberrations in the peripheral field requires improved correction of second-order aberrations as well. PMID:21873925

  13. Image science and image-quality research in the Optical Sciences Center

    NASA Astrophysics Data System (ADS)

    Barrett, Harrison H.; Myers, Kyle J.

    2014-09-01

    This paper reviews the history of research into imaging and image quality at the Optical Sciences Center (OSC), with emphasis on the period 1970-1990. The work of various students in the areas of psychophysical studies of human observers of images; mathematical model observers; image simulation and analysis, and the application of these methods to radiology and nuclear medicine is summarized. The rapid progress in computational power, at OSC and elsewhere, which enabled the steady advances in imaging and the emergence of a science of imaging, is also traced. The implications of these advances to ongoing research and the current Image Science curriculum at the College of Optical Sciences are discussed.

  14. APQ-102 imaging radar digital image quality study

    NASA Astrophysics Data System (ADS)

    Griffin, C. R.; Estes, J. M.

    1982-11-01

    A modified APQ-102 sidelooking radar collected synthetic aperture radar (SAR) data which was digitized and recorded on wideband magnetic tape. These tapes were then ground processed into computer compatible tapes (CCT's). The CCT's may then be processed into high resolution radar images by software on the CYBER computer.

  15. APQ-102 imaging radar digital image quality study

    NASA Technical Reports Server (NTRS)

    Griffin, C. R.; Estes, J. M.

    1982-01-01

    A modified APQ-102 sidelooking radar collected synthetic aperture radar (SAR) data which was digitized and recorded on wideband magnetic tape. These tapes were then ground processed into computer compatible tapes (CCT's). The CCT's may then be processed into high resolution radar images by software on the CYBER computer.

  16. Validation of no-reference image quality index for the assessment of digital mammographic images

    NASA Astrophysics Data System (ADS)

    de Oliveira, Helder C. R.; Barufaldi, Bruno; Borges, Lucas R.; Gabarda, Salvador; Bakic, Predrag R.; Maidment, Andrew D. A.; Schiabel, Homero; Vieira, Marcelo A. C.

    2016-03-01

    To ensure optimal clinical performance of digital mammography, it is necessary to obtain images with high spatial resolution and low noise, while keeping radiation exposure as low as possible. These requirements directly affect the interpretation of radiologists. The quality of a digital image should be assessed using objective measurements. In general, these methods measure the similarity between a degraded image and an ideal image without degradation (ground truth), used as a reference; they are called Full-Reference Image Quality Assessment (FR-IQA) methods. However, for digital mammography, an image without degradation is not available in clinical practice; thus, an objective method to assess the quality of mammograms must work without a reference. The purpose of this study is to present a Normalized Anisotropic Quality Index (NAQI), based on the Rényi entropy in the pseudo-Wigner domain, to assess mammography images in terms of spatial resolution and noise without any reference. The method was validated using synthetic images acquired with an anthropomorphic breast software phantom, as well as clinical exposures of anthropomorphic physical breast phantoms and patients' mammograms. The results reported by this no-reference index follow the same behavior as other well-established full-reference metrics, e.g., the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM). Reductions of 50% in the radiation dose in phantom images translated into a decrease of 4 dB in the PSNR, 25% in the SSIM and 33% in the NAQI, evidencing that the proposed metric is sensitive to the noise resulting from dose reduction. The clinical results showed that images reduced to 53% and 30% of the standard radiation dose showed reductions of 15% and 25% in the NAQI, respectively. Thus, this index may be used in clinical practice as an image quality indicator to improve the quality assurance programs in mammography; hence, the proposed method reduces the subjectivity
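The full-reference PSNR baseline against which NAQI is compared can be sketched as below; the 8-bit peak of 255 is an illustrative default, not a detail taken from the study.

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in decibels between two images."""
    ref = np.asarray(reference, dtype=float)
    tst = np.asarray(test, dtype=float)
    mse = np.mean((ref - tst) ** 2)
    return float(10.0 * np.log10(peak ** 2 / mse))

# a uniform error of 25.5 grey levels against a 255 peak gives 20 dB
print(psnr(np.zeros((4, 4)), np.full((4, 4), 25.5)))  # 20.0
```

Because PSNR depends only on the MSE, a 4 dB drop corresponds to roughly a 2.5-fold increase in MSE, which gives a sense of the noise increase the 50% dose reduction produced.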

  17. Influence of acquisition parameters on MV-CBCT image quality.

    PubMed

    Gayou, Olivier

    2012-01-01

    The production of high-quality pretreatment images plays an increasing role in image-guided radiotherapy (IGRT) and adaptive radiation therapy (ART). Megavoltage cone-beam computed tomography (MV-CBCT) is the simplest of the commercially available volumetric imaging systems for localization. It also suffers the most from relatively poor contrast, due to the energy range of the imaging photons. Several avenues can be investigated to improve MV-CBCT image quality while maintaining an acceptable patient exposure: beam generation, detector technology, reconstruction parameters, and acquisition parameters. This article presents a study of the effects of the acquisition scan length and number of projections of a Siemens Artiste MV-CBCT system on image quality, within the range provided by the manufacturer. It also discusses other aspects, unrelated to image quality, that one should consider when selecting an acquisition protocol. Noise and uniformity were measured on the image of a cylindrical water phantom. Spatial resolution was measured using the same phantom half filled with water, providing a sharp water/air interface from which to derive the modulation transfer function (MTF). Contrast-to-noise ratio (CNR) was measured on a pelvis-shaped phantom with four inserts of different electron densities relative to water (1.043, 1.117, 1.513, and 0.459). Uniformity was independent of acquisition protocol. Noise decreased from 1.96% to 1.64% when the total number of projections was increased from 100 to 600 for a total exposure of 13.5 MU. The CNR showed a ±5% dependence on the number of projections and a 10% dependence on the scan length. However, these variations were not statistically significant. The spatial resolution was unaffected by the arc length or the sampling rate. Acquisition parameters have little to no effect on the image quality of the MV-CBCT system within the range of parameters available on the system. Considerations other than image quality, such as memory
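
The insert-based CNR measurement described above can be illustrated with a small synthetic phantom. The geometry, units and noise level below are assumptions for the sketch, not the paper's data:

```python
import numpy as np

def cnr(img, roi_obj, roi_bg):
    """Contrast-to-noise ratio between an insert ROI and a background ROI.
    roi_* are boolean masks; noise is taken as the background standard deviation."""
    obj = img[roi_obj].astype(float)
    bg = img[roi_bg].astype(float)
    return abs(obj.mean() - bg.mean()) / bg.std()

# Hypothetical phantom slice: uniform water background with one denser insert.
rng = np.random.default_rng(1)
img = rng.normal(1000.0, 20.0, (128, 128))            # water, arbitrary CT-number-like units
yy, xx = np.mgrid[:128, :128]
insert = (yy - 64) ** 2 + (xx - 64) ** 2 < 15 ** 2    # circular insert mask
img[insert] += 60.0                                    # higher-density insert
bg = (yy - 64) ** 2 + (xx - 64) ** 2 > 40 ** 2        # background well outside the insert
print(round(cnr(img, insert, bg), 1))
```

With an insert 60 units above a background whose noise is 20 units, the score lands near 3, illustrating how CNR couples contrast and noise into one number.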

  18. Evaluation of image quality in computed radiography based mammography systems

    NASA Astrophysics Data System (ADS)

    Singh, Abhinav; Bhwaria, Vipin; Valentino, Daniel J.

    2011-03-01

    Mammography is the most widely accepted procedure for the early detection of breast cancer, and Computed Radiography (CR) is a cost-effective technology for digital mammography. We demonstrate that CR image quality is viable for digital mammography. The image quality of mammograms acquired using CR technology was evaluated using the Modulation Transfer Function (MTF), Noise Power Spectrum (NPS) and Detective Quantum Efficiency (DQE). The measurements were made using a 28 kVp beam (RQA M-II) with 2 mm of Al as a filter and a target/filter combination of Mo/Mo. The acquired image bit depth was 16 bits and the scanning pixel pitch was 50 microns. A step-wedge phantom (to measure the contrast-to-noise ratio (CNR)) and the CDMAM 3.4 contrast-detail phantom were also used to assess image quality. The CNR values were observed at varying thicknesses of PMMA. The CDMAM 3.4 phantom results were plotted and compared to the EUREF acceptable and achievable values. The effect on image quality was measured using these physics metrics. A lower DQE was observed even with a higher MTF, possibly due to a higher noise component arising from the way the scanner was configured. The CDMAM phantom scores demonstrated contrast-detail performance comparable to the EUREF values. A cost-effective CR machine was optimized for high-resolution and high-contrast imaging.
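
The three physics metrics in this abstract are commonly related by DQE(f) = MTF(f)^2 / (q · NNPS(f)), with NNPS the noise power spectrum normalized by the squared large-area signal and q the incident photon fluence. A numeric sketch of that relation follows; all values are assumed for illustration and are not taken from the measurements above:

```python
import numpy as np

def dqe(mtf, nnps, q):
    """DQE(f) = MTF(f)^2 / (q * NNPS(f)), elementwise over frequency samples."""
    return mtf ** 2 / (q * nnps)

f = np.linspace(0.0, 10.0, 6)        # spatial frequency, cycles/mm
mtf = np.exp(-0.25 * f)              # assumed smooth MTF fall-off
nnps = np.full_like(f, 2.0e-5)       # assumed flat normalized NPS, mm^2
q = 7.5e4                            # assumed incident fluence, photons/mm^2
d = dqe(mtf, nnps, q)

# A noisier scanner configuration (larger NNPS) lowers DQE even at identical MTF,
# consistent with the abstract's observation of a low DQE despite a high MTF.
d_noisier = dqe(mtf, 2.0 * nnps, q)
print(d[0] > d_noisier[0])
```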

  19. Analysis of image quality for laser display scanner test

    NASA Astrophysics Data System (ADS)

    Specht, H.; Kurth, S.; Billep, D.; Gessner, T.

    2009-02-01

    The scanning laser display technology is one of the most promising technologies for highly integrated projection display applications (e.g. in PDAs, mobile phones or head-mounted displays) due to its advantages in image quality, miniaturization level and low-cost potential. As several research teams have found during their investigations of laser scanning projection systems, the image quality of such systems is, apart from the laser source and video signal processing, crucially determined by the scan engine, including the MEMS scanner, driving electronics, scanning regime and synchronization. Even though a number of technical parameters can be measured with high accuracy, the test procedure is challenging because the influence of these parameters on image quality is often insufficiently understood. Thus, in many cases it is not clear how to define limiting values for characteristic parameters. In this paper the relationship between parameters characterizing the scan engine and their influence on image quality is discussed. These include scanner topography, the geometry of the path of light, and trajectory parameters. Understanding this enables a new methodology for testing and characterization of the scan engine, based on the evaluation of one or a series of projected test images. Because the evaluation process can be easily automated by digital image processing, this methodology has the potential to become integrated into the production process of laser displays.

  20. Image quality assessment with manifold and machine learning

    NASA Astrophysics Data System (ADS)

    Charrier, Christophe; Lebrun, Gilles; Lezoray, Olivier

    2009-01-01

    A crucial step in image compression is the evaluation of its performance, and more precisely the available means of measuring the final quality of the compressed image. In this paper, a machine learning expert providing a final class number is designed. The quality measure is based on a learned classification process designed to respect that of human observers. Instead of computing a final score, our method classifies the quality using the quality scale recommended by the ITU. This quality scale contains 5 ranks ordered from 1 (the worst quality) to 5 (the best quality). This was done by constructing a vector containing many visual attributes; the final feature vector contains more than 40 attributes. Unfortunately, no study of the interactions between the visual attributes used has been done. A feature selection algorithm could be of interest, but the selection is highly dependent on the classifier used subsequently. Therefore, we prefer to perform dimensionality reduction instead of feature selection. Manifold learning methods are used to provide a new low-dimensional representation from the initial high-dimensional feature space. The classification process is performed on this new low-dimensional representation of the images. The results obtained are compared to those obtained without applying the dimension reduction process, to judge the efficiency of the method.
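
The pipeline described, reducing a high-dimensional attribute vector and then classifying into the 5 quality ranks, can be sketched as follows. PCA stands in for the manifold-learning step and a nearest-centroid rule for the learned classifier; the data are synthetic, so this only illustrates the structure of the approach:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical feature vectors (40 visual attributes) for images rated on the
# 5-level quality scale; each class has its own attribute profile plus noise.
n_per_class, n_feat = 30, 40
X, y = [], []
for label in range(1, 6):
    centre = rng.normal(label, 0.1, n_feat)
    X.append(centre + rng.normal(0.0, 0.3, (n_per_class, n_feat)))
    y += [label] * n_per_class
X = np.vstack(X)
y = np.array(y)

# Dimensionality reduction: PCA to 2 components via SVD of the centred data.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:2].T

# Nearest-centroid classification in the reduced 2-D space.
centroids = {c: Z[y == c].mean(axis=0) for c in range(1, 6)}
pred = np.array([min(centroids, key=lambda c: np.linalg.norm(z - centroids[c]))
                 for z in Z])
print(round((pred == y).mean(), 2))
```

The point of the reduction step is the same as in the abstract: the classifier operates in a compact space rather than over 40 interacting attributes.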

  1. Faster, higher quality volume visualization for 3D medical imaging

    NASA Astrophysics Data System (ADS)

    Kalvin, Alan D.; Laine, Andrew F.; Song, Ting

    2008-03-01

    The two major volume visualization methods used in biomedical applications are Maximum Intensity Projection (MIP) and Volume Rendering (VR), both of which involve creating sets of 2D projections from 3D images. We have developed a new method for very fast, high-quality volume visualization of 3D biomedical images, based on the fact that the inverse of this process (transforming 2D projections into a 3D image) is essentially equivalent to tomographic image reconstruction. This new method uses the 2D projections acquired by the scanner, thereby obviating the need for the two computationally expensive steps currently required in the complete process of biomedical visualization, that is, (i) reconstructing the 3D image from 2D projection data, and (ii) computing the set of 2D projections from the reconstructed 3D image. As well as improvements in computation speed, this method also results in improvements in visualization quality, and in the case of x-ray CT we can exploit this quality improvement to reduce radiation dosage. In this paper, we demonstrate the benefits of developing biomedical visualization techniques by directly processing the sensor data acquired by body scanners, rather than by processing the image data reconstructed from the sensor data. We show results of using this approach for volume visualization of tomographic modalities such as x-ray CT, as well as for MRI.

  2. An approach for quantitative image quality analysis for CT

    NASA Astrophysics Data System (ADS)

    Rahimi, Amir; Cochran, Joe; Mooney, Doug; Regensburger, Joe

    2016-03-01

    An objective and standardized approach to assessing the image quality of Computed Tomography (CT) systems is required in a wide variety of imaging processes to identify CT systems appropriate for a given application. We present an overview of the framework we have developed to help standardize and objectively assess CT image quality for different models of CT scanners used for security applications. Within this framework, we have developed methods to quantitatively measure metrics that should correlate with feature identification, detection accuracy and precision, and image registration capabilities of CT machines, and to identify strengths and weaknesses of different CT imaging technologies in transportation security. To that end we have designed, developed and constructed phantoms that allow for systematic and repeatable measurement of roughly 88 image quality metrics, representing modulation transfer function, noise equivalent quanta, noise power spectra, slice sensitivity profiles, streak artifacts, CT number uniformity, CT number consistency, object length accuracy, CT number path length consistency, and object registration. Furthermore, we have developed a sophisticated MATLAB-based image analysis toolkit to analyze CT-generated images of the phantoms and report these metrics in a format that is standardized across the considered models of CT scanners, allowing for comparative image quality analysis within a CT model or between different CT models. In addition, we have developed a modified sparse principal component analysis (SPCA) method that, compared to standard principal component analysis (PCA), generates components with sparse loadings; it is used in conjunction with the Hotelling T2 statistical analysis method to compare, qualify, and detect faults in the tested systems.
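
The Hotelling T2 statistic used here for fault detection can be sketched as follows. The metric vectors are synthetic and the fault is injected by hand, so this only shows the mechanics, not the authors' SPCA-based pipeline:

```python
import numpy as np

def hotelling_t2(X):
    """Per-sample Hotelling T^2 scores relative to the sample mean and covariance."""
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    inv = np.linalg.inv(cov)
    d = X - mu
    return np.einsum('ij,jk,ik->i', d, inv, d)

# Hypothetical image-quality metric vectors from repeated scans of a phantom,
# with one faulty scan injected as a large shift in every metric.
rng = np.random.default_rng(3)
X = rng.normal(0.0, 1.0, (100, 5))
X[42] += 8.0
scores = hotelling_t2(X)
print(int(np.argmax(scores)))   # index of the most anomalous scan
```

A scan whose metric vector departs from the multivariate baseline receives a large T2 score, which is how out-of-family systems or drifting scanners are flagged.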

  3. Use of a line-pair resolution phantom for comprehensive quality assurance of electronic portal imaging devices based on fundamental imaging metrics

    SciTech Connect

    Gopal, Arun; Samant, Sanjiv S.

    2009-06-15

    Image guided radiation therapy solutions based on megavoltage computed tomography (MVCT) involve the extension of electronic portal imaging devices (EPIDs) from their traditional role of weekly localization imaging and planar dose mapping to volumetric imaging for 3D setup and dose verification. To sustain the potential advantages of MVCT, EPIDs are required to provide improved levels of portal image quality. Therefore, it is vital that the performance of EPIDs in clinical use is maintained at an optimal level through regular and rigorous quality assurance (QA). Traditionally, portal imaging QA has been carried out by imaging calibrated line-pair and contrast resolution phantoms and obtaining arbitrarily defined QA indices that are usually dependent on imaging conditions and merely indicate relative trends in imaging performance. They are not adequately sensitive to all aspects of image quality unlike fundamental imaging metrics such as the modulation transfer function (MTF), noise power spectrum (NPS), and detective quantum efficiency (DQE) that are widely used to characterize detector performance in radiographic imaging and would be ideal for QA purposes. However, due to the difficulty of performing conventional MTF measurements, they have not been used for routine clinical QA. The authors present a simple and quick QA methodology based on obtaining the MTF, NPS, and DQE of a megavoltage imager by imaging standard open fields and a bar-pattern QA phantom containing 2 mm thick tungsten line-pair bar resolution targets. Our bar-pattern based MTF measurement features a novel zero-frequency normalization scheme that eliminates normalization errors typically associated with traditional bar-pattern measurements at megavoltage x-ray energies. The bar-pattern QA phantom and open-field images are used in conjunction with an automated image analysis algorithm that quickly computes the MTF, NPS, and DQE of an EPID system. 
Our approach combines the fundamental advantages of

  4. Quality assurance of ultrasound imaging instruments by monitoring the monitor.

    PubMed

    Walker, J B; Thorne, G C; Halliwell, M

    1993-11-01

    Ultrasound quality assurance (QA) is a means of assuring the constant performance of an ultrasound instrument. A novel 'ultrasound image analyser' has been developed to allow objective, accurate and repeatable measurement of the image displayed on the ultrasound screen, i.e. as seen by the operator. The analyser uses a television camera/framestore combination to digitize and analyse this image. A QA scheme is described along with the procedures necessary to obtain a repeatable measurement of the image so that comparisons with earlier good images can be made. These include repositioning the camera and resetting the video display characteristics. The advantages of using the analyser over other methods are discussed. It is concluded that the analyser has distinct advantages over subjective image assessment methods and will be a valuable addition to current ultrasound QA programmes. PMID:8272435

  5. Investigation of perceptual attributes for mobile display image quality

    NASA Astrophysics Data System (ADS)

    Gong, Rui; Xu, Haisong; Wang, Qing; Wang, Zhehong; Li, Haifeng

    2013-08-01

    Large-scale psychophysical experiments were carried out on two types of mobile displays to evaluate perceived image quality (IQ). Eight perceptual attributes, i.e., naturalness, colorfulness, brightness, contrast, sharpness, clearness, preference, and overall IQ, were visually assessed via the categorical judgment method for various application types of test images, which were manipulated by different methods. Their correlations are discussed in depth, and further factor analysis revealed two essential components describing the overall IQ: an image-detail component and a color-information component. Clearness and naturalness were identified as the two principal factors for natural scene images, whereas clearness and colorfulness were selected as the key attributes affecting the overall IQ for other application types of images. Accordingly, based on these selected attributes, two kinds of empirical models are built to predict the overall IQ of mobile displays for different application types of images.

  6. The influence of noise on image quality in phase-diverse coherent diffraction imaging

    NASA Astrophysics Data System (ADS)

    Wittler, H. P. A.; van Riessen, G. A.; Jones, M. W. M.

    2016-02-01

    Phase-diverse coherent diffraction imaging provides a route to high sensitivity and resolution with low radiation dose. To take full advantage of this, the characteristics and tolerable limits of measurement noise for high quality images must be understood. In this work we show the artefacts that manifest in images recovered from simulated data with noise of various characteristics in the illumination and diffraction pattern. We explore the limits at which images of acceptable quality can be obtained and suggest qualitative guidelines that would allow for faster data acquisition and minimize radiation dose.

  7. Real-time computer treatment of THz passive device images with the high image quality

    NASA Astrophysics Data System (ADS)

    Trofimov, Vyacheslav A.; Trofimov, Vladislav V.

    2012-06-01

    We demonstrate real-time computer code that significantly improves the quality of images captured by a passive THz imaging system. The code is not designed for one particular THz passive device: it can be applied to any such device, and to active THz imaging systems as well. We applied our code to computer processing of images captured by four passive THz imaging devices manufactured by different companies. It should be stressed that processing images produced by different companies usually requires different spatial filters. The performance of the current version of the code exceeds one image per second for a THz image with more than 5000 pixels and 24-bit number representation. Processing a single THz image produces about 20 images simultaneously, corresponding to various spatial filters. The code allows the number of pixels in processed images to be increased without noticeable reduction of image quality, and its performance can be increased many times by using parallel algorithms for image processing. We develop original spatial filters which allow one to see objects with sizes less than 2 cm. The imagery is produced by passive THz imaging devices which captured images of objects hidden under opaque clothes. For images with high noise we develop an approach which suppresses the noise during computer processing, yielding a good-quality image. To illustrate the efficiency of the developed approach, we demonstrate the detection of liquid explosive, ordinary explosive, a knife, a pistol, a metal plate, a CD, ceramics, chocolate and other objects hidden under opaque clothes. The results demonstrate the high efficiency of our approach for the detection of hidden objects and are a very promising solution for the security problem.

  8. Exploratory survey of image quality on CR digital mammography imaging systems in Mexico.

    PubMed

    Gaona, E; Rivera, T; Arreola, M; Franco, J; Molina, N; Alvarez, B; Azorín, C G; Casian, G

    2014-01-01

    The purpose of this study was to assess the current status of image quality and dose in computed radiographic digital mammography (CRDM) systems. The study included CRDM systems of various models and manufacturers, for which dose and image quality comparisons were performed. Due to the recent rise in the use of digital radiographic systems in Mexico, CRDM systems are rapidly replacing conventional film-screen systems without any regard to quality control or image quality standards. The study was conducted in 65 mammography facilities using CRDM systems in Mexico City and the surrounding States. The systems were tested as used clinically, meaning that dose and beam quality were selected using the automatic beam selection and photo-timed features. All systems surveyed generate laser film hardcopies for the radiologist to read on a scope or a mammographic high-luminance light box. It was found that 51 of the CRDM systems presented a variety of image artefacts and non-uniformities arising from inadequate acquisition and processing, as well as from the laser printer itself. Undisciplined alteration of image-processing settings by the technologist was found to be a serious prevalent problem in 42 facilities. Only four of them had an image QC program periodically monitored by a medical physicist. The Average Glandular Dose (AGD) in the surveyed systems was estimated to have a mean value of 2.4 mGy. New legislation is required to improve image quality in mammography and to make screening mammography more efficient for the early detection of breast cancer. PMID:23938078

  9. Determination of pasture quality using airborne hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Pullanagari, R. R.; Kereszturi, G.; Yule, Ian J.; Irwin, M. E.

    2015-10-01

    Pasture quality is a critical determinant of animal performance (live weight gain, milk and meat production) and animal health. Assessment of pasture quality is therefore required to assist farmers with grazing planning and management, and with benchmarking between seasons and years. Traditionally, pasture quality is determined by field sampling, which is laborious, expensive and time-consuming, and the information is not available in real time. Hyperspectral remote sensing has the potential to accurately quantify the biochemical composition of pasture over wide areas in great spatial detail. In this study an airborne imaging spectrometer (AisaFENIX, Specim) was used, with a spectral range of 380-2500 nm over 448 spectral bands. A case study of a 600 ha hill-country farm in New Zealand is used to illustrate the use of the system. Radiometric and atmospheric corrections, along with automated georectification of the imagery using a Digital Elevation Model (DEM), were applied to the raw images to convert them into geocoded reflectance images. A multivariate statistical method, partial least squares (PLS) regression, was then applied to estimate pasture quality attributes such as crude protein (CP) and metabolisable energy (ME) from canopy reflectance. The results from this study revealed that estimates of CP and ME had an R2 of 0.77 and 0.79, and an RMSECV of 2.97 and 0.81, respectively. By utilizing these regression models, spatial maps were created over the imaged area. These pasture quality maps can be used to adopt precision agriculture practices that improve farm profitability and environmental sustainability.
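
The PLS regression step can be sketched with a minimal NIPALS implementation on synthetic "spectra". The band count, coefficients and noise level are all assumptions for illustration, not the study's data:

```python
import numpy as np

def pls1_fit(X, y, n_comp):
    """PLS1 regression (NIPALS). Returns coefficients b and the training means,
    so that y_hat = (X - x_mean) @ b + y_mean. Minimal sketch, not production code."""
    Xk = X - X.mean(axis=0)
    yk = y - y.mean()
    W, P, Q = [], [], []
    for _ in range(n_comp):
        w = Xk.T @ yk
        w /= np.linalg.norm(w)           # weight vector
        t = Xk @ w                       # scores
        p = Xk.T @ t / (t @ t)           # X loadings
        q = yk @ t / (t @ t)             # y loading
        Xk = Xk - np.outer(t, p)         # deflate
        yk = yk - q * t
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    b = W @ np.linalg.solve(P.T @ W, Q)
    return b, X.mean(axis=0), y.mean()

# Hypothetical canopy reflectance (50 bands) with CP linearly encoded in two bands.
rng = np.random.default_rng(4)
spectra = rng.normal(0.4, 0.05, (80, 50))
cp = 20 + 30 * spectra[:, 10] - 25 * spectra[:, 30] + rng.normal(0.0, 0.1, 80)
b, xm, ym = pls1_fit(spectra, cp, n_comp=3)
pred = (spectra - xm) @ b + ym
r2 = 1 - np.sum((cp - pred) ** 2) / np.sum((cp - cp.mean()) ** 2)
print(r2 > 0.9)
```

As in the study, the latent-variable regression recovers a chemistry attribute from many correlated reflectance bands with only a few components.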

  10. No-reference image quality assessment for horizontal-path imaging scenarios

    NASA Astrophysics Data System (ADS)

    Rios, Carlos; Gladysz, Szymon

    2013-05-01

    There exist several image-enhancement algorithms and tasks associated with imaging through turbulence that depend on defining the quality of an image. Examples include: "lucky imaging", choosing the width of the inverse filter for image reconstruction, or stopping iterative deconvolution. We collected a number of image quality metrics found in the literature. Particularly interesting are the blind, "no-reference" metrics. We discuss ways of evaluating the usefulness of these metrics, even when a fully objective comparison is impossible because of the lack of a reference image. Metrics are tested on simulated and real data. Field data comes from experiments performed by the NATO SET 165 research group over a 7 km distance in Dayton, Ohio.
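
A minimal example of a blind, no-reference metric of the kind surveyed here is gradient energy, often used as a frame-selection ("lucky imaging") score. This generic sketch is not one of the specific metrics collected by the authors:

```python
import numpy as np

def gradient_sharpness(img):
    """Blind sharpness score: mean squared intensity gradient. Higher = sharper."""
    gy, gx = np.gradient(img.astype(float))
    return float(np.mean(gx ** 2 + gy ** 2))

def blur(img, n=3):
    """Crude periodic 4-neighbour averaging, a stand-in for turbulence blur."""
    out = img.astype(float)
    for _ in range(n):
        out = (out + np.roll(out, 1, 0) + np.roll(out, -1, 0)
               + np.roll(out, 1, 1) + np.roll(out, -1, 1)) / 5.0
    return out

rng = np.random.default_rng(5)
sharp = rng.uniform(0, 1, (64, 64))
print(gradient_sharpness(sharp) > gradient_sharpness(blur(sharp)))
```

A score like this needs no reference image, which is exactly what makes evaluating such metrics difficult: there is no ground truth to compare against, only relative rankings.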

  11. Perceived assessment metrics for visible and infrared color fused image quality without reference image

    NASA Astrophysics Data System (ADS)

    Yu, Xuelian; Chen, Qian; Gu, Guohua; Ren, Jianle; Sui, Xiubao

    2015-02-01

    Designing an objective quality assessment for color-fused images is a very demanding and challenging task. We propose four no-reference metrics based on human visual system characteristics for objectively evaluating the quality of false-color fused images. The perceived edge metric (PEM) is defined based on a visual perception model and the color image gradient similarity between the fused image and the source images. The perceptual contrast metric (PCM) is established by associating multi-scale contrast with a contrast sensitivity filter (CSF) varying over the color components. A linear combination of the standard deviation and mean value over the fused image constructs the image colorfulness metric (ICM). The color comfort metric (CCM) is designed from the average saturation and the ratio of pixels with high and low saturation. The qualitative and quantitative experimental results demonstrate that the proposed metrics agree well with subjective perception.
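
The ICM's "linear combination of standard deviation and mean" can be sketched over opponent-color channels. The channel definitions and weights below are assumptions in the style of common colorfulness measures, not the paper's exact formulation:

```python
import numpy as np

def colorfulness(rgb, w_sigma=1.0, w_mu=0.3):
    """Colorfulness as a weighted sum of chroma spread (std) and chroma magnitude
    (mean), computed on red-green and yellow-blue opponent channels."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    rg = r - g                       # red-green opponent channel
    yb = 0.5 * (r + g) - b           # yellow-blue opponent channel
    sigma = np.hypot(rg.std(), yb.std())
    mu = np.hypot(rg.mean(), yb.mean())
    return w_sigma * sigma + w_mu * mu

rng = np.random.default_rng(6)
colorful = rng.uniform(0, 255, (32, 32, 3))
gray = np.repeat(rng.uniform(0, 255, (32, 32, 1)), 3, axis=2)  # achromatic image
print(colorfulness(colorful) > colorfulness(gray))
```

An achromatic image scores zero on both opponent channels, so the metric cleanly separates chromatic from gray content, which is the property a fused-image colorfulness score needs.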

  12. Radiation dose and image quality for paediatric interventional cardiology

    NASA Astrophysics Data System (ADS)

    Vano, E.; Ubeda, C.; Leyton, F.; Miranda, P.

    2008-08-01

    Radiation dose and image quality for paediatric protocols in a biplane x-ray system used for interventional cardiology have been evaluated. Entrance surface air kerma (ESAK) and image quality, using a test object and polymethyl methacrylate (PMMA) phantoms, were measured for typical paediatric patient thicknesses (4-20 cm of PMMA). Images from fluoroscopy (low, medium and high) and cine modes were archived in Digital Imaging and Communications in Medicine (DICOM) format. Signal-to-noise ratio (SNR), figure of merit (FOM), contrast (CO), contrast-to-noise ratio (CNR) and high-contrast spatial resolution (HCSR) were computed from the images. Dose data transferred to the DICOM header were used to test the values of the dosimetric display at the interventional reference point. ESAK for the fluoroscopy modes ranges from 0.15 to 36.60 µGy/frame when moving from 4 to 20 cm of PMMA; for cine, the values range from 2.80 to 161.10 µGy/frame. SNR, FOM, CO, CNR and HCSR are improved for the high fluoroscopy and cine modes and remain roughly constant across the different thicknesses. The cumulative dose at the interventional reference point was 25-45% higher than the skin dose for the vertical C-arm (depending on the phantom thickness). ESAK and numerical image quality parameters allow verification of the proper setting of the x-ray system. Knowing the increase in dose per frame with increasing phantom thickness, together with the image quality parameters, will help cardiologists manage patient dose and select the best imaging acquisition mode during clinical procedures.

  13. Radiometric quality evaluation of ZY-02C satellite panchromatic image

    NASA Astrophysics Data System (ADS)

    Zhao, Fengfan; Sun, Ke; Yang, Lei

    2014-11-01

    As the second Chinese civilian high-spatial-resolution satellite, ZY-02C was successfully launched on December 22, 2011. In this paper, we used two different methods, subjective evaluation and external evaluation, to evaluate the radiometric quality of ZY-02C panchromatic images, and compared it with that of the CBERS-02B and SPOT-5 satellites. The external evaluation provides quantitative image quality parameters. The EIFOV of ZY-02C, one of these parameters, is smaller than that of SPOT-5, demonstrating that the spatial resolution of ZY-02C is better than that of SPOT-5. The subjective results show that the quality of SPOT-5 is slightly preferable to that of ZY-02C and CBERS-02B, and that the quality of ZY-02C is better than that of CBERS-02B for most land-cover types. The subjective and external evaluations show excellent agreement. A comprehensive assessment of image quality can therefore be obtained by combining the parameters introduced in this paper.

  14. Compressed image quality metric based on perceptually weighted distortion.

    PubMed

    Hu, Sudeng; Jin, Lina; Wang, Hanli; Zhang, Yun; Kwong, Sam; Kuo, C-C Jay

    2015-12-01

    Objective quality assessment of compressed images is critical to the image compression systems that are essential for image delivery and storage. Although the mean squared error (MSE) is computationally simple, it may not accurately reflect the perceptual quality of compressed images, which is also affected dramatically by characteristics of the human visual system (HVS) such as the masking effect. In this paper, an image quality metric (IQM) is proposed based on perceptually weighted distortion in terms of the MSE. To capture the characteristics of the HVS, a randomness map is proposed to measure the masking effect, and a preprocessing scheme is proposed to simulate the processing that occurs in the initial part of the HVS. Since the masking effect depends strongly on structural randomness, the prediction error from the neighborhood under a statistical model is used to measure the significance of masking. Meanwhile, imperceptible high-frequency signal content is removed by preprocessing with low-pass filters. The relation between the distortions before and after the masking effect is investigated, and a masking modulation model is proposed to simulate the masking effect after preprocessing. The performance of the proposed IQM is validated on six image databases with various compression distortions. The experimental results show that the proposed algorithm outperforms other benchmark IQMs. PMID:26415170
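
The core idea, down-weighting error where masking is strong, can be sketched as a perceptually weighted MSE. The activity-based weight map below is a stand-in assumption for the paper's randomness-based masking map:

```python
import numpy as np

def weighted_mse(ref, dist, weights):
    """MSE with per-pixel perceptual weights (normalized to sum to 1)."""
    w = weights / weights.sum()
    return float(np.sum(w * (ref - dist) ** 2))

rng = np.random.default_rng(7)
ref = np.zeros((64, 64))
ref[:, 32:] = rng.uniform(0, 255, (64, 32))      # busy right half, flat left half

# Stand-in masking model: high local activity (gradient energy) masks distortion,
# so smooth regions receive larger weights.
gy, gx = np.gradient(ref)
weights = 1.0 / (1.0 + gx ** 2 + gy ** 2)

patch = rng.normal(0.0, 10.0, (16, 16))
dist_smooth = ref.copy(); dist_smooth[24:40, 8:24] += patch    # error in flat area
dist_busy = ref.copy(); dist_busy[24:40, 40:56] += patch       # same error, busy area
print(weighted_mse(ref, dist_smooth, weights) > weighted_mse(ref, dist_busy, weights))
```

The identical distortion scores worse in the flat region than in the textured one, which is the behavior a masking-aware metric should have and plain MSE lacks.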

  15. Perceived Image Quality Improvements from the Application of Image Deconvolution to Retinal Images from an Adaptive Optics Fundus Imager

    NASA Astrophysics Data System (ADS)

    Soliz, P.; Nemeth, S. C.; Erry, G. R. G.; Otten, L. J.; Yang, S. Y.

    Aim: The objective of this project was to apply an image restoration methodology based on wavefront measurements obtained with a Shack-Hartmann sensor, and to evaluate the restored image quality based on medical criteria. Methods: Implementing an adaptive optics (AO) technique, a fundus imager was used to achieve low-order correction of retinal images; high-order correction was provided by deconvolution. A Shack-Hartmann wavefront sensor measures the aberrations, and the wavefront measurement is the basis for activating a deformable mirror. Image restoration to remove the remaining aberrations is achieved by direct deconvolution using the point spread function (PSF), or by blind deconvolution. The PSF is estimated using the measured wavefront aberrations. Direct application of classical deconvolution methods such as inverse filtering, Wiener filtering or iterative blind deconvolution (IBD) to the AO retinal images obtained from the adaptive optical imaging system is not satisfactory because of the very large image size, the difficulty of modeling the system noise, and inaccuracy in the PSF estimation. Our approach combines direct and blind deconvolution to exploit available system information, avoid non-convergence, and avoid time-consuming iterative processes. Results: The deconvolution was applied to human subject data, and the resulting restored images were compared by a trained ophthalmic researcher. Qualitative analysis showed significant improvements. Neovascularization can be visualized with the adaptive optics device that cannot be resolved with the standard fundus camera. The individual nerve fiber bundles are easily resolved, as are melanin structures in the choroid. Conclusion: This project demonstrated that computer-enhanced, adaptive optics images have greater detail of anatomical and pathological structures.
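
The direct-deconvolution step, given a PSF estimated from the wavefront, can be sketched as frequency-domain Wiener filtering. The Gaussian PSF and noise-to-signal ratio below are assumptions for illustration, not the project's measured PSF:

```python
import numpy as np

def wiener_deconvolve(blurred, psf, nsr=0.01):
    """Frequency-domain Wiener deconvolution: F = conj(H) / (|H|^2 + NSR) * G.
    nsr is the assumed noise-to-signal power ratio that regularizes the inverse."""
    H = np.fft.fft2(np.fft.ifftshift(psf), s=blurred.shape)
    G = np.fft.fft2(blurred)
    F = np.conj(H) / (np.abs(H) ** 2 + nsr) * G
    return np.real(np.fft.ifft2(F))

# Hypothetical PSF from wavefront aberrations: a small centred Gaussian kernel.
n = 64
yy, xx = np.mgrid[:n, :n] - n // 2
psf = np.exp(-(xx ** 2 + yy ** 2) / (2 * 1.5 ** 2))
psf /= psf.sum()

rng = np.random.default_rng(8)
truth = rng.uniform(0, 1, (n, n))
blurred = np.real(np.fft.ifft2(np.fft.fft2(truth)
                               * np.fft.fft2(np.fft.ifftshift(psf))))
restored = wiener_deconvolve(blurred, psf, nsr=1e-4)
print(np.mean((restored - truth) ** 2) < np.mean((blurred - truth) ** 2))
```

The NSR term is what distinguishes Wiener filtering from the unstable pure inverse filter mentioned in the abstract: where the PSF transfers almost no signal, the regularizer prevents noise amplification.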

  16. Flattening filter removal for improved image quality of megavoltage fluoroscopy

    SciTech Connect

    Christensen, James D.; Kirichenko, Alexander; Gayou, Olivier

    2013-08-15

    Purpose: Removal of the linear accelerator (linac) flattening filter enables a high rate of dose deposition with reduced treatment time. When used for megavoltage imaging, an unflat beam has reduced primary beam scatter, resulting in sharper images. In fluoroscopic imaging mode, the unflat beam has a higher photon count per image frame, yielding a higher contrast-to-noise ratio. The authors' goal was to quantify the effects of an unflat beam on the image quality of megavoltage portal and fluoroscopic images. Methods: 6 MV projection images were acquired in fluoroscopic and portal modes using an electronic flat-panel imager. The effects of the flattening filter on the relative modulation transfer function (MTF) and contrast-to-noise ratio were quantified using the QC3 phantom. The impact of flattening filter removal on the contrast-to-noise ratio of gold fiducial markers was also studied under various scatter conditions. Results: The unflat beam had improved contrast resolution, with up to a 40% increase in MTF contrast at the highest frequency measured (0.75 line pairs/mm). The contrast-to-noise ratio was increased, as expected from the increased photon flux. The visualization of fiducial markers was markedly better using the unflat beam under all scatter conditions, enabling visualization of thin gold fiducial markers, the thinnest of which was not visible using the flat beam. Conclusions: The removal of the flattening filter from a clinical linac leads to quantifiable improvements in the image quality of megavoltage projection images. These gains enable observers to more easily visualize thin fiducial markers and track their motion on fluoroscopic images.

  17. Body image quality of life in eating disorders

    PubMed Central

    Jáuregui Lobera, Ignacio; Bolaños Ríos, Patricia

    2011-01-01

    Purpose: The objective was to examine how body image affects quality of life in an eating-disorder (ED) clinical sample, a non-ED clinical sample, and a nonclinical sample. We hypothesized that ED patients would show the worst body image quality of life. We also hypothesized that body image quality of life would have a stronger negative association with specific ED-related variables than with other psychological and psychopathological variables, mainly among ED patients. On the basis of previous studies, the influence of gender on the results was explored, too. Patients and methods: The final sample comprised 70 ED patients (mean age 22.65 ± 7.76 years; 59 women and 11 men); 106 were patients with other psychiatric disorders (mean age 28.20 ± 6.52; 67 women and 39 men), and 135 were university students (mean age 21.57 ± 2.58; 81 women and 54 men), with no psychiatric history. After having obtained informed consent, the following questionnaires were administered: Body Image Quality of Life Inventory-Spanish version (BIQLI-SP), Eating Disorders Inventory-2 (EDI-2), Perceived Stress Questionnaire (PSQ), Self-Esteem Scale (SES), and Symptom Checklist-90-Revised (SCL-90-R). Results: The ED patients’ ratings on the BIQLI-SP were the lowest and negatively scored (BIQLI-SP means: +20.18, +5.14, and −6.18, in the student group, the non-ED patient group, and the ED group, respectively). The effect of body image on quality of life was more negative in the ED group in all items of the BIQLI-SP. Body image quality of life was negatively associated with specific ED-related variables, more than with other psychological and psychopathological variables, but not especially among ED patients. Conclusion: Body image quality of life was affected not only by specific pathologies related to body image disturbances, but also by other psychopathological syndromes. Nevertheless, the greatest effect was related to ED, and seemed to be more negative among men. This finding is the

  18. Image quality testing of assembled IR camera modules

    NASA Astrophysics Data System (ADS)

    Winters, Daniel; Erichsen, Patrik

    2013-10-01

    Infrared (IR) camera modules for the LWIR (8–12 µm) that combine IR imaging optics with microbolometer focal plane array (FPA) sensors and readout electronics are increasingly becoming a mass-market product. At the same time, steady improvements in sensor resolution in the higher-priced markets raise the requirements for the imaging performance of objectives and for proper alignment between objective and FPA. This puts pressure on camera manufacturers and system integrators to assess the image quality of finished camera modules in a cost-efficient and automated way for quality control or during end-of-line testing. In this paper we present recent development work done in the field of image quality testing of IR camera modules. This technology provides a wealth of additional information in contrast to more traditional test methods such as minimum resolvable temperature difference (MRTD), which give only a subjective overall test result. Parameters that can be measured are image quality via the modulation transfer function (MTF), for broadband or with various bandpass filters, on- and off-axis, and optical parameters such as effective focal length (EFL) and distortion. If the camera module allows for refocusing the optics, additional parameters like best focus plane, image plane tilt, auto-focus quality, chief ray angle etc. can be characterized. Additionally, the homogeneity and response of the sensor with the optics can be characterized in order to calculate the appropriate tables for non-uniformity correction (NUC). The technology can also be used to control active alignment methods during mechanical assembly of optics to high resolution sensors. Other important points that are discussed are the flexibility of the technology to test IR modules with different form factors and electrical interfaces and, last but not least, its suitability for fully automated measurements in mass production.

  19. Impact of atmospheric aerosols on long range image quality

    NASA Astrophysics Data System (ADS)

    LeMaster, Daniel A.; Eismann, Michael T.

    2012-06-01

    Image quality in high altitude long range imaging systems can be severely limited by atmospheric absorption, scattering, and turbulence. Atmospheric aerosols contribute to this problem by scattering target signal out of the optical path and by scattering in unwanted light from the surroundings. Target signal scattering may also lead to image blurring though, in conventional modeling, this effect is ignored. The validity of this choice is tested in this paper by developing an aerosol modulation transfer function (MTF) model for an inhomogeneous atmosphere and then applying it to real-world scenarios using MODTRAN derived scattering parameters. The resulting calculations show that aerosol blurring can be effectively ignored.

  20. Pyramid wavefront sensor for image quality evaluation of optical system

    NASA Astrophysics Data System (ADS)

    Chen, Zhendong

    2015-08-01

    When the pyramid wavefront sensor is used to evaluate imaging quality, it is placed at the focal plane of the aberrated optical system, e.g., a telescope, and splits the light into four beams. Four images of the pupil are created on the detector, and the detection signals of the pyramid wavefront sensor are calculated from these four intensity patterns, providing information on the derivatives of the aberrated wavefront. Based on the theory of the pyramid wavefront sensor, we are developing simulation software and a wavefront detector that can be used to test the imaging quality of the telescope. In our system, the subpupil image intensity through the pyramid sensor is calculated to obtain the wavefront aberration, where piston, tilt, defocus, spherical, coma, astigmatism and other higher-order aberrations are separately represented by Zernike polynomials. The imaging quality of the optical system is then evaluated by the subsequent wavefront reconstruction. The performance of our system is to be checked by comparison with measurements carried out using the Puntino wavefront instrument (a Shack-Hartmann wavefront sensor). Within this framework, the measurement precision of the pyramid sensor will also be discussed through detailed experiments. In general, this project should be very helpful both for our understanding of the principle of wavefront reconstruction and for its future technical applications. So far, we have produced the pyramid and established the laboratory setup of the image quality detecting system based on this wavefront sensor. Preliminary results have been obtained: we have acquired the intensity images of the four pupils. Additional work is needed to analyze the characteristics of the pyramid wavefront sensor.
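Representing a wavefront by Zernike terms, as the abstract describes, amounts to a linear least-squares fit once the basis is sampled over the pupil. Below is a sketch with a handful of low-order, unnormalised terms on synthetic data; the term set and ordering are illustrative, not necessarily the project's:

```python
import numpy as np

def zernike_basis(x, y):
    """A few low-order Zernike-like polynomials on the unit disk
    (unnormalised, for illustration only)."""
    r2 = x**2 + y**2
    return np.stack([
        np.ones_like(x),   # piston
        x,                 # tilt x
        y,                 # tilt y
        2 * r2 - 1,        # defocus
        x**2 - y**2,       # astigmatism 0/90
        2 * x * y,         # astigmatism 45
    ], axis=-1)

# Sample points inside the unit pupil.
n = 41
xs, ys = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n))
mask = xs**2 + ys**2 <= 1.0
x, y = xs[mask], ys[mask]

A = zernike_basis(x, y)                         # design matrix
true_coeffs = np.array([0.0, 0.3, -0.2, 0.5, 0.1, 0.0])
wavefront = A @ true_coeffs                     # synthetic "measured" wavefront

# Least-squares fit recovers the aberration coefficients.
coeffs, *_ = np.linalg.lstsq(A, wavefront, rcond=None)
```

In a real sensor the wavefront samples come from the measured derivatives, and the design matrix holds the basis derivatives rather than the basis itself.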

  1. Image quality-based adaptive illumination normalisation for face recognition

    NASA Astrophysics Data System (ADS)

    Sellahewa, Harin; Jassim, Sabah A.

    2009-05-01

    Automatic face recognition is a challenging task due to intra-class variations. Changes in lighting conditions during enrolment and identification stages contribute significantly to these intra-class variations. A common approach to addressing the effects of such varying conditions is to pre-process the biometric samples in order to normalise intra-class variations. Histogram equalisation is a widely used illumination normalisation technique in face recognition. However, a recent study has shown that applying histogram equalisation to well-lit face images can lead to a decrease in recognition accuracy. This paper presents a dynamic approach to illumination normalisation, based on face image quality. The quality of a given face image is measured in terms of its luminance distortion by comparing the image against a known reference face image. Histogram equalisation is applied to a probe image only if its luminance distortion is higher than a predefined threshold. We tested the proposed adaptive illumination normalisation method on the widely used Extended Yale Face Database B. Identification results demonstrate that our adaptive normalisation produces better identification accuracy than the conventional approach in which every image is normalised irrespective of the lighting conditions under which it was acquired.
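The adaptive scheme can be sketched as below. As the distortion measure I use the luminance-comparison term of Wang and Bovik's universal quality index, which is one common way to quantify luminance distortion against a reference; the threshold value and the decision direction are illustrative assumptions, not the paper's exact parameters:

```python
import numpy as np

def luminance_distortion(img, ref):
    """Luminance term of the universal quality index (Wang & Bovik):
    close to 1 when mean brightness matches the reference."""
    mx, my = img.mean(), ref.mean()
    return 2 * mx * my / (mx**2 + my**2 + 1e-12)

def equalize(img):
    """Plain histogram equalisation for an 8-bit greyscale image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum() / img.size
    return (cdf[img] * 255).astype(np.uint8)

def adaptive_normalise(img, ref, threshold=0.99):
    """Equalise only when the luminance score indicates poor lighting."""
    if luminance_distortion(img, ref) < threshold:
        return equalize(img)
    return img

rng = np.random.default_rng(2)
ref = rng.integers(0, 256, (64, 64)).astype(np.uint8)   # hypothetical reference face
dark = (ref * 0.3).astype(np.uint8)                     # under-exposed probe
well_lit = ref.copy()                                   # well-lit probe
```

A well-lit probe passes through untouched, while an under-exposed one is equalised, which is the selectivity the paper argues for.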

  2. Criterion to Evaluate the Quality of Infrared Small Target Images

    NASA Astrophysics Data System (ADS)

    Mao, Xia; Diao, Wei-He

    2009-01-01

    In this paper, we propose a new criterion to estimate the quality of infrared small target images. To describe the criterion quantitatively, two indicators are defined. One is the “degree of target being confused”, which represents the ability of an infrared small target image to present fake targets. The other is the “degree of target being shielded”, which reflects the extent to which the image shields the target. Experimental results reveal that this criterion is more robust than the traditional method (signal-to-noise ratio). It is valid not only for infrared small target images whose quality the signal-to-noise ratio can correctly describe, but also for images that the traditional criterion cannot accurately estimate. In addition, the results of this criterion can provide information about the cause of background interference with target detection.

  3. DIANE stationary neutron radiography system image quality and industrial applications

    NASA Astrophysics Data System (ADS)

    Cluzeau, S.; Huet, J.; Le Tourneur, P.

    1994-05-01

    The SODERN neutron radiography laboratory has operated since February 1993 using a sealed tube generator (GENIE 46). An experimental programme of characterization (dosimetry, spectroscopy) has confirmed the expected performance concerning neutron flux intensity, neutron energy range and residual gamma flux. Results are given in a specific report [2]. This paper is devoted to reporting on image performance. ASTM and specific indicators have been used to test the image quality with various converters and films. The corresponding modulation transfer functions are to be determined from image processing. Several industrial applications have demonstrated the capabilities of the system: corrosion detection in aircraft parts, testing of ammunition filling, detection of missing polymer in sandwich steel sheets, detection of moisture in a probe for geophysics, and imaging of residual ceramic cores in turbine blades. Various computerized electronic imaging systems will be tested to improve the industrial capabilities.

  4. Image quality, space-qualified UV interference filters

    NASA Technical Reports Server (NTRS)

    Mooney, Thomas A.

    1992-01-01

    The progress during the contract period is described. The project involved fabrication of image quality, space-qualified bandpass filters in the 200-350 nm spectral region. Ion-assisted deposition (IAD) was applied to produce stable, reasonably durable filter coatings on space compatible UV substrates. Thin film materials and UV transmitting substrates were tested for resistance to simulated space effects.

  5. Perceived interest versus overt visual attention in image quality assessment

    NASA Astrophysics Data System (ADS)

    Engelke, Ulrich; Zhang, Wei; Le Callet, Patrick; Liu, Hantao

    2015-03-01

    We investigate the impact of overt visual attention and perceived interest on the prediction performance of image quality metrics. Towards this end we performed two respective experiments to capture these mechanisms: an eye gaze tracking experiment and a region-of-interest selection experiment. Perceptual relevance maps were created from both experiments and integrated into the design of the image quality metrics. Correlation analysis shows that indeed there is an added value of integrating these perceptual relevance maps. We reveal that the improvement in prediction accuracy is not statistically different between fixation density maps from eye gaze tracking data and region-of-interest maps, thus, indicating the robustness of different perceptual relevance maps for the performance gain of image quality metrics. Interestingly, however, we found that thresholding of region-of-interest maps into binary maps significantly deteriorates prediction performance gain for image quality metrics. We provide a detailed analysis and discussion of the results as well as the conceptual and methodological differences between capturing overt visual attention and perceived interest.

  6. SCID: full reference spatial color image quality metric

    NASA Astrophysics Data System (ADS)

    Ouni, S.; Chambah, M.; Herbin, M.; Zagrouba, E.

    2009-01-01

    The most widely used full-reference image quality assessments are error-based methods, performed with pixel-based difference metrics such as Delta E (ΔE), MSE, PSNR, etc. These metrics compute differences pixel by pixel, so only a local fidelity of the color is defined. However, they do not correlate well with perceived image quality because they omit the properties of the human visual system (HVS), which is rather sensitive to global quality. Thus, they cannot be reliable predictors of perceived visual quality. In this paper, we present a novel full-reference color metric based on characteristics of the human visual system that considers the notion of adjacency. This metric, called SCID for Spatial Color Image Difference, is more perceptually correlated than other color differences such as Delta E. The suggested full-reference metric is generic and independent of image distortion type, and can be used in different applications such as compression, restoration, etc.
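The pixel-based baseline the abstract criticises can be made concrete. Below is CIE76 ΔE, the simplest member of the Delta E family, computed on synthetic Lab data; the spatially blind mean pooling at the end is exactly the kind of "local fidelity" aggregation the paper argues is insufficient:

```python
import numpy as np

def delta_e_map(lab1, lab2):
    """Pixel-wise CIE76 colour difference between two Lab images (H x W x 3):
    the Euclidean distance in Lab space at each pixel."""
    return np.sqrt(((lab1 - lab2) ** 2).sum(axis=-1))

# Synthetic example: a uniform lightness shift of 3 Lab units.
lab_ref = np.zeros((4, 4, 3))
lab_test = lab_ref.copy()
lab_test[..., 0] += 3.0

dmap = delta_e_map(lab_ref, lab_test)   # per-pixel differences
global_score = dmap.mean()              # spatially blind pooling
```

Every pixel scores exactly 3, and the pooled score carries no information about where or how the differences are spatially arranged.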

  7. Simultaneous Analysis and Quality Assurance for Diffusion Tensor Imaging

    PubMed Central

    Lauzon, Carolyn B.; Asman, Andrew J.; Esparza, Michael L.; Burns, Scott S.; Fan, Qiuyun; Gao, Yurui; Anderson, Adam W.; Davis, Nicole; Cutting, Laurie E.; Landman, Bennett A.

    2013-01-01

    Diffusion tensor imaging (DTI) enables non-invasive, cyto-architectural mapping of in vivo tissue microarchitecture through voxel-wise mathematical modeling of multiple magnetic resonance imaging (MRI) acquisitions, each differently sensitized to water diffusion. DTI computations are fundamentally estimation processes and are sensitive to noise and artifacts. Despite widespread adoption in the neuroimaging community, maintaining consistent DTI data quality remains challenging given the propensity for patient motion, artifacts associated with fast imaging techniques, and the possibility of hardware changes/failures. Furthermore, the quantity of data acquired per voxel, the non-linear estimation process, and numerous potential use cases complicate traditional visual data inspection approaches. Currently, quality inspection of DTI data has relied on visual inspection and individual processing in DTI analysis software programs (e.g. DTIPrep, DTI-studio). However, recent advances in applied statistical methods have yielded several different metrics to assess noise level, artifact propensity, quality of tensor fit, variance of estimated measures, and bias in estimated measures. To date, these metrics have been largely studied in isolation. Herein, we select complementary metrics for integration into an automatic DTI analysis and quality assurance pipeline. The pipeline completes in 24 hours, stores statistical outputs, and produces a graphical summary quality analysis (QA) report. We assess the utility of this streamlined approach for empirical quality assessment on 608 DTI datasets from pediatric neuroimaging studies. The efficiency and accuracy of quality analysis using the proposed pipeline is compared with quality analysis based on visual inspection. The unified pipeline is found to save a statistically significant amount of time (over 70%) while improving the consistency of QA between a DTI expert and a pool of research associates. 
Projection of QA metrics to a low

  8. Simultaneous analysis and quality assurance for diffusion tensor imaging.

    PubMed

    Lauzon, Carolyn B; Asman, Andrew J; Esparza, Michael L; Burns, Scott S; Fan, Qiuyun; Gao, Yurui; Anderson, Adam W; Davis, Nicole; Cutting, Laurie E; Landman, Bennett A

    2013-01-01

    Diffusion tensor imaging (DTI) enables non-invasive, cyto-architectural mapping of in vivo tissue microarchitecture through voxel-wise mathematical modeling of multiple magnetic resonance imaging (MRI) acquisitions, each differently sensitized to water diffusion. DTI computations are fundamentally estimation processes and are sensitive to noise and artifacts. Despite widespread adoption in the neuroimaging community, maintaining consistent DTI data quality remains challenging given the propensity for patient motion, artifacts associated with fast imaging techniques, and the possibility of hardware changes/failures. Furthermore, the quantity of data acquired per voxel, the non-linear estimation process, and numerous potential use cases complicate traditional visual data inspection approaches. Currently, quality inspection of DTI data has relied on visual inspection and individual processing in DTI analysis software programs (e.g. DTIPrep, DTI-studio). However, recent advances in applied statistical methods have yielded several different metrics to assess noise level, artifact propensity, quality of tensor fit, variance of estimated measures, and bias in estimated measures. To date, these metrics have been largely studied in isolation. Herein, we select complementary metrics for integration into an automatic DTI analysis and quality assurance pipeline. The pipeline completes in 24 hours, stores statistical outputs, and produces a graphical summary quality analysis (QA) report. We assess the utility of this streamlined approach for empirical quality assessment on 608 DTI datasets from pediatric neuroimaging studies. The efficiency and accuracy of quality analysis using the proposed pipeline is compared with quality analysis based on visual inspection. The unified pipeline is found to save a statistically significant amount of time (over 70%) while improving the consistency of QA between a DTI expert and a pool of research associates. 
Projection of QA metrics to a low

  9. A Novel Image Quality Assessment With Globally and Locally Consilient Visual Quality Perception.

    PubMed

    Bae, Sung-Ho; Kim, Munchurl

    2016-05-01

    Computational models for image quality assessment (IQA) have been developed by exploring effective features that are consistent with the characteristics of a human visual system (HVS) for visual quality perception. In this paper, we first reveal that many existing features used in computational IQA methods can hardly characterize visual quality perception for local image characteristics and various distortion types. To solve this problem, we propose a new IQA method, called the structural contrast-quality index (SC-QI), by adopting a structural contrast index (SCI), which can well characterize local and global visual quality perceptions for various image characteristics with structural-distortion types. In addition to SCI, we devise some other perceptually important features for our SC-QI that can effectively reflect the characteristics of HVS for contrast sensitivity and chrominance component variation. Furthermore, we develop a modified SC-QI, called structural contrast distortion metric (SC-DM), which inherits desirable mathematical properties of valid distance metricability and quasi-convexity. So, it can effectively be used as a distance metric for image quality optimization problems. Extensive experimental results show that both SC-QI and SC-DM can very well characterize the HVS's properties of visual quality perception for local image characteristics and various distortion types, which is a distinctive merit of our methods compared with other IQA methods. As a result, both SC-QI and SC-DM have better performances with a strong consilience of global and local visual quality perception as well as with much lower computation complexity, compared with the state-of-the-art IQA methods. The MATLAB source codes of the proposed SC-QI and SC-DM are publicly available online at https://sites.google.com/site/sunghobaecv/iqa. PMID:27046873

  10. Optoelectronic complex inner product for evaluating quality of image segmentation

    NASA Astrophysics Data System (ADS)

    Power, Gregory J.; Awwal, Abdul Ahad S.

    2000-11-01

    In automatic target recognition and machine vision applications, segmentation of the images is a key step. Poor segmentation reduces recognition performance. For some imaging systems, such as MRI and synthetic aperture radar (SAR), it is difficult even for humans to agree on the location of the edge that defines the segmentation. A real-time dynamic approach to determining the quality of segmentation can enable vision systems to refocus or apply appropriate algorithms to ensure high quality segmentation for recognition. A recent approach to evaluating the quality of image segmentation uses percent-pixels-different (PPD). For some cases, PPD provides a reasonable quality evaluation, but it has a weakness in measuring how well the shape of the segmentation matches the true shape. This paper introduces the complex inner product approach for providing a goodness measure for evaluating segmentation quality based on shape. The complex inner product approach is demonstrated on SAR target chips obtained from the Moving and Stationary Target Acquisition and Recognition (MSTAR) program sponsored by the Defense Advanced Research Projects Agency (DARPA) and the Air Force Research Laboratory (AFRL). The results are compared to the PPD approach. A design for an optoelectronic implementation of the complex inner product for dynamic segmentation evaluation is introduced.
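The core idea of a complex inner product shape measure can be sketched as follows: encode boundary points as complex numbers, centre them, and use the magnitude of the normalised inner product as a shape match score. This is an illustration of the general idea, not the paper's exact formulation:

```python
import numpy as np

def shape_similarity(b1, b2):
    """Normalised complex inner product of two boundaries sampled with the
    same number of corresponding points. Returns 1.0 for identical shapes."""
    z1 = b1[:, 0] + 1j * b1[:, 1]
    z2 = b2[:, 0] + 1j * b2[:, 1]
    z1 = z1 - z1.mean()                 # centring gives translation invariance
    z2 = z2 - z2.mean()
    return np.abs(np.vdot(z1, z2)) / (np.linalg.norm(z1) * np.linalg.norm(z2))

t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
circle = np.stack([np.cos(t), np.sin(t)], axis=1)
shifted = circle + 5.0                                   # same shape, translated
ellipse = np.stack([4 * np.cos(t), np.sin(t)], axis=1)   # different shape
```

A translated copy of the same boundary scores 1.0, whereas a strongly elongated ellipse scores noticeably lower, which is exactly the shape sensitivity that pixel-counting measures like PPD lack.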

  11. Effects of display rendering on HDR image quality assessment

    NASA Astrophysics Data System (ADS)

    Zerman, Emin; Valenzise, Giuseppe; De Simone, Francesca; Banterle, Francesco; Dufaux, Frederic

    2015-09-01

    High dynamic range (HDR) displays use local backlight modulation to produce both high brightness levels and large contrast ratios. Thus, the display rendering algorithm and its parameters may greatly affect HDR visual experience. In this paper, we analyze the impact of display rendering on perceived quality for a specific display (SIM2 HDR47) and for a popular application scenario, i.e., HDR image compression. To this end, we assess whether significant differences exist between subjective quality of compressed images, when these are displayed using either the built-in rendering of the display, or a rendering algorithm developed by ourselves. As a second contribution of this paper, we investigate whether the possibility to estimate the true pixel-wise luminance emitted by the display, offered by our rendering approach, can improve the performance of HDR objective quality metrics that require true pixel-wise luminance as input.

  12. Full-Reference Image Quality Assessment with Linear Combination of Genetically Selected Quality Measures

    PubMed Central

    2016-01-01

    Information carried by an image can be distorted due to different image processing steps introduced by different electronic means of storage and communication. Therefore, development of algorithms which can automatically assess a quality of the image in a way that is consistent with human evaluation is important. In this paper, an approach to image quality assessment (IQA) is proposed in which the quality of a given image is evaluated jointly by several IQA approaches. At first, in order to obtain such joint models, an optimisation problem of IQA measures aggregation is defined, where a weighted sum of their outputs, i.e., objective scores, is used as the aggregation operator. Then, the weight of each measure is considered as a decision variable in a problem of minimisation of root mean square error between obtained objective scores and subjective scores. Subjective scores reflect ground-truth and involve evaluation of images by human observers. The optimisation problem is solved using a genetic algorithm, which also selects suitable measures used in aggregation. Obtained multimeasures are evaluated on four largest widely used image benchmarks and compared against state-of-the-art full-reference IQA approaches. Results of comparison reveal that the proposed approach outperforms other competing measures. PMID:27341493
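The aggregation step described above can be sketched on synthetic data. For brevity, a bare-bones mutate-and-keep-improvements search stands in for the paper's full genetic algorithm (which also performs measure selection), and the "objective" and "subjective" scores are fabricated for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy data: objective scores from 4 hypothetical IQA measures on 50 images,
# with subjective (ground-truth) scores built from a hidden weighting plus noise.
scores = rng.random((50, 4))
true_w = np.array([0.6, 0.1, 0.3, 0.0])
subjective = scores @ true_w + 0.01 * rng.standard_normal(50)

def rmse(w):
    """Root mean square error between aggregated and subjective scores."""
    return np.sqrt(np.mean((scores @ w - subjective) ** 2))

# Evolutionary-style search: mutate the weight vector, keep improvements.
w = rng.random(4)
best = rmse(w)
for _ in range(2000):
    cand = np.clip(w + 0.05 * rng.standard_normal(4), 0, None)
    if rmse(cand) < best:
        w, best = cand, rmse(cand)
```

The search drives the aggregation RMSE well below that of a naive equal-weight combination, mirroring the paper's finding that an optimised multimeasure outperforms its components.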

  13. Full-Reference Image Quality Assessment with Linear Combination of Genetically Selected Quality Measures.

    PubMed

    Oszust, Mariusz

    2016-01-01

    Information carried by an image can be distorted due to different image processing steps introduced by different electronic means of storage and communication. Therefore, development of algorithms which can automatically assess a quality of the image in a way that is consistent with human evaluation is important. In this paper, an approach to image quality assessment (IQA) is proposed in which the quality of a given image is evaluated jointly by several IQA approaches. At first, in order to obtain such joint models, an optimisation problem of IQA measures aggregation is defined, where a weighted sum of their outputs, i.e., objective scores, is used as the aggregation operator. Then, the weight of each measure is considered as a decision variable in a problem of minimisation of root mean square error between obtained objective scores and subjective scores. Subjective scores reflect ground-truth and involve evaluation of images by human observers. The optimisation problem is solved using a genetic algorithm, which also selects suitable measures used in aggregation. Obtained multimeasures are evaluated on four largest widely used image benchmarks and compared against state-of-the-art full-reference IQA approaches. Results of comparison reveal that the proposed approach outperforms other competing measures. PMID:27341493

  14. Structural similarity analysis for brain MR image quality assessment

    NASA Astrophysics Data System (ADS)

    Punga, Mirela Visan; Moldovanu, Simona; Moraru, Luminita

    2014-11-01

    Brain MR images are affected and distorted by various artifacts such as noise, blur, blotching, downsampling or compression, as well as by intensity inhomogeneity. Usually, the performance of a pre-processing operation is quantified using quality metrics such as the mean squared error and its related metrics, including peak signal-to-noise ratio, root mean squared error and signal-to-noise ratio. The main drawback of these metrics is that they fail to take the structural fidelity of the image into account. For this reason, we investigate the structural changes related to luminance and contrast variation (as non-structural distortions) and to the denoising process (as structural distortion) through an alternative metric based on structural changes, in order to obtain the best image quality.
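The structural alternative the abstract alludes to is usually SSIM, which combines luminance, contrast and structure comparisons. Below is a single-window sketch on synthetic data; the reference SSIM implementation slides an 11x11 Gaussian window over the image rather than using one global window:

```python
import numpy as np

def ssim_global(x, y, L=255.0):
    """Single-window (global) SSIM: luminance, contrast and structure terms.
    L is the dynamic range; c1, c2 are the standard stabilising constants."""
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(4)
img = rng.integers(0, 256, (64, 64)).astype(float)
noisy = np.clip(img + 25 * rng.standard_normal(img.shape), 0, 255)
```

Unlike MSE-family metrics, the score is 1 only for a structurally identical image and decreases as the covariance between the images is eroded by distortion.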

  15. Improving Image Quality of Bronchial Arteries with Virtual Monochromatic Spectral CT Images

    PubMed Central

    Ma, Guangming; He, Taiping; Yu, Yong; Duan, Haifeng; Yang, Chuangbo

    2016-01-01

    Objective To evaluate the clinical value of using monochromatic images in spectral CT pulmonary angiography to improve image quality of bronchial arteries. Methods We retrospectively analyzed the chest CT images of 38 patients who underwent contrast-enhanced spectral CT. These images included a set of 140kVp polychromatic images and the default 70keV monochromatic images. Using the standard Gemstone Spectral Imaging (GSI) viewer on an advanced workstation (AW4.6, GE Healthcare), an optimal energy level (in keV) for obtaining the best contrast-to-noise ratio (CNR) for the artery could be automatically obtained. The signal-to-noise ratio (SNR), CNR and subjective image quality score (1–5) for these 3 image sets (140kVp, 70keV and optimal energy level) were obtained and statistically compared. The image quality score consistency between the two observers was also evaluated using the Kappa test. Results The optimal energy levels for obtaining the best CNR were 62.58±2.74 keV. SNR and CNR from the 140kVp polychromatic, 70keV and optimal keV monochromatic images were (16.44±5.85, 13.24±5.52), (20.79±7.45, 16.69±6.27) and (24.9±9.91, 20.53±8.46), respectively. The corresponding subjective image quality scores were 1.97±0.82, 3.24±0.75, and 4.47±0.60. SNR, CNR and subjective scores differed significantly among groups (all p<0.001). The optimal keV monochromatic images were superior to the 70keV monochromatic and 140kVp polychromatic images, and there was high agreement between the two observers on image quality score (kappa>0.80). Conclusions Virtual monochromatic images at approximately 63keV in dual-energy spectral CT pulmonary angiography yielded the best CNR and highest diagnostic confidence for imaging bronchial arteries. PMID:26967737

  16. Image quality specification and maintenance for airborne SAR

    NASA Astrophysics Data System (ADS)

    Clinard, Mark S.

    2004-08-01

    Specification, verification, and maintenance of image quality over the lifecycle of an operational airborne SAR begin with the specification for the system itself. Verification of image quality-oriented specification compliance can be enhanced by including a specification requirement that a vendor provide appropriate imagery at the various phases of the system life cycle. The nature and content of the imagery appropriate for each stage of the process depends on the nature of the test, the economics of collection, and the availability of techniques to extract the desired information from the data. At the earliest lifecycle stages, Concept and Technology Development (CTD) and System Development and Demonstration (SDD), the test set could include simulated imagery to demonstrate the mathematical and engineering concepts being implemented thus allowing demonstration of compliance, in part, through simulation. For Initial Operational Test and Evaluation (IOT&E), imagery collected from precisely instrumented test ranges and targets of opportunity consisting of a priori or a posteriori ground-truthed cultural and natural features are of value to the analysis of product quality compliance. Regular monitoring of image quality is possible using operational imagery and automated metrics; more precise measurements can be performed with imagery of instrumented scenes, when available. A survey of image quality measurement techniques is presented along with a discussion of the challenges of managing an airborne SAR program with the scarce resources of time, money, and ground-truthed data. Recommendations are provided that should allow an improvement in the product quality specification and maintenance process with a minimal increase in resource demands on the customer, the vendor, the operational personnel, and the asset itself.

  17. Imaging quality automated measurement of image intensifier based on orthometric phase-shifting gratings.

    PubMed

    Sun, Song; Cao, Yiping

    2016-06-01

    A method for automatically measuring the imaging quality parameters of an image intensifier based on orthometric phase-shifting gratings (OPSG) is proposed. Two sets of phase-shifting gratings, one with a fringe direction at 45° and the other at 135°, are successively projected onto the input port of the image intensifier, and the corresponding deformed patterns modulated by the measured image intensifier on its output port are captured with a CCD camera. Two phases are retrieved from these two sets of deformed patterns by a phase-measuring algorithm. By building the relationship between these retrieved phases, the referential fringe period can be determined accurately. Meanwhile, the distorted phase distribution introduced by the image intensifier can also be efficiently separated wherein the subtle imaging quality information can be further decomposed. Subsequently, the magnification of the image intensifier is successfully measured by fringe period self-calibration. The experimental results have shown the feasibility of the proposed method, which can automatically measure the multiple imaging quality parameters of an image intensifier without human intervention. PMID:27411191
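The phase-measuring step the abstract relies on can be illustrated with the classic four-step phase-shifting algorithm on synthetic fringes. This is a generic sketch; the paper's exact step count and the 45°/135° orthometric grating geometry are not reproduced here:

```python
import numpy as np

def four_step_phase(i1, i2, i3, i4):
    """Wrapped phase from four fringe patterns shifted by pi/2:
    I_k = A + B*cos(phi + k*pi/2), k = 0..3, gives
    i4 - i2 = 2B*sin(phi) and i1 - i3 = 2B*cos(phi)."""
    return np.arctan2(i4 - i2, i1 - i3)

# Synthetic fringes with a known phase ramp.
x = np.linspace(0, 4 * np.pi, 256)
phi = 0.5 * x                                   # "true" phase
patterns = [100 + 50 * np.cos(phi + k * np.pi / 2) for k in range(4)]
wrapped = four_step_phase(*patterns)
```

The recovered phase matches the true phase up to the usual 2π wrapping; in the paper's setup, comparing such retrieved phases between the two grating orientations is what separates the intensifier's distortion from the reference fringe period.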

  18. Imaging through turbid media via sparse representation: imaging quality comparison of three projection matrices

    NASA Astrophysics Data System (ADS)

    Shao, Xiaopeng; Li, Huijuan; Wu, Tengfei; Dai, Weijia; Bi, Xiangli

    2015-05-01

The incident light is scattered away by inhomogeneities in the refractive index of many materials, which greatly reduces the imaging depth and degrades the imaging quality. Many exciting methods have been presented in recent years for solving this problem and realizing imaging through a highly scattering medium, such as wavefront modulation and reconstruction techniques. An imaging method based on compressed sensing (CS) theory can decrease the computational complexity because it does not require the whole speckle pattern to realize reconstruction. One key premise of this method is that the object is sparse or can be sparsely represented. However, choosing a proper projection matrix is very important to the imaging quality. In this paper, we show that the transmission matrix (TM) of a scattering medium obeys a circular Gaussian distribution, which makes it possible to use a scattering medium as the measurement matrix in CS theory. To verify the performance of this method, a whole optical system is simulated. Various projection matrices are introduced to make the object sparse, including the fast Fourier transform (FFT) basis, the discrete cosine transform (DCT) basis, and the discrete wavelet transform (DWT) basis, and the imaging performance of each is compared comprehensively. Simulation results show that for most targets, applying the discrete wavelet transform basis will obtain an image of good quality. This work can be applied to biomedical imaging and used to develop real-time imaging through highly scattering media.

  19. Comprehensive quality assurance phantom for cardiovascular imaging systems

    NASA Astrophysics Data System (ADS)

    Lin, Pei-Jan P.

    1998-07-01

With the advent of high-heat-loading-capacity x-ray tubes, high-frequency inverter-type generators, and the use of spectral shaping filters, the automatic brightness/exposure control (ABC) circuit logic employed in the new generation of angiographic imaging equipment has been significantly reprogrammed. These new angiographic imaging systems are designed to take advantage of the power train capabilities to yield higher-contrast images while maintaining, or lowering, patient exposure. Since the emphasis of the imaging system design has been significantly altered, the system performance parameters of interest and the phantoms employed for quality assurance must also change in order to properly evaluate the imaging capability of cardiovascular imaging systems. A quality assurance (QA) phantom has been under development in this institution and was submitted to various interested organizations, such as the American Association of Physicists in Medicine (AAPM), the Society for Cardiac Angiography & Interventions (SCA&I), and the National Electrical Manufacturers Association (NEMA), for their review and input. At the same time, in an effort to establish a unified standard phantom design for cardiac catheterization laboratories (CCL), SCA&I and NEMA formed a joint work group in early 1997 to develop a suitable phantom. The initial QA phantom design has since been accepted to serve as the base phantom by the SCA&I-NEMA Joint Work Group (JWG), from which a comprehensive QA phantom is being developed.

  20. Investigation of grid performance using simple image quality tests

    PubMed Central

    Bor, Dogan; Birgul, Ozlem; Onal, Umran; Olgar, Turan

    2016-01-01

Antiscatter grids improve X-ray image contrast at the cost of increased patient radiation dose. The choice of an appropriate grid, or its removal, requires a good knowledge of grid characteristics, especially for pediatric digital imaging. The aim of this work is to understand the relation between grid performance parameters and some numerical image quality metrics for digital radiological examinations. The grid parameters, such as bucky factor (BF), selectivity (Σ), contrast improvement factor (CIF), and signal-to-noise improvement factor (SIF), were determined following measurements of primary, scatter, and total radiation with a digital fluoroscopic system for polymethyl methacrylate block thicknesses of 5, 10, 15, 20, and 25 cm at tube voltages of 70, 90, and 120 kVp. Image contrast for low- and high-contrast objects and high-contrast spatial resolution were measured with simple phantoms using the same scatter thicknesses and tube voltages. BF and SIF values were also calculated from the images obtained with and without grids. The correlation coefficients between BF values obtained using the two approaches (grid parameters and image quality metrics) were in good agreement. The proposed approach provides a quick and practical way of estimating grid performance for different digital fluoroscopic examinations. PMID:27051166

  1. A study of image quality for radar image processing. [synthetic aperture radar imagery

    NASA Technical Reports Server (NTRS)

    King, R. W.; Kaupp, V. H.; Waite, W. P.; Macdonald, H. C.

    1982-01-01

Methods developed for image quality metrics are reviewed, with a focus on basic interpretation or recognition elements including tone or color; shape; pattern; size; shadow; texture; site; association or context; and resolution. Seven metrics are believed to show promise as ways of characterizing the quality of an image: (1) the dynamic range of intensities in the displayed image; (2) the system signal-to-noise ratio; (3) the system spatial bandwidth or bandpass; (4) the system resolution or acutance; (5) the normalized mean square error as a measure of geometric fidelity; (6) the perceptual mean square error; and (7) the radar threshold quality factor. Selected levels of degradation are applied to simulated synthetic aperture radar images to test the validity of these metrics.
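Two of the listed metrics, dynamic range and normalized mean square error, are simple to compute directly from pixel data. A minimal numpy sketch (function names are illustrative, not from the paper):

```python
import numpy as np

def dynamic_range_db(img):
    """Ratio of maximum to minimum nonzero intensity, in decibels."""
    vals = img[img > 0]
    return 10.0 * np.log10(vals.max() / vals.min())

def nmse(reference, degraded):
    """Normalized mean square error between a reference image and a
    degraded version, used here as a measure of geometric fidelity."""
    ref = reference.astype(float)
    deg = degraded.astype(float)
    return np.mean((ref - deg) ** 2) / np.mean(ref ** 2)

rng = np.random.default_rng(0)
ref = rng.uniform(1.0, 255.0, (64, 64))         # stand-in "displayed image"
noisy = ref + rng.normal(0.0, 5.0, ref.shape)   # degraded version
print(dynamic_range_db(ref), nmse(ref, noisy))
```

The other listed metrics (spatial bandwidth, acutance, perceptual MSE, radar threshold quality factor) require system-level or perceptual models and are not reducible to a few lines.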

  2. [An improved medical image fusion algorithm and quality evaluation].

    PubMed

    Chen, Meiling; Tao, Ling; Qian, Zhiyu

    2009-08-01

Medical image fusion is of great value for application in medical image analysis and diagnosis. In this paper, the conventional method of wavelet fusion is improved, and a new algorithm for medical image fusion is presented in which the high-frequency and low-frequency coefficients are treated separately. When high-frequency coefficients are chosen, the regional edge intensities of each sub-image are calculated to realize adaptive fusion. The choice of low-frequency coefficients is based on the edges of the images, so that the fused image preserves all useful information and appears more distinct. We apply the conventional and the improved fusion algorithms based on the wavelet transform to fuse two images of the human body and also evaluate the fusion results with a quality evaluation method. Experimental results show that this algorithm can effectively retain detailed information from the original images and enhance their edge and texture features. The new algorithm is better than the conventional fusion algorithm based on the wavelet transform. PMID:19813594
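The general wavelet-fusion scheme described here — decompose both images, combine low-frequency bands by averaging, and select high-frequency coefficients by edge strength — can be sketched with a one-level Haar transform. This is a simplification: the paper uses a conventional wavelet and a regional edge-intensity rule, whereas this sketch uses per-coefficient magnitude as the edge-strength proxy.

```python
import numpy as np

def haar_dwt2(x):
    """One-level 2-D Haar transform (image sides must be even)."""
    lo = (x[:, 0::2] + x[:, 1::2]) / 2.0          # row-wise averages
    hi = (x[:, 0::2] - x[:, 1::2]) / 2.0          # row-wise differences
    ll = (lo[0::2, :] + lo[1::2, :]) / 2.0
    lh = (lo[0::2, :] - lo[1::2, :]) / 2.0
    hl = (hi[0::2, :] + hi[1::2, :]) / 2.0
    hh = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return ll, (lh, hl, hh)

def haar_idwt2(ll, details):
    """Exact inverse of haar_dwt2."""
    lh, hl, hh = details
    lo = np.empty((ll.shape[0] * 2, ll.shape[1]))
    hi = np.empty_like(lo)
    lo[0::2, :], lo[1::2, :] = ll + lh, ll - lh
    hi[0::2, :], hi[1::2, :] = hl + hh, hl - hh
    x = np.empty((lo.shape[0], lo.shape[1] * 2))
    x[:, 0::2], x[:, 1::2] = lo + hi, lo - hi
    return x

def fuse(img_a, img_b):
    """Average the low-frequency band; keep the larger-magnitude
    high-frequency coefficient at each position (edge-strength proxy)."""
    ca, da = haar_dwt2(img_a.astype(float))
    cb, db = haar_dwt2(img_b.astype(float))
    fused_ll = 0.5 * (ca + cb)
    fused_d = tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                    for x, y in zip(da, db))
    return haar_idwt2(fused_ll, fused_d)
```

Fusing an image with itself reconstructs the original exactly, since the Haar transform here is perfectly invertible.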

  3. Evaluation of image quality of a new CCD-based system for chest imaging

    NASA Astrophysics Data System (ADS)

    Sund, Patrik; Kheddache, Susanne; Mansson, Lars G.; Bath, Magnus; Tylen, Ulf

    2000-04-01

The Imix radiography system (Oy Imix Ab, Finland) consists of an intensifying screen, optics, and a CCD camera. An upgrade of this system (Imix 2000) with a red-emitting screen and new optics has recently been released. The image quality of Imix (original version), Imix 2000, and two storage-phosphor systems, Fuji FCR 9501 and Agfa ADC70, was evaluated in physical terms (DQE) and with visual grading of the visibility of anatomical structures in clinical images (141 kV). PA chest images of 50 healthy volunteers were evaluated by experienced radiologists. All images were evaluated on Siemens Simomed monitors, using the European Quality Criteria. The maximum DQE values for Imix, Imix 2000, Agfa, and Fuji were 11%, 14%, 17%, and 19%, respectively (141 kV, 5 μGy). Using the visual grading, the observers rated the systems in the following descending order: Fuji, Imix 2000, Agfa, and Imix. Thus, the upgrade to Imix 2000 resulted in higher DQE values and a significant improvement in clinical image quality. The visual grading agrees reasonably well with the DQE results; however, Imix 2000 received a better score than could be expected from the DQE measurements. Keywords: CCD Technique, Chest Imaging, Digital Radiography, DQE, Image Quality, Visual Grading Analysis

  4. Effects of task and image properties on visual-attention deployment in image-quality assessment

    NASA Astrophysics Data System (ADS)

    Alers, Hani; Redi, Judith; Liu, Hantao; Heynderickx, Ingrid

    2015-03-01

It is important to understand how humans view images and how their behavior is affected by changes in the properties of the viewed images and the task they are given, particularly the task of scoring image quality (IQ). This is a complex behavior that holds great importance for the field of image-quality research. This work builds upon four years of research spanning three databases studying image-viewing behavior. Using eye-tracking equipment, it was possible to collect information on human viewing behavior for different kinds of stimuli and under different experimental settings. This work performs a cross-analysis of the results from all these databases using state-of-the-art similarity measures. The results strongly show that asking viewers to score the IQ significantly changes their viewing behavior. Muting the color saturation also seems to affect the saliency of the images. However, a change in IQ was not consistently found to modify visual-attention deployment, neither under free looking nor during scoring. These results are helpful in gaining a better understanding of image-viewing behavior under different conditions. They also have important implications for work that collects subjective image-quality scores from human observers.

  5. No-reference image quality assessment in the spatial domain.

    PubMed

    Mittal, Anish; Moorthy, Anush Krishna; Bovik, Alan Conrad

    2012-12-01

    We propose a natural scene statistic-based distortion-generic blind/no-reference (NR) image quality assessment (IQA) model that operates in the spatial domain. The new model, dubbed blind/referenceless image spatial quality evaluator (BRISQUE) does not compute distortion-specific features, such as ringing, blur, or blocking, but instead uses scene statistics of locally normalized luminance coefficients to quantify possible losses of "naturalness" in the image due to the presence of distortions, thereby leading to a holistic measure of quality. The underlying features used derive from the empirical distribution of locally normalized luminances and products of locally normalized luminances under a spatial natural scene statistic model. No transformation to another coordinate frame (DCT, wavelet, etc.) is required, distinguishing it from prior NR IQA approaches. Despite its simplicity, we are able to show that BRISQUE is statistically better than the full-reference peak signal-to-noise ratio and the structural similarity index, and is highly competitive with respect to all present-day distortion-generic NR IQA algorithms. BRISQUE has very low computational complexity, making it well suited for real time applications. BRISQUE features may be used for distortion-identification as well. To illustrate a new practical application of BRISQUE, we describe how a nonblind image denoising algorithm can be augmented with BRISQUE in order to perform blind image denoising. Results show that BRISQUE augmentation leads to performance improvements over state-of-the-art methods. A software release of BRISQUE is available online: http://live.ece.utexas.edu/research/quality/BRISQUE_release.zip for public use and evaluation. PMID:22910118
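The "locally normalized luminance coefficients" at the heart of BRISQUE are the mean-subtracted contrast-normalized (MSCN) coefficients. A minimal sketch of that normalization step, using a Gaussian weighting window as in the paper (the subsequent feature fitting and regression stages are omitted; the window parameters follow the common 7×7, σ=7/6 convention):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mscn(image, sigma=7 / 6, c=1.0):
    """Mean-subtracted contrast-normalized coefficients:
    (I - mu) / (sigma_local + C), with local mean and standard
    deviation computed under a Gaussian weighting window."""
    img = image.astype(float)
    mu = gaussian_filter(img, sigma)
    var = gaussian_filter(img * img, sigma) - mu * mu
    sigma_map = np.sqrt(np.maximum(var, 0.0))  # clamp tiny negatives
    return (img - mu) / (sigma_map + c)

rng = np.random.default_rng(1)
img = rng.uniform(0.0, 255.0, (32, 32))
coeffs = mscn(img)
```

For natural images these coefficients follow a characteristic generalized-Gaussian distribution; distortions change that distribution, which is what BRISQUE's features quantify.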

  6. TL dosimetry for quality control of CR mammography imaging systems

    NASA Astrophysics Data System (ADS)

    Gaona, E.; Nieto, J. A.; Góngora, J. A. I. D.; Arreola, M.; Enríquez, J. G. F.

The aim of this work is to estimate the average glandular dose with thermoluminescent (TL) dosimetry and to compare it with quality imaging in computed radiography (CR) mammography. For dose measurement, the Food and Drug Administration (FDA) and the American College of Radiology (ACR) use a phantom, so that dose and image quality are assessed with the same test object. Mammography is a radiological imaging technique used to visualize early biological manifestations of breast cancer. Digital systems have two types of image-capturing devices: full-field digital mammography (FFDM) and CR mammography. In Mexico, there are several CR mammography systems in clinical use, but only one system has been approved for use by the FDA. CR mammography uses a photostimulable phosphor detector (PSP) system. Most CR plates are made of 85% BaFBr and 15% BaFI doped with europium (Eu), commonly called barium fluorohalide. We carried out an exploratory survey of six CR mammography units from three different manufacturers and six dedicated X-ray mammography units with fully automatic exposure. The results show that three CR mammography units (50%) have a dose greater than 3.0 mGy without demonstrating improved image quality. The differences between average doses from the TLD system and an ionization-chamber dosimeter are less than 10%. The TLD system is a good option for average glandular dose measurement for X-rays with the HVL (0.35-0.38 mmAl) and kVp (24-26) used in quality control procedures with the ACR Mammography Accreditation Phantom.

  7. 21 CFR 1404.900 - Adequate evidence.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 9 2010-04-01 2010-04-01 false Adequate evidence. 1404.900 Section 1404.900 Food and Drugs OFFICE OF NATIONAL DRUG CONTROL POLICY GOVERNMENTWIDE DEBARMENT AND SUSPENSION (NONPROCUREMENT) Definitions § 1404.900 Adequate evidence. Adequate evidence means information sufficient to support the reasonable belief that a particular...

  8. 29 CFR 98.900 - Adequate evidence.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 1 2010-07-01 2010-07-01 true Adequate evidence. 98.900 Section 98.900 Labor Office of the Secretary of Labor GOVERNMENTWIDE DEBARMENT AND SUSPENSION (NONPROCUREMENT) Definitions § 98.900 Adequate evidence. Adequate evidence means information sufficient to support the reasonable belief that a...

  9. Image Quality Analysis of Various Gastrointestinal Endoscopes: Why Image Quality Is a Prerequisite for Proper Diagnostic and Therapeutic Endoscopy

    PubMed Central

    Ko, Weon Jin; An, Pyeong; Ko, Kwang Hyun; Hahm, Ki Baik; Hong, Sung Pyo

    2015-01-01

Arising from human curiosity, in terms of the desire to look within the human body, endoscopy has undergone significant advances in modern medicine. Direct visualization of the gastrointestinal (GI) tract by traditional endoscopy was first introduced over 50 years ago, after which fairly rapid advancement from rigid esophagogastric scopes to flexible scopes and high-definition videoscopes has occurred. In an effort toward early detection of precancerous lesions in the GI tract, several high-technology imaging scopes have been developed, including narrow band imaging, autofluorescence imaging, magnified endoscopy, and confocal microendoscopy. However, these modern developments have resulted in fundamental imaging technology being skewed toward red-green-blue imaging, which has obscured the advantages of other endoscope techniques. In this review article, we describe the importance of image quality analysis, using a survey to consider the diversity of endoscope system selection, in order to better achieve diagnostic and therapeutic goals. The ultimate aims can be achieved through the adoption of modern endoscopy systems that obtain high image quality. PMID:26473119

  10. Optimizing 3D image quality and performance for stereoscopic gaming

    NASA Astrophysics Data System (ADS)

    Flack, Julien; Sanderson, Hugh; Pegg, Steven; Kwok, Simon; Paterson, Daniel

    2009-02-01

The successful introduction of stereoscopic TV systems, such as Samsung's 3D Ready Plasma, requires high-quality 3D content to be commercially available to the consumer. Console and PC games provide the most readily accessible source of high-quality 3D content. This paper describes innovative developments in a generic, PC-based game driver architecture that addresses the two key issues affecting 3D gaming: quality and speed. At the heart of the quality issue are the same considerations that studios face producing stereoscopic renders from CG movies: how best to perform the mapping from a geometric CG environment into the stereoscopic display volume. The major difference is that for game drivers this mapping cannot be choreographed by hand but must be calculated automatically in real time without significant impact on performance. Performance is a critical issue when dealing with gaming. Stereoscopic gaming has traditionally meant rendering the scene twice, with the associated performance overhead. An alternative approach is to render the scene from one virtual camera position and use information from the z-buffer to generate a stereo pair using Depth-Image-Based Rendering (DIBR). We analyze this trade-off in more detail and provide some results relating to both 3D image quality and render performance.
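The z-buffer alternative can be illustrated with a naive DIBR warp on a grayscale image. This is a toy sketch only: real drivers map depth through the camera geometry rather than a linear disparity scale, and use far more careful hole filling than copying the nearest left-hand pixel.

```python
import numpy as np

def dibr_shift(image, depth, max_disparity=8):
    """Render one eye's view by shifting pixels horizontally in
    proportion to normalized depth in [0, 1] (nearer pixels shift more).
    Disocclusion holes are filled from the nearest rendered pixel to
    the left. Expects a 2-D integer image and a same-shape depth map."""
    h, w = image.shape
    disp = np.rint(max_disparity * depth).astype(int)
    out = np.full((h, w), -1, dtype=image.dtype)   # -1 marks holes
    for y in range(h):
        for x in range(w):
            nx = x + disp[y, x]
            if 0 <= nx < w:
                out[y, nx] = image[y, x]           # later writes win (simplified)
        if out[y, 0] == -1:
            out[y, 0] = image[y, 0]
        for x in range(1, w):                      # simple hole filling
            if out[y, x] == -1:
                out[y, x] = out[y, x - 1]
    return out
```

With a flat depth map the warp is the identity; with uniform depth 1 and `max_disparity=1` every pixel moves one column right, which is an easy way to sanity-check the geometry.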

  11. Study on classification of pork quality using hyperspectral imaging technique

    NASA Astrophysics Data System (ADS)

    Zeng, Shan; Bai, Jun; Wang, Haibin

    2015-12-01

Problems related to the discrimination of chilled, thawed, and spoiled meat by the hyperspectral imaging technique, such as the selection of feature wavelengths, are investigated. First, based on 400-1000 nm hyperspectral image data of pork test samples, 30 important wavelengths were selected from 753 wavelengths by a K-medoids clustering algorithm based on manifold distance, and 8 feature wavelengths (454.4, 477.5, 529.3, 546.8, 568.4, 580.3, 589.9, and 781.2 nm) were then chosen from these based on the discrimination value. Then, 8 texture features of each image at the 8 feature wavelengths were extracted by the two-dimensional Gabor wavelet transform as pork quality features. Finally, we build a pork quality classification model using the fuzzy C-means clustering algorithm. Through the feature-wavelength extraction experiment, we found that although the hyperspectral images of adjacent bands have a strong linear correlation, they show a significant non-linear manifold relationship across the entire band range. The K-medoids clustering algorithm based on manifold distance used in this paper for selecting the characteristic wavelengths is therefore more reasonable than traditional principal component analysis (PCA). From the classification results, we conclude that hyperspectral imaging technology can accurately distinguish among chilled, thawed, and spoiled meat.
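A plain k-medoids loop over a precomputed distance matrix shows the band-selection idea: cluster the bands and keep one representative (the medoid) per cluster. This is a generic sketch; the paper's manifold (geodesic) distance and discrimination-value ranking are replaced here by an arbitrary distance matrix supplied by the caller.

```python
import numpy as np

def k_medoids(dist, k, n_iter=100, seed=0):
    """k-medoids clustering on a precomputed symmetric distance matrix.
    Returns (medoid indices, cluster label per item)."""
    rng = np.random.default_rng(seed)
    n = dist.shape[0]
    medoids = rng.choice(n, size=k, replace=False)
    labels = np.argmin(dist[:, medoids], axis=1)
    for _ in range(n_iter):
        new_medoids = medoids.copy()
        for j in range(k):
            members = np.flatnonzero(labels == j)
            if members.size:
                # pick the member minimizing total within-cluster distance
                within = dist[np.ix_(members, members)].sum(axis=1)
                new_medoids[j] = members[np.argmin(within)]
        new_labels = np.argmin(dist[:, new_medoids], axis=1)
        if np.array_equal(new_medoids, medoids):
            break
        medoids, labels = new_medoids, new_labels
    return medoids, labels
```

Unlike k-means, the representatives are always actual items (here, actual wavelength bands), which is exactly what is needed when the goal is to select physical wavelengths rather than synthetic averages.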

  12. Automated quality assurance for image-guided radiation therapy.

    PubMed

    Schreibmann, Eduard; Elder, Eric; Fox, Tim

    2009-01-01

    The use of image-guided patient positioning requires fast and reliable Quality Assurance (QA) methods to ensure the megavoltage (MV) treatment beam coincides with the integrated kilovoltage (kV) or volumetric cone-beam CT (CBCT) imaging and guidance systems. Current QA protocol is based on visually observing deviations of certain features in acquired kV in-room treatment images such as markers, distances, or HU values from phantom specifications. This is a time-consuming and subjective task because these features are identified by human operators. The method implemented in this study automated an IGRT QA protocol by using specific image processing algorithms that rigorously detected phantom features and performed all measurements involved in a classical QA protocol. The algorithm was tested on four different IGRT QA phantoms. Image analysis algorithms were able to detect QA features with the same accuracy as the manual approach but significantly faster. All described tests were performed in a single procedure, with acquisition of the images taking approximately 5 minutes, and the automated software analysis taking less than 1 minute. The study showed that the automated image analysis based procedure may be used as a daily QA procedure because it is completely automated and uses a single phantom setup. PMID:19223842

  13. An Approach for Balancing Diagnostic Image Quality with Cancer Risk: Application to Pediatric Diagnostic Imaging of 99mTc-Dimercaptosuccinic Acid

    PubMed Central

    Sgouros, George; Frey, Eric C.; Bolch, Wesley E.; Wayson, Michael B.; Abadia, Andres F.; Treves, S. Ted

    2012-01-01

A recent survey of pediatric hospitals showed a large variability in the activity administered for diagnostic nuclear medicine imaging of children. Imaging guidelines, especially for pediatric patients, must balance the risks associated with radiation exposure with the need to obtain the high-quality images necessary to derive the benefits of an accurate clinical diagnosis. Methods: Pharmacokinetic modeling and a pediatric series of nonuniform rational B-spline-based phantoms have been used to simulate 99mTc-dimercaptosuccinic acid SPECT images. Images were generated for several different administered activities and for several lesions with different target-to-background activity concentration ratios; the phantoms were also used to calculate organ S values for 99mTc. Channelized Hotelling observer methodology was used in a receiver-operating-characteristic analysis of the diagnostic quality of images with different modeled administered activities (i.e., count densities) for anthropomorphic reference phantoms representing two 10-y-old girls with equal weights but different body morphometry. S value-based dosimetry was used to calculate the mean organ-absorbed doses to the 2 pediatric patients. Using BEIR VII age- and sex-specific risk factors, we converted absorbed doses to excess risk of cancer incidence and used them to directly assess the risk of the procedure. Results: Combined, these data provided information about the tradeoff between cancer risk and diagnostic image quality for 2 phantoms having the same weight but different body morphometry. The tradeoff was different for the 2 phantoms, illustrating that weight alone may not be sufficient for optimally scaling administered activity in pediatric patients. Conclusion: The study illustrates implementation of a rigorous approach for balancing the benefits of adequate image quality against the radiation risks and also demonstrates that weight-based adjustment to the administered activity is suboptimal

  14. DES exposure checker: Dark Energy Survey image quality control crowdsourcer

    NASA Astrophysics Data System (ADS)

    Melchior, Peter; Sheldon, Erin; Drlica-Wagner, Alex; Rykoff, Eli S.

    2015-11-01

    DES exposure checker renders science-grade images directly to a web browser and allows users to mark problematic features from a set of predefined classes, thus allowing image quality control for the Dark Energy Survey to be crowdsourced through its web application. Users can also generate custom labels to help identify previously unknown problem classes; generated reports are fed back to hardware and software experts to help mitigate and eliminate recognized issues. These problem reports allow rapid correction of artifacts that otherwise may be too subtle or infrequent to be recognized.

  15. Metal artifact reduction and image quality evaluation of lumbar spine CT images using metal sinogram segmentation.

    PubMed

    Kaewlek, Titipong; Koolpiruck, Diew; Thongvigitmanee, Saowapak; Mongkolsuk, Manus; Thammakittiphan, Sastrawut; Tritrakarn, Siri-on; Chiewvit, Pipat

    2015-01-01

Metal artifacts often appear in computed tomography (CT) images. In the case of lumbar spine CT images, artifacts disturb the images of critical organs. These artifacts can affect the diagnosis, treatment, and follow-up care of the patient. One approach to metal artifact reduction is the sinogram completion method. A mixed-variable thresholding (MixVT) technique to identify the suitable metal sinogram is proposed. This technique consists of four steps: 1) identify the metal objects in the image by using k-means clustering with soft cluster assignment; 2) transform the image by separating it into two sinograms, one of which is the sinogram of the metal object, with the surrounding tissue shown in the second sinogram; the boundary of the metal sinogram is then found by the MixVT technique; 3) estimate the new values of the missing data in the metal sinogram by linear interpolation from the surrounding-tissue sinogram; 4) reconstruct a modified sinogram by using filtered back-projection and complete the image by adding back the image of the metal object into the reconstructed image. The quantitative and clinical image quality evaluation of our proposed technique demonstrated a significant improvement in image clarity and detail, which enhances the effectiveness of diagnosis and treatment. PMID:26756404
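Step 3, estimating the missing metal-trace data by linear interpolation from neighboring detector bins, can be sketched per projection angle (an illustrative function, not the authors' code; rows index projection angles, columns index detector bins):

```python
import numpy as np

def inpaint_sinogram(sinogram, metal_mask):
    """Replace samples flagged as metal trace in each projection row
    by linear interpolation from the unflagged neighboring bins, as in
    sinogram-completion metal artifact reduction."""
    out = sinogram.astype(float).copy()
    bins = np.arange(sinogram.shape[1])
    for i in range(sinogram.shape[0]):
        bad = metal_mask[i]
        if bad.any() and not bad.all():
            out[i, bad] = np.interp(bins[bad], bins[~bad], sinogram[i, ~bad])
    return out
```

Filtered back-projection of the completed sinogram then yields the metal-free reconstruction, into which the segmented metal object is pasted back as the abstract describes.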

  16. Measuring image quality performance on image versions saved with different file format and compression ratio

    NASA Astrophysics Data System (ADS)

    Mitjà, Carles; Escofet, Jaume; Bover, Toni

    2012-06-01

Digitization of existing documents containing images is an important body of work for many archives, ranging from individuals to institutional organizations. The methods and file formats used in this digitization are usually a trade-off between budget, file volume size, and image quality, though not necessarily in that order. The use of the most common and standardized file formats, JPEG and TIFF, prompts the operator to decide on the compression ratio, which affects both the final file volume size and the quality of the resulting image version. The image quality achieved by a system can be evaluated by means of several measures and methods, the modulation transfer function (MTF) being one of the most used. The methods employed by the compression algorithms affect the two basic features of image content, edges and textures, in different ways. These basic features are also differently affected by the amount of noise generated at the digitization stage. Therefore, the target used in the measurement should be related to the features usually present in general imaging. This work presents a comparison between the results obtained by measuring the MTF of images taken with a professional camera system and saved in several file formats and compression ratios. To meet the needs stated above, the MTF measurement has been done by two separate methods, using the slanted-edge and dead-leaves targets respectively. The measurement results are shown and compared in relation to the respective file volume sizes.
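In its simplest 1-D form, the edge-based MTF measurement reduces to differentiating the edge-spread function (ESF) to get the line-spread function (LSF) and taking the normalized magnitude of its spectrum. A sketch of the principle only; the actual slanted-edge method adds sub-pixel binning across the tilted edge and windowing:

```python
import numpy as np

def mtf_from_esf(esf):
    """MTF from a 1-D edge-spread function: differentiate to get the
    line-spread function, then normalize the FFT magnitude to DC."""
    lsf = np.diff(np.asarray(esf, dtype=float))
    spectrum = np.abs(np.fft.rfft(lsf))
    return spectrum / spectrum[0]

ideal = np.concatenate([np.zeros(16), np.ones(16)])       # perfect step edge
blurred = np.clip((np.arange(32) - 14) / 4.0, 0.0, 1.0)   # 4-pixel ramp edge
print(mtf_from_esf(ideal))    # flat at 1: no resolution loss
print(mtf_from_esf(blurred))  # rolls off toward high frequencies
```

A perfect step edge gives a delta-function LSF and thus a flat MTF of 1, while any blur (from optics, the sensor, or compression) shows up as roll-off at high spatial frequencies.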

  17. Exploring V1 by modeling the perceptual quality of images.

    PubMed

    Zhang, Fan; Jiang, Wenfei; Autrusseau, Florent; Lin, Weisi

    2014-01-01

    We propose an image quality model based on phase and amplitude differences between a reference and a distorted image. The proposed model is motivated by the fact that polar representations can separate visual information in a more independent and efficient manner than Cartesian representations in the primary visual cortex (V1). We subsequently estimate the model parameters from a large subjective data set using maximum likelihood methods. By comparing the various model hypotheses on the functional form about the phase and amplitude, we find that: (a) discrimination of visual orientation is important for quality assessment and yet a coarse level of such discrimination seems sufficient; and (b) a product-based amplitude-phase combination before pooling is effective, suggesting an interesting viewpoint about the functional structure of the simple cells and complex cells in V1. PMID:24464165

  18. Image quality vs. sensitivity: fundamental sensor system engineering

    NASA Astrophysics Data System (ADS)

    Schueler, Carl F.

    2008-08-01

    This paper focuses on the fundamental system engineering tradeoff driving almost all remote sensing design efforts, affecting complexity, cost, performance, schedule, and risk: image quality vs. sensitivity. This single trade encompasses every aspect of performance, including radiometric accuracy, dynamic range and precision, as well as spatial, spectral, and temporal coverage and resolution. This single trade also encompasses every aspect of design, including mass, dimensions, power, orbit selection, spacecraft interface, sensor and spacecraft functional trades, pointing or scanning architecture, sensor architecture (e.g., field-of-view, optical form, aperture, f/#, material properties), electronics, mechanical and thermal properties. The relationship between image quality and sensitivity is introduced based on the concepts of modulation transfer function (MTF) and signal-to-noise ratio (SNR) with examples to illustrate the balance to be achieved by the system architect to optimize cost, complexity, performance and risk relative to end-user requirements.

  19. Image-inpainting and quality-guided phase unwrapping algorithm.

    PubMed

    Meng, Lei; Fang, Suping; Yang, Pengcheng; Wang, Leijie; Komori, Masaharu; Kubo, Aizoh

    2012-05-01

For wrapped phase maps with regional abnormal fringes, a new phase unwrapping algorithm that combines image-inpainting theory and the quality-guided phase unwrapping algorithm is proposed. First, by applying a threshold to the modulation map, the valid region (i.e., the interference region) is divided into a doubtful region (called the target region during the inpainting period) and a reasonable one (the source region). The wrapped phase of the doubtful region is considered unreliable, and the data are abandoned temporarily. Using the region-filling image-inpainting method, the blank target region is filled with new data, while nothing is changed in the source region. A new wrapped phase map is generated, and it is then unwrapped with the quality-guided phase unwrapping algorithm. Finally, a postprocessing operation is proposed for the final result. Experimental results show that the proposed algorithm is effective. PMID:22614426

  20. High image quality sub 100 picosecond gated framing camera development

    SciTech Connect

    Price, R.H.; Wiedwald, J.D.

    1983-11-17

    A major challenge for laser fusion is the study of the symmetry and hydrodynamic stability of imploding fuel capsules. Framed x-radiographs of 10-100 ps duration, excellent image quality, minimum geometrical distortion (< 1%), dynamic range greater than 1000, and more than 200 x 200 pixels are required for this application. Recent progress on a gated proximity focused intensifier which meets these requirements is presented.

  1. SU-E-J-38: Improved DRR Image Quality Using Polyetheretherketone (PEEK) Fiducial in Image Guided Radiotherapy (IGRT)

    SciTech Connect

    Shen, S; Jacob, R; Popple, R; Duan, J; Wu, X; Cardan, R; Brezovich, I

    2015-06-15

Purpose: Fiducial-based imaging is often used in IGRT. The traditional gold fiducial marker often produces substantial reconstruction artifacts. These artifacts result in poor DRR image quality for online kV-to-DRR matching. This study evaluated the image quality of PEEK in DRRs in static and moving phantoms. Methods: CT scans of the gold and PEEK fiducials (both 1×3 mm) were acquired in a 22 cm cylindrical phantom filled with water. Image artifacts were evaluated using the maximum CT value deviation from water due to artifacts, the volume of artifacts within a 10×10 cm region in the center slice, and the maximum length of streak artifacts from the fiducial. DRR resolution was measured using FWHM and FWTM. 4DCT of the PEEK fiducial was acquired with the phantom moving sinusoidally in the superior-inferior direction. Motion artifacts were assessed for various 4D phase angles. Results: The maximum CT value deviation was −174 for gold and −24 for PEEK. The volume of artifacts in a 10×10 cm, 3 mm slice was 0.369 cm³ for gold and 0.074 cm³ for PEEK. The maximum length of streak artifact was 80 mm for gold and 7 mm for PEEK. For PEEK in the DRR, the FWHM was close to the actual size (1.0 mm for gold and 1.1 mm for PEEK). The FWTM was 1.8 mm for gold and 1.3 mm for PEEK in the DRR. Barrel motion artifact of the PEEK fiducial was noticeable in the free-breathing scan. The apparent PEEK length due to residual motion was in close agreement with the calculated length (13 mm for the 30-70 phase, 10 mm for the 40-60 phase). Conclusion: Streak artifacts on the planning CT associated with use of a gold fiducial can be significantly reduced by a PEEK fiducial, while maintaining adequate kV image contrast. DRR image resolution at FWTM was improved from 1.8 mm to 1.3 mm. Because of this improvement, we now routinely use PEEK for liver IGRT.

  2. Incorporating detection tasks into the assessment of CT image quality

    NASA Astrophysics Data System (ADS)

    Scalzetti, E. M.; Huda, W.; Ogden, K. M.; Khan, M.; Roskopf, M. L.; Ogden, D.

    2006-03-01

    The purpose of this study was to compare traditional and task-dependent assessments of CT image quality. Chest CT examinations were obtained with a standard protocol for subjects participating in a lung cancer-screening project. Images were selected for patients whose weight ranged from 45 kg to 159 kg. Six ABR-certified radiologists subjectively ranked these images using a traditional six-point ranking scheme that ranged from 1 (inadequate) to 6 (excellent). Three subtle diagnostic tasks were identified: (1) a lung section containing a sub-centimeter nodule of ground-glass opacity in an upper lung; (2) a mediastinal section with a lymph node of soft tissue density in the mediastinum; (3) a liver section with a rounded low-attenuation lesion in the liver periphery. Each observer was asked to estimate the probability of detecting each type of lesion in the appropriate CT section using a six-point scale ranging from 1 (< 10%) to 6 (> 90%). Traditional and task-dependent measures of image quality were plotted as a function of patient weight. For the lung section, task-dependent evaluations were very similar to those obtained using the traditional scoring scheme, but with larger inter-observer differences. Task-dependent evaluations for the mediastinal section showed no obvious trend with subject weight, whereas the traditional score decreased from ~4.9 for smaller subjects to ~3.3 for the larger subjects. Task-dependent evaluations for the liver section showed a decreasing trend from ~4.1 for the smaller subjects to ~1.9 for the larger subjects, whereas the traditional evaluation had a markedly narrower range of scores. A task-dependent method of assessing CT image quality can be implemented with relative ease, and is likely to be more meaningful in the clinical setting.
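The six-point detection scale described above maps an estimated detection probability to an ordinal score. A minimal sketch of such a mapping, assuming equal-width interior cut points (the abstract only specifies the endpoints, 1 for < 10% and 6 for > 90%):

```python
def detection_score(probability):
    """Map a detection probability (0-1) onto the study's six-point
    scale: 1 corresponds to < 10% and 6 to > 90%. The interior cut
    points (30%, 50%, 70%) are an assumption; the abstract only
    specifies the two endpoints."""
    cut_points = [0.10, 0.30, 0.50, 0.70, 0.90]
    score = 1
    for cut in cut_points:
        if probability >= cut:
            score += 1
    return score
```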

  3. Radiometric Quality Evaluation of INSAT-3D Imager Data

    NASA Astrophysics Data System (ADS)

    Prakash, S.; Jindal, D.; Badal, N.; Kartikeyan, B.; Gopala Krishna, B.

    2014-11-01

    INSAT-3D is an advanced meteorological satellite of ISRO which acquires imagery in optical and infra-red (IR) channels for the study of weather dynamics in the Indian sub-continent region. In this paper, the methodology of radiometric quality evaluation for Level-1 products of the Imager, one of the payloads onboard INSAT-3D, is described. Firstly, the overall visual quality of the scene is assessed in terms of dynamic range, edge sharpness or modulation transfer function (MTF), presence of striping, and other image artefacts. Uniform targets in desert and sea regions are identified, for which detailed radiometric performance evaluation for the IR channels is carried out. The mean brightness temperature (BT) of each target is computed and validated against independently generated radiometric references. Further, diurnal/seasonal trends in target BT values and radiometric uncertainty or sensor noise are studied. Results of radiometric quality evaluation over a duration of eight months (January to August 2014) and a comparison of radiometric consistency pre/post yaw flip of the satellite are presented. Radiometric analysis indicates that INSAT-3D images have high contrast (MTF > 0.2) and low striping effects. A bias of <4 K is observed in the brightness temperature values of the TIR-1 channel measured during January-August 2014, indicating consistent radiometric calibration. Diurnal and seasonal analysis shows that the noise equivalent differential temperature (NEdT) for the IR channels is consistent and well within specifications.

  4. Beef quality parameters estimation using ultrasound and color images

    PubMed Central

    2015-01-01

    Background: Beef quality measurement is a complex task with high economic impact. There is high interest in obtaining automatic quality parameter estimation in live cattle or post mortem. In this paper we set out to obtain beef quality estimates from the analysis of ultrasound (in vivo) and color images (post mortem), with the measurement of various parameters related to tenderness and amount of meat: rib eye area, percentage of intramuscular fat, and backfat thickness or subcutaneous fat. Proposal: An algorithm based on curve evolution is implemented to calculate the rib eye area. The backfat thickness is estimated from the profile of distances between two curves that limit the steak and the rib eye, previously detected. A model based on Support Vector Regression (SVR) is trained to estimate the intramuscular fat percentage. A series of features extracted from a region of interest, previously detected in both ultrasound and color images, is proposed. In all cases, a complete evaluation was performed with different databases including: color and ultrasound images acquired by a beef industry expert, intramuscular fat estimation obtained by an expert using commercial software, and chemical analysis. Conclusions: The proposed algorithms show good results in calculating the rib eye area and the backfat thickness measure and profile. They are also promising in predicting the percentage of intramuscular fat. PMID:25734452

  5. How much image noise can be added in cardiac x-ray imaging without loss in perceived image quality?

    NASA Astrophysics Data System (ADS)

    Gislason-Lee, Amber J.; Kumcu, Asli; Kengyelics, Stephen M.; Brettle, David S.; Treadgold, Laura A.; Sivananthan, Mohan; Davies, Andrew G.

    2015-09-01

    Cardiologists use x-ray image sequences of the moving heart acquired in real-time to diagnose and treat cardiac patients. The amount of radiation used is proportional to image quality; however, exposure to radiation is damaging to patients and personnel. The amount by which radiation dose can be reduced without compromising patient care was determined. For five patient image sequences, increments of computer-generated quantum noise (white + colored) were added to the images, frame by frame using pixel-to-pixel addition, to simulate corresponding increments of dose reduction. The noise adding software was calibrated for settings used in cardiac procedures, and validated using standard objective and subjective image quality measurements. The degraded images were viewed next to corresponding original (not degraded) images in a two-alternative-forced-choice staircase psychophysics experiment. Seven cardiologists and five radiographers selected their preferred image based on visualization of the coronary arteries. The point of subjective equality, i.e., level of degradation where the observer could not perceive a difference between the original and degraded images, was calculated; for all patients the median was 33%±15% dose reduction. This demonstrates that a 33%±15% increase in image noise is feasible without being perceived, indicating potential for 33%±15% dose reduction without compromising patient care.
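The dose-reduction simulation above relies on quantum noise variance scaling inversely with dose, so degrading an image to a fraction f of its original dose amounts to adding zero-mean noise of variance σ_q²(1/f − 1). A minimal sketch under that assumption (white noise only; the study's calibrated colored-noise component is omitted):

```python
import math
import random

def added_noise_sigma(sigma_quantum, dose_fraction):
    """Standard deviation of the zero-mean noise to add so that total
    quantum noise matches a scan at dose_fraction of the original dose
    (quantum noise variance scales as 1/dose)."""
    return sigma_quantum * math.sqrt(1.0 / dose_fraction - 1.0)

def degrade_frame(pixels, sigma_quantum, dose_fraction, rng=random.Random(0)):
    """Pixel-to-pixel addition of white Gaussian noise to one frame.
    (The study also shaped the noise spectrum to the detector, which
    this sketch omits.)"""
    sigma = added_noise_sigma(sigma_quantum, dose_fraction)
    return [p + rng.gauss(0.0, sigma) for p in pixels]
```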

  6. A virtual image chain for perceived image quality of medical display

    NASA Astrophysics Data System (ADS)

    Marchessoux, Cédric; Jung, Jürgen

    2006-03-01

    This paper describes a virtual image chain for medical display (project VICTOR: granted in the 5th framework program by the European Commission). The chain starts from raw data of an image digitizer (CR, DR) or synthetic patterns and covers image enhancement (MUSICA by Agfa) and both display possibilities, hardcopy (film on a viewing box) and softcopy (monitor). A key feature of the chain is a complete image-wise approach. A first prototype is implemented in an object-oriented software platform. The display chain consists of several modules. Raw images are either taken from scanners (CR-DR) or from a pattern generator, in which characteristics of DR-CR systems are introduced by their MTF and their dose-dependent Poisson noise. The image undergoes image enhancement and is then displayed. For softcopy display, color and monochrome monitors are used in the simulation. The image is down-sampled. The non-linear response of a color monitor is taken into account by the GOG or S-curve model, whereas the Standard Grayscale Display Function (DICOM GSDF) is used for monochrome display. The MTF of the monitor is applied to the image in intensity levels. For hardcopy display, the combination of film, printer, lightbox, and viewing condition is modeled. The image is up-sampled and the DICOM GSDF or a Kanamori look-up table is applied. An anisotropic model for the MTF of the printer is applied to the image in intensity levels. The density-dependent color (XYZ) of the hardcopy film is introduced by look-up tables. Finally, a human visual system model is applied to the intensity images (XYZ in terms of cd/m2) in order to eliminate non-visible differences. Comparison leads to visible differences, which are quantified by higher-order image quality metrics. A specific image viewer is used for the visualization of the intensity image and the visual difference maps.
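The GOG (gain-offset-gamma) model mentioned for the color monitor expresses luminance as a power function of the offset-and-scaled normalized DAC value. A hedged sketch, with illustrative parameter names (the paper does not give its fitted values):

```python
def gog_luminance(dac_value, gain, offset, gamma, bits=8):
    """Gain-offset-gamma (GOG) model of a display's tone response:
    L = (gain * v + offset) ** gamma, with v the DAC value normalized
    to [0, 1]. Values driven below the display cutoff are clipped
    to zero luminance."""
    v = dac_value / (2 ** bits - 1)
    base = gain * v + offset
    return max(base, 0.0) ** gamma
```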

  7. Pleiades-Hr Innovative Techniques for Radiometric Image Quality Commissioning

    NASA Astrophysics Data System (ADS)

    Blanchet, G.; Lebeque, L.; Fourest, S.; Latry, C.; Porez-Nadal, F.; Lacherade, S.; Thiebaut, C.

    2012-07-01

    The first Pleiades-HR satellite, part of a constellation of two, was launched on December 17, 2011. This satellite produces high resolution optical images. In order to achieve good image quality, Pleiades-HR first undergoes an important 6-month commissioning phase. This phase consists of calibrating and assessing the radiometric and geometric image quality to offer the best images to end users. This new satellite has benefited from technology improvements in various fields which make it stand out from other Earth observation satellites. In particular, its best-in-class agility performance enables new calibration and assessment techniques. This paper is dedicated to presenting these innovative techniques that have been tested for the first time for the Pleiades-HR radiometric commissioning. Radiometric activities concern compression, absolute calibration, detector normalization, refocusing operations, MTF (Modulation Transfer Function) assessment, signal-to-noise ratio (SNR) estimation, and tuning of the ground processing parameters. The radiometric performances of each activity are summarized in this paper.

  8. Image gathering and digital restoration for fidelity and visual quality

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O.; Alter-Gartenberg, Rachel; Rahman, Zia-Ur

    1991-01-01

    The fidelity and resolution of the traditional Wiener restorations given in the prevalent digital processing literature can be significantly improved when the transformations between the continuous and discrete representations in image gathering and display are accounted for. However, the visual quality of these improved restorations is also more sensitive to the defects caused by aliasing artifacts, colored noise, and ringing near sharp edges. In this paper, these visual defects are characterized, and methods for suppressing them are presented. It is demonstrated how the visual quality of fidelity-maximized images can be improved when (1) the image-gathering system is specifically designed to enhance the performance of the image-restoration algorithm, and (2) the Wiener filter is combined with interactive Gaussian smoothing, synthetic high edge enhancement, and nonlinear tone-scale transformation. The nonlinear transformation is used primarily to enhance the spatial details that are often obscured when the normally wide dynamic range of natural radiance fields is compressed into the relatively narrow dynamic range of film and other displays.

  9. ECG-synchronized DSA exposure control: improved cervicothoracic image quality

    SciTech Connect

    Kelly, W.M.; Gould, R.; Norman, D.; Brant-Zawadzki, M.; Cox, L.

    1984-10-01

    An electrocardiogram (ECG)-synchronized x-ray exposure sequence was used to acquire digital subtraction angiographic (DSA) images during 13 arterial injection studies of the aortic arch or carotid bifurcations. These gated images were compared with matched ungated DSA images acquired using the same technical factors, contrast material volume, and patient positioning. Subjective assessments by five experienced observers of edge definition, vessel conspicuousness, and overall diagnostic quality showed overall preference for one of the two acquisition methods in 69% of cases studied. Of these, the ECG-synchronized exposure series were rated superior in 76%. These results, as well as the relatively simple and inexpensive modifications required, suggest that routine use of ECG exposure control can facilitate improved arterial DSA evaluations of suspected cervicothoracic vascular disease.

  10. Effects of characteristics of image quality in an immersive environment

    NASA Technical Reports Server (NTRS)

    Duh, Henry Been-Lirn; Lin, James J W.; Kenyon, Robert V.; Parker, Donald E.; Furness, Thomas A.

    2002-01-01

    Image quality issues such as field of view (FOV) and resolution are important for evaluating "presence" and simulator sickness (SS) in virtual environments (VEs). This research examined effects on postural stability of varying FOV, image resolution, and scene content in an immersive visual display. Two different scenes (a photograph of a fountain and a simple radial pattern) at two different resolutions were tested using six FOVs (30, 60, 90, 120, 150, and 180 deg.). Both postural stability, recorded by force plates, and subjective difficulty ratings varied as a function of FOV, scene content, and image resolution. Subjects exhibited more balance disturbance and reported more difficulty in maintaining posture in the wide-FOV, high-resolution, and natural scene conditions.

  11. Objective assessment of image quality. IV. Application to adaptive optics

    PubMed Central

    Barrett, Harrison H.; Myers, Kyle J.; Devaney, Nicholas; Dainty, Christopher

    2008-01-01

    The methodology of objective assessment, which defines image quality in terms of the performance of specific observers on specific tasks of interest, is extended to temporal sequences of images with random point spread functions and applied to adaptive imaging in astronomy. The tasks considered include both detection and estimation, and the observers are the optimal linear discriminant (Hotelling observer) and the optimal linear estimator (Wiener). A general theory of first- and second-order spatiotemporal statistics in adaptive optics is developed. It is shown that the covariance matrix can be rigorously decomposed into three terms representing the effect of measurement noise, random point spread function, and random nature of the astronomical scene. Figures of merit are developed, and computational methods are discussed. PMID:17106464
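For the special case of a diagonal covariance matrix, the Hotelling observer's detectability reduces to a sum of per-element signal-to-variance ratios, SNR² = Σ Δμ_i²/σ_i². A minimal numeric sketch of that special case (the paper's full treatment uses the complete spatiotemporal covariance, with template w = K⁻¹Δμ):

```python
def hotelling_snr_diagonal(delta_mean, variances):
    """Hotelling observer detectability for a diagonal covariance:
    SNR = sqrt(sum(dm_i^2 / var_i)). In general, SNR^2 = dm^T K^{-1} dm
    with the full covariance matrix K."""
    return sum(dm * dm / v for dm, v in zip(delta_mean, variances)) ** 0.5
```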

  12. Image quality criteria for wide-field x-ray imaging applications

    NASA Astrophysics Data System (ADS)

    Thompson, Patrick L.; Harvey, James E.

    1999-10-01

    For staring, wide-field applications, such as a solar x-ray imager, the severe off-axis aberrations of the classical Wolter Type-I grazing incidence x-ray telescope design drastically limit the 'resolution' near the solar limb. A specification upon on-axis fractional encircled energy is thus not an appropriate image quality criterion for such wide-angle applications. A more meaningful image quality criterion would be a field-weighted-average measure of 'resolution.' Since surface scattering effects from residual optical fabrication errors are always substantial at these very short wavelengths, the field-weighted-average half-power radius is a far more appropriate measure of aerial resolution. If an ideal mosaic detector array is being used in the focal plane, the finite pixel size provides a practical limit to this system performance. Thus, the total number of aerial resolution elements enclosed by the operational field-of-view, expressed as a percentage of the number of ideal detector pixels, is a further improved image quality criterion. In this paper we describe the development of an image quality criterion for wide-field applications of grazing incidence x-ray telescopes which leads to a new class of grazing incidence designs described in a following companion paper.

  13. Evaluation of scatter effects on image quality for breast tomosynthesis

    SciTech Connect

    Wu Gang; Mainprize, James G.; Boone, John M.; Yaffe, Martin J.

    2009-10-15

    Digital breast tomosynthesis uses a limited number (typically 10-20) of low-dose x-ray projections to produce a pseudo-three-dimensional volume tomographic reconstruction of the breast. The purpose of this investigation was to characterize and evaluate the effect of scattered radiation on the image quality for breast tomosynthesis. In a simulation, scatter point spread functions generated by a Monte Carlo simulation method were convolved over the breast projection to estimate the distribution of scatter for each angle of tomosynthesis projection. The results demonstrate that in the absence of scatter reduction techniques, images will be affected by cupping artifacts, and there will be reduced accuracy of attenuation values inferred from the reconstructed images. The effect of x-ray scatter on the contrast, noise, and lesion signal-difference-to-noise ratio (SDNR) in tomosynthesis reconstruction was measured as a function of the tumor size. When a with-scatter reconstruction was compared to one without scatter for a 5 cm compressed breast, the following results were observed. The contrast in the reconstructed central slice image of a tumorlike mass (14 mm in diameter) was reduced by 30%, the voxel value (inferred attenuation coefficient) was reduced by 28%, and the SDNR fell by 60%. The authors have quantified the degree to which scatter degrades the image quality over a wide range of parameters relevant to breast tomosynthesis, including x-ray beam energy, breast thickness, breast diameter, and breast composition. They also demonstrate, though, that even without a scatter rejection device, the contrast and SDNR in the reconstructed tomosynthesis slice are higher than those of conventional mammographic projection images acquired with a grid at an equivalent total exposure.
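The scatter-estimation step described above, convolving a scatter point spread function over the projection, can be sketched in one dimension. The study uses Monte-Carlo-derived 2-D kernels; the toy kernel here is purely illustrative:

```python
def estimate_scatter(projection, scatter_psf):
    """Estimate the scatter field by convolving the projection with a
    scatter point-spread function (1-D, zero-padded at the edges)."""
    half = len(scatter_psf) // 2
    out = []
    for i in range(len(projection)):
        s = 0.0
        for j, k in enumerate(scatter_psf):
            idx = i + j - half
            if 0 <= idx < len(projection):
                s += projection[idx] * k
        out.append(s)
    return out

def scatter_correct(projection, scatter_psf):
    """Subtract the scatter estimate from the measured projection."""
    scatter = estimate_scatter(projection, scatter_psf)
    return [p - s for p, s in zip(projection, scatter)]
```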

  14. A new algorithm for integrated image quality measurement based on wavelet transform and human visual system

    NASA Astrophysics Data System (ADS)

    Wang, Haihui

    2006-01-01

    An essential determinant of the value of digital images is their quality. Over the past years, there have been many attempts to develop models or metrics for image quality that incorporate elements of human visual sensitivity. However, there is no current standard and objective definition of spectral image quality. This paper proposes a reliable automatic method for objective image quality measurement based on the wavelet transform and the human visual system. In this way, the proposed measure differentiates between random and signal-dependent distortion, which have different effects on the human observer. Performance of the proposed quality measure is illustrated by examples involving images with different types of degradation. The technique provides a means to relate the quality of an image to its interpretation and quantification throughout the frequency range, in which the noise level is estimated for quality evaluation. The experimental results of using this method for image quality measurement exhibit good correlation with subjective visual quality assessments.
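Wavelet-domain quality metrics of this kind typically examine energy per frequency band. A minimal single-level Haar decomposition in 1-D, as a sketch of the idea (the paper's actual transform and HVS weighting are not specified here):

```python
def haar_1d(signal):
    """One level of the 1-D Haar wavelet transform: returns the
    approximation (low-pass) and detail (high-pass) coefficients.
    Assumes an even-length input."""
    approx = [(signal[i] + signal[i + 1]) / 2 ** 0.5
              for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 ** 0.5
              for i in range(0, len(signal), 2)]
    return approx, detail

def detail_energy(signal):
    """Energy in the detail subband -- a crude proxy for the
    high-frequency content (noise plus edges) that band-wise
    quality metrics examine."""
    _, detail = haar_1d(signal)
    return sum(d * d for d in detail)
```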

  15. Cross-layer Energy Optimization Under Image Quality Constraints for Wireless Image Transmissions.

    PubMed

    Yang, Na; Demirkol, Ilker; Heinzelman, Wendi

    2012-01-01

    Wireless image transmission is critical in many applications, such as surveillance and environment monitoring. In order to make the best use of the limited energy of the battery-operated cameras, while satisfying the application-level image quality constraints, cross-layer design is critical. In this paper, we develop an image transmission model that allows the application layer (e.g., the user) to specify an image quality constraint, and optimizes the lower layer parameters of transmit power and packet length to minimize the energy dissipation in image transmission over a given distance. The effectiveness of this approach is evaluated by applying the proposed energy optimization to a reference ZigBee system and a WiFi system, and also by comparing to an energy optimization study that does not consider any image quality constraint. Evaluations show that our scheme outperforms the default settings of the investigated commercial devices and saves a significant amount of energy at middle-to-large transmission distances. PMID:23508852
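The cross-layer trade-off (higher transmit power lowers the bit error rate, while a longer packet amortizes header overhead but fails more often) can be illustrated with a toy energy model and a grid search. All model parameters and the channel model below are illustrative assumptions, not values from the paper:

```python
import math

def expected_energy(power_mw, payload_bits, image_bits=65536, header_bits=128,
                    bit_time_s=4e-6, idle_mw=30.0, noise_factor=2.0):
    """Toy cross-layer model: a failed packet is retransmitted until
    received, so the expected energy per image is
    n_packets * (1 / P_success) * energy_per_transmission."""
    ber = 0.5 * math.exp(-power_mw / noise_factor)   # assumed channel model
    packet_bits = payload_bits + header_bits
    success = (1.0 - ber) ** packet_bits             # packet arrives intact
    if success == 0.0:
        return math.inf                              # power far too low
    tries = 1.0 / success                            # expected transmissions
    n_packets = math.ceil(image_bits / payload_bits)
    energy_per_try = (power_mw + idle_mw) * packet_bits * bit_time_s
    return n_packets * tries * energy_per_try

def best_setting(powers, payloads):
    """Grid search for the (power, payload) pair minimizing energy."""
    return min(((p, l) for p in powers for l in payloads),
               key=lambda pl: expected_energy(*pl))
```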

  16. Characterization of image quality for 3D scatter-corrected breast CT images

    NASA Astrophysics Data System (ADS)

    Pachon, Jan H.; Shah, Jainil; Tornai, Martin P.

    2011-03-01

    The goal of this study was to characterize the image quality of our dedicated, quasi-monochromatic spectrum, cone beam breast imaging system under scatter corrected and non-scatter corrected conditions for a variety of breast compositions. CT projections were acquired of a breast phantom containing two concentric sets of acrylic spheres that varied in size (1-8 mm) based on their polar position. The breast phantom was filled with 3 different concentrations of methanol and water, simulating a range of breast densities (0.79-1.0 g/cc); acrylic yarn was sometimes included to simulate connective tissue of a breast. For each phantom condition, 2D scatter was measured for all projection angles. Scatter-corrected and uncorrected projections were then reconstructed with an iterative ordered subsets convex algorithm. Reconstructed image quality was characterized using SNR and contrast analysis, followed by a human observer detection task for the spheres in the different concentric rings. Results show that scatter correction effectively reduces the cupping artifact and improves image contrast and SNR. Results from the observer study indicate that there was no statistical difference in the number or sizes of lesions observed in the scatter versus non-scatter corrected images for all densities. Nonetheless, applying scatter correction for differing breast conditions improves overall image quality.
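The SNR and contrast figures used to characterize the reconstructions are simple region-of-interest statistics; a minimal sketch of common definitions:

```python
def mean_std(values):
    """Mean and (population) standard deviation of a list of voxel values."""
    m = sum(values) / len(values)
    var = sum((v - m) ** 2 for v in values) / len(values)
    return m, var ** 0.5

def snr(roi):
    """Signal-to-noise ratio of a region of interest: mean / std."""
    m, s = mean_std(roi)
    return m / s

def contrast(roi, background):
    """Relative contrast of a sphere ROI against the background:
    (mean_roi - mean_bg) / mean_bg."""
    mu_b = sum(background) / len(background)
    return (sum(roi) / len(roi) - mu_b) / mu_b
```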

  17. Assessing and improving cobalt-60 digital tomosynthesis image quality

    NASA Astrophysics Data System (ADS)

    Marsh, Matthew B.; Schreiner, L. John; Kerr, Andrew T.

    2014-03-01

    Image guidance capability is an important feature of modern radiotherapy linacs, and future cobalt-60 units will be expected to have similar capabilities. Imaging with the treatment beam is an appealing option, for reasons of simplicity and cost, but the dose needed to produce cone beam CT (CBCT) images in a Co-60 treatment beam is too high for this modality to be clinically useful. Digital tomosynthesis (DT) offers a quasi-3D image, of sufficient quality to identify bony anatomy or fiducial markers, while delivering a much lower dose than CBCT. A series of experiments was conducted on a prototype Co-60 cone beam imaging system to quantify the resolution, selectivity, geometric accuracy, and contrast sensitivity of Co-60 DT. Although the resolution is severely limited by the penumbra cast by the ~2 cm diameter source, it is possible to identify high contrast objects on the order of 1 mm in width, and bony anatomy in anthropomorphic phantoms is clearly recognizable. Low contrast sensitivity down to electron density differences of 3% is obtained for uniform features of similar thickness. The conventional shift-and-add reconstruction algorithm was compared to several variants of the Feldkamp-Davis-Kress filtered backprojection algorithm. The Co-60 DT images were obtained with a total dose of 5 to 15 cGy each. We conclude that Co-60 radiotherapy units upgraded for modern conformal therapy could also incorporate imaging using filtered backprojection DT in the treatment beam. DT is a versatile and promising modality that would be well suited to image guidance requirements.
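The conventional shift-and-add reconstruction mentioned above aligns each projection to the plane of interest and averages the stack, so structures in that plane reinforce while structures at other depths blur out. A 1-D integer-shift sketch (real systems derive sub-pixel shifts from the acquisition geometry and interpolate):

```python
def shift_and_add(projections, shifts):
    """Shift-and-add tomosynthesis for one plane: undo the in-plane
    displacement each projection geometry gives that plane, then
    average the shifted projections."""
    n = len(projections[0])
    plane = [0.0] * n
    for proj, shift in zip(projections, shifts):
        for x in range(n):
            src = x + shift            # undo the shift for this plane
            if 0 <= src < n:
                plane[x] += proj[src]
    return [v / len(projections) for v in plane]
```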

  18. The level of detail required in a deformable phantom to accurately perform quality assurance of deformable image registration

    NASA Astrophysics Data System (ADS)

    Saenz, Daniel L.; Kim, Hojin; Chen, Josephine; Stathakis, Sotirios; Kirby, Neil

    2016-09-01

    The primary purpose of the study was to determine how detailed deformable image registration (DIR) phantoms need to be to adequately simulate human anatomy and accurately assess the quality of DIR algorithms. In particular, how many distinct tissues are required in a phantom to simulate complex human anatomy? Pelvis and head-and-neck patient CT images were used for this study as virtual phantoms. Two data sets from each site were analyzed. The virtual phantoms were warped to create two pairs consisting of undeformed and deformed images. Otsu’s method was employed to create additional segmented image pairs of n distinct soft tissue CT number ranges (fat, muscle, etc). A realistic noise image was added to each image. Deformations were applied in MIM Software (MIM) and Velocity deformable multi-pass (DMP) and compared with the known warping. Images with more simulated tissue levels exhibit more contrast, enabling more accurate results. Deformation error (the magnitude of the vector difference between known and predicted deformation) was used as a metric to evaluate how many CT number gray levels are needed for a phantom to serve as a realistic patient proxy. Stabilization of the mean deformation error was reached by three soft tissue levels for Velocity DMP and MIM, though MIM exhibited a persisting difference in accuracy between the discrete images and the unprocessed image pair. A minimum detail of three levels allows a realistic patient proxy for use with Velocity and MIM deformation algorithms.
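Otsu's method, used above to segment the CT numbers into distinct tissue classes, picks the threshold that maximizes the between-class variance. A minimal single-threshold sketch (the study applies it repeatedly to obtain n classes):

```python
def otsu_threshold(values):
    """Otsu's method on a list of gray values: choose the threshold t
    maximizing the between-class variance
    sigma_b^2 = w0 * w1 * (mu0 - mu1)^2, where class 0 is values <= t."""
    n = len(values)
    best_t, best_sigma = None, -1.0
    for t in sorted(set(values))[:-1]:     # max value alone cannot split
        lo = [v for v in values if v <= t]
        hi = [v for v in values if v > t]
        w0, w1 = len(lo) / n, len(hi) / n
        mu0, mu1 = sum(lo) / len(lo), sum(hi) / len(hi)
        sigma_b = w0 * w1 * (mu0 - mu1) ** 2
        if sigma_b > best_sigma:
            best_sigma, best_t = sigma_b, t
    return best_t
```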

  19. The level of detail required in a deformable phantom to accurately perform quality assurance of deformable image registration.

    PubMed

    Saenz, Daniel L; Kim, Hojin; Chen, Josephine; Stathakis, Sotirios; Kirby, Neil

    2016-09-01

    The primary purpose of the study was to determine how detailed deformable image registration (DIR) phantoms need to be to adequately simulate human anatomy and accurately assess the quality of DIR algorithms. In particular, how many distinct tissues are required in a phantom to simulate complex human anatomy? Pelvis and head-and-neck patient CT images were used for this study as virtual phantoms. Two data sets from each site were analyzed. The virtual phantoms were warped to create two pairs consisting of undeformed and deformed images. Otsu's method was employed to create additional segmented image pairs of n distinct soft tissue CT number ranges (fat, muscle, etc). A realistic noise image was added to each image. Deformations were applied in MIM Software (MIM) and Velocity deformable multi-pass (DMP) and compared with the known warping. Images with more simulated tissue levels exhibit more contrast, enabling more accurate results. Deformation error (the magnitude of the vector difference between known and predicted deformation) was used as a metric to evaluate how many CT number gray levels are needed for a phantom to serve as a realistic patient proxy. Stabilization of the mean deformation error was reached by three soft tissue levels for Velocity DMP and MIM, though MIM exhibited a persisting difference in accuracy between the discrete images and the unprocessed image pair. A minimum detail of three levels allows a realistic patient proxy for use with Velocity and MIM deformation algorithms. PMID:27494827

  20. SU-E-J-36: Comparison of CBCT Image Quality for Manufacturer Default Imaging Modes

    SciTech Connect

    Nelson, G

    2015-06-15

    Purpose: CBCT is being increasingly used in patient setup for radiotherapy. Often the manufacturer default scan modes are used for performing these CBCT scans, with the assumption that they are the best options. To quantitatively assess the image quality of these scan modes, all of the scan modes were tested, as well as options within the reconstruction algorithm. Methods: A CatPhan 504 phantom was scanned on a TrueBeam linear accelerator using the manufacturer scan modes (FSRT Head, Head, Image Gently, Pelvis, Pelvis Obese, Spotlight, & Thorax). The Head mode scan was then reconstructed multiple times with all filter options (Smooth, Standard, Sharp, & Ultra Sharp) and all ring suppression options (Disabled, Weak, Medium, & Strong). An open-source ImageJ tool was created for analyzing the CatPhan 504 images. Results: The MTF curve was primarily dictated by the voxel size and the filter used in the reconstruction algorithm. The filters also impact the image noise. The CNR was worst for the Image Gently mode, followed by FSRT Head and Head. The sharper the filter, the worse the CNR. HU varied significantly between scan modes: Pelvis Obese had lower than expected HU values, while the Image Gently mode had higher than expected HU values. If a therapist tried to use preset window and level settings, they would not show the desired tissue for some scan modes. Conclusion: Knowing the image quality of the preset scan modes will enable users to better optimize their setup CBCT. Evaluation of the scan mode image quality could improve setup efficiency and lead to better treatment outcomes.
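The CNR figure used to rank the scan modes is a simple ROI statistic; a sketch of one common definition (CatPhan analyses vary in the exact formula):

```python
def cnr(roi, background):
    """Contrast-to-noise ratio between an insert ROI and a background
    ROI: |mean_roi - mean_bg| / std_bg."""
    mu_r = sum(roi) / len(roi)
    mu_b = sum(background) / len(background)
    var_b = sum((v - mu_b) ** 2 for v in background) / len(background)
    return abs(mu_r - mu_b) / var_b ** 0.5
```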

  1. TU-B-19A-01: Image Registration II: TG132-Quality Assurance for Image Registration

    SciTech Connect

    Brock, K; Mutic, S

    2014-06-15

    AAPM Task Group 132 was charged with reviewing the current approaches and solutions for image registration in radiotherapy and with providing recommendations for quality assurance and quality control of these clinical processes. As the results of image registration are always used as the input of another process for planning or delivery, it is important for the user to understand and document the uncertainty associated with the algorithm in general and with the result of a specific registration. The recommendations of this task group, which at the time of abstract submission are being reviewed by the AAPM, include the following components. The user should understand the basic image registration techniques and methods of visualizing image fusion. The disclosure of the basic components of the image registration by commercial vendors is critical in this respect. The physicist should perform end-to-end tests of imaging, registration, and planning/treatment systems if image registration is performed on a stand-alone system. A comprehensive commissioning process should be performed and documented by the physicist prior to clinical use of the system. As documentation is important to the safe implementation of this process, a request and report system should be integrated into the clinical workflow. Finally, a patient-specific QA practice should be established for efficient evaluation of image registration results. The implementation of these recommendations will be described and illustrated during this educational session. Learning Objectives: Highlight the importance of understanding the image registration techniques used in the clinic. Describe the end-to-end tests needed for stand-alone registration systems. Illustrate a comprehensive commissioning program using both phantom data and clinical images. Describe a request and report system to ensure communication and documentation. Demonstrate a clinically efficient patient QA practice for evaluation of image registration results.

  2. Color image quality in projection displays: a case study

    NASA Astrophysics Data System (ADS)

    Strand, Monica; Hardeberg, Jon Y.; Nussbaum, Peter

    2005-01-01

    Recently the use of projection displays has increased dramatically in applications such as digital cinema, home theatre, and business and educational presentations. Even though the color image quality of these devices has improved significantly over the years, it is still common for users of projection displays to find that the projected colors differ significantly from the intended ones. The study presented in this paper analyzes the color image quality of a large set of projection display devices, particularly investigating the variations in color reproduction. As a case study, a set of 14 projectors (LCD and DLP technology) at Gjøvik University College was tested under four different conditions: dark and light room, with and without using an ICC profile. To find out more about the importance of the illumination conditions in a room, and the degree of improvement when using an ICC profile, the results from the measurements were processed and analyzed. Eye-One Beamer from GretagMacbeth was used to make the profiles. The color image quality was evaluated both visually and by color difference calculations. The results from the analysis indicated large visual and colorimetric differences between the projectors. The DLP projectors generally have smaller color gamuts than the LCD projectors, and the color gamuts of older projectors are significantly smaller than those of newer ones. The amount of ambient light reaching the screen is of great importance for the visual impression: if too much reflected or other ambient light reaches the screen, the projected image becomes pale and low in contrast. When using a profile, the differences in colors between the projectors get smaller and the colors appear more correct. For one device, the average ΔE*ab color difference when compared to a relative white reference was reduced from 22 to 11, for another from 13 to 6. Blue colors have the largest variations among the projection displays and make them
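
    The ΔE*ab figures quoted above are CIE76 color differences, i.e. Euclidean distances in CIELAB space. A minimal sketch of the calculation; the Lab values below are hypothetical, chosen only to illustrate it:

```python
import math

def delta_e_ab(lab1, lab2):
    """CIE76 color difference: Euclidean distance between two (L*, a*, b*) colors."""
    return math.sqrt(sum((c1 - c2) ** 2 for c1, c2 in zip(lab1, lab2)))

# Hypothetical intended vs. projected color of one test patch
intended = (60.0, 20.0, -10.0)
projected = (55.0, 12.0, -4.0)
print(round(delta_e_ab(intended, projected), 2))   # → 11.18
```

    A profile that brings the average ΔE*ab from 22 down to 11, as reported for one device, roughly halves the colorimetric error on this scale.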

  3. Color image quality in projection displays: a case study

    NASA Astrophysics Data System (ADS)

    Strand, Monica; Hardeberg, Jon Y.; Nussbaum, Peter

    2004-10-01

    Recently the use of projection displays has increased dramatically in applications such as digital cinema, home theatre, and business and educational presentations. Even though the color image quality of these devices has improved significantly over the years, it is still common for users of projection displays to find that the projected colors differ significantly from the intended ones. The study presented in this paper analyzes the color image quality of a large set of projection display devices, particularly investigating the variations in color reproduction. As a case study, a set of 14 projectors (LCD and DLP technology) at Gjøvik University College was tested under four different conditions: dark and light room, with and without using an ICC profile. To find out more about the importance of the illumination conditions in a room, and the degree of improvement when using an ICC profile, the results from the measurements were processed and analyzed. Eye-One Beamer from GretagMacbeth was used to make the profiles. The color image quality was evaluated both visually and by color difference calculations. The results from the analysis indicated large visual and colorimetric differences between the projectors. The DLP projectors generally have smaller color gamuts than the LCD projectors, and the color gamuts of older projectors are significantly smaller than those of newer ones. The amount of ambient light reaching the screen is of great importance for the visual impression: if too much reflected or other ambient light reaches the screen, the projected image becomes pale and low in contrast. When using a profile, the differences in colors between the projectors get smaller and the colors appear more correct. For one device, the average ΔE*ab color difference when compared to a relative white reference was reduced from 22 to 11, for another from 13 to 6. Blue colors have the largest variations among the projection displays and make them

  4. Image Quality of the Helioseismic and Magnetic Imager (HMI) Onboard the Solar Dynamics Observatory (SDO)

    NASA Technical Reports Server (NTRS)

    Wachter, R.; Schou, Jesper; Rabello-Soares, M. C.; Miles, J. W.; Duvall, T. L., Jr.; Bush, R. I.

    2011-01-01

    We describe the imaging quality of the Helioseismic and Magnetic Imager (HMI) onboard the Solar Dynamics Observatory (SDO) as measured during the ground calibration of the instrument. We describe the calibration techniques and report our results for the final configuration of HMI. We present the distortion, modulation transfer function, stray light, image shifts introduced by moving parts of the instrument, best focus, field curvature, and the relative alignment of the two cameras. We investigate the gain and linearity of the cameras, and present the measured flat field.

  5. Comparison of image quality in computed laminography and tomography.

    PubMed

    Xu, Feng; Helfen, Lukas; Baumbach, Tilo; Suhonen, Heikki

    2012-01-16

    In computed tomography (CT), projection images of the sample are acquired over an angular range of 180 to 360 degrees around a rotation axis. A special case of CT is limited-angle CT, where some of the rotation angles are inaccessible, leading to artefacts in the reconstruction because of missing information. The case of flat samples is considered, where the projection angles that are close to the sample surface are either i) completely unavailable or ii) very noisy due to the limited transmission at these angles. Computed laminography (CL) is an imaging technique especially suited for flat samples. CL is a generalization of CT that uses a rotation axis tilted by less than 90 degrees with respect to the incident beam; thus CL avoids using projections from the angles closest to the sample surface. We make a quantitative comparison of the imaging artefacts between CL and limited-angle CT for the case of a parallel-beam geometry. Both experimental and simulated images are used to characterize the effect of the artefacts on the resolution and visible image features. The results indicate that CL has an advantage over CT in cases where the missing angular range is a significant portion of the total angular range. In the case when the quality of the projections is limited by noise, CT allows a better tradeoff between the noise level and the missing angular range. PMID:22274425

  6. Patient dose and image quality from mega-voltage cone beam computed tomography imaging

    SciTech Connect

    Gayou, Olivier; Parda, David S.; Johnson, Mark; Miften, Moyed

    2007-02-15

    The evolution of ever more conformal radiation delivery techniques makes accurate patient localization increasingly important in radiotherapy. Several systems can be utilized, including kilo-voltage and mega-voltage cone-beam computed tomography (MV-CBCT), CT on rails, or helical tomotherapy. One of the attractive aspects of mega-voltage cone-beam CT is that it uses the therapy beam along with an electronic portal imaging device to image the patient prior to the delivery of treatment. However, the use of a photon beam energy in the mega-voltage range for volumetric imaging degrades the image quality and increases the patient radiation dose. To optimize image quality and patient dose in MV-CBCT imaging procedures, a series of dose measurements in cylindrical and anthropomorphic phantoms was performed using an ionization chamber, radiographic films, and thermoluminescent dosimeters. Furthermore, the dependence of the contrast-to-noise ratio and spatial resolution of the image upon the dose delivered was evaluated for a 20-cm-diam cylindrical phantom. Depending on the anatomical site and patient thickness, we found that the minimum dose deposited in the irradiated volume was 5-9 cGy and the maximum dose was between 9 and 17 cGy for our clinical MV-CBCT imaging protocols. Results also demonstrated that for high-contrast areas such as bony anatomy, low doses are sufficient for image registration and visualization of the three-dimensional boundaries between soft tissue and bony structures. However, as the difference in tissue density decreased, the dose required to identify soft tissue boundaries increased. Finally, the dose delivered by MV-CBCT was simulated using a treatment planning system (TPS), thereby allowing the incorporation of the MV-CBCT dose in the treatment planning process. The TPS-calculated doses agreed well with measurements for a wide range of imaging protocols.
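
    The contrast-to-noise ratio evaluated in this study is commonly defined as the difference between the mean signals of a structure and its background, divided by the background noise. A minimal sketch with hypothetical voxel values (not data from the study):

```python
import numpy as np

def cnr(roi, background):
    """Contrast-to-noise ratio: mean signal difference over background noise."""
    roi = np.asarray(roi, dtype=float)
    background = np.asarray(background, dtype=float)
    return (roi.mean() - background.mean()) / background.std(ddof=0)

# Hypothetical voxel samples (arbitrary units) from a bony region and
# the surrounding soft tissue in a reconstructed slice
bone = [1195.0, 1210.0, 1198.0, 1205.0]
soft = [1020.0, 1080.0, 1020.0, 1080.0]
print(round(cnr(bone, soft), 2))   # → 5.07
```

    Raising the imaging dose lowers the noise term in the denominator, which is why low-contrast soft-tissue boundaries need more dose than high-contrast bony anatomy to reach a usable CNR.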

  7. Decision theory applied to image quality control in radiology

    PubMed Central

    Lessa, Patrícia S; Caous, Cristofer A; Arantes, Paula R; Amaro, Edson; de Souza, Fernando M Campello

    2008-01-01

    Background The present work aims at applying decision theory to radiological image quality control (QC) in the diagnostic routine. The main problem addressed in the framework of decision theory is whether to accept or reject a film lot of a radiology service. The probability of each decision for a determined set of variables was obtained from the selected films. Methods Based on a radiology service routine, a decision probability function was determined for each considered group of combined characteristics. These characteristics were related to film quality control. These parameters were also framed in a set of 8 possibilities, resulting in 256 possible decision rules. In order to determine a general utility function to assess the decision risk, we used a simple unique parameter called r. The payoffs chosen were: diagnostic result (correct/incorrect), cost (high/low), and patient satisfaction (yes/no), resulting in eight possible combinations. Results Depending on the value of r, more or less risk will be associated with the decision-making. The utility function was evaluated in order to determine the probability of a decision. The decision was made using the opinions of patients or administrators from a radiology service center. Conclusion The model is a formal quantitative approach to making decisions about medical image quality, providing an instrument to discriminate what is really necessary to accept or reject a film or a film lot. The method presented herein can help to assess the risk level of an incorrect radiological diagnosis decision. PMID:19014545
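
    The combinatorics in the abstract (8 payoff combinations, 256 decision rules) follow directly from three binary payoff attributes. A sketch of the enumeration; the attribute labels are paraphrased from the abstract, not taken from the paper's implementation:

```python
from itertools import product

# Three binary payoff attributes -> 2**3 = 8 outcome combinations
outcomes = list(product(["correct", "incorrect"],       # diagnostic result
                        ["low", "high"],                # cost
                        ["satisfied", "unsatisfied"]))  # patient satisfaction

# A decision rule assigns accept/reject to each of the 8 combinations,
# giving 2**8 = 256 possible rules
rules = list(product(["accept", "reject"], repeat=len(outcomes)))

print(len(outcomes), len(rules))   # → 8 256
```

    Ranking these 256 rules by expected utility, with the risk attitude controlled by a parameter such as r, is what turns the enumeration into a decision procedure.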

  8. Image quality evaluation of breast tomosynthesis with synchrotron radiation

    SciTech Connect

    Malliori, A.; Bliznakova, K.; Speller, R. D.; Horrocks, J. A.; Rigon, L.; Tromba, G.; Pallikarakis, N.

    2012-09-15

    Purpose: This study investigates the image quality of tomosynthesis slices obtained from several acquisition sets with synchrotron radiation, using a breast phantom incorporating details that mimic various breast lesions in a heterogeneous background. Methods: A complex breast phantom (MAMMAX) with a heterogeneous background and a thickness corresponding to a 4.5 cm compressed breast with an average composition of 50% adipose and 50% glandular tissue was assembled using two commercial phantoms. Projection images using acquisition arcs of 24°, 32°, 40°, 48°, and 56° at an incident energy of 17 keV were obtained from the phantom at the synchrotron radiation for medical physics beamline of the ELETTRA Synchrotron Light Laboratory. The total mean glandular dose was set equal to 2.5 mGy. Tomograms were reconstructed with the simple multiple projection algorithm (MPA) and filtered MPA. In the latter case, a median filter, a sinc filter, and a combination of those two filters were applied to the experimental data prior to MPA reconstruction. Visual inspection, contrast to noise ratio, contrast, and artifact spread function were the figures of merit used in the evaluation of the visualisation and detection of low- and high-contrast breast features, as a function of the reconstruction algorithm and acquisition arc. To study the benefits of using monochromatic beams, single projection images at incident energies ranging from 14 to 27 keV were acquired with the same phantom and weighted to synthesize polychromatic images at a typical incident x-ray spectrum with a W target. Results: Filters were optimised to reconstruct features with different attenuation characteristics and dimensions. In the case of 6 mm low-contrast details, improved visual appearance as well as higher contrast to noise ratio and contrast values were observed for the two filtered MPA algorithms that exploit the sinc filter. These features are better visualized

  9. Perceptual quality metric of color quantization errors on still images

    NASA Astrophysics Data System (ADS)

    Pefferkorn, Stephane; Blin, Jean-Louis

    1998-07-01

    A new metric for the assessment of color image coding quality is presented in this paper. Two models of chromatic and achromatic error visibility have been investigated, incorporating many aspects of human vision and color perception. The achromatic model accounts for both retinal and cortical phenomena such as visual sensitivity to spatial contrast and orientation. The chromatic metric is based on a multi-channel model of human color vision that is parameterized for video coding applications using psychophysical experiments, assuming that perception of color quantization errors can be assimilated to perception of supra-threshold local color differences. The final metric merges the chromatic model and the achromatic model, which accounts for phenomena such as masking. The metric is tested on 6 real images at 5 quality levels using subjective assessments. The high correlation between objective and subjective scores shows that the described metric accurately rates the rendition of important image features such as color contours and textures.

  10. A hyperspectral imaging prototype for online quality evaluation of pickling cucumbers

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A hyperspectral imaging prototype was developed for online evaluation of external and internal quality of pickling cucumbers. The prototype had several new, unique features including simultaneous reflectance and transmittance imaging and inline, real time calibration of hyperspectral images of each ...

  11. Image quality: An overview; Proceedings of the Meeting, Arlington, VA, April 9, 10, 1985

    NASA Astrophysics Data System (ADS)

    Granger, E. M.; Baker, L. R.

    1985-12-01

    Various papers on image quality are presented. The subjects discussed include: image quality considerations in transform coding, psychophysical approach to image quality, a decision theory approach to tone reproduction, Fourier analysis of image raggedness, lens performance assessment by image quality criteria, results of preliminary work on objective MRTD measurement, resolution requirements for binarization of line art, and problems of the visual display in flight simulation. Also addressed are: emittance in thermal imaging applications, optical performance requirements for thermal imaging lenses, dynamic motion measurement using digital TV speckle interferometry, quality assurance for borescopes, versatile projector test device, operational MTF for Landsat Thematic Mapper, operational use of color perception to enhance satellite image quality, theoretical bases and measurement of the MTF of integrated image sensors, measurement of the MTF of thermal and other video systems, and underflight calibration of the Landsat Thematic Mapper.

  12. 34 CFR 85.900 - Adequate evidence.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...) Definitions § 85.900 Adequate evidence. Adequate evidence means information sufficient to support the reasonable belief that a particular act or omission has occurred. Authority: E.O. 12549 (3 CFR, 1986 Comp., p. 189); E.O 12689 (3 CFR, 1989 Comp., p. 235); 20 U.S.C. 1082, 1094, 1221e-3 and 3474; and Sec....

  13. 29 CFR 452.110 - Adequate safeguards.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 2 2010-07-01 2010-07-01 false Adequate safeguards. 452.110 Section 452.110 Labor... DISCLOSURE ACT OF 1959 Election Procedures; Rights of Members § 452.110 Adequate safeguards. (a) In addition to the election safeguards discussed in this part, the Act contains a general mandate in section...

  14. 29 CFR 452.110 - Adequate safeguards.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 29 Labor 2 2011-07-01 2011-07-01 false Adequate safeguards. 452.110 Section 452.110 Labor... DISCLOSURE ACT OF 1959 Election Procedures; Rights of Members § 452.110 Adequate safeguards. (a) In addition to the election safeguards discussed in this part, the Act contains a general mandate in section...

  15. Functional magnetic resonance imaging of awake monkeys: some approaches for improving imaging quality

    PubMed Central

    Chen, Gang; Wang, Feng; Dillenburger, Barbara C.; Friedman, Robert M.; Chen, Li M.; Gore, John C.; Avison, Malcolm J.; Roe, Anna W.

    2011-01-01

    Functional magnetic resonance imaging (fMRI), at high magnetic field strength can suffer from serious degradation of image quality because of motion and physiological noise, as well as spatial distortions and signal losses due to susceptibility effects. Overcoming such limitations is essential for sensitive detection and reliable interpretation of fMRI data. These issues are particularly problematic in studies of awake animals. As part of our initial efforts to study functional brain activations in awake, behaving monkeys using fMRI at 4.7T, we have developed acquisition and analysis procedures to improve image quality with encouraging results. We evaluated the influence of two main variables on image quality. First, we show how important the level of behavioral training is for obtaining good data stability and high temporal signal-to-noise ratios. In initial sessions, our typical scan session lasted 1.5 hours, partitioned into short (<10 minutes) runs. During reward periods and breaks between runs, the monkey exhibited movements resulting in considerable image misregistrations. After a few months of extensive behavioral training, we were able to increase the length of individual runs and the total length of each session. The monkey learned to wait until the end of a block for fluid reward, resulting in longer periods of continuous acquisition. Each additional 60 training sessions extended the duration of each session by 60 minutes, culminating, after about 140 training sessions, in sessions that last about four hours. As a result, the average translational movement decreased from over 500 μm to less than 80 μm, a displacement close to that observed in anesthetized monkeys scanned in a 7 T horizontal scanner. Another major source of distortion at high fields arises from susceptibility variations. To reduce such artifacts, we used segmented gradient-echo echo-planar imaging (EPI) sequences. Increasing the number of segments significantly decreased susceptibility

  16. Image quality in CT: From physical measurements to model observers.

    PubMed

    Verdun, F R; Racine, D; Ott, J G; Tapiovaara, M J; Toroi, P; Bochud, F O; Veldkamp, W J H; Schegerer, A; Bouwman, R W; Giron, I Hernandez; Marshall, N W; Edyvean, S

    2015-12-01

    Evaluation of image quality (IQ) in Computed Tomography (CT) is important to ensure that diagnostic questions are correctly answered, whilst keeping the radiation dose to the patient as low as is reasonably possible. The assessment of individual aspects of IQ is already a key component of routine quality control of medical x-ray devices. These values, together with standard dose indicators, can be combined into 'figures of merit' (FOM) to characterise the dose efficiency of CT scanners operating in certain modes. The demand for clinically relevant IQ characterisation has naturally increased with the development of CT technology (detector efficiency, image reconstruction and processing), resulting in the adaptation and evolution of assessment methods. The purpose of this review is to present the spectrum of methods that have been used to characterise image quality in CT: from objective measurements of physical parameters to clinically task-based approaches (i.e. the model observer (MO) approach), including the pure human observer approach. When combined with a dose indicator, a generalised dose efficiency index can be explored in a framework of system and patient dose optimisation. We focus on the IQ methodologies that are required for dealing with standard reconstruction, but also with iterative reconstruction algorithms. With this concept, the previously used FOM are presented with a proposal to update them in order to keep them relevant and up to date with technological progress. The MO, which objectively assesses IQ for clinically relevant tasks, represents the most promising method in terms of matching radiologist performance and is therefore of most relevance in the clinical environment. PMID:26459319

  17. Degraded visual environment image/video quality metrics

    NASA Astrophysics Data System (ADS)

    Baumgartner, Dustin D.; Brown, Jeremy B.; Jacobs, Eddie L.; Schachter, Bruce J.

    2014-06-01

    A number of image quality metrics (IQMs) and video quality metrics (VQMs) have been proposed in the literature for evaluating techniques and systems for mitigating degraded visual environments. Some require both pristine and corrupted imagery. Others require patterned target boards in the scene. None of these metrics relates well to the task of landing a helicopter in conditions such as a brownout dust cloud. We have developed and used a variety of IQMs and VQMs related to the pilot's ability to detect hazards in the scene and to maintain situational awareness. Some of these metrics can be made agnostic to sensor type. Not only are the metrics suitable for evaluating algorithm and sensor variation, they are also suitable for choosing the most cost effective solution to improve operating conditions in degraded visual environments.

  18. Perceptual image quality assessment: recent progress and trends

    NASA Astrophysics Data System (ADS)

    Lin, Weisi; Narwaria, Manish

    2010-07-01

    Image quality assessment (IQA) is useful in many visual processing systems but challenging to perform in line with human perception. A great deal of recent research effort has been directed towards IQA. In order to overcome the difficulty and infeasibility of subjective tests in many situations, the aim of such effort is to assess visual quality objectively, in better alignment with the perception of the human visual system (HVS). In this work, we review and analyze the recent progress in the areas related to IQA, giving our own views wherever possible. Following the recent trends, we discuss the engineering approach in more detail, explore the related aspects of feature pooling, and present a case study with machine learning.

  19. Image quality of a cone beam O-arm 3D imaging system

    NASA Astrophysics Data System (ADS)

    Zhang, Jie; Weir, Victor; Lin, Jingying; Hsiung, Hsiang; Ritenour, E. Russell

    2009-02-01

    The O-arm is a cone beam imaging system designed primarily to support orthopedic surgery; it is also used for image-guided and vascular surgery. Using a gantry that can be opened or closed, the O-arm can function as a 2-dimensional (2D) fluoroscopy device or collect 3-dimensional (3D) volumetric imaging data like a CT system. Clinical applications of the O-arm in spine surgical procedures, assessment of pedicle screw position, and kyphoplasty procedures show that the O-arm 3D mode provides enhanced imaging information compared to radiographs or fluoroscopy alone. In this study, the image quality of an O-arm system was quantitatively evaluated. A 20 cm diameter CATPHAN 424 phantom was scanned using the pre-programmed head protocols: small/medium (120 kVp, 100 mAs), large (120 kVp, 128 mAs), and extra-large (120 kVp, 160 mAs) in 3D mode. High resolution reconstruction mode (512×512×0.83 mm) was used to reconstruct images for the analysis of low- and high-contrast resolution and the noise power spectrum. The MTF was measured using the point spread function. The results show that the O-arm image is uniform but exhibits a noise pattern that cannot be removed by simply increasing the mAs. The high-contrast resolution of the O-arm system was approximately 9 lp/cm, and the system has 10% MTF at 0.45 mm. The low-contrast resolution could not be determined due to the noise pattern. For surgery where the location of a structure is emphasized over a survey of all image details, the image quality of the O-arm is well accepted clinically.
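
    Deriving the MTF from a measured point spread function amounts to taking the normalized magnitude of its Fourier transform. A sketch for the 1-D case, using a hypothetical Gaussian PSF rather than O-arm data:

```python
import numpy as np

def mtf_from_psf(psf):
    """MTF as the normalized magnitude of the Fourier transform of a 1-D PSF."""
    m = np.abs(np.fft.rfft(psf))
    return m / m[0]

dx = 0.1                                  # sampling pitch, mm
x = np.arange(-64, 64) * dx
psf = np.exp(-x**2 / (2 * 0.3**2))        # hypothetical Gaussian PSF, sigma = 0.3 mm
mtf = mtf_from_psf(psf)
freqs = np.fft.rfftfreq(x.size, d=dx)     # spatial frequencies, cycles/mm

f10 = freqs[np.argmax(mtf < 0.1)]         # first frequency where MTF drops below 10%
print(round(float(f10), 2))               # ≈ 1.17 cycles/mm for this PSF
```

    Quoting the 10% MTF point as a distance, as the abstract does (0.45 mm), is essentially the reciprocal statement of such a limiting spatial frequency.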

  20. Automated techniques for quality assurance of radiological image modalities

    NASA Astrophysics Data System (ADS)

    Goodenough, David J.; Atkins, Frank B.; Dyer, Stephen M.

    1991-05-01

    This paper will attempt to identify many of the important issues for quality assurance (QA) of radiological modalities. It is of course to be realized that QA can span many aspects of the diagnostic decision making process. These issues range from physical image performance levels to and through the diagnostic decision of the radiologist. We will use as a model for automated approaches a program we have developed to work with computed tomography (CT) images. In an attempt to unburden the user, and in an effort to facilitate the performance of QA, we have been studying automated approaches. The ultimate utility of the system is its ability to render in a safe and efficacious manner, decisions that are accurate, sensitive, specific and which are possible within the economic constraints of modern health care delivery.

  1. Sentinel-2 geometric image quality commissioning: first results

    NASA Astrophysics Data System (ADS)

    Languille, F.; Déchoz, C.; Gaudel, A.; Greslou, D.; de Lussy, F.; Trémas, T.; Poulain, V.

    2015-10-01

    In the frame of the Copernicus program of the European Commission, Sentinel-2 will offer multispectral high-spatial-resolution optical images over global terrestrial surfaces. In cooperation with ESA, the Centre National d'Etudes Spatiales (CNES) is in charge of the image quality of the project, and will thus oversee the CAL/VAL commissioning phase during the months following the launch. Sentinel-2 is a constellation of 2 satellites on a polar sun-synchronous orbit with a revisit time of 5 days (with both satellites), a wide field of view (290 km), 13 spectral bands in the visible and shortwave infrared, and high spatial resolution (10 m, 20 m and 60 m). The Sentinel-2 mission offers global coverage over terrestrial surfaces. The satellites systematically acquire terrestrial surfaces under the same viewing conditions in order to build temporal image stacks. The first satellite was launched in June 2015. Following the launch, the CAL/VAL commissioning phase will last 6 months for geometrical calibration. This paper first provides explanations about the Sentinel-2 products delivered with geometric corrections. It then details the calibration sites and the methods used for calibrating the geometrical parameters, and presents the first associated results. The following topics are presented: viewing frame orientation assessment, focal plane mapping for all spectral bands, first results on geolocation assessment, and multispectral registration. Images are systematically recalibrated against a common reference, which will be a set of S2 images produced during the 6 months of CAL/VAL. As it takes time to gather all the needed images, the geolocation performance with ground control points and the multitemporal performance are only first results and will be improved during the last phase of the CAL/VAL. This paper therefore mainly shows the system performances, the preliminary product performances, and the methods used to assess them.

  2. SAR image quality effects of damped phase and amplitude errors

    NASA Astrophysics Data System (ADS)

    Zelenka, Jerry S.; Falk, Thomas

    The effects of damped multiplicative, amplitude, or phase errors on the image quality of synthetic-aperture radar systems are considered. These types of errors can result from aircraft maneuvers or the mechanical steering of an antenna. The proper treatment of damped multiplicative errors can lead to related design specifications and possibly an enhanced collection capability. Only small, high-frequency errors are considered. Expressions for the average intensity and energy associated with a damped multiplicative error are presented and used to derive graphic results. A typical example is used to show how to apply the results of this effort.

  3. An automated system for numerically rating document image quality

    SciTech Connect

    Cannon, M.; Kelly, P.; Iyengar, S.S.; Brener, N.

    1997-04-01

    As part of the Department of Energy document declassification program, the authors have developed a numerical rating system to predict the OCR error rate that they expect to encounter when processing a particular document. The rating algorithm produces a vector containing scores for different document image attributes such as speckle and touching characters. The OCR error rate for a document is computed from a weighted sum of the elements of the corresponding quality vector. The predicted OCR error rate will be used to screen documents that would not be handled properly with existing document processing products.
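
    The prediction step described above is a plain weighted sum over the quality vector. A minimal sketch; the attribute names echo the abstract, but the scores and weights are invented for illustration:

```python
# Per-attribute degradation scores for one scanned page (higher = worse)
quality_vector = {"speckle": 0.42, "touching_chars": 0.30,
                  "broken_chars": 0.15, "background_noise": 0.08}

# Hypothetical weights, e.g. fit by regression against measured OCR error rates
weights = {"speckle": 0.25, "touching_chars": 0.40,
           "broken_chars": 0.30, "background_noise": 0.05}

predicted_error_rate = sum(weights[k] * quality_vector[k] for k in quality_vector)
print(round(predicted_error_rate, 3))   # → 0.274
```

    Documents whose predicted error rate exceeds a chosen threshold can then be screened out before OCR, which is the screening use the abstract describes.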

  4. New strategy for image and video quality assessment

    NASA Astrophysics Data System (ADS)

    Ma, Qi; Zhang, Liming; Wang, Bin

    2010-01-01

    Image and video quality assessment (QA) is a critical issue in image and video processing applications. General full-reference (FR) QA criteria such as peak signal-to-noise ratio (PSNR) and mean squared error (MSE) do not accord well with human subjective assessment. Some QA indices that consider human visual sensitivity, such as mean structural similarity (MSSIM) with structural sensitivity and visual information fidelity (VIF) with statistical sensitivity, were proposed in view of the differences between reference and distorted frames at a pixel or local level. However, they ignore the role of human visual attention (HVA). Recently, some new strategies incorporating HVA have been proposed, but the methods for extracting visual attention are too complex for real-time realization. We take advantage of the phase spectrum of the quaternion Fourier transform (PQFT), a very fast algorithm we previously proposed, to extract saliency maps of color images or videos. We then propose saliency-based methods for both image QA (IQA) and video QA (VQA) by adding weights related to saliency features to the original IQA or VQA criteria. Experimental results show that our saliency-based strategy approaches human subjective assessment more closely than the original IQA or VQA methods and does not take more time, thanks to the fast PQFT algorithm.
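
    The core idea of weighting an FR criterion by saliency can be illustrated with MSE: identical pixel errors count more where the saliency map says attention is drawn. A toy sketch (the 4×4 images and saliency map are contrived; the actual method derives its saliency maps from the PQFT):

```python
import numpy as np

def saliency_weighted_mse(ref, dist, saliency):
    """MSE with per-pixel weights taken from a saliency map (weights sum to 1)."""
    w = saliency / saliency.sum()
    return float((w * (ref - dist) ** 2).sum())

ref = np.zeros((4, 4))
dist = ref.copy()
dist[0, 0] = 1.0                   # error at an attended location
dist[3, 3] = 1.0                   # equal-size error at an ignored location
sal = np.ones((4, 4))
sal[0, 0] = 9.0                    # saliency map peaks at (0, 0)

plain = float(((ref - dist) ** 2).mean())         # treats both errors alike
weighted = saliency_weighted_mse(ref, dist, sal)  # penalizes the salient error more
```

    Here the plain MSE is 0.125 regardless of where the errors sit, while the weighted score rises to about 0.417 because most of the weight falls on the attended error.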

  5. Influence of slice overlap on positron emission tomography image quality

    NASA Astrophysics Data System (ADS)

    McKeown, Clare; Gillen, Gerry; Dempsey, Mary Frances; Findlay, Caroline

    2016-02-01

    PET scans use overlapping acquisition beds to correct for reduced sensitivity at bed edges. The optimum overlap size for the General Electric (GE) Discovery 690 has not been established. This study assesses how image quality is affected by slice overlap. The efficacy of 23% overlaps (recommended by GE) and 49% overlaps (the maximum possible overlap) was specifically assessed. European Association of Nuclear Medicine (EANM) guidelines for calculating minimum injected activities based on overlap size were also reviewed. A uniform flood phantom was used to assess noise (coefficient of variation, COV) and voxel accuracy (activity concentrations, Bq ml-1). A NEMA (National Electrical Manufacturers Association) body phantom with hot/cold spheres in a background activity was used to assess contrast recovery coefficients (CRCs) and signal-to-noise ratios (SNR). Different overlap sizes and sphere-to-background ratios were assessed. COVs for 49% and 23% overlaps were 9% and 13% respectively. This increased noise was difficult to visualise on the 23% overlap images. Mean voxel activity concentrations were not affected by overlap size. No clinically significant differences in CRCs were observed. However, the visibility and SNR of small, low-contrast spheres (⩽13 mm diameter, 2:1 sphere-to-background ratio) may be affected by overlap size in low count studies if they are located in the overlap area. There was minimal detectable influence on image quality in terms of noise, mean activity concentrations or mean CRCs when comparing 23% overlap with 49% overlap. Detectability of small, low-contrast lesions may be affected in low count studies; however, this is a worst-case scenario. The marginal benefits of increasing overlap from 23% to 49% are likely to be offset by increased patient scan times. A 23% overlap is therefore appropriate for clinical use. An amendment to EANM guidelines for calculating injected activities is also proposed which better reflects the effect overlap size has
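
    The noise figure reported here, the coefficient of variation, is simply the standard deviation of voxel values in the uniform phantom divided by their mean. A sketch with made-up voxel values:

```python
import numpy as np

def cov_percent(voxels):
    """Image noise as the coefficient of variation: 100 * std / mean."""
    v = np.asarray(voxels, dtype=float)
    return float(v.std(ddof=0) / v.mean() * 100.0)

# Hypothetical uniform-phantom voxel values (Bq/ml)
print(round(cov_percent([90.0, 110.0, 90.0, 110.0]), 1))   # → 10.0
```

    On this measure the 23% overlap images were somewhat noisier than the 49% ones (COV 13% vs. 9%), though the study found the difference hard to see visually.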

  6. Improve the image quality of orbital 3 T diffusion-weighted magnetic resonance imaging with readout-segmented echo-planar imaging.

    PubMed

    Xu, Xiao-Quan; Liu, Jun; Hu, Hao; Su, Guo-Yi; Zhang, Yu-Dong; Shi, Hai-Bin; Wu, Fei-Yun

    2016-01-01

    The aim of our study was to compare the image quality of readout-segmented echo-planar imaging (rs-EPI) with that of standard single-shot EPI (ss-EPI) in orbital 3 T diffusion-weighted (DW) magnetic resonance (MR) imaging in healthy subjects. Forty-two volunteers underwent two sets of orbital DW imaging scans at a 3 T MR unit, and image quality was assessed qualitatively and quantitatively. As a result, we found that rs-EPI could provide better image quality than standard ss-EPI, while no significant difference was found in the apparent diffusion coefficient between the two sets of DW images. PMID:27317226

  7. Retinal Image Quality Assessment for Spaceflight-Induced Vision Impairment Study

    NASA Technical Reports Server (NTRS)

    Vu, Amanda Cadao; Raghunandan, Sneha; Vyas, Ruchi; Radhakrishnan, Krishnan; Taibbi, Giovanni; Vizzeri, Gianmarco; Grant, Maria; Chalam, Kakarla; Parsons-Wingerter, Patricia

    2015-01-01

    Long-term exposure to space microgravity poses significant risks for visual impairment. Evidence suggests such vision changes are linked to cephalad fluid shifts, prompting a need to directly quantify microgravity-induced retinal vascular changes. The quality of retinal images used for such vascular remodeling analysis, however, is dependent on imaging methodology. For our exploratory study, we hypothesized that retinal images captured using fluorescein imaging methodologies would be of higher quality in comparison to images captured without fluorescein. A semi-automated image quality assessment was developed using Vessel Generation Analysis (VESGEN) software and MATLAB® image analysis toolboxes. An analysis of ten images found that the fluorescein imaging modality provided a 36% increase in overall image quality (two-tailed p=0.089) in comparison to nonfluorescein imaging techniques.

  8. A color image quality assessment using a reduced-reference image machine learning expert

    NASA Astrophysics Data System (ADS)

    Charrier, Christophe; Lebrun, Gilles; Lezoray, Olivier

    2008-01-01

    A quality metric based on a classification process is introduced. The main idea of the proposed method is to avoid the error pooling step over many factors (in the frequency and spatial domains) commonly applied to obtain a final quality score. Instead, a classification process assigns each image a final quality class with respect to the standard quality scale provided by the ITU. Thus, for each degraded color image, a feature vector is computed that includes several Human Visual System characteristics, such as the contrast masking effect and color correlation. Selected features are of two kinds: 1) full-reference features and 2) no-reference characteristics. In this way, a machine learning expert providing a final class number is designed.
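    The classification idea can be sketched with a toy feature space. A nearest-centroid classifier stands in here for the paper's machine-learning expert, and the feature values and class labels are invented for illustration:

```python
import numpy as np

def nearest_centroid_fit(X, y):
    """Learn one centroid per quality class from feature vectors X."""
    classes = np.unique(y)
    centroids = np.array([X[y == c].mean(axis=0) for c in classes])
    return classes, centroids

def nearest_centroid_predict(X, classes, centroids):
    """Assign each feature vector to the class of the nearest centroid."""
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return classes[d.argmin(axis=1)]

# Toy 2-D feature vectors, e.g. [contrast-masking score, colour correlation].
rng = np.random.default_rng(1)
X_good = rng.normal([0.9, 0.8], 0.05, size=(20, 2))
X_bad = rng.normal([0.3, 0.2], 0.05, size=(20, 2))
X = np.vstack([X_good, X_bad])
y = np.array([5] * 20 + [1] * 20)  # ITU-style classes: 5 = excellent, 1 = bad

classes, centroids = nearest_centroid_fit(X, y)
pred = nearest_centroid_predict(np.array([[0.85, 0.75]]), classes, centroids)
```

The key point mirrored here is that the output is a discrete quality class rather than a pooled continuous score.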

  9. A novel technique of image quality objective measurement by wavelet analysis throughout the spatial frequency range

    NASA Astrophysics Data System (ADS)

    Luo, Gaoyong

    2005-01-01

    An essential determinant of the value of surrogate digital images is their quality. Image quality measurement has become crucial for most image processing applications. Over the past years, there have been many attempts to develop models or metrics for image quality that incorporate elements of human visual sensitivity. However, there is no current standard and objective definition of spectral image quality. This paper proposes a reliable automatic method for objective image quality measurement by wavelet analysis throughout the spatial frequency range. This is done by a detailed analysis of an image over a wide range of spatial frequency content, using a combination of modulation transfer function (MTF), brightness, contrast, saturation, sharpness and noise, as a more revealing metric for quality evaluation. A fast lifting wavelet algorithm is developed for computationally efficient spatial frequency analysis, in which fine image detail corresponding to high spatial frequencies and image sharpness in regard to lower and mid-range spatial frequencies can be examined and compared accordingly. The wavelet frequency decomposition in effect extracts edge features in the sub-band images. The technique provides a means to relate the quality of an image to its interpretation and quantification throughout the frequency range, with the noise level estimated to assist the quality analysis. The experimental results of using this method for image quality measurement exhibit good correlation with subjective visual quality assessments.
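    A one-level Haar lifting step (split, predict, update) is the simplest instance of the fast lifting scheme mentioned. The sharpness proxy below, the fraction of signal energy in the detail (high-frequency) band, is an illustrative reduction of the paper's method, not its full metric:

```python
import numpy as np

def haar_lifting_1level(img):
    """One Haar lifting step along rows: split into even/odd, predict, update."""
    even, odd = img[:, ::2].astype(float), img[:, 1::2].astype(float)
    detail = odd - even              # predict: high-pass band (fine detail, edges)
    approx = even + detail / 2.0     # update: low-pass band (coarse content)
    return approx, detail

def detail_energy_ratio(img):
    """Fraction of total energy carried by the high-frequency band."""
    approx, detail = haar_lifting_1level(img)
    return (detail ** 2).sum() / ((approx ** 2).sum() + (detail ** 2).sum())

# A sharp step edge vs. a blurred (ramp) version of the same edge.
edge = np.zeros((32, 32)); edge[:, 15:] = 1.0
ramp = np.clip((np.arange(32) - 13) / 4.0, 0.0, 1.0)
blurred = np.tile(ramp, (32, 1))
```

As expected, the sharp edge concentrates more energy in the detail band than its blurred counterpart, which is the behaviour the sub-band edge analysis exploits.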

  10. A novel technique of image quality objective measurement by wavelet analysis throughout the spatial frequency range

    NASA Astrophysics Data System (ADS)

    Luo, Gaoyong

    2004-10-01

    An essential determinant of the value of surrogate digital images is their quality. Image quality measurement has become crucial for most image processing applications. Over the past years, there have been many attempts to develop models or metrics for image quality that incorporate elements of human visual sensitivity. However, there is no current standard and objective definition of spectral image quality. This paper proposes a reliable automatic method for objective image quality measurement by wavelet analysis throughout the spatial frequency range. This is done by a detailed analysis of an image over a wide range of spatial frequency content, using a combination of modulation transfer function (MTF), brightness, contrast, saturation, sharpness and noise, as a more revealing metric for quality evaluation. A fast lifting wavelet algorithm is developed for computationally efficient spatial frequency analysis, in which fine image detail corresponding to high spatial frequencies and image sharpness in regard to lower and mid-range spatial frequencies can be examined and compared accordingly. The wavelet frequency decomposition in effect extracts edge features in the sub-band images. The technique provides a means to relate the quality of an image to its interpretation and quantification throughout the frequency range, with the noise level estimated to assist the quality analysis. The experimental results of using this method for image quality measurement exhibit good correlation with subjective visual quality assessments.

  11. Measuring saliency in images: which experimental parameters for the assessment of image quality?

    NASA Astrophysics Data System (ADS)

    Fredembach, Clement; Woolfe, Geoff; Wang, Jue

    2012-01-01

    Predicting which areas of an image are perceptually salient or attended to has become an essential pre-requisite of many computer vision applications. Because observers are notoriously unreliable in remembering where they look a posteriori, and because asking where they look while observing the image necessarily influences the results, ground truth about saliency and visual attention has to be obtained by gaze tracking methods. From the early work of Buswell and Yarbus to the most recent forays in computer vision there has been, perhaps unfortunately, little agreement on standardisation of eye tracking protocols for measuring visual attention. As the number of parameters involved in experimental methodology can be large, their individual influence on the final results is not well understood. Consequently, the performance of saliency algorithms, when assessed by correlation techniques, varies greatly across the literature. In this paper, we concern ourselves with the problem of image quality. Specifically: where people look when judging images. We show that in this case, the performance gap between existing saliency prediction algorithms and experimental results is significantly larger than otherwise reported. To understand this discrepancy, we first devise an experimental protocol that is adapted to the task of measuring image quality. In a second step, we compare our experimental parameters with the ones of existing methods and show that a lot of the variability can directly be ascribed to these differences in experimental methodology and choice of variables. In particular, the choice of a task, e.g., judging image quality vs. free viewing, has a great impact on measured saliency maps, suggesting that even for a mildly cognitive task, ground truth obtained by free viewing does not adapt well. Careful analysis of the prior art also reveals that systematic bias can occur depending on instrumental calibration and the choice of test images. We conclude this work by proposing a

  12. Image quality degradation and retrieval errors introduced by registration and interpolation of multispectral digital images

    SciTech Connect

    Henderson, B.G.; Borel, C.C.; Theiler, J.P.; Smith, B.W.

    1996-04-01

    Full utilization of multispectral data acquired by whiskbroom and pushbroom imagers requires that the individual channels be registered accurately. Poor registration introduces errors which can be significant, especially in high contrast areas such as boundaries between regions. We simulate the acquisition of multispectral imagery in order to estimate the errors that are introduced by co-registration of different channels and interpolation within the images. We compute the Modulation Transfer Function (MTF) and image quality degradation brought about by fractional pixel shifting and calculate errors in retrieved quantities (surface temperature and water vapor) that occur as a result of interpolation. We also present a method which might be used to estimate sensor platform motion for accurate registration of images acquired by a pushbroom scanner.
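    The image-quality degradation caused by fractional pixel shifting can be illustrated in one dimension: a linear interpolator has transfer-function magnitude |(1-f) + f·exp(-iw)|, which attenuates high spatial frequencies and thus lowers the MTF. This is a generic illustration of the effect, not the authors' simulation:

```python
import numpy as np

def fractional_shift_linear(signal, f):
    """Shift a 1-D signal by a fraction f (0 <= f < 1) of a pixel
    via linear interpolation between neighbouring samples."""
    return (1.0 - f) * signal + f * np.roll(signal, -1)

def interp_mtf(f, w):
    """Transfer-function magnitude of the linear interpolator at
    angular frequency w (rad/sample): |(1-f) + f*exp(-i*w)| <= 1."""
    return abs((1.0 - f) + f * np.exp(-1j * w))

n = 256
w = np.pi / 2                       # a quarter of the sampling frequency
x = np.cos(w * np.arange(n))        # test sinusoid; period 4 divides n, so
shifted = fractional_shift_linear(x, 0.5)  # the wrap in np.roll is exact

# Amplitude of the shifted sinusoid, read off its Fourier bin:
amp = np.abs(np.fft.rfft(shifted))[n // 4] / (n / 2)
```

A half-pixel shift at this frequency attenuates the sinusoid by a factor of about 0.707, which is exactly what the interpolator's transfer function predicts; the worst degradation occurs at f = 0.5, consistent with the half-pixel case being the one studied.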

  13. Image quality and stability of image-guided radiotherapy (IGRT) devices: A comparative study

    PubMed Central

    Stock, Markus; Pasler, Marlies; Birkfellner, Wolfgang; Homolka, Peter; Poetter, Richard; Georg, Dietmar

    2010-01-01

    Introduction Our aim was to implement standards for quality assurance of IGRT devices used in our department and to compare their performance with that of a CT simulator. Materials and methods We investigated image quality parameters for three devices over a period of 16 months. A multislice CT was used as a benchmark and results related to noise, spatial resolution, low contrast visibility (LCV) and uniformity were compared with a cone beam CT (CBCT) at a linac and simulator. Results All devices performed well in terms of LCV and, in fact, exceeded vendor specifications. MTF was comparable between CT and linac CBCT. Integral nonuniformity was, on average, 0.002 for the CT and 0.006 for the linac CBCT. Uniformity, LCV and MTF varied depending on the protocols used for the linac CBCT. Contrast-to-noise ratio was on average 51% higher for the CT than for the linac and simulator CBCT. No significant time trend was observed and tolerance limits were implemented. Discussion Reasonable differences in image quality between CT and CBCT were observed. Further research and development are necessary to increase the image quality of commercially available CBCT devices in order for them to serve the needs of adaptive and/or online planning. PMID:19695725
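    The two headline metrics, contrast-to-noise ratio and integral nonuniformity, can be sketched under their usual definitions (an assumption; the paper does not spell out its formulas). The ROI values below are synthetic, chosen so the nonuniformity lands near the 0.002 reported for the CT:

```python
import numpy as np

def cnr(roi_insert, roi_background):
    """Contrast-to-noise ratio between a contrast-insert ROI and background."""
    return abs(roi_insert.mean() - roi_background.mean()) / roi_background.std()

def integral_nonuniformity(roi_means):
    """Integral nonuniformity over ROI means: (max - min) / (max + min)."""
    m = np.asarray(roi_means, dtype=float)
    return (m.max() - m.min()) / (m.max() + m.min())

rng = np.random.default_rng(2)
background = rng.normal(100.0, 5.0, size=1000)   # synthetic HU-like values
insert_roi = rng.normal(130.0, 5.0, size=1000)

c = cnr(insert_roi, background)
inu = integral_nonuniformity([100.0, 100.4])     # centre vs. peripheral ROI means
```

Tracking these scalar metrics over time, as the 16-month study does, is what makes tolerance limits and trend detection possible.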

  14. Relations between local and global perceptual image quality and visual masking

    NASA Astrophysics Data System (ADS)

    Alam, Md Mushfiqul; Patil, Pranita; Hagan, Martin T.; Chandler, Damon M.

    2015-03-01

    Perceptual quality assessment of digital images and videos is important for various image-processing applications. For assessing image quality, researchers have often used the idea of visual masking (or distortion visibility) to design image-quality predictors specifically for near-threshold distortions. However, it is still unknown how local distortion visibility relates to local quality scores when assessing the quality of natural images. Furthermore, the mechanism by which local quality scores are summed to predict global quality scores is also crucial for better prediction of perceptual image quality. In this paper, the local and global qualities of six images and six distortion levels were measured using subjective experiments. A Gabor-noise target was used as the distortion in the quality-assessment experiments to be consistent with our previous study [Alam, Vilankar, Field, and Chandler, Journal of Vision, 2014], in which the local root-mean-square contrast detection thresholds for detecting the Gabor-noise target were measured at each spatial location of the undistorted images. Comparison of the results of this quality-assessment experiment and the previous detection experiment shows that masking correctly predicted more than 95% of the local quality scores above a 15 dB threshold, to within 5% of the subject scores. Furthermore, it was found that an approximate squared summation of local quality scores predicted the global quality scores well (Spearman rank-order correlation 0.97).
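    The "squared summation" of local scores can be read as Minkowski pooling with exponent 2 (an interpretation, not the paper's exact formula). A sketch with made-up local score maps shows why the exponent matters: two images with equal mean local distortion pool differently when the distortion is concentrated:

```python
import numpy as np

def minkowski_pooling(local_scores, p=2.0):
    """Pool local quality scores into one global score; p=2 is the
    'squared summation' case, p=1 reduces to plain averaging."""
    local_scores = np.asarray(local_scores, dtype=float)
    return (np.mean(local_scores ** p)) ** (1.0 / p)

# Same mean local distortion (2.0), different spatial spread:
uniform = np.full(16, 2.0)                       # evenly spread distortion
concentrated = np.array([8.0] * 4 + [0.0] * 12)  # distortion in a few patches
```

With p = 2 the concentrated map pools to a higher (worse) global score than the uniform one, matching the intuition that a few highly visible distortions dominate perceived quality.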

  15. Correlation of the clinical and physical image quality in chest radiography for average adults with a computed radiography imaging system

    PubMed Central

    Wood, T J; Beavis, A W; Saunderson, J R

    2013-01-01

    Objective: The purpose of this study was to examine the correlation between the quality of visually graded patient (clinical) chest images and a quantitative assessment of chest phantom (physical) images acquired with a computed radiography (CR) imaging system. Methods: The results of a previously published study, in which four experienced image evaluators graded computer-simulated postero-anterior chest images using a visual grading analysis scoring (VGAS) scheme, were used for the clinical image quality measurement. Contrast-to-noise ratio (CNR) and effective dose efficiency (eDE) were used as physical image quality metrics measured in a uniform chest phantom. Although optimal values of these physical metrics for chest radiography were not derived in this work, their correlation with VGAS in images acquired without an antiscatter grid across the diagnostic range of X-ray tube voltages was determined using Pearson’s correlation coefficient. Results: Clinical and physical image quality metrics increased with decreasing tube voltage. Statistically significant correlations between VGAS and CNR (R=0.87, p<0.033) and eDE (R=0.77, p<0.008) were observed. Conclusion: Medical physics experts may use the physical image quality metrics described here in quality assurance programmes and optimisation studies with a degree of confidence that they reflect the clinical image quality in chest CR images acquired without an antiscatter grid. Advances in knowledge: A statistically significant correlation has been found between the clinical and physical image quality in CR chest imaging. The results support the value of using CNR and eDE in the evaluation of quality in clinical thorax radiography. PMID:23568362
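    The correlation analysis here is a standard Pearson correlation between the visual scores (VGAS) and a physical metric (CNR). A minimal sketch with hypothetical values, assuming only the qualitative trend reported (both metrics rise as tube voltage falls); these are not the study's data:

```python
import numpy as np

# Hypothetical paired observations across the diagnostic kV range:
kv = np.array([90, 100, 110, 120, 125])           # X-ray tube voltages
vgas = np.array([62.0, 58.0, 55.0, 51.0, 50.0])   # clinical score (falls with kV)
cnr_vals = np.array([11.0, 10.1, 9.5, 8.8, 8.6])  # physical metric (falls with kV)

# Pearson's correlation coefficient between the two metrics:
r = np.corrcoef(vgas, cnr_vals)[0, 1]
```

A strong positive r, as the study found (R = 0.87 for CNR), is what justifies using the cheap phantom metric as a stand-in for visually graded clinical quality.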

  16. Cone beam computed tomography radiation dose and image quality assessments.

    PubMed

    Lofthag-Hansen, Sara

    2010-01-01

    Diagnostic radiology has undergone profound changes in the last 30 years. New technologies are available to the dental field, cone beam computed tomography (CBCT) being one of the most important. CBCT is a catch-all term for a technology comprising a variety of machines differing in many respects: patient positioning, volume size (FOV), radiation quality, image capturing and reconstruction, image resolution and radiation dose. When new technology is introduced one must make sure that diagnostic accuracy is better than, or at least as good as, that of the technique it can be expected to replace. Two versions of one CBCT brand, the Accuitomo (Morita, Japan), were tested: the 3D Accuitomo with an image intensifier as detector, FOV 3 cm x 4 cm, and the 3D Accuitomo FPD with a flat panel detector, FOVs 4 cm x 4 cm and 6 cm x 6 cm. The 3D Accuitomo was compared with intra-oral radiography for endodontic diagnosis in 35 patients with 46 teeth analyzed, of which 41 were endodontically treated. Three observers assessed the images by consensus. The result showed that CBCT imaging was superior, with a higher number of teeth diagnosed with periapical lesions (42 vs 32 teeth). When evaluating 3D Accuitomo examinations of the posterior mandible in 30 patients, visibility of the marginal bone crest and mandibular canal, important anatomic structures for implant planning, was high with good observer agreement among seven observers. Radiographic techniques have to be evaluated concerning radiation dose, which requires well-defined and easy-to-use methods. Two methods, the CT dose index (CTDI), the prevailing method for CT units, and the dose-area product (DAP), were evaluated for calculating effective dose (E) for both units. An asymmetric dose distribution was revealed when a clinical situation was simulated. Hence, the CTDI method was not applicable for these units with small FOVs. Based on DAP values from 90 patient examinations effective dose was estimated for three diagnostic tasks: implant planning in posterior mandible and

  17. Comprehensive model for predicting perceptual image quality of smart mobile devices.

    PubMed

    Gong, Rui; Xu, Haisong; Luo, M R; Li, Haifeng

    2015-01-01

    An image quality model for smart mobile devices was proposed based on visual assessments of several image quality attributes. A series of psychophysical experiments were carried out on two kinds of smart mobile devices, i.e., smart phones and tablet computers, in which naturalness, colorfulness, brightness, contrast, sharpness, clearness, and overall image quality were visually evaluated under three lighting environments via categorical judgment method for various application types of test images. On the basis of Pearson correlation coefficients and factor analysis, the overall image quality could first be predicted by its two constituent attributes with multiple linear regression functions for different types of images, respectively, and then the mathematical expressions were built to link the constituent image quality attributes with the physical parameters of smart mobile devices and image appearance factors. The procedure and algorithms were applicable to various smart mobile devices, different lighting conditions, and multiple types of images, and performance was verified by the visual data. PMID:25967010
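    The first modelling stage, predicting overall quality from two constituent attributes by multiple linear regression, can be sketched with synthetic visual data; the attribute names, coefficients, and score scale below are illustrative assumptions, not the paper's fitted model:

```python
import numpy as np

# Synthetic "visual data": overall quality driven by two constituent
# attributes (say, clearness and colourfulness) plus rating noise.
rng = np.random.default_rng(3)
n = 40
clearness = rng.uniform(1.0, 7.0, n)       # categorical-judgment scale 1..7
colorfulness = rng.uniform(1.0, 7.0, n)
overall = 0.6 * clearness + 0.3 * colorfulness + 0.5 + rng.normal(0, 0.05, n)

# Multiple linear regression via least squares (design matrix with intercept):
A = np.column_stack([clearness, colorfulness, np.ones(n)])
coef, *_ = np.linalg.lstsq(A, overall, rcond=None)
```

The fitted coefficients recover the generating weights, illustrating how a per-image-type regression of overall quality on two attributes would be calibrated against observer data.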

  18. No-reference remote sensing image quality assessment using a comprehensive evaluation factor

    NASA Astrophysics Data System (ADS)

    Wang, Lin; Wang, Xu; Li, Xiao; Shao, Xiaopeng

    2014-05-01

    Conventional image quality assessment algorithms, such as Peak Signal to Noise Ratio (PSNR), Mean Square Error (MSE) and structural similarity (SSIM), need the original image as a reference. They are not applicable to remote sensing images, for which the original image cannot be assumed to be available. In this paper, a No-reference Image Quality Assessment (NRIQA) algorithm is presented to evaluate the quality of remote sensing images. Since blur and noise (including stripe noise) are the common distortion factors affecting remote sensing image quality, a comprehensive evaluation factor is modeled to assess blur and noise by analyzing the image visual properties for different incentives combined with SSIM based on the human visual system (HVS), and to assess stripe noise using Phase Congruency (PC). The experimental results show that this algorithm is an accurate and reliable method for remote sensing image quality assessment.
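    Simple no-reference proxies for the blur and noise factors can be sketched as follows. These are generic stand-ins, not the paper's model (which combines SSIM-based analysis and Phase Congruency): sharpness falls with blur, and a robust Laplacian spread rises with noise:

```python
import numpy as np

def blur_metric(img):
    """No-reference sharpness proxy: mean gradient magnitude (lower = blurrier)."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy).mean()

def noise_metric(img):
    """No-reference noise proxy: robust spread (median absolute deviation)
    of the image Laplacian, which amplifies pixel-to-pixel noise."""
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
           np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4.0 * img)
    return np.median(np.abs(lap - np.median(lap)))

# A clean sinusoidal test pattern, a noisy copy, and a blurred copy.
rng = np.random.default_rng(4)
clean = np.tile(np.sin(np.linspace(0, 8 * np.pi, 64)), (64, 1))
noisy = clean + rng.normal(0, 0.3, clean.shape)
blurred = (clean + np.roll(clean, 1, 1) + np.roll(clean, -1, 1)) / 3.0
```

A comprehensive factor would then combine such per-distortion scores, which is the structure the abstract describes.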

  19. Construction of anthropomorphic hybrid, dual-lattice voxel models for optimizing image quality and dose in radiography

    NASA Astrophysics Data System (ADS)

    Petoussi-Henss, Nina; Becker, Janine; Greiter, Matthias; Schlattl, Helmut; Zankl, Maria; Hoeschen, Christoph

    2014-03-01

    In radiography there is generally a conflict between the best image quality and the lowest possible patient dose. A proven method of dosimetry is the simulation of radiation transport in virtual human models (i.e. phantoms). However, while the resolution of these voxel models is adequate for most dosimetric purposes, they cannot provide the organ fine structures necessary for the assessment of imaging quality. The aim of this work is to develop hybrid/dual-lattice voxel models (also called phantoms) as well as simulation methods by which patient dose and image quality for typical radiographic procedures can be determined. The results will provide a basis to investigate by means of simulations the relationships between patient dose and image quality for various imaging parameters and to develop methods for their optimization. A hybrid model, based on NURBS (Non Linear Uniform Rational B-Spline) and PM (Polygon Mesh) surfaces, was constructed from an existing voxel model of a female patient. The organs of the hybrid model can then be scaled and deformed in a non-uniform way, i.e. organ by organ; they can thus be adapted to patient characteristics without losing their anatomical realism. Furthermore, the left lobe of the lung was substituted by a high resolution lung voxel model, resulting in a dual-lattice geometry model. "Dual lattice" means, in this context, the combination of voxel models with different resolutions. Monte Carlo simulations of radiographic imaging were performed with the code EGS4nrc, modified so as to perform dual-lattice transport. Results are presented for a thorax examination.

  20. Image quality assessment using multi-method fusion.

    PubMed

    Liu, Tsung-Jung; Lin, Weisi; Kuo, C-C Jay

    2013-05-01

    A new methodology for objective image quality assessment (IQA) with multi-method fusion (MMF) is presented in this paper. The research is motivated by the observation that there is no single method that can give the best performance in all situations. To achieve MMF, we adopt a regression approach. The new MMF score is set to be the nonlinear combination of scores from multiple methods with suitable weights obtained by a training process. In order to improve the regression results further, we divide distorted images into three to five groups based on the distortion types and perform regression within each group, which is called "context-dependent MMF" (CD-MMF). One task in CD-MMF is to determine the context automatically, which is achieved by a machine learning approach. To further reduce the complexity of MMF, we apply selection algorithms to choose a small subset of the candidate method set. The result is very good even if only three quality assessment methods are included in the fusion process. The proposed MMF method using support vector regression is shown to outperform a large number of existing IQA methods by a significant margin when tested on six representative databases. PMID:23288335
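    The fusion step can be sketched with ordinary least squares standing in for the paper's support vector regression (a deliberate simplification), using synthetic method scores. The point carried over from the paper is that the fused score tracks the subjective score at least as well as any single method:

```python
import numpy as np

# Synthetic subjective scores (MOS-like) and three imperfect IQA methods,
# each correlated with the subjective score but with its own noise and scale.
rng = np.random.default_rng(5)
n = 60
subjective = rng.uniform(0, 100, n)
m1 = subjective + rng.normal(0, 12, n)
m2 = 0.8 * subjective + rng.normal(0, 10, n)
m3 = 50.0 + 0.5 * subjective + rng.normal(0, 8, n)

# Learn fusion weights by regression on training data (here, in-sample):
X = np.column_stack([m1, m2, m3, np.ones(n)])
w, *_ = np.linalg.lstsq(X, subjective, rcond=None)
fused = X @ w

def pearson(a, b):
    return np.corrcoef(a, b)[0, 1]
```

Because each single method (plus intercept) is nested in the fused model, the in-sample correlation of the fused score can never be worse than any individual method's, which is the basic argument for fusion.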

  1. Performance of electronic portal imaging devices (EPIDs) used in radiotherapy: image quality and dose measurements.

    PubMed

    Cremers, F; Frenzel, Th; Kausch, C; Albers, D; Schönborn, T; Schmidt, R

    2004-05-01

    The aim of our study was to compare the image and dosimetric quality of two different imaging systems. The first one is a fluoroscopic electronic portal imaging device (first generation), while the second is based on an amorphous silicon flat-panel array (second generation). The parameters describing image quality include spatial resolution [modulation transfer function (MTF)], noise [noise power spectrum (NPS)], and signal-to-noise transfer [detective quantum efficiency (DQE)]. The dosimetric measurements were compared with ionization chamber as well as with film measurements. The response of the flat-panel imager and the fluoroscopic-optical device was determined by performing a two-step Monte Carlo simulation. All measurements were performed in a 6 MV linear accelerator photon beam. The resolution (MTF) of the fluoroscopic device (f 1/2 = 0.3 mm(-1)) is larger than that of the amorphous silicon based system (f 1/2 = 0.21 mm(-1)), which is due to the missing backscattered photons and the smaller pixel size. The noise measurements (NPS) show the correlation of neighboring pixels of the amorphous silicon electronic portal imaging device, whereas the NPS of the fluoroscopic system is frequency independent. At zero spatial frequency the DQE of the flat-panel imager has a value of 0.008 (0.8%). Due to the minor frequency dependency this device may be almost x-ray quantum limited. Monte Carlo simulations verified these characteristics. For the fluoroscopic imaging system the DQE at low frequencies is about 0.0008 (0.08%) and degrades with higher frequencies. Dose measurements with the flat-panel imager revealed that images can only be directly converted to portal dose images, if scatter can be neglected. Thus objects distant to the detector (e.g., inhomogeneous dose distribution generated by a modificator) can be verified dosimetrically, while objects close to a detector (e.g., a patient) cannot be verified directly and must be scatter corrected prior to verification. 
This is
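    The DQE figures quoted above follow the standard detector relation DQE(f) ∝ MTF²(f)/NPS(f). The sketch below uses that relation with illustrative numbers only: the MTF shape, NPS level, gain, and fluence are assumptions scaled so that DQE(0) matches the flat-panel value of 0.008, not the paper's measured curves:

```python
import numpy as np

def dqe(mtf, nps, gain, fluence):
    """Standard detector form: DQE(f) = gain^2 * MTF(f)^2 / (fluence * NPS(f))."""
    return gain ** 2 * mtf ** 2 / (fluence * nps)

f = np.linspace(0.0, 0.5, 6)          # spatial frequency, cycles/mm (illustrative)
mtf_fp = np.exp(-3.0 * f)             # falling MTF of a flat-panel-like detector
nps_fp = np.full_like(f, 2.5e-7)      # near-white NPS level (illustrative)
gain, fluence = 1.0, 5.0e8            # assumed detector gain and photon fluence

dqe_fp = dqe(mtf_fp, nps_fp, gain, fluence)
```

With a white NPS, the DQE falls with frequency exactly as MTF² does; a detector whose DQE stays close to its zero-frequency value, as reported for the flat panel, is the one approaching x-ray quantum-limited behaviour.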

  2. Image quality, tissue heating, and frame rate trade-offs in acoustic radiation force impulse imaging.

    PubMed

    Bouchard, Richard R; Dahl, Jeremy J; Hsu, Stephen J; Palmeri, Mark L; Trahey, Gregg E

    2009-01-01

    The real-time application of acoustic radiation force impulse (ARFI) imaging requires both short acquisition times for a single ARFI image and repeated acquisition of these frames. Due to the high energy of pulses required to generate appreciable radiation force, however, repeated acquisitions could result in substantial transducer face and tissue heating. We describe and evaluate several novel beam sequencing schemes which, along with parallel-receive acquisition, are designed to reduce acquisition time and heating. These techniques reduce the total number of radiation force impulses needed to generate an image and minimize the time between successive impulses. We present qualitative and quantitative analyses of the trade-offs in image quality resulting from the acquisition schemes. Results indicate that these techniques yield a significant improvement in frame rate with only moderate decreases in image quality. Tissue and transducer face heating resulting from these schemes is assessed through finite element method modeling and thermocouple measurements. Results indicate that heating issues can be mitigated by employing ARFI acquisition sequences that utilize the highest track-to-excitation ratio possible. PMID:19213633

  3. Open source database of images DEIMOS: extension for large-scale subjective image quality assessment

    NASA Astrophysics Data System (ADS)

    Vítek, Stanislav

    2014-09-01

    DEIMOS (Database of Images: Open Source) is an open-source database of images and video sequences for testing, verification and comparison of various image and/or video processing techniques such as compression, reconstruction and enhancement. This paper deals with an extension of the database that allows large-scale web-based subjective image quality assessment to be performed. The extension implements both an administrative and a client interface. The proposed system is aimed mainly at mobile communication devices, taking advantage of HTML5 technology; this means that participants do not need to install any application, and the assessment can be performed in a web browser. The assessment campaign administrator can select images from the large database and then apply rules defined by various test procedure recommendations. The standard test procedures may be fully customized and saved as a template. Alternatively, the administrator can define a custom test using images from the pool and other components, such as evaluation forms and ongoing questionnaires. The image sequence is delivered to the online client, e.g. a smartphone or tablet, as a fully automated assessment sequence, or the viewer can decide on the timing of the assessment if required. Environmental data and viewing conditions (e.g. illumination, vibrations, GPS coordinates, etc.) may be collected and subsequently analyzed.

  4. Quality Enhancement and Nerve Fibre Layer Artefacts Removal in Retina Fundus Images by Off Axis Imaging

    SciTech Connect

    Giancardo, Luca; Meriaudeau, Fabrice; Karnowski, Thomas Paul; Li, Yaquin; Tobin Jr, Kenneth William; Chaum, Edward

    2011-01-01

    Retinal fundus images acquired with non-mydriatic digital fundus cameras are a versatile tool for the diagnosis of various retinal diseases. Because of the ease of use of newer camera models and their relatively low cost, these cameras are employed worldwide by retina specialists to diagnose diabetic retinopathy and other degenerative diseases. Even with relative ease of use, the images produced by these systems sometimes suffer from reflectance artefacts, mainly due to the nerve fibre layer (NFL) or other camera lens related reflections. We propose a technique that employs multiple fundus images acquired from the same patient to obtain a single higher quality image without these reflectance artefacts. The removal of bright artefacts, and particularly of NFL reflectance, can have great benefits for the reduction of false positives in the detection of retinal lesions such as exudates, drusen and cotton wool spots by automatic systems or manual inspection. If enough redundant information is provided by the multiple images, this technique also compensates for suboptimal illumination. The fundus images are acquired in a straightforward but unorthodox manner, i.e. the stare point of the patient is changed between each shot but the camera is kept fixed. Between each shot, the apparent shape and position of all the retinal structures that do not exhibit isotropic reflectance (e.g. bright artefacts) change. This physical effect is exploited by our algorithm in order to extract the pixels belonging to the inner layers of the retina, hence obtaining a single artefact-free image.

  5. Human vision model for the objective evaluation of perceived image quality applied to MRI and image restoration

    NASA Astrophysics Data System (ADS)

    Salem, Kyle A.; Wilson, David L.

    2002-12-01

    We are developing a method to objectively quantify image quality and applying it to the optimization of interventional magnetic resonance imaging (iMRI). In iMRI, images are used for live-time guidance of interventional procedures such as the minimally invasive treatment of cancer. Hence, not only does one desire high quality images, but they must also be acquired quickly. In iMRI, images are acquired in the Fourier domain, or k-space, and this allows many creative ways to image quickly such as keyhole imaging where k-space is preferentially subsampled, yielding suboptimal images at very high frame rates. Other techniques include spiral, radial, and the combined acquisition technique. We have built a perceptual difference model (PDM) that incorporates various components of the human visual system. The PDM was validated using subjective image quality ratings by naive observers and task-based measures defined by interventional radiologists. Using the PDM, we investigated the effects of various imaging parameters on image quality and quantified the degradation due to novel imaging techniques. Results have provided significant information about imaging time versus quality tradeoffs aiding the MR sequence engineer. The PDM has also been used to evaluate other applications such as Dixon fat suppressed MRI and image restoration. In image restoration, the PDM has been used to evaluate the Generalized Minimal Residual (GMRES) image restoration method and to examine the ability to appropriately determine a stopping condition for such iterative methods. The PDM has been shown to be an objective tool for measuring image quality and can be used to determine the optimal methodology for various imaging applications.

  6. Diffusion imaging quality control via entropy of principal direction distribution.

    PubMed

    Farzinfar, Mahshid; Oguz, Ipek; Smith, Rachel G; Verde, Audrey R; Dietrich, Cheryl; Gupta, Aditya; Escolar, Maria L; Piven, Joseph; Pujol, Sonia; Vachet, Clement; Gouttard, Sylvain; Gerig, Guido; Dager, Stephen; McKinstry, Robert C; Paterson, Sarah; Evans, Alan C; Styner, Martin A

    2013-11-15

    Diffusion MR imaging has received increasing attention in the neuroimaging community, as it yields new insights into the microstructural organization of white matter that are not available with conventional MRI techniques. While the technology has enormous potential, diffusion MRI suffers from a unique and complex set of image quality problems, limiting the sensitivity of studies and reducing the accuracy of findings. Furthermore, the acquisition time for diffusion MRI is longer than conventional MRI due to the need for multiple acquisitions to obtain directionally encoded Diffusion Weighted Images (DWI). This leads to increased motion artifacts, reduced signal-to-noise ratio (SNR), and increased proneness to a wide variety of artifacts, including eddy-current and motion artifacts, "venetian blind" artifacts, as well as slice-wise and gradient-wise inconsistencies. Such artifacts mandate stringent Quality Control (QC) schemes in the processing of diffusion MRI data. Most existing QC procedures are conducted in the DWI domain and/or on a voxel level, but our own experiments show that these methods often do not fully detect and eliminate certain types of artifacts, which are often only visible when investigating groups of DWIs or a derived diffusion model, such as the widely employed diffusion tensor imaging (DTI). Here, we propose a novel regional QC measure in the DTI domain that employs the entropy of the regional distribution of the principal directions (PD). The PD entropy quantifies the scattering and spread of the principal diffusion directions and is invariant to the patient's position in the scanner. A high entropy value indicates that the PDs are distributed relatively uniformly, while a low entropy value indicates the presence of clusters in the PD distribution. The novel QC measure is intended to complement the existing set of QC procedures by detecting and correcting residual artifacts. Such residual artifacts cause directional bias in the measured PD and here called
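
    The entropy of a principal-direction distribution can be sketched as follows. This is an illustrative simplification, not the authors' implementation: it reduces 3D directions to a single azimuth angle and uses an arbitrary bin count, exploiting only the antipodal symmetry of diffusion PDs.

```python
import math

def direction_entropy(directions, n_bins=18):
    """Shannon entropy (nats) of a binned distribution of principal-direction
    azimuth angles (radians). Low entropy indicates clustered directions
    (as expected in coherent white matter); high entropy indicates a
    near-uniform spread. Bin count and 1D binning are illustrative choices."""
    counts = [0] * n_bins
    for theta in directions:
        # Map angles into [0, pi): diffusion PDs are antipodally symmetric.
        theta = theta % math.pi
        counts[min(int(theta / math.pi * n_bins), n_bins - 1)] += 1
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

# A tight cluster of directions yields low entropy...
clustered = [0.50, 0.51, 0.49, 0.50, 0.52]
# ...while a uniform spread yields entropy near log(n_bins).
uniform = [(i + 0.5) * math.pi / 18 for i in range(18)]
print(direction_entropy(clustered) < direction_entropy(uniform))  # True
```

    A regional QC measure along these lines would evaluate this entropy over anatomically defined regions of the tensor field rather than over the whole volume.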

  8. A widefield fluorescence microscope with a linear image sensor for image cytometry of biospecimens: Considerations for image quality optimization

    NASA Astrophysics Data System (ADS)

    Hutcheson, Joshua A.; Majid, Aneeka A.; Powless, Amy J.; Muldoon, Timothy J.

    2015-09-01

    Linear image sensors have been widely used in numerous research and industry applications to provide continuous imaging of moving objects. Here, we present a widefield fluorescence microscope with a linear image sensor used to image translating objects for image cytometry. First, a calibration curve was characterized for a custom microfluidic chamber over a span of volumetric pump rates. Image data were also acquired using 15 μm fluorescent polystyrene spheres on a slide with a motorized translation stage in order to match linear translation speed with line exposure periods to preserve the image aspect ratio. Aspect ratios were then calculated after imaging to ensure quality control of image data. Fluorescent beads were imaged in suspension flowing through the microfluidics chamber being pumped by a mechanical syringe pump at 16 μl min-1 with a line exposure period of 150 μs. The line period was selected to acquire images of fluorescent beads with a 40 dB signal-to-background ratio. A motorized translation stage was then used to transport conventional glass slides of stained cellular biospecimens. Whole blood collected from healthy volunteers was stained with 0.02% (w/v) proflavine hemisulfate and imaged to highlight leukocyte morphology with a 1.56 mm × 1.28 mm field of view (1540 ms total acquisition time). Oral squamous cells were also collected from healthy volunteers and stained with 0.01% (w/v) proflavine hemisulfate to demonstrate quantifiable subcellular features and an average nuclear to cytoplasmic ratio of 0.03 (n = 75), with a resolution of 0.31 μm pixels-1.
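
    Matching translation speed to the line exposure period follows the standard line-scan condition: the object must travel exactly one object-plane pixel per line period for a 1:1 aspect ratio. A minimal sketch, using the 0.31 μm pixel size and 150 μs line period quoted above (function names and the mm/s convention are ours):

```python
def required_stage_speed_mm_s(pixel_size_um, line_period_us):
    """Translation speed that maps one object-plane pixel onto one sensor
    line per exposure period. um/us numerically equals m/s, so multiply
    by 1000 to express the result in mm/s."""
    return pixel_size_um / line_period_us * 1000.0

def aspect_ratio(stage_speed_mm_s, pixel_size_um, line_period_us):
    """Object distance traveled per line period divided by the pixel pitch;
    1.0 means square pixels (aspect ratio preserved)."""
    return stage_speed_mm_s / required_stage_speed_mm_s(pixel_size_um,
                                                        line_period_us)

# With the paper's 0.31 um pixels and 150 us line period:
v = required_stage_speed_mm_s(0.31, 150.0)
print(round(v, 2))  # 2.07 (mm/s)
```

    Computing the aspect ratio after acquisition, as the abstract describes, then amounts to comparing the actual stage speed against this required value.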

  11. Optimization of exposure in panoramic radiography while maintaining image quality using adaptive filtering.

    PubMed

    Svenson, Björn; Larsson, Lars; Båth, Magnus

    2016-01-01

    Objective: The purpose of the present study was to investigate the potential of using advanced external adaptive image processing for maintaining image quality while reducing exposure in dental panoramic storage phosphor plate (SPP) radiography. Materials and methods: Thirty-seven SPP radiographs of a skull phantom were acquired using a Scanora panoramic X-ray machine with various tube load, tube voltage, SPP sensitivity and filtration settings. The radiographs were processed using General Operator Processor (GOP) technology. Fifteen dentists, all within the dental radiology field, compared the structural image quality of each radiograph with a reference image on a 5-point rating scale in a visual grading characteristics (VGC) study. The reference image was acquired with the acquisition parameters commonly used in daily operation (70 kVp, 150 mAs and sensitivity class 200) and processed using the standard process parameters supplied by the modality vendor. Results: All GOP-processed images with similar (or higher) dose as the reference image resulted in higher image quality than the reference. All GOP-processed images with similar image quality as the reference image were acquired at a lower dose than the reference. This indicates that the external image processing improved the image quality compared with the standard processing. Regarding acquisition parameters, no strong dependency of the image quality on the radiation quality was seen and the image quality was mainly affected by the dose. Conclusions: The present study indicates that advanced external adaptive image processing may be beneficial in panoramic radiography for increasing the image quality of SPP radiographs or for reducing the exposure while maintaining image quality. PMID:26478956
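
    VGC analysis summarizes paired rating data much like ROC analysis. One common way to reduce the 5-point ratings to a single area-under-curve figure is the rank-based (Mann-Whitney) estimator sketched below; the ratings shown are hypothetical, not the study's data.

```python
def vgc_auc(test_ratings, ref_ratings):
    """Area under the visual-grading-characteristics curve, estimated as
    the probability that a randomly chosen test-image rating exceeds a
    randomly chosen reference rating (ties count 0.5). AUC > 0.5 means
    the test images were, on average, rated higher than the reference."""
    wins = 0.0
    for t in test_ratings:
        for r in ref_ratings:
            if t > r:
                wins += 1.0
            elif t == r:
                wins += 0.5
    return wins / (len(test_ratings) * len(ref_ratings))

# Hypothetical 5-point ratings: the test images are clearly preferred.
print(vgc_auc([4, 5, 4, 5], [3, 3, 2, 4]))  # 0.9375
```

    An AUC near 0.5 would indicate image quality equivalent to the reference, which is the criterion the study uses when judging whether exposure can be reduced without quality loss.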

  12. Americans Getting Adequate Water Daily, CDC Finds

    MedlinePlus

    ... medlineplus/news/fullstory_158510.html Americans Getting Adequate Water Daily, CDC Finds Men take in an average ... new government report finds most are getting enough water each day. The data, from the U.S. National ...

  14. Beyond image quality: designing engaging interactions with digital products

    NASA Astrophysics Data System (ADS)

    de Ridder, Huib; Rozendaal, Marco C.

    2008-02-01

    Ubiquitous computing (or Ambient Intelligence) promises a world in which information is available anytime anywhere and with which humans can interact in a natural, multimodal way. In such a world, perceptual image quality remains an important criterion, since most information will be displayed visually, but other criteria such as enjoyment, fun, engagement and hedonic quality are emerging. This paper deals with engagement, the intrinsically enjoyable readiness to put more effort into exploring and/or using a product than strictly required, thus attracting and keeping the user's attention for a longer period of time. The impact of the experienced richness of an interface, in both its visual appearance and the degree of manipulation it affords, was investigated in a series of experiments employing game-like user interfaces. This resulted in the extension of an existing conceptual framework relating engagement to richness by means of two intermediating variables, namely experienced challenge and sense of control. Predictions from this revised framework are evaluated against results of an earlier experiment assessing the ergonomic and hedonic qualities of interactive media. Test material consisted of interactive CD-ROMs containing presentations of three companies for future customers.

  15. Comparison of no-reference image quality assessment machine learning-based algorithms on compressed images

    NASA Astrophysics Data System (ADS)

    Charrier, Christophe; Saadane, AbdelHakim; Fernandez-Maloigne, Christine

    2015-01-01

    No-reference image quality metrics are of fundamental interest as they can be embedded in practical applications. The main goal of this paper is to perform a comparative study of seven well known no-reference learning-based image quality algorithms. To test the performance of these algorithms, three public databases are used. As a first step, the trial algorithms are compared when no new learning is performed. The second step investigates how the training set influences the results. The Spearman Rank Ordered Correlation Coefficient (SROCC) is utilized to measure and compare the performance. In addition, a hypothesis test is conducted to evaluate the statistical significance of the performance of each tested algorithm.
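
    The SROCC used above is simply the Pearson correlation of the rank-transformed scores. A self-contained sketch (with tie-aware average ranks; the score/MOS values in the example are hypothetical):

```python
def _ranks(values):
    # Average ranks (1-based); tied values share the mean of their positions.
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def srocc(x, y):
    """Spearman rank-order correlation: Pearson correlation of the ranks."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

# Predicted quality scores vs. subjective MOS (hypothetical values):
print(srocc([0.2, 0.5, 0.9, 0.4], [1.5, 2.8, 4.6, 2.1]))  # 1.0 (monotone)
```

    Because SROCC depends only on rank order, it rewards a metric that orders images by quality correctly even when its score scale is nonlinear with respect to subjective opinion.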

  16. Content-weighted video quality assessment using a three-component image model

    NASA Astrophysics Data System (ADS)

    Li, Chaofeng; Bovik, Alan Conrad

    2010-01-01

    Objective image and video quality measures play important roles in numerous image and video processing applications. In this work, we propose a new content-weighted method for full-reference (FR) video quality assessment using a three-component image model. Using the idea that different image regions have different perceptual significance relative to quality, we deploy a model that classifies image local regions according to their image gradient properties, then apply variable weights to structural similarity image index (SSIM) [and peak signal-to-noise ratio (PSNR)] scores according to region. A frame-based video quality assessment algorithm is thereby derived. Experimental results on the Video Quality Experts Group (VQEG) FR-TV Phase 1 test dataset show that the proposed algorithm outperforms existing video quality assessment methods.
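
    The core idea, weighting local error by the perceptual significance of the region it falls in, can be sketched as follows. This is a deliberately simplified two-class version (edge vs. smooth, by gradient threshold) with an MSE base metric; the paper's actual method uses a three-component classification with SSIM/PSNR, and the thresholds and weights below are illustrative only.

```python
def gradient_mag(img, x, y):
    # Simple forward-difference gradient magnitude at (x, y).
    h, w = len(img), len(img[0])
    gx = img[y][min(x + 1, w - 1)] - img[y][x]
    gy = img[min(y + 1, h - 1)][x] - img[y][x]
    return (gx * gx + gy * gy) ** 0.5

def content_weighted_mse(ref, dist, edge_thresh=20.0,
                         w_edge=0.6, w_smooth=0.4):
    """Per-pixel squared error, weighted more heavily in high-gradient
    (edge) regions of the reference image."""
    num = den = 0.0
    for y in range(len(ref)):
        for x in range(len(ref[0])):
            w = w_edge if gradient_mag(ref, x, y) > edge_thresh else w_smooth
            num += w * (ref[y][x] - dist[y][x]) ** 2
            den += w
    return num / den

ref  = [[0, 0, 100, 100],
        [0, 0, 100, 100]]
dist = [[0, 0,  90, 100],
        [0, 0, 100, 100]]
print(content_weighted_mse(ref, dist))  # ~11.11
```

    A frame-based video score follows by averaging such weighted per-frame scores over the sequence.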

  17. Characterizing image quality in a scanning laser ophthalmoscope with differing pinholes and induced scattered light

    NASA Astrophysics Data System (ADS)

    Hunter, Jennifer J.; Cookson, Christopher J.; Kisilak, Marsha L.; Bueno, Juan M.; Campbell, Melanie C. W.

    2007-05-01

    We quantify the effects on scanning laser ophthalmoscope image quality of controlled amounts of scattered light, confocal pinhole diameter, and age. Optical volumes through the optic nerve head were recorded for a range of pinhole sizes in 12 subjects (19-64 years). The usefulness of various overall metrics in quantifying the changes in fundus image quality is assessed. For registered and averaged images, we calculated signal-to-noise ratio, entropy, and acutance. Entropy was best able to distinguish differing image quality. The optimum confocal pinhole diameter was found to be 50 μm (on the retina), providing improved axial resolution and image quality under all conditions.
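
    Of the metrics compared above, entropy is the simplest to state: the Shannon entropy of the image's gray-level histogram. A minimal sketch (the 8-bit level count and toy images are illustrative, not the study's data):

```python
import math

def image_entropy(img, levels=256):
    """Shannon entropy (bits) of the gray-level histogram of an image
    given as a list of rows of integer pixel values in [0, levels).
    Higher values indicate richer gray-level content."""
    counts = [0] * levels
    n = 0
    for row in img:
        for v in row:
            counts[v] += 1
            n += 1
    return -sum((c / n) * math.log2(c / n) for c in counts if c)

flat   = [[128] * 8 for _ in range(8)]  # uniform field: zero entropy
varied = [[(x * 37 + y * 11) % 256 for x in range(8)] for y in range(8)]
print(image_entropy(varied) > image_entropy(flat))  # True
```

    For fundus images, a low-contrast or scatter-degraded frame compresses the gray-level histogram, which is why entropy can track the image-quality differences reported above.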

  18. Image Quality Performance Measurement of the microPET Focus 120

    NASA Astrophysics Data System (ADS)

    Ballado, Fernando Trejo; López, Nayelli Ortega; Flores, Rafael Ojeda; Ávila-Rodríguez, Miguel A.

    2010-12-01

    The aim of this work is to evaluate the characteristics involved in the image reconstruction of the microPET Focus 120. Two different phantoms were used for this evaluation: a miniature hot-rod Derenzo phantom and a National Electrical Manufacturers Association (NEMA) NU4-2008 image quality (IQ) phantom. The best image quality was obtained using OSEM3D as the reconstruction method, reaching a spatial resolution of 1.5 mm with the Derenzo phantom filled with 18F. Image quality test results indicate a superior image quality for the Focus 120 when compared to previous microPET models.

  19. Task-based measures of image quality and their relation to radiation dose and patient risk

    PubMed Central

    Barrett, Harrison H.; Myers, Kyle J.; Hoeschen, Christoph; Kupinski, Matthew A.; Little, Mark P.

    2015-01-01

    The theory of task-based assessment of image quality is reviewed in the context of imaging with ionizing radiation, and objective figures of merit (FOMs) for image quality are summarized. The variation of the FOMs with the task, the observer and especially with the mean number of photons recorded in the image is discussed. Then various standard methods for specifying radiation dose are reviewed and related to the mean number of photons in the image and hence to image quality. Current knowledge of the relation between local radiation dose and the risk of various adverse effects is summarized, and some graphical depictions of the tradeoffs between image quality and risk are introduced. Then various dose-reduction strategies are discussed in terms of their effect on task-based measures of image quality. PMID:25564960
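
    One of the simplest task-based FOMs is the detectability index for a binary (signal-present vs. signal-absent) task, computed from an observer's test statistics on the two image classes. The sketch below uses the difference-of-means over pooled standard deviation form; the test-statistic values are hypothetical and stand in for scores at two photon-count (dose) levels.

```python
def detectability_index(signal_scores, noise_scores):
    """SNR-style figure of merit d' for a binary detection task:
    difference of class means over the pooled standard deviation of the
    observer's test statistics."""
    def mean(v):
        return sum(v) / len(v)
    def var(v, m):
        return sum((x - m) ** 2 for x in v) / (len(v) - 1)
    ms, mn = mean(signal_scores), mean(noise_scores)
    pooled = ((var(signal_scores, ms) + var(noise_scores, mn)) / 2) ** 0.5
    return (ms - mn) / pooled

# Hypothetical observer test statistics at two dose levels:
low_dose  = detectability_index([1.0, 1.4, 0.9, 1.2], [0.8, 1.1, 0.7, 1.0])
high_dose = detectability_index([2.1, 2.4, 1.9, 2.2], [0.8, 1.1, 0.7, 1.0])
print(high_dose > low_dose)  # True: more photons, higher detectability
```

    Plotting such a FOM against dose gives exactly the image-quality-versus-risk tradeoff curves the review describes.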

  2. SENTINEL-2 image quality and level 1 processing

    NASA Astrophysics Data System (ADS)

    Meygret, Aimé; Baillarin, Simon; Gascon, Ferran; Hillairet, Emmanuel; Dechoz, Cécile; Lacherade, Sophie; Martimort, Philippe; Spoto, François; Henry, Patrice; Duca, Riccardo

    2009-08-01

    In the framework of the Global Monitoring for Environment and Security (GMES) programme, the European Space Agency (ESA) in partnership with the European Commission (EC) is developing the SENTINEL-2 optical imaging mission devoted to the operational monitoring of land and coastal areas. The Sentinel-2 mission is based on a twin satellites configuration deployed in polar sun-synchronous orbit and is designed to offer a unique combination of systematic global coverage with a wide field of view (290 km), a high revisit (5 days at equator with two satellites), a high spatial resolution (10 m, 20 m and 60 m) and multi-spectral imagery (13 bands in the visible and the short wave infrared spectrum). SENTINEL-2 will ensure data continuity of SPOT and LANDSAT multispectral sensors while accounting for future service evolution. This paper presents the main geometric and radiometric image quality requirements for the mission. The strong multi-spectral and multi-temporal registration requirements constrain the stability of the platform and the ground processing, which will automatically refine the geometric physical model through correlation techniques. The geolocation of the images will benefit from a worldwide reference data set made of SENTINEL-2 data strips geolocated through a global space-triangulation. This processing is detailed through the description of the Level 1C production, which will provide users with ortho-images of top-of-atmosphere reflectances. The huge amount of data (1.4 Tbits per orbit) is also a challenge for the ground processing, which will produce all the acquired data at Level 1C. Finally we discuss the different geometric (line of sight, focal plane cartography, ...) and radiometric (relative and absolute camera sensitivity) in-flight calibration methods that will take advantage of the on-board sun diffuser and ground targets to meet the severe mission requirements.

  3. Mutual information as a measure of image quality for 3D dynamic lung imaging with EIT

    PubMed Central

    Crabb, M G; Davidson, J L; Little, R; Wright, P; Morgan, A R; Miller, C A; Naish, J H; Parker, G J M; Kikinis, R; McCann, H; Lionheart, W R B

    2014-01-01

    We report on a pilot study of dynamic lung electrical impedance tomography (EIT) at the University of Manchester. Low-noise EIT data at 100 frames per second (fps) were obtained from healthy male subjects during controlled breathing, followed by magnetic resonance imaging (MRI) subsequently used for spatial validation of the EIT reconstruction. The torso surface in the MR image and electrode positions obtained using MRI fiducial markers informed the construction of a 3D finite element model extruded along the caudal-distal axis of the subject. Small changes in the boundary that occur during respiration were accounted for by incorporating the sensitivity with respect to boundary shape into a robust temporal difference reconstruction algorithm. EIT and MRI images were co-registered using the open source medical imaging software, 3D Slicer. A quantitative comparison of quality of different EIT reconstructions was achieved through calculation of the mutual information with a lung-segmented MR image. EIT reconstructions using a linear shape correction algorithm reduced boundary image artefacts, yielding better contrast of the lungs, and had 10% greater mutual information compared with a standard linear EIT reconstruction. PMID:24710978
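
    The mutual-information comparison described above can be sketched from a joint gray-level histogram of two co-registered images. This is a generic MI estimator, not the study's pipeline; the bin count and the toy 8-bit images are illustrative.

```python
import math

def mutual_information(img_a, img_b, bins=8):
    """Mutual information (nats) between two co-registered, equal-size
    images of 8-bit values, estimated from a joint gray-level histogram.
    Higher MI indicates better spatial agreement between the images."""
    joint, pa, pb = {}, {}, {}
    n = 0
    for row_a, row_b in zip(img_a, img_b):
        for va, vb in zip(row_a, row_b):
            a, b = va * bins // 256, vb * bins // 256
            joint[(a, b)] = joint.get((a, b), 0) + 1
            pa[a] = pa.get(a, 0) + 1
            pb[b] = pb.get(b, 0) + 1
            n += 1
    mi = 0.0
    for (a, b), c in joint.items():
        pj = c / n
        mi += pj * math.log(pj / ((pa[a] / n) * (pb[b] / n)))
    return mi

identical = [[0, 64, 128, 192]] * 4
shuffled  = [[0, 64, 128, 192],
             [192, 0, 64, 128],
             [128, 192, 0, 64],
             [64, 128, 192, 0]]
print(mutual_information(identical, identical) >
      mutual_information(identical, shuffled))  # True
```

    In the study's setting, `img_b` would be the lung-segmented MR image and `img_a` a candidate EIT reconstruction, so a higher MI score ranks that reconstruction as spatially more faithful.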

  4. Sparse Representation-Based Image Quality Index With Adaptive Sub-Dictionaries.

    PubMed

    Li, Leida; Cai, Hao; Zhang, Yabin; Lin, Weisi; Kot, Alex C; Sun, Xingming

    2016-08-01

    Distortions cause structural changes in digital images, leading to degraded visual quality. Dictionary-based sparse representation has been widely studied recently due to its ability to extract inherent image structures. Meanwhile, it can extract image features with slightly higher level semantics. Intuitively, sparse representation can be used for image quality assessment, because visible distortions can cause significant changes to the sparse features. In this paper, a new sparse representation-based image quality assessment model is proposed based on the construction of adaptive sub-dictionaries. An overcomplete dictionary trained from natural images is employed to capture the structure changes between the reference and distorted images by sparse feature extraction via adaptive sub-dictionary selection. Based on the observation that image sparse features are invariant to weak degradations and the perceived image quality is generally influenced by diverse issues, three auxiliary quality features are added, including gradient, color, and luminance information. The proposed method is not sensitive to training images, so a universal dictionary can be adopted for quality evaluation. Extensive experiments on five public image quality databases demonstrate that the proposed method produces state-of-the-art results and performs consistently well across different image quality databases. PMID:27295675

  5. Investigation of the effect of subcutaneous fat on image quality performance of 2D conventional imaging and tissue harmonic imaging.

    PubMed

    Browne, Jacinta E; Watson, Amanda J; Hoskins, Peter R; Elliott, Alex T

    2005-07-01

    Tissue harmonic imaging (THI) has been reported to improve contrast resolution, tissue differentiation and overall image quality in clinical examinations. However, a study carried out previously by the authors (Browne et al. 2004) found improvements only in spatial resolution and not in contrast resolution or anechoic target detection. This result may have been due to the homogeneity of the phantom. Biologic tissues are generally inhomogeneous, and THI has been reported to improve image quality in the presence of large amounts of subcutaneous fat. The aims of the study were to simulate the distortion caused by subcutaneous fat to image quality and thus investigate further the improvements reported in anechoic target detection and contrast resolution performance with THI compared with 2D conventional imaging. In addition, the effect of three different types of fat-mimicking layer on image quality was examined. The abdominal transducer of two ultrasound scanners with 2D conventional imaging and THI were tested, the 4C1 (Aspen-Acuson, Siemens Co., CA, USA) and the C5-2 (ATL HDI 5000, ATL/Philips, Amsterdam, The Netherlands). An ex vivo subcutaneous pig fat layer was used to replicate the beam distortion and phase aberration seen clinically in the presence of subcutaneous fat. Three different types of fat-mimicking layers (olive oil, lard and lard with fish oil capsules) were evaluated. The subcutaneous pig fat layer demonstrated an improvement in anechoic target detection with THI compared with 2D conventional imaging, but no improvement was demonstrated in contrast resolution performance; a similar result was found in a previous study conducted by this research group (Browne et al. 2004) while using this tissue-mimicking phantom without a fat layer. Similarly, while using the layers of olive oil, lard and lard with fish oil capsules, improvements due to THI were found in anechoic target detection but, again, no improvements were found for contrast resolution for any of the

  6. Advancing the Quality of Solar Occultation Retrievals through Solar Imaging

    NASA Astrophysics Data System (ADS)

    Gordley, L. L.; Hervig, M. E.; Marshall, B. T.; Russell, J. E.; Bailey, S. M.; Brown, C. W.; Burton, J. C.; Deaver, L. E.; Magill, B. E.; McHugh, M. J.; Paxton, G. J.; Thompson, R. E.

    2008-12-01

    The quality of retrieved profiles (e.g. mixing ratio, temperature, pressure, and extinction) from solar occultation sensors is strongly dependent on the angular fidelity of the measurements. The SOFIE instrument, launched on board the AIM (Aeronomy of Ice in the Mesosphere) satellite on April 25, 2007, was designed to provide very high precision broadband measurements for the study of Polar Mesospheric Clouds (PMCs), which appear near 83 km, just below the high latitude summer mesopause. The SOFIE instrument achieves an unprecedented angular fidelity by imaging the sun on a 2D detector array and tracking the edges with an uncertainty of <0.1 arc seconds. This makes possible retrieved profiles of vertically high resolution mixing ratios, refraction-based temperature and pressure from tropopause to lower mesosphere, and transmission with accuracy sufficient to infer cosmic smoke extinction. Details of the approach and recent results will be presented.

  7. The image quality of ion computed tomography at clinical imaging dose levels

    SciTech Connect

    Hansen, David C.; Bassler, Niels; Sørensen, Thomas Sangild; Seco, Joao

    2014-11-01

    Purpose: Accurately predicting the range of radiotherapy ions in vivo is important for the precise delivery of dose in particle therapy. Range uncertainty is currently the single largest contribution to the dose margins used in planning and leads to a higher dose to normal tissue. The use of ion CT has been proposed as a method to improve the range uncertainty and thereby reduce dose to normal tissue of the patient. A wide variety of ions have been proposed and studied for this purpose, but no studies evaluate the image quality obtained with different ions in a consistent manner. However, imaging dose in ion CT is a concern which may limit the obtainable image quality. In addition, the imaging doses reported have not been directly comparable with x-ray CT doses due to the different biological impacts of ion radiation. The purpose of this work is to develop a robust methodology for comparing the image quality of ion CT with respect to particle therapy, taking into account different reconstruction methods and ion species. Methods: A comparison of different ions and energies was made. Ion CT projections were simulated for five different scenarios: Protons at 230 and 330 MeV, helium ions at 230 MeV/u, and carbon ions at 430 MeV/u. Maps of the water equivalent stopping power were reconstructed using a weighted least squares method. The dose was evaluated via a quality factor weighted CT dose index called the CT dose equivalent index (CTDEI). Spatial resolution was measured by the modulation transfer function. This was done by a noise-robust fit to the edge spread function. Second, the image quality as a function of the number of scanning angles was evaluated for protons at 230 MeV. In the resolution study, the CTDEI was fixed to 10 mSv, similar to a typical x-ray CT scan. Finally, scans at a range of CTDEIs were done to evaluate the influence of dose on reconstruction error. Results: All ions yielded accurate stopping power estimates, none of which were statistically

  8. Quality assurance of ultrasound imaging systems for target localization and online setup corrections.

    PubMed

    Tomé, Wolfgang A; Orton, Nigel P

    2008-01-01

    We describe quality assurance paradigms for ultrasound imaging systems for target localization (UISTL). To determine the absolute localization accuracy of a UISTL, an absolute coordinate system can be established in the treatment room and spherical targets at various depths can be localized. To test the ability of such a system to determine the magnitude of internal organ motion, a phantom that mimics the human male pelvic anatomy can be used to simulate different organ motion ranges. To assess the interuser variability of ultrasound (US) guidance, different experienced users can independently determine the daily organ shifts for the same patients for a number of consecutive fractions. The average accuracy for a UISTL for the localization of spherical targets at various depths has been found to be 0.57 +/- 0.47 mm in each spatial dimension for various focal depths. For the phantom organ motion test it was found that the true organ motion could be determined to within 1.0 mm along each axis. The variability between different experienced users who localized the same 5 patients for five consecutive fractions was small in comparison to the indicated shifts. In addition to the quality assurance tests that address the ability of a UISTL to accurately localize a target, a thorough quality assurance program should also incorporate the following two aspects to ensure consistent and accurate localization in daily clinical use: (1) adequate training and performance monitoring of users of the US target localization system, and (2) prescreening of patients who may not be good candidates for US localization. PMID:18406938
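    The per-axis localization accuracy quoted above (mean of the absolute error in each spatial dimension over repeated target localizations) can be computed as follows. This is a minimal sketch with made-up coordinates; the phantom geometry and error values are illustrative assumptions.

```python
import numpy as np

# Hedged sketch: per-axis localization error statistics for a phantom test.
# All positions (mm) below are synthetic, not measured values.
true_pos = np.array([[10.0, 20.0, 50.0],
                     [15.0, 25.0, 80.0],
                     [12.0, 18.0, 110.0]])        # known target positions
measured = true_pos + np.array([[0.4, -0.6, 0.5],
                                [-0.3, 0.7, -0.4],
                                [0.5, 0.2, 0.6]])  # US-localized positions

error = np.abs(measured - true_pos)    # per-axis absolute error
mean_error = error.mean(axis=0)        # mean error per spatial dimension
sd_error = error.std(axis=0, ddof=1)   # sample SD per spatial dimension
```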

  9. Quality Assurance of Ultrasound Imaging Systems for Target Localization and Online Setup Corrections

    SciTech Connect

    Tome, Wolfgang A.; Orton, Nigel P.

    2008-05-01

    We describe quality assurance paradigms for ultrasound imaging systems for target localization (UISTL). To determine the absolute localization accuracy of a UISTL, an absolute coordinate system can be established in the treatment room and spherical targets at various depths can be localized. To test the ability of such a system to determine the magnitude of internal organ motion, a phantom that mimics the human male pelvic anatomy can be used to simulate different organ motion ranges. To assess the interuser variability of ultrasound (US) guidance, different experienced users can independently determine the daily organ shifts for the same patients for a number of consecutive fractions. The average accuracy for a UISTL for the localization of spherical targets at various depths has been found to be 0.57 ± 0.47 mm in each spatial dimension for various focal depths. For the phantom organ motion test it was found that the true organ motion could be determined to within 1.0 mm along each axis. The variability between different experienced users who localized the same 5 patients for five consecutive fractions was small in comparison to the indicated shifts. In addition to the quality assurance tests that address the ability of a UISTL to accurately localize a target, a thorough quality assurance program should also incorporate the following two aspects to ensure consistent and accurate localization in daily clinical use: (1) adequate training and performance monitoring of users of the US target localization system, and (2) prescreening of patients who may not be good candidates for US localization.

  10. Imaging-based logics for ornamental stone quality chart definition

    NASA Astrophysics Data System (ADS)

    Bonifazi, Giuseppe; Gargiulo, Aldo; Serranti, Silvia; Raspi, Costantino

    2007-02-01

    Ornamental stone products are commercially classified on the market according to several factors related both to intrinsic lithologic characteristics and to their visible pictorial attributes. Sometimes these latter aspects prevail in quality criteria definition and assessment. Pictorial attributes are in any case also influenced by the working actions performed and the tools used to realize the final manufactured stone product. Stone surface finishing is a critical task because it can enhance certain aesthetic features of the stone itself. The study aimed to develop an innovative set of methodologies and techniques able to quantify the aesthetic quality level of stone products, taking into account both the physical and the aesthetic characteristics of the stones. In particular, the degree of polishing of the stone surfaces and the presence of defects were evaluated by applying digital image processing strategies. Morphological and color parameters were extracted using purpose-built software architectures. Results showed that the proposed approaches can quantify the degree of polishing and identify surface defects related to the intrinsic characteristics of the stone and/or the working actions performed.

  11. Crowdsourcing quality control for Dark Energy Survey images

    DOE PAGES Beta

    Melchior, P.

    2016-07-01

    We have developed a crowdsourcing web application for image quality control employed by the Dark Energy Survey. Dubbed the "DES exposure checker", it renders science-grade images directly to a web browser and allows users to mark problematic features from a set of predefined classes. Users can also generate custom labels and thus help identify previously unknown problem classes. User reports are fed back to hardware and software experts to help mitigate and eliminate recognized issues. We report on the implementation of the application and our experience with its over 100 users, the majority of which are professional or prospective astronomers but not data management experts. We discuss aspects of user training and engagement, and demonstrate how problem reports have been pivotal to rapidly correct artifacts which would likely have been too subtle or infrequent to be recognized otherwise. We conclude with a number of important lessons learned, suggest possible improvements, and recommend this collective exploratory approach for future astronomical surveys or other extensive data sets with a sufficiently large user base. We also release open-source code of the web application and host an online demo version at http://des-exp-checker.pmelchior.net

  12. Crowdsourcing quality control for Dark Energy Survey images

    NASA Astrophysics Data System (ADS)

    Melchior, P.; Sheldon, E.; Drlica-Wagner, A.; Rykoff, E. S.; Abbott, T. M. C.; Abdalla, F. B.; Allam, S.; Benoit-Lévy, A.; Brooks, D.; Buckley-Geer, E.; Carnero Rosell, A.; Carrasco Kind, M.; Carretero, J.; Crocce, M.; D'Andrea, C. B.; da Costa, L. N.; Desai, S.; Doel, P.; Evrard, A. E.; Finley, D. A.; Flaugher, B.; Frieman, J.; Gaztanaga, E.; Gerdes, D. W.; Gruen, D.; Gruendl, R. A.; Honscheid, K.; James, D. J.; Jarvis, M.; Kuehn, K.; Li, T. S.; Maia, M. A. G.; March, M.; Marshall, J. L.; Nord, B.; Ogando, R.; Plazas, A. A.; Romer, A. K.; Sanchez, E.; Scarpine, V.; Sevilla-Noarbe, I.; Smith, R. C.; Soares-Santos, M.; Suchyta, E.; Swanson, M. E. C.; Tarle, G.; Vikram, V.; Walker, A. R.; Wester, W.; Zhang, Y.

    2016-07-01

    We have developed a crowdsourcing web application for image quality control employed by the Dark Energy Survey. Dubbed the "DES exposure checker", it renders science-grade images directly to a web browser and allows users to mark problematic features from a set of predefined classes. Users can also generate custom labels and thus help identify previously unknown problem classes. User reports are fed back to hardware and software experts to help mitigate and eliminate recognized issues. We report on the implementation of the application and our experience with its over 100 users, the majority of which are professional or prospective astronomers but not data management experts. We discuss aspects of user training and engagement, and demonstrate how problem reports have been pivotal to rapidly correct artifacts which would likely have been too subtle or infrequent to be recognized otherwise. We conclude with a number of important lessons learned, suggest possible improvements, and recommend this collective exploratory approach for future astronomical surveys or other extensive data sets with a sufficiently large user base. We also release open-source code of the web application and host an online demo version at http://des-exp-checker.pmelchior.net.

  13. No-reference image quality assessment based on nonsubsample shearlet transform and natural scene statistics

    NASA Astrophysics Data System (ADS)

    Wang, Guan-jun; Wu, Zhi-yong; Yun, Hai-jiao; Cui, Ming

    2016-03-01

    A novel no-reference (NR) image quality assessment (IQA) method is proposed for assessing image quality across multifarious distortion categories. The new method transforms distorted images into the shearlet domain using a nonsubsampled shearlet transform (NSST), and constructs an image quality feature vector from natural scene statistics: coefficient distribution, energy distribution, and structural correlation (SC) across orientations and scales. The final image quality score is obtained from distortion classification and regression models trained by a support vector machine (SVM). The experimental results on the LIVE2 IQA database indicate that the method can assess image quality effectively, and that the extracted features are sensitive to the category and severity of distortion. Furthermore, the proposed method is database independent and shows a higher correlation with human perception and a lower root mean squared error (RMSE) than other high-performance NR IQA methods.
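    The general pipeline described above — multiscale feature extraction followed by a trained regressor that maps features to a quality score — can be sketched as follows. Note the substitutions, stated plainly: a toy 2x-downsampling pyramid stands in for the shearlet transform, and an ordinary least-squares fit stands in for the SVM; all images and scores are synthetic.

```python
import numpy as np

# Hedged sketch of an NR-IQA pipeline: simple multiscale statistics as
# features, plus a linear regressor. Feature choices are illustrative only.

def multiscale_features(img, scales=3):
    """Toy multiscale statistics: (std, mean abs gradient) per scale."""
    feats = []
    cur = np.asarray(img, dtype=float)
    for _ in range(scales):
        feats.append(cur.std())                            # energy-like statistic
        feats.append(np.abs(np.diff(cur, axis=0)).mean())  # gradient statistic
        cur = cur[::2, ::2]                                # crude downsampling
    return np.array(feats)

rng = np.random.default_rng(1)
train_imgs = [rng.normal(0.0, s, (128, 128)) for s in (1, 2, 3, 4)]
train_scores = np.array([90.0, 70.0, 50.0, 30.0])  # synthetic quality scores

X = np.array([multiscale_features(im) for im in train_imgs])
A = np.column_stack([X, np.ones(len(X))])          # add intercept column
w, *_ = np.linalg.lstsq(A, train_scores, rcond=None)

test_img = rng.normal(0.0, 2.5, (128, 128))        # distortion between levels 2 and 3
pred = float(np.append(multiscale_features(test_img), 1.0) @ w)
```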

  14. Neuron Image Analyzer: Automated and Accurate Extraction of Neuronal Data from Low Quality Images.

    PubMed

    Kim, Kwang-Min; Son, Kilho; Palmore, G Tayhas R

    2015-01-01

    Image analysis software is an essential tool used in neuroscience and neural engineering to evaluate changes in neuronal structure following extracellular stimuli. Both manual and automated methods in current use are severely inadequate at detecting and quantifying changes in neuronal morphology when the images analyzed have a low signal-to-noise ratio (SNR). This inadequacy derives from the fact that these methods often include data from non-neuronal structures or artifacts by simply tracing pixels with high intensity. In this paper, we describe Neuron Image Analyzer (NIA), a novel algorithm that overcomes these inadequacies by employing a Laplacian of Gaussian filter and graphical models (i.e., Hidden Markov Model, Fully Connected Chain Model) to specifically extract relational pixel information corresponding to neuronal structures (i.e., soma, neurite). As such, NIA that is based on vector representation is less likely to detect false signals (i.e., non-neuronal structures) or generate artifact signals (i.e., deformation of original structures) than current image analysis algorithms that are based on raster representation. We demonstrate that NIA enables precise quantification of neuronal processes (e.g., length and orientation of neurites) in low quality images with a significant increase in the accuracy of detecting neuronal changes post-stimulation. PMID:26593337
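    The first stage of such a pipeline, Laplacian of Gaussian (LoG) filtering to enhance neuron-like structures against background, can be sketched with NumPy alone. Kernel size, sigma, and the synthetic "neurite" image below are illustrative assumptions, not NIA's actual parameters.

```python
import numpy as np

# Hedged sketch: LoG filtering of a synthetic image containing one bright,
# neurite-like stripe. Flat regions map to ~0; edges of the stripe respond.

def log_kernel(sigma, size):
    """Discrete Laplacian of Gaussian kernel (size x size)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx**2 + yy**2
    k = (r2 - 2 * sigma**2) / sigma**4 * np.exp(-r2 / (2 * sigma**2))
    return k - k.mean()          # zero-mean so flat regions give zero response

def filter2d(img, kernel):
    """Same-size 2-D convolution via FFT (circular boundary handling)."""
    pad = np.zeros_like(img, dtype=float)
    kh, kw = kernel.shape
    pad[:kh, :kw] = kernel
    pad = np.roll(pad, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(pad)))

img = np.zeros((64, 64))
img[30:34, 10:54] = 1.0          # a bright neurite-like stripe
response = filter2d(img, log_kernel(sigma=2.0, size=13))
```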

  15. Neuron Image Analyzer: Automated and Accurate Extraction of Neuronal Data from Low Quality Images

    PubMed Central

    Kim, Kwang-Min; Son, Kilho; Palmore, G. Tayhas R.

    2015-01-01

    Image analysis software is an essential tool used in neuroscience and neural engineering to evaluate changes in neuronal structure following extracellular stimuli. Both manual and automated methods in current use are severely inadequate at detecting and quantifying changes in neuronal morphology when the images analyzed have a low signal-to-noise ratio (SNR). This inadequacy derives from the fact that these methods often include data from non-neuronal structures or artifacts by simply tracing pixels with high intensity. In this paper, we describe Neuron Image Analyzer (NIA), a novel algorithm that overcomes these inadequacies by employing a Laplacian of Gaussian filter and graphical models (i.e., Hidden Markov Model, Fully Connected Chain Model) to specifically extract relational pixel information corresponding to neuronal structures (i.e., soma, neurite). As such, NIA that is based on vector representation is less likely to detect false signals (i.e., non-neuronal structures) or generate artifact signals (i.e., deformation of original structures) than current image analysis algorithms that are based on raster representation. We demonstrate that NIA enables precise quantification of neuronal processes (e.g., length and orientation of neurites) in low quality images with a significant increase in the accuracy of detecting neuronal changes post-stimulation. PMID:26593337

  16. Development of Software to Model AXAF-I Image Quality

    NASA Technical Reports Server (NTRS)

    Ahmad, Anees; Hawkins, Lamar

    1996-01-01

    This draft final report describes the work performed under the delivery order number 145 from May 1995 through August 1996. The scope of work included a number of software development tasks for the performance modeling of AXAF-I. A number of new capabilities and functions have been added to the GT software, which is the command mode version of the GRAZTRACE software, originally developed by MSFC. A structural data interface has been developed for the EAL (formerly SPAR) finite element analysis (FEA) program, which is being used by the MSFC Structural Analysis group for the analysis of AXAF-I. This interface utility can read the structural deformation file from EAL and other finite element analysis programs such as NASTRAN and COSMOS/M, and convert the data to a suitable format that can be used for the deformation ray-tracing to predict the image quality for a distorted mirror. The utility can also expand data from finite element models that assume 180-degree symmetry. It has been used to predict image characteristics for the AXAF-I HRMA, when subjected to gravity effects in the horizontal x-ray ground test configuration. The development of the metrology data processing interface software has also been completed. It can read the HDOS FITS format surface map files, manipulate and filter the metrology data, and produce a deformation file, which can be used by GT for ray tracing for the mirror surface figure errors. This utility has been used to determine the optimum alignment (axial spacing and clocking) for the four pairs of AXAF-I mirrors. Based on this optimized alignment, the geometric images and effective focal lengths for the as-built mirrors were predicted to cross-check the results obtained by Kodak.

  17. Asbestos/NESHAP adequately wet guidance

    SciTech Connect

    Shafer, R.; Throwe, S.; Salgado, O.; Garlow, C.; Hoerath, E.

    1990-12-01

    The Asbestos NESHAP requires facility owners and/or operators involved in demolition and renovation activities to control emissions of particulate asbestos to the outside air because no safe concentration of airborne asbestos has ever been established. The primary method used to control asbestos emissions is to adequately wet the Asbestos Containing Material (ACM) with a wetting agent prior to, during and after demolition/renovation activities. The purpose of the document is to provide guidance to asbestos inspectors and the regulated community on how to determine if friable ACM is adequately wet as required by the Asbestos NESHAP.

  18. Optoacoustic imaging quality enhancement based on geometrical super-resolution method

    NASA Astrophysics Data System (ADS)

    He, Hailong; Mandal, Subhamoy; Buehler, Andreas; Deán-Ben, X. Luís.; Razansky, Daniel; Ntziachristos, Vasilis

    2016-03-01

    In optoacoustic imaging, the resolution and image quality in a certain imaging position usually cannot be enhanced without changing the imaging configuration. Post-reconstruction image processing methods offer a new possibility to improve image quality and resolution. We have developed a geometrical super-resolution (GSR) method which uses information from spatially separated frames to enhance resolution and contrast in optoacoustic images. The proposed method acquires several low resolution images from the same object located at different positions inside the imaging plane. Thereafter, it applies an iterative registration algorithm to integrate the information in the acquired set of images to generate a single high resolution image. Herein, we present the method and evaluate its performance in simulation and phantom experiments, and results show that geometrical super-resolution techniques can be a promising alternative to enhance resolution in optoacoustic imaging.
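    The core fusion step of geometrical super-resolution can be illustrated with a "shift-and-add" toy example: several low-resolution frames of the same scene, offset by known sub-pixel amounts, are interleaved onto a finer grid. The real method estimates the offsets by iterative registration; here the shifts are assumed known, and the scene is synthetic.

```python
import numpy as np

# Hedged sketch: shift-and-add fusion of sub-pixel-shifted LR frames.
# With known integer shifts on the HR grid, the HR scene is recovered exactly;
# real data would require registration, interpolation, and deconvolution.

def shift_and_add(lr_frames, shifts, factor):
    """Fuse LR frames (with known HR-grid shifts) into one HR image."""
    h, w = lr_frames[0].shape
    hr_sum = np.zeros((h * factor, w * factor))
    hr_cnt = np.zeros_like(hr_sum)
    for frame, (dy, dx) in zip(lr_frames, shifts):
        hr_sum[dy::factor, dx::factor] += frame
        hr_cnt[dy::factor, dx::factor] += 1
    hr_cnt[hr_cnt == 0] = 1          # avoid division by zero on empty cells
    return hr_sum / hr_cnt

truth = np.add.outer(np.arange(8), np.arange(8)).astype(float)  # 8x8 scene
factor = 2
shifts = [(0, 0), (0, 1), (1, 0), (1, 1)]
lr_frames = [truth[dy::factor, dx::factor] for dy, dx in shifts]  # 4x4 frames
hr = shift_and_add(lr_frames, shifts, factor)
```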

  19. Quality Imaging - Comparison of CR Mammography with Screen-Film Mammography

    SciTech Connect

    Gaona, E.; Azorin Nieto, J.; Iran Diaz Gongora, J. A.; Arreola, M.; Casian Castellanos, G.; Perdigon Castaneda, G. M.; Franco Enriquez, J. G.

    2006-09-08

    The aim of this work is an image quality comparison of CR mammography images printed to film by a laser printer with screen-film mammography. Giotto and Elscintec dedicated mammography units with fully automatic exposure and a nominal large focal spot size of 0.3 mm were used for the image acquisition of phantoms in screen-film mammography. Four CR mammography units from two different manufacturers and three dedicated x-ray mammography units with fully automatic exposure and a nominal large focal spot size of 0.3 mm were used for the image acquisition of phantoms in CR mammography. The image quality tests included an assessment of system resolution, phantom image scoring, artifacts, mean optical density, and density difference (contrast). In this study, screen-film mammography with a quality control program offered significantly greater image quality than CR mammography images printed on film.

  20. Adequate supervision for children and adolescents.

    PubMed

    Anderst, James; Moffatt, Mary

    2014-11-01

    Primary care providers (PCPs) have the opportunity to improve child health and well-being by addressing supervision issues before an injury or exposure has occurred and/or after an injury or exposure has occurred. Appropriate anticipatory guidance on supervision at well-child visits can improve supervision of children, and may prevent future harm. Adequate supervision varies based on the child's development and maturity, and the risks in the child's environment. Consideration should be given to issues as wide ranging as swimming pools, falls, dating violence, and social media. By considering the likelihood of harm and the severity of the potential harm, caregivers may provide adequate supervision by minimizing risks to the child while still allowing the child to take "small" risks as needed for healthy development. Caregivers should initially focus on direct (visual, auditory, and proximity) supervision of the young child. Gradually, supervision needs to be adjusted as the child develops, emphasizing a safe environment and safe social interactions, with graduated independence. PCPs may foster adequate supervision by providing concrete guidance to caregivers. In addition to preventing injury, supervision includes fostering a safe, stable, and nurturing relationship with every child. PCPs should be familiar with age/developmentally based supervision risks, adequate supervision based on those risks, characteristics of neglectful supervision based on age/development, and ways to encourage appropriate supervision throughout childhood. PMID:25369578

  1. Small Rural Schools CAN Have Adequate Curriculums.

    ERIC Educational Resources Information Center

    Loustaunau, Martha

    The small rural school's foremost and largest problem is providing an adequate curriculum for students in a changing world. Often the small district cannot or is not willing to pay the per-pupil cost of curriculum specialists, specialized courses using expensive equipment no more than one period a day, and remodeled rooms to accommodate new…

  2. Funding the Formula Adequately in Oklahoma

    ERIC Educational Resources Information Center

    Hancock, Kenneth

    2015-01-01

    This report is a longitudinal simulation study that examines how the ratio of state support to local support affects the number of school districts that break the common schools' funding formula, which in turn affects the equity of distribution to the common schools. After nearly two decades of adequately supporting the funding formula, Oklahoma…

  3. Full-reference quality estimation for images with different spatial resolutions.

    PubMed

    Demirtas, Ali Murat; Reibman, Amy R; Jafarkhani, Hamid

    2014-05-01

    Multimedia communication is becoming pervasive because of the progress in wireless communications and multimedia coding. Estimating the quality of the visual content accurately is crucial in providing satisfactory service. State-of-the-art visual quality assessment approaches are effective when the input image and reference image have the same resolution. However, finding the quality of an image that has a spatial resolution different from that of the reference image is still a challenging problem. To solve this problem, we develop a quality estimator (QE), which computes the quality of the input image without resampling the reference or the input images. In this paper, we begin by identifying the potential weaknesses of previous approaches used to estimate the quality of experience. Next, we design a QE to estimate the quality of a distorted image with a lower resolution compared with the reference image. We also propose a subjective test environment to explore the success of the proposed algorithm in comparison with other QEs. When the input and test images have different resolutions, the subjective tests demonstrate that in most cases the proposed method works better than other approaches. In addition, the proposed algorithm also performs well when the reference image and the test image have the same resolution. PMID:24686279

  4. On the Difference between Seeing and Image Quality: When the Turbulence Outer Scale Enters the Game

    NASA Astrophysics Data System (ADS)

    Martinez, P.; Kolb, J.; Sarazin, M.; Tokovinin, A.

    2010-09-01

    We attempt to clarify the frequent confusion between seeing and image quality for large telescopes. The full width at half maximum of a stellar image is commonly considered to be equal to the atmospheric seeing. However the outer scale of the turbulence, which corresponds to a reduction in the low frequency content of the phase perturbation spectrum, plays a significant role in the improvement of image quality at the focus of a telescope. The image quality is therefore different (and in some cases by a large factor) from the atmospheric seeing that can be measured by dedicated seeing monitors, such as a differential image motion monitor.
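    The size of this effect can be estimated with a widely used approximation (attributed to Tokovinin) relating the von Karman FWHM to the Kolmogorov seeing via the Fried parameter r0 and the outer scale L0: FWHM_vK ≈ FWHM_Kolm * sqrt(1 - 2.183 (r0/L0)^0.356), valid for r0/L0 << 1. The formula is quoted from memory and the example values (500 nm, L0 = 25 m) are illustrative; this is not the paper's computation.

```python
import numpy as np

# Hedged sketch: image-quality FWHM vs. atmospheric seeing with a finite
# outer scale, using the Tokovinin-style approximation named above.

def vk_fwhm(seeing_arcsec, wavelength_m=500e-9, L0_m=25.0):
    """Von Karman FWHM (arcsec) from Kolmogorov seeing via the Fried parameter."""
    seeing_rad = np.radians(seeing_arcsec / 3600.0)
    r0 = 0.98 * wavelength_m / seeing_rad        # seeing [rad] ~ 0.98 lambda / r0
    return seeing_arcsec * np.sqrt(1.0 - 2.183 * (r0 / L0_m) ** 0.356)

fwhm = vk_fwhm(1.0)   # 1.0" seeing at 500 nm, L0 = 25 m -> noticeably sharper image
```

For these example numbers the delivered image FWHM comes out around 0.8", markedly better than the 1.0" seeing, which is the point the abstract makes.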

  5. A comparison of Image Quality Models and Metrics Predicting Object Detection

    NASA Technical Reports Server (NTRS)

    Rohaly, Ann Marie; Ahumada, Albert J., Jr.; Watson, Andrew B.; Null, Cynthia H. (Technical Monitor)

    1995-01-01

    Many models and metrics for image quality predict image discriminability, the visibility of the difference between a pair of images. Some image quality applications, such as the quality of imaging radar displays, are concerned with object detection and recognition. Object detection involves looking for one of a large set of object sub-images in a large set of background images and has been approached from this general point of view. We find that discrimination models and metrics can predict the relative detectability of objects in different images, suggesting that these simpler models may be useful in some object detection and recognition applications. Here we compare three alternative measures of image discrimination, a multiple frequency channel model, a single filter model, and RMS error.

  6. Quantitative and qualitative image quality analysis of super resolution images from a low cost scanning laser ophthalmoscope

    NASA Astrophysics Data System (ADS)

    Murillo, Sergio; Echegaray, Sebastian; Zamora, Gilberto; Soliz, Peter; Bauman, Wendall

    2011-03-01

    The lurking epidemic of eye diseases caused by diabetes and aging will put more than 130 million Americans at risk of blindness by 2020. Screening has been touted as a means to prevent blindness by identifying those individuals at risk. However, the cost of most of today's commercial retinal imaging devices makes their use economically impractical for mass screening. Thus, low cost devices are needed. With these devices, low cost often comes at the expense of image quality with high levels of noise and distortion hindering the clinical evaluation of those retinas. A software-based super resolution (SR) reconstruction methodology that produces images with improved resolution and quality from multiple low resolution (LR) observations is introduced. The LR images are taken with a low-cost Scanning Laser Ophthalmoscope (SLO). The non-redundant information of these LR images is combined to produce a single image in an implementation that also removes noise and imaging distortions while preserving fine blood vessels and small lesions. The feasibility of using the resulting SR images for screening of eye diseases was tested using quantitative and qualitative assessments. Qualitatively, expert image readers evaluated their ability of detecting clinically significant features on the SR images and compared their findings with those obtained from matching images of the same eyes taken with commercially available high-end cameras. Quantitatively, measures of image quality were calculated from SR images and compared to subject-matched images from a commercial fundus imager. Our results show that the SR images have indeed enough quality and spatial detail for screening purposes.

  7. A Multivariate Model for Coastal Water Quality Mapping Using Satellite Remote Sensing Images

    PubMed Central

    Su, Yuan-Fong; Liou, Jun-Jih; Hou, Ju-Chen; Hung, Wei-Chun; Hsu, Shu-Mei; Lien, Yi-Ting; Su, Ming-Daw; Cheng, Ke-Sheng; Wang, Yeng-Fung

    2008-01-01

    This study demonstrates the feasibility of coastal water quality mapping using satellite remote sensing images. Water quality sampling campaigns were conducted over a coastal area in northern Taiwan for measurements of three water quality variables including Secchi disk depth, turbidity, and total suspended solids. SPOT satellite images nearly concurrent with the water quality sampling campaigns were also acquired. A spectral reflectance estimation scheme proposed in this study was applied to SPOT multispectral images for estimation of the sea surface reflectance. Two models, univariate and multivariate, for water quality estimation using the sea surface reflectance derived from SPOT images were established. The multivariate model takes into consideration the wavelength-dependent combined effect of individual seawater constituents on the sea surface reflectance and is superior to the univariate model. Finally, quantitative coastal water quality mapping was accomplished by substituting the pixel-specific spectral reflectance into the multivariate water quality estimation model.
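    The multivariate step — regressing a water-quality variable on reflectance in several spectral bands rather than one — can be sketched as a least-squares fit. All reflectances, coefficients, and noise levels below are synthetic assumptions, not the study's data.

```python
import numpy as np

# Hedged sketch: multivariate linear model mapping multi-band sea-surface
# reflectance to a water-quality variable (e.g., turbidity). Synthetic data.

rng = np.random.default_rng(7)
n_samples, n_bands = 40, 3                   # SPOT-like: green, red, NIR
reflectance = rng.uniform(0.01, 0.15, (n_samples, n_bands))
true_coef = np.array([120.0, -60.0, 300.0])  # assumed band sensitivities
turbidity = reflectance @ true_coef + 5.0 + rng.normal(0.0, 0.5, n_samples)

# Fit the multivariate linear model by least squares (with intercept)
X = np.column_stack([reflectance, np.ones(n_samples)])
coef, *_ = np.linalg.lstsq(X, turbidity, rcond=None)

pred = X @ coef
r = np.corrcoef(pred, turbidity)[0, 1]       # goodness of fit
```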

  8. Sentinel-2 radiometric image quality commissioning: first results

    NASA Astrophysics Data System (ADS)

    Lachérade, S.; Lonjou, V.; Farges, M.; Gamet, P.; Marcq, S.; Raynaud, J.-L.; Trémas, T.

    2015-10-01

    In partnership with the European Commission and in the frame of the Copernicus program, the European Space Agency (ESA) is developing the Sentinel-2 optical imaging mission devoted to the operational monitoring of land and coastal areas. The Sentinel-2 mission is based on a satellites constellation deployed in polar sun-synchronous orbit. Sentinel-2 offers a unique combination of global coverage with a wide field of view (290km), a high revisit (5 days with two satellites), a high spatial resolution (10m, 20m and 60m) and multi-spectral imagery (13 spectral bands in visible and shortwave infrared domains). The first satellite, Sentinel-2A, has been launched in June 2015. The Sentinel-2A Commissioning Phase starts immediately after the Launch and Early Orbit Phase and continues until the In-Orbit Commissioning Review which is planned three months after the launch. The Centre National d'Etudes Spatiales (CNES) supports ESA/ESTEC to insure the Calibration/Validation commissioning phase during the first three months in flight. This paper provides first an overview of the Sentinel-2 system and a description of the products delivered by the ground segment associated to the main radiometric specifications to achieve. Then the paper focuses on the preliminary radiometric results obtained during the in-flight commissioning phase. The radiometric methods and calibration sites used in the CNES image quality center to reach the specifications of the sensor are described. A status of the Sentinel-2A radiometric performances at the end of the first three months after the launch is presented. We will particularly address in this paper the results in term of absolute calibration, pixel to pixel relative sensitivity and MTF estimation.

  9. The Osteoarthritis Initiative (OAI) magnetic resonance imaging quality assurance update

    PubMed Central

    Schneider, E.; NessAiver, M.

    2012-01-01

    Objective Longitudinal quantitative evaluation of cartilage disease requires reproducible measurements over time. We report 8 years of quality assurance (QA) metrics for quantitative magnetic resonance (MR) knee analyses from the Osteoarthritis Initiative (OAI) and show the impact of MR system, phantom, and acquisition protocol changes. Method Key 3 T MR QA metrics, including signal-to-noise, signal uniformity, T2 relaxation times, and geometric distortion, were quantified monthly on two different phantoms using an automated program. Results Over 8 years, phantom measurements showed root-mean-square coefficient-of-variation reproducibility of <0.25% (190.0 mm diameter) and <0.20% (148.0 mm length), resulting in spherical volume reproducibility of <0.35%. T2 relaxation time reproducibility varied from 1.5% to 5.3%; seasonal fluctuations were observed at two sites. All other QA goals were met except: slice thicknesses were consistently larger than nominal on turbo spin echo images; knee coil signal uniformity and signal level varied significantly over time. Conclusions The longitudinal variations for a spherical volume should have minimal impact on the accuracy and reproducibility of cartilage volume and thickness measurements as they are an order of magnitude smaller than reported for either unpaired or paired (repositioning and reanalysis) precision errors. This stability should enable direct comparison of baseline and follow-up images. Cross-comparison of the geometric results from all four OAI sites reveal that the MR systems do not statistically differ and enable results to be pooled. MR QA results identified similar technical issues as previously published. Geometric accuracy stability should have the greatest impact on quantitative analysis of longitudinal change in cartilage volume and thickness precision. PMID:23092792
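    The root-mean-square coefficient of variation (RMS-CV) used above as the reproducibility metric can be computed as follows; the phantom measurements are synthetic, chosen only to stay within the <0.25% figure quoted.

```python
import numpy as np

# Hedged sketch: RMS coefficient of variation across repeated phantom
# measurements. Each row is one measurement series over time (synthetic).

def rms_cv_percent(measurements):
    """RMS of per-series CV (sample std / mean), expressed in percent."""
    m = np.asarray(measurements, dtype=float)
    cv = m.std(axis=1, ddof=1) / m.mean(axis=1)
    return 100.0 * np.sqrt(np.mean(cv ** 2))

# e.g., monthly measurements of a 190.0 mm phantom diameter at two sites
diameter = [[190.02, 189.98, 190.05, 189.95],
            [190.10, 190.08, 189.99, 190.03]]
rms_cv = rms_cv_percent(diameter)
```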

  10. Spectral CT Imaging of Laryngeal and Hypopharyngeal Squamous Cell Carcinoma: Evaluation of Image Quality and Status of Lymph Nodes

    PubMed Central

    Li, Wei; Wang, Zhongzhou; Pang, Tao; Li, Jun; Shi, Hao; Zhang, Chengqi

    2013-01-01

    Purpose The purpose of this study was to evaluate image quality and the status of lymph nodes in laryngeal and hypopharyngeal squamous cell carcinoma (SCC) patients using spectral CT imaging. Materials and Methods Thirty-eight patients with laryngeal and hypopharyngeal SCCs were scanned in spectral CT mode in the venous phase. The conventional 140-kVp polychromatic images and one hundred and one sets of monochromatic images were generated, ranging from 40 keV to 140 keV. The mean optimal keV was calculated on the monochromatic images. The image quality of the mean optimal keV monochromatic images and the polychromatic images was compared using two different methods: a quantitative analysis and a qualitative analysis. The HU curve slope (λHU) was calculated for the target lymph nodes and the primary lesion. The ratio of λHU between the metastatic and non-metastatic lymph node groups was studied. Results A total of 38 primary lesions were included. The mean optimal keV was 55±1.77 keV on the monochromatic images. By both evaluation methods, image quality was significantly better on monochromatic images than on polychromatic images (p<0.05). The ratio of λHU between metastatic and non-metastatic lymph nodes differed significantly in the venous phase images (p<0.05). Conclusion Monochromatic images obtained with spectral CT can be used to improve the image quality of laryngeal and hypopharyngeal SCC imaging and N-staging accuracy. The quantitative ratio of λHU may be helpful for differentiating between metastatic and non-metastatic cervical lymph nodes. PMID:24386214
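    A common way to quantify a spectral HU curve slope (λHU) is the drop in CT number between a low and a high monochromatic energy divided by the energy gap; the 40-100 keV window and all HU values below are illustrative assumptions, not the study's definition or measurements.

```python
# Hedged sketch: spectral HU curve slope and the normalized slope ratio
# between a node ROI and the primary-lesion ROI. All numbers are made up.

def hu_slope(hu_low, hu_high, kev_low=40, kev_high=100):
    """CT-number drop per keV between two monochromatic energies (assumed form)."""
    return (hu_low - hu_high) / (kev_high - kev_low)

lesion_slope = hu_slope(hu_low=220.0, hu_high=80.0)   # primary tumor ROI (hypothetical)
node_slope = hu_slope(hu_low=190.0, hu_high=72.0)     # suspicious node ROI (hypothetical)

# Normalized ratio of the node slope to the primary-lesion slope
slope_ratio = node_slope / lesion_slope
```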

  11. Objective Image-Quality Assessment for High-Resolution Photospheric Images by Median Filter-Gradient Similarity

    NASA Astrophysics Data System (ADS)

    Deng, Hui; Zhang, Dandan; Wang, Tianyu; Ji, Kaifan; Wang, Feng; Liu, Zhong; Xiang, Yongyuan; Jin, Zhenyu; Cao, Wenda

    2015-05-01

    All next-generation ground-based and space-based solar telescopes require a good quality-assessment metric to evaluate their imaging performance. In this paper, a new image quality metric, the median filter-gradient similarity (MFGS), is proposed for photospheric images. MFGS is a no-reference/blind objective image-quality metric (IQM) that yields a measurement between 0 and 1; it was evaluated on short-exposure photospheric images captured by the New Vacuum Solar Telescope (NVST) of the Fuxian Solar Observatory and by the Solar Optical Telescope (SOT) onboard the Hinode satellite. The results show that (1) the measured value of the MFGS changes monotonically from 1 to 0 with degradation of image quality; (2) there exists a linear correlation between the measured values of the MFGS and the root-mean-square contrast (RMS-contrast) of the granulation; (3) the MFGS is less affected by the image contents than the granular RMS-contrast. Overall, the MFGS is a good alternative for the quality assessment of photospheric images.
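    A rough numerical sketch of the idea behind MFGS, assuming a 3×3 median filter and an SSIM-style similarity between the mean gradient magnitudes of the image and of its median-filtered version (the published definition may differ in filter size and gradient operator):

```python
import numpy as np

def median3(img):
    """3x3 median filter (edge-padded), built from shifted views."""
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    stack = np.stack([p[i:i + h, j:j + w] for i in range(3) for j in range(3)])
    return np.median(stack, axis=0)

def mean_gradient(img):
    """Mean gradient magnitude over the image."""
    gy, gx = np.gradient(img)
    return np.hypot(gx, gy).mean()

def mfgs(img):
    """Median filter-gradient similarity sketch: similarity between the mean
    gradient magnitude of the image and of its median-filtered version."""
    g1 = mean_gradient(img)
    g2 = mean_gradient(median3(img))
    return 2.0 * g1 * g2 / (g1 ** 2 + g2 ** 2)
```

    A smooth image is barely changed by median filtering (similarity near 1), while noise-dominated gradients are suppressed by the filter, pulling the value toward 0, matching the monotonic behaviour reported above.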

  12. Determination of pork quality attributes using hyperspectral imaging technique

    NASA Astrophysics Data System (ADS)

    Qiao, Jun; Wang, Ning; Ngadi, M. O.; Gunenc, Aynur

    2005-11-01

    Meat grading has long been a research topic because of the large variations among meat products. Many subjective assessment methods with poor repeatability and tedious procedures are still widely used in the meat industry. In this study, a hyperspectral-imaging-based technique was developed to achieve fast, accurate, and objective determination of pork quality attributes. The system was able to extract the spectral and spatial characteristics for simultaneous determination of drip loss and pH in pork meat. Two sets of six significant feature wavelengths were selected for predicting drip loss (590, 645, 721, 752, 803 and 850 nm) and pH (430, 448, 470, 890, 980 and 999 nm). Two feed-forward neural network models were developed. The results showed that the correlation coefficients (r) between predicted and actual values were 0.71 (drip loss) and 0.58 (pH) for Model 1, and 0.80 (drip loss) and 0.67 (pH) for Model 2. The color levels of meat samples were also mapped successfully based on a digitalized Meat Color Standard.
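    The prediction step of such feed-forward models can be sketched as a plain forward pass; the hidden-layer size, activation, and weights below are illustrative placeholders, not the paper's fitted network:

```python
import math

def forward(features, w_hidden, b_hidden, w_out, b_out):
    """One-hidden-layer feed-forward pass: six feature-wavelength
    reflectances in, one quality attribute (e.g. drip loss) out."""
    hidden = [math.tanh(sum(w * x for w, x in zip(row, features)) + b)
              for row, b in zip(w_hidden, b_hidden)]
    return sum(w * h for w, h in zip(w_out, hidden)) + b_out

# Hypothetical reflectances at the six drip-loss wavelengths (590-850 nm)
reflectance = [0.42, 0.38, 0.55, 0.61, 0.47, 0.50]
```

    In practice the weights would come from training against measured drip-loss and pH reference values.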

  13. Standardized methods for assessing the imaging quality of intraocular lenses

    NASA Astrophysics Data System (ADS)

    Norrby, N. E. Sverker

    1995-11-01

    The relative merits of three standardized methods for assessing the imaging quality of intraocular lenses are discussed based on theoretical modulation-transfer-function calculations. The standards are ANSI Z80.7 1984 from the American National Standards Institute, now superseded by ANSI Z80.7 1994, and the proposed ISO 11979-2 from the International Organization for Standardization. They entail different test configurations and approval limits, respectively: 60% resolution efficiency in air, 70% resolution efficiency in aqueous humor, and 0.43 modulation at 100 line pairs/mm in a model eye. The ISO working group found that the latter corresponds to 60% resolution efficiency in air in a ring test among eight laboratories on a sample of 39 poly(methyl) methacrylate lenses and four silicone lenses spanning the power (in aqueous humor) range of 10-30 D. In both ANSI Z80.7 1994 and ISO 11979-2, a 60% resolution efficiency in air remains an optional approval limit. It is concluded that the ISO configuration is preferred, because it puts the intraocular lens into the context of the optics of the eye. Note that the ISO standard is tentative and is currently being voted on.

  14. Standardized methods for assessing the imaging quality of intraocular lenses.

    PubMed

    Norrby, N E

    1995-11-01

    The relative merits of three standardized methods for assessing the imaging quality of intraocular lenses are discussed based on theoretical modulation-transfer-function calculations. The standards are ANSI Z80.7 1984 from the American National Standards Institute, now superseded by ANSI Z80.7 1994, and the proposed ISO 11979-2 from the International Organization for Standardization. They entail different test configurations and approval limits, respectively: 60% resolution efficiency in air, 70% resolution efficiency in aqueous humor, and 0.43 modulation at 100 line pairs/mm in a model eye. The ISO working group found that the latter corresponds to 60% resolution efficiency in air in a ring test among eight laboratories on a sample of 39 poly(methyl) methacrylate lenses and four silicone lenses spanning the power (in aqueous humor) range of 10-30 D. In both ANSI Z80.7 1994 and ISO 11979-2, a 60% resolution efficiency in air remains an optional approval limit. It is concluded that the ISO configuration is preferred, because it puts the intraocular lens into the context of the optics of the eye. Note that the ISO standard is tentative and is currently being voted on. PMID:21060604

  15. High-quality remote interactive imaging in the operating theatre

    NASA Astrophysics Data System (ADS)

    Grimstead, Ian J.; Avis, Nick J.; Evans, Peter L.; Bocca, Alan

    2009-02-01

    We present a high-quality display system that enables the remote access within an operating theatre of high-end medical imaging and surgical planning software. Currently, surgeons often use printouts from such software for reference during surgery; our system enables surgeons to access and review patient data in a sterile environment, viewing real-time renderings of MRI & CT data as required. Once calibrated, our system displays shades of grey in Operating Room lighting conditions (removing any gamma correction artefacts). Our system does not require any expensive display hardware, is unobtrusive to the remote workstation and works with any application without requiring additional software licenses. To extend the native 256 levels of grey supported by a standard LCD monitor, we have used the concept of "PseudoGrey" where slightly off-white shades of grey are used to extend the intensity range from 256 to 1,785 shades of grey. Remote access is facilitated by a customized version of UltraVNC, which corrects remote shades of grey for display in the Operating Room. The system is successfully deployed at Morriston Hospital, Swansea, UK, and is in daily use during Maxillofacial surgery. More formal user trials and quantitative assessments are being planned for the future.
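    One plausible construction of such a "PseudoGrey" palette (the deployed system's exact channel ordering is an assumption here): between consecutive true greys, six intermediate levels are formed by bumping proper subsets of the R, G, B channels by one count, ordered by approximate luminance contribution, giving 255 × 7 = 1,785 shades:

```python
# Six non-empty channel bumps short of a full (1,1,1) step, ordered by
# approximate luminance contribution (Rec. 601 weights).
BUMPS = sorted(
    [(r, g, b) for r in (0, 1) for g in (0, 1) for b in (0, 1)
     if (r, g, b) not in ((0, 0, 0), (1, 1, 1))],
    key=lambda c: 0.299 * c[0] + 0.587 * c[1] + 0.114 * c[2],
)

def pseudogrey(level):
    """Map an extended grey level in [0, 1784] to an 8-bit RGB triple."""
    base, sub = divmod(level, 7)
    if sub == 0:
        return (base, base, base)
    dr, dg, db = BUMPS[sub - 1]
    return (base + dr, base + dg, base + db)
```

    The slightly off-white triples are imperceptible as colour on a calibrated display but subdivide each 8-bit grey step into seven near-equal luminance increments.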

  16. Filling factor characteristics of masking phase-only hologram on the quality of reconstructed images

    NASA Astrophysics Data System (ADS)

    Deng, Yuanbo; Chu, Daping

    2016-03-01

    The present study evaluates the filling factor characteristics of masking phase-only hologram on its corresponding reconstructed image. A square aperture with different filling factor is added on the phase-only hologram of the target image, and average cross-section intensity profile of the reconstructed image is obtained and deconvolved with that of the target image to calculate the point spread function (PSF) of the image. Meanwhile, Lena image is used as the target image and evaluated by metrics RMSE and SSIM to assess the quality of reconstructed image. The results show that the PSF of the image agrees with the PSF of the Fourier transform of the mask, and as the filling factor of the mask decreases, the width of PSF increases and the quality of reconstructed image drops. These characteristics could be used in practical situations where phase-only hologram is confined or need to be sliced or tiled.
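    The reported broadening of the PSF as the filling factor decreases follows directly from the PSF being the Fourier transform of the aperture mask. A 1-D numerical sketch with toy sizes (not the study's actual hologram geometry):

```python
import numpy as np

def psf_halfwidth(fill, hologram=64, pad=1024):
    """Half-power width (in frequency bins) of the PSF of a 1-D aperture
    covering `fill` of the hologram, via zero-padded FFT."""
    aperture = np.zeros(pad)
    aperture[: int(hologram * fill)] = 1.0
    power = np.abs(np.fft.fft(aperture)) ** 2
    power = np.fft.fftshift(power / power.max())
    return int(np.count_nonzero(power >= 0.5))
```

    Halving the filling factor roughly doubles the half-power width of the PSF main lobe, consistent with the reported drop in reconstructed-image quality.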

  17. [Image quality evaluation of new image reconstruction methods applying the iterative reconstruction].

    PubMed

    Takata, Tadanori; Ichikawa, Katsuhiro; Hayashi, Hiroyuki; Mitsui, Wataru; Sakuta, Keita; Koshida, Haruka; Yokoi, Tomohiro; Matsubara, Kousuke; Horii, Jyunsei; Iida, Hiroji

    2012-01-01

    The purpose of this study was to evaluate the image quality of an iterative reconstruction method, the iterative reconstruction in image space (IRIS), implemented in a 128-slice multi-detector computed tomography (MDCT) system, the Siemens Somatom Definition Flash (Definition). We evaluated image noise by standard deviation (SD), as many previous studies have done, and in addition measured the modulation transfer function (MTF), noise power spectrum (NPS), and perceptual low-contrast detectability using a water phantom including a low-contrast object with a 10 Hounsfield unit (HU) contrast, to evaluate whether the noise reduction of IRIS was effective. The SD and NPS were measured from images of a water phantom. The MTF was measured from images of a thin metal wire and a bar pattern phantom with a bar contrast of 125 HU. The NPS of IRIS was lower than that of filtered back projection (FBP) in the middle and high frequency regions. The SD values were reduced by 21%. The MTFs of IRIS and FBP measured with the wire phantom coincided precisely. However, for the bar pattern phantom, the MTF values of IRIS at 0.625 and 0.833 cycle/mm were lower than those of FBP. Despite the reduction of the SD and the NPS, the low-contrast detectability study indicated no significant difference between IRIS and FBP. From these results, it was demonstrated that IRIS reduced noise while exactly preserving high-contrast resolution, with slight degradation of middle-contrast resolution, and slightly improved low-contrast detectability, though without statistical significance. PMID:22516592
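    The NPS measurement from uniform water-phantom images can be sketched with the standard averaged-periodogram estimator (simple mean detrending here; the authors' protocol may fit and subtract a low-order background instead):

```python
import numpy as np

def nps_2d(rois, pixel_mm=0.5):
    """2-D noise power spectrum estimate from uniform-phantom ROIs:
    average periodogram of the mean-subtracted ROIs, scaled by pixel area."""
    rois = np.asarray(rois, dtype=float)
    n_roi, ny, nx = rois.shape
    acc = np.zeros((ny, nx))
    for roi in rois:
        noise = roi - roi.mean()  # remove the DC (water) level
        acc += np.abs(np.fft.fft2(noise)) ** 2
    return acc * (pixel_mm ** 2) / (n_roi * nx * ny)
```

    Integrating the estimate over frequency returns the noise variance (Parseval's theorem), which is a useful self-check on the normalisation.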

  18. Using a NPWE model observer to assess suitable image quality for a digital mammography quality assurance programme.

    PubMed

    Monnin, P; Bochud, F O; Verdun, F R

    2010-01-01

    A method of objectively determining imaging performance for a mammography quality assurance programme for digital systems was developed. The method is based on the assessment of the visibility of a spherical microcalcification of 0.2 mm using a quasi-ideal observer model. It requires the assessment of the spatial resolution (modulation transfer function) and the noise power spectra of the systems. The contrast is measured using a 0.2-mm thick Al sheet and polymethylmethacrylate (PMMA) blocks. The minimal image quality was defined as that giving a target contrast-to-noise ratio (CNR) of 5.4. Several evaluations of this objective method for evaluating image quality in mammography quality assurance programmes have been carried out on computed radiography (CR) and digital radiography (DR) mammography systems. The measurement gives the threshold CNR necessary to reach the minimum standard image quality required with regard to the visibility of a 0.2-mm microcalcification. This method may replace the CDMAM image evaluation and simplify the threshold contrast visibility test used in mammography quality assurance. PMID:20395413
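    The CNR measurement behind the threshold can be sketched as follows, assuming the common pooled-noise definition from ROI means and standard deviations (the European guidelines prescribe the exact ROI placement and formula):

```python
import math

def cnr(signal_roi, background_roi):
    """Contrast-to-noise ratio between an ROI behind the 0.2-mm Al sheet
    and a background ROI, using pooled noise."""
    def stats(roi):
        m = sum(roi) / len(roi)
        var = sum((v - m) ** 2 for v in roi) / (len(roi) - 1)
        return m, var
    ms, vs = stats(signal_roi)
    mb, vb = stats(background_roi)
    return abs(ms - mb) / math.sqrt((vs + vb) / 2.0)
```

    A system passes when its measured CNR for the reference PMMA thickness stays at or above the threshold value tied to the 0.2-mm microcalcification visibility criterion.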

  19. Psychophysical evaluation of the image quality of a dynamic flat-panel digital x-ray image detector using the threshold contrast detail detectability (TCDD) technique

    NASA Astrophysics Data System (ADS)

    Davies, Andrew G.; Cowen, Arnold R.; Bruijns, Tom J. C.

    1999-05-01

    We are currently in an era of active development of the digital X-ray imaging detectors that will serve the radiological communities in the new millennium. The rigorous comparative physical evaluations of such devices are therefore becoming increasingly important from both the technical and clinical perspectives. The authors have been actively involved in the evaluation of a clinical demonstration version of a flat-panel dynamic digital X-ray image detector (or FDXD). Results of objective physical evaluation of this device have been presented elsewhere at this conference. The imaging performance of FDXD under radiographic exposure conditions has been previously reported, and in this paper a psychophysical evaluation of the FDXD detector operating under continuous fluoroscopic conditions is presented. The evaluation technique employed was the threshold contrast detail detectability (TCDD) technique, which enables image quality to be measured on devices operating in the clinical environment. This approach addresses image quality in the context of both the image acquisition and display processes, and uses human observers to measure performance. The Leeds test objects TO[10] and TO[10+] were used to obtain comparative measurements of performance on the FDXD and two digital spot fluorography (DSF) systems, one utilizing a Plumbicon camera and the other a state-of-the-art CCD camera. Measurements were taken at a range of detector entrance exposure rates, namely 6, 12, 25 and 50 µR/s. In order to facilitate comparisons between the systems, all fluoroscopic image processing, such as noise reduction algorithms, was disabled during the experiments. At the highest dose rate FDXD significantly outperformed the DSF comparison systems in the TCDD comparisons. At 25 and 12 µR/s all three systems performed in an equivalent manner, and at the lowest exposure rate FDXD was inferior to the two DSF systems. At standard fluoroscopic exposures, FDXD performed in an equivalent manner.

  20. Quality metric in matched Laplacian of Gaussian response domain for blind adaptive optics image deconvolution

    NASA Astrophysics Data System (ADS)

    Guo, Shiping; Zhang, Rongzhi; Yang, Yikang; Xu, Rong; Liu, Changhai; Li, Jisheng

    2016-04-01

    Adaptive optics (AO) in conjunction with subsequent postprocessing techniques have obviously improved the resolution of turbulence-degraded images in ground-based astronomical observations or artificial space objects detection and identification. However, important tasks involved in AO image postprocessing, such as frame selection, stopping iterative deconvolution, and algorithm comparison, commonly need manual intervention and cannot be performed automatically due to a lack of widely agreed on image quality metrics. In this work, based on the Laplacian of Gaussian (LoG) local contrast feature detection operator, we propose a LoG domain matching operation to perceive effective and universal image quality statistics. Further, we extract two no-reference quality assessment indices in the matched LoG domain that can be used for a variety of postprocessing tasks. Three typical space object images with distinct structural features are tested to verify the consistency of the proposed metric with perceptual image quality through subjective evaluation.
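    The LoG feature extraction underlying the proposed metric can be sketched as a frequency-domain filter; the matching operation and the two derived no-reference indices are not reproduced here, and the sigma value is an arbitrary choice:

```python
import numpy as np

def log_response(img, sigma=1.5):
    """Laplacian-of-Gaussian response computed in the frequency domain:
    multiply the spectrum by -(2*pi*f)^2 times a Gaussian of width sigma."""
    ny, nx = img.shape
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    f2 = fx ** 2 + fy ** 2
    kernel = -4.0 * np.pi ** 2 * f2 * np.exp(-2.0 * np.pi ** 2 * sigma ** 2 * f2)
    return np.real(np.fft.ifft2(np.fft.fft2(img) * kernel))
```

    The response is zero on flat regions and strongly negative at bright local-contrast features, which is what makes its statistics sensitive to blur introduced by residual turbulence.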

  1. Comparison of image compression techniques for high quality based on properties of visual perception

    NASA Astrophysics Data System (ADS)

    Algazi, V. Ralph; Reed, Todd R.

    1991-12-01

    The growing interest and importance of high quality imaging has several roots: Imaging and graphics, or more broadly multimedia, as the predominant means of man-machine interaction on computers, and the rapid maturing of advanced television technology. Because of their economic importance, proposed advanced television standards are being discussed and evaluated for rapid adoption. These advanced standards are based on well known image compression techniques, used for very low bit rate video communications as well. In this paper, we examine the expected improvement in image quality that advanced television and imaging techniques should bring about. We then examine and discuss the data compression techniques which are commonly used, to determine if they are capable of providing the achievable gain in quality, and to assess some of their limitations. We also discuss briefly the potential of these techniques for very high quality imaging and display applications, which extend beyond the range of existing and proposed television standards.

  2. Evaluation of image quality of MRI data for brain tumor surgery

    NASA Astrophysics Data System (ADS)

    Heckel, Frank; Arlt, Felix; Geisler, Benjamin; Zidowitz, Stephan; Neumuth, Thomas

    2016-03-01

    3D medical images are important components of modern medicine. Their usefulness for the physician depends on their quality, though. Only high-quality images allow accurate and reproducible diagnosis and appropriate support during treatment. We have analyzed 202 MRI images for brain tumor surgery in a retrospective study. Both an experienced neurosurgeon and an experienced neuroradiologist rated each available image with respect to its role in the clinical workflow, its suitability for this specific role, various image quality characteristics, and imaging artifacts. Our results show that MRI data acquired for brain tumor surgery does not always fulfill the required quality standards and that there is a significant disagreement between the surgeon and the radiologist, with the surgeon being more critical. Noise, resolution, as well as the coverage of anatomical structures were the most important criteria for the surgeon, while the radiologist was mainly disturbed by motion artifacts.

  3. Perceptual difference paradigm for analyzing image quality of fast MRI techniques

    NASA Astrophysics Data System (ADS)

    Wilson, David L.; Salem, Kyle A.; Huo, Donglai; Duerk, Jeffrey L.

    2003-05-01

    We are developing a method to objectively quantify image quality and applying it to the optimization of fast magnetic resonance imaging methods. In MRI, to capture the details of a dynamic process, it is critical to have both high temporal and spatial resolution. However, there is typically a trade-off between the two, making the sequence engineer choose to optimize imaging speed or spatial resolution. In response to this problem, a number of different fast MRI techniques have been proposed. To evaluate different fast MRI techniques quantitatively, we use a perceptual difference model (PDM) that incorporates various components of the human visual system. The PDM was validated using subjective image quality ratings by naive observers and task-based measures as defined by radiologists. Using the PDM, we investigated the effects of various imaging parameters on image quality and quantified the degradation due to novel imaging techniques including keyhole, keyhole Dixon fat suppression, and spiral imaging. Results have provided significant information about imaging time versus quality tradeoffs aiding the MR sequence engineer. The PDM has been shown to be an objective tool for measuring image quality and can be used to determine the optimal methodology for various imaging applications.

  4. Digital mammography--DQE versus optimized image quality in clinical environment: an on site study

    NASA Astrophysics Data System (ADS)

    Oberhofer, Nadia; Fracchetti, Alessandro; Springeth, Margareth; Moroder, Ehrenfried

    2010-04-01

    The intrinsic quality of the detection system of 7 different digital mammography units (5 direct radiography DR; 2 computed radiography CR), expressed by DQE, has been compared with their image quality/dose performance in clinical use. DQE measurements followed IEC 62220-1-2, using a tungsten test object for MTF determination. For image quality assessment two different methods were applied: 1) measurement of contrast-to-noise ratio (CNR) according to the European guidelines and 2) contrast-detail (CD) evaluation. The latter was carried out with the phantom CDMAM ver. 3.4 and the commercial software CDMAM Analyser ver. 1.1 (both Artinis) for automated image analysis. The overall image quality index IQFinv proposed by the software has been validated. Correspondence between the two methods was demonstrated by establishing a linear correlation between CNR and IQFinv. All systems were optimized with respect to image quality and average glandular dose (AGD) within the constraints of automatic exposure control (AEC). For each unit, a good image quality level was defined by means of CD analysis, and the corresponding CNR value was taken as the target value. The goal was to achieve, for different PMMA-phantom thicknesses, constant image quality (the target CNR value) at minimum dose. All DR systems exhibited higher DQE and significantly better image quality than the CR systems. Generally, switching, where available, to a target/filter combination with an x-ray spectrum of higher mean energy permitted dose savings at equal image quality. However, several systems did not allow the AEC to be modified in order to apply the optimal radiographic technique in clinical use. The best image quality/dose ratio was achieved by a unit with an a-Se detector and W anode only recently available on the market.
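    The reported CNR-IQFinv correspondence is an ordinary linear correlation; a minimal check can be sketched as below (the CNR and IQFinv readings are illustrative, not the study's data):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Illustrative paired CNR / IQFinv readings
cnr_vals = [3.1, 3.8, 4.6, 5.2, 6.0]
iqf_inv = [1.10, 1.32, 1.58, 1.77, 2.04]
r = pearson_r(cnr_vals, iqf_inv)
```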

  5. Lesion insertion in projection domain for computed tomography image quality assessment

    NASA Astrophysics Data System (ADS)

    Chen, Baiyu; Ma, Chi; Yu, Zhicong; Leng, Shuai; Yu, Lifeng; McCollough, Cynthia

    2015-03-01

    To perform task-based image quality assessment in CT, it is desirable to have a large number of realistic patient images with known diagnostic truth. One effective way to achieve this objective is to create hybrid images that combine patient images with simulated lesions. Because conventional hybrid images generated in the image domain fail to reflect the impact of scan and reconstruction parameters on lesion appearance, this study explored a projection-domain approach. Liver lesion models were forward projected according to the geometry of a commercial CT scanner to acquire lesion projections. The lesion projections were then inserted into patient projections (decoded from commercial CT raw data with the assistance of the vendor) and reconstructed to acquire hybrid images. To validate the accuracy of the forward projection geometry, simulated images reconstructed from the forward projections of a digital ACR phantom were compared to physically acquired ACR phantom images. To validate the hybrid images, lesion models were inserted into patient images and visually assessed. Results showed that the simulated phantom images and the physically acquired phantom images had great similarity in terms of HU accuracy and high-contrast resolution. The lesions in the hybrid images had a realistic appearance and merged naturally into the liver background. In addition, the inserted lesions demonstrated reconstruction-parameter-dependent appearance. Compared to the conventional image-domain approach, our method enables more realistic hybrid images for image quality assessment.
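    The key property exploited by projection-domain insertion is the linearity of forward projection: adding the lesion's projections to the patient's projections is equivalent to projecting the lesion-bearing patient. A toy nearest-neighbour parallel-beam projector illustrates this (the actual study used the commercial scanner's geometry, not this sketch):

```python
import numpy as np

def project(img, angles):
    """Toy nearest-neighbour rotate-and-sum parallel-beam projector.
    Linear in `img`, like any real forward projector."""
    n = img.shape[0]
    c = (n - 1) / 2.0
    ys, xs = np.indices(img.shape)
    rows = []
    for a in angles:
        ca, sa = np.cos(a), np.sin(a)
        xr = np.clip(np.round(ca * (xs - c) - sa * (ys - c) + c).astype(int), 0, n - 1)
        yr = np.clip(np.round(sa * (xs - c) + ca * (ys - c) + c).astype(int), 0, n - 1)
        rows.append(img[yr, xr].sum(axis=0))
    return np.array(rows)

# Insert a simulated lesion's projections into the patient's projections
rng = np.random.default_rng(0)
patient = rng.random((32, 32))
lesion = np.zeros((32, 32))
lesion[12:16, 12:16] = 0.1
angles = np.linspace(0.0, np.pi, 8, endpoint=False)
hybrid_sino = project(patient, angles) + project(lesion, angles)
```

    Reconstructing `hybrid_sino` then passes the lesion through the same reconstruction pipeline as the patient data, which is what gives the lesion its reconstruction-parameter-dependent appearance.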

  6. Improving Appropriateness and Quality in Cardiovascular Imaging: A Review of the Evidence.

    PubMed

    Bhattacharyya, Sanjeev; Lloyd, Guy

    2015-12-01

    High-quality cardiovascular imaging requires a structured process to ensure appropriate patient selection, accurate and reproducible data acquisition, and timely reporting which answers clinical questions and improves patient outcomes. Several guidelines provide frameworks to assess quality. This article reviews interventions to improve quality in cardiovascular imaging, including methods to reduce inappropriate testing, improve accuracy, reduce interobserver variability, and reduce diagnostic and reporting errors. PMID:26628582

  7. Reducing radiation dose without compromising image quality in preoperative perforator flap imaging with CTA using ASIR technology.

    PubMed

    Niumsawatt, Vachara; Debrotwir, Andrew N; Rozen, Warren Matthew

    2014-01-01

    Computed tomographic angiography (CTA) has become a mainstay in preoperative perforator flap planning in the modern era of reconstructive surgery. However, the increased use of CTA does raise the concern of radiation exposure to patients. Several techniques have been developed to decrease radiation dosage without compromising image quality, with varying results. The most recent advance is in the improvement of image reconstruction using an adaptive statistical iterative reconstruction (ASIR) algorithm. We sought to evaluate the image quality of ASIR in preoperative deep inferior epigastric perforator (DIEP) flap surgery, through a direct comparison with conventional filtered back projection (FBP) images. A prospective review of 60 consecutive ASIR and 60 consecutive FBP CTA images using similar protocol (except for radiation dosage) was undertaken, analyzed by 2 independent reviewers. In both groups, we were able to accurately identify axial arteries and their perforators. Subjective analysis of image quality demonstrated no statistically significant difference between techniques. ASIR can thus be used for preoperative imaging with similar image quality to FBP, but with a 60% reduction in radiation delivery to patients. PMID:25058789

  8. Reducing Radiation Dose Without Compromising Image Quality in Preoperative Perforator Flap Imaging With CTA Using ASIR Technology

    PubMed Central

    Niumsawatt, Vachara; Debrotwir, Andrew N.; Rozen, Warren Matthew

    2014-01-01

    Computed tomographic angiography (CTA) has become a mainstay in preoperative perforator flap planning in the modern era of reconstructive surgery. However, the increased use of CTA does raise the concern of radiation exposure to patients. Several techniques have been developed to decrease radiation dosage without compromising image quality, with varying results. The most recent advance is in the improvement of image reconstruction using an adaptive statistical iterative reconstruction (ASIR) algorithm. We sought to evaluate the image quality of ASIR in preoperative deep inferior epigastric perforator (DIEP) flap surgery, through a direct comparison with conventional filtered back projection (FBP) images. A prospective review of 60 consecutive ASIR and 60 consecutive FBP CTA images using similar protocol (except for radiation dosage) was undertaken, analyzed by 2 independent reviewers. In both groups, we were able to accurately identify axial arteries and their perforators. Subjective analysis of image quality demonstrated no statistically significant difference between techniques. ASIR can thus be used for preoperative imaging with similar image quality to FBP, but with a 60% reduction in radiation delivery to patients. PMID:25058789

  9. Exposure reduction and image quality in orthodontic radiology: a review of the literature

    SciTech Connect

    Taylor, T.S.; Ackerman, R.J. Jr.; Hardman, P.K.

    1988-01-01

    This article summarizes the use of rare earth screen technology to achieve high-quality panoramic and cephalometric radiographs with sizable reductions in patient radiation dosage. Collimation, shielding, quality control, and darkroom procedures are reviewed to further reduce patient risk and improve image quality. 34 references.

  10. Recent developments in hyperspectral imaging for assessment of food quality and safety.

    PubMed

    Huang, Hui; Liu, Li; Ngadi, Michael O

    2014-01-01

    Hyperspectral imaging, which combines imaging and spectroscopic technology, is rapidly gaining ground as a non-destructive, real-time detection tool for food quality and safety assessment. Hyperspectral imaging can be used to simultaneously obtain large amounts of spatial and spectral information on the objects being studied. This paper provides a comprehensive review of recent developments in hyperspectral imaging applications for food and food products. The potential and future directions of hyperspectral imaging for food quality and safety control are also discussed. PMID:24759119

  11. Recent Developments in Hyperspectral Imaging for Assessment of Food Quality and Safety

    PubMed Central

    Huang, Hui; Liu, Li; Ngadi, Michael O.

    2014-01-01

    Hyperspectral imaging, which combines imaging and spectroscopic technology, is rapidly gaining ground as a non-destructive, real-time detection tool for food quality and safety assessment. Hyperspectral imaging can be used to simultaneously obtain large amounts of spatial and spectral information on the objects being studied. This paper provides a comprehensive review of recent developments in hyperspectral imaging applications for food and food products. The potential and future directions of hyperspectral imaging for food quality and safety control are also discussed. PMID:24759119

  12. An electron beam imaging system for quality assurance in IORT

    NASA Astrophysics Data System (ADS)

    Casali, F.; Rossi, M.; Morigi, M. P.; Brancaccio, R.; Paltrinieri, E.; Bettuzzi, M.; Romani, D.; Ciocca, M.; Tosi, G.; Ronsivalle, C.; Vignati, M.

    2004-01-01

    Intraoperative radiation therapy is a special radiotherapy technique that enables a high dose of radiation to be given in a single fraction during oncological surgery. The major stumbling block to the large-scale application of the technique is the transfer of the patient, with an open wound, from the operating room to the radiation therapy bunker, with the consequent organisational problems and the increased risk of infection. To overcome these limitations, in the last few years a new kind of linear accelerator, the Novac 7, conceived for direct use in the surgical room, has become available. Novac 7 can deliver electron beams of different energies (3, 5, 7 and 9 MeV), with a high dose rate (up to 20 Gy/min). The aim of this work, funded by ENEA in the framework of a research contract, is the development of an innovative system for on-line measurements of 2D dose distributions and electron beam characterisation, before radiotherapy treatment with Novac 7. The system is made up of the following components: (a) an electron-light converter; (b) a 14 bit cooled CCD camera; (c) a personal computer with ad hoc software for image acquisition and processing. The performance of the prototype has been characterised experimentally with different electron-light converters. Several tests assessed the detector response as a function of pulse number and electron beam energy. Finally, the experimental results concerning beam profiles have been compared with data acquired with other dosimetric techniques. The achieved results indicate that the developed system is suitable for fast quality assurance measurements and verification of 2D dose distributions.

  13. Image Quality Analysis of Eyes Undergoing LASER Refractive Surgery

    PubMed Central

    Sarkar, Samrat; Vaddavalli, Pravin Krishna; Bharadwaj, Shrikant R.

    2016-01-01

    Laser refractive surgery for myopia increases the eye’s higher-order wavefront aberrations (HOA’s). However, little is known about the impact of such optical degradation on post-operative image quality (IQ) of these eyes. This study determined the relation between HOA’s and IQ parameters (peak IQ, dioptric focus that maximized IQ and depth of focus) derived from psychophysical (logMAR acuity) and computational (logVSOTF) through-focus curves in 45 subjects (18 to 31 yrs) before and 1 month after refractive surgery and in 40 age-matched emmetropic controls. Computationally derived peak IQ and its best focus were negatively correlated with the RMS deviation of all HOA’s (HORMS) (r≥-0.5; p<0.001 for all). Computational depth of focus was positively correlated with HORMS (r≥0.55; p<0.001 for all) and negatively correlated with peak IQ (r≥-0.8; p<0.001 for all). All IQ parameters related to logMAR acuity were poorly correlated with HORMS (r≤|0.16|; p>0.16 for all). Increase in HOA’s after refractive surgery is therefore associated with a decline in peak IQ and a persistence of this sub-standard IQ over a larger dioptric range, vis-à-vis before surgery and in age-matched controls. This optical deterioration, however, does not appear to significantly alter psychophysical IQ, suggesting minimal impact of refractive surgery on the subject’s ability to resolve spatial details and their tolerance to blur. PMID:26859302
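    The three parameters read off each through-focus curve can be sketched as below; the default 80% criterion for depth of focus is an assumption, not the paper's stated cutoff:

```python
def through_focus_metrics(defocus, iq, criterion=0.8):
    """Peak IQ, the defocus (D) that maximises IQ, and depth of focus:
    the defocus range over which IQ stays above criterion * peak."""
    peak = max(iq)
    best = defocus[iq.index(peak)]
    kept = [d for d, q in zip(defocus, iq) if q >= criterion * peak]
    return peak, best, max(kept) - min(kept)
```

    A flatter, lower curve, as reported post-operatively, yields a lower peak with a wider depth of focus, which is exactly the negative peak-IQ/depth-of-focus correlation described above.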

  14. Visible to SWIR hyperspectral imaging for produce safety and quality evaluation

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Hyperspectral imaging techniques, combining the advantages of spectroscopy and imaging, have found wider use in food quality and safety evaluation applications during the past decade. In light of the prevalent use of hyperspectral imaging techniques in the visible to near-infrared (VNIR: 400 -1000 n...

  15. How do we watch images? A case of change detection and quality estimation

    NASA Astrophysics Data System (ADS)

    Radun, Jenni; Leisti, Tuomas; Virtanen, Toni; Nyman, Göte

    2012-01-01

    The most common tasks in subjective image estimation are change detection (a detection task) and image quality estimation (a preference task). We examined how the task influences gaze behavior by comparing detection and preference tasks. The eye movements of 16 naïve observers were recorded, with 8 observers in each task. The setting was a flicker paradigm, in which observers see a non-manipulated image, a manipulated version of the image, and again the non-manipulated image, and estimate the difference they perceived between them. The material was photographic, with different image distortions and contents. To examine the spatial distribution of fixations, we defined regions of interest using a memory task and calculated information entropy to estimate how concentrated the fixations were on the image plane. The quality task was faster, needed fewer fixations, and its first eight fixations were more concentrated on certain image areas than those of the change detection task. The bottom-up influences of the image also caused more variation in gaze behavior in the quality estimation task than in the change detection task. The results show that quality estimation is faster and that regions of interest are emphasized more in certain images, compared with the change detection task, which is a scan task in which the whole image is always thoroughly examined. In conclusion, in subjective image estimation studies it is important to consider the task.
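
    The entropy measure used above can be sketched in a few lines of numpy: fixations are binned onto a coarse grid over the image plane, and the Shannon entropy of the resulting distribution indicates how concentrated the gaze was. The grid size and the function name `fixation_entropy` are illustrative choices, not the authors' implementation.

```python
import numpy as np

def fixation_entropy(fix_xy, img_shape, grid=(8, 8)):
    """Shannon entropy (bits) of the spatial distribution of fixations
    over a coarse grid; lower entropy = fixations concentrated on
    fewer image regions."""
    H, W = img_shape
    gy = np.clip((fix_xy[:, 1] * grid[0] // H).astype(int), 0, grid[0] - 1)
    gx = np.clip((fix_xy[:, 0] * grid[1] // W).astype(int), 0, grid[1] - 1)
    counts = np.zeros(grid)
    np.add.at(counts, (gy, gx), 1)          # histogram of fixations per cell
    p = counts.ravel() / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))
```

A single tight cluster of fixations gives 0 bits; fixations spread evenly over an 8x8 grid give 6 bits.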

  16. Quality evaluation of adaptive optical image based on DCT and Rényi entropy

    NASA Astrophysics Data System (ADS)

    Xu, Yuannan; Li, Junwei; Wang, Jing; Deng, Rong; Dong, Yanbing

    2015-04-01

    Adaptive optical telescopes play an increasingly important role in ground-based detection systems, and the volume of adaptive optical images is so large that a suitable quality evaluation method is needed to select good-quality images automatically and save human effort. Adaptive optical images are well known to be no-reference images. In this paper, a new logarithmic evaluation method for adaptive optical images, based on the discrete cosine transform (DCT) and Rényi entropy, is proposed. Using the DCT with a one- or two-dimensional window, the statistical properties of the Rényi entropy of images are studied. Directional Rényi entropy maps of an input image, each containing different information content, are obtained, and their mean values are calculated. For image quality evaluation, the directional Rényi entropy and its standard deviation over the region of interest are selected as indicators of the anisotropy of the image, and the standard deviation of the directional Rényi entropies is taken as the quality evaluation value for an adaptive optical image. Experimental results show that the quality ranking produced by the proposed method matches well with visual inspection.
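
    The core quantities here, a DCT followed by a Rényi entropy of the coefficient distribution, can be sketched with numpy alone. The orthonormal DCT-II matrix is built by hand, the entropy order alpha=3 is an arbitrary choice, and the directional windowing of the paper is not reproduced; this is a minimal sketch, not the authors' algorithm.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (rows are basis vectors)."""
    k = np.arange(n)
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)
    return C

def renyi_entropy(p, alpha=3.0):
    """Rényi entropy of order alpha of a discrete distribution p."""
    p = p[p > 0]
    return np.log2(np.sum(p ** alpha)) / (1.0 - alpha)

def block_renyi(block, alpha=3.0):
    """Rényi entropy of the normalized DCT energy of one image block."""
    C = dct_matrix(block.shape[0])
    coeffs = C @ block @ C.T                 # 2D DCT via matrix products
    energy = coeffs ** 2
    return renyi_entropy((energy / energy.sum()).ravel(), alpha)
```

A uniform distribution over 8 states has entropy 3 bits for any order; a constant block concentrates all DCT energy in the DC coefficient, giving entropy 0.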

  17. No-Reference Image Quality Assessment for ZY3 Imagery in Urban Areas Using Statistical Model

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Cui, W. H.; Yang, F.; Wu, Z. C.

    2016-06-01

    More and more high-spatial-resolution satellite images are being produced as satellite technology improves. However, image quality is not always satisfactory for application. Because of complicated atmospheric conditions and the complex radiative transfer involved in the imaging process, the images often suffer deterioration. In order to assess the quality of remote sensing images over urban areas, we propose a general-purpose image quality assessment method based on feature extraction and machine learning. We use two types of features at multiple scales: one derived from the shape of the histogram, the other from natural scene statistics based on the Generalized Gaussian Distribution (GGD). A 20-D feature vector is extracted for each scale and is assumed to capture the quality degradation characteristics of remote sensing images. We use an SVM to learn to predict image quality scores from these features. For evaluation, we constructed a medium-scale dataset for training and testing, with human subjects providing opinion scores for the degraded images. We used ZY3 satellite images over the Wuhan area (a city in China) to conduct experiments. Experimental results show a good correlation between the predicted scores and the subjective perceptions.
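
    The GGD-based natural scene statistics require estimating the GGD shape parameter from image coefficients. A standard moment-matching estimator (matching the ratio of first and second absolute moments against its closed form in the shape parameter) can be sketched as follows; the grid-search bounds and function name are illustrative assumptions, not the authors' exact feature code.

```python
import math
import numpy as np

def ggd_shape(x):
    """Moment-matching estimate of the Generalized Gaussian shape
    parameter: match rho = (E|x|)^2 / E[x^2] against
    r(g) = Gamma(2/g)^2 / (Gamma(1/g) * Gamma(3/g)) over a grid of g."""
    rho = np.mean(np.abs(x)) ** 2 / np.mean(x ** 2)
    grid = np.arange(0.2, 6.0, 0.001)
    r = np.array([math.gamma(2 / g) ** 2 / (math.gamma(1 / g) * math.gamma(3 / g))
                  for g in grid])
    return grid[np.argmin(np.abs(r - rho))]
```

For Gaussian data the estimator should return a shape near 2 (the GGD reduces to a Gaussian there); Laplacian-like heavy tails give values near 1.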

  18. SU-E-I-43: Pediatric CT Dose and Image Quality Optimization

    SciTech Connect

    Stevens, G; Singh, R

    2014-06-01

    Purpose: To design an approach to optimize radiation dose and image quality for pediatric CT imaging, and to evaluate expected performance. Methods: A methodology was designed to quantify relative image quality as a function of CT image acquisition parameters. Image contrast and image noise were used to indicate expected conspicuity of objects, and a wide-cone system was used to minimize scan time for motion avoidance. A decision framework was designed to select acquisition parameters as a weighted combination of image quality and dose. Phantom tests were used to acquire images at multiple techniques to demonstrate expected contrast, noise and dose. Anthropomorphic phantoms with contrast inserts were imaged on a 160mm CT system with tube voltage capabilities as low as 70kVp. Previously acquired clinical images were used in conjunction with simulation tools to emulate images at different tube voltages and currents to assess human observer preferences. Results: Examination of image contrast, noise, dose and tube/generator capabilities indicates a clinical task and object-size dependent optimization. Phantom experiments confirm that system modeling can be used to achieve the desired image quality and noise performance. Observer studies indicate that clinical utilization of this optimization requires a modified approach to achieve the desired performance. Conclusion: This work indicates the potential to optimize radiation dose and image quality for pediatric CT imaging. In addition, the methodology can be used in an automated parameter selection feature that can suggest techniques given a limited number of user inputs. G Stevens and R Singh are employees of GE Healthcare.

  19. An image-based technique to assess the perceptual quality of clinical chest radiographs

    SciTech Connect

    Lin Yuan; Luo Hui; Dobbins, James T. III; Page McAdams, H.; Wang, Xiaohui; Sehnert, William J.; Barski, Lori; Foos, David H.; Samei, Ehsan

    2012-11-15

    Purpose: Current clinical image quality assessment techniques mainly analyze image quality for the imaging system in terms of factors such as the capture system modulation transfer function, noise power spectrum, detective quantum efficiency, and the exposure technique. While these elements form the basic underlying components of image quality, when assessing a clinical image, radiologists seldom refer to these factors, but rather examine several specific regions of the displayed patient images, further impacted by the particular image processing method applied, to see whether the image is suitable for diagnosis. In this paper, the authors developed a novel strategy to simulate radiologists' perceptual evaluation process on actual clinical chest images. Methods: Ten region-based perceptual attributes of chest radiographs were determined through an observer study: lung grey level, lung detail, lung noise, rib-lung contrast, rib sharpness, mediastinum detail, mediastinum noise, mediastinum alignment, subdiaphragm-lung contrast, and subdiaphragm area. Each attribute was characterized in terms of a physical quantity measured from the image algorithmically using an automated process. A pilot observer study was performed on 333 digital chest radiographs, which included 179 PA images with 10:1 ratio grids (set 1) and 154 AP images without grids (set 2), to ascertain the correlation between image perceptual attributes and physical quantitative measurements. To determine the acceptable range of each perceptual attribute, a preliminary quality consistency range was defined based on the preferred 80% of images in set 1. The mean value difference (μ₁ − μ₂) and variance ratio (σ₁²/σ₂²) were investigated to further quantify the differences between the two image sets. Results: The pilot observer study demonstrated that our region-based physical quantity metrics of chest radiographs correlated very well with

  20. Local homogeneity combined with DCT statistics to blind noisy image quality assessment

    NASA Astrophysics Data System (ADS)

    Yang, Lingxian; Chen, Li; Chen, Heping

    2015-03-01

    In this paper a novel method for blind noisy-image quality assessment is proposed. First, since the human visual system (HVS) is believed to be more sensitive to locally smooth areas in a noisy image, an adaptive local homogeneous block selection algorithm is proposed to construct a new image, termed homogeneity blocks (HB), based on per-pixel characteristics. Second, the discrete cosine transform (DCT) is applied to each HB and the high-frequency components are used to evaluate the image noise level. Finally, a modified peak signal-to-noise ratio (MPSNR) image quality assessment approach is proposed based on analysis of the change in the DCT kurtosis distributions and the noise level estimated above. Simulations show that the quality scores produced by the proposed algorithm correlate well with human perception of quality and are also stable.

  1. Image quality optimization, via application of contextual contrast sensitivity and discrimination functions

    NASA Astrophysics Data System (ADS)

    Fry, Edward; Triantaphillidou, Sophie; Jarvis, John; Gupta, Gaurav

    2015-01-01

    What is the best luminance contrast weighting-function for image quality optimization? Traditionally measured contrast sensitivity functions (CSFs) have often been used as weighting-functions in image quality and difference metrics. Such weightings have been shown to increase the sharpness and perceived quality of test images. We suggest that contextual CSFs (cCSFs) and contextual discrimination functions (cVPFs) should provide a basis for further improvement, since these are measured directly from pictorial scenes, modeling threshold and suprathreshold sensitivities within the context of complex masking information. Image quality assessment is understood to require detection and discrimination of masked signals, making contextual sensitivity and discrimination functions directly relevant. In this investigation, test images are weighted with a traditional CSF, cCSF, cVPF and a constant function. Controlled mutations of these functions are also applied as weighting-functions, seeking the optimal spatial frequency band weighting for quality optimization. Image quality, sharpness and naturalness are then assessed in two-alternative forced-choice psychophysical tests. We show that maximal quality for our test images results from cCSFs and cVPFs mutated to boost contrast in the higher visible frequencies.
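
    Applying such a weighting-function to an image amounts to a radial gain in the Fourier domain. A minimal sketch, assuming a radially symmetric weighting over normalized spatial frequency (the function `csf_weight` and its frequency convention are illustrative, not the authors' implementation):

```python
import numpy as np

def csf_weight(img, csf):
    """Apply a radial frequency weighting (e.g. a CSF) to an image.
    `csf` maps normalized radial frequency (cycles/sample, 0..~0.71)
    to a gain."""
    F = np.fft.fft2(img)
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    r = np.hypot(fy, fx)                     # radial frequency map
    return np.real(np.fft.ifft2(F * csf(r)))
```

A constant (all-ones) weighting returns the image unchanged; boosting the gain at high r sharpens, attenuating it blurs.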

  2. Design and image quality results from volumetric CT with a flat-panel imager

    NASA Astrophysics Data System (ADS)

    Ross, William; Basu, Samit; Edic, Peter M.; Johnson, Mark; Pfoh, Armin H.; Rao, Ramakrishna; Ren, Baorui

    2001-06-01

    Preliminary MTF and LCD results obtained on several volumetric computed tomography (VCT) systems employing amorphous flat-panel technology are presented. Constructed around 20-cm x 20-cm, 200-µm pitch amorphous silicon x-ray detectors, the prototypes use standard vascular or CT x-ray sources. Data were obtained from closed-gantry, benchtop and C-arm-based topologies, over a full 360 degrees of rotation about the target object. The field of view of the devices is approximately 15 cm, with a magnification of 1.25-1.5, providing isotropic resolution at isocenter of 133-160 µm. Acquisitions have been reconstructed using the FDK algorithm, modified by motion corrections also developed by GE. Image quality data were obtained using both industry-standard and custom resolution phantoms as targets. Scanner output is compared on a projection and reconstruction basis against analogous output from a dedicated simulation package, also developed at GE. Measured MTF performance is indicative of a significant advance in isotropic image resolution over commercially available systems. LCD results have been obtained, using industry-standard phantoms, spanning a contrast range of 0.3-1%. Both MTF and LCD measurements agree with simulated data.

  3. Exploiting the multiplicative nature of fluoroscopic image stochastic noise to enhance calcium imaging recording quality.

    PubMed

    Esposti, Federico; Ripamonti, Maddalena; Signorini, Maria G

    2009-01-01

    One of the main problems affecting fluoroscopic imaging is the difficulty of coupling the recorded activity with morphological information: understanding fluorescence events in relation to the internal structure of the cell can be very difficult. To this end, we developed a new method to maximize fluoroscopic movie quality. The method (Maximum Intensity Enhancement, MIE) works as follows: considering all the frames that compose the fluoroscopic movie, the algorithm extracts, for each pixel of the matrix, the maximal brightness value attained across all frames. These values are collected in a maximum intensity matrix. The method then projects the target-molecule oscillations present in the DeltaF/F(0) movie onto the maximum intensity matrix. This is done by creating an RGB movie, assigning the normalized (DeltaF/F(0)) activity to a single channel, and reproducing the maximum intensity matrix on all frames using the remaining color channels. Applying this method to fluoroscopic calcium imaging of astrocyte cultures markedly enhanced the ability to discern the internal and external structure of the cells. PMID:19964305
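
    The MIE construction as described reduces to a per-pixel maximum across frames plus an RGB composite. A minimal numpy sketch (the channel assignment of structure to red/blue and activity to green is one possible choice; names are illustrative):

```python
import numpy as np

def mie_composite(movie, f0):
    """MIE sketch: `movie` is a (T, H, W) stack, `f0` the baseline
    frame. Returns a (T, H, W, 3) RGB movie with the per-pixel
    maximum-intensity matrix in red/blue and dF/F0 activity in green."""
    mim = movie.max(axis=0)                    # maximum intensity matrix
    dff = (movie - f0) / np.maximum(f0, 1e-6)  # normalized activity DeltaF/F0
    rgb = np.empty(movie.shape + (3,))
    rgb[..., 0] = mim                          # structure (red)
    rgb[..., 2] = mim                          # structure (blue)
    rgb[..., 1] = dff                          # activity (green)
    return rgb
```

Every frame then shows the full morphology (magenta-ish background), while calcium transients modulate only the green channel.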

  4. A conceptual study of automatic and semi-automatic quality assurance techniques for round image processing

    NASA Technical Reports Server (NTRS)

    1983-01-01

    This report summarizes the results of a study conducted by Engineering and Economics Research (EER), Inc. under NASA Contract Number NAS5-27513. The study involved the development of preliminary concepts for automatic and semiautomatic quality assurance (QA) techniques for ground image processing. A distinction is made between quality assessment and the more comprehensive quality assurance which includes decision making and system feedback control in response to quality assessment.

  5. Quality Index for Stereoscopic Images by Separately Evaluating Adding and Subtracting

    PubMed Central

    Yang, Jiachen; Lin, Yancong; Gao, Zhiqun; Lv, Zhihan; Wei, Wei; Song, Houbing

    2015-01-01

    The human visual system (HVS) plays an important role in stereo image quality perception, which has aroused considerable interest in how to exploit knowledge of visual perception in image quality assessment models. This paper proposes a full-reference metric for quality assessment of stereoscopic images based on the binocular difference channel and binocular summation channel. For a stereo pair, the binocular summation map and binocular difference map are computed first by adding and subtracting the left and right images. The binocular summation is then decoupled into two parts, namely additive impairments and detail losses, and its quality is obtained as the adaptive combination of the quality of these two parts, computed using the Contrast Sensitivity Function (CSF) and weighted multi-scale SSIM (MS-SSIM). Finally, the quality of binocular summation and binocular difference is integrated into an overall quality index. The experimental results indicate that, compared with existing metrics, the proposed metric is highly consistent with subjective quality assessment and is a robust measure. The results also indirectly support the hypothesis of the existence of binocular summation and binocular difference channels. PMID:26717412
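
    The first step, forming the summation and difference maps, is a simple pixel-wise add/subtract of the stereo pair. A minimal sketch (the unnormalized L+R / L−R scaling is one convention; the paper's subsequent decoupling and CSF/MS-SSIM pooling are not reproduced):

```python
import numpy as np

def binocular_channels(left, right):
    """Binocular summation (L + R) and difference (L - R) maps for a
    stereo pair: the two channels the metric evaluates separately."""
    left = left.astype(float)
    right = right.astype(float)
    return left + right, left - right
```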

  7. Quantitative metrics for assessment of chemical image quality and spatial resolution

    DOE PAGES

    Kertesz, Vilmos; Cahill, John F.; Van Berkel, Gary J.

    2016-02-28

    Rationale: Currently, objective/quantitative descriptions of the quality and spatial resolution of mass spectrometry derived chemical images are not standardized. Development of these standardized metrics is required to objectively describe the chemical imaging capabilities of existing and/or new mass spectrometry imaging technologies. Such metrics would allow unbiased judgment of intra-laboratory advancement and/or inter-laboratory comparison for these technologies if used together with standardized surfaces. Methods: We developed two image metrics, viz., chemical image contrast (ChemIC), based on signal-to-noise related statistical measures on chemical image pixels, and corrected resolving power factor (cRPF), constructed from statistical analysis of mass-to-charge chronograms across features of interest in an image. These metrics, quantifying chemical image quality and spatial resolution, respectively, were used to evaluate chemical images of a model photoresist patterned surface collected using a laser ablation/liquid vortex capture mass spectrometry imaging system under different instrument operational parameters. Results: The calculated ChemIC and cRPF metrics determined in an unbiased fashion the relative ranking of chemical image quality obtained with the laser ablation/liquid vortex capture mass spectrometry imaging system. These rankings were used to show that both chemical image contrast and spatial resolution deteriorated with increasing surface scan speed, increased lane spacing and decreasing size of surface features. Conclusions: ChemIC and cRPF, respectively, were developed and successfully applied for the objective description of chemical image quality and spatial resolution of chemical images collected from model surfaces using a laser ablation/liquid vortex capture mass spectrometry imaging system.

  8. Blind image quality assessment using joint statistics of gradient magnitude and Laplacian features.

    PubMed

    Xue, Wufeng; Mou, Xuanqin; Zhang, Lei; Bovik, Alan C; Feng, Xiangchu

    2014-11-01

    Blind image quality assessment (BIQA) aims to evaluate the perceptual quality of a distorted image without information regarding its reference image. Existing BIQA models usually predict the image quality by analyzing the image statistics in some transformed domain, e.g., in the discrete cosine transform domain or wavelet domain. Though great progress has been made in recent years, BIQA is still a very challenging task due to the lack of a reference image. Considering that image local contrast features convey important structural information that is closely related to image perceptual quality, we propose a novel BIQA model that utilizes the joint statistics of two types of commonly used local contrast features: 1) the gradient magnitude (GM) map and 2) the Laplacian of Gaussian (LOG) response. We employ an adaptive procedure to jointly normalize the GM and LOG features, and show that the joint statistics of normalized GM and LOG features have desirable properties for the BIQA task. The proposed model is extensively evaluated on three large-scale benchmark databases, and shown to deliver highly competitive performance with state-of-the-art BIQA models, as well as with some well-known full reference image quality assessment models. PMID:25216482
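
    The two local contrast maps at the heart of the model can be sketched with numpy, using central differences and a derivative-of-gradient Laplacian as simplified stand-ins for the paper's Gaussian-derivative GM and LOG filters, and a global RMS normalization standing in for its adaptive joint normalization procedure:

```python
import numpy as np

def gm_log_features(img):
    """Gradient-magnitude (GM) map and a Laplacian map (a simplified
    stand-in for the Laplacian-of-Gaussian response)."""
    img = img.astype(float)
    gy, gx = np.gradient(img)                # central differences
    gm = np.hypot(gx, gy)
    lap = np.gradient(gx, axis=1) + np.gradient(gy, axis=0)
    return gm, lap

def joint_normalize(gm, lap, eps=1e-6):
    """Jointly normalize both maps by their pooled RMS energy."""
    n = np.sqrt(np.mean(gm ** 2 + lap ** 2)) + eps
    return gm / n, lap / n
```

On a linear luminance ramp the GM map is constant and the Laplacian vanishes, as expected for a first-order structure.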

  9. MTF as a quality measure for compressed images transmitted over computer networks

    NASA Astrophysics Data System (ADS)

    Hadar, Ofer; Stern, Adrian; Huber, Merav; Huber, Revital

    1999-12-01

    One result of recent advances in imaging systems technology is that these systems have become more resolution-limited and less noise-limited. The most useful tool for characterizing resolution-limited systems is the Modulation Transfer Function (MTF). The goal of this work is to use the MTF as an image quality measure for images compressed with the JPEG (Joint Photographic Experts Group) algorithm and for MPEG (Moving Picture Experts Group) compressed video streams transmitted through a lossy packet network. Although we realize that the MTF is not an ideal parameter with which to measure image quality after compression and transmission, because the process is non-linear and shift-variant, we examine the conditions under which it can be used as an approximate criterion for image quality. The advantage of using the MTF of the compression algorithm is that it can easily be combined with the overall MTF of the imaging system.
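
    The MTF at a single spatial frequency is the ratio of output to input modulation depth for a sinusoidal target. A minimal sketch of that measurement (using a box-filter blur as a stand-in for the compression/transmission chain; names are illustrative):

```python
import numpy as np

def modulation(signal):
    """Michelson modulation depth of a periodic signal."""
    return (signal.max() - signal.min()) / (signal.max() + signal.min())

def mtf_at_frequency(input_sig, output_sig):
    """MTF at one spatial frequency: output modulation / input modulation."""
    return modulation(output_sig) / modulation(input_sig)
```

For a sinusoid of period 10 samples passed through a 3-tap moving average, the measured value matches the filter's analytic gain (1 + 2cos(2*pi/10))/3 ≈ 0.873.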

  10. The study on the image quality of varied line spacing plane grating by computer simulation

    NASA Astrophysics Data System (ADS)

    Sun, Shouqiang; Zhang, Weiping; Liu, Lei; Yang, Qingyi

    2014-11-01

    Varied-line-spacing plane gratings have the features of self-focusing, reduced aberration and easy manufacturing, and are widely applied in synchrotron radiation, plasma physics, space astronomy and other fields. In studies of diffraction imaging, the optical path function is expanded into a Maclaurin series and the aberrations are expressed by the coefficients of the series; many of these coefficients are similar, their number is large, and they cannot directly reflect overall image quality. This paper studies diffraction imaging of varied-line-spacing plane gratings using computer simulation, to provide a method for judging image quality visually. Light beams from object points on the same object plane are analyzed and simulated by the ray tracing method, and an evaluation function is set up that fully characterizes the image quality. In addition, based on the evaluation function, the best image plane is found by a search algorithm.

  11. Near-infrared hyperspectral imaging for quality analysis of agricultural and food products

    NASA Astrophysics Data System (ADS)

    Singh, C. B.; Jayas, D. S.; Paliwal, J.; White, N. D. G.

    2010-04-01

    Agricultural and food processing industries are always looking to implement real-time quality monitoring techniques as a part of good manufacturing practices (GMPs) to ensure high-quality and safety of their products. Near-infrared (NIR) hyperspectral imaging is gaining popularity as a powerful non-destructive tool for quality analysis of several agricultural and food products. This technique has the ability to analyse spectral data in a spatially resolved manner (i.e., each pixel in the image has its own spectrum) by applying both conventional image processing and chemometric tools used in spectral analyses. Hyperspectral imaging technique has demonstrated potential in detecting defects and contaminants in meats, fruits, cereals, and processed food products. This paper discusses the methodology of hyperspectral imaging in terms of hardware, software, calibration, data acquisition and compression, and development of prediction and classification algorithms and it presents a thorough review of the current applications of hyperspectral imaging in the analyses of agricultural and food products.

  12. Application of wavelets to the evaluation of phantom images for mammography quality control

    NASA Astrophysics Data System (ADS)

    Alvarez, M.; Pina, D. R.; Miranda, J. R. A.; Duarte, S. B.

    2012-11-01

    The main goal of this work was to develop a methodology for the computed analysis of American College of Radiology (ACR) mammographic phantom images, to be used in a quality control (QC) program of mammographic services. Discrete wavelet transform processing was applied to enhance the quality of images from the ACR mammographic phantom and to allow a lower dose for automatic evaluations of equipment performance in a QC program. Regions of interest (ROIs) containing phantom test objects (e.g., masses, fibers and specks) were focalized for appropriate wavelet processing, which highlighted the characteristics of structures present in each ROI. To minimize false-positive detection, each ROI in the image was submitted to pattern recognition tests, which identified structural details of the focalized test objects. Geometric and morphologic parameters of the processed test object images were used to quantify the final level of image quality. The final purpose of this work was to establish the main computational procedures for algorithms of quality evaluation of ACR phantom images. These procedures were implemented, and satisfactory agreement was obtained when the algorithm scores for image quality were compared with the results of assessments by three experienced radiologists. An exploratory study of a potential dose reduction was performed based on the radiologist scores and on the algorithm evaluation of images treated by wavelet processing. The results were comparable with both methods, although the algorithm had a tendency to provide a lower dose reduction than the evaluation by observers. Nevertheless, the objective and more precise criteria used by the algorithm to score image quality gave the computational result a higher degree of confidence. The developed algorithm demonstrates the potential use of the wavelet image processing approach for objectively evaluating the mammographic image quality level in routine QC tests. The implemented computational procedures
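
    The enhancement step relies on a discrete wavelet transform of the phantom image. A single level of the simplest 2D wavelet, the Haar transform with pairwise-average normalization (orthonormal scaling would use 1/sqrt(2) instead of 1/2), can be sketched as follows; the ROI focalization and scoring of the ACR test objects are not reproduced.

```python
import numpy as np

def haar_dwt2(img):
    """One level of a 2D Haar wavelet transform: approximation (LL)
    plus horizontal (LH), vertical (HL) and diagonal (HH) detail
    subbands, each half-size. Assumes even dimensions."""
    a = (img[::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[::2, :] - img[1::2, :]) / 2.0   # row differences
    ll = (a[:, ::2] + a[:, 1::2]) / 2.0
    lh = (a[:, ::2] - a[:, 1::2]) / 2.0
    hl = (d[:, ::2] + d[:, 1::2]) / 2.0
    hh = (d[:, ::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh
```

Structures such as fibers and specks concentrate in the detail subbands, which is what makes wavelet processing useful for highlighting them against the phantom background.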

  13. Quality Assessment of Mapping Building Textures from Infrared Image Sequences

    NASA Astrophysics Data System (ADS)

    Hoegner, L.; Iwaszczuk, D.; Stilla, U.

    2012-07-01

    Generation and texturing of building models is a fast-developing field of research. Several techniques have been developed to extract building geometry and textures from multiple images and image sequences. In this paper, these techniques are discussed and extended to automatically add new textures from infrared (IR) image sequences to existing building models. In contrast to existing work, geometry and textures are not generated together from the same dataset; instead, the textures are extracted from the image sequence and matched to an existing geo-referenced 3D building model. The texture generation is divided into two main parts. The first part deals with the estimation and refinement of the exterior camera orientation. Feature points are extracted in the images and used as tie points in the sequence. A recorded exterior orientation of the camera is added to these homologous points, and a bundle adjustment is performed, starting on image pairs and then combining the whole sequence. A given 3D model of the observed building is additionally added to introduce further constraints as ground control points in the bundle adjustment. The second part includes the extraction of textures from the images and the combination of textures from different images of the sequence. Using the reconstructed exterior camera orientation for every image of the sequence, the visible facades are projected into the image and texture is extracted. These textures normally contain only parts of the facade. The partial textures extracted from all images are combined into one facade texture, which is stored with a 3D reference to the corresponding facade. This allows searching for features in textures and localising those features in 3D space. It will be shown that the proposed strategy allows texture extraction and mapping even for big building complexes with restricted viewing possibilities and for images with low optical resolution.

  14. Comparison of retinal image quality with spherical and customized aspheric intraocular lenses

    PubMed Central

    Guo, Huanqing; Goncharov, Alexander V.; Dainty, Chris

    2012-01-01

    We hypothesize that an intraocular lens (IOL) with higher-order aspheric surfaces customized for an individual eye provides improved retinal image quality, despite the misalignments that accompany cataract surgery. To test this hypothesis, ray-tracing eye models were used to investigate 10 designs of mono-focal single lens IOLs with rotationally symmetric spherical, aspheric, and customized surfaces. Retinal image quality of pseudo-phakic eyes using these IOLs together with individual variations in ocular and IOL parameters, are evaluated using a Monte Carlo analysis. We conclude that customized lenses should give improved retinal image quality despite the random errors resulting from IOL insertion. PMID:22574257

  15. Scientific assessment of the quality of OSIRIS images

    NASA Astrophysics Data System (ADS)

    Tubiana, C.; Güttler, C.; Kovacs, G.; Bertini, I.; Bodewits, D.; Fornasier, S.; Lara, L.; La Forgia, F.; Magrin, S.; Pajola, M.; Sierks, H.; Barbieri, C.; Lamy, P. L.; Rodrigo, R.; Koschny, D.; Rickman, H.; Keller, H. U.; Agarwal, J.; A'Hearn, M. F.; Barucci, M. A.; Bertaux, J.-L.; Besse, S.; Boudreault, S.; Cremonese, G.; Da Deppo, V.; Davidsson, B.; Debei, S.; De Cecco, M.; El-Maarry, M. R.; Fulle, M.; Groussin, O.; Gutiérrez-Marques, P.; Gutiérrez, P. J.; Hoekzema, N.; Hofmann, M.; Hviid, S. F.; Ip, W.-H.; Jorda, L.; Knollenberg, J.; Kramm, J.-R.; Kührt, E.; Küppers, M.; Lazzarin, M.; Lopez Moreno, J. J.; Marzari, F.; Massironi, M.; Michalik, H.; Moissl, R.; Naletto, G.; Oklay, N.; Scholten, F.; Shi, X.; Thomas, N.; Vincent, J.-B.

    2015-11-01

    Context. OSIRIS, the scientific imaging system onboard the ESA Rosetta spacecraft, has been imaging the nucleus of comet 67P/Churyumov-Gerasimenko and its dust and gas environment since March 2014. The images serve different scientific goals, from morphology and composition studies of the nucleus surface, to the motion and trajectories of dust grains, the general structure of the dust coma, the morphology and intensity of jets, gas distribution, mass loss, and dust and gas production rates. Aims: We present the calibration of the raw images taken by OSIRIS and address the accuracy that we can expect in our scientific results based on the accuracy of the calibration steps that we have performed. Methods: We describe the pipeline that has been developed to automatically calibrate the OSIRIS images. Through a series of steps, radiometrically calibrated and distortion corrected images are produced and can be used for scientific studies. Calibration campaigns were run on the ground before launch and throughout the years in flight to determine the parameters that are used to calibrate the images and to verify their evolution with time. We describe how these parameters were determined and we address their accuracy. Results: We provide a guideline to the level of trust that can be put into the various studies performed with OSIRIS images, based on the accuracy of the image calibration.
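
    The OSIRIS pipeline itself is instrument-specific, but the generic radiometric core of any CCD calibration chain (bias subtraction, exposure-scaled dark subtraction, flat-field division) can be sketched as follows. The function name and the mean-normalized flat convention are illustrative assumptions, not the OSIRIS implementation.

```python
import numpy as np

def radiometric_calibrate(raw, bias, dark, flat, t_exp):
    """Generic CCD radiometric calibration sketch: remove the bias
    level and exposure-scaled dark current, then correct
    pixel-to-pixel sensitivity with a mean-normalized flat field."""
    return (raw - bias - dark * t_exp) / (flat / flat.mean())
```

Applied to a synthetic frame built from a known scene, the chain recovers the scene exactly, which is a useful sanity check when wiring up such a pipeline.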

  16. Fusion and quality analysis for remote sensing images using contourlet transform

    NASA Astrophysics Data System (ADS)

    Choi, Yoonsuk; Sharifahmadian, Ershad; Latifi, Shahram

    2013-05-01

    Recent developments in remote sensing technologies have provided various images with high spatial and spectral resolutions. However, multispectral images have low spatial resolution and panchromatic images have low spectral resolution. Therefore, image fusion techniques are necessary to improve the spatial resolution of spectral images by injecting spatial details of high-resolution panchromatic images. The objective of image fusion is to provide useful information by improving the spatial resolution and the spectral information of the original images. The fusion results can be utilized in various applications, such as military, medical imaging, and remote sensing. This paper addresses two issues in image fusion: i) image fusion method and ii) quality analysis of fusion results. First, a new contourlet-based image fusion method is presented, which is an improvement over the wavelet-based fusion. This fusion method is then applied to a case study to demonstrate its fusion performance. Fusion framework and scheme used in the study are discussed in detail. Second, quality analysis for the fusion results is discussed. We employed various quality metrics in order to analyze the fusion results both spatially and spectrally. Our results indicate that the proposed contourlet-based fusion method performs better than the conventional wavelet-based fusion methods.
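Abstracts such as this one evaluate fusion results with spatial metrics like SSIM. As an illustration only (not the authors' implementation), a single-window variant of SSIM can be sketched in a few lines of NumPy; the stabilizing constants follow the standard SSIM definition, but the sliding-window averaging of the full metric is omitted:

```python
import numpy as np

def global_ssim(x, y, data_range=255.0):
    """Single-window (global) SSIM between two same-sized images.
    A simplified structural-similarity sketch: the full metric
    averages this quantity over local Gaussian windows."""
    c1 = (0.01 * data_range) ** 2   # luminance stabilizer
    c2 = (0.03 * data_range) ** 2   # contrast stabilizer
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return float(((2 * mx * my + c1) * (2 * cov + c2)) /
                 ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))
```

A spectral analysis would typically compare band-wise statistics of the fused image against the original multispectral bands in the same fashion.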

  17. Testing the quality of images for permanent magnet desktop MRI systems using specially designed phantoms

    NASA Astrophysics Data System (ADS)

    Qiu, Jianfeng; Wang, Guozhu; Min, Jiao; Wang, Xiaoyan; Wang, Pengcheng

    2013-12-01

    Our aim was to measure the performance of desktop magnetic resonance imaging (MRI) systems using specially designed phantoms, by testing imaging parameters and analysing imaging quality. We designed multifunction phantoms with diameters of 18 and 60 mm for desktop MRI scanners in accordance with the American Association of Physicists in Medicine (AAPM) report no. 28. We scanned the phantoms with three permanent-magnet 0.5 T desktop MRI systems, measured the MRI image parameters, and analysed imaging quality by comparing the data with the AAPM criteria and Chinese national standards. Image parameters included: resonance frequency, high-contrast spatial resolution, low-contrast object detectability, slice thickness, geometrical distortion, signal-to-noise ratio (SNR), and image uniformity. The image parameters of the three desktop MRI machines could be measured using our specially designed phantoms, and most parameters were in line with the MRI quality control criteria, including resonance frequency, high-contrast spatial resolution, low-contrast object detectability, slice thickness, geometrical distortion, image uniformity, and slice position accuracy. However, SNR was significantly lower than in some references. Imaging tests and quality control are necessary for desktop MRI systems and should be performed with an applicable phantom and the corresponding standards.

  18. The subjective image quality of direct digital and conventional panoramic radiography.

    PubMed

    Gijbels, F; De Meyer, A M; Bou Serhal, C; Van den Bossche, C; Declerck, J; Persoons, M; Jacobs, R

    2000-09-01

    One of the main advantages of digital imaging is the possibility of altering display options for improved image interpretation. The aim of the present study was to evaluate the subjective image quality of direct digital panoramic images and compare the results with those obtained from conventional images. Furthermore, the effect of various filter settings on image interpretation was assessed. Panoramic images were obtained with three different types of panoramic equipment (one direct digital and two conventional units) from three groups of 54 patients with a natural dentition in all quadrants. The first series of panoramic images consisted of 54 unprocessed digital images; conventional film images (n = 108) comprised the second and third series. A final series consisted of the digital images treated with three different filters ("smoothening," "sharpening," and "contrast enhancement"). All images were scored randomly by four experts in oral radiology on a 4-point rating scale. The results showed a statistically significant difference in scores between the conventional and digital panoramic units. The main reason for poor image quality appeared to be a combination of blurring and overlapping in the panoramic image. The premolar region in the upper jaw was the region where most additional radiographs were needed. PMID:11000322

  19. ANALYZING WATER QUALITY WITH IMAGES ACQUIRED FROM AIRBORNE SENSORS

    EPA Science Inventory

    Monitoring different parameters of water quality can be a time consuming and expensive activity. However, the use of airborne light-sensitive (optical) instruments may enhance the abilities of resource managers to monitor water quality in rivers in a timely and cost-effective ma...

  20. From image quality to atmosphere experience: how evolutions in technology impact experience assessment

    NASA Astrophysics Data System (ADS)

    Heynderickx, Ingrid; de Ridder, Huib

    2013-03-01

    Image quality is a concept that long served well to optimize display performance and signal quality. New technological developments, however, have forced the community to look into higher-level concepts that capture the full experience. Terms such as naturalness and viewing experience were used to optimize the full experience of 3D displays and Ambilight TV. These higher-level concepts capture both differences in image quality and differences in perceived depth or perceived viewing field. With the introduction of solid-state lighting, which further enhances the multimedia experience, yet more advanced quality evaluation concepts will be needed in the future to optimize the overall experience.

  1. A Hyperspectral Imaging System for Quality Detection of Pickles

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A hyperspectral imaging system in simultaneous reflectance (400-675 nm) and transmittance (675-1000 nm) modes was developed for detection of hollow or bloater damage on whole pickles. Hyperspectral reflectance and transmittance images were acquired from normal and bloated whole pickle samples collec...

  2. Image quality and breast dose of 24 screen-film combinations for mammography.

    PubMed

    Dimakopoulou, A D; Tsalafoutas, I A; Georgiou, E K; Yakoumakis, E N

    2006-02-01

    In this study the effect of different mammographic screen-film combinations on image quality and breast dose, and the correlation between the various image quality parameters, breast dose and the sensitometric parameters of a film, were investigated. Three Agfa (MR5-II, HDR, HT), two Kodak (Min-R M, Min-R 2000), one Fuji (AD-M), one Konica (CM-H) and one Ferrania (HM plus) single-emulsion mammographic films were combined with three intensifying screens (Agfa HDS, Kodak Min-R 2190 and Fuji AD-MA). The film characteristics were determined by sensitometry, while the image quality and the dose to the breast of the resulting 24 screen-film combinations were assessed using a mammography quality control phantom. For each combination, three images of the phantom were acquired with optical density within three different ranges. Two observers assessed the quality of the 72 phantom images obtained, while the breast dose was calculated from the exposure data required for each image. Large differences among screen-film combinations in terms of image quality and breast dose were identified that, however, could not be correlated with the films' sensitometric characteristics. All films presented the best resolution when combined with the HDS screen, at the expense of speed, and the highest speed when combined with the AD-MA screen, without degradation of the overall image quality. However, an ideal screen-film combination presenting the best image quality with the least dose was not identified. It is also worth mentioning that the best performance for a film was not necessarily obtained when it was combined with the screen provided by the same manufacturer. The results of this study clearly demonstrate that comparison of films based on their sensitometric characteristics is of limited value for clinical practice, as their performance is strongly affected by the screens with which they are combined. PMID:16489193

  3. Metric-based no-reference quality assessment of heterogeneous document images

    NASA Astrophysics Data System (ADS)

    Nayef, Nibal; Ogier, Jean-Marc

    2015-01-01

    No-reference image quality assessment (NR-IQA) aims at computing an image quality score that best correlates with either human perceived image quality or an objective quality measure, without any prior knowledge of reference images. Although learning-based NR-IQA methods have achieved the best state-of-the-art results so far, those methods perform well only on the datasets on which they were trained. The datasets usually contain homogeneous documents, whereas in reality, document images come from different sources. It is unrealistic to collect training samples of images from every possible capturing device and every document type. Hence, we argue that a metric-based IQA method is more suitable for heterogeneous documents. We propose a NR-IQA method with the objective quality measure of OCR accuracy. The method combines distortion-specific quality metrics. The final quality score is calculated taking into account the proportions of, and the dependency among different distortions. Experimental results show that the method achieves competitive results with learning-based NR-IQA methods on standard datasets, and performs better on heterogeneous documents.
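The combination step described above (a final score that accounts for the proportions of different distortions) might be sketched as a proportion-weighted average of distortion-specific scores. This is a hypothetical simplification: the paper's actual rule also models the dependency among distortions, which this sketch ignores.

```python
def combined_quality(metric_scores, proportions):
    """Hypothetical proportion-weighted combination of
    distortion-specific quality scores (e.g. blur, noise).
    Both arguments are dicts keyed by distortion name."""
    total = sum(proportions.values())
    return sum(metric_scores[d] * proportions[d]
               for d in metric_scores) / total

# Example: an image judged 75% blur-dominated, 25% noise-dominated.
score = combined_quality({"blur": 0.8, "noise": 0.5},
                         {"blur": 0.75, "noise": 0.25})
```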

  4. Impact of Computed Tomography Image Quality on Image-Guided Radiation Therapy Based on Soft Tissue Registration

    SciTech Connect

    Morrow, Natalya V.; Lawton, Colleen A.; Qi, X. Sharon; Li, X. Allen

    2012-04-01

    Purpose: In image-guided radiation therapy (IGRT), different computed tomography (CT) modalities with varying image quality are being used to correct for interfractional variations in patient set-up and anatomy changes, thereby reducing clinical target volume to the planning target volume (CTV-to-PTV) margins. We explore how CT image quality affects patient repositioning and CTV-to-PTV margins in soft tissue registration-based IGRT for prostate cancer patients. Methods and Materials: Four CT-based IGRT modalities used for prostate RT were considered in this study: MV fan beam CT (MVFBCT) (Tomotherapy), MV cone beam CT (MVCBCT) (MVision; Siemens), kV fan beam CT (kVFBCT) (CTVision, Siemens), and kV cone beam CT (kVCBCT) (Synergy; Elekta). Daily shifts were determined by manual registration to achieve the best soft tissue agreement. Effect of image quality on patient repositioning was determined by statistical analysis of daily shifts for 136 patients (34 per modality). Inter- and intraobserver variability of soft tissue registration was evaluated based on the registration of a representative scan for each CT modality with its corresponding planning scan. Results: Superior image quality with the kVFBCT resulted in reduced uncertainty in soft tissue registration during IGRT compared with other image modalities for IGRT. The largest interobserver variations of soft tissue registration were 1.1 mm, 2.5 mm, 2.6 mm, and 3.2 mm for kVFBCT, kVCBCT, MVFBCT, and MVCBCT, respectively. Conclusions: Image quality adversely affects the reproducibility of soft tissue-based registration for IGRT and necessitates a careful consideration of residual uncertainties in determining different CTV-to-PTV margins for IGRT using different image modalities.

  5. Temporal subtraction in chest radiography: Mutual information as a measure of image quality

    SciTech Connect

    Armato, Samuel G. III; Sensakovic, William F.; Passen, Samantha J.; Engelmann, Roger; MacMahon, Heber

    2009-12-15

    Purpose: Temporal subtraction is used to detect the interval change in chest radiographs and aid radiologists in patient diagnosis. This method registers two temporally different images by geometrically warping the lung region, or ''lung mask,'' of a previous radiographic image to align with the current image. The gray levels of every pixel in the current image are subtracted from the gray levels of the corresponding pixels in the warped previous image to form a temporal subtraction image. While temporal subtraction images effectively enhance areas of pathologic change, misregistration of the images can mislead radiologists by obscuring the interval change or by creating artifacts that mimic change. The purpose of this study was to investigate the utility of mutual information computed between two registered radiographic chest images as a metric for distinguishing between clinically acceptable and clinically unacceptable temporal subtraction images.Methods: A radiologist subjectively rated the image quality of 138 temporal subtraction images using a 1 (poor) to 5 (excellent) scale. To objectively assess the registration accuracy depicted in the temporal subtraction images, which is the main factor that affects the quality of these images, mutual information was computed on the two constituent registered images prior to their subtraction to generate a temporal subtraction image. Mutual information measures the joint entropy of the current image and the warped previous image, yielding a higher value when the gray levels of spatially matched pixels in each image are consistent. Mutual information values were correlated with the radiologist's subjective ratings. To improve this correlation, mutual information was computed from a spatially limited lung mask, which was cropped from the bottom by 10%-60%. Additionally, the number of gray-level values used in the joint entropy histogram was varied. 
The ability of mutual information to predict the clinical acceptability of
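The mutual-information metric described in this abstract, computed from the joint gray-level histogram of the current image and the warped previous image, can be sketched with NumPy. This is an illustrative implementation, not the authors' code; the bin count plays the role of the "number of gray-level values" the study varied.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=64):
    """Mutual information between two registered images via their
    joint gray-level histogram. Higher values indicate better
    agreement between spatially matched pixels."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()   # joint probability distribution
    px = pxy.sum(axis=1)        # marginal of img_a
    py = pxy.sum(axis=0)        # marginal of img_b
    nz = pxy > 0                # sum only over occupied histogram cells
    outer = np.outer(px, py)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / outer[nz])))
```

Restricting the computation to a cropped lung mask, as the study does, amounts to passing only the masked pixels to the function.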

  6. Toward a Blind Deep Quality Evaluator for Stereoscopic Images Based on Monocular and Binocular Interactions.

    PubMed

    Shao, Feng; Tian, Weijun; Lin, Weisi; Jiang, Gangyi; Dai, Qionghai

    2016-05-01

    During recent years, blind image quality assessment (BIQA) has been intensively studied with different machine learning tools. Existing BIQA metrics, however, are not designed for stereoscopic images. We believe this problem can be resolved by separating 3D images and capturing the essential attributes of images via a deep neural network. In this paper, we propose a blind deep quality evaluator (DQE) for stereoscopic images (denoted 3D-DQE) based on monocular and binocular interactions. The key technical steps in the proposed 3D-DQE are to train two separate 2D deep neural networks (2D-DNNs) on 2D monocular images and cyclopean images to model the process of monocular and binocular quality prediction, and to combine the measured 2D monocular and cyclopean quality scores using different weighting schemes. Experimental results on four public 3D image quality assessment databases demonstrate that, in comparison with existing methods, the devised algorithm achieves highly consistent alignment with subjective assessment. PMID:26960225

  7. PLÉIADES Project: Assessment of Georeferencing Accuracy, Image Quality, Pansharpening Performence and Dsm/dtm Quality

    NASA Astrophysics Data System (ADS)

    Topan, Hüseyin; Cam, Ali; Özendi, Mustafa; Oruç, Murat; Jacobsen, Karsten; Taşkanat, Talha

    2016-06-01

    Pléiades 1A and 1B are twin optical satellites of the Optical and Radar Federated Earth Observation (ORFEO) program jointly run by France and Italy. They are Europe's first satellites with sub-meter resolution. Airbus DS (formerly Astrium Geo) runs the MyGIC (formerly Pléiades Users Group) program to validate Pléiades images worldwide for various application purposes. The authors conducted three projects: one within this program, a second supported by the BEU Scientific Research Project Program, and a third supported by TÜBİTAK. These projects investigate georeferencing accuracy, image quality, pansharpening performance, and Digital Surface Model/Digital Terrain Model (DSM/DTM) quality. For these purposes, triplet panchromatic (50 cm Ground Sampling Distance (GSD)) and VNIR (2 m GSD) Pléiades 1A images were investigated over the Zonguldak test site (Turkey), which is urbanised, mountainous, and covered by dense forest. The georeferencing accuracy was estimated with a standard deviation in X and Y (SX, SY) in the range of 0.45 m by bias-corrected Rational Polynomial Coefficient (RPC) orientation, using ~170 Ground Control Points (GCPs). A 3D standard deviation of ±0.44 m in X, ±0.51 m in Y, and ±1.82 m in Z has been reached in spite of the very narrow angle of convergence by bias-corrected RPC orientation. The image quality was also investigated with respect to effective resolution, Signal to Noise Ratio (SNR) and blur coefficient. The effective resolution was estimated with a factor slightly below 1.0, meaning that the image quality corresponds to the nominal resolution of 50 cm. The blur coefficients were between 0.39-0.46 for the triplet panchromatic images, indicating a satisfying image quality. The SNR is in the range of other comparable spaceborne images, which may be a result of the de-noising of Pléiades images. The pansharpened images were generated by various methods, and are validated by most common statistical

  8. High Quality Color Imaging on the Mead Microencapsulated Imaging System Using a Fiber Optic CRT

    NASA Astrophysics Data System (ADS)

    Duke, Ronald J.

    1989-07-01

    Mead Imaging's unique microencapsulated color imaging system (CYCOLOR) has many applications. Mead Imaging and Hughes have combined CYCOLOR and Fiber Optic Cathode Ray Tubes (FOCRT) to develop digital color printers.

  9. Do SE(II) electrons really degrade SEM image quality?

    PubMed

    Bernstein, Gary H; Carter, Andrew D; Joy, David C

    2013-01-01

    Generally, in scanning electron microscopy (SEM) imaging, it is desirable that a high-resolution image be composed mainly of those secondary electrons (SEs) generated by the primary electron beam, denoted SE(I) . However, in conventional SEM imaging, other, often unwanted, signal components consisting of backscattered electrons (BSEs), and their associated SEs, denoted SE(II) , are present; these signal components contribute a random background signal that degrades contrast, and therefore signal-to-noise ratio and resolution. Ideally, the highest resolution SEM image would consist only of the SE(I) component. In SEMs that use conventional pinhole lenses and their associated Everhart-Thornley detectors, the image is composed of several components, including SE(I) , SE(II) , and some BSE, depending on the geometry of the detector. Modern snorkel lens systems eliminate the BSEs, but not the SE(II) s. We present a microfabricated diaphragm for minimizing the unwanted SE(II) signal components. We present evidence of improved imaging using a microlithographically generated pattern of Au, about 500 nm thick, that blocks most of the undesired signal components, leaving an image composed mostly of SE(I) s. We refer to this structure as a "spatial backscatter diaphragm." PMID:22589040

  10. Nondestructive spectroscopic and imaging techniques for quality evaluation and assessment of fish and fish products.

    PubMed

    He, Hong-Ju; Wu, Di; Sun, Da-Wen

    2015-01-01

    Nowadays, people have increasingly realized the importance of acquiring high quality and nutritional value in the fish and fish products in their daily diet. Quality evaluation and assessment are expected to be conducted using rapid and nondestructive methods in order to satisfy both producers and consumers. During the past two decades, spectroscopic and imaging techniques have been developed to nondestructively estimate and measure quality attributes of fish and fish products. Among these noninvasive methods, visible/near-infrared (VIS/NIR) spectroscopy, computer/machine vision, and hyperspectral imaging have been regarded as powerful and effective analytical tools for fish quality analysis and control. VIS/NIR spectroscopy has been widely applied to determine intrinsic quality characteristics of fish samples, such as moisture, protein, fat, and salt. Computer/machine vision, on the other hand, mainly focuses on the estimation of external features like color, weight, size, and surface defects. Recently, by incorporating both spectroscopy and imaging techniques in one system, hyperspectral imaging can not only measure the contents of different quality attributes simultaneously, but also obtain the spatial distribution of such attributes when the quality of fish samples is evaluated and measured. This paper systematically reviews the research advances of these three nondestructive optical techniques in fish quality evaluation and determination, and discusses future trends in the development of nondestructive technologies for further quality characterization of fish and fish products. PMID:24915393

  11. Is a vegetarian diet adequate for children.

    PubMed

    Hackett, A; Nathan, I; Burgess, L

    1998-01-01

    The number of people who avoid eating meat is growing, especially among young people. Benefits to health from a vegetarian diet have been reported in adults, but it is not clear to what extent these benefits are due to diet or to other aspects of lifestyle. In children, concern has been expressed about the adequacy of vegetarian diets, especially with regard to growth. The risks/benefits seem to be related to the degree of restriction of the diet; anaemia is probably both the main and the most serious risk, but this also applies to omnivores. Vegan diets are more likely to be associated with malnutrition, especially if the diets are the result of authoritarian dogma. Overall, lacto-ovo-vegetarian children consume diets closer to recommendations than omnivores, and their pre-pubertal growth is at least as good. The simplest strategy when becoming vegetarian may involve reliance on vegetarian convenience foods, which are not necessarily superior in nutritional composition. The vegetarian sector of the food industry could do more to produce foods closer to recommendations. Vegetarian diets can be, but are not necessarily, adequate for children, provided vigilance is maintained, particularly to ensure variety. Identical comments apply to omnivorous diets. Three threats to the diets of children are too much reliance on convenience foods, lack of variety, and lack of exercise. PMID:9670174

  12. TU-F-9A-01: Balancing Image Quality and Dose in Radiography

    SciTech Connect

    Peck, D; Pasciak, A

    2014-06-15

    Emphasis is often placed on minimizing radiation dose in diagnostic imaging without full consideration of the effect on image quality, especially on those aspects that affect diagnostic accuracy. This session will include a patient image-based review of diagnostic quantities important to radiologists in conventional radiography, including the effects of body habitus, age, positioning, and the clinical indication of the exam. The relationships between image quality, radiation dose, and radiation risk will be discussed, specifically addressing how these factors are affected by imaging protocols and acquisition parameters and techniques. This session will also discuss some of the actual and perceived radiation risks associated with diagnostic imaging. Even if the probability of radiation-induced cancer is small, the fear associated with radiation persists. Moreover, when a risk carries a benefit to an individual or to society, the risk may be justified with respect to that benefit. But how do you convey the risks and the benefits to people? This requires knowledge of how people perceive risk and of how to communicate the risk and the benefit to different populations. In this presentation, the sources of error in estimating radiation risk and some methods used to convey risk are reviewed. Learning Objectives: Understand the image quality metrics that are clinically relevant to radiologists. Understand how acquisition parameters and techniques affect image quality and radiation dose in conventional radiology. Understand the uncertainties in estimates of radiation risk from imaging exams. Learn some methods for effectively communicating radiation risk to the public.

  13. Predicted image quality of a CMOS APS X-ray detector across a range of mammographic beam qualities

    NASA Astrophysics Data System (ADS)

    Konstantinidis, A.

    2015-09-01

    Digital X-ray detectors based on Complementary Metal-Oxide- Semiconductor (CMOS) Active Pixel Sensor (APS) technology have been introduced in the early 2000s in medical imaging applications. In a previous study the X-ray performance (i.e. presampling Modulation Transfer Function (pMTF), Normalized Noise Power Spectrum (NNPS), Signal-to-Noise Ratio (SNR) and Detective Quantum Efficiency (DQE)) of the Dexela 2923MAM CMOS APS X-ray detector was evaluated within the mammographic energy range using monochromatic synchrotron radiation (i.e. 17-35 keV). In this study image simulation was used to predict how the mammographic beam quality affects image quality. In particular, the experimentally measured monochromatic pMTF, NNPS and SNR parameters were combined with various mammographic spectral shapes (i.e. Molybdenum/Molybdenum (Mo/Mo), Rhodium/Rhodium (Rh/Rh), Tungsten/Aluminium (W/Al) and Tungsten/Rhodium (W/Rh) anode/filtration combinations at 28 kV). The image quality was measured in terms of Contrast-to-Noise Ratio (CNR) using a synthetic breast phantom (4 cm thick with 50% glandularity). The results can be used to optimize the imaging conditions in order to minimize patient's Mean Glandular Dose (MGD).
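The CNR figure of merit used here is conventionally defined as the difference between mean signal and mean background, normalized by the background noise. A minimal NumPy sketch follows; the boolean ROI masks and the exact normalization are assumptions for illustration, not the paper's measurement protocol:

```python
import numpy as np

def cnr(image, signal_mask, background_mask):
    """Contrast-to-Noise Ratio: |mean(signal) - mean(background)|
    divided by the standard deviation of the background ROI."""
    signal = image[signal_mask]
    background = image[background_mask]
    return float(abs(signal.mean() - background.mean()) / background.std())
```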

  14. Full-reference quality assessment of stereoscopic images by learning sparse monocular and binocular features

    NASA Astrophysics Data System (ADS)

    Li, Kemeng; Shao, Feng; Jiang, Gangyi; Yu, Mei

    2014-11-01

    Perceptual stereoscopic image quality assessment (SIQA) aims to use computational models to measure the image quality in consistent with human visual perception. In this research, we try to simulate monocular and binocular visual perception, and proposed a monocular-binocular feature fidelity (MBFF) induced index for SIQA. To be more specific, in the training stage, we learn monocular and binocular dictionaries from the training database, so that the latent response properties can be represented as a set of basis vectors. In the quality estimation stage, we compute monocular feature fidelity (MFF) and binocular feature fidelity (BFF) indexes based on the estimated sparse coefficient vectors, and compute global energy response similarity (GERS) index by considering energy changes. The final quality score is obtained by incorporating them together. Experimental results on four public 3D image quality assessment databases demonstrate that in comparison with the most related existing methods, the devised algorithm achieves high consistency alignment with subjective assessment.

  15. Color image quality assessment with biologically inspired feature and machine learning

    NASA Astrophysics Data System (ADS)

    Deng, Cheng; Tao, Dacheng

    2010-07-01

    In this paper, we present a new no-reference quality assessment metric for color images using biologically inspired features (BIFs) and machine learning. In this metric, we first adopt a biologically inspired model to mimic the visual cortex and represent a color image based on BIFs, which unify color units, intensity units, and C1 units. Then, in order to reduce complexity and aid classification, the high-dimensional features are projected to a low-dimensional representation with manifold learning. Finally, a multiclass classification is performed on this new low-dimensional representation of the image, and the quality assessment is based on the learned classification result so as to match that of human observers. Instead of computing a final score, our method classifies the quality according to the quality scale recommended by the ITU. Preliminary results show that the developed metric can achieve good quality evaluation performance.

  16. PLEIADES-HR 1A&1B image quality commissioning: innovative geometric calibration methods and results

    NASA Astrophysics Data System (ADS)

    Greslou, Daniel; de Lussy, Françoise; Amberg, Virginie; Dechoz, Cécile; Lenoir, Florie; Delvit, Jean-Marc; Lebègue, Laurent

    2013-09-01

    The PLEIADES earth observing system consists of two satellites designed to provide optical 70 cm resolution images to civilian and defense users. The first Pleiades satellite, 1A, was launched in December 2011, while the second, Pleiades 1B, was placed in orbit one year later, in December 2012. The calibration operations and the assessment of the image quality of the two satellites were performed by the CNES Image Quality team during the so-called commissioning phase, which took place after each launch and lasted less than 6 months each time. The geometric commissioning activities consist in assessing and improving the geometric quality of the images in order to meet very demanding requirements. This paper describes the means used and the methods applied, mainly the innovative ones, to manage these activities, covering both their accuracy and their operational interest. Finally, it gives the main results for the geometric image quality performance of the PHR system.

  17. Methodology for Quantitative Characterization of Fluorophore Photoswitching to Predict Superresolution Microscopy Image Quality.

    PubMed

    Bittel, Amy M; Nickerson, Andrew; Saldivar, Isaac S; Dolman, Nick J; Nan, Xiaolin; Gibbs, Summer L

    2016-01-01

    Single-molecule localization microscopy (SMLM) image quality and resolution strongly depend on the photoswitching properties of fluorophores used for sample labeling. Development of fluorophores with optimized photoswitching will considerably improve SMLM spatial and spectral resolution. Currently, evaluating fluorophore photoswitching requires protein-conjugation before assessment mandating specific fluorophore functionality, which is a major hurdle for systematic characterization. Herein, we validated polyvinyl alcohol (PVA) as a single-molecule environment to efficiently quantify the photoswitching properties of fluorophores and identified photoswitching properties predictive of quality SMLM images. We demonstrated that the same fluorophore photoswitching properties measured in PVA films and using antibody adsorption, a protein-conjugation environment analogous to labeled cells, were significantly correlated to microtubule width and continuity, surrogate measures of SMLM image quality. Defining PVA as a fluorophore photoswitching screening platform will facilitate SMLM fluorophore development and optimal image buffer assessment through facile and accurate photoswitching property characterization, which translates to SMLM fluorophore imaging performance. PMID:27412307

  18. Methodology for Quantitative Characterization of Fluorophore Photoswitching to Predict Superresolution Microscopy Image Quality

    PubMed Central

    Bittel, Amy M.; Nickerson, Andrew; Saldivar, Isaac S.; Dolman, Nick J.; Nan, Xiaolin; Gibbs, Summer L.

    2016-01-01

    Single-molecule localization microscopy (SMLM) image quality and resolution strongly depend on the photoswitching properties of fluorophores used for sample labeling. Development of fluorophores with optimized photoswitching will considerably improve SMLM spatial and spectral resolution. Currently, evaluating fluorophore photoswitching requires protein-conjugation before assessment mandating specific fluorophore functionality, which is a major hurdle for systematic characterization. Herein, we validated polyvinyl alcohol (PVA) as a single-molecule environment to efficiently quantify the photoswitching properties of fluorophores and identified photoswitching properties predictive of quality SMLM images. We demonstrated that the same fluorophore photoswitching properties measured in PVA films and using antibody adsorption, a protein-conjugation environment analogous to labeled cells, were significantly correlated to microtubule width and continuity, surrogate measures of SMLM image quality. Defining PVA as a fluorophore photoswitching screening platform will facilitate SMLM fluorophore development and optimal image buffer assessment through facile and accurate photoswitching property characterization, which translates to SMLM fluorophore imaging performance. PMID:27412307

  19. Methodology for Quantitative Characterization of Fluorophore Photoswitching to Predict Superresolution Microscopy Image Quality

    NASA Astrophysics Data System (ADS)

    Bittel, Amy M.; Nickerson, Andrew; Saldivar, Isaac S.; Dolman, Nick J.; Nan, Xiaolin; Gibbs, Summer L.

    2016-07-01

    Single-molecule localization microscopy (SMLM) image quality and resolution strongly depend on the photoswitching properties of fluorophores used for sample labeling. Development of fluorophores with optimized photoswitching will considerably improve SMLM spatial and spectral resolution. Currently, evaluating fluorophore photoswitching requires protein-conjugation before assessment, mandating specific fluorophore functionality, which is a major hurdle for systematic characterization. Herein, we validated polyvinyl alcohol (PVA) as a single-molecule environment to efficiently quantify the photoswitching properties of fluorophores and identified photoswitching properties predictive of quality SMLM images. We demonstrated that the same fluorophore photoswitching properties measured in PVA films and using antibody adsorption, a protein-conjugation environment analogous to labeled cells, were significantly correlated to microtubule width and continuity, surrogate measures of SMLM image quality. Defining PVA as a fluorophore photoswitching screening platform will facilitate SMLM fluorophore development and optimal image buffer assessment through facile and accurate photoswitching property characterization, which translates to SMLM fluorophore imaging performance.

  20. Image quality improvement in megavoltage cone beam CT using an imaging beam line and a sintered pixelated array system

    SciTech Connect

    Breitbach, Elizabeth K.; Maltz, Jonathan S.; Gangadharan, Bijumon; Bani-Hashemi, Ali; Anderson, Carryn M.; Bhatia, Sudershan K.; Stiles, Jared; Edwards, Drake S.; Flynn, Ryan T.

    2011-11-15

    Purpose: To quantify the improvement in megavoltage cone beam computed tomography (MVCBCT) image quality enabled by the combination of a 4.2 MV imaging beam line (IBL) with a carbon electron target and a detector system equipped with a novel sintered pixelated array (SPA) of translucent Gd₂O₂S ceramic scintillator. Clinical MVCBCT images are traditionally acquired with the same 6 MV treatment beam line (TBL) that is used for cancer treatment, a standard amorphous Si (a-Si) flat panel imager, and the Kodak Lanex Fast-B (LFB) scintillator. The IBL produces a greater fluence of keV-range photons than the TBL, to which the detector response is more optimal, and the SPA is a more efficient scintillator than the LFB. Methods: A prototype IBL + SPA system was installed on a Siemens Oncor linear accelerator equipped with the MVision™ image guided radiation therapy (IGRT) system. A SPA strip consisting of four neighboring tiles and measuring 40 cm by 10.96 cm in the crossplane and inplane directions, respectively, was installed in the flat panel imager. Head- and pelvis-sized phantom images were acquired at doses ranging from 3 to 60 cGy with three MVCBCT configurations: TBL + LFB, IBL + LFB, and IBL + SPA. Phantom image quality at each dose was quantified using the contrast-to-noise ratio (CNR) and modulation transfer function (MTF) metrics. Head and neck, thoracic, and pelvic (prostate) cancer patients were imaged with the three imaging system configurations at multiple doses ranging from 3 to 15 cGy. The systems were assessed qualitatively from the patient image data. Results: For head and neck and pelvis-sized phantom images, imaging doses of 3 cGy or greater, and relative electron densities of 1.09 and 1.48, the CNR average improvement factors for imaging system changes of TBL + LFB to IBL + LFB, IBL + LFB to IBL + SPA, and TBL + LFB to IBL + SPA were 1.63 (p < 10⁻⁸), 1.64 (p < 10⁻¹³), and 2.66 (p < 10⁻⁹), respectively. For all imaging
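The contrast-to-noise ratio used above as a phantom image quality metric is commonly computed from region-of-interest statistics. A minimal sketch with one common definition (the phantom, ROI geometry, and noise model here are illustrative assumptions, not the paper's setup):

```python
import numpy as np

def cnr(image, roi_mask, bg_mask):
    """Contrast-to-noise ratio: ROI/background mean difference
    divided by the background standard deviation."""
    roi = image[roi_mask]
    bg = image[bg_mask]
    return abs(roi.mean() - bg.mean()) / bg.std()

# synthetic phantom: uniform background plus a higher-density insert
rng = np.random.default_rng(0)
img = rng.normal(100.0, 5.0, (64, 64))
img[16:32, 16:32] += 25.0                    # "lesion" insert
roi = np.zeros((64, 64), bool); roi[16:32, 16:32] = True
bg = np.zeros((64, 64), bool); bg[40:56, 40:56] = True
value = cnr(img, roi, bg)                    # roughly 25 / 5 = 5
```

Comparing such CNR values across imaging configurations at matched dose is what the improvement factors in the abstract summarize.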

  1. LANDSAT-4 image data quality analysis for energy related applications. [nuclear power plant sites]

    NASA Technical Reports Server (NTRS)

    Wukelic, G. E. (Principal Investigator)

    1983-01-01

    No usable LANDSAT 4 TM data were obtained for the Hanford site in the Columbia Plateau region, but TM simulator data for a Virginia Electric Company nuclear power plant were used to test image processing algorithms. Principal component analyses of this data set clearly indicated that thermal plumes in surface waters used for reactor cooling would be discernible. Image processing and analysis programs were successfully tested using the 7-band Arkansas test scene, and preliminary analysis of TM data for the Savannah River Plant shows that current interactive image enhancement, analysis, and integration techniques can be effectively used for LANDSAT 4 data. Thermal band data appear adequate for gross estimates of thermal changes occurring near operating nuclear facilities, especially in surface water bodies being used for reactor cooling purposes. Additional image processing software was written and tested which provides for more rapid and effective analysis of the 7-band TM data.

  2. Subjective image quality comparison between two digital dental radiographic systems and conventional dental film

    PubMed Central

    Ajmal, Muhammed; Elshinawy, Mohamed I.

    2014-01-01

    Objectives Digital radiography has become an integral part of dentistry. Digital radiography does not require film or dark rooms, reduces X-ray doses, and instantly generates images. The aim of our study was to compare the subjective image quality of two digital dental radiographic systems with conventional dental film. Materials & methods A direct digital (DD) ‘Digital’ system by Sirona, a semi-direct (SD) digital system by Vista-scan, and Kodak ‘E’ speed dental X-ray films were selected for the study. Endodontically-treated extracted teeth (n = 25) were used in the study. Details of enamel, dentin, dentino-enamel junction, root canal filling (gutta percha), and simulated apical pathology were investigated with the three radiographic systems. The data were subjected to statistical analyses to reveal differences in subjective image quality. Results Conventional dental X-ray film was superior to the digital systems. For digital systems, DD imaging was superior to SD imaging. Conclusion Conventional film yielded superior image quality that was statistically significant in almost all aspects of comparison. Conventional film was followed in image quality by DD, and SD provided the lowest quality images. Conventional film is still considered the gold standard to diagnose diseases affecting the jawbone. Recommendations Improved software and hardware for digital imaging systems are now available and these improvements may now yield images that are comparable in quality to conventional film. However, we recommend that future studies use more observers and additional statistical methods to produce more definitive results. PMID:25382946

  3. A learning-based approach for automated quality assessment of computer-rendered images

    NASA Astrophysics Data System (ADS)

    Zhang, Xi; Agam, Gady

    2012-01-01

    Computer generated images are common in numerous computer graphics applications such as games, modeling, and simulation. There is normally a tradeoff between the time allocated to the generation of each image frame and the quality of the image, where better quality images require more processing time. Specifically, in the rendering of 3D objects, the surfaces of objects may be manipulated by subdividing them into smaller triangular patches and/or smoothing them so as to produce better-looking renderings. Since unnecessary subdivision results in increased rendering time and unnecessary smoothing results in reduced details, there is a need to automatically determine the amount of necessary processing for producing good quality rendered images. In this paper we propose a novel supervised learning based methodology for automatically predicting the quality of rendered images of 3D objects. To perform the prediction we train on a data set which is labeled by human observers for quality. We are then able to predict the quality of renderings (not used in the training) with an average prediction error of roughly 20%. The proposed approach is compared to known techniques and is shown to produce better results.
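The supervised prediction step can be illustrated in miniature: rendering features are regressed against observer-assigned quality labels, and an average relative prediction error is reported. A toy least-squares sketch (the features and scores are invented and happen to be exactly linear; the paper's actual model and its ~20% error use a richer feature set and held-out renderings):

```python
import numpy as np

# hypothetical features per rendering: (subdivision depth, smoothing passes)
X = np.array([[1, 0], [2, 1], [3, 1], [4, 2], [5, 2]], float)
y = np.array([2.0, 4.0, 5.0, 7.0, 8.0])       # observer quality scores

A = np.hstack([X, np.ones((len(X), 1))])      # append a bias column
w, *_ = np.linalg.lstsq(A, y, rcond=None)     # fit linear predictor
pred = A @ w
avg_rel_error = float(np.mean(np.abs(pred - y) / y))
# this toy data is exactly linear, so the training error is ~0;
# real observer scores leave a residual (the paper reports roughly 20%)
```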

  4. Digitization and metric conversion for image quality test targets: Part II

    NASA Astrophysics Data System (ADS)

    Kress, William C.

    2003-12-01

    A common need of the INCITS W1.1 Macro Uniformity, Color Rendition and Micro Uniformity ad hoc efforts is to digitize image quality test targets and derive parameters that correlate with image quality assessments. The digitized data should be in a colorimetric color space such as CIELAB, and the process of digitizing should introduce no spatial artifacts that reduce the accuracy of image quality parameters. Input digitizers come in many forms including inexpensive scanners used in the home, a range of sophisticated scanners used for graphic arts, and scanners used for scientific and industrial measurements (e.g., microdensitometers). Some of these are capable of digitizing hard copy output for objective image quality metrics, and this report focuses on assessment of high quality flatbed scanners for that role. Digitization using flatbed scanners is attractive because they are relatively inexpensive, easy to use, and most are available with document feeders permitting analysis of a stack of documents with little user interaction. Other authors have addressed using scanners for image quality measurements. This paper focuses (1) on color transformations from RGB to CIELAB and (2) on sampling issues, and demonstrates that flatbed scanners can have a high level of accuracy for generating accurate, stable images in the CIELAB metric. Previous discussion and experimental results focusing on color conversions had been presented at PICS 2003. This paper reviews the past discussion with some refinement based on recent experiments and extends the analysis into color accuracy verification and sampling issues.
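The RGB-to-CIELAB transformation discussed in (1) can be sketched with the standard sRGB/D65 formulas; a scanner-specific characterization, which the paper addresses, would replace the generic sRGB matrix with a measured one:

```python
import numpy as np

# standard sRGB linear-RGB -> CIE XYZ matrix (D65 white)
M = np.array([[0.4124, 0.3576, 0.1805],
              [0.2126, 0.7152, 0.0722],
              [0.0193, 0.1192, 0.9505]])
WHITE = M @ np.ones(3)          # white point implied by this matrix

def srgb_to_lab(rgb):
    """Convert an sRGB triplet in [0, 1] to CIELAB (L*, a*, b*)."""
    rgb = np.asarray(rgb, float)
    # undo the sRGB transfer function
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    xyz = M @ lin
    t = xyz / WHITE
    # CIELAB companding with the linear segment near zero
    f = np.where(t > (6 / 29) ** 3, np.cbrt(t), t / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[1] - 16
    a = 500 * (f[0] - f[1])
    b = 200 * (f[1] - f[2])
    return L, a, b

L, a, b = srgb_to_lab([1.0, 1.0, 1.0])   # white maps to L* = 100, a* = b* = 0
```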

  5. Image Quality Assessment Based on Inter-Patch and Intra-Patch Similarity

    PubMed Central

    Zhou, Fei; Lu, Zongqing; Wang, Can; Sun, Wen; Xia, Shu-Tao; Liao, Qingmin

    2015-01-01

    In this paper, we propose a full-reference (FR) image quality assessment (IQA) scheme, which evaluates image fidelity from two aspects: the inter-patch similarity and the intra-patch similarity. The scheme is performed in a patch-wise fashion so that a quality map can be obtained. On one hand, we investigate the disparity between one image patch and its adjacent ones. This disparity is visually described by an inter-patch feature, where the hybrid effect of luminance masking and contrast masking is taken into account. The inter-patch similarity is further measured by modifying the normalized correlation coefficient (NCC). On the other hand, we also attach importance to the impact of image contents within one patch on the IQA problem. For the intra-patch feature, we consider image curvature as an important complement of image gradient. According to local image contents, the intra-patch similarity is measured by adaptively comparing image curvature and gradient. In addition, a nonlinear integration of the inter-patch and intra-patch similarity is presented to obtain an overall score of image quality. The experiments conducted on six publicly available image databases show that our scheme achieves better performance in comparison with several state-of-the-art schemes. PMID:25793282
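The normalized correlation coefficient that the inter-patch similarity measure builds on can be sketched as follows (this is the plain NCC between patches, not the authors' modified version):

```python
import numpy as np

def ncc(p, q):
    """Normalized correlation coefficient between two equal-size patches."""
    p = np.asarray(p, float).ravel()
    q = np.asarray(q, float).ravel()
    p = p - p.mean()                      # mean-center both patches
    q = q - q.mean()
    denom = np.sqrt((p @ p) * (q @ q))
    return float(p @ q / denom) if denom > 0 else 1.0

patch = np.array([[1.0, 2.0], [3.0, 4.0]])
identical = ncc(patch, patch)             # identical patches -> 1
inverted = ncc(patch, -patch)             # contrast-inverted patches -> -1
```

NCC is invariant to patch mean and scale, which is why masking effects have to be injected separately, as the abstract describes.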

  6. Image and Diagnosis Quality of X-Ray Image Transmission via Cell Phone Camera: A Project Study Evaluating Quality and Reliability

    PubMed Central

    Heck, Andreas; Hadizadeh, Dariusch R.; Weber, Oliver; Gräff, Ingo; Burger, Christof; Montag, Mareen; Koerfer, Felix; Kabir, Koroush

    2012-01-01

    Introduction Developments in telemedicine have not produced any relevant benefits for orthopedics and trauma surgery to date. For the present project study, several parameters were examined during assessment of x-ray images, which had been photographed and transmitted via cell phone. Materials and Methods A total of 100 x-ray images of various body regions were photographed with a Nokia cell phone and transmitted via email or MMS. Next, the transmitted photographs were reviewed on a laptop computer by five medical specialists and assessed regarding quality and diagnosis. Results Due to their poor quality, the transmitted MMS images could not be evaluated and this path of transmission was therefore excluded. Mean size of transmitted x-ray email images was 394 kB (range: 265–590 kB, SD ±59), average transmission time was 3.29 min ±8 (CI 95%: 1.7–4.9). Applying a score from 1–10 (very poor - excellent), mean image quality was 5.8. In 83.2±4% (mean value ± SD) of cases (median 82; 80–89%), there was agreement between final diagnosis and assessment by the five medical experts who had received the images. However, there was a markedly low concurrence ratio in the thoracic area and in pediatric injuries. Discussion While the rate of accurate diagnosis and indication for surgery was high with a concurrence ratio of 83%, considerable differences existed between the assessed regions, with lowest values for thoracic images. Teleradiology is a cost-effective, rapid method which can be applied wherever wireless cell phone reception is available. In our opinion, this method is in principle suitable for clinical use, enabling the physician on duty to agree on appropriate measures with colleagues located elsewhere via x-ray image transmission on a cell phone. PMID:23082108

  7. No-reference image quality assessment based on log-derivative statistics of natural scenes

    NASA Astrophysics Data System (ADS)

    Zhang, Yi; Chandler, Damon M.

    2013-10-01

    We propose an efficient blind/no-reference image quality assessment algorithm using a log-derivative statistical model of natural scenes. Our method, called DErivative Statistics-based QUality Evaluator (DESIQUE), extracts image quality-related statistical features at two image scales in both the spatial and frequency domains. In the spatial domain, normalized pixel values of an image are modeled in two ways: pointwise-based statistics for single pixel values and pairwise-based log-derivative statistics for the relationship of pixel pairs. In the frequency domain, log-Gabor filters are used to extract the fine scales of the image, which are also modeled by the log-derivative statistics. All of these statistics can be fitted by a generalized Gaussian distribution model, and the estimated parameters are fed into combined frameworks to estimate image quality. We train our models on the LIVE database by using optimized support vector machine learning. Experimental results on other databases show that the proposed algorithm not only yields a substantial improvement in predictive performance as compared to other state-of-the-art no-reference image quality assessment methods, but also maintains a high computational efficiency.
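The pairwise log-derivative statistics can be sketched as log-domain differences of neighboring pixels along several orientations. A minimal illustration (the epsilon offset and the mean/variance summaries are assumptions; DESIQUE instead fits a generalized Gaussian distribution to each empirical histogram):

```python
import numpy as np

def log_derivative_stats(image, eps=0.1):
    """Pairwise log-derivative features: differences of log-domain
    neighboring pixels along four orientations (eps avoids log(0))."""
    J = np.log(np.abs(np.asarray(image, float)) + eps)
    diffs = {
        "horizontal": J[:, 1:] - J[:, :-1],
        "vertical": J[1:, :] - J[:-1, :],
        "diagonal": J[1:, 1:] - J[:-1, :-1],
        "anti-diagonal": J[1:, :-1] - J[:-1, 1:],
    }
    # summarize each orientation by (mean, variance); the paper would fit
    # a generalized Gaussian and use its shape/scale parameters as features
    return {name: (float(d.mean()), float(d.var())) for name, d in diffs.items()}

rng = np.random.default_rng(1)
stats = log_derivative_stats(rng.normal(0.0, 1.0, (32, 32)))
```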

  8. Medical imaging using ionizing radiation: Optimization of dose and image quality in fluoroscopy

    SciTech Connect

    Jones, A. Kyle; Balter, Stephen; Rauch, Phillip; Wagner, Louis K.

    2014-01-15

    The 2012 Summer School of the American Association of Physicists in Medicine (AAPM) focused on optimization of the use of ionizing radiation in medical imaging. Day 2 of the Summer School was devoted to fluoroscopy and interventional radiology and featured seven lectures. These lectures have been distilled into a single review paper covering equipment specification and siting, equipment acceptance testing and quality control, fluoroscope configuration, radiation effects, dose estimation and measurement, and principles of flat panel computed tomography. This review focuses on modern fluoroscopic equipment and consists in large part of information not found in textbooks on the subject. While this review does discuss technical aspects of modern fluoroscopic equipment, it focuses mainly on the clinical use and support of such equipment, from initial installation through estimation of patient dose and management of radiation effects. This review will be of interest to those learning about fluoroscopy, to those wishing to update their knowledge of modern fluoroscopic equipment, to those wishing to deepen their knowledge of particular topics, such as flat panel computed tomography, and to those who support fluoroscopic equipment in the clinic.

  9. Second Harmonic Imaging improves Echocardiograph Quality on board the International Space Station

    NASA Technical Reports Server (NTRS)

    Garcia, Kathleen; Sargsyan, Ashot; Hamilton, Douglas; Martin, David; Ebert, Douglas; Melton, Shannon; Dulchavsky, Scott

    2008-01-01

    Ultrasound (US) capabilities have been part of the Human Research Facility (HRF) on board the International Space Station (ISS) since 2001. The US equipment on board the ISS includes a first-generation Tissue Harmonic Imaging (THI) option. Harmonic imaging (HI) uses the second harmonic response of the tissue to the ultrasound beam and produces robust tissue detail and signal. Since this is a first-generation THI, there are inherent limitations in tissue penetration. As a breakthrough technology, HI extensively advanced the field of ultrasound. In cardiac applications, it drastically improves endocardial border detection and has become a common imaging modality. US images were captured and stored as JPEG stills from the ISS video downlink. US images with and without the harmonic imaging option were randomized and provided to volunteers without medical education or US skills for identification of the endocardial border. The results were processed and analyzed using applicable statistical calculations. The measurements in US images using HI improved measurement consistency and reproducibility among observers when compared to fundamental imaging. HI has been embraced by the imaging community at large as it improves the quality and data validity of US studies, especially in difficult-to-image cases. Even with the limitations of the first generation THI, HI improved the quality and measurability of many of the downlinked images from the ISS and should be an option utilized with cardiac imaging on board the ISS in all future space missions.

  10. Dose reduction and image quality optimizations in CT of pediatric and adult patients: phantom studies

    NASA Astrophysics Data System (ADS)

    Jeon, P.-H.; Lee, C.-L.; Kim, D.-H.; Lee, Y.-J.; Jeon, S.-S.; Kim, H.-J.

    2014-03-01

    Multi-detector computed tomography (MDCT) can be used to easily and rapidly perform numerous acquisitions, possibly leading to a marked increase in the radiation dose to individual patients. Technical options dedicated to automatically adjusting the acquisition parameters according to the patient's size are of specific interest in pediatric radiology. A constant tube potential reduction can be achieved for adults and children, while maintaining a constant detector energy fluence. To evaluate radiation dose, the weighted CT dose index (CTDIw) was calculated based on the CT dose index (CTDI) measured using an ion chamber, and image noise and image contrast were measured from a scanned image to evaluate image quality. The dose-weighted contrast-to-noise ratio (CNRD) was calculated from the radiation dose, image noise, and image contrast measured from a scanned image. The noise derivative (ND) is a quality index for dose efficiency. X-ray spectra with tube voltages ranging from 80 to 140 kVp were used to compute the average photon energy. Image contrast and the corresponding contrast-to-noise ratio (CNR) were determined for lesions of soft tissue, muscle, bone, and iodine relative to a uniform water background, as iodine contrast increases at lower energies (the 33 keV k-edge of iodine lies closer to the beam energy), using mixed water-iodine contrast normalization (water 0 HU; iodine 25, 100, 200, and 1000 HU). The proposed values correspond to high-quality images and can be reduced if only high-contrast organs are assessed. The potential benefit of lowering the tube voltage is an improved CNRD, resulting in a lower radiation dose and optimization of image quality. Adjusting the tube potential in abdominal CT would be useful in current pediatric radiography, where the choice of X-ray techniques generally takes into account the size of the patient as well as the need to balance the conflicting requirements of diagnostic image quality and radiation dose.
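The dose-weighted contrast-to-noise ratio (CNRD) can be sketched with one common figure-of-merit definition, CNR per square root of dose (an assumption for illustration; the paper may normalize differently). In a quantum-limited system, quadrupling the dose halves the noise and leaves CNRD unchanged:

```python
import math

def cnrd(contrast, noise, dose):
    """Dose-weighted CNR: CNR divided by the square root of dose
    (one common convention, assumed here)."""
    return (contrast / noise) / math.sqrt(dose)

# quantum-limited behavior: 4x dose -> noise halves -> CNRD unchanged
low_dose = cnrd(contrast=50.0, noise=10.0, dose=4.0)
high_dose = cnrd(contrast=50.0, noise=5.0, dose=16.0)
```

A protocol change (e.g., a lower tube potential) that raises CNRD therefore buys image quality per unit dose, which is the optimization target in the abstract.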

  11. Improving best-phase image quality in cardiac CT by motion correction with MAM optimization

    SciTech Connect

    Rohkohl, Christopher; Bruder, Herbert; Stierstorfer, Karl; Flohr, Thomas

    2013-03-15

    Purpose: Research in image reconstruction for cardiac CT aims at using motion correction algorithms to improve the image quality of the coronary arteries. The key to those algorithms is motion estimation, which is currently based on 3-D/3-D registration to align the structures of interest in images acquired in multiple heart phases. The need for an extended scan data range covering several heart phases is critical in terms of radiation dose to the patient and limits the clinical potential of the method. Furthermore, literature reports only slight quality improvements of the motion corrected images when compared to the most quiet phase (best-phase) that was actually used for motion estimation. In this paper a motion estimation algorithm is proposed which does not require an extended scan range but works with a short scan data interval, and which markedly improves the best-phase image quality. Methods: Motion estimation is based on the definition of motion artifact metrics (MAM) to quantify motion artifacts in a 3-D reconstructed image volume. The authors use two different MAMs, entropy, and positivity. By adjusting the motion field parameters, the MAM of the resulting motion-compensated reconstruction is optimized using a gradient descent procedure. In this way motion artifacts are minimized. For a fast and practical implementation, only analytical methods are used for motion estimation and compensation. Both the MAM-optimization and a 3-D/3-D registration-based motion estimation algorithm were investigated by means of a computer-simulated vessel with a cardiac motion profile. Image quality was evaluated using normalized cross-correlation (NCC) with the ground truth template and root-mean-square deviation (RMSD). Four coronary CT angiography patient cases were reconstructed to evaluate the clinical performance of the proposed method. Results: For the MAM-approach, the best-phase image quality could be improved for all investigated heart phases, with a maximum
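The entropy-based motion artifact metric (MAM) can be sketched as the Shannon entropy of the gray-level histogram: motion artifacts spread gray levels across more bins and raise entropy, so minimizing entropy over the motion-field parameters sharpens the reconstruction. A minimal illustration (the bin count and test images are illustrative):

```python
import numpy as np

def image_entropy(image, bins=64):
    """Shannon entropy (bits) of the gray-level histogram."""
    hist, _ = np.histogram(image, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# a clean two-level image vs. the same image with artifact-like fluctuations
clean = np.zeros((32, 32)); clean[8:24, 8:24] = 1.0
corrupted = clean + np.random.default_rng(2).normal(0.0, 0.1, clean.shape)
e_clean, e_corrupted = image_entropy(clean), image_entropy(corrupted)
```

A gradient-descent motion estimator like the one described would evaluate such a metric on each motion-compensated reconstruction and adjust the motion parameters to reduce it.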

  12. The use of modern electronic flat panel devices for image guided radiation therapy: image quality comparison, intra-fraction motion monitoring and quality assurance applications

    NASA Astrophysics Data System (ADS)

    Nill, S.; Stützel, J.; Häring, P.; Oelfke, U.

    2008-06-01

    With modern radiotherapy delivery techniques like intensity modulated radiotherapy (IMRT) it is possible to deliver a more conformal dose distribution to the tumor while better sparing the organs at risk (OAR) compared to 3D conventional radiation therapy. Because of the high dose conformity theoretically achievable, it is very important to know the exact position of the target volume during the treatment. With more and more modern linear accelerators equipped with imaging devices, this is now becoming possible. These imaging devices use energies between 120 kV and 6 MV and therefore employ different detector systems, but the vast majority use amorphous silicon flat panel devices with different scintillator screens and build-up materials. The technical details and the image quality of these systems are discussed and first results of the comparison are presented. In addition, new methods for motion management and quality assurance procedures are briefly discussed.

  13. Improved quality of intrafraction kilovoltage images by triggered readout of unexposed frames

    SciTech Connect

    Poulsen, Per Rugaard; Jonassen, Johnny; Jensen, Carsten; Schmidt, Mai Lykkegaard

    2015-11-15

    Purpose: The gantry-mounted kilovoltage (kV) imager of modern linear accelerators can be used for real-time tumor localization during radiation treatment delivery. However, the kV image quality often suffers from cross-scatter from the megavoltage (MV) treatment beam. This study investigates readout of unexposed kV frames as a means to improve the kV image quality in a series of experiments and a theoretical model of the observed image quality improvements. Methods: A series of fluoroscopic images were acquired of a solid water phantom with an embedded gold marker and an air cavity with and without simultaneous radiation of the phantom with a 6 MV beam delivered perpendicular to the kV beam with 300 and 600 monitor units per minute (MU/min). An in-house built device triggered readout of zero, one, or multiple unexposed frames between the kV exposures. The unexposed frames contained part of the MV scatter, consequently reducing the amount of MV scatter accumulated in the exposed frames. The image quality with and without unexposed frame readout was quantified as the contrast-to-noise ratio (CNR) of the gold marker and air cavity for a range of imaging frequencies from 1 to 15 Hz. To gain more insight into the observed CNR changes, the image lag of the kV imager was measured and used as input in a simple model that describes the CNR with unexposed frame readout in terms of the contrast, kV noise, and MV noise measured without readout of unexposed frames. Results: Without readout of unexposed kV frames, the quality of intratreatment kV images decreased dramatically with reduced kV frequencies due to MV scatter. The gold marker was only visible for imaging frequencies ≥3 Hz at 300 MU/min and ≥5 Hz for 600 MU/min. Visibility of the air cavity required even higher imaging frequencies. Readout of multiple unexposed frames ensured visibility of both structures at all imaging frequencies and a CNR that was independent of the kV frame rate. The image lag was 12.2%, 2
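The qualitative effect of unexposed-frame readout on CNR can be sketched with a simple noise model in which the n unexposed frames absorb a proportional share of the MV cross-scatter. The 1/(n+1) scaling below is an assumed illustration, not the paper's fitted model, which is based on measured image lag and noise:

```python
import math

def cnr_with_readout(contrast, kv_noise, mv_noise, n_unexposed):
    """Toy model: n unexposed frames read out between kV exposures absorb
    part of the MV scatter, scaling the MV noise accumulated in each
    exposed frame by 1 / (n_unexposed + 1) (assumed scaling)."""
    mv_eff = mv_noise / (n_unexposed + 1)
    return contrast / math.sqrt(kv_noise ** 2 + mv_eff ** 2)

base = cnr_with_readout(contrast=40.0, kv_noise=2.0, mv_noise=10.0, n_unexposed=0)
improved = cnr_with_readout(contrast=40.0, kv_noise=2.0, mv_noise=10.0, n_unexposed=3)
```

Under any model of this shape, CNR improves monotonically with the number of unexposed frames and approaches the scatter-free limit, matching the trend reported above.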

  14. Blind noisy image quality evaluation using a deformable ant colony algorithm

    NASA Astrophysics Data System (ADS)

    Chen, Li; Huang, Xiaotong; Tian, Jing; Fu, Xiaowei

    2014-04-01

    The objective of blind noisy image quality assessment is to evaluate the quality of the degraded noisy image without the knowledge of the ground truth image. Its performance relies on the accuracy of the noise statistics estimated from homogenous blocks. The major challenge of block-based approaches lies in the block size selection, as it affects the local noise derivation. To tackle this challenge, a deformable ant colony optimization (DACO) approach is proposed in this paper to adaptively adjust the ant size for image block selection. The proposed DACO approach considers that the size of the ant is adjustable during foraging. For the smooth image blocks, more pheromone is deposited, and then the size of ant is increased. Therefore, this strategy enables the ants to have dynamic food-search capability, leading to more accurate selection of homogeneous blocks. Furthermore, regression analysis is used to obtain an image quality score by exploiting the above-estimated noise statistics. Experimental results show that the proposed approach outperforms conventional approaches, providing more accurate noise statistics estimation and consistent image quality evaluation for both artificially generated and real-world noisy images.
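The block-based noise estimation that this method refines can be sketched by pooling the lowest-variance (most homogeneous) blocks. The fixed block size and keep fraction here are illustrative assumptions; the DACO approach adapts the block selection instead:

```python
import numpy as np

def estimate_noise_sigma(image, block=8, keep=0.5):
    """Estimate the noise std from the most homogeneous
    (lowest-variance) non-overlapping blocks of the image."""
    img = np.asarray(image, float)
    h, w = img.shape
    variances = sorted(
        img[i:i + block, j:j + block].var()
        for i in range(0, h - block + 1, block)
        for j in range(0, w - block + 1, block)
    )
    n = max(1, int(len(variances) * keep))
    return float(np.sqrt(np.mean(variances[:n])))

rng = np.random.default_rng(3)
scene = np.zeros((64, 64)); scene[:, 32:] = 50.0     # piecewise-flat scene
noisy = scene + rng.normal(0.0, 4.0, scene.shape)
sigma = estimate_noise_sigma(noisy)                   # near the true 4.0
```

Averaging only the lowest-variance blocks slightly underestimates sigma (an order-statistics bias), which is one reason adaptive block selection helps.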

  15. Rapid Assessment of Tablet Film Coating Quality by Multispectral UV Imaging.

    PubMed

    Klukkert, Marten; Wu, Jian X; Rantanen, Jukka; Rehder, Soenke; Carstensen, Jens M; Rades, Thomas; Leopold, Claudia S

    2016-08-01

    Chemical imaging techniques are beneficial for control of tablet coating layer quality as they provide spectral and spatial information and allow characterization of various types of coating defects. The purpose of this study was to assess the applicability of multispectral UV imaging for assessment of the coating layer quality of tablets. UV images were used to detect, characterize, and localize coating layer defects such as chipped parts, inhomogeneities, and cracks, as well as to evaluate the coating surface texture. Acetylsalicylic acid tablets were prepared on a rotary tablet press and coated with a polyvinyl alcohol-polyethylene glycol graft copolymer using a pan coater. It was demonstrated that the coating intactness can be assessed accurately and fast by UV imaging. The different types of coating defects could be differentiated and localized based on multivariate image analysis and Soft Independent Modeling by Class Analogy applied to the UV images. Tablets with inhomogeneous texture of the coating could be identified and distinguished from those with a homogeneous surface texture. Consequently, UV imaging was shown to be well-suited for monitoring of the tablet coating layer quality. UV imaging is a promising technique for fast quality control of the tablet coating because of the high data acquisition speed and its nondestructive analytical nature. PMID:26729525

  16. Image quality assessment in panoramic dental radiography: a comparative study between conventional and digital systems

    PubMed Central

    Tiau, Yu Jin

    2013-01-01

    This study was designed to compare and evaluate the diagnostic image quality of dental panoramic radiography between conventional and digital systems. Fifty-four panoramic images were collected and divided into three groups: conventional images and digital images with and without post-processing. Each image was printed out and scored subjectively by two experienced dentists who were blinded to the exposure parameters and system protocols. The evaluation covered anatomical coverage and structures, density, and image contrast. The overall image quality scores revealed that digital panoramic imaging with post-processing scored the highest at 3.45±0.19, followed by digital panoramic imaging without post-processing and the conventional panoramic system, with corresponding scores of 3.33±0.33 and 2.06±0.40. In conclusion, images produced by the digital panoramic system are better in diagnostic image quality than those from the conventional panoramic system. Digital post-processing visualization can significantly improve diagnostic quality in terms of radiographic density and contrast. PMID:23483085

  17. Effect of masking phase-only holograms on the quality of reconstructed images.

    PubMed

    Deng, Yuanbo; Chu, Daping

    2016-04-20

    A phase-only hologram modulates the phase of the incident light and diffracts it efficiently with low energy loss because of the minimal absorption. Much research attention has been focused on how to generate phase-only holograms, and little work has been done to understand the effect and limitation of their partial implementation, caused for example by physical defects and constraints, in particular in practical situations where a phase-only hologram is confined or needs to be sliced or tiled. The present study simulates the effect of masking phase-only holograms on the quality of reconstructed images in three different scenarios with different filling factors, filling positions, and illumination intensity profiles. Quantitative analysis confirms that the width of the image point spread function becomes wider and the image quality decreases, as expected, when the filling factor decreases, and the image quality remains the same for different filling positions as well. The width of the image point spread function derived from different filling factors is consistent with that measured directly from the reconstructed image, especially as the filling factor becomes small. Finally, mask profiles of different shapes and intensity distributions are shown to have more complicated effects on the image point spread function, which in turn affects the quality and textures of the reconstructed image. PMID:27140082
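The widening of the point spread function with decreasing filling factor follows from Fourier optics: the far-field PSF is the squared magnitude of the Fourier transform of the aperture, so a smaller aperture yields a proportionally wider central lobe. A 1-D numerical sketch (the sampling and aperture model are illustrative):

```python
import numpy as np

def psf_central_width(fill, n=256):
    """Number of samples above half-maximum in the far-field PSF of a 1-D
    aperture covering a fraction `fill` of the hologram plane
    (a crude FWHM proxy)."""
    aperture = np.zeros(n)
    m = max(1, int(n * fill))
    start = (n - m) // 2
    aperture[start:start + m] = 1.0
    psf = np.abs(np.fft.fft(aperture)) ** 2     # far-field intensity
    return int(np.count_nonzero(psf >= psf.max() / 2))

full = psf_central_width(1.0)       # full hologram -> narrowest PSF
quarter = psf_central_width(0.25)   # smaller filling factor -> wider PSF
```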

  18. Improving image quality in compressed ultrafast photography with a space- and intensity-constrained reconstruction algorithm

    NASA Astrophysics Data System (ADS)

    Zhu, Liren; Chen, Yujia; Liang, Jinyang; Gao, Liang; Ma, Cheng; Wang, Lihong V.

    2016-03-01

    The single-shot compressed ultrafast photography (CUP) camera is the fastest receive-only camera in the world. In this work, we introduce an external CCD camera and a space- and intensity-constrained (SIC) reconstruction algorithm to improve the image quality of CUP. The CCD camera takes a time-unsheared image of the dynamic scene. Unlike the previously used unconstrained algorithm, the proposed algorithm incorporates both spatial and intensity constraints, based on the additional prior information provided by the external CCD camera. First, a spatial mask is extracted from the time-unsheared image to define the zone of action. Second, an intensity threshold constraint is determined based on the similarity between the temporally projected image of the reconstructed datacube and the time-unsheared image taken by the external CCD. Both simulation and experimental studies showed that the SIC reconstruction improves the spatial resolution, contrast, and general quality of the reconstructed image.

  19. Image-quality assessment of monochrome monitors for medical soft copy display

    NASA Astrophysics Data System (ADS)

    Weibrecht, Martin; Spekowius, Gerhard; Quadflieg, Peter; Blume, Hartwig R.

    1997-05-01

    Soft-copy presentation of medical images is becoming part of the medical routine as more and more health care facilities are converted to digital filmless hospital and radiological information management. To provide optimal image quality, display systems must be incorporated when assessing the overall system image quality. We developed a method to accomplish this. The proper working of the method is demonstrated with the analysis of four different monochrome monitors. We determined display functions and veiling glare with a high-performance photometer. Structure mottle of the CRT screens, point spread functions and images of stochastic structures were acquired by a scientific CCD camera. The images were analyzed with respect to signal transfer characteristics and noise power spectra. We determined the influence of the monitors on the detective quantum efficiency of a simulated digital x-ray imaging system. The method follows a physical approach; nevertheless, the results of the analysis are in good agreement with the subjective impression of human observers.

  20. Pleiades image quality: from users' needs to products definition

    NASA Astrophysics Data System (ADS)

    Kubik, Philippe; Pascal, Véronique; Latry, Christophe; Baillarin, Simon

    2005-10-01

    Pleiades is the highest resolution civilian earth observing system ever developed in Europe. This imagery programme is conducted by the French National Space Agency, CNES. In 2008-2009 it will operate two agile satellites designed to provide optical images to civilian and defence users. Images will be simultaneously acquired in Panchromatic (PA) and multispectral (XS) mode, which allows, in nadir acquisition conditions, delivery of 20 km wide, false or natural colored scenes with a 70 cm ground sampling distance after PA+XS fusion. Imaging capabilities have been highly optimized in order to acquire along-track mosaics, stereo pairs and triplets, and multi-targets. To fulfill the operational requirements and ensure quick access to information, ground processing has to automatically perform the radiometric and geometric corrections. Since ground processing capabilities have been taken into account very early in the programme development, it has been possible to relax some costly on-board component requirements in order to achieve a cost-effective on-board/ground compromise. Starting from an overview of the system characteristics, this paper deals with the definition of the image products (raw level, perfect sensor, orthoimage and along-track orthomosaics) and the main processing steps. It shows how each system performance is a result of the satellite performance followed by an appropriate ground processing. Finally, it focuses on the radiometric performance of final products, which is intimately linked to the following processing steps: radiometric corrections, PA restoration, image resampling, and pan-sharpening.

  1. Enhancement of the low resolution image quality using randomly sampled data for multi-slice MR imaging

    PubMed Central

    Pang, Yong; Yu, Baiying

    2014-01-01

    Low resolution images are often acquired in in vivo MR applications involving large field-of-view (FOV) and high speed imaging, such as whole-body MRI screening and functional MRI. In this work, we investigate a multi-slice imaging strategy for acquiring low resolution images by using compressed sensing (CS) MRI to enhance the image quality without increasing the acquisition time. In this strategy, low resolution images of all the slices are acquired using a multi-slice imaging sequence. In addition, extra randomly sampled data in one center slice are acquired using the CS strategy. These additional randomly sampled data are multiplied by weighting functions generated from low resolution full k-space images of the two slices, and then interpolated into the k-space of the other slices. In vivo MR images of the human brain were employed to investigate the feasibility and performance of the proposed method. Quantitative comparison between conventional low resolution images and those from the proposed method was also performed to demonstrate the advantage of the method. PMID:24834426

  2. Objective assessment of image quality and dose reduction in CT iterative reconstruction

    SciTech Connect

    Vaishnav, J. Y. Jung, W. C.; Popescu, L. M.; Zeng, R.; Myers, K. J.

    2014-07-15

    Purpose: Iterative reconstruction (IR) algorithms have the potential to reduce radiation dose in CT diagnostic imaging. As these algorithms become available on the market, a standardizable method of quantifying the dose reduction that a particular IR method can achieve would be valuable. Such a method would assist manufacturers in making promotional claims about dose reduction, buyers in comparing different devices, physicists in independently validating the claims, and the United States Food and Drug Administration in regulating the labeling of CT devices. However, the nonlinear nature of commercially available IR algorithms poses challenges to objectively assessing image quality, a necessary step in establishing the amount of dose reduction that a given IR algorithm can achieve without compromising that image quality. This review paper seeks to consolidate information relevant to objectively assessing the quality of CT IR images, and thereby measuring the level of dose reduction that a given IR algorithm can achieve. Methods: The authors discuss task-based methods for assessing the quality of CT IR images and evaluating dose reduction. Results: The authors explain and review recent literature on signal detection and localization tasks in CT IR image quality assessment, the design of an appropriate phantom for these tasks, possible choices of observers (including human and model observers), and methods of evaluating observer performance. Conclusions: Standardizing the measurement of dose reduction is a problem of broad interest to the CT community and to public health. A necessary step in the process is the objective assessment of CT image quality, for which various task-based methods may be suitable. This paper attempts to consolidate recent literature that is relevant to the development and implementation of task-based methods for the assessment of CT IR image quality.

  3. Beam quality measurements using digitized laser beam images

    SciTech Connect

    Duncan, M.D.; Mahon, R.

    1989-11-01

    A method is described for measuring various laser beam characteristics with modest experimental complexity by digital processing of the near and far field images. Gaussian spot sizes, peak intensities, and spatial distributions of the images are easily found. Far field beam focusability is determined by computationally applying apertures of circular or elliptical diameters to the digitized image. Visualization of the magnitude of phase and intensity distortions is accomplished by comparing the 2-D fast Fourier transform of both smoothed and unsmoothed near field data to the actual far field data. The digital processing may be performed on current personal computers to give the experimenter unprecedented capabilities for rapid beam characterization at relatively low cost.
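
    One of the quantities mentioned above, the Gaussian spot size, can be estimated from a digitized beam image with intensity-weighted second moments (a D4sigma-style measurement). The synthetic Gaussian beam below is an assumption used only to exercise the sketch.

```python
import numpy as np

def beam_widths(image):
    """Return (sigma_x, sigma_y) intensity-weighted second-moment widths."""
    y, x = np.indices(image.shape)
    total = image.sum()
    cx = (image * x).sum() / total        # intensity-weighted centroid
    cy = (image * y).sum() / total
    sx = np.sqrt((image * (x - cx) ** 2).sum() / total)
    sy = np.sqrt((image * (y - cy) ** 2).sum() / total)
    return sx, sy

# Synthetic near-field image: Gaussian spot with sigma = 6 pixels.
n = 128
yy, xx = np.indices((n, n))
img = np.exp(-((xx - 64) ** 2 + (yy - 64) ** 2) / (2 * 6.0 ** 2))
sx, sy = beam_widths(img)   # both close to 6 pixels
```

    The recovered widths match the known sigma of the synthetic spot, which is the kind of check one would run before applying the method to real camera frames.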

  4. TH-A-16A-01: Image Quality for the Radiation Oncology Physicist: Review of the Fundamentals and Implementation

    SciTech Connect

    Seibert, J; Imbergamo, P

    2014-06-15

    The expansion and integration of diagnostic imaging technologies such as On Board Imaging (OBI) and Cone Beam Computed Tomography (CBCT) into radiation oncology has required radiation oncology physicists to be responsible for and become familiar with assessing image quality. Unfortunately many radiation oncology physicists have had little or no training or experience in measuring and assessing image quality. Many physicists have turned to automated QA analysis software without having a fundamental understanding of image quality measures. This session will review the basic image quality measures of imaging technologies used in the radiation oncology clinic, such as low contrast resolution, high contrast resolution, uniformity, noise, and contrast scale, and how to measure and assess them in a meaningful way. Additionally a discussion of the implementation of an image quality assurance program in compliance with Task Group recommendations will be presented along with the advantages and disadvantages of automated analysis methods. Learning Objectives: Review and understanding of the fundamentals of image quality. Review and understanding of the basic image quality measures of imaging modalities used in the radiation oncology clinic. Understand how to implement an image quality assurance program and to assess basic image quality measures in a meaningful way.

  5. Are existing procedures enough? Image and video quality assessment: review of subjective and objective metrics

    NASA Astrophysics Data System (ADS)

    Ouni, Sonia; Chambah, Majed; Herbin, Michel; Zagrouba, Ezzeddine

    2008-01-01

    Images and videos are subject to a wide variety of distortions during acquisition, digitizing, processing, restoration, compression, storage, transmission and reproduction, any of which may result in degradation in visual quality. That is why image quality assessment plays a major role in many image processing applications. Image and video quality metrics can be classified by using a number of criteria such as the type of the application domain, the predicted distortion (noise, blur, etc.) and the type of information needed to assess the quality (original image, distorted image, etc.). In the literature, the most reliable way of assessing the quality of an image or of a video is subjective evaluation [1], because human beings are the ultimate receivers in most applications. The subjective quality metric, obtained from a number of human observers, has been regarded for many years as the most reliable form of quality measurement. However, this approach is too cumbersome, slow and expensive for most applications [2]. So, in recent years a great effort has been made towards the development of quantitative measures. The objective quality evaluation is automated, done in real time and needs no user interaction. But ideally, such a quality assessment system would perceive and measure image or video impairments just like a human being [3]. The quality assessment is so important and is still an active and evolving research topic because it is a central issue in the design, implementation, and performance testing of all systems [4, 5]. Usually, the relevant literature and the related work present only a state of the art of metrics that are limited to a specific application domain. The major goal of this paper is to present a wider state of the art of the most used metrics in several application domains such as compression [6], restoration [7], etc. 
    In this paper, we review the basic concepts and methods in subjective and objective image/video quality assessment research and
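
    A minimal example of the kind of full-reference objective metric this survey covers is the mean squared error and the PSNR derived from it (illustrative only; the paper discusses many metrics beyond these two).

```python
import numpy as np

def psnr(reference, distorted, peak=255.0):
    """Peak signal-to-noise ratio in dB between two same-size images."""
    mse = np.mean((reference.astype(float) - distorted.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

ref = np.full((16, 16), 100.0)
noisy = ref + 5.0          # uniform error of 5 gray levels -> MSE = 25
value = psnr(ref, noisy)   # 10*log10(255^2 / 25) ≈ 34.15 dB
```

    Metrics like this are cheap and automated, which is exactly the appeal of objective assessment noted above, at the cost of only loosely tracking human perception.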

  6. A cross-platform survey of CT image quality and dose from routine abdomen protocols and a method to systematically standardize image quality

    NASA Astrophysics Data System (ADS)

    Favazza, Christopher P.; Duan, Xinhui; Zhang, Yi; Yu, Lifeng; Leng, Shuai; Kofler, James M.; Bruesewitz, Michael R.; McCollough, Cynthia H.

    2015-11-01

    Through this investigation we developed a methodology to evaluate and standardize CT image quality from routine abdomen protocols across different manufacturers and models. The influence of manufacturer-specific automated exposure control systems on image quality was directly assessed to standardize performance across a range of patient sizes. We evaluated 16 CT scanners across our health system, including Siemens, GE, and Toshiba models. Using each practice’s routine abdomen protocol, we measured spatial resolution, image noise, and scanner radiation output (CTDIvol). Axial and in-plane spatial resolutions were assessed through slice sensitivity profile (SSP) and modulation transfer function (MTF) measurements, respectively. Image noise and CTDIvol values were obtained for three different phantom sizes. SSP measurements demonstrated a bimodal distribution in slice widths: an average of 6.2 ± 0.2 mm using GE’s ‘Plus’ mode reconstruction setting and 5.0 ± 0.1 mm for all other scanners. MTF curves were similar for all scanners. Average spatial frequencies at 50%, 10%, and 2% MTF values were 3.24 ± 0.37, 6.20 ± 0.34, and 7.84 ± 0.70 lp cm⁻¹, respectively. For all phantom sizes, image noise and CTDIvol varied considerably: 6.5-13.3 HU (noise) and 4.8-13.3 mGy (CTDIvol) for the smallest phantom; 9.1-18.4 HU and 9.3-28.8 mGy for the medium phantom; and 7.8-23.4 HU and 16.0-48.1 mGy for the largest phantom. Using these measurements and benchmark SSP, MTF, and image noise targets, CT image quality can be standardized across a range of patient sizes.
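
    The reported "spatial frequency at 50%, 10%, and 2% MTF" figures can be read off a measured MTF curve by linear interpolation, as in this sketch. The Gaussian-shaped curve below is synthetic, not the survey's measured data.

```python
import numpy as np

def freq_at_mtf(freqs, mtf, level):
    """First spatial frequency at which a monotonically falling MTF hits `level`."""
    # np.interp needs an increasing x-axis, so interpolate on the
    # reversed (now increasing) MTF curve.
    return float(np.interp(level, mtf[::-1], freqs[::-1]))

freqs = np.linspace(0.0, 10.0, 501)   # spatial frequency axis, lp/cm
mtf = np.exp(-(freqs / 4.0) ** 2)     # synthetic falling MTF curve
f50 = freq_at_mtf(freqs, mtf, 0.50)   # ~ 4*sqrt(ln 2) ≈ 3.33 lp/cm
f10 = freq_at_mtf(freqs, mtf, 0.10)   # further out on the frequency axis
```

    Lower MTF thresholds always map to higher frequencies on a falling curve, which is why the 2% figure exceeds the 10% and 50% figures in the study.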

  7. Automated retinal image quality assessment on the UK Biobank dataset for epidemiological studies.

    PubMed

    Welikala, R A; Fraz, M M; Foster, P J; Whincup, P H; Rudnicka, A R; Owen, C G; Strachan, D P; Barman, S A

    2016-04-01

    Morphological changes in the retinal vascular network are associated with future risk of many systemic and vascular diseases. However, uncertainty over the presence and nature of some of these associations exists. Analysis of data from large population based studies will help to resolve these uncertainties. The QUARTZ (QUantitative Analysis of Retinal vessel Topology and siZe) retinal image analysis system allows automated processing of large numbers of retinal images. However, an image quality assessment module is needed to achieve full automation. In this paper, we propose such an algorithm, which uses the segmented vessel map to determine the suitability of retinal images for use in the creation of vessel morphometric data suitable for epidemiological studies. This includes an effective 3-dimensional feature set and support vector machine classification. A random subset of 800 retinal images from UK Biobank (a large prospective study of 500,000 middle aged adults; where 68,151 underwent retinal imaging) was used to examine the performance of the image quality algorithm. The algorithm achieved a sensitivity of 95.33% and a specificity of 91.13% for the detection of inadequate images. The strong performance of this image quality algorithm will make rapid automated analysis of vascular morphometry feasible on the entire UK Biobank dataset (and other large retinal datasets), with minimal operator involvement, and at low cost. PMID:26894596

  8. Image forgery detection by means of no-reference quality metrics

    NASA Astrophysics Data System (ADS)

    Battisti, F.; Carli, M.; Neri, A.

    2012-03-01

    In this paper a methodology for digital image forgery detection by means of an unconventional use of image quality assessment is presented. In particular, the presence of differences in quality degradations impairing the images is adopted to reveal the mixture of different source patches. The rationale behind this work is the hypothesis that any image may be affected by artifacts, visible or not, caused by the processing steps: acquisition (i.e., lens distortion, acquisition sensor imperfections, analog-to-digital conversion, single-sensor to color-pattern interpolation), processing (i.e., quantization, storing, JPEG compression, sharpening, deblurring, enhancement), and rendering (i.e., image decoding, color/size adjustment). These defects are generally spatially localized and their strength strictly depends on the content. For these reasons they can be considered as a fingerprint of each digital image. The proposed approach relies on a combination of image quality assessment systems. The adopted no-reference metric does not require any information about the original image, thus allowing an efficient and stand-alone blind system for image forgery detection. The experimental results show the effectiveness of the proposed scheme.

  9. Influence of partial k-space filling on the quality of magnetic resonance images*

    PubMed Central

    Jornada, Tiago da Silva; Murata, Camila Hitomi; Medeiros, Regina Bitelli

    2016-01-01

    Objective To study the influence that the scan percentage tool used in partial k-space acquisition has on the quality of images obtained with magnetic resonance imaging equipment. Materials and Methods A Philips 1.5 T magnetic resonance imaging scanner was used in order to obtain phantom images for quality control tests and images of the knee of an adult male. Results There were no significant variations in the uniformity and signal-to-noise ratios with the phantom images. However, analysis of the high-contrast spatial resolution revealed significant degradation when scan percentages of 70% and 85% were used in the acquisition of T1- and T2-weighted images, respectively. There was significant degradation when a scan percentage of 25% was used in T1- and T2-weighted in vivo images (p ≤ 0.01 for both). Conclusion The use of tools that limit the k-space is not recommended without knowledge of their effect on image quality. PMID:27403015
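
    The "scan percentage" effect studied above can be illustrated with a numpy sketch (not the scanner's implementation): only a central fraction of phase-encode lines is kept, the rest of k-space is zero-filled, and high-frequency detail is lost on reconstruction.

```python
import numpy as np

def partial_kspace_recon(image, scan_percentage):
    """Reconstruct after keeping only the central fraction of k-space rows."""
    k = np.fft.fftshift(np.fft.fft2(image))
    n = image.shape[0]
    keep = int(round(n * scan_percentage))
    lo = (n - keep) // 2
    partial = np.zeros_like(k)
    partial[lo:lo + keep, :] = k[lo:lo + keep, :]   # central rows only
    return np.abs(np.fft.ifft2(np.fft.ifftshift(partial)))

# High-contrast test object: alternating bright rows (finest detail).
obj = np.zeros((64, 64))
obj[::2, :] = 1.0
full = partial_kspace_recon(obj, 1.0)       # faithful reconstruction
quarter = partial_kspace_recon(obj, 0.25)   # fine row structure is lost
```

    With the full k-space the object is recovered exactly, while at 25% the alternating rows collapse toward a uniform gray: the high-contrast spatial resolution degradation the phantom tests detected.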

  10. Image quality evaluation of iterative CT reconstruction algorithms: a perspective from spatial domain noise texture measures

    NASA Astrophysics Data System (ADS)

    Pachon, Jan H.; Yadava, Girijesh; Pal, Debashish; Hsieh, Jiang

    2012-03-01

    Non-linear iterative reconstruction (IR) algorithms have shown promising improvements in image quality at reduced dose levels. However, IR images sometimes may be perceived as having different image noise texture than traditional filtered back projection (FBP) reconstruction. Standard linear-systems-based image quality evaluation metrics are limited in characterizing such textural differences and non-linear image-quality vs. dose trade-off behavior, hence limited in predicting the potential impact of such texture differences on diagnostic tasks. In an attempt to objectively characterize and measure dose-dependent image noise texture and statistical properties of IR and FBP images, we have investigated higher order moments and Haralick's gray-level co-occurrence matrix (GLCM) texture features on phantom images reconstructed by an iterative and a traditional FBP method. In this study, the first four central moments and multiple texture features from the Haralick GLCM in 4 directions at 6 different ROI sizes and four dose levels were computed. For resolution, noise and texture trade-off analysis, spatial frequency domain NPS and contrast-dependent MTF were also computed. Preliminary results of the study indicate that higher order moments, along with spatial domain measures of energy, contrast, correlation, homogeneity, and entropy consistently capture the textural differences between FBP and IR as dose changes. These metrics may be useful in describing the perceptual differences in randomness, coarseness, contrast, and smoothness of images reconstructed by non-linear algorithms.
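
    The spatial-domain measures mentioned above can be sketched in a few lines: the first four central moments of an ROI, and two Haralick features (contrast and homogeneity) from a gray-level co-occurrence matrix. The 8-level quantization and the single horizontal offset are simplifying assumptions; the study used 4 directions and multiple ROI sizes.

```python
import numpy as np

def central_moments(roi):
    """Mean plus the 2nd, 3rd, and 4th central moments of an ROI."""
    mu = roi.mean()
    return [mu] + [np.mean((roi - mu) ** k) for k in (2, 3, 4)]

def glcm_features(roi, levels=8):
    """Haralick contrast and homogeneity for a horizontal pixel offset."""
    # Quantize gray values to `levels` bins (max value maps to the top bin).
    q = np.floor(roi / roi.max() * (levels - 1e-9)).astype(int)
    glcm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[a, b] += 1
    p = glcm / glcm.sum()                       # co-occurrence probabilities
    i, j = np.indices((levels, levels))
    contrast = np.sum(p * (i - j) ** 2)
    homogeneity = np.sum(p / (1.0 + (i - j) ** 2))
    return contrast, homogeneity

rng = np.random.default_rng(1)
smooth = np.tile(np.linspace(0.0, 1.0, 32), (32, 1))  # smooth gradient ROI
noisy = rng.uniform(0.0, 1.0, (32, 32))               # rough random texture
c_smooth, _ = glcm_features(smooth)
c_noisy, _ = glcm_features(noisy)                     # much higher contrast
```

    The GLCM contrast of the random texture far exceeds that of the smooth gradient, which is how such features separate the "grainier" FBP texture from smoother IR output.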

  11. Quality assessment of remote sensing image fusion using feature-based fourth-order correlation coefficient

    NASA Astrophysics Data System (ADS)

    Ma, Dan; Liu, Jun; Chen, Kai; Li, Huali; Liu, Ping; Chen, Huijuan; Qian, Jing

    2016-04-01

    In remote sensing fusion, the spatial details of a panchromatic (PAN) image and the spectrum information of multispectral (MS) images are transferred into fused images according to the characteristics of the human visual system. Thus, a remote sensing image fusion quality assessment called the feature-based fourth-order correlation coefficient (FFOCC) is proposed. FFOCC builds on the feature-based correlation coefficient concept. Spatial features related to the spatial details of the PAN image and spectral features related to the spectrum information of the MS images are first extracted from the fused image. Then, the fourth-order correlation coefficient between the spatial and spectral features is calculated and treated as the assessment result. FFOCC was then compared with existing widely used indices, such as the Erreur Relative Globale Adimensionnelle de Synthèse and the quality-with-no-reference index. Results of the fusion and distortion experiments indicate that the FFOCC is consistent with subjective evaluation. FFOCC significantly outperforms the other indices in evaluating fusion images that are produced by different fusion methods and that are distorted in spatial and spectral features by blurring, adding noise, and changing intensity. All the findings indicate that the proposed method is an objective and effective quality assessment for remote sensing image fusion.
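
    The paper's exact FFOCC formula is not reproduced here. As an illustrative stand-in only, the sketch below correlates the squared deviations of two feature maps, one common way to form a fourth-order (rather than second-order) correlation between signals; the feature maps themselves are synthetic.

```python
import numpy as np

def fourth_order_corr(a, b):
    """Pearson correlation of the squared deviations of two feature maps."""
    da = (a - a.mean()) ** 2      # squared deviations -> 4th-order statistics
    db = (b - b.mean()) ** 2
    num = np.mean((da - da.mean()) * (db - db.mean()))
    return num / (da.std() * db.std())

rng = np.random.default_rng(2)
spatial = rng.normal(size=(64, 64))
spectral = spatial + 0.1 * rng.normal(size=(64, 64))   # closely related map
score_related = fourth_order_corr(spatial, spectral)   # near 1
score_unrelated = fourth_order_corr(spatial, rng.normal(size=(64, 64)))
```

    Related feature maps score near 1 while independent maps score near 0, the qualitative behavior any such fusion-quality correlation index relies on.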

  12. A comparison of defect size and film quality obtained from Film digitized image and digital image radiographs

    NASA Astrophysics Data System (ADS)

    Kamlangkeng, Poramate; Asa, Prateepasen; Mai, Noipitak

    2014-06-01

    Digital radiographic testing is a relatively new but accepted nondestructive examination technique, yet its performance and limitations compared with the older film technique are still not widely known. This paper studies the accuracy of defect size measurement and the image quality obtained from film and digital radiograph techniques by testing specimens and a sample defect of known size. Initially, one specimen was built with three types of internal defect: longitudinal cracking, lack of fusion, and porosity. The known-size sample defect was machined to various geometrical sizes so that the defect sizes measured in both film and digital images could be compared with the real sizes. Image quality was compared by considering the smallest detectable wire and the three defect images, using an Image Quality Indicator (IQI) of wire type 10/16 FE per BS EN 462-1:1994. The radiographic films were produced by X-ray and gamma ray using Kodak AA400 film of size 3.5x8 inches, while the digital images were produced with a Fuji type ST-VI image plate at 100 micrometer resolution. During the tests, a GE model MF3 radiation source was used. The applied energy was varied from 120 to 220 kV and the current from 1.2 to 3.0 mA. The intensity of the Iridium-192 gamma ray source was in the range of 24-25 Curie. Under these conditions, the results showed that the deviation of the measured defect size from the real size was lower for the digital image radiographs than for the digitized film, whereas the quality of the digitized-film radiographs was higher in comparison.

  13. Occupational and patient exposure as well as image quality for full spine examinations with the EOS imaging system

    SciTech Connect

    Damet, J. Fournier, P.; Monnin, P.; Sans-Merce, M.; Verdun, F. R.; Baechler, S.; Ceroni, D.; Zand, T.

    2014-06-15

    Purpose: EOS (EOS imaging S.A, Paris, France) is an x-ray imaging system that uses slot-scanning technology in order to optimize the trade-off between image quality and dose. The goal of this study was to characterize the EOS system in terms of occupational exposure, organ doses to patients, and image quality for full spine examinations. Methods: Occupational exposure was determined by measuring the ambient dose equivalents in the radiological room during a standard full spine examination. Patient dosimetry was performed using anthropomorphic phantoms representing an adolescent and a five-year-old child. The organ doses were measured with thermoluminescent detectors and then used to calculate effective doses. Patient exposure with EOS was then compared to dose levels reported for conventional radiological systems. Image quality was assessed in terms of spatial resolution and different noise contributions to evaluate the detector performance of the system. The spatial-frequency signal transfer efficiency of the imaging system was quantified by the detective quantum efficiency (DQE). Results: The use of a protective apron is recommended when the medical staff or parents have to stand near the cubicle in the radiological room. The estimated effective dose to patients undergoing a full spine examination with the EOS system was 290 μSv for an adult and 200 μSv for a child. The MTF and NPS are nonisotropic, with higher values in the scanning direction; they are also energy-dependent but independent of scanning speed. The system was shown to be quantum-limited, with a maximum DQE of 13%. The relevance of the DQE for slot-scanning systems is also addressed. Conclusions: In summary, the estimated effective dose was 290 μSv for an adult, and the image quality remains comparable to conventional systems.
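
    A common form of the DQE relation used in detector characterization combines the MTF and NPS as DQE(f) = d² · MTF(f)² / (q · NPS(f)), where d is the mean signal and q the photon fluence. The sketch below uses this relation with a synthetic Gaussian MTF, flat NPS, and illustrative numeric values; none of these are the EOS measurements.

```python
import numpy as np

def dqe(mtf, nps, signal, fluence):
    """Frequency-dependent detective quantum efficiency (one common form)."""
    return (signal ** 2) * mtf ** 2 / (fluence * nps)

freqs = np.linspace(0.0, 5.0, 101)      # spatial frequency, cycles/mm
mtf = np.exp(-(freqs / 2.0) ** 2)       # synthetic falling MTF
nps = np.full_like(freqs, 4.0)          # flat (white) noise power spectrum
curve = dqe(mtf, nps, signal=1.0, fluence=2.0)
peak_dqe = float(curve.max())           # maximum at zero frequency here
```

    With a flat NPS the DQE simply follows MTF², peaking at zero frequency, which is why a single "maximum DQE" figure (13% for EOS) is a meaningful summary number.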

  14. Application research on enhancing near-infrared micro-imaging quality by 2nd derivative

    NASA Astrophysics Data System (ADS)

    Wang, Dong; Ma, Zhi-hong; Zhao, Liu; Wang, Bei-hong; Han, Ping; Pan, Li-gang; Wang, Ji-hua

    2013-08-01

    Near-infrared micro-imaging provides not only a sample's spatial distribution information but also the spectroscopic information of each pixel. This study used an artificial sample with a given distribution of wheat flour and formaldehyde sodium sulfoxylate as an example to investigate data processing methods for enhancing the quality of near-infrared micro-imaging. After the near-infrared spectroscopic features of wheat flour and formaldehyde sodium sulfoxylate were studied, compare-correlation imaging and 2nd-derivative imaging were applied to the processing of the near-infrared micro-image of the artificial sample. Furthermore, the two methods were combined to obtain 2nd-derivative compare-correlation imaging. The results indicated that the difference between the correlation coefficients of the two substances (wheat flour and formaldehyde sodium sulfoxylate) against the reference spectrum increased from 0.001 in the compare-correlation image to 0.796 in the 2nd-derivative compare-correlation image, which efficiently enhances the imaging quality. This study provides, to some extent, an important reference for near-infrared micro-imaging research on agricultural products and foods.
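
    2nd-derivative compare-correlation imaging as described above can be sketched as follows: each pixel's NIR spectrum and a reference spectrum are differentiated twice and then correlated. The synthetic sinusoidal spectra and the tiny 2x2 cube are assumptions for illustration only.

```python
import numpy as np

def second_derivative(spectrum):
    """Discrete 2nd derivative along the wavelength axis."""
    return np.diff(spectrum, n=2)

def compare_correlation_map(cube, reference):
    """Correlate each pixel's 2nd-derivative spectrum with the reference's."""
    ref = second_derivative(reference)
    ref = (ref - ref.mean()) / ref.std()
    rows, cols, _ = cube.shape
    out = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            d = second_derivative(cube[r, c])
            d = (d - d.mean()) / d.std()
            out[r, c] = np.mean(ref * d)    # Pearson correlation coefficient
    return out

wav = np.linspace(0.0, 1.0, 50)
target = np.sin(8 * np.pi * wav)            # "analyte" spectral shape
other = np.cos(3 * np.pi * wav)             # background spectral shape
cube = np.empty((2, 2, 50))
cube[0, 0] = target; cube[0, 1] = target    # top row: analyte pixels
cube[1, 0] = other; cube[1, 1] = other      # bottom row: background pixels
cmap = compare_correlation_map(cube, target)
```

    Pixels whose spectra match the reference score near 1 while background pixels score much lower, so the correlation map separates the two substances, and the 2nd derivative sharpens that separation by suppressing baseline offsets.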

  15. Investigation into the impact of tone reproduction on the perceived image quality of fine art reproductions

    NASA Astrophysics Data System (ADS)

    Farnand, Susan; Jiang, Jun; Frey, Franziska

    2012-01-01

    A project, supported by the Andrew W. Mellon Foundation, evaluating current practices in fine art image reproduction, determining the image quality generally achievable, and establishing a suggested framework for art image interchange was recently completed. (Information regarding the Mellon project and related work may be found at www.artimaging.rit.edu.) To determine the image quality currently being achieved, experimentation was conducted in which a set of objective targets and pieces of artwork in various media were imaged by participating museums and other cultural heritage institutions. Prints and images for display made from the delivered image files at the Rochester Institute of Technology were used as stimuli in psychometric testing in which observers were asked to evaluate the prints as reproductions of the original artwork and as stand-alone images. The results indicated that there were limited differences between assessments made with and without the original present for printed reproductions. For displayed images, the differences were more significant, with lower contrast images being ranked lower and higher contrast images generally ranked higher when the original was not present. This was true for experiments conducted both in a dimly lit laboratory and via the web, indicating that more than viewing conditions were driving this shift.

  16. Development and measurement of the goodness of test images for visual print quality evaluation

    NASA Astrophysics Data System (ADS)

    Halonen, Raisa; Nuutinen, Mikko; Asikainen, Reijo; Oittinen, Pirkko

    2010-01-01

    The aim of the study was to develop a test image for print quality evaluation to improve the current state of the art in testing the quality of digital printing. The image presented by the authors in EI09 portrayed a breakfast scene, the content of which could roughly be divided in four object categories: a woman, a table with objects, a landscape picture and a gray wall. The image was considered to have four main areas of improvement: the busyness of the image, the control of the color world, the salience of the object categories, and the naturalness of the event and the setting. To improve the first image, another test image was developed. Whereas several aspects were improved, the shortcomings of the new image found by visual testing and self-report were in the same four areas. To combine the insights of the two test images and to avoid their pitfalls, a third image was developed. The goodness of the three test images was measured in subjective tests. The third test image was found to address efficiently three of the four improvement areas, only the salience of the objects left a bit to be desired.

  17. Damage and quality assessment in wheat by NIR hyperspectral imaging

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Fusarium head blight is a fungal disease that affects the world's small grains, such as wheat and barley. Attacking the spikelets during development, the fungus causes a reduction of yield and grain of poorer processing quality. It also is a health concern because of the secondary metabolite, deoxyn...