Science.gov

Sample records for acceptable image quality

  1. Consumer acceptance and carcass quality

    Technology Transfer Automated Retrieval System (TEKTRAN)

    In commodity production systems, beef quality is designated based on USDA grading criteria, which take into account carcass marbling, maturity, and yield. Producers are rewarded economically for a beef quality grade (QG) of Choice versus Select, although the price difference (spread) varies seasonal...

  2. AIR CLEANING FOR ACCEPTABLE INDOOR AIR QUALITY

    EPA Science Inventory

    The paper discusses air cleaning for acceptable indoor air quality. Air cleaning has performed an important role in heating, ventilation, and air-conditioning systems for many years. Traditionally, general ventilation air-filtration equipment has been used to protect cooling coils ...

  3. An image assessment study of image acceptability of the Galileo low gain antenna mission

    NASA Technical Reports Server (NTRS)

    Chuang, S. L.; Haines, R. F.; Grant, T.; Gold, Yaron; Cheung, Kar-Ming

    1994-01-01

    This paper describes a study conducted by NASA Ames Research Center (ARC) in collaboration with the Jet Propulsion Laboratory (JPL), Pasadena, California, on image acceptability for the Galileo Low Gain Antenna mission. The primary objective of the study was to determine the impact of the Integer Cosine Transform (ICT) compression algorithm on Galilean images of atmospheric bodies, moons, asteroids, and Jupiter's rings. Fifteen volunteer subjects representing twelve institutions involved with the Galileo Solid State Imaging (SSI) experiment participated. Four experiment-specific quantization tables (q-tables) and various quantization stepsizes (q-factors) were used to achieve different compression ratios, and the acceptability of the compressed monochromatic astronomical images was evaluated by Galileo SSI mission scientists. Fourteen different images in seven image groups were studied. Each observer viewed two versions of the same image side by side on a high-resolution monitor, each compressed with a different quantization stepsize. Observers were asked to select the image with the higher overall quality for supporting their visual evaluations of image content, and then rated both images on a one-to-five scale of judged usefulness. Up to four pre-selected types of images were presented with and without noise to each subject, based on a previously administered survey of their image preferences. The results showed that: (1) acceptable compression ratios vary widely with the type of image; (2) noise greatly reduces image acceptability and acceptable compression ratios; and (3) atmospheric images of Jupiter tolerate compression ratios 4 to 5 times higher than some clear-surface satellite images.
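
    The stepsize-versus-fidelity trade-off at the heart of this study can be sketched with a toy quantizer (an illustration only; the table and factor values below are invented stand-ins, not Galileo's actual ICT tables):

```python
import numpy as np

def quantize_block(coeffs, qtable, qfactor):
    """Quantize a block of transform coefficients.

    A larger qfactor means a coarser stepsize, which zeroes out more
    coefficients (higher compression ratio) at the cost of fidelity.
    """
    step = np.asarray(qtable, dtype=float) * qfactor
    return np.round(np.asarray(coeffs, dtype=float) / step) * step

# Toy 2x2 block of transform coefficients and a flat quantization table.
coeffs = np.array([[100.0, 10.0], [5.0, 1.0]])
qtable = np.ones((2, 2))
fine = quantize_block(coeffs, qtable, qfactor=1)    # mild quantization
coarse = quantize_block(coeffs, qtable, qfactor=8)  # aggressive quantization
```

    With the coarser stepsize, small coefficients collapse to zero, which is what drives the higher compression ratios (and the acceptability losses) the study measured.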

  4. Social image quality

    NASA Astrophysics Data System (ADS)

    Qiu, Guoping; Kheiri, Ahmed

    2011-01-01

    Current subjective image quality assessments have been developed in laboratory environments, under controlled conditions, and depend on the participation of limited numbers of observers. In this research, with the help of Web 2.0 and social media technology, a new method for building a subjective image quality metric has been developed in which the observers are Internet users. A website with a simple user interface was constructed that enables Internet users, from anywhere at any time, to vote for the better-quality version in a pair of versions of the same image. Users' votes are recorded and used to rank the images according to their perceived visual quality. We have developed three rank-aggregation algorithms to process the recorded pair-comparison data: the first uses a naive approach, the second employs a Condorcet method, and the third uses Dykstra's extension of the Bradley-Terry method. The website has been collecting data for about three months and had accumulated over 10,000 votes at the time of writing. Results show that the Internet and its allied technologies, such as crowdsourcing, offer a promising new paradigm for image and video quality assessment in which hundreds of thousands of Internet users can contribute to building more robust image quality metrics. We have made Internet-user-generated social image quality (SIQ) data for a public image database available online (http://www.hdri.cs.nott.ac.uk/siq/) to provide the image quality research community with a new source of ground-truth data. The website continues to collect votes; it will include more public image databases and will also be extended to videos to collect social video quality (SVQ) data. All data will be made publicly available on the website in due course.
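
    The Bradley-Terry style of rank aggregation this abstract mentions can be sketched with the standard MM fit over a pairwise win-count matrix (a minimal illustration with invented vote counts, not the authors' implementation):

```python
import numpy as np

def bradley_terry(wins, n_iter=200):
    """Fit Bradley-Terry strengths from a pairwise win-count matrix.

    wins[i][j] = number of votes preferring image i over image j.
    Uses the standard MM (minorization-maximization) update; returns
    strengths normalized to sum to 1 (higher = better perceived quality).
    """
    wins = np.asarray(wins, dtype=float)
    n = wins.shape[0]
    p = np.ones(n) / n
    for _ in range(n_iter):
        new_p = np.empty(n)
        for i in range(n):
            total_wins = wins[i].sum()
            denom = sum((wins[i, j] + wins[j, i]) / (p[i] + p[j])
                        for j in range(n) if j != i)
            new_p[i] = total_wins / denom if denom > 0 else p[i]
        p = new_p / new_p.sum()
    return p

# Invented vote counts: image 0 is usually preferred over 1, both over 2.
wins = [[0, 8, 9],
        [2, 0, 7],
        [1, 3, 0]]
scores = bradley_terry(wins)
ranking = np.argsort(scores)[::-1]  # indices from best to worst
```

    The fitted strengths give a full ranking even when every pair has not been compared equally often, which is what makes this family of models attractive for crowdsourced votes.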

  5. ASHRAE STANDARD 62: VENTILATION FOR ACCEPTABLE INDOOR AIR QUALITY

    EPA Science Inventory

    The paper highlights some of the key features of the design procedures in ASHRAE Standard 62 (Ventilation for Acceptable Indoor Air Quality) and summarizes the status of the related review process. The Standard contains design procedures and guidelines for ventilation rates in "al...

  6. Online Support Service Quality, Online Learning Acceptance, and Student Satisfaction

    ERIC Educational Resources Information Center

    Lee, Jung-Wan

    2010-01-01

    This paper examines potential differences between Korean and American students in terms of their perception levels regarding online education support service quality, online learning acceptance, and satisfaction. Eight hundred and seventy-two samples, which were collected from students in online classes in the United States and Korea, were…

  7. Light on Body Image Treatment: Acceptance Through Mindfulness

    ERIC Educational Resources Information Center

    Stewart, Tiffany M.

    2004-01-01

    The treatment of body image has to be multifaceted and should be directed toward the treatment of the whole individual - body, mind, and spirit - with an ultimate culmination of acceptance and compassion for the self. This article presents information on a mindful approach to the treatment of body image as it pertains to concerns with body size…

  8. Evaluation of image quality

    NASA Technical Reports Server (NTRS)

    Pavel, M.

    1993-01-01

    This presentation outlines in viewgraph format a general approach to the evaluation of display system quality for aviation applications. This approach is based on the assumption that it is possible to develop a model of the display which captures most of the significant properties of the display. The display characteristics should include spatial and temporal resolution, intensity quantizing effects, spatial sampling, delays, etc. The model must be sufficiently well specified to permit generation of stimuli that simulate the output of the display system. The first step in the evaluation of display quality is an analysis of the tasks to be performed using the display. Thus, for example, if a display is used by a pilot during a final approach, the aesthetic aspects of the display may be less relevant than its dynamic characteristics. The opposite task requirements may apply to imaging systems used for displaying navigation charts. Thus, display quality is defined with regard to one or more tasks. Given a set of relevant tasks, there are many ways to approach display evaluation. The range of evaluation approaches includes visual inspection, rapid evaluation, part-task simulation, and full mission simulation. The work described is focused on two complementary approaches to rapid evaluation. The first approach is based on a model of the human visual system. A model of the human visual system is used to predict the performance of the selected tasks. The model-based evaluation approach permits very rapid and inexpensive evaluation of various design decisions. The second rapid evaluation approach employs specifically designed critical tests that embody many important characteristics of actual tasks. These are used in situations where a validated model is not available. These rapid evaluation tests are being implemented in a workstation environment.

  9. Toward clinically relevant standardization of image quality.

    PubMed

    Samei, Ehsan; Rowberg, Alan; Avraham, Ellie; Cornelius, Craig

    2004-12-01

    In recent years, notable progress has been made on standardization of medical image presentation through the definition and implementation of the Digital Imaging and Communications in Medicine (DICOM) Grayscale Standard Display Function (GSDF). In parallel, the American Association of Physicists in Medicine (AAPM) Task Group 18 has provided much-needed guidelines and tools for visual and quantitative assessment of medical display quality. In spite of these advances, however, there are still notable gaps in the effectiveness of DICOM GSDF in assuring consistent, high-quality display of medical images. In addition, the degree of correlation between display technical data and the diagnostic usability and performance of displays remains unclear. This article proposes three specific steps that DICOM, AAPM, and ACR may collectively take to bridge the gap between technical performance and clinical use: (1) DICOM does not provide means and acceptance criteria to evaluate the conformance of a display device to GSDF or to address other image quality characteristics. DICOM can expand beyond luminance response, extending the measurable, quantifiable elements of TG18 such as reflection and resolution. (2) In a large picture archiving and communication system (PACS) installation, it is critical to continually track the appropriate use and performance of multiple display devices. DICOM may help with this task by adding a Device Service Class to the standard to provide for communication and control of image quality parameters between applications and devices. (3) The question of the clinical significance of image quality metrics has rarely been addressed by prior efforts. In cooperation with AAPM, the American College of Radiology (ACR), and the Society for Computer Applications in Radiology (SCAR), DICOM may help to initiate research that will determine the clinical consequence of variations in image quality metrics (e.g., GSDF conformance) and to define what constitutes image quality from a

  10. Scanning technology selection impacts acceptability and usefulness of image-rich content*†

    PubMed Central

    Alpi, Kristine M.; Brown, James C.; Neel, Jennifer A.; Grindem, Carol B.; Linder, Keith E.; Harper, James B.

    2016-01-01

    Objective Clinical and research usefulness of articles can depend on image quality. This study addressed whether scans of figures in black and white (B&W), grayscale, or color, or portable document format (PDF) to tagged image file format (TIFF) conversions, as provided by interlibrary loan or document delivery, were viewed as acceptable or useful by radiologists or pathologists. Methods Residency coordinators selected eighteen figures from studies in radiology, clinical pathology, and anatomic pathology journals. With original PDFs as controls, each figure was prepared in three or four experimental conditions: PDF conversion to TIFF, and scans from print in B&W, grayscale, and color. Twelve independent observers indicated whether they could identify the features and whether the image quality was acceptable. They also ranked all the experimental conditions of each figure in terms of usefulness. Results Of 982 assessments of 87 anatomic pathology, 83 clinical pathology, and 77 radiology images, 471 (48%) were unidentifiable. Unidentifiability of originals (4%) and conversions (10%) was low. For scans, unidentifiability ranged from 53% for color, to 74% for grayscale, to 97% for B&W. Of 987 responses about acceptability (n=405), 41% were rated unacceptable: 97% of B&W, 66% of grayscale, 41% of color, and 1% of conversions. The hypothesized order (original, conversion, color, grayscale, B&W) matched 67% of rankings (n=215). Conclusions PDF to TIFF conversion provided acceptable content. Color images are rarely useful in grayscale (12%) or B&W (less than 1%). Acceptability of grayscale scans of noncolor originals was 52%. Digital originals are needed for most images. Print images in color or grayscale should be scanned using those modalities. PMID:26807048

  11. No training blind image quality assessment

    NASA Astrophysics Data System (ADS)

    Chu, Ying; Mou, Xuanqin; Ji, Zhen

    2014-03-01

    State-of-the-art blind image quality assessment (IQA) methods generally extract perceptual features from training images and feed them into a support vector machine (SVM) to learn a regression model, which is then used to predict the quality scores of testing images. However, these methods require complicated training and learning, and the evaluation results are sensitive to image content and learning strategy. In this paper, two novel blind IQA metrics that require no training or learning are proposed. The new methods extract perceptual features, i.e., the shape consistency of conditional histograms, from the joint histograms of neighboring divisive normalization transform coefficients of distorted images, and then compare the length attribute of the extracted features with that of the reference and degraded images in the LIVE database. In the first method, a cluster center is found in the feature-attribute space of the natural reference images, and the distance between the feature attribute of the distorted image and the cluster center is adopted as the quality label. The second method uses the feature attributes and subjective scores of all the images in the LIVE database to construct a dictionary, and the final quality score is calculated by interpolating the subjective scores of nearby words in the dictionary. Unlike traditional SVM-based blind IQA methods, the proposed metrics have explicit expressions that reflect the relationship between the perceptual features and image quality well. Experimental results on publicly available databases such as LIVE, CSIQ, and TID2008 show the effectiveness of the proposed methods, and their performance is fairly acceptable.
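
    The first method's core idea, using distance to a cluster center of natural-image features as a quality label, can be sketched as follows (the 2-D feature vectors here are toy stand-ins for the conditional-histogram shape attributes the paper extracts):

```python
import numpy as np

def centroid_quality_label(pristine_feats, test_feat):
    """Training-free quality label: the distance from a test image's
    feature vector to the centroid of pristine-image features.

    A smaller distance means statistics closer to natural images,
    i.e. higher predicted quality.
    """
    center = np.mean(np.asarray(pristine_feats, dtype=float), axis=0)
    return float(np.linalg.norm(np.asarray(test_feat, dtype=float) - center))

# Pristine images cluster near (1.0, 1.0) in this toy feature space.
pristine = [(0.9, 1.1), (1.0, 1.0), (1.1, 0.9)]
mild = centroid_quality_label(pristine, (1.2, 1.0))    # slight distortion
severe = centroid_quality_label(pristine, (3.0, 3.0))  # heavy distortion
```

    Because the label is an explicit distance rather than an SVM regression output, the relationship between feature deviation and predicted quality stays directly interpretable.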

  12. Determinants of Taste Preference and Acceptability: Quality vs. Hedonics

    PubMed Central

    Loney, Gregory C.; Blonde, Ginger D.; Eckel, Lisa A.; Spector, Alan C.

    2012-01-01

    Several methods exist for reliably determining the motivational valence of a taste stimulus in animals, but few to determine its perceptual quality independent of its apparent affective properties. Individual differences in taste preference and acceptability could result from variance in the perceptual qualities of the stimulus leading to different hedonic evaluations. Alternatively, taste perception might be identical across subjects whereas processing of the sensory signals in reward circuits could differ. Utilizing an operant-based taste cue discrimination/generalization task involving a gustometer, we trained male Long-Evans rats to report the degree to which a test stimulus resembled the taste quality of either sucrose or quinine irrespective of its intensity. The rats, grouped by a characteristic bimodal phenotypic difference in their preference for sucralose, treated this artificial sweetener as qualitatively different with the sucralose-preferring rats finding the stimulus much more perceptually similar to sucrose, relative to sucralose-avoiding rats. Although the possibility that stimulus palatability may have served as a discriminative cue cannot entirely be ruled out, the profile of results suggested otherwise. Subsequent brief-access licking tests revealed that affective licking responses of the same sucralose-avoiding and -preferring rats differed across concentration in a manner roughly similar to that found in the stimulus generalization task. Thus, the perceived taste quality of sucralose alone may be sufficient to drive the observed behavioral avoidance of the compound. By virtue of its potential ability to dissociate the sensory and motivational consequences of a given experimental manipulation on taste-related behavior, this approach could be interpretively valuable. PMID:22815522

  13. Determinants of taste preference and acceptability: quality versus hedonics.

    PubMed

    Loney, Gregory C; Blonde, Ginger D; Eckel, Lisa A; Spector, Alan C

    2012-07-18

    Several methods exist for reliably determining the motivational valence of a taste stimulus in animals, but few to determine its perceptual quality independent of its apparent affective properties. Individual differences in taste preference and acceptability could result from variance in the perceptual qualities of the stimulus leading to different hedonic evaluations. Alternatively, taste perception might be identical across subjects, but the processing of the sensory signals in reward circuits could differ. Using an operant-based taste cue discrimination/generalization task involving a gustometer, we trained male Long-Evans rats to report the degree to which a test stimulus resembled the taste quality of either sucrose or quinine regardless of its intensity. The rats, grouped by a characteristic bimodal phenotypic difference in their preference for sucralose, treated this artificial sweetener as qualitatively different-compared to sucralose-avoiding rats, the sucralose-preferring rats found the stimulus much more perceptually similar to sucrose. Although the possibility that stimulus palatability may have served as a discriminative cue cannot entirely be ruled out, the profile of results suggests otherwise. Subsequent brief-access licking tests revealed that affective licking responses of the same sucralose-avoiding and -preferring rats differed across concentration in a manner approximately similar to that found in the stimulus generalization task. Thus, the perceived taste quality of sucralose alone may be sufficient to drive the observed behavioral avoidance of the compound. By virtue of its potential ability to dissociate the sensory and motivational consequences of a given experimental manipulation on taste-related behavior, this approach could be interpretively valuable. PMID:22815522

  14. Image Enhancement, Image Quality, and Noise

    NASA Technical Reports Server (NTRS)

    Rahman, Zia-ur; Jobson, Daniel J.; Woodell, Glenn A.; Hines, Glenn D.

    2005-01-01

    The Multiscale Retinex With Color Restoration (MSRCR) is a non-linear image enhancement algorithm that provides simultaneous dynamic range compression, color constancy, and rendition. Its overall effect is to brighten areas of poor contrast/lightness, but not at the expense of saturating areas of good contrast/brightness. The downside is that, given the poor signal-to-noise ratio most image acquisition devices have in dark regions, noise can also be greatly enhanced, affecting overall image quality. In this paper, we discuss the impact of the MSRCR on the overall quality of an enhanced image as a function of the strength of shadows in the image, and as a function of the root-mean-square (RMS) signal-to-noise ratio (SNR) of the image.
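
    The core retinex operation the MSRCR builds on, subtracting a log-surround from the log-image at several scales, can be sketched as follows (a simplified single-channel version that uses a box blur in place of the Gaussian surround and omits the color-restoration step):

```python
import numpy as np

def box_blur(img, radius):
    """Separable box blur, a cheap stand-in for the Gaussian surround."""
    k = np.ones(2 * radius + 1) / (2 * radius + 1)
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)

def multiscale_retinex(img, radii=(2, 8, 32), eps=1.0):
    """Average of log(image) - log(surround) over several surround scales.

    img must be larger than the biggest blur kernel (2*max(radii)+1).
    Dark regions get lifted relative to their surround; flat regions map
    to ~0 (away from image borders).
    """
    img = np.asarray(img, dtype=float)
    out = np.zeros_like(img)
    for r in radii:
        out += np.log(img + eps) - np.log(box_blur(img, r) + eps)
    return out / len(radii)
```

    The log-ratio form is also where the noise problem described above comes from: in dark regions the ratio amplifies small pixel fluctuations along with the signal.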

  15. Retinal Image Quality During Accommodation

    PubMed Central

    López-Gil, N.; Martin, J.; Liu, T.; Bradley, A.; Díaz-Muñoz, D.; Thibos, L.

    2013-01-01

    Purpose We asked whether retinal image quality is maximal during accommodation, or sub-optimal due to accommodative error, when subjects perform an acuity task. Methods Subjects viewed a monochromatic (552 nm), high-contrast letter target placed at various viewing distances. Wavefront aberrations of the accommodating eye were measured near the endpoint of an acuity staircase paradigm. Refractive state, defined as the optimum target vergence for maximizing retinal image quality, was computed by through-focus wavefront analysis to find the power of the virtual correcting lens that maximizes the visual Strehl ratio. Results Despite changes in ocular aberrations and pupil size during binocular viewing, retinal image quality and visual acuity typically remain high for all target vergences. When accommodative errors lead to sub-optimal retinal image quality, acuity and measured image quality both decline. However, the effects of accommodative errors on visual acuity are mitigated by the pupillary constriction associated with accommodation and binocular convergence, and by binocular summation of dissimilar retinal image blur. Under monocular viewing conditions some subjects displayed significant accommodative lag that reduced visual performance, an effect that was exacerbated by pharmacological dilation of the pupil. Conclusions Spurious measurement of accommodative error can be avoided when the image quality metric used to determine refractive state is compatible with the focusing criteria used by the visual system to control accommodation. Real focusing errors of the accommodating eye do not necessarily produce a reliably measurable loss of image quality or a clinically significant loss of visual performance, probably because of increased depth-of-focus due to pupil constriction. When retinal image quality is close to the maximum achievable (given the eye's higher-order aberrations), acuity is also near maximum. 
A combination of accommodative lag, reduced image quality, and reduced

  16. Data Quality Objectives for WTP Feed Acceptance Criteria - 12043

    SciTech Connect

    Arakali, Aruna V.; Benson, Peter A.; Duncan, Garth; Johnston, Jill C.; Lane, Thomas A.; Matis, George; Olson, John W.; Banning, Davey L.; Greer, Daniel A.; Seidel, Cary M.; Thien, Michael G.

    2012-07-01

    The Hanford Tank Waste Treatment and Immobilization Plant (WTP) is under construction for the U.S. Department of Energy by Bechtel National, Inc. and subcontractor URS Corporation (contract no. DE-AC27-01RV14136). When completed, the plant will be the world's largest nuclear waste treatment facility. Bechtel and URS are tasked with designing, constructing, commissioning, and transitioning the plant to the long-term operating contractor to process the legacy wastes stored in underground tanks (from nuclear weapons production between the 1940s and the 1980s). Approximately 56 million gallons of radioactive waste is currently stored in these tanks at the Hanford Site in southeastern Washington. Three major WTP facilities are being constructed for processing the tank waste feed. The Pretreatment (PT) facility receives feed, which is separated into a low activity waste (LAW) fraction and a high level waste (HLW) fraction. These fractions are transferred to the appropriate (HLW or LAW) facility, combined with glass-former material, and sent to high-temperature melters to form the glass product. In addition to PT, HLW, and LAW, other WTP facilities include the Laboratory (LAB) for analytical services and the Balance of Facilities (BOF) for plant maintenance, support, and utility services. The transfer of staged feed from the waste storage tanks and its acceptance in WTP receipt vessels require data for waste acceptance criteria (WAC) parameters from analysis of feed samples. Data Quality Objectives (DQO) development was a joint team effort between WTP and Tank Operations Contractor (TOC) representatives. The focus of this DQO effort was to review WAC parameters and develop data quality requirements, the results of which will determine whether or not staged feed can be transferred from the TOC to WTP receipt vessels. The approach involved systematic planning for data collection consistent with EPA guidance for the seven-step DQO process.

  17. Quality assessment for hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Chen, Yuheng; Chen, Xinhua; Zhou, Jiankang; Shen, Weimin

    2014-11-01

    Image quality assessment is an essential evaluation approach for many applications. Multi- and hyperspectral imaging involves more evaluation criteria than grayscale or RGB imaging, so its quality assessment must cover a wider range of factors. This paper presents an integrated spectral imaging quality assessment project in which spectral, radiometric, and spatial statistical behavior is jointly evaluated for three hyperspectral imagers. The spectral response function is derived from discrete-illumination images, and spectral performance is deduced from its FWHM and spectral excursion values. The radiometric response of the different spectral channels, under both on-ground and airborne imaging conditions, is judged by SNR computation based on local RMS extraction and statistics. The spatial response of the instruments is evaluated by MTF computation using the slanted-edge analysis method. This systematic work in hyperspectral imaging quality assessment, carried out with the help of several collaborating institutions, is significant for the development of on-ground and in-orbit instrument performance evaluation techniques and also serves as a reference for index demonstration and design optimization in instrument development.
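
    The SNR-by-local-RMS idea mentioned in this abstract can be sketched as follows (one simple interpretation, not necessarily the authors' exact procedure: local RMS deviations over blocks serve as a noise proxy, and the median suppresses blocks that contain real scene structure):

```python
import numpy as np

def local_rms_snr(img, block=8):
    """SNR estimate via local RMS extraction and statistics.

    Signal: the global image mean. Noise proxy: the median of the RMS
    deviations computed over non-overlapping block x block tiles.
    """
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    rms = [
        np.sqrt(np.mean((img[i:i + block, j:j + block]
                         - img[i:i + block, j:j + block].mean()) ** 2))
        for i in range(0, h - block + 1, block)
        for j in range(0, w - block + 1, block)
    ]
    noise = np.median(rms)
    return float(img.mean() / noise) if noise > 0 else float("inf")
```

    A channel with strong radiometric response and weak fluctuations yields a high ratio; comparing this value across spectral channels is the kind of per-channel judgment the abstract describes.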

  18. Foveated wavelet image quality index

    NASA Astrophysics Data System (ADS)

    Wang, Zhou; Bovik, Alan C.; Lu, Ligang; Kouloheris, Jack L.

    2001-12-01

    The human visual system (HVS) is highly non-uniform in sampling, coding, processing and understanding. The spatial resolution of the HVS is highest around the point of fixation (foveation point) and decreases rapidly with increasing eccentricity. Currently, most image quality measurement methods are designed for uniform resolution images. These methods do not correlate well with the perceived foveated image quality. Wavelet analysis delivers a convenient way to simultaneously examine localized spatial as well as frequency information. We developed a new image quality metric called foveated wavelet image quality index (FWQI) in the wavelet transform domain. FWQI considers multiple factors of the HVS, including the spatial variance of the contrast sensitivity function, the spatial variance of the local visual cut-off frequency, the variance of human visual sensitivity in different wavelet subbands, and the influence of the viewing distance on the display resolution and the HVS features. FWQI can be employed for foveated region of interest (ROI) image coding and quality enhancement. We show its effectiveness by using it as a guide for optimal bit assignment of an embedded foveated image coding system. The coding system demonstrates very good coding performance and scalability in terms of foveated objective as well as subjective quality measurement.
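
    The eccentricity-dependent resolution falloff that motivates foveated metrics can be illustrated with a simple weighting function (the half-resolution constant `e2` below is an assumed round value for illustration, not a parameter fitted in this paper):

```python
def foveal_weight(eccentricity_deg, e2=2.3):
    """Relative spatial-resolution weight at a given retinal eccentricity.

    Weight is 1.0 at the fovea (0 degrees) and falls off roughly as
    e2 / (e2 + e), a common simple model of the decline in visual
    resolution with eccentricity.
    """
    return e2 / (e2 + eccentricity_deg)

w_fovea = foveal_weight(0.0)   # full resolution at the fixation point
w_half = foveal_weight(2.3)    # resolution halved at e2 degrees
w_far = foveal_weight(20.0)    # far periphery contributes little
```

    A foveated coder can use weights like these to spend bits where the observer can actually resolve detail, which is the bit-assignment strategy the abstract describes.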

  19. The effects of video compression on acceptability of images for monitoring life sciences experiments

    NASA Technical Reports Server (NTRS)

    Haines, Richard F.; Chuang, Sherry L.

    1992-01-01

    Future manned space operations for Space Station Freedom will call for a variety of carefully planned multimedia digital communications, including full-frame-rate color video, to support remote operation of scientific experiments. This paper presents the results of an investigation to determine whether video compression is a viable solution to transmission bandwidth constraints. It reports on the impact of different compression levels and associated calculational parameters on image acceptability to investigators in life-sciences research at ARC. Three nonhuman life-sciences disciplines (plant, rodent, and primate biology) were selected for this study. A total of 33 subjects viewed experimental scenes in their own scientific disciplines. Ten plant scientists viewed still images of wheat stalks at various stages of growth. Each image was compressed to four different compression levels using the Joint Photographic Experts Group (JPEG) standard algorithm, and the images were presented in random order. Twelve and eleven staff members viewed 30-sec videotaped segments showing small rodents and a small primate, respectively. Each segment was repeated at four different compression levels in random order using an integer cosine transform (ICT) algorithm. Each viewer made a series of subjective image-quality ratings. There was a significant difference in image ratings according to the type of scene viewed within disciplines; thus, ratings were scene dependent. Image (still and motion) acceptability does, in fact, vary with compression level. The JPEG still-image compression levels, even over the large range of 5:1 to 120:1 in this study, yielded equally high levels of acceptability. In contrast, the ICT algorithm for motion compression yielded a sharp decline in acceptability below 768 kb/sec. Therefore, if video compression is to be used as a solution for overcoming transmission bandwidth constraints, the effective management of the ratio and compression parameters

  20. Video and image quality

    NASA Astrophysics Data System (ADS)

    Aldridge, Jim

    1995-09-01

    This paper presents some of the results of a UK government research program into methods of improving the effectiveness of CCTV surveillance systems. The paper identifies the major components of video security systems and the primary causes of unsatisfactory images. A method is outlined for relating the picture-detail limitations imposed by each system component to overall system performance. The paper also points out some possible difficulties arising from the use of emerging new technology.

  1. Quality assessment for spectral domain optical coherence tomography (OCT) images

    NASA Astrophysics Data System (ADS)

    Liu, Shuang; Paranjape, Amit S.; Elmaanaoui, Badr; Dewelle, Jordan; Rylander, H. Grady, III; Markey, Mia K.; Milner, Thomas E.

    2009-02-01

    Retinal nerve fiber layer (RNFL) thickness, a measure of glaucoma progression, can be measured in images acquired by spectral domain optical coherence tomography (OCT). The accuracy of RNFL thickness estimation, however, is affected by the quality of the OCT images. In this paper, a new parameter, signal deviation (SD), which is based on the standard deviation of the intensities in OCT images, is introduced for objective assessment of OCT image quality. Two other objective assessment parameters, signal to noise ratio (SNR) and signal strength (SS), are also calculated for each OCT image. The results of the objective assessment are compared with subjective assessment. In the subjective assessment, one OCT expert graded the image quality according to a three-level scale (good, fair, and poor). The OCT B-scan images of the retina from six subjects are evaluated by both objective and subjective assessment. From the comparison, we demonstrate that the objective assessment successfully differentiates between the acceptable quality images (good and fair images) and poor quality OCT images as graded by OCT experts. We evaluate the performance of the objective assessment under different quality assessment parameters and demonstrate that SD is the best at distinguishing between fair and good quality images. The accuracy of RNFL thickness estimation is improved significantly after poor quality OCT images are rejected by automated objective assessment using the SD, SNR, and SS.
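
    The objective parameters described above can be sketched as follows (illustrative definitions only; the paper's exact formulas for SD, SNR, and SS may differ):

```python
import numpy as np

def oct_quality_params(img, noise_region=None):
    """Two simple objective quality parameters for an OCT B-scan.

    SD: standard deviation of all pixel intensities (the 'signal
    deviation' idea above). SNR: peak signal over the standard
    deviation of a background (noise-only) region, in dB.
    """
    img = np.asarray(img, dtype=float)
    sd = float(img.std())
    noise = float(img[noise_region].std()) if noise_region is not None else sd
    snr_db = 20.0 * np.log10(img.max() / noise) if noise > 0 else float("inf")
    return sd, snr_db
```

    Thresholding parameters like these lets poor-quality B-scans be rejected automatically before RNFL thickness estimation, which is the quality-gating step the abstract reports.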

  2. Fovea based image quality assessment

    NASA Astrophysics Data System (ADS)

    Guo, Anan; Zhao, Debin; Liu, Shaohui; Cao, Guangyao

    2010-07-01

    Humans are the ultimate receivers of the visual information contained in an image, so a reasonable method of image quality assessment (IQA) should follow the properties of the human visual system (HVS). In recent years, IQA methods based on HVS models have been slowly replacing classical schemes such as mean squared error (MSE) and peak signal-to-noise ratio (PSNR). Structural similarity (SSIM), regarded as one of the most popular HVS-based full-reference IQA methods, shows clear performance improvements over traditional metrics; however, it does not perform well when image structure is seriously destroyed or masked by noise. In this paper, a new, efficient fovea-based structural similarity image quality assessment (FSSIM) is proposed. It adaptively enlarges the distortions at salient positions and adjusts the relative importance of the three components in SSIM. FSSIM predicts the quality of an image in three steps. First, it computes the luminance, contrast, and structure comparison terms; second, it computes the saliency map by extracting fovea information from the reference image using features of the HVS; third, it pools the three terms according to the processed saliency map. Finally, the widely used LIVE IQA database is employed to evaluate the performance of FSSIM. Experimental results indicate that the consistency and correlation between FSSIM and mean opinion score (MOS) are both clearly better than those of SSIM and PSNR.
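
    The three SSIM comparison terms that FSSIM reweights can be sketched with global patch statistics as follows (standard SSIM constants for 8-bit images; real SSIM and FSSIM operate on local windows, here with saliency pooling on top):

```python
import numpy as np

def ssim_terms(x, y, C1=6.5025, C2=58.5225):
    """Luminance (l), contrast (c), and structure (s) comparison terms
    of SSIM for two equal-size patches, using global patch statistics.
    C1, C2 are the usual 8-bit constants (0.01*255)^2 and (0.03*255)^2.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    mx, my = x.mean(), y.mean()
    sx, sy = x.std(), y.std()
    cov = ((x - mx) * (y - my)).mean()
    C3 = C2 / 2.0
    l = (2 * mx * my + C1) / (mx ** 2 + my ** 2 + C1)
    c = (2 * sx * sy + C2) / (sx ** 2 + sy ** 2 + C2)
    s = (cov + C3) / (sx * sy + C3)
    return l, c, s

def ssim(x, y):
    """Product of the three terms (all exponents set to 1)."""
    l, c, s = ssim_terms(x, y)
    return l * c * s
```

    FSSIM's contribution is to change how these three terms are weighted and pooled, emphasizing distortions in the salient (fovea-attracting) positions.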

  3. Landsat image data quality studies

    NASA Technical Reports Server (NTRS)

    Schueler, C. F.; Salomonson, V. V.

    1985-01-01

    Preliminary results of the Landsat-4 Image Data Quality Analysis (LIDQA) program to characterize the data obtained using the Thematic Mapper (TM) instrument on board the Landsat-4 and Landsat-5 satellites are reported. TM design specifications were compared to the obtained data with respect to four criteria: spatial resolution, geometric fidelity, information content, and image quality relative to Multispectral Scanner (MSS) data. The overall performance of the TM was rated excellent despite minor instabilities and radiometric anomalies in the data. The spatial performance of the TM exceeded design specifications in terms of both image sharpness and geometric accuracy, and the utility of the TM data was at least twice that of MSS data. The separability of alfalfa and sugar beet fields in a TM image is demonstrated.

  4. Quantitative image quality evaluation for cardiac CT reconstructions

    NASA Astrophysics Data System (ADS)

    Tseng, Hsin-Wu; Fan, Jiahua; Kupinski, Matthew A.; Balhorn, William; Okerlund, Darin R.

    2016-03-01

    Maintaining image quality in the presence of motion is always desirable and challenging in clinical cardiac CT imaging. Different image-reconstruction algorithms are available on current commercial CT systems that attempt to achieve this goal. It is widely accepted that image-quality assessment should be task-based and involve specific tasks, observers, and associated figures of merit. In this work, we developed an observer model that performed the task of estimating the percentage of plaque in a vessel from CT images. We compared task performance on cardiac CT image data reconstructed using a conventional FBP reconstruction algorithm and the SnapShot Freeze (SSF) algorithm, each at the default and optimal reconstruction cardiac phases. The purpose of this work is to design an approach for quantitative image-quality evaluation of temporal resolution for cardiac CT systems. To simulate heart motion, a moving coronary-type phantom synchronized with an ECG signal was used. Three different plaque percentages embedded in a 3 mm vessel phantom were imaged multiple times under motion-free conditions and at heart rates of 60 bpm and 80 bpm. Static (motion-free) images of this phantom were taken as reference images for image-template generation. Independent ROIs from the 60 bpm and 80 bpm images were generated by vessel tracking. The observer performed the estimation tasks using these ROIs. Ensemble mean square error (EMSE) was used as the figure of merit. Results suggest that the quality of SSF images is superior to that of FBP images in higher heart-rate scans.
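    The EMSE figure of merit is simple to compute once the observer's repeated estimates are in hand; a minimal sketch (the function name and example numbers are illustrative, not from the paper):

```python
import numpy as np

def ensemble_mse(estimates, true_value):
    """Ensemble mean square error of repeated estimates of the plaque
    percentage against ground truth; a lower value means the
    reconstruction preserved the task-relevant information better."""
    estimates = np.asarray(estimates, dtype=float)
    return float(np.mean((estimates - true_value) ** 2))

# hypothetical estimates (in %) of a known 50% plaque across scans
emse = ensemble_mse([48.0, 52.0, 50.0], 50.0)
```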

  5. Quality metrics for sensor images

    NASA Technical Reports Server (NTRS)

    Ahumada, AL

    1993-01-01

    Methods are needed for evaluating the quality of augmented visual displays (AVID). Computational quality metrics will help summarize, interpolate, and extrapolate the results of human performance tests with displays. The FLM Vision group at NASA Ames has been developing computational models of visual processing and using them to develop computational metrics for similar problems. For example, display modeling systems use metrics for comparing proposed displays, halftone optimization methods use metrics to evaluate the difference between the halftone and the original, and image compression methods minimize the predicted visibility of compression artifacts. The visual discrimination models take as input two arbitrary images A and B and compute an estimate of the probability that a human observer will report that A is different from B. If A is an image that one desires to display and B is the actual displayed image, such an estimate can be regarded as an image quality metric reflecting how well B approximates A. There are additional complexities associated with the problem of evaluating the quality of radar and IR enhanced displays for AVID tasks. One important problem is the question of whether intruding obstacles are detectable in such displays. Although the discrimination model can handle detection situations by making B the original image A plus the intrusion, this detection model makes the inappropriate assumption that the observer knows where the intrusion will be. Effects of signal uncertainty need to be added to our models. A pilot needs to make decisions rapidly. The models need to predict not just the probability of a correct decision, but the probability of a correct decision by the time the decision needs to be made. That is, the models need to predict latency as well as accuracy. Luce and Green have generated models for auditory detection latencies. Similar models are needed for visual detection. Most image quality models are designed for static imagery.

  6. Pasta Fortified with Potato Juice: Structure, Quality, and Consumer Acceptance.

    PubMed

    Kowalczewski, Przemysław; Lewandowicz, Grażyna; Makowska, Agnieszka; Knoll, Ismena; Błaszczak, Wioletta; Białas, Wojciech; Kubiak, Piotr

    2015-06-01

    The potential of potato juice in relieving gastrointestinal disorders has already been proven. Work continues on incorporating this active component into widely consumed products. In this article, the results of an attempt to fortify pasta with potato juice are presented and discussed. Fortification was performed using fresh and dried juice. The influence of the addition on the culinary properties of the final product, such as cooking weight and cooking loss, as well as on microstructure, color, texture, and consumer acceptance, was evaluated. It was found that potato juice can be used for fortification of pasta in both its fresh and dried forms; however, the effects on the various responses depend on the form used. The addition of potato juice influenced the color of the product, reducing its lightness and shifting the color balance from green to red; yellow color saturation decreased as well. Changes in color were more pronounced with the addition of fresh juice. The firmness and microstructure of the pasta were also affected. The surface microstructure of pasta containing fresh potato juice differed from that of the other 2 products, which likely explains the lower cooking loss observed in its case. In contrast, the consistency of the dough was strengthened by the addition of dried potato juice. Principal components analysis indicated that the color change had the most pronounced effect on consumer acceptance. Other physicochemical changes were slightly less significant. Nevertheless, sensory evaluation showed that functional pasta produced with fresh potato juice achieves consumer acceptance comparable with that of classic pasta. PMID:25982048

  7. Professional Acceptance Of Electronic Images In Radiologic Practice

    NASA Astrophysics Data System (ADS)

    Gitlin, Joseph N.; Curtis, David J.; Kerlin, Barbara D.; Olmsted, William W.

    1983-05-01

    During the past four years, a large number of radiographic images have been interpreted in both film and video modes in an effort to determine the utility of digital/analogue systems in general practice. With the cooperation of the Department of Defense, the MITRE Corporation, and several university-based radiology departments, the Public Health Service has participated in laboratory experiments and a teleradiology field trial to meet this objective. During the field trial, 30 radiologists participated in the interpretation of more than 4,000 diagnostic x-ray examinations that were performed at distant clinics, digitized, and transmitted to a medical center for interpretation on video monitors. As part of the evaluation, all of the participating radiologists and the attending physicians at the clinics were queried regarding the teleradiology system, particularly with respect to the diagnostic quality of the electronic images. The original films for each of the 4,000 examinations were read independently, and the findings and impressions from each mode were compared to identify discrepancies. In addition, a sample of 530 cases was reviewed and interpreted by a consensus panel to measure the accuracy of findings and impressions of both film and video readings. The sample has been retained in an automated archive for future study at the National Center for Devices and Radiological Health facilities in Rockville, Maryland. The studies include a comparison of diagnostic findings and impressions from 1024 x 1024 matrices with those obtained from the 512 x 512 format used in the field trial. The archive also provides a database for determining the effect of data compression techniques on diagnostic interpretations and establishing the utility of image processing algorithms. The paper will include an analysis of the final results of the field trial and preliminary findings from the ongoing studies using the archive of cases at the National Center for Devices and Radiological Health.

  8. The influence of noise on image quality in phase-diverse coherent diffraction imaging

    NASA Astrophysics Data System (ADS)

    Wittler, H. P. A.; van Riessen, G. A.; Jones, M. W. M.

    2016-02-01

    Phase-diverse coherent diffraction imaging provides a route to high sensitivity and resolution with low radiation dose. To take full advantage of this, the characteristics and tolerable limits of measurement noise for high quality images must be understood. In this work we show the artefacts that manifest in images recovered from simulated data with noise of various characteristics in the illumination and diffraction pattern. We explore the limits at which images of acceptable quality can be obtained and suggest qualitative guidelines that would allow for faster data acquisition while minimizing radiation dose.

  9. Image quality assessment in the low quality regime

    NASA Astrophysics Data System (ADS)

    Pinto, Guilherme O.; Hemami, Sheila S.

    2012-03-01

    Traditionally, image quality estimators have been designed and optimized to operate over the entire quality range of images in a database, from very low quality to visually lossless. However, if quality estimation is limited to a smaller quality range, their performance drops dramatically, and many image applications operate only over such a smaller range. This paper is concerned with one such range, the low-quality regime (LQR), defined as the interval of perceived quality scores over which there is a linear relationship between perceived quality scores and perceived utility scores; it lies at the low-quality end of image databases. Using this definition, this paper describes a subjective experiment to determine the low-quality regime for databases of distorted images that include perceived quality scores but not perceived utility scores, such as CSIQ and LIVE. The performances of several image utility and quality estimators are evaluated in the low-quality regime, indicating that utility estimators can be successfully applied to estimate perceived quality in this regime. Omission of the lowest-frequency image content is shown to be crucial to the performance of both kinds of estimators. Additionally, this paper establishes an upper bound on the performance of quality estimators in the LQR, using a family of quality estimators based on VIF. The resulting optimal quality estimator indicates that estimating quality in the low-quality regime is robust to the exact frequency pooling weights, and that near-optimal performance can be achieved by a variety of estimators provided that they substantially emphasize the appropriate frequency content.

  10. Assessing product image quality for online shopping

    NASA Astrophysics Data System (ADS)

    Goswami, Anjan; Chung, Sung H.; Chittar, Naren; Islam, Atiq

    2012-01-01

    Assessing product-image quality is important in the context of online shopping. A high-quality image that conveys more information about a product can boost the buyer's confidence and attract more attention. However, the notion of image quality for product images is not the same as in other domains. The perceived quality of product images depends not only on various photographic quality features but also on high-level features such as the clarity of the foreground or the quality of the background. In this paper, we define a notion of product-image quality based on such features. We conduct a crowd-sourced experiment to collect user judgments on thousands of eBay's images. We formulate a multi-class classification problem for modeling image quality by classifying images into good, fair, and poor quality based on the guided perceptual notions from the judges. We also conduct experiments with regression using the average crowd-sourced human judgment as the target. We compute a pseudo-regression score as the expected average of the predicted classes and also compute a score from the regression technique. We design many experiments with various sampling and voting schemes on the crowd-sourced data and construct various experimental image quality models. Most of our models have reasonable accuracy (greater than or equal to 70%) on the test data set. We observe that our computed image quality score has a high (0.66) rank correlation with the average votes from the crowd-sourced human judgments.
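    The reported rank correlation can be reproduced in a few lines; a self-contained Spearman sketch (assuming no tied scores, which keeps the ranking simple; the example values are hypothetical):

```python
import numpy as np

def spearman_rho(a, b):
    """Spearman rank correlation: the Pearson correlation of the
    ranks of the two score lists (no tie handling in this sketch)."""
    def rank(v):
        return np.argsort(np.argsort(np.asarray(v))).astype(float)
    ra, rb = rank(a), rank(b)
    ra = ra - ra.mean()
    rb = rb - rb.mean()
    return float((ra * rb).sum() /
                 np.sqrt((ra ** 2).sum() * (rb ** 2).sum()))

# hypothetical model scores vs. average crowd votes for five images
model = [0.9, 0.4, 0.7, 0.2, 0.6]
votes = [4.5, 2.0, 3.8, 1.5, 3.0]
rho = spearman_rho(model, votes)  # identical orderings give rho = 1
```

    With tied votes a production implementation should use average ranks, as `scipy.stats.spearmanr` does.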

  11. Infrared image quality evaluation method without reference image

    NASA Astrophysics Data System (ADS)

    Yue, Song; Ren, Tingting; Wang, Chengsheng; Lei, Bo; Zhang, Zhijie

    2013-09-01

    Since infrared image quality depends on many factors, such as the optical performance and electrical noise of the thermal imager, image quality evaluation is an important issue that can benefit both subsequent image processing and the improvement of thermal imagers. There are two ways to evaluate infrared image quality: with or without a reference image. For real-time thermal imagery, the method without a reference image is preferred because it is difficult to obtain a standard image. Although various evaluation methods exist, there is no general metric for image quality evaluation. This paper introduces a novel method to evaluate infrared images without a reference image from five aspects: noise, clarity, information volume and levels, information in the frequency domain, and the capability of automatic target recognition. Generally, the basic image quality is obtained from the first four aspects, and the quality of the target is acquired from the last aspect. The proposed method is tested on several infrared images captured by different thermal imagers; the indicators are calculated and compared with human-vision results. The evaluation shows that this method successfully describes the characteristics of infrared images and that the result is consistent with the human visual system.

  12. Impacts of Mixing on Acceptable Indoor Air Quality in Homes

    SciTech Connect

    Sherman, Max H.; Walker, Iain I.

    2010-01-01

    Ventilation reduces occupant exposure to indoor contaminants by diluting or removing them. In a multi-zone environment such as a house, every zone will have different dilution rates and contaminant source strengths. The total ventilation rate is the most important factor in determining occupant exposure to given contaminant sources, but the zone-specific distribution of exhaust and supply air and the mixing of ventilation air can play significant roles. Different types of ventilation systems will provide different amounts of mixing depending on several factors such as air leakage, the air distribution system, and contaminant source and occupant locations. Most U.S. and Canadian homes have central heating, ventilation, and air conditioning systems, which tend to mix the air; thus, the indoor air in different zones tends to be well mixed for significant fractions of the year. This article reports recent results of investigations to determine the impact of air mixing on the exposure of residential occupants to prototypical contaminants of concern. We summarize the existing literature and extend past analyses to determine the parameters that affect air mixing as well as the impacts of mixing on occupant exposure, and to draw conclusions that are relevant for standards development and for practitioners designing and installing home ventilation systems. The primary conclusion is that mixing will not substantially affect the mean indoor air quality across a broad population of occupants, homes, and ventilation systems, but it can reduce the number of occupants who are exposed to extreme pollutant levels. If the policy objective is to minimize the number of people exposed above a given pollutant threshold, some amount of mixing will be of net benefit even though it does not improve average exposure. If the policy objective is to minimize exposure on average, then mixing air in homes is detrimental and should not be encouraged. We also conclude that most homes in the US have adequate mixing.

  13. Evaluation of image quality in computed radiography based mammography systems

    NASA Astrophysics Data System (ADS)

    Singh, Abhinav; Bhwaria, Vipin; Valentino, Daniel J.

    2011-03-01

    Mammography is the most widely accepted procedure for the early detection of breast cancer, and Computed Radiography (CR) is a cost-effective technology for digital mammography. We have demonstrated that CR mammography image quality is viable for digital mammography. The image quality of mammograms acquired using CR technology was evaluated using the modulation transfer function (MTF), noise power spectrum (NPS), and detective quantum efficiency (DQE). The measurements were made with a 28 kVp beam (RQA M-II) using 2 mm of Al as a filter and a Mo/Mo target/filter combination. The acquired image bit depth was 16 bits and the scanning pixel pitch was 50 microns. A step-wedge phantom (to measure the contrast-to-noise ratio (CNR)) and the CDMAM 3.4 contrast-detail phantom were also used to assess image quality. The CNR values were observed at varying thicknesses of PMMA. The CDMAM 3.4 phantom results were plotted and compared to the EUREF acceptable and achievable values. The effect on image quality was measured using these physics metrics. A lower DQE was observed even with a higher MTF, possibly due to a higher noise component arising from the way the scanner was configured. The CDMAM phantom scores demonstrated contrast-detail performance comparable to the EUREF values. A cost-effective CR machine was optimized for high-resolution and high-contrast imaging.

  14. Testing scanners for the quality of output images

    NASA Astrophysics Data System (ADS)

    Concepcion, Vicente P.; Nadel, Lawrence D.; D'Amato, Donald P.

    1995-01-01

    Document scanning is the means through which documents are converted to their digital image representation for electronic storage or distribution. Among the types of documents being scanned by government agencies are tax forms, patent documents, office correspondence, mail pieces, engineering drawings, microfilm, archived historical papers, and fingerprint cards. Increasingly, the resulting digital images are used as the input for further automated processing, including conversion to a full-text-searchable representation via machine-printed or handwritten (optical) character recognition (OCR), postal zone identification, raster-to-vector conversion, and fingerprint matching. These diverse document images may be bi-tonal, gray scale, or color. Spatial sampling frequencies range from about 200 pixels per inch to over 1,000. The quality of the digital images can have a major effect on the accuracy and speed of any subsequent automated processing, as well as on any human-based processing that may be required. During imaging system design there is, therefore, a need to specify the criteria by which image quality will be judged and, prior to system acceptance, to measure the quality of the images produced. Unfortunately, there are few, if any, agreed-upon techniques for measuring document image quality objectively. In the output images, it is difficult to distinguish image degradation caused by the poor quality of the input paper or microfilm from that caused by the scanning system. We propose several document image quality criteria and have developed techniques for their measurement. These criteria include spatial resolution, geometric image accuracy (distortion), gray scale resolution and linearity, and temporal and spatial uniformity. The measurement of these criteria requires scanning one or more test targets along with computer-based analyses of the test target images.

  15. Effect of image quality on calcification detection in digital mammography

    SciTech Connect

    Warren, Lucy M.; Mackenzie, Alistair; Cooke, Julie; Given-Wilson, Rosalind M.; Wallis, Matthew G.; Chakraborty, Dev P.; Dance, David R.; Bosmans, Hilde; Young, Kenneth C.

    2012-06-15

    The alternative free-response receiver operating characteristic (AFROC) area decreased from 0.84 to 0.63 and the ROC area decreased from 0.91 to 0.79 (p < 0.0001). This corresponded to a 30% drop in lesion sensitivity at a NLF equal to 0.1. Detection was also sensitive to the dose used. There was no significant difference in detection between the two image processing algorithms used (p > 0.05). It was additionally found that a lower threshold gold thickness from CDMAM analysis implied better cluster detection. The measured threshold gold thickness passed the acceptable limit set in the EU standards for all image qualities except half-dose CR. However, calcification detection varied significantly between image qualities. This suggests that the current EU guidelines may need revising. Conclusions: Microcalcification detection was found to be sensitive to the detector and dose used. Standard measurements of image quality were a good predictor of microcalcification cluster detection.

  16. Effect of image quality on calcification detection in digital mammography

    PubMed Central

    Warren, Lucy M.; Mackenzie, Alistair; Cooke, Julie; Given-Wilson, Rosalind M.; Wallis, Matthew G.; Chakraborty, Dev P.; Dance, David R.; Bosmans, Hilde; Young, Kenneth C.

    2012-01-01

    The alternative free-response receiver operating characteristic (AFROC) area decreased from 0.84 to 0.63 and the ROC area decreased from 0.91 to 0.79 (p < 0.0001). This corresponded to a 30% drop in lesion sensitivity at a NLF equal to 0.1. Detection was also sensitive to the dose used. There was no significant difference in detection between the two image processing algorithms used (p > 0.05). It was additionally found that a lower threshold gold thickness from CDMAM analysis implied better cluster detection. The measured threshold gold thickness passed the acceptable limit set in the EU standards for all image qualities except half-dose CR. However, calcification detection varied significantly between image qualities. This suggests that the current EU guidelines may need revising. Conclusions: Microcalcification detection was found to be sensitive to the detector and dose used. Standard measurements of image quality were a good predictor of microcalcification cluster detection. PMID:22755704

  17. An Underwater Color Image Quality Evaluation Metric.

    PubMed

    Yang, Miao; Sowmya, Arcot

    2015-12-01

    Quality evaluation of underwater images is a key goal of underwater video image retrieval and intelligent processing. To date, no metric has been proposed for underwater color image quality evaluation (UCIQE). The special absorption and scattering characteristics of the water medium do not allow direct application of natural color image quality metrics, especially across different underwater environments. In this paper, subjective testing of underwater image quality was organized. The statistical distribution of underwater image pixels in the CIELab color space, related to the subjective evaluation, indicates that sharpness and colorfulness correlate well with subjective image quality perception. Based on these findings, a new UCIQE metric, a linear combination of chroma, saturation, and contrast, is proposed to quantify the non-uniform color cast, blurring, and low contrast that characterize underwater engineering and monitoring images. Experiments are conducted to illustrate the performance of the proposed UCIQE metric and its capability to measure underwater image enhancement results. They show that the proposed metric has performance comparable to the leading natural color image quality metrics and the underwater grayscale image quality metrics available in the literature, and can predict with higher accuracy the relative amount of degradation for similar image content in underwater environments. Importantly, UCIQE is a simple and fast solution for real-time underwater video processing. The effectiveness of the presented measure is also demonstrated by subjective evaluation; the results show a strong correlation between UCIQE and the subjective mean opinion score. PMID:26513783
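    The metric's form, a linear combination of the chroma standard deviation, luminance contrast, and mean saturation in CIELab, can be sketched as follows. The coefficient values and the exact contrast/saturation definitions here should be checked against the paper; treat this as an illustration:

```python
import numpy as np

def uciqe(L, a, b, c1=0.4680, c2=0.2745, c3=0.2576):
    """UCIQE sketch over the L*, a*, b* channels of an image.
    sigma_c: std of chroma; con_l: luminance contrast taken as the
    1st-to-99th percentile spread; sat: one common saturation
    definition (chroma relative to total magnitude). Coefficients
    and definitions are illustrative assumptions."""
    chroma = np.sqrt(a ** 2 + b ** 2)
    sigma_c = chroma.std()
    con_l = np.percentile(L, 99) - np.percentile(L, 1)
    sat = chroma / (np.sqrt(chroma ** 2 + L ** 2) + 1e-12)
    return float(c1 * sigma_c + c2 * con_l + c3 * sat.mean())

# a flat gray patch scores zero; a varied, colorful patch scores higher
gray = uciqe(np.full((8, 8), 50.0), np.zeros((8, 8)), np.zeros((8, 8)))
rng = np.random.default_rng(1)
colorful = uciqe(rng.uniform(20, 80, (8, 8)),
                 rng.uniform(-40, 40, (8, 8)),
                 rng.uniform(-40, 40, (8, 8)))
```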

  18. An objective method for 3D quality prediction using visual annoyance and acceptability level

    NASA Astrophysics Data System (ADS)

    Khaustova, Darya; Fournier, Jérôme; Wyckens, Emmanuel; Le Meur, Olivier

    2015-03-01

    This study proposes a new objective metric for video quality assessment. It predicts the impact on human perception of technical quality parameters relevant to visual discomfort. The proposed metric is based on a 3-level color scale: (1) Green - not annoying, (2) Orange - annoying but acceptable, (3) Red - not acceptable. Each color category thus reflects viewers' judgment based on stimulus acceptability and induced visual annoyance. The boundary between the "Green" and "Orange" categories defines the visual annoyance threshold, while the boundary between the "Orange" and "Red" categories defines the acceptability threshold. Once the technical quality parameters are measured, they are compared to these perceptual thresholds; this comparison allows the quality of the 3D video sequence to be estimated. Moreover, the proposed metric can be adjusted to service or production requirements by changing the percentage of acceptability and/or visual annoyance. The performance of the metric was evaluated in a subjective experiment using three stereoscopic scenes. Five view asymmetries with four degradation levels were introduced into the initial test content. The results demonstrate high correlations between subjective scores and objective predictions for all view asymmetries.
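    The comparison step reduces to mapping each measured parameter onto the three color categories via the two perceptual thresholds; a sketch with hypothetical threshold values (the real thresholds come from subjective tests):

```python
def classify_3d_quality(parameter_value, annoyance_threshold,
                        acceptability_threshold):
    """Map a measured technical parameter (e.g. a view-asymmetry
    level) onto the paper's three-colour scale. Threshold values are
    assumed to have been derived from subjective experiments."""
    if parameter_value <= annoyance_threshold:
        return "Green"   # not annoying
    if parameter_value <= acceptability_threshold:
        return "Orange"  # annoying but acceptable
    return "Red"         # not acceptable

# hypothetical asymmetry measure with thresholds 0.5 and 1.0
verdict = classify_3d_quality(0.7, 0.5, 1.0)
```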

  19. Wavelet based image quality self measurements

    NASA Astrophysics Data System (ADS)

    Al-Jawad, Naseer; Jassim, Sabah

    2010-04-01

    Noise is generally considered a degradation in image quality, and image quality is often judged by the appearance of image edges and their clarity. The performance of most applications is affected by image quality and by the level of different types of degradation. Measuring image quality and identifying the type of noise or degradation is therefore a key factor in raising application performance, and this task can be very challenging. The wavelet transform is nowadays widely used in many applications, which mostly benefit from the wavelet's localisation in the frequency domain. The coefficients of the high-frequency sub-bands in the wavelet domain are well represented by a Laplace histogram. In this paper we propose to use the Laplace distribution histogram both to measure image quality and to identify the type of degradation affecting a given image. Image quality and the level of degradation are usually measured against a reference image of reasonable quality; the Laplace distribution histogram discussed here instead provides a self-testing measure of image quality. The measurement is based on constructing the theoretical Laplace distribution histogram of a high-frequency wavelet sub-band from the actual standard deviation, and then comparing it with the actual Laplace distribution histogram. The comparison is performed using the histogram intersection method. All experiments are performed using the extended Yale database.
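    The self-test described above can be sketched directly: build the theoretical Laplace histogram from the coefficients' own standard deviation and compare it to the empirical histogram by histogram intersection (the bin count and normalization choices here are ours, not the authors'):

```python
import numpy as np

def laplace_self_test(coeffs, bins=51):
    """Histogram intersection between the empirical histogram of
    high-frequency wavelet coefficients and the Laplace histogram
    implied by their own standard deviation. 1 means a perfect fit;
    clean natural-image sub-bands should score high."""
    coeffs = np.asarray(coeffs, dtype=float).ravel()
    b = coeffs.std() / np.sqrt(2)  # Laplace scale: std = sqrt(2) * b
    edges = np.linspace(coeffs.min(), coeffs.max(), bins + 1)
    hist, _ = np.histogram(coeffs, bins=edges, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    model = np.exp(-np.abs(centers) / b) / (2 * b)  # Laplace pdf
    width = edges[1] - edges[0]
    return float(np.minimum(hist, model).sum() * width)

rng = np.random.default_rng(2)
score = laplace_self_test(rng.laplace(0.0, 1.0, 20000))
```

    A degraded sub-band whose coefficients depart from Laplace statistics would produce a visibly lower intersection score.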

  20. Process perspective on image quality evaluation

    NASA Astrophysics Data System (ADS)

    Leisti, Tuomas; Halonen, Raisa; Kokkonen, Anna; Weckman, Hanna; Mettänen, Marja; Lensu, Lasse; Ritala, Risto; Oittinen, Pirkko; Nyman, Göte

    2008-01-01

    The psychological complexity of multivariate image quality evaluation makes it difficult to develop general image quality metrics. Quality evaluation includes several mental processes, and ignoring these processes and using only a few test images can lead to biased results. Using a qualitative/quantitative (Interpretation Based Quality, IBQ) methodology, we examined the process of pair-wise comparison in a setting where the quality of images printed by a laser printer on different paper grades was evaluated. The test image consisted of a picture of a table covered with several objects. Three other images were also used: photographs of a woman, a cityscape, and a countryside. In addition to the pair-wise comparisons, the observers (N=10) were interviewed about the subjective quality attributes they used in making their quality decisions. An examination of the individual pair-wise comparisons revealed serious inconsistencies in the observers' evaluations of the test image content, but not of the other contents. The qualitative analysis showed that this inconsistency was due to the observers' focus of attention; the lack of an easily recognizable context in the test image may have contributed to it. To obtain reliable knowledge of the effect of image context or attention on subjective image quality, a qualitative methodology is needed.

  1. Automatic no-reference image quality assessment.

    PubMed

    Li, Hongjun; Hu, Wei; Xu, Zi-Neng

    2016-01-01

    No-reference image quality assessment aims to predict the visual quality of distorted images without examining the original image as a reference. Most no-reference image quality metrics that have been proposed are designed for one or a set of predefined specific distortion types and are unlikely to generalize to images degraded by other types of distortion. There is a strong need for no-reference image quality assessment methods applicable to various distortions. In this paper, the authors propose a no-reference image quality assessment method based on a natural image statistic model in the wavelet transform domain. A generalized Gaussian density model is employed to summarize the marginal distribution of the wavelet coefficients of the test image, so that only the model's fitted parameters are needed to evaluate image quality. The proposed algorithm is tested on three large-scale benchmark databases. Experimental results demonstrate that the proposed algorithm is easy to implement and computationally efficient. Furthermore, the method can be applied to many well-known types of image distortion and achieves good prediction performance. PMID:27468398
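    The generalized Gaussian fit at the core of such methods is commonly done by moment matching; a sketch using a grid search over candidate shape parameters (the moment-matching estimator is standard, but this implementation is our illustration, not the paper's code):

```python
import numpy as np
from math import gamma

def fit_ggd_shape(coeffs):
    """Estimate the generalized-Gaussian shape parameter g by matching
    the sample ratio r = E[|x|]^2 / E[x^2] to its theoretical value
    rho(g) = Gamma(2/g)^2 / (Gamma(1/g) * Gamma(3/g)).
    g = 2 recovers a Gaussian, g = 1 a Laplacian."""
    x = np.asarray(coeffs, dtype=float).ravel()
    r = np.abs(x).mean() ** 2 / (x ** 2).mean()
    candidates = np.arange(0.2, 6.0, 0.001)
    rho = np.array([gamma(2 / g) ** 2 / (gamma(1 / g) * gamma(3 / g))
                    for g in candidates])
    return float(candidates[np.argmin(np.abs(rho - r))])

rng = np.random.default_rng(3)
g_normal = fit_ggd_shape(rng.normal(size=50000))    # expect ~2
g_laplace = fit_ggd_shape(rng.laplace(size=50000))  # expect ~1
```

    The fitted shape and scale of each sub-band then serve as the quality features.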

  2. JPEG2000 still image coding quality.

    PubMed

    Chen, Tzong-Jer; Lin, Sheng-Chieh; Lin, You-Chen; Cheng, Ren-Gui; Lin, Li-Hui; Wu, Wei

    2013-10-01

    This work compares the image quality produced by two popular JPEG2000 programs. The two medical image compression implementations are both based on JPEG2000, but they differ in interface, convenience, speed of computation, and options such as encoder characteristics, quantization, and tiling. The differences in image quality and compression ratio are also affected by the modality and by the compression algorithm implementation. Do they provide the same quality? The quality of compressed medical images from two image compression programs, Apollo and JJ2000, was evaluated extensively using objective metrics. These programs were applied to three medical image modalities at compression ratios ranging from 10:1 to 100:1. The quality of the reconstructed images was then evaluated using five objective metrics, and the Spearman rank correlation coefficients between the two programs were measured under every metric. We found that JJ2000 and Apollo exhibited indistinguishable image quality for all images evaluated using the five metrics (r > 0.98, p < 0.001). It can be concluded that the image quality of the JJ2000 and Apollo implementations is statistically equivalent for medical image compression. PMID:23589187
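    Objective comparisons of reconstructed images typically rest on metrics such as PSNR; a minimal sketch for 8-bit images:

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB between an original and a
    decompressed image; higher means a closer reconstruction."""
    a = np.asarray(original, dtype=float)
    b = np.asarray(reconstructed, dtype=float)
    mse = np.mean((a - b) ** 2)
    if mse == 0.0:
        return float("inf")  # identical images
    return float(10.0 * np.log10(peak ** 2 / mse))

x = np.zeros((8, 8))  # toy 8-bit image
```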

  3. Acceptance test report for the Tank 241-C-106 in-tank imaging system

    SciTech Connect

    Pedersen, L.T.

    1998-05-22

    This document presents the results of Acceptance Testing of the 241-C-106 in-tank video camera imaging system. The purpose of this imaging system is to monitor the Project W-320 sluicing of Tank 241-C-106. The objective of acceptance testing of the 241-C-106 video camera system was to verify that all equipment and components function in accordance with procurement specification requirements and original equipment manufacturer's (OEM) specifications. This document reports the results of the testing.

  4. Cognitive issues in image quality measurement

    NASA Astrophysics Data System (ADS)

    de Ridder, Huib

    2001-01-01

    Designers of imaging systems, image processing algorithms, etc., usually take for granted that methods for assessing perceived image quality produce unbiased estimates of the viewers' quality impression. Quality judgments, however, are affected by the judgment strategies induced by the experimental procedures. In this paper the results of two experiments are presented illustrating the influence judgment strategies can have on quality judgments. The first experiment concerns contextual effects due to the composition of the stimulus sets. Subjects assessed the sharpness of two differently composed sets of blurred versions of one static image. The sharpness judgments for the blurred images present in both stimulus sets were found to depend on the composition of the set as well as on the scaling technique employed. In the second experiment subjects assessed either the overall quality or the overall impairment of manipulated and standard JPEG-coded images containing two main artifacts. The results indicate a systematic difference between the quality and impairment judgments that can be interpreted as an instruction-based difference in the weighting of the two artifacts. Again, some influence of the scaling technique was observed. The results of both experiments underscore the important role judgment strategies play in the psychophysical evaluation of image quality. Ignoring this influence on quality judgments may lead to invalid conclusions about the viewers' impression of image quality.

  5. Phase congruency assesses hyperspectral image quality

    NASA Astrophysics Data System (ADS)

    Shao, Xiaopeng; Zhong, Cheng

    2012-10-01

    Blind image quality assessment (QA) is a tough task, especially for hyperspectral imagery, which is degraded by noise, distortion, defocus, and other complex factors. Subjective hyperspectral imagery QA methods basically measure the degradation of the image in terms of human perceptual visual quality. Noise and blur, the features that most strongly determine image quality, are employed to predict the objective quality of each band of hyperspectral imagery. We demonstrate a novel no-reference hyperspectral imagery QA model based on phase congruency (PC), a dimensionless quantity that provides an absolute measure of the significance of feature points. First, a Log Gabor wavelet is used to calculate the phase congruency of the frequencies of each band image. The relationship between noise and PC can be derived from this transformation under the assumption that the noise is additive. Second, a PC focus measure evaluation model is proposed to evaluate blur caused by different amounts of defocus. Ratio and mean factors of edge blur level and noise are defined to assess the quality of each band image. This image QA method obtains excellent correlation with subjective image quality scores without any reference. Finally, the PC information is utilized to improve the quality of some band images.
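Phase congruency is conventionally computed from the quadrature responses of log-Gabor filters. As a minimal illustration (not the paper's full pipeline), the radial log-Gabor frequency response used to build such a filter bank can be written as:

```python
import numpy as np

def log_gabor(freqs, f0, sigma_ratio=0.55):
    """Radial log-Gabor frequency response:
    G(f) = exp(-log(f/f0)^2 / (2 * log(sigma_ratio)^2)), with G(0) = 0.
    f0 is the centre frequency; sigma_ratio controls the bandwidth
    (0.55 is a common choice, assumed here, not taken from the paper)."""
    f = np.asarray(freqs, float)
    out = np.zeros_like(f)
    nz = f > 0
    out[nz] = np.exp(-np.log(f[nz] / f0) ** 2
                     / (2 * np.log(sigma_ratio) ** 2))
    return out
```

A bank of these filters at several centre frequencies and orientations supplies the local amplitude and phase from which PC, and hence the noise and blur factors described above, are derived.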

  6. 46 CFR 10.409 - Coast Guard-accepted Quality Standard System (QSS) organizations.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 46 Shipping 1 2014-10-01 2014-10-01 false Coast Guard-accepted Quality Standard System (QSS) organizations. 10.409 Section 10.409 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY MERCHANT MARINE OFFICERS AND SEAMEN MERCHANT MARINER CREDENTIAL Training Courses and Programs § 10.409 Coast...

  7. WebCT--The Quasimoderating Effect of Perceived Affective Quality on an Extending Technology Acceptance Model

    ERIC Educational Resources Information Center

    Sanchez-Franco, Manuel J.

    2010-01-01

    Perceived affective quality is an attractive area of research in Information Systems. Specifically, understanding the intrinsic and extrinsic individual factors and interaction effects that influence Information and Communications Technology (ICT) acceptance and adoption--in higher education--continues to be a focal interest in learning research.…

  8. Factors Affecting the Quality of Life and the Illness Acceptance of Pregnant Women with Diabetes

    PubMed Central

    Bień, Agnieszka; Rzońca, Ewa; Kańczugowska, Angelika; Iwanowicz-Palus, Grażyna

    2015-01-01

    The paper contains an analysis of the factors affecting the quality of life (QoL) and the illness acceptance of diabetic pregnant women. The study was performed between January and April, 2013. It included 114 pregnant women with diabetes, hospitalized in the High Risk Pregnancy Wards of several hospitals in Lublin, Poland. The study used a diagnostic survey with questionnaires. The research instruments used were: The WHOQOL-Bref questionnaire and the Acceptance of Illness Scale (AIS). The women’s general quality of life was slightly higher than their perceived general health. A higher quality of life was reported by women with a very good financial standing, very good perceived health, moderate self-reported knowledge of diabetes, and also by those only treated with diet and stating that the illness did not interfere with their lives (p < 0.05). Women with a very good financial standing (p < 0.009), high self-reported health (p < 0.002), and those treated by means of a diet (p < 0.04) had a higher acceptance of illness. A higher acceptance of illness contributes to a higher general quality of life and a better perception of one’s health. PMID:26703697

  9. Retinal image quality assessment using generic features

    NASA Astrophysics Data System (ADS)

    Fasih, Mahnaz; Langlois, J. M. Pierre; Ben Tahar, Houssem; Cheriet, Farida

    2014-03-01

    Retinal image quality assessment is an important step in automated eye disease diagnosis. Diagnosis accuracy is highly dependent on the quality of retinal images, because poor image quality might prevent the observation of significant eye features and disease manifestations. A robust algorithm is therefore required in order to evaluate the quality of images in a large database. We developed an algorithm for retinal image quality assessment based on generic features that is independent of segmentation methods. It exploits local sharpness and texture features by applying the cumulative probability of blur detection metric and a run-length encoding algorithm, respectively. The quality features are combined to evaluate the image's suitability for diagnosis purposes. Based on the recommendations of medical experts and our experience, we compared a global and a local approach. A support vector machine with radial basis functions was used as a nonlinear classifier in order to classify images into gradable and ungradable groups. We applied our methodology to 65 images of size 2592×1944 pixels that had been graded by a medical expert. The expert evaluated 38 images as gradable and 27 as ungradable. The results indicate very good agreement between the proposed algorithm's predictions and the medical expert's judgment: the sensitivity and specificity for the local approach are respectively 92% and 94%. The algorithm demonstrates sufficient robustness to identify relevant images for automated diagnosis.
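The sensitivity and specificity figures reported above come directly from the confusion counts of the gradable/ungradable classifier. A minimal sketch of that evaluation step (treating "gradable" as the positive class, which is an assumption of this sketch):

```python
def sens_spec(y_true, y_pred, positive="gradable"):
    """Sensitivity = TP/(TP+FN), specificity = TN/(TN+FP),
    computed from per-image ground-truth and predicted labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p != positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    return tp / (tp + fn), tn / (tn + fp)
```

With the paper's 38 gradable and 27 ungradable images, 92% sensitivity means roughly 35 of the 38 gradable images were correctly accepted.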

  10. Seven challenges for image quality research

    NASA Astrophysics Data System (ADS)

    Chandler, Damon M.; Alam, Md M.; Phan, Thien D.

    2014-02-01

    Image quality assessment has been a topic of recent intense research due to its usefulness in a wide variety of applications. Owing in large part to efforts within the HVEI community, image-quality research has particularly benefited from improved models of visual perception. However, over the last decade, research in image quality has largely shifted from the previous broader objective of gaining a better understanding of human vision, to the current limited objective of better fitting the available ground-truth data. In this paper, we discuss seven open challenges in image quality research. These challenges stem from lack of complete perceptual models for: natural images; suprathreshold distortions; interactions between distortions and images; images containing multiple and nontraditional distortions; and images containing enhancements. We also discuss challenges related to computational efficiency. The objective of this paper is not only to highlight the limitations in our current knowledge of image quality, but to also emphasize the need for additional fundamental research in quality perception.

  11. Combined terahertz imaging system for enhanced imaging quality

    NASA Astrophysics Data System (ADS)

    Dolganova, Irina N.; Zaytsev, Kirill I.; Metelkina, Anna A.; Yakovlev, Egor V.; Karasik, Valeriy E.; Yurchenko, Stanislav O.

    2016-06-01

    An improved terahertz (THz) imaging system is proposed for enhanced image quality. The imaging scheme includes a THz source and a detection system operated in both active and passive modes. In order to illuminate the object plane homogeneously, a THz reshaper is proposed; its form and internal structure were studied by numerical simulation. Using different test objects, we compare imaging quality in the active and passive THz imaging modes. Imaging contrast and modulation transfer functions in the active and passive modes reveal their respective drawbacks at high and low spatial frequencies. The experimental results confirm the benefit of combining both imaging modes into a hybrid one. The proposed algorithm for constructing a hybrid THz image is an effective approach for retrieving maximum information about a remote object.
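The imaging-contrast comparison mentioned above is typically based on Michelson modulation contrast measured on a test object; a minimal sketch of that measurement (not the authors' specific procedure):

```python
import numpy as np

def michelson_contrast(region):
    """Michelson contrast of an image region:
    C = (Imax - Imin) / (Imax + Imin), in [0, 1] for non-negative data."""
    r = np.asarray(region, float)
    return float((r.max() - r.min()) / (r.max() + r.min()))
```

Evaluating this per spatial frequency on a bar-pattern target gives the modulation transfer curve used to compare the active, passive, and hybrid modes.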

  12. Optimization of synthetic aperture image quality

    NASA Astrophysics Data System (ADS)

    Moshavegh, Ramin; Jensen, Jonas; Villagomez-Hoyos, Carlos A.; Stuart, Matthias B.; Hemmsen, Martin Christian; Jensen, Jørgen Arendt

    2016-04-01

    Synthetic aperture (SA) imaging produces high-quality images and velocity estimates of both slow and fast flow at high frame rates. However, grating lobe artifacts can appear both in transmission and reception; these affect the image quality and the frame rate. Optimization of the parameters affecting SA image quality is therefore of great importance, and this paper proposes an advanced procedure for optimizing the parameters essential for acquiring optimal image quality while generating high-resolution SA images. The optimization is mainly based on measures such as the F-number, the number of emissions, and the aperture size, which are considered the acquisition factors that contribute most to the quality of high-resolution SA images. Image quality performance is quantified in terms of full-width at half maximum (FWHM) and cystic resolution (CTR). The results of the study showed that SA imaging with only 32 emissions and a maximum sweep angle of 22 degrees yields very good image quality compared with 256 emissions and the full aperture size. The number of emissions and the maximum sweep angle can therefore be optimized to reach reasonably good performance and to increase the frame rate by lowering the required number of emissions. All measurements were performed using the experimental SARUS scanner connected to a λ/2-pitch transducer. A wire phantom and a tissue-mimicking phantom containing anechoic cysts were scanned using the optimized parameters for the transducer. Measurements coincide with simulations.
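The FWHM figure of merit is measured on the point-spread function of a wire target. A minimal sketch of extracting FWHM from a sampled profile by linear interpolation at the half-maximum crossings (a standard approach, assumed here rather than taken from the paper):

```python
import numpy as np

def fwhm(x, y):
    """Full-width at half maximum of a sampled, single-peaked profile,
    with linear interpolation at the two half-maximum crossings."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    half = y.max() / 2.0
    above = np.where(y >= half)[0]
    i, j = above[0], above[-1]
    # rising edge: y[i-1] < half <= y[i]; falling edge: y[j] >= half > y[j+1]
    left = x[i] if i == 0 else np.interp(half, [y[i - 1], y[i]], [x[i - 1], x[i]])
    right = x[j] if j == len(x) - 1 else np.interp(half, [y[j + 1], y[j]], [x[j + 1], x[j]])
    return float(right - left)
```

For a Gaussian beam profile with standard deviation sigma, this returns approximately 2.355 * sigma, the analytical FWHM.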

  13. Automatic quality assessment of planetary images

    NASA Astrophysics Data System (ADS)

    Sidiropoulos, P.; Muller, J.-P.

    2015-10-01

    A significant fraction of planetary images are corrupted beyond the point that much scientific meaning can be extracted. For example, transmission errors result in missing data which is unrecoverable. The available planetary image datasets include many such "bad data", which both occupy valuable scientific storage resources and create false impressions about planetary image availability for specific planetary objects or target areas. In this work, we demonstrate a pipeline that we have developed to automatically assess the quality of planetary images. Additionally, this method discriminates between different types of image degradation, such as low-quality originating from camera flaws or low-quality triggered by atmospheric conditions, etc. Examples of quality assessment results for Viking Orbiter imagery will be also presented.
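One simple symptom of the transmission errors described above is rows of unrecoverable, constant-valued data. A crude sketch of flagging such "bad data" (an illustrative heuristic, not the pipeline's actual discriminator):

```python
import numpy as np

def dead_row_fraction(img, tol=1e-12):
    """Fraction of image rows with (near-)zero variance, a crude proxy
    for lines dropped or zero-filled during transmission."""
    img = np.asarray(img, float)
    return float(np.mean(img.var(axis=1) <= tol))
```

An image whose dead-row fraction exceeds some threshold could be routed out of the scientific archive; distinguishing transmission loss from, e.g., atmospheric degradation requires the richer features the pipeline computes.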

  14. Perceptual image quality: Effects of tone characteristics

    PubMed Central

    Delahunt, Peter B.; Zhang, Xuemei; Brainard, David H.

    2007-01-01

    Tone mapping refers to the conversion of luminance values recorded by a digital camera or other acquisition device, to the luminance levels available from an output device, such as a monitor or a printer. Tone mapping can improve the appearance of rendered images. Although there are a variety of algorithms available, there is little information about the image tone characteristics that produce pleasing images. We devised an experiment where preferences for images with different tone characteristics were measured. The results indicate that there is a systematic relation between image tone characteristics and perceptual image quality for images containing faces. For these images, a mean face luminance level of 46–49 CIELAB L* units and a luminance standard deviation (taken over the whole image) of 18 CIELAB L* units produced the best renderings. This information is relevant for the design of tone-mapping algorithms, particularly as many images taken by digital camera users include faces. PMID:17235365
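The reported statistics (mean face luminance of 46–49 L* units, standard deviation of 18 L* units) are in CIE 1976 lightness. A sketch of computing such statistics from relative luminance, using the standard CIE L* formula:

```python
import numpy as np

def lstar(Y, Yn=1.0):
    """CIE 1976 lightness L* from relative luminance Y (white point Yn).
    Uses the standard piecewise cube-root formula with the 6/29 breakpoint."""
    t = np.asarray(Y, float) / Yn
    f = np.where(t > (6 / 29) ** 3, np.cbrt(t), t / (3 * (6 / 29) ** 2) + 4 / 29)
    return 116.0 * f - 16.0
```

Given a rendered image's luminance channel, `lstar(Y).mean()` and `lstar(Y).std()` are the quantities a tone-mapping algorithm would steer toward the preferred ranges found in the experiment.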

  15. A User-Driven Selection of VGI Based on Minimum Acceptable Quality Levels

    NASA Astrophysics Data System (ADS)

    Bordogna, G.; Carrara, P.; Criscuolo, L.; Pepe, M.; Rampini, A.

    2015-08-01

    Although Volunteered Geographic Information (VGI) activities are now extremely helpful in a number of scientific applications, researchers and decision makers still show some resistance to using volunteered contributions because of quality issues. Several methods and workflows have been proposed to address quality issues in different VGI projects, but they are usually built ad hoc for specific datasets and are thus neither extensible nor transferable. In order to overcome this weakness, the authors propose a user-driven assessment of VGI items that retains only those satisfying minimally acceptable quality levels defined according to the users' specific quality requirements and project goals. In the present work the users, i.e., information consumers, are seen as decision makers and are allowed to set the minimum acceptable quality levels; the approach thus offers a user-driven assessment of the fitness for use of VGI items. The paper first briefly presents a view of VGI components and suitable quality indices, then describes a logical architecture for managing them and for enabling a querying mechanism over the datasets. The approach is finally exemplified with a case study simulation.
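The filtering step described above reduces to keeping only the items whose quality indices all meet the user's thresholds. A minimal sketch (the field names `positional_accuracy` and `completeness` are illustrative, not the paper's actual index names):

```python
def filter_vgi(items, min_levels):
    """Keep only VGI items whose quality indices all meet the
    user-defined minimum acceptable levels.
    items: list of dicts mapping quality-index name -> score.
    min_levels: dict mapping quality-index name -> minimum score."""
    return [it for it in items
            if all(it.get(k, 0) >= v for k, v in min_levels.items())]
```

Different information consumers can pass different `min_levels`, which is exactly the user-driven, fitness-for-use behaviour the paper advocates.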

  16. Image Quality Ranking Method for Microscopy.

    PubMed

    Koho, Sami; Fazeli, Elnaz; Eriksson, John E; Hänninen, Pekka E

    2016-01-01

    Automated analysis of microscope images is necessitated by the increased need for high-resolution follow up of events in time. Manually finding the right images to be analyzed, or eliminated from data analysis are common day-to-day problems in microscopy research today, and the constantly growing size of image datasets does not help the matter. We propose a simple method and a software tool for sorting images within a dataset, according to their relative quality. We demonstrate the applicability of our method in finding good quality images in a STED microscope sample preparation optimization image dataset. The results are validated by comparisons to subjective opinion scores, as well as five state-of-the-art blind image quality assessment methods. We also show how our method can be applied to eliminate useless out-of-focus images in a High-Content-Screening experiment. We further evaluate the ability of our image quality ranking method to detect out-of-focus images, by extensive simulations, and by comparing its performance against previously published, well-established microscopy autofocus metrics. PMID:27364703
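As a point of comparison for the out-of-focus detection task above, the variance-of-Laplacian focus metric is one of the well-established autofocus measures such rankings are validated against (it is not the paper's own method):

```python
import numpy as np

def focus_score(img):
    """Variance of a discrete 5-point Laplacian over the image interior.
    Defocused (low-pass) images score lower than sharp ones."""
    img = np.asarray(img, float)
    lap = (-4 * img[1:-1, 1:-1] + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return float(lap.var())

def rank_by_quality(images):
    """Indices of images sorted from sharpest to blurriest."""
    return sorted(range(len(images)), key=lambda i: -focus_score(images[i]))
```

Sorting a dataset by such a score is the simplest form of the relative quality ranking the paper describes; blurred or out-of-focus frames sink to the bottom of the list.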

  19. End-to-end image quality assessment

    NASA Astrophysics Data System (ADS)

    Raventos, Joaquin

    2012-05-01

    An innovative computerized benchmarking approach (US patent pending, Sep 2011) is presented, based on extensive application of photometry, geometrical optics, and digital media, using a randomized target for a standard observer to assess the image quality of video imaging systems at different daytime and low-light luminance levels. It takes into account the target's contrast and color characteristics, as well as the observer's visual acuity and dynamic response. This includes human vision as part of the "extended video imaging system" (EVIS), and allows image quality assessment by several standard observers simultaneously.

  20. MIQM: a multicamera image quality measure.

    PubMed

    Solh, Mashhour; AlRegib, Ghassan

    2012-09-01

    Although several subjective and objective quality assessment methods have been proposed in the literature for images and videos from single cameras, no comparable effort has been devoted to the quality assessment of multicamera images. With the increasing popularity of multiview applications, quality assessment of multicamera images and videos is becoming fundamental to the development of these applications. Image quality is affected by several factors, such as camera configuration, number of cameras, and the calibration process. In order to develop an objective metric specifically designed for multicamera systems, we identified and quantified two types of visual distortions in multicamera images: photometric distortions and geometric distortions. The relative distortion between individual camera scenes is a major factor in determining the overall perceived quality. In this paper, we show that such distortions can be translated into luminance, contrast, spatial motion, and edge-based structure components. We propose three different indices that can quantify these components. We provide examples to demonstrate the correlation among these components and the corresponding indices. Then, we combine these indices into one multicamera image quality measure (MIQM). Results and comparisons with other measures, such as peak signal-to-noise ratio, mean structural similarity, and visual information fidelity show that MIQM outperforms other measures in capturing the perceptual fidelity of multicamera images. Finally, we verify the results against subjective evaluation. PMID:22645264
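The luminance and contrast components named above are typically quantified with SSIM-style comparison terms between two camera views. A simplified global sketch (MIQM's actual indices are more elaborate, and the stabilizing constants `c1`, `c2` here are illustrative):

```python
import numpy as np

def luminance_contrast_indices(a, b, c1=1e-4, c2=9e-4):
    """Global luminance and contrast comparison terms in the style of SSIM.
    Both terms equal 1 when the two views match in mean and spread."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    mu_a, mu_b = a.mean(), b.mean()
    s_a, s_b = a.std(), b.std()
    lum = (2 * mu_a * mu_b + c1) / (mu_a**2 + mu_b**2 + c1)
    con = (2 * s_a * s_b + c2) / (s_a**2 + s_b**2 + c2)
    return float(lum), float(con)
```

A photometric mismatch between neighboring cameras (e.g. a brightness offset from miscalibration) lowers the luminance term while leaving the contrast term untouched, which is how the components isolate distinct distortion types.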

  1. No-reference stereoscopic image quality assessment

    NASA Astrophysics Data System (ADS)

    Akhter, Roushain; Parvez Sazzad, Z. M.; Horita, Y.; Baltes, J.

    2010-02-01

    Display of stereo images is widely used to enhance the viewing experience of three-dimensional imaging and communication systems. In this paper, we propose a method for estimating the quality of stereoscopic images using segmented image features and disparity. This method is inspired by the human visual system. We believe the perceived distortion and disparity of any stereoscopic display are strongly dependent on local features, such as edge (non-plane) and non-edge (plane) areas. Therefore, a no-reference perceptual quality assessment is developed for JPEG coded stereoscopic images based on segmented local features of artifacts and disparity. Local feature information, such as edge and non-edge area based relative disparity estimation, as well as the blockiness and the blur within the blocks of images, is evaluated in this method. Two subjective stereo image databases are used to evaluate the performance of our method. The results of the subjective experiments indicate that our model has sufficient prediction performance.

  2. Effect of sourdough on quality and acceptability of wheat flour tortillas.

    PubMed

    Ontiveros-Martínez, M del Refugio; Ochoa-Martínez, L Araceli; González-Herrera, Silvia M; Delgado-Licon, Efren; Bello-Pérez, L Arturo; Morales-Castro, Juliana

    2011-01-01

    As an alternative in the search for functional food products, this study evaluated the use of sourdough in the preparation of wheat flour tortillas. The sourdough was prepared with Lactobacillus sanfranciscensis, and the wheat flour tortillas were made with different concentrations of mother sponge (5%, 15%, and 25%) and fermentation times (1 and 3 h) at room temperature (25 ± 2 °C). Quality (diameter, height, color, pH, stretchability scores, and Kramer shear cell results) of the wheat tortillas was evaluated 24 h after preparation. The mother sponge concentration and fermentation time affected some quality parameters and acceptability properties (taste, aroma, color, opacity, and rollability). In addition, the sourdough tortillas had higher stretchability values than control tortillas. Since most of the prepared sourdough tortillas had acceptability values similar to those of the control tortillas, the introduction of sourdough is a viable means to incorporate additional nutritional and nutraceutical value into wheat tortillas. PMID:22416689

  3. Rendered virtual view image objective quality assessment

    NASA Astrophysics Data System (ADS)

    Lu, Gang; Li, Xiangchun; Zhang, Yi; Peng, Kai

    2013-08-01

    Research on rendered virtual view image (RVVI) objective quality assessment is important for integrated imaging systems and image quality assessment (IQA). Traditional IQA algorithms cannot be applied directly on the system receiver side due to inter-view displacement and the absence of an original reference. This study proposes a block-based neighbor-reference (NbR) IQA framework for RVVI quality assessment, in which the neighbor views used for rendering are employed for quality assessment. A symphonious factor handling noise and inter-view displacement is defined and applied to evaluate the contribution of the quality index obtained in each block pair. A three-stage experiment scheme is also presented to test the proposed framework and to evaluate its homogeneity performance when compared to full-reference IQA. Experimental results show the proposed framework is useful for RVVI objective quality assessment at the system receiver side and for benchmarking different rendering algorithms.

  4. Continuous assessment of perceptual image quality

    NASA Astrophysics Data System (ADS)

    Hamberg, Roelof; de Ridder, Huib

    1995-12-01

    The study addresses whether subjects are able to assess the perceived quality of an image sequence continuously. To this end, a new method for assessing time-varying perceptual image quality is presented by which subjects continuously indicate the perceived strength of image quality by moving a slider along a graphical scale. The slider's position on this scale is sampled every second. In this way, temporal variations in quality can be monitored quantitatively, and a means is provided by which differences between, for example, alternative transmission systems can be analyzed in an informative way. The usability of this method is illustrated by an experiment in which, for a period of 815 s, subjects assessed the quality of still pictures comprising time-varying degrees of sharpness. Copyright (c) 1995 Optical Society of America

  5. Quality measures in applications of image restoration.

    PubMed

    Kriete, A; Naim, M; Schafer, L

    2001-01-01

    We describe a new method for the estimation of image quality in image restoration applications. We demonstrate this technique on a simulated data set of fluorescent beads, in comparison with restoration by three different deconvolution methods. Both the number of iterations and a regularisation factor are varied to enforce changes in the resulting image quality. First, the data sets are directly compared by an accuracy measure. These values serve to validate the image quality descriptor, which is developed on the basis of optical information theory. This most general measure takes into account the spectral energies and the noise, weighted in a logarithmic fashion. It is demonstrated that this approach is particularly helpful as a user-oriented method to control the output of iterative image restorations and to eliminate the guesswork in choosing a suitable number of iterations. PMID:11587324

  6. Image quality evaluation using moving targets

    NASA Astrophysics Data System (ADS)

    Artmann, Uwe

    2013-03-01

    The basic concept of testing a digital imaging device is to reproduce a known target and to analyze the resulting image. This semi-reference approach can be used for various aspects of image quality. Each part of the imaging chain can influence the results: lens, sensor, image processing, and the target itself; the results are valid only for the complete system. If we want to test a single component, we have to make sure that we change only one and keep all others constant. When testing mobile imaging devices, we run into the problem that hardly anything can be manually controlled by the tester. Manual exposure control is not available for most devices, the focus cannot be influenced, and hardly any settings for the image processing are available. Due to the limitations in the hardware, the image pipeline in the digital signal processor (DSP) of mobile imaging devices is a critical part of the image quality evaluation. The processing power of the DSPs allows sharpening, tonal correction, and noise reduction to be non-linear and adaptive, which makes it very hard to describe their behavior for an objective image quality evaluation. The image quality is highly influenced by the signal processing for noise and resolution, and the processing is the main reason for the loss of low-contrast, fine details, the so-called texture blur. We present our experience in describing the image processing in more detail. All standardized test methods use a defined chart and require that the chart and the camera not be moved in any way during the test. In this paper, we present our results investigating the influence of chart movement during the test. Different structures, optimized for different aspects of image quality evaluation, are moved at a defined speed during the capturing process. The chart movement will change the input for the signal processing depending on the speed of the target during the test. The basic theoretical changes in the image will be the

  7. Propagation, structural similarity, and image quality

    NASA Astrophysics Data System (ADS)

    Pérez, Jorge; Mas, David; Espinosa, Julián; Vázquez, Carmen; Illueca, Carlos

    2012-06-01

    Retinal image quality is usually analysed through different parameters typical of instrumental optics, i.e., PSF, MTF, and wavefront aberrations. Although these parameters are important, they are hard to translate into visual quality parameters, since human vision exhibits some tolerance to certain aberrations. This is particularly important in post-surgery eyes, where uncommon aberrations are induced and their effect on the final image quality is not clear. Natural images usually show a strong dependency between one point and its neighbourhood. This fact aids image interpretation and should be considered when determining the final image quality. The aim of this work is to propose an objective index that allows comparing natural images on the retina and, from them, obtaining relevant information about the visual quality of a particular subject. To this end, we propose an individual eye model. The morphological data of the subject's eye are considered and the light propagation through the ocular media is calculated by means of a Fourier-transform-based method. The retinal PSF so obtained is convolved with the natural scene under consideration, and the resulting image is compared with the ideal one using the structural similarity index. The technique is applied to two eyes with a multifocal corneal profile (PresbyLasik) and can be used to determine the real extent of the achieved pseudoaccommodation.

  8. Image Acquisition and Quality in Digital Radiography.

    PubMed

    Alexander, Shannon

    2016-09-01

    Medical imaging has undergone dramatic changes and technological breakthroughs since the introduction of digital radiography. This article presents information on the development of digital radiography and types of digital radiography systems. Aspects of image quality and radiation exposure control are highlighted as well. In addition, the article includes related workplace changes and medicolegal considerations in the digital radiography environment. PMID:27601691

  9. Influence of acquisition parameters on MV-CBCT image quality.

    PubMed

    Gayou, Olivier

    2012-01-01

    The production of high quality pretreatment images plays an increasing role in image-guided radiotherapy (IGRT) and adaptive radiation therapy (ART). Megavoltage cone-beam computed tomography (MV-CBCT) is the simplest of all the commercially available volumetric imaging systems for localization. It also suffers the most from relatively poor contrast, due to the energy range of the imaging photons. Several avenues can be investigated to improve MV-CBCT image quality while maintaining an acceptable patient exposure: beam generation, detector technology, reconstruction parameters, and acquisition parameters. This article presents a study of the effects of the acquisition scan length and number of projections of a Siemens Artiste MV-CBCT system on image quality within the range provided by the manufacturer. It also discusses other aspects, not related to image quality, that one should consider when selecting an acquisition protocol. Noise and uniformity were measured on the image of a cylindrical water phantom. Spatial resolution was measured using the same phantom half filled with water to provide a sharp water/air interface from which to derive the modulation transfer function (MTF). Contrast-to-noise ratio (CNR) was measured on a pelvis-shaped phantom with four inserts of different electron densities relative to water (1.043, 1.117, 1.513, and 0.459). Uniformity was independent of acquisition protocol. Noise decreased from 1.96% to 1.64% when the total number of projections was increased from 100 to 600 for a total exposure of 13.5 MU. The CNR showed a ±5% dependence on the number of projections and a 10% dependence on the scan length. However, these variations were not statistically significant. The spatial resolution was unaffected by the arc length or the sampling rate. Acquisition parameters have little to no effect on the image quality of the MV-CBCT system within the range of parameters available on the system. Considerations other than image quality, such as memory
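The contrast-to-noise ratio measurement described above can be sketched as follows. This is an illustrative calculation, not the study's code: the ROI values, noise level, and insert density are invented stand-ins for the pelvis-phantom inserts.

```python
import numpy as np

def cnr(roi_insert, roi_background):
    """Contrast-to-noise ratio between an insert ROI and a background ROI."""
    return abs(roi_insert.mean() - roi_background.mean()) / roi_background.std()

rng = np.random.default_rng(1)
noise = 2.0
background = 100.0 + rng.normal(0.0, noise, (50, 50))   # water-equivalent region
insert = 120.0 + rng.normal(0.0, noise, (50, 50))       # denser insert, e.g. 1.117

print(cnr(insert, background))  # roughly (120 - 100) / 2 = 10
```

In the study, the same statistic is recomputed for each acquisition protocol; the reported ±5% and 10% variations are variations of this ratio.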

  10. Holographic projection with higher image quality.

    PubMed

    Qu, Weidong; Gu, Huarong; Tan, Qiaofeng

    2016-08-22

    The spatial resolution of holographic projection, limited by the size of the spatial light modulator (SLM), can hardly be increased, and speckle noise always appears, degrading image quality. In this paper, holographic projection with higher image quality is presented. The spatial resolution of the reconstructed image is twice that of the existing holographic projection, and speckle is suppressed well at the same time. Finally, the effectiveness of the holographic projection is verified in experiments. PMID:27557197

  11. Color image processing for date quality evaluation

    NASA Astrophysics Data System (ADS)

    Lee, Dah Jye; Archibald, James K.

    2010-01-01

    Many agricultural non-contact visual inspection applications use color image processing techniques because color is often a good indicator of product quality. Color evaluation is an essential step in the processing and inventory control of fruits and vegetables that directly affects profitability. Most color spaces such as RGB and HSV represent colors with three-dimensional data, which makes color image processing a challenging task. Since most agricultural applications only require analysis on a predefined set or range of colors, mapping these relevant colors to a small number of indexes allows simple and efficient color image processing for quality evaluation. This paper presents a simple but efficient color mapping and image processing technique designed specifically for real-time quality evaluation of Medjool dates. In contrast with more complex color image processing techniques, the proposed color mapping method makes it easy for a human operator to specify and adjust color-preference settings for different color groups representing distinct quality levels. Using this color mapping technique, the color image is first converted to a color map in which a single color index represents the color value of each pixel. Fruit maturity level is evaluated based on these color indices. A skin lamination threshold is then determined based on the fruit surface characteristics. This adaptive threshold is used to detect delaminated fruit skin and hence determine fruit quality. This robust color grading technique has been used for real-time Medjool date grading.
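A minimal sketch of the color-mapping idea, collapsing three-dimensional RGB values to a small set of color indices, is shown below. The palette values and group names are hypothetical; the paper's operator-tunable color-preference settings are not reproduced here.

```python
import numpy as np

# Hypothetical reference palette: one RGB centroid per quality-related color group.
PALETTE = np.array([
    [200,  60,  40],   # index 0: red-ripe
    [230, 180,  80],   # index 1: yellow-amber
    [ 90,  60,  30],   # index 2: dark brown
], dtype=float)

def to_color_map(rgb_image):
    """Map each pixel to the index of its nearest palette color (Euclidean)."""
    pixels = rgb_image.reshape(-1, 3).astype(float)
    dists = np.linalg.norm(pixels[:, None, :] - PALETTE[None, :, :], axis=2)
    return dists.argmin(axis=1).reshape(rgb_image.shape[:2])

img = np.array([[[205, 55, 45], [228, 175, 85]],
                [[ 95, 65, 25], [210, 70, 50]]], dtype=np.uint8)
print(to_color_map(img))  # nearest-palette index per pixel
```

Once every pixel carries a single small integer, maturity statistics and thresholds reduce to counting and comparing indices, which is what makes real-time grading feasible.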

  12. Perceptual image quality and telescope performance ranking

    NASA Astrophysics Data System (ADS)

    Lentz, Joshua K.; Harvey, James E.; Marshall, Kenneth H.; Salg, Joseph; Houston, Joseph B.

    2010-08-01

    Launch Vehicle Imaging Telescopes (LVIT) are expensive, high-quality devices intended for improving the safety of vehicle personnel, ground support, civilians, and physical assets during launch activities. If allowed to degrade from the combination of wear, environmental factors, and ineffective or inadequate maintenance, these devices lose their ability to provide adequate-quality imagery to analysts to prevent catastrophic events such as the NASA Space Shuttle Challenger accident in 1986 and the Columbia disaster of 2003. A software tool incorporating aberrations and diffraction that was developed for maintenance evaluation and modeling of telescope imagery is presented. This tool provides MTF-based image quality metric outputs which are correlated to ascent imagery analysts' perception of image quality, allowing a prediction of the usefulness of imagery which would be produced by a telescope under different simulated conditions.

  13. Color image attribute and quality measurements

    NASA Astrophysics Data System (ADS)

    Gao, Chen; Panetta, Karen; Agaian, Sos

    2014-05-01

    Color image quality measures have been used for many computer vision tasks. In practical applications, no-reference (NR) measures are desirable because reference images are not always accessible. However, only limited success has been achieved. Most existing NR quality assessments require that the type of image distortion is known a priori. In this paper, three NR color image attributes, colorfulness, sharpness and contrast, are quantified by new metrics. Using these metrics, a new Color Quality Measure (CQM), based on a linear combination of these three color image attributes, is presented. We evaluated the performance of several state-of-the-art no-reference measures for comparison purposes. Experimental results demonstrate that the CQM correlates well with evaluations obtained from human observers and that it operates in real time. The results also show that the presented CQM outperforms previous works with respect to ranking image quality among images containing the same or different contents. Finally, the performance of the CQM is independent of distortion type, as demonstrated in the experimental results.
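The abstract does not give the three attribute metrics or the combination weights, so the sketch below substitutes common proxies: Hasler-Susstrunk colorfulness, mean gradient magnitude for sharpness, and RMS contrast, combined with illustrative equal weights. None of these choices should be read as the paper's CQM.

```python
import numpy as np

def colorfulness(rgb):
    """Hasler-Susstrunk colorfulness (a common proxy, not the paper's metric)."""
    r, g, b = (rgb[..., i].astype(float) for i in range(3))
    rg, yb = r - g, 0.5 * (r + g) - b
    return np.hypot(rg.std(), yb.std()) + 0.3 * np.hypot(rg.mean(), yb.mean())

def sharpness(gray):
    """Mean gradient magnitude as a no-reference sharpness proxy."""
    gy, gx = np.gradient(gray.astype(float))
    return np.hypot(gx, gy).mean()

def contrast(gray):
    """RMS contrast."""
    return gray.astype(float).std()

def cqm(rgb, weights=(1.0, 1.0, 1.0)):
    """Illustrative linear combination of the three attributes."""
    gray = rgb.astype(float).mean(axis=2)
    feats = (colorfulness(rgb), sharpness(gray), contrast(gray))
    return sum(w * f for w, f in zip(weights, feats))

rng = np.random.default_rng(2)
vivid = rng.integers(0, 256, (32, 32, 3))
flat = np.full((32, 32, 3), 128)
print(cqm(vivid) > cqm(flat))  # a flat gray image scores lowest
```

Because no reference image appears anywhere in the computation, the score is usable exactly where the paper targets: images whose pristine originals are unavailable.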

  14. Computerized measurement of mammographic display image quality

    NASA Astrophysics Data System (ADS)

    Chakraborty, Dev P.; Sivarudrappa, Mahesh; Roehrig, Hans

    1999-05-01

    Since the video monitor is widely believed to be the weak link in the imaging chain, it is critical to include it in the total image quality evaluation. Yet, most physical measurements of mammographic image quality are presently limited to measurements on the digital matrix, not the displayed image. A method is described to quantitatively measure the image quality of mammographic monitors using ACR phantom-based test patterns. The image of the test pattern is digitized using a charge-coupled device (CCD) camera, and the resulting image file is analyzed by an existing phantom analysis method (Computer Analysis of Mammography Phantom Images, CAMPI). The new method is called CCD-CAMPI and it yields the signal-to-noise ratio (SNR) for an arbitrary target shape (e.g., speck, mass or fiber). In this work we show the feasibility of this idea for speck targets. Also performed were physical image quality characterizations of the monitor (so-called Fourier measures) and analysis by another template matching method due to Tapiovaara and Wagner (TW) which is closely related to CAMPI. The methods were applied to a MegaScan monitor. Test patterns containing a complete speck group superposed on a noiseless background were displayed on the monitor and a series of CCD images were acquired. These images were subjected to CCD-CAMPI and TW analyses. It was found that the SNR values for the CCD-CAMPI method tracked those of the TW method, although the latter measurements were considerably less precise. The TW SNR measure was also about 25% larger than the CCD-CAMPI determination. These differences could be understood from the manner in which the two methods evaluate the noise. Overall accuracy of the CAMPI SNR determination was 4.1% for single images when expressed as a coefficient of variance. While the SNR measures are predictable from the Fourier measures, the number of images and effort required is prohibitive, and the approach is not suited to Quality Control (QC). Unlike the Fourier

  15. Comprehensive quality assurance phantom for cardiovascular imaging systems

    NASA Astrophysics Data System (ADS)

    Lin, Pei-Jan P.

    1998-07-01

    With the advent of high heat-loading-capacity x-ray tubes, high-frequency inverter-type generators, and the use of spectral shaping filters, the automatic brightness/exposure control (ABC) circuit logic employed in the new generation of angiographic imaging equipment has been significantly reprogrammed. These new angiographic imaging systems are designed to take advantage of the power train capabilities to yield higher-contrast images while maintaining, or lowering, patient exposure. Since the emphasis of the imaging system design has been significantly altered, the system performance parameters of interest and the phantoms employed for quality assurance must also change in order to properly evaluate the imaging capability of cardiovascular imaging systems. A quality assurance (QA) phantom has been under development in this institution and was submitted to various interested organizations such as the American Association of Physicists in Medicine (AAPM), the Society for Cardiac Angiography & Interventions (SCA&I), and the National Electrical Manufacturers Association (NEMA) for their review and input. At the same time, in an effort to establish a unified standard phantom design for cardiac catheterization laboratories (CCL), SCA&I and NEMA formed a joint work group in early 1997 to develop a suitable phantom. The initial QA phantom design has since been accepted to serve as the base phantom by the SCA&I-NEMA Joint Work Group (JWG), from which a comprehensive QA phantom is being developed.

  16. A database for spectral image quality

    NASA Astrophysics Data System (ADS)

    Le Moan, Steven; George, Sony; Pedersen, Marius; Blahová, Jana; Hardeberg, Jon Yngve

    2015-01-01

    We introduce a new image database dedicated to multi-/hyperspectral image quality assessment. A total of nine scenes representing pseudo-flat surfaces of different materials (textile, wood, skin, etc.) were captured by means of a 160-band hyperspectral system with a spectral range between 410 and 1000 nm. Five spectral distortions were designed, applied to the spectral images and subsequently compared in a psychometric experiment, in order to provide a basis for applications such as the evaluation of spectral image difference measures. The database can be downloaded freely from http://www.colourlab.no/cid.
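A spectral image difference measure of the kind this database is meant to evaluate can be as simple as a per-pixel RMS difference across bands. The sketch below is an illustrative baseline, not one of the paper's distortions or measures, and the toy cube matches the 160-band system only nominally.

```python
import numpy as np

def spectral_rmse(ref, dist):
    """Per-pixel RMS difference across bands, averaged over the image."""
    return np.sqrt(((ref - dist) ** 2).mean(axis=-1)).mean()

rng = np.random.default_rng(3)
ref = rng.random((16, 16, 160))          # toy 160-band cube (410-1000 nm range)
shifted = np.clip(ref + 0.05, 0, 1)      # a simple distortion: uniform offset

print(spectral_rmse(ref, ref))           # identical cubes give zero difference
print(spectral_rmse(ref, shifted) > 0.0)
```

Evaluating such measures against the psychometric rankings in the database is exactly the use case the authors describe.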

  17. How much image noise can be added in cardiac x-ray imaging without loss in perceived image quality?

    NASA Astrophysics Data System (ADS)

    Gislason-Lee, Amber J.; Kumcu, Asli; Kengyelics, Stephen M.; Rhodes, Laura A.; Davies, Andrew G.

    2015-03-01

    Dynamic X-ray imaging systems are used for interventional cardiac procedures to treat coronary heart disease. X-ray settings are controlled automatically by specially-designed X-ray dose control mechanisms whose role is to ensure an adequate level of image quality is maintained with an acceptable radiation dose to the patient. Current commonplace dose control designs quantify image quality by performing a simple technical measurement directly from the image. However, the utility of cardiac X-ray images is in their interpretation by a cardiologist during an interventional procedure, rather than in a technical measurement. With the long-term goal of devising a clinically-relevant image quality metric for an intelligent dose control system, we aim to investigate the relationship of image noise with clinical professionals' perception of dynamic image sequences. Computer-generated noise was added, in incremental amounts, to angiograms of five different patients selected to represent the range of adult cardiac patient sizes. A two-alternative forced choice staircase experiment was used to determine the amount of noise which can be added to a patient image sequence without changing image quality as perceived by clinical professionals. Twenty-five viewing sessions (five for each patient) were completed by thirteen observers. Results demonstrated scope to increase the noise of cardiac X-ray images by up to 21% ± 8% before it is noticeable by clinical professionals. This indicates a potential for a 21% radiation dose reduction, since X-ray image noise and radiation dose are directly related; this would be beneficial to both patients and personnel.
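The core manipulation, adding noise in increments expressed as a percentage of the image's own noise level, might be sketched as below. The local-mean residual used here to estimate the existing noise is a crude stand-in; the study's actual noise model is not described in the abstract.

```python
import numpy as np

def add_relative_noise(image, percent, rng):
    """Add Gaussian noise scaled to a percentage of the image's estimated noise.

    The noise estimate (std of the residual from a 4-neighbour average) is an
    illustrative assumption, not the study's method.
    """
    img = image.astype(float)
    pad = np.pad(img, 1, mode="edge")
    local_mean = (pad[:-2, 1:-1] + pad[2:, 1:-1] +
                  pad[1:-1, :-2] + pad[1:-1, 2:]) / 4.0
    sigma = (img - local_mean).std()
    return img + rng.normal(0.0, sigma * percent / 100.0, img.shape)

rng = np.random.default_rng(4)
frame = rng.normal(100.0, 5.0, (64, 64))          # toy angiogram frame
noisier = add_relative_noise(frame, 21.0, rng)    # the ~21% headroom found above
print(noisier.std() > frame.std())
```

In the experiment, frames degraded this way are paired with the originals and shown to observers in a forced-choice staircase until the added noise becomes detectable.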

  18. Decision theory applied to image quality control in radiology

    PubMed Central

    Lessa, Patrícia S; Caous, Cristofer A; Arantes, Paula R; Amaro, Edson; de Souza, Fernando M Campello

    2008-01-01

    Background The present work aims at the application of decision theory to radiological image quality control (QC) in diagnostic routine. The main problem addressed in the framework of decision theory is to accept or reject a film lot of a radiology service. The probability of each decision for a determined set of variables was obtained from the selected films. Methods Based on a radiology service routine, a decision probability function was determined for each considered group of combined characteristics. These characteristics were related to film quality control. The parameters were framed in a set of 8 possibilities, resulting in 256 possible decision rules. In order to determine a general utility function to assess the decision risk, we used a simple unique parameter called r. The payoffs chosen were: diagnostic result (correct/incorrect), cost (high/low), and patient satisfaction (yes/no), resulting in eight possible combinations. Results Depending on the value of r, more or less risk will be associated with the decision-making. The utility function was evaluated in order to determine the probability of a decision. The decision was made with patients' or administrators' opinions from a radiology service center. Conclusion The model is a formal quantitative approach to making a decision related to medical imaging quality, providing an instrument to discriminate what is really necessary to accept or reject a film or a film lot. The method presented herein can help to assess the risk level of an incorrect radiological diagnosis decision. PMID:19014545

  19. Blind image quality assessment through anisotropy.

    PubMed

    Gabarda, Salvador; Cristóbal, Gabriel

    2007-12-01

    We describe an innovative methodology for determining the quality of digital images. The method is based on measuring the variance of the expected entropy of a given image upon a set of predefined directions. Entropy can be calculated on a local basis by using a spatial/spatial-frequency distribution as an approximation for a probability density function. The generalized Rényi entropy and the normalized pseudo-Wigner distribution (PWD) have been selected for this purpose. As a consequence, a pixel-by-pixel entropy value can be calculated, and therefore entropy histograms can be generated as well. The variance of the expected entropy is measured as a function of the directionality, and it has been taken as an anisotropy indicator. For this purpose, directional selectivity can be attained by using an oriented 1-D PWD implementation. Our main purpose is to show how such an anisotropy measure can be used as a metric to assess both the fidelity and quality of images. Experimental results show that an index such as this presents some desirable features that resemble those from an ideal image quality function, constituting a suitable quality index for natural images. Namely, in-focus, noise-free natural images have shown a maximum of this metric in comparison with other degraded, blurred, or noisy versions. This result provides a way of identifying in-focus, noise-free images from other degraded versions, allowing an automatic and nonreference classification of images according to their relative quality. It is also shown that the new measure is well correlated with classical reference metrics such as the peak signal-to-noise ratio. PMID:18059913

  20. Monotonic correlation analysis of image quality measures for image fusion

    NASA Astrophysics Data System (ADS)

    Kaplan, Lance M.; Burks, Stephen D.; Moore, Richard K.; Nguyen, Quang

    2008-04-01

    The next generation of night vision goggles will fuse image-intensified and long-wave infrared imagery to create a hybrid image that will enable soldiers to better interpret their surroundings during nighttime missions. Paramount to the development of such goggles is the exploitation of image quality (IQ) measures to automatically determine the best image fusion algorithm for a particular task. This work introduces a novel monotonic correlation coefficient to investigate how well candidate IQ features correlate with actual human performance, which is measured by a perception study. The paper demonstrates how monotonic correlation can identify worthy features that could be overlooked by traditional correlation values.
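The paper's novel monotonic correlation coefficient is not specified in the abstract. As a hedged stand-in, the classic monotonic measure is Spearman's rank correlation, the Pearson correlation of the ranks, which already captures the monotone-but-nonlinear relationships the authors target:

```python
import numpy as np

def rank(v):
    """Ranks of the values, with ties averaged."""
    v = np.asarray(v, dtype=float)
    order = np.argsort(v)
    r = np.empty(len(v), dtype=float)
    r[order] = np.arange(1, len(v) + 1)
    for val in np.unique(v):          # average ranks over tied values
        mask = v == val
        r[mask] = r[mask].mean()
    return r

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = rank(x), rank(y)
    rx -= rx.mean()
    ry -= ry.mean()
    return (rx * ry).sum() / np.sqrt((rx**2).sum() * (ry**2).sum())

iq_feature = [0.1, 0.4, 0.2, 0.9, 0.7]      # candidate IQ feature values
performance = [0.01, 0.3, 0.05, 5.0, 1.2]   # monotone but non-linear response
print(spearman(iq_feature, performance))    # 1.0: perfectly monotonic relation
```

A linear (Pearson) correlation would under-rate this feature because the response is strongly non-linear, which is exactly the failure mode a monotonic coefficient avoids.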

  1. Filtering Chromatic Aberration for Wide Acceptance Angle Electrostatic Lenses II--Experimental Evaluation and Software-Based Imaging Energy Analyzer.

    PubMed

    Fazekas, Ádám; Daimon, Hiroshi; Matsuda, Hiroyuki; Tóth, László

    2016-03-01

    Here, the experimental results of the method of filtering the effect of chromatic aberration for a wide-acceptance-angle electrostatic-lens-based system are described. This method can eliminate the effect of chromatic aberration from the images of a measured spectral image sequence by determining and removing the effect of higher and lower kinetic energy electrons on each different energy image, which leads to significant improvement of image and spectral quality. The method is based on the numerical solution of a large system of linear equations and is equivalent to a multivariate, strongly nonlinear deconvolution method. A matrix whose elements describe the strongly nonlinear chromatic-aberration-related transmission function of the lens system acts on the vector of the ordered pixels of the distortion-free spectral image sequence, and produces the vector of the ordered pixels of the measured spectral image sequence. Since the method can be applied not only to 2D real- and k-space diffraction images, but also along the third dimension of the image sequence in the 3D parameter space, that is, along the optical or energy axis, it functions as a software-based imaging energy analyzer (SBIEA). It can also be applied in the case of light or other types of optics for different optical aberrations and distortions. In the case of electron optics, the SBIEA method makes spectral imaging possible without the application of any other energy filter. It is notable that this method also significantly reduces the disturbing background in the present investigated case of reflection electron energy loss spectra. It eliminates the instrumental effects and makes it possible to measure the real physical processes better. PMID:26863662

  2. Lessons learned in WISE image quality

    NASA Astrophysics Data System (ADS)

    Kendall, Martha; Duval, Valerie G.; Larsen, Mark F.; Heinrichsen, Ingolf H.; Esplin, Roy W.; Shannon, Mark; Wright, Edward L.

    2010-08-01

    The Wide-Field Infrared Survey Explorer (WISE) mission launched in December of 2009 is a true success story. The mission is performing beyond expectations on-orbit and maintained cost and schedule throughout. How does such a thing happen? A team constantly focused on mission success is a key factor. Mission success is more than a program meeting its ultimate science goals; it is also meeting schedule and cost goals to avoid cancellation. The WISE program can attribute some of its success in achieving the image quality needed to meet science goals to lessons learned along the way. A requirement was missed in early decomposition, the absence of which would have adversely affected end-to-end system image quality. Fortunately, the ability of the cross-organizational team to focus on fixing the problem without pointing fingers or waiting for paperwork was crucial in achieving a timely solution. Asking layman questions early in the program could have revealed requirement-flowdown misunderstandings between spacecraft control stability and image processing needs. Such is the lesson learned with the WISE spacecraft Attitude Determination & Control Subsystem (ADCS) jitter control and the image data reduction needs. Spacecraft motion can affect image quality in numerous ways. Something as seemingly benign as different terminology being used by teammates in separate groups working on data reduction, spacecraft ADCS, the instrument, mission operations, and the science proved to be a risk to system image quality. While the spacecraft was meeting the allocated jitter requirement, the drift rate variation need was not being met. This missing need was noticed about a year before launch and, with a dedicated team effort, an adjustment was made to the spacecraft ADCS control. WISE is meeting all image quality requirements on-orbit thanks to a diligent team noticing something was missing before it was too late and applying their best effort to find a solution.

  3. Subjective matters: from image quality to image psychology

    NASA Astrophysics Data System (ADS)

    Fedorovskaya, Elena A.; De Ridder, Huib

    2013-03-01

    From the advent of digital imaging through several decades of studies, the human vision research community systematically focused on perceived image quality and digital artifacts due to resolution, compression, gamma, dynamic range, capture and reproduction noise, blur, etc., to help overcome existing technological challenges and shortcomings. Technological advances have made digital images and digital multimedia nearly flawless in quality, and ubiquitous and pervasive in usage, providing us with the exciting but at the same time demanding possibility of turning to the domain of human experience, including higher psychological functions such as cognition, emotion, awareness, social interaction, consciousness and Self. In this paper we outline the evolution of human-centered multidisciplinary studies related to imaging and propose steps and potential foci of future research.

  4. Measuring image quality in overlapping areas of panoramic composed images

    NASA Astrophysics Data System (ADS)

    Mitjà, Carles; Bover, Toni; Escofet, Jaume

    2012-06-01

    Several professional photographic applications use the merging of consecutive overlapping images to obtain bigger files by means of stitching techniques, or an extended field of view (FOV) for panoramic images. All of these applications share the fact that the final composed image is obtained by overlapping the neighboring areas of consecutive individual images taken as a mosaic or a series of tiles over the scene, from the same point of view. Any individual image taken with a given lens can carry residual aberrations, and several of them will most probably affect the borders of the image frame. Furthermore, the amount of distortion aberration present in the images of a given lens will be reversed in position for the two overlapping areas of a pair of consecutive takes. Finally, the different images used to compose the final one have corresponding overlapping areas taken with different perspective. From all of the above it follows that the software employed must remap all the pixel information in order to resize and match image features in those overlapping areas, providing a final composed image with the desired perspective projection. The work presented analyses two panoramic-format images taken with a pair of lenses and composed by means of state-of-the-art stitching software. A series of images is taken to cover an FOV three times the original lens FOV, the images are merged by means of software in common use in professional panoramic photography, and the final image quality is evaluated through a series of targets positioned at strategic locations over the whole field of view. This allows measuring the resulting resolution and Modulation Transfer Function (MTF). The results are compared with the previous measures on the original individual images.

  5. FFDM image quality assessment using computerized image texture analysis

    NASA Astrophysics Data System (ADS)

    Berger, Rachelle; Carton, Ann-Katherine; Maidment, Andrew D. A.; Kontos, Despina

    2010-04-01

    Quantitative measures of image quality (IQ) are routinely obtained during the evaluation of imaging systems. These measures, however, do not necessarily correlate with the IQ of the actual clinical images, which can also be affected by factors such as patient positioning. No quantitative method currently exists to evaluate clinical IQ. Therefore, we investigated the potential of using computerized image texture analysis to quantitatively assess IQ. Our hypothesis is that image texture features can be used to assess IQ as a measure of the image signal-to-noise ratio (SNR). To test feasibility, the "Rachel" anthropomorphic breast phantom (Model 169, Gammex RMI) was imaged with a Senographe 2000D FFDM system (GE Healthcare) using 220 unique exposure settings (target/filter, kV, and mAs combinations). The mAs were varied from 10%-300% of that required for an average glandular dose (AGD) of 1.8 mGy. A 2.5 cm² retroareolar region of interest (ROI) was segmented from each image. The SNR was computed from the ROIs segmented from images linear with dose (i.e., raw images) after flat-field and offset correction. Image texture features of skewness, coarseness, contrast, energy, homogeneity, and fractal dimension were computed from the Premium View™ postprocessed image ROIs. Multiple linear regression demonstrated a strong association between the computed image texture features and SNR (R² = 0.92, p ≤ 0.001). When including kV, target and filter as additional predictor variables, a stronger association with SNR was observed (R² = 0.95, p ≤ 0.001). The strong associations indicate that computerized image texture analysis can be used to measure image SNR and potentially aid in automating IQ assessment as a component of the clinical workflow. Further work is underway to validate our findings in larger clinical datasets.
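A sketch of the kind of ROI statistics involved, SNR plus simple texture features such as skewness and histogram energy, is given below. The feature definitions are common textbook forms, not necessarily those used in the study, and the ROIs are synthetic.

```python
import numpy as np

def snr(roi):
    """Mean over standard deviation of the ROI."""
    return roi.mean() / roi.std()

def skewness(roi):
    """Third standardized moment of the pixel values."""
    x = roi.astype(float).ravel()
    z = (x - x.mean()) / x.std()
    return (z ** 3).mean()

def energy(roi, bins=64, value_range=(0, 200)):
    """Sum of squared normalized histogram counts (histogram uniformity)."""
    p, _ = np.histogram(roi, bins=bins, range=value_range)
    p = p / p.sum()
    return (p ** 2).sum()

rng = np.random.default_rng(5)
low_dose = rng.normal(100.0, 20.0, (64, 64))    # noisy ROI (low SNR)
high_dose = rng.normal(100.0, 5.0, (64, 64))    # cleaner ROI (high SNR)

print(snr(high_dose) > snr(low_dose))
print(energy(high_dose) > energy(low_dose))     # smoother texture, peakier histogram
```

The study's point is precisely this coupling: texture features computed from the processed image track the SNR of the underlying raw image, so they can serve as an SNR surrogate when raw data are unavailable.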

  6. Geometric assessment of image quality using digital image registration techniques

    NASA Technical Reports Server (NTRS)

    Tisdale, G. E.

    1976-01-01

    Image registration techniques were developed to perform a geometric quality assessment of multispectral and multitemporal image pairs. Based upon LANDSAT tapes, accuracies to a small fraction of a pixel were demonstrated. Because it is insensitive to the choice of registration areas, the technique is well suited to performance in an automatic system. It may be implemented at megapixel-per-second rates using a commercial minicomputer in combination with a special purpose digital preprocessor.

  7. Image quality measures and their performance

    NASA Technical Reports Server (NTRS)

    Eskicioglu, Ahmet M.; Fisher, Paul S.; Chen, Si-Yuan

    1994-01-01

    A number of quality measures are evaluated for gray scale image compression. They are all bivariate exploiting the differences between corresponding pixels in the original and degraded images. It is shown that although some numerical measures correlate well with the observers' response for a given compression technique, they are not reliable for an evaluation across different techniques. The two graphical measures (histograms and Hosaka plots), however, can be used to appropriately specify not only the amount, but also the type of degradation in reconstructed images.
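Two of the standard bivariate measures evaluated in this line of work, MSE and PSNR, can be stated compactly; the sketch below assumes 8-bit grayscale images and synthetic degradations.

```python
import numpy as np

def mse(ref, deg):
    """Mean squared error between corresponding pixels."""
    return ((ref.astype(float) - deg.astype(float)) ** 2).mean()

def psnr(ref, deg, peak=255.0):
    """Peak signal-to-noise ratio in dB for 8-bit gray images."""
    return 10.0 * np.log10(peak ** 2 / mse(ref, deg))

rng = np.random.default_rng(6)
original = rng.integers(0, 256, (64, 64)).astype(float)
mild = np.clip(original + rng.normal(0, 2, original.shape), 0, 255)
severe = np.clip(original + rng.normal(0, 20, original.shape), 0, 255)

print(psnr(original, mild) > psnr(original, severe))  # less degradation, higher PSNR
```

The paper's caveat applies directly to such measures: they may rank reconstructions consistently within one compression technique yet disagree with observers when comparing across techniques.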

  8. Acceptability of quality reporting and pay for performance among primary health centers in Lebanon.

    PubMed

    Saleh, Shadi S; Alameddine, Mohamad S; Natafgi, Nabil M

    2013-01-01

    Primary health care (PHC) is emphasized as the cornerstone of any health care system. Enhancing PHC performance is considered a strategy to enhance effective and equitable access to care. This study assesses the acceptability of and factors associated with quality reporting among PHC centers (PHCCs) in Lebanon. The managers of 132 Lebanese Ministry of Health PHCCs were surveyed using a cross-sectional design. Managers' willingness to report quality, participate in comparative quality assessments, and endorse pay-for-performance schemes was evaluated. Collected data were matched to the infrastructural characteristics and services database. Seventy-six percent of managers responded to the questionnaire, 93 percent of whom were willing to report clinical performance. Most expressed strong support for peer-performance comparison and pay-for-performance schemes. Willingness to report was negatively associated with the religious affiliation of centers and presence of health care facilities in the catchment area and favorably associated with use of information systems and the size of population served. The great willingness of PHCC managers to employ quality-enhancing initiatives flags a policy priority for PHC stakeholders to strengthen PHCC infrastructure and to enable reporting in an easy, standardized, and systematic way. Enhancing equity necessitates education and empowerment of managers in remote areas and those managing religiously affiliated centers. PMID:24397238

  9. Quality evaluation of fruit by hyperspectral imaging

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This chapter presents new applications of hyperspectral imaging for measuring the optical properties of fruits and assessing their quality attributes. A brief overview is given of current techniques for measuring optical properties of turbid and opaque biological materials. Then a detailed descripti...

  10. Image Quality Indicator for Infrared Inspections

    NASA Technical Reports Server (NTRS)

    Burke, Eric

    2011-01-01

    The quality of images generated during an infrared thermal inspection depends on many system variables, settings, and parameters, including the focal length setting of the IR camera lens. If any relevant parameter is incorrect or sub-optimal, the resulting IR images will usually exhibit inherent unsharpness and lack of resolution. Traditional reference standards and image quality indicators (IQIs) are made of representative hardware samples and contain representative flaws of concern. These standards are used to verify that representative flaws can be detected with the current IR system settings. However, these traditional standards do not enable the operator to quantify the quality limitations of the resulting images, i.e., determine the inherent maximum image sensitivity and image resolution. As a result, the operator does not have the ability to optimize the IR inspection system prior to data acquisition. The innovative IQI described here eliminates this limitation and enables the operator to objectively quantify and optimize the relevant variables of the IR inspection system, resulting in enhanced image quality with consistency and repeatability in the inspection application. The IR IQI consists of various copper foil features of known sizes that are printed on a dielectric non-conductive board. The significant difference in thermal conductivity between the two materials ensures that each appears with a distinct grayscale or brightness in the resulting IR image. Therefore, the IR image of the IQI exhibits high contrast between the copper features and the underlying dielectric board, which is required to detect the edges of the various copper features. The copper features consist of individual elements of various shapes and sizes, or of element-pairs of known shapes and sizes with known spacing between the elements forming each pair. For example, filled copper circles with various diameters can be used as individual elements to quantify the image sensitivity.
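The detection criterion implied here, that a feature's brightness must stand out from the board against the background noise, can be illustrated with a simple contrast-to-noise ratio on a synthetic frame. This is a minimal sketch under invented numbers (the brightness levels, disc size, and noise level are assumptions for illustration), not the actual IQI analysis procedure:

```python
import numpy as np

def feature_cnr(image, feature_mask):
    """Contrast-to-noise ratio of a printed feature against the board.

    A copper feature is counted as detectable when its mean brightness
    differs from the dielectric background by well more than the
    background noise.
    """
    feature = image[feature_mask]
    background = image[~feature_mask]
    noise = background.std()
    return abs(feature.mean() - background.mean()) / noise

# Synthetic IR frame: a bright copper disc on a darker, noisy board.
rng = np.random.default_rng(0)
frame = rng.normal(100.0, 2.0, (64, 64))          # board with thermal noise
yy, xx = np.mgrid[:64, :64]
disc = (yy - 32) ** 2 + (xx - 32) ** 2 <= 8 ** 2  # filled copper circle
frame[disc] += 50.0                               # copper reads brighter

cnr = feature_cnr(frame, disc)
```

Repeating such a measurement over circles of decreasing diameter would give one way to locate the sensitivity limit of a given camera setup.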

  11. Scene reduction for subjective image quality assessment

    NASA Astrophysics Data System (ADS)

    Lewandowska (Tomaszewska), Anna

    2016-01-01

    Evaluation of image quality is important for many image processing systems, such as those used for acquisition, compression, restoration, enhancement, or reproduction. Its measurement is often accompanied by user studies, in which a group of observers rank or rate the results of several algorithms. Such user studies, known as subjective image quality assessment experiments, can be very time consuming and do not guarantee conclusive results. This paper is intended to help design an efficient and rigorous quality assessment experiment. We propose a method of limiting the number of scenes that need to be tested, which can significantly reduce the experimental effort and still capture relevant scene-dependent effects. To achieve this, we employ a clustering technique and evaluate it on the basis of compactness and separation criteria. The correlation between the results obtained from a set of images in an initial database and the results obtained from the reduced experiment is analyzed. Finally, we propose a procedure for reducing the number of scenes in the initial set. Four different assessment techniques were tested: single stimulus, double stimulus, forced choice, and similarity judgments. We conclude that in most cases, 9 to 12 judgments per evaluated algorithm for a large scene collection are sufficient to reduce the initial set of images.
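The scene-reduction idea, cluster the scene collection and keep one representative per cluster, can be sketched with a plain k-means over scene feature vectors. This is an illustrative sketch only; the feature vectors, cluster count, and nearest-to-centroid selection rule below are assumptions, not the paper's exact procedure:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's k-means; returns cluster labels and centroids."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None] - centroids[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):          # keep old centroid if cluster empties
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

def reduce_scenes(X, k):
    """Keep one representative scene per cluster: the one nearest its centroid."""
    labels, centroids = kmeans(X, k)
    reps = []
    for j in range(k):
        idx = np.where(labels == j)[0]
        if idx.size == 0:
            continue
        d = np.linalg.norm(X[idx] - centroids[j], axis=1)
        reps.append(int(idx[d.argmin()]))
    return sorted(reps)

# Toy example: 30 scenes described by 2 features, in 3 natural groups.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(c, 0.3, (10, 2)) for c in ((0, 0), (5, 0), (0, 5))])
reps = reduce_scenes(X, 3)
```

Compactness and separation criteria (e.g. a silhouette-style score) would then be used to choose how many clusters, and hence how many test scenes, are actually needed.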

  12. The influence of tinnitus acceptance on the quality of life and psychological distress in patients with chronic tinnitus

    PubMed Central

    Riedl, David; Rumpold, Gerhard; Schmidt, Annette; Zorowka, Patrick G.; Bliem, Harald R.; Moschen, Roland

    2015-01-01

    Recent findings show the importance of acceptance in the treatment of chronic tinnitus. So far, very limited research investigating the different levels of tinnitus acceptance has been conducted. The aim of this study was to investigate the quality of life (QoL) and psychological distress in patients with chronic tinnitus who reported different levels of tinnitus acceptance. The sample consisted of outpatients taking part in a tinnitus coping group (n = 97). Correlations between tinnitus acceptance, psychological distress, and QoL were calculated. Receiver operating characteristic (ROC) curves were used to calculate a cutoff score for the German “Tinnitus Acceptance Questionnaire” (CTAQ-G) and to evaluate the screening abilities of the CTAQ-G. Independent sample t-tests were conducted to compare QoL and psychological distress in patients with low tinnitus acceptance and high tinnitus acceptance. A cutoff point for CTAQ-G of 62.5 was defined, differentiating between patients with “low-to-mild tinnitus acceptance” and “moderate-to-high tinnitus acceptance.” Patients with higher levels of tinnitus acceptance reported a significantly higher QoL and lower psychological distress. Tinnitus acceptance plays an important role for patients with chronic tinnitus. Increased levels of acceptance are related to better QoL and less psychological distress. PMID:26356381
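A common way to derive a questionnaire cutoff from an ROC analysis is to pick the score that maximizes Youden's J (sensitivity + specificity − 1). The sketch below uses invented scores and labels, not the CTAQ-G study data, and Youden's index is only one of several cutoff criteria the authors might have applied:

```python
def best_cutoff(scores, labels):
    """Cutoff maximizing Youden's J = sensitivity + specificity - 1.

    labels: 1 for the target group (e.g. high acceptance), 0 otherwise;
    a case is classified positive when its score >= cutoff.
    """
    best_j, best_c = -1.0, None
    for c in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s >= c and y == 1)
        fn = sum(1 for s, y in zip(scores, labels) if s < c and y == 1)
        tn = sum(1 for s, y in zip(scores, labels) if s < c and y == 0)
        fp = sum(1 for s, y in zip(scores, labels) if s >= c and y == 0)
        j = tp / (tp + fn) + tn / (tn + fp) - 1
        if j > best_j:
            best_j, best_c = j, c
    return best_c, best_j

# Invented example: questionnaire scores and a binary reference criterion.
scores = [40, 50, 55, 60, 65, 70, 80, 90]
labels = [0, 0, 0, 0, 1, 1, 1, 1]
cutoff, j = best_cutoff(scores, labels)
```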

  13. Analysis of image quality based on perceptual preference

    NASA Astrophysics Data System (ADS)

    Xue, Liqin; Hua, Yuning; Zhao, Guangzhou; Qi, Yaping

    2007-11-01

    This paper deals with image quality analysis considering the impact of psychological factors involved in assessment. The attributes of the image quality requirements were partitioned according to visual perception characteristics, and preferences for image quality were obtained by the factor analysis method. The features of image quality that support subjective preference were identified, and image adequacy was found to be the top requirement for improving display image quality. This approach will benefit research on subjective quantitative methods of image quality assessment.

  14. Prediction of Viking lander camera image quality

    NASA Technical Reports Server (NTRS)

    Huck, F. O.; Burcher, E. E.; Jobson, D. J.; Wall, S. D.

    1976-01-01

    Formulations are presented that permit prediction of image quality as a function of camera performance, surface radiance properties, and lighting and viewing geometry. Predictions made for a wide range of surface radiance properties reveal that image quality depends strongly on proper camera dynamic range command and on favorable lighting and viewing geometry. Proper camera dynamic range commands depend mostly on the surface albedo that will be encountered. Favorable lighting and viewing geometries depend mostly on lander orientation with respect to the diurnal sun path over the landing site, and tend to be independent of surface albedo and illumination scattering function. Side lighting with low sun elevation angles (10 to 30 deg) is generally favorable for imaging spatial details and slopes, whereas high sun elevation angles are favorable for measuring spectral reflectances.

  15. Physical measures of image quality in mammography

    NASA Astrophysics Data System (ADS)

    Chakraborty, Dev P.

    1996-04-01

    A recently introduced method for quantitative analysis of images of the American College of Radiology (ACR) mammography accreditation phantom has been extended to include signal- to-noise-ratio (SNR) measurements, and has been applied to survey the image quality of 54 mammography machines from 17 hospitals. Participants sent us phantom images to be evaluated for each mammography machine at their hospital. Each phantom was loaned to us for obtaining images of the wax insert plate on a reference machine at our institution. The images were digitized and analyzed to yield indices that quantified the image quality of the machines precisely. We have developed methods for normalizing for the variation of the individual speck sizes between different ACR phantoms, for the variation of the speck sizes within a microcalcification group, and for variations in overall speeds of the mammography systems. In terms of the microcalcification SNR, the variability of the x-ray machines was 40.5% when no allowance was made for phantom or mAs variations. This dropped to 17.1% when phantom variability was accounted for, and to 12.7% when mAs variability was also allowed for. Our work shows the feasibility of practical, low-cost, objective and accurate evaluations, as a useful adjunct to the present ACR method.

  16. Naturalness and interestingness of test images for visual quality evaluation

    NASA Astrophysics Data System (ADS)

    Halonen, Raisa; Westman, Stina; Oittinen, Pirkko

    2011-01-01

    Balanced and representative test images are needed to study perceived visual quality in various application domains. This study investigates naturalness and interestingness as image quality attributes in the context of test images. Taking a top-down approach we aim to find the dimensions which constitute naturalness and interestingness in test images and the relationship between these high-level quality attributes. We compare existing collections of test images (e.g. Sony sRGB images, ISO 12640 images, Kodak images, Nokia images and test images developed within our group) in an experiment combining quality sorting and structured interviews. Based on the data gathered we analyze the viewer-supplied criteria for naturalness and interestingness across image types, quality levels and judges. This study advances our understanding of subjective image quality criteria and enables the validation of current test images, furthering their development.

  17. Quantitative statistical methods for image quality assessment.

    PubMed

    Dutta, Joyita; Ahn, Sangtae; Li, Quanzheng

    2013-01-01

    Quantitative measures of image quality and reliability are critical for both qualitative interpretation and quantitative analysis of medical images. While, in theory, it is possible to analyze reconstructed images by means of Monte Carlo simulations using a large number of noise realizations, the associated computational burden makes this approach impractical. Additionally, this approach is less meaningful in clinical scenarios, where multiple noise realizations are generally unavailable. The practical alternative is to compute closed-form analytical expressions for image quality measures. The objective of this paper is to review statistical analysis techniques that enable us to compute two key metrics: resolution (determined from the local impulse response) and covariance. The underlying methods include fixed-point approaches, which compute these metrics at a fixed point (the unique and stable solution) independent of the iterative algorithm employed, and iteration-based approaches, which yield results that are dependent on the algorithm, initialization, and number of iterations. We also explore extensions of some of these methods to a range of special contexts, including dynamic and motion-compensated image reconstruction. While most of the discussed techniques were developed for emission tomography, the general methods are extensible to other imaging modalities as well. In addition to enabling image characterization, these analysis techniques allow us to control and enhance imaging system performance. We review practical applications where performance improvement is achieved by applying these ideas to the contexts of both hardware (optimizing scanner design) and image reconstruction (designing regularization functions that produce uniform resolution or maximize task-specific figures of merit). PMID:24312148

  18. Quantitative Statistical Methods for Image Quality Assessment

    PubMed Central

    Dutta, Joyita; Ahn, Sangtae; Li, Quanzheng

    2013-01-01

    Quantitative measures of image quality and reliability are critical for both qualitative interpretation and quantitative analysis of medical images. While, in theory, it is possible to analyze reconstructed images by means of Monte Carlo simulations using a large number of noise realizations, the associated computational burden makes this approach impractical. Additionally, this approach is less meaningful in clinical scenarios, where multiple noise realizations are generally unavailable. The practical alternative is to compute closed-form analytical expressions for image quality measures. The objective of this paper is to review statistical analysis techniques that enable us to compute two key metrics: resolution (determined from the local impulse response) and covariance. The underlying methods include fixed-point approaches, which compute these metrics at a fixed point (the unique and stable solution) independent of the iterative algorithm employed, and iteration-based approaches, which yield results that are dependent on the algorithm, initialization, and number of iterations. We also explore extensions of some of these methods to a range of special contexts, including dynamic and motion-compensated image reconstruction. While most of the discussed techniques were developed for emission tomography, the general methods are extensible to other imaging modalities as well. In addition to enabling image characterization, these analysis techniques allow us to control and enhance imaging system performance. We review practical applications where performance improvement is achieved by applying these ideas to the contexts of both hardware (optimizing scanner design) and image reconstruction (designing regularization functions that produce uniform resolution or maximize task-specific figures of merit). PMID:24312148

  19. No-reference image quality metric based on image classification

    NASA Astrophysics Data System (ADS)

    Choi, Hyunsoo; Lee, Chulhee

    2011-12-01

    In this article, we present a new no-reference (NR) objective image quality metric based on image classification. We also propose a new blocking metric and a new blur metric. Both metrics are NR metrics since they need no information from the original image. The blocking metric was computed by considering that the visibility of horizontal and vertical blocking artifacts can change depending on background luminance levels. When computing the blur metric, we took into account the fact that blurring in edge regions is generally more sensitive to the human visual system. Since different compression standards usually produce different compression artifacts, we classified images into two classes using the proposed blocking metric: one class that contained blocking artifacts and another class that did not contain blocking artifacts. Then, we used different quality metrics based on the classification results. Experimental results show that each metric correlated well with subjective ratings, and the proposed NR image quality metric consistently provided good performance with various types of content and distortions.
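A crude version of a blocking metric, comparing gradient energy at 8-pixel block boundaries with gradient energy elsewhere, can be sketched as follows. This toy measure ignores the background-luminance visibility weighting the article describes and is only an illustration of the general idea:

```python
import numpy as np

def blockiness(img, block=8):
    """Ratio of mean gradient magnitude at block boundaries to elsewhere.

    Values near 1 mean no visible block structure; much larger values
    indicate blocking artifacts such as those left by JPEG compression.
    """
    dh = np.abs(np.diff(img.astype(float), axis=1))
    boundary = dh[:, block - 1::block].mean()
    interior = np.delete(dh, np.s_[block - 1::block], axis=1).mean()
    return boundary / (interior + 1e-9)

# Synthetic test images: constant 8x8 tiles (JPEG-like) vs. a smooth ramp.
rng = np.random.default_rng(2)
blocks = rng.integers(0, 256, (8, 8)).astype(float)
blocky = np.kron(blocks, np.ones((8, 8)))              # hard 8x8 structure
smooth = np.tile(np.arange(64, dtype=float), (64, 1))  # gentle ramp, no blocks
b_blocky, b_smooth = blockiness(blocky), blockiness(smooth)
```

In the article's scheme, a metric like this would first route an image to the "contains blocking artifacts" class before the class-specific quality metric is applied.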

  20. Quality and Acceptability of Meat Nuggets with Fresh Aloe vera Gel.

    PubMed

    Rajkumar, V; Verma, Arun K; Patra, G; Pradhan, S; Biswas, S; Chauhan, P; Das, Arun K

    2016-05-01

    Aloe vera has been used worldwide in the pharmaceutical, food, and cosmetic industries owing to its wide range of biological activities. However, research on the quality and acceptability of low-fat meat products with added Aloe vera gel (AVG) is scanty. The aim of this study was to explore the effect of fresh AVG on the physicochemical, textural, sensory and nutritive qualities of goat meat nuggets. The products were prepared with 0%, 2.5%, and 5% fresh AVG replacing goat meat and were analyzed for proximate composition, physicochemical and textural properties, fatty acid profile and sensory parameters. Changes in lipid oxidation and microbial growth of the nuggets were also evaluated over 9 days of refrigerated storage. The results showed that AVG significantly (p<0.05) decreased the pH value and protein content of the meat emulsion and nuggets. Product yield was affected at the 5% level of gel. Addition of AVG to the formulation significantly affected texture profile analysis values. AVG reduced lipid oxidation and microbial growth in the nuggets during storage. Sensory panelists preferred nuggets with 2.5% AVG over nuggets with 5% AVG. Therefore, AVG at up to the 2.5% level could be used for quality improvement in goat meat nuggets without affecting their sensory, textural and nutritive values. PMID:26954177

  1. Quality and Acceptability of Meat Nuggets with Fresh Aloe vera Gel

    PubMed Central

    Rajkumar, V.; Verma, Arun K.; Patra, G.; Pradhan, S.; Biswas, S.; Chauhan, P.; Das, Arun K.

    2016-01-01

    Aloe vera has been used worldwide in the pharmaceutical, food, and cosmetic industries owing to its wide range of biological activities. However, research on the quality and acceptability of low-fat meat products with added Aloe vera gel (AVG) is scanty. The aim of this study was to explore the effect of fresh AVG on the physicochemical, textural, sensory and nutritive qualities of goat meat nuggets. The products were prepared with 0%, 2.5%, and 5% fresh AVG replacing goat meat and were analyzed for proximate composition, physicochemical and textural properties, fatty acid profile and sensory parameters. Changes in lipid oxidation and microbial growth of the nuggets were also evaluated over 9 days of refrigerated storage. The results showed that AVG significantly (p<0.05) decreased the pH value and protein content of the meat emulsion and nuggets. Product yield was affected at the 5% level of gel. Addition of AVG to the formulation significantly affected texture profile analysis values. AVG reduced lipid oxidation and microbial growth in the nuggets during storage. Sensory panelists preferred nuggets with 2.5% AVG over nuggets with 5% AVG. Therefore, AVG at up to the 2.5% level could be used for quality improvement in goat meat nuggets without affecting their sensory, textural and nutritive values. PMID:26954177

  2. Data Quality Objectives and Criteria for Basic Information, Acceptable Uncertainty, and Quality-Assurance and Quality-Control Documentation

    USGS Publications Warehouse

    Granato, Gregory E.; Bank, Fred G.; Cazenas, Patricia A.

    1998-01-01

    The Federal Highway Administration and State transportation agencies have the responsibility of determining and minimizing the effects of highway runoff on water quality; therefore, they have been conducting an extensive program of water-quality monitoring and research during the last 25 years. The objectives and monitoring goals of highway runoff studies have been diverse, because the highway community must address many different questions about the characteristics and impacts of highway runoff. The Federal Highway Administration must establish that available data and procedures that are used to assess and predict pollutant loadings and impacts from highway stormwater runoff are valid, current, and technically supportable. This report examines criteria for evaluating water-quality data and resultant interpretations. The criteria used to determine if data are valid (useful for intended purposes), current, and technically supportable are derived from published materials from the Federal Highway Administration, the U.S. Environmental Protection Agency, the Intergovernmental Task Force on Monitoring Water Quality, the U.S. Geological Survey and from technical experts throughout the U.S. Geological Survey. Water-quality data that are documented to be meaningful, representative, complete, precise, accurate, comparable, and admissible as legal evidence will meet the scientific, engineering, and regulatory needs of highway agencies. Documentation of basic information, such as compatible monitoring objectives and program design features; metadata (when, where, and how data were collected as well as who collected and analyzed the data); ancillary information (explanatory variables and study-site characteristics); and legal requirements are needed to evaluate data. 
Documentation of sufficient quality-assurance and quality-control information to establish the quality and uncertainty in the data and interpretations also are needed to determine the comparability and utility of

  3. Medical Imaging Image Quality Assessment with Monte Carlo Methods

    NASA Astrophysics Data System (ADS)

    Michail, C. M.; Karpetas, G. E.; Fountos, G. P.; Kalyvas, N. I.; Martini, Niki; Koukou, Vaia; Valais, I. G.; Kandarakis, I. S.

    2015-09-01

    The aim of the present study was to assess the image quality of PET scanners using a thin layer chromatography (TLC) plane source. The source was simulated using a previously validated Monte Carlo model. The model was developed with the GATE MC package, and reconstructed images were obtained with the STIR software for tomographic image reconstruction, using cluster computing. The PET scanner simulated in this study was the GE DiscoveryST. A plane source, consisting of a TLC plate, was simulated as a layer of silica gel on an aluminum (Al) foil substrate immersed in an 18F-FDG bath solution (1 MBq). Image quality was assessed in terms of the Modulation Transfer Function (MTF). MTF curves were estimated from transverse reconstructed images of the plane source. Images were reconstructed with the maximum likelihood estimation (MLE)-OSMAPOSL algorithm. OSMAPOSL reconstruction was assessed using various numbers of subsets (3 to 21) and iterations (1 to 20), as well as various beta (hyper)parameter values. MTF values were found to increase up to the 12th iteration and to remain almost constant thereafter. The MTF improves with lower beta values. The simulated PET evaluation method based on the TLC plane source can also be useful in research for the further development of PET and SPECT scanners through GATE simulations.
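The MTF estimation step, deriving the modulation transfer function from a measured spread function, can be illustrated in one dimension: normalize the point (or line) spread function and take the magnitude of its Fourier transform. A sketch with synthetic Gaussian PSFs whose widths are invented for illustration (the study's MTFs come from reconstructed images of the plane source, not analytic profiles):

```python
import numpy as np

def mtf_from_psf(psf):
    """1-D MTF: magnitude of the Fourier transform of the point spread
    function, normalized to 1 at zero spatial frequency."""
    otf = np.fft.rfft(psf / psf.sum())
    return np.abs(otf)

# A wider PSF (worse resolution) should give a faster-falling MTF.
x = np.arange(-32, 33)
narrow = np.exp(-x**2 / (2 * 1.5**2))
wide = np.exp(-x**2 / (2 * 4.0**2))
mtf_narrow = mtf_from_psf(narrow)
mtf_wide = mtf_from_psf(wide)
```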

  4. Image registration for DSA quality enhancement.

    PubMed

    Buzug, T M; Weese, J

    1998-01-01

    A generalized framework for histogram-based similarity measures is presented and applied to the image-enhancement task in digital subtraction angiography (DSA). The class of differentiable, strictly convex weighting functions is identified as suitable weightings of histograms for measuring the degree of clustering that goes along with registration. With respect to computation time, the energy similarity measure is the function of choice for the registration of mask and contrast image prior to subtraction. The robustness of the energy measure is studied for geometrical image distortions like rotation and scaling. Additionally, it is investigated how the histogram binning and inhomogeneous motion inside the templates influence the quality of the similarity measure. Finally, the registration success for the automated procedure is compared with the manually shift-corrected image pair of the head. PMID:9719851
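The energy similarity measure named here is commonly computed as the sum of squared joint-histogram probabilities: the better mask and contrast image are registered, the more tightly the joint gray-value histogram clusters, and the higher the energy. A minimal sketch on synthetic images (the bin count and test images are assumptions, not the paper's data):

```python
import numpy as np

def energy_measure(img_a, img_b, bins=32):
    """Energy of the joint gray-value histogram: sum of squared bin
    probabilities. A sharply clustered joint histogram, as for a
    well-registered image pair, yields a higher energy than a
    dispersed one."""
    hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p = hist / hist.sum()
    return (p ** 2).sum()

rng = np.random.default_rng(0)
img = rng.normal(size=(64, 64))
aligned = energy_measure(img, img)                      # identical: tight diagonal
shifted = energy_measure(img, np.roll(img, 3, axis=1))  # misregistered pair
```

A registration loop would shift the mask image over a search range and keep the offset that maximizes this energy before subtraction.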

  5. Physicochemical characterization of pure persimmon juice: nutritional quality and food acceptability.

    PubMed

    González, Eva; Vegara, Salud; Martí, Nuria; Valero, Manuel; Saura, Domingo

    2015-03-01

    A technological process for the production of non-astringent persimmon (Diospyros kaki Thunb. cv. "Rojo Brillante") juice is described. The degree of fruit ripening, expressed as the color index (CI), varied between 12.37 and 16.33. The persimmon juice was characterized by determining physicochemical quality parameters such as yield, total soluble solids (TSS), pH, titratable acidity (TA), organic acids, and main sugars. A thermal treatment of 90 °C for 10 s was effective in controlling naturally occurring microorganisms for at least 105 d of storage without significantly affecting the production of soluble brown pigments (BPs) and 5-hydroxymethylfurfural (5-HMF), total phenolic compounds (TPC), antioxidant capacity, or the acceptability of the juice by panelists. Storage time affected each of the above parameters, reducing BPs, TPC and antioxidant capacity but increasing 5-HMF content. Refrigerated storage enhanced the acceptability of the juices. This information may be used by the juice industry as a starting point for the production of pure persimmon juices. PMID:25619747

  6. Objective evaluation of speech signal quality by the prediction of multiple foreground diagnostic acceptability measure attributes.

    PubMed

    Sen, Deep; Lu, W

    2012-05-01

    A methodology is described to objectively diagnose the quality of speech signals by predicting the perceptual detectability of a selected set of distortions. The distortions are a statistically selected subset of the broad set of distortions used in diagnostic acceptability measure (DAM) testing. The justification for such a methodology is established from the analysis of a set of speech signals representing a broad range of distortions and their respective DAM scores. At the heart of the ability to isolate and diagnose the perceptibility of the individual distortions is a physiologically motivated cochlear model. The philosophy and methodology are thus distinct from traditional objective measures, which are typically designed to predict mean opinion scores (MOS) using well-versed functional psychoacoustic models. Even so, a weighted sum of this objectively predicted set of distortions is able to yield accurate and robust MOS predictions, even when the reference speech signals have been subject to the Lombard effect. PMID:22559381

  7. Acceptability of the Conceptions of Higher Education Quality to First Year Students of the Study Field of Pedagogy

    ERIC Educational Resources Information Center

    Žibeniene, Gintaute; Savickiene, Izabela

    2014-01-01

    The article presents which conceptions of higher education quality are most acceptable to first-year students of the study field of pedagogy. It is significant to analyse students' opinions as more than 10 years ago the EU member states agreed that higher education institutions bear responsibility for the quality of higher education. Being members…

  8. Dried fruits quality assessment by hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Serranti, Silvia; Gargiulo, Aldo; Bonifazi, Giuseppe

    2012-05-01

    Dried fruit products command different market values according to their quality. Such quality is usually quantified in terms of the freshness of the products, as well as the presence of contaminants (pieces of shell, husk, and small stones), defects, mould and decay. The combination of these parameters, in terms of their relative presence, represents a fundamental set of attributes conditioning the human-sense-detectable attributes of dried fruits (visual appearance, organoleptic properties, etc.) and their overall quality as marketable products. Sorting and selection strategies exist, but they sometimes fail when a higher degree of detection is required, especially when discriminating between dried fruits of relatively small dimensions or attempting "early detection" of the pathogenic agents responsible for future mould and decay development. The surface characteristics of dried fruits can be investigated by hyperspectral imaging (HSI). In this paper, specific "ad hoc" applications proposing quality detection logics based on an HSI approach are described, compared and critically evaluated. Reflectance spectra of selected dried fruits (hazelnuts) of different quality, characterized by the presence of different contaminants and defects, were acquired by a laboratory device equipped with two HSI systems working in two different spectral ranges: the visible-near infrared field (400-1000 nm) and the near infrared field (1000-1700 nm). The spectra were processed and the results evaluated adopting both a simple and fast wavelength band ratio approach and a more sophisticated classification logic based on principal component analysis (PCA).
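A wavelength band ratio approach reduces, per pixel, to dividing the reflectance in one band by that in another and thresholding the result. The sketch below is illustrative only; the band indices, threshold, and synthetic two-band cube are invented, not the wavelengths or decision rule used in the paper:

```python
import numpy as np

def band_ratio_mask(cube, band_a, band_b, threshold):
    """Flag pixels whose reflectance ratio between two spectral bands
    exceeds a threshold -- a fast screen for contaminant/defect candidates."""
    ratio = cube[..., band_a] / (cube[..., band_b] + 1e-9)
    return ratio > threshold

# Toy 4x4 scene with 2 bands: one "contaminant" pixel reflects strongly
# in band 0 relative to band 1.
cube = np.ones((4, 4, 2))
cube[2, 3, 0] = 3.0                      # anomalous pixel
mask = band_ratio_mask(cube, 0, 1, threshold=2.0)
```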

  9. Retinal image quality in the rodent eye.

    PubMed

    Artal, P; Herreros de Tejada, P; Muñoz Tedó, C; Green, D G

    1998-01-01

    Many rodents do not see well. For a target to be resolved by a rat or a mouse, it must subtend a visual angle of a degree or more. It is commonly assumed that this poor spatial resolving capacity is due to neural rather than optical limitations, but the quality of the retinal image has not been well characterized in these animals. We have modified a double-pass apparatus, initially designed for the human eye, so it could be used with rodents to measure the modulation transfer function (MTF) of the eye's optics. That is, the double-pass retinal image of a monochromatic (lambda = 632.8 nm) point source was digitized with a CCD camera. From these double-pass measurements, the single-pass MTF was computed under a variety of conditions of focus and with different pupil sizes. Even with the eye in best focus, the image quality in both rats and mice is exceedingly poor. With a 1-mm pupil, for example, the MTF in the rat had an upper limit of about 2.5 cycles/deg, rather than the 28 cycles/deg one would obtain if the eye were a diffraction-limited system. These images are about 10 times worse than the comparable retinal images in the human eye. Using our measurements of the optics and the published behavioral and electrophysiological contrast sensitivity functions (CSFs) of rats, we have calculated the CSF that the rat would have if it had perfect rather than poor optics. We find, interestingly, that diffraction-limited optics would produce only slight improvement overall. That is, in spite of retinal images which are of very low quality, the upper limit of visual resolution in rodents is neurally determined. Rats and mice seem to have eyes in which the optics and retina/brain are well matched. PMID:9682864

  10. Image quality assessment and human visual system

    NASA Astrophysics Data System (ADS)

    Gao, Xinbo; Lu, Wen; Tao, Dacheng; Li, Xuelong

    2010-07-01

    This paper summarizes the state of the art of image quality assessment (IQA) and the human visual system (HVS). IQA provides an objective index or real value to measure the quality of a specified image. Since human beings are the ultimate receivers of visual information in practical applications, the most reliable IQA is to build a computational model that mimics the HVS. According to the properties and cognitive mechanisms of the HVS, the available HVS-based IQA methods can be divided into two categories, i.e., bionics methods and engineering methods. This paper briefly introduces the basic theories and development histories of these two kinds of HVS-based IQA methods. Finally, some promising research issues are pointed out at the end of the paper.

  11. Retinal image quality, reading and myopia.

    PubMed

    Collins, Michael J; Buehren, Tobias; Iskander, D Robert

    2006-01-01

    Analysis was undertaken of the retinal image characteristics of the best-spectacle corrected eyes of progressing myopes (n = 20, mean age = 22 years; mean spherical equivalent = -3.84 D) and a control group of emmetropes (n = 20, mean age = 23 years; mean spherical equivalent = 0.00 D) before and after a 2h reading task. Retinal image quality was calculated based upon wavefront measurements taken with a Hartmann-Shack sensor with fixation on both a far (5.5 m) and near (individual reading distance) target. The visual Strehl ratio based on the optical transfer function (VSOTF) was significantly worse for the myopes prior to reading for both the far (p = 0.01) and near (p = 0.03) conditions. The myopic group showed significant reductions in various aspects of retinal image quality compared with the emmetropes, involving components of the modulation transfer function, phase transfer function and point spread function, often along the vertical meridian of the eye. The depth of focus of the myopes (0.54 D) was larger (p = 0.02) than the emmetropes (0.42 D) and the distribution of refractive power (away from optimal sphero-cylinder) was greater in the myopic eyes (variance of distributions p < 0.05). We found evidence that the lead and lag of accommodation are influenced by the higher order aberrations of the eye (e.g. significant correlations between lead/lag and the peak of the visual Strehl ratio based on the MTF). This could indicate that the higher accommodation lags seen in myopes are providing optimized retinal image characteristics. The interaction between low and high order aberrations of the eye play a significant role in reducing the retinal image quality of myopic eyes compared with emmetropes. PMID:15913701

  12. Worldwide Status of Fresh Fruits Irradiation and Concerns about Quality, Safety, and Consumer Acceptance.

    PubMed

    Shahbaz, Hafiz Muhammad; Akram, Kashif; Ahn, Jae-Jun; Kwon, Joong-Ho

    2016-08-17

    The development of knowledge-based food preservation techniques has been a major focus of researchers seeking to provide safe and nutritious food. Food irradiation is one of the most thoroughly investigated food preservation techniques and has been shown to be effective and safe through extensive research. The process involves exposing food to ionizing radiation in order to destroy microorganisms or insects that might be present on and/or in the food. In addition, the effects of irradiation on enzymatic activity and the improvement of functional properties in food have also been well established. The present review describes the potential of food irradiation technology to address major problems such as short shelf life, high initial microbial loads, insect pest management (quarantine treatment) in the supply chain, and the safe consumption of fresh fruits. Beyond improved hygienic quality, other uses such as delayed ripening and enhanced physical appearance are also discussed. Available data show that irradiation of fruits at the optimum dose can be a safe and cost-effective method, resulting in enhanced shelf life and hygienic quality with the least compromise of the various nutritional attributes, whereas consumer acceptance of irradiated fruits is a matter of providing proper scientific information. PMID:25830470

  13. Towards real-time image quality assessment

    NASA Astrophysics Data System (ADS)

    Geary, Bobby; Grecos, Christos

    2011-03-01

    We introduce a real-time implementation and evaluation of a new fast, accurate, full-reference image quality metric. The popular general image quality metric known as the Structural Similarity Index Metric (SSIM) has been shown to be effective, efficient, and useful, finding many practical and theoretical applications. Recently the authors have proposed an enhanced version of the SSIM algorithm known as the Rotated Gaussian Discrimination Metric (RGDM). This approach uses a Gaussian-like discrimination function to evaluate local contrast and luminance. RGDM was inspired by an exploration of local statistical parameter variations in relation to variation of Mean Opinion Score (MOS) for a range of particular distortion types. In this paper we outline the salient features of the derivation of RGDM and show how analyses of local statistics of distortion type necessitate variation in discrimination function width. Results on the LIVE image database show tight banding of the RGDM metric value when plotted against mean opinion score, indicating the usefulness of this metric. We then explore a number of strategies for algorithmic speed-up, including the application of integral images for patch-based computation optimisation, cost reduction for the evaluation of the discrimination function, and general loop unrolling. We also employ fast Single Instruction Multiple Data (SIMD) intrinsics and explore data-parallel decomposition on a multi-core Intel processor.
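The integral-image speed-up mentioned in this abstract can be sketched as follows. This is an illustrative numpy sketch under our own naming, not the authors' implementation: an integral image (summed-area table) lets the sum, and hence the mean, of any rectangular patch be read off with four lookups, independent of patch size.

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero row and column prepended."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.float64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def patch_means(img, k):
    """Mean of every k-by-k patch, computed in O(1) per patch via the table."""
    ii = integral_image(img)
    sums = ii[k:, k:] - ii[:-k, k:] - ii[k:, :-k] + ii[:-k, :-k]
    return sums / (k * k)

img = np.arange(16, dtype=np.float64).reshape(4, 4)
m = patch_means(img, 2)  # (3, 3) array of 2x2 patch means
```

This is the standard trick that makes local-statistics metrics such as SSIM (and, presumably, RGDM) cheap to evaluate over every image patch.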

  14. Image quality of a cone beam O-arm 3D imaging system

    NASA Astrophysics Data System (ADS)

    Zhang, Jie; Weir, Victor; Lin, Jingying; Hsiung, Hsiang; Ritenour, E. Russell

    2009-02-01

    The O-arm is a cone beam imaging system designed primarily to support orthopedic surgery and is also used for image-guided and vascular surgery. Using a gantry that can be opened or closed, the O-arm can function as a 2-dimensional (2D) fluoroscopy device or collect 3-dimensional (3D) volumetric imaging data like a CT system. Clinical applications of the O-arm in spine surgical procedures, assessment of pedicle screw position, and kyphoplasty procedures show that the O-arm 3D mode provides enhanced imaging information compared to radiographs or fluoroscopy alone. In this study, the image quality of an O-arm system was quantitatively evaluated. A 20 cm diameter CATPHAN 424 phantom was scanned using the pre-programmed head protocols: small/medium (120 kVp, 100 mAs), large (120 kVp, 128 mAs), and extra-large (120 kVp, 160 mAs) in 3D mode. High-resolution reconstruction mode (512×512×0.83 mm) was used to reconstruct images for the analysis of low- and high-contrast resolution and the noise power spectrum. The MTF was measured using the point spread function. The results show that the O-arm image is uniform but exhibits a noise pattern that cannot be removed by simply increasing the mAs. The high-contrast resolution of the O-arm system was approximately 9 lp/cm, and the system has a 10% MTF at 0.45 mm. The low-contrast resolution could not be determined due to the noise pattern. For surgery where the location of a structure matters more than a survey of all image details, the image quality of the O-arm is well accepted clinically.
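Measuring the MTF from a point spread function, as done in this study, can be sketched in numpy. This is a hedged, generic illustration rather than the authors' procedure: sum the 2D PSF along one axis to obtain a line spread function, then take the normalized magnitude of its Fourier transform.

```python
import numpy as np

def mtf_from_psf(psf, pixel_mm):
    """1-D MTF from a 2-D point spread function.

    The PSF is collapsed along one axis into a line spread function (LSF);
    the MTF is the magnitude of the LSF's Fourier transform, normalized to
    unity at zero spatial frequency. Frequencies are in cycles/mm.
    """
    lsf = psf.sum(axis=0)
    lsf = lsf / lsf.sum()
    mtf = np.abs(np.fft.rfft(lsf))
    freqs = np.fft.rfftfreq(lsf.size, d=pixel_mm)
    return freqs, mtf / mtf[0]

# An ideal (delta-function) PSF yields MTF = 1 at all frequencies.
psf = np.zeros((9, 9))
psf[4, 4] = 1.0
freqs, mtf = mtf_from_psf(psf, pixel_mm=0.1)
```

A real system's PSF is broadened, so the MTF falls with frequency; the 10%-MTF point (0.45 mm here) is then read off the resulting curve.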

  15. Temporal subtraction in chest radiography: Mutual information as a measure of image quality

    SciTech Connect

    Armato, Samuel G. III; Sensakovic, William F.; Passen, Samantha J.; Engelmann, Roger; MacMahon, Heber

    2009-12-15

    Purpose: Temporal subtraction is used to detect the interval change in chest radiographs and aid radiologists in patient diagnosis. This method registers two temporally different images by geometrically warping the lung region, or "lung mask," of a previous radiographic image to align with the current image. The gray levels of every pixel in the current image are subtracted from the gray levels of the corresponding pixels in the warped previous image to form a temporal subtraction image. While temporal subtraction images effectively enhance areas of pathologic change, misregistration of the images can mislead radiologists by obscuring the interval change or by creating artifacts that mimic change. The purpose of this study was to investigate the utility of mutual information computed between two registered radiographic chest images as a metric for distinguishing between clinically acceptable and clinically unacceptable temporal subtraction images. Methods: A radiologist subjectively rated the image quality of 138 temporal subtraction images using a 1 (poor) to 5 (excellent) scale. To objectively assess the registration accuracy depicted in the temporal subtraction images, which is the main factor that affects the quality of these images, mutual information was computed on the two constituent registered images prior to their subtraction to generate a temporal subtraction image. Mutual information measures the joint entropy of the current image and the warped previous image, yielding a higher value when the gray levels of spatially matched pixels in each image are consistent. Mutual information values were correlated with the radiologist's subjective ratings. To improve this correlation, mutual information was computed from a spatially limited lung mask, which was cropped from the bottom by 10%-60%. Additionally, the number of gray-level values used in the joint entropy histogram was varied.
The ability of mutual information to predict the clinical acceptability of
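The mutual-information measure this abstract relies on can be computed from the joint gray-level histogram of the two registered images. The following is a minimal numpy sketch; the bin count and log base are our choices, not the paper's (the study itself varied the number of gray levels in the histogram).

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Mutual information (bits) between two registered images.

    Built from the joint histogram: MI is high when gray levels at
    spatially matched pixels are statistically consistent, and drops
    when the images are misregistered.
    """
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p_ab = joint / joint.sum()
    p_a = p_ab.sum(axis=1, keepdims=True)   # marginal of image A
    p_b = p_ab.sum(axis=0, keepdims=True)   # marginal of image B
    nz = p_ab > 0
    return float(np.sum(p_ab[nz] * np.log2(p_ab[nz] / (p_a @ p_b)[nz])))
```

An image shares maximal mutual information with itself, and almost none with a spatially scrambled copy, which is why the value tracks registration accuracy.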

  16. Improving Secondary Ion Mass Spectrometry Image Quality with Image Fusion

    PubMed Central

    Tarolli, Jay G.; Jackson, Lauren M.; Winograd, Nicholas

    2014-01-01

    The spatial resolution of chemical images acquired with cluster secondary ion mass spectrometry (SIMS) is limited not only by the size of the probe utilized to create the images, but also by detection sensitivity. As the probe size is reduced to below 1 µm, for example, a low signal in each pixel limits lateral resolution due to counting statistics considerations. Although it can be useful to implement numerical methods to mitigate this problem, here we investigate the use of image fusion to combine information from scanning electron microscope (SEM) data with chemically resolved SIMS images. The advantage of this approach is that the higher intensity and, hence, spatial resolution of the electron images can help to improve the quality of the SIMS images without sacrificing chemical specificity. Using a pan-sharpening algorithm, the method is illustrated using synthetic data, experimental data acquired from a metallic grid sample, and experimental data acquired from a lawn of algae cells. The results show that up to an order of magnitude increase in spatial resolution is possible to achieve. A cross-correlation metric is utilized for evaluating the reliability of the procedure. PMID:24912432
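The abstract names a pan-sharpening algorithm for fusing the high-resolution SEM image with the low-resolution SIMS image but does not specify it, so the sketch below uses a common Brovey-style ratio fusion as a stand-in; the nearest-neighbour upsampling and the function names are our assumptions.

```python
import numpy as np

def pan_sharpen(low_res_chem, pan):
    """Brovey-style fusion of a low-resolution chemical (SIMS) image with a
    high-resolution panchromatic (SEM) image of the same field of view.

    The chemical map is upsampled to the pan grid, then modulated by the
    ratio of the pan image to a block-averaged (low-pass) copy of itself,
    injecting the pan image's fine spatial detail.
    Assumes pan dimensions are integer multiples of the chemical map's.
    """
    fy = pan.shape[0] // low_res_chem.shape[0]
    fx = pan.shape[1] // low_res_chem.shape[1]
    up = np.repeat(np.repeat(low_res_chem, fy, axis=0), fx, axis=1)
    # Low-pass version of the pan image at the chemical image's resolution.
    smooth = np.repeat(np.repeat(
        pan.reshape(low_res_chem.shape[0], fy,
                    low_res_chem.shape[1], fx).mean(axis=(1, 3)),
        fy, axis=0), fx, axis=1)
    return up * pan / (smooth + 1e-12)
```

Where the pan image is locally flat, the output reduces to the upsampled chemical map, so chemical specificity is preserved while edges are sharpened.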

  17. Virtual glaucoma clinics: patient acceptance and quality of patient education compared to standard clinics

    PubMed Central

    Court, Jennifer H; Austin, Michael W

    2015-01-01

    Purpose Virtual glaucoma clinics allow rapid, reliable patient assessment but the service should be acceptable to patients and concordance with treatment needs to be maintained with adequate patient education. This study compares experiences and understanding of patients reviewed via the virtual clinic versus the standard clinic by way of an extended patient satisfaction questionnaire (PSQ). Patients and methods One hundred PSQs were given to consecutive patients attending glaucoma clinics in October 2013. All 135 patients reviewed via the virtual clinic from April 2013 until August 2013 were sent postal PSQs in September 2013. Data were obtained for demographics, understanding of glaucoma, their condition, satisfaction with their experience, and quality of information. Responses were analyzed in conjunction with the clinical records. Results Eighty-five percent of clinic patients and 63% of virtual clinic patients responded to the PSQ. The mean satisfaction score was over 4.3/5 in all areas surveyed. Virtual clinic patients’ understanding of their condition was very good, with 95% correctly identifying their diagnosis as glaucoma, 83% as ocular hypertension and 78% as suspects. There was no evidence to support inferior knowledge or self-perceived understanding compared to standard clinic patients. Follow-up patients knew more about glaucoma than new patients. Over 95% of patients found our information leaflet useful. Forty percent of patients sought additional information but less than 20% used the internet for this. Conclusion A substantial proportion of glaucoma pathway patients may be seen by non-medical staff supervised by glaucoma specialists via virtual clinics. Patients are accepting of this format, reporting high levels of satisfaction and non-inferior knowledge to those seen in standard clinics. PMID:25987832

  18. Model-based quantification of image quality

    NASA Technical Reports Server (NTRS)

    Hazra, Rajeeb; Miller, Keith W.; Park, Stephen K.

    1989-01-01

    In 1982, Park and Schowengerdt published an end-to-end analysis of a digital imaging system quantifying three principal degradation components: (1) image blur - blurring caused by the acquisition system, (2) aliasing - caused by insufficient sampling, and (3) reconstruction blur - blurring caused by the imperfect interpolative reconstruction. This analysis, which measures degradation as the square of the radiometric error, includes the sample-scene phase as an explicit random parameter and characterizes the image degradation caused by imperfect acquisition and reconstruction together with the effects of undersampling and random sample-scene phases. In a recent paper, Mitchell and Netravali displayed the visual effects of the above-mentioned degradations and presented subjective analysis of their relative importance in determining image quality. The primary aim of the research is to use the analysis of Park and Schowengerdt to correlate their mathematical criteria for measuring image degradations with subjective visual criteria. Insight gained from this research can be exploited in the end-to-end design of optical systems, so that system parameters (transfer functions of the acquisition and display systems) can be designed relative to each other, to obtain the best possible results using quantitative measurements.

  19. Effect of sorghum flour addition on in vitro starch digestibility, cooking quality, and consumer acceptability of durum wheat pasta.

    PubMed

    Khan, Imran; Yousif, Adel M; Johnson, Stuart K; Gamlath, Shirani

    2014-08-01

    Whole grain sorghum is a valuable source of resistant starch and polyphenolic antioxidants, and its addition to a staple food like pasta may reduce starch digestibility. However, incorporating nondurum wheat materials into pasta provides a challenge in terms of maintaining cooking quality and consumer acceptability. Pasta was prepared from 100% durum wheat semolina (DWS) as control or by replacing DWS with either wholegrain red sorghum flour (RSF) or white sorghum flour (WSF) each at 20%, 30%, and 40% incorporation levels, following a laboratory-scale procedure. Pasta samples were evaluated for proximate composition, in vitro starch digestibility, cooking quality, and consumer acceptability. The addition of both RSF and WSF lowered the extent of in vitro starch digestion at all substitution levels compared to the control pasta. The rapidly digestible starch was lowered in all the sorghum-containing pastas compared to the control pasta. Neither RSF nor WSF addition affected the pasta quality attributes (water absorption, swelling index, dry matter, adhesiveness, cohesiveness, and springiness), except color and hardness, which were negatively affected. Consumer sensory results indicated that pasta samples containing 20% and 30% RSF or WSF had acceptable palatability based on meeting one or both of the preset acceptability criteria. It is concluded that the addition of wholegrain sorghum flour to pasta at the 30% incorporation level can reduce starch digestibility while maintaining adequate cooking quality and consumer acceptability. PMID:25047068

  20. The mobile image quality survey game

    NASA Astrophysics Data System (ADS)

    Rasmussen, D. René

    2012-01-01

    In this paper we discuss human assessment of the quality of photographic still images, that are degraded in various manners relative to an original, for example due to compression or noise. In particular, we examine and present results from a technique where observers view images on a mobile device, perform pairwise comparisons, identify defects in the images, and interact with the display to indicate the location of the defects. The technique measures the response time and accuracy of the responses. By posing the survey in a form similar to a game, providing performance feedback to the observer, the technique attempts to increase the engagement of the observers, and to avoid exhausting observers, a factor that is often a problem for subjective surveys. The results are compared with the known physical magnitudes of the defects and with results from similar web-based surveys. The strengths and weaknesses of the technique are discussed. Possible extensions of the technique to video quality assessment are also discussed.

  1. The Cucurbitaceae of India: Accepted names, synonyms, geographic distribution, and information on images and DNA sequences

    PubMed Central

    Renner, Susanne S.; Pandey, Arun K.

    2013-01-01

    The most recent critical checklists of the Cucurbitaceae of India are 30 years old. Since then, botanical exploration, online availability of specimen images and taxonomic literature, and molecular-phylogenetic studies have led to modified taxon boundaries and geographic ranges. We present a checklist of the Cucurbitaceae of India that treats 400 relevant names and provides information on the collecting locations and herbaria for all types. We accept 94 species (10 of them endemic) in 31 genera. For accepted species, we provide their geographic distribution inside and outside India, links to online images of herbarium or living specimens, and information on publicly available DNA sequences to highlight gaps in the current understanding of Indian cucurbit diversity. Of the 94 species, 79% have DNA sequences in GenBank, albeit rarely from Indian material. The most species-rich genera are Trichosanthes with 22 species, Cucumis with 11 (all but two wild), Momordica with 8, and Zehneria with 5. From an evolutionary point of view, India is of special interest because it harbors a wide range of lineages, many of them relatively old and phylogenetically isolated. Phytogeographically, the northeastern and peninsular regions are richest in species, while the Jammu Kashmir and Himachal regions have few Cucurbitaceae. Our checklist probably underestimates the true diversity of Indian Cucurbitaceae, but should help focus efforts towards the least known species and regions. PMID:23717193

  2. Hyperspectral and multispectral imaging for evaluating food safety and quality

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Spectral imaging technologies have been developed rapidly during the past decade. This paper presents hyperspectral and multispectral imaging technologies in the area of food safety and quality evaluation, with an introduction, demonstration, and summarization of the spectral imaging techniques avai...

  3. Quality assessment and consumer acceptability of bread from wheat and fermented banana flour.

    PubMed

    Adebayo-Oyetoro, Abiodun Omowonuola; Ogundipe, Oladeinde Olatunde; Adeeko, Kehinde Nojeemdeen

    2016-05-01

    Bread was produced from wheat flour and fermented unripe banana using the straight dough method. Matured unripe banana was peeled, sliced, steam blanched, dried, milled, and sieved to obtain flour. The flour was mixed with water, made into a slurry, and allowed to stand for 24 h, after which it was divided into several portions and blended with wheat flour in different ratios. Proximate and mineral compositions as well as functional, pasting, and sensory characteristics of the samples were determined. The results of proximate analysis showed that crude fiber ranged between 1.95% and 3.19%, carbohydrate between 49.70% and 52.98%, and protein between 6.92% and 10.25%, while iron was between 27.07 mg/100 g and 29.30 mg/100 g. The swelling capacity of the experimental samples showed a significant difference from that of the control. Peak viscosity ranged between 97.00 RVU and 153.63 RVU for experimental samples compared with 392.35 RVU obtained for the control. Most of the sensory properties of the experimental samples were significantly different from the control. This study showed that bread with good quality and acceptability can be produced from wheat-unripe banana blends. PMID:27247766

  4. On pictures and stuff: image quality and material appearance

    NASA Astrophysics Data System (ADS)

    Ferwerda, James A.

    2014-02-01

    Realistic images are a puzzle because they serve as visual representations of objects while also being objects themselves. When we look at an image we are able to perceive both the properties of the image and the properties of the objects represented by the image. Research on image quality has typically focused on improving image properties (resolution, dynamic range, frame rate, etc.) while ignoring the issue of whether images are serving their role as visual representations. In this paper we describe a series of experiments that investigate how well images of different quality convey information about the properties of the objects they represent. In the experiments we focus on the effects that two image properties (contrast and sharpness) have on the ability of images to represent the gloss of depicted objects. We found that different experimental methods produced differing results. Specifically, when the stimulus images were presented using simultaneous pair comparison, observers were influenced by the surface properties of the images and conflated changes in image contrast and sharpness with changes in object gloss. On the other hand, when the stimulus images were presented sequentially, observers were able to disregard the image plane properties and more accurately match the gloss of the objects represented by the different quality images. These findings suggest that in understanding image quality it is useful to distinguish between the quality of the imaging medium and the quality of the visual information represented by that medium.

  5. Image analysis for dental bone quality assessment using CBCT imaging

    NASA Astrophysics Data System (ADS)

    Suprijanto; Epsilawati, L.; Hajarini, M. S.; Juliastuti, E.; Susanti, H.

    2016-03-01

    Cone beam computerized tomography (CBCT) is one of the X-ray imaging modalities applied in dentistry. This modality can visualize the oral region in 3D and at high resolution. The CBCT jaw image carries potential information for the assessment of bone quality that is often used for pre-operative implant planning. We propose a comparison method based on normalized histograms (NH) of the region of the inter-dental septum and premolar teeth. The NH characteristics of normal and abnormal bone conditions are then compared and analyzed. Four test parameters are proposed, i.e., the difference between teeth and bone average intensity (s), the ratio between bone and teeth average intensity (n) of the NH, the difference between teeth and bone peak value (Δp) of the NH, and the ratio between teeth and bone of the NH range (r). The results showed that n, s, and Δp have potential as classification parameters of dental calcium density.
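The four test parameters (s, n, Δp, r) can be illustrated in numpy as below. The abstract does not give exact formulas, so these definitions are one plausible reading, and the bin count and value range are arbitrary choices of ours.

```python
import numpy as np

def nh_parameters(teeth_roi, bone_roi, bins=64, value_range=(0, 255)):
    """Four test parameters from normalized histograms (NH) of the teeth
    and inter-dental bone regions (illustrative definitions)."""
    t = teeth_roi.ravel().astype(float)
    b = bone_roi.ravel().astype(float)
    nh_t, _ = np.histogram(t, bins=bins, range=value_range)
    nh_b, _ = np.histogram(b, bins=bins, range=value_range)
    nh_t = nh_t / nh_t.sum()                 # normalize to unit area
    nh_b = nh_b / nh_b.sum()
    s = t.mean() - b.mean()                  # teeth-bone intensity difference
    n = b.mean() / t.mean()                  # bone/teeth intensity ratio
    dp = nh_t.max() - nh_b.max()             # NH peak-height difference
    r = np.ptp(t) / (np.ptp(b) + 1e-12)      # teeth/bone intensity-range ratio
    return s, n, dp, r
```

With such parameters computed per jaw region, normal and abnormal bone could be compared numerically rather than by visual histogram inspection.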

  6. Stereoscopic image quality assessment using disparity-compensated view filtering

    NASA Astrophysics Data System (ADS)

    Song, Yang; Yu, Mei; Jiang, Gangyi; Shao, Feng; Peng, Zongju

    2016-03-01

    Stereoscopic image quality assessment (IQA) plays a vital role in stereoscopic image/video processing systems. We propose a new quality assessment for stereoscopic image that uses disparity-compensated view filtering (DCVF). First, because a stereoscopic image is composed of different frequency components, DCVF is designed to decompose it into high-pass and low-pass components. Then, the qualities of different frequency components are acquired according to their phase congruency and coefficient distribution characteristics. Finally, support vector regression is utilized to establish a mapping model between the component qualities and subjective qualities, and stereoscopic image quality is calculated using this mapping model. Experiments on the LIVE 3-D IQA database and NBU 3-D IQA databases demonstrate that the proposed method can evaluate stereoscopic image quality accurately. Compared with several state-of-the-art quality assessment methods, the proposed method is more consistent with human perception.

  7. Finger vein image quality evaluation using support vector machines

    NASA Astrophysics Data System (ADS)

    Yang, Lu; Yang, Gongping; Yin, Yilong; Xiao, Rongyang

    2013-02-01

    In an automatic finger-vein recognition system, finger-vein image quality is significant for segmentation, enhancement, and matching processes. In this paper, we propose a finger-vein image quality evaluation method using support vector machines (SVMs). We extract three features including the gradient, image contrast, and information capacity from the input image. An SVM model is built on the training images with annotated quality labels (i.e., high/low) and then applied to unseen images for quality evaluation. To resolve the class-imbalance problem in the training data, we perform oversampling for the minority class with random-synthetic minority oversampling technique. Cross-validation is also employed to verify the reliability and stability of the learned model. Our experimental results show the effectiveness of our method in evaluating the quality of finger-vein images, and by discarding low-quality images detected by our method, the overall finger-vein recognition performance is considerably improved.
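The three input features named in this abstract (gradient, image contrast, and information capacity) might be computed as below. These are common textbook definitions, mean gradient magnitude, RMS contrast, and gray-level entropy, not necessarily the authors' exact ones; the resulting feature vectors would then be labeled high/low quality and fed to an SVM.

```python
import numpy as np

def quality_features(img):
    """Feature vector [gradient, contrast, information capacity] for one
    finger-vein image (illustrative definitions)."""
    img = img.astype(np.float64)
    # Mean gradient magnitude: sharp vein edges raise this value.
    gy, gx = np.gradient(img)
    gradient = np.mean(np.hypot(gx, gy))
    # RMS contrast: standard deviation of the gray levels.
    contrast = img.std()
    # Information capacity as Shannon entropy of the gray-level histogram.
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    entropy = -np.sum(p * np.log2(p))
    return np.array([gradient, contrast, entropy])
```

A featureless (uniform) image scores zero on all three, which is the intuition behind rejecting low-quality captures before matching.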

  8. Image quality metrics for optical coherence angiography.

    PubMed

    Lozzi, Andrea; Agrawal, Anant; Boretsky, Adam; Welle, Cristin G; Hammer, Daniel X

    2015-07-01

    We characterized image quality in optical coherence angiography (OCA) en face planes of the mouse cortical capillary network in terms of signal-to-noise ratio (SNR) and Weber contrast (Wc) through a novel mask-based segmentation method. The method was used to compare two adjacent B-scan processing algorithms, (1) average absolute difference (AAD) and (2) standard deviation (SD), while varying the number of lateral cross-sections acquired (also known as the gate length, N). AAD and SD are identical at N = 2 and exhibited similar image quality for N<10. However, AAD is relatively less susceptible to bulk tissue motion artifact than SD. SNR and Wc were 15% and 35% higher for AAD from N = 25 to 100. In addition, data sets were acquired with two objective lenses of different magnifications to quantify the effect of lateral resolution on fine capillary detection. The lower-power objective yielded a significant mean broadening of 17% in full width at half maximum (FWHM) diameter. These results may guide study and device designs for OCA capillary and blood flow quantification. PMID:26203372
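The two adjacent-B-scan algorithms compared in this study, AAD and SD, can be sketched over a stack of N repeated B-scans. This is a minimal numpy illustration with a textbook definition of each statistic; the array layout and the paper's exact normalization are our assumptions (with these definitions, AAD and SD at N = 2 agree only up to a constant factor).

```python
import numpy as np

def angiogram(bscans, method="aad"):
    """Angiography signal from N repeated B-scans at one location.

    bscans: array of shape (N, depth, width) of OCT amplitudes. Moving
    scatterers (blood) decorrelate between repeats, so both statistics
    light up vessels while static tissue cancels out.
    """
    if method == "aad":
        # Average absolute difference between adjacent B-scans.
        return np.mean(np.abs(np.diff(bscans, axis=0)), axis=0)
    if method == "sd":
        # Standard deviation across the repeated B-scans.
        return np.std(bscans, axis=0)
    raise ValueError(f"unknown method: {method}")
```

Static tissue produces zero signal under both statistics; only pixels whose amplitude changes between repeats, i.e. flowing blood, survive into the en face angiogram.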

  9. Optimizing Ready-to-Use Therapeutic Foods for Protein Quality, Cost, and Acceptability.

    PubMed

    Weber, Jacklyn; Callaghan, Meghan

    2016-03-01

    This article describes current research on the development of alternative ready-to-use therapeutic foods (RUTFs) in the treatment of severe acute malnutrition. An innovative and versatile linear programming tool has been developed to facilitate the creation of therapeutic formulas that are determined acceptable on multiple levels: costs, ingredient acceptability, availability and stability, nutrient requirements, and personal preferences. The formulas are analyzed for ease of production by Washington University team members and for organoleptic properties acceptability to target populations. In the future, RUTF products that are cost-effective, acceptable, sustainable, and widely available will become a reality. PMID:26864957

  10. 40 CFR 91.608 - Compliance with acceptable quality level and passing and failing criteria for selective...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Compliance with acceptable quality level and passing and failing criteria for selective enforcement audits. 91.608 Section 91.608 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) CONTROL OF EMISSIONS FROM MARINE SPARK-IGNITION...

  11. 40 CFR 90.510 - Compliance with acceptable quality level and passing and failing criteria for selective...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Compliance with acceptable quality level and passing and failing criteria for selective enforcement audits. 90.510 Section 90.510 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) CONTROL OF EMISSIONS FROM NONROAD SPARK-IGNITION ENGINES AT...

  12. 40 CFR 89.510 - Compliance with acceptable quality level and passing and failing criteria for selective...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Compliance with acceptable quality level and passing and failing criteria for selective enforcement audits. 89.510 Section 89.510 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) CONTROL OF EMISSIONS FROM NEW AND IN-USE...

  13. The Role of Peer Influence and Perceived Quality of Teaching in Faculty Acceptance of Web-Based Learning Management Systems

    ERIC Educational Resources Information Center

    Salajan, Florin D.; Welch, Anita G.; Ray, Chris M.; Peterson, Claudette

    2015-01-01

    This study's primary investigation is the impact of "peer influence" and "perceived quality of teaching" on faculty members' usage of web-based learning management systems within the Technology Acceptance Model (TAM) framework. These factors are entered into an extended TAM as external variables impacting on the core constructs…

  14. Data acceptance for automated leukocyte tracking through segmentation of spatiotemporal images.

    PubMed

    Ray, Nilanjan; Acton, Scott T

    2005-10-01

    A crucial task in inflammation research and inflammatory drug validation is leukocyte velocity data collection from microscopic video imagery. Since manual methods are bias-prone and extremely time-consuming, automated tracking methods are required to compute cell velocities. However, an automated tracking method is of little practical use unless it is accompanied by a mechanism to validate the tracker output. In this paper, we propose a validation technique that accepts or rejects the output of automated tracking methods. The proposed method first generates a spatiotemporal image from the cell locations given by a tracking method; then, it segments the spatiotemporal image to detect the presence or absence of a leukocyte. For segmenting the spatiotemporal images, we employ an edge-direction-sensitive nonlinear filter followed by an active contour based technique. The proposed nonlinear filter, the maximum absolute average directional derivative (MAADD), first computes the magnitude of the mean directional derivative over an oriented line segment and then chooses the maximum of all such values within a range of orientations of the line segment. The proposed active contour segmentation is obtained via growing contours controlled by a two-dimensional force field, which is constructed by imposing a Dirichlet boundary condition on the gradient vector flow (GVF) field equations. The performance of the proposed validation method is reported here for the outputs of three different tracking techniques: the method was successful in 97% of the trials using manual tracking, in 94% using correlation tracking and in 93% using active contour tracking. PMID:16235656

  15. Quality Control of Diffusion Weighted Images

    PubMed Central

    Liu, Zhexing; Wang, Yi; Gerig, Guido; Gouttard, Sylvain; Tao, Ran; Fletcher, Thomas; Styner, Martin

    2013-01-01

    Diffusion Tensor Imaging (DTI) has become an important MRI procedure for investigating the integrity of white matter in the brain in vivo. DTI is estimated from a series of acquired Diffusion Weighted Imaging (DWI) volumes. DWI data suffer from inherently low SNR and the long overall scanning times required for multiple directional encodings, with a correspondingly large risk of encountering several kinds of artifacts. These artifacts can be too severe for a correct and stable estimation of the diffusion tensor. Thus, a quality control (QC) procedure is absolutely necessary for DTI studies. Currently, routine DTI QC procedures are conducted manually by visually checking the DWI data set gradient by gradient and slice by slice. The results often suffer from low consistency across different data sets, lack of agreement among experts, and the difficulty of judging motion artifacts by qualitative inspection. Additionally, considerable manpower is needed for this step due to the large number of images to QC, which is common for group comparison and longitudinal studies, especially with increasing numbers of diffusion gradient directions. We present a framework for automatic DWI QC. We developed a tool called DTIPrep, which pipelines the QC steps with detailed protocoling and reporting facilities and is fully open source. This framework/tool has been successfully applied to several DTI studies with several hundred DWIs in our lab as well as collaborating labs in Utah and Iowa. In our studies, the tool provides a crucial piece for robust DTI analysis in brain white matter studies. PMID:24353379

  16. Image Quality Characteristics of Handheld Display Devices for Medical Imaging

    PubMed Central

    Yamazaki, Asumi; Liu, Peter; Cheng, Wei-Chung; Badano, Aldo

    2013-01-01

    Handheld devices such as mobile phones and tablet computers have become widespread with thousands of available software applications. Recently, handhelds are being proposed as part of medical imaging solutions, especially in emergency medicine, where immediate consultation is required. However, handheld devices differ significantly from medical workstation displays in terms of display characteristics. Moreover, the characteristics vary significantly among device types. We investigate the image quality characteristics of various handheld devices with respect to luminance response, spatial resolution, spatial noise, and reflectance. We show that the luminance characteristics of the handheld displays are different from those of workstation displays complying with grayscale standard target response suggesting that luminance calibration might be needed. Our results also demonstrate that the spatial characteristics of handhelds can surpass those of medical workstation displays particularly for recent generation devices. While a 5 mega-pixel monochrome workstation display has horizontal and vertical modulation transfer factors of 0.52 and 0.47 at the Nyquist frequency, the handheld displays released after 2011 can have values higher than 0.63 at the respective Nyquist frequencies. The noise power spectra for workstation displays are higher than 1.2×10−5 mm2 at 1 mm−1, while handheld displays have values lower than 3.7×10−6 mm2. Reflectance measurements on some of the handheld displays are consistent with measurements for workstation displays with, in some cases, low specular and diffuse reflectance coefficients. The variability of the characterization results among devices due to the different technological features indicates that image quality varies greatly among handheld display devices. PMID:24236113

  17. On the Subjective Acceptance during Cardiovascular Magnetic Resonance Imaging at 7.0 Tesla

    PubMed Central

    Klix, Sabrina; Els, Antje; Paul, Katharina; Graessl, Andreas; Oezerdem, Celal; Weinberger, Oliver; Winter, Lukas; Thalhammer, Christof; Huelnhagen, Till; Rieger, Jan; Mehling, Heidrun; Schulz-Menger, Jeanette; Niendorf, Thoralf

    2015-01-01

    Purpose This study examines the subjective acceptance during UHF-CMR in a cohort of healthy volunteers who underwent a cardiac MR examination at 7.0T. Methods Within a period of two-and-a-half years (January 2012 to June 2014) a total of 165 healthy volunteers (41 female, 124 male) without any known history of cardiac disease underwent UHF-CMR. For the assessment of subjective acceptance, a questionnaire was used to examine the participants' experience prior to, during, and after the UHF-CMR examination. For this purpose, subjects were asked to respond to the questionnaire in an exit interview held immediately after the completion of the UHF-CMR examination under supervision of a study nurse to ensure accurate understanding of the questions. All questions were answered with “yes” or “no”, with space for additional comments. Results Transient muscular contraction was documented in 12.7% of the questionnaires. Muscular contraction was reported to occur only during periods of scanning with the magnetic field gradients being rapidly switched. Dizziness during the study was reported by 12.7% of the subjects. Taste of metal was reported by 10.1% of the study population. Light flashes were reported by 3.6% of the entire cohort. 13% of the subjects reported side effects/observations which were not explicitly listed in the questionnaire but covered by the question about other side effects. No severe side effects such as vomiting or syncope occurred after scanning. No increase in heart rate was observed during the UHF-CMR exam versus the baseline clinical examination. Conclusions This study adds to the literature by detailing the subjective acceptance of cardiovascular magnetic resonance imaging examinations at a magnetic field strength of 7.0T. Cardiac MR examinations at 7.0T are well tolerated by healthy subjects. Broader observational and multi-center studies including patient cohorts with cardiac diseases are required to gain further insights into the subjective

  18. The influence of statistical variations on image quality

    NASA Astrophysics Data System (ADS)

    Hultgren, Bror; Hertel, Dirk; Bullitt, Julian

    2006-01-01

    For more than thirty years imaging scientists have constructed metrics to predict psychovisually perceived image quality. Such metrics are based on a set of objectively measurable basis functions such as Noise Power Spectrum (NPS), Modulation Transfer Function (MTF), and characteristic curves of tone and color reproduction. Although these basis functions constitute a set of primitives that fully describe an imaging system from the standpoint of information theory, we found that in practical imaging systems the basis functions themselves are determined by system-specific primitives, i.e. technology parameters. In the example of a printer, MTF and NPS are largely determined by dot structure. In addition MTF is determined by color registration, and NPS by streaking and banding. Since any given imaging system is only a single representation of a class of more or less identical systems, the family of imaging systems and the single system are not described by a unique set of image primitives. For an image produced by a given imaging system, the set of image primitives describing that particular image will be a singular instantiation of the underlying statistical distribution of that primitive. If we know precisely the set of imaging primitives that describe the given image we should be able to predict its image quality. Since only the distributions are known, we can only predict the distribution in image quality for a given image as produced by the larger class of 'identical systems'. We will demonstrate the combinatorial effect of the underlying statistical variations in the image primitives on the objectively measured image quality of a population of printers as well as on the perceived image quality of a set of test images. We also will discuss the choice of test image sets and impact of scene content on the distribution of perceived image quality.
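
    The combinatorial effect described above can be illustrated with a toy Monte Carlo sketch (not the authors' model): assume a hypothetical one-parameter MTF family whose cutoff frequency f_c varies across a population of nominally identical printers, and propagate that variation into a scalar quality score.

```python
import numpy as np

# Toy sketch (not the authors' model): a hypothetical one-parameter MTF
# family MTF(f) = exp(-f / f_c), with the cutoff f_c varying from printer
# to printer. A crude "quality" score integrates the MTF over an assumed
# visually important band of 0-4 cycles/mm.
rng = np.random.default_rng(42)
freqs = np.linspace(0.0, 4.0, 100)
df = freqs[1] - freqs[0]

def quality(f_c):
    # Riemann-sum integral of the MTF over the band
    return np.exp(-freqs / f_c).sum() * df

# Population of nominally "identical" printers: f_c ~ N(2.0, 0.3)
f_c_samples = rng.normal(2.0, 0.3, 2000).clip(0.5, None)
q = np.array([quality(fc) for fc in f_c_samples])

q_nominal = quality(2.0)  # the single "nominal" system
```

    Only the distribution of the score, not a single value, can be predicted for any one system drawn from the population, which is exactly the point made above.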

  19. Three factors that influence the overall quality of the stereoscopic 3D content: image quality, comfort, and realism

    NASA Astrophysics Data System (ADS)

    Vlad, Raluca; Ladret, Patricia; Guérin, Anne

    2013-01-01

    In today's context, where 3D content is more abundant than ever and its acceptance by the public is probably definitive, there are many discussions on controlling and improving the 3D quality. But what does this notion represent precisely? How can it be formalized and standardized? How can it be correctly evaluated? A great number of studies have investigated these matters and many interesting approaches have been proposed. Despite this, no universal 3D quality model has been accepted so far that would allow a uniform assessment across studies of the overall quality of 3D content, as it is perceived by the human observers. In this paper, we are making a step forward in the development of a 3D quality model, by presenting the results of an exploratory study in which we started from the premise that the overall 3D perceived quality is a multidimensional concept that can be explained by the physical characteristics of the 3D content. We investigated the spontaneous impressions of the participants while watching varied 3D content, we analyzed the key notions that appeared in their discourse and identified correlations between their judgments and the characteristics of our database. The test proved to be rich in results. Among its conclusions, we consider of highest importance the fact that we could thus determine three different perceptual attributes (image quality, comfort, and realism) that could constitute a first simplistic model for assessing the perceived 3D quality.

  20. Gap Acceptance During Lane Changes by Large-Truck Drivers—An Image-Based Analysis

    PubMed Central

    Nobukawa, Kazutoshi; Bao, Shan; LeBlanc, David J.; Zhao, Ding; Peng, Huei; Pan, Christopher S.

    2016-01-01

    This paper presents an analysis of rearward gap acceptance characteristics of drivers of large trucks in highway lane change scenarios. The range between the vehicles was inferred from camera images using the estimated lane width obtained from the lane tracking camera as the reference. Six hundred lane change events were acquired from a large-scale naturalistic driving data set. The kinematic variables from the image-based gap analysis were filtered by weighted linear least squares in order to extrapolate them to the lane change time. In addition, the time-to-collision and required deceleration were computed, and potential safety threshold values are provided. The resulting range and range rate distributions showed directional discrepancies, i.e., in left lane changes, large trucks are often slower than other vehicles in the target lane, whereas they are usually faster in right lane changes. Video observations have confirmed that major motivations for changing lanes are different depending on the direction of move, i.e., moving to the left (faster) lane occurs due to a slower vehicle ahead or a merging vehicle on the right-hand side, whereas right lane changes are frequently made to return to the original lane after passing. PMID:26924947
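
    The time-to-collision and required-deceleration quantities mentioned above can be sketched with standard kinematics (a minimal illustration, not the paper's exact filtering pipeline); by convention, range rate is negative when the gap is closing.

```python
def time_to_collision(range_m, range_rate_mps):
    """TTC in seconds; defined only when the gap is closing (range rate < 0)."""
    if range_rate_mps >= 0:
        return float("inf")  # gap opening or constant: no collision course
    return range_m / -range_rate_mps

def required_deceleration(range_m, range_rate_mps):
    """Constant deceleration (m/s^2) needed to null the closing speed
    just as the gap reaches zero: a = v_rel^2 / (2 * range)."""
    if range_rate_mps >= 0:
        return 0.0
    return range_rate_mps ** 2 / (2.0 * range_m)

# Example: a 25 m rearward gap closing at 5 m/s
ttc = time_to_collision(25.0, -5.0)        # 5.0 s
decel = required_deceleration(25.0, -5.0)  # 0.5 m/s^2
```

    A safety threshold of the kind the paper derives would then be a cutoff on these two values below which a gap is judged unacceptable.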

  1. Using short-wave infrared imaging for fruit quality evaluation

    NASA Astrophysics Data System (ADS)

    Zhang, Dong; Lee, Dah-Jye; Desai, Alok

    2013-12-01

    Quality evaluation of agricultural and food products is important for processing, inventory control, and marketing. Fruit size and surface quality are two important quality factors for high-quality fruit such as Medjool dates. Fruit size is usually measured by length, which can be done easily with simple image processing techniques. Surface quality evaluation, on the other hand, requires more complicated design, both in image acquisition and image processing. Skin delamination is considered a major factor that affects fruit quality and its value. This paper presents an efficient histogram analysis and image processing technique that is designed specifically for real-time surface quality evaluation of Medjool dates. This approach, based on short-wave infrared imaging, provides excellent image contrast between the fruit surface and delaminated skin, which allows significant simplification of the image processing algorithm and reduction of computational power requirements. The proposed quality grading method requires a very simple training procedure to obtain a gray scale image histogram for each quality level. Using histogram comparison, each date is assigned to one of the four quality levels and an optimal threshold is calculated for segmenting skin delamination areas from the fruit surface. The percentage of the fruit surface that has skin delamination can then be calculated for quality evaluation. This method has been implemented and used for commercial production and proven to be efficient and accurate.
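
    The histogram-comparison grading step could be sketched roughly as below; the gray levels, the quality labels, and the use of histogram intersection are illustrative assumptions, not the paper's trained references.

```python
import numpy as np

def grade_by_histogram(image, level_histograms, bins=64):
    """Assign an image to the quality level whose reference histogram it
    matches best, using histogram intersection (higher = more similar)."""
    h, _ = np.histogram(image, bins=bins, range=(0, 255), density=True)
    scores = {level: np.minimum(h, ref).sum()
              for level, ref in level_histograms.items()}
    return max(scores, key=scores.get)

# Illustrative references: intact SWIR fruit surface assumed bright,
# delaminated skin assumed dark (synthetic gray levels, not measured data).
rng = np.random.default_rng(0)
bright = rng.normal(200, 10, (64, 64)).clip(0, 255)
dark = rng.normal(60, 10, (64, 64)).clip(0, 255)
refs = {
    "grade_A": np.histogram(bright, bins=64, range=(0, 255), density=True)[0],
    "grade_D": np.histogram(dark, bins=64, range=(0, 255), density=True)[0],
}
sample = rng.normal(195, 12, (64, 64)).clip(0, 255)
grade = grade_by_histogram(sample, refs)  # matches the bright reference
```

    A per-level threshold for segmenting delaminated areas would then be derived from the winning level's histogram, as the abstract describes.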

  2. Quality Prediction of Asymmetrically Distorted Stereoscopic 3D Images.

    PubMed

    Wang, Jiheng; Rehman, Abdul; Zeng, Kai; Wang, Shiqi; Wang, Zhou

    2015-11-01

    Objective quality assessment of distorted stereoscopic images is a challenging problem, especially when the distortions in the left and right views are asymmetric. Existing studies suggest that simply averaging the quality of the left and right views well predicts the quality of symmetrically distorted stereoscopic images, but generates substantial prediction bias when applied to asymmetrically distorted stereoscopic images. In this paper, we first build a database that contains both single-view and symmetrically and asymmetrically distorted stereoscopic images. We then carry out a subjective test, where we find that the quality prediction bias of the asymmetrically distorted images could lean toward opposite directions (overestimate or underestimate), depending on the distortion types and levels. Our subjective test also suggests that eye dominance effect does not have strong impact on the visual quality decisions of stereoscopic images. Furthermore, we develop an information content and divisive normalization-based pooling scheme that improves upon structural similarity in estimating the quality of single-view images. Finally, we propose a binocular rivalry-inspired multi-scale model to predict the quality of stereoscopic images from that of the single-view images. Our results show that the proposed model, without explicitly identifying image distortion types, successfully eliminates the prediction bias, leading to significantly improved quality prediction of the stereoscopic images. PMID:26087491
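
    A minimal sketch of rivalry-inspired pooling, greatly simplified from the paper's multi-scale model: rather than averaging the two views' quality scores, weight each view by its contrast energy, so the dominant view drives the prediction for asymmetric distortions.

```python
def stereo_quality(q_left, q_right, energy_left, energy_right):
    """Rivalry-inspired pooling: weight per-view quality by each view's
    contrast energy instead of plain averaging (simplified sketch)."""
    w_left = energy_left / (energy_left + energy_right)
    return w_left * q_left + (1.0 - w_left) * q_right

# Symmetric distortion: reduces to the plain average
q_sym = stereo_quality(0.8, 0.6, 1.0, 1.0)   # 0.7
# Asymmetric: the higher-energy (sharper) view dominates
q_asym = stereo_quality(0.9, 0.3, 3.0, 1.0)  # 0.75
```

    This captures why plain averaging is biased for asymmetric distortions: a strong blur in one view lowers both its quality and its energy weight.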

  3. LANDSAT-4 image data quality analysis

    NASA Technical Reports Server (NTRS)

    Anuta, P. E. (Principal Investigator)

    1982-01-01

    Work done on evaluating the geometric and radiometric quality of early LANDSAT-4 sensor data is described. Band to band and channel to channel registration evaluations were carried out using a line correlator. Visual blink comparisons were run on an image display to observe band to band registration over 512 x 512 pixel blocks. The results indicate a 0.5 pixel line misregistration between the 1.55-1.75 and 2.08-2.35 micrometer bands and the first four bands. A misregistration of four 30 m lines and columns of the thermal IR band was also observed. Radiometric evaluation included mean and variance analysis of individual detectors and principal components analysis. Results indicate that detector bias for all bands is very close to or within tolerance. Bright spots were observed in the thermal IR band on an 18 line by 128 pixel grid. No explanation for this was pursued. The general overall quality of the TM was judged to be very high.

  4. An image-based technique to assess the perceptual quality of clinical chest radiographs

    SciTech Connect

    Lin Yuan; Luo Hui; Dobbins, James T. III; Page McAdams, H.; Wang, Xiaohui; Sehnert, William J.; Barski, Lori; Foos, David H.; Samei, Ehsan

    2012-11-15

    Purpose: Current clinical image quality assessment techniques mainly analyze image quality for the imaging system in terms of factors such as the capture system modulation transfer function, noise power spectrum, detective quantum efficiency, and the exposure technique. While these elements form the basic underlying components of image quality, when assessing a clinical image, radiologists seldom refer to these factors, but rather examine several specific regions of the displayed patient images, further impacted by a particular image processing method applied, to see whether the image is suitable for diagnosis. In this paper, the authors developed a novel strategy to simulate radiologists' perceptual evaluation process on actual clinical chest images. Methods: Ten region-based perceptual attributes of chest radiographs were determined through an observer study. Those included lung grey level, lung detail, lung noise, rib-lung contrast, rib sharpness, mediastinum detail, mediastinum noise, mediastinum alignment, subdiaphragm-lung contrast, and subdiaphragm area. Each attribute was characterized in terms of a physical quantity measured from the image algorithmically using an automated process. A pilot observer study was performed on 333 digital chest radiographs, which included 179 PA images with 10:1 ratio grids (set 1) and 154 AP images without grids (set 2), to ascertain the correlation between image perceptual attributes and physical quantitative measurements. To determine the acceptable range of each perceptual attribute, a preliminary quality consistency range was defined based on the preferred 80% of images in set 1. Mean value difference (μ₁−μ₂) and variance ratio (σ₁²/σ₂²) were investigated to further quantify the differences between the selected two image sets. 
Results: The pilot observer study demonstrated that our region-based physical quantity metrics of chest radiographs correlated very well with

  5. Learning to rank for blind image quality assessment.

    PubMed

    Gao, Fei; Tao, Dacheng; Gao, Xinbo; Li, Xuelong

    2015-10-01

    Blind image quality assessment (BIQA) aims to predict perceptual image quality scores without access to reference images. State-of-the-art BIQA methods typically require subjects to score a large number of images to train a robust model. However, subjective quality scores are imprecise, biased, and inconsistent, and it is challenging to obtain a large-scale database, or to extend existing databases, because of the inconvenience of collecting images, training the subjects, conducting subjective experiments, and realigning human quality evaluations. To combat these limitations, this paper explores and exploits preference image pairs (PIPs), such as "the quality of image Ia is better than that of image Ib", for training a robust BIQA model. The preference label, representing the relative quality of two images, is generally precise and consistent, and is not sensitive to image content, distortion type, or subject identity; such PIPs can be generated at a very low cost. The proposed BIQA method is one of learning to rank. We first formulate the problem of learning the mapping from the image features to the preference label as one of classification. In particular, we investigate the utilization of a multiple kernel learning algorithm based on group lasso to provide a solution. A simple but effective strategy to estimate perceptual image quality scores is then presented. Experiments show that the proposed BIQA method is highly effective and achieves a performance comparable with that of state-of-the-art BIQA algorithms. Moreover, the proposed method can be easily extended to new distortion categories. PMID:25616080
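
    The learning-to-rank idea can be sketched with a linear stand-in for the paper's group-lasso multiple-kernel learner: train a logistic model on feature differences so that P(quality(Ia) > quality(Ib)) = sigmoid(w·(f_a − f_b)). All features and labels below are synthetic.

```python
import numpy as np

def train_preference_model(feat_a, feat_b, prefer_a, epochs=500, lr=0.5):
    """Logistic regression on feature differences: a linear stand-in for
    the paper's group-lasso multiple-kernel learner."""
    X = feat_a - feat_b
    y = prefer_a.astype(float)
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-np.clip(X @ w, -30.0, 30.0)))
        w -= lr * X.T @ (p - y) / len(y)
    return w

# Synthetic data: true quality is a hidden linear function of 3 features
rng = np.random.default_rng(1)
feats = rng.normal(size=(200, 3))
true_q = feats @ np.array([1.0, -2.0, 0.5])

# Preference pairs: "image a is better than image b"
a = rng.integers(0, 200, 400)
b = rng.integers(0, 200, 400)
w = train_preference_model(feats[a], feats[b], true_q[a] > true_q[b])
scores = feats @ w  # relative quality scores, usable for ranking
```

    As in the paper, the model never sees absolute quality scores, only pairwise preferences, yet the learned scores rank images consistently with the hidden quality.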

  6. Food quality assessment by NIR hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Whitworth, Martin B.; Millar, Samuel J.; Chau, Astor

    2010-04-01

    Near infrared reflectance (NIR) spectroscopy is well established in the food industry for rapid compositional analysis of bulk samples. NIR hyperspectral imaging provides new opportunities to measure the spatial distribution of components such as moisture and fat, and to identify and measure specific regions of composite samples. An NIR hyperspectral imaging system has been constructed for food research applications, incorporating a SWIR camera with a cooled 14 bit HgCdTe detector and N25E spectrograph (Specim Ltd, Finland). Samples are scanned in a pushbroom mode using a motorised stage. The system has a spectral resolution of 256 pixels covering a range of 970-2500 nm and a spatial resolution of 320 pixels covering a swathe adjustable from 8 to 300 mm. Images are acquired at a rate of up to 100 lines s^-1, enabling samples to be scanned within a few seconds. Data are captured using SpectralCube software (Specim) and analysed using ENVI and IDL (ITT Visual Information Solutions). Several food applications are presented. The strength of individual absorbance bands enables the distribution of particular components to be assessed. Examples are shown for detection of added gluten in wheat flour and to study the effect of processing conditions on fat distribution in chips/French fries. More detailed quantitative calibrations have been developed to study evolution of the moisture distribution in baguettes during storage at different humidities, to assess freshness of fish using measurements of whole cod and fillets, and for prediction of beef quality by identification and separate measurement of lean and fat regions.
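
    As an illustration of reading a single absorbance band out of such a cube: the code below assumes a linear 970-2500 nm band axis as described above and picks the band nearest 1450 nm, a generic water-absorption wavelength; the band-selection logic and the dummy cube are illustrative, not the system's calibration.

```python
import numpy as np

def band_index(wavelengths, target_nm):
    """Index of the spectral band closest to target_nm."""
    return int(np.argmin(np.abs(wavelengths - target_nm)))

def absorbance_map(cube, wavelengths, target_nm):
    """Absorbance A = -log10(R) at the band nearest target_nm, for a
    reflectance cube shaped (lines, samples, bands) with R in (0, 1]."""
    b = band_index(wavelengths, target_nm)
    return -np.log10(np.clip(cube[:, :, b], 1e-6, 1.0))

# 256 bands covering 970-2500 nm, as in the system described above
wl = np.linspace(970.0, 2500.0, 256)
cube = np.full((4, 4, 256), 0.5)  # dummy flat 50%-reflectance scene
moisture = absorbance_map(cube, wl, 1450.0)  # near a water absorption band
```

    Component maps like this are the per-band building blocks from which the quantitative calibrations mentioned above are developed.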

  7. Retinal Image Quality during Accommodation in Adult Myopic Eyes

    PubMed Central

    Sreenivasan, Vidhyapriya; Aslakson, Emily; Kornaus, Andrew; Thibos, Larry N.

    2014-01-01

    Purpose Reduced retinal image contrast produced by accommodative lag is implicated with myopia development. Here, we measure accommodative error and retinal image quality from wavefront aberrations in myopes and emmetropes when they perform visually demanding and naturalistic tasks. Methods Wavefront aberrations were measured in 10 emmetropic and 11 myopic adults at three distances (100, 40, and 20 cm) while performing four tasks (monocular acuity, binocular acuity, reading, and movie watching). For the acuity tasks, measurements of wavefront error were obtained near the end point of the acuity experiment. Refractive state was defined as the target vergence that optimizes image quality using a visual contrast metric (VSMTF) computed from wavefront errors. Results Accommodation was most accurate (and image quality best) during binocular acuity whereas accommodation was least accurate (and image quality worst) while watching a movie. When viewing distance was reduced, accommodative lag increased and image quality (as quantified by VSMTF) declined for all tasks in both refractive groups. For any given viewing distance, computed image quality was consistently worse in myopes than in emmetropes, more so for the acuity than for reading/movie watching. Although myopes showed greater lags and worse image quality for the acuity experiments compared to emmetropes, acuity was not measurably worse in myopes compared to emmetropes. Conclusions Retinal image quality present when performing a visually demanding task (e.g., during clinical examination) is likely to be greater than for less demanding tasks (e.g., reading/movie watching). Although reductions in image quality lead to reductions in acuity, the image quality metric VSMTF is not necessarily an absolute indicator of visual performance because myopes achieved slightly better acuity than emmetropes despite showing greater lags and worse image quality. 
Reduced visual contrast in myopes compared to emmetropes is consistent

  8. Perceptual Quality Assessment for Multi-Exposure Image Fusion.

    PubMed

    Ma, Kede; Zeng, Kai; Wang, Zhou

    2015-11-01

    Multi-exposure image fusion (MEF) is considered an effective quality enhancement technique widely adopted in consumer electronics, but little work has been dedicated to the perceptual quality assessment of multi-exposure fused images. In this paper, we first build an MEF database and carry out a subjective user study to evaluate the quality of images generated by different MEF algorithms. There are several useful findings. First, considerable agreement has been observed among human subjects on the quality of MEF images. Second, no single state-of-the-art MEF algorithm produces the best quality for all test images. Third, the existing objective quality models for general image fusion are very limited in predicting perceived quality of MEF images. Motivated by the lack of appropriate objective models, we propose a novel objective image quality assessment (IQA) algorithm for MEF images based on the principle of the structural similarity approach and a novel measure of patch structural consistency. Our experimental results on the subjective database show that the proposed model well correlates with subjective judgments and significantly outperforms the existing IQA models for general image fusion. Finally, we demonstrate the potential application of the proposed model by automatically tuning the parameters of MEF algorithms. PMID:26068317

  9. Maternal diet during early childhood, but not pregnancy, predicts diet quality and fruit and vegetable acceptance in offspring.

    PubMed

    Ashman, Amy M; Collins, Clare E; Hure, Alexis J; Jensen, Megan; Oldmeadow, Christopher

    2016-07-01

    Studies have identified prenatal flavour exposure as a determinant of taste preferences in infants; however, these studies have focused on relatively small samples and limited flavours. As many parents struggle with getting children to accept a variety of nutritious foods, a study of the factors influencing food acceptance is warranted. The objective of this study was to determine whether exposure to a wider variety of fruit and vegetables and overall higher diet quality in utero results in acceptance of a greater variety of these foods and better diet quality for offspring during childhood. This study is a secondary data analysis of pregnant women (n = 52) and their resulting offspring recruited for the Women and Their Children's Health study in NSW, Australia. Dietary intake of mothers and children was measured using food frequency questionnaires. Diet quality and vegetable and fruit variety were calculated using the Australian Recommended Food Score and the Australian Child and Adolescent Recommended Food Score. Associations between maternal and child diet quality and variety were assessed using Pearson's correlations and the total effect of in utero maternal pregnancy diet on childhood diet was decomposed into direct and indirect effect using mediation analysis. Maternal pregnancy and post-natal diet were both correlated with child diet for overall diet quality and fruit and vegetable variety (P < 0.001). Mediation analyses showed that the indirect effect of maternal pregnancy diet on child diet was mediated through maternal post-natal diet, particularly for fruit (P = 0.045) and vegetables (P = 0.055). Nutrition intervention should therefore be aimed at improving diet quality and variety in mothers with young children, in order to subsequently improve eating habits of offspring. PMID:25294406

  10. Contrast sensitivity function calibration based on image quality prediction

    NASA Astrophysics Data System (ADS)

    Han, Yu; Cai, Yunze

    2014-11-01

    Contrast sensitivity functions (CSFs) describe visual stimuli based on their spatial frequency. However, CSF calibration is limited by the size of the sample collection and this remains an open issue. In this study, we propose an approach for calibrating CSFs that is based on the hypothesis that a precise CSF model can accurately predict image quality. Thus, CSF calibration is regarded as the inverse problem of image quality prediction according to our hypothesis. A CSF could be calibrated by optimizing the performance of a CSF-based image quality metric using a database containing images with known quality. Compared with the traditional method, this would reduce the work involved in sample collection dramatically. In the present study, we employed three image databases to optimize some existing CSF models. The experimental results showed that the performance of a three-parameter CSF model was better than that of other models. The results of this study may be helpful in CSF and image quality research.
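
    The calibration-as-inverse-problem idea can be sketched with synthetic data; the two-parameter CSF form (simplified from the paper's three-parameter model), the error-spectrum quality model, and all numbers below are illustrative assumptions.

```python
import numpy as np

def csf(f, a, b):
    """Simplified two-parameter CSF with a band-pass shape: a*f*exp(-b*f)."""
    return a * f * np.exp(-b * f)

rng = np.random.default_rng(3)
f = np.linspace(0.5, 30.0, 60)  # spatial frequencies, cycles/degree

# Synthetic "database": per-image distortion error spectra, and subjective
# scores generated by a ground-truth CSF with b = 0.2
err = rng.random((40, f.size))
subj = -(err * csf(f, 1.0, 0.2)).sum(axis=1)

# Calibrate b by maximizing correlation between the CSF-weighted quality
# metric and the known scores (grid search; the scale a drops out)
candidates = np.linspace(0.05, 0.5, 46)

def corr(b):
    metric = -(err * csf(f, 1.0, b)).sum(axis=1)
    return np.corrcoef(metric, subj)[0, 1]

best_b = max(candidates, key=corr)  # recovers b = 0.2
```

    The grid search stands in for the optimization step: instead of collecting new psychophysical samples, the CSF parameter is fit so that the resulting metric best predicts the database's known quality scores.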

  11. Research iris serial images quality assessment method based on HVS

    NASA Astrophysics Data System (ADS)

    Li, Zhi-hui; Zhang, Chang-hai; Ming, Xing; Zhao, Yong-hua

    2006-01-01

    Iris recognition is widely applicable in security and customs, and it provides better security than recognition based on other human features such as fingerprints or faces. Iris image quality is crucial to recognition performance, so reliable image quality assessment is necessary for evaluating iris images. However, there is no uniform criterion for image quality assessment. Image quality can be assessed by objective or subjective evaluation; in practice, subjective evaluation is laborious and not effective for iris recognition, so objective evaluation should be used. Based on the multi-scale and selectivity characteristics of the human visual system (HVS) model, this paper presents a new iris image quality assessment method. A region of interest (ROI) is located, wavelet transform zero-crossings are used to find multi-scale edges, and a multi-scale fusion measure is used to assess iris image quality. In experiments, both objective and subjective evaluation methods were used to assess iris images. The results show that the method is effective for iris image quality assessment.

  12. Synthesis and quality control of [(18) F]T807 for tau PET imaging.

    PubMed

    Holt, Daniel P; Ravert, Hayden T; Dannals, Robert F

    2016-08-01

    The detailed synthesis and quality control of [(18) F]T807, a radiotracer for tau protein aggregate imaging, are described. The radiotracer synthesis was accomplished in an average of 48 min with an average specific activity at end-of-synthesis of over 4.4 TBq/µmole (120 Ci/µmole) and an average radiochemical yield of 32%. Compliance with all standard US Pharmacopeia Chapter <823> acceptance tests was observed. PMID:27427174

  13. Automated FMV image quality assessment based on power spectrum statistics

    NASA Astrophysics Data System (ADS)

    Kalukin, Andrew

    2015-05-01

    Factors that degrade image quality in video and other sensor collections, such as noise, blurring, and poor resolution, also affect the spatial power spectrum of imagery. Prior research in human vision and image science from the last few decades has shown that the image power spectrum can be useful for assessing the quality of static images. The research in this article explores the possibility of using the image power spectrum to automatically evaluate full-motion video (FMV) imagery frame by frame. This procedure makes it possible to identify anomalous images and scene changes, and to keep track of gradual changes in quality as collection progresses. This article will describe a method to apply power spectral image quality metrics for images subjected to simulated blurring, blocking, and noise. As a preliminary test on videos from multiple sources, image quality measurements for image frames from 185 videos are compared to analyst ratings based on ground sampling distance. The goal of the research is to develop an automated system for tracking image quality during real-time collection, and to assign ratings to video clips for long-term storage, calibrated to standards such as the National Imagery Interpretability Rating System (NIIRS).
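
    A radially averaged power spectrum of the kind such metrics build on can be computed as follows (a generic sketch, not the article's specific metric); blurring a frame should visibly drain power from the high-frequency bins.

```python
import numpy as np

def radial_power_spectrum(img, nbins=32):
    """Radially averaged 2-D power spectrum of a grayscale image."""
    f = np.fft.fftshift(np.fft.fft2(img - img.mean()))
    power = np.abs(f) ** 2
    h, w = img.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h / 2, x - w / 2)
    bins = np.linspace(0, r.max(), nbins + 1)
    idx = np.clip(np.digitize(r.ravel(), bins) - 1, 0, nbins - 1)
    sums = np.bincount(idx, weights=power.ravel(), minlength=nbins)
    counts = np.bincount(idx, minlength=nbins)
    return sums / np.maximum(counts, 1)

# A sharp noise frame versus a crudely blurred copy (2x2 box average):
rng = np.random.default_rng(0)
frame = rng.normal(size=(128, 128))
blurred = (frame + np.roll(frame, 1, 0) + np.roll(frame, 1, 1)
           + np.roll(np.roll(frame, 1, 0), 1, 1)) / 4.0
ps_sharp = radial_power_spectrum(frame)
ps_blur = radial_power_spectrum(blurred)
```

    Tracking a summary of the high-frequency bins frame by frame is one simple way to flag the gradual quality changes and anomalous frames the article describes.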

  14. The study of surgical image quality evaluation system by subjective quality factor method

    NASA Astrophysics Data System (ADS)

    Zhang, Jian J.; Xuan, Jason R.; Yang, Xirong; Yu, Honggang; Koullick, Edouard

    2016-03-01

    The GreenLight™ procedure is an effective and economical treatment for benign prostatic hyperplasia (BPH); almost a million patients have been treated with GreenLight™ worldwide. During the surgical procedure, the surgeon or physician relies on the monitoring video system to survey and confirm the surgical progress. Several obstructions can greatly affect the image quality of the monitoring video: laser glare from tissue and body fluid, air bubbles and debris generated by tissue evaporation, and bleeding, to name a few. In order to improve the physician's visual experience of a laser surgical procedure, the system performance parameters related to image quality need to be well defined. However, image quality is the integrated set of perceptions of the overall degree of excellence of an image; in other words, it is the perceptually weighted combination of significant attributes (contrast, graininess, etc.) of an image considered in its marketplace or application. There is thus no standard definition of overall image or video quality, especially for the no-reference case (without a standard chart as reference). In this study, Subjective Quality Factor (SQF) and acutance are used for no-reference image quality evaluation. Basic image quality parameters, like sharpness, color accuracy, size of obstruction and transmission of obstruction, are used as subparameters to define the rating scale for image quality evaluation or comparison. Sample image groups were evaluated by human observers according to the rating scale. Surveys of physician groups were also conducted with lab-generated sample videos. The study shows that human subjective perception is a trustworthy way of image quality evaluation. More systematic investigation of the relationship between video quality and the image quality of each frame will be conducted as a future study.

  15. Is image quality a function of contrast perception?

    NASA Astrophysics Data System (ADS)

    Haun, Andrew M.; Peli, Eli

    2013-03-01

    In this retrospective we trace in broad strokes the development of image quality measures based on the study of the early stages of the human visual system (HVS), where contrast encoding is fundamental. We find that while presenters at the Human Vision and Electronic Imaging meetings have frequently striven to find points of contact between the study of human contrast psychophysics and the development of computer vision and image quality algorithms, progress has not always been made on these terms, although an indirect impact of vision science on more recent image quality metrics can be observed.

  16. New image quality assessment method using wavelet leader pyramids

    NASA Astrophysics Data System (ADS)

    Chen, Xiaolin; Yang, Xiaokang; Zheng, Shibao; Lin, Weiyao; Zhang, Rui; Zhai, Guangtao

    2011-06-01

    In this paper, we propose a wavelet-leader-pyramid-based visual information fidelity method for image quality assessment. Motivated by the observations that the human visual system (HVS) is more sensitive to edge and contour regions and that human visual sensitivity varies with spatial frequency, we first introduce two-dimensional wavelet leader pyramids to robustly extract the multiscale information of edges. Based on the wavelet leader pyramids, we further propose a visual information fidelity metric to evaluate the quality of images by quantifying the information loss between the original and the distorted images. Experimental results show that our method outperforms many state-of-the-art image quality metrics.

  17. Image quality assessment for CT used on small animals

    NASA Astrophysics Data System (ADS)

    Cisneros, Isabela Paredes; Agulles-Pedrós, Luis

    2016-07-01

    Image acquisition on a CT scanner is nowadays part of almost any kind of medical study. Its purpose, to produce anatomical images of the best achievable quality, implies the highest diagnostic radiation exposure to patients. Image quality can be measured quantitatively from parameters such as noise, uniformity and resolution. This measurement allows the optimal operating parameters of the scanner to be determined in order to obtain the best diagnostic image. A human Philips CT scanner is the first in Colombia intended exclusively for veterinary use. The aim of this study was to measure the CT image quality parameters using an acrylic phantom and then, using the computational tool MATLAB, determine these parameters as a function of current value and visualization window, in order to reduce the delivered dose while keeping appropriate image quality.
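The noise and uniformity parameters named in this abstract are commonly computed from regions of interest (ROIs) in a uniform phantom image. The NumPy sketch below shows that idea; the ROI sizes, positions, and the `phantom_quality` function are illustrative assumptions, not the authors' MATLAB implementation.

```python
import numpy as np

def roi_stats(img, cy, cx, r):
    """Mean and standard deviation inside a square ROI of half-size r."""
    roi = img[cy - r:cy + r, cx - r:cx + r]
    return float(roi.mean()), float(roi.std())

def phantom_quality(img, r=10):
    """Noise (SD of the central ROI) and uniformity (largest absolute
    deviation of four peripheral ROI means from the central mean)."""
    h, w = img.shape
    c_mean, c_std = roi_stats(img, h // 2, w // 2, r)
    offsets = [(h // 4, w // 2), (3 * h // 4, w // 2),
               (h // 2, w // 4), (h // 2, 3 * w // 4)]
    deviations = [abs(roi_stats(img, y, x, r)[0] - c_mean)
                  for y, x in offsets]
    return {"noise": c_std, "uniformity": max(deviations)}

# A synthetic uniform phantom with Gaussian noise of SD 2.
rng = np.random.default_rng(0)
flat = np.full((128, 128), 100.0) + rng.normal(0, 2.0, (128, 128))
q = phantom_quality(flat)
assert 1.0 < q["noise"] < 3.0 and q["uniformity"] < 1.0
```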

  18. Dynamic flat panel detector versus image intensifier in cardiac imaging: dose and image quality

    NASA Astrophysics Data System (ADS)

    Vano, E.; Geiger, B.; Schreiner, A.; Back, C.; Beissel, J.

    2005-12-01

    The practical aspects of the dosimetric and imaging performance of a digital x-ray system for cardiology procedures were evaluated. The system was configured with an image intensifier (II) and later upgraded to a dynamic flat panel detector (FD). Entrance surface air kerma (ESAK) to phantoms of 16, 20, 24 and 28 cm of polymethyl methacrylate (PMMA) and the image quality of a test object were measured. Images were evaluated directly on the monitor and with numerical methods (noise and signal-to-noise ratio). Information contained in the DICOM header for dosimetry audit purposes was also tested. ESAK values per frame (or kerma rate) for the most commonly used cine and fluoroscopy modes for different PMMA thicknesses and for field sizes of 17 and 23 cm for II, and 20 and 25 cm for FD, produced similar results in the evaluated system with both technologies, ranging between 19 and 589 µGy/frame (cine) and 5 and 95 mGy min-1 (fluoroscopy). Image quality for these dose settings was better for the FD version. The 'study dosimetric report' is comprehensive, and its numerical content is sufficiently accurate. There is potential in the future to set those systems with dynamic FD to lower doses than are possible in the current II versions, especially for digital cine runs, or to benefit from improved image quality.

  19. The effect of lag on image quality for a digital breast tomosynthesis system

    NASA Astrophysics Data System (ADS)

    Mainprize, James G.; Wang, Xinying; Yaffe, Martin J.

    2009-02-01

    Digital breast tomosynthesis (DBT) is a limited-view, limited-angle computed tomography (CT) technique that has the potential to yield improved lesion conspicuity over that of standard digital mammography. To maintain short acquisition time, the detector must have a rapid temporal response. Transient effects like lag and ghosting have been noted previously in digital mammography systems, but for the times between successive views (approx. 1 minute), their impact on image quality is generally negligible. However, tomosynthesis imaging requires much shorter times between projection images (< 1 s). Under these conditions, detectors that may have been acceptable for digital mammography may not be suitable for tomosynthesis. Transient effects will generally cause both a loss of signal and an increase in image noise. A cascaded systems analysis is used to determine the effect of lag on image quality in a DBT system. It is shown that in the projection images, lag results in artifacts appearing as a "trail" of prior exposures. The effect of lag on image quality is also evaluated with a simple Monte Carlo simulation of a cone-beam tomosynthesis image formation incorporating a filtered back-projection algorithm.

  20. Improving the Quality of Imaging in the Emergency Department.

    PubMed

    Blackmore, C Craig; Castro, Alexandra

    2015-12-01

    Imaging is critical for the care of emergency department (ED) patients. However, much of the imaging performed for acute care today represents overutilization, creating substantial cost without significant benefit. Further, the value of imaging is not easily defined, as imaging affects outcomes only indirectly, through interaction with treatment. Improving the quality, including the appropriateness, of emergency imaging requires an understanding of how imaging contributes to patient care. The six-tier efficacy hierarchy of Fryback and Thornbury enables understanding of the value of imaging on multiple levels, ranging from technical efficacy to medical decision-making and higher-level patient and societal outcomes. The imaging efficacy hierarchy also allows definition of imaging quality through the Institute of Medicine (IOM)'s quality domains of safety, effectiveness, patient-centeredness, timeliness, efficiency, and equitability, and provides a foundation for quality improvement. In this article, the authors elucidate the Fryback and Thornbury framework to define the value of imaging in the ED and to relate emergency imaging to the IOM quality domains. PMID:26568040

  1. Quaternion structural similarity: a new quality index for color images.

    PubMed

    Kolaman, Amir; Yadid-Pecht, Orly

    2012-04-01

    One of the most important issues for researchers developing image processing algorithms is image quality. Methodical quality evaluation, by showing images to several human observers, is slow, expensive, and highly subjective. On the other hand, a visual quality metric (VQM) is a fast, cheap, and objective tool for evaluating image quality. Although most VQMs are good at predicting the quality of an image degraded by a single degradation, they perform poorly for a combination of two degradations. An example of such a degradation is the color crosstalk (CTK) effect, which introduces blur together with desaturation. CTK is expected to become a bigger issue in image quality as the industry moves toward smaller sensors. In this paper, we develop a VQM that better evaluates the quality of an image degraded by a combined blur/desaturation degradation and performs as well as other VQMs on single degradations such as blur, compression, and noise. We show why standard scalar techniques are insufficient to measure a combined blur/desaturation degradation and explain why a vectorial approach is better suited. We introduce quaternion image processing (QIP), which is a true vectorial approach with many uses in the fields of physics and engineering. Our new VQM is a vectorial expansion of structural similarity using QIP, which gave it its name: Quaternion Structural SIMilarity (QSSIM). We built a new database of a combined blur/desaturation degradation and conducted a quality survey with human subjects. An extensive comparison between QSSIM and other VQMs on several image quality databases, including our new database, shows the superiority of this new approach in predicting the visual quality of color images. PMID:22203713
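QSSIM generalizes scalar structural similarity to quaternion-valued color pixels. For orientation, here is a sketch of the scalar SSIM baseline (Wang and Bovik) that it extends; real SSIM implementations compute this over local windows, so this single-window global form is only illustrative.

```python
import numpy as np

def ssim_global(x, y, L=1.0, k1=0.01, k2=0.03):
    """Single-window SSIM over the whole image. QSSIM generalizes this
    scalar form to quaternion-valued color pixels."""
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(1)
img = rng.random((32, 32))
# Identical images score 1; an inverted image scores much lower.
assert abs(ssim_global(img, img) - 1.0) < 1e-9
assert ssim_global(img, 1.0 - img) < ssim_global(img, img)
```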

  2. Measuring Performance Excellence: Key Performance Indicators for Institutions Accepted into the Academic Quality Improvement Program (AQIP)

    ERIC Educational Resources Information Center

    Ballard, Paul J.

    2013-01-01

    Given growing interest in accountability and outcomes, the North Central Association's Higher Learning Commission developed a new path for accreditation, the Academic Quality Improvement Program (AQIP). The goal is to infuse continuous improvement and quality in the culture of higher education, and to blend traditional accreditation with the…

  3. Proceedings and findings of the 1976 Workshop on Ride Quality. [passenger acceptance of transportation systems

    NASA Technical Reports Server (NTRS)

    Kuhlthau, A. R. (Editor)

    1976-01-01

    The workshop was organized around the study of the three basic transfer functions required to evaluate and/or predict passenger acceptance of transportation systems: the vehicle, passenger, and value transfer functions. To establish working groups corresponding to the basic transfer functions, it was decided to split the vehicle transfer function into two distinct groups studying surface vehicles and air/marine vehicles, respectively.

  4. Image quality and dose efficiency of high energy phase sensitive x-ray imaging: Phantom studies

    PubMed Central

    Wong, Molly Donovan; Wu, Xizeng; Liu, Hong

    2014-01-01

    The goal of this preliminary study was to perform an image quality comparison of high energy phase sensitive imaging with low energy conventional imaging at similar radiation doses. The comparison was performed with the following phantoms: American College of Radiology (ACR), contrast-detail (CD), acrylic edge and tissue-equivalent. Visual comparison of the phantom images indicated comparable or improved image quality for all phantoms. Quantitative comparisons were performed through ACR and CD observer studies, both of which indicated higher image quality in the high energy phase sensitive images. The results of this study demonstrate the ability of high energy phase sensitive imaging to overcome existing challenges with the clinical implementation of phase contrast imaging and improve the image quality for a similar radiation dose as compared to conventional imaging near typical mammography energies. In addition, the results illustrate the capability of phase sensitive imaging to sustain the image quality improvement at high x-ray energies and for breast-simulating phantoms, both of which indicate the potential to benefit fields such as mammography. Future studies will continue to investigate the potential for dose reduction and image quality improvement provided by high energy phase sensitive contrast imaging. PMID:24865208

  5. Image processing system performance prediction and product quality evaluation

    NASA Technical Reports Server (NTRS)

    Stein, E. K.; Hammill, H. B. (Principal Investigator)

    1976-01-01

    The author has identified the following significant results. A new technique for image processing system performance prediction and product quality evaluation was developed. It was entirely objective, quantitative, and general, and should prove useful in system design and quality control. The technique and its application to determination of quality control procedures for the Earth Resources Technology Satellite NASA Data Processing Facility are described.

  6. A new assessment method for image fusion quality

    NASA Astrophysics Data System (ADS)

    Li, Liu; Jiang, Wanying; Li, Jing; Yuchi, Ming; Ding, Mingyue; Zhang, Xuming

    2013-03-01

    Image fusion quality assessment plays a critically important role in the field of medical imaging. To evaluate image fusion quality effectively, many assessment methods have been proposed, including mutual information (MI), root mean square error (RMSE), and the universal image quality index (UIQI). These methods, however, do not effectively reflect human visual inspection. To address this problem, we propose in this paper a novel image fusion assessment method that combines the nonsubsampled contourlet transform (NSCT) with regional mutual information. In the proposed method, the source medical images are first decomposed into different levels by the NSCT. Then the maximum NSCT coefficients of the decomposed directional images at each level are used to compute the regional mutual information (RMI). Finally, multi-channel RMI is computed as the weighted sum of the RMI values obtained at the various levels of the NSCT. The advantage of the proposed method lies in the fact that the NSCT represents image information at multiple directions and scales and therefore conforms to the multi-channel characteristic of the human visual system, leading to its outstanding assessment performance. Experimental results using CT and MRI images demonstrate that the proposed method outperforms MI- and UIQI-based measures in evaluating image fusion quality and provides results consistent with human visual assessment.
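The regional mutual information at the core of this method reduces, per region, to the standard histogram-based MI estimate between two images. Below is a minimal NumPy sketch; the bin count and images are illustrative, and the paper applies this regionally to NSCT coefficients rather than raw pixels.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """MI (in bits) between two images via their joint histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()                 # joint probability
    px = p.sum(axis=1, keepdims=True)       # marginal of a
    py = p.sum(axis=0, keepdims=True)       # marginal of b
    nz = p > 0
    return float((p[nz] * np.log2(p[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(2)
ref = rng.random((64, 64))
noise = rng.random((64, 64))
# An image shares far more information with itself than with noise.
assert mutual_information(ref, ref) > mutual_information(ref, noise)
```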

  7. Raman chemical imaging system for food safety and quality inspection

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Raman chemical imaging technique combines Raman spectroscopy and digital imaging to visualize composition and structure of a target, and it offers great potential for food safety and quality research. In this study, a laboratory-based Raman chemical imaging platform was designed and developed. The i...

  8. Diagnostic image quality of hysterosalpingography: ionic versus non ionic water soluble iodinated contrast media

    PubMed Central

    Mohd Nor, H; Jayapragasam, KJ; Abdullah, BJJ

    2009-01-01

    Objective To compare the diagnostic image quality of three different water-soluble iodinated contrast media in hysterosalpingography (HSG). Material and method In a prospective randomised study of 204 patients, the diagnostic quality of images obtained after hysterosalpingography was evaluated using Iopramide (106 patients) and Ioxaglate (98 patients). A further 114 patients who had undergone HSG examination using Iodamide were analysed retrospectively. Image quality was assessed by three radiologists independently, based on an objective set of criteria. The results were statistically analysed using the Kruskal-Wallis and Mann-Whitney U tests. Results Visualisation of the fimbrial rugae was significantly better with Iopramide and Ioxaglate than with Iodamide. All contrast media provided acceptable diagnostic image quality with regard to the uterine and fallopian tube outlines and peritoneal spill. Uterine opacification was too dense with all three contrast media and not optimal for the assessment of intrauterine pathology. A higher incidence of contrast intravasation was noted in the Iodamide group. Similarly, the number of patients diagnosed with bilaterally blocked fallopian tubes was also higher in the Iodamide group. Conclusion HSG using low-osmolar contrast media (Iopramide and Ioxaglate) demonstrated diagnostic image quality similar to HSG using conventional high-osmolar contrast media (Iodamide). However, all three contrast media were found to be too dense for the detection of intrauterine pathology. The better visualisation of the fimbrial outline using Ioxaglate and Iopramide was attributed to their lower viscosity. The increased incidence of contrast media intravasation and bilateral tubal blockage using Iodamide is probably related to its higher viscosity. PMID:21611058

  9. Study on the improvement of overall optical image quality via digital image processing

    NASA Astrophysics Data System (ADS)

    Tsai, Cheng-Mu; Fang, Yi Chin; Lin, Yu Chin

    2008-12-01

    This paper studies the effects of improving overall optical image quality via Digital Image Processing (DIP) and compares the enhanced optical image with the non-processed optical image. From the standpoint of the optical system alone, image quality is strongly influenced by chromatic and monochromatic aberrations. However, complete image capture systems, such as cellphones and digital cameras, include not only the basic optical system but also many other components, such as the electronic circuitry and the transducer system, whose quality directly affects the image quality of the whole picture. Therefore, in this paper Digital Image Processing technology is utilized to improve the overall image. Experiments show that the system modulation transfer function (MTF) based on the proposed DIP technology, applied to a comparatively poor optical system, can be comparable to, and possibly superior to, the system MTF of a good optical system.

  10. Automated quality assessment in three-dimensional breast ultrasound images.

    PubMed

    Schwaab, Julia; Diez, Yago; Oliver, Arnau; Martí, Robert; van Zelst, Jan; Gubern-Mérida, Albert; Mourri, Ahmed Bensouda; Gregori, Johannes; Günther, Matthias

    2016-04-01

    Automated three-dimensional breast ultrasound (ABUS) is a valuable adjunct to x-ray mammography for breast cancer screening of women with dense breasts. High image quality is essential for proper diagnostics and computer-aided detection. We propose an automated image quality assessment system for ABUS images that detects artifacts at the time of acquisition. Therefore, we study three aspects that can corrupt ABUS images: the nipple position relative to the rest of the breast, the shadow caused by the nipple, and the shape of the breast contour on the image. Image processing and machine learning algorithms are combined to detect these artifacts based on 368 clinical ABUS images that have been rated manually by two experienced clinicians. At a specificity of 0.99, 55% of the images that were rated as low quality are detected by the proposed algorithms. The areas under the ROC curves of the single classifiers are 0.99 for the nipple position, 0.84 for the nipple shadow, and 0.89 for the breast contour shape. The proposed algorithms work fast and reliably, which makes them adequate for online evaluation of image quality during acquisition. The presented concept may be extended to further image modalities and quality aspects. PMID:27158633

  11. Comparative assessment of three image reconstruction techniques for image quality and radiation dose in patients undergoing abdominopelvic multidetector CT examinations

    PubMed Central

    Desai, G S; Thabet, A; Elias, A Y A; Sahani, D V

    2013-01-01

    Objective To compare image quality and radiation dose of abdominal CT examinations reconstructed with three image reconstruction techniques. Methods In this Institutional Review Board-approved study, contrast-enhanced (CE) abdominopelvic CT scans from 23 patients were reconstructed using filtered back projection (FBP), adaptive statistical iterative reconstruction (ASiR) and iterative reconstruction in image space (IRIS) and were reviewed by two blinded readers. Subjective (acceptability, sharpness, noise and artefacts) and objective (noise) measures of image quality were recorded for each image data set. Radiation doses, as CT dose index (CTDI) and dose–length product, were also calculated for each examination type and compared. Imaging parameters were compared using the Wilcoxon signed rank test and a paired t-test. Results All 69 CECT examinations were of diagnostic quality and similar in overall acceptability (mean grade for ASiR, 3.9±0.3, p=0.2 for Readers 1 and 2; IRIS, 3.9±0.4, p=0.2; FBP, 3.8±0.9). Objective noise was considerably lower with both iterative techniques (p<0.0001 and 0.0016 for ASiR and IRIS). The recorded mean radiation dose (CTDIvol) was 24% and 10% less with ASiR (11.4±3.4 mGy; p<0.001) and IRIS (13.5±3.7 mGy; p=0.06), respectively, than with FBP (15.0±3.5 mGy). Conclusion At the system parameters used in this study, abdominal CT scans reconstructed with ASiR and IRIS provide diagnostic images with reduced image noise and 10–24% lower radiation dose than FBP. Advances in knowledge CT images reconstructed with FBP are frequently noisy when the radiation dose is lowered. Newer iterative reconstruction techniques take different approaches to producing images with less noise; as documented in this study, ASiR and IRIS provide diagnostic abdominal CT images with reduced image noise and radiation dose compared with FBP. PMID:23255538

  12. Retinal image quality assessment through a visual similarity index

    NASA Astrophysics Data System (ADS)

    Pérez, Jorge; Espinosa, Julián; Vázquez, Carmen; Mas, David

    2013-04-01

    Retinal image quality is commonly analyzed through parameters inherited from instrumental optics. These parameters are defined for 'good optics' so they are hard to translate into visual quality metrics. Instead of using point or artificial functions, we propose a quality index that takes into account properties of natural images. These images usually show strong local correlations that help to interpret the image. Our aim is to derive an objective index that quantifies the quality of vision by taking into account the local structure of the scene, instead of focusing on a particular aberration. As we show, this index highly correlates with visual acuity and allows inter-comparison of natural images around the retina. The usefulness of the index is proven through the analysis of real eyes before and after undergoing corneal surgery, which usually are hard to analyze with standard metrics.

  13. No-reference visual quality assessment for image inpainting

    NASA Astrophysics Data System (ADS)

    Voronin, V. V.; Frantc, V. A.; Marchuk, V. I.; Sherstobitov, A. I.; Egiazarian, K.

    2015-03-01

    Inpainting has received a lot of attention in recent years, and quality assessment is an important task in evaluating different image reconstruction approaches. In many cases inpainting methods introduce blur at sharp transitions and image contours when recovering large areas of missing pixels, and often fail to recover curved boundary edges. Quantitative metrics for inpainting results currently do not exist, and researchers rely on human comparisons to evaluate their methodologies and techniques. Most objective quality assessment methods rely on a reference image, which is often not available in inpainting applications; researchers therefore usually resort to subjective quality assessment by human observers, a difficult and time-consuming procedure. This paper focuses on a machine learning approach to no-reference visual quality assessment for image inpainting based on properties of human vision. Our method builds on the observation that local binary patterns describe the local structural information of an image well. We use a support vector regression, trained on images assessed by humans, to predict the perceived quality of inpainted images. We demonstrate how our predicted quality value correlates with qualitative opinion in a human observer study. Results are shown on a human-scored dataset for different inpainting methods.
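The local-binary-pattern features mentioned above can be sketched directly in NumPy. In the paper these histograms feed a support vector regressor trained on human scores; that regression step is omitted here, and the 8-neighbour, 256-bin variant below is an illustrative assumption rather than the authors' exact feature set.

```python
import numpy as np

def lbp_histogram(img):
    """8-neighbour local binary pattern histogram (256 bins, normalized),
    a compact descriptor of local image structure."""
    c = img[1:-1, 1:-1]                      # interior pixels
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        neigh = img[1 + dy:img.shape[0] - 1 + dy,
                    1 + dx:img.shape[1] - 1 + dx]
        code |= (neigh >= c).astype(np.uint8) << bit
    hist = np.bincount(code.ravel(), minlength=256).astype(float)
    return hist / hist.sum()

rng = np.random.default_rng(3)
img = rng.random((64, 64))
h = lbp_histogram(img)
assert h.shape == (256,) and abs(h.sum() - 1.0) < 1e-9
```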

  14. Validation of an image-based technique to assess the perceptual quality of clinical chest radiographs with an observer study

    NASA Astrophysics Data System (ADS)

    Lin, Yuan; Choudhury, Kingshuk R.; McAdams, H. Page; Foos, David H.; Samei, Ehsan

    2014-03-01

    We previously proposed a novel image-based quality assessment technique1 to assess the perceptual quality of clinical chest radiographs. In this paper, an observer study was designed and conducted to systematically validate this technique. Ten metrics were involved in the observer study: lung grey level, lung detail, lung noise, rib-lung contrast, rib sharpness, mediastinum detail, mediastinum noise, mediastinum alignment, subdiaphragm-lung contrast, and subdiaphragm area. For each metric, three tasks were successively presented to the observers. In each task, six ROI images were randomly presented in a row and observers were asked to rank the images based only on a designated quality, disregarding the other qualities. A range slider above the images was used by observers to indicate the acceptable range for the corresponding perceptual attribute. Five board-certified radiologists from Duke participated in this observer study on a DICOM-calibrated diagnostic display workstation under low ambient lighting conditions. The observer data were analyzed in terms of the correlations between the observer ranking orders and the algorithmic ranking orders. From the collected acceptable ranges, quality consistency ranges were statistically derived. The observer study showed that, for each metric, the averaged ranking orders of the participating observers were strongly correlated with the algorithmic orders. For the lung grey level, the observer ranking orders accorded completely with the algorithmic ranking orders. The quality consistency ranges derived from this observer study were close to those derived from our previous study. The observer study indicates that the proposed image-based quality assessment technique provides a robust reflection of the perceptual image quality of clinical chest radiographs. The derived quality consistency ranges can be used to automatically predict the acceptability of a clinical chest radiograph.
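The correlation between observer and algorithmic ranking orders described above can be quantified with Spearman's rank correlation. A minimal sketch for tie-free rankings follows (the six-image tasks in the study give n = 6); the abstract does not state which correlation measure was used, so this is an illustrative choice.

```python
import numpy as np

def spearman_rho(obs_rank, alg_rank):
    """Spearman rank correlation for two tie-free rankings of the
    same items, via the classic 1 - 6*sum(d^2)/(n*(n^2-1)) formula."""
    d = np.asarray(obs_rank, float) - np.asarray(alg_rank, float)
    n = len(d)
    return 1.0 - 6.0 * (d ** 2).sum() / (n * (n ** 2 - 1))

# Perfect agreement gives rho = 1; one swapped pair lowers it;
# a fully reversed ranking gives rho = -1.
assert spearman_rho([1, 2, 3, 4, 5, 6], [1, 2, 3, 4, 5, 6]) == 1.0
assert spearman_rho([1, 2, 3, 4, 5, 6], [2, 1, 3, 4, 5, 6]) < 1.0
assert spearman_rho([1, 2, 3, 4, 5, 6], [6, 5, 4, 3, 2, 1]) == -1.0
```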

  15. Improvement of image quality by polarization mixing

    NASA Astrophysics Data System (ADS)

    Kasahara, Ryosuke; Itoh, Izumi; Hirai, Hideaki

    2014-03-01

    Information about the polarization of light is valuable because it carries information about the light source illuminating an object, the illumination angle, and the object material. However, polarization information depends strongly on the direction of the light source, and it is difficult to use a polarization image with various recognition algorithms outdoors because the angle of the sun varies. We propose an image enhancement method for utilizing polarization information in the many situations where the light source is not fixed. We take two approaches to this problem. First, we compute an image that combines a polarization image with the corresponding brightness image. Depending on the angle of the light source, the polarization may contain no information about a scene, so it is difficult to rely on polarization information alone for applications such as object detection; combining a polarization image with a brightness image lets the brightness image compensate for the missing scene information. The second approach is to find features that depend less on the direction of the light source: we propose a method for extracting scene features based on a calculation of the reflection model including polarization effects. A polarization camera with micro-polarizers on each pixel of the image sensor was built and used for capturing images. We discuss examples that demonstrate the improved visibility of objects achieved by applying the proposed method to, e.g., lane markers on wet roads.

  16. Service User- and Carer-Reported Measures of Involvement in Mental Health Care Planning: Methodological Quality and Acceptability to Users

    PubMed Central

    Gibbons, Chris J.; Bee, Penny E.; Walker, Lauren; Price, Owen; Lovell, Karina

    2014-01-01

    Background: Increasing service user and carer involvement in mental health care planning is a key healthcare priority but one that is difficult to achieve in practice. To better understand and measure user and carer involvement, it is crucial to have measurement questionnaires that are both psychometrically robust and acceptable to the end user. Methods: We conducted a systematic review using the terms “care plan$,” “mental health,” “user perspective$,” and “user participation” and their linguistic variants as search terms. Databases were searched from inception to November 2012, with an update search at the end of September 2014. We included any articles that described the development, validation or use of user- and/or carer-reported outcome measures of involvement in mental health care planning. We assessed the psychometric quality of each instrument using the “Evaluating the Measurement of Patient-Reported Outcomes” (EMPRO) criteria. The acceptability of each instrument was assessed using novel criteria developed in consultation with a mental health service user and carer consultation group. Results: We identified eleven papers describing the use, development, and/or validation of nine user/carer-reported outcome measures. Psychometric properties were sparsely reported and the questionnaires met few service user/carer-nominated attributes for acceptability. Where reported, basic psychometric statistics were of good quality, indicating that some measures may perform well if subjected to more rigorous psychometric tests. The majority were deemed too long for use in practice. Discussion: Multiple instruments are available to measure user/carer involvement in mental health care planning but are either of poor quality or poorly described. Existing measures cannot be considered psychometrically robust by modern standards, and cannot currently be recommended for use. Our review has identified an important knowledge gap, and an urgent need to

  17. Meat quality evaluation by hyperspectral imaging technique: an overview.

    PubMed

    Elmasry, Gamal; Barbin, Douglas F; Sun, Da-Wen; Allen, Paul

    2012-01-01

    During the last two decades, a number of methods have been developed to objectively measure meat quality attributes. Hyperspectral imaging technique as one of these methods has been regarded as a smart and promising analytical tool for analyses conducted in research and industries. Recently there has been a renewed interest in using hyperspectral imaging in quality evaluation of different food products. The main inducement for developing the hyperspectral imaging system is to integrate both spectroscopy and imaging techniques in one system to make direct identification of different components and their spatial distribution in the tested product. By combining spatial and spectral details together, hyperspectral imaging has proved to be a promising technology for objective meat quality evaluation. The literature presented in this paper clearly reveals that hyperspectral imaging approaches have a huge potential for gaining rapid information about the chemical structure and related physical properties of all types of meat. In addition to its ability for effectively quantifying and characterizing quality attributes of some important visual features of meat such as color, quality grade, marbling, maturity, and texture, it is able to measure multiple chemical constituents simultaneously without monotonous sample preparation. Although this technology has not yet been sufficiently exploited in meat process and quality assessment, its potential is promising. Developing a quality evaluation system based on hyperspectral imaging technology to assess the meat quality parameters and to ensure its authentication would bring economical benefits to the meat industry by increasing consumer confidence in the quality of the meat products. 
This paper provides a detailed overview of the recently developed approaches and latest research efforts exerted in hyperspectral imaging technology developed for evaluating the quality of different meat products and the possibility of its widespread

  18. Links Among Italian Preschoolers’ Socio-Emotional Competence, Teacher-Child Relationship Quality and Peer Acceptance

    PubMed Central

    Sette, Stefania; Spinrad, Tracy; Baumgartner, Emma

    2013-01-01

    The purpose of the present study was to examine the relations of teacher-child relationship quality (close, conflictive, and dependent), children’s social behavior, and peer likability in a sample of Italian preschool-aged children (46 boys; 42 girls). Preschool teachers evaluated the quality of the teacher-child relationship and children’s social behaviors (i.e., social competence, anger-aggression, and anxiety-withdrawal). Peer-rated likability was measured using a sociometric procedure. Results indicated that conflictual teacher-child relationships were related to high aggressive behavior, and dependent teacher-child relationships were positively associated with children’s anxiety-withdrawal. Moreover, we found an indirect association between close teacher-child relationship quality and peer likability through children’s social competence. The findings provide evidence that the teacher-child relationship is critical for children’s social behaviors, and that social competence was uniquely related to peer likability. PMID:24039375

  19. Image quality assessment by preprocessing and full reference model combination

    NASA Astrophysics Data System (ADS)

    Bianco, S.; Ciocca, G.; Marini, F.; Schettini, R.

    2009-01-01

    This paper focuses on full-reference image quality assessment and presents different computational strategies aimed at improving the robustness and accuracy of some well-known and widely used state-of-the-art models, namely the Structural Similarity approach (SSIM) of Wang and Bovik and the S-CIELAB spatial-color model of Zhang and Wandell. We investigate the hypothesis that combining error images with a visual attention model allows a better fit to the psycho-visual data of the LIVE Image Quality Assessment Database Release 2. We show that the proposed quality assessment metric correlates better with the experimental data.
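The SSIM model referenced in this record can be illustrated with a minimal numpy sketch. This computes a single global SSIM statistic with the standard stabilizing constants; the published index averages the same statistic over local 11x11 Gaussian-weighted windows, so this is a simplification, and the images below are synthetic stand-ins.

```python
import numpy as np

def global_ssim(x, y, data_range=255.0):
    """Single-window SSIM statistic (after Wang & Bovik).

    The published metric averages this over local Gaussian-weighted
    windows; one global window is used here for brevity."""
    C1 = (0.01 * data_range) ** 2   # standard stabilizing constants
    C2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + C1) * (2 * cov + C2)) / (
        (mx ** 2 + my ** 2 + C1) * (vx + vy + C2))

rng = np.random.default_rng(0)
ref = rng.uniform(0.0, 255.0, (64, 64))                          # stand-in reference image
noisy = np.clip(ref + rng.normal(0.0, 25.0, ref.shape), 0.0, 255.0)
```

An identical image pair scores exactly 1, and any distortion pulls the score below 1, which is the property the full-reference models above exploit.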

  20. A comparison of defect size and film quality obtained from film-digitized image and digital image radiographs

    NASA Astrophysics Data System (ADS)

    Kamlangkeng, Poramate; Asa, Prateepasen; Mai, Noipitak

    2014-06-01

    Digital radiographic testing is a recently accepted nondestructive examination technique, but its performance and limitations compared with the older film technique are still not widely known. This paper presents a study comparing the accuracy of defect-size measurement and image quality obtained from film and digital radiographic techniques, using test specimens and a sample defect of known size. Initially, one specimen was built with three types of internal defect: longitudinal cracking, lack of fusion, and porosity. The known-size sample defect was machined to various geometrical sizes so that measured defect sizes could be compared with the real sizes in both film and digital images. Image quality was compared by examining the smallest detectable wire and the three defect images; an Image Quality Indicator (IQI) of wire type 10/16 FE per BS EN 462-1:1994 was used. The radiographic films were produced by X-ray and gamma ray using Kodak AA400 film (3.5 x 8 inches), while the digital images were produced with a Fuji type ST-VI image plate at 100 micrometer resolution. During the tests a GE model MF3 generator was used, with applied energy varied from 120 to 220 kV and current from 1.2 to 3.0 mA; the intensity of the Iridium-192 gamma source was in the range of 24-25 Curie. Under these conditions, the results showed that the deviation of measured defect size from real size was lower for the digital image radiographs than for the digitized film, whereas the digitized film radiographs had higher image quality.

  1. Method and tool for generating and managing image quality allocations through the design and development process

    NASA Astrophysics Data System (ADS)

    Sparks, Andrew W.; Olson, Craig; Theisen, Michael J.; Addiego, Chris J.; Hutchins, Tiffany G.; Goodman, Timothy D.

    2016-05-01

    Performance models for infrared imaging systems require image quality parameters; optical design engineers need image quality design goals; systems engineers develop image quality allocations to test imaging systems against. It is a challenge to maintain consistency and traceability amongst the various expressions of image quality. We present a method and parametric tool for generating and managing expressions of image quality during the system modeling, requirements specification, design, and testing phases of an imaging system design and development project.

  2. Image quality assessment using Takagi-Sugeno-Kang fuzzy model

    NASA Astrophysics Data System (ADS)

    Đorđević, Dragana; Kukolj, Dragan; Schelkens, Peter

    2015-03-01

    The main aim of this paper is to present a non-linear image quality assessment model based on a fuzzy logic estimator, namely the Takagi-Sugeno-Kang fuzzy model. This image quality assessment model uses a clustered space of input objective metrics. The main advantages of the introduced quality model are the simplicity and understandability of its fuzzy rules. A 3rd-order polynomial model was chosen as the reference model. The parameters of the Takagi-Sugeno-Kang fuzzy model are optimized according to the criterion of mapping the selected set of input objective quality measures to the Mean Opinion Score (MOS) scale.
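The inference scheme described here (Gaussian memberships over a clustered space of objective metrics, mapped to a MOS-scale output) can be sketched with a zero-order TSK model. The rule centres, widths, and consequent MOS values below are illustrative assumptions, not the paper's fitted parameters.

```python
import numpy as np

def tsk_predict(x, centers, sigmas, consequents):
    """Zero-order Takagi-Sugeno-Kang inference.

    Each rule has a Gaussian membership over the input metric vector
    and a constant consequent (a MOS value); the output is the
    firing-strength-weighted average of the consequents."""
    x = np.asarray(x, float)
    # firing strength = product of per-dimension Gaussian memberships
    w = np.exp(-np.sum((x - centers) ** 2 / (2 * sigmas ** 2), axis=1))
    return float(np.dot(w, consequents) / w.sum())

# two illustrative rules over a single objective metric (e.g. a PSNR-like score)
centers = np.array([[25.0], [40.0]])   # cluster centres in metric space
sigmas = np.array([[5.0], [5.0]])      # membership widths
mos = np.array([2.0, 4.5])             # constant consequent MOS per rule
```

In the paper these parameters would be optimized against subjective MOS data; here they simply encode "low metric, low quality" and "high metric, high quality" rules.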

  3. Maintaining Acceptable Indoor Air Quality during the Renovation of a School. Technical Bulletin.

    ERIC Educational Resources Information Center

    Jacobs, Bruce W.

    Information that school facility personnel can use concerning the potential impacts of renovation projects on indoor air quality (IAQ), along with details of some effective control strategies, is presented. Various kinds of contaminants may be generated by renovations, including volatile and semivolatile organic compounds, dusts and fibers (e.g.,…

  4. INFLUENCE OF FORAGE SPECIES ON PASTURE PERFORMANCE, CARCASS QUALITY, AND CONSUMER ACCEPTABILITY

    Technology Transfer Automated Retrieval System (TEKTRAN)

    British-type steers of predominantly Angus breeding were used to determine the influence of forage species fed during the final 30 to 45 days of finishing on performance, carcass characteristics, and meat quality. Finishing treatments included: 1) Mixed cool season pasture [bluegrass, orchardgrass,...

  5. The meaning of air quality and flue gas emission standards for public acceptance of new thermal power plants.

    PubMed

    Barbalić, N; Marijan, G; Marić, M

    2000-06-01

    For the time being only 30-40% of the electric energy supply in Croatia comes from burning fossil fuel. New capacities of 800-1400 MW for the next decade will have to rely on the exclusive use of fossil fuels in thermal power plants (TPP). Public opinion will probably have a decisive influence on the issuing of construction permissions. The potential adverse effects on air seem to be the main argument against construction of TPPs. The priority is therefore to unambiguously state what air quality is warranted in the influenced area for the whole operation period of a TPP. It is important that the public should understand the real meaning of current air quality standards and emission limits. The only known way to do it today is through comparison with the corresponding standards and limits accepted worldwide. This paper discusses some important aspects of such comparison. PMID:11103526

  6. Objective analysis of image quality of video image capture systems

    NASA Astrophysics Data System (ADS)

    Rowberg, Alan H.

    1990-07-01

    As Picture Archiving and Communication System (PACS) technology has matured, video image capture has become a common way of capturing digital images from many modalities. While digital interfaces, such as those which use the ACR/NEMA standard, will become more common in the future, and are preferred because of the accuracy of image transfer, video image capture will be the dominant method in the short term, and may continue to be used for some time because of the low cost and high speed often associated with such devices. Currently, virtually all installed systems use methods of digitizing the video signal that is produced for display on the scanner viewing console itself. A series of digital test images has been developed for display on either a GE CT9800 or a GE Signa MRI scanner. These images have been captured with each of five commercially available image capture systems, and the resultant images digitally transferred on floppy disk to a PC1286 computer containing Optimast image analysis software. Here the images can be displayed in a comparative manner for visual evaluation, in addition to being analyzed statistically. Each of the images has been designed to support certain tests, including noise, accuracy, linearity, gray scale range, stability, slew rate, and pixel alignment. These image capture systems vary widely in these characteristics, in addition to the presence or absence of other artifacts, such as shading and moire pattern. Other accessories such as video distribution amplifiers and noise filters can also add or modify artifacts seen in the captured images, often giving unusual results. Each image is described, together with the tests which were performed using them. One image contains alternating black and white lines, each one pixel wide, after equilibration strips ten pixels wide. While some systems have a slew rate fast enough to track this correctly, others blur it to an average shade of gray, and do not resolve the lines, or give
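The slew-rate test image described at the end of this record (ten-pixel equilibration strips followed by alternating one-pixel black and white lines) is simple to synthesize. The overall dimensions below are assumptions; the abstract specifies only the strip and line widths.

```python
import numpy as np

def slew_rate_pattern(height=64, strip=10, pairs=16):
    """Build a test pattern like the one described: a black then a white
    equilibration strip, each `strip` pixels wide, followed by `pairs`
    alternating one-pixel black/white line pairs.  A capture system with
    an adequate slew rate reproduces the lines; a slow one blurs them
    toward an average gray."""
    cols = [np.zeros(strip), np.full(strip, 255.0)]   # equilibration strips
    cols.append(np.tile([0.0, 255.0], pairs))         # one-pixel line pairs
    row = np.concatenate(cols)
    return np.tile(row, (height, 1))

img = slew_rate_pattern()
```

Comparing a capture of this pattern against the original (e.g. checking that the one-pixel columns still reach 0 and 255) gives the objective slew-rate test the record describes.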

  7. Feature maps driven no-reference image quality prediction of authentically distorted images

    NASA Astrophysics Data System (ADS)

    Ghadiyaram, Deepti; Bovik, Alan C.

    2015-03-01

    Current blind image quality prediction models rely on benchmark databases comprised of singly and synthetically distorted images, thereby learning image features that are only adequate to predict human perceived visual quality on such inauthentic distortions. However, real world images often contain complex mixtures of multiple distortions. Rather than a) discounting the effect of these mixtures of distortions on an image's perceptual quality and considering only the dominant distortion or b) using features that are only proven to be efficient for singly distorted images, we deeply study the natural scene statistics of authentically distorted images, in different color spaces and transform domains. We propose a feature-maps-driven statistical approach which avoids any latent assumptions about the type of distortion(s) contained in an image, and focuses instead on modeling the remarkable consistencies in the scene statistics of real world images in the absence of distortions. We design a deep belief network that takes model-based statistical image features derived from a very large database of authentically distorted images as input and discovers good feature representations by generalizing over different distortion types, mixtures, and severities, which are later used to learn a regressor for quality prediction. We demonstrate the remarkable competence of our features for improving automatic perceptual quality prediction on a benchmark database and on the newly designed LIVE Authentic Image Quality Challenge Database and show that our approach of combining robust statistical features and the deep belief network dramatically outperforms the state-of-the-art.
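One widely used family of "model-based statistical image features" in such natural-scene-statistics approaches (not necessarily the exact features of this paper) is the mean-subtracted, contrast-normalized (MSCN) coefficient map. The sketch below uses a box window for the local statistics; published models typically use a Gaussian-weighted window.

```python
import numpy as np

def mscn(image, k=3, C=1.0):
    """Mean-subtracted, contrast-normalized (MSCN) coefficients.

    Each pixel is normalized by its local mean and local standard
    deviation over a k x k window; for pristine natural images the
    resulting coefficients are strongly Gaussian-like, and distortions
    perturb that statistical regularity."""
    img = np.asarray(image, float)
    pad = k // 2
    p = np.pad(img, pad, mode="reflect")
    # local mean and standard deviation via a sliding box window
    win = np.lib.stride_tricks.sliding_window_view(p, (k, k))
    mu = win.mean(axis=(-1, -2))
    sigma = np.sqrt(np.clip(win.var(axis=(-1, -2)), 0.0, None))
    return (img - mu) / (sigma + C)

rng = np.random.default_rng(1)
img = rng.uniform(0.0, 255.0, (32, 32))   # stand-in image
coeffs = mscn(img)
```

Feature vectors for a quality regressor are then typically built from the empirical distribution of these coefficients (and of products of neighboring coefficients) in several color spaces and transform domains, as the record describes.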

  8. Impact of image acquisition timing on image quality for dual energy contrast-enhanced breast tomosynthesis

    NASA Astrophysics Data System (ADS)

    Hill, Melissa L.; Mainprize, James G.; Puong, Sylvie; Carton, Ann-Katherine; Iordache, Razvan; Muller, Serge; Yaffe, Martin J.

    2012-03-01

    Dual-energy contrast-enhanced digital breast tomosynthesis (DE CE-DBT) image quality is affected by a large parameter space including the tomosynthesis acquisition geometry, imaging technique factors, the choice of reconstruction algorithm, and the subject breast characteristics. The influence of most of these factors on reconstructed image quality is well understood for DBT. However, due to the contrast agent uptake kinetics in CE imaging, the subject breast characteristics change over time, presenting a challenge for optimization. In this work we experimentally evaluate the sensitivity of the reconstructed image quality to timing of the low-energy and high-energy images and changes in iodine concentration during image acquisition. For four contrast uptake patterns, a variety of acquisition protocols were tested with different timing and geometry. The influence of the choice of reconstruction algorithm (SART or FBP) was also assessed. Image quality was evaluated in terms of the lesion signal-difference-to-noise ratio (LSDNR) in the central slice of DE CE-DBT reconstructions. Results suggest that for maximum image quality, the low- and high-energy image acquisitions should be made within one x-ray tube sweep, as separate low- and high-energy tube sweeps can degrade LSDNR. In terms of LSDNR per square-root dose, the image quality is nearly equal between SART reconstructions with 9 and 15 angular views, but using fewer angular views can result in a significant improvement in the quantitative accuracy of the reconstructions due to the shorter imaging time interval.
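The LSDNR figure of merit used in this record reduces to a simple ROI statistic. The exact ROI definitions are not given in the abstract, so the sketch below takes noise as the background standard deviation, which is one common convention; the ROI values are synthetic.

```python
import numpy as np

def lsdnr(lesion_roi, background_roi):
    """Lesion signal-difference-to-noise ratio: mean signal difference
    between lesion and background ROIs, divided by the background
    standard deviation (one common convention; the paper's exact ROI
    and noise definitions are not stated in the abstract)."""
    diff = np.mean(lesion_roi) - np.mean(background_roi)
    return diff / np.std(background_roi)

rng = np.random.default_rng(4)
background = rng.normal(100.0, 5.0, 10_000)   # hypothetical background ROI values
lesion = rng.normal(150.0, 5.0, 10_000)       # hypothetical enhancing-lesion ROI values
```

Dividing such an LSDNR by the square root of dose gives the dose-normalized comparison metric the record uses to compare 9-view and 15-view SART reconstructions.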

  9. Interplay between JPEG-2000 image coding and quality estimation

    NASA Astrophysics Data System (ADS)

    Pinto, Guilherme O.; Hemami, Sheila S.

    2013-03-01

    Image quality and utility estimators aspire to quantify the perceptual resemblance and the usefulness of a distorted image when compared to a reference natural image, respectively. Image-coders, such as JPEG-2000, traditionally aspire to allocate the available bits to maximize the perceptual resemblance of the compressed image when compared to a reference uncompressed natural image. Specifically, this can be accomplished by allocating the available bits to minimize the overall distortion, as computed by a given quality estimator. This paper applies five image quality and utility estimators, SSIM, VIF, MSE, NICE and GMSE, within a JPEG-2000 encoder for rate-distortion optimization to obtain new insights on how to improve JPEG-2000 image coding for quality and utility applications, as well as to improve the understanding about the quality and utility estimators used in this work. This work develops a rate-allocation algorithm for arbitrary quality and utility estimators within the Post-Compression Rate-Distortion Optimization (PCRD-opt) framework in JPEG-2000 image coding. Performance of the JPEG-2000 image coder when used with a variety of utility and quality estimators is then assessed. The estimators fall into two broad classes, magnitude-dependent (MSE, GMSE and NICE) and magnitude-independent (SSIM and VIF). They further differ on their use of the low-frequency image content in computing their estimates. The impact of these computational differences is analyzed across a range of images and bit rates. In general, performance of the JPEG-2000 coder below 1.6 bits/pixel with any of these estimators is highly content dependent, with the most relevant content being the amount of texture in an image and whether the strongest gradients in an image correspond to the main contours of the scene. Above 1.6 bits/pixel, all estimators produce visually equivalent images.
As a result, the MSE estimator provides the most consistent performance across all images, while specific
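The PCRD-opt idea this record builds on can be shown with a toy greedy allocator: each code-block offers coding passes with a rate cost and a distortion reduction (measured by whichever estimator is plugged in), and passes are taken steepest-slope first until the budget runs out. This is a sketch only; real JPEG-2000 encoders first reduce each block's passes to their convex hull and then threshold on a global rate-distortion slope.

```python
def pcrd_allocate(blocks, budget):
    """Greedy sketch of PCRD-opt-style rate allocation.

    `blocks` is a list of code-blocks; each code-block is a list of
    coding passes given as (rate_bytes, distortion_reduction) pairs,
    assumed to have decreasing distortion-reduction-per-byte slopes
    (as a convex-hull analysis would produce).  Returns the number of
    passes kept per block and the bytes spent."""
    chosen = [0] * len(blocks)   # passes kept per code-block
    spent = 0
    while True:
        best, best_slope = None, 0.0
        for i, passes in enumerate(blocks):
            if chosen[i] < len(passes):
                rate, dd = passes[chosen[i]]
                # take the affordable pass with the steepest slope
                if spent + rate <= budget and dd / rate > best_slope:
                    best, best_slope = i, dd / rate
        if best is None:
            return chosen, spent
        spent += blocks[best][chosen[best]][0]
        chosen[best] += 1

# two hypothetical code-blocks; rates and distortion figures are illustrative
blocks = [[(10, 100.0), (10, 50.0)], [(10, 80.0)]]
```

Swapping the distortion-reduction figures for values computed by SSIM, VIF, or GMSE is exactly the arbitrary-estimator generalization the paper describes.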

  10. Image quality requirements for the digitization of photographic collections

    NASA Astrophysics Data System (ADS)

    Frey, Franziska S.; Suesstrunk, Sabine E.

    1996-02-01

    Managers of photographic collections in libraries and archives are exploring digital image database systems, but they usually have few sources of technical guidance and analysis available. Correctly digitizing photographs puts high demands on the imaging system and the human operators involved in the task. Pictures are very dense with information, requiring high-quality scanning procedures. In order to provide advice to libraries and archives seeking to digitize photographic collections, it is necessary to thoroughly understand the nature of the various originals and the purposes for digitization. Only with this understanding is it possible to choose adequate image quality for the digitization process. The higher the quality, the more expertise, time, and cost is likely to be involved in generating and delivering the image. Despite all the possibilities for endless copying, distributing, and manipulating of digital images, image quality choices made when the files are first created have the same 'finality' that they have in conventional photography. They will have a profound effect on project cost, the value of the final project to researchers, and the usefulness of the images as preservation surrogates. Image quality requirements therefore have to be established carefully before a digitization project starts.

  11. Evaluation of indoor air quality using the decibel concept. Part II-ventilation for acceptable indoor air quality.

    PubMed

    Jokl, M V

    1997-03-01

    Weber-Fechner's law, under which human perception of sound is expressed as a logarithmic function of the stimulus, can also be applied to the odour constituent used in evaluating indoor air quality in buildings. A new unit, dB (odour), based on the concentration of Total Volatile Organic Compounds (TVOC), is proposed, as TVOC is currently the basis for determining the air change rate. On the Psycho-Physical Scale according to Yaglou, the weakest odour that can be detected by the human smell sensors is equal to one and corresponds to a lower limit of percentage dissatisfaction (PD) of 5.8% and a threshold concentration (TVOC) of 50 micrograms/m3, i.e., 0 dB (odour). The upper limit is determined by the initial toxicity value of TVOC, 25,000 micrograms/m3, i.e., 135 dB (odour). Optimal values corresponding to PD = 20% (according to EUR 14449 EN) and admissible values corresponding to PD = 30% (see Part I of this paper) are proposed. Therefore the same values used to evaluate noise can be used to evaluate air quality, and additionally the contribution of individual constituents (at present acoustic and odour) to the overall quality of the environment can be ascertained. PMID:9150998
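The two anchor points quoted in this record (50 micrograms/m3 TVOC at 0 dB (odour) and 25,000 micrograms/m3 at 135 dB (odour)) are consistent with a base-10 logarithmic scale at 50 dB per decade. That slope is an inference from the abstract's endpoints, not a formula quoted from the paper.

```python
import math

def db_odour(tvoc_ug_m3, threshold=50.0):
    """dB (odour) level implied by the abstract's two anchor points.

    50 ug/m3 maps to 0 dB and 25,000 ug/m3 to ~135 dB, which matches a
    base-10 log scale with a 50 dB/decade slope:
        50 * log10(25000 / 50) = 50 * log10(500) ~= 134.95 dB.
    (Inferred from the abstract; the paper may define the scale
    differently in detail.)"""
    return 50.0 * math.log10(tvoc_ug_m3 / threshold)
```

On this scale the decibel arithmetic familiar from acoustics carries over directly, which is the paper's point about evaluating noise and odour with the same numbers.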

  12. Digital Receptor Image Quality Evaluation: Effect of Different Filtration Schemes

    NASA Astrophysics Data System (ADS)

    Murphy, Simon; Christianson, Olav; Amurao, Maxwell; Samei, Ehsan

    2010-04-01

    The International Electrotechnical Commission provides a standard measurement methodology for performance intercomparison between imaging systems. Its formalism specifies beam quality based on the half-value layer attained by a target kVp and additional Al filtration. Similar beam quality may be attained more conveniently using a filtration combination of Cu and Al. This study aimed to compare the two filtration schemes by their effects on image quality in terms of signal-difference-to-noise ratio, spatial resolution, exposure index, noise power spectrum, modulation transfer function, and detective quantum efficiency. A comparative assessment of the images was performed by analyzing a commercially available image quality assessment phantom and by following the IEC 62220-3 formalism.

  13. Figure of Image Quality and Information Capacity in Digital Mammography

    PubMed Central

    Michail, Christos M.; Kalyvas, Nektarios E.; Valais, Ioannis G.; Fudos, Ioannis P.; Fountos, George P.; Dimitropoulos, Nikos; Kandarakis, Ioannis S.

    2014-01-01

    Objectives. In this work, a simple technique to assess the image quality characteristics of the postprocessed image is developed and an easy to use figure of image quality (FIQ) is introduced. This FIQ characterizes images in terms of resolution and noise. In addition information capacity, defined within the context of Shannon's information theory, was used as an overall image quality index. Materials and Methods. A digital mammographic image was postprocessed with three digital filters. Resolution and noise were calculated via the Modulation Transfer Function (MTF), the coefficient of variation, and the figure of image quality. In addition, frequency dependent parameters such as the noise power spectrum (NPS) and noise equivalent quanta (NEQ) were estimated and used to assess information capacity. Results. FIQs for the “raw image” data and the image processed with the “sharpen edges” filter were found to be 907.3 and 1906.1, respectively. The information capacity values were 60.86 × 103 and 78.96 × 103 bits/mm2. Conclusion. It was found that, after the application of the postprocessing techniques (even commercial nondedicated software) on the raw digital mammograms, MTF, NPS, and NEQ are improved for medium to high spatial frequencies leading to resolving smaller structures in the final image. PMID:24895593

  14. [Are mortality indicators acceptable indicators for the quality of health care?].

    PubMed

    Ravaud, P; Giraudeau, B; Roux, P M; Durieux, P

    1999-10-01

    IMPORTANCE OF PUBLISHING MORTALITY RATES: Mortality rates for certain interventions or disease states have been used over the last decade as indicators of the quality of care provided by a given hospital, unit, or medical team. If published, these rates would be a useful tool for decision makers in the process of fund allocations, for public information, and for promoting improved care in hospitals or units with a low classification. METHODOLOGICAL LIMITATIONS: It is difficult to adjust an indicator of mortality to disease-related risk factors and any modification of this adjustment can have major consequences on the validity of subsequent comparisons. The differences in mortality observed between hospitals and physicians can reflect not only differences in quality of care but also differences in approaches to disease-related risk factors, therapeutic choices, or coding practices. The lack of statistical power is a major limiting factor in interpreting differences in mortality rates. To evidence a statistically significant difference in mortality between two hospitals whose rates are respectively 0.5% and 1% (for example in total hip replacement patients), it would be necessary to include 4673 patients, a number which would correspond to 20 years' data for a hospital performing 230 interventions per year. Consequently, the number of interventions performed in the most active hospitals would not be sufficient to make such comparisons. LIMITATIONS AND COUNTER EFFECTS: Some studies have demonstrated that the publication of mortality rates does not have a major influence on patients' decisions or on physicians' choice of a referral hospital. It would have no effect on improving the health care quality of the institutions cited. On the contrary, certain counter effects have been observed: modification in patient recruitment, with higher-risk patients being referred to hospitals with unpublished mortality rates. For many authors, procedure indicators are more pertinent than

  15. Image quality evaluation and control of computer-generated holograms

    NASA Astrophysics Data System (ADS)

    Yoshikawa, Hiroshi; Yamaguchi, Takeshi; Uetake, Hiroki

    2016-03-01

    The image quality of computer-generated holograms is usually evaluated subjectively; for example, the reconstructed image from the hologram is compared with other holograms, or evaluated by the double-stimulus impairment scale method against the original image. This paper proposes an objective image quality evaluation of a computer-generated hologram by evaluating both diffraction efficiency and peak signal-to-noise ratio. Theory and numerical experimental results are shown for Fourier transform transmission holograms of both amplitude and phase modulation. Results without the optimized random phase show that the amplitude transmission hologram gives a better peak signal-to-noise ratio, but the phase transmission hologram provides about 10 times higher diffraction efficiency than the amplitude type. As an optimized phase hologram, a kinoform is evaluated. In addition, we investigate controlling image quality by non-linear operation.
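The two objective measures this record combines are both one-liners on the numerically reconstructed field. PSNR is standard; the diffraction-efficiency definition below (power in the desired reconstruction order over total incident power) is an idealized assumption, since the abstract does not spell out the measurement geometry.

```python
import numpy as np

def psnr(ref, rec, peak=1.0):
    """Peak signal-to-noise ratio between the original image and the
    numerically reconstructed hologram image."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(rec, float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def diffraction_efficiency(order_field, incident_field):
    """Fraction of incident optical power ending up in the desired
    reconstruction order (idealized definition; fields are complex or
    real amplitudes, power is |amplitude|^2)."""
    return np.sum(np.abs(order_field) ** 2) / np.sum(np.abs(incident_field) ** 2)
```

Reporting both numbers captures the trade-off the record finds: amplitude holograms favor PSNR while phase holograms (and ultimately the kinoform) favor diffraction efficiency.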

  16. Dosimetry and image quality assessment in a direct radiography system

    PubMed Central

    Oliveira, Bruno Beraldo; de Oliveira, Marcio Alves; Paixão, Lucas; Teixeira, Maria Helena Araújo; Nogueira, Maria do Socorro

    2014-01-01

    Objective To evaluate the mean glandular dose with a solid state detector and the image quality in a direct radiography system, utilizing phantoms. Materials and Methods Irradiations were performed with automatic exposure control and polymethyl methacrylate slabs with different thicknesses to calculate glandular dose values. The image quality was evaluated by means of the structures visualized on the images of the phantoms. Results Considering the uncertainty of the measurements, the mean glandular dose results are in agreement with the values provided by the equipment and with internationally adopted reference levels. Results obtained from images of the phantoms were in agreement with the reference values. Conclusion The present study contributes to verify the equipment conformity as regards dose values and image quality. PMID:25741119

  17. "It's all about acceptance": A qualitative study exploring a model of positive body image for people with spinal cord injury.

    PubMed

    Bailey, K Alysse; Gammage, Kimberley L; van Ingen, Cathy; Ditor, David S

    2015-09-01

    Using modified constructivist grounded theory, the purpose of the present study was to explore positive body image experiences in people with spinal cord injury. Nine participants (five women, four men) varying in age (21-63 years), type of injury (C3-T7; complete and incomplete), and years post-injury (4-36 years) were recruited. The following main categories were found: body acceptance, body appreciation and gratitude, social support, functional gains, independence, media literacy, broadly conceptualizing beauty, inner positivity influencing outer demeanour, finding others who have a positive body image, unconditional acceptance from others, religion/spirituality, listening to and taking care of the body, managing secondary complications, minimizing pain, and respect. Interestingly, there was consistency in positive body image characteristics reported in this study with those found in previous research, demonstrating universality of positive body image. However, unique characteristics (e.g., resilience, functional gains, independence) were also reported demonstrating the importance of exploring positive body image in diverse groups. PMID:26002149

  18. Contrast vs noise effects on image quality

    NASA Astrophysics Data System (ADS)

    Hadar, Ofer; Corse, N.; Rotman, Stanley R.; Kopeika, Norman S.

    1996-11-01

    Low noise images are contrast-limited, and image restoration techniques can improve resolution significantly. However, as noise level increases, resolution improvements via image processing become more limited because image restoration increases noise. This research attempts to construct a reliable quantitative means of characterizing the perceptual difference between target and background. A method is suggested for evaluating the extent to which it is possible to discriminate an object which has merged with its surroundings, in noise-limited and contrast-limited images, i.e., how hard it would be for an observer to recognize the object against various backgrounds as a function of noise level. The suggested model will be a first order model to begin with, using a regular bar-chart with additive uncorrelated Gaussian noise degraded by standard atmospheric blurring filters. The second phase will comprise a model dealing with higher-order images. This computational model relates the detectability or distinctness of the object to measurable parameters. It must also characterize human perceptual response, i.e. the model must develop metrics which are highly correlated to the ease or difficulty which the human observer experiences in discerning the target from its background. This requirement can be fulfilled only by conducting psychophysical experiments quantitatively comparing the perceptual evaluations of the observers with the results of the mathematical model.

  19. Objective image quality assessment based on support vector regression.

    PubMed

    Narwaria, Manish; Lin, Weisi

    2010-03-01

    Objective image quality estimation is useful in many visual processing systems, and is difficult to perform in line with the human perception. The challenge lies in formulating effective features and fusing them into a single number to predict the quality score. In this brief, we propose a new approach to address the problem, with the use of singular vectors out of singular value decomposition (SVD) as features for quantifying major structural information in images and then support vector regression (SVR) for automatic prediction of image quality. The feature selection with singular vectors is novel and general for gauging structural changes in images as a good representative of visual quality variations. The use of SVR exploits the advantages of machine learning with the ability to learn complex data patterns for an effective and generalized mapping of features into a desired score, in contrast with the oft-utilized feature pooling process in the existing image quality estimators; this is to overcome the difficulty of model parameter determination for such a system to emulate the related, complex human visual system (HVS) characteristics. Experiments conducted with three independent databases confirm the effectiveness of the proposed system in predicting image quality with better alignment with the HVS's perception than the relevant existing work. The tests with untrained distortions and databases further demonstrate the robustness of the system and the importance of the feature selection. PMID:20100674
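The singular-vector feature idea in this record can be sketched directly: measure how far the leading singular vectors of the distorted image rotate away from those of the reference. The exact feature formulation and the SVR stage are not reproduced here; this numpy sketch shows only the kind of structural feature the paper describes, with synthetic images.

```python
import numpy as np

def svd_feature(ref, dist, k=8):
    """Singular-vector comparison feature (in the spirit of the SVD
    features described; the paper's exact formulation may differ).

    Returns |cos angle| between the first k left and right singular
    vectors of the reference and distorted images: 1 means the
    structural direction is unchanged, smaller means distortion."""
    Ur, _, Vr = np.linalg.svd(np.asarray(ref, float), full_matrices=False)
    Ud, _, Vd = np.linalg.svd(np.asarray(dist, float), full_matrices=False)
    fu = np.abs(np.sum(Ur[:, :k] * Ud[:, :k], axis=0))   # left singular vectors
    fv = np.abs(np.sum(Vr[:k] * Vd[:k], axis=1))         # right singular vectors
    return np.concatenate([fu, fv])

rng = np.random.default_rng(2)
img = rng.uniform(0.0, 1.0, (32, 32))           # stand-in reference
noisy = img + rng.normal(0.0, 0.5, img.shape)   # heavily distorted version
```

In the full system, such feature vectors (computed per block or per image) are fed to a trained support vector regressor that maps them to a quality score.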

  20. Imaging quality full chip verification for yield improvement

    NASA Astrophysics Data System (ADS)

    Yang, Qing; Zhou, CongShu; Quek, ShyueFong; Lu, Mark; Foong, YeeMei; Qiu, JianHong; Pandey, Taksh; Dover, Russell

    2013-04-01

    Basic image intensity parameters, such as minimum and maximum intensity values (Imin and Imax), image log slope (ILS), normalized image log slope (NILS), and mask error enhancement factor (MEEF), are well known as indexes of photolithography imaging quality. For full chip verification, hotspot detection is typically based on threshold values for line pinching or bridging. For image intensity parameters it is generally harder to quantify an absolute value defining where the process limit will occur, and at which process stage: lithography, etch, or post-CMP. However, it is easy to conclude that hot spots captured by image intensity parameters are more susceptible to process variation and very likely to impact yield. In addition, these image intensity hot spots can be missed by resist model verification, because the resist model is normally calibrated to wafer data on a single resist plane and is an empirical model that fits the resist critical dimension using a mathematical algorithm combined with optical calculation. At the resolution enhancement technology (RET) development stage, a full chip imaging quality check is also a method to qualify the RET solution, such as Optical Proximity Correction (OPC) performance. Adding full chip verification using image intensity parameters is also not as costly as adding one more resist model simulation. From a foundry yield improvement and cost saving perspective, it is valuable to quantify imaging quality to find design hot spots and correctly define the inline process control margin. This paper studies the correlation between image intensity parameters and process weakness or catastrophic hard failures at different process stages. It also demonstrates how an OPC solution can improve full chip image intensity parameters. Rigorous 3D resist profile simulation across the full height of the resist stack was also performed to identify a correlation to the image intensity parameters.
A methodology of post-OPC full
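The ILS and NILS indexes named in this record have standard lithography definitions: ILS is the slope of the log aerial-image intensity at the feature edge, and NILS scales it by the feature critical dimension (CD). The sigmoid intensity profile below is purely illustrative; only the definitions come from standard usage.

```python
import numpy as np

def ils_nils(aerial_intensity, x, edge_idx, cd):
    """Image log slope (ILS) and normalized ILS (NILS) at a feature edge.

    ILS = |d(ln I)/dx| evaluated at the edge position; NILS = CD * ILS.
    These are the standard lithography definitions; the sampled profile
    passed in is whatever the aerial-image simulator produced."""
    dlnI = np.gradient(np.log(aerial_intensity), x)
    ils = abs(dlnI[edge_idx])
    return ils, ils * cd

# illustrative sigmoid-like intensity profile across a 100 nm line edge
x = np.linspace(-200.0, 200.0, 401)          # position, nm
I = 0.05 + 0.9 / (1 + np.exp(-x / 20.0))     # hypothetical aerial-image intensity
ils, nils = ils_nils(I, x, edge_idx=200, cd=100.0)
```

A low NILS at some layout location is exactly the kind of image-intensity hot spot the record argues can be caught by full-chip verification even when resist-model checks miss it.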

  1. Applying image quality in cell phone cameras: lens distortion

    NASA Astrophysics Data System (ADS)

    Baxter, Donald; Goma, Sergio R.; Aleksic, Milivoje

    2009-01-01

    This paper describes the framework used in one of the pilot studies run under the I3A CPIQ initiative to quantify overall image quality in cell-phone cameras. The framework is based on a multivariate formalism which tries to predict overall image quality from individual image quality attributes and was validated in a CPIQ pilot program. The pilot study focuses on image quality distortions introduced in the optical path of a cell-phone camera, which may or may not be corrected in the image processing path. The assumption is that the captured image used is JPEG compressed and the cell-phone camera is set to 'auto' mode. As the framework requires the individual attributes to be relatively perceptually orthogonal, the attributes used in the pilot study are lens geometric distortion (LGD) and lateral chromatic aberrations (LCA). The goal of this paper is to present the framework of this pilot project, starting with the definition of the individual attributes, up to their quantification in JNDs of quality, a requirement of the multivariate formalism; therefore both objective and subjective evaluations were used. A major distinction in the objective part from the 'DSC imaging world' is that the LCA/LGD distortions found in cell-phone cameras rarely exhibit radial behavior, therefore a radial mapping/modeling cannot be used in this case.

  2. Acceptability of Sexually Explicit Images in HIV Prevention Messages Targeting Men Who Have Sex With Men.

    PubMed

    Iantaffi, Alex; Wilkerson, J Michael; Grey, Jeremy A; Rosser, B R Simon

    2015-01-01

    Sexually explicit media (SEM) have been used in HIV-prevention advertisements to engage men who have sex with men (MSM) and to communicate content. These advertisements exist within larger discourses, including a dominant heteronormative culture and a growing homonormative culture. Cognizant of these hegemonic cultures, this analysis examined the acceptable level of sexual explicitness in prevention advertisements. Seventy-nine MSM participated in 13 online focus groups, which were part of a larger study of SEM. Three macro themes (audience, location, and community representation) emerged from the analysis, as did the influence of homonormativity on the acceptability of SEM in HIV-prevention messages. PMID:26075485

  3. Quantitative Prediction of Computational Quality (so the S and C Folks will Accept it)

    NASA Technical Reports Server (NTRS)

    Hemsch, Michael J.; Luckring, James M.; Morrison, Joseph H.

    2004-01-01

    Our choice of title may seem strange but we mean each word. In this talk, we are not going to be concerned with computations made "after the fact", i.e. those for which data are available and which are being conducted for explanation and insight. Here we are interested in preventing S&C design problems by finding them through computation before data are available. For such a computation to have any credibility with those who absorb the risk, it is necessary to quantitatively PREDICT the quality of the computational results.

  4. A qualitative and quantitative analysis of radiation dose and image quality of computed tomography images using adaptive statistical iterative reconstruction.

    PubMed

    Hussain, Fahad Ahmed; Mail, Noor; Shamy, Abdulrahman M; Suliman, Alghamdi; Saoudi, Abdelhamid

    2016-01-01

    Image quality is a key issue in radiology, particularly in a clinical setting where it is important to achieve accurate diagnoses while minimizing radiation dose. Some computed tomography (CT) manufacturers have introduced algorithms that claim significant dose reduction. In this study, we assessed CT image quality produced by two reconstruction algorithms provided with GE Healthcare's Discovery 690 Elite positron emission tomography (PET) CT scanner. Image quality was measured for images obtained at various doses with both conventional filtered back-projection (FBP) and adaptive statistical iterative reconstruction (ASIR) algorithms. A standard CT dose index (CTDI) phantom and a pencil ionization chamber were used to measure the CT dose at 120 kVp and an exposure of 260 mAs. Image quality was assessed using two phantoms. CT images of both phantoms were acquired at a tube voltage of 120 kV with exposures ranging from 25 mAs to 400 mAs. Images were reconstructed using FBP and ASIR ranging from 10% to 100%, then analyzed for noise, low-contrast detectability, contrast-to-noise ratio (CNR), and modulation transfer function (MTF). Noise was 4.6 HU in water phantom images acquired at 260 mAs/FBP 120 kV and 130 mAs/50% ASIR 120 kV. Large objects (frequency < 7 lp/cm) retained fairly acceptable image quality at 130 mAs/50% ASIR, compared to 260 mAs/FBP. The application of ASIR for small objects (frequency > 7 lp/cm) showed poor visibility compared to FBP at 260 mAs, and even worse for images acquired at less than 130 mAs. ASIR blending of more than 50% at low dose tends to reduce the contrast of small objects (frequency > 7 lp/cm). We concluded that dose reduction and ASIR should be applied with close attention if the objects to be detected or diagnosed are small (frequency > 7 lp/cm). Further investigations are required to correlate the small objects (frequency > 7 lp/cm) to patient anatomy and clinical diagnosis. PMID:27167261
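    Noise figures such as the 4.6 HU quoted above are conventionally obtained as the standard deviation of CT numbers in a uniform region of interest. A minimal sketch, with a synthetic water image standing in for real phantom data:

```python
import numpy as np

def roi_noise_hu(ct_slice, center, half_size):
    """Estimate image noise as the standard deviation of CT numbers (HU)
    inside a square region of interest placed in a uniform material,
    e.g. the water section of a quality-assurance phantom."""
    r, c = center
    roi = ct_slice[r - half_size:r + half_size, c - half_size:c + half_size]
    return float(np.std(roi))

# synthetic uniform water region (0 HU) with additive Gaussian noise
rng = np.random.default_rng(0)
water = rng.normal(loc=0.0, scale=4.6, size=(256, 256))
noise_hu = roi_noise_hu(water, center=(128, 128), half_size=40)
```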

  5. Image quality based x-ray dose control in cardiac imaging

    NASA Astrophysics Data System (ADS)

    Davies, Andrew G.; Kengyelics, Stephen M.; Gislason-Lee, Amber J.

    2015-03-01

    An automated closed-loop dose control system balances the radiation dose delivered to patients and the quality of images produced in cardiac x-ray imaging systems. Using computer simulations, this study compared two designs of automatic x-ray dose control in terms of the radiation dose and quality of images produced. The first design, common in x-ray systems today, maintained a constant dose rate at the image receptor. The second design maintained a constant image quality in the output images. A computer model represented patients as a polymethylmethacrylate phantom (which has similar x-ray attenuation to soft tissue), containing a detail representative of an artery filled with contrast medium. The model predicted the entrance surface dose to the phantom and the contrast-to-noise ratio of the detail as an index of image quality. Results showed that for the constant dose control system, phantom dose increased substantially with phantom size (a five-fold increase between 20 cm and 30 cm thick phantoms), yet image quality decreased by 43% over the same range of thicknesses. For the constant quality control system, phantom dose increased at a greater rate with phantom thickness (more than a ten-fold increase between 20 cm and 30 cm phantoms). Image quality based dose control could tailor the x-ray output to just achieve the quality required, which would reduce dose to patients where the current dose control produces images of unnecessarily high quality. However, maintaining higher levels of image quality for large patients would result in a significant dose increase over current practice.
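    The contrast-to-noise ratio used as the image quality index here has a standard definition: the difference between the mean signal of the detail and the mean background, divided by the background noise. The sketch below illustrates it on synthetic data (the masks, means, and noise levels are invented for the example, not taken from the study):

```python
import numpy as np

def contrast_to_noise_ratio(image, detail_mask, background_mask):
    """Contrast-to-noise ratio of a detail against its background:
    difference of mean pixel values divided by the background noise."""
    contrast = image[detail_mask].mean() - image[background_mask].mean()
    return float(contrast / image[background_mask].std())

# synthetic example: a brighter square detail on a noisy background
rng = np.random.default_rng(0)
img = rng.normal(loc=100.0, scale=5.0, size=(100, 100))
detail = np.zeros((100, 100), dtype=bool)
detail[40:60, 40:60] = True
background = np.zeros((100, 100), dtype=bool)
background[0:20, 0:20] = True
img[detail] += 20.0          # detail contrast of 20 units -> CNR near 4
cnr = contrast_to_noise_ratio(img, detail, background)
```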

  6. Raman chemical imaging technology for food safety and quality evaluation

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Raman chemical imaging combines Raman spectroscopy and digital imaging to visualize composition and morphology of a target. This technique offers great potential for food safety and quality research. Most commercial Raman instruments perform measurement at microscopic level, and the spatial range ca...

  7. Quality attributes and consumer acceptance of new ready-to-eat frozen restructured chicken.

    PubMed

    de Almeida, Marcio Aurelio; Villanueva, Nilda Doris Montes; Gonçalves, José Ricardo; Contreras-Castillo, Carmen J

    2015-05-01

    The aim of the present study was to develop a new restructured, cooked, and frozen ready-to-eat product prepared with boneless chicken meat (breast and drumstick) and mechanically separated chicken meat (MSCM). Non-meat ingredients, such as transglutaminase (TG) and egg albumin powder, were tested to obtain a better strength of adhesion between the meat particles. Five formulations for restructured chicken were developed as follows: T1 (1 % transglutaminase), T2 (1 % transglutaminase and 15 % MSCM), T3 (1 % egg albumin powder), T4 (1 % egg albumin powder and 15 % MSCM) and T5 (1 % transglutaminase, 1 % egg albumin powder and 15 % MSCM). The results of the experiment showed a greater luminosity (L*) in the treatments with TG (T1) and albumin (T3). The treatments without MSCM (T1 and T3) presented significantly lower mean values for redness (a*) when compared to treatments with MSCM (T2, T4 and T5) (p ≤ 0.05). No significant differences were noted between the treatments (p ≥ 0.05) when analyzing the percentage of total saturated fatty acids (SFA), polyunsaturated fatty acids (PUFA) and cholesterol content. Consumer testing showed a high acceptance of the restructured products in all evaluated attributes. Similarly, with regard to purchase intention, consumers mostly expressed that they would probably or certainly buy the products for treatments T1, T2, T3 and T5. Moreover, meat cuts with no commercial value can be transformed into new ready-to-eat products that have a high probability of success in the market. PMID:25892785

  8. A feature-enriched completely blind image quality evaluator.

    PubMed

    Lin Zhang; Lei Zhang; Bovik, Alan C

    2015-08-01

    Existing blind image quality assessment (BIQA) methods are mostly opinion-aware. They learn regression models from training images with associated human subjective scores to predict the perceptual quality of test images. Such opinion-aware methods, however, require a large amount of training samples with associated human subjective scores and of a variety of distortion types. The BIQA models learned by opinion-aware methods often have weak generalization capability, thereby limiting their usability in practice. By comparison, opinion-unaware methods do not need human subjective scores for training, and thus have greater potential for good generalization capability. Unfortunately, thus far no opinion-unaware BIQA method has shown consistently better quality prediction accuracy than the opinion-aware methods. Here, we aim to develop an opinion-unaware BIQA method that can compete with, and perhaps outperform, the existing opinion-aware methods. By integrating the features of natural image statistics derived from multiple cues, we learn a multivariate Gaussian model of image patches from a collection of pristine natural images. Using the learned multivariate Gaussian model, a Bhattacharyya-like distance is used to measure the quality of each image patch, and then an overall quality score is obtained by average pooling. The proposed BIQA method does not need any distorted sample images nor subjective quality scores for training, yet extensive experiments demonstrate its superior quality-prediction performance to the state-of-the-art opinion-aware BIQA methods. The MATLAB source code of our algorithm is publicly available at www.comp.polyu.edu.hk/~cslzhang/IQA/ILNIQE/ILNIQE.htm. PMID:25915960
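    The quality measure described, a Bhattacharyya-like distance between a pristine multivariate Gaussian (MVG) model and one fitted to the test image, has the general form sketched below. This is a simplified stand-in: the published ILNIQE method's feature extraction and per-patch pooling are not reproduced here.

```python
import numpy as np

def mvg_quality_distance(mu_pristine, cov_pristine, mu_test, cov_test):
    """Bhattacharyya-like distance between two multivariate Gaussian
    feature models: the mean difference weighted by the inverse of the
    pooled covariance. Larger values indicate a stronger departure of
    the test image's statistics from the pristine model."""
    diff = np.asarray(mu_test) - np.asarray(mu_pristine)
    pooled = (np.asarray(cov_pristine) + np.asarray(cov_test)) / 2.0
    return float(np.sqrt(diff @ np.linalg.pinv(pooled) @ diff))

# identical models have distance zero; a shifted mean increases it
mu = np.zeros(2)
cov = np.eye(2)
d_same = mvg_quality_distance(mu, cov, mu, cov)
d_shifted = mvg_quality_distance(mu, cov, np.array([1.0, 0.0]), cov)
```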

  9. A patient image-based technique to assess the image quality of clinical chest radiographs

    NASA Astrophysics Data System (ADS)

    Lin, Yuan; Samei, Ehsan; Luo, Hui; Dobbins, James T., III; McAdams, H. Page; Wang, Xiaohui; Sehnert, William J.; Barski, Lori; Foos, David H.

    2011-03-01

    Current clinical image quality assessment techniques mainly analyze image quality for the imaging system in terms of factors such as the capture system DQE and MTF, the exposure technique, and the particular image processing method and processing parameters. However, when assessing a clinical image, radiologists seldom refer to these factors, but rather examine several specific regions of the image to see whether the image is suitable for diagnosis. In this work, we developed a new strategy to learn and simulate radiologists' evaluation process on actual clinical chest images. Based on this strategy, a preliminary study was conducted on 254 digital chest radiographs (38 AP without grids, 35 AP with 6:1 ratio grids and 151 PA with 10:1 ratio grids). First, ten region-based perceptual qualities were summarized through an observer study. Each quality was characterized in terms of a physical quantity measured from the image, and as a first step, the three physical quantities in the lung region were then implemented algorithmically. A pilot observer study was performed to verify the correlation between image perceptual qualities and physical quantitative qualities. The results demonstrated that our region-based metrics have promising performance for grading perceptual properties of chest radiographs.

  10. Analysis of the Effects of Image Quality on Digital Map Generation from Satellite Images

    NASA Astrophysics Data System (ADS)

    Kim, H.; Kim, D.; Kim, S.; Kim, T.

    2012-07-01

    High resolution satellite images have been widely used to produce and update digital maps since they became widely available. It is well known that the accuracy of a digital map produced from satellite images is determined largely by the accuracy of geometric modelling. However, digital maps are made through a series of photogrammetric workflows, so their accuracy is also affected by the quality of the satellite images, such as image interpretability. For satellite images, parameters such as Modulation Transfer Function (MTF), Signal to Noise Ratio (SNR) and Ground Sampling Distance (GSD) are used to represent image quality. Our previous research stressed that such quality parameters may not represent the quality of image products such as digital maps, and that parameters for image interpretability such as Ground Resolved Distance (GRD) and the National Imagery Interpretability Rating Scale (NIIRS) need to be considered. In this study, we analyzed the effects of image quality on the accuracy of digital maps produced from satellite images. QuickBird, IKONOS and KOMPSAT-2 imagery were used for the analysis as they have similar GSDs. We measured the various image quality parameters mentioned above from these images. Then we produced digital maps from the images using a digital photogrammetric workstation. We analyzed the accuracy of the digital maps in terms of their location accuracy and their level of detail. Then we compared the correlation between the various image quality parameters and the accuracy of the digital maps. The results of this study showed that GRD and NIIRS were more critical for map production than GSD, MTF or SNR.

  11. Perceived quality of wood images influenced by the skewness of image histogram

    NASA Astrophysics Data System (ADS)

    Katsura, Shigehito; Mizokami, Yoko; Yaguchi, Hirohisa

    2015-08-01

    The shape of image luminance histograms is related to material perception. We investigated how the luminance histogram contributes to improvements in the perceived quality of wood images by examining various natural wood samples and adhesive vinyl sheets with printed wood grain. In the first experiment, we visually evaluated the perceived quality of wood samples. In addition, we measured the colorimetric parameters of the wood samples and calculated statistics of image luminance. The relationship between visual evaluation scores and image statistics suggested that skewness and kurtosis affected the perceived quality of wood. In the second experiment, we evaluated the perceived quality of wood images with altered luminance skewness and kurtosis using a paired comparison method. Our results suggest that wood images appear more realistic if the skewness of the luminance histogram is slightly negative.
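    Luminance skewness and kurtosis as used here are ordinary standardized moments of the pixel distribution; a minimal sketch:

```python
import numpy as np

def luminance_skewness(image):
    """Third standardized moment of the luminance values; negative
    values indicate a tail toward dark pixels."""
    x = np.asarray(image, dtype=float).ravel()
    z = (x - x.mean()) / x.std()
    return float(np.mean(z ** 3))

def luminance_kurtosis(image):
    """Fourth standardized moment as excess kurtosis (0 for a Gaussian)."""
    x = np.asarray(image, dtype=float).ravel()
    z = (x - x.mean()) / x.std()
    return float(np.mean(z ** 4) - 3.0)
```

A perfectly symmetric luminance distribution has zero skewness; shifting mass toward bright pixels drives it negative.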

  12. Image Quality Evaluation on ALOS/PRISM and AVNIR-2

    NASA Astrophysics Data System (ADS)

    Mukaida, Akira; Imoto, Naritoshi; Tadono, Takeo; Murakami, Hiroshi; Kawamoto, Sachi

    2008-11-01

    Image quality evaluation on ALOS (Advanced Land Observing Satellite) / PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) and AVNIR-2 (Advanced Visible and Near Infrared Radiometer 2) has been carried out during the operational phase. This is a report on the results of evaluating image quality in terms of MTF (Modulation Transfer Function) and SNR (Signal to Noise Ratio) for both PRISM and AVNIR-2. The SNR of PRISM images has increased following the updating of the radiometric correction and the implementation of a JPEG noise reduction filter. The results were within the specification range for both sensors.

  13. Effect of optical aberrations on image quality and visual performance

    NASA Astrophysics Data System (ADS)

    Ravikumar, Sowmya

    In addition to the effects of diffraction, retinal image quality in the human eye is degraded by optical aberrations. Although the paraxial geometric optics description of defocus consists of a simple blurred circle whose size determines the extent of blur, in reality the interactions between monochromatic and chromatic aberrations create a complex pattern of retinal image degradation. My thesis work hypothesizes that although both monochromatic and chromatic optical aberrations in general reduce image quality from best achievable, the underlying causes of retinal image quality degradation are characteristic of the nature of the aberration, its interactions with other aberrations as well as the composition of the stimulus. To establish a controlled methodology, a computational model of the retinal image with various levels of aberrations was used to create filters equivalent to those produced by real optical aberrations. Visual performance was measured psychophysically by using these special filters that separately modulated amplitude and phase in the retinal image. In order to include chromatic aberration into the optical interactions, a computational polychromatic model of the eye was created and validated. The model starts with monochromatic wavefront maps and derives a composite white light point-spread function whose quality was assessed using metrics of image quality. Finally, in order to assess the effectiveness of simultaneous multifocal intra-ocular lenses in correcting the eye's optical aberrations, a polychromatic computational model of a pseudophakic eye was constructed. This model incorporated the special chromatic properties unique to an eye corrected with hybrid refractive-diffractive optical elements. Results showed that normal optical aberrations reduced visual performance not only by reducing image contrast but also by altering the phase structure of the image. Longitudinal chromatic aberration had a greater effect on image quality in isolation

  14. Digital image quality measurements by objective and subjective methods from series of parametrically degraded images

    NASA Astrophysics Data System (ADS)

    Tachó, Aura; Mitjà, Carles; Martínez, Bea; Escofet, Jaume; Ralló, Miquel

    2013-11-01

    Many digital image applications, like the digitization of cultural heritage for preservation purposes, operate with compressed files in one or more image observing steps. For this kind of application, JPEG compression is one of the most widely used. Compression level, final file size and quality loss are parameters that must be managed optimally. Although this loss can be monitored by means of objective image quality measurements, the real challenge is to know how it can be related to the image quality perceived by observers. A pictorial image has been degraded by two different procedures. The first applies different levels of low pass filtering by convolving the image with progressively broader Gaussian kernels. The second saves the original file at a series of JPEG compression levels. In both cases, the objective image quality measurement is done by analysis of the image power spectrum. In order to obtain a measure of the perceived image quality, both series of degraded images are displayed on a computer screen, organized in random pairs. The observers are asked to choose the better image of each pair. Finally, a ranking is established by applying the Thurstone scaling method. Results obtained by the two measurements are compared with each other and with another objective measurement method, the Slanted Edge Test.
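    The Thurstone scaling step can be sketched as follows (Case V, which assumes equal discriminal dispersions; the clipping constant is an implementation convenience, not part of the method):

```python
import numpy as np
from statistics import NormalDist

def thurstone_case_v(wins):
    """Thurstone Case V scale values from a paired-comparison win matrix.

    wins[i, j] = number of observers who preferred stimulus i over
    stimulus j. Preference proportions are z-transformed with the
    inverse normal CDF and averaged per row; higher scale values mean
    higher perceived quality."""
    wins = np.asarray(wins, dtype=float)
    trials = wins + wins.T                       # comparisons per pair
    p = np.where(trials > 0, wins / np.maximum(trials, 1.0), 0.5)
    np.fill_diagonal(p, 0.5)
    p = np.clip(p, 0.01, 0.99)                   # keep z-scores finite
    z = np.vectorize(NormalDist().inv_cdf)(p)
    return z.mean(axis=1)

# hypothetical data: A usually beats B and C, and B usually beats C
wins = np.array([[0, 9, 10],
                 [1, 0, 8],
                 [0, 2, 0]])
scale = thurstone_case_v(wins)
```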

  15. Total Quality Can Help Your District's Image.

    ERIC Educational Resources Information Center

    Cokeley, Sandra

    1996-01-01

    Describes how educators in the Pearl River School District, Pearl River, New York, have implemented Total Quality Management (TQM) principles to evaluate and improve their effectiveness. Includes two charts that depict key indicators of financial and academic performance and a seven-year profile of the district's budget, enrollment, diploma rate,…

  16. Optimization and image quality assessment of the alpha-image reconstruction algorithm: iterative reconstruction with well-defined image quality metrics

    NASA Astrophysics Data System (ADS)

    Lebedev, Sergej; Sawall, Stefan; Kuchenbecker, Stefan; Faby, Sebastian; Knaup, Michael; Kachelrieß, Marc

    2015-03-01

    The reconstruction of CT images with low noise and highest spatial resolution is a challenging task. Usually, a trade-off between at least these two demands has to be found or several reconstructions with mutually exclusive properties, i.e. either low noise or high spatial resolution, have to be performed. Iterative reconstruction methods might be suitable tools to overcome these limitations and provide images of highest diagnostic quality with formerly mutually exclusive image properties. While image quality metrics like the modulation transfer function (MTF) or the point spread function (PSF) are well-defined in case of standard reconstructions, e.g. filtered backprojection, the iterative algorithms lack these metrics. To overcome this issue alternate methodologies like the model observers have been proposed recently to allow a quantification of a usually task-dependent image quality metric [1]. As an alternative we recently proposed an iterative reconstruction method, the alpha-image reconstruction (AIR), providing well-defined image quality metrics on a per-voxel basis [2]. In particular, the AIR algorithm seeks to find weighting images, the alpha-images, that are used to blend between basis images with mutually exclusive image properties. The result is an image with highest diagnostic quality that provides a high spatial resolution and a low noise level. As the estimation of the alpha-images is computationally demanding we herein aim at optimizing this process and highlight the favorable properties of AIR using patient measurements.
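    The blending step performed by the alpha-images can be illustrated with a simple per-voxel convex combination; estimating the alpha-images themselves is the computationally demanding part the paper optimizes and is not shown here.

```python
import numpy as np

def alpha_blend(image_sharp, image_smooth, alpha):
    """Per-voxel blend between two basis reconstructions with mutually
    exclusive properties: high spatial resolution (but noisy) and low
    noise (but soft). alpha plays the role of AIR's weighting image;
    alpha = 1 keeps the sharp basis image, alpha = 0 the smooth one."""
    alpha = np.clip(alpha, 0.0, 1.0)
    return alpha * image_sharp + (1.0 - alpha) * image_smooth

# toy volumes: the blend interpolates voxel-wise between the two bases
sharp = np.full((4, 4), 2.0)
smooth = np.zeros((4, 4))
halfway = alpha_blend(sharp, smooth, 0.5)
```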

  17. Safety and Patient Acceptability of Stellate Ganglion Blockade as a Treatment Adjunct for Combat-Related Post-Traumatic Stress Disorder: A Quality Assurance Initiative

    PubMed Central

    2015-01-01

    OBJECTIVE: To perform a quality assurance and performance improvement project through review of our single-center data on the safety and patient acceptability of the stellate ganglion blockade (SGB) procedure for the relief of symptoms related to chronic post-traumatic stress disorder. BACKGROUND: Our interventional pain management service has been offering trials of SGB therapy to assist with the management of the sympathetically mediated anxiety and hyperarousal symptoms of severe and treatment-refractory combat-related PTSD. There have been multiple case series in the literature describing the potential impact of this procedure for PTSD symptom management as well as the safety of image-guided procedures. We wished to ensure that we were performing this procedure safely and that patients were tolerant and accepting of this adjunctive treatment option. METHODS: We conducted a review of our quality assurance and performance improvement data over the past 18 months, during which we performed 250 stellate ganglion blocks for the management of PTSD symptoms, to detect any potential complications or unanticipated side effects. We also analyzed responses from an anonymous, de-identified patient survey regarding the comfort and satisfaction associated with the procedure. RESULTS: We did not identify any immediate or delayed post-procedural complications from any of the 250 procedures performed from November 2013 to April 2015. Of the 110 surveys that were returned and tabulated, 100% of the patients surveyed were overall satisfied with our process and with the procedure, 100% said they would recommend the procedure to a friend, and 95% stated that they would be willing to undergo as many repeat procedures as necessary based on little discomfort and tolerable side effects. CONCLUSION: Our quality assurance assessment suggests that in our center the SGB procedure for PTSD is a safe, well-tolerated, and acceptable

  18. Dosimetry and image quality in digital mammography facilities in the State of Minas Gerais, Brazil

    NASA Astrophysics Data System (ADS)

    da Silva, Sabrina Donato; Joana, Geórgia Santos; Oliveira, Bruno Beraldo; de Oliveira, Marcio Alves; Leyton, Fernando; Nogueira, Maria do Socorro

    2015-11-01

    According to the National Register of Health Care Facilities (CNES), there are approximately 477 mammography systems operating in the state of Minas Gerais, Brazil, of which an estimated 200 are digital units using mainly computed radiography (CR) or direct radiography (DR) systems. Mammography is irreplaceable in the diagnosis and early detection of breast cancer, the leading cause of cancer death among women worldwide. A high standard of image quality, alongside smaller doses and optimization of procedures, is essential if early detection is to occur. This study aimed to determine dosimetry and image quality in 68 mammography services in Minas Gerais using CR or DR systems. The data of this study were collected between the years 2011 and 2013. The contrast-to-noise ratio proved to be a critical point in the image production chain in digital systems, since 90% of services were not compliant in this regard, mainly for larger PMMA thicknesses (60 and 70 mm). Regarding image noise, only 31% of services were compliant. The average glandular dose found is of concern, since more than half of the services presented doses above acceptable limits. Therefore, despite the potential benefits of using CR and DR systems, the employment of this technology has to be revised and optimized to achieve better image quality and reduce radiation dose as much as possible.

  19. Study of a water quality imager for coastal zone missions

    NASA Technical Reports Server (NTRS)

    Staylor, W. F.; Harrison, E. F.; Wessel, V. W.

    1975-01-01

    The present work surveys water quality user requirements and then determines the general characteristics of an orbiting imager (the Applications Explorer, or AE) dedicated to the measurement of water quality, which could be used as a low-cost means of testing advanced imager concepts and assessing the ability of imager techniques to meet the goals of a comprehensive water quality monitoring program. The proposed imager has four spectral bands, a spatial resolution of 25 meters, and swath width of 36 km with a pointing capability of 330 km. Silicon photodetector arrays, pointing systems, and several optical features are included. A nominal orbit of 500 km altitude at an inclination of 50 deg is recommended.

  20. Quality evaluation of extra high quality images based on key assessment word

    NASA Astrophysics Data System (ADS)

    Kameda, Masashi; Hayashi, Hidehiko; Akamatsu, Shigeru; Miyahara, Makoto M.

    2001-06-01

    An all-encompassing goal of our research is to develop an extra high quality imaging system which is able to convey a high-level artistic impression faithfully. We have defined a high order sensation as such a high-level artistic impression, and it is supposed that the high order sensation is expressed by the combination of psychological factors that can be described by plural assessment words. In order to pursue the quality factors that are important for the reproduction of the high order sensation, we have focused on the image quality evaluation of extra high quality images using assessment words that consider the high order sensation. In this paper, we have obtained the hierarchical structure between the collected assessment words and the principles of European painting based on the conveyance model of the high order sensation, and we have determined a key assessment word, 'plasticity', which is able to evaluate the reproduction of the high order sensation more accurately. The results of subjective assessment experiments using the prototype of the developed extra high quality imaging system have shown that the obtained key assessment word 'plasticity' is the most appropriate assessment word for evaluating the image quality of extra high quality images quasi-quantitatively.

  1. Imaging quality assessment of multi-modal miniature microscope.

    PubMed

    Lee, Junwon; Rogers, Jeremy; Descour, Michael; Hsu, Elizabeth; Aaron, Jesse; Sokolov, Konstantin; Richards-Kortum, Rebecca

    2003-06-16

    We are developing a multi-modal miniature microscope (4M device) to image morphology and cytochemistry in vivo and provide better delineation of tumors. The 4M device is designed to be a complete microscope on a chip, including optical, micro-mechanical, and electronic components. It has advantages such as compact size and capability for microscopic-scale imaging. This paper presents an optics-only prototype 4M device, the very first imaging system made of sol-gel material. The micro-optics used in the 4M device have a diameter of 1.3 mm. Metrology for the imaging quality assessment of the prototype device is presented. We describe causes of imaging performance degradation in order to improve the fabrication process. We built a multi-modal imaging test-bed to measure first-order properties and to assess the imaging quality of the 4M device. The 4M prototype has a field of view of 290 microm in diameter, a magnification of -3.9, a working distance of 250 microm and a depth of field of 29.6+/-6 microm. We report the modulation transfer function (MTF) of the 4M device as a quantitative metric of imaging quality. Based on the MTF data, we calculated a Strehl ratio of 0.59. In order to investigate the cause of imaging quality degradation, the surface characterization of the lenses in 4M devices is measured and reported. We also imaged polystyrene microspheres similar in size to epithelial cell nuclei, as well as cervical cancer cells. Imaging results indicate that the 4M prototype can resolve the cellular detail necessary for detection of precancer. PMID:19466016
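    A Strehl ratio such as the 0.59 reported here can be estimated from MTF data as the ratio of the area under the measured MTF to the area under the diffraction-limited MTF. The sketch below uses a 1-D approximation of the usual 2-D volume ratio and invented inputs; it is an illustration of the concept, not the authors' computation.

```python
import numpy as np

def diffraction_limited_mtf(f, f_cutoff):
    """MTF of an aberration-free circular pupil in incoherent light."""
    x = np.clip(np.asarray(f, dtype=float) / f_cutoff, 0.0, 1.0)
    return (2.0 / np.pi) * (np.arccos(x) - x * np.sqrt(1.0 - x ** 2))

def strehl_from_mtf(freqs, measured_mtf, f_cutoff):
    """Strehl ratio approximated as the area under the measured MTF
    divided by the area under the diffraction-limited MTF (a 1-D
    stand-in for the usual 2-D volume ratio)."""
    def area(y):                       # trapezoidal integration
        return float(np.sum((y[1:] + y[:-1]) * np.diff(freqs)) / 2.0)
    ideal = diffraction_limited_mtf(freqs, f_cutoff)
    return area(np.asarray(measured_mtf, dtype=float)) / area(ideal)

# sanity check: a system that reaches the diffraction limit scores 1.0
freqs = np.linspace(0.0, 1.0, 200)
ideal = diffraction_limited_mtf(freqs, 1.0)
```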

  2. Image gathering and restoration - Information and visual quality

    NASA Technical Reports Server (NTRS)

    Mccormick, Judith A.; Alter-Gartenberg, Rachel; Huck, Friedrich O.

    1989-01-01

    A method is investigated for optimizing the end-to-end performance of image gathering and restoration for visual quality. To achieve this objective, one must inevitably confront the problems that the visual quality of restored images depends on perceptual rather than mathematical considerations and that these considerations vary with the target, the application, and the observer. The method adopted in this paper is to optimize image gathering informationally and to restore images interactively to obtain the visually preferred trade-off among fidelity, resolution, sharpness, and clarity. The results demonstrate that this method leads to significant improvements over the visual quality obtained by traditional digital processing methods. These traditional methods allow a significant loss of visual quality to occur because they treat the design of the image-gathering system and the formulation of the image-restoration algorithm as two separate tasks, and because they fail to account for the transformations between the continuous and the discrete representations in image gathering and reconstruction.

  3. The effect of image quality and forensic expertise in facial image comparisons.

    PubMed

    Norell, Kristin; Läthén, Klas Brorsson; Bergström, Peter; Rice, Allyson; Natu, Vaidehi; O'Toole, Alice

    2015-03-01

    Images of perpetrators in surveillance video footage are often used as evidence in court. In this study, identification accuracy in facial image comparisons was compared between forensic experts and untrained persons, along with the impact of image quality. Participants viewed thirty image pairs and were asked to rate the level of support garnered from their observations for concluding whether or not the two images showed the same person. Forensic experts reached their conclusions with significantly fewer errors than did untrained participants. They were also better than novices at determining when two high-quality images depicted the same person. Notably, lower image quality led to more careful conclusions by experts, but not by untrained participants. In summary, the untrained participants had more false negatives and false positives than the experts, which in the latter case could lead to a higher risk of an innocent person being convicted when the witness is untrained. PMID:25537273

  4. Improving high resolution retinal image quality using speckle illumination HiLo imaging

    PubMed Central

    Zhou, Xiaolin; Bedggood, Phillip; Metha, Andrew

    2014-01-01

    Retinal image quality from flood illumination adaptive optics (AO) ophthalmoscopes is adversely affected by out-of-focus light scatter due to the lack of confocality. This effect is more pronounced in small eyes, such as those of rodents, because the requisite high optical power confers a large dioptric thickness to the retina. A recently-developed structured illumination microscopy (SIM) technique called HiLo imaging has been shown to reduce the effect of out-of-focus light scatter in flood illumination microscopes and produce pseudo-confocal images with significantly improved image quality. In this work, we adapted the HiLo technique to a flood AO ophthalmoscope and performed AO imaging in both (physical) model and live rat eyes. The improvement in image quality from HiLo imaging is shown both qualitatively and quantitatively by using spatial spectral analysis. PMID:25136486
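    The spatial spectral analysis used above to quantify the HiLo improvement can be sketched with a radially averaged power spectrum. The following is a minimal numpy illustration with synthetic stand-in data (a seeded noise image and a crude 5-point mean blur, not the authors' retinal data or exact procedure): out-of-focus scatter acts as a low-pass filter, so the scatter-reduced image retains more high-spatial-frequency energy.

```python
import numpy as np

def radial_power_spectrum(img, n_bins=32):
    """Radially averaged power spectrum of a 2-D image."""
    f = np.fft.fftshift(np.fft.fft2(img - img.mean()))
    power = np.abs(f) ** 2
    h, w = img.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h / 2, x - w / 2)
    bins = np.linspace(0.0, r.max(), n_bins + 1)
    idx = np.clip(np.digitize(r.ravel(), bins) - 1, 0, n_bins - 1)
    energy = np.bincount(idx, weights=power.ravel(), minlength=n_bins)
    counts = np.bincount(idx, minlength=n_bins)
    return energy / np.maximum(counts, 1)

rng = np.random.default_rng(0)
sharp = rng.standard_normal((64, 64))      # stand-in for a scatter-reduced image
blurred = sum(np.roll(sharp, s, ax)        # crude low-pass: 5-point mean
              for ax in (0, 1) for s in (-1, 1))
blurred = (blurred + sharp) / 5.0

hf_sharp = radial_power_spectrum(sharp)[-8:].sum()
hf_blurred = radial_power_spectrum(blurred)[-8:].sum()
```

    The tail bins of the spectrum serve as a crude high-frequency energy score; hf_sharp exceeds hf_blurred because the mean filter suppresses high spatial frequencies.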

  5. Peripheral Aberrations and Image Quality for Contact Lens Correction

    PubMed Central

    Shen, Jie; Thibos, Larry N.

    2011-01-01

    Purpose Contact lenses reduced the degree of hyperopic field curvature present in myopic eyes and rigid contact lenses reduced sphero-cylindrical image blur on the peripheral retina, but their effect on higher order aberrations and overall optical quality of the eye in the peripheral visual field is still unknown. The purpose of our study was to evaluate peripheral wavefront aberrations and image quality across the visual field before and after contact lens correction. Methods A commercial Hartmann-Shack aberrometer was used to measure ocular wavefront errors in 5° steps out to 30° of eccentricity along the horizontal meridian in uncorrected eyes and when the same eyes are corrected with soft or rigid contact lenses. Wavefront aberrations and image quality were determined for the full elliptical pupil encountered in off-axis measurements. Results Ocular higher-order aberrations increase away from fovea in the uncorrected eye. Third-order aberrations are larger and increase faster with eccentricity compared to the other higher-order aberrations. Contact lenses increase all higher-order aberrations except 3rd-order Zernike terms. Nevertheless, a net increase in image quality across the horizontal visual field for objects located at the foveal far point is achieved with rigid lenses, whereas soft contact lenses reduce image quality. Conclusions Second order aberrations limit image quality more than higher-order aberrations in the periphery. Although second-order aberrations are reduced by contact lenses, the resulting gain in image quality is partially offset by increased amounts of higher-order aberrations. To fully realize the benefits of correcting higher-order aberrations in the peripheral field requires improved correction of second-order aberrations as well. PMID:21873925

  6. Image science and image-quality research in the Optical Sciences Center

    NASA Astrophysics Data System (ADS)

    Barrett, Harrison H.; Myers, Kyle J.

    2014-09-01

    This paper reviews the history of research into imaging and image quality at the Optical Sciences Center (OSC), with emphasis on the period 1970-1990. The work of various students in the areas of psychophysical studies of human observers of images, mathematical model observers, image simulation and analysis, and the application of these methods to radiology and nuclear medicine is summarized. The rapid progress in computational power, at OSC and elsewhere, which enabled the steady advances in imaging and the emergence of a science of imaging, is also traced. The implications of these advances for ongoing research and the current Image Science curriculum at the College of Optical Sciences are discussed.

  7. Medical imaging using ionizing radiation: Optimization of dose and image quality in fluoroscopy

    SciTech Connect

    Jones, A. Kyle; Balter, Stephen; Rauch, Phillip; Wagner, Louis K.

    2014-01-15

    The 2012 Summer School of the American Association of Physicists in Medicine (AAPM) focused on optimization of the use of ionizing radiation in medical imaging. Day 2 of the Summer School was devoted to fluoroscopy and interventional radiology and featured seven lectures. These lectures have been distilled into a single review paper covering equipment specification and siting, equipment acceptance testing and quality control, fluoroscope configuration, radiation effects, dose estimation and measurement, and principles of flat panel computed tomography. This review focuses on modern fluoroscopic equipment and consists largely of information not found in textbooks on the subject. While this review does discuss technical aspects of modern fluoroscopic equipment, it focuses mainly on the clinical use and support of such equipment, from initial installation through estimation of patient dose and management of radiation effects. This review will be of interest to those learning about fluoroscopy, to those wishing to update their knowledge of modern fluoroscopic equipment, to those wishing to deepen their knowledge of particular topics, such as flat panel computed tomography, and to those who support fluoroscopic equipment in the clinic.

  8. APQ-102 imaging radar digital image quality study

    NASA Astrophysics Data System (ADS)

    Griffin, C. R.; Estes, J. M.

    1982-11-01

    A modified APQ-102 sidelooking radar collected synthetic aperture radar (SAR) data which was digitized and recorded on wideband magnetic tape. These tapes were then ground processed into computer compatible tapes (CCT's). The CCT's may then be processed into high resolution radar images by software on the CYBER computer.

  9. APQ-102 imaging radar digital image quality study

    NASA Technical Reports Server (NTRS)

    Griffin, C. R.; Estes, J. M.

    1982-01-01

    A modified APQ-102 sidelooking radar collected synthetic aperture radar (SAR) data which was digitized and recorded on wideband magnetic tape. These tapes were then ground processed into computer compatible tapes (CCT's). The CCT's may then be processed into high resolution radar images by software on the CYBER computer.

  10. A STUDY OF THE IMAGE QUALITY OF COMPUTED TOMOGRAPHY ADAPTIVE STATISTICAL ITERATIVE RECONSTRUCTED BRAIN IMAGES USING SUBJECTIVE AND OBJECTIVE METHODS.

    PubMed

    Mangat, J; Morgan, J; Benson, E; Båth, M; Lewis, M; Reilly, A

    2016-06-01

    The recent reintroduction of iterative reconstruction in computed tomography has facilitated the realisation of major dose savings. The aim of this article was to investigate the possibility of achieving further savings at a site with well-established Adaptive Statistical Iterative Reconstruction (ASiR™) (GE Healthcare) brain protocols. An adult patient study was conducted with observers making visual grading assessments using image quality criteria, which were compared with the frequency domain metrics, noise power spectrum and modulation transfer function. Subjective image quality equivalency was found in the 40-70% ASiR™ range, leading to the proposal of ranges for the objective metrics defining acceptable image quality. Based on the findings of both the patient-based and objective studies of the ASiR™/tube-current combinations tested, 60%/305 mA was found to fall within all but one of these ranges. Therefore, it is recommended that an ASiR™ level of 60%, with a noise index of 12.20, is a viable alternative to the currently used protocol featuring a 40% ASiR™ level and a noise index of 11.20, potentially representing a 16% dose saving. PMID:27103646

  11. Influence of oak maturation regimen on composition, sensory properties, quality, and consumer acceptability of cabernet sauvignon wines.

    PubMed

    Crump, Anna M; Johnson, Trent E; Wilkinson, Kerry L; Bastian, Susan E P

    2015-02-11

    Oak barrels have long been the preferred method for oak maturation of wine, but barrels contribute significantly to production costs, so alternate oak maturation regimens have been introduced, particularly for wines at lower price points. To date, few studies have investigated consumers' acceptance of wines made using non-traditional oak treatments. In this study, two Cabernet Sauvignon wines were aged using traditional (i.e., barrel) and/or alternative (i.e., stainless steel or plastic tanks and vats, with oak wood added) maturation regimens. Chemical and sensory analyses were subsequently performed to determine the influence on wine composition and sensory properties, that is, the presence of key oak-derived volatile compounds and perceptible oak aromas and flavor. The quality of a subset of wines was rated by a panel of 10 wine experts using a 20-point scoring system, with all wines considered technically sound. Consumer acceptance of wines was also determined. Hedonic ratings ranged from 5.7 to 5.9 (on a 9-point scale), indicating there was no significant difference in consumers' overall liking of each wine. However, segmentation based on individual liking scores identified three distinct clusters comprising consumers with considerably different wine preferences. These results justify wine producers' use of alternative oak maturation regimens to achieve wine styles that appeal to different segments of their target market. PMID:25584640

  12. Ongoing quality control in digital radiography: Report of AAPM Imaging Physics Committee Task Group 151

    SciTech Connect

    Jones, A. Kyle; Geiser, William; Heintz, Philip; Goldman, Lee; Jerjian, Khachig; Martin, Melissa; Peck, Donald; Pfeiffer, Douglas; Ranger, Nicole; Yorkston, John

    2015-11-15

    Quality control (QC) in medical imaging is an ongoing process and not just a series of infrequent evaluations of medical imaging equipment. The QC process involves designing and implementing a QC program, collecting and analyzing data, investigating results that are outside the acceptance levels for the QC program, and taking corrective action to bring these results back to an acceptable level. The QC process involves key personnel in the imaging department, including the radiologist, radiologic technologist, and the qualified medical physicist (QMP). The QMP performs detailed equipment evaluations and helps with oversight of the QC program, while the radiologic technologist is responsible for the day-to-day operation of the QC program. The continued need for ongoing QC in digital radiography has been highlighted in the scientific literature. The charge of this task group was to recommend consistency tests designed to be performed by a medical physicist or a radiologic technologist under the direction of a medical physicist to identify problems with an imaging system that need further evaluation by a medical physicist, including a fault tree to define actions that need to be taken when certain fault conditions are identified. The focus of this final report is the ongoing QC process, including rejected image analysis, exposure analysis, and artifact identification. These QC tasks are vital for the optimal operation of a department performing digital radiography.

  13. Ongoing quality control in digital radiography: Report of AAPM Imaging Physics Committee Task Group 151.

    PubMed

    Jones, A Kyle; Heintz, Philip; Geiser, William; Goldman, Lee; Jerjian, Khachig; Martin, Melissa; Peck, Donald; Pfeiffer, Douglas; Ranger, Nicole; Yorkston, John

    2015-11-01

    Quality control (QC) in medical imaging is an ongoing process and not just a series of infrequent evaluations of medical imaging equipment. The QC process involves designing and implementing a QC program, collecting and analyzing data, investigating results that are outside the acceptance levels for the QC program, and taking corrective action to bring these results back to an acceptable level. The QC process involves key personnel in the imaging department, including the radiologist, radiologic technologist, and the qualified medical physicist (QMP). The QMP performs detailed equipment evaluations and helps with oversight of the QC program, while the radiologic technologist is responsible for the day-to-day operation of the QC program. The continued need for ongoing QC in digital radiography has been highlighted in the scientific literature. The charge of this task group was to recommend consistency tests designed to be performed by a medical physicist or a radiologic technologist under the direction of a medical physicist to identify problems with an imaging system that need further evaluation by a medical physicist, including a fault tree to define actions that need to be taken when certain fault conditions are identified. The focus of this final report is the ongoing QC process, including rejected image analysis, exposure analysis, and artifact identification. These QC tasks are vital for the optimal operation of a department performing digital radiography. PMID:26520756

  14. Validation of no-reference image quality index for the assessment of digital mammographic images

    NASA Astrophysics Data System (ADS)

    de Oliveira, Helder C. R.; Barufaldi, Bruno; Borges, Lucas R.; Gabarda, Salvador; Bakic, Predrag R.; Maidment, Andrew D. A.; Schiabel, Homero; Vieira, Marcelo A. C.

    2016-03-01

    To ensure optimal clinical performance of digital mammography, it is necessary to obtain images with high spatial resolution and low noise, keeping radiation exposure as low as possible. These requirements directly affect the interpretation of radiologists. The quality of a digital image should be assessed using objective measurements. In general, these methods measure the similarity between a degraded image and an ideal image without degradation (ground truth), used as a reference. These methods are called Full-Reference Image Quality Assessment (FR-IQA). However, for digital mammography, an image without degradation is not available in clinical practice; thus, an objective method to assess the quality of mammograms must be performed without reference. The purpose of this study is to present a Normalized Anisotropic Quality Index (NAQI), based on the Rényi entropy in the pseudo-Wigner domain, to assess mammography images in terms of spatial resolution and noise without any reference. The method was validated using synthetic images acquired through an anthropomorphic breast software phantom, and clinical exposures of anthropomorphic breast physical phantoms and patients' mammograms. The results reported by this no-reference index follow the same behavior as other well-established full-reference metrics, e.g., the peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM). Reductions of 50% in the radiation dose in phantom images were translated into a decrease of 4 dB in the PSNR, 25% in the SSIM and 33% in the NAQI, evidencing that the proposed metric is sensitive to the noise resulting from dose reduction. The clinical results showed that images reduced to 53% and 30% of the standard radiation dose reported reductions of 15% and 25% on the NAQI, respectively. Thus, this index may be used in clinical practice as an image quality indicator to improve the quality assurance programs in mammography; hence, the proposed method reduces the subjectivity
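    The PSNR baseline used above has a standard definition that is easy to make concrete. The following is a minimal numpy sketch with toy images (NAQI itself, based on Rényi entropy in the pseudo-Wigner domain, is beyond this snippet and not reproduced here):

```python
import numpy as np

def psnr(reference, test, data_range=255.0):
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    if mse == 0:
        return np.inf
    return 10.0 * np.log10(data_range ** 2 / mse)

ref = np.full((8, 8), 100.0)
noisy = ref + 1.0          # every pixel off by 1 -> MSE = 1
value = psnr(ref, noisy)   # 20*log10(255) ≈ 48.13 dB
```

    Lower dose raises image noise, which raises the MSE and lowers the PSNR, matching the direction of the dose-reduction results reported in the abstract.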

  15. Comparison of Image Quality of Shoulder CT Arthrography Conducted Using 120 kVp and 140 kVp Protocols

    PubMed Central

    Ahn, Se Jin; Chai, Jee Won; Choi, Ja-Young; Yoo, Hye Jin; Kim, Sae Hoon; Kang, Heung Sik

    2014-01-01

    Objective To compare the image quality of shoulder CT arthrography performed using 120 kVp and 140 kVp protocols. Materials and Methods Fifty-four CT examinations were prospectively included. CT scans were performed on each patient at 120 kVp and 140 kVp; other scanning parameters were kept constant. Image qualities were qualitatively and quantitatively compared with respect to noise, contrast, and diagnostic acceptability. Diagnostic acceptabilities were graded using a one to five scale as follows: 1, suboptimal; 2, below average; 3, acceptable; 4, above average; and 5, superior. Radiation doses were also compared. Results Contrast was better at 120 kVp, but noise was greater. No significant differences were observed between the 120 kVp and 140 kVp protocols in terms of diagnostic acceptability, signal-to-noise ratio, or contrast-to-noise ratio. Lowering tube voltage from 140 kVp to 120 kVp reduced the radiation dose by 33%. Conclusion The use of 120 kVp during shoulder CT arthrography reduces radiation dose versus 140 kVp without significant loss of image quality. PMID:25469085

  16. Body image and quality of life in a Spanish population

    PubMed Central

    Lobera, Ignacio Jáuregui; Ríos, Patricia Bolaños

    2011-01-01

    Purpose The aim of the current study was to analyze the psychometric properties, factor structure, and internal consistency of the Spanish version of the Body Image Quality of Life Inventory (BIQLI-SP) as well as its test–retest reliability. Further objectives were to analyze different relationships with key dimensions of psychosocial functioning (ie, self-esteem, presence of psychopathological symptoms, eating and body image-related problems, and perceived stress) and to evaluate differences in body image quality of life due to gender. Patients and methods The sample comprised 417 students without any psychiatric history, recruited from the Pablo de Olavide University and the University of Seville. There were 140 men (33.57%) and 277 women (66.43%), and the mean age was 21.62 years (standard deviation = 5.12). After obtaining informed consent from all participants, the following questionnaires were administered: BIQLI, Eating Disorder Inventory-2 (EDI-2), Perceived Stress Questionnaire (PSQ), Self-Esteem Scale (SES), and Symptom Checklist-90-Revised (SCL-90-R). Results The BIQLI-SP shows adequate psychometric properties, and it may be useful to determine the body image quality of life in different physical conditions. A more positive body image quality of life is associated with better self-esteem, better psychological wellbeing, and fewer eating-related dysfunctional attitudes, this being more evident among women. Conclusion The BIQLI-SP may be useful to determine the body image quality of life in different contexts with regard to dermatology, cosmetic and reconstructive surgery, and endocrinology, among others. In these fields of study, a new trend has emerged to assess body image-related quality of life. PMID:21403794

  17. Analysis of image quality for laser display scanner test

    NASA Astrophysics Data System (ADS)

    Specht, H.; Kurth, S.; Billep, D.; Gessner, T.

    2009-02-01

    The scanning laser display technology is one of the most promising technologies for highly integrated projection display applications (e.g., in PDAs, mobile phones or head-mounted displays) due to its advantages regarding image quality, miniaturization level and low cost potential. As a couple of research teams found during their investigations on laser scanning projection systems, the image quality of such systems is, apart from the laser source and video signal processing, crucially determined by the scan engine, including MEMS scanner, driving electronics, scanning regime and synchronization. Even though a number of technical parameters can be measured with high accuracy, the test procedure is challenging because the influence of these parameters on image quality is often insufficiently understood. Thus, in many cases it is not clear how to define limiting values for characteristic parameters. In this paper the relationship between parameters characterizing the scan engine and their influence on image quality will be discussed. Those include scanner topography, geometry of the path of light as well as trajectory parameters. Understanding this enables a new methodology for testing and characterization of the scan engine, based on evaluation of one or a series of projected test images. Due to the fact that the evaluation process can be easily automated by digital image processing, this methodology has the potential to become integrated into the production process of laser displays.

  18. Image quality assessment with manifold and machine learning

    NASA Astrophysics Data System (ADS)

    Charrier, Christophe; Lebrun, Gilles; Lezoray, Olivier

    2009-01-01

    A crucial step in image compression is the evaluation of its performance, and more precisely the available way to measure the final quality of the compressed image. In this paper, a machine learning expert, providing a final class number, is designed. The quality measure is based on a learned classification process in order to respect that of human observers. Instead of computing a final score, our method classifies the quality using the quality scale recommended by the ITU. This quality scale contains 5 ranks ordered from 1 (the worst quality) to 5 (the best quality). This was done by constructing a vector containing many visual attributes; the final feature vector contains more than 40 attributes. Unfortunately, no study of the interactions between the visual attributes used has been done. A feature selection algorithm could be of interest, but the selection is highly dependent on the classifier used subsequently. Therefore, we prefer to perform dimensionality reduction instead of feature selection. Manifold learning methods are used to provide a low-dimensional new representation from the initial high-dimensional feature space. The classification process is performed on this new low-dimensional representation of the images. The results obtained are compared to those obtained without applying the dimension reduction process to judge the efficiency of the method.
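    The reduce-then-classify pipeline described above can be sketched in a few lines. As a stand-in for the paper's manifold learning and learned classifier, the following uses linear PCA (via SVD) plus a nearest-centroid rule on synthetic 40-dimensional "visual attribute" vectors; all data and dimensions here are illustrative, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic stand-in data: 40-D attribute vectors for images in the
# five quality classes (1 = worst ... 5 = best).
n_per_class, n_feat, n_classes = 20, 40, 5
class_means = 3.0 * rng.standard_normal((n_classes, n_feat))
X = np.vstack([m + rng.standard_normal((n_per_class, n_feat))
               for m in class_means])
y = np.repeat(np.arange(1, n_classes + 1), n_per_class)

# Dimensionality reduction: project onto the top-k principal components.
mu = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
k = 4
Z = (X - mu) @ Vt[:k].T

# Classify in the reduced space with a nearest-centroid rule.
centroids = np.array([Z[y == c].mean(axis=0) for c in range(1, n_classes + 1)])

def predict(z):
    return 1 + int(np.argmin(np.linalg.norm(centroids - z, axis=1)))

accuracy = np.mean([predict(z) == c for z, c in zip(Z, y)])
```

    With well-separated synthetic classes the reduced 4-D representation preserves nearly all between-class structure, so classification accuracy stays high despite the 10x dimension reduction.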

  19. Faster, higher quality volume visualization for 3D medical imaging

    NASA Astrophysics Data System (ADS)

    Kalvin, Alan D.; Laine, Andrew F.; Song, Ting

    2008-03-01

    The two major volume visualization methods used in biomedical applications are Maximum Intensity Projection (MIP) and Volume Rendering (VR), both of which involve the process of creating sets of 2D projections from 3D images. We have developed a new method for very fast, high-quality volume visualization of 3D biomedical images, based on the fact that the inverse of this process (transforming 2D projections into a 3D image) is essentially equivalent to tomographic image reconstruction. This new method uses the 2D projections acquired by the scanner, thereby obviating the need for the two computationally expensive steps currently required in the complete process of biomedical visualization, that is, (i) reconstructing the 3D image from 2D projection data, and (ii) computing the set of 2D projections from the reconstructed 3D image. As well as improvements in computation speed, this method also results in improvements in visualization quality, and in the case of x-ray CT we can exploit this quality improvement to reduce radiation dosage. In this paper, we demonstrate the benefits of developing biomedical visualization techniques by directly processing the sensor data acquired by body scanners, rather than by processing the image data reconstructed from the sensor data. We show results of using this approach for volume visualization for tomographic modalities such as x-ray CT, as well as for MRI.
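    The conventional forward-projection step that the abstract refers to is simplest for MIP: each output pixel is the maximum voxel value along its viewing ray. A minimal numpy sketch with a toy volume (the paper's contribution is avoiding this reconstruct-then-project round trip, which this snippet does not reproduce):

```python
import numpy as np

def mip(volume, axis=0):
    """Maximum Intensity Projection: keep the brightest voxel along each ray."""
    return volume.max(axis=axis)

# Toy 4x3x3 volume with one bright "vessel" voxel.
vol = np.zeros((4, 3, 3))
vol[2, 1, 1] = 7.0
proj = mip(vol, axis=0)   # 3x3 image; the ray through the bright voxel reads 7.0
```

    Choosing a different `axis` projects along a different viewing direction; arbitrary directions additionally require resampling the volume.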

  20. An approach for quantitative image quality analysis for CT

    NASA Astrophysics Data System (ADS)

    Rahimi, Amir; Cochran, Joe; Mooney, Doug; Regensburger, Joe

    2016-03-01

    An objective and standardized approach to assess image quality of Computed Tomography (CT) systems is required in a wide variety of imaging processes to identify CT systems appropriate for a given application. We present an overview of the framework we have developed to help standardize and to objectively assess CT image quality for different models of CT scanners used for security applications. Within this framework, we have developed methods to quantitatively measure metrics that should correlate with feature identification, detection accuracy and precision, and image registration capabilities of CT machines and to identify strengths and weaknesses in different CT imaging technologies in transportation security. To that end we have designed, developed and constructed phantoms that allow for systematic and repeatable measurements of roughly 88 image quality metrics, representing modulation transfer function, noise equivalent quanta, noise power spectra, slice sensitivity profiles, streak artifacts, CT number uniformity, CT number consistency, object length accuracy, CT number path length consistency, and object registration. Furthermore, we have developed a sophisticated MATLAB based image analysis tool kit to analyze CT-generated images of phantoms and report these metrics in a format that is standardized across the considered models of CT scanners, allowing for comparative image quality analysis within a CT model or between different CT models. In addition, we have developed a modified sparse principal component analysis (SPCA) method to generate a modified set of PCA components as compared to the standard principal component analysis (PCA) with sparse loadings in conjunction with the Hotelling T2 statistical analysis method to compare, qualify, and detect faults in the tested systems.
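    Of the metrics listed above, the modulation transfer function is the easiest to illustrate: one common estimate is the normalized Fourier magnitude of a measured line spread function (LSF). A hedged numpy sketch with synthetic Gaussian LSFs (not the authors' phantom procedure or MATLAB toolkit):

```python
import numpy as np

def mtf_from_lsf(lsf):
    """MTF estimated as the normalized FFT magnitude of a line spread function."""
    m = np.abs(np.fft.rfft(lsf))
    return m / m[0]

# Synthetic Gaussian LSFs: a wider spread means a faster MTF roll-off,
# i.e. poorer spatial resolution.
x = np.arange(-32, 32)
lsf_sharp = np.exp(-x**2 / (2 * 1.5**2))
lsf_soft = np.exp(-x**2 / (2 * 4.0**2))

mtf_sharp = mtf_from_lsf(lsf_sharp)
mtf_soft = mtf_from_lsf(lsf_soft)
```

    At any nonzero spatial frequency the sharper system transfers more contrast, which is exactly the comparison a standardized phantom measurement makes between scanner models.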

  1. Effects of prebiotic inulin-type fructans on structure, quality, sensory acceptance and glycemic response of gluten-free breads.

    PubMed

    Capriles, Vanessa D; Arêas, José A G

    2013-01-01

    The effect of adding increasing levels of prebiotic inulin-type fructans (ITFs) (0, 4, 8, 10 and 12%) on the sensory and nutritional quality of gluten-free bread (GFB) was assessed. ITFs can provide structure and gas retention during baking, thus improving GFB quality by yielding better specific volume, softer crumb, improved crust and crumb browning with enhanced sensory acceptance. During baking, approximately one-third of the ITFs was lost. The addition of 12% ITFs to the basic formulation is required in order to obtain GFB enriched with 8% ITFs (4 g of fructans per 50 g bread serving size), levels that can provide health benefits. The 12% ITF-addition level decreased the GFB glycemic index (from 71 to 48) and glycemic load (from 12 to 8). Prebiotic ITFs are a promising improver for GFB that can provide nutritional (11% dietary fiber content, low glycemic response) and functional benefits to patients with celiac disease, since ITFs are prebiotic ingredients that can also increase calcium absorption. PMID:23032642

  2. Software to model AXAF image quality

    NASA Technical Reports Server (NTRS)

    Ahmad, Anees

    1993-01-01

    , and command function. A translation program has been written to convert FEA output from structural analysis to GRAZTRACE surface deformation file (.dfm file). The program can accept standard output files and list files from COSMOS/M and NASTRAN finite element analysis programs. Some interactive options are also provided, such as Cartesian or cylindrical coordinate transformation, coordinate shift and scale, and axial length change. A computerized database for technical documents relating to the AXAF project has been established. Over 5000 technical documents have been entered into the master database. A user can now rapidly retrieve the desired documents relating to the AXAF project. The summary of the work performed under this contract is shown.

  3. Quality assurance of ultrasound imaging instruments by monitoring the monitor.

    PubMed

    Walker, J B; Thorne, G C; Halliwell, M

    1993-11-01

    Ultrasound quality assurance (QA) is a means of assuring the constant performance of an ultrasound instrument. A novel 'ultrasound image analyser' has been developed to allow objective, accurate and repeatable measurement of the image displayed on the ultrasound screen, i.e. as seen by the operator. The analyser uses a television camera/framestore combination to digitize and analyse this image. A QA scheme is described along with the procedures necessary to obtain a repeatable measurement of the image so that comparisons with earlier good images can be made. These include repositioning the camera and resetting the video display characteristics. The advantages of using the analyser over other methods are discussed. It is concluded that the analyser has distinct advantages over subjective image assessment methods and will be a valuable addition to current ultrasound QA programmes. PMID:8272435

  4. Investigation of perceptual attributes for mobile display image quality

    NASA Astrophysics Data System (ADS)

    Gong, Rui; Xu, Haisong; Wang, Qing; Wang, Zhehong; Li, Haifeng

    2013-08-01

    Large-scale psychophysical experiments are carried out on two types of mobile displays to evaluate the perceived image quality (IQ). Eight perceptual attributes, i.e., naturalness, colorfulness, brightness, contrast, sharpness, clearness, preference, and overall IQ, are visually assessed via categorical judgment method for various application types of test images, which were manipulated by different methods. Their correlations are deeply discussed, and further factor analysis revealed the two essential components to describe the overall IQ, i.e., the component of image detail aspect and the component of color information aspect. Clearness and naturalness are regarded as two principal factors for natural scene images, whereas clearness and colorfulness were selected as key attributes affecting the overall IQ for other application types of images. Accordingly, based on these selected attributes, two kinds of empirical models are built to predict the overall IQ of mobile displays for different application types of images.

  5. Real-time computer treatment of THz passive device images with the high image quality

    NASA Astrophysics Data System (ADS)

    Trofimov, Vyacheslav A.; Trofimov, Vladislav V.

    2012-06-01

    We demonstrate real-time computer code improving significantly the quality of images captured by the passive THz imaging system. The code is not only designed for a THz passive device: it can be applied to any kind of such devices and active THz imaging systems as well. We applied our code for computer processing of images captured by four passive THz imaging devices manufactured by different companies. It should be stressed that computer processing of images produced by different companies requires using the different spatial filters usually. The performance of current version of the computer code is greater than one image per second for a THz image having more than 5000 pixels and 24 bit number representation. Processing of THz single image produces about 20 images simultaneously corresponding to various spatial filters. The computer code allows increasing the number of pixels for processed images without noticeable reduction of image quality. The performance of the computer code can be increased many times using parallel algorithms for processing the image. We develop original spatial filters which allow one to see objects with sizes less than 2 cm. The imagery is produced by passive THz imaging devices which captured the images of objects hidden under opaque clothes. For images with high noise we develop an approach which results in suppression of the noise after using the computer processing and we obtain the good quality image. With the aim of illustrating the efficiency of the developed approach we demonstrate the detection of the liquid explosive, ordinary explosive, knife, pistol, metal plate, CD, ceramics, chocolate and other objects hidden under opaque clothes. The results demonstrate the high efficiency of our approach for the detection of hidden objects and they are a very promising solution for the security problem.

  6. Exploratory survey of image quality on CR digital mammography imaging systems in Mexico.

    PubMed

    Gaona, E; Rivera, T; Arreola, M; Franco, J; Molina, N; Alvarez, B; Azorín, C G; Casian, G

    2014-01-01

The purpose of this study was to assess the current status of image quality and dose in computed radiographic digital mammography (CRDM) systems. The study included CRDM systems of various models and manufacturers, for which dose and image quality comparisons were performed. Owing to the recent rise in the use of digital radiographic systems in Mexico, CRDM systems are rapidly replacing conventional film-screen systems without any regard to quality control or image quality standards. The study was conducted in 65 mammography facilities that use CRDM systems in Mexico City and the surrounding States. The systems were tested as used clinically, meaning that dose and beam qualities were selected using the automatic beam selection and photo-timed features. All systems surveyed generate laser film hardcopies for the radiologist to read on a scope or a high-luminance mammographic light box. It was found that 51 of the CRDM systems presented a variety of image artefacts and non-uniformities arising from inadequate acquisition and processing, as well as from the laser printer itself. Undisciplined alteration of image-processing settings by the technologist was found to be a serious and prevalent problem in 42 facilities. Only four facilities had an image QC program periodically monitored by a medical physicist. The average glandular dose (AGD) in the surveyed systems was estimated to have a mean value of 2.4 mGy. New legislation is required to improve image quality in mammography and to make screening mammography more efficient for the early detection of breast cancer. PMID:23938078

  7. Evaluation of Radiation Dose and Image Quality for the Varian Cone Beam Computed Tomography System

    SciTech Connect

    Cheng, Harry C.Y.; Wu, Vincent W.C.; Liu, Eva S.F.; Kwong, Dora L.W.

    2011-05-01

Purpose: To compare the image quality and dosimetry of the Varian cone beam computed tomography (CBCT) system between software Version 1.4.13 and Version 1.4.11 (referred to as 'new' and 'old' protocols, respectively, in the following text). This study investigated organ absorbed dose, total effective dose, and image quality of the CBCT system for the head-and-neck and pelvic regions. Methods and Materials: A calibrated Farmer chamber and two standard cylindrical Perspex CT dosimetry phantoms with diameters of 16 cm (head phantom) and 32 cm (body phantom) were used to measure the weighted cone-beam computed tomography dose index (CBCTDIw) of the Varian CBCT system. The absorbed dose of different organs was measured in a female anthropomorphic phantom with thermoluminescent dosimeters (TLD), and the total effective dose was estimated according to International Commission on Radiological Protection (ICRP) Publication 103. The dose measurement and image quality were studied for the head-and-neck and pelvic regions, and comparison was made between the new and old protocols. Results: The CBCTDIw values of the new head-and-neck and pelvic protocols were 36.6 and 29.4 mGy, respectively. The total effective doses from the new head-and-neck and pelvic protocols were 1.7 and 8.2 mSv, respectively. The absorbed doses to the lens for the new 200° and old 360° head-and-neck protocols were 3.8 and 59.4 mGy, respectively. The additional secondary cancer risk from daily CBCT might be up to 2.8%. Conclusions: The new Varian CBCT provided volumetric information for image guidance with acceptable image quality and lower radiation dose. This imaging tool gives a better standard for daily patient setup verification.
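The weighted dose index reported above is conventionally combined from centre and periphery chamber readings as CTDIw = (1/3)·centre + (2/3)·mean periphery. A small sketch under that standard convention (function name assumed; the paper's exact measurement procedure is not reproduced here):

```python
def weighted_ctdi(center_mgy, periphery_mgy):
    """Weighted CT dose index in mGy.

    center_mgy: reading at the phantom centre.
    periphery_mgy: list of readings at the peripheral holes (usually four).
    Convention: one third centre plus two thirds mean periphery.
    """
    peri_mean = sum(periphery_mgy) / len(periphery_mgy)
    return center_mgy / 3.0 + 2.0 * peri_mean / 3.0
```

For example, a 30 mGy centre reading with four 36 mGy periphery readings yields a weighted index of 34 mGy.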

  8. Determination of pasture quality using airborne hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Pullanagari, R. R.; Kereszturi, G.; Yule, Ian J.; Irwin, M. E.

    2015-10-01

Pasture quality is a critical determinant that influences animal performance (live weight gain, milk and meat production) and animal health. Assessment of pasture quality is therefore required to assist farmers with grazing planning and management, and with benchmarking between seasons and years. Traditionally, pasture quality is determined by field sampling, which is laborious, expensive and time consuming, and the information is not available in real time. Hyperspectral remote sensing has the potential to accurately quantify the biochemical composition of pasture over wide areas in great spatial detail. In this study an airborne imaging spectrometer (AisaFENIX, Specim) with a spectral range of 380-2500 nm and 448 spectral bands was used. A case study of a 600 ha hill-country farm in New Zealand illustrates the use of the system. Radiometric and atmospheric corrections, along with automated georectification of the imagery using a Digital Elevation Model (DEM), were applied to the raw images to convert them into geocoded reflectance images. A multivariate statistical method, partial least squares (PLS), was then applied to estimate pasture quality measures such as crude protein (CP) and metabolisable energy (ME) from canopy reflectance. The results of this study revealed that estimates of CP and ME had an R2 of 0.77 and 0.79, and an RMSECV of 2.97 and 0.81, respectively. By utilizing these regression models, spatial maps were created over the imaged area. These pasture quality maps can be used to adopt precision agriculture practices that improve farm profitability and environmental sustainability.
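The study fits PLS regression models from canopy reflectance. As a minimal sketch of the idea, here is a single-component PLS1 (NIPALS) fit-and-predict in pure Python; it is far simpler than the multi-component models one would fit to 448-band spectra, and all names are assumptions:

```python
def pls1_fit_predict(X, y):
    """One-component PLS1 (NIPALS): fit on (X, y) and return in-sample predictions.

    X: list of samples, each a list of predictor values (e.g. band reflectances).
    y: list of response values (e.g. crude protein).
    """
    n, p = len(X), len(X[0])
    # centre predictors and response
    xm = [sum(row[j] for row in X) / n for j in range(p)]
    ym = sum(y) / n
    Xc = [[row[j] - xm[j] for j in range(p)] for row in X]
    yc = [v - ym for v in y]
    # weight vector: covariance of each predictor with y, normalised
    w = [sum(Xc[i][j] * yc[i] for i in range(n)) for j in range(p)]
    norm = sum(v * v for v in w) ** 0.5
    w = [v / norm for v in w]
    # scores and regression coefficient on the latent component
    t = [sum(Xc[i][j] * w[j] for j in range(p)) for i in range(n)]
    q = sum(ti * yi for ti, yi in zip(t, yc)) / sum(ti * ti for ti in t)
    return [q * ti + ym for ti in t]
```

In practice one would use a library implementation (e.g. scikit-learn's `PLSRegression`) with multiple components and cross-validation to obtain the reported RMSECV.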

  9. No-reference image quality assessment for horizontal-path imaging scenarios

    NASA Astrophysics Data System (ADS)

    Rios, Carlos; Gladysz, Szymon

    2013-05-01

    There exist several image-enhancement algorithms and tasks associated with imaging through turbulence that depend on defining the quality of an image. Examples include: "lucky imaging", choosing the width of the inverse filter for image reconstruction, or stopping iterative deconvolution. We collected a number of image quality metrics found in the literature. Particularly interesting are the blind, "no-reference" metrics. We discuss ways of evaluating the usefulness of these metrics, even when a fully objective comparison is impossible because of the lack of a reference image. Metrics are tested on simulated and real data. Field data comes from experiments performed by the NATO SET 165 research group over a 7 km distance in Dayton, Ohio.

  10. Perceived assessment metrics for visible and infrared color fused image quality without reference image

    NASA Astrophysics Data System (ADS)

    Yu, Xuelian; Chen, Qian; Gu, Guohua; Ren, Jianle; Sui, Xiubao

    2015-02-01

Designing an objective quality assessment for color-fused images is a demanding and challenging task. We propose four no-reference metrics based on human visual system characteristics for objectively evaluating the quality of false-color fused images. The perceived edge metric (PEM) is defined based on a visual perception model and the color image gradient similarity between the fused image and the source images. The perceptual contrast metric (PCM) is established by associating multi-scale contrast and a contrast sensitivity filter (CSF) that varies with the color components. A linear combination of the standard deviation and mean value over the fused image constructs the image colorfulness metric (ICM). The color comfort metric (CCM) is designed from the average saturation and the ratio of pixels with high and low saturation. The qualitative and quantitative experimental results demonstrate that the proposed metrics agree well with subjective perception.
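The abstract does not give the ICM's exact combination of mean and standard deviation. A closely related, widely used formulation is the Hasler-Süsstrunk colorfulness measure over opponent color channels, sketched below; the 0.3 weighting comes from that measure, not necessarily from the paper's ICM:

```python
def colorfulness(pixels):
    """Hasler-Suesstrunk colorfulness over a list of (r, g, b) pixel tuples.

    Combines the spread (std dev) and magnitude (mean) of the opponent
    channels rg = R - G and yb = (R + G) / 2 - B.
    """
    rg = [r - g for r, g, b in pixels]
    yb = [(r + g) / 2.0 - b for r, g, b in pixels]

    def mean_std(vals):
        m = sum(vals) / len(vals)
        sd = (sum((v - m) ** 2 for v in vals) / len(vals)) ** 0.5
        return m, sd

    m_rg, s_rg = mean_std(rg)
    m_yb, s_yb = mean_std(yb)
    return (s_rg ** 2 + s_yb ** 2) ** 0.5 + 0.3 * (m_rg ** 2 + m_yb ** 2) ** 0.5
```

A pure greyscale image (r = g = b everywhere) scores zero, while saturated, varied colors score high.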

  11. Effect of salts of organic acids on Listeria monocytogenes, shelf life, meat quality, and consumer acceptability of beef frankfurters.

    PubMed

    Morey, Amit; Bowers, Jordan W J; Bauermeister, Laura J; Singh, Manpreet; Huang, Tung-Shi; McKee, Shelly R

    2014-01-01

The objective of this study was to evaluate the anti-listerial efficacy of salts of organic acids and their impact on the quality of frankfurters. Beef frankfurters were manufactured by incorporating organic acids in 5 different combinations: (1) control (no marinade addition; C); (2) sodium lactate (2% wt/wt; SL); (3) potassium lactate (2% wt/wt; PL); (4) sodium citrate (0.75% wt/wt; SC); and (5) sodium lactate (2% wt/wt)/sodium diacetate (0.25% wt/wt; SL/SD). Cooked frankfurters were inoculated with streptomycin-resistant (1500 μg/mL) L. monocytogenes (7 log₁₀ CFU/frank). Inoculated and noninoculated frankfurters were vacuum packaged and stored at 4 °C. Samples were taken weekly for up to 10 wk for enumeration of L. monocytogenes as well as aerobic plate counts (APC) and psychrotrophs (PSY). A total of 2 independent trials of the entire experiment were conducted. Noninoculated beef frankfurters were evaluated weekly by untrained sensory panelists for 7 wk. The SL, PL, and SC treatments did not (P > 0.05) adversely affect consumer acceptability through 8 wk, although the SL/SD treatment was significantly (P ≤ 0.05) less preferred across all sensory attributes. The SL/SD treatment negatively affected product quality but was able to control APC, PSY, and L. monocytogenes levels. SC performed similarly to the control throughout the 8-, 9-, and 10-wk storage periods, providing no benefit for inhibiting L. monocytogenes (which increased from 7 log CFU/frank to 10 log CFU/frank during storage) or for extending the shelf life of the beef frankfurters. In conclusion, 2% SL and PL, and 2% SL/0.25% SD may be effective L. monocytogenes inhibitors (maintaining inoculation levels of 7 log CFU/frank during storage), but changes to the SL/SD formulation should be studied to improve product quality. PMID:24460770

  12. Radiation dose and image quality for paediatric interventional cardiology

    NASA Astrophysics Data System (ADS)

    Vano, E.; Ubeda, C.; Leyton, F.; Miranda, P.

    2008-08-01

Radiation dose and image quality for paediatric protocols in a biplane x-ray system used for interventional cardiology have been evaluated. Entrance surface air kerma (ESAK) and image quality, using a test object and polymethyl methacrylate (PMMA) phantoms, have been measured for typical paediatric patient thicknesses (4-20 cm of PMMA). Images from fluoroscopy (low, medium and high) and cine modes have been archived in digital imaging and communications in medicine (DICOM) format. Signal-to-noise ratio (SNR), figure of merit (FOM), contrast (CO), contrast-to-noise ratio (CNR) and high-contrast spatial resolution (HCSR) have been computed from the images. Dose data transferred to the DICOM header have been used to test the values of the dosimetric display at the interventional reference point. ESAK for the fluoroscopy modes ranges from 0.15 to 36.60 µGy/frame when moving from 4 to 20 cm of PMMA; for cine, these values range from 2.80 to 161.10 µGy/frame. SNR, FOM, CO, CNR and HCSR are improved for the high fluoroscopy and cine modes and remain roughly constant across the different thicknesses. The cumulative dose at the interventional reference point was 25-45% higher than the skin dose for the vertical C-arm (depending on the phantom thickness). ESAK and the numerical image quality parameters allow verification of the proper setting of the x-ray system. Knowing the increase in dose per frame with increasing phantom thickness, together with the image quality parameters, will help cardiologists manage patient dose and select the best imaging acquisition mode during clinical procedures.
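The contrast-to-noise ratio used in studies like this can be computed from region-of-interest statistics; a minimal sketch with ROIs given as flat pixel lists (function names and the FOM normalisation by dose are assumptions consistent with common usage, not the paper's exact definitions):

```python
def cnr(signal_roi, background_roi):
    """Contrast-to-noise ratio: |mean difference| / background standard deviation."""
    ms = sum(signal_roi) / len(signal_roi)
    mb = sum(background_roi) / len(background_roi)
    sb = (sum((v - mb) ** 2 for v in background_roi) / len(background_roi)) ** 0.5
    return abs(ms - mb) / sb


def figure_of_merit(cnr_value, dose_ugy_per_frame):
    """A common dose-efficiency figure of merit: CNR squared per unit dose."""
    return cnr_value ** 2 / dose_ugy_per_frame
```

A FOM of this form rewards imaging modes that achieve the same CNR at a lower dose per frame, which is the trade-off the abstract describes.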

  13. Radiometric quality evaluation of ZY-02C satellite panchromatic image

    NASA Astrophysics Data System (ADS)

    Zhao, Fengfan; Sun, Ke; Yang, Lei

    2014-11-01

As the second Chinese civilian high-spatial-resolution satellite, ZY-02C was successfully launched on December 22, 2011. In this paper, we used two different methods, subjective evaluation and external evaluation, to assess the radiometric quality of ZY-02C panchromatic imagery, and compared it with that of the CBERS-02B and SPOT-5 satellites. The external evaluation provides quantitative measures of image quality; the EIFOV of ZY-02C, one of these parameters, is smaller than that of SPOT-5, indicating that the effective spatial resolution of ZY-02C is higher. The subjective results show that the quality of SPOT-5 imagery is slightly preferable to that of ZY-02C and CBERS-02B, and that the quality of ZY-02C is better than that of CBERS-02B for most land-cover types. The subjective and external evaluations show excellent agreement. Therefore, a comprehensive assessment of image quality can be obtained by combining the parameters introduced in this paper.

  14. Compressed image quality metric based on perceptually weighted distortion.

    PubMed

    Hu, Sudeng; Jin, Lina; Wang, Hanli; Zhang, Yun; Kwong, Sam; Kuo, C-C Jay

    2015-12-01

Objective quality assessment for compressed images is critical to various image compression systems that are essential in image delivery and storage. Although the mean squared error (MSE) is computationally simple, it may not accurately reflect the perceptual quality of compressed images, which is also affected dramatically by characteristics of the human visual system (HVS) such as the masking effect. In this paper, an image quality metric (IQM) is proposed based on perceptually weighted distortion in terms of the MSE. To capture the characteristics of the HVS, a randomness map is proposed to measure the masking effect, and a preprocessing scheme is proposed to simulate the processing that occurs in the initial part of the HVS. Since the masking effect depends strongly on structural randomness, the prediction error from the neighborhood under a statistical model is used to measure the significance of masking. Meanwhile, imperceptible high-frequency signal can be removed by preprocessing with low-pass filters. The relation between the distortions before and after the masking effect is investigated, and a masking modulation model is proposed to simulate the masking effect after preprocessing. The performance of the proposed IQM is validated on six image databases with various compression distortions. The experimental results show that the proposed algorithm outperforms other benchmark IQMs. PMID:26415170
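The paper's randomness map and masking modulation cannot be reproduced from the abstract, but the core idea of a perceptually weighted MSE can be sketched as a weight-map-normalised squared error (all names are assumptions; the weight map stands in for the paper's masking model):

```python
def weighted_mse(ref, dist, weights):
    """MSE with a per-pixel perceptual weight map.

    ref, dist: flat lists of reference and distorted pixel values.
    weights: per-pixel weights; higher weight means the error there is
    more visible (less masked), so it contributes more to the score.
    """
    total = sum(w * (a - b) ** 2 for a, b, w in zip(ref, dist, weights))
    return total / sum(weights)
```

With uniform weights this reduces to the ordinary MSE; a masking model would assign low weights to errors hidden in structurally random (highly masked) regions.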

  15. Can home care maintain an acceptable quality of life for patients with terminal cancer and their relatives?

    PubMed

    Hinton, J

    1994-01-01

    This prospective study was designed to assess whether patients with terminal cancer, and their relatives, find that competent home care sufficiently maintains comfort and helps adjustment. A random sample from a home care service with readily available beds comprised 77 adults and their relatives who were able and willing to be interviewed separately each week. They were asked the nature and degree of current problems and regular assessments were made of some qualities of life including mood, attitude to the condition, perceived help and preferred place of care. These patients had 90% of their care at home; 29% died at home but 30% were finally admitted for one to three days and 41% for longer. In the final eight weeks, tolerable physical symptoms were volunteered by a mean of 63% each week and psychological symptoms by 17%. Some distress was felt by 11% of patients; this was usually from pain, depression, dyspnoea, anxiety or weakness, and generally did not persist. Relatives suffered grief, strain or their own ill health. Patients' and relatives' reports generally matched except for the strain on carers. Regular assessments found that 64% of patients thought death certain or probable, and 27% thought it possible. Various proportions coped by optimism, fighting their disease, partial suppression or denial, but 50% reached positive acceptance. Relatives were more aware and accepting. About three-quarters of patients and half the relatives were composed, often enjoying life. Serious depression affected 5% of patients and anxiety 4%, but relatives' manifest depression in the later stages increased to 17% and anxiety to 14%. Many consciously disguised their feelings. Treatment was usually praised but realistic preference for home care fell steadily from 100% to 54% of patients and 45% of relatives. At follow-up most relatives approved of where patients had received care and died. PMID:7952369

  16. Dose and diagnostic image quality in digital tomosynthesis imaging of facial bones in pediatrics

    NASA Astrophysics Data System (ADS)

    King, J. M.; Hickling, S.; Elbakri, I. A.; Reed, M.; Wrogemann, J.

    2011-03-01

The purpose of this study was to evaluate the use of digital tomosynthesis (DT) for pediatric facial bone imaging. We compared the eye lens dose and diagnostic image quality of DT facial bone exams relative to digital radiography (DR) and computed tomography (CT), and investigated whether we could modify our current DT imaging protocol to reduce patient dose while maintaining sufficient diagnostic image quality. We measured the dose to the eye lens for all three modalities using high-sensitivity thermoluminescent dosimeters (TLDs) and an anthropomorphic skull phantom. To assess the diagnostic image quality of DT compared to the corresponding DR and CT images, we performed an observer study in which the visibility of anatomical structures in the DT phantom images was rated on a four-point scale. We then acquired DT images at lower doses and had radiologists indicate whether the visibility of each structure was adequate for diagnostic purposes. For typical facial bone exams, we measured eye lens doses of 0.1-0.4 mGy for DR, 0.3-3.7 mGy for DT, and 26 mGy for CT. In general, facial bone structures were visualized better with DT than with DR, and the majority of structures were visualized well enough to avoid the need for CT. DT imaging provides high-quality diagnostic images of the facial bones while delivering significantly lower doses to the lens of the eye compared to CT. In addition, we found that by adjusting the imaging parameters, the DT effective dose can be reduced by up to 50% while maintaining sufficient image quality.

  17. Perceived Image Quality Improvements from the Application of Image Deconvolution to Retinal Images from an Adaptive Optics Fundus Imager

    NASA Astrophysics Data System (ADS)

    Soliz, P.; Nemeth, S. C.; Erry, G. R. G.; Otten, L. J.; Yang, S. Y.

Aim: The objective of this project was to apply an image restoration methodology based on wavefront measurements obtained with a Shack-Hartmann sensor, and to evaluate the restored image quality based on medical criteria. Methods: Implementing an adaptive optics (AO) technique, a fundus imager was used to achieve low-order correction of retinal images; high-order correction was provided by deconvolution. A Shack-Hartmann wavefront sensor measures aberrations, and the wavefront measurement is the basis for actuating a deformable mirror. Image restoration to remove the remaining aberrations is achieved by direct deconvolution using the point spread function (PSF), or by blind deconvolution. The PSF is estimated using the measured wavefront aberrations. Direct application of classical deconvolution methods such as inverse filtering, Wiener filtering or iterative blind deconvolution (IBD) to the AO retinal images obtained from the adaptive optical imaging system is not satisfactory because of the very large image size, the difficulty of modeling the system noise, and inaccuracy in PSF estimation. Our approach combines direct and blind deconvolution to exploit the available system information, avoiding non-convergence and time-consuming iterative processes. Results: The deconvolution was applied to human subject data, and the resulting restored images were evaluated by a trained ophthalmic researcher. Qualitative analysis showed significant improvements. Neovascularization can be visualized with the adaptive optics device that cannot be resolved with the standard fundus camera. The individual nerve fiber bundles are easily resolved, as are melanin structures in the choroid. Conclusion: This project demonstrated that computer-enhanced, adaptive optics images show greater detail of anatomical and pathological structures.
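Direct deconvolution with a known PSF is typically carried out in the frequency domain. A minimal sketch of the per-frequency Wiener filter W = H*/(|H|² + K), where H is the PSF's transfer function and K a noise-regularisation constant (names are assumptions; the paper's combined direct/blind scheme is more elaborate):

```python
def wiener_filter(H, K):
    """Per-frequency Wiener deconvolution filter.

    H: list of complex transfer-function values (FFT of the PSF).
    K: non-negative noise-to-signal regularisation constant; K = 0
       reduces to the inverse filter 1/H.
    Returns the filter values to multiply the degraded image spectrum by.
    """
    return [h.conjugate() / (abs(h) ** 2 + K) for h in H]
```

With K = 0 and H = 2 the filter is 1/2, i.e. plain inversion; a positive K damps frequencies where H is small, which is what prevents noise amplification in inverse filtering.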

  18. Flattening filter removal for improved image quality of megavoltage fluoroscopy

    SciTech Connect

    Christensen, James D.; Kirichenko, Alexander; Gayou, Olivier

    2013-08-15

Purpose: Removal of the linear accelerator (linac) flattening filter enables a high rate of dose deposition with reduced treatment time. When used for megavoltage imaging, an unflat beam has reduced primary beam scatter, resulting in sharper images. In fluoroscopic imaging mode, the unflat beam has a higher photon count per image frame, yielding a higher contrast-to-noise ratio. The authors' goal was to quantify the effects of an unflat beam on the image quality of megavoltage portal and fluoroscopic images. Methods: 6 MV projection images were acquired in fluoroscopic and portal modes using an electronic flat-panel imager. The effects of the flattening filter on the relative modulation transfer function (MTF) and contrast-to-noise ratio were quantified using the QC3 phantom. The impact of flattening filter removal on the contrast-to-noise ratio of gold fiducial markers was also studied under various scatter conditions. Results: The unflat beam had improved contrast resolution, with up to a 40% increase in MTF contrast at the highest frequency measured (0.75 line pairs/mm). The contrast-to-noise ratio was increased, as expected from the increased photon flux. The visualization of fiducial markers was markedly better using the unflat beam under all scatter conditions, enabling visualization of thin gold fiducial markers, the thinnest of which was not visible using the flat beam. Conclusions: The removal of the flattening filter from a clinical linac leads to quantifiable improvements in the image quality of megavoltage projection images. These gains enable observers to more easily visualize thin fiducial markers and track their motion on fluoroscopic images.

  19. Body image quality of life in eating disorders

    PubMed Central

    Jáuregui Lobera, Ignacio; Bolaños Ríos, Patricia

    2011-01-01

    Purpose: The objective was to examine how body image affects quality of life in an eating-disorder (ED) clinical sample, a non-ED clinical sample, and a nonclinical sample. We hypothesized that ED patients would show the worst body image quality of life. We also hypothesized that body image quality of life would have a stronger negative association with specific ED-related variables than with other psychological and psychopathological variables, mainly among ED patients. On the basis of previous studies, the influence of gender on the results was explored, too. Patients and methods: The final sample comprised 70 ED patients (mean age 22.65 ± 7.76 years; 59 women and 11 men); 106 were patients with other psychiatric disorders (mean age 28.20 ± 6.52; 67 women and 39 men), and 135 were university students (mean age 21.57 ± 2.58; 81 women and 54 men), with no psychiatric history. After having obtained informed consent, the following questionnaires were administered: Body Image Quality of Life Inventory-Spanish version (BIQLI-SP), Eating Disorders Inventory-2 (EDI-2), Perceived Stress Questionnaire (PSQ), Self-Esteem Scale (SES), and Symptom Checklist-90-Revised (SCL-90-R). Results: The ED patients’ ratings on the BIQLI-SP were the lowest and negatively scored (BIQLI-SP means: +20.18, +5.14, and −6.18, in the student group, the non-ED patient group, and the ED group, respectively). The effect of body image on quality of life was more negative in the ED group in all items of the BIQLI-SP. Body image quality of life was negatively associated with specific ED-related variables, more than with other psychological and psychopathological variables, but not especially among ED patients. Conclusion: Body image quality of life was affected not only by specific pathologies related to body image disturbances, but also by other psychopathological syndromes. Nevertheless, the greatest effect was related to ED, and seemed to be more negative among men. This finding is the

  20. Image quality testing of assembled IR camera modules

    NASA Astrophysics Data System (ADS)

    Winters, Daniel; Erichsen, Patrik

    2013-10-01

Infrared (IR) camera modules for the LWIR (8-12 μm) that combine IR imaging optics with microbolometer focal plane array (FPA) sensors and readout electronics are increasingly becoming a mass-market product. At the same time, steady improvements in sensor resolution in the higher-priced markets raise the imaging-performance requirements on objectives and on the proper alignment between objective and FPA. This puts pressure on camera manufacturers and system integrators to assess the image quality of finished camera modules in a cost-efficient and automated way for quality control or during end-of-line testing. In this paper we present recent development work in the field of image quality testing of IR camera modules. This technology provides a wealth of additional information, in contrast to more traditional test methods like minimum resolvable temperature difference (MRTD), which give only a subjective overall test result. Parameters that can be measured include image quality via the modulation transfer function (MTF), broadband or with various bandpass filters, on- and off-axis, as well as optical parameters such as effective focal length (EFL) and distortion. If the camera module allows refocusing of the optics, additional parameters like best focus plane, image plane tilt, auto-focus quality, and chief ray angle can be characterized. Additionally, the homogeneity and response of the sensor with the optics can be characterized in order to calculate the appropriate tables for non-uniformity correction (NUC). The technology can also be used to control active alignment during mechanical assembly of optics to high-resolution sensors. Other important points discussed are the flexibility of the technology to test IR modules with different form factors and electrical interfaces, and, last but not least, its suitability for fully automated measurements in mass production.

  1. Impact of atmospheric aerosols on long range image quality

    NASA Astrophysics Data System (ADS)

    LeMaster, Daniel A.; Eismann, Michael T.

    2012-06-01

    Image quality in high altitude long range imaging systems can be severely limited by atmospheric absorption, scattering, and turbulence. Atmospheric aerosols contribute to this problem by scattering target signal out of the optical path and by scattering in unwanted light from the surroundings. Target signal scattering may also lead to image blurring though, in conventional modeling, this effect is ignored. The validity of this choice is tested in this paper by developing an aerosol modulation transfer function (MTF) model for an inhomogeneous atmosphere and then applying it to real-world scenarios using MODTRAN derived scattering parameters. The resulting calculations show that aerosol blurring can be effectively ignored.

  2. Pyramid wavefront sensor for image quality evaluation of optical system

    NASA Astrophysics Data System (ADS)

    Chen, Zhendong

    2015-08-01

When the pyramid wavefront sensor is used to evaluate imaging quality, it is placed at the focal plane of the aberrated optical system (e.g., a telescope) and splits the light into four beams. Four images of the pupil are created on the detector, and the detection signals of the pyramid wavefront sensor are calculated from these four intensity patterns, providing information on the derivatives of the aberrated wavefront. Based on the theory of the pyramid wavefront sensor, we are developing simulation software and a wavefront detector that can be used to test the imaging quality of a telescope. In our system, the subpupil image intensity through the pyramid sensor is calculated to obtain the wavefront aberration, in which piston, tilt, defocus, spherical, coma, astigmatism and other higher-order aberrations are represented separately by Zernike polynomials. The imaging quality of the optical system is then evaluated by the subsequent wavefront reconstruction. The performance of our system is to be checked by comparison with measurements carried out using the Puntino wavefront instrument (a Shack-Hartmann wavefront sensor). Within this framework, the measurement precision of the pyramid sensor will also be investigated through detailed experiments. In general, this project should be helpful both for understanding the principle of wavefront reconstruction and for its future technical applications. So far, we have produced the pyramid and established the laboratory setup of the image-quality detection system based on this wavefront sensor; preliminary results include the intensity images of the four pupils. Additional work is needed to analyze the characteristics of the pyramid wavefront sensor.

  3. Image quality-based adaptive illumination normalisation for face recognition

    NASA Astrophysics Data System (ADS)

    Sellahewa, Harin; Jassim, Sabah A.

    2009-05-01

Automatic face recognition is a challenging task due to intra-class variations. Changes in lighting conditions between the enrolment and identification stages contribute significantly to these intra-class variations. A common approach to addressing the effects of such varying conditions is to pre-process the biometric samples in order to normalise intra-class variations. Histogram equalisation is a widely used illumination-normalisation technique in face recognition. However, a recent study has shown that applying histogram equalisation to well-lit face images can decrease recognition accuracy. This paper presents a dynamic approach to illumination normalisation based on face image quality. The quality of a given face image is measured in terms of its luminance distortion by comparing the image against a known reference face image. Histogram equalisation is applied to a probe image only if its luminance distortion is higher than a predefined threshold. We tested the proposed adaptive illumination normalisation method on the widely used Extended Yale Face Database B. Identification results demonstrate that our adaptive normalisation produces better identification accuracy than the conventional approach, in which every image is normalised irrespective of the lighting conditions under which it was acquired.
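The adaptive scheme can be sketched for 8-bit grey values: score the probe's luminance against the reference, and equalise only when the score indicates a lighting mismatch. Here the score is taken as the luminance term of the Wang-Bovik universal quality index, which may differ in detail from the paper's luminance-distortion measure; the threshold value and function names are assumptions:

```python
def luminance_distortion(img, ref):
    """Luminance term of the universal quality index: 2*mx*my / (mx^2 + my^2).

    Close to 1 when the mean luminances of img and ref match.
    """
    mx = sum(img) / len(img)
    my = sum(ref) / len(ref)
    return 2.0 * mx * my / (mx * mx + my * my)


def adaptive_equalise(img, ref, threshold=0.98):
    """Equalise the histogram only when the luminance score falls below threshold.

    img, ref: flat lists of 8-bit grey values (0-255).
    """
    if luminance_distortion(img, ref) >= threshold:
        return img  # lighting matches the reference: leave unchanged
    # plain histogram equalisation via the cumulative distribution
    n = len(img)
    hist = [0] * 256
    for v in img:
        hist[v] += 1
    cdf, run = [], 0
    for c in hist:
        run += c
        cdf.append(run)
    return [round(255 * cdf[v] / n) for v in img]
```

A well-lit probe passes through untouched, avoiding the accuracy loss the paper reports for equalising well-lit images, while a dark probe is remapped to span the full grey range.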

  4. Criterion to Evaluate the Quality of Infrared Small Target Images

    NASA Astrophysics Data System (ADS)

    Mao, Xia; Diao, Wei-He

    2009-01-01

In this paper, we propose a new criterion to estimate the quality of infrared small-target images. To describe the criterion quantitatively, two indicators are defined. One is the “degree of target being confused”, which represents the ability of an infrared small-target image to present fake targets. The other is the “degree of target being shielded”, which reflects the contribution of the image to shielding the target. Experimental results reveal that this criterion is more robust than the traditional method (signal-to-noise ratio). It is valid not only for infrared small-target images that the signal-to-noise ratio describes correctly, but also for images that the traditional criterion cannot accurately assess. In addition, the results of this criterion can provide information about the cause of background interference with target detection.

  5. DIANE stationary neutron radiography system image quality and industrial applications

    NASA Astrophysics Data System (ADS)

    Cluzeau, S.; Huet, J.; Le Tourneur, P.

    1994-05-01

The SODERN neutron radiography laboratory has operated since February 1993 using a sealed-tube generator (GENIE 46). An experimental programme of characterization (dosimetry, spectroscopy) has confirmed the expected performance concerning neutron flux intensity, neutron energy range, and residual gamma flux; results are given in a specific report [2]. This paper is devoted to reporting image performance. ASTM and specific indicators have been used to test image quality with various converters and films, and the corresponding modulation transfer functions are to be determined from image processing. Several industrial applications have demonstrated the capabilities of the system: corrosion detection in aircraft parts, testing of ammunition filling, detection of missing polymer in sandwich steel sheets, detection of moisture in a probe for geophysics, and imaging of residual ceramic cores in turbine blades. Various computerized electronic imaging systems will be tested to improve the industrial capabilities.

  6. Evaluation of Doses and Image Quality in Mammography with Screen-Film, CR, and DR Detectors – Application of the ACR Phantom

    PubMed Central

    Ślusarczyk-Kacprzyk, Wioletta; Skrzyński, Witold; Fabiszewska, Ewa

    2016-01-01

    Summary Background Different methods of image quality evaluation are routinely used for analogue and digital mammography systems in Poland. In the present study, image quality for several screen-film (SF), computed radiography (CR), and fully digital (DR) mammography systems was compared directly with the use of the ACR mammography accreditation phantom. Material/Methods Image quality and mean glandular doses were measured and compared for 47 mammography systems in the Mazovia Voivodeship in Poland, including 26 SF systems, 12 CR systems, and 9 DR systems. The mean glandular dose for the breast simulated by 4.5 cm of PMMA was calculated with methods described in the “European guidelines for quality assurance in breast cancer screening and diagnosis”. Visibility of the structures in the image (fibers, microcalcifications, and masses) was evaluated with the mammographic accreditation ACR phantom. Results Image quality for DR systems was significantly higher than for SF and CR systems. Several SF systems failed to pass the image quality tests because of artifacts. The doses were within acceptable limits for all of the systems, but the doses for the CR systems were significantly higher than for the SF and DR systems. Conclusions The best image quality, at a reasonably low dose, was observed for the DR systems. The CR systems are capable of obtaining the same image quality as the SF systems, but only at a significantly higher dose. The ACR phantom can be routinely used to evaluate image quality for all types of mammographic systems.

  7. Image quality, space-qualified UV interference filters

    NASA Technical Reports Server (NTRS)

    Mooney, Thomas A.

    1992-01-01

    The progress during the contract period is described. The project involved fabrication of image quality, space-qualified bandpass filters in the 200-350 nm spectral region. Ion-assisted deposition (IAD) was applied to produce stable, reasonably durable filter coatings on space compatible UV substrates. Thin film materials and UV transmitting substrates were tested for resistance to simulated space effects.

  8. Perceived interest versus overt visual attention in image quality assessment

    NASA Astrophysics Data System (ADS)

    Engelke, Ulrich; Zhang, Wei; Le Callet, Patrick; Liu, Hantao

    2015-03-01

    We investigate the impact of overt visual attention and perceived interest on the prediction performance of image quality metrics. Towards this end we performed two respective experiments to capture these mechanisms: an eye gaze tracking experiment and a region-of-interest selection experiment. Perceptual relevance maps were created from both experiments and integrated into the design of the image quality metrics. Correlation analysis shows that indeed there is an added value of integrating these perceptual relevance maps. We reveal that the improvement in prediction accuracy is not statistically different between fixation density maps from eye gaze tracking data and region-of-interest maps, thus, indicating the robustness of different perceptual relevance maps for the performance gain of image quality metrics. Interestingly, however, we found that thresholding of region-of-interest maps into binary maps significantly deteriorates prediction performance gain for image quality metrics. We provide a detailed analysis and discussion of the results as well as the conceptual and methodological differences between capturing overt visual attention and perceived interest.
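
The integration idea described above, weighting a local quality map by a perceptual relevance map before pooling it into a single score, can be sketched in a few lines. This is a generic illustration with invented maps and values, not the metrics or data from the study:

```python
import numpy as np

def weighted_pool(quality_map, relevance_map):
    # Normalise the relevance map to weights summing to 1, then compute
    # the relevance-weighted average of the local quality map.
    w = relevance_map / relevance_map.sum()
    return float((quality_map * w).sum())

# Toy example: a quality map that is poor in one salient patch, pooled with
# a graded relevance map versus a thresholded binary version of the same map.
quality = np.ones((4, 4)); quality[0, 0] = 0.2        # one degraded patch
graded = np.ones((4, 4)); graded[0, 0] = 4.0          # patch is most salient
binary = (graded > 2).astype(float)                   # thresholded ROI map
print(weighted_pool(quality, graded), weighted_pool(quality, binary))
```

Thresholding discards all weight outside the ROI, so the binary map scores only the degraded patch, while the graded map still accounts for the rest of the image; this is one way the binarisation effect reported above can arise.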

  9. SCID: full reference spatial color image quality metric

    NASA Astrophysics Data System (ADS)

    Ouni, S.; Chambah, M.; Herbin, M.; Zagrouba, E.

    2009-01-01

    The most widely used full-reference image quality assessments are error-based methods, performed with pixel-wise difference metrics such as Delta E (ΔE), MSE, and PSNR. Because these metrics compute differences pixel by pixel, only a local fidelity of the color is measured. However, they do not correlate well with perceived image quality: they omit the properties of the human visual system (HVS), which is rather sensitive to global quality, so they cannot be reliable predictors of perceived visual quality. In this paper, we present a novel full-reference color metric based on characteristics of the HVS that takes the notion of adjacency into account. This metric, called SCID for Spatial Color Image Difference, is more perceptually correlated than other color differences such as ΔE. The suggested full-reference metric is generic and independent of the image distortion type, and can be used in different applications such as compression and restoration.
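
For reference, the pixel-wise error measures this abstract criticizes are simple to compute. A minimal sketch (the ΔE shown is CIE76, i.e. Euclidean distance in L*a*b*, and assumes the inputs are already converted to that space):

```python
import numpy as np

def mse(ref, test):
    # Mean squared error between two grayscale images.
    return float(np.mean((np.asarray(ref, float) - np.asarray(test, float)) ** 2))

def psnr(ref, test, peak=255.0):
    # Peak signal-to-noise ratio in dB (infinite for identical images).
    err = mse(ref, test)
    return float("inf") if err == 0 else 10.0 * np.log10(peak ** 2 / err)

def delta_e76(lab_ref, lab_test):
    # Per-pixel CIE76 colour difference: Euclidean distance in L*a*b* space.
    diff = np.asarray(lab_ref, float) - np.asarray(lab_test, float)
    return np.sqrt(np.sum(diff ** 2, axis=-1))

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(32, 32)).astype(float)
noisy = np.clip(ref + rng.normal(0, 5, ref.shape), 0, 255)
print(mse(ref, noisy), psnr(ref, noisy))
```

Because these operate pixel by pixel, two images with identical per-pixel error distributions but very different spatial structure receive the same score, which is the shortcoming SCID addresses.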

  10. Simultaneous Analysis and Quality Assurance for Diffusion Tensor Imaging

    PubMed Central

    Lauzon, Carolyn B.; Asman, Andrew J.; Esparza, Michael L.; Burns, Scott S.; Fan, Qiuyun; Gao, Yurui; Anderson, Adam W.; Davis, Nicole; Cutting, Laurie E.; Landman, Bennett A.

    2013-01-01

    Diffusion tensor imaging (DTI) enables non-invasive, cyto-architectural mapping of in vivo tissue microarchitecture through voxel-wise mathematical modeling of multiple magnetic resonance imaging (MRI) acquisitions, each differently sensitized to water diffusion. DTI computations are fundamentally estimation processes and are sensitive to noise and artifacts. Despite widespread adoption in the neuroimaging community, maintaining consistent DTI data quality remains challenging given the propensity for patient motion, artifacts associated with fast imaging techniques, and the possibility of hardware changes/failures. Furthermore, the quantity of data acquired per voxel, the non-linear estimation process, and numerous potential use cases complicate traditional visual data inspection approaches. Currently, quality inspection of DTI data has relied on visual inspection and individual processing in DTI analysis software programs (e.g. DTIPrep, DTI-studio). However, recent advances in applied statistical methods have yielded several different metrics to assess noise level, artifact propensity, quality of tensor fit, variance of estimated measures, and bias in estimated measures. To date, these metrics have been largely studied in isolation. Herein, we select complementary metrics for integration into an automatic DTI analysis and quality assurance pipeline. The pipeline completes in 24 hours, stores statistical outputs, and produces a graphical summary quality analysis (QA) report. We assess the utility of this streamlined approach for empirical quality assessment on 608 DTI datasets from pediatric neuroimaging studies. The efficiency and accuracy of quality analysis using the proposed pipeline is compared with quality analysis based on visual inspection. The unified pipeline is found to save a statistically significant amount of time (over 70%) while improving the consistency of QA between a DTI expert and a pool of research associates. 
Projection of QA metrics to a low

  12. A Novel Image Quality Assessment With Globally and Locally Consilient Visual Quality Perception.

    PubMed

    Bae, Sung-Ho; Kim, Munchurl

    2016-05-01

    Computational models for image quality assessment (IQA) have been developed by exploring effective features that are consistent with the characteristics of a human visual system (HVS) for visual quality perception. In this paper, we first reveal that many existing features used in computational IQA methods can hardly characterize visual quality perception for local image characteristics and various distortion types. To solve this problem, we propose a new IQA method, called the structural contrast-quality index (SC-QI), by adopting a structural contrast index (SCI), which can well characterize local and global visual quality perceptions for various image characteristics with structural-distortion types. In addition to SCI, we devise some other perceptually important features for our SC-QI that can effectively reflect the characteristics of HVS for contrast sensitivity and chrominance component variation. Furthermore, we develop a modified SC-QI, called structural contrast distortion metric (SC-DM), which inherits desirable mathematical properties of valid distance metricability and quasi-convexity. So, it can effectively be used as a distance metric for image quality optimization problems. Extensive experimental results show that both SC-QI and SC-DM can very well characterize the HVS's properties of visual quality perception for local image characteristics and various distortion types, which is a distinctive merit of our methods compared with other IQA methods. As a result, both SC-QI and SC-DM have better performances with a strong consilience of global and local visual quality perception as well as with much lower computation complexity, compared with the state-of-the-art IQA methods. The MATLAB source codes of the proposed SC-QI and SC-DM are publicly available online at https://sites.google.com/site/sunghobaecv/iqa. PMID:27046873

  13. Optoelectronic complex inner product for evaluating quality of image segmentation

    NASA Astrophysics Data System (ADS)

    Power, Gregory J.; Awwal, Abdul Ahad S.

    2000-11-01

    In automatic target recognition and machine vision applications, segmentation of the images is a key step, and poor segmentation reduces recognition performance. For some imaging systems, such as MRI and synthetic aperture radar (SAR), it is difficult even for humans to agree on the location of the edge that defines a segmentation. A real-time, dynamic approach to determining the quality of segmentation can enable vision systems to refocus or apply appropriate algorithms to ensure high-quality segmentation for recognition. A recent approach to evaluating the quality of image segmentation uses percent-pixels-different (PPD). For some cases, PPD provides a reasonable quality evaluation, but it is weak at measuring how well the shape of the segmentation matches the true shape. This paper introduces the complex inner product as a goodness measure for evaluating segmentation quality based on shape. The approach is demonstrated on SAR target chips obtained from the Moving and Stationary Target Acquisition and Recognition (MSTAR) program sponsored by the Defense Advanced Research Projects Agency (DARPA) and the Air Force Research Laboratory (AFRL), and the results are compared to the PPD approach. A design for an optoelectronic implementation of the complex inner product for dynamic segmentation evaluation is introduced.
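
The abstract does not spell out the exact formulation, but one common way to build a complex inner product shape measure is to encode a closed boundary as a complex sequence z = x + iy and take the magnitude of the normalized inner product between two such signatures. A hypothetical sketch along those lines:

```python
import numpy as np

def boundary_signature(points, n_samples=64):
    # Encode a closed boundary (list of (x, y) points) as a complex
    # sequence z = x + iy, resampled to fixed length, centred (translation
    # invariance) and scale-normalised (unit norm).
    z = np.array([complex(px, py) for px, py in points])
    t = np.linspace(0, len(z) - 1, n_samples)
    z = np.interp(t, np.arange(len(z)), z)   # numpy interpolates complex fp
    z = z - z.mean()
    return z / np.linalg.norm(z)

def shape_similarity(a, b):
    # Magnitude of the complex inner product of unit-norm signatures:
    # 1.0 for identical shapes, smaller for mismatched shapes.
    return abs(np.vdot(a, b))

# Toy check: a square boundary versus a translated and scaled copy.
square = ([(x, 0) for x in range(10)] + [(10, y) for y in range(10)]
          + [(10 - x, 10) for x in range(10)] + [(0, 10 - y) for y in range(10)])
moved = [(2 * px + 7, 2 * py - 3) for px, py in square]
print(shape_similarity(boundary_signature(square), boundary_signature(moved)))
```

Unlike PPD, this measure compares whole-boundary shape rather than counting pixel disagreements.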

  14. Effects of display rendering on HDR image quality assessment

    NASA Astrophysics Data System (ADS)

    Zerman, Emin; Valenzise, Giuseppe; De Simone, Francesca; Banterle, Francesco; Dufaux, Frederic

    2015-09-01

    High dynamic range (HDR) displays use local backlight modulation to produce both high brightness levels and large contrast ratios. Thus, the display rendering algorithm and its parameters may greatly affect HDR visual experience. In this paper, we analyze the impact of display rendering on perceived quality for a specific display (SIM2 HDR47) and for a popular application scenario, i.e., HDR image compression. To this end, we assess whether significant differences exist between subjective quality of compressed images, when these are displayed using either the built-in rendering of the display, or a rendering algorithm developed by ourselves. As a second contribution of this paper, we investigate whether the possibility to estimate the true pixel-wise luminance emitted by the display, offered by our rendering approach, can improve the performance of HDR objective quality metrics that require true pixel-wise luminance as input.

  16. Full-Reference Image Quality Assessment with Linear Combination of Genetically Selected Quality Measures.

    PubMed

    Oszust, Mariusz

    2016-01-01

    Information carried by an image can be distorted by the image processing steps introduced by different electronic means of storage and communication. It is therefore important to develop algorithms that can automatically assess the quality of an image in a way that is consistent with human evaluation. In this paper, an approach to image quality assessment (IQA) is proposed in which the quality of a given image is evaluated jointly by several IQA approaches. First, in order to obtain such joint models, an optimisation problem of IQA measure aggregation is defined, with a weighted sum of their outputs (objective scores) as the aggregation operator. The weight of each measure is then treated as a decision variable in minimising the root mean square error between the obtained objective scores and subjective scores. Subjective scores reflect the ground truth and involve the evaluation of images by human observers. The optimisation problem is solved using a genetic algorithm, which also selects the measures used in the aggregation. The obtained multimeasures are evaluated on the four largest widely used image benchmarks and compared against state-of-the-art full-reference IQA approaches. The comparison reveals that the proposed approach outperforms the other competing measures. PMID:27341493
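
The aggregation scheme can be illustrated with a toy stand-in: least-squares fitting plays the role of the weight search, and exhaustive subset enumeration stands in for the genetic selection (feasible here because the toy space has only 15 subsets). All scores below are synthetic, not benchmark data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: subjective scores for 100 images and objective scores
# from four hypothetical IQA measures; 0 and 2 track quality, 1 and 3 are noise.
subj = rng.uniform(1, 5, 100)
scores = np.column_stack([
    subj + rng.normal(0, 0.1, 100),
    rng.uniform(0, 1, 100),
    2.0 * subj + rng.normal(0, 0.2, 100),
    rng.uniform(0, 1, 100),
])
tr, va = slice(0, 70), slice(70, 100)

def val_rmse(sel):
    # Fit aggregation weights on the training split by least squares,
    # then score the weighted sum on the held-out validation split.
    w, *_ = np.linalg.lstsq(scores[tr][:, sel], subj[tr], rcond=None)
    pred = scores[va][:, sel] @ w
    return np.sqrt(np.mean((pred - subj[va]) ** 2))

# Enumerate every non-empty subset of measures (the GA's selection role).
best = min(((val_rmse(sel), sel)
            for m in range(1, 16)
            for sel in [[i for i in range(4) if m >> i & 1]]),
           key=lambda t: t[0])
print(best)
```

The held-out split matters: on training data alone, adding noisy measures never increases the fitted RMSE, so some form of validation (or the GA's selection pressure) is needed to drop them.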

  17. Structural similarity analysis for brain MR image quality assessment

    NASA Astrophysics Data System (ADS)

    Punga, Mirela Visan; Moldovanu, Simona; Moraru, Luminita

    2014-11-01

    Brain MR images are affected and distorted by various artifacts such as noise, blur, blotching, downsampling or compression, as well as by inhomogeneity. Usually, the performance of a pre-processing operation is quantified using quality metrics such as the mean squared error and its related measures: peak signal-to-noise ratio, root mean squared error, and signal-to-noise ratio. The main drawback of these metrics is that they fail to take the structural fidelity of the image into account. For this reason, we investigated the structural changes related to luminance and contrast variation (non-structural distortions) and to the denoising process (structural distortion) through an alternative metric based on structural changes, in order to obtain the best image quality.
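
A widely used structure-aware alternative to the MSE-type metrics named above is SSIM. A single-window (global) version is sketched below using the constants from the original SSIM formulation; the full metric averages this quantity over local sliding windows:

```python
import numpy as np

def ssim_global(x, y, L=255.0):
    # Single-window SSIM: compares luminance (means), contrast (variances)
    # and structure (covariance) of the two images in one shot.
    C1, C2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()
    return (((2 * mx * my + C1) * (2 * cxy + C2))
            / ((mx ** 2 + my ** 2 + C1) * (vx + vy + C2)))

rng = np.random.default_rng(2)
img = rng.uniform(0, 255, (64, 64))
noisy = np.clip(img + rng.normal(0, 20, img.shape), 0, 255)
print(ssim_global(img, img), ssim_global(img, noisy))
```

Identical images score exactly 1; structural distortions such as denoising artifacts lower the covariance term even when the MSE change is small.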

  18. Improving Image Quality of Bronchial Arteries with Virtual Monochromatic Spectral CT Images

    PubMed Central

    Ma, Guangming; He, Taiping; Yu, Yong; Duan, Haifeng; Yang, Chuangbo

    2016-01-01

    Objective To evaluate the clinical value of using monochromatic images in spectral CT pulmonary angiography to improve the image quality of bronchial arteries. Methods We retrospectively analyzed the chest CT images of 38 patients who underwent contrast-enhanced spectral CT. These images included a set of 140 kVp polychromatic images and the default 70 keV monochromatic images. Using the standard Gemstone Spectral Imaging (GSI) viewer on an advanced workstation (AW4.6, GE Healthcare), an optimal energy level (in keV) for obtaining the best contrast-to-noise ratio (CNR) for the artery could be obtained automatically. The signal-to-noise ratio (SNR), CNR, and subjective image quality score (1–5) for these 3 image sets (140 kVp, 70 keV, and optimal energy level) were obtained and statistically compared. The consistency of the image quality scores between the two observers was also evaluated using the kappa test. Results The optimal energy level for obtaining the best CNR was 62.58±2.74 keV. SNR and CNR from the 140 kVp polychromatic, 70 keV, and optimal-keV monochromatic images were (16.44±5.85, 13.24±5.52), (20.79±7.45, 16.69±6.27), and (24.9±9.91, 20.53±8.46), respectively. The corresponding subjective image quality scores were 1.97±0.82, 3.24±0.75, and 4.47±0.60. SNR, CNR, and subjective scores differed significantly among the groups (all p<0.001). The optimal-keV monochromatic images were superior to the 70 keV monochromatic and 140 kVp polychromatic images, and there was high agreement between the two observers on image quality score (kappa>0.80). Conclusions Virtual monochromatic images at approximately 63 keV in dual-energy spectral CT pulmonary angiography yielded the best CNR and the highest diagnostic confidence for imaging bronchial arteries. PMID:26967737
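
The SNR and CNR figures reported above follow standard region-of-interest definitions. A minimal sketch on synthetic ROIs (one common CNR variant is shown; others pool the noise of both regions):

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic ROIs: an enhancing vessel region and adjacent background,
# both with the same invented noise level.
vessel = 300 + rng.normal(0, 12, 500)
background = 60 + rng.normal(0, 12, 500)

def snr(roi):
    # Signal-to-noise ratio of a region of interest: mean over its noise.
    return roi.mean() / roi.std()

def cnr(roi_a, roi_b):
    # Contrast-to-noise ratio: signal difference over the background noise.
    return abs(roi_a.mean() - roi_b.mean()) / roi_b.std()

print(snr(vessel), cnr(vessel, background))
```

At the optimal keV the iodine contrast (numerator) rises faster than the noise (denominator), which is why CNR peaks there.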

  19. Influence of physical parameters on radiation protection and image quality in intra-oral radiology

    NASA Astrophysics Data System (ADS)

    Belinato, W.; Souza, D. N.

    2011-10-01

    In the world of diagnostic imaging, radiography is an important supplementary method for dental diagnosis. In radiology, special attention must be paid to the radiological protection of patients and health professionals, and also to image quality for correct diagnosis. In Brazil, the national rules governing the operation of medical and dental radiology were specified in 1998 by the National Sanitary Surveillance Agency, complemented in 2005 by the guide "Medical radiology: security and performance of equipment." In this study, quality control tests were performed in public clinics with dental X-ray equipment in the State of Sergipe, Brazil, with consideration of the physical parameters that influence radiological protection and also the quality of images taken in intra-oral radiography. The accuracy of the exposure time was considered acceptable for equipment with digital timers. Exposure times and focal-spot size variations can lead to increased entrance dose. Increased dose has also been associated with visual processing of radiographic film, which often requires repeating the radiographic examination.

  20. Image quality specification and maintenance for airborne SAR

    NASA Astrophysics Data System (ADS)

    Clinard, Mark S.

    2004-08-01

    Specification, verification, and maintenance of image quality over the lifecycle of an operational airborne SAR begin with the specification for the system itself. Verification of image quality-oriented specification compliance can be enhanced by including a specification requirement that a vendor provide appropriate imagery at the various phases of the system life cycle. The nature and content of the imagery appropriate for each stage of the process depends on the nature of the test, the economics of collection, and the availability of techniques to extract the desired information from the data. At the earliest lifecycle stages, Concept and Technology Development (CTD) and System Development and Demonstration (SDD), the test set could include simulated imagery to demonstrate the mathematical and engineering concepts being implemented thus allowing demonstration of compliance, in part, through simulation. For Initial Operational Test and Evaluation (IOT&E), imagery collected from precisely instrumented test ranges and targets of opportunity consisting of a priori or a posteriori ground-truthed cultural and natural features are of value to the analysis of product quality compliance. Regular monitoring of image quality is possible using operational imagery and automated metrics; more precise measurements can be performed with imagery of instrumented scenes, when available. A survey of image quality measurement techniques is presented along with a discussion of the challenges of managing an airborne SAR program with the scarce resources of time, money, and ground-truthed data. Recommendations are provided that should allow an improvement in the product quality specification and maintenance process with a minimal increase in resource demands on the customer, the vendor, the operational personnel, and the asset itself.

  1. Imaging quality automated measurement of image intensifier based on orthometric phase-shifting gratings.

    PubMed

    Sun, Song; Cao, Yiping

    2016-06-01

    A method for automatically measuring the imaging quality parameters of an image intensifier based on orthometric phase-shifting gratings (OPSG) is proposed. Two sets of phase-shifting gratings, one with a fringe direction at 45° and the other at 135°, are successively projected onto the input port of the image intensifier, and the corresponding deformed patterns modulated by the measured image intensifier on its output port are captured with a CCD camera. Two phases are retrieved from these two sets of deformed patterns by a phase-measuring algorithm. By relating these retrieved phases, the referential fringe period can be determined accurately. Meanwhile, the distorted phase distribution introduced by the image intensifier can be efficiently separated, from which detailed imaging quality information can be further extracted. Subsequently, the magnification of the image intensifier is measured by fringe-period self-calibration. The experimental results show the feasibility of the proposed method, which can automatically measure multiple imaging quality parameters of an image intensifier without human intervention. PMID:27411191
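
The core of any such phase-measuring algorithm is the N-step phase-shifting relation; for four shifts of π/2 it reduces to φ = atan2(I₄ − I₂, I₁ − I₃). A minimal 1-D simulation (a synthetic fringe line with an invented distortion, not the OPSG setup itself):

```python
import numpy as np

# Simulate one fringe line with a known distortion phase and recover the
# total (wrapped) phase with the standard 4-step phase-shifting algorithm.
x = np.linspace(0, 4 * np.pi, 256)       # carrier phase of the grating
phi_true = 0.5 * np.sin(x)               # made-up distortion to recover
A, B = 128.0, 100.0                      # background intensity, modulation
shifts = (0.0, np.pi / 2, np.pi, 3 * np.pi / 2)
I1, I2, I3, I4 = (A + B * np.cos(x + phi_true + d) for d in shifts)

phi_rec = np.arctan2(I4 - I2, I1 - I3)   # wrapped total phase
phi_ref = np.angle(np.exp(1j * (x + phi_true)))
err = np.max(np.abs(np.angle(np.exp(1j * (phi_rec - phi_ref)))))
print(err)
```

Subtracting the known carrier phase from the recovered (unwrapped) phase is what isolates the distortion map attributed to the device under test.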

  2. Imaging through turbid media via sparse representation: imaging quality comparison of three projection matrices

    NASA Astrophysics Data System (ADS)

    Shao, Xiaopeng; Li, Huijuan; Wu, Tengfei; Dai, Weijia; Bi, Xiangli

    2015-05-01

    Incident light is scattered by the inhomogeneity of the refractive index in many materials, which greatly reduces the imaging depth and degrades the imaging quality. Many promising methods have been presented in recent years for solving this problem and imaging through a highly scattering medium, such as wavefront modulation and reconstruction techniques. An imaging method based on compressed sensing (CS) theory can decrease the computational complexity because it does not require the whole speckle pattern for reconstruction. One key premise of this method is that the object is sparse or has a sparse representation; beyond that, choosing a proper projection matrix is very important to the imaging quality. In this paper, we show that the transmission matrix (TM) of a scattering medium obeys a circular Gaussian distribution, which makes it possible to use a scattering medium as the measurement matrix in CS theory. To verify the performance of this method, a complete optical system is simulated. Various projection matrices are introduced to make the object sparse, including the fast Fourier transform (FFT) basis, the discrete cosine transform (DCT) basis, and the discrete wavelet transform (DWT) basis, and their imaging performances are compared comprehensively. Simulation results show that for most targets, applying the discrete wavelet transform basis will yield an image of good quality. This work can be applied to biomedical imaging and used to develop real-time imaging through highly scattering media.
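
The CS recovery step can be sketched end to end: a signal that is sparse in an orthonormal DCT basis is measured through a Gaussian matrix (standing in for the medium's transmission matrix) and reconstructed with ISTA, a basic ℓ1 solver. Sizes and parameters below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(3)
N, m, k = 128, 64, 5                     # signal length, measurements, sparsity

# Orthonormal DCT-II matrix as the sparsifying basis (rows are basis vectors).
n = np.arange(N)
D = np.cos(np.pi * (n[None, :] + 0.5) * n[:, None] / N)
D[0] *= np.sqrt(1.0 / N)
D[1:] *= np.sqrt(2.0 / N)

s_true = np.zeros(N)                     # k-sparse coefficient vector
s_true[rng.choice(N, k, replace=False)] = rng.normal(0, 1, k)
x_true = D.T @ s_true                    # object synthesised from the basis

Phi = rng.normal(0, 1 / np.sqrt(m), (m, N))   # Gaussian "transmission matrix"
y = Phi @ x_true                         # compressive measurements

# ISTA: gradient step on ||y - Phi D^T s||^2, then soft threshold (l1 prox).
A = Phi @ D.T
step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant
lam = 0.01
s = np.zeros(N)
for _ in range(500):
    s = s - step * (A.T @ (A @ s - y))
    s = np.sign(s) * np.maximum(np.abs(s) - lam * step, 0.0)

rel = np.linalg.norm(D.T @ s - x_true) / np.linalg.norm(x_true)
print(rel)
```

Swapping `D` for an FFT or wavelet basis changes only the synthesis matrix, which is how the paper's basis comparison can be framed.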

  3. Investigation of grid performance using simple image quality tests

    PubMed Central

    Bor, Dogan; Birgul, Ozlem; Onal, Umran; Olgar, Turan

    2016-01-01

    Antiscatter grids improve X-ray image contrast at the cost of increased patient radiation dose. Choosing an appropriate grid, or removing one, requires a good knowledge of grid characteristics, especially for pediatric digital imaging. The aim of this work is to understand the relation between grid performance parameters and some numerical image quality metrics for digital radiological examinations. The grid parameters, namely the bucky factor (BF), selectivity (Σ), contrast improvement factor (CIF), and signal-to-noise improvement factor (SIF), were determined from measurements of primary, scatter, and total radiation with a digital fluoroscopic system for 5, 10, 15, 20, and 25 cm thick polymethyl methacrylate blocks at tube voltages of 70, 90, and 120 kVp. Image contrast for low- and high-contrast objects and high-contrast spatial resolution were measured with simple phantoms using the same scatter thicknesses and tube voltages. BF and SIF values were also calculated from the images obtained with and without grids. The correlation coefficients between the BF values obtained using the two approaches (grid parameters and image quality metrics) were in good agreement. The proposed approach provides a quick and practical way of estimating grid performance for different digital fluoroscopic examinations. PMID:27051166
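
The grid figures of merit named above follow from the primary (P) and scatter (S) intensities measured with and without the grid. A sketch using common textbook definitions (total transmission Tt, BF = 1/Tt, Σ = Tp/Ts, CIF = Tp/Tt, SIF = Tp/√Tt); the intensities below are invented, not the paper's measurements:

```python
def grid_performance(P, S, P_grid, S_grid):
    # P, S: primary and scatter intensities measured without the grid;
    # P_grid, S_grid: the same quantities measured with the grid in place.
    Tp = P_grid / P                      # primary transmission
    Ts = S_grid / S                      # scatter transmission
    Tt = (P_grid + S_grid) / (P + S)     # total transmission
    return {
        "bucky_factor": 1.0 / Tt,        # dose penalty of using the grid
        "selectivity": Tp / Ts,
        "CIF": Tp / Tt,                  # contrast improvement factor
        "SIF": Tp / Tt ** 0.5,           # SNR improvement factor
    }

# Invented example: 70% primary and 15% scatter transmission, S/P ratio 2.
r = grid_performance(P=100.0, S=200.0, P_grid=70.0, S_grid=30.0)
print(r)
```

A SIF above 1 indicates the grid helps despite the dose penalty; below 1, removing the grid and accepting the scatter is the better trade for that thickness and kVp.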

  4. A study of image quality for radar image processing. [synthetic aperture radar imagery

    NASA Technical Reports Server (NTRS)

    King, R. W.; Kaupp, V. H.; Waite, W. P.; Macdonald, H. C.

    1982-01-01

    Methods developed for image quality metrics are reviewed with focus on basic interpretation or recognition elements including: tone or color; shape; pattern; size; shadow; texture; site; association or context; and resolution. Seven metrics are believed to show promise as a way of characterizing the quality of an image: (1) the dynamic range of intensities in the displayed image; (2) the system signal-to-noise ratio; (3) the system spatial bandwidth or bandpass; (4) the system resolution or acutance; (5) the normalized-mean-square-error as a measure of geometric fidelity; (6) the perceptual mean square error; and (7) the radar threshold quality factor. Selective levels of degradation are being applied to simulated synthetic radar images to test the validity of these metrics.
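
Two of the listed metrics are easy to make concrete: (1) the displayed dynamic range and (5) the normalized mean-square error as a geometric-fidelity measure. A sketch with toy values:

```python
import numpy as np

def dynamic_range_db(img):
    # Ratio of the brightest to the dimmest nonzero displayed intensity, in dB.
    v = img[img > 0]
    return 10 * np.log10(v.max() / v.min())

def nmse(ref, degraded):
    # Normalized mean-square error: total squared error relative to the
    # total energy of the reference image.
    ref = np.asarray(ref, float)
    degraded = np.asarray(degraded, float)
    return float(np.sum((ref - degraded) ** 2) / np.sum(ref ** 2))

img = np.array([[1.0, 10.0], [100.0, 1000.0]])
print(dynamic_range_db(img))    # ~30 dB for this toy image
print(nmse(img, 1.1 * img))     # uniform 10% gain error
```

Degrading a simulated SAR image and tracking how each metric responds is exactly the kind of validity test the paper describes.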

  5. [An improved medical image fusion algorithm and quality evaluation].

    PubMed

    Chen, Meiling; Tao, Ling; Qian, Zhiyu

    2009-08-01

    Medical image fusion is of great value in medical image analysis and diagnosis. In this paper, the conventional method of wavelet fusion is improved and a new algorithm for medical image fusion is presented, in which the high-frequency and low-frequency coefficients are handled separately. When high-frequency coefficients are chosen, the regional edge intensities of each sub-image are calculated to realize adaptive fusion. The choice of low-frequency coefficients is based on the edges of the images, so that the fused image preserves all useful information and appears more distinct. We apply the conventional and the improved wavelet-transform-based fusion algorithms to fuse two images of the human body and evaluate the fusion results with a quality evaluation method. Experimental results show that this algorithm can effectively retain the detail of the original images and enhance their edge and texture features, and that it outperforms the conventional fusion algorithm based on the wavelet transform. PMID:19813594
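
Coefficient-level fusion of this kind can be sketched with a single-level Haar wavelet transform: average the approximation band, and at each detail position keep the larger-magnitude coefficient. This max-magnitude rule is a simpler stand-in for the paper's regional edge-intensity rule:

```python
import numpy as np

def haar2(img):
    # One level of the 2-D Haar transform: (LL, LH, HL, HH) sub-bands.
    a = (img[0::2, :] + img[1::2, :]) / 2
    d = (img[0::2, :] - img[1::2, :]) / 2
    return ((a[:, 0::2] + a[:, 1::2]) / 2, (a[:, 0::2] - a[:, 1::2]) / 2,
            (d[:, 0::2] + d[:, 1::2]) / 2, (d[:, 0::2] - d[:, 1::2]) / 2)

def ihaar2(LL, LH, HL, HH):
    # Exact inverse of haar2 (perfect reconstruction).
    a = np.zeros((LL.shape[0], LL.shape[1] * 2))
    d = np.zeros_like(a)
    a[:, 0::2], a[:, 1::2] = LL + LH, LL - LH
    d[:, 0::2], d[:, 1::2] = HL + HH, HL - HH
    out = np.zeros((a.shape[0] * 2, a.shape[1]))
    out[0::2, :], out[1::2, :] = a + d, a - d
    return out

def fuse(img1, img2):
    # Average the approximation band; keep the larger-magnitude detail
    # coefficient at each position to preserve the stronger edges.
    c1, c2 = haar2(img1), haar2(img2)
    LL = (c1[0] + c2[0]) / 2
    details = [np.where(np.abs(u) >= np.abs(v), u, v)
               for u, v in zip(c1[1:], c2[1:])]
    return ihaar2(LL, *details)

rng = np.random.default_rng(5)
img = rng.uniform(0, 255, (8, 8))
```

Fusing an image with itself returns the image unchanged, a useful sanity check on any fusion rule.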

  6. Evaluation of image quality of a new CCD-based system for chest imaging

    NASA Astrophysics Data System (ADS)

    Sund, Patrik; Kheddache, Susanne; Mansson, Lars G.; Bath, Magnus; Tylen, Ulf

    2000-04-01

    The Imix radiography system (Oy Imix Ab, Finland) consists of an intensifying screen, optics, and a CCD camera. An upgrade of this system (Imix 2000) with a red-emitting screen and new optics has recently been released. The image quality of Imix (original version), Imix 2000, and two storage-phosphor systems, Fuji FCR 9501 and Agfa ADC70, was evaluated in physical terms (DQE) and with visual grading of the visibility of anatomical structures in clinical images (141 kV). PA chest images of 50 healthy volunteers were evaluated by experienced radiologists. All images were evaluated on Siemens Simomed monitors, using the European Quality Criteria. The maximum DQE values for Imix, Imix 2000, Agfa, and Fuji were 11%, 14%, 17%, and 19%, respectively (141 kV, 5 μGy). In the visual grading, the observers rated the systems in the following descending order: Fuji, Imix 2000, Agfa, and Imix. Thus, the upgrade to Imix 2000 resulted in higher DQE values and a significant improvement in clinical image quality. The visual grading agrees reasonably well with the DQE results; however, Imix 2000 received a better score than would be expected from the DQE measurements. Keywords: CCD Technique, Chest Imaging, Digital Radiography, DQE, Image Quality, Visual Grading Analysis
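
DQE values like those quoted above come from the standard definition DQE(f) = SNR_out(f)² / SNR_in(f)² = MTF(f)² / (q · NNPS(f)), where q is the incident photon fluence and NNPS the normalised noise power spectrum. A numeric sketch with invented numbers (chosen only to land near the 10-20% range reported here; none are measured Imix data):

```python
import numpy as np

f = np.array([0.5, 1.0, 2.0])                # spatial frequency, cycles/mm
mtf = np.array([0.85, 0.65, 0.35])           # invented presampled MTF at f
nnps = np.array([2.6e-5, 2.4e-5, 2.2e-5])    # invented normalised NPS, mm^2
q = 2.0e5                                    # invented fluence, photons/mm^2

# DQE(f) = MTF(f)^2 / (q * NNPS(f))
dqe = mtf ** 2 / (q * nnps)
print(dqe)
```

Because MTF falls faster with frequency than NNPS does, DQE typically decreases toward high spatial frequencies, so quoted "maximum DQE" values refer to the low-frequency end.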

  7. High-volume image quality assessment systems: tuning performance with an interactive data visualization tool

    NASA Astrophysics Data System (ADS)

    Bresnahan, Patricia A.; Pukinskis, Madeleine; Wiggins, Michael

    1999-03-01

    Image quality assessment systems differ greatly with respect to the number and types of images they need to evaluate, and in their overall architectures. Managers of these systems, however, all need to be able to tune and evaluate system performance, requirements often overlooked or under-designed during project planning. Performance tuning tools allow users to define acceptable quality standards for image features and attributes by adjusting parameter settings. Performance analysis tools allow users to evaluate and/or predict how well a system performs in a given parameter state. While image assessment algorithms are becoming quite sophisticated, duplicating or surpassing the human decision making process in their speed and reliability, they often require a greater investment in 'training' or fine tuning of parameters in order to achieve optimum performance. This process may involve the analysis of hundreds or thousands of images, generating a large database of files and statistics that can be difficult to sort through and interpret. Compounding the difficulty is the fact that personnel charged with tuning and maintaining the production system may not have the statistical or analytical background required for the task. Meanwhile, hardware innovations have greatly increased the volume of images that can be handled in a given time frame, magnifying the consequences of running a production site with an inadequately tuned system. In this paper, some general requirements for a performance evaluation and tuning data visualization system are discussed. A custom engineered solution to the tuning and evaluation problem is then presented, developed within the context of a high volume image quality assessment, data entry, OCR, and image archival system. A key factor influencing the design of the system was the context-dependent definition of image quality, as perceived by a human interpreter. This led to the development of a five-level, hierarchical approach to image quality

  8. Effects of task and image properties on visual-attention deployment in image-quality assessment

    NASA Astrophysics Data System (ADS)

    Alers, Hani; Redi, Judith; Liu, Hantao; Heynderickx, Ingrid

    2015-03-01

    It is important to understand how humans view images and how their behavior is affected by changes in the properties of the viewed images and the task they are given, particularly the task of scoring the image quality (IQ). This is a complex behavior that holds great importance for the field of image-quality research. This work builds upon 4 years of research work spanning three databases studying image-viewing behavior. Using eye-tracking equipment, it was possible to collect information on human viewing behavior of different kinds of stimuli and under different experimental settings. This work performs a cross-analysis on the results from all these databases using state-of-the-art similarity measures. The results strongly show that asking the viewers to score the IQ significantly changes their viewing behavior. Also muting the color saturation seems to affect the saliency of the images. However, a change in IQ was not consistently found to modify visual attention deployment, neither under free looking nor during scoring. These results are helpful in gaining a better understanding of image viewing behavior under different conditions. They also have important implications on work that collects subjective image-quality scores from human observers.

  9. No-reference image quality assessment in the spatial domain.

    PubMed

    Mittal, Anish; Moorthy, Anush Krishna; Bovik, Alan Conrad

    2012-12-01

    We propose a natural scene statistic-based distortion-generic blind/no-reference (NR) image quality assessment (IQA) model that operates in the spatial domain. The new model, dubbed blind/referenceless image spatial quality evaluator (BRISQUE) does not compute distortion-specific features, such as ringing, blur, or blocking, but instead uses scene statistics of locally normalized luminance coefficients to quantify possible losses of "naturalness" in the image due to the presence of distortions, thereby leading to a holistic measure of quality. The underlying features used derive from the empirical distribution of locally normalized luminances and products of locally normalized luminances under a spatial natural scene statistic model. No transformation to another coordinate frame (DCT, wavelet, etc.) is required, distinguishing it from prior NR IQA approaches. Despite its simplicity, we are able to show that BRISQUE is statistically better than the full-reference peak signal-to-noise ratio and the structural similarity index, and is highly competitive with respect to all present-day distortion-generic NR IQA algorithms. BRISQUE has very low computational complexity, making it well suited for real time applications. BRISQUE features may be used for distortion-identification as well. To illustrate a new practical application of BRISQUE, we describe how a nonblind image denoising algorithm can be augmented with BRISQUE in order to perform blind image denoising. Results show that BRISQUE augmentation leads to performance improvements over state-of-the-art methods. A software release of BRISQUE is available online: http://live.ece.utexas.edu/research/quality/BRISQUE_release.zip for public use and evaluation. PMID:22910118
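The locally normalized luminances that BRISQUE builds its features on, commonly called mean-subtracted contrast-normalized (MSCN) coefficients, can be sketched as follows. This is a minimal illustration using `scipy.ndimage.gaussian_filter`; the window width and stabilizing constant here are illustrative, not taken from the BRISQUE release.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mscn(image, sigma=7 / 6, c=1.0):
    """Mean-subtracted contrast-normalized (MSCN) coefficients.

    Each pixel is normalized by a local Gaussian-weighted mean and
    standard deviation: (I - mu) / (sigma_local + c). For natural,
    undistorted images these coefficients are close to unit-variance
    Gaussian; distortions change that distribution.
    """
    image = image.astype(np.float64)
    mu = gaussian_filter(image, sigma)                     # local mean
    var = gaussian_filter(image * image, sigma) - mu * mu  # local variance
    sigma_map = np.sqrt(np.maximum(var, 0.0))              # local std (clamped)
    return (image - mu) / (sigma_map + c)
```

On a perfectly flat image the coefficients are zero everywhere, since there is no local structure to normalize.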

  10. TL dosimetry for quality control of CR mammography imaging systems

    NASA Astrophysics Data System (ADS)

    Gaona, E.; Nieto, J. A.; Góngora, J. A. I. D.; Arreola, M.; Enríquez, J. G. F.

    The aim of this work is to estimate the average glandular dose with thermoluminescent (TL) dosimetry and to compare it with image quality in computed radiography (CR) mammography. For dose measurement, the Food and Drug Administration (FDA) and the American College of Radiology (ACR) use a phantom, so that dose and image quality are assessed with the same test object. Mammography is a radiological imaging technique used to visualize early biological manifestations of breast cancer. Digital systems have two types of image-capturing devices: full-field digital mammography (FFDM) and CR mammography. In Mexico, there are several CR mammography systems in clinical use, but only one system has been approved for use by the FDA. CR mammography uses a photostimulable phosphor (PSP) detector system. Most CR plates are made of 85% BaFBr and 15% BaFI doped with europium (Eu), commonly called barium fluorohalide. We carried out an exploratory survey of six CR mammography units from three different manufacturers and six dedicated X-ray mammography units with fully automatic exposure. The results show that three CR mammography units (50%) deliver a dose greater than 3.0 mGy without demonstrating improved image quality. The differences between dose averages from the TLD system and a dosimeter with ionization chamber are less than 10%. The TLD system is a good option for average glandular dose measurement for X-rays with the HVL (0.35-0.38 mmAl) and kVp (24-26) used in quality control procedures with the ACR Mammography Accreditation Phantom.

  11. Image Quality Analysis of Various Gastrointestinal Endoscopes: Why Image Quality Is a Prerequisite for Proper Diagnostic and Therapeutic Endoscopy

    PubMed Central

    Ko, Weon Jin; An, Pyeong; Ko, Kwang Hyun; Hahm, Ki Baik; Hong, Sung Pyo

    2015-01-01

    Arising from human curiosity in terms of the desire to look within the human body, endoscopy has undergone significant advances in modern medicine. Direct visualization of the gastrointestinal (GI) tract by traditional endoscopy was first introduced over 50 years ago, after which fairly rapid advancement from rigid esophagogastric scopes to flexible scopes and high-definition videoscopes has occurred. In an effort towards early detection of precancerous lesions in the GI tract, several high-technology imaging scopes have been developed, including narrow band imaging, autofluorescence imaging, magnified endoscopy, and confocal microendoscopy. However, these modern developments have resulted in fundamental imaging technology being skewed towards red-green-blue, and this has obscured the advantages of other endoscopic techniques. In this review article, we describe the importance of image quality analysis, using a survey to examine the diversity of endoscope system selection, in order to better achieve diagnostic and therapeutic goals. The ultimate aims can be achieved through the adoption of modern endoscopy systems that deliver high image quality. PMID:26473119

  12. Optimizing 3D image quality and performance for stereoscopic gaming

    NASA Astrophysics Data System (ADS)

    Flack, Julien; Sanderson, Hugh; Pegg, Steven; Kwok, Simon; Paterson, Daniel

    2009-02-01

    The successful introduction of stereoscopic TV systems, such as Samsung's 3D Ready Plasma, requires high quality 3D content to be commercially available to the consumer. Console and PC games provide the most readily accessible source of high quality 3D content. This paper describes innovative developments in a generic, PC-based game driver architecture that addresses the two key issues affecting 3D gaming: quality and speed. At the heart of the quality issue are the same considerations that studios face producing stereoscopic renders from CG movies: how best to perform the mapping from a geometric CG environment into the stereoscopic display volume. The major difference being that for game drivers this mapping cannot be choreographed by hand but must be automatically calculated in real-time without significant impact on performance. Performance is a critical issue when dealing with gaming. Stereoscopic gaming has traditionally meant rendering the scene twice with the associated performance overhead. An alternative approach is to render the scene from one virtual camera position and use information from the z-buffer to generate a stereo pair using Depth-Image-Based Rendering (DIBR). We analyze this trade-off in more detail and provide some results relating to both 3D image quality and render performance.
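The alternative rendering path described above (one rendered view plus the z-buffer, warped into a stereo pair) can be sketched as a toy DIBR pass. This is an illustration of the idea only, not the driver in the paper: occlusion ordering is ignored and disocclusion holes are filled from the nearest valid pixel to the left, both gross simplifications.

```python
import numpy as np

def dibr_right_view(image, depth, max_disparity=8):
    """Warp one rendered view into a second eye's view.

    Each pixel is shifted horizontally by a disparity derived from the
    depth buffer (depth in [0, 1], with 1 = near, so nearer objects get
    larger parallax). Holes left by disocclusion are filled from the
    nearest filled pixel to the left.
    """
    h, w = depth.shape
    out = np.zeros_like(image)
    filled = np.zeros((h, w), dtype=bool)
    disparity = np.round(depth * max_disparity).astype(int)
    for y in range(h):
        for x in range(w):
            nx = x - disparity[y, x]        # forward-warp to the new view
            if 0 <= nx < w:
                out[y, nx] = image[y, x]
                filled[y, nx] = True
        for x in range(1, w):               # naive hole filling
            if not filled[y, x]:
                out[y, x] = out[y, x - 1]
    return out
```

With a flat depth buffer the disparity is zero everywhere and the warped view is identical to the input, which is a handy degenerate-case check.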

  13. Study on classification of pork quality using hyperspectral imaging technique

    NASA Astrophysics Data System (ADS)

    Zeng, Shan; Bai, Jun; Wang, Haibin

    2015-12-01

    The discrimination of chilled meat, thawed meat, and spoiled meat by hyperspectral imaging, including problems such as the selection of feature wavelengths, was investigated. First, based on hyperspectral image data of the tested pork samples in the 400-1000 nm range, 30 important wavelengths were selected from 753 wavelengths by a K-medoids clustering algorithm based on manifold distance, and 8 feature wavelengths (454.4, 477.5, 529.3, 546.8, 568.4, 580.3, 589.9 and 781.2 nm) were then selected from these based on their discrimination value. Then 8 texture features of each image at the 8 feature wavelengths were extracted by two-dimensional Gabor wavelet transform as pork quality features. Finally, a pork quality classification model was built using the fuzzy C-means clustering algorithm. Through the feature-wavelength extraction experiment, we found that although the hyperspectral images of adjacent bands have a strong linear correlation, they show a significant nonlinear manifold relationship across the entire band. The K-medoids clustering algorithm based on manifold distance used in this paper for selecting the feature wavelengths is therefore more reasonable than traditional principal component analysis (PCA). The classification results show that hyperspectral imaging technology can distinguish among chilled meat, thawed meat, and spoiled meat accurately.

  14. Automated quality assurance for image-guided radiation therapy.

    PubMed

    Schreibmann, Eduard; Elder, Eric; Fox, Tim

    2009-01-01

    The use of image-guided patient positioning requires fast and reliable Quality Assurance (QA) methods to ensure the megavoltage (MV) treatment beam coincides with the integrated kilovoltage (kV) or volumetric cone-beam CT (CBCT) imaging and guidance systems. Current QA protocol is based on visually observing deviations of certain features in acquired kV in-room treatment images such as markers, distances, or HU values from phantom specifications. This is a time-consuming and subjective task because these features are identified by human operators. The method implemented in this study automated an IGRT QA protocol by using specific image processing algorithms that rigorously detected phantom features and performed all measurements involved in a classical QA protocol. The algorithm was tested on four different IGRT QA phantoms. Image analysis algorithms were able to detect QA features with the same accuracy as the manual approach but significantly faster. All described tests were performed in a single procedure, with acquisition of the images taking approximately 5 minutes, and the automated software analysis taking less than 1 minute. The study showed that the automated image analysis based procedure may be used as a daily QA procedure because it is completely automated and uses a single phantom setup. PMID:19223842

  15. DES exposure checker: Dark Energy Survey image quality control crowdsourcer

    NASA Astrophysics Data System (ADS)

    Melchior, Peter; Sheldon, Erin; Drlica-Wagner, Alex; Rykoff, Eli S.

    2015-11-01

    DES exposure checker renders science-grade images directly to a web browser and allows users to mark problematic features from a set of predefined classes, thus allowing image quality control for the Dark Energy Survey to be crowdsourced through its web application. Users can also generate custom labels to help identify previously unknown problem classes; generated reports are fed back to hardware and software experts to help mitigate and eliminate recognized issues. These problem reports allow rapid correction of artifacts that otherwise may be too subtle or infrequent to be recognized.

  16. Metal artifact reduction and image quality evaluation of lumbar spine CT images using metal sinogram segmentation.

    PubMed

    Kaewlek, Titipong; Koolpiruck, Diew; Thongvigitmanee, Saowapak; Mongkolsuk, Manus; Thammakittiphan, Sastrawut; Tritrakarn, Siri-on; Chiewvit, Pipat

    2015-01-01

    Metal artifacts often appear in the images of computed tomography (CT) imaging. In the case of lumbar spine CT images, artifacts disturb the images of critical organs. These artifacts can affect the diagnosis, treatment, and follow up care of the patient. One approach to metal artifact reduction is the sinogram completion method. A mixed-variable thresholding (MixVT) technique to identify the suitable metal sinogram is proposed. This technique consists of four steps: 1) identify the metal objects in the image by using k-mean clustering with the soft cluster assignment, 2) transform the image by separating it into two sinograms, one of which is the sinogram of the metal object, with the surrounding tissue shown in the second sinogram. The boundary of the metal sinogram is then found by the MixVT technique, 3) estimate the new value of the missing data in the metal sinogram by linear interpolation from the surrounding tissue sinogram, 4) reconstruct a modified sinogram by using filtered back-projection and complete the image by adding back the image of the metal object into the reconstructed image to form the complete image. The quantitative and clinical image quality evaluation of our proposed technique demonstrated a significant improvement in image clarity and detail, which enhances the effectiveness of diagnosis and treatment. PMID:26756404
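Step 3 of the pipeline above, estimating the missing metal-trace data by linear interpolation from the surrounding-tissue sinogram, can be sketched as follows. This is a minimal illustration of sinogram completion, not the authors' MixVT implementation.

```python
import numpy as np

def complete_sinogram(sinogram, metal_mask):
    """Fill metal-trace bins of a sinogram by linear interpolation.

    sinogram:   (n_angles, n_bins) array of projection data.
    metal_mask: boolean array of the same shape, True where the metal
                object projects (the "missing" data).
    For each projection angle, the masked bins are replaced by values
    interpolated from the neighboring unmasked bins.
    """
    out = sinogram.astype(float).copy()
    bins = np.arange(sinogram.shape[1])
    for i in range(sinogram.shape[0]):
        bad = metal_mask[i]
        if bad.any() and (~bad).any():
            out[i, bad] = np.interp(bins[bad], bins[~bad], sinogram[i, ~bad])
    return out
```

The completed sinogram would then be reconstructed with filtered back-projection and the metal object's image added back, as in step 4.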

  17. A quality assurance framework for the fully automated and objective evaluation of image quality in cone-beam computed tomography

    SciTech Connect

    Steiding, Christian; Kolditz, Daniel; Kalender, Willi A.

    2014-03-15

    Purpose: Thousands of cone-beam computed tomography (CBCT) scanners for vascular, maxillofacial, neurological, and body imaging are in clinical use today, but there is no consensus on uniform acceptance and constancy testing for image quality (IQ) and dose yet. The authors developed a quality assurance (QA) framework for fully automated and time-efficient performance evaluation of these systems. In addition, the dependence of objective Fourier-based IQ metrics on direction and position in 3D volumes was investigated for CBCT. Methods: The authors designed a dedicated QA phantom 10 cm in length consisting of five compartments, each with a diameter of 10 cm, and an optional extension ring 16 cm in diameter. A homogeneous section of water-equivalent material allows measuring CT value accuracy, image noise and uniformity, and multidimensional global and local noise power spectra (NPS). For the quantitative determination of 3D high-contrast spatial resolution, the modulation transfer function (MTF) of centrally and peripherally positioned aluminum spheres was computed from edge profiles. Additional in-plane and axial resolution patterns were used to assess resolution qualitatively. The characterization of low-contrast detectability as well as CT value linearity and artifact behavior was tested by utilizing sections with soft-tissue-equivalent and metallic inserts. For an automated QA procedure, a phantom detection algorithm was implemented. All tests used in the dedicated QA program were initially verified in simulation studies and experimentally confirmed on a clinical dental CBCT system. Results: The automated IQ evaluation of volume data sets of the dental CBCT system was achieved with the proposed phantom requiring only one scan for the determination of all desired parameters. Typically, less than 5 min were needed for phantom set-up, scanning, and data analysis. 
Quantitative evaluation of system performance over time by comparison to previous examinations was also

  18. Measuring image quality performance on image versions saved with different file format and compression ratio

    NASA Astrophysics Data System (ADS)

    Mitjà, Carles; Escofet, Jaume; Bover, Toni

    2012-06-01

    Digitization of existing documents containing images is an important body of work for many archives, ranging from individuals to institutional organizations. The methods and file formats used in this digitization are usually a trade-off between budget, file volume size, and image quality, though not necessarily in this order. The use of the most common and standardized file formats, JPEG and TIFF, requires the operator to choose a compression ratio that affects both the final file size and the quality of the resulting image version. The image quality achieved by a system can be evaluated by means of several measures and methods, the Modulation Transfer Function (MTF) being one of the most used. The methods employed by compression algorithms affect the two basic features of image content, edges and textures, in different ways. These basic features are also differently affected by the amount of noise generated at the digitization stage. Therefore, the target used in the measurement should be related to the features usually present in general imaging. This work presents a comparison of results obtained by measuring the MTF of images taken with a professional camera system and saved in several file formats and compression ratios. To meet the needs stated above, the MTF measurement has been done by two separate methods, using the slanted-edge and dead-leaves targets respectively. The measurement results are shown and compared in relation to the respective file volume sizes.
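The core computation behind the slanted-edge method can be sketched in a few lines: differentiate the (supersampled) edge-spread function into a line-spread function, then take the Fourier magnitude normalized at DC. The windowing choice here is illustrative; production implementations follow the ISO 12233 procedure in more detail.

```python
import numpy as np

def mtf_from_esf(esf):
    """MTF from a 1-D edge-spread function (ESF).

    The derivative of the ESF is the line-spread function (LSF); the
    modulus of its Fourier transform, normalized to 1 at zero frequency,
    is the MTF.
    """
    lsf = np.diff(esf)
    lsf = lsf * np.hanning(lsf.size)   # taper to reduce truncation ripple
    spectrum = np.abs(np.fft.rfft(lsf))
    return spectrum / spectrum[0]
```

For an ideal step edge the LSF is a single impulse, so the MTF is flat at 1.0 across all frequencies; any blur from optics or compression pulls the high-frequency end down.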

  19. Exploring V1 by modeling the perceptual quality of images.

    PubMed

    Zhang, Fan; Jiang, Wenfei; Autrusseau, Florent; Lin, Weisi

    2014-01-01

    We propose an image quality model based on phase and amplitude differences between a reference and a distorted image. The proposed model is motivated by the fact that polar representations can separate visual information in a more independent and efficient manner than Cartesian representations in the primary visual cortex (V1). We subsequently estimate the model parameters from a large subjective data set using maximum likelihood methods. By comparing the various model hypotheses on the functional form about the phase and amplitude, we find that: (a) discrimination of visual orientation is important for quality assessment and yet a coarse level of such discrimination seems sufficient; and (b) a product-based amplitude-phase combination before pooling is effective, suggesting an interesting viewpoint about the functional structure of the simple cells and complex cells in V1. PMID:24464165

  20. Image quality vs. sensitivity: fundamental sensor system engineering

    NASA Astrophysics Data System (ADS)

    Schueler, Carl F.

    2008-08-01

    This paper focuses on the fundamental system engineering tradeoff driving almost all remote sensing design efforts, affecting complexity, cost, performance, schedule, and risk: image quality vs. sensitivity. This single trade encompasses every aspect of performance, including radiometric accuracy, dynamic range and precision, as well as spatial, spectral, and temporal coverage and resolution. This single trade also encompasses every aspect of design, including mass, dimensions, power, orbit selection, spacecraft interface, sensor and spacecraft functional trades, pointing or scanning architecture, sensor architecture (e.g., field-of-view, optical form, aperture, f/#, material properties), electronics, mechanical and thermal properties. The relationship between image quality and sensitivity is introduced based on the concepts of modulation transfer function (MTF) and signal-to-noise ratio (SNR) with examples to illustrate the balance to be achieved by the system architect to optimize cost, complexity, performance and risk relative to end-user requirements.

  1. Image-inpainting and quality-guided phase unwrapping algorithm.

    PubMed

    Meng, Lei; Fang, Suping; Yang, Pengcheng; Wang, Leijie; Komori, Masaharu; Kubo, Aizoh

    2012-05-01

    For the wrapped phase map with regional abnormal fringes, a new phase unwrapping algorithm that combines the image-inpainting theory and the quality-guided phase unwrapping algorithm is proposed. First, by applying a threshold to the modulation map, the valid region (i.e., the interference region) is divided into the doubtful region (called the target region during the inpainting period) and the reasonable one (the source region). The wrapped phase of the doubtful region is thought to be unreliable, and the data are abandoned temporarily. Using the region-filling image-inpainting method, the blank target region is filled with new data, while nothing is changed in the source region. A new wrapped phase map is generated, and then it is unwrapped with the quality-guided phase unwrapping algorithm. Finally, a postprocessing operation is proposed for the final result. Experimental results have shown that the performance of the proposed algorithm is effective. PMID:22614426
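The quality-guided stage of the algorithm can be sketched as a best-first flood fill: start at the highest-quality pixel and always unwrap the best-quality pixel on the frontier relative to an already-unwrapped neighbor. This is a minimal illustration only; the inpainting stage and the postprocessing described in the abstract are omitted.

```python
import heapq
import numpy as np

def quality_guided_unwrap(wrapped, quality):
    """Quality-guided path-following phase unwrapping.

    wrapped: 2-D phase map in (-pi, pi]; quality: same-shaped map where
    larger means more reliable. Pixels are processed in descending
    quality order, removing integer multiples of 2*pi relative to an
    already-unwrapped neighbor.
    """
    h, w = wrapped.shape
    unwrapped = wrapped.astype(float).copy()
    done = np.zeros((h, w), dtype=bool)
    sy, sx = np.unravel_index(np.argmax(quality), quality.shape)
    done[sy, sx] = True
    heap = []  # entries: (-quality, y, x, reference phase from the pushing neighbor)

    def push_neighbors(y, x):
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not done[ny, nx]:
                heapq.heappush(heap, (-quality[ny, nx], ny, nx, unwrapped[y, x]))

    push_neighbors(sy, sx)
    while heap:
        _, y, x, ref = heapq.heappop(heap)
        if done[y, x]:
            continue
        cycles = np.round((wrapped[y, x] - ref) / (2 * np.pi))
        unwrapped[y, x] = wrapped[y, x] - 2 * np.pi * cycles
        done[y, x] = True
        push_neighbors(y, x)
    return unwrapped
```

On a smooth phase ramp (neighbor-to-neighbor differences below pi) this recovers the original surface exactly, up to the 2*pi offset of the seed pixel.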

  2. High image quality sub 100 picosecond gated framing camera development

    SciTech Connect

    Price, R.H.; Wiedwald, J.D.

    1983-11-17

    A major challenge for laser fusion is the study of the symmetry and hydrodynamic stability of imploding fuel capsules. Framed x-radiographs of 10-100 ps duration, excellent image quality, minimum geometrical distortion (< 1%), dynamic range greater than 1000, and more than 200 x 200 pixels are required for this application. Recent progress on a gated proximity focused intensifier which meets these requirements is presented.

  3. Incorporating detection tasks into the assessment of CT image quality

    NASA Astrophysics Data System (ADS)

    Scalzetti, E. M.; Huda, W.; Ogden, K. M.; Khan, M.; Roskopf, M. L.; Ogden, D.

    2006-03-01

    The purpose of this study was to compare traditional and task-dependent assessments of CT image quality. Chest CT examinations were obtained with a standard protocol for subjects participating in a lung cancer-screening project. Images were selected for patients whose weight ranged from 45 kg to 159 kg. Six ABR-certified radiologists subjectively ranked these images using a traditional six-point ranking scheme that ranged from 1 (inadequate) to 6 (excellent). Three subtle diagnostic tasks were identified: (1) a lung section containing a sub-centimeter nodule of ground-glass opacity in an upper lung; (2) a mediastinal section with a lymph node of soft tissue density in the mediastinum; (3) a liver section with a rounded low-attenuation lesion in the liver periphery. Each observer was asked to estimate the probability of detecting each type of lesion in the appropriate CT section using a six-point scale ranging from 1 (< 10%) to 6 (> 90%). Traditional and task-dependent measures of image quality were plotted as a function of patient weight. For the lung section, task-dependent evaluations were very similar to those obtained using the traditional scoring scheme, but with larger inter-observer differences. Task-dependent evaluations for the mediastinal section showed no obvious trend with subject weight, whereas the traditional score decreased from ~4.9 for smaller subjects to ~3.3 for the larger subjects. Task-dependent evaluations for the liver section showed a decreasing trend from ~4.1 for the smaller subjects to ~1.9 for the larger subjects, whereas the traditional evaluation had a markedly narrower range of scores. A task-dependent method of assessing CT image quality can be implemented with relative ease, and is likely to be more meaningful in the clinical setting.

  4. Radiometric Quality Evaluation of INSAT-3D Imager Data

    NASA Astrophysics Data System (ADS)

    Prakash, S.; Jindal, D.; Badal, N.; Kartikeyan, B.; Gopala Krishna, B.

    2014-11-01

    INSAT-3D is an advanced meteorological satellite of ISRO which acquires imagery in optical and infra-red (IR) channels for the study of weather dynamics in the Indian sub-continent region. In this paper, the methodology of radiometric quality evaluation for Level-1 products of the Imager, one of the payloads onboard INSAT-3D, is described. Firstly, the overall visual quality of the scene, in terms of dynamic range, edge sharpness or modulation transfer function (MTF), and the presence of striping and other image artefacts, is assessed. Uniform targets in desert and sea regions are identified, for which detailed radiometric performance evaluation for the IR channels is carried out. The mean brightness temperature (BT) of the targets is computed and validated against independently generated radiometric references. Further, diurnal/seasonal trends in target BT values and radiometric uncertainty or sensor noise are studied. Results of radiometric quality evaluation over a duration of eight months (January to August 2014) and a comparison of radiometric consistency pre/post yaw flip of the satellite are presented. Radiometric analysis indicates that INSAT-3D images have high contrast (MTF > 0.2) and low striping effects. A bias of <4 K is observed in the brightness temperature values of the TIR-1 channel measured during January-August 2014, indicating consistent radiometric calibration. Diurnal and seasonal analysis shows that the noise equivalent differential temperature (NEdT) for the IR channels is consistent and well within specifications.

  5. Beef quality parameters estimation using ultrasound and color images

    PubMed Central

    2015-01-01

    Background Beef quality measurement is a complex task with high economic impact. There is high interest in obtaining an automatic estimation of quality parameters in live cattle or post mortem. In this paper we set out to obtain beef quality estimates from the analysis of ultrasound (in vivo) and color images (post mortem), with the measurement of various parameters related to tenderness and amount of meat: rib eye area, percentage of intramuscular fat, and backfat thickness or subcutaneous fat. Proposal An algorithm based on curve evolution is implemented to calculate the rib eye area. The backfat thickness is estimated from the profile of distances between two curves that delimit the steak and the rib eye, previously detected. A model based on Support Vector Regression (SVR) is trained to estimate the intramuscular fat percentage. A series of features extracted from a region of interest, previously detected in both ultrasound and color images, was proposed. In all cases, a complete evaluation was performed with different databases including: color and ultrasound images acquired by a beef industry expert, intramuscular fat estimation obtained by an expert using commercial software, and chemical analysis. Conclusions The proposed algorithms show good results in calculating the rib eye area and the backfat thickness measure and profile. They are also promising in predicting the percentage of intramuscular fat. PMID:25734452
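The regression step, mapping image features to an intramuscular fat percentage, can be illustrated with a self-contained stand-in. The paper trains an SVR; the sketch below uses kernel ridge regression with an RBF kernel instead (plainly a substitute, chosen to avoid external dependencies), which plays the same role of fitting a nonlinear function from feature vectors to a scalar target. All names and parameter values here are illustrative.

```python
import numpy as np

def rbf_kernel(a, b, gamma=2.0):
    """Gaussian (RBF) kernel matrix from pairwise squared distances."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def fit_krr(x, y, gamma=2.0, lam=1e-3):
    """Fit kernel ridge regression: alpha = (K + lam*I)^-1 y.

    x: (n_samples, n_features) training features (e.g. texture features
    from the region of interest); y: (n_samples,) targets (e.g. fat %).
    Returns a predictor mapping query features to estimated targets.
    """
    k = rbf_kernel(x, x, gamma)
    alpha = np.linalg.solve(k + lam * np.eye(len(x)), y)
    return lambda q: rbf_kernel(q, x, gamma) @ alpha
```

An SVR differs mainly in its epsilon-insensitive loss and sparse support vectors; the kernel machinery is the same.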

  6. How much image noise can be added in cardiac x-ray imaging without loss in perceived image quality?

    NASA Astrophysics Data System (ADS)

    Gislason-Lee, Amber J.; Kumcu, Asli; Kengyelics, Stephen M.; Brettle, David S.; Treadgold, Laura A.; Sivananthan, Mohan; Davies, Andrew G.

    2015-09-01

    Cardiologists use x-ray image sequences of the moving heart acquired in real-time to diagnose and treat cardiac patients. The amount of radiation used is proportional to image quality; however, exposure to radiation is damaging to patients and personnel. The amount by which radiation dose can be reduced without compromising patient care was determined. For five patient image sequences, increments of computer-generated quantum noise (white + colored) were added to the images, frame by frame using pixel-to-pixel addition, to simulate corresponding increments of dose reduction. The noise adding software was calibrated for settings used in cardiac procedures, and validated using standard objective and subjective image quality measurements. The degraded images were viewed next to corresponding original (not degraded) images in a two-alternative-forced-choice staircase psychophysics experiment. Seven cardiologists and five radiographers selected their preferred image based on visualization of the coronary arteries. The point of subjective equality, i.e., level of degradation where the observer could not perceive a difference between the original and degraded images, was calculated; for all patients the median was 33%±15% dose reduction. This demonstrates that a 33%±15% increase in image noise is feasible without being perceived, indicating potential for 33%±15% dose reduction without compromising patient care.
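The simulation principle above (quantum-noise variance scales inversely with dose, so a dose reduction can be mimicked by pixel-wise addition of zero-mean noise) can be sketched as follows. Plain Gaussian white noise is used here for simplicity, whereas the study's calibrated software added white plus colored noise.

```python
import numpy as np

def simulate_dose_reduction(image, sigma_orig, dose_fraction, rng=None):
    """Degrade an image frame to mimic acquisition at a lower dose.

    Quantum-noise variance scales as 1/dose. An acquisition at
    `dose_fraction` of the original dose therefore carries extra
    variance sigma_orig**2 * (1/dose_fraction - 1) on top of the noise
    already present in `image`; that much zero-mean noise is added
    pixel by pixel.
    """
    rng = np.random.default_rng() if rng is None else rng
    extra_var = sigma_orig ** 2 * (1.0 / dose_fraction - 1.0)
    return image + rng.normal(0.0, np.sqrt(extra_var), image.shape)
```

For example, simulating the study's ~33% dose reduction corresponds to `dose_fraction=0.67`; `dose_fraction=1.0` leaves the frame unchanged.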

  7. Image quality, threshold contrast and mean glandular dose in CR mammography

    NASA Astrophysics Data System (ADS)

    Jakubiak, R. R.; Gamba, H. R.; Neves, E. B.; Peixoto, J. E.

    2013-09-01

    In many countries, computed radiography (CR) systems represent the majority of equipment used in digital mammography. This study presents a method for optimizing image quality and dose in CR mammography of patients with breast thicknesses between 45 and 75 mm. Initially, clinical images of 67 patients (group 1) were analyzed by three experienced radiologists, reporting about anatomical structures, noise and contrast in low and high pixel value areas, and image sharpness and contrast. Exposure parameters (kV, mAs and target/filter combination) used in the examinations of these patients were reproduced to determine the contrast-to-noise ratio (CNR) and mean glandular dose (MGD). The parameters were also used to radiograph a CDMAM (version 3.4) phantom (Artinis Medical Systems, The Netherlands) for image threshold contrast evaluation. After that, different breast thicknesses were simulated with polymethylmethacrylate layers and various sets of exposure parameters were used in order to determine optimal radiographic parameters. For each simulated breast thickness, optimal beam quality was defined as giving a target CNR to reach the threshold contrast of CDMAM images for acceptable MGD. These results were used for adjustments in the automatic exposure control (AEC) by the maintenance team. Using optimized exposure parameters, clinical images of 63 patients (group 2) were evaluated as described above. Threshold contrast, CNR and MGD for such exposure parameters were also determined. Results showed that the proposed optimization method was effective for all breast thicknesses studied in phantoms. The best result was found for breasts of 75 mm. While in group 1 there was no detection of the 0.1 mm critical diameter detail with threshold contrast below 23%, after the optimization, detection occurred in 47.6% of the images. There was also an average MGD reduction of 7.5%. 
The clinical image quality criteria were met in 91.7% of the images for all breast thicknesses evaluated in both groups.
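
The contrast-to-noise ratio used as the optimization target above reduces to a simple ratio of ROI statistics. A minimal sketch (the ROI pixel values below are illustrative, not from the study):

```python
from statistics import mean, stdev

def cnr(signal_roi, background_roi):
    """Contrast-to-noise ratio: (mean signal - mean background) / background noise."""
    return (mean(signal_roi) - mean(background_roi)) / stdev(background_roi)

# Illustrative pixel values from a lesion ROI and a background ROI
signal = [210, 205, 198, 215, 202]
background = [100, 104, 96, 101, 99]
print(round(cnr(signal, background), 2))  # prints 36.36
```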

  8. Image quality, threshold contrast and mean glandular dose in CR mammography.

    PubMed

    Jakubiak, R R; Gamba, H R; Neves, E B; Peixoto, J E

    2013-09-21

In many countries, computed radiography (CR) systems represent the majority of equipment used in digital mammography. This study presents a method for optimizing image quality and dose in CR mammography of patients with breast thicknesses between 45 and 75 mm. Initially, clinical images of 67 patients (group 1) were analyzed by three experienced radiologists, reporting about anatomical structures, noise and contrast in low and high pixel value areas, and image sharpness and contrast. Exposure parameters (kV, mAs and target/filter combination) used in the examinations of these patients were reproduced to determine the contrast-to-noise ratio (CNR) and mean glandular dose (MGD). The parameters were also used to radiograph a CDMAM (version 3.4) phantom (Artinis Medical Systems, The Netherlands) for image threshold contrast evaluation. After that, different breast thicknesses were simulated with polymethylmethacrylate layers and various sets of exposure parameters were used in order to determine optimal radiographic parameters. For each simulated breast thickness, optimal beam quality was defined as giving a target CNR to reach the threshold contrast of CDMAM images for acceptable MGD. These results were used for adjustments in the automatic exposure control (AEC) by the maintenance team. Using optimized exposure parameters, clinical images of 63 patients (group 2) were evaluated as described above. Threshold contrast, CNR and MGD for such exposure parameters were also determined. Results showed that the proposed optimization method was effective for all breast thicknesses studied in phantoms. The best result was found for breasts of 75 mm. While in group 1 there was no detection of the 0.1 mm critical diameter detail with threshold contrast below 23%, after the optimization, detection occurred in 47.6% of the images. There was also an average MGD reduction of 7.5%. The clinical image quality criteria were met in 91.7% of the images for all breast thicknesses evaluated in both groups.

  9. A virtual image chain for perceived image quality of medical display

    NASA Astrophysics Data System (ADS)

    Marchessoux, Cédric; Jung, Jürgen

    2006-03-01

This paper describes a virtual image chain for medical display (project VICTOR: granted in the 5th framework program by the European commission). The chain starts from raw data of an image digitizer (CR, DR) or synthetic patterns and covers image enhancement (MUSICA by Agfa) and both display possibilities, hardcopy (film on viewing box) and softcopy (monitor). A key feature of the chain is a complete image-wise approach. A first prototype is implemented in an object-oriented software platform. The display chain consists of several modules. Raw images are either taken from scanners (CR-DR) or from a pattern generator, in which the characteristics of DR-CR systems are introduced by their MTF and their dose-dependent Poisson noise. The image undergoes image enhancement and is then passed to the display stage. For softcopy display, color and monochrome monitors are used in the simulation. The image is down-sampled. The non-linear response of a color monitor is taken into account by the GOG or S-curve model, whereas the Standard Gray-Scale-Display-Function (DICOM) is used for monochrome display. The MTF of the monitor is applied to the image in intensity levels. For hardcopy display, the combination of film, printer, lightbox and viewing condition is modeled. The image is up-sampled and the DICOM-GSDF or a Kanamori Look-Up-Table is applied. An anisotropic model for the MTF of the printer is applied to the image in intensity levels. The density-dependent color (XYZ) of the hardcopy film is introduced by Look-Up-Tables. Finally, a Human Visual System Model is applied to the intensity images (XYZ in terms of cd/m2) in order to eliminate non-visible differences. Comparison leads to visible differences, which are quantified by higher-order image quality metrics. A specific image viewer is used for the visualization of the intensity image and the visual difference maps.
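
The DICOM Grayscale Standard Display Function used for the monochrome softcopy path maps a just-noticeable-difference index j (1 to 1023) to luminance through a published rational polynomial in ln j. A direct transcription might look like the following (constants taken from DICOM PS3.14; treat this as a sketch, not a validated calibration routine):

```python
import math

def gsdf_luminance(j):
    """DICOM PS3.14 Grayscale Standard Display Function:
    luminance in cd/m^2 for a JND index j in [1, 1023]."""
    x = math.log(j)
    num = (-1.3011877 + 8.0242636e-2 * x + 1.3646699e-1 * x**2
           - 2.5468404e-2 * x**3 + 1.3635334e-3 * x**4)
    den = (1.0 - 2.5840191e-2 * x - 1.0320229e-1 * x**2
           + 2.8745620e-2 * x**3 - 3.1978977e-3 * x**4 + 1.2992634e-4 * x**5)
    return 10.0 ** (num / den)

# Endpoints of the standard curve: ~0.05 cd/m^2 at j=1, ~4000 cd/m^2 at j=1023
print(gsdf_luminance(1), gsdf_luminance(1023))
```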

  10. Pleiades-Hr Innovative Techniques for Radiometric Image Quality Commissioning

    NASA Astrophysics Data System (ADS)

    Blanchet, G.; Lebeque, L.; Fourest, S.; Latry, C.; Porez-Nadal, F.; Lacherade, S.; Thiebaut, C.

    2012-07-01

The first Pleiades-HR satellite, part of a constellation of two, was launched on December 17, 2011. This satellite produces high-resolution optical images. In order to achieve good image quality, Pleiades-HR first had to undergo an important 6-month commissioning phase. This phase consists of calibrating and assessing the radiometric and geometric image quality in order to offer the best images to end users. This new satellite has benefited from technology improvements in various fields which make it stand out from other Earth observation satellites. In particular, its best-in-class agility performance enables new calibration and assessment techniques. This paper is dedicated to presenting these innovative techniques, which have been tested for the first time during the Pleiades-HR radiometric commissioning. Radiometric activities concern compression, absolute calibration, detector normalization, refocusing operations, MTF (Modulation Transfer Function) assessment, signal-to-noise ratio (SNR) estimation, and tuning of the ground processing parameters. The radiometric performance of each activity is summarized in this paper.

  11. Image gathering and digital restoration for fidelity and visual quality

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O.; Alter-Gartenberg, Rachel; Rahman, Zia-Ur

    1991-01-01

The fidelity and resolution of the traditional Wiener restorations given in the prevalent digital processing literature can be significantly improved when the transformations between the continuous and discrete representations in image gathering and display are accounted for. However, the visual quality of these improved restorations is also more sensitive to the defects caused by aliasing artifacts, colored noise, and ringing near sharp edges. In this paper, these visual defects are characterized, and methods for suppressing them are presented. It is demonstrated how the visual quality of fidelity-maximized images can be improved when (1) the image-gathering system is specifically designed to enhance the performance of the image-restoration algorithm, and (2) the Wiener filter is combined with interactive Gaussian smoothing, synthetic high edge enhancement, and nonlinear tone-scale transformation. The nonlinear transformation is used primarily to enhance the spatial details that are often obscured when the normally wide dynamic range of natural radiance fields is compressed into the relatively narrow dynamic range of film and other displays.
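
The core of the Wiener restoration step can be shown in a toy one-dimensional form: divide out the blur transfer function in the frequency domain, regularized by a constant K that stands in for the noise-to-signal power ratio (here chosen by hand; a pure-Python DFT keeps the sketch self-contained):

```python
import cmath

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)).real / N
            for n in range(N)]

def wiener_restore(blurred, kernel, K=1e-4):
    """Frequency-domain Wiener filter: F_hat = conj(H) * G / (|H|^2 + K)."""
    G, H = dft(blurred), dft(kernel)
    F = [Hk.conjugate() * Gk / (abs(Hk) ** 2 + K) for Gk, Hk in zip(G, H)]
    return idft(F)

# Scene: two bright pixels; blur: circular 3-tap smoothing kernel
N = 16
scene = [0.0] * N
scene[5] = scene[6] = 1.0
kernel = [0.5, 0.25] + [0.0] * (N - 3) + [0.25]
blurred = idft([Gk * Hk for Gk, Hk in zip(dft(scene), dft(kernel))])
restored = wiener_restore(blurred, kernel)
print(max(abs(r - s) for r, s in zip(restored, scene)))  # small residual error
```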

  12. ECG-synchronized DSA exposure control: improved cervicothoracic image quality

    SciTech Connect

    Kelly, W.M.; Gould, R.; Norman, D.; Brant-Zawadzki, M.; Cox, L.

    1984-10-01

    An electrocardiogram (ECG)-synchronized x-ray exposure sequence was used to acquire digital subtraction angiographic (DSA) images during 13 arterial injection studies of the aortic arch or carotid bifurcations. These gated images were compared with matched ungated DSA images acquired using the same technical factors, contrast material volume, and patient positioning. Subjective assessments by five experienced observers of edge definition, vessel conspicuousness, and overall diagnostic quality showed overall preference for one of the two acquisition methods in 69% of cases studied. Of these, the ECG-synchronized exposure series were rated superior in 76%. These results, as well as the relatively simple and inexpensive modifications required, suggest that routine use of ECG exposure control can facilitate improved arterial DSA evaluations of suspected cervicothoracic vascular disease.

  13. Effects of characteristics of image quality in an immersive environment

    NASA Technical Reports Server (NTRS)

    Duh, Henry Been-Lirn; Lin, James J W.; Kenyon, Robert V.; Parker, Donald E.; Furness, Thomas A.

    2002-01-01

    Image quality issues such as field of view (FOV) and resolution are important for evaluating "presence" and simulator sickness (SS) in virtual environments (VEs). This research examined effects on postural stability of varying FOV, image resolution, and scene content in an immersive visual display. Two different scenes (a photograph of a fountain and a simple radial pattern) at two different resolutions were tested using six FOVs (30, 60, 90, 120, 150, and 180 deg.). Both postural stability, recorded by force plates, and subjective difficulty ratings varied as a function of FOV, scene content, and image resolution. Subjects exhibited more balance disturbance and reported more difficulty in maintaining posture in the wide-FOV, high-resolution, and natural scene conditions.

  14. Objective assessment of image quality. IV. Application to adaptive optics

    PubMed Central

    Barrett, Harrison H.; Myers, Kyle J.; Devaney, Nicholas; Dainty, Christopher

    2008-01-01

    The methodology of objective assessment, which defines image quality in terms of the performance of specific observers on specific tasks of interest, is extended to temporal sequences of images with random point spread functions and applied to adaptive imaging in astronomy. The tasks considered include both detection and estimation, and the observers are the optimal linear discriminant (Hotelling observer) and the optimal linear estimator (Wiener). A general theory of first- and second-order spatiotemporal statistics in adaptive optics is developed. It is shown that the covariance matrix can be rigorously decomposed into three terms representing the effect of measurement noise, random point spread function, and random nature of the astronomical scene. Figures of merit are developed, and computational methods are discussed. PMID:17106464
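
For a linear (Hotelling) observer, detectability reduces to a quadratic form in the class-mean difference and the data covariance, SNR² = Δḡᵀ S⁻¹ Δḡ. A minimal two-feature sketch (the mean difference and covariance below are illustrative, not from the paper):

```python
def invert_2x2(S):
    # Closed-form inverse of a 2x2 matrix
    (a, b), (c, d) = S
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def hotelling_snr(delta_g, S):
    """Hotelling-observer detectability for 2 features: sqrt(dg^T S^-1 dg)."""
    Sinv = invert_2x2(S)
    w = [Sinv[0][0] * delta_g[0] + Sinv[0][1] * delta_g[1],   # template w = S^-1 dg
         Sinv[1][0] * delta_g[0] + Sinv[1][1] * delta_g[1]]
    snr2 = delta_g[0] * w[0] + delta_g[1] * w[1]
    return snr2 ** 0.5

print(round(hotelling_snr([1.0, 0.5], [[2.0, 0.5], [0.5, 1.0]]), 3))  # prints 0.756
```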

  15. Image quality criteria for wide-field x-ray imaging applications

    NASA Astrophysics Data System (ADS)

    Thompson, Patrick L.; Harvey, James E.

    1999-10-01

For staring, wide-field applications, such as a solar x-ray imager, the severe off-axis aberrations of the classical Wolter Type-I grazing incidence x-ray telescope design drastically limits the 'resolution' near the solar limb. A specification upon on-axis fractional encircled energy is thus not an appropriate image quality criterion for such wide-angle applications. A more meaningful image quality criterion would be a field-weighted-average measure of 'resolution.' Since surface scattering effects from residual optical fabrication errors are always substantial at these very short wavelengths, the field-weighted-average half- power radius is a far more appropriate measure of aerial resolution. If an ideal mosaic detector array is being used in the focal plane, the finite pixel size provides a practical limit to this system performance. Thus, the total number of aerial resolution elements enclosed by the operational field-of-view, expressed as a percentage of the number of ideal detector pixels, is a further improved image quality criterion. In this paper we describe the development of an image quality criterion for wide-field applications of grazing incidence x-ray telescopes which leads to a new class of grazing incidence designs described in a following companion paper.
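
The half-power radius criterion can be computed directly from a sampled PSF by accumulating pixel energy in order of distance from the center until half the total is enclosed. A sketch using a Gaussian PSF, for which the analytic half-power radius is σ√(2 ln 2) ≈ 1.18σ:

```python
import math

def half_power_radius(psf):
    """Radius (pixels, about the grid center) enclosing 50% of the PSF energy."""
    n = len(psf)
    c = (n - 1) / 2
    pix = sorted((math.hypot(i - c, j - c), psf[i][j])
                 for i in range(n) for j in range(n))
    total = sum(v for _, v in pix)
    acc = 0.0
    for r, v in pix:
        acc += v
        if acc >= 0.5 * total:
            return r
    return pix[-1][0]  # fallback: whole grid

# Gaussian PSF with sigma = 3 pixels on a 41x41 grid
sigma, n = 3.0, 41
c = (n - 1) / 2
psf = [[math.exp(-((i - c) ** 2 + (j - c) ** 2) / (2 * sigma ** 2))
        for j in range(n)] for i in range(n)]
print(round(half_power_radius(psf), 2))  # analytic value: 3 * sqrt(2 ln 2) ~ 3.53
```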

  16. Evaluation of scatter effects on image quality for breast tomosynthesis

    SciTech Connect

    Wu Gang; Mainprize, James G.; Boone, John M.; Yaffe, Martin J.

    2009-10-15

    Digital breast tomosynthesis uses a limited number (typically 10-20) of low-dose x-ray projections to produce a pseudo-three-dimensional volume tomographic reconstruction of the breast. The purpose of this investigation was to characterize and evaluate the effect of scattered radiation on the image quality for breast tomosynthesis. In a simulation, scatter point spread functions generated by a Monte Carlo simulation method were convolved over the breast projection to estimate the distribution of scatter for each angle of tomosynthesis projection. The results demonstrate that in the absence of scatter reduction techniques, images will be affected by cupping artifacts, and there will be reduced accuracy of attenuation values inferred from the reconstructed images. The effect of x-ray scatter on the contrast, noise, and lesion signal-difference-to-noise ratio (SDNR) in tomosynthesis reconstruction was measured as a function of the tumor size. When a with-scatter reconstruction was compared to one without scatter for a 5 cm compressed breast, the following results were observed. The contrast in the reconstructed central slice image of a tumorlike mass (14 mm in diameter) was reduced by 30%, the voxel value (inferred attenuation coefficient) was reduced by 28%, and the SDNR fell by 60%. The authors have quantified the degree to which scatter degrades the image quality over a wide range of parameters relevant to breast tomosynthesis, including x-ray beam energy, breast thickness, breast diameter, and breast composition. They also demonstrate, though, that even without a scatter rejection device, the contrast and SDNR in the reconstructed tomosynthesis slice are higher than those of conventional mammographic projection images acquired with a grid at an equivalent total exposure.
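
The scatter degradation mechanism described above can be illustrated in one dimension: convolve a primary profile with a broad scatter kernel, add the result back at a given scatter-to-primary ratio, and watch the lesion contrast drop (all numbers below are illustrative, not from the study):

```python
def convolve_same(signal, kernel):
    # 'same'-size 1-D convolution with edge clamping (toy scatter spread)
    n, k = len(signal), len(kernel)
    half = k // 2
    out = []
    for i in range(n):
        s = 0.0
        for j in range(k):
            idx = min(max(i + j - half, 0), n - 1)
            s += signal[idx] * kernel[j]
        out.append(s)
    return out

primary = [100.0] * 20
for i in range(8, 12):
    primary[i] = 70.0          # lesion lowers primary transmission

kernel = [1.0 / 11] * 11       # broad, flat scatter point-spread function
spr = 0.8                      # assumed scatter-to-primary ratio
scatter = [spr * v for v in convolve_same(primary, kernel)]
with_scatter = [p + s for p, s in zip(primary, scatter)]

contrast = lambda img: (img[0] - img[10]) / img[0]
print(round(contrast(primary), 3), round(contrast(with_scatter), 3))  # prints 0.3 0.215
```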

  17. A new algorithm for integrated image quality measurement based on wavelet transform and human visual system

    NASA Astrophysics Data System (ADS)

    Wang, Haihui

    2006-01-01

An essential determinant of the value of digital images is their quality. Over the past years, there have been many attempts to develop models or metrics for image quality that incorporate elements of human visual sensitivity. However, there is no current standard and objective definition of spectral image quality. This paper proposes a reliable automatic method for objective image quality measurement based on the wavelet transform and the human visual system. In this way, the proposed measure differentiates between random and signal-dependent distortion, which have different effects on the human observer. Performance of the proposed quality measure is illustrated by examples involving images with different types of degradation. The technique provides a means to relate the quality of an image to its interpretation and quantification throughout the frequency range, in which the noise level is estimated for quality evaluation. The experimental results of using this method for image quality measurement exhibit good correlation with subjective visual quality assessments.
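
The wavelet-plus-HVS idea can be caricatured in a few lines: transform both images, then weight distortion in the detail (high-frequency) subband differently from the approximation subband, since the visual system responds differently to noise-like and signal-dependent structure. A toy 1-D sketch (the Haar transform is standard, but the weighting is arbitrary and not from the paper):

```python
def haar_1d(x):
    # Single-level Haar transform: (approximation, detail) coefficients
    approx = [(x[2 * i] + x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    detail = [(x[2 * i] - x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    return approx, detail

def subband_distortion(ref, dist, detail_weight=2.0):
    """Toy quality score: weighted MSE of Haar subbands (smaller is better).
    detail_weight > 1 mimics extra sensitivity to high-frequency structure."""
    ra, rd = haar_1d(ref)
    da, dd = haar_1d(dist)
    mse = lambda u, v: sum((a - b) ** 2 for a, b in zip(u, v)) / len(u)
    return mse(ra, da) + detail_weight * mse(rd, dd)

ref = [float(i) for i in range(8)]
noisy = [v + d for v, d in zip(ref, [0.5, -0.5] * 4)]
print(subband_distortion(ref, noisy))  # prints 0.5 (detail-band noise dominates)
```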

  18. Cross-layer Energy Optimization Under Image Quality Constraints for Wireless Image Transmissions.

    PubMed

    Yang, Na; Demirkol, Ilker; Heinzelman, Wendi

    2012-01-01

    Wireless image transmission is critical in many applications, such as surveillance and environment monitoring. In order to make the best use of the limited energy of the battery-operated cameras, while satisfying the application-level image quality constraints, cross-layer design is critical. In this paper, we develop an image transmission model that allows the application layer (e.g., the user) to specify an image quality constraint, and optimizes the lower layer parameters of transmit power and packet length, to minimize the energy dissipation in image transmission over a given distance. The effectiveness of this approach is evaluated by applying the proposed energy optimization to a reference ZigBee system and a WiFi system, and also by comparing to an energy optimization study that does not consider any image quality constraint. Evaluations show that our scheme outperforms the default settings of the investigated commercial devices and saves a significant amount of energy at middle-to-large transmission distances. PMID:23508852
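
The cross-layer trade-off can be sketched as a grid search over transmit power and payload length under a toy link model. The radio constants, BER curve, and search grid below are invented for illustration only; they are not the paper's model:

```python
import math

HEADER_BITS = 64        # per-packet overhead (assumed)
RATE = 250e3            # bit rate in bits/s (assumed)
P_CIRCUIT = 30.0        # circuit power in mW (assumed)

def bit_error_rate(p_tx_mw):
    # Toy monotone BER curve: higher transmit power -> fewer bit errors
    return 0.5 * math.exp(-0.5 * p_tx_mw)

def energy_per_payload_bit(p_tx_mw, payload_bits):
    """Expected mJ per successfully delivered payload bit (retransmit until success)."""
    bits = payload_bits + HEADER_BITS
    psr = (1.0 - bit_error_rate(p_tx_mw)) ** bits       # packet success ratio
    e_packet = (p_tx_mw + P_CIRCUIT) * bits / RATE      # mJ for one attempt
    return e_packet / (payload_bits * psr)

# Jointly pick transmit power (mW) and payload length (bits) from a small grid
best = min((energy_per_payload_bit(p, L), p, L)
           for p in (5, 10, 15, 20, 25)
           for L in (64, 128, 256, 512))
print(best)
```

Too little power makes retransmissions dominate; too much wastes amplifier energy, so the minimum sits in the interior of the power grid.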

  19. Characterization of image quality for 3D scatter-corrected breast CT images

    NASA Astrophysics Data System (ADS)

    Pachon, Jan H.; Shah, Jainil; Tornai, Martin P.

    2011-03-01

    The goal of this study was to characterize the image quality of our dedicated, quasi-monochromatic spectrum, cone beam breast imaging system under scatter corrected and non-scatter corrected conditions for a variety of breast compositions. CT projections were acquired of a breast phantom containing two concentric sets of acrylic spheres that varied in size (1-8mm) based on their polar position. The breast phantom was filled with 3 different concentrations of methanol and water, simulating a range of breast densities (0.79-1.0g/cc); acrylic yarn was sometimes included to simulate connective tissue of a breast. For each phantom condition, 2D scatter was measured for all projection angles. Scatter-corrected and uncorrected projections were then reconstructed with an iterative ordered subsets convex algorithm. Reconstructed image quality was characterized using SNR and contrast analysis, and followed by a human observer detection task for the spheres in the different concentric rings. Results show that scatter correction effectively reduces the cupping artifact and improves image contrast and SNR. Results from the observer study indicate that there was no statistical difference in the number or sizes of lesions observed in the scatter versus non-scatter corrected images for all densities. Nonetheless, applying scatter correction for differing breast conditions improves overall image quality.

  20. Assessing and improving cobalt-60 digital tomosynthesis image quality

    NASA Astrophysics Data System (ADS)

    Marsh, Matthew B.; Schreiner, L. John; Kerr, Andrew T.

    2014-03-01

Image guidance capability is an important feature of modern radiotherapy linacs, and future cobalt-60 units will be expected to have similar capabilities. Imaging with the treatment beam is an appealing option, for reasons of simplicity and cost, but the dose needed to produce cone beam CT (CBCT) images in a Co-60 treatment beam is too high for this modality to be clinically useful. Digital tomosynthesis (DT) offers a quasi-3D image, of sufficient quality to identify bony anatomy or fiducial markers, while delivering a much lower dose than CBCT. A series of experiments was conducted on a prototype Co-60 cone beam imaging system to quantify the resolution, selectivity, geometric accuracy and contrast sensitivity of Co-60 DT. Although the resolution is severely limited by the penumbra cast by the ~2 cm diameter source, it is possible to identify high contrast objects on the order of 1 mm in width, and bony anatomy in anthropomorphic phantoms is clearly recognizable. Low contrast sensitivity down to electron density differences of 3% is obtained, for uniform features of similar thickness. The conventional shift-and-add reconstruction algorithm was compared to several variants of the Feldkamp-Davis-Kress filtered backprojection algorithm. The Co-60 DT images were obtained with a total dose of 5 to 15 cGy each. We conclude that Co-60 radiotherapy units upgraded for modern conformal therapy could also incorporate imaging using filtered backprojection DT in the treatment beam. DT is a versatile and promising modality that would be well suited to image guidance requirements.
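
The shift-and-add reconstruction mentioned above has a very small core: shift each projection by an amount proportional to the plane being reconstructed, then average, so features in that plane reinforce while features at other heights blur out. A toy 1-D sketch (geometry idealized to integer shifts; not the prototype's actual geometry):

```python
def project(objects, angles, n=21):
    """objects: list of (x, plane); in projection 'a' a point at plane h shifts by a*h."""
    projs = []
    for a in angles:
        row = [0.0] * n
        for x, h in objects:
            row[x + a * h] += 1.0
        projs.append(row)
    return projs

def shift_and_add(projs, angles, plane, n=21):
    """Reconstruct one plane: undo that plane's shift in each projection, then average."""
    recon = [0.0] * n
    for row, a in zip(projs, angles):
        for i in range(n):
            src = i + a * plane
            if 0 <= src < n:
                recon[i] += row[src]
    return [v / len(projs) for v in recon]

angles = [-2, -1, 0, 1, 2]
projs = project([(10, 0), (5, 1)], angles)       # one object in each of two planes
print(shift_and_add(projs, angles, plane=1)[5])  # in-focus object reinforces to 1.0
```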

  1. SU-E-J-36: Comparison of CBCT Image Quality for Manufacturer Default Imaging Modes

    SciTech Connect

    Nelson, G

    2015-06-15

Purpose: CBCT is being increasingly used in patient setup for radiotherapy. Often the manufacturer default scan modes are used for performing these CBCT scans, with the assumption that they are the best options. To quantitatively assess the image quality of these scan modes, all of the scan modes were tested, as well as options within the reconstruction algorithm. Methods: A CatPhan 504 phantom was scanned on a TrueBeam linear accelerator using the manufacturer scan modes (FSRT Head, Head, Image Gently, Pelvis, Pelvis Obese, Spotlight, & Thorax). The Head mode scan was then reconstructed multiple times with all filter options (Smooth, Standard, Sharp, & Ultra Sharp) and all ring suppression options (Disabled, Weak, Medium, & Strong). An open-source ImageJ tool was created for analyzing the CatPhan 504 images. Results: The MTF curve was primarily dictated by the voxel size and the filter used in the reconstruction algorithm. The filters also impact the image noise. The CNR was worst for the Image Gently mode, followed by FSRT Head and Head. The sharper the filter, the worse the CNR. HU values varied significantly between scan modes: Pelvis Obese had lower HU values than expected, while the Image Gently mode had higher HU values than expected. If a therapist tried to use preset window and level settings, they would not show the desired tissue for some scan modes. Conclusion: Knowing the image quality of the preset scan modes will enable users to better optimize their setup CBCT. Evaluation of scan mode image quality could improve setup efficiency and lead to better treatment outcomes.

  2. 40 CFR 86.610-98 - Compliance with acceptable quality level and passing and failing criteria for Selective...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Light-Duty Vehicles, Light-Duty Trucks, and Heavy-Duty Vehicles § 86.610-98 Compliance with acceptable... when the decision is made on the last vehicle required to make a decision under paragraph (c) of...

  3. 40 CFR 86.610-98 - Compliance with acceptable quality level and passing and failing criteria for Selective...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Light-Duty Vehicles, Light-Duty Trucks, and Heavy-Duty Vehicles § 86.610-98 Compliance with acceptable... when the decision is made on the last vehicle required to make a decision under paragraph (c) of...

  4. TU-B-19A-01: Image Registration II: TG132-Quality Assurance for Image Registration

    SciTech Connect

    Brock, K; Mutic, S

    2014-06-15

AAPM Task Group 132 was charged with a review of the current approaches and solutions for image registration in radiotherapy and to provide recommendations for quality assurance and quality control of these clinical processes. As the results of image registration are always used as the input of another process for planning or delivery, it is important for the user to understand and document the uncertainty associated with the algorithm in general and the result of a specific registration. The recommendations of this task group, which at the time of abstract submission are being reviewed by the AAPM, include the following components. The user should understand the basic image registration techniques and methods of visualizing image fusion. The disclosure of basic components of the image registration by commercial vendors is critical in this respect. The physicist should perform end-to-end tests of imaging, registration, and planning/treatment systems if image registration is performed on a stand-alone system. A comprehensive commissioning process should be performed and documented by the physicist prior to clinical use of the system. As documentation is important to the safe implementation of this process, a request and report system should be integrated into the clinical workflow. Finally, a patient-specific QA practice should be established for efficient evaluation of image registration results. The implementation of these recommendations will be described and illustrated during this educational session. Learning Objectives: Highlight the importance of understanding the image registration techniques used in their clinic. Describe the end-to-end tests needed for stand-alone registration systems. Illustrate a comprehensive commissioning program using both phantom data and clinical images. Describe a request and report system to ensure communication and documentation. Demonstrate a clinically efficient patient QA practice for evaluation of image registration results.

  5. Color image quality in projection displays: a case study

    NASA Astrophysics Data System (ADS)

    Strand, Monica; Hardeberg, Jon Y.; Nussbaum, Peter

    2005-01-01

Recently the use of projection displays has increased dramatically in different applications such as digital cinema, home theatre, and business and educational presentations. Even if the color image quality of these devices has improved significantly over the years, it is still a common situation for users of projection displays that the projected colors differ significantly from the intended ones. The study presented in this paper attempts to analyze the color image quality of a large set of projection display devices, particularly investigating the variations in color reproduction. As a case study, a set of 14 projectors (LCD and DLP technology) at Gjøvik University College was tested under four different conditions: dark and light room, with and without using an ICC profile. To find out more about the importance of the illumination conditions in a room, and the degree of improvement when using an ICC profile, the results from the measurements were processed and analyzed. Eye-One Beamer from GretagMacbeth was used to make the profiles. The color image quality was evaluated both visually and by color difference calculations. The results from the analysis indicated large visual and colorimetric differences between the projectors. The DLP projectors generally have smaller color gamuts than the LCD projectors. The color gamuts of older projectors are significantly smaller than those of newer ones. The amount of ambient light reaching the screen is of great importance for the visual impression. If too much reflection or other ambient light reaches the screen, the projected image becomes pale and has low contrast. When using a profile, the differences in colors between the projectors get smaller and the colors appear more correct. For one device, the average ΔE*ab color difference when compared to a relative white reference was reduced from 22 to 11, for another from 13 to 6. Blue colors have the largest variations among the projection displays and make them
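
The ΔE*ab figures quoted above are Euclidean distances in CIELAB space. A minimal sketch (the Lab coordinates below are made up for illustration, not measured projector data):

```python
def delta_e_ab(lab1, lab2):
    """CIE 1976 color difference between two (L*, a*, b*) triples."""
    return sum((u - v) ** 2 for u, v in zip(lab1, lab2)) ** 0.5

reference = (95.0, 0.0, 2.0)   # intended color (illustrative)
projected = (75.0, 5.0, 10.0)  # measured projected color (illustrative)
print(round(delta_e_ab(reference, projected), 1))  # prints 22.1
```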

  6. Color image quality in projection displays: a case study

    NASA Astrophysics Data System (ADS)

    Strand, Monica; Hardeberg, Jon Y.; Nussbaum, Peter

    2004-10-01

Recently the use of projection displays has increased dramatically in different applications such as digital cinema, home theatre, and business and educational presentations. Even if the color image quality of these devices has improved significantly over the years, it is still a common situation for users of projection displays that the projected colors differ significantly from the intended ones. The study presented in this paper attempts to analyze the color image quality of a large set of projection display devices, particularly investigating the variations in color reproduction. As a case study, a set of 14 projectors (LCD and DLP technology) at Gjøvik University College was tested under four different conditions: dark and light room, with and without using an ICC profile. To find out more about the importance of the illumination conditions in a room, and the degree of improvement when using an ICC profile, the results from the measurements were processed and analyzed. Eye-One Beamer from GretagMacbeth was used to make the profiles. The color image quality was evaluated both visually and by color difference calculations. The results from the analysis indicated large visual and colorimetric differences between the projectors. The DLP projectors generally have smaller color gamuts than the LCD projectors. The color gamuts of older projectors are significantly smaller than those of newer ones. The amount of ambient light reaching the screen is of great importance for the visual impression. If too much reflection or other ambient light reaches the screen, the projected image becomes pale and has low contrast. When using a profile, the differences in colors between the projectors get smaller and the colors appear more correct. For one device, the average ΔE*ab color difference when compared to a relative white reference was reduced from 22 to 11, for another from 13 to 6. Blue colors have the largest variations among the projection displays and make them

  7. Image Quality of the Helioseismic and Magnetic Imager (HMI) Onboard the Solar Dynamics Observatory (SDO)

    NASA Technical Reports Server (NTRS)

    Wachter, R.; Schou, Jesper; Rabello-Soares, M. C.; Miles, J. W.; Duvall, T. L., Jr.; Bush, R. I.

    2011-01-01

We describe the imaging quality of the Helioseismic and Magnetic Imager (HMI) onboard the Solar Dynamics Observatory (SDO) as measured during the ground calibration of the instrument. We describe the calibration techniques and report our results for the final configuration of HMI. We present the distortion, modulation transfer function, stray light, image shifts introduced by moving parts of the instrument, best focus, field curvature, and the relative alignment of the two cameras. We investigate the gain and linearity of the cameras, and present the measured flat field.

  8. [Use of Near-Infrared Hyperspectral Images to Differentiate Architectural Coatings with Different Qualities].

    PubMed

    Jiang, Jin-bao; Qiao, Xiao-jun; He, Ru-yan; Tian, Fen-min

    2016-02-01

Architectural coatings sold on the market fall into many categories, meaning different models and qualities. This research aims to differentiate different kinds of architectural coatings in quality using hyperspectral technology. Near-infrared hyperspectral images of four kinds of architectural coatings (in a descending quality order of brands A, B, C, and D) in the same color were acquired. The optimal wavelengths for differentiating the four kinds of coatings, 1283 and 2447 nm, were selected through the ANOVA (analysis of variance) method. The band ratio index R₁₂₈₃/R₂₄₄₇ was constructed and the results were segmented into the corresponding coatings, and the accuracies of segmentation were compared with those from Maximum Likelihood Classification (MLC). The results indicated that all J-M distances are more than 1.8 except between C and D; the lowest accuracies, 87.54% in segmentation and 95.63% in MLC, were both from brand D, and all other accuracies were over 90% in both the ratio index and MLC. Therefore, the ratio index R₁₂₈₃/R₂₄₄₇ could be used to distinguish different kinds of architectural coatings. The research could also provide support for identification, quality acceptance, and conformity assessment of architectural coatings. PMID:27209735
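
A band-ratio segmentation of this kind is just a per-pixel ratio followed by interval thresholding. A minimal sketch (the reflectance values and interval boundaries below are purely illustrative, not the paper's thresholds):

```python
def classify(pixel_ratios, thresholds):
    """Assign each pixel's band ratio to the first class whose (lo, hi) interval
    contains it; the interval boundaries here are illustrative only."""
    labels = []
    for r in pixel_ratios:
        for name, (lo, hi) in thresholds.items():
            if lo <= r < hi:
                labels.append(name)
                break
        else:
            labels.append("unclassified")
    return labels

# Ratio index R1283/R2447 from hypothetical per-pixel reflectances
ratios = [b1283 / b2447 for b1283, b2447 in [(0.45, 0.30), (0.36, 0.30), (0.27, 0.30)]]
thresholds = {"brand A": (1.4, 2.0), "brand B": (1.1, 1.4), "brand C": (0.8, 1.1)}
print(classify(ratios, thresholds))  # prints ['brand A', 'brand B', 'brand C']
```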

  9. Comparison of image quality in computed laminography and tomography.

    PubMed

    Xu, Feng; Helfen, Lukas; Baumbach, Tilo; Suhonen, Heikki

    2012-01-16

In computed tomography (CT), projection images of the sample are acquired over an angular range of 180 to 360 degrees around a rotation axis. A special case of CT is limited-angle CT, where some of the rotation angles are inaccessible, leading to artefacts in the reconstruction because of missing information. We consider the case of flat samples, where the projection angles that are close to the sample surface are either i) completely unavailable or ii) very noisy due to the limited transmission at these angles. Computed laminography (CL) is an imaging technique especially suited for flat samples. CL is a generalization of CT that uses a rotation axis tilted by less than 90 degrees with respect to the incident beam. Thus CL avoids using projections from the angles closest to the sample surface. We make a quantitative comparison of the imaging artefacts between CL and limited-angle CT for the case of a parallel-beam geometry. Both experimental and simulated images are used to characterize the effect of the artefacts on the resolution and visible image features. The results indicate that CL has an advantage over CT in cases where the missing angular range is a significant portion of the total angular range. In the case when the quality of the projections is limited by noise, CT allows a better tradeoff between the noise level and the missing angular range. PMID:22274425

  10. Patient dose and image quality from mega-voltage cone beam computed tomography imaging

    SciTech Connect

    Gayou, Olivier; Parda, David S.; Johnson, Mark; Miften, Moyed

    2007-02-15

    The evolution of ever more conformal radiation delivery techniques makes accurate patient localization increasingly important in radiotherapy. Several systems can be utilized, including kilo-voltage and mega-voltage cone-beam computed tomography (MV-CBCT), CT-on-rails, and helical tomotherapy. One attractive aspect of mega-voltage cone-beam CT is that it uses the therapy beam along with an electronic portal imaging device to image the patient prior to the delivery of treatment. However, the use of a photon beam energy in the mega-voltage range for volumetric imaging degrades the image quality and increases the patient radiation dose. To optimize image quality and patient dose in MV-CBCT imaging procedures, a series of dose measurements in cylindrical and anthropomorphic phantoms was performed using an ionization chamber, radiographic films, and thermoluminescent dosimeters. Furthermore, the dependence of the contrast to noise ratio and spatial resolution of the image upon the delivered dose was evaluated for a 20-cm-diameter cylindrical phantom. Depending on the anatomical site and patient thickness, we found that the minimum dose deposited in the irradiated volume was 5-9 cGy and the maximum dose was between 9 and 17 cGy for our clinical MV-CBCT imaging protocols. Results also demonstrated that for high contrast areas such as bony anatomy, low doses are sufficient for image registration and visualization of the three-dimensional boundaries between soft tissue and bony structures. However, as the difference in tissue density decreased, the dose required to identify soft tissue boundaries increased. Finally, the dose delivered by MV-CBCT was simulated using a treatment planning system (TPS), thereby allowing the incorporation of the MV-CBCT dose in the treatment planning process. The TPS-calculated doses agreed well with measurements for a wide range of imaging protocols.

  11. Image quality evaluation of breast tomosynthesis with synchrotron radiation

    SciTech Connect

    Malliori, A.; Bliznakova, K.; Speller, R. D.; Horrocks, J. A.; Rigon, L.; Tromba, G.; Pallikarakis, N.

    2012-09-15

    Purpose: This study investigates the image quality of tomosynthesis slices obtained from several acquisition sets with synchrotron radiation, using a breast phantom incorporating details that mimic various breast lesions in a heterogeneous background. Methods: A complex breast phantom (MAMMAX) with a heterogeneous background and a thickness corresponding to a 4.5 cm compressed breast, with an average composition of 50% adipose and 50% glandular tissue, was assembled from two commercial phantoms. Projection images over acquisition arcs of 24°, 32°, 40°, 48°, and 56° at an incident energy of 17 keV were obtained at the synchrotron radiation for medical physics beamline of the ELETTRA Synchrotron Light Laboratory. The total mean glandular dose was set equal to 2.5 mGy. Tomograms were reconstructed with the simple multiple projection algorithm (MPA) and with filtered MPA; in the latter case, a median filter, a sinc filter, and a combination of the two were applied to the experimental data prior to MPA reconstruction. Visual inspection, contrast to noise ratio, contrast, and artifact spread function were the figures of merit used to evaluate the visualisation and detection of low- and high-contrast breast features as a function of the reconstruction algorithm and acquisition arc. To study the benefits of using monochromatic beams, single projection images at incident energies ranging from 14 to 27 keV were acquired with the same phantom and weighted to synthesize polychromatic images at a typical incident x-ray spectrum with a W target. Results: Filters were optimised to reconstruct features with different attenuation characteristics and dimensions. In the case of 6 mm low-contrast details, improved visual appearance as well as higher contrast to noise ratio and contrast values were observed for the two filtered MPA algorithms that exploit the sinc filter. These features are better visualized

  12. Taking image quality factor into the OPC model tuning flow

    NASA Astrophysics Data System (ADS)

    Wang, Ching-Heng; Liu, Qingwei; Zhang, Liguo

    2007-03-01

    All OPC model builders seek a physically realistic model that is adequately calibrated and contains the information needed for process prediction and analysis. But some physics of the process remains unknown, and wafer data sets are not perfect. In most cases, even using the average values of different empirical data sets still carries inaccurate measurements into the model fitting process, making fitting more time consuming and potentially causing loss of convergence and stability. Image quality is one of the most worrisome obstacles faced by next-generation lithography. Nowadays, considerable effort is devoted to enhancing contrast, as well as to understanding its impact on devices. It is a persistent problem for 193 nm micro-lithography and will remain so for at least three more generations, culminating with immersion lithography. In this work, we weight different wafer data points with a weighting function. The weighting function depends on the normalized image log slope (NILS), which reflects image quality. Using this approach, we can filter out erroneous process information and make the OPC model more accurate. Calibre Workbench, the platform used in this study, has demonstrated excellent performance on 0.13 um, 90 nm, and 65 nm production and development model setups. Leveraging its automatic optical-tuning function, we devised the best weighting approach to achieve the most efficient and convergent tuning flow.
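    A minimal sketch of the NILS-based weighting idea follows. The mapping from NILS to weight, the floor/reference values, and the weighted-RMS cost are illustrative assumptions, not Calibre's actual scheme:

```python
import numpy as np

def nils_weights(nils, nils_floor=1.0, nils_ref=3.0):
    """Map each measurement gauge's normalized image log slope (NILS)
    to a fitting weight in [0, 1].

    Gauges printed with low NILS (poor image quality) carry noisier CD
    measurements, so they are down-weighted in the model-fit cost.
    The floor and reference values here are hypothetical.
    """
    w = (np.asarray(nils, float) - nils_floor) / (nils_ref - nils_floor)
    return np.clip(w, 0.0, 1.0)

def weighted_rms_error(cd_measured, cd_simulated, weights):
    """Weighted RMS critical-dimension error as a calibration cost."""
    w = np.asarray(weights, float)
    err = np.asarray(cd_measured, float) - np.asarray(cd_simulated, float)
    return float(np.sqrt(np.sum(w * err**2) / np.sum(w)))
```

    Measurements below the NILS floor then contribute nothing to the fit, while well-imaged gauges dominate it.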

  13. Perceptual quality metric of color quantization errors on still images

    NASA Astrophysics Data System (ADS)

    Pefferkorn, Stephane; Blin, Jean-Louis

    1998-07-01

    A new metric for the assessment of color image coding quality is presented in this paper. Two models of chromatic and achromatic error visibility have been investigated, incorporating many aspects of human vision and color perception. The achromatic model accounts for both retinal and cortical phenomena such as visual sensitivity to spatial contrast and orientation. The chromatic metric is based on a multi-channel model of human color vision that is parameterized for video coding applications using psychophysical experiments, assuming that the perception of color quantization errors can be assimilated to the perception of supra-threshold local color differences. The final metric merges the chromatic and achromatic models and accounts for phenomena such as masking. The metric is tested on 6 real images at 5 quality levels using subjective assessments. The high correlation between objective and subjective scores shows that the described metric accurately rates the rendition of important image features such as color contours and textures.

  14. A hyperspectral imaging prototype for online quality evaluation of pickling cucumbers

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A hyperspectral imaging prototype was developed for online evaluation of external and internal quality of pickling cucumbers. The prototype had several new, unique features including simultaneous reflectance and transmittance imaging and inline, real time calibration of hyperspectral images of each ...

  15. Image quality: An overview; Proceedings of the Meeting, Arlington, VA, April 9, 10, 1985

    NASA Astrophysics Data System (ADS)

    Granger, E. M.; Baker, L. R.

    1985-12-01

    Various papers on image quality are presented. The subjects discussed include: image quality considerations in transform coding, psychophysical approach to image quality, a decision theory approach to tone reproduction, Fourier analysis of image raggedness, lens performance assessment by image quality criteria, results of preliminary work on objective MRTD measurement, resolution requirements for binarization of line art, and problems of the visual display in flight simulation. Also addressed are: emittance in thermal imaging applications, optical performance requirements for thermal imaging lenses, dynamic motion measurement using digital TV speckle interferometry, quality assurance for borescopes, versatile projector test device, operational MTF for Landsat Thematic Mapper, operational use of color perception to enhance satellite image quality, theoretical bases and measurement of the MTF of integrated image sensors, measurement of the MTF of thermal and other video systems, and underflight calibration of the Landsat Thematic Mapper.

  16. Functional magnetic resonance imaging of awake monkeys: some approaches for improving imaging quality

    PubMed Central

    Chen, Gang; Wang, Feng; Dillenburger, Barbara C.; Friedman, Robert M.; Chen, Li M.; Gore, John C.; Avison, Malcolm J.; Roe, Anna W.

    2011-01-01

    Functional magnetic resonance imaging (fMRI) at high magnetic field strengths can suffer serious degradation of image quality because of motion and physiological noise, as well as spatial distortions and signal losses due to susceptibility effects. Overcoming such limitations is essential for sensitive detection and reliable interpretation of fMRI data, and these issues are particularly problematic in studies of awake animals. As part of our initial efforts to study functional brain activations in awake, behaving monkeys using fMRI at 4.7 T, we have developed acquisition and analysis procedures to improve image quality, with encouraging results. We evaluated the influence of two main variables on image quality. First, we show how important the level of behavioral training is for obtaining good data stability and high temporal signal-to-noise ratios. In initial sessions, our typical scan session lasted 1.5 hours, partitioned into short (<10 minute) runs. During reward periods and breaks between runs, the monkey exhibited movements resulting in considerable image misregistrations. After a few months of extensive behavioral training, we were able to increase the length of individual runs and the total length of each session. The monkey learned to wait until the end of a block for fluid reward, resulting in longer periods of continuous acquisition. Each additional 60 training sessions extended the duration of each session by 60 minutes, culminating, after about 140 training sessions, in sessions that lasted about four hours. As a result, the average translational movement decreased from over 500 μm to less than 80 μm, a displacement close to that observed in anesthetized monkeys scanned in a 7 T horizontal scanner. Another major source of distortion at high fields arises from susceptibility variations. To reduce such artifacts, we used segmented gradient-echo echo-planar imaging (EPI) sequences. Increasing the number of segments significantly decreased susceptibility

  17. Image quality in CT: From physical measurements to model observers.

    PubMed

    Verdun, F R; Racine, D; Ott, J G; Tapiovaara, M J; Toroi, P; Bochud, F O; Veldkamp, W J H; Schegerer, A; Bouwman, R W; Giron, I Hernandez; Marshall, N W; Edyvean, S

    2015-12-01

    Evaluation of image quality (IQ) in computed tomography (CT) is important to ensure that diagnostic questions are correctly answered while keeping the radiation dose to the patient as low as reasonably possible. The assessment of individual aspects of IQ is already a key component of routine quality control of medical x-ray devices. These values, together with standard dose indicators, can be combined into 'figures of merit' (FOM) that characterise the dose efficiency of CT scanners operating in certain modes. The demand for clinically relevant IQ characterisation has naturally increased with the development of CT technology (detector efficiency, image reconstruction and processing), resulting in the adaptation and evolution of assessment methods. The purpose of this review is to present the spectrum of methods that have been used to characterise image quality in CT: from objective measurements of physical parameters to clinically task-based approaches (i.e. the model observer (MO) approach), including the pure human observer approach. When combined with a dose indicator, a generalised dose efficiency index can be explored in a framework of system and patient dose optimisation. We focus on the IQ methodologies required for standard reconstruction, but also for iterative reconstruction algorithms. With this concept, the previously used FOMs are presented along with a proposal to update them in order to keep them relevant and up to date with technological progress. The MO, which objectively assesses IQ for clinically relevant tasks, represents the most promising method in terms of radiologist sensitivity performance and is therefore of most relevance in the clinical environment. PMID:26459319
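    The kind of figure of merit this review describes, pairing an image quality index with a dose indicator, can be sketched in a generic CNR-squared-per-dose form. This is one widely used construction, not necessarily the FOMs the review proposes:

```python
import numpy as np

def contrast_to_noise_ratio(roi_target, roi_background):
    """CNR between a target ROI and a uniform background ROI (pixel arrays)."""
    diff = abs(np.mean(roi_target) - np.mean(roi_background))
    return diff / np.std(roi_background)

def dose_efficiency_fom(cnr, ctdi_vol):
    """A simple dose-efficiency figure of merit: CNR squared per unit dose.

    In quantum-limited imaging CNR grows roughly as sqrt(dose), so this
    ratio is approximately dose-independent and characterises the
    scanner/protocol rather than the chosen exposure level.
    """
    return cnr**2 / ctdi_vol
```

    Comparing this index across protocols at matched diagnostic tasks is what allows a "generalised dose efficiency" ranking of scanner operating modes.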

  18. 75 FR 27548 - Quality GearBox, LLC; Notice of Competing Preliminary Permit Application Accepted for Filing and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-17

    ... From the Federal Register Online via the Government Publishing Office DEPARTMENT OF ENERGY Federal Energy Regulatory Commission Quality GearBox, LLC; Notice of Competing Preliminary Permit Application..., Quality GearBox, LLC (Quality GearBox) filed an application for a preliminary permit, pursuant to...

  19. SU-C-304-02: Robust and Efficient Process for Acceptance Testing of Varian TrueBeam Linacs Using An Electronic Portal Imaging Device (EPID)

    SciTech Connect

    Yaddanapudi, S; Cai, B; Sun, B; Li, H; Noel, C; Goddu, S; Mutic, S; Harry, T; Pawlicki, T

    2015-06-15

    Purpose: The purpose of this project was to develop a process that utilizes the onboard kV and MV electronic portal imaging devices (EPIDs) to perform rapid acceptance testing (AT) of linacs in order to improve efficiency and standardize AT equipment and processes. Methods: In this study a Varian TrueBeam linac equipped with an amorphous silicon based EPID (aSi1000) was used. The conventional set of AT tests and tolerances was used as a baseline guide, and a novel methodology was developed to perform as many tests as possible using EPID exclusively. The developer mode on Varian TrueBeam linac was used to automate the process. In the current AT process there are about 45 tests that call for customer demos. Many of the geometric tests such as jaw alignment and MLC positioning are performed with highly manual methods, such as using graph paper. The goal of the new methodology was to achieve quantitative testing while reducing variability in data acquisition, analysis and interpretation of the results. The developed process was validated on two machines at two different institutions. Results: At least 25 of the 45 (56%) tests which required customer demo can be streamlined and performed using EPIDs. More than half of the AT tests can be fully automated using the developer mode, while others still require some user interaction. Overall, the preliminary data shows that EPID-based linac AT can be performed in less than a day, compared to 2–3 days using conventional methods. Conclusions: Our preliminary results show that performance of onboard imagers is quite suitable for both geometric and dosimetric testing of TrueBeam systems. A standardized AT process can tremendously improve efficiency, and minimize the variability related to third party quality assurance (QA) equipment and the available onsite expertise. Research funding provided by Varian Medical Systems. Dr. Sasa Mutic receives compensation for providing patient safety training services from Varian Medical

  20. Degraded visual environment image/video quality metrics

    NASA Astrophysics Data System (ADS)

    Baumgartner, Dustin D.; Brown, Jeremy B.; Jacobs, Eddie L.; Schachter, Bruce J.

    2014-06-01

    A number of image quality metrics (IQMs) and video quality metrics (VQMs) have been proposed in the literature for evaluating techniques and systems for mitigating degraded visual environments. Some require both pristine and corrupted imagery. Others require patterned target boards in the scene. None of these metrics relates well to the task of landing a helicopter in conditions such as a brownout dust cloud. We have developed and used a variety of IQMs and VQMs related to the pilot's ability to detect hazards in the scene and to maintain situational awareness. Some of these metrics can be made agnostic to sensor type. Not only are the metrics suitable for evaluating algorithm and sensor variation, they are also suitable for choosing the most cost effective solution to improve operating conditions in degraded visual environments.

  1. Perceptual image quality assessment: recent progress and trends

    NASA Astrophysics Data System (ADS)

    Lin, Weisi; Narwaria, Manish

    2010-07-01

    Image quality assessment (IQA) is useful in many visual processing systems but challenging to perform in line with human perception. A great deal of recent research effort has been directed towards IQA. In order to overcome the difficulty and infeasibility of subjective tests in many situations, the aim of such effort is to assess visual quality objectively, in better alignment with the perception of the human visual system (HVS). In this work, we review and analyze recent progress in the areas related to IQA, giving our own views wherever possible. Following the recent trends, we discuss the engineering approach in more detail, explore the related aspects of feature pooling, and present a case study with machine learning.

  2. Exercise motives and positive body image in physically active college women and men: Exploring an expanded acceptance model of intuitive eating.

    PubMed

    Tylka, Tracy L; Homan, Kristin J

    2015-09-01

    The acceptance model of intuitive eating posits that body acceptance by others facilitates body appreciation and internal body orientation, which contribute to intuitive eating. Two domains of exercise motives (functional and appearance) may also be linked to these variables, and thus were integrated into the model. The model fit the data well for 406 physically active U.S. college students, although some pathways were stronger for women. Body acceptance by others directly contributed to higher functional exercise motives and indirectly contributed to lower appearance exercise motives through higher internal body orientation. Functional exercise motives positively, and appearance exercise motives inversely, contributed to body appreciation. Whereas body appreciation positively, and appearance exercise motives inversely, contributed to intuitive eating for women, only the latter association was evident for men. To benefit positive body image and intuitive eating, efforts should encourage body acceptance by others and emphasize functional and de-emphasize appearance exercise motives. PMID:26281958

  3. Automated techniques for quality assurance of radiological image modalities

    NASA Astrophysics Data System (ADS)

    Goodenough, David J.; Atkins, Frank B.; Dyer, Stephen M.

    1991-05-01

    This paper will attempt to identify many of the important issues for quality assurance (QA) of radiological modalities. It is, of course, to be realized that QA can span many aspects of the diagnostic decision-making process; these issues range from physical image performance levels through to the radiologist's diagnostic decision. We will use as a model for automated approaches a program we have developed to work with computed tomography (CT) images. In an attempt to unburden the user, and in an effort to facilitate the performance of QA, we have been studying automated approaches. The ultimate utility of the system is its ability to render, in a safe and efficacious manner, decisions that are accurate, sensitive, and specific, and that are possible within the economic constraints of modern health care delivery.

  4. Sentinel-2 geometric image quality commissioning: first results

    NASA Astrophysics Data System (ADS)

    Languille, F.; Déchoz, C.; Gaudel, A.; Greslou, D.; de Lussy, F.; Trémas, T.; Poulain, V.

    2015-10-01

    In the frame of the Copernicus program of the European Commission, Sentinel-2 will offer multispectral high-spatial-resolution optical images over global terrestrial surfaces. In cooperation with ESA, the Centre National d'Etudes Spatiales (CNES) is in charge of the image quality of the project, and will ensure the CAL/VAL commissioning phase during the months following the launch. Sentinel-2 is a constellation of two satellites on a polar sun-synchronous orbit with a revisit time of 5 days (with both satellites), a wide field of view (290 km), 13 spectral bands in the visible and shortwave infrared, and high spatial resolution (10 m, 20 m, and 60 m). The Sentinel-2 mission offers global coverage over terrestrial surfaces. The satellites systematically acquire terrestrial surfaces under the same viewing conditions in order to build temporal image stacks. The first satellite was launched in June 2015; the CAL/VAL commissioning phase will then last 6 months for geometrical calibration. This paper first explains the geometric corrections applied to delivered Sentinel-2 products, then details the calibration sites and the methods used for geometrical parameter calibration, and presents the first associated results. The following topics are presented: viewing frame orientation assessment, focal plane mapping for all spectral bands, first results on geolocation assessment, and multispectral registration. Images are systematically recalibrated over a common reference, a set of S2 images produced during the 6 months of CAL/VAL. As it takes time to gather all the needed images, the geolocation performance with ground control points and the multitemporal performance are only first results and will be improved during the last phase of the CAL/VAL. This paper therefore mainly shows the system performances, the preliminary product performances, and the way they are obtained.

  5. SAR image quality effects of damped phase and amplitude errors

    NASA Astrophysics Data System (ADS)

    Zelenka, Jerry S.; Falk, Thomas

    The effects of damped multiplicative errors, in amplitude or phase, on the image quality of synthetic-aperture radar systems are considered. Such errors can result from aircraft maneuvers or the mechanical steering of an antenna. Proper treatment of damped multiplicative errors can lead to related design specifications and possibly an enhanced collection capability. Only small, high-frequency errors are considered. Expressions for the average intensity and energy associated with a damped multiplicative error are presented and used to derive graphical results. A typical example shows how to apply the results of this effort.

  6. An automated system for numerically rating document image quality

    SciTech Connect

    Cannon, M.; Kelly, P.; Iyengar, S.S.; Brener, N.

    1997-04-01

    As part of the Department of Energy document declassification program, the authors have developed a numerical rating system to predict the OCR error rate that they expect to encounter when processing a particular document. The rating algorithm produces a vector containing scores for different document image attributes such as speckle and touching characters. The OCR error rate for a document is computed from a weighted sum of the elements of the corresponding quality vector. The predicted OCR error rate will be used to screen documents that would not be handled properly with existing document processing products.
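    A minimal sketch of the rating scheme this record describes follows. The attribute names, weights, and screening threshold are placeholders; the paper's weights were fit against measured OCR error rates:

```python
def predicted_ocr_error_rate(quality_vector, weights):
    """Predict a document's OCR error rate as a weighted sum of its
    image-quality attribute scores (e.g. speckle, touching characters).

    Both arguments are sequences of equal length; the weights shown in
    any usage here are hypothetical, not the paper's fitted values.
    """
    if len(quality_vector) != len(weights):
        raise ValueError("quality vector and weights must align")
    return sum(q * w for q, w in zip(quality_vector, weights))

def should_screen_out(quality_vector, weights, threshold=0.15):
    """Flag documents whose predicted OCR error rate exceeds a threshold,
    so they are not sent to OCR-based document processing."""
    return predicted_ocr_error_rate(quality_vector, weights) > threshold
```

    In a declassification pipeline, flagged documents would be routed to manual handling instead of automated OCR.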

  7. New strategy for image and video quality assessment

    NASA Astrophysics Data System (ADS)

    Ma, Qi; Zhang, Liming; Wang, Bin

    2010-01-01

    Image and video quality assessment (QA) is a critical issue in image and video processing applications. General full-reference (FR) QA criteria such as peak signal-to-noise ratio (PSNR) and mean squared error (MSE) do not accord well with human subjective assessment. Some QA indices that consider human visual sensitivity, such as mean structural similarity (MSSIM) with structural sensitivity and visual information fidelity (VIF) with statistical sensitivity, were proposed in view of the differences between reference and distorted frames on a pixel or local level. However, they ignore the role of human visual attention (HVA). Recently, some new strategies incorporating HVA have been proposed, but their methods for extracting visual attention are too complex for real-time realization. We take advantage of the phase spectrum of quaternion Fourier transform (PQFT), a very fast algorithm we previously proposed, to extract saliency maps of color images or videos. We then propose saliency-based methods for both image QA (IQA) and video QA (VQA) by adding saliency-derived weights to these original IQA or VQA criteria. Experimental results show that our saliency-based strategy accords more closely with human subjective assessment than the original IQA or VQA methods, and does not take more time because of the fast PQFT algorithm.
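    The saliency-weighting idea can be sketched for the grayscale case as below. This uses the scalar phase-spectrum (PFT) special case rather than the authors' full quaternion PQFT, pairs it with plain MSE, and omits the Gaussian smoothing usually applied to the saliency map:

```python
import numpy as np

def phase_spectrum_saliency(gray):
    """Saliency map from the phase spectrum of the Fourier transform
    (PFT), the single-channel analogue of the PQFT approach: keep only
    the phase of the spectrum, invert, and square the magnitude."""
    f = np.fft.fft2(np.asarray(gray, float))
    return np.abs(np.fft.ifft2(np.exp(1j * np.angle(f)))) ** 2

def saliency_weighted_mse(ref, dist, saliency):
    """Mean squared error with per-pixel saliency weights, so distortions
    in visually attended regions dominate the quality score."""
    w = saliency / np.sum(saliency)
    err = np.asarray(ref, float) - np.asarray(dist, float)
    return float(np.sum(w * err**2))
```

    With a uniform saliency map this reduces to ordinary MSE, which is the sense in which the weighting extends, rather than replaces, the classical criterion.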

  8. Influence of slice overlap on positron emission tomography image quality

    NASA Astrophysics Data System (ADS)

    McKeown, Clare; Gillen, Gerry; Dempsey, Mary Frances; Findlay, Caroline

    2016-02-01

    PET scans use overlapping acquisition beds to correct for reduced sensitivity at bed edges. The optimum overlap size for the General Electric (GE) Discovery 690 has not been established. This study assesses how image quality is affected by slice overlap; the efficacy of 23% overlaps (recommended by GE) and 49% overlaps (the maximum possible) was specifically assessed. European Association of Nuclear Medicine (EANM) guidelines for calculating minimum injected activities based on overlap size were also reviewed. A uniform flood phantom was used to assess noise (coefficient of variation, COV) and voxel accuracy (activity concentrations, Bq ml-1). A NEMA (National Electrical Manufacturers Association) body phantom with hot/cold spheres in a background activity was used to assess contrast recovery coefficients (CRCs) and signal to noise ratios (SNR). Different overlap sizes and sphere-to-background ratios were assessed. COVs for 49% and 23% overlaps were 9% and 13% respectively. This increased noise was difficult to visualise on the 23% overlap images. Mean voxel activity concentrations were not affected by overlap size. No clinically significant differences in CRCs were observed. However, the visibility and SNR of small, low contrast spheres (⩽13 mm diameter, 2:1 sphere-to-background ratio) may be affected by overlap size in low count studies if they are located in the overlap area. There was minimal detectable influence on image quality in terms of noise, mean activity concentrations or mean CRCs when comparing 23% overlap with 49% overlap. The detectability of small, low contrast lesions may be affected in low count studies; however, this is a worst-case scenario. The marginal benefits of increasing overlap from 23% to 49% are likely to be offset by increased patient scan times. A 23% overlap is therefore appropriate for clinical use. An amendment to EANM guidelines for calculating injected activities is also proposed which better reflects the effect overlap size has
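    The coefficient-of-variation noise metric used in the flood-phantom assessment can be sketched as below, assuming a uniform region of interest of voxel values:

```python
import numpy as np

def coefficient_of_variation(roi):
    """Image-noise metric: standard deviation of the voxel values in a
    uniform ROI divided by their mean, expressed as a percentage."""
    roi = np.asarray(roi, float)
    return 100.0 * np.std(roi) / np.mean(roi)
```

    A higher COV in the overlap region than in the bed centre is exactly the edge-sensitivity effect the bed overlap is meant to compensate.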

  9. Radiation Exposure of Ovarian Cancer Patients: Contribution of CT Examinations Performed on Different MDCT (16 and 64 Slices) Scanners and Image Quality Evaluation

    PubMed Central

    Rizzo, Stefania; Origgi, Daniela; Brambilla, Sarah; De Maria, Federica; Foà, Riccardo; Raimondi, Sara; Colombo, Nicoletta; Bellomi, Massimo

    2015-01-01

    The objective of this study is to compare the radiation doses given to ovarian cancer patients by different computed tomography (CT) scanners and to evaluate the association between dose and subjective and objective image quality. The CT examinations included were performed either on a 16-slice CT, equipped with automatic z-axis tube current modulation, or on a 64-slice CT, equipped with z-axis and xy-axis modulation and an adaptive statistical iterative reconstruction (ASIR) algorithm. Evaluation of dose included the following dose descriptors: volumetric CT dose index (CTDIvol), dose length product (DLP), and effective dose (E). Objective image noise was evaluated in the abdominal aorta and liver. Subjective image quality was evaluated by assessment of image noise, spatial resolution and diagnostic acceptability. Outcome measures were: mean and median CTDIvol, DLP, and E; correlation between CTDIvol and DLP and patients' weight; comparison of objective noise for the 2 scanners; and association between dose descriptors and subjective image quality. The 64-slice CT delivered a 24.5% lower dose (P < 0.0001) than the 16-slice CT. There was a significant correlation between all dose descriptors (CTDIvol, DLP, E) and weight (P < 0.0001). Objective noise was comparable for the 2 CT scanners. There was a significant correlation between dose descriptors and image noise for the 64-slice CT, and between dose descriptors and spatial resolution for the 16-slice CT. Current dose reduction systems may reduce radiation dose without significantly affecting the image quality and diagnostic acceptability of CT exams. PMID:25929914

  10. Radiation exposure of ovarian cancer patients: contribution of CT examinations performed on different MDCT (16 and 64 slices) scanners and image quality evaluation: an observational study.

    PubMed

    Rizzo, Stefania; Origgi, Daniela; Brambilla, Sarah; De Maria, Federica; Foà, Riccardo; Raimondi, Sara; Colombo, Nicoletta; Bellomi, Massimo

    2015-05-01

    The objective of this study is to compare the radiation doses given to ovarian cancer patients by different computed tomography (CT) scanners and to evaluate the association between dose and subjective and objective image quality. The CT examinations included were performed either on a 16-slice CT, equipped with automatic z-axis tube current modulation, or on a 64-slice CT, equipped with z-axis and xy-axis modulation and an adaptive statistical iterative reconstruction (ASIR) algorithm. Evaluation of dose included the following dose descriptors: volumetric CT dose index (CTDIvol), dose length product (DLP), and effective dose (E). Objective image noise was evaluated in the abdominal aorta and liver. Subjective image quality was evaluated by assessment of image noise, spatial resolution and diagnostic acceptability. Outcome measures were: mean and median CTDIvol, DLP, and E; correlation between CTDIvol and DLP and patients' weight; comparison of objective noise for the 2 scanners; and association between dose descriptors and subjective image quality. The 64-slice CT delivered a 24.5% lower dose (P < 0.0001) than the 16-slice CT. There was a significant correlation between all dose descriptors (CTDIvol, DLP, E) and weight (P < 0.0001). Objective noise was comparable for the 2 CT scanners. There was a significant correlation between dose descriptors and image noise for the 64-slice CT, and between dose descriptors and spatial resolution for the 16-slice CT. Current dose reduction systems may reduce radiation dose without significantly affecting the image quality and diagnostic acceptability of CT exams. PMID:25929914

  11. A comparative study based on image quality and clinical task performance for CT reconstruction algorithms in radiotherapy.

    PubMed

    Li, Hua; Dolly, Steven; Chen, Hsin-Chen; Anastasio, Mark A; Low, Daniel A; Li, Harold H; Michalski, Jeff M; Thorstad, Wade L; Gay, Hiram; Mutic, Sasa

    2016-01-01

    CT image reconstruction is typically evaluated based on the ability to reduce the radiation dose to as low as reasonably achievable (ALARA) while maintaining acceptable image quality. However, the determination of common image quality metrics, such as noise, contrast, and contrast-to-noise ratio, is often insufficient for describing clinical radiotherapy task performance. In this study, we designed and implemented a new comparative analysis method associating image quality, radiation dose, and patient size with radiotherapy task performance, with the purpose of guiding the clinical radiotherapy usage of CT reconstruction algorithms. The iDose4 iterative reconstruction algorithm was selected as the target for comparison, with filtered back-projection (FBP) reconstruction regarded as the baseline. Both phantom and patient images were analyzed. A layer-adjustable anthropomorphic pelvis phantom capable of mimicking 38-58 cm lateral diameter-sized patients was imaged and reconstructed by the FBP and iDose4 algorithms with varying noise reduction levels. The resulting image sets were quantitatively assessed by two image quality indices, noise and contrast-to-noise ratio, and two clinical task-based indices, target CT Hounsfield number (for electron density determination) and structure contouring accuracy (for dose-volume calculations). Additionally, CT images of 34 patients reconstructed with iDose4 at six noise reduction levels were qualitatively evaluated by two radiation oncologists using a five-point scoring mechanism. For the phantom experiments, iDose4 achieved noise reduction of up to 66.1% and CNR improvement of up to 53.2% compared to FBP, without considering the changes in spatial resolution among images or the clinical acceptance of the reconstructed images. Such improvements appeared consistently across the different iDose4 noise reduction levels, exhibiting limited interlevel noise (< 5 HU) and target CT number variations (< 1 HU). The radiation

  12. Improve the image quality of orbital 3 T diffusion-weighted magnetic resonance imaging with readout-segmented echo-planar imaging.

    PubMed

    Xu, Xiao-Quan; Liu, Jun; Hu, Hao; Su, Guo-Yi; Zhang, Yu-Dong; Shi, Hai-Bin; Wu, Fei-Yun

    2016-01-01

    The aim of our study is to compare the image quality of readout-segmented echo-planar imaging (rs-EPI) with that of standard single-shot EPI (ss-EPI) in orbital 3 T diffusion-weighted (DW) magnetic resonance (MR) imaging in healthy subjects. Forty-two volunteers underwent two sets of orbital DW imaging scans on a 3 T MR unit, and image quality was assessed qualitatively and quantitatively. As a result, we found that rs-EPI could provide better image quality than standard ss-EPI, while no significant difference was found in the apparent diffusion coefficient between the two sets of DW images. PMID:27317226

  13. W-026, acceptance test report imaging passive/active neutron (IPAN) (submittal #54.3 - C3)

    SciTech Connect

    Watson, T.L.

    1997-02-21

    In the Spring of 1996, Site Acceptance Tests were performed for the 2 Imaging Passive/Active Neutron (IPAN) assay systems installed in the WRAP I Facility. This report includes the test documentation and the completed test checklists, with comments and resolutions. All testing was completed, with comments resolved by August 1996.

  14. Retinal Image Quality Assessment for Spaceflight-Induced Vision Impairment Study

    NASA Technical Reports Server (NTRS)

    Vu, Amanda Cadao; Raghunandan, Sneha; Vyas, Ruchi; Radhakrishnan, Krishnan; Taibbi, Giovanni; Vizzeri, Gianmarco; Grant, Maria; Chalam, Kakarla; Parsons-Wingerter, Patricia

    2015-01-01

    Long-term exposure to space microgravity poses significant risks for visual impairment. Evidence suggests such vision changes are linked to cephalad fluid shifts, prompting a need to directly quantify microgravity-induced retinal vascular changes. The quality of retinal images used for such vascular remodeling analysis, however, is dependent on imaging methodology. For our exploratory study, we hypothesized that retinal images captured using fluorescein imaging methodologies would be of higher quality in comparison to images captured without fluorescein. A semi-automated image quality assessment was developed using Vessel Generation Analysis (VESGEN) software and MATLAB® image analysis toolboxes. An analysis of ten images found that the fluorescein imaging modality provided a 36% increase in overall image quality (two-tailed p=0.089) in comparison to nonfluorescein imaging techniques.

  15. A color image quality assessment using a reduced-reference image machine learning expert

    NASA Astrophysics Data System (ADS)

    Charrier, Christophe; Lebrun, Gilles; Lezoray, Olivier

    2008-01-01

    A quality metric based on a classification process is introduced. The main idea of the proposed method is to avoid the error pooling step over many factors (in the frequency and spatial domains) commonly applied to obtain a final quality score. The classification process assigns each image a final quality class with respect to the standard quality scale provided by the UIT. Thus, for each degraded color image, a feature vector is computed including several human visual system characteristics, such as the contrast masking effect, color correlation, and so on. The selected features are of two kinds: 1) full-reference features and 2) no-reference characteristics. In this way, a machine learning expert providing a final class number is designed.

  16. A novel technique of image quality objective measurement by wavelet analysis throughout the spatial frequency range

    NASA Astrophysics Data System (ADS)

    Luo, Gaoyong

    2005-01-01

    An essential determinant of the value of surrogate digital images is their quality. Image quality measurement has become crucial for most image processing applications. Over the past years, there have been many attempts to develop models or metrics for image quality that incorporate elements of human visual sensitivity. However, there is no current standard and objective definition of spectral image quality. This paper proposes a reliable automatic method for objective image quality measurement by wavelet analysis throughout the spatial frequency range. This is done by a detailed analysis of an image across a wide range of spatial frequency content, using a combination of modulation transfer function (MTF), brightness, contrast, saturation, sharpness and noise, as a more revealing metric for quality evaluation. A fast lifting wavelet algorithm is developed for computationally efficient spatial frequency analysis, whereby fine image detail corresponding to high spatial frequencies and image sharpness relating to lower and mid-range spatial frequencies can be examined and compared accordingly. The wavelet frequency decomposition in effect extracts edge features in the sub-band images. The technique provides a means to relate the quality of an image to its interpretation and quantification throughout the frequency range, in which the noise level is estimated to assist the quality analysis. The experimental results of using this method for image quality measurement exhibit good correlation to subjective visual quality assessments.
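
    The "fast lifting wavelet algorithm" the abstract mentions is not specified in the record. As a hedged illustration, one level of the Haar wavelet computed via the lifting scheme (split, predict, update) shows how sub-band decomposition separates low-frequency structure from the high-frequency edge detail such methods examine:

```python
def haar_lifting_step(x):
    """One level of the Haar wavelet computed with the lifting scheme.

    Split -> predict -> update yields an approximation (low-frequency) band
    and a detail (high-frequency) band; the detail band carries the edge and
    noise content examined in sub-band sharpness/noise analysis.
    """
    even, odd = x[0::2], x[1::2]                          # split
    detail = [o - e for o, e in zip(odd, even)]           # predict: highs
    approx = [e + d / 2.0 for e, d in zip(even, detail)]  # update: lows
    return approx, detail

approx, detail = haar_lifting_step([2.0, 4.0, 6.0, 6.0, 5.0, 3.0, 2.0, 2.0])
```

Applying the step recursively to the approximation band gives the multi-level frequency decomposition used for full spatial-frequency-range analysis.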

  17. A novel technique of image quality objective measurement by wavelet analysis throughout the spatial frequency range

    NASA Astrophysics Data System (ADS)

    Luo, Gaoyong

    2004-10-01

    An essential determinant of the value of surrogate digital images is their quality. Image quality measurement has become crucial for most image processing applications. Over the past years, there have been many attempts to develop models or metrics for image quality that incorporate elements of human visual sensitivity. However, there is no current standard and objective definition of spectral image quality. This paper proposes a reliable automatic method for objective image quality measurement by wavelet analysis throughout the spatial frequency range. This is done by a detailed analysis of an image across a wide range of spatial frequency content, using a combination of modulation transfer function (MTF), brightness, contrast, saturation, sharpness and noise, as a more revealing metric for quality evaluation. A fast lifting wavelet algorithm is developed for computationally efficient spatial frequency analysis, whereby fine image detail corresponding to high spatial frequencies and image sharpness relating to lower and mid-range spatial frequencies can be examined and compared accordingly. The wavelet frequency decomposition in effect extracts edge features in the sub-band images. The technique provides a means to relate the quality of an image to its interpretation and quantification throughout the frequency range, in which the noise level is estimated to assist the quality analysis. The experimental results of using this method for image quality measurement exhibit good correlation to subjective visual quality assessments.

  18. Measuring saliency in images: which experimental parameters for the assessment of image quality?

    NASA Astrophysics Data System (ADS)

    Fredembach, Clement; Woolfe, Geoff; Wang, Jue

    2012-01-01

    Predicting which areas of an image are perceptually salient or attended to has become an essential pre-requisite of many computer vision applications. Because observers are notoriously unreliable in remembering where they look a posteriori, and because asking where they look while observing the image necessarily influences the results, ground truth about saliency and visual attention has to be obtained by gaze tracking methods. From the early work of Buswell and Yarbus to the most recent forays in computer vision there has been, perhaps unfortunately, little agreement on standardisation of eye tracking protocols for measuring visual attention. As the number of parameters involved in experimental methodology can be large, their individual influence on the final results is not well understood. Consequently, the performance of saliency algorithms, when assessed by correlation techniques, varies greatly across the literature. In this paper, we concern ourselves with the problem of image quality. Specifically: where people look when judging images. We show that in this case, the performance gap between existing saliency prediction algorithms and experimental results is significantly larger than otherwise reported. To understand this discrepancy, we first devise an experimental protocol that is adapted to the task of measuring image quality. In a second step, we compare our experimental parameters with the ones of existing methods and show that a lot of the variability can directly be ascribed to these differences in experimental methodology and choice of variables. In particular, the choice of a task, e.g., judging image quality vs. free viewing, has a great impact on measured saliency maps, suggesting that even for a mildly cognitive task, ground truth obtained by free viewing does not adapt well. Careful analysis of the prior art also reveals that systematic bias can occur depending on instrumental calibration and the choice of test images. We conclude this work by proposing a

  19. Image quality degradation and retrieval errors introduced by registration and interpolation of multispectral digital images

    SciTech Connect

    Henderson, B.G.; Borel, C.C.; Theiler, J.P.; Smith, B.W.

    1996-04-01

    Full utilization of multispectral data acquired by whiskbroom and pushbroom imagers requires that the individual channels be registered accurately. Poor registration introduces errors which can be significant, especially in high contrast areas such as boundaries between regions. We simulate the acquisition of multispectral imagery in order to estimate the errors that are introduced by co-registration of different channels and interpolation within the images. We compute the Modulation Transfer Function (MTF) and image quality degradation brought about by fractional pixel shifting and calculate errors in retrieved quantities (surface temperature and water vapor) that occur as a result of interpolation. We also present a method which might be used to estimate sensor platform motion for accurate registration of images acquired by a pushbroom scanner.
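
    One concrete source of the MTF degradation the abstract describes: resampling an image to a fractional pixel shift with linear interpolation multiplies the system MTF by the magnitude response of the two-tap interpolation filter (1-a, a). A small sketch (illustrative only, not the authors' simulation code):

```python
import cmath

def shift_mtf(frac_shift, freq_cyc_per_pixel):
    """MTF attenuation factor introduced by linearly interpolating an image
    to a fractional pixel shift `frac_shift` in [0, 1], evaluated at a given
    spatial frequency (cycles/pixel).

    Linear interpolation acts as a 2-tap filter (1-a, a); the magnitude of
    its frequency response multiplies the system MTF.
    """
    a = frac_shift
    h = (1 - a) + a * cmath.exp(-2j * cmath.pi * freq_cyc_per_pixel)
    return abs(h)

# A half-pixel shift fully suppresses the Nyquist frequency (value ~0),
# while a zero shift leaves the MTF untouched.
worst = shift_mtf(0.5, 0.5)
none = shift_mtf(0.0, 0.5)
```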

  20. Image quality and stability of image-guided radiotherapy (IGRT) devices: A comparative study

    PubMed Central

    Stock, Markus; Pasler, Marlies; Birkfellner, Wolfgang; Homolka, Peter; Poetter, Richard; Georg, Dietmar

    2010-01-01

    Introduction Our aim was to implement standards for quality assurance of IGRT devices used in our department and to compare their performances with that of a CT simulator. Materials and methods We investigated image quality parameters for three devices over a period of 16 months. A multislice CT was used as a benchmark and results related to noise, spatial resolution, low contrast visibility (LCV) and uniformity were compared with a cone beam CT (CBCT) at a linac and simulator. Results All devices performed well in terms of LCV and, in fact, exceeded vendor specifications. MTF was comparable between CT and linac CBCT. Integral nonuniformity was, on average, 0.002 for the CT and 0.006 for the linac CBCT. Uniformity, LCV and MTF varied depending on the protocols used for the linac CBCT. Contrast-to-noise ratio was an average of 51% higher for the CT than for the linac and simulator CBCT. No significant time trend was observed and tolerance limits were implemented. Discussion Reasonable differences in image quality between CT and CBCT were observed. Further research and development are necessary to increase image quality of commercially available CBCT devices in order for them to serve the needs for adaptive and/or online planning. PMID:19695725

  1. Defining the Utility of Clinically Acceptable Variations in Evidence-Based Practice Guidelines for Evaluation of Quality Improvement Activities.

    ERIC Educational Resources Information Center

    Lescoe-Long, Mary; Long, Michael J.

    1999-01-01

    Examined the usefulness of systematically accounting for acceptable physician variations in guideline application. A review of 141 cases of treatment of acute myocardial infarction in a Canadian hospital showed that even seemingly noncontentious guideline protocols do not offer a threshold of variation similar to conventional Continuous Quality…

  2. Effects of diet on carcass quality and consumer taste panel acceptance of intact or castrated hair lambs

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Forty hair-type lambs were examined in a 70-d study to determine the effects of gender (castrate; C vs. intact; I) and forage type on carcass traits and sensory acceptability. Lambs were procured from a single source in Missouri and one-half were randomly castrated. Lambs were randomly assigned to t...

  3. Relations between local and global perceptual image quality and visual masking

    NASA Astrophysics Data System (ADS)

    Alam, Md Mushfiqul; Patil, Pranita; Hagan, Martin T.; Chandler, Damon M.

    2015-03-01

    Perceptual quality assessment of digital images and videos is important for various image-processing applications. For assessing image quality, researchers have often used the idea of visual masking (or distortion visibility) to design image-quality predictors, specifically for near-threshold distortions. However, it is still unknown how, when assessing the quality of natural images, the local distortion visibilities relate to the local quality scores. Furthermore, the summing mechanism of the local quality scores to predict the global quality scores is also crucial for better prediction of the perceptual image quality. In this paper, the local and global qualities of six images and six distortion levels were measured using subjective experiments. A Gabor-noise target was used as the distortion in the quality-assessment experiments to be consistent with our previous study [Alam, Vilankar, Field, and Chandler, Journal of Vision, 2014], in which the local root-mean-square contrast detection thresholds for detecting the Gabor-noise target were measured at each spatial location of the undistorted images. Comparison of the results of this quality-assessment experiment and the previous detection experiment shows that masking predicted the local quality scores more than 95% correctly above a 15 dB threshold, within 5% of the subject scores. Furthermore, it was found that an approximate squared summation of local quality scores predicted the global quality scores suitably (Spearman rank-order correlation 0.97).
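
    The "approximate squared summation" of local scores can be read as Minkowski pooling with exponent 2. The exact normalization the authors used is not given in the record, so a mean-based form is assumed here purely for illustration:

```python
def minkowski_pool(local_scores, beta=2.0):
    """Pool local quality scores into a global score via a Minkowski sum.

    beta = 2 corresponds to the squared summation the study found to predict
    global quality; the mean-based normalization is an assumption made for
    this sketch, not taken from the paper.
    """
    n = len(local_scores)
    return (sum(abs(s) ** beta for s in local_scores) / n) ** (1.0 / beta)

pooled = minkowski_pool([1.0, 2.0, 2.0, 1.0])  # root-mean-square of the scores
```

Larger beta values weight the worst local regions more heavily, which is the usual motivation for super-linear pooling exponents in quality metrics.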

  4. Age and Acceptance of Euthanasia.

    ERIC Educational Resources Information Center

    Ward, Russell A.

    1980-01-01

    Study explores relationship between age (and sex and race) and acceptance of euthanasia. Women and non-Whites were less accepting because of religiosity. Among older people less acceptance was attributable to their lesser education and greater religiosity. Results suggest that quality of life in old age affects acceptability of euthanasia. (Author)

  5. Correlation of the clinical and physical image quality in chest radiography for average adults with a computed radiography imaging system

    PubMed Central

    Wood, T J; Beavis, A W; Saunderson, J R

    2013-01-01

    Objective: The purpose of this study was to examine the correlation between the quality of visually graded patient (clinical) chest images and a quantitative assessment of chest phantom (physical) images acquired with a computed radiography (CR) imaging system. Methods: The results of a previously published study, in which four experienced image evaluators graded computer-simulated postero-anterior chest images using a visual grading analysis scoring (VGAS) scheme, were used for the clinical image quality measurement. Contrast-to-noise ratio (CNR) and effective dose efficiency (eDE) were used as physical image quality metrics measured in a uniform chest phantom. Although optimal values of these physical metrics for chest radiography were not derived in this work, their correlation with VGAS in images acquired without an antiscatter grid across the diagnostic range of X-ray tube voltages was determined using Pearson’s correlation coefficient. Results: Clinical and physical image quality metrics increased with decreasing tube voltage. Statistically significant correlations between VGAS and CNR (R=0.87, p<0.033) and eDE (R=0.77, p<0.008) were observed. Conclusion: Medical physics experts may use the physical image quality metrics described here in quality assurance programmes and optimisation studies with a degree of confidence that they reflect the clinical image quality in chest CR images acquired without an antiscatter grid. Advances in knowledge: A statistically significant correlation has been found between the clinical and physical image quality in CR chest imaging. The results support the value of using CNR and eDE in the evaluation of quality in clinical thorax radiography. PMID:23568362
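
    The study's key statistic, a Pearson correlation between the clinical score (VGAS) and physical metrics such as CNR and eDE, is computed directly from paired scores. The data below are invented illustrative pairs, not values from the study:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / sqrt(sxx * syy)

vgas = [12.0, 14.5, 16.0, 18.5, 21.0]  # hypothetical visual grading scores
cnr = [1.1, 1.3, 1.5, 1.8, 2.0]        # hypothetical contrast-to-noise ratios
r = pearson_r(vgas, cnr)               # close to 1 for these monotone pairs
```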

  6. Cone beam computed tomography radiation dose and image quality assessments.

    PubMed

    Lofthag-Hansen, Sara

    2010-01-01

    Diagnostic radiology has undergone profound changes in the last 30 years. New technologies are available to the dental field, cone beam computed tomography (CBCT) being one of the most important. CBCT is a catch-all term for a technology comprising a variety of machines differing in many respects: patient positioning, volume size (FOV), radiation quality, image capturing and reconstruction, image resolution and radiation dose. When new technology is introduced, one must make sure that diagnostic accuracy is better, or at least as good as, that of the technology it can be expected to replace. Two versions of the Accuitomo (Morita, Japan) CBCT were tested: the 3D Accuitomo, with an image intensifier as detector and a FOV of 3 cm x 4 cm, and the 3D Accuitomo FPD, with a flat panel detector and FOVs of 4 cm x 4 cm and 6 cm x 6 cm. The 3D Accuitomo was compared with intra-oral radiography for endodontic diagnosis in 35 patients with 46 teeth analyzed, of which 41 were endodontically treated. Three observers assessed the images by consensus. The results showed that CBCT imaging was superior, with a higher number of teeth diagnosed with periapical lesions (42 vs 32 teeth). When evaluating 3D Accuitomo examinations of the posterior mandible in 30 patients, visibility of the marginal bone crest and mandibular canal, important anatomic structures for implant planning, was high, with good observer agreement among seven observers. Radiographic techniques have to be evaluated concerning radiation dose, which requires well-defined and easy-to-use methods. Two methods, the CT dose index (CTDI), the prevailing method for CT units, and the dose-area product (DAP), were evaluated for calculating effective dose (E) for both units. An asymmetric dose distribution was revealed when a clinical situation was simulated. Hence, the CTDI method was not applicable for these units with small FOVs. Based on DAP values from 90 patient examinations, effective dose was estimated for three diagnostic tasks: implant planning in posterior mandible and

  7. Comprehensive model for predicting perceptual image quality of smart mobile devices.

    PubMed

    Gong, Rui; Xu, Haisong; Luo, M R; Li, Haifeng

    2015-01-01

    An image quality model for smart mobile devices was proposed based on visual assessments of several image quality attributes. A series of psychophysical experiments were carried out on two kinds of smart mobile devices, i.e., smart phones and tablet computers, in which naturalness, colorfulness, brightness, contrast, sharpness, clearness, and overall image quality were visually evaluated under three lighting environments via the categorical judgment method for various application types of test images. On the basis of Pearson correlation coefficients and factor analysis, the overall image quality could first be predicted from two of its constituent attributes with multiple linear regression functions for each type of image, and then mathematical expressions were built to link the constituent image quality attributes with the physical parameters of smart mobile devices and image appearance factors. The procedure and algorithms were applicable to various smart mobile devices, different lighting conditions, and multiple types of images, and performance was verified by the visual data. PMID:25967010
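
    The per-image-type model, overall quality predicted from two constituent attributes by multiple linear regression, has the form Q = b0 + b1*A1 + b2*A2. A self-contained least-squares fit via the normal equations is sketched below; the fitting data are invented, and the authors' actual procedure and coefficients are not reproduced:

```python
def solve3(m, v):
    """Solve a 3x3 linear system by Gauss-Jordan elimination (no pivoting)."""
    a = [row[:] + [rv] for row, rv in zip(m, v)]
    for i in range(3):
        p = a[i][i]
        a[i] = [x / p for x in a[i]]
        for j in range(3):
            if j != i:
                f = a[j][i]
                a[j] = [x - f * y for x, y in zip(a[j], a[i])]
    return [a[k][3] for k in range(3)]

def fit_quality_model(a1, a2, q):
    """Least-squares coefficients (b0, b1, b2) for Q = b0 + b1*A1 + b2*A2,
    the form of the per-image-type regressions described in the abstract."""
    n = len(q)
    s = lambda xs: sum(xs)
    sab = s(x * y for x, y in zip(a1, a2))
    m = [[n, s(a1), s(a2)],
         [s(a1), s(x * x for x in a1), sab],
         [s(a2), sab, s(y * y for y in a2)]]
    v = [s(q),
         s(x * z for x, z in zip(a1, q)),
         s(y * z for y, z in zip(a2, q))]
    return solve3(m, v)

# Synthetic data generated from Q = 1 + 2*A1 + 3*A2; the fit recovers (1, 2, 3).
b = fit_quality_model([0.0, 1.0, 2.0, 0.0], [0.0, 0.0, 1.0, 1.0],
                      [1.0, 3.0, 8.0, 4.0])
```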

  8. No-reference remote sensing image quality assessment using a comprehensive evaluation factor

    NASA Astrophysics Data System (ADS)

    Wang, Lin; Wang, Xu; Li, Xiao; Shao, Xiaopeng

    2014-05-01

    Conventional image quality assessment algorithms, such as peak signal-to-noise ratio (PSNR), mean square error (MSE), and structural similarity (SSIM), need the original image as a reference. They are not applicable to remote sensing images, for which the original image cannot be assumed to be available. In this paper, a no-reference image quality assessment (NRIQA) algorithm is presented to evaluate the quality of remote sensing images. Since blur and noise (including stripe noise) are the common distortion factors affecting remote sensing image quality, a comprehensive evaluation factor is modeled to assess blur and noise by analyzing the image visual properties for different incentives combined with SSIM based on the human visual system (HVS), and to assess stripe noise by using phase congruency (PC). The experimental results show that this algorithm is an accurate and reliable method for remote sensing image quality assessment.
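
    The full-reference baselines the abstract contrasts with its no-reference approach are simple to state. A minimal sketch of MSE and PSNR on flattened pixel lists, assuming 8-bit data (peak value 255):

```python
from math import log10

def mse(ref, img):
    """Mean squared error between a reference and a test image (flat lists)."""
    return sum((a - b) ** 2 for a, b in zip(ref, img)) / len(ref)

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    e = mse(ref, img)
    return float("inf") if e == 0 else 10.0 * log10(peak ** 2 / e)

ref = [50.0, 80.0, 120.0, 200.0]   # toy reference pixels
deg = [52.0, 78.0, 121.0, 195.0]   # toy degraded pixels
```

Both metrics require `ref`, which is exactly the dependency the paper's no-reference factor is designed to remove.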

  9. Image quality assessment using multi-method fusion.

    PubMed

    Liu, Tsung-Jung; Lin, Weisi; Kuo, C-C Jay

    2013-05-01

    A new methodology for objective image quality assessment (IQA) with multi-method fusion (MMF) is presented in this paper. The research is motivated by the observation that there is no single method that can give the best performance in all situations. To achieve MMF, we adopt a regression approach. The new MMF score is set to be the nonlinear combination of scores from multiple methods, with suitable weights obtained by a training process. To improve the regression results further, we divide distorted images into three to five groups based on the distortion types and perform regression within each group, which is called "context-dependent MMF" (CD-MMF). One task in CD-MMF is to determine the context automatically, which is achieved by a machine learning approach. To further reduce the complexity of MMF, we apply selection algorithms to choose a small subset from the candidate method set. The result is very good even if only three quality assessment methods are included in the fusion process. The proposed MMF method using support vector regression is shown to outperform a large number of existing IQA methods by a significant margin when tested on six representative databases. PMID:23288335
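
    The fusion step, combining per-method scores with trained weights, can be sketched as follows. The paper trains a support vector regressor; the plain weighted sum and the weights below are a deliberate simplification with made-up numbers, standing in for the trained model:

```python
def mmf_score(method_scores, weights, bias=0.0):
    """Fuse per-method quality scores into a single MMF score.

    A weighted sum is used here as a minimal stand-in for the support vector
    regression the paper trains; weights and bias are assumed to come from
    some prior training process and are invented for this sketch.
    """
    return bias + sum(w * s for w, s in zip(weights, method_scores))

# Hypothetical scores from three IQA methods on one image, with made-up weights.
fused = mmf_score([0.82, 0.78, 0.91], weights=[0.5, 0.2, 0.3])
```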

  10. Performance of electronic portal imaging devices (EPIDs) used in radiotherapy: image quality and dose measurements.

    PubMed

    Cremers, F; Frenzel, Th; Kausch, C; Albers, D; Schönborn, T; Schmidt, R

    2004-05-01

    The aim of our study was to compare the image and dosimetric quality of two different imaging systems. The first one is a fluoroscopic electronic portal imaging device (first generation), while the second is based on an amorphous silicon flat-panel array (second generation). The parameters describing image quality include spatial resolution [modulation transfer function (MTF)], noise [noise power spectrum (NPS)], and signal-to-noise transfer [detective quantum efficiency (DQE)]. The dosimetric measurements were compared with ionization chamber as well as with film measurements. The response of the flat-panel imager and the fluoroscopic-optical device was determined by performing a two-step Monte Carlo simulation. All measurements were performed in a 6 MV linear accelerator photon beam. The resolution (MTF) of the fluoroscopic device (f1/2 = 0.3 mm(-1)) is higher than that of the amorphous-silicon-based system (f1/2 = 0.21 mm(-1)), which is due to the missing backscattered photons and the smaller pixel size. The noise measurements (NPS) show the correlation of neighboring pixels of the amorphous silicon electronic portal imaging device, whereas the NPS of the fluoroscopic system is frequency independent. At zero spatial frequency the DQE of the flat-panel imager has a value of 0.008 (0.8%). Due to the minor frequency dependence, this device may be almost x-ray quantum limited. Monte Carlo simulations verified these characteristics. For the fluoroscopic imaging system the DQE at low frequencies is about 0.0008 (0.08%) and degrades at higher frequencies. Dose measurements with the flat-panel imager revealed that images can only be directly converted to portal dose images if scatter can be neglected. Thus objects distant to the detector (e.g., an inhomogeneous dose distribution generated by a modificator) can be verified dosimetrically, while objects close to the detector (e.g., a patient) cannot be verified directly and must be scatter corrected prior to verification.
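
    The frequency-dependent DQE values quoted in the abstract follow the standard relation DQE(f) = S² · MTF²(f) / (q · NPS(f)), with S the mean large-area detector signal, q the incident photon fluence, and NPS the noise power spectrum. A one-line sketch; units must follow the usual conventions (e.g. IEC 62220), and the example numbers are illustrative only:

```python
def dqe(mean_signal, mtf_f, nps_f, fluence):
    """Detective quantum efficiency at one spatial frequency.

    Implements DQE(f) = S^2 * MTF(f)^2 / (q * NPS(f)). All quantities must be
    expressed in mutually consistent units; the caller is responsible for the
    usual normalization conventions.
    """
    return (mean_signal ** 2) * (mtf_f ** 2) / (fluence * nps_f)

# Illustrative numbers only (not from the study): S=100, MTF=1 at f=0,
# NPS=25, q=500 gives DQE(0) = 0.8.
d0 = dqe(mean_signal=100.0, mtf_f=1.0, nps_f=25.0, fluence=500.0)
```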

  11. SU-E-P-11: Comparison of Image Quality and Radiation Dose Between Different Scanner System in Routine Abdomen CT

    SciTech Connect

    Liao, S; Wang, Y; Weng, H

    2015-06-15

    Purpose To evaluate the image quality and radiation dose of routine abdomen computed tomography exams performed with the automatic tube current modulation (ATCM) technique on two 64-slice CT scanners from different manufacturers at our site. Materials and Methods A retrospective review of routine abdomen CT exams performed with two scanners, scanner A and scanner B, at our site. The standard deviation within a 12.5 mm x 12.5 mm region of interest at the portal hepatic level represented the image noise. The radiation dose was obtained from the CT DICOM image information, with the volume computed tomography dose index (CTDIvol) representing the CT radiation dose. The patients in this study were of normal weight (about 65–75 kg). Results The standard deviation for scanner A was smaller than for scanner B, so scanner A might provide better image quality than scanner B. On the other hand, the radiation dose of scanner A was higher than that of scanner B (about 50–60% higher) with ATCM. For both scanners, the radiation dose was under the diagnostic reference level. Conclusion The ATCM systems in modern CT scanners can contribute a significant reduction in radiation dose to the patient, but the reduction by ATCM systems from different CT scanner manufacturers varies slightly. Whatever CT scanner we use, it is necessary to find the acceptable threshold of image quality with the minimum possible radiation exposure to the patient, in agreement with the ALARA principle.
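
    The noise measurement described, the standard deviation of CT numbers inside a uniform region of interest at the portal hepatic level, is simple to sketch. The HU values below are invented for illustration:

```python
from math import sqrt

def roi_noise(pixels):
    """Image noise estimated as the (population) standard deviation of CT
    numbers (HU) inside a uniform region of interest, as done at the portal
    hepatic level in the study. `pixels` is a flat list of HU values."""
    n = len(pixels)
    mean = sum(pixels) / n
    return sqrt(sum((p - mean) ** 2 for p in pixels) / n)

noise = roi_noise([58.0, 62.0, 60.0, 64.0, 56.0])  # toy liver-ROI HU samples
```

A lower ROI standard deviation, as reported for scanner A, corresponds to less noisy and therefore likely better-quality images at the measured level.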

  12. Image quality, tissue heating, and frame rate trade-offs in acoustic radiation force impulse imaging.

    PubMed

    Bouchard, Richard R; Dahl, Jeremy J; Hsu, Stephen J; Palmeri, Mark L; Trahey, Gregg E

    2009-01-01

    The real-time application of acoustic radiation force impulse (ARFI) imaging requires both short acquisition times for a single ARFI image and repeated acquisition of these frames. Due to the high energy of pulses required to generate appreciable radiation force, however, repeated acquisitions could result in substantial transducer face and tissue heating. We describe and evaluate several novel beam sequencing schemes which, along with parallel-receive acquisition, are designed to reduce acquisition time and heating. These techniques reduce the total number of radiation force impulses needed to generate an image and minimize the time between successive impulses. We present qualitative and quantitative analyses of the trade-offs in image quality resulting from the acquisition schemes. Results indicate that these techniques yield a significant improvement in frame rate with only moderate decreases in image quality. Tissue and transducer face heating resulting from these schemes is assessed through finite element method modeling and thermocouple measurements. Results indicate that heating issues can be mitigated by employing ARFI acquisition sequences that utilize the highest track-to-excitation ratio possible. PMID:19213633

  13. Open source database of images DEIMOS: extension for large-scale subjective image quality assessment

    NASA Astrophysics Data System (ADS)

    Vítek, Stanislav

    2014-09-01

    DEIMOS (Database of Images: Open Source) is an open-source database of images and video sequences for testing, verification and comparison of various image and/or video processing techniques such as compression, reconstruction and enhancement. This paper deals with an extension of the database that allows large-scale web-based subjective image quality assessment to be performed. The extension implements both an administrative and a client interface. The proposed system is aimed mainly at mobile communication devices and takes advantage of HTML5 technology; this means that participants do not need to install any application, and assessment can be performed using a web browser. The assessment campaign administrator can select images from the large database and then apply rules defined by various test procedure recommendations. The standard test procedures may be fully customized and saved as a template. Alternatively, the administrator can define a custom test using images from the pool and other components, such as evaluation forms and ongoing questionnaires. The image sequence is delivered to the online client, e.g. a smartphone or tablet, as a fully automated assessment sequence, or the viewer can decide on the timing of the assessment if required. Environmental data and viewing conditions (e.g. illumination, vibrations, GPS coordinates, etc.) may be collected and subsequently analyzed.

  14. Quality Enhancement and Nerve Fibre Layer Artefacts Removal in Retina Fundus Images by Off Axis Imaging

    SciTech Connect

    Giancardo, Luca; Meriaudeau, Fabrice; Karnowski, Thomas Paul; Li, Yaquin; Tobin Jr, Kenneth William; Chaum, Edward

    2011-01-01

    Retinal fundus images acquired with non-mydriatic digital fundus cameras are a versatile tool for the diagnosis of various retinal diseases. Because of the ease of use of newer camera models and their relatively low cost, these cameras are employed worldwide by retina specialists to diagnose diabetic retinopathy and other degenerative diseases. Even with this relative ease of use, the images produced by these systems sometimes suffer from reflectance artefacts, mainly due to the nerve fibre layer (NFL) or other camera-lens-related reflections. We propose a technique that employs multiple fundus images acquired from the same patient to obtain a single higher-quality image without these reflectance artefacts. The removal of bright artefacts, and particularly of NFL reflectance, can greatly reduce false positives in the detection of retinal lesions such as exudates, drusen and cotton wool spots by automatic systems or manual inspection. If enough redundant information is provided by the multiple images, this technique also compensates for suboptimal illumination. The fundus images are acquired in a straightforward but unorthodox manner, i.e. the stare point of the patient is changed between each shot but the camera is kept fixed. Between each shot, the apparent shape and position of all the retinal structures that do not exhibit isotropic reflectance (e.g. bright artefacts) change. This physical effect is exploited by our algorithm in order to extract the pixels belonging to the inner layers of the retina, hence obtaining a single artefact-free image.

  15. Human vision model for the objective evaluation of perceived image quality applied to MRI and image restoration

    NASA Astrophysics Data System (ADS)

    Salem, Kyle A.; Wilson, David L.

    2002-12-01

    We are developing a method to objectively quantify image quality and applying it to the optimization of interventional magnetic resonance imaging (iMRI). In iMRI, images are used for live-time guidance of interventional procedures such as the minimally invasive treatment of cancer. Hence, not only does one desire high quality images, but they must also be acquired quickly. In iMRI, images are acquired in the Fourier domain, or k-space, and this allows many creative ways to image quickly such as keyhole imaging where k-space is preferentially subsampled, yielding suboptimal images at very high frame rates. Other techniques include spiral, radial, and the combined acquisition technique. We have built a perceptual difference model (PDM) that incorporates various components of the human visual system. The PDM was validated using subjective image quality ratings by naive observers and task-based measures defined by interventional radiologists. Using the PDM, we investigated the effects of various imaging parameters on image quality and quantified the degradation due to novel imaging techniques. Results have provided significant information about imaging time versus quality tradeoffs aiding the MR sequence engineer. The PDM has also been used to evaluate other applications such as Dixon fat suppressed MRI and image restoration. In image restoration, the PDM has been used to evaluate the Generalized Minimal Residual (GMRES) image restoration method and to examine the ability to appropriately determine a stopping condition for such iterative methods. The PDM has been shown to be an objective tool for measuring image quality and can be used to determine the optimal methodology for various imaging applications.

  16. Diffusion imaging quality control via entropy of principal direction distribution.

    PubMed

    Farzinfar, Mahshid; Oguz, Ipek; Smith, Rachel G; Verde, Audrey R; Dietrich, Cheryl; Gupta, Aditya; Escolar, Maria L; Piven, Joseph; Pujol, Sonia; Vachet, Clement; Gouttard, Sylvain; Gerig, Guido; Dager, Stephen; McKinstry, Robert C; Paterson, Sarah; Evans, Alan C; Styner, Martin A

    2013-11-15

    Diffusion MR imaging has received increasing attention in the neuroimaging community, as it yields new insights into the microstructural organization of white matter that are not available with conventional MRI techniques. While the technology has enormous potential, diffusion MRI suffers from a unique and complex set of image quality problems, limiting the sensitivity of studies and reducing the accuracy of findings. Furthermore, the acquisition time for diffusion MRI is longer than conventional MRI due to the need for multiple acquisitions to obtain directionally encoded Diffusion Weighted Images (DWI). This leads to increased motion artifacts, reduced signal-to-noise ratio (SNR), and increased proneness to a wide variety of artifacts, including eddy-current and motion artifacts, "venetian blind" artifacts, as well as slice-wise and gradient-wise inconsistencies. Such artifacts mandate stringent Quality Control (QC) schemes in the processing of diffusion MRI data. Most existing QC procedures are conducted in the DWI domain and/or on a voxel level, but our own experiments show that these methods often do not fully detect and eliminate certain types of artifacts, often only visible when investigating groups of DWIs or a derived diffusion model, such as the most widely employed diffusion tensor imaging (DTI). Here, we propose a novel regional QC measure in the DTI domain that employs the entropy of the regional distribution of the principal directions (PD). The PD entropy quantifies the scattering and spread of the principal diffusion directions and is invariant to the patient's position in the scanner. A high entropy value indicates that the PDs are distributed relatively uniformly, while a low entropy value indicates the presence of clusters in the PD distribution. The novel QC measure is intended to complement the existing set of QC procedures by detecting and correcting residual artifacts, which cause directional bias in the measured PD.
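    The entropy measure described above can be sketched numerically: bin the principal directions, then take the Shannon entropy of the bin occupancies. The sketch below is a simplified 2D (angle-only) illustration with an illustrative `direction_entropy` helper, not the paper's regional 3D implementation; diffusion PDs are axes (antipodally symmetric), which a real implementation must account for.

```python
import math

def direction_entropy(angles_deg, n_bins=36):
    """Shannon entropy (bits) of a binned distribution of direction angles.

    High entropy: directions spread nearly uniformly; low entropy: directions
    clustered, hinting at directional bias from residual artifacts.
    """
    counts = [0] * n_bins
    bin_width = 360.0 / n_bins
    for a in angles_deg:
        counts[int(a % 360.0 // bin_width)] += 1
    n = len(angles_deg)
    return -sum((c / n) * math.log2(c / n) for c in counts if c)
```

    A single repeated direction scores 0 bits; 36 directions spread one per bin score log2(36) ≈ 5.17 bits.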

  17. Diffusion imaging quality control via entropy of principal direction distribution

    PubMed Central

    Oguz, Ipek; Smith, Rachel G.; Verde, Audrey R.; Dietrich, Cheryl; Gupta, Aditya; Escolar, Maria L.; Piven, Joseph; Pujol, Sonia; Vachet, Clement; Gouttard, Sylvain; Gerig, Guido; Dager, Stephen; McKinstry, Robert C.; Paterson, Sarah; Evans, Alan C.; Styner, Martin A.

    2013-01-01

    Diffusion MR imaging has received increasing attention in the neuroimaging community, as it yields new insights into the microstructural organization of white matter that are not available with conventional MRI techniques. While the technology has enormous potential, diffusion MRI suffers from a unique and complex set of image quality problems, limiting the sensitivity of studies and reducing the accuracy of findings. Furthermore, the acquisition time for diffusion MRI is longer than conventional MRI due to the need for multiple acquisitions to obtain directionally encoded Diffusion Weighted Images (DWI). This leads to increased motion artifacts, reduced signal-to-noise ratio (SNR), and increased proneness to a wide variety of artifacts, including eddy-current and motion artifacts, “venetian blind” artifacts, as well as slice-wise and gradient-wise inconsistencies. Such artifacts mandate stringent Quality Control (QC) schemes in the processing of diffusion MRI data. Most existing QC procedures are conducted in the DWI domain and/or on a voxel level, but our own experiments show that these methods often do not fully detect and eliminate certain types of artifacts, often only visible when investigating groups of DWIs or a derived diffusion model, such as the most widely employed diffusion tensor imaging (DTI). Here, we propose a novel regional QC measure in the DTI domain that employs the entropy of the regional distribution of the principal directions (PD). The PD entropy quantifies the scattering and spread of the principal diffusion directions and is invariant to the patient's position in the scanner. A high entropy value indicates that the PDs are distributed relatively uniformly, while a low entropy value indicates the presence of clusters in the PD distribution. The novel QC measure is intended to complement the existing set of QC procedures by detecting and correcting residual artifacts. Such residual artifacts cause directional bias in the measured PD.

  18. A widefield fluorescence microscope with a linear image sensor for image cytometry of biospecimens: Considerations for image quality optimization

    NASA Astrophysics Data System (ADS)

    Hutcheson, Joshua A.; Majid, Aneeka A.; Powless, Amy J.; Muldoon, Timothy J.

    2015-09-01

    Linear image sensors have been widely used in numerous research and industry applications to provide continuous imaging of moving objects. Here, we present a widefield fluorescence microscope with a linear image sensor used to image translating objects for image cytometry. First, a calibration curve was characterized for a custom microfluidic chamber over a span of volumetric pump rates. Image data were also acquired using 15 μm fluorescent polystyrene spheres on a slide with a motorized translation stage in order to match linear translation speed with line exposure periods to preserve the image aspect ratio. Aspect ratios were then calculated after imaging to ensure quality control of image data. Fluorescent beads were imaged in suspension flowing through the microfluidic chamber, pumped by a mechanical syringe pump at 16 μl min⁻¹ with a line exposure period of 150 μs. The line period was selected to acquire images of fluorescent beads with a 40 dB signal-to-background ratio. A motorized translation stage was then used to transport conventional glass slides of stained cellular biospecimens. Whole blood collected from healthy volunteers was stained with 0.02% (w/v) proflavine hemisulfate and imaged to highlight leukocyte morphology with a 1.56 mm × 1.28 mm field of view (1540 ms total acquisition time). Oral squamous cells were also collected from healthy volunteers and stained with 0.01% (w/v) proflavine hemisulfate to demonstrate quantifiable subcellular features and an average nuclear-to-cytoplasmic ratio of 0.03 (n = 75), with a resolution of 0.31 μm pixel⁻¹.
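    The speed-matching step described above reduces to simple arithmetic: one line exposure period should advance the sample by exactly one pixel. A minimal sketch (the function name is illustrative) using the values reported in the abstract, 0.31 μm per pixel and a 150 μs line period:

```python
def stage_speed_mm_s(pixel_size_um, line_period_us):
    """Translation speed (mm/s) at which one line exposure period advances the
    sample by exactly one pixel, preserving the image aspect ratio."""
    return (pixel_size_um * 1e-3) / (line_period_us * 1e-6)

# 0.31 um/pixel at a 150 us line period -> about 2.07 mm/s
speed = stage_speed_mm_s(0.31, 150)
```

    Running faster than this stretches the image along the scan axis; slower compresses it, which is what the post-acquisition aspect-ratio check guards against.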

  19. A widefield fluorescence microscope with a linear image sensor for image cytometry of biospecimens: Considerations for image quality optimization.

    PubMed

    Hutcheson, Joshua A; Majid, Aneeka A; Powless, Amy J; Muldoon, Timothy J

    2015-09-01

    Linear image sensors have been widely used in numerous research and industry applications to provide continuous imaging of moving objects. Here, we present a widefield fluorescence microscope with a linear image sensor used to image translating objects for image cytometry. First, a calibration curve was characterized for a custom microfluidic chamber over a span of volumetric pump rates. Image data were also acquired using 15 μm fluorescent polystyrene spheres on a slide with a motorized translation stage in order to match linear translation speed with line exposure periods to preserve the image aspect ratio. Aspect ratios were then calculated after imaging to ensure quality control of image data. Fluorescent beads were imaged in suspension flowing through the microfluidic chamber, pumped by a mechanical syringe pump at 16 μl min⁻¹ with a line exposure period of 150 μs. The line period was selected to acquire images of fluorescent beads with a 40 dB signal-to-background ratio. A motorized translation stage was then used to transport conventional glass slides of stained cellular biospecimens. Whole blood collected from healthy volunteers was stained with 0.02% (w/v) proflavine hemisulfate and imaged to highlight leukocyte morphology with a 1.56 mm × 1.28 mm field of view (1540 ms total acquisition time). Oral squamous cells were also collected from healthy volunteers and stained with 0.01% (w/v) proflavine hemisulfate to demonstrate quantifiable subcellular features and an average nuclear-to-cytoplasmic ratio of 0.03 (n = 75), with a resolution of 0.31 μm pixel⁻¹. PMID:26429450

  20. A widefield fluorescence microscope with a linear image sensor for image cytometry of biospecimens: Considerations for image quality optimization

    SciTech Connect

    Hutcheson, Joshua A.; Majid, Aneeka A.; Powless, Amy J.; Muldoon, Timothy J.

    2015-09-15

    Linear image sensors have been widely used in numerous research and industry applications to provide continuous imaging of moving objects. Here, we present a widefield fluorescence microscope with a linear image sensor used to image translating objects for image cytometry. First, a calibration curve was characterized for a custom microfluidic chamber over a span of volumetric pump rates. Image data were also acquired using 15 μm fluorescent polystyrene spheres on a slide with a motorized translation stage in order to match linear translation speed with line exposure periods to preserve the image aspect ratio. Aspect ratios were then calculated after imaging to ensure quality control of image data. Fluorescent beads were imaged in suspension flowing through the microfluidic chamber, pumped by a mechanical syringe pump at 16 μl min⁻¹ with a line exposure period of 150 μs. The line period was selected to acquire images of fluorescent beads with a 40 dB signal-to-background ratio. A motorized translation stage was then used to transport conventional glass slides of stained cellular biospecimens. Whole blood collected from healthy volunteers was stained with 0.02% (w/v) proflavine hemisulfate and imaged to highlight leukocyte morphology with a 1.56 mm × 1.28 mm field of view (1540 ms total acquisition time). Oral squamous cells were also collected from healthy volunteers and stained with 0.01% (w/v) proflavine hemisulfate to demonstrate quantifiable subcellular features and an average nuclear-to-cytoplasmic ratio of 0.03 (n = 75), with a resolution of 0.31 μm pixel⁻¹.

  1. Optimization of exposure in panoramic radiography while maintaining image quality using adaptive filtering.

    PubMed

    Svenson, Björn; Larsson, Lars; Båth, Magnus

    2016-01-01

    Objective The purpose of the present study was to investigate the potential of using advanced external adaptive image processing for maintaining image quality while reducing exposure in dental panoramic storage phosphor plate (SPP) radiography. Materials and methods Thirty-seven SPP radiographs of a skull phantom were acquired using a Scanora panoramic X-ray machine with various tube load, tube voltage, SPP sensitivity and filtration settings. The radiographs were processed using General Operator Processor (GOP) technology. Fifteen dentists, all within the dental radiology field, compared the structural image quality of each radiograph with a reference image on a 5-point rating scale in a visual grading characteristics (VGC) study. The reference image was acquired with the acquisition parameters commonly used in daily operation (70 kVp, 150 mAs and sensitivity class 200) and processed using the standard process parameters supplied by the modality vendor. Results All GOP-processed images acquired at a dose similar to (or higher than) that of the reference image showed higher image quality than the reference. All GOP-processed images with image quality similar to that of the reference were acquired at a lower dose than the reference. This indicates that the external image processing improved the image quality compared with the standard processing. Regarding acquisition parameters, no strong dependency of the image quality on the radiation quality was seen; the image quality was mainly affected by the dose. Conclusions The present study indicates that advanced external adaptive image processing may be beneficial in panoramic radiography for increasing the image quality of SPP radiographs or for reducing the exposure while maintaining image quality. PMID:26478956

  2. Beyond image quality: designing engaging interactions with digital products

    NASA Astrophysics Data System (ADS)

    de Ridder, Huib; Rozendaal, Marco C.

    2008-02-01

    Ubiquitous computing (or Ambient Intelligence) promises a world in which information is available anytime, anywhere, and with which humans can interact in a natural, multimodal way. In such a world, perceptual image quality remains an important criterion, since most information will be displayed visually, but other criteria such as enjoyment, fun, engagement and hedonic quality are emerging. This paper deals with engagement: the intrinsically enjoyable readiness to put more effort into exploring and/or using a product than strictly required, thus attracting and keeping the user's attention for a longer period of time. The impact of the experienced richness of an interface, both its visual richness and the range of possible manipulations, was investigated in a series of experiments employing game-like user interfaces. This resulted in the extension of an existing conceptual framework relating engagement to richness by means of two intermediating variables, namely experienced challenge and sense of control. Predictions from this revised framework are evaluated against results of an earlier experiment assessing the ergonomic and hedonic qualities of interactive media. Test material consisted of interactive CD-ROMs containing presentations of three companies for future customers.

  3. Damage and quality assessment in wheat by NIR hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Delwiche, Stephen R.; Kim, Moon S.; Dong, Yanhong

    2010-04-01

    Fusarium head blight is a fungal disease that affects the world's small grains, such as wheat and barley. Attacking the spikelets during development, the fungus causes a reduction of yield and grain of poorer processing quality. It also is a health concern because of the secondary metabolite, deoxynivalenol, which often accompanies the fungus. While chemical methods exist to measure the concentration of the mycotoxin and manual visual inspection is used to ascertain the level of Fusarium damage, research has been active in developing fast, optically based techniques that can assess this form of damage. In the current study a near-infrared (1000-1700 nm) hyperspectral image system was assembled and applied to Fusarium-damaged kernel recognition. With anticipation of an eventual multispectral imaging system design, 5 wavelengths were manually selected from a pool of 146 images as the most promising, such that when combined in pairs or triplets, Fusarium damage could be identified. We present the results of two pairs of wavelengths [(1199, 1474 nm) and (1315, 1474 nm)] whose reflectance values produced adequate separation of kernels of healthy appearance (i.e., asymptomatic condition) from kernels possessing Fusarium damage.

  4. Comparison of no-reference image quality assessment machine learning-based algorithms on compressed images

    NASA Astrophysics Data System (ADS)

    Charrier, Christophe; Saadane, AbdelHakim; Fernandez-Maloigne, Christine

    2015-01-01

    No-reference image quality metrics are of fundamental interest as they can be embedded in practical applications. The main goal of this paper is to perform a comparative study of seven well-known no-reference learning-based image quality algorithms. To test the performance of these algorithms, three public databases are used. As a first step, the trial algorithms are compared when no new learning is performed. The second step investigates how the training set influences the results. The Spearman Rank Ordered Correlation Coefficient (SROCC) is utilized to measure and compare the performance. In addition, a hypothesis test is conducted to evaluate the statistical significance of the performance of each tested algorithm.
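    SROCC, the comparison metric used above, depends only on rank order, so it rewards monotonic agreement between objective scores and subjective ratings regardless of scale. A minimal, self-contained sketch with tie-aware average ranking (in practice one would typically call `scipy.stats.spearmanr`):

```python
def rank(values):
    """Ranks (1-based), with tied values receiving the average of their ranks."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank across the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def srocc(x, y):
    """Spearman rank-order correlation: Pearson correlation of the ranks."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den
```

    Perfectly monotonic score pairs give +1 (increasing) or −1 (decreasing), even when the relationship is nonlinear.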

  5. Content-weighted video quality assessment using a three-component image model

    NASA Astrophysics Data System (ADS)

    Li, Chaofeng; Bovik, Alan Conrad

    2010-01-01

    Objective image and video quality measures play important roles in numerous image and video processing applications. In this work, we propose a new content-weighted method for full-reference (FR) video quality assessment using a three-component image model. Using the idea that different image regions have different perceptual significance relative to quality, we deploy a model that classifies image local regions according to their image gradient properties, then apply variable weights to structural similarity image index (SSIM) [and peak signal-to-noise ratio (PSNR)] scores according to region. A frame-based video quality assessment algorithm is thereby derived. Experimental results on the Video Quality Experts Group (VQEG) FR-TV Phase 1 test dataset show that the proposed algorithm outperforms existing video quality assessment methods.
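    The content-weighted pooling idea can be sketched as follows: classify each pixel as smooth, texture or edge from its gradient magnitude, then pool per-pixel quality scores (e.g. local SSIM or PSNR-style error maps) with class-dependent weights. The thresholds and weights below are illustrative placeholders, not the paper's values:

```python
import numpy as np

def classify_regions(img, t1, t2):
    """Label pixels by gradient magnitude: 2 = edge (g >= t1),
    1 = texture (t2 <= g < t1), 0 = smooth (g < t2); requires t1 > t2."""
    gy, gx = np.gradient(img.astype(float))
    g = np.hypot(gx, gy)
    regions = np.zeros(img.shape, dtype=int)
    regions[g >= t2] = 1
    regions[g >= t1] = 2
    return regions

def pooled_quality(local_scores, regions, weights=(0.25, 0.5, 1.0)):
    """Weighted average of per-pixel quality scores; weights are
    (smooth, texture, edge), emphasizing perceptually important regions."""
    w = np.asarray(weights)[regions]
    return float((w * local_scores).sum() / w.sum())
```

    A uniform score map pools to that score regardless of the region weights; when scores differ across regions, the edge and texture classes dominate the pooled value.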

  6. Characterizing image quality in a scanning laser ophthalmoscope with differing pinholes and induced scattered light

    NASA Astrophysics Data System (ADS)

    Hunter, Jennifer J.; Cookson, Christopher J.; Kisilak, Marsha L.; Bueno, Juan M.; Campbell, Melanie C. W.

    2007-05-01

    We quantify the effects on scanning laser ophthalmoscope image quality of controlled amounts of scattered light, confocal pinhole diameter, and age. Optical volumes through the optic nerve head were recorded for a range of pinhole sizes in 12 subjects (19-64 years). The usefulness of various overall metrics in quantifying the changes in fundus image quality is assessed. For registered and averaged images, we calculated signal-to-noise ratio, entropy, and acutance. Entropy was best able to distinguish differing image quality. The optimum confocal pinhole diameter was found to be 50 μm (on the retina), providing improved axial resolution and image quality under all conditions.
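    Entropy, the metric that best distinguished image quality above, measures the spread of the gray-level histogram. A minimal sketch for an image given as a flat list of integer pixel values (a simplified stand-in for the study's computation on registered, averaged images):

```python
import math
from collections import Counter

def image_entropy(pixels):
    """Shannon entropy (bits) of the gray-level histogram of a flat pixel list."""
    n = len(pixels)
    counts = Counter(pixels)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

    A constant image scores 0 bits; an 8-bit image using all 256 levels equally scores 8 bits.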

  7. Image Quality Performance Measurement of the microPET Focus 120

    NASA Astrophysics Data System (ADS)

    Ballado, Fernando Trejo; López, Nayelli Ortega; Flores, Rafael Ojeda; Ávila-Rodríguez, Miguel A.

    2010-12-01

    The aim of this work is to evaluate the characteristics involved in the image reconstruction of the microPET Focus 120. Two different phantoms were used for this evaluation: a miniature hot-rod Derenzo phantom and a National Electrical Manufacturers Association (NEMA) NU4-2008 image quality (IQ) phantom. The best image quality was obtained when using OSEM3D as the reconstruction method, reaching a spatial resolution of 1.5 mm with the Derenzo phantom filled with 18F. Image quality test results indicate a superior image quality for the Focus 120 when compared to previous microPET models.

  8. Task-based measures of image quality and their relation to radiation dose and patient risk

    PubMed Central

    Barrett, Harrison H.; Myers, Kyle J.; Hoeschen, Christoph; Kupinski, Matthew A.; Little, Mark P.

    2015-01-01

    The theory of task-based assessment of image quality is reviewed in the context of imaging with ionizing radiation, and objective figures of merit (FOMs) for image quality are summarized. The variation of the FOMs with the task, the observer and especially with the mean number of photons recorded in the image is discussed. Then various standard methods for specifying radiation dose are reviewed and related to the mean number of photons in the image and hence to image quality. Current knowledge of the relation between local radiation dose and the risk of various adverse effects is summarized, and some graphical depictions of the tradeoffs between image quality and risk are introduced. Then various dose-reduction strategies are discussed in terms of their effect on task-based measures of image quality. PMID:25564960

  9. Task-based measures of image quality and their relation to radiation dose and patient risk

    NASA Astrophysics Data System (ADS)

    Barrett, Harrison H.; Myers, Kyle J.; Hoeschen, Christoph; Kupinski, Matthew A.; Little, Mark P.

    2015-01-01

    The theory of task-based assessment of image quality is reviewed in the context of imaging with ionizing radiation, and objective figures of merit (FOMs) for image quality are summarized. The variation of the FOMs with the task, the observer and especially with the mean number of photons recorded in the image is discussed. Then various standard methods for specifying radiation dose are reviewed and related to the mean number of photons in the image and hence to image quality. Current knowledge of the relation between local radiation dose and the risk of various adverse effects is summarized, and some graphical depictions of the tradeoffs between image quality and risk are introduced. Then various dose-reduction strategies are discussed in terms of their effect on task-based measures of image quality.

  10. Task-based measures of image quality and their relation to radiation dose and patient risk.

    PubMed

    Barrett, Harrison H; Myers, Kyle J; Hoeschen, Christoph; Kupinski, Matthew A; Little, Mark P

    2015-01-21

    The theory of task-based assessment of image quality is reviewed in the context of imaging with ionizing radiation, and objective figures of merit (FOMs) for image quality are summarized. The variation of the FOMs with the task, the observer and especially with the mean number of photons recorded in the image is discussed. Then various standard methods for specifying radiation dose are reviewed and related to the mean number of photons in the image and hence to image quality. Current knowledge of the relation between local radiation dose and the risk of various adverse effects is summarized, and some graphical depictions of the tradeoffs between image quality and risk are introduced. Then various dose-reduction strategies are discussed in terms of their effect on task-based measures of image quality. PMID:25564960
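    A concrete example of the task-based FOMs reviewed in these papers is the SNR of the Hotelling observer for a signal-known-exactly detection task, SNR² = sᵀK⁻¹s, with s the mean difference signal and K the noise covariance; because photon noise variance grows with the mean photon count while the signal grows proportionally, this FOM improves roughly as the square root of the count, which is how dose enters the figure of merit. A minimal numerical sketch:

```python
import numpy as np

def hotelling_snr(signal, noise_cov):
    """SNR of the Hotelling (prewhitening) observer for a signal-known-exactly
    detection task: sqrt(s^T K^-1 s)."""
    s = np.asarray(signal, dtype=float).ravel()
    K = np.asarray(noise_cov, dtype=float)
    return float(np.sqrt(s @ np.linalg.solve(K, s)))
```

    For white noise with unit variance the SNR reduces to the Euclidean norm of the signal; quadrupling the noise variance halves it.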

  11. SENTINEL-2 image quality and level 1 processing

    NASA Astrophysics Data System (ADS)

    Meygret, Aimé; Baillarin, Simon; Gascon, Ferran; Hillairet, Emmanuel; Dechoz, Cécile; Lacherade, Sophie; Martimort, Philippe; Spoto, François; Henry, Patrice; Duca, Riccardo

    2009-08-01

    In the framework of the Global Monitoring for Environment and Security (GMES) programme, the European Space Agency (ESA), in partnership with the European Commission (EC), is developing the SENTINEL-2 optical imaging mission, devoted to the operational monitoring of land and coastal areas. The SENTINEL-2 mission is based on a twin-satellite configuration deployed in a polar sun-synchronous orbit and is designed to offer a unique combination of systematic global coverage with a wide field of view (290 km), a high revisit rate (5 days at the equator with two satellites), high spatial resolution (10 m, 20 m and 60 m) and multi-spectral imagery (13 bands in the visible and short-wave infrared spectrum). SENTINEL-2 will ensure data continuity of the SPOT and LANDSAT multispectral sensors while accounting for future service evolution. This paper presents the main geometric and radiometric image quality requirements for the mission. The strong multi-spectral and multi-temporal registration requirements constrain the stability of the platform and the ground processing, which will automatically refine the geometric physical model through correlation techniques. The geolocation of the images will benefit from a worldwide reference data set made of SENTINEL-2 data strips geolocated through a global space-triangulation. This processing is detailed through the description of the Level-1C production, which will provide users with ortho-images of top-of-atmosphere reflectances. The huge amount of data (1.4 Tbits per orbit) is also a challenge for the ground processing, which will process all acquired data to Level 1C. Finally, we discuss the different geometric (line of sight, focal plane cartography, ...) and radiometric (relative and absolute camera sensitivity) in-flight calibration methods that will take advantage of the on-board sun diffuser and ground targets to meet the stringent mission requirements.

  12. Mutual information as a measure of image quality for 3D dynamic lung imaging with EIT

    PubMed Central

    Crabb, M G; Davidson, J L; Little, R; Wright, P; Morgan, A R; Miller, C A; Naish, J H; Parker, G J M; Kikinis, R; McCann, H; Lionheart, W R B

    2014-01-01

    We report on a pilot study of dynamic lung electrical impedance tomography (EIT) at the University of Manchester. Low-noise EIT data at 100 frames per second (fps) were obtained from healthy male subjects during controlled breathing, followed by magnetic resonance imaging (MRI) subsequently used for spatial validation of the EIT reconstruction. The torso surface in the MR image and electrode positions obtained using MRI fiducial markers informed the construction of a 3D finite element model extruded along the caudal-distal axis of the subject. Small changes in the boundary that occur during respiration were accounted for by incorporating the sensitivity with respect to boundary shape into a robust temporal difference reconstruction algorithm. EIT and MRI images were co-registered using the open source medical imaging software, 3D Slicer. A quantitative comparison of quality of different EIT reconstructions was achieved through calculation of the mutual information with a lung-segmented MR image. EIT reconstructions using a linear shape correction algorithm reduced boundary image artefacts, yielding better contrast of the lungs, and had 10% greater mutual information compared with a standard linear EIT reconstruction. PMID:24710978
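    Mutual information, used above to score EIT reconstructions against a lung-segmented MR image, can be estimated from the joint histogram of two aligned, equally sized images. A minimal sketch over flat lists of quantized intensities or labels:

```python
import math
from collections import Counter

def mutual_information(x, y):
    """Mutual information (bits) between two equal-length lists of quantized
    intensities/labels, estimated from their joint histogram."""
    n = len(x)
    pxy = Counter(zip(x, y))   # joint counts
    px, py = Counter(x), Counter(y)  # marginal counts
    mi = 0.0
    for (a, b), c in pxy.items():
        # (c/n) * log2( p(a,b) / (p(a) p(b)) ), with the n's cancelled
        mi += (c / n) * math.log2(c * n / (px[a] * py[b]))
    return mi
```

    Identical inputs give MI equal to the entropy of either; independent inputs give 0 bits, so higher MI against the MRI segmentation indicates a spatially better-matched EIT reconstruction.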

  13. Sparse Representation-Based Image Quality Index With Adaptive Sub-Dictionaries.

    PubMed

    Li, Leida; Cai, Hao; Zhang, Yabin; Lin, Weisi; Kot, Alex C; Sun, Xingming

    2016-08-01

    Distortions cause structural changes in digital images, leading to degraded visual quality. Dictionary-based sparse representation has been widely studied recently due to its ability to extract inherent image structures. Meanwhile, it can extract image features with slightly higher-level semantics. Intuitively, sparse representation can be used for image quality assessment, because visible distortions can cause significant changes to the sparse features. In this paper, a new sparse representation-based image quality assessment model is proposed based on the construction of adaptive sub-dictionaries. An overcomplete dictionary trained from natural images is employed to capture the structure changes between the reference and distorted images by sparse feature extraction via adaptive sub-dictionary selection. Based on the observation that image sparse features are invariant to weak degradations and that perceived image quality is generally influenced by diverse factors, three auxiliary quality features are added, including gradient, color, and luminance information. The proposed method is not sensitive to training images, so a universal dictionary can be adopted for quality evaluation. Extensive experiments on five public image quality databases demonstrate that the proposed method produces state-of-the-art results and performs consistently well across different image quality databases. PMID:27295675
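    The sparse feature extraction at the core of such metrics can be sketched with orthogonal matching pursuit (OMP): greedily pick the dictionary atom most correlated with the residual, then re-fit all selected coefficients by least squares. This is a generic OMP sketch, not the paper's trained overcomplete dictionary or its adaptive sub-dictionary selection:

```python
import numpy as np

def omp(D, x, n_nonzero):
    """Orthogonal matching pursuit: sparse-code x over dictionary D
    (columns assumed unit-norm). Returns a sparse coefficient vector."""
    D = np.asarray(D, dtype=float)
    x = np.asarray(x, dtype=float)
    residual = x.copy()
    support = []
    for _ in range(n_nonzero):
        # atom most correlated with the current residual
        idx = int(np.argmax(np.abs(D.T @ residual)))
        if idx not in support:
            support.append(idx)
        # least-squares re-fit over all selected atoms
        sol, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ sol
    coef = np.zeros(D.shape[1])
    coef[support] = sol
    return coef
```

    A quality model of this family would compare the sparse coefficients of the reference and distorted images; with an orthonormal dictionary and a 1-sparse signal, a single OMP iteration recovers the exact coefficient.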

  14. Phantom dosimetry and image quality of i-CAT FLX cone-beam computed tomography

    PubMed Central

    Ludlow, John B.; Walker, Cameron

    2013-01-01

    Introduction Increasing use of cone-beam computed tomography in orthodontics has been coupled with heightened concern about the long-term risks of x-ray exposure in orthodontic populations. An industry response to this has been to offer low-exposure alternative scanning options in newer cone-beam computed tomography models. Methods Effective doses resulting from various combinations of field size and field location, comparing child and adult anthropomorphic phantoms using the recently introduced i-CAT FLX cone-beam computed tomography unit, were measured with optically stimulated dosimetry using previously validated protocols. Scan protocols included High Resolution (360° rotation, 600 image frames, 120 kVp, 5 mA, 7.4 s), Standard (360°, 300 frames, 120 kVp, 5 mA, 3.7 s), QuickScan (180°, 160 frames, 120 kVp, 5 mA, 2 s) and QuickScan+ (180°, 160 frames, 90 kVp, 3 mA, 2 s). Contrast-to-noise ratio (CNR) was calculated as a quantitative measure of image quality for the various exposure options using the QUART DVT phantom. Results Child phantom doses were on average 36% greater than adult phantom doses. QuickScan+ protocols resulted in significantly lower doses than Standard protocols for the child (p = 0.0167) and adult (p = 0.0055) phantoms. Doses for the 13 × 16 cm cephalometric fields of view ranged from 11 to 85 μSv in the adult phantom and from 18 to 120 μSv in the child phantom for the QuickScan+ and Standard protocols, respectively. CNR was reduced by approximately two-thirds when comparing QuickScan+ with Standard exposure parameters. Conclusions QuickScan+ effective doses are comparable to those of conventional panoramic examinations. The significant dose reductions are accompanied by significant reductions in image quality. However, this trade-off may be acceptable for certain diagnostic tasks, such as interim assessment of treatment results. PMID:24286904
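    CNR, the quantitative image-quality measure used above, is commonly computed as the absolute difference of mean intensities between two regions of interest divided by the noise standard deviation. Definitions vary (some pool the noise of both ROIs); this sketch, with illustrative names, uses the background ROI's standard deviation:

```python
import statistics

def cnr(roi_signal, roi_background):
    """Contrast-to-noise ratio: |difference of ROI means| / background std dev
    (sample standard deviation of the background region)."""
    contrast = abs(statistics.mean(roi_signal) - statistics.mean(roi_background))
    return contrast / statistics.stdev(roi_background)
```

    On this definition, halving the contrast or doubling the background noise halves the CNR, which is why low-exposure protocols such as QuickScan+ show reduced values.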

  15. Investigation of the effect of subcutaneous fat on image quality performance of 2D conventional imaging and tissue harmonic imaging.

    PubMed

    Browne, Jacinta E; Watson, Amanda J; Hoskins, Peter R; Elliott, Alex T

    2005-07-01

    Tissue harmonic imaging (THI) has been reported to improve contrast resolution, tissue differentiation and overall image quality in clinical examinations. However, a study carried out previously by the authors (Browne et al. 2004) found improvements only in spatial resolution and not in contrast resolution or anechoic target detection. This result may have been due to the homogeneity of the phantom. Biologic tissues are generally inhomogeneous, and THI has been reported to improve image quality in the presence of large amounts of subcutaneous fat. The aims of the study were to simulate the distortion of image quality caused by subcutaneous fat and thus to investigate further the reported improvements in anechoic target detection and contrast resolution performance with THI compared with 2D conventional imaging. In addition, the effect of three different types of fat-mimicking layer on image quality was examined. The abdominal transducers of two ultrasound scanners with 2D conventional imaging and THI were tested: the 4C1 (Aspen-Acuson, Siemens Co., CA, USA) and the C5-2 (ATL HDI 5000, ATL/Philips, Amsterdam, The Netherlands). An ex vivo subcutaneous pig fat layer was used to replicate the beam distortion and phase aberration seen clinically in the presence of subcutaneous fat. Three different types of fat-mimicking layers (olive oil, lard and lard with fish oil capsules) were evaluated. The subcutaneous pig fat layer demonstrated an improvement in anechoic target detection with THI compared with 2D conventional imaging, but no improvement was demonstrated in contrast resolution performance; a similar result was found in a previous study conducted by this research group (Browne et al. 2004) using this tissue-mimicking phantom without a fat layer. Similarly, with the layers of olive oil, lard and lard with fish oil capsules, improvements due to THI were found in anechoic target detection but, again, no improvements were found for contrast resolution for any of the

  16. Advancing the Quality of Solar Occultation Retrievals through Solar Imaging

    NASA Astrophysics Data System (ADS)

    Gordley, L. L.; Hervig, M. E.; Marshall, B. T.; Russell, J. E.; Bailey, S. M.; Brown, C. W.; Burton, J. C.; Deaver, L. E.; Magill, B. E.; McHugh, M. J.; Paxton, G. J.; Thompson, R. E.

    2008-12-01

    The quality of retrieved profiles (e.g., mixing ratio, temperature, pressure, and extinction) from solar occultation sensors is strongly dependent on the angular fidelity of the measurements. The SOFIE instrument, launched on board the AIM (Aeronomy of Ice in the Mesosphere) satellite on April 25, 2007, was designed to provide very high precision broadband measurements for the study of Polar Mesospheric Clouds (PMCs), which appear near 83 km, just below the high-latitude summer mesopause. The SOFIE instrument achieves unprecedented angular fidelity by imaging the sun on a 2D detector array and tracking its edges with an uncertainty of <0.1 arc seconds. This makes possible retrieved mixing ratio profiles at high vertical resolution, refraction-based temperature and pressure from the tropopause to the lower mesosphere, and transmission with accuracy sufficient to infer cosmic smoke extinction. Details of the approach and recent results will be presented.

  17. Optimizing surface quality of stainless alloys and using a modified ASTM G 48B procedure for acceptance testing

    SciTech Connect

    Maurer, J.R.

    1999-01-01

    The formation of high-temperature oxide scales and Cr-depleted zones on stainless alloys, such as 6% Mo superaustenitic steels, can significantly reduce their corrosion resistance. Effective methods to remove these layers and restore the surface to an optimized condition are detailed. Also, an acceptance test using a modified ASTM G 48B method at 35 C (95 F) for 72 h with a specimen having a crevice, and special corrosion criteria for failure, are described. Comparison of this test method with one using an uncreviced specimen at lower temperatures and for less time is discussed.

  18. The image quality of ion computed tomography at clinical imaging dose levels

    SciTech Connect

    Hansen, David C.; Bassler, Niels; Sørensen, Thomas Sangild; Seco, Joao

    2014-11-01

    Purpose: Accurately predicting the range of radiotherapy ions in vivo is important for the precise delivery of dose in particle therapy. Range uncertainty is currently the single largest contribution to the dose margins used in planning and leads to a higher dose to normal tissue. The use of ion CT has been proposed as a method to improve the range uncertainty and thereby reduce the dose to normal tissue of the patient. A wide variety of ions have been proposed and studied for this purpose, but no studies evaluate the image quality obtained with different ions in a consistent manner. However, the imaging dose in ion CT is a concern that may limit the obtainable image quality. In addition, the imaging doses reported have not been directly comparable with x-ray CT doses due to the different biological impacts of ion radiation. The purpose of this work is to develop a robust methodology for comparing the image quality of ion CT with respect to particle therapy, taking into account different reconstruction methods and ion species. Methods: First, a comparison of different ions and energies was made. Ion CT projections were simulated for five different scenarios: protons at 230 and 330 MeV, helium ions at 230 MeV/u, and carbon ions at 430 MeV/u. Maps of the water-equivalent stopping power were reconstructed using a weighted least squares method. The dose was evaluated via a quality-factor-weighted CT dose index called the CT dose equivalent index (CTDEI). Spatial resolution was measured by the modulation transfer function, obtained from a noise-robust fit to the edge spread function. Second, the image quality as a function of the number of scanning angles was evaluated for protons at 230 MeV. In the resolution study, the CTDEI was fixed to 10 mSv, similar to a typical x-ray CT scan. Finally, scans at a range of CTDEIs were done to evaluate the influence of dose on reconstruction error. Results: All ions yielded accurate stopping power estimates, none of which were statistically
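    The standard route from an edge spread function (ESF) to the modulation transfer function (MTF) is: differentiate the ESF to get the line spread function, then normalize the Fourier magnitude to its zero-frequency value. The sketch below uses a clean synthetic Gaussian-blurred edge; the noise-robust fitting step the authors describe is not reproduced.

```python
import numpy as np
from math import erf

def mtf_from_esf(esf, dx=1.0):
    """Estimate the MTF from a (clean) edge spread function.

    ESF -> derivative -> line spread function -> |FFT|, normalized so
    that MTF(0) = 1. Real measured ESFs need denoising/fitting first.
    """
    lsf = np.gradient(np.asarray(esf, dtype=float), dx)
    spectrum = np.abs(np.fft.rfft(lsf))
    freqs = np.fft.rfftfreq(len(lsf), d=dx)
    return freqs, spectrum / spectrum[0]

# Ideal step edge blurred by a unit-sigma Gaussian: the MTF falls off
# smoothly (Gaussian-shaped) from 1 at zero frequency.
x = np.linspace(-10, 10, 201)
esf = np.array([(1 + erf(xi / np.sqrt(2))) / 2 for xi in x])
freqs, mtf = mtf_from_esf(esf, dx=x[1] - x[0])
```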

  19. Feasibility and Acceptability of a Collaborative Care Intervention To Improve Symptoms and Quality of Life in Chronic Heart Failure: Mixed Methods Pilot Trial

    PubMed Central

    Hooker, Stephanie; Nowels, Carolyn T.; Main, Deborah S.; Meek, Paula; McBryde, Connor; Hattler, Brack; Lorenz, Karl A.; Heidenreich, Paul A.

    2014-01-01

    Abstract Background: People with chronic heart failure (HF) suffer from numerous symptoms that worsen quality of life. The CASA (Collaborative Care to Alleviate Symptoms and Adjust to Illness) intervention was designed to improve symptoms and quality of life by integrating palliative and psychosocial care into chronic care. Objective: Our aim was to determine the feasibility and acceptability of CASA and identify necessary improvements. Methods: We conducted a prospective mixed-methods pilot trial. The CASA intervention included (1) nurse phone visits involving structured symptom assessments and guidelines to alleviate breathlessness, fatigue, pain, or depression; (2) structured phone counseling targeting adjustment to illness and depression if present; and (3) weekly team meetings with a palliative care specialist, cardiologist, and primary care physician focused on medical recommendations to primary care providers (PCPs, physicians or nurse practitioners) to improve symptoms. Study subjects were outpatients with chronic HF from a Veterans Affairs hospital (n=15) and a university hospital (n=2). Measurements included feasibility (cohort retention rate, medical recommendation implementation rate, missing data, quality of care) and acceptability (an end-of-study semi-structured participant interview). Results: Participants were male with a median age of 63 years. One withdrew early and there were <5% missing data. Overall, 85% of 87 collaborative care team medical recommendations were implemented. All participants who screened positive for depression were either treated for depression or thought to not have a depressive disorder. In the qualitative interviews, patients reported a positive experience and provided several constructive critiques. Conclusions: The CASA intervention was feasible based on participant enrollment, cohort retention, implementation of medical recommendations, minimal missing data, and acceptability. 
Several intervention changes were made based

  20. Imaging-based logics for ornamental stone quality chart definition

    NASA Astrophysics Data System (ADS)

    Bonifazi, Giuseppe; Gargiulo, Aldo; Serranti, Silvia; Raspi, Costantino

    2007-02-01

    Ornamental stone products are commercially classified according to several factors related both to intrinsic lithologic characteristics and to their visible pictorial attributes. Sometimes the latter prevail in the definition and assessment of quality criteria. Pictorial attributes are in any case also influenced by the working actions performed and the tools selected to realize the final manufactured stone product. Stone surface finishing is a critical task because it can enhance certain aesthetic features of the stone itself. The study aimed to develop an innovative set of methodologies and techniques able to quantify the aesthetic quality level of stone products, taking into account both the physical and the aesthetic characteristics of the stones. In particular, the degree of polishing of the stone surfaces and the presence of defects were evaluated by applying digital image processing strategies. Morphological and color parameters were extracted with purpose-built software architectures. Results showed that the proposed approaches make it possible to quantify the degree of polishing and to identify surface defects related to the intrinsic characteristics of the stone and/or the working actions performed.

  1. Crowdsourcing quality control for Dark Energy Survey images

    DOE PAGESBeta

    Melchior, P.

    2016-07-01

    We have developed a crowdsourcing web application for image quality control employed by the Dark Energy Survey. Dubbed the "DES exposure checker", it renders science-grade images directly to a web browser and allows users to mark problematic features from a set of predefined classes. Users can also generate custom labels and thus help identify previously unknown problem classes. User reports are fed back to hardware and software experts to help mitigate and eliminate recognized issues. We report on the implementation of the application and our experience with its over 100 users, the majority of whom are professional or prospective astronomers but not data management experts. We discuss aspects of user training and engagement, and demonstrate how problem reports have been pivotal to rapidly correct artifacts which would likely have been too subtle or infrequent to be recognized otherwise. We conclude with a number of important lessons learned, suggest possible improvements, and recommend this collective exploratory approach for future astronomical surveys or other extensive data sets with a sufficiently large user base. We also release open-source code of the web application and host an online demo version at http://des-exp-checker.pmelchior.net

  2. Crowdsourcing quality control for Dark Energy Survey images

    NASA Astrophysics Data System (ADS)

    Melchior, P.; Sheldon, E.; Drlica-Wagner, A.; Rykoff, E. S.; Abbott, T. M. C.; Abdalla, F. B.; Allam, S.; Benoit-Lévy, A.; Brooks, D.; Buckley-Geer, E.; Carnero Rosell, A.; Carrasco Kind, M.; Carretero, J.; Crocce, M.; D'Andrea, C. B.; da Costa, L. N.; Desai, S.; Doel, P.; Evrard, A. E.; Finley, D. A.; Flaugher, B.; Frieman, J.; Gaztanaga, E.; Gerdes, D. W.; Gruen, D.; Gruendl, R. A.; Honscheid, K.; James, D. J.; Jarvis, M.; Kuehn, K.; Li, T. S.; Maia, M. A. G.; March, M.; Marshall, J. L.; Nord, B.; Ogando, R.; Plazas, A. A.; Romer, A. K.; Sanchez, E.; Scarpine, V.; Sevilla-Noarbe, I.; Smith, R. C.; Soares-Santos, M.; Suchyta, E.; Swanson, M. E. C.; Tarle, G.; Vikram, V.; Walker, A. R.; Wester, W.; Zhang, Y.

    2016-07-01

    We have developed a crowdsourcing web application for image quality control employed by the Dark Energy Survey. Dubbed the "DES exposure checker", it renders science-grade images directly to a web browser and allows users to mark problematic features from a set of predefined classes. Users can also generate custom labels and thus help identify previously unknown problem classes. User reports are fed back to hardware and software experts to help mitigate and eliminate recognized issues. We report on the implementation of the application and our experience with its over 100 users, the majority of whom are professional or prospective astronomers but not data management experts. We discuss aspects of user training and engagement, and demonstrate how problem reports have been pivotal to rapidly correct artifacts which would likely have been too subtle or infrequent to be recognized otherwise. We conclude with a number of important lessons learned, suggest possible improvements, and recommend this collective exploratory approach for future astronomical surveys or other extensive data sets with a sufficiently large user base. We also release open-source code of the web application and host an online demo version at http://des-exp-checker.pmelchior.net.

  3. SU-E-J-157: Improving the Quality of T2-Weighted 4D Magnetic Resonance Imaging for Clinical Evaluation

    SciTech Connect

    Du, D; Mutic, S; Hu, Y; Caruthers, S; Glide-Hurst, C; Low, D

    2014-06-01

    Purpose: To develop an imaging technique that enables us to acquire T2-weighted 4D Magnetic Resonance Imaging (4DMRI) with sufficient spatial coverage, temporal resolution and spatial resolution for clinical evaluation. Methods: T2-weighted 4DMRI images were acquired from a healthy volunteer using a respiratory-amplitude-triggered T2-weighted Turbo Spin Echo sequence. Ten respiratory states were used to equally sample the respiratory range based on amplitude (0%, 20%i, 40%i, 60%i, 80%i, 100%, 80%e, 60%e, 40%e and 20%e). To avoid frequent scanning halts, a methodology was devised that split the 10 respiratory states into two packages in an interleaved manner, and the packages were acquired separately. Sixty 3 mm sagittal slices at 1.5 mm in-plane spatial resolution were acquired to offer good spatial coverage and reasonable spatial resolution. The in-plane field of view was 375 mm × 260 mm with a nominal scan time of 3 minutes 42 seconds. Acquired 2D images at the same respiratory state were combined to form the 3D image set corresponding to that respiratory state and reconstructed in the coronal view to evaluate whether all slices were at the same respiratory state. The 3D image sets of the 10 respiratory states represented a complete 4DMRI image set. Results: T2-weighted 4DMRI images were acquired in 10 minutes, which is within a clinically acceptable range. Qualitatively, the acquired MRI images had good image quality for delineation purposes. There were no abrupt position changes in the reconstructed coronal images, which confirmed that all sagittal slices were in the same respiratory state. Conclusion: We demonstrated that it is feasible to acquire a T2-weighted 4DMRI image set within a practical amount of time (10 minutes) that has good temporal resolution (10 respiratory states), spatial resolution (1.5 mm × 1.5 mm × 3.0 mm) and spatial coverage (60 slices) for future clinical evaluation.
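    The 10-state amplitude scheme above (0%, 20%i ... 100%, 80%e ... 20%e) amounts to binning each breathing sample by normalized amplitude and splitting intermediate bins by breathing direction. The function below is a hypothetical sketch of that bookkeeping; the thresholds, the rounding rule, and the use of the velocity sign to tag inhale ('i') versus exhale ('e') are my assumptions, not the authors' trigger logic.

```python
def respiratory_state(amplitude, velocity):
    """Map a breathing sample to one of 10 amplitude states.

    amplitude: normalized to [0, 1] (0 = end exhale, 1 = end inhale).
    velocity:  positive while inhaling, negative while exhaling
               (an assumed convention for this sketch).
    """
    if amplitude <= 0.1:
        return "0%"
    if amplitude >= 0.9:
        return "100%"
    level = int(round(amplitude / 0.2)) * 20     # nearest of 20, 40, 60, 80
    level = min(max(level, 20), 80)
    return f"{level}%{'i' if velocity > 0 else 'e'}"

print(respiratory_state(0.42, +1.0))   # a mid-inhale sample
print(respiratory_state(0.42, -1.0))   # same amplitude on exhale
```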

  4. No-reference image quality assessment based on nonsubsample shearlet transform and natural scene statistics

    NASA Astrophysics Data System (ADS)

    Wang, Guan-jun; Wu, Zhi-yong; Yun, Hai-jiao; Cui, Ming

    2016-03-01

    A novel no-reference (NR) image quality assessment (IQA) method is proposed for assessing image quality across multifarious distortion categories. The new method transforms distorted images into the shearlet domain using a non-subsample shearlet transform (NSST), and designs an image quality feature vector describing images through natural scene statistics features: coefficient distribution, energy distribution and structural correlation (SC) across orientations and scales. The final image quality is obtained from distortion classification and regression models trained by a support vector machine (SVM). The experimental results on the LIVE2 IQA database indicate that the method can assess image quality effectively, and that the extracted features are sensitive to the category and severity of distortion. Furthermore, the proposed method is database independent and has a higher correlation rate and lower root mean squared error (RMSE) with respect to human perception than other high-performance NR IQA methods.

  5. Neuron Image Analyzer: Automated and Accurate Extraction of Neuronal Data from Low Quality Images.

    PubMed

    Kim, Kwang-Min; Son, Kilho; Palmore, G Tayhas R

    2015-01-01

    Image analysis software is an essential tool used in neuroscience and neural engineering to evaluate changes in neuronal structure following extracellular stimuli. Both manual and automated methods in current use are severely inadequate at detecting and quantifying changes in neuronal morphology when the images analyzed have a low signal-to-noise ratio (SNR). This inadequacy derives from the fact that these methods often include data from non-neuronal structures or artifacts by simply tracing pixels with high intensity. In this paper, we describe Neuron Image Analyzer (NIA), a novel algorithm that overcomes these inadequacies by employing Laplacian of Gaussian filter and graphical models (i.e., Hidden Markov Model, Fully Connected Chain Model) to specifically extract relational pixel information corresponding to neuronal structures (i.e., soma, neurite). As such, NIA that is based on vector representation is less likely to detect false signals (i.e., non-neuronal structures) or generate artifact signals (i.e., deformation of original structures) than current image analysis algorithms that are based on raster representation. We demonstrate that NIA enables precise quantification of neuronal processes (e.g., length and orientation of neurites) in low quality images with a significant increase in the accuracy of detecting neuronal changes post-stimulation. PMID:26593337

  6. Neuron Image Analyzer: Automated and Accurate Extraction of Neuronal Data from Low Quality Images

    PubMed Central

    Kim, Kwang-Min; Son, Kilho; Palmore, G. Tayhas R.

    2015-01-01

    Image analysis software is an essential tool used in neuroscience and neural engineering to evaluate changes in neuronal structure following extracellular stimuli. Both manual and automated methods in current use are severely inadequate at detecting and quantifying changes in neuronal morphology when the images analyzed have a low signal-to-noise ratio (SNR). This inadequacy derives from the fact that these methods often include data from non-neuronal structures or artifacts by simply tracing pixels with high intensity. In this paper, we describe Neuron Image Analyzer (NIA), a novel algorithm that overcomes these inadequacies by employing Laplacian of Gaussian filter and graphical models (i.e., Hidden Markov Model, Fully Connected Chain Model) to specifically extract relational pixel information corresponding to neuronal structures (i.e., soma, neurite). As such, NIA that is based on vector representation is less likely to detect false signals (i.e., non-neuronal structures) or generate artifact signals (i.e., deformation of original structures) than current image analysis algorithms that are based on raster representation. We demonstrate that NIA enables precise quantification of neuronal processes (e.g., length and orientation of neurites) in low quality images with a significant increase in the accuracy of detecting neuronal changes post-stimulation. PMID:26593337
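    The first stage NIA relies on, Laplacian-of-Gaussian filtering, responds strongly to blob- and ridge-like neuronal structures while suppressing flat background, which is what makes it useful at low SNR. The numpy-only sketch below builds a discrete LoG kernel and convolves it with a toy image containing a faint "neurite"; kernel construction details and sizes are illustrative, and the HMM/chain-model stages of NIA are not reproduced.

```python
import numpy as np

def log_kernel(sigma, radius=None):
    """Discrete Laplacian-of-Gaussian kernel (illustrative construction)."""
    if radius is None:
        radius = int(3 * sigma)
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    r2 = x**2 + y**2
    k = (r2 - 2 * sigma**2) / sigma**4 * np.exp(-r2 / (2 * sigma**2))
    return k - k.mean()            # force zero sum: flat regions respond with 0

def convolve2d(img, kernel):
    """Naive 'same' convolution with zero padding (numpy only, small images)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel[::-1, ::-1])
    return out

# A faint horizontal "neurite" on a noisy background: the negated LoG
# response is large along the bright ridge even at low SNR.
rng = np.random.default_rng(1)
img = rng.normal(0, 0.2, (21, 21))
img[10, 3:18] += 1.0
response = -convolve2d(img, log_kernel(sigma=1.5))
```

In NIA the graphical models then link such high-response pixels into coherent soma/neurite structures instead of tracing raw intensity.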

  7. Small molecule specific run acceptance, specific assay operation, and chromatographic run quality assessment: recommendation for best practices and harmonization from the global bioanalysis consortium harmonization teams.

    PubMed

    Woolf, Eric J; McDougall, Stuart; Fast, Douglas M; Andraus, Maristela; Barfield, Matthew; Blackburn, Michael; Gordon, Ben; Hoffman, David; Inoue, Noriko; Marcelin-Jimenez, Gabriel; Flynn, Amy; LeLacheur, Richard; Reuschel, Scott; Santhanam, Ravisankar; Bennett, Patrick; Duncan, Barbara; Hayes, Roger; Lausecker, Berthold; Sharma, Abhishek; Togashi, Kazutaka; Trivedi, Ravi Kumar; Vago, Miguel; White, Stephen; Barton, Hollie; Dunn, John A; Farmen, Raymond H; Heinig, Katja; Holliman, Christopher; Komaba, Junji; Riccio, Maria Francesca; Thomas, Elizabeth

    2014-09-01

    Consensus practices and regulatory guidance for liquid chromatography-mass spectrometry/mass spectrometry (LC-MS/MS) assays of small molecules are more aligned globally than for any of the other bioanalytical techniques addressed by the Global Bioanalysis Consortium. The three Global Bioanalysis Consortium Harmonization Teams provide recommendations and best practices for areas not yet addressed fully by guidances and consensus for small molecule bioanalysis. Recommendations from all three teams are combined in this report for chromatographic run quality, validation, and sample analysis run acceptance. PMID:24961918

  8. Development of Software to Model AXAF-I Image Quality

    NASA Technical Reports Server (NTRS)

    Ahmad, Anees; Hawkins, Lamar

    1996-01-01

    This draft final report describes the work performed under delivery order number 145 from May 1995 through August 1996. The scope of work included a number of software development tasks for the performance modeling of AXAF-I. A number of new capabilities and functions have been added to the GT software, which is the command-mode version of the GRAZTRACE software originally developed by MSFC. A structural data interface has been developed for the EAL (formerly SPAR) finite element analysis (FEA) program, which is being used by the MSFC Structural Analysis group for the analysis of AXAF-I. This interface utility can read the structural deformation file from EAL and other finite element analysis programs such as NASTRAN and COSMOS/M, and convert the data to a format suitable for deformation ray-tracing to predict the image quality of a distorted mirror. The utility includes a provision to expand the data from finite element models assuming 180-degree symmetry. It has been used to predict image characteristics for the AXAF-I HRMA when subjected to gravity effects in the horizontal x-ray ground-test configuration. The development of the metrology data processing interface software has also been completed. It can read the HDOS FITS-format surface map files, manipulate and filter the metrology data, and produce a deformation file that can be used by GT for ray tracing of the mirror surface figure errors. This utility has been used to determine the optimum alignment (axial spacing and clocking) for the four pairs of AXAF-I mirrors. Based on this optimized alignment, the geometric images and effective focal lengths for the as-built mirrors were predicted to cross-check the results obtained by Kodak.

  9. Optoacoustic imaging quality enhancement based on geometrical super-resolution method

    NASA Astrophysics Data System (ADS)

    He, Hailong; Mandal, Subhamoy; Buehler, Andreas; Deán-Ben, X. Luís.; Razansky, Daniel; Ntziachristos, Vasilis

    2016-03-01

    In optoacoustic imaging, the resolution and image quality in a certain imaging position usually cannot be enhanced without changing the imaging configuration. Post-reconstruction image processing methods offer a new possibility to improve image quality and resolution. We have developed a geometrical super-resolution (GSR) method which uses information from spatially separated frames to enhance resolution and contrast in optoacoustic images. The proposed method acquires several low resolution images from the same object located at different positions inside the imaging plane. Thereafter, it applies an iterative registration algorithm to integrate the information in the acquired set of images to generate a single high resolution image. Herein, we present the method and evaluate its performance in simulation and phantom experiments, and results show that geometrical super-resolution techniques can be a promising alternative to enhance resolution in optoacoustic imaging.
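    The combination step of geometrical super-resolution can be illustrated with the classic shift-and-add idea: frames of the same scene taken at known sub-pixel offsets are placed on a finer grid and averaged. This 1-D sketch assumes the shifts are already known; the paper's iterative registration algorithm, which estimates them, is not reproduced.

```python
import numpy as np

def shift_and_add_sr(frames, shifts, factor):
    """Combine low-resolution frames into one high-resolution signal.

    Each LR frame samples the scene at a known shift (in HR-grid units).
    Samples land on a grid `factor` times finer and are averaged where
    they overlap.
    """
    n = len(frames[0]) * factor
    acc = np.zeros(n)
    cnt = np.zeros(n)
    for frame, s in zip(frames, shifts):
        idx = np.arange(len(frame)) * factor + s
        acc[idx] += frame
        cnt[idx] += 1
    filled = cnt > 0
    acc[filled] /= cnt[filled]
    return acc

# Two LR observations with complementary sub-pixel offsets reconstruct
# the HR signal exactly in this noise-free toy case.
hr = np.sin(np.linspace(0, np.pi, 8))
lr0, lr1 = hr[0::2], hr[1::2]
sr = shift_and_add_sr([lr0, lr1], shifts=[0, 1], factor=2)
```

With noisy frames, the averaging in overlapping cells is what provides the contrast and noise improvement reported in the abstract.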

  10. Quality Imaging - Comparison of CR Mammography with Screen-Film Mammography

    SciTech Connect

    Gaona, E.; Azorin Nieto, J.; Iran Diaz Gongora, J. A.; Arreola, M.; Casian Castellanos, G.; Perdigon Castaneda, G. M.; Franco Enriquez, J. G.

    2006-09-08

    The aim of this work is a comparison of the image quality of CR mammography images printed to film by a laser printer with that of screen-film mammography. Giotto and Elscintec dedicated mammography units with fully automatic exposure and a nominal large focal spot size of 0.3 mm were used for the image acquisition of phantoms in screen-film mammography. Four CR mammography units from two different manufacturers and three dedicated x-ray mammography units with fully automatic exposure and a nominal large focal spot size of 0.3 mm were used for the image acquisition of phantoms in CR mammography. The image quality tests included an assessment of system resolution, scoring of phantom images, artifacts, mean optical density and density difference (contrast). In this study, screen-film mammography with a quality control program offers a significantly greater level of image quality than CR mammography images printed on film.

  11. Assessment of image quality and dose calculation accuracy on kV CBCT, MV CBCT, and MV CT images for urgent palliative radiotherapy treatments.

    PubMed

    Held, Mareike; Cremers, Florian; Sneed, Penny K; Braunstein, Steve; Fogh, Shannon E; Nakamura, Jean; Barani, Igor; Perez-Andujar, Angelica; Pouliot, Jean; Morin, Olivier

    2016-01-01

    cases. Best dose calculation results were obtained when the treatment isocenter was near the image isocenter for all machines. A large field of view and immediate image export to the treatment planning system were essential for a smooth workflow and were not provided on all devices. Based on this phantom study, the image quality of the studied kV CBCT, MV CBCT, and MV CT on-board imaging devices was sufficient for treatment planning in all tested cases. Treatment plans provided dose calculation accuracies within an acceptable range for simple, urgently planned palliative treatments. However, dose calculation accuracy was compromised towards the edges of an image. Feasibility for clinical implementation should be assessed separately and may be complicated by machine-specific features. Image artifacts in patient images and their effect on dose calculation accuracy should be assessed in a separate, machine-specific study. PMID:27074487

  12. Full-reference quality estimation for images with different spatial resolutions.

    PubMed

    Demirtas, Ali Murat; Reibman, Amy R; Jafarkhani, Hamid

    2014-05-01

    Multimedia communication is becoming pervasive because of the progress in wireless communications and multimedia coding. Estimating the quality of visual content accurately is crucial in providing satisfactory service. State-of-the-art visual quality assessment approaches are effective when the input image and reference image have the same resolution. However, finding the quality of an image whose spatial resolution differs from that of the reference image is still a challenging problem. To solve this problem, we develop a quality estimator (QE), which computes the quality of the input image without resampling the reference or the input images. In this paper, we begin by identifying the potential weaknesses of previous approaches used to estimate the quality of experience. Next, we design a QE to estimate the quality of a distorted image with a lower resolution than the reference image. We also propose a subjective test environment to explore the success of the proposed algorithm in comparison with other QEs. When the input and test images have different resolutions, the subjective tests demonstrate that in most cases the proposed method works better than other approaches. In addition, the proposed algorithm also performs well when the reference image and the test image have the same resolution. PMID:24686279

  13. Correlation of radiologists' image quality perception with quantitative assessment parameters: just-noticeable difference vs. peak signal-to-noise ratios

    NASA Astrophysics Data System (ADS)

    Siddiqui, Khan M.; Siegel, Eliot L.; Reiner, Bruce I.; Johnson, Jeffrey P.

    2005-04-01

    The authors identify a fundamental disconnect between the ways in which industry and radiologists assess and even discuss product performance. What is needed is a quantitative methodology that can assess both subjective image quality and observer task performance. In this study, we propose and evaluate the use of a visual discrimination model (VDM) that assesses just-noticeable differences (JNDs) to serve this purpose. The study compares radiologists' subjective perceptions of image quality of computed tomography (CT) and computed radiography (CR) images with quantitative measures of peak signal-to-noise ratio (PSNR) and JNDs as measured by a VDM. The study included 4 CT and 6 CR studies with compression ratios ranging from lossless to 90:1 (a total of 80 image sets were generated; n = 1,200). Eleven radiologists reviewed the images and rated them in terms of overall quality and readability and identified images not acceptable for interpretation. Normalized reader scores were correlated with compression, objective PSNR, and mean JND values. Results indicated a significantly higher correlation between observer performance and JND values than with PSNR methods. These results support the use of the VDM as a metric not only for the threshold discriminations for which it was calibrated, but also as a general image quality metric. This VDM is a highly promising, reproducible, and reliable adjunct or even alternative to human observer studies for research or to establish clinical guidelines for image compression, dose reductions, and evaluation of various display technologies.
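    For reference, the PSNR baseline that the JND-based VDM outperformed is a purely pixel-wise metric, which is why it correlates weakly with perception. A minimal implementation for 8-bit images:

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB: 10*log10(peak^2 / MSE)."""
    reference = np.asarray(reference, dtype=float)
    test = np.asarray(test, dtype=float)
    mse = np.mean((reference - test) ** 2)
    if mse == 0:
        return float("inf")          # identical images
    return 10 * np.log10(peak**2 / mse)

# A uniform error of 1 gray level on 8-bit data gives 20*log10(255) dB
ref = np.full((4, 4), 128.0)
print(round(psnr(ref, ref + 1), 2))  # ≈ 48.13
```

PSNR assigns the same score to perceptually very different distortions of equal mean-squared error, the shortcoming the VDM addresses.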

  14. On the Difference between Seeing and Image Quality: When the Turbulence Outer Scale Enters the Game

    NASA Astrophysics Data System (ADS)

    Martinez, P.; Kolb, J.; Sarazin, M.; Tokovinin, A.

    2010-09-01

    We attempt to clarify the frequent confusion between seeing and image quality for large telescopes. The full width at half maximum (FWHM) of a stellar image is commonly considered to be equal to the atmospheric seeing. However, the outer scale of the turbulence, which corresponds to a reduction in the low-frequency content of the phase perturbation spectrum, plays a significant role in the improvement of image quality at the focus of a telescope. The image quality is therefore different (and in some cases by a large factor) from the atmospheric seeing that can be measured by dedicated seeing monitors, such as a differential image motion monitor.
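    The size of the outer-scale effect can be estimated with a commonly cited approximation for the von Kármán FWHM relative to pure Kolmogorov seeing (the fitting constants 2.183 and 0.356 are from Tokovinin's approximation, valid for r0/L0 < 0.5); the example values of r0 and L0 below are illustrative.

```python
import math

def fwhm_reduction(r0, L0):
    """Ratio of von Karman FWHM to Kolmogorov seeing FWHM.

    r0: Fried parameter [m]; L0: turbulence outer scale [m].
    Approximation: sqrt(1 - 2.183 * (r0/L0)**0.356), for r0/L0 < 0.5.
    """
    x = r0 / L0
    return math.sqrt(1 - 2.183 * x ** 0.356)

# Example: r0 = 15 cm (seeing of about 0.7 arcsec at 500 nm) and a
# typical L0 = 25 m: the delivered FWHM is roughly 20% smaller than
# the seeing-monitor value.
print(round(fwhm_reduction(0.15, 25.0), 2))
```

This is the quantitative reason a large telescope can deliver images noticeably sharper than the monitored seeing.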

  15. A comparison of Image Quality Models and Metrics Predicting Object Detection

    NASA Technical Reports Server (NTRS)

    Rohaly, Ann Marie; Ahumada, Albert J., Jr.; Watson, Andrew B.; Null, Cynthia H. (Technical Monitor)

    1995-01-01

    Many models and metrics for image quality predict image discriminability, the visibility of the difference between a pair of images. Some image quality applications, such as the quality of imaging radar displays, are instead concerned with object detection and recognition. Object detection involves looking for one of a large set of object sub-images in a large set of background images, and has been approached from this general point of view. Here we compare three alternative measures of image discrimination: a multiple-frequency-channel model, a single-filter model, and RMS error. We find that discrimination models and metrics can predict the relative detectability of objects in different images, suggesting that these simpler models may be useful in some object detection and recognition applications.
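
    The simplest of the three measures, RMS error, is just the root-mean-square pixel difference between the background alone and the background with the object present; a minimal sketch with a hypothetical object patch:

```python
import numpy as np

def rms_error(image_a, image_b):
    """Root-mean-square pixel difference between two images."""
    a = np.asarray(image_a, dtype=np.float64)
    b = np.asarray(image_b, dtype=np.float64)
    return float(np.sqrt(np.mean((a - b) ** 2)))

# Detectability proxy: RMS difference between a background alone and the
# same background with a (hypothetical) 8x8 object patch added.
background = np.zeros((32, 32))
with_object = background.copy()
with_object[12:20, 12:20] += 10.0
print(rms_error(background, with_object))   # sqrt(64*100/1024) = 2.5
```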

  16. Quantitative and qualitative image quality analysis of super resolution images from a low cost scanning laser ophthalmoscope

    NASA Astrophysics Data System (ADS)

    Murillo, Sergio; Echegaray, Sebastian; Zamora, Gilberto; Soliz, Peter; Bauman, Wendall

    2011-03-01

    The lurking epidemic of eye diseases caused by diabetes and aging will put more than 130 million Americans at risk of blindness by 2020. Screening has been touted as a means to prevent blindness by identifying those individuals at risk. However, the cost of most of today's commercial retinal imaging devices makes their use economically impractical for mass screening. Thus, low-cost devices are needed. With these devices, low cost often comes at the expense of image quality, with high levels of noise and distortion hindering the clinical evaluation of those retinas. A software-based super resolution (SR) reconstruction methodology that produces images with improved resolution and quality from multiple low resolution (LR) observations is introduced. The LR images are taken with a low-cost Scanning Laser Ophthalmoscope (SLO). The non-redundant information of these LR images is combined to produce a single image in an implementation that also removes noise and imaging distortions while preserving fine blood vessels and small lesions. The feasibility of using the resulting SR images for screening of eye diseases was tested using quantitative and qualitative assessments. Qualitatively, expert image readers evaluated their ability to detect clinically significant features on the SR images and compared their findings with those obtained from matching images of the same eyes taken with commercially available high-end cameras. Quantitatively, measures of image quality were calculated from SR images and compared to subject-matched images from a commercial fundus imager. Our results show that the SR images indeed have sufficient quality and spatial detail for screening purposes.

  17. Quality of laboratory studies assessing effects of Bt-proteins on non-target organisms: minimal criteria for acceptability.

    PubMed

    De Schrijver, Adinda; Devos, Yann; De Clercq, Patrick; Gathmann, Achim; Romeis, Jörg

    2016-08-01

    The potential risks that genetically modified plants may pose to non-target organisms and the ecosystem services they contribute to are assessed as part of pre-market risk assessments. This paper reviews the early-tier studies testing the hypothesis that exposure to plant-produced Cry34/35Ab1 proteins as a result of cultivation of maize 59122 is harmful to valued non-target organisms, in particular Arthropoda and Annelida. The available studies were assessed for their scientific quality by considering a set of criteria determining their relevance and reliability. As a case study, this exercise revealed that when not all quality criteria are met, weighing the robustness of a study and its relevance for risk assessment is not straightforward. Applying a worst-case expected environmental concentration of bioactive toxins equivalent to that present in the transgenic crop, confirming exposure of the test species to the test substance, and the use of a negative control were identified as minimum criteria to be met to guarantee sufficiently reliable data. This exercise stresses the importance of conducting studies that meet certain quality standards, as this minimises the probability of erroneous or inconclusive results, increases confidence in the results, and adds certainty to the conclusions drawn. PMID:26980555

  18. A Multivariate Model for Coastal Water Quality Mapping Using Satellite Remote Sensing Images

    PubMed Central

    Su, Yuan-Fong; Liou, Jun-Jih; Hou, Ju-Chen; Hung, Wei-Chun; Hsu, Shu-Mei; Lien, Yi-Ting; Su, Ming-Daw; Cheng, Ke-Sheng; Wang, Yeng-Fung

    2008-01-01

    This study demonstrates the feasibility of coastal water quality mapping using satellite remote sensing images. Water quality sampling campaigns were conducted over a coastal area in northern Taiwan for measurements of three water quality variables including Secchi disk depth, turbidity, and total suspended solids. SPOT satellite images nearly concurrent with the water quality sampling campaigns were also acquired. A spectral reflectance estimation scheme proposed in this study was applied to SPOT multispectral images for estimation of the sea surface reflectance. Two models, univariate and multivariate, for water quality estimation using the sea surface reflectance derived from SPOT images were established. The multivariate model takes into consideration the wavelength-dependent combined effect of individual seawater constituents on the sea surface reflectance and is superior to the univariate model. Finally, quantitative coastal water quality mapping was accomplished by substituting the pixel-specific spectral reflectance into the multivariate water quality estimation model.

  19. Sentinel-2 radiometric image quality commissioning: first results

    NASA Astrophysics Data System (ADS)

    Lachérade, S.; Lonjou, V.; Farges, M.; Gamet, P.; Marcq, S.; Raynaud, J.-L.; Trémas, T.

    2015-10-01

    In partnership with the European Commission and in the frame of the Copernicus program, the European Space Agency (ESA) is developing the Sentinel-2 optical imaging mission, devoted to the operational monitoring of land and coastal areas. The Sentinel-2 mission is based on a constellation of satellites deployed in polar sun-synchronous orbit. Sentinel-2 offers a unique combination of global coverage with a wide field of view (290 km), a high revisit rate (5 days with two satellites), high spatial resolution (10 m, 20 m and 60 m) and multi-spectral imagery (13 spectral bands in the visible and shortwave infrared domains). The first satellite, Sentinel-2A, was launched in June 2015. The Sentinel-2A commissioning phase started immediately after the Launch and Early Orbit Phase and continues until the In-Orbit Commissioning Review, planned three months after launch. The Centre National d'Etudes Spatiales (CNES) supports ESA/ESTEC to ensure the calibration/validation commissioning phase during the first three months in flight. This paper first provides an overview of the Sentinel-2 system and a description of the products delivered by the ground segment, together with the main radiometric specifications to be achieved. The paper then focuses on the preliminary radiometric results obtained during the in-flight commissioning phase. The radiometric methods and calibration sites used in the CNES image quality center to reach the specifications of the sensor are described, and a status of the Sentinel-2A radiometric performances at the end of the first three months after launch is presented. We particularly address the results in terms of absolute calibration, pixel-to-pixel relative sensitivity and MTF estimation.

  20. The Osteoarthritis Initiative (OAI) magnetic resonance imaging quality assurance update

    PubMed Central

    Schneider, E.; NessAiver, M.

    2012-01-01

    Objective Longitudinal quantitative evaluation of cartilage disease requires reproducible measurements over time. We report 8 years of quality assurance (QA) metrics for quantitative magnetic resonance (MR) knee analyses from the Osteoarthritis Initiative (OAI) and show the impact of MR system, phantom, and acquisition protocol changes. Method Key 3 T MR QA metrics, including signal-to-noise ratio, signal uniformity, T2 relaxation times, and geometric distortion, were quantified monthly on two different phantoms using an automated program. Results Over 8 years, phantom measurements showed root-mean-square coefficient-of-variation reproducibility of <0.25% (190.0 mm diameter) and <0.20% (148.0 mm length), resulting in spherical volume reproducibility of <0.35%. T2 relaxation time reproducibility varied from 1.5% to 5.3%; seasonal fluctuations were observed at two sites. All other QA goals were met, except that slice thicknesses were consistently larger than nominal on turbo spin echo images, and knee coil signal uniformity and signal level varied significantly over time. Conclusions The longitudinal variations for a spherical volume should have minimal impact on the accuracy and reproducibility of cartilage volume and thickness measurements, as they are an order of magnitude smaller than reported unpaired or paired (repositioning and reanalysis) precision errors. This stability should enable direct comparison of baseline and follow-up images. Cross-comparison of the geometric results from all four OAI sites reveals that the MR systems do not statistically differ, enabling results to be pooled. The MR QA results identified technical issues similar to those previously published. Geometric accuracy stability should have the greatest impact on the quantitative analysis of longitudinal change in cartilage volume and thickness. PMID:23092792

  1. Spectral CT Imaging of Laryngeal and Hypopharyngeal Squamous Cell Carcinoma: Evaluation of Image Quality and Status of Lymph Nodes

    PubMed Central

    Li, Wei; Wang, Zhongzhou; Pang, Tao; Li, Jun; Shi, Hao; Zhang, Chengqi

    2013-01-01

    Purpose The purpose of this study was to evaluate image quality and the status of lymph nodes in laryngeal and hypopharyngeal squamous cell carcinoma (SCC) patients using spectral CT imaging. Materials and Methods Thirty-eight patients with laryngeal and hypopharyngeal SCCs were scanned in spectral CT mode in the venous phase. Conventional 140-kVp polychromatic images and one hundred and one sets of monochromatic images ranging from 40 keV to 140 keV were generated. The mean optimal keV was calculated from the monochromatic images. The image quality of the mean optimal keV monochromatic images and of the polychromatic images was compared using two different methods: a quantitative analysis and a qualitative analysis. The slope of the HU curve (λHU) was calculated for the target lymph nodes and for the primary lesion, and the ratio of λHU between the metastatic and non-metastatic lymph node groups was studied. Results A total of 38 primary lesions were included. The mean optimal keV on the monochromatic images was 55±1.77 keV. Image quality, as evaluated by both the quantitative and the qualitative analysis, was significantly better on the monochromatic images than on the polychromatic images (p<0.05). The ratio of λHU between metastatic and non-metastatic lymph nodes was significantly different in the venous phase images (p<0.05). Conclusion Monochromatic images obtained with spectral CT can be used to improve the image quality of laryngeal and hypopharyngeal SCC imaging and the N-staging accuracy. The quantitative ratio of λHU may be helpful for differentiating between metastatic and non-metastatic cervical lymph nodes. PMID:24386214

  2. Agreement between objective and subjective assessment of image quality in ultrasound abdominal aortic aneurysm screening

    PubMed Central

    Wolstenhulme, S; Keeble, C; Moore, S; Evans, J A

    2015-01-01

    Objective: To investigate agreement between objective and subjective assessment of image quality of ultrasound scanners used for abdominal aortic aneurysm (AAA) screening. Methods: Nine ultrasound scanners were used to acquire longitudinal and transverse images of the abdominal aorta. 100 images were acquired per scanner from which 5 longitudinal and 5 transverse images were randomly selected. 33 practitioners scored 90 images blinded to the scanner type and subject characteristics and were required to state whether or not the images were of adequate diagnostic quality. Odds ratios were used to rank the subjective image quality of the scanners. For objective testing, three standard test objects were used to assess penetration and resolution and used to rank the scanners. Results: The subjective diagnostic image quality was ten times greater for the highest ranked scanner than for the lowest ranked scanner. It was greater at depths of <5.0 cm (odds ratio, 6.69; 95% confidence interval, 3.56, 12.57) than at depths of 15.1–20.0 cm. There was a larger range of odds ratios for transverse images than for longitudinal images. No relationship was seen between subjective scanner rankings and test object scores. Conclusion: Large variation was seen in the image quality when evaluated both subjectively and objectively. Objective scores did not predict subjective scanner rankings. Further work is needed to investigate the utility of both subjective and objective image quality measurements. Advances in knowledge: Ratings of clinical image quality and image quality measured using test objects did not agree, even in the limited scenario of AAA screening. PMID:25494526

  3. Objective Image-Quality Assessment for High-Resolution Photospheric Images by Median Filter-Gradient Similarity

    NASA Astrophysics Data System (ADS)

    Deng, Hui; Zhang, Dandan; Wang, Tianyu; Ji, Kaifan; Wang, Feng; Liu, Zhong; Xiang, Yongyuan; Jin, Zhenyu; Cao, Wenda

    2015-05-01

    All next-generation ground-based and space-based solar telescopes require a good quality-assessment metric to evaluate their imaging performance. In this paper, a new image-quality metric, the median filter-gradient similarity (MFGS), is proposed for photospheric images. MFGS is a no-reference/blind objective image-quality metric (IQM) that yields a measurement between 0 and 1. It was applied to short-exposure photospheric images captured by the New Vacuum Solar Telescope (NVST) of the Fuxian Solar Observatory and by the Solar Optical Telescope (SOT) onboard the Hinode satellite, respectively. The results show that (1) the measured value of the MFGS changes monotonically from 1 to 0 with degradation of image quality; (2) there is a linear correlation between the measured values of the MFGS and the root-mean-square contrast (RMS-contrast) of the granulation; (3) the MFGS is less affected by image content than the granular RMS-contrast. Overall, the MFGS is a good alternative for the quality assessment of photospheric images.
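
    As an illustration only, here is a numpy-only sketch of the idea behind MFGS. It assumes the metric takes the symmetric similarity form 2ab/(a²+b²) between the summed gradient magnitude of an image (a) and that of its 3x3 median-filtered version (b) — consistent with the 1-to-0 monotonic behaviour described above, but not necessarily the paper's exact formula:

```python
import numpy as np

def mfgs(image):
    """Sketch of a median filter-gradient similarity: the similarity
    2ab/(a**2 + b**2) between the summed gradient magnitude of an image
    (a) and that of its 3x3 median-filtered version (b).  A sharp image
    keeps its gradients under median filtering (value near 1); a
    noise-degraded one does not."""
    img = np.asarray(image, dtype=np.float64)
    # 3x3 median filter over the valid region, numpy-only
    windows = np.lib.stride_tricks.sliding_window_view(img, (3, 3))
    med = np.median(windows, axis=(-2, -1))

    def grad_mag_sum(a):
        gy, gx = np.gradient(a)
        return float(np.sum(np.hypot(gx, gy)))

    g = grad_mag_sum(img[1:-1, 1:-1])   # crop to match the filtered size
    gm = grad_mag_sum(med)
    return 2.0 * g * gm / (g * g + gm * gm)

rng = np.random.default_rng(1)
sharp = np.kron(rng.random((8, 8)), np.ones((8, 8)))   # blocky "granulation"
noisy = sharp + rng.normal(0.0, 0.5, sharp.shape)      # degraded copy
print(mfgs(sharp) > mfgs(noisy))   # the sharper image scores closer to 1
```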

  4. Determination of pork quality attributes using hyperspectral imaging technique

    NASA Astrophysics Data System (ADS)

    Qiao, Jun; Wang, Ning; Ngadi, M. O.; Gunenc, Aynur

    2005-11-01

    Meat grading remains an active research topic because of the large variations among meat products. Many subjective assessment methods with poor repeatability and tedious procedures are still widely used in the meat industry. In this study, a hyperspectral-imaging-based technique was developed to achieve fast, accurate, and objective determination of pork quality attributes. The system was able to extract spectral and spatial characteristics for the simultaneous determination of drip loss and pH in pork meat. Two sets of six significant feature wavelengths were selected for predicting drip loss (590, 645, 721, 752, 803 and 850 nm) and pH (430, 448, 470, 890, 980 and 999 nm), and two feed-forward neural network models were developed. The results showed that the correlation coefficients (r) between predicted and actual values were 0.71 for drip loss and 0.58 for pH with Model 1, and 0.80 for drip loss and 0.67 for pH with Model 2. The color levels of meat samples were also mapped successfully based on a digitalized Meat Color Standard.

  5. Standardized methods for assessing the imaging quality of intraocular lenses

    NASA Astrophysics Data System (ADS)

    Norrby, N. E. Sverker

    1995-11-01

    The relative merits of three standardized methods for assessing the imaging quality of intraocular lenses are discussed based on theoretical modulation-transfer-function calculations. The standards are ANSI Z80.7 1984 from the American National Standards Institute, now superseded by ANSI Z80.7 1994, and the proposed ISO 11979-2 from the International Organization for Standardization. They entail different test configurations and approval limits, respectively: 60% resolution efficiency in air, 70% resolution efficiency in aqueous humor, and 0.43 modulation at 100 line pairs/mm in a model eye. The ISO working group found that the latter corresponds to 60% resolution efficiency in air in a ring test among eight laboratories on a sample of 39 poly(methyl methacrylate) lenses and four silicone lenses spanning the power (in aqueous humor) range of 10-30 D. In both ANSI Z80.7 1994 and ISO 11979-2, a 60% resolution efficiency in air remains an optional approval limit. It is concluded that the ISO configuration is preferred, because it puts the intraocular lens into the context of the optics of the eye. Note that the ISO standard is tentative and is currently being voted on.

  6. Standardized methods for assessing the imaging quality of intraocular lenses.

    PubMed

    Norrby, N E

    1995-11-01

    The relative merits of three standardized methods for assessing the imaging quality of intraocular lenses are discussed based on theoretical modulation-transfer-function calculations. The standards are ANSI Z80.7 1984 from the American National Standards Institute, now superseded by ANSI Z80.7 1994, and the proposed ISO 11979-2 from the International Organization for Standardization. They entail different test configurations and approval limits, respectively: 60% resolution efficiency in air, 70% resolution efficiency in aqueous humor, and 0.43 modulation at 100 line pairs/mm in a model eye. The ISO working group found that the latter corresponds to 60% resolution efficiency in air in a ring test among eight laboratories on a sample of 39 poly(methyl methacrylate) lenses and four silicone lenses spanning the power (in aqueous humor) range of 10-30 D. In both ANSI Z80.7 1994 and ISO 11979-2, a 60% resolution efficiency in air remains an optional approval limit. It is concluded that the ISO configuration is preferred, because it puts the intraocular lens into the context of the optics of the eye. Note that the ISO standard is tentative and is currently being voted on. PMID:21060604

  7. High-quality remote interactive imaging in the operating theatre

    NASA Astrophysics Data System (ADS)

    Grimstead, Ian J.; Avis, Nick J.; Evans, Peter L.; Bocca, Alan

    2009-02-01

    We present a high-quality display system that enables the remote access within an operating theatre of high-end medical imaging and surgical planning software. Currently, surgeons often use printouts from such software for reference during surgery; our system enables surgeons to access and review patient data in a sterile environment, viewing real-time renderings of MRI & CT data as required. Once calibrated, our system displays shades of grey in Operating Room lighting conditions (removing any gamma correction artefacts). Our system does not require any expensive display hardware, is unobtrusive to the remote workstation and works with any application without requiring additional software licenses. To extend the native 256 levels of grey supported by a standard LCD monitor, we have used the concept of "PseudoGrey" where slightly off-white shades of grey are used to extend the intensity range from 256 to 1,785 shades of grey. Remote access is facilitated by a customized version of UltraVNC, which corrects remote shades of grey for display in the Operating Room. The system is successfully deployed at Morriston Hospital, Swansea, UK, and is in daily use during Maxillofacial surgery. More formal user trials and quantitative assessments are being planned for the future.
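
    The 1,785-level figure follows from 255 full grey steps times 7 sub-levels per step. A minimal sketch of one plausible PseudoGrey-style mapping (the bump order and the clamping at full white are assumptions; the deployed system's exact mapping may differ):

```python
# One plausible bump order (assumption -- the deployed system's exact
# mapping may differ): each +1 on a single channel raises perceived
# luminance by roughly 1/7 of a full grey step.
BUMP_SEQUENCE = ("b", "r", "b", "g", "b", "r")   # 6 bumps -> 7 sub-levels

def pseudogray(level):
    """Map an extended grey level in [0, 1784] to a slightly off-white
    (r, g, b) triple, extending 255 full grey steps to 7x as many."""
    if not 0 <= level <= 1784:
        raise ValueError("extended grey level out of range")
    base, rem = divmod(level, 7)
    rgb = {"r": base, "g": base, "b": base}
    for channel in BUMP_SEQUENCE[:rem]:
        rgb[channel] = min(rgb[channel] + 1, 255)   # clamp at full white
    return rgb["r"], rgb["g"], rgb["b"]

print(pseudogray(0))    # (0, 0, 0)
print(pseudogray(3))    # (1, 0, 2): base 0 plus bumps b, r, b
print(pseudogray(7))    # (1, 1, 1): one full grey step
```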

  8. Filling factor characteristics of masking phase-only hologram on the quality of reconstructed images

    NASA Astrophysics Data System (ADS)

    Deng, Yuanbo; Chu, Daping

    2016-03-01

    The present study evaluates how the filling factor of a mask applied to a phase-only hologram affects the corresponding reconstructed image. A square aperture with varying filling factor is applied to the phase-only hologram of the target image; the average cross-section intensity profile of the reconstructed image is then obtained and deconvolved with that of the target image to calculate the point spread function (PSF) of the system. The Lena image is used as the target, and the RMSE and SSIM metrics are used to assess the quality of the reconstructed image. The results show that the measured PSF agrees with the PSF given by the Fourier transform of the mask, and that as the filling factor of the mask decreases, the width of the PSF increases and the quality of the reconstructed image drops. These characteristics could be exploited in practical situations where the phase-only hologram is confined or needs to be sliced or tiled.

  9. An innovative method for the preparation of mum (Thai fermented sausages) with acceptable technological quality and extended shelf-life.

    PubMed

    Wanangkarn, Amornrat; Liu, Deng-Cheng; Swetwiwathana, Adisorn; Tan, Fa-Jui

    2012-11-15

    Freshly-manufactured mum sausages were assigned to two processing methods (process I: stored at ∼30 °C for 14 days; process II: stored at ∼30 °C for three days, vacuum-packaged, and stored at 4 °C until day 28). Physicochemical, microbial, textural, and sensory properties of samples were analysed. The results showed that dehydration was more intense in process I samples, and resulted in lower moisture content and water activity. Significant decreases in pH values, and increases in lactic acid were observed in both samples by day 3. The total microflora and lactic acid bacteria counts increased rapidly during the fermentation and then decreased while the Enterobacteriaceae counts decreased steadily. Too much dehydration resulted in tough textures and unacceptable sensory qualities for process I samples. In conclusion, after three days of fermentation, with vacuum-packaging, ripening and storage at 4 °C up to 28 days, it is possible to produce mum sausages with better qualities and an extended shelf life. PMID:22868122

  10. Analysing demand for environmental quality: a willingness to pay/accept study in the province of Siena (Italy).

    PubMed

    Basili, Marcello; Di Matteo, Massimo; Ferrini, Silvia

    2006-01-01

    The province of Siena, Italy, enacted a new garbage plan (NGP) with the objective of increasing separate waste collection (SWC), shutting down six landfills and increasing incineration. The aim of the paper is to evaluate the costs and benefits of the NGP. The hypothesis is that willingness to pay (WTP) should reflect the value to the community of having better environmental quality, according to the contingent valuation literature. The paper reports the results of a contingent valuation (CV) survey. The sample was divided into two subsets: firms and households. Through the information gathered via a detailed questionnaire, parametric and non-parametric estimates were elaborated to analyse the WTP of the population for the benefits flowing from increased SWC, increased incineration and the shutting down of landfills. These values were expressed as a share of the tax actually paid. Although a small subset of firms and households valued increased incineration less positively, requesting compensation, on the whole the interviewees (with large differences between firms and households) had a net positive WTP for the provisions included in the NGP. Parametric estimation procedures enabled us to analyse the economic as well as the social and demographic factors affecting these results. These elements are useful for computing a value for the waste charge that also reflects external effects. Finally, we estimated the household income elasticity of WTP for the increase in SWC and found it to be less than one: environmental quality is not a luxury good. PMID:16387237

  11. [Image quality evaluation of new image reconstruction methods applying the iterative reconstruction].

    PubMed

    Takata, Tadanori; Ichikawa, Katsuhiro; Hayashi, Hiroyuki; Mitsui, Wataru; Sakuta, Keita; Koshida, Haruka; Yokoi, Tomohiro; Matsubara, Kousuke; Horii, Jyunsei; Iida, Hiroji

    2012-01-01

    The purpose of this study was to evaluate the image quality of an iterative reconstruction method, iterative reconstruction in image space (IRIS), implemented in a 128-slice multi-detector computed tomography (MDCT) system, the Siemens Somatom Definition Flash (Definition). We evaluated image noise via the standard deviation (SD), as in previous studies, and in addition measured the modulation transfer function (MTF), the noise power spectrum (NPS), and perceptual low-contrast detectability using a water phantom containing a low-contrast object with a contrast of 10 Hounsfield units (HU), to evaluate whether the noise reduction of IRIS was effective. The SD and NPS were measured from images of a water phantom. The MTF was measured from images of a thin metal wire and of a bar pattern phantom with a bar contrast of 125 HU. The NPS of IRIS was lower than that of filtered back projection (FBP) in the middle- and high-frequency regions, and the SD values were reduced by 21%. The MTFs of IRIS and FBP measured with the wire phantom coincided precisely; however, for the bar pattern phantom, the MTF values of IRIS at 0.625 and 0.833 cycles/mm were lower than those of FBP. Despite the reduction of the SD and the NPS, the low-contrast detectability study indicated no significant difference between IRIS and FBP. From these results, it was demonstrated that IRIS reduces noise while exactly preserving high-contrast resolution, with slight degradation of middle-contrast resolution, and slightly, though not significantly, improves low-contrast detectability. PMID:22516592
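
    An NPS of the kind measured here can be sketched from uniform-phantom ROIs as the ensemble-averaged squared FFT magnitude of the de-meaned ROIs. This is one common textbook definition; the authors' exact processing (e.g. detrending, radial averaging) may differ:

```python
import numpy as np

def nps_2d(rois, pixel_spacing):
    """2-D noise power spectrum from an ensemble of uniform-phantom ROIs:
    the ensemble-averaged squared FFT magnitude of the de-meaned ROIs,
    scaled by pixel area / number of pixels (one common definition;
    detrending details vary between laboratories)."""
    rois = np.asarray(rois, dtype=np.float64)
    n_rois, ny, nx = rois.shape
    rois = rois - rois.mean(axis=(1, 2), keepdims=True)  # remove the mean
    spectra = np.abs(np.fft.fft2(rois)) ** 2             # per-ROI power
    return spectra.mean(axis=0) * pixel_spacing**2 / (nx * ny)

# Sanity check: for white noise of standard deviation sigma, integrating
# the NPS back over frequency recovers the pixel variance sigma**2.
rng = np.random.default_rng(4)
rois = rng.normal(0.0, 10.0, size=(32, 64, 64))   # 32 ROIs, sigma = 10 HU
nps = nps_2d(rois, pixel_spacing=0.5)
variance = float(nps.sum()) / (0.5**2 * 64 * 64)
print(variance)   # close to sigma**2 = 100
```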

  12. Using a NPWE model observer to assess suitable image quality for a digital mammography quality assurance programme.

    PubMed

    Monnin, P; Bochud, F O; Verdun, F R

    2010-01-01

    A method of objectively determining imaging performance for a mammography quality assurance programme for digital systems was developed. The method is based on assessing the visibility of a 0.2-mm spherical microcalcification using a quasi-ideal observer model. It requires measurement of the spatial resolution (modulation transfer function) and the noise power spectra of the systems. The contrast is measured using a 0.2-mm thick Al sheet and polymethylmethacrylate (PMMA) blocks. The minimal image quality was defined as that giving a target contrast-to-noise ratio (CNR) of 5.4. Several evaluations of this objective method for assessing image quality in mammography quality assurance programmes have been carried out on computed radiography (CR) and digital radiography (DR) mammography systems. The measurement gives the threshold CNR necessary to reach the minimum standard image quality required with regard to the visibility of a 0.2-mm microcalcification. This method may replace the CDMAM image evaluation and simplify the threshold contrast visibility test used in mammography quality assurance. PMID:20395413
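
    The threshold-CNR check can be illustrated with a minimal sketch. The ROI layout and the use of the background standard deviation as the noise term are assumptions (CNR definitions vary between protocols):

```python
import numpy as np

def cnr(signal_roi, background_roi):
    """Contrast-to-noise ratio: mean signal difference between a detail
    region (here, under the Al sheet) and the background, divided by the
    background standard deviation (the noise term is an assumption --
    exact definitions vary between protocols)."""
    signal_roi = np.asarray(signal_roi, dtype=np.float64)
    background_roi = np.asarray(background_roi, dtype=np.float64)
    contrast = signal_roi.mean() - background_roi.mean()
    return float(contrast / background_roi.std())

rng = np.random.default_rng(2)
background = rng.normal(100.0, 4.0, size=(50, 50))   # PMMA-only region
detail = rng.normal(125.0, 4.0, size=(20, 20))       # region under the Al sheet
value = cnr(detail, background)
print(value > 5.4)   # does this system meet the minimal-quality threshold?
```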

  13. Accept, distract, or reframe? An exploratory experimental comparison of strategies for coping with intrusive body image thoughts in anorexia nervosa and body dysmorphic disorder.

    PubMed

    Hartmann, Andrea S; Thomas, Jennifer J; Greenberg, Jennifer L; Rosenfield, Elizabeth H; Wilhelm, Sabine

    2015-02-28

    Negative body image is the hallmark of anorexia nervosa (AN) and body dysmorphic disorder (BDD). One aspect of body image, appearance-related thoughts, has been shown to be a major contributor to relapse; thus, further investigation of successful treatment strategies targeting these maladaptive thoughts is warranted. The present study tested an acceptance/mindfulness (AC) strategy, a cognitive restructuring (CR) strategy, and a distraction strategy with regard to their short-term effectiveness in reducing the frequency of thought occurrence and associated outcomes in participants with AN (n=20), BDD (n=21), and healthy controls (HC; n=22). Although all strategies led to a significant reduction in thought frequency, there was no group × strategy interaction effect. Positive affect increased in the BDD group through the AC strategy, but decreased in healthy controls. Acceptance of the thought increased with the CR strategy in AN, whereas that strategy seemed to work least well for BDD. Healthy controls showed the most acceptance when using distraction. Taken together, the study suggests that each strategy may have its benefits and that it would be worthwhile to further investigate the differential indication of the strategies with regard to diagnosis and individual factors. PMID:25530419

  14. Prolonged Length of Stay Is Not an Acceptable Alternative to Coded Complications in Assessing Hospital Quality in Elective Joint Arthroplasty.

    PubMed

    Lyman, Stephen; Fields, Kara G; Nocon, Allina A; Ricciardi, Benjamin F; Boettner, Friedrich

    2015-11-01

    We sought to determine if prolonged length of stay (pLOS) is an accurate measure of quality in total hip and knee arthroplasty (THA and TKA). Coded complications and pLOS for 5967 TKA and 4518 THA patients in our hospital discharged between 2009 and 2011 were analyzed. Of 727 patients with pLOS, only 170 also had a complication, yielding a sensitivity of 41.4% (95% CI: 36.7, 46.2) with a positive predictive value (PPV) of just 23.4% (95% CI: 20.3, 26.4). Specificity (94.5% [95% CI: 94.0, 94.9]) and negative predictive value (NPV) (97.5% [95% CI: 97.2, 97.8]) were high, due to the large number of patients without complications or pLOS. This suggests that risk-adjusted pLOS is an inadequate measure of patient safety in primary THA and TKA. PMID:26059501

  15. Outpatient treatment of low-risk venous thromboembolism with monotherapy oral anticoagulation: patient quality of life outcomes and clinician acceptance

    PubMed Central

    Kline, Jeffrey A; Kahler, Zachary P; Beam, Daren M

    2016-01-01

    Background Oral monotherapy anticoagulation has facilitated home treatment of venous thromboembolism (VTE) in outpatients. Objectives The aim of this study was to measure the efficacy and safety, as well as the patient and physician perceptions, produced by a protocol that selected low-risk VTE patients by the Hestia criteria and initiated home anticoagulation with an oral factor Xa antagonist. Methods Patients were administered the Venous Insufficiency Epidemiological and Economic Study Quality of life/Symptoms questionnaire [VEINEs QoL/Sym] and the physical component summary [PCS] from the Rand 36-Item Short Form Health Survey [SF36]. The primary outcomes were VTE recurrence and hemorrhage at 30 days. Secondary outcomes compared psychometric test scores between patients with deep vein thrombosis (DVT) and those with pulmonary embolism (PE). Patient perceptions were abstracted from written comments, and physician perceptions specific to PE outpatient treatment were obtained from a structured survey. Results From April 2013 to September 2015, 253 patients were treated, including 67 with PE. Within 30 days, 2/253 patients had recurrent DVT and 2/253 had major hemorrhage; all four had DVT at enrollment. The initial PCS scores did not differ between DVT and PE patients (37.2±13.9 and 38.0±12.1, respectively), and both groups showed similar improvement over the treatment period (42.2±12.9 and 43.4±12.7, respectively), consistent with prior literature. The most common adverse event was menorrhagia, present in 15% of women. Themes from patients' written responses reflected satisfaction with increased autonomy. Physicians' (N=116) comfort level with home treatment of PE increased 48% on a visual analog scale from before to after the protocol. Conclusion Hestia-negative VTE patients treated with oral monotherapy at home had low rates of VTE recurrence and bleeding, as well as quality-of-life measurements similar to prior reports. PMID:27143861

  16. Quality metric in matched Laplacian of Gaussian response domain for blind adaptive optics image deconvolution

    NASA Astrophysics Data System (ADS)

    Guo, Shiping; Zhang, Rongzhi; Yang, Yikang; Xu, Rong; Liu, Changhai; Li, Jisheng

    2016-04-01

    Adaptive optics (AO), in conjunction with subsequent postprocessing techniques, has markedly improved the resolution of turbulence-degraded images in ground-based astronomical observation and in the detection and identification of artificial space objects. However, important tasks in AO image postprocessing, such as frame selection, stopping iterative deconvolution, and algorithm comparison, commonly require manual intervention and cannot be performed automatically owing to the lack of a widely agreed-upon image quality metric. In this work, based on the Laplacian of Gaussian (LoG) local contrast feature detection operator, we propose a LoG-domain matching operation to derive effective and universal image quality statistics. We then extract two no-reference quality assessment indices in the matched LoG domain that can be used for a variety of postprocessing tasks. Three typical space object images with distinct structural features are tested to verify the consistency of the proposed metric with perceptual image quality through subjective evaluation.

  17. Comparison of image compression techniques for high quality based on properties of visual perception

    NASA Astrophysics Data System (ADS)

    Algazi, V. Ralph; Reed, Todd R.

    1991-12-01

    The growing interest and importance of high quality imaging has several roots: Imaging and graphics, or more broadly multimedia, as the predominant means of man-machine interaction on computers, and the rapid maturing of advanced television technology. Because of their economic importance, proposed advanced television standards are being discussed and evaluated for rapid adoption. These advanced standards are based on well known image compression techniques, used for very low bit rate video communications as well. In this paper, we examine the expected improvement in image quality that advanced television and imaging techniques should bring about. We then examine and discuss the data compression techniques which are commonly used, to determine if they are capable of providing the achievable gain in quality, and to assess some of their limitations. We also discuss briefly the potential of these techniques for very high quality imaging and display applications, which extend beyond the range of existing and proposed television standards.

  18. Evaluation of image quality of MRI data for brain tumor surgery

    NASA Astrophysics Data System (ADS)

    Heckel, Frank; Arlt, Felix; Geisler, Benjamin; Zidowitz, Stephan; Neumuth, Thomas

    2016-03-01

    3D medical images are important components of modern medicine. Their usefulness for the physician depends on their quality, though. Only high-quality images allow accurate and reproducible diagnosis and appropriate support during treatment. We have analyzed 202 MRI images for brain tumor surgery in a retrospective study. Both an experienced neurosurgeon and an experienced neuroradiologist rated each available image with respect to its role in the clinical workflow, its suitability for this specific role, various image quality characteristics, and imaging artifacts. Our results show that MRI data acquired for brain tumor surgery does not always fulfill the required quality standards and that there is a significant disagreement between the surgeon and the radiologist, with the surgeon being more critical. Noise, resolution, as well as the coverage of anatomical structures were the most important criteria for the surgeon, while the radiologist was mainly disturbed by motion artifacts.

  19. Perceptual difference paradigm for analyzing image quality of fast MRI techniques

    NASA Astrophysics Data System (ADS)

    Wilson, David L.; Salem, Kyle A.; Huo, Donglai; Duerk, Jeffrey L.

    2003-05-01

    We are developing a method to objectively quantify image quality and applying it to the optimization of fast magnetic resonance imaging methods. In MRI, to capture the details of a dynamic process, it is critical to have both high temporal and spatial resolution. However, there is typically a trade-off between the two, making the sequence engineer choose to optimize imaging speed or spatial resolution. In response to this problem, a number of different fast MRI techniques have been proposed. To evaluate different fast MRI techniques quantitatively, we use a perceptual difference model (PDM) that incorporates various components of the human visual system. The PDM was validated using subjective image quality ratings by naive observers and task-based measures as defined by radiologists. Using the PDM, we investigated the effects of various imaging parameters on image quality and quantified the degradation due to novel imaging techniques including keyhole, keyhole Dixon fat suppression, and spiral imaging. Results have provided significant information about imaging time versus quality tradeoffs aiding the MR sequence engineer. The PDM has been shown to be an objective tool for measuring image quality and can be used to determine the optimal methodology for various imaging applications.

  20. Digital mammography--DQE versus optimized image quality in clinical environment: an on site study

    NASA Astrophysics Data System (ADS)

    Oberhofer, Nadia; Fracchetti, Alessandro; Springeth, Margareth; Moroder, Ehrenfried

    2010-04-01

    The intrinsic quality of the detection system of 7 different digital mammography units (5 direct radiography, DR; 2 computed radiography, CR), expressed by the DQE, was compared with their image quality/dose performance in clinical use. DQE measurements followed IEC 62220-1-2, using a tungsten test object for MTF determination. For image quality assessment, two methods were applied: 1) measurement of the contrast-to-noise ratio (CNR) according to the European guidelines, and 2) contrast-detail (CD) evaluation. The latter was carried out with the CDMAM phantom ver. 3.4 and the commercial software CDMAM Analyser ver. 1.1 (both Artinis) for automated image analysis. The overall image quality index IQFinv proposed by the software was validated, and correspondence between the two methods was demonstrated by establishing a linear correlation between CNR and IQFinv. All systems were optimized with respect to image quality and average glandular dose (AGD) within the constraints of automatic exposure control (AEC). For each unit, a good image quality level was defined by means of CD analysis, and the corresponding CNR value was taken as the target value. The goal was to achieve constant image quality (i.e., the target CNR value) at minimum dose across different PMMA phantom thicknesses. All DR systems exhibited higher DQE and significantly better image quality than the CR systems. Generally, switching, where available, to a target/filter combination with an x-ray spectrum of higher mean energy permitted dose savings at equal image quality. However, several systems did not allow the AEC to be modified so that the optimal radiographic technique could be applied in clinical use. The best image quality/dose ratio was achieved by a unit with an a-Se detector and W anode that had only recently become available on the market.
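
    The CNR measurement used for this optimization follows the European guidelines; a minimal sketch of one common formulation (mean signal difference over pooled ROI noise, with hypothetical ROI placement and pixel values) is:

```python
import numpy as np

def cnr(image, signal_roi, background_roi):
    """One common contrast-to-noise formulation: the mean signal
    difference between an object ROI and a background ROI, divided by
    the pooled standard deviation of the two ROIs. ROIs are given as
    (row_slice, col_slice) tuples."""
    sig, bg = image[signal_roi], image[background_roi]
    noise = np.sqrt((sig.var() + bg.var()) / 2.0)
    return (sig.mean() - bg.mean()) / noise

# Synthetic flat-field image with a brighter insert (hypothetical numbers).
img = np.full((100, 100), 100.0)
img += np.random.default_rng(1).normal(0.0, 2.0, img.shape)  # noise floor
img[40:60, 40:60] += 20.0                                    # contrast object
value = cnr(img, (slice(40, 60), slice(40, 60)), (slice(0, 20), slice(0, 20)))
assert value > 5.0  # roughly 20 / 2, well above this threshold
```

    A target CNR such as the one derived from the CD analysis can then be checked at each PMMA thickness while the dose is lowered.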

  1. Lesion insertion in projection domain for computed tomography image quality assessment

    NASA Astrophysics Data System (ADS)

    Chen, Baiyu; Ma, Chi; Yu, Zhicong; Leng, Shuai; Yu, Lifeng; McCollough, Cynthia

    2015-03-01

    To perform task-based image quality assessment in CT, it is desirable to have a large number of realistic patient images with known diagnostic truth. One effective way to achieve this objective is to create hybrid images that combine patient images with simulated lesions. Because conventional hybrid images generated in the image domain fail to reflect the impact of scan and reconstruction parameters on lesion appearance, this study explored a projection-domain approach. Liver lesion models were forward projected according to the geometry of a commercial CT scanner to acquire lesion projections. The lesion projections were then inserted into patient projections (decoded from commercial CT raw data with the assistance of the vendor) and reconstructed to acquire hybrid images. To validate the accuracy of the forward projection geometry, simulated images reconstructed from the forward projections of a digital ACR phantom were compared to physically acquired ACR phantom images. To validate the hybrid images, lesion models were inserted into patient images and visually assessed. Results showed that the simulated phantom images and the physically acquired phantom images had great similarity in terms of HU accuracy and high-contrast resolution. The lesions in the hybrid images had a realistic appearance and merged naturally into the liver background. In addition, the inserted lesions demonstrated reconstruction-parameter-dependent appearance. Compared to the conventional image-domain approach, our method enables more realistic hybrid images for image quality assessment.
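
    As a toy 2-D illustration of the projection-domain workflow (scikit-image's parallel-beam radon transform standing in for the commercial fan/cone scanner geometry, with a synthetic lesion model):

```python
import numpy as np
from skimage.transform import radon, iradon

theta = np.linspace(0.0, 180.0, 180, endpoint=False)

# "Patient" background and a small disc-shaped lesion model (both synthetic).
patient = np.zeros((128, 128))
patient[32:96, 32:96] = 1.0
yy, xx = np.mgrid[:128, :128]
lesion = 0.5 * (((yy - 64) ** 2 + (xx - 64) ** 2) < 6 ** 2)

patient_sino = radon(patient, theta=theta)       # stands in for decoded raw data
lesion_sino = radon(lesion, theta=theta)         # forward-projected lesion model
hybrid = iradon(patient_sino + lesion_sino, theta=theta)  # hybrid reconstruction

# The inserted lesion shows up at its intended location in the reconstruction.
assert hybrid[64, 64] > hybrid[40, 40] + 0.2
```

    Because the lesion is added before reconstruction, its appearance inherits the reconstruction kernel and other parameters, which is exactly the advantage of the projection-domain approach.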

  2. Improving Appropriateness and Quality in Cardiovascular Imaging: A Review of the Evidence.

    PubMed

    Bhattacharyya, Sanjeev; Lloyd, Guy

    2015-12-01

    High-quality cardiovascular imaging requires a structured process to ensure appropriate patient selection, accurate and reproducible data acquisition, and timely reporting which answers clinical questions and improves patient outcomes. Several guidelines provide frameworks to assess quality. This article reviews interventions to improve quality in cardiovascular imaging, including methods to reduce inappropriate testing, improve accuracy, reduce interobserver variability, and reduce diagnostic and reporting errors. PMID:26628582

  3. Reducing radiation dose without compromising image quality in preoperative perforator flap imaging with CTA using ASIR technology.

    PubMed

    Niumsawatt, Vachara; Debrotwir, Andrew N; Rozen, Warren Matthew

    2014-01-01

    Computed tomographic angiography (CTA) has become a mainstay in preoperative perforator flap planning in the modern era of reconstructive surgery. However, the increased use of CTA does raise the concern of radiation exposure to patients. Several techniques have been developed to decrease radiation dosage without compromising image quality, with varying results. The most recent advance is in the improvement of image reconstruction using an adaptive statistical iterative reconstruction (ASIR) algorithm. We sought to evaluate the image quality of ASIR in preoperative deep inferior epigastric perforator (DIEP) flap surgery, through a direct comparison with conventional filtered back projection (FBP) images. A prospective review of 60 consecutive ASIR and 60 consecutive FBP CTA images using similar protocol (except for radiation dosage) was undertaken, analyzed by 2 independent reviewers. In both groups, we were able to accurately identify axial arteries and their perforators. Subjective analysis of image quality demonstrated no statistically significant difference between techniques. ASIR can thus be used for preoperative imaging with similar image quality to FBP, but with a 60% reduction in radiation delivery to patients. PMID:25058789

  4. Reducing Radiation Dose Without Compromising Image Quality in Preoperative Perforator Flap Imaging With CTA Using ASIR Technology

    PubMed Central

    Niumsawatt, Vachara; Debrotwir, Andrew N.; Rozen, Warren Matthew

    2014-01-01

    Computed tomographic angiography (CTA) has become a mainstay in preoperative perforator flap planning in the modern era of reconstructive surgery. However, the increased use of CTA does raise the concern of radiation exposure to patients. Several techniques have been developed to decrease radiation dosage without compromising image quality, with varying results. The most recent advance is in the improvement of image reconstruction using an adaptive statistical iterative reconstruction (ASIR) algorithm. We sought to evaluate the image quality of ASIR in preoperative deep inferior epigastric perforator (DIEP) flap surgery, through a direct comparison with conventional filtered back projection (FBP) images. A prospective review of 60 consecutive ASIR and 60 consecutive FBP CTA images using similar protocol (except for radiation dosage) was undertaken, analyzed by 2 independent reviewers. In both groups, we were able to accurately identify axial arteries and their perforators. Subjective analysis of image quality demonstrated no statistically significant difference between techniques. ASIR can thus be used for preoperative imaging with similar image quality to FBP, but with a 60% reduction in radiation delivery to patients. PMID:25058789

  5. Exposure reduction and image quality in orthodontic radiology: a review of the literature

    SciTech Connect

    Taylor, T.S.; Ackerman, R.J. Jr.; Hardman, P.K.

    1988-01-01

    This article summarizes the use of rare earth screen technology to achieve high-quality panoramic and cephalometric radiographs with sizable reductions in patient radiation dosage. Collimation, shielding, quality control, and darkroom procedures are reviewed to further reduce patient risk and improve image quality. 34 references.

  6. SU-E-I-21: Dosimetric Characterization and Image Quality Evaluation of the AIRO Mobile CT Scanner

    SciTech Connect

    Weir, V; Zhang, J; Bruner, A

    2015-06-15

    Purpose: The AIRO Mobile CT system was recently introduced, overcoming limitations of existing CT, CT fluoroscopy, and intraoperative O-arm systems. With an integrated table and a large-diameter bore, the system is suitable for cranial, spine, and trauma procedures, making it a highly versatile intraoperative imaging system. This study investigated the radiation dose and image quality of the AIRO and compared them with those of a routine CT scanner. Methods: Radiation dose was measured using a conventional 100-mm pencil ionization chamber and CT polymethylmethacrylate (PMMA) body and head phantoms. Image quality was evaluated with a CATPHAN 500 phantom. Spatial resolution, contrast-to-noise ratio (CNR), modulation transfer function (MTF), and normalized noise power spectrum (NNPS) were analyzed. Results: Under identical technique conditions, the radiation dose (mGy/mAs) from the AIRO mobile CT system is higher than that from a 64-slice CT scanner. MTFs show that both the Soft and Standard filters of the AIRO system lose resolution quickly compared with the Sensation 64-slice CT. With the Standard kernel, the spatial resolution of the AIRO system is 3 lp/cm and 4 lp/cm for the body and head FOVs, respectively. NNPS curves show low-frequency noise due to ring-like artifacts. Because of the higher dose in terms of mGy/mAs at both head and body FOVs, the CNR of the AIRO system is higher than that of the Siemens scanner. However, detectability of low-contrast objects is poorer with the AIRO owing to ring artifacts at the target locations. Conclusion: For image-guided surgery applications, the AIRO has some advantages over a routine CT scanner owing to its versatility, large bore size, and acceptable image quality. Our evaluation of its physical performance should help guide future improvements.

  7. Recent developments in hyperspectral imaging for assessment of food quality and safety.

    PubMed

    Huang, Hui; Liu, Li; Ngadi, Michael O

    2014-01-01

    Hyperspectral imaging, which combines imaging and spectroscopic technology, is rapidly gaining ground as a non-destructive, real-time detection tool for food quality and safety assessment. Hyperspectral imaging can be used to simultaneously obtain large amounts of spatial and spectral information on the objects being studied. This paper provides a comprehensive review of recent developments in hyperspectral imaging applications to food and food products. Potential applications and future directions of hyperspectral imaging for food quality and safety control are also discussed. PMID:24759119

  8. Recent Developments in Hyperspectral Imaging for Assessment of Food Quality and Safety

    PubMed Central

    Huang, Hui; Liu, Li; Ngadi, Michael O.

    2014-01-01

    Hyperspectral imaging, which combines imaging and spectroscopic technology, is rapidly gaining ground as a non-destructive, real-time detection tool for food quality and safety assessment. Hyperspectral imaging can be used to simultaneously obtain large amounts of spatial and spectral information on the objects being studied. This paper provides a comprehensive review of recent developments in hyperspectral imaging applications to food and food products. Potential applications and future directions of hyperspectral imaging for food quality and safety control are also discussed. PMID:24759119

  9. An electron beam imaging system for quality assurance in IORT

    NASA Astrophysics Data System (ADS)

    Casali, F.; Rossi, M.; Morigi, M. P.; Brancaccio, R.; Paltrinieri, E.; Bettuzzi, M.; Romani, D.; Ciocca, M.; Tosi, G.; Ronsivalle, C.; Vignati, M.

    2004-01-01

    Intraoperative radiation therapy is a special radiotherapy technique that enables a high dose of radiation to be delivered in a single fraction during oncological surgery. The major stumbling block to large-scale application of the technique is the transfer of the patient, with an open wound, from the operating room to the radiation therapy bunker, with the consequent organisational problems and increased risk of infection. To overcome these limitations, a new kind of linear accelerator, the Novac 7, conceived for direct use in the surgical room, has become available in the last few years. The Novac 7 can deliver electron beams of different energies (3, 5, 7 and 9 MeV) at a high dose rate (up to 20 Gy/min). The aim of this work, funded by ENEA in the framework of a research contract, is the development of an innovative system for on-line measurement of 2D dose distributions and electron beam characterisation before radiotherapy treatment with the Novac 7. The system is made up of the following components: (a) an electron-light converter; (b) a 14-bit cooled CCD camera; (c) a personal computer with custom software for image acquisition and processing. The performance of the prototype has been characterised experimentally with different electron-light converters. Several tests assessed the detector response as a function of pulse number and electron beam energy. Finally, the experimental beam profiles have been compared with data acquired using other dosimetric techniques. The results show that the developed system is suitable for fast quality assurance measurements and verification of 2D dose distributions.

  10. Image Quality Analysis of Eyes Undergoing LASER Refractive Surgery

    PubMed Central

    Sarkar, Samrat; Vaddavalli, Pravin Krishna; Bharadwaj, Shrikant R.

    2016-01-01

    Laser refractive surgery for myopia increases the eye's higher-order wavefront aberrations (HOAs). However, little is known about the impact of such optical degradation on the post-operative image quality (IQ) of these eyes. This study determined the relation between HOAs and IQ parameters (peak IQ, the dioptric focus that maximized IQ, and depth of focus) derived from psychophysical (logMAR acuity) and computational (logVSOTF) through-focus curves in 45 subjects (18 to 31 yrs) before and 1 month after refractive surgery, and in 40 age-matched emmetropic controls. Computationally derived peak IQ and its best focus were negatively correlated with the RMS deviation of all HOAs (HORMS) (r ≤ -0.5; p<0.001 for all). Computational depth of focus was positively correlated with HORMS (r ≥ 0.55; p<0.001 for all) and negatively correlated with peak IQ (r ≤ -0.8; p<0.001 for all). All IQ parameters related to logMAR acuity were poorly correlated with HORMS (|r| ≤ 0.16; p>0.16 for all). The increase in HOAs after refractive surgery is therefore associated with a decline in peak IQ and the persistence of this sub-standard IQ over a larger dioptric range than before surgery and than in age-matched controls. This optical deterioration, however, does not appear to significantly alter psychophysical IQ, suggesting minimal impact of refractive surgery on the subjects' ability to resolve spatial details and their tolerance to blur. PMID:26859302

  11. Visible to SWIR hyperspectral imaging for produce safety and quality evaluation

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Hyperspectral imaging techniques, combining the advantages of spectroscopy and imaging, have found wider use in food quality and safety evaluation applications during the past decade. In light of the prevalent use of hyperspectral imaging techniques in the visible to near-infrared (VNIR: 400 -1000 n...

  12. How do we watch images? A case of change detection and quality estimation

    NASA Astrophysics Data System (ADS)

    Radun, Jenni; Leisti, Tuomas; Virtanen, Toni; Nyman, Göte

    2012-01-01

    The most common tasks in subjective image estimation are change detection (a detection task) and image quality estimation (a preference task). We examined how the task influences gaze behavior by comparing detection and preference tasks. The eye movements of 16 naïve observers were recorded, with 8 observers in each task. The setting was a flicker paradigm, in which observers see a non-manipulated image, a manipulated version of the image, and again the non-manipulated image, and estimate the difference they perceived between them. The material was photographic, with different image distortions and contents. To examine the spatial distribution of fixations, we defined the regions of interest using a memory task and calculated information entropy to estimate how concentrated the fixations were on the image plane. The quality task was faster, needed fewer fixations, and its first eight fixations were more concentrated on certain image areas than in the change detection task. The bottom-up influences of the image also caused more variation in gaze behavior in the quality estimation task than in the change detection task. The results show that quality estimation is faster and that the regions of interest are emphasized more in certain images, compared with the change detection task, a scan task in which the whole image is always examined thoroughly. In conclusion, in subjective image estimation studies it is important to consider the task.

  13. Quality evaluation of adaptive optical image based on DCT and Rényi entropy

    NASA Astrophysics Data System (ADS)

    Xu, Yuannan; Li, Junwei; Wang, Jing; Deng, Rong; Dong, Yanbing

    2015-04-01

    Adaptive optical telescopes play an increasingly important role in ground-based detection systems, and the adaptive optical images they produce are so numerous that a suitable quality evaluation method is needed to select good-quality images automatically and save human effort. Adaptive optical images are no-reference images: no pristine original is available for comparison. In this paper, a new logarithmic evaluation method for adaptive optical images, based on the discrete cosine transform (DCT) and Rényi entropy, is proposed. Using the DCT with a one- or two-dimensional window, the statistical properties of the Rényi entropy of images are studied. Directional Rényi entropy maps of an input image, each containing different information content, are obtained, and their mean values are calculated. For image quality evaluation, the directional Rényi entropy and its standard deviation over a region of interest are selected as indicators of image anisotropy, and the standard deviation of the directional Rényi entropy is taken as the quality evaluation value for an adaptive optical image. Experimental results show that the quality ranking produced by the proposed method matches visual inspection well.
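
    The abstract omits the exact directional windows, but the core computation, a Rényi entropy over normalized DCT energy, can be sketched roughly as follows (a global 2-D DCT and order α = 3 are our assumptions, not the paper's configuration):

```python
import numpy as np
from scipy.fft import dctn
from scipy.ndimage import gaussian_filter

def renyi_entropy(p, alpha=3):
    """Rényi entropy of order alpha for a discrete distribution p."""
    p = p[p > 0]
    return float(np.log(np.sum(p ** alpha)) / (1 - alpha))

def dct_renyi_score(image, alpha=3):
    """Normalize the squared DCT coefficients into a distribution and
    return its Rényi entropy. Blur concentrates energy at low spatial
    frequencies, so blurred frames score lower than sharp ones."""
    c = dctn(np.asarray(image, dtype=float), norm='ortho') ** 2
    return renyi_entropy((c / c.sum()).ravel(), alpha)

rng = np.random.default_rng(2)
sharp = rng.random((64, 64))
assert dct_renyi_score(sharp) > dct_renyi_score(gaussian_filter(sharp, 2.0))
```

    The paper's directional maps would apply the same entropy to windowed, orientation-selective DCT coefficients rather than to the full spectrum.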

  14. No-Reference Image Quality Assessment for ZY3 Imagery in Urban Areas Using Statistical Model

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Cui, W. H.; Yang, F.; Wu, Z. C.

    2016-06-01

    More and more high-spatial-resolution satellite images are being produced as satellite technology improves. However, image quality is not always satisfactory for applications. Because of complicated atmospheric conditions and the complex radiation transmission process during imaging, the images often deteriorate. In order to assess the quality of remote sensing images over urban areas, we propose a general-purpose image quality assessment method based on feature extraction and machine learning. We use two types of features at multiple scales: one derived from the shape of the histogram, the other from natural scene statistics based on the generalized Gaussian distribution (GGD). A 20-D feature vector is extracted for each scale and is assumed to capture the quality degradation characteristics of remote sensing images. We use an SVM to learn to predict image quality scores from these features. For evaluation, we constructed a medium-scale dataset for training and testing, with human subjects providing opinions of the degraded images. We used ZY3 satellite images over the Wuhan area (a city in China) to conduct experiments. Experimental results show good correlation between the predicted scores and subjective perception.
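
    A core piece of such natural-scene-statistics features is fitting a generalized Gaussian distribution to coefficient data. A standard moment-matching estimate of the GGD shape parameter (this particular estimator is our illustration, not necessarily the authors' exact procedure) looks like:

```python
import numpy as np
from scipy.special import gamma
from scipy.optimize import brentq

def fit_ggd_shape(x):
    """Moment-matching estimate of the GGD shape parameter b: solve
    Gamma(2/b)^2 / (Gamma(1/b) * Gamma(3/b)) = E|x|^2 / E[x^2]."""
    rho = np.mean(np.abs(x)) ** 2 / np.mean(x ** 2)
    f = lambda b: gamma(2 / b) ** 2 / (gamma(1 / b) * gamma(3 / b)) - rho
    return brentq(f, 0.05, 10.0)  # bracket the root of the moment equation

# Gaussian data should yield a shape estimate near 2.
x = np.random.default_rng(3).normal(size=200000)
beta = fit_ggd_shape(x)
assert 1.8 < beta < 2.2
```

    Shape and scale estimates at each scale would then be stacked into the 20-D feature vector and fed to the SVM regressor.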

  15. SU-E-I-43: Pediatric CT Dose and Image Quality Optimization

    SciTech Connect

    Stevens, G; Singh, R

    2014-06-01

    Purpose: To design an approach for optimizing radiation dose and image quality in pediatric CT imaging, and to evaluate its expected performance. Methods: A methodology was designed to quantify relative image quality as a function of CT acquisition parameters. Image contrast and image noise were used to indicate the expected conspicuity of objects, and a wide-cone system was used to minimize scan time for motion avoidance. A decision framework was designed to select acquisition parameters based on a weighted combination of image quality and dose. Phantom tests were used to acquire images at multiple techniques to demonstrate the expected contrast, noise, and dose. Anthropomorphic phantoms with contrast inserts were imaged on a 160-mm CT system with tube voltages as low as 70 kVp. Previously acquired clinical images were used in conjunction with simulation tools to emulate images at different tube voltages and currents to assess human observer preferences. Results: Examination of image contrast, noise, dose, and tube/generator capabilities indicates that the optimization depends on the clinical task and object size. Phantom experiments confirm that system modeling can be used to achieve the desired image quality and noise performance. Observer studies indicate that clinical utilization of this optimization requires a modified approach to achieve the desired performance. Conclusion: This work indicates the potential to optimize radiation dose and image quality for pediatric CT imaging. In addition, the methodology can be used in an automated parameter-selection feature that suggests techniques given a limited number of user inputs. G Stevens and R Singh are employees of GE Healthcare.
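
    The weighted image-quality/dose selection can be illustrated with a toy decision rule (the candidate techniques, weights, and CNR target below are all hypothetical, not values from the study):

```python
# Hypothetical candidate techniques: (kVp, mAs, predicted CNR, dose in mGy).
candidates = [
    (70, 80, 9.0, 1.2),
    (80, 60, 10.5, 1.6),
    (100, 40, 11.0, 2.4),
]

def score(cnr, dose, w_quality=1.0, w_dose=2.0, cnr_target=10.0):
    """Weighted trade-off: penalize any shortfall from a target CNR
    and penalize dose. Weights and target are illustrative only."""
    return w_quality * max(0.0, cnr_target - cnr) + w_dose * dose

# Pick the candidate with the lowest combined penalty.
best = min(candidates, key=lambda c: score(c[2], c[3]))
assert best[0] == 80  # the mid-dose technique wins under these weights
```

    An automated parameter-selection feature, as described in the conclusion, would evaluate such a rule over the scanner's allowed technique space given a few user inputs (patient size, clinical task).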

  16. A comparison of consumer sensory acceptance, purchase intention, and willingness to pay for high quality United States and Spanish beef under different information scenarios.

    PubMed

    Beriain, M J; Sánchez, M; Carr, T R

    2009-10-01

    Tests were performed to identify variation across consumer evaluation ratings for 2 types of beef (Spanish yearling bull beef and US Choice and Prime beef), using 3 information levels (blind scores; muscle fat content + production conditions; and all production data including geographical origin) and 3 consumer evaluation ratings (hedonic rating, willingness to pay, and purchase intention). Further testing was carried out to assess the extent to which expert evaluations converged with those of untrained consumers. Taste panel tests involving 290 consumers were conducted in Navarra, a region in northern Spain. The beef samples were 20 loins of Pyrenean breed yearling bulls that had been born and raised on private farms located in this Spanish region and 20 strip loins from high quality US beef that ranged from high Choice to average Prime US quality grades. The Spanish beef were slaughtered at 507 +/- 51 kg of BW and 366 +/- 23 d of age. The US beef proved more acceptable to consumers and received greater ratings from the trained panel, with greater scores for juiciness (3.33), tenderness (3.33), flavor (3.46), and fat content (5.83) than for Spanish beef (2.77, 2.70, 3.14, 1.17). The differences in sensory variable rating were more pronounced for the Spanish beef than for the US beef, always increasing with the level of information. The variation in the ratings across different information levels was statistically significant in the case of the Spanish beef, whereas the variation observed in the ratings of the US beef was highly significant in the willingness of consumers to pay a premium. Consumers who appreciated greater quality were also more willing to pay for the additional level of quality. PMID:19542506

  17. Local homogeneity combined with DCT statistics to blind noisy image quality assessment

    NASA Astrophysics Data System (ADS)

    Yang, Lingxian; Chen, Li; Chen, Heping

    2015-03-01

    In this paper a novel method for blind noisy-image quality assessment is proposed. First, because the human visual system (HVS) is more sensitive to locally smooth areas in a noisy image, an adaptive local homogeneous block selection algorithm is proposed to construct a new image, termed the homogeneity blocks (HB), based on per-pixel characteristics. Second, the discrete cosine transform (DCT) is applied to each HB, and the high-frequency components are used to estimate the image noise level. Finally, a modified peak signal-to-noise ratio (MPSNR) image quality assessment approach is proposed, based on analysis of changes in the DCT kurtosis distribution together with the noise level estimated above. Simulations show that the quality scores produced by the proposed algorithm correlate well with human perception of quality and are stable.
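
    The homogeneous-block idea can be sketched as follows: keep the lowest-variance blocks as "homogeneous" and estimate the noise level from their high-frequency DCT coefficients (the block size, keep fraction, and frequency mask are hypothetical choices, not the paper's):

```python
import numpy as np
from scipy.fft import dctn

def estimate_noise_sigma(image, block=8, keep_frac=0.25):
    """Keep the lowest-variance blocks as homogeneous (HB), then
    estimate noise from the spread of their high-frequency DCT
    coefficients, where image structure contributes least."""
    img = np.asarray(image, dtype=float)
    h, w = img.shape
    blocks = [img[i:i + block, j:j + block]
              for i in range(0, h - block + 1, block)
              for j in range(0, w - block + 1, block)]
    blocks.sort(key=np.var)                                  # smoothest first
    blocks = blocks[: max(1, int(len(blocks) * keep_frac))]  # the HB set
    mask = np.add.outer(np.arange(block), np.arange(block)) >= block
    hf = [dctn(b, norm='ortho')[mask] for b in blocks]       # high frequencies
    return float(np.std(np.concatenate(hf)))

# The estimate grows with the injected noise level on a smooth test image.
rng = np.random.default_rng(4)
base = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
s1 = estimate_noise_sigma(base + rng.normal(0.0, 0.05, (64, 64)))
s2 = estimate_noise_sigma(base + rng.normal(0.0, 0.20, (64, 64)))
assert s2 > s1
```

    Under an orthonormal DCT, white noise of standard deviation σ spreads equally over all coefficients, so the high-frequency spread of smooth blocks tracks σ directly.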

  18. Image quality optimization, via application of contextual contrast sensitivity and discrimination functions

    NASA Astrophysics Data System (ADS)

    Fry, Edward; Triantaphillidou, Sophie; Jarvis, John; Gupta, Gaurav

    2015-01-01

    What is the best luminance contrast weighting-function for image quality optimization? Traditionally measured contrast sensitivity functions (CSFs) have often been used as weighting-functions in image quality and difference metrics. Such weightings have been shown to increase the sharpness and perceived quality of test images. We suggest that contextual CSFs (cCSFs) and contextual discrimination functions (cVPFs) should provide bases for further improvement, since these are measured directly from pictorial scenes, modeling threshold and suprathreshold sensitivities within the context of complex masking information. Image quality assessment is understood to require detection and discrimination of masked signals, making contextual sensitivity and discrimination functions directly relevant. In this investigation, test images are weighted with a traditional CSF, a cCSF, a cVPF, and a constant function. Controlled mutations of these functions are also applied as weighting-functions, seeking the optimal spatial frequency band weighting for quality optimization. Image quality, sharpness, and naturalness are then assessed in two-alternative forced-choice psychophysical tests. We show that maximal quality for our test images results from cCSFs and cVPFs mutated to boost contrast in the higher visible frequencies.
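
    Applying a CSF as a frequency-domain weighting can be sketched as follows. The classic Mannos-Sakrison band-pass curve stands in for the measured cCSF/cVPF weightings, and the pixels-to-cycles/degree mapping is a hypothetical viewing assumption:

```python
import numpy as np

def csf_weight(image, cycles_per_image=32.0):
    """Weight each spatial frequency of the image by a band-pass
    contrast sensitivity curve (classic Mannos-Sakrison form)."""
    h, w = image.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    f = np.hypot(fy, fx) * cycles_per_image   # pseudo cycles/degree
    csf = 2.6 * (0.0192 + 0.114 * f) * np.exp(-(0.114 * f) ** 1.1)
    return np.real(np.fft.ifft2(np.fft.fft2(image) * csf))

# A uniform field carries only DC energy, which the band-pass CSF attenuates.
flat = np.ones((32, 32))
out = csf_weight(flat)
assert np.allclose(out, 2.6 * 0.0192)
```

    The "controlled mutations" of the paper would correspond to reshaping this curve band by band before applying the same frequency-domain multiplication.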

  19. Design and image quality results from volumetric CT with a flat-panel imager

    NASA Astrophysics Data System (ADS)

    Ross, William; Basu, Samit; Edic, Peter M.; Johnson, Mark; Pfoh, Armin H.; Rao, Ramakrishna; Ren, Baorui

    2001-06-01

    Preliminary modulation transfer function (MTF) and low-contrast detectability (LCD) results obtained on several volumetric computed tomography (VCT) systems employing amorphous flat-panel technology are presented. Constructed around 20-cm x 20-cm, 200-µm-pitch amorphous silicon x-ray detectors, the prototypes use standard vascular or CT x-ray sources. Data were obtained from closed-gantry, benchtop and C-arm-based topologies, over a full 360 degrees of rotation about the target object. The field of view of the devices is approximately 15 cm, with a magnification of 1.25-1.5, providing isotropic resolution at isocenter of 133-160 µm. Acquisitions were reconstructed using the FDK algorithm, modified by motion corrections also developed at GE. Image quality data were obtained using both industry-standard and custom resolution phantoms as targets. Scanner output is compared, on both a projection and a reconstruction basis, against analogous output from a dedicated simulation package, also developed at GE. The measured MTF performance indicates a significant advance in isotropic image resolution over commercially available systems. LCD results have been obtained, using industry-standard phantoms, spanning a contrast range of 0.3-1%. Both MTF and LCD measurements agree with simulated data.
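    The quoted isocenter resolution is consistent with the detector pitch divided by the geometric magnification; a one-line check (the helper name is ours, not from the paper):

```python
def isocenter_resolution_um(pitch_um, magnification):
    # A detector element of pitch_um projects back to the isocenter
    # demagnified by the geometric magnification factor.
    return pitch_um / magnification

lo = isocenter_resolution_um(200, 1.5)   # ~133 µm
hi = isocenter_resolution_um(200, 1.25)  # 160 µm
```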

  20. Exploiting the multiplicative nature of fluoroscopic image stochastic noise to enhance calcium imaging recording quality.

    PubMed

    Esposti, Federico; Ripamonti, Maddalena; Signorini, Maria G

    2009-01-01

    One of the main problems affecting fluoroscopic imaging is the difficulty of coupling the recorded activity with morphological information: understanding fluorescence events in relation to the internal structure of the cell can be very difficult. To this end, we developed a new method that maximizes the quality of the fluoroscopic movie. The method (Maximum Intensity Enhancement, MIE) works as follows: considering all the frames that compose the fluoroscopic movie, the algorithm extracts, for each pixel, the maximal brightness value attained across all frames. These values are collected in a maximum intensity matrix. The method then projects the target-molecule oscillations present in the DeltaF/F(0) movie onto the maximum intensity matrix. This is done by creating an RGB movie, assigning the normalized (DeltaF/F(0)) activity to a single channel, and reproducing the maximum intensity matrix on all frames using the remaining color channels. Applying this method to fluoroscopic calcium imaging of astrocyte cultures made it substantially easier to discern the internal and external structure of the cells. PMID:19964305
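    The MIE steps described above can be sketched in NumPy. The choice of baseline F0 as the temporal mean is an assumption (the abstract does not fix how F0 is obtained), and the channel assignment (activity on red, structure on green/blue) is one possible layout.

```python
import numpy as np

def mie_movie(frames, f0=None, eps=1e-6):
    # frames: (T, H, W) stack of fluorescence frames.
    # 1) Per-pixel maximum brightness across all frames: a static
    #    "maximum intensity matrix" carrying the cell morphology.
    max_img = frames.max(axis=0)
    # 2) Normalized activity DeltaF/F0; F0 defaults to the temporal
    #    mean (an assumption, see lead-in).
    if f0 is None:
        f0 = frames.mean(axis=0)
    dff = (frames - f0) / (f0 + eps)
    # Rescale both signals to [0, 1] for display.
    def norm(a):
        return (a - a.min()) / (a.max() - a.min() + eps)
    # 3) RGB movie: activity on the red channel, the maximum intensity
    #    matrix repeated on green and blue in every frame.
    rgb = np.empty(frames.shape + (3,))
    rgb[..., 0] = norm(dff)
    rgb[..., 1] = norm(max_img)[None, :, :]
    rgb[..., 2] = rgb[..., 1]
    return rgb
```

    The result is a (T, H, W, 3) movie in which calcium oscillations appear as color modulation superimposed on the static morphology image.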

  1. A conceptual study of automatic and semi-automatic quality assurance techniques for ground image processing

    NASA Technical Reports Server (NTRS)

    1983-01-01

    This report summarizes the results of a study conducted by Engineering and Economics Research (EER), Inc. under NASA Contract Number NAS5-27513. The study involved the development of preliminary concepts for automatic and semi-automatic quality assurance (QA) techniques for ground image processing. A distinction is made between quality assessment and the more comprehensive quality assurance, which additionally includes decision making and system feedback control in response to quality assessment.

  2. Quality Index for Stereoscopic Images by Separately Evaluating Adding and Subtracting