Science.gov

Sample records for achievable image quality

  1. Achieving quality in cardiovascular imaging: proceedings from the American College of Cardiology-Duke University Medical Center Think Tank on Quality in Cardiovascular Imaging.

    PubMed

    Douglas, Pamela; Iskandrian, Ami E; Krumholz, Harlan M; Gillam, Linda; Hendel, Robert; Jollis, James; Peterson, Eric; Chen, Jersey; Masoudi, Frederick; Mohler, Emile; McNamara, Robert L; Patel, Manesh R; Spertus, John

    2006-11-21

    Cardiovascular imaging has enjoyed both rapid technological advances and sustained growth, yet less attention has been focused on quality than in other areas of cardiovascular medicine. To address this deficit, representatives from cardiovascular imaging societies, private payers, government agencies, the medical imaging industry, and experts in quality measurement met, and this report provides an overview of the discussions. A consensus definition of quality in imaging and a convergence of opinion on quality measures across imaging modalities were achieved; these are intended to be the start of a process culminating in the development, dissemination, and adoption of quality measures for all cardiovascular imaging modalities.

  2. Achieving consistent image quality with dose optimization in 64-row multidetector computed tomography prospective ECG gated coronary calcium scoring.

    PubMed

    Pan, Zilai; Pang, Lifang; Li, Jianying; Zhang, Huan; Yang, Wenjie; Ding, Bei; Chai, Weimin; Chen, Kemin; Yao, Weiwu

    2011-04-01

    To evaluate the clinical value of a body mass index (BMI) based tube current (mA) selection method for obtaining consistent image quality with dose optimization in MDCT prospective ECG gated coronary calcium scoring. A formula for selecting mA to achieve the desired image quality based on patient BMI was established using a control group (A) of 200 MDCT cardiac patients with a standard scan protocol. One hundred patients in Group B were scanned with this BMI-dependent mA to achieve a desired noise level of 18 HU at 2.5 mm slice thickness. The CTDIvol and image noise on the ascending aorta for the two groups were recorded. Two experienced radiologists quantitatively evaluated the image quality using scores of 1-4, with 4 being the highest. The image quality scores had no statistical difference (P = 0.71), at 3.89 ± 0.32 and 3.87 ± 0.34, respectively, for groups A and B of similar BMI. The image noise in Group A had a linear relationship with BMI. The image noise in Group B using BMI-dependent mA was independent of BMI, with an average value of 17.9 HU and smaller deviations in the noise values than in Group A (2.0 vs. 2.9 HU). The BMI-dependent mA selection method reduced dose by 35% on average, with the lowest effective dose being only 0.35 mSv for a patient with a BMI of 18.3. A quantitative BMI-based mA selection method in MDCT prospective ECG gated coronary calcium scoring has been proposed to obtain a desired and consistent image quality and provide dose optimization across the patient population.
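    The abstract does not give the selection formula itself, so the sketch below is only a hypothetical illustration of how such a rule could look: it assumes (a) a linear noise-versus-BMI fit at a reference tube current, as reported for the control group, and (b) the usual quantum-noise scaling of noise with 1/sqrt(mA). All coefficient values are placeholders, not values from the study.

    # Hypothetical BMI-based mA selection rule (illustrative only).
    A_SLOPE = 0.55       # HU of noise per BMI unit at the reference mA (assumed)
    B_OFFSET = 3.0       # HU of noise at BMI = 0 at the reference mA (assumed)
    MA_REF = 200.0       # reference tube current (assumed)
    TARGET_NOISE = 18.0  # desired noise level in HU at 2.5 mm slices (from the abstract)

    def predicted_noise_at_ref(bmi: float) -> float:
        """Linear noise-vs-BMI model fitted on the control group at MA_REF."""
        return A_SLOPE * bmi + B_OFFSET

    def select_ma(bmi: float) -> float:
        """Scale the reference mA so the predicted noise matches TARGET_NOISE."""
        return MA_REF * (predicted_noise_at_ref(bmi) / TARGET_NOISE) ** 2

    for bmi in (18.3, 25.0, 32.0):
        print(f"BMI {bmi:5.1f} -> mA {select_ma(bmi):6.1f}")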

  3. Dose reduction of up to 89% while maintaining image quality in cardiovascular CT achieved with prospective ECG gating

    NASA Astrophysics Data System (ADS)

    Londt, John H.; Shreter, Uri; Vass, Melissa; Hsieh, Jiang; Ge, Zhanyu; Adda, Olivier; Dowe, David A.; Sabllayrolles, Jean-Louis

    2007-03-01

    We present the results of dose and image quality performance evaluation of a novel, prospective ECG-gated Coronary CT Angiography acquisition mode (SnapShot Pulse, LightSpeed VCT-XT scanner, GE Healthcare, Waukesha, WI), and compare it to conventional retrospective ECG gated helical acquisition in clinical and phantom studies. Image quality phantoms were used to measure noise, slice sensitivity profile, in-plane resolution, low contrast detectability and dose, using the two acquisition modes. Clinical image quality and diagnostic confidence were evaluated in a study of 31 patients scanned with the two acquisition modes. Radiation dose reduction in clinical practice was evaluated by tracking 120 consecutive patients scanned with the prospectively gated scan mode. In the phantom measurements, the prospectively gated mode resulted in equivalent or better image quality measures at dose reductions of up to 89% compared to non-ECG-modulated conventional helical scans. In the clinical study, image quality was rated excellent by the expert radiologist reviewing the cases, with pathology findings being identical for the two acquisition modes. The average dose to patients in the clinical practice study was 5.6 mSv, representing a 50% reduction compared to a similar patient population scanned with the conventional helical mode.

  4. Achieving Quality in Occupational Health

    NASA Technical Reports Server (NTRS)

    O'Donnell, Michele (Editor); Hoffler, G. Wyckliffe (Editor)

    1997-01-01

    The conference convened approximately 100 registered participants of invited guest speakers, NASA presenters, and a broad spectrum of the Occupational Health disciplines representing NASA Headquarters and all NASA Field Centers. Centered on the theme, "Achieving Quality in Occupational Health," conferees heard presentations from award winning occupational health program professionals within the Agency and from private industry; updates on ISO 9000 status, quality assurance, and information technologies; workshops on ergonomics and respiratory protection; an overview from the newly commissioned NASA Occupational Health Assessment Team; and a keynote speech on improving women's health. In addition, NASA occupational health specialists presented 24 poster sessions and oral deliveries on various aspects of current practice at their field centers.

  5. Image quality analyzer

    NASA Astrophysics Data System (ADS)

    Lukin, V. P.; Botugina, N. N.; Emaleev, O. N.; Antoshkin, L. V.; Konyaev, P. A.

    2012-07-01

    An image quality analyzer (IQA), used as a device for analyzing the efficiency of adaptive optics, is described. The analyzer estimates image quality according to three different criteria: contrast, sharpness, and a spectral criterion. At present the analyzer is installed on the Big Solar Vacuum Telescope in routine operation, where it allows the most contrasted images of the Sun to be selected during observations. It is further planned to use the analyzer as part of the ANGARA adaptive correction system.

  6. Gifted Student Academic Achievement and Program Quality

    ERIC Educational Resources Information Center

    Jordan, Katrina Ann Woolsey

    2010-01-01

    Gifted academic achievement has been identified as a major area of interest for educational researchers. The purpose of this study was to ascertain whether there was a relation between the quality of gifted programs as perceived by teachers, coordinators and supervisors of the gifted and the achievement of the same gifted students in 6th and 7th…

  7. Retinal Image Quality During Accommodation

    PubMed Central

    López-Gil, N.; Martin, J.; Liu, T.; Bradley, A.; Díaz-Muñoz, D.; Thibos, L.

    2013-01-01

    Purpose We asked if retinal image quality is maximum during accommodation, or sub-optimal due to accommodative error, when subjects perform an acuity task. Methods Subjects viewed a monochromatic (552 nm), high-contrast letter target placed at various viewing distances. Wavefront aberrations of the accommodating eye were measured near the endpoint of an acuity staircase paradigm. Refractive state, defined as the optimum target vergence for maximizing retinal image quality, was computed by through-focus wavefront analysis to find the power of the virtual correcting lens that maximizes the visual Strehl ratio. Results Despite changes in ocular aberrations and pupil size during binocular viewing, retinal image quality and visual acuity typically remain high for all target vergences. When accommodative errors lead to sub-optimal retinal image quality, acuity and measured image quality both decline. However, the effect of accommodative errors on visual acuity is mitigated by the pupillary constriction associated with accommodation and binocular convergence, and also by binocular summation of dissimilar retinal image blur. Under monocular viewing conditions some subjects displayed significant accommodative lag that reduced visual performance, an effect that was exacerbated by pharmacological dilation of the pupil. Conclusions Spurious measurement of accommodative error can be avoided when the image quality metric used to determine refractive state is compatible with the focusing criteria used by the visual system to control accommodation. Real focusing errors of the accommodating eye do not necessarily produce a reliably measurable loss of image quality or clinically significant loss of visual performance, probably because of increased depth-of-focus due to pupil constriction. When retinal image quality is close to the maximum achievable (given the eye’s higher-order aberrations), acuity is also near maximum. A combination of accommodative lag, reduced image quality, and reduced

  8. Social image quality

    NASA Astrophysics Data System (ADS)

    Qiu, Guoping; Kheiri, Ahmed

    2011-01-01

    Current subjective image quality assessments have been developed in laboratory environments, under controlled conditions, and are dependent on the participation of limited numbers of observers. In this research, with the help of Web 2.0 and social media technology, a new method for building a subjective image quality metric has been developed where the observers are Internet users. A website with a simple user interface that enables Internet users from anywhere at any time to vote for the better quality version of a pair of the same image has been constructed. Users' votes are recorded and used to rank the images according to their perceived visual qualities. We have developed three rank aggregation algorithms to process the recorded pair comparison data: the first uses a naive approach, the second employs a Condorcet method, and the third uses Dykstra's extension of the Bradley-Terry method. The website has been collecting data for about three months and has accumulated over 10,000 votes at the time of writing this paper. Results show that the Internet and its allied technologies such as crowdsourcing offer a promising new paradigm for image and video quality assessment where hundreds of thousands of Internet users can contribute to building more robust image quality metrics. We have made the Internet user generated social image quality (SIQ) data of a public image database available online (http://www.hdri.cs.nott.ac.uk/siq/) to provide the image quality research community with a new source of ground truth data. The website continues to collect votes and will include more public image databases; it will also be extended to include videos to collect social video quality (SVQ) data. All data will be made publicly available on the website in due course.
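    As a concrete illustration of the rank-aggregation step, the sketch below fits the classic Bradley-Terry model to a pairwise vote matrix with the standard iterative maximum-likelihood update. It is a minimal sketch of the general technique, not the authors' implementation and not Dykstra's extension mentioned in the abstract; the vote matrix is a toy example.

    import numpy as np

    def bradley_terry(wins: np.ndarray, iters: int = 100) -> np.ndarray:
        """wins[i, j] = number of votes preferring image i over image j."""
        comparisons = wins + wins.T            # n_ij: total comparisons of pair (i, j)
        total_wins = wins.sum(axis=1)          # W_i: total wins of image i
        scores = np.ones(wins.shape[0])
        for _ in range(iters):
            denom = (comparisons / (scores[:, None] + scores[None, :])).sum(axis=1)
            scores = total_wins / np.maximum(denom, 1e-12)
            scores /= scores.sum()             # normalize for identifiability
        return scores

    # Toy vote matrix for 3 versions of the same image.
    votes = np.array([[0, 8, 9],
                      [2, 0, 6],
                      [1, 4, 0]], dtype=float)
    print(np.argsort(-bradley_terry(votes)))   # ranking, most preferred image first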

  9. Image quality (IQ) guided multispectral image compression

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng; Chen, Genshe; Wang, Zhonghai; Blasch, Erik

    2016-05-01

    Image compression is necessary for data transportation, as it saves both transfer time and storage space. In this paper, we focus our discussion on lossy compression. There are many standard image formats and corresponding compression algorithms, for example, JPEG (DCT -- discrete cosine transform), JPEG 2000 (DWT -- discrete wavelet transform), BPG (better portable graphics) and TIFF (LZW -- Lempel-Ziv-Welch). The image quality (IQ) of the decompressed image is measured by numerical metrics such as root mean square error (RMSE), peak signal-to-noise ratio (PSNR), and the structural similarity (SSIM) index. Given an image and a specified IQ, we investigate how to select a compression method and its parameters to achieve the expected compression. Our scenario consists of 3 steps. The first step is to compress a set of images of interest by varying parameters and compute their IQs for each compression method. The second step is to create several regression models per compression method after analyzing the IQ measurement versus compression parameter over a number of compressed images. The third step is to compress the given image with the specified IQ using the selected compression method (JPEG, JPEG2000, BPG, or TIFF) according to the regression models. The IQ may be specified by a compression ratio (e.g., 100), in which case we select the compression method with the highest IQ (SSIM or PSNR); or the IQ may be specified by an IQ metric (e.g., SSIM = 0.8, or PSNR = 50), in which case we select the compression method with the highest compression ratio. Our experiments on thermal (long-wave infrared) grayscale images showed very promising results.
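    A hedged sketch of the regress-then-invert idea is given below for one method (JPEG) and one metric (PSNR): compress at several quality settings, fit a simple regression of PSNR against the quality parameter, and invert it for a target PSNR. The file name "input.png" is a placeholder; the paper's regression models and the other codecs (JPEG 2000, BPG, TIFF) are not reproduced here.

    import io
    import numpy as np
    from PIL import Image

    def psnr(ref: np.ndarray, test: np.ndarray) -> float:
        mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
        return 10 * np.log10(255.0 ** 2 / mse)

    ref = np.array(Image.open("input.png").convert("L"))   # placeholder image path

    qualities = np.arange(10, 100, 10)
    scores = []
    for q in qualities:
        buf = io.BytesIO()
        Image.fromarray(ref).save(buf, format="JPEG", quality=int(q))
        scores.append(psnr(ref, np.array(Image.open(buf))))

    # Fit PSNR(q) with a quadratic and solve PSNR(q) = target for the quality setting.
    coeffs = np.polyfit(qualities, scores, deg=2)
    target_psnr = 40.0
    roots = np.roots(np.polyadd(coeffs, [-target_psnr]))
    candidates = [r.real for r in roots if abs(r.imag) < 1e-9 and 1 <= r.real <= 100]
    print("suggested JPEG quality:", round(min(candidates)) if candidates else "unreachable")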

  10. Achievements and challenges of EUV mask imaging

    NASA Astrophysics Data System (ADS)

    Davydova, Natalia; van Setten, Eelco; de Kruif, Robert; Connolly, Brid; Fukugami, Norihito; Kodera, Yutaka; Morimoto, Hiroaki; Sakata, Yo; Kotani, Jun; Kondo, Shinpei; Imoto, Tomohiro; Rolff, Haiko; Ullrich, Albrecht; Lammers, Ad; Schiffelers, Guido; van Dijk, Joep

    2014-07-01

    The impact of various mask parameters on CDU, combined in a total mask budget, is presented for 22 nm lines, for reticles used for NXE:3300 qualification. Apart from the standard mask CD measurements, actinic spectrometry of the multilayer is used to qualify reflectance uniformity over the image field; advanced 3D metrology is applied for absorber profile characterization, including absorber height and side wall angle. The predicted mask impact on CDU is verified using actual exposure data collected on multiple NXE:3300 scanners. Mask 3D effects are addressed, manifesting themselves in best focus shifts for different structures exposed with off-axis illumination. Experimental NXE:3300 results for 16 nm dense lines and 20 nm (semi-)isolated spaces are shown: the best focus range reaches 24 nm. A mitigation strategy by absorber height optimization is proposed based on experimental results from a special mask with varying absorber heights. Further development of a black image border for EUV masks is considered. The image border is a pattern-free area surrounding the image field that prevents exposure of the image field's neighborhood on the wafer. A normal EUV absorber is not suitable for this purpose as it has 1-3% EUV reflectance. A current solution is etching the multilayer down to the substrate, reducing EUV reflectance to <0.05%. The next step in the development of the black border is the reduction of DUV out-of-band reflectance (<1.5%) in order to cope with the DUV light present in EUV scanners. Promising results achieved in this direction are shown.

  11. Automatic no-reference image quality assessment.

    PubMed

    Li, Hongjun; Hu, Wei; Xu, Zi-Neng

    2016-01-01

    No-reference image quality assessment aims to predict the visual quality of distorted images without examining the original image as a reference. Most no-reference image quality metrics that have already been proposed are designed for one or a set of predefined specific distortion types and are unlikely to generalize for evaluating images degraded with other types of distortion. There is a strong need for no-reference image quality assessment methods that are applicable to various distortions. In this paper, the authors propose a no-reference image quality assessment method based on a natural image statistic model in the wavelet transform domain. A generalized Gaussian density model is employed to summarize the marginal distribution of wavelet coefficients of the test images, so that correlative parameters are obtained for the evaluation of image quality. The proposed algorithm is tested on three large-scale benchmark databases. Experimental results demonstrate that the proposed algorithm is easy to implement and computationally efficient. Furthermore, our method can be applied to many well-known types of image distortions, and achieves good prediction performance. PMID:27468398
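    The sketch below illustrates the kind of wavelet-domain statistic the abstract describes: each detail subband is summarized by a generalized Gaussian density (GGD) fitted by moment matching. How the fitted parameters map to a quality score is not reproduced here, and the wavelet, level count, and fitting bracket are assumptions.

    import numpy as np
    import pywt
    from scipy.special import gamma
    from scipy.optimize import brentq

    def ggd_shape(coeffs: np.ndarray) -> float:
        """Estimate the GGD shape parameter via the standard moment-ratio method."""
        c = coeffs.ravel()
        rho = np.mean(c ** 2) / (np.mean(np.abs(c)) ** 2 + 1e-12)
        f = lambda b: gamma(1.0 / b) * gamma(3.0 / b) / gamma(2.0 / b) ** 2 - rho
        return brentq(f, 0.05, 10.0)

    def subband_features(image: np.ndarray, wavelet: str = "db2", levels: int = 3):
        """Return (shape, scale) pairs for every detail subband of the image."""
        feats = []
        for detail in pywt.wavedec2(image.astype(np.float64), wavelet, level=levels)[1:]:
            for band in detail:                  # horizontal, vertical, diagonal
                feats.append((ggd_shape(band), float(band.std())))
        return feats

    # Toy usage on synthetic data; a real test image would be loaded instead.
    rng = np.random.default_rng(0)
    print(subband_features(rng.normal(size=(128, 128))))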

  12. Achieving Quality Learning in Higher Education.

    ERIC Educational Resources Information Center

    Nightingale, Peggy; O'Neil, Mike

    This volume on quality learning in higher education discusses issues of good practice, particularly action learning and Total Quality Management (TQM)-type strategies, and illustrates them with seven case studies from Australia and the United Kingdom. Chapter 1 discusses issues and problems in defining quality in higher education. Chapter 2 looks at…

  13. Achieving Quality Health Services for Adolescents.

    PubMed

    2016-08-01

    This update of the 2008 statement from the American Academy of Pediatrics redirects the discussion of quality health care from the theoretical to the practical within the medical home. This statement reviews the evolution of the medical home concept and challenges the provision of quality adolescent health care within the patient-centered medical home. Areas of attention for quality adolescent health care are reviewed, including developmentally appropriate care, confidentiality, location of adolescent care, providers who offer such care, the role of research in advancing care, and the transition to adult care. PMID:27432849

  14. Tradeoffs between image quality and dose.

    PubMed

    Seibert, J Anthony

    2004-10-01

    Image quality takes on different perspectives and meanings when associated with the concept of as low as reasonably achievable (ALARA), which is chiefly focused on radiation dose delivered as a result of a medical imaging procedure. ALARA is important because of the increased radiosensitivity of children to ionizing radiation and the desire to keep the radiation dose low. By the same token, however, image quality is also important because of the need to provide the necessary information in a radiograph in order to make an accurate diagnosis. Thus, there are tradeoffs to be considered between image quality and radiation dose, which is the main topic of this article. ALARA does not necessarily mean the lowest radiation dose, nor, when implemented, does it result in the least desirable radiographic images. With the recent widespread implementation of digital radiographic detectors and displays, a new level of flexibility and complexity confronts the technologist, physicist, and radiologist in optimizing the pediatric radiography exam. This is due to the separation of the acquisition, display, and archiving events that were previously combined by the screen-film detector, which allows for compensation for under- and overexposures, image processing, and on-line image manipulation. As explained in the article, different concepts must be introduced for a better understanding of the tradeoffs encountered when dealing with digital radiography and ALARA. In addition, there are many instances during the image acquisition/display/interpretation process in which image quality and associated dose can be compromised. This requires continuous diligence in quality control and feedback mechanisms to verify that the goals of image quality, dose, and ALARA are achieved.

  15. Image Enhancement, Image Quality, and Noise

    NASA Technical Reports Server (NTRS)

    Rahman, Zia-ur; Jobson, Daniel J.; Woodell, Glenn A.; Hines, Glenn D.

    2005-01-01

    The Multiscale Retinex With Color Restoration (MSRCR) is a non-linear image enhancement algorithm that provides simultaneous dynamic range compression, color constancy and rendition. The overall impact is to brighten up areas of poor contrast/lightness, but not at the expense of saturating areas of good contrast/brightness. The downside is that, with the poor signal-to-noise ratio that most image acquisition devices have in dark regions, noise can also be greatly enhanced, thus affecting overall image quality. In this paper, we will discuss the impact of the MSRCR on the overall quality of an enhanced image as a function of the strength of shadows in an image, and as a function of the root-mean-square (RMS) signal-to-noise ratio (SNR) of the image.
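    For reference, the sketch below shows a stripped-down, single-channel multiscale retinex of the kind the MSRCR builds on: the log of the image minus the log of Gaussian-blurred "surround" images at several scales. The color-restoration and gain/offset steps of the full MSRCR are omitted, and the scales and weights are illustrative choices, not necessarily those used by the authors.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def multiscale_retinex(img: np.ndarray,
                           sigmas=(15, 80, 250),
                           weights=(1 / 3, 1 / 3, 1 / 3)) -> np.ndarray:
        """Single-channel multiscale retinex (no color restoration)."""
        img = img.astype(np.float64) + 1.0            # avoid log(0)
        out = np.zeros_like(img)
        for sigma, w in zip(sigmas, weights):
            surround = gaussian_filter(img, sigma)    # local average illumination
            out += w * (np.log(img) - np.log(surround))
        out = (out - out.min()) / (out.max() - out.min() + 1e-12)
        return (255 * out).astype(np.uint8)           # stretch back to 8-bit range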

  16. Achieving indoor air quality through contaminant control

    SciTech Connect

    Katzel, J.

    1995-07-10

    Federal laws outlining industry's responsibilities in creating a healthy, hazard-free workspace are well known. OSHA's laws on interior air pollution establish threshold limit values (TLVs) and permissible exposure limits (PELs) for more than 500 potentially hazardous substances found in manufacturing operations. Until now, OSHA has promulgated regulations only for the manufacturing environment. However, its recently proposed indoor air quality (IAQ) ruling, if implemented, will apply to all workspaces. It regulates IAQ, including environmental tobacco smoke, and requires employers to write and implement IAQ compliance plans.

  17. The Relationship of Classroom Quality to Kindergarten Achievement

    ERIC Educational Resources Information Center

    Burson, Susan J.

    2010-01-01

    This quantitative study focuses on the relationship between classroom quality and children's academic achievement. Specifically, it examines how classroom quality in three broad domains--emotional climate, classroom management and instructional support--impacts kindergarten achievement growth in mathematics and reading. The researcher collected…

  18. Quality assessment for hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Chen, Yuheng; Chen, Xinhua; Zhou, Jiankang; Shen, Weimin

    2014-11-01

    Image quality assessment is an essential value judgement approach for many applications. Multi- and hyperspectral imaging involves more evaluation criteria than grayscale or RGB imaging, and its image quality assessment must cover all of these factors. This paper presents an integrated spectral imaging quality assessment project in which spectral, radiometric, and spatial statistical evaluations of three hyperspectral imagers are jointly performed. The spectral response function is derived from discrete-illumination images, and spectral performance is deduced from its FWHM and spectral excursion value. Radiometric response of the different spectral channels, under both on-ground and airborne imaging conditions, is judged by computing the SNR based on local RMS extraction and statistics. Spatial response of the spectral imaging instruments is evaluated by computing the MTF with the slanted-edge analysis method. This pioneering systematic work in hyperspectral imaging quality assessment was carried out with the help of several leading domestic institutions; it is significant for the development of on-ground and in-orbit instrument performance evaluation techniques and also serves as a reference for specification demonstration and design optimization in instrument development.

  19. Toward integrated strategies for achieving environmental quality

    SciTech Connect

    Kuusinen, T.; Lesperance, A.; Bilyard, G. )

    1994-03-01

    In the United States, environmentalists are constantly jumping from one "environmental crisis of the day" to another without any sense of what is important and what is trivial. Moreover, when designing fixes to the environmental problems one tries to resolve, one often comes up short. This country urgently needs a national environmental strategy that will approach environmental issues proactively and logically. Without such a strategy, the authors believe that long-term, sustainable economic growth cannot be achieved in the United States. This paper outlines a participatory process by which the framework for a national environmental strategy might be developed. It also proposes that such a strategy will likely include two fundamental components: (1) consensus principles for conducting risk assessments to decide what environmental problems are most important, and (2) a generalized, market-oriented model for resolving these problems. A viable national consensus will be required for such a strategy to succeed and will need to include industry, labor, legislators, regulators, national environmental advocacy groups, local grass roots organizations, and other interested parties.

  20. Quality management in cardiopulmonary imaging.

    PubMed

    Kanne, Jeffrey P

    2011-02-01

    Increased scrutiny of the practice of medicine by government, insurance providers, and individual patients has led to a rapid growth of quality management programs in health care. Radiology is no exception to this trend, and quality management has become an important issue for individual radiologists as well as their respective practices. Quality control has been a mainstay of the practice of radiology for many years, with quality assurance and quality improvement both relative newcomers. This article provides an overview of quality management in the context of cardiopulmonary imaging and describes specific areas of cardiopulmonary radiology in which the components of a quality management program can be integrated. Specific quality components are discussed, and examples of quality initiatives are provided.

  1. Visual Limits To Image Quality

    NASA Astrophysics Data System (ADS)

    Granger, Edward M.

    1985-07-01

    Today's high speed computers, large and inexpensive memory devices and high definition displays have opened up the area of electronic image processing. Computers are being used to compress, enhance, and geometrically correct a wide range of image related data. It is necessary to develop Image Quality Merit Factors (IQMF) that can be used to evaluate, compare, and specify imaging systems. A meaningful IQMF will have to include both the effects of the transfer function of the system and the noise introduced by the system. Most of the methods used to date have utilized linear system techniques to describe performance. In our work on the IQMF, we have found that it may be necessary to imitate the eye-brain combination in order to best describe the performance of an imaging system. This paper presents the idea that understanding the organization of and the rivalry between visual mechanisms may lead to new ways of considering photographic and electronic system image quality and the loss in image quality due to grain, halftones, and pixel noise.

  2. 3D imaging: how to achieve highest accuracy

    NASA Astrophysics Data System (ADS)

    Luhmann, Thomas

    2011-07-01

    The generation of 3D information from images is a key technology in many different areas, e.g. in 3D modeling and representation of architectural or heritage objects, in human body motion tracking and scanning, in 3D scene analysis of traffic scenes, in industrial applications and many more. The basic concepts rely on mathematical representations of central perspective viewing as they are widely known from photogrammetry or computer vision approaches. The objectives of these methods differ, more or less, from high precision and well-structured measurements in (industrial) photogrammetry to fully-automated non-structured applications in computer vision. Accuracy and precision are critical issues for the 3D measurement of industrial, engineering or medical objects. As state of the art, photogrammetric multi-view measurements achieve relative precisions on the order of 1:100000 to 1:200000, and relative accuracies with respect to retraceable lengths on the order of 1:50000 to 1:100000 of the largest object diameter. In order to obtain these figures a number of influencing parameters have to be optimized. These are, among others: physical representation of the object surface (targets, texture), illumination and light sources, imaging sensors, cameras and lenses, calibration strategies (camera model), orientation strategies (bundle adjustment), image processing of homologous features (target measurement, stereo and multi-image matching), representation of object or workpiece coordinate systems, and object scale. The paper discusses the above mentioned parameters and offers strategies for obtaining the highest accuracy in object space. Practical examples of high-quality stereo camera measurements and multi-image applications are used to prove the relevance of high accuracy in different applications, ranging from medical navigation to static and dynamic industrial measurements. In addition, standards for accuracy verification are presented and demonstrated by practical examples

  3. Fovea based image quality assessment

    NASA Astrophysics Data System (ADS)

    Guo, Anan; Zhao, Debin; Liu, Shaohui; Cao, Guangyao

    2010-07-01

    Humans are the ultimate receivers of the visual information contained in an image, so a reasonable method of image quality assessment (IQA) should follow the properties of the human visual system (HVS). In recent years, IQA methods based on HVS models have slowly been replacing classical schemes, such as mean squared error (MSE) and peak signal-to-noise ratio (PSNR). The structural similarity (SSIM) index, regarded as one of the most popular HVS-based methods of full-reference IQA, shows clear performance improvements over traditional metrics; however, it does not perform very well when the image structure is seriously destroyed or masked by noise. In this paper, a new, efficient fovea based structural similarity image quality assessment (FSSIM) is proposed. It adaptively enlarges the distortions at the positions of concern and changes the importance of the three components in SSIM. FSSIM predicts the quality of an image in three steps. First, it computes the luminance, contrast and structure comparison terms; second, it computes the saliency map by extracting the fovea information from the reference image using features of the HVS; third, it pools the above three terms according to the processed saliency map. Finally, the common experimental database LIVE IQA is used for evaluating the performance of FSSIM. Experimental results indicate that the consistency and correlation between FSSIM and mean opinion score (MOS) are both clearly better than those of SSIM and PSNR.
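    A minimal sketch of the saliency-weighted pooling idea is shown below: the per-pixel SSIM map is averaged with a saliency map as weights so that distortions in attended regions count more. The saliency map here is a placeholder (the paper derives it from fovea/HVS features of the reference image and also re-weights the three SSIM components, which is not reproduced).

    import numpy as np
    from skimage.metrics import structural_similarity

    def saliency_weighted_ssim(ref, dist, saliency):
        """Pool the SSIM map with normalized saliency weights."""
        _, ssim_map = structural_similarity(ref, dist, data_range=255, full=True)
        weights = saliency / (saliency.sum() + 1e-12)
        return float((ssim_map * weights).sum())

    # Toy usage with a uniform saliency map (equivalent to plain SSIM pooling).
    rng = np.random.default_rng(1)
    ref = rng.integers(0, 256, (64, 64)).astype(np.uint8)
    dist = np.clip(ref + rng.normal(0, 10, ref.shape), 0, 255).astype(np.uint8)
    print(saliency_weighted_ssim(ref, dist, np.ones(ref.shape, dtype=float)))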

  4. Landsat image data quality studies

    NASA Technical Reports Server (NTRS)

    Schueler, C. F.; Salomonson, V. V.

    1985-01-01

    Preliminary results of the Landsat-4 Image Data Quality Analysis (LIDQA) program to characterize the data obtained using the Thematic Mapper (TM) instrument on board the Landsat-4 and Landsat-5 satellites are reported. TM design specifications were compared to the obtained data with respect to four criteria: spatial resolution; geometric fidelity; information content; and image quality relative to Multispectral Scanner (MSS) data. The overall performance of the TM was rated excellent despite minor instabilities and radiometric anomalies in the data. Spatial performance of the TM exceeded design specifications in terms of both image sharpness and geometric accuracy, and the image utility of the TM data was at least twice as high as that of MSS data. The separability of alfalfa and sugar beet fields in a TM image is demonstrated.

  5. Scene reduction for subjective image quality assessment

    NASA Astrophysics Data System (ADS)

    Lewandowska (Tomaszewska), Anna

    2016-01-01

    Evaluation of image quality is important for many image processing systems, such as those used for acquisition, compression, restoration, enhancement, or reproduction. Its measurement is often accompanied by user studies, in which a group of observers rank or rate the results of several algorithms. Such user studies, known as subjective image quality assessment experiments, can be very time consuming and do not guarantee conclusive results. This paper is intended to help design an efficient and rigorous quality assessment experiment. We propose a method of limiting the number of scenes that need to be tested, which can significantly reduce the experimental effort and still capture relevant scene-dependent effects. To achieve this, we employ a clustering technique and evaluate it on the basis of compactness and separation criteria. The correlation between the results obtained from a set of images in an initial database and the results obtained from the reduced experiment is analyzed. Finally, we propose a procedure for reducing the number of initial scenes. Four different assessment techniques were tested: single stimulus, double stimulus, forced choice, and similarity judgments. We conclude that in most cases, 9 to 12 judgments per evaluated algorithm for a large scene collection are sufficient to reduce the initial set of images.
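    The sketch below illustrates one way the scene-reduction step could be realized: cluster per-scene feature vectors, pick the cluster count with the best compactness/separation (approximated here by the silhouette score), and keep one representative scene per cluster. The feature vectors are random placeholders; in the paper they would come from the initial subjective experiment.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.metrics import silhouette_score

    rng = np.random.default_rng(0)
    features = rng.normal(size=(30, 5))           # 30 scenes x 5 features (placeholder)

    best_k, best_score = 2, -1.0
    for k in range(2, 10):
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(features)
        score = silhouette_score(features, labels)
        if score > best_score:
            best_k, best_score = k, score

    km = KMeans(n_clusters=best_k, n_init=10, random_state=0).fit(features)
    # Representative scene = the scene closest to each cluster centre.
    reps = sorted(int(np.argmin(np.linalg.norm(features - c, axis=1)))
                  for c in km.cluster_centers_)
    print("reduced scene set:", reps)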

  6. The "Teacher's Image" as Predictor of Student Achievement

    ERIC Educational Resources Information Center

    Jungwirth, E.; Tamir, P.

    1973-01-01

    Reports the results of a study conducted at the Israeli Science Teaching Center, Hebrew University of Jerusalem, which attempted to correlate the "Teacher's Image" with actual student achievement in science. (JR)

  7. Raising Quality and Achievement. A College Guide to Benchmarking.

    ERIC Educational Resources Information Center

    Owen, Jane

    This booklet introduces the principles and practices of benchmarking as a way of raising quality and achievement at further education colleges in Britain. Section 1 defines the concept of benchmarking. Section 2 explains what benchmarking is not and the steps that should be taken before benchmarking is initiated. The following aspects and…

  8. Quantitative image quality evaluation for cardiac CT reconstructions

    NASA Astrophysics Data System (ADS)

    Tseng, Hsin-Wu; Fan, Jiahua; Kupinski, Matthew A.; Balhorn, William; Okerlund, Darin R.

    2016-03-01

    Maintaining image quality in the presence of motion is always desirable and challenging in clinical Cardiac CT imaging. Different image-reconstruction algorithms are available on current commercial CT systems that attempt to achieve this goal. It is widely accepted that image-quality assessment should be task-based and involve specific tasks, observers, and associated figures of merit. In this work, we developed an observer model that performed the task of estimating the percentage of plaque in a vessel from CT images. We compared task performance of Cardiac CT image data reconstructed using a conventional FBP reconstruction algorithm and the SnapShot Freeze (SSF) algorithm, each at default and optimal reconstruction cardiac phases. The purpose of this work is to design an approach for quantitative image-quality evaluation of temporal resolution for Cardiac CT systems. To simulate heart motion, a moving coronary-type phantom synchronized with an ECG signal was used. Plaques of three different percentages, embedded in a 3 mm vessel phantom, were imaged multiple times under motion-free, 60 bpm, and 80 bpm heart rates. Static (motion free) images of this phantom were taken as reference images for image template generation. Independent ROIs from the 60 bpm and 80 bpm images were generated by vessel tracking. The observer performed estimation tasks using these ROIs. Ensemble mean square error (EMSE) was used as the figure of merit. Results suggest that the quality of SSF images is superior to the quality of FBP images in higher heart-rate scans.

  9. Improving secondary ion mass spectrometry image quality with image fusion.

    PubMed

    Tarolli, Jay G; Jackson, Lauren M; Winograd, Nicholas

    2014-12-01

    The spatial resolution of chemical images acquired with cluster secondary ion mass spectrometry (SIMS) is limited not only by the size of the probe utilized to create the images but also by detection sensitivity. As the probe size is reduced to below 1 μm, for example, a low signal in each pixel limits lateral resolution because of counting statistics considerations. Although it can be useful to implement numerical methods to mitigate this problem, here we investigate the use of image fusion to combine information from scanning electron microscope (SEM) data with chemically resolved SIMS images. The advantage of this approach is that the higher intensity and, hence, spatial resolution of the electron images can help to improve the quality of the SIMS images without sacrificing chemical specificity. Using a pan-sharpening algorithm, the method is illustrated using synthetic data, experimental data acquired from a metallic grid sample, and experimental data acquired from a lawn of algae cells. The results show that up to an order of magnitude increase in spatial resolution is possible to achieve. A cross-correlation metric is utilized for evaluating the reliability of the procedure.
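    The sketch below shows a generic detail-injection (pan-sharpening-style) fusion of the kind described: the chemically specific SIMS map is upsampled to the SEM grid and the SEM's high spatial frequencies are added back in. This is a simplified stand-in, not the authors' exact pan-sharpening algorithm, and the blur scale is an assumption.

    import numpy as np
    from scipy.ndimage import zoom, gaussian_filter

    def fuse_sims_sem(sims: np.ndarray, sem: np.ndarray, sigma: float = 2.0) -> np.ndarray:
        """Inject SEM high-frequency detail into an upsampled SIMS image."""
        scale = (sem.shape[0] / sims.shape[0], sem.shape[1] / sims.shape[1])
        sims_up = zoom(sims.astype(np.float64), scale, order=1)       # match SEM grid
        sem = sem.astype(np.float64)
        detail = sem - gaussian_filter(sem, sigma)                    # high-pass SEM
        fused = sims_up + detail * (sims_up.std() / (detail.std() + 1e-12))
        return np.clip(fused, 0.0, None)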

  10. Aerial image retargeting (AIR): achieving litho-friendly designs

    NASA Astrophysics Data System (ADS)

    Yehia Hamouda, Ayman; Word, James; Anis, Mohab; Karim, Karim S.

    2011-04-01

    In this work, we present a new technique to detect non-litho-friendly design areas based on their aerial image signature. The aerial image is calculated for the litho target (pre-OPC). This is followed by fixing (retargeting) the design to achieve a litho-friendly OPC target. The technique is applied and tested on a 28 nm metal layer and shows a large improvement in process window performance. An optimized Aerial Image Retargeting (AIR) recipe is very computationally efficient, and its runtime consumes no more than 1% of the OPC flow runtime.

  11. Image quality assessment for CT used on small animals

    NASA Astrophysics Data System (ADS)

    Cisneros, Isabela Paredes; Agulles-Pedrós, Luis

    2016-07-01

    Image acquisition on a CT scanner is nowadays necessary in almost any kind of medical study. Its purpose, to produce anatomical images with the best achievable quality, implies the highest diagnostic radiation exposure to patients. Image quality can be measured quantitatively based on parameters such as noise, uniformity and resolution. This measure allows the determination of optimal operating parameters for the scanner in order to get the best diagnostic image. A human Phillips CT scanner is the first one intended exclusively for veterinary use in Colombia. The aim of this study was to measure the CT image quality parameters using an acrylic phantom and then, using the computational tool MATLAB, determine these parameters as a function of current value and window of visualization, in order to reduce the delivered dose while keeping appropriate image quality.
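    As a small illustration of the quantitative parameters mentioned (noise and uniformity), the sketch below computes them from circular ROIs on a phantom slice in the way they are commonly defined; the exact ROI layout and MATLAB code used in the study are not given in the abstract, so positions and radii here are assumptions.

    import numpy as np

    def roi_stats(img: np.ndarray, cy: int, cx: int, r: int = 10):
        """Mean and standard deviation inside a circular ROI."""
        y, x = np.ogrid[:img.shape[0], :img.shape[1]]
        roi = img[(y - cy) ** 2 + (x - cx) ** 2 <= r ** 2]
        return roi.mean(), roi.std()

    def noise_and_uniformity(slice_hu: np.ndarray):
        h, w = slice_hu.shape
        centre_mean, centre_std = roi_stats(slice_hu, h // 2, w // 2)
        edge_means = [roi_stats(slice_hu, cy, cx)[0]
                      for cy, cx in ((h // 2, w // 5), (h // 2, 4 * w // 5),
                                     (h // 5, w // 2), (4 * h // 5, w // 2))]
        noise = centre_std                                            # HU standard deviation
        uniformity = max(abs(m - centre_mean) for m in edge_means)    # max edge-centre HU difference
        return noise, uniformity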

  12. Assessing product image quality for online shopping

    NASA Astrophysics Data System (ADS)

    Goswami, Anjan; Chung, Sung H.; Chittar, Naren; Islam, Atiq

    2012-01-01

    Assessing product-image quality is important in the context of online shopping. A high quality image that conveys more information about a product can boost the buyer's confidence and can get more attention. However, the notion of image quality for product-images is not the same as that in other domains. The perception of quality of product-images depends not only on various photographic quality features but also on various high level features such as clarity of the foreground or goodness of the background. In this paper, we define a notion of product-image quality based on various such features. We conduct a crowd-sourced experiment to collect user judgments on thousands of eBay's images. We formulate a multi-class classification problem for modeling image quality by classifying images into good, fair and poor quality based on the guided perceptual notions from the judges. We also conduct experiments with regression using average crowd-sourced human judgments as the target. We compute a pseudo-regression score as the expected average of predicted classes and also compute a score from the regression technique. We design many experiments with various sampling and voting schemes with crowd-sourced data and construct various experimental image quality models. Most of our models have reasonable accuracies (greater than or equal to 70%) on the test data set. We observe that our computed image quality score has a high (0.66) rank correlation with the average votes from the crowd-sourced human judgments.

  13. Exploring High-Achieving Students' Images of Mathematicians

    ERIC Educational Resources Information Center

    Aguilar, Mario Sánchez; Rosas, Alejandro; Zavaleta, Juan Gabriel Molina; Romo-Vázquez, Avenilde

    2016-01-01

    The aim of this study is to describe the images that a group of high-achieving Mexican students hold of mathematicians. For this investigation, we used a research method based on the Draw-A-Scientist Test (DAST) with a sample of 63 Mexican high school students. The group of students' pictorial and written descriptions of mathematicians assisted us…

  14. Likelihood of achieving air quality targets under model uncertainties.

    PubMed

    Digar, Antara; Cohan, Daniel S; Cox, Dennis D; Kim, Byeong-Uk; Boylan, James W

    2011-01-01

    Regulatory attainment demonstrations in the United States typically apply a bright-line test to predict whether a control strategy is sufficient to attain an air quality standard. Photochemical models are the best tools available to project future pollutant levels and are a critical part of regulatory attainment demonstrations. However, because photochemical models are uncertain and future meteorology is unknowable, future pollutant levels cannot be predicted perfectly and attainment cannot be guaranteed. This paper introduces a computationally efficient methodology for estimating the likelihood that an emission control strategy will achieve an air quality objective in light of uncertainties in photochemical model input parameters (e.g., uncertain emission and reaction rates, deposition velocities, and boundary conditions). The method incorporates Monte Carlo simulations of a reduced form model representing pollutant-precursor response under parametric uncertainty to probabilistically predict the improvement in air quality due to emission control. The method is applied to recent 8-h ozone attainment modeling for Atlanta, Georgia, to assess the likelihood that additional controls would achieve fixed (well-defined) or flexible (due to meteorological variability and uncertain emission trends) targets of air pollution reduction. The results show that in certain instances ranking of the predicted effectiveness of control strategies may differ between probabilistic and deterministic analyses. PMID:21138291
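    A conceptual sketch of the probabilistic attainment test is given below: a reduced-form model maps the control strategy to an ozone improvement, an uncertain sensitivity factor is sampled from an assumed distribution, and the attainment likelihood is the fraction of Monte Carlo draws meeting the target. All numbers are illustrative, not values from the Atlanta application.

    import numpy as np

    rng = np.random.default_rng(42)
    n_draws = 100_000

    baseline_ozone = 82.0   # ppb, baseline 8-h design value (assumed)
    standard = 75.0         # ppb, attainment target (assumed)
    nominal_benefit = 9.0   # ppb reduction predicted for the control strategy (assumed)

    # Multiplicative uncertainty in the reduced-form sensitivity (e.g. from uncertain
    # emission rates and reaction rates), represented here as a lognormal factor.
    sensitivity = rng.lognormal(mean=0.0, sigma=0.25, size=n_draws)
    future_ozone = baseline_ozone - nominal_benefit * sensitivity

    print("estimated likelihood of attainment:", np.mean(future_ozone <= standard))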

  15. Achieving adequate BMP's for stormwater quality management

    SciTech Connect

    Jones-Lee, A.; Lee, G.F.

    1994-12-31

    There is considerable controversy about the technical appropriateness and the cost-effectiveness of requiring cities to control contaminants in urban stormwater discharges to meet state water quality standards equivalent to US EPA numeric chemical water quality criteria. At this time and likely for the next 10 years, urban stormwater discharges will be exempt from regulation to achieve state water quality standards in receiving waters, owing to the high cost to cities of the management of contaminants in the stormwater runoff-discharge so as to prevent exceedances of water quality standards in the receiving waters. Instead of requiring the same degree of contaminant control for stormwater discharges as is required for point-source discharges of municipal and industrial wastewaters, those responsible for urban stormwater discharges will have to implement Best Management Practices (BMP's) for contaminant control. The recommended approach for implementation of BMP's involves the use of site-specific evaluations of what, if any, real problems (use impairment) are caused by stormwater-associated contaminants in the waters receiving that stormwater discharge. From this type of information BMP's can then be developed to control those contaminants in stormwater discharges that are, in fact, impairing the beneficial uses of receiving waters.

  16. An Underwater Color Image Quality Evaluation Metric.

    PubMed

    Yang, Miao; Sowmya, Arcot

    2015-12-01

    Quality evaluation of underwater images is a key goal of underwater video image retrieval and intelligent processing. To date, no metric has been proposed for underwater color image quality evaluation (UCIQE). The special absorption and scattering characteristics of the water medium do not allow direct application of natural color image quality metrics, especially in different underwater environments. In this paper, subjective testing for underwater image quality has been organized. The statistical distribution of the underwater image pixels in the CIELab color space, related to the subjective evaluation, indicates that the sharpness and colorfulness factors correlate well with subjective image quality perception. Based on these, a new UCIQE metric, which is a linear combination of chroma, saturation, and contrast, is proposed to quantify the non-uniform color cast, blurring, and low contrast that characterize underwater engineering and monitoring images. Experiments are conducted to illustrate the performance of the proposed UCIQE metric and its capability to measure underwater image enhancement results. They show that the proposed metric has performance comparable to the leading natural color image quality metrics and the underwater grayscale image quality metrics available in the literature, and can predict with higher accuracy the relative amount of degradation for similar image content in underwater environments. Importantly, UCIQE is a simple and fast solution for real-time underwater video processing. The effectiveness of the presented measure is also demonstrated by subjective evaluation. The results show better correlation between the UCIQE and the subjective mean opinion score.
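    A hedged sketch of a UCIQE-style score is given below: a linear combination of the chroma standard deviation, the luminance contrast, and the mean saturation computed in CIELab, as the abstract describes. The weights are values commonly quoted for UCIQE in the literature and should be checked against the original paper before use; the normalization details of the published metric are not reproduced.

    import numpy as np
    from skimage.color import rgb2lab

    def uciqe_like(rgb: np.ndarray, w=(0.4680, 0.2745, 0.2576)) -> float:
        """Linear combination of chroma spread, luminance contrast, mean saturation."""
        lab = rgb2lab(rgb)
        L, a, b = lab[..., 0], lab[..., 1], lab[..., 2]
        chroma = np.sqrt(a ** 2 + b ** 2)
        contrast_l = np.percentile(L, 99) - np.percentile(L, 1)   # luminance spread
        saturation = chroma / (L + 1e-12)
        return w[0] * chroma.std() + w[1] * contrast_l + w[2] * saturation.mean()

    # Toy usage on a random RGB image in [0, 1]; a real underwater frame would be loaded.
    rng = np.random.default_rng(0)
    print(uciqe_like(rng.random((64, 64, 3))))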

  17. JPEG2000 still image coding quality.

    PubMed

    Chen, Tzong-Jer; Lin, Sheng-Chieh; Lin, You-Chen; Cheng, Ren-Gui; Lin, Li-Hui; Wu, Wei

    2013-10-01

    This work compares the image quality produced by two popular JPEG2000 programs. Two medical image compression algorithms are both coded using JPEG2000, but they differ regarding the interface, convenience, speed of computation, and the characteristic options influenced by the encoder, quantization, tiling, etc. The differences in image quality and compression ratio are also affected by the modality and the compression algorithm implementation. Do they provide the same quality? The qualities of compressed medical images from two image compression programs named Apollo and JJ2000 were evaluated extensively using objective metrics. These algorithms were applied to three medical image modalities at various compression ratios ranging from 10:1 to 100:1. Following that, the quality of the reconstructed images was evaluated using five objective metrics. The Spearman rank correlation coefficients were measured under every metric for the two programs. We found that JJ2000 and Apollo exhibited indistinguishable image quality for all images evaluated using the above five metrics (r > 0.98, p < 0.001). It can be concluded that the image quality of the JJ2000 and Apollo algorithms is statistically equivalent for medical image compression. PMID:23589187

  18. Image quality scaling of electrophotographic prints

    NASA Astrophysics Data System (ADS)

    Johnson, Garrett M.; Patil, Rohit A.; Montag, Ethan D.; Fairchild, Mark D.

    2003-12-01

    Two psychophysical experiments were performed scaling overall image quality of black-and-white electrophotographic (EP) images. Six different printers were used to generate the images. There were six different scenes included in the experiment, representing photographs, business graphics, and test-targets. The two experiments were split into a paired-comparison experiment examining overall image quality, and a triad experiment judging overall similarity and dissimilarity of the printed images. The paired-comparison experiment was analyzed using Thurstone's Law, to generate an interval scale of quality, and with dual scaling, to determine the independent dimensions used for categorical scaling. The triad experiment was analyzed using multidimensional scaling to generate a psychological stimulus space. The psychophysical results indicated that the image quality was judged mainly along one dimension and that the relationships among the images can be described with a single dimension in most cases. Regression of various physical measurements of the images to the paired comparison results showed that a small number of physical attributes of the images could be correlated with the psychophysical scale of image quality. However, global image difference metrics did not correlate well with image quality.
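    A small sketch of the Thurstone Case V scaling used for the paired-comparison data is given below: preference counts become proportions, proportions become z-scores, and each print's interval-scale quality is the mean of its z-scores. The vote matrix is a made-up toy example, not data from the study, and the dual-scaling and multidimensional-scaling analyses are not reproduced.

    import numpy as np
    from scipy.stats import norm

    def thurstone_case_v(wins: np.ndarray) -> np.ndarray:
        """wins[i, j] = number of observers preferring stimulus i over stimulus j."""
        n = wins + wins.T
        p = np.full(wins.shape, 0.5)
        mask = n > 0
        p[mask] = wins[mask] / n[mask]
        p = np.clip(p, 0.01, 0.99)          # avoid infinite z-scores for unanimous pairs
        np.fill_diagonal(p, 0.5)            # self-comparisons contribute z = 0
        return norm.ppf(p).mean(axis=1)     # interval scale value per stimulus

    votes = np.array([[0, 14, 18],
                      [6, 0, 11],
                      [2, 9, 0]], dtype=float)
    print(thurstone_case_v(votes))          # higher value = higher perceived quality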

  1. WFC3 UVIS Image Quality

    NASA Astrophysics Data System (ADS)

    Dressel, Linda

    2009-07-01

    The UVIS imaging performance over the detector will be assessed periodically {every 4 months} in two passbands {F275W and F621M} to check for image stability. The field around star 58 in the open cluster NGC188 is the chosen target because it is sufficiently dense to provide good sampling over the FOV while providing enough isolated stars to permit accurate PSF {point spread function} measurement. It is available year-round and used previously for ACS image quality assessment. The field is astrometric, and astrometric guide stars will be used, so that the plate scale and image orientation may also be determined if necessary {as in SMOV proposals 11436 and 11442}. Full frame images will be obtained at each of 4 POSTARG offset positions designed to improve sampling over the detector. This proposal is a periodic repeat {once every 4 months} of visits similar to those in SMOV proposal 11436 {activity ID WFC3-23}. The data will be analyzed using the code and techniques described in ISR WFC3 2008-40 {Hartig}. Profiles of encircled energy will be monitored and presented in an ISR. If an update to the SIAF is needed, {V2,V3} locations of stars will be obtained from the Flight Ops Sensors and Calibrations group at GSFC, the {V2,V3} of the reference pixel and the orientation of the detector will be determined by the WFC3 group, and the Telescopes group will update and deliver the SIAF to the PRDB branch. The specific PSF metrics to be examined are encircled energy for aperture diameter 0.15, 0.20, 0.25, and 0.35 arcsec, FWHM, and sharpness. {See ISR WFC3 2008-40 tables 2 and 3 and preceding text.} About 20 stars distributed over the detector will be measured in each exposure for each filter. The mean, rms, and rms of the mean will be determined for each metric. The values determined from each of the 4 exposures per filter within a visit will be compared to each other to see to what extent they are affected by "breathing". Values will be compared from visit to visit, starting

  2. WFC3 IR Image Quality

    NASA Astrophysics Data System (ADS)

    Dressel, Linda

    2009-07-01

    The IR imaging performance over the detector will be assessed periodically {every 4 months} in two passbands to check for image stability. The field around star 58 in the open cluster NGC188 is the chosen target because it is sufficiently dense to provide good sampling over the FOV while providing enough isolated stars to permit accurate PSF {point spread function} measurement. It is available year-round and used previously for ACS image quality assessment. The field is astrometric, and astrometric guide stars will be used, so that the plate scale and image orientation may also be determined if necessary {as in SMOV proposals 11437 and 11443}. Full frame images will be obtained at each of 4 POSTARG offset positions designed to improve sampling over the detector in F098M, F105W, and F160W. The PSFs will be sampled at 4 positions with subpixel shifts in filters F164N and F127M. This proposal is a periodic repeat {once every 4 months} of the visits in SMOV proposal 11437 {activity ID WFC3-24}. The data will be analyzed using the code and techniques described in ISR WFC3 2008-41 {Hartig}. Profiles of encircled energy will be monitored and presented in an ISR. If an update to the SIAF is needed, {V2,V3} locations of stars will be obtained from the Flight Ops Sensors and Calibrations group at GSFC, the {V2,V3} of the reference pixel and the orientation of the detector will be determined by the WFC3 group, and the Telescopes group will update and deliver the SIAF to the PRDB branch. The specific PSF metrics to be examined are encircled energy for aperture diameter 0.25, 0.37, and 0.60 arcsec, FWHM, and sharpness. {See ISR WFC3 2008-41 tables 2 and 3 and preceding text.} 20 stars distributed over the detector will be measured in each exposure for each filter. The mean, rms, and rms of the mean will be determined for each metric. The values determined from each of the 4 exposures per filter within a visit will be compared to each other to see to what extent they are affected

  3. Quality Science Teacher Professional Development and Student Achievement

    NASA Astrophysics Data System (ADS)

    Dubner, J.

    2007-12-01

    Studies show that socio-economic background and parental education accounts for 50-60 percent of a child's achievement in school. School, and other influences, account for the remaining 40-50 percent. In contrast to most other professions, schools require no real apprenticeship training of science teachers. Overall, only 38 percent of United States teachers have had any on-the-job training in their first teaching position, and in some cases this consisted of a few meetings over the course of a year between the beginning teacher and the assigned mentor or master teacher. Since individual teachers determine the bulk of a student's school experiences, interventions focused on teachers have the greatest likelihood of affecting students. To address this deficiency, partnerships between scientists and K-12 teachers are increasingly recognized as an excellent method for improving teacher preparedness and the quality of science education. Columbia University's Summer Research Program for Science Teachers' (founded in 1990) basic premise is simple: teachers cannot effectively teach science if they have no firsthand experience doing science, hence the Program's motto, "Practice what you teach." Columbia University's Summer Research Program for Science Teachers provides strong evidence that a teacher research program is a very effective form of professional development for secondary school science teachers and has a direct correlation to increased student achievement in science. The author will present the methodology of the program's evaluation citing statistically significant data. The author will also show the economic benefits of teacher participation in this form of professional development.

  4. Combined terahertz imaging system for enhanced imaging quality

    NASA Astrophysics Data System (ADS)

    Dolganova, Irina N.; Zaytsev, Kirill I.; Metelkina, Anna A.; Yakovlev, Egor V.; Karasik, Valeriy E.; Yurchenko, Stanislav O.

    2016-06-01

    An improved terahertz (THz) imaging system is proposed for enhancing image quality. The imaging scheme includes a THz source and a detection system that can operate in both active and passive modes. A THz beam reshaper is proposed in order to illuminate the object plane homogeneously, and its form and internal structure were studied by numerical simulation. Using different test objects, we compare imaging quality in the active and passive THz imaging modes. Image contrast and modulation transfer functions show that the active and passive modes have drawbacks at high and low spatial frequencies, respectively. The experimental results confirm the benefit of combining both imaging modes into a hybrid one. The proposed algorithm for forming the hybrid THz image is an effective approach for retrieving maximum information about the remote object.
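
    As a hedged illustration of how such a hybrid image could be formed, the sketch below fuses co-registered active and passive images in the Fourier domain, taking low spatial frequencies from the active image and high ones from the passive image (following the abstract's statement about where each mode is weakest); the cutoff frequency, the hard band split, and the function name are illustrative assumptions rather than the authors' algorithm.

```python
import numpy as np

def hybrid_thz_image(active, passive, cutoff=0.15):
    """Fuse co-registered active and passive THz images in the Fourier domain.

    Assumption for illustration: the active mode supplies the low spatial
    frequencies and the passive mode the high ones; a hard radial split at
    `cutoff` (cycles/pixel) is used instead of a smooth taper.
    """
    fy = np.fft.fftfreq(active.shape[0])[:, None]
    fx = np.fft.fftfreq(active.shape[1])[None, :]
    rho = np.hypot(fy, fx)                      # normalised spatial frequency
    lowpass = (rho <= cutoff).astype(float)

    fa = np.fft.fft2(active)
    fp = np.fft.fft2(passive)
    return np.fft.ifft2(fa * lowpass + fp * (1.0 - lowpass)).real
```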

  5. Optimization of synthetic aperture image quality

    NASA Astrophysics Data System (ADS)

    Moshavegh, Ramin; Jensen, Jonas; Villagomez-Hoyos, Carlos A.; Stuart, Matthias B.; Hemmsen, Martin Christian; Jensen, Jørgen Arendt

    2016-04-01

    Synthetic Aperture (SA) imaging produces high-quality images and velocity estimates of both slow and fast flow at high frame rates. However, grating lobe artifacts can appear both in transmission and reception, and these affect the image quality and the frame rate. Optimization of the parameters affecting SA image quality is therefore of great importance, and this paper proposes an advanced procedure for optimizing the parameters essential for acquiring optimal image quality while generating high-resolution SA images. The optimization is based mainly on measures such as the F-number, the number of emissions, and the aperture size, which are considered the acquisition factors that contribute most to the quality of the high-resolution images in SA. Image quality performance is quantified in terms of the full width at half maximum (FWHM) and the cystic resolution (CTR). The results of the study showed that SA imaging with only 32 emissions and a maximum sweep angle of 22 degrees yields a very good image quality compared with using 256 emissions and the full aperture size. The number of emissions and the maximum sweep angle in SA can therefore be optimized to reach reasonably good performance and to increase the frame rate by lowering the required number of emissions. All measurements are performed using the experimental SARUS scanner connected to a λ/2-pitch transducer. A wire phantom and a tissue-mimicking phantom containing anechoic cysts are scanned using the optimized parameters for the transducer. Measurements coincide with simulations.
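
    The two figures of merit named in the record can be approximated with the short sketch below; the lateral-FWHM routine assumes an envelope-detected line through the wire target, and the cyst measure is a simplified dB contrast used as a stand-in for the cystic resolution (CTR) metric, not its exact definition.

```python
import numpy as np

def lateral_fwhm(envelope_line, dx_mm):
    """FWHM (mm) of a wire-target response from one lateral line of the envelope data."""
    line = np.asarray(envelope_line, dtype=float)
    line = line / line.max()
    above = np.where(line >= 0.5)[0]            # samples above half maximum
    return (above[-1] - above[0]) * dx_mm if above.size else np.nan

def cyst_contrast_db(envelope, cyst_mask, bg_mask):
    """Simplified anechoic-cyst contrast in dB (a proxy for CTR, by assumption)."""
    e_cyst = np.sqrt(np.mean(envelope[cyst_mask] ** 2))   # RMS energy inside the cyst
    e_bg = np.sqrt(np.mean(envelope[bg_mask] ** 2))       # RMS energy in surrounding speckle
    return 20.0 * np.log10(e_cyst / e_bg)
```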

  6. Automatic quality assessment of planetary images

    NASA Astrophysics Data System (ADS)

    Sidiropoulos, P.; Muller, J.-P.

    2015-10-01

    A significant fraction of planetary images are corrupted beyond the point that much scientific meaning can be extracted. For example, transmission errors result in missing data which is unrecoverable. The available planetary image datasets include many such "bad data", which both occupy valuable scientific storage resources and create false impressions about planetary image availability for specific planetary objects or target areas. In this work, we demonstrate a pipeline that we have developed to automatically assess the quality of planetary images. Additionally, this method discriminates between different types of image degradation, such as low-quality originating from camera flaws or low-quality triggered by atmospheric conditions, etc. Examples of quality assessment results for Viking Orbiter imagery will be also presented.
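
    One elementary check such a pipeline might perform is flagging images dominated by unrecoverable missing data from transmission errors; the sketch below is a hedged illustration in which the fill value and rejection threshold are assumptions, not the pipeline's actual criteria.

```python
import numpy as np

def flag_missing_data(image, fill_value=0, max_bad_fraction=0.05):
    """Flag an image whose missing-data pixels or blank scanlines exceed a threshold."""
    bad = (image == fill_value)                       # assumed marker for unrecoverable pixels
    bad_fraction = float(bad.mean())
    blank_lines = int(bad.all(axis=1).sum())          # scanlines lost entirely
    return {
        "bad_fraction": bad_fraction,
        "blank_lines": blank_lines,
        "reject": bad_fraction > max_bad_fraction,
    }
```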

  7. Image Quality Ranking Method for Microscopy

    PubMed Central

    Koho, Sami; Fazeli, Elnaz; Eriksson, John E.; Hänninen, Pekka E.

    2016-01-01

    Automated analysis of microscope images is necessitated by the increased need for high-resolution follow-up of events in time. Manually finding the right images to be analyzed, or to be eliminated from data analysis, is a common day-to-day problem in microscopy research today, and the constantly growing size of image datasets does not help the matter. We propose a simple method and a software tool for sorting images within a dataset according to their relative quality. We demonstrate the applicability of our method in finding good quality images in a STED microscope sample preparation optimization image dataset. The results are validated by comparisons to subjective opinion scores, as well as five state-of-the-art blind image quality assessment methods. We also show how our method can be applied to eliminate useless out-of-focus images in a High-Content-Screening experiment. We further evaluate the ability of our image quality ranking method to detect out-of-focus images, by extensive simulations, and by comparing its performance against previously published, well-established microscopy autofocus metrics. PMID:27364703
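
    The record does not spell out the ranking method itself, so the sketch below only illustrates the kind of well-established autofocus metric it is benchmarked against; variance of the Laplacian is used as a stand-in, and the function names are hypothetical.

```python
import numpy as np
from scipy.ndimage import laplace

def focus_score(image):
    """Variance-of-Laplacian focus measure: higher values indicate more in-focus detail."""
    return float(laplace(np.asarray(image, dtype=float)).var())

def rank_by_focus(images):
    """Return dataset indices ordered from sharpest to blurriest image."""
    return sorted(range(len(images)), key=lambda i: focus_score(images[i]), reverse=True)
```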

  8. Image Quality Ranking Method for Microscopy

    NASA Astrophysics Data System (ADS)

    Koho, Sami; Fazeli, Elnaz; Eriksson, John E.; Hänninen, Pekka E.

    2016-07-01

    Automated analysis of microscope images is necessitated by the increased need for high-resolution follow-up of events in time. Manually finding the right images to be analyzed, or to be eliminated from data analysis, is a common day-to-day problem in microscopy research today, and the constantly growing size of image datasets does not help the matter. We propose a simple method and a software tool for sorting images within a dataset according to their relative quality. We demonstrate the applicability of our method in finding good quality images in a STED microscope sample preparation optimization image dataset. The results are validated by comparisons to subjective opinion scores, as well as five state-of-the-art blind image quality assessment methods. We also show how our method can be applied to eliminate useless out-of-focus images in a High-Content-Screening experiment. We further evaluate the ability of our image quality ranking method to detect out-of-focus images, by extensive simulations, and by comparing its performance against previously published, well-established microscopy autofocus metrics.

  9. End-to-end image quality assessment

    NASA Astrophysics Data System (ADS)

    Raventos, Joaquin

    2012-05-01

    An innovative computerized benchmarking approach (US patent pending, September 2011), based on extensive application of photometry, geometrical optics, and digital media, uses a randomized target so that a standard observer can assess the image quality of video imaging systems at different daytime and low-light luminance levels. It takes into account the target's contrast and color characteristics, as well as the observer's visual acuity and dynamic response. This includes human vision as part of the "extended video imaging system" (EVIS), and allows image quality assessment by several standard observers simultaneously.

  10. Cartographic quality of ERTS-1 images

    NASA Technical Reports Server (NTRS)

    Welch, R. I.

    1973-01-01

    Analyses of simulated and operational ERTS images have provided initial estimates of resolution, ground resolution, detectability thresholds and other measures of image quality of interest to earth scientists and cartographers. Based on these values, including an approximate ground resolution of 250 meters for both RBV and MSS systems, the ERTS-1 images appear suited to the production and/or revision of planimetric and photo maps of 1:500,000 scale and smaller for which map accuracy standards are compatible with the imaged detail. Thematic mapping, although less constrained by map accuracy standards, will be influenced by measurement thresholds and errors which have yet to be accurately determined for ERTS images. This study also indicates the desirability of establishing a quantitative relationship between image quality values and map products which will permit both engineers and cartographers/earth scientists to contribute to the design requirements of future satellite imaging systems.

  11. Does High School Facility Quality Affect Student Achievement? A Two-Level Hierarchical Linear Model

    ERIC Educational Resources Information Center

    Bowers, Alex J.; Urick, Angela

    2011-01-01

    The purpose of this study is to isolate the independent effects of high school facility quality on student achievement using a large, nationally representative U.S. database of student achievement and school facility quality. Prior research on linking school facility quality to student achievement has been mixed. Studies that relate overall…

  12. Continuous assessment of perceptual image quality

    NASA Astrophysics Data System (ADS)

    Hamberg, Roelof; de Ridder, Huib

    1995-12-01

    The study addresses whether subjects are able to assess the perceived quality of an image sequence continuously. To this end, a new method for assessing time-varying perceptual image quality is presented by which subjects continuously indicate the perceived strength of image quality by moving a slider along a graphical scale. The slider's position on this scale is sampled every second. In this way, temporal variations in quality can be monitored quantitatively, and a means is provided by which differences between, for example, alternative transmission systems can be analyzed in an informative way. The usability of this method is illustrated by an experiment in which, for a period of 815 s, subjects assessed the quality of still pictures comprising time-varying degrees of sharpness. Copyright (c) 1995 Optical Society of America

  13. Rendered virtual view image objective quality assessment

    NASA Astrophysics Data System (ADS)

    Lu, Gang; Li, Xiangchun; Zhang, Yi; Peng, Kai

    2013-08-01

    Research on rendered virtual view image (RVVI) objective quality assessment is important for integrated imaging systems and image quality assessment (IQA). Traditional IQA algorithms cannot be applied directly at the system receiver side due to inter-view displacement and the absence of an original reference. This study proposes a block-based neighbor reference (NbR) IQA framework for RVVI quality assessment. In the proposed framework, the neighbor views used for rendering are employed for quality assessment. A symphonious factor handling noise and inter-view displacement is defined and applied to evaluate the contribution of the quality index obtained in each block pair. A three-stage experiment scheme is also presented to validate the proposed framework and to evaluate its homogeneity performance when compared to full-reference IQA. Experimental results show that the proposed framework is useful for RVVI objective quality assessment at the system receiver side and for benchmarking different rendering algorithms.

  14. Image Acquisition and Quality in Digital Radiography.

    PubMed

    Alexander, Shannon

    2016-09-01

    Medical imaging has undergone dramatic changes and technological breakthroughs since the introduction of digital radiography. This article presents information on the development of digital radiography and types of digital radiography systems. Aspects of image quality and radiation exposure control are highlighted as well. In addition, the article includes related workplace changes and medicolegal considerations in the digital radiography environment. PMID:27601691

  15. Image quality and automatic color equalization

    NASA Astrophysics Data System (ADS)

    Chambah, M.; Rizzi, A.; Saint Jean, C.

    2007-01-01

    In the professional movie field, image quality is mainly judged visually. In fact, experts and technicians judge and determine the quality of film images during the calibration (post-production) process. As a consequence, the quality of a restored movie is also estimated subjectively by experts [26,27]. On the other hand, objective quality metrics do not necessarily correlate well with perceived quality [28]. Moreover, some measures assume that there exists a reference in the form of an "original" to compare to, which prevents their use in the digital restoration field, where often there is no reference to compare to. That is why subjective evaluation has been the most used and most efficient approach up to now. But subjective assessment is expensive, time consuming and therefore does not meet the economic requirements of the field [29,25]. Thus, reliable automatic methods for visual quality assessment are needed in the field of digital film restoration. Ideally, a quality assessment system would perceive and measure image or video impairments just like a human being. The ACE method, for Automatic Color Equalization [1,2], is an algorithm for unsupervised enhancement of digital images. Like our vision system, ACE is able to adapt to widely varying lighting conditions and to extract visual information from the environment efficaciously. What we present in this paper is the use of ACE as the basis of a reference-free image quality metric. ACE output is an estimate of our visual perception of a scene. The assumption, tested in other papers [3,4], is that ACE, by enhancing images in the way our vision system would perceive them, increases their overall perceived quality. The basic idea proposed in this paper is that ACE output can differ from the input more or less according to the visual quality of the input image. In other words, an image appears good if it is near to the visual appearance we (estimate to) have of it. Conversely, bad quality images will need "more filtering". Test

  16. Holographic projection with higher image quality.

    PubMed

    Qu, Weidong; Gu, Huarong; Tan, Qiaofeng

    2016-08-22

    The spatial resolution limited by the size of the spatial light modulator (SLM) in holographic projection can hardly be increased, and speckle noise always appears, degrading image quality. In this paper, holographic projection with higher image quality is presented. The spatial resolution of the reconstructed image is twice that of the existing holographic projection, and speckle is suppressed well at the same time. Finally, the effectiveness of the holographic projection is verified in experiments. PMID:27557197

  17. Perceptual image quality and telescope performance ranking

    NASA Astrophysics Data System (ADS)

    Lentz, Joshua K.; Harvey, James E.; Marshall, Kenneth H.; Salg, Joseph; Houston, Joseph B.

    2010-08-01

    Launch Vehicle Imaging Telescopes (LVIT) are expensive, high quality devices intended for improving the safety of vehicle personnel, ground support, civilians, and physical assets during launch activities. If allowed to degrade through the combination of wear, environmental factors, and ineffective or inadequate maintenance, these devices lose their ability to provide imagery of adequate quality to analysts to prevent catastrophic events such as the NASA Space Shuttle Challenger accident in 1986 and the Columbia disaster of 2003. A software tool incorporating aberrations and diffraction that was developed for maintenance evaluation and modeling of telescope imagery is presented. This tool provides MTF-based image quality metric outputs which are correlated to ascent imagery analysts' perception of image quality, allowing a prediction of the usefulness of imagery which would be produced by a telescope under different simulated conditions.

  18. Average glandular dose and phantom image quality in mammography

    NASA Astrophysics Data System (ADS)

    Oliveira, M.; Nogueira, M. S.; Guedes, E.; Andrade, M. C.; Peixoto, J. E.; Joana, G. S.; Castro, J. G.

    2007-09-01

    Doses in mammography should be maintained as low as possible without reducing the high image quality needed for early detection of breast cancer. The breast is composed of tissues with very similar composition and densities, which increases the difficulty of detecting the small changes in normal anatomical structures that may be associated with breast cancer. To achieve the standards of definition and contrast required for mammography, the quality and intensity of the X-ray beam, the breast positioning and compression, the film-screen system, and the film processing have to be in optimal operational condition. This study sought to evaluate average glandular dose (AGD) and image quality on a standard phantom in 134 mammography units in the state of Minas Gerais, Brazil, between December 2004 and May 2006. AGDs were obtained by means of entrance kerma measured with TL LiF100 dosimeters on the phantom surface. Phantom images were obtained with the automatic exposure technique, fixed 28 kV, and a molybdenum anode-filter combination. The phantom used contained structures simulating tumoral masses, microcalcifications, fibers and low contrast areas. High-resolution metallic meshes to assess image definition and a stepwedge to measure the image contrast index were also inserted in the phantom. The visualization of the simulated structures, the mean optical density and the contrast index allowed the phantom image quality to be classified on a seven-point scale. The results showed that 54.5% of the facilities did not achieve the minimum performance level for image quality, mainly due to insufficient film processing, observed in 61.2% of the units. AGD varied from 0.41 to 2.73 mGy with a mean value of 1.32±0.44 mGy. In all optimal quality phantom images, AGDs were in this range. Additionally, in 7.3% of the mammography units, the AGD constraint of 2 mGy was exceeded. One may conclude that dose level to patient and image quality are not in conformity with regulations in most of the facilities. This

  19. Using Collaborative Course Development to Achieve Online Course Quality Standards

    ERIC Educational Resources Information Center

    Chao, Ining Tracy; Saj, Tami; Hamilton, Doug

    2010-01-01

    The issue of quality is becoming front and centre as online distance education moves into the mainstream of higher education. Many believe collaborative course development is the best way to design quality online courses. This research uses a case study approach to probe into the collaborative course development process and the implementation of…

  20. How healthcare organizations use the Internet to market quality achievements.

    PubMed

    Revere, Lee; Robinson, Leroy

    2010-01-01

    The increasingly competitive environment is having a strong bearing on the strategic marketing practices of hospitals. The Internet is a fairly new marketing tool, and it has the potential to dramatically influence healthcare consumers. This exploratory study investigates how hospitals use the Internet as a tool to market the quality of their services. Significant evidence exists that customers use the Internet to find information about potential healthcare providers, including information concerning quality. Data were collected from a random sample of 45 U.S. hospitals from the American Hospital Association database. The data included hospital affiliation, number of staffed beds, accreditation status, Joint Commission quality awards, and number of competing hospitals. The study's findings show that system-affiliated hospitals do not provide more, or less, quality information on their websites than do non-system-affiliated hospitals. The findings suggest that the amount of quality information provided on a hospital website is not dependent on hospital size. Research provides evidence that hospitals with more Joint Commission awards promote their quality accomplishments more so than their counterparts that earned fewer Joint Commission awards. The findings also suggest that the more competitors in a marketplace the more likely a hospital is to promote its quality as a potential differential advantage. The study's findings indicate that a necessary element of any hospital's competitive strategy should be to include the marketing of its quality on the organization's website.

  1. Peripheral Aberrations and Image Quality for Contact Lens Correction

    PubMed Central

    Shen, Jie; Thibos, Larry N.

    2011-01-01

    Purpose Contact lenses reduced the degree of hyperopic field curvature present in myopic eyes and rigid contact lenses reduced sphero-cylindrical image blur on the peripheral retina, but their effect on higher order aberrations and overall optical quality of the eye in the peripheral visual field is still unknown. The purpose of our study was to evaluate peripheral wavefront aberrations and image quality across the visual field before and after contact lens correction. Methods A commercial Hartmann-Shack aberrometer was used to measure ocular wavefront errors in 5° steps out to 30° of eccentricity along the horizontal meridian in uncorrected eyes and when the same eyes are corrected with soft or rigid contact lenses. Wavefront aberrations and image quality were determined for the full elliptical pupil encountered in off-axis measurements. Results Ocular higher-order aberrations increase away from fovea in the uncorrected eye. Third-order aberrations are larger and increase faster with eccentricity compared to the other higher-order aberrations. Contact lenses increase all higher-order aberrations except 3rd-order Zernike terms. Nevertheless, a net increase in image quality across the horizontal visual field for objects located at the foveal far point is achieved with rigid lenses, whereas soft contact lenses reduce image quality. Conclusions Second order aberrations limit image quality more than higher-order aberrations in the periphery. Although second-order aberrations are reduced by contact lenses, the resulting gain in image quality is partially offset by increased amounts of higher-order aberrations. To fully realize the benefits of correcting higher-order aberrations in the peripheral field requires improved correction of second-order aberrations as well. PMID:21873925

  2. Measurement and control of color image quality

    NASA Astrophysics Data System (ADS)

    Schneider, Eric; Johnson, Kate; Wolin, David

    1998-12-01

    Color hardcopy output is subject to many of the same image quality concerns as monochrome hardcopy output. Line and dot quality, uniformity, halftone quality, and the presence of bands, spots or deletions are just a few of the attributes shared by both color and monochrome output. Although measurement of color requires the use of specialized instrumentation, the techniques used to assess color-dependent image quality attributes on color hardcopy output are based on many of the same techniques as those used in monochrome image quality quantification. In this paper we present several different aspects of color quality assessment in both R&D and production environments, as well as several examples of color quality measurements similar to those currently being used at Hewlett-Packard to characterize color devices and to verify system performance. We then discuss some important considerations for choosing appropriate color quality measurement equipment for use in either R&D or production environments. Finally, we discuss the critical relationship between objective measurements and human perception.

  3. A database for spectral image quality

    NASA Astrophysics Data System (ADS)

    Le Moan, Steven; George, Sony; Pedersen, Marius; Blahová, Jana; Hardeberg, Jon Yngve

    2015-01-01

    We introduce a new image database dedicated to multi-/hyperspectral image quality assessment. A total of nine scenes representing pseudo-flat surfaces of different materials (textile, wood, skin, etc.) were captured by means of a 160-band hyperspectral system with a spectral range between 410 and 1000 nm. Five spectral distortions were designed, applied to the spectral images and subsequently compared in a psychometric experiment, in order to provide a basis for applications such as the evaluation of spectral image difference measures. The database can be downloaded freely from http://www.colourlab.no/cid.

  4. Evaluation of image quality in computed radiography based mammography systems

    NASA Astrophysics Data System (ADS)

    Singh, Abhinav; Bhwaria, Vipin; Valentino, Daniel J.

    2011-03-01

    Mammography is the most widely accepted procedure for the early detection of breast cancer, and Computed Radiography (CR) is a cost-effective technology for digital mammography. We have demonstrated that CR image quality is viable for digital mammography. The image quality of mammograms acquired using Computed Radiography technology was evaluated using the Modulation Transfer Function (MTF), Noise Power Spectrum (NPS) and Detective Quantum Efficiency (DQE). The measurements were made using a 28 kVp beam (RQA M-II) with 2 mm of Al filtration and a Mo/Mo target/filter combination. The acquired image bit depth was 16 bits and the pixel pitch for scanning was 50 microns. A step-wedge phantom (to measure the contrast-to-noise ratio (CNR)) and the CDMAM 3.4 contrast-detail phantom were also used to assess the image quality. The CNR values were observed at varying thicknesses of PMMA. The CDMAM 3.4 phantom results were plotted and compared to the EUREF acceptable and achievable values. The effect on image quality was measured using the physics metrics. A lower DQE was observed even with a higher MTF, possibly due to a higher noise component resulting from the way the scanner was configured. The CDMAM phantom scores demonstrated contrast-detail performance comparable to the EUREF values. A cost-effective CR machine was optimized for high-resolution and high-contrast imaging.
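
    The three physics metrics are combined through the standard relation DQE(f) = MTF(f)^2 / (q · NNPS(f)); the sketch below assumes the MTF and normalised NPS are already sampled on a common frequency axis, and that the fluence value corresponds to the beam quality actually used.

```python
import numpy as np

def dqe(mtf, nnps, photon_fluence_per_mm2):
    """Detective quantum efficiency from the presampled MTF and the normalised NPS.

    Uses DQE(f) = MTF(f)^2 / (q * NNPS(f)), with q the incident photon fluence
    (photons/mm^2) of the beam quality used for the measurement.
    """
    mtf = np.asarray(mtf, dtype=float)
    nnps = np.asarray(nnps, dtype=float)
    return mtf ** 2 / (photon_fluence_per_mm2 * nnps)
```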

  5. Monotonic correlation analysis of image quality measures for image fusion

    NASA Astrophysics Data System (ADS)

    Kaplan, Lance M.; Burks, Stephen D.; Moore, Richard K.; Nguyen, Quang

    2008-04-01

    The next generation of night vision goggles will fuse image intensified and long wave infra-red to create a hybrid image that will enable soldiers to better interpret their surroundings during nighttime missions. Paramount to the development of such goggles is the exploitation of image quality (IQ) measures to automatically determine the best image fusion algorithm for a particular task. This work introduces a novel monotonic correlation coefficient to investigate how well possible IQ features correlate to actual human performance, which is measured by a perception study. The paper will demonstrate how monotonic correlation can identify worthy features that could be overlooked by traditional correlation values.
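
    The record does not define the novel coefficient, so the sketch below uses Spearman's rank correlation as a familiar monotonic-correlation baseline for screening candidate IQ features against perception-study scores; the screening function and its name are illustrative assumptions.

```python
import numpy as np
from scipy.stats import spearmanr

def screen_iq_features(feature_matrix, human_scores):
    """Rank IQ features by the strength of their monotonic relation to human performance.

    feature_matrix has one row per fused image and one column per candidate IQ feature;
    Spearman's rho is a stand-in for the paper's novel monotonic correlation coefficient.
    """
    strengths = []
    for j in range(feature_matrix.shape[1]):
        rho, _ = spearmanr(feature_matrix[:, j], human_scores)
        strengths.append(abs(rho))
    return np.argsort(strengths)[::-1]        # feature indices, strongest correlation first
```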

  6. Perceived Image Quality Improvements from the Application of Image Deconvolution to Retinal Images from an Adaptive Optics Fundus Imager

    NASA Astrophysics Data System (ADS)

    Soliz, P.; Nemeth, S. C.; Erry, G. R. G.; Otten, L. J.; Yang, S. Y.

    Aim: The objective of this project was to apply an image restoration methodology based on wavefront measurements obtained with a Shack-Hartmann sensor and to evaluate the restored image quality based on medical criteria. Methods: Implementing an adaptive optics (AO) technique, a fundus imager was used to achieve low-order correction to images of the retina. The high-order correction was provided by deconvolution. A Shack-Hartmann wavefront sensor measures aberrations. The wavefront measurement is the basis for activating a deformable mirror. Image restoration to remove remaining aberrations is achieved by direct deconvolution using the point spread function (PSF) or by a blind deconvolution. The PSF is estimated using measured wavefront aberrations. Direct application of classical deconvolution methods such as inverse filtering, Wiener filtering or iterative blind deconvolution (IBD) to the AO retinal images obtained from the adaptive optical imaging system is not satisfactory because of the very large image size, difficulty in modeling the system noise, and inaccuracy in PSF estimation. Our approach combines direct and blind deconvolution to exploit available system information, avoid non-convergence, and avoid time-consuming iterative processes. Results: The deconvolution was applied to human subject data and the resulting restored images were compared by a trained ophthalmic researcher. Qualitative analysis showed significant improvements. Neovascularization can be visualized with the adaptive optics device that cannot be resolved with the standard fundus camera. The individual nerve fiber bundles are easily resolved, as are melanin structures in the choroid. Conclusion: This project demonstrated that computer-enhanced, adaptive optics images have greater detail of anatomical and pathological structures.
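
    A minimal sketch of the direct-deconvolution step, assuming a PSF already estimated from the Shack-Hartmann wavefront data and a scalar Wiener noise-to-signal constant (both assumptions; the study's combined direct/blind scheme is more elaborate):

```python
import numpy as np

def wiener_deconvolve(image, psf, nsr=1e-2):
    """Wiener (direct) deconvolution of a retinal image with an estimated PSF.

    nsr is a scalar noise-to-signal power ratio; in practice it would come from a
    noise model rather than a fixed constant. The PSF is assumed smaller than the image.
    """
    psf_pad = np.zeros_like(image, dtype=float)
    psf_pad[:psf.shape[0], :psf.shape[1]] = psf / psf.sum()
    # Centre the PSF at the origin so the restored image is not circularly shifted
    psf_pad = np.roll(psf_pad, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)), axis=(0, 1))

    H = np.fft.fft2(psf_pad)
    G = np.fft.fft2(image)
    F_hat = np.conj(H) * G / (np.abs(H) ** 2 + nsr)
    return np.fft.ifft2(F_hat).real
```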

  7. Quality and Early Field Experiences: Partnering with Junior Achievement

    ERIC Educational Resources Information Center

    Piro, Jody S.; Anderson, Gina; Fredrickson, Rebecca

    2015-01-01

    This study explored the perceptions of preservice teacher candidates who participated in a pilot partnership between a public teacher education preparation program and Junior Achievement (JA). The partnership was grounded in the premise that providing early field experiences to preservice teacher candidates was a necessary requirement of quality…

  8. Maximising image quality in small spaces.

    PubMed

    Alford, Arezoo; Brinkworth, Simon

    2015-06-01

    A Medical Illustration Department may need to set up a studio in a space that is not designed for that purpose. This joint paper describes the attempts of two separate trusts, University Hospitals Bristol NHS Foundation Trust (UHB) and Norfolk & Norwich University Hospitals (NNUH), to refurbish unusually small studio spaces of 4m × 2m. Each trust had a substantially different project budget and faced separate obstacles, but both had a shared aim; to maximise the limited studio space and enhance the quality of images produced. The outcome at both Trusts is a significant improvement in image quality.

  9. Subjective matters: from image quality to image psychology

    NASA Astrophysics Data System (ADS)

    Fedorovskaya, Elena A.; De Ridder, Huib

    2013-03-01

    From the advent of digital imaging through several decades of studies, the human vision research community systematically focused on perceived image quality and digital artifacts due to resolution, compression, gamma, dynamic range, capture and reproduction noise, blur, etc., to help overcome existing technological challenges and shortcomings. Technological advances made digital images and digital multimedia nearly flawless in quality, and ubiquitous and pervasive in usage, provide us with the exciting but at the same time demanding possibility to turn to the domain of human experience including higher psychological functions, such as cognition, emotion, awareness, social interaction, consciousness and Self. In this paper we will outline the evolution of human centered multidisciplinary studies related to imaging and propose steps and potential foci of future research.

  10. Quantification of image quality using information theory.

    PubMed

    Niimi, Takanaga; Maeda, Hisatoshi; Ikeda, Mitsuru; Imai, Kuniharu

    2011-12-01

    The aims of the present study were to examine the usefulness of information theory in the visual assessment of image quality. We applied a first-order approximation of Shannon's information theory to compute information losses (IL). Images of a contrast-detail mammography (CDMAM) phantom were acquired with computed radiography systems at various radiation doses. Information content was defined as the entropy Σ p(i) log(1/p(i)), in which the detection probabilities p(i) were calculated from the distribution of detection rates of the CDMAM. IL was defined as the difference between the information content and the information obtained. IL decreased with increases in the disk diameters (P < 0.0001, ANOVA) and in the radiation doses (P < 0.002, F-test). Sums of IL, which we call total information losses (TIL), were closely correlated with the image quality figures (r = 0.985). TIL was dependent on the distribution of image reading ability of each examinee, even when the average reading ratio was the same within the group. TIL was shown to be sensitive to the observers' distribution of image readings and is expected to improve the evaluation of image quality.
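
    A minimal sketch of the first-order computation, assuming the detection probabilities have already been tabulated from the CDMAM readings; how the "information obtained" term is derived from the observers' responses is not spelled out in the record, so it is taken here as a precomputed input.

```python
import numpy as np

def entropy_bits(p):
    """Information content: sum of p(i) * log2(1/p(i)) over non-zero detection probabilities."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(np.sum(p * np.log2(1.0 / p)))

def information_loss(detection_probs, information_obtained_bits):
    """IL = information content minus information obtained (both in bits)."""
    return entropy_bits(detection_probs) - information_obtained_bits
```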

  11. Characteristic functionals in imaging and image-quality assessment: tutorial.

    PubMed

    Clarkson, Eric; Barrett, Harrison H

    2016-08-01

    Characteristic functionals are one of the main analytical tools used to quantify the statistical properties of random fields and generalized random fields. The viewpoint taken here is that a random field is the correct model for the ensemble of objects being imaged by a given imaging system. In modern digital imaging systems, random fields are not used to model the reconstructed images themselves since these are necessarily finite dimensional. After a brief introduction to the general theory of characteristic functionals, many examples relevant to imaging applications are presented. The propagation of characteristic functionals through both a binned and list-mode imaging system is also discussed. Methods for using characteristic functionals and image data to estimate population parameters and classify populations of objects are given. These methods are based on maximum likelihood and maximum a posteriori techniques in spaces generated by sampling the relevant characteristic functionals through the imaging operator. It is also shown how to calculate a Fisher information matrix in this space. These estimators and classifiers, and the Fisher information matrix, can then be used for image quality assessment of imaging systems.

  12. Does resolution really increase image quality?

    NASA Astrophysics Data System (ADS)

    Tisse, Christel-Loïc; Guichard, Frédéric; Cao, Frédéric

    2008-02-01

    A general trend in the CMOS image sensor market is for increasing resolution (by having a larger number of pixels) while keeping a small form factor by shrinking photosite size. This article discusses the impact of this trend on some of the main attributes of image quality. The first example is image sharpness. A smaller pitch theoretically allows a larger limiting resolution which is derived from the Modulation Transfer Function (MTF). But recent sensor technologies (1.75μm, and soon 1.45μm) with typical aperture f/2.8 are clearly reaching the size of the diffraction blur spot. A second example is the impact on pixel light sensitivity and image sensor noise. For photonic noise, the Signal-to-Noise-Ratio (SNR) is typically a decreasing function of the resolution. To evaluate whether shrinking pixel size could be beneficial to the image quality, the tradeoff between spatial resolution and light sensitivity is examined by comparing the image information capacity of sensors with varying pixel size. A theoretical analysis that takes into consideration measured and predictive models of pixel performance degradation and improvement associated with CMOS imager technology scaling, is presented. This analysis is completed by a benchmarking of recent commercial sensors with different pixel technologies.
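
    The diffraction argument can be made concrete with a hedged back-of-the-envelope check; the 550 nm wavelength is an assumption for mid-visible light, while the f/2.8 aperture and pixel pitches come from the record.

```python
# Airy-disk diameter (to the first zero) versus pixel pitch.
wavelength_um = 0.55            # assumed mid-visible wavelength
f_number = 2.8                  # typical aperture quoted in the text
airy_diameter_um = 2.44 * wavelength_um * f_number   # ~3.8 um at f/2.8

for pitch_um in (1.75, 1.45):
    print(f"{pitch_um} um pixels: Airy disk spans ~{airy_diameter_um / pitch_um:.1f} pixels")
```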

  13. Image Quality Analysis of Various Gastrointestinal Endoscopes: Why Image Quality Is a Prerequisite for Proper Diagnostic and Therapeutic Endoscopy.

    PubMed

    Ko, Weon Jin; An, Pyeong; Ko, Kwang Hyun; Hahm, Ki Baik; Hong, Sung Pyo; Cho, Joo Young

    2015-09-01

    Arising from human curiosity in terms of the desire to look within the human body, endoscopy has undergone significant advances in modern medicine. Direct visualization of the gastrointestinal (GI) tract by traditional endoscopy was first introduced over 50 years ago, after which fairly rapid advancement from rigid esophagogastric scopes to flexible scopes and high definition videoscopes has occurred. In an effort towards early detection of precancerous lesions in the GI tract, several high-technology imaging scopes have been developed, including narrow band imaging, autofocus imaging, magnified endoscopy, and confocal microendoscopy. However, these modern developments have resulted in fundamental imaging technology being skewed towards red-green-blue and this technology has obscured the advantages of other endoscope techniques. In this review article, we have described the importance of image quality analysis using a survey to consider the diversity of endoscope system selection in order to better achieve diagnostic and therapeutic goals. The ultimate aims can be achieved through the adoption of modern endoscopy systems that obtain high image quality.

  14. Image Quality Analysis of Various Gastrointestinal Endoscopes: Why Image Quality Is a Prerequisite for Proper Diagnostic and Therapeutic Endoscopy

    PubMed Central

    Ko, Weon Jin; An, Pyeong; Ko, Kwang Hyun; Hahm, Ki Baik; Hong, Sung Pyo

    2015-01-01

    Arising from human curiosity in terms of the desire to look within the human body, endoscopy has undergone significant advances in modern medicine. Direct visualization of the gastrointestinal (GI) tract by traditional endoscopy was first introduced over 50 years ago, after which fairly rapid advancement from rigid esophagogastric scopes to flexible scopes and high definition videoscopes has occurred. In an effort towards early detection of precancerous lesions in the GI tract, several high-technology imaging scopes have been developed, including narrow band imaging, autofocus imaging, magnified endoscopy, and confocal microendoscopy. However, these modern developments have resulted in fundamental imaging technology being skewed towards red-green-blue and this technology has obscured the advantages of other endoscope techniques. In this review article, we have described the importance of image quality analysis using a survey to consider the diversity of endoscope system selection in order to better achieve diagnostic and therapeutic goals. The ultimate aims can be achieved through the adoption of modern endoscopy systems that obtain high image quality. PMID:26473119

  15. Image Quality Indicator for Infrared Inspections

    NASA Technical Reports Server (NTRS)

    Burke, Eric

    2011-01-01

    The quality of images generated during an infrared thermal inspection depends on many system variables, settings, and parameters to include the focal length setting of the IR camera lens. If any relevant parameter is incorrect or sub-optimal, the resulting IR images will usually exhibit inherent unsharpness and lack of resolution. Traditional reference standards and image quality indicators (IQIs) are made of representative hardware samples and contain representative flaws of concern. These standards are used to verify that representative flaws can be detected with the current IR system settings. However, these traditional standards do not enable the operator to quantify the quality limitations of the resulting images, i.e. determine the inherent maximum image sensitivity and image resolution. As a result, the operator does not have the ability to optimize the IR inspection system prior to data acquisition. The innovative IQI described here eliminates this limitation and enables the operator to objectively quantify and optimize the relevant variables of the IR inspection system, resulting in enhanced image quality with consistency and repeatability in the inspection application. The IR IQI consists of various copper foil features of known sizes that are printed on a dielectric non-conductive board. The significant difference in thermal conductivity between the two materials ensures that each appears with a distinct grayscale or brightness in the resulting IR image. Therefore, the IR image of the IQI exhibits high contrast between the copper features and the underlying dielectric board, which is required to detect the edges of the various copper features. The copper features consist of individual elements of various shapes and sizes, or of element-pairs of known shapes and sizes and with known spacing between the elements creating the pair. For example, filled copper circles with various diameters can be used as individual elements to quantify the image sensitivity

  16. Quality evaluation of fruit by hyperspectral imaging

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This chapter presents new applications of hyperspectral imaging for measuring the optical properties of fruits and assessing their quality attributes. A brief overview is given of current techniques for measuring optical properties of turbid and opaque biological materials. Then a detailed descripti...

  17. Quality and fairness: achieving the paradigm. A personal perspective.

    PubMed

    Laurencin, C T

    1999-05-01

    African Americans and those traditionally underrepresented and discriminated against by society face particular roadblocks in achieving success. In the case of selection for orthopaedic surgery training, this paper presents fundamental concepts addressing misconceptions regarding the pool of underrepresented applicants for positions in orthopaedic surgery residency and the notion and definition of the terms qualified and best qualified regarding the residency applicant pool. The paper underscores the fact that, despite advances over the century, racism continues to be pervasive in America, and that efforts at leveling the playing field should be pursued.

  18. Achieving high-value cardiac imaging: challenges and opportunities.

    PubMed

    Wiener, David H

    2014-01-01

    Cardiac imaging is under intense scrutiny as a contributor to health care costs, with multiple initiatives under way to reduce and eliminate inappropriate testing. Appropriate use criteria are valuable guides to selecting imaging studies but until recently have focused on the test rather than the patient. Patient-centered means are needed to define the true value of imaging for patients in specific clinical situations. This article provides a definition of high-value cardiac imaging. A paradigm to judge the efficacy of echocardiography in the absence of randomized controlled trials is presented. Candidate clinical scenarios are proposed in which echocardiography constitutes high-value imaging, as well as stratagems to increase the likelihood that high-value cardiac imaging takes place in those circumstances.

  19. Prediction of Viking lander camera image quality

    NASA Technical Reports Server (NTRS)

    Huck, F. O.; Burcher, E. E.; Jobson, D. J.; Wall, S. D.

    1976-01-01

    Formulations are presented that permit prediction of image quality as a function of camera performance, surface radiance properties, and lighting and viewing geometry. Predictions made for a wide range of surface radiance properties reveal that image quality depends strongly on proper camera dynamic range command and on favorable lighting and viewing geometry. Proper camera dynamic range commands depend mostly on the surface albedo that will be encountered. Favorable lighting and viewing geometries depend mostly on lander orientation with respect to the diurnal sun path over the landing site, and tend to be independent of surface albedo and illumination scattering function. Side lighting with low sun elevation angles (10 to 30 deg) is generally favorable for imaging spatial details and slopes, whereas high sun elevation angles are favorable for measuring spectral reflectances.

  20. Achieving molecular selectivity in imaging using multiphoton Raman spectroscopy techniques

    SciTech Connect

    Holtom, Gary R.; Thrall, Brian D.; Chin, Beek Yoke; Wiley, H. Steven; Colson, Steven D.

    2000-12-01

    In the case of most imaging methods, contrast is generated either by physical properties of the sample (Differential Interference Contrast, Phase Contrast), or by fluorescent labels that are localized to a particular protein or organelle. Standard Raman and infrared methods for obtaining images are based upon the intrinsic vibrational properties of molecules, and thus obviate the need for attached fluorophores. Unfortunately, they have significant limitations for live-cell imaging. However, an active Raman method, called Coherent Anti-Stokes Raman Scattering (CARS), is well suited for microscopy, and provides a new means for imaging specific molecules. Vibrational imaging techniques, such as CARS, avoid problems associated with photobleaching and photo-induced toxicity often associated with the use of fluorescent labels with live cells. Because the laser configuration needed to implement CARS technology is similar to that used in other multiphoton microscopy methods, such as two-photon fluorescence and harmonic generation, it is possible to combine imaging modalities, thus generating simultaneous CARS and fluorescence images. A particularly powerful aspect of CARS microscopy is its ability to selectively image deuterated compounds, thus allowing the visualization of molecules, such as lipids, that are chemically indistinguishable from the native species.

  1. Physical measures of image quality in mammography

    NASA Astrophysics Data System (ADS)

    Chakraborty, Dev P.

    1996-04-01

    A recently introduced method for quantitative analysis of images of the American College of Radiology (ACR) mammography accreditation phantom has been extended to include signal- to-noise-ratio (SNR) measurements, and has been applied to survey the image quality of 54 mammography machines from 17 hospitals. Participants sent us phantom images to be evaluated for each mammography machine at their hospital. Each phantom was loaned to us for obtaining images of the wax insert plate on a reference machine at our institution. The images were digitized and analyzed to yield indices that quantified the image quality of the machines precisely. We have developed methods for normalizing for the variation of the individual speck sizes between different ACR phantoms, for the variation of the speck sizes within a microcalcification group, and for variations in overall speeds of the mammography systems. In terms of the microcalcification SNR, the variability of the x-ray machines was 40.5% when no allowance was made for phantom or mAs variations. This dropped to 17.1% when phantom variability was accounted for, and to 12.7% when mAs variability was also allowed for. Our work shows the feasibility of practical, low-cost, objective and accurate evaluations, as a useful adjunct to the present ACR method.

  2. Medical Imaging Image Quality Assessment with Monte Carlo Methods

    NASA Astrophysics Data System (ADS)

    Michail, C. M.; Karpetas, G. E.; Fountos, G. P.; Kalyvas, N. I.; Martini, Niki; Koukou, Vaia; Valais, I. G.; Kandarakis, I. S.

    2015-09-01

    The aim of the present study was to assess the image quality of PET scanners using a thin layer chromatography (TLC) plane source. The source was simulated using a previously validated Monte Carlo model. The model was developed using the GATE MC package, and reconstructed images were obtained with the STIR software for tomographic image reconstruction, with cluster computing. The PET scanner simulated in this study was the GE DiscoveryST. A plane source, consisting of a TLC plate, was simulated as a layer of silica gel on an aluminum (Al) foil substrate immersed in an 18F-FDG bath solution (1 MBq). Image quality was assessed in terms of the Modulation Transfer Function (MTF). MTF curves were estimated from transverse reconstructed images of the plane source. Images were reconstructed with the maximum likelihood estimation (MLE) OSMAPOSL algorithm. OSMAPOSL reconstruction was assessed using various subsets (3 to 21) and iterations (1 to 20), as well as various beta (hyper)parameter values. MTF values were found to increase up to the 12th iteration and to remain almost constant thereafter. MTF improves with lower beta values. The simulated PET evaluation method based on the TLC plane source can also be useful in research for the further development of PET and SPECT scanners through GATE simulations.
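
    A hedged sketch of the MTF estimation step: take a background-subtracted line spread function across the reconstructed plane source, Fourier transform it, and normalise at zero frequency; the baseline removal and the use of a single 1-D profile are simplifications.

```python
import numpy as np

def mtf_from_lsf(lsf, pixel_size_mm):
    """Estimate the MTF from a line spread function sampled across the plane source."""
    lsf = np.asarray(lsf, dtype=float)
    lsf = lsf - lsf.min()                               # crude baseline removal (assumption)
    spectrum = np.abs(np.fft.rfft(lsf))
    mtf = spectrum / spectrum[0]                        # normalise to unity at zero frequency
    freqs = np.fft.rfftfreq(lsf.size, d=pixel_size_mm)  # spatial frequency in cycles/mm
    return freqs, mtf
```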

  3. Detection of image quality metamers based on the metric for unified image quality

    NASA Astrophysics Data System (ADS)

    Miyata, Kimiyoshi; Tsumura, Norimichi

    2012-01-01

    In this paper, we introduce the concept of image quality metamerism as an expanded version of the metamerism defined in color science. The concept is used to unify different image quality attributes and is applied to introduce a metric showing the degree of image quality metamerism, used here to analyze a cultural property. Our global goal is to build a metric to evaluate the total quality of images acquired by different imaging systems and observed under different viewing conditions. As a basic step toward this global goal, the metric consists of color, spectral and texture information in this research, and is applied to detect image quality metamers in order to investigate the cultural property. The property investigated is the oldest extant version of folding screen paintings that depict the thriving city of Kyoto, designated as a nationally important cultural property in Japan. Gold-colored areas, painted using colorants of higher granularity than those in other color areas of the property, are evaluated based on the metric; the metric is then visualized as a map showing the possibility of an image quality metamer relative to the reference pixel.

  4. High Image Quality Laser Color Printer

    NASA Astrophysics Data System (ADS)

    Nagao, Kimitoshi; Morimoto, Yoshinori

    1989-07-01

    A laser color printer has been developed to depict continuous-tone color images on photographic color film or color paper with high resolution and fidelity. We used three lasers, He-Cd (441.6 nm), Ar+ (514.5 nm), and He-Ne (632.8 nm), for blue, green, and red exposures, and employed a drum scanner for two-dimensional scanning. The maximum resolution of our system is 40 c/mm (80 lines/mm) and the accuracy of density reproduction is within 1.0 when measured in color difference, a level at which most observers cannot distinguish the difference. The scanning artifacts and noise are diminished to a visually negligible level. The image quality of the output images compares well with that of actual color photographs, and is suitable for photographic image simulations.

  5. Blind image quality assessment via deep learning.

    PubMed

    Hou, Weilong; Gao, Xinbo; Tao, Dacheng; Li, Xuelong

    2015-06-01

    This paper investigates how to blindly evaluate the visual quality of an image by learning rules from linguistic descriptions. Extensive psychological evidence shows that humans prefer to conduct evaluations qualitatively rather than numerically. The qualitative evaluations are then converted into the numerical scores to fairly benchmark objective image quality assessment (IQA) metrics. Recently, lots of learning-based IQA models are proposed by analyzing the mapping from the images to numerical ratings. However, the learnt mapping can hardly be accurate enough because some information has been lost in such an irreversible conversion from the linguistic descriptions to numerical scores. In this paper, we propose a blind IQA model, which learns qualitative evaluations directly and outputs numerical scores for general utilization and fair comparison. Images are represented by natural scene statistics features. A discriminative deep model is trained to classify the features into five grades, corresponding to five explicit mental concepts, i.e., excellent, good, fair, poor, and bad. A newly designed quality pooling is then applied to convert the qualitative labels into scores. The classification framework is not only much more natural than the regression-based models, but also robust to the small sample size problem. Thorough experiments are conducted on popular databases to verify the model's effectiveness, efficiency, and robustness. PMID:25122842
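
    A hedged sketch of the final pooling step: the classifier's probabilities over the five concepts are combined into a single score; the numeric values attached to each grade below are illustrative assumptions, not the paper's actual pooling weights.

```python
import numpy as np

# Hypothetical numeric anchors for the five mental concepts (illustrative only).
GRADE_VALUES = {"excellent": 5.0, "good": 4.0, "fair": 3.0, "poor": 2.0, "bad": 1.0}
GRADES = ("excellent", "good", "fair", "poor", "bad")

def pooled_quality_score(class_probs):
    """Convert the classifier's five-grade probabilities into a single quality score."""
    values = np.array([GRADE_VALUES[g] for g in GRADES])
    probs = np.asarray(class_probs, dtype=float)
    return float(np.dot(probs, values) / probs.sum())
```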

  6. Blind image quality assessment via deep learning.

    PubMed

    Hou, Weilong; Gao, Xinbo; Tao, Dacheng; Li, Xuelong

    2015-06-01

    This paper investigates how to blindly evaluate the visual quality of an image by learning rules from linguistic descriptions. Extensive psychological evidence shows that humans prefer to conduct evaluations qualitatively rather than numerically. The qualitative evaluations are then converted into the numerical scores to fairly benchmark objective image quality assessment (IQA) metrics. Recently, lots of learning-based IQA models are proposed by analyzing the mapping from the images to numerical ratings. However, the learnt mapping can hardly be accurate enough because some information has been lost in such an irreversible conversion from the linguistic descriptions to numerical scores. In this paper, we propose a blind IQA model, which learns qualitative evaluations directly and outputs numerical scores for general utilization and fair comparison. Images are represented by natural scene statistics features. A discriminative deep model is trained to classify the features into five grades, corresponding to five explicit mental concepts, i.e., excellent, good, fair, poor, and bad. A newly designed quality pooling is then applied to convert the qualitative labels into scores. The classification framework is not only much more natural than the regression-based models, but also robust to the small sample size problem. Thorough experiments are conducted on popular databases to verify the model's effectiveness, efficiency, and robustness.

  7. Quality in university physics teaching: is it being achieved?

    NASA Astrophysics Data System (ADS)

    1998-11-01

    This was the title of a Physics Discipline Workshop held at the University of Leeds on 10 and 11 September 1998. Organizer Ashley Clarke of the university's Physics and Astronomy Department collected together an interesting variety of speakers polygonically targeting the topic, although as workshops go the audience didn't have to do much work except listen. There were representatives from 27 university physics departments who must have gone away with a lot to think about and possibly some new academic year resolutions to keep. But as a non-university no-longer teacher of (school) physics I was impressed with the general commitment to the idea that if you get the right quality of learning the teaching must be OK. I also learned (but have since forgotten) a lot of new acronyms. The keynote talk was by Gillian Hayes, Associate Director of the Quality Assurance Agency for Higher Education (QAA). She explained the role and implementation of the Subject Reviews that QAA is making for all subjects in all institutions of higher education on a five- to seven-year cycle. Physics Education hopes to publish an article about all this from QAA shortly. In the meantime, suffice it to say that the review looks at six aspects of provision, essentially from the point of view of enhancing students' experiences and learning. No doubt all participants would agree with this (they'd better if they want to score well on the Review) but may have been more worried by the next QAA speaker, Norman Jackson, who drummed in the basic facts of life as HE moves from an elite provision system to a mass provision system. He had an interesting graph showing how in the last ten years or so more students were getting firsts and upper seconds and fewer getting thirds. It seems that all those A-level students getting better grades than they used to are carrying on their good luck to degree level. But they still can't do maths (allegedly) and I doubt whether Jon Ogborn (IoP Advancing Physics Project

  8. Dried fruits quality assessment by hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Serranti, Silvia; Gargiulo, Aldo; Bonifazi, Giuseppe

    2012-05-01

    Dried fruit products have different market values according to their quality. Such quality is usually quantified in terms of the freshness of the products, as well as the presence of contaminants (pieces of shell, husk, and small stones), defects, mould and decay. The combination of these parameters, in terms of their relative presence, represents a fundamental set of factors conditioning the attributes of dried fruits detectable by the human senses (visual appearance, organoleptic properties, etc.) and their overall quality as marketable products. Sorting-selection strategies exist, but they sometimes fail when a higher degree of detection is required, especially when discriminating between dried fruits of relatively small dimensions or when aiming to perform an "early detection" of pathogen agents responsible for future mould and decay development. Surface characteristics of dried fruits can be investigated by hyperspectral imaging (HSI). In this paper, specific "ad hoc" applications proposing quality detection logics based on a hyperspectral imaging (HSI) approach are described, compared and critically evaluated. Reflectance spectra of selected dried fruits (hazelnuts) of different quality and characterized by the presence of different contaminants and defects have been acquired by a laboratory device equipped with two HSI systems working in two different spectral ranges: the visible-near infrared field (400-1000 nm) and the near infrared field (1000-1700 nm). The spectra have been processed and the results evaluated adopting both a simple and fast wavelength band ratio approach and a more sophisticated classification logic based on principal component analysis (PCA).
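
    The two processing routes named at the end of the record can be sketched as follows; the choice of bands, the number of principal components, and the cube layout (rows, columns, bands) are assumptions.

```python
import numpy as np

def band_ratio(cube, band_a, band_b, eps=1e-6):
    """Per-pixel ratio of two wavelength bands of a hyperspectral cube (rows, cols, bands)."""
    return cube[..., band_a] / (cube[..., band_b] + eps)

def spectral_pca(cube, n_components=3):
    """Project every pixel spectrum onto its first principal components."""
    h, w, b = cube.shape
    spectra = cube.reshape(-1, b).astype(float)
    spectra -= spectra.mean(axis=0)                     # centre the spectra
    cov = np.cov(spectra, rowvar=False)                 # band-by-band covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)
    components = eigvecs[:, ::-1][:, :n_components]     # largest-variance directions first
    return (spectra @ components).reshape(h, w, n_components)
```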

  9. Objective assessment of image quality VI: imaging in radiation therapy

    NASA Astrophysics Data System (ADS)

    Barrett, Harrison H.; Kupinski, Matthew A.; Müeller, Stefan; Halpern, Howard J.; Morris, John C., III; Dwyer, Roisin

    2013-11-01

    Earlier work on objective assessment of image quality (OAIQ) focused largely on estimation or classification tasks in which the desired outcome of imaging is accurate diagnosis. This paper develops a general framework for assessing imaging quality on the basis of therapeutic outcomes rather than diagnostic performance. By analogy to receiver operating characteristic (ROC) curves and their variants as used in diagnostic OAIQ, the method proposed here utilizes the therapy operating characteristic or TOC curves, which are plots of the probability of tumor control versus the probability of normal-tissue complications as the overall dose level of a radiotherapy treatment is varied. The proposed figure of merit is the area under the TOC curve, denoted AUTOC. This paper reviews an earlier exposition of the theory of TOC and AUTOC, which was specific to the assessment of image-segmentation algorithms, and extends it to other applications of imaging in external-beam radiation treatment as well as in treatment with internal radioactive sources. For each application, a methodology for computing the TOC is presented. A key difference between ROC and TOC is that the latter can be defined for a single patient rather than a population of patients.
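
    The figure of merit defined in this abstract, AUTOC, is the area under the TOC curve traced out by (NTCP, TCP) pairs as the overall dose level is varied. A minimal numerical sketch follows; the sigmoidal dose-response curves and the trapezoidal integration are illustrative assumptions, not the paper's clinical models.

        import numpy as np

        def autoc(tcp, ntcp):
            # Area under the therapy operating characteristic (TOC) curve:
            # TCP (y-axis) plotted against NTCP (x-axis), integrated with a
            # simple trapezoidal rule after sorting by NTCP.
            order = np.argsort(ntcp)
            x = np.asarray(ntcp, dtype=float)[order]
            y = np.asarray(tcp, dtype=float)[order]
            return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

        # toy sigmoidal dose-response curves (illustrative only)
        dose = np.linspace(0.0, 100.0, 101)
        tcp = 1.0 / (1.0 + np.exp(-(dose - 55.0) / 5.0))    # tumour control
        ntcp = 1.0 / (1.0 + np.exp(-(dose - 70.0) / 6.0))   # complications
        print("AUTOC =", autoc(tcp, ntcp))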

  10. Does Teacher Quality Affect Student Achievement? An Empirical Study in Indonesia

    ERIC Educational Resources Information Center

    Sirait, Swando

    2016-01-01

    The objective of this study is to examine the relationship between teacher quality and student achievement in Indonesia. Teacher quality in this study is defined as the teacher evaluation score in the areas of professional and pedagogic competency. The result of this study is consonant with previous studies showing that teacher quality, in terms of teacher…

  11. Quality After-School Programming and Its Relationship to Achievement-Related Behaviors and Academic Performance

    ERIC Educational Resources Information Center

    Grassi, Annemarie M.

    2012-01-01

    The purpose of this study is to understand the relationship between quality social support networks developed through high quality afterschool programming and achievement amongst middle school and high school aged youth. This study seeks to develop a deeper understanding of how quality after-school programs influence a youth's developmental…

  12. The optimal polarizations for achieving maximum contrast in radar images

    NASA Technical Reports Server (NTRS)

    Swartz, A. A.; Yueh, H. A.; Kong, J. A.; Novak, L. M.; Shin, R. T.

    1988-01-01

    There is considerable interest in determining the optimal polarizations that maximize contrast between two scattering classes in polarimetric radar images. A systematic approach is presented for obtaining the optimal polarimetric matched filter, i.e., the filter which produces maximum contrast between two scattering classes. The maximization procedure involves solving an eigenvalue problem in which the eigenvector corresponding to the maximum contrast ratio is the optimal polarimetric matched filter. To exhibit the physical significance of this filter, it is transformed into its associated transmitting and receiving polarization states, written in terms of horizontal and vertical vector components. For the special case where the transmitting polarization is fixed, the receiving polarization which maximizes the contrast ratio is also obtained. Polarimetric filtering is then applied to synthetic aperture radar images obtained from the Jet Propulsion Laboratory. It is shown, both numerically and through the use of radar imagery, that maximum image contrast can be realized when the data are processed with the optimal polarimetric matched filter.
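
    The optimization described above can be read as a generalized eigenvalue problem: the contrast achieved by a filter w applied to two classes with polarimetric covariance matrices A and B is the ratio (w^H A w)/(w^H B w), which is maximized by the leading generalized eigenvector. A small sketch under that reading (the 3x3 diagonal covariances are made-up placeholders, not data from the paper):

        import numpy as np
        from scipy.linalg import eigh

        def optimal_polarimetric_filter(cov_a, cov_b):
            # Generalized eigenproblem A w = lambda B w for Hermitian covariances;
            # eigh returns eigenvalues in ascending order, so the last column is
            # the filter giving the maximum contrast ratio (the last eigenvalue).
            vals, vecs = eigh(cov_a, cov_b)
            return vecs[:, -1], vals[-1]

        # placeholder covariance matrices for the two scattering classes
        A = np.diag([4.0, 1.0, 2.0])
        B = np.diag([1.0, 1.0, 1.0])
        w, contrast = optimal_polarimetric_filter(A, B)
        print("max contrast ratio:", contrast)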

  13. Visual pattern degradation based image quality assessment

    NASA Astrophysics Data System (ADS)

    Wu, Jinjian; Li, Leida; Shi, Guangming; Lin, Weisi; Wan, Wenfei

    2015-08-01

    In this paper, we introduce a visual pattern degradation based full-reference (FR) image quality assessment (IQA) method. Research on visual recognition indicates that the human visual system (HVS) is highly adaptive in extracting visual structures for scene understanding. Existing structure-degradation-based IQA methods mainly take local luminance contrast to represent structure and measure quality as the degradation of luminance contrast. In this paper, we suggest that structure includes not only luminance contrast but also orientation information. Therefore, we analyze the orientation characteristic for structure description. Inspired by the orientation selectivity mechanism in the primary visual cortex, we introduce a novel visual pattern to represent the structure of a local region. Then, the quality is measured as the degradation of both luminance contrast and the visual pattern. Experimental results on five benchmark databases demonstrate that the proposed visual pattern can effectively represent visual structure and that the proposed IQA method performs better than existing IQA metrics.

  14. Model-based quantification of image quality

    NASA Technical Reports Server (NTRS)

    Hazra, Rajeeb; Miller, Keith W.; Park, Stephen K.

    1989-01-01

    In 1982, Park and Schowengerdt published an end-to-end analysis of a digital imaging system quantifying three principal degradation components: (1) image blur - blurring caused by the acquisition system, (2) aliasing - caused by insufficient sampling, and (3) reconstruction blur - blurring caused by imperfect interpolative reconstruction. This analysis, which measures degradation as the square of the radiometric error, includes the sample-scene phase as an explicit random parameter and characterizes the image degradation caused by imperfect acquisition and reconstruction together with the effects of undersampling and random sample-scene phases. In a recent paper, Mitchell and Netravali displayed the visual effects of the above-mentioned degradations and presented a subjective analysis of their relative importance in determining image quality. The primary aim of the research is to use the analysis of Park and Schowengerdt to correlate their mathematical criteria for measuring image degradations with subjective visual criteria. Insight gained from this research can be exploited in the end-to-end design of optical systems, so that system parameters (transfer functions of the acquisition and display systems) can be designed relative to each other to obtain the best possible results using quantitative measurements.

  15. Enhancement and quality control of GOES images

    NASA Astrophysics Data System (ADS)

    Jentoft-Nilsen, Marit; Palaniappan, Kannappan; Hasler, A. Frederick; Chesters, Dennis

    1996-10-01

    The new generation of Geostationary Operational Environmental Satellites (GOES) have an imager instrument with five multispectral bands of high spatial resolution, and very high dynamic range radiance measurements with 10-bit precision. A wide variety of environmental processes can be observed at unprecedented time scales using the new imager instrument. Quality assurance and feedback to the GOES project office is performed using rapid animation at high magnification, examining differences between successive frames, and applying radiometric and geometric correction algorithms. Missing or corrupted scanline data occur unpredictably due to noise in the ground-based receiving system. Smooth high resolution noise-free animations can be recovered using automatic techniques even from scanline scratches affecting more than 25 percent of the dataset. Radiometric correction using the local solar zenith angle was applied to the visible channel to compensate for time-of-day illumination variations to produce gain-compensated movies that appear well-lit from dawn to dusk and extend the interval of useful image observations by more than two hours. A time series of brightness histograms displays some subtle quality control problems in the GOES channels related to rebinning of the radiance measurements. The human visual system is sensitive to only about half of the measured 10-bit dynamic range in intensity variations at a given point in a monochrome image. In order to effectively use the additional bits of precision and handle the high data rate, new enhancement techniques and visualization tools were developed. We have implemented interactive image enhancement techniques to selectively emphasize different subranges of the 10 bits of intensity levels. Improving navigational accuracy using registration techniques and geometric correction of scanline interleaving errors is a more difficult problem that is currently being investigated.
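
    The time-of-day gain compensation mentioned above amounts to dividing the visible-channel radiance by the cosine of the local solar zenith angle. The sketch below is a generic illustration of that idea; the clipping angle and the absence of any atmospheric correction are assumptions, not details from the paper.

        import numpy as np

        def gain_compensate_visible(radiance, solar_zenith_deg, max_zenith_deg=85.0):
            # Divide by cos(solar zenith angle) so scenes imaged at low sun
            # elevation are brightened; clip near the terminator so the gain
            # applied to the image stays bounded.
            sza = np.clip(np.radians(solar_zenith_deg), 0.0, np.radians(max_zenith_deg))
            return np.asarray(radiance, dtype=float) / np.cos(sza)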

  16. Low-Achieving Readers, High Expectations: Image Theatre Encourages Critical Literacy

    ERIC Educational Resources Information Center

    Rozansky, Carol Lloyd; Aagesen, Colleen

    2010-01-01

    Students in an eighth-grade, urban, low-achieving reading class were introduced to critical literacy through engagement in Image Theatre. Developed by liberatory dramatist Augusto Boal, Image Theatre gives participants the opportunity to examine texts in the triple role of interpreter, artist, and sculptor (i.e., image creator). The researchers…

  17. On pictures and stuff: image quality and material appearance

    NASA Astrophysics Data System (ADS)

    Ferwerda, James A.

    2014-02-01

    Realistic images are a puzzle because they serve as visual representations of objects while also being objects themselves. When we look at an image we are able to perceive both the properties of the image and the properties of the objects represented by the image. Research on image quality has typically focused on improving image properties (resolution, dynamic range, frame rate, etc.) while ignoring the issue of whether images are serving their role as visual representations. In this paper we describe a series of experiments that investigate how well images of different quality convey information about the properties of the objects they represent. In the experiments we focus on the effects that two image properties (contrast and sharpness) have on the ability of images to represent the gloss of depicted objects. We found that different experimental methods produced differing results. Specifically, when the stimulus images were presented using simultaneous pair comparison, observers were influenced by the surface properties of the images and conflated changes in image contrast and sharpness with changes in object gloss. On the other hand, when the stimulus images were presented sequentially, observers were able to disregard the image-plane properties and more accurately match the gloss of the objects represented by the different-quality images. These findings suggest that in understanding image quality it is useful to distinguish between the quality of the imaging medium and the quality of the visual information represented by that medium.

  18. Image analysis for dental bone quality assessment using CBCT imaging

    NASA Astrophysics Data System (ADS)

    Suprijanto; Epsilawati, L.; Hajarini, M. S.; Juliastuti, E.; Susanti, H.

    2016-03-01

    Cone beam computerized tomography (CBCT) is one of the X-ray imaging modalities applied in dentistry. This modality can visualize the oral region in 3D and at high resolution. The CBCT jaw image contains potential information for the assessment of bone quality that is often used for pre-operative implant planning. We propose a comparison method based on the normalized histogram (NH) of the region of the inter-dental septum and premolar teeth. The NH characteristics from normal and abnormal bone conditions are then compared and analyzed. Four test parameters are proposed: the difference between the teeth and bone average intensities (s), the ratio between the bone and teeth average intensities (n), the difference between the teeth and bone peak values of the NH (Δp), and the ratio between the teeth and bone NH ranges (r). The results showed that n, s, and Δp have potential to be classification parameters of dental calcium density.
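
    The four test parameters above can be computed from two regions of interest (teeth and bone) and their normalized histograms. The sketch below assumes particular definitions (ROI mean intensities, NH peak heights, and intensity ranges); the paper's exact definitions may differ.

        import numpy as np

        def nh_parameters(teeth_roi, bone_roi, bins=256, value_range=(0, 255)):
            # Normalized histograms (NH) of the two regions of interest
            teeth = np.asarray(teeth_roi, dtype=float).ravel()
            bone = np.asarray(bone_roi, dtype=float).ravel()
            h_teeth, _ = np.histogram(teeth, bins=bins, range=value_range, density=True)
            h_bone, _ = np.histogram(bone, bins=bins, range=value_range, density=True)

            s = teeth.mean() - bone.mean()      # mean-intensity difference
            n = bone.mean() / teeth.mean()      # mean-intensity ratio
            dp = h_teeth.max() - h_bone.max()   # NH peak-value difference
            r = np.ptp(teeth) / np.ptp(bone)    # intensity-range ratio
            return s, n, dp, r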

  19. Teacher Quality in Educational Production. Tracking, Decay, and Student Achievement. NBER Working Paper No. 14442

    ERIC Educational Resources Information Center

    Rothstein, Jesse

    2008-01-01

    Growing concerns over the achievement of U.S. students have led to proposals to reward good teachers and penalize (or fire) bad ones. The leading method for assessing teacher quality is "value added" modeling (VAM), which decomposes students' test scores into components attributed to student heterogeneity and to teacher quality. Implicit in the…

  20. 77 FR 1687 - EPA Workshops on Achieving Water Quality Through Integrated Municipal Stormwater and Wastewater...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-11

    ... AGENCY EPA Workshops on Achieving Water Quality Through Integrated Municipal Stormwater and Wastewater Plans Under the Clean Water Act (CWA) AGENCY: Environmental Protection Agency (EPA). ACTION: Notice... water quality objectives of the CWA. The workshops are intended to assist EPA in developing...

  1. Academic Achievement, School Quality and Family Background: Study in Seven Latin American Countries.

    ERIC Educational Resources Information Center

    Sanguinetty, Jorge A.

    Educational production can be studied by correlating levels of academic achievement with three independent variables: student's family background, student's mental ability, and school quality. To examine family background and school quality, information was gathered from schools in Argentina, Bolivia, Brazil, Colombia, Mexico, Paraguay, and Peru.…

  2. Effects of Secondary School Students' Perceptions of Mathematics Education Quality on Mathematics Anxiety and Achievement

    ERIC Educational Resources Information Center

    Çiftçi, S. Koza

    2015-01-01

    The two aims of this study are as follows: (1) to compare the differences in mathematics anxiety and achievement in secondary school students according to their perceptions of the quality of their mathematics education via a cluster analysis and (2) to test the effects of the perception of mathematics education quality on anxiety and achievement…

  3. Digital mammography--DQE versus optimized image quality in clinical environment: an on site study

    NASA Astrophysics Data System (ADS)

    Oberhofer, Nadia; Fracchetti, Alessandro; Springeth, Margareth; Moroder, Ehrenfried

    2010-04-01

    The intrinsic quality of the detection systems of 7 different digital mammography units (5 direct radiography, DR; 2 computed radiography, CR), expressed by the DQE, has been compared with their image quality/dose performance in clinical use. DQE measurements followed IEC 62220-1-2, using a tungsten test object for MTF determination. For image quality assessment two different methods were applied: 1) measurement of the contrast-to-noise ratio (CNR) according to the European guidelines and 2) contrast-detail (CD) evaluation. The latter was carried out with the phantom CDMAM ver. 3.4 and the commercial software CDMAM Analyser ver. 1.1 (both Artinis) for automated image analysis. The overall image quality index IQFinv proposed by the software has been validated. Correspondence between the two methods was shown by establishing a linear correlation between CNR and IQFinv. All systems were optimized with respect to image quality and average glandular dose (AGD) within the constraints of automatic exposure control (AEC). For each unit, a good image quality level was defined by means of CD analysis, and the corresponding CNR value was considered the target value. The goal was to achieve, for different PMMA phantom thicknesses, constant image quality (i.e. the target CNR value) at minimum dose. All DR systems exhibited higher DQE and significantly better image quality than the CR systems. In general, switching, where available, to a target/filter combination with an x-ray spectrum of higher mean energy permitted dose savings at equal image quality. However, several systems did not allow the AEC to be modified in order to apply the optimal radiographic technique in clinical use. The best image quality/dose ratio was achieved by a unit with an a-Se detector and W anode that had only recently become available on the market.
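
    As a point of reference, the CNR used in this kind of mammography quality control is typically computed from a signal ROI (e.g. over an aluminium contrast object on PMMA) and a nearby background ROI. The pooled-standard-deviation form below is a common convention assumed here for illustration, not a value or formula taken from the paper.

        import numpy as np

        def cnr(signal_roi, background_roi):
            # Contrast-to-noise ratio: difference of ROI means divided by the
            # pooled standard deviation of the two ROIs.
            s = np.asarray(signal_roi, dtype=float)
            b = np.asarray(background_roi, dtype=float)
            pooled_sd = np.sqrt((s.std(ddof=1) ** 2 + b.std(ddof=1) ** 2) / 2.0)
            return (s.mean() - b.mean()) / pooled_sd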

  4. Image quality characteristics of handheld display devices for medical imaging.

    PubMed

    Yamazaki, Asumi; Liu, Peter; Cheng, Wei-Chung; Badano, Aldo

    2013-01-01

    Handheld devices such as mobile phones and tablet computers have become widespread with thousands of available software applications. Recently, handhelds are being proposed as part of medical imaging solutions, especially in emergency medicine, where immediate consultation is required. However, handheld devices differ significantly from medical workstation displays in terms of display characteristics. Moreover, the characteristics vary significantly among device types. We investigate the image quality characteristics of various handheld devices with respect to luminance response, spatial resolution, spatial noise, and reflectance. We show that the luminance characteristics of the handheld displays are different from those of workstation displays complying with grayscale standard target response suggesting that luminance calibration might be needed. Our results also demonstrate that the spatial characteristics of handhelds can surpass those of medical workstation displays particularly for recent generation devices. While a 5 mega-pixel monochrome workstation display has horizontal and vertical modulation transfer factors of 0.52 and 0.47 at the Nyquist frequency, the handheld displays released after 2011 can have values higher than 0.63 at the respective Nyquist frequencies. The noise power spectra for workstation displays are higher than 1.2 × 10⁻⁵ mm² at 1 mm⁻¹, while handheld displays have values lower than 3.7 × 10⁻⁶ mm². Reflectance measurements on some of the handheld displays are consistent with measurements for workstation displays with, in some cases, low specular and diffuse reflectance coefficients. The variability of the characterization results among devices due to the different technological features indicates that image quality varies greatly among handheld display devices. PMID:24236113

  5. Image Quality Characteristics of Handheld Display Devices for Medical Imaging

    PubMed Central

    Yamazaki, Asumi; Liu, Peter; Cheng, Wei-Chung; Badano, Aldo

    2013-01-01

    Handheld devices such as mobile phones and tablet computers have become widespread with thousands of available software applications. Recently, handhelds are being proposed as part of medical imaging solutions, especially in emergency medicine, where immediate consultation is required. However, handheld devices differ significantly from medical workstation displays in terms of display characteristics. Moreover, the characteristics vary significantly among device types. We investigate the image quality characteristics of various handheld devices with respect to luminance response, spatial resolution, spatial noise, and reflectance. We show that the luminance characteristics of the handheld displays are different from those of workstation displays complying with grayscale standard target response suggesting that luminance calibration might be needed. Our results also demonstrate that the spatial characteristics of handhelds can surpass those of medical workstation displays particularly for recent generation devices. While a 5 mega-pixel monochrome workstation display has horizontal and vertical modulation transfer factors of 0.52 and 0.47 at the Nyquist frequency, the handheld displays released after 2011 can have values higher than 0.63 at the respective Nyquist frequencies. The noise power spectra for workstation displays are higher than 1.2×10−5 mm2 at 1 mm−1, while handheld displays have values lower than 3.7×10−6 mm2. Reflectance measurements on some of the handheld displays are consistent with measurements for workstation displays with, in some cases, low specular and diffuse reflectance coefficients. The variability of the characterization results among devices due to the different technological features indicates that image quality varies greatly among handheld display devices. PMID:24236113

  6. The influence of statistical variations on image quality

    NASA Astrophysics Data System (ADS)

    Hultgren, Bror; Hertel, Dirk; Bullitt, Julian

    2006-01-01

    For more than thirty years imaging scientists have constructed metrics to predict psychovisually perceived image quality. Such metrics are based on a set of objectively measurable basis functions such as Noise Power Spectrum (NPS), Modulation Transfer Function (MTF), and characteristic curves of tone and color reproduction. Although these basis functions constitute a set of primitives that fully describe an imaging system from the standpoint of information theory, we found that in practical imaging systems the basis functions themselves are determined by system-specific primitives, i.e. technology parameters. In the example of a printer, MTF and NPS are largely determined by dot structure. In addition MTF is determined by color registration, and NPS by streaking and banding. Since any given imaging system is only a single representation of a class of more or less identical systems, the family of imaging systems and the single system are not described by a unique set of image primitives. For an image produced by a given imaging system, the set of image primitives describing that particular image will be a singular instantiation of the underlying statistical distribution of that primitive. If we know precisely the set of imaging primitives that describe the given image we should be able to predict its image quality. Since only the distributions are known, we can only predict the distribution in image quality for a given image as produced by the larger class of 'identical systems'. We will demonstrate the combinatorial effect of the underlying statistical variations in the image primitives on the objectively measured image quality of a population of printers as well as on the perceived image quality of a set of test images. We also will discuss the choice of test image sets and impact of scene content on the distribution of perceived image quality.

  7. SU-E-I-43: Pediatric CT Dose and Image Quality Optimization

    SciTech Connect

    Stevens, G; Singh, R

    2014-06-01

    Purpose: To design an approach to optimize radiation dose and image quality for pediatric CT imaging, and to evaluate expected performance. Methods: A methodology was designed to quantify relative image quality as a function of CT image acquisition parameters. Image contrast and image noise were used to indicate expected conspicuity of objects, and a wide-cone system was used to minimize scan time for motion avoidance. A decision framework was designed to select acquisition parameters as a weighted combination of image quality and dose. Phantom tests were used to acquire images at multiple techniques to demonstrate expected contrast, noise and dose. Anthropomorphic phantoms with contrast inserts were imaged on a 160 mm CT system with tube voltage capabilities as low as 70 kVp. Previously acquired clinical images were used in conjunction with simulation tools to emulate images at different tube voltages and currents to assess human observer preferences. Results: Examination of image contrast, noise, dose and tube/generator capabilities indicates a clinical task and object-size dependent optimization. Phantom experiments confirm that system modeling can be used to achieve the desired image quality and noise performance. Observer studies indicate that clinical utilization of this optimization requires a modified approach to achieve the desired performance. Conclusion: This work indicates the potential to optimize radiation dose and image quality for pediatric CT imaging. In addition, the methodology can be used in an automated parameter selection feature that can suggest techniques given a limited number of user inputs. G Stevens and R Singh are employees of GE Healthcare.

  8. Improving the Blanco Telescope's delivered image quality

    NASA Astrophysics Data System (ADS)

    Abbott, Timothy M. C.; Montane, Andrés; Tighe, Roberto; Walker, Alistair R.; Gregory, Brooke; Smith, R. Christopher; Cisternas, Alfonso

    2010-07-01

    The V. M. Blanco 4-m telescope at Cerro Tololo Inter-American Observatory is undergoing a number of improvements in preparation for the delivery of the Dark Energy Camera. The program includes upgrades having potential to deliver gains in image quality and stability. To this end, we have renovated the support structure of the primary mirror, incorporating innovations to improve both the radial support performance and the registration of the mirror and telescope top end. The resulting opto-mechanical condition of the telescope is described. We also describe some improvements to the environmental control. Upgrades to the telescope control system and measurements of the dome environment are described in separate papers in this conference.

  9. Depressive symptoms in third-grade teachers: relations to classroom quality and student achievement.

    PubMed

    McLean, Leigh; McDonald Connor, Carol

    2015-01-01

    This study investigated associations among third-grade teachers' (N = 27) symptoms of depression, the quality of the classroom-learning environment (CLE), and students' (N = 523, mean age = 8.6 years) math and literacy performance. Teachers' depressive symptoms in the winter negatively predicted students' spring mathematics achievement. This depended on students' fall mathematics scores; students who began the year with weaker math skills and were in classrooms where teachers reported more depressive symptoms achieved smaller gains than did peers whose teachers reported fewer symptoms. Teachers' depressive symptoms were negatively associated with the quality of the CLE, and the quality of the CLE mediated the association between depressive symptoms and student achievement. The findings point to the importance of teachers' mental health, with implications for policy and practice.

  10. Lesion insertion in projection domain for computed tomography image quality assessment

    NASA Astrophysics Data System (ADS)

    Chen, Baiyu; Ma, Chi; Yu, Zhicong; Leng, Shuai; Yu, Lifeng; McCollough, Cynthia

    2015-03-01

    To perform task-based image quality assessment in CT, it is desirable to have a large number of realistic patient images with known diagnostic truth. One effective way to achieve this objective is to create hybrid images that combine patient images with simulated lesions. Because conventional hybrid images generated in the image domain fail to reflect the impact of scan and reconstruction parameters on lesion appearance, this study explored a projection-domain approach. Liver lesion models were forward projected according to the geometry of a commercial CT scanner to acquire lesion projections. The lesion projections were then inserted into patient projections (decoded from commercial CT raw data with the assistance of the vendor) and reconstructed to acquire hybrid images. To validate the accuracy of the forward projection geometry, simulated images reconstructed from the forward projections of a digital ACR phantom were compared to physically acquired ACR phantom images. To validate the hybrid images, lesion models were inserted into patient images and visually assessed. Results showed that the simulated phantom images and the physically acquired phantom images had great similarity in terms of HU accuracy and high-contrast resolution. The lesions in the hybrid images had a realistic appearance and merged naturally into the liver background. In addition, the inserted lesions demonstrated reconstruction-parameter-dependent appearance. Compared to the conventional image-domain approach, our method enables more realistic hybrid images for image quality assessment.
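
    The projection-domain idea can be illustrated with a toy parallel-beam example: forward-project a lesion model, add its sinogram to the patient sinogram, and reconstruct the sum. The sketch below uses scikit-image's radon/iradon on synthetic arrays; the actual method operates on vendor fan/cone-beam raw data with the scanner's true geometry, which is not reproduced here.

        import numpy as np
        from skimage.transform import radon, iradon

        theta = np.linspace(0.0, 180.0, 180, endpoint=False)

        patient = np.zeros((256, 256))               # stand-in for a patient slice
        patient[64:192, 64:192] = 1.0                # uniform "liver" background

        lesion = np.zeros_like(patient)              # small low-contrast lesion model
        yy, xx = np.mgrid[:256, :256]
        lesion[(yy - 128) ** 2 + (xx - 128) ** 2 < 8 ** 2] = 0.3

        patient_sino = radon(patient, theta=theta)   # stands in for decoded raw data
        lesion_sino = radon(lesion, theta=theta)     # forward-projected lesion
        hybrid = iradon(patient_sino + lesion_sino, theta=theta)  # hybrid image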

  11. Using short-wave infrared imaging for fruit quality evaluation

    NASA Astrophysics Data System (ADS)

    Zhang, Dong; Lee, Dah-Jye; Desai, Alok

    2013-12-01

    Quality evaluation of agricultural and food products is important for processing, inventory control, and marketing. Fruit size and surface quality are two important quality factors for high-quality fruit such as Medjool dates. Fruit size is usually measured by length, which can be done easily with simple image processing techniques. Surface quality evaluation, on the other hand, requires a more complicated design, both in image acquisition and in image processing. Skin delamination is considered a major factor that affects fruit quality and its value. This paper presents an efficient histogram analysis and image processing technique that is designed specifically for real-time surface quality evaluation of Medjool dates. This approach, based on short-wave infrared imaging, provides excellent image contrast between the fruit surface and delaminated skin, which allows significant simplification of the image processing algorithm and a reduction of computational power requirements. The proposed quality grading method requires a very simple training procedure to obtain a gray-scale image histogram for each quality level. Using histogram comparison, each date is assigned to one of the four quality levels and an optimal threshold is calculated for segmenting skin delamination areas from the fruit surface. The percentage of the fruit surface that has skin delamination can then be calculated for quality evaluation. This method has been implemented and used for commercial production and has proven to be efficient and accurate.
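
    A rough sketch of the two steps described above: assign a date to a quality level by comparing its histogram against per-level training histograms, then measure the delaminated fraction with a threshold. Histogram intersection as the comparison measure and the direction of the threshold are assumptions made for illustration, not details from the paper.

        import numpy as np

        def grade_date(image, reference_histograms, bins=256):
            # reference_histograms: {quality_level: normalized 256-bin histogram}
            h, _ = np.histogram(np.ravel(image), bins=bins, range=(0, 255), density=True)
            scores = {level: np.minimum(h, ref).sum()    # histogram intersection
                      for level, ref in reference_histograms.items()}
            return max(scores, key=scores.get)           # best-matching quality level

        def delamination_fraction(image, threshold):
            # Fraction of pixels on the delaminated-skin side of the threshold
            # (which side corresponds to delamination depends on the setup).
            return float((np.asarray(image, dtype=float) > threshold).mean())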

  12. Food quality assessment by NIR hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Whitworth, Martin B.; Millar, Samuel J.; Chau, Astor

    2010-04-01

    Near infrared reflectance (NIR) spectroscopy is well established in the food industry for rapid compositional analysis of bulk samples. NIR hyperspectral imaging provides new opportunities to measure the spatial distribution of components such as moisture and fat, and to identify and measure specific regions of composite samples. An NIR hyperspectral imaging system has been constructed for food research applications, incorporating a SWIR camera with a cooled 14-bit HgCdTe detector and N25E spectrograph (Specim Ltd, Finland). Samples are scanned in a pushbroom mode using a motorised stage. The system has a spectral resolution of 256 pixels covering a range of 970-2500 nm and a spatial resolution of 320 pixels covering a swathe adjustable from 8 to 300 mm. Images are acquired at a rate of up to 100 lines s⁻¹, enabling samples to be scanned within a few seconds. Data are captured using SpectralCube software (Specim) and analysed using ENVI and IDL (ITT Visual Information Solutions). Several food applications are presented. The strength of individual absorbance bands enables the distribution of particular components to be assessed. Examples are shown for detection of added gluten in wheat flour and to study the effect of processing conditions on fat distribution in chips/French fries. More detailed quantitative calibrations have been developed to study evolution of the moisture distribution in baguettes during storage at different humidities, to assess freshness of fish using measurements of whole cod and fillets, and for prediction of beef quality by identification and separate measurement of lean and fat regions.

  13. Metric-based no-reference quality assessment of heterogeneous document images

    NASA Astrophysics Data System (ADS)

    Nayef, Nibal; Ogier, Jean-Marc

    2015-01-01

    No-reference image quality assessment (NR-IQA) aims at computing an image quality score that best correlates with either human perceived image quality or an objective quality measure, without any prior knowledge of reference images. Although learning-based NR-IQA methods have achieved the best state-of-the-art results so far, those methods perform well only on the datasets on which they were trained. The datasets usually contain homogeneous documents, whereas in reality, document images come from different sources. It is unrealistic to collect training samples of images from every possible capturing device and every document type. Hence, we argue that a metric-based IQA method is more suitable for heterogeneous documents. We propose a NR-IQA method with the objective quality measure of OCR accuracy. The method combines distortion-specific quality metrics. The final quality score is calculated taking into account the proportions of, and the dependency among different distortions. Experimental results show that the method achieves competitive results with learning-based NR-IQA methods on standard datasets, and performs better on heterogeneous documents.

  14. Wavelet image processing applied to optical and digital holography: past achievements and future challenges

    NASA Astrophysics Data System (ADS)

    Jones, Katharine J.

    2005-08-01

    The link between wavelets and optics goes back to the work of Dennis Gabor, who both invented holography and developed Gabor decompositions. Holography involves 3-D images; Gabor decompositions involve 1-D signals. Gabor decompositions are the predecessors of wavelets. Wavelet image processing of holography, both optical and digital, is examined with respect to past achievements and future challenges.

  15. The Relationship between University Students' Academic Achievement and Perceived Organizational Image

    ERIC Educational Resources Information Center

    Polat, Soner

    2011-01-01

    The purpose of present study was to determine the relationship between university students' academic achievement and perceived organizational image. The sample of the study was the senior students at the faculties and vocational schools in Umuttepe Campus at Kocaeli University. Because the development of organizational image is a long process, the…

  16. The physical and psychological factors governing sound-image quality

    NASA Astrophysics Data System (ADS)

    Kurozumi, K.; Ohgushi, K.

    1984-03-01

    One of the most important psychological impressions produced by a conventional two-loudspeaker reproduction system is the localization of the sound image in the horizontal plane. The sound image is localized to some degree by varying the level and time differences between the two acoustic signals. Even when the sound image is localized in the same direction, different impressions - for example, a feeling of the width of the sound image - are sometimes produced. These different impressions are described by the phrase sound-image quality. The purpose of this work is to find the psychological and physical factors governing sound-image quality. To begin with, the effect on sound-image quality of varying the cross-correlation coefficient for white noise is investigated; a number of studies were performed in which the relationship between the cross-correlation coefficient and sound-image quality was examined.

  17. Reduced-reference image quality assessment using moment method

    NASA Astrophysics Data System (ADS)

    Yang, Diwei; Shen, Yuantong; Shen, Yongluo; Li, Hongwei

    2016-10-01

    Reduced-reference image quality assessment (RR IQA) aims to evaluate the perceptual quality of a distorted image using only partial information about the corresponding reference image. In this paper, a novel RR IQA metric is proposed using the moment method. We claim that the first and second moments of the wavelet coefficients of natural images exhibit an approximately regular behaviour that is disturbed by different types of distortion, and that this disturbance is relevant to human perceptions of quality. We measure the difference in these statistical parameters between the reference and distorted images to predict the visual quality degradation. The introduced IQA metric is simple to implement and has relatively low computational complexity. Experimental results on the Laboratory for Image and Video Engineering (LIVE) and Tampere Image Database (TID) image databases indicate that the proposed metric has good predictive performance.
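
    A minimal sketch of a moment-based reduced-reference comparison, assuming PyWavelets for the decomposition: the reference side only needs to transmit the per-level means and variances of the detail coefficients, and the distance between the two feature vectors serves as an illustrative (not the paper's exact) degradation score.

        import numpy as np
        import pywt

        def wavelet_moments(image, wavelet='db2', level=3):
            # First and second moments of the detail coefficients at each level;
            # these few numbers are the reduced-reference features.
            coeffs = pywt.wavedec2(np.asarray(image, dtype=float), wavelet, level=level)
            feats = []
            for detail_bands in coeffs[1:]:          # skip the approximation band
                band = np.concatenate([d.ravel() for d in detail_bands])
                feats.extend([band.mean(), band.var()])
            return np.array(feats)

        def rr_quality_score(reference_image, distorted_image):
            # Smaller distance between moment vectors -> less predicted degradation.
            return np.linalg.norm(wavelet_moments(reference_image)
                                  - wavelet_moments(distorted_image))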

  18. Mutual information as a measure of image quality for 3D dynamic lung imaging with EIT

    PubMed Central

    Crabb, M G; Davidson, J L; Little, R; Wright, P; Morgan, A R; Miller, C A; Naish, J H; Parker, G J M; Kikinis, R; McCann, H; Lionheart, W R B

    2014-01-01

    We report on a pilot study of dynamic lung electrical impedance tomography (EIT) at the University of Manchester. Low-noise EIT data at 100 frames per second (fps) were obtained from healthy male subjects during controlled breathing, followed by magnetic resonance imaging (MRI) subsequently used for spatial validation of the EIT reconstruction. The torso surface in the MR image and electrode positions obtained using MRI fiducial markers informed the construction of a 3D finite element model extruded along the caudal-distal axis of the subject. Small changes in the boundary that occur during respiration were accounted for by incorporating the sensitivity with respect to boundary shape into a robust temporal difference reconstruction algorithm. EIT and MRI images were co-registered using the open source medical imaging software, 3D Slicer. A quantitative comparison of quality of different EIT reconstructions was achieved through calculation of the mutual information with a lung-segmented MR image. EIT reconstructions using a linear shape correction algorithm reduced boundary image artefacts, yielding better contrast of the lungs, and had 10% greater mutual information compared with a standard linear EIT reconstruction. PMID:24710978
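
    Mutual information between a reconstructed EIT frame and a co-registered, lung-segmented MR image can be estimated from their joint intensity histogram. The function below is a generic sketch of that calculation; the bin count and the histogram estimator are arbitrary choices, not taken from the paper.

        import numpy as np

        def mutual_information(img_a, img_b, bins=64):
            # Joint histogram of the two co-registered images -> joint and
            # marginal probabilities -> MI = sum p(x,y) log[p(x,y) / (p(x)p(y))].
            joint, _, _ = np.histogram2d(np.ravel(img_a), np.ravel(img_b), bins=bins)
            pxy = joint / joint.sum()
            px = pxy.sum(axis=1, keepdims=True)
            py = pxy.sum(axis=0, keepdims=True)
            nonzero = pxy > 0                        # avoid log(0)
            return float(np.sum(pxy[nonzero] * np.log(pxy[nonzero] / (px @ py)[nonzero])))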

  19. Mister Sandman, bring me good marks! On the relationship between sleep quality and academic achievement.

    PubMed

    Baert, Stijn; Omey, Eddy; Verhaest, Dieter; Vermeir, Aurélie

    2015-04-01

    There is growing evidence that health factors affect tertiary education success in a causal way. This study assesses the effect of sleep quality on academic achievement at university. To this end, we surveyed 804 students about their sleep quality by means of the Pittsburgh Sleep Quality Index (PSQI) before the start of their first exam period, in December 2013, at Ghent University. PSQI scores were merged with course marks from this exam period. Instrumenting PSQI scores by sleep quality during secondary education, we find that increasing total sleep quality by one standard deviation leads to 4.85 percentage point higher course marks. Based on this finding, we suggest that higher education providers might be incentivised to invest part of their resources for social facilities in professional support for students with sleep and other health problems.

  20. Friendship Quality and School Achievement: A Longitudinal Analysis during Primary School

    ERIC Educational Resources Information Center

    Zucchetti, Giulia; Candela, Filippo; Sacconi, Beatrice; Rabaglietti, Emanuela

    2015-01-01

    This study examined the longitudinal relationship between friendship quality (positive and negative) and school achievement among 228 school-age children (51% girls, M = 8.09, SD = 0.41). A three-wave cross-lagged analysis was used to determine the direction of influence between these domains across school years. Findings revealed that: (a) school…

  1. The Effect of the Adoption of the Quality Philosophy by Teachers on Student Achievement

    ERIC Educational Resources Information Center

    Sandifer, Cody Clark

    2009-01-01

    The purpose of this study was to determine if the adoption of the Deming philosophy by teachers and use of the LtoJ[R] process resulted in greater academic achievement. Results of internal consistency analysis indicated that the instrument, the "Commitment to Quality Inventory for Educators," was a reliable measure of the Deming philosophy for…

  2. Classroom Instructional Quality, Exposure to Mathematics Instruction and Mathematics Achievement in Fifth Grade

    ERIC Educational Resources Information Center

    Ottmar, Erin R.; Decker, Lauren E.; Cameron, Claire E.; Curby, Timothy W.; Rimm-Kaufman, Sara E.

    2014-01-01

    This study examined the quality of teacher-child interactions and exposure to mathematics instruction as predictors of 5th-grade students' mathematics achievement. The sample was a subset of the children involved in the NICHD-SECC longitudinal study (N = 657). Results indicate that, even after controlling for student demographic…

  3. The Effects of Two Intervention Programs on Teaching Quality and Student Achievement

    ERIC Educational Resources Information Center

    Azkiyah, S. N.; Doolaard, Simone; Creemers, Bert P. M.; Van Der Werf, M. P. C.

    2014-01-01

    This paper compares the effectiveness of two interventions aimed to improve teaching quality and student achievement in Indonesia. The first intervention was the use of education standards, while the second one was the combination of education standards with a teacher improvement program. The study involved 50 schools, 52 teachers, and 1660…

  4. Transactional Relationships between Latinos' Friendship Quality and Academic Achievement during the Transition to Middle School

    ERIC Educational Resources Information Center

    Sebanc, Anne M.; Guimond, Amy B.; Lutgen, Jeff

    2016-01-01

    This study investigates whether friendship quality, academic achievement, and mastery goal orientation predict each other across the transition to middle school. Participants were 146 Latino students (75 girls) followed from the end of elementary school through the first year of middle school. Measures included positive and negative friendship…

  5. Social Capital, Human Capital and Parent-Child Relation Quality: Interacting for Children's Educational Achievement?

    ERIC Educational Resources Information Center

    von Otter, Cecilia; Stenberg, Sten-Åke

    2015-01-01

    We analyse the utility of social capital for children's achievement, and if this utility interacts with family human capital and the quality of the parent-child relationship. Our focus is on parental activities directly related to children's school work. Our data stem from a Swedish cohort born in 1953 and consist of both survey and register data.…

  6. Mathematics Teacher Quality: Its Distribution and Relationship with Student Achievement in Turkey

    ERIC Educational Resources Information Center

    Özel, Zeynep Ebrar Yetkiner; Özel, Serkan

    2013-01-01

    A main purpose of the present study was to investigate the distribution of qualified mathematics teachers in relation to students' socioeconomic status (SES), as measured by parental education, among Turkish middle schools. Further, relationships between mathematics teacher quality indicators and students' mathematics achievement were…

  7. The Relation among School District Health, Total Quality Principles for School Organization and Student Achievement

    ERIC Educational Resources Information Center

    Marshall, Jon; Pritchard, Ruie; Gunderson, Betsey

    2004-01-01

    The purpose of this study was to determine the congruence among W. E. Deming's 14 points for Total Quality Management (TQM), the organizational health of school districts, and student achievement. Based on Kanter's (1983) concept of a Culture of Pride with a Climate of Success, healthy districts were defined as having an organizational culture…

  8. Research iris serial images quality assessment method based on HVS

    NASA Astrophysics Data System (ADS)

    Li, Zhi-hui; Zhang, Chang-hai; Ming, Xing; Zhao, Yong-hua

    2006-01-01

    Iris recognition can be widely used in security and customs applications, and it provides greater security than recognition based on other human features such as fingerprints or faces. Iris image quality is crucial to recognition performance, so reliable image quality assessment is necessary for evaluating iris images. However, there is no uniform criterion for image quality assessment. Image quality can be evaluated by objective or subjective methods; in practice, subjective evaluation is laborious and is not effective for iris recognition, so an objective method should be used. Exploiting the multi-scale and selectivity characteristics of the human visual system (HVS) model, this paper presents a new iris image quality assessment method. A region of interest (ROI) is located, wavelet-transform zero-crossings are used to find multi-scale edges, and a multi-scale fusion measure is used to assess iris image quality. In the experiments, both objective and subjective evaluation methods were used to assess iris images. The results show that the method is effective for iris image quality assessment.

  9. Improving quality and reducing inequities: a challenge in achieving best care

    PubMed Central

    Nicewander, David A.; Qin, Huanying; Ballard, David J.

    2006-01-01

    The health care quality chasm is better described as a gulf for certain segments of the population, such as racial and ethnic minority groups, given the gap between actual care received and ideal or best care quality. The landmark Institute of Medicine report Crossing the Quality Chasm: A New Health System for the 21st Century challenges all health care organizations to pursue six major aims of health care improvement: safety, timeliness, effectiveness, efficiency, equity, and patient-centeredness. “Equity” aims to ensure that quality care is available to all and that the quality of care provided does not differ by race, ethnicity, or other personal characteristics unrelated to a patient's reason for seeking care. Baylor Health Care System is in the unique position of being able to examine the current state of equity in a typical health care delivery system and to lead the way in health equity research. Its organizational vision, “culture of quality,” and involved leadership bode well for achieving equitable best care. However, inequities in access, use, and outcomes of health care must be scrutinized; the moral, ethical, and economic issues they raise and the critical injustice they create must be remedied if this goal is to be achieved. Eliminating any observed inequities in health care must be synergistically integrated with quality improvement. Quality performance indicators currently collected and evaluated indicate that Baylor Health Care System often performs better than the national average. However, there are significant variations in care by age, gender, race/ethnicity, and socioeconomic status that indicate the many remaining challenges in achieving “best care” for all. PMID:16609733

  10. Limitations to adaptive optics image quality in rodent eyes.

    PubMed

    Zhou, Xiaolin; Bedggood, Phillip; Metha, Andrew

    2012-08-01

    Adaptive optics (AO) retinal image quality of rodent eyes is inferior to that of human eyes, despite the promise of greater numerical aperture. This paradox challenges several assumptions commonly made in AO imaging, assumptions which may be invalidated by the very high power and dioptric thickness of the rodent retina. We used optical modeling to compare the performance of rat and human eyes under conditions that tested the validity of these assumptions. Results showed that AO image quality in the human eye is robust to positioning errors of the AO corrector and to differences in imaging depth and wavelength compared to the wavefront beacon. In contrast, image quality in the rat eye declines sharply with each of these manipulations, especially when imaging off-axis. However, some latitude does exist to offset these manipulations against each other to produce good image quality.

  11. Review of spectral imaging technology in biomedical engineering: achievements and challenges.

    PubMed

    Li, Qingli; He, Xiaofu; Wang, Yiting; Liu, Hongying; Xu, Dongrong; Guo, Fangmin

    2013-10-01

    Spectral imaging is a technology that integrates conventional imaging and spectroscopy to get both spatial and spectral information from an object. Although this technology was originally developed for remote sensing, it has been extended to the biomedical engineering field as a powerful analytical tool for biological and biomedical research. This review introduces the basics of spectral imaging, imaging methods, current equipment, and recent advances in biomedical applications. The performance and analytical capabilities of spectral imaging systems for biological and biomedical imaging are discussed. In particular, the current achievements and limitations of this technology in biomedical engineering are presented. The benefits and development trends of biomedical spectral imaging are highlighted to provide the reader with an insight into the current technological advances and its potential for biomedical research.

  12. Objective assessment of image quality and dose reduction in CT iterative reconstruction

    SciTech Connect

    Vaishnav, J. Y. Jung, W. C.; Popescu, L. M.; Zeng, R.; Myers, K. J.

    2014-07-15

    Purpose: Iterative reconstruction (IR) algorithms have the potential to reduce radiation dose in CT diagnostic imaging. As these algorithms become available on the market, a standardizable method of quantifying the dose reduction that a particular IR method can achieve would be valuable. Such a method would assist manufacturers in making promotional claims about dose reduction, buyers in comparing different devices, physicists in independently validating the claims, and the United States Food and Drug Administration in regulating the labeling of CT devices. However, the nonlinear nature of commercially available IR algorithms poses challenges to objectively assessing image quality, a necessary step in establishing the amount of dose reduction that a given IR algorithm can achieve without compromising that image quality. This review paper seeks to consolidate information relevant to objectively assessing the quality of CT IR images, and thereby measuring the level of dose reduction that a given IR algorithm can achieve. Methods: The authors discuss task-based methods for assessing the quality of CT IR images and evaluating dose reduction. Results: The authors explain and review recent literature on signal detection and localization tasks in CT IR image quality assessment, the design of an appropriate phantom for these tasks, possible choices of observers (including human and model observers), and methods of evaluating observer performance. Conclusions: Standardizing the measurement of dose reduction is a problem of broad interest to the CT community and to public health. A necessary step in the process is the objective assessment of CT image quality, for which various task-based methods may be suitable. This paper attempts to consolidate recent literature that is relevant to the development and implementation of task-based methods for the assessment of CT IR image quality.

  13. The study of surgical image quality evaluation system by subjective quality factor method

    NASA Astrophysics Data System (ADS)

    Zhang, Jian J.; Xuan, Jason R.; Yang, Xirong; Yu, Honggang; Koullick, Edouard

    2016-03-01

    The GreenLight™ procedure is an effective and economical way of treating benign prostate hyperplasia (BPH); almost a million patients have been treated with GreenLight™ worldwide. During the surgical procedure, the surgeon or physician relies on the video monitoring system to survey and confirm surgical progress. A few kinds of obstruction can greatly affect the image quality of the monitoring video, such as laser glare from tissue and body fluid, air bubbles and debris generated by tissue evaporation, and bleeding. In order to improve the physician's visual experience of a laser surgical procedure, the system performance parameters related to image quality need to be well defined. However, image quality is the integrated set of perceptions of the overall degree of excellence of an image, or, in other words, the perceptually weighted combination of significant attributes (contrast, graininess, ...) of an image when considered in its marketplace or application; there is therefore no standard definition of overall image or video quality, especially for the no-reference case (without a standard chart as reference). In this study, the Subjective Quality Factor (SQF) and acutance are used for no-reference image quality evaluation. Basic image quality parameters, such as sharpness, color accuracy, size of obstruction and transmission of obstruction, are used as subparameters to define the rating scale for image quality evaluation and comparison. Sample image groups were evaluated by human observers according to the rating scale. Surveys of physician groups were also conducted with laboratory-generated sample videos. The study shows that human subjective perception is a trustworthy way of evaluating image quality. A more systematic investigation of the relationship between video quality and the image quality of each frame will be conducted as a future study.

  14. Dynamic flat panel detector versus image intensifier in cardiac imaging: dose and image quality

    NASA Astrophysics Data System (ADS)

    Vano, E.; Geiger, B.; Schreiner, A.; Back, C.; Beissel, J.

    2005-12-01

    The practical aspects of the dosimetric and imaging performance of a digital x-ray system for cardiology procedures were evaluated. The system was configured with an image intensifier (II) and later upgraded to a dynamic flat panel detector (FD). Entrance surface air kerma (ESAK) to phantoms of 16, 20, 24 and 28 cm of polymethyl methacrylate (PMMA) and the image quality of a test object were measured. Images were evaluated directly on the monitor and with numerical methods (noise and signal-to-noise ratio). Information contained in the DICOM header for dosimetry audit purposes was also tested. ESAK values per frame (or kerma rate) for the most commonly used cine and fluoroscopy modes for different PMMA thicknesses and for field sizes of 17 and 23 cm for II, and 20 and 25 cm for FD, produced similar results in the evaluated system with both technologies, ranging between 19 and 589 µGy/frame (cine) and 5 and 95 mGy min⁻¹ (fluoroscopy). Image quality for these dose settings was better for the FD version. The 'study dosimetric report' is comprehensive, and its numerical content is sufficiently accurate. There is potential in the future to set those systems with dynamic FD to lower doses than are possible in the current II versions, especially for digital cine runs, or to benefit from improved image quality.

  15. Color image quality assessment with biologically inspired feature and machine learning

    NASA Astrophysics Data System (ADS)

    Deng, Cheng; Tao, Dacheng

    2010-07-01

    In this paper, we present a new no-reference quality assessment metric for color images using biologically inspired features (BIFs) and machine learning. In this metric, we first adopt a biologically inspired model to mimic the visual cortex and represent a color image by BIFs that unify color units, intensity units and C1 units. Then, in order to reduce complexity and benefit the classification, the high-dimensional features are projected to a low-dimensional representation with manifold learning. Finally, a multiclass classification process is performed on this new low-dimensional representation of the image, and the quality assessment is based on the learned classification result, in order to agree with that of human observers. Instead of computing a final score, our method classifies the quality according to the quality scale recommended by the ITU. Preliminary results show that the developed metric can achieve good quality-evaluation performance.

  16. Can pictorial images communicate the quality of pain successfully?

    PubMed Central

    Knapp, Peter; Morley, Stephen; Stones, Catherine

    2015-01-01

    Chronic pain is common and difficult for patients to communicate to health professionals. It may include neuropathic elements which require specialised treatment. A little used approach to communicating the quality of pain is through the use of images. This study aimed to test the ability of a set of 12 images depicting different sensory pain qualities to successfully communicate those qualities. Images were presented to 25 student nurses and 38 design students. Students were asked to write down words or phrases describing the quality of pain they felt was being communicated by each image. They were asked to provide as many or as few as occurred to them. The images were extremely heterogeneous in their ability to convey qualities of pain accurately. Only 2 of the 12 images were correctly interpreted by more than 70% of the sample. There was a significant difference between the two student groups, with nurses being significantly better at interpreting the images than the design students. Clearly, attention needs to be given not only to the content of images designed to depict the sensory qualities of pain but also to the differing audiences who may use them. Education, verbal ability, ethnicity and a multiplicity of other factors may influence the understanding and use of such images. Considerable work is needed to develop a set of images which is sufficiently culturally appropriate and effective for general use. PMID:26516574

  17. Investigation into the impact of tone reproduction on the perceived image quality of fine art reproductions

    NASA Astrophysics Data System (ADS)

    Farnand, Susan; Jiang, Jun; Frey, Franziska

    2012-01-01

    A project, supported by the Andrew W. Mellon Foundation, evaluating current practices in fine art image reproduction, determining the image quality generally achievable, and establishing a suggested framework for art image interchange was recently completed. (Information regarding the Mellon project and related work may be found at www.artimaging.rit.edu.) To determine the image quality currently being achieved, experimentation was conducted in which a set of objective targets and pieces of artwork in various media were imaged by participating museums and other cultural heritage institutions. Prints and images for display made from the delivered image files at the Rochester Institute of Technology were used as stimuli in psychometric testing in which observers were asked to evaluate the prints as reproductions of the original artwork and as stand-alone images. The results indicated that there were limited differences between assessments made with and without the original present for printed reproductions. For displayed images, the differences were more significant, with lower contrast images being ranked lower and higher contrast images generally ranked higher when the original was not present. This was true for experiments conducted both in a dimly lit laboratory and via the web, indicating that more than viewing conditions were driving this shift.

  18. Improving the Quality of Imaging in the Emergency Department.

    PubMed

    Blackmore, C Craig; Castro, Alexandra

    2015-12-01

    Imaging is critical for the care of emergency department (ED) patients. However, much of the imaging performed for acute care today is overutilization, creating substantial cost without significant benefit. Further, the value of imaging is not easily defined, as imaging only affects outcomes indirectly, through interaction with treatment. Improving the quality, including appropriateness, of emergency imaging requires understanding of how imaging contributes to patient care. The six-tier efficacy hierarchy of Fryback and Thornbury enables understanding of the value of imaging on multiple levels, ranging from technical efficacy to medical decision-making and higher-level patient and societal outcomes. The imaging efficacy hierarchy also allows definition of imaging quality through the Institute of Medicine (IOM)'s quality domains of safety, effectiveness, patient-centeredness, timeliness, efficiency, and equitability and provides a foundation for quality improvement. In this article, the authors elucidate the Fryback and Thornbury framework to define the value of imaging in the ED and to relate emergency imaging to the IOM quality domains.

  19. Quaternion structural similarity: a new quality index for color images.

    PubMed

    Kolaman, Amir; Yadid-Pecht, Orly

    2012-04-01

    One of the most important issues for researchers developing image processing algorithms is image quality. Methodical quality evaluation, by showing images to several human observers, is slow, expensive, and highly subjective. On the other hand, a visual quality metric (VQM) is a fast, cheap, and objective tool for evaluating image quality. Although most VQMs are good in predicting the quality of an image degraded by a single degradation, they perform poorly for a combination of two degradations. An example of such a degradation is the color crosstalk (CTK) effect, which introduces blur with desaturation. CTK is expected to become a bigger issue in image quality as the industry moves toward smaller sensors. In this paper, we will develop a VQM that will be able to better evaluate the quality of an image degraded by a combined blur/desaturation degradation and perform as well as other VQMs on single degradations such as blur, compression, and noise. We show why standard scalar techniques are insufficient to measure a combined blur/desaturation degradation and explain why a vectorial approach is better suited. We introduce quaternion image processing (QIP), which is a true vectorial approach and has many uses in the fields of physics and engineering. Our new VQM is a vectorial expansion of structure similarity using QIP, which gave it its name: Quaternion Structural SIMilarity (QSSIM). We built a new database of a combined blur/desaturation degradation and conducted a quality survey with human subjects. An extensive comparison between QSSIM and other VQMs on several image quality databases, including our new database, shows the superiority of this new approach in predicting visual quality of color images.

  20. Effect of image quality on calcification detection in digital mammography

    PubMed Central

    Warren, Lucy M.; Mackenzie, Alistair; Cooke, Julie; Given-Wilson, Rosalind M.; Wallis, Matthew G.; Chakraborty, Dev P.; Dance, David R.; Bosmans, Hilde; Young, Kenneth C.

    2012-01-01

    Purpose: This study aims to investigate if microcalcification detection varies significantly when mammographic images are acquired using different image qualities, including different detectors, dose levels, and different image processing algorithms. An additional aim was to determine how the standard European method of measuring image quality using threshold gold thickness measured with a CDMAM phantom and the associated limits in current EU guidelines relate to calcification detection. Methods: One hundred and sixty-two normal breast images were acquired on an amorphous selenium direct digital (DR) system. Microcalcification clusters extracted from magnified images of slices of mastectomies were electronically inserted into half of the images. The calcification clusters had a subtle appearance. All images were adjusted using a validated mathematical method to simulate the appearance of images from a computed radiography (CR) imaging system at the same dose, from both systems at half this dose, and from the DR system at a quarter of this dose. The original 162 images were processed with both Hologic and Agfa (Musica-2) image processing. All other image qualities were processed with Agfa (Musica-2) image processing only. Seven experienced observers marked and rated any identified suspicious regions. Free response operating characteristic (FROC) and ROC analyses were performed on the data. The lesion sensitivity at a nonlesion localization fraction (NLF) of 0.1 was also calculated. Images of the CDMAM mammographic test phantom were acquired using the automatic setting on the DR system. These images were modified to the additional image qualities used in the observer study. The images were analyzed using automated software. In order to assess the relationship between threshold gold thickness and calcification detection a power law was fitted to the data. Results: There was a significant reduction in calcification detection using CR compared with DR: the alternative FROC
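
    A small sketch of the power-law fit mentioned in the Methods, relating CDMAM threshold gold thickness to a calcification-detection figure of merit; the paired values below are purely illustrative, not data from the study.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Hypothetical paired measurements for several image qualities: threshold gold
    # thickness (um) from CDMAM analysis and a detection figure of merit
    # (e.g., lesion sensitivity at NLF = 0.1).
    gold_thickness = np.array([0.95, 1.10, 1.35, 1.60])
    detection_fom = np.array([0.78, 0.72, 0.63, 0.55])

    def power_law(t, a, b):
        return a * np.power(t, b)

    (a, b), _ = curve_fit(power_law, gold_thickness, detection_fom, p0=(1.0, -1.0))
    print(f"fit: FOM = {a:.3f} * t^{b:.3f}")
    ```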

  1. Effect of image quality on calcification detection in digital mammography

    SciTech Connect

    Warren, Lucy M.; Mackenzie, Alistair; Cooke, Julie; Given-Wilson, Rosalind M.; Wallis, Matthew G.; Chakraborty, Dev P.; Dance, David R.; Bosmans, Hilde; Young, Kenneth C.

    2012-06-15

    Purpose: This study aims to investigate if microcalcification detection varies significantly when mammographic images are acquired using different image qualities, including different detectors, dose levels, and different image processing algorithms. An additional aim was to determine how the standard European method of measuring image quality using threshold gold thickness measured with a CDMAM phantom and the associated limits in current EU guidelines relate to calcification detection. Methods: One hundred and sixty-two normal breast images were acquired on an amorphous selenium direct digital (DR) system. Microcalcification clusters extracted from magnified images of slices of mastectomies were electronically inserted into half of the images. The calcification clusters had a subtle appearance. All images were adjusted using a validated mathematical method to simulate the appearance of images from a computed radiography (CR) imaging system at the same dose, from both systems at half this dose, and from the DR system at a quarter of this dose. The original 162 images were processed with both Hologic and Agfa (Musica-2) image processing. All other image qualities were processed with Agfa (Musica-2) image processing only. Seven experienced observers marked and rated any identified suspicious regions. Free response operating characteristic (FROC) and ROC analyses were performed on the data. The lesion sensitivity at a nonlesion localization fraction (NLF) of 0.1 was also calculated. Images of the CDMAM mammographic test phantom were acquired using the automatic setting on the DR system. These images were modified to the additional image qualities used in the observer study. The images were analyzed using automated software. In order to assess the relationship between threshold gold thickness and calcification detection a power law was fitted to the data. Results: There was a significant reduction in calcification detection using CR compared with DR: the alternative FROC

  2. A new assessment method for image fusion quality

    NASA Astrophysics Data System (ADS)

    Li, Liu; Jiang, Wanying; Li, Jing; Yuchi, Ming; Ding, Mingyue; Zhang, Xuming

    2013-03-01

    Image fusion quality assessment plays a critically important role in the field of medical imaging. To evaluate image fusion quality effectively, many assessment methods have been proposed. Examples include mutual information (MI), root mean square error (RMSE), and the universal image quality index (UIQI). However, these assessment methods do not reflect human visual inspection effectively. To address this problem, in this paper we propose a novel image fusion assessment method which combines the nonsubsampled contourlet transform (NSCT) with regional mutual information. In this proposed method, the source medical images are first decomposed into different levels by the NSCT. Then the maximum NSCT coefficients of the decomposed directional images at each level are obtained to compute the regional mutual information (RMI). Finally, multi-channel RMI is computed by the weighted sum of the obtained RMI values at the various levels of the NSCT. The advantage of the proposed method lies in the fact that the NSCT can represent image information at multiple directions and scales and therefore conforms to the multi-channel characteristic of the human visual system, leading to its outstanding image assessment performance. The experimental results using CT and MRI images demonstrate that the proposed assessment method outperforms assessment methods such as MI and the UIQI-based measure in evaluating image fusion quality and provides results consistent with human visual assessment.
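
    A minimal sketch of a histogram-based (regional) mutual information computation of the kind that is weighted across decomposition levels in the method above; the NSCT decomposition itself is omitted, and the block size and bin count are illustrative.

    ```python
    import numpy as np

    def mutual_information(a, b, bins=32):
        """Histogram-based MI (in bits) between two equally sized image regions."""
        joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
        pxy = joint / joint.sum()
        px = pxy.sum(axis=1, keepdims=True)
        py = pxy.sum(axis=0, keepdims=True)
        nz = pxy > 0
        return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

    def regional_mi(src_a, src_b, fused, block=32):
        """Average block-wise MI between the fused image and each source image."""
        h, w = fused.shape
        vals = []
        for y in range(0, h - block + 1, block):
            for x in range(0, w - block + 1, block):
                f = fused[y:y + block, x:x + block]
                vals.append(mutual_information(f, src_a[y:y + block, x:x + block]) +
                            mutual_information(f, src_b[y:y + block, x:x + block]))
        return float(np.mean(vals))
    ```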

  3. The effect of image sharpness on quantitative eye movement data and on image quality evaluation while viewing natural images

    NASA Astrophysics Data System (ADS)

    Vuori, Tero; Olkkonen, Maria

    2006-01-01

    The aim of the study is to test both customer image quality rating (subjective image quality) and physical measurement of user behavior (eye movement tracking) to find customer satisfaction differences in imaging technologies. A methodological aim is to find out whether eye movements could be used quantitatively in image quality preference studies. In general, we want to map objective or physically measurable image quality to subjective evaluations and eye movement data. We conducted a series of image quality tests, in which the test subjects evaluated image quality while we recorded their eye movements. Results show that eye movement parameters consistently change according to the instructions given to the user, and according to physical image quality, e.g. saccade duration increased with increasing blur. Results indicate that eye movement tracking could be used to differentiate the image quality evaluation strategies that users adopt. Results also show that eye movements would help in mapping between technological and subjective image quality. Furthermore, these results give some empirical emphasis to top-down perception processes in image quality perception and evaluation by showing differences between perceptual processes in situations where the cognitive task varies.

  4. Image processing system performance prediction and product quality evaluation

    NASA Technical Reports Server (NTRS)

    Stein, E. K.; Hammill, H. B. (Principal Investigator)

    1976-01-01

    The author has identified the following significant results. A new technique for image processing system performance prediction and product quality evaluation was developed. It was entirely objective, quantitative, and general, and should prove useful in system design and quality control. The technique and its application to determination of quality control procedures for the Earth Resources Technology Satellite NASA Data Processing Facility are described.

  5. Toward a Blind Deep Quality Evaluator for Stereoscopic Images Based on Monocular and Binocular Interactions.

    PubMed

    Shao, Feng; Tian, Weijun; Lin, Weisi; Jiang, Gangyi; Dai, Qionghai

    2016-05-01

    During recent years, blind image quality assessment (BIQA) has been intensively studied with different machine learning tools. Existing BIQA metrics, however, are not designed for stereoscopic images. We believe this problem can be resolved by treating the views of 3D images separately and capturing the essential attributes of images via deep neural networks. In this paper, we propose a blind deep quality evaluator (DQE) for stereoscopic images (denoted by 3D-DQE) based on monocular and binocular interactions. The key technical steps in the proposed 3D-DQE are to train two separate 2D deep neural networks (2D-DNNs) from 2D monocular images and cyclopean images to model the process of monocular and binocular quality predictions, and combine the measured 2D monocular and cyclopean quality scores using different weighting schemes. Experimental results on four public 3D image quality assessment databases demonstrate that in comparison with the existing methods, the devised algorithm achieves highly consistent alignment with subjective assessment. PMID:26960225

  6. Automated quality assessment in three-dimensional breast ultrasound images.

    PubMed

    Schwaab, Julia; Diez, Yago; Oliver, Arnau; Martí, Robert; van Zelst, Jan; Gubern-Mérida, Albert; Mourri, Ahmed Bensouda; Gregori, Johannes; Günther, Matthias

    2016-04-01

    Automated three-dimensional breast ultrasound (ABUS) is a valuable adjunct to x-ray mammography for breast cancer screening of women with dense breasts. High image quality is essential for proper diagnostics and computer-aided detection. We propose an automated image quality assessment system for ABUS images that detects artifacts at the time of acquisition. Therefore, we study three aspects that can corrupt ABUS images: the nipple position relative to the rest of the breast, the shadow caused by the nipple, and the shape of the breast contour on the image. Image processing and machine learning algorithms are combined to detect these artifacts based on 368 clinical ABUS images that have been rated manually by two experienced clinicians. At a specificity of 0.99, 55% of the images that were rated as low quality are detected by the proposed algorithms. The areas under the ROC curves of the single classifiers are 0.99 for the nipple position, 0.84 for the nipple shadow, and 0.89 for the breast contour shape. The proposed algorithms work fast and reliably, which makes them adequate for online evaluation of image quality during acquisition. The presented concept may be extended to further image modalities and quality aspects. PMID:27158633

  7. An easily-achieved time-domain beamformer for ultrafast ultrasound imaging based on compressive sensing.

    PubMed

    Wang, Congzhi; Peng, Xi; Liang, Dong; Xiao, Yang; Qiu, Weibao; Qian, Ming; Zheng, Hairong

    2015-01-01

    In ultrafast ultrasound imaging, how to maintain a high frame rate while improving image quality as far as possible has become a significant issue. Several novel beamforming methods based on compressive sensing (CS) theory have been proposed in the previous literature, but all have their own limitations, such as excessively large memory consumption and errors caused by the short-time discrete Fourier transform (STDFT). In this study, a novel CS-based time-domain beamformer for plane-wave ultrasound imaging is proposed and its image quality has been verified to be better than the traditional DAS method and even the popular coherent compounding method on several simulated phantoms. Compared to existing CS methods, the memory consumption of our method is significantly reduced since the encoding matrix can be expressed sparsely. In addition, the time-delay calculations of the echo signals are directly accomplished in the time domain with a dictionary concept, avoiding the errors induced by the short-time Fourier transform calculation in those frequency-domain methods. The proposed method can be easily implemented on some low-cost hardware platforms, and can obtain ultrasound images with both high frame rate and good image quality, which gives it great potential for clinical application.
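
    A toy compressed-sensing recovery example in the spirit of the abstract: a sparse vector is recovered from underdetermined measurements y = Ax by l1-regularised least squares. The Lasso solver here stands in for the paper's time-domain dictionary formulation, and the matrix sizes, sparsity level and regularisation weight are illustrative.

    ```python
    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)

    n, m, k = 256, 96, 8                               # signal length, measurements, sparsity
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

    A = rng.standard_normal((m, n)) / np.sqrt(m)       # stand-in for the encoding matrix
    y = A @ x_true + 0.01 * rng.standard_normal(m)     # simulated noisy channel data

    solver = Lasso(alpha=1e-3, fit_intercept=False, max_iter=50_000)
    solver.fit(A, y)                                   # l1-regularised recovery of x
    x_hat = solver.coef_

    print("recovered support:", np.sort(np.flatnonzero(np.abs(x_hat) > 0.1)))
    ```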

  8. Testing scanners for the quality of output images

    NASA Astrophysics Data System (ADS)

    Concepcion, Vicente P.; Nadel, Lawrence D.; D'Amato, Donald P.

    1995-01-01

    Document scanning is the means through which documents are converted to their digital image representation for electronic storage or distribution. Among the types of documents being scanned by government agencies are tax forms, patent documents, office correspondence, mail pieces, engineering drawings, microfilm, archived historical papers, and fingerprint cards. Increasingly, the resulting digital images are used as the input for further automated processing including: conversion to a full-text-searchable representation via machine printed or handwritten (optical) character recognition (OCR), postal zone identification, raster-to-vector conversion, and fingerprint matching. These diverse document images may be bi-tonal, gray scale, or color. Spatial sampling frequencies range from about 200 pixels per inch to over 1,000. The quality of the digital images can have a major effect on the accuracy and speed of any subsequent automated processing, as well as on any human-based processing which may be required. During imaging system design, there is, therefore, a need to specify the criteria by which image quality will be judged and, prior to system acceptance, to measure the quality of images produced. Unfortunately, there are few, if any, agreed-upon techniques for measuring document image quality objectively. In the output images, it is difficult to distinguish image degradation caused by the poor quality of the input paper or microfilm from that caused by the scanning system. We propose several document image quality criteria and have developed techniques for their measurement. These criteria include spatial resolution, geometric image accuracy, (distortion), gray scale resolution and linearity, and temporal and spatial uniformity. The measurement of these criteria requires scanning one or more test targets along with computer-based analyses of the test target images.

  9. No-reference visual quality assessment for image inpainting

    NASA Astrophysics Data System (ADS)

    Voronin, V. V.; Frantc, V. A.; Marchuk, V. I.; Sherstobitov, A. I.; Egiazarian, K.

    2015-03-01

    Inpainting has received a lot of attention in recent years and quality assessment is an important task to evaluate different image reconstruction approaches. In many cases inpainting methods introduce blur at sharp transitions and image contours in the recovery of large areas with missing pixels and often fail to recover curvy boundary edges. Quantitative metrics of inpainting results currently do not exist and researchers use human comparisons to evaluate their methodologies and techniques. Most objective quality assessment methods rely on a reference image, which is often not available in inpainting applications. Usually researchers use subjective quality assessment by human observers. This is a difficult and time-consuming procedure. This paper focuses on a machine learning approach for no-reference visual quality assessment for image inpainting based on properties of human vision. Our method is based on the observation that Local Binary Patterns describe local structural information of the image well. We use a support vector regression model trained on human-assessed images to predict perceived quality of inpainted images. We demonstrate how our predicted quality value correlates with qualitative opinion in a human observer study. Results are shown on a human-scored dataset for different inpainting methods.
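
    A minimal sketch of the feature-and-regressor pipeline the abstract outlines: uniform Local Binary Pattern histograms as local-structure features, fed to a support vector regressor trained on human scores. The dataset variables and SVR parameters are hypothetical.

    ```python
    import numpy as np
    from skimage.feature import local_binary_pattern
    from sklearn.svm import SVR

    def lbp_histogram(gray, P=8, R=1.0):
        """Uniform LBP histogram of a 2-D grayscale image."""
        codes = local_binary_pattern(gray, P, R, method="uniform")
        n_bins = P + 2                                  # uniform patterns plus the non-uniform bin
        hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
        return hist

    def train_quality_model(inpainted_imgs, human_scores):
        """inpainted_imgs: 2-D arrays; human_scores: mean opinion scores (hypothetical data)."""
        X = np.vstack([lbp_histogram(im) for im in inpainted_imgs])
        reg = SVR(kernel="rbf", C=10.0, epsilon=0.1)
        reg.fit(X, human_scores)
        return reg
    ```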

  10. Image quality assessment by preprocessing and full reference model combination

    NASA Astrophysics Data System (ADS)

    Bianco, S.; Ciocca, G.; Marini, F.; Schettini, R.

    2009-01-01

    This paper focuses on full-reference image quality assessment and presents different computational strategies aimed to improve the robustness and accuracy of some well known and widely used state of the art models, namely the Structural Similarity approach (SSIM) by Wang and Bovik and the S-CIELAB spatial-color model by Zhang and Wandell. We investigate the hypothesis that combining error images with a visual attention model could allow a better fit of the psycho-visual data of the LIVE Image Quality assessment Database Release 2. We show that the proposed quality assessment metric better correlates with the experimental data.
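
    A small sketch of the kind of combination investigated above: a full-reference similarity map (here SSIM from scikit-image) pooled with a spatial attention weighting. The centered Gaussian used as the attention map is only a placeholder, not the visual attention model of the paper.

    ```python
    import numpy as np
    from skimage.metrics import structural_similarity

    def center_prior(shape, sigma_frac=0.3):
        """Placeholder attention map: a centered 2-D Gaussian."""
        h, w = shape
        yy, xx = np.mgrid[0:h, 0:w]
        cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
        d2 = ((yy - cy) / (sigma_frac * h)) ** 2 + ((xx - cx) / (sigma_frac * w)) ** 2
        return np.exp(-0.5 * d2)

    def attention_weighted_ssim(ref, test):
        """Pool the SSIM map with a spatial weighting instead of a plain mean."""
        score, ssim_map = structural_similarity(ref, test, full=True,
                                                data_range=float(ref.max() - ref.min()))
        w = center_prior(ssim_map.shape)
        return float((ssim_map * w).sum() / w.sum())
    ```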

  11. Method and tool for generating and managing image quality allocations through the design and development process

    NASA Astrophysics Data System (ADS)

    Sparks, Andrew W.; Olson, Craig; Theisen, Michael J.; Addiego, Chris J.; Hutchins, Tiffany G.; Goodman, Timothy D.

    2016-05-01

    Performance models for infrared imaging systems require image quality parameters; optical design engineers need image quality design goals; systems engineers develop image quality allocations to test imaging systems against. It is a challenge to maintain consistency and traceability amongst the various expressions of image quality. We present a method and parametric tool for generating and managing expressions of image quality during the system modeling, requirements specification, design, and testing phases of an imaging system design and development project.

  12. Feature maps driven no-reference image quality prediction of authentically distorted images

    NASA Astrophysics Data System (ADS)

    Ghadiyaram, Deepti; Bovik, Alan C.

    2015-03-01

    Current blind image quality prediction models rely on benchmark databases comprised of singly and synthetically distorted images, thereby learning image features that are only adequate to predict human perceived visual quality on such inauthentic distortions. However, real world images often contain complex mixtures of multiple distortions. Rather than a) discounting the effect of these mixtures of distortions on an image's perceptual quality and considering only the dominant distortion or b) using features that are only proven to be efficient for singly distorted images, we deeply study the natural scene statistics of authentically distorted images, in different color spaces and transform domains. We propose a feature-maps-driven statistical approach which avoids any latent assumptions about the type of distortion(s) contained in an image, and focuses instead on modeling the remarkable consistencies in the scene statistics of real world images in the absence of distortions. We design a deep belief network that takes model-based statistical image features derived from a very large database of authentically distorted images as input and discovers good feature representations by generalizing over different distortion types, mixtures, and severities, which are later used to learn a regressor for quality prediction. We demonstrate the remarkable competence of our features for improving automatic perceptual quality prediction on a benchmark database and on the newly designed LIVE Authentic Image Quality Challenge Database and show that our approach of combining robust statistical features and the deep belief network dramatically outperforms the state-of-the-art.
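
    One widely used way to build model-based scene-statistic features of the general kind described above is to compute mean-subtracted contrast-normalised (MSCN) coefficients and summarise their distribution. The sketch below shows only that feature step as a generic stand-in; it is not the paper's feature set, and the deep belief network and regressor are omitted.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def mscn(gray, sigma=7 / 6, c=1.0):
        """Mean-subtracted contrast-normalised coefficients of a grayscale image."""
        gray = gray.astype(np.float64)
        mu = gaussian_filter(gray, sigma)
        var = gaussian_filter(gray * gray, sigma) - mu * mu
        return (gray - mu) / (np.sqrt(np.abs(var)) + c)

    def nss_summary(gray):
        """Tiny distribution summary of the MSCN coefficients (illustrative feature vector)."""
        m = mscn(gray)
        return np.array([m.mean(), m.var(), np.mean(np.abs(m)) ** 2 / np.mean(m * m)])
    ```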

  13. Latest achievements on MCT IR detectors for space and science imaging

    NASA Astrophysics Data System (ADS)

    Gravrand, O.; Rothman, J.; Castelein, P.; Cervera, C.; Baier, N.; Lobre, C.; De Borniol, E.; Zanatta, J. P.; Boulade, O.; Moreau, V.; Fieque, B.; Chorier, P.

    2016-05-01

    HgCdTe (MCT) is a very versatile material for IR detection. Indeed, the ability to tailor the cutoff frequency as close as possible to the detection needs makes it a perfect candidate for high performance detection in a wide range of applications and spectral ranges. Moreover, the high quality material available today, grown either by liquid phase epitaxy (LPE) or molecular beam epitaxy (MBE), allows for very low dark currents at low temperatures and makes it suitable for very low flux detection applications such as science imaging. MCT has also demonstrated its robustness to the aggressive space environment and therefore faces a large demand for space applications such as staring at outer space for science purposes, in which case the detected photon number is very low. This induces very strong constraints on the detector: low dark current, low noise, low persistence, (very) large focal plane arrays. The MCT diode structure adapted to fulfill those requirements is naturally the p/n photodiode. Following the developments of this technology made at DEFIR and transferred to Sofradir in MWIR and LWIR ranges for tactical applications, our laboratory has consequently investigated its adaptation for ultra-low flux in different spectral bands, in collaboration with the CEA Astrophysics lab. Another alternative for ultra-low flux applications in the SWIR range has also been investigated with low excess noise MCT n/p avalanche photodiodes (APD). Those APDs may in some cases open the way to sub-electron-noise IR detection. This paper will review the latest achievements obtained on this matter at DEFIR (CEA-LETI and Sofradir common laboratory) from the short wave (SWIR) band detection for classical astronomical needs, to the long wave (LWIR) band for exoplanet transit spectroscopy, up to the very long wave (VLWIR) band.

  14. HgCdTe Detectors for Space and Science Imaging: General Issues and Latest Achievements

    NASA Astrophysics Data System (ADS)

    Gravrand, O.; Rothman, J.; Cervera, C.; Baier, N.; Lobre, C.; Zanatta, J. P.; Boulade, O.; Moreau, V.; Fieque, B.

    2016-09-01

    HgCdTe (MCT) is a very versatile material system for infrared (IR) detection, suitable for high performance detection in a wide range of applications and spectral ranges. Indeed, the ability to tailor the cutoff frequency as close as possible to the needs makes it a perfect candidate for high performance detection. Moreover, the high quality material available today, grown either by molecular beam epitaxy or liquid phase epitaxy, allows for very low dark currents at low temperatures, suitable for low flux detection applications such as science imaging. MCT has also demonstrated robustness to the aggressive environment of space and faces, therefore, a large demand for space applications. A satellite may stare at the earth, in which case detection usually involves a lot of photons, called a high flux scenario. Alternatively, a satellite may stare at outer space for science purposes, in which case the detected photon number is very low, leading to low flux scenarios. This latter case induces very strong constraints on the detector: low dark current, low noise, (very) large focal plane arrays. The classical structures used to fulfill those requirements are usually p/n MCT photodiodes. This type of structure has been deeply investigated in our laboratory for different spectral bands, in collaboration with the CEA Astrophysics lab. However, another alternative may also be investigated with low excess noise: MCT n/p avalanche photodiodes (APD). This paper reviews the latest achievements obtained on this matter at DEFIR (LETI and Sofradir common laboratory) from short wave infrared (SWIR) band detection for classical astronomical needs, to the long wave infrared (LWIR) band for exoplanet transit spectroscopy, up to very long wave infrared (VLWIR) bands. The different available diode architectures (n/p VHg or p/n, or even APDs) are reviewed, including different available ROIC architectures for low flux detection.

  15. Interplay between JPEG-2000 image coding and quality estimation

    NASA Astrophysics Data System (ADS)

    Pinto, Guilherme O.; Hemami, Sheila S.

    2013-03-01

    Image quality and utility estimators aspire to quantify the perceptual resemblance and the usefulness of a distorted image when compared to a reference natural image, respectively. Image-coders, such as JPEG-2000, traditionally aspire to allocate the available bits to maximize the perceptual resemblance of the compressed image when compared to a reference uncompressed natural image. Specifically, this can be accomplished by allocating the available bits to minimize the overall distortion, as computed by a given quality estimator. This paper applies five image quality and utility estimators, SSIM, VIF, MSE, NICE and GMSE, within a JPEG-2000 encoder for rate-distortion optimization to obtain new insights on how to improve JPEG-2000 image coding for quality and utility applications, as well as to improve the understanding about the quality and utility estimators used in this work. This work develops a rate-allocation algorithm for arbitrary quality and utility estimators within the Post-Compression Rate-Distortion Optimization (PCRD-opt) framework in JPEG-2000 image coding. Performance of the JPEG-2000 image coder when used with a variety of utility and quality estimators is then assessed. The estimators fall into two broad classes, magnitude-dependent (MSE, GMSE and NICE) and magnitude-independent (SSIM and VIF). They further differ on their use of the low-frequency image content in computing their estimates. The impact of these computational differences is analyzed across a range of images and bit rates. In general, performance of the JPEG-2000 coder below 1.6 bits/pixel with any of these estimators is highly content dependent, with the most relevant content being the amount of texture in an image and whether the strongest gradients in an image correspond to the main contours of the scene. Above 1.6 bits/pixel, all estimators produce visually equivalent images. As a result, the MSE estimator provides the most consistent performance across all images, while specific
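
    A greedy stand-in for the rate-allocation idea behind PCRD-opt: given per-coding-pass rate and distortion-reduction figures (computed with any quality or utility estimator), keep adding the passes with the best distortion reduction per byte until the budget is reached. Unlike real PCRD-opt, this sketch ignores the requirement that the passes of a code block be included in order, and the data structures are illustrative.

    ```python
    from dataclasses import dataclass

    @dataclass
    class CodingPass:
        block_id: int
        n_bytes: int        # incremental rate contributed by this pass
        delta_d: float      # distortion reduction under the chosen estimator (MSE, 1-SSIM, ...)

    def allocate(passes, budget_bytes):
        """Greedily select passes by distortion reduction per byte within a byte budget."""
        chosen, spent = [], 0
        for p in sorted(passes, key=lambda p: p.delta_d / p.n_bytes, reverse=True):
            if spent + p.n_bytes <= budget_bytes:
                chosen.append(p)
                spent += p.n_bytes
        return chosen, spent
    ```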

  16. A quantitative method for visual phantom image quality evaluation

    NASA Astrophysics Data System (ADS)

    Chakraborty, Dev P.; Liu, Xiong; O'Shea, Michael; Toto, Lawrence C.

    2000-04-01

    This work presents an image quality evaluation technique for uniform-background target-object phantom images. The Degradation-Comparison-Threshold (DCT) method involves degrading the image quality of a target-containing region with a blocking operation and comparing the resulting image to a similarly degraded target-free region. The threshold degradation needed for 92% correct detection of the target region is the image quality measure of the target. Images of the American College of Radiology (ACR) mammography accreditation program phantom were acquired under varying x-ray conditions on a digital mammography machine. Five observers performed ACR and DCT evaluations of the images. A figure-of-merit (FOM) of an evaluation method was defined which takes into account measurement noise and the change of the measure as a function of x-ray exposure to the phantom. The FOM of the DCT method was 4.1 times that of the ACR method for the specks, 2.7 times for the fibers, and 1.4 times for the masses. For the specks, inter-reader correlations on the same image set increased significantly from 87% for the ACR method to 97% for the DCT method. The viewing time per target for the DCT method was 3-5 minutes. The observed greater sensitivity of the DCT method could lead to more precise Quality Control (QC) testing of digital images, which should improve the sensitivity of the QC process to genuine image quality variations. Another benefit of the method is that it can measure the image quality of high detectability target objects, which is impractical with existing methods.

  17. MEO based secured, robust, high capacity and perceptual quality image watermarking in DWT-SVD domain.

    PubMed

    Gunjal, Baisa L; Mali, Suresh N

    2015-01-01

    The aim of this paper is to present a multiobjective evolutionary optimizer (MEO) based, highly secure and strongly robust image watermarking technique using discrete wavelet transform (DWT) and singular value decomposition (SVD). Many researchers have failed to achieve optimization of perceptual quality and robustness with high capacity watermark embedding. Here, we achieved optimized peak signal to noise ratio (PSNR) and normalized correlation (NC) using MEO. Strong security is implemented through eight different security levels including watermark scrambling by Fibonacci-Lucas transformation (FLT). The Haar wavelet is selected for DWT decomposition to compare practical performance of wavelets from different wavelet families. The technique is non-blind and tested with cover images of size 512x512 and a grey-scale watermark of size 256x256. The achieved perceptual quality in terms of PSNR is 79.8611 dB for Lena, 87.8446 dB for peppers and 93.2853 dB for lake images by varying scale factor K1 from 1 to 5. All candidate images used for testing, namely the Lena, peppers and lake images, show exact recovery of the watermark, giving NC equal to 1. Robustness is tested against a variety of attacks on the watermarked image. The experimental demonstration proved that the proposed method gives NC greater than 0.96 for the majority of attacks under consideration. The performance evaluation of this technique is found to be superior to all existing hybrid image watermarking techniques under consideration.
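
    A minimal sketch of a generic DWT-SVD embedding step and the two reported measures (PSNR and NC), assuming PyWavelets and NumPy. The MEO optimisation, Fibonacci-Lucas scrambling and the eight security levels of the actual technique are not shown, and the scale factor k is illustrative.

    ```python
    import numpy as np
    import pywt

    def embed(cover, watermark, k=0.05):
        """Embed a watermark into the singular values of the cover's LL subband (Haar DWT)."""
        LL, (LH, HL, HH) = pywt.dwt2(cover.astype(float), "haar")
        Uc, Sc, Vc = np.linalg.svd(LL, full_matrices=False)
        _, Sw, _ = np.linalg.svd(watermark.astype(float), full_matrices=False)
        n = min(len(Sc), len(Sw))
        Sc_mod = Sc.copy()
        Sc_mod[:n] += k * Sw[:n]                        # additive modification of singular values
        LL_mod = (Uc * Sc_mod) @ Vc
        return pywt.idwt2((LL_mod, (LH, HL, HH)), "haar")

    def psnr(a, b, peak=255.0):
        mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
        return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

    def nc(w, w_rec):
        w, w_rec = w.astype(float).ravel(), w_rec.astype(float).ravel()
        return float(np.dot(w, w_rec) / (np.linalg.norm(w) * np.linalg.norm(w_rec)))
    ```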

  18. MEO based secured, robust, high capacity and perceptual quality image watermarking in DWT-SVD domain.

    PubMed

    Gunjal, Baisa L; Mali, Suresh N

    2015-01-01

    The aim of this paper is to present a multiobjective evolutionary optimizer (MEO) based, highly secure and strongly robust image watermarking technique using discrete wavelet transform (DWT) and singular value decomposition (SVD). Many researchers have failed to achieve optimization of perceptual quality and robustness with high capacity watermark embedding. Here, we achieved optimized peak signal to noise ratio (PSNR) and normalized correlation (NC) using MEO. Strong security is implemented through eight different security levels including watermark scrambling by Fibonacci-Lucas transformation (FLT). The Haar wavelet is selected for DWT decomposition to compare practical performance of wavelets from different wavelet families. The technique is non-blind and tested with cover images of size 512x512 and a grey-scale watermark of size 256x256. The achieved perceptual quality in terms of PSNR is 79.8611 dB for Lena, 87.8446 dB for peppers and 93.2853 dB for lake images by varying scale factor K1 from 1 to 5. All candidate images used for testing, namely the Lena, peppers and lake images, show exact recovery of the watermark, giving NC equal to 1. Robustness is tested against a variety of attacks on the watermarked image. The experimental demonstration proved that the proposed method gives NC greater than 0.96 for the majority of attacks under consideration. The performance evaluation of this technique is found to be superior to all existing hybrid image watermarking techniques under consideration. PMID:25830081

  19. Perceived no reference image quality measurement for chromatic aberration

    NASA Astrophysics Data System (ADS)

    Lamb, Anupama B.; Khambete, Madhuri

    2016-03-01

    Today there is a need for no-reference (NR) objective perceived image quality measurement techniques, as conducting subjective experiments and making reference images available is a very difficult task. Very few NR perceived image quality measurement algorithms are available for color distortions like chromatic aberration (CA), color quantization with dither, and color saturation. We propose NR image quality assessment (NR-IQA) algorithms for images distorted with CA. CA is mostly observed in images taken with digital cameras that combine high sensor resolution with inexpensive lenses. We compared our metric performance with two state-of-the-art NR blur techniques, one full reference IQA technique and three general-purpose NR-IQA techniques, although they are not tailored for CA. We used the CA dataset in the TID-2013 color image database to evaluate performance. The proposed algorithms give performance comparable to state-of-the-art techniques in terms of performance parameters and outperform them in terms of monotonicity and computational complexity. We have also discovered that the proposed CA algorithm best predicts perceived image quality of images distorted with realistic CA.

  20. Figure of Image Quality and Information Capacity in Digital Mammography

    PubMed Central

    Michail, Christos M.; Kalyvas, Nektarios E.; Valais, Ioannis G.; Fudos, Ioannis P.; Fountos, George P.; Dimitropoulos, Nikos; Kandarakis, Ioannis S.

    2014-01-01

    Objectives. In this work, a simple technique to assess the image quality characteristics of the postprocessed image is developed and an easy-to-use figure of image quality (FIQ) is introduced. This FIQ characterizes images in terms of resolution and noise. In addition, information capacity, defined within the context of Shannon's information theory, was used as an overall image quality index. Materials and Methods. A digital mammographic image was postprocessed with three digital filters. Resolution and noise were calculated via the Modulation Transfer Function (MTF), the coefficient of variation, and the figure of image quality. In addition, frequency dependent parameters such as the noise power spectrum (NPS) and noise equivalent quanta (NEQ) were estimated and used to assess information capacity. Results. FIQs for the “raw image” data and the image processed with the “sharpen edges” filter were found to be 907.3 and 1906.1, respectively. The information capacity values were 60.86 × 10³ and 78.96 × 10³ bits/mm². Conclusion. It was found that, after the application of the postprocessing techniques (even commercial nondedicated software) on the raw digital mammograms, MTF, NPS, and NEQ are improved for medium to high spatial frequencies, leading to the resolution of smaller structures in the final image. PMID:24895593
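
    A small sketch of a 2-D noise power spectrum estimate from uniform-region ROIs, the kind of frequency-dependent quantity used (together with the MTF) to derive NEQ. Normalisation conventions for the NPS vary between protocols, so the constant used here is only indicative.

    ```python
    import numpy as np

    def nps_2d(rois, pixel_pitch_mm):
        """2-D noise power spectrum from a stack of uniform ROIs (mean-detrended)."""
        rois = np.asarray(rois, dtype=float)
        _, ny, nx = rois.shape
        spectra = []
        for roi in rois:
            detrended = roi - roi.mean()                 # simple detrending by mean removal
            f = np.fft.fftshift(np.fft.fft2(detrended))
            spectra.append(np.abs(f) ** 2)
        # one common normalisation: pixel area divided by the number of samples
        return (pixel_pitch_mm ** 2 / (nx * ny)) * np.mean(spectra, axis=0)
    ```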

  1. The use of modern electronic flat panel devices for image guided radiation therapy: Image quality comparison, intra-fraction motion monitoring and quality assurance applications

    NASA Astrophysics Data System (ADS)

    Nill, S.; Stützel, J.; Häring, P.; Oelfke, U.

    2008-06-01

    With modern radiotherapy delivery techniques like intensity modulated radiotherapy (IMRT) it is possible to deliver a more conformal dose distribution to the tumor while better sparing the organs at risk (OAR) compared to 3D conventional radiation therapy. Due to the theoretically high dose conformity achievable, it is very important to know the exact position of the target volume during the treatment. With more and more modern linear accelerators equipped with imaging devices, this is now almost possible. These imaging devices use energies between 120 kV and 6 MV, so different detector systems are employed, but the vast majority use amorphous silicon flat panel devices with different scintillator screens and build-up materials. The technical details and the image quality of these systems are discussed and first results of the comparison are presented. In addition, new methods for motion management and quality assurance procedures are briefly discussed.

  2. Quality improvement in diabetes--successful in achieving better care with hopes for prevention.

    PubMed

    Haw, J Sonya; Narayan, K M Venkat; Ali, Mohammed K

    2015-09-01

    Diabetes affects 29 million Americans and is associated with billions of dollars in health expenditures and lost productivity. Robust evidence has shown that lifestyle interventions in people at high risk for diabetes and comprehensive management of cardiometabolic risk factors like glucose, blood pressure, and lipids can delay the onset of diabetes and its complications, respectively. However, realizing the "triple aim" of better health, better care, and lower cost in diabetes has been hampered by low adoption of lifestyle interventions to prevent diabetes and poor achievement of care goals for those with diabetes. To achieve better care, a number of quality improvement (QI) strategies targeting the health system, healthcare providers, and/or patients have been evaluated in both controlled trials and real-world programs, and have shown some successes, though barriers still impede wider adoption, effectiveness, real-world feasibility, and scalability. Here, we summarize the effectiveness and cost-effectiveness data regarding QI strategies in diabetes care and discuss the potential role of quality monitoring and QI in trying to implement primary prevention of diabetes more widely and effectively. Over time, achieving better care and better health will likely help bend the ever-growing cost curve. PMID:26495771

  3. Impact of Computed Tomography Image Quality on Image-Guided Radiation Therapy Based on Soft Tissue Registration

    SciTech Connect

    Morrow, Natalya V.; Lawton, Colleen A.; Qi, X. Sharon; Li, X. Allen

    2012-04-01

    Purpose: In image-guided radiation therapy (IGRT), different computed tomography (CT) modalities with varying image quality are being used to correct for interfractional variations in patient set-up and anatomy changes, thereby reducing clinical target volume to the planning target volume (CTV-to-PTV) margins. We explore how CT image quality affects patient repositioning and CTV-to-PTV margins in soft tissue registration-based IGRT for prostate cancer patients. Methods and Materials: Four CT-based IGRT modalities used for prostate RT were considered in this study: MV fan beam CT (MVFBCT) (Tomotherapy), MV cone beam CT (MVCBCT) (MVision; Siemens), kV fan beam CT (kVFBCT) (CTVision, Siemens), and kV cone beam CT (kVCBCT) (Synergy; Elekta). Daily shifts were determined by manual registration to achieve the best soft tissue agreement. Effect of image quality on patient repositioning was determined by statistical analysis of daily shifts for 136 patients (34 per modality). Inter- and intraobserver variability of soft tissue registration was evaluated based on the registration of a representative scan for each CT modality with its corresponding planning scan. Results: Superior image quality with the kVFBCT resulted in reduced uncertainty in soft tissue registration during IGRT compared with other image modalities for IGRT. The largest interobserver variations of soft tissue registration were 1.1 mm, 2.5 mm, 2.6 mm, and 3.2 mm for kVFBCT, kVCBCT, MVFBCT, and MVCBCT, respectively. Conclusions: Image quality adversely affects the reproducibility of soft tissue-based registration for IGRT and necessitates a careful consideration of residual uncertainties in determining different CTV-to-PTV margins for IGRT using different image modalities.

  4. Achievable spatial resolution of time-resolved transillumination imaging systems which utilize multiply scattered light

    NASA Astrophysics Data System (ADS)

    Moon, J. A.; Battle, P. R.; Bashkansky, M.; Mahon, R.; Duncan, M. D.; Reintjes, J.

    1996-01-01

    We describe theoretically and measure experimentally the best achievable time-dependent point-spread-function of light in the presence of strong turbidity. We employ the rescaled isotropic-scattering solution to the time-dependent radiative transfer equation to examine three mathematically distinct limits of photonic transport: the ballistic, quasidiffuse, and diffuse limits. In all cases we follow the constraint that a minimum fractional number of launched photons must be received before the time-integrating detector is turned off. We show how the achievable ballistic resolution maps into the diffusion-limited achievable resolution, and verify this behavior experimentally by using a coherently amplified Raman polarization gate imaging system. We are able to quantitatively fit the measured best achievable resolution by empirically rescaling the scattering length in the model.

  5. Dosimetry and image quality assessment in a direct radiography system

    PubMed Central

    Oliveira, Bruno Beraldo; de Oliveira, Marcio Alves; Paixão, Lucas; Teixeira, Maria Helena Araújo; Nogueira, Maria do Socorro

    2014-01-01

    Objective To evaluate the mean glandular dose with a solid state detector and the image quality in a direct radiography system, utilizing phantoms. Materials and Methods Irradiations were performed with automatic exposure control and polymethyl methacrylate slabs with different thicknesses to calculate glandular dose values. The image quality was evaluated by means of the structures visualized on the images of the phantoms. Results Considering the uncertainty of the measurements, the mean glandular dose results are in agreement with the values provided by the equipment and with internationally adopted reference levels. Results obtained from images of the phantoms were in agreement with the reference values. Conclusion The present study contributes to verify the equipment conformity as regards dose values and image quality. PMID:25741119

  6. Image quality evaluation and control of computer-generated holograms

    NASA Astrophysics Data System (ADS)

    Yoshikawa, Hiroshi; Yamaguchi, Takeshi; Uetake, Hiroki

    2016-03-01

    The image quality of computer-generated holograms is usually evaluated subjectively. For example, the reconstructed image from the hologram is compared with other holograms, or evaluated by the double-stimulus impairment scale method to compare with the original image. This paper proposes an objective image quality evaluation of a computer-generated hologram by evaluating both diffraction efficiency and peak signal-to-noise ratio. Theory and numerical experimental results are shown for Fourier transform transmission holograms of both amplitude and phase modulation. Results without the optimized random phase show that the amplitude transmission hologram gives a better peak signal-to-noise ratio, but the phase transmission hologram provides about 10 times higher diffraction efficiency than the amplitude type. As an optimized phase hologram, the Kinoform is evaluated. In addition, we investigate controlling image quality by a non-linear operation.
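
    A sketch of the two objective measures combined above, computed on a simulated reconstruction; the hologram generation and reconstruction themselves are not shown, and the signal mask is assumed to delimit the intended image window.

    ```python
    import numpy as np

    def psnr(ref, rec, peak=1.0):
        """Peak signal-to-noise ratio between the original and reconstructed images."""
        mse = np.mean((ref.astype(float) - rec.astype(float)) ** 2)
        return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

    def diffraction_efficiency(recon_intensity, signal_mask):
        """Fraction of reconstructed energy falling inside the signal window."""
        total = recon_intensity.sum()
        return float(recon_intensity[signal_mask].sum() / total) if total > 0 else 0.0
    ```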

  7. Quantity and Quality of Computer Use and Academic Achievement: Evidence from a Large-Scale International Test Program

    ERIC Educational Resources Information Center

    Cheema, Jehanzeb R.; Zhang, Bo

    2013-01-01

    This study looked at the effect of both quantity and quality of computer use on achievement. The Program for International Student Assessment (PISA) 2003 student survey comprising 4,356 students (boys, n = 2,129; girls, n = 2,227) was used to predict academic achievement from quantity and quality of computer use while controlling for…

  8. PLÉIADES Project: Assessment of Georeferencing Accuracy, Image Quality, Pansharpening Performance and DSM/DTM Quality

    NASA Astrophysics Data System (ADS)

    Topan, Hüseyin; Cam, Ali; Özendi, Mustafa; Oruç, Murat; Jacobsen, Karsten; Taşkanat, Talha

    2016-06-01

    Pléiades 1A and 1B are twin optical satellites of the Optical and Radar Federated Earth Observation (ORFEO) program jointly run by France and Italy. They are the first satellites of Europe with sub-meter resolution. Airbus DS (formerly Astrium Geo) runs a MyGIC (formerly Pléiades Users Group) program to validate Pléiades images worldwide for various application purposes. The authors conduct three projects: one within this program, the second supported by the BEU Scientific Research Project Program, and the third supported by TÜBİTAK. Georeferencing accuracy, image quality, pansharpening performance and Digital Surface Model/Digital Terrain Model (DSM/DTM) quality are investigated in these projects. For these purposes, triplet panchromatic (50 cm Ground Sampling Distance (GSD)) and VNIR (2 m GSD) Pléiades 1A images were investigated over the Zonguldak test site (Turkey), which is urbanised, mountainous and covered by dense forest. The georeferencing accuracy was estimated with a standard deviation in X and Y (SX, SY) in the range of 0.45m by bias corrected Rational Polynomial Coefficient (RPC) orientation, using ~170 Ground Control Points (GCPs). A 3D standard deviation of ±0.44m in X, ±0.51m in Y, and ±1.82m in Z has been reached in spite of the very narrow angle of convergence by bias corrected RPC orientation. The image quality was also investigated with respect to effective resolution, Signal to Noise Ratio (SNR) and blur coefficient. The effective resolution was estimated with a factor slightly below 1.0, meaning that the image quality corresponds to the nominal resolution of 50cm. The blur coefficients were between 0.39 and 0.46 for the triplet panchromatic images, indicating satisfactory image quality. The SNR is in the range of other comparable space-borne images, which may be caused by de-noising of the Pléiades images. The pansharpened images were generated by various methods, and are validated by most common statistical

  9. Dose reduction and image quality optimizations in CT of pediatric and adult patients: phantom studies

    NASA Astrophysics Data System (ADS)

    Jeon, P.-H.; Lee, C.-L.; Kim, D.-H.; Lee, Y.-J.; Jeon, S.-S.; Kim, H.-J.

    2014-03-01

    Multi-detector computed tomography (MDCT) can be used to easily and rapidly perform numerous acquisitions, possibly leading to a marked increase in the radiation dose to individual patients. Technical options dedicated to automatically adjusting the acquisition parameters according to the patient's size are of specific interest in pediatric radiology. A constant tube potential reduction can be achieved for adults and children, while maintaining a constant detector energy fluence. To evaluate radiation dose, the weighted CT dose index (CTDIw) was calculated based on the CT dose index (CTDI) measured using an ion chamber, and image noise and image contrast were measured from a scanned image to evaluate image quality. The dose-weighted contrast-to-noise ratio (CNRD) was calculated from the radiation dose, image noise, and image contrast measured from a scanned image. The noise derivative (ND) is a quality index for dose efficiency. X-ray spectra with tube voltages ranging from 80 to 140 kVp were used to compute the average photon energy. Image contrast and the corresponding contrast-to-noise ratio (CNR) were determined for lesions of soft tissue, muscle, bone, and iodine relative to a uniform water background, as the iodine contrast increases at lower energy (i.e., the k-edge of iodine at 33 keV is closer to the beam energy), using mixed water-iodine contrast normalization (water 0, iodine 25, 100, 200, and 1000 HU, respectively). The proposed values correspond to high quality images and can be reduced if only high-contrast organs are assessed. The potential benefit of lowering the tube voltage is an improved CNRD, resulting in a lower radiation dose and optimization of image quality. Adjusting the tube potential in abdominal CT would be useful in current pediatric radiography, where the choice of X-ray techniques generally takes into account the size of the patient as well as the need to balance the conflicting requirements of diagnostic image quality and radiation dose
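
    For reference, the weighted CT dose index combines the centre and mean peripheral chamber readings with the standard 1/3-2/3 weighting; the dose-weighted CNR below divides CNR by the square root of dose, which is one common convention and may differ from the paper's exact definition.

    ```python
    import numpy as np

    def ctdi_w(ctdi_center, ctdi_peripheral_mean):
        """Weighted CT dose index (mGy) from centre and mean peripheral CTDI100 values."""
        return ctdi_center / 3.0 + 2.0 * ctdi_peripheral_mean / 3.0

    def cnr(roi_lesion, roi_background):
        """Contrast-to-noise ratio from lesion and background ROI pixel values."""
        return abs(roi_lesion.mean() - roi_background.mean()) / roi_background.std()

    def cnrd(roi_lesion, roi_background, dose_mGy):
        """Dose-weighted CNR; dividing by sqrt(dose) is one common convention."""
        return cnr(roi_lesion, roi_background) / np.sqrt(dose_mGy)
    ```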

  10. Applying image quality in cell phone cameras: lens distortion

    NASA Astrophysics Data System (ADS)

    Baxter, Donald; Goma, Sergio R.; Aleksic, Milivoje

    2009-01-01

    This paper describes the framework used in one of the pilot studies run under the I3A CPIQ initiative to quantify overall image quality in cell-phone cameras. The framework is based on a multivariate formalism which tries to predict overall image quality from individual image quality attributes and was validated in a CPIQ pilot program. The pilot study focuses on image quality distortions introduced in the optical path of a cell-phone camera, which may or may not be corrected in the image processing path. The assumption is that the captured image used is JPEG compressed and the cellphone camera is set to 'auto' mode. As the framework requires the individual attributes to be relatively perceptually orthogonal, the attributes used in the pilot study are lens geometric distortion (LGD) and lateral chromatic aberrations (LCA). The goal of this paper is to present the framework of this pilot project, starting with the definition of the individual attributes and proceeding to their quantification in JNDs of quality, a requirement of the multivariate formalism; therefore both objective and subjective evaluations were used. A major distinction in the objective part from the 'DSC imaging world' is that the LCA/LGD distortions found in cell-phone cameras rarely exhibit radial behavior, so a radial mapping/modeling cannot be used in this case.

  11. Achieving "Final Storage Quality" of municipal solid waste in pilot scale bioreactor landfills.

    PubMed

    Valencia, R; van der Zon, W; Woelders, H; Lubberding, H J; Gijzen, H J

    2009-01-01

    Entombed waste in current sanitary landfills will generate biogas and leachate when physical barriers fail in the future, allowing the intrusion of moisture into the waste mass, contradicting the precepts of the sustainability concept. Bioreactor landfills are suggested as a sustainable option to achieve Final Storage Quality (FSQ) status of waste residues; however, it is not clear what characteristics the residues should have in order to stop operation and after-care monitoring schemes. An experiment was conducted to determine the feasibility of achieving FSQ status (Waste Acceptance Criteria of the European Landfill Directive) of residues in a pilot scale bioreactor landfill. The results of the leaching test were very encouraging because they came close to meeting the proposed stringent FSQ criterion after 2 years of operation. Furthermore, the residues have the same characteristics in terms of alternative waste stabilisation parameters (low BMP, BOD/COD ratio, VS content, SO₄²⁻/Cl⁻ ratio) established by other researchers. Mass balances showed that the bioreactor landfill simulator was capable of practically achieving biological stabilisation after 2 years of operation, while releasing approximately 45% of the total available (organic and inorganic) carbon and nitrogen into the liquid and gas phases.

  12. Sci—Fri AM: Mountain — 02: A comparison of dose reduction methods on image quality for cone beam CT

    SciTech Connect

    Webb, R; Buckley, LA

    2014-08-15

    Modern radiotherapy uses highly conformal dose distributions and therefore relies on daily image guidance for accurate patient positioning. Kilovoltage cone beam CT is one technique that is routinely used for patient set-up and results in a high dose to the patient relative to planar imaging techniques. This study uses an Elekta Synergy linac equipped with XVI cone beam CT to investigate the impact of various imaging parameters on dose and image quality. Dose and image quality are assessed as functions of x-ray tube voltage, tube current and the number of projections in the scan. In each case, the dose measurements confirm that as each parameter increases the dose increases. The assessment of high contrast resolution shows little dependence on changes to the imaging technique. However, low contrast visibility suggests a trade-off between dose and image quality. Particularly for changes in tube potential, the dose increases much faster as a function of voltage than the corresponding increase in low contrast image quality. This suggests using moderate values of the peak tube voltage (100-120 kVp) since higher values result in significant dose increases with little gain in image quality. Measurements also indicate that increasing tube current achieves the greatest degree of improvement in the low contrast visibility. The results of this study highlight the need to establish careful imaging protocols to limit dose to the patient and to limit changes to the imaging parameters to those cases where there is a clear clinical requirement for improved image quality.

  13. Importance of the grayscale in early assessment of image quality gains with iterative CT reconstruction

    NASA Astrophysics Data System (ADS)

    Noo, F.; Hahn, K.; Guo, Z.

    2016-03-01

    Iterative reconstruction methods have become an important research topic in X-ray computed tomography (CT), due to their ability to yield improvements in image quality in comparison with the classical filtered backprojection method. There are many ways to design an effective iterative reconstruction method. Moreover, for each design, there may be a large number of parameters that can be adjusted. Thus, early assessment of image quality, before clinical deployment, plays a large role in identifying and refining solutions. Currently, there are few publications reporting on early, task-based assessment of image quality achieved with iterative reconstruction methods. We report here on such an assessment, and we illustrate at the same time the importance of the grayscale used for image display when conducting this type of assessment. Our results further support observations made by others that the edge-preserving penalty term used in iterative reconstruction is a key ingredient to improving image quality in terms of detection-task performance. Our results also provide a clear demonstration of an implication made in one of our previous publications, namely that the grayscale window plays an important role in image quality comparisons involving iterative CT reconstruction methods.

  14. Fixed-quality/variable bit-rate on-board image compression for future CNES missions

    NASA Astrophysics Data System (ADS)

    Camarero, Roberto; Delaunay, Xavier; Thiebaut, Carole

    2012-10-01

    The huge improvements in resolution and dynamic range of current [1][2] and future CNES remote sensing missions (from 5m/2.5m in Spot5 to 70cm in Pleiades) illustrate the increasing need for efficient on-board image compressors. Many techniques have been considered by CNES during the last years in order to go beyond usual compression ratios: new image transforms or post-transforms [3][4], exceptional processing [5], selective compression [6]. However, even if significant improvements have been obtained, none of those techniques has ever contested an essential drawback in current on-board compression schemes: fixed rate (or compression ratio). This classical assumption provides highly predictable data volumes that simplify storage and transmission. On the other hand, it requires every image segment (strip) of the scene to be compressed into the same amount of data. Therefore, this fixed bit-rate is dimensioned on worst-case assessments to guarantee the quality requirements in all areas of the image. This is obviously not the most economical way of achieving the required image quality for every single segment. Thus, CNES has started a study to re-use existing compressors [7] in a Fixed-Quality/Variable bit-rate mode. The main idea is to compute a local complexity metric in order to assign the optimum bit-rate to comply with quality requirements. Consequently, complex areas are compressed less than simple ones, offering a better image quality for an equivalent global bit-rate. The "near-lossless bit-rate" of image segments has proven to be an efficient image complexity estimator. It links quality criteria and bit-rates through a single theoretical relationship. Compression parameters are thus automatically computed in accordance with the quality requirements. In addition, this complexity estimator could be implemented in a one-pass compression and truncation scheme.
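
    The following is a minimal Python sketch of the fixed-quality/variable bit-rate idea described in this abstract: a per-segment complexity estimate drives the bit-rate allocation. The entropy-based complexity estimator and the linear rate/quality mapping below are illustrative placeholders, not the CNES implementation.

        import numpy as np

        def estimate_near_lossless_rate(segment):
            # Hypothetical complexity proxy: entropy (bits/pixel) of horizontal
            # prediction residuals, standing in for the "near-lossless bit-rate".
            residual = np.diff(segment.astype(np.int32), axis=1)
            counts, _ = np.histogram(residual, bins=512)
            p = counts[counts > 0] / counts.sum()
            return float(-(p * np.log2(p)).sum())

        def rate_for_quality(complexity, quality_target):
            # Assumed linear rate/complexity mapping; in practice this would be
            # calibrated against the quality criterion (e.g. a PSNR target).
            return quality_target * complexity

        def allocate_rates(segments, quality_target=0.5):
            # Complex segments receive more bits, simple ones fewer, so the
            # quality requirement is met per segment rather than on the worst case.
            return [rate_for_quality(estimate_near_lossless_rate(s), quality_target)
                    for s in segments]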

  15. Perceived quality of wood images influenced by the skewness of image histogram

    NASA Astrophysics Data System (ADS)

    Katsura, Shigehito; Mizokami, Yoko; Yaguchi, Hirohisa

    2015-08-01

    The shape of image luminance histograms is related to material perception. We investigated how the luminance histogram contributed to improvements in the perceived quality of wood images by examining various natural wood and adhesive vinyl sheets with printed wood grain. In the first experiment, we visually evaluated the perceived quality of wood samples. In addition, we measured the colorimetric parameters of the wood samples and calculated statistics of image luminance. The relationship between visual evaluation scores and image statistics suggested that skewness and kurtosis affected the perceived quality of wood. In the second experiment, we evaluated the perceived quality of wood images with altered luminance skewness and kurtosis using a paired comparison method. Our result suggests that wood images are more realistic if the skewness of the luminance histogram is slightly negative.
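
    As a quick illustration of the image statistics discussed above, the sketch below computes the skewness and kurtosis of an image's luminance histogram with SciPy; the Rec. 709 luminance weights assume an sRGB-like RGB input and are not taken from the paper.

        import numpy as np
        from scipy.stats import skew, kurtosis

        def luminance_statistics(rgb):
            # rgb: float array of shape (H, W, 3) with values in [0, 1]
            luminance = rgb @ np.array([0.2126, 0.7152, 0.0722])  # Rec. 709 luma
            values = luminance.ravel()
            return {
                "mean": float(values.mean()),
                "skewness": float(skew(values)),      # negative skew: more dark pixels
                "kurtosis": float(kurtosis(values)),  # excess kurtosis (normal = 0)
            }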

  16. A feature-enriched completely blind image quality evaluator.

    PubMed

    Lin Zhang; Lei Zhang; Bovik, Alan C

    2015-08-01

    Existing blind image quality assessment (BIQA) methods are mostly opinion-aware. They learn regression models from training images with associated human subjective scores to predict the perceptual quality of test images. Such opinion-aware methods, however, require a large amount of training samples with associated human subjective scores and of a variety of distortion types. The BIQA models learned by opinion-aware methods often have weak generalization capability, thereby limiting their usability in practice. By comparison, opinion-unaware methods do not need human subjective scores for training, and thus have greater potential for good generalization capability. Unfortunately, thus far no opinion-unaware BIQA method has shown consistently better quality prediction accuracy than the opinion-aware methods. Here, we aim to develop an opinion-unaware BIQA method that can compete with, and perhaps outperform, the existing opinion-aware methods. By integrating the features of natural image statistics derived from multiple cues, we learn a multivariate Gaussian model of image patches from a collection of pristine natural images. Using the learned multivariate Gaussian model, a Bhattacharyya-like distance is used to measure the quality of each image patch, and then an overall quality score is obtained by average pooling. The proposed BIQA method does not need any distorted sample images or subjective quality scores for training, yet extensive experiments demonstrate its superior quality-prediction performance to the state-of-the-art opinion-aware BIQA methods. The MATLAB source code of our algorithm is publicly available at www.comp.polyu.edu.hk/~cslzhang/IQA/ILNIQE/ILNIQE.htm.
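
    The core computation described above can be sketched as follows. This is a simplified stand-in rather than the authors' IL-NIQE code (which uses richer natural-scene-statistics features and a patch-level Gaussian fit); the feature extraction is assumed to be done elsewhere.

        import numpy as np

        def fit_pristine_model(feature_matrix):
            # feature_matrix: (n_patches, n_features) computed from pristine images.
            mu = feature_matrix.mean(axis=0)
            cov = np.cov(feature_matrix, rowvar=False)
            return mu, cov

        def patch_quality(patch_features, mu_p, cov_p):
            # Simplified Bhattacharyya-like distance between the pristine model and
            # a test patch; here the patch covariance is approximated by the
            # pristine covariance for brevity.
            diff = patch_features - mu_p
            return float(diff @ np.linalg.pinv(cov_p) @ diff)

        def image_quality(all_patch_features, mu_p, cov_p):
            # Overall score by average pooling of per-patch distances
            # (larger distance = lower predicted quality).
            return float(np.mean([patch_quality(f, mu_p, cov_p)
                                  for f in all_patch_features]))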

  18. Impact of contact lens zone geometry and ocular optics on bifocal retinal image quality

    PubMed Central

    Bradley, Arthur; Nam, Jayoung; Xu, Renfeng; Harman, Leslie; Thibos, Larry

    2014-01-01

    Purpose To examine the separate and combined influences of zone geometry, pupil size, diffraction, apodisation and spherical aberration on the optical performance of concentric zonal bifocals. Methods Zonal bifocal pupil functions representing eye + ophthalmic correction were defined by interleaving wavefronts from separate optical zones of the bifocal. A two-zone design (a central circular inner zone surrounded by an annular outer zone bounded by the pupil) and a five-zone design (a central small circular zone surrounded by four concentric annuli) were configured with programmable zone geometry, wavefront phase and pupil transmission characteristics. Using computational methods, we examined the effects of diffraction, Stiles-Crawford apodisation, pupil size and spherical aberration on optical transfer functions for different target distances. Results Apodisation alters the relative weighting of each zone, and thus the balance of near and distance optical quality. When spherical aberration is included, the effective distance correction, add power and image quality depend on zone geometry and Stiles-Crawford Effect apodisation. When the outer zone width is narrow, diffraction limits the available image contrast when focused, but as the pupil dilates and the outer zone width increases, aberrations will limit the best achievable image quality. With two-zone designs, balancing near and distance image quality is not achieved with equal-area inner and outer zones. With significant levels of spherical aberration, multi-zone designs effectively become multifocals. Conclusion Wave optics and pupil-varying ocular optics significantly affect the imaging capabilities of different optical zones of concentric bifocals. With two-zone bifocal designs, diffraction, pupil apodisation, spherical aberration, and zone size influence both the effective add power and the pupil size required to balance near and distance image quality. Five-zone bifocal designs achieve a high degree of

  19. What Is Quality Education? How Can It Be Achieved? The Perspectives of School Middle Leaders in Singapore

    ERIC Educational Resources Information Center

    Ng, Pak Tee

    2015-01-01

    This paper presents the findings of a research project that examines how middle leaders in Singapore schools understand "quality education" and how they think quality education can be achieved. From the perspective of these middle leaders, quality education emphasises holistic development, equips students with the knowledge and skills…

  20. Optimization and image quality assessment of the alpha-image reconstruction algorithm: iterative reconstruction with well-defined image quality metrics

    NASA Astrophysics Data System (ADS)

    Lebedev, Sergej; Sawall, Stefan; Kuchenbecker, Stefan; Faby, Sebastian; Knaup, Michael; Kachelrieß, Marc

    2015-03-01

    The reconstruction of CT images with low noise and highest spatial resolution is a challenging task. Usually, a trade-off between at least these two demands has to be found or several reconstructions with mutually exclusive properties, i.e. either low noise or high spatial resolution, have to be performed. Iterative reconstruction methods might be suitable tools to overcome these limitations and provide images of highest diagnostic quality with formerly mutually exclusive image properties. While image quality metrics like the modulation transfer function (MTF) or the point spread function (PSF) are well-defined in case of standard reconstructions, e.g. filtered backprojection, the iterative algorithms lack these metrics. To overcome this issue alternate methodologies like the model observers have been proposed recently to allow a quantification of a usually task-dependent image quality metric [1]. As an alternative we recently proposed an iterative reconstruction method, the alpha-image reconstruction (AIR), providing well-defined image quality metrics on a per-voxel basis [2]. In particular, the AIR algorithm seeks to find weighting images, the alpha-images, that are used to blend between basis images with mutually exclusive image properties. The result is an image with highest diagnostic quality that provides a high spatial resolution and a low noise level. As the estimation of the alpha-images is computationally demanding we herein aim at optimizing this process and highlight the favorable properties of AIR using patient measurements.
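
    A minimal sketch of the per-voxel blending step described above follows; the computationally demanding estimation of the alpha-images, which is the actual subject of the paper, is assumed to be done elsewhere. Because the blend is linear, per-voxel quality metrics can be propagated from the known metrics of the basis images, which is the property the abstract highlights.

        import numpy as np

        def blend_basis_images(sharp, smooth, alpha):
            # sharp:  basis image with high spatial resolution but high noise
            # smooth: basis image with low noise but reduced resolution
            # alpha:  per-voxel weighting image ("alpha-image") in [0, 1]
            alpha = np.clip(alpha, 0.0, 1.0)
            return alpha * sharp + (1.0 - alpha) * smooth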

  1. Simulation of the imaging quality of ground-based telescopes affected by atmospheric disturbances

    NASA Astrophysics Data System (ADS)

    Ren, Yubin; Kou, Songfeng; Gu, Bozhong

    2014-08-01

    A ground-based telescope imaging model is developed in this paper, and the relationship between atmospheric disturbances and ground-based telescope image quality is studied. Simulation of the wave-front distortions caused by atmospheric turbulence has long been an important method in the study of the propagation of light through the atmosphere. The phase of the starlight wave-front changes over time, but over a suitably short exposure the atmospheric disturbance can be considered "frozen". In accordance with Kolmogorov turbulence theory, the atmospheric disturbance in the imaging model is simulated by generating turbulence-distorted phase screens with the fast Fourier transform (FFT). A Geiger-mode avalanche photodiode array (APD array) model is used for atmospheric wave-front detection, and the image is obtained by a photon-counting inversion method after the target starlight passes through the phase screens and the ground-based telescope. The imaging model established in this paper accurately captures the relationship between telescope image quality and single-layer or multi-layer atmospheric disturbances, and it is of great significance for wave-front detection and optical correction in a Multi-conjugate Adaptive Optics (MCAO) system.
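
    A commonly used FFT-based recipe for generating a single Kolmogorov phase screen is sketched below in Python. The normalisation follows one frequent convention and the low-frequency content is known to be under-represented by pure FFT methods, so this should be read as an illustrative sketch rather than the authors' implementation.

        import numpy as np

        def kolmogorov_phase_screen(n, pixel_size, r0, rng=None):
            # n: grid size, pixel_size: metres per pixel, r0: Fried parameter (m)
            rng = np.random.default_rng() if rng is None else rng
            df = 1.0 / (n * pixel_size)                 # frequency grid spacing (1/m)
            fx = np.fft.fftfreq(n, d=pixel_size)
            fxx, fyy = np.meshgrid(fx, fx)
            f = np.hypot(fxx, fyy)
            f[0, 0] = np.inf                            # suppress the undefined DC term
            psd = 0.023 * r0 ** (-5.0 / 3.0) * f ** (-11.0 / 3.0)  # Kolmogorov phase PSD
            # Complex Gaussian spectrum weighted by the PSD; taking the real part of
            # the inverse FFT below discards the Hermitian-symmetry bookkeeping.
            spectrum = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
            spectrum *= np.sqrt(psd) * df
            return np.fft.ifft2(spectrum).real * n * n  # phase screen in radians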

  2. Pre-analytic process control: projecting a quality image.

    PubMed

    Serafin, Mark D

    2006-01-01

    Within the health-care system, the term "ancillary department" often describes the laboratory. Thus, laboratories may find it difficult to define their image and, with it, customer perception of department quality. Regulatory requirements give laboratories that so desire an elegant way to address image and perception issues--a comprehensive pre-analytic system solution. Since large laboratories use such systems--laboratory service manuals--I describe and illustrate the process for the benefit of smaller facilities. There exist resources to help even small laboratories produce a professional service manual--an elegant solution to image and customer perception of quality. PMID:17005095

  3. The effect of image quality and forensic expertise in facial image comparisons.

    PubMed

    Norell, Kristin; Läthén, Klas Brorsson; Bergström, Peter; Rice, Allyson; Natu, Vaidehi; O'Toole, Alice

    2015-03-01

    Images of perpetrators in surveillance video footage are often used as evidence in court. In this study, identification accuracy in facial image comparisons was compared for forensic experts and untrained persons, along with the impact of image quality. Participants viewed thirty image pairs and were asked to rate the level of support garnered from their observations for concluding whether or not the two images showed the same person. Forensic experts reached their conclusions with significantly fewer errors than did untrained participants. They were also better than novices at determining when two high-quality images depicted the same person. Notably, lower image quality led to more careful conclusions by experts, but not by untrained participants. In summary, the untrained participants had more false negatives and false positives than experts; in the latter case this could lead to a higher risk of an innocent person being convicted when the comparison is made by an untrained witness. PMID:25537273

  4. Improving high resolution retinal image quality using speckle illumination HiLo imaging

    PubMed Central

    Zhou, Xiaolin; Bedggood, Phillip; Metha, Andrew

    2014-01-01

    Retinal image quality from flood illumination adaptive optics (AO) ophthalmoscopes is adversely affected by out-of-focus light scatter due to the lack of confocality. This effect is more pronounced in small eyes, such as those of rodents, because the requisite high optical power confers a large dioptric thickness to the retina. A recently-developed structured illumination microscopy (SIM) technique called HiLo imaging has been shown to reduce the effect of out-of-focus light scatter in flood illumination microscopes and produce pseudo-confocal images with significantly improved image quality. In this work, we adapted the HiLo technique to a flood AO ophthalmoscope and performed AO imaging in both (physical) model and live rat eyes. The improvement in image quality from HiLo imaging is shown both qualitatively and quantitatively by using spatial spectral analysis. PMID:25136486

  5. Optimizing CT radiation dose based on patient size and image quality: the size-specific dose estimate method.

    PubMed

    Larson, David B

    2014-10-01

    The principle of ALARA (dose as low as reasonably achievable) calls for dose optimization rather than dose reduction, per se. Optimization of CT radiation dose is accomplished by producing images of acceptable diagnostic image quality using the lowest dose method available. Because it is image quality that constrains the dose, CT dose optimization is primarily a problem of image quality rather than radiation dose. Therefore, the primary focus in CT radiation dose optimization should be on image quality. However, no reliable direct measure of image quality has been developed for routine clinical practice. Until such measures become available, size-specific dose estimates (SSDE) can be used as a reasonable image-quality estimate. The SSDE method of radiation dose optimization for CT abdomen and pelvis consists of plotting SSDE for a sample of examinations as a function of patient size, establishing an SSDE threshold curve based on radiologists' assessment of image quality, and modifying protocols to consistently produce doses that are slightly above the threshold SSDE curve. Challenges in operationalizing CT radiation dose optimization include data gathering and monitoring, managing the complexities of the numerous protocols, scanners and operators, and understanding the relationship of the automated tube current modulation (ATCM) parameters to image quality. Because CT manufacturers currently maintain their ATCM algorithms as secret for proprietary reasons, prospective modeling of SSDE for patient populations is not possible without reverse engineering the ATCM algorithm and, hence, optimization by this method requires a trial-and-error approach.
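
    A minimal sketch of the workflow described above follows, assuming a table of per-exam patient sizes, SSDE values and radiologist-approved image quality flags. The exponential form of the threshold curve and the flagging margin are illustrative assumptions for this sketch, not part of the published method.

        import numpy as np

        def fit_ssde_threshold(size_cm, ssde_mgy, acceptable):
            # Fit an exponential curve SSDE = a * exp(b * size) to the exams rated
            # acceptable, as a stand-in for the radiologist-derived threshold curve.
            ok = np.asarray(acceptable, dtype=bool)
            b, log_a = np.polyfit(np.asarray(size_cm)[ok],
                                  np.log(np.asarray(ssde_mgy)[ok]), 1)
            return np.exp(log_a), b

        def flag_exams(size_cm, ssde_mgy, a, b, margin=1.1):
            # Exams well above the threshold curve are candidates for protocol tuning.
            threshold = a * np.exp(b * np.asarray(size_cm))
            return np.asarray(ssde_mgy) > margin * threshold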

  6. APQ-102 imaging radar digital image quality study

    NASA Technical Reports Server (NTRS)

    Griffin, C. R.; Estes, J. M.

    1982-01-01

    A modified APQ-102 sidelooking radar collected synthetic aperture radar (SAR) data which was digitized and recorded on wideband magnetic tape. These tapes were then ground processed into computer compatible tapes (CCT's). The CCT's may then be processed into high resolution radar images by software on the CYBER computer.

  7. The influence of software filtering in digital mammography image quality

    NASA Astrophysics Data System (ADS)

    Michail, C.; Spyropoulou, V.; Kalyvas, N.; Valais, I.; Dimitropoulos, N.; Fountos, G.; Kandarakis, I.; Panayiotakis, G.

    2009-05-01

    Breast cancer is one of the most frequently diagnosed cancers among women. Several techniques have been developed to help in the early detection of breast cancer such as conventional and digital x-ray mammography, positron and single-photon emission mammography, etc. A key advantage in digital mammography is that images can be manipulated as simple computer image files. Thus non-dedicated commercially available image manipulation software can be employed to process and store the images. The image processing tools of the Photoshop (CS 2) software usually incorporate digital filters which may be used to reduce image noise, enhance contrast and increase spatial resolution. However, improving one image quality parameter may result in degradation of another. The aim of this work was to investigate the influence of three sharpening filters, named hereafter sharpen, sharpen more and sharpen edges, on image resolution and noise. Image resolution was assessed by means of the Modulation Transfer Function (MTF). In conclusion it was found that the correct use of commercial non-dedicated software on digital mammograms may improve some aspects of image quality.

  8. Thematic Mapper image quality: Preliminary results

    NASA Technical Reports Server (NTRS)

    Wrigley, R. C.; Card, D. H.; Hlavka, C. A.; Likens, W. C.; Mertz, F. C.; Hall, J. R.

    1983-01-01

    Based on images analyzed so far, the band to band registration accuracy of the thematic mapper is very good. For bands within the same focal plane, the mean misregistrations are well within the specification, 0.2 pixels. For bands between the cooled and uncooled focal planes, there is a consistent mean misregistration of 0.5 pixels along-scan and 0.2-0.3 pixels across-scan. It exceeds the permitted 0.3 pixels for registration of bands between focal planes. If the mean misregistrations were removed by the data processing software, an analysis of the standard deviation of the misregistration indicates all band combinations would meet the registration specifications except for those including the thermal band. Analysis of the periodic noise in one image indicates a noise component in band 1 with a spatial frequency equivalent to 3.2 pixels in the along-scan direction.

  9. Validation of no-reference image quality index for the assessment of digital mammographic images

    NASA Astrophysics Data System (ADS)

    de Oliveira, Helder C. R.; Barufaldi, Bruno; Borges, Lucas R.; Gabarda, Salvador; Bakic, Predrag R.; Maidment, Andrew D. A.; Schiabel, Homero; Vieira, Marcelo A. C.

    2016-03-01

    To ensure optimal clinical performance of digital mammography, it is necessary to obtain images with high spatial resolution and low noise, keeping radiation exposure as low as possible. These requirements directly affect the interpretation of radiologists. The quality of a digital image should be assessed using objective measurements. In general, these methods measure the similarity between a degraded image and an ideal image without degradation (ground-truth), used as a reference. These methods are called Full-Reference Image Quality Assessment (FR-IQA). However, for digital mammography, an image without degradation is not available in clinical practice; thus, an objective method to assess the quality of mammograms must be performed without reference. The purpose of this study is to present a Normalized Anisotropic Quality Index (NAQI), based on the Rényi entropy in the pseudo-Wigner domain, to assess mammography images in terms of spatial resolution and noise without any reference. The method was validated using synthetic images acquired through an anthropomorphic breast software phantom, and clinical exposures on anthropomorphic breast physical phantoms and patients' mammograms. The results reported by this no-reference index follow the same behavior as other well-established full-reference metrics, e.g., the peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM). Reductions of 50% on the radiation dose in phantom images were translated as a decrease of 4dB on the PSNR, 25% on the SSIM and 33% on the NAQI, evidencing that the proposed metric is sensitive to the noise resulting from dose reduction. The clinical results showed that images reduced to 53% and 30% of the standard radiation dose reported reductions of 15% and 25% on the NAQI, respectively. Thus, this index may be used in clinical practice as an image quality indicator to improve the quality assurance programs in mammography; hence, the proposed method reduces the subjectivity
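
    For reference, the two full-reference metrics mentioned above can be computed with scikit-image as follows, here comparing a dose-reduced image against a full-dose image used as the reference; the function below is a small convenience wrapper written for this note, not code from the paper.

        from skimage.metrics import peak_signal_noise_ratio, structural_similarity

        def full_reference_metrics(reference, degraded, data_range=None):
            # reference: full-dose image, degraded: reduced-dose image (same shape)
            if data_range is None:
                data_range = reference.max() - reference.min()
            psnr = peak_signal_noise_ratio(reference, degraded, data_range=data_range)
            ssim = structural_similarity(reference, degraded, data_range=data_range)
            return psnr, ssim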

  10. Characterization of the image quality in neutron radioscopy

    NASA Astrophysics Data System (ADS)

    Brunner, J.; Engelhardt, M.; Frei, G.; Gildemeister, A.; Lehmann, E.; Hillenbach, A.; Schillinger, B.

    2005-04-01

    Neutron radioscopy, or dynamic neutron radiography, is a non-destructive testing method which has made great strides in recent years. Depending on the neutron flux, the object and the detector, a time resolution down to a few milliseconds is possible for single events. In the case of repetitive processes the object can be synchronized with the detector, and better statistics in the image can be achieved by adding radiographies of the same phase with a time resolution down to 100 μs. By stepwise delaying the trigger signal, a radiography movie can be composed. Radiography images of a combustion engine and an injection nozzle were evaluated quantitatively by different methods in an attempt to characterize the image quality of an imaging system. The main factors which influence the image quality are listed and discussed.

  11. Method for image quality monitoring on digital television networks

    NASA Astrophysics Data System (ADS)

    Bretillon, Pierre; Baina, Jamal; Jourlin, Michel; Goudezeune, Gabriel

    1999-11-01

    This paper presents a method designed to monitor image quality. The emphasis here is on monitoring in digital television broadcasting networks, in order for the providers to ensure a 'user-oriented' Quality of Service. Most objective image quality assessment methods are technically very difficult to apply in this context because of bandwidth limitations. We propose a parametric, reduced-reference method that relies on the evaluation of characteristic coding and transmission impairments with a set of features. We show that quality can be predicted with a satisfying correlation to a subjective evaluation by combining several impairment features in an appropriate model. The method has been implemented and tested in a range of situations on simulated and real DVB networks. This allows us to draw conclusions about the usefulness of the approach and informs our future developments for quality of service monitoring in digital television.

  12. Design of a practical model-observer-based image quality assessment method for x-ray computed tomography imaging systems.

    PubMed

    Tseng, Hsin-Wu; Fan, Jiahua; Kupinski, Matthew A

    2016-07-01

    The use of a channelization mechanism on model observers not only makes mimicking human visual behavior possible, but also reduces the amount of image data needed to estimate the model observer parameters. The channelized Hotelling observer (CHO) and channelized scanning linear observer (CSLO) have recently been used to assess CT image quality for detection tasks and combined detection/estimation tasks, respectively. Although the use of channels substantially reduces the amount of data required to compute image quality, the number of scans required for CT imaging is still not practical for routine use. It is our desire to further reduce the number of scans required to make CHO or CSLO an image quality tool for routine and frequent system validations and evaluations. This work explores different data-reduction schemes and designs an approach that requires only a few CT scans. Three different kinds of approaches are included in this study: a conventional CHO/CSLO technique with a large sample size, a conventional CHO/CSLO technique with fewer samples, and an approach that we will show requires fewer samples to mimic conventional performance with a large sample size. The mean value and standard deviation of areas under ROC/EROC curve were estimated using the well-validated shuffle approach. The results indicate that an 80% data reduction can be achieved without loss of accuracy. This substantial data reduction is a step toward a practical tool for routine-task-based QA/QC CT system assessment.
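
    A compact NumPy sketch of the channelized Hotelling observer discussed above follows. The channel matrix (e.g. Gabor or Laguerre-Gauss channels) is assumed to be supplied, and the paper's data-reduction and shuffle machinery is not reproduced.

        import numpy as np

        def cho_detectability(signal_present, signal_absent, channels):
            # signal_present / signal_absent: (n_images, n_pixels) arrays
            # channels: (n_pixels, n_channels) channel matrix U
            v_sp = signal_present @ channels           # channelized images
            v_sa = signal_absent @ channels
            mean_diff = v_sp.mean(axis=0) - v_sa.mean(axis=0)
            s = 0.5 * (np.cov(v_sp, rowvar=False) + np.cov(v_sa, rowvar=False))
            w = np.linalg.solve(s, mean_diff)          # Hotelling template (channel space)
            t_sp, t_sa = v_sp @ w, v_sa @ w            # observer test statistics
            d2 = (t_sp.mean() - t_sa.mean()) ** 2 / (0.5 * (t_sp.var() + t_sa.var()))
            return np.sqrt(d2)                         # detectability index d'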

  13. Toward the development of an image quality tool for active millimeter wave imaging systems

    NASA Astrophysics Data System (ADS)

    Barber, Jeffrey; Weatherall, James C.; Greca, Joseph; Smith, Barry T.

    2015-05-01

    Preliminary design considerations for an image quality tool to complement millimeter wave imaging systems are presented. The tool is planned for use in confirming operating parameters, confirming continuity across imaging component design changes, and analyzing new components and detection algorithms. Potential embodiments of an image quality tool may contain materials that mimic human skin in order to provide a realistic signal return for testing, which may also help reduce or eliminate the need for mock passengers during developmental testing. Two candidate materials, a dielectric liquid and an iron-loaded epoxy, have been identified and reflection measurements have been performed using laboratory systems in the range 18 - 40 GHz. Results show good agreement with both laboratory and literature data on human skin, particularly in the range of operation of two commercially available millimeter wave imaging systems. Issues related to the practical use of liquids and magnetic materials for image quality tools are discussed.

  14. Body image and quality of life in a Spanish population

    PubMed Central

    Lobera, Ignacio Jáuregui; Ríos, Patricia Bolaños

    2011-01-01

    Purpose The aim of the current study was to analyze the psychometric properties, factor structure, and internal consistency of the Spanish version of the Body Image Quality of Life Inventory (BIQLI-SP) as well as its test–retest reliability. Further objectives were to analyze different relationships with key dimensions of psychosocial functioning (ie, self-esteem, presence of psychopathological symptoms, eating and body image-related problems, and perceived stress) and to evaluate differences in body image quality of life due to gender. Patients and methods The sample comprised 417 students without any psychiatric history, recruited from the Pablo de Olavide University and the University of Seville. There were 140 men (33.57%) and 277 women (66.43%), and the mean age was 21.62 years (standard deviation = 5.12). After obtaining informed consent from all participants, the following questionnaires were administered: BIQLI, Eating Disorder Inventory-2 (EDI-2), Perceived Stress Questionnaire (PSQ), Self-Esteem Scale (SES), and Symptom Checklist-90-Revised (SCL-90-R). Results The BIQLI-SP shows adequate psychometric properties, and it may be useful to determine the body image quality of life in different physical conditions. A more positive body image quality of life is associated with better self-esteem, better psychological wellbeing, and fewer eating-related dysfunctional attitudes, this being more evident among women. Conclusion The BIQLI-SP may be useful to determine the body image quality of life in different contexts with regard to dermatology, cosmetic and reconstructive surgery, and endocrinology, among others. In these fields of study, a new trend has emerged to assess body image-related quality of life. PMID:21403794

  15. Analysis of image quality for laser display scanner test

    NASA Astrophysics Data System (ADS)

    Specht, H.; Kurth, S.; Billep, D.; Gessner, T.

    2009-02-01

    The scanning laser display technology is one of the most promising technologies for highly integrated projection display applications (e.g. in PDAs, mobile phones or head mounted displays) due to its advantages regarding image quality, miniaturization level and low cost potential. As several research teams found during their investigations of laser scanning projection systems, the image quality of such systems is - apart from the laser source and video signal processing - crucially determined by the scan engine, including the MEMS scanner, driving electronics, scanning regime and synchronization. Even though a number of technical parameters can be measured with high accuracy, the test procedure is challenging because the influence of these parameters on image quality is often insufficiently understood. Thus, in many cases it is not clear how to define limiting values for characteristic parameters. In this paper the relationship between parameters characterizing the scan engine and their influence on image quality will be discussed. Those include scanner topography, geometry of the path of light as well as trajectory parameters. Understanding this enables a new methodology for testing and characterization of the scan engine, based on evaluation of one or a series of projected test images. Because the evaluation process can be easily automated by digital image processing, this methodology has the potential to become integrated into the production process of laser displays.

  16. Faster, higher quality volume visualization for 3D medical imaging

    NASA Astrophysics Data System (ADS)

    Kalvin, Alan D.; Laine, Andrew F.; Song, Ting

    2008-03-01

    The two major volume visualization methods used in biomedical applications are Maximum Intensity Projection (MIP) and Volume Rendering (VR), both of which involve the process of creating sets of 2D projections from 3D images. We have developed a new method for very fast, high-quality volume visualization of 3D biomedical images, based on the fact that the inverse of this process (transforming 2D projections into a 3D image) is essentially equivalent to tomographic image reconstruction. This new method uses the 2D projections acquired by the scanner, thereby obviating the need for the two computationally expensive steps currently required in the complete process of biomedical visualization, that is, (i) reconstructing the 3D image from 2D projection data, and (ii) computing the set of 2D projections from the reconstructed 3D image. As well as improvements in computation speed, this method also results in improvements in visualization quality, and in the case of x-ray CT we can exploit this quality improvement to reduce radiation dosage. In this paper, we demonstrate the benefits of developing biomedical visualization techniques by directly processing the sensor data acquired by body scanners, rather than by processing the image data reconstructed from the sensor data. We show results of using this approach for volume visualization for tomographic modalities such as x-ray CT, as well as for MRI.
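
    For orientation, the conventional reconstruct-then-project route that the paper avoids is simple to state: Maximum Intensity Projection from a reconstructed volume is a single reduction along the viewing axis. A minimal NumPy sketch:

        import numpy as np

        def maximum_intensity_projection(volume, axis=0):
            # volume: 3D array (e.g. a reconstructed CT or MR image)
            # Returns the 2D projection of the brightest voxel along the chosen axis.
            return volume.max(axis=axis)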

  17. An approach for quantitative image quality analysis for CT

    NASA Astrophysics Data System (ADS)

    Rahimi, Amir; Cochran, Joe; Mooney, Doug; Regensburger, Joe

    2016-03-01

    An objective and standardized approach to assess image quality of Computed Tomography (CT) systems is required in a wide variety of imaging processes to identify CT systems appropriate for a given application. We present an overview of the framework we have developed to help standardize and to objectively assess CT image quality for different models of CT scanners used for security applications. Within this framework, we have developed methods to quantitatively measure metrics that should correlate with feature identification, detection accuracy and precision, and image registration capabilities of CT machines and to identify strengths and weaknesses in different CT imaging technologies in transportation security. To that end we have designed, developed and constructed phantoms that allow for systematic and repeatable measurements of roughly 88 image quality metrics, representing modulation transfer function, noise equivalent quanta, noise power spectra, slice sensitivity profiles, streak artifacts, CT number uniformity, CT number consistency, object length accuracy, CT number path length consistency, and object registration. Furthermore, we have developed a sophisticated MATLAB-based image analysis tool kit to analyze CT-generated images of phantoms and report these metrics in a format that is standardized across the considered models of CT scanners, allowing for comparative image quality analysis within a CT model or between different CT models. In addition, we have developed a modified sparse principal component analysis (SPCA) method to generate a modified set of PCA components as compared to the standard principal component analysis (PCA) with sparse loadings, in conjunction with the Hotelling T2 statistical analysis method, to compare, qualify, and detect faults in the tested systems.

  18. TH-C-18A-06: Combined CT Image Quality and Radiation Dose Monitoring Program Based On Patient Data to Assess Consistency of Clinical Imaging Across Scanner Models

    SciTech Connect

    Christianson, O; Winslow, J; Samei, E

    2014-06-15

    Purpose: One of the principal challenges of clinical imaging is to achieve an ideal balance between image quality and radiation dose across multiple CT models. The number of scanners and protocols at large medical centers necessitates an automated quality assurance program to facilitate this objective. Therefore, the goal of this work was to implement an automated CT image quality and radiation dose monitoring program based on actual patient data and to use this program to assess consistency of protocols across CT scanner models. Methods: Patient CT scans are routed to a HIPAA-compliant quality assurance server. CTDI, extracted using optical character recognition, and patient size, measured from the localizers, are used to calculate SSDE. A previously validated noise measurement algorithm determines the noise in uniform areas of the image across the scanned anatomy to generate a global noise level (GNL). Using this program, 2358 abdominopelvic scans acquired on three commercial CT scanners were analyzed. Median SSDE and GNL were compared across scanner models and trends in SSDE and GNL with patient size were used to determine the impact of differing automatic exposure control (AEC) algorithms. Results: There was a significant difference in both SSDE and GNL across scanner models (9–33% and 15–35% for SSDE and GNL, respectively). Adjusting all protocols to achieve the same image noise would reduce patient dose by 27–45% depending on scanner model. Additionally, differences in AEC methodologies across vendors resulted in disparate relationships of SSDE and GNL with patient size. Conclusion: The difference in noise across scanner models indicates that protocols are not optimally matched to achieve consistent image quality. Our results indicated substantial possibility for dose reduction while achieving more consistent image appearance. Finally, the difference in AEC methodologies suggests the need for size-specific CT protocols to minimize variability in image
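
    A simplified sketch of the global noise level (GNL) idea described above: local standard deviations are computed over the scanned anatomy, and the most common value inside soft-tissue voxels is taken as the image noise level. The HU window and kernel size below are illustrative assumptions, not the authors' validated algorithm.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def global_noise_level(image_hu, kernel=9, hu_range=(0, 100)):
            # Local standard deviation via the identity var = E[x^2] - E[x]^2.
            img = image_hu.astype(float)
            mean = uniform_filter(img, kernel)
            mean_sq = uniform_filter(img ** 2, kernel)
            local_std = np.sqrt(np.clip(mean_sq - mean ** 2, 0, None))
            # Restrict to (approximately) uniform soft-tissue voxels.
            mask = (image_hu > hu_range[0]) & (image_hu < hu_range[1])
            hist, edges = np.histogram(local_std[mask], bins=100)
            return float(edges[np.argmax(hist)])   # mode of the local-noise histogram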

  19. Real-time computer treatment of THz passive device images with the high image quality

    NASA Astrophysics Data System (ADS)

    Trofimov, Vyacheslav A.; Trofimov, Vladislav V.

    2012-06-01

    We demonstrate a real-time computer code that significantly improves the quality of images captured by a passive THz imaging system. The code is not designed only for one passive THz device: it can be applied to any such device, as well as to active THz imaging systems. We applied our code to computer processing of images captured by four passive THz imaging devices manufactured by different companies. It should be stressed that processing images produced by different companies usually requires different spatial filters. The performance of the current version of the computer code is greater than one image per second for a THz image having more than 5000 pixels and 24-bit number representation. Processing of a single THz image produces about 20 images simultaneously, corresponding to various spatial filters. The computer code allows increasing the number of pixels in processed images without noticeable reduction of image quality. The performance of the computer code can be increased many times using parallel algorithms for processing the image. We develop original spatial filters which allow one to see objects with sizes less than 2 cm. The imagery is produced by passive THz imaging devices which capture images of objects hidden under opaque clothes. For images with high noise we develop an approach that suppresses the noise during computer processing and yields a good-quality image. To illustrate the efficiency of the developed approach, we demonstrate the detection of a liquid explosive, an ordinary explosive, a knife, a pistol, a metal plate, a CD, ceramics, chocolate and other objects hidden under opaque clothes. The results demonstrate the high efficiency of our approach for the detection of hidden objects and are a very promising solution for the security problem.

  20. Real-time imaging systems' combination of methods to achieve automatic target recognition

    NASA Astrophysics Data System (ADS)

    Maraviglia, Carlos G.; Williams, Elmer F.; Pezzulich, Alan Z.

    1998-03-01

    Using a combination of strategies, real-time imaging weapons systems are achieving their goal of detecting their intended targets. The demands of acquiring a target in a cluttered environment in a timely manner with a high degree of confidence require that compromises be made regarding a truly automatic system. A combination of techniques such as dedicated image processing hardware, real-time operating systems, mixes of algorithmic methods, and multi-sensor detectors hint at the unleashed potential of future weapons systems and their incorporation of truly autonomous target acquisition. Elements such as position information, sensor gain controls, way marks for mid-course correction, and augmentation with different imaging spectra, as well as future capabilities such as neural net expert systems and decision processors overseeing a fusion matrix architecture, may be considered tools for a weapon system's achievement of its ultimate goal. Currently, acquiring a target in a cluttered environment in a timely manner with a high degree of confidence demands that compromises be made regarding a truly automatic system. It is now necessary to include a human in the track decision loop, a system feature that may be long-lived. Automatic Track Recognition will still be the desired goal in future systems due to the variability of military missions and the desirability of an expendable asset. Furthermore, with the increasing incorporation of multi-sensor information into the track decision, the human element's real-time contribution must be carefully engineered.

  1. The influence of noise on image quality in phase-diverse coherent diffraction imaging

    NASA Astrophysics Data System (ADS)

    Wittler, H. P. A.; van Riessen, G. A.; Jones, M. W. M.

    2016-02-01

    Phase-diverse coherent diffraction imaging provides a route to high sensitivity and resolution with low radiation dose. To take full advantage of this, the characteristics and tolerable limits of measurement noise for high quality images must be understood. In this work we show the artefacts that manifest in images recovered from simulated data with noise of various characteristics in the illumination and diffraction pattern. We explore the limits at which images of acceptable quality can be obtained and suggest qualitative guidelines that would allow for faster data acquisition and minimize radiation dose.

  2. Charting the course for home health care quality: action steps for achieving sustainable improvement: conference proceedings.

    PubMed

    Feldman, Penny Hollander; Peterson, Laura E; Reische, Laurie; Bruno, Lori; Clark, Amy

    2004-12-01

    On June 30 and July 1, 2003, the first national meeting Charting the Course for Home Health Care Quality: Action Steps for Achieving Sustainable Improvement convened in New York City. The Center for Home Care Policy & Research of the Visiting Nurse Service of New York (VNSNY) hosted the meeting with support from the Robert Wood Johnson Foundation. Fifty-seven attendees from throughout the United States participated. The participants included senior leaders and managers and nurses working directly in home care today. The meeting's objectives were to: 1. foster dialogue among key constituents influencing patient safety and home care, 2. promote information-sharing across sectors and identify areas where more information is needed, and, 3. develop an agenda and strategy for moving forward. This article reports the meeting's proceedings.

  3. Achieving sustainability, quality and access: lessons from the world's largest revolving drug fund in Khartoum.

    PubMed

    Witter, S

    2007-01-01

    Ensuring a reliable and affordable supply of essential drugs to health facilities is one of the main challenges facing developing countries. This paper describes the revolving drug fund in Khartoum, which was set up in 1989 to improve access to high quality drugs across the State. An evaluation in 2004 showed that the fund has successfully managed a number of threats to its financial sustainability and has expanded its network of facilities, its range of products and its financial assets. It now supplies essential drugs to 3 million out of the 5 million population of Khartoum each year, at prices between 40% and 100% less than alternative sources. However, results illustrated the tension between achieving an efficient cost-recovery system and access for the poorest.

  4. Exploratory survey of image quality on CR digital mammography imaging systems in Mexico.

    PubMed

    Gaona, E; Rivera, T; Arreola, M; Franco, J; Molina, N; Alvarez, B; Azorín, C G; Casian, G

    2014-01-01

    The purpose of this study was to assess the current status of image quality and dose in computed radiographic digital mammography (CRDM) systems. The study included CRDM systems of various models and manufacturers, for which dose and image quality comparisons were performed. Due to the recent rise in the use of digital radiographic systems in Mexico, CRDM systems are rapidly replacing conventional film-screen systems without any regard to quality control or image quality standards. The study was conducted in 65 mammography facilities that use CRDM systems in Mexico City and the surrounding States. The systems were tested as used clinically. This means that the dose and beam qualities were selected using the automatic beam selection and photo-timed features. All systems surveyed generate laser film hardcopies for the radiologist to read on a scope or mammographic high luminance light box. It was found that 51 of the CRDM systems presented a variety of image artefacts and non-uniformities arising from inadequate acquisition and processing, as well as from the laser printer itself. Undisciplined alteration of image processing settings by the technologist was found to be a serious prevalent problem in 42 facilities. Only four of them showed an image QC program which is periodically monitored by a medical physicist. The Average Glandular Dose (AGD) in the surveyed systems was estimated to have a mean value of 2.4 mGy. New legislation is required to improve image quality in mammography and to make screening mammography more efficient for the early detection of breast cancer.

  6. Taking advantage of ground data systems attributes to achieve quality results in testing software

    NASA Technical Reports Server (NTRS)

    Sigman, Clayton B.; Koslosky, John T.; Hageman, Barbara H.

    1994-01-01

    During the software development life cycle process, basic testing starts with the development team. At the end of the development process, an acceptance test is performed for the user to ensure that the deliverable is acceptable. Ideally, the delivery is an operational product with zero defects. However, the goal of zero defects is normally not achieved but is successful to various degrees. With the emphasis on building low cost ground support systems while maintaining a quality product, a key element in the test process is simulator capability. This paper reviews the Transportable Payload Operations Control Center (TPOCC) Advanced Spacecraft Simulator (TASS) test tool that is used in the acceptance test process for unmanned satellite operations control centers. The TASS is designed to support the development, test and operational environments of the Goddard Space Flight Center (GSFC) operations control centers. The TASS uses the same basic architecture as the operations control center. This architecture is characterized by its use of distributed processing, industry standards, commercial off-the-shelf (COTS) hardware and software components, and reusable software. The TASS uses much of the same TPOCC architecture and reusable software that the operations control center developer uses. The TASS also makes use of reusable simulator software in the mission specific versions of the TASS. Very little new software needs to be developed, mainly mission specific telemetry communication and command processing software. By taking advantage of the ground data system attributes, successful software reuse for operational systems provides the opportunity to extend the reuse concept into the test area. Consistency in test approach is a major step in achieving quality results.

  7. No-reference image quality assessment for horizontal-path imaging scenarios

    NASA Astrophysics Data System (ADS)

    Rios, Carlos; Gladysz, Szymon

    2013-05-01

    There exist several image-enhancement algorithms and tasks associated with imaging through turbulence that depend on defining the quality of an image. Examples include: "lucky imaging", choosing the width of the inverse filter for image reconstruction, or stopping iterative deconvolution. We collected a number of image quality metrics found in the literature. Particularly interesting are the blind, "no-reference" metrics. We discuss ways of evaluating the usefulness of these metrics, even when a fully objective comparison is impossible because of the lack of a reference image. Metrics are tested on simulated and real data. Field data comes from experiments performed by the NATO SET 165 research group over a 7 km distance in Dayton, Ohio.

  8. Effects of sparse sampling schemes on image quality in low-dose CT

    SciTech Connect

    Abbas, Sajid; Lee, Taewon; Cho, Seungryong; Shin, Sukyoung; Lee, Rena

    2013-11-15

    Purpose: Various scanning methods and image reconstruction algorithms are actively investigated for low-dose computed tomography (CT) that can potentially reduce a health-risk related to radiation dose. Particularly, compressive-sensing (CS) based algorithms have been successfully developed for reconstructing images from sparsely sampled data. Although these algorithms have shown promises in low-dose CT, it has not been studied how sparse sampling schemes affect image quality in CS-based image reconstruction. In this work, the authors present several sparse-sampling schemes for low-dose CT, quantitatively analyze their data property, and compare effects of the sampling schemes on the image quality.Methods: Data properties of several sampling schemes are analyzed with respect to the CS-based image reconstruction using two measures: sampling density and data incoherence. The authors present five different sparse sampling schemes, and simulated those schemes to achieve a targeted dose reduction. Dose reduction factors of about 75% and 87.5%, compared to a conventional scan, were tested. A fully sampled circular cone-beam CT data set was used as a reference, and sparse sampling has been realized numerically based on the CBCT data.Results: It is found that both sampling density and data incoherence affect the image quality in the CS-based reconstruction. Among the sampling schemes the authors investigated, the sparse-view, many-view undersampling (MVUS)-fine, and MVUS-moving cases have shown promising results. These sampling schemes produced images with similar image quality compared to the reference image and their structure similarity index values were higher than 0.92 in the mouse head scan with 75% dose reduction.Conclusions: The authors found that in CS-based image reconstructions both sampling density and data incoherence affect the image quality, and suggest that a sampling scheme should be devised and optimized by use of these indicators. With this strategic

  9. Nanoscopy—imaging life at the nanoscale: a Nobel Prize achievement with a bright future

    NASA Astrophysics Data System (ADS)

    Blom, Hans; Bates, Mark

    2015-10-01

    A grand scientific prize was awarded last year to three pioneering scientists for their discovery and development of molecular ‘ON-OFF’ switching which, when combined with optical imaging, can be used to see the previously invisible with light microscopy. The Royal Swedish Academy of Sciences announced its decision on October 8th and explained that this achievement—rooted in physics and applied in biology and medicine—was recognized with the Nobel Prize in Chemistry for controlling fluorescent molecules to create images of specimens smaller than anything previously observed with light. The story of how this noble switch in optical microscopy was achieved and how it was engineered to visualize life at the nanoscale is highlighted in this invited comment.

  10. Radiation dose and image quality for paediatric interventional cardiology

    NASA Astrophysics Data System (ADS)

    Vano, E.; Ubeda, C.; Leyton, F.; Miranda, P.

    2008-08-01

    Radiation dose and image quality for paediatric protocols in a biplane x-ray system used for interventional cardiology have been evaluated. Entrance surface air kerma (ESAK) and image quality using a test object and polymethyl methacrylate (PMMA) phantoms have been measured for the typical paediatric patient thicknesses (4-20 cm of PMMA). Images from fluoroscopy (low, medium and high) and cine modes have been archived in digital imaging and communications in medicine (DICOM) format. Signal-to-noise ratio (SNR), figure of merit (FOM), contrast (CO), contrast-to-noise ratio (CNR) and high contrast spatial resolution (HCSR) have been computed from the images. Data on dose transferred to the DICOM header have been used to test the values of the dosimetric display at the interventional reference point. ESAK for fluoroscopy modes ranges from 0.15 to 36.60 µGy/frame when moving from 4 to 20 cm PMMA. For cine, these values range from 2.80 to 161.10 µGy/frame. SNR, FOM, CO, CNR and HCSR are improved for high fluoroscopy and cine modes and remain roughly constant across the different thicknesses. Cumulative dose at the interventional reference point was 25-45% higher than the skin dose for the vertical C-arm (depending on the phantom thickness). ESAK and numerical image quality parameters allow the verification of the proper setting of the x-ray system. Knowing the increase in dose per frame with increasing phantom thickness, together with the image quality parameters, will help cardiologists manage patient dose and select the best imaging acquisition mode during clinical procedures.
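
    The numerical image quality parameters mentioned above can be obtained from simple region-of-interest statistics. A minimal sketch follows; the figure of merit is defined here as CNR squared per unit dose, which is one common convention and is stated as an assumption rather than taken from the paper.

        import numpy as np

        def roi_quality_metrics(signal_roi, background_roi, dose_per_frame_ugy):
            # signal_roi / background_roi: pixel-value arrays from the test object image
            mean_s, mean_b = np.mean(signal_roi), np.mean(background_roi)
            std_b = np.std(background_roi)
            snr = mean_s / std_b
            contrast = (mean_s - mean_b) / mean_b
            cnr = (mean_s - mean_b) / std_b
            fom = cnr ** 2 / dose_per_frame_ugy   # assumed FOM convention
            return {"SNR": snr, "CO": contrast, "CNR": cnr, "FOM": fom}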

  11. Determination of pasture quality using airborne hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Pullanagari, R. R.; Kereszturi, G.; Yule, Ian J.; Irwin, M. E.

    2015-10-01

    Pasture quality is a critical determinant which influences animal performance (live weight gain, milk and meat production) and animal health. Assessment of pasture quality is therefore required to assist farmers with grazing planning and management, and with benchmarking between seasons and years. Traditionally, pasture quality is determined by field sampling, which is laborious, expensive and time consuming, and the information is not available in real time. Hyperspectral remote sensing has the potential to accurately quantify the biochemical composition of pasture over wide areas in great spatial detail. In this study an airborne imaging spectrometer (AisaFENIX, Specim) was used, covering a spectral range of 380-2500 nm with 448 spectral bands. A case study of a 600 ha hill country farm in New Zealand is used to illustrate the use of the system. Radiometric and atmospheric corrections, along with automated georectification of the imagery using a Digital Elevation Model (DEM), were applied to the raw images to convert them into geocoded reflectance images. A multivariate statistical method, partial least squares (PLS) regression, was then applied to estimate pasture quality parameters such as crude protein (CP) and metabolisable energy (ME) from canopy reflectance. The results from this study revealed that estimates of CP and ME had R2 values of 0.77 and 0.79, and RMSECV values of 2.97 and 0.81, respectively. By utilizing these regression models, spatial maps were created over the imaged area. These pasture quality maps can be used for adopting precision agriculture practices that improve farm profitability and environmental sustainability.
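
    The PLS calibration step can be sketched as follows, assuming hypothetical spectra and laboratory reference values; the component count would in practice be tuned by cross-validation rather than fixed.

```python
# Sketch of the PLS step: relate canopy reflectance spectra (448 bands) to
# lab-measured pasture-quality values (e.g. crude protein) and report RMSECV.
# Data are synthetic stand-ins for the airborne and field measurements.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(1)
spectra = rng.random((120, 448))           # 120 field samples x 448 spectral bands
crude_protein = rng.uniform(10, 30, 120)   # lab-measured CP (% dry matter)

pls = PLSRegression(n_components=10)
predicted = cross_val_predict(pls, spectra, crude_protein, cv=10).ravel()
rmsecv = np.sqrt(np.mean((predicted - crude_protein) ** 2))
print(f"RMSECV = {rmsecv:.2f}")
```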

  12. Dose and diagnostic image quality in digital tomosynthesis imaging of facial bones in pediatrics

    NASA Astrophysics Data System (ADS)

    King, J. M.; Hickling, S.; Elbakri, I. A.; Reed, M.; Wrogemann, J.

    2011-03-01

    The purpose of this study was to evaluate the use of digital tomosynthesis (DT) for pediatric facial bone imaging. We compared the eye lens dose and diagnostic image quality of DT facial bone exams relative to digital radiography (DR) and computed tomography (CT), and investigated whether we could modify our current DT imaging protocol to reduce patient dose while maintaining sufficient diagnostic image quality. We measured the dose to the eye lens for all three modalities using high-sensitivity thermoluminescent dosimeters (TLDs) and an anthropomorphic skull phantom. To assess the diagnostic image quality of DT compared to the corresponding DR and CT images, we performed an observer study where the visibility of anatomical structures in the DT phantom images was rated on a four-point scale. We then acquired DT images at lower doses and had radiologists indicate whether the visibility of each structure was adequate for diagnostic purposes. For typical facial bone exams, we measured eye lens doses of 0.1-0.4 mGy for DR, 0.3-3.7 mGy for DT, and 26 mGy for CT. In general, facial bone structures were visualized better with DT than DR, and the majority of structures were visualized well enough to avoid the need for CT. DT imaging provides high quality diagnostic images of the facial bones while delivering significantly lower doses to the lens of the eye compared to CT. In addition, we found that by adjusting the imaging parameters, the DT effective dose can be reduced by up to 50% while maintaining sufficient image quality.

  13. Presence capture cameras - a new challenge to the image quality

    NASA Astrophysics Data System (ADS)

    Peltoketo, Veli-Tapani

    2016-04-01

    Commercial presence capture cameras are coming to market, and a new era of visual entertainment is taking shape. Since true presence capture is still a very new technology, the technical solutions have only just passed the prototyping phase and vary considerably. Presence capture cameras still have the same quality issues to tackle as earlier generations of digital imaging, as well as numerous new ones. This work concentrates on the quality challenges of presence capture cameras. A camera system that can record 3D audio-visual reality as it is must have several camera modules, several microphones, and, in particular, technology that can synchronize the output of several sources into a seamless and smooth virtual reality experience. Several traditional quality features are still valid in presence capture cameras. Features like color fidelity, noise removal, resolution and dynamic range form the basis of virtual reality stream quality. However, the co-operation of several cameras brings a new dimension to these quality factors. New quality features must also be validated: for example, how should the camera streams be stitched together into a 3D experience without noticeable errors, and how can the stitching be validated? This work describes the quality factors that remain valid for presence capture cameras and assesses their importance. Moreover, the new challenges of presence capture cameras are investigated from an image and video quality point of view. The work also considers how well current measurement methods can be applied to presence capture cameras.

  14. Achieving thermography with a thermal security camera using uncooled amorphous silicon microbolometer image sensors

    NASA Astrophysics Data System (ADS)

    Wang, Yu-Wei; Tesdahl, Curtis; Owens, Jim; Dorn, David

    2012-06-01

    Advancements in uncooled microbolometer technology over the last several years have opened up many commercial applications which had previously been cost prohibitive. Thermal technology is no longer limited to the military and government market segments. One type of thermal sensor with low NETD which is available in the commercial market segment is the uncooled amorphous silicon (α-Si) microbolometer image sensor. Typical thermal security cameras focus on providing the best image quality by auto tone-mapping (contrast enhancing) the image, which provides the best contrast depending on the temperature range of the scene. While this may provide enough information to detect objects and activities, there are further benefits in being able to estimate the actual object temperatures in a scene. This thermographic ability can provide functionality beyond typical security cameras by making it possible to monitor processes. Example applications of thermography[2] with a thermal camera include monitoring electrical circuits, industrial machinery, building thermal leaks, oil/gas pipelines, power substations, etc.[3][5] This paper discusses the methodology of estimating object temperatures by characterizing/calibrating different components inside a thermal camera utilizing an uncooled amorphous silicon microbolometer image sensor. Plots of system performance across camera operating temperatures will be shown.

  15. Compressed image quality metric based on perceptually weighted distortion.

    PubMed

    Hu, Sudeng; Jin, Lina; Wang, Hanli; Zhang, Yun; Kwong, Sam; Kuo, C-C Jay

    2015-12-01

    Objective quality assessment for compressed images is critical to various image compression systems that are essential in image delivery and storage. Although the mean squared error (MSE) is computationally simple, it may not accurately reflect the perceptual quality of compressed images, which is also affected dramatically by the characteristics of the human visual system (HVS), such as the masking effect. In this paper, an image quality metric (IQM) is proposed based on perceptually weighted distortion in terms of the MSE. To capture the characteristics of the HVS, a randomness map is proposed to measure the masking effect, and a preprocessing scheme is proposed to simulate the processing that occurs in the initial part of the HVS. Since the masking effect depends strongly on structural randomness, the prediction error from the neighborhood, obtained with a statistical model, is used to measure the significance of masking. Meanwhile, imperceptible high-frequency signal content can be removed by preprocessing with low-pass filters. The relation between the distortions before and after the masking effect is investigated, and a masking modulation model is proposed to simulate the masking effect after preprocessing. The performance of the proposed IQM is validated on six image databases with various compression distortions. The experimental results show that the proposed algorithm outperforms other benchmark IQMs. PMID:26415170
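
    The general idea of down-weighting errors in heavily masked regions can be sketched as below. This is a simplified stand-in for illustration: the local-activity weighting and Gaussian preprocessing here are assumptions, not the paper's published randomness map or masking modulation model.

```python
# Simplified illustration of a perceptually weighted MSE: low-pass preprocessing
# to drop imperceptible high-frequency content, then error weighting by a crude
# local-activity proxy for masking. Not the authors' exact metric.
import numpy as np
from scipy.ndimage import gaussian_filter, generic_filter

def weighted_mse(reference: np.ndarray, distorted: np.ndarray) -> float:
    ref = gaussian_filter(reference.astype(float), sigma=1.0)   # preprocessing (low-pass)
    dis = gaussian_filter(distorted.astype(float), sigma=1.0)
    local_sd = generic_filter(ref, np.std, size=5)              # local activity map
    masking = 1.0 / (1.0 + local_sd)                            # busy regions mask errors more
    return float(np.mean(masking * (ref - dis) ** 2))
```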

  16. Flattening filter removal for improved image quality of megavoltage fluoroscopy

    SciTech Connect

    Christensen, James D.; Kirichenko, Alexander; Gayou, Olivier

    2013-08-15

    Purpose: Removal of the linear accelerator (linac) flattening filter enables a high rate of dose deposition with reduced treatment time. When used for megavoltage imaging, an unflat beam has reduced primary beam scatter, resulting in sharper images. In fluoroscopic imaging mode, the unflat beam has a higher photon count per image frame, yielding a higher contrast-to-noise ratio. The authors’ goal was to quantify the effects of an unflat beam on the image quality of megavoltage portal and fluoroscopic images. Methods: 6 MV projection images were acquired in fluoroscopic and portal modes using an electronic flat-panel imager. The effects of the flattening filter on the relative modulation transfer function (MTF) and contrast-to-noise ratio were quantified using the QC3 phantom. The impact of flattening filter removal on the contrast-to-noise ratio of gold fiducial markers was also studied under various scatter conditions. Results: The unflat beam had improved contrast resolution, with up to a 40% increase in MTF contrast at the highest frequency measured (0.75 line pairs/mm). The contrast-to-noise ratio was increased, as expected from the increased photon flux. The visualization of fiducial markers was markedly better using the unflat beam under all scatter conditions, enabling visualization of thin gold fiducial markers, the thinnest of which was not visible using the flat beam. Conclusions: The removal of the flattening filter from a clinical linac leads to quantifiable improvements in the image quality of megavoltage projection images. These gains enable observers to more easily visualize thin fiducial markers and track their motion on fluoroscopic images.
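
    Two of the compared figures can be sketched as below, assuming hypothetical ROI arrays extracted from flat- and unflat-beam frames; the percentile-based modulation estimate is a simplification of a full phantom-based MTF analysis.

```python
# Sketch: contrast-to-noise ratio between two uniform ROIs, and square-wave
# modulation from a bar-pattern profile as a relative (not absolute) MTF figure.
import numpy as np

def contrast_to_noise(roi_a: np.ndarray, roi_b: np.ndarray) -> float:
    noise = np.sqrt(0.5 * (roi_a.var(ddof=1) + roi_b.var(ddof=1)))
    return abs(roi_a.mean() - roi_b.mean()) / noise

def squarewave_modulation(bar_profile: np.ndarray) -> float:
    # Modulation across a bar group: (max - min) / (max + min),
    # using robust percentiles instead of single pixels.
    high, low = np.percentile(bar_profile, [95, 5])
    return (high - low) / (high + low)
```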

  17. Teacher Quality and Educational Equality: Do Teachers with Higher Standards-Based Evaluation Ratings Close Student Achievement Gaps?

    ERIC Educational Resources Information Center

    Borman, Geoffrey D.; Kimball, Steven M.

    2005-01-01

    Using standards-based evaluation ratings for nearly 400 teachers, and achievement results for over 7,000 students from grades 4-6, this study investigated the distribution and achievement effects of teacher quality in Washoe County, a mid-sized school district serving Reno and Sparks, Nevada. Classrooms with higher concentrations of minority,…

  18. Image-Processing Techniques for the Creation of Presentation-Quality Astronomical Images

    NASA Astrophysics Data System (ADS)

    Rector, Travis A.; Levay, Zoltan G.; Frattare, Lisa M.; English, Jayanne; Pu'uohau-Pummill, Kirk

    2007-02-01

    The quality of modern astronomical data and the agility of current image-processing software enable the visualization of data in a way that exceeds the traditional definition of an astronomical image. Two developments in particular have led to a fundamental change in how astronomical images can be assembled. First, the availability of high-quality multiwavelength and narrowband data allows for images that do not correspond to the wavelength sensitivity of the human eye, thereby introducing ambiguity in the usage and interpretation of color. Second, many image-processing software packages now use a layering metaphor that allows for any number of astronomical data sets to be combined into a color image. With this technique, images with as many as eight data sets have been produced. Each data set is intensity-scaled and colorized independently, creating an immense parameter space that can be used to assemble the image. Since such images are intended for data visualization, scaling and color schemes must be chosen that best illustrate the science. A practical guide is presented on how to use the layering metaphor to generate publication-ready astronomical images from as many data sets as desired. A methodology is also given on how to use intensity scaling, color, and composition to create contrasts in an image that highlight the scientific detail. Examples of image creation are discussed.

  19. Body image quality of life in eating disorders

    PubMed Central

    Jáuregui Lobera, Ignacio; Bolaños Ríos, Patricia

    2011-01-01

    Purpose: The objective was to examine how body image affects quality of life in an eating-disorder (ED) clinical sample, a non-ED clinical sample, and a nonclinical sample. We hypothesized that ED patients would show the worst body image quality of life. We also hypothesized that body image quality of life would have a stronger negative association with specific ED-related variables than with other psychological and psychopathological variables, mainly among ED patients. On the basis of previous studies, the influence of gender on the results was explored, too. Patients and methods: The final sample comprised 70 ED patients (mean age 22.65 ± 7.76 years; 59 women and 11 men); 106 were patients with other psychiatric disorders (mean age 28.20 ± 6.52; 67 women and 39 men), and 135 were university students (mean age 21.57 ± 2.58; 81 women and 54 men), with no psychiatric history. After having obtained informed consent, the following questionnaires were administered: Body Image Quality of Life Inventory-Spanish version (BIQLI-SP), Eating Disorders Inventory-2 (EDI-2), Perceived Stress Questionnaire (PSQ), Self-Esteem Scale (SES), and Symptom Checklist-90-Revised (SCL-90-R). Results: The ED patients’ ratings on the BIQLI-SP were the lowest and negatively scored (BIQLI-SP means: +20.18, +5.14, and −6.18, in the student group, the non-ED patient group, and the ED group, respectively). The effect of body image on quality of life was more negative in the ED group in all items of the BIQLI-SP. Body image quality of life was negatively associated with specific ED-related variables, more than with other psychological and psychopathological variables, but not especially among ED patients. Conclusion: Body image quality of life was affected not only by specific pathologies related to body image disturbances, but also by other psychopathological syndromes. Nevertheless, the greatest effect was related to ED, and seemed to be more negative among men. This finding is the

  20. Automating PACS quality control with the Vanderbilt image processing enterprise resource

    NASA Astrophysics Data System (ADS)

    Esparza, Michael L.; Welch, E. Brian; Landman, Bennett A.

    2012-02-01

    Precise image acquisition is an integral part of modern patient care and medical imaging research. Periodic quality control using standardized protocols and phantoms ensures that scanners are operating according to specifications, yet such procedures do not ensure that individual datasets are free from corruption, for example due to patient motion, transient interference, or physiological variability. If unacceptable artifacts are noticed during scanning, a technologist can repeat a procedure. Yet, substantial delays may be incurred if a problematic scan is not noticed until a radiologist reads the scans or an automated algorithm fails. Given the scores of slices in typical three-dimensional scans and the wide variety of potential use cases, a technologist cannot practically be expected to inspect all images. In large-scale research, automated pipeline systems have had great success in achieving high throughput. However, clinical and institutional workflows are largely based on DICOM and PACS technologies; these systems are not readily compatible with research systems due to security and privacy restrictions. Hence, quantitative quality control has been relegated to individual investigators and too often neglected. Herein, we propose a scalable system, the Vanderbilt Image Processing Enterprise Resource (VIPER), to integrate modular quality control and image analysis routines with a standard PACS configuration. This server unifies image processing routines at an institutional level and provides a simple interface so that investigators can collaborate to deploy new analysis technologies. VIPER integrates with high-performance computing environments and has successfully analyzed all standard scans from our institutional research center over the course of the last 18 months.

  1. Image quality testing of assembled IR camera modules

    NASA Astrophysics Data System (ADS)

    Winters, Daniel; Erichsen, Patrik

    2013-10-01

    Infrared (IR) camera modules for the LWIR (8-12 µm) that combine IR imaging optics with microbolometer focal plane array (FPA) sensors and readout electronics are increasingly becoming a mass-market product. At the same time, steady improvements in sensor resolution in the higher-priced markets raise the requirements on the imaging performance of objectives and on the proper alignment between objective and FPA. This puts pressure on camera manufacturers and system integrators to assess the image quality of finished camera modules in a cost-efficient and automated way for quality control or during end-of-line testing. In this paper we present recent development work done in the field of image quality testing of IR camera modules. This technology provides a wealth of additional information in contrast to more traditional test methods like the minimum resolvable temperature difference (MRTD), which gives only a subjective overall test result. Parameters that can be measured are image quality via the modulation transfer function (MTF), for broadband or with various bandpass filters, on- and off-axis, and optical parameters such as effective focal length (EFL) and distortion. If the camera module allows for refocusing the optics, additional parameters like best focus plane, image plane tilt, auto-focus quality, chief ray angle, etc. can be characterized. Additionally, the homogeneity and response of the sensor with the optics can be characterized in order to calculate the appropriate tables for non-uniformity correction (NUC). The technology can also be used to control active alignment methods during mechanical assembly of optics to high-resolution sensors. Other important points that are discussed are the flexibility of the technology to test IR modules with different form factors and electrical interfaces and, last but not least, its suitability for fully automated measurements in mass production.

  2. Assessing the quality of rainfall data when aiming to achieve flood resilience

    NASA Astrophysics Data System (ADS)

    Hoang, C. T.; Tchiguirinskaia, I.; Schertzer, D.; Lovejoy, S.

    2012-04-01

    A new EU Floods Directive entered into force five years ago. This Directive requires Member States to coordinate adequate measures to reduce flood risk. European flood management systems require reliable rainfall statistics, e.g. intensity-duration-frequency curves for shorter and shorter durations and for a larger and larger range of return periods. Preliminary studies showed that the estimated number of floods was lower when using low-time-resolution data of high-intensity rainfall events than when using higher-time-resolution data. These facts suggest that particular attention should be paid to rainfall data quality in order to adequately investigate flood risk with the aim of achieving flood resilience. The potential consequences of changes in measuring and recording techniques have been discussed in the literature with respect to the possible introduction of artificial inhomogeneities in time series. In this paper, we discuss how to detect another artificiality: most rainfall time series have a lower recording frequency than is assumed, and furthermore the effective high-frequency limit often depends on the recording year due to algorithm changes. This question is particularly important for operational hydrology, because an error in the effective high-frequency recording limit introduces biases in the corresponding statistics. In this direction, we developed a first version of the SERQUAL procedure to automatically detect the effective time resolution of highly mixed data. Applied to 166 rainfall time series in France, the SERQUAL procedure detected that most of them have an effective hourly resolution rather than a 5-minute resolution. Furthermore, series having an overall 5-minute resolution do not have it for all years. These results raise serious concerns about how to benchmark stochastic rainfall models at a sub-hourly resolution, which are particularly desirable for operational hydrology. Therefore, database

  3. Image-rotating cavity designs for improved beam quality in nanosecond optical parametric oscillators

    SciTech Connect

    Smith, Arlee V.; Bowers, Mark S.

    2001-05-01

    We show by computer simulation that high beam quality can be achieved in high-energy, nanosecond optical parametric oscillators by use of image-rotating resonators. Lateral walk-off between the signal and the idler beams in a nonlinear crystal creates correlations across the beams in the walk-off direction, or equivalently, creates a restricted acceptance angle. These correlations can improve the beam quality in the walk-off plane. We show that image rotation or reflection can be used to improve beam quality in both planes. The lateral walk-off can be due to birefringent walk-off in type II mixing or to noncollinear mixing in type I or type II mixing.

  4. Image quality-based adaptive illumination normalisation for face recognition

    NASA Astrophysics Data System (ADS)

    Sellahewa, Harin; Jassim, Sabah A.

    2009-05-01

    Automatic face recognition is a challenging task due to intra-class variations. Changes in lighting conditions during the enrolment and identification stages contribute significantly to these intra-class variations. A common approach to address the effects of such varying conditions is to pre-process the biometric samples in order to normalise intra-class variations. Histogram equalisation is a widely used illumination normalisation technique in face recognition. However, a recent study has shown that applying histogram equalisation to well-lit face images can lead to a decrease in recognition accuracy. This paper presents a dynamic approach to illumination normalisation, based on face image quality. The quality of a given face image is measured in terms of its luminance distortion by comparing this image against a known reference face image. Histogram equalisation is applied to a probe image only if its luminance distortion is higher than a predefined threshold. We tested the proposed adaptive illumination normalisation method on the widely used Extended Yale Face Database B. Identification results demonstrate that our adaptive normalisation produces better identification accuracy than the conventional approach, where every image is normalised irrespective of the lighting conditions under which it was acquired.
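
    The conditional normalisation step can be sketched as follows. The distortion measure used here (one minus a luminance-comparison term) and the threshold value are assumptions made for illustration; the paper's exact luminance distortion measure may differ.

```python
# Sketch of adaptive illumination normalisation: histogram-equalise a probe face
# image only when its global luminance departs from a reference image by more
# than a threshold. Distortion measure and threshold are illustrative assumptions.
import numpy as np
from skimage.exposure import equalize_hist

def luminance_distortion(probe: np.ndarray, reference: np.ndarray) -> float:
    mu_p, mu_r = probe.mean(), reference.mean()
    similarity = (2 * mu_p * mu_r) / (mu_p ** 2 + mu_r ** 2)   # 1.0 when means match
    return 1.0 - similarity

def adaptive_normalise(probe: np.ndarray, reference: np.ndarray, threshold: float = 0.05) -> np.ndarray:
    if luminance_distortion(probe, reference) > threshold:
        return equalize_hist(probe)    # normalise only poorly lit probes
    return probe                       # leave well-lit probes untouched
```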

  5. Why Is Quality in Higher Education Not Achieved? The View of Academics

    ERIC Educational Resources Information Center

    Cardoso, Sónia; Rosa, Maria J.; Stensaker, Bjørn

    2016-01-01

    Quality assurance is currently an established activity in Europe, driven either by national quality assurance agencies or by institutions themselves. However, whether quality assurance is perceived as actually being capable of promoting quality is still a question open to discussion. Based on three different views on quality derived from the…

  6. Magnetic Resonance Imaging (MRI) Analysis of Fibroid Location in Women Achieving Pregnancy After Uterine Artery Embolization

    SciTech Connect

    Walker, Woodruff J.; Bratby, Mark John

    2007-09-15

    The purpose of this study was to evaluate the fibroid morphology in a cohort of women achieving pregnancy following treatment with uterine artery embolization (UAE) for symptomatic uterine fibroids. A retrospective review of magnetic resonance imaging (MRI) of the uterus was performed to assess pre-embolization fibroid morphology. Data were collected on fibroid size, type, and number and included analysis of follow-up imaging to assess response. There have been 67 pregnancies in 51 women, with 40 live births. Intramural fibroids were seen in 62.7% of the women (32/48). Of these the fibroids were multiple in 16. A further 12 women had submucosal fibroids, with equal numbers of types 1 and 2. Two of these women had coexistent intramural fibroids. In six women the fibroids could not be individually delineated and formed a complex mass. All subtypes of fibroid were represented in those subgroups of women achieving a live birth versus those who did not. These results demonstrate that the location of uterine fibroids did not adversely affect subsequent pregnancy in the patient population investigated. Although this is only a small qualitative study, it does suggest that all types of fibroids treated with UAE have the potential for future fertility.

  7. Quality assurance methodology and applications to abdominal imaging PQI.

    PubMed

    Paushter, David M; Thomas, Stephen

    2016-03-01

    Quality assurance has increasingly become an integral part of medicine, with tandem goals of increasing patient safety and procedural quality, improving efficiency, lowering cost, and ultimately improving patient outcomes. This article reviews quality assurance methodology, ranging from the PDSA cycle to the application of lean techniques, aimed at operational efficiency, to continually evaluate and revise the health care environment. Alignment of goals for practices, hospitals, and healthcare organizations is critical, requiring clear objectives, adequate resources, and transparent reporting. In addition, there is a significant role played by regulatory bodies and oversight organizations in determining external benchmarks of quality, practice, and individual certification and reimbursement. Finally, practical application of quality principles to practice improvement projects in abdominal imaging will be presented.

  8. Mammography in New Zealand: radiation dose and image quality.

    PubMed

    Poletti, J L; Williamson, B D; Mitchell, A W

    1991-06-01

    The mean glandular doses to the breast, image quality and machine performance have been determined for all mammographic x-ray facilities in New Zealand during 1988-89. For 30 mm and 45 mm phantoms the mean doses per film were 1.03 +/- 0.56 mGy and 1.97 +/- 1.06 mGy. These doses are within international guidelines. Image quality (detection of simulated microcalcifications, and contrast-detail performance) was found to depend on the focal spot size/FFD combination, breast thickness, and film processing. The best machines could resolve 0.2 mm aluminium oxide specks with the contact technique. The use of a grid improved image quality, as did magnification. Extended-cycle film processing reduced doses, but the claimed improvement in image quality was not apparent from our data. The machine calibration parameters kVp, HVL and timer accuracy were in general within accepted tolerances. Automatic exposure controls in some cases gave poor control of film density with changing breast thickness. PMID:1747087

  9. SCID: full reference spatial color image quality metric

    NASA Astrophysics Data System (ADS)

    Ouni, S.; Chambah, M.; Herbin, M.; Zagrouba, E.

    2009-01-01

    The most widely used full-reference image quality assessments are error-based methods, performed with pixel-based difference metrics such as Delta E (ΔE), MSE, PSNR, etc. These metrics therefore measure only a local fidelity of the color and do not correlate well with perceived image quality: they omit the properties of the HVS and so cannot be reliable predictors of perceived visual quality. All these metrics compute differences pixel by pixel, defining a local fidelity of the color, whereas the human visual system is rather sensitive to global quality. In this paper, we present a novel full-reference color metric that is based on characteristics of the human visual system by considering the notion of adjacency. This metric, called SCID for Spatial Color Image Difference, is more perceptually correlated than other color differences such as Delta E. The suggested full-reference metric is generic and independent of the image distortion type. It can be used in different applications such as compression, restoration, etc.

  10. Image quality, space-qualified UV interference filters

    NASA Technical Reports Server (NTRS)

    Mooney, Thomas A.

    1992-01-01

    The progress during the contract period is described. The project involved fabrication of image quality, space-qualified bandpass filters in the 200-350 nm spectral region. Ion-assisted deposition (IAD) was applied to produce stable, reasonably durable filter coatings on space compatible UV substrates. Thin film materials and UV transmitting substrates were tested for resistance to simulated space effects.

  11. Visual relevance of display image quality testing by photometric methods

    NASA Astrophysics Data System (ADS)

    Andren, Boerje; Breidne, Magnus; Hansson, L. A.; Persson, Bo

    1993-09-01

    The two major international test methods for evaluation of the image quality of video display terminals are the ISO 9241-3 international standard and the MPR test. In this paper we make an attempt to compare the visual relevance of these two test methods.

  12. Simultaneous Analysis and Quality Assurance for Diffusion Tensor Imaging

    PubMed Central

    Lauzon, Carolyn B.; Asman, Andrew J.; Esparza, Michael L.; Burns, Scott S.; Fan, Qiuyun; Gao, Yurui; Anderson, Adam W.; Davis, Nicole; Cutting, Laurie E.; Landman, Bennett A.

    2013-01-01

    Diffusion tensor imaging (DTI) enables non-invasive, cyto-architectural mapping of in vivo tissue microarchitecture through voxel-wise mathematical modeling of multiple magnetic resonance imaging (MRI) acquisitions, each differently sensitized to water diffusion. DTI computations are fundamentally estimation processes and are sensitive to noise and artifacts. Despite widespread adoption in the neuroimaging community, maintaining consistent DTI data quality remains challenging given the propensity for patient motion, artifacts associated with fast imaging techniques, and the possibility of hardware changes/failures. Furthermore, the quantity of data acquired per voxel, the non-linear estimation process, and numerous potential use cases complicate traditional visual data inspection approaches. Currently, quality inspection of DTI data has relied on visual inspection and individual processing in DTI analysis software programs (e.g. DTIPrep, DTI-studio). However, recent advances in applied statistical methods have yielded several different metrics to assess noise level, artifact propensity, quality of tensor fit, variance of estimated measures, and bias in estimated measures. To date, these metrics have been largely studied in isolation. Herein, we select complementary metrics for integration into an automatic DTI analysis and quality assurance pipeline. The pipeline completes in 24 hours, stores statistical outputs, and produces a graphical summary quality analysis (QA) report. We assess the utility of this streamlined approach for empirical quality assessment on 608 DTI datasets from pediatric neuroimaging studies. The efficiency and accuracy of quality analysis using the proposed pipeline is compared with quality analysis based on visual inspection. The unified pipeline is found to save a statistically significant amount of time (over 70%) while improving the consistency of QA between a DTI expert and a pool of research associates. Projection of QA metrics to a low

  13. Imaging through turbid media via sparse representation: imaging quality comparison of three projection matrices

    NASA Astrophysics Data System (ADS)

    Shao, Xiaopeng; Li, Huijuan; Wu, Tengfei; Dai, Weijia; Bi, Xiangli

    2015-05-01

    In many materials, incident light is scattered away due to inhomogeneity of the refractive index, which greatly reduces the imaging depth and degrades the imaging quality. Many methods have been presented in recent years for solving this problem and realizing imaging through a highly scattering medium, such as wavefront modulation and reconstruction techniques. An imaging method based on compressed sensing (CS) theory can decrease the computational complexity because it does not require the whole speckle pattern to realize reconstruction. One of the key premises of this method is that the object is sparse or has a sparse representation. However, choosing a proper projection matrix is very important to the imaging quality. In this paper, we show that the transmission matrix (TM) of a scattering medium obeys a circular Gaussian distribution, which makes it possible to use a scattering medium as the measurement matrix in CS theory. In order to verify the performance of this method, a whole optical system is simulated. Various projection matrices are introduced to make the object sparse, including the fast Fourier transform (FFT) basis, the discrete cosine transform (DCT) basis and the discrete wavelet transform (DWT) basis, and the imaging performance of each is compared comprehensively. Simulation results show that for most targets, applying the discrete wavelet transform basis will obtain an image of good quality. This work can be applied to biomedical imaging and used to develop real-time imaging through highly scattering media.
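
    One way to compare the candidate sparsifying bases is to measure how few coefficients each needs to capture most of an image's energy. The sketch below is an illustrative comparison under that assumption, not the paper's full CS reconstruction; it uses PyWavelets for the DWT case.

```python
# Sketch: fraction of transform coefficients needed to retain 99% of the energy
# under the FFT, DCT and DWT bases. A smaller fraction means a sparser
# representation, which the paper associates with better CS reconstruction.
import numpy as np
from scipy.fft import dctn
import pywt

def energy_fraction_coeffs(coeffs: np.ndarray, energy: float = 0.99) -> float:
    mags = np.sort(np.abs(coeffs).ravel())[::-1] ** 2
    cumulative = np.cumsum(mags) / mags.sum()
    return (np.searchsorted(cumulative, energy) + 1) / mags.size

def compare_bases(image: np.ndarray) -> dict:
    dwt_arr, _ = pywt.coeffs_to_array(pywt.wavedec2(image, "db4", level=3))
    return {
        "FFT": energy_fraction_coeffs(np.fft.fft2(image)),
        "DCT": energy_fraction_coeffs(dctn(image, norm="ortho")),
        "DWT": energy_fraction_coeffs(dwt_arr),
    }
```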

  14. An electron beam imaging system for quality assurance in IORT

    NASA Astrophysics Data System (ADS)

    Casali, F.; Rossi, M.; Morigi, M. P.; Brancaccio, R.; Paltrinieri, E.; Bettuzzi, M.; Romani, D.; Ciocca, M.; Tosi, G.; Ronsivalle, C.; Vignati, M.

    2004-01-01

    Intraoperative radiation therapy is a special radiotherapy technique which enables a high dose of radiation to be given in a single fraction during oncological surgery. The major stumbling block to the large-scale application of the technique is the transfer of the patient, with an open wound, from the operating room to the radiation therapy bunker, with the consequent organisational problems and increased risk of infection. To overcome these limitations, in the last few years a new kind of linear accelerator, the Novac 7, conceived for direct use in the surgical room, has become available. The Novac 7 can deliver electron beams of different energies (3, 5, 7 and 9 MeV) with a high dose rate (up to 20 Gy/min). The aim of this work, funded by ENEA in the framework of a research contract, is the development of an innovative system for on-line measurements of 2D dose distributions and electron beam characterisation before radiotherapy treatment with the Novac 7. The system is made up of the following components: (a) an electron-light converter; (b) a 14-bit cooled CCD camera; (c) a personal computer with purpose-written software for image acquisition and processing. The performance of the prototype has been characterised experimentally with different electron-light converters. Several tests concerned the assessment of the detector response as a function of pulse number and electron beam energy. Finally, the experimental results concerning beam profiles have been compared with data acquired with other dosimetric techniques. The results achieved show that the developed system is suitable for fast quality assurance measurements and verification of 2D dose distributions.

  15. A study of image quality for radar image processing. [synthetic aperture radar imagery

    NASA Technical Reports Server (NTRS)

    King, R. W.; Kaupp, V. H.; Waite, W. P.; Macdonald, H. C.

    1982-01-01

    Methods developed for image quality metrics are reviewed with a focus on the basic interpretation or recognition elements, including: tone or color; shape; pattern; size; shadow; texture; site; association or context; and resolution. Seven metrics are believed to show promise as ways of characterizing the quality of an image: (1) the dynamic range of intensities in the displayed image; (2) the system signal-to-noise ratio; (3) the system spatial bandwidth or bandpass; (4) the system resolution or acutance; (5) the normalized mean-square error as a measure of geometric fidelity; (6) the perceptual mean-square error; and (7) the radar threshold quality factor. Selected levels of degradation are applied to simulated synthetic aperture radar images to test the validity of these metrics.
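
    Two of the simpler metrics in this list can be sketched directly, assuming hypothetical image arrays; the remaining metrics require system-level information (transfer functions, perceptual models) not reproduced here.

```python
# Sketch of two listed metrics: displayed dynamic range (in dB) and the
# normalised mean-square error between an original and a degraded image
# as a simple fidelity measure. Inputs are hypothetical image arrays.
import numpy as np

def dynamic_range_db(image: np.ndarray) -> float:
    positive = image[image > 0]
    return 20.0 * np.log10(positive.max() / positive.min())

def normalised_mse(original: np.ndarray, degraded: np.ndarray) -> float:
    return float(np.sum((original - degraded) ** 2) / np.sum(original ** 2))
```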

  16. Full-Reference Image Quality Assessment with Linear Combination of Genetically Selected Quality Measures

    PubMed Central

    2016-01-01

    Information carried by an image can be distorted by the different image processing steps introduced by different electronic means of storage and communication. The development of algorithms which can automatically assess the quality of an image in a way that is consistent with human evaluation is therefore important. In this paper, an approach to image quality assessment (IQA) is proposed in which the quality of a given image is evaluated jointly by several IQA approaches. At first, in order to obtain such joint models, an optimisation problem of IQA measure aggregation is defined, in which a weighted sum of the measures' outputs, i.e., objective scores, is used as the aggregation operator. The weight of each measure is then treated as a decision variable in a problem of minimising the root mean square error between the obtained objective scores and subjective scores. Subjective scores reflect the ground truth and involve evaluation of images by human observers. The optimisation problem is solved using a genetic algorithm, which also selects the suitable measures used in aggregation. The obtained multimeasures are evaluated on the four largest widely used image benchmarks and compared against state-of-the-art full-reference IQA approaches. The results of the comparison reveal that the proposed approach outperforms the other competing measures. PMID:27341493
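
    The aggregation idea can be sketched as fitting weights for a weighted sum of objective scores against subjective scores. The sketch below uses non-negative least squares as a simple stand-in for the paper's genetic algorithm (which additionally selects which measures to include); the score arrays are hypothetical.

```python
# Sketch: fit weights so a weighted sum of objective IQA scores matches
# subjective (mean opinion) scores with minimum RMSE. NNLS replaces the
# paper's genetic algorithm here; data are synthetic stand-ins.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(2)
objective_scores = rng.random((200, 6))   # 200 images x 6 candidate IQA measures
subjective_scores = rng.random(200)       # ground-truth mean opinion scores

weights, _ = nnls(objective_scores, subjective_scores)
combined = objective_scores @ weights
rmse = np.sqrt(np.mean((combined - subjective_scores) ** 2))
print(f"weights = {np.round(weights, 3)}, RMSE = {rmse:.3f}")
```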

  17. Full-Reference Image Quality Assessment with Linear Combination of Genetically Selected Quality Measures.

    PubMed

    Oszust, Mariusz

    2016-01-01

    Information carried by an image can be distorted by the different image processing steps introduced by different electronic means of storage and communication. The development of algorithms which can automatically assess the quality of an image in a way that is consistent with human evaluation is therefore important. In this paper, an approach to image quality assessment (IQA) is proposed in which the quality of a given image is evaluated jointly by several IQA approaches. At first, in order to obtain such joint models, an optimisation problem of IQA measure aggregation is defined, in which a weighted sum of the measures' outputs, i.e., objective scores, is used as the aggregation operator. The weight of each measure is then treated as a decision variable in a problem of minimising the root mean square error between the obtained objective scores and subjective scores. Subjective scores reflect the ground truth and involve evaluation of images by human observers. The optimisation problem is solved using a genetic algorithm, which also selects the suitable measures used in aggregation. The obtained multimeasures are evaluated on the four largest widely used image benchmarks and compared against state-of-the-art full-reference IQA approaches. The results of the comparison reveal that the proposed approach outperforms the other competing measures. PMID:27341493

  18. Image quality specification and maintenance for airborne SAR

    NASA Astrophysics Data System (ADS)

    Clinard, Mark S.

    2004-08-01

    Specification, verification, and maintenance of image quality over the lifecycle of an operational airborne SAR begin with the specification for the system itself. Verification of image quality-oriented specification compliance can be enhanced by including a specification requirement that a vendor provide appropriate imagery at the various phases of the system life cycle. The nature and content of the imagery appropriate for each stage of the process depends on the nature of the test, the economics of collection, and the availability of techniques to extract the desired information from the data. At the earliest lifecycle stages, Concept and Technology Development (CTD) and System Development and Demonstration (SDD), the test set could include simulated imagery to demonstrate the mathematical and engineering concepts being implemented thus allowing demonstration of compliance, in part, through simulation. For Initial Operational Test and Evaluation (IOT&E), imagery collected from precisely instrumented test ranges and targets of opportunity consisting of a priori or a posteriori ground-truthed cultural and natural features are of value to the analysis of product quality compliance. Regular monitoring of image quality is possible using operational imagery and automated metrics; more precise measurements can be performed with imagery of instrumented scenes, when available. A survey of image quality measurement techniques is presented along with a discussion of the challenges of managing an airborne SAR program with the scarce resources of time, money, and ground-truthed data. Recommendations are provided that should allow an improvement in the product quality specification and maintenance process with a minimal increase in resource demands on the customer, the vendor, the operational personnel, and the asset itself.

  19. Evaluation of image quality of a new CCD-based system for chest imaging

    NASA Astrophysics Data System (ADS)

    Sund, Patrik; Kheddache, Susanne; Mansson, Lars G.; Bath, Magnus; Tylen, Ulf

    2000-04-01

    The Imix radiography system (Oy Imix Ab, Finland) consists of an intensifying screen, optics, and a CCD camera. An upgrade of this system (Imix 2000) with a red-emitting screen and new optics has recently been released. The image quality of Imix (original version), Imix 2000, and two storage-phosphor systems, the Fuji FCR 9501 and Agfa ADC70, was evaluated in physical terms (DQE) and with visual grading of the visibility of anatomical structures in clinical images (141 kV). PA chest images of 50 healthy volunteers were evaluated by experienced radiologists. All images were evaluated on Siemens Simomed monitors, using the European Quality Criteria. The maximum DQE values for Imix, Imix 2000, Agfa and Fuji were 11%, 14%, 17% and 19%, respectively (141 kV, 5 μGy). Using the visual grading, the observers rated the systems in the following descending order: Fuji, Imix 2000, Agfa, and Imix. Thus, the upgrade to Imix 2000 resulted in higher DQE values and a significant improvement in clinical image quality. The visual grading agrees reasonably well with the DQE results; however, Imix 2000 received a better score than would be expected from the DQE measurements. Keywords: CCD Technique, Chest Imaging, Digital Radiography, DQE, Image Quality, Visual Grading Analysis

  20. Comprehensive quality assurance phantom for cardiovascular imaging systems

    NASA Astrophysics Data System (ADS)

    Lin, Pei-Jan P.

    1998-07-01

    With the advent of high heat-loading-capacity x-ray tubes, high-frequency inverter-type generators, and the use of spectral shaping filters, the automatic brightness/exposure control (ABC) circuit logic employed in the new generation of angiographic imaging equipment has been significantly reprogrammed. These new angiographic imaging systems are designed to take advantage of the power train capabilities to yield higher-contrast images while maintaining, or lowering, the patient exposure. Since the emphasis of the imaging system design has been significantly altered, the system performance parameters of interest and the phantoms employed for quality assurance must also change in order to properly evaluate the imaging capability of cardiovascular imaging systems. A quality assurance (QA) phantom has been under development at this institution and was submitted to various interested organizations such as the American Association of Physicists in Medicine (AAPM), the Society for Cardiac Angiography & Interventions (SCA&I), and the National Electrical Manufacturers Association (NEMA) for their review and input. At the same time, in an effort to establish a unified standard phantom design for cardiac catheterization laboratories (CCL), SCA&I and NEMA formed a joint work group in early 1997 to develop a suitable phantom. The initial QA phantom design has since been accepted to serve as the base phantom by the SCA&I-NEMA Joint Work Group (JWG), from which a comprehensive QA phantom is being developed.

  1. Image-quality metrics for characterizing adaptive optics system performance.

    PubMed

    Brigantic, R T; Roggemann, M C; Bauer, K W; Welsh, B M

    1997-09-10

    Adaptive optics system (AOS) performance is a function of the system design, seeing conditions, and light level of the wave-front beacon. It is desirable to optimize the controllable parameters in an AOS to maximize some measure of performance. For this optimization to be useful, it is necessary that a set of image-quality metrics be developed that vary monotonically with the AOS performance under a wide variety of imaging environments. Accordingly, as conditions change, one can be confident that the computed metrics dictate appropriate system settings that will optimize performance. Three such candidate metrics are presented. The first is the Strehl ratio; the second is a novel metric that modifies the Strehl ratio by integration of the modulus of the average system optical transfer function to a noise-effective cutoff frequency at which some specified image spectrum signal-to-noise ratio level is attained; and the third is simply the cutoff frequency just mentioned. It is shown that all three metrics are correlated with the rms error (RMSE) between the measured image and the associated diffraction-limited image. Of these, the Strehl ratio and the modified Strehl ratio exhibit consistently high correlations with the RMSE across a broad range of conditions and system settings. Furthermore, under conditions that yield a constant average system optical transfer function, the modified Strehl ratio can still be used to delineate image quality, whereas the Strehl ratio cannot.
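
    The first two metrics can be sketched from measured and diffraction-limited point spread functions. The code below is one reading of the described metrics, assuming hypothetical PSF arrays and a user-supplied cutoff frequency; it is not the authors' implementation.

```python
# Sketch: Strehl ratio (measured vs. diffraction-limited on-axis PSF peak) and a
# modified Strehl that integrates |OTF| only up to a noise-effective cutoff
# frequency. PSFs and cutoff are hypothetical inputs in cycles/pixel.
import numpy as np

def strehl_ratio(measured_psf: np.ndarray, diffraction_psf: np.ndarray) -> float:
    return (measured_psf.max() / measured_psf.sum()) / (
        diffraction_psf.max() / diffraction_psf.sum())

def modified_strehl(measured_psf: np.ndarray, diffraction_psf: np.ndarray,
                    cutoff_cycles_per_pixel: float) -> float:
    otf_m = np.fft.fftshift(np.fft.fft2(measured_psf / measured_psf.sum()))
    otf_d = np.fft.fftshift(np.fft.fft2(diffraction_psf / diffraction_psf.sum()))
    fy, fx = np.meshgrid(np.fft.fftshift(np.fft.fftfreq(measured_psf.shape[0])),
                         np.fft.fftshift(np.fft.fftfreq(measured_psf.shape[1])),
                         indexing="ij")
    inside = np.hypot(fx, fy) <= cutoff_cycles_per_pixel   # keep frequencies below cutoff
    return float(np.abs(otf_m[inside]).sum() / np.abs(otf_d[inside]).sum())
```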

  2. New opportunities for quality enhancing of images captured by passive THz camera

    NASA Astrophysics Data System (ADS)

    Trofimov, Vyacheslav A.; Trofimov, Vladislav V.

    2014-10-01

    As is well known, a passive THz camera allows concealed objects to be seen without contact with a person, and the camera poses no danger to the person. The efficiency of a passive THz camera depends on its temperature resolution. This characteristic determines the detection capabilities for concealed objects: the minimum object size, the maximum detection distance, and the image quality. Computer processing of the THz image can improve image quality many times over without additional engineering effort. Developing modern computer codes for processing THz images is therefore an urgent problem. With appropriate new methods, one may expect a temperature resolution that allows a banknote in a person's pocket to be seen without any physical contact. Modern algorithms for computer processing of THz images also make it possible to detect objects inside the human body via a temperature trace on the skin. This substantially broadens the opportunities for applying passive THz cameras to counterterrorism problems. We demonstrate the currently achievable detection of both concealed objects and clothing components through computer processing of images captured by passive THz cameras manufactured by various companies. Another important result discussed in the paper is the observation of both THz radiation emitted by an incandescent lamp and an image reflected from a ceramic floor plate. We consider images produced by passive THz cameras manufactured by Microsemi Corp., ThruVision Corp., and Capital Normal University (Beijing, China). All algorithms for computer processing of the THz images considered in this paper were developed by the Russian members of the author team. Keywords: THz wave, passive imaging camera, computer processing, security screening, concealed and forbidden objects, reflected image, hand seeing, banknote seeing, ceramic floorplate, incandescent lamp.

  3. Classroom quality as a predictor of first graders' time in non-instructional activities and literacy achievement.

    PubMed

    McLean, Leigh; Sparapani, Nicole; Toste, Jessica R; Connor, Carol McDonald

    2016-06-01

    This study investigated how quality of the classroom learning environment influenced first grade students' (n=533) time spent in two non-instructional classroom activities (off-task and in transition) and their subsequent literacy outcomes. Hierarchical linear modeling revealed that higher classroom quality was related to higher student performance in reading comprehension and expressive vocabulary. Further, classroom quality predicted the amount of time students spent off-task and in transitions in the classroom, with slopes of change across the year particularly impacted. Mediation effects were detected in the case of expressive vocabulary such that the influence of classroom quality on students' achievement operated through students' time spent in these non-instructional activities. Results highlight the importance of overall classroom quality to how students navigate the classroom environment during learning opportunities, with subsequent literacy achievement impacted. Implications for policy and educational practices are discussed.

  4. Classroom quality as a predictor of first graders' time in non-instructional activities and literacy achievement.

    PubMed

    McLean, Leigh; Sparapani, Nicole; Toste, Jessica R; Connor, Carol McDonald

    2016-06-01

    This study investigated how quality of the classroom learning environment influenced first grade students' (n=533) time spent in two non-instructional classroom activities (off-task and in transition) and their subsequent literacy outcomes. Hierarchical linear modeling revealed that higher classroom quality was related to higher student performance in reading comprehension and expressive vocabulary. Further, classroom quality predicted the amount of time students spent off-task and in transitions in the classroom, with slopes of change across the year particularly impacted. Mediation effects were detected in the case of expressive vocabulary such that the influence of classroom quality on students' achievement operated through students' time spent in these non-instructional activities. Results highlight the importance of overall classroom quality to how students navigate the classroom environment during learning opportunities, with subsequent literacy achievement impacted. Implications for policy and educational practices are discussed. PMID:27268569

  5. Improving Service Quality: Achieving High Performance in the Public and Private Sectors.

    ERIC Educational Resources Information Center

    Milakovich, Michael E.

    Quality-improvement principles are a sound means to respond to customer needs. However, when various quality and productivity theories and methods are applied, it is very difficult to consistently deliver quality results, especially in quasi-monopolistic, non-competitive, and regulated environments. This book focuses on quality-improvement methods…

  6. Underwater image quality enhancement through composition of dual-intensity images and Rayleigh-stretching.

    PubMed

    Abdul Ghani, Ahmad Shahrizan; Mat Isa, Nor Ashidi

    2014-01-01

    The quality of underwater images is poor due to the properties of water and its impurities. These properties cause attenuation of light as it travels through the water medium, resulting in low contrast, blur, inhomogeneous lighting, and color diminishing of underwater images. This paper proposes a method for enhancing the quality of underwater images. The proposed method consists of two stages. In the first stage, a contrast correction technique is applied: the modified Von Kries hypothesis is applied to the image, and the image is stretched into two different intensity images about the average value with respect to the Rayleigh distribution. In the second stage, a color correction technique is applied, in which the image is first converted into the hue-saturation-value (HSV) color model. The modification of the color component increases the image color performance. Qualitative and quantitative analyses indicate that the proposed method outperforms other state-of-the-art methods in terms of contrast, details, and noise reduction.
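
    The two-stage idea can be sketched in a much simplified form: a dual stretch of the intensity channel about its mean, blended and re-inserted via HSV, with a mild saturation adjustment. This is only an illustration of the structure of the approach under those assumptions, not the authors' Rayleigh-based algorithm.

```python
# Simplified sketch of a dual-intensity stretch plus HSV colour adjustment for
# an underwater RGB image in [0, 1]. Stretch limits and saturation gain are
# illustrative choices, not the published parameters.
import numpy as np
from skimage import color, exposure

def enhance_underwater(rgb: np.ndarray) -> np.ndarray:
    gray = color.rgb2gray(rgb)
    mean = float(gray.mean())
    lower = exposure.rescale_intensity(gray, in_range=(float(gray.min()), mean),
                                       out_range=(0.0, 1.0))   # stretch dark half
    upper = exposure.rescale_intensity(gray, in_range=(mean, float(gray.max())),
                                       out_range=(0.0, 1.0))   # stretch bright half
    blended = 0.5 * (lower + upper)                            # combine the two stretched images
    hsv = color.rgb2hsv(rgb)
    hsv[..., 2] = blended                                      # replace value channel
    hsv[..., 1] = np.clip(hsv[..., 1] * 1.2, 0, 1)             # mild saturation boost
    return color.hsv2rgb(hsv)
```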

  7. Underwater image quality enhancement through composition of dual-intensity images and Rayleigh-stretching.

    PubMed

    Abdul Ghani, Ahmad Shahrizan; Mat Isa, Nor Ashidi

    2014-01-01

    The quality of underwater images is poor due to the properties of water and its impurities. These properties cause attenuation of light as it travels through the water medium, resulting in low contrast, blur, inhomogeneous lighting, and color diminishing of underwater images. This paper proposes a method for enhancing the quality of underwater images. The proposed method consists of two stages. In the first stage, a contrast correction technique is applied: the modified Von Kries hypothesis is applied to the image, and the image is stretched into two different intensity images about the average value with respect to the Rayleigh distribution. In the second stage, a color correction technique is applied, in which the image is first converted into the hue-saturation-value (HSV) color model. The modification of the color component increases the image color performance. Qualitative and quantitative analyses indicate that the proposed method outperforms other state-of-the-art methods in terms of contrast, details, and noise reduction. PMID:25674483

  8. Comparison of clinical and physical measures of image quality in chest and pelvis computed radiography at different tube voltages

    SciTech Connect

    Sandborg, Michael; Tingberg, Anders; Ullman, Gustaf; Dance, David R.; Alm Carlsson, Gudrun

    2006-11-15

    The aim of this work was to study the dependence of image quality in digital chest and pelvis radiography on tube voltage, and to explore correlations between clinical and physical measures of image quality. The effect on image quality of tube voltage in these two examinations was assessed using two methods. The first method relies on radiologists' observations of images of an anthropomorphic phantom, and the second method was based on computer modeling of the imaging system using an anthropomorphic voxel phantom. The tube voltage was varied within a broad range (50-150 kV), including those values typically used with screen-film radiography. The tube charge was altered so that the same effective dose was achieved for each projection. Two x-ray units were employed using a computed radiography (CR) image detector with standard tube filtration and antiscatter device. Clinical image quality was assessed by a group of radiologists using a visual grading analysis (VGA) technique based on the revised CEC image criteria. Physical image quality was derived from a Monte Carlo computer model in terms of the signal-to-noise ratio, SNR, of anatomical structures corresponding to the image criteria. Both the VGAS (visual grading analysis score) and SNR decrease with increasing tube voltage in both chest PA and pelvis AP examinations, indicating superior performance if lower tube voltages are employed. Hence, a positive correlation between clinical and physical measures of image quality was found. The pros and cons of using lower tube voltages with CR digital radiography than typically used in analog screen-film radiography are discussed, as well as the relevance of using VGAS and quantum-noise SNR as measures of image quality in pelvis and chest radiography.

  9. Effects of task and image properties on visual-attention deployment in image-quality assessment

    NASA Astrophysics Data System (ADS)

    Alers, Hani; Redi, Judith; Liu, Hantao; Heynderickx, Ingrid

    2015-03-01

    It is important to understand how humans view images and how their behavior is affected by changes in the properties of the viewed images and the task they are given, particularly the task of scoring the image quality (IQ). This is a complex behavior that holds great importance for the field of image-quality research. This work builds upon 4 years of research work spanning three databases studying image-viewing behavior. Using eye-tracking equipment, it was possible to collect information on human viewing behavior of different kinds of stimuli and under different experimental settings. This work performs a cross-analysis on the results from all these databases using state-of-the-art similarity measures. The results strongly show that asking the viewers to score the IQ significantly changes their viewing behavior. Also muting the color saturation seems to affect the saliency of the images. However, a change in IQ was not consistently found to modify visual attention deployment, neither under free looking nor during scoring. These results are helpful in gaining a better understanding of image viewing behavior under different conditions. They also have important implications on work that collects subjective image-quality scores from human observers.

  10. Scanner-based image quality measurement system for automated analysis of EP output

    NASA Astrophysics Data System (ADS)

    Kipman, Yair; Mehta, Prashant; Johnson, Kate

    2003-12-01

    Inspection of electrophotographic print cartridge quality and compatibility requires analysis of hundreds of pages on a wide population of printers and copiers. Although print quality inspection is often achieved through the use of anchor prints and densitometry, more comprehensive analysis and quantitative data is desired for performance tracking, benchmarking and failure mode analysis. Image quality measurement systems range in price and performance, image capture paths and levels of automation. In order to address the requirements of a specific application, careful consideration was made to print volume, budgetary limits, and the scope of the desired image quality measurements. A flatbed scanner-based image quality measurement system was selected to support high throughput, maximal automation, and sufficient flexibility for both measurement methods and image sampling rates. Using an automatic document feeder (ADF) for sample management, a half ream of prints can be measured automatically without operator intervention. The system includes optical character recognition (OCR) for automatic determination of target type for measurement suite selection. This capability also enables measurement of mixed stacks of targets since each sample is identified prior to measurement. In addition, OCR is used to read toner ID, machine ID, print count, and other pertinent information regarding the printing conditions and environment. This data is saved to a data file along with the measurement results for complete test documentation. Measurement methods were developed to replace current methods of visual inspection and densitometry. The features that were being analyzed visually could be addressed via standard measurement algorithms. Measurement of density proved to be less simple since the scanner is not a densitometer and anything short of an excellent estimation would be meaningless. In order to address the measurement of density, a transfer curve was built to translate the

  11. Dosimetry and image quality in digital mammography facilities in the State of Minas Gerais, Brazil

    NASA Astrophysics Data System (ADS)

    da Silva, Sabrina Donato; Joana, Geórgia Santos; Oliveira, Bruno Beraldo; de Oliveira, Marcio Alves; Leyton, Fernando; Nogueira, Maria do Socorro

    2015-11-01

    According to the National Register of Health Care Facilities (CNES), there are approximately 477 mammography systems operating in the state of Minas Gerais, Brazil, of which an estimated 200 are digital apparatus using mainly computerized radiography (CR) or direct radiography (DR) systems. Mammography is irreplaceable in the diagnosis and early detection of breast cancer, the leading cause of cancer death among women worldwide. A high standard of image quality alongside smaller doses and optimization of procedures are essential if early detection is to occur. This study aimed to determine dosimetry and image quality in 68 mammography services in Minas Gerais using CR or DR systems. The data of this study were collected between the years of 2011 and 2013. The contrast-to-noise ratio proved to be a critical point in the image production chain in digital systems, since 90% of services were not compliant in this regard, mainly for larger PMMA thicknesses (60 and 70 mm). Regarding the image noise, only 31% of these were compliant. The average glandular dose found is of concern, since more than half of the services presented doses above acceptable limits. Therefore, despite the potential benefits of using CR and DR systems, the employment of this technology has to be revised and optimized to achieve better quality image and reduce radiation dose as much as possible.

  12. Image quality and dose assessment in digital breast tomosynthesis: A Monte Carlo study

    NASA Astrophysics Data System (ADS)

    Baptista, M.; Di Maria, S.; Oliveira, N.; Matela, N.; Janeiro, L.; Almeida, P.; Vaz, P.

    2014-11-01

    Mammography is considered a standard technique for the early detection of breast cancer. However, its sensitivity is limited essentially due to the issue of the overlapping breast tissue. This limitation can be partially overcome, with a relatively new technique, called digital breast tomosynthesis (DBT). For this technique, optimization of acquisition parameters which maximize image quality, whilst complying with the ALARA principle, continues to be an area of considerable research. The aim of this work was to study the best quantum energies that optimize the image quality with the lowest achievable dose in DBT and compare these results with the digital mammography (DM) ones. Monte Carlo simulations were performed using the state-of-the-art computer program MCNPX 2.7.0 in order to generate several 2D cranio-caudal (CC) projections obtained during an acquisition of a standard DBT examination. Moreover, glandular absorbed doses and photon flux calculations, for each projection image, were performed. A homogeneous breast computational phantom with 50%/50% glandular/adipose tissue composition was used and two compressed breast thicknesses were evaluated: 4 cm and 8 cm. The simulated projection images were afterwards reconstructed with an algebraic reconstruction tool and the signal difference to noise ratio (SDNR) was calculated in order to evaluate the image quality in DBT and DM. Finally, a thorough comparison between the results obtained in terms of SDNR and dose assessment in DBT and DM was performed.

  13. An improved Gabor enhancement method for low-quality fingerprint images

    NASA Astrophysics Data System (ADS)

    Geng, Hao; Li, Jicheng; Zhou, Jinwei; Chen, Dong

    2015-10-01

    Criminal fingerprints, i.e., fingerprints lifted from crime scenes, play an important role in police investigations and in solving cases, but they typically exhibit blur, incompleteness, and low ridge contrast. Traditional fingerprint enhancement and identification methods have limitations, and current automated fingerprint identification systems (AFIS) have not been applied extensively in police investigations. Because the Gabor filter suffers from drawbacks such as poor efficiency and imprecise estimation of ridge orientation parameters, enhancement of low-contrast fingerprint images often fails to achieve the desired effect. Therefore, an improved Gabor enhancement method for low-quality fingerprint images is proposed in this paper. First, orientation image templates with different scales are used to distinguish the orientation images in the fingerprint area, and the ridge orientation parameters are calculated. Second, mean ridge frequencies are extracted within local windows aligned with the ridge orientation, and the mean frequency parameters of the ridges are calculated. Third, the size and orientation of the Gabor filter are self-adjusted according to the local ridge orientation and mean frequency. Finally, the poor-quality fingerprint images are enhanced. In experiments, the improved Gabor filter performs better on low-quality fingerprint images than traditional filtering methods.
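
    As an illustration of the orientation- and frequency-tuned filtering the abstract describes, the sketch below builds a real-valued Gabor kernel matched to a local ridge orientation and mean frequency and applies it to one block; the kernel size, sigma, and the way orientation and frequency are obtained here are assumptions, not the authors' estimation procedure.

```python
import numpy as np
from scipy.ndimage import convolve

def gabor_kernel(theta, freq, sigma=4.0, size=17):
    """Real-valued Gabor kernel tuned to ridge orientation theta (radians) and
    ridge frequency freq (cycles/pixel); made zero-mean so flat regions map to zero."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)        # coordinate along the ridge normal
    kernel = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2)) * np.cos(2.0 * np.pi * freq * xr)
    return kernel - kernel.mean()

def enhance_block(block, theta, freq):
    """Filter one fingerprint block with a kernel matched to its local
    orientation and mean ridge frequency."""
    return convolve(block.astype(np.float64), gabor_kernel(theta, freq))

if __name__ == "__main__":
    # Synthetic ridges at 30 degrees, ~0.1 cycles/pixel, heavily corrupted by noise.
    y, x = np.mgrid[0:64, 0:64]
    ridges = np.cos(2 * np.pi * 0.1 * (x * np.cos(np.pi / 6) + y * np.sin(np.pi / 6)))
    noisy = ridges + np.random.normal(0.0, 0.8, ridges.shape)
    enhanced = enhance_block(noisy, np.pi / 6, 0.1)
    print(round(np.corrcoef(enhanced.ravel(), ridges.ravel())[0, 1], 2))  # close to 1
```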

  14. A three-dimensional statistical approach to improved image quality for multislice helical CT

    SciTech Connect

    Thibault, Jean-Baptiste; Sauer, Ken D.; Bouman, Charles A.; Hsieh, Jiang

    2007-11-15

    Multislice helical computed tomography scanning offers the advantages of faster acquisition and wide organ coverage for routine clinical diagnostic purposes. However, image reconstruction is faced with the challenges of three-dimensional cone-beam geometry, data completeness issues, and low dosage. Of all available reconstruction methods, statistical iterative reconstruction (IR) techniques appear particularly promising since they provide the flexibility of accurate physical noise modeling and geometric system description. In this paper, we present the application of Bayesian iterative algorithms to real 3D multislice helical data to demonstrate significant image quality improvement over conventional techniques. We also introduce a novel prior distribution designed to provide flexibility in its parameters to fine-tune image quality. Specifically, enhanced image resolution and lower noise have been achieved, concurrently with the reduction of helical cone-beam artifacts, as demonstrated by phantom studies. Clinical results also illustrate the capabilities of the algorithm on real patient data. Although computational load remains a significant challenge for practical development, superior image quality combined with advancements in computing technology make IR techniques a legitimate candidate for future clinical applications.

  15. A three-dimensional statistical approach to improved image quality for multislice helical CT.

    PubMed

    Thibault, Jean-Baptiste; Sauer, Ken D; Bouman, Charles A; Hsieh, Jiang

    2007-11-01

    Multislice helical computed tomography scanning offers the advantages of faster acquisition and wide organ coverage for routine clinical diagnostic purposes. However, image reconstruction is faced with the challenges of three-dimensional cone-beam geometry, data completeness issues, and low dosage. Of all available reconstruction methods, statistical iterative reconstruction (IR) techniques appear particularly promising since they provide the flexibility of accurate physical noise modeling and geometric system description. In this paper, we present the application of Bayesian iterative algorithms to real 3D multislice helical data to demonstrate significant image quality improvement over conventional techniques. We also introduce a novel prior distribution designed to provide flexibility in its parameters to fine-tune image quality. Specifically, enhanced image resolution and lower noise have been achieved, concurrently with the reduction of helical cone-beam artifacts, as demonstrated by phantom studies. Clinical results also illustrate the capabilities of the algorithm on real patient data. Although computational load remains a significant challenge for practical development, superior image quality combined with advancements in computing technology make IR techniques a legitimate candidate for future clinical applications.

  16. The impact of spectral filtration on image quality in micro-CT system.

    PubMed

    Ren, Liqiang; Ghani, Muhammad U; Wu, Di; Zheng, Bin; Chen, Yong; Yang, Kai; Wu, Xizeng; Liu, Hong

    2016-01-01

    This paper aims to evaluate the impact of spectral filtration on image quality in a microcomputed tomography (micro-CT) system. A mouse phantom comprising 11 rods modeling lung, muscle, adipose, and bone was scanned using two protocols with scan times of 17 s and 2 min, respectively. The tube current (μA) for each scan was adjusted to achieve identical entrance exposure to the phantom, providing a baseline for image quality evaluation. For each region of interest (ROI) within a specific composition, CT number variations, noise levels, and contrast-to-noise ratios (CNRs) were evaluated from the reconstructed images. CT number variations and CNRs for high-density bone, muscle, and adipose were compared with theoretical predictions. The results show that the impact of spectral filtration on image quality indicators, such as CNR, in a micro-CT system is significantly associated with tissue characteristics. The findings may provide useful references for optimizing the scanning parameters of general micro-CT systems in future imaging applications. PMID:26894340
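
    The contrast-to-noise ratio evaluation mentioned in the abstract can be sketched as follows; the circular-ROI layout and the convention of using the background standard deviation as the noise term are illustrative assumptions, not necessarily the paper's exact definition.

```python
import numpy as np

def roi_stats(image, center, radius):
    """Mean and standard deviation inside a circular ROI (center in pixels)."""
    rows, cols = np.ogrid[:image.shape[0], :image.shape[1]]
    mask = (rows - center[0]) ** 2 + (cols - center[1]) ** 2 <= radius ** 2
    values = image[mask]
    return values.mean(), values.std(ddof=1)

def cnr(image, tissue_center, background_center, radius=8):
    """Contrast-to-noise ratio between a tissue rod and the background, using
    the background standard deviation as the noise term."""
    mu_t, _ = roi_stats(image, tissue_center, radius)
    mu_b, sd_b = roi_stats(image, background_center, radius)
    return abs(mu_t - mu_b) / sd_b

if __name__ == "__main__":
    img = np.random.normal(0.0, 10.0, (128, 128))   # water-like background (HU)
    img[40:60, 40:60] += 300.0                      # bone-like rod insert
    print(round(cnr(img, (50, 50), (100, 100)), 1))
```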

  17. How much image noise can be added in cardiac x-ray imaging without loss in perceived image quality?

    NASA Astrophysics Data System (ADS)

    Gislason-Lee, Amber J.; Kumcu, Asli; Kengyelics, Stephen M.; Rhodes, Laura A.; Davies, Andrew G.

    2015-03-01

    Dynamic X-ray imaging systems are used for interventional cardiac procedures to treat coronary heart disease. X-ray settings are controlled automatically by specially-designed X-ray dose control mechanisms whose role is to ensure an adequate level of image quality is maintained with an acceptable radiation dose to the patient. Current commonplace dose control designs quantify image quality by performing a simple technical measurement directly from the image. However, the utility of cardiac X-ray images is in their interpretation by a cardiologist during an interventional procedure, rather than in a technical measurement. With the long term goal of devising a clinically-relevant image quality metric for an intelligent dose control system, we aim to investigate the relationship of image noise with clinical professionals' perception of dynamic image sequences. Computer-generated noise was added, in incremental amounts, to angiograms of five different patients selected to represent the range of adult cardiac patient sizes. A two-alternative forced choice staircase experiment was used to determine the amount of noise which can be added to a patient's image sequences without changing the image quality as perceived by clinical professionals. Twenty-five viewing sessions (five for each patient) were completed by thirteen observers. Results demonstrated scope to increase the noise of cardiac X-ray images by up to 21% +/- 8% before it is noticeable by clinical professionals. This indicates a potential for a 21% radiation dose reduction, since X-ray image noise and radiation dose are directly related; this would be beneficial to both patients and personnel.
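
    A toy version of the forced-choice staircase used to find the just-noticeable amount of added noise is sketched below; the 1-up/1-down rule, step size, and stopping criterion are illustrative assumptions rather than the study's exact protocol.

```python
import random

def staircase_2afc(respond, start=0.30, step=0.05, reversals_needed=6):
    """Minimal 1-up/1-down staircase: lower the added-noise level after a
    correct response, raise it after an incorrect one, and average the levels
    at which the responses reverse as the threshold estimate."""
    level, last_correct, reversal_levels = start, None, []
    while len(reversal_levels) < reversals_needed:
        correct = respond(level)
        if last_correct is not None and correct != last_correct:
            reversal_levels.append(level)
        level = max(0.0, level - step) if correct else level + step
        last_correct = correct
    return sum(reversal_levels) / len(reversal_levels)

if __name__ == "__main__":
    # Simulated observer: reliably notices added noise above ~21%, guesses below.
    sim = lambda noise_level: noise_level > 0.21 or random.random() < 0.5
    print(round(staircase_2afc(sim), 3))
```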

  18. Improvement of material decomposition and image quality in dual-energy radiography by reducing image noise

    NASA Astrophysics Data System (ADS)

    Lee, D.; Kim, Y.-s.; Choi, S.; Lee, H.; Choi, S.; Jo, B. D.; Jeon, P.-H.; Kim, H.; Kim, D.; Kim, H.; Kim, H.-J.

    2016-08-01

    Although digital radiography has been widely used for screening human anatomical structures in clinical situations, it has several limitations due to anatomical overlapping. To resolve this problem, dual-energy imaging techniques, which provide a method for decomposing overlying anatomical structures, have been suggested as alternative imaging techniques. Previous studies have reported several dual-energy techniques, each resulting in different image qualities. In this study, we compared three dual-energy techniques: simple log subtraction (SLS), simple smoothing of a high-energy image (SSH), and anti-correlated noise reduction (ACNR) with respect to material thickness quantification and image quality. To evaluate dual-energy radiography, we conducted Monte Carlo simulation and experimental phantom studies. The Geant 4 Application for Tomographic Emission (GATE) v 6.0 and tungsten anode spectral model using interpolation polynomials (TASMIP) codes were used for simulation studies and digital radiography, and human chest phantoms were used for experimental studies. The results of the simulation study showed improved image contrast-to-noise ratio (CNR) and coefficient of variation (COV) values and bone thickness estimation accuracy by applying the ACNR and SSH methods. Furthermore, the chest phantom images showed better image quality with the SSH and ACNR methods compared to the SLS method. In particular, the bone texture characteristics were well-described by applying the SSH and ACNR methods. In conclusion, the SSH and ACNR methods improved the accuracy of material quantification and image quality in dual-energy radiography compared to SLS. Our results can contribute to better diagnostic capabilities of dual-energy images and accurate material quantification in various clinical situations.
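
    Of the three techniques compared, simple log subtraction (SLS) is the easiest to illustrate; the sketch below forms a bone-enhanced image as ln(I_high) - w * ln(I_low), with the soft-tissue-nulling weight w treated as a calibration input. All attenuation values in the demo are made up.

```python
import numpy as np

def simple_log_subtraction(low_kv, high_kv, w, eps=1e-6):
    """Simple log subtraction (SLS): bone-enhanced = ln(I_high) - w * ln(I_low).
    The weight w is chosen to cancel the soft-tissue signal (calibrated per system)."""
    return np.log(high_kv + eps) - w * np.log(low_kv + eps)

if __name__ == "__main__":
    # Toy transmission images: soft tissue everywhere, a bone strip in the middle.
    mu_soft_low, mu_soft_high = 0.25, 0.20      # assumed attenuation line integrals
    mu_bone_low, mu_bone_high = 0.90, 0.45
    low = np.full((64, 64), np.exp(-mu_soft_low))
    high = np.full((64, 64), np.exp(-mu_soft_high))
    low[28:36, :] *= np.exp(-mu_bone_low)
    high[28:36, :] *= np.exp(-mu_bone_high)
    w = mu_soft_high / mu_soft_low              # weight that nulls soft tissue
    bone = simple_log_subtraction(low, high, w)
    print(bone[0, 0].round(3), bone[32, 32].round(3))   # ~0 off bone, non-zero on bone
```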

  19. Effect of exercise supplementation on dipyridamole thallium-201 image quality

    SciTech Connect

    Stern, S.; Greenberg, I.D.; Corne, R. )

    1991-08-01

    To determine the effect of different types of exercise supplementation on dipyridamole thallium image quality, 78 patients were prospectively randomized to one of three protocols: dipyridamole infusion alone, dipyridamole supplemented with isometric handgrip, and dipyridamole with low-level treadmill exercise. Heart-to-lung, heart-to-liver, and heart-to-adjacent infradiaphragmatic activity ratios were generated from anterior images acquired immediately following the test. Additionally, heart-to-total infradiaphragmatic activity was graded semiquantitatively. Results showed a significantly higher ratio of heart to subdiaphragmatic activity in the treadmill group as compared with dipyridamole alone (p less than 0.001) and dipyridamole supplemented with isometric handgrip exercise (p less than 0.001). No significant difference was observed between patients receiving the dipyridamole infusion, and dipyridamole supplemented with isometric handgrip exercise. The authors conclude that low-level treadmill exercise supplementation of dipyridamole infusion is an effective means of improving image quality. Supplementation with isometric handgrip does not improve image quality over dipyridamole alone.

  20. Metal artifact reduction and image quality evaluation of lumbar spine CT images using metal sinogram segmentation.

    PubMed

    Kaewlek, Titipong; Koolpiruck, Diew; Thongvigitmanee, Saowapak; Mongkolsuk, Manus; Thammakittiphan, Sastrawut; Tritrakarn, Siri-on; Chiewvit, Pipat

    2015-01-01

    Metal artifacts often appear in the images of computed tomography (CT) imaging. In the case of lumbar spine CT images, artifacts disturb the images of critical organs. These artifacts can affect the diagnosis, treatment, and follow up care of the patient. One approach to metal artifact reduction is the sinogram completion method. A mixed-variable thresholding (MixVT) technique to identify the suitable metal sinogram is proposed. This technique consists of four steps: 1) identify the metal objects in the image by using k-mean clustering with the soft cluster assignment, 2) transform the image by separating it into two sinograms, one of which is the sinogram of the metal object, with the surrounding tissue shown in the second sinogram. The boundary of the metal sinogram is then found by the MixVT technique, 3) estimate the new value of the missing data in the metal sinogram by linear interpolation from the surrounding tissue sinogram, 4) reconstruct a modified sinogram by using filtered back-projection and complete the image by adding back the image of the metal object into the reconstructed image to form the complete image. The quantitative and clinical image quality evaluation of our proposed technique demonstrated a significant improvement in image clarity and detail, which enhances the effectiveness of diagnosis and treatment.
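
    The four-step sinogram-completion idea can be sketched with a simplified pipeline as below, using plain thresholding in place of the paper's k-means/MixVT segmentation and scikit-image's radon/iradon for projection and filtered back-projection; it is a toy illustration, not the authors' implementation.

```python
import numpy as np
from skimage.transform import radon, iradon

def mar_sinogram_interpolation(image, metal_threshold=2.0):
    """Toy sinogram-completion MAR: segment metal, locate its trace in the
    sinogram, linearly interpolate the missing detector bins from neighbouring
    readings, reconstruct with FBP, and paste the metal object back."""
    angles = np.linspace(0.0, 180.0, max(image.shape), endpoint=False)
    metal_mask = image > metal_threshold
    sino = radon(image, theta=angles)
    metal_trace = radon(metal_mask.astype(float), theta=angles) > 1e-6
    completed = sino.copy()
    for j in range(sino.shape[1]):                 # one projection angle per column
        bad = metal_trace[:, j]
        if bad.any() and (~bad).any():
            idx = np.arange(sino.shape[0])
            completed[bad, j] = np.interp(idx[bad], idx[~bad], sino[~bad, j])
    corrected = iradon(completed, theta=angles)
    corrected[metal_mask] = image[metal_mask]      # re-insert the metal object
    return corrected

if __name__ == "__main__":
    phantom = np.zeros((128, 128))
    phantom[32:96, 32:96] = 1.0                    # soft-tissue block
    phantom[60:68, 60:68] = 10.0                   # metal insert
    print(mar_sinogram_interpolation(phantom).shape)   # (128, 128)
```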

  1. Spread spectrum image watermarking based on perceptual quality metric.

    PubMed

    Zhang, Fan; Liu, Wenyu; Lin, Weisi; Ngan, King Ngi

    2011-11-01

    Efficient image watermarking calls for full exploitation of the perceptual distortion constraint. Second-order statistics of visual stimuli are regarded as critical features for perception. This paper proposes a second-order statistics (SOS)-based image quality metric, which considers the texture masking effect and the contrast sensitivity in Karhunen-Loève transform domain. Compared with the state-of-the-art metrics, the quality prediction by SOS better correlates with several subjectively rated image databases, in which the images are impaired by the typical coding and watermarking artifacts. With the explicit metric definition, spread spectrum watermarking is posed as an optimization problem: we search for a watermark to minimize the distortion of the watermarked image and to maximize the correlation between the watermark pattern and the spread spectrum carrier. The simple metric guarantees the optimal watermark a closed-form solution and a fast implementation. The experiments show that the proposed watermarking scheme can take full advantage of the distortion constraint and improve the robustness in return.
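
    For context, a baseline additive spread-spectrum embedder and correlation detector are sketched below; this is the generic scheme the paper builds on, not the SOS-metric-optimised watermark itself, and the strength value is arbitrary rather than perceptually shaped.

```python
import numpy as np

def embed(image, key, alpha=3.0):
    """Additive spread-spectrum embedding: add a keyed pseudo-random +/-1
    carrier scaled by a global strength alpha (perceptual shaping omitted)."""
    rng = np.random.default_rng(key)
    carrier = rng.choice([-1.0, 1.0], size=image.shape)
    return np.clip(image + alpha * carrier, 0, 255), carrier

def detect(image, carrier):
    """Correlation detector: the mean product is ~alpha when the mark is
    present and ~0 when it is absent."""
    return float(np.mean((image - image.mean()) * carrier))

if __name__ == "__main__":
    img = np.random.randint(0, 256, (128, 128)).astype(float)
    marked, carrier = embed(img, key=42)
    print(round(detect(img, carrier), 2), round(detect(marked, carrier), 2))
```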

  2. Quality assessment of butter cookies applying multispectral imaging.

    PubMed

    Andresen, Mette S; Dissing, Bjørn S; Løje, Hanne

    2013-07-01

    A method for characterization of butter cookie quality by assessing the surface browning and water content using multispectral images is presented. Based on evaluations of the browning of butter cookies, cookies were manually divided into groups. From this categorization, reference values were calculated for a statistical prediction model correlating multispectral images with a browning score. The browning score is calculated as a function of oven temperature and baking time. It is presented as a quadratic response surface. The investigated process window was the intervals 4-16 min and 160-200°C in a forced convection electrically heated oven. In addition to the browning score, a model for predicting the average water content based on the same images is presented. This shows how multispectral images of butter cookies may be used for the assessment of different quality parameters. Statistical analysis showed that the most significant wavelengths for browning predictions were in the interval 400-700 nm and the wavelengths significant for water prediction were primarily located in the near-infrared spectrum. The water prediction model was found to correctly estimate the average water content with an absolute error of 0.22%. From the images it was also possible to follow the browning and drying propagation from the cookie edge toward the center.

  3. Study on classification of pork quality using hyperspectral imaging technique

    NASA Astrophysics Data System (ADS)

    Zeng, Shan; Bai, Jun; Wang, Haibin

    2015-12-01

    This paper addresses problems in discriminating chilled, thawed, and spoiled meat with the hyperspectral imaging technique, such as the selection of feature wavelengths. First, based on 400 ~ 1000nm range hyperspectral image data of testing pork samples, a K-medoids clustering algorithm based on manifold distance is used to select 30 important wavelengths from 753 wavelengths, and 8 feature wavelengths (454.4, 477.5, 529.3, 546.8, 568.4, 580.3, 589.9 and 781.2nm) are then selected based on the discrimination value. Next, 8 texture features of each image at the 8 feature wavelengths are respectively extracted by the two-dimensional Gabor wavelet transform as pork quality features. Finally, we build a pork quality classification model using the fuzzy C-means clustering algorithm. Through the experiment of extracting feature wavelengths, we found that although the hyperspectral images between adjacent bands have a strong linear correlation, they show a significant non-linear manifold relationship across the entire band. The K-medoids clustering algorithm based on manifold distance used in this paper for selecting the characteristic wavelengths is therefore more reasonable than traditional principal component analysis (PCA). The classification results show that hyperspectral imaging technology can distinguish among chilled meat, thawed meat and spoiled meat accurately.

  4. Characterization of image quality and image-guidance performance of a preclinical microirradiator

    SciTech Connect

    Clarkson, R.; Lindsay, P. E.; Ansell, S.; Wilson, G.; Jelveh, S.; Hill, R. P.; Jaffray, D. A.

    2011-02-15

    Purpose: To assess image quality and image-guidance capabilities of a cone-beam CT based small-animal image-guided irradiation unit (micro-IGRT). Methods: A micro-IGRT system has been developed in collaboration with the authors' laboratory as a means to study the radiobiological effects of conformal radiation dose distributions in small animals. The system, the X-Rad 225Cx, consists of a 225 kVp x-ray tube and a flat-panel amorphous silicon detector mounted on a rotational C-arm gantry and is capable of both fluoroscopic x-ray and cone-beam CT imaging, as well as image-guided placement of the radiation beams. Image quality (voxel noise, modulation transfer, CT number accuracy, and geometric accuracy characteristics) was assessed using water cylinder and micro-CT test phantoms. Image guidance was tested by analyzing the dose delivered to radiochromic films fixed to BB's through the end-to-end process of imaging, targeting the center of the BB, and irradiation of the film/BB in order to compare the offset between the center of the field and the center of the BB. Image quality and geometric studies were repeated over a 5-7 month period to assess stability. Results: CT numbers reported were found to be linear (R² ≥ 0.998) and the noise for images of homogeneous water phantom was 30 HU at imaging doses of approximately 1 cGy (to water). The presampled MTF at 50% and 10% reached 0.64 and 1.35 mm⁻¹, respectively. Targeting accuracy by means of film irradiations was shown to have a mean displacement error of [Δx, Δy, Δz] = [−0.12, −0.05, −0.02] mm, with standard deviations of [0.02, 0.20, 0.17] mm. The system has proven to be stable over time, with both the image quality and image-guidance performance being reproducible for the duration of the studies. Conclusions: The micro-IGRT unit provides soft-tissue imaging of small-animal anatomy at acceptable imaging doses (≤1 cGy). The geometric accuracy and targeting systems permit dose placement with

  5. Characterization of image quality and image-guidance performance of a preclinical microirradiator

    PubMed Central

    Clarkson, R.; Lindsay, P. E.; Ansell, S.; Wilson, G.; Jelveh, S.; Hill, R. P.; Jaffray, D. A.

    2011-01-01

    Purpose: To assess image quality and image-guidance capabilities of a cone-beam CT based small-animal image-guided irradiation unit (micro-IGRT). Methods: A micro-IGRT system has been developed in collaboration with the authors’ laboratory as a means to study the radiobiological effects of conformal radiation dose distributions in small animals. The system, the X-Rad 225Cx, consists of a 225 kVp x-ray tube and a flat-panel amorphous silicon detector mounted on a rotational C-arm gantry and is capable of both fluoroscopic x-ray and cone-beam CT imaging, as well as image-guided placement of the radiation beams. Image quality (voxel noise, modulation transfer, CT number accuracy, and geometric accuracy characteristics) was assessed using water cylinder and micro-CT test phantoms. Image guidance was tested by analyzing the dose delivered to radiochromic films fixed to BB’s through the end-to-end process of imaging, targeting the center of the BB, and irradiation of the film∕BB in order to compare the offset between the center of the field and the center of the BB. Image quality and geometric studies were repeated over a 5–7 month period to assess stability. Results: CT numbers reported were found to be linear (R2≥0.998) and the noise for images of homogeneous water phantom was 30 HU at imaging doses of approximately 1 cGy (to water). The presampled MTF at 50% and 10% reached 0.64 and 1.35 mm−1, respectively. Targeting accuracy by means of film irradiations was shown to have a mean displacement error of [Δx,Δy,Δz]=[−0.12,−0.05,−0.02] mm, with standard deviations of [0.02, 0.20, 0.17] mm. The system has proven to be stable over time, with both the image quality and image-guidance performance being reproducible for the duration of the studies. Conclusions: The micro-IGRT unit provides soft-tissue imaging of small-animal anatomy at acceptable imaging doses (≤1 cGy). The geometric accuracy and targeting systems permit dose placement with submillimeter

  6. Optimizing 3D image quality and performance for stereoscopic gaming

    NASA Astrophysics Data System (ADS)

    Flack, Julien; Sanderson, Hugh; Pegg, Steven; Kwok, Simon; Paterson, Daniel

    2009-02-01

    The successful introduction of stereoscopic TV systems, such as Samsung's 3D Ready Plasma, requires high quality 3D content to be commercially available to the consumer. Console and PC games provide the most readily accessible source of high quality 3D content. This paper describes innovative developments in a generic, PC-based game driver architecture that addresses the two key issues affecting 3D gaming: quality and speed. At the heart of the quality issue are the same considerations that studios face producing stereoscopic renders from CG movies: how best to perform the mapping from a geometric CG environment into the stereoscopic display volume. The major difference being that for game drivers this mapping cannot be choreographed by hand but must be automatically calculated in real-time without significant impact on performance. Performance is a critical issue when dealing with gaming. Stereoscopic gaming has traditionally meant rendering the scene twice with the associated performance overhead. An alternative approach is to render the scene from one virtual camera position and use information from the z-buffer to generate a stereo pair using Depth-Image-Based Rendering (DIBR). We analyze this trade-off in more detail and provide some results relating to both 3D image quality and render performance.
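
    The depth-image-based rendering (DIBR) alternative mentioned above can be illustrated with a very small forward-mapping sketch: each pixel is shifted horizontally by a disparity derived from the z-buffer, and disocclusion holes are filled crudely. The disparity mapping and hole filling here are simplistic assumptions, not a production game-driver implementation.

```python
import numpy as np

def dibr_view(color, depth, max_disparity=8):
    """Forward-map every pixel by a horizontal disparity derived from depth
    (nearer pixels shift more) to synthesise one eye's view, then fill
    disocclusion holes by propagating the last filled pixel along each row."""
    h, w = depth.shape
    disp = np.round(max_disparity * (1.0 - depth / depth.max())).astype(int)
    view = np.zeros_like(color)
    filled = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            xt = x + disp[y, x]
            if 0 <= xt < w:
                view[y, xt] = color[y, x]
                filled[y, xt] = True
        last = view[y, 0].copy()            # crude hole filling, left to right
        for x in range(w):
            if filled[y, x]:
                last = view[y, x].copy()
            else:
                view[y, x] = last
    return view

if __name__ == "__main__":
    col = np.random.randint(0, 255, (48, 64, 3), dtype=np.uint8)
    dep = np.tile(np.linspace(1.0, 10.0, 64), (48, 1))   # depth grows to the right
    print(dibr_view(col, dep).shape)                      # (48, 64, 3)
```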

  7. DES exposure checker: Dark Energy Survey image quality control crowdsourcer

    NASA Astrophysics Data System (ADS)

    Melchior, Peter; Sheldon, Erin; Drlica-Wagner, Alex; Rykoff, Eli S.

    2015-11-01

    DES exposure checker renders science-grade images directly to a web browser and allows users to mark problematic features from a set of predefined classes, thus allowing image quality control for the Dark Energy Survey to be crowdsourced through its web application. Users can also generate custom labels to help identify previously unknown problem classes; generated reports are fed back to hardware and software experts to help mitigate and eliminate recognized issues. These problem reports allow rapid correction of artifacts that otherwise may be too subtle or infrequent to be recognized.

  8. Achieving high research reporting quality through the use of computational ontologies.

    PubMed

    Zaveri, Amrapali; Cofiel, Luciana; Shah, Jatin; Pradhan, Shreyasee; Chan, Edwin; Dameron, Olivier; Pietrobon, Ricardo; Ang, Beng Ti

    2010-12-01

    Systematic reviews and meta-analyses constitute one of the central pillars of evidence-based medicine. However, clinical trials are often poorly reported, which delays meta-analyses and consequently the translation of clinical research findings to clinical practice. We propose a Center of Excellence in Research Reporting in Neurosurgery (CERR-N) and the creation of a clinically significant computational ontology to encode randomized controlled trial (RCT) studies in neurosurgery. A 128-element computational ontology was derived from the Trial Bank ontology by omitting classes which were not required to perform meta-analysis. Three researchers from our team each tagged five randomly selected RCTs published in the last 5 years (2004-2008) in the Journal of Neurosurgery (JoN), Neurosurgery Journal (NJ) and Journal of Neurotrauma (JoNT). We evaluated inter- and intra-observer reliability for the ontology using percent agreement and the kappa coefficient. The inter-observer agreement was 76.4%, 75.97% and 74.9% and intra-observer agreement was 89.8%, 80.8% and 86.56% for JoN, NJ and JoNT respectively. The inter-observer kappa coefficient was 0.60, 0.54 and 0.53 and the intra-observer kappa coefficient was 0.79, 0.82 and 0.79 for JoN, NJ and JoNT journals respectively. The high degree of inter- and intra-observer agreement confirms tagging consistency in sections of a given scientific manuscript. Standardizing reporting for neurosurgery articles can be reliably achieved through the integration of a computational ontology within the context of a CERR-N. This approach holds potential for the overall improvement in the quality of reporting of RCTs in neurosurgery, ultimately streamlining the translation of clinical research findings to improvement in patient care. PMID:20953737
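
    The inter- and intra-observer statistics quoted above (percent agreement and the kappa coefficient) follow the standard Cohen's kappa definition, sketched below with made-up labels for two raters.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters over the same items (nominal labels):
    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement and
    p_e the chance agreement from the marginal label frequencies."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    labels = set(freq_a) | set(freq_b)
    p_e = sum((freq_a[label] / n) * (freq_b[label] / n) for label in labels)
    return (p_o - p_e) / (1 - p_e)

if __name__ == "__main__":
    a = ["yes", "yes", "no", "yes", "no", "no", "yes", "yes"]
    b = ["yes", "no",  "no", "yes", "no", "yes", "yes", "yes"]
    print(round(cohens_kappa(a, b), 2))   # 0.47 for this toy example
```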

  9. Comparing hardcopy and softcopy results in the study of the impact of workflow on perceived reproduction quality of fine art images

    NASA Astrophysics Data System (ADS)

    Farnand, Susan; Jiang, Jun; Frey, Franziska

    2011-01-01

    A project, supported by the Andrew W. Mellon Foundation, is currently underway to evaluate current practices in fine art image reproduction, determine the image quality generally achievable, and establish a suggested framework for art image interchange. To determine the image quality currently being achieved, experimentation has been conducted in which a set of objective targets and pieces of artwork in various media were imaged by participating museums and other cultural heritage institutions. Prints and images for display made from the delivered image files at the Rochester Institute of Technology were used as stimuli in psychometric testing in which observers were asked to evaluate the prints as reproductions of the original artwork and as stand-alone images. The results indicated that there were limited differences between assessments made using displayed images relative to printed reproductions. Further, the differences between rankings made with and without the original artwork present were much smaller than expected.

  10. Quality Assurance of Multiport Image-Guided Minimally Invasive Surgery at the Lateral Skull Base

    PubMed Central

    Nau-Hermes, Maria; Schmitt, Robert; Becker, Meike; El-Hakimi, Wissam; Hansen, Stefan; Klenzner, Thomas; Schipper, Jörg

    2014-01-01

    For multiport image-guided minimally invasive surgery at the lateral skull base a quality management is necessary to avoid the damage of closely spaced critical neurovascular structures. So far there is no standardized method applicable independently from the surgery. Therefore, we adapt a quality management method, the quality gates (QG), which is well established in, for example, the automotive industry and apply it to multiport image-guided minimally invasive surgery. QG divide a process into different sections. Passing between sections can only be achieved if previously defined requirements are fulfilled which secures the process chain. An interdisciplinary team of otosurgeons, computer scientists, and engineers has worked together to define the quality gates and the corresponding criteria that need to be fulfilled before passing each quality gate. In order to evaluate the defined QG and their criteria, the new surgery method was applied with a first prototype at a human skull cadaver model. We show that the QG method can ensure a safe multiport minimally invasive surgical process at the lateral skull base. Therewith, we present an approach towards the standardization of quality assurance of surgical processes. PMID:25105146

  11. Quality assurance of multiport image-guided minimally invasive surgery at the lateral skull base.

    PubMed

    Nau-Hermes, Maria; Schmitt, Robert; Becker, Meike; El-Hakimi, Wissam; Hansen, Stefan; Klenzner, Thomas; Schipper, Jörg

    2014-01-01

    For multiport image-guided minimally invasive surgery at the lateral skull base a quality management is necessary to avoid the damage of closely spaced critical neurovascular structures. So far there is no standardized method applicable independently from the surgery. Therefore, we adapt a quality management method, the quality gates (QG), which is well established in, for example, the automotive industry and apply it to multiport image-guided minimally invasive surgery. QG divide a process into different sections. Passing between sections can only be achieved if previously defined requirements are fulfilled which secures the process chain. An interdisciplinary team of otosurgeons, computer scientists, and engineers has worked together to define the quality gates and the corresponding criteria that need to be fulfilled before passing each quality gate. In order to evaluate the defined QG and their criteria, the new surgery method was applied with a first prototype at a human skull cadaver model. We show that the QG method can ensure a safe multiport minimally invasive surgical process at the lateral skull base. Therewith, we present an approach towards the standardization of quality assurance of surgical processes. PMID:25105146

  12. Assessing image quality and dose reduction of a new x-ray computed tomography iterative reconstruction algorithm using model observers

    SciTech Connect

    Tseng, Hsin-Wu Kupinski, Matthew A.; Fan, Jiahua; Sainath, Paavana; Hsieh, Jiang

    2014-07-15

    Purpose: A number of different techniques have been developed to reduce radiation dose in x-ray computed tomography (CT) imaging. In this paper, the authors will compare task-based measures of image quality of CT images reconstructed by two algorithms: conventional filtered back projection (FBP), and a new iterative reconstruction algorithm (IR). Methods: To assess image quality, the authors used the performance of a channelized Hotelling observer acting on reconstructed image slices. The selected channels are dense difference-of-Gaussian (DDOG) channels. A body phantom and a head phantom were imaged 50 times at different dose levels to obtain the data needed to assess image quality. The phantoms consisted of uniform backgrounds with low contrast signals embedded at various locations. The tasks the observer model performed included (1) detection of a signal of known location and shape, and (2) detection and localization of a signal of known shape. The employed DDOG channels are based on the response of the human visual system. Performance was assessed using the areas under ROC curves and areas under localization ROC curves. Results: For signal known exactly (SKE) and location unknown/signal shape known tasks with circular signals of different sizes and contrasts, the authors’ task-based measures showed that a FBP-equivalent image quality can be achieved at lower dose levels using the IR algorithm. For the SKE case, the range of dose reduction is 50%–67% (head phantom) and 68%–82% (body phantom). For the location unknown/signal shape known study, the dose reduction range is 67%–75% for the head phantom and 67%–77% for the body phantom. These results suggest that the IR images at lower dose settings can reach the same image quality when compared to full dose conventional FBP images. Conclusions: The work presented provides an objective way to quantitatively assess the image quality of a newly introduced CT IR algorithm. The performance of the
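
    A compact sketch of a channelized Hotelling observer is given below, using rotationally symmetric difference-of-Gaussian channels as a stand-in for the paper's DDOG channels; the channel widths, signal, and noise model are illustrative assumptions, not the study's phantom data.

```python
import numpy as np

def dog_channels(size, sigmas=(1.0, 2.0, 4.0, 8.0)):
    """Rotationally symmetric difference-of-Gaussian channels, flattened to a
    (pixels, n_channels) matrix."""
    y, x = np.mgrid[:size, :size] - size // 2
    r2 = x ** 2 + y ** 2
    chans = []
    for s1, s2 in zip(sigmas[:-1], sigmas[1:]):
        g1 = np.exp(-r2 / (2 * s1 ** 2)) / (2 * np.pi * s1 ** 2)
        g2 = np.exp(-r2 / (2 * s2 ** 2)) / (2 * np.pi * s2 ** 2)
        chans.append((g1 - g2).ravel())
    return np.stack(chans, axis=1)

def cho_detectability(signal_present, signal_absent, channels):
    """Channelized Hotelling observer SNR: project onto channels, build the
    template S^-1 (mean_present - mean_absent), and score both classes."""
    vp = signal_present.reshape(len(signal_present), -1) @ channels
    va = signal_absent.reshape(len(signal_absent), -1) @ channels
    s = 0.5 * (np.cov(vp, rowvar=False) + np.cov(va, rowvar=False))
    template = np.linalg.solve(s, vp.mean(axis=0) - va.mean(axis=0))
    tp, ta = vp @ template, va @ template
    return abs(tp.mean() - ta.mean()) / np.sqrt(0.5 * (tp.var() + ta.var()))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, size = 200, 32
    signal = np.zeros((size, size)); signal[13:19, 13:19] = 3.0   # low-contrast square
    absent = rng.normal(0, 5, (n, size, size))
    present = rng.normal(0, 5, (n, size, size)) + signal
    print(round(cho_detectability(present, absent, dog_channels(size)), 2))
```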

  13. Reproducibility of Mammography Units, Film Processing and Quality Imaging

    NASA Astrophysics Data System (ADS)

    Gaona, Enrique

    2003-09-01

    The purpose of this study was to carry out an exploratory survey of the problems of quality control in mammography and processor units as a diagnosis of the current situation of mammography facilities. Measurements of reproducibility, optical density, optical difference and gamma index are included. Breast cancer is the most frequently diagnosed cancer and is the second leading cause of cancer death among women in the Mexican Republic. Mammography is a radiographic examination specially designed for detecting breast pathology. We found that the reproducibility problems of the AEC are smaller than those of the processor units, because almost all processors fall outside the acceptable variation limits, which can affect mammography image quality and the dose to the breast. Only four mammography units agree with the minimum score established by the ACR and FDA for the phantom image.

  14. Exploring V1 by modeling the perceptual quality of images.

    PubMed

    Zhang, Fan; Jiang, Wenfei; Autrusseau, Florent; Lin, Weisi

    2014-01-24

    We propose an image quality model based on phase and amplitude differences between a reference and a distorted image. The proposed model is motivated by the fact that polar representations can separate visual information in a more independent and efficient manner than Cartesian representations in the primary visual cortex (V1). We subsequently estimate the model parameters from a large subjective data set using maximum likelihood methods. By comparing the various model hypotheses on the functional form about the phase and amplitude, we find that: (a) discrimination of visual orientation is important for quality assessment and yet a coarse level of such discrimination seems sufficient; and (b) a product-based amplitude-phase combination before pooling is effective, suggesting an interesting viewpoint about the functional structure of the simple cells and complex cells in V1.

  15. Statistical iterative reconstruction to improve image quality for digital breast tomosynthesis

    PubMed Central

    Xu, Shiyu; Lu, Jianping; Zhou, Otto; Chen, Ying

    2015-01-01

    Purpose: Digital breast tomosynthesis (DBT) is a novel modality with the potential to improve early detection of breast cancer by providing three-dimensional (3D) imaging with a low radiation dose. 3D image reconstruction presents some challenges: cone-beam and flat-panel geometry, and highly incomplete sampling. A promising means to overcome these challenges is statistical iterative reconstruction (IR), since it provides the flexibility of accurate physics modeling and a general description of system geometry. The authors’ goal was to develop techniques for applying statistical IR to tomosynthesis imaging data. Methods: These techniques include the following: a physics model with a local voxel-pair based prior with flexible parameters to fine-tune image quality; a precomputed parameter λ in the prior, to remove data dependence and to achieve a uniform resolution property; an effective ray-driven technique to compute the forward and backprojection; and an oversampled, ray-driven method to perform high resolution reconstruction with a practical region-of-interest technique. To assess the performance of these techniques, the authors acquired phantom data on the stationary DBT prototype system. To solve the estimation problem, the authors proposed an optimization-transfer based algorithm framework that potentially allows fewer iterations to achieve an acceptably converged reconstruction. Results: IR improved the detectability of low-contrast and small microcalcifications, reduced cross-plane artifacts, improved spatial resolution, and lowered noise in reconstructed images. Conclusions: Although the computational load remains a significant challenge for practical development, the superior image quality provided by statistical IR, combined with advancing computational techniques, may bring benefits to screening, diagnostics, and intraoperative imaging in clinical applications. PMID:26328987

  16. A virtual image chain for perceived image quality of medical display

    NASA Astrophysics Data System (ADS)

    Marchessoux, Cédric; Jung, Jürgen

    2006-03-01

    This paper describes a virtual image chain for medical display (project VICTOR, granted in the 5th Framework Programme by the European Commission). The chain starts from raw data of an image digitizer (CR, DR) or synthetic patterns and covers image enhancement (MUSICA by Agfa) and both display possibilities, hardcopy (film on a viewing box) and softcopy (monitor). A key feature of the chain is its complete image-wise approach. A first prototype is implemented in an object-oriented software platform. The display chain consists of several modules. Raw images are either taken from scanners (CR-DR) or from a pattern generator, in which characteristics of DR-CR systems are introduced by their MTF and their dose-dependent Poisson noise. The image undergoes image enhancement and is then passed to the display stage. For softcopy display, color and monochrome monitors are used in the simulation. The image is down-sampled. The non-linear response of a color monitor is taken into account by the GOG or S-curve model, whereas the Standard Gray-Scale-Display-Function (DICOM) is used for monochrome display. The MTF of the monitor is applied to the image in intensity levels. For hardcopy display, the combination of film, printer, lightbox and viewing condition is modeled. The image is up-sampled and the DICOM-GSDF or a Kanamori Look-Up-Table is applied. An anisotropic model for the MTF of the printer is applied to the image in intensity levels. The density-dependent color (XYZ) of the hardcopy film is introduced by Look-Up-Tables. Finally a Human Visual System Model is applied to the intensity images (XYZ in terms of cd/m2) in order to eliminate nonvisible differences. Comparison leads to visible differences, which are quantified by higher-order image quality metrics. A specific image viewer is used for the visualization of the intensity image and the visual difference maps.

  17. Head Start Program Quality: Examination of Classroom Quality and Parent Involvement in Predicting Children's Vocabulary, Literacy, and Mathematics Achievement Trajectories

    ERIC Educational Resources Information Center

    Wen, Xiaoli; Bulotsky-Shearer, Rebecca J.; Hahs-Vaughn, Debbie L.; Korfmacher, Jon

    2012-01-01

    Guided by a developmental-ecological framework and Head Start's two-generational approach, this study examined two dimensions of Head Start program quality, classroom quality and parent involvement and their unique and interactive contribution to children's vocabulary, literacy, and mathematics skills growth from the beginning of Head Start…

  18. A quality assurance program for image quality of cone-beam CT guidance in radiation therapy

    SciTech Connect

    Bissonnette, Jean-Pierre; Moseley, Douglas J.; Jaffray, David A.

    2008-05-15

    The clinical introduction of volumetric x-ray image-guided radiotherapy systems necessitates formal commissioning of the hardware and image-guided processes to be used and drafts quality assurance (QA) for both hardware and processes. Satisfying both requirements provides confidence on the system's ability to manage geometric variations in patient setup and internal organ motion. As these systems become a routine clinical modality, the authors present data from their QA program tracking the image quality performance of ten volumetric systems over a period of 3 years. These data are subsequently used to establish evidence-based tolerances for a QA program. The volumetric imaging systems used in this work combines a linear accelerator with conventional x-ray tube and an amorphous silicon flat-panel detector mounted orthogonally from the accelerator central beam axis, in a cone-beam computed tomography (CBCT) configuration. In the spirit of the AAPM Report No. 74, the present work presents the image quality portion of their QA program; the aspects of the QA protocol addressing imaging geometry have been presented elsewhere. Specifically, the authors are presenting data demonstrating the high linearity of CT numbers, the uniformity of axial reconstructions, and the high contrast spatial resolution of ten CBCT systems (1-2 mm) from two commercial vendors. They are also presenting data accumulated over the period of several months demonstrating the long-term stability of the flat-panel detector and of the distances measured on reconstructed volumetric images. Their tests demonstrate that each specific CBCT system has unique performance. In addition, scattered x rays are shown to influence the imaging performance in terms of spatial resolution, axial reconstruction uniformity, and the linearity of CT numbers.

  19. Digital TV image quality improvement considering distributions of edge characteristic

    NASA Astrophysics Data System (ADS)

    Hong, Sang-Gi; Kim, Jae-Chul; Park, Jong-Hyun

    2003-12-01

    Sharpness enhancement is a widely used technique for improving the perceptual quality of an image by emphasizing its high-frequency component. In this paper, a psychophysical experiment on sharpness enhancement using simple linear unsharp masking is conducted with 20 observers. The experimental results are analyzed using z-scores and linear regression. Finally, using these results, we suggest an observer-preferred sharpness enhancement method for digital television.
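
    The linear unsharp masking used in the experiment follows the standard form sharpened = original + amount * (original - blurred); the sketch below uses illustrative sigma and amount values, not the study's settings.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(image, sigma=2.0, amount=0.8):
    """Linear unsharp masking: add back a scaled high-frequency residual,
    sharpened = original + amount * (original - blurred)."""
    blurred = gaussian_filter(image.astype(np.float64), sigma)
    return np.clip(image + amount * (image - blurred), 0, 255)

if __name__ == "__main__":
    img = np.zeros((32, 32)); img[:, 16:] = 200.0            # vertical step edge
    out = unsharp_mask(img)
    print(img[16, 14:18].round(1), out[16, 14:18].round(1))  # overshoot at the edge
```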

  20. Factors Affecting Image Quality in Near-field Ultra-wideband Radar Imaging for Biomedical Applications

    NASA Astrophysics Data System (ADS)

    Curtis, Charlotte

    Near-field ultra-wideband radar imaging has potential as a new breast imaging modality. While a number of reconstruction algorithms have been published with the goal of reducing undesired responses or clutter, an in-depth analysis of the dominant sources of clutter has not been conducted. In this thesis, time domain radar image reconstruction is demonstrated to be equivalent to frequency domain synthetic aperture radar. This reveals several assumptions inherent to the reconstruction algorithm related to radial spreading, point source antennas, and the independent summation of point scatterers. Each of these assumptions is examined in turn to determine which has the greatest impact on the resulting image quality and interpretation. In addition, issues related to heterogeneous and dispersive media are addressed. Variations in imaging parameters are tested by observing their influence on the system point spread function. Results are then confirmed by testing on simple and detailed simulation models, followed by data acquired from human volunteers. Recommended parameters are combined into a new imaging operator that is demonstrated to generate results comparable to a more accurate signal model, but with a 50 fold improvement in computational efficiency. Finally, the most significant factor affecting image quality is determined to be the estimate of tissue properties used to form the image.

  1. Incorporating detection tasks into the assessment of CT image quality

    NASA Astrophysics Data System (ADS)

    Scalzetti, E. M.; Huda, W.; Ogden, K. M.; Khan, M.; Roskopf, M. L.; Ogden, D.

    2006-03-01

    The purpose of this study was to compare traditional and task-dependent assessments of CT image quality. Chest CT examinations were obtained with a standard protocol for subjects participating in a lung cancer-screening project. Images were selected for patients whose weight ranged from 45 kg to 159 kg. Six ABR-certified radiologists subjectively ranked these images using a traditional six-point ranking scheme that ranged from 1 (inadequate) to 6 (excellent). Three subtle diagnostic tasks were identified: (1) a lung section containing a sub-centimeter nodule of ground-glass opacity in an upper lung; (2) a mediastinal section with a lymph node of soft tissue density in the mediastinum; (3) a liver section with a rounded low attenuation lesion in the liver periphery. Each observer was asked to estimate the probability of detecting each type of lesion in the appropriate CT section using a six-point scale ranging from 1 (< 10%) to 6 (> 90%). Traditional and task-dependent measures of image quality were plotted as a function of patient weight. For the lung section, task-dependent evaluations were very similar to those obtained using the traditional scoring scheme, but with larger inter-observer differences. Task-dependent evaluations for the mediastinal section showed no obvious trend with subject weight, whereas the traditional score decreased from ~4.9 for smaller subjects to ~3.3 for the larger subjects. Task-dependent evaluations for the liver section showed a decreasing trend from ~4.1 for the smaller subjects to ~1.9 for the larger subjects, whereas the traditional evaluation had a markedly narrower range of scores. A task-dependent method of assessing CT image quality can be implemented with relative ease, and is likely to be more meaningful in the clinical setting.

  2. Image quality criteria for wide-field x-ray imaging applications

    NASA Astrophysics Data System (ADS)

    Thompson, Patrick L.; Harvey, James E.

    1999-10-01

    For staring, wide-field applications, such as a solar x-ray imager, the severe off-axis aberrations of the classical Wolter Type-I grazing incidence x-ray telescope design drastically limit the 'resolution' near the solar limb. A specification upon on-axis fractional encircled energy is thus not an appropriate image quality criterion for such wide-angle applications. A more meaningful image quality criterion would be a field-weighted-average measure of 'resolution.' Since surface scattering effects from residual optical fabrication errors are always substantial at these very short wavelengths, the field-weighted-average half-power radius is a far more appropriate measure of aerial resolution. If an ideal mosaic detector array is being used in the focal plane, the finite pixel size provides a practical limit to this system performance. Thus, the total number of aerial resolution elements enclosed by the operational field-of-view, expressed as a percentage of the number of ideal detector pixels, is a further improved image quality criterion. In this paper we describe the development of an image quality criterion for wide-field applications of grazing incidence x-ray telescopes which leads to a new class of grazing incidence designs described in a following companion paper.

  3. Beef quality parameters estimation using ultrasound and color images

    PubMed Central

    2015-01-01

    Background Beef quality measurement is a complex task with high economic impact. There is high interest in obtaining an automatic estimation of quality parameters in live cattle or post mortem. In this paper we set out to obtain beef quality estimates from the analysis of ultrasound (in vivo) and color images (post mortem), with the measurement of various parameters related to tenderness and amount of meat: rib eye area, percentage of intramuscular fat and backfat thickness or subcutaneous fat. Proposal An algorithm based on curve evolution is implemented to calculate the rib eye area. The backfat thickness is estimated from the profile of distances between two curves that limit the steak and the rib eye, previously detected. A model based on Support Vector Regression (SVR) is trained to estimate the intramuscular fat percentage. A set of features extracted from a region of interest, previously detected in both ultrasound and color images, was proposed. In all cases, a complete evaluation was performed with different databases including: color and ultrasound images acquired by a beef industry expert, intramuscular fat estimation obtained by an expert using a commercial software, and chemical analysis. Conclusions The proposed algorithms show good results to calculate the rib eye area and the backfat thickness measure and profile. They are also promising in predicting the percentage of intramuscular fat. PMID:25734452
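
    The SVR-based estimation of intramuscular fat percentage can be sketched with scikit-learn as below; the feature set, kernel, and hyperparameters are placeholders, and the data are synthetic rather than the paper's ultrasound/colour-image features and chemical references.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Synthetic stand-ins for ROI features (e.g. mean grey level, texture energy,
# fat-pixel fraction) and chemically measured intramuscular fat percentages.
rng = np.random.default_rng(1)
X = rng.normal(size=(60, 3))
y = 3.0 + 1.5 * X[:, 0] - 0.8 * X[:, 2] + rng.normal(0.0, 0.3, 60)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(X[:45], y[:45])                                  # train on 45 samples
mae = np.mean(np.abs(model.predict(X[45:]) - y[45:]))      # test on the rest
print(round(float(mae), 2))                                # mean absolute error (fat %)
```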

  4. The Role of Self-Image on Reading Rate and Comprehension Achievement.

    ERIC Educational Resources Information Center

    Brown, James I.; McDowell, Earl E.

    1979-01-01

    Reports that students in a college reading efficiency course who had high self-images read significantly faster than those with low self-images, that students with initially high self-images did not maintain those images, that males had higher self-images and read faster than did females, and that there were negative relationships between speed…

  5. Objective assessment of image quality. IV. Application to adaptive optics

    PubMed Central

    Barrett, Harrison H.; Myers, Kyle J.; Devaney, Nicholas; Dainty, Christopher

    2008-01-01

    The methodology of objective assessment, which defines image quality in terms of the performance of specific observers on specific tasks of interest, is extended to temporal sequences of images with random point spread functions and applied to adaptive imaging in astronomy. The tasks considered include both detection and estimation, and the observers are the optimal linear discriminant (Hotelling observer) and the optimal linear estimator (Wiener). A general theory of first- and second-order spatiotemporal statistics in adaptive optics is developed. It is shown that the covariance matrix can be rigorously decomposed into three terms representing the effect of measurement noise, random point spread function, and random nature of the astronomical scene. Figures of merit are developed, and computational methods are discussed. PMID:17106464
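
    For reference, the two linear observers named above have standard closed forms (textbook definitions, not equations reproduced from the paper):

      t(\mathbf{g}) = \mathbf{w}^{\mathsf{T}}\mathbf{g}, \qquad
      \mathbf{w} = \mathbf{K}_{\mathbf{g}}^{-1}\,\Delta\bar{\mathbf{g}}, \qquad
      \mathrm{SNR}^{2}_{\mathrm{Hot}} = \Delta\bar{\mathbf{g}}^{\mathsf{T}}\,\mathbf{K}_{\mathbf{g}}^{-1}\,\Delta\bar{\mathbf{g}}, \qquad
      \hat{\boldsymbol{\theta}}_{\mathrm{Wiener}} = \bar{\boldsymbol{\theta}} + \mathbf{K}_{\boldsymbol{\theta}\mathbf{g}}\,\mathbf{K}_{\mathbf{g}}^{-1}(\mathbf{g} - \bar{\mathbf{g}})

    where \Delta\bar{\mathbf{g}} is the mean data difference between the two hypotheses, \mathbf{K}_{\mathbf{g}} is the data covariance matrix (the quantity decomposed into the three terms mentioned above), and \mathbf{K}_{\boldsymbol{\theta}\mathbf{g}} is the cross-covariance between the parameters and the data.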

  6. Effects of characteristics of image quality in an immersive environment

    NASA Technical Reports Server (NTRS)

    Duh, Henry Been-Lirn; Lin, James J W.; Kenyon, Robert V.; Parker, Donald E.; Furness, Thomas A.

    2002-01-01

    Image quality issues such as field of view (FOV) and resolution are important for evaluating "presence" and simulator sickness (SS) in virtual environments (VEs). This research examined effects on postural stability of varying FOV, image resolution, and scene content in an immersive visual display. Two different scenes (a photograph of a fountain and a simple radial pattern) at two different resolutions were tested using six FOVs (30, 60, 90, 120, 150, and 180 deg.). Both postural stability, recorded by force plates, and subjective difficulty ratings varied as a function of FOV, scene content, and image resolution. Subjects exhibited more balance disturbance and reported more difficulty in maintaining posture in the wide-FOV, high-resolution, and natural scene conditions.

  7. ECG-synchronized DSA exposure control: improved cervicothoracic image quality

    SciTech Connect

    Kelly, W.M.; Gould, R.; Norman, D.; Brant-Zawadzki, M.; Cox, L.

    1984-10-01

    An electrocardiogram (ECG)-synchronized x-ray exposure sequence was used to acquire digital subtraction angiographic (DSA) images during 13 arterial injection studies of the aortic arch or carotid bifurcations. These gated images were compared with matched ungated DSA images acquired using the same technical factors, contrast material volume, and patient positioning. Subjective assessments by five experienced observers of edge definition, vessel conspicuousness, and overall diagnostic quality showed overall preference for one of the two acquisition methods in 69% of cases studied. Of these, the ECG-synchronized exposure series were rated superior in 76%. These results, as well as the relatively simple and inexpensive modifications required, suggest that routine use of ECG exposure control can facilitate improved arterial DSA evaluations of suspected cervicothoracic vascular disease.

  8. Objective assessment of image quality. IV. Application to adaptive optics.

    PubMed

    Barrett, Harrison H; Myers, Kyle J; Devaney, Nicholas; Dainty, Christopher

    2006-12-01

    The methodology of objective assessment, which defines image quality in terms of the performance of specific observers on specific tasks of interest, is extended to temporal sequences of images with random point spread functions and applied to adaptive imaging in astronomy. The tasks considered include both detection and estimation, and the observers are the optimal linear discriminant (Hotelling observer) and the optimal linear estimator (Wiener). A general theory of first- and second-order spatiotemporal statistics in adaptive optics is developed. It is shown that the covariance matrix can be rigorously decomposed into three terms representing the effect of measurement noise, random point spread function, and random nature of the astronomical scene. Figures of merit are developed, and computational methods are discussed.

  10. Image gathering and digital restoration for fidelity and visual quality

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O.; Alter-Gartenberg, Rachel; Rahman, Zia-Ur

    1991-01-01

    The fidelity and resolution of the traditional Wiener restorations given in the prevalent digital processing literature can be significantly improved when the transformations between the continuous and discrete representations in image gathering and display are accounted for. However, the visual quality of these improved restorations is also more sensitive to the defects caused by aliasing artifacts, colored noise, and ringing near sharp edges. In this paper, these visual defects are characterized, and methods for suppressing them are presented. It is demonstrated how the visual quality of fidelity-maximized images can be improved when (1) the image-gathering system is specifically designed to enhance the performance of the image-restoration algorithm, and (2) the Wiener filter is combined with interactive Gaussian smoothing, synthetic high edge enhancement, and nonlinear tone-scale transformation. The nonlinear transformation is used primarily to enhance the spatial details that are often obscured when the normally wide dynamic range of natural radiance fields is compressed into the relatively narrow dynamic range of film and other displays.
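
    As a point of reference for the restoration step discussed above, a minimal frequency-domain Wiener filter is sketched below. The Gaussian blur and flat noise-to-signal ratio are assumptions made for the example; the paper's own image-gathering model and interactive enhancements are not reproduced here.

      import numpy as np

      def wiener_restore(blurred, psf, nsr=0.01):
          """Frequency-domain Wiener filter: W = H* / (|H|^2 + NSR)."""
          H = np.fft.fft2(psf, s=blurred.shape)
          G = np.fft.fft2(blurred)
          W = np.conj(H) / (np.abs(H) ** 2 + nsr)
          return np.real(np.fft.ifft2(W * G))

      # Example with a hypothetical 7x7 Gaussian point spread function
      x = np.arange(-3, 4)
      g = np.exp(-x**2 / 2.0)
      psf = np.outer(g, g)
      psf /= psf.sum()
      restored = wiener_restore(np.random.rand(64, 64), psf)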

  11. Characterization of image quality for 3D scatter-corrected breast CT images

    NASA Astrophysics Data System (ADS)

    Pachon, Jan H.; Shah, Jainil; Tornai, Martin P.

    2011-03-01

    The goal of this study was to characterize the image quality of our dedicated, quasi-monochromatic spectrum, cone beam breast imaging system under scatter corrected and non-scatter corrected conditions for a variety of breast compositions. CT projections were acquired of a breast phantom containing two concentric sets of acrylic spheres that varied in size (1-8 mm) based on their polar position. The breast phantom was filled with 3 different concentrations of methanol and water, simulating a range of breast densities (0.79-1.0 g/cc); acrylic yarn was sometimes included to simulate connective tissue of a breast. For each phantom condition, 2D scatter was measured for all projection angles. Scatter-corrected and uncorrected projections were then reconstructed with an iterative ordered subsets convex algorithm. Reconstructed image quality was characterized using SNR and contrast analysis, followed by a human observer detection task for the spheres in the different concentric rings. Results show that scatter correction effectively reduces the cupping artifact and improves image contrast and SNR. Results from the observer study indicate that there was no statistical difference in the number or sizes of lesions observed in the scatter-corrected versus uncorrected images for all densities. Nonetheless, applying scatter correction for differing breast conditions improves overall image quality.

  12. Evaluation of scatter effects on image quality for breast tomosynthesis

    SciTech Connect

    Wu Gang; Mainprize, James G.; Boone, John M.; Yaffe, Martin J.

    2009-10-15

    Digital breast tomosynthesis uses a limited number (typically 10-20) of low-dose x-ray projections to produce a pseudo-three-dimensional volume tomographic reconstruction of the breast. The purpose of this investigation was to characterize and evaluate the effect of scattered radiation on the image quality for breast tomosynthesis. In a simulation, scatter point spread functions generated by a Monte Carlo simulation method were convolved over the breast projection to estimate the distribution of scatter for each angle of tomosynthesis projection. The results demonstrate that in the absence of scatter reduction techniques, images will be affected by cupping artifacts, and there will be reduced accuracy of attenuation values inferred from the reconstructed images. The effect of x-ray scatter on the contrast, noise, and lesion signal-difference-to-noise ratio (SDNR) in tomosynthesis reconstruction was measured as a function of the tumor size. When a with-scatter reconstruction was compared to one without scatter for a 5 cm compressed breast, the following results were observed. The contrast in the reconstructed central slice image of a tumorlike mass (14 mm in diameter) was reduced by 30%, the voxel value (inferred attenuation coefficient) was reduced by 28%, and the SDNR fell by 60%. The authors have quantified the degree to which scatter degrades the image quality over a wide range of parameters relevant to breast tomosynthesis, including x-ray beam energy, breast thickness, breast diameter, and breast composition. They also demonstrate, though, that even without a scatter rejection device, the contrast and SDNR in the reconstructed tomosynthesis slice are higher than those of conventional mammographic projection images acquired with a grid at an equivalent total exposure.
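
    The SDNR quoted above is conventionally computed from lesion and background regions of interest; a minimal sketch follows, with the ROI placement and test image as placeholders.

      import numpy as np

      def sdnr(recon, lesion_mask, background_mask):
          """Signal-difference-to-noise ratio between a lesion ROI and its background."""
          signal_diff = recon[lesion_mask].mean() - recon[background_mask].mean()
          noise = recon[background_mask].std()
          return signal_diff / noise

      slice_img = np.random.rand(128, 128)                 # placeholder reconstructed slice
      lesion = np.zeros((128, 128), dtype=bool)
      lesion[60:68, 60:68] = True
      background = np.zeros((128, 128), dtype=bool)
      background[20:40, 20:40] = True
      print(sdnr(slice_img, lesion, background))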

  13. TU-B-19A-01: Image Registration II: TG132-Quality Assurance for Image Registration

    SciTech Connect

    Brock, K; Mutic, S

    2014-06-15

    AAPM Task Group 132 was charged with reviewing the current approaches and solutions for image registration in radiotherapy and with providing recommendations for quality assurance and quality control of these clinical processes. As the results of image registration are always used as the input of another process for planning or delivery, it is important for the user to understand and document the uncertainty associated with the algorithm in general and with the result of a specific registration. The recommendations of this task group, which at the time of abstract submission are being reviewed by the AAPM, include the following components. The user should understand the basic image registration techniques and methods of visualizing image fusion. The disclosure of basic components of the image registration by commercial vendors is critical in this respect. The physicists should perform end-to-end tests of imaging, registration, and planning/treatment systems if image registration is performed on a stand-alone system. A comprehensive commissioning process should be performed and documented by the physicist prior to clinical use of the system. As documentation is important to the safe implementation of this process, a request and report system should be integrated into the clinical workflow. Finally, a patient-specific QA practice should be established for efficient evaluation of image registration results. The implementation of these recommendations will be described and illustrated during this educational session. Learning Objectives: Highlight the importance of understanding the image registration techniques used in their clinic. Describe the end-to-end tests needed for stand-alone registration systems. Illustrate a comprehensive commissioning program using both phantom data and clinical images. Describe a request and report system to ensure communication and documentation. Demonstrate a clinically efficient patient QA practice for efficient evaluation of image

  14. SU-E-J-36: Comparison of CBCT Image Quality for Manufacturer Default Imaging Modes

    SciTech Connect

    Nelson, G

    2015-06-15

    Purpose: CBCT is being increasingly used in patient setup for radiotherapy. Often the manufacturer default scan modes are used for performing these CBCT scans with the assumption that they are the best options. To quantitatively assess the image quality of these scan modes, all of the scan modes were tested as well as options within the reconstruction algorithm. Methods: A CatPhan 504 phantom was scanned on a TrueBeam Linear Accelerator using the manufacturer scan modes (FSRT Head, Head, Image Gently, Pelvis, Pelvis Obese, Spotlight, & Thorax). The Head mode scan was then reconstructed multiple times with all filter options (Smooth, Standard, Sharp, & Ultra Sharp) and all Ring Suppression options (Disabled, Weak, Medium, & Strong). An open source ImageJ tool was created for analyzing the CatPhan 504 images. Results: The MTF curve was primarily dictated by the voxel size and the filter used in the reconstruction algorithm. The filters also impact the image noise. The CNR was worst for the Image Gently mode, followed by FSRT Head and Head. The sharper the filter, the worse the CNR. HU varied significantly between scan modes. Pelvis Obese had lower than expected HU values, while the Image Gently mode had higher than expected HU values. If a therapist tried to use preset window and level settings, they would not show the desired tissue for some scan modes. Conclusion: Knowing the image quality of the preset scan modes will enable users to better optimize their setup CBCT. Evaluation of the scan mode image quality could improve setup efficiency and lead to better treatment outcomes.

  15. Image Quality of the Helioseismic and Magnetic Imager (HMI) Onboard the Solar Dynamics Observatory (SDO)

    NASA Technical Reports Server (NTRS)

    Wachter, R.; Schou, Jesper; Rabello-Soares, M. C.; Miles, J. W.; Duvall, T. L., Jr.; Bush, R. I.

    2011-01-01

    We describe the imaging quality of the Helioseismic and Magnetic Imager (HMI) onboard the Solar Dynamics Observatory (SDO) as measured during the ground calibration of the instrument. We describe the calibration techniques and report our results for the final configuration of HMI. We present the distortion, modulation transfer function, stray light, image shifts introduced by moving parts of the instrument, best focus, field curvature, and the relative alignment of the two cameras. We investigate the gain and linearity of the cameras, and present the measured flat field.

  16. Effect of nonlinear three-dimensional optimized reconstruction algorithm filter on image quality and radiation dose: validation on phantoms.

    PubMed

    Bai, Mei; Chen, Jiuhong; Raupach, Rainer; Suess, Christoph; Tao, Ying; Peng, Mingchen

    2009-01-01

    A new technique called the nonlinear three-dimensional optimized reconstruction algorithm filter (3D ORA filter) is currently used to improve CT image quality and reduce radiation dose. This technical note describes the comparison of image noise, slice sensitivity profile (SSP), contrast-to-noise ratio, and modulation transfer function (MTF) on phantom images processed with and without the 3D ORA filter, and the effect of the 3D ORA filter on CT images at a reduced dose. For CT head scans the noise reduction was up to 54% with typical bone reconstruction algorithms (H70) and a 0.6 mm slice thickness; for liver CT scans the noise reduction was up to 30% with typical high-resolution reconstruction algorithms (B70) and a 0.6 mm slice thickness. MTF and SSP did not change significantly with the application of 3D ORA filtering (P > 0.05), whereas noise was reduced (P < 0.05). The low contrast detectability and MTF of images obtained at a reduced dose and filtered by the 3D ORA were equivalent to those of standard dose CT images; there was no significant difference in image noise of scans taken at a reduced dose, filtered using 3D ORA and standard dose CT (P > 0.05). The 3D ORA filter shows good potential for reducing image noise without affecting image quality attributes such as sharpness. By applying this approach, the same image quality can be achieved whilst gaining a marked dose reduction.

  17. Effect of nonlinear three-dimensional optimized reconstruction algorithm filter on image quality and radiation dose: Validation on phantoms

    SciTech Connect

    Bai Mei; Chen Jiuhong; Raupach, Rainer; Suess, Christoph; Tao Ying; Peng Mingchen

    2009-01-15

    A new technique called the nonlinear three-dimensional optimized reconstruction algorithm filter (3D ORA filter) is currently used to improve CT image quality and reduce radiation dose. This technical note describes the comparison of image noise, slice sensitivity profile (SSP), contrast-to-noise ratio, and modulation transfer function (MTF) on phantom images processed with and without the 3D ORA filter, and the effect of the 3D ORA filter on CT images at a reduced dose. For CT head scans the noise reduction was up to 54% with typical bone reconstruction algorithms (H70) and a 0.6 mm slice thickness; for liver CT scans the noise reduction was up to 30% with typical high-resolution reconstruction algorithms (B70) and a 0.6 mm slice thickness. MTF and SSP did not change significantly with the application of 3D ORA filtering (P>0.05), whereas noise was reduced (P<0.05). The low contrast detectability and MTF of images obtained at a reduced dose and filtered by the 3D ORA were equivalent to those of standard dose CT images; there was no significant difference in image noise of scans taken at a reduced dose, filtered using 3D ORA and standard dose CT (P>0.05). The 3D ORA filter shows good potential for reducing image noise without affecting image quality attributes such as sharpness. By applying this approach, the same image quality can be achieved whilst gaining a marked dose reduction.

  18. Mathematics Achievement among Secondary Students in Relation to Enrollment/Nonenrollment in Music Programs of Differing Content or Quality

    ERIC Educational Resources Information Center

    Van der Vossen, Maria R.

    2012-01-01

    This causal-comparative study examined the relationship between enrollment/non-enrollment in music programs of differing content or quality and mathematical achievement among 739 secondary (grades 8-12) students from four different Maryland counties. The students, both female and male, were divided into sample groups by their participation in a…

  19. Training Needs for Faculty Members: Towards Achieving Quality of University Education in the Light of Technological Innovations

    ERIC Educational Resources Information Center

    Abouelenein, Yousri Attia Mohamed

    2016-01-01

    The aim of this study was to identify training needs of university faculty members, in order to achieve the desired quality in the light of technological innovations. A list of training needs of faculty members was developed in terms of technological innovations in general, developing skills of faculty members in the use of technological…

  20. Does Higher Quality Early Child Care Promote Low-Income Children's Math and Reading Achievement in Middle Childhood?

    ERIC Educational Resources Information Center

    Dearing, Eric; McCartney, Kathleen; Taylor, Beck A.

    2009-01-01

    Higher quality child care during infancy and early childhood (6-54 months of age) was examined as a moderator of associations between family economic status and children's (N = 1,364) math and reading achievement in middle childhood (4.5-11 years of age). Low income was less strongly predictive of underachievement for children who had been in…

  1. Bhutanese Stakeholders' Perceptions about Multi-Grade Teaching as a Strategy for Achieving Quality Universal Primary Education

    ERIC Educational Resources Information Center

    Kucita, Pawan; Kivunja, Charles; Maxwell, T. W.; Kuyini, Bawa

    2013-01-01

    This study employed document analysis and qualitative interviews to explore the perceptions of different Bhutanese stakeholders about multi-grade teaching, which the Bhutanese Government identified as a strategy for achieving quality Universal Primary Education. The data from Ministry officials, teachers and student teachers were analyzed using…

  2. A Multilevel Analysis of the Role of School Quality and Family Background on Students' Mathematics Achievement in the Middle East

    ERIC Educational Resources Information Center

    Kareshki, Hossein; Hajinezhad, Zahra

    2014-01-01

    The purpose of the present study is investigating the correlation between school quality and family socioeconomic background and students' mathematics achievement in the Middle East. The countries in comparison are UAE, Syria, Qatar, Iran, Saudi Arabia, Oman, Lebanon, Jordan, and Bahrain. The study utilized data from IEA's Trends in International…

  3. Homework Works If Homework Quality Is High: Using Multilevel Modeling to Predict the Development of Achievement in Mathematics

    ERIC Educational Resources Information Center

    Dettmers, Swantje; Trautwein, Ulrich; Ludtke, Oliver; Kunter, Mareike; Baumert, Jurgen

    2010-01-01

    The present study examined the associations of 2 indicators of homework quality (homework selection and homework challenge) with homework motivation, homework behavior, and mathematics achievement. Multilevel modeling was used to analyze longitudinal data from a representative national sample of 3,483 students in Grades 9 and 10; homework effects…

  4. The Perception of Preservice Mathematics Teachers on the Role of Scaffolding in Achieving Quality Mathematics Classroom Instruction

    ERIC Educational Resources Information Center

    Bature, Iliya Joseph; Jibrin, Adamu Gagdi

    2015-01-01

    This paper was designed to investigate the perceptions of four preservice mathematics teachers on the role of scaffolding in supporting and assisting them to achieve quality classroom teaching. A collaborative approach to teaching through a community of practice was used to obtain data for the three research objectives that were postulated. Two…

  5. Restoration of images degraded by underwater turbulence using structure tensor oriented image quality (STOIQ) metric.

    PubMed

    Kanaev, A V; Hou, W; Restaino, S R; Matt, S; Gładysz, S

    2015-06-29

    Recent advances in image processing for atmospheric propagation have provided a foundation for tackling the similar but perhaps more complex problem of underwater imaging, which is impaired by scattering and optical turbulence. As a result of these impairments underwater imagery suffers from excessive noise, blur, and distortion. Underwater turbulence impact on light propagation becomes critical at longer distances as well as near thermocline and mixing layers. In this work, we demonstrate a method for restoration of underwater images that are severely degraded by underwater turbulence. The key element of the approach is derivation of a structure tensor oriented image quality metric, which is subsequently incorporated into a lucky patch image processing framework. The utility of the proposed image quality measure guided by local edge strength and orientation is emphasized by comparing the restoration results to an unsuccessful restoration obtained with equivalent processing utilizing a standard isotropic metric. Advantages of the proposed approach versus three other state-of-the-art image restoration techniques are demonstrated using the data obtained in the laboratory water tank and in a natural environment underwater experiment. Quantitative comparison of the restoration results is performed via structural similarity index measure and normalized mutual information metric.
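
    The exact STOIQ formulation is not given in the abstract; as background, the sketch below only computes the generic per-pixel structure tensor (a smoothed outer product of image gradients), whose eigenvalues encode the local edge strength and orientation coherence that the metric is said to exploit.

      import numpy as np
      from scipy import ndimage

      def structure_tensor(img, sigma=2.0):
          """Smoothed structure-tensor components (Jxx, Jxy, Jyy) of a 2-D image."""
          gy, gx = np.gradient(img.astype(float))
          Jxx = ndimage.gaussian_filter(gx * gx, sigma)
          Jxy = ndimage.gaussian_filter(gx * gy, sigma)
          Jyy = ndimage.gaussian_filter(gy * gy, sigma)
          return Jxx, Jxy, Jyy

      img = np.random.rand(64, 64)                          # placeholder frame
      Jxx, Jxy, Jyy = structure_tensor(img)
      half_trace = 0.5 * (Jxx + Jyy)
      root = np.sqrt(np.maximum(half_trace**2 - (Jxx * Jyy - Jxy**2), 0.0))
      lam1, lam2 = half_trace + root, half_trace - root     # eigenvalues: local edge strength
      coherence = (lam1 - lam2) / (lam1 + lam2 + 1e-12)     # orientation coherence in [0, 1]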

  7. Color image quality in projection displays: a case study

    NASA Astrophysics Data System (ADS)

    Strand, Monica; Hardeberg, Jon Y.; Nussbaum, Peter

    2005-01-01

    Recently the use of projection displays has increased dramatically in different applications such as digital cinema, home theatre, and business and educational presentations. Even if the color image quality of these devices has improved significantly over the years, it is still a common situation for users of projection displays that the projected colors differ significantly from the intended ones. The study presented in this paper attempts to analyze the color image quality of a large set of projection display devices, particularly investigating the variations in color reproduction. As a case study, a set of 14 projectors (LCD and DLP technology) at Gjøvik University College was tested under four different conditions: dark and light room, with and without using an ICC-profile. To find out more about the importance of the illumination conditions in a room, and the degree of improvement when using an ICC-profile, the results from the measurements were processed and analyzed. Eye-One Beamer from GretagMacbeth was used to make the profiles. The color image quality was evaluated both visually and by color difference calculations. The results from the analysis indicated large visual and colorimetric differences between the projectors. The DLP projectors generally have smaller color gamuts than the LCD projectors. The color gamuts of older projectors are significantly smaller than those of newer ones. The amount of ambient light reaching the screen is of great importance for the visual impression. If too much reflected and other ambient light reaches the screen, the projected image becomes pale and has low contrast. When using a profile, the differences in colors between the projectors get smaller and the colors appear more correct. For one device, the average ΔE*ab color difference when compared to a relative white reference was reduced from 22 to 11, for another from 13 to 6. Blue colors have the largest variations among the projection displays and make them
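
    The ΔE*ab values quoted above are CIELAB color differences; in its classic CIE76 form this is simply the Euclidean distance in L*a*b* space, sketched below as a generic illustration (not necessarily the exact formula variant used by the authors).

      import numpy as np

      def delta_e_ab(lab1, lab2):
          """CIE76 color difference: Euclidean distance in CIELAB space."""
          return float(np.linalg.norm(np.asarray(lab1, dtype=float) - np.asarray(lab2, dtype=float)))

      # e.g. a measured projected patch versus its intended (reference) color
      print(delta_e_ab((60.0, 20.0, -15.0), (55.0, 12.0, -5.0)))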

  9. Patient dose and image quality from mega-voltage cone beam computed tomography imaging.

    PubMed

    Gayou, Olivier; Parda, David S; Johnson, Mark; Miften, Moyed

    2007-02-01

    The evolution of ever more conformal radiation delivery techniques makes the subject of accurate localization of increasing importance in radiotherapy. Several systems can be utilized including kilo-voltage and mega-voltage cone-beam computed tomography (MV-CBCT), CT on rail or helical tomography. One of the attractive aspects of mega-voltage cone-beam CT is that it uses the therapy beam along with an electronic portal imaging device to image the patient prior to the delivery of treatment. However, the use of a photon beam energy in the mega-voltage range for volumetric imaging degrades the image quality and increases the patient radiation dose. To optimize image quality and patient dose in MV-CBCT imaging procedures, a series of dose measurements in cylindrical and anthropomorphic phantoms using an ionization chamber, radiographic films, and thermoluminescent dosimeters was performed. Furthermore, the dependence of the contrast to noise ratio and spatial resolution of the image upon the dose delivered for a 20-cm-diam cylindrical phantom was evaluated. Depending on the anatomical site and patient thickness, we found that the minimum dose deposited in the irradiated volume was 5-9 cGy and the maximum dose was between 9 and 17 cGy for our clinical MV-CBCT imaging protocols. Results also demonstrated that for high contrast areas such as bony anatomy, low doses are sufficient for image registration and visualization of the three-dimensional boundaries between soft tissue and bony structures. However, as the difference in tissue density decreased, the dose required to identify soft tissue boundaries increased. Finally, the dose delivered by MV-CBCT was simulated using a treatment planning system (TPS), thereby allowing the incorporation of MV-CBCT dose in the treatment planning process. The TPS-calculated doses agreed well with measurements for a wide range of imaging protocols.

  10. Does High Quality Childcare Narrow the Achievement Gap at Two Years of Age?

    ERIC Educational Resources Information Center

    Ruzek, Erik; Burchinal, Margaret; Farkas, George; Duncan, Greg; Dang, Tran; Lee, Weilin

    2011-01-01

    The authors use the ECLS-B, a nationally-representative study of children born in 2001 to report the child care arrangements and quality characteristics for 2-year olds in the United States and to estimate the effects of differing levels of child care quality on two-year old children's cognitive development. Their goal is to test whether high…

  11. Using full-reference image quality metrics for automatic image sharpening

    NASA Astrophysics Data System (ADS)

    Krasula, Lukas; Fliegel, Karel; Le Callet, Patrick; Klíma, Miloš

    2014-05-01

    Image sharpening is a post-processing technique employed for the artificial enhancement of perceived sharpness by shortening the transitions between luminance levels or increasing the contrast on the edges. The greatest challenge in this area is to determine the level of perceived sharpness which is optimal for human observers. This task is complex because the enhancement is beneficial only up to a certain threshold. After reaching it, the quality of the resulting image drops due to the presence of annoying artifacts. Despite the effort dedicated to automatic sharpness estimation, none of the existing metrics is designed to locate this threshold. Nevertheless, it is a very important step towards automatic image sharpening. In this work, the possible usage of full-reference image quality metrics for finding the optimal amount of sharpening is proposed and investigated. An intentionally over-sharpened "anchor image" was included in the calculation as an "anti-reference", and the final metric score was computed from the differences between reference, processed, and anchor versions of the scene. Quality scores obtained from the subjective experiment were used to determine the optimal combination of partial metric values. Five popular fidelity metrics - SSIM, MS-SSIM, IW-SSIM, VIF, and FSIM - were tested. The performance of the proposed approach was then verified in the subjective experiment.
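
    A minimal sketch of the anchor idea is shown below using SSIM alone: each candidate sharpening level is scored by its similarity to the reference minus its similarity to an intentionally over-sharpened anchor. The unsharp-masking model, the score combination, and the test image are placeholders; the paper's trained combination of five metrics is not reproduced here.

      import numpy as np
      from scipy.ndimage import gaussian_filter
      from skimage.metrics import structural_similarity as ssim

      def unsharp(img, amount, sigma=1.5):
          return np.clip(img + amount * (img - gaussian_filter(img, sigma)), 0.0, 1.0)

      reference = np.random.rand(128, 128)       # placeholder unprocessed scene
      anchor = unsharp(reference, 8.0)           # heavily over-sharpened "anti-reference"

      best_amount, best_score = 0.0, -np.inf
      for amount in np.linspace(0.0, 4.0, 21):
          candidate = unsharp(reference, amount)
          score = ssim(reference, candidate, data_range=1.0) - ssim(anchor, candidate, data_range=1.0)
          if score > best_score:
              best_amount, best_score = amount, score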

  12. Derivation of the scan time requirement for maintaining a consistent PET image quality

    NASA Astrophysics Data System (ADS)

    Kim, Jin Su; Lee, Jae Sung; Kim, Seok-Ki

    2015-05-01

    Objectives: the image quality of PET for larger patients is relatively poor, even though the injection dose is optimized considering the NECR characteristics of the PET scanner. This poor image quality is due to the lower level of maximum NECR that can be achieved in these large patients. The aim of this study was to optimize the PET scan time to obtain a consistent PET image quality regardless of body size, based on the relationship between the patient-specific NECR (pNECR) and body weight. Methods: eighty patients (M/F = 53/27, body weight: 059 ± 1 kg) underwent whole-body FDG PET scans using a Philips GEMINI GS PET/CT scanner after an injection of 0.14 mCi/kg FDG. The relationship between the scatter fraction (SF) and body weight was determined by repeated Monte Carlo simulations using a NEMA scatter phantom, the size of which varied according to the relationship between the abdominal circumference and body weight. Using this information, the pNECR was calculated from the prompt and delayed PET sinograms to obtain the prediction equation of NECR vs. body weight. The time scaling factor (FTS) for the scan duration was finally derived to make PET images with equivalent SNR levels. Results: the SF and NECR had the following nonlinear relationships with the body weight: SF = 0.15 · (body weight)^0.3 and NECR = 421.36 · (body weight)^-0.84. The equation derived for FTS was 0.01 · body weight + 0.2, which means that, for example, a 120-kg person should be scanned 1.8 times longer than a 70-kg person, or the scan time for a 40-kg person can be reduced by 30%. Conclusion: the equation of the relative time demand derived in this study will be useful for maintaining consistent PET image quality in clinics.

  13. High-volume image quality assessment systems: tuning performance with an interactive data visualization tool

    NASA Astrophysics Data System (ADS)

    Bresnahan, Patricia A.; Pukinskis, Madeleine; Wiggins, Michael

    1999-03-01

    Image quality assessment systems differ greatly with respect to the number and types of images they need to evaluate, and their overall architectures. Managers of these systems, however, all need to be able to tune and evaluate system performance, requirements that are often overlooked or under-designed during project planning. Performance tuning tools allow users to define acceptable quality standards for image features and attributes by adjusting parameter settings. Performance analysis tools allow users to evaluate and/or predict how well a system performs in a given parameter state. While image assessment algorithms are becoming quite sophisticated, duplicating or surpassing the human decision making process in their speed and reliability, they often require a greater investment in 'training' or fine tuning of parameters in order to achieve optimum performance. This process may involve the analysis of hundreds or thousands of images, generating a large database of files and statistics that can be difficult to sort through and interpret. Compounding the difficulty is the fact that personnel charged with tuning and maintaining the production system may not have the statistical or analytical background required for the task. Meanwhile, hardware innovations have greatly increased the volume of images that can be handled in a given time frame, magnifying the consequences of running a production site with an inadequately tuned system. In this paper, some general requirements for a performance evaluation and tuning data visualization system are discussed. A custom engineered solution to the tuning and evaluation problem is then presented, developed within the context of a high volume image quality assessment, data entry, OCR, and image archival system. A key factor influencing the design of the system was the context-dependent definition of image quality, as perceived by a human interpreter. This led to the development of a five-level, hierarchical approach to image quality

  14. Image quality evaluation of breast tomosynthesis with synchrotron radiation

    SciTech Connect

    Malliori, A.; Bliznakova, K.; Speller, R. D.; Horrocks, J. A.; Rigon, L.; Tromba, G.; Pallikarakis, N.

    2012-09-15

    Purpose: This study investigates the image quality of tomosynthesis slices obtained from several acquisition sets with synchrotron radiation using a breast phantom incorporating details that mimic various breast lesions, in a heterogeneous background. Methods: A complex breast phantom (MAMMAX) with a heterogeneous background and thickness that corresponds to 4.5 cm compressed breast with an average composition of 50% adipose and 50% glandular tissue was assembled using two commercial phantoms. Projection images using acquisition arcs of 24°, 32°, 40°, 48°, and 56° at incident energy of 17 keV were obtained from the phantom with the synchrotron radiation for medical physics beamline at ELETTRA Synchrotron Light Laboratory. The total mean glandular dose was set equal to 2.5 mGy. Tomograms were reconstructed with simple multiple projection algorithm (MPA) and filtered MPA. In the latter case, a median filter, a sinc filter, and a combination of those two filters were applied on the experimental data prior to MPA reconstruction. Visual inspection, contrast to noise ratio, contrast, and artifact spread function were the figures of merit used in the evaluation of the visualisation and detection of low- and high-contrast breast features, as a function of the reconstruction algorithm and acquisition arc. To study the benefits of using monochromatic beams, single projection images at incident energies ranging from 14 to 27 keV were acquired with the same phantom and weighted to synthesize polychromatic images at a typical incident x-ray spectrum with W target. Results: Filters were optimised to reconstruct features with different attenuation characteristics and dimensions. In the case of 6 mm low-contrast details, improved visual appearance as well as higher contrast to noise ratio and contrast values were observed for the two filtered MPA algorithms that exploit the sinc filter. These features are better visualized

  15. Using collective expert judgements to evaluate quality measures of mass spectrometry images

    PubMed Central

    Palmer, Andrew; Ovchinnikova, Ekaterina; Thuné, Mikael; Lavigne, Régis; Guével, Blandine; Dyatlov, Andrey; Vitek, Olga; Pineau, Charles; Borén, Mats; Alexandrov, Theodore

    2015-01-01

    Motivation: Imaging mass spectrometry (IMS) is a maturating technique of molecular imaging. Confidence in the reproducible quality of IMS data is essential for its integration into routine use. However, the predominant method for assessing quality is visual examination, a time consuming, unstandardized and non-scalable approach. So far, the problem of assessing the quality has only been marginally addressed and existing measures do not account for the spatial information of IMS data. Importantly, no approach exists for unbiased evaluation of potential quality measures. Results: We propose a novel approach for evaluating potential measures by creating a gold-standard set using collective expert judgements upon which we evaluated image-based measures. To produce a gold standard, we engaged 80 IMS experts, each to rate the relative quality between 52 pairs of ion images from MALDI-TOF IMS datasets of rat brain coronal sections. Experts’ optional feedback on their expertise, the task and the survey showed that (i) they had diverse backgrounds and sufficient expertise, (ii) the task was properly understood, and (iii) the survey was comprehensible. A moderate inter-rater agreement was achieved with Krippendorff’s alpha of 0.5. A gold-standard set of 634 pairs of images with accompanying ratings was constructed and showed a high agreement of 0.85. Eight families of potential measures with a range of parameters and statistical descriptors, giving 143 in total, were evaluated. Both signal-to-noise and spatial chaos-based measures performed highly with a correlation of 0.7 to 0.9 with the gold standard ratings. Moreover, we showed that a composite measure with the linear coefficients (trained on the gold standard with regularized least squares optimization and lasso) showed a strong linear correlation of 0.94 and an accuracy of 0.98 in predicting which image in a pair was of higher quality. Availability and implementation: The anonymized data collected from the survey
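
    A minimal sketch of fitting such a composite measure follows; the per-image values of the candidate measures and the gold-standard ratings are synthetic placeholders, and the regularization strength is arbitrary rather than the value selected in the paper.

      import numpy as np
      from sklearn.linear_model import Lasso

      rng = np.random.default_rng(0)
      X = rng.random((634, 143))      # placeholder values of the 143 candidate measures per image
      y = rng.random(634)             # placeholder gold-standard quality ratings

      composite = Lasso(alpha=0.01).fit(X, y)
      weights = composite.coef_        # sparse linear coefficients of the composite measure
      predicted_quality = composite.predict(X[:5])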

  16. A hyperspectral imaging prototype for online quality evaluation of pickling cucumbers

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A hyperspectral imaging prototype was developed for online evaluation of external and internal quality of pickling cucumbers. The prototype had several new, unique features including simultaneous reflectance and transmittance imaging and inline, real time calibration of hyperspectral images of each ...

  17. Automating PACS Quality Control with the Vanderbilt Image Processing Enterprise Resource.

    PubMed

    Esparza, Michael L; Welch, E Brian; Landman, Bennett A

    2012-02-12

    Precise image acquisition is an integral part of modern patient care and medical imaging research. Periodic quality control using standardized protocols and phantoms ensures that scanners are operating according to specifications, yet such procedures do not ensure that individual datasets are free from corruption, for example due to patient motion, transient interference, or physiological variability. If unacceptable artifacts are noticed during scanning, a technologist can repeat a procedure. Yet, substantial delays may be incurred if a problematic scan is not noticed until a radiologist reads the scans or an automated algorithm fails. Given scores of slices in typical three-dimensional scans and the wide variety of potential use cases, a technologist cannot practically be expected to inspect all images. In large-scale research, automated pipeline systems have had great success in achieving high throughput. However, clinical and institutional workflows are largely based on DICOM and PACS technologies; these systems are not readily compatible with research systems due to security and privacy restrictions. Hence, quantitative quality control has been relegated to individual investigators and too often neglected. Herein, we propose a scalable system, the Vanderbilt Image Processing Enterprise Resource (VIPER), to integrate modular quality control and image analysis routines with a standard PACS configuration. This server unifies image processing routines across an institutional level and provides a simple interface so that investigators can collaborate to deploy new analysis technologies. VIPER integrates with high-performance computing environments and has successfully analyzed all standard scans from our institutional research center over the course of the last 18 months.

  18. Image quality of a cone beam O-arm 3D imaging system

    NASA Astrophysics Data System (ADS)

    Zhang, Jie; Weir, Victor; Lin, Jingying; Hsiung, Hsiang; Ritenour, E. Russell

    2009-02-01

    The O-arm is a cone beam imaging system designed primarily to support orthopedic surgery and is also used for image-guided and vascular surgery. Using a gantry that can be opened or closed, the O-arm can function as a 2-dimensional (2D) fluoroscopy device or collect 3-dimensional (3D) volumetric imaging data like a CT system. Clinical applications of the O-arm in spine surgical procedures, assessment of pedicle screw position, and kyphoplasty procedures show that the O-arm 3D mode provides enhanced imaging information compared to radiographs or fluoroscopy alone. In this study, the image quality of an O-arm system was quantitatively evaluated. A 20 cm diameter CATPHAN 424 phantom was scanned using the pre-programmed head protocols: small/medium (120 kVp, 100 mAs), large (120 kVp, 128 mAs), and extra-large (120 kVp, 160 mAs) in 3D mode. High resolution reconstruction mode (512×512×0.83 mm) was used to reconstruct images for the analysis of low and high contrast resolution, and noise power spectrum. MTF was measured using the point spread function. The results show that the O-arm image is uniform but with a noise pattern which cannot be removed by simply increasing the mAs. The high contrast resolution of the O-arm system was approximately 9 lp/cm. The system has a 10% MTF at 0.45 mm. The low-contrast resolution could not be determined due to the noise pattern. For surgery, where the location of a structure is emphasized over a survey of all image details, the image quality of the O-arm is well accepted clinically.

  19. Assessment of image quality in x-ray radiography imaging using a small plasma focus device

    NASA Astrophysics Data System (ADS)

    Kanani, A.; Shirani, B.; Jabbari, I.; Mokhtari, J.

    2014-08-01

    This paper offers a comprehensive investigation of image quality parameters for a small plasma focus as a pulsed hard x-ray source for radiography applications. A set of images was captured from metal objects and electronic circuits using a low energy plasma focus at different capacitor bank voltages and different argon gas pressures. The focal spot of the x-ray source was measured to be about 0.6 mm using the penumbra imaging method. The image quality was studied by several parameters such as image contrast, line spread function (LSF) and modulation transfer function (MTF). Results showed that the contrast changes with variations in gas pressure. The best contrast was obtained at a pressure of 0.5 mbar and 3.75 kJ stored energy. Dose measurements showed that about 0.6 mGy is sufficient to obtain acceptable images on the film. The measurements of LSF and MTF parameters were carried out by means of a thin stainless steel wire 0.8 mm in diameter, and the cut-off frequency was found to be about 1.5 cycles/mm.
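
    For reference, the MTF is conventionally obtained as the normalized magnitude of the Fourier transform of the LSF; the sketch below uses a synthetic Gaussian LSF and an assumed detector sampling pitch rather than the wire measurement reported above.

      import numpy as np

      pixel_pitch_mm = 0.05
      x = np.arange(-64, 64) * pixel_pitch_mm
      lsf = np.exp(-x**2 / (2 * 0.3**2))            # synthetic line spread function
      lsf /= lsf.sum()

      mtf = np.abs(np.fft.rfft(lsf))
      mtf /= mtf[0]                                  # normalize to 1 at zero frequency
      freq = np.fft.rfftfreq(lsf.size, d=pixel_pitch_mm)   # cycles/mm

      # Frequency where the MTF first drops below 10%, if it does within the sampled range
      below = np.nonzero(mtf < 0.1)[0]
      cutoff_cycles_per_mm = freq[below[0]] if below.size else None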

  20. Holographic imaging of crowded fields: high angular resolution imaging with excellent quality at very low cost

    NASA Astrophysics Data System (ADS)

    Schödel, R.; Yelda, S.; Ghez, A.; Girard, J. H.; Labadie, L.; Rebolo, R.; Pérez-Garrido, A.; Morris, M. R.

    2013-02-01

    We present a method for speckle holography that is optimized for crowded fields. Its two key features are an iterative improvement of the instantaneous point spread functions (PSFs) extracted from each speckle frame and the (optional) simultaneous use of multiple reference stars. In this way, high signal-to-noise ratio and accuracy can be achieved on the PSF for each short exposure, which results in sensitive, high-Strehl reconstructed images. We have tested our method with different instruments, on a range of targets, and from the N[10 μm] to the I[0.9 μm] band. In terms of PSF cosmetics, stability and Strehl ratio, holographic imaging can be equal, and even superior, to the capabilities of currently available adaptive optics (AO) systems, particularly at short near-infrared to optical wavelengths. It outperforms lucky imaging because it makes use of the entire PSF and reduces the need for frame selection, thus leading to higher Strehl and improved sensitivity. Image reconstruction a posteriori, the possibility to use multiple reference stars and the fact that these reference stars can be rather faint mean that holographic imaging offers a simple way to image large, dense stellar fields near the diffraction limit of large telescopes, similar to, but much less technologically demanding than, the capabilities of a multiconjugate AO system. The method can be used with a large range of already existing imaging instruments and can also be combined with AO imaging when the corrected PSF is unstable.

  1. The quality transformation: A catalyst for achieving Energy's strategic vision

    SciTech Connect

    1995-01-01

    This plan describes the initial six corporate quality goals for DOE. It also includes accompanying performance measures which will help DOE determine progress towards meeting these goals. The six goals are: (1) There is effective use of performance measurement based on regular assessment of Energy operations using the Presidential Award for Quality, the Malcolm Baldrige National Quality Award, or equivalent criteria. (2) All managers champion continuous quality improvement training for all employees through planning, attendance, and active application. (3) The Department leadership has provided the environment in which employees are enabled to satisfy customer requirements and realize their full potential. (4) The Department management practices foster employee involvement, development and recognition. (5) The Department continuously improves customer service and satisfaction, and internal and external customers recognize Energy as an excellent service provider. (6) The Department has a system which aligns strategic and operational planning with strategic intent, ensures this planning drives resource allocation, provides for regular evaluation of results, and provides feedback.

  2. How to achieve comprehensive teamwork in your pediatric dental office through total quality management (TQM).

    PubMed

    Nacht, E S

    1995-01-01

    Teamwork is considered the key to success in any organization. Total Quality Management (TQM) is a systematic process that optimizes quality and copes with change. The process includes customer satisfaction, as the ultimate criterion of quality; worker empowerment; reliance on statistical process rather than inspection; and continuous improvement. Managers not only adopt new techniques but also a new philosophy, rethinking their roles to become leaders and mentors instead of bosses. Fourteen points, developed during several decades, are considered the cornerstone of quality and teamwork, and they can apply to what we do in the pediatric dental office. Together, these principles should be viewed as an ongoing process that can take up to five years or more and should be managed with the same emphasis as other systems in your office. The dentist, or leader, implements and takes part in the process, which will improve morale by increasing team spirit.

  3. Image quality in CT: From physical measurements to model observers.

    PubMed

    Verdun, F R; Racine, D; Ott, J G; Tapiovaara, M J; Toroi, P; Bochud, F O; Veldkamp, W J H; Schegerer, A; Bouwman, R W; Giron, I Hernandez; Marshall, N W; Edyvean, S

    2015-12-01

    Evaluation of image quality (IQ) in Computed Tomography (CT) is important to ensure that diagnostic questions are correctly answered, whilst keeping radiation dose to the patient as low as is reasonably possible. The assessment of individual aspects of IQ is already a key component of routine quality control of medical x-ray devices. These values together with standard dose indicators can be used to give rise to 'figures of merit' (FOM) to characterise the dose efficiency of the CT scanners operating in certain modes. The demand for clinically relevant IQ characterisation has naturally increased with the development of CT technology (detector efficiency, image reconstruction and processing), resulting in the adaptation and evolution of assessment methods. The purpose of this review is to present the spectrum of various methods that have been used to characterise image quality in CT: from objective measurements of physical parameters to clinically task-based approaches (i.e. the model observer (MO) approach), including the pure human observer approach. When combined with a dose indicator, a generalised dose efficiency index can be explored in a framework of system and patient dose optimisation. We will focus on the IQ methodologies that are required for dealing with standard reconstruction, but also for iterative reconstruction algorithms. With this concept the previously used FOM will be presented with a proposal to update them in order to make them relevant and up to date with technological progress. The MO that objectively assesses IQ for clinically relevant tasks represents the most promising method in terms of radiologist sensitivity performance and is therefore of most relevance in the clinical environment.
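
    One commonly used form of such a figure of merit, given here only as an illustration (the review surveys several variants), relates the task-based detectability of the chosen observer to the delivered dose:

      \mathrm{FOM} = \frac{d'^{2}}{\mathrm{CTDI_{vol}}}

    where d' is the detectability index of the (model) observer for the clinical task and CTDI_vol is the volume CT dose index; a protocol with a higher FOM achieves the same task performance at lower dose.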

  5. Spectral CT imaging in patients with Budd-Chiari syndrome: investigation of image quality.

    PubMed

    Su, Lei; Dong, Junqiang; Sun, Qiang; Liu, Jie; Lv, Peijie; Hu, Lili; Yan, Liangliang; Gao, Jianbo

    2014-11-01

    To assess the image quality of monochromatic imaging from spectral CT in patients with Budd-Chiari syndrome (BCS), fifty patients with BCS underwent spectral CT to generate conventional 140 kVp polychromatic images (group A) and monochromatic images at energy levels from 40 to 80 keV, together with 40 + 70 and 50 + 70 keV fusion images (group B), during the portal venous phase (PVP) and the hepatic venous phase (HVP). Two-sample t tests compared the vessel-to-liver contrast-to-noise ratio (CNR) and signal-to-noise ratio (SNR) for the portal vein (PV), hepatic vein (HV) and inferior vena cava. Readers' subjective evaluations of the image quality were recorded. The highest SNR values in group B occurred at 50 keV and the highest CNR values at 40 keV. Higher CNR and SNR values were obtained in group B for the PV during the PVP (SNR 18.39 ± 6.13 vs. 10.56 ± 3.31, CNR 7.81 ± 3.40 vs. 3.58 ± 1.31) and for the HV during the HVP (3.89 ± 2.08 vs. 1.27 ± 1.55); image noise for group B at 70 keV and 50 + 70 keV was 15.54 ± 8.39 vs. 18.40 ± 4.97 (P = 0.0004) and 18.97 ± 7.61 vs. 18.40 ± 4.97 (P = 0.0691), respectively, and the 50 + 70 keV fusion image quality was better than that of group A. Monochromatic images at 40-70 keV and the 40 + 70 and 50 + 70 keV fusion images can increase vascular contrast, which is helpful for the diagnosis of BCS; the 50 + 70 keV fusion image was selected as giving the best BCS images.
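
    The SNR and CNR values quoted above follow the usual ROI-based definitions (ROI mean divided by noise, and vessel-minus-liver contrast divided by noise). A minimal sketch of that calculation in Python/NumPy is shown below; the ROI values are hypothetical stand-ins, not the study's measurements.

        import numpy as np

        def roi_stats(image, mask):
            """Mean value and standard deviation (noise) inside a region of interest."""
            vals = image[mask]
            return float(vals.mean()), float(vals.std())

        def snr_cnr(vessel_mean, liver_mean, noise):
            """Vessel SNR and vessel-to-liver CNR, using one common convention in which
            a single ROI standard deviation serves as the noise estimate."""
            snr = vessel_mean / noise
            cnr = (vessel_mean - liver_mean) / noise
            return snr, cnr

        # Hypothetical portal-vein and liver ROI values (HU) for illustration only.
        snr, cnr = snr_cnr(vessel_mean=280.0, liver_mean=161.0, noise=15.2)
        print(f"SNR = {snr:.1f}, CNR = {cnr:.1f}")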

  6. Automated techniques for quality assurance of radiological image modalities

    NASA Astrophysics Data System (ADS)

    Goodenough, David J.; Atkins, Frank B.; Dyer, Stephen M.

    1991-05-01

    This paper will attempt to identify many of the important issues for quality assurance (QA) of radiological modalities. It should of course be realized that QA can span many aspects of the diagnostic decision-making process, ranging from physical image performance levels through to the diagnostic decision of the radiologist. We will use as a model for automated approaches a program we have developed to work with computed tomography (CT) images. In an attempt to unburden the user, and in an effort to facilitate the performance of QA, we have been studying automated approaches. The ultimate utility of the system is its ability to render, in a safe and efficacious manner, decisions that are accurate, sensitive, specific and possible within the economic constraints of modern health care delivery.

  7. SPOT4 HRVIR first in-flight image quality results

    NASA Astrophysics Data System (ADS)

    Kubik, Philippe; Breton, Eric; Meygret, Aime; Cabrieres, Bernard; Hazane, Philippe; Leger, Dominique

    1998-12-01

    The SPOT4 remote sensing satellite was successfully launched at the end of March 1998. It was designed first of all to guarantee continuity of SPOT services beyond the year 2000, but also to improve the mission. Its two cameras are now called HRVIR since a short-wave infrared (SWIR) spectral band has been added. Like their predecessor HRV cameras, they provide 20-meter multispectral and 10-meter monospectral images with a 60 km swath for nadir viewing. SPOT4's first two months of life in orbit were dedicated to the evaluation of its image quality performance. During this period of time, the CNES team used specific target programming in order to compute image correction parameters and estimate the performance, at system level, of the image processing chain. After a description of SPOT4 system requirements and new features of the HRVIR cameras, this paper focuses on the performance deduced from in-flight measurements, the methods used and their accuracy: MTF measurements, refocusing, absolute calibration, signal-to-noise ratio, location, focal plane cartography, dynamic disturbances.

  8. New strategy for image and video quality assessment

    NASA Astrophysics Data System (ADS)

    Ma, Qi; Zhang, Liming; Wang, Bin

    2010-01-01

    Image and video quality assessment (QA) is a critical issue in image and video processing applications. General full-reference (FR) QA criteria such as peak signal-to-noise ratio (PSNR) and mean squared error (MSE) do not accord well with human subjective assessment. Some QA indices that consider human visual sensitivity, such as mean structural similarity (MSSIM) with structural sensitivity, visual information fidelity (VIF) with statistical sensitivity, etc., were proposed in view of the differences between reference and distorted frames on a pixel or local level. However, they ignore the role of human visual attention (HVA). Recently, some new strategies with HVA have been proposed, but the methods for extracting visual attention are too complex for real-time realization. We take advantage of the phase spectrum of quaternion Fourier transform (PQFT), a very fast algorithm we previously proposed, to extract saliency maps of color images or videos. Then we propose saliency-based methods for both image QA (IQA) and video QA (VQA) by adding weights related to saliency features to the original IQA or VQA criteria. Experimental results show that our saliency-based strategy accords more closely with human subjective assessment than the original IQA or VQA methods and does not take more time, thanks to the fast PQFT algorithm.
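
    One way to add saliency weights to an existing full-reference criterion, as described above, is to pool the local SSIM map with a normalised saliency map instead of a plain average. The sketch below assumes the saliency map has already been computed (for example by a PQFT-style method) and uses scikit-image for the SSIM map; it illustrates the weighting idea only and is not the authors' exact implementation.

        import numpy as np
        from skimage.metrics import structural_similarity

        def saliency_weighted_ssim(reference, distorted, saliency):
            """Pool the local SSIM map with saliency weights instead of a uniform mean."""
            _, ssim_map = structural_similarity(
                reference, distorted, full=True,
                data_range=reference.max() - reference.min())
            weights = saliency / saliency.sum()   # normalise the (precomputed) saliency map
            return float((weights * ssim_map).sum())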

  9. An automated system for numerically rating document image quality

    SciTech Connect

    Cannon, M.; Kelly, P.; Iyengar, S.S.; Brener, N.

    1997-04-01

    As part of the Department of Energy document declassification program, the authors have developed a numerical rating system to predict the OCR error rate that they expect to encounter when processing a particular document. The rating algorithm produces a vector containing scores for different document image attributes such as speckle and touching characters. The OCR error rate for a document is computed from a weighted sum of the elements of the corresponding quality vector. The predicted OCR error rate will be used to screen documents that would not be handled properly with existing document processing products.
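
    The prediction step described above reduces to a dot product between a document's quality vector and a set of weights. A minimal, hypothetical illustration follows; the paper's actual attributes and fitted weights are not reproduced here.

        import numpy as np

        # Hypothetical attribute scores (e.g. speckle, touching characters, ...) and
        # hypothetical weights, for illustration only.
        quality_vector = np.array([0.12, 0.30, 0.05, 0.22])
        weights = np.array([0.40, 0.90, 0.20, 0.60])

        predicted_ocr_error_rate = float(weights @ quality_vector)
        print(f"Predicted OCR error rate: {predicted_ocr_error_rate:.3f}")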

  10. Virtual monochromatic imaging in dual-source dual-energy CT: Radiation dose and image quality

    SciTech Connect

    Yu Lifeng; Christner, Jodie A.; Leng Shuai; Wang Jia; Fletcher, Joel G.; McCollough, Cynthia H.

    2011-12-15

    Purpose: To evaluate the image quality of virtual monochromatic images synthesized from dual-source dual-energy computed tomography (CT) in comparison with conventional polychromatic single-energy CT for the same radiation dose. Methods: In dual-energy CT, besides the material-specific information, one may also synthesize monochromatic images at different energies, which can be used for routine diagnosis similar to conventional polychromatic single-energy images. In this work, the authors assessed whether virtual monochromatic images generated from dual-source CT scanners had an image quality similar to that of polychromatic single-energy images for the same radiation dose. First, the authors provided a theoretical analysis of the optimal monochromatic energy for either the minimum noise level or the highest iodine contrast-to-noise ratio (CNR) for a given patient size and dose partitioning between the low- and high-energy scans. Second, the authors performed an experimental study on a dual-source CT scanner to evaluate the noise and iodine CNR in monochromatic images. A thoracic phantom with three sizes of attenuating rings was used to represent four adult sizes. For each phantom size, three dose partitionings between the low-energy (80 kV) and the high-energy (140 kV) scans were used in the dual-energy scan. Monochromatic images at eight energies (40 to 110 keV) were generated for each scan. Phantoms were also scanned at each of the four polychromatic single energies (80, 100, 120, and 140 kV) with the same radiation dose. Results: The optimal virtual monochromatic energy depends on several factors: phantom size, partitioning of the radiation dose between low- and high-energy scans, and the image quality metrics to be optimized. As phantom size increased, the optimal monochromatic energy increased. As the percentage of radiation dose assigned to the low-energy scan increased, the optimal monochromatic energy decreased. When maximizing the iodine CNR in

  11. Image quality degradation and retrieval errors introduced by registration and interpolation of multispectral digital images

    SciTech Connect

    Henderson, B.G.; Borel, C.C.; Theiler, J.P.; Smith, B.W.

    1996-04-01

    Full utilization of multispectral data acquired by whiskbroom and pushbroom imagers requires that the individual channels be registered accurately. Poor registration introduces errors which can be significant, especially in high contrast areas such as boundaries between regions. We simulate the acquisition of multispectral imagery in order to estimate the errors that are introduced by co-registration of different channels and interpolation within the images. We compute the Modulation Transfer Function (MTF) and image quality degradation brought about by fractional pixel shifting and calculate errors in retrieved quantities (surface temperature and water vapor) that occur as a result of interpolation. We also present a method which might be used to estimate sensor platform motion for accurate registration of images acquired by a pushbroom scanner.

  12. Influence of slice overlap on positron emission tomography image quality

    NASA Astrophysics Data System (ADS)

    McKeown, Clare; Gillen, Gerry; Dempsey, Mary Frances; Findlay, Caroline

    2016-02-01

    PET scans use overlapping acquisition beds to correct for reduced sensitivity at bed edges. The optimum overlap size for the General Electric (GE) Discovery 690 has not been established. This study assesses how image quality is affected by slice overlap. Efficacy of 23% overlaps (recommended by GE) and 49% overlaps (maximum possible overlap) were specifically assessed. European Association of Nuclear Medicine (EANM) guidelines for calculating minimum injected activities based on overlap size were also reviewed. A uniform flood phantom was used to assess noise (coefficient of variation, (COV)) and voxel accuracy (activity concentrations, Bq ml-1). A NEMA (National Electrical Manufacturers Association) body phantom with hot/cold spheres in a background activity was used to assess contrast recovery coefficients (CRCs) and signal to noise ratios (SNR). Different overlap sizes and sphere-to-background ratios were assessed. COVs for 49% and 23% overlaps were 9% and 13% respectively. This increased noise was difficult to visualise on the 23% overlap images. Mean voxel activity concentrations were not affected by overlap size. No clinically significant differences in CRCs were observed. However, visibility and SNR of small, low contrast spheres (⩽13 mm diameter, 2:1 sphere to background ratio) may be affected by overlap size in low count studies if they are located in the overlap area. There was minimal detectable influence on image quality in terms of noise, mean activity concentrations or mean CRCs when comparing 23% overlap with 49% overlap. Detectability of small, low contrast lesions may be affected in low count studies—however, this is a worst-case scenario. The marginal benefits of increasing overlap from 23% to 49% are likely to be offset by increased patient scan times. A 23% overlap is therefore appropriate for clinical use. An amendment to EANM guidelines for calculating injected activities is also proposed which better reflects the effect overlap size has
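
    The noise metric used above, the coefficient of variation over a uniform flood phantom, is straightforward to compute from the voxel values. A small sketch with simulated voxel data (not the study's measurements) is given below.

        import numpy as np

        def coefficient_of_variation(voxels):
            """COV (%) of activity concentrations in a uniform region: 100 * std / mean."""
            voxels = np.asarray(voxels, dtype=float)
            return 100.0 * voxels.std() / voxels.mean()

        # Example with simulated uniform-phantom voxel values (Bq/ml); the ~9% COV
        # here is chosen only to resemble the order of magnitude reported above.
        rng = np.random.default_rng(0)
        voxels = rng.normal(loc=5000.0, scale=450.0, size=10_000)
        print(f"COV = {coefficient_of_variation(voxels):.1f}%")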

  13. Retinal Image Quality Assessment for Spaceflight-Induced Vision Impairment Study

    NASA Technical Reports Server (NTRS)

    Vu, Amanda Cadao; Raghunandan, Sneha; Vyas, Ruchi; Radhakrishnan, Krishnan; Taibbi, Giovanni; Vizzeri, Gianmarco; Grant, Maria; Chalam, Kakarla; Parsons-Wingerter, Patricia

    2015-01-01

    Long-term exposure to space microgravity poses significant risks for visual impairment. Evidence suggests such vision changes are linked to cephalad fluid shifts, prompting a need to directly quantify microgravity-induced retinal vascular changes. The quality of retinal images used for such vascular remodeling analysis, however, is dependent on imaging methodology. For our exploratory study, we hypothesized that retinal images captured using fluorescein imaging methodologies would be of higher quality in comparison to images captured without fluorescein. A semi-automated image quality assessment was developed using Vessel Generation Analysis (VESGEN) software and MATLAB® image analysis toolboxes. An analysis of ten images found that the fluorescein imaging modality provided a 36% increase in overall image quality (two-tailed p=0.089) in comparison to nonfluorescein imaging techniques.

  14. Digital processing to improve image quality in real-time neutron radiography

    NASA Astrophysics Data System (ADS)

    Fujine, Shigenori; Yoneda, Kenji; Kanda, Keiji

    1985-01-01

    Real-time neutron radiography (NTV) has been used for practical applications at the Kyoto University Reactor (KUR). At present, however, the direct image from the TV system is still poor in resolution and low in contrast. In this paper several image improvements, such as a frame summing technique, are demonstrated that are effective in increasing image quality in neutron radiography. Image integration before the A/D converter has a beneficial effect on image quality, and the high-quality image reveals details invisible in direct images, such as small holes by a reversed image, defects in a neutron converter screen through a high-quality image, a moving object in a contoured image, a slight difference between two low-contrast images by a subtraction technique, and so on. For real-time application, a contouring operation and an averaging approach can also be utilized effectively.

  15. Live births achieved via IVF are increased by improvements in air quality and laboratory environment.

    PubMed

    Heitmann, Ryan J; Hill, Micah J; James, Aidita N; Schimmel, Tim; Segars, James H; Csokmay, John M; Cohen, Jacques; Payson, Mark D

    2015-09-01

    Infertility is a common disease, which causes many couples to seek treatment with assisted reproduction techniques. Many factors contribute to successful assisted reproduction technique outcomes. One important factor is laboratory environment and air quality. Our facility had the unique opportunity to compare consecutively used, but separate assisted reproduction technique laboratories, as a result of a required move. Environmental conditions were improved by strategic engineering designs. All other aspects of the IVF laboratory, including equipment, physicians, embryologists, nursing staff and protocols, were kept constant between facilities. Air quality testing showed improved air quality at the new IVF site. Embryo implantation (32.4% versus 24.3%; P < 0.01) and live birth (39.3% versus 31.8%, P < 0.05) were significantly increased in the new facility compared with the old facility. More patients met clinical criteria and underwent mandatory single embryo transfer on day 5 leading to both a reduction in multiple gestation pregnancies and increased numbers of vitrified embryos per patient with supernumerary embryos available. Improvements in IVF laboratory conditions and air quality had profound positive effects on laboratory measures and patient outcomes. This study further strengthens the importance of the laboratory environment and air quality in the success of an IVF programme.

  16. Live births achieved via IVF are increased by improvements in air quality and laboratory environment

    PubMed Central

    Heitmann, Ryan J; Hill, Micah J; James, Aidita N; Schimmel, Tim; Segars, James H; Csokmay, John M; Cohen, Jacques; Payson, Mark D

    2016-01-01

    Infertility is a common disease, which causes many couples to seek treatment with assisted reproduction techniques. Many factors contribute to successful assisted reproduction technique outcomes. One important factor is laboratory environment and air quality. Our facility had the unique opportunity to compare consecutively used, but separate assisted reproduction technique laboratories, as a result of a required move. Environmental conditions were improved by strategic engineering designs. All other aspects of the IVF laboratory, including equipment, physicians, embryologists, nursing staff and protocols, were kept constant between facilities. Air quality testing showed improved air quality at the new IVF site. Embryo implantation (32.4% versus 24.3%; P < 0.01) and live birth (39.3% versus 31.8%, P < 0.05) were significantly increased in the new facility compared with the old facility. More patients met clinical criteria and underwent mandatory single embryo transfer on day 5 leading to both a reduction in multiple gestation pregnancies and increased numbers of vitrified embryos per patient with supernumerary embryos available. Improvements in IVF laboratory conditions and air quality had profound positive effects on laboratory measures and patient outcomes. This study further strengthens the importance of the laboratory environment and air quality in the success of an IVF programme. PMID:26194882

  17. Quality of education predicts performance on the Wide Range Achievement Test-4th Edition Word Reading subtest.

    PubMed

    Sayegh, Philip; Arentoft, Alyssa; Thaler, Nicholas S; Dean, Andy C; Thames, April D

    2014-12-01

    The current study examined whether self-rated education quality predicts Wide Range Achievement Test-4th Edition (WRAT-4) Word Reading subtest and neurocognitive performance, and aimed to establish this subtest's construct validity as an educational quality measure. In a community-based adult sample (N = 106), we tested whether education quality both increased the prediction of Word Reading scores beyond demographic variables and predicted global neurocognitive functioning after adjusting for WRAT-4. As expected, race/ethnicity and education predicted WRAT-4 reading performance. Hierarchical regression revealed that when including education quality, the amount of WRAT-4's explained variance increased significantly, with race/ethnicity and both education quality and years as significant predictors. Finally, WRAT-4 scores, but not education quality, predicted neurocognitive performance. Results support WRAT-4 Word Reading as a valid proxy measure for education quality and a key predictor of neurocognitive performance. Future research should examine these findings in larger, more diverse samples to determine their robust nature.

  18. A widefield fluorescence microscope with a linear image sensor for image cytometry of biospecimens: Considerations for image quality optimization

    NASA Astrophysics Data System (ADS)

    Hutcheson, Joshua A.; Majid, Aneeka A.; Powless, Amy J.; Muldoon, Timothy J.

    2015-09-01

    Linear image sensors have been widely used in numerous research and industry applications to provide continuous imaging of moving objects. Here, we present a widefield fluorescence microscope with a linear image sensor used to image translating objects for image cytometry. First, a calibration curve was characterized for a custom microfluidic chamber over a span of volumetric pump rates. Image data were also acquired using 15 μm fluorescent polystyrene spheres on a slide with a motorized translation stage in order to match linear translation speed with line exposure periods to preserve the image aspect ratio. Aspect ratios were then calculated after imaging to ensure quality control of image data. Fluorescent beads were imaged in suspension flowing through the microfluidics chamber being pumped by a mechanical syringe pump at 16 μl min-1 with a line exposure period of 150 μs. The line period was selected to acquire images of fluorescent beads with a 40 dB signal-to-background ratio. A motorized translation stage was then used to transport conventional glass slides of stained cellular biospecimens. Whole blood collected from healthy volunteers was stained with 0.02% (w/v) proflavine hemisulfate and imaged to highlight leukocyte morphology with a 1.56 mm × 1.28 mm field of view (1540 ms total acquisition time). Oral squamous cells were also collected from healthy volunteers and stained with 0.01% (w/v) proflavine hemisulfate to demonstrate quantifiable subcellular features and an average nuclear to cytoplasmic ratio of 0.03 (n = 75), with a resolution of 0.31 μm pixels-1.
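
    Matching the translation speed to the line exposure period, as described above, preserves a square pixel aspect ratio when the object advances exactly one object-plane pixel per line period. The quick calculation below reuses figures quoted in the abstract purely as an illustration of that relation; the combination shown is not necessarily one of the study's actual operating points.

        # One object-plane pixel must pass the sensor line per exposure period to keep
        # the aspect ratio square: speed = pixel size / line period.
        pixel_size_um = 0.31      # object-plane sampling quoted above (um per pixel)
        line_period_s = 150e-6    # line exposure period (s)

        required_speed_um_per_s = pixel_size_um / line_period_s
        print(f"Required translation speed: {required_speed_um_per_s:.0f} um/s")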

  19. A widefield fluorescence microscope with a linear image sensor for image cytometry of biospecimens: Considerations for image quality optimization

    SciTech Connect

    Hutcheson, Joshua A.; Majid, Aneeka A.; Powless, Amy J.; Muldoon, Timothy J.

    2015-09-15

    Linear image sensors have been widely used in numerous research and industry applications to provide continuous imaging of moving objects. Here, we present a widefield fluorescence microscope with a linear image sensor used to image translating objects for image cytometry. First, a calibration curve was characterized for a custom microfluidic chamber over a span of volumetric pump rates. Image data were also acquired using 15 μm fluorescent polystyrene spheres on a slide with a motorized translation stage in order to match linear translation speed with line exposure periods to preserve the image aspect ratio. Aspect ratios were then calculated after imaging to ensure quality control of image data. Fluorescent beads were imaged in suspension flowing through the microfluidics chamber being pumped by a mechanical syringe pump at 16 μl min⁻¹ with a line exposure period of 150 μs. The line period was selected to acquire images of fluorescent beads with a 40 dB signal-to-background ratio. A motorized translation stage was then used to transport conventional glass slides of stained cellular biospecimens. Whole blood collected from healthy volunteers was stained with 0.02% (w/v) proflavine hemisulfate and imaged to highlight leukocyte morphology with a 1.56 mm × 1.28 mm field of view (1540 ms total acquisition time). Oral squamous cells were also collected from healthy volunteers and stained with 0.01% (w/v) proflavine hemisulfate to demonstrate quantifiable subcellular features and an average nuclear to cytoplasmic ratio of 0.03 (n = 75), with a resolution of 0.31 μm pixels⁻¹.

  20. Quality Enhancement and Nerve Fibre Layer Artefacts Removal in Retina Fundus Images by Off Axis Imaging

    SciTech Connect

    Giancardo, Luca; Meriaudeau, Fabrice; Karnowski, Thomas Paul; Li, Yaquin; Tobin Jr, Kenneth William; Chaum, Edward

    2011-01-01

    Retinal fundus images acquired with non-mydriatic digital fundus cameras are a versatile tool for the diagnosis of various retinal diseases. Because of the ease of use of newer camera models and their relatively low cost, these cameras are employed worldwide by retina specialists to diagnose diabetic retinopathy and other degenerative diseases. Even with relative ease of use, the images produced by these systems sometimes suffer from reflectance artefacts mainly due to the nerve fibre layer (NFL) or other camera-lens-related reflections. We propose a technique that employs multiple fundus images acquired from the same patient to obtain a single higher quality image without these reflectance artefacts. The removal of bright artefacts, and particularly of NFL reflectance, can have great benefits for the reduction of false positives in the detection of retinal lesions such as exudates, drusen and cotton wool spots by automatic systems or manual inspection. If enough redundant information is provided by the multiple images, this technique also compensates for suboptimal illumination. The fundus images are acquired in a straightforward but unorthodox manner, i.e. the stare point of the patient is changed between each shot but the camera is kept fixed. Between each shot, the apparent shape and position of all the retinal structures that do not exhibit isotropic reflectance (e.g. bright artefacts) change. This physical effect is exploited by our algorithm in order to extract the pixels belonging to the inner layers of the retina, hence obtaining a single artefact-free image.

  1. Open source database of images DEIMOS: extension for large-scale subjective image quality assessment

    NASA Astrophysics Data System (ADS)

    Vítek, Stanislav

    2014-09-01

    DEIMOS (Database of Images: Open Source) is an open-source database of images and video sequences for testing, verification and comparison of various image and/or video processing techniques such as compression, reconstruction and enhancement. This paper deals with an extension of the database that allows large-scale web-based subjective image quality assessment to be performed. The extension implements both an administrative and a client interface. The proposed system is aimed mainly at mobile communication devices and takes advantage of HTML5 technology; this means that participants do not need to install any application and the assessment can be performed using a web browser. The assessment campaign administrator can select images from the large database and then apply rules defined by various test procedure recommendations. The standard test procedures may be fully customized and saved as a template. Alternatively, the administrator can define a custom test, using images from the pool and other components, such as evaluation forms and ongoing questionnaires. The image sequence is delivered to the online client, e.g. a smartphone or tablet, as a fully automated assessment sequence, or the viewer can decide on the timing of the assessment if required. Environmental data and viewing conditions (e.g. illumination, vibrations, GPS coordinates, etc.) may be collected and subsequently analyzed.

  2. Human vision model for the objective evaluation of perceived image quality applied to MRI and image restoration

    NASA Astrophysics Data System (ADS)

    Salem, Kyle A.; Wilson, David L.

    2002-12-01

    We are developing a method to objectively quantify image quality and applying it to the optimization of interventional magnetic resonance imaging (iMRI). In iMRI, images are used for live-time guidance of interventional procedures such as the minimally invasive treatment of cancer. Hence, not only does one desire high quality images, but they must also be acquired quickly. In iMRI, images are acquired in the Fourier domain, or k-space, and this allows many creative ways to image quickly such as keyhole imaging where k-space is preferentially subsampled, yielding suboptimal images at very high frame rates. Other techniques include spiral, radial, and the combined acquisition technique. We have built a perceptual difference model (PDM) that incorporates various components of the human visual system. The PDM was validated using subjective image quality ratings by naive observers and task-based measures defined by interventional radiologists. Using the PDM, we investigated the effects of various imaging parameters on image quality and quantified the degradation due to novel imaging techniques. Results have provided significant information about imaging time versus quality tradeoffs aiding the MR sequence engineer. The PDM has also been used to evaluate other applications such as Dixon fat suppressed MRI and image restoration. In image restoration, the PDM has been used to evaluate the Generalized Minimal Residual (GMRES) image restoration method and to examine the ability to appropriately determine a stopping condition for such iterative methods. The PDM has been shown to be an objective tool for measuring image quality and can be used to determine the optimal methodology for various imaging applications.

  3. Image reconstruction for PET/CT scanners: past achievements and future challenges

    PubMed Central

    Tong, Shan; Alessio, Adam M; Kinahan, Paul E

    2011-01-01

    PET is a medical imaging modality with proven clinical value for disease diagnosis and treatment monitoring. The integration of PET and CT on modern scanners provides a synergy of the two imaging modalities. Through different mathematical algorithms, PET data can be reconstructed into the spatial distribution of the injected radiotracer. With dynamic imaging, kinetic parameters of specific biological processes can also be determined. Numerous efforts have been devoted to the development of PET image reconstruction methods over the last four decades, encompassing analytic and iterative reconstruction methods. This article provides an overview of the commonly used methods. Current challenges in PET image reconstruction include more accurate quantitation, TOF imaging, system modeling, motion correction and dynamic reconstruction. Advances in these aspects could enhance the use of PET/CT imaging in patient care and in clinical research studies of pathophysiology and therapeutic interventions. PMID:21339831

  4. Goal Setting in Principal Evaluation: Goal Quality and Predictors of Achievement

    ERIC Educational Resources Information Center

    Sinnema, Claire E. L.; Robinson, Viviane M. J.

    2012-01-01

    This article draws on goal-setting theory to investigate the goals set by experienced principals during their performance evaluations. While most goals were about teaching and learning, they tended to be vaguely expressed and only partially achieved. Five predictors (commitment, challenge, learning, effort, and support) explained a significant…

  5. Achieving Quality Education in Ghana: The Spotlight on Primary Education within the Kumasi Metropolis

    ERIC Educational Resources Information Center

    Boakye-Amponsah, Abraham; Enninful, Ebenezer Kofi; Anin, Emmanuel Kwabena; Vanderpuye, Patience

    2015-01-01

    Background: Ghana being a member of the United Nations, committed to the Universal Primary Education initiative in 2000 and has since implemented series of educational reforms to meet the target for the Millennium Development Goal (MDG) 2. Despite the numerous government interventions to achieve the MDG 2, many children in Ghana have been denied…

  6. School Improvement Plans and Student Achievement: Preliminary Evidence from the Quality and Merit Project in Italy

    ERIC Educational Resources Information Center

    Caputo, Andrea; Rastelli, Valentina

    2014-01-01

    This study provides preliminary evidence from an Italian in-service training program addressed to lower secondary school teachers which supports school improvement plans (SIPs). It aims at exploring the association between characteristics/contents of SIPs and student improvement in math achievement. Pre-post standardized tests and text analysis of…

  7. Leveraging Quality Improvement to Achieve Student Learning Assessment Success in Higher Education

    ERIC Educational Resources Information Center

    Glenn, Nancy Gentry

    2009-01-01

    Mounting pressure for transformational change in higher education driven by technology, globalization, competition, funding shortages, and increased emphasis on accountability necessitates that universities implement reforms to demonstrate responsiveness to all stakeholders and to provide evidence of student achievement. In the face of the demand…

  8. Educational Inequality in Colombia: Family Background, School Quality and Student Achievement in Cartagena

    ERIC Educational Resources Information Center

    Rangel, Claudia; Lleras, Christy

    2010-01-01

    This study examines the effects of family socio-economic disadvantage and differences in school resources on student achievement in the city of Cartagena, Colombia. Using data from the ICFES and C-600 national databases, we conduct a multilevel analysis to determine the unique contribution of school-level factors above and beyond family…

  9. A quality assurance framework for the fully automated and objective evaluation of image quality in cone-beam computed tomography

    SciTech Connect

    Steiding, Christian; Kolditz, Daniel; Kalender, Willi A.

    2014-03-15

    Purpose: Thousands of cone-beam computed tomography (CBCT) scanners for vascular, maxillofacial, neurological, and body imaging are in clinical use today, but there is no consensus on uniform acceptance and constancy testing for image quality (IQ) and dose yet. The authors developed a quality assurance (QA) framework for fully automated and time-efficient performance evaluation of these systems. In addition, the dependence of objective Fourier-based IQ metrics on direction and position in 3D volumes was investigated for CBCT. Methods: The authors designed a dedicated QA phantom 10 cm in length consisting of five compartments, each with a diameter of 10 cm, and an optional extension ring 16 cm in diameter. A homogeneous section of water-equivalent material allows measuring CT value accuracy, image noise and uniformity, and multidimensional global and local noise power spectra (NPS). For the quantitative determination of 3D high-contrast spatial resolution, the modulation transfer function (MTF) of centrally and peripherally positioned aluminum spheres was computed from edge profiles. Additional in-plane and axial resolution patterns were used to assess resolution qualitatively. The characterization of low-contrast detectability as well as CT value linearity and artifact behavior was tested by utilizing sections with soft-tissue-equivalent and metallic inserts. For an automated QA procedure, a phantom detection algorithm was implemented. All tests used in the dedicated QA program were initially verified in simulation studies and experimentally confirmed on a clinical dental CBCT system. Results: The automated IQ evaluation of volume data sets of the dental CBCT system was achieved with the proposed phantom requiring only one scan for the determination of all desired parameters. Typically, less than 5 min were needed for phantom set-up, scanning, and data analysis. Quantitative evaluation of system performance over time by comparison to previous examinations was also
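
    The edge-profile route to the MTF mentioned above (differentiating the edge spread function of an aluminium sphere to obtain the line spread function, then Fourier transforming) can be sketched as follows. This is a generic presampled-MTF outline under stated assumptions, not the authors' validated implementation.

        import numpy as np

        def mtf_from_edge_profile(esf, pixel_spacing_mm):
            """Differentiate the edge spread function (ESF) to get the line spread
            function (LSF), window it, and take the normalised FFT magnitude."""
            lsf = np.gradient(np.asarray(esf, dtype=float))
            lsf *= np.hanning(lsf.size)                 # suppress noise at the tails
            mtf = np.abs(np.fft.rfft(lsf))
            mtf /= mtf[0]                               # 1.0 at zero spatial frequency
            freqs = np.fft.rfftfreq(lsf.size, d=pixel_spacing_mm)  # cycles / mm
            return freqs, mtf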

  10. No-reference remote sensing image quality assessment using a comprehensive evaluation factor

    NASA Astrophysics Data System (ADS)

    Wang, Lin; Wang, Xu; Li, Xiao; Shao, Xiaopeng

    2014-05-01

    Conventional image quality assessment algorithms, such as Peak Signal to Noise Ratio (PSNR), Mean Square Error (MSE) and structural similarity (SSIM), need the original image as a reference. They are not applicable to remote sensing images, for which the original image cannot be assumed to be available. In this paper, a No-reference Image Quality Assessment (NRIQA) algorithm is presented to evaluate the quality of remote sensing images. Since blur and noise (including stripe noise) are the common distortion factors affecting remote sensing image quality, a comprehensive evaluation factor is modeled to assess blur and noise by analyzing the image visual properties for different incentives combined with SSIM based on the human visual system (HVS), and also to assess stripe noise by using Phase Congruency (PC). The experimental results show that this algorithm is an accurate and reliable method for remote sensing image quality assessment.

  11. A uniform geostationary visible calibration approach to achieve a climate quality dataset

    NASA Astrophysics Data System (ADS)

    Haney, C.; Doelling, D.; Bhatt, R.; Scarino, B. R.; Gopalan, A.

    2013-12-01

    The geostationary (GEO) weather satellite visible and IR image record has surpassed 30 years. These images have been preserved in the ISCCP-B1U 3-hourly dataset and other archives such as McIDAS, EUMETSAT, and NOAA CLASS. Since they were designed to aid in weather forecasting, long-term calibration stability was not a high priority. All GEO imagers lack onboard visible calibration and suffer from optical degradation after they are launched. In order to piece together the 35+ GEO satellite record both in time and space, a uniform calibration approach is desired to remove individual GEO temporal trends, as well as GEO spectral band differences. Otherwise, any artificial discontinuities caused by sequential GEO satellite records or spurious temporal trends caused by optical degradation may be interpreted as a change in climate. The approach relies on multiple independent methods to reduce the overall uncertainty of the GEO calibration coefficients. Consistency among methods validates the approach. During the MODIS record (2000 to the present) the GEO satellites are inter-calibrated against MODIS using ray-matched or bore-sighted radiance pairs. MODIS and the VIIRS follow-on instruments are equipped with onboard calibration, thereby providing a stable calibration reference. The GEO spectral band differences are accounted for using a Spectral Band Adjustment Factor (SBAF) based on hyper-spectral SCIAMACHY data. During the pre-MODIS era, invariant earth targets of deserts and deep convective clouds (DCC) are used. Since GEO imagers have maintained their imaging scan schedules, GEO desert and DCC bidirectional reflectance distribution functions (BRDF) can be constructed and validated during the MODIS era. The BRDF models can then be applied to historical GEO imagers. Consistency among desert and DCC GEO calibration gains validates the approach. This approach has been applied to the GEO record beginning in 1985 and the results will be presented at the meeting.

  12. Understanding and Achieving Quality in Sure Start Children's Centres: Practitioners' Perspectives

    ERIC Educational Resources Information Center

    Cottle, Michelle

    2011-01-01

    This article focuses on some of the issues that shape understandings of professional practice in the rapidly expanding context of children's centres in England. Drawing on data from an ESRC-funded project exploring practitioners' understandings of quality and success, the perspectives of 115 practitioners working in 11 Sure Start Children's…

  13. A Guide to the Librarian's Responsibility in Achieving Quality in Lighting and Ventilation.

    ERIC Educational Resources Information Center

    Mason, Ellsworth

    1967-01-01

    Quality, not intensity, is the keystone to good library lighting. The single most important problem in lighting is glare caused by extremely intense centers of light. Multiple interfiling of light rays is a factor required in library lighting. A fixture that diffuses light well is basic when light emerges from the fixture. It scatters widely,…

  14. How to Achieve High-Quality Oocytes? The Key Role of Myo-Inositol and Melatonin

    PubMed Central

    Rossetti, Paola; Corrado, Francesco; Rapisarda, Agnese Maria Chiara; Condorelli, Rosita Angela; Valenti, Gaetano; Sapia, Fabrizio; Buscema, Massimo

    2016-01-01

    Assisted reproductive technologies (ART) have experienced growing interest from infertile patients seeking to become pregnant. The quality of oocytes plays a pivotal role in determining ART outcomes. Although many authors have studied how supplementation therapy may affect this important parameter for both in vivo and in vitro models, data are not yet robust enough to support firm conclusions. Regarding this last point, in this review our objective has been to evaluate the state of the art regarding supplementation with melatonin and myo-inositol in order to improve oocyte quality during ART. On the one hand, the antioxidant effect of melatonin is well known as being useful during ovulation and oocyte incubation, two occasions with a high level of oxidative stress. On the other hand, myo-inositol is important in cellular structure and in cellular signaling pathways. Our analysis suggests that the use of these two molecules may significantly improve the quality of oocytes and the quality of embryos: melatonin seems to raise the fertilization rate, and myo-inositol improves the pregnancy rate, although all published studies do not fully agree with these conclusions. However, previous studies have demonstrated that cotreatment improves these results compared with melatonin alone or myo-inositol alone. We recommend that further studies be performed in order to confirm these positive outcomes in routine ART treatment.

  15. Forward-Oriented Designing for Learning as a Means to Achieve Educational Quality

    ERIC Educational Resources Information Center

    Ghislandi, Patrizia M. M.; Raffaghelli, Juliana E.

    2015-01-01

    In this paper, we reflect on how Design for Learning can create the basis for a culture of educational quality. We explore the process of Design for Learning within a blended, undergraduate university course through a teacher-led inquiry approach, aiming at showing the connections between the process of Design for Learning and academic…

  16. Achieving Quality Assurance and Moving to a World Class University in the 21st Century

    ERIC Educational Resources Information Center

    Lee, Lung-Sheng Steven

    2013-01-01

    Globalization in the 21st century has brought innumerable challenges and opportunities to universities and countries. Universities are primarily concerned with how to ensure the quality of their education and how to boost their local and global competitiveness. The pressure from both international competition and public accountability on…

  17. Teacher-Student Relationship Quality Type in Elementary Grades: Effects on Trajectories for Achievement and Engagement

    ERIC Educational Resources Information Center

    Wu, Jiun-Yu; Hughes, Jan N.; Kwok, Oi-Man

    2010-01-01

    Teacher, peer, and student reports of the quality of the teacher-student relationship were obtained for an ethnically diverse and academically at-risk sample of 706 second- and third-grade students. Cluster analysis identified four types of relationships based on the consistency of child reports of support and conflict in the relationship with…

  18. The Walls Speak: The Interplay of Quality Facilities, School Climate, and Student Achievement

    ERIC Educational Resources Information Center

    Uline, Cynthia; Tschannen-Moran, Megan

    2008-01-01

    Purpose: A growing body of research connecting the quality of school facilities to student performance accompanies recent efforts to improve the state of the educational infrastructure in the USA. Less is known about the mechanisms of these relationships. This paper seeks to examine the proposition that part of the explanation may be the mediating…

  19. Quality, peer review, and the achievement of consensus in probabilistic risk analysis

    SciTech Connect

    Apostolakis, G.; Garrick, B.J.; Okrent, D.

    1983-01-01

    This article addresses some of the issues that arise in connection with the problems associated with probabilistic risk assessment (PRA). Some opinions are given on quality assurance, PRA scope, and peer review. Then the issue of consensus and some of the reasons that lead to disagreement are discussed.

  20. How to Achieve High-Quality Oocytes? The Key Role of Myo-Inositol and Melatonin.

    PubMed

    Vitale, Salvatore Giovanni; Rossetti, Paola; Corrado, Francesco; Rapisarda, Agnese Maria Chiara; La Vignera, Sandro; Condorelli, Rosita Angela; Valenti, Gaetano; Sapia, Fabrizio; Laganà, Antonio Simone; Buscema, Massimo

    2016-01-01

    Assisted reproductive technologies (ART) have experienced growing interest from infertile patients seeking to become pregnant. The quality of oocytes plays a pivotal role in determining ART outcomes. Although many authors have studied how supplementation therapy may affect this important parameter for both in vivo and in vitro models, data are not yet robust enough to support firm conclusions. Regarding this last point, in this review our objective has been to evaluate the state of the art regarding supplementation with melatonin and myo-inositol in order to improve oocyte quality during ART. On the one hand, the antioxidant effect of melatonin is well known as being useful during ovulation and oocyte incubation, two occasions with a high level of oxidative stress. On the other hand, myo-inositol is important in cellular structure and in cellular signaling pathways. Our analysis suggests that the use of these two molecules may significantly improve the quality of oocytes and the quality of embryos: melatonin seems to raise the fertilization rate, and myo-inositol improves the pregnancy rate, although all published studies do not fully agree with these conclusions. However, previous studies have demonstrated that cotreatment improves these results compared with melatonin alone or myo-inositol alone. We recommend that further studies be performed in order to confirm these positive outcomes in routine ART treatment. PMID:27651794

  1. How to Achieve High-Quality Oocytes? The Key Role of Myo-Inositol and Melatonin

    PubMed Central

    Rossetti, Paola; Corrado, Francesco; Rapisarda, Agnese Maria Chiara; Condorelli, Rosita Angela; Valenti, Gaetano; Sapia, Fabrizio; Buscema, Massimo

    2016-01-01

    Assisted reproductive technologies (ART) have experienced growing interest from infertile patients seeking to become pregnant. The quality of oocytes plays a pivotal role in determining ART outcomes. Although many authors have studied how supplementation therapy may affect this important parameter for both in vivo and in vitro models, data are not yet robust enough to support firm conclusions. Regarding this last point, in this review our objective has been to evaluate the state of the art regarding supplementation with melatonin and myo-inositol in order to improve oocyte quality during ART. On the one hand, the antioxidant effect of melatonin is well known as being useful during ovulation and oocyte incubation, two occasions with a high level of oxidative stress. On the other hand, myo-inositol is important in cellular structure and in cellular signaling pathways. Our analysis suggests that the use of these two molecules may significantly improve the quality of oocytes and the quality of embryos: melatonin seems to raise the fertilization rate, and myo-inositol improves the pregnancy rate, although all published studies do not fully agree with these conclusions. However, previous studies have demonstrated that cotreatment improves these results compared with melatonin alone or myo-inositol alone. We recommend that further studies be performed in order to confirm these positive outcomes in routine ART treatment. PMID:27651794

  2. An Analysis of Teacher Quality, Evaluation, Professional Development and Tenure as It Relates to Student Achievement

    ERIC Educational Resources Information Center

    Pritchett, Jim; Sparks, Tara J.; Taylor-Johnson, Tiffany

    2010-01-01

    Educational leaders nationwide have been challenged by ineffective tenured teachers who continue to impact student learning. In the era of No Child Left Behind, educational leaders struggle to ensure that students receive high quality education. This problem-based research project attempted to support state, district and building educational…

  3. Quality of Achieved Employment Among Rural Youth Who Complete Junior College Associate Degree Programs.

    ERIC Educational Resources Information Center

    Wakefield, Nancy C.; Dunkelberger, John E.

    Beginning in 1966, the relationship between levels of formal education, specifically the attainment of an associate degree from a two-year college, and quality of employment among young adults reared in rural areas, was examined in a multi-phase, longitudinal sampling procedure which obtained data from a pool of high school sophomores from…

  4. Prediction of water quality parameters from SAR images by using multivariate and texture analysis models

    NASA Astrophysics Data System (ADS)

    Shareef, Muntadher A.; Toumi, Abdelmalek; Khenchaf, Ali

    2014-10-01

    Remote sensing is one of the most important tools for monitoring and helping to estimate and predict water quality parameters (WQPs). The traditional methods used for monitoring pollutants generally rely on optical images. In this paper, we present a new approach based on Synthetic Aperture Radar (SAR) images, which we use to map the region of interest and to estimate the WQPs. To achieve this estimation quality, texture analysis is exploited to improve the regression models. These models are established and developed to estimate six commonly considered water quality parameters from texture parameters extracted from TerraSAR-X data. For this purpose, the Gray Level Co-occurrence Matrix (GLCM) is used to estimate several regression models using six texture parameters: contrast, correlation, energy, homogeneity, entropy and variance. For each predicted model, an accuracy value is computed from the probability value given by the regression analysis model of each parameter. In order to validate our approach, we used two datasets of water regions for the training and test process. To evaluate and validate the proposed model, we applied it to the training set. In the last stage, we used fuzzy K-means clustering to generalize the water quality estimation over the whole water region extracted from the segmented TerraSAR-X image. The obtained results showed that there is a good statistical correlation between the in situ water quality and TerraSAR-X data, and also demonstrated that the characteristics obtained by texture analysis are able to monitor and predict the distribution of WQPs in large rivers with high accuracy.
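
    A sketch of extracting the six GLCM texture parameters named above with scikit-image is given below; entropy and variance are derived directly from the normalised co-occurrence matrix because graycoprops does not expose them in all versions. The fitted regression models themselves are not reproduced, and the distance/angle settings are assumptions.

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops

        def glcm_texture_features(patch, distances=(1,), angles=(0,)):
            """Contrast, correlation, energy, homogeneity, entropy and variance of a
            grey-level co-occurrence matrix computed on an 8-bit image patch."""
            glcm = graycomatrix(patch, distances=distances, angles=angles,
                                levels=256, symmetric=True, normed=True)
            feats = {name: float(graycoprops(glcm, name)[0, 0])
                     for name in ("contrast", "correlation", "energy", "homogeneity")}
            p = glcm[:, :, 0, 0]                  # normalised co-occurrence probabilities
            nz = p[p > 0]
            feats["entropy"] = float(-(nz * np.log2(nz)).sum())
            i = np.arange(256)
            marginal = p.sum(axis=1)
            mu = float((i * marginal).sum())
            feats["variance"] = float((((i - mu) ** 2) * marginal).sum())
            return feats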

  5. Optimization of exposure in panoramic radiography while maintaining image quality using adaptive filtering.

    PubMed

    Svenson, Björn; Larsson, Lars; Båth, Magnus

    2016-01-01

    Objective: The purpose of the present study was to investigate the potential of using advanced external adaptive image processing for maintaining image quality while reducing exposure in dental panoramic storage phosphor plate (SPP) radiography. Materials and methods: Thirty-seven SPP radiographs of a skull phantom were acquired using a Scanora panoramic X-ray machine with various tube load, tube voltage, SPP sensitivity and filtration settings. The radiographs were processed using General Operator Processor (GOP) technology. Fifteen dentists, all within the dental radiology field, compared the structural image quality of each radiograph with a reference image on a 5-point rating scale in a visual grading characteristics (VGC) study. The reference image was acquired with the acquisition parameters commonly used in daily operation (70 kVp, 150 mAs and sensitivity class 200) and processed using the standard process parameters supplied by the modality vendor. Results: All GOP-processed images with similar (or higher) dose as the reference image resulted in higher image quality than the reference. All GOP-processed images with similar image quality as the reference image were acquired at a lower dose than the reference. This indicates that the external image processing improved the image quality compared with the standard processing. Regarding acquisition parameters, no strong dependency of the image quality on the radiation quality was seen and the image quality was mainly affected by the dose. Conclusions: The present study indicates that advanced external adaptive image processing may be beneficial in panoramic radiography for increasing the image quality of SPP radiographs or for reducing the exposure while maintaining image quality. PMID:26478956

  6. Comparison of no-reference image quality assessment machine learning-based algorithms on compressed images

    NASA Astrophysics Data System (ADS)

    Charrier, Christophe; Saadane, AbdelHakim; Fernandez-Maloigne, Christine

    2015-01-01

    No-reference image quality metrics are of fundamental interest as they can be embedded in practical applications. The main goal of this paper is to perform a comparative study of seven well-known no-reference learning-based image quality algorithms. To test the performance of these algorithms, three public databases are used. As a first step, the trial algorithms are compared when no new learning is performed. The second step investigates how the training set influences the results. The Spearman Rank Ordered Correlation Coefficient (SROCC) is utilized to measure and compare the performance. In addition, a hypothesis test is conducted to evaluate the statistical significance of the performance of each tested algorithm.
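
    Computing the SROCC between an algorithm's predicted scores and subjective ratings is a one-liner with SciPy; the scores below are hypothetical placeholders, not values from the paper.

        from scipy.stats import spearmanr

        # Hypothetical predicted quality scores and subjective MOS values for five images.
        predicted = [0.81, 0.64, 0.92, 0.33, 0.55]
        mos = [4.1, 3.2, 4.6, 1.8, 2.9]

        srocc, p_value = spearmanr(predicted, mos)
        print(f"SROCC = {srocc:.3f} (p = {p_value:.3g})")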

  7. Concepts for evaluation of image quality in digital radiology

    NASA Astrophysics Data System (ADS)

    Zscherpel, U.; Ewert, U.; Jechow, M.

    2012-05-01

    Concepts for digital image evaluation are presented for Computed Radiography (CR) and Digital Detector Arrays (DDAs) used for weld inspection. Precise DDA calibration yields an extraordinary increase in contrast sensitivity, up to 10 times that of film radiography. Restrictions in spatial resolution caused by the pixel size of the DDA are compensated by the increased contrast sensitivity. The first CR standards were published in 2005 to support the application of phosphor imaging plates in lieu of X-ray film, but they already need a revision based on the experience reported by many users. One of the key concepts is the use of signal-to-noise ratio (SNR) measurements as the equivalent of the optical density of film and the film system class. The contrast sensitivity, measured by IQI visibility, depends on three essential parameters: the basic spatial resolution (SRb) of the radiographic image, the achieved signal-to-noise ratio (SNR) and the specific contrast (μeff - effective attenuation coefficient). Knowing these three parameters for the given exposure condition, inspected material and monitor viewing condition permits the calculation of the just visible IQI element. Furthermore, this enables the optimization of exposure conditions. The new ISO/FDIS 17636-2 describes the practice for digital radiography with CR and DDAs. It considers, for the first time, compensation principles derived from the three essential parameters. The consequences are described.
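
    The three essential parameters above feed into the normalised SNR used in CR/DDA practice standards. A commonly used normalisation rescales the measured SNR by the basic spatial resolution relative to 88.6 μm; the sketch below states that relation under the assumption that this is the intended form, so consult the standard itself for the authoritative definition.

        def normalised_snr(snr_measured, basic_spatial_resolution_um):
            """Normalised SNR (assumed form, per common CR/DDA practice):
            SNR_N = SNR_measured * 88.6 um / SR_b."""
            return snr_measured * 88.6 / basic_spatial_resolution_um

        # Example: a measured SNR of 120 at a basic spatial resolution of 130 um.
        print(f"SNR_N = {normalised_snr(120.0, 130.0):.0f}")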

  8. Evaluation of scatter effects on image quality for breast tomosynthesis

    NASA Astrophysics Data System (ADS)

    Wu, Gang; Mainprize, James G.; Boone, John M.; Yaffe, Martin J.

    2007-03-01

    Digital breast tomosynthesis uses a limited number of low-dose x-ray projections to produce a three-dimensional (3D) tomographic reconstruction of the breast. The purpose of this investigation was to characterize and evaluate the effect of scatter radiation on image quality for breast tomosynthesis. Generated by a Monte Carlo simulation method, scatter point spread functions (PSF) were convolved over the field of view (FOV) to estimate the distribution of scatter for each angle of tomosynthesis projection. The results demonstrated that in the absence of scatter reduction techniques, the scatter-to-primary ratio (SPR) levels for the average breast are quite high (~0.4 at the centre of mass), and increased with increased breast thickness and with larger FOV. Associated with such levels of x-ray scatter are cupping artifacts, as well as reduced accuracy in reconstruction values. The effect of x-ray scatter on the contrast, noise, and signal-difference-to-noise ratio (SDNR) in tomosynthesis reconstruction was measured as a function of tumour size. For example, the contrast in the reconstructed central slice of a tumour-like mass (14 mm in diameter) was degraded by 30% while the inaccuracy of the voxel value was 28%, and the reduction of SDNR was 60%. We have quantified the degree to which scatter degrades the image quality over a wide range of parameters, including x-ray beam energy, breast thickness, breast diameter, and breast composition. However, even without a scatter rejection device, the contrast and SDNR in the reconstructed tomosynthesis slice is higher than that of conventional mammographic projection images acquired with a grid at an equivalent total exposure.
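
    The scatter estimation step described above, convolving Monte-Carlo-derived scatter point spread functions over the field of view, can be sketched as a single FFT convolution. The kernel is assumed here to be normalised so that its integral equals the local scatter fraction; this is an illustration of the idea, not the authors' simulation code.

        import numpy as np
        from scipy.signal import fftconvolve

        def scatter_to_primary_ratio(primary, scatter_psf):
            """Estimate the scatter field by convolving the primary image with a
            scatter PSF (assumed normalised to the scatter fraction), then form the
            scatter-to-primary ratio; primary is assumed to be non-zero everywhere."""
            scatter = fftconvolve(primary, scatter_psf, mode="same")
            return scatter / primary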

  9. Digital Image Processing Applied To Quality Assurance In Mineral Industry

    NASA Astrophysics Data System (ADS)

    Hamrouni, Zouheir; Ayache, Alain; Krey, Charlie J.

    1989-03-01

    In this paper, we present an application of computer vision to quality assurance in the talc mineral industry. Using image processing and computer vision, the proposed real-time whiteness sensor system is intended to inspect the whiteness of the ground product and to manage the mixing of primary talcs before grinding, in order to obtain a final product with a predetermined whiteness. The system uses the robotic CCD microcamera MICAM (designed by our laboratory and presently manufactured), a microcomputer system based on the Motorola 68020, and real-time image processing boards. It has the following industrial specifications: high reliability, and whiteness determined with 0.3% precision on a scale of 25 levels. Because of the expected precision, we had to carefully study the lighting system, the type of image sensor and the associated electronics. The software developed first is able to assess the whiteness of talcum powder; we then devised original algorithms to control the whiteness of rough talc, taking texture and shadows into account. The processing times of these algorithms are completely compatible with industrial rates. This system can be applied to other domains where a high-precision reflectance sensor is needed: the paper industry, paints, ...

  10. Damage and quality assessment in wheat by NIR hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Delwiche, Stephen R.; Kim, Moon S.; Dong, Yanhong

    2010-04-01

    Fusarium head blight is a fungal disease that affects the world's small grains, such as wheat and barley. Attacking the spikelets during development, the fungus reduces yield and produces grain of poorer processing quality. It is also a health concern because of the secondary metabolite, deoxynivalenol, which often accompanies the fungus. While chemical methods exist to measure the concentration of the mycotoxin, and manual visual inspection is used to ascertain the level of Fusarium damage, research has been active in developing fast, optically based techniques that can assess this form of damage. In the current study a near-infrared (1000-1700 nm) hyperspectral imaging system was assembled and applied to Fusarium-damaged kernel recognition. With anticipation of an eventual multispectral imaging system design, five wavelengths were manually selected from a pool of 146 images as the most promising, such that when combined in pairs or triplets, Fusarium damage could be identified. We present the results of two pairs of wavelengths [(1199, 1474 nm) and (1315, 1474 nm)] whose reflectance values produced adequate separation of kernels of healthy appearance (i.e., asymptomatic condition) from kernels possessing Fusarium damage.
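
    The two-band separation described can be illustrated with a simple linear discriminant on per-kernel reflectance pairs; the class means, spreads, and the classifier choice below are synthetic placeholders, not the study's measurements or method.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        # Synthetic mean reflectances per kernel at the two selected bands (1199 nm, 1474 nm).
        rng = np.random.default_rng(0)
        healthy = rng.normal(loc=[0.45, 0.30], scale=0.03, size=(100, 2))
        damaged = rng.normal(loc=[0.55, 0.42], scale=0.03, size=(100, 2))

        X = np.vstack([healthy, damaged])
        y = np.array([0] * 100 + [1] * 100)      # 0 = asymptomatic, 1 = Fusarium-damaged

        clf = LinearDiscriminantAnalysis().fit(X, y)
        print(f"Training accuracy: {clf.score(X, y):.2f}")
        print(clf.predict([[0.52, 0.40]]))       # classify a new reflectance pair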

  11. Amplitude dependence of image quality in atomically-resolved bimodal atomic force microscopy

    NASA Astrophysics Data System (ADS)

    Ooe, Hiroaki; Kirpal, Dominik; Wastl, Daniel S.; Weymouth, Alfred J.; Arai, Toyoko; Giessibl, Franz J.

    2016-10-01

    In bimodal frequency modulation atomic force microscopy (FM-AFM), two flexural modes are excited simultaneously. We show atomically resolved images of KBr(100) in ambient conditions in both modes that display a strong correlation between the image quality and amplitude. We define the sum amplitude as the sum of the amplitudes of both modes. When the sum amplitude becomes larger than about 100 pm, the signal-to-noise ratio (SNR) drastically decreases. We propose that this is caused by the temporary presence of one or more water layers in the tip-sample gap. These water layers screen the short range interaction and must be displaced with each oscillation cycle. Decreasing the amplitude of either mode, however, increases the noise. Therefore, the highest SNR in ambient conditions is achieved when twice the sum amplitude is slightly less than the thickness of the primary hydration layer.

  12. Integrating empowerment evaluation and quality improvement to achieve healthcare improvement outcomes

    PubMed Central

    Wandersman, Abraham; Alia, Kassandra Ann; Cook, Brittany; Ramaswamy, Rohit

    2015-01-01

    While the body of evidence-based healthcare interventions grows, the ability of health systems to deliver these interventions effectively and efficiently lags behind. Quality improvement approaches, such as the model for improvement, have demonstrated some success in healthcare but their impact has been lessened by implementation challenges. To help address these challenges, we describe the empowerment evaluation approach that has been developed by programme evaluators and a method for its application (Getting To Outcomes (GTO)). We then describe how GTO can be used to implement healthcare interventions. An illustrative healthcare quality improvement example that compares the model for improvement and the GTO method for reducing hospital admissions through improved diabetes care is described. We conclude with suggestions for integrating GTO and the model for improvement. PMID:26178332

  13. Integrating empowerment evaluation and quality improvement to achieve healthcare improvement outcomes.

    PubMed

    Wandersman, Abraham; Alia, Kassandra Ann; Cook, Brittany; Ramaswamy, Rohit

    2015-10-01

    While the body of evidence-based healthcare interventions grows, the ability of health systems to deliver these interventions effectively and efficiently lags behind. Quality improvement approaches, such as the model for improvement, have demonstrated some success in healthcare but their impact has been lessened by implementation challenges. To help address these challenges, we describe the empowerment evaluation approach that has been developed by programme evaluators and a method for its application (Getting To Outcomes (GTO)). We then describe how GTO can be used to implement healthcare interventions. An illustrative healthcare quality improvement example that compares the model for improvement and the GTO method for reducing hospital admissions through improved diabetes care is described. We conclude with suggestions for integrating GTO and the model for improvement.

  14. Beyond image quality: designing engaging interactions with digital products

    NASA Astrophysics Data System (ADS)

    de Ridder, Huib; Rozendaal, Marco C.

    2008-02-01

    Ubiquitous computing (or Ambient Intelligence) promises a world in which information is available anytime, anywhere, and with which humans can interact in a natural, multimodal way. In such a world, perceptual image quality remains an important criterion since most information will be displayed visually, but other criteria such as enjoyment, fun, engagement and hedonic quality are emerging. This paper deals with engagement, the intrinsically enjoyable readiness to put more effort into exploring and/or using a product than strictly required, thus attracting and keeping the user's attention for a longer period of time. The impact of the experienced richness of an interface, both its visual richness and the degree of possible manipulations, was investigated in a series of experiments employing game-like user interfaces. This resulted in the extension of an existing conceptual framework relating engagement to richness by means of two intermediating variables, namely experienced challenge and sense of control. Predictions from this revised framework are evaluated against the results of an earlier experiment assessing the ergonomic and hedonic qualities of interactive media. Test material consisted of interactive CD-ROMs containing presentations of three companies for future customers.

  15. Broadcast quality 3840 × 2160 color imager operating at 30 frames/s

    NASA Astrophysics Data System (ADS)

    Iodice, Robert M.; Joyner, Michael; Hong, Canaan S.; Parker, David P.

    2003-05-01

    Both the active column sensor (ACS) pixel sensing technology and the PVS-Bus multiplexer technology have been applied to a color imaging array to produce an extraordinarily high-resolution color imager of greater than 8 million pixels, with image quality and speed suitable for a broad range of applications including digital cinema, broadcast video, and security/surveillance. The imager has been realized in a standard 0.5 μm CMOS technology using double-poly and triple-metal (DP3M) construction and features a pixel size of 7.5 μm by 7.5 μm. Mask-level stitching enables the construction of a high-quality, low-dark-current imager having an array size of 16.2 mm by 28.8 mm. The image array aspect ratio is 16:9 with a diagonal of 33 mm, making it suitable for HDTV applications using optics designed for 35 mm still photography. A high modulation transfer function (MTF) is maintained by utilizing microlenses along with an RGB Bayer-pattern color filter array. The frame rate of 30 frames/s in progressive mode is achieved using the PVS-Bus technology with eight output ports, which corresponds to an overall pixel rate of 248 Mpixels per second. High dynamic range and low fixed-pattern noise are achieved by combining photodiode pixels with the ACS pixel sensing technology and a modified correlated double-sampling (CDS) technique. Exposure time can be programmed by the user from a full frame of integration down to a single line of integration in steps of 14.8 μs. The output gain is programmable from 0 dB to +12 dB in 256 steps; the output offset is also programmable over a range of 765 mV in 256 steps. This QuadHDTV imager has been delivered to customers and has been demonstrated in a prototype camera that provides full-resolution video with all image processing on board. The prototype camera operates at 2160p24, 2160p30 and 2160i60.

  16. Task-based measures of image quality and their relation to radiation dose and patient risk

    PubMed Central

    Barrett, Harrison H.; Myers, Kyle J.; Hoeschen, Christoph; Kupinski, Matthew A.; Little, Mark P.

    2015-01-01

    The theory of task-based assessment of image quality is reviewed in the context of imaging with ionizing radiation, and objective figures of merit (FOMs) for image quality are summarized. The variation of the FOMs with the task, the observer and especially with the mean number of photons recorded in the image is discussed. Then various standard methods for specifying radiation dose are reviewed and related to the mean number of photons in the image and hence to image quality. Current knowledge of the relation between local radiation dose and the risk of various adverse effects is summarized, and some graphical depictions of the tradeoffs between image quality and risk are introduced. Then various dose-reduction strategies are discussed in terms of their effect on task-based measures of image quality. PMID:25564960

  17. SENTINEL-2 image quality and level 1 processing

    NASA Astrophysics Data System (ADS)

    Meygret, Aimé; Baillarin, Simon; Gascon, Ferran; Hillairet, Emmanuel; Dechoz, Cécile; Lacherade, Sophie; Martimort, Philippe; Spoto, François; Henry, Patrice; Duca, Riccardo

    2009-08-01

    In the framework of the Global Monitoring for Environment and Security (GMES) programme, the European Space Agency (ESA) in partnership with the European Commission (EC) is developing the SENTINEL-2 optical imaging mission devoted to the operational monitoring of land and coastal areas. The Sentinel-2 mission is based on a twin-satellite configuration deployed in a polar sun-synchronous orbit and is designed to offer a unique combination of systematic global coverage with a wide field of view (290 km), a high revisit (5 days at the equator with two satellites), a high spatial resolution (10 m, 20 m and 60 m) and multi-spectral imagery (13 bands in the visible and shortwave infrared spectrum). SENTINEL-2 will ensure data continuity of the SPOT and LANDSAT multispectral sensors while accounting for future service evolution. This paper presents the main geometric and radiometric image quality requirements for the mission. The strong multi-spectral and multi-temporal registration requirements constrain the stability of the platform and the ground processing, which will automatically refine the geometric physical model through correlation techniques. The geolocation of the images will benefit from a worldwide reference data set made of SENTINEL-2 data strips geolocated through a global space-triangulation. This processing is detailed through the description of the Level 1C production, which will provide users with ortho-images of top-of-atmosphere reflectances. The huge amount of data (1.4 Tbits per orbit) is also a challenge for the ground processing, which will process all acquired data to Level 1C. Finally, we discuss the different geometric (line of sight, focal plane cartography, ...) and radiometric (relative and absolute camera sensitivity) in-flight calibration methods that will take advantage of the on-board sun diffuser and ground targets to meet the stringent mission requirements.

  18. Potential of organic filter materials for treating greywater to achieve irrigation quality: a review.

    PubMed

    Dalahmeh, Sahar S; Hylander, Lars D; Vinnerås, Björn; Pell, Mikael; Oborn, Ingrid; Jönsson, Håkan

    2011-01-01

    The objectives of this literature review were to: (i) evaluate the impact of greywater generated in rural communities, with the emphasis on Jordanian conditions, on soil, plant and public health and assess the need for treatment of this greywater before it is used for irrigation, and (ii) assess the potential of different types of organic by-products as carrier material in different filter units for removal of pollutants from greywater. Greywater with high BOD5, COD, high concentrations of SS, fat, oil and grease and high levels of surfactants is commonly found in rural areas in Jordan. Oxygen depletion, odour emission, hydrophobic soil phenomena, plant toxicity, blockage of piping systems and microbiological health risks are common problems associated with greywater without previous treatment. Organic by-products such as wood chips, bark, peat, wheat straw and corncob may be used as carrier material in so-called mulch filters for treating wastewater and greywater from different sources. A down-flow-mode vertical filter is a common setup used in mulch filters. Wastewaters with a wide range of SS, cBOD5 and COD fed into different mulch filters have been studied. The different mulch materials achieved SS removal ranging between 51 and 91%, a BOD5 reduction range of 55-99.9%, and COD removal of 51-98%. Most types of mulches achieved a higher organic matter removal than that achieved by an ordinary septic tank. Bark, peat and wood chips filters removed organic matter better than sand and trickling filters, under similar conditions. Release of filter material and increase in COD in the effluent was reported using some mulch materials. In conclusion, some mulch materials such as bark, peat and woodchips seem to have a great potential for treatment of greywater in robust, low-tech systems. They can be expected to be resilient in dealing with variable low and high organic loads and shock loads.

  19. Nuclear imaging of the breast: Translating achievements in instrumentation into clinical use

    PubMed Central

    Hruska, Carrie B.; O'Connor, Michael K.

    2013-01-01

    Approaches to imaging the breast with nuclear medicine and/or molecular imaging methods have been under investigation since the late 1980s when a technique called scintimammography was first introduced. This review charts the progress of nuclear imaging of the breast over the last 20 years, covering the development of newer techniques such as breast specific gamma imaging, molecular breast imaging, and positron emission mammography. Key issues critical to the adoption of these technologies in the clinical environment are discussed, including the current status of clinical studies, the efforts at reducing the radiation dose from procedures associated with these technologies, and the relevant radiopharmaceuticals that are available or under development. The necessary steps required to move these technologies from bench to bedside are also discussed. PMID:23635248

  20. Nuclear imaging of the breast: Translating achievements in instrumentation into clinical use

    SciTech Connect

    Hruska, Carrie B.; O'Connor, Michael K.

    2013-05-15

    Approaches to imaging the breast with nuclear medicine and/or molecular imaging methods have been under investigation since the late 1980s when a technique called scintimammography was first introduced. This review charts the progress of nuclear imaging of the breast over the last 20 years, covering the development of newer techniques such as breast specific gamma imaging, molecular breast imaging, and positron emission mammography. Key issues critical to the adoption of these technologies in the clinical environment are discussed, including the current status of clinical studies, the efforts at reducing the radiation dose from procedures associated with these technologies, and the relevant radiopharmaceuticals that are available or under development. The necessary steps required to move these technologies from bench to bedside are also discussed.

  1. Sparse Representation-Based Image Quality Index With Adaptive Sub-Dictionaries.

    PubMed

    Li, Leida; Cai, Hao; Zhang, Yabin; Lin, Weisi; Kot, Alex C; Sun, Xingming

    2016-08-01

    Distortions cause structural changes in digital images, leading to degraded visual quality. Dictionary-based sparse representation has been widely studied recently due to its ability to extract inherent image structures. Meanwhile, it can extract image features with slightly higher-level semantics. Intuitively, sparse representation can be used for image quality assessment, because visible distortions can cause significant changes to the sparse features. In this paper, a new sparse representation-based image quality assessment model is proposed based on the construction of adaptive sub-dictionaries. An overcomplete dictionary trained from natural images is employed to capture the structure changes between the reference and distorted images by sparse feature extraction via adaptive sub-dictionary selection. Based on the observation that image sparse features are invariant to weak degradations and the perceived image quality is generally influenced by diverse issues, three auxiliary quality features are added, including gradient, color, and luminance information. The proposed method is not sensitive to the training images, so a universal dictionary can be adopted for quality evaluation. Extensive experiments on five public image quality databases demonstrate that the proposed method produces state-of-the-art results and delivers consistently good performance across different image quality databases.
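
    A minimal sketch of the underlying sparse-coding step, under stated assumptions: a plain orthogonal matching pursuit over a random (normally pre-trained) dictionary, plus an illustrative similarity score between the sparse codes of a reference and a distorted patch. It does not implement the paper's adaptive sub-dictionary selection or its auxiliary gradient, color, and luminance features.

        import numpy as np

        def omp(D, x, n_nonzero=4):
            """Greedy orthogonal matching pursuit: sparse code of x over dictionary D (atoms in columns)."""
            residual, support = x.copy(), []
            for _ in range(n_nonzero):
                support.append(int(np.argmax(np.abs(D.T @ residual))))
                coefs, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
                residual = x - D[:, support] @ coefs
            code = np.zeros(D.shape[1])
            code[support] = coefs
            return code

        def sparse_similarity(D, patch_ref, patch_dst, n_nonzero=4):
            """Illustrative similarity (0..1) between the sparse codes of two patches."""
            a, b = omp(D, patch_ref, n_nonzero), omp(D, patch_dst, n_nonzero)
            return (2 * np.abs(a) @ np.abs(b) + 1e-6) / (a @ a + b @ b + 1e-6)

        # Toy example with a random overcomplete dictionary for 8x8 patches.
        rng = np.random.default_rng(1)
        D = rng.normal(size=(64, 256))
        D /= np.linalg.norm(D, axis=0)              # unit-norm atoms
        ref = rng.normal(size=64)
        dst = ref + 0.3 * rng.normal(size=64)       # mildly distorted patch
        print(f"sparse-feature similarity: {sparse_similarity(D, ref, dst):.3f}")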

  2. NOTE: Development of a quality assurance protocol for peripheral subtraction imaging applications

    NASA Astrophysics Data System (ADS)

    Walsh, C.; Murphy, D.; O'Hare, N.

    2002-04-01

    Peripheral subtraction scanning is used to trace the blood vessels of upper and lower extremities. In some modern C-arm fluoroscopy systems this function is performed automatically. In this mode the system is programmed to advance and stop in a series of steps taking a mask image at each point. The system then repeats each step after the contrast agent has been injected, and produces a DSA image at each point. Current radiographic quality assurance protocols do not address this feature. This note reviews methods of measuring system vibration while images are being acquired in automated peripheral stepping. The effect on image quality pre- and post-image processing is assessed. Results show that peripheral stepping DSA does not provide the same degree of image quality as static DSA. In examining static test objects, the major cause of the reduction in image quality is misregistration due to vibration of the image intensifier during imaging.

  3. Effect of labeling density and time post labeling on quality of antibody-based super resolution microscopy images

    NASA Astrophysics Data System (ADS)

    Bittel, Amy M.; Saldivar, Isaac; Dolman, Nicholas; Nickerson, Andrew K.; Lin, Li-Jung; Nan, Xiaolin; Gibbs, Summer L.

    2015-03-01

    Super resolution microscopy (SRM) has overcome the historic spatial resolution limit of light microscopy, enabling fluorescence visualization of intracellular structures and multi-protein complexes at the nanometer scale. Using single-molecule localization microscopy, the precise location of a stochastically activated population of photoswitchable fluorophores is determined during the collection of many images to form a single image with resolution of ~10-20 nm, an order of magnitude improvement over conventional microscopy. One of the key factors in achieving such resolution with single-molecule SRM is the ability to accurately locate each fluorophore while it emits photons. Image quality is also related to an appropriate labeling density of the entity of interest within the sample. While ease of detection improves as entities are labeled with more fluorophores and have increased fluorescence signal, there is potential to reduce localization precision, and hence resolution, with an increased number of fluorophores that are on at the same time in the same relative vicinity. In the current work, fixed microtubules were antibody labeled using secondary antibodies prepared with a range of Alexa Fluor 647 conjugation ratios to compare the image quality of microtubules against the fluorophore labeling density. It was found that image quality changed with both the fluorophore labeling density and the time between completion of labeling and performance of the imaging study, with certain fluorophore-to-protein ratios giving optimal imaging results.

  4. The image quality of ion computed tomography at clinical imaging dose levels

    SciTech Connect

    Hansen, David C.; Bassler, Niels; Sørensen, Thomas Sangild; Seco, Joao

    2014-11-01

    Purpose: Accurately predicting the range of radiotherapy ions in vivo is important for the precise delivery of dose in particle therapy. Range uncertainty is currently the single largest contribution to the dose margins used in planning and leads to a higher dose to normal tissue. The use of ion CT has been proposed as a method to improve the range uncertainty and thereby reduce the dose to normal tissue of the patient. A wide variety of ions have been proposed and studied for this purpose, but no studies evaluate the image quality obtained with different ions in a consistent manner. However, the imaging dose in ion CT is a concern which may limit the obtainable image quality. In addition, the imaging doses reported have not been directly comparable with x-ray CT doses due to the different biological impacts of ion radiation. The purpose of this work is to develop a robust methodology for comparing the image quality of ion CT with respect to particle therapy, taking into account different reconstruction methods and ion species. Methods: A comparison of different ions and energies was made. Ion CT projections were simulated for five different scenarios: protons at 230 and 330 MeV, helium ions at 230 MeV/u, and carbon ions at 430 MeV/u. Maps of the water equivalent stopping power were reconstructed using a weighted least squares method. The dose was evaluated via a quality-factor-weighted CT dose index called the CT dose equivalent index (CTDEI). Spatial resolution was measured by the modulation transfer function. This was done by a noise-robust fit to the edge spread function. Second, the image quality as a function of the number of scanning angles was evaluated for protons at 230 MeV. In the resolution study, the CTDEI was fixed to 10 mSv, similar to a typical x-ray CT scan. Finally, scans at a range of CTDEIs were done, to evaluate the influence of dose on reconstruction error. Results: All ions yielded accurate stopping power estimates, none of which were statistically
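
    The chain from edge spread function to modulation transfer function mentioned above can be sketched as follows; the error-function edge model, the sampling step, and the synthetic noisy edge are assumptions for illustration, not the authors' exact fitting procedure.

        import numpy as np
        from scipy.optimize import curve_fit
        from scipy.special import erf

        def edge_model(x, amplitude, centre, sigma, offset):
            """Error-function model of an edge spread function (ESF)."""
            return offset + 0.5 * amplitude * (1 + erf((x - centre) / (sigma * np.sqrt(2))))

        def mtf_from_esf(x_mm, esf):
            """Fit the ESF, differentiate to the LSF, and Fourier transform to the MTF."""
            p0 = [np.ptp(esf), x_mm.mean(), 0.5, esf.min()]
            popt, _ = curve_fit(edge_model, x_mm, esf, p0=p0)
            dx = x_mm[1] - x_mm[0]
            lsf = np.gradient(edge_model(x_mm, *popt), dx)   # line spread function
            mtf = np.abs(np.fft.rfft(lsf))
            mtf /= mtf[0]
            freqs = np.fft.rfftfreq(len(lsf), d=dx)          # cycles per mm
            return freqs, mtf

        # Synthetic noisy edge profile sampled every 0.1 mm.
        x = np.arange(-10, 10, 0.1)
        rng = np.random.default_rng(2)
        esf = edge_model(x, 100.0, 0.0, 0.8, 10.0) + rng.normal(scale=2.0, size=x.size)
        freqs, mtf = mtf_from_esf(x, esf)
        print(f"MTF at 0.5 cycles/mm: {np.interp(0.5, freqs, mtf):.2f}")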

  5. A comparative study based on image quality and clinical task performance for CT reconstruction algorithms in radiotherapy.

    PubMed

    Li, Hua; Dolly, Steven; Chen, Hsin-Chen; Anastasio, Mark A; Low, Daniel A; Li, Harold H; Michalski, Jeff M; Thorstad, Wade L; Gay, Hiram; Mutic, Sasa

    2016-07-08

    CT image reconstruction is typically evaluated based on the ability to reduce the radiation dose to as-low-as-reasonably-achievable (ALARA) while maintaining acceptable image quality. However, the determination of common image quality metrics, such as noise, contrast, and contrast-to-noise ratio, is often insufficient for describing clinical radiotherapy task performance. In this study we designed and implemented a new comparative analysis method associating image quality, radiation dose, and patient size with radiotherapy task performance, with the purpose of guiding the clinical radiotherapy usage of CT reconstruction algorithms. The iDose4 iterative reconstruction algorithm was selected as the target for comparison, wherein filtered back-projection (FBP) reconstruction was regarded as the baseline. Both phantom and patient images were analyzed. A layer-adjustable anthropomorphic pelvis phantom capable of mimicking 38-58 cm lateral diameter-sized patients was imaged and reconstructed by the FBP and iDose4 algorithms with varying noise-reduction-levels, respectively. The resulting image sets were quantitatively assessed by two image quality indices, noise and contrast-to-noise ratio, and two clinical task-based indices, target CT Hounsfield number (for electron density determination) and structure contouring accuracy (for dose-volume calculations). Additionally, CT images of 34 patients reconstructed with iDose4 with six noise reduction levels were qualitatively evaluated by two radiation oncologists using a five-point scoring mechanism. For the phantom experiments, iDose4 achieved noise reduction up to 66.1% and CNR improvement up to 53.2%, compared to FBP without considering the changes of spatial resolution among images and the clinical acceptance of reconstructed images. Such improvements consistently appeared across different iDose4 noise reduction levels, exhibiting limited interlevel noise (< 5 HU) and target CT number variations (< 1 HU). The radiation

  6. Neuron Image Analyzer: Automated and Accurate Extraction of Neuronal Data from Low Quality Images.

    PubMed

    Kim, Kwang-Min; Son, Kilho; Palmore, G Tayhas R

    2015-01-01

    Image analysis software is an essential tool used in neuroscience and neural engineering to evaluate changes in neuronal structure following extracellular stimuli. Both manual and automated methods in current use are severely inadequate at detecting and quantifying changes in neuronal morphology when the images analyzed have a low signal-to-noise ratio (SNR). This inadequacy derives from the fact that these methods often include data from non-neuronal structures or artifacts by simply tracing pixels with high intensity. In this paper, we describe Neuron Image Analyzer (NIA), a novel algorithm that overcomes these inadequacies by employing Laplacian of Gaussian filter and graphical models (i.e., Hidden Markov Model, Fully Connected Chain Model) to specifically extract relational pixel information corresponding to neuronal structures (i.e., soma, neurite). As such, NIA that is based on vector representation is less likely to detect false signals (i.e., non-neuronal structures) or generate artifact signals (i.e., deformation of original structures) than current image analysis algorithms that are based on raster representation. We demonstrate that NIA enables precise quantification of neuronal processes (e.g., length and orientation of neurites) in low quality images with a significant increase in the accuracy of detecting neuronal changes post-stimulation. PMID:26593337

  7. Neuron Image Analyzer: Automated and Accurate Extraction of Neuronal Data from Low Quality Images

    PubMed Central

    Kim, Kwang-Min; Son, Kilho; Palmore, G. Tayhas R.

    2015-01-01

    Image analysis software is an essential tool used in neuroscience and neural engineering to evaluate changes in neuronal structure following extracellular stimuli. Both manual and automated methods in current use are severely inadequate at detecting and quantifying changes in neuronal morphology when the images analyzed have a low signal-to-noise ratio (SNR). This inadequacy derives from the fact that these methods often include data from non-neuronal structures or artifacts by simply tracing pixels with high intensity. In this paper, we describe Neuron Image Analyzer (NIA), a novel algorithm that overcomes these inadequacies by employing Laplacian of Gaussian filter and graphical models (i.e., Hidden Markov Model, Fully Connected Chain Model) to specifically extract relational pixel information corresponding to neuronal structures (i.e., soma, neurite). As such, NIA that is based on vector representation is less likely to detect false signals (i.e., non-neuronal structures) or generate artifact signals (i.e., deformation of original structures) than current image analysis algorithms that are based on raster representation. We demonstrate that NIA enables precise quantification of neuronal processes (e.g., length and orientation of neurites) in low quality images with a significant increase in the accuracy of detecting neuronal changes post-stimulation. PMID:26593337

  8. Varying the periodicity to achieve high quality factor on asymmetrical H-Shaped resonators

    NASA Astrophysics Data System (ADS)

    Mohamad Ali Nasri, Ili F.; Mbomson, Ifeoma G.; De La Rue, Richard M.; Johnson, Nigel P.

    2016-04-01

    An asymmetrical H-shaped resonator (ASH) has been designed using gold on a fused silica substrate. The aim is to obtain a high quality factor at the reflectance resonance peaks in the mid-infrared wavelength range of 2 μm to 8 μm. The structures were modelled using the Lumerical Solutions Finite-Difference Time-Domain (FDTD) simulation software, with periodic boundary conditions applied along the X and Y axes and a perfectly matched layer (PML) along the Z axis. The asymmetric structures give double resonance peaks that depend on the arm length of the structure. The periodicity along the X and Y axes was varied to tune the width of the resonant peaks in order to obtain the maximum Q-factor. Experimental results broadly confirm the simulations.
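
    For reference, the quality factor reported from such reflectance spectra is commonly the resonance wavelength divided by the full width at half maximum of the peak; a small helper illustrating this on a synthetic Lorentzian resonance (the spectrum and linewidth below are made up) is shown here.

        import numpy as np

        def q_factor(wavelength_um, reflectance):
            """Estimate Q = lambda_0 / FWHM from a single reflectance resonance peak."""
            peak = int(np.argmax(reflectance))
            half = reflectance.min() + 0.5 * (reflectance[peak] - reflectance.min())
            above = np.where(reflectance >= half)[0]
            fwhm = wavelength_um[above[-1]] - wavelength_um[above[0]]
            return wavelength_um[peak] / fwhm

        # Synthetic Lorentzian resonance around 3.5 um with a 0.05 um half width.
        lam = np.linspace(2.0, 8.0, 4000)
        refl = 0.05 + 0.6 / (1 + ((lam - 3.5) / 0.05) ** 2)
        print(f"Q ~ {q_factor(lam, refl):.0f}")   # roughly 3.5 / 0.1 = 35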

  9. Hygienic support of the ISS air quality (main achievements and prospects)

    NASA Astrophysics Data System (ADS)

    Moukhamedieva, Lana; Tsarkov, Dmitriy; Pakhomova, Anna

    Hygienic preventive measures during pre-flight processing of crewed spacecraft, selection of polymeric materials, and sanitary-hygienic evaluation of the cargo, scientific hardware and life support systems used on the ISS allow air quality to be maintained within regulatory limits. However, a gradual increase in total air contamination by harmful chemicals is observed as the service life of the ISS lengthens. It is caused by the growing overall quantity of polymeric materials used on the station, by additional contamination brought by cargo spacecraft and modules docking with the ISS, and by the cargo itself. At the same time, the range of contaminants typical of off-gassing from polymeric materials containing modern stabilizers, plasticizers, flame retardants and other additives is widening. In addressing ISS service life extension, the main task of hygienic research is to determine the real safe operating life of the polymeric materials used in the station's structures and hardware, including: research on polymer degradation (ageing) and its effect on the intensity and toxicity of off-gassing; and introduction of polymers with minimal volatile organic compound off-gassing under conditions of space flight and thermal-oxidative degradation. To ensure human safety during long-duration flight it is important to develop: real-time air quality monitoring systems, including on-line analysis of the highly toxic contaminants evolving during thermo-oxidative degradation of polymer materials and during releases of toxic contaminants; and hygienic standards for contaminant levels for extended flight durations of up to 3 years. It is also essential to develop an automated control system for on-line monitoring of toxicological status, together with hygienic and engineering measures for its management, to ensure crew safety during off-nominal situations.

  10. Automatic exposure control in multichannel CT with tube current modulation to achieve a constant level of image noise: Experimental assessment on pediatric phantoms

    SciTech Connect

    Brisse, Herve J.; Madec, Ludovic; Gaboriaud, Genevieve; Lemoine, Thomas; Savignoni, Alexia; Neuenschwander, Sylvia; Aubert, Bernard; Rosenwald, Jean-Claude

    2007-07-15

    Automatic exposure control (AEC) systems have been developed by computed tomography (CT) manufacturers to improve the consistency of image quality among patients and to control the absorbed dose. Since a multichannel helical CT scan may easily increase individual radiation doses, this technical improvement is of special interest in children, who are particularly sensitive to ionizing radiation, but little information is currently available regarding the precise performance of these systems on small patients. Our objective was to assess an AEC system on pediatric dose phantoms by studying the impact of phantom transmission and acquisition parameters on tube current modulation, on the resulting absorbed dose and on image quality. We used a four-channel CT scanner working with a patient-size and z-axis-based AEC system designed to achieve a constant noise within the reconstructed images by automatically adjusting the tube current during acquisition. The study was performed with six cylindrical poly(methylmethacrylate) (PMMA) phantoms of variable diameters (10-32 cm) and one pediatric anthropomorphic phantom equivalent to a 5-year-old child. After a single scan projection radiograph (SPR), helical acquisitions were performed and images were reconstructed with a standard convolution kernel. Tube current modulation was studied with variable SPR settings (tube angle, mA, kVp) and helical parameters (6-20 HU noise indices, 80-140 kVp tube potential, 0.8-4 s tube rotation time, 5-20 mm x-ray beam thickness, 0.75-1.5 pitch, 1.25-10 mm image thickness, variable acquisition and reconstruction fields of view). CT dose indices (CTDIvol) were measured, and the image quality criterion used was the standard deviation of the CT number measured in reconstructed images of PMMA material. Observed tube current levels were compared to the expected values from Brooks and Di Chiro's [R.A. Brooks and G.D. Chiro, Med. Phys. 3, 237-240 (1976)] model and calculated values (product of a reference value
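
    A commonly cited consequence of photon-statistics noise models of this kind is that, to hold image noise constant, the tube current-time product must rise roughly exponentially with the water-equivalent path length. The sketch below illustrates that scaling under an assumed effective attenuation coefficient and reference diameter; it is not the scanner vendor's actual modulation law.

        import math

        MU_EFF_PER_CM = 0.18   # assumed effective attenuation coefficient for water-equivalent material

        def relative_mas(diameter_cm, reference_cm=16.0, mu=MU_EFF_PER_CM):
            """mAs needed to keep noise constant, relative to a reference phantom diameter.

            Assumes noise^2 is proportional to 1 / (mAs * transmission), with
            transmission = exp(-mu * d); both assumptions are illustrative.
            """
            return math.exp(mu * (diameter_cm - reference_cm))

        for d in (10, 16, 24, 32):
            print(f"{d:2d} cm phantom: {relative_mas(d):6.2f} x reference mAs")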

  11. GLIMPSE: A decision support tool for simultaneously achieving our air quality management and climate change mitigation goals

    NASA Astrophysics Data System (ADS)

    Pinder, R. W.; Akhtar, F.; Loughlin, D. H.; Henze, D. K.; Bowman, K. W.

    2012-12-01

    Poor air quality, ecosystem damage, and climate change are all caused by the combustion of fossil fuels, yet environmental management often addresses each of these challenges separately. This can lead to sub-optimal strategies and unintended consequences. Here we present GLIMPSE -- a decision support tool for simultaneously achieving our air quality and climate change mitigation goals. GLIMPSE comprises two types of models: (i) the adjoint of the GEOS-Chem chemical transport model, to calculate the relationship between emissions and impacts at high spatial resolution, and (ii) the MARKAL energy system model, to calculate the relationship between energy technologies and emissions. This presentation will demonstrate how GLIMPSE can be used to explore energy scenarios that both improve air quality and mitigate climate change. Second, this presentation will discuss how space-based observations can be incorporated into GLIMPSE to improve decision-making. NASA satellite products, namely ozone radiative forcing from the Tropospheric Emission Spectrometer (TES), are used to extend GLIMPSE to include the impact of emissions on ozone radiative forcing. This provides a much needed observational constraint on ozone radiative forcing.

  12. Generalization Evaluation of Machine Learning Numerical Observers for Image Quality Assessment.

    PubMed

    Kalayeh, Mahdi M; Marin, Thibault; Brankov, Jovan G

    2013-06-01

    In this paper, we present two new numerical observers (NO) based on machine learning for image quality assessment. The proposed NOs aim to predict human observer performance in a cardiac perfusion-defect detection task for single-photon emission computed tomography (SPECT) images. Human observer (HumO) studies are now considered to be the gold standard for task-based evaluation of medical images. However such studies are impractical for use in early stages of development for imaging devices and algorithms, because they require extensive involvement of trained human observers who must evaluate a large number of images. To address this problem, numerical observers (also called model observers) have been developed as a surrogate for human observers. The channelized Hotelling observer (CHO), with or without internal noise model, is currently the most widely used NO of this kind. In our previous work we argued that development of a NO model to predict human observers' performance can be viewed as a machine learning (or system identification) problem. This consideration led us to develop a channelized support vector machine (CSVM) observer, a kernel-based regression model that greatly outperformed the popular and widely used CHO. This was especially evident when the numerical observers were evaluated in terms of generalization performance. To evaluate generalization we used a typical situation for the practical use of a numerical observer: after optimizing the NO (which for a CHO might consist of adjusting the internal noise model) based upon a broad set of reconstructed images, we tested it on a broad (but different) set of images obtained by a different reconstruction method. In this manuscript we aim to evaluate two new regression models that achieve accuracy higher than the CHO and comparable to our earlier CSVM method, while dramatically reducing model complexity and computation time. The new models are defined in a Bayesian machine-learning framework: a channelized
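
    As background on the baseline observer the new models are compared against, a channelized Hotelling observer reduces each image to a small vector of channel responses, forms a Hotelling template from signal-present and signal-absent training images, and scores images with that template. The sketch below uses random images and Gaussian channels as placeholders rather than the SPECT data and channel set of the study.

        import numpy as np

        def gaussian_channels(size, widths):
            """Rotationally symmetric Gaussian channels, flattened to (n_channels, n_pixels)."""
            y, x = np.mgrid[:size, :size] - (size - 1) / 2.0
            r2 = x**2 + y**2
            return np.array([np.exp(-r2 / (2 * w**2)).ravel() for w in widths])

        def cho_detectability(signal_imgs, noise_imgs, channels):
            """Train a channelized Hotelling observer and return its detectability index d'."""
            vs = channels @ signal_imgs.reshape(len(signal_imgs), -1).T   # channel outputs
            vn = channels @ noise_imgs.reshape(len(noise_imgs), -1).T
            dmean = vs.mean(axis=1) - vn.mean(axis=1)
            cov = 0.5 * (np.cov(vs) + np.cov(vn))
            template = np.linalg.solve(cov, dmean)                        # Hotelling template
            ts, tn = template @ vs, template @ vn
            return (ts.mean() - tn.mean()) / np.sqrt(0.5 * (ts.var() + tn.var()))

        # Toy experiment: a faint Gaussian blob in white noise.
        rng = np.random.default_rng(0)
        size, n = 32, 200
        y, x = np.mgrid[:size, :size] - (size - 1) / 2.0
        blob = 0.8 * np.exp(-(x**2 + y**2) / (2 * 3.0**2))
        signal_imgs = rng.normal(size=(n, size, size)) + blob
        noise_imgs = rng.normal(size=(n, size, size))
        channels = gaussian_channels(size, widths=[1, 2, 4, 8, 16])
        print(f"CHO detectability d' = {cho_detectability(signal_imgs, noise_imgs, channels):.2f}")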

  13. Generalization Evaluation of Machine Learning Numerical Observers for Image Quality Assessment

    PubMed Central

    Kalayeh, Mahdi M.; Marin, Thibault; Brankov, Jovan G.

    2014-01-01

    In this paper, we present two new numerical observers (NO) based on machine learning for image quality assessment. The proposed NOs aim to predict human observer performance in a cardiac perfusion-defect detection task for single-photon emission computed tomography (SPECT) images. Human observer (HumO) studies are now considered to be the gold standard for task-based evaluation of medical images. However such studies are impractical for use in early stages of development for imaging devices and algorithms, because they require extensive involvement of trained human observers who must evaluate a large number of images. To address this problem, numerical observers (also called model observers) have been developed as a surrogate for human observers. The channelized Hotelling observer (CHO), with or without internal noise model, is currently the most widely used NO of this kind. In our previous work we argued that development of a NO model to predict human observers' performance can be viewed as a machine learning (or system identification) problem. This consideration led us to develop a channelized support vector machine (CSVM) observer, a kernel-based regression model that greatly outperformed the popular and widely used CHO. This was especially evident when the numerical observers were evaluated in terms of generalization performance. To evaluate generalization we used a typical situation for the practical use of a numerical observer: after optimizing the NO (which for a CHO might consist of adjusting the internal noise model) based upon a broad set of reconstructed images, we tested it on a broad (but different) set of images obtained by a different reconstruction method. In this manuscript we aim to evaluate two new regression models that achieve accuracy higher than the CHO and comparable to our earlier CSVM method, while dramatically reducing model complexity and computation time. The new models are defined in a Bayesian machine-learning framework: a channelized

  14. Improving a DWT-based compression algorithm for high image-quality requirement of satellite images

    NASA Astrophysics Data System (ADS)

    Thiebaut, Carole; Latry, Christophe; Camarero, Roberto; Cazanave, Grégory

    2011-10-01

    Past and current optical Earth observation systems designed by CNES use fixed-rate data compression performed at a high rate in a pushbroom mode (also called scan-based mode). This process delivers fixed-length data to the mass memory, and the data downlink is performed at a fixed rate too. Because of on-board memory limitations and high data-rate processing needs, the rate allocation procedure is performed over a small image area called a "segment". For both the PLEIADES compression algorithm and the CCSDS Image Data Compression recommendation, this rate allocation is realised by truncating, to the desired rate, a hierarchical bitstream of coded and quantized wavelet coefficients for each segment. Because the quantisation induced by truncation of the bit-plane description is the same for the whole segment, some parts of the segment have poor image quality. These artefacts generally occur in low-energy areas within a segment with a higher overall level of energy. In order to locally correct these areas, CNES has studied "exceptional processing" targeted at DWT-based compression algorithms. According to a criterion computed for each part of the segment (called a block), the wavelet coefficients can be amplified before bit-plane encoding. As with usual Region of Interest handling, these amplified coefficients are processed earlier by the encoder than in the nominal case (without exceptional processing). The image quality improvement brought by the exceptional processing has been confirmed by visual image analysis and fidelity criteria. The complexity of the proposed improvement for on-board application has also been analysed.
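
    A hedged sketch of the kind of block-wise coefficient amplification described, using PyWavelets for the transform; the energy criterion, block size, and gain factor are placeholders rather than CNES's actual rule, and the matching de-amplification at decoding is omitted.

        import numpy as np
        import pywt

        def amplify_low_energy_blocks(coeff_band, block=16, gain=4.0, energy_thresh=None):
            """Amplify wavelet coefficients in low-energy blocks so the encoder keeps them longer.

            The energy criterion and gain are illustrative; the real rule is mission-specific.
            """
            out = coeff_band.copy()
            energies = []
            for i in range(0, coeff_band.shape[0], block):
                for j in range(0, coeff_band.shape[1], block):
                    energies.append(((i, j), np.mean(coeff_band[i:i+block, j:j+block] ** 2)))
            if energy_thresh is None:
                energy_thresh = np.median([e for _, e in energies])
            for (i, j), e in energies:
                if e < energy_thresh:
                    out[i:i+block, j:j+block] *= gain     # flagged blocks get earlier bit planes
            return out

        # Example on a synthetic segment: smooth background with one bright strip.
        img = np.outer(np.linspace(0, 1, 256), np.ones(256))
        img[:, 100:140] += 5.0
        approx, (horiz, vert, diag) = pywt.dwt2(img, "bior4.4")
        horiz_boosted = amplify_low_energy_blocks(horiz)
        print(horiz_boosted.shape)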

  15. Improved tumor contrast achieved by single time point dual-reporter fluorescence imaging

    NASA Astrophysics Data System (ADS)

    Tichauer, Kenneth M.; Samkoe, Kimberley S.; Sexton, Kristian J.; Gunn, Jason R.; Hasan, Tayyaba; Pogue, Brian W.

    2012-06-01

    In this study, we demonstrate a method to quantify biomarker expression that uses an exogenous dual-reporter imaging approach to improve tumor signal detection. The uptake of two fluorophores, one nonspecific and one targeted to the epidermal growth factor receptor (EGFR), were imaged at 1 h in three types of xenograft tumors spanning a range of EGFR expression levels (n=6 in each group). Using this dual-reporter imaging methodology, tumor contrast-to-noise ratio was amplified by >6 times at 1 h postinjection and >2 times at 24 h. Furthermore, by as early as 20 min postinjection, the dual-reporter imaging signal in the tumor correlated significantly with a validated marker of receptor density (P<0.05, r=0.93). Dual-reporter imaging can improve sensitivity and specificity over conventional fluorescence imaging in applications such as fluorescence-guided surgery and directly approximates the receptor status of the tumor, a measure that could be used to inform choices of biological therapies.

  16. Crowdsourcing quality control for Dark Energy Survey images

    DOE PAGES

    Melchior, P.

    2016-07-01

    We have developed a crowdsourcing web application for image quality control employed by the Dark Energy Survey. Dubbed the "DES exposure checker", it renders science-grade images directly to a web browser and allows users to mark problematic features from a set of predefined classes. Users can also generate custom labels and thus help identify previously unknown problem classes. User reports are fed back to hardware and software experts to help mitigate and eliminate recognized issues. We report on the implementation of the application and our experience with its over 100 users, the majority of which are professional or prospective astronomers but not data management experts. We discuss aspects of user training and engagement, and demonstrate how problem reports have been pivotal to rapidly correct artifacts which would likely have been too subtle or infrequent to be recognized otherwise. We conclude with a number of important lessons learned, suggest possible improvements, and recommend this collective exploratory approach for future astronomical surveys or other extensive data sets with a sufficiently large user base. We also release open-source code of the web application and host an online demo version at http://des-exp-checker.pmelchior.net

  17. Crowdsourcing quality control for Dark Energy Survey images

    NASA Astrophysics Data System (ADS)

    Melchior, P.; Sheldon, E.; Drlica-Wagner, A.; Rykoff, E. S.; Abbott, T. M. C.; Abdalla, F. B.; Allam, S.; Benoit-Lévy, A.; Brooks, D.; Buckley-Geer, E.; Carnero Rosell, A.; Carrasco Kind, M.; Carretero, J.; Crocce, M.; D'Andrea, C. B.; da Costa, L. N.; Desai, S.; Doel, P.; Evrard, A. E.; Finley, D. A.; Flaugher, B.; Frieman, J.; Gaztanaga, E.; Gerdes, D. W.; Gruen, D.; Gruendl, R. A.; Honscheid, K.; James, D. J.; Jarvis, M.; Kuehn, K.; Li, T. S.; Maia, M. A. G.; March, M.; Marshall, J. L.; Nord, B.; Ogando, R.; Plazas, A. A.; Romer, A. K.; Sanchez, E.; Scarpine, V.; Sevilla-Noarbe, I.; Smith, R. C.; Soares-Santos, M.; Suchyta, E.; Swanson, M. E. C.; Tarle, G.; Vikram, V.; Walker, A. R.; Wester, W.; Zhang, Y.

    2016-07-01

    We have developed a crowdsourcing web application for image quality control employed by the Dark Energy Survey. Dubbed the "DES exposure checker", it renders science-grade images directly to a web browser and allows users to mark problematic features from a set of predefined classes. Users can also generate custom labels and thus help identify previously unknown problem classes. User reports are fed back to hardware and software experts to help mitigate and eliminate recognized issues. We report on the implementation of the application and our experience with its over 100 users, the majority of which are professional or prospective astronomers but not data management experts. We discuss aspects of user training and engagement, and demonstrate how problem reports have been pivotal to rapidly correct artifacts which would likely have been too subtle or infrequent to be recognized otherwise. We conclude with a number of important lessons learned, suggest possible improvements, and recommend this collective exploratory approach for future astronomical surveys or other extensive data sets with a sufficiently large user base. We also release open-source code of the web application and host an online demo version at http://des-exp-checker.pmelchior.net.

  18. Quality Teaching in Addressing Student Achievement: A Comparative Study between National Board Certified Teachers and Other Teachers on the Kentucky Core Content Test Results

    ERIC Educational Resources Information Center

    Buecker, Harrie Lynne

    2010-01-01

    This dissertation focused on the link between quality teaching and its potential impact on student achievement. National Board Certification is used to represent quality teaching and student achievement is measured by the Kentucky Core Content Test. Data were gathered on the reading and mathematics scores of students of National Board Teachers who…

  19. Imaging-based logics for ornamental stone quality chart definition

    NASA Astrophysics Data System (ADS)

    Bonifazi, Giuseppe; Gargiulo, Aldo; Serranti, Silvia; Raspi, Costantino

    2007-02-01

    Ornamental stone products are commercially classified on the market according to several factors related both to intrinsic lithologic characteristics and to their visible pictorial attributes. Sometimes the latter aspects prevail in the definition and assessment of quality criteria. Pictorial attributes are in any case also influenced by the working actions performed and the tools used to realize the final manufactured stone product. Stone surface finishing is a critical task because it can contribute to enhancing certain aesthetic features of the stone itself. The study aimed to develop an innovative set of methodologies and techniques able to quantify the aesthetic quality level of stone products, taking into account both the physical and the aesthetic characteristics of the stones. In particular, the degree of polishing of the stone surfaces and the presence of defects have been evaluated by applying digital image processing strategies. Morphological and color parameters have been extracted by developing specific software architectures. Results showed that the proposed approaches make it possible to quantify the degree of polishing and to identify surface defects related to the intrinsic characteristics of the stone and/or the working actions performed.

  20. Development of Software to Model AXAF-I Image Quality

    NASA Technical Reports Server (NTRS)

    Ahmad, Anees; Hawkins, Lamar

    1996-01-01

    This draft final report describes the work performed under delivery order number 145 from May 1995 through August 1996. The scope of work included a number of software development tasks for the performance modeling of AXAF-I. A number of new capabilities and functions have been added to the GT software, which is the command-mode version of the GRAZTRACE software originally developed by MSFC. A structural data interface has been developed for the EAL (formerly SPAR) finite element analysis (FEA) program, which is being used by the MSFC Structural Analysis group for the analysis of AXAF-I. This interface utility can read the structural deformation file from EAL and other finite element analysis programs such as NASTRAN and COSMOS/M, and convert the data to a suitable format that can be used for deformation ray-tracing to predict the image quality for a distorted mirror. There is a provision in this utility to expand the data from finite element models assuming 180-degree symmetry. This utility has been used to predict image characteristics for the AXAF-I HRMA when subjected to gravity effects in the horizontal x-ray ground test configuration. The development of the metrology data processing interface software has also been completed. It can read the HDOS FITS-format surface map files, manipulate and filter the metrology data, and produce a deformation file which can be used by GT for ray tracing of the mirror surface figure errors. This utility has been used to determine the optimum alignment (axial spacing and clocking) for the four pairs of AXAF-I mirrors. Based on this optimized alignment, the geometric images and effective focal lengths for the as-built mirrors were predicted to cross-check the results obtained by Kodak.

  1. Quality Mathematics Instructional Practices Contributing to Student Achievements in Five High-Achieving Asian Education Systems: An Analysis Using TIMSS 2011 Data

    ERIC Educational Resources Information Center

    Cheng, Qiang

    2014-01-01

    Although teaching quality is seen as crucial in affecting students' performance, what types of instructional practices constitute quality teaching remains a question. With the theoretical assumptions of conceptual and procedural mathematics teaching as a guide, this study examined the types of quality mathematics instructional practices that…

  2. Novel Card Games for Learning Radiographic Image Quality and Urologic Imaging in Veterinary Medicine.

    PubMed

    Ober, Christopher P

    2016-01-01

    Second-year veterinary students are often challenged by concepts in veterinary radiology, including the fundamentals of image quality and generation of differential lists. Four card games were developed to provide veterinary students with a supplemental means of learning about radiographic image quality and differential diagnoses in urogenital imaging. Students played these games and completed assessments of their subject knowledge before and after playing. The hypothesis was that playing each game would improve students' understanding of the topic area. For each game, students who played the game performed better on the post-test than students who did not play that game (all p<.01). For three of the four games, students who played each respective game demonstrated significant improvement in scores between the pre-test and the post-test (p<.002). The majority of students expressed that the games were both helpful and enjoyable. Educationally focused games can help students learn classroom and laboratory material. However, game design is important, as the game using the most passive learning process also demonstrated the weakest results. In addition, based on participants' comments, the games were very useful in improving student engagement in the learning process. Thus, use of games in the classroom and laboratory setting seems to benefit the learning process. PMID:26966984

  3. Novel Card Games for Learning Radiographic Image Quality and Urologic Imaging in Veterinary Medicine.

    PubMed

    Ober, Christopher P

    2016-01-01

    Second-year veterinary students are often challenged by concepts in veterinary radiology, including the fundamentals of image quality and generation of differential lists. Four card games were developed to provide veterinary students with a supplemental means of learning about radiographic image quality and differential diagnoses in urogenital imaging. Students played these games and completed assessments of their subject knowledge before and after playing. The hypothesis was that playing each game would improve students' understanding of the topic area. For each game, students who played the game performed better on the post-test than students who did not play that game (all p<.01). For three of the four games, students who played each respective game demonstrated significant improvement in scores between the pre-test and the post-test (p<.002). The majority of students expressed that the games were both helpful and enjoyable. Educationally focused games can help students learn classroom and laboratory material. However, game design is important, as the game using the most passive learning process also demonstrated the weakest results. In addition, based on participants' comments, the games were very useful in improving student engagement in the learning process. Thus, use of games in the classroom and laboratory setting seems to benefit the learning process.

  4. Quality Imaging - Comparison of CR Mammography with Screen-Film Mammography

    SciTech Connect

    Gaona, E.; Azorin Nieto, J.; Iran Diaz Gongora, J. A.; Arreola, M.; Casian Castellanos, G.; Perdigon Castaneda, G. M.; Franco Enriquez, J. G.

    2006-09-08

    The aim of this work is to compare the image quality of CR mammography images printed to film by a laser printer with that of screen-film mammography. Giotto and Elscintec dedicated mammography units with fully automatic exposure and a nominal large focal spot size of 0.3 mm were used for the image acquisition of phantoms in screen-film mammography. Four CR mammography units from two different manufacturers and three dedicated x-ray mammography units with fully automatic exposure and a nominal large focal spot size of 0.3 mm were used for the image acquisition of phantoms in CR mammography. The image quality tests included an assessment of system resolution, phantom image scoring, artifacts, mean optical density and density difference (contrast). In this study, screen-film mammography with a quality control program offered a significantly greater level of image quality than CR mammography images printed on film.

  5. Working and learning together: good quality care depends on it, but how can we achieve it?

    PubMed Central

    McPherson, K; Headrick, L; Moss, F

    2001-01-01

    Educating healthcare professionals is a key issue in the provision of quality healthcare services, and interprofessional education (IPE) has been proposed as a means of meeting this challenge. Evidence that collaborative working can be essential for good clinical outcomes underpins the real need to find out how best to develop a work force that can work together effectively. We identify barriers to mounting successful IPE programmes, report on recent educational initiatives that have aimed to develop collaborative working, and discuss the lessons learned. To develop education strategies that really prepare learners to collaborate we must: agree on the goals of IPE, identify effective methods of delivery, establish what should be learned when, attend to the needs of educators and clinicians regarding their own competence in interprofessional work, and advance our knowledge by robust evaluation using both qualitative and quantitative approaches. We must ensure that our education strategies allow students to recognise, value, and engage with the difference arising from the practice of a range of health professionals. This means tackling some long held assumptions about education and identifying where it fosters norms and attitudes that interfere with collaboration or fails to engender interprofessional knowledge and skill. We need to work together to establish education strategies that enhance collaborative working along with profession specific skills to produce a highly skilled, proactive, and respectful work force focused on providing safe and effective health for patients and communities. Key Words: interprofessional education; multiprofessional learning; teamwork PMID:11700379

  6. Quantitative and qualitative image quality analysis of super resolution images from a low cost scanning laser ophthalmoscope

    NASA Astrophysics Data System (ADS)

    Murillo, Sergio; Echegaray, Sebastian; Zamora, Gilberto; Soliz, Peter; Bauman, Wendall

    2011-03-01

    The lurking epidemic of eye diseases caused by diabetes and aging will put more than 130 million Americans at risk of blindness by 2020. Screening has been touted as a means to prevent blindness by identifying those individuals at risk. However, the cost of most of today's commercial retinal imaging devices makes their use economically impractical for mass screening. Thus, low cost devices are needed. With these devices, low cost often comes at the expense of image quality, with high levels of noise and distortion hindering the clinical evaluation of those retinas. A software-based super resolution (SR) reconstruction methodology that produces images with improved resolution and quality from multiple low resolution (LR) observations is introduced. The LR images are taken with a low-cost Scanning Laser Ophthalmoscope (SLO). The non-redundant information of these LR images is combined to produce a single image in an implementation that also removes noise and imaging distortions while preserving fine blood vessels and small lesions. The feasibility of using the resulting SR images for screening of eye diseases was tested using quantitative and qualitative assessments. Qualitatively, expert image readers evaluated their ability to detect clinically significant features on the SR images and compared their findings with those obtained from matching images of the same eyes taken with commercially available high-end cameras. Quantitatively, measures of image quality were calculated from SR images and compared to subject-matched images from a commercial fundus imager. Our results show that the SR images indeed have sufficient quality and spatial detail for screening purposes.
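
    The record does not specify the reconstruction algorithm itself. As a rough illustration of how the non-redundant information in several low-resolution frames can be combined, the sketch below implements a basic shift-and-add fusion; the sub-pixel shifts between frames are assumed to be known (in practice they would be estimated by registration), and pixels left unfilled on the fine grid would normally be interpolated.

    ```python
    import numpy as np

    def shift_and_add_sr(lr_frames, shifts, scale=2):
        """Basic shift-and-add fusion of registered low-resolution (LR) frames.

        lr_frames : sequence of 2-D arrays (the LR observations)
        shifts    : sequence of (dy, dx) sub-pixel shifts per frame, in LR pixels
                    (assumed known here; normally estimated by registration)
        scale     : integer upsampling factor of the high-resolution grid
        """
        h, w = lr_frames[0].shape
        acc = np.zeros((h * scale, w * scale))
        cnt = np.zeros_like(acc)
        rows, cols = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
        for frame, (dy, dx) in zip(lr_frames, shifts):
            # nearest high-resolution grid position of every LR sample
            hr_r = np.clip(np.rint((rows + dy) * scale).astype(int), 0, h * scale - 1)
            hr_c = np.clip(np.rint((cols + dx) * scale).astype(int), 0, w * scale - 1)
            np.add.at(acc, (hr_r, hr_c), frame)
            np.add.at(cnt, (hr_r, hr_c), 1.0)
        # average overlapping samples; unfilled pixels stay zero in this sketch
        return np.where(cnt > 0, acc / np.maximum(cnt, 1), 0.0)
    ```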

  7. A high-stability scanning tunneling microscope achieved by an isolated tiny scanner with low voltage imaging capability

    SciTech Connect

    Wang, Qi; Wang, Junting; Lu, Qingyou; Hou, Yubin

    2013-11-15

    We present a novel homebuilt scanning tunneling microscope (STM) with high quality atomic resolution. It is equipped with a small but powerful GeckoDrive piezoelectric motor which drives a miniature and detachable scanning part to implement coarse approach. The scanning part is a tiny piezoelectric tube scanner (industry type: PZT-8, whose d31 coefficient is one of the lowest) housed in a slightly bigger polished sapphire tube, which is riding on and spring clamped against the knife edges of a tungsten slot. The STM so constructed shows low back-lashing and drifting and high repeatability and immunity to external vibrations. These are confirmed by its low imaging voltages, low distortions in the spiral scanned images, and high atomic resolution quality even when the STM is placed on the ground of the fifth floor without any external or internal vibration isolation devices.

  8. A high-stability scanning tunneling microscope achieved by an isolated tiny scanner with low voltage imaging capability

    NASA Astrophysics Data System (ADS)

    Wang, Qi; Hou, Yubin; Wang, Junting; Lu, Qingyou

    2013-11-01

    We present a novel homebuilt scanning tunneling microscope (STM) with high quality atomic resolution. It is equipped with a small but powerful GeckoDrive piezoelectric motor which drives a miniature and detachable scanning part to implement coarse approach. The scanning part is a tiny piezoelectric tube scanner (industry type: PZT-8, whose d31 coefficient is one of the lowest) housed in a slightly bigger polished sapphire tube, which is riding on and spring clamped against the knife edges of a tungsten slot. The STM so constructed shows low back-lashing and drifting and high repeatability and immunity to external vibrations. These are confirmed by its low imaging voltages, low distortions in the spiral scanned images, and high atomic resolution quality even when the STM is placed on the ground of the fifth floor without any external or internal vibration isolation devices.

  9. SU-C-304-04: A Compact Modular Computational Platform for Automated On-Board Imager Quality Assurance

    SciTech Connect

    Dolly, S; Cai, B; Chen, H; Anastasio, M; Sun, B; Yaddanapudi, S; Noel, C; Goddu, S; Mutic, S; Li, H; Tan, J

    2015-06-15

    Purpose: Traditionally, the assessment of X-ray tube output and detector positioning accuracy of on-board imagers (OBI) has been performed manually and subjectively with rulers and dosimeters, and typically takes hours to complete. In this study, we have designed a compact modular computational platform to automatically analyze OBI images acquired with in-house designed phantoms as an efficient and robust surrogate. Methods: The platform was developed as an integrated and automated image analysis-based platform using MATLAB for easy modification and maintenance. Given a set of images acquired with the in-house designed phantoms, the X-ray output accuracy was examined via cross-validation of the uniqueness and integration minimization of important image quality assessment metrics, while machine geometric and positioning accuracy were validated by utilizing pattern-recognition based image analysis techniques. Results: The platform input was a set of images of an in-house designed phantom. The total processing time is about 1–2 minutes. Based on the data acquired from three Varian Truebeam machines over the course of 3 months, the designed test validation strategy achieved higher accuracy than traditional methods. The kVp output accuracy can be verified within +/−2 kVp, the exposure accuracy within 2%, and exposure linearity with a coefficient of variation (CV) of 0.1. Sub-millimeter position accuracy was achieved for the lateral and longitudinal positioning tests, while vertical positioning accuracy within +/−2 mm was achieved. Conclusion: This new platform delivers to the radiotherapy field an automated, efficient, and stable image analysis-based procedure, for the first time, acting as a surrogate for traditional tests for LINAC OBI systems. It has great potential to facilitate OBI quality assurance (QA) with the assistance of advanced image processing techniques. In addition, it provides flexible integration of additional tests for expediting other OBI
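
    As a small illustration of one of the checks reported above, the exposure-linearity figure (a coefficient of variation of 0.1) can be computed from repeated exposure readings as follows; the readings and mAs stations below are hypothetical, not data from the study.

    ```python
    import numpy as np

    def exposure_linearity_cv(exposures, mas_settings):
        """Coefficient of variation of the exposure-per-mAs ratios.

        For a linear tube output the ratio exposure/mAs is constant, so its
        coefficient of variation (std/mean) quantifies deviation from linearity.
        """
        ratios = np.asarray(exposures, float) / np.asarray(mas_settings, float)
        return ratios.std(ddof=1) / ratios.mean()

    # hypothetical readings (mR) at four mAs stations
    print(exposure_linearity_cv([2.1, 4.0, 8.3, 16.1], [2, 4, 8, 16]))
    ```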

  10. Dosimetric and image quality assessment of different acquisition protocols of a novel 64-slice CT scanner

    NASA Astrophysics Data System (ADS)

    Vite, Cristina; Mangini, Monica; Strocchi, Sabina; Novario, Raffaele; Tanzi, Fabio; Carrafiello, Gianpaolo; Conte, Leopoldo; Fugazzola, Carlo

    2006-03-01

    Dose and image quality assessment in computed tomography (CT) are strongly affected by the vast variety of CT scanners (axial CT, spiral CT, low-multislice CT (2-16), high-multislice CT (32-64)) and imaging protocols in use. Very little information is currently available on 64-slice CT scanners. The aim of this work is to assess image quality in relation to patient dose indices and to investigate the achievable dose reduction for a commercially available 64-slice CT scanner. CT dose indices (weighted computed tomography dose index, CTDIw, and Dose Length Product, DLP) were measured with a standard CT phantom for the main protocols in use (head, chest, abdomen and pelvis) and compared with the values displayed by the scanner itself. The differences were always below 7%. All the indices were below the Diagnostic Reference Levels defined by the European Council Directive 97/42. Effective doses were measured for each protocol with thermoluminescent dosimeters inserted in an anthropomorphic Alderson Rando phantom and compared with the same values computed by the ImPACT CT Patient Dosimetry Calculator software, corrected by a factor taking into account the number of slices (from 16 to 64). The differences were always below 25%. The effective doses range from 1.5 mSv (head) to 21.8 mSv (abdomen). The dose reduction system of the scanner was assessed by comparing the effective dose measured for a standard phantom-man (a cylindrical phantom, 32 cm in diameter) to the mean dose evaluated on 46 patients. The standard phantom was considered as the no-dose-reduction reference. The dose reduction factor ranged from 16% to 78% (mean of 46%) for all protocols, from 29% to 78% (mean of 55%) for the chest protocol, and from 16% to 76% (mean of 42%) for the abdomen protocol. The possibility of a further dose reduction was investigated by measuring image quality (spatial resolution, contrast and noise) as a function of CTDIw. This curve shows a quite flat trend decreasing the dose approximately to 90% and a
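
    For readers unfamiliar with the dose indices above, the usual relations are DLP = CTDIvol (or CTDIw) x scan length and E ≈ k x DLP, where k is a region-specific conversion coefficient. The sketch below uses an illustrative chest value of about 0.014 mSv/(mGy·cm); these are generic textbook numbers, not values taken from this study.

    ```python
    def effective_dose_mSv(ctdi_vol_mGy, scan_length_cm, k_mSv_per_mGy_cm=0.014):
        """Estimate effective dose from the dose-length product.

        DLP = CTDIvol * scan length;  E ~= k * DLP, with k a region-specific
        conversion coefficient (0.014 mSv/(mGy*cm) is an illustrative chest value).
        """
        dlp_mGy_cm = ctdi_vol_mGy * scan_length_cm
        return k_mSv_per_mGy_cm * dlp_mGy_cm

    # illustrative chest scan: CTDIvol 12 mGy over a 30 cm range -> about 5 mSv
    print(effective_dose_mSv(12.0, 30.0))
    ```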

  11. Improved ultrasonic TV images achieved by use of Lamb-wave orientation technique

    NASA Technical Reports Server (NTRS)

    Berger, H.

    1967-01-01

    Lamb-wave sample orientation technique minimizes the interference from standing waves in continuous wave ultrasonic television imaging techniques used with thin metallic samples. The sample under investigation is oriented such that the wave incident upon it is not normal, but slightly angled.

  12. Eigenspectra optoacoustic tomography achieves quantitative blood oxygenation imaging deep in tissues.

    PubMed

    Tzoumas, Stratis; Nunes, Antonio; Olefir, Ivan; Stangl, Stefan; Symvoulidis, Panagiotis; Glasl, Sarah; Bayer, Christine; Multhoff, Gabriele; Ntziachristos, Vasilis

    2016-01-01

    Light propagating in tissue attains a spectrum that varies with location due to wavelength-dependent fluence attenuation, an effect that causes spectral corruption. Spectral corruption has limited the quantification accuracy of optical and optoacoustic spectroscopic methods, and impeded the goal of imaging blood oxygen saturation (sO2) deep in tissues; a critical goal for the assessment of oxygenation in physiological processes and disease. Here we describe light fluence in the spectral domain and introduce eigenspectra multispectral optoacoustic tomography (eMSOT) to account for wavelength-dependent light attenuation, and estimate blood sO2 within deep tissue. We validate eMSOT in simulations, phantoms and animal measurements and spatially resolve sO2 in muscle and tumours, validating our measurements with histology data. eMSOT shows substantial sO2 accuracy enhancement over previous optoacoustic methods, potentially serving as a valuable tool for imaging tissue pathophysiology. PMID:27358000
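
    The eMSOT inversion itself is not described in this record. For orientation, the conventional baseline it improves upon is plain linear spectral unmixing, which estimates HbO2 and Hb concentrations by least squares against their absorption spectra and ignores the wavelength-dependent fluence. A minimal sketch with placeholder spectra follows.

    ```python
    import numpy as np

    def linear_sO2(p_spectra, eps_hbo2, eps_hb):
        """Conventional linear spectral unmixing of an optoacoustic spectrum.

        p_spectra : (n_wavelengths,) measured optoacoustic amplitudes at one pixel
        eps_hbo2, eps_hb : (n_wavelengths,) absorption spectra of HbO2 and Hb
        Returns the estimated oxygen saturation sO2 in [0, 1].

        Note: this ignores the wavelength-dependent fluence that eMSOT models,
        so it is only the baseline method.
        """
        A = np.column_stack([eps_hbo2, eps_hb])            # unmixing matrix
        c, *_ = np.linalg.lstsq(A, p_spectra, rcond=None)  # [c_HbO2, c_Hb]
        c = np.clip(c, 0.0, None)                          # concentrations >= 0
        return c[0] / max(c.sum(), 1e-12)

    # placeholder spectra at 5 wavelengths (illustrative numbers only)
    eps_hbo2 = np.array([0.9, 0.6, 0.4, 0.8, 1.1])
    eps_hb   = np.array([1.2, 0.9, 0.5, 0.6, 0.7])
    measured = 0.7 * eps_hbo2 + 0.3 * eps_hb
    print(linear_sO2(measured, eps_hbo2, eps_hb))  # ~0.7
    ```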

  13. Eigenspectra optoacoustic tomography achieves quantitative blood oxygenation imaging deep in tissues

    NASA Astrophysics Data System (ADS)

    Tzoumas, Stratis; Nunes, Antonio; Olefir, Ivan; Stangl, Stefan; Symvoulidis, Panagiotis; Glasl, Sarah; Bayer, Christine; Multhoff, Gabriele; Ntziachristos, Vasilis

    2016-06-01

    Light propagating in tissue attains a spectrum that varies with location due to wavelength-dependent fluence attenuation, an effect that causes spectral corruption. Spectral corruption has limited the quantification accuracy of optical and optoacoustic spectroscopic methods, and impeded the goal of imaging blood oxygen saturation (sO2) deep in tissues; a critical goal for the assessment of oxygenation in physiological processes and disease. Here we describe light fluence in the spectral domain and introduce eigenspectra multispectral optoacoustic tomography (eMSOT) to account for wavelength-dependent light attenuation, and estimate blood sO2 within deep tissue. We validate eMSOT in simulations, phantoms and animal measurements and spatially resolve sO2 in muscle and tumours, validating our measurements with histology data. eMSOT shows substantial sO2 accuracy enhancement over previous optoacoustic methods, potentially serving as a valuable tool for imaging tissue pathophysiology.

  14. Intravascular imaging in Kounis syndrome: role of IVUS and OCT in achieving an etiopathogenic diagnosis

    PubMed Central

    Domínguez, Fernando; Santos, Susana Mingo; Escudier-Villa, Juan Manuel; Jiménez-Sánchez, Diego; Artaza, Josebe Goirigolzarri; Alonso-Pulpón, Luis; Goicolea, Javier

    2015-01-01

    We report the case of a 60-year-old male patient presenting with an anaphylactic response to anchovies associated with an acute coronary syndrome. His history was remarkable for coronary artery disease treated with a drug-eluting stent in the right coronary artery six years before, and for a stent fracture documented by coronary angiography four years prior to the event. Coronary angiography on admission revealed a very late stent thrombosis (VLST) in the right coronary artery. Intracoronary imaging techniques (IVUS and OCT) were used and were key to ruling out the main causes of VLST. We describe the characteristics of the intracoronary images, along with the advantages and disadvantages of these techniques. The findings described in this case could explain a new pathophysiological mechanism of stent thrombosis occurring in stent fractures. PMID:25774348

  15. Eigenspectra optoacoustic tomography achieves quantitative blood oxygenation imaging deep in tissues

    PubMed Central

    Tzoumas, Stratis; Nunes, Antonio; Olefir, Ivan; Stangl, Stefan; Symvoulidis, Panagiotis; Glasl, Sarah; Bayer, Christine; Multhoff, Gabriele; Ntziachristos, Vasilis

    2016-01-01

    Light propagating in tissue attains a spectrum that varies with location due to wavelength-dependent fluence attenuation, an effect that causes spectral corruption. Spectral corruption has limited the quantification accuracy of optical and optoacoustic spectroscopic methods, and impeded the goal of imaging blood oxygen saturation (sO2) deep in tissues; a critical goal for the assessment of oxygenation in physiological processes and disease. Here we describe light fluence in the spectral domain and introduce eigenspectra multispectral optoacoustic tomography (eMSOT) to account for wavelength-dependent light attenuation, and estimate blood sO2 within deep tissue. We validate eMSOT in simulations, phantoms and animal measurements and spatially resolve sO2 in muscle and tumours, validating our measurements with histology data. eMSOT shows substantial sO2 accuracy enhancement over previous optoacoustic methods, potentially serving as a valuable tool for imaging tissue pathophysiology. PMID:27358000

  16. Eigenspectra optoacoustic tomography achieves quantitative blood oxygenation imaging deep in tissues.

    PubMed

    Tzoumas, Stratis; Nunes, Antonio; Olefir, Ivan; Stangl, Stefan; Symvoulidis, Panagiotis; Glasl, Sarah; Bayer, Christine; Multhoff, Gabriele; Ntziachristos, Vasilis

    2016-06-30

    Light propagating in tissue attains a spectrum that varies with location due to wavelength-dependent fluence attenuation, an effect that causes spectral corruption. Spectral corruption has limited the quantification accuracy of optical and optoacoustic spectroscopic methods, and impeded the goal of imaging blood oxygen saturation (sO2) deep in tissues; a critical goal for the assessment of oxygenation in physiological processes and disease. Here we describe light fluence in the spectral domain and introduce eigenspectra multispectral optoacoustic tomography (eMSOT) to account for wavelength-dependent light attenuation, and estimate blood sO2 within deep tissue. We validate eMSOT in simulations, phantoms and animal measurements and spatially resolve sO2 in muscle and tumours, validating our measurements with histology data. eMSOT shows substantial sO2 accuracy enhancement over previous optoacoustic methods, potentially serving as a valuable tool for imaging tissue pathophysiology.

  17. Fast T2-weighted MR imaging: impact of variation in pulse sequence parameters on image quality and artifacts.

    PubMed

    Li, Tao; Mirowitz, Scott A

    2003-09-01

    The purpose of this study was to quantitatively evaluate in a phantom model the practical impact of alteration of key imaging parameters on image quality and artifacts for the most commonly used fast T2-weighted MR sequences. These include fast spin-echo (FSE), single shot fast spin-echo (SSFSE), and spin-echo echo-planar imaging (EPI) pulse sequences. We developed a composite phantom with different T1 and T2 values, which was evaluated while stationary as well as during periodic motion. Experiments involved controlled variations in key parameters including effective TE, TR, echo spacing (ESP), receive bandwidth (BW), echo train length (ETL), and shot number (SN). Quantitative analysis consisted of signal-to-noise ratio (SNR), image nonuniformity, full-width-at-half-maximum (i.e., blurring or geometric distortion) and ghosting ratio. Among the fast T2-weighted sequences, EPI was most sensitive to alterations in imaging parameters. Among imaging parameters that we tested, effective TE, ETL, and shot number most prominently affected image quality and artifacts. Short T2 objects were more sensitive to alterations in imaging parameters in terms of image quality and artifacts. Optimal clinical application of these fast T2-weighted imaging pulse sequences requires careful attention to selection of imaging parameters.
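
    The quantitative measures named above can be computed from region-of-interest (ROI) statistics. The study's exact ROI placement and formulas are not given in this record, so the sketch below only shows one common convention.

    ```python
    import numpy as np

    def roi_snr(image, signal_roi, noise_roi):
        """SNR as mean signal over the standard deviation of a background ROI.

        signal_roi / noise_roi are boolean masks with the same shape as image.
        """
        return image[signal_roi].mean() / image[noise_roi].std()

    def ghosting_ratio(image, ghost_roi, signal_roi, background_roi):
        """Ghost intensity relative to the true signal, background-corrected.

        One common definition: (mean(ghost) - mean(background)) / mean(signal).
        The definition used in the study may differ.
        """
        return (image[ghost_roi].mean() - image[background_roi].mean()) / image[signal_roi].mean()
    ```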

  18. A luciferin analogue generating near-infrared bioluminescence achieves highly sensitive deep-tissue imaging.

    PubMed

    Kuchimaru, Takahiro; Iwano, Satoshi; Kiyama, Masahiro; Mitsumata, Shun; Kadonosono, Tetsuya; Niwa, Haruki; Maki, Shojiro; Kizaka-Kondoh, Shinae

    2016-01-01

    In preclinical cancer research, bioluminescence imaging with firefly luciferase and D-luciferin has become a standard to monitor biological processes both in vitro and in vivo. However, the emission maximum (λmax) of bioluminescence produced by D-luciferin is 562 nm where light is not highly penetrable in biological tissues. This emphasizes a need for developing a red-shifted bioluminescence imaging system to improve detection sensitivity of targets in deep tissue. Here we characterize the bioluminescent properties of the newly synthesized luciferin analogue, AkaLumine-HCl. The bioluminescence produced by AkaLumine-HCl in reactions with native firefly luciferase is in the near-infrared wavelength ranges (λmax=677 nm), and yields significantly increased target-detection sensitivity from deep tissues with maximal signals attained at very low concentrations, as compared with D-luciferin and emerging synthetic luciferin CycLuc1. These characteristics offer a more sensitive and accurate method for non-invasive bioluminescence imaging with native firefly luciferase in various animal models. PMID:27297211

  19. A luciferin analogue generating near-infrared bioluminescence achieves highly sensitive deep-tissue imaging.

    PubMed

    Kuchimaru, Takahiro; Iwano, Satoshi; Kiyama, Masahiro; Mitsumata, Shun; Kadonosono, Tetsuya; Niwa, Haruki; Maki, Shojiro; Kizaka-Kondoh, Shinae

    2016-06-14

    In preclinical cancer research, bioluminescence imaging with firefly luciferase and D-luciferin has become a standard to monitor biological processes both in vitro and in vivo. However, the emission maximum (λmax) of bioluminescence produced by D-luciferin is 562 nm where light is not highly penetrable in biological tissues. This emphasizes a need for developing a red-shifted bioluminescence imaging system to improve detection sensitivity of targets in deep tissue. Here we characterize the bioluminescent properties of the newly synthesized luciferin analogue, AkaLumine-HCl. The bioluminescence produced by AkaLumine-HCl in reactions with native firefly luciferase is in the near-infrared wavelength ranges (λmax=677 nm), and yields significantly increased target-detection sensitivity from deep tissues with maximal signals attained at very low concentrations, as compared with D-luciferin and emerging synthetic luciferin CycLuc1. These characteristics offer a more sensitive and accurate method for non-invasive bioluminescence imaging with native firefly luciferase in various animal models.

  20. A luciferin analogue generating near-infrared bioluminescence achieves highly sensitive deep-tissue imaging

    PubMed Central

    Kuchimaru, Takahiro; Iwano, Satoshi; Kiyama, Masahiro; Mitsumata, Shun; Kadonosono, Tetsuya; Niwa, Haruki; Maki, Shojiro; Kizaka-Kondoh, Shinae

    2016-01-01

    In preclinical cancer research, bioluminescence imaging with firefly luciferase and D-luciferin has become a standard to monitor biological processes both in vitro and in vivo. However, the emission maximum (λmax) of bioluminescence produced by D-luciferin is 562 nm where light is not highly penetrable in biological tissues. This emphasizes a need for developing a red-shifted bioluminescence imaging system to improve detection sensitivity of targets in deep tissue. Here we characterize the bioluminescent properties of the newly synthesized luciferin analogue, AkaLumine-HCl. The bioluminescence produced by AkaLumine-HCl in reactions with native firefly luciferase is in the near-infrared wavelength ranges (λmax=677 nm), and yields significantly increased target-detection sensitivity from deep tissues with maximal signals attained at very low concentrations, as compared with D-luciferin and emerging synthetic luciferin CycLuc1. These characteristics offer a more sensitive and accurate method for non-invasive bioluminescence imaging with native firefly luciferase in various animal models. PMID:27297211

  1. Agreement between objective and subjective assessment of image quality in ultrasound abdominal aortic aneurism screening

    PubMed Central

    Wolstenhulme, S; Keeble, C; Moore, S; Evans, J A

    2015-01-01

    Objective: To investigate agreement between objective and subjective assessment of image quality of ultrasound scanners used for abdominal aortic aneurysm (AAA) screening. Methods: Nine ultrasound scanners were used to acquire longitudinal and transverse images of the abdominal aorta. 100 images were acquired per scanner from which 5 longitudinal and 5 transverse images were randomly selected. 33 practitioners scored 90 images blinded to the scanner type and subject characteristics and were required to state whether or not the images were of adequate diagnostic quality. Odds ratios were used to rank the subjective image quality of the scanners. For objective testing, three standard test objects were used to assess penetration and resolution and used to rank the scanners. Results: The subjective diagnostic image quality was ten times greater for the highest ranked scanner than for the lowest ranked scanner. It was greater at depths of <5.0 cm (odds ratio, 6.69; 95% confidence interval, 3.56, 12.57) than at depths of 15.1–20.0 cm. There was a larger range of odds ratios for transverse images than for longitudinal images. No relationship was seen between subjective scanner rankings and test object scores. Conclusion: Large variation was seen in the image quality when evaluated both subjectively and objectively. Objective scores did not predict subjective scanner rankings. Further work is needed to investigate the utility of both subjective and objective image quality measurements. Advances in knowledge: Ratings of clinical image quality and image quality measured using test objects did not agree, even in the limited scenario of AAA screening. PMID:25494526

  2. Filling factor characteristics of masking phase-only hologram on the quality of reconstructed images

    NASA Astrophysics Data System (ADS)

    Deng, Yuanbo; Chu, Daping

    2016-03-01

    The present study evaluates the effect of the filling factor of a mask applied to a phase-only hologram on the corresponding reconstructed image. A square aperture with different filling factors is applied to the phase-only hologram of the target image, and the average cross-section intensity profile of the reconstructed image is obtained and deconvolved with that of the target image to calculate the point spread function (PSF) of the image. Meanwhile, the Lena image is used as the target image, and the metrics RMSE and SSIM are used to assess the quality of the reconstructed image. The results show that the PSF of the image agrees with the PSF of the Fourier transform of the mask, and as the filling factor of the mask decreases, the width of the PSF increases and the quality of the reconstructed image drops. These characteristics could be used in practical situations where the phase-only hologram is confined or needs to be sliced or tiled.
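
    The RMSE and SSIM scores used to grade the reconstructed image can be computed as below with scikit-image; the hologram generation and reconstruction steps themselves are not reproduced here.

    ```python
    import numpy as np
    from skimage.metrics import structural_similarity

    def rmse(target, reconstructed):
        """Root-mean-square error between target and reconstructed images."""
        diff = np.asarray(target, float) - np.asarray(reconstructed, float)
        return np.sqrt(np.mean(diff ** 2))

    def quality_scores(target, reconstructed, data_range=255.0):
        """Return (RMSE, SSIM) for a reconstructed image against the target."""
        ssim = structural_similarity(target, reconstructed, data_range=data_range)
        return rmse(target, reconstructed), ssim
    ```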

  3. High-quality remote interactive imaging in the operating theatre

    NASA Astrophysics Data System (ADS)

    Grimstead, Ian J.; Avis, Nick J.; Evans, Peter L.; Bocca, Alan

    2009-02-01

    We present a high-quality display system that enables the remote access within an operating theatre of high-end medical imaging and surgical planning software. Currently, surgeons often use printouts from such software for reference during surgery; our system enables surgeons to access and review patient data in a sterile environment, viewing real-time renderings of MRI & CT data as required. Once calibrated, our system displays shades of grey in Operating Room lighting conditions (removing any gamma correction artefacts). Our system does not require any expensive display hardware, is unobtrusive to the remote workstation and works with any application without requiring additional software licenses. To extend the native 256 levels of grey supported by a standard LCD monitor, we have used the concept of "PseudoGrey" where slightly off-white shades of grey are used to extend the intensity range from 256 to 1,785 shades of grey. Remote access is facilitated by a customized version of UltraVNC, which corrects remote shades of grey for display in the Operating Room. The system is successfully deployed at Morriston Hospital, Swansea, UK, and is in daily use during Maxillofacial surgery. More formal user trials and quantitative assessments are being planned for the future.
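
    The record does not give the exact PseudoGrey mapping or the calibration used at Morriston Hospital. The sketch below only illustrates the general idea, under the assumption that the seven sub-grey steps between adjacent 8-bit levels are produced by bumping channel subsets ordered by their approximate Rec. 709 luminance contributions; a deployed system would use a calibrated, monitor-specific lookup table instead.

    ```python
    def pseudogrey_rgb(level, max_level=1784):
        """Map an extended grey level (0..1784, i.e. 255*7) to a near-grey RGB triple.

        Sketch only: 7 sub-steps per 8-bit grey level are approximated by bumping
        channel subsets ordered by increasing luminance contribution
        (blue < red < green, Rec. 709 weights). Calibration is ignored here.
        """
        level = max(0, min(level, max_level))
        base, sub = divmod(level, 7)          # 8-bit base grey and sub-step 0..6
        # channel bumps roughly ordered by the luminance they add
        bumps = [(0, 0, 0), (0, 0, 1), (1, 0, 0), (1, 0, 1),
                 (0, 1, 0), (0, 1, 1), (1, 1, 0)]
        r, g, b = bumps[sub]
        clip = lambda v: min(v, 255)
        return (clip(base + r), clip(base + g), clip(base + b))

    print(pseudogrey_rgb(1000))   # a near-grey triple between grey levels 142 and 143
    ```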

  4. Improved image quality and computation reduction in 4-D reconstruction of cardiac-gated SPECT images.

    PubMed

    Narayanan, M V; King, M A; Wernick, M N; Byrne, C L; Soares, E J; Pretorius, P H

    2000-05-01

    Spatiotemporal reconstruction of cardiac-gated SPECT images permits us to obtain valuable information related to cardiac function. However, the task of reconstructing this four-dimensional (4-D) data set is computation intensive. Typically, these studies are reconstructed frame-by-frame: a nonoptimal approach because temporal correlations in the signal are not accounted for. In this work, we show that the compression and signal decorrelation properties of the Karhunen-Loève (KL) transform may be used to greatly simplify the spatiotemporal reconstruction problem. The gated projections are first KL transformed in the temporal direction. This results in a sequence of KL-transformed projection images for which the signal components are uncorrelated along the time axis. As a result, the 4-D reconstruction task is simplified to a series of three-dimensional (3-D) reconstructions in the KL domain. The reconstructed KL components are subsequently inverse KL transformed to obtain the entire spatiotemporal reconstruction set. Our simulation and clinical results indicate that KL processing provides image sequences that are less noisy than are conventional frame-by-frame reconstructions. Additionally, by discarding high-order KL components that are dominated by noise, we can achieve savings in computation time because fewer reconstructions are needed in comparison to conventional frame-by-frame reconstructions.
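
    A minimal sketch of the temporal Karhunen-Loève step is given below: the gated frames are decorrelated along time by eigendecomposition of their temporal covariance, noise-dominated high-order components can be discarded, and the retained components are inverse transformed afterwards. In the actual method each retained KL component projection set would be reconstructed in 3-D before the inverse transform; that reconstruction step is outside this sketch.

    ```python
    import numpy as np

    def temporal_kl_transform(gated_proj, keep=None):
        """Karhunen-Loeve (KL) transform of gated data along the time axis.

        gated_proj : array of shape (T, ...) holding T gate frames
        keep       : number of leading KL components to retain (None keeps all)
        Returns (kl_components, basis, mean) for later inverse transformation.
        """
        T = gated_proj.shape[0]
        X = gated_proj.reshape(T, -1).astype(float)     # T x N data matrix
        mean = X.mean(axis=0, keepdims=True)
        Xc = X - mean
        C = Xc @ Xc.T / Xc.shape[1]                     # T x T temporal covariance
        w, V = np.linalg.eigh(C)
        V = V[:, np.argsort(w)[::-1]]                   # order by decreasing variance
        if keep is not None:
            V = V[:, :keep]                             # drop noise-dominated components
        kl = V.T @ Xc                                   # temporally decorrelated data
        return kl.reshape((V.shape[1],) + gated_proj.shape[1:]), V, mean

    def inverse_kl(kl_components, basis, mean):
        """Inverse KL transform back to the gated time frames."""
        K = kl_components.reshape(kl_components.shape[0], -1)
        X = basis @ K + mean
        return X.reshape((basis.shape[0],) + kl_components.shape[1:])
    ```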

  5. Medical Image Processing Server applied to Quality Control of Nuclear Medicine.

    NASA Astrophysics Data System (ADS)

    Vergara, C.; Graffigna, J. P.; Marino, E.; Omati, S.; Holleywell, P.

    2016-04-01

    This paper is framed within the area of medical image processing and presents the installation, configuration and implementation of a medical image processing server (MIPS) at the Fundación Escuela de Medicina Nuclear (FUESMEN) in Mendoza, Argentina. It was developed in the Gabinete de Tecnologia Médica (GA.TE.ME), Facultad de Ingeniería-Universidad Nacional de San Juan. MIPS is software that, using the DICOM standard, can receive medical imaging studies from different modalities or viewing stations, execute algorithms on them, and return the results to other devices. To achieve these objectives, preliminary tests were conducted in the laboratory, and the tools were then installed remotely in the clinical environment. Once the suitable algorithms had been defined, the appropriate protocols for setting up and using them in the different services were established. Finally, the implementation and training provided at FUESMEN, using nuclear medicine quality control processes, are described, and the implementation results are presented.

  6. Gradient Magnitude Similarity Deviation: A Highly Efficient Perceptual Image Quality Index.

    PubMed

    Xue, Wufeng; Zhang, Lei; Mou, Xuanqin; Bovik, Alan C

    2014-02-01

    It is an important task to faithfully evaluate the perceptual quality of output images in many applications, such as image compression, image restoration, and multimedia streaming. A good image quality assessment (IQA) model should not only deliver high prediction accuracy, but also be computationally efficient. The efficiency of IQA metrics is becoming particularly important due to the increasing proliferation of high-volume visual data in high-speed networks. We present a new effective and efficient IQA model, called gradient magnitude similarity deviation (GMSD). The image gradients are sensitive to image distortions, while different local structures in a distorted image suffer different degrees of degradation. This motivates us to explore the use of the global variation of a gradient-based local quality map for overall image quality prediction. We find that the pixel-wise gradient magnitude similarity (GMS) between the reference and distorted images, combined with a novel pooling strategy (the standard deviation of the GMS map), can accurately predict perceptual image quality. The resulting GMSD algorithm is much faster than most state-of-the-art IQA methods, and delivers highly competitive prediction accuracy. MATLAB source code of GMSD can be downloaded at http://www4.comp.polyu.edu.hk/~cslzhang/IQA/GMSD/GMSD.htm. PMID:26270911
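
    The GMSD computation is simple enough to restate; a small NumPy/SciPy version is sketched below. The Prewitt gradients, 2x downsampling and pooling by standard deviation follow the description above; the stability constant of 170 for 8-bit images is taken from the authors' released implementation and should be treated here as an assumption.

    ```python
    import numpy as np
    from scipy.ndimage import convolve, uniform_filter

    def gmsd(ref, dist, c=170.0):
        """Gradient Magnitude Similarity Deviation (lower = better quality).

        ref, dist : greyscale images in [0, 255]
        c         : stability constant for 8-bit images (assumed value)
        """
        ref = np.asarray(ref, float)
        dist = np.asarray(dist, float)
        # the method first downsamples by 2 after 2x2 average filtering
        ref = uniform_filter(ref, size=2)[::2, ::2]
        dist = uniform_filter(dist, size=2)[::2, ::2]
        # Prewitt gradient operators
        hx = np.array([[1, 0, -1], [1, 0, -1], [1, 0, -1]]) / 3.0
        hy = hx.T

        def grad_mag(img):
            gx = convolve(img, hx, mode="nearest")
            gy = convolve(img, hy, mode="nearest")
            return np.sqrt(gx ** 2 + gy ** 2)

        mr, md = grad_mag(ref), grad_mag(dist)
        gms = (2 * mr * md + c) / (mr ** 2 + md ** 2 + c)  # similarity map
        return gms.std()                                    # pooled by standard deviation
    ```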

  7. Performance of three image-quality metrics in ink-jet printing of plain papers

    NASA Astrophysics Data System (ADS)

    Lee, David L.; Winslow, Alan T.

    1993-07-01

    Three image-quality metrics are evaluated: Hamerly's edge raggedness, or tangential edge profile; Granger and Cupery's subjective quality factor (SQF) derived from the second moment of the line spread function; and SQF derived from Gur and O'Donnell's reflectance transfer function. These metrics are but a handful of many in the literature. Standard office papers from North America and Europe representing a broad spectrum of what is commercially available were printed with a 300-dpi Hewlett-Packard Deskjet printer. An untrained panel of eight judges viewed text, in a variety of fonts, and a graphics target and assigned each print an integer score based on its overall quality. Analysis of the metrics revealed that Granger's SQF had the highest correlation with panel rank, and achieved a level of precision approaching single-judge error, that is, the ranking error made by an individual judge. While the other measures correlated in varying degrees, they were less precise. This paper reviews their theory, measurement, and performance.

  8. Evaluation of image quality of MRI data for brain tumor surgery

    NASA Astrophysics Data System (ADS)

    Heckel, Frank; Arlt, Felix; Geisler, Benjamin; Zidowitz, Stephan; Neumuth, Thomas

    2016-03-01

    3D medical images are important components of modern medicine. Their usefulness for the physician depends on their quality, though. Only high-quality images allow accurate and reproducible diagnosis and appropriate support during treatment. We have analyzed 202 MRI images for brain tumor surgery in a retrospective study. Both an experienced neurosurgeon and an experienced neuroradiologist rated each available image with respect to its role in the clinical workflow, its suitability for this specific role, various image quality characteristics, and imaging artifacts. Our results show that MRI data acquired for brain tumor surgery does not always fulfill the required quality standards and that there is a significant disagreement between the surgeon and the radiologist, with the surgeon being more critical. Noise, resolution, as well as the coverage of anatomical structures were the most important criteria for the surgeon, while the radiologist was mainly disturbed by motion artifacts.

  9. Quality metric in matched Laplacian of Gaussian response domain for blind adaptive optics image deconvolution

    NASA Astrophysics Data System (ADS)

    Guo, Shiping; Zhang, Rongzhi; Yang, Yikang; Xu, Rong; Liu, Changhai; Li, Jisheng

    2016-04-01

    Adaptive optics (AO), in conjunction with subsequent postprocessing techniques, has substantially improved the resolution of turbulence-degraded images in ground-based astronomical observations and in the detection and identification of artificial space objects. However, important tasks involved in AO image postprocessing, such as frame selection, stopping iterative deconvolution, and algorithm comparison, commonly need manual intervention and cannot be performed automatically due to a lack of widely agreed-upon image quality metrics. In this work, based on the Laplacian of Gaussian (LoG) local contrast feature detection operator, we propose a LoG-domain matching operation to extract effective and universal image quality statistics. Further, we extract two no-reference quality assessment indices in the matched LoG domain that can be used for a variety of postprocessing tasks. Three typical space object images with distinct structural features are tested to verify the consistency of the proposed metric with perceptual image quality through subjective evaluation.
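
    The matched-LoG indices themselves are not spelled out in this record. As a starting point, simple LoG response statistics can be computed with SciPy as below, on the assumption that stronger local contrast responses accompany better-restored frames; the paper's actual indices are more elaborate.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_laplace

    def log_response_stats(image, sigma=2.0):
        """Laplacian-of-Gaussian response statistics as simple no-reference features.

        Returns the mean absolute LoG response and its standard deviation for a
        single frame; sigma sets the scale of the local contrast detector.
        """
        log = gaussian_laplace(np.asarray(image, float), sigma=sigma)
        return np.abs(log).mean(), log.std()
    ```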

  10. Comparison of Through-Focus Image Quality Across Five Presbyopia-Correcting Intraocular Lenses (An American Ophthalmological Society Thesis)

    PubMed Central

    Pepose, Jay S.; Wang, Daozhi; Altmann, Griffith E.

    2011-01-01

    Purpose To assess through-focus polychromatic image sharpness of five US Food and Drug Administration–approved presbyopia-correcting intraocular lenses (IOLs) through a range of object vergences and pupil diameters utilizing an image sharpness algorithm. Methods A 1951 US Air Force resolution target was imaged through a Crystalens AO (AO) (Bausch & Lomb Surgical, Aliso Viejo, California), Crystalens HD (HD) (Bausch & Lomb Surgical, Aliso Viejo, California), aspheric ReSTOR +4.0 (R4) (Alcon Laboratories, Fort Worth, Texas), aspheric ReSTOR +3.0 (R3) (Alcon Laboratories, Fort Worth, Texas), and Tecnis Multifocal Acrylic (TMF) (Abbott Medical Optics, Irvine, California) IOL in an anatomically and optically accurate model eye and captured digitally for each combination of pupil diameter and object vergence. The sharpness of each digital image was objectively scored using a two-dimensional gradient function. Results The AO lens had the best distance image sharpness for all pupil diameters, followed by the HD. With a 5-mm pupil, the R4 lens achieved distance image quality similar to the HD, but inferior to the AO. The R3 successfully moved the near focal point farther from the patient compared to the R4, but did not improve image sharpness at intermediate distances and showed worse distance and near image sharpness. Consistent with apodization, the ReSTOR IOLs displayed better distance and poorer near image sharpness as pupil diameter increased. The TMF lens showed consistent distance and near image sharpness across pupil diameters and exhibited the best near image sharpness for all pupil diameters. Conclusions Differing IOL design strategies to increase depth of field are associated with quantifiable differences in image sharpness at varying vergences and pupil sizes. An objective comparison of the imaging properties of specific presbyopia-correcting IOLs, in conjunction with patients’ pupil sizes, can be useful in selecting the most appropriate IOL for each patient

  11. Recent Developments in Hyperspectral Imaging for Assessment of Food Quality and Safety

    PubMed Central

    Huang, Hui; Liu, Li; Ngadi, Michael O.

    2014-01-01

    Hyperspectral imaging which combines imaging and spectroscopic technology is rapidly gaining ground as a non-destructive, real-time detection tool for food quality and safety assessment. Hyperspectral imaging could be used to simultaneously obtain large amounts of spatial and spectral information on the objects being studied. This paper provides a comprehensive review on the recent development of hyperspectral imaging applications in food and food products. The potential and future work of hyperspectral imaging for food quality and safety control is also discussed. PMID:24759119

  12. Optimizing the anode-filter combination in the sense of image quality and average glandular dose in digital mammography

    NASA Astrophysics Data System (ADS)

    Varjonen, Mari; Strömmer, Pekka

    2008-03-01

    This paper presents the optimized image quality and average glandular dose in digital mammography, and provides recommendations concerning anode-filter combinations in digital mammography based on amorphous selenium (a-Se) detector technology. The full field digital mammography (FFDM) system based on a-Se technology, which is also the platform of a tomosynthesis prototype, was used in this study. The X-ray tube anode-filter combinations we studied were tungsten (W) - rhodium (Rh) and tungsten (W) - silver (Ag). Anatomically adaptable fully automatic exposure control (AAEC) was used. The average glandular doses (AGD) were calculated using a specific program developed by Planmed, which automates the method described by Dance et al. Image quality was evaluated in two different ways: a subjective image quality evaluation, and contrast and noise analysis. Using the W-Rh and W-Ag anode-filter combinations, a significantly lower average glandular dose can be achieved compared with molybdenum (Mo) - molybdenum (Mo) or Mo-Rh. The average glandular dose reductions achieved ranged from 25% to 60%. In the future, the evaluation will concentrate on studying more filter combinations and the effect of higher kV values (>35 kV), which seem useful for optimizing the dose in digital mammography.

  13. Quantitative measurement of holographic image quality using Adobe Photoshop

    NASA Astrophysics Data System (ADS)

    Wesly, E.

    2013-02-01

    Measurement of the characteristics of image holograms in regards to diffraction efficiency and signal to noise ratio are demonstrated, using readily available digital cameras and image editing software. Illustrations and case studies, using currently available holographic recording materials, are presented.

  14. How do we watch images? A case of change detection and quality estimation

    NASA Astrophysics Data System (ADS)

    Radun, Jenni; Leisti, Tuomas; Virtanen, Toni; Nyman, Göte

    2012-01-01

    The most common tasks in subjective image estimation are change detection (a detection task) and image quality estimation (a preference task). We examined how the task influences gaze behavior by comparing detection and preference tasks. The eye movements of 16 naïve observers were recorded, with 8 observers in each task. The setting was a flicker paradigm, where the observers see a non-manipulated image, a manipulated version of the image, and again the non-manipulated image, and estimate the difference they perceived between them. The material was photographic material with different image distortions and contents. To examine the spatial distribution of fixations, we defined the regions of interest using a memory task and calculated information entropy to estimate how concentrated the fixations were on the image plane. The quality task was faster and needed fewer fixations, and the first eight fixations were more concentrated on certain image areas than in the change detection task. The bottom-up influences of the image also caused more variation in gaze behavior in the quality estimation task than in the change detection task. The results show that quality estimation is faster and that the regions of interest are emphasized more in certain images compared with the change detection task, which is a scanning task where the whole image is always thoroughly examined. In conclusion, in subjective image estimation studies it is important to consider the task.

  15. Multimodal Imaging and Lighting Bias Correction for Improved μPAD-based Water Quality Monitoring via Smartphones

    NASA Astrophysics Data System (ADS)

    McCracken, Katherine E.; Angus, Scott V.; Reynolds, Kelly A.; Yoon, Jeong-Yeol

    2016-06-01

    Smartphone image-based sensing of microfluidic paper analytical devices (μPADs) offers low-cost and mobile evaluation of water quality. However, consistent quantification is a challenge due to variable environmental, paper, and lighting conditions, especially across large multi-target μPADs. Compensations must be made for variations between images to achieve reproducible results without a separate lighting enclosure. We thus developed a simple method using triple-reference point normalization and a fast-Fourier transform (FFT)-based pre-processing scheme to quantify consistent reflected light intensity signals under variable lighting and channel conditions. This technique was evaluated using various light sources, lighting angles, imaging backgrounds, and imaging heights. Further testing evaluated its handling of absorbance, quenching, and relative scattering intensity measurements from assays detecting four water contaminants – Cr(VI), total chlorine, caffeine, and E. coli K12 – at similar wavelengths using the green channel of RGB images. Between assays, this algorithm reduced error from μPAD surface inconsistencies and cross-image lighting gradients. Although the algorithm could not completely remove the anomalies arising from point shadows within channels or some non-uniform background reflections, it still afforded order-of-magnitude quantification and stable assay specificity under these conditions, offering one route toward improving smartphone quantification of μPAD assays for in-field water quality monitoring.
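
    Neither the triple-reference normalization nor the FFT pre-processing is specified in detail in this record, so the sketch below is only a generic illustration of the same ideas: a low-pass FFT estimate of the slowly varying lighting field is removed from the analysed channel, and measured intensities are then rescaled against reference patches. The cutoff value is an assumption.

    ```python
    import numpy as np

    def remove_lighting_gradient(channel, cutoff=0.02):
        """Suppress slow cross-image lighting gradients with an FFT low-pass estimate.

        channel : 2-D array (e.g. the green channel of the uPAD photo)
        cutoff  : normalized spatial-frequency radius treated as 'illumination'
        """
        f = np.fft.fft2(channel)
        fy = np.fft.fftfreq(channel.shape[0])[:, None]
        fx = np.fft.fftfreq(channel.shape[1])[None, :]
        lowpass = np.sqrt(fx ** 2 + fy ** 2) < cutoff
        background = np.real(np.fft.ifft2(f * lowpass))   # estimated illumination field
        return channel - background + background.mean()   # flatten, keep overall level

    def normalize_to_references(signal, ref_white, ref_black):
        """Map a measured intensity onto a 0-1 scale set by reference patches."""
        return (signal - ref_black) / max(ref_white - ref_black, 1e-9)
    ```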

  16. Multimodal Imaging and Lighting Bias Correction for Improved μPAD-based Water Quality Monitoring via Smartphones

    PubMed Central

    McCracken, Katherine E.; Angus, Scott V.; Reynolds, Kelly A.; Yoon, Jeong-Yeol

    2016-01-01

    Smartphone image-based sensing of microfluidic paper analytical devices (μPADs) offers low-cost and mobile evaluation of water quality. However, consistent quantification is a challenge due to variable environmental, paper, and lighting conditions, especially across large multi-target μPADs. Compensations must be made for variations between images to achieve reproducible results without a separate lighting enclosure. We thus developed a simple method using triple-reference point normalization and a fast-Fourier transform (FFT)-based pre-processing scheme to quantify consistent reflected light intensity signals under variable lighting and channel conditions. This technique was evaluated using various light sources, lighting angles, imaging backgrounds, and imaging heights. Further testing evaluated its handling of absorbance, quenching, and relative scattering intensity measurements from assays detecting four water contaminants – Cr(VI), total chlorine, caffeine, and E. coli K12 – at similar wavelengths using the green channel of RGB images. Between assays, this algorithm reduced error from μPAD surface inconsistencies and cross-image lighting gradients. Although the algorithm could not completely remove the anomalies arising from point shadows within channels or some non-uniform background reflections, it still afforded order-of-magnitude quantification and stable assay specificity under these conditions, offering one route toward improving smartphone quantification of μPAD assays for in-field water quality monitoring. PMID:27283336

  17. Multimodal Imaging and Lighting Bias Correction for Improved μPAD-based Water Quality Monitoring via Smartphones.

    PubMed

    McCracken, Katherine E; Angus, Scott V; Reynolds, Kelly A; Yoon, Jeong-Yeol

    2016-06-10

    Smartphone image-based sensing of microfluidic paper analytical devices (μPADs) offers low-cost and mobile evaluation of water quality. However, consistent quantification is a challenge due to variable environmental, paper, and lighting conditions, especially across large multi-target μPADs. Compensations must be made for variations between images to achieve reproducible results without a separate lighting enclosure. We thus developed a simple method using triple-reference point normalization and a fast-Fourier transform (FFT)-based pre-processing scheme to quantify consistent reflected light intensity signals under variable lighting and channel conditions. This technique was evaluated using various light sources, lighting angles, imaging backgrounds, and imaging heights. Further testing evaluated its handling of absorbance, quenching, and relative scattering intensity measurements from assays detecting four water contaminants - Cr(VI), total chlorine, caffeine, and E. coli K12 - at similar wavelengths using the green channel of RGB images. Between assays, this algorithm reduced error from μPAD surface inconsistencies and cross-image lighting gradients. Although the algorithm could not completely remove the anomalies arising from point shadows within channels or some non-uniform background reflections, it still afforded order-of-magnitude quantification and stable assay specificity under these conditions, offering one route toward improving smartphone quantification of μPAD assays for in-field water quality monitoring.

  18. Multimodal Imaging and Lighting Bias Correction for Improved μPAD-based Water Quality Monitoring via Smartphones.

    PubMed

    McCracken, Katherine E; Angus, Scott V; Reynolds, Kelly A; Yoon, Jeong-Yeol

    2016-01-01

    Smartphone image-based sensing of microfluidic paper analytical devices (μPADs) offers low-cost and mobile evaluation of water quality. However, consistent quantification is a challenge due to variable environmental, paper, and lighting conditions, especially across large multi-target μPADs. Compensations must be made for variations between images to achieve reproducible results without a separate lighting enclosure. We thus developed a simple method using triple-reference point normalization and a fast-Fourier transform (FFT)-based pre-processing scheme to quantify consistent reflected light intensity signals under variable lighting and channel conditions. This technique was evaluated using various light sources, lighting angles, imaging backgrounds, and imaging heights. Further testing evaluated its handling of absorbance, quenching, and relative scattering intensity measurements from assays detecting four water contaminants - Cr(VI), total chlorine, caffeine, and E. coli K12 - at similar wavelengths using the green channel of RGB images. Between assays, this algorithm reduced error from μPAD surface inconsistencies and cross-image lighting gradients. Although the algorithm could not completely remove the anomalies arising from point shadows within channels or some non-uniform background reflections, it still afforded order-of-magnitude quantification and stable assay specificity under these conditions, offering one route toward improving smartphone quantification of μPAD assays for in-field water quality monitoring. PMID:27283336

  19. Combining hard and soft magnetism into a single core-shell nanoparticle to achieve both hyperthermia and image contrast

    PubMed Central

    Yang, Qiuhong; Gong, Maogang; Cai, Shuang; Zhang, Ti; Douglas, Justin T; Chikan, Viktor; Davies, Neal M; Lee, Phil; Choi, In-Young; Ren, Shenqiang; Forrest, M Laird

    2015-01-01

    Background Biocompatible core/shell-structured magnetic nanoparticles (MNPs) were developed to mediate simultaneous cancer therapy and imaging. Methods & results A 22-nm MNP was first synthesized by magnetically coupling hard (FePt) and soft (Fe3O4) materials to produce high relative energy transfer. Colloidal stability of the FePt@Fe3O4 MNPs was achieved through surface modification with silane-polyethylene glycol (PEG). Intravenous administration of PEG-MNPs into tumor-bearing mice resulted in sustained particle accumulation in the tumor region, and the tumor burden of treated mice was a third that of the mice in the control groups 2 weeks after a local hyperthermia treatment. In vivo magnetic resonance imaging exhibited enhanced T2 contrast in the tumor region. Conclusion This work has demonstrated the feasibility of cancer theranostics with PEG-MNPs. PMID:26606855

  20. Non-reference quality assessment of infrared images reconstructed by compressive sensing

    NASA Astrophysics Data System (ADS)

    Ospina-Borras, J. E.; Benitez-Restrepo, H. D.

    2015-01-01

    Infrared (IR) images are representations of the world and have natural features like images in the visible spectrum. As such, natural features from infrared images support image quality assessment (IQA).1 In this work, we compare the quality of a set of indoor and outdoor IR images reconstructed from measurement functions formed by linear combinations of their pixels. The reconstruction methods are: linear discrete cosine transform (DCT) acquisition, DCT augmented with total variation minimization, and a compressive sensing (CS) scheme. The quality of each reconstruction is computed with the Peak Signal to Noise Ratio (PSNR), three full-reference (FR) IQA measures (multi-scale structural similarity (MSSIM), visual information fidelity (VIF), and the information fidelity criterion (IFC)), and four no-reference (NR) measures (sharpness identification based on local phase coherence (LPC-SI), the blind/referenceless image spatial quality evaluator (BRISQUE), the naturalness image quality evaluator (NIQE), and gradient singular value decomposition (GSVD)). Each measure is compared to human scores obtained by a differential mean opinion score (DMOS) test. We observe that GSVD has the highest correlation coefficients of all NR measures, but all FR measures perform better. We use MSSIM to compare the reconstruction methods and find that the CS scheme produces a good-quality IR image using only 30000 random sub-samples and 1000 DCT coefficients (2%). In contrast, linear DCT provides higher correlation coefficients than the CS scheme by using all the pixels of the image and 31000 DCT coefficients (47%).

  1. Quality assessment of stereoscopic 3D image compression by binocular integration behaviors.

    PubMed

    Lin, Yu-Hsun; Wu, Ja-Ling

    2014-04-01

    The objective approaches of 3D image quality assessment play a key role for the development of compression standards and various 3D multimedia applications. The quality assessment of 3D images faces more new challenges, such as asymmetric stereo compression, depth perception, and virtual view synthesis, than its 2D counterparts. In addition, the widely used 2D image quality metrics (e.g., PSNR and SSIM) cannot be directly applied to deal with these newly introduced challenges. This statement can be verified by the low correlation between the computed objective measures and the subjectively measured mean opinion scores (MOSs), when 3D images are the tested targets. In order to meet these newly introduced challenges, in this paper, besides traditional 2D image metrics, the binocular integration behaviors-the binocular combination and the binocular frequency integration, are utilized as the bases for measuring the quality of stereoscopic 3D images. The effectiveness of the proposed metrics is verified by conducting subjective evaluations on publicly available stereoscopic image databases. Experimental results show that significant consistency could be reached between the measured MOS and the proposed metrics, in which the correlation coefficient between them can go up to 0.88. Furthermore, we found that the proposed metrics can also address the quality assessment of the synthesized color-plus-depth 3D images well. Therefore, it is our belief that the binocular integration behaviors are important factors in the development of objective quality assessment for 3D images.

  2. No-Reference Image Quality Assessment for ZY3 Imagery in Urban Areas Using Statistical Model

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Cui, W. H.; Yang, F.; Wu, Z. C.

    2016-06-01

    More and more high-spatial-resolution satellite images are being produced as satellite technology improves. However, image quality is not always satisfactory for applications. Due to complicated atmospheric conditions and the complex radiation transmission involved in the imaging process, the images often suffer deterioration. In order to assess the quality of remote sensing images over urban areas, we propose a general-purpose image quality assessment method based on feature extraction and machine learning. We use two types of features at multiple scales: one derived from the shape of the histogram and the other from natural scene statistics based on the Generalized Gaussian distribution (GGD). A 20-D feature vector is extracted for each scale and is assumed to capture the quality degradation characteristics of the RS image. We use an SVM to learn to predict image quality scores from these features. For the evaluation, we constructed a medium-scale dataset for training and testing, with subjects taking part to give human opinions on the degraded images. We used ZY3 satellite images over the Wuhan area (a city in China) to conduct the experiments. Experimental results show the correlation between the predicted scores and the subjective perceptions.
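
    The 20-D feature vector is not itemized in this record. One representative natural-scene-statistics feature of the kind described is the GGD shape parameter of normalized coefficients, which can be estimated by moment matching as sketched below (BRISQUE-style; the paper's exact features may differ).

    ```python
    import numpy as np
    from scipy.special import gamma

    def ggd_shape(coeffs):
        """Estimate the Generalized Gaussian shape parameter by moment matching.

        For a zero-mean GGD, E[|x|]^2 / E[x^2] = Gamma(2/a)^2 / (Gamma(1/a)*Gamma(3/a)),
        so the shape a is found by a brute-force 1-D search over candidate values.
        """
        x = np.asarray(coeffs, float).ravel()
        r_hat = np.mean(np.abs(x)) ** 2 / np.mean(x ** 2)
        a = np.arange(0.2, 10.0, 0.001)
        r_model = gamma(2.0 / a) ** 2 / (gamma(1.0 / a) * gamma(3.0 / a))
        return a[np.argmin((r_model - r_hat) ** 2)]

    # example: Gaussian-distributed coefficients should give a shape estimate near 2
    print(ggd_shape(np.random.randn(10000)))
    ```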

  3. Quality evaluation of adaptive optical image based on DCT and Rényi entropy

    NASA Astrophysics Data System (ADS)

    Xu, Yuannan; Li, Junwei; Wang, Jing; Deng, Rong; Dong, Yanbing

    2015-04-01

    Adaptive optics telescopes play an increasingly important role in ground-based detection systems, and the number of adaptive optics images is so large that a suitable quality evaluation method is needed to select good-quality images automatically and save human effort. Adaptive optics images are no-reference images, so the evaluation must be performed without a reference. In this paper, a new logarithmic evaluation method for adaptive optics images based on the discrete cosine transform (DCT) and Rényi entropy is proposed. Through the DCT, using a one- or two-dimensional window, the statistical properties of the Rényi entropy of the images are studied. Different directional Rényi entropy maps of an input image, containing different information content, are obtained, and the mean values of the different directional Rényi entropy maps are calculated. For image quality evaluation, the different directional Rényi entropies and their standard deviation over the region of interest are selected as indicators of the anisotropy of the images. The standard deviation of the different directional Rényi entropies is taken as the quality evaluation value for an adaptive optics image. Experimental results show that the quality ranking produced by the proposed method matches well with visual inspection.
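
    As a small illustration of the building blocks named above, the Rényi entropy of a DCT-transformed block can be computed as below. The paper's directional, windowed variant and its final evaluation value are more elaborate than this sketch, and the order alpha = 3 is only an assumed example.

    ```python
    import numpy as np
    from scipy.fft import dctn

    def dct_renyi_entropy(block, alpha=3.0):
        """Renyi entropy of the normalized DCT energy distribution of an image block.

        H_alpha = 1/(1-alpha) * log2( sum_i p_i^alpha ), where p_i are the DCT
        coefficient energies normalized to sum to one.
        """
        c = dctn(np.asarray(block, float), norm="ortho")
        p = c ** 2
        p = p / max(p.sum(), 1e-12)
        p = p[p > 0]
        return np.log2(np.sum(p ** alpha)) / (1.0 - alpha)
    ```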

  4. Single particle quantum dot imaging achieves ultrasensitive detection capabilities for Western immunoblot analysis.

    PubMed

    Scholl, Benjamin; Liu, Hong Yan; Long, Brian R; McCarty, Owen J T; O'Hare, Thomas; Druker, Brian J; Vu, Tania Q

    2009-06-23

    Substantially improved detection methods are needed to detect fractionated protein samples present at trace concentrations in complex, heterogeneous tissue and biofluid samples. Here we describe a modification of traditional Western immunoblotting using a technique to count quantum-dot-tagged proteins on optically transparent PVDF membranes. Counts of quantum-dot-tagged proteins on immunoblots achieved optimal detection sensitivity of 0.2 pg and a sample size of 100 cells. This translates to a 10^3-fold improvement in detection sensitivity and a 10^2-fold reduction in required cell sample, compared to traditional Westerns processed using the same membrane immunoblots. Quantum dot fluorescent blinking analysis showed that detection of single QD-tagged proteins is possible and that detected points of fluorescence consist of one or a few (<9) QDs. The application of single nanoparticle detection capabilities to Western blotting technologies may provide a new solution to a broad range of applications currently limited by insufficient detection sensitivity and/or sample availability.

  5. An image-based technique to assess the perceptual quality of clinical chest radiographs

    SciTech Connect

    Lin Yuan; Luo Hui; Dobbins, James T. III; Page McAdams, H.; Wang, Xiaohui; Sehnert, William J.; Barski, Lori; Foos, David H.; Samei, Ehsan

    2012-11-15

    Purpose: Current clinical image quality assessment techniques mainly analyze image quality for the imaging system in terms of factors such as the capture system modulation transfer function, noise power spectrum, detective quantum efficiency, and the exposure technique. While these elements form the basic underlying components of image quality, when assessing a clinical image, radiologists seldom refer to these factors, but rather examine several specific regions of the displayed patient images, further impacted by a particular image processing method applied, to see whether the image is suitable for diagnosis. In this paper, the authors developed a novel strategy to simulate radiologists' perceptual evaluation process on actual clinical chest images. Methods: Ten regional based perceptual attributes of chest radiographs were determined through an observer study. Those included lung grey level, lung detail, lung noise, rib-lung contrast, rib sharpness, mediastinum detail, mediastinum noise, mediastinum alignment, subdiaphragm-lung contrast, and subdiaphragm area. Each attribute was characterized in terms of a physical quantity measured from the image algorithmically using an automated process. A pilot observer study was performed on 333 digital chest radiographs, which included 179 PA images with 10:1 ratio grids (set 1) and 154 AP images without grids (set 2), to ascertain the correlation between image perceptual attributes and physical quantitative measurements. To determine the acceptable range of each perceptual attribute, a preliminary quality consistency range was defined based on the preferred 80% of images in set 1. Mean value difference (μ₁-μ₂) and variance ratio (σ₁²/σ₂²) were investigated to further quantify the differences between the selected two image sets. Results: The pilot observer study demonstrated that our regional based physical quantity metrics of chest radiographs correlated very well with
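
    A minimal worked example of the two set-comparison statistics mentioned above, mean value difference (μ₁-μ₂) and variance ratio (σ₁²/σ₂²), applied to one hypothetical perceptual attribute measured on the two image sets (all values are placeholders):

```python
"""Minimal sketch of the two set-comparison statistics: mean value difference
(mu1 - mu2) and variance ratio (sigma1^2 / sigma2^2) for one attribute
measured on two image sets.  The attribute values are placeholders."""
import numpy as np

rng = np.random.default_rng(2)
set1 = rng.normal(loc=0.52, scale=0.05, size=179)   # e.g. lung grey level, set 1
set2 = rng.normal(loc=0.47, scale=0.08, size=154)   # same attribute, set 2

mean_difference = set1.mean() - set2.mean()
variance_ratio = set1.var(ddof=1) / set2.var(ddof=1)
print(f"mu1-mu2 = {mean_difference:.3f}, variance ratio = {variance_ratio:.3f}")
```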

  6. Local homogeneity combined with DCT statistics to blind noisy image quality assessment

    NASA Astrophysics Data System (ADS)

    Yang, Lingxian; Chen, Li; Chen, Heping

    2015-03-01

    In this paper a novel method for blind noisy image quality assessment is proposed. First, since the human visual system (HVS) is more sensitive to locally smooth areas in a noisy image, an adaptive local homogeneous block selection algorithm is proposed to construct a new homogeneous image, termed homogeneity blocks (HB), based on characteristics computed at each pixel. Second, the discrete cosine transform (DCT) is applied to each HB and the high-frequency components are used to estimate the image noise level. Finally, a modified peak signal-to-noise ratio (MPSNR) image quality assessment approach is proposed based on analysis of the change in the DCT kurtosis distribution and the noise level estimated above. Simulations show that the quality scores produced by the proposed algorithm correlate well with human perception of quality and are also stable.
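
    A simplified sketch of the two core steps described above (homogeneous block selection followed by DCT-based noise estimation feeding an MPSNR-style score); the selection rule and score mapping are stand-ins for the paper's exact method:

```python
"""Sketch: (1) keep the most homogeneous blocks of a noisy image,
(2) estimate the noise level from their high-frequency DCT coefficients,
(3) map the estimate to a PSNR-like score."""
import numpy as np
from scipy.fft import dctn

def estimate_noise_sigma(img, block=8, keep_fraction=0.2):
    blocks = []
    h, w = img.shape
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            blocks.append(img[i:i + block, j:j + block])
    # step 1: homogeneity approximated by low local variance
    blocks.sort(key=lambda b: b.var())
    homogeneous = blocks[: max(1, int(keep_fraction * len(blocks)))]
    # step 2: noise sigma from the high-frequency DCT quadrant
    sigmas = []
    for b in homogeneous:
        c = dctn(b, norm="ortho")
        hf = c[block // 2:, block // 2:]
        sigmas.append(hf.std())
    return float(np.median(sigmas))

def mpsnr(img, peak=1.0):
    sigma = estimate_noise_sigma(img)
    return 10.0 * np.log10(peak ** 2 / (sigma ** 2 + 1e-12))

rng = np.random.default_rng(3)
noisy = np.clip(rng.random((128, 128)) * 0.2 + 0.4
                + rng.normal(0, 0.03, (128, 128)), 0, 1)
print(f"noise-based MPSNR estimate: {mpsnr(noisy):.1f} dB")
```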

  7. INCITS W1.1 development update: appearance-based image quality standards for printers

    NASA Astrophysics Data System (ADS)

    Zeise, Eric K.; Rasmussen, D. René; Ng, Yee S.; Dalal, Edul; McCarthy, Ann; Williams, Don

    2008-01-01

    In September 2000, INCITS W1 (the U.S. representative of ISO/IEC JTC1/SC28, the standardization committee for office equipment) was chartered to develop an appearance-based image quality standard. (1),(2) The resulting W1.1 project is based on a proposal (3) that perceived image quality can be described by a small set of broad-based attributes. There are currently six ad hoc teams, each working towards the development of standards for evaluation of perceptual image quality of color printers for one or more of these image quality attributes. This paper summarizes the work in progress of the teams addressing the attributes of Macro-Uniformity, Colour Rendition, Gloss & Gloss Uniformity, Text & Line Quality and Effective Resolution.

  8. Quantitative metrics for assessment of chemical image quality and spatial resolution

    DOE PAGES

    Kertesz, Vilmos; Cahill, John F.; Van Berkel, Gary J.

    2016-02-28

    Rationale: Currently objective/quantitative descriptions of the quality and spatial resolution of mass spectrometry derived chemical images are not standardized. Development of these standardized metrics is required to objectively describe chemical imaging capabilities of existing and/or new mass spectrometry imaging technologies. Such metrics would allow unbiased judgment of intra-laboratory advancement and/or inter-laboratory comparison for these technologies if used together with standardized surfaces. Methods: We developed two image metrics, viz., chemical image contrast (ChemIC) based on signal-to-noise related statistical measures on chemical image pixels and corrected resolving power factor (cRPF) constructed from statistical analysis of mass-to-charge chronograms across features of interest in an image. These metrics, quantifying chemical image quality and spatial resolution, respectively, were used to evaluate chemical images of a model photoresist patterned surface collected using a laser ablation/liquid vortex capture mass spectrometry imaging system under different instrument operational parameters. Results: The calculated ChemIC and cRPF metrics determined in an unbiased fashion the relative ranking of chemical image quality obtained with the laser ablation/liquid vortex capture mass spectrometry imaging system. These rankings were used to show that both chemical image contrast and spatial resolution deteriorated with increasing surface scan speed, increased lane spacing and decreasing size of surface features. Conclusions: ChemIC and cRPF, respectively, were developed and successfully applied for the objective description of chemical image quality and spatial resolution of chemical images collected from model surfaces using a laser ablation/liquid vortex capture mass spectrometry imaging system.

  9. Factors affecting computed tomography image quality for assessment of mechanical aortic valves.

    PubMed

    Suh, Young Joo; Kim, Young Jin; Hong, Yoo Jin; Lee, Hye-Jeong; Hur, Jin; Hong, Sae Rom; Im, Dong Jin; Kim, Yun Jung; Choi, Byoung Wook

    2016-06-01

    Evaluating mechanical valves with computed tomography (CT) can be problematic because artifacts from the metallic components of valves can hamper image quality. The purpose of this study was to determine factors affecting the image quality of cardiac CT to improve assessment of mechanical aortic valves. A total of 144 patients who underwent aortic valve replacement with mechanical valves (ten different types) and who underwent cardiac CT were included. Using a four-point grading system, the image quality of the CT scans was assessed for visibility of the valve leaflets and the subvalvular regions. Data regarding the type of mechanical valve, tube voltage, average heart rate (HR), and HR variability during CT scanning were compared between the non-diagnostic (overall image quality score ≤2) and diagnostic (overall image quality score >2) image quality groups. Logistic regression analyses were performed to identify predictors of non-diagnostic image quality. The percentage of valve types that incorporated a cobalt-chrome component (two types in total) and HR variability were significantly higher in the non-diagnostic image group than in the diagnostic group (P < 0.001 and P = 0.013, respectively). The average HR and tube voltage were not significantly different between the two groups (P > 0.05). Valve type was the only independent predictor of non-diagnostic quality. The CT image quality for patients with mechanical aortic valves differed significantly depending on the type of mechanical valve used and on the degree of HR variability.

  10. MTF as a quality measure for compressed images transmitted over computer networks

    NASA Astrophysics Data System (ADS)

    Hadar, Ofer; Stern, Adrian; Huber, Merav; Huber, Revital

    1999-12-01

    One result of recent advances in the different components of imaging systems technology is that these systems have become more resolution-limited and less noise-limited. The most useful tool for characterizing resolution-limited systems is the modulation transfer function (MTF). The goal of this work is to use the MTF as an image quality measure for images compressed with the JPEG (Joint Photographic Experts Group) algorithm and for MPEG (Moving Picture Experts Group) compressed video streams transmitted over a lossy packet network. Although we realize that the MTF is not an ideal parameter for measuring image quality after compression and transmission, because these are not linear, shift-invariant processes, we examine the conditions under which it can be used as an approximate criterion for image quality. The advantage of using the MTF of the compression algorithm is that it can easily be combined with the overall MTF of the imaging system.
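
    As a worked illustration of the cascading property referred to in the last sentence: under an approximate linear shift-invariant assumption, the end-to-end MTF is the product of the component MTFs, so a compression "MTF" can simply be multiplied into the system MTF. The curves below are illustrative Gaussians, not measured data.

```python
"""Cascade of component MTFs under an approximate LSI assumption."""
import numpy as np

f = np.linspace(0.0, 0.5, 6)                    # spatial frequency, cycles/pixel
mtf_system = np.exp(-(f / 0.35) ** 2)           # placeholder imaging-system MTF
mtf_compression = np.exp(-(f / 0.25) ** 2)      # placeholder compression "MTF"
mtf_total = mtf_system * mtf_compression        # combined (cascaded) response
for fi, m in zip(f, mtf_total):
    print(f"f = {fi:.1f} cy/px : combined MTF = {m:.3f}")
```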

  11. The study on the image quality of varied line spacing plane grating by computer simulation

    NASA Astrophysics Data System (ADS)

    Sun, Shouqiang; Zhang, Weiping; Liu, Lei; Yang, Qingyi

    2014-11-01

    Varied-line-spacing plane gratings have the features of self-focusing, reduced aberrations, and easy manufacturing, and they are widely applied in synchrotron radiation, plasma physics, space astronomy, and other fields. In studies of diffraction imaging, the optical path function is expanded as a Maclaurin series and the aberrations are expressed by the series coefficients; because many of the coefficients are of similar magnitude and there are many categories of aberration, the coefficients do not directly reflect the overall image quality. This paper studies the diffraction imaging of varied-line-spacing plane gratings by computer simulation in order to obtain a method for judging image quality in an intuitive, visual way. Light beams from several object points on the same object plane are analyzed and simulated by ray tracing, and an evaluation function that fully characterizes image quality is set up. In addition, based on this evaluation function, the best image plane is found by a search algorithm.
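
    The following toy sketch (not a grating ray trace) illustrates the evaluation-function idea only: rays from one object point are propagated to a series of candidate image planes, an RMS spot-size evaluation function is computed on each, and a simple scan finds the best plane; the ray model and aberration term are invented for illustration.

```python
"""Toy illustration of an image-quality evaluation function plus a scan
for the best image plane (not a diffraction-grating ray trace)."""
import numpy as np

rng = np.random.default_rng(5)
n_rays = 500
slopes = rng.normal(0.0, 0.01, n_rays)            # paraxial ray slopes dy/dz
aberration = 0.002 * slopes ** 3 / 0.01 ** 3      # invented cubic slope error
y0 = -slopes * 100.0                               # heights at z = 0 (paraxial focus at z = 100)

def rms_spot(z):
    """Evaluation function: RMS spot radius on the candidate plane z."""
    y = y0 + (slopes + aberration) * z
    return np.sqrt(np.mean(y ** 2))

planes = np.linspace(90.0, 110.0, 201)
scores = [rms_spot(z) for z in planes]
best = planes[int(np.argmin(scores))]
print(f"best image plane ~ z = {best:.1f} mm, RMS spot = {min(scores):.4f} mm")
```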

  12. Near-infrared hyperspectral imaging for quality analysis of agricultural and food products

    NASA Astrophysics Data System (ADS)

    Singh, C. B.; Jayas, D. S.; Paliwal, J.; White, N. D. G.

    2010-04-01

    Agricultural and food processing industries are always looking to implement real-time quality monitoring techniques as a part of good manufacturing practices (GMPs) to ensure high-quality and safety of their products. Near-infrared (NIR) hyperspectral imaging is gaining popularity as a powerful non-destructive tool for quality analysis of several agricultural and food products. This technique has the ability to analyse spectral data in a spatially resolved manner (i.e., each pixel in the image has its own spectrum) by applying both conventional image processing and chemometric tools used in spectral analyses. Hyperspectral imaging technique has demonstrated potential in detecting defects and contaminants in meats, fruits, cereals, and processed food products. This paper discusses the methodology of hyperspectral imaging in terms of hardware, software, calibration, data acquisition and compression, and development of prediction and classification algorithms and it presents a thorough review of the current applications of hyperspectral imaging in the analyses of agricultural and food products.

  13. A conceptual study of automatic and semi-automatic quality assurance techniques for ground image processing

    NASA Technical Reports Server (NTRS)

    1983-01-01

    This report summarizes the results of a study conducted by Engineering and Economics Research (EER), Inc. under NASA Contract Number NAS5-27513. The study involved the development of preliminary concepts for automatic and semiautomatic quality assurance (QA) techniques for ground image processing. A distinction is made between quality assessment and the more comprehensive quality assurance which includes decision making and system feedback control in response to quality assessment.

  14. Quality Index for Stereoscopic Images by Separately Evaluating Adding and Subtracting

    PubMed Central

    Yang, Jiachen; Lin, Yancong; Gao, Zhiqun; Lv, Zhihan; Wei, Wei; Song, Houbing

    2015-01-01

    The human visual system (HVS) plays an important role in stereo image quality perception, which has generated considerable interest in how to exploit knowledge of visual perception in image quality assessment models. This paper proposes a full-reference metric for quality assessment of stereoscopic images based on the binocular difference channel and the binocular summation channel. For a stereo pair, the binocular summation map and binocular difference map are first computed by adding and subtracting the left and right images. The binocular summation is then decoupled into two parts, namely additive impairments and detail losses, and its quality is obtained as an adaptive combination of the quality of the detail losses and the additive impairments, computed using the contrast sensitivity function (CSF) and weighted multi-scale structural similarity (MS-SSIM). Finally, the quality of the binocular summation and binocular difference channels is integrated into an overall quality index. The experimental results indicate that, compared with existing metrics, the proposed metric is highly consistent with subjective quality assessment and is a robust measure. The results also indirectly support the hypothesis that binocular summation and binocular difference channels exist. PMID:26717412

  15. Quality Index for Stereoscopic Images by Separately Evaluating Adding and Subtracting.

    PubMed

    Yang, Jiachen; Lin, Yancong; Gao, Zhiqun; Lv, Zhihan; Wei, Wei; Song, Houbing

    2015-01-01

    The human visual system (HVS) plays an important role in stereo image quality perception, which has generated considerable interest in how to exploit knowledge of visual perception in image quality assessment models. This paper proposes a full-reference metric for quality assessment of stereoscopic images based on the binocular difference channel and the binocular summation channel. For a stereo pair, the binocular summation map and binocular difference map are first computed by adding and subtracting the left and right images. The binocular summation is then decoupled into two parts, namely additive impairments and detail losses, and its quality is obtained as an adaptive combination of the quality of the detail losses and the additive impairments, computed using the contrast sensitivity function (CSF) and weighted multi-scale structural similarity (MS-SSIM). Finally, the quality of the binocular summation and binocular difference channels is integrated into an overall quality index. The experimental results indicate that, compared with existing metrics, the proposed metric is highly consistent with subjective quality assessment and is a robust measure. The results also indirectly support the hypothesis that binocular summation and binocular difference channels exist. PMID:26717412
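
    A minimal sketch of the first step shared by the two records above: forming the binocular summation and difference maps by adding and subtracting the left and right images of a stereo pair; the CSF weighting, detail-loss decoupling and MS-SSIM pooling are not reproduced here.

```python
"""Binocular summation and difference maps from a (synthetic) stereo pair."""
import numpy as np

rng = np.random.default_rng(6)
left = rng.random((240, 320))
right = np.roll(left, 2, axis=1) + rng.normal(0, 0.02, left.shape)  # toy disparity + noise

summation_map = left + right          # binocular summation channel
difference_map = left - right         # binocular difference channel
print(summation_map.mean(), np.abs(difference_map).mean())
```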

  16. Application of wavelets to the evaluation of phantom images for mammography quality control.

    PubMed

    Alvarez, M; Pina, D R; Miranda, J R A; Duarte, S B

    2012-11-01

    The main goal of this work was to develop a methodology for the computed analysis of American College of Radiology (ACR) mammographic phantom images, to be used in a quality control (QC) program of mammographic services. Discrete wavelet transform processing was applied to enhance the quality of images from the ACR mammographic phantom and to allow a lower dose for automatic evaluations of equipment performance in a QC program. Regions of interest (ROIs) containing phantom test objects (e.g., masses, fibers and specks) were focalized for appropriate wavelet processing, which highlighted the characteristics of structures present in each ROI. To minimize false-positive detection, each ROI in the image was submitted to pattern recognition tests, which identified structural details of the focalized test objects. Geometric and morphologic parameters of the processed test object images were used to quantify the final level of image quality. The final purpose of this work was to establish the main computational procedures for algorithms of quality evaluation of ACR phantom images. These procedures were implemented, and satisfactory agreement was obtained when the algorithm scores for image quality were compared with the results of assessments by three experienced radiologists. An exploratory study of a potential dose reduction was performed based on the radiologist scores and on the algorithm evaluation of images treated by wavelet processing. The results were comparable with both methods, although the algorithm had a tendency to provide a lower dose reduction than the evaluation by observers. Nevertheless, the objective and more precise criteria used by the algorithm to score image quality gave the computational result a higher degree of confidence. The developed algorithm demonstrates the potential use of the wavelet image processing approach for objectively evaluating the mammographic image quality level in routine QC tests. The implemented computational procedures
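
    A hedged sketch of the wavelet-enhancement step described above, assuming the PyWavelets package is available; the ACR phantom scoring logic, ROI selection and thresholds are not part of this sketch.

```python
"""Sketch: decompose a phantom ROI with a 2-D DWT and boost the detail
sub-bands before any object-detection stage."""
import numpy as np
import pywt

def enhance_roi(roi, wavelet="db4", gain=2.0):
    cA, (cH, cV, cD) = pywt.dwt2(roi, wavelet)
    # emphasise detail coefficients to highlight specks and fibers
    return pywt.idwt2((cA, (gain * cH, gain * cV, gain * cD)), wavelet)

roi = np.random.default_rng(7).random((64, 64))   # placeholder phantom ROI
enhanced = enhance_roi(roi)
print(enhanced.shape)
```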

  17. Fusion and quality analysis for remote sensing images using contourlet transform

    NASA Astrophysics Data System (ADS)

    Choi, Yoonsuk; Sharifahmadian, Ershad; Latifi, Shahram

    2013-05-01

    Recent developments in remote sensing technologies have provided various images with high spatial and spectral resolutions. However, multispectral images have low spatial resolution and panchromatic images have low spectral resolution. Therefore, image fusion techniques are necessary to improve the spatial resolution of spectral images by injecting spatial details of high-resolution panchromatic images. The objective of image fusion is to provide useful information by improving the spatial resolution and the spectral information of the original images. The fusion results can be utilized in various applications, such as military, medical imaging, and remote sensing. This paper addresses two issues in image fusion: i) image fusion method and ii) quality analysis of fusion results. First, a new contourlet-based image fusion method is presented, which is an improvement over the wavelet-based fusion. This fusion method is then applied to a case study to demonstrate its fusion performance. Fusion framework and scheme used in the study are discussed in detail. Second, quality analysis for the fusion results is discussed. We employed various quality metrics in order to analyze the fusion results both spatially and spectrally. Our results indicate that the proposed contourlet-based fusion method performs better than the conventional wavelet-based fusion methods.

  18. Scientific assessment of the quality of OSIRIS images

    NASA Astrophysics Data System (ADS)

    Tubiana, C.; Güttler, C.; Kovacs, G.; Bertini, I.; Bodewits, D.; Fornasier, S.; Lara, L.; La Forgia, F.; Magrin, S.; Pajola, M.; Sierks, H.; Barbieri, C.; Lamy, P. L.; Rodrigo, R.; Koschny, D.; Rickman, H.; Keller, H. U.; Agarwal, J.; A'Hearn, M. F.; Barucci, M. A.; Bertaux, J.-L.; Besse, S.; Boudreault, S.; Cremonese, G.; Da Deppo, V.; Davidsson, B.; Debei, S.; De Cecco, M.; El-Maarry, M. R.; Fulle, M.; Groussin, O.; Gutiérrez-Marques, P.; Gutiérrez, P. J.; Hoekzema, N.; Hofmann, M.; Hviid, S. F.; Ip, W.-H.; Jorda, L.; Knollenberg, J.; Kramm, J.-R.; Kührt, E.; Küppers, M.; Lazzarin, M.; Lopez Moreno, J. J.; Marzari, F.; Massironi, M.; Michalik, H.; Moissl, R.; Naletto, G.; Oklay, N.; Scholten, F.; Shi, X.; Thomas, N.; Vincent, J.-B.

    2015-11-01

    Context. OSIRIS, the scientific imaging system onboard the ESA Rosetta spacecraft, has been imaging the nucleus of comet 67P/Churyumov-Gerasimenko and its dust and gas environment since March 2014. The images serve different scientific goals, from morphology and composition studies of the nucleus surface, to the motion and trajectories of dust grains, the general structure of the dust coma, the morphology and intensity of jets, gas distribution, mass loss, and dust and gas production rates. Aims: We present the calibration of the raw images taken by OSIRIS and address the accuracy that we can expect in our scientific results based on the accuracy of the calibration steps that we have performed. Methods: We describe the pipeline that has been developed to automatically calibrate the OSIRIS images. Through a series of steps, radiometrically calibrated and distortion corrected images are produced and can be used for scientific studies. Calibration campaigns were run on the ground before launch and throughout the years in flight to determine the parameters that are used to calibrate the images and to verify their evolution with time. We describe how these parameters were determined and we address their accuracy. Results: We provide a guideline to the level of trust that can be put into the various studies performed with OSIRIS images, based on the accuracy of the image calibration.

  19. Optimization of image quality and dose for Varian aS500 electronic portal imaging devices (EPIDs)

    NASA Astrophysics Data System (ADS)

    McGarry, C. K.; Grattan, M. W. D.; Cosgrove, V. P.

    2007-12-01

    This study was carried out to investigate whether the electronic portal imaging (EPI) acquisition process could be optimized, and as a result tolerance and action levels be set for the PIPSPro QC-3V phantom image quality assessment. The aim of the optimization process was to reduce the dose delivered to the patient while maintaining a clinically acceptable image quality. This is of interest when images are acquired in addition to the planned patient treatment, rather than images being acquired using the treatment field during a patient's treatment. A series of phantoms were used to assess image quality for different acquisition settings relative to the baseline values obtained following acceptance testing. Eight Varian aS500 EPID systems on four matched Varian 600C/D linacs and four matched Varian 2100C/D linacs were compared for consistency of performance and images were acquired at the four main orthogonal gantry angles. Images were acquired using a 6 MV beam operating at 100 MU min⁻¹ and the low-dose acquisition mode. Doses used in the comparison were measured using a Farmer ionization chamber placed at dmax in solid water. The results demonstrated that the number of reset frames did not have any influence on the image contrast, but the number of frame averages did. The expected increase in noise with corresponding decrease in contrast was also observed when reducing the number of frame averages. The optimal settings for the low-dose acquisition mode with respect to image quality and dose were found to be one reset frame and three frame averages. All patients at the Northern Ireland Cancer Centre are now imaged using one reset frame and three frame averages in the 6 MV 100 MU min⁻¹ low-dose acquisition mode. Routine EPID QC contrast tolerance (±10) and action (±20) levels using the PIPSPro phantom based around expected values of 190 (Varian 600C/D) and 225 (Varian 2100C/D) have been introduced. The dose at dmax from electronic portal imaging has been reduced

  20. Comparison of retinal image quality with spherical and customized aspheric intraocular lenses

    PubMed Central

    Guo, Huanqing; Goncharov, Alexander V.; Dainty, Chris

    2012-01-01

    We hypothesize that an intraocular lens (IOL) with higher-order aspheric surfaces customized for an individual eye provides improved retinal image quality, despite the misalignments that accompany cataract surgery. To test this hypothesis, ray-tracing eye models were used to investigate 10 designs of mono-focal single lens IOLs with rotationally symmetric spherical, aspheric, and customized surfaces. Retinal image quality of pseudo-phakic eyes using these IOLs together with individual variations in ocular and IOL parameters, are evaluated using a Monte Carlo analysis. We conclude that customized lenses should give improved retinal image quality despite the random errors resulting from IOL insertion. PMID:22574257

  1. Testing the quality of images for permanent magnet desktop MRI systems using specially designed phantoms

    NASA Astrophysics Data System (ADS)

    Qiu, Jianfeng; Wang, Guozhu; Min, Jiao; Wang, Xiaoyan; Wang, Pengcheng

    2013-12-01

    Our aim was to measure the performance of desktop magnetic resonance imaging (MRI) systems using specially designed phantoms, by testing imaging parameters and analysing the imaging quality. We designed multifunction phantoms with diameters of 18 and 60 mm for desktop MRI scanners in accordance with the American Association of Physicists in Medicine (AAPM) report no. 28. We scanned the phantoms with three permanent magnet 0.5 T desktop MRI systems, measured the MRI image parameters, and analysed imaging quality by comparing the data with the AAPM criteria and Chinese national standards. Image parameters included: resonance frequency, high contrast spatial resolution, low contrast object detectability, slice thickness, geometrical distortion, signal-to-noise ratio (SNR), and image uniformity. The image parameters of three desktop MRI machines could be measured using our specially designed phantoms, and most parameters were in line with the MRI quality control criteria, including: resonance frequency, high contrast spatial resolution, low contrast object detectability, slice thickness, geometrical distortion, image uniformity and slice position accuracy. However, SNR was significantly lower than in some references. Imaging tests and quality control are necessary for desktop MRI systems, and should be performed with the applicable phantom and corresponding standards.

  2. Intercomparison of methods for image quality characterization. I. Modulation transfer function

    SciTech Connect

    Samei, Ehsan; Ranger, Nicole T.; Dobbins, James T. III; Chen, Ying

    2006-05-15

    The modulation transfer function (MTF) and the noise power spectrum (NPS) are widely recognized as the most relevant metrics of resolution and noise performance in radiographic imaging. These quantities have commonly been measured using various techniques, the specifics of which can have a bearing on the accuracy of the results. As a part of a study aimed at comparing the relative performance of different techniques, in this paper we report on a comparison of two established MTF measurement techniques: one using a slit test device [Dobbins et al., Med. Phys. 22, 1581-1593 (1995)] and another using a translucent edge test device [Samei et al., Med. Phys. 25, 102-113 (1998)], with one another and with a third technique using an opaque edge test device recommended by a new international standard (IEC 62220-1, 2003). The study further aimed to substantiate the influence of various acquisition and processing parameters on the estimated MTF. The slit test device was made of 2 mm thick Pb slabs with a 12.5 μm opening. The translucent edge test device was made of a laminated and polished Pt0.9Ir0.1 alloy foil of 0.1 mm thickness. The opaque edge test device was made of a 2 mm thick W slab. All test devices were imaged on a representative indirect flat-panel digital radiographic system using three published beam qualities: 70 kV with 0.5 mm Cu filtration, 70 kV with 19 mm Al filtration, and 74 kV with 21 mm Al filtration (IEC-RQA5). The latter technique was also evaluated in conjunction with two external beam-limiting apertures (per IEC 62220-1), and with the tube collimator limiting the beam to the same area achieved with the apertures. The presampled MTFs were deduced from the acquired images by Fourier analysis techniques, and the results analyzed for relative values and the influence of impacting parameters. The findings indicated that the measurement technique has a notable impact on the resulting MTF estimate, with estimates from the overall IEC method
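
    A simplified sketch of an edge-based presampled MTF estimate in the spirit of the techniques compared above (ESF to LSF by differentiation, then MTF by Fourier transform); the angled-edge oversampling, binning and windowing details of the published methods are omitted.

```python
"""Edge-based MTF sketch: differentiate a synthetic edge spread function
(ESF) to get the line spread function (LSF), then take its spectrum."""
import numpy as np

x = np.arange(256)
esf = 0.5 * (1 + np.tanh((x - 128) / 2.5)) \
      + np.random.default_rng(8).normal(0, 0.002, 256)   # synthetic noisy ESF

lsf = np.gradient(esf)                      # LSF = derivative of ESF
lsf /= lsf.sum()                            # normalise to unit area (MTF(0) = 1)
mtf = np.abs(np.fft.rfft(lsf))              # magnitude spectrum of the LSF
freqs = np.fft.rfftfreq(lsf.size, d=1.0)    # cycles per pixel
print(f"MTF at 0.1 cy/px ~ {np.interp(0.1, freqs, mtf):.2f}")
```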

  3. Differential gloss quality scale experiment update: an appearance-based image quality standard initiative (INCITS W1.1)

    NASA Astrophysics Data System (ADS)

    Ng, Yee S.; Kuo, Chunghui; Maggard, Eric; Mashtare, Dale; Morris, Peter; Farnand, Susan

    2007-01-01

    Surface characteristics of a printed sample command a parallel group of visual attributes, beyond color, that determine perceived image quality, and they manifest themselves through various perceived gloss features such as differential gloss, gloss granularity, and gloss mottle. Extending the scope of ISO 19799, which covers a limited range of gloss levels and printing technologies, the objective of this study is to derive an appearance-based differential gloss quality scale ranging from very low to very high gloss levels and composed of various printing technology/substrate combinations. Three psychophysical experiment procedures were proposed, including the quality ruler method, pair comparison, and interval scaling with two anchor stimuli; the pair comparison procedure was subsequently dropped after a preliminary trial study because of concerns about experiment complexity and data consistency. In this paper, we compare the average quality scale obtained after mapping to the sharpness quality ruler with the average perceived differential gloss obtained via the interval scale. Our numerical analysis indicates a general inverse relationship between perceived image quality and the gloss variation in an image.

  4. Body image and college women's quality of life: The importance of being self-compassionate.

    PubMed

    Duarte, Cristiana; Ferreira, Cláudia; Trindade, Inês A; Pinto-Gouveia, José

    2015-06-01

    This study explored self-compassion as a mediator between body dissatisfaction, social comparison based on body image and quality of life in 662 female college students. Path analysis revealed that while controlling for body mass index, self-compassion mediated the impact of body dissatisfaction and unfavourable social comparisons on psychological quality of life. The path model accounted for 33 per cent of psychological quality of life variance. Findings highlight the importance of self-compassion as a mechanism that may operate on the association between negative body image evaluations and young women's quality of life.

  5. Application of image quality metamerism to investigate gold color area in cultural property

    NASA Astrophysics Data System (ADS)

    Miyata, Kimiyoshi; Tsumura, Norimichi

    2013-01-01

    A concept of image quality metamerism, an expansion of the conventional metamerism defined in color science, is introduced and applied to segment similar color areas in a cultural property. Image quality metamerism can unify different image quality attributes through a proposed index showing the degree of image quality metamerism. As a basic research step, the index is composed of color and texture information and is examined on a cultural property. The property investigated is a pair of folding screen paintings depicting the thriving city of Kyoto, designated as a nationally important cultural property in Japan. Gold-colored areas, painted with colorants of higher granularity than the other color areas, are evaluated locally with the image quality metamerism index, and the index is then visualized as a map showing the likelihood that each pixel is an image quality metamer of a reference pixel in the same image. This visualization amounts to a segmentation of areas whose color is similar but whose texture differs. The experimental results showed that the proposed method is effective at delineating the gold-colored areas of the property.

  6. ANALYZING WATER QUALITY WITH IMAGES ACQUIRED FROM AIRBORNE SENSORS

    EPA Science Inventory

    Monitoring different parameters of water quality can be a time consuming and expensive activity. However, the use of airborne light-sensitive (optical) instruments may enhance the abilities of resource managers to monitor water quality in rivers in a timely and cost-effective ma...

  7. Objectification of perceptual image quality for mobile video

    NASA Astrophysics Data System (ADS)

    Lee, Seon-Oh; Sim, Dong-Gyu

    2011-06-01

    This paper presents an objective video quality evaluation method for quantifying the subjective quality of digital mobile video. The proposed method aims to objectify the subjective quality by extracting edgeness and blockiness parameters. To evaluate the performance of the proposed algorithms, we carried out subjective video quality tests with the double-stimulus continuous quality scale method and obtained differential mean opinion score values for 120 mobile video clips. We then compared the performance of the proposed methods with that of existing methods in terms of the differential mean opinion score with 120 mobile video clips. Experimental results showed that the proposed methods were approximately 10% better than the edge peak signal-to-noise ratio of the J.247 method in terms of the Pearson correlation.
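
    As an illustration of the kind of blockiness feature referred to above, the sketch below compares the mean luminance jump across 8x8 block boundaries with the jump at non-boundary columns; this is an illustrative stand-in, not the paper's exact edgeness/blockiness definitions.

```python
"""Simple blockiness feature: ratio of mean absolute luminance differences
at 8x8 block boundaries to those at non-boundary columns."""
import numpy as np

def blockiness(frame, block=8):
    diffs = np.abs(np.diff(frame, axis=1))            # horizontal neighbour differences
    cols = np.arange(diffs.shape[1])
    boundary = (cols % block) == (block - 1)          # columns straddling block boundaries
    return diffs[:, boundary].mean() / (diffs[:, ~boundary].mean() + 1e-12)

frame = np.random.default_rng(9).random((144, 176))   # placeholder QCIF-size frame
print(f"blockiness ratio: {blockiness(frame):.2f}")
```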

  8. Optimization of image quality in breast tomosynthesis using lumpectomy and mastectomy specimens

    NASA Astrophysics Data System (ADS)

    Timberg, Pontus; Ruschin, Mark; Båth, Magnus; Hemdal, Bengt; Andersson, Ingvar; Svahn, Tony; Mattsson, Sören; Tingberg, Anders

    2007-03-01

    The purpose of this study was to determine how image quality in breast tomosynthesis (BT) is affected when acquisition modes are varied, using human breast specimens containing malignant tumors and/or microcalcifications. Images of thirty-one breast lumpectomy and mastectomy specimens were acquired on a BT prototype based on a Mammomat Novation (Siemens) full-field digital mammography system. BT image acquisitions of the same specimens were performed varying the number of projections, angular range, and detector signal collection mode (binned and nonbinned in the scan direction). An enhanced filtered back projection reconstruction method was applied with constant settings of spectral and slice thickness filters. The quality of these images was evaluated via relative visual grading analysis (VGA) human observer performance experiments using image quality criteria. Results from the relative VGA study indicate that image quality increases with number of projections and angular range. A binned detector collecting mode results in less noise, but reduced resolution of structures. Human breast specimens seem to be suitable for comparing image sets in BT with image quality criteria.

  9. Wave aberration of human eyes and new descriptors of image optical quality and visual performance.

    PubMed

    Lombardo, Marco; Lombardo, Giuseppe

    2010-02-01

    The expansion of wavefront-sensing techniques redefined the meaning of refractive error in clinical ophthalmology. Clinical aberrometers provide detailed measurements of the eye's wavefront aberration. The distribution and contribution of each higher-order aberration to the overall wavefront aberration in the individual eye can now be accurately determined and predicted. Using corneal or ocular wavefront sensors, studies have measured the interindividual and age-related changes in the wavefront aberration in the normal population with the goal of optimizing refractive surgery outcomes for the individual. New objective optical-quality metrics would lead to better use and interpretation of newly available information on aberrations in the eye. However, the first metrics introduced, based on sets of Zernike polynomials, are not completely suitable for depicting visual quality because they do not directly relate to the quality of the retinal image. Thus, several approaches to describe the real, complex optical performance of human eyes have been implemented. These include objective metrics that quantify the quality of the optical wavefront in the plane of the pupil (i.e., pupil-plane metrics) and others that quantify the quality of the retinal image (i.e., image-plane metrics). These metrics are derived from wavefront aberration information for the individual eye. This paper reviews the more recent knowledge of the wavefront aberration in human eyes and discusses the image-quality and optical-quality metrics and predictors that are now routinely calculated by wavefront-sensor software to describe the optical and image quality in the individual eye.
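
    As a small worked example of a pupil-plane metric of the kind discussed above: with Noll-normalised Zernike coefficients, the RMS wavefront error is the root sum of squares of the coefficients, so a higher-order RMS can be computed directly from the higher-order terms (the coefficient values below are illustrative).

```python
"""Higher-order RMS wavefront error from Noll-normalised Zernike coefficients."""
import numpy as np

# Noll-normalised coefficients (micrometres) for a set of higher-order terms
# (e.g. coma, trefoil, spherical aberration); values are illustrative only.
higher_order = np.array([0.12, -0.05, 0.08, 0.03, -0.02, 0.01, 0.04])
rms_ho = np.sqrt(np.sum(higher_order ** 2))
print(f"higher-order RMS wavefront error = {rms_ho:.3f} um")
```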

  10. Survey of mammography practice in Croatia: equipment performance, image quality and dose.

    PubMed

    Faj, Dario; Posedel, Dario; Stimac, Damir; Ivezic, Zdravko; Kasabasic, Mladen; Ivkovic, Ana; Kubelka, Dragan; Ilakovac, Vesna; Brnic, Zoran; Bjelac, Olivera Ciraj

    2008-01-01

    A national audit of mammography equipment performance, image quality and dose has been conducted in Croatia. Film-processing parameters, optical density (OD), average glandular dose (AGD) to the standard breast, viewing conditions and image quality were examined using the TOR(MAM) test object. Average film gradient ranged from 2.6 to 3.7, with a mean of 3.1. Tube voltage used for imaging of the standard 45 mm polymethylmethacrylate phantom ranged from 24 to 34 kV, and OD ranged from 0.75 to 1.94 with a mean of 1.26. AGD to the standard breast ranged from 0.4 to 2.3 mGy with a mean of 1.1 mGy. In addition to the clinical conditions, the authors imaged the standard phantom under reference conditions with 28 kV and OD as close as possible to 1.5; AGD then ranged from 0.5 to 2.6 mGy with a mean of 1.3 mGy. Image viewing conditions were generally unsatisfactory, with ambient light up to 500 lx and most of the viewing boxes with luminance between 1000 and 2000 cd/m². TOR(MAM) scoring of images taken under clinical and reference conditions was done by local radiologists in local image viewing conditions and by the reference radiologist in good image viewing conditions. The importance of OD and image viewing conditions for diagnostic information was analysed. The survey showed that the main problem in Croatia is the lack of written quality assurance/quality control (QA/QC) procedures. Consequently, equipment performance, image quality and dose are unstable and activities to improve image quality or to reduce the dose are not evidence-based. This survey also had an educational purpose, introducing in Croatia QC based on European Commission Guidelines.

  11. Three-dimensional volumetric display of CT data: effect of scan parameters upon image quality.

    PubMed

    Ney, D R; Fishman, E K; Magid, D; Robertson, D D; Kawashima, A

    1991-01-01

    Of the many steps involved in producing high quality three-dimensional (3D) images of CT data, the data acquisition step is of greatest consequence. The principle of "garbage in, garbage out" applies to 3D imaging--bad scanning technique produces equally bad 3D images. We present a formal study of the effect of two basic scanning parameters, slice thickness and slice spacing, on image quality. Three standard test objects were studied using variable CT scanning parameters. The objects chosen were a bone phantom, a cadaver femur with a simulated 5 mm fracture gap, and a cadaver femur with a simulated 1 mm fracture gap. Each object was scanned at three collimations: 8, 4, and 2 mm. For each collimation, four sets of scans were performed using four slice intervals: 8, 4, 3, and 2 mm. The bone phantom was scanned in two positions: oriented perpendicular to the scanning plane and oriented 45 degrees from the scanning plane. Three-dimensional images of the resulting 48 sets of data were produced using volumetric rendering. Blind review of the resultant 48 data sets was performed by three reviewers rating five factors for each image. The images resulting from scans with thin collimation and small table increments proved to rate the highest in all areas. The data obtained using 2 mm slice intervals proved to rate the highest in perceived image quality. Three millimeter slice spacing with 4 mm collimation, which clinically provides a good compromise between image quality and acquisition time and dose, also produced good perceived image quality. The studies with 8 mm slice intervals provided the least detail and introduced the worst inaccuracies and artifacts and were not suitable for clinical use. Statistical analysis demonstrated that slice interval (i.e., table incrementation) was of primary importance and slice collimation was of secondary, although significant, importance in determining perceived 3D image quality.

  12. Do SE(II) electrons really degrade SEM image quality?

    PubMed

    Bernstein, Gary H; Carter, Andrew D; Joy, David C

    2013-01-01

    Generally, in scanning electron microscopy (SEM) imaging, it is desirable that a high-resolution image be composed mainly of those secondary electrons (SEs) generated by the primary electron beam, denoted SE(I). However, in conventional SEM imaging, other, often unwanted, signal components consisting of backscattered electrons (BSEs), and their associated SEs, denoted SE(II), are present; these signal components contribute a random background signal that degrades contrast, and therefore signal-to-noise ratio and resolution. Ideally, the highest resolution SEM image would consist only of the SE(I) component. In SEMs that use conventional pinhole lenses and their associated Everhart-Thornley detectors, the image is composed of several components, including SE(I), SE(II), and some BSE, depending on the geometry of the detector. Modern snorkel lens systems eliminate the BSEs, but not the SE(II)s. We present a microfabricated diaphragm for minimizing the unwanted SE(II) signal components. We present evidence of improved imaging using a microlithographically generated pattern of Au, about 500 nm thick, that blocks most of the undesired signal components, leaving an image composed mostly of SE(I)s. We refer to this structure as a "spatial backscatter diaphragm."

  13. TU-F-9A-01: Balancing Image Quality and Dose in Radiography

    SciTech Connect

    Peck, D; Pasciak, A

    2014-06-15

    Emphasis is often placed on minimizing radiation dose in diagnostic imaging without a complete consideration of the effects on image quality, especially those that affect diagnostic accuracy. This session will include a patient image-based review of diagnostic quantities important to radiologists in conventional radiography, including the effects of body habitus, age, positioning, and the clinical indication of the exam. The relationships between image quality, radiation dose, and radiation risk will be discussed, specifically addressing how these factors are affected by image protocols and acquisition parameters and techniques. This session will also discuss some of the actual and perceived radiation risk associated with diagnostic imaging. Even if the probability of radiation-induced cancer is small, the fear associated with radiation persists. Also, when a risk carries a benefit for an individual or for society, the risk may be justified with respect to that benefit. But how do you convey the risks and the benefits to people? This requires knowledge of how people perceive risk and how to communicate the risk and the benefit to different populations. In this presentation the sources of errors in estimating risk from radiation and some methods used to convey risks are reviewed. Learning Objectives: Understand the image quality metrics that are clinically relevant to radiologists. Understand how acquisition parameters and techniques affect image quality and radiation dose in conventional radiology. Understand the uncertainties in estimates of radiation risk from imaging exams. Learn some methods for effectively communicating radiation risk to the public.

  14. Image quality improvement in megavoltage cone beam CT using an imaging beam line and a sintered pixelated array system

    SciTech Connect

    Breitbach, Elizabeth K.; Maltz, Jonathan S.; Gangadharan, Bijumon; Bani-Hashemi, Ali; Anderson, Carryn M.; Bhatia, Sudershan K.; Stiles, Jared; Edwards, Drake S.; Flynn, Ryan T.

    2011-11-15

    Purpose: To quantify the improvement in megavoltage cone beam computed tomography (MVCBCT) image quality enabled by the combination of a 4.2 MV imaging beam line (IBL) with a carbon electron target and a detector system equipped with a novel sintered pixelated array (SPA) of translucent Gd₂O₂S ceramic scintillator. Clinical MVCBCT images are traditionally acquired with the same 6 MV treatment beam line (TBL) that is used for cancer treatment, a standard amorphous Si (a-Si) flat panel imager, and the Kodak Lanex Fast-B (LFB) scintillator. The IBL produces a greater fluence of keV-range photons than the TBL, to which the detector response is more optimal, and the SPA is a more efficient scintillator than the LFB. Methods: A prototype IBL + SPA system was installed on a Siemens Oncor linear accelerator equipped with the MVision™ image guided radiation therapy (IGRT) system. A SPA strip consisting of four neighboring tiles and measuring 40 cm by 10.96 cm in the crossplane and inplane directions, respectively, was installed in the flat panel imager. Head- and pelvis-sized phantom images were acquired at doses ranging from 3 to 60 cGy with three MVCBCT configurations: TBL + LFB, IBL + LFB, and IBL + SPA. Phantom image quality at each dose was quantified using the contrast-to-noise ratio (CNR) and modulation transfer function (MTF) metrics. Head and neck, thoracic, and pelvic (prostate) cancer patients were imaged with the three imaging system configurations at multiple doses ranging from 3 to 15 cGy. The systems were assessed qualitatively from the patient image data. Results: For head and neck and pelvis-sized phantom images, imaging doses of 3 cGy or greater, and relative electron densities of 1.09 and 1.48, the CNR average improvement factors for imaging system change of TBL + LFB to IBL + LFB, IBL + LFB to IBL + SPA, and TBL + LFB to IBL + SPA were 1.63 (p < 10⁻⁸), 1.64 (p < 10⁻¹³), 2.66 (p < 10⁻⁹), respectively. For all imaging

  15. A compliant-mechanism approach to achieving specific quality of motion in a lumbar total disc replacement

    PubMed Central

    Halverson, Peter A.; Bowden, Anton E.; Howell, Larry L.

    2012-01-01

    Background The current generation of total disc replacements achieves excellent short- and medium-term results by focusing on restoring the quantity of motion. Recent studies indicate that additional concerns (helical axes of motion, segmental torque-rotation behavior) may have important implications in the health of adjacent segments as well as the health of the surrounding tissue of the operative level. The objective of this article is to outline the development, validation, and biomechanical performance of a novel, compliant-mechanism total disc replacement that addresses these concerns by including them as essential design criteria. Methods Compliant-mechanism design techniques were used to design a total disc replacement capable of replicating the moment-rotation response and the location and path of the helical axis of motion. A prototype was evaluated with the use of bench-top testing and single-level cadaveric experiments in flexion-extension, lateral bending, and axial torsion. Results Bench-top testing confirmed that the moment-rotation response of the disc replacement matched the intended design behavior. Cadaveric testing confirmed that the moment-rotation and displacement response of the implanted segment mimicked those of the healthy spinal segment. Conclusions Incorporation of segmental quality of motion into the foundational stages of the design process resulted in a total disc replacement design that provides torque-rotation and helical axis–of–motion characteristics to the adjacent segments and the operative-level facets that are similar to those observed in healthy spinal segments. PMID:25694875

  16. Whole-body CT in polytrauma patients: The effect of arm position on abdominal image quality when using a human phantom

    NASA Astrophysics Data System (ADS)

    Jeon, Pil-Hyun; Kim, Hee-Joung; Lee, Chang-Lae; Kim, Dae-Hong; Lee, Won-Hyung; Jeon, Sung-Su

    2012-06-01

    For a considerable number of emergency computed tomography (CT) scans, patients are unable to position their arms above their head due to traumatic injuries. The arms-down position has been shown to reduce image quality with beam-hardening artifacts in the dorsal regions of the liver, spleen, and kidneys, rendering these images non-diagnostic. The purpose of this study was to evaluate the effect of arm position on the image quality in patients undergoing whole-body CT. We acquired CT scans with various acquisition parameters at voltages of 80, 120, and 140 kVp and an increasing tube current from 200 to 400 mAs in 50 mAs increments. The image noise and the contrast assessment were considered for quantitative analyses of the CT images. The image noise (IN), the contrast-to-noise ratio (CNR), the signal-to-noise ratio (SNR), and the coefficient of variation (COV) were evaluated. Quantitative analyses of the experiments were performed with CT scans representative of five different arm positions. Results of the CT scans acquired at 120 kVp and 250 mAs showed high image quality in patients with both arms raised above the head (SNR: 12.4, CNR: 10.9, and COV: 8.1) and both arms flexed at the elbows on the chest (SNR: 11.5, CNR: 10.2, and COV: 8.8) while the image quality significantly decreased with both arms in the down position (SNR: 9.1, CNR: 7.6, and COV: 11). Both arms raised, one arm raised, and both arms flexed improved the image quality compared to arms in the down position by reducing beam-hardening and streak artifacts caused by the arms being at the side of body. This study provides optimal methods for achieving higher image quality and lower noise in abdominal CT for trauma patients.
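
    A minimal sketch of how the quantitative metrics quoted above (SNR, CNR and COV) are computed from region-of-interest statistics; the ROI values are synthetic placeholders, not data from the study.

```python
"""ROI-based image quality metrics: SNR, CNR and coefficient of variation."""
import numpy as np

rng = np.random.default_rng(10)
roi_tissue = rng.normal(60.0, 5.0, 1000)       # HU values in an organ ROI (placeholder)
roi_background = rng.normal(10.0, 5.0, 1000)   # HU values in a reference ROI (placeholder)

noise = roi_background.std()
snr = roi_tissue.mean() / noise
cnr = (roi_tissue.mean() - roi_background.mean()) / noise
cov = 100.0 * roi_tissue.std() / roi_tissue.mean()   # coefficient of variation, %
print(f"SNR = {snr:.1f}  CNR = {cnr:.1f}  COV = {cov:.1f}%")
```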

  17. Nondestructive spectroscopic and imaging techniques for quality evaluation and assessment of fish and fish products.

    PubMed

    He, Hong-Ju; Wu, Di; Sun, Da-Wen

    2015-01-01

    Nowadays, people have increasingly realized the importance of high quality and nutritional value in the fish and fish products in their daily diet. Quality evaluation and assessment are expected to be conducted using rapid and nondestructive methods in order to satisfy both producers and consumers. During the past two decades, spectroscopic and imaging techniques have been developed to nondestructively estimate and measure quality attributes of fish and fish products. Among these noninvasive methods, visible/near-infrared (VIS/NIR) spectroscopy, computer/machine vision, and hyperspectral imaging have been regarded as powerful and effective analytical tools for fish quality analysis and control. VIS/NIR spectroscopy has been widely applied to determine intrinsic quality characteristics of fish samples, such as moisture, protein, fat, and salt. Computer/machine vision, on the other hand, mainly focuses on the estimation of external features like color, weight, size, and surface defects. Recently, by incorporating both spectroscopy and imaging techniques in one system, hyperspectral imaging can not only measure the contents of different quality attributes simultaneously, but also obtain the spatial distribution of such attributes when the quality of fish samples is evaluated and measured. This paper systematically reviews the research advances of these three nondestructive optical techniques in the application of fish quality evaluation and determination and discusses future trends in the development of nondestructive technologies for further quality characterization of fish and fish products.

  18. Effect of shaped filter design on dose and image quality in breast CT.

    PubMed

    Lück, Ferdinand; Kolditz, Daniel; Hupfer, Martin; Kalender, Willi A

    2013-06-21

    homogeneous scatter distribution were reached which led to reduced cupping artefacts. The simulations with one shaped filter at variable source-to-filter distance resulted in nearly homogeneous noise distributions and comparable dose reduction for all breast diameters. In conclusion, by means of shaped filters designed for breast CT, significant dose reduction can be achieved at unimpaired image quality. One shaped filter designed for the largest breast diameter used with variable source-to-filter distance appears to be the best solution for breast CT. PMID:23715466

  19. Effect of shaped filter design on dose and image quality in breast CT

    NASA Astrophysics Data System (ADS)

    Lück, Ferdinand; Kolditz, Daniel; Hupfer, Martin; Kalender, Willi A.

    2013-06-01

    homogeneous scatter distribution were reached which led to reduced cupping artefacts. The simulations with one shaped filter at variable source-to-filter distance resulted in nearly homogeneous noise distributions and comparable dose reduction for all breast diameters. In conclusion, by means of shaped filters designed for breast CT, significant dose reduction can be achieved at unimpaired image quality. One shaped filter designed for the largest breast diameter used with variable source-to-filter distance appears to be the best solution for breast CT.

  20. Methodology for Quantitative Characterization of Fluorophore Photoswitching to Predict Superresolution Microscopy Image Quality.

    PubMed

    Bittel, Amy M; Nickerson, Andrew; Saldivar, Isaac S; Dolman, Nick J; Nan, Xiaolin; Gibbs, Summer L

    2016-01-01

    Single-molecule localization microscopy (SMLM) image quality and resolution strongly depend on the photoswitching properties of fluorophores used for sample labeling. Development of fluorophores with optimized photoswitching will considerably improve SMLM spatial and spectral resolution. Currently, evaluating fluorophore photoswitching requires protein-conjugation before assessment mandating specific fluorophore functionality, which is a major hurdle for systematic