Science.gov

Sample records for achievable image quality

  1. Subpixel shift with Fourier transform to achieve efficient and high-quality image interpolation

    NASA Astrophysics Data System (ADS)

    Chen, Qin-Sheng; Weinhous, Martin S.

    1999-05-01

    A new approach to image interpolation is proposed. Unlike the conventional scheme, the interpolation of a digital image is achieved with a sub-unity coordinate shift technique. In this approach, the original image is first shifted by sub-unity distances matching the locations where the image values need to be restored. The original and the shifted images are then interleaved, yielding an interpolated image. The high-quality sub-unity image shift that is crucial to the approach is accomplished by applying the shift theorem of the Fourier transform. It is well known that, under the Nyquist sampling criterion, the most accurate image interpolation is achieved with the sinc interpolating function; its major drawback is poor computational efficiency. The present approach can achieve interpolation quality as good as that of the sinc function, since the sub-unity shift in the Fourier domain is equivalent to shifting the sinc function in the spatial domain, while efficiency, thanks to the fast Fourier transform, is greatly improved. In comparison with conventional interpolation techniques such as linear or cubic B-spline interpolation, the interpolation accuracy is significantly enhanced. To compensate for the under-sampling effects in the interpolation of 3D medical images caused by larger inter-slice distances, appropriate window functions are recommended. Application of the approach to 2D and 3D CT and MRI images produced satisfactory interpolation results.
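
    A minimal NumPy sketch of the core idea described above (an illustration, not the authors' code): shift the image by a sub-pixel amount via a phase ramp in the Fourier domain, then interleave the original and shifted samples to double the sampling density along one axis.

```python
import numpy as np

def fourier_subpixel_shift(img, dy, dx):
    """Band-limited evaluation of a 2D image at positions offset by (dy, dx):
    result[r, c] approximates img sampled at (r + dy, c + dx)."""
    H, W = img.shape
    fy = np.fft.fftfreq(H)[:, None]   # cycles per pixel along rows
    fx = np.fft.fftfreq(W)[None, :]   # cycles per pixel along columns
    phase = np.exp(2j * np.pi * (fy * dy + fx * dx))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * phase))

def interpolate_2x_along_x(img):
    """Double the horizontal sampling by interleaving the original image with
    a copy evaluated at half-pixel offsets (the paper's sub-unity shift idea)."""
    shifted = fourier_subpixel_shift(img, 0.0, 0.5)
    H, W = img.shape
    out = np.empty((H, 2 * W), dtype=float)
    out[:, 0::2] = img       # original integer-pixel samples
    out[:, 1::2] = shifted   # values restored at the half-pixel locations
    return out
```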

  2. Achieving Quality in Cardiovascular Imaging II: proceedings from the Second American College of Cardiology -- Duke University Medical Center Think Tank on Quality in Cardiovascular Imaging.

    PubMed

    Douglas, Pamela S; Chen, Jersey; Gillam, Linda; Hendel, Robert; Hundley, W Gregory; Masoudi, Frederick; Patel, Manesh R; Peterson, Eric

    2009-02-01

    Despite rapid technologic advances and sustained growth, less attention has been focused on quality in imaging than in other areas of cardiovascular medicine. To address this deficit, representatives from cardiovascular imaging societies, private payers, government agencies, the medical imaging industry, and experts in quality measurement met in the second Quality in Cardiovascular Imaging Think Tank. The participants endorsed the previous consensus definition of quality in imaging and proposed quality measures. Additional areas of needed effort included data standardization and structured reporting, appropriateness criteria, imaging registries, laboratory accreditation, partnership development, and imaging research. The second American College of Cardiology-Duke University Think Tank continued the process of the development, dissemination, and adoption of quality improvement initiatives for all cardiovascular imaging modalities.

  3. Achieving Quality in Occupational Health

    NASA Technical Reports Server (NTRS)

    O'Donnell, Michele (Editor); Hoffler, G. Wyckliffe (Editor)

    1997-01-01

    The conference convened approximately 100 registered participants of invited guest speakers, NASA presenters, and a broad spectrum of the Occupational Health disciplines representing NASA Headquarters and all NASA Field Centers. Centered on the theme, "Achieving Quality in Occupational Health," conferees heard presentations from award winning occupational health program professionals within the Agency and from private industry; updates on ISO 9000 status, quality assurance, and information technologies; workshops on ergonomics and respiratory protection; an overview from the newly commissioned NASA Occupational Health Assessment Team; and a keynote speech on improving women's health. In addition, NASA occupational health specialists presented 24 poster sessions and oral deliveries on various aspects of current practice at their field centers.

  4. Gifted Student Academic Achievement and Program Quality

    ERIC Educational Resources Information Center

    Jordan, Katrina Ann Woolsey

    2010-01-01

    Gifted academic achievement has been identified as a major area of interest for educational researchers. The purpose of this study was to ascertain whether there was a relation between the quality of gifted programs as perceived by teachers, coordinators and supervisors of the gifted and the achievement of the same gifted students in 6th and 7th…

  5. Retinal Image Quality During Accommodation

    PubMed Central

    López-Gil, N.; Martin, J.; Liu, T.; Bradley, A.; Díaz-Muñoz, D.; Thibos, L.

    2013-01-01

    Purpose We asked whether retinal image quality is maximal during accommodation, or sub-optimal due to accommodative error, when subjects perform an acuity task. Methods Subjects viewed a monochromatic (552 nm), high-contrast letter target placed at various viewing distances. Wavefront aberrations of the accommodating eye were measured near the endpoint of an acuity staircase paradigm. Refractive state, defined as the optimum target vergence for maximizing retinal image quality, was computed by through-focus wavefront analysis to find the power of the virtual correcting lens that maximizes the visual Strehl ratio. Results Despite changes in ocular aberrations and pupil size during binocular viewing, retinal image quality and visual acuity typically remain high for all target vergences. When accommodative errors lead to sub-optimal retinal image quality, acuity and measured image quality both decline. However, the effect of accommodative errors on visual acuity is mitigated by the pupillary constriction associated with accommodation and binocular convergence, and by binocular summation of dissimilar retinal image blur. Under monocular viewing conditions some subjects displayed significant accommodative lag that reduced visual performance, an effect that was exacerbated by pharmacological dilation of the pupil. Conclusions Spurious measurement of accommodative error can be avoided when the image quality metric used to determine refractive state is compatible with the focusing criteria used by the visual system to control accommodation. Real focusing errors of the accommodating eye do not necessarily produce a reliably measurable loss of image quality or a clinically significant loss of visual performance, probably because of the increased depth-of-focus due to pupil constriction. When retinal image quality is close to the maximum achievable (given the eye's higher-order aberrations), acuity is also near maximum. A combination of accommodative lag, reduced image quality, and reduced…

  6. Perceived image quality assessment for color images on mobile displays

    NASA Astrophysics Data System (ADS)

    Jang, Hyesung; Kim, Choon-Woo

    2015-01-01

    With increases in the size and resolution of mobile displays and advances in embedded processors for image enhancement, the perceived quality of images on mobile displays has improved drastically. This paper presents a quantitative method to evaluate the perceived quality of color images on mobile displays. Three image quality attributes, colorfulness, contrast, and brightness, are chosen to represent perceived image quality. Image quality assessment models are constructed based on the results of human visual experiments. In this paper, three-phase human visual experiments are designed to achieve credible outcomes while reducing the time and resources needed for visual experiments. Values of the parameters of the image quality assessment models are estimated from the results of the human visual experiments. The performances of different image quality assessment models are compared.
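
    The three attributes can be quantified in many ways; as a rough illustration (not the authors' exact assessment models), the following sketch uses the Hasler-Süsstrunk colorfulness measure, RMS contrast of the luma channel, and mean luma as simple proxies for colorfulness, contrast, and brightness.

```python
import numpy as np

def colorfulness(rgb):
    """Hasler & Suesstrunk (2003) colorfulness metric for an RGB array in [0, 255]."""
    r, g, b = (rgb[..., i].astype(float) for i in range(3))
    rg = r - g
    yb = 0.5 * (r + g) - b
    return (np.sqrt(rg.std() ** 2 + yb.std() ** 2)
            + 0.3 * np.sqrt(rg.mean() ** 2 + yb.mean() ** 2))

def rms_contrast(rgb):
    """RMS contrast of the luma channel (ITU-R BT.601 weights), in [0, 1]."""
    luma = rgb[..., :3].astype(float) @ np.array([0.299, 0.587, 0.114])
    return luma.std() / 255.0

def mean_brightness(rgb):
    """Mean luma, normalised to [0, 1]."""
    luma = rgb[..., :3].astype(float) @ np.array([0.299, 0.587, 0.114])
    return luma.mean() / 255.0
```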

  7. Achieving the Quality Difference: Making Customers Count

    DTIC Science & Technology

    1989-06-02

    the Conference. Welcome to the Conference, Wednesday, May 31, 1989. Frank Hodsoll: "The challenges are great, but so are the creativity and dedication of... framework which will support this achievement. Mr. Hodsoll stressed that many challenges lie ahead in the pursuit of Total Quality; many changes will have to... problems faced by the organization are with these people, and it is important to involve them. His experience has also shown that by making oneself...

  8. Achieving quality assurance through clinical audit.

    PubMed

    Patel, Seraphim

    2010-06-01

    Audit is a crucial component of improvements to the quality of patient care. Clinical audits are undertaken to help ensure that patients can be given safe, reliable and dignified care, and to encourage them to self-direct their recovery. Such audits are undertaken also to help reduce lengths of patient stay in hospital, readmission rates and delays in discharge. This article describes the stages of clinical audit and the support required to achieve organisational core values.

  9. Image quality (IQ) guided multispectral image compression

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng; Chen, Genshe; Wang, Zhonghai; Blasch, Erik

    2016-05-01

    Image compression is necessary for data transportation, as it saves both transfer time and storage space. In this paper, we focus our discussion on lossy compression. There are many standard image formats and corresponding compression algorithms, for example JPEG (DCT, discrete cosine transform), JPEG 2000 (DWT, discrete wavelet transform), BPG (better portable graphics), and TIFF (LZW, Lempel-Ziv-Welch). The image quality (IQ) of the decompressed image is measured by numerical metrics such as root mean square error (RMSE), peak signal-to-noise ratio (PSNR), and the structural similarity (SSIM) index. Given an image and a specified IQ, we investigate how to select a compression method and its parameters to achieve the expected compression. Our scenario consists of three steps. The first step is to compress a set of images of interest with varying parameters and compute their IQs for each compression method. The second step is to create several regression models per compression method after analyzing the IQ measurement versus the compression parameter for a number of compressed images. The third step is to compress the given image at the specified IQ using the selected compression method (JPEG, JPEG 2000, BPG, or TIFF) according to the regression models. The IQ may be specified by a compression ratio (e.g., 100), in which case we select the compression method with the highest IQ (SSIM or PSNR); or the IQ may be specified by an IQ metric (e.g., SSIM = 0.8, or PSNR = 50), in which case we select the compression method with the highest compression ratio. Our experiments on thermal (long-wave infrared) images (in gray scale) showed very promising results.
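
    A sketch of steps 1 and 2 of this scenario for a single codec, assuming Pillow for JPEG encoding and PSNR as the IQ metric (the other codecs and the SSIM/RMSE metrics would be handled analogously); the fitted curve can then be inverted to choose the quality setting that meets a target IQ. The file name in the usage comment is hypothetical.

```python
import io
import numpy as np
from PIL import Image

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio between two arrays of the same shape."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def fit_jpeg_quality_model(img, qualities=range(10, 96, 5)):
    """Compress at several JPEG quality settings and fit PSNR as a quadratic
    function of the quality parameter (one simple per-codec regression model)."""
    gray = img.convert("L")
    ref = np.asarray(gray)
    q_list, psnr_list = [], []
    for q in qualities:
        buf = io.BytesIO()
        gray.save(buf, format="JPEG", quality=q)
        buf.seek(0)
        q_list.append(q)
        psnr_list.append(psnr(ref, np.asarray(Image.open(buf))))
    return np.polyfit(q_list, psnr_list, deg=2)

# Hypothetical usage:
#   coeffs = fit_jpeg_quality_model(Image.open("scene.png"))
#   predicted_psnr_at_q70 = np.polyval(coeffs, 70)
```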

  10. Achievements and challenges of EUV mask imaging

    NASA Astrophysics Data System (ADS)

    Davydova, Natalia; van Setten, Eelco; de Kruif, Robert; Connolly, Brid; Fukugami, Norihito; Kodera, Yutaka; Morimoto, Hiroaki; Sakata, Yo; Kotani, Jun; Kondo, Shinpei; Imoto, Tomohiro; Rolff, Haiko; Ullrich, Albrecht; Lammers, Ad; Schiffelers, Guido; van Dijk, Joep

    2014-07-01

    The impact of various mask parameters on CDU, combined in a total mask budget, is presented for 22 nm lines on reticles used for NXE:3300 qualification. Apart from the standard mask CD measurements, actinic spectrometry of the multilayer is used to qualify reflectance uniformity over the image field; advanced 3D metrology is applied for absorber profile characterization, including absorber height and sidewall angle. The predicted mask impact on CDU is verified using actual exposure data collected on multiple NXE:3300 scanners. Mask 3D effects are addressed, manifesting themselves in best-focus shifts for different structures exposed with off-axis illumination. Experimental NXE:3300 results for 16 nm dense lines and 20 nm (semi-)isolated spaces are shown: the best-focus range reaches 24 nm. A mitigation strategy based on absorber height optimization is proposed, supported by experimental results from a special mask with varying absorber heights. Further development of a black image border for EUV masks is also considered. The image border is a pattern-free area surrounding the image field that prevents exposure of the image field's neighborhood on the wafer. A normal EUV absorber is not suitable for this purpose, as it has 1-3% EUV reflectance. A current solution is etching the multilayer (ML) down to the substrate, reducing EUV reflectance to <0.05%. The next step in the development of the black border is the reduction of DUV out-of-band reflectance (<1.5%) in order to cope with the DUV light present in EUV scanners. Promising results achieved in this direction are shown.

  11. Automatic no-reference image quality assessment.

    PubMed

    Li, Hongjun; Hu, Wei; Xu, Zi-Neng

    2016-01-01

    No-reference image quality assessment aims to predict the visual quality of distorted images without examining the original image as a reference. Most no-reference image quality metrics that have already been proposed are designed for one or a set of predefined specific distortion types and are unlikely to generalize to evaluating images degraded by other types of distortion. There is a strong need for no-reference image quality assessment methods that are applicable to various distortions. In this paper, the authors propose a no-reference image quality assessment method based on a natural image statistic model in the wavelet transform domain. A generalized Gaussian density model is employed to summarize the marginal distribution of the wavelet coefficients of the test images, so that correlative parameters are obtained for the evaluation of image quality. The proposed algorithm is tested on three large-scale benchmark databases. Experimental results demonstrate that the proposed algorithm is easy to implement and computationally efficient. Furthermore, our method can be applied to many well-known types of image distortion and achieves good prediction performance.
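
    As an illustration of the natural-scene-statistics idea (assuming PyWavelets and SciPy are available; this is not the authors' implementation), each wavelet detail subband of a grayscale image can be summarized by the shape and scale parameters of a generalized Gaussian fit, which then serve as no-reference quality features.

```python
import numpy as np
import pywt                      # PyWavelets
from scipy.stats import gennorm  # generalized Gaussian (generalized normal)

def ggd_subband_features(gray, wavelet="db4", levels=3):
    """Fit a zero-mean generalized Gaussian density to each detail subband and
    return its (shape, scale) parameters as no-reference quality features."""
    coeffs = pywt.wavedec2(np.asarray(gray, dtype=float), wavelet, level=levels)
    features = []
    for detail_level in coeffs[1:]:          # skip the approximation band
        for band in detail_level:            # horizontal, vertical, diagonal
            beta, _, alpha = gennorm.fit(band.ravel(), floc=0.0)
            features.append((beta, alpha))   # shape and scale of the fit
    return features
```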

  12. Basic Principles and Concepts for Achieving Quality

    DTIC Science & Technology

    2007-12-01

    Conceptual Framework for Quality: Definitions and Concepts for Quality (Object/Entity, Process, Requirement, User)... Figure 2: Customer and End User are the Same in Interactions With Organization... Figure 5: Interaction of Activities between Extended Quality Conceptual Framework and Development Project... Figure 6: Software Module Volatility...

  13. Achieving Quality Learning in Higher Education.

    ERIC Educational Resources Information Center

    Nightingale, Peggy; O'Neil, Mike

    This volume on quality learning in higher education discusses issues of good practice particularly action learning and Total Quality Management (TQM)-type strategies and illustrates them with seven case studies in Australia and the United Kingdom. Chapter 1 discusses issues and problems in defining quality in higher education. Chapter 2 looks at…

  14. [Achieving quality goals for bodies of water].

    PubMed

    Cencetti, Corrado; Guidi, Massimo; Martinelli, Angiolo; Patrizi, Giuseppe

    2005-01-01

    The aim of this paper is to describe the relationship between environmental factors and some of the impacts due to human activity, in order to outline strategies for restoring the environmental quality of water bodies, including among the result indicators the biological parameters required by Italian regulation and European directives. Morphological equilibrium and correct knowledge of the processes regulating fluvial dynamics, as basic factors underlying ecosystem functionality, are highlighted. Statistical evaluation of water quality data and the implementation and validation of mathematical models are described.

  15. Achieving Product Qualities Through Software Architecture Practices

    DTIC Science & Technology

    2016-06-14

    Business Goals: high quality, quick time to market, effective use of limited resources, product alignment, low cost production, low cost... time, build time, design time. System: user interface, platform, environment, system that interoperates with target system. © 2004 by Carnegie Mellon University...

  16. Image Enhancement, Image Quality, and Noise

    NASA Technical Reports Server (NTRS)

    Rahman, Zia-ur; Jobson, Daniel J.; Woodell, Glenn A.; Hines, Glenn D.

    2005-01-01

    The Multiscale Retinex With Color Restoration (MSRCR) is a non-linear image enhancement algorithm that provides simultaneous dynamic range compression, color constancy, and rendition. The overall effect is to brighten areas of poor contrast/lightness, but not at the expense of saturating areas of good contrast/brightness. The downside is that, given the poor signal-to-noise ratio that most image acquisition devices have in dark regions, noise can also be greatly enhanced, affecting overall image quality. In this paper, we discuss the impact of the MSRCR on the overall quality of an enhanced image as a function of the strength of shadows in the image and of its root-mean-square (RMS) signal-to-noise ratio (SNR).
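
    A minimal sketch of a plain multiscale retinex on one channel (omitting the color-restoration and final gain/offset steps that MSRCR adds), assuming SciPy for the Gaussian surrounds.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_retinex(channel, sigmas=(15, 80, 250), eps=1.0):
    """Average of single-scale retinex outputs, log(image) - log(Gaussian surround),
    for one color channel given as a float array of non-negative values."""
    x = np.asarray(channel, dtype=float) + eps   # eps avoids log(0)
    out = np.zeros_like(x)
    for sigma in sigmas:
        surround = gaussian_filter(x, sigma)
        out += np.log(x) - np.log(surround)
    return out / len(sigmas)
```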

  17. Collaborative networks: helping rural laboratories achieve quality.

    PubMed

    Hassell, Lewis A; Fogler, Martha W; Russell, Sonia E

    2006-01-01

    Rural hospital laboratories can combine some of their significant advantages with the benefits of an egalitarian, collaborative network to create a setting in which the disadvantages of their size (such as limited skill sets and resources) may be more readily overcome. Participation in a knowledge-sharing network, a coordinated effort at reference-range establishment and validation, and development of quality and safety algorithms help them avoid many potentially costly problems. This article describes the experience of developing such a network over a 30-year period and illustrates the benefits of this practice.

  18. Higher Education Quality Assessment Model: Towards Achieving Educational Quality Standard

    ERIC Educational Resources Information Center

    Noaman, Amin Y.; Ragab, Abdul Hamid M.; Madbouly, Ayman I.; Khedra, Ahmed M.; Fayoumi, Ayman G.

    2017-01-01

    This paper presents a developed higher education quality assessment model (HEQAM) that can be applied for enhancement of university services. This is because there is no universal unified quality standard model that can be used to assess the quality criteria of higher education institutes. The analytical hierarchy process is used to identify the…

  19. Video and image quality

    NASA Astrophysics Data System (ADS)

    Aldridge, Jim

    1995-09-01

    This paper presents some of the results of a UK government research program into methods of improving the effectiveness of CCTV surveillance systems. The paper identifies the major components of video security systems and the primary causes of unsatisfactory images. A method is outlined for relating the picture-detail limitations imposed by each system component to overall system performance. The paper also points out some possible difficulties arising from the use of emerging new technology.

  20. School Quality and the Black-White Achievement Gap

    ERIC Educational Resources Information Center

    Hanushek, Eric A.; Rivkin, Steven G.

    2007-01-01

    Substantial uncertainty exists about the impact of school quality on the black-white achievement gap. Our results, based on both Texas Schools Project (TSP) administrative data and the Early Childhood Longitudinal Survey (ECLS), differ noticeably from other recent analyses of the black-white achievement gap by providing strong evidence that…

  1. 3D imaging: how to achieve highest accuracy

    NASA Astrophysics Data System (ADS)

    Luhmann, Thomas

    2011-07-01

    The generation of 3D information from images is a key technology in many different areas, e.g. in 3D modeling and representation of architectural or heritage objects, in human body motion tracking and scanning, in 3D scene analysis of traffic scenes, in industrial applications, and many more. The basic concepts rely on mathematical representations of central perspective viewing as they are widely known from photogrammetry or computer vision approaches. The objectives of these methods differ, more or less, from high-precision and well-structured measurements in (industrial) photogrammetry to fully automated, non-structured applications in computer vision. Accuracy and precision are critical issues for the 3D measurement of industrial, engineering or medical objects. As state of the art, photogrammetric multi-view measurements achieve relative precisions on the order of 1:100,000 to 1:200,000, and relative accuracies with respect to traceable lengths on the order of 1:50,000 to 1:100,000 of the largest object diameter. In order to obtain these figures a number of influencing parameters have to be optimized. These include, among others: physical representation of the object surface (targets, texture), illumination and light sources, imaging sensors, cameras and lenses, calibration strategies (camera model), orientation strategies (bundle adjustment), image processing of homologous features (target measurement, stereo and multi-image matching), and representation of object or workpiece coordinate systems and object scale. The paper discusses the above-mentioned parameters and offers strategies for obtaining the highest accuracy in object space. Practical examples of high-quality stereo camera measurements and multi-image applications are used to prove the relevance of high accuracy in different applications, ranging from medical navigation to static and dynamic industrial measurements. In addition, standards for accuracy verification are presented and demonstrated by practical examples.

  2. Perceptual Quality Assessment of Screen Content Images.

    PubMed

    Yang, Huan; Fang, Yuming; Lin, Weisi

    2015-11-01

    Research on screen content images (SCIs) has become important as they are increasingly used in multi-device communication applications. In this paper, we present a study on the perceptual quality assessment of distorted SCIs, both subjectively and objectively. We construct a large-scale screen image quality assessment database (SIQAD) consisting of 20 source and 980 distorted SCIs. In order to obtain the subjective quality scores and investigate which part (text or picture) contributes more to the overall visual quality, the single-stimulus methodology with an 11-point numerical scale is employed to obtain three kinds of subjective scores corresponding to the entire, textual, and pictorial regions, respectively. Based on the analysis of the subjective data, we propose a weighting strategy to account for the correlation among these three kinds of subjective scores. Furthermore, we design an objective metric to measure the visual quality of distorted SCIs by considering the visual difference between textual and pictorial regions. The experimental results demonstrate that the proposed SCI perceptual quality assessment scheme, consisting of the objective metric and the weighting strategy, achieves better performance than 11 state-of-the-art IQA methods. To the best of our knowledge, the SIQAD is the first large-scale database published for the quality evaluation of SCIs, and this research is the first attempt to explore the perceptual quality assessment of distorted SCIs.

  3. Searching for the limit of image quality in film radiography

    SciTech Connect

    Vaessen, B.; Perdieus, P.; Florens, R.

    1993-12-31

    Radiographic film image quality in general was, and in most cases still is, considered a very subjective and rather vague parameter. Yet it is of vital importance to the NDT and related quality control and quality assurance industries. Therefore, Agfa has recently put a major effort into quantifying image quality in an objective, measurable way. It was in the framework of this optimization project that the authors, drawing on these new insights into the imaging behavior of industrial film systems, strove to establish the limit of the highest achievable image quality. In this paper they report these results, not only in an academic sense, meaning how this highest image quality can be achieved under laboratory conditions, but also in terms of how the same results can be obtained under practical, e.g. field, conditions.

  4. Landsat image data quality studies

    NASA Technical Reports Server (NTRS)

    Schueler, C. F.; Salomonson, V. V.

    1985-01-01

    Preliminary results of the Landsat-4 Image Data Quality Analysis (LIDQA) program to characterize data obtained with the Thematic Mapper (TM) instrument on board the Landsat-4 and Landsat-5 satellites are reported. TM design specifications were compared with the acquired data with respect to four criteria: spatial resolution, geometric fidelity, information content, and comparison with Multispectral Scanner (MSS) data. The overall performance of the TM was rated excellent despite minor instabilities and radiometric anomalies in the data. The spatial performance of the TM exceeded design specifications in terms of both image sharpness and geometric accuracy, and the image utility of the TM data was at least twice that of MSS data. The separability of alfalfa and sugar beet fields in a TM image is demonstrated.

  5. Mission-driven evaluation of imaging system quality

    NASA Astrophysics Data System (ADS)

    Kattnig, Alain Philippe; Ferhani, Ouamar; Primot, Jérôme

    2001-12-01

    Image-quality criteria are usually intended to achieve the best possible image at a given sampling rate, which is ill-suited to applications where the detection of well-defined geometric and radiometric properties of scenes or objects is paramount. The quality criterion developed here for designing observation systems is based on the properties of the objects to be viewed. It is thus an object-oriented imaging quality criterion rather than an image-oriented one. We also propose to go beyond optimization and calibrate a numerical scale that can be used to rate the quality of the service delivered by any observation system.

  6. Teacher Quality and Student Achievement. Urban Diversity Series.

    ERIC Educational Resources Information Center

    Goldhaber, Dan; Anthony, Emily

    Recent research suggests that teacher quality is the most important educational input predicting student achievement. Nonetheless, many teachers are less academically skilled than college graduates in other occupations. This study explores characteristics of highly qualified teachers and the connections that exist between these attributes and…

  7. Principal Quality, ISLLC Standards, and Student Achievement: A Virginia Study

    ERIC Educational Resources Information Center

    Owings, William A.; Kaplan, Leslie S.; Nunnery, John

    2005-01-01

    A significant relationship exists between principals' quality at certain grade levels and student achievement on the Virginia Standards of Learning tests. A statewide study finds principals rated higher on school leadership as measured by an Interstate School Leadership Licensure Consortium (ISLLC) Standards rubric. These schools have higher…

  8. [A method of iris image quality evaluation].

    PubMed

    Murat, Hamit; Mao, Dawei; Tong, Qinye

    2006-04-01

    Iris image quality evaluation plays a very important part in iris recognition by computer. An iris image quality evaluation method was introduced in this study to distinguish good images from bad images caused by pupil distortion, blurred boundaries, the two circles appearing non-concentric, and severe occlusion by eyelids and eyelashes. Tests based on this method gave good results.

  9. Quantitative image quality evaluation for cardiac CT reconstructions

    NASA Astrophysics Data System (ADS)

    Tseng, Hsin-Wu; Fan, Jiahua; Kupinski, Matthew A.; Balhorn, William; Okerlund, Darin R.

    2016-03-01

    Maintaining image quality in the presence of motion is always desirable and challenging in clinical cardiac CT imaging. Different image-reconstruction algorithms are available on current commercial CT systems that attempt to achieve this goal. It is widely accepted that image-quality assessment should be task-based and involve specific tasks, observers, and associated figures of merit. In this work, we developed an observer model that performed the task of estimating the percentage of plaque in a vessel from CT images. We compared task performance on cardiac CT image data reconstructed using a conventional FBP reconstruction algorithm and the SnapShot Freeze (SSF) algorithm, each at the default and optimal reconstruction cardiac phases. The purpose of this work is to design an approach for the quantitative image-quality evaluation of temporal resolution for cardiac CT systems. To simulate heart motion, a moving coronary-type phantom synchronized with an ECG signal was used. Three different percentage plaques embedded in a 3 mm vessel phantom were imaged multiple times under motion-free, 60 bpm, and 80 bpm heart rates. Static (motion-free) images of this phantom were taken as reference images for image template generation. Independent ROIs from the 60 bpm and 80 bpm images were generated by vessel tracking. The observer performed estimation tasks using these ROIs. Ensemble mean square error (EMSE) was used as the figure of merit. The results suggest that the quality of SSF images is superior to that of FBP images in higher heart-rate scans.

  10. Cost Implications in Achieving Alternative Water Quality Targets

    NASA Astrophysics Data System (ADS)

    Schleich, Joachim; White, David; Stephenson, Kurt

    1996-04-01

    Excessive nutrient loading poses significant water quality problems in many water bodies across the country. An important question that must be addressed when nutrient reduction policies are devised is where nutrient reduction targets will be applied within the watershed. This paper examines the cost implications of establishing three possible nutrient reduction targets in different locations along the Fox-Wolf River basin in northeast Wisconsin. A linear programming model calculates the total cost of achieving a 50% phosphorus load reduction target established in various locations throughout the basin. Two strategies establish phosphorus reduction targets for each of the 41 subwatersheds, and the third approach establishes a single 50% target reduction at Green Bay for the entire watershed. The results indicate that achieving target phosphorus reductions at the subwatershed level is over 4 times more expensive than achieving the same percentage phosphorus reduction for the watershed as a whole.
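
    A toy version of such a linear program with two hypothetical subwatersheds (made-up costs and loads, not the paper's data), using scipy.optimize.linprog: minimize total abatement cost subject to a basin-wide 50% phosphorus reduction and per-source limits.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: unit abatement cost ($/kg P) and current load (kg P/yr)
cost = np.array([12.0, 35.0])          # subwatershed 1 is cheaper to control
load = np.array([40_000.0, 25_000.0])
target = 0.5 * load.sum()              # basin-wide 50% reduction at the outlet

# Decision variables: kg of phosphorus removed in each subwatershed
res = linprog(
    c=cost,                                    # minimize total abatement cost
    A_ub=[-np.ones(2)], b_ub=[-target],        # total removal >= target
    bounds=[(0, load[0]), (0, load[1])],       # cannot remove more than the load
    method="highs",
)
print("optimal removals (kg):", res.x, "  minimum cost ($):", res.fun)
```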

  11. Achieving quality education for minorities in mathematics, science, and engineering

    NASA Astrophysics Data System (ADS)

    McBay, Shirley M.; Davidson, Laura-Lee

    1993-09-01

    The QEM Network was established to serve as a focal point for the implementation of strategies designed to achieve the six goals and 58 recommendations of the QEM Project report, Education That Works: An Action Plan for the Education of Minorities. In Education That Works, we lay out a vision of a restructured education system that would ensure quality education for, and sustained educational achievement by, minority Americans. We discuss what some of the obstacles are that stand in the way of educational equity in our nation and why it is crucial from both moral and practical perspectives to overcome these obstacles. In a second publication, Together We Can Make It Work: A National Agenda to Provide Quality Education for Minorities in Mathematics, Science, and Engineering, issued in April 1992 by QEM's MSE Network, we lay out a plan for achieving specific goals for minorities in mathematics, science, and engineering. Achieving the goals of Education That Works and of Together We Can Make It Work requires that the current K-12 education system be totally restructured. The system we have in place produces only an educational elite that does not include significant numbers of minority students.

  12. Improving Secondary Ion Mass Spectrometry Image Quality with Image Fusion

    NASA Astrophysics Data System (ADS)

    Tarolli, Jay G.; Jackson, Lauren M.; Winograd, Nicholas

    2014-12-01

    The spatial resolution of chemical images acquired with cluster secondary ion mass spectrometry (SIMS) is limited not only by the size of the probe utilized to create the images but also by detection sensitivity. As the probe size is reduced to below 1 μm, for example, a low signal in each pixel limits lateral resolution because of counting statistics considerations. Although it can be useful to implement numerical methods to mitigate this problem, here we investigate the use of image fusion to combine information from scanning electron microscope (SEM) data with chemically resolved SIMS images. The advantage of this approach is that the higher intensity and, hence, spatial resolution of the electron images can help to improve the quality of the SIMS images without sacrificing chemical specificity. Using a pan-sharpening algorithm, the method is illustrated using synthetic data, experimental data acquired from a metallic grid sample, and experimental data acquired from a lawn of algae cells. The results show that up to an order of magnitude increase in spatial resolution is possible to achieve. A cross-correlation metric is utilized for evaluating the reliability of the procedure.
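
    A generic high-pass-filter pan-sharpening sketch (one common detail-injection scheme, not necessarily the exact algorithm used by the authors): upsample the low-resolution SIMS ion image to the SEM grid and inject the SEM image's high-frequency detail.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def hpf_pansharpen(sims_lowres, sem_highres, sigma=2.0):
    """High-pass-filter fusion: inject the SEM image's high-frequency detail into
    an upsampled SIMS ion image. Assumes the SEM grid is an integer multiple of
    the SIMS grid; both inputs are 2D arrays in arbitrary units."""
    sims = np.asarray(sims_lowres, dtype=float)
    sem = np.asarray(sem_highres, dtype=float)
    scale = (sem.shape[0] / sims.shape[0], sem.shape[1] / sims.shape[1])
    sims_up = zoom(sims, scale, order=3)             # cubic upsampling
    detail = sem - gaussian_filter(sem, sigma)       # high-frequency part of SEM
    gain = sims_up.std() / (sem.std() + 1e-12)       # match intensity ranges
    return sims_up + gain * detail
```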

  13. Improving Secondary Ion Mass Spectrometry Image Quality with Image Fusion

    PubMed Central

    Tarolli, Jay G.; Jackson, Lauren M.; Winograd, Nicholas

    2014-01-01

    The spatial resolution of chemical images acquired with cluster secondary ion mass spectrometry (SIMS) is limited not only by the size of the probe utilized to create the images, but also by detection sensitivity. As the probe size is reduced to below 1 µm, for example, a low signal in each pixel limits lateral resolution due to counting statistics considerations. Although it can be useful to implement numerical methods to mitigate this problem, here we investigate the use of image fusion to combine information from scanning electron microscope (SEM) data with chemically resolved SIMS images. The advantage of this approach is that the higher intensity and, hence, spatial resolution of the electron images can help to improve the quality of the SIMS images without sacrificing chemical specificity. Using a pan-sharpening algorithm, the method is illustrated using synthetic data, experimental data acquired from a metallic grid sample, and experimental data acquired from a lawn of algae cells. The results show that up to an order of magnitude increase in spatial resolution is possible to achieve. A cross-correlation metric is utilized for evaluating the reliability of the procedure. PMID:24912432

  14. Image quality assessment for CT used on small animals

    NASA Astrophysics Data System (ADS)

    Cisneros, Isabela Paredes; Agulles-Pedrós, Luis

    2016-07-01

    Image acquisition on a CT scanner is nowadays necessary in almost any kind of medical study. Its purpose, to produce anatomical images with the best achievable quality, implies the highest diagnostic radiation exposure to patients. Image quality can be measured quantitatively from parameters such as noise, uniformity and resolution. This measurement allows the determination of optimal operating parameters for the scanner in order to obtain the best diagnostic image. A human Philips CT scanner is the first one intended exclusively for veterinary use in Colombia. The aim of this study was to measure the CT image quality parameters using an acrylic phantom and then, using the computational tool MATLAB, determine these parameters as a function of current value and visualization window, in order to reduce the delivered dose while keeping appropriate image quality.

  15. Infrared image quality evaluation method without reference image

    NASA Astrophysics Data System (ADS)

    Yue, Song; Ren, Tingting; Wang, Chengsheng; Lei, Bo; Zhang, Zhijie

    2013-09-01

    Since infrared image quality depends on many factors, such as the optical performance and electrical noise of the thermal imager, image quality evaluation becomes an important issue that can benefit both subsequent image processing and the improvement of thermal imager capability. There are two ways of evaluating infrared image quality, with or without a reference image. For real-time thermal images, the method without a reference image is preferred because it is difficult to obtain a standard image. Although there are various kinds of evaluation methods, there is no general metric for image quality evaluation. This paper introduces a novel method to evaluate infrared images without a reference image from five aspects: noise, clarity, information volume and levels, information in the frequency domain, and the capability for automatic target recognition. Generally, the basic image quality is obtained from the first four aspects, and the quality of the target is acquired from the last aspect. The proposed method is tested on several infrared images captured by different thermal imagers; the indicators are calculated and compared with human visual assessments. The evaluation shows that this method successfully describes the characteristics of infrared images and that the result is consistent with the human visual system.

  16. Assessing product image quality for online shopping

    NASA Astrophysics Data System (ADS)

    Goswami, Anjan; Chung, Sung H.; Chittar, Naren; Islam, Atiq

    2012-01-01

    Assessing product-image quality is important in the context of online shopping. A high-quality image that conveys more information about a product can boost the buyer's confidence and attract more attention. However, the notion of image quality for product images is not the same as in other domains. The perceived quality of product images depends not only on various photographic quality features but also on high-level features such as the clarity of the foreground or the goodness of the background. In this paper, we define a notion of product-image quality based on such features. We conduct a crowd-sourced experiment to collect user judgments on thousands of eBay's images. We formulate a multi-class classification problem for modeling image quality by classifying images into good, fair, and poor quality based on the guided perceptual notions from the judges. We also conduct regression experiments using average crowd-sourced human judgments as the target. We compute a pseudo-regression score as the expected average of the predicted classes, as well as a score from the regression technique. We design many experiments with various sampling and voting schemes over the crowd-sourced data and construct various experimental image quality models. Most of our models have reasonable accuracies (greater than or equal to 70%) on the test data set. We observe that our computed image quality score has a high (0.66) rank correlation with the average votes from the crowd-sourced human judgments.

  17. Likelihood of achieving air quality targets under model uncertainties.

    PubMed

    Digar, Antara; Cohan, Daniel S; Cox, Dennis D; Kim, Byeong-Uk; Boylan, James W

    2011-01-01

    Regulatory attainment demonstrations in the United States typically apply a bright-line test to predict whether a control strategy is sufficient to attain an air quality standard. Photochemical models are the best tools available to project future pollutant levels and are a critical part of regulatory attainment demonstrations. However, because photochemical models are uncertain and future meteorology is unknowable, future pollutant levels cannot be predicted perfectly and attainment cannot be guaranteed. This paper introduces a computationally efficient methodology for estimating the likelihood that an emission control strategy will achieve an air quality objective in light of uncertainties in photochemical model input parameters (e.g., uncertain emission and reaction rates, deposition velocities, and boundary conditions). The method incorporates Monte Carlo simulations of a reduced form model representing pollutant-precursor response under parametric uncertainty to probabilistically predict the improvement in air quality due to emission control. The method is applied to recent 8-h ozone attainment modeling for Atlanta, Georgia, to assess the likelihood that additional controls would achieve fixed (well-defined) or flexible (due to meteorological variability and uncertain emission trends) targets of air pollution reduction. The results show that in certain instances ranking of the predicted effectiveness of control strategies may differ between probabilistic and deterministic analyses.
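
    A schematic of the Monte Carlo step with a hypothetical linear reduced-form response and made-up parameter distributions (not the paper's model): sample the uncertain inputs, propagate them through the surrogate, and count how often the controlled design value meets the standard.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical reduced-form model: future ozone = baseline - sensitivity * emission cut
baseline = 82.0                                    # ppb, current design value
cut = 0.30                                         # 30% emission reduction
sensitivity = rng.lognormal(np.log(20.0), 0.3, n)  # ppb per unit cut (uncertain)
met_noise = rng.normal(0.0, 2.0, n)                # ppb, meteorological variability

future = baseline - sensitivity * cut + met_noise
p_attain = np.mean(future <= 75.0)                 # chance of meeting a 75 ppb standard
print(f"Estimated likelihood of attainment: {p_attain:.2f}")
```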

  18. Exploring High-Achieving Students' Images of Mathematicians

    ERIC Educational Resources Information Center

    Aguilar, Mario Sánchez; Rosas, Alejandro; Zavaleta, Juan Gabriel Molina; Romo-Vázquez, Avenilde

    2016-01-01

    The aim of this study is to describe the images that a group of high-achieving Mexican students hold of mathematicians. For this investigation, we used a research method based on the Draw-A-Scientist Test (DAST) with a sample of 63 Mexican high school students. The group of students' pictorial and written descriptions of mathematicians assisted us…

  19. Referenceless image quality evaluation for whole slide imaging

    PubMed Central

    Hashimoto, Noriaki; Bautista, Pinky A.; Yamaguchi, Masahiro; Ohyama, Nagaaki; Yagi, Yukako

    2012-01-01

    Objective: The image quality in whole slide imaging (WSI) is one of the most important issues for the practical use of WSI scanners. In this paper, we proposed an image quality evaluation method for scanned slide images in which no reference image is required. Methods: While most of the conventional methods for no-reference evaluation only deal with one image degradation at a time, the proposed method is capable of assessing both blur and noise by using an evaluation index which is calculated using the sharpness and noise information of the images in a given training data set by linear regression analysis. The linear regression coefficients can be determined in two ways depending on the purpose of the evaluation. For objective quality evaluation, the coefficients are determined using a reference image with mean square error as the objective value in the analysis. On the other hand, for subjective quality evaluation, the subjective scores given by human observers are used as the objective values in the analysis. The predictive linear regression models for the objective and subjective image quality evaluations, which were constructed using training images, were then used on test data wherein the calculated objective values are construed as the evaluation indices. Results: The results of our experiments confirmed the effectiveness of the proposed image quality evaluation method in both objective and subjective image quality measurements. Finally, we demonstrated the application of the proposed evaluation method to the WSI image quality assessment and automatic rescanning in the WSI scanner. PMID:22530177
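
    The core of the method is a linear regression from a sharpness feature and a noise feature to a quality target (mean square error for objective evaluation, or observer scores for subjective evaluation). A minimal NumPy sketch with simple stand-in features (the paper's exact sharpness and noise measures may differ):

```python
import numpy as np

def sharpness_feature(gray):
    """Mean gradient magnitude as a simple sharpness measure."""
    gy, gx = np.gradient(np.asarray(gray, dtype=float))
    return np.mean(np.hypot(gx, gy))

def noise_feature(gray):
    """Median absolute deviation of a Laplacian-like residual as a noise proxy."""
    g = np.asarray(gray, dtype=float)
    resid = g[1:-1, 1:-1] - 0.25 * (g[:-2, 1:-1] + g[2:, 1:-1] + g[1:-1, :-2] + g[1:-1, 2:])
    return np.median(np.abs(resid))

def fit_quality_index(train_images, targets):
    """Least-squares fit of the target (MSE or subjective score) from the two features."""
    X = np.array([[1.0, sharpness_feature(im), noise_feature(im)] for im in train_images])
    coeffs, *_ = np.linalg.lstsq(X, np.asarray(targets, dtype=float), rcond=None)
    return coeffs

def predict_quality_index(coeffs, image):
    return coeffs @ np.array([1.0, sharpness_feature(image), noise_feature(image)])
```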

  20. Process perspective on image quality evaluation

    NASA Astrophysics Data System (ADS)

    Leisti, Tuomas; Halonen, Raisa; Kokkonen, Anna; Weckman, Hanna; Mettänen, Marja; Lensu, Lasse; Ritala, Risto; Oittinen, Pirkko; Nyman, Göte

    2008-01-01

    The psychological complexity of multivariate image quality evaluation makes it difficult to develop general image quality metrics. Quality evaluation involves several mental processes, and ignoring these processes and using only a few test images can lead to biased results. Using a qualitative/quantitative (Interpretation Based Quality, IBQ) methodology, we examined the process of pair-wise comparison in a setting where the quality of images printed by a laser printer on different paper grades was evaluated. The test image consisted of a picture of a table covered with several objects. Three other images were also used: photographs of a woman, a cityscape, and a countryside. In addition to the pair-wise comparisons, observers (N=10) were interviewed about the subjective quality attributes they used in making their quality decisions. An examination of the individual pair-wise comparisons revealed serious inconsistencies in observers' evaluations of the test image content, but not of the other contents. The qualitative analysis showed that this inconsistency was due to the observers' focus of attention. The lack of an easily recognizable context in the test image may have contributed to this inconsistency. To obtain reliable knowledge of the effect of image context or attention on subjective image quality, a qualitative methodology is needed.

  1. Image quality, compression and segmentation in medicine.

    PubMed

    Morgan, Pam; Frankish, Clive

    2002-12-01

    This review considers image quality in the context of the evolving technology of image compression, and the effects image compression has on perceived quality. The concepts of lossless, perceptually lossless, and diagnostically lossless but lossy compression are described, as well as the possibility of segmented images, combining lossy compression with perceptually lossless regions of interest. The different requirements for diagnostic and training images are also discussed. The lack of established methods for image quality evaluation is highlighted and available methods discussed in the light of the information that may be inferred from them. Confounding variables are also identified. Areas requiring further research are illustrated, including differences in perceptual quality requirements for different image modalities, image regions, diagnostic subtleties, and tasks. It is argued that existing tools for measuring image quality need to be refined and new methods developed. The ultimate aim should be the development of standards for image quality evaluation which take into consideration both the task requirements of the images and the acceptability of the images to the users.

  2. Cardiac catheterization laboratory imaging quality assurance program.

    PubMed

    Wondrow, M A; Laskey, W K; Hildner, F J; Cusma, J; Holmes, D R

    2001-01-01

    With the recent approval of the National Electrical Manufacturers Association (NEMA) standard for "Characteristics of and Test Procedures for a Phantom to Benchmark Cardiac Fluoroscopic and Photographic Performance," comprehensive cardiac image quality assurance and control programs are now possible. This standard was developed over the past 4 years by a joint NEMA/Society for Cardiac Angiography and Interventions (SCA&I) working group of imaging manufacturers and cardiology society professionals. This article details a cardiac catheterization laboratory image quality assurance and control program that incorporates the new standard along with current regulatory requirements for cardiac imaging. Because of the recent proliferation of digital imaging equipment, quality assurance for both cardiac fluoroscopy and digital imaging is critical. Also included are previous recommendations from the American College of Cardiology (ACC), the American Heart Association (AHA), the Society for Cardiac Angiography and Interventions (SCA&I), and authors of previous publications on image quality.

  3. Retinal image quality assessment using generic features

    NASA Astrophysics Data System (ADS)

    Fasih, Mahnaz; Langlois, J. M. Pierre; Ben Tahar, Houssem; Cheriet, Farida

    2014-03-01

    Retinal image quality assessment is an important step in automated eye disease diagnosis. Diagnosis accuracy is highly dependent on the quality of retinal images, because poor image quality might prevent the observation of significant eye features and disease manifestations. A robust algorithm is therefore required in order to evaluate the quality of images in a large database. We developed an algorithm for retinal image quality assessment based on generic features that is independent from segmentation methods. It exploits the local sharpness and texture features by applying the cumulative probability of blur detection metric and run-length encoding algorithm, respectively. The quality features are combined to evaluate the image's suitability for diagnosis purposes. Based on the recommendations of medical experts and our experience, we compared a global and a local approach. A support vector machine with radial basis functions was used as a nonlinear classifier in order to classify images to gradable and ungradable groups. We applied our methodology to 65 images of size 2592×1944 pixels that had been graded by a medical expert. The expert evaluated 38 images as gradable and 27 as ungradable. The results indicate very good agreement between the proposed algorithm's predictions and the medical expert's judgment: the sensitivity and specificity for the local approach are respectively 92% and 94%. The algorithm demonstrates sufficient robustness to identify relevant images for automated diagnosis.
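
    Assuming scikit-learn and a precomputed feature matrix (e.g., CPBD sharpness and run-length texture features per image), the final classification stage might look like the sketch below; the feature values here are random placeholders for illustration only.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# X: one row per retinal image (e.g., CPBD sharpness + run-length texture features);
# y: 1 = gradable, 0 = ungradable, as labelled by a medical expert.
# Random placeholders stand in for the real feature extraction here.
X = np.random.rand(65, 12)
y = (np.random.rand(65) > 0.4).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```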

  4. Learning Receptive Fields and Quality Lookups for Blind Quality Assessment of Stereoscopic Images.

    PubMed

    Shao, Feng; Lin, Weisi; Wang, Shanshan; Jiang, Gangyi; Yu, Mei; Dai, Qionghai

    2016-03-01

    Blind quality assessment of 3D images encounters more new challenges than its 2D counterpart. In this paper, we propose a blind quality assessment method for stereoscopic images that learns the characteristics of receptive fields (RFs) from the perspective of dictionary learning and constructs quality lookups to replace human opinion scores without performance loss. An important feature of the proposed method is that we do not need a large set of samples of distorted stereoscopic images and the corresponding human opinion scores to learn a regression model. To be more specific, in the training phase, we learn local RFs (LRFs) and global RFs (GRFs) from the reference and distorted stereoscopic images, respectively, and construct their corresponding local quality lookups (LQLs) and global quality lookups (GQLs). In the testing phase, blind quality pooling can be easily achieved by searching for the optimal GRF and LRF indexes in the learnt LQLs and GQLs, and the quality score is obtained by combining the LRF and GRF indexes. Experimental results on three publicly available 3D image quality assessment databases demonstrate that, in comparison with existing methods, the devised algorithm achieves highly consistent alignment with subjective assessment.

  5. Toward optimal color image quality of television display

    NASA Astrophysics Data System (ADS)

    MacDonald, Lindsay W.; Endrikhovski, Sergej N.; Bech, Soren; Jensen, Kaj

    1999-12-01

    A general framework and first experimental results are presented for the 'OPTimal IMage Appearance' (OPTIMA) project, which aims to develop a computational model for achieving optimal color appearance of natural images on adaptive CRT television displays. To achieve this goal we considered the perceptual constraints determining quality of displayed images and how they could be quantified. The practical value of the notion of optimal image appearance was translated from the high level of the perceptual constraints into a method for setting the display's parameters at the physical level. In general, the whole framework of quality determination includes: (1) evaluation of perceived quality; (2) evaluation of the individual perceptual attributes; and (3) correlation between the physical measurements, psychometric parameters and the subjective responses. We performed a series of psychophysical experiments, with observers viewing a series of color images on a high-end consumer television display, to investigate the relationships between Overall Image Quality and four quality-related attributes: Brightness Rendering, Chromatic Rendering, Visibility of Details and Overall Naturalness. The results of the experiments presented in this paper suggest that these attributes are highly inter-correlated.

  6. Improving mental health outcomes: achieving equity through quality improvement

    PubMed Central

    Poots, Alan J.; Green, Stuart A.; Honeybourne, Emmi; Green, John; Woodcock, Thomas; Barnes, Ruth; Bell, Derek

    2014-01-01

    Objective To investigate equity of patient outcomes in a psychological therapy service, following increased access achieved by a quality improvement (QI) initiative. Design Retrospective service evaluation of health outcomes; data analysed by ANOVA, chi-squared and Statistical Process Control. Setting A psychological therapy service in Westminster, London, UK. Participants People living in the Borough of Westminster, London, attending the service (from either healthcare professional or self-referral) between February 2009 and May 2012. Intervention(s) Social marketing interventions were used to increase referrals, including the promotion of the service through local media and through existing social networks. Main Outcome Measure(s) (i) Severity of depression on entry using Patient Health Questionnaire-9 (PHQ9). (ii) Changes to severity of depression following treatment (ΔPHQ9). (iii) Changes in attainment of a meaningful improvement in condition assessed by a key performance indicator. Results Patients from areas of high deprivation entered the service with more severe depression (M = 15.47, SD = 6.75), compared with patients from areas of low (M = 13.20, SD = 6.75) and medium (M = 14.44, SD = 6.64) deprivation. Patients in low, medium and high deprivation areas attained similar changes in depression score (ΔPHQ9: M = −6.60, SD = 6.41). Similar proportions of patients achieved the key performance indicator across initiative phase and deprivation categories. Conclusions QI methods improved access to mental health services; this paper finds no evidence for differences in clinical outcomes in patients, regardless of level of deprivation, interpreted as no evidence of inequity in the service with respect to this outcome. PMID:24521701

  7. Optimization of synthetic aperture image quality

    NASA Astrophysics Data System (ADS)

    Moshavegh, Ramin; Jensen, Jonas; Villagomez-Hoyos, Carlos A.; Stuart, Matthias B.; Hemmsen, Martin Christian; Jensen, Jørgen Arendt

    2016-04-01

    Synthetic aperture (SA) imaging produces high-quality images and velocity estimates of both slow and fast flow at high frame rates. However, grating lobe artifacts can appear in both transmission and reception. These affect the image quality and the frame rate. Optimization of the parameters affecting the image quality of SA is therefore of great importance, and this paper proposes an advanced procedure for optimizing the parameters essential for acquiring optimal image quality while generating high-resolution SA images. Optimization of the image quality is mainly performed based on measures such as the F-number, the number of emissions, and the aperture size, which are considered the acquisition factors that contribute most to the quality of high-resolution SA images. The image quality performance is therefore quantified in terms of the full width at half maximum (FWHM) and the cystic resolution (CTR). The results of the study showed that SA imaging with only 32 emissions and a maximum sweep angle of 22 degrees yields very good image quality compared with using 256 emissions and the full aperture size. The number of emissions and the maximum sweep angle in SA can therefore be optimized to reach reasonably good performance and to increase the frame rate by lowering the required number of emissions. All measurements are performed using the experimental SARUS scanner connected to a λ/2-pitch transducer. A wire phantom and a tissue-mimicking phantom containing anechoic cysts are scanned using the optimized parameters for the transducer. The measurements coincide with simulations.
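
    One of the figures of merit above, the full width at half maximum, can be estimated from a sampled beam or point-spread profile by locating the half-maximum crossings with linear interpolation; a small self-contained sketch:

```python
import numpy as np

def fwhm(x, profile):
    """Full width at half maximum of a single-peaked profile sampled at positions x."""
    p = np.asarray(profile, dtype=float)
    half = p.max() / 2.0
    above = np.where(p >= half)[0]
    i0, i1 = above[0], above[-1]
    # Linear interpolation of the left and right half-maximum crossings
    left = np.interp(half, [p[i0 - 1], p[i0]], [x[i0 - 1], x[i0]]) if i0 > 0 else x[i0]
    right = np.interp(half, [p[i1 + 1], p[i1]], [x[i1 + 1], x[i1]]) if i1 < len(p) - 1 else x[i1]
    return right - left

# Example: a Gaussian beam profile has FWHM ~ 2.355 * sigma
x = np.linspace(-5, 5, 1001)
print(fwhm(x, np.exp(-x**2 / 2.0)))   # ~2.355 for sigma = 1
```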

  8. Electrical Inspection Oriented Thermal Image Quality Assessment

    NASA Astrophysics Data System (ADS)

    Lin, Ying; Wang, Menglin; Gong, Xiaojin; Guo, Zhihong; Geng, Yujie; Bai, Demeng

    2017-01-01

    This paper presents an approach to assess the quality of thermal images used specifically in electrical inspection. In this application, no reference images are available for quality assessment. Therefore, we first analyze the characteristics of these thermal images. Then, four quantitative measurements, namely one-dimensional (1D) entropy, two-dimensional (2D) entropy, centrality, and No-Reference Structural Sharpness (NRSS), are investigated to measure the information content, the centrality of objects of interest, and the sharpness of the images. Moreover, in order to provide a more intuitive measure for human operators, we assign each image a discrete rating based on these quantitative measurements via the k-nearest neighbor (KNN) method. The proposed approach has been validated on a dataset composed of 2,336 images. Experiments show that our quality assessment results are consistent with subjective assessment.
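
    A sketch of two of the four measurements (1D histogram entropy and a simple 2D joint entropy of pixel value versus local mean) and of the final KNN rating step, assuming NumPy, SciPy and scikit-learn; the centrality and NRSS terms would be added analogously, and the training labels below are made up for illustration.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.neighbors import KNeighborsClassifier

def entropy_1d(gray, bins=256):
    """Shannon entropy of the grey-level histogram (bits)."""
    hist, _ = np.histogram(gray, bins=bins, range=(0, 255))
    p = hist[hist > 0] / hist.sum()
    return -np.sum(p * np.log2(p))

def entropy_2d(gray, bins=64):
    """Joint entropy of each pixel value and its 3x3 neighbourhood mean (bits)."""
    g = np.asarray(gray, dtype=float)
    nb = uniform_filter(g, size=3)
    hist, _, _ = np.histogram2d(g.ravel(), nb.ravel(), bins=bins, range=[[0, 255], [0, 255]])
    p = hist[hist > 0] / hist.sum()
    return -np.sum(p * np.log2(p))

# KNN assigns each image a discrete quality rate from its measurement vector
# (only two of the four measurements shown; training labels are made up).
train_features = np.array([[7.2, 11.5], [6.8, 10.9], [3.1, 5.2], [2.5, 4.8]])
train_rates = np.array([5, 4, 2, 1])
knn = KNeighborsClassifier(n_neighbors=1).fit(train_features, train_rates)
# rate = knn.predict([[entropy_1d(img), entropy_2d(img)]])
```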

  9. Image quality improvement of polygon computer generated holography.

    PubMed

    Pang, Xiao-Ning; Chen, Ding-Chen; Ding, Yi-Cong; Chen, Yi-Gui; Jiang, Shao-Ji; Dong, Jian-Wen

    2015-07-27

    The quality of the holographic reconstruction image is seriously affected by undesirable messy fringes in polygon-based computer generated holography. Here, several methods have been proposed to improve the image quality, including a modified encoding method based on spatial-domain Fraunhofer diffraction and a specific LED light source. Fast Fourier transform is applied to the basic element of the polygon, and fringe-invisible reconstruction is achieved after introducing an initial random phase. Furthermore, we find that an image with satisfactory fidelity and sharp edges can be reconstructed by either an LED with a moderate coherence level or a modulator with a small pixel pitch. Satisfactory image quality without obvious speckle noise is observed under the illumination of a bandpass-filter-aided LED. The experimental results agree well with the correlation analysis of the acceptable viewing angle and the coherence length of the light source.

  10. Quality assurance in dental radiography: intra-oral image quality analysis.

    PubMed

    Bolas, Andrew; Fitzgerald, Maurice

    With the introduction of criteria for clinical audit by the Irish Dental Council, and the statutory requirement on dentists to introduce this into their practice, this article will introduce the basic concepts of quality standards in intra-oral radiography and the subsequent application of these standards in an image quality audit cycle. Subjective image quality analysis is not a new concept, but its application can prove beneficial to both patient and dental practitioner. The ALARA (as low as reasonably achievable) principle is fundamental in radiation protection, and therefore the prevention of repeat exposures demonstrates one facet of this that the dental practitioner can employ within daily practice.

  11. Image Quality Ranking Method for Microscopy

    PubMed Central

    Koho, Sami; Fazeli, Elnaz; Eriksson, John E.; Hänninen, Pekka E.

    2016-01-01

    Automated analysis of microscope images is necessitated by the increased need for high-resolution follow up of events in time. Manually finding the right images to be analyzed, or eliminated from data analysis are common day-to-day problems in microscopy research today, and the constantly growing size of image datasets does not help the matter. We propose a simple method and a software tool for sorting images within a dataset, according to their relative quality. We demonstrate the applicability of our method in finding good quality images in a STED microscope sample preparation optimization image dataset. The results are validated by comparisons to subjective opinion scores, as well as five state-of-the-art blind image quality assessment methods. We also show how our method can be applied to eliminate useless out-of-focus images in a High-Content-Screening experiment. We further evaluate the ability of our image quality ranking method to detect out-of-focus images, by extensive simulations, and by comparing its performance against previously published, well-established microscopy autofocus metrics. PMID:27364703

  12. Image Quality Ranking Method for Microscopy

    NASA Astrophysics Data System (ADS)

    Koho, Sami; Fazeli, Elnaz; Eriksson, John E.; Hänninen, Pekka E.

    2016-07-01

    Automated analysis of microscope images is necessitated by the increased need for high-resolution follow up of events in time. Manually finding the right images to be analyzed, or eliminated from data analysis are common day-to-day problems in microscopy research today, and the constantly growing size of image datasets does not help the matter. We propose a simple method and a software tool for sorting images within a dataset, according to their relative quality. We demonstrate the applicability of our method in finding good quality images in a STED microscope sample preparation optimization image dataset. The results are validated by comparisons to subjective opinion scores, as well as five state-of-the-art blind image quality assessment methods. We also show how our method can be applied to eliminate useless out-of-focus images in a High-Content-Screening experiment. We further evaluate the ability of our image quality ranking method to detect out-of-focus images, by extensive simulations, and by comparing its performance against previously published, well-established microscopy autofocus metrics.
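
    The ranking idea can be imitated, very roughly, with a classical autofocus-style sharpness score such as the variance of the Laplacian, used here only as a stand-in for illustration and not as the authors' proposed ranking method; images are then simply sorted by the score (NumPy and SciPy assumed available).

      import numpy as np
      from scipy.ndimage import gaussian_filter, laplace

      def focus_score(gray_image):
          """Variance of the Laplacian: a classical sharpness/autofocus-style score."""
          return laplace(gray_image.astype(float)).var()

      def rank_images(images):
          """Indices of images sorted from highest to lowest score (best first)."""
          return sorted(range(len(images)), key=lambda i: focus_score(images[i]), reverse=True)

      # Stand-in dataset: a blurred copy should rank below the sharp original
      rng = np.random.default_rng(0)
      sharp = rng.random((256, 256))
      blurred = gaussian_filter(sharp, sigma=3)
      print(rank_images([blurred, sharp]))    # expected order: [1, 0]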

  13. Does High School Facility Quality Affect Student Achievement? A Two-Level Hierarchical Linear Model

    ERIC Educational Resources Information Center

    Bowers, Alex J.; Urick, Angela

    2011-01-01

    The purpose of this study is to isolate the independent effects of high school facility quality on student achievement using a large, nationally representative U.S. database of student achievement and school facility quality. Prior research on linking school facility quality to student achievement has been mixed. Studies that relate overall…

  14. Slider-adjusted softcopy ruler for calibrated image quality assessment

    NASA Astrophysics Data System (ADS)

    Jin, Elaine W.; Keelan, Brian W.

    2010-01-01

    ISO 20462 part 3 standardized the hardcopy quality ruler and a softcopy quality ruler based on a binary sort approach involving paired comparisons. The new softcopy ruler method described here utilizes a slider bar to match the quality of the ruler to that of the test image, which is found to substantially reduce the time required per assessment (from 30 to 15.5 s), with only a modest loss of precision (standard deviations increasing from 2.5 to 2.9 just noticeable differences). In combination, these metrics imply a 20% improvement in the standard error of the mean achievable in a fixed amount of judging time. Ruler images calibrated against the standard quality scale of ISO 20462 are generated for 21 scenes, at 31 quality levels each, achieved through variation of sharpness while other attributes are held near their preferred positions. The images are bundled with documentation and MATLAB source code for a graphical user interface that administers softcopy ruler experiments, and these materials are donated to the International Imaging Industry Association for distribution. In conjunction with a specified large flat panel display, these materials should enable users to conduct softcopy quality ruler experiments with minimum effort, and should reduce the barriers to performing calibrated psychophysical measurements.

  15. Profiling Sensitivity to Image Quality.

    DTIC Science & Technology

    1981-10-01

    The results were used to derive minimum resolution thresholds for photogrammetric compilation. [...] fusion or electronic correlation of the images. Referring to Table 1, it is apparent that those occurrences were stereomodels whose photos contained [...] electronic/video equipment to correlate stereo images. The data indicate that below a threshold level, fusion/electronic correlation is not possible

  16. No training blind image quality assessment

    NASA Astrophysics Data System (ADS)

    Chu, Ying; Mou, Xuanqin; Ji, Zhen

    2014-03-01

    State-of-the-art blind image quality assessment (IQA) methods generally extract perceptual features from training images and feed them into a support vector machine (SVM) to learn a regression model, which can then be used to predict the quality scores of testing images. However, these methods need complicated training and learning, and the evaluation results are sensitive to image contents and learning strategies. In this paper, two novel blind IQA metrics that require no training or learning are proposed. The new methods extract perceptual features, i.e., the shape consistency of conditional histograms, from the joint histograms of neighboring divisive normalization transform coefficients of distorted images, and then compare the length attribute of the extracted features with that of the reference images and degraded images in the LIVE database. In the first method, a cluster center is found in the feature attribute space of the natural reference images, and the distance between the feature attribute of the distorted image and the cluster center is adopted as the quality label. The second method utilizes the feature attributes and subjective scores of all the images in the LIVE database to construct a dictionary, and the final quality score is calculated by interpolating the subjective scores of nearby words in the dictionary. Unlike traditional SVM-based blind IQA methods, the proposed metrics have explicit expressions, which reflect the relationships between the perceptual features and image quality well. Experimental results on the publicly available LIVE, CSIQ and TID2008 databases show the effectiveness of the proposed methods, and the performance is fairly acceptable.
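
    A heavily simplified sketch of the first (cluster-center) idea follows; the feature extractor is left abstract, the data and names are hypothetical, and this is not the authors' exact feature or distance definition.

      import numpy as np

      def cluster_center_quality(feature, reference_features):
          """Quality label = distance from an image's feature attribute to the cluster
          center of pristine reference features; a larger distance is read here as
          stronger degradation."""
          center = np.mean(reference_features, axis=0)
          return np.linalg.norm(np.asarray(feature) - center)

      # Stand-in data: 1-D 'length' attributes of conditional-histogram shapes
      ref = np.random.normal(loc=1.0, scale=0.05, size=(100, 1))   # pristine images
      distorted = np.array([0.62])                                  # test image
      print(cluster_center_quality(distorted, ref))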

  17. How healthcare organizations use the Internet to market quality achievements.

    PubMed

    Revere, Lee; Robinson, Leroy

    2010-01-01

    The increasingly competitive environment is having a strong bearing on the strategic marketing practices of hospitals. The Internet is a fairly new marketing tool, and it has the potential to dramatically influence healthcare consumers. This exploratory study investigates how hospitals use the Internet as a tool to market the quality of their services. Significant evidence exists that customers use the Internet to find information about potential healthcare providers, including information concerning quality. Data were collected from a random sample of 45 U.S. hospitals from the American Hospital Association database. The data included hospital affiliation, number of staffed beds, accreditation status, Joint Commission quality awards, and number of competing hospitals. The study's findings show that system-affiliated hospitals do not provide more, or less, quality information on their websites than do non-system-affiliated hospitals. The findings suggest that the amount of quality information provided on a hospital website is not dependent on hospital size. Research provides evidence that hospitals with more Joint Commission awards promote their quality accomplishments more so than their counterparts that earned fewer Joint Commission awards. The findings also suggest that the more competitors in a marketplace the more likely a hospital is to promote its quality as a potential differential advantage. The study's findings indicate that a necessary element of any hospital's competitive strategy should be to include the marketing of its quality on the organization's website.

  18. Retinal image quality assessment based on image clarity and content

    NASA Astrophysics Data System (ADS)

    Abdel-Hamid, Lamiaa; El-Rafei, Ahmed; El-Ramly, Salwa; Michelson, Georg; Hornegger, Joachim

    2016-09-01

    Retinal image quality assessment (RIQA) is an essential step in automated screening systems to avoid misdiagnosis caused by processing poor quality retinal images. A no-reference transform-based RIQA algorithm is introduced that assesses images based on five clarity and content quality issues: sharpness, illumination, homogeneity, field definition, and content. Transform-based RIQA algorithms have the advantage of considering retinal structures while being computationally inexpensive. Wavelet-based features are proposed to evaluate the sharpness and overall illumination of the images. A retinal saturation channel is designed and used along with wavelet-based features for homogeneity assessment. The presented sharpness and illumination features are utilized to assure adequate field definition, whereas color information is used to exclude nonretinal images. Several publicly available datasets of varying quality grades are utilized to evaluate the feature sets resulting in area under the receiver operating characteristic curve above 0.99 for each of the individual feature sets. The overall quality is assessed by a classifier that uses the collective features as an input vector. The classification results show superior performance of the algorithm in comparison to other methods from literature. Moreover, the algorithm addresses efficiently and comprehensively various quality issues and is suitable for automatic screening systems.
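
    As a rough illustration of a wavelet-based sharpness feature (not the paper's exact formulation), the fraction of wavelet energy in the detail sub-bands can serve as a simple sharpness score; PyWavelets and NumPy are assumed available.

      import numpy as np
      import pywt

      def wavelet_sharpness(gray_image, wavelet="haar", level=2):
          """Fraction of wavelet energy in the detail sub-bands; blurred images
          concentrate energy in the approximation band, lowering the score."""
          coeffs = pywt.wavedec2(gray_image.astype(float), wavelet, level=level)
          approx_energy = np.sum(coeffs[0] ** 2)
          detail_energy = sum(np.sum(band ** 2) for bands in coeffs[1:] for band in bands)
          return detail_energy / (approx_energy + detail_energy + 1e-12)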

  19. Peripheral Aberrations and Image Quality for Contact Lens Correction

    PubMed Central

    Shen, Jie; Thibos, Larry N.

    2011-01-01

    Purpose Contact lenses reduced the degree of hyperopic field curvature present in myopic eyes and rigid contact lenses reduced sphero-cylindrical image blur on the peripheral retina, but their effect on higher order aberrations and overall optical quality of the eye in the peripheral visual field is still unknown. The purpose of our study was to evaluate peripheral wavefront aberrations and image quality across the visual field before and after contact lens correction. Methods A commercial Hartmann-Shack aberrometer was used to measure ocular wavefront errors in 5° steps out to 30° of eccentricity along the horizontal meridian in uncorrected eyes and when the same eyes are corrected with soft or rigid contact lenses. Wavefront aberrations and image quality were determined for the full elliptical pupil encountered in off-axis measurements. Results Ocular higher-order aberrations increase away from fovea in the uncorrected eye. Third-order aberrations are larger and increase faster with eccentricity compared to the other higher-order aberrations. Contact lenses increase all higher-order aberrations except 3rd-order Zernike terms. Nevertheless, a net increase in image quality across the horizontal visual field for objects located at the foveal far point is achieved with rigid lenses, whereas soft contact lenses reduce image quality. Conclusions Second order aberrations limit image quality more than higher-order aberrations in the periphery. Although second-order aberrations are reduced by contact lenses, the resulting gain in image quality is partially offset by increased amounts of higher-order aberrations. To fully realize the benefits of correcting higher-order aberrations in the peripheral field requires improved correction of second-order aberrations as well. PMID:21873925

  20. Improvement of image quality in holographic microscopy.

    PubMed

    Budhiraja, C J; Som, S C

    1981-05-15

    A novel technique of noise reduction in holographic microscopy has been experimentally studied. It has been shown that significant improvement in the holomicroscopic images of actual low-contrast continuous tone biological objects can be achieved without trade off in image resolution. The technique makes use of holographically produced multidirectional phase gratings used as diffusers and the continuous addition of subchannel holograms. It has been shown that the self-imaging property of this type of diffuser makes the use of these diffusers ideal for microscopic objects. Experimental results have also been presented to demonstrate real-time image processing capability of this technique.

  1. Sparse feature fidelity for perceptual image quality assessment.

    PubMed

    Chang, Hua-Wen; Yang, Hua; Gan, Yong; Wang, Ming-Hui

    2013-10-01

    The prediction of an image quality metric (IQM) should be consistent with subjective human evaluation. As the human visual system (HVS) is critical to visual perception, modeling of the HVS is regarded as the most suitable way to achieve perceptual quality predictions. Sparse coding that is equivalent to independent component analysis (ICA) can provide a very good description of the receptive fields of simple cells in the primary visual cortex, which is the most important part of the HVS. With this inspiration, a quality metric called sparse feature fidelity (SFF) is proposed for full-reference image quality assessment (IQA) on the basis of transformation of images into sparse representations in the primary visual cortex. The proposed method is based on sparse features that are acquired by a feature detector, which is trained on samples of natural images by an ICA algorithm. In addition, two strategies are designed to simulate the properties of visual perception: 1) visual attention and 2) visual threshold. The computation of SFF has two stages, training and fidelity computation; the fidelity computation itself consists of two components, feature similarity and luminance correlation. The feature similarity measures the structure differences between the two images, whereas the luminance correlation evaluates brightness distortions. SFF also reflects the chromatic properties of the HVS, and it is very effective for color IQA. The experimental results on five image databases show that SFF has a better performance in matching subjective ratings compared with the leading IQMs.

  2. Monotonic correlation analysis of image quality measures for image fusion

    NASA Astrophysics Data System (ADS)

    Kaplan, Lance M.; Burks, Stephen D.; Moore, Richard K.; Nguyen, Quang

    2008-04-01

    The next generation of night vision goggles will fuse image-intensified and long-wave infrared imagery to create a hybrid image that will enable soldiers to better interpret their surroundings during nighttime missions. Paramount to the development of such goggles is the exploitation of image quality (IQ) measures to automatically determine the best image fusion algorithm for a particular task. This work introduces a novel monotonic correlation coefficient to investigate how well candidate IQ features correlate with actual human performance, which is measured by a perception study. The paper demonstrates how monotonic correlation can identify worthy features that could be overlooked by traditional correlation values.
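
    The paper's novel monotonic correlation coefficient is not reproduced here; as a point of comparison, the sketch below computes Spearman's rank correlation, the standard measure of monotonic association, between a candidate IQ feature and perception-study scores (SciPy assumed available, data values hypothetical).

      import numpy as np
      from scipy.stats import spearmanr, pearsonr

      iq_feature = np.array([0.31, 0.42, 0.47, 0.55, 0.71, 0.90])   # stand-in IQ values
      human_score = np.array([0.20, 0.35, 0.50, 0.52, 0.80, 0.95])  # stand-in perception results

      rho, _ = spearmanr(iq_feature, human_score)   # monotonic (rank) association
      r, _ = pearsonr(iq_feature, human_score)      # linear association, for contrast
      print(f"Spearman rho = {rho:.3f}, Pearson r = {r:.3f}")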

  3. What Does Quality Programming Mean for High Achieving Students?

    ERIC Educational Resources Information Center

    Samudzi, Cleo

    2008-01-01

    The Missouri Academy of Science, Mathematics and Computing (Missouri Academy) is a two-year accelerated, early-entrance-to-college, residential school that matches the level, complexity and pace of the curriculum with the readiness and motivation of high achieving high school students. The school is a part of Northwest Missouri State University…

  4. Quality and Early Field Experiences: Partnering with Junior Achievement

    ERIC Educational Resources Information Center

    Piro, Jody S.; Anderson, Gina; Fredrickson, Rebecca

    2015-01-01

    This study explored the perceptions of preservice teacher candidates who participated in a pilot partnership between a public teacher education preparation program and Junior Achievement (JA). The partnership was grounded in the premise that providing early field experiences to preservice teacher candidates was a necessary requirement of quality…

  5. Teacher Turnover, Teacher Quality, and Student Achievement in DCPS

    ERIC Educational Resources Information Center

    Adnot, Melinda; Dee, Thomas; Katz, Veronica; Wyckoff, James

    2017-01-01

    In practice, teacher turnover appears to have negative effects on school quality as measured by student performance. However, some simulations suggest that turnover can instead have large positive effects under a policy regime in which low-performing teachers can be accurately identified and replaced with more effective teachers. This study…

  6. Achieving Data Quality within the Logistics Modernization Program

    DTIC Science & Technology

    2012-09-01

    Acronym fragments recovered from the report: ...Modernization Program; LOGSA (Logistics Support Activity); BOM (Bills of Material); AIS (Automated Information Systems); GAO (Government...). ...the scope of this study to audits of bills of material (BOM) data. The author's purpose in this study was to determine the... (Section heading fragment: Bills of Material Data Quality Audit Process Description.)

  7. FFDM image quality assessment using computerized image texture analysis

    NASA Astrophysics Data System (ADS)

    Berger, Rachelle; Carton, Ann-Katherine; Maidment, Andrew D. A.; Kontos, Despina

    2010-04-01

    Quantitative measures of image quality (IQ) are routinely obtained during the evaluation of imaging systems. These measures, however, do not necessarily correlate with the IQ of the actual clinical images, which can also be affected by factors such as patient positioning. No quantitative method currently exists to evaluate clinical IQ. Therefore, we investigated the potential of using computerized image texture analysis to quantitatively assess IQ. Our hypothesis is that image texture features can be used to assess IQ as a measure of the image signal-to-noise ratio (SNR). To test feasibility, the "Rachel" anthropomorphic breast phantom (Model 169, Gammex RMI) was imaged with a Senographe 2000D FFDM system (GE Healthcare) using 220 unique exposure settings (target/filter, kV, and mAs combinations). The mAs were varied from 10% to 300% of that required for an average glandular dose (AGD) of 1.8 mGy. A 2.5 cm² retroareolar region of interest (ROI) was segmented from each image. The SNR was computed from the ROIs segmented from images linear with dose (i.e., raw images) after flat-field and offset correction. Image texture features of skewness, coarseness, contrast, energy, homogeneity, and fractal dimension were computed from the Premium View™ postprocessed image ROIs. Multiple linear regression demonstrated a strong association between the computed image texture features and SNR (R² = 0.92, p ≤ 0.001). When including kV, target and filter as additional predictor variables, a stronger association with SNR was observed (R² = 0.95, p ≤ 0.001). The strong associations indicate that computerized image texture analysis can be used to measure image SNR and potentially aid in automating IQ assessment as a component of the clinical workflow. Further work is underway to validate our findings in larger clinical datasets.
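
    A minimal sketch of the idea, not the study's actual feature definitions or data: compute a few first-order texture features from an ROI and fit a linear model against measured SNR with NumPy and SciPy.

      import numpy as np
      from scipy.stats import skew

      def texture_features(roi):
          """First-order texture descriptors from the grey-level histogram of an ROI."""
          hist, _ = np.histogram(roi, bins=256, range=(roi.min(), roi.max() + 1e-9), density=True)
          p = hist / hist.sum()
          energy = np.sum(p ** 2)                     # uniformity of grey levels
          contrast = np.std(roi)                      # spread of grey levels
          return np.array([skew(roi.ravel()), energy, contrast])

      # Stand-in data: one ROI per exposure setting, plus an intercept column
      rois = [np.random.normal(100, s, (64, 64)) for s in (5, 10, 20, 40)]
      X = np.column_stack([np.ones(len(rois)), np.vstack([texture_features(r) for r in rois])])
      snr = np.array([20.0, 14.1, 10.0, 7.1])         # stand-in measured SNR values
      beta, *_ = np.linalg.lstsq(X, snr, rcond=None)  # multiple linear regression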

  8. Subjective matters: from image quality to image psychology

    NASA Astrophysics Data System (ADS)

    Fedorovskaya, Elena A.; De Ridder, Huib

    2013-03-01

    From the advent of digital imaging through several decades of studies, the human vision research community systematically focused on perceived image quality and digital artifacts due to resolution, compression, gamma, dynamic range, capture and reproduction noise, blur, etc., to help overcome existing technological challenges and shortcomings. Technological advances made digital images and digital multimedia nearly flawless in quality, and ubiquitous and pervasive in usage, provide us with the exciting but at the same time demanding possibility to turn to the domain of human experience including higher psychological functions, such as cognition, emotion, awareness, social interaction, consciousness and Self. In this paper we will outline the evolution of human centered multidisciplinary studies related to imaging and propose steps and potential foci of future research.

  9. Quantification of image quality using information theory.

    PubMed

    Niimi, Takanaga; Maeda, Hisatoshi; Ikeda, Mitsuru; Imai, Kuniharu

    2011-12-01

    The aims of the present study were to examine the usefulness of information theory in the visual assessment of image quality. We applied a first-order approximation of Shannon's information theory to compute information losses (IL). Images of a contrast-detail mammography (CDMAM) phantom were acquired with computed radiography systems at various radiation doses. Information content was defined as the entropy Σ p_i log(1/p_i), in which the detection probabilities p_i were calculated from the distribution of detection rates of the CDMAM. IL was defined as the difference between the information content and the information obtained. IL decreased with increases in the disk diameters (P < 0.0001, ANOVA) and in the radiation doses (P < 0.002, F-test). Sums of IL, which we call total information losses (TIL), were closely correlated with the image quality figures (r = 0.985). TIL was dependent on the distribution of image reading ability of each examinee, even when the average reading ratio was the same in the group. TIL was shown to be sensitive to the observers' distribution of image readings and is expected to improve the evaluation of image quality.
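
    A small sketch of the first-order quantities involved, using hypothetical detection-rate data rather than the CDMAM analysis itself (NumPy assumed available):

      import numpy as np

      def entropy(p):
          """Shannon information content H = sum p_i * log2(1/p_i) over nonzero p_i."""
          p = np.asarray(p, dtype=float)
          p = p / p.sum()
          nz = p[p > 0]
          return np.sum(nz * np.log2(1.0 / nz))

      # Stand-in detection-rate distributions for one disk diameter
      p_ideal = np.array([0.25, 0.25, 0.25, 0.25])     # information content available
      p_observed = np.array([0.55, 0.25, 0.15, 0.05])  # information actually obtained
      information_loss = entropy(p_ideal) - entropy(p_observed)
      print(f"IL = {information_loss:.3f} bits")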

  10. Geometric assessment of image quality using digital image registration techniques

    NASA Technical Reports Server (NTRS)

    Tisdale, G. E.

    1976-01-01

    Image registration techniques were developed to perform a geometric quality assessment of multispectral and multitemporal image pairs. Based upon LANDSAT tapes, accuracies to a small fraction of a pixel were demonstrated. Because it is insensitive to the choice of registration areas, the technique is well suited to implementation in an automatic system. It may be implemented at megapixel-per-second rates using a commercial minicomputer in combination with a special-purpose digital preprocessor.

  11. Image quality measures and their performance

    NASA Technical Reports Server (NTRS)

    Eskicioglu, Ahmet M.; Fisher, Paul S.; Chen, Si-Yuan

    1994-01-01

    A number of quality measures are evaluated for gray scale image compression. They are all bivariate exploiting the differences between corresponding pixels in the original and degraded images. It is shown that although some numerical measures correlate well with the observers' response for a given compression technique, they are not reliable for an evaluation across different techniques. The two graphical measures (histograms and Hosaka plots), however, can be used to appropriately specify not only the amount, but also the type of degradation in reconstructed images.

  12. New algorithm for the passive THz image quality enhancement

    NASA Astrophysics Data System (ADS)

    Trofimov, Vyacheslav A.; Trofimov, Vladislav V.

    2016-04-01

    We propose a new approach to THz image quality enhancement based on the correlation function between the image under consideration and a standard image. The standard image is moved in two directions across the image under analysis, producing a 2D correlation function. Multiplying this function by a grey-scale colour value, we restore the image under analysis, which suppresses noise in the new image. This method allows details of a person's clothes to be seen, which amounts to a many-fold increase in the effective temperature resolution of the passive THz camera. We discuss the choice of standard-image characteristics needed to obtain a high-contrast correlation function. Another feature of our approach is the possibility of bringing the person's image closer to the THz camera by computer processing of the image alone; in effect, the distance between the person and the passive THz camera can be "decreased". The algorithm is convenient to use and achieves high performance.
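
    A minimal stand-in for the correlation step, not the authors' algorithm: sliding cross-correlation of a template (the "standard image") against a larger image, using SciPy, with a rough global normalization.

      import numpy as np
      from scipy.signal import correlate2d

      def correlation_map(image, template):
          """2-D correlation of a zero-mean template with a zero-mean image,
          normalized to roughly [-1, 1]."""
          img = image - image.mean()
          tpl = template - template.mean()
          corr = correlate2d(img, tpl, mode="same", boundary="symm")
          return corr / (np.linalg.norm(img) * np.linalg.norm(tpl) + 1e-12)

      # Stand-in data: noisy scene containing a bright square 'target'
      rng = np.random.default_rng(0)
      scene = rng.normal(0, 1, (128, 128))
      scene[40:56, 60:76] += 3.0
      target = np.ones((16, 16))
      cmap = correlation_map(scene, target)
      peak = np.unravel_index(np.argmax(cmap), cmap.shape)   # location of best match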

  13. Image Quality Analysis of Various Gastrointestinal Endoscopes: Why Image Quality Is a Prerequisite for Proper Diagnostic and Therapeutic Endoscopy

    PubMed Central

    Ko, Weon Jin; An, Pyeong; Ko, Kwang Hyun; Hahm, Ki Baik; Hong, Sung Pyo

    2015-01-01

    Arising from human curiosity in terms of the desire to look within the human body, endoscopy has undergone significant advances in modern medicine. Direct visualization of the gastrointestinal (GI) tract by traditional endoscopy was first introduced over 50 years ago, after which fairly rapid advancement from rigid esophagogastric scopes to flexible scopes and high definition videoscopes has occurred. In an effort towards early detection of precancerous lesions in the GI tract, several high-technology imaging scopes have been developed, including narrow band imaging, autofocus imaging, magnified endoscopy, and confocal microendoscopy. However, these modern developments have resulted in fundamental imaging technology being skewed towards red-green-blue and this technology has obscured the advantages of other endoscope techniques. In this review article, we have described the importance of image quality analysis using a survey to consider the diversity of endoscope system selection in order to better achieve diagnostic and therapeutic goals. The ultimate aims can be achieved through the adoption of modern endoscopy systems that obtain high image quality. PMID:26473119

  14. Does resolution really increase image quality?

    NASA Astrophysics Data System (ADS)

    Tisse, Christel-Loïc; Guichard, Frédéric; Cao, Frédéric

    2008-02-01

    A general trend in the CMOS image sensor market is for increasing resolution (by having a larger number of pixels) while keeping a small form factor by shrinking photosite size. This article discusses the impact of this trend on some of the main attributes of image quality. The first example is image sharpness. A smaller pitch theoretically allows a larger limiting resolution which is derived from the Modulation Transfer Function (MTF). But recent sensor technologies (1.75μm, and soon 1.45μm) with typical aperture f/2.8 are clearly reaching the size of the diffraction blur spot. A second example is the impact on pixel light sensitivity and image sensor noise. For photonic noise, the Signal-to-Noise-Ratio (SNR) is typically a decreasing function of the resolution. To evaluate whether shrinking pixel size could be beneficial to the image quality, the tradeoff between spatial resolution and light sensitivity is examined by comparing the image information capacity of sensors with varying pixel size. A theoretical analysis that takes into consideration measured and predictive models of pixel performance degradation and improvement associated with CMOS imager technology scaling, is presented. This analysis is completed by a benchmarking of recent commercial sensors with different pixel technologies.
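
    To make the diffraction argument concrete, a back-of-the-envelope calculation (standard Airy-disk formula with assumed values, not figures taken from the paper) compares the diffraction blur spot at f/2.8 with the pixel pitches mentioned:

      # Airy disk first-zero diameter d = 2.44 * lambda * N, in micrometres
      wavelength_um = 0.55          # green light
      f_number = 2.8
      airy_diameter_um = 2.44 * wavelength_um * f_number   # ~3.76 um

      for pitch_um in (2.2, 1.75, 1.45):
          print(f"pixel pitch {pitch_um} um: Airy disk spans "
                f"{airy_diameter_um / pitch_um:.1f} pixels")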

  15. Image Quality in Analog and Digital Microtechniques.

    ERIC Educational Resources Information Center

    White, William

    1991-01-01

    Discusses the basic principles of the application of microfilm (analog) and electronic (digital) technologies for data storage. Image quality is examined, searching and retrieval capabilities are considered, and hardcopy output resolution is described. It is concluded that microfilm is still the preferred archival medium. (5 references) (LRW)

  16. Image Quality Indicator for Infrared Inspections

    NASA Technical Reports Server (NTRS)

    Burke, Eric

    2011-01-01

    The quality of images generated during an infrared thermal inspection depends on many system variables, settings, and parameters to include the focal length setting of the IR camera lens. If any relevant parameter is incorrect or sub-optimal, the resulting IR images will usually exhibit inherent unsharpness and lack of resolution. Traditional reference standards and image quality indicators (IQIs) are made of representative hardware samples and contain representative flaws of concern. These standards are used to verify that representative flaws can be detected with the current IR system settings. However, these traditional standards do not enable the operator to quantify the quality limitations of the resulting images, i.e. determine the inherent maximum image sensitivity and image resolution. As a result, the operator does not have the ability to optimize the IR inspection system prior to data acquisition. The innovative IQI described here eliminates this limitation and enables the operator to objectively quantify and optimize the relevant variables of the IR inspection system, resulting in enhanced image quality with consistency and repeatability in the inspection application. The IR IQI consists of various copper foil features of known sizes that are printed on a dielectric non-conductive board. The significant difference in thermal conductivity between the two materials ensures that each appears with a distinct grayscale or brightness in the resulting IR image. Therefore, the IR image of the IQI exhibits high contrast between the copper features and the underlying dielectric board, which is required to detect the edges of the various copper features. The copper features consist of individual elements of various shapes and sizes, or of element-pairs of known shapes and sizes and with known spacing between the elements creating the pair. For example, filled copper circles with various diameters can be used as individual elements to quantify the image sensitivity

  17. Prediction of Viking lander camera image quality

    NASA Technical Reports Server (NTRS)

    Huck, F. O.; Burcher, E. E.; Jobson, D. J.; Wall, S. D.

    1976-01-01

    Formulations are presented that permit prediction of image quality as a function of camera performance, surface radiance properties, and lighting and viewing geometry. Predictions made for a wide range of surface radiance properties reveal that image quality depends strongly on proper camera dynamic range command and on favorable lighting and viewing geometry. Proper camera dynamic range commands depend mostly on the surface albedo that will be encountered. Favorable lighting and viewing geometries depend mostly on lander orientation with respect to the diurnal sun path over the landing site, and tend to be independent of surface albedo and illumination scattering function. Side lighting with low sun elevation angles (10 to 30 deg) is generally favorable for imaging spatial details and slopes, whereas high sun elevation angles are favorable for measuring spectral reflectances.

  18. Naturalness and interestingness of test images for visual quality evaluation

    NASA Astrophysics Data System (ADS)

    Halonen, Raisa; Westman, Stina; Oittinen, Pirkko

    2011-01-01

    Balanced and representative test images are needed to study perceived visual quality in various application domains. This study investigates naturalness and interestingness as image quality attributes in the context of test images. Taking a top-down approach we aim to find the dimensions which constitute naturalness and interestingness in test images and the relationship between these high-level quality attributes. We compare existing collections of test images (e.g. Sony sRGB images, ISO 12640 images, Kodak images, Nokia images and test images developed within our group) in an experiment combining quality sorting and structured interviews. Based on the data gathered we analyze the viewer-supplied criteria for naturalness and interestingness across image types, quality levels and judges. This study advances our understanding of subjective image quality criteria and enables the validation of current test images, furthering their development.

  19. Physical measures of image quality in mammography

    NASA Astrophysics Data System (ADS)

    Chakraborty, Dev P.

    1996-04-01

    A recently introduced method for quantitative analysis of images of the American College of Radiology (ACR) mammography accreditation phantom has been extended to include signal-to-noise ratio (SNR) measurements, and has been applied to survey the image quality of 54 mammography machines from 17 hospitals. Participants sent us phantom images to be evaluated for each mammography machine at their hospital. Each phantom was loaned to us for obtaining images of the wax insert plate on a reference machine at our institution. The images were digitized and analyzed to yield indices that quantified the image quality of the machines precisely. We have developed methods for normalizing for the variation of the individual speck sizes between different ACR phantoms, for the variation of the speck sizes within a microcalcification group, and for variations in overall speeds of the mammography systems. In terms of the microcalcification SNR, the variability of the x-ray machines was 40.5% when no allowance was made for phantom or mAs variations. This dropped to 17.1% when phantom variability was accounted for, and to 12.7% when mAs variability was also allowed for. Our work shows the feasibility of practical, low-cost, objective and accurate evaluations, as a useful adjunct to the present ACR method.

  20. Medical Imaging Image Quality Assessment with Monte Carlo Methods

    NASA Astrophysics Data System (ADS)

    Michail, C. M.; Karpetas, G. E.; Fountos, G. P.; Kalyvas, N. I.; Martini, Niki; Koukou, Vaia; Valais, I. G.; Kandarakis, I. S.

    2015-09-01

    The aim of the present study was to assess the image quality of PET scanners through a thin layer chromatography (TLC) plane source. The source was simulated using a previously validated Monte Carlo model. The model was developed using the GATE MC package, and reconstructed images were obtained with the STIR software for tomographic image reconstruction, with cluster computing. The PET scanner simulated in this study was the GE DiscoveryST. A plane source consisting of a TLC plate was simulated as a layer of silica gel on an aluminum (Al) foil substrate, immersed in an 18F-FDG bath solution (1 MBq). Image quality was assessed in terms of the Modulation Transfer Function (MTF). MTF curves were estimated from transverse reconstructed images of the plane source. Images were reconstructed by the maximum likelihood estimation (MLE)-OSMAPOSL algorithm. OSMAPOSL reconstruction was assessed using various subsets (3 to 21) and iterations (1 to 20), as well as various beta (hyper) parameter values. MTF values were found to increase up to the 12th iteration and to remain almost constant thereafter. The MTF improves when lower beta values are used. The simulated PET evaluation method based on the TLC plane source can also be useful in research for the further development of PET and SPECT scanners through GATE simulations.
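
    A compact sketch of how an MTF curve can be estimated from a reconstructed profile across a thin plane/line source; this is the generic Fourier-of-spread-function approach, not the exact GATE/STIR pipeline, with NumPy assumed available and a synthetic profile standing in for measured data.

      import numpy as np

      def mtf_from_lsf(lsf, pixel_mm):
          """MTF as the magnitude of the Fourier transform of a line spread function,
          normalized to 1 at zero frequency; returns (frequencies in cycles/mm, MTF)."""
          lsf = np.asarray(lsf, dtype=float)
          lsf = lsf / lsf.sum()
          mtf = np.abs(np.fft.rfft(lsf))
          freqs = np.fft.rfftfreq(len(lsf), d=pixel_mm)
          return freqs, mtf / mtf[0]

      # Stand-in LSF: Gaussian profile across the reconstructed plane source
      x = np.arange(-32, 32)
      lsf = np.exp(-x**2 / (2 * 2.0**2))
      freqs, mtf = mtf_from_lsf(lsf, pixel_mm=2.0)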

  1. Image registration for DSA quality enhancement.

    PubMed

    Buzug, T M; Weese, J

    1998-01-01

    A generalized framework for histogram-based similarity measures is presented and applied to the image-enhancement task in digital subtraction angiography (DSA). The class of differentiable, strictly convex weighting functions is identified as suitable weightings of histograms for measuring the degree of clustering that goes along with registration. With respect to computation time, the energy similarity measure is the function of choice for the registration of mask and contrast image prior to subtraction. The robustness of the energy measure is studied for geometrical image distortions like rotation and scaling. Additionally, it is investigated how the histogram binning and inhomogeneous motion inside the templates influence the quality of the similarity measure. Finally, the registration success for the automated procedure is compared with the manually shift-corrected image pair of the head.
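
    In histogram-based registration the energy measure referred to above is typically the sum of squared entries of the normalized joint grey-value histogram of mask and contrast image; the sketch below is written under that assumption and is not taken verbatim from the paper.

      import numpy as np

      def energy_similarity(mask_img, contrast_img, bins=64):
          """Energy of the joint grey-value histogram: sum of squared probabilities.
          The value grows as the joint histogram clusters, i.e. as the images align."""
          joint, _, _ = np.histogram2d(mask_img.ravel(), contrast_img.ravel(), bins=bins)
          p = joint / joint.sum()
          return np.sum(p ** 2)

      # A registration loop would evaluate this measure for each candidate shift of the
      # mask image and keep the shift that maximizes it before subtraction.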

  2. Blind image quality assessment via deep learning.

    PubMed

    Hou, Weilong; Gao, Xinbo; Tao, Dacheng; Li, Xuelong

    2015-06-01

    This paper investigates how to blindly evaluate the visual quality of an image by learning rules from linguistic descriptions. Extensive psychological evidence shows that humans prefer to conduct evaluations qualitatively rather than numerically. The qualitative evaluations are then converted into numerical scores to fairly benchmark objective image quality assessment (IQA) metrics. Recently, many learning-based IQA models have been proposed by analyzing the mapping from images to numerical ratings. However, the learnt mapping can hardly be accurate enough because some information is lost in such an irreversible conversion from linguistic descriptions to numerical scores. In this paper, we propose a blind IQA model that learns qualitative evaluations directly and outputs numerical scores for general utilization and fair comparison. Images are represented by natural scene statistics features. A discriminative deep model is trained to classify the features into five grades, corresponding to five explicit mental concepts, i.e., excellent, good, fair, poor, and bad. A newly designed quality pooling is then applied to convert the qualitative labels into scores. The classification framework is not only much more natural than regression-based models, but also robust to the small-sample-size problem. Thorough experiments are conducted on popular databases to verify the model's effectiveness, efficiency, and robustness.

  3. Quality in university physics teaching: is it being achieved?

    NASA Astrophysics Data System (ADS)

    1998-11-01

    This was the title of a Physics Discipline Workshop held at the University of Leeds on 10 and 11 September 1998. Organizer Ashley Clarke of the university's Physics and Astronomy Department collected together an interesting variety of speakers polygonically targeting the topic, although as workshops go the audience didn't have to do much work except listen. There were representatives from 27 university physics departments who must have gone away with a lot to think about and possibly some new academic year resolutions to keep. But as a non-university no-longer teacher of (school) physics I was impressed with the general commitment to the idea that if you get the right quality of learning the teaching must be OK. I also learned (but have since forgotten) a lot of new acronyms. The keynote talk was by Gillian Hayes, Associate Director of the Quality Assurance Agency for Higher Education (QAA). She explained the role and implementation of the Subject Reviews that QAA is making for all subjects in all institutions of higher education on a five- to seven-year cycle. Physics Education hopes to publish an article about all this from QAA shortly. In the meantime, suffice it to say that the review looks at six aspects of provision, essentially from the point of view of enhancing students' experiences and learning. No doubt all participants would agree with this (they'd better if they want to score well on the Review) but may have been more worried by the next QAA speaker, Norman Jackson, who drummed in the basic facts of life as HE moves from an elite provision system to a mass provision system. He had an interesting graph showing how in the last ten years or so more students were getting firsts and upper seconds and fewer getting thirds. It seems that all those A-level students getting better grades than they used to are carrying on their good luck to degree level. But they still can't do maths (allegedly) and I doubt whether Jon Ogborn (IoP Advancing Physics Project

  4. Quality After-School Programming and Its Relationship to Achievement-Related Behaviors and Academic Performance

    ERIC Educational Resources Information Center

    Grassi, Annemarie M.

    2012-01-01

    The purpose of this study is to understand the relationship between quality social support networks developed through high quality afterschool programming and achievement amongst middle school and high school aged youth. This study seeks to develop a deeper understanding of how quality after-school programs influence a youth's developmental…

  5. Does Teacher Quality Affect Student Achievement? An Empirical Study in Indonesia

    ERIC Educational Resources Information Center

    Sirait, Swando

    2016-01-01

    The objective of this study is to examine the relationship between teacher quality and student achievement in Indonesia. Teacher quality in this study is defined as the teacher evaluation score in the areas of professional and pedagogic competency. The results of this study are consonant with previous studies in finding that teacher quality, in terms of teacher…

  6. Retinal image quality in the rodent eye.

    PubMed

    Artal, P; Herreros de Tejada, P; Muñoz Tedó, C; Green, D G

    1998-01-01

    Many rodents do not see well. For a target to be resolved by a rat or a mouse, it must subtend a visual angle of a degree or more. It is commonly assumed that this poor spatial resolving capacity is due to neural rather than optical limitations, but the quality of the retinal image has not been well characterized in these animals. We have modified a double-pass apparatus, initially designed for the human eye, so it could be used with rodents to measure the modulation transfer function (MTF) of the eye's optics. That is, the double-pass retinal image of a monochromatic (lambda = 632.8 nm) point source was digitized with a CCD camera. From these double-pass measurements, the single-pass MTF was computed under a variety of conditions of focus and with different pupil sizes. Even with the eye in best focus, the image quality in both rats and mice is exceedingly poor. With a 1-mm pupil, for example, the MTF in the rat had an upper limit of about 2.5 cycles/deg, rather than the 28 cycles/deg one would obtain if the eye were a diffraction-limited system. These images are about 10 times worse than the comparable retinal images in the human eye. Using our measurements of the optics and the published behavioral and electrophysiological contrast sensitivity functions (CSFs) of rats, we have calculated the CSF that the rat would have if it had perfect rather than poor optics. We find, interestingly, that diffraction-limited optics would produce only slight improvement overall. That is, in spite of retinal images which are of very low quality, the upper limit of visual resolution in rodents is neurally determined. Rats and mice seem to have eyes in which the optics and retina/brain are well matched.

  7. Dried fruits quality assessment by hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Serranti, Silvia; Gargiulo, Aldo; Bonifazi, Giuseppe

    2012-05-01

    Dried fruit products present different market values according to their quality. Such quality is usually quantified in terms of the freshness of the products, as well as the presence of contaminants (pieces of shell, husk, and small stones), defects, mould and decay. The combination of these parameters, in terms of relative presence, represents a fundamental set of attributes conditioning the human-sense-detectable attributes of dried fruits (visual appearance, organoleptic properties, etc.) and their overall quality as marketable products. Sorting-selection strategies exist, but they sometimes fail when a higher degree of detection is required, especially when discriminating between dried fruits of relatively small dimensions and when aiming to perform an "early detection" of the pathogen agents responsible for future mould and decay development. Surface characteristics of dried fruits can be investigated by hyperspectral imaging (HSI). In this paper, specific "ad hoc" applications addressed to propose quality detection logics, adopting a hyperspectral imaging (HSI) based approach, are described, compared and critically evaluated. Reflectance spectra of selected dried fruits (hazelnuts) of different quality and characterized by the presence of different contaminants and defects have been acquired by a laboratory device equipped with two HSI systems working in two different spectral ranges: the visible-near infrared field (400-1000 nm) and the near infrared field (1000-1700 nm). The spectra have been processed and the results evaluated adopting both a simple and fast wavelength band ratio approach and a more sophisticated classification logic based on principal component analysis (PCA).

  8. Fluorescent microthermal imaging-theory and methodology for achieving high thermal resolution images

    SciTech Connect

    Barton, D.L.; Tangyunyong, P.

    1995-09-01

    The fluorescent microthermal imaging technique (FMI) involves coating a sample surface with an inorganic-based thin film that, upon exposure to UV light, emits temperature-dependent fluorescence. FMI offers the ability to create thermal maps of integrated circuits with a thermal resolution theoretically limited to 1 m°C and a spatial resolution which is diffraction-limited to 0.3 μm. Even though the fluorescent microthermal imaging (FMI) technique has been around for more than a decade, many factors that can significantly affect the thermal image quality have not been systematically studied and characterized. After a brief review of FMI theory, we will present our recent results demonstrating for the first time three important factors that have a dramatic impact on the thermal quality and sensitivity of FMI. First, the limitations imparted by photon shot noise and the improvement in the signal-to-noise ratio realized through signal averaging will be discussed. Second, ultraviolet bleaching, an unavoidable problem with FMI as it is currently performed, will be characterized to identify ways to minimize its effect. Finally, the impact of film dilution on thermal sensitivity will be discussed.
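
    For the shot-noise point, the expected square-root improvement of SNR with frame averaging can be illustrated numerically (synthetic Poisson data only, not FMI measurements):

      import numpy as np

      rng = np.random.default_rng(1)
      mean_photons = 100.0                      # photons per pixel per frame

      for n_frames in (1, 4, 16, 64):
          frames = rng.poisson(mean_photons, size=(n_frames, 10000))
          avg = frames.mean(axis=0)
          snr = avg.mean() / avg.std()          # shot-noise-limited SNR of the average
          print(f"{n_frames:3d} frames: SNR ≈ {snr:.1f}")   # grows roughly as sqrt(n_frames)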

  9. Image quality vs. sensitivity: fundamental sensor system engineering

    NASA Astrophysics Data System (ADS)

    Schueler, Carl F.

    2008-08-01

    This paper focuses on the fundamental system engineering tradeoff driving almost all remote sensing design efforts, affecting complexity, cost, performance, schedule, and risk: image quality vs. sensitivity. This single trade encompasses every aspect of performance, including radiometric accuracy, dynamic range and precision, as well as spatial, spectral, and temporal coverage and resolution. This single trade also encompasses every aspect of design, including mass, dimensions, power, orbit selection, spacecraft interface, sensor and spacecraft functional trades, pointing or scanning architecture, sensor architecture (e.g., field-of-view, optical form, aperture, f/#, material properties), electronics, mechanical and thermal properties. The relationship between image quality and sensitivity is introduced based on the concepts of modulation transfer function (MTF) and signal-to-noise ratio (SNR) with examples to illustrate the balance to be achieved by the system architect to optimize cost, complexity, performance and risk relative to end-user requirements.

  10. Subjective experience of image quality: attributes, definitions, and decision making of subjective image quality

    NASA Astrophysics Data System (ADS)

    Leisti, Tuomas; Radun, Jenni; Virtanen, Toni; Halonen, Raisa; Nyman, Göte

    2009-01-01

    Subjective quality rating does not reflect the properties of the image directly, but it is the outcome of a quality decision making process, which includes quantification of subjective quality experience. Such a rich subjective content is often ignored. We conducted two experiments (with 28 and 20 observers), in order to study the effect of paper grade on image quality experience of the ink-jet prints. Image quality experience was studied using a grouping task and a quality rating task. Both tasks included an interview, but in the latter task we examined the relations of different subjective attributes in this experience. We found out that the observers use an attribute hierarchy, where the high-level attributes are more experiential, general and abstract, while low-level attributes are more detailed and concrete. This may reflect the hierarchy of the human visual system. We also noticed that while the observers show variable subjective criteria for IQ, the reliability of average subjective estimates is high: when two different observer groups estimated the same images in the two experiments, correlations between the mean ratings were between .986 and .994, depending on the image content.

  11. Visual pattern degradation based image quality assessment

    NASA Astrophysics Data System (ADS)

    Wu, Jinjian; Li, Leida; Shi, Guangming; Lin, Weisi; Wan, Wenfei

    2015-08-01

    In this paper, we introduce a visual pattern degradation based full-reference (FR) image quality assessment (IQA) method. Research on visual recognition indicates that the human visual system (HVS) is highly adaptive in extracting visual structures for scene understanding. Existing structure degradation based IQA methods mainly take local luminance contrast to represent structure, and measure quality as degradation of luminance contrast. In this paper, we suggest that structure includes not only luminance contrast but also orientation information. Therefore, we analyze the orientation characteristic for structure description. Inspired by the orientation selectivity mechanism in the primary visual cortex, we introduce a novel visual pattern to represent the structure of a local region. The quality is then measured as the degradation of both luminance contrast and the visual pattern. Experimental results on five benchmark databases demonstrate that the proposed visual pattern can effectively represent visual structure and that the proposed IQA method performs better than existing IQA metrics.

  12. Paediatric cerebrovascular CT angiography—towards better image quality

    PubMed Central

    Thust, Stefanie C.; Chong, Wui Khean Kling; Gunny, Roxana; Mazumder, Asif; Poitelea, Marius; Welsh, Anna; Ederies, Ash

    2014-01-01

    Background Paediatric cerebrovascular CT angiography (CTA) can be challenging to perform due to variable cardiovascular physiology between different age groups and the risk of movement artefact. This analysis aimed to determine what proportion of CTA at our institution was of diagnostic quality and to identify technical factors which could be improved. Materials and methods A retrospective analysis of 20 cases was performed at a national paediatric neurovascular centre, assessing image quality with a subjective scoring system and Hounsfield Unit (HU) measurements. Demographic data, contrast dose, flow rate and triggering times were recorded for each patient. Results Using a qualitative scoring system, 75% of studies were found to be of diagnostic quality (n=9 ‘good’, n=6 ‘satisfactory’) and 25% (n=5) were ‘poor’. Those judged subjectively to be poor had an arterial contrast density of less than 250 HU. Increased arterial opacification was achieved for cases performed with an increased flow rate (2.5-4 mL/s) and a higher intravenous contrast dose (2 mL/kg). Triggering was found to be well timed in nine cases, early in four cases and late in seven cases. Of the scans triggered early, 75% were poor. Of the scans triggered late, fewer (29%) were poor. Conclusions High flow rates (>2.5 mL/s) were a key factor for achieving high quality paediatric cerebrovascular CTA imaging. However, appropriate triggering, by starting the scan immediately on contrast opacification of the monitoring vessel, plays an important role and can maintain image quality when flow rates are lower. Early triggering appeared more detrimental than late. PMID:25525579

  13. Storage phosphor radiography of wrist fractures: a subjective comparison of image quality at varying exposure levels.

    PubMed

    Peer, Regina; Lanser, Anton; Giacomuzzi, Salvatore M; Pechlaner, Sigurd; Künzel, Heinz; Bodner, Gerd; Gaber, O; Jaschke, Werner; Peer, Siegfried

    2002-06-01

    Image quality of storage phosphor radiographs acquired at different exposure levels was compared to define the minimal radiation dose needed to achieve images which allow reliable detection of wrist fractures. In a study of 33 fractured anatomical wrist specimens, the image quality of storage phosphor radiographs was assessed on a diagnostic PACS workstation by three observers. Images were acquired at exposure levels corresponding to speed classes of 100, 200, 400 and 800. Cortical bone surface, trabecular bone, soft tissues and fracture delineation were judged on a subjective basis. Image quality was rated according to a standard protocol and statistical evaluation was performed based on an analysis of variance (ANOVA). Images at a dose reduction of 37% were rated as being of sufficient quality without loss of diagnostic accuracy. Sufficient trabecular and cortical bone presentation was still achieved at a dose reduction of 62%. The latter images, however, were considered unacceptable for fracture detection. To achieve high-quality storage phosphor radiographs, which allow a reliable evaluation of wrist fractures, a minimum exposure dose equivalent to a speed class of 200 is needed. For general-purpose skeletal radiography, however, a dose reduction of up to 62% can be achieved. A choice of exposure settings according to the clinical situation (ALARA principle) is recommended to achieve possible dose reductions.

  14. Reconstruction algorithm for improved ultrasound image quality.

    PubMed

    Madore, Bruno; Meral, F Can

    2012-02-01

    A new algorithm is proposed for reconstructing raw RF data into ultrasound images. Previous delay-and-sum beamforming reconstruction algorithms are essentially one-dimensional, because a sum is performed across all receiving elements. In contrast, the present approach is two-dimensional, potentially allowing any time point from any receiving element to contribute to any pixel location. Computer-intensive matrix inversions are performed once, in advance, to create a reconstruction matrix that can be reused indefinitely for a given probe and imaging geometry. Individual images are generated through a single matrix multiplication with the raw RF data, without any need for separate envelope detection or gridding steps. Raw RF data sets were acquired using a commercially available digital ultrasound engine for three imaging geometries: a 64-element array with a rectangular field-of-view (FOV), the same probe with a sector-shaped FOV, and a 128-element array with rectangular FOV. The acquired data were reconstructed using our proposed method and a delay-and-sum beamforming algorithm for comparison purposes. Point spread function (PSF) measurements from metal wires in a water bath showed that the proposed method was able to reduce the size of the PSF and its spatial integral by about 20 to 38%. Images from a commercially available quality-assurance phantom had greater spatial resolution and contrast when reconstructed with the proposed approach.
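
    The core idea, that reconstruction reduces to one matrix multiplication once the matrix has been built offline, can be sketched as follows; the dimensions are toy values and the matrix is random only to keep the sketch self-contained, since building the real matrix requires the probe geometry and the inversions described in the paper.

      import numpy as np

      # Toy dimensions: 32x32 image grid, 16 receive elements x 256 time samples
      n_pixels = 32 * 32
      n_samples = 16 * 256

      # Offline, once per probe and imaging geometry: build and store the
      # reconstruction matrix (random placeholder here).
      R = np.random.default_rng(0).normal(size=(n_pixels, n_samples)).astype(np.float32) * 1e-3

      def reconstruct(rf_frame):
          """Per frame: flatten the raw RF data and apply the precomputed matrix."""
          return (R @ rf_frame.ravel()).reshape(32, 32)

      rf = np.random.default_rng(1).normal(size=(16, 256)).astype(np.float32)
      image = reconstruct(rf)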

  15. Model-based quantification of image quality

    NASA Technical Reports Server (NTRS)

    Hazra, Rajeeb; Miller, Keith W.; Park, Stephen K.

    1989-01-01

    In 1982, Park and Schowengerdt published an end-to-end analysis of a digital imaging system quantifying three principal degradation components: (1) image blur - blurring caused by the acquisition system, (2) aliasing - caused by insufficient sampling, and (3) reconstruction blur - blurring caused by the imperfect interpolative reconstruction. This analysis, which measures degradation as the square of the radiometric error, includes the sample-scene phase as an explicit random parameter and characterizes the image degradation caused by imperfect acquisition and reconstruction together with the effects of undersampling and random sample-scene phases. In a recent paper Mitchell and Netravelli displayed the visual effects of the above mentioned degradations and presented subjective analysis about their relative importance in determining image quality. The primary aim of the research is to use the analysis of Park and Schowengerdt to correlate their mathematical criteria for measuring image degradations with subjective visual criteria. Insight gained from this research can be exploited in the end-to-end design of optical systems, so that system parameters (transfer functions of the acquisition and display systems) can be designed relative to each other, to obtain the best possible results using quantitative measurements.

  16. Hyperspectral and multispectral imaging for evaluating food safety and quality

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Spectral imaging technologies have been developed rapidly during the past decade. This paper presents hyperspectral and multispectral imaging technologies in the area of food safety and quality evaluation, with an introduction, demonstration, and summarization of the spectral imaging techniques avai...

  17. Perceptual quality prediction on authentically distorted images using a bag of features approach

    PubMed Central

    Ghadiyaram, Deepti; Bovik, Alan C.

    2017-01-01

    Current top-performing blind perceptual image quality prediction models are generally trained on legacy databases of human quality opinion scores on synthetically distorted images. Therefore, they learn image features that effectively predict human visual quality judgments of inauthentic and usually isolated (single) distortions. However, real-world images usually contain complex composite mixtures of multiple distortions. We study the perceptually relevant natural scene statistics of such authentically distorted images in different color spaces and transform domains. We propose a “bag of feature maps” approach that avoids assumptions about the type of distortion(s) contained in an image and instead focuses on capturing consistencies—or departures therefrom—of the statistics of real-world images. Using a large database of authentically distorted images, human opinions of them, and bags of features computed on them, we train a regressor to conduct image quality prediction. We demonstrate the competence of the features toward improving automatic perceptual quality prediction by testing a learned algorithm using them on a benchmark legacy database as well as on a newly introduced distortion-realistic resource called the LIVE In the Wild Image Quality Challenge Database. We extensively evaluate the perceptual quality prediction model and algorithm and show that it is able to achieve good-quality prediction power that is better than other leading models. PMID:28129417
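
    As a rough illustration of the bag-of-features-plus-regression idea (not the authors' feature set), the sketch below computes simple natural-scene-statistics features per color channel and fits a generic regressor to hypothetical opinion scores. The MSCN normalization, the specific feature choices, and the use of SVR in place of the paper's learner are all assumptions.

      import numpy as np
      from scipy.ndimage import gaussian_filter
      from sklearn.svm import SVR

      def mscn(channel, eps=1e-3):
          """Mean-subtracted, contrast-normalized coefficients of one channel."""
          mu = gaussian_filter(channel, 7 / 6)
          sigma = np.sqrt(np.abs(gaussian_filter(channel ** 2, 7 / 6) - mu ** 2))
          return (channel - mu) / (sigma + eps)

      def features(rgb):
          """Concatenate a few statistics of each channel's MSCN coefficients."""
          feats = []
          for ch in range(3):
              c = mscn(rgb[..., ch].astype(float))
              feats += [c.mean(), c.var(), np.mean(np.abs(c)) ** 2 / np.mean(c ** 2)]
          return np.array(feats)

      # X: stacked feature vectors, y: (synthetic) mean opinion scores.
      X = np.vstack([features(np.random.rand(64, 64, 3)) for _ in range(50)])
      y = np.random.uniform(0, 100, 50)
      model = SVR().fit(X, y)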

  18. Achieving the image interpolation algorithm on the FPGA platform based on ImpulseC

    NASA Astrophysics Data System (ADS)

    Jia, Ge; Peng, Xianrong

    2013-10-01

    ImpulseC is based on the C language and can describe highly parallel and multi-process applications. It also generates an underlying hardware description for the dedicated process. To improve on the well-known bi-cubic interpolation algorithm, we design bi-cubic convolution template algorithms with better computing performance and higher efficiency. Simulation results show that the interpolation method not only improves the interpolation accuracy and image quality, but also preserves the texture of the image well. Based on the ImpulseC hardware design tools, we can make use of the compiler features to further parallelize the algorithm so that it is more conducive to hardware implementation. Based on the Xilinx Spartan3 XC3S4000 chip, our method achieves real-time interpolation at a rate of 50 fps. The FPGA experimental results show that the stream of output images after interpolation is robust and real-time. The summary shows that the allocation of hardware resources is reasonable. Compared with existing hand-written HDL code, our approach has the advantage of parallel speedup. It provides a novel path from C to FPGA-based embedded hardware systems for software engineers.
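
    For reference, the bi-cubic convolution template this kind of work builds on is typically Keys' 4-tap cubic kernel. A small software sketch (not the ImpulseC/FPGA implementation) of one-dimensional interpolation with that kernel follows, with a = -0.5 as the usual parameter choice.

      import numpy as np

      def cubic_kernel(x, a=-0.5):
          """Keys' bi-cubic convolution kernel; a = -0.5 is the common choice."""
          x = np.abs(x)
          out = np.zeros_like(x, dtype=float)
          m1 = x <= 1
          m2 = (x > 1) & (x < 2)
          out[m1] = (a + 2) * x[m1] ** 3 - (a + 3) * x[m1] ** 2 + 1
          out[m2] = a * x[m2] ** 3 - 5 * a * x[m2] ** 2 + 8 * a * x[m2] - 4 * a
          return out

      def interp_row(row, t):
          """Interpolate a 1-D signal at fractional position t using the 4-tap
          cubic convolution template (the same template can be pipelined in
          hardware, one multiply-accumulate per tap)."""
          i = int(np.floor(t))
          idx = np.clip(np.arange(i - 1, i + 3), 0, len(row) - 1)
          w = cubic_kernel(t - idx.astype(float))
          return float(np.dot(w, row[idx]))

      print(interp_row(np.array([0.0, 1.0, 4.0, 9.0, 16.0]), 2.5))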

  19. Quality of Teaching Mathematics and Learning Achievement Gains: Evidence from Primary Schools in Kenya

    ERIC Educational Resources Information Center

    Ngware, Moses W.; Ciera, James; Musyoka, Peter K.; Oketch, Moses

    2015-01-01

    This paper examines the contribution of quality mathematics teaching to student achievement gains. Quality of mathematics teaching is assessed through teacher demonstration of the five strands of mathematical proficiency, the level of cognitive task demands, and teacher mathematical knowledge. Data is based on 1907 grade 6 students who sat for the…

  20. 77 FR 1687 - EPA Workshops on Achieving Water Quality Through Integrated Municipal Stormwater and Wastewater...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-11

    ... AGENCY EPA Workshops on Achieving Water Quality Through Integrated Municipal Stormwater and Wastewater Plans Under the Clean Water Act (CWA) AGENCY: Environmental Protection Agency (EPA). ACTION: Notice... water quality objectives of the CWA. The workshops are intended to assist EPA in developing...

  1. Effects of Secondary School Students' Perceptions of Mathematics Education Quality on Mathematics Anxiety and Achievement

    ERIC Educational Resources Information Center

    Çiftçi, S. Koza

    2015-01-01

    The two aims of this study are as follows: (1) to compare the differences in mathematics anxiety and achievement in secondary school students according to their perceptions of the quality of their mathematics education via a cluster analysis and (2) to test the effects of the perception of mathematics education quality on anxiety and achievement…

  2. Low-Achieving Readers, High Expectations: Image Theatre Encourages Critical Literacy

    ERIC Educational Resources Information Center

    Rozansky, Carol Lloyd; Aagesen, Colleen

    2010-01-01

    Students in an eighth-grade, urban, low-achieving reading class were introduced to critical literacy through engagement in Image Theatre. Developed by liberatory dramatist Augusto Boal, Image Theatre gives participants the opportunity to examine texts in the triple role of interpreter, artist, and sculptor (i.e., image creator). The researchers…

  3. A relationship between slide quality and image quality in whole slide imaging (WSI).

    PubMed

    Yagi, Yukako; Gilbertson, John R

    2008-07-15

    This study examined the effect of tissue section thickness and consistency--parameters outside the direct control of the imaging devices themselves--on WSI capture speed and image quality. Preliminary data indicate that thinner, more consistent tissue sectioning (such as that produced by automated tissue sectioning robots) results in significantly faster WSI capture times and better image quality. A variety of tissue types (including human breast, mouse embryo, mouse brain, etc.) were sectioned using an (AS-200) Automated Tissue Sectioning System (Kurabo Industries, Osaka, Japan) at thicknesses from 2 to 9 µm (at 1 µm intervals) and stained with H&E by a standard method. The resulting slides were imaged with 5 different WSI devices (ScanScope CS, Aperio, CA; iScan, BioImagene, CA; DX40, DMetrix, AZ; NanoZoomer, Hamamatsu Photonics K.K., Japan; Mirax Scan, Carl Zeiss Inc., Germany) with sampling periods of 0.43-0.69 µm/pixel. Slides with different tissue thicknesses were compared for image quality, appropriate number of focus points, and overall scanning speed. Thinner sections (i.e. 3 µm versus 7 µm) required significantly fewer focus points and had significantly lower (10-15%) capture times. Improvement was seen with all devices and tissues tested. Furthermore, a panel of experienced pathologists judged image quality to be significantly better (for example, with better apparent resolution of nucleoli) with the thinner sections. Automated tissue sectioning is a very new technology; however, the AS-200 seems to be able to produce thinner, more consistent, flatter sections than manual methods at reasonably high throughput. The resulting tissue sections seem to be easier for a WSI system's focusing systems to deal with (compared to manually cut slides). Teaming an automated tissue-sectioning device with a WSI device shows promise in producing faster WSI throughput with better image quality.

  4. Development and application of a process window for achieving high-quality coating in a fluidized bed coating process.

    PubMed

    Laksmana, F L; Hartman Kok, P J A; Vromans, H; Frijlink, H W; Van der Voort Maarschalk, K

    2009-01-01

    Next to the coating formulation, process conditions play important roles in determining coating quality. This study aims to develop an operational window that separates the layering regime from the agglomeration regime and, furthermore, identifies the conditions that lead to the best coating quality in a fluidized bed coater. The bed relative humidity and the droplet size of the coating aerosol were predicted using a set of engineering models. The coating quality was characterized using a quantitative image analysis method, which measures the coating thickness distribution, the total porosity, and the pore size in the coating. The layering regime can be achieved by performing the coating process at a certain excess of the viscous Stokes number (ΔSt_v). This excess depends on the given bed relative humidity and droplet size: the higher the bed relative humidity, the higher the ΔSt_v required to keep the process in the layering regime. Further, it is shown that using bed relative humidity and droplet size alone is not enough to obtain constant coating quality. Changes in bed relative humidity and droplet size were found to correlate with the fractional area of particles sprayed per unit of time. This quantity can effectively serve as an additional parameter for better control of the coating quality. High coating quality is shown to be achieved by performing the process close to saturation and by spraying droplets small enough to obtain a high spraying rate, but not so small as to cause incomplete coverage of the core particles.

  5. Image analysis for dental bone quality assessment using CBCT imaging

    NASA Astrophysics Data System (ADS)

    Suprijanto; Epsilawati, L.; Hajarini, M. S.; Juliastuti, E.; Susanti, H.

    2016-03-01

    Cone beam computerized tomography (CBCT) is one of the X-ray imaging modalities applied in dentistry. This modality can visualize the oral region in 3D and at high resolution. CBCT jaw images carry potential information for the assessment of bone quality, which is often used for pre-operative implant planning. We propose a comparison method based on the normalized histogram (NH) of the region of the inter-dental septum and the premolar teeth. The NH characteristics of normal and abnormal bone conditions are then compared and analyzed. Four test parameters are proposed: the difference between the teeth and bone average intensity (s), the ratio between the bone and teeth average intensity (n) of the NH, the difference between the teeth and bone peak values (Δp) of the NH, and the ratio between the teeth and bone NH ranges (r). The results showed that n, s, and Δp have potential to be classification parameters of dental calcium density.
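
    A minimal sketch of the four test parameters as they are described in the abstract is given below. The histogram settings (8-bit range, 256 bins) and the exact operationalisation of the "NH range" are assumptions, since the paper's precise formulas are not reproduced here.

      import numpy as np

      def normalized_hist(roi, bins=256):
          """Normalized intensity histogram of a region of interest (8-bit assumed)."""
          h, edges = np.histogram(roi.ravel(), bins=bins, range=(0, 255), density=True)
          centers = 0.5 * (edges[:-1] + edges[1:])
          return h, centers

      def nh_parameters(teeth_roi, bone_roi):
          """Illustrative versions of the abstract's four parameters s, n, Δp, r."""
          ht, ct = normalized_hist(teeth_roi)
          hb, cb = normalized_hist(bone_roi)
          mean_t, mean_b = teeth_roi.mean(), bone_roi.mean()
          s = mean_t - mean_b                                  # mean-intensity difference
          n = mean_b / mean_t                                  # mean-intensity ratio
          dp = ct[ht.argmax()] - cb[hb.argmax()]               # difference of histogram peaks
          r = np.ptp(teeth_roi) / max(np.ptp(bone_roi), 1e-9)  # intensity-range ratio
          return s, n, dp, r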

  6. NMF-Based Image Quality Assessment Using Extreme Learning Machine.

    PubMed

    Wang, Shuigen; Deng, Chenwei; Lin, Weisi; Huang, Guang-Bin; Zhao, Baojun

    2017-01-01

    Numerous state-of-the-art perceptual image quality assessment (IQA) algorithms share a common two-stage process: distortion description followed by distortion effects pooling. In the first stage, the distortion descriptors or measurements are expected to be effective representatives of human visual variations, while the second stage should express well the relationship between the quality descriptors and the perceived visual quality. However, most of the existing quality descriptors (e.g., luminance, contrast, and gradient) do not seem to be consistent with human perception, and the effects pooling is often done in ad hoc ways. In this paper, we propose a novel full-reference IQA metric. It applies non-negative matrix factorization (NMF) to measure image degradations by making use of the parts-based representation of NMF. In addition, a new machine learning technique, the extreme learning machine (ELM), is employed to address the limitations of the existing pooling techniques. Compared with neural networks and support vector regression, ELM can achieve higher learning accuracy with faster learning speed. Extensive experimental results demonstrate that the proposed metric has better performance and lower computational complexity in comparison with the relevant state-of-the-art approaches.

  7. Finger vein image quality evaluation using support vector machines

    NASA Astrophysics Data System (ADS)

    Yang, Lu; Yang, Gongping; Yin, Yilong; Xiao, Rongyang

    2013-02-01

    In an automatic finger-vein recognition system, finger-vein image quality is significant for segmentation, enhancement, and matching processes. In this paper, we propose a finger-vein image quality evaluation method using support vector machines (SVMs). We extract three features including the gradient, image contrast, and information capacity from the input image. An SVM model is built on the training images with annotated quality labels (i.e., high/low) and then applied to unseen images for quality evaluation. To resolve the class-imbalance problem in the training data, we perform oversampling for the minority class with random-synthetic minority oversampling technique. Cross-validation is also employed to verify the reliability and stability of the learned model. Our experimental results show the effectiveness of our method in evaluating the quality of finger-vein images, and by discarding low-quality images detected by our method, the overall finger-vein recognition performance is considerably improved.
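
    A small sketch of the training pipeline described above, using scikit-learn's SVC together with SMOTE oversampling from imbalanced-learn as a stand-in for the paper's random-synthetic minority oversampling; the feature values and class proportions are synthetic placeholders.

      import numpy as np
      from imblearn.over_sampling import SMOTE
      from sklearn.model_selection import cross_val_score
      from sklearn.svm import SVC

      # Hypothetical training set: one row per finger-vein image with the three
      # features named in the abstract (gradient, image contrast, information
      # capacity) and a binary quality label (1 = high quality, 0 = low quality).
      X = np.random.rand(200, 3)
      y = np.r_[np.ones(170, dtype=int), np.zeros(30, dtype=int)]   # imbalanced labels

      X_bal, y_bal = SMOTE(random_state=0).fit_resample(X, y)       # oversample minority class
      clf = SVC(kernel="rbf")
      print(cross_val_score(clf, X_bal, y_bal, cv=5).mean())        # cross-validated accuracy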

  8. Blind image quality assessment using a general regression neural network.

    PubMed

    Li, Chaofeng; Bovik, Alan Conrad; Wu, Xiaojun

    2011-05-01

    We develop a no-reference image quality assessment (QA) algorithm that deploys a general regression neural network (GRNN). The new algorithm is trained on and successfully assesses image quality, relative to human subjectivity, across a range of distortion types. The features deployed for QA include the mean value of phase congruency image, the entropy of phase congruency image, the entropy of the distorted image, and the gradient of the distorted image. Image quality estimation is accomplished by approximating the functional relationship between these features and subjective mean opinion scores using a GRNN. Our experimental results show that the new method accords closely with human subjective judgment.

  9. Image quality metrics for optical coherence angiography

    PubMed Central

    Lozzi, Andrea; Agrawal, Anant; Boretsky, Adam; Welle, Cristin G.; Hammer, Daniel X.

    2015-01-01

    We characterized image quality in optical coherence angiography (OCA) en face planes of the mouse cortical capillary network in terms of signal-to-noise ratio (SNR) and Weber contrast (Wc) through a novel mask-based segmentation method. The method was used to compare two adjacent B-scan processing algorithms, (1) average absolute difference (AAD) and (2) standard deviation (SD), while varying the number of lateral cross-sections acquired (also known as the gate length, N). AAD and SD are identical at N = 2 and exhibited similar image quality for N<10. However, AAD is relatively less susceptible to bulk tissue motion artifact than SD. SNR and Wc were 15% and 35% higher for AAD from N = 25 to 100. In addition, data sets were acquired with two objective lenses with different magnifications to quantify the effect of lateral resolution on fine capillary detection. The lower power objective yielded a significant mean broadening of 17% in full width at half maximum (FWHM) diameter. These results may guide study and device designs for OCA capillary and blood flow quantification. PMID:26203372
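
    The two B-scan processing algorithms compared in the study can be written compactly. The sketch below, with synthetic data and the usual definition of Weber contrast, is illustrative and is not the authors' processing chain.

      import numpy as np

      def angio_contrast(bscans, method="AAD"):
          """Collapse N repeated B-scans (shape N x depth x width) into one
          angiographic frame, using either the average absolute difference
          between adjacent B-scans (AAD) or the standard deviation across
          them (SD)."""
          if method == "AAD":
              return np.mean(np.abs(np.diff(bscans, axis=0)), axis=0)
          return np.std(bscans, axis=0)

      def weber_contrast(vessel_pixels, background_pixels):
          """Weber contrast of vessel pixels relative to background pixels."""
          return (vessel_pixels.mean() - background_pixels.mean()) / background_pixels.mean()

      frames = np.random.rand(25, 256, 256)     # N = 25 repeated B-scans (synthetic)
      aad_frame = angio_contrast(frames, "AAD")
      sd_frame = angio_contrast(frames, "SD")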

  10. Digital mammography--DQE versus optimized image quality in clinical environment: an on site study

    NASA Astrophysics Data System (ADS)

    Oberhofer, Nadia; Fracchetti, Alessandro; Springeth, Margareth; Moroder, Ehrenfried

    2010-04-01

    The intrinsic quality of the detection systems of 7 different digital mammography units (5 direct radiography, DR; 2 computed radiography, CR), expressed by the DQE, has been compared with their image quality/dose performance in clinical use. DQE measurements followed IEC 62220-1-2, using a tungsten test object for MTF determination. For image quality assessment two different methods were applied: 1) measurement of the contrast to noise ratio (CNR) according to the European guidelines and 2) contrast-detail (CD) evaluation. The latter was carried out with the phantom CDMAM ver. 3.4 and the commercial software CDMAM Analyser ver. 1.1 (both Artinis) for automated image analysis. The overall image quality index IQFinv proposed by the software has been validated. Correspondence between the two methods was shown by establishing a linear correlation between CNR and IQFinv. All systems were optimized with respect to image quality and average glandular dose (AGD) within the constraints of automatic exposure control (AEC). For each unit, a good image quality level was defined by means of CD analysis, and the corresponding CNR value was taken as the target value. The goal was to achieve, for different PMMA-phantom thicknesses, constant image quality (i.e. the target CNR value) at minimum dose. All DR systems exhibited higher DQE and significantly better image quality than the CR systems. Generally, switching, where available, to a target/filter combination with an x-ray spectrum of higher mean energy permitted dose savings at equal image quality. However, several systems did not allow the AEC to be modified in order to apply the optimal radiographic technique in clinical use. The best image quality/dose ratio was achieved by a unit with an a-Se detector and W anode only recently available on the market.
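
    For context, a contrast-to-noise ratio of the kind used here is commonly computed from a contrast-object ROI and an adjacent background ROI. The sketch below uses one common formulation and synthetic data; it does not reproduce the specific phantom geometry and processing prescribed by the European guidelines.

      import numpy as np

      def cnr(signal_roi, background_roi):
          """CNR = (mean of contrast-object ROI - mean of background ROI)
          divided by the standard deviation of the background ROI."""
          return (signal_roi.mean() - background_roi.mean()) / background_roi.std()

      # Synthetic flat-field image with a slightly denser square insert.
      img = np.random.normal(100.0, 5.0, (200, 200))
      img[80:120, 80:120] += 10.0
      print(cnr(img[80:120, 80:120], img[10:50, 10:50]))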

  11. A Simple Quality Assessment Index for Stereoscopic Images Based on 3D Gradient Magnitude

    PubMed Central

    Wang, Shanshan; Shao, Feng; Li, Fucui; Yu, Mei; Jiang, Gangyi

    2014-01-01

    We present a simple quality assessment index for stereoscopic images based on 3D gradient magnitude. To be more specific, we construct a 3D volume from the stereoscopic images across different disparity spaces and calculate pointwise 3D gradient magnitude similarity (3D-GMS) along the horizontal, vertical, and viewpoint directions. Then, the quality score is obtained by averaging the 3D-GMS scores of all points in the 3D volume. Experimental results on four publicly available 3D image quality assessment databases demonstrate that, in comparison with the most closely related existing methods, the devised algorithm achieves high consistency with subjective assessment. PMID:25133265
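
    A minimal sketch of a pointwise gradient-magnitude-similarity score over a 3D volume is shown below; the gradient operator (numpy's central differences) and the stabilizing constant are assumptions, not the paper's exact choices.

      import numpy as np

      def gms_3d(vol_ref, vol_dist, c=170.0):
          """Pointwise gradient-magnitude similarity between two 3D volumes,
          averaged over all voxels. Gradients are taken along the horizontal,
          vertical and viewpoint (disparity) axes."""
          g_ref = np.sqrt(sum(g ** 2 for g in np.gradient(vol_ref.astype(float))))
          g_dist = np.sqrt(sum(g ** 2 for g in np.gradient(vol_dist.astype(float))))
          sim = (2.0 * g_ref * g_dist + c) / (g_ref ** 2 + g_dist ** 2 + c)
          return float(sim.mean())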

  12. SU-E-I-43: Pediatric CT Dose and Image Quality Optimization

    SciTech Connect

    Stevens, G; Singh, R

    2014-06-01

    Purpose: To design an approach to optimize radiation dose and image quality for pediatric CT imaging, and to evaluate expected performance. Methods: A methodology was designed to quantify relative image quality as a function of CT image acquisition parameters. Image contrast and image noise were used to indicate expected conspicuity of objects, and a wide-cone system was used to minimize scan time for motion avoidance. A decision framework was designed to select acquisition parameters as a weighted combination of image quality and dose. Phantom tests were used to acquire images at multiple techniques to demonstrate expected contrast, noise and dose. Anthropomorphic phantoms with contrast inserts were imaged on a 160 mm CT system with tube voltage capabilities as low as 70 kVp. Previously acquired clinical images were used in conjunction with simulation tools to emulate images at different tube voltages and currents to assess human observer preferences. Results: Examination of image contrast, noise, dose and tube/generator capabilities indicates a clinical task and object-size dependent optimization. Phantom experiments confirm that system modeling can be used to achieve the desired image quality and noise performance. Observer studies indicate that clinical utilization of this optimization requires a modified approach to achieve the desired performance. Conclusion: This work indicates the potential to optimize radiation dose and image quality for pediatric CT imaging. In addition, the methodology can be used in an automated parameter selection feature that can suggest techniques given a limited number of user inputs. G Stevens and R Singh are employees of GE Healthcare.

  13. Using short-wave infrared imaging for fruit quality evaluation

    NASA Astrophysics Data System (ADS)

    Zhang, Dong; Lee, Dah-Jye; Desai, Alok

    2013-12-01

    Quality evaluation of agricultural and food products is important for processing, inventory control, and marketing. Fruit size and surface quality are two important quality factors for high-quality fruit such as Medjool dates. Fruit size is usually measured by length, which can be obtained easily with simple image processing techniques. Surface quality evaluation, on the other hand, requires a more complicated design, both in image acquisition and in image processing. Skin delamination is considered a major factor that affects fruit quality and its value. This paper presents an efficient histogram analysis and image processing technique designed specifically for real-time surface quality evaluation of Medjool dates. This approach, based on short-wave infrared imaging, provides excellent image contrast between the fruit surface and delaminated skin, which allows significant simplification of the image processing algorithm and a reduction of computational power requirements. The proposed quality grading method requires a very simple training procedure to obtain a gray scale image histogram for each quality level. Using histogram comparison, each date is assigned to one of four quality levels and an optimal threshold is calculated for segmenting skin delamination areas from the fruit surface. The percentage of the fruit surface that has skin delamination can then be calculated for quality evaluation. This method has been implemented, used for commercial production, and proven to be efficient and accurate.
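
    A toy sketch of the two steps named in the abstract (histogram comparison against per-level reference histograms, then threshold-based segmentation of delaminated skin) follows; the correlation-based comparison and the 8-bit histogram settings are assumptions.

      import numpy as np

      def grade_date(image, reference_hists):
          """Assign the SWIR image to the quality level whose reference
          grey-level histogram it matches most closely (correlation used as
          the comparison measure here)."""
          h, _ = np.histogram(image.ravel(), bins=256, range=(0, 255), density=True)
          scores = [np.corrcoef(h, ref)[0, 1] for ref in reference_hists]
          return int(np.argmax(scores))

      def delamination_fraction(image, threshold):
          """Fraction of surface pixels on the 'delaminated skin' side of the
          per-level threshold (which side depends on the imaging contrast)."""
          return float((image > threshold).mean())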

  14. Practical guidelines for radiographers to improve computed radiography image quality.

    PubMed

    Pongnapang, N

    2005-10-01

    Computed Radiography (CR) has become a major digital imaging modality in the modern radiological department. A CR system changes the workflow from the conventional film/screen approach by employing photostimulable phosphor plate technology. This shifts the technical, artefact, and quality control issues that radiology departments must consider. Guidelines for better image quality in a digital medical enterprise include professional guidelines for users and a quality control programme specifically designed to ensure the best quality of clinical images. Radiographers who understand the technological shift from the conventional method to CR can optimize CR images. Proper anatomic collimation and appropriate exposure techniques for each radiographic projection are crucial steps in producing quality digital images. Matching image processing with the specific anatomy is also an important factor that radiographers should appreciate. A successful shift from a conventional to a fully digitised radiology department requires skilful radiographers who make good use of the technology and a successful quality control programme built on teamwork in the department.

  15. Quality Improvement of Liver Ultrasound Images Using Fuzzy Techniques

    PubMed Central

    Bayani, Azadeh; Langarizadeh, Mostafa; Radmard, Amir Reza; Nejad, Ahmadreza Farzaneh

    2016-01-01

    Background: Liver ultrasound images are widely used to diagnose diffuse liver diseases such as fatty liver. However, the low quality of such images makes it difficult to analyze them and diagnose diseases. The purpose of this study, therefore, was to improve the contrast and quality of liver ultrasound images. Methods: In this study, several image contrast enhancement algorithms based on fuzzy logic were applied, using Matlab2013b, to liver ultrasound images in which the kidney is visible, in order to improve image contrast and quality, which themselves have fuzzy definitions: contrast improvement using a fuzzy intensification operator, contrast improvement applying fuzzy image histogram hyperbolization, and contrast improvement by fuzzy IF-THEN rules. Results: Measured by the Mean Squared Error and Peak Signal to Noise Ratio obtained from different images, the fuzzy methods provided better results; compared with the histogram equalization method, their implementation improved both the contrast and visual quality of the images and the results of liver segmentation algorithms applied to them. Conclusion: Comparison of the four algorithms revealed the power of fuzzy logic in improving image contrast compared with traditional image processing algorithms. Moreover, the contrast improvement algorithm based on a fuzzy intensification operator was selected as the strongest algorithm considering the measured indicators. This method can also be used in future studies on other ultrasound images for quality improvement and for other image processing and analysis applications. PMID:28077898
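
    Of the fuzzy approaches listed, the intensification (INT) operator has a particularly compact classical form; a minimal sketch is shown below. The membership function and iteration count are assumptions, not necessarily those used in the study.

      import numpy as np

      def fuzzy_intensification(image, n_iter=1):
          """Classical fuzzy INT-operator contrast enhancement: fuzzify grey
          levels to [0, 1] memberships, push memberships away from 0.5, then
          defuzzify back to grey levels."""
          g = image.astype(float)
          mu = (g - g.min()) / (g.max() - g.min() + 1e-9)          # fuzzification
          for _ in range(n_iter):
              mu = np.where(mu <= 0.5,
                            2.0 * mu ** 2,
                            1.0 - 2.0 * (1.0 - mu) ** 2)           # INT operator
          return np.clip(mu * 255.0, 0, 255).astype(np.uint8)      # defuzzification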

  16. Study of digital mammographic equipments by phantom image quality.

    PubMed

    Mayo, P; Rodenas, F; Verdú, G; Campayo, J M; Villaescusa, J I

    2006-01-01

    Nowadays, digital radiographic equipment is replacing traditional film-screen equipment, and it is necessary to update the parameters used to guarantee the quality of the process. Contrast-detail phantoms are applied to digital radiography to study the threshold contrast-detail sensitivity at the operating conditions of the equipment. The phantom studied in this work is CDMAM 3.4. One of the most widely used indexes for measuring image quality in an objective way is the image quality figure (IQF). The aim of this work is to study the image quality of different images of the contrast-detail phantom CDMAM 3.4, carrying out automatic detection of the contrast-detail combinations, and to establish a parameter which characterizes mammographic image quality in an objective way. This is useful for comparing images obtained with different digital mammographic units in order to study their performance, and it facilitates the evaluation of image contrast and detail resolution.

  17. Metric-based no-reference quality assessment of heterogeneous document images

    NASA Astrophysics Data System (ADS)

    Nayef, Nibal; Ogier, Jean-Marc

    2015-01-01

    No-reference image quality assessment (NR-IQA) aims at computing an image quality score that best correlates with either human perceived image quality or an objective quality measure, without any prior knowledge of reference images. Although learning-based NR-IQA methods have achieved the best state-of-the-art results so far, those methods perform well only on the datasets on which they were trained. The datasets usually contain homogeneous documents, whereas in reality, document images come from different sources. It is unrealistic to collect training samples of images from every possible capturing device and every document type. Hence, we argue that a metric-based IQA method is more suitable for heterogeneous documents. We propose a NR-IQA method with the objective quality measure of OCR accuracy. The method combines distortion-specific quality metrics. The final quality score is calculated taking into account the proportions of, and the dependency among different distortions. Experimental results show that the method achieves competitive results with learning-based NR-IQA methods on standard datasets, and performs better on heterogeneous documents.

  18. LANDSAT-4 image data quality analysis

    NASA Technical Reports Server (NTRS)

    Anuta, P. E. (Principal Investigator)

    1982-01-01

    Work done on evaluating the geometric and radiometric quality of early LANDSAT-4 sensor data is described. Band to band and channel to channel registration evaluations were carried out using a line correlator. Visual blink comparisons were run on an image display to observe band to band registration over 512 x 512 pixel blocks. The results indicate a 0.5 pixel line misregistration between the 1.55 to 1.75 and 2.08 to 2.35 micrometer bands and the first four bands. A misregistration of the thermal IR band by four 30 m lines and columns was also observed. Radiometric evaluation included mean and variance analysis of individual detectors and principal components analysis. Results indicate that detector bias for all bands is very close to or within tolerance. Bright spots were observed in the thermal IR band on an 18 line by 128 pixel grid. No explanation for this was pursued. The general overall quality of the TM was judged to be very high.

  19. Food quality assessment by NIR hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Whitworth, Martin B.; Millar, Samuel J.; Chau, Astor

    2010-04-01

    Near infrared reflectance (NIR) spectroscopy is well established in the food industry for rapid compositional analysis of bulk samples. NIR hyperspectral imaging provides new opportunities to measure the spatial distribution of components such as moisture and fat, and to identify and measure specific regions of composite samples. An NIR hyperspectral imaging system has been constructed for food research applications, incorporating a SWIR camera with a cooled 14 bit HgCdTe detector and N25E spectrograph (Specim Ltd, Finland). Samples are scanned in a pushbroom mode using a motorised stage. The system has a spectral resolution of 256 pixels covering a range of 970-2500 nm and a spatial resolution of 320 pixels covering a swathe adjustable from 8 to 300 mm. Images are acquired at a rate of up to 100 lines s⁻¹, enabling samples to be scanned within a few seconds. Data are captured using SpectralCube software (Specim) and analysed using ENVI and IDL (ITT Visual Information Solutions). Several food applications are presented. The strength of individual absorbance bands enables the distribution of particular components to be assessed. Examples are shown for detection of added gluten in wheat flour and to study the effect of processing conditions on fat distribution in chips/French fries. More detailed quantitative calibrations have been developed to study evolution of the moisture distribution in baguettes during storage at different humidities, to assess freshness of fish using measurements of whole cod and fillets, and for prediction of beef quality by identification and separate measurement of lean and fat regions.

  20. The Relationship between University Students' Academic Achievement and Perceived Organizational Image

    ERIC Educational Resources Information Center

    Polat, Soner

    2011-01-01

    The purpose of present study was to determine the relationship between university students' academic achievement and perceived organizational image. The sample of the study was the senior students at the faculties and vocational schools in Umuttepe Campus at Kocaeli University. Because the development of organizational image is a long process, the…

  1. Wavelet image processing applied to optical and digital holography: past achievements and future challenges

    NASA Astrophysics Data System (ADS)

    Jones, Katharine J.

    2005-08-01

    The link between wavelets and optics goes back to the work of Dennis Gabor, who both invented holography and developed Gabor decompositions. Holography involves 3-D images; Gabor decompositions involve 1-D signals. Gabor decompositions are the predecessors of wavelets. Wavelet image processing of holography, both optical holography and digital holography, will be examined with respect to past achievements and future challenges.

  2. Reduced-reference image quality assessment using moment method

    NASA Astrophysics Data System (ADS)

    Yang, Diwei; Shen, Yuantong; Shen, Yongluo; Li, Hongwei

    2016-10-01

    Reduced-reference image quality assessment (RR IQA) aims to evaluate the perceptual quality of a distorted image using only partial information about the corresponding reference image. In this paper, a novel RR IQA metric is proposed using the moment method. We argue that the first and second moments of the wavelet coefficients of natural images follow an approximately regular pattern, that this pattern is disturbed by different types of distortions, and that this disturbance is relevant to human perception of quality. We measure the difference of these statistical parameters between the reference and distorted images to predict the visual quality degradation. The introduced IQA metric is easy to implement and has relatively low computational complexity. Experimental results on the Laboratory for Image and Video Engineering (LIVE) and Tampere Image Database (TID) image databases indicate that the proposed metric has good predictive performance.
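
    A minimal sketch of the reduced-reference idea described above, using PyWavelets: compute first and second moments of the wavelet detail coefficients as the side information, then score a distorted image by its distance from the reference statistics. The wavelet, number of levels, and distance measure are assumptions, not the paper's exact choices.

      import numpy as np
      import pywt

      def wavelet_moments(image, wavelet="db2", levels=3):
          """First and second moments (mean of |c| and mean of c^2) of the
          wavelet detail coefficients at each decomposition level; these few
          numbers form the reduced-reference feature vector."""
          coeffs = pywt.wavedec2(image.astype(float), wavelet, level=levels)
          feats = []
          for detail in coeffs[1:]:                 # skip the approximation band
              c = np.concatenate([d.ravel() for d in detail])
              feats.extend([np.mean(np.abs(c)), np.mean(c ** 2)])
          return np.array(feats)

      def rr_quality_score(ref_feats, dist_feats):
          """Predict degradation as the distance between reference and
          distorted feature vectors (smaller means closer to the reference)."""
          return float(np.linalg.norm(ref_feats - dist_feats))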

  3. Quality Assessment of Sharpened Images: Challenges, Methodology, and Objective Metrics.

    PubMed

    Krasula, Lukas; Le Callet, Patrick; Fliegel, Karel; Klima, Milos

    2017-01-10

    Most of the effort in image quality assessment (QA) has so far been dedicated to the degradation of the image. However, there are also many algorithms in the image processing chain that can enhance the quality of an input image. These include procedures for contrast enhancement, deblurring, sharpening, up-sampling, denoising, transfer function compensation, etc. In this work, possible strategies for the quality assessment of sharpened images are investigated. This task is not trivial because sharpening techniques can increase the perceived quality as well as introduce artifacts leading to a quality drop (over-sharpening). Here, a framework specifically adapted for the quality assessment of sharpened images, and for comparing objective metrics in this context, is introduced. The framework can be adopted in other quality assessment areas as well. The problem of selecting the correct procedure for subjective evaluation was addressed, and a subjective test on blurred, sharpened, and over-sharpened images was performed in order to demonstrate the use of the framework. The obtained ground-truth data were used to test the suitability of state-of-the-art objective quality metrics for the assessment of sharpened images. The comparison was performed with a novel procedure based on ROC analysis, which was found more appropriate for the task than standard methods. Furthermore, seven possible augmentations of the no-reference S3 metric adapted for sharpened images are proposed. The performance of the metric is significantly improved and is also superior to the rest of the tested quality criteria with respect to the subjective data.

  4. Mister Sandman, bring me good marks! On the relationship between sleep quality and academic achievement.

    PubMed

    Baert, Stijn; Omey, Eddy; Verhaest, Dieter; Vermeir, Aurélie

    2015-04-01

    There is growing evidence that health factors affect tertiary education success in a causal way. This study assesses the effect of sleep quality on academic achievement at university. To this end, we surveyed 804 students about their sleep quality by means of the Pittsburgh Sleep Quality Index (PSQI) before the start of their first exam period in December 2013 at Ghent University. PSQI scores were merged with course marks in this exam period. Instrumenting PSQI scores by sleep quality during secondary education, we find that increasing total sleep quality by one standard deviation leads to 4.85 percentage points higher course marks. Based on this finding, we suggest that higher education providers might be incentivised to invest part of their resources for social facilities in professional support for students with sleep and other health problems.

  5. Social Capital, Human Capital and Parent-Child Relation Quality: Interacting for Children's Educational Achievement?

    ERIC Educational Resources Information Center

    von Otter, Cecilia; Stenberg, Sten-Åke

    2015-01-01

    We analyse the utility of social capital for children's achievement, and if this utility interacts with family human capital and the quality of the parent-child relationship. Our focus is on parental activities directly related to children's school work. Our data stem from a Swedish cohort born in 1953 and consist of both survey and register data.…

  6. The Relationship of IEP Quality to Curricular Access and Academic Achievement for Students with Disabilities

    ERIC Educational Resources Information Center

    La Salle, Tamika P.; Roach, Andrew T.; McGrath, Dawn

    2013-01-01

    The purpose of this study was to investigate the quality of Individualized Education Programs (IEPs) and its influence on academic achievement, inclusion in general education classrooms, and curricular access for students with disabilities. 130 teachers from the state of Indiana were asked to submit the most recent IEP of one of their students in…

  7. The Effects of Two Intervention Programs on Teaching Quality and Student Achievement

    ERIC Educational Resources Information Center

    Azkiyah, S. N.; Doolaard, Simone; Creemers, Bert P. M.; Van Der Werf, M. P. C.

    2014-01-01

    This paper compares the effectiveness of two interventions aimed to improve teaching quality and student achievement in Indonesia. The first intervention was the use of education standards, while the second one was the combination of education standards with a teacher improvement program. The study involved 50 schools, 52 teachers, and 1660…

  8. Mathematics Teacher Quality: Its Distribution and Relationship with Student Achievement in Turkey

    ERIC Educational Resources Information Center

    Özel, Zeynep Ebrar Yetkiner; Özel, Serkan

    2013-01-01

    A main purpose of the present study was to investigate the distribution of qualified mathematics teachers in relation to students' socioeconomic status (SES), as measured by parental education, among Turkish middle schools. Further, relationships between mathematics teacher quality indicators and students' mathematics achievement were explored.…

  9. Class Size Reduction and Student Achievement: The Potential Tradeoff between Teacher Quality and Class Size

    ERIC Educational Resources Information Center

    Jepsen, Christopher; Rivkin, Steven

    2009-01-01

    This paper investigates the effects of California's billion-dollar class-size-reduction program on student achievement. It uses year-to-year differences in class size generated by variation in enrollment and the state's class-size-reduction program to identify both the direct effects of smaller classes and related changes in teacher quality.…

  10. Friendship Quality and School Achievement: A Longitudinal Analysis during Primary School

    ERIC Educational Resources Information Center

    Zucchetti, Giulia; Candela, Filippo; Sacconi, Beatrice; Rabaglietti, Emanuela

    2015-01-01

    This study examined the longitudinal relationship between friendship quality (positive and negative) and school achievement among 228 school-age children (51% girls, M = 8.09, SD = 0.41). A three-wave cross-lagged analysis was used to determine the direction of influence between these domains across school years. Findings revealed that: (a) school…

  11. Transactional Relationships between Latinos' Friendship Quality and Academic Achievement during the Transition to Middle School

    ERIC Educational Resources Information Center

    Sebanc, Anne M.; Guimond, Amy B.; Lutgen, Jeff

    2016-01-01

    This study investigates whether friendship quality, academic achievement, and mastery goal orientation predict each other across the transition to middle school. Participants were 146 Latino students (75 girls) followed from the end of elementary school through the first year of middle school. Measures included positive and negative friendship…

  12. Gifted Middle School Students' Achievement and Perceptions of Science Classroom Quality during Problem-Based Learning

    ERIC Educational Resources Information Center

    Horak, Anne K.; Galluzzo, Gary R.

    2017-01-01

    The purpose of this study was to explore the effect of problem-based learning (PBL) on student achievement and students' perceptions of classroom quality. A group of students taught using PBL and a comparison group of students taught using traditional instruction were studied. A total of 457 students participated in the study. Pre- and…

  13. Depressive Symptoms in Third-Grade Teachers: Relations to Classroom Quality and Student Achievement

    ERIC Educational Resources Information Center

    McLean, Leigh; Connor, Carol McDonald

    2015-01-01

    This study investigated associations among third-grade teachers' (N = 27) symptoms of depression, quality of the classroom-learning environment (CLE), and students' (N = 523, M[subscript age] = 8.6 years) math and literacy performance. Teachers' depressive symptoms in the winter negatively predicted students' spring mathematics achievement. This…

  14. Improving quality and reducing inequities: a challenge in achieving best care

    PubMed Central

    Nicewander, David A.; Qin, Huanying; Ballard, David J.

    2006-01-01

    The health care quality chasm is better described as a gulf for certain segments of the population, such as racial and ethnic minority groups, given the gap between actual care received and ideal or best care quality. The landmark Institute of Medicine report Crossing the Quality Chasm: A New Health System for the 21st Century challenges all health care organizations to pursue six major aims of health care improvement: safety, timeliness, effectiveness, efficiency, equity, and patient-centeredness. “Equity” aims to ensure that quality care is available to all and that the quality of care provided does not differ by race, ethnicity, or other personal characteristics unrelated to a patient's reason for seeking care. Baylor Health Care System is in the unique position of being able to examine the current state of equity in a typical health care delivery system and to lead the way in health equity research. Its organizational vision, “culture of quality,” and involved leadership bode well for achieving equitable best care. However, inequities in access, use, and outcomes of health care must be scrutinized; the moral, ethical, and economic issues they raise and the critical injustice they create must be remedied if this goal is to be achieved. Eliminating any observed inequities in health care must be synergistically integrated with quality improvement. Quality performance indicators currently collected and evaluated indicate that Baylor Health Care System often performs better than the national average. However, there are significant variations in care by age, gender, race/ethnicity, and socioeconomic status that indicate the many remaining challenges in achieving “best care” for all. PMID:16609733

  15. What do users really perceive: probing the subjective image quality

    NASA Astrophysics Data System (ADS)

    Nyman, Göte; Radun, Jenni; Leisti, Tuomas; Oja, Joni; Ojanen, Harri; Olives, Jean-Luc; Vuori, Tero; Häkkinen, Jukka

    2006-01-01

    Image evaluation schemes must fulfill both objective and subjective requirements. Objective image quality evaluation models are often preferred over subjective quality evaluation because of their speed and cost-effectiveness. However, the correlation between subjective and objective estimations is often poor. One of the key reasons for this is that it is not known what image features subjects use when they evaluate image quality. We have studied subjective image quality evaluation in the case of image sharpness. We used an Interpretation-based Quality (IBQ) approach, which combines qualitative and quantitative approaches to probe the observer's quality experience. Here we examine how naive subjects experienced and classified natural images whose sharpness varied. Together, the psychometric and qualitative information obtained allows the correlation of quantitative evaluation data with its underlying subjective attribute sets. This offers guidelines to product designers and developers who are responsible for image quality. Combining these methods makes the end-user experience approachable and offers new ways to improve objective image quality evaluation schemes.

  16. Effects on MR images compression in tissue classification quality

    NASA Astrophysics Data System (ADS)

    Santalla, H.; Meschino, G.; Ballarin, V.

    2007-11-01

    Image compression is needed to optimize storage requirements and can significantly improve transmission speed. Lossless compression is used without controversy in medicine, though its benefits are limited. With lossy compression the image cannot be totally recovered; only an approximation is obtained. At this point the definition of "quality" becomes essential. What do we understand by "quality"? How can we evaluate a compressed image? Image quality is an attribute with several definitions and interpretations, which ultimately depend on the intended later use of the images. This work proposes a quantitative analysis of quality for lossy compressed Magnetic Resonance (MR) images, and of its influence on automatic tissue classification performed with these images.
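
    One straightforward way to operationalise fidelity for a lossy-compressed MR slice is to compare it against the uncompressed original with standard full-reference measures. The sketch below (MSE, PSNR and SSIM via scikit-image, assuming 8-bit images) is illustrative and is not the exact metric set used in the paper.

      import numpy as np
      from skimage.metrics import peak_signal_noise_ratio, structural_similarity

      def compression_quality(original, compressed):
          """Fidelity of a lossy-compressed slice relative to the original,
          assuming 8-bit (0-255) grey-level images."""
          mse = float(np.mean((original.astype(float) - compressed.astype(float)) ** 2))
          return {
              "mse": mse,
              "psnr": peak_signal_noise_ratio(original, compressed, data_range=255),
              "ssim": structural_similarity(original, compressed, data_range=255),
          }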

  17. Objective assessment of image quality and dose reduction in CT iterative reconstruction

    SciTech Connect

    Vaishnav, J. Y. Jung, W. C.; Popescu, L. M.; Zeng, R.; Myers, K. J.

    2014-07-15

    Purpose: Iterative reconstruction (IR) algorithms have the potential to reduce radiation dose in CT diagnostic imaging. As these algorithms become available on the market, a standardizable method of quantifying the dose reduction that a particular IR method can achieve would be valuable. Such a method would assist manufacturers in making promotional claims about dose reduction, buyers in comparing different devices, physicists in independently validating the claims, and the United States Food and Drug Administration in regulating the labeling of CT devices. However, the nonlinear nature of commercially available IR algorithms poses challenges to objectively assessing image quality, a necessary step in establishing the amount of dose reduction that a given IR algorithm can achieve without compromising that image quality. This review paper seeks to consolidate information relevant to objectively assessing the quality of CT IR images, and thereby measuring the level of dose reduction that a given IR algorithm can achieve. Methods: The authors discuss task-based methods for assessing the quality of CT IR images and evaluating dose reduction. Results: The authors explain and review recent literature on signal detection and localization tasks in CT IR image quality assessment, the design of an appropriate phantom for these tasks, possible choices of observers (including human and model observers), and methods of evaluating observer performance. Conclusions: Standardizing the measurement of dose reduction is a problem of broad interest to the CT community and to public health. A necessary step in the process is the objective assessment of CT image quality, for which various task-based methods may be suitable. This paper attempts to consolidate recent literature that is relevant to the development and implementation of task-based methods for the assessment of CT IR image quality.

  18. Review of spectral imaging technology in biomedical engineering: achievements and challenges.

    PubMed

    Li, Qingli; He, Xiaofu; Wang, Yiting; Liu, Hongying; Xu, Dongrong; Guo, Fangmin

    2013-10-01

    Spectral imaging is a technology that integrates conventional imaging and spectroscopy to get both spatial and spectral information from an object. Although this technology was originally developed for remote sensing, it has been extended to the biomedical engineering field as a powerful analytical tool for biological and biomedical research. This review introduces the basics of spectral imaging, imaging methods, current equipment, and recent advances in biomedical applications. The performance and analytical capabilities of spectral imaging systems for biological and biomedical imaging are discussed. In particular, the current achievements and limitations of this technology in biomedical engineering are presented. The benefits and development trends of biomedical spectral imaging are highlighted to provide the reader with an insight into the current technological advances and its potential for biomedical research.

  19. Automated FMV image quality assessment based on power spectrum statistics

    NASA Astrophysics Data System (ADS)

    Kalukin, Andrew

    2015-05-01

    Factors that degrade image quality in video and other sensor collections, such as noise, blurring, and poor resolution, also affect the spatial power spectrum of imagery. Prior research in human vision and image science from the last few decades has shown that the image power spectrum can be useful for assessing the quality of static images. The research in this article explores the possibility of using the image power spectrum to automatically evaluate full-motion video (FMV) imagery frame by frame. This procedure makes it possible to identify anomalous images and scene changes, and to keep track of gradual changes in quality as collection progresses. This article will describe a method to apply power spectral image quality metrics for images subjected to simulated blurring, blocking, and noise. As a preliminary test on videos from multiple sources, image quality measurements for image frames from 185 videos are compared to analyst ratings based on ground sampling distance. The goal of the research is to develop an automated system for tracking image quality during real-time collection, and to assign ratings to video clips for long-term storage, calibrated to standards such as the National Imagery Interpretability Rating System (NIIRS).
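
    A minimal sketch of a radially averaged power spectrum, the basic quantity behind such power-spectrum quality metrics, is given below; the binning choices are arbitrary, and the mapping from spectrum shape to a quality rating is not reproduced.

      import numpy as np

      def radial_power_spectrum(frame, n_bins=64):
          """Radially averaged spatial power spectrum of one frame; blurring
          depresses the high-frequency tail while noise lifts it, so the curve
          shape can be tracked frame by frame as a quality indicator."""
          f = np.fft.fftshift(np.fft.fft2(frame.astype(float)))
          power = np.abs(f) ** 2
          h, w = frame.shape
          yy, xx = np.indices((h, w))
          r = np.hypot(yy - h / 2.0, xx - w / 2.0)
          bins = np.linspace(0, r.max(), n_bins + 1)
          which = np.clip(np.digitize(r.ravel(), bins) - 1, 0, n_bins - 1)
          totals = np.bincount(which, weights=power.ravel(), minlength=n_bins)
          counts = np.maximum(np.bincount(which, minlength=n_bins), 1)
          return totals / counts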

  20. The study of surgical image quality evaluation system by subjective quality factor method

    NASA Astrophysics Data System (ADS)

    Zhang, Jian J.; Xuan, Jason R.; Yang, Xirong; Yu, Honggang; Koullick, Edouard

    2016-03-01

    The GreenLight™ procedure is an effective and economical treatment for benign prostatic hyperplasia (BPH); almost a million patients have been treated with GreenLight™ worldwide. During the surgical procedure, the surgeon or physician relies on the monitoring video system to survey and confirm the surgical progress. Several obstructions can greatly affect the image quality of the monitoring video, such as laser glare from tissue and body fluid, air bubbles and debris generated by tissue evaporation, and bleeding. In order to improve the physician's visual experience of a laser surgical procedure, the system performance parameters related to image quality need to be well defined. However, since image quality is the integrated set of perceptions of the overall degree of excellence of an image, or in other words the perceptually weighted combination of significant attributes (contrast, graininess …) of an image when considered in its marketplace or application, there is no standard definition of overall image or video quality, especially for the no-reference case (without a standard chart as reference). In this study, the Subjective Quality Factor (SQF) and acutance are used for no-reference image quality evaluation. Basic image quality parameters, such as sharpness, color accuracy, size of obstruction and transmission of obstruction, are used as subparameters to define the rating scale for image quality evaluation or comparison. Sample image groups were evaluated by human observers according to the rating scale. Surveys of physician groups were also conducted with lab-generated sample videos. The study shows that human subjective perception is a trustworthy way of evaluating image quality. A more systematic investigation of the relationship between video quality and the image quality of each frame will be conducted as a future study.

  1. The impact of spectral filtration on image quality in micro-CT system.

    PubMed

    Ren, Liqiang; Ghani, Muhammad U; Wu, Di; Zheng, Bin; Chen, Yong; Yang, Kai; Wu, Xizeng; Liu, Hong

    2016-01-08

    This paper aims to evaluate the impact of spectral filtration on image quality in a micro-computed tomography (micro-CT) system. A mouse phantom comprising 11 rods modeling lung, muscle, adipose and bone was scanned with acquisition times of 17 s and 2 min, respectively. The tube current (μA) for each scan was adjusted to achieve identical entrance exposure to the phantom, providing a baseline for image quality evaluation. For each region of interest (ROI) of a specific composition, CT number variations, noise levels, and contrast-to-noise ratios (CNRs) were evaluated from the reconstructed images. CT number variations and CNRs for high-density bone, muscle, and adipose were compared with theoretical predictions. The results show that the impact of spectral filtration on image quality indicators, such as CNR, in a micro-CT system is significantly associated with tissue characteristics. The findings may provide useful references for optimizing the scanning parameters of general micro-CT systems in future imaging applications.

  2. A comprehensive study on the relationship between the image quality and imaging dose in low-dose cone beam CT

    NASA Astrophysics Data System (ADS)

    Yan, Hao; Cervino, Laura; Jia, Xun; Jiang, Steve B.

    2012-04-01

    While compressed sensing (CS)-based algorithms have been developed for the low-dose cone beam CT (CBCT) reconstruction, a clear understanding of the relationship between the image quality and imaging dose at low-dose levels is needed. In this paper, we qualitatively investigate this subject in a comprehensive manner with extensive experimental and simulation studies. The basic idea is to plot both the image quality and imaging dose together as functions of the number of projections and mAs per projection over the whole clinically relevant range. On this basis, a clear understanding of the tradeoff between the image quality and imaging dose can be achieved and optimal low-dose CBCT scan protocols can be developed to maximize the dose reduction while minimizing the image quality loss for various imaging tasks in image-guided radiation therapy (IGRT). Main findings of this work include (1) under the CS-based reconstruction framework, image quality has little degradation over a large range of dose variation. Image quality degradation becomes evident when the imaging dose (approximated with the x-ray tube load) is decreased below 100 total mAs. An imaging dose lower than 40 total mAs leads to a dramatic image degradation, and thus should be used cautiously. Optimal low-dose CBCT scan protocols likely fall in the dose range of 40-100 total mAs, depending on the specific IGRT applications. (2) Among different scan protocols at a constant low-dose level, the super sparse-view reconstruction with the projection number less than 50 is the most challenging case, even with strong regularization. Better image quality can be acquired with low mAs protocols. (3) The optimal scan protocol is the combination of a medium number of projections and a medium level of mAs/view. This is more evident when the dose is around 72.8 total mAs or below and when the ROI is a low-contrast or high-resolution object. Based on our results, the optimal number of projections is around 90 to 120. (4

  3. A comprehensive study on the relationship between the image quality and imaging dose in low-dose cone beam CT.

    PubMed

    Yan, Hao; Cervino, Laura; Jia, Xun; Jiang, Steve B

    2012-04-07

    While compressed sensing (CS)-based algorithms have been developed for the low-dose cone beam CT (CBCT) reconstruction, a clear understanding of the relationship between the image quality and imaging dose at low-dose levels is needed. In this paper, we qualitatively investigate this subject in a comprehensive manner with extensive experimental and simulation studies. The basic idea is to plot both the image quality and imaging dose together as functions of the number of projections and mAs per projection over the whole clinically relevant range. On this basis, a clear understanding of the tradeoff between the image quality and imaging dose can be achieved and optimal low-dose CBCT scan protocols can be developed to maximize the dose reduction while minimizing the image quality loss for various imaging tasks in image-guided radiation therapy (IGRT). Main findings of this work include (1) under the CS-based reconstruction framework, image quality has little degradation over a large range of dose variation. Image quality degradation becomes evident when the imaging dose (approximated with the x-ray tube load) is decreased below 100 total mAs. An imaging dose lower than 40 total mAs leads to a dramatic image degradation, and thus should be used cautiously. Optimal low-dose CBCT scan protocols likely fall in the dose range of 40-100 total mAs, depending on the specific IGRT applications. (2) Among different scan protocols at a constant low-dose level, the super sparse-view reconstruction with the projection number less than 50 is the most challenging case, even with strong regularization. Better image quality can be acquired with low mAs protocols. (3) The optimal scan protocol is the combination of a medium number of projections and a medium level of mAs/view. This is more evident when the dose is around 72.8 total mAs or below and when the ROI is a low-contrast or high-resolution object. Based on our results, the optimal number of projections is around 90 to 120. (4

  4. A comprehensive study on the relationship between image quality and imaging dose in low-dose cone beam CT

    PubMed Central

    Yan, Hao; Cervino, Laura; Jia, Xun; Jiang, Steve B.

    2012-01-01

    While compressed sensing (CS) based algorithms have been developed for low-dose cone beam CT (CBCT) reconstruction, a clear understanding on the relationship between the image quality and imaging dose at low dose levels is needed. In this paper, we qualitatively investigate this subject in a comprehensive manner with extensive experimental and simulation studies. The basic idea is to plot both the image quality and imaging dose together as functions of number of projections and mAs per projection over the whole clinically relevant range. On this basis, a clear understanding on the tradeoff between image quality and imaging dose can be achieved and optimal low-dose CBCT scan protocols can be developed to maximize the dose reduction while minimizing the image quality loss for various imaging tasks in image guided radiation therapy (IGRT). Main findings of this work include: 1) Under the CS-based reconstruction framework, image quality has little degradation over a large range of dose variation. Image quality degradation becomes evident when the imaging dose (approximated with the x-ray tube load) is decreased below 100 total mAs. An imaging dose lower than 40 total mAs leads to a dramatic image degradation, and thus should be used cautiously. Optimal low-dose CBCT scan protocols likely fall in the dose range of 40–100 total mAs, depending on the specific IGRT applications. 2) Among different scan protocols at a constant low-dose level, the super sparse-view reconstruction with projection number less than 50 is the most challenging case, even with strong regularization. Better image quality can be acquired with low mAs protocols. 3) The optimal scan protocol is the combination of a medium number of projections and a medium level of mAs/view. This is more evident when the dose is around 72.8 total mAs or below and when the ROI is a low-contrast or high-resolution object. Based on our results, the optimal number of projections is around 90 to 120. 4) The clinically

  5. Dynamic flat panel detector versus image intensifier in cardiac imaging: dose and image quality.

    PubMed

    Vano, E; Geiger, B; Schreiner, A; Back, C; Beissel, J

    2005-12-07

    The practical aspects of the dosimetric and imaging performance of a digital x-ray system for cardiology procedures were evaluated. The system was configured with an image intensifier (II) and later upgraded to a dynamic flat panel detector (FD). Entrance surface air kerma (ESAK) to phantoms of 16, 20, 24 and 28 cm of polymethyl methacrylate (PMMA) and the image quality of a test object were measured. Images were evaluated directly on the monitor and with numerical methods (noise and signal-to-noise ratio). Information contained in the DICOM header for dosimetry audit purposes was also tested. ESAK values per frame (or kerma rate) for the most commonly used cine and fluoroscopy modes for different PMMA thicknesses and for field sizes of 17 and 23 cm for II, and 20 and 25 cm for FD, produced similar results in the evaluated system with both technologies, ranging between 19 and 589 microGy/frame (cine) and 5 and 95 mGy min(-1) (fluoroscopy). Image quality for these dose settings was better for the FD version. The 'study dosimetric report' is comprehensive, and its numerical content is sufficiently accurate. There is potential in the future to set those systems with dynamic FD to lower doses than are possible in the current II versions, especially for digital cine runs, or to benefit from improved image quality.

  6. Dynamic flat panel detector versus image intensifier in cardiac imaging: dose and image quality

    NASA Astrophysics Data System (ADS)

    Vano, E.; Geiger, B.; Schreiner, A.; Back, C.; Beissel, J.

    2005-12-01

    The practical aspects of the dosimetric and imaging performance of a digital x-ray system for cardiology procedures were evaluated. The system was configured with an image intensifier (II) and later upgraded to a dynamic flat panel detector (FD). Entrance surface air kerma (ESAK) to phantoms of 16, 20, 24 and 28 cm of polymethyl methacrylate (PMMA) and the image quality of a test object were measured. Images were evaluated directly on the monitor and with numerical methods (noise and signal-to-noise ratio). Information contained in the DICOM header for dosimetry audit purposes was also tested. ESAK values per frame (or kerma rate) for the most commonly used cine and fluoroscopy modes for different PMMA thicknesses and for field sizes of 17 and 23 cm for II, and 20 and 25 cm for FD, produced similar results in the evaluated system with both technologies, ranging between 19 and 589 µGy/frame (cine) and 5 and 95 mGy min-1 (fluoroscopy). Image quality for these dose settings was better for the FD version. The 'study dosimetric report' is comprehensive, and its numerical content is sufficiently accurate. There is potential in the future to set those systems with dynamic FD to lower doses than are possible in the current II versions, especially for digital cine runs, or to benefit from improved image quality.

  7. Can pictorial images communicate the quality of pain successfully?

    PubMed Central

    Knapp, Peter; Morley, Stephen; Stones, Catherine

    2015-01-01

    Chronic pain is common and difficult for patients to communicate to health professionals. It may include neuropathic elements which require specialised treatment. A little used approach to communicating the quality of pain is through the use of images. This study aimed to test the ability of a set of 12 images depicting different sensory pain qualities to successfully communicate those qualities. Images were presented to 25 student nurses and 38 design students. Students were asked to write down words or phrases describing the quality of pain they felt was being communicated by each image. They were asked to provide as many or as few as occurred to them. The images were extremely heterogeneous in their ability to convey qualities of pain accurately. Only 2 of the 12 images were correctly interpreted by more than 70% of the sample. There was a significant difference between the two student groups, with nurses being significantly better at interpreting the images than the design students. Clearly, attention needs to be given not only to the content of images designed to depict the sensory qualities of pain but also to the differing audiences who may use them. Education, verbal ability, ethnicity and a multiplicity of other factors may influence the understanding and use of such images. Considerable work is needed to develop a set of images which is sufficiently culturally appropriate and effective for general use. PMID:26516574

  8. Can pictorial images communicate the quality of pain successfully?

    PubMed

    Closs, S José; Knapp, Peter; Morley, Stephen; Stones, Catherine

    2015-08-01

    Chronic pain is common and difficult for patients to communicate to health professionals. It may include neuropathic elements which require specialised treatment. A little used approach to communicating the quality of pain is through the use of images. This study aimed to test the ability of a set of 12 images depicting different sensory pain qualities to successfully communicate those qualities. Images were presented to 25 student nurses and 38 design students. Students were asked to write down words or phrases describing the quality of pain they felt was being communicated by each image. They were asked to provide as many or as few as occurred to them. The images were extremely heterogeneous in their ability to convey qualities of pain accurately. Only 2 of the 12 images were correctly interpreted by more than 70% of the sample. There was a significant difference between the two student groups, with nurses being significantly better at interpreting the images than the design students. Clearly, attention needs to be given not only to the content of images designed to depict the sensory qualities of pain but also to the differing audiences who may use them. Education, verbal ability, ethnicity and a multiplicity of other factors may influence the understanding and use of such images. Considerable work is needed to develop a set of images which is sufficiently culturally appropriate and effective for general use.

  9. Investigation into the impact of tone reproduction on the perceived image quality of fine art reproductions

    NASA Astrophysics Data System (ADS)

    Farnand, Susan; Jiang, Jun; Frey, Franziska

    2012-01-01

    A project, supported by the Andrew W. Mellon Foundation, evaluating current practices in fine art image reproduction, determining the image quality generally achievable, and establishing a suggested framework for art image interchange was recently completed. (Information regarding the Mellon project and related work may be found at www.artimaging.rit.edu.) To determine the image quality currently being achieved, experimentation was conducted in which a set of objective targets and pieces of artwork in various media were imaged by participating museums and other cultural heritage institutions. Prints and images for display made from the delivered image files at the Rochester Institute of Technology were used as stimuli in psychometric testing in which observers were asked to evaluate the prints as reproductions of the original artwork and as stand alone images. The results indicated that there were limited differences between assessments made with and without the original present for printed reproductions. For displayed images, the differences were more significant with lower contrast images being ranked lower and higher contrast images generally ranked higher when the original was not present. This was true for experiments conducted both in a dimly lit laboratory as well as via the web, indicating that more than viewing conditions were driving this shift.

  10. Automated retinal image quality assessment on the UK Biobank dataset for epidemiological studies.

    PubMed

    Welikala, R A; Fraz, M M; Foster, P J; Whincup, P H; Rudnicka, A R; Owen, C G; Strachan, D P; Barman, S A

    2016-04-01

    Morphological changes in the retinal vascular network are associated with future risk of many systemic and vascular diseases. However, uncertainty over the presence and nature of some of these associations exists. Analysis of data from large population-based studies will help to resolve these uncertainties. The QUARTZ (QUantitative Analysis of Retinal vessel Topology and siZe) retinal image analysis system allows automated processing of large numbers of retinal images. However, an image quality assessment module is needed to achieve full automation. In this paper, we propose such an algorithm, which uses the segmented vessel map to determine the suitability of retinal images for use in the creation of vessel morphometric data suitable for epidemiological studies. This includes an effective 3-dimensional feature set and support vector machine classification. A random subset of 800 retinal images from UK Biobank (a large prospective study of 500,000 middle-aged adults, of whom 68,151 underwent retinal imaging) was used to examine the performance of the image quality algorithm. The algorithm achieved a sensitivity of 95.33% and a specificity of 91.13% for the detection of inadequate images. The strong performance of this image quality algorithm will make rapid automated analysis of vascular morphometry feasible on the entire UK Biobank dataset (and other large retinal datasets), with minimal operator involvement, and at low cost.
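    The classification stage can be pictured with the short Python sketch below: a support vector machine trained on a small per-image feature vector derived from the segmented vessel map. The synthetic features, labels, and kernel settings are placeholders, not the QUARTZ implementation.

      import numpy as np
      from sklearn.svm import SVC
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import recall_score

      rng = np.random.default_rng(0)
      X = rng.normal(size=(800, 3))                      # one 3-D feature vector per image (placeholder)
      y = (X[:, 0] + 0.5 * X[:, 1] > 0.8).astype(int)    # 1 = inadequate image (placeholder labels)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
      clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_tr, y_tr)

      pred = clf.predict(X_te)
      sensitivity = recall_score(y_te, pred)                  # detection rate for inadequate images
      specificity = recall_score(y_te, pred, pos_label=0)
      print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")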

  11. Effect of image quality on calcification detection in digital mammography

    PubMed Central

    Warren, Lucy M.; Mackenzie, Alistair; Cooke, Julie; Given-Wilson, Rosalind M.; Wallis, Matthew G.; Chakraborty, Dev P.; Dance, David R.; Bosmans, Hilde; Young, Kenneth C.

    2012-01-01

    Purpose: This study aims to investigate if microcalcification detection varies significantly when mammographic images are acquired using different image qualities, including: different detectors, dose levels, and different image processing algorithms. An additional aim was to determine how the standard European method of measuring image quality using threshold gold thickness measured with a CDMAM phantom and the associated limits in current EU guidelines relate to calcification detection. Methods: One hundred and sixty two normal breast images were acquired on an amorphous selenium direct digital (DR) system. Microcalcification clusters extracted from magnified images of slices of mastectomies were electronically inserted into half of the images. The calcification clusters had a subtle appearance. All images were adjusted using a validated mathematical method to simulate the appearance of images from a computed radiography (CR) imaging system at the same dose, from both systems at half this dose, and from the DR system at quarter this dose. The original 162 images were processed with both Hologic and Agfa (Musica-2) image processing. All other image qualities were processed with Agfa (Musica-2) image processing only. Seven experienced observers marked and rated any identified suspicious regions. Free response operating characteristic (FROC) and ROC analyses were performed on the data. The lesion sensitivity at a nonlesion localization fraction (NLF) of 0.1 was also calculated. Images of the CDMAM mammographic test phantom were acquired using the automatic setting on the DR system. These images were modified to the additional image qualities used in the observer study. The images were analyzed using automated software. In order to assess the relationship between threshold gold thickness and calcification detection a power law was fitted to the data. Results: There was a significant reduction in calcification detection using CR compared with DR: the alternative FROC
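    The final analysis step mentioned above, fitting a power law between CDMAM threshold gold thickness and a calcification-detection figure of merit, could look like the Python sketch below; the numerical values are placeholders, not data from the study.

      import numpy as np
      from scipy.optimize import curve_fit

      def power_law(t, a, b):
          # detection = a * thickness**b
          return a * np.power(t, b)

      thickness = np.array([0.9, 1.1, 1.4, 1.8])       # threshold gold thickness, hypothetical values
      detection = np.array([0.72, 0.66, 0.58, 0.49])   # lesion sensitivity at NLF = 0.1, hypothetical values

      (a, b), _ = curve_fit(power_law, thickness, detection, p0=(1.0, -0.5))
      print(f"fitted power law: detection = {a:.3f} * thickness^{b:.3f}")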

  12. Effect of image quality on calcification detection in digital mammography

    SciTech Connect

    Warren, Lucy M.; Mackenzie, Alistair; Cooke, Julie; Given-Wilson, Rosalind M.; Wallis, Matthew G.; Chakraborty, Dev P.; Dance, David R.; Bosmans, Hilde; Young, Kenneth C.

    2012-06-15

    Purpose: This study aims to investigate if microcalcification detection varies significantly when mammographic images are acquired using different image qualities, including: different detectors, dose levels, and different image processing algorithms. An additional aim was to determine how the standard European method of measuring image quality using threshold gold thickness measured with a CDMAM phantom and the associated limits in current EU guidelines relate to calcification detection. Methods: One hundred and sixty two normal breast images were acquired on an amorphous selenium direct digital (DR) system. Microcalcification clusters extracted from magnified images of slices of mastectomies were electronically inserted into half of the images. The calcification clusters had a subtle appearance. All images were adjusted using a validated mathematical method to simulate the appearance of images from a computed radiography (CR) imaging system at the same dose, from both systems at half this dose, and from the DR system at quarter this dose. The original 162 images were processed with both Hologic and Agfa (Musica-2) image processing. All other image qualities were processed with Agfa (Musica-2) image processing only. Seven experienced observers marked and rated any identified suspicious regions. Free response operating characteristic (FROC) and ROC analyses were performed on the data. The lesion sensitivity at a nonlesion localization fraction (NLF) of 0.1 was also calculated. Images of the CDMAM mammographic test phantom were acquired using the automatic setting on the DR system. These images were modified to the additional image qualities used in the observer study. The images were analyzed using automated software. In order to assess the relationship between threshold gold thickness and calcification detection a power law was fitted to the data. Results: There was a significant reduction in calcification detection using CR compared with DR: the alternative FROC

  13. Image reconstruction and image quality evaluation for a 16-slice CT scanner.

    PubMed

    Flohr, Th; Stierstorfer, K; Bruder, H; Simon, J; Polacin, A; Schaller, S

    2003-05-01

    We present a theoretical overview and a performance evaluation of a novel approximate reconstruction algorithm for cone-beam spiral CT, the adaptive multiple plane reconstruction (AMPR), which has been introduced by Schaller, Flohr et al. [Proc. SPIE Int. Symp. Med. Imag. 4322, 113-127 (2001)]. AMPR has been implemented in a recently introduced 16-slice CT scanner. We present a detailed algorithmic description of AMPR which allows for a free selection of the spiral pitch. We show that dose utilization is better than 90% independent of the pitch. We give an overview of the z-reformation functions chosen to allow for a variable selection of the spiral slice width at arbitrary pitch values. To investigate AMPR image quality, we present images of anthropomorphic phantoms and initial patient results. We present measurements of spiral slice sensitivity profiles (SSPs) and measurements of the maximum achievable transverse resolution, both in the isocenter and off-center. We discuss the pitch dependence of image noise measured in a centered 20 cm water phantom. Using the AMPR approach, cone-beam artifacts are considerably reduced for the 16-slice scanner investigated. Image quality in MPRs is independent of the pitch and equivalent to a single-slice CT system at pitch p approximately 1.5. The full width at half-maximum (FWHM) of the spiral SSPs shows only minor variations as a function of the pitch; nominal and measured values differ by less than 0.2 mm. With 16 x 0.75 mm collimation, the measured FWHM of the smallest reconstructed slice is about 0.9 mm. Using this slice width and overlapping image reconstruction, cylindrical holes with 0.6 mm diameter can be resolved in a z-resolution phantom. Image noise for constant effective mAs is nearly independent of the pitch. Measured and theoretically expected dose utilization are in good agreement. Meanwhile, clinical practice has demonstrated the excellent image quality and the increased diagnostic capability that is obtained

  14. Fractal analysis for reduced reference image quality assessment.

    PubMed

    Xu, Yong; Liu, Delei; Quan, Yuhui; Le Callet, Patrick

    2015-07-01

    In this paper, multifractal analysis is adapted to reduced-reference image quality assessment (RR-IQA). A novel RR-IQA approach is proposed, which measures the difference of spatial arrangement between the reference image and the distorted image in terms of spatial regularity measured by fractal dimension. An image is first expressed in the Log-Gabor domain. Then, fractal dimensions are computed on each Log-Gabor subband and concatenated as a feature vector. Finally, the extracted features are pooled as the quality score of the distorted image using the l1 distance. Compared with existing approaches, the proposed method measures image quality from the perspective of the spatial distribution of image patterns. The proposed method was evaluated on seven public benchmark data sets. Experimental results have demonstrated the excellent performance of the proposed method in comparison with state-of-the-art approaches.
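    A simplified Python sketch of the pipeline described above: radial log-Gabor subbands, a box-counting fractal dimension per subband, and an l1 distance between reference and distorted feature vectors. Filter parameters, binarisation, and box sizes are assumptions for illustration and will differ from the authors' implementation.

      import numpy as np

      def log_gabor_bank(shape, centre_freqs=(0.05, 0.1, 0.2, 0.4), sigma=0.55):
          rows, cols = shape
          fy = np.fft.fftfreq(rows)[:, None]
          fx = np.fft.fftfreq(cols)[None, :]
          radius = np.sqrt(fx ** 2 + fy ** 2)
          radius[0, 0] = 1.0                              # avoid log(0) at DC
          filters = []
          for f0 in centre_freqs:
              g = np.exp(-(np.log(radius / f0) ** 2) / (2 * np.log(sigma) ** 2))
              g[0, 0] = 0.0                               # suppress DC
              filters.append(g)
          return filters

      def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32)):
          counts = []
          for s in sizes:
              h, w = (mask.shape[0] // s) * s, (mask.shape[1] // s) * s
              blocks = mask[:h, :w].reshape(h // s, s, w // s, s)
              counts.append(max(1, np.count_nonzero(blocks.any(axis=(1, 3)))))
          slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
          return slope

      def fractal_features(img):
          spectrum = np.fft.fft2(img.astype(float))
          feats = []
          for g in log_gabor_bank(img.shape):
              band = np.abs(np.fft.ifft2(spectrum * g))
              feats.append(box_counting_dimension(band > band.mean()))
          return np.array(feats)

      def rr_quality_score(reference_feats, distorted_img):
          return float(np.abs(reference_feats - fractal_features(distorted_img)).sum())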

  15. Raman chemical imaging system for food safety and quality inspection

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Raman chemical imaging technique combines Raman spectroscopy and digital imaging to visualize composition and structure of a target, and it offers great potential for food safety and quality research. In this study, a laboratory-based Raman chemical imaging platform was designed and developed. The i...

  16. The simulation of adaptive optical image even and pulse noise and research of image quality evaluation

    NASA Astrophysics Data System (ADS)

    Wen, Changli; Xu, Yuannan; Xu, Rong; Liu, Changhai; Men, Tao; Niu, Wei

    2013-09-01

    As optical imaging becomes increasingly important in adaptive optics, and ground-based adaptive optical telescopes play a growing role in detection systems, the volume of acquired images makes it necessary to select good-quality images automatically in order to save manual effort; consequently, image quality evaluation methods and their characteristics are receiving more and more attention. The applicability of a given image quality evaluation method depends on the image degradation model. Most research has focused on improving existing methods or building new ones to evaluate degraded images; here we instead study the models of image degradation, the causes of degradation, and the relations between different degraded images and different image quality evaluation methods. In this paper, we build models of even noise and pulse noise from their definitions and generate degraded images using these models, and we investigate six common image quality evaluation methods: the squared-error method, the sum of powers of grey levels, the entropy method, the Fisher function method, the Sobel method, and the sum of gradients. We also implement software for these methods so that arbitrary input images can be evaluated easily. We then evaluate the quality of the degraded images with the different evaluation methods and analyze the results of the six methods, obtaining the characteristics of each method for images degraded by even noise, the characteristics of each method for images degraded by pulse noise, and the best method, together with its characteristics, for images affected by both kinds of noise. These results are important for automatic image selection and will help manage the images acquired by ground-based adaptive optical telescopes.
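    Three of the evaluation methods named above (entropy, the Sobel method, and the sum of gradients) have common textbook forms, sketched in Python below for an 8-bit greyscale frame; the exact formulations used in the paper may differ.

      import numpy as np
      from scipy import ndimage

      def entropy_metric(img):
          hist, _ = np.histogram(img, bins=256, range=(0, 255), density=True)
          hist = hist[hist > 0]
          return float(-(hist * np.log2(hist)).sum())

      def sobel_metric(img):
          gx = ndimage.sobel(img.astype(float), axis=1)
          gy = ndimage.sobel(img.astype(float), axis=0)
          return float(np.hypot(gx, gy).sum())

      def sum_of_gradients(img):
          img = img.astype(float)
          return float(np.abs(np.diff(img, axis=0)).sum() + np.abs(np.diff(img, axis=1)).sum())

      # Noise tends to inflate all three scores while blur lowers the gradient-based ones,
      # which is why their behaviour must be characterised per degradation model.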

  17. Image processing system performance prediction and product quality evaluation

    NASA Technical Reports Server (NTRS)

    Stein, E. K.; Hammill, H. B. (Principal Investigator)

    1976-01-01

    The author has identified the following significant results. A new technique for image processing system performance prediction and product quality evaluation was developed. It was entirely objective, quantitative, and general, and should prove useful in system design and quality control. The technique and its application to determination of quality control procedures for the Earth Resources Technology Satellite NASA Data Processing Facility are described.

  18. Automated quality assessment in three-dimensional breast ultrasound images.

    PubMed

    Schwaab, Julia; Diez, Yago; Oliver, Arnau; Martí, Robert; van Zelst, Jan; Gubern-Mérida, Albert; Mourri, Ahmed Bensouda; Gregori, Johannes; Günther, Matthias

    2016-04-01

    Automated three-dimensional breast ultrasound (ABUS) is a valuable adjunct to x-ray mammography for breast cancer screening of women with dense breasts. High image quality is essential for proper diagnostics and computer-aided detection. We propose an automated image quality assessment system for ABUS images that detects artifacts at the time of acquisition. Therefore, we study three aspects that can corrupt ABUS images: the nipple position relative to the rest of the breast, the shadow caused by the nipple, and the shape of the breast contour on the image. Image processing and machine learning algorithms are combined to detect these artifacts based on 368 clinical ABUS images that have been rated manually by two experienced clinicians. At a specificity of 0.99, 55% of the images that were rated as low quality are detected by the proposed algorithms. The areas under the ROC curves of the single classifiers are 0.99 for the nipple position, 0.84 for the nipple shadow, and 0.89 for the breast contour shape. The proposed algorithms work fast and reliably, which makes them adequate for online evaluation of image quality during acquisition. The presented concept may be extended to further image modalities and quality aspects.

  19. Teacher-child relationship quality and academic achievement of Chinese American children in immigrant families.

    PubMed

    Ly, Jennifer; Zhou, Qing; Chu, Keira; Chen, Stephen H

    2012-08-01

    This study examined the cross-sectional relations between teacher-child relationship quality (TCRQ) and math and reading achievement in a socio-economically diverse sample of Chinese American first- and second-grade children in immigrant families (N=207). Teachers completed a questionnaire measuring TCRQ dimensions including closeness, conflict, and intimacy, and children completed a questionnaire measuring overall TCRQ. Standardized tests were used to assess children's math and reading skills. Analyses were conducted to (a) test the factor structure of measures assessing TCRQ among Chinese American children, (b) examine the associations between teacher- and child-rated TCRQ and children's academic achievement, controlling for demographic characteristics, and (c) examine the potential role of child gender as a moderator in the relations between TCRQ and achievement. Results indicated that teacher-rated TCRQ Warmth was positively associated with Chinese American children's reading achievement. Two child gender-by-TCRQ interactions were found: (a) teacher-rated TCRQ Conflict was negatively associated with girls' (but not boys') math achievement, and (b) child-rated Overall TCRQ was positively associated with boys' (but not girls') reading achievement. These findings highlight the valuable role of TCRQ in the academic success of school-aged children in immigrant families.

  20. Analyzing and Improving Image Quality in Reflective Ghost Imaging

    DTIC Science & Technology

    2011-02-01

    imaging." Phys. Rev. A 79, 023833 (2009). [7] R . E . Meyers , K. S. Deacon. and Y. Shih, "Ghost-imaging experiment by measuring reflected photons," Phys...Rev. A 77, 041801 (2008). [8] R . E . Meyers and K. S. Deacon, "Quantum ghost imaging experiments at ARL," Proc. SPIE 7815. 781501 (2010). [9] J. H

  1. Reducing radiation to patients and improving image quality in a real-world nuclear cardiology laboratory.

    PubMed

    Bloom, Stephen A; Meyers, Karen

    2017-03-22

    In part because of aging equipment and reduced reimbursement for imaging services in the last several years, nuclear cardiologists who remain in private practice face challenges in maintaining high quality and in reducing radiation exposure to patients. We review patient-centered approaches and affordable software solutions employed in our practice combined with supine-prone myocardial perfusion imaging to achieve increased interpretive confidence with reduced radiation exposure to patients.

  2. No-reference visual quality assessment for image inpainting

    NASA Astrophysics Data System (ADS)

    Voronin, V. V.; Frantc, V. A.; Marchuk, V. I.; Sherstobitov, A. I.; Egiazarian, K.

    2015-03-01

    Inpainting has received a lot of attention in recent years, and quality assessment is an important task for evaluating different image reconstruction approaches. When recovering large areas with missing pixels, inpainting methods often introduce blur at sharp transitions and image contours and often fail to recover curved boundary edges. Quantitative metrics for inpainting results currently do not exist, so researchers use human comparisons to evaluate their methodologies and techniques. Most objective quality assessment methods rely on a reference image, which is often not available in inpainting applications; researchers therefore usually resort to subjective quality assessment by human observers, which is a difficult and time-consuming procedure. This paper focuses on a machine learning approach to no-reference visual quality assessment for image inpainting based on properties of the human visual system. Our method is based on the observation that Local Binary Patterns describe local structural information of the image well. We use support vector regression, trained on images assessed by humans, to predict the perceived quality of inpainted images. We demonstrate how our predicted quality value correlates with qualitative opinion in a human observer study. Results are shown on a human-scored dataset for different inpainting methods.
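    A minimal Python sketch of the learning stage described above, using Local Binary Pattern histograms as features and support vector regression trained on human scores; the training data below are random placeholders, not the authors' dataset.

      import numpy as np
      from skimage.feature import local_binary_pattern
      from sklearn.svm import SVR

      def lbp_histogram(img, p=8, r=1.0):
          codes = local_binary_pattern(img, P=p, R=r, method="uniform")
          hist, _ = np.histogram(codes, bins=p + 2, range=(0, p + 2), density=True)
          return hist

      rng = np.random.default_rng(1)
      train_imgs = [rng.integers(0, 256, size=(64, 64)).astype(np.uint8) for _ in range(20)]
      train_scores = rng.uniform(1, 5, size=20)        # placeholder mean observer scores

      X = np.vstack([lbp_histogram(im) for im in train_imgs])
      model = SVR(kernel="rbf", C=10.0).fit(X, train_scores)

      test_img = rng.integers(0, 256, size=(64, 64)).astype(np.uint8)
      print("predicted quality:", model.predict(lbp_histogram(test_img)[None, :])[0])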

  3. Image quality transfer and applications in diffusion MRI.

    PubMed

    Alexander, Daniel C; Zikic, Darko; Ghosh, Aurobrata; Tanno, Ryutaro; Wottschel, Viktor; Zhang, Jiaying; Kaden, Enrico; Dyrby, Tim B; Sotiropoulos, Stamatios N; Zhang, Hui; Criminisi, Antonio

    2017-03-03

    This paper introduces a new computational imaging technique called image quality transfer (IQT). IQT uses machine learning to transfer the rich information available from one-off experimental medical imaging devices to the abundant but lower-quality data from routine acquisitions. The procedure uses matched pairs to learn mappings from low-quality to corresponding high-quality images. Once learned, these mappings then augment unseen low quality images, for example by enhancing image resolution or information content. Here, we demonstrate IQT using a simple patch-regression implementation and the uniquely rich diffusion MRI data set from the human connectome project (HCP). Results highlight potential benefits of IQT in both brain connectivity mapping and microstructure imaging. In brain connectivity mapping, IQT reveals, from standard data sets, thin connection pathways that tractography normally requires specialised data to reconstruct. In microstructure imaging, IQT shows potential in estimating, from standard "single-shell" data (one non-zero b-value), maps of microstructural parameters that normally require specialised multi-shell data. Further experiments show strong generalisability, highlighting IQT's benefits even when the training set does not directly represent the application domain. The concept extends naturally to many other imaging modalities and reconstruction problems.
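    The "simple patch-regression implementation" idea can be sketched in a few lines of Python: learn a mapping from low-quality patches to matched high-quality patches, then apply it patch-wise to unseen data. The patch size, ridge regressor, and synthetic data are assumptions; the paper's actual regression model may differ.

      import numpy as np
      from sklearn.linear_model import Ridge

      def extract_patches(img, size=5):
          patches = []
          for i in range(0, img.shape[0] - size + 1, size):
              for j in range(0, img.shape[1] - size + 1, size):
                  patches.append(img[i:i + size, j:j + size].ravel())
          return np.array(patches)

      rng = np.random.default_rng(2)
      hq = rng.normal(size=(50, 50))                    # placeholder "one-off" high-quality image
      lq = hq + 0.3 * rng.normal(size=hq.shape)         # matched low-quality acquisition

      regressor = Ridge(alpha=1.0).fit(extract_patches(lq), extract_patches(hq))

      # Transfer quality to a new low-quality image by predicting its high-quality patches.
      new_lq = rng.normal(size=(50, 50))
      predicted_hq_patches = regressor.predict(extract_patches(new_lq))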

  4. Meat quality evaluation by hyperspectral imaging technique: an overview.

    PubMed

    Elmasry, Gamal; Barbin, Douglas F; Sun, Da-Wen; Allen, Paul

    2012-01-01

    During the last two decades, a number of methods have been developed to objectively measure meat quality attributes. Hyperspectral imaging technique as one of these methods has been regarded as a smart and promising analytical tool for analyses conducted in research and industries. Recently there has been a renewed interest in using hyperspectral imaging in quality evaluation of different food products. The main inducement for developing the hyperspectral imaging system is to integrate both spectroscopy and imaging techniques in one system to make direct identification of different components and their spatial distribution in the tested product. By combining spatial and spectral details together, hyperspectral imaging has proved to be a promising technology for objective meat quality evaluation. The literature presented in this paper clearly reveals that hyperspectral imaging approaches have a huge potential for gaining rapid information about the chemical structure and related physical properties of all types of meat. In addition to its ability for effectively quantifying and characterizing quality attributes of some important visual features of meat such as color, quality grade, marbling, maturity, and texture, it is able to measure multiple chemical constituents simultaneously without monotonous sample preparation. Although this technology has not yet been sufficiently exploited in meat process and quality assessment, its potential is promising. Developing a quality evaluation system based on hyperspectral imaging technology to assess the meat quality parameters and to ensure its authentication would bring economical benefits to the meat industry by increasing consumer confidence in the quality of the meat products. This paper provides a detailed overview of the recently developed approaches and latest research efforts exerted in hyperspectral imaging technology developed for evaluating the quality of different meat products and the possibility of its widespread

  5. Image quality assessment by preprocessing and full reference model combination

    NASA Astrophysics Data System (ADS)

    Bianco, S.; Ciocca, G.; Marini, F.; Schettini, R.

    2009-01-01

    This paper focuses on full-reference image quality assessment and presents different computational strategies aimed to improve the robustness and accuracy of some well known and widely used state of the art models, namely the Structural Similarity approach (SSIM) by Wang and Bovik and the S-CIELAB spatial-color model by Zhang and Wandell. We investigate the hypothesis that combining error images with a visual attention model could allow a better fit of the psycho-visual data of the LIVE Image Quality assessment Database Release 2. We show that the proposed quality assessment metric better correlates with the experimental data.
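    One of the component models being combined, SSIM, can be computed directly with scikit-image, as in the Python sketch below; the S-CIELAB term and the visual-attention weighting of the error image are not shown, and the images are placeholders.

      import numpy as np
      from skimage.metrics import structural_similarity

      reference = np.random.default_rng(3).random((128, 128))            # placeholder reference image
      noise = 0.05 * np.random.default_rng(4).normal(size=reference.shape)
      distorted = np.clip(reference + noise, 0.0, 1.0)

      score, ssim_map = structural_similarity(reference, distorted, data_range=1.0, full=True)
      print(f"global SSIM = {score:.3f}")
      # ssim_map is the per-pixel "error image" that could be re-weighted by a visual
      # attention model before pooling, along the lines proposed above.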

  6. Image Quality Assessment Based on Inter-Patch and Intra-Patch Similarity

    PubMed Central

    Zhou, Fei; Lu, Zongqing; Wang, Can; Sun, Wen; Xia, Shu-Tao; Liao, Qingmin

    2015-01-01

    In this paper, we propose a full-reference (FR) image quality assessment (IQA) scheme, which evaluates image fidelity from two aspects: the inter-patch similarity and the intra-patch similarity. The scheme is performed in a patch-wise fashion so that a quality map can be obtained. On one hand, we investigate the disparity between one image patch and its adjacent ones. This disparity is visually described by an inter-patch feature, where the hybrid effect of luminance masking and contrast masking is taken into account. The inter-patch similarity is further measured by modifying the normalized correlation coefficient (NCC). On the other hand, we also attach importance to the impact of image contents within one patch on the IQA problem. For the intra-patch feature, we consider image curvature as an important complement of image gradient. According to local image contents, the intra-patch similarity is measured by adaptively comparing image curvature and gradient. Besides, a nonlinear integration of the inter-patch and intra-patch similarity is presented to obtain an overall score of image quality. The experiments conducted on six publicly available image databases show that our scheme achieves better performance in comparison with several state-of-the-art schemes. PMID:25793282

  7. Impact of varying transmission bandwidth on image quality.

    PubMed

    Broderick, T J; Harnett, B M; Merriam, N R; Kapoor, V; Doarn, C R; Merrell, R C

    2001-01-01

    The objective of this paper is to determine the effect of varying transmission bandwidth on image quality in laparoscopic surgery. Surgeons located in remote operating rooms connected through a telemedicine link must be able to transmit medical images for interaction. Image clarity and color fidelity are of critical importance in telementoring laparoscopic procedures. The clarity of laparoscopic images was measured by assessing visual acuity using a video image of a Snellen eye chart obtained with standard diameter laparoscopes (2, 5, and 10 mm). The clarity of the local image was then compared to that of remote images transmitted using various bandwidths and connection protocols [33.6 Kbps POTS (IP), 128 Kbps ISDN, 384 Kbps ISDN, 10 Mbps LAN (IP)]. The laparoscopes were subsequently used to view standard color placards. These color images were sent via similar transmission bandwidths and connection protocols. The local and remote images of the color placards were compared to determine the effect of the transmission protocols on color fidelity. Use of laparoscopes of different diameter does not significantly affect image clarity or color fidelity as long as the laparoscopes are positioned at their optimal working distance. Decreasing transmission bandwidth does not significantly affect image clarity or color fidelity when sufficient time is allowed for the algorithms to redraw the remote image. Remote telementoring of laparoscopic procedures is feasible. However, low bandwidth connections require slow and/or temporarily stopped camera movements for the quality of the remote video image to approximate that of the local video image.

  8. HgCdTe Detectors for Space and Science Imaging: General Issues and Latest Achievements

    NASA Astrophysics Data System (ADS)

    Gravrand, O.; Rothman, J.; Cervera, C.; Baier, N.; Lobre, C.; Zanatta, J. P.; Boulade, O.; Moreau, V.; Fieque, B.

    2016-09-01

    HgCdTe (MCT) is a very versatile material system for infrared (IR) detection, suitable for high performance detection in a wide range of applications and spectral ranges. Indeed, the ability to tailor the cutoff frequency as close as possible to the needs makes it a perfect candidate for high performance detection. Moreover, the high quality material available today, grown either by molecular beam epitaxy or liquid phase epitaxy, allows for very low dark currents at low temperatures, suitable for low flux detection applications such as science imaging. MCT has also demonstrated robustness to the aggressive environment of space and faces, therefore, a large demand for space applications. A satellite may stare at the earth, in which case detection usually involves a lot of photons, called a high flux scenario. Alternatively, a satellite may stare at outer space for science purposes, in which case the detected photon number is very low, leading to low flux scenarios. This latter case induces very strong constraints onto the detector: low dark current, low noise, (very) large focal plane arrays. The classical structures used to fulfill those requirements are usually p/n MCT photodiodes. This type of structure has been deeply investigated in our laboratory for different spectral bands, in collaboration with the CEA Astrophysics lab. However, another alternative may also be investigated with low excess noise: MCT n/p avalanche photodiodes (APD). This paper reviews the latest achievements obtained on this matter at DEFIR (LETI and Sofradir common laboratory) from the short wave infrared (SWIR) band detection for classical astronomical needs, to long wave infrared (LWIR) band for exoplanet transit spectroscopy, up to very long wave infrared (VLWIR) bands. The different available diode architectures (n/p VHg or p/n, or even APDs) are reviewed, including different available ROIC architectures for low flux detection.

  9. Feature maps driven no-reference image quality prediction of authentically distorted images

    NASA Astrophysics Data System (ADS)

    Ghadiyaram, Deepti; Bovik, Alan C.

    2015-03-01

    Current blind image quality prediction models rely on benchmark databases comprised of singly and synthetically distorted images, thereby learning image features that are only adequate to predict human perceived visual quality on such inauthentic distortions. However, real world images often contain complex mixtures of multiple distortions. Rather than a) discounting the effect of these mixtures of distortions on an image's perceptual quality and considering only the dominant distortion or b) using features that are only proven to be efficient for singly distorted images, we deeply study the natural scene statistics of authentically distorted images, in different color spaces and transform domains. We propose a feature-maps-driven statistical approach which avoids any latent assumptions about the type of distortion(s) contained in an image, and focuses instead on modeling the remarkable consistencies in the scene statistics of real world images in the absence of distortions. We design a deep belief network that takes model-based statistical image features derived from a very large database of authentically distorted images as input and discovers good feature representations by generalizing over different distortion types, mixtures, and severities, which are later used to learn a regressor for quality prediction. We demonstrate the remarkable competence of our features for improving automatic perceptual quality prediction on a benchmark database and on the newly designed LIVE Authentic Image Quality Challenge Database and show that our approach of combining robust statistical features and the deep belief network dramatically outperforms the state-of-the-art.
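    One widely used family of model-based natural scene statistics in this line of work is the mean-subtracted contrast-normalised (MSCN) coefficients; the Python sketch below shows how such low-level features could be computed, while the deep belief network and regressor stages are not reproduced. Window size and moment choices are assumptions.

      import numpy as np
      from scipy import ndimage

      def mscn_coefficients(img, sigma=7 / 6):
          img = img.astype(np.float64)
          mu = ndimage.gaussian_filter(img, sigma)
          var = ndimage.gaussian_filter(img * img, sigma) - mu * mu
          return (img - mu) / (np.sqrt(np.abs(var)) + 1.0)

      def nss_moments(img):
          mscn = mscn_coefficients(img)
          centred = mscn - mscn.mean()
          return np.array([mscn.mean(), mscn.var(),
                           (centred ** 3).mean(),       # third moment (skewness numerator)
                           (centred ** 4).mean()])      # fourth moment (kurtosis numerator)

      # Authentic distortions shift these statistics away from their values on pristine
      # images, which is the regularity the learned model exploits.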

  10. Photoreceptor waveguides and effective retinal image quality

    NASA Astrophysics Data System (ADS)

    Vohnsen, Brian

    2007-03-01

    Individual photoreceptor waveguiding suggests that the entire retina can be considered as a composite fiber-optic element relating a retinal image to a corresponding waveguided image. In such a scheme, a visual sensation is produced only when the latter interacts with the pigments of the outer photoreceptor segments. Here the possible consequences of photoreceptor waveguiding on vision are studied with important implications for the pupil-apodization method commonly used to incorporate directional effects of the retina. In the absence of aberrations, it is found that the two approaches give identical predictions for an effective retinal image only when the pupil apodization is chosen twice as narrow as suggested by the traditional Stiles-Crawford effect. In addition, phase variations in the retinal field due to ocular aberrations can delicately alter a waveguided image, and this may provide plausible justification for an improved visual sensation as compared with what should be expected on the grounds of a retinal image only.

  11. Blind noisy image quality evaluation using a deformable ant colony algorithm

    NASA Astrophysics Data System (ADS)

    Chen, Li; Huang, Xiaotong; Tian, Jing; Fu, Xiaowei

    2014-04-01

    The objective of blind noisy image quality assessment is to evaluate the quality of the degraded noisy image without the knowledge of the ground truth image. Its performance relies on the accuracy of the noise statistics estimated from homogenous blocks. The major challenge of block-based approaches lies in the block size selection, as it affects the local noise derivation. To tackle this challenge, a deformable ant colony optimization (DACO) approach is proposed in this paper to adaptively adjust the ant size for image block selection. The proposed DACO approach considers that the size of the ant is adjustable during foraging. For the smooth image blocks, more pheromone is deposited, and then the size of ant is increased. Therefore, this strategy enables the ants to have dynamic food-search capability, leading to more accurate selection of homogeneous blocks. Furthermore, the regression analysis is used to obtain image quality score by exploiting the above-estimated noise statistics. Experimental results are provided to justify that the proposed approach outperforms conventional approaches to provide more accurate noise statistics estimation and achieve a consistent image quality evaluation performance for both the artificially generated and real-world noisy images.
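    The underlying block-based noise estimation can be illustrated with the Python sketch below, where the deformable ant colony search for homogeneous regions is replaced by a simple lowest-variance block selection purely for illustration; the block size and the kept fraction are assumptions.

      import numpy as np

      def estimate_noise_std(img, block=8, keep_fraction=0.1):
          h = (img.shape[0] // block) * block
          w = (img.shape[1] // block) * block
          blocks = img[:h, :w].astype(float).reshape(h // block, block, w // block, block)
          stds = blocks.std(axis=(1, 3)).ravel()
          k = max(1, int(keep_fraction * stds.size))
          return float(np.sort(stds)[:k].mean())        # average over the most homogeneous blocks

      # A quality score would then be obtained by regression on this noise statistic,
      # as described above.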

  12. Image Quality of the Evryscope: Method for On-Site Optical Alignment

    NASA Astrophysics Data System (ADS)

    Wulfken, Philip J.; Law, Nicholas M.; Ratzloff, Jeffrey; Fors, Octavi

    2015-01-01

    Previous wide field surveys have been conducted by taking many images each night to cover thousands of square degrees. The Evryscope is a new type of system designed to search for transiting exoplanets around nearby bright stars, M-dwarfs, white dwarfs, and other transients. The Evryscope is an array of 70 mm telescopes that will continuously image 10200 square degrees of the night sky at once. One of the image quality requirements is for the PSFs to be well-sampled at two pixels across and it was found that tilt caused by slight misalignment between the optics and the CCD increased the size of the FWHM towards the edges and corners of the image. Here we describe the image quality of the Evryscope cameras and the alignment procedure to achieve the required 2 pixel FWHM.

  13. Quality improvement in diabetes--successful in achieving better care with hopes for prevention.

    PubMed

    Haw, J Sonya; Narayan, K M Venkat; Ali, Mohammed K

    2015-09-01

    Diabetes affects 29 million Americans and is associated with billions of dollars in health expenditures and lost productivity. Robust evidence has shown that lifestyle interventions in people at high risk for diabetes and comprehensive management of cardiometabolic risk factors like glucose, blood pressure, and lipids can delay the onset of diabetes and its complications, respectively. However, realizing the "triple aim" of better health, better care, and lower cost in diabetes has been hampered by low adoption of lifestyle interventions to prevent diabetes and poor achievement of care goals for those with diabetes. To achieve better care, a number of quality improvement (QI) strategies targeting the health system, healthcare providers, and/or patients have been evaluated in both controlled trials and real-world programs, and have shown some successes, though barriers still impede wider adoption, effectiveness, real-world feasibility, and scalability. Here, we summarize the effectiveness and cost-effectiveness data regarding QI strategies in diabetes care and discuss the potential role of quality monitoring and QI in trying to implement primary prevention of diabetes more widely and effectively. Over time, achieving better care and better health will likely help bend the ever-growing cost curve.

  14. MEO based secured, robust, high capacity and perceptual quality image watermarking in DWT-SVD domain.

    PubMed

    Gunjal, Baisa L; Mali, Suresh N

    2015-01-01

    The aim of this paper is to present a multiobjective evolutionary optimizer (MEO) based, highly secure, and strongly robust image watermarking technique using the discrete wavelet transform (DWT) and singular value decomposition (SVD). Many researchers have failed to jointly optimize perceptual quality and robustness with high-capacity watermark embedding. Here, we achieve optimized peak signal-to-noise ratio (PSNR) and normalized correlation (NC) using the MEO. Strong security is implemented through eight different security levels, including watermark scrambling by the Fibonacci-Lucas transformation (FLT). The Haar wavelet is selected for the DWT decomposition to compare the practical performance of wavelets from different wavelet families. The technique is non-blind and was tested with cover images of size 512x512 and a grey-scale watermark of size 256x256. The achieved perceptual quality in terms of PSNR is 79.8611 dB for Lena, 87.8446 dB for peppers, and 93.2853 dB for lake images when varying the scale factor K1 from 1 to 5. All candidate test images, namely Lena, peppers, and lake, show exact recovery of the watermark, giving NC equal to 1. Robustness was tested against a variety of attacks on the watermarked image; the experiments showed that the proposed method gives NC greater than 0.96 for the majority of the attacks considered. The performance of this technique was found to be superior to all existing hybrid image watermarking techniques under consideration.
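    The two figures of merit reported above, PSNR for the watermarked image and normalised correlation (NC) for the recovered watermark, follow standard definitions; a Python sketch is given below, while the DWT-SVD embedding and MEO optimisation themselves are not shown.

      import numpy as np

      def psnr(original, watermarked, peak=255.0):
          mse = np.mean((original.astype(float) - watermarked.astype(float)) ** 2)
          return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

      def normalised_correlation(wm_original, wm_extracted):
          a = wm_original.astype(float).ravel()
          b = wm_extracted.astype(float).ravel()
          return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

      # An NC of 1 indicates exact recovery of the watermark, as reported for the test images.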

  15. Quantity and Quality of Computer Use and Academic Achievement: Evidence from a Large-Scale International Test Program

    ERIC Educational Resources Information Center

    Cheema, Jehanzeb R.; Zhang, Bo

    2013-01-01

    This study looked at the effect of both quantity and quality of computer use on achievement. The Program for International Student Assessment (PISA) 2003 student survey comprising of 4,356 students (boys, n = 2,129; girls, n = 2,227) was used to predict academic achievement from quantity and quality of computer use while controlling for…

  16. The use of modern electronic flat panel devices for image guided radiation therapy: image quality comparison, intra fraction motion monitoring and quality assurance applications

    NASA Astrophysics Data System (ADS)

    Nill, S.; Stützel, J.; Häring, P.; Oelfke, U.

    2008-06-01

    With modern radiotherapy delivery techniques like intensity modulated radiotherapy (IMRT), it is possible to deliver a more conformal dose distribution to the tumor while better sparing the organs at risk (OAR) compared with 3D conventional radiation therapy. Because of the theoretically high achievable dose conformity, it is very important to know the exact position of the target volume during the treatment. With more and more modern linear accelerators equipped with imaging devices, this is now almost possible. These imaging devices use energies between 120 kV and 6 MV, and therefore different detector systems are used, but the vast majority are amorphous silicon flat panel devices with different scintillator screens and build-up materials. The technical details and the image quality of these systems are discussed and first results of the comparison are presented. In addition, new methods for motion management and quality assurance procedures are briefly discussed.

  17. Perceived no reference image quality measurement for chromatic aberration

    NASA Astrophysics Data System (ADS)

    Lamb, Anupama B.; Khambete, Madhuri

    2016-03-01

    Today there is a need for no-reference (NR) objective perceived image quality measurement techniques, as conducting subjective experiments and making a reference image available is a very difficult task. Very few NR perceived image quality measurement algorithms are available for color distortions like chromatic aberration (CA), color quantization with dither, and color saturation. We propose NR image quality assessment (NR-IQA) algorithms for images distorted by CA. CA is mostly observed in images taken with digital cameras that pair high sensor resolution with inexpensive lenses. We compared our metric performance with two state-of-the-art NR blur techniques, one full-reference IQA technique, and three general-purpose NR-IQA techniques, although they are not tailored for CA. We used the CA dataset in the TID-2013 color image database to evaluate performance. The proposed algorithms give performance comparable with state-of-the-art techniques in terms of the performance parameters and outperform them in terms of monotonicity and computational complexity. We also found that the proposed CA algorithm best predicts the perceived image quality of images distorted with realistic CA.

  18. Impact of Computed Tomography Image Quality on Image-Guided Radiation Therapy Based on Soft Tissue Registration

    SciTech Connect

    Morrow, Natalya V.; Lawton, Colleen A.; Qi, X. Sharon; Li, X. Allen

    2012-04-01

    Purpose: In image-guided radiation therapy (IGRT), different computed tomography (CT) modalities with varying image quality are being used to correct for interfractional variations in patient set-up and anatomy changes, thereby reducing clinical target volume to the planning target volume (CTV-to-PTV) margins. We explore how CT image quality affects patient repositioning and CTV-to-PTV margins in soft tissue registration-based IGRT for prostate cancer patients. Methods and Materials: Four CT-based IGRT modalities used for prostate RT were considered in this study: MV fan beam CT (MVFBCT) (Tomotherapy), MV cone beam CT (MVCBCT) (MVision; Siemens), kV fan beam CT (kVFBCT) (CTVision, Siemens), and kV cone beam CT (kVCBCT) (Synergy; Elekta). Daily shifts were determined by manual registration to achieve the best soft tissue agreement. Effect of image quality on patient repositioning was determined by statistical analysis of daily shifts for 136 patients (34 per modality). Inter- and intraobserver variability of soft tissue registration was evaluated based on the registration of a representative scan for each CT modality with its corresponding planning scan. Results: Superior image quality with the kVFBCT resulted in reduced uncertainty in soft tissue registration during IGRT compared with other image modalities for IGRT. The largest interobserver variations of soft tissue registration were 1.1 mm, 2.5 mm, 2.6 mm, and 3.2 mm for kVFBCT, kVCBCT, MVFBCT, and MVCBCT, respectively. Conclusions: Image quality adversely affects the reproducibility of soft tissue-based registration for IGRT and necessitates a careful consideration of residual uncertainties in determining different CTV-to-PTV margins for IGRT using different image modalities.

  19. Dosimetry and image quality assessment in a direct radiography system

    PubMed Central

    Oliveira, Bruno Beraldo; de Oliveira, Marcio Alves; Paixão, Lucas; Teixeira, Maria Helena Araújo; Nogueira, Maria do Socorro

    2014-01-01

    Objective To evaluate the mean glandular dose with a solid state detector and the image quality in a direct radiography system, utilizing phantoms. Materials and Methods Irradiations were performed with automatic exposure control and polymethyl methacrylate slabs with different thicknesses to calculate glandular dose values. The image quality was evaluated by means of the structures visualized on the images of the phantoms. Results Considering the uncertainty of the measurements, the mean glandular dose results are in agreement with the values provided by the equipment and with internationally adopted reference levels. Results obtained from images of the phantoms were in agreement with the reference values. Conclusion The present study contributes to verify the equipment conformity as regards dose values and image quality. PMID:25741119

  20. PLÉIADES Project: Assessment of Georeferencing Accuracy, Image Quality, Pansharpening Performence and Dsm/dtm Quality

    NASA Astrophysics Data System (ADS)

    Topan, Hüseyin; Cam, Ali; Özendi, Mustafa; Oruç, Murat; Jacobsen, Karsten; Taşkanat, Talha

    2016-06-01

    Pléiades 1A and 1B are twin optical satellites of the Optical and Radar Federated Earth Observation (ORFEO) program jointly run by France and Italy. They are the first European satellites with sub-meter resolution. Airbus DS (formerly Astrium Geo) runs a MyGIC (formerly Pléiades Users Group) program to validate Pléiades images worldwide for various application purposes. The authors conduct three projects: one within this program, the second supported by the BEU Scientific Research Project Program, and the third supported by TÜBİTAK. Georeferencing accuracy, image quality, pansharpening performance and Digital Surface Model/Digital Terrain Model (DSM/DTM) quality are investigated in these projects. For these purposes, triplet panchromatic (50 cm Ground Sampling Distance (GSD)) and VNIR (2 m GSD) Pléiades 1A images were investigated over the Zonguldak test site (Turkey), which is urbanised, mountainous and covered by dense forest. The georeferencing accuracy was estimated with a standard deviation in X and Y (SX, SY) in the range of 0.45 m by bias corrected Rational Polynomial Coefficient (RPC) orientation, using ~170 Ground Control Points (GCPs). 3D standard deviations of ±0.44 m in X, ±0.51 m in Y, and ±1.82 m in Z have been reached in spite of the very narrow angle of convergence by bias corrected RPC orientation. The image quality was also investigated with respect to effective resolution, Signal to Noise Ratio (SNR) and blur coefficient. The effective resolution was estimated with a factor slightly below 1.0, meaning that the image quality corresponds to the nominal resolution of 50 cm. The blur coefficients were between 0.39 and 0.46 for the triplet panchromatic images, indicating a satisfying image quality. The SNR is in the range of other comparable spaceborne images, which may be caused by de-noising of the Pléiades images. The pansharpened images were generated by various methods, and are validated by most common statistical

  1. Web-based psychometric evaluation of image quality

    NASA Astrophysics Data System (ADS)

    Sprow, Iris; Baranczuk, Zofia; Stamm, Tobias; Zolliker, Peter

    2009-01-01

    The measurement of image quality requires judgement by the human visual system. This paper describes a psycho-visual test technique that uses the internet as a test platform to identify image quality in a more time-effective manner, compares the visual response data with the results from the same test in a lab-based environment, and estimates the usefulness of the internet as a platform for scaling studies.

  2. A quantitative approach to evaluate image quality of whole slide imaging scanners

    PubMed Central

    Shrestha, Prarthana; Kneepkens, R.; Vrijnsen, J.; Vossen, D.; Abels, E.; Hulsken, B.

    2016-01-01

    Context: The quality of images produced by whole slide imaging (WSI) scanners has a direct influence on the readers’ performance and reliability of the clinical diagnosis. Therefore, WSI scanners should produce not only high quality but also consistent quality images. Aim: We aim to evaluate reproducibility of WSI scanners based on the quality of images produced over time and among multiple scanners. The evaluation is independent of the content or context of the test specimen. Methods: The ultimate judge of image quality is a pathologist; however, subjective evaluations are heavily influenced by the complexity of a case, and subtle variations introduced by a scanner can be easily overlooked. Therefore, we employed a quantitative image quality assessment method based on clinically relevant parameters, such as sharpness and brightness, acquired in a survey of pathologists. The acceptable level of quality per parameter was determined in a subjective study. The evaluation of scanner reproducibility was conducted with Philips Ultra-Fast Scanners. A set of 36 HercepTest™ slides was used in three sub-studies addressing variations due to systems and time, producing 8640 test images for evaluation. Results: The results showed that the majority of images in all the sub-studies are within the acceptable quality level; however, some scanners produce higher quality images more often than others. The results are independent of case types, and they match our perception of quality. Conclusion: The quantitative image quality assessment method was successfully applied to the HercepTest™ slides to evaluate WSI scanner reproducibility. The proposed method is generic and applicable to any other types of slide stains and scanners. PMID:28197359
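
    The paper's sharpness and brightness parameters come from a survey of pathologists and are not reproduced here; the sketch below uses generic proxies (variance of the Laplacian for sharpness, mean gray level for brightness) and purely hypothetical acceptance thresholds to show the shape of such a per-tile check:

        import numpy as np
        from scipy.ndimage import laplace

        def tile_quality(gray_tile, sharpness_min=50.0, brightness_range=(120, 230)):
            """Crude per-tile quality check: variance of the Laplacian as a sharpness
            proxy, mean gray level as a brightness proxy. Thresholds are illustrative."""
            sharpness = float(np.var(laplace(gray_tile.astype(float))))
            brightness = float(gray_tile.mean())
            ok = (sharpness >= sharpness_min
                  and brightness_range[0] <= brightness <= brightness_range[1])
            return {"sharpness": sharpness, "brightness": brightness, "acceptable": ok}

        # Example on a synthetic 8-bit tile:
        rng = np.random.default_rng(0)
        tile = rng.integers(100, 200, size=(256, 256)).astype(np.uint8)
        print(tile_quality(tile))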

  3. Applying image quality in cell phone cameras: lens distortion

    NASA Astrophysics Data System (ADS)

    Baxter, Donald; Goma, Sergio R.; Aleksic, Milivoje

    2009-01-01

    This paper describes the framework used in one of the pilot studies run under the I3A CPIQ initiative to quantify overall image quality in cell-phone cameras. The framework is based on a multivariate formalism which tries to predict overall image quality from individual image quality attributes and was validated in a CPIQ pilot program. The pilot study focuses on image quality distortions introduced in the optical path of a cell-phone camera, which may or may not be corrected in the image processing path. The assumption is that the captured image is JPEG compressed and the cell-phone camera is set to 'auto' mode. As the framework requires the individual attributes to be relatively perceptually orthogonal, the attributes used in the pilot study are lens geometric distortion (LGD) and lateral chromatic aberrations (LCA). The goal of this paper is to present the framework of this pilot project, from the definition of the individual attributes up to their quantification in JNDs of quality, a requirement of the multivariate formalism; therefore, both objective and subjective evaluations were used. A major distinction of the objective part from the 'DSC imaging world' is that the LCA/LGD distortions found in cell-phone cameras rarely exhibit radial behavior; therefore, a radial mapping/modeling cannot be used in this case.

  4. Sci—Fri AM: Mountain — 02: A comparison of dose reduction methods on image quality for cone beam CT

    SciTech Connect

    Webb, R; Buckley, LA

    2014-08-15

    Modern radiotherapy uses highly conformal dose distributions and therefore relies on daily image guidance for accurate patient positioning. Kilovoltage cone beam CT is one technique that is routinely used for patient set-up and results in a high dose to the patient relative to planar imaging techniques. This study uses an Elekta Synergy linac equipped with XVI cone beam CT to investigate the impact of various imaging parameters on dose and image quality. Dose and image quality are assessed as functions of x-ray tube voltage, tube current and the number of projections in the scan. In each case, the dose measurements confirm that as each parameter increases the dose increases. The assessment of high contrast resolution shows little dependence on changes to the imaging technique. However, low contrast visibility suggests a trade-off between dose and image quality. Particularly for changes in tube potential, the dose increases much faster as a function of voltage than the corresponding increase in low contrast image quality. This suggests using moderate values of the peak tube voltage (100 – 120 kVp) since higher values result in significant dose increases with little gain in image quality. Measurements also indicate that increasing tube current achieves the greatest degree of improvement in the low contrast visibility. The results of this study highlight the need to establish careful imaging protocols to limit dose to the patient and to limit changes to the imaging parameters to those cases where there is a clear clinical requirement for improved image quality.

  5. Automatic detection of retina disease: robustness to image quality and localization of anatomy structure.

    PubMed

    Karnowski, T P; Aykac, D; Giancardo, L; Li, Y; Nichols, T; Tobin, K W; Chaum, E

    2011-01-01

    The automated detection of diabetic retinopathy and other eye diseases in images of the retina has great promise as a low-cost method for broad-based screening. Many systems in the literature which perform automated detection include a quality estimation step and physiological feature detection, including the vascular tree and the optic nerve / macula location. In this work, we study the robustness of an automated disease detection method with respect to the accuracy of the optic nerve location and the quality of the images obtained as judged by a quality estimation algorithm. The detection algorithm features microaneurysm and exudate detection followed by feature extraction on the detected population to describe the overall retina image. Labeled images of retinas ground-truthed to disease states are used to train a supervised learning algorithm to identify the disease state of the retina image and exam set. Under the restrictions of high confidence optic nerve detections and good quality imagery, the system achieves a sensitivity and specificity of 94.8% and 78.7% with area-under-curve of 95.3%. Analysis of the effect of constraining quality and the distinction between mild non-proliferative diabetic retinopathy, normal retina images, and more severe disease states is included.
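
    As an illustration of how the reported operating point could be computed from classifier outputs (the study's actual classifier and feature set are not reproduced), a small sketch using scikit-learn:

        import numpy as np
        from sklearn.metrics import roc_auc_score, confusion_matrix

        def screening_performance(y_true, y_score, threshold=0.5):
            """Sensitivity, specificity and AUC for a binary disease/no-disease classifier."""
            y_pred = (np.asarray(y_score) >= threshold).astype(int)
            tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
            sensitivity = tp / (tp + fn)
            specificity = tn / (tn + fp)
            auc = roc_auc_score(y_true, y_score)
            return sensitivity, specificity, auc

        # Toy example with made-up labels and scores:
        y_true = [0, 0, 1, 1, 1, 0, 1, 0]
        y_score = [0.1, 0.4, 0.8, 0.65, 0.9, 0.3, 0.55, 0.2]
        print(screening_performance(y_true, y_score))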

  6. Raman chemical imaging technology for food safety and quality evaluation

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Raman chemical imaging combines Raman spectroscopy and digital imaging to visualize composition and morphology of a target. This technique offers great potential for food safety and quality research. Most commercial Raman instruments perform measurement at microscopic level, and the spatial range ca...

  7. What Is Quality Education? How Can It Be Achieved? The Perspectives of School Middle Leaders in Singapore

    ERIC Educational Resources Information Center

    Ng, Pak Tee

    2015-01-01

    This paper presents the findings of a research project that examines how middle leaders in Singapore schools understand "quality education" and how they think quality education can be achieved. From the perspective of these middle leaders, quality education emphasises holistic development, equips students with the knowledge and skills…

  8. Digital image quality measurements by objective and subjective methods from series of parametrically degraded images

    NASA Astrophysics Data System (ADS)

    Tachó, Aura; Mitjà, Carles; Martínez, Bea; Escofet, Jaume; Ralló, Miquel

    2013-11-01

    Many digital image applications, like digitization of cultural heritage for preservation purposes, operate with compressed files in one or more image observing steps. For this kind of application JPEG compression is one of the most widely used. Compression level, final file size and quality loss are parameters that must be managed optimally. Although this loss can be monitored by means of objective image quality measurements, the real challenge is to know how it relates to the image quality perceived by observers. A pictorial image has been degraded by two different procedures. The first applies different levels of low-pass filtering by convolving the image with progressively broader Gaussian kernels. The second saves the original file at a series of JPEG compression levels. In both cases, the objective image quality measurement is done by analysis of the image power spectrum. In order to obtain a measure of the perceived image quality, both series of degraded images are displayed on a computer screen organized in random pairs. The observers are asked to choose the better image of each pair. Finally, a ranking is established applying the Thurstone scaling method. Results obtained by both measurements are compared with each other and with another objective measurement method, the slanted edge test.
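
    A minimal sketch of Thurstone Case V scaling applied to a pairwise-preference count matrix; the specific variant, the clipping of extreme proportions and the toy data below are assumptions for illustration only:

        import numpy as np
        from scipy.stats import norm

        def thurstone_case_v(wins):
            """Thurstone Case V scale values from a pairwise comparison count matrix.

            wins[i, j] = number of times stimulus i was preferred over stimulus j.
            Returns one scale value per stimulus (higher = better perceived quality)."""
            wins = np.asarray(wins, dtype=float)
            trials = wins + wins.T
            with np.errstate(divide="ignore", invalid="ignore"):
                p = np.where(trials > 0, wins / trials, 0.5)
            p = np.clip(p, 0.01, 0.99)          # avoid infinite z-scores
            z = norm.ppf(p)                     # proportion -> standard normal deviate
            np.fill_diagonal(z, 0.0)
            return z.mean(axis=1)               # row means give the scale values

        # Toy example with three degraded versions of one image:
        wins = np.array([[0, 8, 10],
                         [2, 0,  7],
                         [0, 3,  0]])
        print(thurstone_case_v(wins))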

  9. Impact of contact lens zone geometry and ocular optics on bifocal retinal image quality

    PubMed Central

    Bradley, Arthur; Nam, Jayoung; Xu, Renfeng; Harman, Leslie; Thibos, Larry

    2014-01-01

    Purpose To examine the separate and combined influences of zone geometry, pupil size, diffraction, apodisation and spherical aberration on the optical performance of concentric zonal bifocals. Methods Zonal bifocal pupil functions representing eye + ophthalmic correction were defined by interleaving wavefronts from separate optical zones of the bifocal. A two-zone design (a central circular inner zone surrounded by an annular outer-zone which is bounded by the pupil) and a five-zone design (a central small circular zone surrounded by four concentric annuli) were configured with programmable zone geometry, wavefront phase and pupil transmission characteristics. Using computational methods, we examined the effects of diffraction, Stiles Crawford apodisation, pupil size and spherical aberration on optical transfer functions for different target distances. Results Apodisation alters the relative weighting of each zone, and thus the balance of near and distance optical quality. When spherical aberration is included, the effective distance correction, add power and image quality depend on zone-geometry and Stiles Crawford Effect apodisation. When the outer zone width is narrow, diffraction limits the available image contrast when focused, but as pupil dilates and outer zone width increases, aberrations will limit the best achievable image quality. With two-zone designs, balancing near and distance image quality is not achieved with equal area inner and outer zones. With significant levels of spherical aberration, multi-zone designs effectively become multifocals. Conclusion Wave optics and pupil varying ocular optics significantly affect the imaging capabilities of different optical zones of concentric bifocals. With two-zone bifocal designs, diffraction, pupil apodisation spherical aberration, and zone size influence both the effective add power and the pupil size required to balance near and distance image quality. Five-zone bifocal designs achieve a high degree of

  10. Achieving High Spatial Resolution Surface Plasmon Resonance Microscopy with Image Reconstruction.

    PubMed

    Yu, Hui; Shan, Xiaonan; Wang, Shaopeng; Tao, Nongjian

    2017-03-07

    Surface plasmon resonance microscopy (SPRM) is a powerful platform for biomedical imaging and molecular binding kinetics analysis. However, the spatial resolution of SPRM along the plasmon propagation direction (longitudinal) is determined by the decaying length of the plasmonic wave, which can be as large as tens of microns. Different methods have been proposed to improve the spatial resolution, but each at the expense of decreased sensitivity or temporal resolution. Here we present a method to achieve high spatial resolution SPRM based on deconvolution of complex field. The method does not require additional optical setup and improves the spatial resolution in the longitudinal direction. We applied the method to image nanoparticles and achieved close-to-diffraction limit resolution in both longitudinal and transverse directions.

  11. Optimizing CT radiation dose based on patient size and image quality: the size-specific dose estimate method.

    PubMed

    Larson, David B

    2014-10-01

    The principle of ALARA (dose as low as reasonably achievable) calls for dose optimization rather than dose reduction, per se. Optimization of CT radiation dose is accomplished by producing images of acceptable diagnostic image quality using the lowest dose method available. Because it is image quality that constrains the dose, CT dose optimization is primarily a problem of image quality rather than radiation dose. Therefore, the primary focus in CT radiation dose optimization should be on image quality. However, no reliable direct measure of image quality has been developed for routine clinical practice. Until such measures become available, size-specific dose estimates (SSDE) can be used as a reasonable image-quality estimate. The SSDE method of radiation dose optimization for CT abdomen and pelvis consists of plotting SSDE for a sample of examinations as a function of patient size, establishing an SSDE threshold curve based on radiologists' assessment of image quality, and modifying protocols to consistently produce doses that are slightly above the threshold SSDE curve. Challenges in operationalizing CT radiation dose optimization include data gathering and monitoring, managing the complexities of the numerous protocols, scanners and operators, and understanding the relationship of the automated tube current modulation (ATCM) parameters to image quality. Because CT manufacturers currently maintain their ATCM algorithms as secret for proprietary reasons, prospective modeling of SSDE for patient populations is not possible without reverse engineering the ATCM algorithm and, hence, optimization by this method requires a trial-and-error approach.
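
    For reference, SSDE is typically obtained by scaling the displayed CTDIvol with a size-dependent conversion factor; the sketch below uses the exponential fit associated with the AAPM Report 204 table for the 32 cm phantom (coefficients quoted approximately and worth verifying against the report before any real use):

        import math

        def ssde_mgy(ctdi_vol_32cm_mgy, ap_cm, lat_cm):
            """Size-specific dose estimate from CTDIvol (32 cm phantom) and patient size.

            The conversion factor is an exponential fit to the AAPM Report 204 table
            for the 32 cm phantom; treat the coefficients as approximate."""
            eff_diameter = math.sqrt(ap_cm * lat_cm)               # effective diameter, cm
            f = 3.704369 * math.exp(-0.03671937 * eff_diameter)    # size-dependent factor
            return ctdi_vol_32cm_mgy * f

        # Example: CTDIvol of 10 mGy for a patient measuring 25 cm AP x 35 cm LAT
        print(round(ssde_mgy(10.0, 25.0, 35.0), 1), "mGy")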

  12. Multiple-image encryption based on triple interferences for flexibly decrypting high-quality images.

    PubMed

    Li, Wei-Na; Phan, Anh-Hoang; Piao, Mei-Lan; Kim, Nam

    2015-04-10

    We propose a multiple-image encryption (MIE) scheme based on triple interferences for flexibly decrypting high-quality images. Each image is discretionarily deciphered without decrypting a series of other images earlier. Since it does not involve any cascaded encryption orders, the image can be decrypted flexibly by using the novel method. Computer simulation demonstrated that the proposed method's running time is less than approximately 1/4 that of the previous similar MIE method. Moreover, the decrypted image is perfectly correlated with the original image, and due to many phase functions serving as decryption keys, this method is more secure and robust.

  13. Image science and image-quality research in the Optical Sciences Center

    NASA Astrophysics Data System (ADS)

    Barrett, Harrison H.; Myers, Kyle J.

    2014-09-01

    This paper reviews the history of research into imaging and image quality at the Optical Sciences Center (OSC), with emphasis on the period 1970-1990. The work of various students in the areas of psychophysical studies of human observers of images; mathematical model observers; image simulation and analysis, and the application of these methods to radiology and nuclear medicine is summarized. The rapid progress in computational power, at OSC and elsewhere, which enabled the steady advances in imaging and the emergence of a science of imaging, is also traced. The implications of these advances to ongoing research and the current Image Science curriculum at the College of Optical Sciences are discussed.

  14. The effect of image quality and forensic expertise in facial image comparisons.

    PubMed

    Norell, Kristin; Läthén, Klas Brorsson; Bergström, Peter; Rice, Allyson; Natu, Vaidehi; O'Toole, Alice

    2015-03-01

    Images of perpetrators in surveillance video footage are often used as evidence in court. In this study, identification accuracy was compared for forensic experts and untrained persons in facial image comparisons as well as the impact of image quality. Participants viewed thirty image pairs and were asked to rate the level of support garnered from their observations for concluding whether or not the two images showed the same person. Forensic experts reached their conclusions with significantly fewer errors than did untrained participants. They were also better than novices at determining when two high-quality images depicted the same person. Notably, lower image quality led to more careful conclusions by experts, but not for untrained participants. In summary, the untrained participants had more false negatives and false positives than experts, which in the latter case could lead to a higher risk of an innocent person being convicted for an untrained witness.

  15. LANDSAT-4 image data quality analysis

    NASA Technical Reports Server (NTRS)

    Anuta, P. E.

    1984-01-01

    Methods were developed for estimating point spread functions from image data. Roads and bridges in dark backgrounds are being examined as well as other smoothing methods for reducing noise in the estimated point spread function. Tomographic techniques were used to estimate two dimensional point spread functions. Reformatting software changes were implemented to handle formats for LANDSAT-5 data.

  16. A Dynamic Image Quality Evaluation of Videofluoroscopy Images: Considerations for Telepractice Applications.

    PubMed

    Burns, Clare L; Keir, Benjamin; Ward, Elizabeth C; Hill, Anne J; Farrell, Anna; Phillips, Nick; Porter, Linda

    2015-08-01

    High-quality fluoroscopy images are required for accurate interpretation of videofluoroscopic swallow studies (VFSS) by speech pathologists and radiologists. Consequently, integral to developing any system to conduct VFSS remotely via telepractice is ensuring that the quality of the VFSS images transferred via the telepractice system is optimized. This study evaluates the extent of change observed in image quality when videofluoroscopic images are transmitted from a digital fluoroscopy system to (a) current clinical equipment (KayPentax Digital Swallowing Workstation) and (b) four different telepractice system configurations. The telepractice system configurations consisted of either a local C20 or C60 Cisco TelePresence System (codec unit) connected to the digital fluoroscopy system and linked to a second remote C20 or C60 Cisco TelePresence System via a network running at speeds of either 2, 4 or 6 megabits per second (Mbit/s). Image quality was tested using the NEMA XR 21 Phantom, and results demonstrated some loss in spatial resolution, low contrast detectability and temporal resolution for all transferred images when compared to the fluoroscopy source. When using higher capacity codec units and/or the highest bandwidths to support data transmission, image quality transmitted through the telepractice system was found to be comparable to, if not better than, the current clinical system. This study confirms that telepractice systems can be designed to support fluoroscopy image transfer and highlights important considerations when developing telepractice systems for VFSS analysis to ensure high-quality radiological image reproduction.

  17. APQ-102 imaging radar digital image quality study

    NASA Technical Reports Server (NTRS)

    Griffin, C. R.; Estes, J. M.

    1982-01-01

    A modified APQ-102 sidelooking radar collected synthetic aperture radar (SAR) data which was digitized and recorded on wideband magnetic tape. These tapes were then ground processed into computer compatible tapes (CCT's). The CCT's may then be processed into high resolution radar images by software on the CYBER computer.

  18. Quality limiting factors of imaging endoscopes based on optical fiber bundles

    NASA Astrophysics Data System (ADS)

    Ortega-Quijano, N.; Arce-Diego, J. L.; Fanjul-Vélez, F.

    2008-04-01

    Nowadays, imaging endoscopes have a key role in medicine, for diagnostic, treatment and surgical applications. Coherent optical fiber bundles used for medical imaging show flexibility and a high active area, but they entail two main quality-limiting factors: leaky modes and crosstalk or interference between the optical fibers of the bundle. The former provokes a worsening of lateral resolution, while the latter causes a decrease in the contrast of the final image. In this work, both factors are studied in detail. We analyse the main characteristics of these effects, showing the limitations they impose to the endoscopic system. Finally, some solutions are proposed, and a method for determining optical fibers with the appropriate opto-geometrical parameters is presented in order to achieve an optimum design and improve the image quality of the endoscope.

  19. The influence of software filtering in digital mammography image quality

    NASA Astrophysics Data System (ADS)

    Michail, C.; Spyropoulou, V.; Kalyvas, N.; Valais, I.; Dimitropoulos, N.; Fountos, G.; Kandarakis, I.; Panayiotakis, G.

    2009-05-01

    Breast cancer is one of the most frequently diagnosed cancers among women. Several techniques have been developed to help in the early detection of breast cancer such as conventional and digital x-ray mammography, positron and single-photon emission mammography, etc. A key advantage in digital mammography is that images can be manipulated as simple computer image files. Thus non-dedicated commercially available image manipulation software can be employed to process and store the images. The image processing tools of the Photoshop (CS 2) software usually incorporate digital filters which may be used to reduce image noise, enhance contrast and increase spatial resolution. However, improving an image quality parameter may result in degradation of another. The aim of this work was to investigate the influence of three sharpening filters, named hereafter sharpen, sharpen more and sharpen edges, on image resolution and noise. Image resolution was assessed by means of the Modulation Transfer Function (MTF). In conclusion it was found that the correct use of commercial non-dedicated software on digital mammograms may improve some aspects of image quality.
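
    The abstract does not describe how the MTF was measured; a simplified (non-slanted) edge-based estimate is sketched below as an assumption-laden illustration of the general idea:

        import numpy as np

        def mtf_from_edge(edge_profile, pixel_pitch_mm):
            """Simplified MTF estimate from a 1-D edge spread function (ESF).

            Differentiates the ESF to get the line spread function (LSF), applies a
            window, and takes the magnitude of the FFT. Returns spatial frequencies
            (cycles/mm) and the normalized MTF. A real slanted-edge analysis would
            oversample the ESF first."""
            esf = np.asarray(edge_profile, dtype=float)
            lsf = np.gradient(esf)
            lsf *= np.hanning(lsf.size)               # taper to reduce truncation artifacts
            mtf = np.abs(np.fft.rfft(lsf))
            mtf /= mtf[0]
            freqs = np.fft.rfftfreq(lsf.size, d=pixel_pitch_mm)
            return freqs, mtf

        # Toy example: a blurred step edge sampled at 0.1 mm pitch
        x = np.linspace(-2, 2, 81)
        esf = 1.0 / (1.0 + np.exp(-x / 0.15))
        freqs, mtf = mtf_from_edge(esf, pixel_pitch_mm=0.1)
        print(freqs[:4], mtf[:4])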

  20. Validation of no-reference image quality index for the assessment of digital mammographic images

    NASA Astrophysics Data System (ADS)

    de Oliveira, Helder C. R.; Barufaldi, Bruno; Borges, Lucas R.; Gabarda, Salvador; Bakic, Predrag R.; Maidment, Andrew D. A.; Schiabel, Homero; Vieira, Marcelo A. C.

    2016-03-01

    To ensure optimal clinical performance of digital mammography, it is necessary to obtain images with high spatial resolution and low noise, keeping radiation exposure as low as possible. These requirements directly affect the interpretation of radiologists. The quality of a digital image should be assessed using objective measurements. In general, these methods measure the similarity between a degraded image and an ideal image without degradation (ground truth), used as a reference. These methods are called Full-Reference Image Quality Assessment (FR-IQA). However, for digital mammography, an image without degradation is not available in clinical practice; thus, an objective method to assess the quality of mammograms must be performed without reference. The purpose of this study is to present a Normalized Anisotropic Quality Index (NAQI), based on the Rényi entropy in the pseudo-Wigner domain, to assess mammography images in terms of spatial resolution and noise without any reference. The method was validated using synthetic images acquired through an anthropomorphic breast software phantom, and clinical exposures on anthropomorphic breast physical phantoms and patients' mammograms. The results reported by this no-reference index follow the same behavior as other well-established full-reference metrics, e.g., the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM). Reductions of 50% in the radiation dose in phantom images translated into a decrease of 4 dB in PSNR, 25% in SSIM and 33% in NAQI, evidencing that the proposed metric is sensitive to the noise resulting from dose reduction. The clinical results showed that images reduced to 53% and 30% of the standard radiation dose had reductions of 15% and 25% in NAQI, respectively. Thus, this index may be used in clinical practice as an image quality indicator to improve the quality assurance programs in mammography; hence, the proposed method reduces the subjectivity
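
    The NAQI itself (Rényi entropy in the pseudo-Wigner domain) is not reproduced here, but the full-reference baselines it is compared against are standard; a minimal sketch using scikit-image:

        import numpy as np
        from skimage.metrics import peak_signal_noise_ratio, structural_similarity

        def full_reference_scores(reference, test, data_range=255):
            """PSNR and SSIM between a full-dose reference image and a degraded image."""
            psnr = peak_signal_noise_ratio(reference, test, data_range=data_range)
            ssim = structural_similarity(reference, test, data_range=data_range)
            return psnr, ssim

        # Toy example: add noise to a synthetic image patch
        rng = np.random.default_rng(1)
        ref = rng.integers(0, 256, size=(128, 128)).astype(np.float64)
        noisy = np.clip(ref + rng.normal(0, 10, ref.shape), 0, 255)
        print(full_reference_scores(ref, noisy))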

  1. Principles of CT: radiation dose and image quality.

    PubMed

    Goldman, Lee W

    2007-12-01

    This article discusses CT radiation dose, the measurement of CT dose, and CT image quality. The most commonly used dose descriptor is CT dose index, which represents the dose to a location (e.g., depth) in a scanned volume from a complete series of slices. A weighted average of the CT dose index measured at the center and periphery of dose phantoms provides a convenient single-number estimate of patient dose for a procedure, and this value (or a related indicator that includes the scanned length) is often displayed on the operator's console. CT image quality, as in most imaging, is described in terms of contrast, spatial resolution, image noise, and artifacts. A strength of CT is its ability to visualize structures of low contrast in a subject, a task that is limited primarily by noise and is therefore closely associated with radiation dose: The higher the dose contributing to the image, the less apparent is image noise and the easier it is to perceive low-contrast structures. Spatial resolution is ultimately limited by sampling, but both image noise and resolution are strongly affected by the reconstruction filter. As a result, diagnostically acceptable image quality at acceptable doses of radiation requires appropriately designed clinical protocols, including appropriate kilovolt peaks, amperages, slice thicknesses, and reconstruction filters.
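
    The dose descriptors mentioned here follow standard definitions; a small sketch (assuming a helical scan and CTDI100 phantom measurements in mGy) ties them together:

        def ct_dose_descriptors(ctdi_center_mgy, ctdi_periphery_mgy, pitch, scan_length_cm):
            """Standard CT dose descriptors from phantom measurements.

            Weighted CTDI combines centre and peripheral measurements (1/3 and 2/3
            weights), CTDIvol accounts for pitch, and DLP scales CTDIvol by the
            scanned length."""
            ctdi_w = ctdi_center_mgy / 3.0 + 2.0 * ctdi_periphery_mgy / 3.0
            ctdi_vol = ctdi_w / pitch
            dlp = ctdi_vol * scan_length_cm            # dose-length product, mGy*cm
            return ctdi_w, ctdi_vol, dlp

        # Example: 12 mGy at the centre, 14 mGy at the periphery, pitch 1.0, 40 cm scan
        print(ct_dose_descriptors(12.0, 14.0, 1.0, 40.0))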

  2. High dynamic range image compression by optimizing tone mapped image quality index.

    PubMed

    Ma, Kede; Yeganeh, Hojatollah; Zeng, Kai; Wang, Zhou

    2015-10-01

    Tone mapping operators (TMOs) aim to compress high dynamic range (HDR) images to low dynamic range (LDR) ones so as to visualize HDR images on standard displays. Most existing TMOs were demonstrated on specific examples without being thoroughly evaluated using well-designed and subject-validated image quality assessment models. A recently proposed tone mapped image quality index (TMQI) made one of the first attempts on objective quality assessment of tone mapped images. Here, we propose a substantially different approach to design TMO. Instead of using any predefined systematic computational structure for tone mapping (such as analytic image transformations and/or explicit contrast/edge enhancement), we directly navigate in the space of all images, searching for the image that optimizes an improved TMQI. In particular, we first improve the two building blocks in TMQI—structural fidelity and statistical naturalness components—leading to a TMQI-II metric. We then propose an iterative algorithm that alternatively improves the structural fidelity and statistical naturalness of the resulting image. Numerical and subjective experiments demonstrate that the proposed algorithm consistently produces better quality tone mapped images even when the initial images of the iteration are created by the most competitive TMOs. Meanwhile, these results also validate the superiority of TMQI-II over TMQI.

  3. Nutrients, Water Temperature, and Dissolved Oxygen: Are Water Quality Standards Achievable for Forest Streams?

    NASA Astrophysics Data System (ADS)

    Ice, G. G.

    2002-12-01

    Water quality standards provide a performance measure for watershed managers. Three of the most important standards for rivers and streams are the key nutrients, nitrogen and phosphorus; water temperature; and dissolved oxygen. The concentration of nitrogen and phosphorus in waterbodies affects primary production and productivity. Too little nutrients and streams are sterile and unproductive. Too much and they are eutrophic. Water temperature is important because it influences chemical reaction rates in streams and metabolic rates in fish. Dissolved oxygen is necessary for respiration. Salmon, the focus of much of the conservation efforts in the Northwest, are known as organisms that require cool, highly oxygenated water to thrive. Still, it is important when setting a performance standard to determine if those standards are achievable. A survey of nutrient data for small forested streams has found that the ecoregion guidelines proposed by EPA are often unachievable, sometimes even for small, unmanaged reference watersheds. A pilot survey of water temperatures in Oregon wilderness areas and least impaired watersheds has found temperatures frequently exceed the state standards. While natural temperature exceedances are addressed in the water quality standards for Oregon for unmanaged watersheds, these temperatures for managed watersheds might be presumed to result from management activities, precipitating an expensive Total Maximum Daily Load (TMDL) assessment. Less is known about dissolved oxygen for small forest streams because work 20 years ago showed little risk of significant dissolved oxygen concentrations where shade was maintained near the stream and fine slash was kept out of the stream. However, work from the 1970's on intergravel dissolved oxygen also shows that stream with greater large woody debris (LWD) can have lower intergravel dissolved oxygen concentrations, presumably due to trapping of fine organic and inorganic materials. Efforts to add LWD to

  4. Visualization of Deformable Image Registration Quality using Local Image Dissimilarity.

    PubMed

    Schlachter, Matthias; Fechter, Tobias; Jurisic, Miro; Schimek-Jasch, Tanja; Oehlke, Oliver; Adebahr, Sonja; Birkfellner, Wolfgang; Nestle, Ursula; Bühler, Katja

    2016-04-29

    Deformable image registration (DIR) has the potential to improve modern radiotherapy in many aspects, including volume definition, treatment planning and image-guided adaptive radiotherapy. Studies have shown its possible clinical benefits. However, measuring DIR accuracy is difficult without known ground truth, but necessary before integration in the radiotherapy workflow. Visual assessment is an important step towards clinical acceptance. We propose a visualization framework which supports the exploration and the assessment of DIR accuracy. It offers different interaction and visualization features for exploration of candidate regions to simplify the process of visual assessment. The visualization is based on voxel-wise comparison of local image patches for which dissimilarity measures are computed and visualized to indicate locally the registration results. We performed an evaluation with three radiation oncologists to demonstrate the viability of our approach. In the evaluation, lung regions were rated by the participants with regards to their visual accuracy and compared to the registration error measured with expert defined landmarks. Regions rated as "accepted" had an average registration error of 1.8 mm, with the highest single landmark error being 3.3 mm. Additionally, survey results show that the proposed visualizations support a fast and intuitive investigation of DIR accuracy, and are suitable for finding even small errors.
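
    The framework's actual patch-based dissimilarity measures are not specified in the abstract; the sketch below substitutes one simple choice, the local mean absolute intensity difference, purely to illustrate how a voxel-wise dissimilarity map could be formed:

        import numpy as np
        from scipy.ndimage import uniform_filter

        def local_dissimilarity(fixed, warped_moving, patch_size=9):
            """Voxel-wise dissimilarity between the fixed image and the deformed moving image.

            Uses the mean absolute intensity difference within a local patch around
            each voxel as a simple stand-in for patch-based measures. High values
            flag regions where the registration should be inspected."""
            diff = np.abs(fixed.astype(float) - warped_moving.astype(float))
            return uniform_filter(diff, size=patch_size)

        # Toy 2-D example: a shifted copy produces high dissimilarity near structure
        rng = np.random.default_rng(2)
        fixed = rng.normal(size=(64, 64))
        moving = np.roll(fixed, shift=2, axis=0)
        dmap = local_dissimilarity(fixed, moving)
        print(dmap.shape, round(float(dmap.mean()), 3))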

  5. Visualization of Deformable Image Registration Quality Using Local Image Dissimilarity.

    PubMed

    Schlachter, Matthias; Fechter, Tobias; Jurisic, Miro; Schimek-Jasch, Tanja; Oehlke, Oliver; Adebahr, Sonja; Birkfellner, Wolfgang; Nestle, Ursula; Bühler, Katja

    2016-10-01

    Deformable image registration (DIR) has the potential to improve modern radiotherapy in many aspects, including volume definition, treatment planning and image-guided adaptive radiotherapy. Studies have shown its possible clinical benefits. However, measuring DIR accuracy is difficult without known ground truth, but necessary before integration in the radiotherapy workflow. Visual assessment is an important step towards clinical acceptance. We propose a visualization framework which supports the exploration and the assessment of DIR accuracy. It offers different interaction and visualization features for exploration of candidate regions to simplify the process of visual assessment. The visualization is based on voxel-wise comparison of local image patches for which dissimilarity measures are computed and visualized to indicate locally the registration results. We performed an evaluation with three radiation oncologists to demonstrate the viability of our approach. In the evaluation, lung regions were rated by the participants with regards to their visual accuracy and compared to the registration error measured with expert defined landmarks. Regions rated as "accepted" had an average registration error of 1.8 mm, with the highest single landmark error being 3.3 mm. Additionally, survey results show that the proposed visualizations support a fast and intuitive investigation of DIR accuracy, and are suitable for finding even small errors.

  6. Achieving High Contrast for Exoplanet Imaging with a Kalman Filter and Stroke Minimization

    NASA Astrophysics Data System (ADS)

    Eldorado Riggs, A. J.; Groff, T. D.; Kasdin, N. J.; Carlotti, A.; Vanderbei, R. J.

    2014-01-01

    High contrast imaging requires focal plane wavefront control and estimation to correct aberrations in an optical system; non-common path errors prevent the use of conventional estimation with a separate wavefront sensor. The High Contrast Imaging Laboratory (HCIL) at Princeton has led the development of several techniques for focal plane wavefront control and estimation. In recent years, we developed a Kalman filter for optimal wavefront estimation. Our Kalman filter algorithm is an improvement upon DM Diversity, which requires at least two image pairs each iteration and does not utilize any prior knowledge of the system. The Kalman filter is a recursive estimator, meaning that it uses the data from prior estimates along with as few as one new image pair per iteration to update the electric field estimate. Stroke minimization has proven to be a feasible controller for achieving high contrast. While similar to a variation of Electric Field Conjugation (EFC), stroke minimization achieves the same contrast with less stroke on the DMs. We recently utilized these algorithms to achieve high contrast for the first time in our experiment at the High Contrast Imaging Testbed (HCIT) at the Jet Propulsion Laboratory (JPL). Our HCIT experiment was also the first demonstration of symmetric dark hole correction in the image plane using two DMs; this is a major milestone for future space missions. Our ongoing work includes upgrading our optimal estimator to include an estimate of the incoherent light in the system, which allows for simultaneous estimation of the light from a planet along with starlight. The two-DM experiment at the HCIT utilized a shaped pupil coronagraph. Those tests utilized ripple-style, free-standing masks etched out of silicon, but our current work is in designing 2-D optimized reflective shaped pupils. In particular, we have created several designs for the AFTA telescope, whose pupil presents major hurdles because of its atypical pupil obstructions. Our

  7. Body image and quality of life in a Spanish population

    PubMed Central

    Lobera, Ignacio Jáuregui; Ríos, Patricia Bolaños

    2011-01-01

    Purpose The aim of the current study was to analyze the psychometric properties, factor structure, and internal consistency of the Spanish version of the Body Image Quality of Life Inventory (BIQLI-SP) as well as its test–retest reliability. Further objectives were to analyze different relationships with key dimensions of psychosocial functioning (ie, self-esteem, presence of psychopathological symptoms, eating and body image-related problems, and perceived stress) and to evaluate differences in body image quality of life due to gender. Patients and methods The sample comprised 417 students without any psychiatric history, recruited from the Pablo de Olavide University and the University of Seville. There were 140 men (33.57%) and 277 women (66.43%), and the mean age was 21.62 years (standard deviation = 5.12). After obtaining informed consent from all participants, the following questionnaires were administered: BIQLI, Eating Disorder Inventory-2 (EDI-2), Perceived Stress Questionnaire (PSQ), Self-Esteem Scale (SES), and Symptom Checklist-90-Revised (SCL-90-R). Results The BIQLI-SP shows adequate psychometric properties, and it may be useful to determine the body image quality of life in different physical conditions. A more positive body image quality of life is associated with better self-esteem, better psychological wellbeing, and fewer eating-related dysfunctional attitudes, this being more evident among women. Conclusion The BIQLI-SP may be useful to determine the body image quality of life in different contexts with regard to dermatology, cosmetic and reconstructive surgery, and endocrinology, among others. In these fields of study, a new trend has emerged to assess body image-related quality of life. PMID:21403794

  8. Teacher-student relationship quality type in elementary grades: Effects on trajectories for achievement and engagement.

    PubMed

    Wu, Jiun-Yu; Hughes, Jan N; Kwok, Oi-Man

    2010-10-01

    Teacher, peer, and student reports of the quality of the teacher-student relationship were obtained for an ethnically diverse and academically at-risk sample of 706 second- and third-grade students. Cluster analysis identified four types of relationships based on the consistency of child reports of support and conflict in the relationship with reports of others: Congruent Positive, Congruent Negative, Incongruent Child Negative, and Incongruent Child Positive. The cluster solution evidenced good internal consistency and construct validity. Group membership predicted growth trajectories for teacher-rated engagement and standardized achievement scores over the following three years, above prior performance. The predictive associations between child reports of teacher support and conflict and the measured outcomes depended on whether child reports were consistent or inconsistent with reports of others. Study findings have implications for theory development, assessment of teacher-student relationships, and teacher professional development.

  9. Charting the course for home health care quality: action steps for achieving sustainable improvement: conference proceedings.

    PubMed

    Feldman, Penny Hollander; Peterson, Laura E; Reische, Laurie; Bruno, Lori; Clark, Amy

    2004-12-01

    On June 30 and July 1, 2003, the first national meeting Charting the Course for Home Health Care Quality: Action Steps for Achieving Sustainable Improvement convened in New York City. The Center for Home Care Policy & Research of the Visiting Nurse Service of New York (VNSNY) hosted the meeting with support from the Robert Wood Johnson Foundation. Fifty-seven attendees from throughout the United States participated. The participants included senior leaders and managers and nurses working directly in home care today. The meeting's objectives were to: 1. foster dialogue among key constituents influencing patient safety and home care, 2. promote information-sharing across sectors and identify areas where more information is needed, and, 3. develop an agenda and strategy for moving forward. This article reports the meeting's proceedings.

  10. TH-C-18A-06: Combined CT Image Quality and Radiation Dose Monitoring Program Based On Patient Data to Assess Consistency of Clinical Imaging Across Scanner Models

    SciTech Connect

    Christianson, O; Winslow, J; Samei, E

    2014-06-15

    Purpose: One of the principal challenges of clinical imaging is to achieve an ideal balance between image quality and radiation dose across multiple CT models. The number of scanners and protocols at large medical centers necessitates an automated quality assurance program to facilitate this objective. Therefore, the goal of this work was to implement an automated CT image quality and radiation dose monitoring program based on actual patient data and to use this program to assess consistency of protocols across CT scanner models. Methods: Patient CT scans are routed to a HIPAA-compliant quality assurance server. CTDI, extracted using optical character recognition, and patient size, measured from the localizers, are used to calculate SSDE. A previously validated noise measurement algorithm determines the noise in uniform areas of the image across the scanned anatomy to generate a global noise level (GNL). Using this program, 2358 abdominopelvic scans acquired on three commercial CT scanners were analyzed. Median SSDE and GNL were compared across scanner models, and trends in SSDE and GNL with patient size were used to determine the impact of differing automatic exposure control (AEC) algorithms. Results: There was a significant difference in both SSDE and GNL across scanner models (9–33% and 15–35% for SSDE and GNL, respectively). Adjusting all protocols to achieve the same image noise would reduce patient dose by 27–45% depending on scanner model. Additionally, differences in AEC methodologies across vendors resulted in disparate relationships of SSDE and GNL with patient size. Conclusion: The difference in noise across scanner models indicates that protocols are not optimally matched to achieve consistent image quality. Our results indicated substantial possibility for dose reduction while achieving more consistent image appearance. Finally, the difference in AEC methodologies suggests the need for size-specific CT protocols to minimize variability in image
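
    The noise measurement algorithm itself is only summarized in the abstract; the sketch below approximates a global noise level as the mode of local standard deviations within an assumed soft-tissue HU window, which may differ from the published method:

        import numpy as np
        from scipy.ndimage import generic_filter

        def global_noise_level(ct_slice_hu, soft_tissue_hu=(0, 100), kernel=5):
            """Approximate global noise level (GNL) of a CT slice.

            Computes the local standard deviation in a small neighbourhood around
            every voxel whose HU value lies in a soft-tissue window, then takes the
            mode of the resulting histogram as the slice noise estimate."""
            local_sd = generic_filter(ct_slice_hu.astype(float), np.std, size=kernel)
            mask = (ct_slice_hu >= soft_tissue_hu[0]) & (ct_slice_hu <= soft_tissue_hu[1])
            counts, edges = np.histogram(local_sd[mask], bins=100)
            peak = np.argmax(counts)
            return 0.5 * (edges[peak] + edges[peak + 1])

        # Toy example: uniform soft-tissue phantom with Gaussian noise of sigma = 12 HU
        rng = np.random.default_rng(3)
        phantom = 50 + rng.normal(0, 12, size=(128, 128))
        print(round(global_noise_level(phantom), 1), "HU")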

  11. Taking advantage of ground data systems attributes to achieve quality results in testing software

    NASA Technical Reports Server (NTRS)

    Sigman, Clayton B.; Koslosky, John T.; Hageman, Barbara H.

    1994-01-01

    During the software development life cycle process, basic testing starts with the development team. At the end of the development process, an acceptance test is performed for the user to ensure that the deliverable is acceptable. Ideally, the delivery is an operational product with zero defects. However, the goal of zero defects is normally not achieved but is successful to various degrees. With the emphasis on building low cost ground support systems while maintaining a quality product, a key element in the test process is simulator capability. This paper reviews the Transportable Payload Operations Control Center (TPOCC) Advanced Spacecraft Simulator (TASS) test tool that is used in the acceptance test process for unmanned satellite operations control centers. The TASS is designed to support the development, test and operational environments of the Goddard Space Flight Center (GSFC) operations control centers. The TASS uses the same basic architecture as the operations control center. This architecture is characterized by its use of distributed processing, industry standards, commercial off-the-shelf (COTS) hardware and software components, and reusable software. The TASS uses much of the same TPOCC architecture and reusable software that the operations control center developer uses. The TASS also makes use of reusable simulator software in the mission specific versions of the TASS. Very little new software needs to be developed, mainly mission specific telemetry communication and command processing software. By taking advantage of the ground data system attributes, successful software reuse for operational systems provides the opportunity to extend the reuse concept into the test area. Consistency in test approach is a major step in achieving quality results.

  12. An approach for quantitative image quality analysis for CT

    NASA Astrophysics Data System (ADS)

    Rahimi, Amir; Cochran, Joe; Mooney, Doug; Regensburger, Joe

    2016-03-01

    An objective and standardized approach to assess image quality of Computed Tomography (CT) systems is required in a wide variety of imaging processes to identify CT systems appropriate for a given application. We present an overview of the framework we have developed to help standardize and to objectively assess CT image quality for different models of CT scanners used for security applications. Within this framework, we have developed methods to quantitatively measure metrics that should correlate with feature identification, detection accuracy and precision, and image registration capabilities of CT machines and to identify strengths and weaknesses in different CT imaging technologies in transportation security. To that end we have designed, developed and constructed phantoms that allow for systematic and repeatable measurements of roughly 88 image quality metrics, representing modulation transfer function, noise equivalent quanta, noise power spectra, slice sensitivity profiles, streak artifacts, CT number uniformity, CT number consistency, object length accuracy, CT number path length consistency, and object registration. Furthermore, we have developed a sophisticated MATLAB-based image analysis tool kit to analyze CT-generated images of phantoms and report these metrics in a format that is standardized across the considered models of CT scanners, allowing for comparative image quality analysis within a CT model or between different CT models. In addition, we have developed a modified sparse principal component analysis (SPCA) method to generate a modified set of PCA components as compared to the standard principal component analysis (PCA) with sparse loadings, in conjunction with the Hotelling T2 statistical analysis method to compare, qualify, and detect faults in the tested systems.
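
    Of the listed metrics, the noise power spectrum is straightforward to illustrate; a minimal sketch (assuming a stack of uniform-region ROIs and square pixels) follows, without reproducing the framework's phantoms or MATLAB toolkit:

        import numpy as np

        def noise_power_spectrum(rois, pixel_size_mm):
            """2-D noise power spectrum (NPS) from a stack of uniform-region ROIs.

            Each ROI is detrended by subtracting its mean, Fourier transformed, and
            the squared magnitudes are averaged and scaled by pixel area over ROI size."""
            rois = np.asarray(rois, dtype=float)
            n, ny, nx = rois.shape
            detrended = rois - rois.mean(axis=(1, 2), keepdims=True)
            dft2 = np.abs(np.fft.fft2(detrended)) ** 2
            nps = dft2.mean(axis=0) * (pixel_size_mm ** 2) / (nx * ny)
            return np.fft.fftshift(nps)

        # Toy example: 20 noise-only ROIs of 64 x 64 pixels at 0.5 mm pixel size
        rng = np.random.default_rng(4)
        rois = rng.normal(0, 10, size=(20, 64, 64))
        nps = noise_power_spectrum(rois, pixel_size_mm=0.5)
        print(nps.shape, round(float(nps.mean()), 2))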

  13. Real-time computer treatment of THz passive device images with the high image quality

    NASA Astrophysics Data System (ADS)

    Trofimov, Vyacheslav A.; Trofimov, Vladislav V.

    2012-06-01

    We demonstrate a real-time computer code that significantly improves the quality of images captured by a passive THz imaging system. The code is not designed only for passive THz devices: it can be applied to any such device, as well as to active THz imaging systems. We applied our code to computer processing of images captured by four passive THz imaging devices manufactured by different companies. It should be stressed that computer processing of images produced by different companies usually requires different spatial filters. The performance of the current version of the computer code is greater than one image per second for a THz image having more than 5000 pixels and 24-bit number representation. Processing of a single THz image produces about 20 images simultaneously, corresponding to various spatial filters. The computer code allows increasing the number of pixels of processed images without noticeable reduction of image quality. The performance of the computer code can be increased many times using parallel algorithms for processing the image. We developed original spatial filters which allow one to see objects with sizes of less than 2 cm. The imagery is produced by passive THz imaging devices which capture images of objects hidden under opaque clothes. For images with high noise we developed an approach that suppresses the noise through computer processing and yields a good quality image. To illustrate the efficiency of the developed approach we demonstrate the detection of a liquid explosive, an ordinary explosive, a knife, a pistol, a metal plate, a CD, ceramics, chocolate and other objects hidden under opaque clothes. The results demonstrate the high efficiency of our approach for the detection of hidden objects, and it is a very promising solution for the security problem.

  14. Investigation of perceptual attributes for mobile display image quality

    NASA Astrophysics Data System (ADS)

    Gong, Rui; Xu, Haisong; Wang, Qing; Wang, Zhehong; Li, Haifeng

    2013-08-01

    Large-scale psychophysical experiments are carried out on two types of mobile displays to evaluate the perceived image quality (IQ). Eight perceptual attributes, i.e., naturalness, colorfulness, brightness, contrast, sharpness, clearness, preference, and overall IQ, are visually assessed via categorical judgment method for various application types of test images, which were manipulated by different methods. Their correlations are deeply discussed, and further factor analysis revealed the two essential components to describe the overall IQ, i.e., the component of image detail aspect and the component of color information aspect. Clearness and naturalness are regarded as two principal factors for natural scene images, whereas clearness and colorfulness were selected as key attributes affecting the overall IQ for other application types of images. Accordingly, based on these selected attributes, two kinds of empirical models are built to predict the overall IQ of mobile displays for different application types of images.

  15. Exploratory survey of image quality on CR digital mammography imaging systems in Mexico.

    PubMed

    Gaona, E; Rivera, T; Arreola, M; Franco, J; Molina, N; Alvarez, B; Azorín, C G; Casian, G

    2014-01-01

    The purpose of this study was to assess the current status of image quality and dose in computed radiographic digital mammography (CRDM) systems. The study included CRDM systems of various models and manufacturers for which dose and image quality comparisons were performed. Owing to the recent rise in the use of digital radiographic systems in Mexico, CRDM systems are rapidly replacing conventional film-screen systems without any regard to quality control or image quality standards. The study was conducted in 65 mammography facilities which use CRDM systems in Mexico City and the surrounding States. The systems were tested as used clinically. This means that the dose and beam qualities were selected using the automatic beam selection and photo-timed features. All systems surveyed generate laser film hardcopies for the radiologist to read on a scope or mammographic high-luminance light box. It was found that 51 of the CRDM systems presented a variety of image artefacts and non-uniformities arising from inadequate acquisition and processing, as well as from the laser printer itself. Undisciplined alteration of image processing settings by the technologist was found to be a serious prevalent problem in 42 facilities. Only four of them had an image QC program that is periodically monitored by a medical physicist. The Average Glandular Dose (AGD) in the surveyed systems was estimated to have a mean value of 2.4 mGy. New legislation is required to improve image quality in mammography and to make screening mammography more effective for the early detection of breast cancer.

  16. Advanced imaging assessment of bone quality.

    PubMed

    Genant, Harry K; Jiang, Yebin

    2006-04-01

    Noninvasive and/or nondestructive techniques can provide structural information about bone, beyond simple bone densitometry. While the latter provides important information about osteoporotic fracture risk, many studies indicate that bone mineral density (BMD) only partly explains bone strength. Quantitative assessment of macrostructural characteristics, such as geometry, and microstructural features, such as relative trabecular volume, trabecular spacing, and connectivity, may improve our ability to estimate bone strength. Methods for quantitatively assessing macrostructure include (besides conventional radiographs) dual X ray absorptiometry (DXA) and computed tomography (CT), particularly volumetric quantitative computed tomography (vQCT). Methods for assessing microstructure of trabecular bone noninvasively and/or nondestructively include high-resolution computed tomography (hrCT), microcomputed tomography (micro-CT), high-resolution magnetic resonance (hrMR), and micromagnetic resonance (micro-MR). vQCT, hrCT, and hrMR are generally applicable in vivo; micro-CT and micro-MR are principally applicable in vitro. Despite progress, problems remain. The important balances between spatial resolution and sampling size, or between signal-to-noise and radiation dose or acquisition time, need further consideration, as do the complexity and expense of the methods versus their availability and accessibility. Clinically, the challenges for bone imaging include balancing the advantages of simple bone densitometry versus the more complex architectural features of bone, or the deeper research requirements versus the broader clinical needs. The biological differences between the peripheral appendicular skeleton and the central axial skeleton must be further addressed. Finally, the relative merits of these sophisticated imaging techniques must be weighed with respect to their applications as diagnostic procedures, requiring high accuracy or reliability, versus their monitoring

  17. LANDSAT 4 image data quality analysis

    NASA Technical Reports Server (NTRS)

    Anuta, P. E.

    1983-01-01

    A comparative analysis of TM and MSS data was completed and the results indicate that there are half as many separable spectral classes in the MSS data as in the TM data. In addition, the minimum separability between classes was also much less in the MSS data. Radiometric data quality was also investigated for the TM by computing power spectrum estimates for dark-level data from Lake Michigan. Two significant coherent noise frequencies were observed, one with a wavelength of 3.12 pixels and the other with a 17 pixel wavelength. The amplitude was small (nominally 0.6 digital counts standard deviation) and the noise appears primarily in Bands 3 and 4. No significant levels were observed in other bands. Scan angle dependent brightness effects were also evaluated.

  18. Image quality of mixed convolution kernel in thoracic computed tomography.

    PubMed

    Neubauer, Jakob; Spira, Eva Maria; Strube, Juliane; Langer, Mathias; Voss, Christian; Kotter, Elmar

    2016-11-01

    The mixed convolution kernel adapts its properties locally according to the depicted organ structure, especially for the lung. Therefore, we compared the image quality of the mixed convolution kernel to standard soft and hard kernel reconstructions for different organ structures in thoracic computed tomography (CT) images. Our Ethics Committee approved this prospective study. In total, 31 patients who underwent contrast-enhanced thoracic CT studies were included after informed consent. Axial reconstructions were performed with hard, soft, and mixed convolution kernels. Three independent and blinded observers rated the image quality according to the European Guidelines for Quality Criteria of Thoracic CT for 13 organ structures. The observers rated the depiction of the structures in all reconstructions on a 5-point Likert scale. Statistical analysis was performed with the Friedman test and post hoc analysis with the Wilcoxon rank-sum test. Compared to the soft convolution kernel, the mixed convolution kernel was rated with a higher image quality for lung parenchyma, segmental bronchi, and the border between the pleura and the thoracic wall (P < 0.03). Compared to the hard convolution kernel, the mixed convolution kernel was rated with a higher image quality for the aorta, anterior mediastinal structures, paratracheal soft tissue, hilar lymph nodes, esophagus, pleuromediastinal border, large and medium sized pulmonary vessels and abdomen (P < 0.004) but a lower image quality for the trachea, segmental bronchi, lung parenchyma, and skeleton (P < 0.001). The mixed convolution kernel cannot fully substitute for the standard CT reconstructions. Hard and soft convolution kernel reconstructions still seem to be mandatory for thoracic CT.

  19. Effects of sparse sampling schemes on image quality in low-dose CT

    SciTech Connect

    Abbas, Sajid; Lee, Taewon; Cho, Seungryong; Shin, Sukyoung; Lee, Rena

    2013-11-15

    Purpose: Various scanning methods and image reconstruction algorithms are actively investigated for low-dose computed tomography (CT) that can potentially reduce the health risk related to radiation dose. Particularly, compressive-sensing (CS) based algorithms have been successfully developed for reconstructing images from sparsely sampled data. Although these algorithms have shown promise in low-dose CT, it has not been studied how sparse sampling schemes affect image quality in CS-based image reconstruction. In this work, the authors present several sparse-sampling schemes for low-dose CT, quantitatively analyze their data properties, and compare the effects of the sampling schemes on image quality. Methods: Data properties of several sampling schemes are analyzed with respect to the CS-based image reconstruction using two measures: sampling density and data incoherence. The authors present five different sparse sampling schemes, and simulated those schemes to achieve a targeted dose reduction. Dose reduction factors of about 75% and 87.5%, compared to a conventional scan, were tested. A fully sampled circular cone-beam CT data set was used as a reference, and sparse sampling was realized numerically based on the CBCT data. Results: It is found that both sampling density and data incoherence affect the image quality in the CS-based reconstruction. Among the sampling schemes the authors investigated, the sparse-view, many-view undersampling (MVUS)-fine, and MVUS-moving cases have shown promising results. These sampling schemes produced images with similar image quality compared to the reference image, and their structural similarity index values were higher than 0.92 in the mouse head scan with 75% dose reduction. Conclusions: The authors found that in CS-based image reconstructions both sampling density and data incoherence affect the image quality, and suggest that a sampling scheme should be devised and optimized by use of these indicators. With this strategic
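
    The record above reports structural similarity index values above 0.92 against a fully sampled reference. A minimal sketch of that kind of comparison, assuming scikit-image and hypothetical file names for the reconstructions (this is not the authors' code):

```python
# Minimal sketch: compare a sparse-sampling reconstruction with a fully
# sampled reference using SSIM. File names are hypothetical placeholders.
import numpy as np
from skimage.metrics import structural_similarity

reference = np.load("full_sampling_recon.npy")   # hypothetical reference slice
sparse    = np.load("sparse_view_recon.npy")     # hypothetical CS-based reconstruction

data_range = reference.max() - reference.min()
ssim_value = structural_similarity(sparse, reference, data_range=data_range)
print(f"SSIM vs. fully sampled reference: {ssim_value:.3f}")
```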

  20. Determination of pasture quality using airborne hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Pullanagari, R. R.; Kereszturi, G.; Yule, Ian J.; Irwin, M. E.

    2015-10-01

    Pasture quality is a critical determinant which influences animal performance (live weight gain, milk and meat production) and animal health. Assessment of pasture quality is therefore required to assist farmers with grazing planning and management, and benchmarking between seasons and years. Traditionally, pasture quality is determined by field sampling, which is laborious, expensive and time consuming, and the information is not available in real time. Hyperspectral remote sensing has the potential to accurately quantify the biochemical composition of pasture over wide areas in great spatial detail. In this study an airborne imaging spectrometer (AisaFENIX, Specim) was used, with a spectral range of 380-2500 nm and 448 spectral bands. A case study of a 600 ha hill country farm in New Zealand is used to illustrate the use of the system. Radiometric and atmospheric corrections, along with automated georectification of the imagery using a Digital Elevation Model (DEM), were applied to the raw images to convert them into geocoded reflectance images. Then a multivariate statistical method, partial least squares (PLS) regression, was applied to estimate pasture quality attributes such as crude protein (CP) and metabolisable energy (ME) from canopy reflectance. The results from this study revealed that estimates of CP and ME had an R2 of 0.77 and 0.79, and an RMSECV of 2.97 and 0.81, respectively. By utilizing these regression models, spatial maps were created over the imaged area. These pasture quality maps can be used for adopting precision agriculture practices, which improve farm profitability and environmental sustainability.
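
    A minimal sketch of the kind of PLS calibration described above, predicting CP and ME from canopy reflectance with cross-validated R2 and RMSECV; the array files, number of latent variables and use of scikit-learn are assumptions, not details from the record:

```python
# Minimal sketch: cross-validated PLS regression of CP and ME on reflectance spectra.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import r2_score, mean_squared_error

X = np.load("canopy_reflectance.npy")   # hypothetical (n_samples, 448 bands)
Y = np.load("lab_cp_me.npy")            # hypothetical (n_samples, 2): CP, ME

pls = PLSRegression(n_components=10)    # number of latent variables is an assumption
Y_cv = cross_val_predict(pls, X, Y, cv=10)

for i, name in enumerate(["CP", "ME"]):
    r2 = r2_score(Y[:, i], Y_cv[:, i])
    rmsecv = np.sqrt(mean_squared_error(Y[:, i], Y_cv[:, i]))
    print(f"{name}: R2 = {r2:.2f}, RMSECV = {rmsecv:.2f}")
```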

  1. Presence capture cameras - a new challenge to the image quality

    NASA Astrophysics Data System (ADS)

    Peltoketo, Veli-Tapani

    2016-04-01

    Commercial presence capture cameras are coming to market, and a new era of visual entertainment is starting to take shape. Since true presence capturing is still a very new technology, the technical solutions have only just passed the prototyping phase and vary considerably. Presence capture cameras still face the same quality issues as earlier generations of digital imaging, but also numerous new ones. This work concentrates on the quality challenges of presence capture cameras. A camera system that can record 3D audio-visual reality as it is must have several camera modules, several microphones and, in particular, technology that can synchronize the output of several sources into a seamless and smooth virtual reality experience. Several traditional quality features remain valid for presence capture cameras. Features like color fidelity, noise removal, resolution and dynamic range form the basis of virtual reality stream quality. However, the co-operation of several cameras adds a new dimension to these quality factors, and new quality features can be validated; for example, how should the camera streams be stitched together into a 3D experience without noticeable errors, and how should the stitching be validated? The work describes the quality factors that remain valid for presence capture cameras and assesses their importance. Moreover, new challenges of presence capture cameras are investigated from an image and video quality point of view. The work also considers how well current measurement methods can be applied to presence capture cameras.

  2. Nanoscopy—imaging life at the nanoscale: a Nobel Prize achievement with a bright future

    NASA Astrophysics Data System (ADS)

    Blom, Hans; Bates, Mark

    2015-10-01

    A grand scientific prize was awarded last year to three pioneering scientists for their discovery and development of molecular ‘ON-OFF’ switching which, when combined with optical imaging, can be used to see the previously invisible with light microscopy. The Royal Swedish Academy of Sciences announced its decision on October 8th and explained that this achievement, rooted in physics and applied in biology and medicine, was awarded the Nobel Prize in Chemistry for controlling fluorescent molecules to create images of specimens smaller than anything previously observed with light. The story of how this noble switch in optical microscopy was achieved and how it was engineered to visualize life at the nanoscale is highlighted in this invited comment.

  3. Radiation dose and image quality for paediatric interventional cardiology.

    PubMed

    Vano, E; Ubeda, C; Leyton, F; Miranda, P

    2008-08-07

    Radiation dose and image quality for paediatric protocols in a biplane x-ray system used for interventional cardiology have been evaluated. Entrance surface air kerma (ESAK) and image quality using a test object and polymethyl methacrylate (PMMA) phantoms have been measured for the typical paediatric patient thicknesses (4-20 cm of PMMA). Images from fluoroscopy (low, medium and high) and cine modes have been archived in digital imaging and communications in medicine (DICOM) format. Signal-to-noise ratio (SNR), figure of merit (FOM), contrast (CO), contrast-to-noise ratio (CNR) and high contrast spatial resolution (HCSR) have been computed from the images. Data on dose transferred to the DICOM header have been used to test the values of the dosimetric display at the interventional reference point. ESAK for fluoroscopy modes ranges from 0.15 to 36.60 microGy/frame when moving from 4 to 20 cm PMMA. For cine, these values range from 2.80 to 161.10 microGy/frame. SNR, FOM, CO, CNR and HCSR are improved for high fluoroscopy and cine modes and maintained roughly constant for the different thicknesses. The cumulative dose at the interventional reference point was 25-45% higher than the skin dose for the vertical C-arm (depending on the phantom thickness). ESAK and numerical image quality parameters allow verification of the proper setting of the x-ray system. Knowing the increases in dose per frame with increasing phantom thickness, together with the image quality parameters, will help cardiologists manage patient dose and select the best imaging acquisition mode during clinical procedures.
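
    A minimal sketch of how SNR, contrast (CO) and CNR can be computed from regions of interest in an archived DICOM frame, in the spirit of the record above; the file name and ROI coordinates are hypothetical and this is not the authors' implementation:

```python
# Minimal sketch: ROI-based SNR, contrast and CNR from a DICOM frame.
import numpy as np
import pydicom

frame = pydicom.dcmread("cine_frame.dcm").pixel_array.astype(float)  # hypothetical file

signal_roi     = frame[100:140, 100:140]   # hypothetical ROI over a test-object detail
background_roi = frame[200:240, 200:240]   # hypothetical ROI over uniform background

mean_s, mean_b = signal_roi.mean(), background_roi.mean()
sd_b = background_roi.std()

snr = mean_b / sd_b                       # background signal-to-noise ratio
co  = abs(mean_s - mean_b) / mean_b       # contrast
cnr = abs(mean_s - mean_b) / sd_b         # contrast-to-noise ratio
print(f"SNR={snr:.1f}  CO={co:.3f}  CNR={cnr:.1f}")
```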

  4. Radiation dose and image quality for paediatric interventional cardiology

    NASA Astrophysics Data System (ADS)

    Vano, E.; Ubeda, C.; Leyton, F.; Miranda, P.

    2008-08-01

    Radiation dose and image quality for paediatric protocols in a biplane x-ray system used for interventional cardiology have been evaluated. Entrance surface air kerma (ESAK) and image quality using a test object and polymethyl methacrylate (PMMA) phantoms have been measured for the typical paediatric patient thicknesses (4-20 cm of PMMA). Images from fluoroscopy (low, medium and high) and cine modes have been archived in digital imaging and communications in medicine (DICOM) format. Signal-to-noise ratio (SNR), figure of merit (FOM), contrast (CO), contrast-to-noise ratio (CNR) and high contrast spatial resolution (HCSR) have been computed from the images. Data on dose transferred to the DICOM header have been used to test the values of the dosimetric display at the interventional reference point. ESAK for fluoroscopy modes ranges from 0.15 to 36.60 µGy/frame when moving from 4 to 20 cm PMMA. For cine, these values range from 2.80 to 161.10 µGy/frame. SNR, FOM, CO, CNR and HCSR are improved for high fluoroscopy and cine modes and maintained roughly constant for the different thicknesses. The cumulative dose at the interventional reference point was 25-45% higher than the skin dose for the vertical C-arm (depending on the phantom thickness). ESAK and numerical image quality parameters allow verification of the proper setting of the x-ray system. Knowing the increases in dose per frame with increasing phantom thickness, together with the image quality parameters, will help cardiologists manage patient dose and select the best imaging acquisition mode during clinical procedures.

  5. Dose and diagnostic image quality in digital tomosynthesis imaging of facial bones in pediatrics

    NASA Astrophysics Data System (ADS)

    King, J. M.; Hickling, S.; Elbakri, I. A.; Reed, M.; Wrogemann, J.

    2011-03-01

    The purpose of this study was to evaluate the use of digital tomosynthesis (DT) for pediatric facial bone imaging. We compared the eye lens dose and diagnostic image quality of DT facial bone exams relative to digital radiography (DR) and computed tomography (CT), and investigated whether we could modify our current DT imaging protocol to reduce patient dose while maintaining sufficient diagnostic image quality. We measured the dose to the eye lens for all three modalities using high-sensitivity thermoluminescent dosimeters (TLDs) and an anthropomorphic skull phantom. To assess the diagnostic image quality of DT compared to the corresponding DR and CT images, we performed an observer study where the visibility of anatomical structures in the DT phantom images was rated on a four-point scale. We then acquired DT images at lower doses and had radiologists indicate whether the visibility of each structure was adequate for diagnostic purposes. For typical facial bone exams, we measured eye lens doses of 0.1-0.4 mGy for DR, 0.3-3.7 mGy for DT, and 26 mGy for CT. In general, facial bone structures were visualized better with DT than with DR, and the majority of structures were visualized well enough to avoid the need for CT. DT imaging provides high quality diagnostic images of the facial bones while delivering significantly lower doses to the lens of the eye compared to CT. In addition, we found that by adjusting the imaging parameters, the DT effective dose can be reduced by up to 50% while maintaining sufficient image quality.

  6. Are image quality metrics adequate to evaluate the quality of geometric objects?

    NASA Astrophysics Data System (ADS)

    Rogowitz, Bernice E.; Rushmeier, Holly E.

    2001-06-01

    Geometric objects are often represented by many millions of triangles or polygons, which limits the ease with which they can be transmitted and displayed electronically. This has led to the development of many algorithms for simplifying geometric models, and to the recognition that metrics are required to evaluate their success. The goal is to create computer graphic renderings of the object that do not appear degraded to a human observer. The perceptual evaluation of simplified objects is a new topic. One approach has been to use image-based metrics to predict the perceived degradation of simplified 3D models. Since 2D images of 3D objects can have significantly different perceived quality depending on the direction of the illumination, 2D measures of image quality may not adequately capture the perceived quality of 3D objects. To address this question, we conducted experiments in which we explicitly compared the perceived quality of animated 3D objects and their corresponding 2D still image projections. Our results suggest that 2D judgements do not provide a good predictor of 3D image quality, and identify a need to develop 'object quality metrics.'

  7. Achieving thermography with a thermal security camera using uncooled amorphous silicon microbolometer image sensors

    NASA Astrophysics Data System (ADS)

    Wang, Yu-Wei; Tesdahl, Curtis; Owens, Jim; Dorn, David

    2012-06-01

    Advancements in uncooled microbolometer technology over the last several years have opened up many commercial applications which had previously been cost prohibitive. Thermal technology is no longer limited to the military and government market segments. One type of thermal sensor with low NETD which is available in the commercial market segment is the uncooled amorphous silicon (α-Si) microbolometer image sensor. Typical thermal security cameras focus on providing the best image quality by auto tonemapping (contrast enhancing) the image, which provides the best contrast depending on the temperature range of the scene. While this may provide enough information to detect objects and activities, there are further benefits to being able to estimate the actual object temperatures in a scene. This thermographic ability can provide functionality beyond typical security cameras by making it possible to monitor processes. Example applications of thermography[2] with a thermal camera include: monitoring electrical circuits, industrial machinery, building thermal leaks, oil/gas pipelines, power substations, etc. [3][5] This paper discusses the methodology of estimating object temperatures by characterizing/calibrating different components inside a thermal camera utilizing an uncooled amorphous silicon microbolometer image sensor. Plots of system performance across camera operating temperatures will be shown.
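
    As a rough illustration of the count-to-temperature calibration idea described above (not the paper's actual procedure), one could fit a simple polynomial to blackbody reference measurements; the calibration points and polynomial order below are assumptions:

```python
# Minimal sketch: map raw microbolometer counts to estimated temperature using a
# polynomial fitted to hypothetical blackbody calibration measurements.
import numpy as np

bb_temps_c = np.array([0.0, 20.0, 40.0, 60.0, 80.0, 100.0])      # known blackbody temperatures (°C)
bb_counts  = np.array([7200, 7900, 8650, 9450, 10300, 11200])     # hypothetical sensor counts

coeffs = np.polyfit(bb_counts, bb_temps_c, deg=2)   # counts -> temperature model

def counts_to_temperature(counts):
    """Estimate object temperature (°C) from raw sensor counts."""
    return np.polyval(coeffs, counts)

print(counts_to_temperature(np.array([8000, 9000, 10000])))
```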

  8. Paediatric x-ray radiation dose reduction and image quality analysis.

    PubMed

    Martin, L; Ruddlesden, R; Makepeace, C; Robinson, L; Mistry, T; Starritt, H

    2013-09-01

    Collaboration of multiple staff groups has resulted in significant reduction in the risk of radiation-induced cancer from radiographic x-ray exposure during childhood. In this study at an acute NHS hospital trust, a preliminary audit identified initial exposure factors. These were compared with European and UK guidance, leading to the introduction of new factors that were in compliance with European guidance on x-ray tube potentials. Image quality was assessed using standard anatomical criteria scoring, and visual grading characteristics analysis assessed the impact on image quality of changes in exposure factors. This analysis determined the acceptability of gradual radiation dose reduction below the European and UK guidance levels. Chest and pelvis exposures were optimised, achieving dose reduction for each age group, with 7%-55% decrease in critical organ dose. Clinicians confirmed diagnostic image quality throughout the iterative process. Analysis of images acquired with preliminary and final exposure factors indicated an average visual grading analysis result of 0.5, demonstrating equivalent image quality. The optimisation process and final radiation doses are reported for Carestream computed radiography to aid other hospitals in minimising radiation risks to children.

  9. Patient doses and image quality in digital chest radiology.

    PubMed

    Salát, D; Nikodemová, D

    2008-01-01

    Chest X-ray examination is one of the most frequently required procedures used in clinical practice. For studying the image quality of different digital X-ray systems and for the control of patient doses during chest radiological examinations, the standard anthropomorphic lung/chest phantom RSD 330 has been used and exposed in different digital modalities available in Slovakia. To compare different techniques of chest examination, special software has been developed that enables researchers to compare DICOM (digital imaging and communications in medicine) header images from different digital modalities, using a special viewer. In this paper, this software has been used for an anonymous correspondent audit testing image quality evaluation by comparing various parameters of chest imaging, evaluated by 84 Slovak radiologists. The results of the comparison have shown that the majority of the participating radiologists felt that the highest image quality is reached with a flat panel, assessed by the entrance surface dose value, which is approximately 75% lower than the diagnostic reference level for chest examination given in the Slovak legislation. Besides the results of the audit, the possibilities of using the software for optimisation, education and training of medical students, radiological assistants, physicists and radiologists in the field of digital radiology will be described.

  10. Evaluation of image quality and dose on a flat-panel CT-scanner

    NASA Astrophysics Data System (ADS)

    Grasruck, M.; Suess, Ch.; Stierstorfer, K.; Popescu, S.; Flohr, T.

    2005-04-01

    We developed and evaluated a prototype flat-panel detector based Volume CT (VCT) scanner. We focused on improving the image quality using different detector settings and reducing x-ray scatter intensities. For the presented results we used a Varian 4030CB flat-panel detector mounted in a multislice CT gantry (Siemens Medical Systems). Scatter intensities may severely impair image quality in flat-panel detector CT systems. To reduce the impact of scatter we tested bowtie-shaped filters, anti-scatter grids and post-processing correction algorithms. We evaluated the improvement of image quality achieved by each method and also by a combination of several methods. To achieve an extended dynamic range in the projection data, we implemented a novel dynamic gain-switching mode. The readout charge amplifier feedback capacitance changes dynamically in this mode, depending on the signal level. For this scan mode, dedicated corrections in the offset and gain calibration are required. We compared image quality in terms of low contrast for both the dynamic mode and the standard fixed-gain mode. VCT scanners require different types of dose parameters. We measured the dose in a 16 cm CTDI phantom and free in air at the scanner's iso-center, and defined a new metric, a VCT dose index (VCTDI). The dose for a high quality VCT scan of this prototype scanner varied between 15 and 40 mGy.

  11. Quality assessment for multitemporal and multisensor image fusion

    NASA Astrophysics Data System (ADS)

    Ehlers, Manfred; Klonus, Sascha

    2008-10-01

    Generally, image fusion methods are classified into three levels: pixel level (iconic), feature level (symbolic) and knowledge or decision level. In this paper we focus on iconic techniques for image fusion. There exist a number of established fusion techniques that can be used to merge high spatial resolution panchromatic and lower spatial resolution multispectral images that are simultaneously recorded by one sensor. This is done to create high resolution multispectral image datasets (pansharpening). In most cases, these techniques provide very good results, i.e. they retain the high spatial resolution of the panchromatic image and the spectral information from the multispectral image. When applied to multitemporal and/or multisensoral image data, these techniques still create spatially enhanced datasets, but usually at the expense of spectral consistency. In this study, a series of nine multitemporal multispectral remote sensing images (seven SPOT scenes and one FORMOSAT scene) is fused with one panchromatic Ikonos image. A number of techniques are employed to analyze the quality of the fusion process. The images are visually and quantitatively evaluated for spectral characteristics preservation and for spatial resolution improvement. Overall, the Ehlers fusion, which was developed to preserve spectral characteristics in multi-date and multi-sensor fusion, showed the best results. Not only was the Ehlers fusion shown to be superior to all other tested algorithms, it was also the only one that guaranteed excellent color preservation for all dates and sensors.

  12. Assessing the quality of rainfall data when aiming to achieve flood resilience

    NASA Astrophysics Data System (ADS)

    Hoang, C. T.; Tchiguirinskaia, I.; Schertzer, D.; Lovejoy, S.

    2012-04-01

    A new EU Floods Directive entered into force five years ago. This Directive requires Member States to coordinate adequate measures to reduce flood risk. European flood management systems require reliable rainfall statistics, e.g. intensity-duration-frequency curves for shorter and shorter durations and for a larger and larger range of return periods. Preliminary studies showed that the number of floods was lower when estimated using low time resolution data of high intensity rainfall events, compared to estimates obtained with the help of higher time resolution data. These facts suggest that particular attention should be paid to rainfall data quality in order to adequately investigate flood risk when aiming to achieve flood resilience. The potential consequences of changes in measuring and recording techniques have been somewhat discussed in the literature with respect to a possible introduction of artificial inhomogeneities in time series. In this paper, we discuss how to detect another artificiality: most rainfall time series have a lower recording frequency than is assumed, and furthermore the effective high-frequency limit often depends on the recording year due to algorithm changes. This question is particularly important for operational hydrology, because an error in the effective recording frequency introduces biases in the corresponding statistics. In this direction, we developed a first version of a SERQUAL procedure to automatically detect the effective time resolution of highly mixed data. Applied to 166 rainfall time series in France, the SERQUAL procedure detected that most of them have an effective hourly resolution rather than a 5-minute resolution. Furthermore, series having an overall 5-minute resolution do not have it for all years. These results raise serious concerns about how to benchmark stochastic rainfall models at a sub-hourly resolution, which are particularly desirable for operational hydrology. Therefore, database
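
    A minimal sketch of one possible check in the spirit of the effective-resolution detection described above (not the actual SERQUAL procedure): flagging years in which a nominally 5-minute gauge record only reports rainfall on hour boundaries; the series construction and decision rule are assumptions:

```python
# Minimal sketch: flag years where all non-zero 5-minute rainfall values fall
# exactly on the hour, suggesting an effective hourly recording resolution.
import pandas as pd

def effective_resolution_by_year(series: pd.Series) -> pd.Series:
    """series: rainfall depths indexed by a 5-minute DatetimeIndex."""
    wet = series[series > 0]
    on_the_hour = wet.index.minute == 0
    frac = pd.Series(on_the_hour, index=wet.index).groupby(wet.index.year).mean()
    return frac.map(lambda f: "hourly (suspect)" if f > 0.99 else "sub-hourly")

# Synthetic two-day example in which rainfall is reported only on hour boundaries
idx = pd.date_range("2000-01-01", periods=2 * 288, freq="5min")
rain = pd.Series(0.0, index=idx)
rain[idx.minute == 0] = 0.2
print(effective_resolution_by_year(rain))
```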

  13. Why Is Quality in Higher Education Not Achieved? The View of Academics

    ERIC Educational Resources Information Center

    Cardoso, Sónia; Rosa, Maria J.; Stensaker, Bjørn

    2016-01-01

    Quality assurance is currently an established activity in Europe, driven either by national quality assurance agencies or by institutions themselves. However, whether quality assurance is perceived as actually being capable of promoting quality is still a question open to discussion. Based on three different views on quality derived from the…

  14. Automating PACS quality control with the Vanderbilt image processing enterprise resource

    NASA Astrophysics Data System (ADS)

    Esparza, Michael L.; Welch, E. Brian; Landman, Bennett A.

    2012-02-01

    Precise image acquisition is an integral part of modern patient care and medical imaging research. Periodic quality control using standardized protocols and phantoms ensures that scanners are operating according to specifications, yet such procedures do not ensure that individual datasets are free from corruption, for example due to patient motion, transient interference, or physiological variability. If unacceptable artifacts are noticed during scanning, a technologist can repeat a procedure. Yet, substantial delays may be incurred if a problematic scan is not noticed until a radiologist reads the scans or an automated algorithm fails. Given the scores of slices in typical three-dimensional scans and the wide variety of potential use cases, a technologist cannot practically be expected to inspect all images. In large-scale research, automated pipeline systems have had great success in achieving high throughput. However, clinical and institutional workflows are largely based on DICOM and PACS technologies; these systems are not readily compatible with research systems due to security and privacy restrictions. Hence, quantitative quality control has been relegated to individual investigators and too often neglected. Herein, we propose a scalable system, the Vanderbilt Image Processing Enterprise Resource (VIPER), to integrate modular quality control and image analysis routines with a standard PACS configuration. This server unifies image processing routines across an institutional level and provides a simple interface so that investigators can collaborate to deploy new analysis technologies. VIPER integrates with high-performance computing environments and has successfully analyzed all standard scans from our institutional research center over the course of the last 18 months.

  15. Body image quality of life in eating disorders

    PubMed Central

    Jáuregui Lobera, Ignacio; Bolaños Ríos, Patricia

    2011-01-01

    Purpose: The objective was to examine how body image affects quality of life in an eating-disorder (ED) clinical sample, a non-ED clinical sample, and a nonclinical sample. We hypothesized that ED patients would show the worst body image quality of life. We also hypothesized that body image quality of life would have a stronger negative association with specific ED-related variables than with other psychological and psychopathological variables, mainly among ED patients. On the basis of previous studies, the influence of gender on the results was explored, too. Patients and methods: The final sample comprised 70 ED patients (mean age 22.65 ± 7.76 years; 59 women and 11 men); 106 were patients with other psychiatric disorders (mean age 28.20 ± 6.52; 67 women and 39 men), and 135 were university students (mean age 21.57 ± 2.58; 81 women and 54 men), with no psychiatric history. After having obtained informed consent, the following questionnaires were administered: Body Image Quality of Life Inventory-Spanish version (BIQLI-SP), Eating Disorders Inventory-2 (EDI-2), Perceived Stress Questionnaire (PSQ), Self-Esteem Scale (SES), and Symptom Checklist-90-Revised (SCL-90-R). Results: The ED patients’ ratings on the BIQLI-SP were the lowest and negatively scored (BIQLI-SP means: +20.18, +5.14, and −6.18, in the student group, the non-ED patient group, and the ED group, respectively). The effect of body image on quality of life was more negative in the ED group in all items of the BIQLI-SP. Body image quality of life was negatively associated with specific ED-related variables, more than with other psychological and psychopathological variables, but not especially among ED patients. Conclusion: Body image quality of life was affected not only by specific pathologies related to body image disturbances, but also by other psychopathological syndromes. Nevertheless, the greatest effect was related to ED, and seemed to be more negative among men. This finding is the

  16. Image quality testing of assembled IR camera modules

    NASA Astrophysics Data System (ADS)

    Winters, Daniel; Erichsen, Patrik

    2013-10-01

    Infrared (IR) camera modules for the LWIR (8-12 µm) that combine IR imaging optics with microbolometer focal plane array (FPA) sensors and readout electronics are becoming more and more a mass market product. At the same time, steady improvements in sensor resolution in the higher priced markets raise the requirements for the imaging performance of objectives and the proper alignment between objective and FPA. This puts pressure on camera manufacturers and system integrators to assess the image quality of finished camera modules in a cost-efficient and automated way for quality control or during end-of-line testing. In this paper we present recent development work done in the field of image quality testing of IR camera modules. This technology provides a wealth of additional information in contrast to more traditional test methods like the minimum resolvable temperature difference (MRTD), which gives only a subjective overall test result. Parameters that can be measured are image quality via the modulation transfer function (MTF) for broadband or with various bandpass filters on- and off-axis, and optical parameters such as effective focal length (EFL) and distortion. If the camera module allows for refocusing the optics, additional parameters like best focus plane, image plane tilt, auto-focus quality, chief ray angle etc. can be characterized. Additionally, the homogeneity and response of the sensor with the optics can be characterized in order to calculate the appropriate tables for non-uniformity correction (NUC). The technology can also be used to control active alignment methods during mechanical assembly of optics to high resolution sensors. Other important points that are discussed are the flexibility of the technology to test IR modules with different form factors and electrical interfaces and, last but not least, its suitability for fully automated measurements in mass production.
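
    A minimal sketch of an MTF computation from a measured line-spread function, one of the parameters mentioned above; the LSF file and pixel pitch are hypothetical, the slanted-edge oversampling and windowing used in practice are omitted, and this is not the vendor's test software:

```python
# Minimal sketch: MTF from a 1-D line-spread function via Fourier transform.
import numpy as np

lsf = np.load("lsf_profile.npy")          # hypothetical measured line-spread function
pixel_pitch_mm = 0.017                    # hypothetical detector pixel pitch

lsf = lsf - lsf.min()
lsf = lsf / lsf.sum()                     # normalise area to 1 so MTF(0) = 1

mtf = np.abs(np.fft.rfft(lsf))
freqs = np.fft.rfftfreq(lsf.size, d=pixel_pitch_mm)   # spatial frequency in cycles/mm

for f, m in zip(freqs[:5], mtf[:5]):
    print(f"{f:6.2f} cy/mm : MTF = {m:.3f}")
```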

  17. A Methodology for Anatomic Ultrasound Image Diagnostic Quality Assessment.

    PubMed

    Hemmsen, Martin Christian; Lange, Theis; Brandt, Andreas Hjelm; Nielsen, Michael Bachmann; Jensen, Jorgen Arendt

    2017-01-01

    This paper discusses methods for the assessment of ultrasound image quality based on our experiences with evaluating new methods for anatomic imaging. It presents a methodology to ensure a fair assessment between competing imaging methods using clinically relevant evaluations. The methodology is valuable in the continuing process of method optimization and guided development of new imaging methods. It includes a three-phase study plan covering initial prototype development through clinical assessment. Recommendations for the clinical assessment protocol, software, and statistical analysis are presented. Earlier uses of the methodology have shown that it ensures the validity of the assessment, as it separates the influences of developer, investigator, and assessor once a research protocol has been established. This separation reduces confounding influences from the developer on the result to properly reveal the clinical value. This paper exemplifies the methodology using recent studies of synthetic aperture sequential beamforming tissue harmonic imaging.

  18. Quality criteria for simulator images - A literature review

    NASA Astrophysics Data System (ADS)

    Padmos, Pieter; Milders, Maarten V.

    1992-12-01

    Quality criteria are presented for each of about 30 different outside-world image features of computer-generated image systems on vehicle simulators (e.g., airplane, tank, ship). Criteria derived are based on a literature review. In addition to purely physical properties related to image presentation (e.g., field size, contrast ratio, update frequency), attention is paid to image content (e.g., number of polygons, surface treatments, moving objects) and various other features (e.g., electro-optical aids, vehicle-terrain interactions, modeling tools, instruction tools). Included in this paper are an introduction on visual perception, separate discussions of each image feature including terminology definitions, and suggestions for further research.

  19. Magnetic Resonance Imaging (MRI) Analysis of Fibroid Location in Women Achieving Pregnancy After Uterine Artery Embolization

    SciTech Connect

    Walker, Woodruff J.; Bratby, Mark John

    2007-09-15

    The purpose of this study was to evaluate the fibroid morphology in a cohort of women achieving pregnancy following treatment with uterine artery embolization (UAE) for symptomatic uterine fibroids. A retrospective review of magnetic resonance imaging (MRI) of the uterus was performed to assess pre-embolization fibroid morphology. Data were collected on fibroid size, type, and number and included analysis of follow-up imaging to assess response. There have been 67 pregnancies in 51 women, with 40 live births. Intramural fibroids were seen in 62.7% of the women (32/48). Of these the fibroids were multiple in 16. A further 12 women had submucosal fibroids, with equal numbers of types 1 and 2. Two of these women had coexistent intramural fibroids. In six women the fibroids could not be individually delineated and formed a complex mass. All subtypes of fibroid were represented in those subgroups of women achieving a live birth versus those who did not. These results demonstrate that the location of uterine fibroids did not adversely affect subsequent pregnancy in the patient population investigated. Although this is only a small qualitative study, it does suggest that all types of fibroids treated with UAE have the potential for future fertility.

  20. Does higher quality early child care promote low-income children's math and reading achievement in middle childhood?

    PubMed

    Dearing, Eric; McCartney, Kathleen; Taylor, Beck A

    2009-01-01

    Higher quality child care during infancy and early childhood (6-54 months of age) was examined as a moderator of associations between family economic status and children's (N = 1,364) math and reading achievement in middle childhood (4.5-11 years of age). Low income was less strongly predictive of underachievement for children who had been in higher quality care than for those who had not. Consistent with a cognitive advantage hypothesis, higher quality care appeared to promote achievement indirectly via early school readiness skills. Family characteristics associated with selection into child care also appeared to promote the achievement of low-income children, but the moderating effect of higher quality care per se remained evident when controlling for selection using covariates and propensity scores.

  1. Quality assurance methodology and applications to abdominal imaging PQI.

    PubMed

    Paushter, David M; Thomas, Stephen

    2016-03-01

    Quality assurance has increasingly become an integral part of medicine, with tandem goals of increasing patient safety and procedural quality, improving efficiency, lowering cost, and ultimately improving patient outcomes. This article reviews quality assurance methodology, ranging from the PDSA cycle to the application of lean techniques, aimed at operational efficiency, to continually evaluate and revise the health care environment. Alignment of goals for practices, hospitals, and healthcare organizations is critical, requiring clear objectives, adequate resources, and transparent reporting. In addition, there is a significant role played by regulatory bodies and oversight organizations in determining external benchmarks of quality, practice, and individual certification and reimbursement. Finally, practical application of quality principles to practice improvement projects in abdominal imaging will be presented.

  2. TU-EF-204-02: High Quality and Sub-mSv Cerebral CT Perfusion Imaging

    SciTech Connect

    Li, Ke; Niu, Kai; Wu, Yijing; Chen, Guang-Hong

    2015-06-15

    Purpose: CT Perfusion (CTP) imaging is of great importance in acute ischemic stroke management due to its potential to detect hypoperfused yet salvageable tissue and distinguish it from definitely unsalvageable tissue. However, current CTP imaging suffers from poor image quality and high radiation dose (up to 5 mSv). The purpose of this work was to demonstrate that technical innovations such as Prior Image Constrained Compressed Sensing (PICCS) have the potential to address these challenges and achieve high quality and sub-mSv CTP imaging. Methods: (1) A spatial-temporal 4D cascaded system model was developed to identify the bottlenecks in the current CTP technology; (2) A task-based framework was developed to optimize the CTP system parameters; (3) Guided by (1) and (2), PICCS was customized for the reconstruction of CTP source images. Digital anthropomorphic perfusion phantoms, animal studies, and preliminary human subject studies were used to validate and evaluate the potential of using these innovations to advance CTP technology. Results: The 4D cascaded model was validated in both phantom and canine stroke models. Based upon this cascaded model, it has been discovered that, as long as the spatial resolution and noise properties of the 4D source CT images are given, the 3D MTF and NPS of the final CTP maps can be analytically derived for a given set of processing methods and parameters. The cascaded model analysis also identified that the most critical technical factor in CTP is how to acquire and reconstruct high quality source images; it has very little to do with the denoising techniques often used after parametric perfusion calculations. This explained why PICCS resulted in a five-fold dose reduction or a substantial improvement in image quality. Conclusion: Technical innovations generated promising results towards achieving high quality and sub-mSv CTP imaging for reliable and safe assessment of acute ischemic strokes. K. Li, K. Niu, Y. Wu: Nothing to

  3. Image quality, space-qualified UV interference filters

    NASA Technical Reports Server (NTRS)

    Mooney, Thomas A.

    1992-01-01

    The progress during the contract period is described. The project involved fabrication of image quality, space-qualified bandpass filters in the 200-350 nm spectral region. Ion-assisted deposition (IAD) was applied to produce stable, reasonably durable filter coatings on space compatible UV substrates. Thin film materials and UV transmitting substrates were tested for resistance to simulated space effects.

  4. Simultaneous analysis and quality assurance for diffusion tensor imaging.

    PubMed

    Lauzon, Carolyn B; Asman, Andrew J; Esparza, Michael L; Burns, Scott S; Fan, Qiuyun; Gao, Yurui; Anderson, Adam W; Davis, Nicole; Cutting, Laurie E; Landman, Bennett A

    2013-01-01

    Diffusion tensor imaging (DTI) enables non-invasive, cyto-architectural mapping of in vivo tissue microarchitecture through voxel-wise mathematical modeling of multiple magnetic resonance imaging (MRI) acquisitions, each differently sensitized to water diffusion. DTI computations are fundamentally estimation processes and are sensitive to noise and artifacts. Despite widespread adoption in the neuroimaging community, maintaining consistent DTI data quality remains challenging given the propensity for patient motion, artifacts associated with fast imaging techniques, and the possibility of hardware changes/failures. Furthermore, the quantity of data acquired per voxel, the non-linear estimation process, and numerous potential use cases complicate traditional visual data inspection approaches. Currently, quality inspection of DTI data has relied on visual inspection and individual processing in DTI analysis software programs (e.g. DTIPrep, DTI-studio). However, recent advances in applied statistical methods have yielded several different metrics to assess noise level, artifact propensity, quality of tensor fit, variance of estimated measures, and bias in estimated measures. To date, these metrics have been largely studied in isolation. Herein, we select complementary metrics for integration into an automatic DTI analysis and quality assurance pipeline. The pipeline completes in 24 hours, stores statistical outputs, and produces a graphical summary quality analysis (QA) report. We assess the utility of this streamlined approach for empirical quality assessment on 608 DTI datasets from pediatric neuroimaging studies. The efficiency and accuracy of quality analysis using the proposed pipeline is compared with quality analysis based on visual inspection. The unified pipeline is found to save a statistically significant amount of time (over 70%) while improving the consistency of QA between a DTI expert and a pool of research associates. Projection of QA metrics to a low

  5. Simultaneous Analysis and Quality Assurance for Diffusion Tensor Imaging

    PubMed Central

    Lauzon, Carolyn B.; Asman, Andrew J.; Esparza, Michael L.; Burns, Scott S.; Fan, Qiuyun; Gao, Yurui; Anderson, Adam W.; Davis, Nicole; Cutting, Laurie E.; Landman, Bennett A.

    2013-01-01

    Diffusion tensor imaging (DTI) enables non-invasive, cyto-architectural mapping of in vivo tissue microarchitecture through voxel-wise mathematical modeling of multiple magnetic resonance imaging (MRI) acquisitions, each differently sensitized to water diffusion. DTI computations are fundamentally estimation processes and are sensitive to noise and artifacts. Despite widespread adoption in the neuroimaging community, maintaining consistent DTI data quality remains challenging given the propensity for patient motion, artifacts associated with fast imaging techniques, and the possibility of hardware changes/failures. Furthermore, the quantity of data acquired per voxel, the non-linear estimation process, and numerous potential use cases complicate traditional visual data inspection approaches. Currently, quality inspection of DTI data has relied on visual inspection and individual processing in DTI analysis software programs (e.g. DTIPrep, DTI-studio). However, recent advances in applied statistical methods have yielded several different metrics to assess noise level, artifact propensity, quality of tensor fit, variance of estimated measures, and bias in estimated measures. To date, these metrics have been largely studied in isolation. Herein, we select complementary metrics for integration into an automatic DTI analysis and quality assurance pipeline. The pipeline completes in 24 hours, stores statistical outputs, and produces a graphical summary quality analysis (QA) report. We assess the utility of this streamlined approach for empirical quality assessment on 608 DTI datasets from pediatric neuroimaging studies. The efficiency and accuracy of quality analysis using the proposed pipeline is compared with quality analysis based on visual inspection. The unified pipeline is found to save a statistically significant amount of time (over 70%) while improving the consistency of QA between a DTI expert and a pool of research associates. Projection of QA metrics to a low

  6. A Novel Image Quality Assessment with Globally and Locally Consilient Visual Quality Perception.

    PubMed

    Bae, Sung-Ho; Kim, Munchurl

    2016-03-25

    Computational models for image quality assessment (IQA) have been developed by exploring effective features that are consistent with the characteristics of human visual system (HVS) for visual quality perception. In this paper, we firstly reveal that many existing features used in computational IQA methods can hardly characterize visual quality perception for local image characteristics and various distortion types. To solve this problem, we propose a new IQA method, called Structural Contrast-Quality Index (SC-QI) by adopting a structural contrast index (SCI) which can well characterize local and global visual quality perceptions for various image characteristics with structural-distortion types. In addition to SCI, we devise some other perceptually important features for our SC-QI that can effectively reflect the characteristics of HVS for contrast sensitivity and chrominance component variation. Furthermore, we develop a modified SC-QI, called structural contrast distortion metric (SC-DM) which inherits desirable mathematical properties of valid distance metricability and quasi-convexity. So, it can effectively be used as a distance metric for image quality optimization problems. Extensive experimental results show that both SC-QI and SC-DM can very well characterize the HVS's properties of visual quality perception for local image characteristics and various distortion types, which is a distinctive merit of our methods compared to other IQA methods. As a result, both SC-QI and SC-DM have better performances with a strong consilience of global and local visual quality perception as well as with much lower computation complexity, compared to state-of-the-art IQA methods. The MATLAB source codes of the proposed SC-QI and SC-DM are publicly available online at https://sites.google.com/site/sunghobaecv/iqa.

  7. A Novel Image Quality Assessment With Globally and Locally Consilient Visual Quality Perception.

    PubMed

    Bae, Sung-Ho; Kim, Munchurl

    2016-05-01

    Computational models for image quality assessment (IQA) have been developed by exploring effective features that are consistent with the characteristics of a human visual system (HVS) for visual quality perception. In this paper, we first reveal that many existing features used in computational IQA methods can hardly characterize visual quality perception for local image characteristics and various distortion types. To solve this problem, we propose a new IQA method, called the structural contrast-quality index (SC-QI), by adopting a structural contrast index (SCI), which can well characterize local and global visual quality perceptions for various image characteristics with structural-distortion types. In addition to SCI, we devise some other perceptually important features for our SC-QI that can effectively reflect the characteristics of HVS for contrast sensitivity and chrominance component variation. Furthermore, we develop a modified SC-QI, called structural contrast distortion metric (SC-DM), which inherits desirable mathematical properties of valid distance metricability and quasi-convexity. So, it can effectively be used as a distance metric for image quality optimization problems. Extensive experimental results show that both SC-QI and SC-DM can very well characterize the HVS's properties of visual quality perception for local image characteristics and various distortion types, which is a distinctive merit of our methods compared with other IQA methods. As a result, both SC-QI and SC-DM have better performances with a strong consilience of global and local visual quality perception as well as with much lower computation complexity, compared with the state-of-the-art IQA methods. The MATLAB source codes of the proposed SC-QI and SC-DM are publicly available online at https://sites.google.com/site/sunghobaecv/iqa.

  8. LATIN AMERICAN IMAGE QUALITY SURVEY IN DIGITAL MAMMOGRAPHY STUDIES.

    PubMed

    Mora, Patricia; Khoury, Helen; Bitelli, Regina; Quintero, Ana Rosa; Garay, Fernando; Aguilar, Juan García; Gamarra, Mirtha; Ubeda, Carlos

    2016-03-23

    Under the International Atomic Energy Agency regional programme TSA3, Radiological Protection of Patients in Medical Exposures, Latin American countries evaluated the image quality and glandular doses for digital mammography equipment with the purpose of assessing performance and compliance with international recommendations. In total, 24 institutions participated, from Brazil, Chile, Costa Rica, El Salvador, Mexico, Paraguay and Venezuela. Signal-difference-to-noise ratio results showed poor compliance with tolerances for CR; better results were obtained for full-field digital mammography equipment. Mean glandular dose results showed that the majority of units have values below the acceptable dose levels. This joint Latin American project identified common problems: difficulty in working with digital images and a lack of specific training among medical physicists from the region. Image quality is a main issue that is not being satisfied in accordance with international recommendations; optimisation processes in which doses are increased should be carried out very carefully in order to improve early detection of any cancer signs.
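
    A minimal sketch of a signal-difference-to-noise ratio (SDNR) computation from phantom regions of interest, as referred to above; the image file and ROI positions are hypothetical and this is not the survey protocol's software:

```python
# Minimal sketch: SDNR from a contrast detail and adjacent background in a phantom image.
import numpy as np
import pydicom

img = pydicom.dcmread("mammo_phantom.dcm").pixel_array.astype(float)  # hypothetical file

detail     = img[500:540, 500:540]     # hypothetical ROI over the contrast object
background = img[600:640, 600:640]     # hypothetical ROI over uniform background

sdnr = abs(detail.mean() - background.mean()) / background.std()
print(f"SDNR = {sdnr:.2f}")
```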

  9. Techniques to evaluate the quality of medical images

    NASA Astrophysics Data System (ADS)

    Perez-Diaz, Marlen

    2014-11-01

    There is no perfect agreement on the definition of medical image quality between the physician's and the physicist's points of view. The present conference analyzes the standard techniques used to grade image quality. First, it analyzes how viewing conditions related to the environment, the monitor used or physician experience determine the subjective evaluation. After that, the physics point of view is analyzed, including the advantages and disadvantages of the main published methods: quality control tests, mathematical metrics, the modulation transfer function, the noise power spectrum, system response curves and mathematical observers. Each method is exemplified with results from recent papers. We conclude that the most successful methods up to the present have been those which include simulations of the human visual system; these show good correlation between the results of the objective metrics and the subjective evaluation made by the observers.
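
    As an illustration of one of the physical methods listed above, a noise power spectrum (NPS) can be estimated from detrended flat-field regions; the ROI size, pixel size and synthetic image below are assumptions, not material from the talk:

```python
# Minimal sketch: 2-D noise power spectrum from mean-subtracted flat-field ROIs.
import numpy as np

def nps_2d(flat_image, roi=128, pixel_mm=0.1):
    """Average the squared FFT magnitude over non-overlapping ROIs."""
    rows, cols = flat_image.shape
    spectra = []
    for r in range(0, rows - roi + 1, roi):
        for c in range(0, cols - roi + 1, roi):
            patch = flat_image[r:r + roi, c:c + roi].astype(float)
            patch -= patch.mean()                      # remove the DC component
            spectra.append(np.abs(np.fft.fft2(patch)) ** 2)
    nps = np.mean(spectra, axis=0) * (pixel_mm ** 2) / (roi * roi)
    return np.fft.fftshift(nps)

flat = np.random.normal(1000.0, 10.0, size=(512, 512))   # synthetic flat-field image
print(nps_2d(flat).shape)
```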

  10. Automatic image quality assessment for uterine cervical imagery

    NASA Astrophysics Data System (ADS)

    Gu, Jia; Li, Wenjing

    2006-03-01

    Uterine cervical cancer is the second most common cancer among women worldwide. However, its death rate can be dramatically reduced by appropriate treatment, if early detection is available. We are developing a Computer-Aided-Diagnosis (CAD) system to facilitate colposcopic examinations for cervical cancer screening and diagnosis. Unfortunately, the effort to develop fully automated cervical cancer diagnostic algorithms is hindered by the paucity of high quality, standardized imaging data. The limited quality of cervical imagery can be attributed to several factors, including: incorrect instrumental settings or positioning, glint (specular reflection), blur due to poor focus, and physical contaminants. Glint eliminates the color information in affected pixels and can therefore introduce artifacts in feature extraction algorithms. Instrumental settings that result in an inadequate dynamic range or an overly constrained region of interest can reduce or eliminate pixel information and thus make image analysis algorithms unreliable. Poor focus causes image blur with a consequent loss of texture information. In addition, a variety of physical contaminants, such as blood, can obscure the desired scene and reduce or eliminate diagnostic information from affected areas. Thus, automated feedback should be provided to the colposcopist as a means to promote corrective actions. In this paper, we describe automated image quality assessment techniques, which include region of interest detection and assessment, contrast dynamic range assessment, blur detection, and contaminant detection. We have tested these algorithms using clinical colposcopic imagery, and plan to implement these algorithms in a CAD system designed to simplify high quality data acquisition. Moreover, these algorithms may also be suitable for image quality assessment in telemedicine applications.
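
    A minimal sketch of two of the checks described above, blur detection via the variance of the Laplacian and glint detection via a saturation threshold; the thresholds, file name and use of OpenCV are assumptions and this is not the CAD system's code:

```python
# Minimal sketch: simple blur and glint checks for a colposcopic image.
import cv2
import numpy as np

image = cv2.imread("cervigram.png")                     # hypothetical colposcopic image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

blur_score = cv2.Laplacian(gray, cv2.CV_64F).var()      # low variance suggests blur
glint_mask = gray > 250                                  # near-saturated pixels
glint_fraction = glint_mask.mean()

print(f"focus score: {blur_score:.1f} (flag if below an empirically chosen threshold)")
print(f"glint coverage: {100 * glint_fraction:.2f}% of pixels")
```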

  11. Full-Reference Image Quality Assessment with Linear Combination of Genetically Selected Quality Measures

    PubMed Central

    2016-01-01

    Information carried by an image can be distorted by the different image processing steps introduced by different electronic means of storage and communication. Therefore, the development of algorithms which can automatically assess the quality of an image in a way that is consistent with human evaluation is important. In this paper, an approach to image quality assessment (IQA) is proposed in which the quality of a given image is evaluated jointly by several IQA approaches. At first, in order to obtain such joint models, an optimisation problem of IQA measure aggregation is defined, where a weighted sum of their outputs, i.e., objective scores, is used as the aggregation operator. Then, the weight of each measure is considered as a decision variable in a problem of minimising the root mean square error between the obtained objective scores and subjective scores. Subjective scores reflect ground truth and involve evaluation of images by human observers. The optimisation problem is solved using a genetic algorithm, which also selects suitable measures to be used in the aggregation. The obtained multimeasures are evaluated on the four largest widely used image benchmarks and compared against state-of-the-art full-reference IQA approaches. The results of the comparison reveal that the proposed approach outperforms the other competing measures. PMID:27341493
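
    A toy sketch of the aggregation idea described above: a small genetic algorithm searching for non-negative weights of several IQA measures so that their weighted sum matches subjective scores in the RMSE sense; the data files, population size and GA settings are assumptions, not the paper's implementation:

```python
# Minimal sketch: evolve weights for a weighted-sum aggregation of IQA measures
# so that the aggregate best matches subjective (ground-truth) scores.
import numpy as np

rng = np.random.default_rng(0)
scores = np.load("objective_scores.npy")     # hypothetical (n_images, n_measures)
mos    = np.load("subjective_scores.npy")    # hypothetical (n_images,)

n_measures = scores.shape[1]

def rmse(weights):
    return np.sqrt(np.mean((scores @ weights - mos) ** 2))

pop = rng.random((50, n_measures))                        # initial population of weight vectors
for generation in range(200):
    fitness = np.array([rmse(w) for w in pop])
    parents = pop[np.argsort(fitness)[:10]]               # keep the 10 best (elitism)
    children = []
    while len(children) < 40:
        a, b = parents[rng.integers(10, size=2)]
        child = np.where(rng.random(n_measures) < 0.5, a, b)   # uniform crossover
        child += rng.normal(0.0, 0.05, n_measures)             # mutation
        children.append(np.clip(child, 0.0, None))             # near-zero weight ~ measure dropped
    pop = np.vstack([parents, children])

best = pop[np.argmin([rmse(w) for w in pop])]
print("selected weights:", np.round(best, 3), " RMSE:", round(rmse(best), 3))
```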

  12. Improving Service Quality: Achieving High Performance in the Public and Private Sectors.

    ERIC Educational Resources Information Center

    Milakovich, Michael E.

    Quality-improvement principles are a sound means to respond to customer needs. However, when various quality and productivity theories and methods are applied, it is very difficult to consistently deliver quality results, especially in quasi-monopolistic, non-competitive, and regulated environments. This book focuses on quality-improvement methods…

  13. Process Dimensions of Child Care Quality and Academic Achievement: An Instrumental Variables Analysis

    ERIC Educational Resources Information Center

    Auger, Anamarie; Farkas, George; Duncan, Greg; Burchinal, Peg; Vandell, Deborah Lowe

    2012-01-01

    Child care quality is usually measured along two dimensions--structural and process. In this paper the authors focus on process quality--the quality of child care center instructional practices and teacher interactions with students. They use an instrumental variables technique to estimate the effect of child care center process quality on…

  14. A study of image quality for radar image processing. [synthetic aperture radar imagery

    NASA Technical Reports Server (NTRS)

    King, R. W.; Kaupp, V. H.; Waite, W. P.; Macdonald, H. C.

    1982-01-01

    Methods developed for image quality metrics are reviewed with focus on basic interpretation or recognition elements including: tone or color; shape; pattern; size; shadow; texture; site; association or context; and resolution. Seven metrics are believed to show promise as a way of characterizing the quality of an image: (1) the dynamic range of intensities in the displayed image; (2) the system signal-to-noise ratio; (3) the system spatial bandwidth or bandpass; (4) the system resolution or acutance; (5) the normalized-mean-square-error as a measure of geometric fidelity; (6) the perceptual mean square error; and (7) the radar threshold quality factor. Selective levels of degradation are being applied to simulated synthetic radar images to test the validity of these metrics.
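
    A minimal sketch of two of the seven listed metrics, displayed dynamic range and normalized mean square error (NMSE), computed between a degraded simulated radar image and its reference; the file names are hypothetical and this is not the study's software:

```python
# Minimal sketch: dynamic range and NMSE between a degraded SAR image and its reference.
import numpy as np

reference = np.load("sar_reference.npy").astype(float)   # hypothetical simulated SAR image
degraded  = np.load("sar_degraded.npy").astype(float)    # same scene after degradation

dynamic_range_db = 20 * np.log10(reference.max() / max(reference.min(), 1e-12))
nmse = np.sum((degraded - reference) ** 2) / np.sum(reference ** 2)

print(f"dynamic range: {dynamic_range_db:.1f} dB")
print(f"NMSE (geometric fidelity proxy): {nmse:.4f}")
```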

  15. Evaluation of image quality of a new CCD-based system for chest imaging

    NASA Astrophysics Data System (ADS)

    Sund, Patrik; Kheddache, Susanne; Mansson, Lars G.; Bath, Magnus; Tylen, Ulf

    2000-04-01

    The Imix radiography system (Oy Imix Ab, Finland) consists of an intensifying screen, optics, and a CCD camera. An upgrade of this system (Imix 2000) with a red-emitting screen and new optics has recently been released. The image quality of Imix (original version), Imix 2000, and two storage-phosphor systems, Fuji FCR 9501 and Agfa ADC70, was evaluated in physical terms (DQE) and with visual grading of the visibility of anatomical structures in clinical images (141 kV). PA chest images of 50 healthy volunteers were evaluated by experienced radiologists. All images were evaluated on Siemens Simomed monitors, using the European Quality Criteria. The maximum DQE values for Imix, Imix 2000, Agfa and Fuji were 11%, 14%, 17% and 19%, respectively (141 kV, 5 μGy). Using the visual grading, the observers rated the systems in the following descending order: Fuji, Imix 2000, Agfa, and Imix. Thus, the upgrade to Imix 2000 resulted in higher DQE values and a significant improvement in clinical image quality. The visual grading agrees reasonably well with the DQE results; however, Imix 2000 received a better score than what could be expected from the DQE measurements. Keywords: CCD Technique, Chest Imaging, Digital Radiography, DQE, Image Quality, Visual Grading Analysis

  16. Objective Quality Assessment and Perceptual Compression of Screen Content Images.

    PubMed

    Wang, Shiqi; Gu, Ke; Zeng, Kai; Wang, Zhou; Lin, Weisi

    2016-05-25

    Screen content image (SCI) has recently emerged as an active topic due to the rapidly increasing demand in many graphically rich services such as wireless displays and virtual desktops. Image quality models play an important role in measuring and optimizing user experience of SCI compression and transmission systems, but are currently lacking. SCIs are often composed of pictorial regions and computer generated textual/graphical content, which exhibit different statistical properties that often lead to different viewer behaviors. Inspired by this, we propose an objective quality assessment approach for SCIs that incorporates both visual field adaptation and information content weighting into structural similarity based local quality assessment. Furthermore, we develop a perceptual screen content coding scheme based on the newly proposed quality assessment measure, aiming to further improve SCI compression performance. Experimental results show that the proposed quality assessment method not only better predicts the perceptual quality of SCIs, but also demonstrates great potential in the design of perceptually optimal SCI compression schemes.

  17. Telemedicine + OCT: toward design of optimized algorithms for high-quality compressed images

    NASA Astrophysics Data System (ADS)

    Mousavi, Mahta; Lurie, Kristen; Land, Julian; Javidi, Tara; Ellerbee, Audrey K.

    2014-03-01

    Telemedicine is an emerging technology that aims to provide clinical healthcare at a distance. Among its goals, the transfer of diagnostic images over telecommunication channels has been quite appealing to the medical community. When viewed as an adjunct to biomedical device hardware, one highly important consideration aside from the transfer rate and speed is the accuracy of the reconstructed image at the receiver end. Although optical coherence tomography (OCT) is an established imaging technique that is ripe for telemedicine, the effects of OCT data compression, which may be necessary on certain telemedicine platforms, have not received much attention in the literature. We investigate the performance and efficiency of several lossless and lossy compression techniques for OCT data and characterize their effectiveness with respect to achievable compression ratio, compression rate and preservation of image quality. We examine the effects of compression in the interferogram vs. A-scan domain as assessed with various objective and subjective metrics.
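
    The trade-off described here between compression ratio and preserved image quality can be characterized with a few lines of code. The sketch below is purely illustrative and not the authors' pipeline: a generic byte-level lossless compressor (zlib) and a uniform re-quantization stand in for the lossless and lossy techniques studied, and the synthetic B-scan is a placeholder for real OCT data.

      import zlib
      import numpy as np

      rng = np.random.default_rng(1)
      # Stand-in for an OCT B-scan (rows of A-scans), 8-bit intensities.
      bscan = rng.gamma(2.0, 20.0, size=(512, 1024)).clip(0, 255).astype(np.uint8)

      # Lossless: compression ratio from a generic byte-level compressor.
      raw = bscan.tobytes()
      ratio = len(raw) / len(zlib.compress(raw, 9))
      print(f"lossless compression ratio: {ratio:.2f}:1")

      # Lossy: uniform re-quantization to fewer gray levels, scored with PSNR.
      def psnr(a, b, peak=255.0):
          mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
          return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

      step = 8  # quantization step; coarser steps trade quality for size
      lossy = (bscan // step) * step
      print(f"PSNR after quantization: {psnr(bscan, lossy):.1f} dB")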

  18. Comparison of clinical and physical measures of image quality in chest and pelvis computed radiography at different tube voltages

    SciTech Connect

    Sandborg, Michael; Tingberg, Anders; Ullman, Gustaf; Dance, David R.; Alm Carlsson, Gudrun

    2006-11-15

    The aim of this work was to study the dependence of image quality in digital chest and pelvis radiography on tube voltage, and to explore correlations between clinical and physical measures of image quality. The effect on image quality of tube voltage in these two examinations was assessed using two methods. The first method relies on radiologists' observations of images of an anthropomorphic phantom, and the second method was based on computer modeling of the imaging system using an anthropomorphic voxel phantom. The tube voltage was varied within a broad range (50-150 kV), including those values typically used with screen-film radiography. The tube charge was altered so that the same effective dose was achieved for each projection. Two x-ray units were employed using a computed radiography (CR) image detector with standard tube filtration and antiscatter device. Clinical image quality was assessed by a group of radiologists using a visual grading analysis (VGA) technique based on the revised CEC image criteria. Physical image quality was derived from a Monte Carlo computer model in terms of the signal-to-noise ratio, SNR, of anatomical structures corresponding to the image criteria. Both the VGAS (visual grading analysis score) and SNR decrease with increasing tube voltage in both chest PA and pelvis AP examinations, indicating superior performance if lower tube voltages are employed. Hence, a positive correlation between clinical and physical measures of image quality was found. The pros and cons of using lower tube voltages with CR digital radiography than typically used in analog screen-film radiography are discussed, as well as the relevance of using VGAS and quantum-noise SNR as measures of image quality in pelvis and chest radiography.

  19. CT image quality over time: comparison of image quality for six different CT scanners over a six-year period.

    PubMed

    Roa, Ana Maria A; Andersen, Hilde K; Martinsen, Anne Catrine T

    2015-03-08

    UNSCEAR concluded that increased use of CT scanning caused dramatic changes in population dose. Therefore, international radiation protection authorities demand: 1) periodical quality assurance tests with respect to image quality and radiation dose, and 2) optimization of all examination protocols with respect to image quality and radiation dose. This study aimed to evaluate and analyze multiple image quality parameters and variability measured throughout time for six different CT scanners from four different vendors, in order to evaluate the current methodology for QA controls of CT systems. The results from this study indicate that there is minor drifting in the image noise and uniformity and in the spatial resolution over time for CT scanners, independent of vendors. The HU for different object densities vary between different CT scanner models from different vendors, and over time for one specific CT scanner. Future tests of interphantom and intraphantom variations, along with inclusion of more CT scanners, are necessary to establish robust baselines and recommendations of methodology for QA controls of CT systems, independent of model and vendor.

  20. A quality assurance program for the on-board imagers.

    PubMed

    Yoo, Sua; Kim, Gwe-Ya; Hammoud, Rabih; Elder, Eric; Pawlicki, Todd; Guan, Huaiqun; Fox, Timothy; Luxton, Gary; Yin, Fang-Fang; Munro, Peter

    2006-11-01

    To develop a quality assurance (QA) program for the On-Board Imager (OBI) system and to summarize the results of these QA tests over extended periods from multiple institutions. Both the radiographic and cone-beam computed tomography (CBCT) modes of operation have been evaluated. The QA programs from four institutions have been combined to generate a series of tests for evaluating the performance of the On-Board Imager. The combined QA program consists of three parts: (1) safety and functionality, (2) geometry, and (3) image quality. Safety and functionality tests evaluate the functionality of safety features and the clinical operation of the entire system during the tube warm-up. Geometry QA verifies the geometric accuracy and stability of the OBI/CBCT hardware/software. Image quality QA monitors spatial resolution and contrast sensitivity of the radiographic images. Image quality QA for CBCT additionally includes tests for Hounsfield unit (HU) linearity, HU uniformity, spatial linearity, and scan slice geometry. All safety and functionality tests passed on a daily basis. The average accuracy of the OBI isocenter was better than 1.5 mm with a range of variation of less than 1 mm over 8 months. The average accuracy of arm positions in the mechanical geometry QA was better than 1 mm, with a range of variation of less than 1 mm over 8 months. Measurements of other geometry QA tests showed stable results within tolerance throughout the test periods. Radiographic contrast sensitivity ranged between 2.2% and 3.2% and spatial resolution ranged between 1.25 and 1.6 lp/mm. Over four months the CBCT images showed stable spatial linearity, scan slice geometry, contrast resolution (1%; <7 mm disk) and spatial resolution (>6 lp/cm). The HU linearity was within +/-40 HU for all measurements. By combining test methods from multiple institutions, we have developed a comprehensive, yet practical, set of QA tests for the OBI system. Use of the tests over extended periods shows that

  1. Scanner-based image quality measurement system for automated analysis of EP output

    NASA Astrophysics Data System (ADS)

    Kipman, Yair; Mehta, Prashant; Johnson, Kate

    2003-12-01

    Inspection of electrophotographic print cartridge quality and compatibility requires analysis of hundreds of pages on a wide population of printers and copiers. Although print quality inspection is often achieved through the use of anchor prints and densitometry, more comprehensive analysis and quantitative data is desired for performance tracking, benchmarking and failure mode analysis. Image quality measurement systems range in price and performance, image capture paths and levels of automation. In order to address the requirements of a specific application, careful consideration was made to print volume, budgetary limits, and the scope of the desired image quality measurements. A flatbed scanner-based image quality measurement system was selected to support high throughput, maximal automation, and sufficient flexibility for both measurement methods and image sampling rates. Using an automatic document feeder (ADF) for sample management, a half ream of prints can be measured automatically without operator intervention. The system includes optical character recognition (OCR) for automatic determination of target type for measurement suite selection. This capability also enables measurement of mixed stacks of targets since each sample is identified prior to measurement. In addition, OCR is used to read toner ID, machine ID, print count, and other pertinent information regarding the printing conditions and environment. This data is saved to a data file along with the measurement results for complete test documentation. Measurement methods were developed to replace current methods of visual inspection and densitometry. The features that were being analyzed visually could be addressed via standard measurement algorithms. Measurement of density proved to be less simple since the scanner is not a densitometer and anything short of an excellent estimation would be meaningless. In order to address the measurement of density, a transfer curve was built to translate the

  2. Image quality and dose assessment in digital breast tomosynthesis: A Monte Carlo study

    NASA Astrophysics Data System (ADS)

    Baptista, M.; Di Maria, S.; Oliveira, N.; Matela, N.; Janeiro, L.; Almeida, P.; Vaz, P.

    2014-11-01

    Mammography is considered a standard technique for the early detection of breast cancer. However, its sensitivity is limited essentially due to the issue of the overlapping breast tissue. This limitation can be partially overcome, with a relatively new technique, called digital breast tomosynthesis (DBT). For this technique, optimization of acquisition parameters which maximize image quality, whilst complying with the ALARA principle, continues to be an area of considerable research. The aim of this work was to study the best quantum energies that optimize the image quality with the lowest achievable dose in DBT and compare these results with the digital mammography (DM) ones. Monte Carlo simulations were performed using the state-of-the-art computer program MCNPX 2.7.0 in order to generate several 2D cranio-caudal (CC) projections obtained during an acquisition of a standard DBT examination. Moreover, glandular absorbed doses and photon flux calculations, for each projection image, were performed. A homogeneous breast computational phantom with 50%/50% glandular/adipose tissue composition was used and two compressed breast thicknesses were evaluated: 4 cm and 8 cm. The simulated projection images were afterwards reconstructed with an algebraic reconstruction tool and the signal difference to noise ratio (SDNR) was calculated in order to evaluate the image quality in DBT and DM. Finally, a thorough comparison between the results obtained in terms of SDNR and dose assessment in DBT and DM was performed.
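
    The signal difference to noise ratio used here as the image-quality figure of merit has a simple operational form; a minimal sketch follows, with a synthetic reconstruction slice and hypothetical region-of-interest coordinates standing in for the simulated DBT/DM data.

      import numpy as np

      def sdnr(image, signal_roi, background_roi):
          """Signal-difference-to-noise ratio between two rectangular ROIs.

          Each ROI is given as (row_slice, col_slice); the background ROI
          supplies the noise estimate, following the usual convention.
          """
          signal = image[signal_roi]
          background = image[background_roi]
          return abs(signal.mean() - background.mean()) / background.std()

      # Example with a synthetic slice (ROI positions are illustrative).
      rng = np.random.default_rng(2)
      recon = rng.normal(100.0, 5.0, size=(256, 256))
      recon[100:140, 100:140] += 10.0  # embedded low-contrast detail
      print("SDNR:", sdnr(recon,
                          (slice(100, 140), slice(100, 140)),
                          (slice(10, 50), slice(10, 50))))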

  3. An improved Gabor enhancement method for low-quality fingerprint images

    NASA Astrophysics Data System (ADS)

    Geng, Hao; Li, Jicheng; Zhou, Jinwei; Chen, Dong

    2015-10-01

    Latent fingerprints lifted from crime scenes play an important role in police investigations and in solving cases, but they are often blurred, incomplete, and have low ridge contrast. Traditional fingerprint enhancement and identification methods have limitations, and current automated fingerprint identification systems (AFIS) have not been applied extensively to such evidence. Because the standard Gabor filter suffers from poor efficiency and imprecise estimation of ridge orientation parameters, enhancement of low-contrast fingerprint images often fails to achieve the desired effect. Therefore, an improved Gabor enhancement method for low-quality fingerprints is proposed in this paper. First, orientation templates at different scales are used to distinguish the orientation fields within the fingerprint area, and ridge orientation parameters are calculated. Second, mean ridge frequencies are extracted in local windows aligned with the ridge orientation, and mean frequency parameters are calculated. Third, the size and orientation of the Gabor filter are self-adjusted according to the local ridge orientation and mean frequency. Finally, the poor-quality fingerprint image is enhanced. In experiments, the improved Gabor filter performs better on low-quality fingerprint images than traditional filtering methods.
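
    A minimal sketch of the style of locally tuned Gabor filtering described above is shown below: it builds one Gabor kernel from a given local ridge orientation and mean frequency and applies it to an image block. The kernel construction, parameter names, and synthetic ridge pattern are generic assumptions, not the authors' exact implementation.

      import numpy as np
      from scipy.signal import convolve2d

      def gabor_kernel(theta, freq, sigma, size=15):
          """Real (even) Gabor kernel tuned to ridge orientation theta (radians)
          and ridge frequency freq (cycles per pixel)."""
          half = size // 2
          y, x = np.mgrid[-half:half + 1, -half:half + 1]
          # Rotate coordinates so x_t runs across the ridges.
          x_t = x * np.cos(theta) + y * np.sin(theta)
          y_t = -x * np.sin(theta) + y * np.cos(theta)
          envelope = np.exp(-(x_t ** 2 + y_t ** 2) / (2.0 * sigma ** 2))
          carrier = np.cos(2.0 * np.pi * freq * x_t)
          return envelope * carrier

      def enhance_block(block, theta, freq, sigma=4.0):
          """Filter one fingerprint block with a Gabor kernel tuned to the
          locally estimated orientation and mean frequency."""
          kernel = gabor_kernel(theta, freq, sigma)
          return convolve2d(block, kernel, mode="same", boundary="symm")

      # Example: a synthetic ridge pattern (30 degrees, period 8 px) plus noise.
      y, x = np.mgrid[0:64, 0:64]
      theta, freq = np.deg2rad(30.0), 1.0 / 8.0
      ridges = np.cos(2 * np.pi * freq * (x * np.cos(theta) + y * np.sin(theta)))
      noisy = ridges + np.random.default_rng(3).normal(0, 0.8, ridges.shape)
      enhanced = enhance_block(noisy, theta, freq)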

  4. Dosimetry and image quality in digital mammography facilities in the State of Minas Gerais, Brazil

    NASA Astrophysics Data System (ADS)

    da Silva, Sabrina Donato; Joana, Geórgia Santos; Oliveira, Bruno Beraldo; de Oliveira, Marcio Alves; Leyton, Fernando; Nogueira, Maria do Socorro

    2015-11-01

    According to the National Register of Health Care Facilities (CNES), there are approximately 477 mammography systems operating in the state of Minas Gerais, Brazil, of which an estimated 200 are digital apparatus using mainly computerized radiography (CR) or direct radiography (DR) systems. Mammography is irreplaceable in the diagnosis and early detection of breast cancer, the leading cause of cancer death among women worldwide. A high standard of image quality alongside smaller doses and optimization of procedures are essential if early detection is to occur. This study aimed to determine dosimetry and image quality in 68 mammography services in Minas Gerais using CR or DR systems. The data of this study were collected between the years of 2011 and 2013. The contrast-to-noise ratio proved to be a critical point in the image production chain in digital systems, since 90% of services were not compliant in this regard, mainly for larger PMMA thicknesses (60 and 70 mm). Regarding the image noise, only 31% of these were compliant. The average glandular dose found is of concern, since more than half of the services presented doses above acceptable limits. Therefore, despite the potential benefits of using CR and DR systems, the employment of this technology has to be revised and optimized to achieve better quality image and reduce radiation dose as much as possible.

  5. The impact of spectral filtration on image quality in micro-CT system.

    PubMed

    Ren, Liqiang; Ghani, Muhammad U; Wu, Di; Zheng, Bin; Chen, Yong; Yang, Kai; Wu, Xizeng; Liu, Hong

    2016-01-01

    This paper aims to evaluate the impact of spectral filtration on image quality in a microcomputed tomography (micro-CT) system. A mouse phantom comprising 11 rods for modeling lung, muscle, adipose, and bones was scanned with 17 s and 2 min acquisition protocols. The tube current (μA) for each scan was adjusted to achieve identical entrance exposure to the phantom, providing a baseline for image quality evaluation. For each region of interest (ROI) within a specific composition, CT number variations, noise levels, and contrast-to-noise ratios (CNRs) were evaluated from the reconstructed images. CT number variations and CNRs for high-density bone, muscle, and adipose were compared with theoretical predictions. The results show that the impact of spectral filtration on image quality indicators, such as CNR in a micro-CT system, is significantly associated with tissue characteristics. The findings may provide useful references for optimizing the scanning parameters of general micro-CT systems in future imaging applications. PACS numbers: 87.57.C-, 87.57.Q-, 87.64.kd.

  6. Effects of task and image properties on visual-attention deployment in image-quality assessment

    NASA Astrophysics Data System (ADS)

    Alers, Hani; Redi, Judith; Liu, Hantao; Heynderickx, Ingrid

    2015-03-01

    It is important to understand how humans view images and how their behavior is affected by changes in the properties of the viewed images and the task they are given, particularly the task of scoring the image quality (IQ). This is a complex behavior that holds great importance for the field of image-quality research. This work builds upon 4 years of research work spanning three databases studying image-viewing behavior. Using eye-tracking equipment, it was possible to collect information on human viewing behavior of different kinds of stimuli and under different experimental settings. This work performs a cross-analysis on the results from all these databases using state-of-the-art similarity measures. The results strongly show that asking the viewers to score the IQ significantly changes their viewing behavior. Also muting the color saturation seems to affect the saliency of the images. However, a change in IQ was not consistently found to modify visual attention deployment, neither under free looking nor during scoring. These results are helpful in gaining a better understanding of image viewing behavior under different conditions. They also have important implications on work that collects subjective image-quality scores from human observers.

  7. Reduced reference image quality assessment via sub-image similarity based redundancy measurement

    NASA Astrophysics Data System (ADS)

    Mou, Xuanqin; Xue, Wufeng; Zhang, Lei

    2012-03-01

    The reduced reference (RR) image quality assessment (IQA) has been attracting much attention from researchers for its loyalty to human perception and flexibility in practice. A promising RR metric should be able to predict the perceptual quality of an image accurately while using as few features as possible. In this paper, a novel RR metric is presented, whose novelty lies in two aspects. Firstly, it measures the image redundancy by calculating the so-called Sub-image Similarity (SIS), and the image quality is measured by comparing the SIS between the reference image and the test image. Secondly, the SIS is computed by the ratios of NSE (Non-shift Edge) between pairs of sub-images. Experiments on two IQA databases (i.e. LIVE and CSIQ databases) show that by using only 6 features, the proposed metric can work very well with high correlations between the subjective and objective scores. In particular, it works consistently well across all the distortion types.

  8. How much image noise can be added in cardiac x-ray imaging without loss in perceived image quality?

    NASA Astrophysics Data System (ADS)

    Gislason-Lee, Amber J.; Kumcu, Asli; Kengyelics, Stephen M.; Rhodes, Laura A.; Davies, Andrew G.

    2015-03-01

    Dynamic X-ray imaging systems are used for interventional cardiac procedures to treat coronary heart disease. X-ray settings are controlled automatically by specially-designed X-ray dose control mechanisms whose role is to ensure an adequate level of image quality is maintained with an acceptable radiation dose to the patient. Current commonplace dose control designs quantify image quality by performing a simple technical measurement directly from the image. However, the utility of cardiac X-ray images is in their interpretation by a cardiologist during an interventional procedure, rather than in a technical measurement. With the long term goal of devising a clinically-relevant image quality metric for an intelligent dose control system, we aim to investigate the relationship of image noise with clinical professionals' perception of dynamic image sequences. Computer-generated noise was added, in incremental amounts, to angiograms of five different patients selected to represent the range of adult cardiac patient sizes. A two-alternative forced-choice staircase experiment was used to determine the amount of noise which can be added to patient image sequences without changing image quality as perceived by clinical professionals. Twenty-five viewing sessions (five for each patient) were completed by thirteen observers. Results demonstrated scope to increase the noise of cardiac X-ray images by up to 21% +/- 8% before it becomes noticeable to clinical professionals. This indicates a potential for 21% radiation dose reduction since X-ray image noise and radiation dose are directly related; this would be beneficial to both patients and personnel.
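
    A forced-choice staircase of the kind described can be simulated in a few lines. The sketch below uses a simple 1-up/1-down rule and a simulated logistic observer with an assumed 21% detection threshold; both the rule and the observer model are illustrative assumptions rather than the study's exact psychophysical protocol.

      import numpy as np

      rng = np.random.default_rng(4)

      def observer_notices(added_noise, threshold=0.21, slope=25.0):
          """Simulated observer: probability of detecting the added noise rises
          smoothly around a hypothetical threshold (here 21% added noise)."""
          p = 1.0 / (1.0 + np.exp(-slope * (added_noise - threshold)))
          return rng.random() < p

      noise = 0.40          # starting amount of added noise (fraction)
      step = 0.04           # staircase step size
      reversals, direction = [], None

      for trial in range(60):
          noticed = observer_notices(noise)
          new_direction = "down" if noticed else "up"   # 1-up/1-down rule
          if direction and new_direction != direction:
              reversals.append(noise)
          direction = new_direction
          noise = max(0.0, noise - step) if noticed else noise + step

      # Threshold estimate: mean of the last few reversal points.
      print("estimated threshold:", np.mean(reversals[-6:]))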

  9. Improvement of material decomposition and image quality in dual-energy radiography by reducing image noise

    NASA Astrophysics Data System (ADS)

    Lee, D.; Kim, Y.-s.; Choi, S.; Lee, H.; Choi, S.; Jo, B. D.; Jeon, P.-H.; Kim, H.; Kim, D.; Kim, H.; Kim, H.-J.

    2016-08-01

    Although digital radiography has been widely used for screening human anatomical structures in clinical situations, it has several limitations due to anatomical overlapping. To resolve this problem, dual-energy imaging techniques, which provide a method for decomposing overlying anatomical structures, have been suggested as alternative imaging techniques. Previous studies have reported several dual-energy techniques, each resulting in different image qualities. In this study, we compared three dual-energy techniques: simple log subtraction (SLS), simple smoothing of a high-energy image (SSH), and anti-correlated noise reduction (ACNR) with respect to material thickness quantification and image quality. To evaluate dual-energy radiography, we conducted Monte Carlo simulation and experimental phantom studies. The Geant4 Application for Tomographic Emission (GATE) v6.0 and tungsten anode spectral model using interpolation polynomials (TASMIP) codes were used for the simulation studies, while a digital radiography system and human chest phantoms were used for the experimental studies. The results of the simulation study showed improved image contrast-to-noise ratio (CNR) and coefficient of variation (COV) values and bone thickness estimation accuracy by applying the ACNR and SSH methods. Furthermore, the chest phantom images showed better image quality with the SSH and ACNR methods compared to the SLS method. In particular, the bone texture characteristics were well-described by applying the SSH and ACNR methods. In conclusion, the SSH and ACNR methods improved the accuracy of material quantification and image quality in dual-energy radiography compared to SLS. Our results can contribute to better diagnostic capabilities of dual-energy images and accurate material quantification in various clinical situations.
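
    Of the three techniques compared, simple log subtraction has the most compact form: a weighted subtraction of the log-transmission images. The sketch below is a minimal illustration with synthetic two-material transmission images; the attenuation coefficients and the weighting factor (here chosen to cancel the soft-tissue term of the synthetic model) are assumptions, not the study's calibrated values.

      import numpy as np

      def simple_log_subtraction(low_kv, high_kv, w=0.5):
          """Dual-energy image by weighted log subtraction (SLS).

          low_kv, high_kv: transmission images (values in (0, 1]);
          w: weighting factor chosen to suppress the unwanted material,
          in practice calibrated per system.
          """
          eps = 1e-6  # avoid log(0)
          return np.log(high_kv + eps) - w * np.log(low_kv + eps)

      # Synthetic example: soft tissue plus a bone insert seen at two energies.
      rng = np.random.default_rng(5)
      soft = np.full((128, 128), 0.6)
      soft[:, 64:] = 0.5
      bone = np.zeros((128, 128))
      bone[40:90, 40:90] = 0.3
      low = np.exp(-(2.0 * soft + 5.0 * bone)) * rng.normal(1.0, 0.01, soft.shape)
      high = np.exp(-(1.0 * soft + 2.0 * bone)) * rng.normal(1.0, 0.01, soft.shape)
      # With these synthetic coefficients, w = 0.5 cancels the soft-tissue term.
      bone_image = simple_log_subtraction(low, high, w=0.5)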

  10. Human Visual System-Based Fundus Image Quality Assessment of Portable Fundus Camera Photographs.

    PubMed

    Wang, Shaoze; Jin, Kai; Lu, Haitong; Cheng, Chuming; Ye, Juan; Qian, Dahong

    2016-04-01

    Telemedicine and the medical "big data" era in ophthalmology highlight the use of non-mydriatic ocular fundus photography, which has given rise to indispensable applications of portable fundus cameras. However, in the case of portable fundus photography, non-mydriatic image quality is more vulnerable to distortions, such as uneven illumination, color distortion, blur, and low contrast. Such distortions are called generic quality distortions. This paper proposes an algorithm capable of selecting images of fair generic quality that would be especially useful to assist inexperienced individuals in collecting meaningful and interpretable data with consistency. The algorithm is based on three characteristics of the human visual system--multi-channel sensation, just noticeable blur, and the contrast sensitivity function to detect illumination and color distortion, blur, and low contrast distortion, respectively. A total of 536 retinal images, 280 from proprietary databases and 256 from public databases, were graded independently by one senior and two junior ophthalmologists, such that three partial measures of quality and generic overall quality were classified into two categories. Binary classification was implemented by the support vector machine and the decision tree, and receiver operating characteristic (ROC) curves were obtained and plotted to analyze the performance of the proposed algorithm. The experimental results revealed that the generic overall quality classification achieved a sensitivity of 87.45% at a specificity of 91.66%, with an area under the ROC curve of 0.9452, indicating the value of applying the algorithm, which is based on the human vision system, to assess the image quality of non-mydriatic photography, especially for low-cost ophthalmological telemedicine applications.
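
    A minimal sketch of the binary classification and ROC analysis stage is given below, assuming the partial quality measures have already been computed as per-image features. The synthetic feature values are placeholders for the paper's measures, only the SVM branch (not the decision tree) is shown, and the scikit-learn usage is illustrative rather than the authors' implementation.

      import numpy as np
      from sklearn.svm import SVC
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import roc_auc_score, roc_curve

      rng = np.random.default_rng(6)

      # Synthetic stand-ins for three partial quality measures
      # (illumination/color, blur, contrast) of 536 images.
      n = 536
      good = rng.normal([0.8, 0.7, 0.75], 0.1, size=(n // 2, 3))
      poor = rng.normal([0.5, 0.4, 0.45], 0.1, size=(n - n // 2, 3))
      X = np.vstack([good, poor])
      y = np.r_[np.ones(len(good)), np.zeros(len(poor))]  # 1 = acceptable quality

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
      clf = SVC(kernel="rbf", probability=True).fit(X_tr, y_tr)

      scores = clf.predict_proba(X_te)[:, 1]
      fpr, tpr, _ = roc_curve(y_te, scores)   # points of the ROC curve
      print("AUC:", roc_auc_score(y_te, scores))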

  11. COATLI: an all-sky robotic optical imager with 0.3 arcsec image quality

    NASA Astrophysics Data System (ADS)

    Watson, Alan M.; Cuevas Cardona, Salvador; Alvarez Nuñez, Luis C.; Ángeles, Fernando; Becerra-Godínez, Rosa L.; Chapa, Oscar; Farah, Alejandro S.; Fuentes-Fernández, Jorge; Figueroa, Liliana; Langarica Lebre, Rosalía.; Quiróz, Fernando; Román-Zúñiga, Carlos G.; Ruíz-Diáz-Soto, Jaime; Tejada, Carlos; Tinoco, Silvio J.

    2016-08-01

    COATLI will provide 0.3 arcsec FWHM images from 550 to 900 nm over a large fraction of the sky. It consists of a robotic 50-cm telescope with a diffraction-limited fast-guiding imager. Since the telescope is small, fast guiding will provide diffraction-limited image quality over a field of at least 1 arcmin and with coverage of a large fraction of the sky, even in relatively poor seeing. The COATLI telescope will be installed at the Observatorio Astronómico Nacional in Sierra San Pedro Mártir, México, during 2016 and the diffraction-limited imager will follow in 2017.

  12. No-reference image quality assessment in the spatial domain.

    PubMed

    Mittal, Anish; Moorthy, Anush Krishna; Bovik, Alan Conrad

    2012-12-01

    We propose a natural scene statistic-based distortion-generic blind/no-reference (NR) image quality assessment (IQA) model that operates in the spatial domain. The new model, dubbed blind/referenceless image spatial quality evaluator (BRISQUE) does not compute distortion-specific features, such as ringing, blur, or blocking, but instead uses scene statistics of locally normalized luminance coefficients to quantify possible losses of "naturalness" in the image due to the presence of distortions, thereby leading to a holistic measure of quality. The underlying features used derive from the empirical distribution of locally normalized luminances and products of locally normalized luminances under a spatial natural scene statistic model. No transformation to another coordinate frame (DCT, wavelet, etc.) is required, distinguishing it from prior NR IQA approaches. Despite its simplicity, we are able to show that BRISQUE is statistically better than the full-reference peak signal-to-noise ratio and the structural similarity index, and is highly competitive with respect to all present-day distortion-generic NR IQA algorithms. BRISQUE has very low computational complexity, making it well suited for real time applications. BRISQUE features may be used for distortion-identification as well. To illustrate a new practical application of BRISQUE, we describe how a nonblind image denoising algorithm can be augmented with BRISQUE in order to perform blind image denoising. Results show that BRISQUE augmentation leads to performance improvements over state-of-the-art methods. A software release of BRISQUE is available online: http://live.ece.utexas.edu/research/quality/BRISQUE_release.zip for public use and evaluation.
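
    The locally normalized luminance coefficients at the heart of this model (often called MSCN coefficients) can be computed with a short routine. The sketch below follows the standard divisive-normalization form with a Gaussian weighting window; the window scale, stabilizing constant, and synthetic input are assumed defaults rather than the released BRISQUE code.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def mscn_coefficients(image, sigma=7.0 / 6.0, c=1.0):
          """Mean-subtracted contrast-normalized (MSCN) coefficients.

          image: 2-D luminance array; sigma: Gaussian window scale;
          c: small constant that stabilizes the division in flat regions.
          """
          image = image.astype(np.float64)
          mu = gaussian_filter(image, sigma)
          sigma_map = np.sqrt(np.abs(gaussian_filter(image ** 2, sigma) - mu ** 2))
          return (image - mu) / (sigma_map + c)

      # The empirical distribution of these coefficients (and of products of
      # neighboring coefficients) supplies the "naturalness" features.
      rng = np.random.default_rng(7)
      luminance = rng.uniform(0, 255, size=(256, 256))
      mscn = mscn_coefficients(luminance)
      print("MSCN mean/std:", mscn.mean(), mscn.std())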

  13. Metal artifact reduction and image quality evaluation of lumbar spine CT images using metal sinogram segmentation.

    PubMed

    Kaewlek, Titipong; Koolpiruck, Diew; Thongvigitmanee, Saowapak; Mongkolsuk, Manus; Thammakittiphan, Sastrawut; Tritrakarn, Siri-on; Chiewvit, Pipat

    2015-01-01

    Metal artifacts often appear in computed tomography (CT) images. In lumbar spine CT images, such artifacts obscure critical organs and can affect the diagnosis, treatment, and follow-up care of the patient. One approach to metal artifact reduction is the sinogram completion method. A mixed-variable thresholding (MixVT) technique to identify the metal sinogram is proposed. The technique consists of four steps: 1) identify the metal objects in the image using k-means clustering with soft cluster assignment; 2) transform the image into two sinograms, one containing the metal object and the other the surrounding tissue, and find the boundary of the metal sinogram with the MixVT technique; 3) estimate new values for the missing data in the metal sinogram by linear interpolation from the surrounding-tissue sinogram; 4) reconstruct the modified sinogram by filtered back-projection and add the image of the metal object back into the reconstructed image to form the complete image. Quantitative and clinical image quality evaluation of the proposed technique demonstrated a significant improvement in image clarity and detail, which enhances the effectiveness of diagnosis and treatment.
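
    Step 3, replacing the missing metal-trace samples by linear interpolation from neighboring detector channels, is the core of sinogram-completion methods. A minimal sketch follows; it is independent of the MixVT boundary detection (the metal mask is simply assumed to be given), and the synthetic sinogram is illustrative.

      import numpy as np

      def inpaint_sinogram(sinogram, metal_mask):
          """Fill metal-corrupted sinogram samples by 1-D linear interpolation
          along each projection (row = one view angle, columns = detector bins).

          sinogram: 2-D array of projection data;
          metal_mask: boolean array of the same shape, True where the metal
          trace was identified (e.g., by thresholding a forward projection).
          """
          completed = sinogram.astype(np.float64).copy()
          bins = np.arange(sinogram.shape[1])
          for view in range(sinogram.shape[0]):
              bad = metal_mask[view]
              if bad.any() and not bad.all():
                  completed[view, bad] = np.interp(bins[bad], bins[~bad],
                                                   sinogram[view, ~bad])
          return completed

      # Example with a synthetic sinogram and a vertical metal trace.
      rng = np.random.default_rng(8)
      sino = rng.normal(1.0, 0.05, size=(180, 256))
      mask = np.zeros_like(sino, dtype=bool)
      mask[:, 120:136] = True
      clean = inpaint_sinogram(sino, mask)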

  14. 2011 John M. Eisenberg Patient Safety and Quality Awards. Individual Achievement. Interview by Eric J. Thomas.

    PubMed

    Shine, Kenneth

    2012-07-01

    Dr. Shine, who, as president, led the Institute of Medicine's focus on quality and patient safety, describes initiatives at the University of Texas System, including quality improvement training, systems engineering, assessment of projects' economic impact, and dissemination of good practices.

  15. Clinical study in phase-contrast mammography: image-quality analysis.

    PubMed

    Longo, Renata; Tonutti, Maura; Rigon, Luigi; Arfelli, Fulvia; Dreossi, Diego; Quai, Elisa; Zanconati, Fabrizio; Castelli, Edoardo; Tromba, Giuliana; Cova, Maria A

    2014-03-06

    The first clinical study of phase-contrast mammography (PCM) with synchrotron radiation was carried out at the Synchrotron Radiation for Medical Physics beamline of the Elettra synchrotron radiation facility in Trieste (Italy) in 2006-2009. The study involved 71 patients with unresolved breast abnormalities after conventional digital mammography and ultrasonography exams carried out at the Radiology Department of Trieste University Hospital. These cases were referred for mammography at the synchrotron radiation facility, with images acquired using a propagation-based phase-contrast imaging technique. To investigate the contribution of phase-contrast effects to the image quality, two experienced radiologists specialized in mammography assessed the visibility of breast abnormalities and of breast glandular structures. The images acquired at the hospital and at the synchrotron radiation facility were compared and graded according to a relative seven-grade visual scoring system. The statistical analysis highlighted that PCM with synchrotron radiation depicts normal structures and abnormal findings with higher image quality with respect to conventional digital mammography.

  16. Quality assessment of butter cookies applying multispectral imaging.

    PubMed

    Andresen, Mette S; Dissing, Bjørn S; Løje, Hanne

    2013-07-01

    A method for characterization of butter cookie quality by assessing the surface browning and water content using multispectral images is presented. Based on evaluations of the browning of butter cookies, cookies were manually divided into groups. From this categorization, reference values were calculated for a statistical prediction model correlating multispectral images with a browning score. The browning score is calculated as a function of oven temperature and baking time. It is presented as a quadratic response surface. The investigated process window was the intervals 4-16 min and 160-200°C in a forced convection electrically heated oven. In addition to the browning score, a model for predicting the average water content based on the same images is presented. This shows how multispectral images of butter cookies may be used for the assessment of different quality parameters. Statistical analysis showed that the most significant wavelengths for browning predictions were in the interval 400-700 nm and the wavelengths significant for water prediction were primarily located in the near-infrared spectrum. The water prediction model was found to correctly estimate the average water content with an absolute error of 0.22%. From the images it was also possible to follow the browning and drying propagation from the cookie edge toward the center.
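
    The browning-score model described here is a quadratic response surface in baking time and oven temperature. The least-squares sketch below illustrates only the model form; the observations and fitted coefficients are invented and are not the paper's data.

      import numpy as np

      # Hypothetical observations: (time [min], temperature [degC], browning score)
      obs = np.array([
          [4, 160, 0.5], [8, 160, 1.0], [12, 160, 1.6], [16, 160, 2.3],
          [4, 180, 0.8], [8, 180, 1.7], [12, 180, 2.6], [16, 180, 3.6],
          [4, 200, 1.3], [8, 200, 2.7], [12, 200, 3.9], [16, 200, 4.8],
      ])
      t, T, score = obs[:, 0], obs[:, 1], obs[:, 2]

      # Full quadratic model: b0 + b1*t + b2*T + b3*t^2 + b4*T^2 + b5*t*T
      design = np.column_stack([np.ones_like(t), t, T, t ** 2, T ** 2, t * T])
      coef, *_ = np.linalg.lstsq(design, score, rcond=None)

      def browning(time_min, temp_c):
          x = np.array([1.0, time_min, temp_c, time_min ** 2, temp_c ** 2,
                        time_min * temp_c])
          return float(x @ coef)

      print("predicted score at 10 min, 190 degC:", browning(10, 190))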

  17. Study on classification of pork quality using hyperspectral imaging technique

    NASA Astrophysics Data System (ADS)

    Zeng, Shan; Bai, Jun; Wang, Haibin

    2015-12-01

    Problems in discriminating chilled, thawed, and spoiled pork with hyperspectral imaging, including the selection of feature wavelengths, are investigated. First, based on hyperspectral image data of test pork samples in the 400-1000 nm range, a K-medoids clustering algorithm based on manifold distance is used to select 30 important wavelengths from the 753 available wavelengths, from which 8 feature wavelengths (454.4, 477.5, 529.3, 546.8, 568.4, 580.3, 589.9, and 781.2 nm) are chosen according to their discrimination value. Then, 8 texture features are extracted from the image at each of the 8 feature wavelengths by a two-dimensional Gabor wavelet transform and used as pork quality features. Finally, a pork quality classification model is built using the fuzzy C-means clustering algorithm. The wavelength-selection experiments show that although hyperspectral images in adjacent bands are strongly linearly correlated, across the entire band range they exhibit a significant nonlinear manifold relationship; K-medoids clustering based on manifold distance is therefore a more reasonable choice for selecting characteristic wavelengths than traditional principal component analysis (PCA). The classification results show that hyperspectral imaging can accurately distinguish among chilled, thawed, and spoiled meat.
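
    The final clustering step can be illustrated with a compact fuzzy C-means implementation. The sketch below is generic: the fuzzifier m = 2, the iteration count, and the random synthetic features standing in for the Gabor texture features are all assumptions, not the authors' settings.

      import numpy as np

      def fuzzy_c_means(X, n_clusters, m=2.0, n_iter=100, seed=0):
          """Basic fuzzy C-means: returns (cluster centers, membership matrix U)."""
          rng = np.random.default_rng(seed)
          U = rng.random((len(X), n_clusters))
          U /= U.sum(axis=1, keepdims=True)
          for _ in range(n_iter):
              W = U ** m
              centers = (W.T @ X) / W.sum(axis=0)[:, None]
              dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
              dist = np.maximum(dist, 1e-12)
              inv = dist ** (-2.0 / (m - 1.0))
              U = inv / inv.sum(axis=1, keepdims=True)
          return centers, U

      # Example: three synthetic "texture feature" clusters standing in for
      # chilled / thawed / spoiled samples (8-dimensional Gabor features).
      rng = np.random.default_rng(9)
      X = np.vstack([rng.normal(mu, 0.3, size=(40, 8)) for mu in (0.0, 2.0, 4.0)])
      centers, U = fuzzy_c_means(X, n_clusters=3)
      labels = U.argmax(axis=1)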

  18. Optimizing 3D image quality and performance for stereoscopic gaming

    NASA Astrophysics Data System (ADS)

    Flack, Julien; Sanderson, Hugh; Pegg, Steven; Kwok, Simon; Paterson, Daniel

    2009-02-01

    The successful introduction of stereoscopic TV systems, such as Samsung's 3D Ready Plasma, requires high quality 3D content to be commercially available to the consumer. Console and PC games provide the most readily accessible source of high quality 3D content. This paper describes innovative developments in a generic, PC-based game driver architecture that addresses the two key issues affecting 3D gaming: quality and speed. At the heart of the quality issue are the same considerations that studios face producing stereoscopic renders from CG movies: how best to perform the mapping from a geometric CG environment into the stereoscopic display volume. The major difference is that for game drivers this mapping cannot be choreographed by hand but must be calculated automatically in real time without significant impact on performance. Performance is a critical issue when dealing with gaming. Stereoscopic gaming has traditionally meant rendering the scene twice with the associated performance overhead. An alternative approach is to render the scene from one virtual camera position and use information from the z-buffer to generate a stereo pair using Depth-Image-Based Rendering (DIBR). We analyze this trade-off in more detail and provide some results relating to both 3D image quality and render performance.
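
    The DIBR alternative mentioned here amounts to shifting each pixel horizontally by a disparity derived from its depth. The sketch below shows forward warping of one rendered view plus its depth buffer into a left/right pair; the disparity model, the nearest-neighbor hole filling, and all function and parameter names are simplified assumptions, not the driver's actual algorithm.

      import numpy as np

      def dibr_stereo(color, depth, max_disparity=16):
          """Generate a left/right pair from one view plus its depth buffer.

          color: (H, W, 3) rendered image; depth: (H, W) normalized depth in
          [0, 1], where 0 is near. Near pixels get larger disparity. Holes left
          by disocclusion are filled by repeating the previous valid pixel,
          a crude stand-in for real hole-filling.
          """
          h, w, _ = color.shape
          disparity = (max_disparity * (1.0 - depth) / 2.0).astype(int)
          left = np.zeros_like(color)
          right = np.zeros_like(color)
          cols = np.arange(w)
          for y in range(h):
              lx = np.clip(cols + disparity[y], 0, w - 1)
              rx = np.clip(cols - disparity[y], 0, w - 1)
              left[y, lx] = color[y, cols]
              right[y, rx] = color[y, cols]
              for view in (left, right):          # naive hole filling
                  filled = view[y, 0].copy()
                  for x in range(w):
                      if view[y, x].any():
                          filled = view[y, x].copy()
                      else:
                          view[y, x] = filled
          return left, right

      # Example with a synthetic scene (gradient depth, random texture).
      rng = np.random.default_rng(10)
      img = rng.integers(0, 255, size=(120, 160, 3), dtype=np.uint8)
      z = np.tile(np.linspace(0.0, 1.0, 160), (120, 1))
      left, right = dibr_stereo(img, z)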

  19. Comparison-based Image Quality Assessment for Selecting Image Restoration Parameters.

    PubMed

    Liang, Haoyi; Weller, Daniel

    2016-08-19

    Image quality assessment (IQA) is traditionally classified into full-reference (FR) IQA, reduced-reference (RR) IQA, and no-reference (NR) IQA according to the amount of information required from the original image. Although NR-IQA and RR-IQA are widely used in practical applications, room for improvement still remains because of the lack of a reference image. Inspired by the fact that in many applications, such as parameter selection for image restoration algorithms, a series of distorted images are available, the authors propose a novel comparison-based image quality assessment (C-IQA) framework. The new comparison-based framework parallels FR-IQA by requiring two input images, and resembles NR-IQA by not using the original image. As a result, the new comparison-based approach has more application scenarios than FR-IQA does, and takes greater advantage of the accessible information than the traditional single-input NR-IQA does. Further, C-IQA is compared with other state-of-the-art NR-IQA methods and another RR-IQA method on two widely used IQA databases. Experimental results show that C-IQA outperforms the other methods for parameter selection, and the parameter trimming framework combined with C-IQA reduces the computation of iterative image reconstruction by up to 80%.

  20. Characterization of image quality and image-guidance performance of a preclinical microirradiator

    SciTech Connect

    Clarkson, R.; Lindsay, P. E.; Ansell, S.; Wilson, G.; Jelveh, S.; Hill, R. P.; Jaffray, D. A.

    2011-02-15

    Purpose: To assess image quality and image-guidance capabilities of a cone-beam CT based small-animal image-guided irradiation unit (micro-IGRT). Methods: A micro-IGRT system has been developed in collaboration with the authors' laboratory as a means to study the radiobiological effects of conformal radiation dose distributions in small animals. The system, the X-Rad 225Cx, consists of a 225 kVp x-ray tube and a flat-panel amorphous silicon detector mounted on a rotational C-arm gantry and is capable of both fluoroscopic x-ray and cone-beam CT imaging, as well as image-guided placement of the radiation beams. Image quality (voxel noise, modulation transfer, CT number accuracy, and geometric accuracy characteristics) was assessed using water cylinder and micro-CT test phantoms. Image guidance was tested by analyzing the dose delivered to radiochromic films fixed to BB's through the end-to-end process of imaging, targeting the center of the BB, and irradiation of the film/BB in order to compare the offset between the center of the field and the center of the BB. Image quality and geometric studies were repeated over a 5-7 month period to assess stability. Results: CT numbers reported were found to be linear (R² ≥ 0.998) and the noise for images of homogeneous water phantom was 30 HU at imaging doses of approximately 1 cGy (to water). The presampled MTF at 50% and 10% reached 0.64 and 1.35 mm⁻¹, respectively. Targeting accuracy by means of film irradiations was shown to have a mean displacement error of [Δx, Δy, Δz] = [-0.12, -0.05, -0.02] mm, with standard deviations of [0.02, 0.20, 0.17] mm. The system has proven to be stable over time, with both the image quality and image-guidance performance being reproducible for the duration of the studies. Conclusions: The micro-IGRT unit provides soft-tissue imaging of small-animal anatomy at acceptable imaging doses (≤1 cGy). The geometric accuracy and targeting systems permit dose placement with

  1. DES exposure checker: Dark Energy Survey image quality control crowdsourcer

    NASA Astrophysics Data System (ADS)

    Melchior, Peter; Sheldon, Erin; Drlica-Wagner, Alex; Rykoff, Eli S.

    2015-11-01

    DES exposure checker renders science-grade images directly to a web browser and allows users to mark problematic features from a set of predefined classes, thus allowing image quality control for the Dark Energy Survey to be crowdsourced through its web application. Users can also generate custom labels to help identify previously unknown problem classes; generated reports are fed back to hardware and software experts to help mitigate and eliminate recognized issues. These problem reports allow rapid correction of artifacts that otherwise may be too subtle or infrequent to be recognized.

  2. A monthly quality assurance procedure for 3D surface imaging.

    PubMed

    Wooten, H Omar; Klein, Eric E; Gokhroo, Garima; Santanam, Lakshmi

    2010-12-21

    A procedure for periodic quality assurance of a video surface imaging system is introduced. AlignRT is a video camera-based patient localization system that captures and compares images of a patient's topography to a DICOM-formatted external contour, then calculates shifts required to accurately reposition the patient. This technical note describes the tools and methods implemented in our department to verify correct and accurate operation of the AlignRT hardware and software components. The procedure described is performed monthly and complements a daily calibration of the system.

  3. Video Snapshots: Creating High-Quality Images from Video Clips.

    PubMed

    Sunkavalli, Kalyan; Joshi, Neel; Kang, Sing Bing; Cohen, Michael F; Pfister, Hanspeter

    2012-11-01

    We describe a unified framework for generating a single high-quality still image ("snapshot") from a short video clip. Our system allows the user to specify the desired operations for creating the output image, such as super resolution, noise and blur reduction, and selection of best focus. It also provides a visual summary of activity in the video by incorporating saliency-based objectives in the snapshot formation process. We show examples on a number of different video clips to illustrate the utility and flexibility of our system.

  4. Deep neural ensemble for retinal vessel segmentation in fundus images towards achieving label-free angiography.

    PubMed

    Lahiri, A; Roy, Abhijit Guha; Sheet, Debdoot; Biswas, Prabir Kumar

    2016-08-01

    Automated segmentation of retinal blood vessels in label-free fundus images plays a pivotal role in computer-aided diagnosis of ophthalmic pathologies, viz., diabetic retinopathy, hypertensive disorders and cardiovascular diseases. The challenge remains active in medical image analysis research due to the varied distribution of blood vessels, which vary in size and physical appearance against a noisy background. In this paper we formulate the segmentation challenge as a classification task. Specifically, we employ unsupervised hierarchical feature learning using an ensemble of two levels of sparsely trained denoised stacked autoencoders. First-level training with bootstrap samples ensures decoupling, and the second-level ensemble, formed from different network architectures, ensures architectural revision. We show that ensemble training of autoencoders fosters diversity in the learned dictionary of visual kernels for vessel segmentation. A softmax classifier is used for fine-tuning each member autoencoder, and multiple strategies are explored for two-level fusion of ensemble members. On the DRIVE dataset, we achieve a maximum average accuracy of 95.33% with an impressively low standard deviation of 0.003 and a Kappa agreement coefficient of 0.708. Comparison with other major algorithms substantiates the high efficacy of our model.

  5. Quality Assurance of Multiport Image-Guided Minimally Invasive Surgery at the Lateral Skull Base

    PubMed Central

    Nau-Hermes, Maria; Schmitt, Robert; Becker, Meike; El-Hakimi, Wissam; Hansen, Stefan; Klenzner, Thomas; Schipper, Jörg

    2014-01-01

    For multiport image-guided minimally invasive surgery at the lateral skull base a quality management is necessary to avoid the damage of closely spaced critical neurovascular structures. So far there is no standardized method applicable independently from the surgery. Therefore, we adapt a quality management method, the quality gates (QG), which is well established in, for example, the automotive industry and apply it to multiport image-guided minimally invasive surgery. QG divide a process into different sections. Passing between sections can only be achieved if previously defined requirements are fulfilled which secures the process chain. An interdisciplinary team of otosurgeons, computer scientists, and engineers has worked together to define the quality gates and the corresponding criteria that need to be fulfilled before passing each quality gate. In order to evaluate the defined QG and their criteria, the new surgery method was applied with a first prototype at a human skull cadaver model. We show that the QG method can ensure a safe multiport minimally invasive surgical process at the lateral skull base. Therewith, we present an approach towards the standardization of quality assurance of surgical processes. PMID:25105146

  6. Comparing hardcopy and softcopy results in the study of the impact of workflow on perceived reproduction quality of fine art images

    NASA Astrophysics Data System (ADS)

    Farnand, Susan; Jiang, Jun; Frey, Franziska

    2011-01-01

    A project, supported by the Andrew W. Mellon Foundation, is currently underway to evaluate current practices in fine art image reproduction, determine the image quality generally achievable, and establish a suggested framework for art image interchange. To determine the image quality currently being achieved, experimentation has been conducted in which a set of objective targets and pieces of artwork in various media were imaged by participating museums and other cultural heritage institutions. Prints and images for display made from the delivered image files at the Rochester Institute of Technology were used as stimuli in psychometric testing in which observers were asked to evaluate the prints as reproductions of the original artwork and as stand alone images. The results indicated that there were limited differences between assessments made using displayed images relative to printed reproductions. Further, the differences between rankings made with and without the original artwork present were much smaller than expected.

  7. Assessing image quality and dose reduction of a new x-ray computed tomography iterative reconstruction algorithm using model observers

    SciTech Connect

    Tseng, Hsin-Wu Kupinski, Matthew A.; Fan, Jiahua; Sainath, Paavana; Hsieh, Jiang

    2014-07-15

    Purpose: A number of different techniques have been developed to reduce radiation dose in x-ray computed tomography (CT) imaging. In this paper, the authors will compare task-based measures of image quality of CT images reconstructed by two algorithms: conventional filtered back projection (FBP), and a new iterative reconstruction algorithm (IR). Methods: To assess image quality, the authors used the performance of a channelized Hotelling observer acting on reconstructed image slices. The selected channels are dense difference-of-Gaussian (DDOG) channels. A body phantom and a head phantom were imaged 50 times at different dose levels to obtain the data needed to assess image quality. The phantoms consisted of uniform backgrounds with low contrast signals embedded at various locations. The tasks the observer model performed included (1) detection of a signal of known location and shape, and (2) detection and localization of a signal of known shape. The employed DDOG channels are based on the response of the human visual system. Performance was assessed using the areas under ROC curves and areas under localization ROC curves. Results: For signal known exactly (SKE) and location unknown/signal shape known tasks with circular signals of different sizes and contrasts, the authors’ task-based measures showed that an FBP-equivalent image quality can be achieved at lower dose levels using the IR algorithm. For the SKE case, the range of dose reduction is 50%–67% (head phantom) and 68%–82% (body phantom). For the location-unknown/signal-shape-known task, the dose reduction range was 67%–75% for the head phantom and 67%–77% for the body phantom. These results suggest that the IR images at lower dose settings can reach the same image quality when compared to full dose conventional FBP images. Conclusions: The work presented provides an objective way to quantitatively assess the image quality of a newly introduced CT IR algorithm. The performance of the

  8. Head Start Program Quality: Examination of Classroom Quality and Parent Involvement in Predicting Children's Vocabulary, Literacy, and Mathematics Achievement Trajectories

    ERIC Educational Resources Information Center

    Wen, Xiaoli; Bulotsky-Shearer, Rebecca J.; Hahs-Vaughn, Debbie L.; Korfmacher, Jon

    2012-01-01

    Guided by a developmental-ecological framework and Head Start's two-generational approach, this study examined two dimensions of Head Start program quality, classroom quality and parent involvement and their unique and interactive contribution to children's vocabulary, literacy, and mathematics skills growth from the beginning of Head Start…

  9. Reproducibility of Mammography Units, Film Processing and Quality Imaging

    NASA Astrophysics Data System (ADS)

    Gaona, Enrique

    2003-09-01

    The purpose of this study was to carry out an exploratory survey of the problems of quality control in mammography and processor units as a diagnosis of the current situation of mammography facilities. Measurements of reproducibility, optical density, optical difference and gamma index are included. Breast cancer is the most frequently diagnosed cancer and is the second leading cause of cancer death among women in the Mexican Republic. Mammography is a radiographic examination specially designed for detecting breast pathology. We found that the problems of reproducibility of the AEC are smaller than the problems of the processor units, because almost all processors fall outside the acceptable variation limits and can affect mammography image quality and the dose to the breast. Only four mammography units met the minimum score established by the ACR and FDA for the phantom image.

  10. Statistical iterative reconstruction to improve image quality for digital breast tomosynthesis

    PubMed Central

    Xu, Shiyu; Lu, Jianping; Zhou, Otto; Chen, Ying

    2015-01-01

    Purpose: Digital breast tomosynthesis (DBT) is a novel modality with the potential to improve early detection of breast cancer by providing three-dimensional (3D) imaging with a low radiation dose. 3D image reconstruction presents some challenges: cone-beam and flat-panel geometry, and highly incomplete sampling. A promising means to overcome these challenges is statistical iterative reconstruction (IR), since it provides the flexibility of accurate physics modeling and a general description of system geometry. The authors’ goal was to develop techniques for applying statistical IR to tomosynthesis imaging data. Methods: These techniques include the following: a physics model with a local voxel-pair based prior with flexible parameters to fine-tune image quality; a precomputed parameter λ in the prior, to remove data dependence and to achieve a uniform resolution property; an effective ray-driven technique to compute the forward and backprojection; and an oversampled, ray-driven method to perform high resolution reconstruction with a practical region-of-interest technique. To assess the performance of these techniques, the authors acquired phantom data on the stationary DBT prototype system. To solve the estimation problem, the authors proposed an optimization-transfer based algorithm framework that potentially allows fewer iterations to achieve an acceptably converged reconstruction. Results: IR improved the detectability of low-contrast and small microcalcifications, reduced cross-plane artifacts, improved spatial resolution, and lowered noise in reconstructed images. Conclusions: Although the computational load remains a significant challenge for practical development, the superior image quality provided by statistical IR, combined with advancing computational techniques, may bring benefits to screening, diagnostics, and intraoperative imaging in clinical applications. PMID:26328987

  11. A quality assurance program for image quality of cone-beam CT guidance in radiation therapy

    SciTech Connect

    Bissonnette, Jean-Pierre; Moseley, Douglas J.; Jaffray, David A.

    2008-05-15

    The clinical introduction of volumetric x-ray image-guided radiotherapy systems necessitates formal commissioning of the hardware and of the image-guided processes to be used, together with the drafting of quality assurance (QA) procedures for both hardware and processes. Satisfying both requirements provides confidence in the system's ability to manage geometric variations in patient setup and internal organ motion. As these systems become a routine clinical modality, the authors present data from their QA program tracking the image quality performance of ten volumetric systems over a period of 3 years. These data are subsequently used to establish evidence-based tolerances for a QA program. The volumetric imaging systems used in this work combine a linear accelerator with a conventional x-ray tube and an amorphous silicon flat-panel detector mounted orthogonally to the accelerator's central beam axis, in a cone-beam computed tomography (CBCT) configuration. In the spirit of AAPM Report No. 74, the present work presents the image quality portion of their QA program; the aspects of the QA protocol addressing imaging geometry have been presented elsewhere. Specifically, the authors are presenting data demonstrating the high linearity of CT numbers, the uniformity of axial reconstructions, and the high-contrast spatial resolution of ten CBCT systems (1-2 mm) from two commercial vendors. They are also presenting data accumulated over the period of several months demonstrating the long-term stability of the flat-panel detector and of the distances measured on reconstructed volumetric images. Their tests demonstrate that each specific CBCT system has unique performance. In addition, scattered x rays are shown to influence the imaging performance in terms of spatial resolution, axial reconstruction uniformity, and the linearity of CT numbers.

  12. A virtual image chain for perceived image quality of medical display

    NASA Astrophysics Data System (ADS)

    Marchessoux, Cédric; Jung, Jürgen

    2006-03-01

    This paper describes a virtual image chain for medical display (project VICTOR, granted in the 5th Framework Programme by the European Commission). The chain starts from raw data of an image digitizer (CR, DR) or synthetic patterns and covers image enhancement (MUSICA by Agfa) and both display possibilities, hardcopy (film on viewing box) and softcopy (monitor). A key feature of the chain is a complete image-wise approach. A first prototype is implemented in an object-oriented software platform. The display chain consists of several modules. Raw images are either taken from scanners (CR/DR) or from a pattern generator, in which the characteristics of CR/DR systems are introduced by their MTF and their dose-dependent Poisson noise. The image undergoes enhancement and is then displayed. For softcopy display, color and monochrome monitors are used in the simulation. The image is down-sampled. The non-linear response of a color monitor is taken into account by the GOG or S-curve model, whereas the Standard Gray-Scale-Display-Function (DICOM) is used for monochrome display. The MTF of the monitor is applied to the image in intensity levels. For hardcopy display, the combination of film, printer, lightbox and viewing condition is modeled. The image is up-sampled and the DICOM-GSDF or a Kanamori look-up table is applied. An anisotropic model for the MTF of the printer is applied to the image in intensity levels. The density-dependent color (XYZ) of the hardcopy film is introduced by look-up tables. Finally, a human visual system model is applied to the intensity images (XYZ in terms of cd/m²) in order to eliminate non-visible differences. Comparison leads to visible differences, which are quantified by higher-order image quality metrics. A specific image viewer is used for the visualization of the intensity image and the visual difference maps.

  13. Beef quality parameters estimation using ultrasound and color images

    PubMed Central

    2015-01-01

    Background Beef quality measurement is a complex task with high economic impact. There is high interest in obtaining an automatic estimate of quality parameters in live cattle or post mortem. In this paper we set out to obtain beef quality estimates from the analysis of ultrasound (in vivo) and color images (post mortem), with the measurement of various parameters related to tenderness and amount of meat: rib eye area, percentage of intramuscular fat, and backfat (subcutaneous fat) thickness. Proposal An algorithm based on curve evolution is implemented to calculate the rib eye area. The backfat thickness is estimated from the profile of distances between two curves that delimit the steak and the rib eye, previously detected. A model based on Support Vector Regression (SVR) is trained to estimate the intramuscular fat percentage. A set of features extracted from a region of interest, previously detected in both ultrasound and color images, was proposed. In all cases, a complete evaluation was performed with different databases including: color and ultrasound images acquired by a beef industry expert, intramuscular fat estimates obtained by an expert using commercial software, and chemical analysis. Conclusions The proposed algorithms give good results for calculating the rib eye area and the backfat thickness measure and profile. They are also promising in predicting the percentage of intramuscular fat. PMID:25734452
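
    A minimal sketch of the SVR step described above, fitting intramuscular-fat percentage from ROI features with scikit-learn; the placeholder features and targets are synthetic and stand in for the texture/intensity measures and chemical reference values used in the paper.

    ```python
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVR

    # Placeholder data: one row per ROI (texture/intensity features) and the
    # chemically measured intramuscular fat percentage as the target.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(120, 10))
    y = 3.0 + X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.3, size=120)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.2))
    model.fit(X_train, y_train)
    print("R^2 on held-out ROIs:", model.score(X_test, y_test))
    ```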

  14. On image quality metrics and the usefulness of grids in digital mammography

    PubMed Central

    Chen, Han; Danielsson, Mats; Xu, Cheng; Cederström, Björn

    2015-01-01

    Abstract. Antiscatter grids are used in digital mammography to reduce the scattered radiation from the breast and improve image contrast. They are, however, imperfect and lead to partial absorption of primary radiation, as well as failing to absorb all scattered radiation. Nevertheless, the general consensus has been that antiscatter grids improve image quality for the majority of breast types and sizes. There is, however, inconsistency in the literature, and recent results show that a substantial image quality improvement can be achieved even for thick breasts if the grid is removed. The purpose of this study was to investigate whether differences in the considered imaging task and experimental setup could explain the different outcomes. We estimated the dose reduction that can be achieved if the grid were to be removed, as a function of breast thickness, with varying geometries and experimental conditions. Image quality was quantified by the signal-difference-to-noise ratio (SDNR) measured using an aluminum (Al) filter on blocks of poly(methyl methacrylate) (PMMA), and images were acquired with and without the grid at a constant exposure. We also used a theoretical model validated with Monte Carlo simulations. Both theoretically and experimentally, the main finding was that when a large 4×8 cm2 Al filter was used, the SDNR values for the gridless images were overestimated by up to 25% compared to the values for the small 1×1 cm2 filter, and gridless imaging was superior for any PMMA thickness. For the small Al filter, gridless imaging was only superior for PMMA thinner than 4 cm. This discrepancy can be explained by a different sensitivity to and sampling of the angular scatter spread function, depending on the size of the contrast object. The experimental differences were eliminated either by using a smaller region of interest close to the edge of the large filter or by applying a scatter-correction technique that subtracts the estimated scatter image.
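
    The SDNR measurement described above reduces to a simple ROI calculation. The helper below is an illustrative sketch, assuming the with-grid and without-grid images and the ROI locations are supplied by the user; it is not the authors' analysis code.

    ```python
    import numpy as np

    def sdnr(image, signal_roi, background_roi):
        """Signal-difference-to-noise ratio between a filter ROI and a
        background ROI, each given as a (row_slice, col_slice) tuple."""
        signal = image[signal_roi]
        background = image[background_roi]
        return (signal.mean() - background.mean()) / background.std(ddof=1)

    # Hypothetical usage on images acquired with and without the grid at the
    # same exposure (arrays and ROI positions supplied elsewhere):
    # roi_sig = (slice(200, 260), slice(200, 260))
    # roi_bkg = (slice(400, 460), slice(200, 260))
    # print(sdnr(img_grid, roi_sig, roi_bkg), sdnr(img_nogrid, roi_sig, roi_bkg))
    ```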

  15. Effects of characteristics of image quality in an immersive environment

    NASA Technical Reports Server (NTRS)

    Duh, Henry Been-Lirn; Lin, James J W.; Kenyon, Robert V.; Parker, Donald E.; Furness, Thomas A.

    2002-01-01

    Image quality issues such as field of view (FOV) and resolution are important for evaluating "presence" and simulator sickness (SS) in virtual environments (VEs). This research examined effects on postural stability of varying FOV, image resolution, and scene content in an immersive visual display. Two different scenes (a photograph of a fountain and a simple radial pattern) at two different resolutions were tested using six FOVs (30, 60, 90, 120, 150, and 180 deg.). Both postural stability, recorded by force plates, and subjective difficulty ratings varied as a function of FOV, scene content, and image resolution. Subjects exhibited more balance disturbance and reported more difficulty in maintaining posture in the wide-FOV, high-resolution, and natural scene conditions.

  16. Objective assessment of image quality. IV. Application to adaptive optics

    PubMed Central

    Barrett, Harrison H.; Myers, Kyle J.; Devaney, Nicholas; Dainty, Christopher

    2008-01-01

    The methodology of objective assessment, which defines image quality in terms of the performance of specific observers on specific tasks of interest, is extended to temporal sequences of images with random point spread functions and applied to adaptive imaging in astronomy. The tasks considered include both detection and estimation, and the observers are the optimal linear discriminant (Hotelling observer) and the optimal linear estimator (Wiener). A general theory of first- and second-order spatiotemporal statistics in adaptive optics is developed. It is shown that the covariance matrix can be rigorously decomposed into three terms representing the effect of measurement noise, random point spread function, and random nature of the astronomical scene. Figures of merit are developed, and computational methods are discussed. PMID:17106464
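
    The Hotelling observer used above is the optimal linear discriminant with template w = K^{-1}(s1 - s0) and detectability SNR^2 = (s1 - s0)^T K^{-1} (s1 - s0). The sketch below is the generic sample-covariance form of that computation (requiring more image realizations than pixels), not the spatiotemporal formulation developed in the paper.

    ```python
    import numpy as np

    def hotelling_snr(signal_present, signal_absent):
        """Hotelling-observer detectability from two stacks of sample images,
        each of shape (n_realizations, n_pixels); n_realizations should be
        well above n_pixels for the sample covariance to be invertible."""
        delta = signal_present.mean(axis=0) - signal_absent.mean(axis=0)
        cov = 0.5 * (np.cov(signal_present, rowvar=False) +
                     np.cov(signal_absent, rowvar=False))
        template = np.linalg.solve(cov, delta)   # w = K^{-1} (s1 - s0)
        return np.sqrt(delta @ template), template

    # Hypothetical usage with small ROIs flattened to vectors:
    # snr, w = hotelling_snr(rois_with_signal, rois_without_signal)
    ```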

  17. Objective assessment of image quality. IV. Application to adaptive optics.

    PubMed

    Barrett, Harrison H; Myers, Kyle J; Devaney, Nicholas; Dainty, Christopher

    2006-12-01

    The methodology of objective assessment, which defines image quality in terms of the performance of specific observers on specific tasks of interest, is extended to temporal sequences of images with random point spread functions and applied to adaptive imaging in astronomy. The tasks considered include both detection and estimation, and the observers are the optimal linear discriminant (Hotelling observer) and the optimal linear estimator (Wiener). A general theory of first- and second-order spatiotemporal statistics in adaptive optics is developed. It is shown that the covariance matrix can be rigorously decomposed into three terms representing the effect of measurement noise, random point spread function, and random nature of the astronomical scene. Figures of merit are developed, and computational methods are discussed.

  18. Objective assessment of image quality. IV. Application to adaptive optics

    NASA Astrophysics Data System (ADS)

    Barrett, Harrison H.; Myers, Kyle J.; Devaney, Nicholas; Dainty, Christopher

    2006-12-01

    The methodology of objective assessment, which defines image quality in terms of the performance of specific observers on specific tasks of interest, is extended to temporal sequences of images with random point spread functions and applied to adaptive imaging in astronomy. The tasks considered include both detection and estimation, and the observers are the optimal linear discriminant (Hotelling observer) and the optimal linear estimator (Wiener). A general theory of first- and second-order spatiotemporal statistics in adaptive optics is developed. It is shown that the covariance matrix can be rigorously decomposed into three terms representing the effect of measurement noise, random point spread function, and random nature of the astronomical scene. Figures of merit are developed, and computational methods are discussed.

  19. Adaptive photoacoustic imaging quality optimization with EMD and reconstruction

    NASA Astrophysics Data System (ADS)

    Guo, Chengwen; Ding, Yao; Yuan, Jie; Xu, Guan; Wang, Xueding; Carson, Paul L.

    2016-10-01

    Biomedical photoacoustic (PA) signals are characterized by an extremely low signal-to-noise ratio, which yields significant artifacts in photoacoustic tomography (PAT) images. Since PA signals acquired by ultrasound transducers are non-linear and non-stationary, traditional data analysis methods such as Fourier and wavelet methods cannot provide sufficient information for further processing. In this paper, we introduce an adaptive method to improve the quality of PA imaging based on empirical mode decomposition (EMD) and reconstruction. Data acquired by ultrasound transducers are adaptively decomposed into several intrinsic mode functions (IMFs) after a sifting pre-process. Since noise is randomly distributed across the different IMFs, suppressing IMFs with more noise while enhancing IMFs with less noise can effectively improve the quality of reconstructed PAT images. However, searching for optimal parameters with brute-force algorithms costs too much time, which prevents the method from practical use. To find parameters within reasonable time, heuristic algorithms, which are designed to find good solutions more efficiently when traditional methods are too slow, are adopted in our method. Two heuristic algorithms are used to search for the optimal IMF parameters in this paper: the Simulated Annealing algorithm, a probabilistic method for approximating the global optimum, and the Artificial Bee Colony algorithm, an optimization method inspired by the foraging behavior of bee swarms. The effectiveness of the proposed method is demonstrated on both simulated data and PA signals from real biological tissue, and it bears potential for future clinical PA imaging de-noising.
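
    A minimal sketch of the IMF weighting and heuristic search described above, assuming the IMFs have already been computed by any EMD implementation (e.g. PyEMD) and that a clean reference trace is available for the simulated case; the bounds and objective are illustrative, and the real method would optimize an image-quality criterion on the reconstructed PAT image.

    ```python
    import numpy as np
    from scipy.optimize import dual_annealing

    def reconstruct(imfs, weights):
        """Weighted sum of intrinsic mode functions; imfs has shape (n_imfs, n)."""
        return np.tensordot(weights, imfs, axes=1)

    def negative_snr(weights, imfs, reference):
        """Objective for the simulated case: negative SNR of the weighted
        reconstruction against a clean reference trace."""
        residual = reconstruct(imfs, weights) - reference
        return -10.0 * np.log10(np.sum(reference ** 2) / np.sum(residual ** 2))

    # imfs: array from any EMD implementation (e.g. PyEMD); reference: clean
    # simulated PA signal. Both assumed to be available.
    # bounds = [(0.0, 1.5)] * imfs.shape[0]
    # result = dual_annealing(negative_snr, bounds, args=(imfs, reference), maxiter=200)
    # best_weights = result.x
    ```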

  20. Image gathering and digital restoration for fidelity and visual quality

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O.; Alter-Gartenberg, Rachel; Rahman, Zia-Ur

    1991-01-01

    The fidelity and resolution of the traditional Wiener restorations given in the prevalent digital processing literature can be significantly improved when the transformations between the continuous and discrete representations in image gathering and display are accounted for. However, the visual quality of these improved restorations is also more sensitive to the defects caused by aliasing artifacts, colored noise, and ringing near sharp edges. In this paper, these visual defects are characterized, and methods for suppressing them are presented. It is demonstrated how the visual quality of fidelity-maximized images can be improved when (1) the image-gathering system is specifically designed to enhance the performance of the image-restoration algorithm, and (2) the Wiener filter is combined with interactive Gaussian smoothing, synthetic-high edge enhancement, and nonlinear tone-scale transformation. The nonlinear transformation is used primarily to enhance the spatial details that are often obscured when the normally wide dynamic range of natural radiance fields is compressed into the relatively narrow dynamic range of film and other displays.
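
    The Wiener step referred to above is a frequency-domain filter of the form G*/(|G|^2 + N/S). The sketch below is a generic single-channel version with a user-supplied OTF and a scalar noise-to-signal ratio; the interactive smoothing, edge enhancement, and tone-scale stages discussed in the abstract are not included.

    ```python
    import numpy as np

    def wiener_restore(degraded, otf, noise_to_signal=1e-2):
        """Frequency-domain Wiener restoration: conj(G) / (|G|^2 + N/S), with
        the system OTF sampled on the same grid as the degraded image."""
        spectrum = np.fft.fft2(degraded)
        wiener = np.conj(otf) / (np.abs(otf) ** 2 + noise_to_signal)
        return np.real(np.fft.ifft2(wiener * spectrum))

    # Hypothetical usage with a precomputed blur OTF:
    # restored = wiener_restore(blurred_image, blur_otf, noise_to_signal=0.02)
    ```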

  1. Achieving the Health Care Financing Administration limits by quality improvement and quality control. A real-world example.

    PubMed

    Engebretson, M J; Cembrowski, G S

    1992-07-01

    With the enactment of the Clinical Laboratory Improvement Amendments of 1988 (CLIA 88), the federal government is now using proficiency testing as the primary indicator of laboratory quality. Laboratories with proficiency test failures are now at risk of a variety of harsh penalties including large monetary fines and suspension of operations. To minimize the risk of failed proficiency testing, we initiated a continuous quality improvement program in our general chemistry laboratory in conjunction with the use of a new survey-validated quality control product. This article describes the quality improvement program and our success in reducing the long-term random error in general chemistry. Despite our improvement program, significant analytical errors (greater than 30% of the CLIA limits) still exist in analytes measured by our chemistry analyzer. These errors are present in nearly the same analytes measured by other common chemistry analyzers indicating the need for improvement in their design and manufacture.

  2. Dual-energy cardiac imaging: an image quality and dose comparison for a flat-panel detector and x-ray image intensifier

    NASA Astrophysics Data System (ADS)

    Ducote, Justin L.; Xu, Tong; Molloi, Sabee

    2007-01-01

    This study presents a comparison of dual-energy imaging with an x-ray image intensifier and flat-panel detector for cardiac imaging. It also investigates if the wide dynamic range of the flat-panel detector can improve dual-energy image quality while reducing patient dose. Experimental contrast-to-noise (CNR) measurements were carried out in addition to simulation studies. Patient entrance exposure and system tube loading were also recorded. The studied contrast objects were calcium and iodine. System performance was quantified with a figure-of-merit (FOM) defined as the image CNR^2 over patient entrance exposure. The range of thickness studied was from 10 to 30 cm of Lucite (PMMA). Detector dose was initially set to 140 nGy (16 µR)/frame. The high-energy 120 kVp beam was filtered by an additional 0.8 mm silver filter. Keeping the same filament current, the kVp for the low-energy beam was adjusted as a function of thickness until 140 nGy was achieved. System performance was found to be similar for both systems, with the x-ray image intensifier performing better at lower thicknesses and the flat-panel detector performing better at higher thicknesses. This requirement of fixed detector entrance exposure was then relaxed and the kVp for the low-energy beam was allowed to vary while the mAs of the x-ray tube remained fixed to study changes in dual-energy image quality, patient dose and FOM with the flat-panel detector. It was found that as the kVp for the low-energy beam was reduced, system performance would rise until reaching a maximum while simultaneously lowering patient exposure. Suggested recommendations for optimal dual-energy imaging implementation are also provided.
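
    The figure of merit used above is simply the squared CNR per unit patient entrance exposure; the tiny helper below illustrates the arithmetic with made-up numbers and is not part of the study.

    ```python
    def dual_energy_fom(cnr, entrance_exposure):
        """Figure of merit as defined above: CNR^2 per unit patient entrance
        exposure (illustrative helper, not the authors' code)."""
        return cnr ** 2 / entrance_exposure

    # Made-up numbers: a CNR of 8 at 2.0 mGy entrance exposure.
    print(dual_energy_fom(8.0, 2.0))  # 32.0
    ```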

  3. Dual-energy cardiac imaging: an image quality and dose comparison for a flat-panel detector and x-ray image intensifier.

    PubMed

    Ducote, Justin L; Xu, Tong; Molloi, Sabee

    2007-01-07

    This study presents a comparison of dual-energy imaging with an x-ray image intensifier and flat-panel detector for cardiac imaging. It also investigates if the wide dynamic range of the flat-panel detector can improve dual-energy image quality while reducing patient dose. Experimental contrast-to-noise (CNR) measurements were carried out in addition to simulation studies. Patient entrance exposure and system tube loading were also recorded. The studied contrast objects were calcium and iodine. System performance was quantified with a figure-of-merit (FOM) defined as the image CNR^2 over patient entrance exposure. The range of thickness studied was from 10 to 30 cm of Lucite (PMMA). Detector dose was initially set to 140 nGy (16 microR)/frame. The high-energy 120 kVp beam was filtered by an additional 0.8 mm silver filter. Keeping the same filament current, the kVp for the low-energy beam was adjusted as a function of thickness until 140 nGy was achieved. System performance was found to be similar for both systems, with the x-ray image intensifier performing better at lower thicknesses and the flat-panel detector performing better at higher thicknesses. This requirement of fixed detector entrance exposure was then relaxed and the kVp for the low-energy beam was allowed to vary while the mAs of the x-ray tube remained fixed to study changes in dual-energy image quality, patient dose and FOM with the flat-panel detector. It was found that as the kVp for the low-energy beam was reduced, system performance would rise until reaching a maximum while simultaneously lowering patient exposure. Suggested recommendations for optimal dual-energy imaging implementation are also provided.

  4. Image Quality Improvement in Adaptive Optics Scanning Laser Ophthalmoscopy Assisted Capillary Visualization Using B-spline-based Elastic Image Registration

    PubMed Central

    Uji, Akihito; Ooto, Sotaro; Hangai, Masanori; Arichika, Shigeta; Yoshimura, Nagahisa

    2013-01-01

    Purpose To investigate the effect of B-spline-based elastic image registration on adaptive optics scanning laser ophthalmoscopy (AO-SLO)-assisted capillary visualization. Methods AO-SLO videos were acquired from parafoveal areas in the eyes of healthy subjects and patients with various diseases. The image quality of capillary images constructed from AO-SLO videos using motion contrast enhancement was compared before and after B-spline-based elastic (nonlinear) image registration performed using ImageJ. For objective comparison of image quality, contrast-to-noise ratios (CNRs) for vessel images were calculated. For subjective comparison, experienced ophthalmologists ranked images on a 5-point scale. Results All AO-SLO videos were successfully stabilized by elastic image registration. CNR was significantly higher in capillary images stabilized by elastic image registration than in those stabilized without registration. The average ratio of CNR in images with elastic image registration to CNR in images without elastic image registration was 2.10 ± 1.73, with no significant difference in the ratio between patients and healthy subjects. Improvement of image quality was also supported by expert comparison. Conclusions Use of B-spline-based elastic image registration in AO-SLO-assisted capillary visualization was effective for enhancing image quality both objectively and subjectively. PMID:24265796
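
    The study performed the elastic registration with an ImageJ plugin; purely as an illustration of B-spline (nonlinear) frame registration, the sketch below uses SimpleITK with hypothetical file names and a coarse control-point grid.

    ```python
    import SimpleITK as sitk

    # Hypothetical file names; frames are single-channel AO-SLO images.
    fixed = sitk.ReadImage("aoslo_frame_000.tif", sitk.sitkFloat32)   # reference
    moving = sitk.ReadImage("aoslo_frame_001.tif", sitk.sitkFloat32)  # to be warped

    # Coarse control-point grid for the deformable (B-spline) transform.
    initial_tx = sitk.BSplineTransformInitializer(fixed, [8, 8])

    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMeanSquares()
    reg.SetOptimizerAsLBFGSB(gradientConvergenceTolerance=1e-5, numberOfIterations=100)
    reg.SetInitialTransform(initial_tx, inPlace=True)
    reg.SetInterpolator(sitk.sitkLinear)

    final_tx = reg.Execute(fixed, moving)
    registered = sitk.Resample(moving, fixed, final_tx, sitk.sitkLinear, 0.0)
    ```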

  5. Homework Works If Homework Quality Is High: Using Multilevel Modeling to Predict the Development of Achievement in Mathematics

    ERIC Educational Resources Information Center

    Dettmers, Swantje; Trautwein, Ulrich; Ludtke, Oliver; Kunter, Mareike; Baumert, Jurgen

    2010-01-01

    The present study examined the associations of 2 indicators of homework quality (homework selection and homework challenge) with homework motivation, homework behavior, and mathematics achievement. Multilevel modeling was used to analyze longitudinal data from a representative national sample of 3,483 students in Grades 9 and 10; homework effects…

  6. The Perception of Preservice Mathematics Teachers on the Role of Scaffolding in Achieving Quality Mathematics Classroom Instruction

    ERIC Educational Resources Information Center

    Bature, Iliya Joseph; Jibrin, Adamu Gagdi

    2015-01-01

    This paper was designed to investigate the perceptions of four preservice mathematics teachers on the role of scaffolding in supporting and assisting them to achieve quality classroom teaching. A collaborative approach to teaching through a community of practice was used to obtain data for the three research objectives that were postulated. Two…

  7. Mathematics Achievement among Secondary Students in Relation to Enrollment/Nonenrollment in Music Programs of Differing Content or Quality

    ERIC Educational Resources Information Center

    Van der Vossen, Maria R.

    2012-01-01

    This causal-comparative study examined the relationship between enrollment/non-enrollment in music programs of differing content or quality and mathematical achievement among 739 secondary (grades 8-12) students from four different Maryland counties. The students, both female and male, were divided into sample groups by their participation in a…

  8. Training Needs for Faculty Members: Towards Achieving Quality of University Education in the Light of Technological Innovations

    ERIC Educational Resources Information Center

    Abouelenein, Yousri Attia Mohamed

    2016-01-01

    The aim of this study was to identify training needs of university faculty members, in order to achieve the desired quality in the light of technological innovations. A list of training needs of faculty members was developed in terms of technological innovations in general, developing skills of faculty members in the use of technological…

  9. A Multilevel Analysis of the Role of School Quality and Family Background on Students' Mathematics Achievement in the Middle East

    ERIC Educational Resources Information Center

    Kareshki, Hossein; Hajinezhad, Zahra

    2014-01-01

    The purpose of the present study is investigating the correlation between school quality and family socioeconomic background and students' mathematics achievement in the Middle East. The countries in comparison are UAE, Syria, Qatar, Iran, Saudi Arabia, Oman, Lebanon, Jordan, and Bahrain. The study utilized data from IEA's Trends in International…

  10. Bhutanese Stakeholders' Perceptions about Multi-Grade Teaching as a Strategy for Achieving Quality Universal Primary Education

    ERIC Educational Resources Information Center

    Kucita, Pawan; Kivunja, Charles; Maxwell, T. W.; Kuyini, Bawa

    2013-01-01

    This study employed document analysis and qualitative interviews to explore the perceptions of different Bhutanese stakeholders about multi-grade teaching, which the Bhutanese Government identified as a strategy for achieving quality Universal Primary Education. The data from Ministry officials, teachers and student teachers were analyzed using…

  11. TU-B-19A-01: Image Registration II: TG132-Quality Assurance for Image Registration

    SciTech Connect

    Brock, K; Mutic, S

    2014-06-15

    AAPM Task Group 132 was charged with reviewing the current approaches and solutions for image registration in radiotherapy and with providing recommendations for quality assurance and quality control of these clinical processes. As the results of image registration are always used as the input to another process for planning or delivery, it is important for the user to understand and document the uncertainty associated with the algorithm in general and with the result of a specific registration. The recommendations of this task group, which at the time of abstract submission are being reviewed by the AAPM, include the following components. The user should understand the basic image registration techniques and methods of visualizing image fusion. The disclosure of the basic components of the image registration by commercial vendors is critical in this respect. The physicist should perform end-to-end tests of the imaging, registration, and planning/treatment systems if image registration is performed on a stand-alone system. A comprehensive commissioning process should be performed and documented by the physicist prior to clinical use of the system. As documentation is important to the safe implementation of this process, a request and report system should be integrated into the clinical workflow. Finally, a patient-specific QA practice should be established for efficient evaluation of image registration results. The implementation of these recommendations will be described and illustrated during this educational session. Learning Objectives: Highlight the importance of understanding the image registration techniques used in the clinic. Describe the end-to-end tests needed for stand-alone registration systems. Illustrate a comprehensive commissioning program using both phantom data and clinical images. Describe a request and report system to ensure communication and documentation. Demonstrate a clinically efficient patient QA practice for evaluation of image registration results.

  12. SU-E-J-36: Comparison of CBCT Image Quality for Manufacturer Default Imaging Modes

    SciTech Connect

    Nelson, G

    2015-06-15

    Purpose: CBCT is increasingly used for patient setup in radiotherapy. Often the manufacturer default scan modes are used for these CBCT scans on the assumption that they are the best options. To quantitatively assess the image quality of these scan modes, all of the scan modes were tested, as well as options within the reconstruction algorithm. Methods: A CatPhan 504 phantom was scanned on a TrueBeam linear accelerator using the manufacturer scan modes (FSRT Head, Head, Image Gently, Pelvis, Pelvis Obese, Spotlight, and Thorax). The Head mode scan was then reconstructed multiple times with all filter options (Smooth, Standard, Sharp, and Ultra Sharp) and all ring suppression options (Disabled, Weak, Medium, and Strong). An open-source ImageJ tool was created for analyzing the CatPhan 504 images. Results: The MTF curve was primarily dictated by the voxel size and the filter used in the reconstruction algorithm. The filters also affect the image noise. The CNR was worst for the Image Gently mode, followed by FSRT Head and Head, and the sharper the filter, the worse the CNR. HU values varied significantly between scan modes: Pelvis Obese produced lower HU values than expected, while the Image Gently mode produced higher HU values than expected. If a therapist tried to use preset window and level settings, they would not show the desired tissue for some scan modes. Conclusion: Knowing the image quality of the preset scan modes will enable users to better optimize their setup CBCT. Evaluation of scan mode image quality could improve setup efficiency and lead to better treatment outcomes.

  13. College Students' Physical Activity and Health-Related Quality of Life: An Achievement Goal Perspective

    ERIC Educational Resources Information Center

    Zhang, Tao; Xiang, Ping; Gu, Xiangli; Rose, Melanie

    2016-01-01

    Purpose: The 2 × 2 achievement goal model, including the mastery-approach, mastery-avoidance, performance-approach, and performance-avoidance goal orientations, has recently been used to explain motivational outcomes in physical activity. This study attempted to examine the relationships among 2 × 2 achievement goal orientations, physical…

  14. Measuring Teacher Quality: Continuing the Search for Policy-Relevant Predictors of Student Achievement

    ERIC Educational Resources Information Center

    Knoeppel, Robert C.; Logan, Joyce P.; Keiser, Clare M.

    2005-01-01

    The purpose of this study was to investigate the potential viability of the variable certification by the National Board for Professional Teaching Standards (NBPTS) as a policy-relevant predictor of student achievement. Because research has identified the teacher as the most important school-related predictor of student achievement, more research…

  15. Image Quality of the Helioseismic and Magnetic Imager (HMI) Onboard the Solar Dynamics Observatory (SDO)

    NASA Technical Reports Server (NTRS)

    Wachter, R.; Schou, Jesper; Rabello-Soares, M. C.; Miles, J. W.; Duvall, T. L., Jr.; Bush, R. I.

    2011-01-01

    We describe the imaging quality of the Helioseismic and Magnetic Imager (HMI) onboard the Solar Dynamics Observatory (SDO) as measured during the ground calibration of the instrument. We describe the calibration techniques and report our results for the final configuration of HMI. We present the distortion, modulation transfer function, stray light, image shifts introduced by moving parts of the instrument, best focus, field curvature, and the relative alignment of the two cameras. We investigate the gain and linearity of the cameras, and present the measured flat field.

  16. Does High Quality Childcare Narrow the Achievement Gap at Two Years of Age?

    ERIC Educational Resources Information Center

    Ruzek, Erik; Burchinal, Margaret; Farkas, George; Duncan, Greg; Dang, Tran; Lee, Weilin

    2011-01-01

    The authors use the ECLS-B, a nationally-representative study of children born in 2001 to report the child care arrangements and quality characteristics for 2-year olds in the United States and to estimate the effects of differing levels of child care quality on two-year old children's cognitive development. Their goal is to test whether high…

  17. Achieving Quality within Funding Constraints: The Potential Contribution of Institutional Research. AIR 1995 Annual Forum Paper.

    ERIC Educational Resources Information Center

    Zimmer, Bruce

    How institutional research can improve the quality of institutional performance of colleges and universities in the face of funding constraints is discussed. An example of the use of institutional research to assist in documenting institutional quality in Australia is noted. Institutional research uses data collection, analysis, and interpretation…

  18. Quality improvement initiatives in neonatal intensive care unit networks: achievements and challenges.

    PubMed

    Shah, Vibhuti; Warre, Ruth; Lee, Shoo K

    2013-01-01

    Neonatal intensive care unit networks that encompass regions, states, and even entire countries offer the perfect platform for implementing continuous quality improvement initiatives to advance the health care provided to vulnerable neonates. Through cycles of identification and implementation of best available evidence, benchmarking, and feedback of outcomes, combined with mutual collaborative learning through a network of providers, the performance of health care systems and neonatal outcomes can be improved. We use examples of successful neonatal networks from across North America to explore continuous quality improvement in the neonatal intensive care unit, including the rationale for the formation of neonatal networks, the role of networks in continuous quality improvement, quality improvement methods and outcomes, and barriers to and facilitators of quality improvement.

  19. Image quality and localization accuracy in C-arm tomosynthesis-guided head and neck surgery

    SciTech Connect

    Bachar, G.; Siewerdsen, J. H.; Daly, M. J.; Jaffray, D. A.; Irish, J. C.

    2007-12-15

    An overall 3D localization accuracy of approximately 2.5 mm was achieved with θtot ≈ 90° for most tasks. The high in-plane spatial resolution, short scanning time, and low radiation dose characteristic of tomosynthesis may enable the surgeon to collect near real-time images throughout the procedure with minimal interference to surgical workflow. Therefore, tomosynthesis could provide a useful addition to the image-guided surgery arsenal, providing on-demand, high quality image updates, complemented by CBCT at critical milestones in the surgical procedure.

  20. Color image quality in projection displays: a case study

    NASA Astrophysics Data System (ADS)

    Strand, Monica; Hardeberg, Jon Y.; Nussbaum, Peter

    2005-01-01

    Recently the use of projection displays has increased dramatically in applications such as digital cinema, home theatre, and business and educational presentations. Even if the color image quality of these devices has improved significantly over the years, it is still common for users of projection displays to find that the projected colors differ significantly from the intended ones. The study presented in this paper attempts to analyze the color image quality of a large set of projection display devices, particularly investigating the variations in color reproduction. As a case study, a set of 14 projectors (LCD and DLP technology) at Gjovik University College were tested under four different conditions: dark and light room, with and without using an ICC profile. To find out more about the importance of the illumination conditions in a room, and the degree of improvement when using an ICC profile, the results from the measurements were processed and analyzed. Eye-One Beamer from GretagMacbeth was used to make the profiles. The color image quality was evaluated both visually and by color-difference calculations. The results of the analysis indicated large visual and colorimetric differences between the projectors. The DLP projectors generally have smaller color gamuts than the LCD projectors, and the color gamuts of older projectors are significantly smaller than those of newer ones. The amount of ambient light reaching the screen is of great importance for the visual impression: if too much reflected and other ambient light reaches the screen, the projected image becomes pale and has low contrast. When a profile is used, the differences in color between the projectors get smaller and the colors appear more correct. For one device, the average ΔE*ab color difference relative to a white reference was reduced from 22 to 11, for another from 13 to 6. Blue colors show the largest variations among the projection displays.

  1. Image quality and high contrast improvements on VLT/NACO

    NASA Astrophysics Data System (ADS)

    Girard, Julien H. V.; O'Neal, Jared; Mawet, Dimitri; Kasper, Markus; Zins, Gérard; Neichel, Benoît; Kolb, Johann; Christiaens, Valentin; Tourneboeuf, Martin

    2012-07-01

    NACO is the famous and versatile diffraction-limited NIR imager and spectrograph at the VLT with which ESO celebrated 10 years of adaptive optics. Over the past two years a substantial effort has been put into understanding and fixing issues that directly affect the image quality and the high contrast performance of the instrument. Experiments to compensate the non-common-path aberrations and recover the highest possible Strehl ratios have been carried out successfully, and a plan is described to perform such measurements regularly. The drift associated with pupil tracking, present since 2007, was fixed in October 2011. NACO is therefore even better suited for high contrast imaging and can be used with coronagraphic masks in the image plane. Some contrast measurements are shown and discussed. The work accomplished on NACO will serve as a reference for the next generation of instruments on the VLT, especially those working at the diffraction limit and making use of angular differential imaging (i.e. SPHERE, VISIR, and possibly ERIS).

  2. Analysis of filtering techniques and image quality in pixel duplicated images

    NASA Astrophysics Data System (ADS)

    Mehrubeoglu, Mehrube; McLauchlan, Lifford

    2009-08-01

    When images undergo filtering operations, valuable information can be lost along with the intended noise or frequencies because neighboring pixels are averaged. When the image is enlarged by duplicating pixels, such filtering effects can be reduced and more information retained, which can be critical when analyzing image content automatically. Analysis of retinal images can reveal many diseases at an early stage, as long as minor changes that depart from a normal retinal scan can be identified and enhanced. In this paper, typical filtering techniques are applied to an early-stage diabetic retinopathy image that has undergone digital pixel duplication. The same techniques are applied to the original images for comparison. The effects of filtering are then demonstrated for both pixel-duplicated and original images to show the information-retention capability of pixel duplication. Image quality is computed based on published metrics. Our analysis shows that pixel duplication is effective in retaining information under smoothing operations such as mean filtering in the spatial domain, as well as lowpass and highpass filtering in the frequency domain, depending on the filter window size. Blocking effects due to image compression and pixel duplication become apparent in frequency analysis.
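
    Pixel duplication is plain nearest-neighbour enlargement, so it can be written in a couple of lines; the sketch below shows the duplication step and, in the commented usage, the kind of mean-filter comparison described above (the variable `retina` and the filter size are hypothetical).

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def pixel_duplicate(image, factor=2):
        """Enlarge an image by duplicating each pixel factor x factor times."""
        return np.repeat(np.repeat(image, factor, axis=0), factor, axis=1)

    # Hypothetical comparison: a 3x3 mean filter on the original image averages
    # true neighbors, while on the duplicated image each original pixel is mostly
    # averaged with copies of itself, so fine detail is better retained.
    # smooth_orig = uniform_filter(retina.astype(float), size=3)
    # smooth_dup  = uniform_filter(pixel_duplicate(retina).astype(float), size=3)
    ```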

  3. Client-Side Image Maps: Achieving Accessibility and Section 508 Compliance

    ERIC Educational Resources Information Center

    Beasley, William; Jarvis, Moana

    2004-01-01

    Image maps are a means of making a picture "clickable", so that different portions of the image can be hyperlinked to different URLS. There are two basic types of image maps: server-side and client-side. Besides requiring access to a CGI on the server, server-side image maps are undesirable from the standpoint of accessibility--creating…

  4. Using collective expert judgements to evaluate quality measures of mass spectrometry images

    PubMed Central

    Palmer, Andrew; Ovchinnikova, Ekaterina; Thuné, Mikael; Lavigne, Régis; Guével, Blandine; Dyatlov, Andrey; Vitek, Olga; Pineau, Charles; Borén, Mats; Alexandrov, Theodore

    2015-01-01

    Motivation: Imaging mass spectrometry (IMS) is a maturating technique of molecular imaging. Confidence in the reproducible quality of IMS data is essential for its integration into routine use. However, the predominant method for assessing quality is visual examination, a time consuming, unstandardized and non-scalable approach. So far, the problem of assessing the quality has only been marginally addressed and existing measures do not account for the spatial information of IMS data. Importantly, no approach exists for unbiased evaluation of potential quality measures. Results: We propose a novel approach for evaluating potential measures by creating a gold-standard set using collective expert judgements upon which we evaluated image-based measures. To produce a gold standard, we engaged 80 IMS experts, each to rate the relative quality between 52 pairs of ion images from MALDI-TOF IMS datasets of rat brain coronal sections. Experts’ optional feedback on their expertise, the task and the survey showed that (i) they had diverse backgrounds and sufficient expertise, (ii) the task was properly understood, and (iii) the survey was comprehensible. A moderate inter-rater agreement was achieved with Krippendorff’s alpha of 0.5. A gold-standard set of 634 pairs of images with accompanying ratings was constructed and showed a high agreement of 0.85. Eight families of potential measures with a range of parameters and statistical descriptors, giving 143 in total, were evaluated. Both signal-to-noise and spatial chaos-based measures performed highly with a correlation of 0.7 to 0.9 with the gold standard ratings. Moreover, we showed that a composite measure with the linear coefficients (trained on the gold standard with regularized least squares optimization and lasso) showed a strong linear correlation of 0.94 and an accuracy of 0.98 in predicting which image in a pair was of higher quality. Availability and implementation: The anonymized data collected from the survey
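
    A minimal sketch of the composite-measure fitting step described above (a sparse linear combination trained against the gold-standard ratings); the data here are synthetic placeholders for the candidate quality measures and expert-derived ratings, and the regularization strength is arbitrary.

    ```python
    import numpy as np
    from sklearn.linear_model import Lasso
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Synthetic placeholders: rows are ion images, columns are candidate quality
    # measures; y stands in for ratings derived from the expert comparisons.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(634, 8))
    y = 0.8 * X[:, 0] + 0.4 * X[:, 2] + rng.normal(scale=0.2, size=634)

    composite = make_pipeline(StandardScaler(), Lasso(alpha=0.01))
    composite.fit(X, y)
    print("composite coefficients:", composite.named_steps["lasso"].coef_)
    ```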

  5. Image fusion quality assessment based on discrete cosine transform and human visual system

    NASA Astrophysics Data System (ADS)

    Dou, Jianfang; Li, Jianxun

    2012-09-01

    With the rapid development of image fusion technology, image fusion quality evaluation plays a very important guiding role in selecting or designing image fusion algorithms. Objective image quality assessment is an active research subject in this field, and the ideal objective evaluation method is one that is consistent with human perceptual evaluation. A new fused-image quality assessment method based on the human visual system and the discrete cosine transform (DCT) is introduced. Firstly, the Sobel operator is used to compute gradient images for the source images and the fused image; the gradient images are divided into 8×8 blocks and the DCT coefficients are calculated for each block. Then, based on the characteristics of the human visual system, the luminance masking and contrast masking are calculated to form the perceptual error matrix between the input images and the fused image. Finally, the perceptual error matrix is weighted using the structural similarity. Experiments demonstrate that the new assessment maintains better consistency with human subjective perception.
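
    The first two stages of the metric, Sobel gradients followed by an 8×8 block DCT, can be sketched as below; the masking and structural-similarity weighting steps are omitted, and the reshaping and normalization choices are illustrative rather than the authors'.

    ```python
    import numpy as np
    from scipy.fft import dctn
    from scipy.ndimage import sobel

    def block_dct_of_gradient(image, block=8):
        """Sobel gradient magnitude followed by an 8x8 block DCT (the first two
        stages of the metric described above; illustrative only)."""
        grad = np.hypot(sobel(image, axis=0), sobel(image, axis=1))
        h = (grad.shape[0] // block) * block
        w = (grad.shape[1] // block) * block
        blocks = grad[:h, :w].reshape(h // block, block, w // block, block)
        blocks = blocks.swapaxes(1, 2)            # (rows, cols, block, block)
        return dctn(blocks, axes=(-2, -1), norm="ortho")

    # coeffs = block_dct_of_gradient(fused_image.astype(float))
    ```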

  6. Optimization of Image Quality and Dose in Digital Mammography.

    PubMed

    Fausto, Agnes M F; Lopes, M C; de Sousa, M C; Furquim, Tânia A C; Mol, Anderson W; Velasco, Fermin G

    2017-04-01

    Nowadays, optimization in digital mammography is one of the most important challenges in diagnostic radiology. The new digital technology has introduced additional elements to be considered in this scenario. A major goal of mammography is the detection of structures on the order of micrometers (μm) and the discrimination of different types of tissue with very similar density values. Diagnosis in mammography faces the difficulty that breast tissues and pathological findings have very close linear attenuation coefficients within the energy range used in mammography. The aim of this study was to develop a methodology for optimizing the exposure parameters of digital mammography based on a new figure of merit, FOM ≡ (IQFinv)^2/AGD, considering both image quality and dose. The study was conducted using a Senographe DS/GE digital mammography unit and the CDMAM and TORMAM phantoms. The characterization of clinical practice on the mammography system under study was performed considering different breast thicknesses, the technical exposure parameters, and the image processing options used by the equipment's automatic exposure system. The results showed a difference between the optimized parameter values and those chosen by the automatic system of the mammography unit, specifically for small breasts. The optimized exposure parameters gave better results than those obtained by the automatic system of the mammography unit, in terms of image quality parameters and their impact on the detection of breast structures when analyzed by radiologists.

  7. Systematic infrared image quality improvement using deep learning based techniques

    NASA Astrophysics Data System (ADS)

    Zhang, Huaizhong; Casaseca-de-la-Higuera, Pablo; Luo, Chunbo; Wang, Qi; Kitchin, Matthew; Parmley, Andrew; Monge-Alvarez, Jesus

    2016-10-01

    Infrared thermography (IRT, or thermal video) uses thermographic cameras to detect and record radiation in the long-wavelength infrared range of the electromagnetic spectrum. It allows sensing of environments beyond the limits of visual perception and has therefore been widely used in many civilian and military applications. Even though current thermal cameras are able to provide high-resolution, high-bit-depth images, there are significant challenges to be addressed in specific applications, such as poor contrast and low target signature resolution. This paper addresses quality improvement in IRT images for object recognition. A systematic approach based on image bias correction and deep learning is proposed to increase target signature resolution and optimize the baseline quality of inputs for object recognition. Our main objective is to maximize the useful information on the object to be detected even when the number of pixels on target is adversely small. The experimental results show that our approach can significantly improve target resolution and thus helps make object recognition more efficient in automatic target detection/recognition (ATD/R) systems.

  8. Quantitative phase imaging for cell culture quality control.

    PubMed

    Kastl, Lena; Isbach, Michael; Dirksen, Dieter; Schnekenburger, Jürgen; Kemper, Björn

    2017-03-06

    The potential of quantitative phase imaging (QPI) with digital holographic microscopy (DHM) for quantification of cell culture quality was explored. Label-free QPI of detached single cells in suspension was performed by Michelson interferometer-based self-interference DHM. Two pancreatic tumor cell lines were chosen as cellular model and analyzed for refractive index, volume, and dry mass under varying culture conditions. Firstly, adequate cell numbers for reliable statistics were identified. Then, to characterize the performance and reproducibility of the method, we compared results from independently repeated measurements and quantified the cellular response to osmolality changes of the cell culture medium. Finally, it was demonstrated that the evaluation of QPI images allows the extraction of absolute cell parameters which are related to cell layer confluence states. In summary, the results show that QPI enables label-free imaging cytometry, which provides novel complementary integral biophysical data sets for sophisticated quantification of cell culture quality with minimized sample preparation. © 2017 International Society for Advancement of Cytometry.

  9. The Importance of Curriculum in Achieving Quality Child Day Care Programs.

    ERIC Educational Resources Information Center

    Dodge, Diane Trister

    1995-01-01

    Discusses the components of high quality child day care curricula, focusing on educational philosophy, program goals and objectives, the physical environment, the role of teachers and administrators, and partnerships with families. (MDM)

  10. The quality transformation: A catalyst for achieving Energy's strategic vision

    SciTech Connect

    1995-01-01

    This plan describes the initial six corporate quality goals for DOE. It also includes accompanying performance measures which will help DOE determine progress towards meeting these goals. The six goals are: (1) There is effective use of performance measurement based on regular assessment of Energy operations using the Presidential Award for Quality, the Malcolm Baldrige National Quality Award, or equivalent criteria. (2) All managers champion continuous quality improvement training for all employees through planning, attendance, and active application. (3) The Department leadership has provided the environment in which employees are enabled to satisfy customer requirements and realize their full potential. (4) The Department management practices foster employee involvement, development and recognition. (5) The Department continuously improves customer service and satisfaction, and internal and external customers recognize Energy as an excellent service provider. (6) The Department has a system which aligns strategic and operational planning with strategic intent, ensures this planning drives resource allocation, provides for regular evaluation of results, and provides feedback.

  11. High-quality, affordable healthcare in the United States--an achievable goal.

    PubMed

    Barton, Glen A

    2004-01-01

    We have evidence that quality and efficiency work together for better healthcare processes for patients while providing cost savings for employers. The complex processes and frequent disconnects among providers who care for the same patients are obviously an area of opportunity for improvement. By standardizing processes for care, we can begin to effectively measure quality and efficiency throughout the value chain. When we work as a team, focusing on quality and measurement, we can look to a future when the best possible care is efficiently and effectively delivered to patients. The future will also be brighter for employers dedicated to providing high-quality, competitive benefits for their valued employees and retirees while maintaining profitability.

  12. Automating PACS Quality Control with the Vanderbilt Image Processing Enterprise Resource

    PubMed Central

    Esparza, Michael L.; Welch, E. Brian; Landman, Bennett A.

    2011-01-01

    Precise image acquisition is an integral part of modern patient care and medical imaging research. Periodic quality control using standardized protocols and phantoms ensures that scanners are operating according to specifications, yet such procedures do not ensure that individual datasets are free from corruption, for example due to patient motion, transient interference, or physiological variability. If unacceptable artifacts are noticed during scanning, a technologist can repeat a procedure. Yet substantial delays may be incurred if a problematic scan is not noticed until a radiologist reads the scans or an automated algorithm fails. Given the scores of slices in typical three-dimensional scans and the wide variety of potential use cases, a technologist cannot practically be expected to inspect all images. In large-scale research, automated pipeline systems have had great success in achieving high throughput. However, clinical and institutional workflows are largely based on DICOM and PACS technologies; these systems are not readily compatible with research systems because of security and privacy restrictions. Hence, quantitative quality control has been relegated to individual investigators and too often neglected. Herein, we propose a scalable system, the Vanderbilt Image Processing Enterprise Resource (VIPER), to integrate modular quality control and image analysis routines with a standard PACS configuration. This server unifies image processing routines at an institutional level and provides a simple interface so that investigators can collaborate to deploy new analysis technologies. VIPER integrates with high-performance computing environments and has successfully analyzed all standard scans from our institutional research center over the course of the last 18 months. PMID:24357910
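
    As a toy example of the kind of per-dataset check such a pipeline might run, the sketch below reads a DICOM file with pydicom and flags images with a large fraction of saturated pixels; the rule, threshold, and file name are hypothetical and are not VIPER's actual routines.

    ```python
    import numpy as np
    import pydicom

    def basic_qc(dicom_path, clip_fraction_limit=0.01):
        """Toy per-image check: flag datasets whose pixel values saturate the
        stored bit range (hypothetical rule, not VIPER's actual routines)."""
        ds = pydicom.dcmread(dicom_path)
        pixels = ds.pixel_array.astype(np.int64)
        max_stored = 2 ** int(ds.BitsStored) - 1
        clipped_fraction = float(np.mean(pixels >= max_stored))
        return {"sop_uid": ds.SOPInstanceUID,
                "clipped_fraction": clipped_fraction,
                "flagged": clipped_fraction > clip_fraction_limit}

    # report = basic_qc("example_slice.dcm")  # hypothetical file name
    ```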

  13. Image quality of a cone beam O-arm 3D imaging system

    NASA Astrophysics Data System (ADS)

    Zhang, Jie; Weir, Victor; Lin, Jingying; Hsiung, Hsiang; Ritenour, E. Russell

    2009-02-01

    The O-arm is a cone beam imaging system designed primarily to support orthopedic surgery and is also used for image-guided and vascular surgery. Using a gantry that can be opened or closed, the O-arm can function as a 2-dimensional (2D) fluoroscopy device or collect 3-dimensional (3D) volumetric imaging data like a CT system. Clinical applications of the O-arm in spine surgical procedures, assessment of pedicle screw position, and kyphoplasty procedures show that the O-arm 3D mode provides enhanced imaging information compared to radiographs or fluoroscopy alone. In this study, the image quality of an O-arm system was quantitatively evaluated. A 20 cm diameter CATPHAN 424 phantom was scanned using the pre-programmed head protocols: small/medium (120 kVp, 100 mAs), large (120 kVp, 128 mAs), and extra-large (120 kVp, 160 mAs) in 3D mode. The high-resolution reconstruction mode (512×512×0.83 mm) was used to reconstruct images for the analysis of low- and high-contrast resolution and the noise power spectrum. The MTF was measured from the point spread function. The results show that the O-arm image is uniform but exhibits a noise pattern that cannot be removed simply by increasing the mAs. The high-contrast resolution of the O-arm system was approximately 9 lp/cm, and the system has 10% MTF at 0.45 mm. The low-contrast resolution could not be determined because of the noise pattern. For surgery in which the locations of structures matter more than a survey of all image details, the image quality of the O-arm is well accepted clinically.
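
    Estimating the MTF from a measured point-spread (or line-spread) profile amounts to a normalized Fourier transform; the sketch below shows that calculation and how a 10% MTF frequency could be read off, with a hypothetical pixel size and profile.

    ```python
    import numpy as np

    def mtf_from_psf(psf_profile, pixel_size_mm):
        """1-D MTF from a point-spread-function profile: magnitude of the
        Fourier transform, normalized to its zero-frequency value."""
        spectrum = np.abs(np.fft.rfft(psf_profile / np.sum(psf_profile)))
        freqs = np.fft.rfftfreq(len(psf_profile), d=pixel_size_mm)  # cycles/mm
        return freqs, spectrum / spectrum[0]

    # Hypothetical usage: locate the frequency where the MTF drops to 10%.
    # f, m = mtf_from_psf(psf_profile, pixel_size_mm=0.415)
    # f10 = f[np.argmax(m <= 0.10)]
    ```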

  14. Assessment of image quality in x-ray radiography imaging using a small plasma focus device

    NASA Astrophysics Data System (ADS)

    Kanani, A.; Shirani, B.; Jabbari, I.; Mokhtari, J.

    2014-08-01

    This paper offers a comprehensive investigation of image quality parameters for a small plasma focus used as a pulsed hard x-ray source for radiography applications. A set of images was captured from metal objects and electronic circuits using a low energy plasma focus at different capacitor bank voltages and different argon gas pressures. The x-ray source focal spot of this device was measured to be about 0.6 mm using the penumbra imaging method. Image quality was studied through several parameters, including image contrast, the line spread function (LSF), and the modulation transfer function (MTF). Results showed that the contrast changes with variations in gas pressure; the best contrast was obtained at a pressure of 0.5 mbar and 3.75 kJ stored energy. The x-ray dose measurements showed that about 0.6 mGy is sufficient to obtain acceptable images on film. The LSF and MTF measurements were carried out by means of a thin stainless steel wire 0.8 mm in diameter, and the cut-off frequency was found to be about 1.5 cycles/mm.

  15. Automated quality assessment of autonomously acquired microscopic images of fluorescently stained bacteria.

    PubMed

    Zeder, M; Kohler, E; Pernthaler, J

    2010-01-01

    Quality assessment of autonomously acquired microscopic images is an important issue in high-throughput imaging systems. For example, the presence of low quality images (≥10%) in a dataset significantly influences the counting precision of fluorescently stained bacterial cells. We present an approach based on an artificial neural network (ANN) to assess the quality of such images. Spatially invariant estimators were extracted as ANN input data from subdivided images by low level image processing. Different ANN designs were compared and >400 ANNs were trained and tested on a set of 25,000 manually classified images. The optimal ANN featured a correct identification rate of 94% (3% false positives, 3% false negatives) and could process about 10 images per second. We compared its performance with the image quality assessment by different humans and discuss the difficulties in assigning images to the correct quality class. The computer program and the documented source code (VB.NET) are provided under General Public Licence.

  16. Underwater image quality enhancement through Rayleigh-stretching and averaging image planes

    NASA Astrophysics Data System (ADS)

    Ghani, Ahmad Shahrizan Abdul; Isa, Nor Ashidi Mat

    2014-12-01

    Visibility in underwater images is usually poor because of the attenuation of light in the water, which causes low contrast and color variation. In this paper, a new approach for underwater image quality improvement is presented. The proposed method aims to improve underwater image contrast, increase image details, and reduce noise by applying a contrast-stretching scheme that produces two different images with different contrasts. The proposed method integrates modification of the image histogram in two main color models, RGB and HSV. The histograms of the color channels in the RGB color model are modified and remapped to follow the Rayleigh distribution within certain ranges. The image is then converted to the HSV color model, and the S and V components are modified within a certain limit. Qualitative and quantitative analyses indicate that the proposed method outperforms other state-of-the-art methods in terms of contrast, details, and noise reduction. The image color also shows much improvement.
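
    A simplified reading of the Rayleigh remapping described above is to push each channel's empirical ranks through the Rayleigh quantile function; the sketch below does exactly that, with an arbitrary scale parameter and normalization, and is not the authors' exact stretching procedure.

    ```python
    import numpy as np
    from scipy.stats import rayleigh

    def rayleigh_stretch(channel, scale=0.25, out_max=255):
        """Remap one color channel so its histogram follows a Rayleigh shape:
        empirical ranks are pushed through the Rayleigh quantile function."""
        flat = channel.astype(float).ravel()
        ranks = (np.argsort(np.argsort(flat)) + 0.5) / flat.size  # in (0, 1)
        stretched = rayleigh.ppf(ranks, scale=scale)
        stretched /= rayleigh.ppf(1.0 - 0.5 / flat.size, scale=scale)
        return np.clip(stretched.reshape(channel.shape) * out_max, 0, out_max)

    # enhanced_red = rayleigh_stretch(underwater_image[..., 0])  # hypothetical input
    ```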

  17. Image quality in CT: From physical measurements to model observers.

    PubMed

    Verdun, F R; Racine, D; Ott, J G; Tapiovaara, M J; Toroi, P; Bochud, F O; Veldkamp, W J H; Schegerer, A; Bouwman, R W; Giron, I Hernandez; Marshall, N W; Edyvean, S

    2015-12-01

    Evaluation of image quality (IQ) in Computed Tomography (CT) is important to ensure that diagnostic questions are correctly answered, whilst keeping radiation dose to the patient as low as is reasonably possible. The assessment of individual aspects of IQ is already a key component of routine quality control of medical x-ray devices. These values, together with standard dose indicators, can be used to derive 'figures of merit' (FOM) to characterise the dose efficiency of CT scanners operating in certain modes. The demand for clinically relevant IQ characterisation has naturally increased with the development of CT technology (detector efficiency, image reconstruction and processing), resulting in the adaptation and evolution of assessment methods. The purpose of this review is to present the spectrum of methods that have been used to characterise image quality in CT: from objective measurements of physical parameters to clinically task-based approaches (i.e. the model observer (MO) approach), including the pure human observer approach. When combined with a dose indicator, a generalised dose efficiency index can be explored in a framework of system and patient dose optimisation. We focus on the IQ methodologies that are required for dealing with standard reconstruction, but also with iterative reconstruction algorithms. With this concept, the previously used FOM will be presented with a proposal to update them in order to keep them relevant and up to date with technological progress. The MO, which objectively assesses IQ for clinically relevant tasks, represents the most promising method in terms of radiologist sensitivity performance and is therefore of most relevance in the clinical environment.

  18. Live births achieved via IVF are increased by improvements in air quality and laboratory environment.

    PubMed

    Heitmann, Ryan J; Hill, Micah J; James, Aidita N; Schimmel, Tim; Segars, James H; Csokmay, John M; Cohen, Jacques; Payson, Mark D

    2015-09-01

    Infertility is a common disease, which causes many couples to seek treatment with assisted reproduction techniques. Many factors contribute to successful assisted reproduction technique outcomes. One important factor is laboratory environment and air quality. Our facility had the unique opportunity to compare consecutively used, but separate assisted reproduction technique laboratories, as a result of a required move. Environmental conditions were improved by strategic engineering designs. All other aspects of the IVF laboratory, including equipment, physicians, embryologists, nursing staff and protocols, were kept constant between facilities. Air quality testing showed improved air quality at the new IVF site. Embryo implantation (32.4% versus 24.3%; P < 0.01) and live birth (39.3% versus 31.8%, P < 0.05) were significantly increased in the new facility compared with the old facility. More patients met clinical criteria and underwent mandatory single embryo transfer on day 5 leading to both a reduction in multiple gestation pregnancies and increased numbers of vitrified embryos per patient with supernumerary embryos available. Improvements in IVF laboratory conditions and air quality had profound positive effects on laboratory measures and patient outcomes. This study further strengthens the importance of the laboratory environment and air quality in the success of an IVF programme.

  19. Live births achieved via IVF are increased by improvements in air quality and laboratory environment

    PubMed Central

    Heitmann, Ryan J; Hill, Micah J; James, Aidita N; Schimmel, Tim; Segars, James H; Csokmay, John M; Cohen, Jacques; Payson, Mark D

    2016-01-01

    Infertility is a common disease, which causes many couples to seek treatment with assisted reproduction techniques. Many factors contribute to successful assisted reproduction technique outcomes. One important factor is laboratory environment and air quality. Our facility had the unique opportunity to compare consecutively used, but separate assisted reproduction technique laboratories, as a result of a required move. Environmental conditions were improved by strategic engineering designs. All other aspects of the IVF laboratory, including equipment, physicians, embryologists, nursing staff and protocols, were kept constant between facilities. Air quality testing showed improved air quality at the new IVF site. Embryo implantation (32.4% versus 24.3%; P < 0.01) and live birth (39.3% versus 31.8%, P < 0.05) were significantly increased in the new facility compared with the old facility. More patients met clinical criteria and underwent mandatory single embryo transfer on day 5 leading to both a reduction in multiple gestation pregnancies and increased numbers of vitrified embryos per patient with supernumerary embryos available. Improvements in IVF laboratory conditions and air quality had profound positive effects on laboratory measures and patient outcomes. This study further strengthens the importance of the laboratory environment and air quality in the success of an IVF programme. PMID:26194882

  20. Perceived image quality with simulated segmented bifocal corrections

    PubMed Central

    Dorronsoro, Carlos; Radhakrishnan, Aiswaryah; de Gracia, Pablo; Sawides, Lucie; Marcos, Susana

    2016-01-01

    Bifocal contact or intraocular lenses use the principle of simultaneous vision to correct for presbyopia. A modified two-channel simultaneous vision simulator provided with an amplitude transmission spatial light modulator was used to optically simulate 14 segmented bifocal patterns (+ 3 diopters addition) with different far/near pupillary distributions of equal energy. Five subjects with paralyzed accommodation evaluated image quality and subjective preference through the segmented bifocal corrections. There are strong and systematic perceptual differences across the patterns, subjects and observation distances: 48% of the conditions evaluated were significantly preferred or rejected. Optical simulations (in terms of through-focus Strehl ratio from Hartmann-Shack aberrometry) accurately predicted the pattern producing the highest perceived quality in 4 out of 5 patients, both for far and near vision. These perceptual differences found arise primarily from optical grounds, but have an important neural component. PMID:27895981

  1. Degraded visual environment image/video quality metrics

    NASA Astrophysics Data System (ADS)

    Baumgartner, Dustin D.; Brown, Jeremy B.; Jacobs, Eddie L.; Schachter, Bruce J.

    2014-06-01

    A number of image quality metrics (IQMs) and video quality metrics (VQMs) have been proposed in the literature for evaluating techniques and systems for mitigating degraded visual environments. Some require both pristine and corrupted imagery. Others require patterned target boards in the scene. None of these metrics relates well to the task of landing a helicopter in conditions such as a brownout dust cloud. We have developed and used a variety of IQMs and VQMs related to the pilot's ability to detect hazards in the scene and to maintain situational awareness. Some of these metrics can be made agnostic to sensor type. Not only are the metrics suitable for evaluating algorithm and sensor variation, they are also suitable for choosing the most cost effective solution to improve operating conditions in degraded visual environments.

  2. Quality of Education Predicts Performance on the Wide Range Achievement Test-4th Edition Word Reading Subtest

    PubMed Central

    Sayegh, Philip; Arentoft, Alyssa; Thaler, Nicholas S.; Dean, Andy C.; Thames, April D.

    2014-01-01

    The current study examined whether self-rated education quality predicts Wide Range Achievement Test-4th Edition (WRAT-4) Word Reading subtest and neurocognitive performance, and aimed to establish this subtest's construct validity as an educational quality measure. In a community-based adult sample (N = 106), we tested whether education quality both increased the prediction of Word Reading scores beyond demographic variables and predicted global neurocognitive functioning after adjusting for WRAT-4. As expected, race/ethnicity and education predicted WRAT-4 reading performance. Hierarchical regression revealed that when including education quality, the amount of WRAT-4's explained variance increased significantly, with race/ethnicity and both education quality and years as significant predictors. Finally, WRAT-4 scores, but not education quality, predicted neurocognitive performance. Results support WRAT-4 Word Reading as a valid proxy measure for education quality and a key predictor of neurocognitive performance. Future research should examine these findings in larger, more diverse samples to determine their robust nature. PMID:25404004
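
    The hierarchical (incremental) regression step described above can be sketched with statsmodels; the data file and column names are hypothetical placeholders, not those of the study.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: demographics, self-rated education quality and WRAT-4
# Word Reading scores (file name and column names are illustrative only).
df = pd.read_csv("wrat4_sample.csv")

# Step 1: demographic predictors only
m1 = smf.ols("wrat4_reading ~ C(race_ethnicity) + education_years", data=df).fit()

# Step 2: add self-rated education quality
m2 = smf.ols("wrat4_reading ~ C(race_ethnicity) + education_years + education_quality",
             data=df).fit()

# Incremental variance explained and the nested-model F test
print(f"Delta R^2 = {m2.rsquared - m1.rsquared:.3f}")
print(m2.compare_f_test(m1))
```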

  3. Nonlinear filtering for character recognition in low quality document images

    NASA Astrophysics Data System (ADS)

    Diaz-Escobar, Julia; Kober, Vitaly

    2014-09-01

    Optical character recognition in scanned printed documents is a well-studied task, where the captured conditions like sheet position, illumination, contrast and resolution are controlled. Nowadays, it is more practical to use mobile devices for document capture than a scanner. So as a consequence, the quality of document images is often poor owing to presence of geometric distortions, nonhomogeneous illumination, low resolution, etc. In this work we propose to use multiple adaptive nonlinear composite filters for detection and classification of characters. Computer simulation results obtained with the proposed system are presented and discussed.

  4. Comparison of two arthroscopic pump systems based on image quality.

    PubMed

    Tuijthof, G J M; van den Boomen, H; van Heerwaarden, R J; van Dijk, C N

    2008-06-01

    The effectiveness of arthroscopic pump systems has been investigated with either subjective measures or measures that were unrelated to the image quality. The goal of this study is to determine the performance of an automated pump in comparison to a gravity pump based on objective assessment of the quality of the arthroscopic view. Ten arthroscopic operations performed with a gravity pump and ten performed with an automated pump (FMS Duo system) were matched on duration of the surgery and shaver usage, type of operation, and surgical experience. Quality of the view was defined by means of the presence or absence of previously described definitions of disturbances (bleeding, turbidity, air bubbles, and loose fibrous tissue). The percentage of disturbances for all operations was assessed with a time-disturbance analysis of the recorded operations. The Mann-Whitney U test shows a significant difference in favor of the automated pump for the presence of turbidity only (Exact Sig. [2*(1-tailed Sig.)] = 0.015). Otherwise, no differences were determined (Exact Sig. [2*(1-tailed Sig.)] > 0.436). A new objective method is successfully applied to assess efficiency of pump systems based on the quality of the arthroscopic view. Important disturbances (bleeding, air bubbles, and loose fibrous tissue) are not reduced by an automated pump used in combination with a tourniquet. The most frequent disturbance turbidity is reduced by around 50%. It is questionable if this result justifies the use of an automated pump for straightforward arthroscopic knee surgeries using a tourniquet.
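
    The exact two-sided Mann-Whitney U comparison used above can be reproduced with SciPy; the disturbance percentages below are made-up placeholders, not the study's data.

```python
from scipy.stats import mannwhitneyu

# Hypothetical percentages of operation time with turbidity disturbance (10 per group)
gravity_pump   = [12.1, 18.4, 9.7, 22.3, 15.0, 19.8, 11.2, 17.5, 20.1, 14.3]
automated_pump = [ 6.2,  9.1, 4.8, 11.5,  7.0,  8.9,  5.6, 10.2,  7.7,  6.9]

stat, p = mannwhitneyu(gravity_pump, automated_pump,
                       alternative="two-sided", method="exact")
print(f"U = {stat}, exact two-sided p = {p:.3f}")
```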

  5. Influence of slice overlap on positron emission tomography image quality

    NASA Astrophysics Data System (ADS)

    McKeown, Clare; Gillen, Gerry; Dempsey, Mary Frances; Findlay, Caroline

    2016-02-01

    PET scans use overlapping acquisition beds to correct for reduced sensitivity at bed edges. The optimum overlap size for the General Electric (GE) Discovery 690 has not been established. This study assesses how image quality is affected by slice overlap. Efficacy of 23% overlaps (recommended by GE) and 49% overlaps (maximum possible overlap) were specifically assessed. European Association of Nuclear Medicine (EANM) guidelines for calculating minimum injected activities based on overlap size were also reviewed. A uniform flood phantom was used to assess noise (coefficient of variation, (COV)) and voxel accuracy (activity concentrations, Bq ml-1). A NEMA (National Electrical Manufacturers Association) body phantom with hot/cold spheres in a background activity was used to assess contrast recovery coefficients (CRCs) and signal to noise ratios (SNR). Different overlap sizes and sphere-to-background ratios were assessed. COVs for 49% and 23% overlaps were 9% and 13% respectively. This increased noise was difficult to visualise on the 23% overlap images. Mean voxel activity concentrations were not affected by overlap size. No clinically significant differences in CRCs were observed. However, visibility and SNR of small, low contrast spheres (⩽13 mm diameter, 2:1 sphere to background ratio) may be affected by overlap size in low count studies if they are located in the overlap area. There was minimal detectable influence on image quality in terms of noise, mean activity concentrations or mean CRCs when comparing 23% overlap with 49% overlap. Detectability of small, low contrast lesions may be affected in low count studies—however, this is a worst-case scenario. The marginal benefits of increasing overlap from 23% to 49% are likely to be offset by increased patient scan times. A 23% overlap is therefore appropriate for clinical use. An amendment to EANM guidelines for calculating injected activities is also proposed which better reflects the effect overlap size has
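
    The noise figure quoted above, the coefficient of variation (COV) in a uniform-phantom region, is simply the ROI standard deviation divided by the ROI mean; a minimal sketch with synthetic values follows.

```python
import numpy as np

def coefficient_of_variation(roi_bqml):
    """Noise COV (%) in a uniform region: standard deviation / mean * 100."""
    roi = np.asarray(roi_bqml, dtype=float)
    return 100.0 * roi.std(ddof=1) / roi.mean()

# Toy uniform-phantom ROI (activity concentration in Bq/ml)
rng = np.random.default_rng(1)
roi = rng.normal(loc=5000.0, scale=450.0, size=(40, 40))
print(f"COV = {coefficient_of_variation(roi):.1f}%")   # ~9%, cf. the 49% overlap result
```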

  6. Nursing Homes That Increased The Proportion Of Medicare Days Achieved Gains In Quality

    PubMed Central

    Lepore, Michael; Leland, Natalie E.

    2017-01-01

    Nursing homes are increasingly serving short-stay rehabilitation residents under Medicare skilled nursing facility coverage, which is substantially more generous than Medicaid coverage for long-stay residents. In relation to increasing short-stay resident care, potential exists for beneficial or detrimental effects on long-stay resident outcomes. We employ panel multivariate regression analyses using facility fixed-effects models to determine how increasing the proportion of Medicare days in nursing homes relates to changes in quality outcomes for long-stay residents. We find increasing the proportion of Medicare days in a nursing home is significantly associated with improved quality outcomes for long-stay residents. Findings reinforce prior research indicating that quality outcomes tend to be superior in nursing homes with greater financial resources. This study bolsters arguments for financial investments in nursing homes, including increases in Medicaid payment rates, to support better care. PMID:26643633
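
    A facility fixed-effects panel regression of the kind described can be sketched with statsmodels by including facility and year dummies; the file and column names here are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical facility-year panel (file name and column names are illustrative).
panel = pd.read_csv("nursing_home_panel.csv")

# Facility fixed effects via dummy variables, plus year effects.
fe_model = smf.ols(
    "long_stay_quality ~ medicare_day_share + C(year) + C(facility_id)",
    data=panel,
).fit()

print(fe_model.params["medicare_day_share"])   # association of interest
```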

  7. Measuring saliency in images: which experimental parameters for the assessment of image quality?

    NASA Astrophysics Data System (ADS)

    Fredembach, Clement; Woolfe, Geoff; Wang, Jue

    2012-01-01

    Predicting which areas of an image are perceptually salient or attended to has become an essential pre-requisite of many computer vision applications. Because observers are notoriously unreliable in remembering where they look a posteriori, and because asking where they look while observing the image necessarily influences the results, ground truth about saliency and visual attention has to be obtained by gaze tracking methods. From the early work of Buswell and Yarbus to the most recent forays in computer vision there has been, perhaps unfortunately, little agreement on standardisation of eye tracking protocols for measuring visual attention. As the number of parameters involved in experimental methodology can be large, their individual influence on the final results is not well understood. Consequently, the performance of saliency algorithms, when assessed by correlation techniques, varies greatly across the literature. In this paper, we concern ourselves with the problem of image quality. Specifically: where people look when judging images. We show that in this case, the performance gap between existing saliency prediction algorithms and experimental results is significantly larger than otherwise reported. To understand this discrepancy, we first devise an experimental protocol that is adapted to the task of measuring image quality. In a second step, we compare our experimental parameters with the ones of existing methods and show that a lot of the variability can directly be ascribed to these differences in experimental methodology and choice of variables. In particular, the choice of a task, e.g., judging image quality vs. free viewing, has a great impact on measured saliency maps, suggesting that even for a mildly cognitive task, ground truth obtained by free viewing does not adapt well. Careful analysis of the prior art also reveals that systematic bias can occur depending on instrumental calibration and the choice of test images. We conclude this work by proposing a

  8. Retinal Image Quality Assessment for Spaceflight-Induced Vision Impairment Study

    NASA Technical Reports Server (NTRS)

    Vu, Amanda Cadao; Raghunandan, Sneha; Vyas, Ruchi; Radhakrishnan, Krishnan; Taibbi, Giovanni; Vizzeri, Gianmarco; Grant, Maria; Chalam, Kakarla; Parsons-Wingerter, Patricia

    2015-01-01

    Long-term exposure to space microgravity poses significant risks for visual impairment. Evidence suggests such vision changes are linked to cephalad fluid shifts, prompting a need to directly quantify microgravity-induced retinal vascular changes. The quality of retinal images used for such vascular remodeling analysis, however, is dependent on imaging methodology. For our exploratory study, we hypothesized that retinal images captured using fluorescein imaging methodologies would be of higher quality in comparison to images captured without fluorescein. A semi-automated image quality assessment was developed using Vessel Generation Analysis (VESGEN) software and MATLAB® image analysis toolboxes. An analysis of ten images found that the fluorescein imaging modality provided a 36% increase in overall image quality (two-tailed p=0.089) in comparison to nonfluorescein imaging techniques.

  9. Image quality and stability of image-guided radiotherapy (IGRT) devices: A comparative study

    PubMed Central

    Stock, Markus; Pasler, Marlies; Birkfellner, Wolfgang; Homolka, Peter; Poetter, Richard; Georg, Dietmar

    2010-01-01

    Introduction Our aim was to implement standards for quality assurance of IGRT devices used in our department and to compare their performances with that of a CT simulator. Materials and methods We investigated image quality parameters for three devices over a period of 16 months. A multislice CT was used as a benchmark and results related to noise, spatial resolution, low contrast visibility (LCV) and uniformity were compared with a cone beam CT (CBCT) at a linac and simulator. Results All devices performed well in terms of LCV and, in fact, exceeded vendor specifications. MTF was comparable between CT and linac CBCT. Integral nonuniformity was, on average, 0.002 for the CT and 0.006 for the linac CBCT. Uniformity, LCV and MTF varied depending on the protocols used for the linac CBCT. Contrast-to-noise ratio was an average of 51% higher for the CT than for the linac and simulator CBCT. No significant time trend was observed and tolerance limits were implemented. Discussion Reasonable differences in image quality between CT and CBCT were observed. Further research and development are necessary to increase image quality of commercially available CBCT devices in order for them to serve the needs for adaptive and/or online planning. PMID:19695725
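
    Two of the metrics tracked above can be computed from phantom ROIs as follows; the definitions used here (CNR as the mean difference over background noise, integral non-uniformity as (max - min)/(max + min) of ROI means) are common conventions and assumptions rather than the paper's exact formulas.

```python
import numpy as np

def cnr(roi_object, roi_background):
    """Contrast-to-noise ratio: |mean difference| divided by background noise."""
    return (abs(np.mean(roi_object) - np.mean(roi_background))
            / np.std(roi_background, ddof=1))

def integral_nonuniformity(roi_means):
    """(max - min) / (max + min) over centre/peripheral ROI means
    (values assumed on a positive scale, e.g. offset CT numbers)."""
    m = np.asarray(roi_means, dtype=float)
    return (m.max() - m.min()) / (m.max() + m.min())

# Toy phantom measurements
rng = np.random.default_rng(5)
insert_roi = rng.normal(60.0, 8.0, size=400)       # "contrast insert" pixels
background_roi = rng.normal(0.0, 8.0, size=400)    # background pixels
print(f"CNR = {cnr(insert_roi, background_roi):.1f}")
print(f"Integral non-uniformity = "
      f"{integral_nonuniformity([1002.1, 1000.4, 999.8, 1001.0, 1000.2]):.4f}")
```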

  10. Image quality simulation and verification of x-ray volume imaging systems

    NASA Astrophysics Data System (ADS)

    Kroon, Han; Schoumans, Nicole; Snoeren, Ruud

    2006-03-01

    Nowadays, 2D X-ray systems are used more and more for 3-dimensional rotational X-ray imaging (3D-RX) or volume imaging, such as 3D rotational angiography. However, it is not evident that the application of settings for optimal 2D images also guarantee optimal conditions for 3D-RX reconstruction results. In particular the search for a good compromise between patient dose and IQ may lead to different results in case of 3D imaging. For this purpose we developed an additional 3D-RX module for our full-scale image quality & patient dose (IQ&PD) simulation model, with specific calculations of patient dose under rotational conditions, and contrast, sharpness and noise of 3D images. The complete X-ray system from X-ray tube up to and including the display device is modelled in separate blocks for each distinguishable component or process. The model acts as a tool for X-ray system design, image quality optimisation and patient dose reduction. The model supports the decomposition of system level requirements, and takes inherently care of the prerequisite mutual coherence between component requirements. The short calculation times enable comprehensive multi-parameter optimisation studies. The 3D-RX IQ&PD performance is validated by comparing calculation results with actual measurements performed on volume images acquired with a state-of-the-art 3D-RX system. The measurements include RXDI dose index, signal and contrast based on Hounsfield units (H and ΔH), modulation transfer function (MTF), noise variance (σ2) and contrast-to-noise ratio (CNR). Further we developed a new 3D contrast-delta (3D-CΔ) phantom with details of varying size and contrast medium material and concentration. Simulation and measurement results show a significant correlation.

  11. Correlation of the clinical and physical image quality in chest radiography for average adults with a computed radiography imaging system

    PubMed Central

    Wood, T J; Beavis, A W; Saunderson, J R

    2013-01-01

    Objective: The purpose of this study was to examine the correlation between the quality of visually graded patient (clinical) chest images and a quantitative assessment of chest phantom (physical) images acquired with a computed radiography (CR) imaging system. Methods: The results of a previously published study, in which four experienced image evaluators graded computer-simulated postero-anterior chest images using a visual grading analysis scoring (VGAS) scheme, were used for the clinical image quality measurement. Contrast-to-noise ratio (CNR) and effective dose efficiency (eDE) were used as physical image quality metrics measured in a uniform chest phantom. Although optimal values of these physical metrics for chest radiography were not derived in this work, their correlation with VGAS in images acquired without an antiscatter grid across the diagnostic range of X-ray tube voltages was determined using Pearson’s correlation coefficient. Results: Clinical and physical image quality metrics increased with decreasing tube voltage. Statistically significant correlations between VGAS and CNR (R=0.87, p<0.033) and eDE (R=0.77, p<0.008) were observed. Conclusion: Medical physics experts may use the physical image quality metrics described here in quality assurance programmes and optimisation studies with a degree of confidence that they reflect the clinical image quality in chest CR images acquired without an antiscatter grid. Advances in knowledge: A statistically significant correlation has been found between the clinical and physical image quality in CR chest imaging. The results support the value of using CNR and eDE in the evaluation of quality in clinical thorax radiography. PMID:23568362
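
    The reported correlations are plain Pearson coefficients between the visual grading scores and the phantom metrics; a minimal SciPy sketch with placeholder values follows.

```python
from scipy.stats import pearsonr

# Hypothetical paired values: mean VGAS at each tube voltage and the matching
# phantom CNR measured without an antiscatter grid (numbers are placeholders).
vgas = [0.62, 0.58, 0.55, 0.51, 0.47, 0.44, 0.40]
cnr  = [7.9,  7.2,  6.8,  6.1,  5.6,  5.0,  4.6]

r, p = pearsonr(vgas, cnr)
print(f"Pearson R = {r:.2f}, p = {p:.3f}")
```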

  12. The Quality of Family Communication and Academic Achievement in Early Adolescence.

    ERIC Educational Resources Information Center

    Ullrich, Manuela; Kreppner, Kurt

    This study focused on the role of family communication patterns as they affect children's academic achievement, as determined by grades. Sixty-two children (average age: 11.6 years) and their parents were observed and videotaped in dyadic settings (mother-child, father-child) in their homes while discussing everyday topics presented to them in a…

  13. School Improvement Plans and Student Achievement: Preliminary Evidence from the Quality and Merit Project in Italy

    ERIC Educational Resources Information Center

    Caputo, Andrea; Rastelli, Valentina

    2014-01-01

    This study provides preliminary evidence from an Italian in-service training program addressed to lower secondary school teachers which supports school improvement plans (SIPs). It aims at exploring the association between characteristics/contents of SIPs and student improvement in math achievement. Pre-post standardized tests and text analysis of…

  14. Achieving Quality Education in Ghana: The Spotlight on Primary Education within the Kumasi Metropolis

    ERIC Educational Resources Information Center

    Boakye-Amponsah, Abraham; Enninful, Ebenezer Kofi; Anin, Emmanuel Kwabena; Vanderpuye, Patience

    2015-01-01

    Background: Ghana being a member of the United Nations, committed to the Universal Primary Education initiative in 2000 and has since implemented series of educational reforms to meet the target for the Millennium Development Goal (MDG) 2. Despite the numerous government interventions to achieve the MDG 2, many children in Ghana have been denied…

  15. Goal Setting in Principal Evaluation: Goal Quality and Predictors of Achievement

    ERIC Educational Resources Information Center

    Sinnema, Claire E. L.; Robinson, Viviane M. J.

    2012-01-01

    This article draws on goal-setting theory to investigate the goals set by experienced principals during their performance evaluations. While most goals were about teaching and learning, they tended to be vaguely expressed and only partially achieved. Five predictors (commitment, challenge, learning, effort, and support) explained a significant…

  16. Leveraging Quality Improvement to Achieve Student Learning Assessment Success in Higher Education

    ERIC Educational Resources Information Center

    Glenn, Nancy Gentry

    2009-01-01

    Mounting pressure for transformational change in higher education driven by technology, globalization, competition, funding shortages, and increased emphasis on accountability necessitates that universities implement reforms to demonstrate responsiveness to all stakeholders and to provide evidence of student achievement. In the face of the demand…

  17. Achieving Quality Assurance and Moving to a World Class University in the 21st Century

    ERIC Educational Resources Information Center

    Lee, Lung-Sheng Steven

    2013-01-01

    Globalization in the 21st century has brought innumerable challenges and opportunities to universities and countries. Universities are primarily concerned with how to ensure the quality of their education and how to boost their local and global competitiveness. The pressure from both international competition and public accountability on…

  18. A Guide to the Librarian's Responsibility in Achieving Quality in Lighting and Ventilation.

    ERIC Educational Resources Information Center

    Mason, Ellsworth

    1967-01-01

    Quality, not intensity, is the keystone to good library lighting. The single most important problem in lighting is glare caused by extremely intense centers of light. Multiple interfiling of light rays is a factor required in library lighting. A fixture that diffuses light well is basic when light emerges from the fixture. It scatters widely,…

  19. Using Quality Enhancement Processes to Achieve Sustainable Development and Support for Sessional Staff

    ERIC Educational Resources Information Center

    Lekkas, D.; Winning, T. A.

    2017-01-01

    Consistent with quality enhancement, we report on how we used a continuous improvement cycle to formalise and embed an academic development and support programme for our School's sessional staff. Key factors in establishing and maintaining the programme included: local change agents supported initially by institutional project funding; School…

  20. How to Achieve High-Quality Oocytes? The Key Role of Myo-Inositol and Melatonin

    PubMed Central

    Rossetti, Paola; Corrado, Francesco; Rapisarda, Agnese Maria Chiara; Condorelli, Rosita Angela; Valenti, Gaetano; Sapia, Fabrizio; Buscema, Massimo

    2016-01-01

    Assisted reproductive technologies (ART) have experienced growing interest from infertile patients seeking to become pregnant. The quality of oocytes plays a pivotal role in determining ART outcomes. Although many authors have studied how supplementation therapy may affect this important parameter for both in vivo and in vitro models, data are not yet robust enough to support firm conclusions. Regarding this last point, in this review our objective has been to evaluate the state of the art regarding supplementation with melatonin and myo-inositol in order to improve oocyte quality during ART. On the one hand, the antioxidant effect of melatonin is well known as being useful during ovulation and oocyte incubation, two occasions with a high level of oxidative stress. On the other hand, myo-inositol is important in cellular structure and in cellular signaling pathways. Our analysis suggests that the use of these two molecules may significantly improve the quality of oocytes and the quality of embryos: melatonin seems to raise the fertilization rate, and myo-inositol improves the pregnancy rate, although all published studies do not fully agree with these conclusions. However, previous studies have demonstrated that cotreatment improves these results compared with melatonin alone or myo-inositol alone. We recommend that further studies be performed in order to confirm these positive outcomes in routine ART treatment. PMID:27651794

  1. Linkage between Teacher Quality, Student Achievement,and Cognitive Skills: A Rule-Space Model

    ERIC Educational Resources Information Center

    Xin, T.; Xu, Z.; Tatsuoka, K.

    2004-01-01

    The topic of teacher credentials and student performance is revisited in an international setting using the TIMSS-99 data. The lack of consistent positive link between credentials and performance can be explained via three routes: measurement problem of ''teacher quality'' input, measurement problem of ''student outcomes'', and the production…

  2. Achieving Inclusive Excellence: Strategies for Creating Real and Sustainable Change in Quality and Diversity

    ERIC Educational Resources Information Center

    Williams, Damon A.

    2007-01-01

    Since the 1990s, the University of Connecticut has made several shifts in its culture and practice that have resulted in improved educational quality and greater success rates for students from traditionally underrepresented populations. Damon Williams shares his institution's approach. (Contains 4 notes.)

  3. Family Background, School Quality and Rural-Urban Disparities in Student Learning Achievement in Latvia

    ERIC Educational Resources Information Center

    Geske, Andrejs; Grinfelds, Andris; Dedze, Indra; Zhang, Yanhong

    2006-01-01

    Over the course of the fifteen years since 1991, Latvia has been undergoing rapid political changes from a party controlled state to a market economy. These changes have affected the system of education. The issue of quality and equity of educational outcomes is gaining increasing importance as schools are expected to adjust to the new economic…

  4. Preschool Center Care Quality Effects on Academic Achievement: An Instrumental Variables Analysis

    ERIC Educational Resources Information Center

    Auger, Anamarie; Farkas, George; Burchinal, Margaret R.; Duncan, Greg J.; Vandell, Deborah Lowe

    2014-01-01

    Much of child care research has focused on the effects of the quality of care in early childhood settings on children's school readiness skills. Although researchers increased the statistical rigor of their approaches over the past 15 years, researchers' ability to draw causal inferences has been limited because the studies are based on…

  5. Forward-Oriented Designing for Learning as a Means to Achieve Educational Quality

    ERIC Educational Resources Information Center

    Ghislandi, Patrizia M. M.; Raffaghelli, Juliana E.

    2015-01-01

    In this paper, we reflect on how Design for Learning can create the basis for a culture of educational quality. We explore the process of Design for Learning within a blended, undergraduate university course through a teacher-led inquiry approach, aiming at showing the connections between the process of Design for Learning and academic…

  6. Teacher-Student Relationship Quality Type in Elementary Grades: Effects on Trajectories for Achievement and Engagement

    ERIC Educational Resources Information Center

    Wu, Jiun-Yu; Hughes, Jan N.; Kwok, Oi-Man

    2010-01-01

    Teacher, peer, and student reports of the quality of the teacher-student relationship were obtained for an ethnically diverse and academically at-risk sample of 706 second- and third-grade students. Cluster analysis identified four types of relationships based on the consistency of child reports of support and conflict in the relationship with…

  7. Achieving Equity and Quality in Japanese Elementary Schools: Balancing the Roles of State, Teachers, and Students

    ERIC Educational Resources Information Center

    Parmenter, Lynne

    2016-01-01

    The aim of this paper is to explore perspectives on equity, quality, motivation, and resilience by focusing in depth on the perspectives of educators in one small, semi-rural school in Japan. The paper is intended to provide rich, in-depth data and discussion as a way of providing insights from different perspectives into findings from large-scale…

  8. A uniform geostationary visible calibration approach to achieve a climate quality dataset

    NASA Astrophysics Data System (ADS)

    Haney, C.; Doelling, D.; Bhatt, R.; Scarino, B. R.; Gopalan, A.

    2013-12-01

    The geostationary (GEO) weather satellite visible and IR image record has surpassed 30 years. They have been preserved in the ISCCP-B1U 3-hourly dataset and other archives such as McIDAS, EUMETSAT, and NOAA CLASS. Since they were designed to aid in weather forecasting, long-term calibration stability was not a high priority. All GEO imagers lack onboard visible calibration and suffer from optical degradation after they are launched. In order to piece together the 35+ GEO satellite record both in time and space, a uniform calibration approach is desired to remove individual GEO temporal trends, as well as GEO spectral band differences. Otherwise, any artificial discontinuities caused by sequential GEO satellite records or spurious temporal trends caused by optical degradation may be interpreted as a change in climate. The approach relies on multiple independent methods to reduce the overall uncertainty of the GEO calibration coefficients. Consistency among methods validates the approach. During the MODIS record (2000 to the present) the GEO satellites are inter-calibrated against MODIS using ray-matched or bore-sighted radiance pairs. The MODIS and the VIIRS follow on instruments are equipped with onboard calibration thereby providing a stable calibration reference. The GEO spectral band differences are accounted for using a Spectral Band Adjustment Factor (SBAF) based on hyper-spectral SCIAMACHY data. During the pre-MODIS era, invariant earth targets of deserts and deep convective clouds (DCC) are used. Since GEO imagers have maintained their imaging scan schedules, GEO desert and DCC bidirectional reflectance distribution functions (BRDF) can be constructed and validated during the MODIS era. The BRDF models can then be applied to historical GEO imagers. Consistency among desert and DCC GEO calibration gains validates the approach. This approach has been applied to the GEO record beginning in 1985 and the results will be presented at the meeting.
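
    The ray-matching step described above ultimately reduces to estimating a calibration gain that maps GEO counts (minus a space/offset count) to reference radiances. The sketch below uses synthetic matched pairs; the space count, gain value and units are purely illustrative assumptions.

```python
import numpy as np

def calibration_gain(geo_counts, ref_radiance, space_count=29.0):
    """Slope of a through-origin fit of reference radiance against
    (GEO count - space count); the space count here is illustrative."""
    x = np.asarray(geo_counts, dtype=float) - space_count
    y = np.asarray(ref_radiance, dtype=float)
    return np.sum(x * y) / np.sum(x * x)

# Synthetic ray-matched GEO/reference pairs
rng = np.random.default_rng(2)
counts = rng.uniform(100.0, 800.0, size=500)
radiance = 0.58 * (counts - 29.0) + rng.normal(0.0, 5.0, size=500)
print(f"gain = {calibration_gain(counts, radiance):.3f} radiance units per count")
```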

  9. A widefield fluorescence microscope with a linear image sensor for image cytometry of biospecimens: Considerations for image quality optimization

    NASA Astrophysics Data System (ADS)

    Hutcheson, Joshua A.; Majid, Aneeka A.; Powless, Amy J.; Muldoon, Timothy J.

    2015-09-01

    Linear image sensors have been widely used in numerous research and industry applications to provide continuous imaging of moving objects. Here, we present a widefield fluorescence microscope with a linear image sensor used to image translating objects for image cytometry. First, a calibration curve was characterized for a custom microfluidic chamber over a span of volumetric pump rates. Image data were also acquired using 15 μm fluorescent polystyrene spheres on a slide with a motorized translation stage in order to match linear translation speed with line exposure periods to preserve the image aspect ratio. Aspect ratios were then calculated after imaging to ensure quality control of image data. Fluorescent beads were imaged in suspension flowing through the microfluidics chamber being pumped by a mechanical syringe pump at 16 μl min-1 with a line exposure period of 150 μs. The line period was selected to acquire images of fluorescent beads with a 40 dB signal-to-background ratio. A motorized translation stage was then used to transport conventional glass slides of stained cellular biospecimens. Whole blood collected from healthy volunteers was stained with 0.02% (w/v) proflavine hemisulfate and imaged to highlight leukocyte morphology with a 1.56 mm × 1.28 mm field of view (1540 ms total acquisition time). Oral squamous cells were also collected from healthy volunteers and stained with 0.01% (w/v) proflavine hemisulfate to demonstrate quantifiable subcellular features and an average nuclear to cytoplasmic ratio of 0.03 (n = 75), with a resolution of 0.31 μm pixels-1.

  10. A widefield fluorescence microscope with a linear image sensor for image cytometry of biospecimens: Considerations for image quality optimization

    SciTech Connect

    Hutcheson, Joshua A.; Majid, Aneeka A.; Powless, Amy J.; Muldoon, Timothy J.

    2015-09-15

    Linear image sensors have been widely used in numerous research and industry applications to provide continuous imaging of moving objects. Here, we present a widefield fluorescence microscope with a linear image sensor used to image translating objects for image cytometry. First, a calibration curve was characterized for a custom microfluidic chamber over a span of volumetric pump rates. Image data were also acquired using 15 μm fluorescent polystyrene spheres on a slide with a motorized translation stage in order to match linear translation speed with line exposure periods to preserve the image aspect ratio. Aspect ratios were then calculated after imaging to ensure quality control of image data. Fluorescent beads were imaged in suspension flowing through the microfluidics chamber being pumped by a mechanical syringe pump at 16 μl min{sup −1} with a line exposure period of 150 μs. The line period was selected to acquire images of fluorescent beads with a 40 dB signal-to-background ratio. A motorized translation stage was then used to transport conventional glass slides of stained cellular biospecimens. Whole blood collected from healthy volunteers was stained with 0.02% (w/v) proflavine hemisulfate and imaged to highlight leukocyte morphology with a 1.56 mm × 1.28 mm field of view (1540 ms total acquisition time). Oral squamous cells were also collected from healthy volunteers and stained with 0.01% (w/v) proflavine hemisulfate to demonstrate quantifiable subcellular features and an average nuclear to cytoplasmic ratio of 0.03 (n = 75), with a resolution of 0.31 μm pixels{sup −1}.

  11. A widefield fluorescence microscope with a linear image sensor for image cytometry of biospecimens: Considerations for image quality optimization.

    PubMed

    Hutcheson, Joshua A; Majid, Aneeka A; Powless, Amy J; Muldoon, Timothy J

    2015-09-01

    Linear image sensors have been widely used in numerous research and industry applications to provide continuous imaging of moving objects. Here, we present a widefield fluorescence microscope with a linear image sensor used to image translating objects for image cytometry. First, a calibration curve was characterized for a custom microfluidic chamber over a span of volumetric pump rates. Image data were also acquired using 15 μm fluorescent polystyrene spheres on a slide with a motorized translation stage in order to match linear translation speed with line exposure periods to preserve the image aspect ratio. Aspect ratios were then calculated after imaging to ensure quality control of image data. Fluorescent beads were imaged in suspension flowing through the microfluidics chamber being pumped by a mechanical syringe pump at 16 μl min(-1) with a line exposure period of 150 μs. The line period was selected to acquire images of fluorescent beads with a 40 dB signal-to-background ratio. A motorized translation stage was then used to transport conventional glass slides of stained cellular biospecimens. Whole blood collected from healthy volunteers was stained with 0.02% (w/v) proflavine hemisulfate and imaged to highlight leukocyte morphology with a 1.56 mm × 1.28 mm field of view (1540 ms total acquisition time). Oral squamous cells were also collected from healthy volunteers and stained with 0.01% (w/v) proflavine hemisulfate to demonstrate quantifiable subcellular features and an average nuclear to cytoplasmic ratio of 0.03 (n = 75), with a resolution of 0.31 μm pixels(-1).
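
    In the line-scan setup described in the records above, the aspect ratio is preserved when the object advances by exactly one object-plane pixel per line exposure period. The sketch below applies that general relation, using the quoted resolution and line period only as illustrative inputs rather than the authors' calibration procedure.

```python
def required_stage_speed_um_per_s(pixel_size_um, line_period_s):
    """Translation speed at which one object-plane pixel passes per line period,
    keeping the along-scan and across-scan sampling equal (aspect ratio 1:1)."""
    return pixel_size_um / line_period_s

# Using the resolution and line period quoted above (0.31 um/pixel, 150 us)
speed = required_stage_speed_um_per_s(0.31, 150e-6)
print(f"required speed = {speed:.0f} um/s")   # roughly 2.1 mm/s
```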

  12. Relations between local and global perceptual image quality and visual masking

    NASA Astrophysics Data System (ADS)

    Alam, Md Mushfiqul; Patil, Pranita; Hagan, Martin T.; Chandler, Damon M.

    2015-03-01

    Perceptual quality assessment of digital images and videos are important for various image-processing applications. For assessing the image quality, researchers have often used the idea of visual masking (or distortion visibility) to design image-quality predictors specifically for the near-threshold distortions. However, it is still unknown that while assessing the quality of natural images, how the local distortion visibilities relate with the local quality scores. Furthermore, the summing mechanism of the local quality scores to predict the global quality scores is also crucial for better prediction of the perceptual image quality. In this paper, the local and global qualities of six images and six distortion levels were measured using subjective experiments. Gabor-noise target was used as distortion in the quality-assessment experiments to be consistent with our previous study [Alam, Vilankar, Field, and Chandler, Journal of Vision, 2014], in which the local root-mean-square contrast detection thresholds of detecting the Gabor-noise target were measured at each spatial location of the undistorted images. Comparison of the results of this quality-assessment experiment and the previous detection experiment shows that masking predicted the local quality scores more than 95% correctly above 15 dB threshold within 5% subject scores. Furthermore, it was found that an approximate squared summation of local-quality scores predicted the global quality scores suitably (Spearman rank-order correlation 0:97).
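
    The "approximate squared summation" of local scores corresponds to Minkowski-style pooling with exponent 2; a minimal sketch with placeholder local scores follows (the exact normalisation used in the study is not specified here and is an assumption).

```python
import numpy as np

def minkowski_pool(local_scores, p=2.0):
    """Pool local quality/distortion scores into a global score with exponent p.
    p = 2 corresponds to the squared-summation rule described above."""
    s = np.asarray(local_scores, dtype=float)
    return np.mean(s ** p) ** (1.0 / p)

local = [0.2, 0.4, 0.1, 0.8, 0.3]        # placeholder local distortion scores
print(f"global score = {minkowski_pool(local):.3f}")
```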

  13. A quality assurance framework for the fully automated and objective evaluation of image quality in cone-beam computed tomography

    SciTech Connect

    Steiding, Christian; Kolditz, Daniel; Kalender, Willi A.

    2014-03-15

    Purpose: Thousands of cone-beam computed tomography (CBCT) scanners for vascular, maxillofacial, neurological, and body imaging are in clinical use today, but there is no consensus on uniform acceptance and constancy testing for image quality (IQ) and dose yet. The authors developed a quality assurance (QA) framework for fully automated and time-efficient performance evaluation of these systems. In addition, the dependence of objective Fourier-based IQ metrics on direction and position in 3D volumes was investigated for CBCT. Methods: The authors designed a dedicated QA phantom 10 cm in length consisting of five compartments, each with a diameter of 10 cm, and an optional extension ring 16 cm in diameter. A homogeneous section of water-equivalent material allows measuring CT value accuracy, image noise and uniformity, and multidimensional global and local noise power spectra (NPS). For the quantitative determination of 3D high-contrast spatial resolution, the modulation transfer function (MTF) of centrally and peripherally positioned aluminum spheres was computed from edge profiles. Additional in-plane and axial resolution patterns were used to assess resolution qualitatively. The characterization of low-contrast detectability as well as CT value linearity and artifact behavior was tested by utilizing sections with soft-tissue-equivalent and metallic inserts. For an automated QA procedure, a phantom detection algorithm was implemented. All tests used in the dedicated QA program were initially verified in simulation studies and experimentally confirmed on a clinical dental CBCT system. Results: The automated IQ evaluation of volume data sets of the dental CBCT system was achieved with the proposed phantom requiring only one scan for the determination of all desired parameters. Typically, less than 5 min were needed for phantom set-up, scanning, and data analysis. Quantitative evaluation of system performance over time by comparison to previous examinations was also
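
    The edge-profile route to the MTF mentioned above (edge spread function, differentiated to a line spread function, then Fourier transformed and normalised at zero frequency) can be sketched in one dimension as follows; the synthetic sigmoid edge and windowing choice are illustrative assumptions.

```python
import numpy as np

def mtf_from_edge_profile(esf, pixel_spacing_mm):
    """MTF from an (oversampled) edge spread function.

    Returns spatial frequencies (cycles/mm) and the normalised MTF."""
    lsf = np.gradient(esf)                       # line spread function
    lsf = lsf * np.hanning(lsf.size)             # window to suppress noise at the tails
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]                                # normalise to 1 at zero frequency
    freqs = np.fft.rfftfreq(lsf.size, d=pixel_spacing_mm)
    return freqs, mtf

# Synthetic blurred edge, 0.1 mm sampling
x = np.arange(-12.8, 12.8, 0.1)
esf = 1.0 / (1.0 + np.exp(-x / 0.4))             # sigmoid edge (illustrative blur)
freqs, mtf = mtf_from_edge_profile(esf, pixel_spacing_mm=0.1)
print(f"MTF at 1 cycle/mm = {np.interp(1.0, freqs, mtf):.2f}")
```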

  14. Open source database of images DEIMOS: extension for large-scale subjective image quality assessment

    NASA Astrophysics Data System (ADS)

    Vítek, Stanislav

    2014-09-01

    DEIMOS (Database of Images: Open Source) is an open-source database of images and video sequences for testing, verification and comparison of various image and/or video processing techniques such as compression, reconstruction and enhancement. This paper deals with extension of the database allowing performing large-scale web-based subjective image quality assessment. Extension implements both administrative and client interface. The proposed system is aimed mainly at mobile communication devices, taking into account advantages of HTML5 technology; it means that participants don't need to install any application and assessment could be performed using web browser. The assessment campaign administrator can select images from the large database and then apply rules defined by various test procedure recommendations. The standard test procedures may be fully customized and saved as a template. Alternatively the administrator can define a custom test, using images from the pool and other components, such as evaluating forms and ongoing questionnaires. Image sequence is delivered to the online client, e.g. smartphone or tablet, as a fully automated assessment sequence or viewer can decide on timing of the assessment if required. Environmental data and viewing conditions (e.g. illumination, vibrations, GPS coordinates, etc.), may be collected and subsequently analyzed.

  15. Quality Enhancement and Nerve Fibre Layer Artefacts Removal in Retina Fundus Images by Off Axis Imaging

    SciTech Connect

    Giancardo, Luca; Meriaudeau, Fabrice; Karnowski, Thomas Paul; Li, Yaquin; Tobin Jr, Kenneth William; Chaum, Edward

    2011-01-01

    Retinal fundus images acquired with non-mydriatic digital fundus cameras are a versatile tool for the diagnosis of various retinal diseases. Because of the ease of use of newer camera models and their relative low cost, these cameras are employed worldwide by retina specialists to diagnose diabetic retinopathy and other degenerative diseases. Even with relative ease of use, the images produced by these systems sometimes suffer from reflectance artefacts mainly due to the nerve fibre layer (NFL) or other camera lens related reflections. We propose a technique that employs multiple fundus images acquired from the same patient to obtain a single higher quality image without these reflectance artefacts. The removal of bright artefacts, and particularly of NFL reflectance, can have great benefits for the reduction of false positives in the detection of retinal lesions such as exudate, drusens and cotton wool spots by automatic systems or manual inspection. If enough redundant information is provided by the multiple images, this technique also compensates for a suboptimal illumination. The fundus images are acquired in straightforward but unorthodox manner, i.e. the stare point of the patient is changed between each shot but the camera is kept fixed. Between each shot, the apparent shape and position of all the retinal structures that do not exhibit isotropic reflectance (e.g. bright artefacts) change. This physical effect is exploited by our algorithm in order to extract the pixels belonging to the inner layers of the retina, hence obtaining a single artefacts-free image.

  16. An Automatic Image Processing Workflow for Daily Magnetic Resonance Imaging Quality Assurance.

    PubMed

    Peltonen, Juha I; Mäkelä, Teemu; Sofiev, Alexey; Salli, Eero

    2017-04-01

    The performance of magnetic resonance imaging (MRI) equipment is typically monitored with a quality assurance (QA) program. The QA program includes various tests performed at regular intervals. Users may execute specific tests, e.g., daily, weekly, or monthly. The exact interval of these measurements varies according to the department policies, machine setup and usage, manufacturer's recommendations, and available resources. In our experience, a single image acquired before the first patient of the day offers a low effort and effective system check. When this daily QA check is repeated with identical imaging parameters and phantom setup, the data can be used to derive various time series of the scanner performance. However, daily QA with manual processing can quickly become laborious in a multi-scanner environment. Fully automated image analysis and results output can positively impact the QA process by decreasing reaction time, improving repeatability, and by offering novel performance evaluation methods. In this study, we have developed a daily MRI QA workflow that can measure multiple scanner performance parameters with minimal manual labor required. The daily QA system is built around a phantom image taken by the radiographers at the beginning of day. The image is acquired with a consistent phantom setup and standardized imaging parameters. Recorded parameters are processed into graphs available to everyone involved in the MRI QA process via a web-based interface. The presented automatic MRI QA system provides an efficient tool for following the short- and long-term stability of MRI scanners.
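
    A minimal sketch of the kind of automated daily check described: compute a simple phantom SNR from the day's image and append it to a per-scanner time series. The ROI placement, file paths and use of pydicom are assumptions, not details of the published workflow.

```python
import csv
import datetime
import numpy as np
import pydicom   # assumed to be available for reading the daily phantom image

def phantom_snr(pixel_array):
    """Simple SNR estimate: mean of a central signal ROI divided by the
    standard deviation of a background (air) corner ROI."""
    img = pixel_array.astype(float)
    h, w = img.shape
    signal = img[h // 2 - 20:h // 2 + 20, w // 2 - 20:w // 2 + 20].mean()
    noise = img[:30, :30].std(ddof=1)
    return signal / noise

ds = pydicom.dcmread("daily_qa/scanner1_today.dcm")       # hypothetical path
snr = phantom_snr(ds.pixel_array)

with open("daily_qa/scanner1_snr_log.csv", "a", newline="") as f:
    csv.writer(f).writerow([datetime.date.today().isoformat(), f"{snr:.1f}"])
```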

  17. Comprehensive model for predicting perceptual image quality of smart mobile devices.

    PubMed

    Gong, Rui; Xu, Haisong; Luo, M R; Li, Haifeng

    2015-01-01

    An image quality model for smart mobile devices was proposed based on visual assessments of several image quality attributes. A series of psychophysical experiments were carried out on two kinds of smart mobile devices, i.e., smart phones and tablet computers, in which naturalness, colorfulness, brightness, contrast, sharpness, clearness, and overall image quality were visually evaluated under three lighting environments via categorical judgment method for various application types of test images. On the basis of Pearson correlation coefficients and factor analysis, the overall image quality could first be predicted by its two constituent attributes with multiple linear regression functions for different types of images, respectively, and then the mathematical expressions were built to link the constituent image quality attributes with the physical parameters of smart mobile devices and image appearance factors. The procedure and algorithms were applicable to various smart mobile devices, different lighting conditions, and multiple types of images, and performance was verified by the visual data.

  18. Cone beam computed tomography radiation dose and image quality assessments.

    PubMed

    Lofthag-Hansen, Sara

    2010-01-01

    Diagnostic radiology has undergone profound changes in the last 30 years. New technologies are available to the dental field, cone beam computed tomography (CBCT) as one of the most important. CBCT is a catch-all term for a technology comprising a variety of machines differing in many respects: patient positioning, volume size (FOV), radiation quality, image capturing and reconstruction, image resolution and radiation dose. When new technology is introduced one must make sure that diagnostic accuracy is better or at least as good as the one it can be expected to replace. The CBCT brand tested was two versions of Accuitomo (Morita, Japan): 3D Accuitomo with an image intensifier as detector, FOV 3 cm x 4 cm and 3D Accuitomo FPD with a flat panel detector, FOVs 4 cm x 4 cm and 6 cm x 6 cm. The 3D Accuitomo was compared with intra-oral radiography for endodontic diagnosis in 35 patients with 46 teeth analyzed, of which 41 were endodontically treated. Three observers assessed the images by consensus. The result showed that CBCT imaging was superior with a higher number of teeth diagnosed with periapical lesions (42 vs 32 teeth). When evaluating 3D Accuitomo examinations in the posterior mandible in 30 patients, visibility of marginal bone crest and mandibular canal, important anatomic structures for implant planning, was high with good observer agreement among seven observers. Radiographic techniques have to be evaluated concerning radiation dose, which requires well-defined and easy-to-use methods. Two methods: CT dose index (CTDI), prevailing method for CT units, and dose-area product (DAP) were evaluated for calculating effective dose (E) for both units. An asymmetric dose distribution was revealed when a clinical situation was simulated. Hence, the CTDI method was not applicable for these units with small FOVs. Based on DAP values from 90 patient examinations effective dose was estimated for three diagnostic tasks: implant planning in posterior mandible and

  19. Image reconstruction for PET/CT scanners: past achievements and future challenges

    PubMed Central

    Tong, Shan; Alessio, Adam M; Kinahan, Paul E

    2011-01-01

    PET is a medical imaging modality with proven clinical value for disease diagnosis and treatment monitoring. The integration of PET and CT on modern scanners provides a synergy of the two imaging modalities. Through different mathematical algorithms, PET data can be reconstructed into the spatial distribution of the injected radiotracer. With dynamic imaging, kinetic parameters of specific biological processes can also be determined. Numerous efforts have been devoted to the development of PET image reconstruction methods over the last four decades, encompassing analytic and iterative reconstruction methods. This article provides an overview of the commonly used methods. Current challenges in PET image reconstruction include more accurate quantitation, TOF imaging, system modeling, motion correction and dynamic reconstruction. Advances in these aspects could enhance the use of PET/CT imaging in patient care and in clinical research studies of pathophysiology and therapeutic interventions. PMID:21339831
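
    Among the iterative methods covered by such overviews, MLEM is the canonical example; the toy sketch below uses a tiny dense system matrix purely for illustration (clinical reconstruction adds sparse projectors, normalisation, attenuation, scatter and randoms corrections).

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """Maximum-likelihood EM reconstruction for counts y = Poisson(A @ x).

    A : (n_bins, n_voxels) system matrix, y : measured counts per bin."""
    x = np.ones(A.shape[1])                     # uniform initial image
    sens = A.sum(axis=0)                        # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ x
        ratio = y / np.maximum(proj, 1e-12)     # avoid division by zero
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x

# Toy example: 2 "voxels", 3 detector bins
A = np.array([[1.0, 0.2],
              [0.5, 0.5],
              [0.2, 1.0]])
true_x = np.array([4.0, 1.0])
y = A @ true_x                                  # noise-free "measurement"
print(mlem(A, y))                               # converges towards [4, 1]
```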

  20. Functional imaging using the retinal function imager: direct imaging of blood velocity, achieving fluorescein angiography-like images without any contrast agent, qualitative oximetry, and functional metabolic signals.

    PubMed

    Izhaky, David; Nelson, Darin A; Burgansky-Eliash, Zvia; Grinvald, Amiram

    2009-07-01

    The Retinal Function Imager (RFI; Optical Imaging, Rehovot, Israel) is a unique, noninvasive multiparameter functional imaging instrument that directly measures hemodynamic parameters such as retinal blood-flow velocity, oximetric state, and metabolic responses to photic activation. In addition, it allows capillary perfusion mapping without any contrast agent. These parameters of retinal function are degraded by retinal abnormalities. This review delineates the development of these parameters and demonstrates their clinical applicability for noninvasive detection of retinal function in several modalities. The results suggest multiple clinical applications for early diagnosis of retinal diseases and possible critical guidance of their treatment.

  1. Prediction of water quality parameters from SAR images by using multivariate and texture analysis models

    NASA Astrophysics Data System (ADS)

    Shareef, Muntadher A.; Toumi, Abdelmalek; Khenchaf, Ali

    2014-10-01

    Remote sensing is one of the most important tools for monitoring, estimating and predicting Water Quality Parameters (WQPs). The traditional methods used for monitoring pollutants generally rely on optical images. In this paper, we present a new approach based on Synthetic Aperture Radar (SAR) images, which we used to map the region of interest and to estimate the WQPs. To achieve this estimation quality, texture analysis is exploited to improve the regression models. These models are established and developed to estimate six commonly monitored water quality parameters from texture parameters extracted from Terra SAR-X data. For this purpose, the Gray Level Cooccurrence Matrix (GLCM) is used to derive several regression models from six texture parameters: contrast, correlation, energy, homogeneity, entropy and variance. For each predicted model, an accuracy value is computed from the probability value given by the regression analysis model of each parameter. In order to validate our approach, we used two datasets of the water region, one for training and one for testing. To evaluate and validate the proposed model, we applied it to the training set. In the last stage, we used fuzzy K-means clustering to generalize the water quality estimation over the whole water region extracted from the segmented Terra SAR-X image. The obtained results showed that there is a good statistical correlation between the in situ water quality and Terra SAR-X data, and also demonstrated that the characteristics obtained by texture analysis are able to monitor and predict the distribution of WQPs in large rivers with high accuracy.
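
    GLCM features of the kind listed can be extracted with scikit-image (graycomatrix/graycoprops); entropy and variance are not built-in properties, so they are computed directly here. The synthetic patch stands in for a Terra SAR-X subimage and all parameter choices are illustrative.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(3)
patch = rng.integers(0, 64, size=(128, 128), dtype=np.uint8)   # stand-in SAR patch

glcm = graycomatrix(patch, distances=[1], angles=[0], levels=64,
                    symmetric=True, normed=True)

features = {name: graycoprops(glcm, name)[0, 0]
            for name in ("contrast", "correlation", "energy", "homogeneity")}
p = glcm[:, :, 0, 0]
features["entropy"] = -np.sum(p[p > 0] * np.log2(p[p > 0]))
features["variance"] = np.var(patch.astype(float))

print(features)   # candidate predictors for the water-quality regression models
```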

  2. Optimization of exposure in panoramic radiography while maintaining image quality using adaptive filtering.

    PubMed

    Svenson, Björn; Larsson, Lars; Båth, Magnus

    2016-01-01

    Objective The purpose of the present study was to investigate the potential of using advanced external adaptive image processing for maintaining image quality while reducing exposure in dental panoramic storage phosphor plate (SPP) radiography. Materials and methods Thirty-seven SPP radiographs of a skull phantom were acquired using a Scanora panoramic X-ray machine with various tube load, tube voltage, SPP sensitivity and filtration settings. The radiographs were processed using General Operator Processor (GOP) technology. Fifteen dentists, all within the dental radiology field, compared the structural image quality of each radiograph with a reference image on a 5-point rating scale in a visual grading characteristics (VGC) study. The reference image was acquired with the acquisition parameters commonly used in daily operation (70 kVp, 150 mAs and sensitivity class 200) and processed using the standard process parameters supplied by the modality vendor. Results All GOP-processed images with similar (or higher) dose as the reference image resulted in higher image quality than the reference. All GOP-processed images with similar image quality as the reference image were acquired at a lower dose than the reference. This indicates that the external image processing improved the image quality compared with the standard processing. Regarding acquisition parameters, no strong dependency of the image quality on the radiation quality was seen and the image quality was mainly affected by the dose. Conclusions The present study indicates that advanced external adaptive image processing may be beneficial in panoramic radiography for increasing the image quality of SPP radiographs or for reducing the exposure while maintaining image quality.

  3. Diffusion imaging quality control via entropy of principal direction distribution

    PubMed Central

    Oguz, Ipek; Smith, Rachel G.; Verde, Audrey R.; Dietrich, Cheryl; Gupta, Aditya; Escolar, Maria L.; Piven, Joseph; Pujol, Sonia; Vachet, Clement; Gouttard, Sylvain; Gerig, Guido; Dager, Stephen; McKinstry, Robert C.; Paterson, Sarah; Evans, Alan C.; Styner, Martin A.

    2013-01-01

    Diffusion MR imaging has received increasing attention in the neuroimaging community, as it yields new insights into the microstructural organization of white matter that are not available with conventional MRI techniques. While the technology has enormous potential, diffusion MRI suffers from a unique and complex set of image quality problems, limiting the sensitivity of studies and reducing the accuracy of findings. Furthermore, the acquisition time for diffusion MRI is longer than for conventional MRI due to the need for multiple acquisitions to obtain directionally encoded Diffusion Weighted Images (DWIs). This leads to increased motion artifacts, reduced signal-to-noise ratio (SNR), and increased proneness to a wide variety of artifacts, including eddy-current and motion artifacts, “venetian blind” artifacts, as well as slice-wise and gradient-wise inconsistencies. Such artifacts mandate stringent Quality Control (QC) schemes in the processing of diffusion MRI data. Most existing QC procedures are conducted in the DWI domain and/or on a voxel level, but our own experiments show that these methods often do not fully detect and eliminate certain types of artifacts, which are often only visible when investigating groups of DWIs or a derived diffusion model, such as the widely used diffusion tensor imaging (DTI). Here, we propose a novel regional QC measure in the DTI domain that employs the entropy of the regional distribution of the principal directions (PD). The PD entropy quantifies the scattering and spread of the principal diffusion directions and is invariant to the patient's position in the scanner. A high entropy value indicates that the PDs are distributed relatively uniformly, while a low entropy value indicates the presence of clusters in the PD distribution. The novel QC measure is intended to complement the existing set of QC procedures by detecting and correcting residual artifacts. Such residual artifacts cause directional bias in the measured PD and here
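
    A minimal sketch of the proposed regional QC measure, assuming the principal directions of a region are already available as unit vectors: antipodal directions are folded together, binned on a coarse spherical grid, and the Shannon entropy of the bin occupancies is returned. The equal-angle binning scheme and bin counts are assumptions made for illustration, not the authors' exact implementation.

```python
import numpy as np

def pd_entropy(principal_dirs, n_theta=18, n_phi=36):
    """Shannon entropy (bits) of the regional distribution of principal directions.

    principal_dirs: (N, 3) array of unit vectors; a principal diffusion direction is
    defined only up to sign, so d and -d are folded onto the same hemisphere.
    """
    v = np.array(principal_dirs, dtype=float)
    v[v[:, 2] < 0] *= -1                                        # fold antipodal pairs
    theta = np.arccos(np.clip(v[:, 2], -1.0, 1.0))              # polar angle, [0, pi/2]
    phi = np.arctan2(v[:, 1], v[:, 0])                          # azimuth, [-pi, pi]
    hist, _, _ = np.histogram2d(theta, phi, bins=[n_theta, n_phi],
                                range=[[0, np.pi / 2], [-np.pi, np.pi]])
    p = hist.ravel() / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))                       # high = uniform spread

# Uniformly scattered directions give high entropy; a single cluster gives ~0.
rng = np.random.default_rng(0)
iso = rng.normal(size=(5000, 3))
iso /= np.linalg.norm(iso, axis=1, keepdims=True)
clustered = np.tile([0.0, 0.0, 1.0], (5000, 1))
print(pd_entropy(iso), pd_entropy(clustered))
```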

  4. Integrating empowerment evaluation and quality improvement to achieve healthcare improvement outcomes.

    PubMed

    Wandersman, Abraham; Alia, Kassandra Ann; Cook, Brittany; Ramaswamy, Rohit

    2015-10-01

    While the body of evidence-based healthcare interventions grows, the ability of health systems to deliver these interventions effectively and efficiently lags behind. Quality improvement approaches, such as the model for improvement, have demonstrated some success in healthcare but their impact has been lessened by implementation challenges. To help address these challenges, we describe the empowerment evaluation approach that has been developed by programme evaluators and a method for its application (Getting To Outcomes (GTO)). We then describe how GTO can be used to implement healthcare interventions. An illustrative healthcare quality improvement example that compares the model for improvement and the GTO method for reducing hospital admissions through improved diabetes care is described. We conclude with suggestions for integrating GTO and the model for improvement.

  5. Integrating empowerment evaluation and quality improvement to achieve healthcare improvement outcomes

    PubMed Central

    Wandersman, Abraham; Alia, Kassandra Ann; Cook, Brittany; Ramaswamy, Rohit

    2015-01-01

    While the body of evidence-based healthcare interventions grows, the ability of health systems to deliver these interventions effectively and efficiently lags behind. Quality improvement approaches, such as the model for improvement, have demonstrated some success in healthcare but their impact has been lessened by implementation challenges. To help address these challenges, we describe the empowerment evaluation approach that has been developed by programme evaluators and a method for its application (Getting To Outcomes (GTO)). We then describe how GTO can be used to implement healthcare interventions. An illustrative healthcare quality improvement example that compares the model for improvement and the GTO method for reducing hospital admissions through improved diabetes care is described. We conclude with suggestions for integrating GTO and the model for improvement. PMID:26178332

  6. Six iterative reconstruction algorithms in brain CT: a phantom study on image quality at different radiation dose levels

    PubMed Central

    Olsson, M-L; Siemund, R; Stålhammar, F; Björkman-Burtscher, I M; Söderberg, M

    2013-01-01

    Objective: To evaluate the image quality produced by six different iterative reconstruction (IR) algorithms in four CT systems in the setting of brain CT, using different radiation dose levels and iterative image optimisation levels. Methods: An image quality phantom, supplied with a bone mimicking annulus, was examined using four CT systems from different vendors and four radiation dose levels. Acquisitions were reconstructed using conventional filtered back-projection (FBP), three levels of statistical IR and, when available, a model-based IR algorithm. The evaluated image quality parameters were CT numbers, uniformity, noise, noise-power spectra, low-contrast resolution and spatial resolution. Results: Compared with FBP, noise reduction was achieved by all six IR algorithms at all radiation dose levels, with further improvement seen at higher IR levels. Noise-power spectra revealed changes in noise distribution relative to the FBP for most statistical IR algorithms, especially the two model-based IR algorithms. Compared with FBP, variable degrees of improvements were seen in both objective and subjective low-contrast resolutions for all IR algorithms. Spatial resolution was improved with both model-based IR algorithms and one of the statistical IR algorithms. Conclusion: The four statistical IR algorithms evaluated in the study all improved the general image quality compared with FBP, with improvement seen for most or all evaluated quality criteria. Further improvement was achieved with one of the model-based IR algorithms. Advances in knowledge: The six evaluated IR algorithms all improve the image quality in brain CT but show different strengths and weaknesses. PMID:24049128
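
    One of the quantities compared above, the noise-power spectrum, can be estimated from repeated uniform-region ROIs as in the sketch below; the ensemble size, ROI dimensions, pixel size and simple mean-subtraction detrending are assumptions rather than the study's exact protocol.

```python
import numpy as np

def nps_2d(rois, pixel_size_mm):
    """2D noise-power spectrum (HU^2 mm^2) from an ensemble of uniform-region ROIs."""
    rois = np.asarray(rois, dtype=float)                  # shape (N, ny, nx)
    n, ny, nx = rois.shape
    noise = rois - rois.mean(axis=(1, 2), keepdims=True)  # simple per-ROI detrending
    dft = np.fft.fftshift(np.fft.fft2(noise), axes=(1, 2))
    return (np.abs(dft) ** 2).mean(axis=0) * pixel_size_mm ** 2 / (nx * ny)

# Synthetic check with white noise: the NPS integral recovers the pixel variance.
rng = np.random.default_rng(1)
rois = rng.normal(0.0, 10.0, size=(64, 128, 128))         # sigma = 10 HU
nps = nps_2d(rois, pixel_size_mm=0.5)
df = 1.0 / (128 * 0.5)                                    # frequency bin width (1/mm)
print("integrated NPS ≈", nps.sum() * df ** 2)            # ≈ 100 HU^2
```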

  7. Study on effects of scan parameters on the image quality and tip wear in AFM tapping mode.

    PubMed

    Xue, Bo; Yan, Yongda; Hu, Zhenjiang; Zhao, Xuesen

    2014-01-01

    Because of the tip-sample interaction, which is the measurement principle of the Atomic Force Microscope (AFM), tip wear constantly occurs during scanning. A tip blunted by wear introduces more of the tip geometry into the image and correspondingly increases the measurement error. In the present study, the tapping-mode scan parameters that affect the wear of single-crystal silicon tips, such as the approach rate, the scan rate, the scan amplitude, and the integral gain, are investigated. By proposing a parameter that reflects imaging quality, the ability of the tip to trace the sample surface is evaluated quantitatively. The influence of the scan parameters on this imaging quality parameter is obtained from experiments. Finally, in order to obtain images with minimal influence from tip wear, tip wear experiments are carried out and the optimal parameter settings that mitigate tip wear are determined.

  8. Influence of limited random-phase of objects on the image quality of 3D holographic display

    NASA Astrophysics Data System (ADS)

    Ma, He; Liu, Juan; Yang, Minqiang; Li, Xin; Xue, Gaolei; Wang, Yongtian

    2017-02-01

    A limited-random-phase time-average method is proposed to suppress the speckle noise of three-dimensional (3D) holographic display. The initial phase and the range of the random phase are studied, as well as their influence on the optical quality of the reconstructed images, and appropriate initial phase ranges on object surfaces are obtained. Numerical simulations and optical experiments with 2D and 3D reconstructed images are performed, showing that objects with a limited phase range can suppress the speckle noise in reconstructed images effectively. Owing to its effectiveness and simplicity, the method is expected to achieve high-quality reconstructed images in 2D or 3D display in the future.

  9. Quality of Research Design Moderates Effects of Grade Retention on Achievement: A Meta-analytic, Multi-level Analysis

    PubMed Central

    Allen, Chiharu S.; Chen, Qi; Willson, Victor L.; Hughes, Jan N.

    2010-01-01

    The present meta-analysis examined the effect of grade retention on academic outcomes and investigated systematic sources of variability in effect sizes. Using multi-level modeling, we investigated characteristics of 207 effect sizes across 22 studies published between 1990 and 2007 at two levels: the study (between) level and the individual (within) level. Design quality was a study-level variable; individual-level variables were the median grade retained and the median number of years post retention. Higher design quality was associated with less negative effects: studies employing medium- to high-quality methodological designs yielded effect sizes that were not statistically significantly different from zero and were 0.34 higher (more positive) than those of studies with low design quality. Years post retention was negatively associated with retention effects, and this effect was stronger for studies using grade comparisons rather than age comparisons. The results challenge the widely held view that retention has a negative impact on achievement. Suggestions for future research are discussed. PMID:20717492

  10. Achieving quality and fiscal outcomes in patient care: the clinical mentor care delivery model.

    PubMed

    Burritt, Joan E; Wallace, Patricia; Steckel, Cynthia; Hunter, Anita

    2007-12-01

    Contemporary patient care requires sophisticated clinical judgment and reasoning in all nurses. However, the level of development regarding these abilities varies within a staff. Traditional care models lack the structure and process to close the expertise gap creating potential patient safety risks. In an innovative model, senior, experienced nurses were relieved of direct patient care assignments to oversee nursing care delivery. Evaluation of the model showed significant impact on quality and fiscal outcomes.

  11. Comparison of no-reference image quality assessment machine learning-based algorithms on compressed images

    NASA Astrophysics Data System (ADS)

    Charrier, Christophe; Saadane, AbdelHakim; Fernandez-Maloigne, Christine

    2015-01-01

    No-reference image quality metrics are of fundamental interest as they can be embedded in practical applications. The main goal of this paper is to perform a comparative study of seven well-known no-reference learning-based image quality algorithms. To test the performance of these algorithms, three public databases are used. As a first step, the trial algorithms are compared when no new learning is performed. The second step investigates how the training set influences the results. The Spearman Rank Ordered Correlation Coefficient (SROCC) is utilized to measure and compare the performance. In addition, a hypothesis test is conducted to evaluate the statistical significance of the performance of each tested algorithm.
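
    The SROCC comparison described above is straightforward to reproduce with scipy; the metric outputs and subjective scores below are placeholders for the database MOS values and algorithm predictions.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
mos = rng.uniform(1, 5, size=100)                    # subjective scores (placeholder)
metric_a = mos + rng.normal(0, 0.3, size=100)        # a well-correlated NR metric
metric_b = rng.uniform(0, 1, size=100)               # an uninformative metric

for name, scores in [("metric A", metric_a), ("metric B", metric_b)]:
    rho, pval = spearmanr(scores, mos)
    print(f"{name}: SROCC = {rho:.3f} (p = {pval:.2g})")
```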

  12. Damage and quality assessment in wheat by NIR hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Delwiche, Stephen R.; Kim, Moon S.; Dong, Yanhong

    2010-04-01

    Fusarium head blight is a fungal disease that affects the world's small grains, such as wheat and barley. Attacking the spikelets during development, the fungus causes a reduction of yield and grain of poorer processing quality. It also is a health concern because of the secondary metabolite, deoxynivalenol, which often accompanies the fungus. While chemical methods exist to measure the concentration of the mycotoxin and manual visual inspection is used to ascertain the level of Fusarium damage, research has been active in developing fast, optically based techniques that can assess this form of damage. In the current study a near-infrared (1000-1700 nm) hyperspectral image system was assembled and applied to Fusarium-damaged kernel recognition. With anticipation of an eventual multispectral imaging system design, 5 wavelengths were manually selected from a pool of 146 images as the most promising, such that when combined in pairs or triplets, Fusarium damage could be identified. We present the results of two pairs of wavelengths [(1199, 1474 nm) and (1315, 1474 nm)] whose reflectance values produced adequate separation of kernels of healthy appearance (i.e., asymptomatic condition) from kernels possessing Fusarium damage.

  13. Broadcast quality 3840 × 2160 color imager operating at 30 frames/s

    NASA Astrophysics Data System (ADS)

    Iodice, Robert M.; Joyner, Michael; Hong, Canaan S.; Parker, David P.

    2003-05-01

    Both the active column sensor (ACS) pixel sensing technology and the PVS-Bus multiplexer technology have been applied to a color imaging array to produce an extraordinarily high-resolution color imager of greater than 8 million pixels with image quality and speed suitable for a broad range of applications including digital cinema, broadcast video and security/surveillance. The imager has been realized in a standard 0.5 μm CMOS technology using double-poly and triple metal (DP3M) construction and features a pixel size of 7.5 μm by 7.5 μm. Mask level stitching enables the construction of a high quality, low dark current imager having an array size of 16.2 mm by 28.8 mm. The image array aspect ratio is 16:9 with a diagonal of 33 mm, making it suitable for HDTV applications using optics designed for 35 mm still photography. A high modulation transfer function (MTF) is maintained by utilizing micro lenses along with an RGB Bayer pattern color filter array. The frame rate of 30 frames/s in progressive mode is achieved using the PVS-Bus technology with eight output ports, which corresponds to an overall pixel rate of 248 M-pixel per second. High dynamic range and low fixed pattern noise are achieved by combining photodiode pixels with the ACS pixel sensing technology and a modified correlated double-sampling (CDS) technique. Exposure time can be programmed by the user from a full frame of integration to as low as a single line of integration in steps of 14.8 μs. The output gain is programmable from 0 dB to +12 dB in 256 steps; the output offset is also programmable over a range of 765 mV in 256 steps. This QuadHDTV imager has been delivered to customers and has been demonstrated in a prototype camera that provides full resolution video with all image processing on board. The prototype camera operates at 2160p24, 2160p30 and 2160i60.

  14. “Lucky Averaging”: Quality improvement on Adaptive Optics Scanning Laser Ophthalmoscope Images

    PubMed Central

    Huang, Gang; Zhong, Zhangyi; Zou, Weiyao; Burns, Stephen A.

    2012-01-01

    Adaptive optics (AO) has greatly improved retinal image resolution. However, even with AO, temporal and spatial variations in image quality still occur due to wavefront fluctuations, intra-frame focus shifts and other factors. As a result, aligning and averaging images can produce a mean image that has lower resolution or contrast than the best images within a sequence. To address this, we propose an image post-processing scheme called “lucky averaging”, analogous to lucky imaging (Fried, 1978), based on computing the best local contrast over time. Results from eye data demonstrate improvements in image quality. PMID:21964097
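
    A toy sketch of the "best local contrast over time" idea behind lucky averaging: for each tile of a registered frame stack, only the frames with the highest local RMS contrast are averaged. The tile size, selection fraction and contrast definition are illustrative assumptions, and real AOSLO data would first need to be aligned.

```python
import numpy as np

def lucky_average(frames, tile=32, keep_frac=0.25):
    """Average, per tile, only the frames with the highest local RMS contrast."""
    frames = np.asarray(frames, dtype=float)             # (T, H, W), pre-registered
    t, h, w = frames.shape
    k = max(1, int(round(keep_frac * t)))
    out = np.zeros((h, w))
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            block = frames[:, y:y + tile, x:x + tile]
            contrast = block.std(axis=(1, 2)) / (block.mean(axis=(1, 2)) + 1e-9)
            best = np.argsort(contrast)[-k:]             # frames with the sharpest tile
            out[y:y + tile, x:x + tile] = block[best].mean(axis=0)
    return out

rng = np.random.default_rng(0)
frames = rng.normal(100.0, 10.0, size=(40, 128, 128))    # placeholder frame stack
lucky = lucky_average(frames)
```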

  15. Comparison of image quality of different iodine isotopes (I-123, I-124, and I-131).

    PubMed

    Rault, Erwann; Vandenberghe, Stefaan; Van Holen, Roel; De Beenhouwer, Jan; Staelens, Steven; Lemahieu, Ignace

    2007-06-01

    I-131 is a frequently used isotope for radionuclide therapy. This technique for cancer treatment requires a pre-therapeutic dosimetric study. The latter is usually performed (for this radionuclide) by directly imaging the uptake of the therapeutic radionuclide in the body or by replacing it with one of its isotopes that is more suitable for imaging. This study aimed to compare the image quality that can be achieved by three iodine isotopes: I-131 and I-123 for single-photon emission computed tomography imaging, and I-124 for positron emission tomography imaging. The imaging characteristics of each isotope were investigated using simulated data. Their spectra, point-spread functions, and contrast-recovery curves were drawn and compared. I-131 was imaged with a high-energy all-purpose (HEAP) collimator, whereas two collimators were compared for I-123: low-energy high-resolution (LEHR) and medium energy (ME). No mechanical collimation was used for I-124. The influence of small high-energy peaks (>0.1%) on the contamination of the main energy window was evaluated. Furthermore, the effect of a scattering medium was investigated and the triple energy window (TEW) correction was used for spectral-based scatter correction. Results showed that I-123 gave the best results with a LEHR collimator when the scatter correction was applied. Without correction, the ME collimator reduced the effects of high-energy contamination. I-131 offered the worst results. This can be explained by the large amount of septal penetration from the photopeak and by the collimator, which gave a low spatial resolution. I-124 gave the best imaging properties owing to its electronic collimation (high sensitivity) and a short coincidence time window.
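
    The triple energy window (TEW) correction mentioned above estimates the scatter inside the photopeak window from two narrow flanking windows and subtracts it pixel by pixel; a minimal sketch follows, with window widths and count images chosen purely for illustration rather than taken from the study.

```python
import numpy as np

def tew_correct(peak, lower, upper, w_peak, w_lower, w_upper):
    """Triple-energy-window scatter correction applied pixel-wise.

    peak, lower, upper : count images in the photopeak and two narrow flanking windows
    w_*                : the corresponding energy window widths (keV)
    """
    scatter = (lower / w_lower + upper / w_upper) * w_peak / 2.0
    return np.clip(peak - scatter, 0.0, None)

# Illustrative numbers only (e.g. a 159 keV I-123 photopeak with 3 keV sub-windows).
rng = np.random.default_rng(0)
peak_img = rng.poisson(100, size=(64, 64)).astype(float)
low_img = rng.poisson(5, size=(64, 64)).astype(float)
up_img = rng.poisson(5, size=(64, 64)).astype(float)
primary = tew_correct(peak_img, low_img, up_img, w_peak=31.8, w_lower=3.0, w_upper=3.0)
```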

  16. Copper filtration in pediatric digital X-ray imaging: its impact on image quality and dose.

    PubMed

    Brosi, Philippe; Stuessi, Anja; Verdun, Francis R; Vock, Peter; Wolf, Rainer

    2011-07-01

    The effect of copper (Cu) filtration on image quality and dose in different digital X-ray systems was investigated. Two computed radiography systems and one digital radiography detector were used. Three different polymethylmethacrylate blocks simulated the pediatric body. The effect of Cu filters of 0.1, 0.2, and 0.3 mm thickness on the entrance surface dose (ESD) and the corresponding effective doses (EDs) were measured at tube voltages of 60, 66, and 73 kV. Image quality was evaluated in a contrast-detail phantom with an automated analyzer software. Cu filters of 0.1, 0.2, and 0.3 mm thickness decreased the ESD by 25-32%, 32-39%, and 40-44%, respectively, the ranges depending on the respective tube voltages. There was no consistent decline in image quality due to increasing Cu filtration. The estimated ED of anterior-posterior (AP) chest projections was reduced by up to 23%. No relevant reduction in the ED was noted in AP radiographs of the abdomen and pelvis or in posterior-anterior radiographs of the chest. Cu filtration reduces the ESD, but generally does not reduce the effective dose. Cu filters can help protect radiosensitive superficial organs, such as the mammary glands in AP chest projections.

  17. Image Quality Performance Measurement of the microPET Focus 120

    NASA Astrophysics Data System (ADS)

    Ballado, Fernando Trejo; López, Nayelli Ortega; Flores, Rafael Ojeda; Ávila-Rodríguez, Miguel A.

    2010-12-01

    The aim of this work is to evaluate the characteristics involved in the image reconstruction of the microPET Focus 120. Two different phantoms were used for this evaluation: a miniature hot-rod Derenzo phantom and a National Electrical Manufacturers Association (NEMA) NU4-2008 image quality (IQ) phantom. The best image quality was obtained when using OSEM3D as the reconstruction method, reaching a spatial resolution of 1.5 mm with the Derenzo phantom filled with 18F. The image quality test results indicate superior image quality for the Focus 120 compared with previous microPET models.

  18. Task-based measures of image quality and their relation to radiation dose and patient risk

    PubMed Central

    Barrett, Harrison H.; Myers, Kyle J.; Hoeschen, Christoph; Kupinski, Matthew A.; Little, Mark P.

    2015-01-01

    The theory of task-based assessment of image quality is reviewed in the context of imaging with ionizing radiation, and objective figures of merit (FOMs) for image quality are summarized. The variation of the FOMs with the task, the observer and especially with the mean number of photons recorded in the image is discussed. Then various standard methods for specifying radiation dose are reviewed and related to the mean number of photons in the image and hence to image quality. Current knowledge of the relation between local radiation dose and the risk of various adverse effects is summarized, and some graphical depictions of the tradeoffs between image quality and risk are introduced. Then various dose-reduction strategies are discussed in terms of their effect on task-based measures of image quality. PMID:25564960

  19. An education-service partnership to achieve safety and quality improvement competencies in nursing.

    PubMed

    Fater, Kerry H; Ready, Robert

    2011-12-01

    The Institute of Medicine recommends that educational and service organizations develop partnerships to promote and prioritize competency development for nurses. This article describes a collaborative project between a college of nursing and a regional health care system. The project's aim was to foster the development of safety and quality by creating a curriculum based on the 10 core competencies identified by the Massachusetts Department of Higher Education Nurse of the Future Competency Committee. To accomplish this goal, learning experiences were created to address competency development. Competency-based education will help ensure that nursing graduates are adequately prepared to meet the current and future health care needs of our population.

  20. Added copper filtration in digital paediatric double-contrast colon examinations: effects on radiation dose and image quality.

    PubMed

    Hansson, B; Finnbogason, T; Schuwert, P; Persliden, J

    1997-01-01

    Paediatric double-contrast barium enema examinations are usually performed at high tube voltage, 102-105 kV. The aim of this study was to investigate how much the effective dose to the child could be reduced by increasing the X-ray energy further by adding a copper filter to the beam, and whether this dose reduction could be achieved without endangering image quality. Organ doses to an anthropomorphic phantom simulating a 1-year-old child were measured using thermoluminescence dosimetry to assess the effective dose, and this value was compared with the energy imparted, which was obtained from kerma-area product measurements. To verify that the image quality achieved with this added filtration was still diagnostically acceptable, the study included 15 patient examinations. Since the increased X-ray energy will most probably affect low-contrast objects, image quality was also evaluated with two different phantoms containing low-contrast objects. The effective dose for a complete examination can be decreased by 44% and the energy imparted by 77% when a 0.3-mm copper filter is inserted in the beam at a tube voltage of 102 kV. The patient study did not show any significant deterioration of image quality, whereas phantom measurements of contrast-detail resolution and signal-to-noise ratio were marginally impaired by the added copper filtration. This technique is now in clinical practice for paediatric colon examinations.

  1. Sparse Representation-Based Image Quality Index With Adaptive Sub-Dictionaries.

    PubMed

    Li, Leida; Cai, Hao; Zhang, Yabin; Lin, Weisi; Kot, Alex C; Sun, Xingming

    2016-08-01

    Distortions cause structural changes in digital images, leading to degraded visual quality. Dictionary-based sparse representation has been widely studied recently due to its ability to extract inherent image structures. Meanwhile, it can extract image features with slightly higher-level semantics. Intuitively, sparse representation can be used for image quality assessment, because visible distortions can cause significant changes to the sparse features. In this paper, a new sparse representation-based image quality assessment model is proposed based on the construction of adaptive sub-dictionaries. An overcomplete dictionary trained on natural images is employed to capture the structural changes between the reference and distorted images by sparse feature extraction via adaptive sub-dictionary selection. Based on the observation that image sparse features are invariant to weak degradations and that perceived image quality is generally influenced by diverse factors, three auxiliary quality features are added: gradient, color, and luminance information. The proposed method is not sensitive to the training images, so a universal dictionary can be adopted for quality evaluation. Extensive experiments on five public image quality databases demonstrate that the proposed method produces state-of-the-art results and delivers consistently good performance across different image quality databases.

  2. Evaluation of image quality of digital photo documentation of female genital injuries following sexual assault.

    PubMed

    Ernst, E J; Speck, Patricia M; Fitzpatrick, Joyce J

    2011-12-01

    With the patient's consent, physical injuries sustained in a sexual assault are evaluated and treated by the sexual assault nurse examiner (SANE) and documented on preprinted traumagrams and with photographs. Digital imaging is now available to the SANE for documentation of sexual assault injuries, but studies of the image quality of forensic digital imaging of female genital injuries after sexual assault were not found in the literature. The Photo Documentation Image Quality Scoring System (PDIQSS) was developed to rate the image quality of digital photo documentation of female genital injuries after sexual assault. Three expert observers performed evaluations on 30 separate images at two points in time. An image quality score, the sum of eight integral technical and anatomical attributes on the PDIQSS, was obtained for each image. Individual image quality ratings, defined as the rating of image quality for each attribute separately, were also determined. The results demonstrated a high level of image quality and agreement when measured in all dimensions. For the SANE in clinical practice, the results of this study indicate that a high degree of agreement exists between expert observers when using the PDIQSS to rate the image quality of individual digital photographs of female genital injuries after sexual assault.

  3. Achieving real-time capsule endoscopy (CE) video visualization through panoramic imaging

    NASA Astrophysics Data System (ADS)

    Yi, Steven; Xie, Jean; Mui, Peter; Leighton, Jonathan A.

    2013-02-01

    In this paper, we present a novel real-time capsule endoscopy (CE) video visualization concept based on panoramic imaging. Typical CE videos run about 8 hours and are manually reviewed by physicians to locate diseases such as bleeding and polyps. To date, there is no commercially available tool capable of providing stabilized and processed CE video that is easy to analyze in real time, so the burden on physicians' disease-finding efforts remains considerable. In fact, since the CE camera sensor has a limited forward-looking view and a low image frame rate (typically 2 frames per second), and captures very close-range imaging of the GI tract surface, it is no surprise that traditional visualization methods based on tracking and registration often fail to work. This paper presents a novel concept for real-time CE video stabilization and display. Instead of working directly on traditional forward-looking FOV (field of view) images, we work on panoramic images to bypass many problems facing traditional imaging modalities. Methods for panoramic image generation based on optical lens principles, leading to real-time data visualization, will be presented. In addition, non-rigid panoramic image registration methods will be discussed.

  4. Nuclear imaging of the breast: translating achievements in instrumentation into clinical use.

    PubMed

    Hruska, Carrie B; O'Connor, Michael K

    2013-05-01

    Approaches to imaging the breast with nuclear medicine and∕or molecular imaging methods have been under investigation since the late 1980s when a technique called scintimammography was first introduced. This review charts the progress of nuclear imaging of the breast over the last 20 years, covering the development of newer techniques such as breast specific gamma imaging, molecular breast imaging, and positron emission mammography. Key issues critical to the adoption of these technologies in the clinical environment are discussed, including the current status of clinical studies, the efforts at reducing the radiation dose from procedures associated with these technologies, and the relevant radiopharmaceuticals that are available or under development. The necessary steps required to move these technologies from bench to bedside are also discussed.

  5. Nuclear imaging of the breast: Translating achievements in instrumentation into clinical use

    PubMed Central

    Hruska, Carrie B.; O'Connor, Michael K.

    2013-01-01

    Approaches to imaging the breast with nuclear medicine and/or molecular imaging methods have been under investigation since the late 1980s when a technique called scintimammography was first introduced. This review charts the progress of nuclear imaging of the breast over the last 20 years, covering the development of newer techniques such as breast specific gamma imaging, molecular breast imaging, and positron emission mammography. Key issues critical to the adoption of these technologies in the clinical environment are discussed, including the current status of clinical studies, the efforts at reducing the radiation dose from procedures associated with these technologies, and the relevant radiopharmaceuticals that are available or under development. The necessary steps required to move these technologies from bench to bedside are also discussed. PMID:23635248

  6. Quality control of VMAT synchronization using portal imaging.

    PubMed

    Bedford, James L; Chajecka-Szczygielska, Honorata; Thomas, Michael D R

    2015-01-08

    For accurate delivery of volumetric-modulated arc therapy (VMAT), the gantry position should be synchronized with the multileaf collimator (MLC) leaf positions and the dose rate. This study, therefore, aims to implement quality control (QC) of VMAT synchronization, with as few arcs as possible and with minimal data handling time, using portal imaging. A steel bar of diameter 12 mm is accurately positioned in the G-T direction, 80 mm laterally from the isocenter. An arc prescription irradiates the bar with a 16 mm × 220 mm field during a complete 360° arc, so as to cast a shadow of the bar onto the portal imager. This results in a sinusoidal sweep of the field and shadow across the portal imager and back. The method is evaluated by simulating gantry position errors of 1°-9° at one control point, dose errors of 2 monitor units to 20 monitor units (MU) at one control point (0.3%-3% overall), and MLC leaf position errors of 1 mm - 6 mm at one control point. Inhomogeneity metrics are defined to characterize the synchronization of all leaves and of individual leaves with respect to the complete set. Typical behavior is also investigated for three models of accelerator. In the absence of simulated errors, the integrated images show uniformity, and with simulated delivery errors, irregular patterns appear. The inhomogeneity metrics increase by 67% due to a 4° gantry position error, 33% due to an 8 MU (1.25%) dose error, and 70% due to a 2 mm MLC leaf position error. The method is more sensitive to errors at gantry angle 90°/270° than at 0°/180° due to the geometry of the test. This method provides fast and effective VMAT QC suitable for inclusion in a monthly accelerator QC program. The test is able to detect errors in the delivery of individual control points, with the possibility of using movie images to further investigate suspicious image features.

  7. CLINICAL AUDIT OF IMAGE QUALITY IN RADIOLOGY USING VISUAL GRADING CHARACTERISTICS ANALYSIS.

    PubMed

    Tesselaar, Erik; Dahlström, Nils; Sandborg, Michael

    2016-06-01

    The aim of this work was to assess whether an audit of clinical image quality could be efficiently implemented within a limited time frame using visual grading characteristics (VGC) analysis. Lumbar spine radiography, bedside chest radiography and abdominal CT were selected. For each examination, images were acquired or reconstructed in two ways. Twenty images per examination were assessed by 40 radiology residents using visual grading of image criteria. The results were analysed using VGC. Inter-observer reliability was assessed. The results of the visual grading analysis were consistent with expected outcomes. The inter-observer reliability was moderate to good and correlated with perceived image quality (r² = 0.47). The median observation time per image or image series was within 2 min. These results suggest that the use of visual grading of image criteria to assess the quality of radiographs provides a rapid method for performing an image quality audit in a clinical environment.
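
    VGC analysis treats the ordinal ratings of two conditions much like an ROC analysis; the area under the VGC curve can be estimated non-parametrically from the ratings, as in this sketch. The rating data are invented, and the authors' software may use a different estimator and confidence-interval method.

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Ordinal image-quality ratings (1-5) for a reference and a modified acquisition.
ratings_ref = np.array([3, 3, 4, 2, 3, 4, 3, 2, 3, 4])
ratings_new = np.array([4, 4, 5, 3, 4, 4, 3, 4, 5, 4])

# AUC_VGC is the Mann-Whitney U statistic normalised by n_ref * n_new:
# 0.5 means equal perceived quality, > 0.5 favours the new condition.
u, p = mannwhitneyu(ratings_new, ratings_ref, alternative="two-sided")
auc_vgc = u / (len(ratings_ref) * len(ratings_new))
print(f"AUC_VGC = {auc_vgc:.2f}, p = {p:.3f}")
```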

  8. Backward Registration Based Aspect Ratio Similarity (ARS) for Image Retargeting Quality Assessment.

    PubMed

    Zhang, Yabin; Fang, Yuming; Lin, Weisi; Zhang, Xinfeng; Li, Leida

    2016-06-28

    During the past few years, various content-aware image retargeting operators have been proposed for image resizing. However, the lack of effective objective retargeting quality assessment metrics limits the further development of image retargeting techniques. Different from traditional Image Quality Assessment (IQA) metrics, the quality degradation during image retargeting is caused by artificial retargeting modifications, and the difficulty for Image Retargeting Quality Assessment (IRQA) lies in the alteration of the image resolution and content, which makes it impossible to evaluate the quality degradation directly as in traditional IQA. In this paper, we interpret image retargeting in a unified framework of resampling grid generation and forward resampling. We show that geometric change estimation is an efficient way to clarify the relationship between the images. We formulate the geometric change estimation as a backward registration problem with a Markov Random Field (MRF) and provide an effective solution. The geometric change aims to provide evidence about how the original image is resized into the target image. Under the guidance of the geometric change, we develop a novel Aspect Ratio Similarity (ARS) metric to evaluate the visual quality of retargeted images by exploiting local block changes with a visual importance pooling strategy. Experimental results on the publicly available MIT RetargetMe and CUHK datasets demonstrate that the proposed ARS predicts the visual quality of retargeted images more accurately than state-of-the-art IRQA metrics.

  9. Effect of labeling density and time post labeling on quality of antibody-based super resolution microscopy images

    NASA Astrophysics Data System (ADS)

    Bittel, Amy M.; Saldivar, Isaac; Dolman, Nicholas; Nickerson, Andrew K.; Lin, Li-Jung; Nan, Xiaolin; Gibbs, Summer L.

    2015-03-01

    Super resolution microscopy (SRM) has overcome the historic spatial resolution limit of light microscopy, enabling fluorescence visualization of intracellular structures and multi-protein complexes at the nanometer scale. Using single-molecule localization microscopy, the precise location of a stochastically activated population of photoswitchable fluorophores is determined during the collection of many images to form a single image with resolution of ~10-20 nm, an order of magnitude improvement over conventional microscopy. One of the key factors in achieving such resolution with single-molecule SRM is the ability to accurately locate each fluorophore while it emits photons. Image quality is also related to appropriate labeling density of the entity of interest within the sample. While ease of detection improves as entities are labeled with more fluorophores and have increased fluorescence signal, there is potential to reduce localization precision, and hence resolution, with an increased number of fluorophores that are on at the same time in the same relative vicinity. In the current work, fixed microtubules were antibody labeled using secondary antibodies prepared with a range of Alexa Fluor 647 conjugation ratios to compare image quality of microtubules to the fluorophore labeling density. It was found that image quality changed with both the fluorophore labeling density and time between completion of labeling and performance of imaging study, with certain fluorophore to protein ratios giving optimal imaging results.

  10. A comparative study based on image quality and clinical task performance for CT reconstruction algorithms in radiotherapy.

    PubMed

    Li, Hua; Dolly, Steven; Chen, Hsin-Chen; Anastasio, Mark A; Low, Daniel A; Li, Harold H; Michalski, Jeff M; Thorstad, Wade L; Gay, Hiram; Mutic, Sasa

    2016-07-01

    CT image reconstruction is typically evaluated based on the ability to reduce the radiation dose to as-low-as-reasonably-achievable (ALARA) while maintaining acceptable image quality. However, the determination of common image quality metrics, such as noise, contrast, and contrast-to-noise ratio, is often insufficient for describing clinical radiotherapy task performance. In this study we designed and implemented a new comparative analysis method associating image quality, radiation dose, and patient size with radiotherapy task performance, with the purpose of guiding the clinical radiotherapy usage of CT reconstruction algorithms. The iDose4 iterative reconstruction algorithm was selected as the target for comparison, wherein filtered back-projection (FBP) reconstruction was regarded as the baseline. Both phantom and patient images were analyzed. A layer-adjustable anthropomorphic pelvis phantom capable of mimicking 38-58 cm lateral diameter-sized patients was imaged and reconstructed by the FBP and iDose4 algorithms with varying noise-reduction-levels, respectively. The resulting image sets were quantitatively assessed by two image quality indices, noise and contrast-to-noise ratio, and two clinical task-based indices, target CT Hounsfield number (for electron density determination) and structure contouring accuracy (for dose-volume calculations). Additionally, CT images of 34 patients reconstructed with iDose4 with six noise reduction levels were qualitatively evaluated by two radiation oncologists using a five-point scoring mechanism. For the phantom experiments, iDose4 achieved noise reduction up to 66.1% and CNR improvement up to 53.2%, compared to FBP without considering the changes of spatial resolution among images and the clinical acceptance of reconstructed images. Such improvements consistently appeared across different iDose4 noise reduction levels, exhibiting limited interlevel noise (<5 HU) and target CT number variations (<1 HU). The radiation

  11. A comparative study based on image quality and clinical task performance for CT reconstruction algorithms in radiotherapy.

    PubMed

    Li, Hua; Dolly, Steven; Chen, Hsin-Chen; Anastasio, Mark A; Low, Daniel A; Li, Harold H; Michalski, Jeff M; Thorstad, Wade L; Gay, Hiram; Mutic, Sasa

    2016-07-08

    CT image reconstruction is typically evaluated based on the ability to reduce the radiation dose to as-low-as-reasonably-achievable (ALARA) while maintaining acceptable image quality. However, the determination of common image quality metrics, such as noise, contrast, and contrast-to-noise ratio, is often insufficient for describing clinical radiotherapy task performance. In this study we designed and implemented a new comparative analysis method associating image quality, radiation dose, and patient size with radiotherapy task performance, with the purpose of guiding the clinical radiotherapy usage of CT reconstruction algorithms. The iDose4 iterative reconstruction algorithm was selected as the target for comparison, wherein filtered back-projection (FBP) reconstruction was regarded as the baseline. Both phantom and patient images were analyzed. A layer-adjustable anthropomorphic pelvis phantom capable of mimicking 38-58 cm lateral diameter-sized patients was imaged and reconstructed by the FBP and iDose4 algorithms with varying noise-reduction-levels, respectively. The resulting image sets were quantitatively assessed by two image quality indices, noise and contrast-to-noise ratio, and two clinical task-based indices, target CT Hounsfield number (for electron density determination) and structure contouring accuracy (for dose-volume calculations). Additionally, CT images of 34 patients reconstructed with iDose4 with six noise reduction levels were qualitatively evaluated by two radiation oncologists using a five-point scoring mechanism. For the phantom experiments, iDose4 achieved noise reduction up to 66.1% and CNR improvement up to 53.2%, compared to FBP without considering the changes of spatial resolution among images and the clinical acceptance of reconstructed images. Such improvements consistently appeared across different iDose4 noise reduction levels, exhibiting limited interlevel noise (< 5 HU) and target CT number variations (< 1 HU). The radiation

  12. The image quality of ion computed tomography at clinical imaging dose levels

    SciTech Connect

    Hansen, David C.; Bassler, Niels; Sørensen, Thomas Sangild; Seco, Joao

    2014-11-01

    Purpose: Accurately predicting the range of radiotherapy ions in vivo is important for the precise delivery of dose in particle therapy. Range uncertainty is currently the single largest contribution to the dose margins used in planning and leads to a higher dose to normal tissue. The use of ion CT has been proposed as a method to reduce the range uncertainty and thereby reduce the dose to normal tissue of the patient. A wide variety of ions have been proposed and studied for this purpose, but no studies evaluate the image quality obtained with different ions in a consistent manner. However, imaging dose in ion CT is a concern, which may limit the obtainable image quality. In addition, the imaging doses reported have not been directly comparable with x-ray CT doses due to the different biological impacts of ion radiation. The purpose of this work is to develop a robust methodology for comparing the image quality of ion CT with respect to particle therapy, taking into account different reconstruction methods and ion species. Methods: A comparison of different ions and energies was made. Ion CT projections were simulated for five different scenarios: protons at 230 and 330 MeV, helium ions at 230 MeV/u, and carbon ions at 430 MeV/u. Maps of the water equivalent stopping power were reconstructed using a weighted least squares method. The dose was evaluated via a quality factor weighted CT dose index called the CT dose equivalent index (CTDEI). Spatial resolution was measured by the modulation transfer function. This was done by a noise-robust fit to the edge spread function. Second, the image quality as a function of the number of scanning angles was evaluated for protons at 230 MeV. In the resolution study, the CTDEI was fixed to 10 mSv, similar to a typical x-ray CT scan. Finally, scans at a range of CTDEIs were done to evaluate the influence of dose on reconstruction error. Results: All ions yielded accurate stopping power estimates, none of which were statistically
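
    The spatial-resolution measurement described above (a noise-robust fit to the edge spread function, followed by the modulation transfer function) can be sketched as below; the error-function edge model, sampling step and blur width are assumptions made for the synthetic example, not the study's actual data.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def esf_model(x, a, b, x0, sigma):
    """Error-function edge model: a noise-robust parametric fit to an edge profile."""
    return a + 0.5 * b * (1 + erf((x - x0) / (np.sqrt(2) * sigma)))

def mtf_from_edge(x_mm, esf):
    popt, _ = curve_fit(esf_model, x_mm, esf,
                        p0=[esf.min(), np.ptp(esf), x_mm.mean(), 0.5])
    dx = x_mm[1] - x_mm[0]
    lsf = np.gradient(esf_model(x_mm, *popt), dx)    # line spread function
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]
    freqs = np.fft.rfftfreq(len(lsf), d=dx)          # cycles / mm
    return freqs, mtf

# Synthetic example: a 0.4 mm Gaussian-blurred edge sampled every 0.1 mm.
x = np.arange(-10.0, 10.0, 0.1)
edge = esf_model(x, 0.0, 1.0, 0.0, 0.4) + np.random.default_rng(2).normal(0, 0.01, x.size)
freqs, mtf = mtf_from_edge(x, edge)
print("MTF50 ≈", freqs[np.argmax(mtf < 0.5)], "cycles/mm")
```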

  13. Hygienic support of the ISS air quality (main achievements and prospects)

    NASA Astrophysics Data System (ADS)

    Moukhamedieva, Lana; Tsarkov, Dmitriy; Pakhomova, Anna

    Hygienic preventive measures during pre-flight processing of manned spaceships, selection of polymeric materials, sanitary-hygienic evaluation of cargo and scientific hardware to be used on the ISS, and life support systems allow air quality to be maintained within regulatory requirements. However, a gradual increase in total air contamination by harmful chemicals is observed as the service life of the ISS grows longer. It is caused by the rising overall quantity of polymeric materials used on the station, by additional contamination brought by cargo spacecraft and modules docking with the ISS, and by the cargo itself. At the same time, the range of contaminants typical of off-gassing from polymeric materials containing modern stabilizers, plasticizers, flame retardants and other additives is becoming wider. In resolving the matter of extending the ISS service life, the main question for hygienic research is to determine the real safe operating life of the polymeric materials used in the structures and hardware of the station, including: research on polymer degradation (ageing) and its effect on the intensity and toxicity of off-gassing; and introduction of polymers with minimal off-gassing of volatile organic compounds under conditions of space flight and thermal-oxidative degradation. In order to ensure human safety during long-term flight it is important to develop: real-time air quality monitoring systems, including on-line analysis of highly toxic contaminants evolving during thermal-oxidative degradation of polymer materials and during blowouts of toxic contaminants; and hygienic standards for contaminant levels for extended flight durations of up to 3 years. It is also essential to develop an automated control system for on-line monitoring of toxicological status, together with hygienic and engineering measures for its management, in order to ensure crew member safety during off-nominal situations.

  14. Generalization Evaluation of Machine Learning Numerical Observers for Image Quality Assessment

    PubMed Central

    Kalayeh, Mahdi M.; Marin, Thibault; Brankov, Jovan G.

    2014-01-01

    In this paper, we present two new numerical observers (NO) based on machine learning for image quality assessment. The proposed NOs aim to predict human observer performance in a cardiac perfusion-defect detection task for single-photon emission computed tomography (SPECT) images. Human observer (HumO) studies are now considered to be the gold standard for task-based evaluation of medical images. However such studies are impractical for use in early stages of development for imaging devices and algorithms, because they require extensive involvement of trained human observers who must evaluate a large number of images. To address this problem, numerical observers (also called model observers) have been developed as a surrogate for human observers. The channelized Hotelling observer (CHO), with or without internal noise model, is currently the most widely used NO of this kind. In our previous work we argued that development of a NO model to predict human observers' performance can be viewed as a machine learning (or system identification) problem. This consideration led us to develop a channelized support vector machine (CSVM) observer, a kernel-based regression model that greatly outperformed the popular and widely used CHO. This was especially evident when the numerical observers were evaluated in terms of generalization performance. To evaluate generalization we used a typical situation for the practical use of a numerical observer: after optimizing the NO (which for a CHO might consist of adjusting the internal noise model) based upon a broad set of reconstructed images, we tested it on a broad (but different) set of images obtained by a different reconstruction method. In this manuscript we aim to evaluate two new regression models that achieve accuracy higher than the CHO and comparable to our earlier CSVM method, while dramatically reducing model complexity and computation time. The new models are defined in a Bayesian machine-learning framework: a channelized
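
    For context, the channelized Hotelling observer that the proposed learning-based observers are compared against can be written compactly as below. The Gaussian radial channels (a simplification of the Laguerre-Gauss channels often used), the absence of an internal-noise model and the toy signal-in-white-noise data are all assumptions of this sketch.

```python
import numpy as np

def cho_snr(present, absent, channels):
    """Channelized Hotelling observer detectability SNR (no internal noise).

    present, absent : (N, P) arrays of vectorised images; channels : (P, C) matrix.
    """
    v1, v0 = present @ channels, absent @ channels          # channel outputs
    dv = v1.mean(axis=0) - v0.mean(axis=0)
    s = 0.5 * (np.cov(v1, rowvar=False) + np.cov(v0, rowvar=False))
    w = np.linalg.solve(s, dv)                              # Hotelling template
    t1, t0 = v1 @ w, v0 @ w                                 # test statistics
    return (t1.mean() - t0.mean()) / np.sqrt(0.5 * (t1.var() + t0.var()))

# Toy example: Gaussian signal in white noise with four Gaussian radial channels.
rng = np.random.default_rng(3)
n, side = 500, 32
yy, xx = np.mgrid[:side, :side] - side // 2
r2 = (xx ** 2 + yy ** 2).astype(float)
signal = np.exp(-r2 / (2 * 2.0 ** 2)).ravel()
channels = np.stack([np.exp(-r2 / (2 * s ** 2)).ravel() for s in (1, 2, 4, 8)], axis=1)
absent = rng.normal(0, 1, size=(n, side * side))
present = rng.normal(0, 1, size=(n, side * side)) + 0.5 * signal
print("CHO SNR:", cho_snr(present, absent, channels))
```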

  15. Evaluation of the effects of sagging shifts on isocenter accuracy and image quality of cone-beam CT from kV on-board imagers.

    PubMed

    Ali, Imad; Ahmad, Salahuddin

    2009-07-17

    To investigate the effects of sagging shifts of three on-board kV imaging systems (OBI) on the isocenter positioning accuracy and image quality of cone-beam CT (CBCT). A cubical phantom having a metal marker in the center that can be aligned with the radiation isocenter was used to measure sagging shifts and their variation with gantry angle on three Varian linacs with kV on-board imaging systems. A marker-tracking algorithm was applied to detect the shadow of the metal marker and localize its center in the two-dimensional cone-beam radiographic projections. This tracking algorithm is based on finding the position of maximum cross-correlation between a region-of-interest from a template image (including the metal marker) and the projections containing the shadow of the metal marker. Sagging shifts were corrected by mapping the center of the metal marker to a reference position for all projections acquired over a full gantry rotation (0-360 degrees). The sag-corrected radiographic projections were then used to reconstruct CBCT using Feldkamp back-projection. A standard quality assurance phantom was used to evaluate the image quality of CBCT before and after sagging correction. Sagging affects both the positioning accuracy of the OBI isocenter and the CBCT image quality. For example, on one linac, the position of the marker on the cone-beam radiographic projections depends on the angular view and has maximal shifts of about 2 mm along the imager x-direction (patient's cross-plane). Sagging produces systematic shifts of the OBI isocenter as large as 1 mm posterior and 1 mm left in patient coordinates relative to the radiation isocenter. Further, it causes spatial distortion and blurring in CBCT image reconstructed from radiographic projections that are not corrected for OBI sagging. CBCT numbers vary by about 1% in full-fan scans and up to 3.5% in half-fan scans because of sagging. In order to achieve better localization accuracy in image-guided radiation therapy
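
    The marker-tracking step described above is essentially normalized cross-correlation template matching; a minimal sketch with scikit-image follows, using a synthetic projection and a template cropped around the marker shadow (both placeholders, not the authors' data).

```python
import numpy as np
from skimage.feature import match_template

def locate_marker(projection, template):
    """Return the (row, col) of the best normalized cross-correlation match."""
    ncc = match_template(projection, template, pad_input=True)
    return np.unravel_index(np.argmax(ncc), ncc.shape)

# Synthetic projection with a dark marker shadow centred at (140, 205).
rng = np.random.default_rng(4)
proj = rng.normal(1000.0, 20.0, size=(256, 384))
yy, xx = np.mgrid[:256, :384]
proj -= 400.0 * np.exp(-((yy - 140) ** 2 + (xx - 205) ** 2) / (2 * 4.0 ** 2))
template = proj[130:151, 195:216]                  # region of interest around the marker
print(locate_marker(proj, template))               # ≈ (140, 205)
```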

  16. Image quality of digital radiography using flat detector technology

    NASA Astrophysics Data System (ADS)

    Ducourant, Thierry; Couder, David; Wirth, Thibaut; Trochet, J. C.; Bastiaens, Raoul J. M.; Bruijns, Tom J. C.; Luijendijk, Hans A.; Sandkamp, Bernhard; Davies, Andrew G.; Didier, Dominique; Gonzalez, Agustin; Terraz, Sylvain; Ruefenacht, Daniel

    2003-06-01

    One of the most demanding applications in dynamic X-ray imaging is Digital Subtraction Angiography (DSA). As opposed to other applications such as radiography or fluoroscopy, there have so far been only limited attempts to introduce DSA with flat detector (FD) technology: up to now, only part of the very demanding requirements could be taken into account. In order to enable the introduction of FD technology in this area as well, a complete understanding of all physical phenomena related to the use of this technology in DSA is necessary. This knowledge can be used for detector design and performance optimization. Areas of research include fast switching between several detector operating modes (e.g. switching between fluoroscopy and high-dose exposure modes and vice versa) and instability during the DSA run, e.g. due to differences in gain between subsequent images. Furthermore, effects of local and global X-ray overexposure (due to direct radiation), which can cause temporal artifacts such as ghosting, may have a negative impact on image quality. Pixel-shift operations and image subtraction enhance the visibility of any artifact. The use of a refresh light plays an important role in the optimization process. Both an 18x18 cm2 and a large-area 30x40 cm2 flat-panel detector were used for studying the various phenomena. Technical measurements were obtained using complex imaging sequences representing the most demanding application conditions. Studies on subtraction test objects were performed and vascular applications were carried out in order to confirm earlier findings. The basis for comparison of DSA remains the existing and mature IITV technology. The results of this investigation show that the latest generation of dynamic flat detectors is capable of handling this kind of demanding application. Not only the risk areas and their solutions and points of attention will be addressed, but also the benefits of present FD technology with respect to state

  17. A clinical evaluation of the image quality computer program, CoCIQ.

    PubMed

    Norrman, E; Gårdestig, M; Persliden, J; Geijer, H

    2005-06-01

    To provide an objective way of measuring image quality, a computer program was designed that automatically analyzes test images of a contrast-detail (CD) phantom. The program gives a quantified measurement of image quality by calculating an Image Quality Figure (IQF). The aim of this work was to evaluate the program and adjust it to clinical situations in order to find the detection level at which the program gives a reliable figure for the contrast resolution. The program was applied to a large variety of images acquired with lumbar spine and urography parameters, from very low to very high image quality. It was shown that the computer program produces IQFs with small variations, and there was a strong linear statistical relationship between the computerized evaluation and the evaluation performed by human observers (R² = 0.98). This method offers a fast and easy way of conducting image quality evaluations.
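
    An Image Quality Figure is commonly computed from the smallest detail diameter detected at each contrast level of the CD phantom; the sketch below shows one common definition (IQF and its inverse). The contrast levels and threshold diameters are invented, and the program's exact formula may differ.

```python
import numpy as np

def image_quality_figure(contrasts, smallest_detected_diameters):
    """IQF = sum_i C_i * D_i,min (lower is better); IQF_inv = 100 / IQF (higher is better)."""
    iqf = float(np.sum(np.asarray(contrasts) * np.asarray(smallest_detected_diameters)))
    return iqf, 100.0 / iqf

contrasts = [0.3, 0.5, 1.0, 2.0, 4.0, 8.0]       # nominal contrast levels (%) - placeholder
d_min = [8.0, 5.0, 2.5, 1.0, 0.5, 0.3]           # smallest visible diameter (mm) - placeholder
iqf, iqf_inv = image_quality_figure(contrasts, d_min)
print(f"IQF = {iqf:.1f}, IQF_inv = {iqf_inv:.1f}")
```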

  18. Improved structural similarity metric for the visible quality measurement of images

    NASA Astrophysics Data System (ADS)

    Lee, Daeho; Lim, Sungsoo

    2016-11-01

    The visible quality assessment of images is important for evaluating the performance of image processing methods such as image correction, compression, and enhancement. Structural similarity is widely used to determine visible quality; however, existing structural similarity metrics cannot correctly assess the perceived visibility of images that have been slightly geometrically transformed or that have undergone significant regional distortion. We propose an improved structural similarity metric that is closer to human visual evaluation. Compared with existing metrics, the proposed method can more correctly evaluate the similarity between an original image and various distorted images.
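
    For reference, the baseline structural similarity that the proposed metric improves upon can be computed with scikit-image; the synthetic example below illustrates the weakness noted above, namely that classic SSIM heavily penalises even a one-pixel geometric shift.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

rng = np.random.default_rng(5)
original = rng.random((128, 128))
noisy = np.clip(original + rng.normal(0, 0.1, original.shape), 0, 1)
shifted = np.roll(original, shift=1, axis=1)       # 1-pixel horizontal shift

# Classic SSIM drops sharply for the shifted image even though its content is
# essentially unchanged - the weakness the proposed metric targets.
print("SSIM(noisy)  :", ssim(original, noisy, data_range=1.0))
print("SSIM(shifted):", ssim(original, shifted, data_range=1.0))
```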

  19. Crowdsourcing quality control for Dark Energy Survey images

    NASA Astrophysics Data System (ADS)

    Melchior, P.; Sheldon, E.; Drlica-Wagner, A.; Rykoff, E. S.; Abbott, T. M. C.; Abdalla, F. B.; Allam, S.; Benoit-Lévy, A.; Brooks, D.; Buckley-Geer, E.; Carnero Rosell, A.; Carrasco Kind, M.; Carretero, J.; Crocce, M.; D'Andrea, C. B.; da Costa, L. N.; Desai, S.; Doel, P.; Evrard, A. E.; Finley, D. A.; Flaugher, B.; Frieman, J.; Gaztanaga, E.; Gerdes, D. W.; Gruen, D.; Gruendl, R. A.; Honscheid, K.; James, D. J.; Jarvis, M.; Kuehn, K.; Li, T. S.; Maia, M. A. G.; March, M.; Marshall, J. L.; Nord, B.; Ogando, R.; Plazas, A. A.; Romer, A. K.; Sanchez, E.; Scarpine, V.; Sevilla-Noarbe, I.; Smith, R. C.; Soares-Santos, M.; Suchyta, E.; Swanson, M. E. C.; Tarle, G.; Vikram, V.; Walker, A. R.; Wester, W.; Zhang, Y.

    2016-07-01

    We have developed a crowdsourcing web application for image quality control employed by the Dark Energy Survey. Dubbed the "DES exposure checker", it renders science-grade images directly to a web browser and allows users to mark problematic features from a set of predefined classes. Users can also generate custom labels and thus help identify previously unknown problem classes. User reports are fed back to hardware and software experts to help mitigate and eliminate recognized issues. We report on the implementation of the application and our experience with its over 100 users, the majority of which are professional or prospective astronomers but not data management experts. We discuss aspects of user training and engagement, and demonstrate how problem reports have been pivotal to rapidly correct artifacts which would likely have been too subtle or infrequent to be recognized otherwise. We conclude with a number of important lessons learned, suggest possible improvements, and recommend this collective exploratory approach for future astronomical surveys or other extensive data sets with a sufficiently large user base. We also release open-source code of the web application and host an online demo version at http://des-exp-checker.pmelchior.net.

  20. Crowdsourcing quality control for Dark Energy Survey images

    DOE PAGES

    Melchior, P.

    2016-07-01

    We have developed a crowdsourcing web application for image quality control employed by the Dark Energy Survey. Dubbed the "DES exposure checker", it renders science-grade images directly to a web browser and allows users to mark problematic features from a set of predefined classes. Users can also generate custom labels and thus help identify previously unknown problem classes. User reports are fed back to hardware and software experts to help mitigate and eliminate recognized issues. We report on the implementation of the application and our experience with its over 100 users, the majority of which are professional or prospective astronomers but not data management experts. We discuss aspects of user training and engagement, and demonstrate how problem reports have been pivotal to rapidly correct artifacts which would likely have been too subtle or infrequent to be recognized otherwise. We conclude with a number of important lessons learned, suggest possible improvements, and recommend this collective exploratory approach for future astronomical surveys or other extensive data sets with a sufficiently large user base. We also release open-source code of the web application and host an online demo version at http://des-exp-checker.pmelchior.net

  1. Crowdsourcing quality control for Dark Energy Survey images

    SciTech Connect

    Melchior, P.

    2016-07-01

    We have developed a crowdsourcing web application for image quality control employed by the Dark Energy Survey. Dubbed the "DES exposure checker", it renders science-grade images directly to a web browser and allows users to mark problematic features from a set of predefined classes. Users can also generate custom labels and thus help identify previously unknown problem classes. User reports are fed back to hardware and software experts to help mitigate and eliminate recognized issues. We report on the implementation of the application and our experience with its over 100 users, the majority of which are professional or prospective astronomers but not data management experts. We discuss aspects of user training and engagement, and demonstrate how problem reports have been pivotal to rapidly correct artifacts which would likely have been too subtle or infrequent to be recognized otherwise. We conclude with a number of important lessons learned, suggest possible improvements, and recommend this collective exploratory approach for future astronomical surveys or other extensive data sets with a sufficiently large user base. We also release open-source code of the web application and host an online demo version at http://des-exp-checker.pmelchior.net

  2. Effects of interactions among wave aberrations on optical image quality.

    PubMed

    McLellan, J S; Prieto, P M; Marcos, S; Burns, S A

    2006-09-01

    Wave aberrations degrade the optical quality of the eye relative to the diffraction limit, but there are situations in which having slightly aberrated optics can provide some relative visual benefits. This fact led us to consider whether interactions among aberrations in the eye's wavefront produce an advantage for image quality relative to wavefronts with randomized combinations of aberrations with the same total RMS error. Total ocular wave aberrations from two experimental groups and corneal wave aberrations from one group were measured and expressed as Zernike polynomial expansions through the seventh order. In a series of Monte Carlo simulations, modulation transfer functions (MTFs) for the measured wave aberrations were compared to distributions of artificial MTFs for wavefronts created by randomizing the sign or orientation of the aberrations, while maintaining the RMS error within each Zernike order. In a control condition, "synthetic" model eyes were produced by choosing each individual aberration term at random from individuals in the experimental group, and again MTFs were compared for original and randomized signs. Results were summarized by the MTF ratio: real MTF/mean simulated MTF, as a function of spatial frequency. For a 6 mm pupil, the mean MTF ratio for total ocular aberrations was greater than 1.0 up to 60 cycles per degree, suggesting that the eye's aberrations are not independent and that there may be positive functional consequences of their interrelations. This positive relation did not hold for corneal aberrations alone, or for the synthetic eyes.
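    The study's Monte Carlo procedure can be illustrated with a small numpy sketch: build a wavefront from a few Zernike modes, compute its MTF through the pupil function, then compare it against the mean MTF of wavefronts whose coefficient signs have been randomized (sign flips trivially preserve the RMS within each order). The mode set, coefficient values, and sampling below are illustrative assumptions, not the measured data of the paper.

```python
import numpy as np

N, wavelength = 256, 0.55e-3                       # grid size; wavelength in mm
y, x = np.mgrid[-1:1:N * 1j, -1:1:N * 1j]          # normalized pupil coordinates (6 mm pupil assumed)
rho, theta = np.hypot(x, y), np.arctan2(y, x)
aperture = (rho <= 1.0).astype(float)

# A few Noll-normalized Zernike modes (defocus, astigmatism, coma)
modes = {
    "defocus": np.sqrt(3) * (2 * rho ** 2 - 1),
    "astig":   np.sqrt(6) * rho ** 2 * np.cos(2 * theta),
    "coma":    np.sqrt(8) * (3 * rho ** 3 - 2 * rho) * np.cos(theta),
}
coeffs = {"defocus": 0.15, "astig": -0.10, "coma": 0.08}   # microns (illustrative)

def mtf(c):
    w = sum(v * modes[k] for k, v in c.items()) * 1e-3      # wavefront in mm
    pupil = aperture * np.exp(2j * np.pi * w / wavelength)
    psf = np.abs(np.fft.fft2(pupil)) ** 2
    otf = np.abs(np.fft.fft2(psf))
    return otf / otf.flat[0]                                 # normalize to DC

real_mtf = mtf(coeffs)
rng = np.random.default_rng(1)
randomized = [mtf({k: rng.choice([-1, 1]) * v for k, v in coeffs.items()})
              for _ in range(50)]
mtf_ratio = real_mtf / np.mean(randomized, axis=0)           # >1 means the measured combination helps
print(f"MTF ratio at a low spatial frequency: {mtf_ratio[0, 5]:.3f}")
```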

  3. Imaging-based logics for ornamental stone quality chart definition

    NASA Astrophysics Data System (ADS)

    Bonifazi, Giuseppe; Gargiulo, Aldo; Serranti, Silvia; Raspi, Costantino

    2007-02-01

    Ornamental stone products are commercially classified according to several factors related both to intrinsic lithologic characteristics and to their visible pictorial attributes. Sometimes the latter prevail in the definition and assessment of quality criteria. Pictorial attributes are in any case also influenced by the working actions performed and the tools used to realize the final stone product. Stone surface finishing is a critical task because it can enhance certain aesthetic features of the stone itself. The study aimed to develop an innovative set of methodologies and techniques able to quantify the aesthetic quality level of stone products, taking into account both the physical and the aesthetic characteristics of the stones. In particular, the degree of polishing of the stone surfaces and the presence of defects were evaluated by applying digital image processing strategies. Morphological and color parameters were extracted using purpose-built software architectures. Results showed that the proposed approaches make it possible to quantify the degree of polishing and to identify surface defects related to the intrinsic characteristics of the stone and/or the working actions performed.

  4. Working and learning together: good quality care depends on it, but how can we achieve it?

    PubMed Central

    McPherson, K; Headrick, L; Moss, F

    2001-01-01

    Educating healthcare professionals is a key issue in the provision of quality healthcare services, and interprofessional education (IPE) has been proposed as a means of meeting this challenge. Evidence that collaborative working can be essential for good clinical outcomes underpins the real need to find out how best to develop a work force that can work together effectively. We identify barriers to mounting successful IPE programmes, report on recent educational initiatives that have aimed to develop collaborative working, and discuss the lessons learned. To develop education strategies that really prepare learners to collaborate we must: agree on the goals of IPE, identify effective methods of delivery, establish what should be learned when, attend to the needs of educators and clinicians regarding their own competence in interprofessional work, and advance our knowledge by robust evaluation using both qualitative and quantitative approaches. We must ensure that our education strategies allow students to recognise, value, and engage with the difference arising from the practice of a range of health professionals. This means tackling some long held assumptions about education and identifying where it fosters norms and attitudes that interfere with collaboration or fails to engender interprofessional knowledge and skill. We need to work together to establish education strategies that enhance collaborative working along with profession specific skills to produce a highly skilled, proactive, and respectful work force focused on providing safe and effective health for patients and communities. Key Words: interprofessional education; multiprofessional learning; teamwork PMID:11700379

  5. Development of Software to Model AXAF-I Image Quality

    NASA Technical Reports Server (NTRS)

    Ahmad, Anees; Hawkins, Lamar

    1996-01-01

    This draft final report describes the work performed under the delivery order number 145 from May 1995 through August 1996. The scope of work included a number of software development tasks for the performance modeling of AXAF-I. A number of new capabilities and functions have been added to the GT software, which is the command mode version of the GRAZTRACE software originally developed by MSFC. A structural data interface has been developed for the EAL (old SPAR) finite element analysis (FEA) program, which is being used by the MSFC Structural Analysis group for the analysis of AXAF-I. This interface utility can read the structural deformation file from EAL and other finite element analysis programs such as NASTRAN and COSMOS/M, and convert the data to a suitable format that can be used for the deformation ray-tracing to predict the image quality for a distorted mirror. There is a provision in this utility to expand the data from finite element models assuming 180-degree symmetry. This utility has been used to predict image characteristics for the AXAF-I HRMA when subjected to gravity effects in the horizontal x-ray ground test configuration. The development of the metrology data processing interface software has also been completed. It can read the HDOS FITS format surface map files, manipulate and filter the metrology data, and produce a deformation file, which can be used by GT for ray tracing of the mirror surface figure errors. This utility has been used to determine the optimum alignment (axial spacing and clocking) for the four pairs of AXAF-I mirrors. Based on this optimized alignment, the geometric images and effective focal lengths for the as-built mirrors were predicted to cross-check the results obtained by Kodak.

  6. Premigration School Quality, Time Spent in the United States, and the Math Achievement of Immigrant High School Students.

    PubMed

    Bozick, Robert; Malchiodi, Alessandro; Miller, Trey

    2016-10-01

    Using a nationally representative sample of 1,189 immigrant youth in American high schools, we examine whether the quality of education in their country of origin is related to post-migration math achievement in the 9th grade. To measure the quality of their education in the country of origin, we use country-specific average test scores from two international assessments: the Programme for International Student Assessment (PISA) and the Trends in International Mathematics and Science Study (TIMSS). We find that the average PISA or TIMSS scores for immigrant youth's country of origin are positively associated with their performance on the 9th grade post-migration math assessment. We also find that each year spent in the United States is positively associated with performance on the 9th grade post-migration math assessment, but this effect is strongest for immigrants from countries with low PISA/TIMSS scores.

  7. Quality Imaging — Comparison of CR Mammography with Screen-Film Mammography

    NASA Astrophysics Data System (ADS)

    Gaona, E.; Azorín Nieto, J.; Irán Díaz Góngora, J. A.; Arreola, M.; Casian Castellanos, G.; Perdigón Castañeda, G. M.; Franco Enríquez, J. G.

    2006-09-01

    The aim of this work is an image quality comparison of CR mammography images printed to film by a laser printer with screen-film mammography. Giotto and Elscintec dedicated mammography units with fully automatic exposure and a nominal large focal spot size of 0.3 mm were used for the image acquisition of phantoms in screen-film mammography. Four CR mammography units from two different manufacturers and three dedicated x-ray mammography units with fully automatic exposure and a nominal large focal spot size of 0.3 mm were used for the image acquisition of phantoms in CR mammography. The image quality tests included an assessment of system resolution, phantom image scoring, artifacts, mean optical density, and density difference (contrast). In this study, screen-film mammography with a quality control program offered a significantly greater level of image quality than CR mammography images printed on film.

  8. Application of Multispectral Imaging to Determine Quality Attributes and Ripeness Stage in Strawberry Fruit

    PubMed Central

    Liu, Changhong; Liu, Wei; Lu, Xuzhong; Ma, Fei; Chen, Wei; Yang, Jianbo; Zheng, Lei

    2014-01-01

    Multispectral imaging with 19 wavelengths in the range of 405–970 nm has been evaluated for nondestructive determination of firmness, total soluble solids (TSS) content and ripeness stage in strawberry fruit. Several analysis approaches, including partial least squares (PLS), support vector machine (SVM) and back propagation neural network (BPNN), were applied to develop theoretical models for predicting the firmness and TSS of intact strawberry fruit. Compared with PLS and SVM, BPNN considerably improved the performance of multispectral imaging for predicting firmness and total soluble solids content, with correlation coefficients (r) of 0.94 and 0.83, SEP of 0.375 and 0.573, and bias of 0.035 and 0.056, respectively. Subsequently, the ability of multispectral imaging technology to classify fruit based on ripeness stage was tested using SVM and principal component analysis-back propagation neural network (PCA-BPNN) models. A classification accuracy of 100% was achieved using the SVM model. Moreover, the results of all these models demonstrated that the VIS parts of the spectra were the main contributor to the determination of firmness, TSS content estimation and classification of ripeness stage in strawberry fruit. These results suggest that multispectral imaging, together with a suitable analysis model, is a promising technology for rapid estimation of quality attributes and classification of ripeness stage in strawberry fruit. PMID:24505317

  9. Application of multispectral imaging to determine quality attributes and ripeness stage in strawberry fruit.

    PubMed

    Liu, Changhong; Liu, Wei; Lu, Xuzhong; Ma, Fei; Chen, Wei; Yang, Jianbo; Zheng, Lei

    2014-01-01

    Multispectral imaging with 19 wavelengths in the range of 405-970 nm has been evaluated for nondestructive determination of firmness, total soluble solids (TSS) content and ripeness stage in strawberry fruit. Several analysis approaches, including partial least squares (PLS), support vector machine (SVM) and back propagation neural network (BPNN), were applied to develop theoretical models for predicting the firmness and TSS of intact strawberry fruit. Compared with PLS and SVM, BPNN considerably improved the performance of multispectral imaging for predicting firmness and total soluble solids content, with correlation coefficients (r) of 0.94 and 0.83, SEP of 0.375 and 0.573, and bias of 0.035 and 0.056, respectively. Subsequently, the ability of multispectral imaging technology to classify fruit based on ripeness stage was tested using SVM and principal component analysis-back propagation neural network (PCA-BPNN) models. A classification accuracy of 100% was achieved using the SVM model. Moreover, the results of all these models demonstrated that the VIS parts of the spectra were the main contributor to the determination of firmness, TSS content estimation and classification of ripeness stage in strawberry fruit. These results suggest that multispectral imaging, together with a suitable analysis model, is a promising technology for rapid estimation of quality attributes and classification of ripeness stage in strawberry fruit.
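    A rough scikit-learn sketch of the modelling comparison described above (PLS, SVM and a back-propagation network regressed on per-fruit mean spectra) is given below. The spectra and firmness values are synthetic stand-ins, since the study's dataset is not public.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

# Synthetic stand-in data: 19-band mean reflectance per fruit plus a
# firmness value loosely driven by the spectrum.
rng = np.random.default_rng(0)
X = rng.uniform(0.1, 0.9, (120, 19))              # 19 wavelengths, 405-970 nm
firmness = X @ rng.normal(size=19) + rng.normal(0, 0.1, 120)

models = {
    "PLS":  PLSRegression(n_components=8),
    "SVM":  SVR(kernel="rbf", C=10.0),
    "BPNN": MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0),
}
for name, model in models.items():
    r2 = cross_val_score(model, X, firmness, cv=5, scoring="r2").mean()
    print(f"{name}: mean cross-validated R^2 = {r2:.2f}")
```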

  10. The National COSEE Network's decade of assisting scientists to achieve high-quality Broader Impacts

    NASA Astrophysics Data System (ADS)

    Hotaling, L. A.; Yoder, J. A.; Scowcroft, G.

    2012-12-01

    Many ocean scientists struggle with defining Broader Impact (BI) activities that will satisfy reviewers or fit within budget and time constraints, and many scientists are uncertain as to how to find assistance in crafting sound BI plans. In 2002, the National Centers for Ocean Sciences Education Excellence (COSEE) Network began engaging and connecting scientists and educators to transform ocean sciences education. COSEE's success in engaging scientists in BI activities is due to the Network's ability to find and create opportunities for education and outreach, assist scientists in designing programs that feature their research, and support scientists with courses, workshops and tools, which assist them in becoming better communicators of their research to non-scientific audiences. Among its most significant accomplishments to date is the development of a network of ocean scientists that is connected to education and outreach professionals, formal K-12 educators and students, informal science professionals, learning sciences experts, and graduate and undergraduate students. In addition to networking, COSEE Centers have developed and implemented the Ocean Literacy Principles and Fundamental Concepts and the Ocean Literacy Scope and Sequence for grades K-12. COSEE has also helped engage scientists with public audiences, facilitating the use of real-time ocean observing systems (OOS) data in formal and informal education settings, creating new distance learning and online resources for ocean sciences education, and promoting high quality ocean sciences education and outreach in universities and formal/informal venues. The purpose of this presentation is to review several tools that the COSEE Network has developed to assist ocean scientists with BI activities and to describe the Network's efforts to prepare young scientists to communicate their research to non-expert audiences.

  11. Achieving high-resolution in flat-panel imagers for digital radiography

    NASA Astrophysics Data System (ADS)

    Rahn, Jeffrey T.; Lemmi, Francesco; Lu, Jeng-Ping; Mei, Ping; Street, Robert A.; Ready, Steve E.; Ho, Jackson; Apte, Raj B.; Van Schuylenbergh, Koenraad; Lau, Rachel; Weisfield, Richard L.; Lujan, Rene; Boyce, James B.

    1999-10-01

    Amorphous silicon (a-Si:H) matrix-addressed image sensors are the leading new technology for digital medical x-ray imaging. Large-area systems are now commercially available with good resolution and large dynamic range. These systems image x-rays either by detecting light emission from a phosphor screen with an a-Si:H photodiode, or by collecting ionization charge in a thick x-ray absorbing photoconductor such as selenium, and both approaches have been widely discussed in the literature. While these systems meet the performance needs for general radiographic imaging, further improvements in sensitivity, noise and resolution are needed to fully satisfy the requirements for fluoroscopy and mammography. The approach taken in this paper uses indirect detection, with a phosphor layer for x-ray conversion and a thin a-Si:H photodiode layer to detect the scintillation light. In contrast with the present generation of devices, which have a mesa-isolated sensor at each pixel, these imagers use a continuous sensor covering the entire front surface of the array. The p+ and i layers of a-Si:H are continuous, while the n+ contact has been patterned to isolate adjacent pixels. The continuous photodiode layer maximizes light absorption from the phosphor and provides high x-ray conversion efficiency.

  12. Current achievements of nanoparticle applications in developing optical sensing and imaging techniques

    NASA Astrophysics Data System (ADS)

    Choi, Jong-ryul; Shin, Dong-Myeong; Song, Hyerin; Lee, Donghoon; Kim, Kyujung

    2016-11-01

    Metallic nanostructures have recently been demonstrated to improve the performance of optical sensing and imaging techniques due to their remarkable capability to localize electromagnetic fields. In particular, the zero-dimensional nanostructure, commonly called a nanoparticle, is a promising component for optical measurement systems due to its attractive features, e.g., ease of fabrication, capability of surface modification and relatively high biocompatibility. This review summarizes the work to date on metallic nanoparticles for optical sensing and imaging applications, starting with the theoretical backgrounds of plasmonic effects in nanoparticles and moving through the applications in Raman spectroscopy and fluorescence biosensors. Various efforts for enhancing the sensitivity, selectivity and biocompatibility are summarized, and the future outlooks for this field are discussed. Convergent studies in optical sensing and imaging are an emerging field for the development of medical applications, including clinical diagnosis and therapy.

  13. Quantitative and qualitative image quality analysis of super resolution images from a low cost scanning laser ophthalmoscope

    NASA Astrophysics Data System (ADS)

    Murillo, Sergio; Echegaray, Sebastian; Zamora, Gilberto; Soliz, Peter; Bauman, Wendall

    2011-03-01

    The lurking epidemic of eye diseases caused by diabetes and aging will put more than 130 million Americans at risk of blindness by 2020. Screening has been touted as a means to prevent blindness by identifying those individuals at risk. However, the cost of most of today's commercial retinal imaging devices makes their use economically impractical for mass screening. Thus, low cost devices are needed. With these devices, low cost often comes at the expense of image quality with high levels of noise and distortion hindering the clinical evaluation of those retinas. A software-based super resolution (SR) reconstruction methodology that produces images with improved resolution and quality from multiple low resolution (LR) observations is introduced. The LR images are taken with a low-cost Scanning Laser Ophthalmoscope (SLO). The non-redundant information of these LR images is combined to produce a single image in an implementation that also removes noise and imaging distortions while preserving fine blood vessels and small lesions. The feasibility of using the resulting SR images for screening of eye diseases was tested using quantitative and qualitative assessments. Qualitatively, expert image readers evaluated their ability of detecting clinically significant features on the SR images and compared their findings with those obtained from matching images of the same eyes taken with commercially available high-end cameras. Quantitatively, measures of image quality were calculated from SR images and compared to subject-matched images from a commercial fundus imager. Our results show that the SR images have indeed enough quality and spatial detail for screening purposes.
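    The full SLO pipeline (registration, denoising, distortion correction) is not described in enough detail to reproduce; the core idea of combining the non-redundant information of shifted low-resolution frames can, however, be illustrated with a toy shift-and-add reconstruction on a finer grid, as sketched below under the assumption that the subpixel shifts are already known.

```python
import numpy as np

def shift_and_add_sr(lr_frames, shifts, factor=2):
    """Toy shift-and-add super-resolution: each low-resolution frame is
    placed on a finer grid according to its known subpixel shift
    (in LR pixels, within [0, 1)) and overlapping samples are averaged."""
    h, w = lr_frames[0].shape
    acc = np.zeros((h * factor, w * factor))
    cnt = np.zeros_like(acc)
    for frame, (dy, dx) in zip(lr_frames, shifts):
        ry = min(int(round(dy * factor)), factor - 1)
        rx = min(int(round(dx * factor)), factor - 1)
        acc[ry::factor, rx::factor] += frame
        cnt[ry::factor, rx::factor] += 1.0
    cnt[cnt == 0] = 1.0                      # never-sampled grid points stay 0
    return acc / cnt

# Demo: four half-pixel-shifted decimations of a synthetic scene are
# recombined exactly onto the original grid.
rng = np.random.default_rng(0)
hr_scene = rng.random((64, 64))
offsets = [(0, 0), (0, 1), (1, 0), (1, 1)]           # offsets in HR pixels
frames = [hr_scene[oy::2, ox::2] for oy, ox in offsets]
shifts = [(oy / 2, ox / 2) for oy, ox in offsets]
print(np.allclose(shift_and_add_sr(frames, shifts, factor=2), hr_scene))  # True
```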

  14. Ray tracing analysis of the image quality of a high collection efficiency mirror system.

    PubMed

    Seitzinger, N K; Martin, J C; Keller, R A

    1990-10-01

    Recently, a high collection efficiency mirror system was developed by Watson [Cytometry 10, 681-688 (1989)] to increase the sensitivity of low level fluorescence detection. The mirror system consists of an ellipsoidal imaging mirror and spherical backreflecting mirror. The fluorescing sample is located at one focus of the ellipsoid, and its image is formed at the other focus. In this paper we evaluate the image quality of this geometry using a PC-based ray tracing program. The analysis demonstrates high collection efficiency but poor image quality. The effect of poor image quality on single molecule detection is discussed.

  15. Ray tracing analysis of the image quality of a high collection efficiency mirror system

    SciTech Connect

    Seitzinger, N.K.; Martin, J.C.; Keller, R.A.

    1990-10-01

    Recently, a high collection efficiency mirror system was developed by Watson [Cytometry 10, 681-688 (1989)] to increase the sensitivity of low level fluorescence detection. The mirror system consists of an ellipsoidal imaging mirror and spherical backreflecting mirror. The fluorescing sample is located at one focus of the ellipsoid, and its image is formed at the other focus. In this paper we evaluate the image quality of this geometry using a PC-based ray tracing program. The analysis demonstrates high collection efficiency but poor image quality. The effect of poor image quality on single molecule detection is discussed. Keywords: Fluorescence, ellipsoidal mirror, spherical mirror, single molecule detection, flow cytometry.

  16. A high-stability scanning tunneling microscope achieved by an isolated tiny scanner with low voltage imaging capability

    SciTech Connect

    Wang, Qi; Wang, Junting; Lu, Qingyou; Hou, Yubin

    2013-11-15

    We present a novel homebuilt scanning tunneling microscope (STM) with high quality atomic resolution. It is equipped with a small but powerful GeckoDrive piezoelectric motor which drives a miniature and detachable scanning part to implement coarse approach. The scanning part is a tiny piezoelectric tube scanner (industry type: PZT-8, whose d31 coefficient is one of the lowest) housed in a slightly bigger polished sapphire tube, which is riding on and spring clamped against the knife edges of a tungsten slot. The STM so constructed shows low back-lashing and drifting and high repeatability and immunity to external vibrations. These are confirmed by its low imaging voltages, low distortions in the spiral scanned images, and high atomic resolution quality even when the STM is placed on the ground of the fifth floor without any external or internal vibration isolation devices.

  17. A high-stability scanning tunneling microscope achieved by an isolated tiny scanner with low voltage imaging capability

    NASA Astrophysics Data System (ADS)

    Wang, Qi; Hou, Yubin; Wang, Junting; Lu, Qingyou

    2013-11-01

    We present a novel homebuilt scanning tunneling microscope (STM) with high quality atomic resolution. It is equipped with a small but powerful GeckoDrive piezoelectric motor which drives a miniature and detachable scanning part to implement coarse approach. The scanning part is a tiny piezoelectric tube scanner (industry type: PZT-8, whose d31 coefficient is one of the lowest) housed in a slightly bigger polished sapphire tube, which is riding on and spring clamped against the knife edges of a tungsten slot. The STM so constructed shows low back-lashing and drifting and high repeatability and immunity to external vibrations. These are confirmed by its low imaging voltages, low distortions in the spiral scanned images, and high atomic resolution quality even when the STM is placed on the ground of the fifth floor without any external or internal vibration isolation devices.

  18. SU-C-304-04: A Compact Modular Computational Platform for Automated On-Board Imager Quality Assurance

    SciTech Connect

    Dolly, S; Cai, B; Chen, H; Anastasio, M; Sun, B; Yaddanapudi, S; Noel, C; Goddu, S; Mutic, S; Li, H; Tan, J

    2015-06-15

    Purpose: Traditionally, the assessment of X-ray tube output and detector positioning accuracy of on-board imagers (OBI) has been performed manually and subjectively with rulers and dosimeters, and typically takes hours to complete. In this study, we have designed a compact modular computational platform to automatically analyze OBI images acquired with in-house designed phantoms as an efficient and robust surrogate. Methods: The platform was developed as an integrated and automated image analysis-based platform using MATLAB for easy modification and maintenance. Given a set of images acquired with the in-house designed phantoms, the X-ray output accuracy was examined via cross-validation of the uniqueness and integration minimization of important image quality assessment metrics, while machine geometric and positioning accuracy were validated by utilizing pattern-recognition based image analysis techniques. Results: The platform input was a set of images of an in-house designed phantom. The total processing time is about 1–2 minutes. Based on the data acquired from three Varian Truebeam machines over the course of 3 months, the designed test validation strategy achieved higher accuracy than traditional methods. The kVp output accuracy can be verified within ±2 kVp, the exposure accuracy within 2%, and exposure linearity with a coefficient of variation (CV) of 0.1. Sub-millimeter position accuracy was achieved for the lateral and longitudinal positioning tests, while vertical positioning accuracy within ±2 mm was achieved. Conclusion: This new platform delivers to the radiotherapy field an automated, efficient, and stable image analysis-based procedure, for the first time, acting as a surrogate for traditional tests for LINAC OBI systems. It has great potential to facilitate OBI quality assurance (QA) with the assistance of advanced image processing techniques. In addition, it provides flexible integration of additional tests for expediting other OBI
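    The platform itself is MATLAB-based and not public; purely as an illustration of the kind of tolerance checks it automates, the acceptance limits quoted above can be expressed as a simple rule set (the nominal values and field names below are hypothetical).

```python
def check_obi_results(measured, nominal_kvp=80.0, nominal_exposure=1.0):
    """Illustrative pass/fail checks mirroring the tolerances quoted in
    the abstract; not the platform's actual implementation."""
    return {
        "kVp within +/-2":         abs(measured["kvp"] - nominal_kvp) <= 2.0,
        "exposure within 2%":      abs(measured["exposure"] - nominal_exposure)
                                   / nominal_exposure <= 0.02,
        "linearity CV <= 0.1":     measured["exposure_cv"] <= 0.1,
        "lat/long within 1 mm":    max(measured["lat_mm"], measured["long_mm"]) <= 1.0,
        "vertical within +/-2 mm": measured["vert_mm"] <= 2.0,
    }

print(check_obi_results({"kvp": 81.2, "exposure": 1.01, "exposure_cv": 0.06,
                         "lat_mm": 0.4, "long_mm": 0.7, "vert_mm": 1.5}))
```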

  19. Dosimetric and image quality assessment of different acquisition protocols of a novel 64-slice CT scanner

    NASA Astrophysics Data System (ADS)

    Vite, Cristina; Mangini, Monica; Strocchi, Sabina; Novario, Raffaele; Tanzi, Fabio; Carrafiello, Gianpaolo; Conte, Leopoldo; Fugazzola, Carlo

    2006-03-01

    Dose and image quality assessment in computed tomography (CT) are strongly affected by the wide variety of CT scanners (axial CT, spiral CT, low-multislice CT (2-16), high-multislice CT (32-64)) and imaging protocols in use. Very little information is currently available on 64-slice CT scanners. The aim of this work is to assess image quality in relation to patient dose indexes and to investigate the achievable dose reduction for a commercially available 64-slice CT scanner. CT dose indexes (weighted computed tomography dose index, CTDIw, and dose-length product, DLP) were measured with a standard CT phantom for the main protocols in use (head, chest, abdomen and pelvis) and compared with the values displayed by the scanner itself. The differences were always below 7%. All the indexes were below the Diagnostic Reference Levels defined by the European Council Directive 97/42. Effective doses were measured for each protocol with thermoluminescent dosimeters inserted in an anthropomorphic Alderson Rando phantom and compared with the same values computed by the ImPACT CT Patient Dosimetry Calculator software code, corrected by a factor taking into account the number of slices (from 16 to 64). The differences were always below 25%. The effective doses range from 1.5 mSv (head) to 21.8 mSv (abdomen). The dose reduction system of the scanner was assessed by comparing the effective dose measured for a standard phantom-man (a cylindrical phantom, 32 cm in diameter) to the mean dose evaluated on 46 patients. The standard phantom was considered as the no-dose-reduction reference. The dose reduction factor ranges from 16% to 78% (mean of 46%) for all protocols, from 29% to 78% (mean of 55%) for the chest protocol, and from 16% to 76% (mean of 42%) for the abdomen protocol. The possibility of a further dose reduction was investigated by measuring image quality (spatial resolution, contrast and noise) as a function of CTDIw. This curve shows a quite flat trend decreasing the dose approximately to 90% and a
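    The dose indexes mentioned above are related by standard definitions (weighted CTDI from centre and periphery measurements, volume CTDI, dose-length product, and effective dose via a region-specific conversion coefficient). The worked example below uses illustrative numbers, not the values measured in the study.

```python
# Standard CT dose-index relations (illustrative numbers only).
ctdi_center, ctdi_periphery = 10.0, 14.0      # mGy, measured in a body phantom
pitch, scan_length_cm = 1.0, 40.0

ctdi_w = ctdi_center / 3.0 + 2.0 * ctdi_periphery / 3.0    # weighted CTDI, mGy
ctdi_vol = ctdi_w / pitch                                  # volume CTDI, mGy
dlp = ctdi_vol * scan_length_cm                            # dose-length product, mGy*cm

k_abdomen = 0.015                                          # mSv per mGy*cm, adult abdomen
effective_dose = dlp * k_abdomen
print(f"CTDIw = {ctdi_w:.1f} mGy, DLP = {dlp:.0f} mGy*cm, E = {effective_dose:.1f} mSv")
```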

  20. Comparative Study of Achievable Quality Cutting Carbon Fibre Reinforced Thermoplastics Using Continuous Wave and Pulsed Laser Sources

    NASA Astrophysics Data System (ADS)

    Bluemel, S.; Jaeschke, P.; Suttmann, O.; Overmeyer, L.

    Laser cutting of CFRP lightweight parts has the advantages of a contact-free, automatable and flexible processing for a prospective series production. For the development of strategies for laser cutting of carbon fibre reinforced plastics (CFRP), different scientific approaches exist to achieve a process with small heat affected zones (HAZ), and high cutting rates. Within this paper a cw laser, a nanosecond and a picosecond laser source emitting in the near infrared range have been used in combination with a scanning system to cut CFRP with a thermoplastic matrix. The influence of the scanning speed on the size of the HAZ and the corresponding tensile strength were investigated for each laser source. Furthermore, the authors compared the achievable HAZ and the effective cutting speeds of the different setups in order to evaluate the efficiency and quality of the chosen strategies. The results show that a nanosecond pulsed laser source with high average power is a good trade-off between attainable quality and cutting rate.

  1. Improved ultrasonic TV images achieved by use of Lamb-wave orientation technique

    NASA Technical Reports Server (NTRS)

    Berger, H.

    1967-01-01

    Lamb-wave sample orientation technique minimizes the interference from standing waves in continuous wave ultrasonic television imaging techniques used with thin metallic samples. The sample under investigation is oriented such that the wave incident upon it is not normal, but slightly angled.

  2. Investigating perceptual qualities of static surface appearance using real materials and displayed images.

    PubMed

    Tanaka, Midori; Horiuchi, Takahiko

    2015-10-01

    Recent experimental evidence supports the idea that human observers are good at recognizing and categorizing materials. Fleming et al. reported that perceptual qualities and material classes are closely related using projected images (Journal of Vision 13(8) (2013) 9). In this paper, we further investigated their findings using real materials and degraded image versions of the same materials. We constructed a real material dataset, as well as four image datasets by varying chromaticity (color vs. gray) and resolution (high vs. low) of the material images. To investigate the fundamental properties of materials' static surface appearance, we used stimuli that lacked shape and saturated color information. We then investigated the relationship between these perceptual qualities and the various types of image representation through psychophysical experiments. Our results showed that the representation method of some materials affected their perceptual qualities. These cases could be classified into the following three types: (1) perceptual qualities decreased by reproducing the materials as images, (2) perceptual qualities decreased by creating gray images, and (3) perceptual qualities such as "Hardness" and "Coldness" tended to increase when the materials were reproduced as low-quality images. Through methods such as principal component analysis and k-means clustering, we found that material categories are more likely to be confused when materials are represented as images, especially gray images.

  3. High-quality image magnification applying Gerchberg-Papoulis iterative algorithm with discrete cosine transform

    NASA Astrophysics Data System (ADS)

    Shinbori, Eiji; Takagi, Mikio

    1992-11-01

    A new image magnification method, called 'IM-GPDCT' (image magnification applying the Gerchberg-Papoulis (GP) iterative algorithm with discrete cosine transform (DCT)), is described and its performance evaluated. This method markedly improves the quality of a magnified image by restoring the high spatial frequencies that are conventionally lost through low-pass filtering. These frequencies are restored using two known constraints applied during the iterative DCT procedure: (1) the correct information in a passband is known and (2) the spatial extent of an image is finite. Simulation results show that the IM-GPDCT outperforms three conventional interpolation methods from both a restoration error and an image quality standpoint.
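    As a rough sketch of the iteration described above (assumed details: an orthonormal DCT, a fixed number of iterations, and a simple amplitude clip standing in for the paper's finite-extent constraint), the known low-frequency DCT block is re-imposed on each pass while the estimate is constrained in the spatial domain.

```python
import numpy as np
from scipy.fft import dctn, idctn

def gp_dct_magnify(img, factor=2, iters=30):
    """Gerchberg-Papoulis-style magnification sketch: hold the measured
    low-frequency DCT passband fixed and alternate with a spatial-domain
    constraint (here a clip to the valid intensity range)."""
    h, w = img.shape
    passband = dctn(img, norm="ortho")
    est = np.zeros((h * factor, w * factor))
    for _ in range(iters):
        coeffs = dctn(est, norm="ortho")
        coeffs[:h, :w] = passband * factor      # re-impose the known band
        est = idctn(coeffs, norm="ortho")
        est = np.clip(est, 0.0, 1.0)            # spatial-domain constraint
    return est

rng = np.random.default_rng(0)
small = rng.random((32, 32))
print(gp_dct_magnify(small, factor=2).shape)     # (64, 64)
```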

  4. A Multivariate Model for Coastal Water Quality Mapping Using Satellite Remote Sensing Images

    PubMed Central

    Su, Yuan-Fong; Liou, Jun-Jih; Hou, Ju-Chen; Hung, Wei-Chun; Hsu, Shu-Mei; Lien, Yi-Ting; Su, Ming-Daw; Cheng, Ke-Sheng; Wang, Yeng-Fung

    2008-01-01

    This study demonstrates the feasibility of coastal water quality mapping using satellite remote sensing images. Water quality sampling campaigns were conducted over a coastal area in northern Taiwan for measurements of three water quality variables including Secchi disk depth, turbidity, and total suspended solids. SPOT satellite images nearly concurrent with the water quality sampling campaigns were also acquired. A spectral reflectance estimation scheme proposed in this study was applied to SPOT multispectral images for estimation of the sea surface reflectance. Two models, univariate and multivariate, for water quality estimation using the sea surface reflectance derived from SPOT images were established. The multivariate model takes into consideration the wavelength-dependent combined effect of individual seawater constituents on the sea surface reflectance and is superior over the univariate model. Finally, quantitative coastal water quality mapping was accomplished by substituting the pixel-specific spectral reflectance into the multivariate water quality estimation model. PMID:27873872

  5. A Multivariate Model for Coastal Water Quality Mapping Using Satellite Remote Sensing Images.

    PubMed

    Su, Yuan-Fong; Liou, Jun-Jih; Hou, Ju-Chen; Hung, Wei-Chun; Hsu, Shu-Mei; Lien, Yi-Ting; Su, Ming-Daw; Cheng, Ke-Sheng; Wang, Yeng-Fung

    2008-10-10

    This study demonstrates the feasibility of coastal water quality mapping using satellite remote sensing images. Water quality sampling campaigns were conducted over a coastal area in northern Taiwan for measurements of three water quality variables including Secchi disk depth, turbidity, and total suspended solids. SPOT satellite images nearly concurrent with the water quality sampling campaigns were also acquired. A spectral reflectance estimation scheme proposed in this study was applied to SPOT multispectral images for estimation of the sea surface reflectance. Two models, univariate and multivariate, for water quality estimation using the sea surface reflectance derived from SPOT images were established. The multivariate model takes into consideration the wavelength-dependent combined effect of individual seawater constituents on the sea surface reflectance and is superior over the univariate model. Finally, quantitative coastal water quality mapping was accomplished by substituting the pixel-specific spectral reflectance into the multivariate water quality estimation model.
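    The contrast between the univariate and multivariate models can be sketched with ordinary least squares on hypothetical band reflectances (the actual SPOT-derived reflectances and regression coefficients of the study are not reproduced here).

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical data: reflectance in three SPOT bands and Secchi disk depth.
rng = np.random.default_rng(0)
reflectance = rng.uniform(0.01, 0.15, (60, 3))
secchi_depth = 6 - 18 * reflectance[:, 1] - 8 * reflectance[:, 2] \
               + rng.normal(0, 0.2, 60)

multi = LinearRegression().fit(reflectance, secchi_depth)
uni = LinearRegression().fit(reflectance[:, [1]], secchi_depth)
print("multivariate R^2:", round(multi.score(reflectance, secchi_depth), 3))
print("univariate   R^2:", round(uni.score(reflectance[:, [1]], secchi_depth), 3))
```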

  6. Digital breast tomosynthesis: Dose and image quality assessment.

    PubMed

    Maldera, A; De Marco, P; Colombo, P E; Origgi, D; Torresin, A

    2017-01-01

    The aim of this work was to evaluate how different acquisition geometries and reconstruction parameters affect the performance of four digital breast tomosynthesis (DBT) systems (Senographe Essential - GE, Mammomat Inspiration - Siemens, Selenia Dimensions - Hologic and Amulet Innovality - Fujifilm) on the basis of a physical characterization. Average glandular dose (AGD) and image quality parameters such as in-plane/in-depth resolution, signal difference to noise ratio (SDNR) and artefact spread function (ASF) were examined. Measured AGD values were below the EUREF limits for 2D imaging. A large variability was recorded among the investigated systems: the mean DBT/2D dose ratio ranged between 1.1 and 1.9. In-plane resolution was in the range 2.2-3.8 mm(-1) in the chest wall-nipple direction. A worse resolution was found for all devices in the tube travel direction. In-depth resolution improved with increasing scan angle but was also affected by the choice of reconstruction and post-processing algorithms. The highest z-resolution was provided by Siemens (50°, FWHM=2.3 mm) followed by GE (25°, FWHM=2.8 mm), while the Fujifilm HR showed the lowest, despite its wide scan angle (40°, FWHM=4.1 mm). The ASF was dependent on scan angle: smaller-range systems showed wider ASF curves; however, a clear relationship was not found between scan angle and ASF, due to the different post-processing and reconstruction algorithms. SDNR analysis, performed on the Fujifilm system, demonstrated that pixel binning improves detectability for a fixed dose per projection. In conclusion, we provide a performance comparison among four DBT systems under a clinical acquisition mode.
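    The in-depth resolution figures quoted above are full widths at half maximum of slice profiles; a small helper of the kind used to extract them from an ASF or slice sensitivity curve is sketched below (the Gaussian test profile is purely illustrative).

```python
import numpy as np

def fwhm(z_mm, profile):
    """Full width at half maximum of an in-depth profile, using linear
    interpolation of the half-maximum crossings."""
    p = np.asarray(profile, dtype=float)
    z = np.asarray(z_mm, dtype=float)
    half = p.max() / 2.0
    above = np.where(p >= half)[0]
    i0, i1 = above[0], above[-1]
    zl = np.interp(half, [p[i0 - 1], p[i0]], [z[i0 - 1], z[i0]]) if i0 > 0 else z[i0]
    zr = np.interp(half, [p[i1 + 1], p[i1]], [z[i1 + 1], z[i1]]) if i1 < p.size - 1 else z[i1]
    return zr - zl

z = np.linspace(-10, 10, 201)
profile = np.exp(-z ** 2 / (2 * 1.2 ** 2))       # Gaussian, sigma = 1.2 mm
print(f"FWHM = {fwhm(z, profile):.2f} mm")       # ~2.83 mm
```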

  7. Eigenspectra optoacoustic tomography achieves quantitative blood oxygenation imaging deep in tissues

    NASA Astrophysics Data System (ADS)

    Tzoumas, Stratis; Nunes, Antonio; Olefir, Ivan; Stangl, Stefan; Symvoulidis, Panagiotis; Glasl, Sarah; Bayer, Christine; Multhoff, Gabriele; Ntziachristos, Vasilis

    2016-06-01

    Light propagating in tissue attains a spectrum that varies with location due to wavelength-dependent fluence attenuation, an effect that causes spectral corruption. Spectral corruption has limited the quantification accuracy of optical and optoacoustic spectroscopic methods, and impeded the goal of imaging blood oxygen saturation (sO2) deep in tissues; a critical goal for the assessment of oxygenation in physiological processes and disease. Here we describe light fluence in the spectral domain and introduce eigenspectra multispectral optoacoustic tomography (eMSOT) to account for wavelength-dependent light attenuation, and estimate blood sO2 within deep tissue. We validate eMSOT in simulations, phantoms and animal measurements and spatially resolve sO2 in muscle and tumours, validating our measurements with histology data. eMSOT shows substantial sO2 accuracy enhancement over previous optoacoustic methods, potentially serving as a valuable tool for imaging tissue pathophysiology.

  8. Noncontact photoacoustic imaging achieved by using a low-coherence interferometer as the acoustic detector.

    PubMed

    Wang, Yi; Li, Chunhui; Wang, Ruikang K

    2011-10-15

    We report on a noncontact photoacoustic imaging (PAI) technique in which a low-coherence interferometer [(LCI), optical coherence tomography (OCT) hardware] is utilized as the acoustic detector. A synchronization approach is used to lock the LCI system at its highly sensitive region for photoacoustic detection. The technique is experimentally verified by the imaging of a scattering phantom embedded with hairs and the blood vessels within a mouse ear in vitro. The system's axial and lateral resolutions are evaluated at 60 and 30 μm, respectively. The experimental results indicate that PAI in a noncontact detection mode is possible with high resolution and high bandwidth. The proposed approach lends itself to a natural integration of PAI with OCT, rather than a combination of two separate and independent systems.

  9. IR image quality assessment and real-time optimum seeking method based on dynamic visual characteristics

    NASA Astrophysics Data System (ADS)

    Li, Bin; Liu, Gang; Gao, Yongmin; Lei, Hao; Wu, Haiying; Wang, Yu; Rong, Xiaolong

    2016-10-01

    Image quality is an important factor that influences dynamic target information perception; it is a key factor in real-time target state analysis and judgment. In order to solve the multi-observation-station comparison and video optimum-seeking problem in the process of target information perception and recognition, an image quality assessment method based on visual characteristics is proposed for infrared target tracking. First, the basic characteristics of infrared target images and the application requirements are analysed, together with the status and problems of multi-station optimum-seeking technology, and a processing flow for the image analysis is established. Then, objective image quality assessment indexes are established that reflect the basic characteristics of the target image, and these indexes are integrated into a normalized assessment function. Based on this quality assessment function, infrared image quality assessment built on infrared target recognition and image analysis is realized, characterized mainly by the region of interest and dynamic visual characteristics. On this basis, real-time optimum seeking among multi-station infrared target tracking images is completed. In order to verify the effectiveness of the method and its practical application, quality assessment and comparison of infrared images from different stations were designed. An example shows that the method proposed in this paper can realize multi-observation-station infrared image assessment and comparison, image quality sorting, and optimum seeking of infrared images based on the quality assessment. The results accord with the characteristics of infrared target images and dynamic visual characteristics.

  10. Similar Reference Image Quality Assessment: A New Database and A Trial with Local Feature Matching

    NASA Astrophysics Data System (ADS)

    Lu, Qingbo; Zhou, Wengang; Li, Houqiang

    2016-12-01

    Conventionally, the reference image for image quality assessment (IQA) is completely available (full-reference IQA) or unavailable (no-reference IQA). Even for reduced-reference IQA, the features that are used to predict image quality are still extracted from the pristine reference image. However, the pristine reference image is always unavailable in many real scenarios. In contrast, it is convenient to obtain a number of similar reference images via retrieval from the Internet. These similar reference images may share similar contents and scenes with the image to be assessed. In this paper, we attempt to discuss the image quality assessment problem from the view of similar images, i.e. similar reference IQA. Although the similar reference images share similar contents with the degraded image, the difference between them still cannot be ignored. Therefore, we propose an IQA framework based on local feature matching, which helps to identify the similar regions and structures. The IQA features are then computed only from these similar regions to predict the final image quality score. Moreover, since there are no IQA databases for the similar reference IQA problem, we establish a novel IQA database that consists of 272 images from four scenes. The experiments demonstrate that the performance of our scheme goes beyond state-of-the-art no-reference IQA methods and some full-reference IQA algorithms.
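    A rough sketch of the similar-reference idea (assumed details: ORB keypoints for the local matching and a simple patch-variance ratio standing in for the paper's richer IQA features) is shown below using scikit-image.

```python
import numpy as np
from skimage.feature import ORB, match_descriptors

def _orb_features(image, n_keypoints=200):
    # Detect keypoints and binary descriptors on a grayscale float image.
    orb = ORB(n_keypoints=n_keypoints)
    orb.detect_and_extract(image)
    return orb.keypoints, orb.descriptors

def similar_reference_score(similar_ref, degraded, patch=8):
    """Match local features between a retrieved similar image and the
    degraded image, then compare a simple sharpness statistic (local
    variance) over the matched patches only."""
    k1, d1 = _orb_features(similar_ref)
    k2, d2 = _orb_features(degraded)
    matches = match_descriptors(d1, d2, cross_check=True)
    scores = []
    for i, j in matches:
        r1, c1 = k1[i].astype(int)
        r2, c2 = k2[j].astype(int)
        p1 = similar_ref[r1:r1 + patch, c1:c1 + patch]
        p2 = degraded[r2:r2 + patch, c2:c2 + patch]
        if p1.size and p2.size:
            scores.append(p2.var() / (p1.var() + 1e-8))
    return float(np.mean(scores)) if scores else float("nan")
```

    A score near 1 would indicate that matched regions of the degraded image retain the local contrast of the similar reference; the paper's framework computes its own, richer features over such matched regions before predicting the final quality score.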

  11. A luciferin analogue generating near-infrared bioluminescence achieves highly sensitive deep-tissue imaging

    PubMed Central

    Kuchimaru, Takahiro; Iwano, Satoshi; Kiyama, Masahiro; Mitsumata, Shun; Kadonosono, Tetsuya; Niwa, Haruki; Maki, Shojiro; Kizaka-Kondoh, Shinae

    2016-01-01

    In preclinical cancer research, bioluminescence imaging with firefly luciferase and D-luciferin has become a standard to monitor biological processes both in vitro and in vivo. However, the emission maximum (λmax) of bioluminescence produced by D-luciferin is 562 nm where light is not highly penetrable in biological tissues. This emphasizes a need for developing a red-shifted bioluminescence imaging system to improve detection sensitivity of targets in deep tissue. Here we characterize the bioluminescent properties of the newly synthesized luciferin analogue, AkaLumine-HCl. The bioluminescence produced by AkaLumine-HCl in reactions with native firefly luciferase is in the near-infrared wavelength ranges (λmax=677 nm), and yields significantly increased target-detection sensitivity from deep tissues with maximal signals attained at very low concentrations, as compared with D-luciferin and emerging synthetic luciferin CycLuc1. These characteristics offer a more sensitive and accurate method for non-invasive bioluminescence imaging with native firefly luciferase in various animal models. PMID:27297211

  12. A luciferin analogue generating near-infrared bioluminescence achieves highly sensitive deep-tissue imaging.

    PubMed

    Kuchimaru, Takahiro; Iwano, Satoshi; Kiyama, Masahiro; Mitsumata, Shun; Kadonosono, Tetsuya; Niwa, Haruki; Maki, Shojiro; Kizaka-Kondoh, Shinae

    2016-06-14

    In preclinical cancer research, bioluminescence imaging with firefly luciferase and D-luciferin has become a standard to monitor biological processes both in vitro and in vivo. However, the emission maximum (λmax) of bioluminescence produced by D-luciferin is 562 nm where light is not highly penetrable in biological tissues. This emphasizes a need for developing a red-shifted bioluminescence imaging system to improve detection sensitivity of targets in deep tissue. Here we characterize the bioluminescent properties of the newly synthesized luciferin analogue, AkaLumine-HCl. The bioluminescence produced by AkaLumine-HCl in reactions with native firefly luciferase is in the near-infrared wavelength ranges (λmax=677 nm), and yields significantly increased target-detection sensitivity from deep tissues with maximal signals attained at very low concentrations, as compared with D-luciferin and emerging synthetic luciferin CycLuc1. These characteristics offer a more sensitive and accurate method for non-invasive bioluminescence imaging with native firefly luciferase in various animal models.

  13. Image reconstruction and image quality evaluation for a dual source CT scanner

    PubMed Central

    Flohr, T. G.; Bruder, H.; Stierstorfer, K.; Petersilka, M.; Schmidt, B.; McCollough, C. H.

    2008-01-01

    The authors present and evaluate concepts for image reconstruction in dual source CT (DSCT). They describe both standard spiral (helical) DSCT image reconstruction and electrocardiogram (ECG)-synchronized image reconstruction. For a compact mechanical design of the DSCT, one detector (A) can cover the full scan field of view, while the other detector (B) has to be restricted to a smaller, central field of view. The authors develop an algorithm for scan data completion, extrapolating truncated data of detector (B) by using data of detector (A). They propose a unified framework for convolution and simultaneous 3D backprojection of both (A) and (B) data, with similar treatment of standard spiral, ECG-gated spiral, and sequential (axial) scan data. In ECG-synchronized image reconstruction, a flexible scan data range per measurement system can be used to trade off temporal resolution for reduced image noise. Both data extrapolation and image reconstruction are evaluated by means of computer simulated data of anthropomorphic phantoms, by phantom measurements and patient studies. The authors show that a consistent filter direction along the spiral tangent on both detectors is essential to reduce cone-beam artifacts, requiring truncation of the extrapolated (B) data after convolution in standard spiral scans. Reconstructions of an anthropomorphic thorax phantom demonstrate good image quality and dose accumulation as theoretically expected for simultaneous 3D backprojection of the filtered (A) data and the truncated filtered (B) data into the same 3D image volume. In ECG-gated spiral modes, spiral slice sensitivity profiles (SSPs) show only minor dependence on the patient’s heart rate if the spiral pitch is properly adapted. Measurements with a thin gold plate phantom result in effective slice widths (full width at half maximum of the SSP) of 0.63–0.69mm for the nominal 0.6mm slice and 0.82–0.87mm for the nominal 0.75mm slice. The visually determined through-plane (z

  14. Image quality of flat-panel cone beam CT

    NASA Astrophysics Data System (ADS)

    Rose, Georg; Wiegert, Jens; Schaefer, Dirk; Fiedler, Klaus; Conrads, Norbert; Timmer, Jan; Rasche, Volker; Noordhoek, Niels; Klotz, Erhard; Koppe, Reiner

    2003-06-01

    We present results on 3D image quality in terms of spatial resolution (MTF) and low contrast detectability, obtained on a flat dynamic X-ray detector (FD) based cone-beam CT (CB-CT) setup. Experiments have been performed on a high precision bench-top system with rotating object table, fixed X-ray tube and 176 × 176 mm² active detector area (Trixell Pixium 4800). Several objects, including CT performance-, MTF- and pelvis phantoms, have been scanned under various conditions, including a high dose setup in order to explore the 3D performance limits. Under these optimal conditions, the system is capable of resolving less than 1% (~10 HU) contrast in a water background. Within a pelvis phantom, even inserts of muscle and fat equivalent are clearly distinguishable. This also holds for fast acquisitions of up to 40 fps. Focusing on the spatial resolution, we obtain an almost isotropic three-dimensional resolution of up to 30 lp/cm at 10% modulation.

  15. Preliminary Evaluation of Thematic Mapper Image Data Quality

    NASA Technical Reports Server (NTRS)

    Macdonald, R. B.; Hall, F. G.; Pitts, D. E.; Bizzell, R. M.; Yao, S.; Sorensen, C.; Reyna, E.; Carnes, J. G.

    1984-01-01

    Thematic Mapper (TM) data from Mississippi County, Arkansas, and Webster County, Iowa, were examined for the purpose of evaluating the image data quality of the TM which was launched on board the LANDSAT-4 spacecraft. Preliminary clustering and principal component analysis indicates that the middle infrared and thermal infrared data of TM appear to add significant information over that of the near IR and visible bands of the multispectral scanner data. Moreover, the higher spatial resolution of TM appears to provide better definition of the edges and the within-field variability of agricultural fields. The geometric performance of TM data, without ground control correction, was found to exceed expectations. The modulation transfer function for the 1.65 μm band was found to agree with prelaunch specifications when the effects of the GSFC cubic convolution and the atmosphere were removed. The band to band registration for the bands within the noncooled focal plane was found to be better than specified. However, the middle infrared and thermal infrared bands, which are on a separate cooled focal plane, were found to be misregistered and were significantly worse than prelaunch specifications.

  16. Cyberknife Image-Guided Delivery and Quality Assurance

    SciTech Connect

    Dieterich, Sonja; Pawlicki, Todd

    2008-05-01

    The CyberKnife is a complex, emerging technology that is a significant departure from current stereotactic radiosurgery and external beam radiotherapy technologies. In its clinical application and quality assurance (QA) approach, the CyberKnife is currently situated somewhere in between stereotactic radiosurgery and radiotherapy. The clinical QA for this image-guided treatment delivery system typically follows the vendor's guidance, mainly because of the current lack of vendor-independent QA recommendations. The problem has been exacerbated because very little published data are available for QA for the CyberKnife system, especially for QA of the interaction between individual system components. The tools and techniques for QA of the CyberKnife are under development and will continue to improve with longer clinical experience of the users. The technology itself continues to evolve, forcing continuous changes and adaptation of QA. To aid in the process of developing comprehensive guidance on CyberKnife QA, a database of errors based on users reporting incidents and corrective actions would be desirable. The goal of this work was to discuss the status of QA guidelines in the clinical implementation of the CyberKnife system. This investigation was done from the perspective of an active clinical and research site using the CyberKnife.

  17. Which histological characteristics of basal cell carcinomas influence the quality of optical coherence tomography imaging?

    NASA Astrophysics Data System (ADS)

    Mogensen, M.; Thrane, L.; Joergensen, T. M.; Nürnberg, B. M.; Jemec, G. B. E.

    2009-07-01

    We explore how histopathology parameters influence OCT imaging of basal cell carcinomas (BCC) and address whether such parameters correlate with the quality of the recorded OCT images. Our results indicate that inflammation impairs OCT imaging and that sun-damaged skin can sometimes provide more clear-cut images of skin cancer lesions using OCT imaging when compared to skin cancer surrounded by skin without sun-damage.

  18. [Principles and applications of hyperspectral imaging technique in quality and safety inspection of fruits and vegetables].

    PubMed

    Zhang, Bao-Hua; Li, Jiang-Bo; Fan, Shu-Xiang; Huang, Wen-Qian; Zhang, Chi; Wang, Qing-Yan; Xiao, Guang-Dong

    2014-10-01

    The quality and safety of fruits and vegetables are among the primary concerns of consumers. Traditional chemical analytical methods are time-consuming, labor-intensive and destructive inspection techniques. With the rapid development of imaging and spectral techniques, hyperspectral imaging has been widely used in the nondestructive inspection of the quality and safety of fruits and vegetables. Hyperspectral imaging integrates the advantages of traditional imaging and spectroscopy: it can obtain both spatial and spectral information about the inspected objects. Therefore, it can be used for external quality inspection, as with a traditional imaging system, or for internal quality or safety inspection, as with spectroscopy. In recent years, many research papers on the nondestructive inspection of the quality and safety of fruits and vegetables using hyperspectral imaging have been published. In order to introduce the principles of nondestructive inspection and track the latest research developments, this paper reviews the principles, developments and applications of hyperspectral imaging in the external quality, internal quality and safety inspection of fruits and vegetables. Additionally, the basic components, analytical methods, future trends and challenges are also reported and discussed.

  19. Hyperspectral imaging for diagnosis and quality control in agri-food and industrial sectors

    NASA Astrophysics Data System (ADS)

    García-Allende, P. Beatriz; Conde, Olga M.; Mirapeix, Jesus; Cobo, Adolfo; Lopez-Higuera, Jose M.

    2010-04-01

    Optical spectroscopy has been utilized in various fields of science, industry, and medicine, since each substance is distinguishable from all others by its spectral properties. However, conventional optical spectroscopy yields information on the bulk properties of a whole sample, whereas in the agri-food industry in particular many product properties arise from heterogeneity in composition. Monitoring such heterogeneity is considerably more challenging and can be achieved with hyperspectral imaging, which determines both the optical spectrum and the spatial location of an object on a surface simultaneously. In addition, it is a nonintrusive, non-contact technique that requires no particular sample preparation, a primary concern in food monitoring, and it therefore has great potential for industrial applications. This work presents an overview of approaches based on this technology that address different problems in the agri-food and industrial sectors. The hyperspectral system was originally designed and tested for on-line raw material discrimination, a key factor in the input stages of many industrial processes. The combination of acquiring spectral information across transversal lines while materials are transported on a conveyor belt with appropriate image analyses has been successfully validated in the tobacco industry. Lastly, the use of imaging spectroscopy applied to online welding quality monitoring is discussed and compared with traditional spectroscopic approaches.
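
    As a rough illustration of the line-scan acquisition described above, the sketch below stacks simulated transversal line spectra into a cube and applies a toy correlation rule for raw-material discrimination. The camera readout function, array sizes, reference spectrum, and acceptance threshold are hypothetical placeholders, not the system or analysis validated in the tobacco industry.

        import numpy as np

        def read_line_spectra(n_pixels=320, n_bands=100):
            """Hypothetical stand-in for one readout of a line-scan (pushbroom)
            hyperspectral camera: spectra for every pixel across the conveyor width."""
            return np.random.rand(n_pixels, n_bands)

        # Stack successive transversal lines while the material advances on the belt.
        n_lines = 500
        cube = np.stack([read_line_spectra() for _ in range(n_lines)], axis=0)
        # cube shape: (lines along the belt, pixels across the belt, spectral bands)

        def normalize(v):
            """Zero-mean, unit-norm spectra for correlation-based matching."""
            v = v - v.mean(axis=-1, keepdims=True)
            return v / (np.linalg.norm(v, axis=-1, keepdims=True) + 1e-12)

        # Toy discrimination rule: accept pixels whose spectrum correlates strongly
        # with a reference spectrum of the desired raw material (assumed known).
        reference = np.random.rand(cube.shape[2])
        scores = normalize(cube.reshape(-1, cube.shape[2])) @ normalize(reference)
        accept_map = (scores > 0.9).reshape(n_lines, cube.shape[1])
        print(accept_map.shape, float(accept_map.mean()))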

  20. Medical Image Processing Server applied to Quality Control of Nuclear Medicine.

    NASA Astrophysics Data System (ADS)

    Vergara, C.; Graffigna, J. P.; Marino, E.; Omati, S.; Holleywell, P.

    2016-04-01

    This paper is framed within the area of medical image processing and presents the installation, configuration, and implementation of a medical image processing server (MIPS) at the Fundación Escuela de Medicina Nuclear (FUESMEN) in Mendoza, Argentina. The server was developed in the Gabinete de Tecnologia Médica (GA.TE.ME), Facultad de Ingeniería, Universidad Nacional de San Juan. MIPS is a software system that, using the DICOM standard, receives medical imaging studies from different modalities or viewing stations, executes algorithms on them, and returns the results to other devices. To achieve these objectives, preliminary tests were first conducted in the laboratory; the tools were then installed remotely in the clinical environment. Once suitable algorithms had been defined, the appropriate protocols for setting up and using them in the different services were established. Finally, the focus is on the implementation and training provided at FUESMEN, using nuclear medicine quality control processes. Results of the implementation are reported in this work.
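
    A minimal sketch of the kind of DICOM receive-and-process loop described above, assuming the pynetdicom and pydicom libraries are available: a storage SCP accepts studies pushed from modalities or viewing stations and runs a simple nuclear medicine QC statistic (integral uniformity) on the received pixel data. The AE title, port, and QC routine are illustrative assumptions, not the actual MIPS implementation.

        from pynetdicom import AE, evt, AllStoragePresentationContexts

        def integral_uniformity(flood):
            """NEMA-style integral uniformity (%) of a gamma camera flood image.
            The real QC procedure restricts this to the useful field of view and
            smooths the image first; both steps are omitted here for brevity."""
            lo, hi = float(flood.min()), float(flood.max())
            return 100.0 * (hi - lo) / (hi + lo)

        def handle_store(event):
            """Receive a DICOM object via C-STORE, run the QC statistic, and report."""
            ds = event.dataset
            ds.file_meta = event.file_meta
            try:
                print("Integral uniformity: %.1f%%" % integral_uniformity(ds.pixel_array))
            except Exception as exc:  # no pixel data, compressed transfer syntax, ...
                print("QC skipped:", exc)
            return 0x0000  # DICOM Success status

        ae = AE(ae_title="MIPS_QC")  # placeholder AE title
        ae.supported_contexts = AllStoragePresentationContexts
        handlers = [(evt.EVT_C_STORE, handle_store)]
        # Listen for studies pushed from modalities or viewing stations (placeholder port).
        ae.start_server(("0.0.0.0", 11112), block=True, evt_handlers=handlers)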

  1. Performance of three image-quality metrics in ink-jet printing of plain papers

    NASA Astrophysics Data System (ADS)

    Lee, David L.; Winslow, Alan T.

    1993-07-01

    Three image-quality metrics are evaluated: Hamerly's edge raggedness, or tangential edge profile; Granger and Cupery's subjective quality factor (SQF) derived from the second moment of the line spread function; and SQF derived from Gur and O'Donnell's reflectance transfer function. These metrics are only a few of the many in the literature. Standard office papers from North America and Europe, representing a broad spectrum of what is commercially available, were printed with a 300-dpi Hewlett-Packard Deskjet printer. An untrained panel of eight judges viewed text in a variety of fonts and a graphics target, and assigned each print an integer score based on its overall quality. Analysis of the metrics revealed that Granger's SQF had the highest correlation with panel rank and achieved a level of precision approaching single-judge error, that is, the ranking error made by an individual judge. While the other measures correlated in varying degrees, they were less precise. This paper reviews their theory, measurement, and performance.
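
    As a rough sketch of how an SQF-style figure can be derived from a measured line spread function, the code below computes an MTF as the Fourier magnitude of the LSF and integrates it over an assumed frequency band. The band limits, normalization, and test LSF are placeholders and do not reproduce Granger and Cupery's published formulation.

        import numpy as np

        def sqf_from_lsf(lsf, dx_mm, f_lo=0.5, f_hi=5.0):
            """Toy subjective-quality-factor estimate: area of the normalized MTF
            over an assumed band of spatial frequencies (cycles/mm). The band
            limits and normalization are placeholders, not the published values."""
            lsf = lsf / lsf.sum()                       # normalize the line spread function
            mtf = np.abs(np.fft.rfft(lsf))              # MTF magnitude = |FT of the LSF|
            mtf = mtf / mtf[0]                          # unity response at zero frequency
            freqs = np.fft.rfftfreq(lsf.size, d=dx_mm)  # cycles per mm
            band = (freqs >= f_lo) & (freqs <= f_hi)
            return float(np.trapz(mtf[band], freqs[band]) / (f_hi - f_lo))

        # Example: a Gaussian LSF sampled at 300 dpi (pixel pitch 25.4 / 300 mm).
        dx = 25.4 / 300.0
        x = np.arange(-32, 33) * dx
        lsf = np.exp(-0.5 * (x / 0.1) ** 2)
        print(round(sqf_from_lsf(lsf, dx_mm=dx), 3))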

  2. Study of quality perception in medical images based on comparison of contrast enhancement techniques in mammographic images

    NASA Astrophysics Data System (ADS)

    Matheus, B.; Verçosa, L. B.; Barufaldi, B.; Schiabel, H.

    2014-03-01

    With the absolute prevalence of digital images in mammography, several new tools have become available to radiologists, such as CAD schemes, digital zoom, and contrast alteration. This work focuses on contrast variation and how radiologists react to these changes when asked to evaluate image quality. Three contrast-enhancing techniques were used in this study: conventional equalization, CCB Correction [1] (a digitization correction), and value subtraction. A set of 100 images from available online mammographic databases was used in the tests. The tests consisted of presenting all four versions of an image (the original plus the three contrast-enhanced versions) to the specialist, who was asked to rank them from best to worst quality for diagnosis. Analysis of the results demonstrated that CCB Correction [1] produced better images in almost all cases. Equalization, which mathematically produces better contrast, was considered the worst mammographic image quality enhancement in the majority of cases (69.7%). The value subtraction procedure produced images considered better than the original in 84% of cases. The tests indicate that, for the radiologist's perception, it seems more important to guarantee full visualization of nuances than to have a high-contrast image. Another observation is that the "ideal" scanner curve does not yield the best result for a mammographic image: the important contrast range is the middle of the histogram, where nodules and masses need to be seen and clearly distinguished.
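
    For reference, conventional global histogram equalization, the first of the three techniques compared above, can be sketched as follows. The bit depth and synthetic test image are assumptions; the CCB Correction and value-subtraction methods are not reproduced here.

        import numpy as np

        def equalize_histogram(img, levels=4096):
            """Conventional global histogram equalization for a 12-bit image
            (the bit depth is an assumption; the paper does not state it)."""
            hist, bin_edges = np.histogram(img.ravel(), bins=levels, range=(0, levels))
            cdf = hist.cumsum().astype(np.float64)
            cdf_min = cdf[cdf > 0].min()
            mapping = (cdf - cdf_min) / (cdf[-1] - cdf_min) * (levels - 1)
            out = np.interp(img.ravel(), bin_edges[:-1], mapping).reshape(img.shape)
            return np.clip(out, 0, levels - 1)

        # Toy usage on a synthetic low-contrast image occupying the middle of the histogram.
        img = (np.random.rand(256, 256) * 800 + 1600).astype(np.uint16)
        eq = equalize_histogram(img)
        print(img.min(), img.max(), int(eq.min()), int(eq.max()))  # contrast is stretched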

  3. Quality metric in matched Laplacian of Gaussian response domain for blind adaptive optics image deconvolution

    NASA Astrophysics Data System (ADS)

    Guo, Shiping; Zhang, Rongzhi; Yang, Yikang; Xu, Rong; Liu, Changhai; Li, Jisheng

    2016-04-01

    Adaptive optics (AO) in conjunction with subsequent postprocessing techniques has clearly improved the resolution of turbulence-degraded images in ground-based astronomical observation and in the detection and identification of artificial space objects. However, important tasks involved in AO image postprocessing, such as frame selection, stopping iterative deconvolution, and algorithm comparison, commonly need manual intervention and cannot be performed automatically due to the lack of widely agreed-upon image quality metrics. In this work, based on the Laplacian of Gaussian (LoG) local contrast feature detection operator, we propose a LoG-domain matching operation to capture effective and universal image quality statistics. We then extract two no-reference quality assessment indices in the matched LoG domain that can be used for a variety of postprocessing tasks. Three typical space object images with distinct structural features are tested to verify the consistency of the proposed metric with perceptual image quality through subjective evaluation.
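
    The LoG operator that the proposed metric builds on is readily available in standard libraries. The sketch below, assuming SciPy, computes a scale-normalized LoG response and uses its spread as a crude sharpness proxy; it does not implement the paper's matching operation or its two no-reference indices.

        import numpy as np
        from scipy.ndimage import gaussian_filter, gaussian_laplace

        def log_response(img, sigma=2.0):
            """Scale-normalized Laplacian-of-Gaussian local contrast response at
            one scale (the sigma**2 normalization is a common convention)."""
            return (sigma ** 2) * gaussian_laplace(img.astype(np.float64), sigma)

        # Crude check: blurring an image lowers the spread of its LoG response.
        rng = np.random.default_rng(0)
        sharp = rng.random((128, 128))
        blurred = gaussian_filter(sharp, sigma=3.0)
        print(log_response(sharp).std() > log_response(blurred).std())  # expected: True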

  4. Evaluation of image quality of MRI data for brain tumor surgery

    NASA Astrophysics Data System (ADS)

    Heckel, Frank; Arlt, Felix; Geisler, Benjamin; Zidowitz, Stephan; Neumuth, Thomas

    2016-03-01

    3D medical images are important components of modern medicine. Their usefulness for the physician depends on their quality, though. Only high-quality images allow accurate and reproducible diagnosis and appropriate support during treatment. We have analyzed 202 MRI images for brain tumor surgery in a retrospective study. Both an experienced neurosurgeon and an experienced neuroradiologist rated each available image with respect to its role in the clinical workflow, its suitability for this specific role, various image quality characteristics, and imaging artifacts. Our results show that MRI data acquired for brain tumor surgery does not always fulfill the required quality standards and that there is a significant disagreement between the surgeon and the radiologist, with the surgeon being more critical. Noise, resolution, as well as the coverage of anatomical structures were the most important criteria for the surgeon, while the radiologist was mainly disturbed by motion artifacts.

  5. Bi-species imposex monitoring in Galicia (NW Spain) shows contrasting achievement of the OSPAR Ecological Quality Objective for TBT.

    PubMed

    Ruiz, J M; Carro, B; Albaina, N; Couceiro, L; Míguez, A; Quintela, M; Barreiro, R

    2017-01-30

    Imposex is decreasing worldwide after the total ban on tributyltin (TBT) from antifouling paints. In order to assess improvement in the NE Atlantic, the OSPAR Convention designed an Ecological Quality Objective (EcoQO) based on the VDSI (vas deferens sequence index, an agreed measure of imposex) in the rock snail Nucella lapillus; wherever this is not available, the mud snail Nassarius reticulatus was proposed as a proxy. We determined VDSI in Galician populations of rock (n≥34) and mud (n≥18) snails at regular intervals from pre-ban times until 2009 and 2011, respectively. While imposex in the former started decreasing in 2006 and by 2009 the EcoQO had been met in the area, VDSI in the latter was not significantly reduced until 2011 and values contradict such an achievement. This suggests that the OSPAR imposex bi-species scheme may not be of direct application in the current post-ban scenario.

  6. Achieving high quality in ST-segment elevation myocardial infarction care: one urban academic medical center experience.

    PubMed

    Purim-Shem-Tov, Yanina A; Melgoza, Normal; Haw, Janet; Schaer, Gary L; Calvin, James E; Rumoro, Dino P

    2012-03-01

    Management of acute myocardial infarction with ST elevation (STEMI) remains a challenge for academic institutions. Numerous factors are at play from the time the electrocardiogram is obtained to the time the patient arrives in the catheterization laboratory and the balloon is inflated. Academic hospitals located in large urban centers must deal with staff living long distances from the facility, so assembling the catheterization team after hours and on weekends becomes a difficult task. Other factors contribute to time delays as well, such as administering electrocardiograms in a timely fashion, having emergency physicians activate the catheterization team directly instead of contacting the cardiologist to discuss the case, and other time-sensitive factors. All of the aforementioned issues contribute to the delay. Yet primary percutaneous coronary intervention is clearly demonstrated as the modality of choice in the treatment of STEMI, improving patients' morbidity and mortality. Therefore, it is imperative that institutions do all they can to improve their protocols and meet the core measures in the treatment of STEMI patients, including a door-to-balloon time of less than 90 minutes. Our institution started a quality improvement program for STEMI care in 1993 and has shown progressive improvement in the use of aspirin, beta-blockers, angiotensin-converting enzyme inhibitors, and other medications, culminating in 95% to 100% use of these medications in 2003-2004, when we operated in accordance with the Get With The Guidelines program. A door-to-balloon time of less than 90 minutes became the next phase of our quality improvement process, and we have achieved 100% compliance in the last 2 years.

  7. Survey of light sources for image display systems to achieve brightness with efficient energy

    NASA Astrophysics Data System (ADS)

    Cheng, Dah Yu; Chen, Li-Min

    1995-04-01

    This paper reviews the currently available light sources and introduces a new, patented compound orthogonal parabolic reflector to be integrated with the light source, which focuses a relatively large light source into a very small point. The reflector creates a nearly ideal intense point source for all next-generation image display systems. The proposed system is not limited by the radiation source, whether a short-arc lamp or a long tungsten-filament lamp. Our technologies take the finite size of radiation sources into account to address the problem common to all reflector-lamp systems, namely intensity and uniformity (the dark hole). Successful examples show how to make the efficient intense light source match the requirements of LCD and DMD display systems. A method for reducing U.V. and I.R. radiation is also demonstrated.

  8. Recent Developments in Hyperspectral Imaging for Assessment of Food Quality and Safety

    PubMed Central

    Huang, Hui; Liu, Li; Ngadi, Michael O.

    2014-01-01

    Hyperspectral imaging which combines imaging and spectroscopic technology is rapidly gaining ground as a non-destructive, real-time detection tool for food quality and safety assessment. Hyperspectral imaging could be used to simultaneously obtain large amounts of spatial and spectral information on the objects being studied. This paper provides a comprehensive review on the recent development of hyperspectral imaging applications in food and food products. The potential and future work of hyperspectral imaging for food quality and safety control is also discussed. PMID:24759119

  9. Multimodal Imaging and Lighting Bias Correction for Improved μPAD-based Water Quality Monitoring via Smartphones

    PubMed Central

    McCracken, Katherine E.; Angus, Scott V.; Reynolds, Kelly A.; Yoon, Jeong-Yeol

    2016-01-01

    Smartphone image-based sensing of microfluidic paper analytical devices (μPADs) offers low-cost and mobile evaluation of water quality. However, consistent quantification is a challenge due to variable environmental, paper, and lighting conditions, especially across large multi-target μPADs. Compensations must be made for variations between images to achieve reproducible results without a separate lighting enclosure. We thus developed a simple method using triple-reference point normalization and a fast-Fourier transform (FFT)-based pre-processing scheme to quantify consistent reflected light intensity signals under variable lighting and channel conditions. This technique was evaluated using various light sources, lighting angles, imaging backgrounds, and imaging heights. Further testing evaluated its handling of absorbance, quenching, and relative scattering intensity measurements from assays detecting four water contaminants – Cr(VI), total chlorine, caffeine, and E. coli K12 – at similar wavelengths using the green channel of RGB images. Between assays, this algorithm reduced error from μPAD surface inconsistencies and cross-image lighting gradients. Although the algorithm could not completely remove the anomalies arising from point shadows within channels or some non-uniform background reflections, it still afforded order-of-magnitude quantification and stable assay specificity under these conditions, offering one route toward improving smartphone quantification of μPAD assays for in-field water quality monitoring. PMID:27283336
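
    A minimal sketch in the spirit of the pre-processing described above, assuming NumPy: a Fourier-domain low-pass estimate of the illumination field is divided out of the green channel, and an assay spot is normalized against reference pads. The cutoff frequency and the two-reference simplification are assumptions; the paper's triple-reference scheme and calibration details are not reproduced.

        import numpy as np

        def remove_lighting_gradient(channel, cutoff=0.01):
            """Estimate a slowly varying illumination field with a Fourier-domain
            low-pass filter and divide it out (flat-field style correction).
            The cutoff, in cycles per pixel, is an assumed value, not the paper's."""
            spectrum = np.fft.fft2(channel)
            fy = np.fft.fftfreq(channel.shape[0])[:, None]
            fx = np.fft.fftfreq(channel.shape[1])[None, :]
            lowpass = (np.sqrt(fx ** 2 + fy ** 2) < cutoff).astype(float)
            illumination = np.real(np.fft.ifft2(spectrum * lowpass))
            illumination = np.clip(illumination, 1e-6, None)
            corrected = channel / illumination
            return corrected / corrected.mean()

        def normalize_to_references(spot, white_ref, dark_ref):
            """Reference-point normalization of one assay spot's mean green intensity
            against white and dark reference pads on the same μPAD (a two-reference
            simplification of the paper's triple-reference scheme)."""
            return (spot - dark_ref) / (white_ref - dark_ref + 1e-9)

        # Toy usage: a green channel with a synthetic left-to-right lighting gradient.
        rng = np.random.default_rng(1)
        green = rng.random((240, 320)) * 0.1 + np.linspace(0.4, 0.9, 320)[None, :]
        flat = remove_lighting_gradient(green)
        print(round(float(green.std()), 3), round(float(flat.std()), 3))  # spread shrinks after correction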

  10. Multimodal Imaging and Lighting Bias Correction for Improved μPAD-based Water Quality Monitoring via Smartphones

    NASA Astrophysics Data System (ADS)

    McCracken, Katherine E.; Angus, Scott V.; Reynolds, Kelly A.; Yoon, Jeong-Yeol

    2016-06-01

    Smartphone image-based sensing of microfluidic paper analytical devices (μPADs) offers low-cost and mobile evaluation of water quality. However, consistent quantification is a challenge due to variable environmental, paper, and lighting conditions, especially across large multi-target μPADs. Compensations must be made for variations between images to achieve reproducible results without a separate lighting enclosure. We thus developed a simple method using triple-reference point normalization and a fast-Fourier transform (FFT)-based pre-processing scheme to quantify consistent reflected light intensity signals under variable lighting and channel conditions. This technique was evaluated using various light sources, lighting angles, imaging backgrounds, and imaging heights. Further testing evaluated its handling of absorbance, quenching, and relative scattering intensity measurements from assays detecting four water contaminants – Cr(VI), total chlorine, caffeine, and E. coli K12 – at similar wavelengths using the green channel of RGB images. Between assays, this algorithm reduced error from μPAD surface inconsistencies and cross-image lighting gradients. Although the algorithm could not completely remove the anomalies arising from point shadows within channels or some non-uniform background reflections, it still afforded order-of-magnitude quantification and stable assay specificity under these conditions, offering one route toward improving smartphone quantification of μPAD assays for in-field water quality monitoring.

  11. Ensuring safe and quality medication use in nuclear medicine: a collaborative team achieves compliance with medication management standards.

    PubMed

    Beach, Trent A; Griffith, Karen; Dam, Hung Q; Manzone, Timothy A

    2012-03-01

    As hospital nuclear medicine departments were established in the 1960s and 1970s, each department developed detailed policies and procedures to meet the specialized and specific handling requirements of radiopharmaceuticals. In many health systems, radiopharmaceuticals are still unique as the only drugs not under the control of the health system pharmacy; however, the clear trend--and now an accreditation requirement--is to merge radiopharmaceutical management with the overall health system medication management system. Accomplishing this can be a challenge for both nuclear medicine and pharmacy because each lacks knowledge of the specifics and needs of the other field. In this paper we will first describe medication management standards, what they cover, and how they are enforced. We will describe how we created a nuclear medicine and pharmacy team to achieve compliance, and we will present the results of their work. We will examine several specific issues raised by incorporating radiopharmaceuticals in the medication management process and describe how our team addressed those issues. Finally, we will look at how the medication management process helps ensure ongoing quality and safety to patients through multiple periodic reviews. The reader will gain an understanding of medication management standards and how they apply to nuclear medicine, learn how a nuclear medicine and pharmacy team can effectively merge nuclear medicine and pharmacy processes, and gain the ability to achieve compliance at the reader's own institution.

  12. Combining hard and soft magnetism into a single core-shell nanoparticle to achieve both hyperthermia and image contrast

    PubMed Central

    Yang, Qiuhong; Gong, Maogang; Cai, Shuang; Zhang, Ti; Douglas, Justin T; Chikan, Viktor; Davies, Neal M; Lee, Phil; Choi, In-Young; Ren, Shenqiang; Forrest, M Laird

    2015-01-01

    Background: Biocompatible core/shell structured magnetic nanoparticles (MNPs) were developed to mediate simultaneous cancer therapy and imaging. Methods & results: A 22-nm MNP was first synthesized by magnetically coupling hard (FePt) and soft (Fe3O4) materials to produce high relative energy transfer. Colloidal stability of the FePt@Fe3O4 MNPs was achieved through surface modification with silane-polyethylene glycol (PEG). Intravenous administration of PEG-MNPs into tumor-bearing mice resulted in sustained particle accumulation in the tumor region, and 2 weeks after a local hyperthermia treatment the tumor burden of treated mice was a third that of mice in the control groups. In vivo magnetic resonance imaging exhibited enhanced T2 contrast in the tumor region. Conclusion: This work has demonstrated the feasibility of cancer theranostics with PEG-MNPs. PMID:26606855

  13. Quantitative measurement of holographic image quality using Adobe Photoshop

    NASA Astrophysics Data System (ADS)

    Wesly, E.

    2013-02-01

    Measurement of the characteristics of image holograms with regard to diffraction efficiency and signal-to-noise ratio is demonstrated, using readily available digital cameras and image-editing software. Illustrations and case studies, using currently available holographic recording materials, are presented.
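
    The same intensity ratios can be formed from camera frames with a few lines of array code rather than an image editor. The sketch below, with synthetic pixel values and placeholder region-of-interest coordinates, estimates a signal-to-noise ratio and a diffraction-efficiency proxy from region means; it is not the author's Photoshop workflow.

        import numpy as np

        def roi_mean(img, r0, r1, c0, c1):
            """Mean pixel value in a rectangular region of interest."""
            return float(img[r0:r1, c0:c1].mean())

        # Synthetic camera frame of a reconstructed hologram; ROI coordinates below are
        # placeholders for regions that would be picked by eye in an image editor.
        rng = np.random.default_rng(2)
        frame = rng.normal(20.0, 3.0, (480, 640))      # background / noise floor
        frame[200:280, 300:420] += 120.0               # diffracted image area
        frame[0:60, 0:60] += 200.0                     # undiffracted reference patch

        background = roi_mean(frame, 400, 460, 50, 200)
        noise_sigma = float(frame[400:460, 50:200].std())
        signal = roi_mean(frame, 200, 280, 300, 420)
        reference = roi_mean(frame, 0, 60, 0, 60)

        snr = (signal - background) / noise_sigma                      # signal-to-noise ratio
        efficiency = (signal - background) / (reference - background)  # diffraction-efficiency proxy
        print(round(snr, 1), round(efficiency, 2))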

  14. Non-reference quality assessment of infrared images reconstructed by compressive sensing

    NASA Astrophysics Data System (ADS)

    Ospina-Borras, J. E.; Benitez-Restrepo, H. D.

    2015-01-01

    Infrared (IR) images are representations of the world and have natural features like images in the visible spectrum. As such, natural features from infrared images support image quality assessment (IQA) [1]. In this work, we compare the quality of a set of indoor and outdoor IR images reconstructed from measurement functions formed by linear combinations of their pixels. The reconstruction methods are: linear discrete cosine transform (DCT) acquisition, DCT augmented with total variation minimization, and a compressive sensing (CS) scheme. Peak signal-to-noise ratio (PSNR), three full-reference (FR), and four no-reference (NR) IQA measures compute the quality of each reconstruction: multi-scale structural similarity (MSSIM), visual information fidelity (VIF), information fidelity criterion (IFC), sharpness identification based on local phase coherence (LPC-SI), blind/referenceless image spatial quality evaluator (BRISQUE), naturalness image quality evaluator (NIQE), and gradient singular value decomposition (GSVD), respectively. Each measure is compared with human scores obtained by a differential mean opinion score (DMOS) test. We observe that GSVD has the highest correlation coefficients of all NR measures, but all FR measures perform better. We use MSSIM to compare the reconstruction methods and find that the CS scheme produces a good-quality IR image using only 30000 random sub-samples and 1000 DCT coefficients (2%). In contrast, linear DCT provides higher correlation coefficients than the CS scheme by using all the pixels of the image and 31000 DCT coefficients (47%).
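
    Two of the full-reference measures used above have widely available implementations. The sketch below, assuming scikit-image is installed, computes PSNR and single-scale SSIM (as a stand-in for MSSIM) on a synthetic reconstruction; the no-reference measures (BRISQUE, NIQE, GSVD) and the DMOS comparison are not reproduced.

        import numpy as np
        from skimage.metrics import peak_signal_noise_ratio, structural_similarity

        # Toy stand-in for an IR frame and its reconstruction (values in [0, 1]).
        rng = np.random.default_rng(3)
        reference = rng.random((256, 256))
        reconstruction = np.clip(reference + rng.normal(0, 0.05, reference.shape), 0, 1)

        psnr = peak_signal_noise_ratio(reference, reconstruction, data_range=1.0)
        ssim = structural_similarity(reference, reconstruction, data_range=1.0)
        print(f"PSNR = {psnr:.1f} dB, SSIM = {ssim:.3f}")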

  15. An Image Processing Technique for Achieving Lossy Compression of Data at Ratios in Excess of 100:1

    DTIC Science & Technology

    1992-11-01

  16. Image Quality Analysis of Eyes Undergoing LASER Refractive Surgery

    PubMed Central

    Sarkar, Samrat; Vaddavalli, Pravin Krishna; Bharadwaj, Shrikant R.

    2016-01-01

    Laser refractive surgery for myopia increases the eye’s higher-order wavefront aberrations (HOA’s). However, little is known about the impact of such optical degradation on post-operative image quality (IQ) of these eyes. This study determined the relation between HOA’s and IQ parameters (peak IQ, dioptric focus that maximized IQ and depth of focus) derived from psychophysical (logMAR acuity) and computational (logVSOTF) through-focus curves in 45 subjects (18 to 31yrs) before and 1-month after refractive surgery and in 40 age-matched emmetropic controls. Computationally derived peak IQ and its best focus were negatively correlated with the RMS deviation of all HOA’s (HORMS) (r≥-0.5; p<0.001 for all). Computational depth of focus was positively correlated with HORMS (r≥0.55; p<0.001 for all) and negatively correlated with peak IQ (r≥-0.8; p<0.001 for all). All IQ parameters related to logMAR acuity were poorly correlated with HORMS (r≤|0.16|; p>0.16 for all). Increase in HOA’s after refractive surgery is therefore associated with a decline in peak IQ and a persistence of this sub-standard IQ over a larger dioptric range, vis-à-vis, before surgery and in age-matched controls. This optical deterioration however does not appear to significantly alter psychophysical IQ, suggesting minimal impact of refractive surgery on the subject’s ability to resolve spatial details and their tolerance to blur. PMID:26859302

  17. Fast super-resolution imaging with ultra-high labeling density achieved by joint tagging super-resolution optical fluctuation imaging.

    PubMed

    Zeng, Zhiping; Chen, Xuanze; Wang, Hening; Huang, Ning; Shan, Chunyan; Zhang, Hao; Teng, Junlin; Xi, Peng

    2015-02-10

    Previous stochastic localization-based super-resolution techniques are largely limited by the labeling density and by the fidelity to the morphology of the specimen. We report an optical super-resolution imaging scheme implementing joint tagging with multiple fluorescent blinking dyes in association with super-resolution optical fluctuation imaging (JT-SOFI), achieving super-resolution imaging at ultra-high labeling density. To demonstrate the feasibility of JT-SOFI, quantum dots with different emission spectra were jointly labeled to the tubulin in COS7 cells, creating ultra-high-density labeling. After analyzing and combining the fluorescence intermittency images emanating from the spectrally resolved quantum dots, the microtubule networks can be investigated with high fidelity and remarkably enhanced contrast at sub-diffraction resolution. The spectral separation also significantly decreased the number of frames required for SOFI, enabling fast super-resolution microscopy through simultaneous data acquisition. As the joint-tagging scheme decreases the labeling density in each spectral channel, thereby bringing it closer to the single-molecule state, we can faithfully reconstruct the continuous microtubule structure with high resolution from a collection of only 100 frames per channel. The improved continuity of the microtubule structure is quantitatively validated with image skeletonization, demonstrating the advantage of JT-SOFI over other localization-based super-resolution methods.
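
    The fluctuation analysis underlying SOFI can be sketched briefly: the second-order SOFI image is the per-pixel second cumulant (variance) of the temporal fluctuations, and joint tagging combines the cumulant images of spectrally separated channels. The combination rule, stack sizes, and blinking statistics below are toy assumptions; the paper's full JT-SOFI pipeline is not reproduced.

        import numpy as np

        def sofi2(stack):
            """Second-order SOFI image: per-pixel variance (second cumulant) of the
            temporal fluctuations in a (frames, rows, cols) image stack."""
            fluctuation = stack - stack.mean(axis=0, keepdims=True)
            return (fluctuation ** 2).mean(axis=0)

        def jt_sofi(channel_stacks):
            """Toy joint-tagging combination: sum the cumulant images computed
            independently for each spectrally resolved channel."""
            return np.sum([sofi2(stack) for stack in channel_stacks], axis=0)

        # Toy usage: two spectral channels of 100 blinking frames each (Poisson noise
        # stands in for fluorescence intermittency; real data would come from the camera).
        rng = np.random.default_rng(4)
        channels = [rng.poisson(5.0, (100, 64, 64)).astype(float) for _ in range(2)]
        super_res = jt_sofi(channels)
        print(super_res.shape)  # (64, 64)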

  18. The Quality Assurance in Diagnostic Radiology and their Effect in the Quality Image and Radiological Protection of the Patient

    NASA Astrophysics Data System (ADS)

    Gaona, Enrique

    2002-08-01

    Quality assurance in diagnostic radiology in Mexico before 1997 was virtually nonexistent except in a few academic institutions and hospitals. The purpose of this study was to carry out an exploratory survey of quality control parameters of general and fluoroscopy x-ray systems in the Mexican Republic and their effects on image quality and the radiological protection of the patient. A general result of the survey is that there is no significant difference in the observed frequencies between public and private radiology departments at α = 0.05, so the results are valid for both types of department. 37% of the x-ray systems belong to public radiology departments. Some radiology departments did not comply with the Mexican regulations on matching the light field to the x-ray field, light field intensity, kV, exposure time, and output. In those cases, we found a repeat rate of radiographic studies above 30%, with unnecessary dose to the patient, low image quality, and high operating costs for the radiology service. Among x-ray fluoroscopy systems, 62% had low image quality due to electronic noise in the television chain. Overall, 35% of the x-ray systems did not comply with Mexican regulations, which can affect, in one way or another, image quality and the dose to the patient.

  19. No-Reference Image Quality Assessment for ZY3 Imagery in Urban Areas Using Statistical Model

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Cui, W. H.; Yang, F.; Wu, Z. C.

    20