Sample records for kekkannai sanjigen imaging (intravascular three-dimensional imaging)

  1. Twin imaging phenomenon of integral imaging.

    PubMed

    Hu, Juanmei; Lou, Yimin; Wu, Fengmin; Chen, Aixi

    2018-05-14

    The imaging principles and phenomena of the integral imaging technique have been studied in detail using geometrical optics, wave optics, or light field theory. However, most of the conclusions apply only to integral imaging systems that use diffused illumination. In this work, a twin imaging phenomenon and its mechanism have been observed in a non-diffused-illumination reflective integral imaging system. Interactive twin images, comprising a real and a virtual 3D image of one object, can be activated in the system. The imaging phenomenon is similar to the conjugate imaging effect of holography, but it is based on refraction and reflection instead of diffraction. The imaging characteristics and mechanisms, which differ from those of traditional integral imaging, are deduced analytically. Thin-film integral imaging systems with 80 μm thickness have also been fabricated to verify the imaging phenomenon. Vivid, lighting-interactive twin 3D images have been realized using a light-emitting diode (LED) light source. When the LED moves, the twin 3D images move synchronously. This interesting phenomenon shows good application prospects in interactive 3D display, augmented reality, and security authentication.

  2. NIH Image to ImageJ: 25 years of Image Analysis

    PubMed Central

    Schneider, Caroline A.; Rasband, Wayne S.; Eliceiri, Kevin W.

    2017-01-01

    For the past twenty-five years, the NIH family of imaging software, NIH Image and ImageJ, has pioneered open tools for scientific image analysis. We discuss the origins, challenges and solutions of these two programs, and how their history can serve to advise and inform other software projects. PMID:22930834

  3. Retinal imaging and image analysis.

    PubMed

    Abràmoff, Michael D; Garvin, Mona K; Sonka, Milan

    2010-01-01

    Many important eye diseases as well as systemic diseases manifest themselves in the retina. While a number of other anatomical structures contribute to the process of vision, this review focuses on retinal imaging and image analysis. Following a brief overview of the most prevalent causes of blindness in the industrialized world that includes age-related macular degeneration, diabetic retinopathy, and glaucoma, the review is devoted to retinal imaging and image analysis methods and their clinical implications. Methods for 2-D fundus imaging and techniques for 3-D optical coherence tomography (OCT) imaging are reviewed. Special attention is given to quantitative techniques for analysis of fundus photographs with a focus on clinically relevant assessment of retinal vasculature, identification of retinal lesions, assessment of optic nerve head (ONH) shape, building retinal atlases, and to automated methods for population screening for retinal diseases. A separate section is devoted to 3-D analysis of OCT images, describing methods for segmentation and analysis of retinal layers, retinal vasculature, and 2-D/3-D detection of symptomatic exudate-associated derangements, as well as to OCT-based analysis of ONH morphology and shape. Throughout the paper, aspects of image acquisition, image analysis, and clinical relevance are treated together considering their mutually interlinked relationships.

  4. Retinal Imaging and Image Analysis

    PubMed Central

    Abràmoff, Michael D.; Garvin, Mona K.; Sonka, Milan

    2011-01-01

    Many important eye diseases as well as systemic diseases manifest themselves in the retina. While a number of other anatomical structures contribute to the process of vision, this review focuses on retinal imaging and image analysis. Following a brief overview of the most prevalent causes of blindness in the industrialized world that includes age-related macular degeneration, diabetic retinopathy, and glaucoma, the review is devoted to retinal imaging and image analysis methods and their clinical implications. Methods for 2-D fundus imaging and techniques for 3-D optical coherence tomography (OCT) imaging are reviewed. Special attention is given to quantitative techniques for analysis of fundus photographs with a focus on clinically relevant assessment of retinal vasculature, identification of retinal lesions, assessment of optic nerve head (ONH) shape, building retinal atlases, and to automated methods for population screening for retinal diseases. A separate section is devoted to 3-D analysis of OCT images, describing methods for segmentation and analysis of retinal layers, retinal vasculature, and 2-D/3-D detection of symptomatic exudate-associated derangements, as well as to OCT-based analysis of ONH morphology and shape. Throughout the paper, aspects of image acquisition, image analysis, and clinical relevance are treated together considering their mutually interlinked relationships. PMID:22275207

  5. ImageX: new and improved image explorer for astronomical images and beyond

    NASA Astrophysics Data System (ADS)

    Hayashi, Soichi; Gopu, Arvind; Kotulla, Ralf; Young, Michael D.

    2016-08-01

    The One Degree Imager - Portal, Pipeline, and Archive (ODI-PPA) has included the Image Explorer interactive image visualization tool since it went operational. Portal users were able to quickly open up several ODI images within any HTML5 capable web browser, adjust the scaling, apply color maps, and perform other basic image visualization steps typically done on a desktop client like DS9. However, the original design of the Image Explorer required lossless PNG tiles to be generated and stored for all raw and reduced ODI images, thereby taking up tens of TB of spinning disk space even though a small fraction of those images were being accessed by portal users at any given time. It also caused significant overhead on the portal web application and the Apache webserver used by ODI-PPA. We found it hard to merge in improvements made to a similar deployment in another project's portal. To address these concerns, we re-architected Image Explorer from scratch and came up with ImageX, a set of microservices that are part of the IU Trident project software suite, with rapid interactive visualization capabilities useful for ODI data and beyond. We generate a full resolution JPEG image for each raw and reduced ODI FITS image before producing a JPEG tileset, one that can be rendered using the ImageX frontend code at various locations as appropriate within a web portal (for example: on tabular image listings, views allowing quick perusal of a set of thumbnails or other image sifting activities). The new design has decreased spinning disk requirements; it uses AngularJS for the client-side Model/View code (instead of the backend PHP Model/View/Controller code previously used), OpenSeaDragon to render the tile images, and nginx with a lightweight NodeJS application to serve tile images, thereby decreasing the Time To First Byte latency by a few orders of magnitude. We plan to extend ImageX for non-FITS images including electron microscopy and radiology scans.

  6. Imaging system design and image interpolation based on CMOS image sensor

    NASA Astrophysics Data System (ADS)

    Li, Yu-feng; Liang, Fei; Guo, Rui

    2009-11-01

    An image acquisition system is introduced, which consists of a color CMOS image sensor (OV9620), SRAM (CY62148), CPLD (EPM7128AE) and DSP (TMS320VC5509A). The CPLD implements the logic and timing control of the system, the SRAM stores the image data, and the DSP controls the image acquisition system through the SCCB (OmniVision Serial Camera Control Bus). The timing sequence of the CMOS image sensor OV9620 is analyzed. The imaging part and the high-speed image data memory unit are designed. The hardware and software design of the image acquisition and processing system is given. CMOS digital cameras use color filter arrays to sample different spectral components, such as red, green, and blue. At the location of each pixel only one color sample is taken, and the other colors must be interpolated from neighboring samples. We use an edge-oriented adaptive interpolation algorithm for the edge pixels and a bilinear interpolation algorithm for the non-edge pixels to improve the visual quality of the interpolated images. This method achieves high processing speed, reduces computational complexity, and effectively preserves image edges.
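
    The edge-oriented/bilinear interpolation scheme described above is a standard demosaicing idea. The following NumPy sketch illustrates it for the green channel only, assuming an RGGB Bayer pattern; the gradient test and the threshold value are illustrative choices, not the authors' exact algorithm.

      import numpy as np

      def interpolate_green(bayer, threshold=10.0):
          """Edge-directed interpolation of the green channel of an RGGB Bayer mosaic.

          At each red/blue site, interpolate green along the direction (horizontal
          or vertical) with the smaller gradient; fall back to a bilinear
          4-neighbour average in smooth regions where the gradients are similar.
          """
          h, w = bayer.shape
          green = np.zeros_like(bayer, dtype=float)
          # Copy the known green samples (RGGB: green at (even, odd) and (odd, even)).
          green[0::2, 1::2] = bayer[0::2, 1::2]
          green[1::2, 0::2] = bayer[1::2, 0::2]
          # Interpolate at red/blue sites (a 1-pixel border is skipped for brevity).
          for y in range(1, h - 1):
              for x in range(1, w - 1):
                  if (y % 2) != (x % 2):
                      continue  # already a green sample
                  left, right = float(bayer[y, x - 1]), float(bayer[y, x + 1])
                  up, down = float(bayer[y - 1, x]), float(bayer[y + 1, x])
                  dh, dv = abs(left - right), abs(up - down)
                  if dh + threshold < dv:      # horizontal edge: interpolate along it
                      green[y, x] = (left + right) / 2
                  elif dv + threshold < dh:    # vertical edge
                      green[y, x] = (up + down) / 2
                  else:                        # smooth region: bilinear average
                      green[y, x] = (left + right + up + down) / 4
          return green

      # Example on a synthetic 8-bit Bayer frame.
      mosaic = (np.random.rand(64, 96) * 255).astype(np.uint8)
      g = interpolate_green(mosaic)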

  7. Image registration via optimization over disjoint image regions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pitts, Todd; Hathaway, Simon; Karelitz, David B.

    Technologies pertaining to registering a target image with a base image are described. In a general embodiment, the base image is selected from a set of images, and the target image is an image in the set of images that is to be registered to the base image. A set of disjoint regions of the target image is selected, and a transform to be applied to the target image is computed based on the optimization of a metric over the selected set of disjoint regions. The transform is applied to the target image so as to register the target image with the base image.
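
    This patent-style abstract leaves the metric and transform unspecified. As a toy illustration of optimizing a similarity metric over a set of disjoint regions, the sketch below assumes a pure 2-D translation and a sum-of-squared-differences metric; both choices are assumptions for the example, not details from the record.

      import numpy as np
      from scipy import ndimage, optimize

      def register_over_regions(base, target, regions):
          """Estimate a 2-D translation aligning `target` to `base` by minimizing
          the sum of squared differences evaluated only inside disjoint
          rectangular regions, each given as (row, col, height, width)."""
          def ssd(shift):
              warped = ndimage.shift(target, shift, order=1, mode="nearest")
              err = 0.0
              for r, c, h, w in regions:
                  diff = warped[r:r + h, c:c + w] - base[r:r + h, c:c + w]
                  err += float(np.sum(diff * diff))
              return err
          result = optimize.minimize(ssd, x0=[0.0, 0.0], method="Powell")
          return result.x  # (row shift, column shift) to apply to the target

      # Toy usage: recover a known shift; the estimate should be close to (-3, 2).
      base = np.random.rand(128, 128)
      target = ndimage.shift(base, (3.0, -2.0), order=1, mode="nearest")
      print(register_over_regions(base, target, [(10, 10, 30, 30), (80, 70, 30, 30)]))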

  8. Ultrasonic image analysis and image-guided interventions.

    PubMed

    Noble, J Alison; Navab, Nassir; Becher, H

    2011-08-06

    The fields of medical image analysis and computer-aided interventions deal with reducing the large volume of digital images (X-ray, computed tomography, magnetic resonance imaging (MRI), positron emission tomography and ultrasound (US)) to more meaningful clinical information using software algorithms. US is a core imaging modality employed in these areas, both in its own right and used in conjunction with the other imaging modalities. It is receiving increased interest owing to the recent introduction of three-dimensional US, significant improvements in US image quality, and better understanding of how to design algorithms which exploit the unique strengths and properties of this real-time imaging modality. This article reviews the current state of the art in US image analysis and its application in image-guided interventions. The article concludes by giving a perspective from clinical cardiology, which is one of the most advanced areas of clinical application of US image analysis, and by describing some probable future trends in this important area of ultrasonic imaging research.

  9. Imaging windows for long-term intravital imaging

    PubMed Central

    Alieva, Maria; Ritsma, Laila; Giedt, Randy J; Weissleder, Ralph; van Rheenen, Jacco

    2014-01-01

    Intravital microscopy is increasingly used to visualize and quantitate dynamic biological processes at the (sub)cellular level in live animals. By visualizing tissues through imaging windows, individual cells (e.g., cancer, host, or stem cells) can be tracked and studied over a time-span of days to months. Several imaging windows have been developed to access tissues including the brain, superficial fascia, mammary glands, liver, kidney, pancreas, and small intestine among others. Here, we review the development of imaging windows and compare the most commonly used long-term imaging windows for cancer biology: the cranial imaging window, the dorsal skin fold chamber, the mammary imaging window, and the abdominal imaging window. Moreover, we provide technical details, considerations, and trouble-shooting tips on the surgical procedures and microscopy setups for each imaging window and explain different strategies to assure imaging of the same area over multiple imaging sessions. This review aims to be a useful resource for establishing the long-term intravital imaging procedure. PMID:28243510

  10. Improved image alignment method in application to X-ray images and biological images.

    PubMed

    Wang, Ching-Wei; Chen, Hsiang-Chou

    2013-08-01

    Alignment of medical images is a vital component of a large number of applications throughout the clinical track of events; not only within clinical diagnostic settings, but prominently so in the area of planning, consummation and evaluation of surgical and radiotherapeutical procedures. However, registration of medical images is challenging because of variations in data appearance, imaging artifacts and complex data deformation problems. Hence, the aim of this study is to develop a robust image alignment method for medical images. An improved image registration method is proposed and evaluated on two types of medical data, biological microscopic tissue images and dental X-ray images, and compared with five state-of-the-art image registration techniques. The experimental results show that the presented method consistently performs well on both types of medical images, achieving 88.44% and 88.93% averaged registration accuracies for biological tissue images and X-ray images, respectively, and outperforms the benchmark methods. Based on Tukey's honestly significant difference test and Fisher's least significant difference test, the presented method performs significantly better than all existing methods (P ≤ 0.001) for tissue image alignment, and for the X-ray image registration, the proposed method performs significantly better than the two benchmark b-spline approaches (P < 0.001). The software implementation of the presented method and the data used in this study are made publicly available for scientific communities to use (http://www-o.ntust.edu.tw/∼cweiwang/ImprovedImageRegistration/). cweiwang@mail.ntust.edu.tw.

  11. scikit-image: image processing in Python.

    PubMed

    van der Walt, Stéfan; Schönberger, Johannes L; Nunez-Iglesias, Juan; Boulogne, François; Warner, Joshua D; Yager, Neil; Gouillart, Emmanuelle; Yu, Tony

    2014-01-01

    scikit-image is an image processing library that implements algorithms and utilities for use in research, education and industry applications. It is released under the liberal Modified BSD open source license, provides a well-documented API in the Python programming language, and is developed by an active, international team of collaborators. In this paper we highlight the advantages of open source to achieve the goals of the scikit-image library, and we showcase several real-world image processing applications that use scikit-image. More information can be found on the project homepage, http://scikit-image.org.
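
    As a brief illustration of the kind of API the abstract describes, the snippet below loads one of scikit-image's bundled sample images, smooths it, and labels the bright regions; the individual functions are standard scikit-image calls, but the pipeline itself is only an example, not taken from the paper.

      # Requires scikit-image (pip install scikit-image).
      from skimage import data, filters, measure

      image = data.coins()                          # bundled grayscale sample image
      smoothed = filters.gaussian(image, sigma=2)   # denoise before thresholding
      threshold = filters.threshold_otsu(smoothed)  # automatic global threshold
      labels = measure.label(smoothed > threshold)  # connected-component labeling
      print(f"Detected {labels.max()} bright regions")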

  12. scikit-image: image processing in Python

    PubMed Central

    Schönberger, Johannes L.; Nunez-Iglesias, Juan; Boulogne, François; Warner, Joshua D.; Yager, Neil; Gouillart, Emmanuelle; Yu, Tony

    2014-01-01

    scikit-image is an image processing library that implements algorithms and utilities for use in research, education and industry applications. It is released under the liberal Modified BSD open source license, provides a well-documented API in the Python programming language, and is developed by an active, international team of collaborators. In this paper we highlight the advantages of open source to achieve the goals of the scikit-image library, and we showcase several real-world image processing applications that use scikit-image. More information can be found on the project homepage, http://scikit-image.org. PMID:25024921

  13. Combining image-processing and image compression schemes

    NASA Technical Reports Server (NTRS)

    Greenspan, H.; Lee, M.-C.

    1995-01-01

    An investigation into the combining of image-processing schemes, specifically an image enhancement scheme, with existing compression schemes is discussed. Results are presented on the pyramid coding scheme, the subband coding scheme, and progressive transmission. Encouraging results are demonstrated for the combination of image enhancement and pyramid image coding schemes, especially at low bit rates. Adding the enhancement scheme to progressive image transmission allows enhanced visual perception at low resolutions. In addition, further processing of the transmitted images, such as edge detection, can benefit from the added image resolution provided by the enhancement.

  14. Basic concepts of MR imaging, diffusion MR imaging, and diffusion tensor imaging.

    PubMed

    de Figueiredo, Eduardo H M S G; Borgonovi, Arthur F N G; Doring, Thomas M

    2011-02-01

    MR image contrast is based on intrinsic tissue properties and on specific pulse sequences and parameter adjustments. A growing number of MR imaging applications are based on the diffusion properties of water. To better understand diffusion-weighted MR imaging, a brief overview of MR physics is presented in this article, followed by the physics of the evolving techniques of diffusion MR imaging and diffusion tensor imaging. Copyright © 2011. Published by Elsevier Inc.

  15. Image processing and recognition for biological images

    PubMed Central

    Uchida, Seiichi

    2013-01-01

    This paper reviews image processing and pattern recognition techniques that are useful for analyzing bioimages. Although the paper does not provide their technical details, it allows the reader to grasp their main tasks and the typical tools used to handle them. Image processing is a large research area concerned with improving the visibility of an input image and acquiring valuable information from it. As the main tasks of image processing, this paper introduces gray-level transformation, binarization, image filtering, image segmentation, visual object tracking, optical flow and image registration. Image pattern recognition is the technique of classifying an input image into one of a set of predefined classes, and it is also a large research area. This paper overviews its two main modules, namely the feature extraction module and the classification module. Throughout the paper, it is emphasized that bioimages are very difficult targets even for state-of-the-art image processing and pattern recognition techniques, due to noise, deformations, etc. This paper is intended as a tutorial guide to bridge biology and image processing researchers for further collaboration in tackling such a difficult target. PMID:23560739

  16. Image processing and recognition for biological images.

    PubMed

    Uchida, Seiichi

    2013-05-01

    This paper reviews image processing and pattern recognition techniques that are useful for analyzing bioimages. Although the paper does not provide their technical details, it allows the reader to grasp their main tasks and the typical tools used to handle them. Image processing is a large research area concerned with improving the visibility of an input image and acquiring valuable information from it. As the main tasks of image processing, this paper introduces gray-level transformation, binarization, image filtering, image segmentation, visual object tracking, optical flow and image registration. Image pattern recognition is the technique of classifying an input image into one of a set of predefined classes, and it is also a large research area. This paper overviews its two main modules, namely the feature extraction module and the classification module. Throughout the paper, it is emphasized that bioimages are very difficult targets even for state-of-the-art image processing and pattern recognition techniques, due to noise, deformations, etc. This paper is intended as a tutorial guide to bridge biology and image processing researchers for further collaboration in tackling such a difficult target. © 2013 The Author Development, Growth & Differentiation © 2013 Japanese Society of Developmental Biologists.

  17. Images

    Science.gov Websites

    Photo gallery listing of downloadable "Arctic Edge 2018" images, sortable by upload date and photo date.

  18. Image Use Fees | Galaxy of Images

    Science.gov Websites

    This site has moved to the new Image Gallery site. Images are licensed for research and study purposes only; for current pricing, download the Image Use Fee Schedule, and see the Frequently Asked Questions (FAQ) list for additional information.

  19. Far Ultraviolet Imaging from the Image Spacecraft

    NASA Technical Reports Server (NTRS)

    Mende, S. B.; Heetderks, H.; Frey, H. U.; Lampton, M.; Geller, S. P.; Stock, J. M.; Abiad, R.; Siegmund, O. H. W.; Tremsin, A. S.; Habraken, S.

    2000-01-01

    Direct imaging of the magnetosphere by the IMAGE spacecraft will be supplemented by observation of the global aurora. The IMAGE satellite instrument complement includes three Far Ultraviolet (FUV) instruments. The Wideband Imaging Camera (WIC) will provide broadband ultraviolet images of the aurora for maximum spatial and temporal resolution by imaging the LBH N2 bands of the aurora. The Spectrographic Imager (SI), a novel form of monochromatic imager, will image the aurora, filtered by wavelength. The proton-induced component of the aurora will be imaged separately by measuring the Doppler-shifted Lyman-α emission. Finally, the GEO instrument will observe the distribution of the geocoronal emission to obtain the neutral background density source for charge exchange in the magnetosphere. The FUV instrument complement looks radially outward from the rotating IMAGE satellite and, therefore, it spends only a short time observing the aurora and the Earth during each spin. To maximize photon collection efficiency and to use the short exposure time efficiently, the FUV auroral imagers WIC and SI both have wide fields of view and take data continuously as the auroral region proceeds through the field of view. To minimize data volume, the multiple images are electronically co-added by suitably shifting each image to compensate for the spacecraft rotation. In order to minimize resolution loss, the images have to be distortion-corrected in real time. The distortion correction is accomplished using high-speed lookup tables that are pre-generated by least-squares fitting to polynomial functions by the on-orbit processor. The instruments were calibrated individually while on stationary platforms, mostly in vacuum chambers. Extensive ground-based testing was performed with visible and near-UV simulators mounted on a rotating platform to emulate their performance on a rotating spacecraft.

  20. Novel snapshot hyperspectral imager for fluorescence imaging

    NASA Astrophysics Data System (ADS)

    Chandler, Lynn; Chandler, Andrea; Periasamy, Ammasi

    2018-02-01

    Hyperspectral imaging has emerged as a new technique for the identification and classification of biological tissue. Benefitting from recent developments in sensor technology, the new class of hyperspectral imagers can capture entire hypercubes in a single shot, showing great potential for real-time imaging in the biomedical sciences. This paper explores the use of a Snapshot imager in fluorescence imaging via microscope for the first time. Utilizing the latest imaging sensor, the Snapshot imager is both compact and attachable via C-mount to any commercially available light microscope. Using this setup, fluorescence hypercubes of several cells were generated, containing both spatial and spectral information. The fluorescence images were acquired in a single shot over the entire emission range from visible to near infrared (VIS-IR). The paper presents hypercubes obtained from example tissues (475-630 nm). This study demonstrates the potential for real-time monitoring applications in cell biology and biomedicine.

  1. ImageJ: Image processing and analysis in Java

    NASA Astrophysics Data System (ADS)

    Rasband, W. S.

    2012-06-01

    ImageJ is a public domain Java image processing program inspired by NIH Image. It can display, edit, analyze, process, save and print 8-bit, 16-bit and 32-bit images. It can read many image formats including TIFF, GIF, JPEG, BMP, DICOM, FITS and "raw". It supports "stacks", a series of images that share a single window. It is multithreaded, so time-consuming operations such as image file reading can be performed in parallel with other operations.

  2. Image formation analysis and high resolution image reconstruction for plenoptic imaging systems.

    PubMed

    Shroff, Sapna A; Berkner, Kathrin

    2013-04-01

    Plenoptic imaging systems are often used for applications like refocusing, multimodal imaging, and multiview imaging. However, their resolution is limited to the number of lenslets. In this paper we investigate paraxial, incoherent, plenoptic image formation, and develop a method to recover some of the resolution for the case of a two-dimensional (2D) in-focus object. This enables the recovery of a conventional-resolution, 2D image from the data captured in a plenoptic system. We show simulation results for a plenoptic system with a known response and Gaussian sensor noise.

  3. Image quality assessment metric for frame accumulated image

    NASA Astrophysics Data System (ADS)

    Yu, Jianping; Li, Gang; Wang, Shaohui; Lin, Ling

    2018-01-01

    The medical image quality determines the accuracy of diagnosis, and gray-scale resolution is an important parameter of image quality. However, current objective metrics are not well suited to assessing medical images obtained by frame accumulation: they pay little attention to gray-scale resolution, are largely based on spatial resolution, and are limited to the 256 gray levels of existing display devices. This paper therefore proposes a metric, the "mean signal-to-noise ratio" (MSNR), based on signal-to-noise ratio, to evaluate frame-accumulated medical image quality more reasonably. We demonstrate its potential application through a series of images acquired under a constant illumination signal. Here, the mean of a sufficiently large number of images was regarded as the reference image. Several groups of images with different numbers of accumulated frames were formed, and their MSNR values were calculated. The results of the experiment show that, compared with other quality assessment methods, the metric is simpler, more effective, and more suitable for assessing frame-accumulated images whose gray scale and precision surpass those of the original image.
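
    The abstract does not give the exact definition of MSNR, so the sketch below uses a plausible stand-in (per-pixel mean divided by per-pixel standard deviation over repeated frames, averaged over the image) purely to illustrate how frame accumulation raises such a metric; the definition, the synthetic data, and the numbers are assumptions, not the paper's.

      import numpy as np

      def mean_snr(frames):
          """Illustrative 'mean SNR': per-pixel mean / per-pixel std over a stack
          of repeated frames, averaged over all pixels (not the paper's exact MSNR)."""
          stack = np.asarray(frames, dtype=float)
          mean = stack.mean(axis=0)          # reference: mean of many frames
          std = stack.std(axis=0) + 1e-12    # avoid division by zero
          return float(np.mean(mean / std))

      # Averaging k raw frames should raise the metric roughly by a factor sqrt(k).
      rng = np.random.default_rng(0)
      signal = rng.uniform(50, 200, size=(64, 64))
      raw = signal + rng.normal(0, 10, size=(160, 64, 64))
      for k in (1, 4, 16):
          accumulated = raw.reshape(-1, k, 64, 64).mean(axis=1)  # groups of k frames
          print(k, round(mean_snr(accumulated), 1))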

  4. Image based performance analysis of thermal imagers

    NASA Astrophysics Data System (ADS)

    Wegner, D.; Repasi, E.

    2016-05-01

    Due to advances in technology, modern thermal imagers resemble sophisticated image processing systems in functionality. Advanced signal and image processing tools enclosed in the camera body extend the basic image capturing capability of thermal cameras, in order to enhance the display presentation of the captured scene or of specific scene details. Usually, the implemented methods are proprietary company expertise, distributed without extensive documentation. This makes the comparison of thermal imagers, especially from different companies, a difficult task (or at least a very time-consuming and expensive one, e.g. requiring a field trial and/or an observer trial). For example, a thermal camera equipped with turbulence mitigation capability stands for such a closed system. The Fraunhofer IOSB has started to build up a system for testing thermal imagers by image-based methods in the lab environment. This will extend our capability of measuring the classical IR-system parameters (e.g. MTF, MTDP, etc.) in the lab. The system is set up around the IR scene projector, which is necessary for the thermal display (projection) of an image sequence for the IR camera under test. The same set of thermal test sequences can be presented to every unit under test; for turbulence mitigation tests, this could be the same turbulence sequence. During system tests, gradual variation of input parameters (e.g. thermal contrast) can be applied. First ideas on test scene selection and on how to assemble an imaging suite (a set of image sequences) for the analysis of thermal imaging systems containing such black boxes in the image-forming path are discussed.

  5. Image registration of naval IR images

    NASA Astrophysics Data System (ADS)

    Rodland, Arne J.

    1996-06-01

    In a real-world application, an image from a stabilized sensor on a moving platform will not be 100 percent stabilized. There will always be a small unknown error in the stabilization due to factors such as dynamic deformations in the structure between the sensor and the reference Inertial Navigation Unit, servo inaccuracies, etc. For a high-resolution imaging sensor this stabilization error causes the image to move several pixels in an unknown direction between frames. To be able to detect and track small moving objects from such a sensor, this unknown movement of the sensor image must be estimated. An algorithm that searches for land contours in the image has been evaluated. The algorithm searches for high-contrast points distributed over the whole image. As long as moving objects in the scene cover only a small area of the scene, most of the points are located on solid ground. By matching the list of points from frame to frame, the movement of the image due to stabilization errors can be estimated and compensated. The point list is searched for points whose movement diverges from the estimated stabilization error; these points are then assumed to be located on moving objects. Points assumed to be located on moving objects are gradually exchanged with new points located in the same area. Most of the processing is performed on the list of points and not on the complete image. The algorithm is therefore very fast and well suited for real-time implementation. The algorithm has been tested on images from an experimental IR scanner. Stabilization errors were added artificially to the images so that the output of the algorithm could be compared with the artificially added stabilization errors.
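
    The record describes tracking high-contrast points and separating the global stabilization error from object motion, but gives no implementation. The sketch below is one way to realize the same idea with OpenCV feature tracking; the detector settings, the median-based global shift, and the motion tolerance are illustrative assumptions, not the authors' algorithm.

      import cv2
      import numpy as np

      def estimate_frame_shift(prev_gray, curr_gray, motion_tol=2.0):
          """Estimate the global image shift caused by residual stabilization error.

          High-contrast points are detected in the previous frame, tracked into the
          current frame, and the median displacement is taken as the global shift;
          points deviating from it by more than `motion_tol` pixels are flagged as
          likely belonging to moving objects."""
          pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                        qualityLevel=0.01, minDistance=10)
          new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
          good_old = pts[status.ravel() == 1].reshape(-1, 2)
          good_new = new_pts[status.ravel() == 1].reshape(-1, 2)
          displacements = good_new - good_old
          global_shift = np.median(displacements, axis=0)       # robust to outliers
          residual = np.linalg.norm(displacements - global_shift, axis=1)
          movers = good_new[residual > motion_tol]              # candidate moving objects
          return global_shift, movers

      # Toy check on a smoothed noise frame and a shifted copy; the reported global
      # shift should be roughly (-3, +2) in (x, y).
      frame = cv2.GaussianBlur((np.random.rand(240, 320) * 255).astype(np.uint8), (0, 0), 3)
      print(estimate_frame_shift(frame, np.roll(frame, shift=(2, -3), axis=(0, 1)))[0])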

  6. Multiscale Image Processing of Solar Image Data

    NASA Astrophysics Data System (ADS)

    Young, C.; Myers, D. C.

    2001-12-01

    It is often said that the blessing and curse of solar physics is too much data. Solar missions such as Yohkoh, SOHO and TRACE have shown us the Sun with amazing clarity but have also increased the amount of highly complex data. We have improved our view of the Sun yet we have not improved our analysis techniques. The standard techniques used for analysis of solar images generally consist of observing the evolution of features in a sequence of byte-scaled images or a sequence of byte-scaled difference images. The determination of features and structures in the images is done qualitatively by the observer; there is little quantitative and objective analysis done with these images. Many advances in image processing techniques have occurred in the past decade, and many of these methods are possibly suited to solar image analysis. Multiscale/multiresolution methods are perhaps the most promising. These methods have been used to formulate the human ability to view and comprehend phenomena on different scales, so these techniques could be used to quantify the image processing done by the observer's eyes and brain. In this work we present several applications of multiscale techniques applied to solar image data. Specifically, we discuss uses of the wavelet, curvelet, and related transforms to define a multiresolution support for EIT, LASCO and TRACE images.
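
    As a small illustration of the multiresolution decomposition discussed above, the snippet below applies a 2-D wavelet transform to a stand-in image using the PyWavelets package; the choice of wavelet, number of levels, and the random test image are assumptions for the example only.

      import numpy as np
      import pywt  # PyWavelets

      image = np.random.rand(512, 512)                # stand-in for an EIT/TRACE frame
      coeffs = pywt.wavedec2(image, wavelet="haar", level=4)

      # coeffs[0] is the coarsest approximation; coeffs[1:] hold the (horizontal,
      # vertical, diagonal) detail bands from the coarsest to the finest scale.
      for level, (ch, cv, cd) in enumerate(coeffs[1:], start=1):
          energy = float(np.sum(ch**2 + cv**2 + cd**2))
          print(f"detail band {level}: energy = {energy:.1f}")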

  7. Image portion identification methods, image parsing methods, image parsing systems, and articles of manufacture

    DOEpatents

    Lassahn, Gordon D.; Lancaster, Gregory D.; Apel, William A.; Thompson, Vicki S.

    2013-01-08

    Image portion identification methods, image parsing methods, image parsing systems, and articles of manufacture are described. According to one embodiment, an image portion identification method includes accessing data regarding an image depicting a plurality of biological substrates corresponding to at least one biological sample and indicating presence of at least one biological indicator within the biological sample and, using processing circuitry, automatically identifying a portion of the image depicting one of the biological substrates but not others of the biological substrates.

  8. MR imaging of meniscal tears: comparison of intermediate-weighted FRFSE imaging with intermediate-weighted FSE imaging.

    PubMed

    Tokuda, Osamu; Harada, Yuko; Ueda, Takaaki; Iida, Etsushi; Shiraishi, Gen; Motomura, Tetsuhisa; Fukuda, Kouji; Matsunaga, Naofumi

    2012-11-01

    We compared intermediate-weighted fast spin-echo (IW-FSE) images with intermediate-weighted fast-recovery FSE (IW-FRFSE) images in the diagnosis of meniscal tears. First, 64 patients were recruited, and the arthroscopic findings (n = 40) and image analysis (n = 19) identified 59 torn menisci in 36 patients. Both the diagnostic performance and the image quality in assessing meniscal tears were evaluated for IW-FSE and IW-FRFSE images using a four-point scale. Signal-to-noise ratio (SNR) calculation was performed for both sets of images. IW-FRFSE image specificity (100 %) for diagnosing a tear of the posterior horn of the medial meniscus (MM) with reader 1 was significantly higher than that of IW-FSE images (90 %). Mean ratings of the contrast between the lesion and the normal signal intensity within the meniscus were significantly higher for the IW-FRFSE images than for the IW-FSE images in most meniscal tears. Mean SNRs were significantly higher for IW-FSE images than for IW-FRFSE images (P < 0.05). IW-FRFSE imaging can be used as an alternative to IW-FSE imaging to evaluate meniscal tears.

  9. Television Images and Adolescent Girls' Body Image Disturbance.

    ERIC Educational Resources Information Center

    Botta, Renee A.

    1999-01-01

    Contributes to scholarship on the effects of media images on adolescents, using social-comparison theory and critical-viewing theory. Finds that media do have an impact on body-image disturbance. Suggests that body-image processing is the key to understanding how television images affect adolescent girls' body-image attitudes and behaviors. (SR)

  10. Imaging and Analytics: The changing face of Medical Imaging

    NASA Astrophysics Data System (ADS)

    Foo, Thomas

    There have been significant technological advances in imaging capability over the past 40 years. Medical imaging capabilities have developed rapidly, along with technology development in computational processing speed and miniaturization. Moving to all-digital, the number of images that are acquired in a routine clinical examination has increased dramatically from under 50 images in the early days of CT and MRI to more than 500-1000 images today. The staggering number of images that are routinely acquired poses significant challenges for clinicians to interpret the data and to correctly identify the clinical problem. Although the time provided to render a clinical finding has not substantially changed, the amount of data available for interpretation has grown exponentially. In addition, the image quality (spatial resolution) and information content (physiologically-dependent image contrast) has also increased significantly with advances in medical imaging technology. On its current trajectory, medical imaging in the traditional sense is unsustainable. To assist in filtering and extracting the most relevant data elements from medical imaging, image analytics will have a much larger role. Automated image segmentation, generation of parametric image maps, and clinical decision support tools will be needed and developed apace to allow the clinician to manage, extract and utilize only the information that will help improve diagnostic accuracy and sensitivity. As medical imaging devices continue to improve in spatial resolution, functional and anatomical information content, image/data analytics will be more ubiquitous and integral to medical imaging capability.

  11. To Image...or Not to Image?

    ERIC Educational Resources Information Center

    Bruley, Karina

    1996-01-01

    Provides a checklist of considerations for installing document image processing with an electronic document management system. Other topics include scanning; indexing; the image file life cycle; benefits of imaging; document-driven workflow; and planning for workplace changes like postsorting, creating a scanning room, redeveloping job tasks and…

  12. Groupwise Image Registration Guided by a Dynamic Digraph of Images.

    PubMed

    Tang, Zhenyu; Fan, Yong

    2016-04-01

    For groupwise image registration, graph theoretic methods have been adopted for discovering the manifold of images to be registered so that accurate registration of images to a group center image can be achieved by aligning similar images that are linked by the shortest graph paths. However, the image similarity measures adopted to build a graph of images in the extant methods are essentially pairwise measures, not effective for capturing the groupwise similarity among multiple images. To overcome this problem, we present a groupwise image similarity measure that is built on sparse coding for characterizing image similarity among all input images and build a directed graph (digraph) of images so that similar images are connected by the shortest paths of the digraph. Following the shortest paths determined according to the digraph, images are registered to a group center image in an iterative manner by decomposing a large anatomical deformation field required to register an image to the group center image into a series of small ones between similar images. During the iterative image registration, the digraph of images evolves dynamically at each iteration step to pursue an accurate estimation of the image manifold. Moreover, an adaptive dictionary strategy is adopted in the groupwise image similarity measure to ensure fast convergence of the iterative registration procedure. The proposed method has been validated based on both simulated and real brain images, and experiment results have demonstrated that our method was more effective for learning the manifold of input images and achieved higher registration accuracy than state-of-the-art groupwise image registration methods.
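
    The sketch below illustrates only the graph-guided skeleton of this idea: build a digraph whose edge weights encode dissimilarity, then register each image to the center by composing small deformations along the shortest path. The mean-squared-difference edge weight is a placeholder for the paper's sparse-coding groupwise measure, and register_pair and compose are hypothetical stand-ins for a deformable registration routine and deformation-field composition.

      import itertools
      import networkx as nx
      import numpy as np

      def build_digraph(images):
          """Directed graph over images; edge weight = a placeholder dissimilarity."""
          g = nx.DiGraph()
          for i, j in itertools.permutations(range(len(images)), 2):
              g.add_edge(i, j, weight=float(np.mean((images[i] - images[j]) ** 2)))
          return g

      def register_along_paths(images, center, register_pair, compose):
          """Register every image to `images[center]` by chaining pairwise
          registrations along the shortest graph path (hypothetical callables)."""
          g = build_digraph(images)
          warps = {}
          for i in range(len(images)):
              if i == center:
                  continue
              path = nx.shortest_path(g, source=i, target=center, weight="weight")
              total = None
              for a, b in zip(path[:-1], path[1:]):
                  step = register_pair(images[a], images[b])     # small deformation
                  total = step if total is None else compose(total, step)
              warps[i] = total
          return warps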

  13. Bayesian image reconstruction - The pixon and optimal image modeling

    NASA Technical Reports Server (NTRS)

    Pina, R. K.; Puetter, R. C.

    1993-01-01

    In this paper we describe the optimal image model, maximum residual likelihood method (OptMRL) for image reconstruction. OptMRL is a Bayesian image reconstruction technique for removing point-spread function blurring. OptMRL uses both a goodness-of-fit criterion (GOF) and an 'image prior', i.e., a function which quantifies the a priori probability of the image. Unlike standard maximum entropy methods, which typically reconstruct the image on the data pixel grid, OptMRL varies the image model in order to find the optimal functional basis with which to represent the image. We show how an optimal basis for image representation can be selected and in doing so, develop the concept of the 'pixon' which is a generalized image cell from which this basis is constructed. By allowing both the image and the image representation to be variable, the OptMRL method greatly increases the volume of solution space over which the image is optimized. Hence the likelihood of the final reconstructed image is greatly increased. For the goodness-of-fit criterion, OptMRL uses the maximum residual likelihood probability distribution introduced previously by Pina and Puetter (1992). This GOF probability distribution, which is based on the spatial autocorrelation of the residuals, has the advantage that it ensures spatially uncorrelated image reconstruction residuals.

  14. Edge-based correlation image registration for multispectral imaging

    DOEpatents

    Nandy, Prabal [Albuquerque, NM

    2009-11-17

    Registration information for images of a common target obtained from a plurality of different spectral bands can be obtained by combining edge detection and phase correlation. The images are edge-filtered, and pairs of the edge-filtered images are then phase correlated to produce phase correlation images. The registration information can be determined based on these phase correlation images.
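
    A minimal sketch of combining edge detection with phase correlation, in the spirit of this patent record (not the patented implementation): edge-filter both band images, compute the normalized cross-power spectrum, and read the shift off the correlation peak.

      import numpy as np
      from scipy import ndimage

      def register_bands(band_a, band_b):
          """Estimate the (row, column) shift between two spectral-band images by
          phase-correlating their edge-filtered versions."""
          def edges(img):
              img = img.astype(float)
              # Gradient magnitude, so the correlation keys on shared structure
              # rather than on band-specific intensities.
              return np.hypot(ndimage.sobel(img, axis=0), ndimage.sobel(img, axis=1))

          fa, fb = np.fft.fft2(edges(band_a)), np.fft.fft2(edges(band_b))
          spectrum = fa * np.conj(fb)
          spectrum /= np.abs(spectrum) + 1e-12          # normalized cross-power spectrum
          correlation = np.fft.ifft2(spectrum).real

          dy, dx = np.unravel_index(np.argmax(correlation), correlation.shape)
          h, w = correlation.shape
          if dy > h // 2:                               # wrap large indices to negative shifts
              dy -= h
          if dx > w // 2:
              dx -= w
          return int(dy), int(dx)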

  15. Semi-automated Image Processing for Preclinical Bioluminescent Imaging.

    PubMed

    Slavine, Nikolai V; McColl, Roderick W

    Bioluminescent imaging is a valuable noninvasive technique for investigating tumor dynamics and specific biological molecular events in living animals to better understand the effects of human disease in animal models. The purpose of this study was to develop and test a strategy behind automated methods for bioluminescence image processing, from data acquisition to obtaining 3D images. In order to optimize this procedure, a semi-automated image processing approach with a multi-modality image handling environment was developed. To identify a bioluminescent source location and strength, we used the light flux detected on the surface of the imaged object by CCD cameras. For phantom calibration tests and object surface reconstruction we used the MLEM algorithm. For internal bioluminescent sources we used the diffusion approximation, balancing the internal and external intensities on the boundary of the medium; after determining an initial approximation for the photon fluence, we subsequently applied a novel iterative deconvolution method to obtain the final reconstruction result. We find that the reconstruction techniques successfully used the depth-dependent light transport approach and semi-automated image processing to provide a realistic 3D model of the lung tumor. Our image processing software can optimize and decrease the time of volumetric imaging and quantitative assessment. The data obtained from light phantom and mouse lung tumor images demonstrate the utility of the image reconstruction algorithms and the semi-automated approach for the bioluminescent image processing procedure. We suggest that the developed image processing approach can be applied to preclinical imaging studies: characterizing tumor growth, identifying metastases, and potentially determining the effectiveness of cancer treatment.

  16. 3D ultrasound imaging in image-guided intervention.

    PubMed

    Fenster, Aaron; Bax, Jeff; Neshat, Hamid; Cool, Derek; Kakani, Nirmal; Romagnoli, Cesare

    2014-01-01

    Ultrasound imaging is used extensively in diagnosis and image-guidance for interventions of human diseases. However, conventional 2D ultrasound suffers from limitations since it can only provide 2D images of 3-dimensional structures in the body. Thus, measurement of organ size is variable, and guidance of interventions is limited, as the physician is required to mentally reconstruct the 3-dimensional anatomy using 2D views. Over the past 20 years, a number of 3-dimensional ultrasound imaging approaches have been developed. We have developed an approach that is based on a mechanical mechanism to move any conventional ultrasound transducer while 2D images are collected rapidly and reconstructed into a 3D image. In this presentation, 3D ultrasound imaging approaches will be described for use in image-guided interventions.

  17. A content-based image retrieval method for optical colonoscopy images based on image recognition techniques

    NASA Astrophysics Data System (ADS)

    Nosato, Hirokazu; Sakanashi, Hidenori; Takahashi, Eiichi; Murakawa, Masahiro

    2015-03-01

    This paper proposes a content-based image retrieval method for optical colonoscopy images that can find images similar to ones being diagnosed. Optical colonoscopy is a method of direct observation for colons and rectums to diagnose bowel diseases. It is the most common procedure for screening, surveillance and treatment. However, diagnostic accuracy for intractable inflammatory bowel diseases, such as ulcerative colitis (UC), is highly dependent on the experience and knowledge of the medical doctor, because there is considerable variety in the appearances of colonic mucosa within inflammations with UC. In order to solve this issue, this paper proposes a content-based image retrieval method based on image recognition techniques. The proposed retrieval method can find similar images from a database of images diagnosed as UC, and can potentially furnish the medical records associated with the retrieved images to assist the UC diagnosis. Within the proposed method, color histogram features and higher order local auto-correlation (HLAC) features are adopted to represent the color information and geometrical information of optical colonoscopy images, respectively. Moreover, considering various characteristics of UC colonoscopy images, such as vascular patterns and the roughness of the colonic mucosa, we also propose an image enhancement method to highlight the appearances of colonic mucosa in UC. In an experiment using 161 UC images from 32 patients, we demonstrate that our method improves the accuracy of retrieving similar UC images.
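
    The snippet below sketches the retrieval idea in its simplest form: represent each image by a color histogram and rank database images by histogram intersection with the query. The HLAC texture features and the mucosa enhancement step described in the abstract are omitted, and the bin count and similarity score are illustrative choices.

      import numpy as np

      def colour_histogram(image, bins=8):
          """Normalized 3-D RGB histogram of an H x W x 3 uint8 image."""
          hist, _ = np.histogramdd(image.reshape(-1, 3),
                                   bins=(bins, bins, bins),
                                   range=((0, 256),) * 3)
          return hist.ravel() / hist.sum()

      def retrieve_similar(query, database, k=5):
          """Indices of the k database images most similar to the query, ranked by
          histogram intersection."""
          q = colour_histogram(query)
          scores = [np.minimum(q, colour_histogram(img)).sum() for img in database]
          return np.argsort(scores)[::-1][:k]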

  18. Hyperspectral small animal fluorescence imaging: spectral selection imaging

    NASA Astrophysics Data System (ADS)

    Leavesley, Silas; Jiang, Yanan; Patsekin, Valery; Hall, Heidi; Vizard, Douglas; Robinson, J. Paul

    2008-02-01

    Molecular imaging is a rapidly growing area of research, fueled by needs in pharmaceutical drug development for high-throughput screening methods, by pre-clinical and clinical screening for visualizing tumor growth and drug targeting, and by a growing number of applications in the molecular biology fields. Small animal fluorescence imaging employs fluorescent probes to target molecular events in vivo, with a large number of molecular targeting probes readily available. The ease with which new targeting compounds can be developed, the short acquisition times, and the low cost (compared to microCT, MRI, or PET) make fluorescence imaging attractive. However, small animal fluorescence imaging suffers from high optical scattering, absorption, and autofluorescence. Many of these problems can be overcome through multispectral imaging techniques, which collect images at different fluorescence emission wavelengths, followed by analysis, classification, and spectral deconvolution methods to isolate signals from fluorescence emission. We present an alternative to the current method, using hyperspectral excitation scanning (spectral selection imaging), a technique that allows excitation at any wavelength in the visible and near-infrared wavelength range. In many cases, excitation imaging may be more effective at identifying specific fluorescence signals because of the higher complexity of the fluorophore excitation spectrum. Because the excitation is filtered and not the emission, the resolution limit and image shift imposed by acousto-optic tunable filters have no effect on imager performance. We will discuss the design of the imager, its optimization for use in small animal fluorescence imaging, and the application of spectral analysis and classification methods for identifying specific fluorescence signals.

  19. Innovations in Nuclear Imaging Instrumentation: Cerenkov Imaging.

    PubMed

    Tamura, Ryo; Pratt, Edwin C; Grimm, Jan

    2018-07-01

    Cerenkov luminescence (CL) is the blue glow produced by charged subatomic particles travelling faster than the phase velocity of light in a dielectric medium such as water or tissue. CL was first discovered in 1934, but it was recognized for biomedical research only in 2009, after advances in optical camera sensors brought the required high sensitivity. Recently, applications of CL from clinical radionuclides have been rapidly expanding to include not only preclinical and clinical biomedical imaging but also an approach to therapy. Cerenkov luminescence imaging (CLI) utilizes CL generated from clinically relevant radionuclides alongside optical imaging instrumentation. CLI is advantageous over traditional nuclear imaging methods in terms of infrastructure cost, resolution, and imaging time. Furthermore, CLI is a truly multimodal imaging method in which the same agent can be detected by two independent modalities: optical (CL) imaging and positron emission tomography (PET) imaging. CL has been combined with small molecules, biomolecules and nanoparticles to improve diagnosis and therapy in cancer research. Here, we cover the fundamental breakthroughs and recent advances in reagents and instrumentation methods for CLI as well as therapeutic applications of CL. Copyright © 2018 Elsevier Inc. All rights reserved.

  20. Image alignment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dowell, Larry Jonathan

    Disclosed is a method and device for aligning at least two digital images. An embodiment may use frequency-domain transforms of small tiles created from each image to identify substantially similar, "distinguishing" features within each of the images, and then align the images together based on the location of the distinguishing features. To accomplish this, an embodiment may create equal-sized tile sub-images for each image. A "key" for each tile may be created by performing a frequency-domain transform calculation on each tile. An information-distance difference between each possible pair of tiles on each image may be calculated to identify distinguishing features. From analysis of the information-distance differences of the pairs of tiles, a subset of tiles with high discrimination metrics in relation to other tiles may be located for each image. The subset of distinguishing tiles for each image may then be compared to locate tiles with substantially similar keys and/or information-distance metrics to other tiles of other images. Once similar tiles are located for each image, the images may be aligned in relation to the identified similar tiles.

  1. Image stitching and image reconstruction of intestines captured using radial imaging capsule endoscope

    NASA Astrophysics Data System (ADS)

    Ou-Yang, Mang; Jeng, Wei-De; Wu, Yin-Yi; Dung, Lan-Rong; Wu, Hsien-Ming; Weng, Ping-Kuo; Huang, Ker-Jer; Chiu, Luan-Jiau

    2012-05-01

    This study investigates image processing using the radial imaging capsule endoscope (RICE) system. First, an experimental environment is established in which a simulated object has a shape similar to a cylinder, such that a triaxial platform can be used to push the RICE into the sample and capture radial images. Then four algorithms (mean absolute error, mean square error, Pearson correlation coefficient, and deformation processing) are used to stitch the images together. The Pearson correlation coefficient method is the most effective algorithm because it yields the highest peak signal-to-noise ratio, above 80.69, relative to the original image. Furthermore, a living animal experiment is carried out. Finally, the Pearson correlation coefficient method and vector deformation processing are used to stitch the images captured in the living animal experiment. This method is very attractive because, unlike other methods in which two lenses are required to reconstruct the geometrical image, RICE uses only one lens and one mirror.
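
    The Pearson-correlation matching used for stitching can be sketched as follows: slide one strip over its neighbour and keep the overlap whose pixels correlate best. The strip geometry, the search range, and the simple append-style blend are assumptions made for the example, not the paper's exact procedure.

      import numpy as np

      def best_overlap(left, right, min_overlap=20):
          """Overlap width (in columns) between `left`'s right edge and `right`'s
          left edge that maximizes the Pearson correlation coefficient."""
          best_r, best_w = -1.0, min_overlap
          for w in range(min_overlap, min(left.shape[1], right.shape[1])):
              a = left[:, -w:].ravel().astype(float)
              b = right[:, :w].ravel().astype(float)
              r = np.corrcoef(a, b)[0, 1]           # Pearson correlation coefficient
              if r > best_r:
                  best_r, best_w = r, w
          return best_w, best_r

      def stitch(left, right):
          w, _ = best_overlap(left, right)
          # Keep the left strip and append the non-overlapping part of the right one.
          return np.hstack([left, right[:, w:]])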

  2. Cross contrast multi-channel image registration using image synthesis for MR brain images.

    PubMed

    Chen, Min; Carass, Aaron; Jog, Amod; Lee, Junghoon; Roy, Snehashis; Prince, Jerry L

    2017-02-01

    Multi-modal deformable registration is important for many medical image analysis tasks such as atlas alignment, image fusion, and distortion correction. Whereas a conventional method would register images with different modalities using modality independent features or information theoretic metrics such as mutual information, this paper presents a new framework that addresses the problem using a two-channel registration algorithm capable of using mono-modal similarity measures such as sum of squared differences or cross-correlation. To make it possible to use these same-modality measures, image synthesis is used to create proxy images for the opposite modality as well as intensity-normalized images from each of the two available images. The new deformable registration framework was evaluated by performing intra-subject deformation recovery, intra-subject boundary alignment, and inter-subject label transfer experiments using multi-contrast magnetic resonance brain imaging data. Three different multi-channel registration algorithms were evaluated, revealing that the framework is robust to the multi-channel deformable registration algorithm that is used. With a single exception, all results demonstrated improvements when compared against single channel registrations using the same algorithm with mutual information. Copyright © 2016 Elsevier B.V. All rights reserved.

  3. Registration of angiographic image on real-time fluoroscopic image for image-guided percutaneous coronary intervention.

    PubMed

    Kim, Dongkue; Park, Sangsoo; Jeong, Myung Ho; Ryu, Jeha

    2018-02-01

    In percutaneous coronary intervention (PCI), cardiologists must study two different X-ray image sources: a fluoroscopic image and an angiogram. Manipulating a guidewire while alternately monitoring the two separate images on separate screens requires a deep understanding of the anatomy of the coronary vessels and substantial training. We propose 2D/2D spatiotemporal image registration of the two images into a single image in order to provide cardiologists with enhanced visual guidance in PCI. The proposed 2D/2D spatiotemporal registration method uses the cross-correlation of the two ECG series associated with each image to temporally synchronize the two separate image streams and register an angiographic image onto the fluoroscopic image. A guidewire centerline is then extracted from the fluoroscopic image in real time, and the alignment of the centerline with the vessel outlines of the chosen angiographic image is optimized using the iterative closest point algorithm for spatial registration. A proof-of-concept evaluation with a phantom coronary vessel model and engineering students showed an error reduction rate greater than 74% for wrong insertions into nontarget branches compared to the non-registration method, and more than a 47% reduction in task completion time for guidewire manipulation in very difficult tasks. Evaluation with a small number of experienced doctors shows a potentially significant reduction in both task completion time and error rate for difficult tasks. The total registration time with real procedure X-ray (angiographic and fluoroscopic) images is approximately 60 ms, which is within the fluoroscopic image acquisition rate of 15 Hz. By providing cardiologists with better visual guidance in PCI, the proposed spatiotemporal image registration method is shown to be useful in advancing the guidewire into the coronary vessel branches, especially those that are difficult to enter.
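
    The temporal-synchronization step (cross-correlating the two ECG series) can be illustrated with a few lines of NumPy; the synthetic ECG-like signal and the 37-sample shift below are stand-ins, not data from the study.

      import numpy as np

      def ecg_lag(ecg_fluoro, ecg_angio):
          """Shift (in samples) to apply to `ecg_angio` so that it best aligns with
          `ecg_fluoro`, found as the peak of their normalized cross-correlation."""
          a = (ecg_fluoro - np.mean(ecg_fluoro)) / np.std(ecg_fluoro)
          b = (ecg_angio - np.mean(ecg_angio)) / np.std(ecg_angio)
          xcorr = np.correlate(a, b, mode="full")
          return int(np.argmax(xcorr)) - (len(b) - 1)

      # Example: a periodic train of sharp peaks delayed by 37 samples.
      t = np.arange(2000)
      ecg = np.sin(2 * np.pi * t / 200) ** 63        # ECG-like spike train
      print(ecg_lag(ecg, np.roll(ecg, 37)))          # -37: roll the angio trace back to align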

  4. Multispectral image enhancement processing for microsat-borne imager

    NASA Astrophysics Data System (ADS)

    Sun, Jianying; Tan, Zheng; Lv, Qunbo; Pei, Linlin

    2017-10-01

    With the rapid development of remote sensing imaging technology, micro satellites, one kind of tiny spacecraft, have appeared during the past few years, and a good many studies have contributed to miniaturizing satellites for imaging purposes. Generally speaking, micro satellites weigh less than 100 kilograms, even less than 50 kilograms, making them slightly larger or smaller than a common miniature refrigerator. However, the optical system design is hard to make perfect because of the satellite's volume and weight limitations. In most cases, the unprocessed data captured by the imager on the microsatellite cannot meet application needs. Spatial resolution is the key problem: for remote sensing applications, the higher the spatial resolution of the images we gain, the wider the fields in which we can apply them. Consequently, how to utilize super resolution (SR) and image fusion to enhance image quality deserves study. Our team, the Key Laboratory of Computational Optical Imaging Technology, Academy of Opto-Electronics, is devoted to designing high-performance microsat-borne imagers and high-efficiency image processing algorithms. This paper addresses a multispectral image enhancement framework for space-borne imagery, jointly applying pan-sharpening and super resolution techniques to deal with the limited spatial resolution of microsatellites. We test remote sensing images acquired by the CX6-02 satellite and report the SR performance. The experiments illustrate that the proposed approach provides high-quality images.

  5. Stereoscopic Integrated Imaging Goggles for Multimodal Intraoperative Image Guidance

    PubMed Central

    Mela, Christopher A.; Patterson, Carrie; Thompson, William K.; Papay, Francis; Liu, Yang

    2015-01-01

    We have developed novel stereoscopic wearable multimodal intraoperative imaging and display systems, entitled Integrated Imaging Goggles, for guiding surgeries. The prototype systems offer real-time stereoscopic fluorescence imaging and color reflectance imaging, along with in vivo handheld microscopy and ultrasound imaging. With the Integrated Imaging Goggles, both wide-field fluorescence imaging and in vivo microscopy are provided, and real-time ultrasound images can also be presented in the goggle display. Furthermore, real-time goggle-to-goggle stereoscopic video sharing is demonstrated, which can greatly facilitate telemedicine. In this paper, the prototype systems are described, characterized, and tested in surgeries on biological tissues ex vivo. We have found that the system can detect fluorescent targets with indocyanine green concentrations as low as 60 nM and can resolve structures down to 0.25 mm with large-FOV stereoscopic imaging. The system has successfully guided simulated cancer surgeries in chicken tissue. The Integrated Imaging Goggle is novel in four aspects: it is (a) the first wearable stereoscopic wide-field intraoperative fluorescence imaging and display system, (b) the first wearable system offering both large-FOV and microscopic imaging simultaneously, (c) the first wearable system that offers both ultrasound imaging and fluorescence imaging capabilities, and (d) the first demonstration of goggle-to-goggle communication to share stereoscopic views for medical guidance. PMID:26529249

  6. Mirror-Image Equivalence and Interhemispheric Mirror-Image Reversal

    PubMed Central

    Corballis, Michael C.

    2018-01-01

    Mirror-image confusions are common, especially in children and in some cases of neurological impairment. They can be a special impediment in activities such as reading and writing directional scripts, where mirror-image patterns (such as b and d) must be distinguished. Treating mirror images as equivalent, though, can also be adaptive in the natural world, which carries no systematic left-right bias and where the same object or event can appear in opposite viewpoints. Mirror-image equivalence and confusion are natural consequences of a bilaterally symmetrical brain. In the course of learning, mirror-image equivalence may be established through a process of symmetrization, achieved through homotopic interhemispheric exchange in the formation of memory circuits. Such circuits would not distinguish between mirror images. Learning to make mirror-image discriminations may depend either on existing brain asymmetries, or on extensive learning overriding the symmetrization process. The balance between mirror-image equivalence and mirror-image discrimination may nevertheless be precarious, with spontaneous confusions or reversals, such as mirror writing, sometimes appearing naturally or as a manifestation of conditions like dyslexia. PMID:29706878

  7. Mirror-Image Equivalence and Interhemispheric Mirror-Image Reversal.

    PubMed

    Corballis, Michael C

    2018-01-01

    Mirror-image confusions are common, especially in children and in some cases of neurological impairment. They can be a special impediment in activities such as reading and writing directional scripts, where mirror-image patterns (such as b and d) must be distinguished. Treating mirror images as equivalent, though, can also be adaptive in the natural world, which carries no systematic left-right bias and where the same object or event can appear in opposite viewpoints. Mirror-image equivalence and confusion are natural consequences of a bilaterally symmetrical brain. In the course of learning, mirror-image equivalence may be established through a process of symmetrization, achieved through homotopic interhemispheric exchange in the formation of memory circuits. Such circuits would not distinguish between mirror images. Learning to make mirror-image discriminations may depend either on existing brain asymmetries, or on extensive learning overriding the symmetrization process. The balance between mirror-image equivalence and mirror-image discrimination may nevertheless be precarious, with spontaneous confusions or reversals, such as mirror writing, sometimes appearing naturally or as a manifestation of conditions like dyslexia.

  8. Image reconstruction of dynamic infrared single-pixel imaging system

    NASA Astrophysics Data System (ADS)

    Tong, Qi; Jiang, Yilin; Wang, Haiyan; Guo, Limin

    2018-03-01

    The single-pixel imaging technique has recently received much attention. Most current single-pixel imaging work addresses relatively static targets or a fixed imaging system, since the approach is limited by the number of measurements that can be collected through the single detector. In this paper, we propose a novel dynamic compressive imaging method for the infrared (IR) rosette scanning system to solve the imaging problem in the presence of imaging system motion. The relationship between adjacent target images and the scene is analyzed under different system movement scenarios, and these relationships are used to build dynamic compressive imaging models. Simulation results demonstrate that the proposed method can improve the reconstruction quality of the IR image and enhance the contrast between the target and the background in the presence of system movement.
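
    The paper's dynamic measurement models are not reproduced in the abstract. As background, the sketch below shows the static single-pixel forward model y = Φx and a sparsity-regularized reconstruction via iterative shrinkage-thresholding (ISTA); the matrix sizes, sparsity level, and regularization weight are assumptions chosen only to make the toy example run.

    ```python
    import numpy as np

    def ista(Phi, y, lam=0.05, n_iter=200):
        """Iterative shrinkage-thresholding for min ||y - Phi x||^2 + lam*||x||_1."""
        L = np.linalg.norm(Phi, 2) ** 2           # Lipschitz constant of the gradient
        x = np.zeros(Phi.shape[1])
        for _ in range(n_iter):
            grad = Phi.T @ (Phi @ x - y)          # gradient of the data-fidelity term
            z = x - grad / L                      # gradient step
            x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
        return x

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        n, m = 256, 96                            # scene size, number of bucket measurements
        x_true = np.zeros(n)
        x_true[rng.choice(n, 8, replace=False)] = rng.standard_normal(8)  # sparse scene
        Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random modulation patterns
        y = Phi @ x_true                                 # single-detector measurements
        x_hat = ista(Phi, y)
        print("reconstruction error:", np.linalg.norm(x_hat - x_true))
    ```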

  9. Target recognition for ladar range image using slice image

    NASA Astrophysics Data System (ADS)

    Xia, Wenze; Han, Shaokun; Wang, Liang

    2015-12-01

    A shape descriptor and a complete shape-based recognition system that use slice images as the geometric feature descriptor for ladar range images are introduced. A slice image is a two-dimensional image generated by a three-dimensional Hough transform and a corresponding mathematical transformation. The system consists of two processes: model library construction and recognition. In the model library construction process, a series of range images is obtained by sampling the model object at preset attitude angles, and all the range images are converted into slice images. The number of slice images is reduced by clustering analysis and by finding a representative image, which reduces the size of the model library. In the recognition process, the slice image of the scene is compared with the slice images in the model library, and the recognition result is based on this comparison. Simulated ladar range images are used to analyze the recognition and misjudgment rates, and a comparison between the slice-image representation and the moment-invariants representation is performed. The experimental results show that, both without noise and with ladar noise, the system achieves a high recognition rate and a low misjudgment rate. The comparison experiment demonstrates that the slice image has better representation ability than moment invariants.

  10. Dynamic deformation image de-blurring and image processing for digital imaging correlation measurement

    NASA Astrophysics Data System (ADS)

    Guo, X.; Li, Y.; Suo, T.; Liu, H.; Zhang, C.

    2017-11-01

    This paper proposes a method for de-blurring images captured during the dynamic deformation of materials. De-blurring is achieved with a dynamics-based approach, which is used to estimate the point spread function (PSF) during the camera exposure window. The deconvolution process, which involves iterative matrix calculations over the pixels, is then performed on the GPU to decrease the time cost. Compared with the Gauss method and the Lucy-Richardson method, the proposed approach gives the best image restoration results. The method has been evaluated using a Hopkinson bar loading system, and in comparison with the blurry input it successfully restores the image. Image processing applications also demonstrate that the de-blurring method can improve the accuracy and stability of the digital imaging correlation measurement.
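
    The paper compares its dynamics-based approach against Richardson-Lucy deconvolution. The sketch below shows that baseline only, applied to a test image blurred with an assumed horizontal motion PSF, using scikit-image; it is not the proposed GPU method.

    ```python
    import numpy as np
    from scipy.ndimage import convolve
    from skimage import data, img_as_float
    from skimage.restoration import richardson_lucy

    # Horizontal motion-blur PSF (15 pixels long); length and orientation are assumptions.
    psf = np.zeros((15, 15))
    psf[7, :] = 1.0
    psf /= psf.sum()

    image = img_as_float(data.camera())          # test image standing in for a captured frame
    blurred = convolve(image, psf, mode="reflect")

    # Richardson-Lucy deconvolution, the baseline the paper compares against (30 iterations).
    restored = richardson_lucy(blurred, psf, 30)
    print(restored.shape, restored.min(), restored.max())
    ```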

  11. A novel x-ray imaging system and its imaging performance

    NASA Astrophysics Data System (ADS)

    Yu, Chunyu; Chang, Benkang; Wang, Shiyun; Zhang, Junju; Yao, Xiao

    2006-09-01

    Since x-rays were discovered and applied to imaging, x-ray imaging techniques have gone through several generations of improvement, from film-screen and x-ray image intensifiers to CR and DR. To store and transmit image information conveniently, digital imaging is necessary for imaging techniques in medicine and biology. In the conventional intensifying-screen approach, the digital image signal is obtained by lens-coupling a CCD directly to the screen, but this suffers from a loss of x-ray signal and results in poor x-ray image performance. Therefore, to improve the imaging performance, we inserted a brightness intensifier, known in military applications as a Low Light Level (LLL) image intensifier, between the intensifying screen and the CCD, and designed a novel x-ray imaging system. This design improves the imaging performance of the whole system and thus decreases the required x-ray dose. A detailed comparison between the systems with and without the brightness intensifier is given in this paper. Moreover, the main noise source of the images produced by the novel system is analyzed, and both the original images produced by the novel x-ray imaging system and the processed images are presented. The results show that the image performance is satisfactory and that the x-ray imaging system can be used in security checking and many other nondestructive inspection fields.

  12. Cerenkov imaging - a new modality for molecular imaging

    PubMed Central

    Thorek, Daniel LJ; Robertson, Robbie; Bacchus, Wassifa A; Hahn, Jaeseung; Rothberg, Julie; Beattie, Bradley J; Grimm, Jan

    2012-01-01

    Cerenkov luminescence imaging (CLI) is an emerging hybrid modality that utilizes the light emission from many commonly used medical isotopes. Cerenkov radiation (CR) is produced when charged particles travel through a dielectric medium faster than the speed of light in that medium. First described in detail nearly 100 years ago, CR has only recently been applied for biomedical imaging purposes. The modality is of considerable interest because it enables widespread luminescence imaging equipment to visualize clinical diagnostic (all PET radioisotopes) and many therapeutic radionuclides. The amount of light detected in CLI applications is significantly lower than that in other optical imaging techniques such as bioluminescence and fluorescence. However, significant advantages include the use of approved radiotracers and the lack of an incident light source, resulting in high signal-to-background ratios. In addition, multiple subjects may be imaged concurrently (up to 5 in common bioluminescence equipment), conferring both cost and time benefits. This review summarizes the field of Cerenkov luminescence imaging to date. Applications of CLI discussed include intraoperative radionuclide-guided surgery, monitoring of therapeutic efficacy, tomographic optical imaging, and multiplexed imaging using fluorophores excited by the Cerenkov radiation. While technical challenges still exist, Cerenkov imaging has emerged as an important molecular imaging modality. PMID:23133811

  13. High compression image and image sequence coding

    NASA Technical Reports Server (NTRS)

    Kunt, Murat

    1989-01-01

    The digital representation of an image requires a very large number of bits. This number is even larger for an image sequence. The goal of image coding is to reduce this number, as much as possible, and reconstruct a faithful duplicate of the original picture or image sequence. Early efforts in image coding, solely guided by information theory, led to a plethora of methods. The compression ratio reached a plateau around 10:1 a couple of years ago. Recent progress in the study of the brain mechanism of vision and scene analysis has opened new vistas in picture coding. Directional sensitivity of the neurones in the visual pathway combined with the separate processing of contours and textures has led to a new class of coding methods capable of achieving compression ratios as high as 100:1 for images and around 300:1 for image sequences. Recent progress on some of the main avenues of object-based methods is presented. These second generation techniques make use of contour-texture modeling, new results in neurophysiology and psychophysics and scene analysis.

  14. Nanoparticle imaging probes for molecular imaging with computed tomography and application to cancer imaging

    NASA Astrophysics Data System (ADS)

    Roeder, Ryan K.; Curtis, Tyler E.; Nallathamby, Prakash D.; Irimata, Lisa E.; McGinnity, Tracie L.; Cole, Lisa E.; Vargo-Gogola, Tracy; Cowden Dahl, Karen D.

    2017-03-01

    Precision imaging is needed to realize precision medicine in cancer detection and treatment. Molecular imaging offers the ability to target and identify tumors, associated abnormalities, and specific cell populations with overexpressed receptors. Nuclear imaging and radionuclide probes provide high sensitivity but subject the patient to a high radiation dose and provide limited spatiotemporal information, requiring combined computed tomography (CT) for anatomic imaging. Therefore, nanoparticle contrast agents have been designed to enable molecular imaging and improve detection in CT alone. Core-shell nanoparticles provide a powerful platform for designing tailored imaging probes. The composition of the core is chosen for enabling strong X-ray contrast, multi-agent imaging with photon-counting spectral CT, and multimodal imaging. A silica shell is used for protective, biocompatible encapsulation of the core composition, volume-loading fluorophores or radionuclides for multimodal imaging, and facile surface functionalization with antibodies or small molecules for targeted delivery. Multi-agent (k-edge) imaging and quantitative molecular imaging with spectral CT was demonstrated using current clinical agents (iodine and BaSO4) and a proposed spectral library of contrast agents (Gd2O3, HfO2, and Au). Bisphosphonate-functionalized Au nanoparticles were demonstrated to enhance sensitivity and specificity for the detection of breast microcalcifications by conventional radiography and CT in both normal and dense mammary tissue using murine models. Moreover, photon-counting spectral CT enabled quantitative material decomposition of the Au and calcium signals. Immunoconjugated Au@SiO2 nanoparticles enabled highly-specific targeting of CD133+ ovarian cancer stem cells for contrast-enhanced detection in model tumors.
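
    The abstract mentions quantitative material decomposition of photon-counting spectral CT signals. As a toy illustration of the linear decomposition step, the sketch below solves a per-voxel least-squares system mapping energy-bin measurements to material amounts; the attenuation numbers and basis materials are made up for illustration, not measured k-edge data.

    ```python
    import numpy as np

    # Toy per-energy-bin attenuation "signatures" for three basis materials
    # (columns: Au, Ca, water). These numbers are illustrative, not measured data.
    A = np.array([
        [5.2, 1.9, 0.8],   # bin 1
        [7.6, 1.4, 0.6],   # bin 2 (above a k-edge, attenuation jumps)
        [4.1, 1.0, 0.5],   # bin 3
        [2.9, 0.7, 0.4],   # bin 4
    ])

    true_amounts = np.array([0.3, 1.2, 0.7])        # unknown material amounts in a voxel
    measured = A @ true_amounts + 0.01 * np.random.default_rng(8).standard_normal(4)

    # Material decomposition: solve the overdetermined linear system per voxel.
    estimated, *_ = np.linalg.lstsq(A, measured, rcond=None)
    print("estimated material amounts:", estimated)
    ```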

  15. Digital Imaging

    NASA Technical Reports Server (NTRS)

    1986-01-01

    Digital Imaging is the computer processed numerical representation of physical images. Enhancement of images results in easier interpretation. Quantitative digital image analysis by Perceptive Scientific Instruments, locates objects within an image and measures them to extract quantitative information. Applications are CAT scanners, radiography, microscopy in medicine as well as various industrial and manufacturing uses. The PSICOM 327 performs all digital image analysis functions. It is based on Jet Propulsion Laboratory technology, is accurate and cost efficient.

  16. Automatic image enhancement based on multi-scale image decomposition

    NASA Astrophysics Data System (ADS)

    Feng, Lu; Wu, Zhuangzhi; Pei, Luo; Long, Xiong

    2014-01-01

    In image processing and computational photography, automatic image enhancement is one of the long-standing objectives. Recent automatic image enhancement methods take into account not only global semantics, such as correcting color hue and brightness imbalances, but also the local content of the image, such as human faces or the sky in a landscape. In this paper we describe a new scheme for automatic image enhancement that considers both the global semantics and the local content of the image. Our method employs a multi-scale edge-aware image decomposition approach to detect underexposed regions and enhance the detail of the salient content. The experimental results demonstrate the effectiveness of our approach compared with existing automatic enhancement methods.
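
    The paper's edge-aware multi-scale decomposition is not detailed in the abstract. As a crude stand-in for the base/detail idea, the sketch below splits an image into a Gaussian base layer and a detail layer and amplifies the detail; the filter, scale, and boost factor are assumptions.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter
    from skimage import data, img_as_float

    def enhance_detail(image, sigma=3.0, boost=1.8):
        """Single-scale base/detail enhancement (a crude stand-in for an
        edge-aware multi-scale decomposition)."""
        base = gaussian_filter(image, sigma)      # smooth base layer
        detail = image - base                     # residual detail layer
        return np.clip(base + boost * detail, 0.0, 1.0)

    if __name__ == "__main__":
        img = img_as_float(data.camera())
        out = enhance_detail(img)
        print(out.shape, out.dtype)
    ```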

  17. Color image guided depth image super resolution using fusion filter

    NASA Astrophysics Data System (ADS)

    He, Jin; Liang, Bin; He, Ying; Yang, Jun

    2018-04-01

    Depth cameras currently play an important role in many areas. However, most of them can only obtain low-resolution (LR) depth images, whereas color cameras can easily provide high-resolution (HR) color images. Using a color image as a guide is therefore an efficient way to obtain an HR depth image. In this paper, we propose a depth image super resolution (SR) algorithm that uses an HR color image as the guide and an LR depth image as input. We use a fusion filter combining a guided filter and an edge-based joint bilateral filter to obtain the HR depth image. Our experimental results on the Middlebury 2005 datasets show that our method provides better-quality HR depth images both numerically and visually.
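
    The exact fusion filter is not given in the abstract; the sketch below shows only the guided-filter half of the idea, refining a bicubically upsampled depth map with a high-resolution grayscale guide. The guided-filter implementation follows the standard formulation, and the radius, epsilon, and synthetic inputs are assumptions.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter
    from skimage.transform import resize

    def guided_filter(guide, src, radius=8, eps=1e-3):
        """Gray-guide guided filter (He et al.), used here to sharpen an
        upsampled depth map along edges of the high-resolution guide image."""
        size = 2 * radius + 1
        mean_I = uniform_filter(guide, size)
        mean_p = uniform_filter(src, size)
        corr_Ip = uniform_filter(guide * src, size)
        corr_II = uniform_filter(guide * guide, size)
        var_I = corr_II - mean_I * mean_I
        cov_Ip = corr_Ip - mean_I * mean_p
        a = cov_Ip / (var_I + eps)                 # local linear coefficients
        b = mean_p - a * mean_I
        return uniform_filter(a, size) * guide + uniform_filter(b, size)

    if __name__ == "__main__":
        rng = np.random.default_rng(2)
        color_hr = rng.random((128, 128))                    # stand-in HR grayscale guide
        depth_lr = rng.random((32, 32))                      # stand-in LR depth map
        depth_up = resize(depth_lr, color_hr.shape, order=3) # bicubic upsampling
        depth_hr = guided_filter(color_hr, depth_up)
        print(depth_hr.shape)
    ```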

  18. Uncooled thermal imaging and image analysis

    NASA Astrophysics Data System (ADS)

    Wang, Shiyun; Chang, Benkang; Yu, Chunyu; Zhang, Junju; Sun, Lianjun

    2006-09-01

    A thermal imager converts differences in temperature into differences in electrical signal level, and so can be applied to medical tasks such as estimating blood flow speed and vessel location [1], assessing pain [2], and so on. As un-cooled focal plane array (UFPA) technology matures, some simple medical functions can be performed with an un-cooled thermal imager, for example rapid screening for fever, as during the SARS outbreak. This requires stable imaging performance and sufficiently high spatial and temperature resolution. Among all performance parameters, the noise equivalent temperature difference (NETD) is often used as the criterion of overall performance. The 320 x 240 α-Si micro-bolometer UFPA is widely applied at present because of its stable performance and sensitive response. In this paper, the NETD of a UFPA and the relation between NETD and temperature are studied; several key parameters that affect NETD are listed and a general formula is presented. Finally, images from this kind of thermal imager are analyzed with the aim of detecting persons with fever, and an applied thermal image intensification method is introduced.

  19. IMAGES: An interactive image processing system

    NASA Technical Reports Server (NTRS)

    Jensen, J. R.

    1981-01-01

    The IMAGES interactive image processing system was created specifically for undergraduate remote sensing education in geography. The system is interactive, relatively inexpensive to operate, almost hardware independent, and responsive to numerous users at one time in a time-sharing mode. Most important, it provides a medium whereby theoretical remote sensing principles discussed in lecture may be reinforced in laboratory as students perform computer-assisted image processing. In addition to its use in academic and short course environments, the system has also been used extensively to conduct basic image processing research. The flow of information through the system is discussed including an overview of the programs.

  20. Imager for Mars Pathfinder (IMP) image calibration

    USGS Publications Warehouse

    Reid, R.J.; Smith, P.H.; Lemmon, M.; Tanner, R.; Burkland, M.; Wegryn, E.; Weinberg, J.; Marcialis, R.; Britt, D.T.; Thomas, N.; Kramm, R.; Dummel, A.; Crowe, D.; Bos, B.J.; Bell, J.F.; Rueffer, P.; Gliem, F.; Johnson, J. R.; Maki, J.N.; Herkenhoff, K. E.; Singer, Robert B.

    1999-01-01

    The Imager for Mars Pathfinder returned over 16,000 high-quality images from the surface of Mars. The camera was well-calibrated in the laboratory, with <5% radiometric uncertainty. The photometric properties of two radiometric targets were also measured with 3% uncertainty. Several data sets acquired during the cruise and on Mars confirm that the system operated nominally throughout the course of the mission. Image calibration algorithms were developed for landed operations to correct instrumental sources of noise and to calibrate images relative to observations of the radiometric targets. The uncertainties associated with these algorithms as well as current improvements to image calibration are discussed. Copyright 1999 by the American Geophysical Union.

  1. WND-CHARM: Multi-purpose image classification using compound image transforms

    PubMed Central

    Orlov, Nikita; Shamir, Lior; Macura, Tomasz; Johnston, Josiah; Eckley, D. Mark; Goldberg, Ilya G.

    2008-01-01

    We describe a multi-purpose image classifier that can be applied to a wide variety of image classification tasks without modifications or fine-tuning, and yet provide classification accuracy comparable to state-of-the-art task-specific image classifiers. The proposed image classifier first extracts a large set of 1025 image features including polynomial decompositions, high contrast features, pixel statistics, and textures. These features are computed on the raw image, transforms of the image, and transforms of transforms of the image. The feature values are then used to classify test images into a set of pre-defined image classes. This classifier was tested on several different problems including biological image classification and face recognition. Although we cannot make a claim of universality, our experimental results show that this classifier performs as well or better than classifiers developed specifically for these image classification tasks. Our classifier’s high performance on a variety of classification problems is attributed to (i) a large set of features extracted from images; and (ii) an effective feature selection and weighting algorithm sensitive to specific image classification problems. The algorithms are available for free download from openmicroscopy.org. PMID:18958301
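
    The full 1025-feature bank and the WND classifier are not reproducible from the abstract alone. The sketch below illustrates only the general pattern the abstract describes, generic features weighted by class separability and classified with a nearest-neighbor rule, using a stock digits dataset as a stand-in; the Fisher-score weighting and 1-NN rule are simplifying assumptions.

    ```python
    import numpy as np
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split

    def fisher_scores(X, y):
        """Per-feature Fisher discriminant score used as a feature weight."""
        classes = np.unique(y)
        overall = X.mean(axis=0)
        between = np.zeros(X.shape[1])
        within = np.zeros(X.shape[1])
        for c in classes:
            Xc = X[y == c]
            between += len(Xc) * (Xc.mean(axis=0) - overall) ** 2
            within += ((Xc - Xc.mean(axis=0)) ** 2).sum(axis=0)
        return between / (within + 1e-12)

    def weighted_nn_predict(X_train, y_train, X_test, w):
        """1-nearest-neighbour classification under a feature-weighted distance."""
        preds = []
        for x in X_test:
            d = np.sqrt(((X_train - x) ** 2 * w).sum(axis=1))
            preds.append(y_train[np.argmin(d)])
        return np.array(preds)

    if __name__ == "__main__":
        X, y = load_digits(return_X_y=True)       # stand-in for generic image features
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        w = fisher_scores(X_tr, y_tr)
        y_hat = weighted_nn_predict(X_tr, y_tr, X_te, w)
        print("accuracy:", (y_hat == y_te).mean())
    ```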

  2. Robust image modeling techniques with an image restoration application

    NASA Astrophysics Data System (ADS)

    Kashyap, Rangasami L.; Eom, Kie-Bum

    1988-08-01

    A robust parameter-estimation algorithm for a nonsymmetric half-plane (NSHP) autoregressive model, where the driving noise is a mixture of a Gaussian and an outlier process, is presented. The convergence of the estimation algorithm is proved. An algorithm to estimate parameters and original image intensity simultaneously from the impulse-noise-corrupted image, where the model governing the image is not available, is also presented. The robustness of the parameter estimates is demonstrated by simulation. Finally, an algorithm to restore realistic images is presented. The entire image generally does not obey a simple image model, but a small portion (e.g., 8 x 8) of the image is assumed to obey an NSHP model. The original image is divided into windows and the robust estimation algorithm is applied for each window. The restoration algorithm is tested by comparing it to traditional methods on several different images.

  3. FFDM image quality assessment using computerized image texture analysis

    NASA Astrophysics Data System (ADS)

    Berger, Rachelle; Carton, Ann-Katherine; Maidment, Andrew D. A.; Kontos, Despina

    2010-04-01

    Quantitative measures of image quality (IQ) are routinely obtained during the evaluation of imaging systems. These measures, however, do not necessarily correlate with the IQ of the actual clinical images, which can also be affected by factors such as patient positioning. No quantitative method currently exists to evaluate clinical IQ. Therefore, we investigated the potential of using computerized image texture analysis to quantitatively assess IQ. Our hypothesis is that image texture features can be used to assess IQ as a measure of the image signal-to-noise ratio (SNR). To test feasibility, the "Rachel" anthropomorphic breast phantom (Model 169, Gammex RMI) was imaged with a Senographe 2000D FFDM system (GE Healthcare) using 220 unique exposure settings (target/filter, kVs, and mAs combinations). The mAs were varied from 10%-300% of that required for an average glandular dose (AGD) of 1.8 mGy. A 2.5 cm² retroareolar region of interest (ROI) was segmented from each image. The SNR was computed from the ROIs segmented from images linear with dose (i.e., raw images) after flat-field and off-set correction. Image texture features of skewness, coarseness, contrast, energy, homogeneity, and fractal dimension were computed from the Premium View™ postprocessed image ROIs. Multiple linear regression demonstrated a strong association between the computed image texture features and SNR (R²=0.92, p≤0.001). When including kV, target and filter as additional predictor variables, a stronger association with SNR was observed (R²=0.95, p≤0.001). The strong associations indicate that computerized image texture analysis can be used to measure image SNR and potentially aid in automating IQ assessment as a component of the clinical workflow. Further work is underway to validate our findings in larger clinical datasets.
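
    As a rough illustration of the pipeline the abstract describes, the sketch below computes a few grey-level co-occurrence texture features from synthetic ROIs of varying noise and regresses them against SNR; the chosen features, noise model, and library calls are assumptions, not the study's FFDM data or feature set.

    ```python
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops
    from sklearn.linear_model import LinearRegression

    def texture_features(roi):
        """Contrast, energy and homogeneity from a grey-level co-occurrence matrix."""
        glcm = graycomatrix(roi, distances=[1], angles=[0], levels=256,
                            symmetric=True, normed=True)
        return [graycoprops(glcm, p)[0, 0] for p in ("contrast", "energy", "homogeneity")]

    if __name__ == "__main__":
        rng = np.random.default_rng(3)
        X, snr = [], []
        for noise_sigma in np.linspace(5, 40, 20):        # simulate varying exposure/noise
            roi = np.clip(128 + rng.normal(0, noise_sigma, (64, 64)), 0, 255).astype(np.uint8)
            X.append(texture_features(roi))
            snr.append(roi.mean() / roi.std())
        model = LinearRegression().fit(np.array(X), np.array(snr))
        print("R^2 on the synthetic set:", model.score(np.array(X), np.array(snr)))
    ```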

  4. Cartilage imaging in children: current indications, magnetic resonance imaging techniques, and imaging findings.

    PubMed

    Ho-Fung, Victor M; Jaramillo, Diego

    2013-07-01

    Evaluation of hyaline cartilage in pediatric patients requires in-depth understanding of normal physiologic changes in the developing skeleton. Magnetic resonance (MR) imaging is a powerful tool for morphologic and functional imaging of the cartilage. In this review article, current imaging indications for cartilage evaluation pertinent to the pediatric population are described. In particular, novel surgical techniques for cartilage repair and MR classification of cartilage injuries are summarized. The authors also provide a review of the normal anatomy and a concise description of the advances in quantitative cartilage imaging (ie, T2 mapping, delayed gadolinium-enhanced MR imaging of cartilage, and T1rho). Copyright © 2013 Elsevier Inc. All rights reserved.

  5. Spatial-scanning hyperspectral imaging probe for bio-imaging applications

    NASA Astrophysics Data System (ADS)

    Lim, Hoong-Ta; Murukeshan, Vadakke Matham

    2016-03-01

    The three common methods of performing hyperspectral imaging are the spatial-scanning, spectral-scanning, and snapshot methods; however, only the spectral-scanning and snapshot methods have so far been configured as hyperspectral imaging probes. This paper presents a spatial-scanning (pushbroom) hyperspectral imaging probe, realized by integrating a pushbroom hyperspectral imager with an imaging probe. The proposed hyperspectral imaging probe can also function as an endoscopic probe when integrated with a custom-fabricated image fiber bundle unit. The imaging probe is configured by incorporating a gradient-index lens at the end face of an image fiber bundle that consists of about 50 000 individual fiberlets. The necessary simulations, methodology, and instrumentation details are explained, followed by an assessment of the developed probe's performance. Resolution test targets such as the United States Air Force chart, as well as bio-samples such as chicken breast tissue with a blood clot, are used as test samples for resolution analysis and performance validation. The system is built on a pushbroom hyperspectral imaging system with a video camera and has the advantage of acquiring information from a large number of spectral bands with a selectable region of interest. The advantages of this spatial-scanning hyperspectral imaging probe can be extended to test samples or tissues residing in regions that are difficult to access, with potential diagnostic bio-imaging applications.

  6. Medical Image Tamper Detection Based on Passive Image Authentication.

    PubMed

    Ulutas, Guzin; Ustubioglu, Arda; Ustubioglu, Beste; V Nabiyev, Vasif; Ulutas, Mustafa

    2017-12-01

    Telemedicine has gained popularity in recent years. Medical images can be transferred over the Internet to enable telediagnosis among medical staff and to make the patient's history accessible from anywhere. Therefore, integrity protection of the medical image is a serious concern because of the broadcast nature of the Internet. Some watermarking techniques have been proposed to control the integrity of medical images; however, they require embedding extra information (a watermark) into the image before transmission, which decreases the visual quality of the medical image and can cause false diagnosis. The proposed method uses a passive image authentication mechanism to detect tampered regions on medical images. Structural texture information is obtained from the medical image using the rotation-invariant local binary pattern (LBPROT) to make the keypoint extraction techniques more successful, and keypoints on the texture image are obtained with the scale invariant feature transform (SIFT). Tampered regions are detected by matching the keypoints. The method improves keypoint-based passive image authentication (which fails to detect tampering when a smooth region is used to cover an object) by applying LBPROT before keypoint extraction, because smooth regions also carry texture information. Experimental results show that the method detects tampered regions on medical images even if the forged image has undergone attacks (Gaussian blurring/additive white Gaussian noise) or the forged regions are scaled or rotated before pasting.
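
    The sketch below illustrates only the texture-then-keypoints stage named in the abstract: a rotation-invariant LBP map is computed with scikit-image, SIFT keypoints are extracted and matched with OpenCV, and a simulated copy-move forgery stands in for a tampered medical image. The ratio-test threshold and the toy forgery are assumptions, and the clustering/localization steps of the full method are omitted.

    ```python
    import cv2
    import numpy as np
    from skimage import data
    from skimage.feature import local_binary_pattern

    def texture_map(gray, P=8, R=1):
        """Rotation-invariant LBP texture image (uint8) used instead of raw intensities."""
        lbp = local_binary_pattern(gray, P, R, method="ror")
        return cv2.normalize(lbp, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

    def match_keypoints(img_a, img_b, ratio=0.7):
        """SIFT keypoints on LBP texture maps, matched with a ratio test."""
        sift = cv2.SIFT_create()
        kp_a, des_a = sift.detectAndCompute(texture_map(img_a), None)
        kp_b, des_b = sift.detectAndCompute(texture_map(img_b), None)
        pairs = cv2.BFMatcher().knnMatch(des_a, des_b, k=2)
        good = [m for m, n in pairs if m.distance < ratio * n.distance]
        return kp_a, kp_b, good

    if __name__ == "__main__":
        original = data.camera()                                   # stand-in image
        tampered = original.copy()
        tampered[50:110, 50:110] = original[200:260, 200:260]      # simulated copy-move forgery
        _, _, matches = match_keypoints(original, tampered)
        print("surviving keypoint matches:", len(matches))
    ```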

  7. An algorithm for encryption of secret images into meaningful images

    NASA Astrophysics Data System (ADS)

    Kanso, A.; Ghebleh, M.

    2017-03-01

    Image encryption algorithms typically transform a plain image into a noise-like cipher image, whose appearance is an indication of encrypted content. Bao and Zhou [Image encryption: Generating visually meaningful encrypted images, Information Sciences 324, 2015] propose encrypting the plain image into a visually meaningful cover image. This improves security by masking existence of encrypted content. Following their approach, we propose a lossless visually meaningful image encryption scheme which improves Bao and Zhou's algorithm by making the encrypted content, i.e. distortions to the cover image, more difficult to detect. Empirical results are presented to show high quality of the resulting images and high security of the proposed algorithm. Competence of the proposed scheme is further demonstrated by means of comparison with Bao and Zhou's scheme.

  8. Image Gallery

    MedlinePlus

    The MedlinePlus Image Gallery contains high-quality digital photographs, organized by category and available for direct download.

  9. Selections from 2017: Image Processing with AstroImageJ

    NASA Astrophysics Data System (ADS)

    Kohler, Susanna

    2017-12-01

    Editor's note: In these last two weeks of 2017, we'll be looking at a few selections that we haven't yet discussed on AAS Nova from among the most-downloaded papers published in AAS journals this year. The usual posting schedule will resume in January.

    AstroImageJ: Image Processing and Photometric Extraction for Ultra-Precise Astronomical Light Curves. Published January 2017.

    Figure caption: The AIJ image display. A wide range of astronomy-specific image display options and image analysis tools are available from the menus, quick access icons, and interactive histogram. [Collins et al. 2017]

    Main takeaway: AstroImageJ is a new integrated software package presented in a publication led by Karen Collins (Vanderbilt University, Fisk University, and University of Louisville). It enables new users, even at the level of undergraduate student, high school student, or amateur astronomer, to quickly start processing, modeling, and plotting astronomical image data.

    Why it's interesting: Science doesn't just happen the moment a telescope captures a picture of a distant object. Instead, astronomical images must first be carefully processed to clean up the data, and this data must then be systematically analyzed to learn about the objects within it. AstroImageJ, as a GUI-driven, easily installed, public-domain tool, is a uniquely accessible tool for this processing and analysis, allowing even non-specialist users to explore and visualize astronomical data.

    Some features of AstroImageJ (as reported by Astrobites):
    - Image calibration: generate master flat, dark, and bias frames
    - Image arithmetic: combine images via subtraction, addition, division, multiplication, etc.
    - Stack editing: easily perform operations on a series of images
    - Image stabilization and image alignment features
    - Precise coordinate converters: calculate Heliocentric and Barycentric Julian Dates
    - WCS coordinates: determine precisely where a telescope was pointed for an image by plate solving using Astrometry.net
    - Macro and plugin support: write your own macros
    - Multi-aperture photometry

  10. Fluorescence Imaging Topography Scanning System for intraoperative multimodal imaging

    PubMed Central

    Quang, Tri T.; Kim, Hye-Yeong; Bao, Forrest Sheng; Papay, Francis A.; Edwards, W. Barry; Liu, Yang

    2017-01-01

    Fluorescence imaging is a powerful technique with diverse applications in intraoperative settings. Visualization of three dimensional (3D) structures and depth assessment of lesions, however, are oftentimes limited in planar fluorescence imaging systems. In this study, a novel Fluorescence Imaging Topography Scanning (FITS) system has been developed, which offers color reflectance imaging, fluorescence imaging and surface topography scanning capabilities. The system is compact and portable, and thus suitable for deployment in the operating room without disturbing the surgical flow. For system performance, parameters including near infrared fluorescence detection limit, contrast transfer functions and topography depth resolution were characterized. The developed system was tested in chicken tissues ex vivo with simulated tumors for intraoperative imaging. We subsequently conducted in vivo multimodal imaging of sentinel lymph nodes in mice using FITS and PET/CT. The PET/CT/optical multimodal images were co-registered and conveniently presented to users to guide surgeries. Our results show that the developed system can facilitate multimodal intraoperative imaging. PMID:28437441

  11. Automatic detection of blurred images in UAV image sets

    NASA Astrophysics Data System (ADS)

    Sieberth, Till; Wackrow, Rene; Chandler, Jim H.

    2016-12-01

    Unmanned aerial vehicles (UAVs) have become an interesting and active research topic in photogrammetry. Current research is based on images acquired by a UAV, which have a high ground resolution and good spectral and radiometric resolution owing to the low flight altitude combined with a high-resolution camera. UAV image flights are also cost effective and have become attractive for many applications, including change detection in small-scale areas. One of the main problems preventing full automation of data processing of UAV imagery is the degradation caused by blur from camera movement during image acquisition. This can be caused by the normal flight movement of the UAV as well as by strong winds, turbulence, or sudden operator inputs. This blur disturbs the visual analysis and interpretation of the data, causes errors, and can degrade the accuracy of automatic photogrammetric processing algorithms. The detection and removal of these images is currently done manually, which is both time consuming and prone to error, particularly for large image sets. To increase the quality of data processing, an automated process is necessary, which must be both reliable and quick. This paper describes the development of an automatic filtering process based on the quantification of blur in an image. Images with known blur are processed digitally to determine a quantifiable measure of image blur. The algorithm is required to process UAV images quickly and reliably to relieve the operator from detecting blurred images manually. The newly developed method makes it possible to detect blur caused by linear camera displacement and is based on how humans detect blur: humans detect blurred images best by comparing them to other images in order to establish whether an image is blurred or not. The developed algorithm simulates this procedure by creating a comparison image using image processing. Creating internally a comparable image makes the method independent of
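
    The paper's comparison-image approach is not reproduced here. The sketch below uses a much simpler, widely used blur score, the variance of the Laplacian, to split an image set into sharp and blurred frames; the threshold and the synthetic test frames are assumptions.

    ```python
    import cv2
    import numpy as np

    def blur_score(gray):
        """Variance of the Laplacian: low values indicate a blurred image."""
        return cv2.Laplacian(gray, cv2.CV_64F).var()

    def filter_blurred(images, threshold=100.0):
        """Split a UAV image set into sharp and blurred lists (threshold is an assumption)."""
        sharp, blurred = [], []
        for name, img in images:
            (sharp if blur_score(img) >= threshold else blurred).append(name)
        return sharp, blurred

    if __name__ == "__main__":
        rng = np.random.default_rng(4)
        sharp_img = (rng.random((256, 256)) * 255).astype(np.uint8)    # high-frequency content
        blurred_img = cv2.GaussianBlur(sharp_img, (15, 15), 5)         # simulated motion/defocus blur
        print(filter_blurred([("frame_001", sharp_img), ("frame_002", blurred_img)]))
    ```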

  12. Image quality (IQ) guided multispectral image compression

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng; Chen, Genshe; Wang, Zhonghai; Blasch, Erik

    2016-05-01

    Image compression is necessary for data transportation, as it saves both transfer time and storage space. In this paper, we focus our discussion on lossy compression. There are many standard image formats and corresponding compression algorithms, for example JPEG (DCT, discrete cosine transform), JPEG 2000 (DWT, discrete wavelet transform), BPG (better portable graphics), and TIFF (LZW, Lempel-Ziv-Welch). The image quality (IQ) of the decompressed image is measured by numerical metrics such as root mean square error (RMSE), peak signal-to-noise ratio (PSNR), and the structural similarity (SSIM) index. Given an image and a specified IQ, we investigate how to select a compression method and its parameters to achieve the expected compression. Our scenario consists of three steps. The first step is to compress a set of images of interest with varying parameters and compute their IQs for each compression method. The second step is to create several regression models per compression method after analyzing the IQ measurement versus the compression parameter over a number of compressed images. The third step is to compress the given image at the specified IQ using the compression method (JPEG, JPEG2000, BPG, or TIFF) selected according to the regression models. The IQ may be specified by a compression ratio (e.g., 100), in which case we select the compression method with the highest IQ (SSIM or PSNR), or by an IQ metric (e.g., SSIM = 0.8 or PSNR = 50), in which case we select the compression method with the highest compression ratio. Our experiments on thermal (long-wave infrared) grayscale images showed very promising results.
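
    As an illustration of the first step of the scenario, the sketch below compresses a grayscale test image at several JPEG quality settings and records compression ratio, PSNR, and SSIM, the raw material from which regression models could be fitted; the quality grid and test image are assumptions, and only JPEG (via Pillow) is exercised.

    ```python
    import io
    import numpy as np
    from PIL import Image
    from skimage import data
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    def jpeg_quality_sweep(gray, qualities=(20, 40, 60, 80, 95)):
        """Compress at several JPEG quality settings and record IQ vs. compression ratio."""
        raw_bytes = gray.size                 # 1 byte per pixel for an 8-bit image
        rows = []
        for q in qualities:
            buf = io.BytesIO()
            Image.fromarray(gray).save(buf, format="JPEG", quality=q)
            buf.seek(0)
            decoded = np.asarray(Image.open(buf))
            rows.append({
                "quality": q,
                "ratio": raw_bytes / buf.getbuffer().nbytes,
                "psnr": peak_signal_noise_ratio(gray, decoded),
                "ssim": structural_similarity(gray, decoded),
            })
        return rows

    if __name__ == "__main__":
        gray = data.camera()                  # stand-in for a long-wave infrared frame
        for row in jpeg_quality_sweep(gray):
            print(row)
    ```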

  13. Vaccine Images on Twitter: Analysis of What Images are Shared

    PubMed Central

    Dredze, Mark

    2018-01-01

    Background Visual imagery plays a key role in health communication; however, there is little understanding of what aspects of vaccine-related images make them effective communication aids. Twitter, a popular venue for discussions related to vaccination, provides numerous images that are shared with tweets. Objective The objectives of this study were to understand how images are used in vaccine-related tweets and provide guidance with respect to the characteristics of vaccine-related images that correlate with the higher likelihood of being retweeted. Methods We collected more than one million vaccine image messages from Twitter and characterized various properties of these images using automated image analytics. We fit a logistic regression model to predict whether or not a vaccine image tweet was retweeted, thus identifying characteristics that correlate with a higher likelihood of being shared. For comparison, we built similar models for the sharing of vaccine news on Facebook and for general image tweets. Results Most vaccine-related images are duplicates (125,916/237,478; 53.02%) or taken from other sources, not necessarily created by the author of the tweet. Almost half of the images contain embedded text, and many include images of people and syringes. The visual content is highly correlated with a tweet’s textual topics. Vaccine image tweets are twice as likely to be shared as nonimage tweets. The sentiment of an image and the objects shown in the image were the predictive factors in determining whether an image was retweeted. Conclusions We are the first to study vaccine images on Twitter. Our findings suggest future directions for the study and use of vaccine imagery and may inform communication strategies around vaccination. Furthermore, our study demonstrates an effective study methodology for image analysis. PMID:29615386

  14. Vaccine Images on Twitter: Analysis of What Images are Shared.

    PubMed

    Chen, Tao; Dredze, Mark

    2018-04-03

    Visual imagery plays a key role in health communication; however, there is little understanding of what aspects of vaccine-related images make them effective communication aids. Twitter, a popular venue for discussions related to vaccination, provides numerous images that are shared with tweets. The objectives of this study were to understand how images are used in vaccine-related tweets and provide guidance with respect to the characteristics of vaccine-related images that correlate with the higher likelihood of being retweeted. We collected more than one million vaccine image messages from Twitter and characterized various properties of these images using automated image analytics. We fit a logistic regression model to predict whether or not a vaccine image tweet was retweeted, thus identifying characteristics that correlate with a higher likelihood of being shared. For comparison, we built similar models for the sharing of vaccine news on Facebook and for general image tweets. Most vaccine-related images are duplicates (125,916/237,478; 53.02%) or taken from other sources, not necessarily created by the author of the tweet. Almost half of the images contain embedded text, and many include images of people and syringes. The visual content is highly correlated with a tweet's textual topics. Vaccine image tweets are twice as likely to be shared as nonimage tweets. The sentiment of an image and the objects shown in the image were the predictive factors in determining whether an image was retweeted. We are the first to study vaccine images on Twitter. Our findings suggest future directions for the study and use of vaccine imagery and may inform communication strategies around vaccination. Furthermore, our study demonstrates an effective study methodology for image analysis. ©Tao Chen, Mark Dredze. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 03.04.2018.

  15. Optical image hiding based on computational ghost imaging

    NASA Astrophysics Data System (ADS)

    Wang, Le; Zhao, Shengmei; Cheng, Weiwen; Gong, Longyan; Chen, Hanwu

    2016-05-01

    Image hiding schemes play an important role in the present big-data era, as they provide copyright protection for digital images. In this paper, we propose a novel image hiding scheme based on computational ghost imaging that offers strong robustness and high security. The watermark is encrypted with the configuration of a computational ghost imaging system, and the random speckle patterns constitute the secret key. The least significant bit algorithm is adopted to embed the watermark, and both a second-order correlation algorithm and a compressed sensing (CS) algorithm are used to extract it. The experimental and simulation results show that authorized users can recover the watermark with the secret key. The watermark image cannot be retrieved when the eavesdropping ratio is less than 45% with the second-order correlation algorithm, or less than 20% with the TVAL3 CS reconstruction algorithm. In addition, the proposed scheme is robust against 'salt and pepper' noise and image cropping degradations.
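
    The ghost-imaging encryption itself is not reproduced here; the sketch below shows only the least-significant-bit embedding and extraction step named in the abstract, applied to an arbitrary bitstream standing in for the encrypted watermark. The host image and bit layout are assumptions.

    ```python
    import numpy as np

    def embed_lsb(host, bits):
        """Write a binary sequence into the least significant bits of a host image."""
        flat = host.flatten()
        if len(bits) > flat.size:
            raise ValueError("watermark too large for host image")
        flat[: len(bits)] = (flat[: len(bits)] & 0xFE) | bits   # clear LSB, then set it
        return flat.reshape(host.shape)

    def extract_lsb(stego, n_bits):
        """Read back the first n_bits least significant bits."""
        return stego.flatten()[:n_bits] & 1

    if __name__ == "__main__":
        rng = np.random.default_rng(5)
        host = rng.integers(0, 256, (64, 64), dtype=np.uint8)
        watermark_bits = rng.integers(0, 2, 512, dtype=np.uint8)   # stand-in for the encrypted watermark
        stego = embed_lsb(host.copy(), watermark_bits)
        recovered = extract_lsb(stego, watermark_bits.size)
        print("exact recovery:", bool(np.array_equal(recovered, watermark_bits)))
    ```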

  16. Stable image acquisition for mobile image processing applications

    NASA Astrophysics Data System (ADS)

    Henning, Kai-Fabian; Fritze, Alexander; Gillich, Eugen; Mönks, Uwe; Lohweg, Volker

    2015-02-01

    Today, mobile devices (smartphones, tablets, etc.) are widespread and of high importance to their users, and their performance as well as their versatility increases over time. This creates the opportunity to use such devices for more specific tasks, such as image processing in an industrial context. For the analysis of images, requirements like image quality (blur, illumination, etc.) as well as a defined relative position of the object to be inspected are crucial. Since mobile devices are handheld and used in constantly changing environments, the challenge is to fulfill these requirements. We present an approach to overcome these obstacles and stabilize the image capturing process such that image analysis becomes significantly improved on mobile devices. To this end, image processing methods are combined with sensor fusion concepts. The approach consists of three main parts. First, pose estimation methods are used to guide the user in moving the device to a defined position. Second, the sensor data and the pose information are combined for relative motion estimation. Finally, the image capturing process is automated: it is triggered depending on the alignment of the device and the object, as well as on the image quality that can be achieved under consideration of motion and environmental effects.

  17. Fast image decompression for telebrowsing of images

    NASA Technical Reports Server (NTRS)

    Miaou, Shaou-Gang; Tou, Julius T.

    1993-01-01

    Progressive image transmission (PIT) is often used to reduce the transmission time of an image telebrowsing system. A side effect of the PIT is the increase of computational complexity at the viewer's site. This effect is more serious in transform domain techniques than in other techniques. Recent attempts to reduce the side effect are futile as they create another side effect, namely, the discontinuous and unpleasant image build-up. Based on a practical assumption that image blocks to be inverse transformed are generally sparse, this paper presents a method to minimize both side effects simultaneously.

  18. Quality evaluation of pansharpened hyperspectral images generated using multispectral images

    NASA Astrophysics Data System (ADS)

    Matsuoka, Masayuki; Yoshioka, Hiroki

    2012-11-01

    Hyperspectral remote sensing can provide a smooth spectral curve of a target by using a set of detectors with higher spectral resolution. The spatial resolution of hyperspectral images, however, is generally much lower than that of multispectral images because of the lower energy of the incident radiation. Pansharpening is an image-fusion technique that generates higher spatial resolution multispectral images by combining lower resolution multispectral images with higher resolution panchromatic images. In this study, higher resolution hyperspectral images were generated by pansharpening simulated lower resolution hyperspectral data with higher resolution multispectral data. The spectral and spatial qualities of the pansharpened images were then assessed in relation to the spectral bands of the multispectral images. Airborne hyperspectral data from AVIRIS were used in this study and pansharpened using six methods. Quantitative evaluation of the pansharpened images is performed using two frequently used indices, ERGAS and the Q index.
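
    ERGAS has a standard closed form; the sketch below computes it for a fused multiband image against a reference, with the high-to-low resolution ratio as a parameter. The toy data and the 1/4 ratio are assumptions, not the AVIRIS experiment.

    ```python
    import numpy as np

    def ergas(reference, fused, ratio):
        """ERGAS (relative dimensionless global error) between a reference and a
        fused multiband image; `ratio` is high-res / low-res pixel size (e.g. 1/4)."""
        bands = reference.shape[-1]
        acc = 0.0
        for k in range(bands):
            rmse = np.sqrt(np.mean((reference[..., k] - fused[..., k]) ** 2))
            acc += (rmse / reference[..., k].mean()) ** 2       # band-wise relative RMSE
        return 100.0 * ratio * np.sqrt(acc / bands)

    if __name__ == "__main__":
        rng = np.random.default_rng(6)
        ref = rng.random((64, 64, 6)) + 0.5                     # toy 6-band reference
        fused = ref + 0.01 * rng.standard_normal(ref.shape)     # slightly degraded fusion result
        print("ERGAS:", ergas(ref, fused, ratio=1 / 4))
    ```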

  19. IMAGES: An IMage Archive Generated for Exoplanet Surveys

    NASA Astrophysics Data System (ADS)

    Tanner, A.

    2010-10-01

    In the past few years, there have been a menagerie of high contrast imaging surveys which have resulted in the detection of the first brown dwarfs orbiting main sequence stars and the first directly imaged exo-planetary systems. While these discoveries are scientifically rewarding, they are rare and the majority of the images collected during these surveys show single target stars. In addition, while papers will report the number of companion non-detections down to a sensitivity limit at a specific distance from the star, the corresponding images are rarely made available to the public. To date, such data exists for over a thousand stars. Thus, we are creating IMAGES, the IMage Archive Generated for Exoplanet Searches, as a repository for high contrast images gathered from published direct imaging sub-stellar and exoplanet companion surveys. This database will serve many purposes such as 1) facilitating common proper motion confirmation for candidate companions, 2) reducing the number of redundant observations of non-detection fields, 3) providing multiplicity precursor information to better select targets for future exoplanet missions, 4) providing stringent limits on the companion fraction of stars for a wide range of age, spectral type and star formation environment, and 5) provide multi-epoch images of stars with known companions for orbital monitoring. This database will be open to the public and will be searchable and sortable and will be extremely useful for future direct imaging programs such as GPI and SPHERE as well as future planet search programs such as JWST and SIM.

  20. A hyperspectral image projector for hyperspectral imagers

    NASA Astrophysics Data System (ADS)

    Rice, Joseph P.; Brown, Steven W.; Neira, Jorge E.; Bousquet, Robert R.

    2007-04-01

    We have developed and demonstrated a Hyperspectral Image Projector (HIP) intended for system-level validation testing of hyperspectral imagers, including the instrument and any associated spectral unmixing algorithms. HIP, based on the same digital micromirror arrays used in commercial digital light processing (DLP*) displays, is capable of projecting any combination of many different arbitrarily programmable basis spectra into each image pixel at up to video frame rates. We use a scheme whereby one micromirror array is used to produce light having the spectra of endmembers (i.e. vegetation, water, minerals, etc.), and a second micromirror array, optically in series with the first, projects any combination of these arbitrarily-programmable spectra into the pixels of a 1024 x 768 element spatial image, thereby producing temporally-integrated images having spectrally mixed pixels. HIP goes beyond conventional DLP projectors in that each spatial pixel can have an arbitrary spectrum, not just arbitrary color. As such, the resulting spectral and spatial content of the projected image can simulate realistic scenes that a hyperspectral imager will measure during its use. Also, the spectral radiance of the projected scenes can be measured with a calibrated spectroradiometer, such that the spectral radiance projected into each pixel of the hyperspectral imager can be accurately known. Use of such projected scenes in a controlled laboratory setting would alleviate expensive field testing of instruments, allow better separation of environmental effects from instrument effects, and enable system-level performance testing and validation of hyperspectral imagers as used with analysis algorithms. For example, known mixtures of relevant endmember spectra could be projected into arbitrary spatial pixels in a hyperspectral imager, enabling tests of how well a full system, consisting of the instrument + calibration + analysis algorithm, performs in unmixing (i.e. de-convolving) the

  1. Image registration: enabling technology for image guided surgery and therapy.

    PubMed

    Sauer, Frank

    2005-01-01

    Imaging looks inside the patient's body, exposing the patient's anatomy beyond what is visible on the surface. Medical imaging has a very successful history in medical diagnosis. It also plays an increasingly important role as an enabling technology for minimally invasive procedures. Interventional procedures (e.g., catheter-based cardiac interventions) are traditionally supported by intra-procedure imaging (X-ray fluoroscopy, ultrasound); there is real-time feedback, but the images provide limited information. Surgical procedures are traditionally supported by pre-operative images (CT, MR); the image quality can be very good, but the link between the images and the patient has been lost. In both cases, image registration can play an essential role: augmenting intra-op images with pre-op images, and mapping pre-op images to the patient's body. We present examples of both approaches from an application-oriented perspective, covering electrophysiology, radiation therapy, and neurosurgery. Ultimately, as the boundaries between interventional radiology and surgery become blurry, the different methods for image guidance will also merge. Image guidance will draw upon a combination of pre-op and intra-op imaging together with magnetic or optical tracking systems, and will enable precise minimally invasive procedures. The information is registered into a common coordinate system and allows advanced methods for visualization, such as augmented reality, or advanced methods for therapy delivery, such as robotics.

  2. Software for Automated Image-to-Image Co-registration

    NASA Technical Reports Server (NTRS)

    Benkelman, Cody A.; Hughes, Heidi

    2007-01-01

    The project objectives are: a) Develop software to fine-tune image-to-image co-registration, presuming images are orthorectified prior to input; b) Create a reusable software development kit (SDK) to enable incorporation of these tools into other software; d) Provide automated testing for quantitative analysis; and e) Develop software that applies multiple techniques to achieve subpixel precision in the co-registration of image pairs.
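
    The SDK itself is not described beyond its objectives; as one common route to subpixel image-to-image co-registration, the sketch below estimates a synthetic subpixel shift with scikit-image's phase cross-correlation and applies the correction. The test image, applied offset, and upsampling factor are assumptions.

    ```python
    import numpy as np
    from scipy.ndimage import shift as nd_shift
    from skimage import data
    from skimage.registration import phase_cross_correlation

    # Reference orthoimage stand-in and a copy shifted by a known subpixel offset.
    reference = data.camera().astype(float)
    true_offset = (3.4, -2.7)
    moving = nd_shift(reference, true_offset, mode="nearest")

    # Estimate the correction to 1/100 pixel and apply it to the moving image.
    result = phase_cross_correlation(reference, moving, upsample_factor=100)
    shift_est = result[0]                              # estimated (row, col) correction
    registered = nd_shift(moving, shift_est, mode="nearest")
    print("applied offset:", true_offset, "estimated correction:", shift_est)
    ```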

  3. Medical Imaging System

    NASA Technical Reports Server (NTRS)

    1991-01-01

    The MD Image System, a true-color image processing system that serves as a diagnostic aid and tool for storage and distribution of images, was developed by Medical Image Management Systems, Huntsville, AL, as a "spinoff from a spinoff." The original spinoff, Geostar 8800, developed by Crystal Image Technologies, Huntsville, incorporates advanced UNIX versions of ELAS (developed by NASA's Earth Resources Laboratory for analysis of Landsat images) for general purpose image processing. The MD Image System is an application of this technology to a medical system that aids in the diagnosis of cancer, and can accept, store and analyze images from other sources such as Magnetic Resonance Imaging.

  4. EDITORIAL: Imaging Systems and Techniques Imaging Systems and Techniques

    NASA Astrophysics Data System (ADS)

    Giakos, George; Yang, Wuqiang; Petrou, M.; Nikita, K. S.; Pastorino, M.; Amanatiadis, A.; Zentai, G.

    2011-10-01

    This special feature on Imaging Systems and Techniques comprises 27 technical papers, covering essential facets in imaging systems and techniques both in theory and applications, from research groups spanning three different continents. It mainly contains peer-reviewed articles from the IEEE International Conference on Imaging Systems and Techniques (IST 2011), held in Thessaloniki, Greece, as well a number of articles relevant to the scope of this issue. The multifaceted field of imaging requires drastic adaptation to the rapid changes in our society, economy, environment, and the technological revolution; there is an urgent need to address and propose dynamic and innovative solutions to problems that tend to be either complex and static or rapidly evolving with a lot of unknowns. For instance, exploration of the engineering and physical principles of new imaging systems and techniques for medical applications, remote sensing, monitoring of space resources and enhanced awareness, exploration and management of natural resources, and environmental monitoring, are some of the areas that need to be addressed with urgency. Similarly, the development of efficient medical imaging techniques capable of providing physiological information at the molecular level is another important area of research. Advanced metabolic and functional imaging techniques, operating on multiple physical principles, using high resolution and high selectivity nanoimaging techniques, can play an important role in the diagnosis and treatment of cancer, as well as provide efficient drug-delivery imaging solutions for disease treatment with increased sensitivity and specificity. On the other hand, technical advances in the development of efficient digital imaging systems and techniques and tomographic devices operating on electric impedance tomography, computed tomography, single-photon emission and positron emission tomography detection principles are anticipated to have a significant impact on a

  5. Image Processing for Cameras with Fiber Bundle Image Relay

    DTIC Science & Technology

length. Optical fiber bundles have been used to couple between this focal surface and planar image sensors. However, such fiber-coupled imaging systems...coupled to six discrete CMOS focal planes. We characterize the locally space-variant system impulse response at various stages: monocentric lens image...vignetting, and stitch together the image data from discrete sensors into a single panorama. We compare processed images from the prototype to those taken with

  6. Quantitative assessment of dynamic PET imaging data in cancer imaging.

    PubMed

    Muzi, Mark; O'Sullivan, Finbarr; Mankoff, David A; Doot, Robert K; Pierce, Larry A; Kurland, Brenda F; Linden, Hannah M; Kinahan, Paul E

    2012-11-01

    Clinical imaging in positron emission tomography (PET) is often performed using single-time-point estimates of tracer uptake or static imaging that provides a spatial map of regional tracer concentration. However, dynamic tracer imaging can provide considerably more information about in vivo biology by delineating both the temporal and spatial pattern of tracer uptake. In addition, several potential sources of error that occur in static imaging can be mitigated. This review focuses on the application of dynamic PET imaging to measuring regional cancer biologic features and especially in using dynamic PET imaging for quantitative therapeutic response monitoring for cancer clinical trials. Dynamic PET imaging output parameters, particularly transport (flow) and overall metabolic rate, have provided imaging end points for clinical trials at single-center institutions for years. However, dynamic imaging poses many challenges for multicenter clinical trial implementations from cross-center calibration to the inadequacy of a common informatics infrastructure. Underlying principles and methodology of PET dynamic imaging are first reviewed, followed by an examination of current approaches to dynamic PET image analysis with a specific case example of dynamic fluorothymidine imaging to illustrate the approach. Copyright © 2012 Elsevier Inc. All rights reserved.

  7. Automated image segmentation-assisted flattening of atomic force microscopy images.

    PubMed

    Wang, Yuliang; Lu, Tongda; Li, Xiaolai; Wang, Huimin

    2018-01-01

Atomic force microscopy (AFM) images normally exhibit various artifacts. As a result, image flattening is required prior to image analysis. To obtain optimized flattening results, foreground features are generally excluded manually using rectangular masks during image flattening, which is time consuming and inaccurate. In this study, a two-step scheme was proposed to achieve optimized image flattening in an automated manner. In the first step, the convex and concave features in the foreground were automatically segmented with accurate boundary detection. The extracted foreground features were taken as exclusion masks. In the second step, data points in the background were fitted as polynomial curves/surfaces, which were then subtracted from the raw images to obtain the flattened images. Moreover, sliding-window-based polynomial fitting was proposed to process images with complex background trends. The working principle of the two-step image flattening scheme was presented, followed by an investigation of the influence of sliding-window size and polynomial fitting direction on the flattened images. Additionally, the role of image flattening in the morphological characterization and segmentation of AFM images was verified with the proposed method.
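
    A rough sketch of the second (background-subtraction) step is given below. It fits and removes a per-scan-line polynomial background while excluding masked foreground pixels; the mask, polynomial order, and the omission of the sliding-window variant are simplifying assumptions, not the authors' implementation.

    ```python
    import numpy as np

    def flatten_afm(image, mask, order=2):
        """Subtract a per-scan-line polynomial background fitted to background
        pixels only (mask == True marks foreground pixels to exclude)."""
        flattened = np.empty_like(image, dtype=float)
        x = np.arange(image.shape[1])
        for i, row in enumerate(image):
            bg = ~mask[i]                          # background points used for fitting
            coeffs = np.polyfit(x[bg], row[bg], order)
            flattened[i] = row - np.polyval(coeffs, x)
        return flattened

    # usage (hypothetical data): flat = flatten_afm(raw_scan, feature_mask, order=2)
    ```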

  8. Readout-Segmented Echo-Planar Imaging in Diffusion-Weighted MR Imaging in Breast Cancer: Comparison with Single-Shot Echo-Planar Imaging in Image Quality

    PubMed Central

    Kim, Yun Ju; Kang, Bong Joo; Park, Chang Suk; Kim, Hyeon Sook; Son, Yo Han; Porter, David Andrew; Song, Byung Joo

    2014-01-01

    Objective The purpose of this study was to compare the image quality of standard single-shot echo-planar imaging (ss-EPI) and that of readout-segmented EPI (rs-EPI) in patients with breast cancer. Materials and Methods Seventy-one patients with 74 breast cancers underwent both ss-EPI and rs-EPI. For qualitative comparison of image quality, three readers independently assessed the two sets of diffusion-weighted (DW) images. To evaluate geometric distortion, a comparison was made between lesion lengths derived from contrast enhanced MR (CE-MR) images and those obtained from the corresponding DW images. For assessment of image parameters, signal-to-noise ratio (SNR), lesion contrast, and contrast-to-noise ratio (CNR) were calculated. Results The rs-EPI was superior to ss-EPI in most criteria regarding the qualitative image quality. Anatomical structure distinction, delineation of the lesion, ghosting artifact, and overall image quality were significantly better in rs-EPI. Regarding the geometric distortion, lesion length on ss-EPI was significantly different from that of CE-MR, whereas there were no significant differences between CE-MR and rs-EPI. The rs-EPI was superior to ss-EPI in SNR and CNR. Conclusion Readout-segmented EPI is superior to ss-EPI in the aspect of image quality in DW MR imaging of the breast. PMID:25053898

  9. Imaging angiogenesis.

    PubMed

    Charnley, Natalie; Donaldson, Stephanie; Price, Pat

    2009-01-01

    There is a need for direct imaging of effects on tumor vasculature in assessment of response to antiangiogenic drugs and vascular disrupting agents. Imaging tumor vasculature depends on differences in permeability of vasculature of tumor and normal tissue, which cause changes in penetration of contrast agents. Angiogenesis imaging may be defined in terms of measurement of tumor perfusion and direct imaging of the molecules involved in angiogenesis. In addition, assessment of tumor hypoxia will give an indication of tumor vasculature. The range of imaging techniques available for these processes includes positron emission tomography (PET), dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI), perfusion computed tomography (CT), and ultrasound (US).

  10. Image Processing

    NASA Technical Reports Server (NTRS)

    1993-01-01

    Electronic Imagery, Inc.'s ImageScale Plus software, developed through a Small Business Innovation Research (SBIR) contract with Kennedy Space Flight Center for use on space shuttle Orbiter in 1991, enables astronauts to conduct image processing, prepare electronic still camera images in orbit, display them and downlink images to ground based scientists for evaluation. Electronic Imagery, Inc.'s ImageCount, a spin-off product of ImageScale Plus, is used to count trees in Florida orange groves. Other applications include x-ray and MRI imagery, textile designs and special effects for movies. As of 1/28/98, company could not be located, therefore contact/product information is no longer valid.

  11. Categorizing biomedicine images using novel image features and sparse coding representation

    PubMed Central

    2013-01-01

Background Images embedded in biomedical publications carry rich information that often concisely summarize key hypotheses adopted, methods employed, or results obtained in a published study. Therefore, they offer valuable clues for understanding the main content of a biomedical publication. Prior studies have pointed out the potential of mining images embedded in biomedical publications for automatically understanding and retrieving such images' associated source documents. Within the broad area of biomedical image processing, categorizing biomedical images is a fundamental step for building many advanced image analysis, retrieval, and mining applications. Similar to any automatic categorization effort, discriminative image features can provide the most crucial aid in the process. Method We observe that many images embedded in biomedical publications carry versatile annotation text. Based on the locations of and the spatial relationships between these text elements in an image, we thus propose some novel image features for image categorization purposes, which quantitatively characterize the spatial positions and distributions of text elements inside a biomedical image. We further adopt a sparse coding representation (SCR) based technique to categorize images embedded in biomedical publications by leveraging our newly proposed image features. Results We randomly selected 990 images of the JPG format for use in our experiments, where 310 images were used as training samples and the rest were used as the testing cases. We first segmented 310 sample images following our proposed procedure. This step produced a total of 1035 sub-images. We then manually labeled all these sub-images according to the two-level hierarchical image taxonomy proposed by [1]. Among our annotation results, 316 are microscopy images, 126 are gel electrophoresis images, 135 are line charts, 156 are bar charts, 52 are spot charts, 25 are tables, 70 are flow charts, and the remaining 155 images are
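
    The sparse coding representation step could look roughly like the following sketch, which uses scikit-learn's DictionaryLearning as a stand-in encoder; the placeholder feature matrix, dictionary size, and sparsity level are illustrative assumptions rather than values from the paper.

    ```python
    import numpy as np
    from sklearn.decomposition import DictionaryLearning

    # X: rows are per-image feature vectors (e.g., positions/distributions of
    # text elements inside each sub-image); random placeholders are used here.
    X = np.random.rand(200, 64)

    # Learn an overcomplete dictionary and encode each image as a sparse code.
    dico = DictionaryLearning(n_components=128, transform_algorithm='omp',
                              transform_n_nonzero_coefs=8, random_state=0)
    codes = dico.fit(X).transform(X)   # sparse codes used as the SCR representation

    # A linear classifier (e.g., sklearn.svm.LinearSVC) could then be trained on `codes`.
    ```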

  12. Evaluation of multimodality imaging using image fusion with ultrasound tissue elasticity imaging in an experimental animal model.

    PubMed

    Paprottka, P M; Zengel, P; Cyran, C C; Ingrisch, M; Nikolaou, K; Reiser, M F; Clevert, D A

    2014-01-01

To evaluate ultrasound tissue elasticity imaging by comparison to multimodality imaging using image fusion with Magnetic Resonance Imaging (MRI) and conventional grey scale imaging with additional elasticity ultrasound in an experimental small-animal squamous-cell carcinoma model for the assessment of tissue morphology. Human hypopharynx carcinoma cells were subcutaneously injected into the left flank of 12 female athymic nude rats. After 10 days (SD ± 2) of subcutaneous tumor growth, sonographic grey scale including elasticity imaging and MRI measurements were performed using a high-end ultrasound system and a 3-T MR scanner. For image fusion, the contrast-enhanced MRI DICOM data set was uploaded to the ultrasound device, which has a magnetic field generator, a linear array transducer (6-15 MHz) and a dedicated software package (GE Logic E9) that can detect transducers by means of a positioning system. Conventional grey scale and elasticity imaging were integrated in the image fusion examination. After successful registration and image fusion, the registered MR images were shown simultaneously with the respective ultrasound sectional plane. Data evaluation was performed on the digitally stored video sequence data sets by two experienced radiologists using a modified Tsukuba elasticity score. The colors red and green are assigned to areas of soft tissue; blue indicates hard tissue. In all cases a successful image fusion and plane registration with MRI and ultrasound imaging, including grey scale and elasticity imaging, was possible. The mean tumor volume based on caliper measurements in 3 dimensions was ~323 mm3. 4/12 rats were evaluated with Score I, 5/12 rats with Score II, and 3/12 rats with Score III. There was a close correlation in the fused MRI with existing small necrosis in the tumor. None of the Score II or III lesions was visible on conventional grey scale imaging. The comparison of ultrasound tissue elasticity imaging enables a

  13. Stereotactic mammography imaging combined with 3D US imaging for image guided breast biopsy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Surry, K. J. M.; Mills, G. R.; Bevan, K.

    2007-11-15

Stereotactic X-ray mammography (SM) and ultrasound (US) guidance are both commonly used for breast biopsy. While SM provides three-dimensional (3D) targeting information and US provides real-time guidance, both have limitations. SM is a long and uncomfortable procedure and the US guided procedure is inherently two dimensional (2D), requiring a skilled physician for both safety and accuracy. The authors developed a 3D US-guided biopsy system to be integrated with, and to supplement, SM imaging. Their goal is to be able to biopsy a larger percentage of suspicious masses using US, by clarifying ambiguous structures with SM imaging. Features from SM and US guided biopsy were combined, including breast stabilization, a confined needle trajectory, and dual modality imaging. The 3D US guided biopsy system uses a 7.5 MHz breast probe and is mounted on an upright SM machine for preprocedural imaging. Intraprocedural targeting and guidance was achieved with real-time 2D and near real-time 3D US imaging. Postbiopsy 3D US imaging allowed for confirmation that the needle was penetrating the target. The authors evaluated the 3D US-guided biopsy accuracy of their system using test phantoms. To use mammographic imaging information, they registered the SM and 3D US coordinate systems. The 3D positions of targets identified in the SM images were determined with a target localization error (TLE) of 0.49 mm. The z component (x-ray tube to image) of the TLE dominated with a TLE_z of 0.47 mm. The SM system was then registered to 3D US, with a fiducial registration error (FRE) and target registration error (TRE) of 0.82 and 0.92 mm, respectively. Analysis of the FRE and TRE components showed that these errors were dominated by inaccuracies in the z component with a FRE_z of 0.76 mm and a TRE_z of 0.85 mm. A stereotactic mammography and 3D US guided breast biopsy system should include breast compression for stability and safety and dual modality imaging for target

  14. Image Viewer using Digital Imaging and Communications in Medicine (DICOM)

    NASA Astrophysics Data System (ADS)

    Baraskar, Trupti N.

    2010-11-01

Digital Imaging and Communications in Medicine (DICOM) is a standard for handling, storing, printing, and transmitting information in medical imaging. The National Electrical Manufacturers Association holds the copyright to this standard, which was developed by the DICOM Standards Committee. Other image viewers cannot store the image details together with the patient's information, so the image may become separated from those details; the DICOM file format, in contrast, stores both the patient's information and the image details. The main objective is to develop a DICOM image viewer. The image viewer will open .dcm (DICOM) image files and will also provide additional features such as zoom in, zoom out, black-and-white inversion, magnification, blur, horizontal and vertical flipping, sharpening, contrast, brightness, and .gif conversion.

  15. Matching rendered and real world images by digital image processing

    NASA Astrophysics Data System (ADS)

    Mitjà, Carles; Bover, Toni; Bigas, Miquel; Escofet, Jaume

    2010-05-01

Recent advances in computer-generated imagery (CGI) have been used in commercial and industrial photography, providing a broad scope in product advertising. Mixing real world images with those rendered from virtual space software shows a more or less visible mismatch between the corresponding image quality performance. Rendered images are produced by software whose quality is limited only by the output resolution. Real world images are taken with cameras that introduce some amount of image degradation from factors such as residual lens aberrations, diffraction, sensor low-pass anti-aliasing filters, color pattern demosaicing, etc. The effect of all those image quality degradation factors can be characterized by the system Point Spread Function (PSF). Because the image is the convolution of the object by the system PSF, its characterization shows the amount of image degradation added to any picture taken. This work explores the use of image processing to degrade the rendered images following the parameters indicated by the real system PSF, attempting to match the virtual and real world image qualities. The system MTF is determined by the slanted edge method both in laboratory conditions and in the real picture environment in order to compare the influence of the working conditions on the device performance; an approximation to the system PSF is derived from the two measurements. The rendered images are filtered through a Gaussian filter obtained from the taking system PSF. Results with and without filtering are shown and compared by measuring the contrast achieved in different final image regions.
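
    A minimal sketch of the final filtering step, assuming a grayscale rendered image and a Gaussian PSF summarized by a single sigma in pixels; the function name and optional noise term are illustrative additions, not the authors' code.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def match_render_to_camera(rendered, psf_sigma_px, noise_std=0.0):
        """Degrade a grayscale rendered image with a Gaussian approximation of the
        measured system PSF (sigma in pixels), optionally adding sensor-like noise."""
        blurred = gaussian_filter(rendered.astype(float), sigma=psf_sigma_px)
        if noise_std > 0:
            blurred += np.random.normal(0.0, noise_std, blurred.shape)
        return np.clip(blurred, 0.0, 255.0)

    # usage: matched = match_render_to_camera(cgi_image, psf_sigma_px=1.4)
    ```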

  16. Pediatric chest imaging.

    PubMed

    Gross, G W

    1992-10-01

    The highlight of recent articles published on pediatric chest imaging is the potential advantage of digital imaging of the infant's chest. Digital chest imaging allows accurate determination of functional residual capacity as well as manipulation of the image to highlight specific anatomic features. Reusable photostimulable phosphor imaging systems provide wide imaging latitude and lower patient dose. In addition, digital radiology permits multiple remote-site viewing on monitor displays. Several excellent reviews of the imaging features of various thoracic abnormalities and the application of newer imaging modalities, such as ultrafast CT and MR imaging to the pediatric chest, are additional highlights.

  17. Image resolution enhancement via image restoration using neural network

    NASA Astrophysics Data System (ADS)

    Zhang, Shuangteng; Lu, Yihong

    2011-04-01

Image super-resolution aims to obtain a high-quality image at a resolution that is higher than that of the original coarse one. This paper presents a new neural network-based method for image super-resolution. In this technique, super-resolution is considered as an inverse problem. An observation model that closely follows the physical image acquisition process is established to solve the problem. Based on this model, a cost function is created and minimized by a Hopfield neural network to produce high-resolution images from the corresponding low-resolution ones. Unlike some other single-frame super-resolution techniques, this technique takes into consideration point spread function blurring as well as additive noise, and therefore generates high-resolution images with more preserved or restored image detail. Experimental results demonstrate that the high-resolution images obtained by this technique have a very high quality in terms of PSNR and look more visually pleasant.

  18. Rectification of elemental image set and extraction of lens lattice by projective image transformation in integral imaging.

    PubMed

    Hong, Keehoon; Hong, Jisoo; Jung, Jae-Hyun; Park, Jae-Hyeung; Lee, Byoungho

    2010-05-24

We propose a new method for rectifying geometrical distortion in the elemental image set and extracting accurate lens lattice lines by projective image transformation. The information about distortion in the acquired elemental image set is found by the Hough transform algorithm. With this initial information about the distortions, the acquired elemental image set is rectified automatically, without prior knowledge of the pickup system characteristics, by a stratified image transformation procedure. Computer-generated elemental image sets with intentional distortion are used to verify the proposed rectification method. Experimentally captured elemental image sets are optically reconstructed before and after rectification by the proposed method. The experimental results support the validity of the proposed method, with high accuracy of image rectification and lattice extraction.
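
    The two ingredients named above, lattice-line detection via the Hough transform and rectification via a projective (homography) warp, might be combined roughly as in the sketch below using scikit-image. Deriving the four corner correspondences from the detected lines is omitted, and the whole routine is an illustration rather than the paper's stratified procedure.

    ```python
    import numpy as np
    from skimage.feature import canny
    from skimage.transform import (hough_line, hough_line_peaks,
                                   ProjectiveTransform, warp)

    def rectify_elemental_images(img, src_corners, dst_corners):
        """Detect candidate lattice lines, then rectify the elemental image set
        with a projective transform estimated from four corner correspondences
        (src_corners and dst_corners are 4x2 arrays of (x, y) points)."""
        edges = canny(img)
        h, theta, d = hough_line(edges)
        _, angles, dists = hough_line_peaks(h, theta, d)   # dominant lattice lines

        tform = ProjectiveTransform()
        tform.estimate(np.asarray(src_corners), np.asarray(dst_corners))
        return warp(img, tform.inverse), angles, dists
    ```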

  19. Corner-point criterion for assessing nonlinear image processing imagers

    NASA Astrophysics Data System (ADS)

    Landeau, Stéphane; Pigois, Laurent; Foing, Jean-Paul; Deshors, Gilles; Swiathy, Greggory

    2017-10-01

Range performance modeling of optronics imagers attempts to characterize the ability to resolve details in the image. Today, digital image processing is systematically used in conjunction with the optoelectronic system to correct its defects or to exploit tiny detection signals to increase performance. In order to characterize this processing, which has adaptive and non-linear properties, it becomes necessary to stimulate the imagers with test patterns whose properties are similar to those of actual scene images, in terms of dynamic range, contours, texture and singular points. This paper presents an approach based on a Corner-Point (CP) resolution criterion, derived from the Probability of Correct Resolution (PCR) of binary fractal patterns. The fundamental principle lies in the correct perception of the CP direction, i.e., of the one minority-value pixel among the majority value in a 2×2 pixel block. The evaluation procedure considers the actual image as its multi-resolution CP transformation, which takes the role of Ground Truth (GT). After a spatial registration between the degraded image and the original one, the degradation is statistically measured by comparing the GT with the CP transformation of the degraded image, in terms of localized PCR in the region of interest. The paper defines this CP criterion and presents the developed evaluation techniques, such as the measurement of the number of CPs resolved on the target, and the CP transformation and its inverse transform, which make it possible to reconstruct an image of the perceived CPs. This criterion is then compared with the standard Johnson criterion in the case of linear blur and noise degradation. The evaluation of an imaging system integrating an image display and visual perception is considered, by proposing an analysis scheme combining two methods: a CP measurement for the highly non-linear part (imaging) with a real-signature test target, and conventional methods for the more linear part (displaying). The application to
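
    As a toy illustration only (not the published criterion's reference implementation), the sketch below flags 2x2 blocks of a binarized image that contain exactly one minority-valued pixel and records which corner that pixel occupies; the block scan, corner encoding, and binarization are all assumptions.

    ```python
    import numpy as np

    def corner_points(binary_img):
        """Return (row, col, corner_index) for every non-overlapping 2x2 block
        that contains exactly one minority-valued pixel (a 'corner point')."""
        cps = []
        h, w = binary_img.shape
        for r in range(0, h - 1, 2):
            for c in range(0, w - 1, 2):
                block = binary_img[r:r + 2, c:c + 2].astype(int)
                ones = int(block.sum())
                if ones in (1, 3):                    # exactly one minority pixel
                    minority = 1 if ones == 1 else 0
                    idx = int(np.flatnonzero(block.ravel() == minority)[0])
                    cps.append((r, c, idx))           # idx 0..3 encodes the CP direction
        return cps
    ```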

  20. Volumetric CT-images improve testing of radiological image interpretation skills.

    PubMed

    Ravesloot, Cécile J; van der Schaaf, Marieke F; van Schaik, Jan P J; ten Cate, Olle Th J; van der Gijp, Anouk; Mol, Christian P; Vincken, Koen L

    2015-05-01

Current radiology practice increasingly involves interpretation of volumetric data sets. In contrast, most radiology tests still contain only 2D images. We introduced a new testing tool that allows for stack viewing of volumetric images in our undergraduate radiology program. We hypothesized that tests with volumetric CT-images enhance test quality, in comparison with traditional completely 2D image-based tests, because they might better reflect the skills required for clinical practice. Two groups of medical students (n=139; n=143), trained with 2D and volumetric CT-images, took a digital radiology test in two versions (A and B), each containing both 2D and volumetric CT-image questions. In a questionnaire, they were asked to comment on the representativeness for clinical practice, difficulty and user-friendliness of the test questions and testing program. Students' test scores and reliabilities, measured with Cronbach's alpha, of 2D and volumetric CT-image tests were compared. Estimated reliabilities (Cronbach's alphas) were higher for volumetric CT-image scores (version A: .51 and version B: .54) than for 2D CT-image scores (version A: .24 and version B: .37). Participants found volumetric CT-image questions more representative of clinical practice and considered them to be less difficult than 2D CT-image questions. However, in one version (A), volumetric CT-image scores (M 80.9, SD 14.8) were significantly lower than 2D CT-image scores (M 88.4, SD 10.4) (p<.001). The volumetric CT-image testing program was considered user-friendly. This study shows that volumetric image questions can be successfully integrated in students' radiology testing. Results suggest that the inclusion of volumetric CT-images might improve the quality of radiology tests by positively impacting perceived representativeness for clinical practice and increasing test reliability. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  1. Image Acquisition Context

    PubMed Central

    Bidgood, W. Dean; Bray, Bruce; Brown, Nicolas; Mori, Angelo Rossi; Spackman, Kent A.; Golichowski, Alan; Jones, Robert H.; Korman, Louis; Dove, Brent; Hildebrand, Lloyd; Berg, Michael

    1999-01-01

    Objective: To support clinically relevant indexing of biomedical images and image-related information based on the attributes of image acquisition procedures and the judgments (observations) expressed by observers in the process of image interpretation. Design: The authors introduce the notion of “image acquisition context,” the set of attributes that describe image acquisition procedures, and present a standards-based strategy for utilizing the attributes of image acquisition context as indexing and retrieval keys for digital image libraries. Methods: The authors' indexing strategy is based on an interdependent message/terminology architecture that combines the Digital Imaging and Communication in Medicine (DICOM) standard, the SNOMED (Systematized Nomenclature of Human and Veterinary Medicine) vocabulary, and the SNOMED DICOM microglossary. The SNOMED DICOM microglossary provides context-dependent mapping of terminology to DICOM data elements. Results: The capability of embedding standard coded descriptors in DICOM image headers and image-interpretation reports improves the potential for selective retrieval of image-related information. This favorably affects information management in digital libraries. PMID:9925229
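
    A tiny illustration of the indexing idea, assuming the pydicom library and a hypothetical file path: a few acquisition-context attributes are read from a DICOM header for use as retrieval keys. The attributes chosen here are common DICOM keywords picked for the example, not the SNOMED DICOM microglossary mapping.

    ```python
    import pydicom

    def acquisition_context_keys(dicom_path):
        """Pull a few acquisition-context attributes from a DICOM header for use
        as indexing/retrieval keys; availability of each attribute depends on the
        modality and the producing system."""
        ds = pydicom.dcmread(dicom_path, stop_before_pixels=True)
        return {
            "Modality": getattr(ds, "Modality", None),
            "BodyPartExamined": getattr(ds, "BodyPartExamined", None),
            "StudyDescription": getattr(ds, "StudyDescription", None),
            "SeriesDescription": getattr(ds, "SeriesDescription", None),
        }

    # usage (hypothetical path): keys = acquisition_context_keys("study/series/image001.dcm")
    ```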

  2. Imaging investigations in Spine Trauma: The value of commonly used imaging modalities and emerging imaging modalities.

    PubMed

    Tins, Bernhard J

    2017-01-01

Traumatic spine injuries can be devastating for the patients affected and for health care professionals if preventable neurological deterioration occurs. This review discusses the imaging options for the diagnosis of spinal trauma. It lays out when imaging is appropriate and when it is not, and discusses the strengths and weaknesses of the available imaging modalities. Advanced techniques for spinal injury imaging are also explored. The review concludes with an overview of imaging protocols adjusted to clinical circumstances.

  3. Three modality image registration of brain SPECT/CT and MR images for quantitative analysis of dopamine transporter imaging

    NASA Astrophysics Data System (ADS)

    Yamaguchi, Yuzuho; Takeda, Yuta; Hara, Takeshi; Zhou, Xiangrong; Matsusako, Masaki; Tanaka, Yuki; Hosoya, Kazuhiko; Nihei, Tsutomu; Katafuchi, Tetsuro; Fujita, Hiroshi

    2016-03-01

Important features of Parkinson's disease (PD) are degeneration and loss of dopamine neurons in the corpus striatum. 123I-FP-CIT can visualize the activity of the dopamine neurons. The activity ratio of background to corpus striatum is used for the diagnosis of PD and Dementia with Lewy Bodies (DLB). The specific activity can be observed in the corpus striatum on SPECT images, but the location and shape of the corpus striatum are often lost on SPECT images alone because of the low uptake. In contrast, MR images can visualize the location of the corpus striatum. The purpose of this study was to realize a quantitative image analysis for the SPECT images by using an image registration technique with brain MR images that can determine the region of the corpus striatum. In this study, an image fusion technique was used to fuse SPECT and MR images via the intervening CT image taken by the SPECT/CT scanner. Mutual information (MI) between the CT and MR images was used for the registration. Six SPECT/CT and four MR scans of phantom materials were taken with varying orientations. As a result of the image registrations, 16 of the 24 combinations were registered within 1.3 mm. By applying the approach to 32 clinical SPECT/CT and MR cases, all of the cases were registered within 0.86 mm. In conclusion, our registration method has potential for superimposing MR images on SPECT images.
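
    A compact sketch of mutual-information-driven rigid registration between the CT and MR volumes using SimpleITK; the optimizer settings, initializer, and file paths are assumptions for illustration and do not reproduce the study's actual pipeline.

    ```python
    import SimpleITK as sitk

    def register_mr_to_ct(ct_path, mr_path):
        """Rigidly register an MR volume to the SPECT/CT's CT volume using
        Mattes mutual information (parameters are illustrative)."""
        fixed = sitk.ReadImage(ct_path, sitk.sitkFloat32)
        moving = sitk.ReadImage(mr_path, sitk.sitkFloat32)

        reg = sitk.ImageRegistrationMethod()
        reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
        reg.SetOptimizerAsRegularStepGradientDescent(learningRate=1.0,
                                                     minStep=1e-4,
                                                     numberOfIterations=200)
        reg.SetInterpolator(sitk.sitkLinear)
        reg.SetInitialTransform(sitk.CenteredTransformInitializer(
            fixed, moving, sitk.Euler3DTransform(),
            sitk.CenteredTransformInitializerFilter.GEOMETRY))

        transform = reg.Execute(fixed, moving)
        resampled = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)
        return transform, resampled
    ```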

  4. Image processing on the image with pixel noise bits removed

    NASA Astrophysics Data System (ADS)

    Chuang, Keh-Shih; Wu, Christine

    1992-06-01

Our previous studies used statistical methods to assess the noise level in digital images of various radiological modalities. We separated the pixel data into signal bits and noise bits and demonstrated visually that the removal of the noise bits does not affect the image quality. In this paper we apply image enhancement techniques to noise-bits-removed images and demonstrate that the removal of noise bits has no effect on the image properties. The image processing techniques used are gray-level look-up table transformation, the Sobel edge detector, and 3-D surface display. Preliminary results show no noticeable difference between the original image and the noise-bits-removed image using the look-up table operation and Sobel edge enhancement. There is a slight enhancement of the slicing artifact in the 3-D surface display of the noise-bits-removed image.
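
    The bit-removal step can be pictured as masking off the least-significant bits before applying an enhancement such as a Sobel edge map; the sketch below uses a random placeholder image and an assumed count of three noise bits.

    ```python
    import numpy as np
    from scipy import ndimage

    def drop_noise_bits(image, n_noise_bits):
        """Zero the n least-significant (noise) bits of an integer image."""
        mask = ~np.uint16((1 << n_noise_bits) - 1)
        return image.astype(np.uint16) & mask

    # Compare Sobel edge maps before and after removing, e.g., 3 noise bits.
    original = np.random.randint(0, 4096, (256, 256), dtype=np.uint16)  # placeholder image
    cleaned = drop_noise_bits(original, 3)
    edges_orig = ndimage.sobel(original.astype(float))
    edges_clean = ndimage.sobel(cleaned.astype(float))
    ```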

  5. Hybrid Image Fusion for Sharpness Enhancement of Multi-Spectral Lunar Images

    NASA Astrophysics Data System (ADS)

    Awumah, Anna; Mahanti, Prasun; Robinson, Mark

    2016-10-01

Image fusion enhances the sharpness of a multi-spectral (MS) image by incorporating spatial details from a higher-resolution panchromatic (Pan) image [1,2]. Known applications of image fusion for planetary images are rare, although image fusion is well-known for its applications to Earth-based remote sensing. In a recent work [3], six different image fusion algorithms were implemented and their performances were verified with images from the Lunar Reconnaissance Orbiter (LRO) Camera. The image fusion procedure obtained a high-resolution multi-spectral (HRMS) product from the LRO Narrow Angle Camera (used as Pan) and LRO Wide Angle Camera (used as MS) images. The results showed that the Intensity-Hue-Saturation (IHS) algorithm results in a high-spatial-quality product, while the Wavelet-based image fusion algorithm best preserves spectral quality among all the algorithms. In this work we show the results of a hybrid IHS-Wavelet image fusion algorithm when applied to LROC MS images. The hybrid method provides the best HRMS product, both in terms of spatial resolution and preservation of spectral details. Results from hybrid image fusion can enable new science and increase the science return from existing LROC images. [1] Pohl, C., and J. L. Van Genderen. "Review article: Multisensor image fusion in remote sensing: concepts, methods and applications." International Journal of Remote Sensing 19.5 (1998): 823-854. [2] Zhang, Y. "Understanding image fusion." Photogrammetric Engineering & Remote Sensing 70.6 (2004): 657-661. [3] Mahanti, P., et al. "Enhancement of spatial resolution of the LROC Wide Angle Camera images." XXIII ISPRS Congress Archives (2016).
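
    A simplified component-substitution sketch in the spirit of IHS pansharpening is shown below; it uses an HSV transform as a stand-in for IHS and assumes a float RGB multispectral image and a single-band panchromatic image, so it should be read as an illustration of the idea rather than the hybrid IHS-Wavelet algorithm itself.

    ```python
    import numpy as np
    from skimage.color import rgb2hsv, hsv2rgb
    from skimage.transform import resize

    def ihs_like_fusion(ms_rgb, pan):
        """Upsample the MS image to the Pan grid, swap its intensity channel for
        the rescaled Pan band, and convert back (HSV used in place of IHS)."""
        ms_up = resize(ms_rgb, pan.shape + (3,), anti_aliasing=True)
        hsv = rgb2hsv(ms_up)
        pan_scaled = (pan - pan.min()) / (np.ptp(pan) + 1e-12)  # match [0, 1] range
        hsv[..., 2] = pan_scaled
        return hsv2rgb(hsv)

    # usage (hypothetical arrays): hrms = ihs_like_fusion(wac_rgb, nac_pan)
    ```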

  6. Calibration Image of Earth by Mars Color Imager

    NASA Technical Reports Server (NTRS)

    2005-01-01

    Three days after the Mars Reconnaissance Orbiter's Aug. 12, 2005, launch, the NASA spacecraft was pointed toward Earth and the Mars Color Imager camera was powered up to acquire a suite of color and ultraviolet images of Earth and the Moon. When it gets to Mars, the Mars Color Imager's main objective will be to obtain daily global color and ultraviolet images of the planet to observe martian meteorology by documenting the occurrence of dust storms, clouds, and ozone. This camera will also observe how the martian surface changes over time, including changes in frost patterns and surface brightness caused by dust storms and dust devils.

    The purpose of acquiring an image of Earth and the Moon just three days after launch was to help the Mars Color Imager science team obtain a measure, in space, of the instrument's sensitivity, as well as to check that no contamination occurred on the camera during launch. Prior to launch, the team determined that, three days out from Earth, the planet would only be about 4.77 pixels across, and the Moon would be less than one pixel in size, as seen from the Mars Color Imager's wide-angle perspective. If the team waited any longer than three days to test the camera's performance in space, Earth would be too small to obtain meaningful results.

    The images were acquired by turning Mars Reconnaissance Orbiter toward Earth, then slewing the spacecraft so that the Earth and Moon would pass before each of the five color and two ultraviolet filters of the Mars Color Imager. The distance to Earth was about 1,170,000 kilometers (about 727,000 miles).

    This image shows a color composite view of Mars Color Imager's image of Earth. As expected, it covers only five pixels. This color view has been enlarged five times. The Sun was illuminating our planet from the left, thus only one quarter of Earth is seen from this perspective. North America was in daylight and facing toward the camera at the time the picture was taken; the data

  7. Image-adaptive and robust digital wavelet-domain watermarking for images

    NASA Astrophysics Data System (ADS)

    Zhao, Yi; Zhang, Liping

    2018-03-01

We propose a new frequency-domain, wavelet-based watermarking technique. The key idea of our scheme is twofold: a multi-tier representation of the image, and odd-even quantization for embedding and extracting the watermark. Because many complementary watermarks need to be hidden, the designed watermark image is image-adaptive. The meaningful and complementary watermark images were embedded into the original (host) image by odd-even quantization of coefficients selected from the detail wavelet coefficients of the original image whose magnitudes are larger than their corresponding Just Noticeable Difference (JND) thresholds. The tests show good robustness against well-known attacks such as noise addition, image compression, median filtering and clipping, as well as geometric transforms. Further research may improve the performance by refining the JND thresholds.
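
    Odd-even (parity) quantization embedding of a single bit into a wavelet detail coefficient might look like the sketch below, using PyWavelets for the transform; the quantization step, the choice of coefficient, and the omission of the JND-based selection are assumptions for illustration.

    ```python
    import numpy as np
    import pywt

    def embed_bit(coeff, bit, delta=8.0):
        """Odd-even (quantization index) embedding of one watermark bit into a
        single wavelet detail coefficient with quantization step delta."""
        q = np.floor(coeff / delta)
        if int(q) % 2 != bit:            # force the quantized index parity to match the bit
            q += 1
        return q * delta

    def extract_bit(coeff, delta=8.0):
        return int(np.floor(coeff / delta)) % 2

    # Sketch of use on one detail subband (JND-based coefficient selection omitted):
    image = np.random.rand(64, 64) * 255
    cA, (cH, cV, cD) = pywt.dwt2(image, 'haar')
    cH[0, 0] = embed_bit(cH[0, 0], bit=1)
    watermarked = pywt.idwt2((cA, (cH, cV, cD)), 'haar')
    ```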

  8. Quantitative imaging features: extension of the oncology medical image database

    NASA Astrophysics Data System (ADS)

    Patel, M. N.; Looney, P. T.; Young, K. C.; Halling-Brown, M. D.

    2015-03-01

Radiological imaging is fundamental within the healthcare industry and has become routinely adopted for diagnosis, disease monitoring and treatment planning. With the advent of digital imaging modalities and the rapid growth in both diagnostic and therapeutic imaging, the ability to harness this large influx of data is of paramount importance. The Oncology Medical Image Database (OMI-DB) was created to provide a centralized, fully annotated dataset for research. The database contains both processed and unprocessed images, associated data, and annotations and, where applicable, expert-determined ground truths describing features of interest. Medical imaging provides the ability to detect and localize many changes that are important to determine whether a disease is present or a therapy is effective by depicting alterations in anatomic, physiologic, biochemical or molecular processes. Quantitative imaging features are sensitive, specific, accurate and reproducible imaging measures of these changes. Here, we describe an extension to the OMI-DB whereby a range of imaging features and descriptors are pre-calculated using a high throughput approach. The ability to calculate multiple imaging features and data from the acquired images would be valuable and facilitate further research applications investigating detection, prognosis, and classification. The resultant data store contains more than 10 million quantitative features as well as features derived from CAD predictions. These data can be used to build predictive models to aid image classification and treatment response assessment, as well as to identify prognostic imaging biomarkers.

  9. Lucky Imaging: Improved Localization Accuracy for Single Molecule Imaging

    PubMed Central

    Cronin, Bríd; de Wet, Ben; Wallace, Mark I.

    2009-01-01

    We apply the astronomical data-analysis technique, Lucky imaging, to improve resolution in single molecule fluorescence microscopy. We show that by selectively discarding data points from individual single-molecule trajectories, imaging resolution can be improved by a factor of 1.6 for individual fluorophores and up to 5.6 for more complex images. The method is illustrated using images of fluorescent dye molecules and quantum dots, and the in vivo imaging of fluorescently labeled linker for activation of T cells. PMID:19348772

  10. An enhanced approach for biomedical image restoration using image fusion techniques

    NASA Astrophysics Data System (ADS)

    Karam, Ghada Sabah; Abbas, Fatma Ismail; Abood, Ziad M.; Kadhim, Kadhim K.; Karam, Nada S.

    2018-05-01

Biomedical images are generally noisy and slightly blurred owing to the physical mechanisms of the acquisition process, so common degradations in biomedical images are noise and poor contrast. The idea of biomedical image enhancement is to improve the quality of the image for early diagnosis. In this paper we use the wavelet transform to remove Gaussian noise from biomedical images, a Positron Emission Tomography (PET) image and a radiography (Radio) image, in different color spaces (RGB, HSV, YCbCr), and we fuse the denoised images resulting from these denoising techniques using an image-addition method. Quantitative performance metrics such as the signal-to-noise ratio (SNR), peak signal-to-noise ratio (PSNR), and mean square error (MSE) are then computed, since these statistical measures help in the assessment of fidelity and image quality. The results showed that our approach can be applied to biomedical images in the various color spaces.
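
    For reference, the quality metrics named above have standard definitions; a minimal sketch follows, assuming 8-bit images unless a different peak value is passed.

    ```python
    import numpy as np

    def mse(reference, test):
        """Mean square error between a reference image and a test image."""
        return float(np.mean((reference.astype(float) - test.astype(float)) ** 2))

    def psnr(reference, test, peak=255.0):
        """Peak signal-to-noise ratio in dB (peak defaults to 8-bit full scale)."""
        m = mse(reference, test)
        return float('inf') if m == 0 else 10.0 * np.log10(peak ** 2 / m)

    def snr(reference, test):
        """Signal-to-noise ratio in dB, treating (reference - test) as the noise."""
        noise = reference.astype(float) - test.astype(float)
        return 10.0 * np.log10(np.sum(reference.astype(float) ** 2) / np.sum(noise ** 2))
    ```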

  11. Image Quality in High-resolution and High-cadence Solar Imaging

    NASA Astrophysics Data System (ADS)

    Denker, C.; Dineva, E.; Balthasar, H.; Verma, M.; Kuckein, C.; Diercke, A.; González Manrique, S. J.

    2018-03-01

    Broad-band imaging and even imaging with a moderate bandpass (about 1 nm) provides a photon-rich environment, where frame selection (lucky imaging) becomes a helpful tool in image restoration, allowing us to perform a cost-benefit analysis on how to design observing sequences for imaging with high spatial resolution in combination with real-time correction provided by an adaptive optics (AO) system. This study presents high-cadence (160 Hz) G-band and blue continuum image sequences obtained with the High-resolution Fast Imager (HiFI) at the 1.5-meter GREGOR solar telescope, where the speckle-masking technique is used to restore images with nearly diffraction-limited resolution. The HiFI employs two synchronized large-format and high-cadence sCMOS detectors. The median filter gradient similarity (MFGS) image-quality metric is applied, among others, to AO-corrected image sequences of a pore and a small sunspot observed on 2017 June 4 and 5. A small region of interest, which was selected for fast-imaging performance, covered these contrast-rich features and their neighborhood, which were part of Active Region NOAA 12661. Modifications of the MFGS algorithm uncover the field- and structure-dependency of this image-quality metric. However, MFGS still remains a good choice for determining image quality without a priori knowledge, which is an important characteristic when classifying the huge number of high-resolution images contained in data archives. In addition, this investigation demonstrates that a fast cadence and millisecond exposure times are still insufficient to reach the coherence time of daytime seeing. Nonetheless, the analysis shows that data acquisition rates exceeding 50 Hz are required to capture a substantial fraction of the best seeing moments, significantly boosting the performance of post-facto image restoration.

  12. Image Reconstruction for Interferometric Imaging of Geosynchronous Satellites

    NASA Astrophysics Data System (ADS)

    DeSantis, Zachary J.

Imaging distant objects at high resolution has always presented a challenge due to the diffraction limit. Larger apertures improve the resolution, but at some point the cost of engineering, building, and correcting phase aberrations of large apertures becomes prohibitive. Interferometric imaging uses the Van Cittert-Zernike theorem to form an image from measurements of spatial coherence. This effectively allows the synthesis of a large aperture from two or more smaller telescopes to improve the resolution. We apply this method to imaging geosynchronous satellites with a ground-based system. Imaging a dim object from the ground presents unique challenges. The atmosphere creates errors in the phase measurements. The measurements are taken simultaneously across a large bandwidth of light. The atmospheric piston error therefore manifests as a linear phase error across the spectral measurements. Because the objects are faint, many of the measurements are expected to have a poor signal-to-noise ratio (SNR). This eliminates the possibility of using techniques such as closure phase, a standard technique in astronomical interferometric imaging for making partial phase measurements in the presence of atmospheric error. The bulk of our work has been focused on forming an image, using sub-Nyquist sampled data, in the presence of these linear phase errors without relying on closure phase techniques. We present an image reconstruction algorithm that successfully forms an image in the presence of these linear phase errors. We demonstrate our algorithm's success in both simulation and laboratory experiments.

  13. Optoelectronic imaging of speckle using image processing method

    NASA Astrophysics Data System (ADS)

    Wang, Jinjiang; Wang, Pengfei

    2018-01-01

A detailed image processing procedure for laser speckle interferometry is presented as an example for a postgraduate course. Several image processing methods were used together to handle the optoelectronic imaging system: partial differential equations (PDEs) are used to reduce the effect of noise; thresholding segmentation is also based on the heat equation with PDEs; the center line is extracted based on the image skeleton, with branches removed automatically; the phase level is calculated by a spline interpolation method; and the fringe phase can then be unwrapped. Finally, the image processing method was used to automatically measure a bubble in rubber under negative pressure, which could be used in tire inspection.
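
    A simplified sketch of the segmentation and center-line extraction steps using scikit-image is given below; Gaussian smoothing and Otsu thresholding stand in for the PDE-based denoising and heat-equation thresholding described above, so the pipeline is illustrative only.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter
    from skimage.filters import threshold_otsu
    from skimage.morphology import skeletonize

    def fringe_centerlines(fringe_image, smooth_sigma=2.0):
        """Smooth the speckle fringes, segment them, and thin the result to
        one-pixel-wide center lines."""
        smoothed = gaussian_filter(fringe_image.astype(float), smooth_sigma)
        binary = smoothed > threshold_otsu(smoothed)
        return skeletonize(binary)

    # usage (hypothetical array): centerlines = fringe_centerlines(speckle_fringes)
    ```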

  14. Image management research

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.

    1988-01-01

    Two types of research issues are involved in image management systems with space station applications: image processing research and image perception research. The image processing issues are the traditional ones of digitizing, coding, compressing, storing, analyzing, and displaying, but with a new emphasis on the constraints imposed by the human perceiver. Two image coding algorithms have been developed that may increase the efficiency of image management systems (IMS). Image perception research involves a study of the theoretical and practical aspects of visual perception of electronically displayed images. Issues include how rapidly a user can search through a library of images, how to make this search more efficient, and how to present images in terms of resolution and split screens. Other issues include optimal interface to an IMS and how to code images in a way that is optimal for the human perceiver. A test-bed within which such issues can be addressed has been designed.

  15. Review methods for image segmentation from computed tomography images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mamat, Nurwahidah; Rahman, Wan Eny Zarina Wan Abdul; Soh, Shaharuddin Cik

Image segmentation is a challenging process in terms of achieving accuracy, automation and robustness, especially in medical images. Many segmentation methods can be applied to medical images, but not all of them are suitable. For medical purposes, the aims of image segmentation are to study the anatomical structure, identify the region of interest, measure tissue volume to track tumor growth, and help in treatment planning prior to radiation therapy. In this paper, we present a review of segmentation methods for Computed Tomography (CT) images. CT images have their own characteristics that affect the ability to visualize anatomic structures and pathologic features, such as blurring of the image and visual noise. The details of the methods, their strengths, and the problems they incur will be defined and explained. It is necessary to know the suitable segmentation method in order to obtain an accurate segmentation. This paper can serve as a guide for researchers choosing a suitable segmentation method, especially for segmenting images from CT scans.

  16. Image Fusion During Vascular and Nonvascular Image-Guided Procedures☆

    PubMed Central

    Abi-Jaoudeh, Nadine; Kobeiter, Hicham; Xu, Sheng; Wood, Bradford J.

    2013-01-01

Image fusion may be useful in any procedure where previous imaging such as positron emission tomography, magnetic resonance imaging, or contrast-enhanced computed tomography (CT) defines information that is referenced to the procedural imaging, to the needle or catheter, or to an ultrasound transducer. Fusion of prior and intraoperative imaging provides real-time feedback on tumor location or margin, metabolic activity, device location, or vessel location. Multimodality image fusion in interventional radiology was initially introduced for biopsies and ablations, especially for lesions only seen on arterial phase CT, magnetic resonance imaging, or positron emission tomography/CT, but has more recently been applied to other vascular and nonvascular procedures. Two different types of platforms are commonly used for image fusion and navigation: (1) electromagnetic tracking and (2) cone-beam CT. Both technologies are reviewed here, along with their strengths and weaknesses, indications, when to use one vs the other, tips and guidance to streamline use, and early evidence defining the clinical benefits of these rapidly evolving, commercially available and emerging techniques. PMID:23993079

  17. Magnetic resonance imaging based functional imaging in paediatric oncology.

    PubMed

    Manias, Karen A; Gill, Simrandip K; MacPherson, Lesley; Foster, Katharine; Oates, Adam; Peet, Andrew C

    2017-02-01

    Imaging is central to management of solid tumours in children. Conventional magnetic resonance imaging (MRI) is the standard imaging modality for tumours of the central nervous system (CNS) and limbs and is increasingly used in the abdomen. It provides excellent structural detail, but imparts limited information about tumour type, aggressiveness, metastatic potential or early treatment response. MRI based functional imaging techniques, such as magnetic resonance spectroscopy, diffusion and perfusion weighted imaging, probe tissue properties to provide clinically important information about metabolites, structure and blood flow. This review describes the role of and evidence behind these functional imaging techniques in paediatric oncology and implications for integrating them into routine clinical practice. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. First Human Experience with Directly Image-able Iodinated Embolization Microbeads

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Levy, Elliot B., E-mail: levyeb@cc.nih.gov; Krishnasamy, Venkatesh P.; Lewis, Andrew L.

Purpose To describe the first clinical experience with a directly image-able, inherently radio-opaque microspherical embolic agent for transarterial embolization of liver tumors. Methodology LC Bead LUMI™ is a new product based upon sulfonate-modified polyvinyl alcohol hydrogel microbeads with covalently bound iodine (~260 mg I/ml). 70–150 μm LC Bead LUMI™ iodinated microbeads were injected selectively via a 2.8 Fr microcatheter to near complete flow stasis into hepatic arteries in three patients with hepatocellular carcinoma, carcinoid, or neuroendocrine tumor. A custom imaging platform tuned for LC LUMI™ microbead conspicuity using a cone beam CT (CBCT)/angiographic C-arm system (Allura Clarity FD20, Philips) was used along with CBCT embolization treatment planning software (EmboGuide, Philips). Results LC Bead LUMI™ image-able microbeads were easily delivered and monitored during the procedure using fluoroscopy, single-shot radiography (SSD), digital subtraction angiography (DSA), dual-phase enhanced and unenhanced CBCT, and unenhanced conventional CT obtained 48 h after the procedure. Intra-procedural imaging demonstrated tumor at risk for potential under-treatment, defined as a paucity of image-able microbeads within a portion of the tumor, which was confirmed at 48 h CT imaging. Fusion of pre- and post-embolization CBCT identified vessels without beads that corresponded to enhancing tumor tissue in the same location on follow-up imaging (48 h post). Conclusion LC Bead LUMI™ image-able microbeads provide real-time feedback and geographic localization of treatment during the procedure. The distribution and density of image-able beads within a tumor need further evaluation as an additional endpoint for embolization.

  19. BMC ecology image competition 2017: the winning images.

    PubMed

    Foote, Christopher; Darimont, Chris T; Baguette, Michel; Blanchet, Simon; Jacobus, Luke M; Mazzi, Dominique; Settele, Josef

    2017-08-18

    For the fifth year, BMC Ecology is proud to present the winning images from our annual image competition. The 2017 edition received entries by talented shutterbug-ecologists from across the world, showcasing research that is increasing our understanding of ecosystems worldwide and the beauty and diversity of life on our planet. In this editorial we showcase the winning images, as chosen by our Editorial Board and guest judge Chris Darimont, as well as our selection of highly commended images. Enjoy!

  20. Characteristics of composite images in multiview imaging and integral photography.

    PubMed

    Lee, Beom-Ryeol; Hwang, Jae-Jeong; Son, Jung-Young

    2012-07-20

The compositions of the images projected to a viewer's eyes from the various viewing regions of the viewing zone formed in one-dimensional integral photography (IP) and multiview imaging (MV) are identified. These compositions indicate that the images are made up of pieces from different view images. Comparisons of the composite images with images composited at various regions of the imaging space formed by camera arrays for multiview image acquisition reveal that the composite images do not involve any scene folding in the central viewing zone for either MV or IP. In the IP case, however, compositions from neighboring viewing regions aligned in the horizontal direction have reversed disparities, whereas no reversed disparities are expected in the viewing regions between the central and side viewing zones; MV, in contrast, does exhibit them there.

  1. Imaging Human Brain Perfusion with Inhaled Hyperpolarized 129Xe MR Imaging.

    PubMed

    Rao, Madhwesha R; Stewart, Neil J; Griffiths, Paul D; Norquay, Graham; Wild, Jim M

    2018-02-01

Purpose To evaluate the feasibility of directly imaging perfusion of human brain tissue by using magnetic resonance (MR) imaging with inhaled hyperpolarized xenon 129 (129Xe). Materials and Methods In vivo imaging with 129Xe was performed in three healthy participants. The combination of a high-yield spin-exchange optical pumping 129Xe polarizer, custom-built radiofrequency coils, and an optimized gradient-echo MR imaging protocol was used to achieve signal sensitivity sufficient to directly image hyperpolarized 129Xe dissolved in the human brain. Conventional T1-weighted proton (hydrogen 1 [1H]) images and perfusion images by using arterial spin labeling were obtained for comparison. Results Images of 129Xe uptake were obtained with a signal-to-noise ratio of 31 ± 9 and demonstrated structural similarities to the gray matter distribution on conventional T1-weighted 1H images and to perfusion images from arterial spin labeling. Conclusion Hyperpolarized 129Xe MR imaging is an injection-free means of imaging the perfusion of cerebral tissue. The proposed method images the uptake of inhaled xenon gas to the extravascular brain tissue compartment across the intact blood-brain barrier. This level of sensitivity is not readily available with contemporary MR imaging methods. © RSNA, 2017.

  2. Radiological Image Compression

    NASA Astrophysics Data System (ADS)

    Lo, Shih-Chung Benedict

The movement toward digital images in radiology presents the problem of how to conveniently and economically store, retrieve, and transmit the volume of digital images. Basic research into image data compression is necessary in order to move from a film-based department to an efficient digital-based department. Digital data compression technology consists of two types of compression technique: error-free and irreversible. Error-free image compression is desired; however, present techniques can only achieve compression ratios of 1.5:1 to 3:1, depending upon the image characteristics. Irreversible image compression can achieve a much higher compression ratio; however, the image reconstructed from the compressed data shows some difference from the original image. This dissertation studies both error-free and irreversible image compression techniques. In particular, some modified error-free techniques have been tested and the recommended strategies for various radiological images are discussed. A full-frame bit-allocation irreversible compression technique has been derived. A total of 76 images, which include CT head and body images and radiographs digitized to 2048 x 2048, 1024 x 1024, and 512 x 512, have been used to test this algorithm. The normalized mean-square-error (NMSE) on the difference image, defined as the difference between the original image and the image reconstructed at a given compression ratio, is used as a global measurement of the quality of the reconstructed image. The NMSEs of a total of 380 reconstructed and 380 difference images were measured and the results tabulated. Three complex compression methods are also suggested to compress images with special characteristics. Finally, various parameters which would affect the quality of the reconstructed images are discussed. A proposed hardware compression module is given in the last chapter.

  3. Test Image by Mars Descent Imager

    NASA Image and Video Library

    2010-07-19

    Ken Edgett, deputy principal investigator for NASA Mars Descent Imager, holds a ruler used as a depth-of-field test target. The instrument took this image inside the Malin Space Science Systems clean room in San Diego, CA, during calibration testing.

  4. ImagingSIMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2017-11-06

    ImagingSIMS is an open source application for loading, processing, manipulating and visualizing secondary ion mass spectrometry (SIMS) data. At PNNL, a separate branch has been further developed to incorporate application specific features for dynamic SIMS data sets. These include loading CAMECA IMS-1280, NanoSIMS and modified IMS-4f raw data, creating isotopic ratio images and stitching together images from adjacent interrogation regions. In addition to other modifications of the parent open source version, this version is equipped with a point-by-point image registration tool to assist with streamlining the image fusion process.

  5. Image registration method for medical image sequences

    DOEpatents

    Gee, Timothy F.; Goddard, James S.

    2013-03-26

    Image registration of low contrast image sequences is provided. In one aspect, a desired region of an image is automatically segmented and only the desired region is registered. Active contours and adaptive thresholding of intensity or edge information may be used to segment the desired regions. A transform function is defined to register the segmented region, and sub-pixel information may be determined using one or more interpolation methods.

  6. Annotating images by mining image search results.

    PubMed

    Wang, Xin-Jing; Zhang, Lei; Li, Xirong; Ma, Wei-Ying

    2008-11-01

Although it has been studied for years by the computer vision and machine learning communities, image annotation is still far from practical. In this paper, we propose a novel attempt at model-free image annotation, which is a data-driven approach that annotates images by mining their search results. Some 2.4 million images with their surrounding text are collected from a few photo forums to support this approach. The entire process is formulated in a divide-and-conquer framework where a query keyword is provided along with the uncaptioned image to improve both the effectiveness and efficiency. This is helpful when the collected data set is not dense everywhere. In this sense, our approach contains three steps: 1) the search process to discover visually and semantically similar search results, 2) the mining process to identify salient terms from textual descriptions of the search results, and 3) the annotation rejection process to filter out noisy terms yielded by Step 2. To ensure real-time annotation, two key techniques are leveraged: one is to map the high-dimensional image visual features into hash codes; the other is to implement the system as a distributed one, in which the search and mining processes are provided as Web services. As a typical result, the entire process finishes in less than 1 second. Since no training data set is required, our approach enables annotating with unlimited vocabulary and is highly scalable and robust to outliers. Experimental results on both real Web images and a benchmark image data set show the effectiveness and efficiency of the proposed algorithm. It is also worth noting that, although the entire approach is illustrated within the divide-and-conquer framework, a query keyword is not crucial to our current implementation. We provide experimental results to prove this.

  7. Robust image registration for multiple exposure high dynamic range image synthesis

    NASA Astrophysics Data System (ADS)

    Yao, Susu

    2011-03-01

    Image registration is an important preprocessing technique in high dynamic range (HDR) image synthesis. This paper proposes a robust image registration method for aligning a group of low dynamic range (LDR) images that are captured with different exposure times. Illumination change and photometric distortion between two images can result in inaccurate registration. We propose to transform intensity image data into phase congruency to eliminate the effect of changes in image brightness, and to use phase cross-correlation in the Fourier transform domain to perform image registration. Considering the presence of non-overlapped regions due to photometric distortion, evolutionary programming is applied to search for accurate translation parameters, so that registration accuracy at the level of a hundredth of a pixel can be achieved. The proposed algorithm works well for registering under- and over-exposed images and has been applied to align LDR images for synthesizing high-quality HDR images.
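
    A minimal sketch of Fourier-domain phase correlation for translation estimation follows, assuming a pure translation between exposures; the phase-congruency preprocessing and the evolutionary search for sub-pixel accuracy described in the abstract are not reproduced here.

      # Phase cross-correlation in the Fourier domain: estimates the
      # integer-pixel translation between two images (core step only).
      import numpy as np

      def phase_correlation(a, b):
          A, B = np.fft.fft2(a), np.fft.fft2(b)
          cross = A * np.conj(B)
          cross /= np.abs(cross) + 1e-12        # keep the phase, discard magnitude
          corr = np.fft.ifft2(cross).real
          peak = np.unravel_index(np.argmax(corr), corr.shape)
          # Wrap shifts larger than half the image size to negative values.
          return tuple(p - n if p > n // 2 else p for p, n in zip(peak, a.shape))

      # Self-check: a known circular shift of (3, -7) should be recovered.
      rng = np.random.default_rng(0)
      img = rng.random((128, 128))
      print(phase_correlation(np.roll(img, (3, -7), axis=(0, 1)), img))   # (3, -7)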

  8. Managing biomedical image metadata for search and retrieval of similar images.

    PubMed

    Korenblum, Daniel; Rubin, Daniel; Napel, Sandy; Rodriguez, Cesar; Beaulieu, Chris

    2011-08-01

    Radiology images are generally disconnected from the metadata describing their contents, such as imaging observations ("semantic" metadata), which are usually described in text reports that are not directly linked to the images. We developed a system, the Biomedical Image Metadata Manager (BIMM) to (1) address the problem of managing biomedical image metadata and (2) facilitate the retrieval of similar images using semantic feature metadata. Our approach allows radiologists, researchers, and students to take advantage of the vast and growing repositories of medical image data by explicitly linking images to their associated metadata in a relational database that is globally accessible through a Web application. BIMM receives input in the form of standard-based metadata files using Web service and parses and stores the metadata in a relational database allowing efficient data query and maintenance capabilities. Upon querying BIMM for images, 2D regions of interest (ROIs) stored as metadata are automatically rendered onto preview images included in search results. The system's "match observations" function retrieves images with similar ROIs based on specific semantic features describing imaging observation characteristics (IOCs). We demonstrate that the system, using IOCs alone, can accurately retrieve images with diagnoses matching the query images, and we evaluate its performance on a set of annotated liver lesion images. BIMM has several potential applications, e.g., computer-aided detection and diagnosis, content-based image retrieval, automating medical analysis protocols, and gathering population statistics like disease prevalences. The system provides a framework for decision support systems, potentially improving their diagnostic accuracy and selection of appropriate therapies.

  9. 8-Bit Gray Scale Images of Fingerprint Image Groups

    National Institute of Standards and Technology Data Gateway

    NIST 8-Bit Gray Scale Images of Fingerprint Image Groups (Web, free access)   The NIST database of fingerprint images contains 2000 8-bit gray scale fingerprint image pairs. A newer version of the compression/decompression software on the CDROM can be found at the website http://www.nist.gov/itl/iad/ig/nigos.cfm as part of the NBIS package.

  10. No-reference image quality assessment for horizontal-path imaging scenarios

    NASA Astrophysics Data System (ADS)

    Rios, Carlos; Gladysz, Szymon

    2013-05-01

    There exist several image-enhancement algorithms and tasks associated with imaging through turbulence that depend on defining the quality of an image. Examples include: "lucky imaging", choosing the width of the inverse filter for image reconstruction, or stopping iterative deconvolution. We collected a number of image quality metrics found in the literature. Particularly interesting are the blind, "no-reference" metrics. We discuss ways of evaluating the usefulness of these metrics, even when a fully objective comparison is impossible because of the lack of a reference image. Metrics are tested on simulated and real data. Field data comes from experiments performed by the NATO SET 165 research group over a 7 km distance in Dayton, Ohio.

  11. Implementing desktop image access of GI images

    NASA Astrophysics Data System (ADS)

    Grevera, George J.; Feingold, Eric R.; Horii, Steven C.; Laufer, Igor

    1996-05-01

    In this paper we present a specific example of the current state-of-the-art in desktop image access in the GI section of the Department of Radiology at the Hospital of the University of Pennsylvania. We describe a system which allows physicians to view and manipulate images from a Philips digital fluoroscopy system at the workstations in their offices. Typically they manipulate and view these images on their desktop Macs and then submit the results for slide making or save the images in digital teaching files. In addition to a discussion of the current state-of-the-art here at HUP, we also discuss some future directions that we are pursuing.

  12. Prior image constrained image reconstruction in emerging computed tomography applications

    NASA Astrophysics Data System (ADS)

    Brunner, Stephen T.

    Advances have been made in computed tomography (CT), especially in the past five years, by incorporating prior images into the image reconstruction process. In this dissertation, we investigate prior image constrained image reconstruction in three emerging CT applications: dual-energy CT, multi-energy photon-counting CT, and cone-beam CT in image-guided radiation therapy. First, we investigate the application of Prior Image Constrained Compressed Sensing (PICCS) in dual-energy CT, which has been called "one of the hottest research areas in CT." Phantom and animal studies are conducted using a state-of-the-art 64-slice GE Discovery 750 HD CT scanner to investigate the extent to which PICCS can enable radiation dose reduction in material density and virtual monochromatic imaging. Second, we extend the application of PICCS from dual-energy CT to multi-energy photon-counting CT, which has been called "one of the 12 topics in CT to be critical in the next decade." Numerical simulations are conducted to generate multiple energy bin images for a photon-counting CT acquisition and to investigate the extent to which PICCS can enable radiation dose efficiency improvement. Third, we investigate the performance of a newly proposed prior image constrained scatter correction technique to correct scatter-induced shading artifacts in cone-beam CT, which, when used in image-guided radiation therapy procedures, can assist in patient localization, and potentially, dose verification and adaptive radiation therapy. Phantom studies are conducted using a Varian 2100 EX system with an on-board imager to investigate the extent to which the prior image constrained scatter correction technique can mitigate scatter-induced shading artifacts in cone-beam CT. Results show that these prior image constrained image reconstruction techniques can reduce radiation dose in dual-energy CT by 50% in phantom and animal studies in material density and virtual monochromatic imaging, can lead to radiation

  13. Smart Image Enhancement Process

    NASA Technical Reports Server (NTRS)

    Jobson, Daniel J. (Inventor); Rahman, Zia-ur (Inventor); Woodell, Glenn A. (Inventor)

    2012-01-01

    Contrast and lightness measures are used to first classify the image as either non-turbid or turbid. If turbid, the original image is enhanced to generate a first enhanced image. If non-turbid, the original image is classified in terms of a merged contrast/lightness score based on the contrast and lightness measures. The non-turbid image is enhanced to generate a second enhanced image when a poor contrast/lightness score is associated therewith. When the second enhanced image has a poor contrast/lightness score associated therewith, this image is enhanced to generate a third enhanced image. A sharpness measure is computed for one image that is selected from (i) the non-turbid image, (ii) the first enhanced image, (iii) the second enhanced image when a good contrast/lightness score is associated therewith, and (iv) the third enhanced image. If the selected image is not sharp, it is sharpened to generate a sharpened image. The final image is selected from the selected image and the sharpened image.
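
    The decision flow in this abstract can be summarized as a short control-structure sketch. The turbidity, contrast/lightness, and sharpness measures below are simple stand-ins (global statistics, percentile contrast stretching, unsharp masking) chosen only to make the skeleton runnable; they are not the patented measures, and the input is assumed to be a 2-D float array scaled to [0, 1].

      # Skeleton of the enhancement decision flow with placeholder measures.
      import numpy as np

      def contrast_lightness_poor(img, c_min=0.15, l_min=0.25, l_max=0.85):
          return img.std() < c_min or not (l_min <= img.mean() <= l_max)

      def is_turbid(img, threshold=0.08):
          # Very low global contrast is treated as "turbid" here (assumption).
          return img.std() < threshold

      def enhance(img):
          lo, hi = np.percentile(img, (2, 98))            # percentile contrast stretch
          return np.clip((img - lo) / max(hi - lo, 1e-6), 0.0, 1.0)

      def is_sharp(img, threshold=0.01):
          gy, gx = np.gradient(img)
          return np.mean(np.hypot(gx, gy)) > threshold

      def sharpen(img, amount=1.0):
          # Unsharp mask built on a crude 3x3 box blur.
          pad = np.pad(img, 1, mode="edge")
          blur = sum(pad[i:i + img.shape[0], j:j + img.shape[1]]
                     for i in range(3) for j in range(3)) / 9.0
          return np.clip(img + amount * (img - blur), 0.0, 1.0)

      def smart_enhance(image):
          if is_turbid(image):
              selected = enhance(image)                   # first enhanced image
          else:
              selected = image
              if contrast_lightness_poor(selected):
                  selected = enhance(selected)            # second enhanced image
                  if contrast_lightness_poor(selected):
                      selected = enhance(selected)        # third enhanced image
          return sharpen(selected) if not is_sharp(selected) else selected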

  14. Image splitting and remapping method for radiological image compression

    NASA Astrophysics Data System (ADS)

    Lo, Shih-Chung B.; Shen, Ellen L.; Mun, Seong K.

    1990-07-01

    A new decomposition method using image splitting and gray-level remapping has been proposed for image compression, particularly for images with high contrast resolution. The effects of this method are especially evident in our radiological image compression study. In our experiments, we tested the impact of this decomposition method on image compression by employing it with two coding techniques on a set of clinically used CT images and several laser film digitized chest radiographs. One of the compression techniques used was full-frame bit-allocation in the discrete cosine transform domain, which has been proven to be an effective technique for radiological image compression. The other compression technique used was vector quantization with pruned tree-structured encoding, which through recent research has also been found to produce a low mean-square-error and a high compression ratio. The parameters we used in this study were mean-square-error and the bit rate required for the compressed file. In addition to these parameters, the difference between the original and reconstructed images will be presented so that the specific artifacts generated by both techniques can be discerned by visual perception.
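
    The split-and-remap idea can be illustrated generically: split a wide-dynamic-range image at a gray-level cut and remap each band onto a compact 8-bit range before handing it to a codec. The median cut and linear remapping below are assumptions made for the example and do not reproduce the paper's exact decomposition rules.

      # Generic split-and-remap sketch for a wide-dynamic-range (e.g. uint16) image.
      import numpy as np

      def split_and_remap(img16, cut=None):
          cut = np.median(img16) if cut is None else cut
          parts = []
          for mask in (img16 <= cut, img16 > cut):
              vals = img16[mask].astype(np.float64)
              lo, hi = vals.min(), vals.max()
              remapped = np.zeros(img16.shape, dtype=np.uint8)
              remapped[mask] = np.round(255.0 * (vals - lo) / max(hi - lo, 1.0)).astype(np.uint8)
              parts.append((remapped, mask, lo, hi))      # (lo, hi) allow inverting the remap
          return parts

      # Each 8-bit band can then be compressed separately; reconstruction applies
      # lo + remapped / 255 * (hi - lo) on the corresponding mask.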

  15. Quantitative image quality evaluation of MR images using perceptual difference models

    PubMed Central

    Miao, Jun; Huo, Donglai; Wilson, David L.

    2008-01-01

    The authors are using a perceptual difference model (Case-PDM) to quantitatively evaluate image quality of the thousands of test images which can be created when optimizing fast magnetic resonance (MR) imaging strategies and reconstruction techniques. In this validation study, they compared human evaluation of MR images from multiple organs and from multiple image reconstruction algorithms to Case-PDM and similar models. The authors found that Case-PDM compared very favorably to human observers in double-stimulus continuous-quality scale and functional measurement theory studies over a large range of image quality. The Case-PDM threshold for nonperceptible differences in a 2-alternative forced choice study varied with the type of image under study, but was ≈1.1 for diffuse image effects, providing a rule of thumb. Ordering the image quality evaluation models, the authors found the overall ranking Case-PDM ≈ IDM (Sarnoff Corporation) ≈ SSIM [Wang et al. IEEE Trans. Image Process. 13, 600–612 (2004)] > mean squared error ≈ NR [Wang et al. (2004) (unpublished)] > DCTune (NASA) > IQM (MITRE Corporation). The authors conclude that Case-PDM is very useful in MR image evaluation but that one should probably restrict studies to similar images and similar processing, normally not a limitation in image reconstruction studies. PMID:18649487

  16. Evaluation method based on the image correlation for laser jamming image

    NASA Astrophysics Data System (ADS)

    Che, Jinxi; Li, Zhongmin; Gao, Bo

    2013-09-01

    The evaluation of jamming effectiveness against infrared imaging systems is an important part of electro-optical countermeasures. Military infrared imaging devices are widely used in searching, tracking, guidance, and many other fields. At the same time, with the continuous development of laser technology, research on laser interference and damage effects has advanced, and lasers have been used to disturb infrared imaging devices. Therefore, evaluating the effect of laser jamming on infrared imaging systems has become a meaningful problem to solve. The information that an infrared imaging system ultimately presents to the user is an image, so the jamming effect can be evaluated from the standpoint of image quality assessment. An image carries two kinds of information, light amplitude and light phase, so image correlation can accurately capture the difference between the original image and the disturbed image. In this paper, the evaluation method based on digital image correlation, the image quality assessment method based on the Fourier transform, the image quality estimation method based on error statistics, and the evaluation method based on peak signal-to-noise ratio are analysed, along with their advantages and disadvantages. Moreover, the infrared jamming images from an experiment in which a thermal infrared imager was interfered with by a laser were analysed using these methods. The results show that the methods reflect the laser jamming effect on the infrared imaging system well, that the evaluation results are consistent with subjective visual assessment, and that the methods offer good repeatability and convenient quantitative analysis. The feasibility of the methods for evaluating the jamming effect was thus demonstrated. This work has reference value for the study and development of electro-optical countermeasure equipment and
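
    Two of the measures discussed, image correlation and peak signal-to-noise ratio, can be computed in a few lines; this is a minimal sketch rather than the authors' evaluation pipeline.

      # Zero-mean normalized cross-correlation and PSNR between an original
      # image and its laser-disturbed counterpart (both as numpy arrays).
      import numpy as np

      def image_correlation(a, b):
          a = a.astype(np.float64) - a.mean()
          b = b.astype(np.float64) - b.mean()
          return float((a * b).sum() / (np.sqrt((a * a).sum() * (b * b).sum()) + 1e-12))

      def psnr(original, disturbed, peak=255.0):
          mse = np.mean((original.astype(np.float64) - disturbed.astype(np.float64)) ** 2)
          return float(10.0 * np.log10(peak ** 2 / (mse + 1e-12)))

      # Lower correlation and lower PSNR indicate a stronger jamming effect.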

  17. A hierarchical SVG image abstraction layer for medical imaging

    NASA Astrophysics Data System (ADS)

    Kim, Edward; Huang, Xiaolei; Tan, Gang; Long, L. Rodney; Antani, Sameer

    2010-03-01

    As medical imaging rapidly expands, there is an increasing need to structure and organize image data for efficient analysis, storage and retrieval. In response, a large fraction of research in the areas of content-based image retrieval (CBIR) and picture archiving and communication systems (PACS) has focused on structuring information to bridge the "semantic gap", a disparity between machine and human image understanding. An additional consideration in medical images is the organization and integration of clinical diagnostic information. As a step towards bridging the semantic gap, we design and implement a hierarchical image abstraction layer using an XML based language, Scalable Vector Graphics (SVG). Our method encodes features from the raw image and clinical information into an extensible "layer" that can be stored in a SVG document and efficiently searched. Any feature extracted from the raw image including, color, texture, orientation, size, neighbor information, etc., can be combined in our abstraction with high level descriptions or classifications. And our representation can natively characterize an image in a hierarchical tree structure to support multiple levels of segmentation. Furthermore, being a world wide web consortium (W3C) standard, SVG is able to be displayed by most web browsers, interacted with by ECMAScript (standardized scripting language, e.g. JavaScript, JScript), and indexed and retrieved by XML databases and XQuery. Using these open source technologies enables straightforward integration into existing systems. From our results, we show that the flexibility and extensibility of our abstraction facilitates effective storage and retrieval of medical images.

  18. Objective analysis of image quality of video image capture systems

    NASA Astrophysics Data System (ADS)

    Rowberg, Alan H.

    1990-07-01

    As Picture Archiving and Communication System (PACS) technology has matured, video image capture has become a common way of capturing digital images from many modalities. While digital interfaces, such as those which use the ACR/NEMA standard, will become more common in the future, and are preferred because of the accuracy of image transfer, video image capture will be the dominant method in the short term, and may continue to be used for some time because of the low cost and high speed often associated with such devices. Currently, virtually all installed systems use methods of digitizing the video signal that is produced for display on the scanner viewing console itself. A series of digital test images has been developed for display on either a GE CT9800 or a GE Signa MRI scanner. These images have been captured with each of five commercially available image capture systems, and the resultant images digitally transferred on floppy disk to a PC1286 computer containing Optimast image analysis software. Here the images can be displayed in a comparative manner for visual evaluation, in addition to being analyzed statistically. Each of the images has been designed to support certain tests, including noise, accuracy, linearity, gray scale range, stability, slew rate, and pixel alignment. These image capture systems vary widely in these characteristics, in addition to the presence or absence of other artifacts, such as shading and moiré patterns. Other accessories such as video distribution amplifiers and noise filters can also add or modify artifacts seen in the captured images, often giving unusual results. Each image is described, together with the tests which were performed using it. One image contains alternating black and white lines, each one pixel wide, after equilibration strips ten pixels wide. While some systems have a slew rate fast enough to track this correctly, others blur it to an average shade of gray, and do not resolve the lines, or give

  19. Spectrographic imaging system

    DOEpatents

    Morris, Michael D.; Treado, Patrick J.

    1991-01-01

    An imaging system for providing spectrographically resolved images. The system incorporates a one-dimensional spatial encoding mask which enables an image to be projected onto a two-dimensional image detector after spectral dispersion of the image. The dimension of the image which is lost due to spectral dispersion on the two-dimensional detector is recovered through employing a reverse transform based on presenting a multiplicity of different spatial encoding patterns to the image. The system is especially adapted for detecting Raman scattering of monochromatic light transmitted through or reflected from physical samples. Preferably, spatial encoding is achieved through the use of Hadamard mask which selectively transmits or blocks portions of the image from the sample being evaluated.
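
    The recovery of the spatial dimension lost to spectral dispersion amounts to inverting the set of Hadamard mask patterns applied to the scene. The toy 1-D example below assumes an ideal, noiseless +/-1 encoding; a real transmit/block (0/1) mask changes the reconstruction only by an offset and scale.

      # Toy Hadamard spatial encoding and decoding in 1-D.
      import numpy as np
      from scipy.linalg import hadamard

      N = 8
      H = hadamard(N)                        # N x N matrix of +/-1 mask patterns
      scene = np.array([0, 0, 3, 7, 7, 2, 0, 0], dtype=float)

      measurements = H @ scene               # one detector reading per mask pattern
      reconstructed = np.linalg.solve(H, measurements)   # equivalently H.T @ measurements / N

      print(np.allclose(reconstructed, scene))           # True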

  20. Performance evaluation of image segmentation algorithms on microscopic image data.

    PubMed

    Beneš, Miroslav; Zitová, Barbara

    2015-01-01

    In our paper, we present a performance evaluation of image segmentation algorithms on microscopic image data. In spite of the existence of many algorithms for image data partitioning, there is still no universal 'best' method. Moreover, images of microscopic samples can vary in character and quality, which can negatively influence the performance of image segmentation algorithms. Thus, the issue of selecting a suitable method for a given set of image data is of great interest. We carried out a large number of experiments with a variety of segmentation methods to evaluate the behaviour of individual approaches on the testing set of microscopic images (cross-section images taken in three different modalities from the field of art restoration). The segmentation results were assessed by several indices used for measuring the output quality of image segmentation algorithms. In the end, the benefit of a segmentation combination approach is studied, and the applicability of the achieved results to another representative of the microscopic data category, biological samples, is shown. © 2014 The Authors Journal of Microscopy © 2014 Royal Microscopical Society.

  1. Image model: new perspective for image processing and computer vision

    NASA Astrophysics Data System (ADS)

    Ziou, Djemel; Allili, Madjid

    2004-05-01

    We propose a new image model in which the image support and image quantities are modeled using algebraic topology concepts. The image support is viewed as a collection of chains encoding combinations of pixels grouped by dimension and linking different dimensions through the boundary operators. Image quantities are encoded using the notion of a cochain, which associates with pixels of a given dimension values that can be scalar, vector, or tensor, depending on the problem considered. This makes it possible to obtain algebraic equations directly from the physical laws. The coboundary and codual operators, which are generic operations on cochains, allow the classical differential operators to be formulated, as applied to field functions and differential forms, in both global and local form. This image model makes the association between the image support and the image quantities explicit, which results in several advantages: it allows the derivation of efficient algorithms that operate in any dimension and the unification of mathematics and physics to solve classical problems in image processing and computer vision. We show the effectiveness of this model by considering isotropic diffusion.

  2. Sparse image reconstruction for molecular imaging.

    PubMed

    Ting, Michael; Raich, Raviv; Hero, Alfred O

    2009-06-01

    The application that motivates this paper is molecular imaging at the atomic level. When discretized at subatomic distances, the volume is inherently sparse. Noiseless measurements from an imaging technology can be modeled by convolution of the image with the system point spread function (psf). Such is the case with magnetic resonance force microscopy (MRFM), an emerging technology in which imaging of an individual tobacco mosaic virus was recently demonstrated with nanometer resolution. We also consider additive white Gaussian noise (AWGN) in the measurements. Many prior works on sparse estimators have focused on the case when H has low coherence; however, the system matrix H in our application is the convolution matrix for the system psf. A typical convolution matrix has high coherence. This paper, therefore, does not assume a low-coherence H. A discrete-continuous form of the Laplacian and atom at zero (LAZE) p.d.f. used by Johnstone and Silverman is formulated, and two sparse estimators are derived by maximizing the joint p.d.f. of the observation and image conditioned on the hyperparameters. A thresholding rule that generalizes the hard and soft thresholding rules appears in the course of the derivation. This so-called hybrid thresholding rule, when used in the iterative thresholding framework, gives rise to the hybrid estimator, a generalization of the lasso. Estimates of the hyperparameters for the lasso and hybrid estimator are obtained via Stein's unbiased risk estimate (SURE). A numerical study with a Gaussian psf and two sparse images shows that the hybrid estimator outperforms the lasso.
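
    For orientation, the sketch below runs the standard hard and soft thresholding rules inside a generic iterative-thresholding deconvolution loop with a Gaussian psf; the paper's hybrid rule, LAZE prior, and SURE-based hyperparameter selection are not reproduced, and the psf, noise level, and threshold are made-up example values.

      # Generic iterative thresholding (ISTA-style) for sparse deconvolution.
      import numpy as np

      def soft(x, t):
          return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

      def hard(x, t):
          return np.where(np.abs(x) > t, x, 0.0)

      def iterative_threshold(y, H, t, rule=soft, n_iter=200):
          # Gradient step on 0.5*||y - Hx||^2 followed by a thresholding step.
          L = np.linalg.norm(H, 2) ** 2              # Lipschitz constant of the gradient
          x = np.zeros(H.shape[1])
          for _ in range(n_iter):
              x = rule(x + H.T @ (y - H @ x) / L, t / L)
          return x

      # Example: sparse spike train blurred by a Gaussian psf plus noise.
      rng = np.random.default_rng(1)
      n = 100
      psf = np.exp(-0.5 * (np.arange(-5, 6) / 1.5) ** 2)
      H = np.array([np.convolve(np.eye(n)[i], psf, mode="same") for i in range(n)]).T
      x_true = np.zeros(n)
      x_true[[20, 50, 75]] = [4.0, -3.0, 5.0]
      y = H @ x_true + 0.05 * rng.standard_normal(n)
      print(np.round(iterative_threshold(y, H, t=0.1), 1)[[20, 50, 75]])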

  3. Sonorous images through digital holographic images

    NASA Astrophysics Data System (ADS)

    Azevedo, Isabel; Sandford-Richardson, Elizabeth

    2017-03-01

    The art of the last fifty years has been significantly concerned with the presence of the body and the relationship between humans and interactive technologies. Today in interactive art there are not only representations that speak of the body but also actions and behaviours that involve the body. In holography, the image appears and disappears from the observer's field of vision; because the holographic image is light, we can see multidimensional spaces, shapes and colours existing at the same time, presence and absence of the image on the holographic plate. The image can also appear to float in front of the plate, so that people sometimes try to touch it with their hands. For the viewer these are interactive events, with no beginning or end, that can be perceived in any direction, forward or backward, depending on the relative position and the time the viewer spends in front of the hologram. To explore that feature we propose an installation with four holograms and several sources of different kinds of sound connected to each hologram. When viewers move in front of each hologram they activate different sources of sound. The search is not only for the images in the holograms, but also for the different types of sound that this work will require. The digital holograms were produced using the HoloCam Portable Light System with a 35 mm Canon 700D camera to capture image information; the material was then edited on computer using the Motion 5 and Final Cut Pro X programs.

  4. Image barcodes

    NASA Astrophysics Data System (ADS)

    Damera-Venkata, Niranjan; Yen, Jonathan

    2003-01-01

    A visually significant two-dimensional barcode (VSB), developed by Shaked et al., is a method for designing an information-carrying two-dimensional barcode that has the appearance of a given graphical entity such as a company logo. The encoding and decoding of information using the VSB relies on a base image with very few gray levels (typically only two), which in turn requires the image histogram to be bi-modal. For continuous-tone images such as digital photographs of individuals, the representation of tone or "shades of gray" is not only important for obtaining a pleasing rendition of the face; in most cases, the VSB renders these images unrecognizable due to its inability to represent true gray-tone variations. This paper extends the concept of a VSB to an image barcode (IBC). We enable the encoding and subsequent decoding of information embedded in the hardcopy version of continuous-tone base images such as those acquired with a digital camera. The encoding-decoding process is modeled as robust data transmission through a noisy print-scan channel that is explicitly modeled. The IBC supports a high information capacity that differentiates it from common hardcopy watermarks. The reason for the improved image quality over the VSB is a joint encoding/halftoning strategy based on a modified version of block error diffusion. Encoder stability, image quality versus information capacity tradeoffs, and decoding issues with and without explicit knowledge of the base image are discussed.

  5. Improved Contrast-Enhanced Ultrasound Imaging With Multiplane-Wave Imaging.

    PubMed

    Gong, Ping; Song, Pengfei; Chen, Shigao

    2018-02-01

    Contrast-enhanced ultrasound (CEUS) imaging has great potential for use in new ultrasound clinical applications such as myocardial perfusion imaging and abdominal lesion characterization. In CEUS imaging, contrast agents (i.e., microbubbles) are used to improve contrast between blood and tissue because of their high nonlinearity under low ultrasound pressure. However, the quality of CEUS imaging sometimes suffers from a low signal-to-noise ratio (SNR) in deeper imaging regions when a low mechanical index (MI) is used to avoid microbubble disruption, especially for imaging at off-resonance transmit frequencies. In this paper, we propose a new strategy of combining CEUS sequences with the recently proposed multiplane-wave (MW) compounding method to improve the SNR of CEUS in deeper imaging regions without increasing MI or sacrificing frame rate. The MW-CEUS method emits multiple Hadamard-coded CEUS pulses in each transmission event (i.e., pulse-echo event). The received echo signals first undergo fundamental bandpass filtering (i.e., the filter is centered on the transmit frequency) to eliminate the microbubble's second-harmonic signals because they cannot be encoded by pulse inversion. The filtered signals are then Hadamard decoded and realigned in fast time to recover the signals as they would have been obtained using classic CEUS pulses, followed by designed recombination to cancel the linear tissue responses. The MW-CEUS method significantly improved contrast-to-tissue ratio and SNR of CEUS imaging by transmitting longer coded pulses. The image resolution was also preserved. The microbubble disruption ratio and motion artifacts in MW-CEUS were similar to those of classic CEUS imaging. In addition, the MW-CEUS sequence can be adapted to other transmission coding formats. These properties of MW-CEUS can potentially facilitate CEUS imaging for many clinical applications, especially assessing deep abdominal organs or the heart.

  6. Using consumer-grade devices for multi-imager non-contact imaging photoplethysmography

    NASA Astrophysics Data System (ADS)

    Blackford, Ethan B.; Estepp, Justin R.

    2017-02-01

    Imaging photoplethysmography is a technique through which the morphology of the blood volume pulse can be obtained from non-contact video recordings of exposed skin with superficial vasculature. The acceptance of such a convenient modality for use in everyday applications may well depend upon the availability of consumer-grade imagers that facilitate ease-of-adoption. Multiple imagers have been used previously in concept demonstrations, showing improvements in quality of the extracted blood volume pulse signal. However, the use of multi-imager sensors requires synchronization of the frame exposures between the individual imagers, a capability that has only recently been available without creating custom solutions. In this work, we consider the use of multiple, commercially-available, synchronous imagers for use in imaging photoplethysmography. A commercially-available solution for adopting multi-imager synchronization was analyzed for 21 stationary, seated participants while ground-truth physiological signals were simultaneously measured. A total of three imagers were used, facilitating a comparison between fused data from all three imagers versus data from the single, central imager in the array. The within-subjects design included analyses of pulse rate and pulse signal-to-noise ratio. Using the fused data from the triple-imager array, mean absolute error in pulse rate measurement was reduced to 3.8 beats per minute, compared with 7.4 beats per minute for the single imager. While this represents an overall improvement in the multi-imager case, it is also noted that these errors are substantially higher than those obtained in comparable studies. We further discuss these results and their implications for using readily-available commercial imaging solutions for imaging photoplethysmography applications.

  7. Super Resolution Imaging Applied to Scientific Images

    DTIC Science & Technology

    2007-05-01

    Only a fragment of this record's abstract is recoverable: it notes that a norm allowing discontinuities in its solution, as opposed to the L2 norm, has found favor in the image restoration community. The remainder of the excerpt consists of citation fragments, including references to edge-model-based high-resolution image generation work by Nema, Rakshit, and Chaudhuri.

  8. Imaging Atherosclerosis

    PubMed Central

    Tarkin, Jason M.; Dweck, Marc R.; Evans, Nicholas R.; Takx, Richard A.P.; Brown, Adam J.; Tawakol, Ahmed; Fayad, Zahi A.

    2016-01-01

    Advances in atherosclerosis imaging technology and research have provided a range of diagnostic tools to characterize high-risk plaque in vivo; however, these important vascular imaging methods additionally promise great scientific and translational applications beyond this quest. When combined with conventional anatomic- and hemodynamic-based assessments of disease severity, cross-sectional multimodal imaging incorporating molecular probes and other novel noninvasive techniques can add detailed interrogation of plaque composition, activity, and overall disease burden. In the catheterization laboratory, intravascular imaging provides unparalleled access to the world beneath the plaque surface, allowing tissue characterization and measurement of cap thickness with micrometer spatial resolution. Atherosclerosis imaging captures key data that reveal snapshots into underlying biology, which can test our understanding of fundamental research questions and shape our approach toward patient management. Imaging can also be used to quantify response to therapeutic interventions and ultimately help predict cardiovascular risk. Although there are undeniable barriers to clinical translation, many of these hold-ups might soon be surpassed by rapidly evolving innovations to improve image acquisition, coregistration, motion correction, and reduce radiation exposure. This article provides a comprehensive review of current and experimental atherosclerosis imaging methods and their uses in research and potential for translation to the clinic. PMID:26892971

  9. Imaging Transgene Expression with Radionuclide Imaging Technologies1

    PubMed Central

    Gambhir, SS; Herschman, HR; Cherry, SR; Barrio, JR; Satyamurthy, N; Toyokuni, T; Phelps, ME; Larson, SM; Balaton, J; Finn, R; Sadelain, M; Tjuvajev, J

    2000-01-01

    A variety of imaging technologies are being investigated as tools for studying gene expression in living subjects. Noninvasive, repetitive and quantitative imaging of gene expression will help both to facilitate human gene therapy trials and to allow for the study of animal models of molecular and cellular therapy. Radionuclide approaches using single photon emission computed tomography (SPECT) and positron emission tomography (PET) are the most mature of the current imaging technologies and offer many advantages for imaging gene expression compared to optical and magnetic resonance imaging (MRI)-based approaches. These advantages include relatively high sensitivity, full quantitative capability (for PET), and the ability to extend small animal assays directly into clinical human applications. We describe a PET scanner (microPET) designed specifically for studies of small animals. We review “marker/reporter gene” imaging approaches using the herpes simplex type 1 virus thymidine kinase (HSV1-tk) and the dopamine type 2 receptor (D2R) genes. We describe and contrast several radiolabeled probes that can be used with the HSV1-tk reporter gene both for SPECT and for PET imaging. We also describe the advantages/disadvantages of each of the assays developed and discuss future animal and human applications. PMID:10933072

  10. Automatic single-image-based rain streaks removal via image decomposition.

    PubMed

    Kang, Li-Wei; Lin, Chia-Wen; Fu, Yu-Hsiang

    2012-04-01

    Rain removal from a video is a challenging problem and has been recently investigated extensively. Nevertheless, the problem of rain removal from a single image was rarely studied in the literature, where no temporal information among successive images can be exploited, making the problem very challenging. In this paper, we propose a single-image-based rain removal framework via properly formulating rain removal as an image decomposition problem based on morphological component analysis. Instead of directly applying a conventional image decomposition technique, the proposed method first decomposes an image into the low- and high-frequency (HF) parts using a bilateral filter. The HF part is then decomposed into a "rain component" and a "nonrain component" by performing dictionary learning and sparse coding. As a result, the rain component can be successfully removed from the image while preserving most original image details. Experimental results demonstrate the efficacy of the proposed algorithm.
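
    A sketch of the first decomposition stage only: an edge-preserving bilateral filter gives the low-frequency part, and the residual forms the high-frequency part that would contain the rain streaks. The OpenCV filter parameters and the file name in the usage comment are illustrative assumptions; the dictionary-learning and sparse-coding steps that separate the rain component are omitted.

      # Low-/high-frequency decomposition of a single image with a bilateral filter.
      import cv2
      import numpy as np

      def lf_hf_decompose(img_u8, d=9, sigma_color=75, sigma_space=75):
          img = img_u8.astype(np.float32)
          low = cv2.bilateralFilter(img, d, sigma_color, sigma_space)  # edge-preserving LF part
          high = img - low                                             # HF part (rain + texture)
          return low, high

      # Usage: low, high = lf_hf_decompose(cv2.imread("rainy.png", cv2.IMREAD_GRAYSCALE))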

  11. Novel cooperative neural fusion algorithms for image restoration and image fusion.

    PubMed

    Xia, Youshen; Kamel, Mohamed S

    2007-02-01

    To deal with the problem of restoring degraded images with non-Gaussian noise, this paper proposes a novel cooperative neural fusion regularization (CNFR) algorithm for image restoration. Compared with conventional regularization algorithms for image restoration, the proposed CNFR algorithm relaxes the need to estimate the optimal regularization parameter. Furthermore, to enhance the quality of restored images, this paper presents a cooperative neural fusion (CNF) algorithm for image fusion. Compared with existing signal-level image fusion algorithms, the proposed CNF algorithm can greatly reduce the loss of contrast information under blind Gaussian noise environments. The performance analysis shows that the two proposed neural fusion algorithms converge globally to the robust and optimal image estimate. Simulation results confirm that, in different noise environments, the two proposed neural fusion algorithms obtain a better image estimate than several well-known image restoration and image fusion methods.

  12. Image quality characteristics for virtual monoenergetic images using dual-layer spectral detector CT: Comparison with conventional tube-voltage images.

    PubMed

    Sakabe, Daisuke; Funama, Yoshinori; Taguchi, Katsuyuki; Nakaura, Takeshi; Utsunomiya, Daisuke; Oda, Seitaro; Kidoh, Masafumi; Nagayama, Yasunori; Yamashita, Yasuyuki

    2018-05-01

    To investigate the image quality characteristics of virtual monoenergetic images compared with conventional tube-voltage images on dual-layer spectral detector CT (DLCT). Helical scans were performed using a first-generation DLCT scanner, two different sizes of acrylic cylindrical phantoms, and a Catphan phantom. Three different iodine concentrations were inserted into the phantom center. The single tube voltage for obtaining virtual monoenergetic images was set to 120 or 140 kVp. Conventional 120- and 140-kVp images and virtual monoenergetic images (40-200 keV) were reconstructed at a slice thickness of 1.0 mm. The CT number and image noise were measured for each iodine concentration and for water on the 120-kVp images and virtual monoenergetic images. The noise power spectrum (NPS) was also calculated. The iodine CT numbers for the iodinated enhancing materials were similar regardless of phantom size and acquisition method. Compared with the iodine CT numbers of the conventional 120-kVp images, those of the monoenergetic 40-, 50-, and 60-keV images increased by approximately 3.0-, 1.9-, and 1.3-fold, respectively. The image noise values for each virtual monoenergetic image were similar (for example, 24.6 HU at 40 keV and 23.3 HU at 200 keV obtained at 120 kVp and 30-cm phantom size). The NPS curves of the 70-keV and 120-kVp images for a 1.0-mm slice thickness were similar over the entire frequency range. Virtual monoenergetic images show stable image noise over the entire energy range and an improved contrast-to-noise ratio compared with conventional tube-voltage images on the dual-layer spectral detector CT. Copyright © 2018 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  13. Image Guidance

    EPA Pesticide Factsheets

    Guidance that explains the process for getting images approved for One EPA Web microsites and resource directories. It includes an appendix showing examples of what makes some images better than others and how some images convey meaning more effectively than others.

  14. EDITORIAL: Imaging systems and techniques Imaging systems and techniques

    NASA Astrophysics Data System (ADS)

    Yang, Wuqiang; Giakos, George; Nikita, Konstantina; Pastorino, Matteo; Karras, Dimitrios

    2009-10-01

    The papers in this special issue focus on providing the state-of-the-art approaches and solutions to some of the most challenging imaging areas, such as the design, development, evaluation and applications of imaging systems, measuring techniques, image processing algorithms and instrumentation, with an ultimate aim of enhancing the measurement accuracy and image quality. This special issue explores the principles, engineering developments and applications of new imaging systems and techniques, and encourages broad discussion of imaging methodologies, shaping the future and identifying emerging trends. The multi-faceted field of imaging requires drastic adaptation to the rapid changes in our society, economy, environment and technological evolution. There is an urgent need to address new problems, which tend to be either static but complex, or dynamic, e.g. rapidly evolving with time, with many unknowns, and to propose innovative solutions. For instance, the battles against cancer and terror, monitoring of space resources and enhanced awareness, management of natural resources and environmental monitoring are some of the areas that need to be addressed. The complexity of the involved imaging scenarios and demanding design parameters, e.g. speed, signal-to-noise ratio (SNR), specificity, contrast, spatial resolution, scatter rejection, complex background and harsh environments, necessitate the development of a multi-functional, scalable and efficient imaging suite of sensors, solutions driven by innovation, and operation on diverse detection and imaging principles. Efficient medical imaging techniques capable of providing physiological information at the molecular level present another important research area. Advanced metabolic and functional imaging techniques, operating on multiple physical principles, and using high-resolution, high-selectivity nano-imaging methods, quantum dots, nanoparticles, biomarkers, nanostructures, nanosensors, micro-array imaging chips

  15. Intraoperative imaging-guided cancer surgery: from current fluorescence molecular imaging methods to future multi-modality imaging technology.

    PubMed

    Chi, Chongwei; Du, Yang; Ye, Jinzuo; Kou, Deqiang; Qiu, Jingdan; Wang, Jiandong; Tian, Jie; Chen, Xiaoyuan

    2014-01-01

    Cancer is a major threat to human health. Diagnosis and treatment using precision medicine is expected to be an effective method for preventing the initiation and progression of cancer. Although anatomical and functional imaging techniques such as radiography, computed tomography (CT), magnetic resonance imaging (MRI) and positron emission tomography (PET) have played an important role for accurate preoperative diagnostics, for the most part these techniques cannot be applied intraoperatively. Optical molecular imaging is a promising technique that provides a high degree of sensitivity and specificity in tumor margin detection. Furthermore, existing clinical applications have proven that optical molecular imaging is a powerful intraoperative tool for guiding surgeons performing precision procedures, thus enabling radical resection and improved survival rates. However, detection depth limitation exists in optical molecular imaging methods and further breakthroughs from optical to multi-modality intraoperative imaging methods are needed to develop more extensive and comprehensive intraoperative applications. Here, we review the current intraoperative optical molecular imaging technologies, focusing on contrast agents and surgical navigation systems, and then discuss the future prospects of multi-modality imaging technology for intraoperative imaging-guided cancer surgery.

  16. Intraoperative Imaging-Guided Cancer Surgery: From Current Fluorescence Molecular Imaging Methods to Future Multi-Modality Imaging Technology

    PubMed Central

    Chi, Chongwei; Du, Yang; Ye, Jinzuo; Kou, Deqiang; Qiu, Jingdan; Wang, Jiandong; Tian, Jie; Chen, Xiaoyuan

    2014-01-01

    Cancer is a major threat to human health. Diagnosis and treatment using precision medicine is expected to be an effective method for preventing the initiation and progression of cancer. Although anatomical and functional imaging techniques such as radiography, computed tomography (CT), magnetic resonance imaging (MRI) and positron emission tomography (PET) have played an important role for accurate preoperative diagnostics, for the most part these techniques cannot be applied intraoperatively. Optical molecular imaging is a promising technique that provides a high degree of sensitivity and specificity in tumor margin detection. Furthermore, existing clinical applications have proven that optical molecular imaging is a powerful intraoperative tool for guiding surgeons performing precision procedures, thus enabling radical resection and improved survival rates. However, detection depth limitation exists in optical molecular imaging methods and further breakthroughs from optical to multi-modality intraoperative imaging methods are needed to develop more extensive and comprehensive intraoperative applications. Here, we review the current intraoperative optical molecular imaging technologies, focusing on contrast agents and surgical navigation systems, and then discuss the future prospects of multi-modality imaging technology for intraoperative imaging-guided cancer surgery. PMID:25250092

  17. A novel data processing technique for image reconstruction of penumbral imaging

    NASA Astrophysics Data System (ADS)

    Xie, Hongwei; Li, Hongyun; Xu, Zeping; Song, Guzhou; Zhang, Faqiang; Zhou, Lin

    2011-06-01

    CT image reconstruction techniques were applied to the data processing of penumbral imaging. Compared with traditional processing techniques for penumbral coded-pinhole images, such as Wiener, Lucy-Richardson, and blind deconvolution, this approach is new: for the first time, the coded-aperture processing is independent of the point spread function of the imaging diagnostic system. In this way, the technical obstacle in traditional coded-pinhole image processing caused by the uncertainty of the system's point spread function is overcome. Based on the theoretical study, simulations of penumbral imaging and image reconstruction were carried out and gave fairly good results. In a visible-light experiment, a point light source was used to irradiate a 5 mm × 5 mm object after diffuse scattering and volume scattering, and penumbral images were acquired with an aperture size of about 20 mm. Finally, the CT image reconstruction technique was used for image reconstruction and provided a fairly good result.

  18. Toward a perceptual image quality assessment of color quantized images

    NASA Astrophysics Data System (ADS)

    Frackiewicz, Mariusz; Palus, Henryk

    2018-04-01

    Color image quantization is an important operation in the field of color image processing. In this paper, we consider new perceptual image quality metrics for the assessment of quantized images. These types of metrics, e.g. DSCSI, MDSIs, MDSIm and HPSI, achieve the highest correlation coefficients with MOS in tests on six publicly available image databases. The research was limited to images distorted by two types of compression: JPG and JPG2K. Statistical analysis of the correlation coefficients based on the Friedman test and post-hoc procedures showed that the differences between the four new perceptual metrics are not statistically significant.
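
    The statistical comparison can be illustrated with SciPy's Friedman test applied to per-database correlation coefficients; the numbers below are made-up placeholders rather than the paper's data, and the post-hoc procedures are omitted.

      # Friedman test across quality metrics evaluated on the same databases.
      import numpy as np
      from scipy.stats import friedmanchisquare

      # Rows: image databases (blocks); columns: metrics (DSCSI, MDSIs, MDSIm, HPSI).
      corr = np.array([
          [0.91, 0.93, 0.92, 0.90],
          [0.90, 0.88, 0.89, 0.91],
          [0.94, 0.93, 0.92, 0.93],
          [0.89, 0.90, 0.92, 0.91],
          [0.91, 0.89, 0.90, 0.92],
          [0.92, 0.93, 0.91, 0.90],
      ])

      stat, p = friedmanchisquare(*(corr[:, j] for j in range(corr.shape[1])))
      print(f"Friedman chi-square = {stat:.2f}, p = {p:.3f}")
      # A large p-value would be consistent with the finding that the differences
      # between the four perceptual metrics are not statistically significant.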

  19. Image analysis and modeling in medical image computing. Recent developments and advances.

    PubMed

    Handels, H; Deserno, T M; Meinzer, H-P; Tolxdorff, T

    2012-01-01

    Medical image computing is of growing importance in medical diagnostics and image-guided therapy. Nowadays, image analysis systems integrating advanced image computing methods are used in practice e.g. to extract quantitative image parameters or to support the surgeon during a navigated intervention. However, the grade of automation, accuracy, reproducibility and robustness of medical image computing methods has to be increased to meet the requirements in clinical routine. In the focus theme, recent developments and advances in the field of modeling and model-based image analysis are described. The introduction of models in the image analysis process enables improvements of image analysis algorithms in terms of automation, accuracy, reproducibility and robustness. Furthermore, model-based image computing techniques open up new perspectives for prediction of organ changes and risk analysis of patients. Selected contributions are assembled to present latest advances in the field. The authors were invited to present their recent work and results based on their outstanding contributions to the Conference on Medical Image Computing BVM 2011 held at the University of Lübeck, Germany. All manuscripts had to pass a comprehensive peer review. Modeling approaches and model-based image analysis methods showing new trends and perspectives in model-based medical image computing are described. Complex models are used in different medical applications and medical images like radiographic images, dual-energy CT images, MR images, diffusion tensor images as well as microscopic images are analyzed. The applications emphasize the high potential and the wide application range of these methods. The use of model-based image analysis methods can improve segmentation quality as well as the accuracy and reproducibility of quantitative image analysis. Furthermore, image-based models enable new insights and can lead to a deeper understanding of complex dynamic mechanisms in the human body

  20. A generalized framework unifying image registration and respiratory motion models and incorporating image reconstruction, for partial image data or full images

    NASA Astrophysics Data System (ADS)

    McClelland, Jamie R.; Modat, Marc; Arridge, Simon; Grimes, Helen; D'Souza, Derek; Thomas, David; O' Connell, Dylan; Low, Daniel A.; Kaza, Evangelia; Collins, David J.; Leach, Martin O.; Hawkes, David J.

    2017-06-01

    Surrogate-driven respiratory motion models relate the motion of the internal anatomy to easily acquired respiratory surrogate signals, such as the motion of the skin surface. They are usually built by first using image registration to determine the motion from a number of dynamic images, and then fitting a correspondence model relating the motion to the surrogate signals. In this paper we present a generalized framework that unifies the image registration and correspondence model fitting into a single optimization. This allows the use of ‘partial’ imaging data, such as individual slices, projections, or k-space data, where it would not be possible to determine the motion from an individual frame of data. Motion compensated image reconstruction can also be incorporated using an iterative approach, so that both the motion and a motion-free image can be estimated from the partial image data. The framework has been applied to real 4DCT, Cine CT, multi-slice CT, and multi-slice MR data, as well as simulated datasets from a computer phantom. This includes the use of a super-resolution reconstruction method for the multi-slice MR data. Good results were obtained for all datasets, including quantitative results for the 4DCT and phantom datasets where the ground truth motion was known or could be estimated.

  1. A generalized framework unifying image registration and respiratory motion models and incorporating image reconstruction, for partial image data or full images.

    PubMed

    McClelland, Jamie R; Modat, Marc; Arridge, Simon; Grimes, Helen; D'Souza, Derek; Thomas, David; Connell, Dylan O'; Low, Daniel A; Kaza, Evangelia; Collins, David J; Leach, Martin O; Hawkes, David J

    2017-06-07

    Surrogate-driven respiratory motion models relate the motion of the internal anatomy to easily acquired respiratory surrogate signals, such as the motion of the skin surface. They are usually built by first using image registration to determine the motion from a number of dynamic images, and then fitting a correspondence model relating the motion to the surrogate signals. In this paper we present a generalized framework that unifies the image registration and correspondence model fitting into a single optimization. This allows the use of 'partial' imaging data, such as individual slices, projections, or k-space data, where it would not be possible to determine the motion from an individual frame of data. Motion compensated image reconstruction can also be incorporated using an iterative approach, so that both the motion and a motion-free image can be estimated from the partial image data. The framework has been applied to real 4DCT, Cine CT, multi-slice CT, and multi-slice MR data, as well as simulated datasets from a computer phantom. This includes the use of a super-resolution reconstruction method for the multi-slice MR data. Good results were obtained for all datasets, including quantitative results for the 4DCT and phantom datasets where the ground truth motion was known or could be estimated.

  2. Medical Image Databases

    PubMed Central

    Tagare, Hemant D.; Jaffe, C. Carl; Duncan, James

    1997-01-01

    Information contained in medical images differs considerably from that residing in alphanumeric format. The difference can be attributed to four characteristics: (1) the semantics of medical knowledge extractable from images is imprecise; (2) image information contains form and spatial data, which are not expressible in conventional language; (3) a large part of image information is geometric; (4) diagnostic inferences derived from images rest on an incomplete, continuously evolving model of normality. This paper explores the differentiating characteristics of text versus images and their impact on design of a medical image database intended to allow content-based indexing and retrieval. One strategy for implementing medical image databases is presented, which employs object-oriented iconic queries, semantics by association with prototypes, and a generic schema. PMID:9147338

  3. Statistical characterization of portal images and noise from portal imaging systems.

    PubMed

    González-López, Antonio; Morales-Sánchez, Juan; Verdú-Monedero, Rafael; Larrey-Ruiz, Jorge

    2013-06-01

    In this paper, we consider the statistical characteristics of so-called portal images, which are acquired prior to radiotherapy treatment, as well as the noise present in portal imaging systems, in order to analyze whether the noise and image features well known from other image modalities, such as natural images, can also be found in the portal imaging modality. The study is carried out in the spatial image domain, in the Fourier domain, and finally in the wavelet domain. The probability density of the noise in the spatial image domain, the power spectral densities of the image and noise, and the marginal, joint, and conditional statistical distributions of the wavelet coefficients are estimated. Moreover, the statistical dependencies between noise and signal are investigated. The obtained results are compared with practical and useful references, such as the characteristics of natural images and white noise. Finally, we discuss the implications of the results for several noise reduction methods that operate in the wavelet domain.
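
    Two of the estimates mentioned, a 2-D power spectral density and the marginal distribution of wavelet detail coefficients, can be sketched as follows; the window, wavelet family ('db4'), and decomposition level are assumptions made for illustration rather than the authors' exact choices.

      # Windowed 2-D periodogram and wavelet detail-coefficient histogram.
      import numpy as np
      import pywt

      def psd_2d(img):
          img = img.astype(np.float64) - img.mean()
          win = np.hanning(img.shape[0])[:, None] * np.hanning(img.shape[1])[None, :]
          F = np.fft.fftshift(np.fft.fft2(img * win))
          return (np.abs(F) ** 2) / img.size

      def wavelet_detail_histogram(img, wavelet="db4", level=3, bins=101):
          coeffs = pywt.wavedec2(img.astype(np.float64), wavelet, level=level)
          details = np.concatenate([band.ravel() for lvl in coeffs[1:] for band in lvl])
          return np.histogram(details, bins=bins, density=True)

      # Usage with a portal image loaded as a 2-D numpy array 'portal':
      #   spectrum = psd_2d(portal)
      #   hist, edges = wavelet_detail_histogram(portal)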

  4. Imaging Oncogene Expression

    PubMed Central

    Mukherjee, Archana; Wickstrom, Eric

    2009-01-01

    This review briefly outlines the importance of molecular imaging, particularly imaging of endogenous gene expression for noninvasive genetic analysis of radiographic masses. The concept of antisense imaging agents and the advantages and challenges in the development of hybridization probes for in vivo imaging are described. An overview of the investigations on oncogene expression imaging is given. Finally, the need for further improvement in antisense-based imaging agents and directions to improve oncogene mRNA targeting is stated. PMID:19264436

  5. Imaging efficacy of a targeted imaging agent for fluorescence endoscopy

    NASA Astrophysics Data System (ADS)

    Healey, A. J.; Bendiksen, R.; Attramadal, T.; Bjerke, R.; Waagene, S.; Hvoslef, A. M.; Johannesen, E.

    2008-02-01

    Colorectal cancer is a major cause of cancer death. A significant unmet clinical need exists in the area of screening for earlier and more accurate diagnosis and treatment. We have identified a fluorescence imaging agent targeted to an early-stage molecular marker for colorectal cancer. The agent is administered intravenously and imaged in a far-red imaging channel as an adjunct to white light endoscopy. There is experimental evidence of preclinical proof of mechanism for the agent. In order to assess potential clinical efficacy, imaging was performed with a prototype fluorescence endoscope system designed to produce clinically relevant images. A clinical laparoscope system was modified for fluorescence imaging. The system was optimised for sensitivity. Images were recorded at settings matching those expected with a clinical endoscope implementation (at video frame rate operation). The animal model comprised an HCT-15 xenograft tumour expressing the target at concentration levels expected in early-stage colorectal cancer. Tumours were grown subcutaneously. The imaging agent was administered intravenously at a dose of 50 nmol/kg body weight. The animals were killed 2 hours post administration and prepared for imaging. A 3-4 mm diameter, 1.6 mm thick slice of viable tumour was placed over the opened colon and imaged with the laparoscope system. A receiver operating characteristic analysis was applied to the imaging results. An area under the curve of 0.98, a sensitivity of 87% [73, 96], and a specificity of 100% [93, 100] were obtained.

  6. A Sensitive TLRH Targeted Imaging Technique for Ultrasonic Molecular Imaging

    PubMed Central

    Hu, Xiaowen; Zheng, Hairong; Kruse, Dustin E.; Sutcliffe, Patrick; Stephens, Douglas N.; Ferrara, Katherine W.

    2010-01-01

    The primary goals of ultrasound molecular imaging are the detection and imaging of ultrasound contrast agents (microbubbles) that are bound to specific vascular surface receptors. Imaging methods that can sensitively and selectively detect and distinguish bound microbubbles from freely circulating microbubbles (free microbubbles) and the surrounding tissue are critically important for the practical application of ultrasound contrast molecular imaging. Microbubbles excited by low-frequency acoustic pulses emit wide-band echoes with a bandwidth extending beyond 20 MHz; we refer to this technique as TLRH (transmission at a low frequency and reception at a high frequency). Using this wideband, transient echo, we have developed and implemented a targeted imaging technique incorporating a multi-frequency co-linear array and the Siemens Antares® imaging system. The multi-frequency co-linear array integrates a center 5.4 MHz array, used to receive echoes and produce radiation force, and two outer 1.5 MHz arrays used to transmit low-frequency incident pulses. The targeted imaging technique makes use of an acoustic radiation force sub-sequence to enhance accumulation and a TLRH imaging sub-sequence to detect bound microbubbles. The radiofrequency (RF) data obtained from the TLRH imaging sub-sequence are processed to separate the echo signatures of tissue, free microbubbles, and bound microbubbles. By imaging biotin-coated microbubbles targeted to avidin-coated cellulose tubes, we demonstrate that the proposed method has a high contrast-to-tissue ratio (up to 34 dB) and a high sensitivity to bound microbubbles (with the ratio of echoes from bound microbubbles versus free microbubbles extending up to 23 dB). The effects of the imaging pulse acoustic pressure, the radiation force sub-sequence, and the use of various slow-time filters on the targeted imaging quality are studied. The TLRH targeted imaging method is demonstrated in this study to provide sensitive and selective

  7. Mass density images from the diffraction enhanced imaging technique.

    PubMed

    Hasnah, M O; Parham, C; Pisano, E D; Zhong, Z; Oltulu, O; Chapman, D

    2005-02-01

    Conventional x-ray radiography measures the projected x-ray attenuation of an object. It requires attenuation differences to obtain contrast of embedded features. In general, the best absorption contrast is obtained at x-ray energies where the absorption is high, meaning a high absorbed dose. Diffraction-enhanced imaging (DEI) derives contrast from absorption, refraction, and extinction. The refraction angle image of DEI visualizes the spatial gradient of the projected electron density of the object. The projected electron density often correlates well with the projected mass density and projected absorption in soft-tissue imaging, yet the mass density is not an "energy"-dependent property of the object, as is the case of absorption. This simple difference can lead to imaging with less x-ray exposure or dose. In addition, the mass density image can be directly compared (i.e., a signal-to-noise comparison) with conventional radiography. We present the method of obtaining the mass density image, the results of experiments in which comparisons are made with radiography, and an application of the method to breast cancer imaging.

  8. Computational polarization difference underwater imaging based on image fusion

    NASA Astrophysics Data System (ADS)

    Han, Hongwei; Zhang, Xiaohui; Guan, Feng

    2016-01-01

    Polarization difference imaging (PDI) can improve the quality of images acquired underwater, whether the background and veiling light are unpolarized or partially polarized. Computational PDI, which replaces the mechanical rotation of the polarization analyzer and shortens the time needed to select the optimum orthogonal ǁ and ⊥ axes, is an improvement over conventional PDI, but it originally obtains the output image by manually setting the weight coefficient to a single constant for all pixels. In this paper, an algorithm is proposed to combine the Q and U parameters of the Stokes vector through pixel-level image fusion based on the non-subsampled contourlet transform. An experimental system, built from a green LED array with a polarizer illuminating a flat target immersed in water and a CCD with a polarization analyzer capturing target images at different analyzer angles, is used to verify the proposed algorithm. The results show that the output of our algorithm reveals more details of the flat target and has higher contrast than the original computational polarization difference imaging.
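
    The sketch below illustrates the quantities being combined: Stokes I, Q, and U computed from intensity images behind a polarization analyzer at 0, 45, 90, and 135 degrees, followed by a pixel-level fusion of Q and U. A one-level wavelet max-absolute rule is used here only as a simplified stand-in for the non-subsampled contourlet fusion of the paper, and the input images are synthetic placeholders.

        # Sketch: Stokes parameters from analyzer-angle images and a simple
        # pixel-level fusion of Q and U (wavelet max-abs rule standing in for NSCT).
        import numpy as np
        import pywt

        def fuse_max_abs(a, b, wavelet='db2'):
            ca, (ch, cv, cd) = pywt.dwt2(a, wavelet)
            cb, (dh, dv, dd) = pywt.dwt2(b, wavelet)
            pick = lambda x, y: np.where(np.abs(x) >= np.abs(y), x, y)
            fused = (0.5 * (ca + cb), (pick(ch, dh), pick(cv, dv), pick(cd, dd)))
            return pywt.idwt2(fused, wavelet)

        rng = np.random.default_rng(2)
        I0, I45, I90, I135 = (rng.random((128, 128)) for _ in range(4))  # placeholders

        I = I0 + I90          # total intensity
        Q = I0 - I90          # conventional polarization-difference image
        U = I45 - I135        # difference along the 45/135 degree axes

        fused_pdi = fuse_max_abs(Q, U)   # fused output image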

  9. An excitation wavelength-scanning spectral imaging system for preclinical imaging

    NASA Astrophysics Data System (ADS)

    Leavesley, Silas; Jiang, Yanan; Patsekin, Valery; Rajwa, Bartek; Robinson, J. Paul

    2008-02-01

    Small-animal fluorescence imaging is a rapidly growing field, driven by applications in cancer detection and pharmaceutical therapies. However, the practical use of this imaging technology is limited by image-quality issues related to autofluorescence background from animal tissues, as well as attenuation of the fluorescence signal due to scatter and absorption. To combat these problems, spectral imaging and analysis techniques are being employed to separate the fluorescence signal from background autofluorescence. To date, these technologies have focused on detecting the fluorescence emission spectrum at a fixed excitation wavelength. We present an alternative to this technique, an imaging spectrometer that detects the fluorescence excitation spectrum at a fixed emission wavelength. The advantages of this approach include increased available information for discrimination of fluorescent dyes, decreased optical radiation dose to the animal, and the ability to scan a continuous wavelength range instead of sampling discrete wavelengths. This excitation-scanning imager utilizes an acousto-optic tunable filter (AOTF), with supporting optics, to scan the excitation spectrum. Advanced image acquisition and analysis software has also been developed for classification and unmixing of the spectral image sets. Filtering has been implemented in a single-pass configuration with a bandwidth (full width at half maximum) of 16 nm at a 550 nm central diffracted wavelength. We have characterized AOTF filtering over a wide range of incident light angles, much wider than has been previously reported in the literature, and we show how changes in incident light angle can be used to attenuate AOTF side lobes and alter the bandwidth. A new parameter, the in-band to out-of-band ratio, was defined to assess the quality of the filtered excitation light. Additional parameters were measured to allow objective characterization of the AOTF and the imager as a whole. This is necessary for comparing the

  10. Development of image mappers for hyperspectral biomedical imaging applications

    PubMed Central

    Kester, Robert T.; Gao, Liang; Tkaczyk, Tomasz S.

    2010-01-01

    A new design and fabrication method is presented for creating large-format (>100 mirror facets) image mappers for a snapshot hyperspectral biomedical imaging system called an image mapping spectrometer (IMS). To verify this approach a 250 facet image mapper with 25 multiple-tilt angles is designed for a compact IMS that groups the 25 subpupils in a 5 × 5 matrix residing within a single collecting objective's pupil. The image mapper is fabricated by precision diamond raster fly cutting using surface-shaped tools. The individual mirror facets have minimal edge eating, tilt errors of <1 mrad, and an average roughness of 5.4 nm. PMID:20357875

  11. Biomedical photoacoustic imaging

    PubMed Central

    Beard, Paul

    2011-01-01

    Photoacoustic (PA) imaging, also called optoacoustic imaging, is a new biomedical imaging modality based on the use of laser-generated ultrasound that has emerged over the last decade. It is a hybrid modality, combining the high-contrast and spectroscopic-based specificity of optical imaging with the high spatial resolution of ultrasound imaging. In essence, a PA image can be regarded as an ultrasound image in which the contrast depends not on the mechanical and elastic properties of the tissue, but on its optical properties, specifically optical absorption. As a consequence, it offers greater specificity than conventional ultrasound imaging with the ability to detect haemoglobin, lipids, water and other light-absorbing chromophores, but with greater penetration depth than purely optical imaging modalities that rely on ballistic photons. As well as visualizing anatomical structures such as the microvasculature, it can also provide functional information in the form of blood oxygenation, blood flow and temperature. All of this can be achieved over a wide range of length scales from micrometres to centimetres with scalable spatial resolution. These attributes lend PA imaging to a wide variety of applications in clinical medicine, preclinical research and basic biology for studying cancer, cardiovascular disease, abnormalities of the microcirculation and other conditions. With the emergence of a variety of truly compelling in vivo images obtained by a number of groups around the world in the last 2–3 years, the technique has come of age and the promise of PA imaging is now beginning to be realized. Recent highlights include the demonstration of whole-body small-animal imaging, the first demonstrations of molecular imaging, the introduction of new microscopy modes and the first steps towards clinical breast imaging being taken as well as a myriad of in vivo preclinical imaging studies. In this article, the underlying physical principles of the technique, its practical

  12. Image-adapted visually weighted quantization matrices for digital image compression

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    1994-01-01

    A method for performing image compression that eliminates redundant and invisible image components is presented. The image compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The present invention adapts, or customizes, the quantization matrix to the image being compressed. The quantization matrix is constructed using visual masking by luminance and contrast together with an error-pooling technique, resulting in a minimum perceptual error for any given bit rate, or a minimum bit rate for any given perceptual error.
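
    As background on the role of such a matrix, the sketch below quantizes the DCT coefficients of an 8 x 8 block with a quantization table and reconstructs the block. The table shown is the standard JPEG luminance matrix, used purely as a stand-in for the image-adapted, visually weighted matrix described in the patent.

        # Sketch: 8x8 block DCT quantization with a quantization matrix (standard
        # JPEG luminance table as a stand-in for the visually adapted matrix).
        import numpy as np
        from scipy.fftpack import dct, idct

        Q = np.array([[16,11,10,16,24,40,51,61], [12,12,14,19,26,58,60,55],
                      [14,13,16,24,40,57,69,56], [14,17,22,29,51,87,80,62],
                      [18,22,37,56,68,109,103,77], [24,35,55,64,81,104,113,92],
                      [49,64,78,87,103,121,120,101], [72,92,95,98,112,100,103,99]])

        def dct2(b):  return dct(dct(b, axis=0, norm='ortho'), axis=1, norm='ortho')
        def idct2(b): return idct(idct(b, axis=0, norm='ortho'), axis=1, norm='ortho')

        def compress_block(block, qmat=Q):
            coeffs = dct2(block - 128.0)            # level-shifted forward DCT
            quantized = np.round(coeffs / qmat)     # coarse steps discard invisible detail
            return idct2(quantized * qmat) + 128.0  # dequantize and invert

        block = np.random.default_rng(3).integers(0, 256, (8, 8)).astype(float)
        print(np.abs(compress_block(block) - block).mean())  # mean reconstruction error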

  13. JPEG2000 Image Compression on Solar EUV Images

    NASA Astrophysics Data System (ADS)

    Fischer, Catherine E.; Müller, Daniel; De Moortel, Ineke

    2017-01-01

    For future solar missions as well as ground-based telescopes, efficient ways to return and process data have become increasingly important. Solar Orbiter, which is the next ESA/NASA mission to explore the Sun and the heliosphere, is a deep-space mission, which implies a limited telemetry rate that makes efficient onboard data compression a necessity to achieve the mission science goals. Missions like the Solar Dynamics Observatory (SDO) and future ground-based telescopes such as the Daniel K. Inouye Solar Telescope, on the other hand, face the challenge of making petabyte-sized solar data archives accessible to the solar community. New image compression standards address these challenges by implementing efficient and flexible compression algorithms that can be tailored to user requirements. We analyse solar images from the Atmospheric Imaging Assembly (AIA) instrument onboard SDO to study the effect of lossy JPEG2000 (from the Joint Photographic Experts Group 2000) image compression at different bitrates. To assess the quality of compressed images, we use the mean structural similarity (MSSIM) index as well as the widely used peak signal-to-noise ratio (PSNR) as metrics and compare the two in the context of solar EUV images. In addition, we perform tests to validate the scientific use of the lossily compressed images by analysing examples of an on-disc and off-limb coronal-loop oscillation time-series observed by AIA/SDO.
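
    For reference, the sketch below computes the two quality metrics named in the abstract, PSNR and mean SSIM, for a pair of images; it uses numpy and scikit-image on synthetic arrays rather than AIA data.

        # Sketch: PSNR and mean SSIM between an original and a degraded image.
        import numpy as np
        from skimage.metrics import structural_similarity

        def psnr(ref, test, data_range):
            mse = np.mean((ref - test) ** 2)
            return 10.0 * np.log10(data_range ** 2 / mse)

        rng = np.random.default_rng(4)
        original = rng.random((256, 256))
        degraded = original + rng.normal(0, 0.02, original.shape)  # stand-in compression artefact

        dr = original.max() - original.min()
        print("PSNR [dB]:", psnr(original, degraded, dr))
        print("MSSIM    :", structural_similarity(original, degraded, data_range=dr))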

  14. Extracting flat-field images from scene-based image sequences using phase correlation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Caron, James N., E-mail: Caron@RSImd.com; Montes, Marcos J.; Obermark, Jerome L.

    Flat-field image processing is an essential step in producing high-quality and radiometrically calibrated images. Flat-fielding corrects for variations in the gain of the focal plane array electronics and for unequal illumination from the system optics. Typically, a flat-field image is captured by imaging a radiometrically uniform surface; the flat-field image is then normalized and removed from the images. There are circumstances, such as in remote sensing, where a flat-field image cannot be acquired in this manner. For these cases, we developed a phase-correlation method that allows the extraction of an effective flat-field image from a sequence of scene-based displaced images. The method uses sub-pixel phase correlation image registration to align the sequence and estimate the static scene. The scene is removed from the sequence, producing a sequence of misaligned flat-field images. An average flat-field image is derived from the realigned flat-field sequence.
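
    A minimal sketch of the idea is given below: frames are registered by phase correlation (integer-pixel only; the sub-pixel refinement of the paper is omitted), the static scene is estimated as the median of the aligned frames, the scene is divided out to leave misaligned gain maps, and the realigned gain maps are averaged into a flat-field estimate. The synthetic frames and the helper names are assumptions for illustration.

        # Sketch: scene-based flat-field extraction from displaced frames.
        import numpy as np

        def phase_corr_shift(ref, img):
            # Integer-pixel shift aligning `img` to `ref` via the cross-power spectrum.
            R = np.fft.fft2(ref) * np.conj(np.fft.fft2(img))
            corr = np.fft.ifft2(R / (np.abs(R) + 1e-12)).real
            peak = np.array(np.unravel_index(np.argmax(corr), corr.shape))
            shape = np.array(corr.shape)
            return np.where(peak > shape // 2, peak - shape, peak)  # wrap to signed shifts

        def extract_flat_field(frames):
            ref = frames[0]
            shifts = [phase_corr_shift(ref, f) for f in frames]
            aligned = [np.roll(f, tuple(s), axis=(0, 1)) for f, s in zip(frames, shifts)]
            scene = np.median(aligned, axis=0)              # static-scene estimate
            # Dividing the scene out leaves per-frame (misaligned) flat-field maps;
            # undo each registration shift before averaging them.
            flats = [np.roll(a / (scene + 1e-12), tuple(-s), axis=(0, 1))
                     for a, s in zip(aligned, shifts)]
            flat = np.mean(flats, axis=0)
            return flat / flat.mean()                       # normalized flat field

        rng = np.random.default_rng(5)
        scene = rng.random((128, 128))
        true_flat = 1.0 + 0.1 * rng.standard_normal((128, 128))
        frames = [np.roll(scene, (dy, dx), axis=(0, 1)) * true_flat
                  for dy, dx in [(0, 0), (3, -2), (-5, 4), (7, 1)]]
        flat_estimate = extract_flat_field(frames)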

  15. Image processing based detection of lung cancer on CT scan images

    NASA Astrophysics Data System (ADS)

    Abdillah, Bariqi; Bustamam, Alhadi; Sarwinda, Devvi

    2017-10-01

    In this paper, we implement and analyze image processing methods for the detection of lung cancer. Image processing techniques are widely used in several medical problems for picture enhancement in the detection phase to support early medical treatment. In this research we propose a detection method for lung cancer based on image segmentation, which is one of the intermediate-level tasks in image processing. Marker-controlled watershed and region growing approaches are used to segment the CT scan images. The detection phases consist of image enhancement using a Gabor filter, image segmentation, and feature extraction. The experimental results demonstrate the effectiveness of our approach and show that the best approach for main feature detection is the watershed-with-masking method, which has high accuracy and is robust.
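
    The segmentation step can be illustrated with the short sketch below, which runs a marker-controlled watershed on a (synthetic) CT slice using scikit-image. The threshold-based marker choice and the intensity scale are assumptions for illustration; the enhancement and feature-extraction stages of the paper are omitted.

        # Sketch: marker-controlled watershed segmentation of a normalized CT slice.
        import numpy as np
        from skimage.filters import sobel
        from skimage.segmentation import watershed

        def segment_slice(ct_slice, low=0.2, high=0.6):
            gradient = sobel(ct_slice)                 # edge-strength map
            markers = np.zeros_like(ct_slice, dtype=int)
            markers[ct_slice < low] = 1                # dark (air/lung) seeds
            markers[ct_slice > high] = 2               # bright (tissue) seeds
            labels = watershed(gradient, markers)      # flood outward from the markers
            return labels == 1                         # binary mask of the dark region

        ct_slice = np.random.default_rng(6).random((128, 128))  # placeholder slice
        mask = segment_slice(ct_slice)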

  16. Clinical skin imaging using color spatial frequency domain imaging (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Yang, Bin; Lesicko, John; Moy, Austin J.; Reichenberg, Jason; Tunnell, James W.

    2016-02-01

    Skin diseases are typically associated with underlying biochemical and structural changes compared with normal tissue, which alter the optical properties of the skin lesions, such as tissue absorption and scattering. Although widely used in dermatology clinics, conventional dermatoscopes do not have the ability to selectively image tissue absorption and scattering, which may limit their diagnostic power. Here we report a novel clinical skin imaging technique called color spatial frequency domain imaging (cSFDI), which enhances contrast by rendering color spatial frequency domain (SFD) images at high spatial frequency. Moreover, by tuning the spatial frequency, we can obtain both absorption-weighted and scattering-weighted images. We developed a handheld imaging system specifically for clinical skin imaging. The flexible configuration of the system allows for better access to skin lesions in hard-to-reach regions. A total of 48 lesions from 31 patients were imaged under 470 nm, 530 nm, and 655 nm illumination at a spatial frequency of 0.6 mm⁻¹. The SFD reflectance images at 470 nm, 530 nm, and 655 nm were assigned to the blue (B), green (G), and red (R) channels to render a color SFD image. Our results indicated that color SFD images at f = 0.6 mm⁻¹ revealed properties that were not seen in standard color images. Structural features were enhanced and absorption features were reduced, which helped to identify the sources of the contrast. This imaging technique provides additional insights into skin lesions and may better assist clinical diagnosis.

  17. The effect of image sharpness on quantitative eye movement data and on image quality evaluation while viewing natural images

    NASA Astrophysics Data System (ADS)

    Vuori, Tero; Olkkonen, Maria

    2006-01-01

    The aim of the study is to test both customer image quality ratings (subjective image quality) and physical measurements of user behavior (eye movement tracking) to find customer satisfaction differences between imaging technologies. The methodological aim is to find out whether eye movements could be used quantitatively in image quality preference studies. In general, we want to map objective, physically measurable image quality to subjective evaluations and eye movement data. We conducted a series of image quality tests, in which the test subjects evaluated image quality while we recorded their eye movements. Results show that eye movement parameters change consistently according to the instructions given to the user and according to physical image quality; e.g., saccade duration increased with increasing blur. The results indicate that eye movement tracking could be used to differentiate the image quality evaluation strategies that users have. The results also show that eye movements would help in mapping between technological and subjective image quality. Furthermore, these results give some empirical emphasis to top-down perception processes in image quality perception and evaluation by showing differences between perceptual processes in situations where the cognitive task varies.

  18. Designing Image Operators for MRI-PET Image Fusion of the Brain

    NASA Astrophysics Data System (ADS)

    Márquez, Jorge; Gastélum, Alfonso; Padilla, Miguel A.

    2006-09-01

    Our goal is to obtain images combining, in a useful and precise way, the information from 3D volumes of medical imaging sets. We address two modalities, combining anatomy (Magnetic Resonance Imaging, or MRI) and functional information (Positron Emission Tomography, or PET). Commercial imaging software offers image fusion tools based on fixed blending or color-channel combination of two modalities and color Look-Up Tables (LUTs), without considering the anatomical and functional character of the image features. We used a sensible approach for image fusion that takes advantage mainly of the HSL (Hue, Saturation and Luminosity) color space in order to enhance the fusion results. We further tested operators for gradient and contour extraction to enhance anatomical details, plus other spatial-domain filters for functional features corresponding to wide point-spread-function responses in PET images. A set of image-fusion operators was formulated and tested on PET and MRI acquisitions.
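
    A minimal sketch of this kind of color-space fusion is shown below, using HSV (a close relative of HSL that is readily available in scikit-image): the MRI drives the value channel while the PET drives hue and saturation. The channel assignments and the synthetic, co-registered inputs are assumptions for illustration, not the operators of the paper.

        # Sketch: HSV-based MRI-PET fusion (HSV standing in for HSL).
        import numpy as np
        from skimage.color import hsv2rgb

        def fuse_mri_pet(mri, pet, hue_cold=0.66, hue_hot=0.0):
            hsv = np.zeros(mri.shape + (3,))
            hsv[..., 0] = hue_cold + (hue_hot - hue_cold) * pet   # blue -> red with uptake
            hsv[..., 1] = pet                                     # saturation follows function
            hsv[..., 2] = mri                                     # value carries anatomy
            return hsv2rgb(hsv)

        rng = np.random.default_rng(7)
        mri, pet = rng.random((128, 128)), rng.random((128, 128))  # placeholder, co-registered
        fused_rgb = fuse_mri_pet(mri, pet)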

  19. Automated designation of tie-points for image-to-image coregistration.

    Treesearch

    R.E. Kennedy; W.B. Cohen

    2003-01-01

    Image-to-image registration requires identification of common points in both images (image tie-points: ITPs). Here we describe software implementing an automated, area-based technique for identifying ITPs. The ITP software was designed to follow two strategies: (1) capitalize on human knowledge and pattern recognition strengths, and (2) favour robustness in many...

  20. A virtual image chain for perceived image quality of medical display

    NASA Astrophysics Data System (ADS)

    Marchessoux, Cédric; Jung, Jürgen

    2006-03-01

    This paper describes a virtual image chain for medical display (project VICTOR, granted under the 5th Framework Programme of the European Commission). The chain starts from the raw data of an image digitizer (CR, DR) or from synthetic patterns and covers image enhancement (MUSICA by Agfa) and both display possibilities, hardcopy (film on a viewing box) and softcopy (monitor). A key feature of the chain is a complete image-wise approach. A first prototype is implemented in an object-oriented software platform. The display chain consists of several modules. Raw images are either taken from scanners (CR, DR) or from a pattern generator, in which the characteristics of CR/DR systems are introduced through their MTF and their dose-dependent Poisson noise. The image undergoes enhancement and is then passed to the display stage. For softcopy display, color and monochrome monitors are used in the simulation. The image is down-sampled. The non-linear response of a color monitor is taken into account by the GOG or S-curve model, whereas the DICOM Grayscale Standard Display Function is used for monochrome display. The MTF of the monitor is applied to the image in intensity levels. For hardcopy display, the combination of film, printer, lightbox, and viewing conditions is modeled. The image is up-sampled and the DICOM GSDF or a Kanamori look-up table is applied. An anisotropic model of the printer MTF is applied to the image in intensity levels. The density-dependent color (XYZ) of the hardcopy film is introduced by look-up tables. Finally, a human visual system model is applied to the intensity images (XYZ in cd/m²) in order to eliminate non-visible differences. Comparison yields visible differences, which are quantified by higher-order image quality metrics. A specific image viewer is used for the visualization of the intensity images and the visual difference maps.

  1. How much image noise can be added in cardiac x-ray imaging without loss in perceived image quality?

    NASA Astrophysics Data System (ADS)

    Gislason-Lee, Amber J.; Kumcu, Asli; Kengyelics, Stephen M.; Rhodes, Laura A.; Davies, Andrew G.

    2015-03-01

    Dynamic X-ray imaging systems are used for interventional cardiac procedures to treat coronary heart disease. X-ray settings are controlled automatically by specially designed X-ray dose control mechanisms whose role is to ensure that an adequate level of image quality is maintained with an acceptable radiation dose to the patient. Current commonplace dose control designs quantify image quality by performing a simple technical measurement directly from the image. However, the utility of cardiac X-ray images is in their interpretation by a cardiologist during an interventional procedure, rather than in a technical measurement. With the long-term goal of devising a clinically relevant image quality metric for an intelligent dose control system, we aim to investigate the relationship of image noise with clinical professionals' perception of dynamic image sequences. Computer-generated noise was added, in incremental amounts, to angiograms of five different patients selected to represent the range of adult cardiac patient sizes. A two-alternative forced choice staircase experiment was used to determine the amount of noise which can be added to a patient image sequence without changing the image quality as perceived by clinical professionals. Twenty-five viewing sessions (five for each patient) were completed by thirteen observers. Results demonstrated scope to increase the noise of cardiac X-ray images by up to 21% +/- 8% before it is noticeable by clinical professionals. This indicates a potential for 21% radiation dose reduction, since X-ray image noise and radiation dose are directly related; this would be beneficial to both patients and personnel.
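
    To make the staircase idea concrete, the sketch below runs a simplified one-up/two-down forced-choice staircase on the added-noise percentage, with a simulated observer (a logistic psychometric function with an assumed threshold) standing in for the clinical professionals; all numbers are illustrative, not the study's.

        # Sketch: a 1-up/2-down staircase on the percentage of added noise,
        # driven by a simulated observer with an assumed internal threshold.
        import numpy as np

        rng = np.random.default_rng(8)

        def observer_detects(noise_pct, threshold=21.0, slope=5.0):
            p = 1.0 / (1.0 + np.exp(-(noise_pct - threshold) / slope))  # psychometric function
            return rng.random() < p

        levels, level, step, correct_streak = [], 40.0, 4.0, 0
        for _ in range(60):                                 # fixed number of trials
            if observer_detects(level):
                correct_streak += 1
                if correct_streak == 2:                     # two detections -> reduce the noise
                    level, correct_streak = max(level - step, 0.0), 0
            else:
                level, correct_streak = level + step, 0     # miss -> increase the noise
            levels.append(level)

        print("estimated detection threshold (% added noise):", np.mean(levels[-20:]))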

  2. Hip Imaging in Athletes: Sports Imaging Series.

    PubMed

    Agten, Christoph A; Sutter, Reto; Buck, Florian M; Pfirrmann, Christian W A

    2016-08-01

    Hip or groin pain in athletes is common and clinical presentation is often nonspecific. Imaging is a very important diagnostic step in the work-up of athletes with hip pain. This review article provides an overview on hip biomechanics and discusses strategies for hip imaging modalities such as radiography, ultrasonography, computed tomography, and magnetic resonance (MR) imaging (MR arthrography and traction MR arthrography). The authors explain current concepts of femoroacetabular impingement and the problem of high prevalence of cam- and pincer-type morphology in asymptomatic persons. With the main focus on MR imaging, the authors present abnormalities of the hip joint and the surrounding soft tissues that can occur in athletes: intraarticular and extraarticular hip impingement syndromes, labral and cartilage disease, microinstability of the hip, myotendinous injuries, and athletic pubalgia. (©) RSNA, 2016.

  3. Image degradation characteristics and restoration based on regularization for diffractive imaging

    NASA Astrophysics Data System (ADS)

    Zhi, Xiyang; Jiang, Shikai; Zhang, Wei; Wang, Dawei; Li, Yun

    2017-11-01

    The diffractive membrane optical imaging system is an important development trend for ultra-large-aperture and lightweight space cameras. However, physics-based investigations of diffractive imaging degradation characteristics and corresponding image restoration methods remain scarce. In this paper, the model of image quality degradation for the diffractive imaging system is first deduced mathematically based on diffraction theory, and the degradation characteristics are then analyzed. On this basis, a novel regularization model of image restoration that contains multiple prior constraints is established. After that, the solving approach for the equation with coexisting multiple norms and multiple regularization (prior) parameters is presented. Subsequently, a space-variant PSF image restoration method for large-aperture diffractive imaging systems is proposed, combined with a block-wise treatment of isoplanatic regions. Experimentally, the proposed algorithm demonstrates its capacity to achieve multi-objective improvement, including MTF enhancement, dispersion correction, noise and artifact suppression, and detail preservation, and produces satisfactory visual quality. This can provide a scientific basis for applications and shows potential for future space applications of diffractive membrane imaging technology.

  4. Evaluation of image quality in terahertz pulsed imaging using test objects.

    PubMed

    Fitzgerald, A J; Berry, E; Miles, R E; Zinovev, N N; Smith, M A; Chamberlain, J M

    2002-11-07

    As with other imaging modalities, the performance of terahertz (THz) imaging systems is limited by factors of spatial resolution, contrast and noise. The purpose of this paper is to introduce test objects and image analysis methods to evaluate and compare THz image quality in a quantitative and objective way, so that alternative terahertz imaging system configurations and acquisition techniques can be compared, and the range of image parameters can be assessed. Two test objects were designed and manufactured, one to determine the modulation transfer functions (MTF) and the other to derive image signal to noise ratio (SNR) at a range of contrasts. As expected the higher THz frequencies had larger MTFs, and better spatial resolution as determined by the spatial frequency at which the MTF dropped below the 20% threshold. Image SNR was compared for time domain and frequency domain image parameters and time delay based images consistently demonstrated higher SNR than intensity based parameters such as relative transmittance because the latter are more strongly affected by the sources of noise in the THz system such as laser fluctuations and detector shot noise.
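
    One common way to estimate an MTF, sketched below, is to differentiate a measured edge-spread function into a line-spread function and take the magnitude of its Fourier transform, then read off the frequency at which the curve drops below the 20% threshold. The synthetic edge profile and sample spacing are assumptions for illustration and not the specific test-object procedure of the paper.

        # Sketch: MTF from an edge-spread function and the 20% cutoff frequency.
        import numpy as np

        def mtf_from_esf(esf, dx):
            lsf = np.gradient(esf)                        # line-spread function
            lsf = lsf * np.hanning(lsf.size)              # taper to reduce spectral leakage
            mtf = np.abs(np.fft.rfft(lsf))
            mtf /= mtf[0]                                 # normalize to DC
            freqs = np.fft.rfftfreq(lsf.size, d=dx)       # cycles per unit length
            return freqs, mtf

        x = np.linspace(-5, 5, 256)
        esf = 1.0 / (1.0 + np.exp(-x / 0.4))              # synthetic blurred edge
        freqs, mtf = mtf_from_esf(esf, dx=10.0 / 256)
        cutoff = freqs[np.argmax(mtf < 0.2)]              # first frequency below 20%
        print(f"20% MTF cutoff: {cutoff:.2f} cycles per unit length")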

  5. A dual-view digital tomosynthesis imaging technique for improved chest imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhong, Yuncheng; Lai, Chao-Jen; Wang, Tianpeng

    Purpose: Digital tomosynthesis (DTS) has been shown to be useful for reducing the overlapping of abnormalities with anatomical structures at various depth levels along the posterior–anterior (PA) direction in chest radiography. However, DTS provides crude three-dimensional (3D) images that have poor resolution in the lateral view and can only be displayed with reasonable quality in the PA view. Furthermore, the spillover of high-contrast objects from off-fulcrum planes generates artifacts that may impede the diagnostic use of the DTS images. In this paper, the authors describe and demonstrate the use of a dual-view DTS technique to improve the accuracy of the reconstructed volume image data for more accurate rendition of the anatomy and slice images with improved resolution and reduced artifacts, thus allowing the 3D image data to be viewed in views other than the PA view. Methods: With the dual-view DTS technique, limited angle scans are performed and projection images are acquired in two orthogonal views: PA and lateral. The dual-view projection data are used together to reconstruct 3D images using the maximum likelihood expectation maximization iterative algorithm. In this study, projection images were simulated or experimentally acquired over 360° using the scanning geometry for cone beam computed tomography (CBCT). While all projections were used to reconstruct CBCT images, selected projections were extracted and used to reconstruct single- and dual-view DTS images for comparison with the CBCT images. For realistic demonstration and comparison, a digital chest phantom derived from clinical CT images was used for the simulation study. An anthropomorphic chest phantom was imaged for the experimental study. The resultant dual-view DTS images were visually compared with the single-view DTS images and CBCT images for the presence of image artifacts and accuracy of CT numbers and anatomy and quantitatively compared with root-mean-square-deviation (RMSD

  6. PEGylated Peptide-Based Imaging Agents for Targeted Molecular Imaging.

    PubMed

    Wu, Huizi; Huang, Jiaguo

    2016-01-01

    Molecular imaging is able to directly visualize targets and characterize cellular pathways with a high signal/background ratio, which requires a sufficient amount of agent to be taken up and to accumulate in the imaged area. The design and development of peptide-based agents for imaging and diagnosis is a hot and promising research topic that is booming in the field of molecular imaging. To date, selected peptides have been increasingly developed as agents by coupling them with different imaging moieties (such as radiometals and fluorophores) with the help of sophisticated chemical techniques. Although a few successes have been achieved, most of them have failed, mainly because of their fast renal clearance and therefore low tumor uptake, which limits effective tumor retention. Besides, several peptide agents based on nanoparticles have also been developed for medical diagnostics. However, a great majority of those agents showed long circulation times and accumulation over time in the reticuloendothelial system (RES; including spleen, liver, lymph nodes and bone marrow) after systemic administration; such long-term accumulation raises the likelihood of toxicity and potential health hazards. Recently reported design criteria have been proposed not only to enhance binding affinity in the tumor region with long retention, but also to improve clearance from the body within a reasonable amount of time. PEGylation has been considered one of the most successful modification methods to prolong tumor retention and improve the pharmacokinetic and pharmacodynamic properties of peptide-based imaging agents. This review provides an overview of PEGylated peptide imaging agents based on different imaging moieties including radioisotopes, fluorophores, and nanoparticles. The unique concepts and applications of various PEGylated peptide-based imaging agents are introduced for each of several imaging moieties. Effects of PEGylation on

  7. Clinical Amyloid Imaging.

    PubMed

    Mallik, Atul; Drzezga, Alex; Minoshima, Satoshi

    2017-01-01

    Amyloid plaques, along with neurofibrillary tangles, are a neuropathologic hallmark of Alzheimer disease (AD). Recently, amyloid PET radiotracers have been developed and approved for clinical use in the evaluation of suspected neurodegenerative disorders. In both research and clinical settings, amyloid PET imaging has provided important diagnostic and prognostic information for the management of patients with possible AD, mild cognitive impairment (MCI), and other challenging diagnostic presentations. Although the overall impact of amyloid imaging is still being evaluated, the Society of Nuclear Medicine and Molecular Imaging and Alzheimer's Association Amyloid Imaging Task Force have created appropriate use criteria for the standard clinical use of amyloid PET imaging. By the appropriate use criteria, amyloid imaging is appropriate for patients with (1) persistent or unexplained MCI, (2) AD as a possible but still uncertain diagnosis after expert evaluation and (3) atypically early-age-onset progressive dementia. To better understand the clinical and economic effect of amyloid imaging, the Imaging Dementia-Evidence for Amyloid Scanning (IDEAS) study is an ongoing large multicenter study in the United States, which is evaluating how amyloid imaging affects diagnosis, management, and outcomes for cognitively impaired patients who cannot be completely evaluated by clinical assessment alone. Multiple other large-scale studies are evaluating the prognostic role of amyloid PET imaging for predicting MCI progression to AD in general and high-risk populations. At the same time, amyloid imaging is an important tool for evaluating potential disease-modifying therapies for AD. Overall, the increased use of amyloid PET imaging has led to a better understanding of the strengths and limitations of this imaging modality and how it may best be used with other clinical, molecular, and imaging assessment techniques for the diagnosis and management of neurodegenerative disorders

  8. Image-Processing Techniques for the Creation of Presentation-Quality Astronomical Images

    NASA Astrophysics Data System (ADS)

    Rector, Travis A.; Levay, Zoltan G.; Frattare, Lisa M.; English, Jayanne; Pu'uohau-Pummill, Kirk

    2007-02-01

    The quality of modern astronomical data and the agility of current image-processing software enable the visualization of data in a way that exceeds the traditional definition of an astronomical image. Two developments in particular have led to a fundamental change in how astronomical images can be assembled. First, the availability of high-quality multiwavelength and narrowband data allow for images that do not correspond to the wavelength sensitivity of the human eye, thereby introducing ambiguity in the usage and interpretation of color. Second, many image-processing software packages now use a layering metaphor that allows for any number of astronomical data sets to be combined into a color image. With this technique, images with as many as eight data sets have been produced. Each data set is intensity-scaled and colorized independently, creating an immense parameter space that can be used to assemble the image. Since such images are intended for data visualization, scaling and color schemes must be chosen that best illustrate the science. A practical guide is presented on how to use the layering metaphor to generate publication-ready astronomical images from as many data sets as desired. A methodology is also given on how to use intensity scaling, color, and composition to create contrasts in an image that highlight the scientific detail. Examples of image creation are discussed.

  9. A generalized framework unifying image registration and respiratory motion models and incorporating image reconstruction, for partial image data or full images

    PubMed Central

    McClelland, Jamie R; Modat, Marc; Arridge, Simon; Grimes, Helen; D’Souza, Derek; Thomas, David; O’Connell, Dylan; Low, Daniel A; Kaza, Evangelia; Collins, David J; Leach, Martin O; Hawkes, David J

    2017-01-01

    Surrogate-driven respiratory motion models relate the motion of the internal anatomy to easily acquired respiratory surrogate signals, such as the motion of the skin surface. They are usually built by first using image registration to determine the motion from a number of dynamic images, and then fitting a correspondence model relating the motion to the surrogate signals. In this paper we present a generalized framework that unifies the image registration and correspondence model fitting into a single optimization. This allows the use of ‘partial’ imaging data, such as individual slices, projections, or k-space data, where it would not be possible to determine the motion from an individual frame of data. Motion compensated image reconstruction can also be incorporated using an iterative approach, so that both the motion and a motion-free image can be estimated from the partial image data. The framework has been applied to real 4DCT, Cine CT, multi-slice CT, and multi-slice MR data, as well as simulated datasets from a computer phantom. This includes the use of a super-resolution reconstruction method for the multi-slice MR data. Good results were obtained for all datasets, including quantitative results for the 4DCT and phantom datasets where the ground truth motion was known or could be estimated. PMID:28195833

  10. QR images: optimized image embedding in QR codes.

    PubMed

    Garateguy, Gonzalo J; Arce, Gonzalo R; Lau, Daniel L; Villarreal, Ofelia P

    2014-07-01

    This paper introduces the concept of QR images, an automatic method to embed QR codes into color images with a bounded probability of detection error. These embeddings are compatible with standard decoding applications and can be applied to any color image with full area coverage. The QR information bits are encoded into the luminance values of the image, taking advantage of the immunity of QR readers to local luminance disturbances. To mitigate the visual distortion of the QR image, the algorithm utilizes halftoning masks for the selection of modified pixels and nonlinear programming techniques to locally optimize luminance levels. A tractable model for the probability of error is developed, and models of the human visual system are considered in the quality metric used to optimize the luminance levels of the QR image. To minimize the processing time, the proposed optimization techniques consider the mechanics of a common binarization method and are designed to be amenable to parallel implementations. Experimental results show the graceful degradation of the decoding rate and the perceptual quality as a function of the embedding parameters. A visual comparison between the proposed and existing methods is presented.

  11. New contrasts for x-ray imaging and synergy with optical imaging

    NASA Astrophysics Data System (ADS)

    Wang, Ge

    2017-02-01

    Due to its penetrating power, fine resolution, unique contrast, high speed, and cost-effectiveness, x-ray imaging is one of the earliest and most popular imaging modalities in biomedical applications. Current x-ray radiographs and CT images are mostly gray-scale, since they reflect overall energy attenuation. Recent advances in x-ray detection, contrast agent, and image reconstruction technologies have changed our perception and expectation of x-ray imaging capabilities, and generated an increasing interest in imaging biological soft tissues in terms of energy-sensitive material decomposition, phase contrast, small-angle scattering (also referred to as dark-field), x-ray fluorescence, and luminescence properties. These are especially relevant to preclinical and mesoscopic studies, and potentially amenable to hybridization with optical molecular tomography. In this article, we review new x-ray imaging techniques as related to optical imaging, suggest some combined x-ray and optical imaging schemes, and discuss our ideas on micro-modulated x-ray luminescence tomography (MXLT) and x-ray modulated opto-genetics (X-Optogenetics).

  12. Complex adaptation-based LDR image rendering for 3D image reconstruction

    NASA Astrophysics Data System (ADS)

    Lee, Sung-Hak; Kwon, Hyuk-Ju; Sohng, Kyu-Ik

    2014-07-01

    A low-dynamic tone-compression technique is developed for realistic image rendering that can make three-dimensional (3D) images similar to realistic scenes by overcoming brightness dimming in the 3D display mode. The 3D surround provides varying conditions for image quality, illuminant adaptation, contrast, gamma, color, sharpness, and so on. In general, gain/offset adjustment, gamma compensation, and histogram equalization have performed well in contrast compression; however, as a result of signal saturation and clipping effects, image details are removed and information is lost on bright and dark areas. Thus, an enhanced image mapping technique is proposed based on space-varying image compression. The performance of contrast compression is enhanced with complex adaptation in a 3D viewing surround combining global and local adaptation. Evaluating local image rendering in view of tone and color expression, noise reduction, and edge compensation confirms that the proposed 3D image-mapping model can compensate for the loss of image quality in the 3D mode.

  13. Modified-BRISQUE as no reference image quality assessment for structural MR images.

    PubMed

    Chow, Li Sze; Rajagopal, Heshalini

    2017-11-01

    An effective and practical Image Quality Assessment (IQA) model is needed to assess the image quality produced by any new hardware or software in MRI. A highly competitive No-Reference IQA (NR-IQA) model called the Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE), initially designed for natural images, was modified to evaluate structural MR images. The BRISQUE model measures image quality by using locally normalized luminance coefficients, from which the image features are calculated. The modified-BRISQUE model trained a new regression model using MR image features and Difference Mean Opinion Scores (DMOS) from 775 MR images. Two types of benchmark, objective and subjective assessment, were used as performance evaluators for both the original and modified-BRISQUE models. There was a high correlation between the modified-BRISQUE and both benchmarks, higher than that of the original BRISQUE, with a significant percentage improvement in the correlation values. The modified-BRISQUE was statistically better than the original BRISQUE and can accurately measure the image quality of MR images. It is a practical NR-IQA model for MR images that does not require reference images. Copyright © 2017 Elsevier Inc. All rights reserved.
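
    The locally normalized luminance coefficients mentioned above (often called MSCN coefficients) can be computed as in the sketch below; a full BRISQUE-style model would additionally fit generalized-Gaussian parameters to these coefficients and train a regression against DMOS scores. The Gaussian window width, the stabilizing constant, and the synthetic slice are assumptions for illustration.

        # Sketch: mean-subtracted contrast-normalized (MSCN) coefficients, the
        # front-end of BRISQUE-style no-reference quality models.
        import numpy as np
        from scipy.ndimage import gaussian_filter

        def mscn(image, sigma=7.0 / 6.0, c=1.0):
            mu = gaussian_filter(image, sigma)                      # local mean
            var = gaussian_filter(image * image, sigma) - mu * mu   # local variance
            return (image - mu) / (np.sqrt(np.clip(var, 0, None)) + c)

        mr_slice = np.random.default_rng(9).random((256, 256)) * 255.0  # placeholder slice
        coeffs = mscn(mr_slice)
        print("MSCN mean/var:", coeffs.mean(), coeffs.var())  # roughly zero-mean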

  14. Breast Imaging: The Face of Imaging 3.0.

    PubMed

    Mayo, Ray Cody; Parikh, Jay R

    2016-08-01

    In preparation for impending changes to the health care delivery and reimbursement models, the ACR has provided a roadmap for success via the Imaging 3.0® platform. The authors illustrate how the field of breast imaging demonstrates the following Imaging 3.0 concepts: value, patient-centered care, clinical integration, structured reporting, outcome metrics, and radiology's role in the accountable care organization environment. Much of breast imaging's success may be adapted and adopted by other fields in radiology to ensure that all radiologists become more visible and provide the value sought by patients and payers. Copyright © 2016 American College of Radiology. Published by Elsevier Inc. All rights reserved.

  15. Imaging in anatomy: a comparison of imaging techniques in embalmed human cadavers

    PubMed Central

    2013-01-01

    Background A large variety of imaging techniques is an integral part of modern medicine. Introducing radiological imaging techniques into the dissection course serves as a basis for improved learning of anatomy and multidisciplinary learning in pre-clinical medical education. Methods Four different imaging techniques (ultrasound, radiography, computed tomography, and magnetic resonance imaging) were performed in embalmed human body donors to analyse possibilities and limitations of the respective techniques in this peculiar setting. Results The quality of ultrasound and radiography images was poor, images of computed tomography and magnetic resonance imaging were of good quality. Conclusion Computed tomography and magnetic resonance imaging have a superior image quality in comparison to ultrasound and radiography and offer suitable methods for imaging embalmed human cadavers as a valuable addition to the dissection course. PMID:24156510

  16. Automatic tissue image segmentation based on image processing and deep learning

    NASA Astrophysics Data System (ADS)

    Kong, Zhenglun; Luo, Junyi; Xu, Shengpu; Li, Ting

    2018-02-01

    Image segmentation plays an important role in multimodality imaging, especially in the fusion of structural images offered by CT and MRI with functional images collected by optical or other novel imaging technologies. In addition, image segmentation provides a detailed structural description for quantitative visualization of treatment light distribution in the human body when incorporated with 3D light transport simulation methods. Here we used image enhancement, operators, and morphometry methods to extract accurate contours of different tissues, such as skull, cerebrospinal fluid (CSF), grey matter (GM), and white matter (WM), from 5 fMRI head image datasets. We then utilized a convolutional neural network to realize automatic segmentation of the images in a deep-learning manner. We also introduced parallel computing. Such approaches greatly reduced the processing time compared to manual and semi-automatic segmentation and are of great importance for improving speed and accuracy as more and more samples are learned. Our results can be used as criteria when diagnosing diseases such as cerebral atrophy, which is caused by pathological changes in gray matter or white matter. We demonstrated the great potential of such combined image processing and deep learning for automatic tissue image segmentation in personalized medicine, especially in monitoring and treatment.

  17. Estimation of Image Sensor Fill Factor Using a Single Arbitrary Image

    PubMed Central

    Wen, Wei; Khatibi, Siamak

    2017-01-01

    Achieving a high fill factor is a bottleneck problem for capturing high-quality images. There are hardware and software solutions to overcome this problem, in which the fill factor is assumed to be known. However, the fill factor is kept as an industrial secret by most image sensor manufacturers due to its direct effect on the assessment of sensor quality. In this paper, we propose a method to estimate the fill factor of a camera sensor from an arbitrary single image. The virtual response function of the imaging process and the sensor irradiance are estimated from the generation of virtual images. Then the global intensity values of the virtual images are obtained, which are the result of fusing the virtual images into a single, high dynamic range radiance map. A non-linear function is inferred from the original and global intensity values of the virtual images. The fill factor is estimated by the conditional minimum of the inferred function. The method is verified using images from two datasets. The results show that our method estimates the fill factor correctly, with significant stability and accuracy, from a single arbitrary image, as indicated by the low standard deviation of the estimated fill factors across the images from each camera. PMID:28335459

  18. Animal Detection in Natural Images: Effects of Color and Image Database

    PubMed Central

    Zhu, Weina; Drewes, Jan; Gegenfurtner, Karl R.

    2013-01-01

    The visual system has a remarkable ability to extract categorical information from complex natural scenes. In order to elucidate the role of low-level image features for the recognition of objects in natural scenes, we recorded saccadic eye movements and event-related potentials (ERPs) in two experiments, in which human subjects had to detect animals in previously unseen natural images. We used a new natural image database (ANID) that is free of some of the potential artifacts that have plagued the widely used COREL images. Color and grayscale images picked from the ANID and COREL databases were used. In all experiments, color images induced a greater N1 EEG component at earlier time points than grayscale images. We suggest that this influence of color in animal detection may be masked by later processes when measuring reaction times. The ERP results of go/nogo and forced choice tasks were similar to those reported earlier. The non-animal stimuli induced a bigger N1 than the animal stimuli in both the COREL and ANID databases. This result indicates that ultra-fast processing of animal images is possible irrespective of the particular database. With the ANID images, the difference between color and grayscale images is more pronounced than with the COREL images. The earlier use of the COREL images might have led to an underestimation of the contribution of color. Therefore, we conclude that the ANID image database is better suited for the investigation of the processing of natural scenes than other databases commonly used. PMID:24130744

  19. IIPImage: Large-image visualization

    NASA Astrophysics Data System (ADS)

    Pillay, Ruven

    2014-08-01

    IIPImage is an advanced, high-performance, feature-rich image server system that enables online access to full-resolution floating point (as well as other bit depth) images at terabyte scales. Paired with the VisiOmatic (ascl:1408.010) celestial image viewer, the system can comfortably handle gigapixel-size images as well as advanced image features such as 8-, 16- and 32-bit depths, CIELAB colorimetric images, and scientific imagery such as multispectral images. Streaming is tile-based, which enables viewing, navigating, and zooming in real time around gigapixel-size images. Source images can be in either TIFF or JPEG2000 format. Whole images or regions within images can also be rapidly and dynamically resized and exported by the server from a single source image without the need to store multiple files in various sizes.

  20. BMC Ecology Image Competition 2016: the winning images.

    PubMed

    Simundza, Julia; Palmer, Matthew; Settele, Josef; Jacobus, Luke M; Hughes, David P; Mazzi, Dominique; Blanchet, Simon

    2016-08-09

    The 2016 BMC Ecology Image Competition marked another celebration of the astounding biodiversity, natural beauty, and biological interactions documented by talented ecologists worldwide. For our fourth annual competition, we welcomed guest judge Dr. Matthew Palmer of Columbia University, who chose the winning image from over 140 entries. In this editorial, we highlight the award winning images along with a selection of highly commended honorable mentions.

  1. Image-fusion of MR spectroscopic images for treatment planning of gliomas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang Jenghwa; Thakur, Sunitha; Perera, Gerard

    2006-01-15

    ¹H magnetic resonance spectroscopic imaging (MRSI) can improve the accuracy of target delineation for gliomas, but it lacks the anatomic resolution needed for image fusion. This paper presents a simple protocol for fusing simulation computed tomography (CT) and MRSI images for glioma intensity-modulated radiotherapy (IMRT), including a retrospective study of 12 patients. Each patient first underwent whole-brain axial fluid-attenuated-inversion-recovery (FLAIR) MRI (3 mm slice thickness, no spacing), followed by three-dimensional (3D) MRSI measurements (TE/TR: 144/1000 ms) of a user-specified volume encompassing the extent of the tumor. The nominal voxel size of MRSI ranged from 8x8x10 mm³ to 12x12x10 mm³. A system was developed to grade the tumor using the choline-to-creatine (Cho/Cr) ratios from each MRSI voxel. The merged MRSI images were then generated by replacing the Cho/Cr value of each MRSI voxel with intensities according to the Cho/Cr grades, and resampling the poorer-resolution Cho/Cr map into the higher-resolution FLAIR image space. The FUNCTOOL processing software was also used to create the screen-dumped MRSI images in which these data were overlaid with each FLAIR MRI image. The screen-dumped MRSI images were manually translated and fused with the FLAIR MRI images. Since the merged MRSI images were intrinsically fused with the FLAIR MRI images, they were also registered with the screen-dumped MRSI images. The position of the MRSI volume on the merged MRSI images was compared with that of the screen-dumped MRSI images and was shifted until agreement was within a predetermined tolerance. Three clinical target volumes (CTVs) were then contoured on the FLAIR MRI images corresponding to the Cho/Cr grades. Finally, the FLAIR MRI images were fused with the simulation CT images using a mutual-information algorithm, yielding an IMRT plan that simultaneously delivers three different dose levels to the three CTVs. The image

  2. [Dry view laser imager--a new economical photothermal imaging method].

    PubMed

    Weberling, R

    1996-11-01

    The production of hard copies is currently achieved by means of laser imagers and wet film processing, in systems attached either directly to the laser imager or located in a darkroom. Variations in image quality resulting from not always optimal wet film development are frequent. A newly developed thermographic developing process for laser films that requires no liquid or powdered chemicals, on the other hand, is environmentally preferable and reduces operating costs. The completely dry developing process provides permanent image documentation meeting the quality and safety requirements of RöV and BAK. One of the currently available systems of this type, the DryView Laser Imager, is inexpensive and easy to install. The selective connection principle of the DryView Laser Imager can be expanded as required and accepts digital and/or analog interfaces with imaging systems (CT, MR, DR, US, NM) from the various manufacturers.

  3. Image superresolution of cytology images using wavelet based patch search

    NASA Astrophysics Data System (ADS)

    Vargas, Carlos; García-Arteaga, Juan D.; Romero, Eduardo

    2015-01-01

    Telecytology is a new research area that holds the potential of significantly reducing the number of deaths due to cervical cancer in developing countries. This work presents a novel super-resolution technique that couples high- and low-frequency information in order to reduce the bandwidth consumption of cervical image transmission. The proposed approach starts by computing a wavelet decomposition of the high-resolution images and transmitting only the lower-frequency coefficients. The transmitted coefficients are used to reconstruct an image of the original size. Additional details are added by iteratively replacing patches of the wavelet-reconstructed image with equivalent high-resolution patches from a previously acquired image database. Finally, the originally transmitted low-frequency coefficients are used to correct the final image. Results show a higher signal-to-noise ratio for the proposed method than for simply discarding high-frequency wavelet coefficients or directly replacing down-sampled patches from the image database.
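
    The bandwidth-saving stage can be sketched as below: keep only the wavelet approximation coefficients and reconstruct by zero-filling the detail sub-bands. The subsequent patch search against a previously acquired image database and the final low-frequency correction are omitted; the wavelet, level, and synthetic input are assumptions for illustration.

        # Sketch: reconstruct an image from its wavelet approximation coefficients
        # only (details zero-filled); the patch-search refinement is omitted.
        import numpy as np
        import pywt

        def lowfreq_reconstruct(image, wavelet='db4', level=2):
            coeffs = pywt.wavedec2(image, wavelet, level=level)
            kept = [coeffs[0]]                                   # approximation band only
            for detail in coeffs[1:]:
                kept.append(tuple(np.zeros_like(d) for d in detail))
            return pywt.waverec2(kept, wavelet)

        cytology = np.random.default_rng(10).random((256, 256))  # placeholder image
        recon = lowfreq_reconstruct(cytology)[:256, :256]
        print("RMSE:", np.sqrt(np.mean((recon - cytology) ** 2)))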

  4. Photoacoustic imaging of teeth for dentine imaging and enamel characterization

    NASA Astrophysics Data System (ADS)

    Periyasamy, Vijitha; Rangaraj, Mani; Pramanik, Manojit

    2018-02-01

    Early detection of dental caries, cracks, and lesions is needed to prevent complicated root canal treatment and tooth extraction procedures. The resolution of clinically used x-ray imaging is low; hence, optical imaging techniques such as optical coherence tomography, fluorescence imaging, and Raman imaging are widely investigated for imaging dental structures. The photoacoustic effect is used in the photon-induced photoacoustic streaming technique to debride the root canal. In this study, extracted teeth were imaged using a photoacoustic tomography system at 1064 nm. The degradation of enamel and dentine is an indicator of the onset of dental caries. Photoacoustic microscopy (PAM) was used to study the tooth enamel. Images were acquired using an acoustic-resolution PAM system. This was done to identify microscopic cracks and dental lesions at different anatomical sites (crown and cementum). The PAM tooth profile is an indicator of calcium distribution, which is essential for demineralization studies.

  5. Imaging windows for long-term intravital imaging: General overview and technical insights.

    PubMed

    Alieva, Maria; Ritsma, Laila; Giedt, Randy J; Weissleder, Ralph; van Rheenen, Jacco

    2014-01-01

    Intravital microscopy is increasingly used to visualize and quantitate dynamic biological processes at the (sub)cellular level in live animals. By visualizing tissues through imaging windows, individual cells (e.g., cancer, host, or stem cells) can be tracked and studied over a time-span of days to months. Several imaging windows have been developed to access tissues including the brain, superficial fascia, mammary glands, liver, kidney, pancreas, and small intestine among others. Here, we review the development of imaging windows and compare the most commonly used long-term imaging windows for cancer biology: the cranial imaging window, the dorsal skin fold chamber, the mammary imaging window, and the abdominal imaging window. Moreover, we provide technical details, considerations, and trouble-shooting tips on the surgical procedures and microscopy setups for each imaging window and explain different strategies to assure imaging of the same area over multiple imaging sessions. This review aims to be a useful resource for establishing the long-term intravital imaging procedure.

  6. Radar Image Interpretability Analysis.

    DTIC Science & Technology

    1981-01-01

    The utility of radar images with respect to trained image interpreter ability to identify, classify, and detect specific terrain ... changed with image application, as did the measured image properties with respect to image utility. This study has provided useful information as to how certain image characteristics relate to radar image utility ...

  7. Image-guided filtering for improving photoacoustic tomographic image reconstruction.

    PubMed

    Awasthi, Navchetan; Kalva, Sandeep Kumar; Pramanik, Manojit; Yalavarthy, Phaneendra K

    2018-06-01

    Several algorithms exist to solve the photoacoustic image reconstruction problem, depending on the expected features of the reconstructed image. These reconstruction algorithms typically promote one feature, such as smoothness or sharpness, in the output image. Combining these features using a guided filtering approach was attempted in this work, which requires an input and a guiding image. This approach acts as a postprocessing step to improve the commonly used Tikhonov or total variation regularization methods. The result obtained from linear backprojection was used as the guiding image to improve these results. Using both numerical and experimental phantom cases, it was shown that the proposed guided filtering approach was able to improve (by as much as 11.23 dB) the signal-to-noise ratio of the reconstructed images, with the added advantage of being computationally efficient. This approach was compared with state-of-the-art basis pursuit deconvolution as well as standard denoising methods and shown to outperform them. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
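
    The guided filtering itself can be sketched with the standard box-filter formulation below; pairing a backprojection result as the guide with a regularized reconstruction as the input mirrors the idea described above, but the radius and eps values are illustrative assumptions rather than the paper's settings.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=4, eps=1e-3):
    """Smooth `src` while transferring edge structure from `guide`.
    In the spirit of the abstract above, `guide` could be a backprojection
    image and `src` a Tikhonov/total-variation reconstruction (assumption)."""
    size = 2 * radius + 1
    mean_I = uniform_filter(guide, size)
    mean_p = uniform_filter(src, size)
    corr_I = uniform_filter(guide * guide, size)
    corr_Ip = uniform_filter(guide * src, size)

    var_I = corr_I - mean_I * mean_I
    cov_Ip = corr_Ip - mean_I * mean_p

    a = cov_Ip / (var_I + eps)      # local linear coefficient
    b = mean_p - a * mean_I

    mean_a = uniform_filter(a, size)
    mean_b = uniform_filter(b, size)
    return mean_a * guide + mean_b
```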

  8. Digital image transformation and rectification of spacecraft and radar images

    NASA Technical Reports Server (NTRS)

    Wu, S. S. C.

    1985-01-01

    The application of digital processing techniques to spacecraft television pictures and radar images is discussed. The use of digital rectification to produce contour maps from spacecraft pictures is described; images with azimuth and elevation angles are converted into point-perspective frame pictures. The digital correction of the slant angle of radar images to ground scale is examined. The development of orthophoto and stereoscopic shaded relief maps from digital terrain and digital image data is analyzed. Digital image transformations and rectifications are utilized on Viking Orbiter and Lander pictures of Mars.

  9. Quantitative imaging biomarker ontology (QIBO) for knowledge representation of biomedical imaging biomarkers.

    PubMed

    Buckler, Andrew J; Liu, Tiffany Ting; Savig, Erica; Suzek, Baris E; Ouellette, M; Danagoulian, J; Wernsing, G; Rubin, Daniel L; Paik, David

    2013-08-01

    A widening array of novel imaging biomarkers is being developed using ever more powerful clinical and preclinical imaging modalities. These biomarkers have demonstrated effectiveness in quantifying biological processes as they occur in vivo and in the early prediction of therapeutic outcomes. However, quantitative imaging biomarker data and knowledge are not standardized, representing a critical barrier to accumulating medical knowledge based on quantitative imaging data. We use an ontology to represent, integrate, and harmonize heterogeneous knowledge across the domain of imaging biomarkers. This advances the goal of developing applications to (1) improve precision and recall of storage and retrieval of quantitative imaging-related data using standardized terminology; (2) streamline the discovery and development of novel imaging biomarkers by normalizing knowledge across heterogeneous resources; (3) effectively annotate imaging experiments thus aiding comprehension, re-use, and reproducibility; and (4) provide validation frameworks through rigorous specification as a basis for testable hypotheses and compliance tests. We have developed the Quantitative Imaging Biomarker Ontology (QIBO), which currently consists of 488 terms spanning the following upper classes: experimental subject, biological intervention, imaging agent, imaging instrument, image post-processing algorithm, biological target, indicated biology, and biomarker application. We have demonstrated that QIBO can be used to annotate imaging experiments with standardized terms in the ontology and to generate hypotheses for novel imaging biomarker-disease associations. Our results established the utility of QIBO in enabling integrated analysis of quantitative imaging data.

  10. IMAGE Mission Science

    NASA Technical Reports Server (NTRS)

    Gallagher, D. L.; Fok, M.-C.; Fuselier, S.; Gladstone, G. R.; Green, J. L.; Fung, S. F.; Perez, J.; Reiff, P.; Roelof, E. C.; Wilson, G.

    1998-01-01

    Simultaneous, global measurement of major magnetospheric plasma systems will be performed for the first time with the Imager for Magnetopause-to-Aurora Global Exploration (IMAGE) Mission. The ring current, plasmasphere, and auroral systems will be imaged using energetic neutral and ultraviolet cameras. Quantitative remote measurement of the magnetosheath, plasmaspheric, and magnetospheric densities will be obtained through radio sounding by the Radio Plasma Imager. The IMAGE Mission will open a new era in global magnetospheric physics, while bringing with it new challenges in data analysis. An overview of the IMAGE Theory and Modeling team efforts will be presented, including the state of development of Internet tools that will be available to the science community for access and analysis of IMAGE observations.

  11. Optimization of super-resolution processing using incomplete image sets in PET imaging.

    PubMed

    Chang, Guoping; Pan, Tinsu; Clark, John W; Mawlawi, Osama R

    2008-12-01

    Super-resolution (SR) techniques are used in PET imaging to generate a high-resolution image by combining multiple low-resolution images that have been acquired from different points of view (POVs). The number of low-resolution images used defines the processing time and memory storage necessary to generate the SR image. In this paper, the authors propose two optimized SR implementations (ISR-1 and ISR-2) that require only a subset of the low-resolution images (two sides and diagonal of the image matrix, respectively), thereby reducing the overall processing time and memory storage. In an N x N matrix of low-resolution images, ISR-1 would be generated using images from the two sides of the N x N matrix, while ISR-2 would be generated from images across the diagonal of the image matrix. The objective of this paper is to investigate whether the two proposed SR methods can achieve similar performance in contrast and signal-to-noise ratio (SNR) as the SR image generated from a complete set of low-resolution images (CSR) using simulation and experimental studies. A simulation, a point source, and a NEMA/IEC phantom study were conducted for this investigation. In each study, 4 (2 x 2) or 16 (4 x 4) low-resolution images were reconstructed from the same acquired data set while shifting the reconstruction grid to generate images from different POVs. SR processing was then applied in each study to combine all as well as two different subsets of the low-resolution images to generate the CSR, ISR-1, and ISR-2 images, respectively. For reference purpose, a native reconstruction (NR) image using the same matrix size as the three SR images was also generated. The resultant images (CSR, ISR-1, ISR-2, and NR) were then analyzed using visual inspection, line profiles, SNR plots, and background noise spectra. The simulation study showed that the contrast and the SNR difference between the two ISR images and the CSR image were on average 0.4% and 0.3%, respectively. Line profiles of
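
    A toy shift-and-add sketch of combining shifted low-resolution views: each view is upsampled, its known sub-pixel shift is compensated, and the results are averaged. Feeding only a subset of the views loosely mimics the ISR-1/ISR-2 idea; the actual PET implementation shifts the reconstruction grid rather than interpolating images, so treat this purely as an illustration.

```python
import numpy as np
from scipy.ndimage import shift, zoom

def shift_and_add_sr(low_res_images, subpixel_shifts, factor=2):
    """Combine low-resolution views acquired from different points of view.
    `subpixel_shifts` holds each view's known (dy, dx) offset in low-res pixels."""
    accum = None
    for img, (dy, dx) in zip(low_res_images, subpixel_shifts):
        up = zoom(img, factor, order=1)                      # bilinear upsample
        aligned = shift(up, (-dy * factor, -dx * factor), order=1)
        accum = aligned if accum is None else accum + aligned
    return accum / len(low_res_images)
```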

  12. Whole mouse cryo-imaging

    NASA Astrophysics Data System (ADS)

    Wilson, David; Roy, Debashish; Steyer, Grant; Gargesha, Madhusudhana; Stone, Meredith; McKinley, Eliot

    2008-03-01

    The Case cryo-imaging system is a sectioning-and-imaging system that allows one to acquire micron-scale, information-rich, whole-mouse color bright-field and molecular fluorescence images of an entire mouse. Cryo-imaging is used in a variety of applications, including mouse and embryo anatomical phenotyping, drug delivery, imaging agents, metastatic cancer, stem cells, and very high resolution vascular imaging, among many others. Cryo-imaging fills the gap between whole-animal in vivo imaging and histology, allowing one to image a mouse along the continuum from mouse -> organ -> tissue structure -> cell -> sub-cellular domains. In this overview, we describe the technology and a variety of exciting applications. Enhancements to the system now enable tiled acquisition of high-resolution images to cover an entire mouse. High-resolution fluorescence imaging, aided by a novel subtraction processing algorithm to remove sub-surface fluorescence, makes it possible to detect fluorescently labeled single cells. Multi-modality experiments in magnetic resonance imaging and cryo-imaging of a whole mouse demonstrate the superior resolution of cryo-images and the efficiency of the registration techniques. The 3D results demonstrate the novel true-color volume visualization tools we have developed and the inherent advantage of cryo-imaging in providing unlimited depth of field and spatial resolution. The recent results continue to demonstrate the value cryo-imaging provides in the field of small animal imaging research.

  13. World Wide Web Based Image Search Engine Using Text and Image Content Features

    NASA Astrophysics Data System (ADS)

    Luo, Bo; Wang, Xiaogang; Tang, Xiaoou

    2003-01-01

    Using both text and image content features, a hybrid image retrieval system for the World Wide Web is developed in this paper. We first use a text-based image meta-search engine to retrieve images from the Web based on the text information on the image host pages to provide an initial image set. Because of the high speed and low cost of the text-based approach, we can easily retrieve a broad coverage of images with a high recall rate and a relatively low precision. An image-content-based ordering is then performed on the initial image set. All the images are clustered into different folders based on the image content features. In addition, the images can be re-ranked by the content features according to the user feedback. Such a design makes it truly practical to use both text and image content for image retrieval over the Internet. Experimental results confirm the efficiency of the system.
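
    A small sketch of the content-based ordering and feedback re-ranking steps, assuming precomputed feature vectors (e.g., color histograms), scikit-learn's KMeans, and an arbitrary folder count; none of these choices come from the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_retrieved_images(feature_vectors, n_folders=8):
    """Group the text-retrieved images into folders by content features.
    `feature_vectors` is an (n_images, n_features) array (assumption)."""
    km = KMeans(n_clusters=n_folders, n_init=10, random_state=0)
    labels = km.fit_predict(np.asarray(feature_vectors))
    return labels, km.cluster_centers_

def rerank_by_feedback(feature_vectors, positive_indices):
    """Re-rank images by similarity to the ones the user marked relevant."""
    feats = np.asarray(feature_vectors)
    query = feats[positive_indices].mean(axis=0)
    distances = np.linalg.norm(feats - query, axis=1)
    return np.argsort(distances)  # closest (most relevant) first
```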

  14. Rotational imaging optical coherence tomography for full-body mouse embryonic imaging

    PubMed Central

    Wu, Chen; Sudheendran, Narendran; Singh, Manmohan; Larina, Irina V.; Dickinson, Mary E.; Larin, Kirill V.

    2016-01-01

    Abstract. Optical coherence tomography (OCT) has been widely used to study mammalian embryonic development with the advantages of high spatial and temporal resolutions and without the need for any contrast enhancement probes. However, the limited imaging depth of traditional OCT might prohibit visualization of the full embryonic body. To overcome this limitation, we have developed a new methodology to enhance the imaging range of OCT in embryonic day (E) 9.5 and 10.5 mouse embryos using rotational imaging. Rotational imaging OCT (RI-OCT) enables full-body imaging of mouse embryos by performing multiangle imaging. A series of postprocessing procedures was performed on each cross-section image, resulting in the final composited image. The results demonstrate that RI-OCT is able to improve the visualization of internal mouse embryo structures as compared to conventional OCT. PMID:26848543

  15. Image intensifier-based volume tomographic angiography imaging system: system evaluation

    NASA Astrophysics Data System (ADS)

    Ning, Ruola; Wang, Xiaohui; Shen, Jianjun; Conover, David L.

    1995-05-01

    An image intensifier-based rotational volume tomographic angiography imaging system has been constructed. The system consists of an x-ray tube and an image intensifier that are separately mounted on a gantry. This system uses an image intensifier coupled to a TV camera as a two-dimensional detector so that a set of two-dimensional projections can be acquired for a direct three-dimensional (3D) reconstruction. The system has been evaluated with two phantoms: a vascular phantom and a monkey head cadaver. One hundred eighty projections of each phantom were acquired with the system. A set of three-dimensional images was directly reconstructed from the projection data. The experimental results indicate that good image quality can be obtained with this system.

  16. Fast single image dehazing based on image fusion

    NASA Astrophysics Data System (ADS)

    Liu, Haibo; Yang, Jie; Wu, Zhengping; Zhang, Qingnian

    2015-01-01

    Images captured in foggy weather conditions often show faded colors and reduced contrast of the observed objects. An efficient image fusion method is proposed to remove haze from a single input image. First, the initial medium transmission is estimated based on the dark channel prior. Second, the method adopts the assumption that the degradation level caused by haze is the same within each region, which is similar to the Retinex theory, and uses a simple Gaussian filter to obtain the coarse medium transmission. Then, pixel-level fusion is performed between the initial medium transmission and the coarse medium transmission. The proposed method can recover a high-quality haze-free image based on the physical model, and its complexity is only a linear function of the number of input image pixels. Experimental results demonstrate that the proposed method allows a very fast implementation and achieves better restoration of visibility and color fidelity compared to some state-of-the-art methods.
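
    The initial transmission estimate referred to above is the classical dark channel prior; a compact version of that step and of the physical recovery model is sketched below (patch size, omega, and the lower bound t0 are conventional defaults, not the authors' values), while the Gaussian-filtered coarse transmission and the pixel-level fusion are omitted.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    """Per-pixel minimum over color channels and a local patch."""
    return minimum_filter(img.min(axis=2), size=patch)

def estimate_atmospheric_light(img, dark, top_fraction=0.001):
    """Average the brightest pixels among the most haze-opaque regions."""
    n = max(1, int(dark.size * top_fraction))
    idx = np.argsort(dark.ravel())[-n:]
    return img.reshape(-1, 3)[idx].mean(axis=0)

def dehaze(img_uint8, omega=0.95, t0=0.1, patch=15):
    """Dark-channel-prior dehazing: initial transmission + physical recovery."""
    img = img_uint8.astype(np.float64) / 255.0
    dark = dark_channel(img, patch)
    A = estimate_atmospheric_light(img, dark)
    t = 1.0 - omega * dark_channel(img / A, patch)   # initial medium transmission
    t = np.clip(t, t0, 1.0)[..., None]
    J = (img - A) / t + A                            # haze imaging model inverted
    return np.clip(J, 0.0, 1.0)
```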

  17. Correlation plenoptic imaging

    NASA Astrophysics Data System (ADS)

    Pepe, Francesco V.; Di Lena, Francesco; Garuccio, Augusto; D'Angelo, Milena

    2017-06-01

    Plenoptic Imaging (PI) is a novel optical technique for achieving tridimensional imaging in a single shot. In conventional PI, a microlens array is inserted in the native image plane and the sensor array is moved behind the microlenses. On the one hand, the microlenses act as imaging pixels to reproduce the image of the scene; on the other hand, each microlens reproduces on the sensor array an image of the camera lens, thus providing the angular information associated with each imaging pixel. The recorded propagation direction is exploited, in post-processing, to computationally retrace the geometrical light path, thus enabling the refocusing of different planes within the scene, the extension of the depth of field of the acquired image, as well as the 3D reconstruction of the scene. However, a trade-off between spatial and angular resolution is built in the standard plenoptic imaging process. We demonstrate that the second-order spatio-temporal correlation properties of light can be exploited to overcome this fundamental limitation. Using two correlated beams, from either a chaotic or an entangled photon source, we can perform imaging in one arm and simultaneously obtain the angular information in the other arm. In fact, we show that the second order correlation function possesses plenoptic imaging properties (i.e., it encodes both spatial and angular information), and is thus characterized by a key re-focusing and 3D imaging capability. From a fundamental standpoint, the plenoptic application is the first situation where the counterintuitive properties of correlated systems are effectively used to beat intrinsic limits of standard imaging systems. From a practical standpoint, our protocol can dramatically enhance the potentials of PI, paving the way towards its promising applications.

  18. Geometrical Meaning of Arithmetic Series [Image Omitted], [Image Omitted] and [Image Omitted] in Terms of the Elementary Combinatorics

    ERIC Educational Resources Information Center

    Kobayashi, Yukio

    2011-01-01

    The formula [image omitted] is closely related to combinatorics through an elementary geometric exercise. This approach can be expanded to the formulas [image omitted], [image omitted] and [image omitted]. These formulas are also nice examples of showing two approaches, one algebraic and one combinatoric, to a problem of counting. (Contains 6…

  19. Imaging of Muscle Injuries in Sports Medicine: Sports Imaging Series.

    PubMed

    Guermazi, Ali; Roemer, Frank W; Robinson, Philip; Tol, Johannes L; Regatte, Ravindar R; Crema, Michel D

    2017-03-01

    In sports-related muscle injuries, the main goal of the sports medicine physician is to return the athlete to competition, balanced against the need to prevent the injury from worsening or recurring. Prognosis based on the available clinical and imaging information is crucial. Imaging is essential to confirm and assess the extent of sports-related muscle injuries and may help to guide management, which directly affects the prognosis. This is especially important when the diagnosis or grade of injury is unclear, when recovery is taking longer than expected, and when interventional or surgical management may be necessary. Several imaging techniques are widely available, with ultrasonography and magnetic resonance imaging currently the most frequently applied in sports medicine. This state-of-the-art review will discuss the main imaging modalities for the assessment of sports-related muscle injuries, including advanced imaging techniques, with a focus on the clinical relevance of imaging features of muscle injuries. © RSNA, 2017 Online supplemental material is available for this article.

  20. Image Processing Software

    NASA Technical Reports Server (NTRS)

    1990-01-01

    The Ames digital image velocimetry technology has been incorporated in a commercially available image processing software package that allows motion measurement of images on a PC alone. The software, manufactured by Werner Frei Associates, is IMAGELAB FFT. IMAGELAB FFT is a general purpose image processing system with a variety of other applications, among them image enhancement of fingerprints and use by banks and law enforcement agencies for analysis of videos run during robberies.

  1. Image reconstruction of x-ray tomography by using image J platform

    NASA Astrophysics Data System (ADS)

    Zain, R. M.; Razali, A. M.; Salleh, K. A. M.; Yahya, R.

    2017-01-01

    A tomogram is the technical term for a CT image. It is also called a slice because it corresponds to what the object being scanned would look like if it were sliced open along a plane. A CT slice corresponds to a certain thickness of the object being scanned. So, while a typical digital image is composed of pixels, a CT slice image is composed of voxels (volume elements). In the case of x-ray tomography, as in x-ray radiography, the quantity being imaged is the distribution of the attenuation coefficient μ(x) within the object of interest. The difference lies only in the technique used to produce the tomogram. The x-ray radiography image can be produced straightforwardly after exposure to x-rays, whereas the tomography image is produced by combining radiography images from every angle of projection. A number of image reconstruction methods that convert x-ray attenuation data into a tomography image have been developed by researchers. In this work, the Ramp filter in "filtered back projection" has been applied. The linear data acquired at each angular orientation are convolved with a specially designed filter and then back projected across a pixel field at the same angle. This paper describes the steps of using the ImageJ software to produce an image reconstruction of x-ray tomography.
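
    For readers who prefer a scripted equivalent of the ImageJ workflow, the sketch below reconstructs a toy slice with Ramp-filtered back projection using scikit-image (a recent API with the filter_name argument is assumed); the phantom and the one-projection-per-degree sampling are illustrative choices.

```python
import numpy as np
from skimage.transform import radon, iradon

# Simulate a sinogram from a toy rectangular phantom, then reconstruct the
# slice with filtered back projection using the Ramp filter.
phantom = np.zeros((128, 128))
phantom[40:90, 50:80] = 1.0

angles = np.linspace(0.0, 180.0, 180, endpoint=False)   # one projection per degree
sinogram = radon(phantom, theta=angles, circle=False)   # attenuation line integrals

reconstruction = iradon(sinogram, theta=angles,
                        filter_name="ramp",              # the Ramp filter of FBP
                        interpolation="linear",
                        circle=False)
```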

  2. Nanophotonic Image Sensors

    PubMed Central

    Hu, Xin; Wen, Long; Yu, Yan; Cumming, David R. S.

    2016-01-01

    The increasing miniaturization and resolution of image sensors bring challenges to conventional optical elements such as spectral filters and polarizers, the properties of which are determined mainly by the materials used, including dye polymers. Recent developments in spectral filtering and optical manipulating techniques based on nanophotonics have opened up the possibility of an alternative method to control light spectrally and spatially. By integrating these technologies into image sensors, it will become possible to achieve high compactness, improved process compatibility, robust stability and tunable functionality. In this Review, recent representative achievements on nanophotonic image sensors are presented and analyzed including image sensors with nanophotonic color filters and polarizers, metamaterial‐based THz image sensors, filter‐free nanowire image sensors and nanostructured‐based multispectral image sensors. This novel combination of cutting edge photonics research and well‐developed commercial products may not only lead to an important application of nanophotonics but also offer great potential for next generation image sensors beyond Moore's Law expectations. PMID:27239941

  3. Fetal intracranial hemorrhage. Imaging by ultrasound and magnetic resonance imaging.

    PubMed

    Kirkinen, P; Partanen, K; Ryynänen, M; Ordén, M R

    1997-08-01

    To describe the magnetic resonance imaging (MRI) findings associated with fetal intracranial hemorrhage and to compare them with ultrasound findings. In four pregnancies complicated by fetal intracranial hemorrhage, fetal imaging was carried out using T2-weighted fast spin echo sequences and T1-weighted fast low angle shot imaging sequences and by transabdominal ultrasonography. An antepartum diagnosis of hemorrhage was made by ultrasound in one case and by MRI in two. Retrospectively, the hemorrhagic area could be identified from the MRI images in an additional two cases and from the ultrasound images in one case. In the cases of intraventricular hemorrhage, the MRI signal intensity in the T1-weighted images was increased in the hemorrhagic area as compared to the contralateral ventricle and brain parenchyma. In a case with subdural hemorrhage, T2-weighted MRI signals from the hemorrhagic area changed from low-to high-intensity signals during four weeks of follow-up. Better imaging of the intracranial anatomy was possible by MRI than by transabdominal ultrasonography. MRI can be used for imaging and dating fetal intracranial hemorrhages. Variable ultrasound and MRI findings are associated with this complication, depending on the age and location of the hemorrhage.

  4. Image registration for a UV-Visible dual-band imaging system

    NASA Astrophysics Data System (ADS)

    Chen, Tao; Yuan, Shuang; Li, Jianping; Xing, Sheng; Zhang, Honglong; Dong, Yuming; Chen, Liangpei; Liu, Peng; Jiao, Guohua

    2018-06-01

    The detection of corona discharge is an effective way to achieve early fault diagnosis of power equipment. UV-Visible dual-band imaging can detect and locate corona discharge spots under all-weather conditions. In this study, we introduce an image registration protocol for this dual-band imaging system. The protocol consists of UV image denoising and affine transformation model establishment. We report the algorithm details of the UV image preprocessing and the affine transformation model establishment, together with relevant experiments verifying their feasibility. The denoising algorithm was based on a correlation operation between raw UV images and a continuous mask, and the transformation model was established by using corner features and a statistical method. Finally, an image fusion test was carried out to verify the accuracy of the affine transformation model. The average position displacement errors between the corona discharge and the equipment fault at different distances in the 2.5 m-20 m range are 1.34 mm and 1.92 mm in the horizontal and vertical directions, respectively, which is precise enough for most industrial applications. The resultant protocol is not only expected to improve the efficiency and accuracy of such imaging systems for locating corona discharge spots, but is also intended to provide a more general reference for the calibration of various dual-band imaging systems in practice.
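
    A bare-bones sketch of fitting the affine model from corner-feature correspondences by least squares, assuming matched point lists are already available; the statistical selection of correspondences and the UV denoising step are not reproduced here.

```python
import numpy as np

def estimate_affine(uv_points, vis_points):
    """Least-squares 2x3 affine matrix mapping UV-image corner points onto
    their visible-image counterparts (point matches assumed to be given)."""
    uv = np.asarray(uv_points, dtype=float)
    vis = np.asarray(vis_points, dtype=float)
    A = np.hstack([uv, np.ones((len(uv), 1))])   # rows: [x, y, 1]
    # Solve A @ X ~= vis in the least-squares sense; X is (3, 2).
    X, *_ = np.linalg.lstsq(A, vis, rcond=None)
    return X.T                                    # affine matrix, shape (2, 3)

def apply_affine(points, M):
    """Map UV-image coordinates into the visible-image frame."""
    pts = np.hstack([np.asarray(points, dtype=float), np.ones((len(points), 1))])
    return pts @ M.T
```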

  5. Image flows and one-liner graphical image representation.

    PubMed

    Makhervaks, Vadim; Barequet, Gill; Bruckstein, Alfred

    2002-10-01

    This paper introduces a novel graphical image representation consisting of a single curve: the one-liner. The first step of the algorithm involves the detection and ranking of image edges. A new edge exploration technique is used to perform both tasks simultaneously. This process is based on image flows. It uses a gradient vector field and a new operator to explore image edges. Estimation of the derivatives of the image is performed by using local Taylor expansions in conjunction with a weighted least-squares method. This process finds all the possible image edges without any pruning, and collects information that allows the edges found to be prioritized. This enables the most important edges to be selected to form a skeleton of the representation sought. The next step connects the selected edges into one continuous curve, the one-liner. It orders the selected edges and determines the curves connecting them. These two problems are solved separately. Since the abstract graph setting of the first problem is NP-complete, we reduce it to a variant of the traveling salesman problem and compute an approximate solution to it. We solve the second problem by using Dijkstra's shortest-path algorithm. The full software implementation for the entire one-liner determination process is available.
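
    As a simple stand-in for the traveling-salesman-variant ordering mentioned above, the sketch below orders edge centroids with a greedy nearest-neighbour tour; it only illustrates the idea and is not the paper's approximation algorithm.

```python
import numpy as np

def order_edges_nearest_neighbor(edge_centroids):
    """Greedy nearest-neighbour ordering of selected edges, starting from
    the first one; a simple stand-in for the TSP-variant solved in the paper."""
    pts = np.asarray(edge_centroids, dtype=float)
    remaining = list(range(len(pts)))
    order = [remaining.pop(0)]
    while remaining:
        last = pts[order[-1]]
        dists = [np.linalg.norm(pts[i] - last) for i in remaining]
        order.append(remaining.pop(int(np.argmin(dists))))
    return order
```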

  6. Research-oriented image registry for multimodal image integration.

    PubMed

    Tanaka, M; Sadato, N; Ishimori, Y; Yonekura, Y; Yamashita, Y; Komuro, H; Hayahsi, N; Ishii, Y

    1998-01-01

    To provide multimodal biomedical images automatically, we constructed a research-oriented image registry, the Data Delivery System (DDS). DDS was constructed on the campus local area network. Machines which generate images (imagers: DSA, ultrasound, PET, MRI, SPECT and CT) were connected to the campus LAN. Once a patient is registered, all of his or her images are automatically picked up by DDS as they are generated, transferred through the gateway server to the intermediate server, and copied into the directory of the user who registered the patient. DDS informs the user through e-mail that new data have been generated and transferred. The data format is automatically converted into the one chosen by the user. Data inactive for a certain period in the intermediate server are automatically archived onto the final, permanent data server based on compact disks. As a soft link is automatically generated through this step, a user has access to all (old or new) image data of the patients of interest. As DDS runs with minimal maintenance, the cost and time for data transfer are significantly reduced. By making the complex process of data transfer and conversion invisible, DDS has made it easy for researchers with little computer experience to concentrate on their biomedical interests.
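
    A hypothetical sketch of the delivery idea: poll an incoming directory, copy unseen files into the registering user's directory, and notify that user by e-mail. The directory layout, SMTP host, addresses, and the in-memory 'seen' set are all assumptions, and format conversion and archiving are omitted.

```python
import shutil
import smtplib
from email.message import EmailMessage
from pathlib import Path

def deliver_new_images(incoming_dir, user_dir, user_email, seen, smtp_host="localhost"):
    """Copy files not yet delivered into the user's directory and notify by e-mail."""
    for path in Path(incoming_dir).iterdir():
        if path.is_file() and path.name not in seen:
            shutil.copy2(path, Path(user_dir) / path.name)
            msg = EmailMessage()
            msg["Subject"] = f"New image data delivered: {path.name}"
            msg["From"] = "dds@example.org"          # placeholder sender address
            msg["To"] = user_email
            msg.set_content(f"{path.name} has been copied to {user_dir}.")
            with smtplib.SMTP(smtp_host) as smtp:
                smtp.send_message(msg)
            seen.add(path.name)
```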

  7. The ImageJ ecosystem: an open platform for biomedical image analysis

    PubMed Central

    Schindelin, Johannes; Rueden, Curtis T.; Hiner, Mark C.; Eliceiri, Kevin W.

    2015-01-01

    Technology in microscopy advances rapidly, enabling increasingly affordable, faster, and more precise quantitative biomedical imaging, which necessitates correspondingly more-advanced image processing and analysis techniques. A wide range of software is available – from commercial to academic, special-purpose to Swiss army knife, small to large–but a key characteristic of software that is suitable for scientific inquiry is its accessibility. Open-source software is ideal for scientific endeavors because it can be freely inspected, modified, and redistributed; in particular, the open-software platform ImageJ has had a huge impact on life sciences, and continues to do so. From its inception, ImageJ has grown significantly due largely to being freely available and its vibrant and helpful user community. Scientists as diverse as interested hobbyists, technical assistants, students, scientific staff, and advanced biology researchers use ImageJ on a daily basis, and exchange knowledge via its dedicated mailing list. Uses of ImageJ range from data visualization and teaching to advanced image processing and statistical analysis. The software's extensibility continues to attract biologists at all career stages as well as computer scientists who wish to effectively implement specific image-processing algorithms. In this review, we use the ImageJ project as a case study of how open-source software fosters its suites of software tools, making multitudes of image-analysis technology easily accessible to the scientific community. We specifically explore what makes ImageJ so popular, how it impacts life science, how it inspires other projects, and how it is self-influenced by coevolving projects within the ImageJ ecosystem. PMID:26153368

  8. Canonical Images

    ERIC Educational Resources Information Center

    Hewitt, Dave

    2007-01-01

    In this article, the author offers two well-known mathematical images--that of a dot moving around a circle; and that of the tens chart--and considers their power for developing mathematical thinking. In his opinion, these images each contain the essence of a particular topic of mathematics. They are contrasting images in the sense that they deal…

  9. Choriocapillaris Imaging Using Multiple En Face Optical Coherence Tomography Angiography Image Averaging.

    PubMed

    Uji, Akihito; Balasubramanian, Siva; Lei, Jianqin; Baghdasaryan, Elmira; Al-Sheikh, Mayss; Sadda, SriniVas R

    2017-11-01

    Imaging of the choriocapillaris in vivo is challenging with existing technology. Optical coherence tomography angiography (OCTA), if optimized, could make the imaging less challenging. To investigate multiple en face image averaging on OCTA images of the choriocapillaris. Observational, cross-sectional case series at a referral institutional practice in Los Angeles, California. From the original cohort of 21 healthy individuals, 17 normal eyes of 17 participants were included in the study. The study dates were August to September 2016. All participants underwent OCTA imaging of the macula covering a 3 × 3-mm area using OCTA software (Cirrus 5000 with AngioPlex; Carl Zeiss Meditec). One eye per participant was repeatedly imaged to obtain 9 OCTA cube scan sets. Registration was first performed using superficial capillary plexus images, and this transformation was then applied to the choriocapillaris images. The 9 registered choriocapillaris images were then averaged. Quantitative parameters were measured on binarized OCTA images and compared with the unaveraged OCTA images. Vessel caliber measurement. Seventeen eyes of 17 participants (mean [SD] age, 35.1 [6.0] years; 9 [53%] female; and 9 [53%] of white race/ethnicity) with sufficient image quality were included in this analysis. The single unaveraged images demonstrated a granular appearance, and the vascular pattern was difficult to discern. After averaging, en face choriocapillaris images showed a meshwork appearance. The mean (SD) diameter of the vessels was 22.8 (5.8) µm (range, 9.6-40.2 µm). Compared with the single unaveraged images, the averaged images showed more flow voids (1423 flow voids [95% CI, 967-1909] vs 1254 flow voids [95% CI, 825-1683], P < .001), smaller average size of the flow voids (911 [95% CI, 301-1521] µm2 vs 1364 [95% CI, 645-2083] µm2, P < .001), and greater vessel density (70.7% [95% CI, 61.9%-79.5%] vs 61.9% [95% CI, 56.0%-67.8%], P < .001). The distribution of the
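
    A simplified sketch of the register-then-average step, assuming translational misalignment only and using scikit-image's phase correlation; in the study the transform was estimated on the superficial capillary plexus images and then applied to the choriocapillaris slabs, which the optional reference_frames argument hints at.

```python
import numpy as np
from scipy.ndimage import shift
from skimage.registration import phase_cross_correlation

def register_and_average(frames, reference_frames=None):
    """Align repeated en face scans to the first one and average them.
    `reference_frames` (e.g. superficial plexus images) can supply the
    registration signal while `frames` are the slabs being averaged."""
    frames = [np.asarray(f, dtype=float) for f in frames]
    ref_src = [np.asarray(f, dtype=float) for f in (reference_frames or frames)]
    ref = ref_src[0]
    aligned = [frames[0]]
    for src, frame in zip(ref_src[1:], frames[1:]):
        offset, _, _ = phase_cross_correlation(ref, src, upsample_factor=10)
        aligned.append(shift(frame, offset, order=1))
    return np.mean(aligned, axis=0)
```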

  10. Onion cell imaging by using Talbot/self-imaging effect

    NASA Astrophysics Data System (ADS)

    Agarwal, Shilpi; Kumar, Varun; Shakher, Chandra

    2017-08-01

    This paper presents amplitude and phase imaging of onion epidermis cells using the self-imaging capability of a grating (Talbot effect) in the visible light region. In the proposed method, the Fresnel diffraction pattern from the first grating and the object is recorded at the self-image plane. The fast Fourier transform (FFT) is used to extract the 3D amplitude and phase image of the onion epidermis cells. The stability of the proposed system against environmental perturbation, as well as its compactness and portability, gives the proposed system high potential for several clinical applications.
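
    The FFT-based amplitude and phase extraction can be illustrated with a standard Fourier fringe-analysis sketch: isolate one carrier sideband of the recorded pattern, re-centre it, and invert. The sideband window size and DC-suppression margin are arbitrary assumptions, and the paper's exact processing chain may differ.

```python
import numpy as np

def fft_amplitude_phase(fringe_image, sideband_halfwidth=20, dc_margin=5):
    """Extract amplitude and (wrapped) phase from a carrier fringe pattern."""
    F = np.fft.fftshift(np.fft.fft2(fringe_image))
    mag = np.abs(F).copy()
    cy, cx = np.array(F.shape) // 2
    # Suppress the DC neighbourhood before searching for a carrier sideband.
    mag[cy - dc_margin:cy + dc_margin + 1, cx - dc_margin:cx + dc_margin + 1] = 0
    py, px = np.unravel_index(np.argmax(mag), mag.shape)

    window = np.zeros_like(F)
    y0, y1 = max(0, py - sideband_halfwidth), py + sideband_halfwidth + 1
    x0, x1 = max(0, px - sideband_halfwidth), px + sideband_halfwidth + 1
    window[y0:y1, x0:x1] = F[y0:y1, x0:x1]

    centered = np.roll(window, (cy - py, cx - px), axis=(0, 1))  # remove carrier
    field = np.fft.ifft2(np.fft.ifftshift(centered))
    return np.abs(field), np.angle(field)
```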

  11. Content-based image retrieval from a database of fracture images

    NASA Astrophysics Data System (ADS)

    Müller, Henning; Do Hoang, Phuong Anh; Depeursinge, Adrien; Hoffmeyer, Pierre; Stern, Richard; Lovis, Christian; Geissbuhler, Antoine

    2007-03-01

    This article describes the use of a medical image retrieval system on a database of 16'000 fractures, selected from surgical routine over several years. Image retrieval has been a very active domain of research for several years. It was frequently proposed for the medical domain, but only few running systems were ever tested in clinical routine. For the planning of surgical interventions after fractures, x-ray images play an important role. The fractures are classified according to exact fracture location, plus whether and to which degree the fracture is damaging articulations to see how complicated a reparation will be. Several classification systems for fractures exist and the classification plus the experience of the surgeon lead in the end to the choice of surgical technique (screw, metal plate, ...). This choice is strongly influenced by the experience and knowledge of the surgeons with respect to a certain technique. Goal of this article is to describe a prototype that supplies similar cases to an example to help treatment planning and find the most appropriate technique for a surgical intervention. Our database contains over 16'000 fracture images before and after a surgical intervention. We use an image retrieval system (GNU Image Finding Tool, GIFT) to find cases/images similar to an example case currently under observation. Problems encountered are varying illumination of images as well as strong anatomic differences between patients. Regions of interest are usually small and the retrieval system needs to focus on this region. Results show that GIFT is capable of supplying similar cases, particularly when using relevance feedback, on such a large database. Usual image retrieval is based on a single image as search target but for this application we have to select images by case as similar cases need to be found and not images. A few false positive cases often remain in the results but they can be sorted out quickly by the surgeons. Image retrieval can

  12. Optical image encryption using multilevel Arnold transform and noninterferometric imaging

    NASA Astrophysics Data System (ADS)

    Chen, Wen; Chen, Xudong

    2011-11-01

    Information security has attracted much attention recently due to the rapid development of modern technologies, such as computers and the internet. We propose a novel method for optical image encryption using a multilevel Arnold transform and rotatable-phase-mask noninterferometric imaging. An optical image encryption scheme is developed in the gyrator transform domain, and one phase-only mask (i.e., phase grating) is rotated and updated during image encryption. For the decryption, an iterative retrieval algorithm is proposed to extract high-quality plaintexts. Conventional encoding methods (such as digital holography) have been proven vulnerable to attacks, and the proposed optical encoding scheme can effectively eliminate this security deficiency and significantly enhance cryptosystem security. The proposed strategy based on the rotatable phase-only mask provides a new alternative for data/image encryption in noninterferometric imaging.
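
    The (multilevel) Arnold transform itself is a simple pixel permutation of a square image; the sketch below applies the classic cat map repeatedly, which conveys the scrambling idea but does not model the gyrator-domain or rotatable-phase-mask stages of the proposed scheme.

```python
import numpy as np

def arnold_transform(img, iterations=1):
    """Arnold (cat map) scrambling of a square image: the pixel at (x, y)
    moves to ((x + y) mod N, (x + 2y) mod N) at every iteration."""
    N = img.shape[0]
    assert img.shape[0] == img.shape[1], "Arnold transform needs a square image"
    out = img
    for _ in range(iterations):
        x, y = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
        scrambled = np.empty_like(out)
        scrambled[(x + y) % N, (x + 2 * y) % N] = out[x, y]
        out = scrambled
    return out
```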

  13. The Pixon Method for Data Compression Image Classification, and Image Reconstruction

    NASA Technical Reports Server (NTRS)

    Puetter, Richard; Yahil, Amos

    2002-01-01

    As initially proposed, this program had three goals: (1) continue to develop the highly successful Pixon method for image reconstruction and support other scientists in implementing this technique for their applications; (2) develop image compression techniques based on the Pixon method; and (3) develop artificial intelligence algorithms for image classification based on the Pixon approach for simplifying neural networks. Subsequent to proposal review, the scope of the program was greatly reduced, and it was decided to investigate the ability of the Pixon method to provide superior restorations of images compressed with standard image compression schemes, specifically JPEG-compressed images.

  14. A widefield fluorescence microscope with a linear image sensor for image cytometry of biospecimens: Considerations for image quality optimization

    NASA Astrophysics Data System (ADS)

    Hutcheson, Joshua A.; Majid, Aneeka A.; Powless, Amy J.; Muldoon, Timothy J.

    2015-09-01

    Linear image sensors have been widely used in numerous research and industry applications to provide continuous imaging of moving objects. Here, we present a widefield fluorescence microscope with a linear image sensor used to image translating objects for image cytometry. First, a calibration curve was characterized for a custom microfluidic chamber over a span of volumetric pump rates. Image data were also acquired using 15 μm fluorescent polystyrene spheres on a slide with a motorized translation stage in order to match the linear translation speed to the line exposure period and thereby preserve the image aspect ratio. Aspect ratios were then calculated after imaging to ensure the quality of the image data. Fluorescent beads were imaged in suspension flowing through the microfluidic chamber, pumped by a mechanical syringe pump at 16 μl min-1 with a line exposure period of 150 μs. The line period was selected to acquire images of fluorescent beads with a 40 dB signal-to-background ratio. A motorized translation stage was then used to transport conventional glass slides of stained cellular biospecimens. Whole blood collected from healthy volunteers was stained with 0.02% (w/v) proflavine hemisulfate and imaged to highlight leukocyte morphology with a 1.56 mm × 1.28 mm field of view (1540 ms total acquisition time). Oral squamous cells were also collected from healthy volunteers and stained with 0.01% (w/v) proflavine hemisulfate to demonstrate quantifiable subcellular features and an average nuclear-to-cytoplasmic ratio of 0.03 (n = 75), with a resolution of 0.31 μm per pixel.
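
    A quick back-of-the-envelope check of the speed/exposure matching described above, using the numbers quoted in the abstract (0.31 μm per pixel, 150 μs line period); the calculation is illustrative and not taken from the paper.

```python
# Matching the stage speed to the line exposure period so that square pixels
# are acquired and the aspect ratio is preserved (numbers from the abstract).
pixel_pitch_um = 0.31      # object-space sampling, micrometres per pixel
line_period_s = 150e-6     # line exposure period, seconds

speed_um_per_s = pixel_pitch_um / line_period_s
print(f"required translation speed ~ {speed_um_per_s / 1000:.2f} mm/s")
# -> roughly 2.07 mm/s: the object advances one 0.31-um line per 150-us exposure.
```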

  15. Image compression technique

    DOEpatents

    Fu, Chi-Yung; Petrich, Loren I.

    1997-01-01

    An image is compressed by identifying edge pixels of the image; creating a filled edge array of pixels each of the pixels in the filled edge array which corresponds to an edge pixel having a value equal to the value of a pixel of the image array selected in response to the edge pixel, and each of the pixels in the filled edge array which does not correspond to an edge pixel having a value which is a weighted average of the values of surrounding pixels in the filled edge array which do correspond to edge pixels; and subtracting the filled edge array from the image array to create a difference array. The edge file and the difference array are then separately compressed and transmitted or stored. The original image is later reconstructed by creating a preliminary array in response to the received edge file, and adding the preliminary array to the received difference array. Filling is accomplished by solving Laplace's equation using a multi-grid technique. Contour and difference file coding techniques also are described. The techniques can be used in a method for processing a plurality of images by selecting a respective compression approach for each image, compressing each of the images according to the compression approach selected, and transmitting each of the images as compressed, in correspondence with an indication of the approach selected for the image.
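
    A minimal sketch of the fill-and-subtract stage, assuming a boolean edge mask is already available: non-edge pixels are relaxed toward the solution of Laplace's equation with plain Jacobi iterations (the patent specifies a faster multi-grid solver), and the difference array is formed for separate compression.

```python
import numpy as np

def fill_from_edges(image, edge_mask, iterations=500):
    """Build the 'filled edge array': edge pixels keep their image values,
    non-edge pixels are filled by relaxing Laplace's equation (Jacobi sweeps;
    periodic boundaries via np.roll are a simplification)."""
    filled = np.where(edge_mask, image.astype(float), 0.0)
    for _ in range(iterations):
        neighbors = 0.25 * (np.roll(filled, 1, 0) + np.roll(filled, -1, 0) +
                            np.roll(filled, 1, 1) + np.roll(filled, -1, 1))
        filled = np.where(edge_mask, image, neighbors)  # keep edge values pinned
    return filled

def split_for_compression(image, edge_mask):
    """Return the two parts that are compressed separately: the filled edge
    array and the difference array."""
    filled = fill_from_edges(image, edge_mask)
    difference = image.astype(float) - filled
    return filled, difference
```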

  16. Dynamic flat panel detector versus image intensifier in cardiac imaging: dose and image quality

    NASA Astrophysics Data System (ADS)

    Vano, E.; Geiger, B.; Schreiner, A.; Back, C.; Beissel, J.

    2005-12-01

    The practical aspects of the dosimetric and imaging performance of a digital x-ray system for cardiology procedures were evaluated. The system was configured with an image intensifier (II) and later upgraded to a dynamic flat panel detector (FD). Entrance surface air kerma (ESAK) to phantoms of 16, 20, 24 and 28 cm of polymethyl methacrylate (PMMA) and the image quality of a test object were measured. Images were evaluated directly on the monitor and with numerical methods (noise and signal-to-noise ratio). Information contained in the DICOM header for dosimetry audit purposes was also tested. ESAK values per frame (or kerma rate) for the most commonly used cine and fluoroscopy modes for different PMMA thicknesses and for field sizes of 17 and 23 cm for II, and 20 and 25 cm for FD, produced similar results in the evaluated system with both technologies, ranging between 19 and 589 µGy/frame (cine) and 5 and 95 mGy min-1 (fluoroscopy). Image quality for these dose settings was better for the FD version. The 'study dosimetric report' is comprehensive, and its numerical content is sufficiently accurate. There is potential in the future to set those systems with dynamic FD to lower doses than are possible in the current II versions, especially for digital cine runs, or to benefit from improved image quality.

  17. Standardized food images: A photographing protocol and image database.

    PubMed

    Charbonnier, Lisette; van Meer, Floor; van der Laan, Laura N; Viergever, Max A; Smeets, Paul A M

    2016-01-01

    The regulation of food intake has gained much research interest because of the current obesity epidemic. For research purposes, food images are a good and convenient alternative for real food because many dietary decisions are made based on the sight of foods. Food pictures are assumed to elicit anticipatory responses similar to real foods because of learned associations between visual food characteristics and post-ingestive consequences. In contemporary food science, a wide variety of images are used which introduces between-study variability and hampers comparison and meta-analysis of results. Therefore, we created an easy-to-use photographing protocol which enables researchers to generate high resolution food images appropriate for their study objective and population. In addition, we provide a high quality standardized picture set which was characterized in seven European countries. With the use of this photographing protocol a large number of food images were created. Of these images, 80 were selected based on their recognizability in Scotland, Greece and The Netherlands. We collected image characteristics such as liking, perceived calories and/or perceived healthiness ratings from 449 adults and 191 children. The majority of the foods were recognized and liked at all sites. The differences in liking ratings, perceived calories and perceived healthiness between sites were minimal. Furthermore, perceived caloric content and healthiness ratings correlated strongly (r ≥ 0.8) with actual caloric content in both adults and children. The photographing protocol as well as the images and the data are freely available for research use on http://nutritionalneuroscience.eu/. By providing the research community with standardized images and the tools to create their own, comparability between studies will be improved and a head-start is made for a world-wide standardized food image database. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. Deep architecture neural network-based real-time image processing for image-guided radiotherapy.

    PubMed

    Mori, Shinichiro

    2017-08-01

    To develop real-time image processing for image-guided radiotherapy, we evaluated several neural network models for use with different imaging modalities, including X-ray fluoroscopic image denoising. Setup images of prostate cancer patients were acquired with two oblique X-ray fluoroscopic units. Two types of residual network were designed: a convolutional autoencoder (rCAE) and a convolutional neural network (rCNN). We changed the convolutional kernel size and number of convolutional layers for both networks, and the number of pooling and upsampling layers for rCAE. The ground-truth image was applied to the contrast-limited adaptive histogram equalization (CLAHE) method of image processing. Network models were trained to keep the quality of the output image close to that of the ground-truth image from the input image without image processing. For image denoising evaluation, noisy input images were used for the training. More than 6 convolutional layers with convolutional kernels >5×5 improved image quality. However, this did not allow real-time imaging. After applying a pair of pooling and upsampling layers to both networks, rCAEs with >3 convolutions each and rCNNs with >12 convolutions with a pair of pooling and upsampling layers achieved real-time processing at 30 frames per second (fps) with acceptable image quality. Use of our suggested network achieved real-time image processing for contrast enhancement and image denoising by the use of a conventional modern personal computer. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
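
    A minimal residual convolutional autoencoder in PyTorch in the spirit of the rCAE described above; the channel count, kernel sizes, and single pooling/upsampling pair are illustrative assumptions rather than the paper's configuration.

```python
import torch
import torch.nn as nn

class ResidualCAE(nn.Module):
    """The network predicts a correction that is added back to the noisy input
    (residual connection). Even-sized inputs are assumed so the shapes match."""
    def __init__(self, channels=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, channels, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                              # one pooling stage
        )
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="nearest"),  # matching upsampling stage
            nn.Conv2d(channels, channels, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return x + self.decoder(self.encoder(x))          # residual connection

# Training would minimise e.g. nn.MSELoss() between the output for a noisy
# fluoroscopic frame and its CLAHE-processed ground-truth counterpart.
```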

  19. Performance assessment of imaging plates for the JHR transfer Neutron Imaging System

    NASA Astrophysics Data System (ADS)

    Simon, E.; Guimbal, P.

    2018-01-01

    The underwater Neutron Imaging System to be installed in the Jules Horowitz Reactor (JHR-NIS) is based on a transfer method using a neutron-activated beta emitter such as dysprosium. The information stored in the converter is transferred offline onto a specific imaging system, which is still to be defined. Solutions are currently under investigation for the JHR-NIS in order to anticipate the disappearance of the radiographic films commonly used in these applications. We report here the performance assessment of Computed Radiography imagers (imaging plates) performed at LLB/Orphée (CEA Saclay). Several imaging plate types are studied: on the one hand, in the configuration involving intimate contact with an activated dysprosium foil converter (Fuji BAS-TR, Fuji UR-1 and Carestream Flex XL Blue imaging plates), and on the other hand, a prototype imaging plate doped with dysprosium that does not need contact with a separate converter foil. The results for these imaging plates are compared with those obtained with a gadolinium-doped imaging plate used in direct neutron imaging (Fuji BAS-ND). The detection performances of the different imagers are compared with regard to resolution and noise. The many advantages of imaging plates over radiographic films (high sensitivity, linear response, high dynamic range) could compensate for their lower intrinsic resolution.

  20. Deblurring sequential ocular images from multi-spectral imaging (MSI) via mutual information.

    PubMed

    Lian, Jian; Zheng, Yuanjie; Jiao, Wanzhen; Yan, Fang; Zhao, Bojun

    2018-06-01

    Multi-spectral imaging (MSI) produces a sequence of spectral images to capture the inner structure of different species and was recently introduced into ocular disease diagnosis. However, the quality of MSI images can be significantly degraded by motion blur caused by inevitable saccades and the exposure time required to maintain a sufficiently high signal-to-noise ratio. This degradation may confuse an ophthalmologist, reduce the examination quality, or defeat various image analysis algorithms. We propose an early work specifically on deblurring sequential MSI images, which is distinguished from many current image deblurring techniques by resolving the blur kernel simultaneously for all the images in an MSI sequence. It is accomplished by incorporating several a priori constraints, including the sharpness of the latent clear image, the spatial and temporal smoothness of the blur kernel, and the similarity between temporally neighboring images in the MSI sequence. Specifically, we model the similarity between MSI images with mutual information, considering the different wavelengths used for capturing different images in an MSI sequence. The optimization of the proposed approach is based on a multi-scale framework and a stepwise optimization strategy. Experimental results from 22 MSI sequences validate that our approach outperforms several state-of-the-art techniques in natural image deblurring.
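
    The mutual-information similarity term can be sketched with a joint-histogram estimate, as below; the bin count is an assumption, and the full deblurring objective (sharpness and smoothness priors, multi-scale optimization) is not reproduced.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=64):
    """Histogram-based mutual information between two images, usable as a
    similarity measure between temporally neighbouring MSI frames."""
    hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = hist / hist.sum()                       # joint probability estimate
    px = pxy.sum(axis=1, keepdims=True)           # marginal of img_a
    py = pxy.sum(axis=0, keepdims=True)           # marginal of img_b
    nonzero = pxy > 0
    return float(np.sum(pxy[nonzero] * np.log(pxy[nonzero] / (px @ py)[nonzero])))
```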

  1. Binary-space-partitioned images for resolving image-based visibility.

    PubMed

    Fu, Chi-Wing; Wong, Tien-Tsin; Tong, Wai-Shun; Tang, Chi-Keung; Hanson, Andrew J

    2004-01-01

    We propose a novel 2D representation for 3D visibility sorting, the Binary-Space-Partitioned Image (BSPI), to accelerate real-time image-based rendering. BSPI is an efficient 2D realization of a 3D BSP tree, which is commonly used in computer graphics for time-critical visibility sorting. Since the overall structure of a BSP tree is encoded in a BSPI, traversing a BSPI is comparable to traversing the corresponding BSP tree. BSPI performs visibility sorting efficiently and accurately in the 2D image space by warping the reference image triangle-by-triangle instead of pixel-by-pixel. Multiple BSPIs can be combined to solve "disocclusion," when an occluded portion of the scene becomes visible at a novel viewpoint. Our method is highly automatic, including a tensor voting preprocessing step that generates candidate image partition lines for BSPIs, filters the noisy input data by rejecting outliers, and interpolates missing information. Our system has been applied to a variety of real data, including stereo, motion, and range images.
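
    A generic sketch of how a BSP structure yields a visibility ordering: each node stores a partition line and its primitives, and a back-to-front (painter's algorithm) traversal is chosen per viewpoint. This is textbook BSP traversal, not the paper's BSPI encoding or its triangle-warping step.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class BSPNode:
    """A 2D BSP node: partition line ax + by + c = 0 plus its primitives."""
    line: tuple                       # (a, b, c)
    triangles: List                   # primitives lying on this partition line
    front: Optional["BSPNode"] = None
    back: Optional["BSPNode"] = None

def back_to_front(node, viewpoint, out):
    """Painter's-algorithm ordering: far side first, then this node's
    primitives, then the near side, relative to the given viewpoint."""
    if node is None:
        return out
    a, b, c = node.line
    on_front_side = a * viewpoint[0] + b * viewpoint[1] + c >= 0
    far, near = (node.back, node.front) if on_front_side else (node.front, node.back)
    back_to_front(far, viewpoint, out)
    out.extend(node.triangles)
    back_to_front(near, viewpoint, out)
    return out
```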

  2. Dual-axis reflective continuous-wave terahertz confocal scanning polarization imaging and image fusion

    NASA Astrophysics Data System (ADS)

    Zhou, Yi; Li, Qi

    2017-01-01

    A dual-axis reflective continuous-wave terahertz (THz) confocal scanning polarization imaging system was adopted. THz polarization imaging experiments on gaps in a film and on the metallic letters "BeLLE" were carried out. The imaging results indicate that THz polarization imaging is sensitive to tilted gaps and wide flat gaps, suggesting that it is able to detect edges and stains. An image fusion method based on digital image processing was proposed to improve the imaging quality for the metallic letters "BeLLE." Objective and subjective evaluations both show that this method can improve the imaging quality.

  3. Improvement of sidestream dark field imaging with an image acquisition stabilizer.

    PubMed

    Balestra, Gianmarco M; Bezemer, Rick; Boerma, E Christiaan; Yong, Ze-Yie; Sjauw, Krishan D; Engstrom, Annemarie E; Koopmans, Matty; Ince, Can

    2010-07-13

    In the present study we developed, evaluated in volunteers, and clinically validated an image acquisition stabilizer (IAS) for Sidestream Dark Field (SDF) imaging. The IAS is a sterilizable stainless steel ring which fits around the SDF probe tip. The IAS creates adhesion to the imaged tissue by the application of negative pressure. The effects of the IAS on the sublingual microcirculatory flow velocities, the force required to induce pressure artifacts (PA), the time to acquire a stable image, and the duration of stable imaging were assessed in healthy volunteers. To demonstrate the clinical applicability of the SDF setup in combination with the IAS, simultaneous bilateral sublingual imaging of the microcirculation was performed during a lung recruitment maneuver (LRM) in mechanically ventilated critically ill patients. One SDF device was operated handheld; the second was fitted with the IAS and held in position by a mechanical arm. Lateral drift, the number of losses of image stability, and the duration of stable imaging of the two methods were compared. Five healthy volunteers were studied. The IAS did not affect microcirculatory flow velocities. A significantly greater force had to be applied onto the tissue to induce PA with the IAS than without it (0.25 +/- 0.15 N without vs. 0.62 +/- 0.05 N with the IAS, p < 0.001). The IAS ensured an increased duration of a stable image sequence (8 +/- 2 s without vs. 42 +/- 8 s with the IAS, p < 0.001). The time required to obtain a stable image sequence was similar with and without the IAS. In eight mechanically ventilated patients undergoing a LRM, the use of the IAS resulted in significantly reduced image drifting and enabled the acquisition of significantly longer stable image sequences (24 +/- 5 s without vs. 67 +/- 14 s with the IAS, p = 0.006). The present study has validated the use of an IAS for improvement of SDF imaging by demonstrating that the IAS did not affect microcirculatory perfusion in the microscopic field of view

  4. Image quality and dose differences caused by vendor-specific image processing of neonatal radiographs.

    PubMed

    Sensakovic, William F; O'Dell, M Cody; Letter, Haley; Kohler, Nathan; Rop, Baiywo; Cook, Jane; Logsdon, Gregory; Varich, Laura

    2016-10-01

    Image processing plays an important role in optimizing image quality and radiation dose in projection radiography. Unfortunately commercial algorithms are black boxes that are often left at or near vendor default settings rather than being optimized. We hypothesize that different commercial image-processing systems, when left at or near default settings, create significant differences in image quality. We further hypothesize that image-quality differences can be exploited to produce images of equivalent quality but lower radiation dose. We used a portable radiography system to acquire images on a neonatal chest phantom and recorded the entrance surface air kerma (ESAK). We applied two image-processing systems (Optima XR220amx, by GE Healthcare, Waukesha, WI; and MUSICA(2) by Agfa HealthCare, Mortsel, Belgium) to the images. Seven observers (attending pediatric radiologists and radiology residents) independently assessed image quality using two methods: rating and matching. Image-quality ratings were independently assessed by each observer on a 10-point scale. Matching consisted of each observer matching GE-processed images and Agfa-processed images with equivalent image quality. A total of 210 rating tasks and 42 matching tasks were performed and effective dose was estimated. Median Agfa-processed image-quality ratings were higher than GE-processed ratings. Non-diagnostic ratings were seen over a wider range of doses for GE-processed images than for Agfa-processed images. During matching tasks, observers matched image quality between GE-processed images and Agfa-processed images acquired at a lower effective dose (11 ± 9 μSv; P < 0.0001). Image-processing methods significantly impact perceived image quality. These image-quality differences can be exploited to alter protocols and produce images of equivalent image quality but lower doses. Those purchasing projection radiography systems or third-party image-processing software should be aware that image

  5. The ImageJ ecosystem: An open platform for biomedical image analysis.

    PubMed

    Schindelin, Johannes; Rueden, Curtis T; Hiner, Mark C; Eliceiri, Kevin W

    2015-01-01

    Technology in microscopy advances rapidly, enabling increasingly affordable, faster, and more precise quantitative biomedical imaging, which necessitates correspondingly more-advanced image processing and analysis techniques. A wide range of software is available-from commercial to academic, special-purpose to Swiss army knife, small to large-but a key characteristic of software that is suitable for scientific inquiry is its accessibility. Open-source software is ideal for scientific endeavors because it can be freely inspected, modified, and redistributed; in particular, the open-software platform ImageJ has had a huge impact on the life sciences, and continues to do so. From its inception, ImageJ has grown significantly due largely to being freely available and its vibrant and helpful user community. Scientists as diverse as interested hobbyists, technical assistants, students, scientific staff, and advanced biology researchers use ImageJ on a daily basis, and exchange knowledge via its dedicated mailing list. Uses of ImageJ range from data visualization and teaching to advanced image processing and statistical analysis. The software's extensibility continues to attract biologists at all career stages as well as computer scientists who wish to effectively implement specific image-processing algorithms. In this review, we use the ImageJ project as a case study of how open-source software fosters its suites of software tools, making multitudes of image-analysis technology easily accessible to the scientific community. We specifically explore what makes ImageJ so popular, how it impacts the life sciences, how it inspires other projects, and how it is self-influenced by coevolving projects within the ImageJ ecosystem. © 2015 Wiley Periodicals, Inc.

  6. Scanned Image Projection System Employing Intermediate Image Plane

    NASA Technical Reports Server (NTRS)

    DeJong, Christian Dean (Inventor); Hudman, Joshua M. (Inventor)

    2014-01-01

    In an imaging system, a spatial light modulator is configured to produce images by scanning a plurality of light beams. A first optical element is configured to cause the plurality of light beams to converge along an optical path defined between the first optical element and the spatial light modulator. A second optical element is disposed between the spatial light modulator and a waveguide. The first optical element and the spatial light modulator are arranged such that an image plane is created between the spatial light modulator and the second optical element. The second optical element is configured to collect the diverging light from the image plane and collimate it. The second optical element then delivers the collimated light to a pupil at an input of the waveguide.

  7. Subjective matters: from image quality to image psychology

    NASA Astrophysics Data System (ADS)

    Fedorovskaya, Elena A.; De Ridder, Huib

    2013-03-01

    From the advent of digital imaging through several decades of studies, the human vision research community systematically focused on perceived image quality and digital artifacts due to resolution, compression, gamma, dynamic range, capture and reproduction noise, blur, etc., to help overcome existing technological challenges and shortcomings. Technological advances made digital images and digital multimedia nearly flawless in quality, and ubiquitous and pervasive in usage, provide us with the exciting but at the same time demanding possibility to turn to the domain of human experience including higher psychological functions, such as cognition, emotion, awareness, social interaction, consciousness and Self. In this paper we will outline the evolution of human centered multidisciplinary studies related to imaging and propose steps and potential foci of future research.

  8. SoilJ - An ImageJ plugin for semi-automatized image-processing of 3-D X-ray images of soil columns

    NASA Astrophysics Data System (ADS)

    Koestel, John

    2016-04-01

    3-D X-ray imaging is a formidable tool for quantifying soil structural properties, which are known to be extremely diverse. This diversity necessitates the collection of large sample sizes for adequately representing the spatial variability of soil structure at a specific sampling site. One important bottleneck of using X-ray imaging is, however, the large amount of time required by a trained specialist to process the image data, which makes it difficult to process larger numbers of samples. The software SoilJ aims at removing this bottleneck by automating most of the image-processing steps needed to analyze image data of cylindrical soil columns. SoilJ is a plugin of the free Java-based image-processing software ImageJ. The plugin is designed to automatically process all images located within a designated folder. In a first step, SoilJ recognizes the outlines of the soil column, whereupon the column is rotated to an upright position and placed in the center of the canvas. Excess canvas is removed from the images. Then, SoilJ samples the grey values of the column material as well as the surrounding air in the Z-direction. Assuming that the column material (mostly PVC or aluminium) exhibits a spatially constant density, these grey values serve as a proxy for the image illumination at a specific Z-coordinate. Together with the grey values of the air, they are used to correct image illumination fluctuations which often occur along the axis of rotation during image acquisition. SoilJ also includes an algorithm for beam-hardening artefact removal and extended image segmentation options. Finally, SoilJ integrates the morphology analysis plugins of BoneJ (Doube et al., 2010, BoneJ: Free and extensible bone image analysis in ImageJ. Bone 47: 1076-1079) and provides an ASCII file summarizing these measures for each investigated soil column. In the future it is planned to integrate SoilJ into FIJI, the maintained and updated edition of ImageJ with selected
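
    The per-slice illumination correction described above can be illustrated with a short sketch: the column wall (assumed to have constant density) and the surrounding air are sampled in every slice and used to rescale grey values to common reference levels. The masks, reference values and function below are illustrative assumptions, not SoilJ's actual Java code.

```python
# Illustrative per-slice illumination correction (not SoilJ code).
import numpy as np

def correct_illumination(stack, wall_mask, air_mask,
                         wall_ref=20000.0, air_ref=5000.0):
    """stack: (Z, Y, X) grey values; wall_mask/air_mask: boolean (Y, X) masks."""
    corrected = np.empty_like(stack, dtype=np.float32)
    for z in range(stack.shape[0]):
        slice_ = stack[z].astype(np.float32)
        wall = slice_[wall_mask].mean()   # proxy for illumination in this slice
        air = slice_[air_mask].mean()
        scale = (wall_ref - air_ref) / max(wall - air, 1e-6)
        # Map the measured air/wall levels onto the fixed reference levels.
        corrected[z] = (slice_ - air) * scale + air_ref
    return corrected
```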

  9. Image Size Variation Influence on Corrupted and Non-viewable BMP Image

    NASA Astrophysics Data System (ADS)

    Azmi, Tengku Norsuhaila T.; Azma Abdullah, Nurul; Rahman, Nurul Hidayah Ab; Hamid, Isredza Rahmi A.; Chai Wen, Chuah

    2017-08-01

    Images are one of the evidence components sought in digital forensics. The Joint Photographic Experts Group (JPEG) format is the most popular on the Internet because JPEG compression is lossy and yields small files, which speeds up transmission. However, corrupted JPEG images are hard to recover due to the complexity of determining the corruption point. Nowadays, Bitmap (BMP) images are preferred in image processing over other formats because a BMP image contains all the image information in a simple format. Therefore, in order to investigate the corruption point in a JPEG, the file is required to be converted into BMP format. Nevertheless, many factors can corrupt a BMP image, such as changes to the recorded image size that make the file non-viewable. In this paper, the experiment indicates that the size field of a BMP file influences changes in the image itself under three conditions: deletion, replacement and insertion. From the experiment, we learnt that correcting the file size can produce a viewable, though partial, file. The file can then be investigated further to identify the corruption point.
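
    The recorded file size that these experiments manipulate lives in the BMP header: a 4-byte little-endian integer at byte offset 2. A minimal, hedged sketch of reading and restoring that field is shown below; 'damaged.bmp' is a hypothetical file name, and real forensic work would of course operate on a working copy rather than the original evidence file.

```python
# Read and, if needed, correct the file-size field in a BMP header.
import os
import struct

def fix_bmp_size_field(path):
    with open(path, "r+b") as f:
        header = f.read(6)
        if header[:2] != b"BM":
            raise ValueError("not a BMP file")
        recorded = struct.unpack("<I", header[2:6])[0]   # size stored in the header
        actual = os.path.getsize(path)                   # size of the file on disk
        if recorded != actual:
            f.seek(2)
            f.write(struct.pack("<I", actual))           # overwrite the size field
        return recorded, actual

print(fix_bmp_size_field("damaged.bmp"))   # hypothetical file
```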

  10. Nanophotonic Image Sensors.

    PubMed

    Chen, Qin; Hu, Xin; Wen, Long; Yu, Yan; Cumming, David R S

    2016-09-01

    The increasing miniaturization and resolution of image sensors bring challenges to conventional optical elements such as spectral filters and polarizers, the properties of which are determined mainly by the materials used, including dye polymers. Recent developments in spectral filtering and optical manipulation techniques based on nanophotonics have opened up the possibility of an alternative way to control light spectrally and spatially. By integrating these technologies into image sensors, it will become possible to achieve high compactness, improved process compatibility, robust stability and tunable functionality. In this Review, recent representative achievements in nanophotonic image sensors are presented and analyzed, including image sensors with nanophotonic color filters and polarizers, metamaterial-based THz image sensors, filter-free nanowire image sensors and nanostructure-based multispectral image sensors. This novel combination of cutting-edge photonics research and well-developed commercial products may not only lead to an important application of nanophotonics but also offer great potential for next-generation image sensors beyond Moore's Law expectations. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. Medical imaging systems

    DOEpatents

    Frangioni, John V [Wayland, MA

    2012-07-24

    A medical imaging system provides simultaneous rendering of visible light and fluorescent images. The system may employ dyes in a small-molecule form that remains in a subject's blood stream for several minutes, allowing real-time imaging of the subject's circulatory system superimposed upon a conventional, visible light image of the subject. The system may also employ dyes or other fluorescent substances associated with antibodies, antibody fragments, or ligands that accumulate within a region of diagnostic significance. In one embodiment, the system provides an excitation light source to excite the fluorescent substance and a visible light source for general illumination within the same optical guide that is used to capture images. In another embodiment, the system is configured for use in open surgical procedures by providing an operating area that is closed to ambient light. More broadly, the systems described herein may be used in imaging applications where a visible light image may be usefully supplemented by an image formed from fluorescent emissions from a fluorescent substance that marks areas of functional interest.

  12. First Images from the Focusing Optics X-Ray Solar Imager

    NASA Astrophysics Data System (ADS)

    Krucker, Säm; Christe, Steven; Glesener, Lindsay; Ishikawa, Shin-nosuke; Ramsey, Brian; Takahashi, Tadayuki; Watanabe, Shin; Saito, Shinya; Gubarev, Mikhail; Kilaru, Kiranmayee; Tajima, Hiroyasu; Tanaka, Takaaki; Turin, Paul; McBride, Stephen; Glaser, David; Fermin, Jose; White, Stephen; Lin, Robert

    2014-10-01

    The Focusing Optics X-ray Solar Imager (FOXSI) sounding rocket payload flew for the first time on 2012 November 2, producing the first focused images of the Sun above 5 keV. To enable hard X-ray (HXR) imaging spectroscopy via direct focusing, FOXSI makes use of grazing-incidence replicated optics combined with fine-pitch solid-state detectors. On its first flight, FOXSI observed several targets that included active regions, the quiet Sun, and a GOES-class B2.7 microflare. This Letter provides an introduction to the FOXSI instrument and presents its first solar image. These data demonstrate the superiority in sensitivity and dynamic range that is achievable with a direct HXR imager with respect to previous, indirect imaging methods, and illustrate the technological readiness for a spaceborne mission to observe HXRs from solar flares via direct focusing optics.

  13. Mirror image agnosia.

    PubMed

    Chandra, Sadanandavalli Retnaswami; Issac, Thomas Gregor

    2014-10-01

    Gnosis is a modality-specific ability to access semantic knowledge of an object or stimulus in the presence of normal perception. Failure of this ability is agnosia, or a disorder of recognition, and it can be highly selective within a modality. Self-images differ from other stimuli in that no one has seen their own face except in reflection. Failure to recognize this image can be labeled mirror image agnosia, or prosopagnosia for the reflected self-image. Mirror agnosia, in contrast, is a well-recognized condition in which a person looking at the reflections of other objects in a mirror imagines that the objects are in fact inside the mirror rather than outside it. Five patients, four female and one male, presented with failure to recognize the reflected self-image, resulting in patients conversing with the image as a friend, fighting because the person in the mirror was wearing her nose stud, or suspecting the reflected self-image to be an intruder; yet they did not have prosopagnosia for other faces or for non-living objects on themselves, nor apraxias, except dressing apraxia in one patient. This phenomenon is, to our knowledge, new. Mirror image agnosia is a unique phenomenon seen in patients with parietal lobe atrophy without specificity to a category of dementing illness, and it seems to disappear as the disease advances. Reflected self-images probably have a specific neural substrate that is affected very early in posterior dementias, especially those that predominantly affect the right side. At that phase most patients are mistaken as suffering from a psychiatric disorder because cognition is moderately preserved. As the disease becomes more widespread this symptom becomes masked. A high degree of suspicion and proper assessment might help physicians to recognize the organic cause of the symptom so that early therapeutic interventions can be initiated. Further assessment of the symptom with fMRI and PET scans is likely to solve the mystery of how the brain handles reflected self-images. A new observation involving failure

  14. Mirror Image Agnosia

    PubMed Central

    Chandra, Sadanandavalli Retnaswami; Issac, Thomas Gregor

    2014-01-01

    Background: Gnosis is a modality-specific ability to access semantic knowledge of an object or stimulus in the presence of normal perception. Failure of this ability is agnosia, or a disorder of recognition, and it can be highly selective within a modality. Self-images differ from other stimuli in that no one has seen their own face except in reflection. Failure to recognize this image can be labeled mirror image agnosia, or prosopagnosia for the reflected self-image. Mirror agnosia, in contrast, is a well-recognized condition in which a person looking at the reflections of other objects in a mirror imagines that the objects are in fact inside the mirror rather than outside it. Material and Methods: Five patients, four female and one male, presented with failure to recognize the reflected self-image, resulting in patients conversing with the image as a friend, fighting because the person in the mirror was wearing her nose stud, or suspecting the reflected self-image to be an intruder; yet they did not have prosopagnosia for other faces or for non-living objects on themselves, nor apraxias, except dressing apraxia in one patient. This phenomenon is, to our knowledge, new. Results: Mirror image agnosia is a unique phenomenon seen in patients with parietal lobe atrophy without specificity to a category of dementing illness, and it seems to disappear as the disease advances. Discussion: Reflected self-images probably have a specific neural substrate that is affected very early in posterior dementias, especially those that predominantly affect the right side. At that phase most patients are mistaken as suffering from a psychiatric disorder because cognition is moderately preserved. As the disease becomes more widespread this symptom becomes masked. A high degree of suspicion and proper assessment might help physicians to recognize the organic cause of the symptom so that early therapeutic interventions can be initiated. Further assessment of the symptom with fMRI and PET scans is likely to solve the mystery of how the brain handles

  15. Displaying radiologic images on personal computers: image storage and compression--Part 2.

    PubMed

    Gillespy, T; Rowberg, A H

    1994-02-01

    This is part 2 of our article on image storage and compression, the third article of our series for radiologists and imaging scientists on displaying, manipulating, and analyzing radiologic images on personal computers. Image compression is classified as lossless (nondestructive) or lossy (destructive). Common lossless compression algorithms include variable-length bit codes (Huffman codes and variants), dictionary-based compression (Lempel-Ziv variants), and arithmetic coding. Huffman codes and the Lempel-Ziv-Welch (LZW) algorithm are commonly used for image compression. All of these compression methods are enhanced if the image has been transformed into a differential image based on a differential pulse-code modulation (DPCM) algorithm. The LZW compression after the DPCM image transformation performed the best on our example images, and performed almost as well as the best of the three commercial compression programs tested. Lossy compression techniques are capable of much higher data compression, but reduced image quality and compression artifacts may be noticeable. Lossy compression comprises three steps: transformation, quantization, and coding. Two commonly used transformation methods are the discrete cosine transformation and discrete wavelet transformation. In both methods, most of the image information is contained in relatively few of the transformation coefficients. The quantization step reduces many of the lower order coefficients to 0, which greatly improves the efficiency of the coding (compression) step. In fractal-based image compression, image patterns are stored as equations that can be reconstructed at different levels of resolution.
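
    The benefit of the DPCM transformation described above is easy to demonstrate: differences between neighbouring pixels are far more compressible than the raw values. In the sketch below, zlib's DEFLATE (an LZ77 dictionary coder followed by Huffman coding) stands in for the coders discussed in the article, and the image is synthetic.

```python
# DPCM followed by a generic entropy coder (zlib) on a synthetic image.
import numpy as np
import zlib

rng = np.random.default_rng(0)
image = np.cumsum(rng.integers(-2, 3, size=(256, 256)), axis=1).astype(np.int16)

# Horizontal DPCM: first column kept as-is, remaining columns stored as
# differences from their left neighbour (invertible by a cumulative sum).
dpcm = np.diff(image, axis=1, prepend=0).astype(np.int16)
assert np.array_equal(np.cumsum(dpcm, axis=1), image)   # lossless round trip

raw_size = len(zlib.compress(image.tobytes(), level=9))
dpcm_size = len(zlib.compress(dpcm.tobytes(), level=9))
print(f"compressed size: raw {raw_size} bytes, DPCM-transformed {dpcm_size} bytes")
```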

  16. Integrating medical imaging analyses through a high-throughput bundled resource imaging system

    NASA Astrophysics Data System (ADS)

    Covington, Kelsie; Welch, E. Brian; Jeong, Ha-Kyu; Landman, Bennett A.

    2011-03-01

    Exploitation of advanced, PACS-centric image analysis and interpretation pipelines provides well-developed storage, retrieval, and archival capabilities along with state-of-the-art data provenance, visualization, and clinical collaboration technologies. However, pursuit of integrated medical imaging analysis through a PACS environment can be limiting in terms of the overhead required to validate, evaluate and integrate emerging research technologies. Herein, we address this challenge through presentation of a high-throughput bundled resource imaging system (HUBRIS) as an extension to the Philips Research Imaging Development Environment (PRIDE). HUBRIS enables PACS-connected medical imaging equipment to invoke tools provided by the Java Imaging Science Toolkit (JIST) so that a medical imaging platform (e.g., a magnetic resonance imaging scanner) can pass images and parameters to a server, which communicates with a grid computing facility to invoke the selected algorithms. Generated images are passed back to the server and subsequently to the imaging platform, from which the images can be sent to a PACS. JIST makes use of an open application program interface layer so that research technologies can be implemented in any language capable of communicating through a system shell environment (e.g., Matlab, Java, C/C++, Perl, LISP, etc.). As demonstrated in this proof-of-concept approach, HUBRIS enables evaluation and analysis of emerging technologies within well-developed PACS systems with minimal adaptation of research software, which simplifies evaluation of new technologies in clinical research and provides a more convenient use of PACS technology by imaging scientists.

  17. A widefield fluorescence microscope with a linear image sensor for image cytometry of biospecimens: Considerations for image quality optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hutcheson, Joshua A.; Majid, Aneeka A.; Powless, Amy J.

    Linear image sensors have been widely used in numerous research and industry applications to provide continuous imaging of moving objects. Here, we present a widefield fluorescence microscope with a linear image sensor used to image translating objects for image cytometry. First, a calibration curve was characterized for a custom microfluidic chamber over a span of volumetric pump rates. Image data were also acquired using 15 μm fluorescent polystyrene spheres on a slide with a motorized translation stage in order to match linear translation speed with line exposure periods to preserve the image aspect ratio. Aspect ratios were then calculated after imaging to ensure quality control of image data. Fluorescent beads were imaged in suspension flowing through the microfluidic chamber, pumped by a mechanical syringe pump at 16 μl/min with a line exposure period of 150 μs. The line period was selected to acquire images of fluorescent beads with a 40 dB signal-to-background ratio. A motorized translation stage was then used to transport conventional glass slides of stained cellular biospecimens. Whole blood collected from healthy volunteers was stained with 0.02% (w/v) proflavine hemisulfate and imaged to highlight leukocyte morphology with a 1.56 mm × 1.28 mm field of view (1540 ms total acquisition time). Oral squamous cells were also collected from healthy volunteers and stained with 0.01% (w/v) proflavine hemisulfate to demonstrate quantifiable subcellular features and an average nuclear-to-cytoplasmic ratio of 0.03 (n = 75), with a resolution of 0.31 μm per pixel.
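
    The matching of line exposure period to translation speed mentioned above follows from a simple relation: the sample must travel exactly one object-space pixel per line period to preserve a 1:1 aspect ratio. The sketch below shows the arithmetic with illustrative numbers that are not the instrument's actual parameters.

```python
# Line-period arithmetic for a line-scan acquisition (illustrative numbers only).
def required_line_period(pixel_pitch_um, magnification, stage_speed_um_per_s):
    """Line exposure period (s) that preserves a 1:1 aspect ratio."""
    object_pixel_um = pixel_pitch_um / magnification   # pixel size at the sample
    return object_pixel_um / stage_speed_um_per_s

# e.g. a 10 um sensor pixel, 10x magnification, stage moving at 1 mm/s
period_s = required_line_period(10.0, 10.0, 1000.0)
print(f"{period_s * 1e6:.0f} us per line")   # 1000 us for these made-up values
```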

  18. Externally Calibrated Parallel Imaging for 3D Multispectral Imaging Near Metallic Implants Using Broadband Ultrashort Echo Time Imaging

    PubMed Central

    Wiens, Curtis N.; Artz, Nathan S.; Jang, Hyungseok; McMillan, Alan B.; Reeder, Scott B.

    2017-01-01

    Purpose To develop an externally calibrated parallel imaging technique for three-dimensional multispectral imaging (3D-MSI) in the presence of metallic implants. Theory and Methods A fast, ultrashort echo time (UTE) calibration acquisition is proposed to enable externally calibrated parallel imaging techniques near metallic implants. The proposed calibration acquisition uses a broadband radiofrequency (RF) pulse to excite the off-resonance induced by the metallic implant, fully phase-encoded imaging to prevent in-plane distortions, and UTE to capture rapidly decaying signal. The performance of the externally calibrated parallel imaging reconstructions was assessed using phantoms and in vivo examples. Results Phantom and in vivo comparisons to self-calibrated parallel imaging acquisitions show that significant reductions in acquisition times can be achieved using externally calibrated parallel imaging with comparable image quality. Acquisition time reductions are particularly large for fully phase-encoded methods such as spectrally resolved fully phase-encoded three-dimensional (3D) fast spin-echo (SR-FPE), in which scan time reductions of up to 8 min were obtained. Conclusion A fully phase-encoded acquisition with broadband excitation and UTE enabled externally calibrated parallel imaging for 3D-MSI, eliminating the need for repeated calibration regions at each frequency offset. Significant reductions in acquisition time can be achieved, particularly for fully phase-encoded methods like SR-FPE. PMID:27403613

  19. From Roentgen to magnetic resonance imaging: the history of medical imaging.

    PubMed

    Scatliff, James H; Morris, Peter J

    2014-01-01

    Medical imaging has advanced in remarkable ways since the discovery of x-rays 120 years ago. Today's radiologists can image the human body in intricate detail using computed tomography, magnetic resonance imaging, positron emission tomography, ultrasound, and various other modalities. Such technology allows for improved screening, diagnosis, and monitoring of disease, but it also comes with risks. Many imaging modalities expose patients to ionizing radiation, which potentially increases their risk of developing cancer in the future, and imaging may also be associated with possible allergic reactions or risks related to the use of intravenous contrast agents. In addition, the financial costs of imaging are taxing our health care system, and incidental findings can trigger anxiety and further testing. This issue of the NCMJ addresses the pros and cons of medical imaging and discusses in detail the following uses of medical imaging: screening for breast cancer with mammography, screening for osteoporosis and monitoring of bone mineral density with dual-energy x-ray absorptiometry, screening for congenital hip dysplasia in infants with ultrasound, and evaluation of various heart conditions with cardiac imaging. Together, these articles show the challenges that must be met as we seek to harness the power of today's imaging technologies, as well as the potential benefits that can be achieved when these hurdles are overcome.

  20. Image dissemination and archiving.

    PubMed

    Robertson, Ian

    2007-08-01

    Images generated as part of the sonographic examination are an integral part of the medical record and must be retained according to local regulations. The standard medical image format, known as DICOM (Digital Imaging and COmmunications in Medicine), makes it possible for images from many different imaging modalities, including ultrasound, to be distributed via a standard internet network to distant viewing workstations and a central archive in an almost seamless fashion. The DICOM standard is a truly universal standard for the dissemination of medical images. When purchasing an ultrasound unit, the consumer should research the unit's capacity to generate images in a DICOM format, especially if one wishes interconnectivity with viewing workstations and an image archive that stores other medical images. PACS, an acronym for Picture Archive and Communication System, refers to the infrastructure that links modalities, workstations, the image archive, and the medical record information system into an integrated system, allowing for efficient electronic distribution and storage of medical images and access to medical record data.
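
    As a minimal illustration of working with DICOM files programmatically, the sketch below reads a file with the open-source pydicom library (assumed to be installed). The file name is hypothetical, and not every DICOM object carries every attribute shown.

```python
# Reading a DICOM file with pydicom; 'exam.dcm' is a hypothetical file name.
import pydicom

ds = pydicom.dcmread("exam.dcm")
print(ds.Modality, ds.StudyDate)   # common header attributes (if present)
print(ds.SOPClassUID)              # identifies the kind of image object
pixels = ds.pixel_array            # decoded image data as a NumPy array
print(pixels.shape, pixels.dtype)
```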

  1. Image2000: A Free, Innovative, Java Based Imaging Package

    NASA Technical Reports Server (NTRS)

    Pell, Nicholas; Wheeler, Phil; Cornwell, Carl; Matusow, David; Obenschain, Arthur F. (Technical Monitor)

    2001-01-01

    The National Aeronautics and Space Administration (NASA) Goddard Space Flight Center's (GSFC) Scientific and Educational Endeavors (SEE) and the Center for Image Processing in Education (CIPE) use satellite image processing as part of their science lessons developed for students and educators. The image processing products that they use as part of these lessons no longer fulfill the needs of SEE and CIPE because these products are either dependent on a particular computing platform, hard to customize and extend, or lacking in functionality. SEE and CIPE began looking for what they considered the "perfect" image processing tool: one that was platform independent, rich in functionality, and easily extended and customized for their purposes. At the request of SEE, NASA GSFC's Code 588, the Advanced Architectures and Automation Branch, developed a powerful new Java-based image processing package for these endeavors.

  2. Saliency image of feature building for image quality assessment

    NASA Astrophysics Data System (ADS)

    Ju, Xinuo; Sun, Jiyin; Wang, Peng

    2011-11-01

    The purpose and methods of image quality assessment are quite different for automatic target recognition (ATR) and for traditional applications. Local invariant feature detectors, mainly including corner detectors, blob detectors and region detectors, are widely applied for ATR. A feature-saliency model was proposed in this paper to evaluate the feasibility of ATR. The first step consisted of computing the first-order derivatives in the horizontal and vertical orientations and computing DoG maps at different scales. Next, feature saliency images were built based on the auto-correlation matrix at each scale. Then, the feature saliency images of the different scales were amalgamated. Experiments were performed on a large test set, including infrared images and optical images, and the results showed that the salient regions computed by this model were consistent with the real feature regions computed by most local invariant feature extraction algorithms.
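
    A rough reconstruction of this kind of multi-scale feature-saliency map is sketched below: Gaussian-derivative images feed a local auto-correlation (structure-tensor) response, DoG maps capture blob-like structure, and the normalized per-scale maps are summed. The constants and the exact combination rule are assumptions, not the authors' implementation.

```python
# Illustrative multi-scale feature-saliency map (not the authors' code).
import numpy as np
from scipy import ndimage

def saliency_map(image, scales=(1.0, 2.0, 4.0)):
    image = image.astype(np.float32)
    combined = np.zeros_like(image)
    for s in scales:
        ix = ndimage.gaussian_filter(image, s, order=(0, 1))   # d/dx
        iy = ndimage.gaussian_filter(image, s, order=(1, 0))   # d/dy
        # Smoothed auto-correlation (structure tensor) entries and a
        # Harris-style cornerness response.
        ixx = ndimage.gaussian_filter(ix * ix, s)
        iyy = ndimage.gaussian_filter(iy * iy, s)
        ixy = ndimage.gaussian_filter(ix * iy, s)
        corner = ixx * iyy - ixy ** 2 - 0.04 * (ixx + iyy) ** 2
        # Difference-of-Gaussians blob response at this scale.
        dog = ndimage.gaussian_filter(image, s) - ndimage.gaussian_filter(image, 1.6 * s)
        combined += (np.abs(corner) / (np.abs(corner).max() + 1e-9)
                     + np.abs(dog) / (np.abs(dog).max() + 1e-9))
    return combined / len(scales)   # amalgamated saliency image
```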

  3. [Fundus Autofluorescence Imaging].

    PubMed

    Schmitz-Valckenberg, S

    2015-09-01

    Fundus autofluorescence (FAF) imaging allows for non-invasive mapping of changes at the level of the retinal pigment epithelium/photoreceptor complex and of alterations of macular pigment distribution. This imaging method is based on the visualisation of intrinsic fluorophores and may be easily and rapidly used in routine patient care. Main applications include degenerative disorders of the outer retina such as age-related macular degeneration, hereditary and acquired retinal diseases. FAF imaging is particularly helpful for differential diagnosis, detection and extent of involved retinal areas, structural-functional correlations and monitoring of changes over time. Recent developments include - in addition to the original application of short wavelength light for excitation ("blue" FAF imaging) - the use of other wavelength ranges ("green" or "near-infrared" FAF imaging), widefield imaging for visualisation of peripheral retinal areas and quantitative FAF imaging. Georg Thieme Verlag KG Stuttgart · New York.

  4. Body Imaging

    NASA Technical Reports Server (NTRS)

    1990-01-01

    Magnetic Resonance Imaging (MRI) and Computer-aided Tomography (CT) images are often complementary. In most cases, MRI is good for viewing soft tissue but not bone, while CT images are good for bone but not always good for soft tissue discrimination. Physicians and engineers in the Department of Radiology at the University of Michigan Hospitals are developing a technique for combining the best features of MRI and CT scans to increase the accuracy of discriminating one type of body tissue from another. One of their research tools is a computer program called HICAP. The program can be used to distinguish between healthy and diseased tissue in body images.

  5. Image compression technique

    DOEpatents

    Fu, C.Y.; Petrich, L.I.

    1997-03-25

    An image is compressed by identifying edge pixels of the image; creating a filled edge array of pixels, each of the pixels in the filled edge array which corresponds to an edge pixel having a value equal to the value of a pixel of the image array selected in response to the edge pixel, and each of the pixels in the filled edge array which does not correspond to an edge pixel having a value which is a weighted average of the values of surrounding pixels in the filled edge array which do correspond to edge pixels; and subtracting the filled edge array from the image array to create a difference array. The edge file and the difference array are then separately compressed and transmitted or stored. The original image is later reconstructed by creating a preliminary array in response to the received edge file, and adding the preliminary array to the received difference array. Filling is accomplished by solving Laplace's equation using a multi-grid technique. Contour and difference file coding techniques also are described. The techniques can be used in a method for processing a plurality of images by selecting a respective compression approach for each image, compressing each of the images according to the compression approach selected, and transmitting each of the images as compressed, in correspondence with an indication of the approach selected for the image. 16 figs.
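
    The decomposition claimed in the patent can be sketched compactly: detect edge pixels, fill the remaining pixels by solving Laplace's equation with the edge pixels held fixed, and compress the edge file and the difference array separately. In the illustration below, a plain Jacobi relaxation stands in for the patent's multi-grid solver and zlib stands in for the coding techniques; the edge detector, boundary handling and the rounding of the difference array are simplifications.

```python
# Edge / Laplace-fill / difference decomposition (simplified illustration).
import numpy as np
import zlib
from scipy import ndimage

def compress_edge_fill(image, n_iter=500):
    image = image.astype(np.float32)
    grad = ndimage.sobel(image, axis=0) ** 2 + ndimage.sobel(image, axis=1) ** 2
    edges = grad > np.percentile(grad, 90)            # crude edge-pixel detector

    filled = np.zeros_like(image)
    filled[edges] = image[edges]                      # edge pixels keep image values
    for _ in range(n_iter):                           # Jacobi relaxation of Laplace's eq.
        avg = 0.25 * (np.roll(filled, 1, 0) + np.roll(filled, -1, 0)
                      + np.roll(filled, 1, 1) + np.roll(filled, -1, 1))
        filled = np.where(edges, filled, avg)         # non-edge pixels relax to the mean

    difference = image - filled                       # small-amplitude residual
    edge_file = zlib.compress(np.packbits(edges).tobytes())
    diff_file = zlib.compress(difference.astype(np.int16).tobytes())  # rounded for the demo
    return edge_file, diff_file
```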

  6. PET Imaging - from Physics to Clinical Molecular Imaging

    NASA Astrophysics Data System (ADS)

    Majewski, Stan

    2008-03-01

    From its beginnings many years ago in a few physics laboratories and its first applications as a research brain-function imager, PET has lately become a leading molecular imaging modality used in the diagnosis, staging and therapy monitoring of cancer, and it is increasingly used in the assessment of brain function (early diagnosis of Alzheimer's disease, etc.) and cardiac function. To provide an anatomic structure map and to assist with absorption correction, CT is often combined with PET in a duo system. Growing interest in the last 5-10 years in dedicated organ-specific PET imagers (breast, prostate, brain, etc.) again presents an opportunity for the particle physics instrumentation community to contribute to the important field of medical imaging. In addition to the bulky standard ring structures, compact, economical and high-performance mobile imagers are being proposed and built. The latest development in standard PET imaging is the introduction of the well-known time-of-flight (TOF) concept, enabling clearer tomographic pictures of the patient's organs. The development and availability of novel photodetectors such as silicon PMTs, which are immune to magnetic fields, offer an exciting opportunity to use PET in conjunction with MRI and fMRI. As before with avalanche photodiodes, the particle physics community plays a leading role in developing these devices. The presentation will mostly focus on present and future opportunities for better PET designs based on new technologies and methods: new scintillators, photodetectors, readout, and software.

  7. Fast Image Restoration for Spatially Varying Defocus Blur of Imaging Sensor

    PubMed Central

    Cheong, Hejin; Chae, Eunjung; Lee, Eunsung; Jo, Gwanghyun; Paik, Joonki

    2015-01-01

    This paper presents a fast adaptive image restoration method for removing spatially varying out-of-focus blur of a general imaging sensor. After estimating the parameters of the space-variant point-spread-function (PSF) using the derivative in each uniformly blurred region, the proposed method performs spatially adaptive image restoration by selecting the optimal restoration filter according to the estimated blur parameters. Each restoration filter is implemented as a combination of multiple FIR filters, which guarantees fast image restoration without the need for iterative or recursive processing. Experimental results show that the proposed method outperforms existing space-invariant restoration methods in terms of both objective and subjective performance measures. The proposed algorithm can be employed in a wide range of image restoration applications, such as mobile imaging devices, robot vision, and satellite image processing. PMID:25569760
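
    The filter-bank idea (precomputed FIR restoration filters selected per region from estimated blur parameters) can be sketched as follows. The unsharp-mask kernels, block size, and the crude variance-of-Laplacian blur estimate are illustrative assumptions rather than the paper's actual estimator or filter design.

```python
# Block-wise selection from a bank of precomputed FIR sharpening kernels.
import numpy as np
from scipy import ndimage

def unsharp_kernel(sigma, size=9, amount=1.0):
    delta = np.zeros((size, size))
    delta[size // 2, size // 2] = 1.0
    return (1 + amount) * delta - amount * ndimage.gaussian_filter(delta, sigma)

BANK = {0: None, 1: unsharp_kernel(1.0), 2: unsharp_kernel(2.0)}  # stronger kernel for blurrier blocks

def restore(image, block=64):
    out = image.astype(np.float32).copy()
    for y in range(0, image.shape[0], block):
        for x in range(0, image.shape[1], block):
            patch = image[y:y + block, x:x + block].astype(np.float32)
            sharpness = ndimage.laplace(patch).var()      # crude focus measure
            level = 0 if sharpness > 50 else (1 if sharpness > 10 else 2)
            kernel = BANK[level]
            if kernel is not None:
                out[y:y + block, x:x + block] = ndimage.convolve(patch, kernel, mode="nearest")
    return np.clip(out, 0, 255)
```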

  8. Clock Scan Protocol for Image Analysis: ImageJ Plugins.

    PubMed

    Dobretsov, Maxim; Petkau, Georg; Hayar, Abdallah; Petkau, Eugen

    2017-06-19

    The clock scan protocol for image analysis is an efficient tool to quantify the average pixel intensity within, at the border of, and outside (background) a closed or segmented convex-shaped region of interest, leading to the generation of an averaged integral radial pixel-intensity profile. This protocol was originally developed in 2006 as a Visual Basic 6 script, but as such it had limited distribution. To address this problem and to join similar recent efforts by others, we converted the original clock scan protocol code into two Java-based plugins compatible with NIH-sponsored and freely available image analysis programs like ImageJ or Fiji ImageJ. Furthermore, these plugins have several new functions, further expanding the range of capabilities of the original protocol, such as analysis of multiple regions of interest and image stacks. The latter feature of the program is especially useful in applications in which it is important to determine changes related to time and location. Thus, the clock scan analysis of stacks of biological images may potentially be applied to the spreading of Na+ or Ca++ within a single cell, as well as to the analysis of spreading activity (e.g., Ca++ waves) in populations of synaptically connected or gap-junction-coupled cells. Here, we describe these new clock scan plugins and show some examples of their applications in image analysis.
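
    In the spirit of the clock-scan protocol (though not the plugin's actual code), an averaged radial intensity profile for a circular region of interest can be computed by binning pixel intensities by radius normalized to the ROI border, so that the profile spans the interior, the border (r = 1) and the surrounding background.

```python
# Averaged radial intensity profile for a circular ROI (illustrative sketch).
import numpy as np

def radial_profile(image, center, roi_radius, n_bins=50, max_rel_radius=1.5):
    yy, xx = np.indices(image.shape)
    r = np.hypot(yy - center[0], xx - center[1]) / roi_radius   # 1.0 at the ROI border
    bins = np.linspace(0, max_rel_radius, n_bins + 1)
    idx = np.digitize(r.ravel(), bins)
    vals = image.ravel().astype(np.float64)
    profile = np.array([vals[idx == i].mean() if np.any(idx == i) else np.nan
                        for i in range(1, n_bins + 1)])
    return 0.5 * (bins[:-1] + bins[1:]), profile   # bin centres, mean intensities
```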

  9. Digital camera with apparatus for authentication of images produced from an image file

    NASA Technical Reports Server (NTRS)

    Friedman, Gary L. (Inventor)

    1993-01-01

    A digital camera equipped with a processor for authentication of images produced from an image file taken by the digital camera is provided. The digital camera processor has embedded therein a private key unique to it, and the camera housing has a public key that is so uniquely based upon the private key that digital data encrypted with the private key by the processor may be decrypted using the public key. The digital camera processor comprises means for calculating a hash of the image file using a predetermined algorithm, and second means for encrypting the image hash with the private key, thereby producing a digital signature. The image file and the digital signature are stored in suitable recording means so they will be available together. Apparatus for authenticating at any time the image file as being free of any alteration uses the public key for decrypting the digital signature, thereby deriving a secure image hash identical to the image hash produced by the digital camera and used to produce the digital signature. The apparatus calculates from the image file an image hash using the same algorithm as before. By comparing this last image hash with the secure image hash, authenticity of the image file is determined if they match, since even a one-bit change in the image file will cause the computed image hash to be totally different from the secure hash.
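
    The sign-then-verify flow of the patent can be sketched with the open-source Python 'cryptography' package (assumed available). In the camera the private key is embedded in hardware; here a freshly generated RSA key pair and a hypothetical image file simply illustrate the mechanism.

```python
# Hash-sign-verify sketch with a fresh RSA key pair and a hypothetical file.
import hashlib
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa, utils

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)

image_bytes = open("photo.raw", "rb").read()           # hypothetical image file
image_hash = hashlib.sha256(image_bytes).digest()      # the "image hash"

# Inside the camera: encrypt (sign) the precomputed hash with the private key.
signature = private_key.sign(image_hash, pss, utils.Prehashed(hashes.SHA256()))

# Later, with the public key: recompute the hash and verify the signature.
# verify() raises InvalidSignature if the file changed by even a single bit.
recomputed = hashlib.sha256(open("photo.raw", "rb").read()).digest()
public_key.verify(signature, recomputed, pss, utils.Prehashed(hashes.SHA256()))
```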

  10. Quantum Image Steganography and Steganalysis Based On LSQu-Blocks Image Information Concealing Algorithm

    NASA Astrophysics Data System (ADS)

    A. AL-Salhi, Yahya E.; Lu, Songfeng

    2016-08-01

    Quantum steganography can solve some problems that are considered inefficient in image information concealing. Research on quantum image information concealing has been widely pursued in recent years. Quantum image information concealing can be categorized into quantum image digital blocking, quantum image steganography, anonymity and other branches. Least significant bit (LSB) information concealing plays a vital role in the classical world because many image information concealing algorithms are designed based on it. Firstly, based on the novel enhanced quantum representation (NEQR) and the clustering of uniform image blocks, a least significant Qu-block (LSQB) information concealing algorithm for quantum image steganography is presented. Secondly, a clustering algorithm is proposed to optimize the concealment of important data. Finally, we use the Con-Steg algorithm to conceal the clustered image blocks. Information concealing located in the Fourier domain of an image can achieve the security of image information; thus we further discuss a Fourier-domain LSQu-block information concealing algorithm for quantum images based on the quantum Fourier transform. In our algorithms, the corresponding unitary transformations are designed to realize the aim of concealing the secret information in the least significant Qu-block representing the color of the quantum cover image. Finally, the procedures for extracting the secret information are illustrated. The quantum image LSQu-block information concealing algorithm can be applied in many fields according to different needs.
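
    For orientation, the classical least-significant-bit embedding that the LSQu-block approach generalizes is sketched below; the quantum algorithm itself operates on NEQR states and unitary transformations rather than NumPy arrays.

```python
# Classical LSB embedding/extraction on a NumPy image.
import numpy as np

def lsb_embed(cover, secret_bits):
    """cover: uint8 image; secret_bits: 1-D array of 0/1 values."""
    flat = cover.flatten()                                   # copy of the cover
    flat[: secret_bits.size] = (flat[: secret_bits.size] & 0xFE) | secret_bits
    return flat.reshape(cover.shape)

def lsb_extract(stego, n_bits):
    return stego.flatten()[:n_bits] & 1

cover = np.random.default_rng(1).integers(0, 256, size=(64, 64), dtype=np.uint8)
secret = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)
stego = lsb_embed(cover, secret)
assert np.array_equal(lsb_extract(stego, secret.size), secret)
```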

  11. Imaging of the hip joint. Computed tomography versus magnetic resonance imaging

    NASA Technical Reports Server (NTRS)

    Lang, P.; Genant, H. K.; Jergesen, H. E.; Murray, W. R.

    1992-01-01

    The authors reviewed the applications and limitations of computed tomography (CT) and magnetic resonance (MR) imaging in the assessment of the most common hip disorders. Magnetic resonance imaging is the most sensitive technique in detecting osteonecrosis of the femoral head. Magnetic resonance reflects the histologic changes associated with osteonecrosis very well, which may ultimately help to improve staging. Computed tomography can more accurately identify subchondral fractures than MR imaging and thus remains important for staging. In congenital dysplasia of the hip, the position of the nonossified femoral head in children less than six months of age can only be inferred by indirect signs on CT. Magnetic resonance imaging demonstrates the cartilaginous femoral head directly without ionizing radiation. Computed tomography remains the imaging modality of choice for evaluating fractures of the hip joint. In some patients, MR imaging demonstrates the fracture even when it is not apparent on radiography. In neoplasm, CT provides better assessment of calcification, ossification, and periosteal reaction than MR imaging. Magnetic resonance imaging, however, represents the most accurate imaging modality for evaluating intramedullary and soft-tissue extent of the tumor and identifying involvement of neurovascular bundles. Magnetic resonance imaging can also be used to monitor response to chemotherapy. In osteoarthrosis and rheumatoid arthritis of the hip, both CT and MR provide more detailed assessment of the severity of disease than conventional radiography because of their tomographic nature. Magnetic resonance imaging is unique in evaluating cartilage degeneration and loss, and in demonstrating soft-tissue alterations such as inflammatory synovial proliferation.

  12. A comparison of visual statistics for the image enhancement of FORESITE aerial images with those of major image classes

    NASA Astrophysics Data System (ADS)

    Jobson, Daniel J.; Rahman, Zia-ur; Woodell, Glenn A.; Hines, Glenn D.

    2006-05-01

    Aerial images from the Follow-On Radar, Enhanced and Synthetic Vision Systems Integration Technology Evaluation (FORESITE) flight tests with the NASA Langley Research Center's research Boeing 757 were acquired during severe haze and haze/mixed clouds visibility conditions. These images were enhanced using the Visual Servo (VS) process that makes use of the Multiscale Retinex. The images were then quantified with visual quality metrics used internally within the VS. One of these metrics, the Visual Contrast Measure, has been computed for hundreds of FORESITE images, and for major classes of imaging: terrestrial (consumer), orbital Earth observations, orbital Mars surface imaging, NOAA aerial photographs, and underwater imaging. The metric quantifies both the degree of visual impairment of the original, un-enhanced images as well as the degree of visibility improvement achieved by the enhancement process. The large aggregate data exhibits trends relating to degree of atmospheric visibility attenuation, and its impact on the limits of enhancement performance for the various image classes. Overall results support the idea that in most cases that do not involve extreme reduction in visibility, large gains in visual contrast are routinely achieved by VS processing. Additionally, for very poor visibility imaging, lesser, but still substantial, gains in visual contrast are also routinely achieved. Further, the data suggest that these visual quality metrics can be used as external standalone metrics for establishing performance parameters.

  13. A Comparison of Visual Statistics for the Image Enhancement of FORESITE Aerial Images with Those of Major Image Classes

    NASA Technical Reports Server (NTRS)

    Johnson, Daniel J.; Rahman, Zia-ur; Woodell, Glenn A.; Hines, Glenn D.

    2006-01-01

    Aerial images from the Follow-On Radar, Enhanced and Synthetic Vision Systems Integration Technology Evaluation (FORESITE) flight tests with the NASA Langley Research Center's research Boeing 757 were acquired during severe haze and haze/mixed clouds visibility conditions. These images were enhanced using the Visual Servo (VS) process that makes use of the Multiscale Retinex. The images were then quantified with visual quality metrics used internally with the VS. One of these metrics, the Visual Contrast Measure, has been computed for hundreds of FORESITE images, and for major classes of imaging--terrestrial (consumer), orbital Earth observations, orbital Mars surface imaging, NOAA aerial photographs, and underwater imaging. The metric quantifies both the degree of visual impairment of the original, un-enhanced images as well as the degree of visibility improvement achieved by the enhancement process. The large aggregate data exhibits trends relating to degree of atmospheric visibility attenuation, and its impact on limits of enhancement performance for the various image classes. Overall results support the idea that in most cases that do not involve extreme reduction in visibility, large gains in visual contrast are routinely achieved by VS processing. Additionally, for very poor visibility imaging, lesser, but still substantial, gains in visual contrast are also routinely achieved. Further, the data suggest that these visual quality metrics can be used as external standalone metrics for establishing performance parameters.

  14. Improving image reconstruction of bioluminescence imaging using a priori information from ultrasound imaging (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Jayet, Baptiste; Ahmad, Junaid; Taylor, Shelley L.; Hill, Philip J.; Dehghani, Hamid; Morgan, Stephen P.

    2017-03-01

    Bioluminescence imaging (BLI) is a commonly used imaging modality in biology to study cancer in vivo in small animals. Images are generated using a camera to map the optical fluence emerging from the studied animal, then a numerical reconstruction algorithm is used to locate the sources and estimate their sizes. However, due to the strong light scattering properties of biological tissues, the resolution is very limited (around a few millimetres). Therefore obtaining accurate information about the pathology is complicated. We propose a combined ultrasound/optics approach to improve accuracy of these techniques. In addition to the BLI data, an ultrasound probe driven by a scanner is used for two main objectives. First, to obtain a pure acoustic image, which provides structural information of the sample. And second, to alter the light emission by the bioluminescent sources embedded inside the sample, which is monitored using a high speed optical detector (e.g. photomultiplier tube). We will show that this last measurement, used in conjunction with the ultrasound data, can provide accurate localisation of the bioluminescent sources. This can be used as a priori information by the numerical reconstruction algorithm, greatly increasing the accuracy of the BLI image reconstruction as compared to the image generated using only BLI data.

  15. SlideJ: An ImageJ plugin for automated processing of whole slide images.

    PubMed

    Della Mea, Vincenzo; Baroni, Giulia L; Pilutti, David; Di Loreto, Carla

    2017-01-01

    The digital slide, or Whole Slide Image, is a digital image, acquired with specific scanners, that represents a complete tissue sample or cytological specimen at the microscopic level. While Whole Slide Image analysis is recognized among the most interesting opportunities, the typical size of such images (up to gigapixels) can be very demanding in terms of memory requirements. Thus, while algorithms and tools for processing and analysis of single microscopic field images are available, the size of Whole Slide Images makes the direct use of such tools prohibitive or impossible. In this work a plugin for ImageJ, named SlideJ, is proposed with the objective of seamlessly extending the application of image analysis algorithms implemented in ImageJ for single microscopic field images to whole digital slide analysis. The plugin has been complemented by examples of macros in the ImageJ scripting language to demonstrate its use in concrete situations.
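
    The tiling pattern that a plugin like SlideJ automates can be sketched in a few lines: the slide is processed one tile at a time so that a single-field algorithm never has to hold the gigapixel image in memory. The tile size and the placeholder analysis function below are illustrative assumptions.

```python
# Tile-wise processing of a very large image (illustrative tile size).
import numpy as np

def analyze_tile(tile):
    return float(tile.mean())          # stand-in for any single-field analysis

def process_in_tiles(slide, tile=2048):
    results = []
    for y in range(0, slide.shape[0], tile):
        for x in range(0, slide.shape[1], tile):
            results.append(((y, x), analyze_tile(slide[y:y + tile, x:x + tile])))
    return results
```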

  16. Multiple enface image averaging for enhanced optical coherence tomography angiography imaging.

    PubMed

    Uji, Akihito; Balasubramanian, Siva; Lei, Jianqin; Baghdasaryan, Elmira; Al-Sheikh, Mayss; Borrelli, Enrico; Sadda, SriniVas R

    2018-05-31

    To investigate the effect of multiple enface image averaging on the image quality of optical coherence tomography angiography (OCTA). Twenty-one normal volunteers were enrolled in this study. For each subject, one eye was imaged with the 3 × 3 mm scan protocol, and the other eye was imaged with the 6 × 6 mm scan protocol centred on the fovea using the ZEISS Angioplex™ spectral-domain OCTA device. Eyes were repeatedly imaged to obtain nine OCTA cube scan sets, and nine superficial capillary plexus (SCP) and deep capillary plexus (DCP) enface images were individually averaged after registration. Eighteen eyes with a 3 × 3 mm scan field and 14 eyes with a 6 × 6 mm scan field were studied. Averaged images showed more continuous vessels and less background noise in both the SCP and the DCP as the number of frames used for averaging increased, with both the 3 × 3 and 6 × 6 mm scan protocols. The intensity histogram of the vessels changed dramatically after averaging. Contrast-to-noise ratio (CNR) and subjectively assessed image quality scores also increased as the number of frames used for averaging increased in all image types. However, the additional benefit in quality diminished when averaging more than five frames. Averaging only three frames achieved a significant improvement in CNR and in the scores assigned by certified graders. Use of multiple image averaging for OCTA enface images was found to be both objectively and subjectively effective for enhancing image quality. These findings may be of value for developing optimal OCTA imaging protocols for future studies. © 2018 Acta Ophthalmologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.
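
    A hedged sketch of the registration-and-averaging step, together with a simple contrast-to-noise ratio, is given below; it illustrates the principle only and is not the study's processing pipeline. Phase cross-correlation from scikit-image is assumed for the registration, and the vessel and background masks are supplied by the user.

```python
# Register-and-average enface frames, then compute a simple CNR.
import numpy as np
from scipy import ndimage
from skimage.registration import phase_cross_correlation

def average_enface(frames):
    ref = frames[0].astype(np.float32)
    acc = ref.copy()
    for frame in frames[1:]:
        shift, _, _ = phase_cross_correlation(ref, frame.astype(np.float32))
        acc += ndimage.shift(frame.astype(np.float32), shift, order=1)
    return acc / len(frames)

def cnr(image, vessel_mask, background_mask):
    vessels, background = image[vessel_mask], image[background_mask]
    return abs(vessels.mean() - background.mean()) / background.std()
```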

  17. Image compression/decompression based on mathematical transform, reduction/expansion, and image sharpening

    DOEpatents

    Fu, Chi-Yung; Petrich, Loren I.

    1997-01-01

    An image represented in a first image array of pixels is first decimated in two dimensions before being compressed by a predefined compression algorithm such as JPEG. Another possible predefined compression algorithm can involve a wavelet technique. The compressed, reduced image is then transmitted over the limited bandwidth transmission medium, and the transmitted image is decompressed using an algorithm which is an inverse of the predefined compression algorithm (such as reverse JPEG). The decompressed, reduced image is then interpolated back to its original array size. Edges (contours) in the image are then sharpened to enhance the perceptual quality of the reconstructed image. Specific sharpening techniques are described.
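
    The decimate, compress, decompress, interpolate and sharpen chain described in the patent can be sketched with Pillow (assumed installed); the scale factor, JPEG quality and unsharp-mask parameters below are arbitrary demo choices rather than values from the patent.

```python
# Decimate -> JPEG -> decompress -> interpolate -> sharpen, with Pillow.
from io import BytesIO
from PIL import Image, ImageFilter

def compress_decompress(path, factor=2, quality=75):
    img = Image.open(path).convert("L")
    w, h = img.size

    small = img.resize((w // factor, h // factor), Image.LANCZOS)   # decimation
    buf = BytesIO()
    small.save(buf, format="JPEG", quality=quality)                 # predefined codec
    sent_bytes = buf.getvalue()                                     # what gets transmitted

    received = Image.open(BytesIO(sent_bytes))                      # inverse codec
    restored = received.resize((w, h), Image.BICUBIC)               # interpolate back up
    sharpened = restored.filter(ImageFilter.UnsharpMask(radius=2, percent=120))
    return sharpened, len(sent_bytes)
```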

  18. Image compression/decompression based on mathematical transform, reduction/expansion, and image sharpening

    DOEpatents

    Fu, C.Y.; Petrich, L.I.

    1997-12-30

    An image represented in a first image array of pixels is first decimated in two dimensions before being compressed by a predefined compression algorithm such as JPEG. Another possible predefined compression algorithm can involve a wavelet technique. The compressed, reduced image is then transmitted over the limited bandwidth transmission medium, and the transmitted image is decompressed using an algorithm which is an inverse of the predefined compression algorithm (such as reverse JPEG). The decompressed, reduced image is then interpolated back to its original array size. Edges (contours) in the image are then sharpened to enhance the perceptual quality of the reconstructed image. Specific sharpening techniques are described. 22 figs.

  19. BMC Ecology image competition 2014: the winning images.

    PubMed

    Harold, Simon; Henderson, Caspar; Baguette, Michel; Bonsall, Michael B; Hughes, David; Settele, Josef

    2014-08-29

    BMC Ecology showcases the winning entries from its second Ecology Image Competition. More than 300 individual images were submitted from an international array of research scientists, depicting life on every continent on earth. The journal's Editorial Board and guest judge Caspar Henderson outline why their winning selections demonstrated high levels of technical skill and aesthetic sense in depicting the science of ecology, and we also highlight a small selection of highly commended images that we simply couldn't let you miss out on.

  20. FIRST IMAGES FROM THE FOCUSING OPTICS X-RAY SOLAR IMAGER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krucker, Säm; Glesener, Lindsay; Turin, Paul

    2014-10-01

    The Focusing Optics X-ray Solar Imager (FOXSI) sounding rocket payload flew for the first time on 2012 November 2, producing the first focused images of the Sun above 5 keV. To enable hard X-ray (HXR) imaging spectroscopy via direct focusing, FOXSI makes use of grazing-incidence replicated optics combined with fine-pitch solid-state detectors. On its first flight, FOXSI observed several targets that included active regions, the quiet Sun, and a GOES-class B2.7 microflare. This Letter provides an introduction to the FOXSI instrument and presents its first solar image. These data demonstrate the superiority in sensitivity and dynamic range that is achievable with a direct HXR imager with respect to previous, indirect imaging methods, and illustrate the technological readiness for a spaceborne mission to observe HXRs from solar flares via direct focusing optics.

  1. Slide Set: Reproducible image analysis and batch processing with ImageJ.

    PubMed

    Nanes, Benjamin A

    2015-11-01

    Most imaging studies in the biological sciences rely on analyses that are relatively simple. However, manual repetition of analysis tasks across multiple regions in many images can complicate even the simplest analysis, making record keeping difficult, increasing the potential for error, and limiting reproducibility. While fully automated solutions are necessary for very large data sets, they are sometimes impractical for the small- and medium-sized data sets common in biology. Here we present the Slide Set plugin for ImageJ, which provides a framework for reproducible image analysis and batch processing. Slide Set organizes data into tables, associating image files with regions of interest and other relevant information. Analysis commands are automatically repeated over each image in the data set, and multiple commands can be chained together for more complex analysis tasks. All analysis parameters are saved, ensuring transparency and reproducibility. Slide Set includes a variety of built-in analysis commands and can be easily extended to automate other ImageJ plugins, reducing the manual repetition of image analysis without the set-up effort or programming expertise required for a fully automated solution.

  2. Automatic medical image annotation and keyword-based image retrieval using relevance feedback.

    PubMed

    Ko, Byoung Chul; Lee, JiHyeon; Nam, Jae-Yeal

    2012-08-01

    This paper presents a novel multiple-keyword annotation method for medical images, keyword-based medical image retrieval, and a relevance feedback method for enhancing image retrieval performance. For semantic keyword annotation, this study proposes a novel medical image classification method combining local wavelet-based center-symmetric local binary patterns with random forests. For keyword-based image retrieval, our retrieval system uses the confidence score that is assigned to each annotated keyword by combining the probabilities of the random forests with a predefined body-relation graph. To overcome the limitations of keyword-based image retrieval, we combine our image retrieval system with a relevance feedback mechanism based on visual features and a pattern classifier. Compared with other annotation and relevance feedback algorithms, the proposed method shows both improved annotation performance and accurate retrieval results.
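
    The keyword-classification stage can be sketched with scikit-image and scikit-learn: texture histograms feed a random forest whose class probabilities act as per-keyword confidence scores. The standard local binary pattern stands in for the paper's wavelet-based center-symmetric variant, and the training data would come from a labelled image archive.

```python
# LBP histograms + random forest as a per-keyword confidence scorer.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.ensemble import RandomForestClassifier

def lbp_histogram(image, p=8, r=1.0):
    lbp = local_binary_pattern(image, P=p, R=r, method="uniform")
    hist, _ = np.histogram(lbp, bins=p + 2, range=(0, p + 2), density=True)
    return hist

def train_annotator(images, keywords):
    X = np.array([lbp_histogram(im) for im in images])
    return RandomForestClassifier(n_estimators=200, random_state=0).fit(X, keywords)

def annotate(clf, image):
    proba = clf.predict_proba([lbp_histogram(image)])[0]   # confidence per keyword
    return dict(zip(clf.classes_, proba))
```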

  3. Internet Color Imaging

    DTIC Science & Technology

    2000-07-01

    Defense Technical Information Center Compilation Part Notice ADP011348 (unclassified; approved for public release). TITLE: Internet Color Imaging. Hsien-Che Lee, Imaging Science and Technology Laboratory, Eastman Kodak Company, Rochester, New York 14650-1816, USA. ABSTRACT: The sharing and exchange of color images over the Internet pose very challenging problems to color science and technology. Emerging color standards

  4. Galaxy of Images

    Science.gov Websites

    This site has moved to the new Image Gallery site; the original page offered basic keyword search and taxonomic (scientific) keyword search of the Galaxy of Images collection.

  5. Fourier domain image fusion for differential X-ray phase-contrast breast imaging.

    PubMed

    Coello, Eduardo; Sperl, Jonathan I; Bequé, Dirk; Benz, Tobias; Scherer, Kai; Herzen, Julia; Sztrókay-Gaul, Anikó; Hellerhoff, Karin; Pfeiffer, Franz; Cozzini, Cristina; Grandl, Susanne

    2017-04-01

    X-Ray Phase-Contrast (XPC) imaging is a novel technology with a great potential for applications in clinical practice, with breast imaging being of special interest. This work introduces an intuitive methodology to combine and visualize relevant diagnostic features, present in the X-ray attenuation, phase shift and scattering information retrieved in XPC imaging, using a Fourier domain fusion algorithm. The method makes it possible to present complementary information from the three acquired signals in a single image, minimizing the noise component and maintaining visual similarity to a conventional X-ray image, but with noticeable enhancement in diagnostic features, details and resolution. Radiologists experienced in mammography applied the image fusion method to XPC measurements of mastectomy samples and evaluated the feature content of each input and the fused image. This assessment validated that the combination of all the relevant diagnostic features, contained in the XPC images, was present in the fused image as well. Copyright © 2017 Elsevier B.V. All rights reserved.
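
    The abstract does not spell out the fusion weights, so the following Python sketch only illustrates the general Fourier-domain idea under an assumed scheme: keep low spatial frequencies from the attenuation image, which preserves the familiar mammographic appearance, and take high frequencies from the phase and dark-field signals, which carry fine detail. All three inputs are assumed to be registered arrays of equal size.

```python
import numpy as np

def fourier_fuse(attenuation, phase, dark_field, cutoff=0.1):
    """Fuse three registered XPC signals in the Fourier domain.

    Assumed scheme: low spatial frequencies come from the attenuation image,
    high frequencies from the average of the phase and dark-field images.
    """
    shape = attenuation.shape
    fy = np.fft.fftfreq(shape[0])[:, None]
    fx = np.fft.fftfreq(shape[1])[None, :]
    radius = np.sqrt(fx ** 2 + fy ** 2)
    low = (radius <= cutoff).astype(float)   # low-pass mask
    high = 1.0 - low                         # complementary high-pass mask

    F_att = np.fft.fft2(attenuation)
    F_hi = 0.5 * (np.fft.fft2(phase) + np.fft.fft2(dark_field))
    return np.fft.ifft2(low * F_att + high * F_hi).real
```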

  6. ImageJ-MATLAB: a bidirectional framework for scientific image analysis interoperability.

    PubMed

    Hiner, Mark C; Rueden, Curtis T; Eliceiri, Kevin W

    2017-02-15

    ImageJ-MATLAB is a lightweight Java library facilitating bi-directional interoperability between MATLAB and ImageJ. By defining a standard for translation between matrix and image data structures, researchers are empowered to select the best tool for their image-analysis tasks. Freely available extension to ImageJ2 (http://imagej.net/Downloads). Installation and use instructions available at http://imagej.net/MATLAB_Scripting. Tested with ImageJ 2.0.0-rc-54, Java 1.8.0_66 and MATLAB R2015b. eliceiri@wisc.edu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com

  7. A novel augmented reality system of image projection for image-guided neurosurgery.

    PubMed

    Mahvash, Mehran; Besharati Tabrizi, Leila

    2013-05-01

    Augmented reality systems combine virtual images with a real environment. The objective was to design and develop an augmented reality system for image-guided surgery of brain tumors using image projection. A virtual image was created in two ways: (1) an MRI-based 3D model of the head matched with the segmented lesion of a patient using MRIcro software (version 1.4, freeware, Chris Rorden) and (2) a digital photograph-based model in which the tumor region was drawn using image-editing software. The real environment was simulated with a head phantom. For direct projection of the virtual image onto the head phantom, a commercially available video projector (PicoPix 1020, Philips) was used. The position and size of the virtual image were adjusted manually for registration, which was performed using anatomical landmarks and fiducial marker positions. An augmented reality system for image-guided neurosurgery using direct image projection has been designed successfully and implemented in a first evaluation with promising results. The virtual image could be projected onto the head phantom and was registered manually. Accurate registration (mean projection error: 0.3 mm) was performed using anatomical landmarks and fiducial marker positions. The direct projection of a virtual image onto the patient's head, skull, or brain surface in real time is an augmented reality approach that can be used for image-guided neurosurgery. In this paper, the first evaluation of the system is presented. The encouraging first visualization results indicate that the presented augmented reality system might be an important enhancement of image-guided neurosurgery.

  8. Split image optical display

    DOEpatents

    Veligdan, James T.

    2005-05-31

    A video image is displayed from an optical panel by splitting the image into a plurality of image components, and then projecting the image components through corresponding portions of the panel to collectively form the image. Depth of the display is correspondingly reduced.

  9. Split image optical display

    DOEpatents

    Veligdan, James T [Manorville, NY

    2007-05-29

    A video image is displayed from an optical panel by splitting the image into a plurality of image components, and then projecting the image components through corresponding portions of the panel to collectively form the image. Depth of the display is correspondingly reduced.

  10. Comparative Analysis of Reconstructed Image Quality in a Simulated Chromotomographic Imager

    DTIC Science & Technology

    2014-03-01

    COMPARATIVE ANALYSIS OF RECONSTRUCTED IMAGE QUALITY IN A SIMULATED CHROMOTOMOGRAPHIC IMAGER (thesis). Reconstructed image quality is highly dependent on the initial target hypercube, so a total of 54 initial target hypercubes were compared for a variety of scenes; one example uses five basic images, a backlit bar chart with random intensity, 100 nm separation.

  11. Workshop on Body Image: Creating or Reinventing a Positive Body Image.

    ERIC Educational Resources Information Center

    Ahmed, Christine

    This paper examines the culturization of body image and the impact of body image on women and men, noting that the strict definition of body size has made many women and men dissatisfied with their bodies. The first section defines body image and culturization, explaining how the current media images put tremendous pressure on men and women that…

  12. Breast imaging with the SoftVue imaging system: first results

    NASA Astrophysics Data System (ADS)

    Duric, Neb; Littrup, Peter; Schmidt, Steven; Li, Cuiping; Roy, Olivier; Bey-Knight, Lisa; Janer, Roman; Kunz, Dave; Chen, Xiaoyang; Goll, Jeffrey; Wallen, Andrea; Zafar, Fouzaan; Allada, Veerendra; West, Erik; Jovanovic, Ivana; Li, Kuo; Greenway, William

    2013-03-01

    For women with dense breast tissue, who are at much higher risk for developing breast cancer, the performance of mammography is at its worst. Consequently, many early cancers go undetected when they are the most treatable. Improved cancer detection for women with dense breasts would decrease the proportion of breast cancers diagnosed at later stages, which would significantly lower the mortality rate. The emergence of whole breast ultrasound provides good performance for women with dense breast tissue, and may eliminate the current trade-off between the cost effectiveness of mammography and the imaging performance of more expensive systems such as magnetic resonance imaging. We report on the performance of SoftVue, a whole breast ultrasound imaging system, based on the principles of ultrasound tomography. SoftVue was developed by Delphinus Medical Technologies and builds on an early prototype developed at the Karmanos Cancer Institute. We present results from preliminary testing of the SoftVue system, performed both in the lab and in the clinic. These tests aimed to validate the expected improvements in image performance. Initial qualitative analyses showed major improvements in image quality, thereby validating the new imaging system design. Specifically, SoftVue's imaging performance was consistent across all breast density categories and had much better resolution and contrast. The implications of these results for clinical breast imaging are discussed and future work is described.

  13. Images.

    ERIC Educational Resources Information Center

    Christensen, Rosemary Ackley

    The packet of visual images, designed by Ojibwe artist Steven Premo, is intended to provide teachers of Indian students with contemporary, positive, non-stereotypical images of native cultures, particularly Indian women, that can be used in all classes for any aged student to assist in increasing the self-esteem of Indian children and help raise…

  14. Polarization imaging apparatus

    NASA Technical Reports Server (NTRS)

    Zou, Yingyin Kevin (Inventor); Chen, Qiushui (Inventor); Zhao, Hongzhi (Inventor)

    2010-01-01

    A polarization imaging apparatus measures the Stokes image of a sample. The apparatus consists of an optical lens set 11, a linear polarizer 14 with its optical axis 18, a first variable phase retarder 12 with its optical axis 16 aligned 22.5° to axis 18, a second variable phase retarder 13 with its optical axis 17 aligned 45° to axis 18, an imaging sensor 15 for sensing the intensity images of the sample, a controller 101 and a computer 102. The two variable phase retarders (VPRs) 12 and 13 are controlled independently by the computer 102 through a controller unit 101, which generates a sequence of voltages to control the phase retardations of VPRs 12 and 13. A set of four intensity images, I0, I1, I2 and I3, of the sample is captured by imaging sensor 15 when the phase retardations of VPRs 12 and 13 are set at (0,0), (π,0), (π,π) and (π/2,π), respectively. The four Stokes components of a Stokes image, S0, S1, S2 and S3, are then calculated from the four intensity images.
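
    The exact relation between the four intensity frames and the Stokes components depends on the retarder settings and a calibrated instrument matrix, which the patent text does not give in closed form here. A generic Python sketch of the reconstruction step is shown below; the 4x4 instrument matrix is assumed to come from calibration.

```python
import numpy as np

def stokes_from_intensities(frames, instrument_matrix):
    """Recover a Stokes image from four intensity frames.

    frames: array of shape (4, H, W) captured at the four retarder settings.
    instrument_matrix: assumed calibrated 4x4 matrix A with I = A @ S per pixel
    (its entries depend on the actual retardations and polarizer angle).
    """
    A_inv = np.linalg.inv(instrument_matrix)
    h, w = frames.shape[1:]
    I = frames.reshape(4, -1)        # flatten pixels
    S = A_inv @ I                    # per-pixel linear inversion
    return S.reshape(4, h, w)        # S0, S1, S2, S3 images
```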

  15. Low dose CT image restoration using a database of image patches

    NASA Astrophysics Data System (ADS)

    Ha, Sungsoo; Mueller, Klaus

    2015-01-01

    Reducing the radiation dose in CT imaging has become an active research topic and many solutions have been proposed to remove the significant noise and streak artifacts in the reconstructed images. Most of these methods operate within the domain of the image that is subject to restoration. This, however, poses limitations on the extent of filtering possible. We advocate to take into consideration the vast body of external knowledge that exists in the domain of already acquired medical CT images, since after all, this is what radiologists do when they examine these low quality images. We can incorporate this knowledge by creating a database of prior scans, either of the same patient or a diverse corpus of different patients, to assist in the restoration process. Our paper follows up on our previous work that used a database of images. Using images, however, is challenging since it requires tedious and error prone registration and alignment. Our new method eliminates these problems by storing a diverse set of small image patches in conjunction with a localized similarity matching scheme. We also empirically show that it is sufficient to store these patches without anatomical tags since their statistics are sufficiently strong to yield good similarity matches from the database and as a direct effect, produce image restorations of high quality. A final experiment demonstrates that our global database approach can recover image features that are difficult to preserve with conventional denoising approaches.
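
    A minimal Python sketch of the patch-database idea follows. It is not the authors' implementation: patches from prior scans are stored together with their image coordinates, and each noisy patch is replaced by the average of its nearest database patches, with the coordinates acting as the localized similarity constraint mentioned in the abstract.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

PATCH = 8  # patch side length in pixels (illustrative choice)

def extract_patches(image, step=4):
    """Collect patches and their (row, col) positions from one prior scan."""
    patches, coords = [], []
    for r in range(0, image.shape[0] - PATCH + 1, step):
        for c in range(0, image.shape[1] - PATCH + 1, step):
            patches.append(image[r:r + PATCH, c:c + PATCH].ravel())
            coords.append((r, c))
    return np.array(patches), np.array(coords, dtype=float)

def restore(noisy, prior_images, k=8, loc_weight=0.05):
    """Replace each patch of the noisy image by the mean of its k nearest
    database patches; the patch position is appended to the descriptor so
    matching stays roughly local without explicit registration."""
    extracted = [extract_patches(img) for img in prior_images]
    db_patch = np.vstack([p for p, _ in extracted])
    db_coord = np.vstack([c for _, c in extracted])
    nn = NearestNeighbors(n_neighbors=k).fit(
        np.hstack([db_patch, loc_weight * db_coord]))

    out = np.zeros_like(noisy, dtype=float)
    hits = np.zeros_like(noisy, dtype=float)
    for r in range(0, noisy.shape[0] - PATCH + 1, PATCH // 2):
        for c in range(0, noisy.shape[1] - PATCH + 1, PATCH // 2):
            q = np.hstack([noisy[r:r + PATCH, c:c + PATCH].ravel(),
                           loc_weight * np.array([r, c], dtype=float)])
            _, idx = nn.kneighbors([q])
            est = db_patch[idx[0]].mean(axis=0).reshape(PATCH, PATCH)
            out[r:r + PATCH, c:c + PATCH] += est
            hits[r:r + PATCH, c:c + PATCH] += 1.0
    # Pixels never covered by a full patch keep their original values.
    return np.where(hits > 0, out / np.maximum(hits, 1.0), noisy)
```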

  16. Externally calibrated parallel imaging for 3D multispectral imaging near metallic implants using broadband ultrashort echo time imaging.

    PubMed

    Wiens, Curtis N; Artz, Nathan S; Jang, Hyungseok; McMillan, Alan B; Reeder, Scott B

    2017-06-01

    To develop an externally calibrated parallel imaging technique for three-dimensional multispectral imaging (3D-MSI) in the presence of metallic implants. A fast, ultrashort echo time (UTE) calibration acquisition is proposed to enable externally calibrated parallel imaging techniques near metallic implants. The proposed calibration acquisition uses a broadband radiofrequency (RF) pulse to excite the off-resonance induced by the metallic implant, fully phase-encoded imaging to prevent in-plane distortions, and UTE to capture rapidly decaying signal. The performance of the externally calibrated parallel imaging reconstructions was assessed using phantoms and in vivo examples. Phantom and in vivo comparisons to self-calibrated parallel imaging acquisitions show that significant reductions in acquisition times can be achieved using externally calibrated parallel imaging with comparable image quality. Acquisition time reductions are particularly large for fully phase-encoded methods such as spectrally resolved fully phase-encoded three-dimensional (3D) fast spin-echo (SR-FPE), in which scan time reductions of up to 8 min were obtained. A fully phase-encoded acquisition with broadband excitation and UTE enabled externally calibrated parallel imaging for 3D-MSI, eliminating the need for repeated calibration regions at each frequency offset. Significant reductions in acquisition time can be achieved, particularly for fully phase-encoded methods like SR-FPE. Magn Reson Med 77:2303-2309, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  17. Ghost image in enhanced self-heterodyne synthetic aperture imaging ladar

    NASA Astrophysics Data System (ADS)

    Zhang, Guo; Sun, Jianfeng; Zhou, Yu; Lu, Zhiyong; Li, Guangyuan; Xu, Mengmeng; Zhang, Bo; Lao, Chenzhe; He, Hongyu

    2018-03-01

    The enhanced self-heterodyne synthetic aperture imaging ladar (SAIL) self-heterodynes two polarization-orthogonal echo signals to eliminate the phase disturbance caused by atmospheric turbulence and mechanical trembling, and uses a heterodyne receiver instead of a self-heterodyne receiver to improve the signal-to-noise ratio. The principle and structure of the enhanced self-heterodyne SAIL are presented. The imaging process of enhanced self-heterodyne SAIL for a distributed target is also analyzed. In enhanced self-heterodyne SAIL, the phases of the two orthogonally polarized beams are modulated by four cylindrical lenses in the transmitter to improve the resolutions in the orthogonal direction and the travel direction, which generates a ghost image. The generation process of the ghost image in enhanced self-heterodyne SAIL is mathematically detailed, and a method of eliminating the ghost image is also presented, which is significant for far-distance imaging. A number of experiments of enhanced self-heterodyne SAIL for distributed targets are presented; these experimental results verify the theoretical analysis of enhanced self-heterodyne SAIL. The enhanced self-heterodyne SAIL has the capability to eliminate the influence of atmospheric turbulence and mechanical trembling, offers a clear advantage in detecting weak signals, and has promising applications for far-distance ladar imaging.

  18. Image fusion and navigation platforms for percutaneous image-guided interventions.

    PubMed

    Rajagopal, Manoj; Venkatesan, Aradhana M

    2016-04-01

    Image-guided interventional procedures, particularly image guided biopsy and ablation, serve an important role in the care of the oncology patient. The need for tumor genomic and proteomic profiling, early tumor response assessment and confirmation of early recurrence are common scenarios that may necessitate successful biopsies of targets, including those that are small, anatomically unfavorable or inconspicuous. As image-guided ablation is increasingly incorporated into interventional oncology practice, similar obstacles are posed for the ablation of technically challenging tumor targets. Navigation tools, including image fusion and device tracking, can enable abdominal interventionalists to more accurately target challenging biopsy and ablation targets. Image fusion technologies enable multimodality fusion and real-time co-displays of US, CT, MRI, and PET/CT data, with navigational technologies including electromagnetic tracking, robotic, cone beam CT, optical, and laser guidance of interventional devices. Image fusion and navigational platform technology is reviewed in this article, including the results of studies implementing their use for interventional procedures. Pre-clinical and clinical experiences to date suggest these technologies have the potential to reduce procedure risk, time, and radiation dose to both the patient and the operator, with a valuable role to play for complex image-guided interventions.

  19. Feature maps driven no-reference image quality prediction of authentically distorted images

    NASA Astrophysics Data System (ADS)

    Ghadiyaram, Deepti; Bovik, Alan C.

    2015-03-01

    Current blind image quality prediction models rely on benchmark databases comprised of singly and synthetically distorted images, thereby learning image features that are only adequate to predict human perceived visual quality on such inauthentic distortions. However, real world images often contain complex mixtures of multiple distortions. Rather than a) discounting the effect of these mixtures of distortions on an image's perceptual quality and considering only the dominant distortion or b) using features that are only proven to be efficient for singly distorted images, we deeply study the natural scene statistics of authentically distorted images, in different color spaces and transform domains. We propose a feature-maps-driven statistical approach which avoids any latent assumptions about the type of distortion(s) contained in an image, and focuses instead on modeling the remarkable consistencies in the scene statistics of real world images in the absence of distortions. We design a deep belief network that takes model-based statistical image features derived from a very large database of authentically distorted images as input and discovers good feature representations by generalizing over different distortion types, mixtures, and severities, which are later used to learn a regressor for quality prediction. We demonstrate the remarkable competence of our features for improving automatic perceptual quality prediction on a benchmark database and on the newly designed LIVE Authentic Image Quality Challenge Database and show that our approach of combining robust statistical features and the deep belief network dramatically outperforms the state-of-the-art.

  20. Spectral CT Image Restoration via an Average Image-Induced Nonlocal Means Filter.

    PubMed

    Zeng, Dong; Huang, Jing; Zhang, Hua; Bian, Zhaoying; Niu, Shanzhou; Zhang, Zhang; Feng, Qianjin; Chen, Wufan; Ma, Jianhua

    2016-05-01

    Spectral computed tomography (SCT) images reconstructed by an analytical approach often suffer from a poor signal-to-noise ratio and strong streak artifacts when sufficient photon counts are not available in SCT imaging. To reduce noise-induced artifacts in SCT images, in this study we propose an average image-induced nonlocal means (aviNLM) filter for energy-specific image restoration. Methods: The present aviNLM algorithm exploits redundant information in the whole energy domain. Specifically, the proposed aviNLM algorithm yields the restored results by performing a nonlocal weighted average operation on the noisy energy-specific images with the nonlocal weight matrix between the target and prior images, in which the prior image is generated from all of the images reconstructed in each energy bin. Results: Qualitative and quantitative studies are conducted to evaluate the aviNLM filter using digital phantom, physical phantom, and clinical patient data acquired from energy-resolved and energy-integrating detectors, respectively. Experimental results show that the present aviNLM filter can achieve promising results for SCT image restoration in terms of noise-induced artifact suppression, cross profile, contrast-to-noise ratio, and material decomposition assessment. Conclusion and Significance: The present aviNLM algorithm has useful potential for radiation dose reduction by lowering the mAs in SCT imaging, and it may be useful for some other clinical applications, such as myocardial perfusion imaging and radiotherapy.
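
    A simplified Python sketch of the average-image-induced idea is given below: the prior image is the average of all energy-bin reconstructions, nonlocal weights are computed from patch distances in that prior, and those weights are then used to average the noisy energy-bin image. Window sizes and the smoothing parameter h are illustrative choices, not values from the paper.

```python
import numpy as np

def avinlm_bin(noisy_bin, prior, patch=3, search=7, h=0.05):
    """Restore one energy-bin image using nonlocal weights computed on a
    prior image (here the average of all energy-bin reconstructions)."""
    half_p, half_s = patch // 2, search // 2
    pad = half_p + half_s
    pb = np.pad(noisy_bin, pad, mode="reflect")
    pp = np.pad(prior, pad, mode="reflect")
    out = np.zeros_like(noisy_bin, dtype=float)

    for i in range(noisy_bin.shape[0]):
        for j in range(noisy_bin.shape[1]):
            ci, cj = i + pad, j + pad
            ref = pp[ci - half_p:ci + half_p + 1, cj - half_p:cj + half_p + 1]
            weights, values = [], []
            for di in range(-half_s, half_s + 1):
                for dj in range(-half_s, half_s + 1):
                    ni, nj = ci + di, cj + dj
                    cand = pp[ni - half_p:ni + half_p + 1, nj - half_p:nj + half_p + 1]
                    d2 = np.mean((ref - cand) ** 2)
                    weights.append(np.exp(-d2 / (h * h)))
                    values.append(pb[ni, nj])
            weights = np.array(weights)
            out[i, j] = np.dot(weights, values) / weights.sum()
    return out

def avinlm(energy_bins):
    """energy_bins: array (B, H, W); the prior is their average across bins."""
    prior = energy_bins.mean(axis=0)
    return np.stack([avinlm_bin(b, prior) for b in energy_bins])
```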

  1. Dynamic "inline" images: context-sensitive retrieval and integration of images into Web documents.

    PubMed

    Kahn, Charles E

    2008-09-01

    Integrating relevant images into web-based information resources adds value for research and education. This work sought to evaluate the feasibility of using "Web 2.0" technologies to dynamically retrieve and integrate pertinent images into a radiology web site. An online radiology reference of 1,178 textual web documents was selected as the set of target documents. The ARRS GoldMiner image search engine, which incorporated 176,386 images from 228 peer-reviewed journals, retrieved images on demand and integrated them into the documents. At least one image was retrieved in real-time for display as an "inline" image gallery for 87% of the web documents. Each thumbnail image was linked to the full-size image at its original web site. Review of 20 randomly selected Collaborative Hypertext of Radiology documents found that 69 of 72 displayed images (96%) were relevant to the target document. Users could click on the "More" link to search the image collection more comprehensively and, from there, link to the full text of the article. A gallery of relevant radiology images can be inserted easily into web pages on any web server. Indexing by concepts and keywords allows context-aware image retrieval, and searching by document title and subject metadata yields excellent results. These techniques allow web developers to incorporate easily a context-sensitive image gallery into their documents.

  2. Region-based multifocus image fusion for the precise acquisition of Pap smear images.

    PubMed

    Tello-Mijares, Santiago; Bescós, Jesús

    2018-05-01

    A multifocus image fusion method to obtain a single focused image from a sequence of microscopic high-magnification Papanicolau source (Pap smear) images is presented. These images, captured each in a different position of the microscope lens, frequently show partially focused cells or parts of cells, which makes them impractical for the direct application of image analysis techniques. The proposed method obtains a focused image with a high preservation of original pixel information while achieving a negligible visibility of the fusion artifacts. The method starts by identifying the best-focused image of the sequence; then, it performs a mean-shift segmentation over this image; the focus level of the segmented regions is evaluated in all the images of the sequence, and the best-focused regions are merged in a single combined image; finally, this image is processed with an adaptive artifact removal process. The combination of a region-oriented approach, instead of block-based approaches, and a minimum modification of the value of focused pixels in the original images achieves a highly contrasted image with no visible artifacts, which makes this method especially convenient for the medical imaging domain. The proposed method is compared with several state-of-the-art alternatives over a representative dataset. The experimental results show that our proposal obtains the best and most stable quality indicators. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
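
    The region-oriented fusion step can be sketched in a few lines of Python. The sketch assumes a segmentation label map is already available (mean-shift in the paper, but any label image works here) and uses the squared Laplacian as a simple per-pixel focus measure; the paper's adaptive artifact-removal post-processing is not included.

```python
import numpy as np
from scipy.ndimage import laplace

def focus_measure(image):
    """Per-pixel focus measure: squared Laplacian response."""
    return laplace(image.astype(float)) ** 2

def fuse_stack(stack, labels):
    """stack:  (N, H, W) images of the same field at different focal positions.
    labels: (H, W) integer segmentation of the best-focused image.
    For every region, copy the pixels from the frame with the highest mean focus."""
    focus = np.stack([focus_measure(frame) for frame in stack])  # (N, H, W)
    fused = np.zeros(stack.shape[1:], dtype=stack.dtype)
    for region in np.unique(labels):
        mask = labels == region
        best = int(np.argmax([f[mask].mean() for f in focus]))
        fused[mask] = stack[best][mask]
    return fused
```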

  3. [Imaging center - optimization of the imaging process].

    PubMed

    Busch, H-P

    2013-04-01

    Hospitals around the world are under increasing pressure to optimize the economic efficiency of treatment processes. Imaging is responsible for a great part of the success but also of the costs of treatment. In routine work an excessive supply of imaging methods leads to an "as well as" strategy up to the limit of the capacity without critical reflection. Exams that have no predictable influence on the clinical outcome are an unjustified burden for the patient. They are useless and threaten the financial situation and existence of the hospital. In recent years the focus of process optimization was exclusively on the quality and efficiency of performed single examinations. In the future critical discussion of the effectiveness of single exams in relation to the clinical outcome will be more important. Unnecessary exams can be avoided, only if in addition to the optimization of single exams (efficiency) there is an optimization strategy for the total imaging process (efficiency and effectiveness). This requires a new definition of processes (Imaging Pathway), new structures for organization (Imaging Center) and a new kind of thinking on the part of the medical staff. Motivation has to be changed from gratification of performed exams to gratification of process quality (medical quality, service quality, economics), including the avoidance of additional (unnecessary) exams. © Georg Thieme Verlag KG Stuttgart · New York.

  4. BMC Ecology image competition 2014: the winning images

    PubMed Central

    2014-01-01

    BMC Ecology showcases the winning entries from its second Ecology Image Competition. More than 300 individual images were submitted from an international array of research scientists, depicting life on every continent on earth. The journal’s Editorial Board and guest judge Caspar Henderson outline why their winning selections demonstrated high levels of technical skill and aesthetic sense in depicting the science of ecology, and we also highlight a small selection of highly commended images that we simply couldn’t let you miss out on. PMID:25178017

  5. USB video image controller used in CMOS image sensor

    NASA Astrophysics Data System (ADS)

    Zhang, Wenxuan; Wang, Yuxia; Fan, Hong

    2002-09-01

    The CMOS process is the mainstream technique in VLSI and offers high integration. SE402 is a multifunction microcontroller that integrates image data I/O ports, clock control, exposure control and digital signal processing into one chip, reducing the number of chips and the required PCB area. This paper focuses on a USB video image controller used with a CMOS image sensor and presents its application in a digital still camera.

  6. What's New | Galaxy of Images

    Science.gov Websites

    [Gallery listing residue: the page listed recently added image records with IDs SIL32-035-02, SIL32-038-02, SIL-2004_CT_6_1, SIL32-010-01, SIL32-013-05 and SIL32-014-02.]

  7. Toward image phylogeny forests: automatically recovering semantically similar image relationships.

    PubMed

    Dias, Zanoni; Goldenstein, Siome; Rocha, Anderson

    2013-09-10

    In the past few years, several near-duplicate detection methods have appeared in the literature to identify the cohabiting versions of a given document online. Following this trend, there are some initial attempts to go beyond the detection task and look into the structure of evolution within a set of related images over time. In this paper, we aim to automatically identify the structure of relationships underlying the images, correctly reconstruct their past history and ancestry information, and group them into distinct trees of processing history. We introduce a new algorithm that automatically handles sets comprising different related images and outputs the phylogeny trees (also known as a forest) associated with them. Image phylogeny algorithms have many applications, such as finding the first image within a set posted online (useful for tracking copyright infringement perpetrators), hinting at child pornography content creators, and narrowing down a list of suspects for online harassment using photographs. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  8. Medical imaging, PACS, and imaging informatics: retrospective.

    PubMed

    Huang, H K

    2014-01-01

    Historical reviews of PACS (picture archiving and communication system) and imaging informatics development from different points of view have been published in the past (Huang in Euro J Radiol 78:163-176, 2011; Lemke in Euro J Radiol 78:177-183, 2011; Inamura and Jong in Euro J Radiol 78:184-189, 2011). This retrospective attempts to look at the topic from a different angle by identifying certain basic medical imaging inventions in the 1960s and 1970s which had conceptually defined basic components of PACS guiding its course of development in the 1980s and 1990s, as well as subsequent imaging informatics research in the 2000s. In medical imaging, the emphasis was on the innovations at Georgetown University in Washington, DC, in the 1960s and 1970s. During the 1980s and 1990s, research and training support from US government agencies and public and private medical imaging manufacturers became available for training of young talents in biomedical physics and for developing the key components required for PACS development. In the 2000s, computer hardware and software as well as communication networks advanced by leaps and bounds, opening the door for medical imaging informatics to flourish. Because many key components required for the PACS operation were developed by the UCLA PACS Team and its collaborative partners in the 1980s, this presentation is centered on that aspect. During this period, substantial collaborative research efforts by many individual teams in the US and in Japan were highlighted. Credits are due particularly to the Pattern Recognition Laboratory at Georgetown University, and the computed radiography (CR) development at the Fuji Electric Corp. in collaboration with Stanford University in the 1970s; the Image Processing Laboratory at UCLA in the 1980s-1990s; as well as the early PACS development at the Hokkaido University, Sapporo, Japan, in the late 1970s, and film scanner and digital radiography developed by Konishiroku Photo Ind. Co. Ltd

  9. Interventional Molecular Imaging.

    PubMed

    Solomon, Stephen B; Cornelis, Francois

    2016-04-01

    Although molecular imaging has had a dramatic impact on diagnostic imaging, it has only recently begun to be integrated into interventional procedures. Its significant impact is attributed to its ability to provide noninvasive, physiologic information that supplements conventional morphologic imaging. The four major interventional opportunities for molecular imaging are, first, to provide guidance to localize a target; second, to provide tissue analysis to confirm that the target has been reached; third, to provide in-room, posttherapy assessment; and fourth, to deliver targeted therapeutics. With improved understanding and application of (18)F-FDG, as well as the addition of new molecular probes beyond (18)F-FDG, the future holds significant promise for the expansion of molecular imaging into the realm of interventional procedures. © 2016 by the Society of Nuclear Medicine and Molecular Imaging, Inc.

  10. SU-E-J-237: Image Feature Based DRR and Portal Image Registration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, X; Chang, J

    Purpose: Two-dimensional (2D) matching of the kV X-ray and digitally reconstructed radiography (DRR) images is an important setup technique for image-guided radiotherapy (IGRT). In our clinics, mutual information based methods are used for this purpose on commercial linear accelerators, but manual corrections are often needed. This work demonstrates the feasibility of using feature-based image transforms to register kV and DRR images. Methods: The scale invariant feature transform (SIFT) method was implemented to detect the matching image details (or key points) between the kV and DRR images. These key points represent high image intensity gradients and thus scale-invariant features. Due to the poor image contrast of our kV images, direct application of the SIFT method yielded many detection errors. To assist the finding of key points, the center coordinates of the kV and DRR images were read from the DICOM header, and the two groups of key points with similar relative positions to their corresponding centers were paired up. Using these points, a rigid transform (with scaling, horizontal and vertical shifts) was estimated. We also artificially introduced vertical and horizontal shifts to test the accuracy of our registration method on anterior-posterior (AP) and lateral pelvic images. Results: The results provided a satisfactory overlay of the transformed kV image onto the DRR image. The introduced vs. detected shifts were fit with a linear regression. In the AP image experiments, linear regression analysis showed a slope of 1.15 and 0.98 with an R2 of 0.89 and 0.99 for the horizontal and vertical shifts, respectively. The results are 1.2 and 1.3 with R2 of 0.72 and 0.82 for the lateral image shifts. Conclusion: This work provides an alternative technique for kV to DRR alignment. Further improvements in the estimation accuracy and image contrast tolerance are underway.
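
    For orientation, a generic OpenCV sketch of SIFT-based 2D registration is shown below. It follows the usual detect-match-estimate pattern rather than the authors' center-coordinate pairing, and it assumes 8-bit grayscale kV and DRR images; with low-contrast kV images a plain ratio test may not be sufficient, which is exactly the problem the abstract describes.

```python
import cv2
import numpy as np

def register_kv_to_drr(kv, drr):
    """Estimate a similarity transform (scale plus shifts) mapping the kV image
    onto the DRR from matched SIFT key points."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(kv, None)
    kp2, des2 = sift.detectAndCompute(drr, None)

    # Ratio-test matching to discard ambiguous key points.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pairs = matcher.knnMatch(des1, des2, k=2)
    good = [p[0] for p in pairs
            if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]

    src = np.float32([kp1[m.queryIdx].pt for m in good])
    dst = np.float32([kp2[m.trainIdx].pt for m in good])
    # Partial affine = rotation + uniform scale + translation, RANSAC-robust.
    M, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    return M, cv2.warpAffine(kv, M, (drr.shape[1], drr.shape[0]))
```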

  11. Integral imaging with multiple image planes using a uniaxial crystal plate.

    PubMed

    Park, Jae-Hyeung; Jung, Sungyong; Choi, Heejin; Lee, Byoungho

    2003-08-11

    Integral imaging has been attracting much attention recently for its several advantages such as full parallax, continuous view-points, and real-time full-color operation. However, the thickness of the displayed three-dimensional image is limited to a relatively small value due to the degradation of the image resolution. In this paper, we propose a method to provide observers with enhanced perception of the depth without severe resolution degradation by the use of the birefringence of a uniaxial crystal plate. The proposed integral imaging system can display images integrated around three central depth planes by dynamically altering the polarization and controlling both the elemental images and a dynamic slit array mask accordingly. We explain the principle of the proposed method and verify it experimentally.

  12. Synchrotron radiation imaging is a powerful tool to image brain microvasculature.

    PubMed

    Zhang, Mengqi; Peng, Guanyun; Sun, Danni; Xie, Yuanyuan; Xia, Jian; Long, Hongyu; Hu, Kai; Xiao, Bo

    2014-03-01

    Synchrotron radiation (SR) imaging is a powerful experimental tool for micrometer-scale imaging of microcirculation in vivo. This review discusses recent methodological advances and findings from morphological investigations of cerebral vascular networks during several neurovascular pathologies. In particular, it describes recent developments in SR microangiography for real-time assessment of the brain microvasculature under various pathological conditions in small animal models. It also covers studies that employed SR-based phase-contrast imaging to acquire 3D brain images and provide detailed maps of brain vasculature. In addition, a brief introduction of SR technology and current limitations of SR sources are described in this review. In the near future, SR imaging could transform into a common and informative imaging modality to resolve subtle details of cerebrovascular function.

  13. Digital imaging with solid state x-ray image intensifiers

    NASA Astrophysics Data System (ADS)

    Damento, Michael A.; Radspinner, Rachel; Roehrig, Hans

    1999-10-01

    X-ray cameras in which a CCD is lens coupled to a large phosphor screen are known to suffer from a loss of x-ray signal due to poor light collection from conventional phosphors, making them unsuitable for most medical imaging applications. By replacing the standard phosphor with a solid-state image intensifier, it may be possible to improve the signal-to-noise ratio of the images produced with these cameras. The solid-state x-ray image intensifier is a multi-layer device in which a photoconductor layer controls the light output from an electroluminescent phosphor layer. While prototype devices have been used for direct viewing and video imaging, they are only now being evaluated in a digital imaging system. In the present work, the preparation and evaluation of intensifiers with a 65 mm square format are described. The intensifiers are prepared by screen-printing or doctor blading the following layers onto an ITO coated glass substrate: ZnS phosphor, opaque layer, CdS photoconductor, and carbon conductor. The total thickness of the layers is approximately 350 micrometers, and 350 VAC at 400 Hz is applied to the device for operation. For a given x-ray dose, the intensifiers produce up to three times the intensity (after background subtraction) of Lanex Fast Front screens. X-ray images produced with the present intensifiers are somewhat noisy and their resolution is about half that of Lanex screens. Modifications are suggested which could improve the resolution and noise of the intensifiers.

  14. Hemorrhage detection in MRI brain images using images features

    NASA Astrophysics Data System (ADS)

    Moraru, Luminita; Moldovanu, Simona; Bibicu, Dorin; Stratulat (Visan), Mirela

    2013-11-01

    Abnormalities appear frequently on Magnetic Resonance Images (MRI) of the brain in elderly patients presenting either stroke or cognitive impairment. Detection of brain hemorrhage lesions in MRI is an important but very time-consuming task. This research aims to develop a method to extract brain tissue features from T2-weighted MR images of the brain using a selection of the most valuable texture features in order to discriminate between normal and affected areas of the brain. Due to the textural similarity between normal and affected areas in brain MR images, these operations are very challenging. A trauma may cause microstructural changes which are not necessarily perceptible by visual inspection but can be detected by using texture analysis. The proposed analysis is developed in five steps: i) in the pre-processing step, the de-noising operation is performed using Daubechies wavelets; ii) the original images were transformed into image features using first-order descriptors; iii) the regions of interest (ROIs) were cropped from the image features following the axial symmetry properties with respect to the mid-sagittal plane; iv) the variation in the measurement of features was quantified using two descriptors of the co-occurrence matrix, namely energy and homogeneity; v) finally, the significance of the image features is analyzed using the t-test method. P-values were computed for the pairs of features in order to measure their efficacy.
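
    Steps iv) and v) map directly onto standard library calls. The Python sketch below computes GLCM energy and homogeneity for each region of interest with scikit-image and compares normal versus affected ROIs with a two-sample t-test from SciPy; quantization to 64 gray levels and the single GLCM offset are illustrative choices, not values from the paper.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from scipy.stats import ttest_ind

def glcm_features(roi, levels=64):
    """Energy and homogeneity of the gray-level co-occurrence matrix of one ROI."""
    den = max(float(roi.max()), 1.0)
    q = np.uint8(np.floor(roi.astype(float) / den * (levels - 1)))
    glcm = graycomatrix(q, distances=[1], angles=[0], levels=levels,
                        symmetric=True, normed=True)
    return (graycoprops(glcm, "energy")[0, 0],
            graycoprops(glcm, "homogeneity")[0, 0])

def compare_groups(normal_rois, affected_rois):
    """Two-sample t-test on each texture feature between normal and affected ROIs."""
    normal = np.array([glcm_features(r) for r in normal_rois])
    affected = np.array([glcm_features(r) for r in affected_rois])
    results = {}
    for i, name in enumerate(["energy", "homogeneity"]):
        t, p = ttest_ind(normal[:, i], affected[:, i])
        results[name] = (float(t), float(p))
    return results
```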

  15. Postprocessing classification images

    NASA Technical Reports Server (NTRS)

    Kan, E. P.

    1979-01-01

    Program cleans up remote-sensing maps. It can be used with existing image-processing software. Remapped images closely resemble familiar resource information maps and can replace or supplement classification images not postprocessed by this program.

  16. Device for wavelength-selective imaging

    DOEpatents

    Frangioni, John V.

    2010-09-14

    An imaging device captures both a visible light image and a diagnostic image, the diagnostic image corresponding to emissions from an imaging medium within the object. The visible light image (which may be color or grayscale) and the diagnostic image may be superimposed to display regions of diagnostic significance within a visible light image. A number of imaging media may be used according to an intended application for the imaging device, and an imaging medium may have wavelengths above, below, or within the visible light spectrum. The devices described herein may be advantageously packaged within a single integrated device or other solid state device, and/or employed in an integrated, single-camera medical imaging system, as well as many non-medical imaging systems that would benefit from simultaneous capture of visible-light wavelength images along with images at other wavelengths.

  17. SlideJ: An ImageJ plugin for automated processing of whole slide images

    PubMed Central

    Baroni, Giulia L.; Pilutti, David; Di Loreto, Carla

    2017-01-01

    The digital slide, or Whole Slide Image, is a digital image, acquired with specific scanners, that represents a complete tissue sample or cytological specimen at the microscopic level. While Whole Slide Image analysis is recognized among the most interesting opportunities, the typical size of such images (up to gigapixels) can be very demanding in terms of memory requirements. Thus, while algorithms and tools for processing and analysis of single microscopic field images are available, the size of Whole Slide Images makes the direct use of such tools prohibitive or impossible. In this work a plugin for ImageJ, named SlideJ, is proposed with the objective to seamlessly extend the application of image analysis algorithms implemented in ImageJ for single microscopic field images to whole digital slide analysis. The plugin has been complemented by example macros in the ImageJ scripting language to demonstrate its use in concrete situations. PMID:28683129
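
    The tiling strategy that SlideJ automates can be expressed generically in Python as below. The sketch keeps the whole array in memory for simplicity; a real whole slide image would instead be streamed region by region from disk (for example with a WSI reader such as OpenSlide), which is the part SlideJ handles inside ImageJ.

```python
import numpy as np

def process_in_tiles(slide, tile=2048, overlap=64, analyze=None):
    """Apply a single-field analysis function tile by tile over a huge image.

    slide   : 2-D array (possibly with a trailing channel axis).
    analyze : callable mapping one tile to a result; defaults to mean intensity.
    Returns a list of (row, col, result) so results can be mapped back to
    slide coordinates.
    """
    analyze = analyze or (lambda t: float(np.mean(t)))
    results = []
    step = tile - overlap
    for r in range(0, slide.shape[0], step):
        for c in range(0, slide.shape[1], step):
            t = slide[r:r + tile, c:c + tile]
            results.append((r, c, analyze(t)))
    return results
```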

  18. Imaging quality evaluation method of pixel coupled electro-optical imaging system

    NASA Astrophysics Data System (ADS)

    He, Xu; Yuan, Li; Jin, Chunqi; Zhang, Xiaohui

    2017-09-01

    With advancements in high-resolution imaging optical fiber bundle fabrication technology, traditional photoelectric imaging systems have become "flexible", with greatly reduced volume and weight. However, traditional image quality evaluation models are limited by the coupled discrete sampling effect of fiber-optic image bundles and charge-coupled device (CCD) pixels. This limitation substantially complicates the design, optimization, assembly, and image quality evaluation of the coupled discrete sampling imaging system. Based on the transfer process of a grayscale cosine-distribution optical signal through the fiber-optic image bundle and CCD, a mathematical model of the coupled modulation transfer function (coupled-MTF) is established. This model can be used as a basis for subsequent studies on the convergence and periodically oscillating characteristics of the function. We also propose the concept of the average coupled-MTF, which is consistent with the definition of the traditional MTF. Based on this concept, the relationships among core distance, core layer radius, and average coupled-MTF are investigated.

  19. Location-Driven Image Retrieval for Images Collected by a Mobile Robot

    NASA Astrophysics Data System (ADS)

    Tanaka, Kanji; Hirayama, Mitsuru; Okada, Nobuhiro; Kondo, Eiji

    Mobile robot teleoperation is a method for a human user to interact with a mobile robot over time and distance. Successful teleoperation depends on how well images taken by the mobile robot are visualized to the user. To enhance the efficiency and flexibility of the visualization, an image retrieval system on such a robot’s image database would be very useful. The main difference of the robot’s image database from standard image databases is that various relevant images exist due to variety of viewing conditions. The main contribution of this paper is to propose an efficient retrieval approach, named location-driven approach, utilizing correlation between visual features and real world locations of images. Combining the location-driven approach with the conventional feature-driven approach, our goal can be viewed as finding an optimal classifier between relevant and irrelevant feature-location pairs. An active learning technique based on support vector machine is extended for this aim.
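
    A compact Python sketch of the feature-plus-location idea is shown below, using scikit-learn's SVM in place of the paper's extended active-learning formulation: visual features and robot positions are concatenated into one descriptor, a relevant/irrelevant classifier is trained from user feedback, and the images closest to the decision boundary are proposed for the next labeling round. The scaling factor alpha that balances the two cues is an assumed parameter.

```python
import numpy as np
from sklearn.svm import SVC

def combine(features, locations, alpha=0.3):
    """Joint descriptor: visual features concatenated with scaled robot positions."""
    return np.hstack([features, alpha * locations])

def relevance_model(features, locations, labels):
    """Train a binary relevant (1) / irrelevant (0) classifier from user feedback."""
    return SVC(kernel="rbf").fit(combine(features, locations), labels)

def query_next(model, features, locations, n=5):
    """Active-learning step: indices of images closest to the decision boundary,
    i.e. the ones whose labels would help the classifier the most."""
    margin = np.abs(model.decision_function(combine(features, locations)))
    return np.argsort(margin)[:n]

def retrieve(model, features, locations, n=10):
    """Rank database images by signed distance on the relevant side of the boundary."""
    score = model.decision_function(combine(features, locations))
    return np.argsort(score)[::-1][:n]
```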

  20. Taxonomy of multi-focal nematode image stacks by a CNN based image fusion approach.

    PubMed

    Liu, Min; Wang, Xueping; Zhang, Hongzhong

    2018-03-01

    In the biomedical field, digital multi-focal images are very important for the documentation and communication of specimen data, because the morphological information for a transparent specimen can be captured in the form of a stack of high-quality images. Given biomedical image stacks containing multi-focal images, how to efficiently extract effective features from all layers to classify the image stacks is still an open question. We present a deep convolutional neural network (CNN) image fusion based multilinear approach for the taxonomy of multi-focal image stacks. A deep CNN based image fusion technique is used to combine the relevant information of the multi-focal images within a given image stack into a single image, which is more informative and complete than any single image in the given stack. In addition, the multi-focal images within a stack are fused along 3 orthogonal directions, and multiple features extracted from the fused images along different directions are combined by canonical correlation analysis (CCA). Because multi-focal image stacks represent the effect of different factors (texture, shape, different instances within the same class, and different classes of objects), we embed the deep CNN based image fusion method within a multilinear framework to propose an image fusion based multilinear classifier. The experimental results on nematode multi-focal image stacks demonstrated that the deep CNN image fusion based multilinear classifier can reach a higher classification rate (95.7%) than the previous multilinear based approach (88.7%), even though we only use the texture feature instead of the combination of texture and shape features as in the previous work. The proposed deep CNN image fusion based multilinear approach shows great potential in building an automated nematode taxonomy system for nematologists. It is effective for classifying multi-focal image stacks. Copyright © 2018 Elsevier B.V. All rights reserved.

  1. Imaging system for creating 3D block-face cryo-images of whole mice

    NASA Astrophysics Data System (ADS)

    Roy, Debashish; Breen, Michael; Salvado, Olivier; Heinzel, Meredith; McKinley, Eliot; Wilson, David

    2006-03-01

    We developed a cryomicrotome/imaging system that provides high resolution, high sensitivity block-face images of whole mice or excised organs, and applied it to a variety of biological applications. With this cryo-imaging system, we sectioned cryo-preserved tissues at 2-40 μm thickness and acquired high resolution brightfield and fluorescence images with microscopic in-plane resolution (as good as 1.2 μm). Brightfield images of normal and pathological anatomy show exquisite detail, especially in the abdominal cavity. Multi-planar reformatting and 3D renderings allow one to interrogate 3D structures. In this report, we present brightfield images of mouse anatomy, as well as 3D renderings of organs. For the BPK mouse model of polycystic kidney disease, we compared brightfield cryo-images and kidney volumes to MRI. The color images provided greater contrast and resolution of cysts as compared to in vivo MRI. We note that color cryo-images are closer to what a researcher sees in dissection, making it easier to interpret the image data. The combination of field of view, depth of field, ultra high resolution and color/fluorescence contrast enables cryo-image volumes to provide details that cannot be found through in vivo imaging or other ex vivo optical imaging approaches. We believe that this novel imaging system will have applications that include identification of mouse phenotypes; characterization of diseases like blood vessel disease, kidney disease, and cancer; assessment of drug and gene therapy delivery and efficacy; and validation of other imaging modalities.

  2. Attractive celebrity and peer images on Instagram: Effect on women's mood and body image.

    PubMed

    Brown, Zoe; Tiggemann, Marika

    2016-12-01

    A large body of research has documented that exposure to images of thin fashion models contributes to women's body dissatisfaction. The present study aimed to experimentally investigate the impact of attractive celebrity and peer images on women's body image. Participants were 138 female undergraduate students who were randomly assigned to view either a set of celebrity images, a set of equally attractive unknown peer images, or a control set of travel images. All images were sourced from public Instagram profiles. Results showed that exposure to celebrity and peer images increased negative mood and body dissatisfaction relative to travel images, with no significant difference between celebrity and peer images. This effect was mediated by state appearance comparison. In addition, celebrity worship moderated an increased effect of celebrity images on body dissatisfaction. It was concluded that exposure to attractive celebrity and peer images can be detrimental to women's body image. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. Hybrid wavefront sensing and image correction algorithm for imaging through turbulent media

    NASA Astrophysics Data System (ADS)

    Wu, Chensheng; Robertson Rzasa, John; Ko, Jonathan; Davis, Christopher C.

    2017-09-01

    It is well known that passive image correction of turbulence distortions often involves using geometry-dependent deconvolution algorithms. On the other hand, active imaging techniques using adaptive optic correction should use the distorted wavefront information for guidance. Our work shows that a hybrid hardware-software approach can obtain accurate and highly detailed images through turbulent media. The processing algorithm also takes far fewer iteration steps than conventional image processing algorithms. In our proposed approach, a plenoptic sensor is used as a wavefront sensor to guide post-stage image correction on a high-definition zoomable camera. Conversely, we show that given the ground truth of the highly detailed image and the plenoptic imaging result, we can generate an accurate prediction of the blurred image on a traditional zoomable camera. Similarly, the ground truth combined with the blurred image from the zoomable camera would provide the wavefront conditions. In application, our hybrid approach can be used as an effective way to conduct object recognition in a turbulent environment where the target has been significantly distorted or is even unrecognizable.

  4. Electronic Imaging

    DTIC Science & Technology

    1991-11-01

    Tilted Rough Disc," Donald J. Schertler and Nicholas George "Image Deblurring for Multiple-Point Impulse Responses," Bryan J. Stossel and Nicholas George...Rough Disc Donald J. Schertler Nicholas George Image Deblurring for Multiple-Point Impulse Bryan J. Stossel Responses Nicholas George z 0 zw V) w LU 0...number of impulses present in the degradation. IMAGE DEBLURRING FOR MULTIPLE-POINT IMPULSE RESPONSESt Bryan J. Stossel Nicholas George Institute of Optics

  5. Blind image fusion for hyperspectral imaging with the directional total variation

    NASA Astrophysics Data System (ADS)

    Bungert, Leon; Coomes, David A.; Ehrhardt, Matthias J.; Rasch, Jennifer; Reisenhofer, Rafael; Schönlieb, Carola-Bibiane

    2018-04-01

    Hyperspectral imaging is a cutting-edge type of remote sensing used for mapping vegetation properties, rock minerals and other materials. A major drawback of hyperspectral imaging devices is their intrinsic low spatial resolution. In this paper, we propose a method for increasing the spatial resolution of a hyperspectral image by fusing it with an image of higher spatial resolution that was obtained with a different imaging modality. This is accomplished by solving a variational problem in which the regularization functional is the directional total variation. To accommodate for possible mis-registrations between the two images, we consider a non-convex blind super-resolution problem where both a fused image and the corresponding convolution kernel are estimated. Using this approach, our model can realign the given images if needed. Our experimental results indicate that the non-convexity is negligible in practice and that reliable solutions can be computed using a variety of different optimization algorithms. Numerical results on real remote sensing data from plant sciences and urban monitoring show the potential of the proposed method and suggest that it is robust with respect to the regularization parameters, mis-registration and the shape of the kernel.

  6. Stereo Imaging Miniature Endoscope with Single Imaging Chip and Conjugated Multi-Bandpass Filters

    NASA Technical Reports Server (NTRS)

    Shahinian, Hrayr Karnig (Inventor); Bae, Youngsam (Inventor); White, Victor E. (Inventor); Shcheglov, Kirill V. (Inventor); Manohara, Harish M. (Inventor); Kowalczyk, Robert S. (Inventor)

    2018-01-01

    A dual objective endoscope for insertion into a cavity of a body for providing a stereoscopic image of a region of interest inside of the body, including an imaging device at the distal end for obtaining optical images of the region of interest (ROI) and processing the optical images to form video signals for wired and/or wireless transmission and display of 3D images on a rendering device. The imaging device includes a focal plane detector array (FPA) for obtaining the optical images of the ROI, and processing circuits behind the FPA. The processing circuits convert the optical images into the video signals. The imaging device includes right and left pupils for receiving right and left images through right and left conjugated multi-bandpass filters. Illuminators illuminate the ROI through a multi-bandpass filter having three right and three left pass bands that are matched to the right and left conjugated multi-bandpass filters. A full color image is collected after three or six sequential illuminations with the red, green and blue lights.

  7. Image Restoration for Fluorescence Planar Imaging with Diffusion Model

    PubMed Central

    Gong, Yuzhu; Li, Yang

    2017-01-01

    Fluorescence planar imaging (FPI) fails to capture high resolution images of deep fluorochromes due to photon diffusion. This paper presents an image restoration method to deal with this kind of blurring. The scheme of this method is conceived based on a reconstruction method in fluorescence molecular tomography (FMT) with a diffusion model. A new unknown parameter is defined by introducing the first mean value theorem for definite integrals. A system matrix converting this unknown parameter to the blurry image is constructed with the elements of depth conversion matrices related to a chosen plane named the focal plane. Results of phantom and mouse experiments show that the proposed method is capable of reducing the blurring of the FPI image caused by photon diffusion when the depth of the focal plane is chosen within a proper interval around the true depth of the fluorochrome. This method will be helpful for estimating the size of deep fluorochromes. PMID:29279843

  8. Multi-layer imager design for mega-voltage spectral imaging

    NASA Astrophysics Data System (ADS)

    Myronakis, Marios; Hu, Yue-Houng; Fueglistaller, Rony; Wang, Adam; Baturin, Paul; Huber, Pascal; Morf, Daniel; Star-Lack, Josh; Berbeco, Ross

    2018-05-01

    The architecture of multi-layer imagers (MLIs) can be exploited to provide megavoltage spectral imaging (MVSPI) for specific imaging tasks. In the current work, we investigated bone suppression and gold fiducial contrast enhancement as two clinical tasks which could be improved with spectral imaging. A method based on analytical calculations that enables rapid investigation of MLI component materials and thicknesses was developed and validated against Monte Carlo computations. The figure of merit for task-specific imaging performance was the contrast-to-noise ratio (CNR) of the gold fiducial when the CNR of bone was equal to zero after a weighted subtraction of the signals obtained from each MLI layer. Results demonstrated a sharp increase in the CNR of gold when the build-up component or scintillation materials and thicknesses were modified. The potential for low-cost, prompt implementation of specific modifications (e.g. composition of the build-up component) could accelerate clinical translation of MVSPI.
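
    The figure of merit described above reduces to a short calculation: choose the subtraction weight that nulls the bone contrast between two layer images, then measure the gold fiducial's contrast-to-noise ratio in the subtracted image. The Python sketch below assumes two registered layer images and boolean ROI masks for bone, gold, and background; it is an illustration of the metric, not the authors' analytical model.

```python
import numpy as np

def bone_cancelling_weight(top, bottom, bone_mask, bg_mask):
    """Weight w such that bone contrast vanishes in top - w * bottom."""
    c_top = top[bone_mask].mean() - top[bg_mask].mean()
    c_bot = bottom[bone_mask].mean() - bottom[bg_mask].mean()
    return c_top / c_bot

def gold_cnr_after_subtraction(top, bottom, bone_mask, gold_mask, bg_mask):
    """CNR of the gold fiducial in the layer-subtracted image in which the
    bone CNR has been forced to zero."""
    w = bone_cancelling_weight(top, bottom, bone_mask, bg_mask)
    sub = top - w * bottom
    contrast = sub[gold_mask].mean() - sub[bg_mask].mean()
    return contrast / sub[bg_mask].std()
```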

  9. Voyager image processing at the Image Processing Laboratory

    NASA Astrophysics Data System (ADS)

    Jepsen, P. L.; Mosher, J. A.; Yagi, G. M.; Avis, C. C.; Lorre, J. J.; Garneau, G. W.

    1980-09-01

    This paper discusses new digital processing techniques as applied to the Voyager Imaging Subsystem and devised to explore atmospheric dynamics, spectral variations, and the morphology of Jupiter, Saturn and their satellites. Radiometric and geometric decalibration processes, the modulation transfer function, and processes to determine and remove photometric properties of the atmosphere and surface of Jupiter and its satellites are examined. It is exhibited that selected images can be processed into 'approach at constant longitude' time lapse movies which are useful in observing atmospheric changes of Jupiter. Photographs are included to illustrate various image processing techniques.

  10. Voyager image processing at the Image Processing Laboratory

    NASA Technical Reports Server (NTRS)

    Jepsen, P. L.; Mosher, J. A.; Yagi, G. M.; Avis, C. C.; Lorre, J. J.; Garneau, G. W.

    1980-01-01

    This paper discusses new digital processing techniques as applied to the Voyager Imaging Subsystem and devised to explore atmospheric dynamics, spectral variations, and the morphology of Jupiter, Saturn and their satellites. Radiometric and geometric decalibration processes, the modulation transfer function, and processes to determine and remove photometric properties of the atmosphere and surface of Jupiter and its satellites are examined. It is shown that selected images can be processed into 'approach at constant longitude' time lapse movies which are useful in observing atmospheric changes of Jupiter. Photographs are included to illustrate various image processing techniques.

  11. Image-guided urologic surgery: intraoperative optical imaging and tissue interrogation (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Liao, Joseph C.

    2017-02-01

    Emerging optical imaging technologies can be integrated in the operating room environment during minimally invasive and open urologic surgery, including oncologic surgery of the bladder, prostate, and kidney. These technologies include macroscopic fluorescence imaging that provides contrast enhancement between normal and diseased tissue and microscopic imaging that provides tissue characterization. Optical imaging technologies that have reached the clinical arena in urologic surgery are reviewed, including photodynamic diagnosis, near infrared fluorescence imaging, optical coherence tomography, and confocal laser endomicroscopy. Molecular imaging represents an exciting future arena in conjugating cancer-specific contrast agents to fluorophores to improve the specificity of disease detection. Ongoing efforts are underway to translate optimal targeting agents and imaging modalities, with the goal to improve cancer-specific and functional outcomes.

  12. Performance test and image correction of CMOS image sensor in radiation environment

    NASA Astrophysics Data System (ADS)

    Wang, Congzheng; Hu, Song; Gao, Chunming; Feng, Chang

    2016-09-01

    CMOS image sensors rival CCDs in strong radiation resistance and simple drive signals, so they are widely applied in high-energy radiation environments such as space optical imaging and video monitoring of nuclear power equipment. However, the silicon of CMOS image sensors suffers from the ionizing dose effect under high-energy rays, and sensor indicators such as signal-to-noise ratio (SNR), non-uniformity (NU), and bad pixels (BP) are degraded by the radiation. The radiation environment for the test experiments was generated by a 60Co γ-ray source. A camera module based on the CMV2000 image sensor from CMOSIS Inc. was chosen as the research object. The rays were delivered at a dose rate of 20 krad/h. In the experiments, the output signals of the sensor pixels were measured at different total doses. Data analysis showed that with accumulating irradiation dose, the SNR of the image sensor decreased, the NU increased, and the number of bad pixels grew. Correction of these indicators is necessary, as they are the main factors affecting image quality. An image-processing algorithm combining a local threshold method with NU correction based on non-local means (NLM) was applied to the experimental data. The results showed that the correction can effectively suppress bad pixels, improve the SNR, and reduce the NU.
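
    As a rough illustration of the kind of correction pipeline described above, the sketch below flags bad pixels with a local-median threshold and flattens slowly varying non-uniformity with a smooth gain map. The threshold value and the Gaussian flat-field step are stand-in assumptions; the authors' non-local-means formulation is not reproduced.

```python
# Simplified stand-in for radiation-damage correction: local-threshold
# bad-pixel replacement followed by a smooth-gain non-uniformity correction.
import numpy as np
from scipy.ndimage import median_filter, gaussian_filter

def correct_frame(frame, k=5.0):
    frame = frame.astype(float)
    med = median_filter(frame, size=3)
    resid = frame - med
    bad = np.abs(resid) > k * resid.std()      # local-threshold bad-pixel map
    fixed = np.where(bad, med, frame)          # replace flagged pixels
    gain = gaussian_filter(fixed, sigma=25)    # slowly varying non-uniformity
    return fixed * gain.mean() / np.maximum(gain, 1e-9)
```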

  13. Registration of multiple video images to preoperative CT for image-guided surgery

    NASA Astrophysics Data System (ADS)

    Clarkson, Matthew J.; Rueckert, Daniel; Hill, Derek L.; Hawkes, David J.

    1999-05-01

    In this paper we propose a method which uses multiple video images to establish the pose of a CT volume with respect to video camera coordinates for use in image guided surgery. The majority of neurosurgical procedures require the neurosurgeon to relate the pre-operative MR/CT data to the intra-operative scene. Registration of 2D video images to the pre-operative 3D image enables a perspective projection of the pre-operative data to be overlaid onto the video image. Our registration method is based on image intensity and uses a simple iterative optimization scheme to maximize the mutual information between a video image and a rendering from the pre-operative data. Video images are obtained from a stereo operating microscope, with a field of view of approximately 110 X 80 mm. We have extended an existing information theoretical framework for 2D-3D registration, so that multiple video images can be registered simultaneously to the pre-operative data. Experiments were performed on video and CT images of a skull phantom. We took three video images, and our algorithm registered these individually to the 3D image. The mean projection error varied between 4.33 and 9.81 millimeters (mm), and the mean 3D error varied between 4.47 and 11.92 mm. Using our novel techniques we then registered five video views simultaneously to the 3D model. This produced an accurate and robust registration with a mean projection error of 0.68 mm and a mean 3D error of 1.05 mm.
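
    The similarity measure at the heart of this registration is mutual information. A minimal histogram-based estimate is sketched below; the iterative pose optimization and the rendering of the CT volume are omitted, and the bin count is an arbitrary choice.

```python
# Histogram-based mutual information between a video frame and a rendering
# of the pre-operative volume (a sketch of the metric only).
import numpy as np

def mutual_information(img_a, img_b, bins=64):
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()                  # joint intensity distribution
    px = pxy.sum(axis=1, keepdims=True)        # marginal of image A
    py = pxy.sum(axis=0, keepdims=True)        # marginal of image B
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```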

  14. Methods in Astronomical Image Processing

    NASA Astrophysics Data System (ADS)

    Jörsäter, S.

    A Brief Introductory Note; History of Astronomical Imaging; Astronomical Image Data; Images in Various Formats; Digitized Image Data; Digital Image Data; Philosophy of Astronomical Image Processing; Properties of Digital Astronomical Images; Human Image Processing; Astronomical vs. Computer Science Image Processing; Basic Tools of Astronomical Image Processing; Display Applications; Calibration of Intensity Scales; Calibration of Length Scales; Image Re-shaping; Feature Enhancement; Noise Suppression; Noise and Error Analysis; Image Processing Packages: Design of AIPS and MIDAS; AIPS; MIDAS; Reduction of CCD Data; Bias Subtraction; Clipping; Preflash Subtraction; Dark Subtraction; Flat Fielding; Sky Subtraction; Extinction Correction; Deconvolution Methods; Rebinning/Combining; Summary and Prospects for the Future
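
    For the CCD reduction steps listed above (bias, dark, flat, sky), a conventional calibration chain can be summarized as follows. Frame names, exposure-time scaling, and the scalar sky level are generic assumptions rather than the chapter's specific prescriptions.

```python
# Standard CCD calibration chain: bias and scaled-dark subtraction, flat
# fielding, and sky subtraction. Inputs are float arrays in ADU.
import numpy as np

def calibrate(raw, bias, dark, flat, exp_time, dark_exp_time, sky_level=0.0):
    dark_scaled = (dark - bias) * (exp_time / dark_exp_time)
    flat_norm = (flat - bias) / np.median(flat - bias)
    science = (raw - bias - dark_scaled) / np.maximum(flat_norm, 1e-6)
    return science - sky_level
```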

  15. Superresolution parallel magnetic resonance imaging: Application to functional and spectroscopic imaging

    PubMed Central

    Otazo, Ricardo; Lin, Fa-Hsuan; Wiggins, Graham; Jordan, Ramiro; Sodickson, Daniel; Posse, Stefan

    2009-01-01

    Standard parallel magnetic resonance imaging (MRI) techniques suffer from residual aliasing artifacts when the coil sensitivities vary within the image voxel. In this work, a parallel MRI approach known as Superresolution SENSE (SURE-SENSE) is presented in which acceleration is performed by acquiring only the central region of k-space instead of increasing the sampling distance over the complete k-space matrix and reconstruction is explicitly based on intra-voxel coil sensitivity variation. In SURE-SENSE, parallel MRI reconstruction is formulated as a superresolution imaging problem where a collection of low resolution images acquired with multiple receiver coils are combined into a single image with higher spatial resolution using coil sensitivities acquired with high spatial resolution. The effective acceleration of conventional gradient encoding is given by the gain in spatial resolution, which is dictated by the degree of variation of the different coil sensitivity profiles within the low resolution image voxel. Since SURE-SENSE is an ill-posed inverse problem, Tikhonov regularization is employed to control noise amplification. Unlike standard SENSE, for which acceleration is constrained to the phase-encoding dimension/s, SURE-SENSE allows acceleration along all encoding directions — for example, two-dimensional acceleration of a 2D echo-planar acquisition. SURE-SENSE is particularly suitable for low spatial resolution imaging modalities such as spectroscopic imaging and functional imaging with high temporal resolution. Application to echo-planar functional and spectroscopic imaging in human brain is presented using two-dimensional acceleration with a 32-channel receiver coil. PMID:19341804
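
    The Tikhonov-regularized reconstruction mentioned above amounts to solving a damped least-squares problem, x = argmin ||Ex - y||² + λ||x||². The toy solver below illustrates the normal-equations form for an assumed encoding matrix E built from coil sensitivities; it is a sketch, not the SURE-SENSE implementation.

```python
# Toy Tikhonov-regularized least-squares solve: x = (E^H E + lam*I)^-1 E^H y.
# E (encoding matrix) and y (acquired data) are placeholders for illustration.
import numpy as np

def tikhonov_solve(E, y, lam=0.01):
    EhE = E.conj().T @ E
    reg = lam * np.eye(EhE.shape[0])
    return np.linalg.solve(EhE + reg, E.conj().T @ y)
```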

  16. Imaging of brain metastases.

    PubMed

    Fink, Kathleen R; Fink, James R

    2013-01-01

    Imaging plays a key role in the diagnosis of central nervous system (CNS) metastasis. Imaging is used to detect metastases in patients with known malignancies and new neurological signs or symptoms, as well as to screen for CNS involvement in patients with known cancer. Computed tomography (CT) and magnetic resonance imaging (MRI) are the key imaging modalities used in the diagnosis of brain metastases. In difficult cases, such as newly diagnosed solitary enhancing brain lesions in patients without known malignancy, advanced imaging techniques including proton magnetic resonance spectroscopy (MRS), contrast enhanced magnetic resonance perfusion (MRP), diffusion weighted imaging (DWI), and diffusion tensor imaging (DTI) may aid in arriving at the correct diagnosis. This image-rich review discusses the imaging evaluation of patients with suspected intracranial involvement and malignancy, describes typical imaging findings of parenchymal brain metastasis on CT and MRI, and provides clues to specific histological diagnoses such as the presence of hemorrhage. Additionally, the role of advanced imaging techniques is reviewed, specifically in the context of differentiating metastasis from high-grade glioma and other solitary enhancing brain lesions. Extra-axial CNS involvement by metastases, including pachymeningeal and leptomeningeal metastases is also briefly reviewed.

  17. Fluorescence lifetime imaging and reflectance confocal microscopy for multiscale imaging of oral precancer

    NASA Astrophysics Data System (ADS)

    Jabbour, Joey M.; Cheng, Shuna; Malik, Bilal H.; Cuenca, Rodrigo; Jo, Javier A.; Wright, John; Cheng, Yi-Shing Lisa; Maitland, Kristen C.

    2013-04-01

    Optical imaging techniques using a variety of contrast mechanisms are under evaluation for early detection of epithelial precancer; however, tradeoffs in field of view (FOV) and resolution may limit their application. Therefore, we present a multiscale multimodal optical imaging system combining macroscopic biochemical imaging of fluorescence lifetime imaging (FLIM) with subcellular morphologic imaging of reflectance confocal microscopy (RCM). The FLIM module images a 16×16 mm² tissue area with 62.5 μm lateral and 320 ps temporal resolution to guide cellular imaging of suspicious regions. Subsequently, coregistered RCM images are acquired at 7 Hz with 400 μm diameter FOV, <1 μm lateral and 3.5 μm axial resolution. FLIM-RCM imaging was performed on a tissue phantom, normal porcine buccal mucosa, and a hamster cheek pouch model of oral carcinogenesis. While FLIM is sensitive to biochemical and macroscopic architectural changes in tissue, RCM provides images of cell nuclear morphology, all key indicators of precancer progression.

  18. Digital Camera with Apparatus for Authentication of Images Produced from an Image File

    NASA Technical Reports Server (NTRS)

    Friedman, Gary L. (Inventor)

    1996-01-01

    A digital camera equipped with a processor for authentication of images produced from an image file taken by the digital camera is provided. The digital camera processor has embedded therein a private key unique to it, and the camera housing has a public key that is so uniquely related to the private key that digital data encrypted with the private key may be decrypted using the public key. The digital camera processor comprises means for calculating a hash of the image file using a predetermined algorithm, and second means for encrypting the image hash with the private key, thereby producing a digital signature. The image file and the digital signature are stored in suitable recording means so they will be available together. Apparatus for authenticating the image file as being free of any alteration uses the public key for decrypting the digital signature, thereby deriving a secure image hash identical to the image hash produced by the digital camera and used to produce the digital signature. The authenticating apparatus calculates from the image file an image hash using the same algorithm as before. By comparing this last image hash with the secure image hash, authenticity of the image file is determined if they match. Other techniques to address time-honored methods of deception, such as attaching false captions or inducing forced perspectives, are included.
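
    The sign-and-verify flow described in this patent abstract follows the standard hash-then-sign pattern. The snippet below is a generic analogue using RSA keys from the Python `cryptography` package, not the camera's embedded implementation; the image file name is hypothetical.

```python
# Illustrative hash-then-sign / verify flow analogous to the scheme above.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

image_bytes = open("photo.raw", "rb").read()   # hypothetical image file

# "Camera" side: hash the image file and sign the hash with the private key.
signature = private_key.sign(image_bytes, padding.PKCS1v15(), hashes.SHA256())

# Verifier side: recompute the hash and check it against the signature using
# the public key; verify() raises InvalidSignature if the file was altered.
public_key.verify(signature, image_bytes, padding.PKCS1v15(), hashes.SHA256())
print("image file is authentic")
```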

  19. Magnetic Resonance Imaging

    MedlinePlus

    A patient-oriented overview of magnetic resonance imaging (MRI), covering what MRI is, what it is used for, how safe it is, and what the MRI examination is like.

  20. Vibration mode imaging.

    PubMed

    Zhang, Xiaoming; Zeraati, Mohammad; Kinnick, Randall R; Greenleaf, James F; Fatemi, Mostafa

    2007-06-01

    A new method for imaging the vibration mode of an object is investigated. The radiation force of ultrasound is used to scan the object at a resonant frequency of the object. The vibration of the object is measured by laser and the resulting acoustic emission from the object is measured by a hydrophone. It is shown that the measured signal is proportional to the value of the mode shape at the focal point of the ultrasound beam. Experimental studies are carried out on a mechanical heart valve and arterial phantoms. The mode images on the valve are made by the hydrophone measurement and confirmed by finite-element method simulations. Compared with conventional B-scan imaging on arterial phantoms, the mode imaging can show not only the interface of the artery and the gelatin, but also the vibration modes of the artery. The images taken on the phantom surface suggest that an image of an interior artery can be made by vibration measurements on the surface of the body. However, the image of the artery can be improved if the vibration of the artery is measured directly. Imaging of the structure in the gelatin or tissue can be enhanced by small bubbles and contrast agents.

  1. Simpler images, better results

    NASA Astrophysics Data System (ADS)

    Chance, Britton

    1999-03-01

    The very rapid development of optical technology has followed a pattern similar to that of nuclear magnetic resonance: first spectroscopy, and then imaging. The accomplishments in spectroscopy have been significant--among them, early detection of hematomas and quantitative oximetry (assuming that time and frequency domain instruments are used). Imaging has progressed somewhat later. The first images were obtained in Japan and the USA a few years ago, particularly of parietal stimulation of the human brain. Since then, rapid applications to the breast and limb, together with higher-resolution imaging of the brain, have made NIR imaging of functional activation and tumor detection available in reliable and affordable devices. The lecture addresses the applications of imaging to these three areas, particularly prefrontal imaging of cognitive function, breast tumor detection, and localized muscle activation in exercise. The imaging resolution achievable in functional activation appears to be a FWHM of 4 mm. The time required for an image is a few seconds or even much less. Breast image detection at 50 μs/pixel results in images obtainable in a few seconds or shorter times (kHz bandwidths are available). Finally, imaging of body organs is under study in this laboratory, particularly of the in utero fetus. It appears that photon migration theory now leads to the development of a wide range of imaging approaches for human tissue spectroscopy and imaging.

  2. MESSENGER Final Image

    NASA Image and Video Library

    2015-04-30

    Today, the MESSENGER spacecraft sent its final image. Originally planned to orbit Mercury for one year, the mission exceeded all expectations, lasting for over four years and acquiring extensive datasets with its seven scientific instruments and radio science investigation. This afternoon, the spacecraft succumbed to the pull of solar gravity and impacted Mercury's surface. The image shown here is the last one acquired and transmitted back to Earth by the mission. The image is located within the floor of the 93-kilometer-diameter crater Jokai. The spacecraft struck the planet just north of Shakespeare basin. Date acquired: April 30, 2015 Image Mission Elapsed Time (MET): 72716050 Image ID: 8422953 Instrument: Narrow Angle Camera (NAC) of the Mercury Dual Imaging System (MDIS) Center Latitude: 72.0° Center Longitude: 223.8° E Resolution: 2.1 meters/pixel Scale: This image is about 1 kilometer (0.6 miles) across Incidence Angle: 57.9° Emission Angle: 56.5° Phase Angle: 40.7° http://photojournal.jpl.nasa.gov/catalog/PIA19448

  3. Self-Image--Alien Image: A Bilateral Video Project.

    ERIC Educational Resources Information Center

    Kracsay, Susanne

    1995-01-01

    Describes a project in which Austrian and Hungarian students learned how people see each other by creating video pictures and letters of their neighbors (alien images) that were returned with corrections (self-images). Discussion includes student critiques, impressions, and misconceptions. (AEF)

  4. Calibration Image of Earth by Mars Color Imager

    NASA Image and Video Library

    2005-08-22

    Three days after the Mars Reconnaissance Orbiter's Aug. 12, 2005, launch, the NASA spacecraft was pointed toward Earth and the Mars Color Imager camera was powered up to acquire a suite of color and ultraviolet images of Earth and the Moon.

  5. Multi-institutional MicroCT image comparison of image-guided small animal irradiators

    NASA Astrophysics Data System (ADS)

    Johnstone, Chris D.; Lindsay, Patricia; E Graves, Edward; Wong, Eugene; Perez, Jessica R.; Poirier, Yannick; Ben-Bouchta, Youssef; Kanesalingam, Thilakshan; Chen, Haijian; E Rubinstein, Ashley; Sheng, Ke; Bazalova-Carter, Magdalena

    2017-07-01

    To recommend imaging protocols and establish tolerance levels for microCT image quality assurance (QA) performed on conformal image-guided small animal irradiators. A fully automated QA software SAPA (small animal phantom analyzer) for image analysis of the commercial Shelley micro-CT MCTP 610 phantom was developed, in which quantitative analyses of CT number linearity, signal-to-noise ratio (SNR), uniformity and noise, geometric accuracy, spatial resolution by means of modulation transfer function (MTF), and CT contrast were performed. Phantom microCT scans from eleven institutions acquired with four image-guided small animal irradiator units (including the commercial PXi X-RAD SmART and Xstrahl SARRP systems) with varying parameters used for routine small animal imaging were analyzed. Multi-institutional data sets were compared using SAPA, based on which tolerance levels for each QA test were established and imaging protocols for QA were recommended. By analyzing microCT data from 11 institutions, we established image QA tolerance levels for all image quality tests. CT number linearity set to R² > 0.990 was acceptable in microCT data acquired at all but three institutions. Acceptable SNR > 36 and noise levels < 55 HU were obtained at five of the eleven institutions, where failing scans were acquired with current-exposure time of less than 120 mAs. Acceptable spatial resolution (>1.5 lp mm⁻¹ for MTF = 0.2) was obtained at all but four institutions due to their large image voxel size used (>0.275 mm). Ten of the eleven institutions passed the set QA tolerance for geometric accuracy (<1.5%) and nine of the eleven institutions passed the QA tolerance for contrast (>2000 HU for 30 mg I ml⁻¹). We recommend performing imaging QA with 70 kVp, 1.5 mA, 120 s imaging time, 0.20 mm voxel size, and a frame rate of 5 fps for the PXi X-RAD SmART. For the Xstrahl SARRP, we recommend using 60 kVp, 1.0 mA, 240 s imaging time, 0.20
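
    Several of the QA figures above (SNR, noise, uniformity) reduce to simple region-of-interest statistics. The sketch below shows one plausible way to compute them from a phantom slice in Hounsfield units; the ROI definitions and the uniformity convention are assumptions, not the SAPA software's exact definitions.

```python
# ROI-based QA statistics for a uniform phantom slice given in HU.
# `center_roi` and `edge_rois` are index masks or slice tuples (assumptions).
import numpy as np

def roi_stats(slice_hu, center_roi, edge_rois):
    c = slice_hu[center_roi]
    snr = c.mean() / c.std()                          # signal-to-noise ratio
    noise = c.std()                                   # noise in HU
    edge_means = [slice_hu[r].mean() for r in edge_rois]
    uniformity = max(abs(m - c.mean()) for m in edge_means)  # center vs. edges
    return snr, noise, uniformity
```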

  6. Stokes image reconstruction for two-color microgrid polarization imaging systems.

    PubMed

    Lemaster, Daniel A

    2011-07-18

    The Air Force Research Laboratory has developed a new microgrid polarization imaging system capable of simultaneously reconstructing linear Stokes parameter images in two colors on a single focal plane array. In this paper, an effective method for extracting Stokes images is presented for this type of camera system. It is also shown that correlations between the color bands can be exploited to significantly increase overall spatial resolution. Test data is used to show the advantages of this approach over bilinear interpolation. The bounds (in terms of available reconstruction bandwidth) on image resolution are also provided.
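
    For reference, the linear Stokes parameters recovered by such a microgrid system follow directly from the four polarizer orientations of a super-pixel. The sketch below shows only the standard relations; the paper's correlation-based reconstruction and two-color handling are not reproduced.

```python
# Linear Stokes parameters from 0/45/90/135-degree microgrid measurements.
import numpy as np

def linear_stokes(i0, i45, i90, i135):
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
    s1 = i0 - i90                        # horizontal vs. vertical
    s2 = i45 - i135                      # +45 vs. -45 degrees
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-9)  # degree of linear polarization
    return s0, s1, s2, dolp
```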

  7. Synchrotron radiation imaging is a powerful tool to image brain microvasculature

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Mengqi; Sun, Danni; Xie, Yuanyuan

    2014-03-15

    Synchrotron radiation (SR) imaging is a powerful experimental tool for micrometer-scale imaging of microcirculation in vivo. This review discusses recent methodological advances and findings from morphological investigations of cerebral vascular networks during several neurovascular pathologies. In particular, it describes recent developments in SR microangiography for real-time assessment of the brain microvasculature under various pathological conditions in small animal models. It also covers studies that employed SR-based phase-contrast imaging to acquire 3D brain images and provide detailed maps of brain vasculature. In addition, a brief introduction of SR technology and current limitations of SR sources are described in this review. In the near future, SR imaging could transform into a common and informative imaging modality to resolve subtle details of cerebrovascular function.

  8. Correlation Plenoptic Imaging.

    PubMed

    D'Angelo, Milena; Pepe, Francesco V; Garuccio, Augusto; Scarcelli, Giuliano

    2016-06-03

    Plenoptic imaging is a promising optical modality that simultaneously captures the location and the propagation direction of light in order to enable three-dimensional imaging in a single shot. However, in standard plenoptic imaging systems, the maximum spatial and angular resolutions are fundamentally linked; thereby, the maximum achievable depth of field is inversely proportional to the spatial resolution. We propose to take advantage of the second-order correlation properties of light to overcome this fundamental limitation. In this Letter, we demonstrate that the correlation in both momentum and position of chaotic light leads to the enhanced refocusing power of correlation plenoptic imaging with respect to standard plenoptic imaging.

  9. Correlation Plenoptic Imaging

    NASA Astrophysics Data System (ADS)

    D'Angelo, Milena; Pepe, Francesco V.; Garuccio, Augusto; Scarcelli, Giuliano

    2016-06-01

    Plenoptic imaging is a promising optical modality that simultaneously captures the location and the propagation direction of light in order to enable three-dimensional imaging in a single shot. However, in standard plenoptic imaging systems, the maximum spatial and angular resolutions are fundamentally linked; thereby, the maximum achievable depth of field is inversely proportional to the spatial resolution. We propose to take advantage of the second-order correlation properties of light to overcome this fundamental limitation. In this Letter, we demonstrate that the correlation in both momentum and position of chaotic light leads to the enhanced refocusing power of correlation plenoptic imaging with respect to standard plenoptic imaging.

  10. Earth Observing-1 Advanced Land Imager: Imaging Performance On-Orbit

    NASA Technical Reports Server (NTRS)

    Hearn, D. R.

    2002-01-01

    This report analyzes the on-orbit imaging performance of the Advanced Land Imager (ALI) on the Earth Observing-1 satellite. The pre-flight calibrations are first summarized. The methods used to reconstruct and geometrically correct the image data from this push-broom sensor are described. The method used here does not refer to the position and attitude telemetry from the spacecraft. Rather, it is assumed that the image of the scene moves across the focal plane with a constant velocity, which can be ascertained from the image data itself. Next, an assortment of the images so reconstructed is presented. Color images sharpened with the 10-m panchromatic band data are shown, and the algorithm for producing them from the 30-m multispectral data is described. The approach taken for assessing spatial resolution is to compare the sharpness of features in the on-orbit image data with profiles predicted on the basis of the pre-flight calibrations. A large assortment of bridge profiles is analyzed, and very good fits to the predicted shapes are obtained. Lunar calibration scans are analyzed to examine the sharpness of the edge-spread function at the limb of the moon. The darkness of the space beyond the limb is better for this purpose than anything that could be simulated on the ground. From these scans, we find clear evidence of scattering in the optical system, as well as some weak ghost images. Scans of planets and stars are also analyzed. Stars are useful point sources of light at all wavelengths, and delineate the point-spread functions of the system. From a quarter-speed scan over the Pleiades, we find that the ALI can detect 6th magnitude stars. The quality of the reconstructed images verifies the capability of the ALI to produce Landsat-type multispectral data. The signal-to-noise ratio and panchromatic spatial resolution are considerably superior to those of the existing Landsat sensors. The spatial resolution is confirmed to be as good as it was designed to be.

  11. Advances in Gamma-Ray Imaging with Intensified Quantum-Imaging Detectors

    NASA Astrophysics Data System (ADS)

    Han, Ling

    Nuclear medicine, an important branch of modern medical imaging, is an essential tool for both diagnosis and treatment of disease. As the fundamental element of nuclear medicine imaging, the gamma camera is able to detect gamma-ray photons emitted by radiotracers injected into a patient and form an image of the radiotracer distribution, reflecting biological functions of organs or tissues. Recently, an intensified CCD/CMOS-based quantum detector, called iQID, was developed in the Center for Gamma-Ray Imaging. Originally designed as a novel type of gamma camera, iQID demonstrated ultra-high spatial resolution (< 100 micron) and many other advantages over traditional gamma cameras. This work focuses on advancing this conceptually-proven gamma-ray imaging technology to make it ready for both preclinical and clinical applications. To start with, a Monte Carlo simulation of the key light-intensification device, i.e. the image intensifier, was developed, which revealed the dominating factor(s) that limit energy resolution performance of the iQID cameras. For preclinical imaging applications, a previously-developed iQID-based single-photon-emission computed-tomography (SPECT) system, called FastSPECT III, was fully advanced in terms of data acquisition software, system sensitivity and effective FOV by developing and adopting a new photon-counting algorithm, thicker columnar scintillation detectors, and system calibration method. Originally designed for mouse brain imaging, the system is now able to provide full-body mouse imaging with sub-350-micron spatial resolution. To further advance the iQID technology to include clinical imaging applications, a novel large-area iQID gamma camera, called LA-iQID, was developed from concept to prototype. Sub-mm system resolution in an effective FOV of 188 mm x 188 mm has been achieved. The camera architecture, system components, design and integration, data acquisition, camera calibration, and performance evaluation are presented in

  12. Reduced reference image quality assessment via sub-image similarity based redundancy measurement

    NASA Astrophysics Data System (ADS)

    Mou, Xuanqin; Xue, Wufeng; Zhang, Lei

    2012-03-01

    The reduced reference (RR) image quality assessment (IQA) has been attracting much attention from researchers for its fidelity to human perception and flexibility in practice. A promising RR metric should be able to predict the perceptual quality of an image accurately while using as few features as possible. In this paper, a novel RR metric is presented, whose novelty lies in two aspects. Firstly, it measures the image redundancy by calculating the so-called Sub-image Similarity (SIS), and the image quality is measured by comparing the SIS between the reference image and the test image. Secondly, the SIS is computed by the ratios of NSE (Non-shift Edge) between pairs of sub-images. Experiments on two IQA databases (i.e., the LIVE and CSIQ databases) show that by using only 6 features, the proposed metric can work very well with high correlations between the subjective and objective scores. In particular, it works consistently well across all the distortion types.

  13. High-resolution ophthalmic imaging system

    DOEpatents

    Olivier, Scot S.; Carrano, Carmen J.

    2007-12-04

    A system for providing an improved resolution retina image comprising an imaging camera for capturing a retina image and a computer system operatively connected to the imaging camera, the computer producing short exposures of the retina image and providing speckle processing of the short exposures to provide the improved resolution retina image. The system comprises the steps of capturing a retina image, producing short exposures of the retina image, and speckle processing the short exposures of the retina image to provide the improved resolution retina image.

  14. Medical Imaging.

    ERIC Educational Resources Information Center

    Barker, M. C. J.

    1996-01-01

    Discusses four main types of medical imaging (x-ray, radionuclide, ultrasound, and magnetic resonance) and considers their relative merits. Describes important recent and possible future developments in image processing. (Author/MKR)

  15. Composition of a dewarped and enhanced document image from two view images.

    PubMed

    Koo, Hyung Il; Kim, Jinho; Cho, Nam Ik

    2009-07-01

    In this paper, we propose an algorithm to compose a geometrically dewarped and visually enhanced image from two document images taken by a digital camera at different angles. Unlike conventional works that require special equipment, assumptions on the contents of books, or complicated image acquisition steps, we estimate the unfolded book or document surface from the corresponding points between two images. For this purpose, the surface and camera matrices are estimated using structure reconstruction, 3-D projection analysis, and random sample consensus-based curve fitting with the cylindrical surface model. Because we do not need any assumption on the contents of books, the proposed method can be applied not only to optical character recognition (OCR), but also to the high-quality digitization of pictures in documents. In addition to the dewarping for a structurally better image, image mosaic is also performed for further improving the visual quality. By finding better parts of images (with less out-of-focus blur and/or without specular reflections) from either view, we compose a better image by stitching and blending them. These processes are formulated as energy minimization problems that can be solved using a graph cut method. Experiments on many kinds of book or document images show that the proposed algorithm works robustly and yields visually pleasing results. Also, the OCR rate of the resulting image is comparable to that of document images from a flatbed scanner.

  16. Digital imaging mass spectrometry.

    PubMed

    Bamberger, Casimir; Renz, Uwe; Bamberger, Andreas

    2011-06-01

    Methods to visualize the two-dimensional (2D) distribution of molecules by mass spectrometric imaging evolve rapidly and yield novel applications in biology, medicine, and material surface sciences. Most mass spectrometric imagers acquire high mass resolution spectra spot-by-spot and thereby scan the object's surface. Thus, imaging is slow and image reconstruction remains cumbersome. Here we describe an imaging mass spectrometer that exploits true imaging capabilities by ion-optical means for time-of-flight mass separation. The mass spectrometer is equipped with the ASIC Timepix chip as an array detector to acquire the position, mass, and intensity of ions that are imaged by matrix-assisted laser desorption/ionization (MALDI) directly from the target sample onto the detector. This imaging mass spectrometer has a spatial resolving power at the specimen of (84 ± 35) μm with a mass resolution of 45 and locates atoms or organic compounds on a surface area of up to ~2 cm². Extended laser spots of ~5 mm² on structured specimens allow parallel imaging of selected masses. The digital imaging mass spectrometer demonstrates high hit multiplicity, straightforward image reconstruction, and the potential for high-speed readout at 4 kHz or more. This device acquires true images in a simple way, much like a digital photographic camera. The technology may enable fast analysis of biomolecular samples in the near future.
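
    The time-of-flight mass separation mentioned above obeys the familiar linear-TOF relation t = L·sqrt(m/(2qU)). The helper below inverts it for m/z; the drift length and acceleration voltage are placeholder values, and instrument-specific calibration is ignored.

```python
# Back-of-the-envelope linear time-of-flight relation: an ion accelerated
# through potential U over drift length L arrives after t = L*sqrt(m/(2qU)),
# so m/z follows from the arrival time (generic formula, no calibration).
E_CHARGE = 1.602176634e-19      # elementary charge, C
AMU = 1.66053906660e-27         # atomic mass unit, kg

def mass_per_charge_amu(t_s, drift_length_m=1.0, accel_voltage_v=20e3):
    m_over_q = 2.0 * accel_voltage_v * (t_s / drift_length_m) ** 2  # kg/C
    return m_over_q * E_CHARGE / AMU   # mass in u per elementary charge
```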

  17. On combining image-based and ontological semantic dissimilarities for medical image retrieval applications

    PubMed Central

    Kurtz, Camille; Depeursinge, Adrien; Napel, Sandy; Beaulieu, Christopher F.; Rubin, Daniel L.

    2014-01-01

    Computer-assisted image retrieval applications can assist radiologists by identifying similar images in archives as a means of providing decision support. In the classical case, images are described using low-level features extracted from their contents, and an appropriate distance is used to find the best matches in the feature space. However, using low-level image features to fully capture the visual appearance of diseases is challenging and the semantic gap between these features and the high-level visual concepts in radiology may impair the system performance. To deal with this issue, the use of semantic terms to provide high-level descriptions of radiological image contents has recently been advocated. Nevertheless, most of the existing semantic image retrieval strategies are limited by two factors: they require manual annotation of the images using semantic terms and they ignore the intrinsic visual and semantic relationships between these annotations during the comparison of the images. Based on these considerations, we propose an image retrieval framework based on semantic features that relies on two main strategies: (1) automatic “soft” prediction of ontological terms that describe the image contents from multi-scale Riesz wavelets and (2) retrieval of similar images by evaluating the similarity between their annotations using a new term dissimilarity measure, which takes into account both image-based and ontological term relations. The combination of these strategies provides a means of accurately retrieving similar images in databases based on image annotations and can be considered as a potential solution to the semantic gap problem. We validated this approach in the context of the retrieval of liver lesions from computed tomographic (CT) images annotated with semantic terms of the RadLex ontology. The relevance of the retrieval results was assessed using two protocols: evaluation relative to a dissimilarity reference standard defined for pairs

  18. Restoration Of MEX SRC Images For Improved Topography: A New Image Product

    NASA Astrophysics Data System (ADS)

    Duxbury, T. C.

    2012-12-01

    Surface topography is an important constraint when investigating the evolution of solar system bodies. Topography is typically obtained from stereo photogrammetric or photometric (shape from shading) analyses of overlapping / stereo images and from laser / radar altimetry data. The ESA Mars Express Mission [1] carries a Super Resolution Channel (SRC) as part of the High Resolution Stereo Camera (HRSC) [2]. The SRC can build up overlapping / stereo coverage of Mars, Phobos and Deimos by viewing the surfaces from different orbits. The derivation of high precision topography data from the SRC raw images is degraded because the camera is out of focus. The point spread function (PSF) is multi-peaked, covering tens of pixels. After registering and co-adding hundreds of star images, an accurate SRC PSF was reconstructed and is being used to restore the SRC images to near blur free quality. The restored images offer a factor of about 3 in improved geometric accuracy as well as identifying the smallest of features to significantly improve the stereo photogrammetric accuracy in producing digital elevation models. The difference between blurred and restored images provides a new derived image product that can provide improved feature recognition to increase spatial resolution and topographic accuracy of derived elevation models. Acknowledgements: This research was funded by the NASA Mars Express Participating Scientist Program. [1] Chicarro, et al., ESA SP 1291(2009) [2] Neukum, et al., ESA SP 1291 (2009). A raw SRC image (h4235.003) of a Martian crater within Gale crater (the MSL landing site) is shown in the upper left and the restored image is shown in the lower left. A raw image (h0715.004) of Phobos is shown in the upper right and the difference between the raw and restored images, a new derived image data product, is shown in the lower right. The lower images, resulting from an image restoration process, significantly improve feature recognition for improved derived

  19. Image Algebra Matlab language version 2.3 for image processing and compression research

    NASA Astrophysics Data System (ADS)

    Schmalz, Mark S.; Ritter, Gerhard X.; Hayden, Eric

    2010-08-01

    Image algebra is a rigorous, concise notation that unifies linear and nonlinear mathematics in the image domain. Image algebra was developed under DARPA and US Air Force sponsorship at the University of Florida for over 15 years beginning in 1984. Image algebra has been implemented in a variety of programming languages designed specifically to support the development of image processing and computer vision algorithms and software. The University of Florida has been associated with the development of image algebra implementations in FORTRAN, Ada, Lisp, and C++. The latter implementation involved a class library, iac++, that supported image algebra programming in C++. Since image processing and computer vision are generally performed with operands that are array-based, the Matlab™ programming language is ideal for implementing the common subset of image algebra. Objects include sets and set operations, images and operations on images, as well as templates and image-template convolution operations. This implementation, called Image Algebra Matlab (IAM), has been found to be useful for research in data, image, and video compression, as described herein. Due to the widespread acceptance of the Matlab programming language in the computing community, IAM offers exciting possibilities for supporting a large group of users. The control over an object's computational resources provided to the algorithm designer by Matlab means that IAM programs can employ versatile representations for the operands and operations of the algebra, which are supported by the underlying libraries written in Matlab. In a previous publication, we showed how the functionality of IAC++ could be carried forth into a Matlab implementation, and provided practical details of a prototype implementation called IAM Version 1. In this paper, we further elaborate the purpose and structure of image algebra, then present a maturing implementation of Image Algebra Matlab called IAM Version 2.3, which extends the previous implementation

  20. Digital image transformation and rectification of spacecraft and radar images

    USGS Publications Warehouse

    Wu, S.S.C.

    1985-01-01

    Digital image transformation and rectification can be described in three categories: (1) digital rectification of spacecraft pictures on workable stereoplotters; (2) digital correction of radar image geometry; and (3) digital reconstruction of shaded relief maps and perspective views including stereograms. Digital rectification can make high-oblique pictures workable on stereoplotters that would otherwise not accommodate such extreme tilt angles. It also enables panoramic line-scan geometry to be used to compile contour maps with photogrammetric plotters. Rectifications were digitally processed on both Viking Orbiter and Lander pictures of Mars as well as radar images taken by various radar systems. By merging digital terrain data with image data, perspective and three-dimensional views of Olympus Mons and Tithonium Chasma, also of Mars, are reconstructed through digital image processing. © 1985.

  1. Micro-optical system based 3D imaging for full HD depth image capturing

    NASA Astrophysics Data System (ADS)

    Park, Yong-Hwa; Cho, Yong-Chul; You, Jang-Woo; Park, Chang-Young; Yoon, Heesun; Lee, Sang-Hun; Kwon, Jong-Oh; Lee, Seung-Wan

    2012-03-01

    A 20 MHz-switching, high-speed image shutter device for 3D image capturing and its application to a system prototype are presented. For 3D image capturing, the system utilizes the time-of-flight (TOF) principle by means of a 20 MHz high-speed micro-optical image modulator, a so-called 'optical shutter'. The high-speed image modulation is obtained using the electro-optic operation of a multi-layer stacked structure having diffractive mirrors and an optical resonance cavity that maximizes the magnitude of optical modulation. The optical shutter device is specially designed and fabricated with low resistance-capacitance cell structures having a small RC time constant. The optical shutter is positioned in front of a standard high-resolution CMOS image sensor and modulates the IR image reflected from the object to capture a depth image. The proposed optical shutter device enables capture of a full HD depth image with mm-scale depth accuracy, the largest depth-image resolution among the state of the art, which has been limited to VGA. The 3D camera prototype realizes a color/depth concurrent-sensing optical architecture to capture 14 Mp color and full HD depth images simultaneously. The resulting high-definition color/depth images and the capturing device have a crucial impact on the 3D ecosystem in the IT industry, especially as a 3D image-sensing means in the fields of 3D cameras, gesture recognition, user interfaces, and 3D displays. This paper presents the MEMS-based optical shutter design, fabrication, characterization, 3D camera system prototype, and image test results.
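
    The depth calculation behind such an indirect time-of-flight camera is compact: with modulation frequency f, a measured phase delay φ maps to depth d = cφ/(4πf). The sketch below uses a common four-sample phase estimate as a generic illustration, not the prototype's actual shutter demodulation; the sample ordering is an assumption.

```python
# Generic indirect time-of-flight depth from four phase-stepped samples
# (nominally at 0, 90, 180, 270 degrees); unambiguous range is c/(2f).
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def tof_depth(a0, a1, a2, a3, f_mod=20e6):
    phase = np.arctan2(a3 - a1, a0 - a2)       # four-bucket phase estimate
    phase = np.mod(phase, 2 * np.pi)
    return C * phase / (4 * np.pi * f_mod)     # depth in meters
```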

  2. Osteoporosis Imaging: State of the Art and Advanced Imaging

    PubMed Central

    2012-01-01

    Osteoporosis is becoming an increasingly important public health issue, and effective treatments to prevent fragility fractures are available. Osteoporosis imaging is of critical importance in identifying individuals at risk for fractures who would require pharmacotherapy to reduce fracture risk and also in monitoring response to treatment. Dual x-ray absorptiometry is currently the state-of-the-art technique to measure bone mineral density and to diagnose osteoporosis according to the World Health Organization guidelines. Motivated by a 2000 National Institutes of Health consensus conference, substantial research efforts have focused on assessing bone quality by using advanced imaging techniques. Among these techniques aimed at better characterizing fracture risk and treatment effects, high-resolution peripheral quantitative computed tomography (CT) currently plays a central role, and a large number of recent studies have used this technique to study trabecular and cortical bone architecture. Other techniques to analyze bone quality include multidetector CT, magnetic resonance imaging, and quantitative ultrasonography. In addition to quantitative imaging techniques measuring bone density and quality, imaging needs to be used to diagnose prevalent osteoporotic fractures, such as spine fractures on chest radiographs and sagittal multidetector CT reconstructions. Radiologists need to be sensitized to the fact that the presence of fragility fractures will alter patient care, and these fractures need to be described in the report. This review article covers state-of-the-art imaging techniques to measure bone mineral density, describes novel techniques to study bone quality, and focuses on how standard imaging techniques should be used to diagnose prevalent osteoporotic fractures. © RSNA, 2012 PMID:22438439

  3. Robustness of speckle imaging techniques applied to horizontal imaging scenarios

    NASA Astrophysics Data System (ADS)

    Bos, Jeremy P.

    Atmospheric turbulence near the ground severely limits the quality of imagery acquired over long horizontal paths. In defense, surveillance, and border security applications, there is interest in deploying man-portable, embedded systems incorporating image reconstruction to improve the quality of imagery available to operators. To be effective, these systems must operate over significant variations in turbulence conditions while also being subject to other variations due to operation by novice users. Systems that meet these requirements and are otherwise designed to be immune to the factors that cause variation in performance are considered robust. In addition to robustness in design, the portable nature of these systems implies a preference for systems with a minimum level of computational complexity. Speckle imaging methods are one of a variety of methods recently proposed for use in man-portable horizontal imagers. In this work, the robustness of speckle imaging methods is established by identifying a subset of design parameters that provide immunity to the expected variations in operating conditions while minimizing the computation time necessary for image recovery. This performance evaluation is made possible using a novel technique for simulating anisoplanatic image formation. I find that incorporating as few as 15 image frames and 4 estimates of the object phase per reconstructed frame provides an average 45% reduction in Mean Squared Error (MSE) and a 68% reduction in the deviation of MSE. In addition, the Knox-Thompson phase recovery method is demonstrated to produce images in half the time required by the bispectrum. Finally, it is shown that certain blind image quality metrics can be used in place of the MSE to evaluate reconstruction quality in field scenarios. Using blind metrics rather than depending on user estimates allows for reconstruction quality that differs from the minimum MSE by as little as 1%, significantly reducing the deviation in

  4. Imaging Active Giants and Comparisons to Doppler Imaging

    NASA Astrophysics Data System (ADS)

    Roettenbacher, Rachael

    2018-04-01

    In the outer layers of cool, giant stars, stellar magnetism stifles convection creating localized starspots, analogous to sunspots. Because they frequently cover much larger regions of the stellar surface than sunspots, starspots of giant stars have been imaged using a variety of techniques to understand, for example, stellar magnetism, differential rotation, and spot evolution. Active giants have been imaged using photometric, spectroscopic, and, only recently, interferometric observations. Interferometry has provided a way to unambiguously see stellar surfaces without the degeneracies experienced by other methods. The only facility presently capable of obtaining the sub-milliarcsecond resolution necessary to not only resolve some giant stars, but also features on their surfaces is the Center for High-Angular Resolution Astronomy (CHARA) Array. Here, an overview will be given of the results of imaging active giants and details on the recent comparisons of simultaneous interferometric and Doppler images.

  5. Multispectral imaging for biometrics

    NASA Astrophysics Data System (ADS)

    Rowe, Robert K.; Corcoran, Stephen P.; Nixon, Kristin A.; Ostrom, Robert E.

    2005-03-01

    Automated identification systems based on fingerprint images are subject to two significant types of error: an incorrect decision about the identity of a person due to a poor quality fingerprint image and incorrectly accepting a fingerprint image generated from an artificial sample or altered finger. This paper discusses the use of multispectral sensing as a means to collect additional information about a finger that significantly augments the information collected using a conventional fingerprint imager based on total internal reflectance. In the context of this paper, "multispectral sensing" is used broadly to denote a collection of images taken under different polarization conditions and illumination configurations, as well as using multiple wavelengths. Background information is provided on conventional fingerprint imaging. A multispectral imager for fingerprint imaging is then described and a means to combine the two imaging systems into a single unit is discussed. Results from an early-stage prototype of such a system are shown.

  6. Mutual conversion between B-mode image and acoustic impedance image

    NASA Astrophysics Data System (ADS)

    Chean, Tan Wei; Hozumi, Naohiro; Yoshida, Sachiko; Kobayashi, Kazuto; Ogura, Yuki

    2017-07-01

    To study the acoustic properties underlying a B-mode image, two analysis methods were proposed in this report. The first method is the conversion of an acoustic impedance image into a B-mode image (Z to B). Time domain reflectometry theory and a transmission line model were used as references in the calculation. The second method is the direct conversion of a B-mode image into an acoustic impedance image (B to Z). The theoretical background of the second method is similar to that of the first; however, the calculation runs in the opposite direction. Significant scatter, refraction, and attenuation were assumed not to take place during the propagation of the ultrasonic wave and hence were ignored in both calculations. In this study, rat cerebellar tissue and human cheek skin were used to determine the feasibility of the first and second methods, respectively. Good results were obtained, and both methods showed possible applications in the study of the acoustic properties behind B-mode images.
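
    The transmission-line relation that links reflections to impedance, and which underlies both conversions above, can be applied interface by interface: a reflection coefficient r at a boundary gives Z2 = Z1(1 + r)/(1 - r). The sketch below shows this bookkeeping in its simplest form, ignoring scatter, refraction, and attenuation as the authors do; the layer-by-layer propagation of the real method is simplified.

```python
# Layer-by-layer impedance recovery from reflection coefficients,
# using Z2 = Z1 * (1 + r) / (1 - r) at each interface.
def next_impedance(z_prev, reflection):
    return z_prev * (1.0 + reflection) / (1.0 - reflection)

def impedance_profile(z0, reflections):
    z = [z0]                       # known impedance of the coupling medium
    for r in reflections:          # reflection coefficient at each interface
        z.append(next_impedance(z[-1], r))
    return z
```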

  7. X-ray imaging with amorphous silicon active matrix flat-panel imagers (AMFPIs)

    NASA Astrophysics Data System (ADS)

    El-Mohri, Youcef; Antonuk, Larry E.; Jee, Kyung-Wook; Maolinbay, Manat; Rong, Xiujiang; Siewerdsen, Jeffrey H.; Verma, Manav; Zhao, Qihua

    1997-07-01

    Recent advances in thin-film electronics technology have opened the way for the use of flat-panel imagers in a number of medical imaging applications. These novel imagers offer real time digital readout capabilities (˜30 frames per second), radiation hardness (>10⁶ cGy), large area (30×40 cm²) and compactness (˜1 cm). Such qualities make them strong candidates for the replacement of conventional x-ray imaging technologies such as film-screen and image intensifier systems. In this report, qualities and potential of amorphous silicon based active matrix flat-panel imagers are outlined for various applications such as radiation therapy, radiography, fluoroscopy and mammography.

  8. MR Imaging Applications in Mild Traumatic Brain Injury: An Imaging Update

    PubMed Central

    Wu, Xin; Kirov, Ivan I.; Gonen, Oded; Ge, Yulin; Grossman, Robert I.

    2016-01-01

    Mild traumatic brain injury (mTBI), also commonly referred to as concussion, affects millions of Americans annually. Although computed tomography is the first-line imaging technique for all traumatic brain injury, it is incapable of providing long-term prognostic information in mTBI. In the past decade, the amount of research related to magnetic resonance (MR) imaging of mTBI has grown exponentially, partly due to development of novel analytical methods, which are applied to a variety of MR techniques. Here, evidence of subtle brain changes in mTBI as revealed by these techniques, which are not demonstrable by conventional imaging, will be reviewed. These changes can be considered in three main categories of brain structure, function, and metabolism. Macrostructural and microstructural changes have been revealed with three-dimensional MR imaging, susceptibility-weighted imaging, diffusion-weighted imaging, and higher order diffusion imaging. Functional abnormalities have been described with both task-mediated and resting-state blood oxygen level–dependent functional MR imaging. Metabolic changes suggesting neuronal injury have been demonstrated with MR spectroscopy. These findings improve understanding of the true impact of mTBI and its pathogenesis. Further investigation may eventually lead to improved diagnosis, prognosis, and management of this common and costly condition. © RSNA, 2016 PMID:27183405

  9. Imaging properties and its improvements of scanning/imaging x-ray microscope

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Takeuchi, Akihisa, E-mail: take@spring8.or.jp; Uesugi, Kentaro; Suzuki, Yoshio

    A scanning/imaging X-ray microscope (SIXM) system has been developed at SPring-8. The SIXM consists of a scanning X-ray microscope with a one-dimensional (1D) X-ray focusing device and an imaging (full-field) X-ray microscope with a 1D X-ray objective. The motivation of the SIXM system is to realize quantitative and highly sensitive multimodal 3D X-ray tomography by taking advantage of both the scanning X-ray microscope using a multi-pixel detector and the imaging X-ray microscope. The data acquisition process for a 2D image is completely different between the horizontal and vertical directions: a 1D signal is obtained by linear scanning, while the other dimension is obtained with the imaging optics. This condition has caused a serious problem in the imaging properties, in that the image quality in the vertical direction has been much worse than that in the horizontal direction. In this paper, two approaches to solve this problem are presented. One is introducing a Fourier transform method for phase retrieval from one phase-derivative image, and the other is to develop and employ a 1D diffuser to produce asymmetrical coherent illumination.
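
    The Fourier-transform phase retrieval mentioned above integrates a phase-derivative image in the frequency domain. A minimal sketch of that general technique is given below; boundary handling, regularization, and the SIXM-specific geometry are omitted, and the derivative direction is assumed to be horizontal.

```python
# Recover a phase map (up to a per-row constant) from its x-derivative by
# dividing the spectrum by i*2*pi*fx and transforming back.
import numpy as np

def integrate_phase_x(dphi_dx, dx=1.0):
    ny, nx = dphi_dx.shape
    fx = np.fft.fftfreq(nx, d=dx)               # spatial frequencies along x
    denom = 2j * np.pi * fx
    denom[0] = 1.0                              # avoid divide-by-zero at DC
    spec = np.fft.fft(dphi_dx, axis=1) / denom
    spec[:, 0] = 0.0                            # the mean (DC) phase is unrecoverable
    return np.real(np.fft.ifft(spec, axis=1))
```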

  10. Review of Image Quality Measures for Solar Imaging

    NASA Astrophysics Data System (ADS)

    Popowicz, Adam; Radlak, Krystian; Bernacki, Krzysztof; Orlov, Valeri

    2017-12-01

    Observations of the solar photosphere from the ground encounter significant problems caused by Earth's turbulent atmosphere. Before image reconstruction techniques can be applied, the frames obtained in the most favorable atmospheric conditions (the so-called lucky frames) have to be carefully selected. However, estimating the quality of images containing complex photospheric structures is not a trivial task, and the standard routines applied in nighttime lucky imaging observations are not applicable. In this paper we evaluate 36 methods dedicated to the assessment of image quality, which were presented in the literature over the past 40 years. We compare their effectiveness on simulated solar observations of both active regions and granulation patches, using reference data obtained by the Solar Optical Telescope on the Hinode satellite. To create images that are affected by a known degree of atmospheric degradation, we employed the random wave vector method, which faithfully models all the seeing characteristics. The results provide useful information about the method performances, depending on the average seeing conditions expressed by the ratio of the telescope's aperture to the Fried parameter, D/r0. The comparison identifies three methods for consideration by observers: Helmli and Scherer's mean, the median filter gradient similarity, and the discrete cosine transform energy ratio. While the first method requires less computational effort and can be used effectively in virtually any atmospheric conditions, the second method shows its superiority at good seeing (D/r0<4). The third method should mainly be considered for the post-processing of strongly blurred images.
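
    As a simple example of the family of quality measures surveyed above, a gradient-energy ("Tenengrad"-style) sharpness score can be used to rank candidate lucky frames. This is an illustrative stand-in, not one of the three specific measures the paper recommends, and the normalization is an arbitrary choice.

```python
# Normalized gradient-energy sharpness score for frame selection:
# larger values indicate sharper (less seeing-degraded) frames.
import numpy as np

def gradient_energy(frame):
    gy, gx = np.gradient(frame.astype(float))
    return float(np.mean(gx**2 + gy**2)) / (frame.mean() ** 2 + 1e-12)
```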

  11. Dual-modality imaging

    NASA Astrophysics Data System (ADS)

    Hasegawa, Bruce; Tang, H. Roger; Da Silva, Angela J.; Wong, Kenneth H.; Iwata, Koji; Wu, Max C.

    2001-09-01

    In comparison to conventional medical imaging techniques, dual-modality imaging offers the advantage of correlating anatomical information from X-ray computed tomography (CT) with functional measurements from single-photon emission computed tomography (SPECT) or with positron emission tomography (PET). The combined X-ray/radionuclide images from dual-modality imaging can help the clinician to differentiate disease from normal uptake of radiopharmaceuticals, and to improve diagnosis and staging of disease. In addition, phantom and animal studies have demonstrated that a priori structural information from CT can be used to improve quantification of tissue uptake and organ function by correcting the radionuclide data for errors due to photon attenuation, partial volume effects, scatter radiation, and other physical effects. Dual-modality imaging therefore is emerging as a method of improving the visual quality and the quantitative accuracy of radionuclide imaging for diagnosis of patients with cancer and heart disease.

  12. Molecular Imaging in Synthetic Biology, and Synthetic Biology in Molecular Imaging.

    PubMed

    Gilad, Assaf A; Shapiro, Mikhail G

    2017-06-01

    Biomedical synthetic biology is an emerging field in which cells are engineered at the genetic level to carry out novel functions with relevance to biomedical and industrial applications. This approach promises new treatments, imaging tools, and diagnostics for diseases ranging from gastrointestinal inflammatory syndromes to cancer, diabetes, and neurodegeneration. As these cellular technologies undergo pre-clinical and clinical development, it is becoming essential to monitor their location and function in vivo, necessitating appropriate molecular imaging strategies, and therefore, we have created an interest group within the World Molecular Imaging Society focusing on synthetic biology and reporter gene technologies. Here, we highlight recent advances in biomedical synthetic biology, including bacterial therapy, immunotherapy, and regenerative medicine. We then discuss emerging molecular imaging approaches to facilitate in vivo applications, focusing on reporter genes for noninvasive modalities such as magnetic resonance, ultrasound, photoacoustic imaging, bioluminescence, and radionuclear imaging. Because reporter genes can be incorporated directly into engineered genetic circuits, they are particularly well suited to imaging synthetic biological constructs, and developing them provides opportunities for creative molecular and genetic engineering.

  13. Synthetic Foveal Imaging Technology

    NASA Technical Reports Server (NTRS)

    Hoenk, Michael; Monacos, Steve; Nikzad, Shouleh

    2009-01-01

    Synthetic Foveal Imaging Technology (SyFT) is an emerging discipline of image capture and image-data processing that offers the prospect of greatly increased capabilities for real-time processing of large, high-resolution images (including mosaic images) for such purposes as automated recognition and tracking of moving objects of interest. SyFT offers a solution to the image-data processing problem arising from the proposed development of gigapixel mosaic focal-plane image-detector assemblies for very wide field-of-view, high-resolution imaging intended to detect and track sparse objects or events within narrow subfields of view. Without the dynamic adaptation afforded by SyFT, identifying and tracking such objects or events would require post-processing an image-data space consisting of terabytes of data. Such post-processing would be time-consuming and, as a consequence, could miss significant events that, because of their time evolution, could not be observed at all, or could not be observed at the required fidelity, without real-time adaptations such as adjusting focal-plane operating conditions or aiming the focal plane in different directions to track those events. The basic concept of foveal imaging is straightforward: in imitation of a natural eye, a foveal-vision image sensor is designed to offer higher resolution in a small region of interest (ROI) within its field of view. Foveal vision reduces the amount of unwanted information that must be transferred from the image sensor to external image-data-processing circuitry. This basic concept is not new in itself: image sensors based on it have been described in several previous NASA Tech Briefs articles; active-pixel integrated-circuit image sensors that can be programmed in real time to effect foveal artificial vision on demand are one such example. What is new in SyFT is a synergistic combination of recent

  14. Imaging through Fog Using Polarization Imaging in the Visible/NIR/SWIR Spectrum

    DTIC Science & Technology

    2017-01-11

    Figure 6: Basic architecture of the ... [caption truncated]. Figure 7: Basic architecture of post-processing techniques to recover a dehazed image from a raw image.

  15. Social image quality

    NASA Astrophysics Data System (ADS)

    Qiu, Guoping; Kheiri, Ahmed

    2011-01-01

    Current subjective image quality assessments have been developed in laboratory environments, under controlled conditions, and depend on the participation of limited numbers of observers. In this research, with the help of Web 2.0 and social media technology, a new method for building a subjective image quality metric has been developed in which the observers are Internet users. A website with a simple user interface has been constructed that enables Internet users from anywhere, at any time, to vote for the better-quality version of a pair of versions of the same image. Users' votes are recorded and used to rank the images according to their perceived visual quality. We have developed three rank aggregation algorithms to process the recorded pair-comparison data: the first uses a naive approach, the second employs a Condorcet method, and the third uses Dykstra's extension of the Bradley-Terry method. The website has been collecting data for about three months and had accumulated over 10,000 votes at the time of writing this paper. Results show that the Internet and its allied technologies, such as crowdsourcing, offer a promising new paradigm for image and video quality assessment in which hundreds of thousands of Internet users can contribute to building more robust image quality metrics. We have made the Internet-user-generated social image quality (SIQ) data of a public image database available online (http://www.hdri.cs.nott.ac.uk/siq/) to provide the image quality research community with a new source of ground truth data. The website continues to collect votes and will include more public image databases; it will also be extended to include videos to collect social video quality (SVQ) data. All data will be made publicly available on the website in due course.
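    The third aggregation route builds on the Bradley-Terry model. A minimal sketch of the classical Bradley-Terry fit from a matrix of pairwise wins (the standard minorize-maximize, or Zermelo, iteration) is given below; it is a generic baseline, not the paper's Dykstra extension, and the array names are illustrative.

    ```python
    import numpy as np

    def bradley_terry_scores(wins, n_iter=200, tol=1e-9):
        """Estimate Bradley-Terry quality scores from pairwise comparison counts.

        wins[i, j] is the number of times image i was preferred over image j
        (diagonal assumed zero). Returns scores normalised to sum to 1; higher
        means better perceived quality. Uses the classical MM (Zermelo) update.
        """
        n = wins.shape[0]
        games = wins + wins.T                 # total comparisons per pair
        total_wins = wins.sum(axis=1)         # wins per image
        p = np.ones(n) / n
        for _ in range(n_iter):
            denom = (games / (p[:, None] + p[None, :] + 1e-15)).sum(axis=1)
            p_new = total_wins / np.maximum(denom, 1e-15)
            p_new /= p_new.sum()
            if np.max(np.abs(p_new - p)) < tol:
                return p_new
            p = p_new
        return p
    ```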

  16. Global image analysis to determine suitability for text-based image personalization

    NASA Astrophysics Data System (ADS)

    Ding, Hengzhou; Bala, Raja; Fan, Zhigang; Bouman, Charles A.; Allebach, Jan P.

    2012-03-01

    Image personalization has lately become a topic of growing interest. Images with variable elements such as text usually appear much more appealing to their recipients. In this paper, we describe a method to pre-analyze an image and automatically suggest to the user the regions within it that are most suitable for text-based personalization. The method is based on input gathered from experiments conducted with professional designers. It has been observed that regions that are spatially smooth and regions with existing text (e.g., signage, banners) are the best candidates for personalization. This gives rise to two sets of corresponding algorithms: one for identifying smooth areas and one for locating text regions. Furthermore, based on the smooth and text regions found in the image, we derive an overall metric that rates the image in terms of its suitability for personalization (SFP).
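    As an illustration of the first class of algorithms, one simple way to flag spatially smooth candidate regions is a local-variance threshold, sketched below; this is a generic stand-in rather than the designer-informed method described in the paper, and the window size and threshold are assumptions.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def smooth_region_mask(gray, window=15, var_threshold=25.0):
        """Flag pixels whose local variance is low enough for text overlay.

        Local variance is E[x^2] - E[x]^2 over a sliding window; regions below
        the (illustrative) threshold are candidates for text-based personalization.
        """
        img = gray.astype(np.float64)
        mean = uniform_filter(img, size=window)
        mean_sq = uniform_filter(img * img, size=window)
        local_var = np.maximum(mean_sq - mean * mean, 0.0)
        return local_var < var_threshold
    ```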

  17. Prospects for Image Restoration

    NASA Astrophysics Data System (ADS)

    Hunt, B. R.

    Image restoration is the theory and practice of processing an image to correct it for distortions caused by the image formation process. The first efforts in image restoration appeared more than 25 years ago. In this article we review the more recent trends in image restoration and discuss the main directions that are expected to influence the continued evolution of this technology.

  18. About Galaxy of Images

    Science.gov Websites


  19. Quantitative assessment of image motion blur in diffraction images of moving biological cells

    NASA Astrophysics Data System (ADS)

    Wang, He; Jin, Changrong; Feng, Yuanming; Qi, Dandan; Sa, Yu; Hu, Xin-Hua

    2016-02-01

    Motion blur (MB) presents a significant challenge for obtaining high-contrast image data from biological cells with a polarization diffraction imaging flow cytometry (p-DIFC) method. A new p-DIFC experimental system has been developed to evaluate the MB and its effect on image analysis using a time-delay-integration (TDI) CCD camera. Diffraction images of MCF-7 and K562 cells have been acquired with different speed-mismatch ratios and compared to characterize MB quantitatively. Frequency analysis of the diffraction images shows that the degree of MB can be quantified by bandwidth variations of the diffraction images along the motion direction. The analytical results were confirmed by the p-DIFC image data acquired at different speed-mismatch ratios and used to validate a method of numerical simulation of MB on blur-free diffraction images, which provides a useful tool to examine the blurring effect on diffraction images acquired from the same cell. These results provide insight into the dependence of diffraction images on MB and allow significant improvement of rapid biological cell assays with the p-DIFC method.

  20. A fully convolutional networks (FCN) based image segmentation algorithm in binocular imaging system

    NASA Astrophysics Data System (ADS)

    Long, Zourong; Wei, Biao; Feng, Peng; Yu, Pengwei; Liu, Yuanyuan

    2018-01-01

    This paper proposes an image segmentation algorithm based on fully convolutional networks (FCN) for a binocular imaging system under various circumstances. Semantic segmentation addresses the task at the pixel level: the FCN classifies each pixel, thereby achieving semantic segmentation of the image. Unlike classical convolutional neural networks (CNN), the FCN uses convolution layers instead of fully connected layers, so it can accept images of arbitrary size. In this paper, we combine the convolutional neural network with scale-invariant feature matching to solve the problem of visual positioning in different scenarios. All high-resolution images are captured with our calibrated binocular imaging system, and several groups of test data are collected to verify the method. The experimental results show that the binocular images are effectively segmented without over-segmentation. With these segmented images, feature matching via the SURF method is implemented to obtain regional information for further image processing. The final positioning procedure shows that the results are acceptable in the range of 1.4-1.6 m, with a distance error of less than 10 mm.
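    A minimal sketch of the FCN idea described above (convolutional feature layers, a 1x1 convolutional classifier in place of fully connected layers, and upsampling back to the input size) is given below in PyTorch; the layer widths and class count are illustrative and do not reproduce the network used in the paper.

    ```python
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TinyFCN(nn.Module):
        """Minimal fully convolutional network: accepts images of arbitrary size
        and returns a per-pixel class score map of the same spatial size."""
        def __init__(self, num_classes=2):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
            )
            # A 1x1 convolution replaces the fully connected classifier of a CNN.
            self.classifier = nn.Conv2d(64, num_classes, kernel_size=1)

        def forward(self, x):
            h, w = x.shape[-2:]
            scores = self.classifier(self.features(x))
            # Upsample the coarse score map back to the input resolution.
            return F.interpolate(scores, size=(h, w), mode="bilinear", align_corners=False)

    # logits = TinyFCN()(torch.randn(1, 3, 480, 640))  # shape: (1, 2, 480, 640)
    ```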

  1. Progress in 3D imaging and display by integral imaging

    NASA Astrophysics Data System (ADS)

    Martinez-Cuenca, R.; Saavedra, G.; Martinez-Corral, M.; Pons, A.; Javidi, B.

    2009-05-01

    Three-dimensionality is currently considered an important added value in imaging devices, and therefore the search for an optimum 3D imaging and display technique is a hot topic attracting substantial research effort. Above all, 3D monitors should provide observers with different perspectives of a 3D scene simply by varying the head position. Three-dimensional imaging techniques have the potential to establish a future mass market in the fields of entertainment and communications. Integral imaging (InI), which can capture true 3D color images, has been seen as the right technology for 3D viewing by audiences of more than one person. Owing to its advanced degree of development, InI technology could be ready for commercialization in the coming years. This development is the result of a strong research effort performed over the past few years by many groups. Since integral imaging is still an emerging technology, the first aim of the "3D Imaging and Display Laboratory" at the University of Valencia has been a thorough study of the principles that govern its operation. It is remarkable that some of these principles were first recognized and characterized by our group. Other contributions of our research have addressed some of the classical limitations of InI systems, such as the limited depth of field (in pickup and in display), the poor axial and lateral resolution, the pseudoscopic-to-orthoscopic conversion, the production of 3D images with continuous relief, and the limited range of viewing angles of InI monitors.

  2. Information content exploitation of imaging spectrometer's images for lossless compression

    NASA Astrophysics Data System (ADS)

    Wang, Jianyu; Zhu, Zhenyu; Lin, Kan

    1996-11-01

    An imaging spectrometer such as MAIS produces a tremendous volume of image data, with a raw data rate of up to 5.12 Mbps, which urgently needs a real-time, efficient, and reversible compression implementation. Between a lossy scheme with high compression ratio and a lossless scheme with high fidelity, the choice must be based on an analysis of the particular information content of each imaging spectrometer's image data. In this paper, we present a careful analysis of information-preserving compression for the imaging spectrometer MAIS, with an entropy and autocorrelation study of its hyperspectral images. First, the statistical information in an actual MAIS image, captured at Marble Bar, Australia, is measured through its entropy, conditional entropy, mutual information, and autocorrelation coefficients in both the spatial dimensions and the spectral dimension. These analyses show that there is high redundancy in the spatial dimensions, but that the correlation in the spectral dimension of the raw images is smaller than expected. The main reason for the nonstationarity in the spectral dimension is attributed to the instrument's discrepancies in detector response and channel amplification across different spectral bands. To restore the natural spectral correlation, we preprocess the signal in advance. There are two methods to accomplish this: on-board radiation calibration and normalization; a better result is achieved by the former. After preprocessing, the spectral correlation increases so much that it contributes substantial redundancy in addition to the spatial correlation. Finally, an on-board hardware implementation of the lossless compression is presented, with ideal results.
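    The kind of statistics the analysis rests on (per-band entropy and band-to-band correlation for a hyperspectral cube) can be sketched as follows; the array layout, bit depth, and the restriction to adjacent bands are assumptions for illustration.

    ```python
    import numpy as np

    def band_entropy(band, levels=256):
        """Shannon entropy (bits/pixel) of one spectral band, assuming integer
        digital numbers in the range [0, levels)."""
        hist, _ = np.histogram(band, bins=levels, range=(0, levels))
        p = hist / hist.sum()
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    def adjacent_band_correlation(cube):
        """Correlation coefficient between each pair of neighbouring spectral bands.

        cube has shape (bands, rows, cols); high values indicate spectral redundancy
        that a lossless coder can exploit after calibration/normalization.
        """
        flat = cube.reshape(cube.shape[0], -1).astype(np.float64)
        return np.array([np.corrcoef(flat[b], flat[b + 1])[0, 1]
                         for b in range(flat.shape[0] - 1)])
    ```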

  3. Single-pixel imaging by Hadamard transform and its application for hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Mizutani, Yasuhiro; Shibuya, Kyuki; Taguchi, Hiroki; Iwata, Tetsuo; Takaya, Yasuhiro; Yasui, Takeshi

    2016-10-01

    In this paper, we report comparisons of single-pixel imaging using the Hadamard transform (HT) and ghost imaging (GI) from the viewpoint of visibility under weak-light conditions. To compare the two methods, we discuss image quality based on experimental results and numerical analysis. To acquire images with the HT method, we illuminated Hadamard-pattern masks and reconstructed the image by an orthogonal transform. The GI method, on the other hand, acquires images by illuminating random patterns and performing a correlation measurement. To compare the two methods under weak light, we controlled the illumination intensity of a DMD projector to a signal-to-noise ratio of about 0.1. Although the processing speed of the HT method was faster than that of GI, the GI method has an advantage for detection under weak-light conditions. The essential difference between the HT and GI methods is discussed in terms of the reconstruction process. Finally, we also show a typical application of single-pixel imaging: hyperspectral imaging using dual optical frequency combs. The optical setup consists of two fiber lasers, a spatial light modulator for generating pattern illumination, and a single-pixel detector. We successfully detected hyperspectral images over the range from 1545 to 1555 nm at 0.01 nm resolution.
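    A toy simulation of the Hadamard-transform single-pixel scheme is sketched below: each +/-1 Hadamard pattern yields one bucket-detector measurement, and the orthogonality of the Hadamard matrix inverts the measurements back to an image. It is a generic illustration, not the experimental processing chain, and the additive noise model is an assumption.

    ```python
    import numpy as np
    from scipy.linalg import hadamard

    def single_pixel_hadamard(scene, noise_sigma=0.0, rng=None):
        """Simulate single-pixel imaging with Hadamard patterns.

        scene: square image whose pixel count is a power of two (e.g. 32x32).
        Each row of the Hadamard matrix is one +/-1 illumination pattern; the
        single-pixel detector records the inner product of pattern and scene.
        Because H @ H.T = N * I, the scene is recovered as H.T @ y / N.
        """
        rng = np.random.default_rng() if rng is None else rng
        x = scene.astype(np.float64).ravel()
        n = x.size
        H = hadamard(n)                      # n must be a power of two
        y = H @ x                            # bucket-detector measurements
        y += rng.normal(0.0, noise_sigma, size=n)
        return (H.T @ y / n).reshape(scene.shape)
    ```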

  4. Image forming apparatus

    DOEpatents

    Satoh, Hisao; Haneda, Satoshi; Ikeda, Tadayoshi; Morita, Shizuo; Fukuchi, Masakazu

    1996-01-01

    An image forming apparatus has a detachable process cartridge in which an image carrier, on which an electrostatic latent image is formed, and a developing unit, which develops the electrostatic latent image into a toner image, are integrally formed into one unit. There is provided a developer container including a discharge section which can be inserted into a supply opening of the developing unit, and a container in which a predetermined amount of developer is contained, wherein the developer container is attached to the toner supply opening of the developing unit and the developer is supplied into the developing unit housing when a toner stirring screw of the developing unit is rotated.

  5. Hybrid imaging: a quantum leap in scientific imaging

    NASA Astrophysics Data System (ADS)

    Atlas, Gene; Wadsworth, Mark V.

    2004-01-01

    ImagerLabs has advanced its patented next-generation imaging technology, called Hybrid Imaging Technology (HIT), which offers scientific-quality performance. The key to HIT is the merging of CCD and CMOS technologies through hybridization rather than process integration. HIT offers the exceptional quantum efficiency, fill factor, broad spectral response, and very low noise of the CCD. In addition, it provides the very high-speed readout, low power, high linearity, and high integration capability of CMOS sensors. In this work, we present the benefits of this technology and report the latest advances in its performance.

  6. AstroImageJ: Image Processing and Photometric Extraction for Ultra-precise Astronomical Light Curves

    NASA Astrophysics Data System (ADS)

    Collins, Karen A.; Kielkopf, John F.; Stassun, Keivan G.; Hessman, Frederic V.

    2017-02-01

    ImageJ is a graphical user interface (GUI) driven, public domain, Java-based, software package for general image processing traditionally used mainly in life sciences fields. The image processing capabilities of ImageJ are useful and extendable to other scientific fields. Here we present AstroImageJ (AIJ), which provides an astronomy specific image display environment and tools for astronomy specific image calibration and data reduction. Although AIJ maintains the general purpose image processing capabilities of ImageJ, AIJ is streamlined for time-series differential photometry, light curve detrending and fitting, and light curve plotting, especially for applications requiring ultra-precise light curves (e.g., exoplanet transits). AIJ reads and writes standard Flexible Image Transport System (FITS) files, as well as other common image formats, provides FITS header viewing and editing, and is World Coordinate System aware, including an automated interface to the astrometry.net web portal for plate solving images. AIJ provides research grade image calibration and analysis tools with a GUI driven approach, and easily installed cross-platform compatibility. It enables new users, even at the level of undergraduate student, high school student, or amateur astronomer, to quickly start processing, modeling, and plotting astronomical image data with one tightly integrated software package.

  7. Improve Image Quality of Transversal Relaxation Time PROPELLER and FLAIR on Magnetic Resonance Imaging

    NASA Astrophysics Data System (ADS)

    Rauf, N.; Alam, D. Y.; Jamaluddin, M.; Samad, B. A.

    2018-03-01

    Magnetic resonance imaging (MRI) is a medical imaging technique that uses the interaction between a magnetic field and nuclear spins. MRI can show pathology using transverse relaxation time (T2)-weighted images. Techniques for producing T2-weighted images include Periodically Rotated Overlapping Parallel Lines with Enhanced Reconstruction (PROPELLER) and Fluid Attenuated Inversion Recovery (FLAIR). A comparison of T2 PROPELLER and T2 FLAIR parameters in MRI images has been conducted, and image quality was improved using RadiAnt DICOM Viewer and ENVI software with image segmentation and region-of-interest (ROI) methods. Brain images were randomly selected. The results showed that the repetition time (TR) and echo time (TE) values of all image types were not influenced by age. T2 FLAIR images had a longer TR (9000 ms), whereas T2 PROPELLER images had a longer TE (100.75-102.1 ms). Furthermore, areas with low and medium signal intensity appeared clearer in T2 PROPELLER images (average coefficients of variation for low and medium signal intensity were 0.0431 and 0.0705, respectively), whereas areas with high signal intensity appeared clearer in T2 FLAIR images (average coefficient of variation 0.0637).

  8. Imaging system for cardiac planar imaging using a dedicated dual-head gamma camera

    DOEpatents

    Majewski, Stanislaw [Morgantown, VA; Umeno, Marc M [Woodinville, WA

    2011-09-13

    A cardiac imaging system employing dual gamma imaging heads co-registered with one another to provide two dynamic simultaneous views of the heart sector of a patient torso. A first gamma imaging head is positioned in a first orientation with respect to the heart sector and a second gamma imaging head is positioned in a second orientation with respect to the heart sector. An adjustment arrangement is capable of adjusting the distance between the separate imaging heads and the angle between the heads. With the angle between the imaging heads set to 180 degrees and operating in a range of 140-159 keV and at a rate of up to 500 kHz, the imaging heads are co-registered to produce simultaneous dynamic recording of two stereotactic views of the heart. The use of co-registered imaging heads maximizes the uniformity of detection sensitivity of blood flow in and around the heart over the whole heart volume and minimizes radiation absorption effects. A normalization/image fusion technique is implemented pixel-by-corresponding-pixel to increase signal for any cardiac region viewed in two images obtained from the two opposed detector heads for the same time bin. The imaging system is capable of producing enhanced first-pass studies, blood-pool studies including planar, gated, and non-gated EKG studies, planar EKG perfusion studies, and planar hot-spot imaging.

  9. Noise Gating Solar Images

    NASA Astrophysics Data System (ADS)

    DeForest, Craig; Seaton, Daniel B.; Darnell, John A.

    2017-08-01

    I present and demonstrate a new, general purpose post-processing technique, "3D noise gating", that can reduce image noise by an order of magnitude or more without effective loss of spatial or temporal resolution in typical solar applications. Nearly all scientific images are, ultimately, limited by noise. Noise can be direct Poisson "shot noise" from photon counting effects, or introduced by other means such as detector read noise. Noise is typically represented as a random variable (perhaps with location- or image-dependent characteristics) that is sampled once per pixel or once per resolution element of an image sequence. Noise limits many aspects of image analysis, including photometry, spatiotemporal resolution, feature identification, morphology extraction, and background modeling and separation. Identifying and separating noise from image signal is difficult. The common practice of blurring in space and/or time works because most image "signal" is concentrated in the low Fourier components of an image, while noise is evenly distributed. Blurring in space and/or time attenuates the high spatial and temporal frequencies, reducing noise at the expense of also attenuating image detail. Noise-gating exploits the same property -- "coherence" -- that we use to identify features in images, to separate image features from noise. Processing image sequences through 3-D noise gating results in spectacular (more than 10x) improvements in signal-to-noise ratio, while not blurring bright, resolved features in either space or time. This improves most types of image analysis, including feature identification, time sequence extraction, absolute and relative photometry (including differential emission measure analysis), feature tracking, computer vision, correlation tracking, background modeling, cross-scale analysis, visual display/presentation, and image compression. I will introduce noise gating, describe the method, and show examples from several instruments (including SDO
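    A much-simplified sketch of the gating idea (suppressing Fourier components of an image cube that fall below an estimated noise floor) is shown below; the published method uses overlapping apodized chunks and a more careful noise model, so the threshold rule and noise estimate here are assumptions for illustration only.

    ```python
    import numpy as np

    def naive_noise_gate(cube, noise_sigma, gate_factor=3.0):
        """Crude 3-D Fourier noise gate for an (nt, ny, nx) image sequence.

        Components whose Fourier amplitude is below gate_factor times the expected
        noise amplitude are zeroed; coherent features, which concentrate their power
        in few components, pass through largely untouched.
        """
        spec = np.fft.fftn(cube.astype(np.float64))
        # Expected amplitude of white noise of std noise_sigma in each Fourier bin
        # (unnormalised FFT convention).
        noise_amp = noise_sigma * np.sqrt(cube.size)
        gate = np.abs(spec) > gate_factor * noise_amp
        return np.fft.ifftn(spec * gate).real
    ```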

  10. Biometric image enhancement using decision rule based image fusion techniques

    NASA Astrophysics Data System (ADS)

    Sagayee, G. Mary Amirtha; Arumugam, S.

    2010-02-01

    Introducing biometrics into information systems may result in considerable benefits. Most researchers confirm that the fingerprint is more widely used than the iris or face, and moreover it is the primary choice for most privacy-concerned applications. For fingerprint applications, choosing a proper sensor is critical. The proposed work describes how image quality can be improved by introducing an image fusion technique at the sensor level. The images obtained after applying the decision-rule-based image fusion technique are evaluated and analyzed in terms of their entropy and root-mean-square error.

  11. Planning Image-Based Measurements in Wind Tunnels by Virtual Imaging

    NASA Technical Reports Server (NTRS)

    Kushner, Laura Kathryn; Schairer, Edward T.

    2011-01-01

    Virtual imaging is routinely used at NASA Ames Research Center to plan the placement of cameras and light sources for image-based measurements in production wind tunnel tests. Virtual imaging allows users to quickly and comprehensively model a given test situation, well before the test occurs, in order to verify that all optical testing requirements will be met. It allows optimization of the placement of cameras and light sources and leads to faster set-up times, thereby decreasing tunnel occupancy costs. This paper describes how virtual imaging was used to plan optical measurements for three tests in production wind tunnels at NASA Ames.

  12. Enhanced EDX images by fusion of multimodal SEM images using pansharpening techniques.

    PubMed

    Franchi, G; Angulo, J; Moreaud, M; Sorbier, L

    2018-01-01

    The goal of this paper is to explore the potential interest of image fusion in the context of multimodal scanning electron microscope (SEM) imaging. In particular, we aim at merging backscattered-electron images, which usually have high spatial resolution but do not provide enough discriminative information to physically classify the nature of the sample, with energy-dispersive X-ray spectroscopy (EDX) images, which have discriminative information but lower spatial resolution. The produced images are named enhanced EDX. To achieve this goal, we have compared the results obtained with classical pansharpening techniques for image fusion against an original approach tailored for multimodal SEM fusion of information. Quantitative assessment is obtained by means of two SEM images and a simulated dataset produced by software based on PENELOPE. © 2017 The Authors Journal of Microscopy © 2017 Royal Microscopical Society.
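    As a reference point for the classical pansharpening baseline mentioned above, the sketch below applies the Brovey transform, with the high-resolution backscattered-electron image standing in for the panchromatic band and low-resolution EDX maps for the spectral bands; it is a generic baseline, not the tailored fusion proposed in the paper, and the bilinear upsampling is an assumption.

    ```python
    import numpy as np
    from scipy.ndimage import zoom

    def brovey_fusion(pan, bands, eps=1e-12):
        """Classical Brovey pansharpening.

        pan:   high-resolution 2-D image (here, the backscattered-electron image).
        bands: stack of low-resolution 2-D maps (here, EDX element maps), shape (k, h, w).
        Each band is upsampled to the pan grid and rescaled so that the band sum
        matches the pan intensity, injecting high-resolution detail.
        """
        scale = (pan.shape[0] / bands.shape[1], pan.shape[1] / bands.shape[2])
        up = np.stack([zoom(b.astype(np.float64), scale, order=1) for b in bands])
        total = up.sum(axis=0) + eps
        return up * (pan.astype(np.float64) / total)[None, :, :]
    ```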

  13. Improving high resolution retinal image quality using speckle illumination HiLo imaging

    PubMed Central

    Zhou, Xiaolin; Bedggood, Phillip; Metha, Andrew

    2014-01-01

    Retinal image quality from flood illumination adaptive optics (AO) ophthalmoscopes is adversely affected by out-of-focus light scatter due to the lack of confocality. This effect is more pronounced in small eyes, such as that of rodents, because the requisite high optical power confers a large dioptric thickness to the retina. A recently-developed structured illumination microscopy (SIM) technique called HiLo imaging has been shown to reduce the effect of out-of-focus light scatter in flood illumination microscopes and produce pseudo-confocal images with significantly improved image quality. In this work, we adopted the HiLo technique to a flood AO ophthalmoscope and performed AO imaging in both (physical) model and live rat eyes. The improvement in image quality from HiLo imaging is shown both qualitatively and quantitatively by using spatial spectral analysis. PMID:25136486

  14. Improving high resolution retinal image quality using speckle illumination HiLo imaging.

    PubMed

    Zhou, Xiaolin; Bedggood, Phillip; Metha, Andrew

    2014-08-01

    Retinal image quality from flood illumination adaptive optics (AO) ophthalmoscopes is adversely affected by out-of-focus light scatter due to the lack of confocality. This effect is more pronounced in small eyes, such as that of rodents, because the requisite high optical power confers a large dioptric thickness to the retina. A recently-developed structured illumination microscopy (SIM) technique called HiLo imaging has been shown to reduce the effect of out-of-focus light scatter in flood illumination microscopes and produce pseudo-confocal images with significantly improved image quality. In this work, we adopted the HiLo technique to a flood AO ophthalmoscope and performed AO imaging in both (physical) model and live rat eyes. The improvement in image quality from HiLo imaging is shown both qualitatively and quantitatively by using spatial spectral analysis.

  15. Concave omnidirectional imaging device for cylindrical object based on catadioptric panoramic imaging

    NASA Astrophysics Data System (ADS)

    Wu, Xiaojun; Wu, Yumei; Wen, Peizhi

    2018-03-01

    To obtain information about the outer surface of a cylindrical object, we propose a catadioptric panoramic imaging system based on the principle of uniform spatial resolution for vertical scenes. First, the influences of the projection-equation coefficients on the spatial resolution and the astigmatism of the panoramic system are discussed. Through parameter optimization, we obtain appropriate coefficients for the projection equation, so that the imaging quality of the entire imaging system reaches an optimum. Finally, the system projection equation is calibrated, and an undistorted rectangular panoramic image is obtained using the cylindrical-surface projection expansion method. The proposed 360-deg panoramic imaging device overcomes the shortcomings of existing surface panoramic imaging methods, and it has the advantages of low cost, simple structure, high imaging quality, and small distortion. The experimental results show the effectiveness of the proposed method.
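    The cylindrical-surface projection expansion step (unwrapping the annular catadioptric image into a rectangular panorama by sampling over radius and azimuth) can be sketched as below; the image center, inner and outer radii, and nearest-neighbour sampling are assumptions standing in for the calibrated projection equation.

    ```python
    import numpy as np

    def unwrap_panorama(img, center, r_inner, r_outer, out_height=256, out_width=1024):
        """Unwrap an annular panoramic image to a rectangular image.

        Each output column corresponds to an azimuth angle and each row to a radius
        between r_inner and r_outer; nearest-neighbour sampling keeps the sketch short.
        """
        cy, cx = center
        theta = np.linspace(0.0, 2.0 * np.pi, out_width, endpoint=False)
        radius = np.linspace(r_inner, r_outer, out_height)
        rr, tt = np.meshgrid(radius, theta, indexing="ij")
        ys = np.clip(np.round(cy + rr * np.sin(tt)).astype(int), 0, img.shape[0] - 1)
        xs = np.clip(np.round(cx + rr * np.cos(tt)).astype(int), 0, img.shape[1] - 1)
        return img[ys, xs]
    ```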

  16. Gen-2 hand-held optical imager towards cancer imaging: reflectance and transillumination phantom studies.

    PubMed

    Gonzalez, Jean; Roman, Manuela; Hall, Michael; Godavarty, Anuradha

    2012-01-01

    Hand-held near-infrared (NIR) optical imagers are developed by various researchers towards non-invasive clinical breast imaging. Unlike these existing imagers that can perform only reflectance imaging, a generation-2 (Gen-2) hand-held optical imager has been recently developed to perform both reflectance and transillumination imaging. The unique forked design of the hand-held probe head(s) allows for reflectance imaging (as in ultrasound) and transillumination or compressed imaging (as in X-ray mammography). Phantom studies were performed to demonstrate two-dimensional (2D) target detection via reflectance and transillumination imaging at various target depths (1-5 cm deep) and using simultaneous multiple point illumination approach. It was observed that 0.45 cc targets were detected up to 5 cm deep during transillumination, but limited to 2.5 cm deep during reflectance imaging. Additionally, implementing appropriate data post-processing techniques along with a polynomial fitting approach, to plot 2D surface contours of the detected signal, yields distinct target detectability and localization. The ability of the gen-2 imager to perform both reflectance and transillumination imaging allows its direct comparison to ultrasound and X-ray mammography results, respectively, in future clinical breast imaging studies.

  17. Optronic System Imaging Simulator (OSIS): imager simulation tool of the ECOMOS project

    NASA Astrophysics Data System (ADS)

    Wegner, D.; Repasi, E.

    2018-04-01

    ECOMOS is a multinational effort within the framework of an EDA Project Arrangement. Its aim is to provide a generally accepted and harmonized European computer model for computing nominal target acquisition (TA) ranges of optronic imagers operating in the visible or thermal infrared (IR). The project involves close cooperation of defense and security industry and public research institutes from France, Germany, Italy, the Netherlands, and Sweden. ECOMOS uses two approaches to calculate TA ranges: the analytical TRM4 model and the image-based Triangle Orientation Discrimination (TOD) model. In this paper the IR imager simulation tool, Optronic System Imaging Simulator (OSIS), is presented. It produces the virtual camera imagery required by the TOD approach. Pristine imagery is degraded by various effects caused by atmospheric attenuation, optics, detector footprint, sampling, fixed-pattern noise, temporal noise, and digital signal processing. The resulting images may be presented to observers or further processed for automatic image quality calculations. For convenience, OSIS incorporates camera descriptions and intermediate results provided by TRM4. As input, OSIS uses pristine imagery together with meta-information about the scene content, its physical dimensions, and the gray-level interpretation. These images represent planar targets placed at specified distances from the imager. Furthermore, OSIS is extended by a plugin functionality that enables the integration of advanced digital signal processing techniques in ECOMOS, such as compression, local contrast enhancement, and digital turbulence mitigation, to name but a few. By means of this image-based approach, image degradations and image enhancements can be investigated, which goes beyond the scope of the analytical TRM4 model.
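    An illustrative sketch of the kind of degradation chain such a simulator applies (optics blur, detector footprint and sampling, temporal and fixed-pattern noise) is given below; the Gaussian PSF, detector pitch, and noise levels are placeholders, not ECOMOS or TRM4 parameters.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter, uniform_filter

    def degrade_pristine(image, psf_sigma=1.5, detector_pitch=4,
                         read_noise=2.0, fixed_pattern=0.01, rng=None):
        """Apply a simple imager degradation chain to a pristine scene.

        1. Optics: a Gaussian blur stands in for the system PSF.
        2. Detector footprint + sampling: average over pitch-sized cells, then subsample.
        3. Noise: additive temporal (read) noise plus multiplicative fixed-pattern noise.
        """
        rng = np.random.default_rng() if rng is None else rng
        blurred = gaussian_filter(image.astype(np.float64), psf_sigma)
        footprint = uniform_filter(blurred, size=detector_pitch)
        sampled = footprint[::detector_pitch, ::detector_pitch]
        fpn = 1.0 + fixed_pattern * rng.standard_normal(sampled.shape)
        return sampled * fpn + rng.normal(0.0, read_noise, sampled.shape)
    ```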

  18. Microscopy imaging device with advanced imaging properties

    DOEpatents

    Ghosh, Kunal; Burns, Laurie; El Gamal, Abbas; Schnitzer, Mark J.; Cocker, Eric; Ho, Tatt Wei

    2015-11-24

    Systems, methods and devices are implemented for microscope imaging solutions. One embodiment of the present disclosure is directed toward an epifluorescence microscope. The microscope includes an image capture circuit including an array of optical sensors. An optical arrangement is configured to direct excitation light of less than about 1 mW to a target object in a field of view that is at least 0.5 mm.sup.2 and to direct epi-fluorescence emission caused by the excitation light to the array of optical sensors. The optical arrangement and array of optical sensors are each sufficiently close to the target object to provide at least 2.5 .mu.m resolution for an image of the field of view.

  19. Microscopy imaging device with advanced imaging properties

    DOEpatents

    Ghosh, Kunal; Burns, Laurie; El Gamal, Abbas; Schnitzer, Mark J.; Cocker, Eric; Ho, Tatt Wei

    2016-10-25

    Systems, methods and devices are implemented for microscope imaging solutions. One embodiment of the present disclosure is directed toward an epifluorescence microscope. The microscope includes an image capture circuit including an array of optical sensors. An optical arrangement is configured to direct excitation light of less than about 1 mW to a target object in a field of view that is at least 0.5 mm.sup.2 and to direct epi-fluorescence emission caused by the excitation light to the array of optical sensors. The optical arrangement and array of optical sensors are each sufficiently close to the target object to provide at least 2.5 .mu.m resolution for an image of the field of view.

  20. Microscopy imaging device with advanced imaging properties

    DOEpatents

    Ghosh, Kunal; Burns, Laurie; El Gamal, Abbas; Schnitzer, Mark J.; Cocker, Eric; Ho, Tatt Wei

    2016-11-22

    Systems, methods and devices are implemented for microscope imaging solutions. One embodiment of the present disclosure is directed toward an epifluorescence microscope. The microscope includes an image capture circuit including an array of optical sensors. An optical arrangement is configured to direct excitation light of less than about 1 mW to a target object in a field of view that is at least 0.5 mm.sup.2 and to direct epi-fluorescence emission caused by the excitation light to the array of optical sensors. The optical arrangement and array of optical sensors are each sufficiently close to the target object to provide at least 2.5 .mu.m resolution for an image of the field of view.

  1. Microscopy imaging device with advanced imaging properties

    DOEpatents

    Ghosh, Kunal; Burns, Laurie; El Gamal, Abbas; Schnitzer, Mark J.; Cocker, Eric; Ho, Tatt Wei

    2017-04-25

    Systems, methods and devices are implemented for microscope imaging solutions. One embodiment of the present disclosure is directed toward an epifluorescence microscope. The microscope includes an image capture circuit including an array of optical sensors. An optical arrangement is configured to direct excitation light of less than about 1 mW to a target object in a field of view that is at least 0.5 mm.sup.2 and to direct epi-fluorescence emission caused by the excitation light to the array of optical sensors. The optical arrangement and array of optical sensors are each sufficiently close to the target object to provide at least 2.5 .mu.m resolution for an image of the field of view.

  2. Fiber-optic fluorescence imaging

    PubMed Central

    Flusberg, Benjamin A; Cocker, Eric D; Piyawattanametha, Wibool; Jung, Juergen C; Cheung, Eunice L M; Schnitzer, Mark J

    2010-01-01

    Optical fibers guide light between separate locations and enable new types of fluorescence imaging. Fiber-optic fluorescence imaging systems include portable handheld microscopes, flexible endoscopes well suited for imaging within hollow tissue cavities and microendoscopes that allow minimally invasive high-resolution imaging deep within tissue. A challenge in the creation of such devices is the design and integration of miniaturized optical and mechanical components. Until recently, fiber-based fluorescence imaging was mainly limited to epifluorescence and scanning confocal modalities. Two new classes of photonic crystal fiber facilitate ultrashort pulse delivery for fiber-optic two-photon fluorescence imaging. An upcoming generation of fluorescence imaging devices will be based on microfabricated device components. PMID:16299479

  3. GOATS Image Projection Component

    NASA Technical Reports Server (NTRS)

    Haber, Benjamin M.; Green, Joseph J.

    2011-01-01

    When doing mission analysis and design of an imaging system in orbit around the Earth, answering the fundamental question of imaging performance requires an understanding of the image products that will be produced by the imaging system. The GOATS software comprises a series of MATLAB functions for geometric image projections. Unique features of the software include function modularity, a standard MATLAB interface, easy-to-understand first-principles-based analysis, and the ability to perform geometric image projections for framing-type imaging systems. The software modules are designed for maximum analysis utility and can all be used independently for many varied analysis tasks, or in conjunction with other orbit analysis tools.
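    The first-principles geometry at the heart of such a tool, projecting 3-D points through a pinhole camera model into pixel coordinates, can be sketched as follows; GOATS itself is a MATLAB toolset, so this standalone example only illustrates the projection step, and the parameter names are assumptions.

    ```python
    import numpy as np

    def project_points(points_world, R, t, focal_length, pixel_pitch, principal_point):
        """Project 3-D points (N, 3) into pixel coordinates with a pinhole model.

        R, t map world coordinates into the camera frame; points behind the camera
        (non-positive z) are returned as NaN.
        """
        cam = (R @ points_world.T).T + t            # world -> camera frame
        z = cam[:, 2:3]
        with np.errstate(divide="ignore", invalid="ignore"):
            xy = focal_length * cam[:, :2] / z      # perspective division
        pix = xy / pixel_pitch + np.asarray(principal_point)
        pix[np.squeeze(z <= 0)] = np.nan
        return pix
    ```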

  4. An image assessment study of image acceptability of the Galileo low gain antenna mission

    NASA Technical Reports Server (NTRS)

    Chuang, S. L.; Haines, R. F.; Grant, T.; Gold, Yaron; Cheung, Kar-Ming

    1994-01-01

    This paper describes a study conducted by NASA Ames Research Center (ARC) in collaboration with the Jet Propulsion Laboratory (JPL), Pasadena, California, on the image acceptability of the Galileo Low Gain Antenna mission. The primary objective of the study was to determine the impact of the Integer Cosine Transform (ICT) compression algorithm on Galilean images of atmospheric bodies, moons, asteroids, and Jupiter's rings. The approach involved fifteen volunteer subjects representing twelve institutions involved with the Galileo Solid State Imaging (SSI) experiment. Four different experiment-specific quantization tables (q-tables) and various compression stepsizes (q-factors) were used to achieve different compression ratios. The study then determined the acceptability of the compressed monochromatic astronomical images as evaluated by Galileo SSI mission scientists. Fourteen different images in seven image groups were evaluated. Each observer viewed two versions of the same image side by side on a high-resolution monitor, each compressed using a different quantization stepsize. They were asked to select which image had the higher overall quality for supporting their visual evaluations of image content, and then rated both images on a scale from one to five according to their judged degree of usefulness. Up to four pre-selected types of images were presented, with and without noise, to each subject, based upon the results of a previously administered survey of their image preferences. The results showed that: (1) acceptable compression ratios vary widely with the type of image; (2) noisy images detract greatly from image acceptability and lower the acceptable compression ratios; and (3) atmospheric images of Jupiter seem to allow compression ratios 4 to 5 times those of some clear-surface satellite images.

  5. Polarization Imaging Apparatus

    NASA Technical Reports Server (NTRS)

    Zou, Yingyin K.; Chen, Qiushui

    2010-01-01

    A polarization imaging apparatus has shown promise as a prototype of instruments for medical imaging with contrast greater than that achievable by use of non-polarized light. The underlying principles of design and operation are derived from observations that light interacts with tissue ultrastructures that affect reflectance, scattering, absorption, and polarization of light. The apparatus utilizes high-speed electro-optical components for generating light properties and acquiring polarization images through aligned polarizers. These components include phase retarders made of OptoCeramic (registered TradeMark) material - a ceramic that has a high electro-optical coefficient. The apparatus includes a computer running a program that implements a novel algorithm for controlling the phase retarders, capturing image data, and computing the Stokes polarization images. Potential applications include imaging of superficial cancers and other skin lesions, early detection of diseased cells, and microscopic analysis of tissues. The high imaging speed of this apparatus could be beneficial for observing live cells or tissues, and could enable rapid identification of moving targets in astronomy and national defense. The apparatus could also be used as an analysis tool in material research and industrial processing.
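    For context, the sketch below shows the textbook reduction of four linear-polarizer intensity images (0, 45, 90, and 135 degrees) to linear Stokes images and the degree of linear polarization; the apparatus described here uses electro-optic retarders, so this is a generic illustration rather than its specific algorithm.

    ```python
    import numpy as np

    def linear_stokes(i0, i45, i90, i135):
        """Compute the linear Stokes images S0, S1, S2 and degree of linear polarization.

        i0..i135 are co-registered intensity images acquired through a linear
        polarizer oriented at the indicated angles (degrees).
        """
        i0, i45, i90, i135 = (np.asarray(a, dtype=np.float64) for a in (i0, i45, i90, i135))
        s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
        s1 = i0 - i90                        # horizontal vs. vertical
        s2 = i45 - i135                      # +45 vs. -45 degrees
        dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-12)
        return s0, s1, s2, dolp
    ```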

  6. Spatially weighted mutual information image registration for image guided radiation therapy.

    PubMed

    Park, Samuel B; Rhee, Frank C; Monroe, James I; Sohn, Jason W

    2010-09-01

    To develop a new metric for image registration that incorporates (sub)pixelwise differential importance by spatial location and to demonstrate its application for image guided radiation therapy (IGRT). It is well known that rigid-body image registration with mutual information is dependent on the size and location of the image subset on which the alignment analysis is based [the designated region of interest (ROI)]. Therefore, careful review and manual adjustments of the resulting registration are frequently necessary. Although there have been some investigations of weighted mutual information (WMI), these efforts could not apply the differential importance to a particular spatial location since WMI only applies the weight in the joint histogram space. The authors developed the spatially weighted mutual information (SWMI) metric by incorporating an adaptable weight function with spatial localization into mutual information. SWMI enables the user to apply the selected transform to medically "important" areas such as tumors and critical structures, so SWMI is neither dominated by nor neglectful of the neighboring structures. Since SWMI can be utilized with any weight function form, the authors presented two examples of weight functions for IGRT applications: a Gaussian-shaped weight function (GW) applied to a user-defined location and a structures-of-interest (SOI) based weight function. An image registration example using a synthesized 2D image is presented to illustrate the efficacy of SWMI. The convergence and feasibility of the registration method as applied to clinical imaging is illustrated by fusing a prostate treatment planning CT with a clinical cone beam CT (CBCT) image set acquired for patient alignment. Forty-one trials are run to test the speed of convergence. The authors also applied SWMI registration using two types of weight functions to two head and neck cases and a prostate case with clinically acquired CBCT/MVCT image sets. The SWMI registration with
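    The core SWMI idea, accumulating the joint histogram with a per-pixel spatial weight before computing mutual information, can be sketched as below together with a Gaussian weight of the kind described; the bin count, weight form, and normalization details are assumptions, and the transform search is omitted.

    ```python
    import numpy as np

    def spatially_weighted_mi(fixed, moving, weights, bins=64):
        """Mutual information where each pixel pair contributes its spatial weight."""
        f = fixed.ravel().astype(np.float64)
        m = moving.ravel().astype(np.float64)
        w = weights.ravel().astype(np.float64)
        joint, _, _ = np.histogram2d(f, m, bins=bins, weights=w)
        pj = joint / joint.sum()
        pf = pj.sum(axis=1, keepdims=True)   # marginal of the fixed image
        pm = pj.sum(axis=0, keepdims=True)   # marginal of the moving image
        nz = pj > 0
        return float(np.sum(pj[nz] * np.log(pj[nz] / (pf @ pm)[nz])))

    def gaussian_weight(shape, center, sigma):
        """Gaussian spatial weight emphasising a user-chosen region of interest."""
        yy, xx = np.mgrid[:shape[0], :shape[1]]
        return np.exp(-((yy - center[0])**2 + (xx - center[1])**2) / (2.0 * sigma**2))
    ```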

  7. Images as embedding maps and minimal surfaces: Movies, color, and volumetric medical images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kimmel, R.; Malladi, R.; Sochen, N.

    A general geometrical framework for image processing is presented. The authors consider intensity images as surfaces in the (x,I) space. The image is thereby a two-dimensional surface in three-dimensional space for gray-level images. The new formulation unifies many classical schemes, algorithms, and measures via choices of parameters in a "master" geometrical measure. More important, it is a simple and efficient tool for the design of natural schemes for image enhancement, segmentation, and scale space. Here the authors give the basic motivation and apply the scheme to enhance images. They present the concept of an image as a surface in dimensions higher than the three-dimensional intuitive space. This will help them handle movies, color, and volumetric medical images.

  8. Astro-1 Image Taken by the Ultraviolet Imaging Telescope

    NASA Technical Reports Server (NTRS)

    1990-01-01

    This is a presentation of two comparison images of the Spiral Galaxy M81 in the constellation Ursa Major. The galaxy is about 12 million light-years from Earth. The left image is the Spiral Galaxy M81 as photographed by the Ultraviolet Imaging Telescope (UIT) during the Astro-1 Mission (STS-35) on December 9, 1990. This UIT photograph, made with ultraviolet light, reveals regions where new stars are forming at a rapid rate. The right image is a photograph of the same galaxy in red light made with a 36-inch (0.9-meter) telescope at the Kitt Peak National Observatory near Tucson, Arizona. The Astro Observatory was designed to explore the universe by observing and measuring ultraviolet radiation from celestial objects. Three instruments made up the Astro Observatory: the Hopkins Ultraviolet Telescope (HUT), the Ultraviolet Imaging Telescope (UIT), and the Wisconsin Ultraviolet Photo-Polarimetry Experiment (WUPPE). The Marshall Space Flight Center had management responsibilities for the Astro-1 mission. The Astro-1 Observatory was launched aboard the Space Shuttle Orbiter Columbia (STS-35) on December 2, 1990.

  9. Real-time image-processing algorithm for markerless tumour tracking using X-ray fluoroscopic imaging.

    PubMed

    Mori, S

    2014-05-01

    To ensure accuracy in respiratory-gating treatment, X-ray fluoroscopic imaging is used to detect tumour position in real time. Detection accuracy is strongly dependent on image quality, particularly positional differences between the patient and treatment couch. We developed a new algorithm to improve the quality of images obtained in X-ray fluoroscopic imaging and report the preliminary results. Two oblique X-ray fluoroscopic images were acquired using a dynamic flat panel detector (DFPD) for two patients with lung cancer. The weighting factor was applied to the DFPD image in respective columns, because most anatomical structures, as well as the treatment couch and port cover edge, were aligned in the superior-inferior direction when the patient lay on the treatment couch. The weighting factors for the respective columns were varied until the standard deviation of the pixel values within the image region was minimized. Once the weighting factors were calculated, the quality of the DFPD image was improved by applying the factors to multiframe images. Applying the image-processing algorithm produced substantial improvement in the quality of images, and the image contrast was increased. The treatment couch and irradiation port edge, which were not related to a patient's position, were removed. The average image-processing time was 1.1 ms, showing that this fast image processing can be applied to real-time tumour-tracking systems. These findings indicate that this image-processing algorithm improves the image quality in patients with lung cancer and successfully removes objects not related to the patient. Our image-processing algorithm might be useful in improving gated-treatment accuracy.

  10. Retrieving high-resolution images over the Internet from an anatomical image database

    NASA Astrophysics Data System (ADS)

    Strupp-Adams, Annette; Henderson, Earl

    1999-12-01

    The Visible Human Data set is an important contribution to the national collection of anatomical images. To enhance the availability of these images, the National Library of Medicine has supported the design and development of a prototype object-oriented image database which imports, stores, and distributes high-resolution anatomical images in both pixel and voxel formats. One of the key database modules is its client-server Internet interface. This Web interface provides a query engine with retrieval access to high-resolution anatomical images that range in size from 100 KB for browser-viewable rendered images to 1 GB for anatomical structures in voxel file formats. The Web query and retrieval client-server system is composed of applet GUIs, servlets, and RMI application modules which communicate with each other to allow users to query for specific anatomical structures and retrieve image data as well as associated anatomical images from the database. Selected images can be downloaded individually as single files via HTTP or downloaded in batch mode over the Internet to the user's machine through an applet that uses Netscape's Object Signing mechanism. The image database uses ObjectDesign's object-oriented DBMS, ObjectStore, which has a Java interface. The query and retrieval system has been tested with a Java-CDE window system and on the x86 architecture using Windows NT 4.0. This paper describes the Java applet client search engine that queries the database; the Java client module that enables users to view anatomical images online; the Java application server interface to the database, which organizes data returned to the user; and its distribution engine, which allows users to download image files individually and/or in batch mode.

  11. BIRAM: a content-based image retrieval framework for medical images

    NASA Astrophysics Data System (ADS)

    Moreno, Ramon A.; Furuie, Sergio S.

    2006-03-01

    In the medical field, digital images are becoming more and more important for the diagnosis and therapy of patients. At the same time, the development of new technologies has increased the amount of image data produced in a hospital. This creates a demand for access methods that offer more than text-based queries for retrieval of the information. In this paper, a framework for the retrieval of medical images is proposed that allows the use of different algorithms for searching medical images by similarity. The framework also enables searching for textual information from an associated medical report and from DICOM header information. The proposed system can be used to support clinical decision making and is intended to be integrated with an open-source picture archiving and communication system (PACS). BIRAM has the following advantages: (i) it can incorporate several types of algorithms for image similarity search; (ii) it allows the codification of the report according to a medical dictionary, improving the indexing and retrieval of the information; (iii) the algorithms can be selectively applied to images with the appropriate characteristics, for instance, only to magnetic resonance images. The framework was implemented in the Java language using an MS Access 97 database. The proposed framework can still be improved by the use of regions of interest (ROI), indexing with Slim-trees, and integration with a PACS server.

  12. Deep Learning MR Imaging-based Attenuation Correction for PET/MR Imaging.

    PubMed

    Liu, Fang; Jang, Hyungseok; Kijowski, Richard; Bradshaw, Tyler; McMillan, Alan B

    2018-02-01

    Purpose To develop and evaluate the feasibility of deep learning approaches for magnetic resonance (MR) imaging-based attenuation correction (AC) (termed deep MRAC) in brain positron emission tomography (PET)/MR imaging. Materials and Methods A PET/MR imaging AC pipeline was built by using a deep learning approach to generate pseudo computed tomographic (CT) scans from MR images. A deep convolutional auto-encoder network was trained to identify air, bone, and soft tissue in volumetric head MR images coregistered to CT data for training. A set of 30 retrospective three-dimensional T1-weighted head images was used to train the model, which was then evaluated in 10 patients by comparing the generated pseudo CT scan to an acquired CT scan. A prospective study was carried out for utilizing simultaneous PET/MR imaging for five subjects by using the proposed approach. Analysis of covariance and paired-sample t tests were used for statistical analysis to compare PET reconstruction error with deep MRAC and two existing MR imaging-based AC approaches with CT-based AC. Results Deep MRAC provides an accurate pseudo CT scan with a mean Dice coefficient of 0.971 ± 0.005 for air, 0.936 ± 0.011 for soft tissue, and 0.803 ± 0.021 for bone. Furthermore, deep MRAC provides good PET results, with average errors of less than 1% in most brain regions. Significantly lower PET reconstruction errors were realized with deep MRAC (-0.7% ± 1.1) compared with Dixon-based soft-tissue and air segmentation (-5.8% ± 3.1) and anatomic CT-based template registration (-4.8% ± 2.2). Conclusion The authors developed an automated approach that allows generation of discrete-valued pseudo CT scans (soft tissue, bone, and air) from a single high-spatial-resolution diagnostic-quality three-dimensional MR image and evaluated it in brain PET/MR imaging. This deep learning approach for MR imaging-based AC provided reduced PET reconstruction error relative to a CT-based standard within the brain compared
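    The Dice coefficient used to score the pseudo-CT tissue classes against the reference CT can be sketched as below; the integer class labels are assumptions for illustration.

    ```python
    import numpy as np

    def dice_coefficient(pred_labels, true_labels, class_id):
        """Dice overlap for one tissue class (e.g. air, soft tissue, or bone).

        pred_labels and true_labels are integer label volumes of the same shape.
        """
        pred = (pred_labels == class_id)
        true = (true_labels == class_id)
        intersection = np.logical_and(pred, true).sum()
        denom = pred.sum() + true.sum()
        return 2.0 * intersection / denom if denom > 0 else 1.0
    ```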

  13. Noninvasive cardiovascular imaging.

    PubMed

    Hartman, Robert J

    2014-01-01

    Over the past 2 decades, use of noninvasive cardiovascular imaging has increased dramatically. This article provides a brief synopsis of the current state of several technologies (echocardiography, cardiac magnetic resonance imaging, and cardiac computed tomography) as well as a glimpse at future possibilities in cardiac imaging.

  14. Human placental vasculature imaging using an LED-based photoacoustic/ultrasound imaging system

    NASA Astrophysics Data System (ADS)

    Maneas, Efthymios; Xia, Wenfeng; Kuniyil Ajith Singh, Mithun; Sato, Naoto; Agano, Toshitaka; Ourselin, Sebastien; West, Simeon J.; David, Anna L.; Vercauteren, Tom; Desjardins, Adrien E.

    2018-02-01

    Minimally invasive fetal interventions, such as those used for therapy of twin-to-twin transfusion syndrome (TTTS), require accurate image guidance to optimise patient outcomes. Currently, TTTS can be treated fetoscopically by identifying anastomosing vessels on the chorionic (fetal) placental surface, and then performing photocoagulation. Incomplete photocoagulation increases the risk of procedure failure. Photoacoustic imaging can provide contrast for both haemoglobin concentration and oxygenation, and in this study, it was hypothesised that it can resolve chorionic placental vessels. We imaged a term human placenta that was collected after caesarean section delivery using a photoacoustic/ultrasound system (AcousticX) that included light emitting diode (LED) arrays for excitation light and a linear-array ultrasound imaging probe. Two-dimensional (2D) co-registered photoacoustic and B-mode pulse-echo ultrasound images were acquired and displayed in real-time. Translation of the imaging probe enabled 3D imaging. This feasibility study demonstrated that photoacoustic imaging can be used to visualise chorionic placental vasculature, and that it has strong potential to guide minimally invasive fetal interventions.

  15. Accuracy comparison in mapping water bodies using Landsat images and Google Earth Images

    NASA Astrophysics Data System (ADS)

    Zhou, Z.; Zhou, X.

    2016-12-01

    A lot of research has been done on the extraction of water bodies from satellite images. Water indices computed from multi-spectral images are the most commonly used methods for water body extraction. When extracting the area of water bodies from satellite images, accuracy may depend on the spatial resolution of the images and on the relative size of the water bodies. To quantify the impact of spatial resolution and size (major and minor lengths) of the water bodies on the accuracy of water area extraction, we use Georgetown Lake, Montana and coalbed methane (CBM) water retention ponds in the Montana Powder River Basin as test sites. Data sources include Landsat images and Google Earth images covering both large water bodies and small ponds. First, we used water indices to extract water coverage from Landsat images for both the large lake and the small ponds. Second, we used a newly developed visible-index method to extract water coverage from Google Earth images covering both the large lake and the small ponds. Third, we used an image fusion method in which the Google Earth images are fused with multi-spectral Landsat images to obtain multi-spectral images with the same high spatial resolution as the Google Earth images. The actual areas of the lake and ponds were measured using GPS surveys. Results will be compared and the optimal method will be selected for water body extraction.
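
    The abstract does not state which water index was applied; the normalized difference water index (NDWI) is one commonly used choice and illustrates the general form (the band arrays and the threshold value below are assumptions):

    ```python
    import numpy as np

    def ndwi(green: np.ndarray, nir: np.ndarray) -> np.ndarray:
        """NDWI = (Green - NIR) / (Green + NIR); water pixels tend toward positive values."""
        green = green.astype(np.float64)
        nir = nir.astype(np.float64)
        return (green - nir) / (green + nir + 1e-12)  # small epsilon avoids division by zero

    def water_mask(green: np.ndarray, nir: np.ndarray, threshold: float = 0.0) -> np.ndarray:
        """Binary water map; the threshold is scene-dependent and usually tuned empirically."""
        return ndwi(green, nir) > threshold
    ```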

  16. Cardiac Imaging System

    NASA Technical Reports Server (NTRS)

    1990-01-01

    Although not available to all patients with narrowed arteries, balloon angioplasty has expanded dramatically since its introduction, with an estimated further growth to 562,000 procedures in the U.S. alone by 1992. Growth has fueled demand for higher quality imaging systems that allow the cardiologist to be more accurate and increase the chances of a successful procedure. A major advance is the Digital Cardiac Imaging (DCI) System designed by Philips Medical Systems International, Best, The Netherlands, and marketed in the U.S. by Philips Medical Systems North America Company. The key benefit is significantly improved real-time imaging and the ability to employ image enhancement techniques to bring out added details. Using a cordless control unit, the cardiologist can manipulate images to make immediate assessments, compare live x-ray and roadmap images by placing them side-by-side on monitor screens, or compare pre-procedure and post-procedure conditions. The Philips DCI improves the cardiologist's precision by expanding the information available to him.

  17. Photocapacitive image converter

    NASA Technical Reports Server (NTRS)

    Miller, W. E.; Sher, A.; Tsuo, Y. H. (Inventor)

    1982-01-01

    An apparatus for converting a radiant energy image into corresponding electrical signals including an image converter is described. The image converter includes a substrate of semiconductor material, an insulating layer on the front surface of the substrate, and an electrical contact on the back surface of the substrate. A first series of parallel transparent conductive stripes is on the insulating layer with a processing circuit connected to each of the conductive stripes for detecting the modulated voltages generated thereon. In a first embodiment of the invention, a modulated light stripe perpendicular to the conductive stripes scans the image converter. In a second embodiment a second insulating layer is deposited over the conductive stripes and a second series of parallel transparent conductive stripes perpendicular to the first series is on the second insulating layer. A different frequency current signal is applied to each of the second series of conductive stripes and a modulated image is applied to the image converter.

  18. Method for the reduction of image content redundancy in large image databases

    DOEpatents

    Tobin, Kenneth William; Karnowski, Thomas P.

    2010-03-02

    A method of increasing information content for content-based image retrieval (CBIR) systems includes the steps of providing a CBIR database, the database having an index for a plurality of stored digital images using a plurality of feature vectors, the feature vectors corresponding to distinct descriptive characteristics of the images. A visual similarity parameter value is calculated based on a degree of visual similarity between feature vectors of an incoming image being considered for entry into the database and feature vectors associated with the most similar of the stored images. Based on said visual similarity parameter value, it is determined whether to store, or how long to store, the feature vectors associated with the incoming image in the database.
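
    The patent leaves the similarity measure unspecified; a minimal sketch of the idea, using Euclidean distance between feature vectors and a hypothetical retention threshold, could look like this:

    ```python
    import numpy as np

    def visual_similarity(incoming: np.ndarray, stored: list[np.ndarray]) -> float:
        """Similarity to the most similar stored image, mapped to (0, 1]; 1 means identical."""
        nearest = min(np.linalg.norm(incoming - s) for s in stored)
        return 1.0 / (1.0 + nearest)

    def should_store(incoming: np.ndarray, stored: list[np.ndarray],
                     redundancy_threshold: float = 0.95) -> bool:
        """Skip (or shorten retention of) feature vectors that add little new information."""
        return visual_similarity(incoming, stored) < redundancy_threshold
    ```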

  19. Diagnostic Imaging

    MedlinePlus

    Diagnostic imaging lets doctors look inside your body for clues about a medical condition. A variety of machines and ... and activities inside your body. The type of imaging your doctor uses depends on your symptoms and ...

  20. Predictive images of postoperative levator resection outcome using image processing software.

    PubMed

    Mawatari, Yuki; Fukushima, Mikiko

    2016-01-01

    This study aims to evaluate the efficacy of processed images to predict postoperative appearance following levator resection. Analysis involved 109 eyes from 65 patients with blepharoptosis who underwent advancement of levator aponeurosis and Müller's muscle complex (levator resection). Predictive images were prepared from preoperative photographs using the image processing software (Adobe Photoshop ® ). Images of selected eyes were digitally enlarged in an appropriate manner and shown to patients prior to surgery. Approximately 1 month postoperatively, we surveyed our patients using questionnaires. Fifty-six patients (89.2%) were satisfied with their postoperative appearances, and 55 patients (84.8%) positively responded to the usefulness of processed images to predict postoperative appearance. Showing processed images that predict postoperative appearance to patients prior to blepharoptosis surgery can be useful for those patients concerned with their postoperative appearance. This approach may serve as a useful tool to simulate blepharoptosis surgery.

  1. Magneto-optical imaging technique for hostile environments: The ghost imaging approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meda, A.; Caprile, A.; Avella, A.

    2015-06-29

    In this paper, we develop an approach to magneto-optical imaging (MOI), applying a ghost imaging (GI) protocol to perform Faraday microscopy. MOI is of the utmost importance for the investigation of magnetic properties of material samples, through analysis of Weiss domain shape, dimension and dynamics. Nevertheless, in some extreme conditions such as cryogenic temperatures or high magnetic field applications, there exists a lack of domain images due to the difficulty in creating an efficient imaging system in such environments. Here, we present an innovative MOI technique that separates the imaging optical path from the one illuminating the object. The technique is based on thermal light GI and exploits correlations between light beams to retrieve the image of magnetic domains. As a proof of principle, the proposed technique is applied to the Faraday magneto-optical observation of the remanence domain structure of an yttrium iron garnet sample.

  2. Cloud-based image sharing network for collaborative imaging diagnosis and consultation

    NASA Astrophysics Data System (ADS)

    Yang, Yuanyuan; Gu, Yiping; Wang, Mingqing; Sun, Jianyong; Li, Ming; Zhang, Weiqiang; Zhang, Jianguo

    2018-03-01

    In this presentation, we present a new approach to designing a cloud-based image sharing network for collaborative imaging diagnosis and consultation over the Internet, which enables radiologists, specialists, and physicians located at different sites to collaboratively and interactively perform imaging diagnosis or consultation for difficult or emergency cases. The designed network combines a regional RIS, grid-based image distribution management, an integrated video conferencing system, and multi-platform interactive image display devices, together with secured messaging and data communication. There are three kinds of components in the network: edge servers, a grid-based imaging document registry and repository, and multi-platform display devices. This network has been deployed on a public cloud platform of Alibaba, accessible through the Internet, since March 2017 and has been used for small lung nodule and early-stage lung cancer diagnosis services between the radiology departments of Huadong Hospital in Shanghai and the First Hospital of Jiaxing in Zhejiang Province.

  3. Comparison of ultrasound B-mode, strain imaging, acoustic radiation force impulse displacement and shear wave velocity imaging using real time clinical breast images

    NASA Astrophysics Data System (ADS)

    Manickam, Kavitha; Machireddy, Ramasubba Reddy; Raghavan, Bagyam

    2016-04-01

    It has been observed that many pathological processes increase the elastic modulus of soft tissue compared to normal tissue. In order to image tissue stiffness using ultrasound, a mechanical compression is applied to the tissues of interest and the local tissue deformation is measured. Based on the mechanical excitation, ultrasound stiffness imaging methods are classified as compression or strain imaging, which is based on external compression, and Acoustic Radiation Force Impulse (ARFI) imaging, which is based on the force generated by focused ultrasound. When ultrasound is focused on tissue, a shear wave is generated in the lateral direction, and the shear wave velocity is proportional to the stiffness of the tissue. The work presented in this paper investigates strain elastography and ARFI imaging in clinical cancer diagnostics using real-time patient data. Ultrasound B-mode imaging, strain imaging, ARFI displacement imaging and ARFI shear wave velocity imaging were conducted on 50 patients (31 benign and 23 malignant categories) using a Siemens S2000 machine. True modulus contrast values were calculated from the measured shear wave velocities. For ultrasound B-mode, ARFI displacement imaging and strain imaging, the observed image contrast and contrast-to-noise ratio were calculated for benign and malignant cancers. The observed contrast values were compared against the true modulus contrast values calculated from shear wave velocity imaging. In addition, Student's unpaired t-tests were conducted for all four techniques and box plots are presented. Results show that strain imaging is better for malignant cancers, whereas ARFI imaging is superior to strain imaging and B-mode imaging for representing benign lesions.
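
    The abstract does not give the exact definitions of the contrast and contrast-to-noise ratio figures of merit; a generic formulation from lesion and background regions of interest is sketched below as an illustration:

    ```python
    import numpy as np

    def image_contrast(lesion: np.ndarray, background: np.ndarray) -> float:
        """Observed contrast between a lesion ROI and a background ROI."""
        return abs(lesion.mean() - background.mean()) / background.mean()

    def cnr(lesion: np.ndarray, background: np.ndarray) -> float:
        """Contrast-to-noise ratio: mean difference normalized by the combined noise."""
        return abs(lesion.mean() - background.mean()) / np.sqrt(lesion.var() + background.var())
    ```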

  4. Supervoxels for graph cuts-based deformable image registration using guided image filtering

    NASA Astrophysics Data System (ADS)

    Szmul, Adam; Papież, Bartłomiej W.; Hallack, Andre; Grau, Vicente; Schnabel, Julia A.

    2017-11-01

    We propose combining a supervoxel-based image representation with the concept of graph cuts as an efficient optimization technique for three-dimensional (3-D) deformable image registration. Due to the pixels/voxels-wise graph construction, the use of graph cuts in this context has been mainly limited to two-dimensional (2-D) applications. However, our work overcomes some of the previous limitations by posing the problem on a graph created by adjacent supervoxels, where the number of nodes in the graph is reduced from the number of voxels to the number of supervoxels. We demonstrate how a supervoxel image representation combined with graph cuts-based optimization can be applied to 3-D data. We further show that the application of a relaxed graph representation of the image, followed by guided image filtering over the estimated deformation field, allows us to model "sliding motion." Applying this method to lung image registration results in highly accurate image registration and anatomically plausible estimations of the deformations. Evaluation of our method on a publicly available computed tomography lung image dataset leads to the observation that our approach compares very favorably with state of the art methods in continuous and discrete image registration, achieving target registration error of 1.16 mm on average per landmark.

  5. On-demand server-side image processing for web-based DICOM image display

    NASA Astrophysics Data System (ADS)

    Sakusabe, Takaya; Kimura, Michio; Onogi, Yuzo

    2000-04-01

    Low-cost image delivery is needed in modern networked hospitals. If a hospital has hundreds of clients, the cost of client systems is a big problem. Naturally, a Web-based system is the most effective solution. However, a Web browser alone cannot display medical images with certain image processing, such as a lookup table transformation. We developed a Web-based medical image display system using a Web browser and on-demand server-side image processing. All images displayed on a Web page are generated from DICOM files on a server and delivered on demand. User interaction on the Web page is handled by a client-side scripting technology such as JavaScript. This combination provides the look-and-feel of an imaging workstation, not only in functionality but also in speed. Real-time update of images while tracing mouse motion is achieved in the Web browser without any client-side image processing, which would otherwise require client-side plug-in technology such as Java Applets or ActiveX. We tested the performance of the system in three cases: a single client, a small number of clients on a fast network, and a large number of clients on a normal-speed network. The results show that the communication overhead is very slight and that the system is very scalable in the number of clients.
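
    A lookup table transformation of the kind mentioned above is typically a window/level mapping applied on the server before the rendered image is sent to the browser; the sketch below, using pydicom and Pillow, is illustrative and not the authors' implementation:

    ```python
    import numpy as np
    import pydicom                      # reads the DICOM file on the server
    from PIL import Image               # encodes the rendered image for the browser

    def render_dicom(path: str, center: float, width: float) -> Image.Image:
        """Apply a window/level lookup-table transform and return an 8-bit image."""
        ds = pydicom.dcmread(path)
        pixels = ds.pixel_array.astype(np.float64)
        low, high = center - width / 2.0, center + width / 2.0
        windowed = np.clip((pixels - low) / (high - low), 0.0, 1.0)
        return Image.fromarray((windowed * 255).astype(np.uint8))

    # A web framework route would call render_dicom() on demand and stream the image bytes.
    ```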

  6. Supervoxels for Graph Cuts-Based Deformable Image Registration Using Guided Image Filtering.

    PubMed

    Szmul, Adam; Papież, Bartłomiej W; Hallack, Andre; Grau, Vicente; Schnabel, Julia A

    2017-10-04

    In this work we propose to combine a supervoxel-based image representation with the concept of graph cuts as an efficient optimization technique for 3D deformable image registration. Due to the pixels/voxels-wise graph construction, the use of graph cuts in this context has been mainly limited to 2D applications. However, our work overcomes some of the previous limitations by posing the problem on a graph created by adjacent supervoxels, where the number of nodes in the graph is reduced from the number of voxels to the number of supervoxels. We demonstrate how a supervoxel image representation, combined with graph cuts-based optimization can be applied to 3D data. We further show that the application of a relaxed graph representation of the image, followed by guided image filtering over the estimated deformation field, allows us to model 'sliding motion'. Applying this method to lung image registration, results in highly accurate image registration and anatomically plausible estimations of the deformations. Evaluation of our method on a publicly available Computed Tomography lung image dataset (www.dir-lab.com) leads to the observation that our new approach compares very favorably with state-of-the-art in continuous and discrete image registration methods achieving Target Registration Error of 1.16mm on average per landmark.

  7. An Imaging And Graphics Workstation For Image Sequence Analysis

    NASA Astrophysics Data System (ADS)

    Mostafavi, Hassan

    1990-01-01

    This paper describes an application-specific engineering workstation designed and developed to analyze imagery sequences from a variety of sources. The system combines the software and hardware environment of the modern graphic-oriented workstations with the digital image acquisition, processing and display techniques. The objective is to achieve automation and high throughput for many data reduction tasks involving metric studies of image sequences. The applications of such an automated data reduction tool include analysis of the trajectory and attitude of aircraft, missile, stores and other flying objects in various flight regimes including launch and separation as well as regular flight maneuvers. The workstation can also be used in an on-line or off-line mode to study three-dimensional motion of aircraft models in simulated flight conditions such as wind tunnels. The system's key features are: 1) Acquisition and storage of image sequences by digitizing real-time video or frames from a film strip; 2) computer-controlled movie loop playback, slow motion and freeze frame display combined with digital image sharpening, noise reduction, contrast enhancement and interactive image magnification; 3) multiple leading edge tracking in addition to object centroids at up to 60 fields per second from both live input video or a stored image sequence; 4) automatic and manual field-of-view and spatial calibration; 5) image sequence data base generation and management, including the measurement data products; 6) off-line analysis software for trajectory plotting and statistical analysis; 7) model-based estimation and tracking of object attitude angles; and 8) interface to a variety of video players and film transport sub-systems.

  8. Adding and Deleting Images

    EPA Pesticide Factsheets

    Images are added via the Drupal WebCMS Editor. Once an image is uploaded onto a page, it is available via the Library and your files. You can edit the metadata, delete the image permanently, and/or replace images on the Files tab.

  9. High Resolution Image Reconstruction from Projection of Low Resolution Images DIffering in Subpixel Shifts

    NASA Technical Reports Server (NTRS)

    Mareboyana, Manohar; Le Moigne-Stewart, Jacqueline; Bennett, Jerome

    2016-01-01

    In this paper, we demonstrate a simple algorithm that projects low resolution (LR) images differing in subpixel shifts onto a high resolution (HR), also called super resolution (SR), grid. The algorithm is very effective in accuracy as well as time efficiency. A number of spatial interpolation techniques used in the projection, such as nearest neighbor, inverse-distance weighted averages, and Radial Basis Functions (RBF), yield comparable results. For best accuracy, reconstructing an SR image by a factor of two requires four LR images differing in four independent subpixel shifts. The algorithm has two steps: (i) registration of the low resolution images and (ii) shifting the low resolution images to align with the reference image and projecting them onto the high resolution grid, based on the shifts of each low resolution image, using different interpolation techniques. Experiments are conducted by simulating low resolution images through subpixel shifts and subsampling of an original high resolution image and then reconstructing the high resolution image from the simulated low resolution images. The accuracy of reconstruction is compared using the mean squared error between the original high resolution image and the reconstructed image. The algorithm was tested on remote sensing images and found to outperform previously proposed techniques such as the Iterative Back Projection (IBP), Maximum Likelihood (ML), and maximum a posteriori (MAP) algorithms. The algorithm is robust and is not overly sensitive to registration inaccuracies.
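
    A bare-bones version of the projection step, assuming the subpixel shifts are already known from the registration step and using nearest-neighbor placement on the HR grid (one of the interpolation options mentioned), is sketched below:

    ```python
    import numpy as np

    def project_to_hr_grid(lr_images, shifts, factor=2):
        """Scatter each LR pixel onto the HR grid at its shifted location; average collisions."""
        h, w = lr_images[0].shape
        acc = np.zeros((h * factor, w * factor))
        count = np.zeros_like(acc)
        yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
        for img, (dy, dx) in zip(lr_images, shifts):
            ys = np.clip(np.rint((yy + dy) * factor).astype(int), 0, h * factor - 1)
            xs = np.clip(np.rint((xx + dx) * factor).astype(int), 0, w * factor - 1)
            np.add.at(acc, (ys, xs), img)      # accumulate intensities on the HR grid
            np.add.at(count, (ys, xs), 1)      # count contributions per HR cell
        return acc / np.maximum(count, 1)      # unfilled cells stay zero; could be interpolated
    ```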

  10. Dual light field and polarization imaging using CMOS diffractive image sensors.

    PubMed

    Jayasuriya, Suren; Sivaramakrishnan, Sriram; Chuang, Ellen; Guruaribam, Debashree; Wang, Albert; Molnar, Alyosha

    2015-05-15

    In this Letter we present, to the best of our knowledge, the first integrated CMOS image sensor that can simultaneously perform light field and polarization imaging without the use of external filters or additional optical elements. Previous work has shown how photodetectors with two stacks of integrated metal gratings above them (called angle sensitive pixels) diffract light in a Talbot pattern to capture four-dimensional light fields. We show, in addition to diffractive imaging, that these gratings polarize incoming light and characterize the response of these sensors to polarization and incidence angle. Finally, we show two applications of polarization imaging: imaging stress-induced birefringence and identifying specular reflections in scenes to improve light field algorithms for these scenes.

  11. Video Image Tracking Engine

    NASA Technical Reports Server (NTRS)

    Howard, Richard T. (Inventor); Bryan, ThomasC. (Inventor); Book, Michael L. (Inventor)

    2004-01-01

    A method and system for processing an image including capturing an image and storing the image as image pixel data. Each image pixel datum is stored in a respective memory location having a corresponding address. Threshold pixel data are selected from the image pixel data and linear spot segments are identified from the selected threshold pixel data. The positions of only a first pixel and a last pixel for each linear segment are saved. Movement of one or more objects is tracked by comparing the positions of the first and last pixels of a linear segment present in the captured image with the respective first and last pixel positions in subsequent captured images. Alternatively, additional data for each linear data segment are saved, such as the sum of pixels and the weighted sum of pixels (i.e., each threshold pixel value is multiplied by that pixel's x-location).

  12. Hyperspectral imaging flow cytometer

    DOEpatents

    Sinclair, Michael B.; Jones, Howland D. T.

    2017-10-25

    A hyperspectral imaging flow cytometer can acquire high-resolution hyperspectral images of particles, such as biological cells, flowing through a microfluidic system. The hyperspectral imaging flow cytometer can provide detailed spatial maps of multiple emitting species, cell morphology information, and state of health. An optimized system can image about 20 cells per second. The hyperspectral imaging flow cytometer enables many thousands of cells to be characterized in a single session.

  13. Image Analysis and Modeling

    DTIC Science & Technology

    1976-03-01

    This report summarizes the results of the research program on Image Analysis and Modeling supported by the Defense Advanced Research Projects Agency...The objective is to achieve a better understanding of image structure and to use this knowledge to develop improved image models for use in image ... analysis and processing tasks such as information extraction, image enhancement and restoration, and coding. The ultimate objective of this research is

  14. Ultrasonic Imaging System

    NASA Technical Reports Server (NTRS)

    Youngquist, Robert C. (Inventor); Moerk, Steven (Inventor)

    1999-01-01

    An imaging system is described which can be used either to passively search for sources of ultrasonics or as an active phase imaging system, which can image fires, gas leaks, or air temperature gradients. This system uses an array of ultrasonic receivers coupled to an ultrasound collector or lens to provide an electronic image of the ultrasound intensity in a selected angular region of space. A system is described which includes a video camera to provide a visual reference to the region being examined for ultrasonic signals.

  15. Automatic image registration performance for two different CBCT systems; variation with imaging dose

    NASA Astrophysics Data System (ADS)

    Barber, J.; Sykes, J. R.; Holloway, L.; Thwaites, D. I.

    2014-03-01

    The performance of an automatic image registration algorithm was compared on image sets collected with two commercial CBCT systems, and the relationship with imaging dose was explored. CBCT images of a CIRS Virtually Human Male Pelvis phantom (VHMP) were collected on Varian TrueBeam/OBI and Elekta Synergy/XVI linear accelerators, across a range of mAs settings. Each CBCT image was registered 100 times, with random initial offsets introduced. Image registration was performed using the grey value correlation ratio algorithm in the Elekta XVI software, to a mask of the prostate volume with a 5 mm expansion. Residual registration errors were calculated after correcting for the initial introduced phantom set-up error. Registration performance with the OBI images was similar to that of XVI. There was a clear dependence on imaging dose for the XVI images, with residual errors increasing below 4 mGy. It was not possible to acquire images with doses lower than ~5 mGy with the OBI system, and no evidence of reduced performance was observed at this dose. Registration failures (maximum target registration error > 3.6 mm on the surface of a 30 mm sphere) occurred in 5% to 9% of registrations, except for the lowest dose XVI scan (31%). The uncertainty in automatic image registration with both OBI and XVI images was found to be adequate for clinical use within a normal range of acquisition settings.

  16. Cryo-Imaging and Software Platform for Analysis of Molecular MR Imaging of Micrometastases

    PubMed Central

    Qutaish, Mohammed Q.; Zhou, Zhuxian; Prabhu, David; Liu, Yiqiao; Busso, Mallory R.; Izadnegahdar, Donna; Gargesha, Madhusudhana; Lu, Hong; Lu, Zheng-Rong

    2018-01-01

    We created and evaluated a preclinical, multimodality imaging, and software platform to assess molecular imaging of small metastases. This included experimental methods (e.g., GFP-labeled tumor and high resolution multispectral cryo-imaging), nonrigid image registration, and interactive visualization of imaging agent targeting. We describe technological details earlier applied to GFP-labeled metastatic tumor targeting by molecular MR (CREKA-Gd) and red fluorescent (CREKA-Cy5) imaging agents. Optimized nonrigid cryo-MRI registration enabled nonambiguous association of MR signals to GFP tumors. Interactive visualization of out-of-RAM volumetric image data allowed one to zoom to a GFP-labeled micrometastasis, determine its anatomical location from color cryo-images, and establish the presence/absence of targeted CREKA-Gd and CREKA-Cy5. In a mouse with >160 GFP-labeled tumors, we determined that in the MR images every tumor in the lung >0.3 mm2 had visible signal and that some metastases as small as 0.1 mm2 were also visible. More tumors were visible in CREKA-Cy5 than in CREKA-Gd MRI. Tape transfer method and nonrigid registration allowed accurate (<11 μm error) registration of whole mouse histology to corresponding cryo-images. Histology showed inflammation and necrotic regions not labeled by imaging agents. This mouse-to-cells multiscale and multimodality platform should uniquely enable more informative and accurate studies of metastatic cancer imaging and therapy. PMID:29805438

  17. Combined use of iterative reconstruction and monochromatic imaging in spinal fusion CT images.

    PubMed

    Wang, Fengdan; Zhang, Yan; Xue, Huadan; Han, Wei; Yang, Xianda; Jin, Zhengyu; Zwar, Richard

    2017-01-01

    Spinal fusion surgery is an important procedure for treating spinal diseases, and computed tomography (CT) is a critical tool for postoperative evaluation. However, CT image quality is considerably impaired by metal artifacts and image noise. The aim of this study was to explore whether metal artifacts and image noise can be reduced by combining two technologies: adaptive statistical iterative reconstruction (ASIR) and monochromatic imaging generated by gemstone spectral imaging (GSI) dual-energy CT. A total of 51 patients with 318 spinal pedicle screws were prospectively scanned by dual-energy CT using fast kV-switching GSI between 80 and 140 kVp. Monochromatic GSI images at 110 keV were reconstructed either without or with various levels of ASIR (30%, 50%, 70%, and 100%). The quality of the five sets of images was objectively and subjectively assessed. In the objective image quality assessment, metal artifacts decreased as increasing levels of ASIR were applied (P < 0.001). Moreover, adding ASIR to GSI also decreased image noise (P < 0.001) and improved the signal-to-noise ratio (P < 0.001). The subjective image quality analysis showed good inter-reader concordance, with intra-class correlation coefficients between 0.89 and 0.99. The visualization of peri-implant soft tissue was improved at higher ASIR levels (P < 0.001). Combined use of ASIR and GSI decreased image noise and improved image quality in post-spinal fusion CT scans. Optimal results were achieved with ASIR levels ≥70%. © The Foundation Acta Radiologica 2016.

  18. MISTICA: Minimum Spanning Tree-based Coarse Image Alignment for Microscopy Image Sequences

    PubMed Central

    Ray, Nilanjan; McArdle, Sara; Ley, Klaus; Acton, Scott T.

    2016-01-01

    Registration of an in vivo microscopy image sequence is necessary in many significant studies, including studies of atherosclerosis in large arteries and the heart. Significant cardiac and respiratory motion of the living subject, occasional spells of focal plane changes, drift in the field of view, and long image sequences are the principal roadblocks. The first step in such a registration process is the removal of translational and rotational motion. Next, a deformable registration can be performed. The focus of our study here is to remove the translation and/or rigid body motion that we refer to here as coarse alignment. The existing techniques for coarse alignment are unable to accommodate long sequences often consisting of periods of poor quality images (as quantified by a suitable perceptual measure). Many existing methods require the user to select an anchor image to which other images are registered. We propose a novel method for coarse image sequence alignment based on minimum weighted spanning trees (MISTICA) that overcomes these difficulties. The principal idea behind MISTICA is to re-order the images in shorter sequences, to demote nonconforming or poor quality images in the registration process, and to mitigate the error propagation. The anchor image is selected automatically making MISTICA completely automated. MISTICA is computationally efficient. It has a single tuning parameter that determines graph width, which can also be eliminated by way of additional computation. MISTICA outperforms existing alignment methods when applied to microscopy image sequences of mouse arteries. PMID:26415193

  19. MISTICA: Minimum Spanning Tree-Based Coarse Image Alignment for Microscopy Image Sequences.

    PubMed

    Ray, Nilanjan; McArdle, Sara; Ley, Klaus; Acton, Scott T

    2016-11-01

    Registration of an in vivo microscopy image sequence is necessary in many significant studies, including studies of atherosclerosis in large arteries and the heart. Significant cardiac and respiratory motion of the living subject, occasional spells of focal plane changes, drift in the field of view, and long image sequences are the principal roadblocks. The first step in such a registration process is the removal of translational and rotational motion. Next, a deformable registration can be performed. The focus of our study here is to remove the translation and/or rigid body motion that we refer to here as coarse alignment. The existing techniques for coarse alignment are unable to accommodate long sequences often consisting of periods of poor quality images (as quantified by a suitable perceptual measure). Many existing methods require the user to select an anchor image to which other images are registered. We propose a novel method for coarse image sequence alignment based on minimum weighted spanning trees (MISTICA) that overcomes these difficulties. The principal idea behind MISTICA is to reorder the images in shorter sequences, to demote nonconforming or poor quality images in the registration process, and to mitigate the error propagation. The anchor image is selected automatically making MISTICA completely automated. MISTICA is computationally efficient. It has a single tuning parameter that determines graph width, which can also be eliminated by the way of additional computation. MISTICA outperforms existing alignment methods when applied to microscopy image sequences of mouse arteries.
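
    The core idea of ordering the image sequence via a minimum spanning tree can be sketched with SciPy's graph utilities; the pairwise dissimilarity and the anchor-selection heuristic used here are placeholders, not the perceptual measure or criterion used by the authors:

    ```python
    import numpy as np
    from scipy.sparse.csgraph import minimum_spanning_tree, breadth_first_order

    def coarse_alignment_order(images):
        """Build a dissimilarity graph, take its MST, and traverse it from an automatic anchor."""
        n = len(images)
        w = np.zeros((n, n))
        for i in range(n):
            for j in range(i + 1, n):
                # Placeholder dissimilarity: mean absolute intensity difference
                # (a small offset keeps zero-cost edges present in the sparse graph).
                w[i, j] = w[j, i] = np.mean(np.abs(images[i] - images[j])) + 1e-9
        mst = minimum_spanning_tree(w)                 # sparse matrix of MST edges
        sym = mst + mst.T                              # make it undirected for traversal
        anchor = int(np.argmin(w.sum(axis=1)))         # one simple automatic anchor choice
        order, _ = breadth_first_order(sym, anchor, directed=False)
        return anchor, order                           # register images along this ordering
    ```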

  20. Imaging of sialadenitis

    PubMed Central

    Mukherji, Suresh

    2017-01-01

    Sialadenitis is an inflammation or infection of the salivary glands that may affect the parotid, submandibular and small salivary glands. Imaging findings vary among unilateral or bilateral salivary gland enlargement, atrophy, abscess, ductal dilation, cysts, stones and calcification. Imaging can detect abscess in acute bacterial suppurative sialadenitis, ductal changes with cysts in chronic adult and juvenile recurrent parotitis. Imaging is sensitive for detection of salivary stones and stricture in obstructive sialadenitis. Immunoglobulin G4-sialadenitis appears as bilateral submandibular gland enlargement. Imaging is helpful in staging and surveillance of patients with Sjögren’s syndrome. Correlation of imaging findings with clinical presentation can aid diagnosis of granulomatous sialadenitis. Post-treatment sialadenitis can occur after radiotherapy, radioactive iodine or surgery. PMID:28059621

  1. Laboratory test of a polarimetry imaging subtraction system for the high-contrast imaging

    NASA Astrophysics Data System (ADS)

    Dou, Jiangpei; Ren, Deqing; Zhu, Yongtian; Zhang, Xi; Li, Rong

    2012-09-01

    We propose a polarimetry imaging subtraction test system that can be used for the direct imaging of the light reflected from exoplanets. Such a system is able to remove the speckle noise scattered by wave-front error and can thus enhance high-contrast imaging. In this system, we use a Wollaston Prism (WP) to divide the incoming light into two simultaneous images with perpendicular linear polarizations. One of the images is used as the reference image. Both phase and geometric distortion corrections are then performed on the other image. The corrected image is subtracted from the reference image to remove the speckles. The whole procedure is based on an optimization algorithm, and the target function is to minimize the residual speckles after subtraction. For demonstration purposes, here we only use a circular pupil in the test, without integrating our apodized-pupil coronagraph. It is shown that the best result is obtained by introducing both phase and distortion corrections. Finally, an extra contrast gain of 50 times on average has been reached, which is promising for the direct imaging of exoplanets.

  2. New image-stabilizing system

    NASA Astrophysics Data System (ADS)

    Zhao, Yuejin

    1996-06-01

    In this paper, a new method for image stabilization based on a three-axis image-stabilizing reflecting prism assembly is presented, and the principle of image stabilization in this prism assembly, the formulae for image stabilization, and working formulae with an approximation up to the third power are given in detail. In this image-stabilizing system, a single-chip microcomputer is used to calculate the values of the compensating angles and thus to control the prism assembly. Two gyroscopes act as sensors from which information on angular perturbation is obtained, and three stepping motors drive the prism assembly to compensate for the movement of the image produced by the angular perturbation. The image-stabilizing device so established is a multifold system involving optics, mechanics, electronics and computing.

  3. Real-time computer treatment of THz passive device images with the high image quality

    NASA Astrophysics Data System (ADS)

    Trofimov, Vyacheslav A.; Trofimov, Vladislav V.

    2012-06-01

    We demonstrate real-time computer code that significantly improves the quality of images captured by a passive THz imaging system. The code is not designed only for passive THz devices: it can be applied to any such device, as well as to active THz imaging systems. We applied our code to the computer processing of images captured by four passive THz imaging devices manufactured by different companies. It should be stressed that computer processing of images produced by different companies usually requires different spatial filters. The performance of the current version of the computer code is greater than one image per second for a THz image with more than 5000 pixels and 24-bit number representation. Processing a single THz image produces about 20 images simultaneously, corresponding to various spatial filters. The computer code allows the number of pixels of the processed images to be increased without noticeable reduction of image quality. The performance of the computer code can be increased many times by using parallel algorithms for processing the image. We develop original spatial filters which allow one to see objects with sizes less than 2 cm. The imagery is produced by passive THz imaging devices which captured images of objects hidden under opaque clothes. For images with high noise, we develop an approach which suppresses the noise after computer processing and yields a good-quality image. To illustrate the efficiency of the developed approach, we demonstrate the detection of a liquid explosive, an ordinary explosive, a knife, a pistol, a metal plate, a CD, ceramics, chocolate and other objects hidden under opaque clothes. The results demonstrate the high efficiency of our approach for the detection of hidden objects, and it is a very promising solution for the security problem.

  4. SU-E-J-109: Accurate Contour Transfer Between Different Image Modalities Using a Hybrid Deformable Image Registration and Fuzzy Connected Image Segmentation Method.

    PubMed

    Yang, C; Paulson, E; Li, X

    2012-06-01

    To develop and evaluate a tool that can improve the accuracy of contour transfer between different image modalities under the challenging conditions of low image contrast and large image deformation, compared to a few commonly used methods, for radiation treatment planning. The software tool includes the following steps and functionalities: (1) accepting input images of different modalities; (2) converting existing contours on reference images (e.g., MRI) into delineated volumes and adjusting the intensity within the volumes to match the intensity distribution of the target images (e.g., CT) for an enhanced similarity metric; (3) registering reference and target images using appropriate deformable registration algorithms (e.g., B-spline, demons) and generating deformed contours; (4) mapping the deformed volumes onto the target images and calculating the mean, variance, and center of mass as initialization parameters for subsequent fuzzy connectedness (FC) image segmentation on the target images; (5) generating an affinity map from the FC segmentation; (6) obtaining the final contours by modifying the deformed contours using the affinity map with a gradient distance weighting algorithm. The tool was tested with the CT and MR images of four pancreatic cancer patients acquired at the same respiration phase to minimize motion distortion. Dice's coefficient was calculated against direct delineation on the target image. Contours generated by various methods, including rigid transfer, auto-segmentation, deformable-only transfer and the proposed method, were compared. Fuzzy connected image segmentation needs careful parameter initialization and user involvement. Automatic contour transfer by multi-modality deformable registration leads to up to 10% accuracy improvement over rigid transfer. The two extra proposed steps of adjusting the intensity distribution and modifying the deformed contour with the affinity map further improve the transfer accuracy to 14% on average. Deformable image registration aided by contrast adjustment

  5. Multi-focus image fusion using a guided-filter-based difference image.

    PubMed

    Yan, Xiang; Qin, Hanlin; Li, Jia; Zhou, Huixin; Yang, Tingwu

    2016-03-20

    The aim of multi-focus image fusion technology is to integrate different partially focused images into one all-focused image. To realize this goal, a new multi-focus image fusion method based on a guided filter is proposed and an efficient salient feature extraction method is presented in this paper; feature extraction is the main objective of the present work. Based on salient feature extraction, the guided filter is first used to acquire a smoothed image containing the sharpest regions. To obtain the initial fusion map, we compose a mixed focus measure by combining the variance of image intensities and the energy of the image gradient. Then, the initial fusion map is further processed by a morphological filter to obtain a reprocessed fusion map. Lastly, the final fusion map is determined from the reprocessed fusion map and is optimized by a guided filter. Experimental results demonstrate that the proposed method markedly improves fusion performance compared to previous fusion methods and can be competitive with, or even outperform, state-of-the-art fusion methods in terms of both subjective visual effects and objective quality metrics.
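
    The mixed focus measure described above (intensity variance combined with gradient energy) can be illustrated with a small window-based sketch; the window size and the way the two terms are combined are assumptions:

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter, sobel

    def focus_measure(image: np.ndarray, window: int = 9) -> np.ndarray:
        """Combine local intensity variance with local gradient energy as a focus map."""
        img = image.astype(np.float64)
        mean = uniform_filter(img, window)
        variance = uniform_filter(img * img, window) - mean ** 2
        gx, gy = sobel(img, axis=1), sobel(img, axis=0)
        gradient_energy = uniform_filter(gx ** 2 + gy ** 2, window)
        return variance + gradient_energy      # larger values indicate better-focused pixels

    def initial_fusion_map(image_a: np.ndarray, image_b: np.ndarray) -> np.ndarray:
        """Per-pixel decision map: 1 where image_a is judged sharper, 0 otherwise."""
        return (focus_measure(image_a) >= focus_measure(image_b)).astype(np.uint8)
    ```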

  6. Colour image segmentation using unsupervised clustering technique for acute leukemia images

    NASA Astrophysics Data System (ADS)

    Halim, N. H. Abd; Mashor, M. Y.; Nasir, A. S. Abdul; Mustafa, N.; Hassan, R.

    2015-05-01

    Colour image segmentation has become more popular in computer vision due to its importance in most medical analysis tasks. This paper presents a comparison between the different colour components of the RGB (red, green, blue) and HSI (hue, saturation, intensity) colour models, which are used to segment acute leukemia images. First, partial contrast stretching is applied to the leukemia images to increase the visibility of the blast cells. Then, an unsupervised moving k-means clustering algorithm is applied to the various colour components of the RGB and HSI colour models for the purpose of segmenting the blast cells from the red blood cells and background regions in the leukemia image. The different colour components of the RGB and HSI colour models have been analyzed in order to identify the colour component that gives the best segmentation performance. The segmented images are then processed using a median filter and a region growing technique to reduce noise and smooth the images. The results show that segmentation using the saturation component of the HSI colour model has proven to be the best at segmenting the nuclei of the blast cells in acute leukemia images compared to the other colour components of the RGB and HSI colour models.
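
    A condensed illustration of clustering on the saturation component is given below; standard k-means from scikit-learn and the HSV saturation channel are used here as stand-ins for the moving k-means variant and the HSI saturation component described in the paper:

    ```python
    import numpy as np
    from skimage.color import rgb2hsv
    from sklearn.cluster import KMeans

    def segment_on_saturation(rgb_image: np.ndarray, n_clusters: int = 3) -> np.ndarray:
        """Cluster pixels of the saturation channel; returns a label image."""
        saturation = rgb2hsv(rgb_image)[..., 1]        # channel 1 of HSV is saturation
        labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(
            saturation.reshape(-1, 1))
        return labels.reshape(saturation.shape)
    ```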

  7. Lensless high-resolution photoacoustic imaging scanner for in vivo skin imaging

    NASA Astrophysics Data System (ADS)

    Ida, Taiichiro; Iwazaki, Hideaki; Omuro, Toshiyuki; Kawaguchi, Yasushi; Tsunoi, Yasuyuki; Kawauchi, Satoko; Sato, Shunichi

    2018-02-01

    We previously launched a high-resolution photoacoustic (PA) imaging scanner based on a unique lensless design for in vivo skin imaging. The design, imaging algorithm and characteristics of the system are described in this paper. Neither an optical lens nor an acoustic lens is used in the system. In the imaging head, four sensor elements are arranged quadrilaterally, and by checking the phase differences for PA waves detected with these four sensors, a set of PA signals only originating from a chromophore located on the sensor center axis is extracted for constructing an image. A phantom study using a carbon fiber showed a depth-independent horizontal resolution of 84.0 ± 3.5 µm, and the scan direction-dependent variation of PA signals was about ± 20%. We then performed imaging of vasculature phantoms: patterns of red ink lines with widths of 100 or 200 μm formed in an acrylic block co-polymer. The patterns were visualized with high contrast, showing the capability for imaging arterioles and venues in the skin. Vasculatures in rat burn models and healthy human skin were also clearly visualized in vivo.

  8. An Integrative Object-Based Image Analysis Workflow for UAV Images

    NASA Astrophysics Data System (ADS)

    Yu, Huai; Yan, Tianheng; Yang, Wen; Zheng, Hong

    2016-06-01

    In this work, we propose an integrative framework to process UAV images. The overall process can be viewed as a pipeline consisting of geometric and radiometric corrections, subsequent panoramic mosaicking, and hierarchical image segmentation for later Object Based Image Analysis (OBIA). More precisely, we first introduce an efficient image stitching algorithm, applied after the geometric calibration and radiometric correction, which employs fast feature extraction and matching by combining the local difference binary descriptor and locality-sensitive hashing. We then use a Binary Partition Tree (BPT) representation for the large mosaicked panoramic image, which starts from an initial partition obtained by an over-segmentation algorithm, i.e., simple linear iterative clustering (SLIC). Finally, we build an object-based hierarchical structure by fully considering the spectral and spatial information of the superpixels and their topological relationships. Moreover, an optimal segmentation is obtained by filtering the complex hierarchies into simpler ones according to criteria such as uniform homogeneity and semantic consistency. Experimental results on processing post-seismic UAV images of the 2013 Ya'an earthquake demonstrate the effectiveness and efficiency of the proposed method.
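
    The over-segmentation step that initializes the binary partition tree can be reproduced with the SLIC implementation in scikit-image; the image and parameter values below are purely illustrative:

    ```python
    from skimage.data import astronaut              # any RGB image would do
    from skimage.segmentation import slic, mark_boundaries

    image = astronaut()
    # SLIC over-segmentation: roughly 500 compact superpixels to serve as BPT leaves.
    superpixels = slic(image, n_segments=500, compactness=10.0, start_label=1)
    overlay = mark_boundaries(image, superpixels)   # visual check of the initial partition
    ```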

  9. Employing image processing techniques for cancer detection using microarray images.

    PubMed

    Dehghan Khalilabad, Nastaran; Hassanpour, Hamid

    2017-02-01

    Microarray technology is a powerful genomic tool for simultaneously studying and analyzing the behavior of thousands of genes. The analysis of images obtained from this technology plays a critical role in the detection and treatment of diseases. The aim of the current study is to develop an automated system for analyzing data from microarray images in order to detect cancerous cases. The proposed system consists of three main phases, namely image processing, data mining, and detection of the disease. The image processing phase performs operations such as refining image rotation, gridding (locating genes) and extracting raw data from the images; the data mining phase includes normalizing the extracted data and selecting the more effective genes. Finally, cancerous cells are recognized from the extracted data. To evaluate the performance of the proposed system, microarray data sets for breast cancer, myeloid leukemia and lymphoma from the Stanford Microarray Database are employed. The results indicate that the proposed system is able to identify the type of cancer from these data sets with an accuracy of 95.45%, 94.11%, and 100%, respectively. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. Robust Dehaze Algorithm for Degraded Image of CMOS Image Sensors.

    PubMed

    Qu, Chen; Bi, Du-Yan; Sui, Ping; Chao, Ai-Nong; Wang, Yun-Fei

    2017-09-22

    The CMOS (Complementary Metal-Oxide-Semiconductor) sensor is a new type of solid-state image sensor device widely used in object tracking, object recognition, intelligent navigation, and related fields. However, images captured by outdoor CMOS sensor devices are usually affected by suspended atmospheric particles (such as haze), causing a reduction in image contrast, color distortion problems, and so on. In view of this, we propose a novel dehazing approach based on a locally consistent Markov random field (MRF) framework. The neighboring clique in the traditional MRF is extended to a non-neighboring clique, which is defined on locally consistent blocks based on two clues, namely that both the atmospheric light and the transmission map satisfy the property of local consistency. In this framework, our model can strengthen the constraints over the whole image while incorporating more sophisticated statistical priors, resulting in more expressive modeling power, thus effectively addressing inadequate detail recovery and alleviating color distortion. Moreover, the locally consistent MRF framework can recover details while maintaining better dehazing results, which effectively improves the quality of images captured by the CMOS image sensor. Experimental results verified that the proposed method has the combined advantages of detail recovery and color preservation.

  11. Improving Secondary Ion Mass Spectrometry Image Quality with Image Fusion

    PubMed Central

    Tarolli, Jay G.; Jackson, Lauren M.; Winograd, Nicholas

    2014-01-01

    The spatial resolution of chemical images acquired with cluster secondary ion mass spectrometry (SIMS) is limited not only by the size of the probe utilized to create the images, but also by detection sensitivity. As the probe size is reduced to below 1 µm, for example, a low signal in each pixel limits lateral resolution due to counting statistics considerations. Although it can be useful to implement numerical methods to mitigate this problem, here we investigate the use of image fusion to combine information from scanning electron microscope (SEM) data with chemically resolved SIMS images. The advantage of this approach is that the higher intensity and, hence, spatial resolution of the electron images can help to improve the quality of the SIMS images without sacrificing chemical specificity. Using a pan-sharpening algorithm, the method is illustrated using synthetic data, experimental data acquired from a metallic grid sample, and experimental data acquired from a lawn of algae cells. The results show that up to an order of magnitude increase in spatial resolution is possible to achieve. A cross-correlation metric is utilized for evaluating the reliability of the procedure. PMID:24912432

  12. Improving Secondary Ion Mass Spectrometry Image Quality with Image Fusion

    NASA Astrophysics Data System (ADS)

    Tarolli, Jay G.; Jackson, Lauren M.; Winograd, Nicholas

    2014-12-01

    The spatial resolution of chemical images acquired with cluster secondary ion mass spectrometry (SIMS) is limited not only by the size of the probe utilized to create the images but also by detection sensitivity. As the probe size is reduced to below 1 μm, for example, a low signal in each pixel limits lateral resolution because of counting statistics considerations. Although it can be useful to implement numerical methods to mitigate this problem, here we investigate the use of image fusion to combine information from scanning electron microscope (SEM) data with chemically resolved SIMS images. The advantage of this approach is that the higher intensity and, hence, spatial resolution of the electron images can help to improve the quality of the SIMS images without sacrificing chemical specificity. Using a pan-sharpening algorithm, the method is illustrated using synthetic data, experimental data acquired from a metallic grid sample, and experimental data acquired from a lawn of algae cells. The results show that up to an order of magnitude increase in spatial resolution is possible to achieve. A cross-correlation metric is utilized for evaluating the reliability of the procedure.
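
    The cross-correlation metric used to check the reliability of the fusion is not specified in detail in the abstract; a zero-mean normalized cross-correlation between the fused image and a reference, as sketched below, is one standard formulation:

    ```python
    import numpy as np

    def normalized_cross_correlation(a: np.ndarray, b: np.ndarray) -> float:
        """Zero-mean normalized cross-correlation; 1.0 means perfectly correlated images."""
        a = a.astype(np.float64) - a.mean()
        b = b.astype(np.float64) - b.mean()
        return float(np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    ```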

  13. Image segmentation-based robust feature extraction for color image watermarking

    NASA Astrophysics Data System (ADS)

    Li, Mianjie; Deng, Zeyu; Yuan, Xiaochen

    2018-04-01

    This paper proposes a local digital image watermarking method based on Robust Feature Extraction. The segmentation is achieved by Simple Linear Iterative Clustering (SLIC) based on which an Image Segmentation-based Robust Feature Extraction (ISRFE) method is proposed for feature extraction. Our method can adaptively extract feature regions from the blocks segmented by SLIC. This novel method can extract the most robust feature region in every segmented image. Each feature region is decomposed into low-frequency domain and high-frequency domain by Discrete Cosine Transform (DCT). Watermark images are then embedded into the coefficients in the low-frequency domain. The Distortion-Compensated Dither Modulation (DC-DM) algorithm is chosen as the quantization method for embedding. The experimental results indicate that the method has good performance under various attacks. Furthermore, the proposed method can obtain a trade-off between high robustness and good image quality.
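
    As a rough illustration of quantization-based embedding in the low-frequency DCT domain, the sketch below performs plain dithered quantization of a single coefficient of an 8x8 block; the full distortion-compensated dither modulation (DC-DM) scheme adds a compensation term, and the block size, coefficient choice, and step size here are assumptions:

    ```python
    import numpy as np
    from scipy.fftpack import dct, idct

    def embed_bit(block: np.ndarray, bit: int, step: float = 8.0) -> np.ndarray:
        """Embed one bit into a low-frequency DCT coefficient of an 8x8 block by dithered quantization."""
        coeffs = dct(dct(block.astype(np.float64), axis=0, norm="ortho"), axis=1, norm="ortho")
        dither = 0.0 if bit == 0 else step / 2.0
        coeffs[1, 1] = np.round((coeffs[1, 1] - dither) / step) * step + dither
        return idct(idct(coeffs, axis=1, norm="ortho"), axis=0, norm="ortho")
    ```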

  14. Enhancement of PET Images

    NASA Astrophysics Data System (ADS)

    Davis, Paul B.; Abidi, Mongi A.

    1989-05-01

    PET is the only imaging modality that provides doctors with early analytic and quantitative biochemical assessment and precise localization of pathology. In PET images, boundary information as well as local pixel intensity are both crucial for manual and/or automated feature tracing, extraction, and identification. Unfortunately, the present PET technology does not provide the necessary image quality from which such precise analytic and quantitative measurements can be made. PET images suffer from significantly high levels of radial noise present in the form of streaks caused by the inexactness of the models used in image reconstruction. In this paper, our objective is to model PET noise and remove it without altering dominant features in the image. The ultimate goal is to enhance these dominant features to allow for automatic computer interpretation and classification of PET images by developing techniques that take into consideration PET signal characteristics, data collection, and data reconstruction. We have modeled the noise streaks in PET images in both rectangular and polar representations and have shown, both analytically and through computer simulation, that they exhibit consistent mapping patterns. A class of filters was designed and applied successfully. Visual inspection of the filtered images shows clear enhancement over the original images.

  15. Live-cell imaging.

    PubMed

    Cole, Richard

    2014-01-01

    It would be hard to argue that live-cell imaging has not changed our view of biology. The past 10 years have seen an explosion of interest in imaging cellular processes, down to the molecular level. There are now many advanced techniques being applied to live-cell imaging. However, cellular health is often under-appreciated. For many researchers, if the cell at the end of the experiment has not gone into apoptosis or is not blebbed beyond recognition, then all is well. This is simply incorrect. There are many factors that need to be considered when performing live-cell imaging in order to maintain cellular health, such as: imaging modality, media, temperature, humidity, pH, osmolality, and photon dose. The wavelength of the illuminating light, and the total photon dose that the cells are exposed to, comprise two of the most important and controllable parameters of live-cell imaging. The lowest photon dose that achieves a measurable metric for the experimental question should be used, not the dose that produces cover-photo-quality images. This is paramount to ensure that the cellular processes being investigated are in their in vitro state and not shifted to an alternate pathway due to environmental stress. The timing of mitosis is an ideal canary in the coal mine, in that any stress induced by the imaging will result in an increased length of mitosis, thus providing a control for the current imaging conditions.

  16. Live-cell imaging

    PubMed Central

    Cole, Richard

    2014-01-01

    It would be hard to argue that live-cell imaging has not changed our view of biology. The past 10 years have seen an explosion of interest in imaging cellular processes, down to the molecular level. There are now many advanced techniques being applied to live-cell imaging. However, cellular health is often underappreciated. For many researchers, if the cell at the end of the experiment has not gone into apoptosis or is blebbed beyond recognition, then all is well. This is simply incorrect. There are many factors that need to be considered when performing live-cell imaging in order to maintain cellular health, such as imaging modality, media, temperature, humidity, pH, osmolality, and photon dose. The wavelength of illuminating light, and the total photon dose that the cells are exposed to, comprise two of the most important and controllable parameters of live-cell imaging. The lowest photon dose that achieves a measurable metric for the experimental question should be used, not the dose that produces cover-photo-quality images. This is paramount to ensure that the cellular processes being investigated are in their in vitro state and not shifted to an alternate pathway due to environmental stress. The timing of mitosis is an ideal canary in the coal mine, in that any stress induced by the imaging will result in an increased length of mitosis, thus providing a control for the current imaging conditions. PMID:25482523

  17. Evaluation of the visual performance of image processing pipes: information value of subjective image attributes

    NASA Astrophysics Data System (ADS)

    Nyman, G.; Häkkinen, J.; Koivisto, E.-M.; Leisti, T.; Lindroos, P.; Orenius, O.; Virtanen, T.; Vuori, T.

    2010-01-01

    Subjective image quality data for 9 image processing pipes and 8 image contents (taken with a mobile phone camera; 72 natural scene test images altogether) were collected from 14 test subjects. A triplet comparison setup and a hybrid qualitative/quantitative methodology were applied. MOS data and spontaneous, subjective image quality attributes for each test image were recorded. The use of positive and negative image quality attributes by the experimental subjects suggested a significant difference between the subjective spaces of low and high image quality. The robustness of the attribute data was shown by correlating DMOS data of the test images against their corresponding average subjective attribute vector length data. The findings demonstrate the information value of spontaneous, subjective image quality attributes in evaluating image quality at variable quality levels. We discuss the implications of these findings for the development of sensitive performance measures and methods in profiling image processing systems and their components, especially at high image quality levels.

  18. Accurate determination of imaging modality using an ensemble of text- and image-based classifiers.

    PubMed

    Kahn, Charles E; Kalpathy-Cramer, Jayashree; Lam, Cesar A; Eldredge, Christina E

    2012-02-01

    Imaging modality can aid retrieval of medical images for clinical practice, research, and education. We evaluated whether an ensemble classifier could outperform its constituent individual classifiers in determining the modality of figures from radiology journals. Seventeen automated classifiers analyzed 77,495 images from two radiology journals. Each classifier assigned one of eight imaging modalities (computed tomography, graphic, magnetic resonance imaging, nuclear medicine, positron emission tomography, photograph, ultrasound, or radiograph) to each image based on visual and/or textual information. Three physicians determined the modality of 5,000 randomly selected images as a reference standard. A "Simple Vote" ensemble classifier assigned each image to the modality that received the greatest number of individual classifiers' votes. A "Weighted Vote" classifier weighted each individual classifier's vote based on performance over a training set. For each image, this classifier's output was the imaging modality that received the greatest weighted vote score. We measured precision, recall, and F score (the harmonic mean of precision and recall) for each classifier. Individual classifiers' F scores ranged from 0.184 to 0.892. The simple vote and weighted vote classifiers correctly assigned 4,565 images (F score, 0.913; 95% confidence interval, 0.905-0.921) and 4,672 images (F score, 0.934; 95% confidence interval, 0.927-0.941), respectively. The weighted vote classifier performed significantly better than all individual classifiers. An ensemble classifier correctly determined the imaging modality of 93% of figures in our sample. The imaging modality of figures published in radiology journals can be determined with high accuracy, which will improve systems for image retrieval.
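
    As a generic illustration of the weighted-vote scheme described above (each classifier's vote scaled by a weight derived from its training-set performance), here is a small Python sketch. The labels, weights, and the use of a per-classifier F score as the weight are illustrative assumptions, not the authors' exact implementation.

    ```python
    # Toy voting ensemble: each classifier votes for a modality label and its vote
    # is weighted by a per-classifier score (e.g., F score measured on a training set).
    from collections import defaultdict
    from typing import Dict, List

    def weighted_vote(votes: List[str], weights: List[float]) -> str:
        """Return the label with the largest total weighted vote."""
        totals: Dict[str, float] = defaultdict(float)
        for label, w in zip(votes, weights):
            totals[label] += w
        return max(totals, key=totals.get)

    def simple_vote(votes: List[str]) -> str:
        """Unweighted majority vote (ties broken arbitrarily)."""
        return weighted_vote(votes, [1.0] * len(votes))

    if __name__ == "__main__":
        # Hypothetical votes from three classifiers for one figure.
        votes = ["CT", "MRI", "CT"]
        weights = [0.89, 0.60, 0.75]           # e.g., training-set F scores
        print(simple_vote(votes))              # -> "CT"
        print(weighted_vote(votes, weights))   # -> "CT"
    ```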

  19. Visible-regime polarimetric imager: a fully polarimetric, real-time imaging system.

    PubMed

    Barter, James D; Thompson, Harold R; Richardson, Christine L

    2003-03-20

    A fully polarimetric optical camera system has been constructed to obtain polarimetric information simultaneously from four synchronized charge-coupled device imagers at video frame rates of 60 Hz and a resolution of 640 × 480 pixels. The imagers view the same scene along the same optical axis by means of a four-way beam-splitting prism similar to those used for multiple-imager, common-aperture color TV cameras. Appropriate polarizing filters in front of each imager provide the polarimetric information. Mueller matrix analysis of the polarimetric response of the prism, analyzing filters, and imagers is applied to the detected intensities in each imager as a function of the applied state of polarization over a wide range of linear and circular polarization combinations to obtain an average polarimetric calibration consistent to approximately 2%. Higher accuracies can be obtained by improvement of the polarimetric modeling of the splitting prism and by implementation of a pixel-by-pixel calibration.
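
    In the standard formulation of such a calibration, the four detected intensities are linear combinations of the incident Stokes parameters through a 4×4 instrument (analysis) matrix, and the Stokes vector is recovered by inverting that matrix. The notation below is the generic textbook form, not necessarily the authors' specific calibration model.

    ```latex
    % Generic polarimetric measurement model: detected intensities I and the
    % instrument matrix A built from the prism, filters, and imager responses.
    \begin{equation}
      \mathbf{I} =
      \begin{pmatrix} I_1 \\ I_2 \\ I_3 \\ I_4 \end{pmatrix}
      = \mathbf{A}\,\mathbf{S},
      \qquad
      \hat{\mathbf{S}} = \mathbf{A}^{-1}\mathbf{I},
    \end{equation}
    ```

    where S = (S0, S1, S2, S3)^T is the Stokes vector of the scene and each row of A follows from the Mueller matrices of the beam-splitting prism and the polarizing filter in front of the corresponding imager.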

  20. Patch-based image reconstruction for PET using prior-image derived dictionaries

    NASA Astrophysics Data System (ADS)

    Tahaei, Marzieh S.; Reader, Andrew J.

    2016-09-01

    In PET image reconstruction, regularization is often needed to reduce the noise in the resulting images. Patch-based image processing techniques have recently been successfully used for regularization in medical image reconstruction through a penalized likelihood framework. Re-parameterization within reconstruction is another powerful regularization technique in which the object in the scanner is re-parameterized using coefficients for spatially-extensive basis vectors. In this work, a method for extracting patch-based basis vectors from the subject’s MR image is proposed. The coefficients for these basis vectors are then estimated using the conventional MLEM algorithm. Furthermore, using the alternating direction method of multipliers, an algorithm for optimizing the Poisson log-likelihood while imposing sparsity on the parameters is also proposed. This novel method is then utilized to find sparse coefficients for the patch-based basis vectors extracted from the MR image. The results indicate the superiority of the proposed methods to patch-based regularization using the penalized likelihood framework.
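
    The core reconstruction idea (re-parameterizing the image as a combination of prior-derived basis vectors and estimating the coefficients with MLEM) can be sketched in a few lines. The system matrix, basis matrix, and data below are synthetic placeholders, and the sparsity-promoting ADMM step proposed in the paper is omitted.

    ```python
    # Sketch of MLEM on basis coefficients: image x = B @ a, data model y ~ Poisson(A @ x).
    # A (system matrix), B (patch-based basis), and y are small synthetic placeholders.
    import numpy as np

    def mlem_coefficients(y, A, B, n_iter=50):
        """Estimate non-negative basis coefficients a such that A @ B @ a fits y."""
        AB = A @ B                                   # combined forward model
        a = np.ones(B.shape[1])                      # non-negative initialization
        sens = AB.sum(axis=0) + 1e-12                # sensitivity (backprojection of ones)
        for _ in range(n_iter):
            proj = AB @ a + 1e-12
            a *= (AB.T @ (y / proj)) / sens          # multiplicative MLEM update
        return a

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        A = rng.uniform(0, 1, (40, 25))              # toy projector: 40 bins, 25 voxels
        B = np.abs(rng.normal(size=(25, 10)))        # 10 non-negative basis vectors
        a_true = np.abs(rng.normal(size=10))
        y = rng.poisson(A @ B @ a_true)              # noisy projection data
        a_hat = mlem_coefficients(y, A, B)
        print(np.round(B @ a_hat, 2))                # reconstructed image estimate
    ```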

  1. Predictive images of postoperative levator resection outcome using image processing software

    PubMed Central

    Mawatari, Yuki; Fukushima, Mikiko

    2016-01-01

    Purpose This study aims to evaluate the efficacy of processed images to predict postoperative appearance following levator resection. Methods Analysis involved 109 eyes from 65 patients with blepharoptosis who underwent advancement of levator aponeurosis and Müller’s muscle complex (levator resection). Predictive images were prepared from preoperative photographs using the image processing software (Adobe Photoshop®). Images of selected eyes were digitally enlarged in an appropriate manner and shown to patients prior to surgery. Results Approximately 1 month postoperatively, we surveyed our patients using questionnaires. Fifty-six patients (89.2%) were satisfied with their postoperative appearances, and 55 patients (84.8%) positively responded to the usefulness of processed images to predict postoperative appearance. Conclusion Showing processed images that predict postoperative appearance to patients prior to blepharoptosis surgery can be useful for those patients concerned with their postoperative appearance. This approach may serve as a useful tool to simulate blepharoptosis surgery. PMID:27757008

  2. Multi-viewpoint Image Array Virtual Viewpoint Rapid Generation Algorithm Based on Image Layering

    NASA Astrophysics Data System (ADS)

    Jiang, Lu; Piao, Yan

    2018-04-01

    The use of a multi-view image array combined with virtual viewpoint generation technology to record 3D scene information in large scenes has become one of the key technologies for the development of integral imaging. This paper presents a virtual viewpoint rendering method based on an image layering algorithm. First, the depth information of the reference viewpoint image is obtained quickly, with SAD chosen as the similarity measure function. The reference image is then layered and the parallax is calculated from the depth information. The image layers are weighted and shifted according to the relative distance between the virtual viewpoint and the reference viewpoint. Finally, the virtual viewpoint image is rendered layer by layer according to the distance between the image layers and the viewer. This method avoids the disadvantages of the DIBR algorithm, such as the high-precision requirements on the depth map and complex mapping operations. Experiments show that this algorithm can achieve the synthesis of virtual viewpoints at any position within a 2×2 viewpoint range, and the rendering speed is also high. On average, the method achieves satisfactory image quality: the average SSIM value of the results relative to real viewpoint images reaches 0.9525, the PSNR value reaches 38.353, and the image histogram similarity reaches 93.77%.
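
    A minimal sketch of the depth-estimation step described above, using the sum of absolute differences (SAD) as the similarity measure for block matching between two rectified views, is shown below; the block size, disparity search range, and input images are illustrative assumptions.

    ```python
    # Block matching with SAD: for each block in the left view, search horizontally
    # in the right view for the displacement with the smallest sum of absolute differences.
    import numpy as np

    def sad_disparity(left, right, block=8, max_disp=16):
        """Return a coarse per-block disparity map for two rectified grayscale images."""
        h, w = left.shape
        disp = np.zeros((h // block, w // block), dtype=np.int32)
        for by in range(h // block):
            for bx in range(w // block):
                y, x = by * block, bx * block
                ref = left[y:y + block, x:x + block].astype(np.float32)
                best_d, best_cost = 0, np.inf
                for d in range(0, min(max_disp, x) + 1):
                    cand = right[y:y + block, x - d:x - d + block].astype(np.float32)
                    cost = np.abs(ref - cand).sum()      # SAD similarity measure
                    if cost < best_cost:
                        best_cost, best_d = cost, d
                disp[by, bx] = best_d
        return disp

    if __name__ == "__main__":
        rng = np.random.default_rng(2)
        right = rng.integers(0, 256, (64, 64), dtype=np.uint8)
        left = np.roll(right, 4, axis=1)                 # synthetic 4-pixel shift
        # Mean is about 3.5 here: disparity 4 everywhere except the leftmost block column.
        print(sad_disparity(left, right).mean())
    ```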

  3. Diffusion-weighted imaging in pediatric body MR imaging: principles, technique, and emerging applications.

    PubMed

    Chavhan, Govind B; Alsabban, Zehour; Babyn, Paul S

    2014-01-01

    Diffusion-weighted (DW) imaging is an emerging technique in body imaging that provides indirect information about the microenvironment of tissues and lesions and helps detect, characterize, and follow up abnormalities. Two main challenges in the application of DW imaging to body imaging are the decreased signal-to-noise ratio of body tissues compared with neuronal tissues due to their shorter T2 relaxation time, and image degradation related to physiologic motion (eg, respiratory motion). Use of smaller b values and newer motion compensation techniques allow the evaluation of anatomic structures with DW imaging. DW imaging can be performed as a breath-hold sequence or a free-breathing sequence with or without respiratory triggering. Depending on the mobility of water molecules in their microenvironment, different normal tissues have different signals at DW imaging. Some normal tissues (eg, lymph nodes, spleen, ovarian and testicular parenchyma) are diffusion restricted, whereas others (eg, gallbladder, corpora cavernosa, endometrium, cartilage) show T2 shine-through. Epiphyses that contain fatty marrow and bone cortex appear dark on both DW images and apparent diffusion coefficient maps. Current and emerging applications of DW imaging in pediatric body imaging include tumor detection and characterization, assessment of therapy response and monitoring of tumors, noninvasive detection and grading of liver fibrosis and cirrhosis, detection of abscesses, and evaluation of inflammatory bowel disease. RSNA, 2014

  4. Geometry planning and image registration in magnetic particle imaging using bimodal fiducial markers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Werner, F., E-mail: f.werner@uke.de; Hofmann, M.; Them, K.

    Purpose: Magnetic particle imaging (MPI) is a quantitative imaging modality that allows the distribution of superparamagnetic nanoparticles to be visualized. Compared to other imaging techniques like x-ray radiography, computed tomography (CT), and magnetic resonance imaging (MRI), MPI only provides a signal from the administered tracer, but no additional morphological information, which complicates geometry planning and the interpretation of MP images. The purpose of the authors’ study was to develop bimodal fiducial markers that can be visualized by MPI and MRI in order to create MP–MR fusion images. Methods: A certain arrangement of three bimodal fiducial markers was developed and used in a combined MRI/MPI phantom and also during in vivo experiments in order to investigate its suitability for geometry planning and image fusion. An algorithm for automated marker extraction in both MR and MP images and rigid registration was established. Results: The developed bimodal fiducial markers can be visualized by MRI and MPI and allow for geometry planning as well as automated registration and fusion of MR–MP images. Conclusions: To date, exact positioning of the object to be imaged within the field of view (FOV) and the assignment of reconstructed MPI signals to corresponding morphological regions has been difficult. The developed bimodal fiducial markers and the automated image registration algorithm help to overcome these difficulties.

  5. Influence of image registration on apparent diffusion coefficient images computed from free-breathing diffusion MR images of the abdomen.

    PubMed

    Guyader, Jean-Marie; Bernardin, Livia; Douglas, Naomi H M; Poot, Dirk H J; Niessen, Wiro J; Klein, Stefan

    2015-08-01

    To evaluate the influence of image registration on apparent diffusion coefficient (ADC) images obtained from abdominal free-breathing diffusion-weighted MR images (DW-MRIs). A comprehensive pipeline based on automatic three-dimensional nonrigid image registrations is developed to compensate for misalignments in DW-MRI datasets obtained from five healthy subjects scanned twice. Motion is corrected both within each image and between images in a time series. ADC distributions are compared with and without registration in two abdominal volumes of interest (VOIs). The effects of interpolations and Gaussian blurring as alternative strategies to reduce motion artifacts are also investigated. Among the four considered scenarios (no processing, interpolation, blurring and registration), registration yields the best alignment scores. Median ADCs vary according to the chosen scenario: for the considered datasets, ADCs obtained without processing are 30% higher than with registration. Registration improves voxelwise reproducibility at least by a factor of 2 and decreases uncertainty (Fréchet-Cramér-Rao lower bound). Registration provides similar improvements in reproducibility and uncertainty as acquiring four times more data. Patient motion during image acquisition leads to misaligned DW-MRIs and inaccurate ADCs, which can be addressed using automatic registration. © 2014 Wiley Periodicals, Inc.

  6. Image Processing System

    NASA Technical Reports Server (NTRS)

    1986-01-01

    Mallinckrodt Institute of Radiology (MIR) is using a digital image processing system which employs NASA-developed technology. MIR's computer system is the largest radiology system in the world. It is used in diagnostic imaging. Blood vessels are injected with x-ray dye, and the images which are produced indicate whether arteries are hardened or blocked. A computer program developed by Jet Propulsion Laboratory known as Mini-VICAR/IBIS was supplied to MIR by COSMIC. The program provides the basis for developing the computer imaging routines for data processing, contrast enhancement and picture display.

  7. Comparison of image quality and radiation dose between split-filter dual-energy images and single-energy images in single-source abdominal CT.

    PubMed

    Euler, André; Obmann, Markus M; Szucs-Farkas, Zsolt; Mileto, Achille; Zaehringer, Caroline; Falkowski, Anna L; Winkel, David J; Marin, Daniele; Stieltjes, Bram; Krauss, Bernhard; Schindera, Sebastian T

    2018-02-19

    To compare image quality and radiation dose of abdominal split-filter dual-energy CT (SF-DECT) combined with monoenergetic imaging to single-energy CT (SECT) with automatic tube voltage selection (ATVS). Two hundred single-source abdominal CT scans were performed as SECT with ATVS (n = 100) and SF-DECT (n = 100). SF-DECT scans were reconstructed and subdivided into composed images (SF-CI) and monoenergetic images at 55 keV (SF-MI). Objective and subjective image quality were compared among single-energy images (SEI), SF-CI and SF-MI. CNR and FOM were calculated separately for the liver (e.g. CNRliv) and the portal vein (CNRpv). Radiation dose was compared using the size-specific dose estimate (SSDE). Results of the three groups were compared using non-parametric tests. Image noise of SF-CI was 18% lower compared to SEI and 48% lower compared to SF-MI (p < 0.001). Composed images yielded higher CNRliv than single-energy images (23.4 vs. 20.9; p < 0.001), whereas CNRpv was significantly lower (3.5 vs. 5.2; p < 0.001). Monoenergetic images overcame this inferiority in CNRpv and achieved similar results compared to single-energy images (5.1 vs. 5.2; p > 0.628). Subjective sharpness was equal between single-energy and monoenergetic images, and diagnostic confidence was equal between single-energy and composed images. FOMliv was highest for SF-CI. FOMpv was equal for SEI and SF-MI (p = 0.78). SSDE was significantly lower for SF-DECT compared to SECT (p < 0.022). The combined use of split-filter dual-energy CT images provides comparable objective and subjective image quality at lower radiation dose compared to single-energy CT with ATVS. • Split-filter dual-energy results in 18% lower noise compared to single-energy with ATVS. • Split-filter dual-energy results in 11% lower SSDE compared to single-energy with ATVS. • Spectral shaping of split-filter dual-energy leads to increased dose-efficiency.
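
    The figure of merit (FOM) used above is not defined in the abstract; a common dose-efficiency definition, which this kind of comparison typically follows, normalizes the squared contrast-to-noise ratio by the radiation dose. The formulas below are that standard definition, stated here as an assumption rather than the authors' exact equations.

    ```latex
    % Commonly used contrast-to-noise ratio and dose-efficiency figure of merit.
    \begin{equation}
      \mathrm{CNR} =
        \frac{\left|\,\overline{HU}_{\text{object}} - \overline{HU}_{\text{background}}\,\right|}
             {\sigma_{\text{background}}},
      \qquad
      \mathrm{FOM} = \frac{\mathrm{CNR}^{2}}{\mathrm{SSDE}} .
    \end{equation}
    ```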

  8. Image tools for UNIX

    NASA Technical Reports Server (NTRS)

    Banks, David C.

    1994-01-01

    This talk features two simple and useful tools for digital image processing in the UNIX environment: xv and pbmplus. The xv image viewer, which runs under the X window system, reads images in a number of different file formats and writes them out in different formats. The view area supports a pop-up control panel. The 'algorithms' menu lets you blur an image. The xv control panel also activates the color editor, which displays the image's color map (if one exists). The xv image viewer is available through the internet. The pbmplus package is a set of tools designed to perform image processing from within a UNIX shell. The acronym 'pbm' stands for portable bit map. Like xv, the pbmplus tools can convert images from and to many different file formats. The source code and manual pages for pbmplus are also available through the internet. This software is in the public domain.

  9. SYMPOSIUM ON MULTIMODALITY CARDIOVASCULAR MOLECULAR IMAGING IMAGING TECHNOLOGY - PART 2

    PubMed Central

    de Kemp, Robert A.; Epstein, Frederick H.; Catana, Ciprian; Tsui, Benjamin M.W.; Ritman, Erik L.

    2013-01-01

    Rationale The ability to trace or identify specific molecules within a specific anatomic location provides insight into metabolic pathways, tissue components and tracing of solute transport mechanisms. With the increasing use of small animals for research such imaging must have sufficiently high spatial resolution to allow anatomic localization as well as sufficient specificity and sensitivity to provide an accurate description of the molecular distribution and concentration. Methods Imaging methods based on electromagnetic radiation, such as PET, SPECT, MRI and CT, are increasingly applicable due to recent advances in novel scanner hardware, image reconstruction software and availability of novel molecules which have enhanced sensitivity in these methodologies. Results Micro-PET has been advanced by development of detector arrays that provide higher resolution and positron emitting elements that allow new molecular tracers to be labeled. Micro-MRI has been improved in terms of spatial resolution and sensitivity by increased magnet field strength and development of special purpose coils and associated scan protocols. Of particular interest is the associated ability to image local mechanical function and solute transport processes which can be directly related to the molecular information. This is further strengthened by the synergistic integration of the PET with MRI. Micro-SPECT has been improved by use of coded aperture imaging approaches as well as image reconstruction algorithms which can better deal with the photon limited scan data. The limited spatial resolution can be partially overcome by integrating the SPECT with CT. Micro-CT by itself provides exquisite spatial resolution of anatomy, but recent developments of high spatial resolution photon counting and spectrally-sensitive imaging arrays, combined with x-ray optical devices, have promise for actual molecular identification by virtue of the chemical bond lengths of molecules, especially of bio

  10. The effect of image quality and forensic expertise in facial image comparisons.

    PubMed

    Norell, Kristin; Läthén, Klas Brorsson; Bergström, Peter; Rice, Allyson; Natu, Vaidehi; O'Toole, Alice

    2015-03-01

    Images of perpetrators in surveillance video footage are often used as evidence in court. In this study, identification accuracy in facial image comparisons was compared for forensic experts and untrained persons, along with the impact of image quality. Participants viewed thirty image pairs and were asked to rate the level of support garnered from their observations for concluding whether or not the two images showed the same person. Forensic experts reached their conclusions with significantly fewer errors than did untrained participants. They were also better than novices at determining when two high-quality images depicted the same person. Notably, lower image quality led to more careful conclusions by experts, but not by untrained participants. In summary, the untrained participants had more false negatives and false positives than experts; the latter case could lead to a higher risk of an innocent person being convicted when the comparison is made by an untrained witness. © 2014 American Academy of Forensic Sciences.

  11. Image registration assessment in radiotherapy image guidance based on control chart monitoring.

    PubMed

    Xia, Wenyao; Breen, Stephen L

    2018-04-01

    Image guidance with cone beam computed tomography in radiotherapy can guarantee the precision and accuracy of patient positioning prior to treatment delivery. During the image guidance process, operators must expend considerable effort to evaluate the image guidance quality before correcting a patient's position. This work proposes an image registration assessment method based on control chart monitoring to reduce the effort required of the operator. Using the control chart plotted from the daily registration scores of each patient, the proposed method can quickly detect both alignment errors and image quality inconsistency. Therefore, the proposed method can provide a clear guideline for operators to identify unacceptable image quality and unacceptable image registration with minimal effort. Experimental results demonstrate that, using control charts from a clinical database of 10 patients undergoing prostate radiotherapy, the proposed method can quickly identify out-of-control signals and find special causes of out-of-control registration events.
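
    The abstract does not give the chart construction details; a common choice is an individuals (Shewhart) chart with control limits at the mean plus or minus three standard deviations of baseline registration scores, as sketched below. The scores and the three-sigma limit rule are illustrative assumptions, not the paper's specific design.

    ```python
    # Individuals control chart on daily registration scores: points outside
    # mean +/- 3*sigma of a baseline period are flagged as out-of-control signals.
    import numpy as np

    def control_limits(baseline_scores):
        """Return (lower, upper) control limits from baseline registration scores."""
        mu = np.mean(baseline_scores)
        sigma = np.std(baseline_scores, ddof=1)
        return mu - 3 * sigma, mu + 3 * sigma

    def out_of_control(scores, lcl, ucl):
        """Return indices of registration scores that fall outside the control limits."""
        scores = np.asarray(scores)
        return np.where((scores < lcl) | (scores > ucl))[0]

    if __name__ == "__main__":
        baseline = [0.91, 0.93, 0.90, 0.92, 0.94, 0.92, 0.91]   # hypothetical daily scores
        lcl, ucl = control_limits(baseline)
        new_days = [0.92, 0.70, 0.93]                           # day 2 mimics an alignment error
        print(out_of_control(new_days, lcl, ucl))               # -> [1]
    ```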

  12. PICASSO: an end-to-end image simulation tool for space and airborne imaging systems

    NASA Astrophysics Data System (ADS)

    Cota, Steve A.; Bell, Jabin T.; Boucher, Richard H.; Dutton, Tracy E.; Florio, Chris J.; Franz, Geoffrey A.; Grycewicz, Thomas J.; Kalman, Linda S.; Keller, Robert A.; Lomheim, Terrence S.; Paulson, Diane B.; Willkinson, Timothy S.

    2008-08-01

    The design of any modern imaging system is the end result of many trade studies, each seeking to optimize image quality within real world constraints such as cost, schedule and overall risk. Image chain analysis - the prediction of image quality from fundamental design parameters - is an important part of this design process. At The Aerospace Corporation we have been using a variety of image chain analysis tools for many years, the Parameterized Image Chain Analysis & Simulation SOftware (PICASSO) among them. In this paper we describe our PICASSO tool, showing how, starting with a high quality input image and hypothetical design descriptions representative of the current state of the art in commercial imaging satellites, PICASSO can generate standard metrics of image quality in support of the decision processes of designers and program managers alike.

  13. PICASSO: an end-to-end image simulation tool for space and airborne imaging systems

    NASA Astrophysics Data System (ADS)

    Cota, Stephen A.; Bell, Jabin T.; Boucher, Richard H.; Dutton, Tracy E.; Florio, Christopher J.; Franz, Geoffrey A.; Grycewicz, Thomas J.; Kalman, Linda S.; Keller, Robert A.; Lomheim, Terrence S.; Paulson, Diane B.; Wilkinson, Timothy S.

    2010-06-01

    The design of any modern imaging system is the end result of many trade studies, each seeking to optimize image quality within real world constraints such as cost, schedule and overall risk. Image chain analysis - the prediction of image quality from fundamental design parameters - is an important part of this design process. At The Aerospace Corporation we have been using a variety of image chain analysis tools for many years, the Parameterized Image Chain Analysis & Simulation SOftware (PICASSO) among them. In this paper we describe our PICASSO tool, showing how, starting with a high quality input image and hypothetical design descriptions representative of the current state of the art in commercial imaging satellites, PICASSO can generate standard metrics of image quality in support of the decision processes of designers and program managers alike.

  14. Performance assessment of 3D surface imaging technique for medical imaging applications

    NASA Astrophysics Data System (ADS)

    Li, Tuotuo; Geng, Jason; Li, Shidong

    2013-03-01

    Recent developments in optical 3D surface imaging technologies provide better ways to digitize the 3D surface and its motion in real time. The non-invasive 3D surface imaging approach has great potential for many medical imaging applications, such as motion monitoring in radiotherapy and pre/post evaluation in plastic surgery and dermatology, to name a few. Various commercial 3D surface imaging systems have appeared on the market with different dimensions, speeds, and accuracies. For clinical applications, accuracy, reproducibility, and robustness across widely heterogeneous skin color, tone, texture, and shape properties, and across ambient lighting, are crucial. To date, no systematic approach for evaluating the performance of different 3D surface imaging systems exists. In this paper, we present a systematic approach for assessing the performance of 3D surface imaging systems for medical applications. We use this assessment approach to examine a new real-time surface imaging system we developed, dubbed the "Neo3D Camera", for image-guided radiotherapy (IGRT). The assessments include accuracy, field of view, coverage, repeatability, speed, and sensitivity to environment, texture, and color.

  15. Dynamic PET Image reconstruction for parametric imaging using the HYPR kernel method

    NASA Astrophysics Data System (ADS)

    Spencer, Benjamin; Qi, Jinyi; Badawi, Ramsey D.; Wang, Guobao

    2017-03-01

    Dynamic PET image reconstruction is a challenging problem because of the ill-conditioned nature of PET and the low counting statistics resulting from short time frames in dynamic imaging. The kernel method for image reconstruction has been developed to improve image reconstruction of low-count PET data by incorporating prior information derived from high-count composite data. In contrast to most of the existing regularization-based methods, the kernel method embeds image prior information in the forward projection model and does not require an explicit regularization term in the reconstruction formula. Inspired by the existing highly constrained back-projection (HYPR) algorithm for dynamic PET image denoising, we propose in this work a new type of kernel that is simpler to implement and further improves kernel-based dynamic PET image reconstruction. Our evaluation study using a physical phantom scan with synthetic FDG tracer kinetics has demonstrated that the new HYPR kernel-based reconstruction can achieve a better region-of-interest (ROI) bias versus standard deviation trade-off for dynamic PET parametric imaging than the post-reconstruction HYPR denoising method and the previously used nonlocal-means kernel.

  16. Multimodal Diffuse Optical Imaging

    NASA Astrophysics Data System (ADS)

    Intes, Xavier; Venugopal, Vivek; Chen, Jin; Azar, Fred S.

    Diffuse optical imaging, particularly diffuse optical tomography (DOT), is an emerging clinical modality capable of providing unique functional information, at a relatively low cost, and with nonionizing radiation. Multimodal diffuse optical imaging has enabled a synergistic combination of functional and anatomical information: the quality of DOT reconstructions has been significantly improved by incorporating the structural information derived by the combined anatomical modality. In this chapter, we will review the basic principles of diffuse optical imaging, including instrumentation and reconstruction algorithm design. We will also discuss the approaches for multimodal imaging strategies that integrate DOI with clinically established modalities. The merit of the multimodal imaging approaches is demonstrated in the context of optical mammography, but the techniques described herein can be translated to other clinical scenarios such as brain functional imaging or muscle functional imaging.

  17. Industrial X-Ray Imaging

    NASA Technical Reports Server (NTRS)

    1997-01-01

    In 1990, Lewis Research Center jointly sponsored a conference with the U.S. Air Force Wright Laboratory focused on high speed imaging. This conference, and early funding by Lewis Research Center, helped to spur work by Silicon Mountain Design, Inc. to break the performance barriers of imaging speed, resolution, and sensitivity through innovative technology. Later, under a Small Business Innovation Research contract with the Jet Propulsion Laboratory, the company designed a real-time image enhancing camera that yields superb, high quality images in 1/30th of a second while limiting distortion. The result is a rapidly available, enhanced image showing significantly greater detail compared to image processing executed on digital computers. Current applications include radiographic and pathology-based medicine, industrial imaging, x-ray inspection devices, and automated semiconductor inspection equipment.

  18. Evaluating imaging quality between different ghost imaging systems based on the coherent-mode representation

    NASA Astrophysics Data System (ADS)

    Shen, Qian; Bai, Yanfeng; Shi, Xiaohui; Nan, Suqin; Qu, Lijie; Li, Hengxing; Fu, Xiquan

    2017-07-01

    The difference in imaging quality between different ghost imaging schemes is studied by using the coherent-mode representation of partially coherent fields. It is shown that the difference mainly relies on changes in the distribution of the decomposition coefficients of the imaged object when the light source is fixed. For a newly designed imaging scheme, one only needs to obtain the distribution of the decomposition coefficients and compare it with that of an existing imaging system in order to predict imaging quality. By choosing several typical ghost imaging systems, we theoretically and experimentally verify our results.
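
    For readers unfamiliar with the coherent-mode representation invoked here: the cross-spectral density of a partially coherent source can be written as a sum of mutually uncorrelated coherent modes, and the object can be expanded in the same mode basis. The notation below is the standard textbook form (e.g., Mandel and Wolf), not necessarily the exact notation of this paper.

    ```latex
    % Coherent-mode decomposition of the cross-spectral density and the
    % expansion coefficients of the object in the same mode basis.
    \begin{align}
      W(\mathbf{r}_1,\mathbf{r}_2,\omega)
        &= \sum_n \lambda_n(\omega)\,\phi_n^{*}(\mathbf{r}_1,\omega)\,\phi_n(\mathbf{r}_2,\omega), \\
      c_n &= \int t(\mathbf{r})\,\phi_n^{*}(\mathbf{r},\omega)\,\mathrm{d}^2 r ,
    \end{align}
    ```

    where t(r) is the object transmission function and the c_n are the decomposition coefficients whose distribution, according to the abstract, governs the achievable imaging quality.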

  19. Adolescence and Body Image.

    ERIC Educational Resources Information Center

    Weinshenker, Naomi

    2002-01-01

    Discusses body image among adolescents, explaining that today's adolescents are more prone to body image distortions and dissatisfaction than ever and examining the historical context; how self-image develops; normative discontent; body image distortions; body dysmorphic disorder (BDD); vulnerability of boys (muscle dysmorphia); who is at risk;…

  20. Effects of Image Compression on Automatic Count of Immunohistochemically Stained Nuclei in Digital Images

    PubMed Central

    López, Carlos; Lejeune, Marylène; Escrivà, Patricia; Bosch, Ramón; Salvadó, Maria Teresa; Pons, Lluis E.; Baucells, Jordi; Cugat, Xavier; Álvaro, Tomás; Jaén, Joaquín

    2008-01-01

    This study investigates the effects of digital image compression on automatic quantification of immunohistochemical nuclear markers. We examined 188 images with a previously validated computer-assisted analysis system. A first group was composed of 47 images captured in TIFF format, and the other three groups contained the same images converted from TIFF to JPEG format with 3×, 23× and 46× compression. Counts from TIFF format images were compared with those from the other three groups. Overall, differences in the counts increased with the degree of compression. Low-complexity images (≤100 cells/field, without clusters or with small-area clusters) had small differences (<5 cells/field in 95–100% of cases) and high-complexity images showed substantial differences (<35–50 cells/field in 95–100% of cases). Compression does not compromise the accuracy of immunohistochemical nuclear marker counts obtained by computer-assisted analysis systems for digital images with low complexity and could be an efficient method for storing these images. PMID:18755997

  1. Supervoxels for Graph Cuts-Based Deformable Image Registration Using Guided Image Filtering

    PubMed Central

    Szmul, Adam; Papież, Bartłomiej W.; Hallack, Andre; Grau, Vicente; Schnabel, Julia A.

    2017-01-01

    In this work we propose to combine a supervoxel-based image representation with the concept of graph cuts as an efficient optimization technique for 3D deformable image registration. Due to the pixel/voxel-wise graph construction, the use of graph cuts in this context has previously been limited mainly to 2D applications. However, our work overcomes some of these limitations by posing the problem on a graph created by adjacent supervoxels, where the number of nodes in the graph is reduced from the number of voxels to the number of supervoxels. We demonstrate how a supervoxel image representation combined with graph cuts-based optimization can be applied to 3D data. We further show that the application of a relaxed graph representation of the image, followed by guided image filtering over the estimated deformation field, allows us to model ‘sliding motion’. Applying this method to lung image registration results in highly accurate image registration and anatomically plausible estimations of the deformations. Evaluation of our method on a publicly available Computed Tomography lung image dataset (www.dir-lab.com) leads to the observation that our new approach compares very favorably with state-of-the-art continuous and discrete image registration methods, achieving a Target Registration Error of 1.16 mm on average per landmark. PMID:29225433
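
    To make the supervoxel graph idea concrete, the sketch below computes SLIC supervoxels on a 3D volume with scikit-image and then lists adjacent supervoxel pairs, i.e., the nodes and edges on which a graph-cut optimizer would operate. The volume, SLIC parameters, and adjacency construction are illustrative; the paper's actual registration energy and guided filtering step are not shown.

    ```python
    # Build the supervoxel adjacency structure used as the optimization graph:
    # nodes are SLIC supervoxels, edges connect supervoxels that share a face.
    import numpy as np
    from skimage.segmentation import slic

    def supervoxel_edges(labels):
        """Return the set of (label_a, label_b) pairs of face-adjacent supervoxels."""
        edges = set()
        for axis in range(labels.ndim):
            a = np.take(labels, range(labels.shape[axis] - 1), axis=axis)
            b = np.take(labels, range(1, labels.shape[axis]), axis=axis)
            diff = a != b
            pairs = np.stack([a[diff], b[diff]], axis=1)
            for p, q in pairs:
                edges.add((min(p, q), max(p, q)))
        return edges

    if __name__ == "__main__":
        rng = np.random.default_rng(3)
        volume = rng.normal(size=(40, 40, 40))                 # placeholder 3D image
        # channel_axis=None marks a single-channel 3D volume (scikit-image >= 0.19).
        labels = slic(volume, n_segments=200, compactness=0.1, channel_axis=None)
        edges = supervoxel_edges(labels)
        print(len(np.unique(labels)), "supervoxels,", len(edges), "edges")
    ```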

  2. Automated choroid segmentation of three-dimensional SD-OCT images by incorporating EDI-OCT images.

    PubMed

    Chen, Qiang; Niu, Sijie; Fang, Wangyi; Shuai, Yuanlu; Fan, Wen; Yuan, Songtao; Liu, Qinghuai

    2018-05-01

    The measurement of choroidal volume is more closely related to eye diseases than choroidal thickness, because choroidal volume reflects the diseases more comprehensively. The purpose is to automatically segment the choroid in three-dimensional (3D) spectral domain optical coherence tomography (SD-OCT) images. We present a novel choroid segmentation strategy for SD-OCT images that incorporates enhanced depth imaging OCT (EDI-OCT) images. The lower boundary of the choroid, namely the choroid-sclera junction (CSJ), is almost invisible in SD-OCT images, while visible in EDI-OCT images. During SD-OCT imaging, EDI-OCT images can be generated for the same eye. Thus, we present an EDI-OCT-driven choroid segmentation method for SD-OCT images, where the choroid segmentation results of the EDI-OCT images are used to estimate the average choroidal thickness and to improve the construction of the CSJ feature space of the SD-OCT images. We also present a registration method between EDI-OCT and SD-OCT images based on retinal thickness and Bruch's membrane (BM) position. The CSJ surface is obtained with a 3D graph search in the CSJ feature space. Experimental results with 768 images (6 cubes, 128 B-scan images for each cube) from 2 healthy persons, 2 age-related macular degeneration (AMD) and 2 diabetic retinopathy (DR) patients, and 210 B-scan images from another 8 healthy persons and 21 patients demonstrate that our method can achieve high segmentation accuracy. The mean choroid volume difference and overlap ratio for the 6 cubes between our proposed method and outlines drawn by experts were -1.96µm3 and 88.56%, respectively. Our method is effective for 3D choroid segmentation of SD-OCT images because its segmentation accuracy and stability are comparable with manual segmentation. Copyright © 2017. Published by Elsevier B.V.

  3. Clinical image processing engine

    NASA Astrophysics Data System (ADS)

    Han, Wei; Yao, Jianhua; Chen, Jeremy; Summers, Ronald

    2009-02-01

    Our group provides clinical image processing services to various institutes at NIH. We develop or adapt image processing programs for a variety of applications. However, each program requires a human operator to select a specific set of images and execute the program, as well as store the results appropriately for later use. To improve efficiency, we design a parallelized clinical image processing engine (CIPE) to streamline and parallelize our service. The engine takes DICOM images from a PACS server, sorts and distributes the images to different applications, multithreads the execution of applications, and collects results from the applications. The engine consists of four modules: a listener, a router, a job manager and a data manager. A template filter in XML format is defined to specify the image specification for each application. A MySQL database is created to store and manage the incoming DICOM images and application results. The engine achieves two important goals: reduce the amount of time and manpower required to process medical images, and reduce the turnaround time for responding. We tested our engine on three different applications with 12 datasets and demonstrated that the engine improved the efficiency dramatically.

  4. Fiber pixelated image database

    NASA Astrophysics Data System (ADS)

    Shinde, Anant; Perinchery, Sandeep Menon; Matham, Murukeshan Vadakke

    2016-08-01

    Imaging of physically inaccessible parts of the body, such as the colon, at micron-level resolution is highly important in diagnostic medical imaging. Though flexible endoscopes based on imaging fiber bundles are used for such diagnostic procedures, their inherent honeycomb-like structure creates fiber pixelation effects. This impedes the observer from perceiving the information in a captured image and hinders the direct use of image processing and machine intelligence techniques on the recorded signal. Significant efforts have been made by researchers in the recent past in the development and implementation of pixelation removal techniques. However, researchers have often used their own sets of images without making the source data available, which has limited their wider usage and adaptability. A database of pixelated images is a current requirement to meet the growing diagnostic needs in the healthcare arena. An innovative fiber pixelated image database is presented, which consists of pixelated images that are synthetically generated and experimentally acquired. The sample space encompasses test patterns of different scales, sizes, and shapes. It is envisaged that this proposed database will alleviate the current limitations associated with relevant research and development and will be of great help for researchers working on comb structure removal algorithms.

  5. Light-Field Imaging Toolkit

    NASA Astrophysics Data System (ADS)

    Bolan, Jeffrey; Hall, Elise; Clifford, Chris; Thurow, Brian

    The Light-Field Imaging Toolkit (LFIT) is a collection of MATLAB functions designed to facilitate the rapid processing of raw light field images captured by a plenoptic camera. An included graphical user interface streamlines the necessary post-processing steps associated with plenoptic images. The generation of perspective shifted views and computationally refocused images is supported, in both single image and animated formats. LFIT performs necessary calibration, interpolation, and structuring steps to enable future applications of this technology.

  6. Preoperative magnetic resonance imaging protocol for endoscopic cranial base image-guided surgery.

    PubMed

    Grindle, Christopher R; Curry, Joseph M; Kang, Melissa D; Evans, James J; Rosen, Marc R

    2011-01-01

    Despite the increasing utilization of image-guided surgery, no radiology protocols for obtaining magnetic resonance (MR) imaging of adequate quality are available in the current literature. At our institution, more than 300 endonasal cranial base procedures, including pituitary, extended pituitary, and other anterior skull base procedures, have been performed in the past 3 years. To facilitate and optimize preoperative evaluation and assessment, there was a need to develop a magnetic resonance protocol. A retrospective technical assessment was performed. Through a collaborative effort between the otolaryngology, neurosurgery, and neuroradiology departments at our institution, a skull base MR image-guided (IGS) protocol was developed with several ends in mind. First, it was necessary to generate diagnostic images useful for the more frequently seen pathologies, to improve workflow, and to limit the expense and inefficiency of case-specific MR studies. Second, it was necessary to generate sequences useful for IGS, preferably using sequences that best highlight the lesion. Currently, at our institution, all MR images used for IGS are obtained using this protocol as part of preoperative planning. The protocol that has been developed allows for thin-cut precontrast and postcontrast axial series that can be used to plan intraoperative image guidance. It also obtains a thin-cut T2 axial series that can be compiled separately for intraoperative imaging, or may be fused with computed tomographic images for combined modality. The outlined protocol obtains image sequences effective for diagnostic and operative purposes for image-guided surgery using both T1 and T2 sequences. Copyright © 2011 Elsevier Inc. All rights reserved.

  7. Speckle imaging techniques of the turbulence degraded images

    NASA Astrophysics Data System (ADS)

    Liu, Jin; Huang, Zongfu; Mao, Hongjun; Liang, Yonghui

    2018-03-01

    We propose a speckle imaging algorithm in which an improved form of the spectral ratio is used to obtain the Fried parameter, and a filter is applied to reduce high-frequency noise effects. Our algorithm improves the quality of the reconstructed images. The performance is illustrated by computer simulations.

  8. Radiologist manpower considerations and Imaging 3.0: effort planning for value-based imaging.

    PubMed

    Norbash, Alexander; Bluth, Edward; Lee, Christoph I; Francavilla, Michael; Donner, Michael; Dutton, Sharon C; Heilbrun, Marta; McGinty, Geraldine

    2014-10-01

    Our specialty is seeking to establish the value of imaging in the longitudinal patient-care continuum. We recognize the need to assess the value of our contributions rather than concentrating primarily on generating revenue. This recent focus is a result of both increased cost-containment efforts and regulatory demands. Imaging 3.0 is a value-based perspective that intends to describe and facilitate value-based imaging. Imaging 3.0 includes a broad set of initiatives addressing the visibility of radiologists, and emphasizing quality and safety oversight by radiologists, which are new directions of focus for us. Imaging 3.0 also addresses subspecialty imaging and off-hours imaging, which are existing areas of practice that are emblematic of inconsistent service delivery across all hours. Looking to the future, Imaging 3.0 describes how imaging services could be integrated into the framework of accountable care organizations. Although all these efforts may be essential, they necessitate manpower expenditures, and these efforts are not directly covered by revenue. If we recognize the urgency of need in developing these concepts, we can justify the manpower and staffing expenditures each organization is willing to shoulder in reaching Imaging 3.0. Copyright © 2014 American College of Radiology. Published by Elsevier Inc. All rights reserved.

  9. Hyperspectral imaging for cancer surgical margin delineation: registration of hyperspectral and histological images

    NASA Astrophysics Data System (ADS)

    Lu, Guolan; Halig, Luma; Wang, Dongsheng; Chen, Zhuo G.; Fei, Baowei

    2014-03-01

    The determination of tumor margins during surgical resection remains a challenging task. A complete removal of malignant tissue and conservation of healthy tissue is important for the preservation of organ function, patient satisfaction, and quality of life. Visual inspection and palpation are not sufficient for discriminating between malignant and normal tissue types. Hyperspectral imaging (HSI) technology has the potential to noninvasively delineate surgical tumor margins and can be used as an intra-operative visual aid tool. Since histological images provide the ground truth of cancer margins, it is necessary to warp the cancer regions in ex vivo histological images back to in vivo hyperspectral images in order to validate the tumor margins detected by HSI and to optimize the imaging parameters. In this paper, principal component analysis (PCA) is utilized to extract the principal component bands of the HSI images, which are then used to register the HSI images with the corresponding histological image. Affine registration is chosen to model the global transformation. A B-spline free form deformation (FFD) method is used to model the local non-rigid deformation. A registration experiment was performed on animal hyperspectral and histological images. Experimental results from animals demonstrated the feasibility of the hyperspectral imaging method for cancer margin detection.
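
    As a rough sketch of the first step described above (reducing the hyperspectral cube to its leading principal-component bands before registration), the snippet below applies PCA over the spectral dimension with plain NumPy; the cube size and the number of retained components are illustrative assumptions.

    ```python
    # PCA over the spectral axis of a hyperspectral cube (rows x cols x bands):
    # reshape to pixels x bands, center, and project onto the leading principal axes.
    import numpy as np

    def pca_bands(cube, n_components=3):
        """Return the first n_components principal-component 'bands' of the cube."""
        rows, cols, bands = cube.shape
        X = cube.reshape(-1, bands).astype(np.float64)
        X -= X.mean(axis=0)                           # center each spectral band
        # SVD of the centered data; rows of Vt are the principal axes.
        _, _, Vt = np.linalg.svd(X, full_matrices=False)
        scores = X @ Vt[:n_components].T              # project pixels onto leading axes
        return scores.reshape(rows, cols, n_components)

    if __name__ == "__main__":
        rng = np.random.default_rng(4)
        cube = rng.normal(size=(64, 64, 91))          # placeholder HSI cube, 91 bands
        pc = pca_bands(cube, n_components=3)
        print(pc.shape)                               # -> (64, 64, 3)
    ```

    The leading component band would then serve as the fixed or moving image for the affine and B-spline FFD registration stages.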

  10. Digital-image processing and image analysis of glacier ice

    USGS Publications Warehouse

    Fitzpatrick, Joan J.

    2013-01-01

    This document provides a methodology for extracting grain statistics from 8-bit color and grayscale images of thin sections of glacier ice—a subset of physical properties measurements typically performed on ice cores. This type of analysis is most commonly used to characterize the evolution of ice-crystal size, shape, and intercrystalline spatial relations within a large body of ice sampled by deep ice-coring projects from which paleoclimate records will be developed. However, such information is equally useful for investigating the stress state and physical responses of ice to stresses within a glacier. The methods of analysis presented here go hand-in-hand with the analysis of ice fabrics (aggregate crystal orientations) and, when combined with fabric analysis, provide a powerful method for investigating the dynamic recrystallization and deformation behaviors of bodies of ice in motion. The procedures described in this document compose a step-by-step handbook for a specific image acquisition and data reduction system built in support of U.S. Geological Survey ice analysis projects, but the general methodology can be used with any combination of image processing and analysis software. The specific approaches in this document use the FoveaPro 4 plug-in toolset to Adobe Photoshop CS5 Extended but it can be carried out equally well, though somewhat less conveniently, with software such as the image processing toolbox in MATLAB, Image-Pro Plus, or ImageJ.

  11. Multiplane wave imaging increases signal-to-noise ratio in ultrafast ultrasound imaging.

    PubMed

    Tiran, Elodie; Deffieux, Thomas; Correia, Mafalda; Maresca, David; Osmanski, Bruno-Felix; Sieu, Lim-Anna; Bergel, Antoine; Cohen, Ivan; Pernot, Mathieu; Tanter, Mickael

    2015-11-07

    Ultrafast imaging using plane or diverging waves has recently enabled new ultrasound imaging modes with improved sensitivity and very high frame rates. Some of these new imaging modalities include shear wave elastography, ultrafast Doppler, ultrafast contrast-enhanced imaging and functional ultrasound imaging. Even though ultrafast imaging already encounters clinical success, increasing its penetration depth and signal-to-noise ratio even further for dedicated applications would be valuable. Ultrafast imaging relies on the coherent compounding of backscattered echoes resulting from successive tilted plane wave emissions; this produces high-resolution ultrasound images with a trade-off between final frame rate, contrast and resolution. In this work, we introduce multiplane wave imaging, a new method that strongly improves the signal-to-noise ratio of ultrafast images by virtually increasing the emission signal amplitude without compromising the frame rate. This method relies on the successive transmission of multiple plane waves with differently coded amplitudes and emission angles in a single transmit event. Data from each single plane wave of increased amplitude can then be obtained by recombining the received data of successive events with the proper coefficients. The benefits of multiplane wave imaging for B-mode, shear wave elastography and ultrafast Doppler imaging are experimentally demonstrated. Multiplane wave imaging with 4 plane wave emissions yields a 5.8 ± 0.5 dB increase in signal-to-noise ratio and approximately 10 mm in penetration in a calibrated ultrasound phantom (0.7 dB MHz⁻¹ cm⁻¹). In shear wave elastography, the same multiplane wave configuration yields a 2.07 ± 0.05-fold reduction of the particle velocity standard deviation and a two-fold reduction of the standard deviation of the shear wave velocity maps. In functional ultrasound imaging, the mapping of cerebral blood volume results in a 3 to 6 dB increase of the contrast-to-noise ratio in deep
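
    The amplitude-coding idea can be illustrated with a Hadamard code: N tilted plane waves are transmitted N times with +/-1 amplitude codes, and the echo of each individual plane wave is recovered by applying the inverse code matrix to the received events, with the noise reduced by roughly the square root of N. The code matrix, signal model and noise level below are simplified assumptions, not the exact transmit sequence of the paper.

    ```python
    # Multiplane-wave style recombination with a Hadamard amplitude code:
    # each receive event is a coded sum of the N single-plane-wave echoes plus noise;
    # multiplying by the inverse code (H.T / N for Hadamard) recovers each echo.
    import numpy as np
    from scipy.linalg import hadamard

    def recombine(events, code):
        """Recover single-plane-wave data from amplitude-coded receive events."""
        return np.linalg.inv(code) @ events            # (N, samples) decoded echoes

    if __name__ == "__main__":
        rng = np.random.default_rng(5)
        N, samples = 4, 1000
        echoes = rng.normal(size=(N, samples))         # echoes of N individual plane waves
        H = hadamard(N)                                # +/-1 amplitude code matrix
        noise = 0.1 * rng.normal(size=(N, samples))
        events = H @ echoes + noise                    # coded transmit/receive events
        decoded = recombine(events, H)                 # estimate of each echo
        # Residual noise is about 0.05: halved relative to a single uncoded event
        # with the same additive noise, i.e. a sqrt(N) gain in SNR.
        print(np.abs(decoded - echoes).std())
    ```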

  12. A comparison study of image features between FFDM and film mammogram images

    PubMed Central

    Jing, Hao; Yang, Yongyi; Wernick, Miles N.; Yarusso, Laura M.; Nishikawa, Robert M.

    2012-01-01

    Purpose: This work is to provide a direct, quantitative comparison of image features measured by film and full-field digital mammography (FFDM). The purpose is to investigate whether there is any systematic difference between film and FFDM in terms of quantitative image features and their influence on the performance of a computer-aided diagnosis (CAD) system. Methods: The authors make use of a set of matched film-FFDM image pairs acquired from cadaver breast specimens with simulated microcalcifications consisting of bone and teeth fragments using both a GE digital mammography system and a screen-film system. To quantify the image features, the authors consider a set of 12 textural features of lesion regions and six image features of individual microcalcifications (MCs). The authors first conduct a direct comparison on these quantitative features extracted from film and FFDM images. The authors then study the performance of a CAD classifier for discriminating between MCs and false positives (FPs) when the classifier is trained on images of different types (film, FFDM, or both). Results: For all the features considered, the quantitative results show a high degree of correlation between features extracted from film and FFDM, with the correlation coefficients ranging from 0.7326 to 0.9602 for the different features. Based on a Fisher sign rank test, there was no significant difference observed between the features extracted from film and those from FFDM. For both MC detection and discrimination of FPs from MCs, FFDM had a slight but statistically significant advantage in performance; however, when the classifiers were trained on different types of images (acquired with FFDM or SFM) for discriminating MCs from FPs, there was little difference. Conclusions: The results indicate good agreement between film and FFDM in quantitative image features. While FFDM images provide better detection performance in MCs, FFDM and film images may be interchangeable for the purposes of

  13. Quantitative Pulmonary Imaging Using Computed Tomography and Magnetic Resonance Imaging

    PubMed Central

    Washko, George R.; Parraga, Grace; Coxson, Harvey O.

    2011-01-01

    Measurements of lung function, including spirometry and body plethysmography, are easy to perform and are the current clinical standard for assessing disease severity. However, these lung function techniques do not adequately explain the observed variability in clinical manifestations of disease and offer little insight into the relationship of lung structure and function. Lung imaging and the image-based assessment of lung disease have matured to the extent that it is common for clinical, epidemiologic, and genetic investigations to have a component dedicated to image analysis. There are several exciting imaging modalities currently being used for the non-invasive study of lung anatomy and function. In this review we will focus on two of them, x-ray computed tomography and magnetic resonance imaging. Following a brief introduction of each method we detail some of the most recent work being done to characterize smoking-related lung disease and the clinical applications of such knowledge. PMID:22142490

  14. High throughput dual-wavelength temperature distribution imaging via compressive imaging

    NASA Astrophysics Data System (ADS)

    Yao, Xu-Ri; Lan, Ruo-Ming; Liu, Xue-Feng; Zhu, Ge; Zheng, Fu; Yu, Wen-Kai; Zhai, Guang-Jie

    2018-03-01

    Thermal imaging is an essential tool in a wide variety of research areas. In this work we demonstrate high-throughput dual-wavelength temperature distribution imaging using a modified single-pixel camera without the requirement of a beam splitter (BS). A digital micro-mirror device (DMD) is utilized to display binary masks and split the incident radiation, which eliminates the necessity of a BS. Because the spatial resolution is dictated by the DMD, this thermal imaging system has the advantage of perfect spatial registration between the two images, which reduces the need for pixel registration and fine adjustment. Two bucket detectors, which measure the total light intensity reflected from the DMD, are employed in this system and yield an improvement in the detection efficiency of the narrow-band radiation. A compressive imaging algorithm is utilized to achieve under-sampling recovery. A proof-of-principle experiment is presented to demonstrate the feasibility of this structure.
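
    A minimal numerical sketch of the single-pixel measurement model is given below: the DMD displays a sequence of masks, a bucket detector records one inner product per mask, and the scene is recovered from fewer measurements than pixels with a sparsity-promoting solver (here plain iterative soft thresholding). The +/-1 mask model, scene, regularization weight, and iteration count are simplifications, not the paper's algorithm; practical systems often use total-variation or more elaborate l1 solvers.

    ```python
    # Single-pixel compressive imaging model: y = Phi @ x, with Phi the mask matrix
    # (one row per displayed pattern) and y the bucket-detector readings. The scene
    # is recovered from under-sampled data by l1-regularized least squares (ISTA).
    import numpy as np

    def ista(y, Phi, lam=0.01, n_iter=2000):
        """Iterative soft-thresholding for min 0.5*||y - Phi x||^2 + lam*||x||_1."""
        L = np.linalg.norm(Phi, 2) ** 2              # Lipschitz constant of the gradient
        x = np.zeros(Phi.shape[1])
        for _ in range(n_iter):
            z = x - Phi.T @ (Phi @ x - y) / L        # gradient step
            x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
        return x

    if __name__ == "__main__":
        rng = np.random.default_rng(6)
        side = 16                                    # 16x16 scene (256 pixels)
        x_true = np.zeros((side, side)); x_true[5:8, 6:10] = 1.0    # small hot region
        # +/-1 patterns (in practice realized as differences of binary DMD masks).
        Phi = rng.choice([-1.0, 1.0], size=(120, side * side)) / np.sqrt(120)
        y = Phi @ x_true.ravel()                     # 120 bucket measurements < 256 pixels
        x_hat = ista(y, Phi).reshape(side, side)
        print(round(x_hat[5:8, 6:10].mean(), 2))     # hot region: approximately 1
        print(round(np.abs(x_hat[x_true == 0]).mean(), 3))  # background: near 0
    ```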

  15. Design and implementation of non-linear image processing functions for CMOS image sensor

    NASA Astrophysics Data System (ADS)

    Musa, Purnawarman; Sudiro, Sunny A.; Wibowo, Eri P.; Harmanto, Suryadi; Paindavoine, Michel

    2012-11-01

    Today, solid-state image sensors are used in many applications, such as mobile phones, video surveillance systems, embedded medical imaging and industrial vision systems. These image sensors require the integration, in the focal plane (or near the focal plane), of complex image processing algorithms. Such devices must meet the constraints related to the quality of acquired images, the speed and performance of embedded processing, as well as low power consumption. To achieve these objectives, low-level analog processing allows the useful information in the scene to be extracted directly. For example, an edge detection step followed by local maxima extraction will facilitate high-level processing such as object pattern recognition in a visual scene. Our goal was to design an intelligent image sensor prototype achieving high-speed image acquisition and non-linear image processing (like local minima and maxima calculations). For this purpose, we present in this article the design and test of a 64×64 pixel image sensor built in a standard 0.35 μm CMOS technology including non-linear image processing. The architecture of our sensor, named nLiRIC (non-Linear Rapid Image Capture), is based on the implementation of an analog Minima/Maxima Unit (MMU). This MMU calculates the minimum and maximum values (non-linear functions), in real time, in a 2×2 pixel neighbourhood. Each MMU requires 52 transistors and the pixel pitch is 40×40 μm. The total area of the 64×64 pixel array is 12.5 mm². Our tests have shown the validity of the main functions of our new image sensor, such as fast image acquisition (10K frames per second) and minima/maxima calculation in less than one ms.
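
    A digital reference for the MMU operation (useful, for instance, to check the analog output against a software model) can be written in a few lines; non-overlapping 2×2 blocks and the frame size are assumptions here, and a sliding-window variant would be analogous.

    ```python
    # Digital reference for the 2x2 minima/maxima operation of the MMU:
    # for each non-overlapping 2x2 block, output the local minimum and maximum.
    import numpy as np

    def min_max_2x2(frame):
        """Return (minima, maxima) over non-overlapping 2x2 neighbourhoods."""
        h, w = frame.shape
        blocks = frame[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2)
        return blocks.min(axis=(1, 3)), blocks.max(axis=(1, 3))

    if __name__ == "__main__":
        rng = np.random.default_rng(7)
        frame = rng.integers(0, 256, (64, 64), dtype=np.uint8)   # 64x64 sensor frame
        minima, maxima = min_max_2x2(frame)
        print(minima.shape, maxima.shape)                        # -> (32, 32) (32, 32)
    ```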

  16. Tangible imaging systems

    NASA Astrophysics Data System (ADS)

    Ferwerda, James A.

    2013-03-01

    We are developing tangible imaging systems [1-4] that enable natural interaction with virtual objects. Tangible imaging systems are based on consumer mobile devices that incorporate electronic displays, graphics hardware, accelerometers, gyroscopes, and digital cameras, in laptop- or tablet-shaped form factors. Custom software allows the orientation of the device and the position of the observer to be tracked in real time. Using this information, realistic images of three-dimensional objects with complex textures and material properties are rendered to the screen, and tilting the device or moving in front of it produces realistic changes in surface lighting and material appearance. Tangible imaging systems thus allow virtual objects to be observed and manipulated as naturally as real ones, with the added benefit that object properties can be modified under user control. In this paper we describe four tangible imaging systems we have developed: the tangiBook - our first implementation on a laptop computer; tangiView - a more refined implementation on a tablet device; tangiPaint - a tangible digital painting application; and phantoView - an application that takes the tangible imaging concept into stereoscopic 3D.
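
    As a rough illustration of the rendering loop such systems rely on, the hypothetical sketch below (not the tangiBook/tangiView code) uses the device pitch and roll from the inertial sensors, plus a tracked observer direction, to recompute Blinn-Phong shading each frame, so tilting the device changes the apparent surface lighting.

        # Hypothetical per-frame shading update driven by device orientation and
        # observer direction (illustrative; real systems render full 3D materials).
        import numpy as np

        def shade(normals, albedo, pitch, roll, observer_dir, shininess=40.0):
            """normals: (H, W, 3); albedo: (H, W, 3); pitch/roll in radians."""
            # Rotate a fixed world light into the device frame so the light seems
            # to stay put while the device, and the rendered surface, tilt.
            cx, sx = np.cos(pitch), np.sin(pitch)
            cy, sy = np.cos(roll), np.sin(roll)
            rot_x = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
            rot_y = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
            light = rot_y @ rot_x @ np.array([0.0, 0.0, 1.0])

            # Blinn-Phong: diffuse term plus a specular term toward the observer.
            n_dot_l = np.clip(normals @ light, 0.0, None)
            half = (light + observer_dir) / np.linalg.norm(light + observer_dir)
            spec = np.clip(normals @ half, 0.0, None) ** shininess
            return albedo * n_dot_l[..., None] + spec[..., None]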

  17. In vivo molecular and genomic imaging: new challenges for imaging physics.

    PubMed

    Cherry, Simon R

    2004-02-07

    The emerging and rapidly growing field of molecular and genomic imaging is providing new opportunities to directly visualize the biology of living organisms. By combining our growing knowledge regarding the role of specific genes and proteins in human health and disease, with novel ways to target these entities in a manner that produces an externally detectable signal, it is becoming increasingly possible to visualize and quantify specific biological processes in a non-invasive manner. All the major imaging modalities are contributing to this new field, each with its unique mechanisms for generating contrast and trade-offs in spatial resolution, temporal resolution and sensitivity with respect to the biological process of interest. Much of the development in molecular imaging is currently being carried out in animal models of disease, but as the field matures and with the development of more individualized medicine and the molecular targeting of new therapeutics, clinical translation is inevitable and will likely forever change our approach to diagnostic imaging. This review provides an introduction to the field of molecular imaging for readers who are not experts in the biological sciences and discusses the opportunities to apply a broad range of imaging technologies to better understand the biology of human health and disease. It also provides a brief review of the imaging technology (particularly for x-ray, nuclear and optical imaging) that is being developed to support this new field.

  18. Students' ideas about prismatic images: teaching experiments for an image-based approach

    NASA Astrophysics Data System (ADS)

    Grusche, Sascha

    2017-05-01

    Prismatic refraction is a classic topic in science education. To investigate how undergraduate students think about prismatic dispersion, and to see how they change their thinking when observing dispersed images, five teaching experiments were done and analysed according to the Model of Educational Reconstruction. For projection through a prism, the students used a 'split image projection' conceptualisation. For the view through a prism, this conceptualisation was not fruitful. Based on the observed images, six of seven students changed to a 'diverted image projection' conceptualisation. From a comparison between students' and scientists' ideas, teaching implications are derived for an image-based approach.

  19. Range and Panoramic Image Fusion Into a Textured Range Image for Culture Heritage Documentation

    NASA Astrophysics Data System (ADS)

    Bila, Z.; Reznicek, J.; Pavelka, K.

    2013-07-01

    This paper deals with the fusion of range and panoramic images, where the range image is acquired by a 3D laser scanner and the panoramic image is acquired with a digital still camera mounted on a panoramic head and tripod. The resulting fused dataset, called a "textured range image", provides conservators and historians with more reliable information about the investigated object than either dataset used separately. A simple example of the fusion of range and panoramic images, both obtained in the St. Francis Xavier Church in the town of Opařany, is given here. First, we describe the process of data acquisition, then the processing of both datasets into a format suitable for fusion, and finally the fusion itself. The fusion process can be divided into two main parts: transformation and remapping. In the transformation part, the two images are related by matching similar features detected in both images with a suitable detector, which yields a transformation matrix that maps the range image onto the panoramic image. The range data are then remapped from the range image space into the panoramic image space and stored as an additional "range" channel. The image fusion process is validated by comparing similar features extracted from both datasets.
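
    A compact way to see the transformation and remapping steps is the OpenCV sketch below; it assumes, purely for illustration, that the range image can be rendered as an intensity image and that a single RANSAC-estimated homography is an adequate mapping (the real geometry is generally more complex), and all function and variable names are our own.

        # Illustrative two-step fusion: feature matching (transformation) and
        # warping of the range data into the panoramic image space (remapping).
        import cv2
        import numpy as np

        def fuse_range_into_panorama(range_intensity, range_values, panorama_gray):
            orb = cv2.ORB_create(5000)
            kp1, des1 = orb.detectAndCompute(range_intensity, None)
            kp2, des2 = orb.detectAndCompute(panorama_gray, None)

            # Relate the two images by matching similar features.
            matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
            matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:200]
            src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
            dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

            # Transformation: robust estimate of the range-to-panorama mapping.
            H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

            # Remapping: warp the range data and store it as an extra "range" channel.
            h, w = panorama_gray.shape
            range_channel = cv2.warpPerspective(range_values, H, (w, h))
            return np.dstack([panorama_gray.astype(np.float32), range_channel])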

  20. A Method to Recognize Anatomical Site and Image Acquisition View in X-ray Images.

    PubMed

    Chang, Xiao; Mazur, Thomas; Li, H Harold; Yang, Deshan

    2017-12-01

    A method was developed to automatically recognize the anatomical site and image acquisition view in the 2D X-ray images used in image-guided radiation therapy. The purpose is to enable site- and view-dependent automation and optimization of image processing tasks, including 2D-2D image registration, 2D image contrast enhancement, and independent treatment-site confirmation. X-ray images from 180 patients across six disease sites (brain, head and neck, breast, lung, abdomen, and pelvis) were included in this study, with 30 patients per site and two orthogonal-view images per patient. A hierarchical multiclass recognition model was developed to recognize the general site first and then the specific site. Each node of the hierarchical model recognized the images using a feature extraction step based on principal component analysis followed by a binary classification step based on a support vector machine. Given two images in known orthogonal views, the site recognition model achieved a 99% average F1 score across the six sites. If the views were unknown, the average F1 score was 97%. If only one image was taken, either with or without view information, the average F1 score was 94%. The accuracy of the site-specific view recognition models was 100%.
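
    One node of such a hierarchical model, a PCA feature extraction step feeding a binary SVM, can be sketched with scikit-learn as below; the toy data, component count, and kernel settings are illustrative assumptions rather than the authors' configuration.

        # Minimal sketch of one decision node: PCA features + binary SVM classifier.
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        def make_node(n_components=50):
            """One binary node, e.g. 'head group' vs. 'torso group'."""
            return make_pipeline(StandardScaler(),
                                 PCA(n_components=n_components),
                                 SVC(kernel='rbf', C=1.0))

        # Hypothetical usage with toy vectors standing in for flattened X-ray images.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(120, 128 * 128))
        y = rng.integers(0, 2, size=120)            # 0 / 1 = the two site groups
        scores = cross_val_score(make_node(), X, y, cv=5)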