Sample records for multi-focus color image

  1. A multi-focus image fusion method via region mosaicking on Laplacian pyramids

    PubMed Central

    Kou, Liang; Zhang, Liguo; Sun, Jianguo; Han, Qilong; Jin, Zilong

    2018-01-01

    In this paper, a method named Region Mosaicking on Laplacian Pyramids (RMLP) is proposed to fuse multi-focus images captured by a microscope. First, the Sum-Modified-Laplacian is applied to measure the focus of the multi-focus images. Then a density-based region growing algorithm is used to segment the focused region mask of each image. Finally, the mask is decomposed into a mask pyramid that supervises region mosaicking on a Laplacian pyramid. The region-level pyramid preserves more of the original information than the pixel level. The experimental results show that RMLP has the best performance in quantitative comparison with other methods. In addition, RMLP is insensitive to noise and reduces the color distortion of the fused images on two datasets. PMID:29771912
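
The Sum-Modified-Laplacian focus measure used in this record can be sketched in a few lines. This is a minimal illustration of the general technique under my own naming, not the RMLP implementation: RMLP applies it to build region masks and Laplacian pyramids, whereas the naive per-pixel selection rule below is only for demonstration.

```python
def modified_laplacian(img, x, y):
    """|2I - left - right| + |2I - up - down| at one pixel of a 2D list."""
    return (abs(2 * img[y][x] - img[y][x - 1] - img[y][x + 1]) +
            abs(2 * img[y][x] - img[y - 1][x] - img[y + 1][x]))

def sum_modified_laplacian(img, x, y, radius=1):
    """Sum of modified-Laplacian values over a (2*radius+1)^2 window."""
    return sum(modified_laplacian(img, i, j)
               for j in range(y - radius, y + radius + 1)
               for i in range(x - radius, x + radius + 1))

def fuse_pixelwise(img_a, img_b, radius=1):
    """Toy fusion: per interior pixel, keep the source with higher SML."""
    h, w = len(img_a), len(img_a[0])
    fused = [row[:] for row in img_a]
    for y in range(radius + 1, h - radius - 1):
        for x in range(radius + 1, w - radius - 1):
            if (sum_modified_laplacian(img_b, x, y, radius) >
                    sum_modified_laplacian(img_a, x, y, radius)):
                fused[y][x] = img_b[y][x]
    return fused
```

A flat (defocused) region scores zero, so any textured (focused) region from the other source wins the comparison.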

  2. High-intensity focused ultrasound ablation assisted using color Doppler imaging for the treatment of hepatocellular carcinomas.

    PubMed

    Fukuda, Hiroyuki; Numata, Kazushi; Nozaki, Akito; Kondo, Masaaki; Morimoto, Manabu; Maeda, Shin; Tanaka, Katsuaki; Ohto, Masao; Ito, Ryu; Ishibashi, Yoshiharu; Oshima, Noriyoshi; Ito, Ayao; Zhu, Hui; Wang, Zhi-Biao

    2013-12-01

    We evaluated the usefulness of color Doppler flow imaging to compensate for the inadequate resolution of ultrasound (US) monitoring during high-intensity focused ultrasound (HIFU) treatment of hepatocellular carcinoma (HCC). US-guided HIFU ablation assisted by color Doppler flow imaging was performed in 11 patients with small HCCs (<3 lesions, <3 cm in diameter). The HIFU system (Chongqing Haifu Tech) was used under US guidance. Color Doppler sonographic studies were performed using an HIFU 6150S US imaging unit and a 2.7-MHz electronic convex probe. The color Doppler images were used because of the influence of multiple reflections and the emergence of hyperechoes. In 1 of the 11 patients, multiple reflections were responsible for the poor visualization of the tumor; in the other 10 cases, the tumor was poorly visualized because of the emergence of a hyperecho. In these cases, the ability to identify the original tumor location on the monitor by referencing the color Doppler images of the portal vein and the hepatic vein was very useful. HIFU treatments were successfully performed in all 11 patients with the assistance of color Doppler imaging. Color Doppler imaging is useful for HIFU treatment of HCC, compensating for the occasionally poor visualization provided by conventional B-mode US imaging.

  3. Fluorescence lidar multi-color imaging of vegetation

    NASA Technical Reports Server (NTRS)

    Johansson, J.; Wallinder, E.; Edner, H.; Svanberg, S.

    1992-01-01

    Multi-color imaging of vegetation fluorescence following laser excitation is reported for distances of 50 m. A mobile laser radar system equipped with a Nd:YAG laser transmitter and a 40 cm diameter telescope was used. Image processing allows extraction of information related to the physiological status of the vegetation and might prove useful in forest decline research.

  4. Color sensitivity of the multi-exposure HDR imaging process

    NASA Astrophysics Data System (ADS)

    Lenseigne, Boris; Jacobs, Valéry Ann; Withouck, Martijn; Hanselaer, Peter; Jonker, Pieter P.

    2013-04-01

    Multi-exposure high dynamic range (HDR) imaging builds HDR radiance maps by stitching together different views of the same scene taken with varying exposures. In practice, this process involves converting raw sensor data into low dynamic range (LDR) images, estimating the camera response curves, and using them to recover the irradiance for every pixel. During this export, white balance settings and image stitching are applied, both of which influence the color balance of the final image. In this paper, we use a calibrated quasi-monochromatic light source, an integrating sphere, and a spectrograph to evaluate and compare the average spectral response of the image sensor. We finally draw some conclusions about the color consistency of HDR imaging and the additional steps necessary to use multi-exposure HDR imaging as a tool to measure physical quantities such as radiance and luminance.
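
The irradiance-recovery step this abstract refers to is commonly done as a weighted average of per-exposure estimates. The sketch below assumes a linear camera response and a triangle ("hat") weighting function; both are standard textbook choices, not details specified by this paper.

```python
def triangle_weight(z):
    """Hat weight for an 8-bit value: trust mid-range pixels, distrust
    under- and over-exposed ones."""
    return z if z <= 127 else 255 - z

def recover_irradiance(pixels, exposures, inv_response=lambda z: float(z)):
    """Weighted average of per-exposure irradiance estimates g(z)/t.

    pixels    -- LDR values of one pixel across the exposure stack
    exposures -- corresponding exposure times t_k
    inv_response -- inverse camera response g (linear by default)
    """
    num = den = 0.0
    for z, t in zip(pixels, exposures):
        w = triangle_weight(z)
        num += w * inv_response(z) / t
        den += w
    return num / den if den else 0.0
```

With a linear response, a pixel seen as 50, 100, and 200 at exposures 0.5, 1.0, and 2.0 yields the same irradiance estimate (100) from each shot, and the weighted average returns exactly that.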

  5. Multi-Modal Nano-Probes for Radionuclide and 5-color Near Infrared Optical Lymphatic Imaging

    PubMed Central

    Kobayashi, Hisataka; Koyama, Yoshinori; Barrett, Tristan; Hama, Yukihiro; Regino, Celeste A. S.; Shin, In Soo; Jang, Beom-Su; Le, Nhat; Paik, Chang H.; Choyke, Peter L.; Urano, Yasuteru

    2008-01-01

    Current contrast agents generally have one function and can only be imaged in monochrome; therefore, the majority of imaging methods can only impart uniparametric information. A single nano-particle has the potential to be loaded with multiple payloads. Such multi-modality probes can be imaged by more than one imaging technique, which could compensate for the weaknesses, or even combine the advantages, of each individual modality. Furthermore, optical imaging using different optical probes enables multi-color in vivo imaging, wherein multiple parameters can be read from a single image. To allow differentiation of multiple optical signals in vivo, each probe should have a close but distinct near-infrared emission. To this end, we synthesized nano-probes with multi-modal and multi-color potential, which employ a polyamidoamine dendrimer platform linked to both radionuclides and optical probes, permitting dual-modality scintigraphic and 5-color near-infrared optical lymphatic imaging using a multiple-excitation spectrally resolved fluorescence imaging technique. PMID:19079788

  6. Extended depth of field integral imaging using multi-focus fusion

    NASA Astrophysics Data System (ADS)

    Piao, Yongri; Zhang, Miao; Wang, Xiaohui; Li, Peihua

    2018-03-01

    In this paper, we propose a new method for extending the depth of field in integral imaging by applying image fusion to multi-focus elemental images. In the proposed method, a camera is translated on a 2D grid to capture multi-focus elemental images while sweeping the focal plane across the scene. Simply applying an image fusion method to elemental images holding rich parallax information does not work effectively, because registration accuracy is a prerequisite for image fusion. To solve this problem, an elemental image generalization method is proposed. The aim of this generalization process is to geometrically align the objects in all elemental images so that the correct regions of the multi-focus elemental images can be extracted. All-in-focus elemental images are then generated by fusing the generalized elemental images using a block-based fusion method. The experimental results demonstrate that the depth of field of a synthetic aperture integral imaging system can be extended by combining the generalization method with image fusion on multi-focus elemental images.

  7. A Dual-Modality System for Both Multi-Color Ultrasound-Switchable Fluorescence and Ultrasound Imaging

    PubMed Central

    Kandukuri, Jayanth; Yu, Shuai; Cheng, Bingbing; Bandi, Venugopal; D’Souza, Francis; Nguyen, Kytai T.; Hong, Yi; Yuan, Baohong

    2017-01-01

    Simultaneous imaging of multiple targets (SIMT) in opaque biological tissues is an important goal for molecular imaging in the future. Multi-color fluorescence imaging in deep tissues is a promising technology to reach this goal. In this work, we developed a dual-modality imaging system by combining our recently developed ultrasound-switchable fluorescence (USF) imaging technology with the conventional ultrasound (US) B-mode imaging. This dual-modality system can simultaneously image tissue acoustic structure information and multi-color fluorophores in centimeter-deep tissue with comparable spatial resolutions. To conduct USF imaging on the same plane (i.e., x-z plane) as US imaging, we adopted two 90°-crossed ultrasound transducers with an overlapped focal region, while the US transducer (the third one) was positioned at the center of these two USF transducers. Thus, the axial resolution of USF is close to the lateral resolution, which allows a point-by-point USF scanning on the same plane as the US imaging. Both multi-color USF and ultrasound imaging of a tissue phantom were demonstrated. PMID:28165390

  8. Multi-color pyrometry imaging system and method of operating the same

    DOEpatents

    Estevadeordal, Jordi; Nirmalan, Nirm Velumylum; Tralshawala, Nilesh; Bailey, Jeremy Clyde

    2017-03-21

    A multi-color pyrometry imaging system for a high-temperature asset includes at least one viewing port in optical communication with at least one high-temperature component of the high-temperature asset. The system also includes at least one camera device in optical communication with the at least one viewing port. The at least one camera device includes a camera enclosure and at least one camera aperture defined in the camera enclosure. The at least one camera aperture is in optical communication with the at least one viewing port. The at least one camera device also includes a multi-color filtering mechanism coupled to the enclosure. The multi-color filtering mechanism is configured to sequentially transmit photons within a first predetermined wavelength band and transmit photons within a second predetermined wavelength band that is different than the first predetermined wavelength band.

  9. Multi-focus image fusion using a guided-filter-based difference image.

    PubMed

    Yan, Xiang; Qin, Hanlin; Li, Jia; Zhou, Huixin; Yang, Tingwu

    2016-03-20

    The aim of multi-focus image fusion technology is to integrate different partially focused images into one all-focused image. To realize this goal, a new multi-focus image fusion method based on a guided filter is proposed, and an efficient salient feature extraction method, which is the main objective of the present work, is presented in this paper. Based on salient feature extraction, the guided filter is first used to acquire a smoothed image containing the sharpest regions. To obtain the initial fusion map, we compose a mixed focus measure by combining the variance of image intensities with the energy of the image gradient. Then, the initial fusion map is further processed by a morphological filter to obtain a reprocessed fusion map. Lastly, the final fusion map is determined from the reprocessed fusion map and is optimized by a guided filter. Experimental results demonstrate that the proposed method markedly improves fusion performance compared to previous fusion methods and is competitive with, or even outperforms, state-of-the-art fusion methods in terms of both subjective visual effects and objective quality metrics.
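
A mixed focus measure combining intensity variance with gradient energy, as this abstract describes, could look like the following sketch. The equal-weight combination (`alpha`) and the helper names are illustrative assumptions, not the authors' exact formulation.

```python
def variance(block):
    """Variance of all pixel intensities in a 2D block."""
    vals = [v for row in block for v in row]
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals) / len(vals)

def gradient_energy(block):
    """Sum of squared horizontal and vertical first differences."""
    e = 0.0
    for y in range(len(block)):
        for x in range(len(block[0])):
            if x + 1 < len(block[0]):
                e += (block[y][x + 1] - block[y][x]) ** 2
            if y + 1 < len(block):
                e += (block[y + 1][x] - block[y][x]) ** 2
    return e

def mixed_focus_measure(block, alpha=0.5):
    """Convex combination of the two cues (alpha is a free parameter)."""
    return alpha * variance(block) + (1 - alpha) * gradient_energy(block)
```

A defocused (flat) block scores zero on both cues, while a sharp, textured block scores high on both, so the combined measure separates focused from unfocused regions regardless of the exact `alpha`.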

  10. Multi-focus image fusion based on window empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Qin, Xinqiang; Zheng, Jiaoyue; Hu, Gang; Wang, Jiao

    2017-09-01

    In order to improve multi-focus image fusion quality, a novel fusion algorithm based on window empirical mode decomposition (WEMD) is proposed. WEMD is an improved form of bidimensional empirical mode decomposition (BEMD) whose decomposition process uses an adding-window principle, effectively resolving the signal concealment problem. We used WEMD for multi-focus image fusion and formulated different fusion rules for the bidimensional intrinsic mode function (BIMF) components and the residue component. For fusion of the BIMF components, the Sum-Modified-Laplacian was used and a scheme based on visual feature contrast was adopted; when choosing the residue coefficients, a pixel value based on local visibility was selected. We carried out four groups of multi-focus image fusion experiments and compared objective evaluation criteria with three other fusion methods. The experimental results show that the proposed approach is effective and fuses multi-focus images better than some traditional methods.

  11. Optimizing Imaging Conditions for Demanding Multi-Color Super Resolution Localization Microscopy

    PubMed Central

    Nahidiazar, Leila; Agronskaia, Alexandra V.; Broertjes, Jorrit; van den Broek, Bram; Jalink, Kees

    2016-01-01

    Single Molecule Localization super-resolution Microscopy (SMLM) has become a powerful tool to study cellular architecture at the nanometer scale. In SMLM, single fluorophore labels are made to repeatedly switch on and off (“blink”), and their exact locations are determined by mathematically finding the centers of individual blinks. The image quality obtainable by SMLM critically depends on efficacy of blinking (brightness, fraction of molecules in the on-state) and on preparation longevity and labeling density. Recent work has identified several combinations of bright dyes and imaging buffers that work well together. Unfortunately, different dyes blink optimally in different imaging buffers, and acquisition of good quality 2- and 3-color images has therefore remained challenging. In this study we describe a new imaging buffer, OxEA, that supports 3-color imaging of the popular Alexa dyes. We also describe incremental improvements in preparation technique that significantly decrease lateral- and axial drift, as well as increase preparation longevity. We show that these improvements allow us to collect very large series of images from the same cell, enabling image stitching, extended 3D imaging as well as multi-color recording. PMID:27391487

  12. A comparative study of multi-focus image fusion validation metrics

    NASA Astrophysics Data System (ADS)

    Giansiracusa, Michael; Lutz, Adam; Messer, Neal; Ezekiel, Soundararajan; Alford, Mark; Blasch, Erik; Bubalo, Adnan; Manno, Michael

    2016-05-01

    Fusion of visual information from multiple sources is relevant for security, transportation, and safety applications. Image fusion can be particularly useful when fusing imagery captured at multiple levels of focus. Different focus levels create different visual qualities in different regions of the imagery, so fusing them can provide much more visual information to analysts. Multi-focus image fusion would benefit users through automation, which requires evaluating the fused images to determine whether the focused regions of each image have been properly fused. Many no-reference metrics, such as information-theory-based, image-feature-based, and structural-similarity-based metrics, have been developed to make such comparisons. However, accurate assessment of visual quality is hard to scale, and it requires validating these metrics for different types of applications. To do this, human-perception-based validation methods have been developed, particularly using receiver operating characteristic (ROC) curves and the area under them (AUC). Our study uses these to analyze the effectiveness of no-reference image fusion metrics applied to multi-resolution fusion methods in order to determine which should be used when dealing with multi-focus data. Preliminary results show that the Tsallis, SF, and spatial frequency metrics are consistent with the image quality and peak signal-to-noise ratio (PSNR).
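
One of the reference measures mentioned here, peak signal-to-noise ratio (PSNR), is straightforward to compute for grayscale images. Below is a minimal sketch of the standard definition; it is not tied to this study's particular evaluation pipeline.

```python
import math

def mse(img_a, img_b):
    """Mean squared error between two equal-size 2D gray images."""
    total, n = 0.0, 0
    for row_a, row_b in zip(img_a, img_b):
        for a, b in zip(row_a, row_b):
            total += (a - b) ** 2
            n += 1
    return total / n

def psnr(img_a, img_b, peak=255.0):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    m = mse(img_a, img_b)
    return float('inf') if m == 0 else 10.0 * math.log10(peak * peak / m)
```

Higher is better: a pair of images differing by the full dynamic range scores 0 dB, and PSNR grows as the error shrinks.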

  13. Multi-focus image fusion and robust encryption algorithm based on compressive sensing

    NASA Astrophysics Data System (ADS)

    Xiao, Di; Wang, Lan; Xiang, Tao; Wang, Yong

    2017-06-01

    Multi-focus image fusion schemes have been studied in recent years. However, little work has been done on the transmission security of multi-focus images. This paper proposes a scheme that can reduce the data transmission volume and resist various attacks. First, multi-focus image fusion based on wavelet decomposition generates complete scene images and optimizes perception by the human eye. The fused images are sparsely represented with the DCT and sampled with a structurally random matrix (SRM), which reduces the data volume and provides an initial encryption. The obtained measurements are then further encrypted, through combined permutation and diffusion stages, to resist noise and cropping attacks. At the receiver, the cipher images can be jointly decrypted and reconstructed. Simulation results demonstrate the security and robustness of the proposed scheme.

  14. The Constancy of Colored After-Images

    PubMed Central

    Zeki, Semir; Cheadle, Samuel; Pepper, Joshua; Mylonas, Dimitris

    2017-01-01

    We undertook psychophysical experiments to determine whether the color of the after-image produced by viewing a colored patch that is part of a complex multi-colored scene depends on the wavelength-energy composition of the light reflected from that patch. Our results show that it does not. The after-image, just like the color itself, depends on the ratio of light of different wavebands reflected from the patch and its surrounds. Hence, traditional accounts of after-images as the result of retinal adaptation, or as the perceptual result of physiological opponency, are inadequate. We propose instead that the color of after-images is generated after colors themselves are generated in the visual brain. PMID:28539878

  15. An adaptive block-based fusion method with LUE-SSIM for multi-focus images

    NASA Astrophysics Data System (ADS)

    Zheng, Jianing; Guo, Yongcai; Huang, Yukun

    2016-09-01

    Because of a lens's limited depth of field, digital cameras cannot acquire an all-in-focus image of objects at varying distances in a scene. Multi-focus image fusion techniques can effectively solve this problem, but block-based multi-focus image fusion methods often suffer from blocking artifacts. An adaptive block-based fusion method based on lifting undistorted-edge structural similarity (LUE-SSIM) is put forward. In this method, the image quality metric LUE-SSIM is first proposed; it utilizes characteristics of the human visual system (HVS) and structural similarity (SSIM) to make the metric consistent with human visual perception. A particle swarm optimization (PSO) algorithm with LUE-SSIM as its objective function is used to optimize the block size for constructing the fused image. Experimental results on the LIVE image database show that LUE-SSIM outperforms SSIM in quality assessment of Gaussian defocus-blurred images. A multi-focus image fusion experiment is also carried out to verify the proposed fusion method in terms of visual and quantitative evaluation. The results show that the proposed method performs better than some other block-based methods, especially in reducing blocking artifacts in the fused image, and it effectively preserves undistorted edge details in the focused regions of the source images.
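
For reference, the plain SSIM that LUE-SSIM builds on can be computed globally over two equal-size gray blocks as below; the constants are the usual 8-bit defaults (C1 = (0.01*255)^2, C2 = (0.03*255)^2). This is only the baseline metric, not the proposed LUE-SSIM.

```python
def ssim(x, y, c1=6.5025, c2=58.5225):
    """Global SSIM over two equal-size 2D gray blocks.

    Combines luminance, contrast, and structure terms in the standard
    single-formula form; returns 1.0 for identical inputs.
    """
    xs = [v for row in x for v in row]
    ys = [v for row in y for v in row]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    vx = sum((v - mx) ** 2 for v in xs) / n
    vy = sum((v - my) ** 2 for v in ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / n
    return (((2 * mx * my + c1) * (2 * cov + c2)) /
            ((mx * mx + my * my + c1) * (vx + vy + c2)))
```

In practice SSIM is evaluated over a sliding window and averaged; the block-based fusion idea is to compare candidate blocks against the sources with a metric of this family and keep the block that scores higher.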

  16. Wavelength- or Polarization-Selective Thermal Infrared Detectors for Multi-Color or Polarimetric Imaging Using Plasmonics and Metamaterials

    PubMed Central

    Ogawa, Shinpei; Kimata, Masafumi

    2017-01-01

    Wavelength- or polarization-selective thermal infrared (IR) detectors are promising for various novel applications such as fire detection, gas analysis, multi-color imaging, multi-channel detectors, recognition of artificial objects in a natural environment, and facial recognition. However, these functions require additional filters or polarizers, which leads to high cost and technical difficulties related to integration of many different pixels in an array format. Plasmonic metamaterial absorbers (PMAs) can impart wavelength or polarization selectivity to conventional thermal IR detectors simply by controlling the surface geometry of the absorbers to produce surface plasmon resonances at designed wavelengths or polarizations. This enables integration of many different pixels in an array format without any filters or polarizers. We review our recent advances in wavelength- and polarization-selective thermal IR sensors using PMAs for multi-color or polarimetric imaging. The absorption mechanism defined by the surface structures is discussed for three types of PMAs—periodic crystals, metal-insulator-metal and mushroom-type PMAs—to demonstrate appropriate applications. Our wavelength- or polarization-selective uncooled IR sensors using various PMAs and multi-color image sensors are then described. Finally, high-performance mushroom-type PMAs are investigated. These advanced functional thermal IR detectors with wavelength or polarization selectivity will provide great benefits for a wide range of applications. PMID:28772855

  17. Wavelength- or Polarization-Selective Thermal Infrared Detectors for Multi-Color or Polarimetric Imaging Using Plasmonics and Metamaterials.

    PubMed

    Ogawa, Shinpei; Kimata, Masafumi

    2017-05-04

    Wavelength- or polarization-selective thermal infrared (IR) detectors are promising for various novel applications such as fire detection, gas analysis, multi-color imaging, multi-channel detectors, recognition of artificial objects in a natural environment, and facial recognition. However, these functions require additional filters or polarizers, which leads to high cost and technical difficulties related to integration of many different pixels in an array format. Plasmonic metamaterial absorbers (PMAs) can impart wavelength or polarization selectivity to conventional thermal IR detectors simply by controlling the surface geometry of the absorbers to produce surface plasmon resonances at designed wavelengths or polarizations. This enables integration of many different pixels in an array format without any filters or polarizers. We review our recent advances in wavelength- and polarization-selective thermal IR sensors using PMAs for multi-color or polarimetric imaging. The absorption mechanism defined by the surface structures is discussed for three types of PMAs-periodic crystals, metal-insulator-metal and mushroom-type PMAs-to demonstrate appropriate applications. Our wavelength- or polarization-selective uncooled IR sensors using various PMAs and multi-color image sensors are then described. Finally, high-performance mushroom-type PMAs are investigated. These advanced functional thermal IR detectors with wavelength or polarization selectivity will provide great benefits for a wide range of applications.

  18. Image analysis of skin color heterogeneity focusing on skin chromophores and the age-related changes in facial skin.

    PubMed

    Kikuchi, Kumiko; Masuda, Yuji; Yamashita, Toyonobu; Kawai, Eriko; Hirao, Tetsuji

    2015-05-01

    Heterogeneity of skin color tone is one of the key factors in the visual perception of facial attractiveness and age. However, there have been few quantitative analyses of the color heterogeneity of facial skin. The purpose of this study was to develop image evaluation methods for skin color heterogeneity focusing on skin chromophores, and then to characterize ethnic differences and age-related changes. A facial imaging system equipped with an illumination unit and a high-resolution digital camera was used to develop the methods. First, melanin and/or hemoglobin images were obtained using pigment-specific image-processing techniques, which convert Commission Internationale de l'Eclairage XYZ color values into melanin and/or hemoglobin indexes as measures of their contents. Second, a spatial frequency analysis with threshold settings was applied to the individual images. Cheek skin images of 194 healthy Asian and Caucasian female subjects were acquired using the imaging system, and the skin color heterogeneity of Asian and Caucasian faces was characterized. The proposed pigment-specific image-processing techniques allowed visual discrimination of skin redness from skin pigmentation. In the heterogeneity analyses of cheek skin color, age-related changes in melanin were clearly detected in both Asian and Caucasian skin. Furthermore, the heterogeneity indexes of hemoglobin were significantly higher in Caucasian skin than in Asian skin. We have developed evaluation methods for skin color heterogeneity by image analysis based on the major chromophores, melanin and hemoglobin, with special reference to their size. This methodology should be useful for a better understanding of aging and ethnic differences.

  19. Implementation and image processing of a multi-focusing bionic compound eye

    NASA Astrophysics Data System (ADS)

    Wang, Xin; Guo, Yongcai; Luo, Jiasai

    2018-01-01

    In this paper, a new bionic compound eye (BCE) with a multi-focusing microlens array (MLA) is proposed. The BCE consists of a detachable micro-hole array (MHA), a multi-focusing MLA, and a spherical substrate, allowing it to have a large field of view (FOV) without crosstalk or stray light. The MHA was fabricated by precision machining, and the parameters of each microlens varied depending on the aperture of its micro-hole, through which the multi-focusing MLA was realized under negative pressure. Without pattern transfer or substrate reshaping, the whole fabrication could be completed within several minutes using microinjection technology. The method is cost-effective and easy to operate, providing a feasible route to mass production of the BCE. Image processing was used to stitch the sub-images of the individual microlenses into an integral image over the large FOV. The stitching exploits the overlap between adjacent sub-images, whose feature points were captured by Harris corner detection. Adaptive non-maximal suppression eliminated numerous potential mismatched points and effectively improved the algorithm's efficiency. Random sample consensus (RANSAC) was then used for feature point matching, from which the projective transformation between images was obtained. Accurate image matching was realized after smooth transitions were applied by a weighted-average method. Experimental results indicate that the image-stitching algorithm can be applied to curved BCEs with a large field of view.

  20. Multi-sparse dictionary colorization algorithm based on the feature classification and detail enhancement

    NASA Astrophysics Data System (ADS)

    Yan, Dan; Bai, Lianfa; Zhang, Yi; Han, Jing

    2018-02-01

    To address the problems of missing details and limited performance in colorization based on sparse representation, we propose a conceptual model framework for colorizing gray-scale images, and on top of this framework a multi-sparse-dictionary colorization algorithm based on feature classification and detail enhancement (CEMDC). The algorithm achieves a natural colorization effect for a gray-scale image that is consistent with human vision. First, the algorithm establishes a multi-sparse-dictionary classification colorization model. Then, to improve the classification accuracy, a corresponding local constraint algorithm is proposed. Finally, we propose a detail enhancement method based on the Laplacian pyramid, which is effective in restoring missing details and improving the speed of image colorization. In addition, the algorithm not only realizes the colorization of gray-scale images, but can also be applied to other areas, such as color transfer between color images and colorizing gray fusion images and infrared images.

  21. Region-based multi-step optic disk and cup segmentation from color fundus image

    NASA Astrophysics Data System (ADS)

    Xiao, Di; Lock, Jane; Manresa, Javier Moreno; Vignarajan, Janardhan; Tay-Kearney, Mei-Ling; Kanagasingam, Yogesan

    2013-02-01

    The retinal optic cup-to-disk ratio (CDR) is one of the important indicators of glaucomatous neuropathy. In this paper, we propose a novel multi-step 4-quadrant thresholding method for optic disk segmentation and a multi-step temporal-nasal segmentation method for optic cup segmentation, based on blood-vessel-inpainted HSL lightness images and green-channel images. The performance of the proposed methods was evaluated on a group of color fundus images and compared with manual outlines from two experts. Dice scores between the automatically detected and manually outlined disk and cup regions were computed and compared. Vertical CDRs were also compared among the three results. The preliminary experiment demonstrates the robustness of the method for automatic optic disk and cup segmentation and its potential value for clinical application.
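
The vertical CDR compared in this record is simply the ratio of the segmented cup height to the disk height. A trivial sketch (the boundary-row arguments are hypothetical names, not from the paper):

```python
def vertical_cdr(cup_top, cup_bottom, disk_top, disk_bottom):
    """Vertical cup-to-disk ratio from segmented boundary rows (pixels).

    A larger ratio indicates a larger cup relative to the disk, which is
    associated with glaucomatous neuropathy.
    """
    disk_height = disk_bottom - disk_top
    if disk_height <= 0:
        raise ValueError("disk boundaries must enclose a positive height")
    return (cup_bottom - cup_top) / disk_height
```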

  22. Multi-focus image fusion with the all convolutional neural network

    NASA Astrophysics Data System (ADS)

    Du, Chao-ben; Gao, She-sheng

    2018-01-01

    A decision map contains complete and clear information about the images to be fused, which is crucial to various image fusion tasks, especially multi-focus image fusion. However, a good decision map is necessary for a satisfactory fusion result and is usually difficult to obtain. In this letter, we address this problem with a convolutional neural network (CNN), aiming to obtain a state-of-the-art decision map. The main idea is that the max-pooling layers of the CNN are replaced by convolution layers, the residuals are propagated backwards by gradient descent, and the training parameters of the individual layers of the CNN are updated layer by layer. Based on this, we propose a new all-convolutional-network (ACNN)-based multi-focus image fusion method in the spatial domain. We demonstrate that the decision map obtained from the ACNN is reliable and can lead to high-quality fusion results. Experimental results validate that the proposed algorithm obtains state-of-the-art fusion performance in terms of both qualitative and quantitative evaluations.

  23. Demosaiced pixel super-resolution for multiplexed holographic color imaging

    PubMed Central

    Wu, Yichen; Zhang, Yibo; Luo, Wei; Ozcan, Aydogan

    2016-01-01

    To synthesize a holographic color image, one can sequentially take three holograms at different wavelengths, e.g., in the red (R), green (G) and blue (B) parts of the spectrum, and digitally merge them. To speed up the imaging process by a factor of three, a Bayer color sensor chip can instead be used to demultiplex three wavelengths that simultaneously illuminate the sample and to digitally retrieve the individual sets of holograms using the known transmission spectra of the Bayer color filters. However, because the pixels of the different channels (R, G, B) on a Bayer color sensor are not at the same physical location, conventional demosaicing techniques generate color artifacts in holographic imaging with simultaneous multi-wavelength illumination. Here we demonstrate that pixel super-resolution can be merged into the color de-multiplexing process to significantly suppress these artifacts in wavelength-multiplexed holographic color imaging. This new approach, termed Demosaiced Pixel Super-Resolution (D-PSR), generates color images that are similar in performance to sequential illumination at three wavelengths, and therefore improves the speed of holographic color imaging 3-fold. The D-PSR method is broadly applicable to holographic microscopy applications where high-resolution imaging and multi-wavelength illumination are desired. PMID:27353242
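
The de-multiplexing step this record describes starts by splitting the Bayer mosaic into per-channel sparse planes. A minimal sketch for the common RGGB layout follows; the D-PSR algorithm itself then applies pixel super-resolution to fill in the missing samples, which is not shown here.

```python
def split_bayer_rggb(raw):
    """Split an RGGB mosaic (2D list) into sparse R, G, B planes.

    Each plane keeps measured values at their physical pixel locations
    and None elsewhere, reflecting that R, G, B samples on a Bayer
    sensor are not co-located.
    """
    h, w = len(raw), len(raw[0])
    r = [[None] * w for _ in range(h)]
    g = [[None] * w for _ in range(h)]
    b = [[None] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if y % 2 == 0 and x % 2 == 0:      # even row, even col: red
                r[y][x] = raw[y][x]
            elif y % 2 == 1 and x % 2 == 1:    # odd row, odd col: blue
                b[y][x] = raw[y][x]
            else:                              # the two green sites
                g[y][x] = raw[y][x]
    return r, g, b
```

Naive demosaicing interpolates each sparse plane independently, which is where the color artifacts under simultaneous multi-wavelength illumination come from.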

  24. Adjustment of multi-CCD-chip-color-camera heads

    NASA Astrophysics Data System (ADS)

    Guyenot, Volker; Tittelbach, Guenther; Palme, Martin

    1999-09-01

    The principle of beam-splitter multi-chip cameras consists in splitting an image into multiple images of different spectral ranges and distributing these onto separate black-and-white CCD sensors. The resulting electrical signals from the chips are recombined to produce a high-quality color picture on the monitor. Because this principle guarantees higher resolution and sensitivity than conventional single-chip camera heads, the greater effort is acceptable. Furthermore, multi-chip cameras obtain the complete spectral information for each individual object point, while single-chip systems must rely on interpolation. In a joint project, Fraunhofer IOF and STRACON GmbH (and, in the future, COBRA electronic GmbH) are developing methods for designing the optics and dichroic mirror system of such prism color beam splitter devices. Additionally, techniques and equipment for the alignment and assembly of color-beam-splitter multi-CCD devices, based on gluing with UV-curable adhesives, have been developed.

  5. Video-to-film color-image recorder.

    NASA Technical Reports Server (NTRS)

    Montuori, J. S.; Carnes, W. R.; Shim, I. H.

    1973-01-01

    A precision video-to-film recorder for use in image data processing systems, being developed for NASA, will convert three video input signals (red, blue, green) into a single full-color light beam for image recording on color film. Argon ion and krypton lasers are used to produce three spectral lines which are independently modulated by the appropriate video signals, combined into a single full-color light beam, and swept over the recording film in a raster format for image recording. A rotating multi-faceted spinner mounted on a translating carriage generates the raster, and an annotation head is used to record up to 512 alphanumeric characters in a designated area outside the image area.

  6. Two-color temporal focusing multiphoton excitation imaging with tunable-wavelength excitation

    NASA Astrophysics Data System (ADS)

    Lien, Chi-Hsiang; Abrigo, Gerald; Chen, Pei-Hsuan; Chien, Fan-Ching

    2017-02-01

    Wavelength-tunable temporal focusing multiphoton excitation microscopy (TFMPEM) is conducted to visualize optical sectioning images of multiple fluorophore-labeled specimens through the optimal two-photon excitation (TPE) of each type of fluorophore. The tunable range of the excitation wavelength is determined by the groove density of the grating, the diffraction angle, the focal length of the lenses, and the shift distance of the first lens in the beam expander. Considering the trade-off between the tunable-wavelength range and the axial resolution of temporal focusing multiphoton excitation imaging, the presented system demonstrated a tunable-wavelength range from 770 to 920 nm using a diffraction grating with a groove density of 830 lines/mm. TPE fluorescence imaging of a fluorescent thin film indicated that the width of the axially confined excitation was 3.0±0.7 μm and that the shift of the temporal focal plane was less than 0.95 μm within the presented wavelength-tunable range. Fast excitation at different wavelengths and three-dimensionally rendered imaging of HeLa cell mitochondria and cytoskeletons and of mouse muscle fibers were demonstrated. Significantly, the proposed system can improve the quality of two-color TFMPEM images through different excitation wavelengths, obtaining higher-quality fluorescent signals in multiple-fluorophore measurements.

  7. A Plenoptic Multi-Color Imaging Pyrometer

    NASA Technical Reports Server (NTRS)

    Danehy, Paul M.; Hutchins, William D.; Fahringer, Timothy; Thurow, Brian S.

    2017-01-01

    A three-color pyrometer has been developed based on plenoptic imaging technology. Three bandpass filters placed in front of a camera lens allow separate 2D images to be obtained on a single image sensor at three different, user-adjustable wavelengths. Images were obtained of different blackbodies and greybodies, including a calibration furnace, a radiation heater, and a luminous sulfur match flame. The images of the calibration furnace and radiation heater were processed to determine 2D temperature distributions. Calibration results in the furnace showed that the instrument can measure temperature with an accuracy and precision of 10 K between 1100 and 1350 K. Time-resolved 2D temperature measurements of the radiation heater are shown.
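
    The underlying principle of multi-color pyrometry is that the ratio of intensities at two known wavelengths fixes the temperature of a greybody, independent of its (wavelength-flat) emissivity. A hedged sketch of the textbook two-color relation under the Wien approximation to Planck's law (not the authors' calibration procedure):

```python
import math

C2 = 1.4388e-2  # second radiation constant, m*K

def wien_intensity(wavelength, temperature):
    """Spectral intensity under the Wien approximation (arbitrary scale)."""
    return wavelength ** -5 * math.exp(-C2 / (wavelength * temperature))

def ratio_temperature(i1, i2, lam1, lam2):
    """Recover temperature from the intensity ratio at two wavelengths.

    From ln(i1/i2) = 5*ln(lam2/lam1) + (C2/T)*(1/lam2 - 1/lam1).
    """
    num = C2 * (1.0 / lam2 - 1.0 / lam1)
    den = math.log(i1 / i2) - 5.0 * math.log(lam2 / lam1)
    return num / den

# Round trip: synthesize intensities at a known temperature, recover it.
lam1, lam2 = 700e-9, 900e-9
T_true = 1200.0
i1 = wien_intensity(lam1, T_true)
i2 = wien_intensity(lam2, T_true)
T_est = ratio_temperature(i1, i2, lam1, lam2)
```

    A third wavelength, as in the instrument above, adds redundancy that helps diagnose non-grey behavior.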

  8. The research on multi-projection correction based on color coding grid array

    NASA Astrophysics Data System (ADS)

    Yang, Fan; Han, Cheng; Bai, Baoxing; Zhang, Chao; Zhao, Yunxiu

    2017-10-01

    Multi-channel projection systems suffer from disadvantages such as low timeliness and heavy manual intervention. To solve these problems, this paper proposes a multi-projector correction technique based on a color-coded grid array. First, a color structured-light stripe pattern is generated using De Bruijn sequences, and the feature information of the color structured-light image is meshed into a grid. Taking each colored grid intersection as a center, a white solid circle is constructed to build the feature sample set of the projected images; the resulting feature set offers both perceptual localization and good noise immunity. Second, a subpixel geometric mapping between the projection screen and the individual projectors is established through structured-light encoding and decoding based on the color array, and this mapping is used to solve the homography matrix of each projector. Finally, because brightness inconsistency in the overlap area of multi-channel projection seriously degrades the corrected image with respect to the observer's visual needs, a luminance fusion correction algorithm is applied to obtain a projection display image with visual consistency. The experimental results show that this method not only effectively solves the distortion of the multi-projection screen and the luminance interference in overlapping regions, but also improves the calibration efficiency of the multi-channel projection system and reduces the maintenance cost of intelligent multi-projection systems.
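
    De Bruijn sequences are attractive for structured-light coding because every window of n consecutive symbols appears exactly once, so a stripe's position can be identified from a local neighborhood alone. A standard sketch of the recursive Lyndon-word construction (illustrative; the paper's actual color coding scheme is not reproduced here):

```python
def de_bruijn(k, n):
    """De Bruijn sequence B(k, n): every length-n word over k symbols
    appears exactly once as a cyclic substring (length k**n)."""
    a = [0] * k * n
    sequence = []

    def db(t, p):
        if t > n:
            if n % p == 0:
                sequence.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)

    db(1, 1)
    return sequence

# 3 "colors", windows of length 2 -> a cyclic sequence of length 9.
seq = de_bruijn(3, 2)
```

    Mapping each symbol to a stripe color yields a pattern whose local windows are globally unique, which is what makes the later decoding step well-posed.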

  9. Dual-Tree Complex Wavelet Transform and Image Block Residual-Based Multi-Focus Image Fusion in Visual Sensor Networks

    PubMed Central

    Yang, Yong; Tong, Song; Huang, Shuying; Lin, Pan

    2014-01-01

    This paper presents a novel framework for the fusion of multi-focus images explicitly designed for visual sensor network (VSN) environments. Multi-scale fusion methods can often obtain fused images with good visual effect. However, because of defects in the fusion rules, it is almost impossible to completely avoid losing useful information in the resulting fused images. The proposed fusion scheme can be divided into two processes: initial fusion and final fusion. The initial fusion is based on a dual-tree complex wavelet transform (DTCWT). The Sum-Modified-Laplacian (SML)-based visual contrast and the SML are employed to fuse the low- and high-frequency coefficients, respectively, and an initial composite image is obtained. In the final fusion process, the image block residuals technique and consistency verification are used to detect the focused areas, yielding a decision map. This map then guides the construction of the final fused image. The performance of the proposed method was extensively tested on a number of multi-focus images, including no-reference images, reference images, and images with different noise levels. The experimental results clearly indicate that the proposed method outperformed various state-of-the-art fusion methods, in terms of both subjective and objective evaluations, and is more suitable for VSNs. PMID:25587878
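
    The Sum-Modified-Laplacian focus measure used here scores local sharpness by summing an absolute-value Laplacian over a small window. A minimal NumPy sketch (the window size and border handling are illustrative assumptions):

```python
import numpy as np

def sum_modified_laplacian(img, window=3):
    """Sum-Modified-Laplacian focus measure.

    ML(x, y) = |2I(x,y) - I(x-1,y) - I(x+1,y)|
             + |2I(x,y) - I(x,y-1) - I(x,y+1)|
    and SML sums ML over a small window around each pixel.
    """
    img = img.astype(np.float64)
    ml = np.abs(2 * img[1:-1, 1:-1] - img[:-2, 1:-1] - img[2:, 1:-1]) \
       + np.abs(2 * img[1:-1, 1:-1] - img[1:-1, :-2] - img[1:-1, 2:])
    # Aggregate ML over the window with a simple box sum.
    pad = window // 2
    padded = np.pad(ml, pad, mode='edge')
    sml = np.zeros_like(ml)
    for dy in range(window):
        for dx in range(window):
            sml += padded[dy:dy + ml.shape[0], dx:dx + ml.shape[1]]
    return sml

# A sharp checkerboard scores high; a flat (defocused-like) patch scores 0.
sharp = np.indices((16, 16)).sum(axis=0) % 2 * 255.0
flat = np.full((16, 16), sharp.mean())
```

    Comparing SML maps of two source images per block or per pixel is one common way such focus decisions are made.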

  10. Dual-tree complex wavelet transform and image block residual-based multi-focus image fusion in visual sensor networks.

    PubMed

    Yang, Yong; Tong, Song; Huang, Shuying; Lin, Pan

    2014-11-26

    This paper presents a novel framework for the fusion of multi-focus images explicitly designed for visual sensor network (VSN) environments. Multi-scale fusion methods can often obtain fused images with good visual effect. However, because of defects in the fusion rules, it is almost impossible to completely avoid losing useful information in the resulting fused images. The proposed fusion scheme can be divided into two processes: initial fusion and final fusion. The initial fusion is based on a dual-tree complex wavelet transform (DTCWT). The Sum-Modified-Laplacian (SML)-based visual contrast and the SML are employed to fuse the low- and high-frequency coefficients, respectively, and an initial composite image is obtained. In the final fusion process, the image block residuals technique and consistency verification are used to detect the focused areas, yielding a decision map. This map then guides the construction of the final fused image. The performance of the proposed method was extensively tested on a number of multi-focus images, including no-reference images, reference images, and images with different noise levels. The experimental results clearly indicate that the proposed method outperformed various state-of-the-art fusion methods, in terms of both subjective and objective evaluations, and is more suitable for VSNs.

  11. Perceived assessment metrics for visible and infrared color fused image quality without reference image

    NASA Astrophysics Data System (ADS)

    Yu, Xuelian; Chen, Qian; Gu, Guohua; Ren, Jianle; Sui, Xiubao

    2015-02-01

    Designing an objective quality assessment for color-fused images is a demanding and challenging task. We propose four no-reference metrics based on human visual system characteristics for objectively evaluating the quality of false-color fusion images. The perceived edge metric (PEM) is defined based on a visual perception model and the color image gradient similarity between the fused image and the source images. The perceptual contrast metric (PCM) is established by associating multi-scale contrast and a varying contrast sensitivity filter (CSF) with the color components. A linear combination of the standard deviation and mean value over the fused image constructs the image colorfulness metric (ICM). The color comfort metric (CCM) is designed from the average saturation and the ratio of pixels with high and low saturation. The qualitative and quantitative experimental results demonstrate that the proposed metrics agree well with subjective perception.
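
    As a concrete point of reference for an ICM-style measure, the well-known Hasler-Suesstrunk colorfulness metric also combines the standard deviation and mean of opponent-color components linearly. The sketch below shows that metric, not the authors' exact ICM:

```python
import numpy as np

def colorfulness(rgb):
    """Hasler-Suesstrunk colorfulness: a linear combination of the
    standard deviation and mean of two opponent-color components,
    in the same spirit as the ICM described above."""
    r, g, b = (rgb[..., i].astype(np.float64) for i in range(3))
    rg = r - g                 # red-green opponent component
    yb = 0.5 * (r + g) - b     # yellow-blue opponent component
    std = np.sqrt(rg.std() ** 2 + yb.std() ** 2)
    mean = np.sqrt(rg.mean() ** 2 + yb.mean() ** 2)
    return std + 0.3 * mean

gray = np.full((8, 8, 3), 128.0)              # achromatic image: scores 0
vivid = np.zeros((8, 8, 3))
vivid[..., 0] = 255.0                         # saturated red image
```

    Higher scores indicate a more colorful (and, for fusion, often more informative) result.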

  12. Pixel Color Clustering of Multi-Temporally Acquired Digital Photographs of a Rice Canopy by Luminosity-Normalization and Pseudo-Red-Green-Blue Color Imaging

    PubMed Central

    Doi, Ryoichi; Arif, Chusnul

    2014-01-01

    Red-green-blue (RGB) channels of RGB digital photographs were loaded with luminosity-adjusted R, G, and completely white grayscale images, respectively (RGwhtB method), or R, G, and R + G (RGB yellow) grayscale images, respectively (RGrgbyB method), to adjust the brightness of the entire area of multi-temporally acquired color digital photographs of a rice canopy. From the RGwhtB or RGrgbyB pseudocolor image, cyan, magenta, CMYK yellow, black, L*, a*, and b* grayscale images were prepared. Using these grayscale images and the R, G, and RGB yellow grayscale images, the luminosity-adjusted pixels of the canopy photographs were statistically clustered. With the RGrgbyB and RGwhtB methods, seven and five major color clusters were obtained, respectively. The RGrgbyB method showed clear differences among three rice growth stages, and the vegetative stage was further divided into two substages. The RGwhtB method could not clearly discriminate between the second vegetative and midseason stages. The relative advantages of the RGrgbyB method were attributed to the R, G, B, magenta, yellow, L*, and a* grayscale images, which contained richer information for showing the colorimetric differences among objects than those of the RGwhtB method. The comparison of rice canopy colors at different time points was enabled by the pseudocolor imaging method. PMID:25302325

  13. Multi-clues image retrieval based on improved color invariants

    NASA Astrophysics Data System (ADS)

    Liu, Liu; Li, Jian-Xun

    2012-05-01

    At present, image retrieval has made great progress in indexing efficiency and memory usage, mainly by benefiting from text retrieval technology such as the bag-of-features (BOF) model and the inverted-file structure. Meanwhile, because robust local feature invariants are selected to establish the BOF, its retrieval precision is enhanced, especially when applied to a large-scale database. However, these local feature invariants mainly capture the geometric variance of the objects in the images, so the color information of the objects goes unused. With the development of information technology and the Internet, the majority of our retrieval objects are color images, so retrieval performance can be further improved through proper use of color information. We propose an improved method based on an analysis of a flaw in the shadow-shading quasi-invariant: its response and performance at object edges under varying lighting are enhanced. The color descriptors of the invariant regions are extracted and integrated into the BOF based on the local features. The robustness of the algorithm and the improvement in performance are verified in the final experiments.

  14. Specialized Color Targets for Spectral Reflectance Reconstruction of Magnified Images

    NASA Astrophysics Data System (ADS)

    Kruschwitz, Jennifer D. T.

    Digital images are used almost exclusively instead of film to capture visual information across many scientific fields. The colorimetric color representation within these digital images can be relayed from the digital counts produced by the camera with the use of a known color target. In image capture of magnified images, there is currently no reliable color target that can be used at multiple magnifications and give the user a solid understanding of the color ground truth within those images. The first part of this dissertation included the design, fabrication, and testing of a color target produced with optical interference coated microlenses for use in an off-axis illumination, compound microscope. An ideal target was designed to increase the color gamut for colorimetric imaging and provide the necessary "Block Dye" spectral reflectance profiles across the visible spectrum to reduce the number of color patches necessary for multiple filter imaging systems that rely on statistical models for spectral reflectance reconstruction. There are other scientific disciplines that can benefit from a specialized color target to determine the color ground truth in their magnified images and perform spectral estimation. Not every discipline has the luxury of having a multi-filter imaging system. The second part of this dissertation developed two unique ways of using an interference coated color mirror target: one that relies on multiple light-source angles, and one that leverages a dynamic color change with time. The source multi-angle technique would be used for the microelectronic discipline where the reconstructed spectral reflectance would be used to determine a dielectric film thickness on a silicon substrate, and the time varying technique would be used for a biomedical example to determine the thickness of human tear film.

  15. Study on Mosaic and Uniform Color Method of Satellite Image Fusion in Large Area

    NASA Astrophysics Data System (ADS)

    Liu, S.; Li, H.; Wang, X.; Guo, L.; Wang, R.

    2018-04-01

    Because of improved satellite radiometric resolution, color differences among multi-temporal satellite remote sensing images, and the large volume of satellite image data, completing the mosaicking and color balancing of satellite images has always been an important problem in image processing. First, using the bundle uniform color method and the least-squares mosaic method of GXL together with the dodging function, a uniform transition of color and brightness can be achieved across large-area, multi-temporal satellite images. Second, Color Mapping software is used to convert 16-bit mosaic images to 8-bit mosaic images, with colors balanced against low-resolution reference images. Finally, qualitative and quantitative analytical methods are used to analyze and evaluate the satellite imagery after mosaicking and color balancing. The tests show that the correlation between mosaic images before and after color balancing is higher than 95 %, the image information entropy increases, and texture features are enhanced, as verified by quantitative indexes such as the correlation coefficient and information entropy. Satellite image mosaicking and color processing over large areas has thus been successfully implemented.
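
    The two quantitative indexes mentioned, the correlation coefficient and the information entropy, are straightforward to compute. A minimal NumPy sketch (256-bin grayscale histograms are an illustrative assumption):

```python
import numpy as np

def image_entropy(img, bins=256):
    """Shannon entropy of the gray-level histogram, in bits."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def correlation(img_a, img_b):
    """Pearson correlation coefficient between two images."""
    return float(np.corrcoef(img_a.ravel(), img_b.ravel())[0, 1])

flat = np.full((32, 32), 100.0)   # a uniform tile carries no information
textured = np.random.default_rng(0).integers(0, 256, (32, 32)).astype(float)
```

    Rising entropy after color balancing suggests richer tonal content; a high before/after correlation suggests the balancing preserved the scene's radiometry.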

  16. Multi-Focus Image Fusion Based on NSCT and NSST

    NASA Astrophysics Data System (ADS)

    Moonon, Altan-Ulzii; Hu, Jianwen

    2015-12-01

    In this paper, a multi-focus image fusion algorithm based on the nonsubsampled contourlet transform (NSCT) and the nonsubsampled shearlet transform (NSST) is proposed. The source images are first decomposed by the NSCT and NSST into low-frequency and high-frequency coefficients. Then, the average method is used to fuse the low-frequency coefficients of the NSCT. To obtain a more accurate salience measurement, the high-frequency coefficients of the NSST and NSCT are combined to measure salience. The high-frequency coefficients of the NSCT with larger salience are selected as the fused high-frequency coefficients. Finally, the fused image is reconstructed by the inverse NSCT. We adopt three metrics (QAB/F, Qe, and Qw) to evaluate the quality of the fused images. The experimental results demonstrate that the proposed method outperforms other methods and retains highly detailed edges and contours.

  17. Multi-band transmission color filters for multi-color white LEDs based visible light communication

    NASA Astrophysics Data System (ADS)

    Wang, Qixia; Zhu, Zhendong; Gu, Huarong; Chen, Mengzhu; Tan, Qiaofeng

    2017-11-01

    Light-emitting diode (LED)-based visible light communication (VLC) can provide license-free bands, high data rates, and high security, making it a promising technique that will be widely applied in the future. Multi-band transmission color filters with sufficient peak transmittance and suitable bandwidth play a pivotal role in boosting the signal-to-noise ratio in VLC systems. In this paper, multi-band transmission color filters with bandwidths of tens of nanometers are designed by a simple analytical method. Experimental results for one-dimensional (1D) and two-dimensional (2D) tri-band color filters demonstrate the effectiveness of the multi-band transmission color filters and the corresponding analytical method.

  18. A mercury arc lamp-based multi-color confocal real time imaging system for cellular structure and function.

    PubMed

    Saito, Kenta; Kobayashi, Kentaro; Tani, Tomomi; Nagai, Takeharu

    2008-01-01

    Multi-point scanning confocal microscopy using a Nipkow disk enables the acquisition of fluorescent images with high spatial and temporal resolution. Like other single-point scanning confocal systems that use galvanometer mirrors, a commercially available Nipkow spinning disk confocal unit, the Yokogawa CSU10, requires lasers as the excitation light source. The choice of fluorescent dyes is strongly restricted, however, because only a limited number of laser lines can be introduced into a single confocal system. To overcome this problem, we developed an illumination system in which light from a mercury arc lamp is scrambled into homogeneous light by passing it through a multi-mode optical fiber. This illumination system provides incoherent light with continuous wavelengths, enabling the observation of a wide range of fluorophores. Using this optical system, we demonstrate both high-speed imaging (up to 100 Hz) of intracellular Ca(2+) propagation and multi-color imaging of Ca(2+) and PKC-gamma dynamics in living cells.

  19. Internet Color Imaging

    DTIC Science & Technology

    2000-07-01

    Defense Technical Information Center Compilation Part Notice ADP011348. Title: Internet Color Imaging. Distribution: Approved for public release. Hsien-Che Lee, Imaging Science and Technology Laboratory, Eastman Kodak Company, Rochester, New York 14650-1816 USA. Abstract: The sharing and exchange of color images over the Internet pose very challenging problems to color science and technology. Emerging color standards...

  20. Interferometric source of multi-color, multi-beam entangled photons with mirror and mixer

    DOEpatents

    Dress, William B.; Kisner, Roger A.; Richards, Roger K.

    2004-06-01

    Systems and methods are described for an interferometric source of multi-color, multi-beam entangled photons. An apparatus includes: a multi-refringent device optically coupled to a source of coherent energy, the multi-refringent device providing a beam of multi-color entangled photons; a condenser device optically coupled to the multi-refringent device, the condenser device i) including a mirror and a mixer and ii) converging two spatially resolved portions of the beam of multi-color entangled photons into a converged multi-color entangled photon beam; a tunable phase adjuster optically coupled to the condenser device, the tunable phase adjuster changing a phase of at least a portion of the converged multi-color entangled photon beam to generate a first interferometric multi-color entangled photon beam; and a beam splitter optically coupled to the condenser device, the beam splitter combining the first interferometric multi-color entangled photon beam with a second interferometric multi-color entangled photon beam.

  1. Enriching text with images and colored light

    NASA Astrophysics Data System (ADS)

    Sekulovski, Dragan; Geleijnse, Gijs; Kater, Bram; Korst, Jan; Pauws, Steffen; Clout, Ramon

    2008-01-01

    We present an unsupervised method to enrich textual applications with relevant images and colors. The images are collected by querying large image repositories, and the colors are subsequently computed using image processing. A prototype system based on this method is presented in which the method is applied to song lyrics. In combination with a lyrics synchronization algorithm, the system produces a rich multimedia experience. To identify terms within the text that may be associated with images and colors, we select noun phrases using a part-of-speech tagger. Large image repositories are queried with these terms. Per term, representative colors are extracted from the collected images using either a histogram-based or a mean-shift-based algorithm. The representative color extraction exploits the non-uniform distribution of the colors found in the large repositories. The images ranked best by the search engine are displayed on a screen, while the extracted representative colors are rendered on controllable lighting devices in the living room. We evaluate our method by comparing the computed colors to standard color representations of a set of English color terms. A second evaluation focuses on the distance in color between a queried term in English and its translation in a foreign language. Based on results from three sets of terms, a measure of the suitability of a term for color extraction, based on KL divergence, is proposed. Finally, we compare the performance of the algorithm using either the automatically indexed repository of Google Images or the manually annotated Flickr.com. Based on the results of these experiments, we conclude that the presented method can compute the relevant color for a term using a large image repository and image processing.
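
    A histogram-based representative color can be sketched as taking the mode of a coarsely quantized joint RGB histogram over the retrieved images. This toy version (the quantization level is an illustrative assumption) is far simpler than the non-uniform-distribution method the paper describes:

```python
import numpy as np

def representative_color(images, bins=8):
    """Pick the most frequent quantized RGB color across a set of images.

    Each channel is quantized into `bins` levels; the mode of the joint
    3-D histogram is returned as a bin-center RGB triple.
    """
    step = 256 // bins
    counts = np.zeros((bins, bins, bins), dtype=np.int64)
    for img in images:
        q = np.clip(img // step, 0, bins - 1).astype(int)
        idx, c = np.unique(q.reshape(-1, 3), axis=0, return_counts=True)
        counts[idx[:, 0], idx[:, 1], idx[:, 2]] += c
    mode = np.unravel_index(counts.argmax(), counts.shape)
    return tuple(int(m * step + step // 2) for m in mode)

# Mostly-red toy "repository" images stand in for search results.
imgs = [np.full((4, 4, 3), [250, 10, 10]) for _ in range(3)]
color = representative_color(imgs)
```

    The extracted triple could then be sent to the lighting devices as the term's color.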

  2. Emerging From Water: Underwater Image Color Correction Based on Weakly Supervised Color Transfer

    NASA Astrophysics Data System (ADS)

    Li, Chongyi; Guo, Jichang; Guo, Chunle

    2018-03-01

    Underwater vision suffers from severe degradation due to selective attenuation and scattering as light propagates through water. Such degradation not only affects the quality of underwater images but also limits the ability of vision tasks. Unlike existing methods, which either ignore the wavelength dependency of the attenuation or assume a specific spectral profile, we tackle the color distortion problem of underwater images from a new view. In this letter, we propose a weakly supervised color transfer method to correct color distortion, which relaxes the need for paired underwater images for training and can handle underwater images for which the capture location is unknown. Inspired by Cycle-Consistent Adversarial Networks, we design a multi-term loss function including an adversarial loss, a cycle consistency loss, and an SSIM (Structural Similarity Index Measure) loss, which keeps the content and structure of the corrected result the same as the input while making the colors appear as if the image had been taken without the water. Experiments on underwater images captured in diverse scenes show that our method produces visually pleasing results and even outperforms state-of-the-art methods. Moreover, our method can improve the performance of vision tasks.

  3. Effect of multi-wavelength irradiation on color characterization with light-emitting diodes (LEDs)

    NASA Astrophysics Data System (ADS)

    Park, Hyeong Ju; Song, Woosub; Lee, Byeong-Il; Kim, Hyejin; Kang, Hyun Wook

    2017-06-01

    In the current study, a multi-wavelength light-emitting diode (LED)-integrated CMOS imaging device was developed to investigate the effect of various wavelengths on multiple color characterization. Various color pigments (black, red, green, and blue) were applied on both white paper and skin phantom surfaces for quantitative analysis. The artificial skin phantoms were made of polydimethylsiloxane (PDMS) mixed with coffee and TiO2 powder to emulate the optical properties of the human dermis. The customized LED-integrated imaging device acquired images of the applied pigments by sequentially irradiating with the LED lights in the order of white, red, green, and blue. Each color pigment produced lower contrast under illumination by light of the same color. However, illumination by light of the complementary (opposite) color increased the signal-to-noise ratio by up to 11-fold owing to the formation of strong contrast (e.g., red LED = 1.6 ± 0.3 vs. green LED = 19.0 ± 0.6 for red pigment). Detection of color pigments in conjunction with multi-wavelength LEDs can be a simple and reliable technique for quantitatively estimating variations in color pigments.

  4. New feature of the neutron color image intensifier

    NASA Astrophysics Data System (ADS)

    Nittoh, Koichi; Konagai, Chikara; Noji, Takashi; Miyabe, Keisuke

    2009-06-01

    We developed prototype neutron color image intensifiers with high sensitivity, wide dynamic range, and long-life characteristics. In the prototype intensifier (Gd-Type 1), terbium-activated Gd2O2S is used as the input-screen phosphor. In the upgraded model (Gd-Type 2), Gd2O3 and CsI:Na are vacuum-deposited to form the phosphor layer, which improved the sensitivity and the spatial uniformity. A europium-activated Y2O2S multi-color scintillator, emitting red, green, and blue photons with different intensities, is utilized as the output screen of the intensifier. By combining this image intensifier with a suitably tuned high-sensitivity color CCD camera, higher sensitivity and a wider dynamic range can be simultaneously attained than with the conventional P20-phosphor-type image intensifier. The results of experiments at the JRR-3M neutron radiography irradiation port (flux: 1.5×10^8 n/cm^2/s) showed that these neutron color image intensifiers can clearly image dynamic phenomena in 30 frame/s video. It is expected that the color image intensifier will be used as a new two-dimensional neutron sensor in new application fields.

  5. Analysis on unevenness of skin color using the melanin and hemoglobin components separated by independent component analysis of skin color image

    NASA Astrophysics Data System (ADS)

    Ojima, Nobutoshi; Fujiwara, Izumi; Inoue, Yayoi; Tsumura, Norimichi; Nakaguchi, Toshiya; Iwata, Kayoko

    2011-03-01

    Uneven distribution of skin color is one of the biggest concerns about facial skin appearance. Recently, several techniques to analyze skin color have been introduced that separate skin color information into chromophore components such as melanin and hemoglobin. However, there are few reports on the quantitative analysis of unevenness of skin color that consider the type of chromophore, clusters of different sizes, and the concentration of each chromophore. We propose a new image analysis and simulation method based on chromophore analysis and spatial frequency analysis. The method is mainly composed of three techniques: independent component analysis (ICA) to extract hemoglobin and melanin chromophores from a single skin color image; an image pyramid technique that decomposes each chromophore into multi-resolution images, which can be used to identify different cluster sizes or spatial frequencies; and analysis of the histogram obtained from each multi-resolution image to extract unevenness parameters. As an application of the method, we also introduce an image processing technique to change the unevenness of the melanin component. The method showed a high capability to analyze the unevenness of each skin chromophore: 1) vague unevenness of skin could be discriminated from noticeable pigmentation such as freckles or acne; 2) by analyzing the unevenness parameters obtained from each multi-resolution image for Japanese women, age-related changes were observed in the parameters at middle spatial frequencies; and 3) an image processing system modulating the parameters was proposed to change the unevenness of skin images in real time along the axis of the obtained age-related change.

  6. Impact of multi-focused images on recognition of soft biometric traits

    NASA Astrophysics Data System (ADS)

    Chiesa, V.; Dugelay, J. L.

    2016-09-01

    In video surveillance, the estimation of semantic traits such as gender and age has always been a debated topic because of the uncontrolled environment: while light and pose variations have been studied extensively, defocused images are still rarely investigated. Recently, the emergence of new technologies such as plenoptic cameras has made it possible to address these problems by analyzing multi-focus images. Thanks to a microlens array placed between the sensor and the main lens, light field cameras record not only RGB values but also information related to the direction of light rays; the additional data make it possible to render the image at different focal planes after acquisition. For our experiments, we use the GUC Light Field Face Database, which includes pictures from the first-generation Lytro camera. Taking advantage of light field images, we explore the influence of defocus on gender recognition and age estimation. Evaluations are computed with up-to-date, competitive technologies based on deep learning algorithms. After studying the relationship between focus and gender recognition and between focus and age estimation, we compare the results obtained on images defocused by the Lytro software with images blurred by more standard filters, in order to explore the difference between defocusing and blurring effects. In addition, we investigate the impact of deblurring on defocused images, with the goal of better understanding the different impacts of defocusing and standard blurring on gender and age estimation.

  7. Stereo matching image processing by synthesized color and the characteristic area by the synthesized color

    NASA Astrophysics Data System (ADS)

    Akiyama, Akira; Mutoh, Eiichiro; Kumagai, Hideo

    2014-09-01

We have developed stereo matching image processing based on synthesized color and on corresponding areas identified by the synthesized color, for ranging objects and for image recognition. The typical images from a pair of stereo imagers may disagree with each other due to size changes, displacement, appearance changes, and deformation of characteristic areas. We constructed the synthesized color and the corresponding color areas with the same synthesized color to make the stereo matching distinct, using three steps. The first step makes a binary edge image by differentiating the focused image from each imager and verifying that the differentiated image has a normal frequency-density distribution, in order to find the threshold level for the binarization; we used the Daubechies wavelet transformation for the differentiation in this study. The second step derives the synthesized color by averaging color brightness between binary edge points, alternately in the horizontal and vertical directions; the averaging procedure is repeated until the fluctuation of the averaged color becomes negligible with respect to the 256 brightness levels. The third step extracts areas with the same synthesized color by collecting pixels of the same synthesized color and grouping these pixels using 4-directional connectivity relations. The matching areas for stereo matching are determined using the synthesized color areas, and the matching point is the center of gravity of each synthesized color area. The parallax between a pair of images is then easily derived from the centers of gravity of the synthesized color areas. An experiment with this stereo matching was performed on a toy soccer ball as the object. The experiment showed that stereo matching by the synthesized color technique is simple and effective.
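The third step and the parallax computation can be sketched as follows, assuming binary masks of pixels that share one synthesized color; function names are illustrative, not from the paper.

```python
import numpy as np

def label4(mask):
    """Group foreground pixels into regions using 4-directional connectivity."""
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for i, j in zip(*np.nonzero(mask)):
        if labels[i, j]:
            continue
        current += 1
        stack = [(i, j)]
        labels[i, j] = current
        while stack:
            y, x = stack.pop()
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = current
                    stack.append((ny, nx))
    return labels, current

def centroid(labels, k):
    """Center of gravity of region k, used as its matching point."""
    ys, xs = np.nonzero(labels == k)
    return ys.mean(), xs.mean()

def parallax(left_mask, right_mask):
    """Horizontal disparity between a matched pair of same-color areas."""
    left_labels, _ = label4(left_mask)
    right_labels, _ = label4(right_mask)
    (_, xl), (_, xr) = centroid(left_labels, 1), centroid(right_labels, 1)
    return xl - xr
```

Because the match is between region centroids rather than individual pixels, small shape differences between the two views average out.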

  8. A Simple Encryption Algorithm for Quantum Color Image

    NASA Astrophysics Data System (ADS)

    Li, Panchi; Zhao, Ya

    2017-06-01

In this paper, a simple encryption scheme for quantum color images is proposed. First, a color image is transformed into a quantum superposition state by employing NEQR (novel enhanced quantum representation), where the R, G, B values of every pixel in a 24-bit RGB true color image are represented by 24 single-qubit basis states, with 8 qubits per value. Then, these 24 qubits are each transformed from a basis state into a balanced superposition state by employing controlled rotation gates. At this point, the gray-scale values of R, G, B of every pixel are in a balanced superposition of 2^24 multi-qubit basis states. After measurement, the whole image is uniform white noise, which does not provide any information. Decryption is the reverse process of encryption. Experimental results on a classical computer show that the proposed encryption scheme achieves good security.
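The per-qubit rotation idea can be illustrated with a classical state-vector simulation. This is a sketch of the principle only (a single qubit and an Ry gate whose angle acts as the key), not the paper's exact circuit:

```python
import numpy as np

def ry(theta):
    """Single-qubit Y-rotation gate."""
    return np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                     [np.sin(theta / 2),  np.cos(theta / 2)]])

def encrypt_bit(bit, theta):
    """Rotate the basis state |bit> by the key angle theta."""
    state = np.array([1.0, 0.0]) if bit == 0 else np.array([0.0, 1.0])
    return ry(theta) @ state

def decrypt_bit(state, theta):
    """Reverse the rotation; measurement of the restored state is then
    deterministic, recovering the plaintext bit."""
    restored = ry(-theta) @ state
    return int(restored[1] ** 2 > 0.5)

def prob_one(state):
    """Probability of measuring |1> on the encrypted state."""
    return float(state[1] ** 2)
```

With theta = pi/2 each encrypted qubit measures |1> with probability 1/2 regardless of the plaintext bit, which is why the measured image looks like uniform white noise.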

  9. High-speed imaging using 3CCD camera and multi-color LED flashes

    NASA Astrophysics Data System (ADS)

    Hijazi, Ala; Friedl, Alexander; Cierpka, Christian; Kähler, Christian; Madhavan, Vis

    2017-11-01

This paper demonstrates the possibility of capturing full-resolution, high-speed image sequences using a regular 3CCD color camera in conjunction with high-power light-emitting diodes of three different colors. This is achieved using a novel approach, referred to as spectral shuttering, where a high-speed image sequence is captured using short-duration light pulses of different colors sent consecutively in very close succession. The work presented in this paper demonstrates the feasibility of configuring a high-speed camera system from low-cost, readily available off-the-shelf components. This camera can be used for recording six-frame sequences at frame rates up to 20 kHz, or three-frame sequences at even higher frame rates. Both color crosstalk and spatial matching between the different channels of the camera are found to be within acceptable limits. A small amount of magnification difference between the channels is found, and a simple calibration procedure for correcting the images is introduced. The images captured using this approach are of sufficient quality for obtaining full-field quantitative information using techniques such as digital image correlation and particle image velocimetry. A sequence of six high-speed images of a bubble splash recorded at 400 Hz is presented as a demonstration.

  10. Automatic detection of multi-level acetowhite regions in RGB color images of the uterine cervix

    NASA Astrophysics Data System (ADS)

    Lange, Holger

    2005-04-01

    Uterine cervical cancer is the second most common cancer among women worldwide. Colposcopy is a diagnostic method used to detect cancer precursors and cancer of the uterine cervix, whereby a physician (colposcopist) visually inspects the metaplastic epithelium on the cervix for certain distinctly abnormal morphologic features. A contrast agent, a 3-5% acetic acid solution, is used, causing abnormal and metaplastic epithelia to turn white. The colposcopist considers diagnostic features such as the acetowhite, blood vessel structure, and lesion margin to derive a clinical diagnosis. STI Medical Systems is developing a Computer-Aided-Diagnosis (CAD) system for colposcopy -- ColpoCAD, a complex image analysis system that at its core assesses the same visual features as used by colposcopists. The acetowhite feature has been identified as one of the most important individual predictors of lesion severity. Here, we present the details and preliminary results of a multi-level acetowhite region detection algorithm for RGB color images of the cervix, including the detection of the anatomic features: cervix, os and columnar region, which are used for the acetowhite region detection. The RGB images are assumed to be glare free, either obtained by cross-polarized image acquisition or glare removal pre-processing. The basic approach of the algorithm is to extract a feature image from the RGB image that provides a good acetowhite to cervix background ratio, to segment the feature image using novel pixel grouping and multi-stage region-growing algorithms that provide region segmentations with different levels of detail, to extract the acetowhite regions from the region segmentations using a novel region selection algorithm, and then finally to extract the multi-levels from the acetowhite regions using multiple thresholds. The performance of the algorithm is demonstrated using human subject data.
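The final multi-level extraction step (multiple thresholds applied within the detected acetowhite region) can be sketched with numpy; the threshold values here are illustrative, not from the paper:

```python
import numpy as np

def multilevel_regions(feature, region_mask, thresholds):
    """Assign each pixel inside the detected acetowhite region a level
    0..len(thresholds) according to multiple intensity thresholds.
    Pixels outside the region are marked -1 (background)."""
    levels = np.digitize(feature, np.asarray(thresholds, dtype=float))
    return np.where(region_mask, levels, -1)
```

Raising the number of thresholds refines the acetowhite grading without re-running the region segmentation.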

  11. Single Lens Dual-Aperture 3D Imaging System: Color Modeling

    NASA Technical Reports Server (NTRS)

    Bae, Sam Y.; Korniski, Ronald; Ream, Allen; Fritz, Eric; Shearn, Michael

    2012-01-01

In an effort to miniaturize a 3D imaging system, we created two viewpoints in a single objective lens camera. This was accomplished by placing a pair of Complementary Multi-band Bandpass Filters (CMBFs) in the aperture area. Two key characteristics of the CMBFs are that the passbands are staggered, so only one viewpoint is opened at a time when a light band matched to that passband is illuminated, and that the passbands are positioned throughout the visible spectrum, so each viewpoint can render color by taking RGB spectral images. Each viewpoint takes a different spectral image from the other viewpoint, hence yielding a different color image relative to the other. This color mismatch in the two viewpoints could lead to color rivalry, where the human vision system fails to resolve two different colors. The two colors become closer as the number of passbands in a CMBF increases. (However, the number of passbands is constrained by cost and fabrication technique.) In this paper, a simulation predicting the color mismatch is reported.
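The color mismatch being modeled can be illustrated by integrating one reflectance spectrum against each viewpoint's staggered passbands. Everything below (band positions, widths, and the spectrum) is made-up illustrative data, not the paper's CMBF design:

```python
import numpy as np

wl = np.arange(400, 701, 10)  # wavelength samples, nm

def band(center, width):
    """Rectangular passband (1 inside, 0 outside) - an idealization."""
    return ((wl >= center - width / 2) & (wl < center + width / 2)).astype(float)

# Staggered, complementary passbands in the blue/green/red regions:
view1 = [band(420, 40), band(520, 40), band(620, 40)]
view2 = [band(460, 40), band(560, 40), band(660, 40)]

def rgb(reflectance, passbands):
    """Per-channel response: sum of the reflectance over each passband."""
    return np.array([np.sum(reflectance * p) for p in passbands])

# A smooth made-up reflectance spectrum, e.g. a reddish surface:
refl = np.clip((wl - 450) / 250.0, 0.05, 1.0)
```

Because the two viewpoints sample disjoint halves of each color region, rgb(refl, view1) and rgb(refl, view2) differ; that difference is exactly the mismatch the simulation predicts, and it shrinks as the passbands per channel multiply.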

  12. Color enhancement and image defogging in HSI based on Retinex model

    NASA Astrophysics Data System (ADS)

    Gao, Han; Wei, Ping; Ke, Jun

    2015-08-01

Retinex is a luminance perception algorithm based on color constancy, and it performs well for color enhancement. In some cases, however, the traditional Retinex algorithms, both Single-Scale Retinex (SSR) and Multi-Scale Retinex (MSR) in RGB color space, do not work well and cause color deviation. To solve this problem, we present improved SSR and MSR algorithms. Unlike other Retinex algorithms, we implement the algorithms in HSI (Hue, Saturation, Intensity) color space and use a parameter α to improve image quality. Moreover, the algorithms presented in this paper also perform well for image defogging. In contrast to traditional Retinex algorithms, we use the intensity channel to obtain the reflection information of an image. The intensity channel is processed with a Gaussian center-surround filter to estimate the illumination, which should be removed from the intensity channel. We then subtract the illumination from the intensity channel to obtain the reflection image, which includes only the attributes of the objects in the image. Using the reflection image and the parameter α, an arbitrary scale factor set manually, we improve the intensity channel and complete the color enhancement. Our experiments show that this approach works well compared with existing methods for color enhancement. Besides better performance on the color deviation problem and in image defogging, a visible improvement in image quality for human contrast perception is also observed.
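A minimal single-scale Retinex on the intensity channel can be sketched as follows: a Gaussian center-surround filter estimates the illumination, which is subtracted in the log domain and scaled by α. This is an illustration under assumed parameter values, not the authors' code:

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian filter (the center-surround function)."""
    r = max(1, int(3 * sigma))
    x = np.arange(-r, r + 1)
    k = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    k /= k.sum()
    tmp = np.apply_along_axis(np.convolve, 0, img, k, mode="same")
    return np.apply_along_axis(np.convolve, 1, tmp, k, mode="same")

def ssr_intensity(I, sigma=15, alpha=1.0, eps=1e-6):
    """Single-scale Retinex on the HSI intensity channel:
    reflection = alpha * (log(I) - log(blurred I)), stretched to [0, 1]."""
    L = gaussian_blur(I, sigma)                      # illumination estimate
    R = alpha * (np.log(I + eps) - np.log(L + eps))  # reflection image
    return (R - R.min()) / (R.max() - R.min() + eps)
```

Hue and saturation are left untouched, which is how the HSI formulation avoids the color deviation seen when Retinex is applied per RGB channel.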

  13. Applications of Geostationary Ocean Color Imager (GOCI) observations

    NASA Astrophysics Data System (ADS)

    Park, Y. J.

    2016-02-01

Ocean color remote sensing opened a new era for biological oceanography by providing the global distribution of phytoplankton biomass every few days. It has proved useful for a variety of applications in coastal as well as oceanic waters. However, most ocean color sensors deliver less than one image per day for low- and mid-latitude areas, and this once-a-day imaging is insufficient to resolve transient or high-frequency processes. The Korean Geostationary Ocean Color Imager (GOCI), the first ocean color instrument operated in geostationary orbit, has been collecting ocean color radiometry (OCR) data (multi-band radiances at visible to NIR wavelengths) since July 2010. GOCI has an unprecedented capability to provide eight OCR images a day at 500 m resolution for the Northeast Asian seas. Monitoring spatial and temporal variability is important for understanding many processes occurring in open ocean and coastal environments. With a series of images consecutively acquired by GOCI, we are now able to look into (sub-)diurnal variability of coastal ocean color products such as phytoplankton biomass, suspended particle concentrations, and primary production. The eight images taken per day also provide a way to derive maps of ocean current velocity. Compared to polar orbiters, GOCI delivers more frequent images with a constant viewing angle, which enables better monitoring of, and thus response to, coastal water issues such as harmful algal blooms and floating green and brown algae. The frequent observation capability for a local area allows a timely response to natural disasters and hazards. GOCI images are often useful for identifying sea fog, sea ice, wildfires, volcanic eruptions, transport of dust aerosols, snow-covered areas, etc.

  14. Multiphase computer-generated holograms for full-color image generation

    NASA Astrophysics Data System (ADS)

    Choi, Kyong S.; Choi, Byong S.; Choi, Yoon S.; Kim, Sun I.; Kim, Jong Man; Kim, Nam; Gil, Sang K.

    2002-06-01

Multi-phase and binary-phase computer-generated holograms were designed and demonstrated for full-color image generation. To optimize a phase profile of the hologram that reconstructs each color image, we employed a simulated annealing method. The designed binary-phase hologram had a diffraction efficiency of 33.23 percent and a reconstruction error of 0.367 × 10^-2, and the eight-phase hologram had a diffraction efficiency of 67.92 percent and a reconstruction error of 0.273 × 10^-2. The designed BPH was fabricated by a micro-photolithographic technique with a minimum pixel width of 5 μm, and it was reconstructed using two Ar-ion lasers and a He-Ne laser. In addition, the color dispersion characteristics of the fabricated grating and the scaling problem of the reconstructed image are discussed.
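The simulated annealing design step can be sketched as follows: flip one pixel of a binary (0 or π) phase profile at a time and accept changes that reduce the far-field (FFT) reconstruction error, occasionally accepting worse ones according to a cooling temperature. Sizes, schedule, and cost function are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def cost(phase, target):
    """Reconstruction error: normalized far-field intensity vs. desired image."""
    field = np.fft.fft2(np.exp(1j * phase))
    recon = np.abs(field) ** 2
    recon /= recon.sum()
    return float(np.sum((recon - target) ** 2))

def anneal_binary_hologram(target, iters=2000, t0=1e-6, seed=0):
    """Simulated annealing over a binary (0 or pi) phase profile."""
    rng = np.random.default_rng(seed)
    n = target.shape[0]
    phase = rng.choice([0.0, np.pi], size=(n, n))
    c = cost(phase, target)
    best_phase, best_c = phase.copy(), c
    for it in range(iters):
        temp = t0 * (1.0 - it / iters)          # linear cooling schedule
        i, j = rng.integers(0, n, size=2)
        phase[i, j] = np.pi - phase[i, j]       # flip one pixel
        c_new = cost(phase, target)
        if c_new < c or rng.random() < np.exp(-(c_new - c) / max(temp, 1e-30)):
            c = c_new
            if c < best_c:
                best_phase, best_c = phase.copy(), c
        else:
            phase[i, j] = np.pi - phase[i, j]   # undo the rejected flip
    return best_phase, best_c
```

The eight-phase case is the same loop with the pixel update drawing from eight phase levels instead of a flip.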

  15. The Art of Multi-Image.

    ERIC Educational Resources Information Center

    Gordon, Roger L., Ed.

    This guide to multi-image program production for practitioners describes the process from the beginning stages through final presentation, examines historical perspectives, theory, and research in multi-image, and provides examples of successful utilization. Ten chapters focus on the following topics: (1) definition of multi-image field and…

  16. Introduction to Color Imaging Science

    NASA Astrophysics Data System (ADS)

    Lee, Hsien-Che

    2005-04-01

    Color imaging technology has become almost ubiquitous in modern life in the form of monitors, liquid crystal screens, color printers, scanners, and digital cameras. This book is a comprehensive guide to the scientific and engineering principles of color imaging. It covers the physics of light and color, how the eye and physical devices capture color images, how color is measured and calibrated, and how images are processed. It stresses physical principles and includes a wealth of real-world examples. The book will be of value to scientists and engineers in the color imaging industry and, with homework problems, can also be used as a text for graduate courses on color imaging.

  17. Image indexing using color correlograms

    DOEpatents

    Huang, Jing; Kumar, Shanmugasundaram Ravi; Mitra, Mandar; Zhu, Wei-Jing

    2001-01-01

A color correlogram is a three-dimensional table indexed by color and distance between pixels which expresses how the spatial correlation of color changes with distance in a stored image. The color correlogram may be used to distinguish an image from other images in a database. To create a color correlogram, the colors in the image are quantized into m color values, c_1, ..., c_m. Also, the distance values k ∈ [d] to be used in the correlogram are determined, where [d] is the set of distances between pixels in the image, and where d_max is the maximum distance measurement between pixels in the image. Each entry (i, j, k) in the table is the probability of finding a pixel of color c_j at a selected distance k from a pixel of color c_i. A color autocorrelogram, which is a restricted version of the color correlogram that considers color pairs of the form (i, i) only, may also be used to identify an image.
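A small numpy sketch of the autocorrelogram is given below. For brevity it counts pixel pairs only at horizontal and vertical offsets k (an assumption for illustration; the patent's distance measure considers the full pixel neighborhood):

```python
import numpy as np

def color_autocorrelogram(q, m, dists):
    """q: 2D array of quantized color indices in [0, m).
    Returns ac[c, d]: the probability that a pixel at offset dists[d]
    (horizontal or vertical) from a pixel of color c also has color c."""
    ac = np.zeros((m, len(dists)))
    for d_idx, k in enumerate(dists):
        same = np.zeros(m)
        total = np.zeros(m)
        # pixel pairs at horizontal and vertical offset k
        for a, b in ((q[:, :-k], q[:, k:]), (q[:-k, :], q[k:, :])):
            for c in range(m):
                mask = (a == c)
                total[c] += mask.sum()
                same[c] += (mask & (b == c)).sum()
        ac[:, d_idx] = np.where(total > 0, same / np.maximum(total, 1), 0.0)
    return ac
```

Unlike a plain color histogram, the table distinguishes an image with one large red region from one with scattered red pixels, since their same-color probabilities decay differently with distance.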

  18. A coaxially focused multi-mode beam for optical coherence tomography imaging with extended depth of focus (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Yin, Biwei; Liang, Chia-Pin; Vuong, Barry; Tearney, Guillermo J.

    2017-02-01

Conventional OCT images, obtained using a focused Gaussian beam, have a lateral resolution of approximately 30 μm and a depth of focus (DOF) of 2-3 mm, defined as the confocal parameter (twice the Gaussian-beam Rayleigh range). Improving lateral resolution without sacrificing imaging range requires techniques that can extend the DOF. Previously, we described a self-imaging wavefront-division optical system that provided an estimated one-order-of-magnitude DOF extension. In this study, we further investigate the properties of the coaxially focused multi-mode (CAFM) beam created by this self-imaging wavefront-division optical system and demonstrate its feasibility for real-time biological tissue imaging. Gaussian beam and CAFM beam fiber optic probes with similar numerical apertures (objective NA ≈ 0.5) were fabricated, providing lateral resolutions of approximately 2 μm. Rigorous characterization of lateral resolution over depth was performed for both probes. The CAFM beam probe was found to provide a DOF approximately one order of magnitude greater than that of the Gaussian beam probe. By incorporating the CAFM beam fiber optic probe into a μOCT system with 1.5 μm axial resolution, we were able to acquire cross-sectional images of swine small intestine ex vivo, enabling the visualization of subcellular structures and providing high-quality OCT images over more than a 300 μm depth range.
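The quoted DOF is the confocal parameter b = 2 z_R = 2π w0²/λ for a Gaussian beam of waist radius w0. A quick sketch (note that conventions for mapping "lateral resolution" to waist size vary, so the numbers below are illustrative):

```python
import numpy as np

def confocal_parameter(waist_radius, wavelength):
    """Depth of focus as the confocal parameter: twice the Rayleigh range
    z_R = pi * w0^2 / lambda (all lengths in meters)."""
    rayleigh = np.pi * waist_radius ** 2 / wavelength
    return 2.0 * rayleigh
```

For example, a 15 μm waist radius at a 1.3 μm center wavelength gives b ≈ 1.1 mm, the order of the 2-3 mm DOF quoted for ~30 μm lateral resolution; the quadratic dependence on w0 is why shrinking the spot to ~2 μm collapses the Gaussian DOF and motivates the CAFM beam.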

  19. ASTER First Views of Rift Valley, Ethiopia - Thermal-Infrared TIR Image color

    NASA Image and Video Library

    2000-03-11

This image is a color composite covering the Rift Valley inland area of Ethiopia (south of the region shown in PIA02452). The color differences in this image reflect the distribution of rocks with different amounts of silicon dioxide. It is inferred that the whitish areas are covered with basalt and that the pinkish area in the center contains andesite. This is the first spaceborne multi-band TIR image in history that enables geologists to distinguish between rocks with similar compositions. Image size: approximately 60 km x 60 km; ground resolution: approximately 90 m x 90 m. http://photojournal.jpl.nasa.gov/catalog/PIA02453

  20. An ultra-small, multi-point, and multi-color photo-detection system with high sensitivity and high dynamic range.

    PubMed

    Anazawa, Takashi; Yamazaki, Motohiro

    2017-12-05

Although multi-point, multi-color fluorescence-detection systems are widely used in various sciences, they would find wider applications if they were miniaturized. Accordingly, an ultra-small, four-emission-point, four-color fluorescence-detection system was developed. Its size (the space between the emission points and the detection plane) is 15 × 10 × 12 mm, three orders of magnitude smaller than that of a conventional system. Fluorescence from four emission points with an interval of 1 mm on the same plane was collimated by four lenses and split into four color fluxes by four dichroic mirrors. The resulting sixteen parallel color fluxes were then input directly into an image sensor and detected simultaneously. The emission-point plane and the detection plane (the image-sensor surface) were parallel and separated by a distance of only 12 mm. The developed system was applied to four-capillary array electrophoresis and successfully achieved Sanger DNA sequencing. Moreover, compared with a conventional system, the developed system had equivalent high fluorescence-detection sensitivity (a lower detection limit of 17 pM dROX) and a 1.6-orders-of-magnitude higher dynamic range (4.3 orders of magnitude).

  1. Multifocus microscopy with precise color multi-phase diffractive optics applied in functional neuronal imaging.

    PubMed

    Abrahamsson, Sara; Ilic, Rob; Wisniewski, Jan; Mehl, Brian; Yu, Liya; Chen, Lei; Davanco, Marcelo; Oudjedi, Laura; Fiche, Jean-Bernard; Hajj, Bassam; Jin, Xin; Pulupa, Joan; Cho, Christine; Mir, Mustafa; El Beheiry, Mohamed; Darzacq, Xavier; Nollmann, Marcelo; Dahan, Maxime; Wu, Carl; Lionnet, Timothée; Liddle, J Alexander; Bargmann, Cornelia I

    2016-03-01

Multifocus microscopy (MFM) allows high-resolution instantaneous three-dimensional (3D) imaging and has been applied to study biological specimens ranging from single molecules inside cell nuclei to entire embryos. We here describe pattern designs and nanofabrication methods for diffractive optics that optimize the light efficiency of the central optical component of MFM: the diffractive multifocus grating (MFG). We also implement a "precise color" MFM layout with MFGs tailored to individual fluorophores in separate optical arms. The reported advancements enable faster and brighter volumetric time-lapse imaging of biological samples. In live microscopy applications, photon budget is a critical parameter, and light efficiency must be optimized to obtain the fastest possible frame rate while minimizing photodamage. We provide comprehensive descriptions and code for designing diffractive optical devices, and a detailed methods description for nanofabrication of the devices. Theoretical efficiencies of the reported designs are ≈90%, and we have obtained efficiencies of >80% in MFGs of our own manufacture. We demonstrate the performance of a multi-phase MFG in 3D functional neuronal imaging in living C. elegans.

  2. Advanced technology development multi-color holography

    NASA Technical Reports Server (NTRS)

    Vikram, Chandra S.

    1993-01-01

This is the final report of the Multi-color Holography project. The comprehensive study considers some strategic aspects of multi-color holography. First, the available techniques for accurate fringe counting are reviewed: heterodyne interferometry, quasi-heterodyne interferometry, and phase-shifting interferometry. Phase-shifting interferometry was found to be the most suitable for multi-color holography. Details of experimentation with a sugar solution are also reported, in which a measurement capability of better than 1/200 of a fringe order was established; a rotating glass-plate phase shifter was used for the experimentation. The report then describes the possible role of using more than two wavelengths, with special attention to the reference-to-object beam intensity ratio requirements in multicolor holography. Some specific two- and three-color cases are also described in detail. Next, some new analysis methods for the reconstructed wavefront are considered: deflectometry, speckle metrology, confocal optical signal processing, and applications related to the phase-shifting technique. Finally, design aspects of an experimental breadboard are presented.
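The phase-shifting interferometry the report favors is commonly implemented as the four-step algorithm, sketched below (a generic textbook form, not necessarily the report's exact procedure): with frames I_k = A + B·cos(φ + kπ/2), the wrapped phase is φ = atan2(I4 − I2, I1 − I3).

```python
import numpy as np

def four_step_phase(i1, i2, i3, i4):
    """Recover the wrapped interferogram phase from four frames taken
    with 90-degree phase shifts: I_k = A + B*cos(phi + k*pi/2).
    I4 - I2 = 2B*sin(phi) and I1 - I3 = 2B*cos(phi), so the unknown
    background A and modulation B cancel out."""
    return np.arctan2(i4 - i2, i1 - i3)
```

The sub-fringe sensitivity quoted in the report (better than 1/200 of a fringe order) comes from this arctangent being continuous in the intensities rather than counting discrete fringes.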

  3. Multicolor Super-Resolution Fluorescence Imaging via Multi-Parameter Fluorophore Detection

    PubMed Central

    Bates, Mark; Dempsey, Graham T; Chen, Kok Hao; Zhuang, Xiaowei

    2012-01-01

    Understanding the complexity of the cellular environment will benefit from the ability to unambiguously resolve multiple cellular components, simultaneously and with nanometer-scale spatial resolution. Multicolor super-resolution fluorescence microscopy techniques have been developed to achieve this goal, yet challenges remain in terms of the number of targets that can be simultaneously imaged and the crosstalk between color channels. Herein, we demonstrate multicolor stochastic optical reconstruction microscopy (STORM) based on a multi-parameter detection strategy, which uses both the fluorescence activation wavelength and the emission color to discriminate between photo-activatable fluorescent probes. First, we obtained two-color super-resolution images using the near-infrared cyanine dye Alexa 750 in conjunction with a red cyanine dye Alexa 647, and quantified color crosstalk levels and image registration accuracy. Combinatorial pairing of these two switchable dyes with fluorophores which enhance photo-activation enabled multi-parameter detection of six different probes. Using this approach, we obtained six-color super-resolution fluorescence images of a model sample. The combination of multiple fluorescence detection parameters for improved fluorophore discrimination promises to substantially enhance our ability to visualize multiple cellular targets with sub-diffraction-limit resolution. PMID:22213647

  4. Global Temperature Measurement of Supercooled Water under Icing Conditions using Two-Color Luminescent Images and Multi-Band Filter

    NASA Astrophysics Data System (ADS)

    Tanaka, Mio; Morita, Katsuaki; Kimura, Shigeo; Sakaue, Hirotaka

    2012-11-01

Icing occurs when a supercooled-water droplet collides with a surface, and it can be seen in any cold area. Great attention is paid to aircraft icing. To understand the icing process on an aircraft, it is necessary to obtain the temperature of the supercooled water. A conventional technique, such as a thermocouple, is not viable, because the sensor itself becomes a collision surface that accumulates ice. We introduce a dual-luminescent imaging technique to capture the global temperature distribution of supercooled water under icing conditions. It consists of two color luminescent probes and a multi-band filter. One of the probes is sensitive to temperature and the other is independent of it; the latter is used to cancel the temperature-independent luminescence in the temperature-dependent image caused by uneven illumination and camera location. The multi-band filter selects only the luminescent peaks of the probes to enhance the temperature sensitivity of the imaging system. By applying the system, time-resolved temperature information of a supercooled-water droplet is captured.
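The core of the dual-luminescent method, a pixelwise ratio that cancels illumination nonuniformity followed by a calibration from ratio to temperature, can be sketched as follows. The linear probe response and calibration values are illustrative assumptions:

```python
import numpy as np

def temperature_map(i_sensitive, i_reference, calib_ratio, calib_temp):
    """Cancel uneven illumination by the two-image ratio, then convert
    ratio to temperature via a linear fit to calibration points."""
    ratio = i_sensitive / i_reference          # illumination cancels here
    slope, intercept = np.polyfit(calib_ratio, calib_temp, 1)
    return slope * ratio + intercept
```

Because both probes see the same excitation, any spatial pattern in the illumination divides out of the ratio, leaving only the temperature dependence.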

  5. Automated retinal vessel type classification in color fundus images

    NASA Astrophysics Data System (ADS)

    Yu, H.; Barriga, S.; Agurto, C.; Nemeth, S.; Bauman, W.; Soliz, P.

    2013-02-01

Automated retinal vessel type classification is an essential first step toward machine-based quantitative measurement of various vessel topological parameters and toward identifying vessel abnormalities and alterations in cardiovascular disease risk analysis. This paper presents a new and accurate automatic artery and vein classification method developed for arteriolar-to-venular width ratio (AVR) and artery and vein tortuosity measurements in regions of interest (ROI) of 1.5 and 2.5 optic disc diameters from the disc center, respectively. The method includes illumination normalization, automatic optic disc detection and retinal vessel segmentation, feature extraction, and a partial least squares (PLS) classification. Normalized multi-color information, color variation, and multi-scale morphological features are extracted for each vessel segment. We trained the algorithm on a set of 51 color fundus images using manually marked arteries and veins, and tested it on a previously unseen data set of 42 images. We obtained an area under the ROC curve (AUC) of 93.7% in the ROI of the AVR measurement and an AUC of 91.5% in the ROI of the tortuosity measurement. The proposed AV classification method has the potential to assist automatic early detection of cardiovascular disease and risk analysis.

  6. The fabrication of a multi-spectral lens array and its application in assisting color blindness

    NASA Astrophysics Data System (ADS)

    Di, Si; Jin, Jian; Tang, Guanrong; Chen, Xianshuai; Du, Ruxu

    2016-01-01

This article presents a compact multi-spectral lens array and describes its application in assisting people with color blindness. The lens array consists of 9 microlenses, each coated with a different color filter. Thus, it can capture different light bands, including red, orange, yellow, green, cyan, blue, violet, near-infrared, and the entire visible band. First, the fabrication process is described in detail. Second, an imaging system is set up, and a color blindness testing card is selected as the sample. With this system, the views seen by people with normal vision and by people with color blindness can be captured simultaneously. The imaging results suggest that the system could be used to help people with color blindness perceive content normally visible only with full color vision.

  7. A broadband terahertz ultrathin multi-focus lens

    PubMed Central

    He, Jingwen; Ye, Jiasheng; Wang, Xinke; Kan, Qiang; Zhang, Yan

    2016-01-01

Ultrathin transmission metasurface devices are designed on the basis of the Yang-Gu amplitude-phase retrieval algorithm for focusing terahertz (THz) radiation into four or nine spots with a focal spacing of 2 or 3 mm at a frequency of 0.8 THz. The focal properties are experimentally investigated in detail, and the results agree well with the theoretical expectations. The designed THz multi-focus lens (TMFL) demonstrates a good focusing function over a broad frequency range from 0.3 to 1.1 THz. As a transmission-type metasurface device, the TMFL has a diffraction efficiency as high as 33.92% at the design frequency. The imaging function of the TMFL is also demonstrated experimentally, and clear images are obtained. The proposed method produces an ultrathin, low-cost, and broadband multi-focus lens for THz-band applications. PMID:27346430

  8. Color standardization in whole slide imaging using a color calibration slide

    PubMed Central

    Bautista, Pinky A.; Hashimoto, Noriaki; Yagi, Yukako

    2014-01-01

Background: Color consistency in histology images is still an issue in digital pathology. Different imaging systems reproduce the colors of a histological slide differently. Materials and Methods: Color correction was implemented using the color information of the nine color patches of a color calibration slide. The inherent spectral colors of these patches, along with their scanned colors, were used to derive a color correction matrix whose coefficients convert the pixels' colors to their target colors. Results: There was a significant reduction, by 3.42 units (P < 0.001 at the 95% confidence level), in the CIELAB color difference between images of the same H & E histological slide produced by two different whole slide scanners. Conclusion: Color variations in histological images brought about by whole slide scanning can be effectively normalized with the use of the color calibration slide. PMID:24672739
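The described correction can be sketched as a 3×3 matrix fitted by least squares from the nine scanned patch colors to their target colors (a simplification for illustration; the paper derives its matrix from the patches' spectral data):

```python
import numpy as np

def fit_color_matrix(scanned, target):
    """Least-squares 3x3 matrix M such that scanned @ M ~= target.
    Rows are the RGB values of the calibration patches."""
    m, *_ = np.linalg.lstsq(scanned, target, rcond=None)
    return m

def correct(pixels, m):
    """Apply the correction to an (n, 3) array of RGB pixels."""
    return pixels @ m
```

Once M is fitted on the nine patches, the same matrix is applied to every pixel of the whole-slide image, which is what normalizes the scanner-to-scanner color difference.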

  9. Image quality evaluation of medical color and monochrome displays using an imaging colorimeter

    NASA Astrophysics Data System (ADS)

    Roehrig, Hans; Gu, Xiliang; Fan, Jiahua

    2012-10-01

The purpose of this presentation is to demonstrate the means of examining the accuracy of image quality, with respect to MTF (Modulation Transfer Function) and NPS (Noise Power Spectrum), of color and monochrome displays. Indications in the past were that color displays could affect clinical performance negatively compared with monochrome displays; however, reference (1) was not based on measurements made with a colorimeter. Colorimeters such as the PM-1423 are now available that have higher sensitivity and color accuracy than traditional cameras such as CCD cameras. This paper focuses on measurements of the physical characteristics of the spatial resolution and noise performance of color and monochrome medical displays made with a colorimeter; after this meeting we will submit the data to an ROC study, so that we again have a paper to present at a future SPIE conference. Specifically, the MTF and NPS were evaluated and compared at different digital driving levels (DDL) between the two medical displays. Measurements of color image quality were made with an imaging colorimeter. Imaging colorimetry is ideally suited to FPD measurement because imaging systems capture spatial data, generating millions of data points in a single measurement operation. The imaging colorimeter used was the PM-1423 from Radiant Imaging. It uses

  10. Calibration Image of Earth by Mars Color Imager

    NASA Technical Reports Server (NTRS)

    2005-01-01

    Three days after the Mars Reconnaissance Orbiter's Aug. 12, 2005, launch, the NASA spacecraft was pointed toward Earth and the Mars Color Imager camera was powered up to acquire a suite of color and ultraviolet images of Earth and the Moon. When it gets to Mars, the Mars Color Imager's main objective will be to obtain daily global color and ultraviolet images of the planet to observe martian meteorology by documenting the occurrence of dust storms, clouds, and ozone. This camera will also observe how the martian surface changes over time, including changes in frost patterns and surface brightness caused by dust storms and dust devils.

    The purpose of acquiring an image of Earth and the Moon just three days after launch was to help the Mars Color Imager science team obtain a measure, in space, of the instrument's sensitivity, as well as to check that no contamination occurred on the camera during launch. Prior to launch, the team determined that, three days out from Earth, the planet would only be about 4.77 pixels across, and the Moon would be less than one pixel in size, as seen from the Mars Color Imager's wide-angle perspective. If the team waited any longer than three days to test the camera's performance in space, Earth would be too small to obtain meaningful results.

    The images were acquired by turning Mars Reconnaissance Orbiter toward Earth, then slewing the spacecraft so that the Earth and Moon would pass before each of the five color and two ultraviolet filters of the Mars Color Imager. The distance to Earth was about 1,170,000 kilometers (about 727,000 miles).

    This image shows a color composite view of Mars Color Imager's image of Earth. As expected, it covers only five pixels. This color view has been enlarged five times. The Sun was illuminating our planet from the left, thus only one quarter of Earth is seen from this perspective. North America was in daylight and facing toward the camera at the time the picture was taken; the data

  11. Image subregion querying using color correlograms

    DOEpatents

    Huang, Jing; Kumar, Shanmugasundaram Ravi; Mitra, Mandar; Zhu, Wei-Jing

    2002-01-01

    A color correlogram (10) is a representation expressing the spatial correlation of color and distance between pixels in a stored image. The color correlogram (10) may be used to distinguish objects in an image as well as between images in a plurality of images. By intersecting a color correlogram of an image object with correlograms of images to be searched, those images which contain the objects are identified by the intersection correlogram.
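    The auto-correlogram idea can be sketched directly. The patent's definition considers all pixel pairs at a given distance; the sketch below, with illustrative names, restricts to the four axial offsets to stay short.

```python
import numpy as np

def auto_correlogram(img, n_colors, distances):
    """Auto-correlogram of a color-quantized image: for each color c and
    distance d, the probability that a pixel at (axial) distance d from a
    pixel of color c also has color c."""
    h, w = img.shape
    result = np.zeros((n_colors, len(distances)))
    for di, d in enumerate(distances):
        same = np.zeros(n_colors)
        total = np.zeros(n_colors)
        for dy, dx in ((-d, 0), (d, 0), (0, -d), (0, d)):
            for y in range(h):
                for x in range(w):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        c = img[y, x]
                        total[c] += 1            # valid pair anchored at color c
                        same[c] += (img[ny, nx] == c)
        nonzero = total > 0
        result[nonzero, di] = same[nonzero] / total[nonzero]
    return result
```

    On a uniform image every row of the correlogram is 1; on an image split into two color bands, same-color neighbors occur only within each band, so the values drop accordingly.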

  12. Computational efficiency improvements for image colorization

    NASA Astrophysics Data System (ADS)

    Yu, Chao; Sharma, Gaurav; Aly, Hussein

    2013-03-01

    We propose an efficient algorithm for colorization of greyscale images. As in prior work, colorization is posed as an optimization problem: a user specifies the color for a few scribbles drawn on the greyscale image, and the color image is obtained by propagating color information from the scribbles to surrounding regions while maximizing the local smoothness of colors. In this formulation, colorization is obtained by solving a large sparse linear system, which normally requires substantial computation and memory resources. Our algorithm improves the computational performance through three innovations over prior colorization implementations. First, the linear system is solved iteratively without explicitly constructing the sparse matrix, which significantly reduces the required memory. Second, we formulate each iteration in terms of integral images obtained by dynamic programming, reducing repetitive computation. Third, we use a coarse-to-fine framework, where a lower-resolution subsampled image is first colorized and this low-resolution color image is upsampled to initialize the colorization process for the fine level. The improvements we develop provide significant speedup and memory savings compared to the conventional approach of solving the linear system directly using off-the-shelf sparse solvers, and allow us to colorize images with typical sizes encountered in realistic applications on typical commodity computing platforms.
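    The matrix-free idea can be illustrated with a toy solver. This is not the authors' integral-image formulation: it is a plain Jacobi sketch of scribble propagation in which no sparse matrix is ever built, and the function and parameter names are illustrative.

```python
import numpy as np

def propagate_scribbles(gray, scribble, mask, iters=400, sigma2=0.01):
    """Matrix-free scribble propagation for one chrominance channel: each
    non-scribble pixel is repeatedly replaced by an average of its four
    neighbors, weighted by grey-level similarity, without forming the
    sparse system matrix explicitly."""
    u = scribble.astype(float).copy()
    h, w = gray.shape
    for _ in range(iters):
        new = u.copy()
        for y in range(h):
            for x in range(w):
                if mask[y, x]:
                    continue                     # scribble values stay fixed
                num = den = 0.0
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        wgt = np.exp(-(gray[y, x] - gray[ny, nx]) ** 2 / sigma2)
                        num += wgt * u[ny, nx]
                        den += wgt
                new[y, x] = num / den
        u = new
    return u
```

    With a uniform grey image and scribbles of 1.0 and 0.0 at opposite corners, the iteration converges to the discrete harmonic interpolant between the two constraints.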

  13. Color filter array pattern identification using variance of color difference image

    NASA Astrophysics Data System (ADS)

    Shin, Hyun Jun; Jeon, Jong Ju; Eom, Il Kyu

    2017-07-01

    A color filter array is placed on the image sensor of a digital camera to acquire color images. Each pixel records only one color, since the image sensor can measure only one color per pixel. Therefore, the empty pixels are filled using an interpolation process called demosaicing. The original and the interpolated pixels have different statistical characteristics. If the image is modified by manipulation or forgery, the color filter array pattern is altered. This pattern change can be a clue for image forgery detection. However, most forgery detection algorithms have the disadvantage of assuming a known color filter array pattern. We present a method for identifying the color filter array pattern. Initially, the local mean is eliminated to remove the background effect. Subsequently, a color difference block is constructed to emphasize the difference between the original and the interpolated pixels. The variance of the color difference image is proposed as a means of estimating the color filter array configuration. The experimental results show that the proposed method is effective in identifying the color filter array pattern and provides superior performance compared with conventional methods.
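    The core statistical cue, original samples retain more residual variance than interpolated ones after local-mean removal, can be shown in a toy setting. The sketch below is restricted to the green channel's checkerboard lattice and is not the paper's full estimator; names are illustrative.

```python
import numpy as np

def green_lattice_parity(green, ksize=3):
    """Guess which checkerboard lattice holds the original green samples of a
    Bayer CFA. The local mean is removed first; original samples then show a
    higher residual variance than interpolated ones. Returns the (y + x) % 2
    parity of the estimated original lattice."""
    h, w = green.shape
    p = ksize // 2
    resid = np.zeros((h, w))
    for y in range(p, h - p):
        for x in range(p, w - p):
            resid[y, x] = green[y, x] - green[y-p:y+p+1, x-p:x+p+1].mean()
    yy, xx = np.mgrid[0:h, 0:w]
    inner = np.zeros((h, w), dtype=bool)
    inner[p:h-p, p:w-p] = True                   # ignore unprocessed borders
    par = (yy + xx) % 2
    v0 = resid[inner & (par == 0)].var()
    v1 = resid[inner & (par == 1)].var()
    return 0 if v0 > v1 else 1
```

    Simulating a mosaic (random originals on one lattice, bilinear fill on the other) recovers the correct parity, and shifting the image by one column flips the detected parity, mirroring how a forgery that displaces content disturbs the CFA pattern.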

  14. Distance preservation in color image transforms

    NASA Astrophysics Data System (ADS)

    Santini, Simone

    1999-12-01

    Most current image processing systems work on color images, and color is a precious perceptual cue for determining image similarity. Working with color images, however, is not the same thing as working with images taking values in a 3D Euclidean space. Not only are color spaces bounded, but the characteristics of the observer endow the space with a 'perceptual' metric that in general does not correspond to the metric naturally inherited from R3. This paper studies the problem of filtering color images abstractly. It begins by determining the properties of the color sum and color product operations such that the desirable properties of orthonormal bases will be preserved. The paper then defines a general scheme, based on the action of the additive group on the color space, by which operations that satisfy the required properties can be defined.

  15. A fast color image enhancement algorithm based on Max Intensity Channel.

    PubMed

    Sun, Wei; Han, Long; Guo, Baolong; Jia, Wenyan; Sun, Mingui

    2014-03-30

    In this paper, we extend image enhancement techniques based on the retinex theory, imitating human visual perception of scenes containing high illumination variations. This extension achieves simultaneous dynamic range modification, color consistency, and lightness rendition without the multi-scale Gaussian filtering that can produce halo artifacts. The reflection component is analyzed based on the illumination and reflection imaging model. A new prior named Max Intensity Channel (MIC) is introduced, assuming that the reflections of some points in the scene are very high in at least one color channel. Using this prior, the illumination of the scene is obtained directly by performing a gray-scale closing operation and a fast cross-bilateral filtering on the MIC of the input color image. Consequently, the reflection component of each RGB color channel can be determined from the illumination and reflection imaging model. The proposed algorithm estimates an illumination component that is relatively smooth and maintains the edge details in different regions. A satisfactory color rendition is achieved for a class of images that do not satisfy the gray-world assumption implicit in the theoretical foundation of the retinex. Experiments are carried out to compare the new method with several spatial and transform domain methods. Our results indicate that the new method is superior in enhancement applications, improves computation speed, and performs better than other methods for images with high illumination variations. Further comparisons of images from the National Aeronautics and Space Administration and a wearable camera, eButton, have shown the high performance of the new method, with better color restoration and preservation of image details.
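    The MIC-and-closing step can be sketched with plain numpy. This is a simplified stand-in: the paper's cross-bilateral smoothing is omitted, and the structuring-element size and function names are illustrative assumptions.

```python
import numpy as np

def _grey_dilate(a, k=3):
    """Grey-scale dilation with a flat k x k structuring element."""
    p = k // 2
    padded = np.pad(a, p, mode='edge')
    h, w = a.shape
    out = np.full((h, w), -np.inf)
    for dy in range(k):
        for dx in range(k):
            out = np.maximum(out, padded[dy:dy+h, dx:dx+w])
    return out

def mic_illumination(rgb, k=3):
    """Illumination estimate in the spirit of the paper: take the Max
    Intensity Channel, then apply a grey-scale closing (dilation followed
    by erosion); erosion is implemented as -dilate(-x)."""
    mic = rgb.max(axis=2)
    return -_grey_dilate(-_grey_dilate(mic, k), k)

def enhance(rgb, eps=1e-6):
    """Recover the reflection component per RGB channel from the imaging
    model image = illumination * reflection."""
    L = mic_illumination(rgb)
    return np.clip(rgb / (L[..., None] + eps), 0.0, 1.0)
```

    Because closing with a flat structuring element is extensive, the estimated illumination never falls below the MIC itself, which keeps the recovered reflection bounded.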

  16. A fast color image enhancement algorithm based on Max Intensity Channel

    NASA Astrophysics Data System (ADS)

    Sun, Wei; Han, Long; Guo, Baolong; Jia, Wenyan; Sun, Mingui

    2014-03-01

    In this paper, we extend image enhancement techniques based on the retinex theory, imitating human visual perception of scenes containing high illumination variations. This extension achieves simultaneous dynamic range modification, color consistency, and lightness rendition without the multi-scale Gaussian filtering that can produce halo artifacts. The reflection component is analyzed based on the illumination and reflection imaging model. A new prior named Max Intensity Channel (MIC) is introduced, assuming that the reflections of some points in the scene are very high in at least one color channel. Using this prior, the illumination of the scene is obtained directly by performing a gray-scale closing operation and a fast cross-bilateral filtering on the MIC of the input color image. Consequently, the reflection component of each RGB color channel can be determined from the illumination and reflection imaging model. The proposed algorithm estimates an illumination component that is relatively smooth and maintains the edge details in different regions. A satisfactory color rendition is achieved for a class of images that do not satisfy the gray-world assumption implicit in the theoretical foundation of the retinex. Experiments are carried out to compare the new method with several spatial and transform domain methods. Our results indicate that the new method is superior in enhancement applications, improves computation speed, and performs better than other methods for images with high illumination variations. Further comparisons of images from the National Aeronautics and Space Administration and a wearable camera, eButton, have shown the high performance of the new method, with better color restoration and preservation of image details.

  17. Automatic image enhancement based on multi-scale image decomposition

    NASA Astrophysics Data System (ADS)

    Feng, Lu; Wu, Zhuangzhi; Pei, Luo; Long, Xiong

    2014-01-01

    In image processing and computational photography, automatic image enhancement is one of the long-range objectives. Recent automatic image enhancement methods take into account not only global semantics, such as correcting color hue and brightness imbalances, but also the local content of the image, such as human faces and the sky in landscapes. In this paper we describe a new scheme for automatic image enhancement that considers both the global semantics and the local content of the image. Our method employs a multi-scale edge-aware image decomposition approach to detect underexposed regions and enhance the detail of the salient content. The experimental results demonstrate the effectiveness of our approach compared to existing automatic enhancement methods.

  18. Morphological rational multi-scale algorithm for color contrast enhancement

    NASA Astrophysics Data System (ADS)

    Peregrina-Barreto, Hayde; Terol-Villalobos, Iván R.

    2010-01-01

    The main goal of contrast enhancement is to improve the visual appearance of an image, but it is also used to provide a transformed image for segmentation. In mathematical morphology, several works have been derived from the contrast-enhancement framework proposed by Meyer and Serra. However, when working with images with a wide range of scene brightness, for example when strong highlights and deep shadows appear in the same image, the proposed morphological methods do not achieve the enhancement. In this work, a rational multi-scale method is proposed that uses a class of morphological connected filters called filters by reconstruction. Granulometry is used to find the most accurate scales for the filters and to avoid the use of other, less significant scales. The CIE u'v'Y' space was used to present our results, since it takes into account Weber's law and, by avoiding the creation of new colors, permits modifying the luminance values without affecting the hue. The luminance component (Y') is enhanced separately using the proposed method and is then used to enhance the chromatic components (u', v') by means of the center-of-gravity law of color mixing.

  19. Secure information display with limited viewing zone by use of multi-color visual cryptography.

    PubMed

    Yamamoto, Hirotsugu; Hayasaki, Yoshio; Nishida, Nobuo

    2004-04-05

    We propose a display technique that ensures security of visual information by use of visual cryptography. A displayed image appears as a completely random pattern unless viewed through a decoding mask. The display has a limited viewing zone with the decoding mask. We have developed a multi-color encryption code set. Eight colors are represented in combinations of a displayed image composed of red, green, blue, and black subpixels and a decoding mask composed of transparent and opaque subpixels. Furthermore, we have demonstrated secure information display by use of an LCD panel.
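    The encryption idea can be illustrated with the classic monochrome 2-out-of-2 scheme, a simplified cousin of the paper's eight-color code set (which uses RGB and black subpixels). The sketch below, with illustrative names, uses 1x2 subpixel pairs.

```python
import numpy as np

def make_shares(secret, rng):
    """Classic 2-out-of-2 visual cryptography on a binary secret (1 = black).
    Each secret pixel expands to a 1x2 subpixel pair per share (1 = opaque);
    stacking the two masks ORs the opaque subpixels: white pixels stay half
    dark, black pixels go fully dark, and either share alone looks random."""
    h, w = secret.shape
    s1 = np.zeros((h, 2 * w), dtype=int)
    s2 = np.zeros((h, 2 * w), dtype=int)
    for y in range(h):
        for x in range(w):
            a = ([1, 0], [0, 1])[rng.integers(0, 2)]   # random subpixel pattern
            s1[y, 2*x:2*x+2] = a
            # white: identical pattern in both shares; black: complementary
            s2[y, 2*x:2*x+2] = a if secret[y, x] == 0 else [1 - a[0], 1 - a[1]]
    return s1, s2

def stack(s1, s2):
    return np.maximum(s1, s2)   # overlaying transparencies: opaque wins
```

    Each pair in a single share always contains exactly one opaque subpixel, so a share alone carries no information; only the stacked pair count (1 for white, 2 for black) reveals the secret, which is why the display appears random without the decoding mask.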

  20. Spatial arrangement of color filter array for multispectral image acquisition

    NASA Astrophysics Data System (ADS)

    Shrestha, Raju; Hardeberg, Jon Y.; Khan, Rahat

    2011-03-01

    In the past few years there has been a significant volume of research work carried out in the field of multispectral image acquisition. Most of it has focused on facilitating multispectral image acquisition systems that usually require multiple subsequent shots (e.g. systems based on filter wheels, liquid crystal tunable filters, or active lighting). Recently, an alternative approach for one-shot multispectral image acquisition has been proposed, based on an extension of the color filter array (CFA) standard to produce more than three channels; we can thus introduce the concept of the multispectral color filter array (MCFA). But this field has not been much explored; in particular, little attention has been given to developing systems that focus on the reconstruction of scene spectral reflectance. In this paper, we have explored how the spatial arrangement of the multispectral color filter array affects the acquisition accuracy, constructing MCFAs of different sizes. We have simulated acquisitions of several spectral scenes using different numbers of filters/channels, and compared the results with those obtained by the conventional regular MCFA arrangement, evaluating the precision of the reconstructed scene spectral reflectance in terms of spectral RMS error and colorimetric ΔE*ab color differences. It has been found that the precision and the quality of the reconstructed images are significantly influenced by the spatial arrangement of the MCFA, and the effect becomes more prominent as the number of channels increases. We believe that MCFA-based systems can be a viable alternative for affordable acquisition of multispectral color images, in particular for applications where spatial resolution can be traded off for spectral resolution. We have shown that the spatial arrangement of the array is an important design issue.
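    The spectral-recovery part of such a simulation can be sketched per pixel, ignoring the spatial arrangement entirely. This is an assumption-laden toy: the pseudo-inverse estimator stands in for whatever reconstruction the authors used (Wiener or regression estimators are common), and all names are illustrative.

```python
import numpy as np

def spectral_recovery(filters, reflectances):
    """Per-pixel spectral reflectance recovery for a simulated multispectral
    camera: responses = reflectance @ filters.T, reconstructed with the
    pseudo-inverse. Returns the reconstruction and per-pixel spectral RMS."""
    F = np.asarray(filters, dtype=float)        # k channels x m wavelengths
    R = np.asarray(reflectances, dtype=float)   # n pixels  x m wavelengths
    responses = R @ F.T                         # simulated camera values
    R_hat = responses @ np.linalg.pinv(F).T     # project back to spectra
    rms = np.sqrt(((R - R_hat) ** 2).mean(axis=1))
    return R_hat, rms
```

    With nested filter sets (the smaller set being the first rows of the larger one), the recovery error can only shrink as channels are added, which is the dimension-counting intuition behind trading spatial for spectral resolution.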

  1. Spectrally-encoded color imaging

    PubMed Central

    Kang, DongKyun; Yelin, Dvir; Bouma, Brett E.; Tearney, Guillermo J.

    2010-01-01

    Spectrally-encoded endoscopy (SEE) is a technique for ultraminiature endoscopy that encodes each spatial location on the sample with a different wavelength. One limitation of previous incarnations of SEE is that it inherently creates monochromatic images, since the spectral bandwidth is expended in the spatial encoding process. Here we present a spectrally-encoded imaging system that has color imaging capability. The new imaging system utilizes three distinct red, green, and blue spectral bands that are configured to illuminate the grating at different incident angles. By careful selection of the incident angles, the three spectral bands can be made to overlap on the sample. To demonstrate the method, a bench-top system was built, comprising a 2400-lpmm grating illuminated by three 525-μm-diameter beams with three different spectral bands. Each spectral band had a bandwidth of 75 nm, producing 189 resolvable points. A resolution target, color phantoms, and excised swine small intestine were imaged to validate the system's performance. The color SEE system showed qualitatively and quantitatively similar color imaging performance to that of a conventional digital camera. PMID:19688002

  2. Spatio-spectral color filter array design for optimal image recovery.

    PubMed

    Hirakawa, Keigo; Wolfe, Patrick J

    2008-10-01

    In digital imaging applications, data are typically obtained via a spatial subsampling procedure implemented as a color filter array-a physical construction whereby only a single color value is measured at each pixel location. Owing to the growing ubiquity of color imaging and display devices, much recent work has focused on the implications of such arrays for subsequent digital processing, including in particular the canonical demosaicking task of reconstructing a full color image from spatially subsampled and incomplete color data acquired under a particular choice of array pattern. In contrast to the majority of the demosaicking literature, we consider here the problem of color filter array design and its implications for spatial reconstruction quality. We pose this problem formally as one of simultaneously maximizing the spectral radii of luminance and chrominance channels subject to perfect reconstruction, and-after proving sub-optimality of a wide class of existing array patterns-provide a constructive method for its solution that yields robust, new panchromatic designs implementable as subtractive colors. Empirical evaluations on multiple color image test sets support our theoretical results, and indicate the potential of these patterns to increase spatial resolution for fixed sensor size, and to contribute to improved reconstruction fidelity as well as significantly reduced hardware complexity.

  3. Full-color, large area, transmissive holograms enabled by multi-level diffractive optics.

    PubMed

    Mohammad, Nabil; Meem, Monjurul; Wan, Xiaowen; Menon, Rajesh

    2017-07-19

    We show that multi-level diffractive microstructures can enable broadband, on-axis transmissive holograms that can project complex full-color images, which are invariant to viewing angle. Compared to alternatives like metaholograms, diffractive holograms utilize much larger minimum features (>10 µm), much smaller aspect ratios (<0.2) and thereby, can be fabricated in a single lithography step over relatively large areas (>30 mm × 30 mm). We designed, fabricated and characterized holograms that encode various full-color images. Our devices demonstrate absolute transmission efficiencies of >86% across the visible spectrum from 405 nm to 633 nm (peak value of about 92%), and excellent color fidelity. Furthermore, these devices do not exhibit polarization dependence. Finally, we emphasize that our devices exhibit negligible absorption and are phase-only holograms with high diffraction efficiency.

  4. Retrieving clinically relevant diabetic retinopathy images using a multi-class multiple-instance framework

    NASA Astrophysics Data System (ADS)

    Chandakkar, Parag S.; Venkatesan, Ragav; Li, Baoxin

    2013-02-01

    Diabetic retinopathy (DR) is a vision-threatening complication from diabetes mellitus, a medical condition that is rising globally. Unfortunately, many patients are unaware of this complication because of absence of symptoms. Regular screening of DR is necessary to detect the condition for timely treatment. Content-based image retrieval, using archived and diagnosed fundus (retinal) camera DR images, can improve screening efficiency of DR. This content-based image retrieval study focuses on two DR clinical findings, microaneurysm and neovascularization, which are clinical signs of non-proliferative and proliferative diabetic retinopathy. The authors propose a multi-class multiple-instance image retrieval framework which deploys a modified color correlogram and statistics of steerable Gaussian filter responses for retrieving clinically relevant images from a DR fundus image database.

  5. Color Imaging management in film processing

    NASA Astrophysics Data System (ADS)

    Tremeau, Alain; Konik, Hubert; Colantoni, Philippe

    2003-12-01

    The latest research projects in the LIGIV laboratory concern the capture, processing, archiving, and display of color images, considering the trichromatic nature of the Human Visual System (HVS). One of these projects addresses digital cinematographic film sequences of high resolution and dynamic range. This project aims to optimize the use of content for post-production operators and for the end user. The studies presented in this paper address the use of metadata to optimize the consumption of video content on a device of the user's choice, independent of the nature of the equipment that captured the content. Optimizing consumption includes enhancing the quality of image reconstruction on a display. Another part of this project addresses the content-based adaptation of image display. The main focus is on Region of Interest (ROI) operations, based on the ROI concepts of MPEG-7. The aim of this second part is to characterize and ensure the conditions of display even if the display device or display medium changes. This requires, first, the definition of a reference color space and of bi-directional color transformations for each peripheral device (camera, display, film recorder, etc.). The complicating factor is that different devices have different color gamuts, depending on the chromaticity of their primaries and the ambient illumination under which they are viewed. To match the displayed image to the intended appearance, all kinds of production metadata (camera specification, camera color primaries, lighting conditions) should be associated with the film material. Metadata and content together build rich content. The author is assumed to specify conditions as known from the digital graphic arts. To control image pre-processing and post-processing, these specifications should be contained in the film's metadata. The specifications are related to ICC profiles but must additionally consider mesopic viewing conditions.

  6. Remote focusing for programmable multi-layer differential multiphoton microscopy

    PubMed Central

    Hoover, Erich E.; Young, Michael D.; Chandler, Eric V.; Luo, Anding; Field, Jeffrey J.; Sheetz, Kraig E.; Sylvester, Anne W.; Squier, Jeff A.

    2010-01-01

    We present the application of remote focusing to multiphoton laser scanning microscopy and utilize this technology to demonstrate simultaneous, programmable multi-layer imaging. Remote focusing is used to independently control the axial location of multiple focal planes that can be simultaneously imaged with single element detection. This facilitates volumetric multiphoton imaging in scattering specimens and can be practically scaled to a large number of focal planes. Further, it is demonstrated that the remote focusing control can be synchronized with the lateral scan directions, enabling imaging in orthogonal scan planes. PMID:21326641

  7. A fast color image enhancement algorithm based on Max Intensity Channel

    PubMed Central

    Sun, Wei; Han, Long; Guo, Baolong; Jia, Wenyan; Sun, Mingui

    2014-01-01

    In this paper, we extend image enhancement techniques based on the retinex theory, imitating human visual perception of scenes containing high illumination variations. This extension achieves simultaneous dynamic range modification, color consistency, and lightness rendition without the multi-scale Gaussian filtering that can produce halo artifacts. The reflection component is analyzed based on the illumination and reflection imaging model. A new prior named Max Intensity Channel (MIC) is introduced, assuming that the reflections of some points in the scene are very high in at least one color channel. Using this prior, the illumination of the scene is obtained directly by performing a gray-scale closing operation and a fast cross-bilateral filtering on the MIC of the input color image. Consequently, the reflection component of each RGB color channel can be determined from the illumination and reflection imaging model. The proposed algorithm estimates an illumination component that is relatively smooth and maintains the edge details in different regions. A satisfactory color rendition is achieved for a class of images that do not satisfy the gray-world assumption implicit in the theoretical foundation of the retinex. Experiments are carried out to compare the new method with several spatial and transform domain methods. Our results indicate that the new method is superior in enhancement applications, improves computation speed, and performs better than other methods for images with high illumination variations. Further comparisons of images from the National Aeronautics and Space Administration and a wearable camera, eButton, have shown the high performance of the new method, with better color restoration and preservation of image details. PMID:25110395

  8. Focus measure method based on the modulus of the gradient of the color planes for digital microscopy

    NASA Astrophysics Data System (ADS)

    Hurtado-Pérez, Román; Toxqui-Quitl, Carina; Padilla-Vivanco, Alfonso; Aguilar-Valdez, J. Félix; Ortega-Mendoza, Gabriel

    2018-02-01

    The modulus of the gradient of the color planes (MGC) is implemented to transform multichannel information to a grayscale image. This digital technique is used in two applications: (a) focus measurements during autofocusing (AF) process and (b) extending the depth of field (EDoF) by means of multifocus image fusion. In the first case, the MGC procedure is based on an edge detection technique and is implemented in over 15 focus metrics that are typically handled in digital microscopy. The MGC approach is tested on color images of histological sections for the selection of in-focus images. An appealing attribute of all the AF metrics working in the MGC space is their monotonic behavior even up to a magnification of 100×. An advantage of the MGC method is its computational simplicity and inherent parallelism. In the second application, a multifocus image fusion algorithm based on the MGC approach has been implemented on graphics processing units (GPUs). The resulting fused images are evaluated using a nonreference image quality metric. The proposed fusion method reveals a high-quality image independently of faulty illumination during the image acquisition. Finally, the three-dimensional visualization of the in-focus image is shown.
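    The multichannel-to-grayscale step and its use as a focus metric can be sketched briefly. The exact channel combination in the paper may differ; the summed squared forward differences used here, and the helper names, are illustrative assumptions.

```python
import numpy as np

def mgc(rgb):
    """Modulus of the gradient of the color planes: fuse the per-channel
    forward-difference gradients into a single grayscale edge map."""
    gx = np.diff(rgb, axis=1, append=rgb[:, -1:, :])
    gy = np.diff(rgb, axis=0, append=rgb[-1:, :, :])
    return np.sqrt((gx ** 2 + gy ** 2).sum(axis=2))

def focus_measure(rgb):
    """Sharper focus -> stronger gradients -> larger value, the monotonic
    behavior an autofocus search needs."""
    return mgc(rgb).sum()

def box_blur(rgb, k=3):
    """Simple k x k box blur standing in for defocus."""
    p = k // 2
    padded = np.pad(rgb, ((p, p), (p, p), (0, 0)), mode='edge')
    h, w = rgb.shape[:2]
    out = np.zeros_like(rgb, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy+h, dx:dx+w]
    return out / (k * k)
```

    An autofocus loop would evaluate `focus_measure` over a focal stack and pick the maximum; in multifocus fusion, the per-pixel MGC values instead decide which source image contributes each region.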

  9. Image Transform Based on the Distribution of Representative Colors for Color Deficient

    NASA Astrophysics Data System (ADS)

    Ohata, Fukashi; Kudo, Hiroaki; Matsumoto, Tetsuya; Takeuchi, Yoshinori; Ohnishi, Noboru

    This paper proposes a method to convert digital images containing sets of colors that are difficult to distinguish into images with high visibility. We set up four criteria: automatic processing by a computer; retaining continuity in color space; not lowering the visibility of images for people with normal color vision; and not lowering the visibility of images that do not originally contain hard-to-distinguish color sets. We conducted a psychological experiment and obtained the result that the visibility of the converted images was improved at 60% for 40 images, and we confirmed that the main criterion, continuity in color space, was maintained.

  10. Rational chemical design of the next generation of molecular imaging probes based on physics and biology: mixing modalities, colors and signals

    PubMed Central

    Longmire, Michelle R.; Ogawa, Mikako; Choyke, Peter L.

    2012-01-01

    In recent years, numerous in vivo molecular imaging probes have been developed. As a consequence, much has been published on the design and synthesis of molecular imaging probes focusing on each modality, each type of material, or each target disease. More recently, second generation molecular imaging probes with unique, multi-functional, or multiplexed characteristics have been designed. This critical review focuses on (i) molecular imaging using combinations of modalities and signals that employ the full range of the electromagnetic spectra, (ii) optimized chemical design of molecular imaging probes for in vivo kinetics based on biology and physiology across a range of physical sizes, (iii) practical examples of second generation molecular imaging probes designed to extract complementary data from targets using multiple modalities, color, and comprehensive signals (277 references). PMID:21607237

  11. Retinex Preprocessing for Improved Multi-Spectral Image Classification

    NASA Technical Reports Server (NTRS)

    Thompson, B.; Rahman, Z.; Park, S.

    2000-01-01

    The goal of multi-image classification is to identify and label "similar regions" within a scene. The ability to correctly classify a remotely sensed multi-image of a scene is affected by the ability of the classification process to adequately compensate for the effects of atmospheric variations and sensor anomalies. Better classification may be obtained if the multi-image is preprocessed before classification, so as to reduce the adverse effects of image formation. In this paper, we discuss the overall impact on multi-spectral image classification when the retinex image enhancement algorithm is used to preprocess multi-spectral images. The retinex is a multi-purpose image enhancement algorithm that performs dynamic range compression, reduces the dependence on lighting conditions, and generally enhances apparent spatial resolution. The retinex has been successfully applied to the enhancement of many different types of grayscale and color images. We show in this paper that retinex preprocessing improves the spatial structure of multi-spectral images and thus provides better within-class variations than would otherwise be obtained without the preprocessing. For a series of multi-spectral images obtained with diffuse and direct lighting, we show that without retinex preprocessing the class spectral signatures vary substantially with the lighting conditions. Whereas multi-dimensional clustering without preprocessing produced one-class homogeneous regions, the classification on the preprocessed images produced multi-class non-homogeneous regions. This lack of homogeneity is explained by the interaction between different agronomic treatments applied to the regions: the preprocessed images are closer to ground truth. The principal advantage that the retinex offers is that for different lighting conditions classifications derived from the retinex preprocessed images look remarkably "similar", and thus more consistent, whereas classifications derived from the original

  12. The TRICLOBS Dynamic Multi-Band Image Data Set for the Development and Evaluation of Image Fusion Methods

    PubMed Central

    Hogervorst, Maarten A.; Pinkus, Alan R.

    2016-01-01

    The fusion and enhancement of multiband nighttime imagery for surveillance and navigation has been the subject of extensive research for over two decades. Despite the ongoing efforts in this area there is still only a small number of static multiband test images available for the development and evaluation of new image fusion and enhancement methods. Moreover, dynamic multiband imagery is also currently lacking. To fill this gap we present the TRICLOBS dynamic multi-band image data set containing sixteen registered visual (0.4–0.7μm), near-infrared (NIR, 0.7–1.0μm) and long-wave infrared (LWIR, 8–14μm) motion sequences. They represent different military and civilian surveillance scenarios registered in three different scenes. Scenes include (military and civilian) people that are stationary, walking or running, or carrying various objects. Vehicles, foliage, and buildings or other man-made structures are also included in the scenes. This data set is primarily intended for the development and evaluation of image fusion, enhancement and color mapping algorithms for short-range surveillance applications. The imagery was collected during several field trials with our newly developed TRICLOBS (TRI-band Color Low-light OBServation) all-day all-weather surveillance system. This system registers a scene in the Visual, NIR and LWIR part of the electromagnetic spectrum using three optically aligned sensors (two digital image intensifiers and an uncooled long-wave infrared microbolometer). The three sensor signals are mapped to three individual RGB color channels, digitized, and stored as uncompressed RGB (false) color frames. The TRICLOBS data set enables the development and evaluation of (both static and dynamic) image fusion, enhancement and color mapping algorithms. To allow the development of realistic color remapping procedures, the data set also contains color photographs of each of the three scenes. The color statistics derived from these photographs can be

  13. The TRICLOBS Dynamic Multi-Band Image Data Set for the Development and Evaluation of Image Fusion Methods.

    PubMed

    Toet, Alexander; Hogervorst, Maarten A; Pinkus, Alan R

    2016-01-01

    The fusion and enhancement of multiband nighttime imagery for surveillance and navigation has been the subject of extensive research for over two decades. Despite the ongoing efforts in this area there is still only a small number of static multiband test images available for the development and evaluation of new image fusion and enhancement methods. Moreover, dynamic multiband imagery is also currently lacking. To fill this gap we present the TRICLOBS dynamic multi-band image data set containing sixteen registered visual (0.4-0.7μm), near-infrared (NIR, 0.7-1.0μm) and long-wave infrared (LWIR, 8-14μm) motion sequences. They represent different military and civilian surveillance scenarios registered in three different scenes. Scenes include (military and civilian) people that are stationary, walking or running, or carrying various objects. Vehicles, foliage, and buildings or other man-made structures are also included in the scenes. This data set is primarily intended for the development and evaluation of image fusion, enhancement and color mapping algorithms for short-range surveillance applications. The imagery was collected during several field trials with our newly developed TRICLOBS (TRI-band Color Low-light OBServation) all-day all-weather surveillance system. This system registers a scene in the Visual, NIR and LWIR part of the electromagnetic spectrum using three optically aligned sensors (two digital image intensifiers and an uncooled long-wave infrared microbolometer). The three sensor signals are mapped to three individual RGB color channels, digitized, and stored as uncompressed RGB (false) color frames. The TRICLOBS data set enables the development and evaluation of (both static and dynamic) image fusion, enhancement and color mapping algorithms. To allow the development of realistic color remapping procedures, the data set also contains color photographs of each of the three scenes. The color statistics derived from these photographs can be used to

  14. Colorizing SENTINEL-1 SAR Images Using a Variational Autoencoder Conditioned on SENTINEL-2 Imagery

    NASA Astrophysics Data System (ADS)

    Schmitt, M.; Hughes, L. H.; Körner, M.; Zhu, X. X.

    2018-05-01

    In this paper, we have shown an approach for the automatic colorization of SAR backscatter images, which are usually provided in the form of single-channel gray-scale imagery. Using a deep generative model proposed for the purpose of photograph colorization and a Lab-space-based SAR-optical image fusion formulation, we are able to predict artificial color SAR images, which disclose much more information to the human interpreter than the original SAR data. Future work will aim at further adaptation of the employed procedure to our special case of multi-sensor remote sensing imagery. Furthermore, we will investigate if the low-level representations learned intrinsically by the deep network can be used for SAR image interpretation in an end-to-end manner.

  15. Multi-color phase imaging and sickle cell anemia (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Hosseini, Poorya; Zhou, Renjie; Yaqoob, Zahid; So, Peter T. C.

    2016-03-01

    Quantitative phase measurements at multiple wavelengths have created an opportunity for exploring new avenues in phase microscopy such as enhancing imaging-depth (1), measuring hemoglobin concentrations in erythrocytes (2), and more recently in tomographic mapping of the refractive index of live cells (3). To this end, quantitative phase imaging has been demonstrated both at a few selected spectral points as well as with high spectral resolution (4,5). However, most of these developed techniques compromise imaging speed, field of view, or the spectral resolution to perform interferometric measurements at multiple colors. In the specific application of quantitative phase in studying blood diseases and red blood cells, current techniques lack the required sensitivity to quantify biological properties of interest at the individual cell level. Recently, we have set out to develop a stable quantitative interferometric microscope allowing for measurements of such properties for red cells without compromising field of view or speed of the measurements. The feasibility of the approach will be initially demonstrated in measuring dispersion curves of known solutions, followed by measuring biological properties of red cells in sickle cell anemia. References: 1. Mann CJ, Bingham PR, Paquit VC, Tobin KW. Quantitative phase imaging by three-wavelength digital holography. Opt Express. 2008;16(13):9753-64. 2. Park Y, Yamauchi T, Choi W, Dasari R, Feld MS. Spectroscopic phase microscopy for quantifying hemoglobin concentrations in intact red blood cells. Opt Lett. 2009;34(23):3668-70. 3. Hosseini P, Sung Y, Choi Y, Lue N, Yaqoob Z, So P. Scanning color optical tomography (SCOT). Opt Express. 2015;23(15):19752-62. 4. Jung J-H, Jang J, Park Y. Spectro-refractometry of individual microscopic objects using swept-source quantitative phase imaging. Anal Chem. 2013;85(21):10519-25. 5. Rinehart M, Zhu Y, Wax A. Quantitative phase spectroscopy. Biomed Opt Express. 2012;3(5):958-65.

  16. Exogenous Molecular Probes for Targeted Imaging in Cancer: Focus on Multi-modal Imaging

    PubMed Central

    Joshi, Bishnu P.; Wang, Thomas D.

    2010-01-01

    Cancer is one of the major causes of mortality and morbidity in our healthcare system. Molecular imaging is an emerging methodology for the early detection of cancer, guidance of therapy, and monitoring of response. The development of new instruments and exogenous molecular probes that can be labeled for multi-modality imaging is critical to this process. Today, molecular imaging is at a crossroad, and new targeted imaging agents are expected to broadly expand our ability to detect and manage cancer. This integrated imaging strategy will permit clinicians to not only localize lesions within the body but also to manage their therapy by visualizing the expression and activity of specific molecules. This information is expected to have a major impact on drug development and understanding of basic cancer biology. At this time, a number of molecular probes have been developed by conjugating various labels to affinity ligands for targeting in different imaging modalities. This review will describe the current status of exogenous molecular probes for optical, scintigraphic, MRI and ultrasound imaging platforms. Furthermore, we will also shed light on how these techniques can be used synergistically in multi-modal platforms and how these techniques are being employed in current research. PMID:22180839

  17. Color transfer between high-dynamic-range images

    NASA Astrophysics Data System (ADS)

    Hristova, Hristina; Cozot, Rémi; Le Meur, Olivier; Bouatouch, Kadi

    2015-09-01

    Color transfer methods alter the look of a source image with regards to a reference image. So far, the proposed color transfer methods have been limited to low-dynamic-range (LDR) images. Unlike LDR images, which are display-dependent, high-dynamic-range (HDR) images contain real physical values of the world luminance and are able to capture high luminance variations and finest details of real world scenes. Therefore, there exists a strong discrepancy between the two types of images. In this paper, we bridge the gap between the color transfer domain and the HDR imagery by introducing HDR extensions to LDR color transfer methods. We tackle the main issues of applying a color transfer between two HDR images. First, to address the nature of light and color distributions in the context of HDR imagery, we carry out modifications of traditional color spaces. Furthermore, we ensure high precision in the quantization of the dynamic range for histogram computations. As image clustering (based on light and colors) proved to be an important aspect of color transfer, we analyze it and adapt it to the HDR domain. Our framework has been applied to several state-of-the-art color transfer methods. Qualitative experiments have shown that results obtained with the proposed adaptation approach exhibit less artifacts and are visually more pleasing than results obtained when straightforwardly applying existing color transfer methods to HDR images.

  18. Stereo Imaging Miniature Endoscope with Single Imaging Chip and Conjugated Multi-Bandpass Filters

    NASA Technical Reports Server (NTRS)

    Shahinian, Hrayr Karnig (Inventor); Bae, Youngsam (Inventor); White, Victor E. (Inventor); Shcheglov, Kirill V. (Inventor); Manohara, Harish M. (Inventor); Kowalczyk, Robert S. (Inventor)

    2018-01-01

    A dual objective endoscope for insertion into a cavity of a body for providing a stereoscopic image of a region of interest inside of the body including an imaging device at the distal end for obtaining optical images of the region of interest (ROI), and processing the optical images for forming video signals for wired and/or wireless transmission and display of 3D images on a rendering device. The imaging device includes a focal plane detector array (FPA) for obtaining the optical images of the ROI, and processing circuits behind the FPA. The processing circuits convert the optical images into the video signals. The imaging device includes right and left pupils for receiving right and left images through right and left conjugated multi-bandpass filters. Illuminators illuminate the ROI through a multi-bandpass filter having three right and three left pass bands that are matched to the right and left conjugated multi-bandpass filters. A full color image is collected after three or six sequential illuminations with red, green, and blue light.

  19. Compression of color-mapped images

    NASA Technical Reports Server (NTRS)

    Hadenfeldt, A. C.; Sayood, Khalid

    1992-01-01

    In a standard image coding scenario, pixel-to-pixel correlation nearly always exists in the data, especially if the image is a natural scene. This correlation is what allows predictive coding schemes (e.g., DPCM) to perform efficient compression. In a color-mapped image, the values stored in the pixel array are no longer directly related to the pixel intensity. Two color indices which are numerically adjacent (close) may point to two very different colors. The correlation still exists, but only via the colormap. This fact can be exploited by sorting the color map to reintroduce the structure. The sorting of colormaps is studied and it is shown how the resulting structure can be used in both lossless and lossy compression of images.
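    The colormap-sorting idea can be sketched in a few lines: reorder the palette by luminance so that numerically adjacent indices point to perceptually similar colors, making index-domain prediction meaningful again. The luminance sort key below is one plausible ordering criterion, not necessarily the one studied in the paper:

```python
import numpy as np

def sort_colormap(palette, indexed):
    """Reorder a palette by luminance and remap the index image accordingly.
    palette: (N, 3) RGB entries; indexed: (H, W) array of palette indices."""
    luma = palette @ np.array([0.299, 0.587, 0.114])  # Rec. 601 luminance
    order = np.argsort(luma)                   # new position -> old index
    inverse = np.empty_like(order)
    inverse[order] = np.arange(len(order))     # old index -> new position
    return palette[order], inverse[indexed]
```

    The remapped image decodes to exactly the same RGB pixels, but neighboring index values now differ little in luminance, which is the structure a predictive coder such as DPCM can exploit.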

  20. Color Image of Pluto

    NASA Image and Video Library

    2015-12-31

    Pluto nearly fills the frame in this image from the Long Range Reconnaissance Imager (LORRI) aboard New Horizons, taken on July 13, 2015, when the spacecraft was 476,000 miles (768,000 kilometers) from the surface. This is the last and most detailed image sent to Earth before the spacecraft's closest approach to Pluto on July 14. The color image has been combined with lower-resolution color information from the Ralph instrument that was acquired earlier on July 13. http://photojournal.jpl.nasa.gov/catalog/PIA20291

  1. Frequency division multiplexed multi-color fluorescence microscope system

    NASA Astrophysics Data System (ADS)

    Le, Vu Nam; Yang, Huai Dong; Zhang, Si Chun; Zhang, Xin Rong; Jin, Guo Fan

    2017-10-01

    A grayscale camera can only record a gray-scale image of an object, while multicolor imaging can additionally capture the color information needed to distinguish sample structures that have the same shape but different colors. In fluorescence microscopy, current methods of multicolor imaging are flawed: they reduce the efficiency of fluorescence imaging, lower the effective sampling rate of the CCD, etc. In this paper, we propose a novel multicolor fluorescence microscopy imaging method based on frequency division multiplexing (FDM), which modulates the excitation lights and demodulates the fluorescence signal in the frequency domain. The method uses periodic functions with different frequencies to modulate the amplitude of each excitation light, and then combines these beams for illumination in a fluorescence microscopy imaging system. The imaging system detects a multicolor fluorescence image with a grayscale camera. During data processing, the signal obtained by each pixel of the camera is processed with a discrete Fourier transform, decomposed by color in the frequency domain, and then reconstructed with an inverse discrete Fourier transform. After applying this process to the signals from all of the pixels, monochrome images of each color on the image plane are obtained and the multicolor image is thereby acquired. Based on this method, we constructed a two-color fluorescence microscope system with two excitation wavelengths of 488 nm and 639 nm. Using this system to observe the linear movement of two kinds of fluorescent microspheres, we obtained, after data processing, a two-color fluorescence dynamic video consistent with the original image. This experiment shows that dynamic phenomena in multicolor fluorescent biological samples can be observed by this method. Compared with current methods, this method can obtain the image signals of each color at the same time, and the color video's frame
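    The frequency-division scheme can be sketched per pixel: each excitation light is amplitude-modulated at its own carrier frequency, so the color components land in distinct DFT bins of the pixel's time signal. The carrier frequencies, sampling rate, and cosine modulation model below are illustrative assumptions, not the authors' parameters:

```python
import numpy as np

def demodulate(signal, freqs, fs):
    """Recover per-color amplitudes from one pixel's time series.
    Assumes color i contributes a_i * 0.5 * (1 + cos(2*pi*f_i*t)),
    with each f_i an integer number of cycles over the record."""
    n = len(signal)
    spectrum = np.fft.rfft(signal)
    # A cosine term of amplitude a/2 appears with magnitude a*n/4 in the rfft.
    return [4.0 * abs(spectrum[int(round(f * n / fs))]) / n for f in freqs]
```

    Because the carriers occupy separate bins, the amplitudes of all colors are recovered from a single grayscale recording, which is the simultaneity advantage claimed over sequential-filter methods.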

  2. Color image guided depth image super resolution using fusion filter

    NASA Astrophysics Data System (ADS)

    He, Jin; Liang, Bin; He, Ying; Yang, Jun

    2018-04-01

    Depth cameras are currently playing an important role in many areas. However, most of them can only obtain low-resolution (LR) depth images. Color cameras can easily provide high-resolution (HR) color images. Using a color image as a guide image is an efficient way to get a HR depth image. In this paper, we propose a depth image super resolution (SR) algorithm, which uses a HR color image as a guide image and a LR depth image as input. We use the fusion filter of guided filter and edge based joint bilateral filter to get the HR depth image. Our experimental results on the Middlebury 2005 datasets show that our method can provide better quality in HR depth images both numerically and visually.

  3. Multiplexing T- and B-Cell FLUOROSPOT Assays: Experimental Validation of the Multi-Color ImmunoSpot® Software Based on Center of Mass Distance Algorithm.

    PubMed

    Karulin, Alexey Y; Megyesi, Zoltán; Caspell, Richard; Hanson, Jodi; Lehmann, Paul V

    2018-01-01

    Over the past decade, ELISPOT has become a highly implemented mainstream assay in immunological research, immune monitoring, and vaccine development. Unique single cell resolution along with high throughput potential sets ELISPOT apart from flow cytometry, ELISA, microarray- and bead-based multiplex assays. The necessity to unambiguously identify individual T and B cells that do, or do not co-express certain analytes, including polyfunctional cytokine producing T cells, has stimulated the development of multi-color ELISPOT assays. The success of these assays has also been driven by limited sample/cell availability and resource constraints with reagents and labor. At present, there are few commercially available test kits and instruments for multi-color FLUOROSPOT. Beyond commercial descriptions of competing systems, little is known about their accuracy in experimental settings detecting individual cells that secrete multiple analytes vs. random overlays of spots. Here, we present a theoretical and experimental validation study for three and four color T- and B-cell FLUOROSPOT data analysis. The ImmunoSpot® Fluoro-X™ analysis system we used includes an automatic image acquisition unit that generates individual color images free of spectral overlaps and multi-color spot counting software based on the maximal allowed distance between centers of spots of different colors or Center of Mass Distance (COMD). Using four color B-cell FLUOROSPOT for IgM, IgA, IgG1, IgG3; and three/four color T-cell FLUOROSPOT for IL-2, IFN-γ, TNF-α, and GzB, in serial dilution experiments, we demonstrate the validity and accuracy of Fluoro-X™ multi-color spot counting algorithms. Statistical predictions based on the Poisson spatial distribution, coupled with scrambled image counting, permit objective correction of true multi-color spot counts to exclude randomly overlaid spots.
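    The center-of-mass-distance rule can be sketched as nearest-centroid pairing: spots from two color channels count as one double-positive cell only when their centroids lie within a maximum allowed distance. The greedy matching below is an illustrative reading of COMD, not the proprietary ImmunoSpot® algorithm:

```python
import numpy as np

def comd_match(spots_a, spots_b, max_dist):
    """Greedily pair spot centroids from two color channels whose
    center-of-mass distance is at most max_dist.
    Returns a list of (index_a, index_b) pairs; each spot pairs at most once."""
    spots_a = np.asarray(spots_a, dtype=float)
    spots_b = np.asarray(spots_b, dtype=float)
    pairs, used = [], set()
    for i, p in enumerate(spots_a):
        dists = np.linalg.norm(spots_b - p, axis=1)
        dists[list(used)] = np.inf   # already-paired spots are unavailable
        j = int(np.argmin(dists))
        if dists[j] <= max_dist:
            pairs.append((i, j))
            used.add(j)
    return pairs
```

    Shrinking `max_dist` trades sensitivity for specificity: random overlays of spots from unrelated cells are less likely to fall within a tight distance threshold, which is what the Poisson-based correction in the abstract quantifies.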

  4. Image quality analysis of a color LCD as well as a monochrome LCD using a Foveon color CMOS camera

    NASA Astrophysics Data System (ADS)

    Dallas, William J.; Roehrig, Hans; Krupinski, Elizabeth A.

    2007-09-01

    We have combined a CMOS color camera with special software to compose a multi-functional image-quality analysis instrument. It functions as a colorimeter as well as measuring modulation transfer functions (MTF) and noise power spectra (NPS). It is presently being expanded to examine fixed-pattern noise and temporal noise. The CMOS camera has 9 μm square pixels and a pixel matrix of 2268 x 1512 x 3. The camera uses a sensor that has co-located pixels for all three primary colors. We have imaged sections of both a color and a monochrome LCD monitor onto the camera sensor with LCD-pixel-size to camera-pixel-size ratios of both 12:1 and 17.6:1. When used as an imaging colorimeter, each camera pixel is calibrated to provide CIE color coordinates and tristimulus values. This capability permits the camera to simultaneously determine chromaticity in different locations on the LCD display. After the color calibration with a CS-200 colorimeter the color coordinates of the display's primaries determined from the camera's luminance response are very close to those found from the CS-200. Only the color coordinates of the display's white point were in error. For calculating the MTF, a vertical or horizontal line is displayed on the monitor. The captured image is color-matrix preprocessed, Fourier transformed then post-processed. For NPS, a uniform image is displayed on the monitor. Again, the image is pre-processed, transformed and processed. Our measurements show that the horizontal MTFs of both displays have a larger negative slope than that of the vertical MTFs. This behavior indicates that the horizontal MTFs are poorer than the vertical MTFs. However, the modulations at the Nyquist frequency seem lower for the color LCD than for the monochrome LCD. The spatial noise of the color display in both directions is larger than that of the monochrome display. Attempts were also made to analyze the total noise in terms of spatial and temporal noise by applying subtractions of
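    The MTF pipeline described (display a line, capture it, Fourier transform) reduces at its core to transforming the captured line spread function. A minimal sketch, omitting the color-matrix pre-processing and post-processing steps the abstract mentions:

```python
import numpy as np

def mtf_from_lsf(lsf):
    """MTF as the normalized magnitude of the Fourier transform of the
    line spread function (the captured profile across a displayed line)."""
    spectrum = np.abs(np.fft.rfft(lsf))
    return spectrum / spectrum[0]   # normalize so MTF(0) = 1
```

    An ideal single-pixel line yields a flat MTF of 1 at all frequencies; any display or optical blur lowers the response toward the Nyquist frequency, which is the quantity compared between the two LCDs above.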

  5. Color-encoded distance for interactive focus positioning in laser microsurgery

    NASA Astrophysics Data System (ADS)

    Schoob, Andreas; Kundrat, Dennis; Lekon, Stefan; Kahrs, Lüder A.; Ortmaier, Tobias

    2016-08-01

    This paper presents a real-time method for interactive focus positioning in laser microsurgery. Registration of stereo vision and a surgical laser is performed in order to combine surgical scene and laser workspace information. In particular, stereo image data is processed to three-dimensionally reconstruct observed tissue surface as well as to compute and to highlight its intersection with the laser focal range. Regarding the surgical live view, three augmented reality concepts are presented providing visual feedback during manual focus positioning. A user study is performed and results are discussed with respect to accuracy and task completion time. Especially when using color-encoded distance superimposed to the live view, target positioning with sub-millimeter accuracy can be achieved in a few seconds. Finally, transfer to an intraoperative scenario with endoscopic human in vivo and cadaver images is discussed demonstrating the applicability of the image overlay in laser microsurgery.

  6. Spatial transform coding of color images.

    NASA Technical Reports Server (NTRS)

    Pratt, W. K.

    1971-01-01

    The application of the transform-coding concept to the coding of color images represented by three primary color planes of data is discussed. The principles of spatial transform coding are reviewed and the merits of various methods of color-image representation are examined. A performance analysis is presented for the color-image transform-coding system. Results of a computer simulation of the coding system are also given. It is shown that, by transform coding, the chrominance content of a color image can be coded with an average of 1.0 bits per element or less without serious degradation. If luminance coding is also employed, the average rate reduces to about 2.0 bits per element or less.
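    The luminance/chrominance rate asymmetry quantified above (roughly 1.0 bit per element for chrominance) can be illustrated without the full transform coder: convert to a luminance-chrominance space and quantize only the chrominance coarsely. The Rec. 601 YCbCr matrix and 3-bit step are illustrative; the paper's coder quantizes spatial transform coefficients, not raw pixels:

```python
import numpy as np

# RGB -> YCbCr (Rec. 601) primary matrix
RGB2YCC = np.array([[ 0.299,     0.587,     0.114   ],
                    [-0.168736, -0.331264,  0.5     ],
                    [ 0.5,      -0.418688, -0.081312]])

def quantize_chroma(img, bits=3):
    """Keep luminance exact; round the two chrominance channels to
    2**bits levels, then convert back to RGB."""
    ycc = img.astype(float) @ RGB2YCC.T
    step = 256.0 / (2 ** bits)
    ycc[..., 1:] = np.round(ycc[..., 1:] / step) * step
    return np.clip(ycc @ np.linalg.inv(RGB2YCC).T, 0, 255)
```

    Even with only 8 chrominance levels per channel, the reconstructed pixels stay close to the originals, which is the perceptual basis for spending fewer bits on chrominance than on luminance.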

  7. Image color reduction method for color-defective observers using a color palette composed of 20 particular colors

    NASA Astrophysics Data System (ADS)

    Sakamoto, Takashi

    2015-01-01

    This study describes a color enhancement method that uses a color palette especially designed for protan and deutan defects, commonly known as red-green color blindness. The proposed color reduction method is based on a simple color mapping; it requires no complicated computation or image processing, and it can replace protan and deutan confusion (p/d-confusion) colors with protan and deutan safe (p/d-safe) colors. Color palettes for protan and deutan defects proposed by previous studies are composed of only a few p/d-safe colors. Thus, the colors contained in these palettes are insufficient for replacing colors in photographs. Recently, Ito et al. proposed a p/d-safe color palette composed of 20 particular colors. The author demonstrated that their p/d-safe color palette could be applied to image color reduction in photographs as a means to replace p/d-confusion colors. This study describes the results of the proposed color reduction in photographs that include typical p/d-confusion colors, which can be replaced. After the reduction process is completed, color-defective observers can distinguish these confusion colors.
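    The "simple color mapping" can be sketched as nearest-palette-color replacement. The toy three-entry palette below stands in for the 20-color p/d-safe palette of Ito et al., whose actual color values are not reproduced here:

```python
import numpy as np

def reduce_to_palette(img, palette):
    """Replace every pixel by its nearest palette color (Euclidean RGB distance).
    img: (H, W, 3) array; palette: (N, 3) array of allowed colors."""
    flat = img.reshape(-1, 3).astype(float)
    dists = np.linalg.norm(flat[:, None, :] - palette[None, :, :].astype(float),
                           axis=2)
    return palette[np.argmin(dists, axis=1)].reshape(img.shape)
```

    Because every output pixel is drawn from the safe palette, no p/d-confusion pair can survive in the reduced photograph; perceptually better results would use a color-difference metric instead of raw RGB distance.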

  8. Hepatitis Diagnosis Using Facial Color Image

    NASA Astrophysics Data System (ADS)

    Liu, Mingjia; Guo, Zhenhua

    Facial color diagnosis is an important diagnostic method in traditional Chinese medicine (TCM). However, due to its qualitative, subjective and experience-based nature, traditional facial color diagnosis has a very limited application in clinical medicine. To circumvent the subjective and qualitative problems of facial color diagnosis in TCM, in this paper we present a novel computer-aided facial color diagnosis method (CAFCDM). The method has three parts: a Face Image Database, an Image Preprocessing Module and a Diagnosis Engine. The Face Image Database was built from a group of 116 patients affected by 2 kinds of liver diseases and 29 healthy volunteers. Quantitative color features are extracted from the facial images by using popular digital image processing techniques. Then, a KNN classifier is employed to model the relationship between the quantitative color features and the diseases. The results show that the method can properly identify three groups: healthy, severe hepatitis with jaundice and severe hepatitis without jaundice, with accuracy higher than 73%.

  9. Hi-fidelity multi-scale local processing for visually optimized far-infrared Herschel images

    NASA Astrophysics Data System (ADS)

    Li Causi, G.; Schisano, E.; Liu, S. J.; Molinari, S.; Di Giorgio, A.

    2016-07-01

    In the context of the "Hi-Gal" multi-band full-plane mapping program for the Galactic Plane, as imaged by the Herschel far-infrared satellite, we have developed a semi-automatic tool which produces high definition, high quality color maps optimized for visual perception of extended features, like bubbles and filaments, against the high background variations. We project the map tiles of three selected bands onto a 3-channel panorama, which spans the central 130 degrees of galactic longitude times 2.8 degrees of galactic latitude, at the pixel scale of 3.2", in Cartesian galactic coordinates. Then we process this image piecewise, applying a custom multi-scale local stretching algorithm, reinforced by a local multi-scale color balance. Finally, we apply an edge-preserving contrast enhancement to perform artifact-free detail sharpening. Thanks to this tool, we have produced a stunning giga-pixel color image of the far-infrared Galactic Plane that we made publicly available with the recent release of the Hi-Gal mosaics and compact source catalog.

  10. Fast hierarchical knowledge-based approach for human face detection in color images

    NASA Astrophysics Data System (ADS)

    Jiang, Jun; Gong, Jie; Zhang, Guilin; Hu, Ruolan

    2001-09-01

    This paper presents a fast hierarchical knowledge-based approach for automatically detecting multi-scale upright faces in still color images. The approach consists of three levels. At the highest level, skin-like regions are determined by a skin model based on the color attributes hue and saturation in HSV color space, as well as the color attributes red and green in normalized color space. In level 2, a new eye model is devised to select human face candidates in the segmented skin-like regions. An important feature of the eye model is that it is independent of the scale of the human face, so faces at different scales can be found by scanning the image only once, which greatly reduces the computation time of face detection. In level 3, a human face mosaic image model, which corresponds well to the physical structure of the human face, is applied to judge whether faces are present in the candidate regions. This model includes edge and gray-level rules. Experimental results show that the approach is highly robust and fast, and it has broad application prospects in human-computer interaction, video telephony, etc.
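    The level-1 skin model can be sketched as per-pixel thresholds on hue and saturation in HSV plus normalized red-green coordinates. All threshold values below are illustrative assumptions for typical skin tones, not the paper's trained parameters:

```python
import colorsys
import numpy as np

def skin_mask(img):
    """Knowledge-based skin classifier sketch: HSV hue/saturation thresholds
    combined with a normalized-rg check. img: (H, W, 3) RGB in 0..255.
    Threshold values are illustrative assumptions."""
    h, w, _ = img.shape
    mask = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            r, g, b = img[y, x] / 255.0
            hue, sat, _ = colorsys.rgb_to_hsv(r, g, b)
            s = r + g + b + 1e-9
            rn, gn = r / s, g / s            # normalized color space
            mask[y, x] = ((hue < 0.14 or hue > 0.95)
                          and 0.2 < sat < 0.7
                          and rn > 0.36 and 0.26 < gn < 0.36)
    return mask
```

    Combining two color spaces makes the rule stricter than either test alone, which keeps the candidate regions passed to the eye-model stage small.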

  11. Utilizing typical color appearance models to represent perceptual brightness and colorfulness for digital images

    NASA Astrophysics Data System (ADS)

    Gong, Rui; Wang, Qing; Shao, Xiaopeng; Zhou, Conghao

    2016-12-01

    This study aims to expand the applications of color appearance models to representing the perceptual attributes of digital images, supplying more accurate methods for predicting image brightness and image colorfulness. Two typical models, i.e., the CIELAB model and the CIECAM02, were involved in developing algorithms to predict brightness and colorfulness for various images, in which three methods were designed to handle pixels of different color contents. Moreover, massive visual data were collected from psychophysical experiments on two mobile displays under three lighting conditions to analyze the characteristics of visual perception on these two attributes and to test the prediction accuracy of each algorithm. Afterward, detailed analyses revealed that image brightness and image colorfulness were predicted well by calculating the CIECAM02 parameters of lightness and chroma; thus, the suitable methods for dealing with different color pixels were determined for image brightness and image colorfulness, respectively. This study provides an example of extending color appearance models to describe image perception.

  12. Multi-color fluorescence imaging of sub-cellular dynamics of cancer cells in live mice

    NASA Astrophysics Data System (ADS)

    Hoffman, Robert M.

    2006-02-01

    We have genetically engineered dual-color fluorescent cells with one color in the nucleus and the other in the cytoplasm, enabling real-time nuclear-cytoplasmic dynamics to be visualized in living cells in vivo as well as in vitro. To obtain the dual-color cells, red fluorescent protein (RFP) was expressed in the cytoplasm of the cancer cells, and green fluorescent protein (GFP) linked to histone H2B was expressed in the nucleus. Mitotic cells were visualized by whole-body imaging after injection in the mouse ear. Common carotid artery or heart injection of dual-color cells and a reversible skin flap enabled the external visualization of the dual-color cells in microvessels in the mouse where extreme elongation of the cell body as well as the nucleus occurred. The migration velocities of the dual-color cancer cells in the capillaries were measured by capturing individual images of the dual-color fluorescent cells over time. Human HCT-116-GFP-RFP colon cancer and mouse mammary tumor (MMT)-GFP-RFP cells were injected in the portal vein of nude mice. Extensive clasmocytosis (destruction of the cytoplasm) of the HCT-116-GFP-RFP cells occurred within 6 hours. The data suggest rapid death of HCT-116-GFP-RFP cells in the portal vein. In contrast, MMT-GFP-RFP cells injected into the portal vein mostly survived and formed colonies in the liver. However, when the host mice were pretreated with cyclophosphamide, the HCT-116-GFP-RFP cells also survived and formed colonies in the liver after portal vein injection. These results suggest that a cyclophosphamide-sensitive host cellular system attacked the HCT-116-GFP-RFP cells but could not effectively kill the MMT-GFP-RFP cells. With the ability to continuously image cancer cells at the subcellular level in the live animal, our understanding of the complex steps of metastasis will significantly increase. In addition, new drugs can be developed to target these newly visible steps of metastasis.

  13. Restoration of color in a remote sensing image and its quality evaluation

    NASA Astrophysics Data System (ADS)

    Zhang, Zuxun; Li, Zhijiang; Zhang, Jianqing; Wang, Zhihe

    2003-09-01

    This paper is focused on the restoration of color remote sensing images (including airborne photos). A complete approach is recommended, addressing two main aspects: restoration of spatial information and restoration of photometric information. In this proposal, spatial restoration is performed by taking the modulation transfer function (MTF) as the degradation function, where the MTF is obtained by measuring the edge curve of the original image. Photometric restoration is performed by an improved local maximum entropy algorithm. In addition, a practical approach to processing color remote sensing images is recommended: the color image is split into three monochromatic images corresponding to the three visible light bands, each band is processed separately, and the three images are then synthesized under psychological color vision constraints. Finally, three novel evaluation variables are derived to assess restoration quality in terms of both spatial and photometric restoration, and an evaluation is provided.
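
    The split-restore-synthesize pipeline can be sketched as below; the per-band contrast stretch is only a placeholder for the MTF deconvolution and local-maximum-entropy steps described above:

```python
import numpy as np

def restore_per_channel(rgb, restore_fn):
    """Split an RGB remote sensing image into its three bands,
    restore each band independently, and recombine the result.
    `restore_fn` stands in for the paper's MTF-based deblurring
    plus local-maximum-entropy photometric restoration."""
    channels = [restore_fn(rgb[..., c]) for c in range(3)]
    return np.stack(channels, axis=-1)

# Placeholder restoration: a simple per-band contrast stretch.
def stretch(band):
    lo, hi = band.min(), band.max()
    return (band - lo) / (hi - lo + 1e-12)

img = np.random.rand(8, 8, 3) * 100.0
restored = restore_per_channel(img, stretch)
```

    Any per-band restoration operator with the same signature can be dropped in for `stretch` without changing the pipeline.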

  14. Using color and grayscale images to teach histology to color-deficient medical students.

    PubMed

    Rubin, Lindsay R; Lackey, Wendy L; Kennedy, Frances A; Stephenson, Robert B

    2009-01-01

    Examination of histologic and histopathologic microscopic sections relies upon differential colors provided by staining techniques, such as hematoxylin and eosin, to delineate normal tissue components and to identify pathologic alterations in these components. Given the prevalence of color deficiency (commonly called "color blindness") in the general population, it is likely that this reliance upon color differentiation poses a significant obstacle for several medical students beginning a course of study that includes examination of histologic slides. In the past, first-year medical students at Michigan State University who identified themselves as color deficient were encouraged to use color transparency overlays or tinted contact lenses to filter out problematic colors. Recently, however, we have offered such students a computer monitor adjusted to grayscale for in-lab work, as well as grayscale copies of color photomicrographs for examination purposes. Grayscale images emphasize the texture of tissues and the contrasts between tissues as the students learn histologic architecture. Using this approach, color-deficient students have quickly learned to compensate for their deficiency by focusing on cell and tissue structure rather than on color variation. Based upon our experience with color-deficient students, we believe that grayscale photomicrographs may also prove instructional for students with normal (trichromatic) color vision, by encouraging them to consider structural characteristics of cells and tissues that may otherwise be overshadowed by stain colors.

  15. New regularization scheme for blind color image deconvolution

    NASA Astrophysics Data System (ADS)

    Chen, Li; He, Yu; Yap, Kim-Hui

    2011-01-01

    This paper proposes a new regularization scheme to address blind color image deconvolution. Color images generally have significant correlation among the red, green, and blue channels. Conventional blind monochromatic deconvolution algorithms handle each color channel independently, thereby ignoring the interchannel correlation present in color images. In view of this, a unified regularization scheme is developed to recover the edges of color images and reduce color artifacts. In addition, by using color image properties, a spectral-based regularization operator is adopted to impose constraints on the blurs. Further, this paper proposes a reinforcement regularization framework that integrates a soft parametric learning term to address blind color image deconvolution. A blur modeling scheme is developed to evaluate the relevance of manifold parametric blur structures, and this information is integrated into the deconvolution scheme. An optimization procedure called alternating minimization is then employed to iteratively minimize the image- and blur-domain cost functions. Experimental results show that the method is able to achieve satisfactory restored color images under different blurring conditions.
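
    Alternating minimization of the image- and blur-domain cost functions can be illustrated in 1-D. This toy sketch uses an unregularized least-squares cost, so it omits the paper's regularization operators entirely:

```python
import numpy as np

rng = np.random.default_rng(0)
x_true = rng.random(32)
k_true = np.array([0.25, 0.5, 0.25])
y = np.convolve(x_true, k_true, mode="same")   # observed blurred signal

x = y.copy()                                   # image estimate: start from the data
k = np.array([0.2, 0.6, 0.2])                  # rough initial blur estimate

def res(x, k):
    return np.convolve(x, k, mode="same") - y

r0 = np.linalg.norm(res(x, k))
for _ in range(100):
    # image-domain update: one gradient step on ||k*x - y||^2
    x -= 0.5 * np.convolve(res(x, k), k[::-1], mode="same")
    # blur-domain update: exact least-squares fit of the 3-tap kernel
    A = np.column_stack([np.convolve(x, np.eye(3)[i], mode="same")
                         for i in range(3)])
    k, *_ = np.linalg.lstsq(A, y, rcond=None)
r1 = np.linalg.norm(res(x, k))                 # residual shrinks across iterations
```

    In the paper each of these two updates would also carry its regularization term; the alternation structure is the same.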

  16. Multi-Color QWIP FPAs for Hyperspectral Thermal Emission Instruments

    NASA Technical Reports Server (NTRS)

    Soibel, Alexander; Luong, Ed; Mumolo, Jason M.; Liu, John; Rafol, Sir B.; Keo, Sam A.; Johnson, William; Willson, Dan; Hill, Cory J.; Ting, David Z.-Y.

    2012-01-01

    Infrared focal plane arrays (FPAs) covering broad mid- and long-IR spectral ranges are central parts of the spectroscopic and imaging instruments in several Earth and planetary science missions. To be implemented in space instruments, these FPAs need to be large-format, uniform, reproducible, low-cost, low in 1/f noise, and radiation hard. Quantum Well Infrared Photodetectors (QWIPs), which possess all of the needed characteristics, have great potential for implementation in space instruments. However, a standard QWIP has only a relatively narrow spectral coverage. A multi-color QWIP, which is composed of two or more detector stacks, can be used to cover the broad spectral range of interest. We will discuss our recent work on the development of multi-color QWIPs for Hyperspectral Thermal Emission Spectrometer instruments. We developed a QWIP comprising two stacks centered at 9 and 10.5 µm, featuring 9 grating regions optimized to maximize the responsivity of the individual subbands across the 7.5-12 µm spectral range. The demonstrated 1024x1024 QWIP FPA exhibited excellent performance, with operability exceeding 99% and a noise equivalent differential temperature of less than 15 mK across the entire 7.5-12 µm spectral range.

  17. Pixel-based image fusion with false color mapping

    NASA Astrophysics Data System (ADS)

    Zhao, Wei; Mao, Shiyi

    2003-06-01

    In this paper, we propose a pixel-based image fusion algorithm that combines a gray-level image fusion method with false color mapping. The algorithm integrates two gray-level images, representing different sensor modalities or different frequencies, and produces a fused false-color image with higher information content than either of the originals, in which objects are easy to recognize. The algorithm has three steps: first, obtain the fused gray-level image of the two original images; second, compute the generalized high-boost filtering images between the fused gray-level image and each of the two source images; third, generate the fused false-color image. We use a hybrid averaging-and-selection fusion method to obtain the fused gray-level image, which provides better detail than either original image while reducing noise. However, the fused gray-level image cannot contain all of the detail in the two source images, and details in a gray-level image cannot be discerned as easily as in a color image, so a color fused image is necessary. In order to create color variation and enhance details in the final fused image, we produce three generalized high-boost filtering images and display them through the red, green, and blue channels, respectively, producing the final fused color image. The method is used to fuse two SAR images acquired over the San Francisco area (California, USA). The result shows that the fused false-color image enhances the visibility of certain details. The resolution of the final false-color image is the same as the resolution of the input images.
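
    The three steps can be sketched as follows. Plain averaging stands in for the hybrid averaging-and-selection rule, and the high-boost formula is one plausible reading of "generalized high-boost filtering", not the authors' exact definition:

```python
import numpy as np

def false_color_fuse(s1, s2, boost=1.0):
    """Sketch of the three-step pixel-based fusion:
    1) fuse the two gray-level sources (plain averaging stands in
       for the hybrid averaging-and-selection rule);
    2) form high-boost images between the fused image and each
       source (an assumed reading of 'generalized high-boost');
    3) send the three results to the R, G, B display channels."""
    f = 0.5 * (s1 + s2)                       # step 1: fused gray level
    h1 = np.clip(f + boost * (f - s1), 0, 1)  # step 2: boost vs. source 1
    h2 = np.clip(f + boost * (f - s2), 0, 1)  # step 2: boost vs. source 2
    return np.stack([h1, h2, f], axis=-1)     # step 3: false-color image

a = np.random.rand(16, 16)
b = np.random.rand(16, 16)
rgb = false_color_fuse(a, b)
```

    The channel assignment in step 3 is an illustrative choice; any permutation yields a valid false-color rendering.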

  18. Full-field dual-color 100-nm super-resolution imaging reveals organization and dynamics of mitochondrial and ER networks.

    PubMed

    Brunstein, Maia; Wicker, Kai; Hérault, Karine; Heintzmann, Rainer; Oheim, Martin

    2013-11-04

    Most structured illumination microscopes use a physical or synthetic grating that is projected into the sample plane to generate a periodic illumination pattern. Albeit simple and cost-effective, this arrangement hampers fast or multi-color acquisition, which is a critical requirement for time-lapse imaging of cellular and sub-cellular dynamics. In this study, we designed and implemented an interferometric approach allowing large-field, fast, dual-color imaging at an isotropic 100-nm resolution based on a sub-diffraction fringe pattern generated by the interference of two colliding evanescent waves. Our all-mirror-based system generates illumination patterns of arbitrary orientation and period, limited only by the illumination aperture (NA = 1.45), the response time of a fast, piezo-driven tip-tilt mirror (10 ms) and the available fluorescence signal. At low µW laser powers suitable for long-period observation of live cells and with a camera exposure time of 20 ms, our system permits the acquisition of super-resolved 50 µm by 50 µm images at 3.3 Hz. The possibility it offers for rapidly adjusting the pattern between images is particularly advantageous for experiments that require multi-scale and multi-color information. We demonstrate the performance of our instrument by imaging mitochondrial dynamics in cultured cortical astrocytes. As an illustration of dual-color excitation and dual-color detection, we also resolve interaction sites between near-membrane mitochondria and the endoplasmic reticulum. Our TIRF-SIM microscope provides a versatile, compact and cost-effective arrangement for super-resolution imaging, allowing the investigation of co-localization and dynamic interactions between organelles--important questions in both cell biology and neurophysiology.

  19. Calibration Image of Earth by Mars Color Imager

    NASA Image and Video Library

    2005-08-22

    Three days after the Aug. 12, 2005, launch of NASA's Mars Reconnaissance Orbiter, the spacecraft was pointed toward Earth and the Mars Color Imager camera was powered up to acquire a suite of color and ultraviolet images of Earth and the Moon.

  20. #TheDress: Categorical perception of an ambiguous color image.

    PubMed

    Lafer-Sousa, Rosa; Conway, Bevil R

    2017-10-01

    We present a full analysis of data from our preliminary report (Lafer-Sousa, Hermann, & Conway, 2015) and test whether #TheDress image is multistable. A multistable image must give rise to more than one mutually exclusive percept, typically within single individuals. Clustering algorithms of color-matching data showed that the dress was seen categorically, as white/gold (W/G) or blue/black (B/K), with a blue/brown transition state. Multinomial regression predicted categorical labels. Consistent with our prior hypothesis, W/G observers inferred a cool illuminant, whereas B/K observers inferred a warm illuminant; moreover, subjects could use skin color alone to infer the illuminant. The data provide some, albeit weak, support for our hypothesis that day larks see the dress as W/G and night owls see it as B/K. About half of observers who were previously familiar with the image reported switching categories at least once. Switching probability increased with professional art experience. Priming with an image that disambiguated the dress as B/K biased reports toward B/K (priming with W/G had negligible impact); furthermore, knowledge of the dress's true colors and any prior exposure to the image shifted the population toward B/K. These results show that some people have switched their perception of the dress. Finally, consistent with a role of attention and local image statistics in determining how multistable images are seen, we found that observers tended to discount as achromatic the dress component that they did not attend to: B/K reporters focused on a blue region, whereas W/G reporters focused on a golden region.

  1. #TheDress: Categorical perception of an ambiguous color image

    PubMed Central

    Lafer-Sousa, Rosa; Conway, Bevil R.

    2017-01-01

    We present a full analysis of data from our preliminary report (Lafer-Sousa, Hermann, & Conway, 2015) and test whether #TheDress image is multistable. A multistable image must give rise to more than one mutually exclusive percept, typically within single individuals. Clustering algorithms of color-matching data showed that the dress was seen categorically, as white/gold (W/G) or blue/black (B/K), with a blue/brown transition state. Multinomial regression predicted categorical labels. Consistent with our prior hypothesis, W/G observers inferred a cool illuminant, whereas B/K observers inferred a warm illuminant; moreover, subjects could use skin color alone to infer the illuminant. The data provide some, albeit weak, support for our hypothesis that day larks see the dress as W/G and night owls see it as B/K. About half of observers who were previously familiar with the image reported switching categories at least once. Switching probability increased with professional art experience. Priming with an image that disambiguated the dress as B/K biased reports toward B/K (priming with W/G had negligible impact); furthermore, knowledge of the dress's true colors and any prior exposure to the image shifted the population toward B/K. These results show that some people have switched their perception of the dress. Finally, consistent with a role of attention and local image statistics in determining how multistable images are seen, we found that observers tended to discount as achromatic the dress component that they did not attend to: B/K reporters focused on a blue region, whereas W/G reporters focused on a golden region. PMID:29090319

  2. Color transfer algorithm in medical images

    NASA Astrophysics Data System (ADS)

    Wang, Weihong; Xu, Yangfa

    2007-12-01

    In the digital virtual human project, image data are acquired from frozen slices of human body specimens. The color and brightness within a group of images of a given organ can differ considerably, which causes great difficulty in edge extraction, segmentation, and 3D reconstruction. It is therefore necessary to unify the color of the images, and the color transfer algorithm is well suited to this problem. This paper introduces the principle of the algorithm and applies it to medical image processing.
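
    A minimal sketch of statistics-based color transfer, assuming the global mean/std matching of Reinhard et al. (the abstract does not pin down the exact variant) and working per RGB channel rather than in a decorrelated color space:

```python
import numpy as np

def transfer_stats(source, target):
    """Mean/std color transfer: shift and scale each channel of
    `source` so its statistics match `target`. The classic algorithm
    (Reinhard et al.) does this in a decorrelated l-alpha-beta space;
    this sketch applies it per RGB channel for brevity."""
    out = np.empty_like(source, dtype=float)
    for c in range(source.shape[-1]):
        s, t = source[..., c], target[..., c]
        scale = t.std() / (s.std() + 1e-12)
        out[..., c] = (s - s.mean()) * scale + t.mean()
    return out

src = np.random.rand(10, 10, 3)              # slice with a color cast
tgt = np.random.rand(10, 10, 3) * 0.5 + 0.25 # reference slice
res = transfer_stats(src, tgt)
```

    After the transfer, each channel of `res` has the same mean and standard deviation as the reference, which is exactly the "unified color" property the project needs.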

  3. An effective image classification method with the fusion of invariant feature and a new color descriptor

    NASA Astrophysics Data System (ADS)

    Mansourian, Leila; Taufik Abdullah, Muhamad; Nurliyana Abdullah, Lili; Azman, Azreen; Mustaffa, Mas Rina

    2017-02-01

    Pyramid Histogram of Words (PHOW) combines Bag of Visual Words (BoVW) with spatial pyramid matching (SPM) in order to add location information to the extracted features. However, PHOW variants are extracted from various color spaces without capturing color information individually; in effect they discard color, an important characteristic of any image that is motivated by human vision. This article concatenates a PHOW Multi-Scale Dense Scale Invariant Feature Transform (MSDSIFT) histogram with a proposed color histogram to improve the performance of existing image classification algorithms. Performance evaluation on several datasets shows that the new approach outperforms existing state-of-the-art methods.
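
    The color-descriptor side of the fusion can be sketched as a normalized per-channel histogram concatenated onto the texture feature; the bin count and the 100-dimensional texture stand-in are arbitrary choices, not the paper's settings:

```python
import numpy as np

def color_histogram(rgb, bins=8):
    """Per-channel color histogram, normalized and concatenated,
    as a simple stand-in for the article's proposed color descriptor."""
    feats = []
    for c in range(3):
        h, _ = np.histogram(rgb[..., c], bins=bins, range=(0.0, 1.0))
        feats.append(h / h.sum())
    return np.concatenate(feats)

def fuse_features(texture_feat, color_feat):
    """Late fusion by concatenation, mirroring how the paper appends
    its color histogram to the PHOW/MSDSIFT descriptor."""
    return np.concatenate([texture_feat, color_feat])

img = np.random.rand(32, 32, 3)
color = color_histogram(img)
desc = fuse_features(np.random.rand(100), color)  # 100-dim PHOW stand-in
```
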

  4. Evaluating visibility of age spot and freckle based on simulated spectral reflectance distribution and facial color image

    NASA Astrophysics Data System (ADS)

    Hirose, Misa; Toyota, Saori; Tsumura, Norimichi

    2018-02-01

    In this research, we evaluate the visibility of age spots and freckles while varying the blood volume, based on a simulated spectral reflectance distribution and actual facial color images, and compare these results. First, we generate three types of spatial distribution of age spots and freckles in patch-like images based on the simulated spectral reflectance. The spectral reflectance is simulated using a Monte Carlo simulation of light transport in multi-layered tissue. Next, we reconstruct the facial color image while varying the blood volume. We acquire the concentration distributions of the melanin, hemoglobin, and shading components by applying independent component analysis to a facial color image, and reproduce images using the obtained melanin and shading concentrations together with the changed hemoglobin concentration. Finally, we evaluate the visibility of the pigmentations using the simulated spectral reflectance distribution and the facial color images. For the simulated spectral reflectance distribution, we found that the visibility became lower as the blood volume increased. However, the results on the actual facial color images show that a specific blood volume reduces the visibility of the actual pigmentations.
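
    The re-rendering step can be illustrated with a toy log-domain pigment model. The absorbance vectors below are invented for illustration; the paper estimates its components by ICA rather than assuming them:

```python
import numpy as np

# Illustrative log-domain skin color model: the paper separates a facial
# image into melanin, hemoglobin, and shading components and re-renders
# the image with a modified hemoglobin (blood) volume. The absorbance
# vectors below are made-up illustrative values, not measured spectra.
C_MELANIN = np.array([0.74, 0.60, 0.30])     # hypothetical RGB absorbance
C_HEMOGLOBIN = np.array([0.43, 0.74, 0.52])  # hypothetical RGB absorbance

def render(melanin, hemoglobin, blood_scale=1.0):
    """Reconstruct RGB reflectance from component density maps, scaling
    the hemoglobin map to simulate a change in blood volume."""
    density = (melanin[..., None] * C_MELANIN
               + blood_scale * hemoglobin[..., None] * C_HEMOGLOBIN)
    return np.exp(-density)   # Beer-Lambert style attenuation

m = np.random.rand(8, 8) * 0.5   # melanin density map
h = np.random.rand(8, 8) * 0.5   # hemoglobin density map
base = render(m, h)
flushed = render(m, h, blood_scale=2.0)   # doubled blood volume: darker skin
```
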

  5. An efficient method for the fusion of light field refocused images

    NASA Astrophysics Data System (ADS)

    Wang, Yingqian; Yang, Jungang; Xiao, Chao; An, Wei

    2018-04-01

    Light field cameras have drawn much attention due to the advantage of post-capture adjustments such as refocusing after exposure. The depth of field in refocused images is always shallow because of the large equivalent aperture. As a result, a large number of multi-focus images are obtained and an all-in-focus image is demanded. Because most multi-focus image fusion algorithms are not designed for large numbers of source images, and the traditional DWT-based fusion approach has serious problems when dealing with many multi-focus images, causing color distortion and ringing artifacts, this paper proposes an efficient multi-focus image fusion method based on the stationary wavelet transform (SWT) that can handle a large quantity of multi-focus images with shallow depths of field. We compare the SWT-based approach with the DWT-based approach on various occasions, and the results demonstrate that the proposed method performs much better both visually and quantitatively.
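
    A stationary (undecimated) transform can be sketched with the à trous scheme below. This is a simplified stand-in for the SWT fusion the paper describes, assuming a max-absolute-detail selection rule, which the abstract does not specify:

```python
import numpy as np

def atrous_decompose(img, levels=3):
    """Undecimated ('a trous') wavelet decomposition, a simple stand-in
    for the SWT: each level smooths with a dilated [1,2,1]/4 kernel and
    keeps the difference as detail. Summing all details plus the final
    smooth plane reconstructs the input exactly."""
    details, smooth = [], img.astype(float)
    for level in range(levels):
        step = 2 ** level
        kernel = np.zeros(2 * step + 1)
        kernel[[0, step, -1]] = [0.25, 0.5, 0.25]   # dilated [1,2,1]/4
        blurred = smooth.copy()
        for axis in (0, 1):                          # separable smoothing
            blurred = np.apply_along_axis(
                lambda row: np.convolve(np.pad(row, step, mode="edge"),
                                        kernel, mode="same")[step:-step],
                axis, blurred)
        details.append(smooth - blurred)
        smooth = blurred
    return details, smooth

def fuse(img_a, img_b, levels=3):
    """Max-absolute-detail fusion rule across all focus planes."""
    da, sa = atrous_decompose(img_a, levels)
    db, sb = atrous_decompose(img_b, levels)
    fused = 0.5 * (sa + sb)
    for a, b in zip(da, db):
        fused += np.where(np.abs(a) >= np.abs(b), a, b)
    return fused
```

    Because the transform is undecimated, the fused coefficients never pass through a down/upsampling pair, which is the source of the ringing artifacts the paper attributes to the DWT.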

  6. Content-based quality evaluation of color images: overview and proposals

    NASA Astrophysics Data System (ADS)

    Tremeau, Alain; Richard, Noel; Colantoni, Philippe; Fernandez-Maloigne, Christine

    2003-12-01

    The automatic prediction of perceived quality from image data in general, and the assessment of particular image characteristics or attributes that may need improvement in particular, is becoming an increasingly important part of intelligent imaging systems. The purpose of this paper is to propose that the color imaging community develop a software package, available on the internet, to help users select the approach best suited to a given application. The ultimate goal of this project is to propose, and then implement, an open and unified color imaging system that provides a favourable context for the evaluation and analysis of color imaging processes. Many different methods for measuring the performance of a process have been proposed by different researchers. In this paper, we discuss the advantages and shortcomings of the main analysis criteria and performance measures currently in use. The aim is not to establish a harsh competition between algorithms or processes, but rather to test and compare the efficiency of methodologies, first to highlight the strengths and weaknesses of a given algorithm or methodology on a given image type, and second to make these results publicly available. This paper focuses on two important unsolved problems. Why is it so difficult to select a color space that gives better results than another? Why is it so difficult to select an image quality metric that agrees better with the judgment of the human visual system than another? Several methods used either in color imaging or in image quality are therefore discussed. Proposals for content-based image measures and means of developing a standard test suite will then be presented. These proposals advocate an evaluation protocol based on an automated procedure, which is the ultimate goal of our work.

  7. Quality assessment of color images based on the measure of just noticeable color difference

    NASA Astrophysics Data System (ADS)

    Chou, Chun-Hsien; Hsu, Yun-Hsiang

    2014-01-01

    Accurate assessment of the quality of color images is an important step in many image processing systems that convey visual information through reproduced images. An accurate objective image quality assessment (IQA) method is expected to give results that agree closely with subjective assessment. To assess the quality of color images, many approaches simply apply a gray-scale quality metric to each of the three color channels, neglecting the correlation among them. In this paper, a metric for assessing color image quality is proposed, in which the model of variable just-noticeable color difference (VJNCD) is employed to estimate the visibility threshold of distortion inherent in each color pixel. With the estimated visibility thresholds, the proposed metric measures the average perceptible distortion in terms of the quantized distortion according to a perceptual error map similar to that defined by the National Bureau of Standards (NBS) for converting the color difference enumerated by CIEDE2000 into an objective score of perceptual quality. The perceptual error map in this case is designed for each pixel according to the visibility threshold estimated by the VJNCD model. The performance of the proposed metric is verified by assessing the test images in the LIVE database, and is compared with those of many well-known IQA metrics. Experimental results indicate that the proposed metric is an effective IQA method that can accurately predict the quality of color images in terms of the correlation between objective scores and subjective evaluation.
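
    The thresholding idea can be illustrated with a generic sketch: color differences below the per-pixel visibility threshold are treated as imperceptible and only the remainder is pooled. This omits the VJNCD model and the paper's NBS-style quantization of the supra-threshold error:

```python
import numpy as np

def jnd_gated_score(delta_e, jnd_map):
    """Generic sketch of a JND-gated quality measure: per-pixel color
    differences below the visibility threshold count as zero, and the
    supra-threshold remainder is averaged. The paper's metric further
    quantizes that remainder through a perceptual error map, which is
    omitted here."""
    perceptible = np.maximum(delta_e - jnd_map, 0.0)
    return perceptible.mean()

de = np.array([[0.5, 2.0], [3.0, 1.0]])  # CIEDE2000-style differences
jnd = np.full_like(de, 1.0)              # per-pixel visibility thresholds
score = jnd_gated_score(de, jnd)
```

    A lower score means less perceptible distortion; a distortion map entirely below threshold scores exactly zero.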

  8. Optimized Multi-Spectral Filter Array Based Imaging of Natural Scenes.

    PubMed

    Li, Yuqi; Majumder, Aditi; Zhang, Hao; Gopi, M

    2018-04-12

    Multi-spectral imaging using a camera with more than three channels is an efficient method to acquire and reconstruct spectral data and is used extensively in tasks like object recognition, relighted rendering, and color constancy. Recently developed methods only guide content-dependent filter selection, where the set of spectral reflectances to be recovered is known a priori. We present the first content-independent spectral imaging pipeline that allows optimal selection of multiple channels. We also present algorithms for optimal placement of the channels in the color filter array, yielding an efficient demosaicing order and accurate spectral recovery of natural reflectance functions. These reflectance functions have the property that their power spectrum statistically exhibits power-law behavior. Using this property, we propose power-law based error descriptors that are minimized to optimize the imaging pipeline. We extensively verify our models and optimizations using large sets of commercially available wide-band filters to demonstrate the greater accuracy and efficiency of our multi-spectral imaging pipeline over existing methods.
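
    The power-law property can be checked numerically: regress log power against log frequency and read off the slope. This is only the statistic underlying the paper's error descriptors, not the descriptors themselves:

```python
import numpy as np

def power_law_slope(signal):
    """Fit a power law to the power spectrum of a 1-D reflectance
    function by linear regression in log-log coordinates; the fitted
    slope is the statistic the power-law error descriptors build on."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.arange(1, len(spectrum))       # skip the DC term
    slope, _ = np.polyfit(np.log(freqs), np.log(spectrum[1:] + 1e-300), 1)
    return slope

# Synthetic check: build a spectrum whose power decays as f^-2.
n = 256
f = np.arange(1, n // 2 + 1)
spec = np.concatenate([[0.0], f ** -1.0])     # amplitude ~ f^-1 => power ~ f^-2
sig = np.fft.irfft(spec, n)
slope = power_law_slope(sig)                  # recovers a slope near -2
```
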

  9. Optimized Multi-Spectral Filter Array Based Imaging of Natural Scenes

    PubMed Central

    Li, Yuqi; Majumder, Aditi; Zhang, Hao; Gopi, M.

    2018-01-01

    Multi-spectral imaging using a camera with more than three channels is an efficient method to acquire and reconstruct spectral data and is used extensively in tasks like object recognition, relighted rendering, and color constancy. Recently developed methods only guide content-dependent filter selection, where the set of spectral reflectances to be recovered is known a priori. We present the first content-independent spectral imaging pipeline that allows optimal selection of multiple channels. We also present algorithms for optimal placement of the channels in the color filter array, yielding an efficient demosaicing order and accurate spectral recovery of natural reflectance functions. These reflectance functions have the property that their power spectrum statistically exhibits power-law behavior. Using this property, we propose power-law based error descriptors that are minimized to optimize the imaging pipeline. We extensively verify our models and optimizations using large sets of commercially available wide-band filters to demonstrate the greater accuracy and efficiency of our multi-spectral imaging pipeline over existing methods. PMID:29649114

  10. Evaluation by medical students of the educational value of multi-material and multi-colored three-dimensional printed models of the upper limb for anatomical education.

    PubMed

    Mogali, Sreenivasulu Reddy; Yeong, Wai Yee; Tan, Heang Kuan Joel; Tan, Gerald Jit Shen; Abrahams, Peter H; Zary, Nabil; Low-Beer, Naomi; Ferenczi, Michael Alan

    2018-01-01

    For centuries, cadaveric material has been the cornerstone of anatomical education. For reasons of changes in curriculum emphasis, cost, availability, expertise, and ethical concerns, several medical schools have replaced wet cadaveric specimens with plastinated prosections, plastic models, imaging, and digital models. Discussions about the qualities and limitations of these alternative teaching resources are on-going. We hypothesize that three-dimensional printed (3DP) models can replace or indeed enhance existing resources for anatomical education. A novel multi-colored and multi-material 3DP model of the upper limb was developed based on a plastinated upper limb prosection, capturing muscles, nerves, arteries and bones with a spatial resolution of ∼1 mm. This study aims to examine the educational value of the 3DP model from the learner's point of view. Students (n = 15) compared the developed 3DP models with the plastinated prosections, and provided their views on their learning experience using 3DP models using a survey and focus group discussion. Anatomical features in 3DP models were rated as accurate by all students. Several positive aspects of 3DP models were highlighted, such as the color coding by tissue type, flexibility and that less care was needed in the handling and examination of the specimen than plastinated specimens which facilitated the appreciation of relations between the anatomical structures. However, students reported that anatomical features in 3DP models are less realistic compared to the plastinated specimens. Multi-colored, multi-material 3DP models are a valuable resource for anatomical education and an excellent adjunct to wet cadaveric or plastinated prosections. Anat Sci Educ 11: 54-64. © 2017 American Association of Anatomists.

  11. Color dithering methods for LEGO-like 3D printing

    NASA Astrophysics Data System (ADS)

    Sun, Pei-Li; Sie, Yuping

    2015-01-01

    Color dithering methods for LEGO-like 3D printing are proposed in this study. The first method works for opaque color brick building and is a modification of classic error diffusion. Many sets of color primaries can be chosen; however, RGBYKW is recommended, as its image quality is good and its number of primaries is limited. For translucent color bricks, multi-layer color building can enhance the image quality significantly. A LUT-based method is proposed to speed up the dithering procedure and make the color distribution even smoother. Simulation results show that the proposed multi-layer dithering method improves the image quality of LEGO-like 3D printing.
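
    Palette-restricted error diffusion, the basis of the first method, can be sketched as follows. The RGBYKW values here are idealized corner colors, not measured brick colors, and the diffusion weights are plain Floyd-Steinberg rather than the paper's modification:

```python
import numpy as np

# Hypothetical six-primary brick palette (RGBYKW) in normalized RGB.
PALETTE = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
                    [1, 1, 0], [0, 0, 0], [1, 1, 1]], dtype=float)

def dither(img, palette=PALETTE):
    """Floyd-Steinberg error diffusion restricted to a fixed brick
    palette: quantize each pixel to its nearest primary and push the
    quantization error onto unvisited neighbors."""
    out = img.astype(float).copy()
    h, w, _ = out.shape
    idx = np.zeros((h, w), dtype=int)
    for y in range(h):
        for x in range(w):
            old = out[y, x]
            i = np.argmin(((palette - old) ** 2).sum(axis=1))  # nearest primary
            idx[y, x] = i
            err = old - palette[i]
            if x + 1 < w:
                out[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    out[y + 1, x - 1] += err * 3 / 16
                out[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    out[y + 1, x + 1] += err * 1 / 16
    return palette[idx]

img = np.random.rand(16, 16, 3)
bricks = dither(img)   # every output pixel is one of the six primaries
```
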

  12. Advanced technology development multi-color holography

    NASA Technical Reports Server (NTRS)

    Vikram, Chandra S.

    1994-01-01

    Several key aspects of multi-color holography and some non-conventional ways to study the holographic reconstructions are considered. The error analysis of three-color holography is considered in detail, with the particular example of a typical triglycine sulfate crystal growth situation. For the numerical analysis of the fringe patterns, a new algorithm is introduced, with experimental verification using a sugar-water solution. The role of the phase difference among component holograms is also critically considered, with examples of several two- and three-color situations. The status of experimentation on two-color holography and the fabrication of a small breadboard system is also reported. Finally, some successful demonstrations of unconventional ways to study holographic reconstructions are described. These methods are deflectometry and confocal optical processing using some Spacelab III holograms.

  13. A target detection multi-layer matched filter for color and hyperspectral cameras

    NASA Astrophysics Data System (ADS)

    Miyanishi, Tomoya; Preece, Bradley L.; Reynolds, Joseph P.

    2018-05-01

    In this article, a method for applying matched filters to a 3-dimensional hyperspectral data cube is discussed. In many applications, color visible cameras or hyperspectral cameras are used for target detection where the color or spectral optical properties of the imaged materials are partially known in advance. Therefore, matched filtering on spectral data together with shape data is an effective method for detecting certain targets. Since many methods for 2D image filtering have been researched, we propose a multi-layer filter in which ordinary spatially matched filters are applied before the spectral filters. We discuss a way to layer the spectral filters for a 3D hyperspectral data cube, accompanied by a detectability metric for calculating the SNR of the filter. This method is appropriate for visible color cameras and hyperspectral cameras. We also demonstrate an analysis using the Night Vision Integrated Performance Model (NV-IPM) and a Monte Carlo simulation in order to confirm the effectiveness of the filtering in providing a higher output SNR and a lower false alarm rate.
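
    The spectral stage of such a filter is commonly the classic covariance-whitened matched filter; a minimal sketch (not the authors' exact multi-layer formulation) is:

```python
import numpy as np

def spectral_matched_filter(cube, target, mean, cov):
    """Classic spectral matched filter: whiten by the background
    covariance and correlate with the target signature. `cube` is
    H x W x B; the output is a per-pixel detection score normalized so
    its expected value is 1 on pure target pixels."""
    cov_inv = np.linalg.inv(cov)
    d = target - mean
    w = cov_inv @ d / (d @ cov_inv @ d)   # normalized filter weights
    return (cube - mean) @ w

rng = np.random.default_rng(1)
bands = 5
mean = np.zeros(bands)
cov = np.eye(bands) * 0.01                # background noise covariance
target = np.array([1.0, 0.8, 0.6, 0.4, 0.2])
cube = rng.normal(0.0, 0.1, (8, 8, bands))
cube[4, 4] += target                      # implant one target pixel
scores = spectral_matched_filter(cube, target, mean, cov)
```

    In the multi-layer scheme described above, a spatial matched filter would be applied to each band before these spectral weights are used.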

  14. New false color mapping for image fusion

    NASA Astrophysics Data System (ADS)

    Toet, Alexander; Walraven, Jan

    1996-03-01

    A pixel-based color-mapping algorithm is presented that produces a fused false color rendering of two gray-level images representing different sensor modalities. The resulting images have a higher information content than each of the original images and retain sensor-specific image information. The unique component of each image modality is enhanced in the resulting fused color image representation. First, the common component of the two original input images is determined. Second, the common component is subtracted from the original images to obtain the unique component of each image. Third, the unique component of each image modality is subtracted from the image of the other modality. This step serves to enhance the representation of sensor-specific details in the final fused result. Finally, a fused color image is produced by displaying the images resulting from the last step through, respectively, the red and green channels of a color display. The method is applied to fuse thermal and visual images. The results show that the color mapping enhances the visibility of certain details and preserves the specificity of the sensor information. The fused images also have a fairly natural appearance. The fusion scheme involves only operations on corresponding pixels. The resolution of a fused image is therefore directly related to the resolution of the input images. Before fusing, the contrast of the images can be enhanced and their noise can be reduced by standard image-processing techniques. The color mapping algorithm is computationally simple. This implies that the investigated approaches can eventually be applied in real time and that the hardware needed is not too complicated or too voluminous (an important consideration when it has to fit in an airplane, for instance).
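
    The four-step mapping can be sketched directly from the description, assuming (as in Toet's related work) that the common component is taken as the pixelwise minimum of the two registered inputs:

```python
import numpy as np

def toet_false_color(a, b):
    """Sketch of the four-step mapping, assuming the common component
    is the pixelwise minimum:
    1) common = min(a, b);
    2) unique parts: a - common and b - common;
    3) cross-subtract the other modality's unique part;
    4) display the results through the red and green channels."""
    common = np.minimum(a, b)
    ua, ub = a - common, b - common          # unique components
    red = np.clip(a - ub, 0, 1)              # visual minus thermal-unique
    green = np.clip(b - ua, 0, 1)            # thermal minus visual-unique
    blue = np.zeros_like(a)                  # blue channel left empty
    return np.stack([red, green, blue], axis=-1)

vis = np.random.rand(16, 16)   # stand-in for the visual band
ir = np.random.rand(16, 16)    # stand-in for the thermal band
fused = toet_false_color(vis, ir)
```

    Only the red and green channels carry data, matching the display step in the abstract; the pixelwise-minimum choice for the common component is an assumption.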

  15. [Design and Implementation of Image Interpolation and Color Correction for Ultra-thin Electronic Endoscope on FPGA].

    PubMed

    Luo, Qiang; Yan, Zhuangzhi; Gu, Dongxing; Cao, Lei

    This paper proposes an image interpolation algorithm based on bilinear interpolation and a color correction algorithm based on polynomial regression, implemented on an FPGA to address the limited number of imaging pixels and the color distortion of the ultra-thin electronic endoscope. Simulation results showed that the proposed design achieves real-time display of 1280 x 720@60Hz HD video and, using the X-Rite ColorChecker as the standard colors, reduces the average color difference by about 30% compared with the uncorrected output.
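    The bilinear interpolation step can be sketched in a few lines (the FPGA pipeline and the polynomial color correction are not reproduced here): a sample at fractional coordinates is a blend of the four surrounding pixels.

```python
# Minimal sketch of bilinear interpolation on a 2D grayscale image
# stored as a list of rows.
def bilinear(img, x, y):
    """Sample img at fractional (x, y); coordinates assumed inside the image."""
    x0, y0 = int(x), int(y)                          # top-left neighbor
    x1 = min(x0 + 1, len(img[0]) - 1)                # clamp at the right edge
    y1 = min(y0 + 1, len(img) - 1)                   # clamp at the bottom edge
    fx, fy = x - x0, y - y0                          # fractional offsets
    top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx  # blend along the top row
    bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx  # blend along the bottom row
    return top * (1 - fy) + bot * fy                 # blend vertically

img = [[0, 10], [20, 30]]
center = bilinear(img, 0.5, 0.5)  # average of all four neighbors
```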

  16. Practical three color live cell imaging by widefield microscopy

    PubMed Central

    Xia, Jianrun; Kim, Song Hon H.; Macmillan, Susan

    2006-01-01

    Live cell fluorescence microscopy using fluorescent protein tags derived from jellyfish and coral species has been a successful tool for imaging proteins and dynamics in many species. Multi-colored aequorea fluorescent protein (AFP) derivatives allow investigators to observe multiple proteins simultaneously, but overlapping spectral properties sometimes require the use of sophisticated and expensive microscopes. Here, we show that the Aequorea coerulescens fluorescent protein derivative PS-CFP2 has excellent practical properties as a blue fluorophore that are distinct from green or red fluorescent proteins, and that it can be imaged with standard filter sets on a widefield microscope. We also find that, under widefield illumination in live cells, PS-CFP2 is very photostable. When fused to proteins that form concentrated puncta in either the cytoplasm or nucleus, PS-CFP2 fusions do not artifactually interact with other AFP fusion proteins, even at very high levels of over-expression. PS-CFP2 is therefore a good blue fluorophore for distinct three color imaging along with eGFP and mRFP using a relatively simple and inexpensive microscope. PMID:16909160

  17. Multi-Image Registration for an Enhanced Vision System

    NASA Technical Reports Server (NTRS)

    Hines, Glenn; Rahman, Zia-Ur; Jobson, Daniel; Woodell, Glenn

    2002-01-01

    An Enhanced Vision System (EVS) utilizing multi-sensor image fusion is currently under development at the NASA Langley Research Center. The EVS will provide enhanced images of the flight environment to assist pilots in poor visibility conditions. Multi-spectral images obtained from a short wave infrared (SWIR), a long wave infrared (LWIR), and a color visible band CCD camera, are enhanced and fused using the Retinex algorithm. The images from the different sensors do not have a uniform data structure: the three sensors not only operate at different wavelengths, but they also have different spatial resolutions, optical fields of view (FOV), and bore-sighting inaccuracies. Thus, in order to perform image fusion, the images must first be co-registered. Image registration is the task of aligning images taken at different times, from different sensors, or from different viewpoints, so that all corresponding points in the images match. In this paper, we present two methods for registering multiple multi-spectral images. The first method performs registration using sensor specifications to match the FOVs and resolutions directly through image resampling. In the second method, registration is obtained through geometric correction based on a spatial transformation defined by user selected control points and regression analysis.

  18. Single layer multi-color luminescent display and method of making

    NASA Technical Reports Server (NTRS)

    Robertson, James B. (Inventor)

    1992-01-01

    The invention is a multi-color luminescent display comprising an insulator substrate and a single layer of host material, which may be a phosphor deposited thereon that hosts one or more different impurities, therein forming a pattern of selected and distinctly colored phosphors such as blue, green, and red phosphors in a single layer of host material. Transparent electrical conductor means may be provided for subjecting selected portions of the pattern of colored phosphors to an electric field, thereby forming a multi-color, single layer electroluminescent display. A method of forming a multi-color luminescent display includes the steps of depositing on an insulator substrate a single layer of host material, which itself may be a phosphor, with the properties to host varying quantities of different impurities and introducing one or more of said different impurities into selected areas of the said single layer of host material by thermal diffusion or ion implantation to form a pattern of phosphors of different colors in the said single layer of host material.

  19. Multi exposure image fusion algorithm based on YCbCr space

    NASA Astrophysics Data System (ADS)

    Yang, T. T.; Fang, P. Y.

    2018-05-01

    To solve the problem that scene details and visual effects are difficult to optimize jointly in high dynamic range image synthesis, we propose a multi-exposure image fusion algorithm that processes low dynamic range images in YCbCr space, weighting and blending the luminance and chrominance components separately. The experimental results show that the method retains the color of the fused image while balancing details in the bright and dark regions of the high dynamic range image.
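    The luminance-blending step can be sketched as follows. The Gaussian "well-exposedness" weight used here is an assumption for illustration; the abstract does not specify the paper's weighting function.

```python
import math

# Hedged sketch: weighted blend of the luminance (Y) planes of an exposure
# stack, with a Gaussian weight that favors well-exposed (mid-gray) pixels.
def fuse_luminance(y_stack):
    """y_stack: list of equal-size 2D luminance planes with values in [0, 1]."""
    rows, cols = len(y_stack[0]), len(y_stack[0][0])
    fused = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            # Weight peaks at 0.5: under- and over-exposed pixels contribute less.
            weights = [math.exp(-((p[i][j] - 0.5) ** 2) / (2 * 0.2 ** 2))
                       for p in y_stack]
            total = sum(weights)
            fused[i][j] = sum(w * p[i][j] for w, p in zip(weights, y_stack)) / total
    return fused

fused = fuse_luminance([[[0.1]], [[0.5]]])  # dark exposure vs. well-exposed one
```

    The Cb and Cr planes would be blended the same way with their own weights, which is what "weighting the components separately" amounts to.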

  20. Parallel, multi-stage processing of colors, faces and shapes in macaque inferior temporal cortex

    PubMed Central

    Lafer-Sousa, Rosa; Conway, Bevil R.

    2014-01-01

    Visual-object processing culminates in inferior temporal (IT) cortex. To assess the organization of IT, we measured fMRI responses in alert monkey to achromatic images (faces, fruit, bodies, places) and colored gratings. IT contained multiple color-biased regions, which were typically ventral to face patches and, remarkably, yoked to them, spaced regularly at four locations predicted by known anatomy. Color and face selectivity increased for more anterior regions, indicative of a broad hierarchical arrangement. Responses to non-face shapes were found across IT, but were stronger outside color-biased regions and face patches, consistent with multiple parallel streams. IT also contained multiple coarse eccentricity maps: face patches overlapped central representations; color-biased regions spanned mid-peripheral representations; and place-biased regions overlapped peripheral representations. These results suggest that IT comprises parallel, multi-stage processing networks subject to one organizing principle. PMID:24141314

  1. Pseudo color ghost coding imaging with pseudo thermal light

    NASA Astrophysics Data System (ADS)

    Duan, De-yang; Xia, Yun-jie

    2018-04-01

    We present a new pseudo color imaging scheme, named pseudo color ghost coding imaging, based on ghost imaging but with a multiwavelength source modulated by a spatial light modulator. Unlike conventional pseudo color imaging, where there are no nondegenerate-wavelength spatial correlations to provide extra monochromatic images, the degenerate-wavelength and nondegenerate-wavelength spatial correlations between the idler beam and signal beam can be obtained simultaneously. This scheme can obtain a more colorful image with higher quality than conventional pseudo color coding techniques. More importantly, a significant advantage of the scheme over conventional pseudo color coding imaging techniques is that images with different colors can be obtained without changing the light source or spatial filter.

  2. An Underwater Color Image Quality Evaluation Metric.

    PubMed

    Yang, Miao; Sowmya, Arcot

    2015-12-01

    Quality evaluation of underwater images is a key goal of underwater video image retrieval and intelligent processing. To date, no metric has been proposed for underwater color image quality evaluation (UCIQE). The special absorption and scattering characteristics of the water medium do not allow direct application of natural color image quality metrics, especially across different underwater environments. In this paper, subjective testing for underwater image quality has been organized. The statistical distribution of the underwater image pixels in the CIELab color space related to subjective evaluation indicates that the sharpness and colorfulness factors correlate well with subjective image quality perception. Based on these observations, a new UCIQE metric, which is a linear combination of chroma, saturation, and contrast, is proposed to quantify the non-uniform color cast, blurring, and low contrast that characterize underwater engineering and monitoring images. Experiments are conducted to illustrate the performance of the proposed UCIQE metric and its capability to measure underwater image enhancement results. They show that the proposed metric has comparable performance to the leading natural color image quality metrics and the underwater grayscale image quality metrics available in the literature, and can predict with higher accuracy the relative amount of degradation with similar image content in underwater environments. Importantly, UCIQE is a simple and fast solution for real-time underwater video processing. The effectiveness of the presented measure is also demonstrated by subjective evaluation. The results show better correlation between the UCIQE and the subjective mean opinion score.
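    The metric's form, a linear combination of chroma standard deviation, luminance contrast, and mean saturation in CIELab, can be sketched compactly. The coefficients below are the ones reported by Yang and Sowmya; the simplified chroma and saturation arithmetic is illustrative, not a reference implementation.

```python
# Sketch of the UCIQE form on flattened CIELab planes:
# UCIQE = c1 * sigma_chroma + c2 * luminance_contrast + c3 * mean_saturation.
def uciqe(l_vals, a_vals, b_vals, c1=0.4680, c2=0.2745, c3=0.2576):
    n = len(l_vals)
    chroma = [(a * a + b * b) ** 0.5 for a, b in zip(a_vals, b_vals)]
    mu_c = sum(chroma) / n
    sigma_c = (sum((c - mu_c) ** 2 for c in chroma) / n) ** 0.5  # chroma std. dev.
    con_l = max(l_vals) - min(l_vals)                            # luminance contrast
    sat = [c / l if l > 0 else 0.0 for c, l in zip(chroma, l_vals)]
    mu_s = sum(sat) / n                                          # mean saturation
    return c1 * sigma_c + c2 * con_l + c3 * mu_s
```

    A flat, low-contrast image scores low, and increasing luminance contrast raises the score, which matches the degradations the metric is meant to penalize.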

  3. Achieving consistent color and grayscale presentation on medical color displays

    NASA Astrophysics Data System (ADS)

    Fan, Jiahua; Roehrig, Hans; Dallas, William; Krupinski, Elizabeth A.

    2008-03-01

    Color displays are increasingly used for medical imaging, replacing the traditional monochrome displays in radiology for multi-modality applications, 3D representation applications, etc. Color displays are also used increasingly because of the widespread application of Tele-Medicine, Tele-Dermatology, and Digital Pathology. At this time, there is no concerted effort toward calibration procedures for this diverse range of color displays in Telemedicine and in other areas of the medical field. Using a colorimeter to measure the display luminance and chrominance properties, together with some processing software, we developed a first attempt at a color calibration protocol for the medical imaging field.

  4. Theoretical design of multi-colored semi-transparent organic solar cells with both efficient color filtering and light harvesting

    PubMed Central

    Wen, Long; Chen, Qin; Sun, Fuhe; Song, Shichao; Jin, Lin; Yu, Yan

    2014-01-01

    Solar cells incorporating multi-coloring capability not only offer an aesthetic solution to bridge the gap between solar modules and building decorations but also open up the possibility of self-powered colorful displays. In this paper, we propose a multi-colored semi-transparent organic solar cell (TOSC) design containing metallic nanostructures with both high color purity and high efficiency, based on theoretical considerations. By employing the guided mode resonance effect, the multi-colored TOSC behaves like an efficient color filter that selectively transmits light of the desired wavelengths and generates electricity with light of other wavelengths. Broad-range coloring and luminosity adjustment of the transmitted light can be achieved by simply tuning the period and the duty cycle of the metallic nanostructures. Furthermore, accompanying the efficient color filtering characteristics, the optical absorption of TOSCs is improved due to the marked suppression of transmission loss at the off-resonance wavelengths and the increased light trapping in TOSCs. The mechanisms of light guiding in the photoactive layer and broadband backward scattering from the metallic nanostructures are identified as making an essential contribution to the improved light harvesting. By enabling efficient color control and high efficiency simultaneously, this approach holds great promise for future versatile photovoltaic energy utilization. PMID:25391756

  5. Theoretical design of multi-colored semi-transparent organic solar cells with both efficient color filtering and light harvesting.

    PubMed

    Wen, Long; Chen, Qin; Sun, Fuhe; Song, Shichao; Jin, Lin; Yu, Yan

    2014-11-13

    Solar cells incorporating multi-coloring capability not only offer an aesthetic solution to bridge the gap between solar modules and building decorations but also open up the possibility of self-powered colorful displays. In this paper, we propose a multi-colored semi-transparent organic solar cell (TOSC) design containing metallic nanostructures with both high color purity and high efficiency, based on theoretical considerations. By employing the guided mode resonance effect, the multi-colored TOSC behaves like an efficient color filter that selectively transmits light of the desired wavelengths and generates electricity with light of other wavelengths. Broad-range coloring and luminosity adjustment of the transmitted light can be achieved by simply tuning the period and the duty cycle of the metallic nanostructures. Furthermore, accompanying the efficient color filtering characteristics, the optical absorption of TOSCs is improved due to the marked suppression of transmission loss at the off-resonance wavelengths and the increased light trapping in TOSCs. The mechanisms of light guiding in the photoactive layer and broadband backward scattering from the metallic nanostructures are identified as making an essential contribution to the improved light harvesting. By enabling efficient color control and high efficiency simultaneously, this approach holds great promise for future versatile photovoltaic energy utilization.

  6. Fast data reconstructed method of Fourier transform imaging spectrometer based on multi-core CPU

    NASA Astrophysics Data System (ADS)

    Yu, Chunchao; Du, Debiao; Xia, Zongze; Song, Li; Zheng, Weijian; Yan, Min; Lei, Zhenggang

    2017-10-01

    An imaging spectrometer can acquire a two-dimensional spatial image and a one-dimensional spectrum at the same time, which is highly useful in color and spectral measurements, true color image synthesis, military reconnaissance, and so on. In order to realize fast reconstruction of Fourier transform imaging spectrometer data, this paper designs an optimized reconstruction algorithm using OpenMP parallel computing technology, which was further applied to the HyperSpectral Imager of the `HJ-1' Chinese satellite. The results show that the method based on multi-core parallel computing can fully exploit the multi-core CPU hardware resources and significantly improve the efficiency of the spectrum reconstruction processing. If the technology is applied to parallel computing on workstations with more cores, it should be possible to complete real-time processing of Fourier transform imaging spectrometer data on a single computer.

  7. Guided color consistency optimization for image mosaicking

    NASA Astrophysics Data System (ADS)

    Xie, Renping; Xia, Menghan; Yao, Jian; Li, Li

    2018-01-01

    This paper studies the problem of color consistency correction for sequential images with diverse color characteristics. Existing algorithms try to adjust all images to minimize color differences among images under a unified energy framework; however, the results are prone to presenting a consistent but unnatural appearance when the color difference between images is large and diverse. In our approach, this problem is addressed effectively by providing a guided initial solution for the global consistency optimization, which avoids converging to a meaningless integrated solution. First, to obtain reliable intensity correspondences in the overlapping regions between image pairs, we propose a histogram extreme point matching algorithm that is robust to image geometrical misalignment to some extent. In the absence of extra reference information, the guided initial solution is learned from the major tone of the original images by selecting an image subset as the reference, whose color characteristics are transferred to the others via paths obtained from graph analysis. The final results of the global adjustment thus take on a consistent color similar to the appearance of the reference image subset. Several groups of convincing experiments on both a synthetic dataset and challenging real ones sufficiently demonstrate that the proposed approach achieves results as good as or better than the state-of-the-art approaches.

  8. Color camera computed tomography imaging spectrometer for improved spatial-spectral image accuracy

    NASA Technical Reports Server (NTRS)

    Wilson, Daniel W. (Inventor); Bearman, Gregory H. (Inventor); Johnson, William R. (Inventor)

    2011-01-01

    Computed tomography imaging spectrometers ("CTIS"s) having color focal plane array detectors are provided. The color FPA detector may comprise a digital color camera including a digital image sensor, such as a Foveon X3.RTM. digital image sensor or a Bayer color filter mosaic. In another embodiment, the CTIS includes a pattern imposed either directly on the object scene being imaged or at the field stop aperture. The use of a color FPA detector and the pattern improves the accuracy of the captured spatial and spectral information.

  9. Extending Whole Slide Imaging: Color Darkfield Internal Reflection Illumination (DIRI) for Biological Applications

    PubMed Central

    Namiki, Kana; Miyawaki, Atsushi; Ishikawa, Takuji

    2017-01-01

    Whole slide imaging (WSI) is a useful tool for multi-modal imaging, and in our work, we have often combined WSI with darkfield microscopy. However, traditional darkfield microscopy cannot use a single condenser to support high- and low-numerical-aperture objectives, which limits the modality of WSI. To overcome this limitation, we previously developed a darkfield internal reflection illumination (DIRI) microscope using white light-emitting diodes (LEDs). Although the developed DIRI is useful for biological applications, substantial problems remain to be resolved. In this study, we propose a novel illumination technique called color DIRI. The use of three-color LEDs dramatically improves the capability of the system, such that color DIRI (1) enables optimization of the illumination color; (2) can be combined with an oil objective lens; (3) can produce fluorescence excitation illumination; (4) can adjust the wavelength of light to avoid cell damage or reactions; and (5) can be used as a photostimulator. These results clearly illustrate that the proposed color DIRI can significantly extend WSI modalities for biological applications. PMID:28085892

  10. Monolithic multi-color light emission/detection device

    DOEpatents

    Wanlass, Mark W.

    1995-01-01

    A single-crystal, monolithic, tandem, multi-color optical transceiver device is described, including (a) an InP substrate having upper and lower surfaces, (b) a first junction on the upper surface of the InP substrate, (c) a second junction on the first junction. The first junction is preferably GaInAsP of defined composition, and the second junction is preferably InP. The two junctions are lattice matched. The second junction has a larger energy band gap than the first junction. Additional junctions having successively larger energy band gaps may be included. The device is capable of simultaneous and distinct multi-color emission and detection over a single optical fiber.

  11. Monolithic multi-color light emission/detection device

    DOEpatents

    Wanlass, M.W.

    1995-02-21

    A single-crystal, monolithic, tandem, multi-color optical transceiver device is described, including (a) an InP substrate having upper and lower surfaces, (b) a first junction on the upper surface of the InP substrate, (c) a second junction on the first junction. The first junction is preferably GaInAsP of defined composition, and the second junction is preferably InP. The two junctions are lattice matched. The second junction has a larger energy band gap than the first junction. Additional junctions having successively larger energy band gaps may be included. The device is capable of simultaneous and distinct multi-color emission and detection over a single optical fiber. 5 figs.

  12. How Phoenix Creates Color Images (Animation)

    NASA Technical Reports Server (NTRS)

    2008-01-01


    This simple animation shows how a color image is made from images taken by Phoenix.

    The Surface Stereo Imager captures the same scene with three different filters. The images are sent to Earth in black and white and the color is added by mission scientists.

    By contrast, consumer digital cameras and cell phones have filters built in and do all of the color processing within the camera itself.
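    The three-filter process above can be sketched as stacking the grayscale frames into an RGB composite. This is a simplification: the actual mission pipeline also calibrates and registers the frames before combining them.

```python
# Sketch: combine three single-filter grayscale frames into one RGB image.
def compose_rgb(red_frame, green_frame, blue_frame):
    """Each frame: 2D list of gray levels -> 2D list of (r, g, b) tuples."""
    return [[(r, g, b) for r, g, b in zip(rr, gr, br)]
            for rr, gr, br in zip(red_frame, green_frame, blue_frame)]

color_img = compose_rgb([[1]], [[2]], [[3]])  # one-pixel example
```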

    The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  13. Visual wetness perception based on image color statistics.

    PubMed

    Sawayama, Masataka; Adelson, Edward H; Nishida, Shin'ya

    2017-05-01

    Color vision provides humans and animals with the abilities to discriminate colors based on the wavelength composition of light and to determine the location and identity of objects of interest in cluttered scenes (e.g., ripe fruit among foliage). However, we argue that color vision can inform us about much more than color alone. Since a trichromatic image carries more information about the optical properties of a scene than a monochromatic image does, color can help us recognize complex material qualities. Here we show that human vision uses color statistics of an image for the perception of an ecologically important surface condition (i.e., wetness). Psychophysical experiments showed that overall enhancement of chromatic saturation, combined with a luminance tone change that increases the darkness and glossiness of the image, tended to make dry scenes look wetter. Theoretical analysis along with image analysis of real objects indicated that our image transformation, which we call the wetness enhancing transformation, is consistent with actual optical changes produced by surface wetting. Furthermore, we found that the wetness enhancing transformation operator was more effective for the images with many colors (large hue entropy) than for those with few colors (small hue entropy). The hue entropy may be used to separate surface wetness from other surface states having similar optical properties. While surface wetness and surface color might seem to be independent, there are higher order color statistics that can influence wetness judgments, in accord with the ecological statistics. The present findings indicate that the visual system uses color image statistics in an elegant way to help estimate the complex physical status of a scene.
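    The "wetness enhancing transformation" described above can be sketched on a single HSV pixel: boost chromatic saturation and apply a darkening tone curve to the value channel. The specific gain and gamma below are illustrative assumptions; the paper's exact curves are not given in this abstract.

```python
# Hedged sketch of a wetness-enhancing transform on one HSV pixel in [0, 1]:
# overall chroma boost plus a luminance tone curve that darkens mid-tones.
def wet_transform(h, s, v, sat_gain=1.5, gamma=1.8):
    s2 = min(1.0, s * sat_gain)  # enhance chromatic saturation (clipped at 1)
    v2 = v ** gamma              # gamma > 1 darkens, mimicking a wet surface
    return h, s2, v2

h, s, v = wet_transform(0.3, 0.4, 0.5)
```

    Hue is left untouched, consistent with the paper's point that wetness judgments ride on saturation and luminance statistics rather than on hue shifts.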

  14. Adaptive Morphological Feature-Based Object Classifier for a Color Imaging System

    NASA Technical Reports Server (NTRS)

    McDowell, Mark; Gray, Elizabeth

    2009-01-01

    Utilizing a Compact Color Microscope Imaging System (CCMIS), a unique algorithm has been developed that combines human intelligence with machine vision techniques to produce an autonomous microscope tool for biomedical, industrial, and space applications. This technique is based on an adaptive, morphological, feature-based mapping function comprising 24 mutually inclusive feature metrics that are used to characterize complex cells/objects derived from color image analysis. The features include: Area (total number of non-background pixels inside and including the perimeter); Bounding Box (smallest rectangle that bounds an object); CenterX (x-coordinate of the intensity-weighted center of mass of an entire object or multi-object blob); CenterY (y-coordinate of the intensity-weighted center of mass of an entire object or multi-object blob); Circumference (a measure of circumference that accounts for diagonally joined neighboring pixels being farther apart than horizontally or vertically joined pixels); Elongation (a measure of particle elongation given as a number between 0 and 1; if equal to 1, the particle bounding box is square, and as the value decreases from 1 the particle becomes more elongated); Ext_vector (extremal vector); Major Axis (the length of the major axis of the smallest ellipse encompassing an object); Minor Axis (the length of the minor axis of that smallest ellipse); Partial (indicates whether the particle extends beyond the field of view); Perimeter Points (the points that make up a particle's perimeter); Roundness ((4(pi) x area)/perimeter(squared), a measure of object roundness, or compactness, given as a value between 0 and 1, where the greater the ratio, the rounder the object); Thin in Center (determines whether an object becomes thin in the center, i.e., figure-eight-shaped); Theta (orientation of the major axis); and smoothness and color metrics for each color component (red, green, blue).
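    Two of the shape metrics above can be shown in isolation, using the roundness formula quoted in the abstract; the elongation definition here (bounding-box aspect ratio in (0, 1]) is an assumption consistent with the description.

```python
import math

# Roundness as quoted: 4*pi*area / perimeter**2 (1.0 for a perfect circle).
def roundness(area, perimeter):
    return 4 * math.pi * area / perimeter ** 2

# Assumed elongation metric: bounding-box aspect ratio, 1.0 for a square box.
def elongation(bbox_w, bbox_h):
    return min(bbox_w, bbox_h) / max(bbox_w, bbox_h)
```

    A unit circle (area pi, perimeter 2*pi) scores exactly 1.0, confirming the formula's normalization.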

  15. Fast Fourier transform-based Retinex and alpha-rooting color image enhancement

    NASA Astrophysics Data System (ADS)

    Grigoryan, Artyom M.; Agaian, Sos S.; Gonzales, Analysa M.

    2015-05-01

    Efficiency in terms of both accuracy and speed is highly important in any system, especially when it comes to image processing. The purpose of this paper is to improve an existing implementation of multi-scale retinex (MSR) by utilizing the fast Fourier transform (FFT) within the illumination estimation step of the algorithm, improving the speed at which Gaussian blurring filters are applied to the original input image. In addition, alpha-rooting can be used as a separate technique to achieve a sharper image; its results are fused with those of the retinex algorithm to achieve the best image possible, as shown by the values of the considered color image enhancement measure (EMEC).
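    Alpha-rooting itself is simple to sketch: raise each Fourier magnitude to a power alpha < 1 while keeping the phase, which compresses the dominant low-frequency magnitudes and so relatively boosts high frequencies, sharpening the signal. The 1-D direct DFT below is purely illustrative; a real implementation would use a 2-D FFT as the paper does.

```python
import cmath

# Direct O(n^2) DFT/IDFT pair, adequate for a tiny demonstration signal.
def dft(x):
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * i * k / n) for k in range(n))
            for i in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[i] * cmath.exp(2j * cmath.pi * i * k / n) for i in range(n)).real / n
            for k in range(n)]

# Alpha-rooting: |F|^alpha with the original phase, then inverse transform.
def alpha_root(x, alpha=0.9):
    return idft([abs(c) ** alpha * cmath.exp(1j * cmath.phase(c)) if abs(c) > 0 else c
                 for c in dft(x)])

x = [1.0, 2.0, 3.0, 4.0]
y = alpha_root(x, alpha=1.0)  # alpha = 1 should reproduce the input
```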

  16. Citrus fruit recognition using color image analysis

    NASA Astrophysics Data System (ADS)

    Xu, Huirong; Ying, Yibin

    2004-10-01

    An algorithm for the automatic recognition of citrus fruit on the tree was developed. Citrus fruits differ in color from the leaf and branch portions of the scene. Fifty-three color images of natural citrus-grove scenes were digitized and analyzed for red, green, and blue (RGB) color content. The color characteristics of target surfaces (fruits, leaves, or branches) were extracted using the region of interest (ROI) tool. Several types of contrast color indices were designed and tested. In this study, the fruit image was enhanced using the (R-B) contrast color index, because results show that the fruit has the highest color difference among the objects in the image. A dynamic threshold function was derived from this color model and used to distinguish citrus fruit from the background. The results show that the algorithm worked well under frontlighting or backlighting conditions. However, misclassifications occur when the fruit or the background is under brighter sunlight.
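    The (R-B) contrast color index can be sketched as a per-pixel test: orange fruit pixels have a large red-minus-blue difference, while green foliage does not. The fixed threshold below is a stand-in assumption; the paper derives a dynamic threshold function instead.

```python
# Sketch of (R-B) index segmentation with an illustrative fixed threshold.
def fruit_mask(rgb_img, threshold=60):
    """rgb_img: 2D list of (r, g, b) -> 2D list of booleans (True = fruit)."""
    return [[(r - b) > threshold for r, g, b in row] for row in rgb_img]

# One orange-ish pixel and one leaf-green pixel.
mask = fruit_mask([[(220, 120, 40), (60, 130, 70)]])
```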

  17. A robust color image fusion for low light level and infrared images

    NASA Astrophysics Data System (ADS)

    Liu, Chao; Zhang, Xiao-hui; Hu, Qing-ping; Chen, Yong-kang

    2016-09-01

    Low light level and infrared color fusion technology has achieved great success in the field of night vision. The technology is designed to make the hot targets of the fused image pop out in more intense colors, to represent the background details with a color appearance close to nature, and to improve the ability to discover, detect, and identify targets. Low light level images are very noisy under low illumination, and existing color fusion methods are easily affected by noise in the low light level channel. To be explicit, when the low light level image noise is very large, the quality of the fused image decreases significantly, and targets in the infrared image may even be submerged by the noise. This paper proposes an adaptive color night vision technique in which noise evaluation parameters of the low light level image are introduced into the fusion process, improving the robustness of the color fusion. The color fusion results remain very good in low-light situations, which shows that this method can effectively improve the quality of the low light level and infrared fused image under low illumination conditions.

  18. Image quality evaluation of color displays using a Foveon color camera

    NASA Astrophysics Data System (ADS)

    Roehrig, Hans; Dallas, William J.; Fan, Jiahua; Krupinski, Elizabeth A.; Redford, Gary R.; Yoneda, Takahiro

    2007-03-01

    This paper presents preliminary data on the use of a color camera for the evaluation of Quality Control (QC) and Quality Analysis (QA) of a color LCD in comparison with that of a monochrome LCD. The color camera is a C-MOS camera with a pixel size of 9 µm and a pixel matrix of 2268 × 1512 × 3. The camera uses a sensor that has co-located pixels for all three primary colors. The imaging geometry used mostly was 12 × 12 camera pixels per display pixel, even though it appears that an imaging geometry of 17.6 might provide more accurate results. The color camera is used as an imaging colorimeter, where each camera pixel is calibrated to serve as a colorimeter. This capability permits the camera to determine the chromaticity of the color LCD at different sections of the display. After color calibration with a CS-200 colorimeter, the color coordinates of the display's primaries determined from the camera's luminance response are very close to those found with the CS-200. Only the color coordinates of the display's white point were in error. The Modulation Transfer Function (MTF) as well as noise in terms of the Noise Power Spectrum (NPS) were evaluated for both LCDs. The horizontal MTFs of both displays have a larger negative slope than the vertical MTFs, indicating that the horizontal MTFs are poorer than the vertical MTFs. However, the modulations at the Nyquist frequency seem lower for the color LCD than for the monochrome LCD. These results contradict simulations regarding MTFs in the vertical direction. The spatial noise of the color display in both directions is larger than that of the monochrome display. Attempts were also made to separate the total noise into spatial and temporal noise by subtracting images taken at exactly the same exposure. Temporal noise appears to be significantly lower than spatial noise.

  19. Spatial imaging in color and HDR: prometheus unchained

    NASA Astrophysics Data System (ADS)

    McCann, John J.

    2013-03-01

    The Human Vision and Electronic Imaging Conferences (HVEI) at the IS&T/SPIE Electronic Imaging meetings have brought together research in the fundamentals of both vision and digital technology. This conference has incorporated many color disciplines that have contributed to the theory and practice of today's imaging: color constancy, models of vision, digital output, high-dynamic-range imaging, and the understanding of perceptual mechanisms. Before digital imaging, silver halide color was a pixel-based mechanism. Color films are closely tied to colorimetry, the science of matching pixels in a black surround. The quanta catch of the sensitized silver salts determines the amount of colored dyes in the final print. The rapid expansion of digital imaging over the past 25 years has eliminated the limitations of using small local regions in forming images. Spatial interactions can now generate images more like vision. Since the 1950's, neurophysiology has shown that post-receptor neural processing is based on spatial interactions. These results reinforced the findings of 19th century experimental psychology. This paper reviews the role of HVEI in color, emphasizing the interaction of research on vision and the new algorithms and processes made possible by electronic imaging.

  20. Local adaptive contrast enhancement for color images

    NASA Astrophysics Data System (ADS)

    Dijk, Judith; den Hollander, Richard J. M.; Schavemaker, John G. M.; Schutte, Klamer

    2007-04-01

    A camera or display usually has a smaller dynamic range than the human eye. For this reason, objects that can be detected by the naked eye may not be visible in recorded images. Lighting is an important factor here; improper local lighting impairs the visibility of details or even entire objects. When a human is observing a scene with different kinds of lighting, such as shadows, he will need to see details in both the dark and the light parts of the scene. For grey-value images such as IR imagery, algorithms have been developed in which the local contrast of the image is enhanced using local adaptive techniques. In this paper, we present how such algorithms can be adapted so that details in color images are enhanced while color information is retained. We propose to apply contrast enhancement to color images by applying a grey-value contrast enhancement algorithm to the luminance channel of the color signal. The color coordinates of the signal remain the same. Care is taken that the saturation change is not too high. Gamut mapping is performed so that the output can be displayed on a monitor. The proposed technique can, for instance, be used by operators monitoring the movements of people in order to detect suspicious behavior. To do this effectively, specific individuals should be both easy to recognize and easy to track. This requires optimal local contrast, and is sometimes much helped by color when tracking a person with colored clothes. In such applications, enhanced local contrast in color images leads to more effective monitoring.
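A luminance-channel enhancement of the kind described above can be sketched as follows. This is a minimal illustration: global histogram equalization stands in for the paper's local adaptive algorithm, and the per-pixel gain applied to all three channels keeps the chromaticity fixed; the function name is ours, not the authors'.

```python
import numpy as np

def enhance_luminance(rgb):
    """Equalize luminance while retaining chromaticity (a minimal sketch).

    rgb: float array in [0, 1], shape (H, W, 3).
    """
    # BT.601 luma as the luminance channel
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    # Histogram-equalize the luminance channel
    hist, bins = np.histogram(y.ravel(), bins=256, range=(0.0, 1.0))
    cdf = hist.cumsum().astype(np.float64)
    cdf /= cdf[-1]
    y_eq = np.interp(y.ravel(), bins[:-1], cdf).reshape(y.shape)
    # Scale each channel by the same luminance gain so channel
    # ratios (hue/saturation) stay fixed, up to clipping
    gain = np.where(y > 1e-6, y_eq / np.maximum(y, 1e-6), 1.0)
    return np.clip(rgb * gain[..., None], 0.0, 1.0)
```

Because R, G and B are scaled by the same gain, the channel ratios are unchanged wherever no clipping occurs; a production version would additionally bound the gain to limit saturation changes, as the abstract notes.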

  1. MUNSELL COLOR ANALYSIS OF LANDSAT COLOR-RATIO-COMPOSITE IMAGES OF LIMONITIC AREAS IN SOUTHWEST NEW MEXICO.

    USGS Publications Warehouse

    Kruse, Fred A.

    1984-01-01

    Green areas on Landsat 4/5 - 4/6 - 6/7 (red - blue - green) color-ratio-composite (CRC) images represent limonite on the ground. Color variation on such images was analyzed to determine the causes of the color differences within and between the green areas. Digital transformation of the CRC data into the modified cylindrical Munsell color coordinates - hue, value, and saturation - was used to correlate image color characteristics with properties of surficial materials. The amount of limonite visible to the sensor is the primary cause of color differences in green areas on the CRCs. Vegetation density is a secondary cause of color variation of green areas on Landsat CRC images. Digital color analysis of Landsat CRC images can be used to map unknown areas. Color variations of green pixels allow discrimination among limonitic bedrock, nonlimonitic bedrock, nonlimonitic alluvium, and limonitic alluvium.

  2. Color standardization and optimization in whole slide imaging.

    PubMed

    Yagi, Yukako

    2011-03-30

    Standardization and validation of the color displayed by digital slides is an important aspect of digital pathology implementation. While the most common reason for color variation is variance in the protocols and practices of the histology lab, the displayed color can also be affected by variation in capture parameters (for example, illumination and filters), image processing and display factors in the digital systems themselves. We have been developing techniques for color validation and optimization along two paths. The first is based on two standard slides that are scanned and displayed by the imaging system in question. In this approach, one slide is embedded with nine filters with colors selected especially for H&E stained slides (resembling a tiny Macbeth color chart); the specific colors of the nine filters were determined in our previous study and modified for whole slide imaging (WSI). The other slide is an H&E stained mouse embryo. Both of these slides were scanned and the displayed images were compared to a standard. The second approach is based on our previous multispectral imaging research. As a first step, the two-slide method (above) was used to identify inaccurate color display and its causes, and to understand the importance of accurate color in digital pathology. We have also improved the multispectral-based algorithm for more consistent results in stain standardization. In the near future, the results of the two-slide and multispectral techniques can be combined and will be widely available. We have been conducting a series of research and development projects to improve image quality and to establish image quality standardization. This paper discusses one of the most important aspects of image quality - color.

  3. New Colors for Histology: Optimized Bivariate Color Maps Increase Perceptual Contrast in Histological Images.

    PubMed

    Kather, Jakob Nikolas; Weis, Cleo-Aron; Marx, Alexander; Schuster, Alexander K; Schad, Lothar R; Zöllner, Frank Gerrit

    2015-01-01

    Accurate evaluation of immunostained histological images is required for reproducible research in many different areas and forms the basis of many clinical decisions. The quality and efficiency of histopathological evaluation is limited by the information content of a histological image, which is primarily encoded as perceivable contrast differences between objects in the image. However, the colors of chromogen and counterstain used for histological samples are not always optimally distinguishable, even under optimal conditions. In this study, we present a method to extract the bivariate color map inherent in a given histological image and to retrospectively optimize this color map. We use a novel, unsupervised approach based on color deconvolution and principal component analysis to show that the commonly used blue and brown color hues in Hematoxylin-3,3'-Diaminobenzidine (DAB) images are poorly suited for human observers. We then demonstrate that it is possible to construct improved color maps according to objective criteria and that these color maps can be used to digitally re-stain histological images. To validate whether this procedure improves distinguishability of objects and background in histological images, we re-stain phantom images and N = 596 large histological images of immunostained samples of human solid tumors. We show that perceptual contrast is improved by a factor of 2.56 in phantom images and up to a factor of 2.17 in sets of histological tumor images. Thus, we provide an objective and reliable approach to measure object distinguishability in a given histological image and to maximize visual information available to a human observer. This method could easily be incorporated in digital pathology image viewing systems to improve accuracy and efficiency in research and diagnostics.

  4. Example-Based Image Colorization Using Locality Consistent Sparse Representation.

    PubMed

    Bo Li; Fuchen Zhao; Zhuo Su; Xiangguo Liang; Yu-Kun Lai; Rosin, Paul L

    2017-11-01

    Image colorization aims to produce a natural looking color image from a given gray-scale image, which remains a challenging problem. In this paper, we propose a novel example-based image colorization method exploiting a new locality consistent sparse representation. Given a single reference color image, our method automatically colorizes the target gray-scale image by sparse pursuit. For efficiency and robustness, our method operates at the superpixel level. We extract low-level intensity features, mid-level texture features, and high-level semantic features for each superpixel, which are then concatenated to form its descriptor. The collection of feature vectors for all the superpixels from the reference image composes the dictionary. We formulate colorization of target superpixels as a dictionary-based sparse reconstruction problem. Inspired by the observation that superpixels with similar spatial location and/or feature representation are likely to match spatially close regions from the reference image, we further introduce a locality promoting regularization term into the energy formulation, which substantially improves the matching consistency and subsequent colorization results. Target superpixels are colorized based on the chrominance information from the dominant reference superpixels. Finally, to further improve coherence while preserving sharpness, we develop a new edge-preserving filter for chrominance channels with the guidance from the target gray-scale image. To the best of our knowledge, this is the first work on sparse pursuit image colorization from single reference images. Experimental results demonstrate that our colorization method outperforms the state-of-the-art methods, both visually and quantitatively using a user study.

  5. Clinical skin imaging using color spatial frequency domain imaging (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Yang, Bin; Lesicko, John; Moy, Austin J.; Reichenberg, Jason; Tunnell, James W.

    2016-02-01

    Skin diseases are typically associated with underlying biochemical and structural changes relative to normal tissue, which alter the optical properties of the skin lesions, such as tissue absorption and scattering. Although widely used in dermatology clinics, conventional dermatoscopes do not have the ability to selectively image tissue absorption and scattering, which may limit their diagnostic power. Here we report a novel clinical skin imaging technique called color spatial frequency domain imaging (cSFDI), which enhances contrast by rendering a color spatial frequency domain (SFD) image at high spatial frequency. Moreover, by tuning the spatial frequency, we can obtain both absorption-weighted and scattering-weighted images. We developed a handheld imaging system specifically for clinical skin imaging. The flexible configuration of the system allows for better access to skin lesions in hard-to-reach regions. A total of 48 lesions from 31 patients were imaged under 470 nm, 530 nm and 655 nm illumination at a spatial frequency of 0.6 mm^(-1). The SFD reflectance images at 470 nm, 530 nm and 655 nm were assigned to the blue (B), green (G) and red (R) channels to render a color SFD image. Our results indicated that color SFD images at f = 0.6 mm^(-1) revealed properties that were not seen in standard color images: structural features were enhanced and absorption features were reduced, which helped to identify the sources of the contrast. This imaging technique provides additional insight into skin lesions and may better assist clinical diagnosis.
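The channel-assignment step described above is simple to illustrate; the sketch below (hypothetical function name, not from the paper) stacks the three single-wavelength SFD reflectance maps into an RGB rendering, normalizing jointly so the relative channel weights are preserved:

```python
import numpy as np

def render_color_sfd(r655, g530, b470):
    """Stack single-wavelength SFD reflectance maps into an RGB image.

    Each input is a 2D reflectance map at the named wavelength.
    """
    rgb = np.stack([r655, g530, b470], axis=-1)
    # Joint normalization: one global scale keeps the relative
    # weighting of the three wavelength channels intact
    return np.clip(rgb / max(float(rgb.max()), 1e-9), 0.0, 1.0)
```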

  6. Color image fusion for concealed weapon detection

    NASA Astrophysics Data System (ADS)

    Toet, Alexander

    2003-09-01

    Recent advances in passive and active imaging sensor technology offer the potential to detect weapons that are concealed underneath a person's clothing or carried along in bags. Although the concealed weapons can sometimes easily be detected, it can be difficult to perceive their context, due to the non-literal nature of these images. Especially for dynamic crowd surveillance purposes it may be impossible to rapidly assess with certainty which individual in the crowd is carrying the observed weapon. Sensor fusion is an enabling technology that may be used to solve this problem. Through fusion, the signal of the sensor that depicts the weapon can be displayed in the context provided by a sensor of a different modality. We propose an image fusion scheme in which non-literal imagery is fused with standard color images such that the result clearly displays the observed weapons in the context of the original color image. The procedure is such that the relevant contrast details from the non-literal image are transferred to the color image without altering the original color distribution of that image. The result is a natural-looking color image that fluently combines all details from both input sources. When an observer performing a dynamic crowd surveillance task detects a weapon in the scene, he will also be able to quickly determine which person in the crowd is actually carrying it (e.g., "the man with the red T-shirt and blue jeans"). The method is illustrated by the fusion of thermal 8-12 μm imagery with standard RGB color images.

  7. The Artist, the Color Copier, and Digital Imaging.

    ERIC Educational Resources Information Center

    Witte, Mary Stieglitz

    The impact that color-copying technology and digital imaging have had on art, photography, and design is explored. Color copiers have provided new opportunities for direct and spontaneous image making and the potential for new transformations in art. The current generation of digital color copiers permits new directions in imaging, but the…

  8. New Colors for Histology: Optimized Bivariate Color Maps Increase Perceptual Contrast in Histological Images

    PubMed Central

    Kather, Jakob Nikolas; Weis, Cleo-Aron; Marx, Alexander; Schuster, Alexander K.; Schad, Lothar R.; Zöllner, Frank Gerrit

    2015-01-01

    Background Accurate evaluation of immunostained histological images is required for reproducible research in many different areas and forms the basis of many clinical decisions. The quality and efficiency of histopathological evaluation is limited by the information content of a histological image, which is primarily encoded as perceivable contrast differences between objects in the image. However, the colors of chromogen and counterstain used for histological samples are not always optimally distinguishable, even under optimal conditions. Methods and Results In this study, we present a method to extract the bivariate color map inherent in a given histological image and to retrospectively optimize this color map. We use a novel, unsupervised approach based on color deconvolution and principal component analysis to show that the commonly used blue and brown color hues in Hematoxylin—3,3’-Diaminobenzidine (DAB) images are poorly suited for human observers. We then demonstrate that it is possible to construct improved color maps according to objective criteria and that these color maps can be used to digitally re-stain histological images. Validation To validate whether this procedure improves distinguishability of objects and background in histological images, we re-stain phantom images and N = 596 large histological images of immunostained samples of human solid tumors. We show that perceptual contrast is improved by a factor of 2.56 in phantom images and up to a factor of 2.17 in sets of histological tumor images. Context Thus, we provide an objective and reliable approach to measure object distinguishability in a given histological image and to maximize visual information available to a human observer. This method could easily be incorporated in digital pathology image viewing systems to improve accuracy and efficiency in research and diagnostics. PMID:26717571

  9. Multiple Auto-Adapting Color Balancing for Large Number of Images

    NASA Astrophysics Data System (ADS)

    Zhou, X.

    2015-04-01

    This paper presents a powerful technology for color balancing between images. It works not only for a small number of images but also for an unlimited number of images. Multiple adaptive methods are used. To obtain a color-seamless mosaic dataset, local color is adjusted adaptively towards the target color. Local statistics of the source images are computed based on a so-called adaptive dodging window. The adaptive target colors are statistically computed according to multiple target models. The gamma function is derived from the adaptive target and the adaptive source local statistics, and is applied to the source images to obtain the color-balanced output images. Five target color surface models are proposed: color point (or single color), color grid, and 1st-, 2nd- and 3rd-order 2D polynomials. Least-squares fitting is used to obtain the polynomial target color surfaces. Target color surfaces are computed automatically based on all source images or on an external target image. Some special objects such as water and snow are filtered out by a percentage cut or a given mask. The performance is extremely fast and supports on-the-fly color balancing for large numbers of images (possibly hundreds of thousands). The detailed algorithm and formulae are described, and rich examples including big mosaic datasets (e.g., one containing 36,006 images) are given. Excellent results and performance are presented, showing that this technology can be used successfully on various imagery to obtain color-seamless mosaics. The algorithm has been used successfully in Esri ArcGIS.
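The gamma derivation can be sketched from the description above: with intensities normalized to (0, 1), the gamma that maps a local source mean onto the adaptive target satisfies target = source ** gamma. This is our reading of the abstract, not the published formula, and the function names are hypothetical:

```python
import numpy as np

def dodging_gamma(source_mean, target_mean):
    # Solve target = source ** gamma for gamma,
    # with both means normalized to (0, 1)
    return np.log(target_mean) / np.log(source_mean)

def apply_gamma(tile, gamma):
    # Apply the per-window gamma to a normalized image tile
    return np.clip(tile, 1e-6, 1.0) ** gamma
```

For example, a dark window with local mean 0.25 and target 0.5 yields gamma = 0.5, brightening the window so its mean lands on the target.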

  10. Intricate Plasma-Scattered Images and Spectra of Focused Femtosecond Laser Pulses

    PubMed Central

    Ooi, C. H. Raymond; Talib, Md. Ridzuan

    2016-01-01

    We report on some interesting phenomena in the focusing and scattering of femtosecond laser pulses in free space that provide insights into intense laser-plasma interactions. The scattered image in the far field is analyzed and the connection with the observed structure of the plasma at the focus is discussed. We explain the physical mechanisms behind the changes in the colorful and intricate image formed by scattering from the plasma for different compressions, as well as orientations of the plano-convex lens. The laser power does not show a significant effect on the images. Pulse repetition rates above 500 Hz can affect the image through slow dynamics. The spectrum of each color in the image shows oscillatory peaks due to interference of delayed pulses that correlate with the plasma length. Spectral lines of atomic species are identified and new peaks are observed through the white light emitted by the plasma spot. We find that an Ar gas jet can brighten the white light of the plasma spot and produce high-resolution spectral peaks. The intricate image is found to be extremely sensitive, which is useful for applications in sensing microscale objects. PMID:27571644

  11. Mobile Image Based Color Correction Using Deblurring

    PubMed Central

    Wang, Yu; Xu, Chang; Boushey, Carol; Zhu, Fengqing; Delp, Edward J.

    2016-01-01

    Dietary intake, the process of determining what someone eats during the course of a day, provides valuable insights for mounting intervention programs for the prevention of many chronic diseases such as obesity and cancer. The goal of the Technology Assisted Dietary Assessment (TADA) System, developed at Purdue University, is to automatically identify and quantify foods and beverages consumed, using food images acquired with a mobile device. Color correction serves as a critical step to ensure accurate food identification and volume estimation. We make use of a specifically designed color checkerboard (i.e., a fiducial marker) to calibrate the imaging system so that the variations in food appearance under different lighting conditions can be determined. In this paper, we propose an image quality enhancement technique that combines image deblurring and color correction. The contribution consists of introducing an automatic camera-shake removal method using a saliency map and improving the polynomial color correction model using the LMS color space. PMID:28572697

  12. Mobile image based color correction using deblurring

    NASA Astrophysics Data System (ADS)

    Wang, Yu; Xu, Chang; Boushey, Carol; Zhu, Fengqing; Delp, Edward J.

    2015-03-01

    Dietary intake, the process of determining what someone eats during the course of a day, provides valuable insights for mounting intervention programs for the prevention of many chronic diseases such as obesity and cancer. The goal of the Technology Assisted Dietary Assessment (TADA) System, developed at Purdue University, is to automatically identify and quantify foods and beverages consumed, using food images acquired with a mobile device. Color correction serves as a critical step to ensure accurate food identification and volume estimation. We make use of a specifically designed color checkerboard (i.e., a fiducial marker) to calibrate the imaging system so that the variations in food appearance under different lighting conditions can be determined. In this paper, we propose an image quality enhancement technique that combines image deblurring and color correction. The contribution consists of introducing an automatic camera-shake removal method using a saliency map and improving the polynomial color correction model using the LMS color space.

  13. Multi-parametric monitoring of high intensity focused ultrasound (HIFU) treatment using harmonic motion imaging for focused ultrasound (HMIFU)

    NASA Astrophysics Data System (ADS)

    Hou, Gary Y.; Marquet, Fabrice; Wang, Shutao; Konofagou, Elisa

    2012-11-01

    Harmonic Motion Imaging for Focused Ultrasound (HMIFU) is a recently developed high-intensity focused ultrasound (HIFU) treatment monitoring method whose feasibility has been demonstrated in vitro and in vivo. Here, a multi-parametric study is performed to investigate both elastic and acoustics-independent viscoelastic tissue changes using the Harmonic Motion Imaging (HMI) displacement, axial compressive strain and relative phase shift during high-energy HIFU in which tissue boiling occurs. Thermal lesions (n = 18) were formed in ex vivo canine liver specimens. Two-dimensional (2D) transverse HMI displacement maps were also obtained before and after lesion formation. The same method was repeated for 10-, 20- and 30-s HIFU durations at three different acoustic powers of 8, 10, and 11 W. For the 10-, 20-, and 30-s treatment cases, a steady decrease in the displacement (-8.67±4.80, -14.44±7.77, -24.03±12.11 μm), compressive strain (-0.16±0.06, -0.71±0.30, -0.68±0.36%), and phase shift (+1.80±6.80, -15.80±9.44, -18.62±13.14°) was obtained, respectively, indicating an overall increase in relative stiffness and a decrease in the viscosity-to-stiffness ratio during heating. After treatment, 2D HMI displacement images of the thermal lesions showed an increased lesion-to-background contrast of 1.34±0.19, 1.98±0.30, 2.26±0.80 and lesion sizes of 40.95±8.06, 47.6±4.87, and 52.23±2.19 mm2, respectively, which were validated against pathology (25.17±6.99, 42.17±1.77, 47.17±3.10 mm2). Additionally, the study investigated the performance of multi-parametric monitoring under the influence of boiling and the attenuation change due to tissue boiling, where discrepancies were found, such as deteriorated displacement SNR and reversed lesion-to-background displacement contrast, indicating a possible increase in attenuation and tissue gelatification or pulverization. Despite the challenge of the boiling mechanism, the relative phase shift served as a consistent biomechanical tissue indicator.

  14. Multimodal digital color imaging system for facial skin lesion analysis

    NASA Astrophysics Data System (ADS)

    Bae, Youngwoo; Lee, Youn-Heum; Jung, Byungjo

    2008-02-01

    In dermatology, various digital imaging modalities have been used as important tools to quantitatively evaluate the treatment effect on skin lesions. Cross-polarization color images have been used to evaluate skin chromophore (melanin and hemoglobin) information, and parallel-polarization images to evaluate skin texture information. In addition, UV-A-induced fluorescence images have been widely used to evaluate various skin conditions such as sebum, keratosis, sun damage, and vitiligo. In order to maximize the evaluation efficacy for various skin lesions, it is necessary to integrate the various imaging modalities into one imaging system. In this study, we propose a multimodal digital color imaging system that provides four different digital color images: a standard color image, parallel- and cross-polarization color images, and a UV-A-induced fluorescence color image. Herein, we describe the imaging system and present examples of image analysis. By analyzing the color information and morphological features of facial skin lesions, we are able to comparably and simultaneously evaluate various skin lesions. In conclusion, we believe that the multimodal color imaging system can be utilized as an important assistive tool in dermatology.

  15. Optimal color coding for compression of true color images

    NASA Astrophysics Data System (ADS)

    Musatenko, Yurij S.; Kurashov, Vitalij N.

    1998-11-01

    In this paper we present a method that improves lossy compression of true color or other multispectral images. The essence of the method is to project the initial color planes onto the Karhunen-Loeve (KL) basis, which gives a completely decorrelated representation of the image, and to compress the basis functions instead of the planes. To do that, a new fast algorithm for true KL basis construction with low memory consumption is suggested, and our recently proposed scheme for finding the optimal losses of KL functions during compression is used. Compared to standard JPEG compression of CMYK images, the method provides a PSNR gain from 0.2 to 2 dB for typical compression ratios. Experimental results are obtained for high-resolution CMYK images. It is demonstrated that the presented scheme can work on common hardware.
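The decorrelation step can be sketched with a straightforward eigendecomposition of the inter-channel covariance. This illustrates the KL projection only; the paper's fast, low-memory basis construction and the lossy coding of the planes are not reproduced here:

```python
import numpy as np

def kl_decorrelate(img):
    """Project the color planes of img (H, W, C) onto their KL (PCA) basis."""
    h, w, c = img.shape
    x = img.reshape(-1, c).astype(np.float64)
    mean = x.mean(axis=0)
    xc = x - mean
    # Channel covariance matrix (C x C)
    cov = xc.T @ xc / xc.shape[0]
    # Its eigenvectors form the KL basis; sort by descending variance
    vals, vecs = np.linalg.eigh(cov)
    basis = vecs[:, np.argsort(vals)[::-1]]
    planes = (xc @ basis).reshape(h, w, c)
    return planes, basis, mean

def kl_reconstruct(planes, basis, mean):
    """Invert kl_decorrelate (the basis is orthonormal)."""
    h, w, c = planes.shape
    return (planes.reshape(-1, c) @ basis.T + mean).reshape(h, w, c)
```

In the decorrelated planes the inter-channel covariance is diagonal, so each plane can be compressed independently without wasting bits on redundant channel correlations.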

  16. Influence of imaging resolution on color fidelity in digital archiving.

    PubMed

    Zhang, Pengchang; Toque, Jay Arre; Ide-Ektessabi, Ari

    2015-11-01

    Color fidelity is of paramount importance in digital archiving. In this paper, the relationship between color fidelity and imaging resolution was explored by calculating the color difference of an IT8.7/2 color chart with a CIELAB color difference formula for scanning and simulation images. Microscopic spatial sampling was used in selecting the image pixels for the calculations to highlight the loss of color information. A ratio, called the relative imaging definition (RID), was defined to express the correlation between image resolution and color fidelity. The results show that in order for color differences to remain unrecognizable, the imaging resolution should be at least 10 times higher than the physical dimension of the smallest feature in the object being studied.
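For reference, the simplest CIELAB color difference (CIE76) used in this kind of analysis is the Euclidean distance between two Lab triples; the abstract does not state which CIELAB formula the paper uses, so this is illustrative:

```python
import numpy as np

def delta_e_76(lab1, lab2):
    """CIE76 color difference: Euclidean distance in CIELAB space."""
    a = np.asarray(lab1, dtype=np.float64)
    b = np.asarray(lab2, dtype=np.float64)
    return np.sqrt(np.sum((a - b) ** 2, axis=-1))
```

A difference of roughly 2.3 ΔE*ab is often cited as a just-noticeable difference, which gives a concrete threshold for "unrecognizable" color differences of the kind the paper discusses.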

  17. Color (RGB) imaging laser radar

    NASA Astrophysics Data System (ADS)

    Ferri De Collibus, M.; Bartolini, L.; Fornetti, G.; Francucci, M.; Guarneri, M.; Nuvoli, M.; Paglia, E.; Ricci, R.

    2008-03-01

    We present a new color (RGB) imaging 3D laser scanner prototype recently developed at ENEA (Italy). The sensor is based on the AM range-finding technique and uses three distinct beams (650 nm, 532 nm and 450 nm, respectively) in a monostatic configuration. During a scan, the laser beams are simultaneously swept over the target, yielding range and three separate channels (R, G and B) of reflectance information for each sampled point. This information, organized into range and reflectance images, is then elaborated to produce very high-definition color pictures and faithful, natively colored 3D models. Notable characteristics of the system are the absence of shadows in the acquired reflectance images - due to the system's monostatic setup and intrinsic self-illumination capability - and high noise rejection, achieved by using a narrow field of view and interferential filters. The system is also very accurate in range determination (relative accuracy better than 10^(-4)) at distances up to several meters. These unprecedented features make the system particularly suited to applications in the domain of cultural heritage preservation, where it could be used by conservators for examining in detail the state of degradation of frescoed walls, monuments and paintings, even at several meters of distance and in hardly accessible locations. After providing some theoretical background, we describe the general architecture and operation modes of the color 3D laser scanner, reporting and discussing first experimental results and comparing high-definition color images produced by the instrument with photographs of the same subjects taken with a Nikon D70 digital camera.

  18. Image Reconstruction for Hybrid True-Color Micro-CT

    PubMed Central

    Xu, Qiong; Yu, Hengyong; Bennett, James; He, Peng; Zainon, Rafidah; Doesburg, Robert; Opie, Alex; Walsh, Mike; Shen, Haiou; Butler, Anthony; Butler, Phillip; Mou, Xuanqin; Wang, Ge

    2013-01-01

    X-ray micro-CT is an important imaging tool for biomedical researchers. Our group has recently proposed a hybrid “true-color” micro-CT system to improve contrast resolution with lower system cost and radiation dose. The system incorporates an energy-resolved photon-counting true-color detector into a conventional micro-CT configuration, and can be used for material decomposition. In this paper, we demonstrate an interior color-CT image reconstruction algorithm developed for this hybrid true-color micro-CT system. A compressive sensing-based statistical interior tomography method is employed to reconstruct each channel in the local spectral imaging chain, where the reconstructed global gray-scale image from the conventional imaging chain serves as the initial guess. Principal component analysis was used to map the spectral reconstructions into the color space. The proposed algorithm was evaluated by numerical simulations, physical phantom experiments, and animal studies. The results confirm the merits of the proposed algorithm and demonstrate the feasibility of the hybrid true-color micro-CT system. Additionally, a “color diffusion” phenomenon was observed whereby high-quality true-color images are produced not only inside the region of interest but also in neighboring regions. It appears that harnessing this phenomenon could reduce the color detector size for a given ROI, further reducing system cost and radiation dose. PMID:22481806

  19. Animal Detection in Natural Images: Effects of Color and Image Database

    PubMed Central

    Zhu, Weina; Drewes, Jan; Gegenfurtner, Karl R.

    2013-01-01

    The visual system has a remarkable ability to extract categorical information from complex natural scenes. In order to elucidate the role of low-level image features in the recognition of objects in natural scenes, we recorded saccadic eye movements and event-related potentials (ERPs) in two experiments in which human subjects had to detect animals in previously unseen natural images. We used a new natural image database (ANID) that is free of some of the potential artifacts that have plagued the widely used COREL images. Color and grayscale images picked from the ANID and COREL databases were used. In all experiments, color images induced a greater N1 EEG component at earlier time points than grayscale images. We suggest that this influence of color on animal detection may be masked by later processes when measuring reaction times. The ERP results of the go/nogo and forced-choice tasks were similar to those reported earlier. The non-animal stimuli induced a larger N1 than animal stimuli in both the COREL and ANID databases. This result indicates that ultra-fast processing of animal images is possible irrespective of the particular database. With the ANID images, the difference between color and grayscale images is more pronounced than with the COREL images. The earlier use of the COREL images might have led to an underestimation of the contribution of color. Therefore, we conclude that the ANID image database is better suited for the investigation of the processing of natural scenes than other databases commonly used. PMID:24130744

  20. Color normalization for robust evaluation of microscopy images

    NASA Astrophysics Data System (ADS)

    Švihlík, Jan; Kybic, Jan; Habart, David

    2015-09-01

    This paper deals with color normalization of microscopy images of Langerhans islets in order to increase robustness of the islet segmentation to illumination changes. The main application is automatic quantitative evaluation of the islet parameters, useful for determining the feasibility of islet transplantation in diabetes. First, background illumination inhomogeneity is compensated and a preliminary foreground/background segmentation is performed. The color normalization itself is done in either lαβ or logarithmic RGB color spaces, by comparison with a reference image. The color-normalized images are segmented using color-based features and pixel-wise logistic regression, trained on manually labeled images. Finally, relevant statistics such as the total islet area are evaluated in order to determine the success likelihood of the transplantation.
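The logarithmic-RGB variant of the normalization can be sketched as a Reinhard-style statistics transfer against the reference image. This is our illustrative reading of the abstract; the lαβ variant and the preliminary foreground/background segmentation are omitted, and the function name is ours:

```python
import numpy as np

def lognormalize_to_reference(img, ref, eps=1e-6):
    """Match per-channel log-RGB statistics of img to those of ref.

    img, ref: float arrays in (0, 1], shape (H, W, 3).
    """
    li = np.log(np.clip(img, eps, None))
    lr = np.log(np.clip(ref, eps, None))
    # Per-channel mean and standard deviation in log space
    mu_i, sd_i = li.mean(axis=(0, 1)), li.std(axis=(0, 1)) + eps
    mu_r, sd_r = lr.mean(axis=(0, 1)), lr.std(axis=(0, 1))
    # Standardize against the source stats, re-scale to the reference stats
    out = (li - mu_i) / sd_i * sd_r + mu_r
    return np.clip(np.exp(out), 0.0, 1.0)
```

Normalizing an image against a fixed reference in this way makes the downstream color-based segmentation features comparable across acquisitions with different illumination.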

  1. Brain MR image segmentation using NAMS in pseudo-color.

    PubMed

    Li, Hua; Chen, Chuanbo; Fang, Shaohong; Zhao, Shengrong

    2017-12-01

    Image segmentation plays a crucial role in various biomedical applications. In general, the segmentation of brain Magnetic Resonance (MR) images is mainly used to represent the image with several homogeneous regions instead of pixels for surgical analysis and planning. This paper proposes a new approach for segmenting MR brain images by using pseudo-color based segmentation with the Non-symmetry and Anti-packing Model with Squares (NAMS). First, the NAMS model is presented. The model can represent the image with sub-patterns to preserve the image content while largely reducing data redundancy. Second, the key idea is to convert the original gray-scale brain MR image into a pseudo-colored image and then segment the pseudo-colored image with the NAMS model. The pseudo-colored image can enhance the color contrast between different tissues in brain MR images, which improves both the segmentation precision and the direct visual perceptual distinction. Experimental results indicate that, compared with other brain MR image segmentation methods, the proposed NAMS-based pseudo-color segmentation method performs better in both segmentation precision and storage savings.

  2. Performance analysis of multi-primary color display based on OLEDs/PLEDs

    NASA Astrophysics Data System (ADS)

    Xiong, Yan; Deng, Fei; Xu, Shan; Gao, Shufang

    2017-09-01

    A multi-primary color display, such as the six-primary color format, is a solution for expanding the color gamut of a full-color flat panel display. The performance of a multi-primary color display based on organic/polymer light-emitting diodes was analyzed in this study using fitted curves of the device characteristics (i.e., current density, voltage, luminance). A white emitter was introduced into a six-primary color format to form a seven-primary color format that contributes to energy saving, and the ratio of the power efficiency of a seven-primary color display to that of a six-primary color display would increase from 1.027 to 1.061 by using emitting diodes with different electroluminescent efficiencies. Different color matching schemes of the seven-primary color format display were compared in a uniform color space, and the color reproduction scheme did not significantly affect the display performance. Although seven- and six-primary color format displays enable a higher-quality full-color display, they are less efficient than three-primary (i.e., red (R), green (G), and blue (B), RGB) and four-primary (i.e., RGB+white, RGBW) color format displays. For the seven-primary color formats considered in this study, the advantages of the white-primary-added display with efficiently developed light-emitting devices were more evident than those of the format without a white primary.

  3. Hypercomplex Fourier transforms of color images.

    PubMed

    Ell, Todd A; Sangwine, Stephen J

    2007-01-01

    Fourier transforms are a fundamental tool in signal and image processing, yet, until recently, there was no definition of a Fourier transform applicable to color images in a holistic manner. In this paper, hypercomplex numbers, specifically quaternions, are used to define a Fourier transform applicable to color images. The properties of the transform are developed, and it is shown that the transform may be computed using two standard complex fast Fourier transforms. The resulting spectrum is explained in terms of familiar phase and modulus concepts, and a new concept of hypercomplex axis. A method for visualizing the spectrum using color graphics is also presented. Finally, a convolution operational formula in the spectral domain is discussed.

  4. Comparison of lossless compression techniques for prepress color images

    NASA Astrophysics Data System (ADS)

    Van Assche, Steven; Denecker, Koen N.; Philips, Wilfried R.; Lemahieu, Ignace L.

    1998-12-01

    In the pre-press industry color images have both a high spatial and a high color resolution. Such images require a considerable amount of storage space and impose long transmission times. Data compression is desired to reduce these storage and transmission problems. Because of the high quality requirements in the pre-press industry only lossless compression is acceptable. Most existing lossless compression schemes operate on gray-scale images. In this case the color components of color images must be compressed independently. However, higher compression ratios can be achieved by exploiting inter-color redundancies. In this paper we present a comparison of three state-of-the-art lossless compression techniques which exploit such color redundancies: IEP (Inter-color Error Prediction) and a KLT-based technique, which are both linear color decorrelation techniques, and Interframe CALIC, which uses a non-linear approach to color decorrelation. It is shown that these techniques are able to exploit color redundancies and that color decorrelation can be done effectively and efficiently. The linear color decorrelators provide a considerable coding gain (about 2 bpp) on some typical prepress images. The non-linear interframe CALIC predictor does not yield better results, but the full interframe CALIC technique does.

  5. CFA-aware features for steganalysis of color images

    NASA Astrophysics Data System (ADS)

    Goljan, Miroslav; Fridrich, Jessica

    2015-03-01

    Color interpolation is a form of upsampling, which introduces constraints on the relationship between neighboring pixels in a color image. These constraints can be utilized to substantially boost the accuracy of steganography detectors. In this paper, we introduce a rich model formed by 3D co-occurrences of color noise residuals split according to the structure of the Bayer color filter array to further improve detection. Some color interpolation algorithms, such as AHD and PPG, impose pixel constraints so tight that extremely accurate detection becomes possible with merely eight features, eliminating the need for model richification. We carry out experiments on non-adaptive LSB matching and the content-adaptive algorithm WOW on five different color interpolation algorithms. In contrast to grayscale images, in color images that exhibit traces of color interpolation the security of WOW is significantly lower and, depending on the interpolation algorithm, may even be lower than that of non-adaptive LSB matching.

  6. a New Color Correction Method for Underwater Imaging

    NASA Astrophysics Data System (ADS)

    Bianco, G.; Muzzupappa, M.; Bruno, F.; Garcia, R.; Neumann, L.

    2015-04-01

    Recovering correct or at least realistic colors of underwater scenes is a very challenging issue for imaging techniques, since illumination conditions in a refractive and turbid medium such as the sea are seriously altered. The need to correct the colors of underwater images or videos is an important task required in all image-based applications like 3D imaging, navigation, documentation, etc. Many image enhancement methods have been proposed in the literature for these purposes. The advantage of these methods is that they do not require knowledge of the medium's physical parameters, while some image adjustments can be performed manually (such as histogram stretching) or automatically by algorithms based on criteria suggested by computational color constancy methods. One of the most popular criteria is based on the gray-world hypothesis, which assumes that the average of the captured image should be gray. An interesting application of this assumption is performed in the Ruderman opponent color space lαβ, used in a previous work for hue correction of images captured under colored light sources, which allows the luminance component of the scene to be separated from its chromatic components. In this work, we present the first proposal for color correction of underwater images using the lαβ color space. In particular, the chromatic components are changed by moving their distributions around the white point (white balancing), and histogram cutoff and stretching of the luminance component is performed to improve image contrast. The experimental results demonstrate the effectiveness of this method under the gray-world assumption and supposing uniform illumination of the scene. Moreover, due to its low computational cost it is suitable for real-time implementation.
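    The pipeline described here (gray-world white balancing of the chromatic channels plus luminance cutoff-and-stretch) can be sketched in a few lines. Note that the RGB→LMS→lαβ matrices below come from the Ruderman/Reinhard color-transfer literature, and the `cutoff` value is an arbitrary choice; neither is taken from the paper itself.

```python
import numpy as np

# RGB -> LMS and LMS(log) -> l-alpha-beta matrices from the
# Ruderman/Reinhard color-transfer literature (an assumption here).
RGB2LMS = np.array([[0.3811, 0.5783, 0.0402],
                    [0.1967, 0.7244, 0.0782],
                    [0.0241, 0.1288, 0.8444]])
LMS2LAB = np.diag([1/np.sqrt(3), 1/np.sqrt(6), 1/np.sqrt(2)]) @ \
          np.array([[1.0, 1.0,  1.0],
                    [1.0, 1.0, -2.0],
                    [1.0, -1.0, 0.0]])

def rgb_to_lab(img):
    lms = np.clip(img, 1e-6, None) @ RGB2LMS.T   # avoid log(0)
    return np.log10(lms) @ LMS2LAB.T

def lab_to_rgb(lab):
    lms = 10.0 ** (lab @ np.linalg.inv(LMS2LAB).T)
    return lms @ np.linalg.inv(RGB2LMS).T

def gray_world_underwater(img, cutoff=0.01):
    """White-balance chroma in lab space and stretch luminance.

    A simplified sketch: the luminance rescaling into [0, 1] of the
    log-domain channel is cruder than what a production method would do.
    """
    lab = rgb_to_lab(img)
    # white balancing: shift chromatic distributions around the white point
    lab[..., 1] -= lab[..., 1].mean()
    lab[..., 2] -= lab[..., 2].mean()
    # histogram cutoff and stretching of the luminance component
    l = lab[..., 0]
    lo, hi = np.quantile(l, [cutoff, 1.0 - cutoff])
    lab[..., 0] = (np.clip(l, lo, hi) - lo) / max(hi - lo, 1e-6)
    return np.clip(lab_to_rgb(lab), 0.0, 1.0)
```

The forward/inverse conversions are exact inverses of each other for positive inputs, which makes the correction itself the only lossy step.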

  7. Color preservation for tone reproduction and image enhancement

    NASA Astrophysics Data System (ADS)

    Hsin, Chengho; Lee, Zong Wei; Lee, Zheng Zhan; Shin, Shaw-Jyh

    2014-01-01

    Applications based on luminance processing often face the problem of recovering the original chrominance in the output color image. A common approach to reconstructing a color image from the luminance output is to preserve the original hue and saturation. However, this approach often produces an overly colorful image, which is undesirable. We develop a color preservation method that not only retains the ratios of the input tri-chromatic values but also adjusts the output chroma in an appropriate way. Linearizing the output luminance is the key idea behind this method. In addition, a lightness-difference metric and a colorfulness-difference metric are proposed to evaluate the performance of color preservation methods. Evaluations show that the proposed method performs consistently better than existing approaches.
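    The baseline the paper improves on, plain ratio preservation, can be sketched as follows. The saturation exponent `s` (a common damping trick in the tone-mapping literature) and the function name are illustrative; this is not the paper's exact method.

```python
import numpy as np

def recolor(rgb, lum_in, lum_out, s=0.6):
    """Reconstruct color from a tone-mapped luminance channel.

    Scales the input tri-chromatic values by the output/input luminance
    ratio, raised to an exponent s in (0, 1] that damps the over-colorful
    look plain ratio preservation tends to produce (s=1 preserves the
    RGB ratios exactly).
    """
    ratio = lum_out / np.maximum(lum_in, 1e-6)
    return np.clip(rgb * ratio[..., None] ** s, 0.0, 1.0)
```

With `s=1` and a doubled luminance, every channel is simply doubled (then clipped); smaller `s` trades chroma for a more natural appearance.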

  8. MultiFocus Polarization Microscope (MF-PolScope) for 3D polarization imaging of up to 25 focal planes simultaneously

    PubMed Central

    Abrahamsson, Sara; McQuilken, Molly; Mehta, Shalin B.; Verma, Amitabh; Larsch, Johannes; Ilic, Rob; Heintzmann, Rainer; Bargmann, Cornelia I.; Gladfelter, Amy S.; Oldenbourg, Rudolf

    2015-01-01

    We have developed an imaging system for 3D time-lapse polarization microscopy of living biological samples. Polarization imaging reveals the position, alignment and orientation of submicroscopic features in label-free as well as fluorescently labeled specimens. Optical anisotropies are calculated from a series of images where the sample is illuminated by light of different polarization states. Due to the number of images necessary to collect both multiple polarization states and multiple focal planes, 3D polarization imaging is most often prohibitively slow. Our MF-PolScope system employs multifocus optics to form an instantaneous 3D image of up to 25 simultaneous focal-planes. We describe this optical system and show examples of 3D multi-focus polarization imaging of biological samples, including a protein assembly study in budding yeast cells. PMID:25837112

  9. Quantifying the effect of colorization enhancement on mammogram images

    NASA Astrophysics Data System (ADS)

    Wojnicki, Paul J.; Uyeda, Elizabeth; Micheli-Tzanakou, Evangelia

    2002-04-01

    Current methods of radiological display provide only grayscale images of mammograms. Limiting the image space to grayscale provides only luminance differences and textures as cues for object recognition within the image. However, color can be an important and significant cue in the detection of shapes and objects. Increasing detection ability allows the radiologist to interpret the images in more detail, improving object recognition and diagnostic accuracy. Color detection experiments using our stimulus system have demonstrated that an observer can only detect an average of 140 levels of grayscale. An optimally colorized image can allow a user to distinguish 250-1000 different levels, hence increasing potential image feature detection by 2-7 times. By implementing a colorization map that follows the luminance map of the original grayscale images, the luminance profile is preserved and color is isolated as the enhancement mechanism. The effects of this enhancement mechanism on the shape, frequency composition and statistical characteristics of the Visual Evoked Potential (VEP) are analyzed and presented. Thus, the effectiveness of the image colorization is measured quantitatively using the VEP.

  10. Lightness modification of color image for protanopia and deuteranopia

    NASA Astrophysics Data System (ADS)

    Tanaka, Go; Suetake, Noriaki; Uchino, Eiji

    2010-01-01

    In multimedia content, colors play important roles in conveying visual information. However, color information cannot always be perceived uniformly by all people. People with a color vision deficiency, such as dichromacy, cannot recognize and distinguish certain color combinations. In this paper, an effective lightness modification method, which enables barrier-free color vision for people with dichromacy, especially protanopia or deuteranopia, while preserving the color information in the original image for people with standard color vision, is proposed. In the proposed method, an optimization problem concerning lightness components is first defined by considering color differences in an input image. Then a perceptible and comprehensible color image for both protanopes and viewers with no color vision deficiency or both deuteranopes and viewers with no color vision deficiency is obtained by solving the optimization problem. Through experiments, the effectiveness of the proposed method is illustrated.

  11. Multi-focus image fusion based on area-based standard deviation in dual tree contourlet transform domain

    NASA Astrophysics Data System (ADS)

    Dong, Min; Dong, Chenghui; Guo, Miao; Wang, Zhe; Mu, Xiaomin

    2018-04-01

    Multiresolution-based methods, such as the wavelet and Contourlet transforms, are widely used for image fusion. This work presents a new image fusion framework utilizing the area-based standard deviation in the dual-tree Contourlet transform domain. First, the pre-registered source images are decomposed with the dual-tree Contourlet transform, yielding low-pass and high-pass coefficients. The low-pass bands are then fused with a weighted average based on the area standard deviation rather than the simple "averaging" rule, while the high-pass bands are merged with the "max-absolute" fusion rule. Finally, the modified low-pass and high-pass coefficients are used to reconstruct the final fused image. The major advantage of the proposed fusion method over conventional fusion is the approximate shift invariance and multidirectional selectivity of the dual-tree Contourlet transform. The proposed method is compared with wavelet- and Contourlet-based methods and other state-of-the-art methods on commonly used multi-focus images. Experiments demonstrate that the proposed fusion framework is feasible and effective, performing better in both subjective and objective evaluation.
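    The two fusion rules named above can be sketched with plain NumPy (the dual-tree Contourlet decomposition itself is assumed to come from elsewhere, and the window radius `r` is an arbitrary choice, not a value from the paper):

```python
import numpy as np

def local_std(a, r=3):
    """Area-based standard deviation over a (2r+1) x (2r+1) window."""
    pad = np.pad(a, r, mode='reflect')
    win = np.lib.stride_tricks.sliding_window_view(pad, (2*r + 1, 2*r + 1))
    return win.std(axis=(-2, -1))

def fuse_lowpass(a, b, r=3, eps=1e-12):
    """Weighted average of low-pass bands; weights follow local activity."""
    sa, sb = local_std(a, r), local_std(b, r)
    w = sa / (sa + sb + eps)
    return w * a + (1.0 - w) * b

def fuse_highpass(a, b):
    """'Max-absolute' rule: keep the coefficient with larger magnitude."""
    return np.where(np.abs(a) >= np.abs(b), a, b)
```

When one band is locally flat (zero standard deviation), `fuse_lowpass` passes the other band through almost unchanged, which is exactly the behavior that distinguishes this rule from simple averaging.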

  12. Improving the image discontinuous problem by using color temperature mapping method

    NASA Astrophysics Data System (ADS)

    Jeng, Wei-De; Mang, Ou-Yang; Lai, Chien-Cheng; Wu, Hsien-Ming

    2011-09-01

    This article focuses on image processing for the radial imaging capsule endoscope (RICE). In the experiment, a pig intestine was imaged with RICE, but the captured images were blurred because RICE suffers from aberration at the image center, and low illumination uniformity further degrades image quality. Image processing can be used to mitigate these problems. Images captured at different times are connected using the Pearson correlation coefficient, and a color temperature mapping method is then applied to remove the discontinuity in the connection region.

  13. Analyses of multi-color plant-growth light sources in achieving maximum photosynthesis efficiencies with enhanced color qualities.

    PubMed

    Wu, Tingzhu; Lin, Yue; Zheng, Lili; Guo, Ziquan; Xu, Jianxing; Liang, Shijie; Liu, Zhuguagn; Lu, Yijun; Shih, Tien-Mo; Chen, Zhong

    2018-02-19

    An optimal design of light-emitting diode (LED) lighting that benefits both the photosynthesis performance for plants and the visional health for human eyes has drawn considerable attention. In the present study, we have developed a multi-color driving algorithm that serves as a liaison between desired spectral power distributions and pulse-width-modulation duty cycles. With the aid of this algorithm, our multi-color plant-growth light sources can optimize correlated-color temperature (CCT) and color rendering index (CRI) such that photosynthetic luminous efficacy of radiation (PLER) is maximized regardless of the number of LEDs and the type of photosynthetic action spectrum (PAS). In order to illustrate the accuracies of the proposed algorithm and the practicalities of our plant-growth light sources, we choose six color LEDs and German PAS for experiments. Finally, our study can help provide a useful guide to improve light qualities in plant factories, in which long-term co-inhabitance of plants and human beings is required.
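    The core of such a driving algorithm is mapping a desired spectral power distribution onto per-LED duty cycles. A minimal sketch is a clipped least-squares solve; the Gaussian LED spectra and function name below are hypothetical stand-ins, and the paper's algorithm additionally optimizes CCT/CRI, which is omitted here.

```python
import numpy as np

def duty_cycles(led_spds, target_spd):
    """Solve led_spds @ d ~= target_spd for PWM duty cycles d in [0, 1].

    led_spds: (n_wavelengths, n_leds) emission spectra, one column per LED.
    A plain least-squares fit with clipping -- a simplification of the
    multi-color driving algorithm described in the abstract.
    """
    d, *_ = np.linalg.lstsq(led_spds, target_spd, rcond=None)
    return np.clip(d, 0.0, 1.0)

# hypothetical example: two narrow-band LEDs and a target SPD
# synthesized from known duty cycles
wl = np.linspace(400, 700, 61)
spds = np.stack([np.exp(-((wl - 450) / 20) ** 2),
                 np.exp(-((wl - 660) / 20) ** 2)], axis=1)
target = spds @ np.array([0.3, 0.7])
```

Because the synthetic target lies exactly in the span of the LED spectra, the solver recovers the original duty cycles.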

  14. Toward a perceptual image quality assessment of color quantized images

    NASA Astrophysics Data System (ADS)

    Frackiewicz, Mariusz; Palus, Henryk

    2018-04-01

    Color image quantization is an important operation in the field of color image processing. In this paper, we consider new perceptual image quality metrics for assessment of quantized images. These types of metrics, e.g. DSCSI, MDSIs, MDSIm and HPSI achieve the highest correlation coefficients with MOS during tests on the six publicly available image databases. Research was limited to images distorted by two types of compression: JPG and JPG2K. Statistical analysis of correlation coefficients based on the Friedman test and post-hoc procedures showed that the differences between the four new perceptual metrics are not statistically significant.

  15. Multi-color incomplete Cholesky conjugate gradient methods for vector computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Poole, E.L.

    1986-01-01

    This research is concerned with the solution on vector computers of linear systems of equations Ax = b, where A is a large, sparse, symmetric positive definite matrix with non-zero elements lying only along a few diagonals of the matrix. The system is solved using the incomplete Cholesky conjugate gradient method (ICCG). Multi-color orderings of the unknowns in the linear system are used to obtain p-color matrices for which a no-fill block ICCG method is implemented on the CYBER 205 with O(N/p)-length vector operations in both the decomposition of A and, more importantly, in the forward and back solves necessary at each iteration of the method (N is the number of unknowns and p is a small constant). A p-colored matrix is a matrix that can be partitioned into a p x p block matrix where the diagonal blocks are diagonal matrices. The matrix is stored by diagonals, and matrix multiplication by diagonals is used to carry out the decomposition of A and the forward and back solves. Additionally, if the vectors across adjacent blocks line up, then some of the overhead associated with vector startups can be eliminated in the matrix-vector multiplication necessary at each conjugate gradient iteration. Necessary and sufficient conditions are given to determine which multi-color orderings of the unknowns correspond to p-color matrices, and a process is indicated for choosing multi-color orderings.
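    For p = 2 this is the familiar red-black ordering. A small NumPy sketch (using a 1-D Laplacian as a stand-in for the banded matrices described above) confirms that the diagonal blocks of the reordered matrix are themselves diagonal, which is what lets the block solves vectorize with no fill-in:

```python
import numpy as np

n = 8
# 1-D Laplacian: a tridiagonal SPD matrix (non-zeros on 3 diagonals)
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

# 2-color (red-black) ordering: even-indexed unknowns first, then odd
perm = np.r_[np.arange(0, n, 2), np.arange(1, n, 2)]
P = np.eye(n)[perm]
B = P @ A @ P.T

# each color couples only to the other color, so both diagonal blocks
# of the permuted matrix are diagonal matrices
red, black = B[:n // 2, :n // 2], B[n // 2:, n // 2:]
assert np.allclose(red, np.diag(np.diag(red)))
assert np.allclose(black, np.diag(np.diag(black)))
```

For matrices with more couplings (wider stencils), more colors are needed before the diagonal blocks become diagonal, which is the p-color condition the abstract characterizes.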

  16. Application of passive imaging polarimetry in the discrimination and detection of different color targets of identical shapes using color-blind imaging sensors

    NASA Astrophysics Data System (ADS)

    El-Saba, A. M.; Alam, M. S.; Surpanani, A.

    2006-05-01

    Important aspects of automatic pattern recognition systems are their ability to efficiently discriminate and detect proper targets with low false alarm rates. In this paper we extend the applications of passive imaging polarimetry to effectively discriminate and detect different color targets of identical shapes using a color-blind imaging sensor. For this case study we demonstrate that traditional color-blind polarization-insensitive imaging sensors that rely only on the spatial distribution of targets suffer from high false detection rates, especially in scenarios where multiple identically shaped targets are present. On the other hand, we show that color-blind polarization-sensitive imaging sensors can successfully and efficiently discriminate and detect true targets based on their color alone. We highlight the main advantages of the proposed polarization-encoded imaging sensor.

  17. Color Image Restoration Using Nonlocal Mumford-Shah Regularizers

    NASA Astrophysics Data System (ADS)

    Jung, Miyoun; Bresson, Xavier; Chan, Tony F.; Vese, Luminita A.

    We introduce several color image restoration algorithms based on the Mumford-Shah model and nonlocal image information. The standard Ambrosio-Tortorelli and Shah models are defined to work in a small local neighborhood, which are sufficient to denoise smooth regions with sharp boundaries. However, textures are not local in nature and require semi-local/non-local information to be denoised efficiently. Inspired from recent work (NL-means of Buades, Coll, Morel and NL-TV of Gilboa, Osher), we extend the standard models of Ambrosio-Tortorelli and Shah approximations to Mumford-Shah functionals to work with nonlocal information, for better restoration of fine structures and textures. We present several applications of the proposed nonlocal MS regularizers in image processing such as color image denoising, color image deblurring in the presence of Gaussian or impulse noise, color image inpainting, and color image super-resolution. In the formulation of nonlocal variational models for the image deblurring with impulse noise, we propose an efficient preprocessing step for the computation of the weight function w. In all the applications, the proposed nonlocal regularizers produce superior results over the local ones, especially in image inpainting with large missing regions. Experimental results and comparisons between the proposed nonlocal methods and the local ones are shown.

  18. Open-source do-it-yourself multi-color fluorescence smartphone microscopy

    PubMed Central

    Sung, Yulung; Campa, Fernando; Shih, Wei-Chuan

    2017-01-01

    Fluorescence microscopy is an important technique for cellular and microbiological investigations. Translating this technique onto a smartphone can enable particularly powerful applications such as on-site analysis, on-demand monitoring, and point-of-care diagnostics. Current fluorescence smartphone microscope setups require precise illumination and imaging alignment which altogether limit its broad adoption. We report a multi-color fluorescence smartphone microscope with a single contact lens-like add-on lens and slide-launched total-internal-reflection guided illumination for three common tasks in investigative fluorescence microscopy: autofluorescence, fluorescent stains, and immunofluorescence. The open-source, simple and cost-effective design has the potential for do-it-yourself fluorescence smartphone microscopy. PMID:29188104

  19. A Complete Color Normalization Approach to Histopathology Images Using Color Cues Computed From Saturation-Weighted Statistics.

    PubMed

    Li, Xingyu; Plataniotis, Konstantinos N

    2015-07-01

    In digital histopathology, tasks of segmentation and disease diagnosis are achieved by quantitative analysis of image content. However, color variation in image samples makes it challenging to produce reliable results. This paper introduces a complete normalization scheme to address the problem of color variation in histopathology images jointly caused by inconsistent biopsy staining and nonstandard imaging conditions. Method: Different from existing normalization methods that either address a partial cause of color variation or lump the causes together, our method identifies causes of color variation based on a microscopic imaging model and addresses inconsistency in biopsy imaging and staining by an illuminant normalization module and a spectral normalization module, respectively. In evaluation, we use two public datasets that are representative of histopathology images commonly received in clinics to examine the proposed method from the aspects of robustness to system settings, performance consistency against achromatic pixels, and normalization effectiveness in terms of histological information preservation. As the saturation-weighted statistics proposed in this study generate stable and reliable color cues for stain normalization, our scheme is robust to system parameters and insensitive to image content and achromatic colors. Extensive experimentation suggests that our approach outperforms state-of-the-art normalization methods, as the proposed method is the only approach that succeeds in preserving histological information after normalization. The proposed color normalization solution would be useful in mitigating the effects of color variation in pathology images on subsequent quantitative analysis.

  20. Adaptive Residual Interpolation for Color and Multispectral Image Demosaicking.

    PubMed

    Monno, Yusuke; Kiku, Daisuke; Tanaka, Masayuki; Okutomi, Masatoshi

    2017-12-01

    Color image demosaicking for the Bayer color filter array is an essential image processing operation for acquiring high-quality color images. Recently, residual interpolation (RI)-based algorithms have demonstrated superior demosaicking performance over conventional color difference interpolation-based algorithms. In this paper, we propose adaptive residual interpolation (ARI) that improves existing RI-based algorithms by adaptively combining two RI-based algorithms and selecting a suitable iteration number at each pixel. These are performed based on a unified criterion that evaluates the validity of an RI-based algorithm. Experimental comparisons using standard color image datasets demonstrate that ARI can improve existing RI-based algorithms by more than 0.6 dB in the color peak signal-to-noise ratio and can outperform state-of-the-art algorithms based on training images. We further extend ARI for a multispectral filter array, in which more than three spectral bands are arrayed, and demonstrate that ARI can achieve state-of-the-art performance also for the task of multispectral image demosaicking.

  1. Hiding Information Using different lighting Color images

    NASA Astrophysics Data System (ADS)

    Majead, Ahlam; Awad, Rash; Salman, Salema S.

    2018-05-01

    The choice of host medium for the secret message is one of the important principles for designers of steganography methods. In this study, we examined which color images are best suited to carry a secret image. The steganography approach is based on the Lifting Wavelet Transform (LWT) and Least Significant Bit (LSB) substitution. The proposed method offers lossless, unnoticeable changes in the carrier color image that are imperceptible to the human visual system (HVS), especially for host images captured in dark lighting conditions. The aim of the study was to examine the process of hiding data in color images of different light intensities. The effect of the embedding was examined on images classified by minimum distance and by the amount of noise and distortion in the image, as well as on the histogram and statistical characteristics of the cover image. The results showed that images taken under different light intensities can be used efficiently for hiding data with least-significant-bit substitution; the method succeeded in concealing textual data without visibly distorting the original (low-light) image. A digital image segmentation technique was used to distinguish the small regions affected by the embedding: smooth homogeneous areas are less affected by hiding than brightly lit areas. Dark color images can therefore be used to exchange secret messages between two parties with good security.
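    The LSB-substitution half of the scheme can be sketched as follows (the lifting wavelet transform applied beforehand in the paper is omitted here; function names and the payload layout are illustrative):

```python
import numpy as np

def embed_lsb(cover, secret_bits, k=1):
    """Hide a 0/1 bit stream in the k least-significant bits of an image.

    cover: uint8 array. Each pixel carries up to k payload bits, so the
    maximum per-pixel change is 2**k - 1 intensity levels.
    """
    flat = cover.reshape(-1).copy()
    nbits = len(secret_bits)
    assert nbits <= flat.size * k, "payload too large for this cover"
    for i in range(0, nbits, k):
        chunk = secret_bits[i:i + k]
        m = len(chunk)
        val = 0
        for b in chunk:                       # pack chunk MSB-first
            val = (val << 1) | int(b)
        idx = i // k
        # clear the low m bits of the pixel, then insert the payload
        flat[idx] = np.uint8(((int(flat[idx]) >> m) << m) | val)
    return flat.reshape(cover.shape)

def extract_lsb(stego, nbits, k=1):
    """Recover nbits payload bits written by embed_lsb."""
    flat = stego.reshape(-1)
    bits = []
    for i in range(0, nbits, k):
        m = min(k, nbits - i)
        v = int(flat[i // k]) & ((1 << m) - 1)
        bits.extend((v >> (m - 1 - j)) & 1 for j in range(m))
    return np.array(bits, dtype=np.uint8)
```

With `k=2`, each pixel changes by at most 3 intensity levels, which is why the distortion is hard to see, especially in dark, low-contrast regions.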

  2. False-color composite image of Raco, Michigan

    NASA Image and Video Library

    1994-04-10

    STS059-S-027 (10 April 1994) --- This image is a false-color composite of Raco, Michigan, centered at 46.39 degrees north latitude, 84.88 degrees west longitude. This image was acquired by the Spaceborne Imaging Radar-C and X-Band Synthetic Aperture Radar (SIR-C/X-SAR) aboard the Space Shuttle Endeavour on its 6th orbit and during the first full-capacity test of the instrument. This image was produced using both L-Band and C-Band data. The area shown is approximately 20 kilometers by 50 kilometers. Raco is located at the eastern end of Michigan's upper peninsula, west of Sault Ste. Marie and south of Whitefish Bay on Lake Superior. The site is located at the boundary between the boreal forests and the northern temperate forests, a transitional zone that is expected to be ecologically sensitive to anticipated global changes resulting from climatic warming. On any given day, there is a 60 percent chance that this area will be obscured to some extent by cloud cover which makes it difficult to image using optical sensors. In this color representation (Red=LHH, Green=LHV, Blue=CHH), darker areas in the image are smooth surfaces such as frozen lakes and other non-forested areas. The colors are related to the types of trees and the brightness is related to the amount of plant material covering the surface, called forest biomass. Accurate information about land-cover is important to area resource managers and for use in regional- to global-scale scientific models used to understand global change. SIR-C/X-SAR radars illuminate Earth with microwaves allowing detailed observations at any time, regardless of weather or sunlight conditions. SIR-C/X-SAR uses three microwave wavelengths: L-Band (24 cm), C-Band (6 cm), and X-Band (3 cm). The multi-frequency data will be used by the international scientific community to better understand the global environment and how it is changing. The SIR-C/X-SAR data, complemented by aircraft and ground studies, will give scientists clearer

  3. Portable real-time color night vision

    NASA Astrophysics Data System (ADS)

    Toet, Alexander; Hogervorst, Maarten A.

    2008-03-01

    We developed a simple and fast lookup-table based method to derive and apply natural daylight colors to multi-band night-time images. The method deploys an optimal color transformation derived from a set of samples taken from a daytime color reference image. The colors in the resulting colorized multiband night-time images closely resemble the colors in the daytime color reference image. Also, object colors remain invariant under panning operations and are independent of the scene content. Here we describe the implementation of this method in two prototype portable dual band realtime night vision systems. One system provides co-aligned visual and near-infrared bands of two image intensifiers, the other provides co-aligned images from a digital image intensifier and an uncooled longwave infrared microbolometer. The co-aligned images from both systems are further processed by a notebook computer. The color mapping is implemented as a realtime lookup table transform. The resulting colorised video streams can be displayed in realtime on head mounted displays and stored on the hard disk of the notebook computer. Preliminary field trials demonstrate the potential of these systems for applications like surveillance, navigation and target detection.
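    The lookup-table color transform can be sketched as follows. The binning resolution and function names are illustrative; the actual systems derive the table from samples of a co-registered daytime color reference image, as described above.

```python
import numpy as np

def build_lut(night_pairs, day_rgb, bins=32):
    """Derive a (bins x bins) -> RGB lookup table from co-registered samples.

    night_pairs: (N, 2) dual-band intensities in [0, 1] (e.g. visual + NIR);
    day_rgb: (N, 3) daytime reference colors. Each cell stores the mean
    daytime color of the samples falling in it; empty cells fall back to gray.
    """
    lut = np.zeros((bins, bins, 3))
    cnt = np.zeros((bins, bins, 1))
    q = np.minimum((night_pairs * bins).astype(int), bins - 1)
    for (i, j), rgb in zip(q, day_rgb):
        lut[i, j] += rgb
        cnt[i, j] += 1
    empty = cnt[..., 0] == 0
    lut = np.where(cnt > 0, lut / np.maximum(cnt, 1), 0.0)
    lut[empty] = 0.5                       # neutral gray fallback
    return lut

def apply_lut(band1, band2, lut):
    """Colorize a dual-band frame by per-pixel table lookup."""
    bins = lut.shape[0]
    i = np.minimum((band1 * bins).astype(int), bins - 1)
    j = np.minimum((band2 * bins).astype(int), bins - 1)
    return lut[i, j]                       # fancy indexing, O(1) per pixel
```

Because the transform is a pure table lookup, it runs at video rate and, as the abstract notes, assigns the same color to the same band pair regardless of scene content or panning.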

  4. Doppler color imaging. Principles and instrumentation.

    PubMed

    Kremkau, F W

    1992-01-01

    DCI acquires Doppler-shifted echoes from a cross-section of tissue scanned by an ultrasound beam. These echoes are then presented in color and superimposed on the gray-scale anatomic image of non-Doppler-shifted echoes received during the scan. The flow echoes are assigned colors according to the color map chosen. Usually red, yellow, or white indicates positive Doppler shifts (approaching flow) and blue, cyan, or white indicates negative shifts (receding flow). Green is added to indicate variance (disturbed or turbulent flow). Several pulses (the number is called the ensemble length) are needed to generate a color scan line. Linear, convex, phased, and annular arrays are used to acquire the gray-scale and color-flow information. Doppler color-flow instruments are pulsed-Doppler instruments and are subject to the same limitations, such as Doppler angle dependence and aliasing, as other Doppler instruments. Color controls include gain, TGC, map selection, variance on/off, persistence, ensemble length, color/gray priority, Nyquist limit (PRF), baseline shift, wall filter, and color window angle, location, and size. Doppler color-flow instruments generally have output intensities intermediate between those of gray-scale imaging and pulsed-Doppler duplex instruments. Although there is no known risk with the use of color-flow instruments, prudent practice dictates that they be used for medical indications and with the minimum exposure time and instrument output required to obtain the needed diagnostic information.

  5. Six-color intravital two-photon imaging of brain tumors and their dynamic microenvironment.

    PubMed

    Ricard, Clément; Debarbieux, Franck Christian

    2014-01-01

    Most intravital studies of brain tumors in living animals have so far relied on dual-color imaging. We describe here a multiphoton imaging protocol to dynamically characterize the interactions between six cellular components in a living mouse. We applied this methodology to a clinically relevant glioblastoma multiforme (GBM) model designed in reporter mice with targeted cell populations labeled by fluorescent proteins of different colors. This model permitted us to make non-invasive, longitudinal, multi-scale observations of cell-to-cell interactions. We provide examples of such 5D (x, y, z, t, color) images acquired on a daily basis from volumes of interest covering most of the mouse parietal cortex at subcellular resolution. Spectral deconvolution allowed us to accurately separate each cell population as well as some components of the extracellular matrix. The technique represents a powerful tool for investigating how tumor progression is influenced by the interactions of tumor cells with host cells and the extracellular matrix microenvironment. It will be especially valuable for evaluating neuro-oncological drug efficacy and target specificity. The imaging protocol provided here can be easily translated to other mouse models of neuropathologies and should also be of fundamental interest for investigations in other areas of systems biology.

  6. Ocean color products from the Korean Geostationary Ocean Color Imager (GOCI).

    PubMed

    Wang, Menghua; Ahn, Jae-Hyun; Jiang, Lide; Shi, Wei; Son, SeungHyun; Park, Young-Je; Ryu, Joo-Hyung

    2013-02-11

    The first geostationary ocean color satellite sensor, the Geostationary Ocean Color Imager (GOCI), onboard the South Korean Communication, Ocean, and Meteorological Satellite (COMS), was successfully launched in June of 2010. GOCI has a local area coverage of the western Pacific region centered at around 36°N and 130°E and covers ~2500 × 2500 km². GOCI has eight spectral bands from 412 to 865 nm with hourly measurements during daytime from 9:00 to 16:00 local time, i.e., eight images per day. In a collaboration between the NOAA Center for Satellite Applications and Research (STAR) and the Korea Institute of Ocean Science and Technology (KIOST), we have been working on deriving and improving GOCI ocean color products, e.g., normalized water-leaving radiance spectra (nLw(λ)), chlorophyll-a concentration, diffuse attenuation coefficient at the wavelength of 490 nm (Kd(490)), etc. The GOCI-covered ocean region includes some of the world's most turbid and optically complex waters. To improve the GOCI-derived nLw(λ) spectra, a new atmospheric correction algorithm was developed and implemented in the GOCI ocean color data processing. The new algorithm was developed specifically for GOCI-like ocean color data processing for this highly turbid western Pacific region. In this paper, we show GOCI ocean color results from our collaboration effort. From in situ validation analyses, ocean color products derived from the new GOCI ocean color data processing have been significantly improved. Generally, the new GOCI ocean color products have data quality comparable to those from the Moderate Resolution Imaging Spectroradiometer (MODIS) on the satellite Aqua. We show that GOCI-derived ocean color data can provide an effective tool to monitor ocean phenomena in the region, such as tide-induced re-suspension of sediments, diurnal variation of ocean optical and biogeochemical properties, and horizontal advection of river discharge. In particular, we show some examples of ocean

  7. Stokes image reconstruction for two-color microgrid polarization imaging systems.

    PubMed

    Lemaster, Daniel A

    2011-07-18

    The Air Force Research Laboratory has developed a new microgrid polarization imaging system capable of simultaneously reconstructing linear Stokes parameter images in two colors on a single focal plane array. In this paper, an effective method for extracting Stokes images is presented for this type of camera system. It is also shown that correlations between the color bands can be exploited to significantly increase overall spatial resolution. Test data is used to show the advantages of this approach over bilinear interpolation. The bounds (in terms of available reconstruction bandwidth) on image resolution are also provided.
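    For reference, the linear Stokes parameters such a system reconstructs are simple combinations of the intensities behind 0°, 45°, 90°, and 135° polarizers. The sketch below assumes ideal, fully sampled measurements; the paper's contribution is recovering these from the interleaved, color-multiplexed microgrid samples:

    ```python
    import numpy as np

    def linear_stokes(i0, i45, i90, i135):
        """Linear Stokes parameters from four polarizer orientations."""
        s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
        s1 = i0.astype(float) - i90          # 0/90 degree preference
        s2 = i45.astype(float) - i135        # 45/135 degree preference
        # Degree of linear polarization, guarded against divide-by-zero.
        dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-9)
        return s0, s1, s2, dolp

    # Fully horizontally polarized light: I0 = 100, I90 = 0, I45 = I135 = 50.
    i = [np.full((2, 2), v) for v in (100.0, 50.0, 0.0, 50.0)]
    s0, s1, s2, dolp = linear_stokes(*i)
    ```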

  8. Research of image retrieval technology based on color feature

    NASA Astrophysics Data System (ADS)

    Fu, Yanjun; Jiang, Guangyu; Chen, Fengying

    2009-10-01

    Recently, with advances in communication and computer technology, storage technology, and digital imaging equipment, more image resources are available than ever before, so a way to locate the desired image quickly and accurately is needed. The early approach was keyword-based database search, but it scales poorly as image collections grow. To overcome the limitations of traditional keyword search, content-based image retrieval emerged, and it is now an active research topic; color image retrieval is an important part of it, and color is its most important feature. Three key questions on how to exploit the color characteristic are discussed in the paper: the representation of color, the extraction of color features, and the measurement of color-based similarity. On this basis, color-histogram feature extraction is discussed in particular. Weighing the advantages and disadvantages of the overall histogram and the partition histogram, a new method based on the partition-overall histogram is proposed. Its basic idea is to divide the image space according to a certain strategy and then compute a color histogram of each block as that block's color feature. Users choose the blocks that contain important spatial information and confirm their weights. The system computes the distance between the corresponding chosen blocks, merges the remaining blocks into partial overall histograms and computes the distance between those as well, and then accumulates all the distances as the final distance between two pictures. 
The partition-overall histogram combines the advantages of the two methods above: choosing blocks makes the feature carry more spatial information, which improves performance; the distances between partition-overall histogram
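    A minimal sketch of the partition-overall histogram comparison described above; the 3×3 grid, 8 bins per channel, uniform block weights, and L1 distance are illustrative assumptions, not choices stated in the abstract:

    ```python
    import numpy as np

    def block_histograms(img, grid=(3, 3), bins=8):
        """Split an HxWx3 image into a grid of blocks and return one
        normalized color histogram per block."""
        h, w, _ = img.shape
        hists = []
        for i in range(grid[0]):
            for j in range(grid[1]):
                blk = img[i*h//grid[0]:(i+1)*h//grid[0],
                          j*w//grid[1]:(j+1)*w//grid[1]]
                hist, _ = np.histogramdd(blk.reshape(-1, 3),
                                         bins=(bins,)*3, range=((0, 256),)*3)
                hists.append(hist.ravel() / hist.sum())
        return hists

    def partition_overall_distance(h1, h2, chosen):
        """Compare user-chosen blocks one-to-one; merge the rest into a
        single 'overall' histogram per image; accumulate L1 distances."""
        d = sum(np.abs(h1[k] - h2[k]).sum() for k in chosen)
        rest = [k for k in range(len(h1)) if k not in chosen]
        if rest:
            o1 = np.mean([h1[k] for k in rest], axis=0)
            o2 = np.mean([h2[k] for k in rest], axis=0)
            d += np.abs(o1 - o2).sum()
        return d

    img = np.zeros((30, 30, 3), dtype=np.uint8)   # all-black test image
    same = partition_overall_distance(block_histograms(img),
                                      block_histograms(img), chosen={4})
    ```

    An identical pair of images yields distance 0; keeping chosen blocks separate is what preserves the spatial information that a single global histogram discards.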

  9. Radiation Hardening of Digital Color CMOS Camera-on-a-Chip Building Blocks for Multi-MGy Total Ionizing Dose Environments

    NASA Astrophysics Data System (ADS)

    Goiffon, Vincent; Rolando, Sébastien; Corbière, Franck; Rizzolo, Serena; Chabane, Aziouz; Girard, Sylvain; Baer, Jérémy; Estribeau, Magali; Magnan, Pierre; Paillet, Philippe; Van Uffelen, Marco; Mont Casellas, Laura; Scott, Robin; Gaillardin, Marc; Marcandella, Claude; Marcelot, Olivier; Allanche, Timothé

    2017-01-01

    The Total Ionizing Dose (TID) hardness of digital color Camera-on-a-Chip (CoC) building blocks is explored in the Multi-MGy range using 60Co gamma-ray irradiations. The performances of the following CoC subcomponents are studied: radiation hardened (RH) pixel and photodiode designs, RH readout chain, Color Filter Arrays (CFA), and column RH Analog-to-Digital Converters (ADC). Several radiation hardness improvements are reported (on the readout chain and on dark current). CFA and ADC degradation appears to be very weak at the maximum TID of 6 MGy(SiO2), i.e., 600 Mrad. Overall, this study demonstrates the feasibility of a MGy rad-hard CMOS color digital camera-on-a-chip, illustrated by a color image captured after 6 MGy(SiO2) with no obvious degradation. An original dark current reduction mechanism in irradiated CMOS Image Sensors is also reported and discussed.

  10. A focus of attention mechanism for gaze control within a framework for intelligent image analysis tools

    NASA Astrophysics Data System (ADS)

    Rodrigo, Ranga P.; Ranaweera, Kamal; Samarabandu, Jagath K.

    2004-05-01

    Focus of attention is a mechanism often attributed to biological vision systems, in which the entire field of view is first monitored and attention is then focused on the object of interest. We propose using a similar approach for object recognition in a color image sequence. The intention is to locate an object based on a prior motive and then concentrate on the detected object so that the imaging device can be guided toward it. We use the abilities of the intelligent image analysis framework developed in our laboratory to dynamically generate an algorithm that detects the particular type of object based on the user's object description. The proposed method uses color clustering along with segmentation. The segmented image with labeled regions is used to calculate the shape descriptor parameters. These and the color information are matched with the input description. Gaze is then controlled by issuing camera movement commands as appropriate. We present some preliminary results that demonstrate the success of this approach.

  11. Single-fiber multi-color pyrometry

    DOEpatents

    Small, IV, Ward; Celliers, Peter

    2004-01-27

    This invention is a fiber-based multi-color pyrometry set-up for real-time non-contact temperature and emissivity measurement. The system includes a single optical fiber to collect radiation emitted by a target, a reflective rotating chopper to split the collected radiation into two or more paths while modulating the radiation for lock-in amplification (i.e., phase-sensitive detection), at least two detectors possibly of different spectral bandwidths with or without filters to limit the wavelength regions detected and optics to direct and focus the radiation onto the sensitive areas of the detectors. A computer algorithm is used to calculate the true temperature and emissivity of a target based on blackbody calibrations. The system components are enclosed in a light-tight housing, with provision for the fiber to extend outside to collect the radiation. Radiation emitted by the target is transmitted through the fiber to the reflective chopper, which either allows the radiation to pass straight through or reflects the radiation into one or more separate paths. Each path includes a detector with or without filters and corresponding optics to direct and focus the radiation onto the active area of the detector. The signals are recovered using lock-in amplification. Calibration formulas for the signals obtained using a blackbody of known temperature are used to compute the true temperature and emissivity of the target. The temperature range of the pyrometer system is determined by the spectral characteristics of the optical components.
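    The two-band (ratio) temperature computation underlying such a pyrometer can be sketched in the Wien approximation. This sketch assumes a gray body (equal emissivity in both bands); the patented system additionally calibrates against a blackbody and solves for emissivity, which this simplification omits:

    ```python
    import math

    C2 = 1.4388e-2  # second radiation constant, m*K

    def wien_radiance(wl_m, t_k, emissivity=1.0):
        """Spectral radiance in the Wien approximation (constant factor
        dropped, since only the two-band ratio matters here)."""
        return emissivity * wl_m**-5 * math.exp(-C2 / (wl_m * t_k))

    def ratio_temperature(sig1, sig2, wl1_m, wl2_m):
        """Invert the two-band signal ratio for temperature:
        ln r = -5 ln(wl1/wl2) + (C2/T)(1/wl2 - 1/wl1)."""
        r = sig1 / sig2
        return C2 * (1.0/wl2_m - 1.0/wl1_m) / (math.log(r) +
                                               5.0 * math.log(wl1_m / wl2_m))

    # Simulate detector signals at 2000 K and recover the temperature.
    wl1, wl2 = 0.7e-6, 0.9e-6
    t_true = 2000.0
    t_est = ratio_temperature(wien_radiance(wl1, t_true),
                              wien_radiance(wl2, t_true), wl1, wl2)
    ```

    Because the ratio cancels any factor common to both bands, gray-body emissivity drops out, which is the appeal of multi-color over single-band pyrometry.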

  12. Single-fiber multi-color pyrometry

    DOEpatents

    Small, IV, Ward; Celliers, Peter

    2000-01-01

    This invention is a fiber-based multi-color pyrometry set-up for real-time non-contact temperature and emissivity measurement. The system includes a single optical fiber to collect radiation emitted by a target, a reflective rotating chopper to split the collected radiation into two or more paths while modulating the radiation for lock-in amplification (i.e., phase-sensitive detection), at least two detectors possibly of different spectral bandwidths with or without filters to limit the wavelength regions detected and optics to direct and focus the radiation onto the sensitive areas of the detectors. A computer algorithm is used to calculate the true temperature and emissivity of a target based on blackbody calibrations. The system components are enclosed in a light-tight housing, with provision for the fiber to extend outside to collect the radiation. Radiation emitted by the target is transmitted through the fiber to the reflective chopper, which either allows the radiation to pass straight through or reflects the radiation into one or more separate paths. Each path includes a detector with or without filters and corresponding optics to direct and focus the radiation onto the active area of the detector. The signals are recovered using lock-in amplification. Calibration formulas for the signals obtained using a blackbody of known temperature are used to compute the true temperature and emissivity of the target. The temperature range of the pyrometer system is determined by the spectral characteristics of the optical components.

  13. Color correction with blind image restoration based on multiple images using a low-rank model

    NASA Astrophysics Data System (ADS)

    Li, Dong; Xie, Xudong; Lam, Kin-Man

    2014-03-01

    We present a method that can handle the color correction of multiple photographs with blind image restoration simultaneously and automatically. We prove that the local colors of a set of images of the same scene exhibit the low-rank property locally both before and after a color-correction operation. This property allows us to correct all kinds of errors in an image under a low-rank matrix model without particular priors or assumptions. The possible errors may be caused by changes of viewpoint, large illumination variations, gross pixel corruptions, partial occlusions, etc. Furthermore, a new iterative soft-segmentation method is proposed for local color transfer using color influence maps. Due to the fact that the correct color information and the spatial information of images can be recovered using the low-rank model, more precise color correction and many other image-restoration tasks-including image denoising, image deblurring, and gray-scale image colorizing-can be performed simultaneously. Experiments have verified that our method can achieve consistent and promising results on uncontrolled real photographs acquired from the Internet and that it outperforms current state-of-the-art methods.
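    The low-rank property the method rests on can be illustrated with synthetic data: local colors of aligned views of one scene, stacked as matrix columns, concentrate nearly all their energy in the leading singular values. The construction below (pure per-image color scalings) is a simplified assumption, not the paper's exact formulation:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    base = rng.uniform(0, 1, size=(64, 3))             # local colors of one scene
    views = [base * s for s in (1.0, 0.8, 1.2, 0.9)]   # per-image color scalings
    m = np.hstack([v.reshape(-1, 1) for v in views])   # 192 x 4 observation matrix

    # With purely multiplicative color changes the columns are parallel,
    # so essentially all energy sits in the first singular value.
    s = np.linalg.svd(m, compute_uv=False)
    energy_top1 = s[0]**2 / (s**2).sum()
    ```

    Gross errors (occlusion, corruption) on one image break this only as a sparse residual, which is why a low-rank-plus-sparse decomposition can separate correct colors from errors without priors.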

  14. Combinatorial Color Space Models for Skin Detection in Sub-continental Human Images

    NASA Astrophysics Data System (ADS)

    Khaled, Shah Mostafa; Saiful Islam, Md.; Rabbani, Md. Golam; Tabassum, Mirza Rehenuma; Gias, Alim Ul; Kamal, Md. Mostafa; Muctadir, Hossain Muhammad; Shakir, Asif Khan; Imran, Asif; Islam, Saiful

    Among different color models, HSV, HLS, YIQ, YCbCr, YUV, etc. have been the most popular for skin detection. Most of the research in the field of skin detection has been trained and tested on human images of African, Mongolian, and Anglo-Saxon ethnic origins; skin colors of Indian sub-continentals have not been examined separately. Combinatorial algorithms can be developed from the skin detection concepts of these color models to boost detection performance without affecting asymptotic complexity. In this paper, a comparative study of different combinatorial skin detection algorithms has been made. For training and testing, 200 images (skin and non-skin) containing pictures of sub-continental males and females have been used to measure the performance of the combinatorial approaches, and a considerable improvement in success rate, with a true positive rate of 99.5% and a true negative rate of 93.3%, has been observed.
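    As an example of a single color-model rule that a combinatorial detector could combine (AND/OR) with rules from HSV, YCbCr, etc., here is a commonly cited explicit RGB skin test; the thresholds are illustrative and not tuned for sub-continental skin tones:

    ```python
    import numpy as np

    def rgb_skin_mask(img):
        """Explicit RGB rule: bright-enough, red-dominant pixels with
        sufficient spread between channels. Returns a boolean mask."""
        r, g, b = (img[..., k].astype(int) for k in range(3))
        return ((r > 95) & (g > 40) & (b > 20) &
                (img.max(axis=-1).astype(int) - img.min(axis=-1) > 15) &
                (abs(r - g) > 15) & (r > g) & (r > b))

    skin = np.array([[[200, 120, 90]]], dtype=np.uint8)   # skin-like pixel
    grass = np.array([[[40, 160, 60]]], dtype=np.uint8)   # non-skin pixel
    ```

    A combinatorial detector would evaluate several such per-model masks and vote or intersect them, which leaves the asymptotic per-pixel cost unchanged.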

  15. Color image watermarking against fog effects

    NASA Astrophysics Data System (ADS)

    Chotikawanid, Piyanart; Amornraksa, Thumrongrat

    2017-07-01

    Fog effects in various computer and camera software can partially or fully damage the watermark information within the watermarked image. In this paper, we propose a color image watermarking based on the modification of reflectance component against fog effects. The reflectance component is extracted from the blue color channel in the RGB color space of a host image, and then used to carry a watermark signal. The watermark extraction is blindly achieved by subtracting the estimation of the original reflectance component from the watermarked component. The performance of the proposed watermarking method in terms of wPSNR and NC is evaluated, and then compared with the previous method. The experimental results on robustness against various levels of fog effect, from both computer software and mobile application, demonstrated a higher robustness of our proposed method, compared to the previous one.

  16. Two-color CW STED nanoscopy

    NASA Astrophysics Data System (ADS)

    Chen, Xuanze; Liu, Yujia; Yang, Xusan; Wang, Tingting; Alonas, Eric; Santangelo, Philip J.; Ren, Qiushi; Xi, Peng

    2013-02-01

    Fluorescence microscopy has become an essential tool for studying biological molecules, pathways, and events in living cells, tissues, and animals. Yet even the most advanced confocal microscopy can only yield optical resolution approaching the Abbe diffraction limit of ~200 nm, still larger than many subcellular structures, which are too small to be resolved in detail. These limitations have driven the development of super-resolution optical imaging methodologies over the past decade. In stimulated emission depletion (STED) microscopy, the excitation focus is overlapped by an intense doughnut-shaped spot that instantly de-excites markers from their fluorescent state to the ground state by stimulated emission. This effectively eliminates the periphery of the point spread function (PSF), resulting in a narrower focal region, i.e., super-resolution. Scanning the sharpened spot through the specimen renders images with sub-diffraction resolution. Multi-color STED imaging can provide important structural and functional information on protein-protein interactions. In this work, we present a two-color, synchronization-free STED microscope based on a Ti:Sapphire oscillator. The excitation wavelengths were 532 nm and 635 nm, respectively. With a pump power of 4.6 W and sample irradiance of 310 mW, we achieved super-resolution as high as 71 nm. Human respiratory syncytial virus (hRSV) proteins were imaged with our two-color CW STED for co-localization analysis.
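    The 71 nm figure is consistent with the standard STED resolution scaling d ≈ λ / (2·NA·√(1 + I/Is)), where I is the depletion intensity and Is the marker's saturation intensity. The NA and I/Is values below are illustrative assumptions, not the paper's measured parameters:

    ```python
    import math

    def sted_resolution(wavelength_nm, na, i_over_is):
        """Square-root law: the diffraction limit lambda/(2 NA) is
        sharpened by sqrt(1 + I/Is) as depletion power grows."""
        return wavelength_nm / (2.0 * na * math.sqrt(1.0 + i_over_is))

    d_conf = sted_resolution(635.0, 1.4, 0.0)   # confocal limit, ~227 nm
    d_sted = sted_resolution(635.0, 1.4, 9.0)   # ~72 nm at I = 9 * Is
    ```

    The square root is why STED resolution improves only slowly with depletion power, and why watt-class pump lasers are needed for CW operation.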

  17. Correction of clipped pixels in color images.

    PubMed

    Xu, Di; Doutre, Colin; Nasiopoulos, Panos

    2011-03-01

    Conventional images store a very limited dynamic range of brightness. The true luma in the bright area of such images is often lost due to clipping. When clipping changes the R, G, B color ratios of a pixel, color distortion also occurs. In this paper, we propose an algorithm to enhance both the luma and chroma of the clipped pixels. Our method is based on the strong chroma spatial correlation between clipped pixels and their surrounding unclipped area. After identifying the clipped areas in the image, we partition the clipped areas into regions with similar chroma, and estimate the chroma of each clipped region based on the chroma of its surrounding unclipped region. We correct the clipped R, G, or B color channels based on the estimated chroma and the unclipped color channel(s) of the current pixel. The last step involves smoothing of the boundaries between regions of different clipping scenarios. Both objective and subjective experimental results show that our algorithm is very effective in restoring the color of clipped pixels. © 2011 IEEE
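    A toy instance of the chroma-correlation idea above: a pixel whose R channel is clipped is re-estimated from the R/G ratio of its unclipped neighbors. The helper below is hypothetical and handles only this one scenario; the paper's method segments regions of similar chroma and covers all combinations of clipped channels:

    ```python
    import numpy as np

    def correct_clipped_r(img_f, y, x):
        """img_f: float image; corrected values may exceed 255, extending
        the dynamic range as described above."""
        out = img_f.copy()
        neigh = [(y+dy, x+dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                 if (dy, dx) != (0, 0)]
        # Local chroma estimate: R/G ratio over unclipped neighbors only.
        ratios = [img_f[j, i, 0] / img_f[j, i, 1]
                  for j, i in neigh if img_f[j, i, 0] < 255]
        if ratios:  # re-estimate R from local chroma and own unclipped G
            out[y, x, 0] = np.mean(ratios) * img_f[y, x, 1]
        return out

    img = np.full((3, 3, 3), [200.0, 100.0, 50.0])   # true R/G ratio = 2
    img[1, 1] = [255.0, 140.0, 70.0]                  # center: R clipped
    fixed = correct_clipped_r(img, 1, 1)
    ```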

  18. An imaging colorimeter for noncontact tissue color mapping.

    PubMed

    Balas, C

    1997-06-01

    There has been a considerable effort in several medical fields toward objective color analysis and characterization of biological tissues. Conventional colorimeters have proved inadequate for this purpose, since they do not provide spatial color information and because the measuring procedure randomly affects the color of the tissue. In this paper an imaging colorimeter is presented, in which the nonimaging optical photodetector of colorimeters is replaced with the charge-coupled device (CCD) sensor of a color video camera, enabling the independent capture of color information for any spatial point within its field-of-view. Combining imaging and colorimetry methods, the acquired image is calibrated and corrected under several ambient light conditions, providing noncontact, reproducible color measurements and mapping, free of the errors and limitations present in conventional colorimeters. This system was used for monitoring blood-supply changes of psoriatic plaques that have undergone psoralen and ultraviolet-A radiation (PUVA) therapy, where reproducible and reliable measurements were demonstrated. These features highlight the potential of imaging colorimeters as clinical and research tools for the standardization of clinical diagnosis and the objective evaluation of treatment effectiveness.

  19. Color Histogram Diffusion for Image Enhancement

    NASA Technical Reports Server (NTRS)

    Kim, Taemin

    2011-01-01

    Various color histogram equalization (CHE) methods have been proposed to extend grayscale histogram equalization (GHE) to color images. In this paper a new method called histogram diffusion, which extends the GHE method to arbitrary dimensions, is proposed. Ranges in a histogram are specified as overlapping bars of uniform heights and variable widths which are proportional to their frequencies. This diagram is called the vistogram. As an alternative approach to GHE, the squared error of the vistogram from the uniform distribution is minimized. Each bar in the vistogram is approximated by a Gaussian function. Gaussian particles in the vistogram diffuse as a nonlinear autonomous system of ordinary differential equations. CHE results on color images showed that the approach is effective.
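    For context, the GHE baseline that histogram diffusion generalizes maps each gray level through the normalized cumulative histogram; a minimal sketch:

    ```python
    import numpy as np

    def equalize(gray):
        """Grayscale histogram equalization: each level is remapped to
        255 times its cumulative frequency."""
        hist = np.bincount(gray.ravel(), minlength=256)
        cdf = hist.cumsum() / gray.size
        return np.round(255 * cdf).astype(np.uint8)[gray]

    # A low-contrast ramp (levels 100..139) stretches toward the full range.
    img = np.tile(np.arange(100, 140, dtype=np.uint8), (8, 1))
    eq = equalize(img)
    ```

    The diffusion method replaces this discrete CDF mapping with evolving Gaussian "vistogram" bars, which is what lets it generalize beyond one dimension to color.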

  20. Visual perception enhancement for detection of cancerous oral tissue by multi-spectral imaging

    NASA Astrophysics Data System (ADS)

    Wang, Hsiang-Chen; Tsai, Meng-Tsan; Chiang, Chun-Ping

    2013-05-01

    Color reproduction systems based on the multi-spectral imaging technique (MSI) for both directly estimating reflection spectra and direct visualization of oral tissues using various light sources are proposed. Images from three oral cancer patients were taken as the experimental samples, and spectral differences between pre-cancerous and normal oral mucosal tissues were calculated at three time points during 5-aminolevulinic acid photodynamic therapy (ALA-PDT) to analyze whether they were consistent with disease processes. To check the successful treatment of oral cancer with ALA-PDT, oral cavity images by swept source optical coherence tomography (SS-OCT) are demonstrated. This system can also reproduce images under different light sources. For pre-cancerous detection, the oral images after the second ALA-PDT are assigned as the target samples. By using RGB LEDs with various correlated color temperatures (CCTs) for color difference comparison, the light source with a CCT of about 4500 K was found to have the best ability to enhance the color difference between pre-cancerous and normal oral mucosal tissues in the oral cavity. Compared with the fluorescent lighting commonly used today, the color difference can be improved by 39.2% from 16.5270 to 23.0023. Hence, this light source and spectral analysis increase the efficiency of the medical diagnosis of oral cancer and aid patients in receiving early treatment.
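    The quoted color differences (16.5270 → 23.0023) are on the scale of CIELAB-style ΔE values; the basic CIE76 form is the Euclidean distance between two (L*, a*, b*) coordinates. The Lab values below are invented for illustration, not taken from the study:

    ```python
    import math

    def delta_e76(lab1, lab2):
        """CIE76 color difference: Euclidean distance in CIELAB."""
        return math.dist(lab1, lab2)

    # Hypothetical lesion vs. normal mucosa coordinates under one light source.
    lesion = (55.0, 28.0, 15.0)
    normal = (62.0, 12.0, 9.0)
    de = delta_e76(lesion, normal)
    ```

    A light source that enlarges this distance makes the lesion perceptually easier to distinguish, which is the criterion used to select the ~4500 K source above.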

  1. Retinal oxygen saturation evaluation by multi-spectral fundus imaging

    NASA Astrophysics Data System (ADS)

    Khoobehi, Bahram; Ning, Jinfeng; Puissegur, Elise; Bordeaux, Kimberly; Balasubramanian, Madhusudhanan; Beach, James

    2007-03-01

    Purpose: To develop a multi-spectral method to measure oxygen saturation of the retina in the human eye. Methods: Five Cynomolgus monkeys with normal eyes were anesthetized with intramuscular ketamine/xylazine and intravenous pentobarbital. Multi-spectral fundus imaging was performed in five monkeys with a commercial fundus camera equipped with a liquid crystal tuned filter in the illumination light path and a 16-bit digital camera. Recording parameters were controlled with software written specifically for the application. Seven images at successively longer oxygen-sensing wavelengths were recorded within 4 seconds. Individual images for each wavelength were captured in less than 100 msec of flash illumination. Slightly misaligned images of separate wavelengths due to slight eye motion were registered and corrected by translational and rotational image registration prior to analysis. Numerical values of relative oxygen saturation of retinal arteries and veins and the underlying tissue in between the artery/vein pairs were evaluated by an algorithm previously described, but which is now corrected for blood volume from averaged pixels (n > 1000). Color saturation maps were constructed by applying the algorithm at each image pixel using a Matlab script. Results: Both the numerical values of relative oxygen saturation and the saturation maps correspond to the physiological condition, that is, in a normal retina, the artery is more saturated than the tissue and the tissue is more saturated than the vein. With the multi-spectral fundus camera and proper registration of the multi-wavelength images, we were able to determine oxygen saturation in the primate retinal structures on a tolerable time scale which is applicable to human subjects. Conclusions: Seven wavelength multi-spectral imagery can be used to measure oxygen saturation in retinal artery, vein, and tissue (microcirculation). This technique is safe and can be used to monitor oxygen uptake in humans. 

  2. Accurate color synthesis of three-dimensional objects in an image

    NASA Astrophysics Data System (ADS)

    Xin, John H.; Shen, Hui-Liang

    2004-05-01

    Our study deals with color synthesis of a three-dimensional object in an image; i.e., given a single image, a target color can be accurately mapped onto the object such that the color appearance of the synthesized object closely resembles that of the actual one. As it is almost impossible to acquire the complete geometric description of the surfaces of an object in an image, this study attempted to recover the implicit description of geometry for the color synthesis. The description was obtained from either a series of spectral reflectances or the RGB signals at different surface positions on the basis of the dichromatic reflection model. The experimental results showed that this implicit image-based representation is related to the object geometry and is sufficient for accurate color synthesis of three-dimensional objects in an image. The method established is applicable to the color synthesis of both rigid and deformable objects and should contribute to color fidelity in virtual design, manufacturing, and retailing.

  3. Multi-color CD34⁺ progenitor-focused flow cytometric assay in evaluation of myelodysplastic syndromes in patients with post cancer therapy cytopenia.

    PubMed

    Tang, Guilin; Jorgensen, L Jeffrey; Zhou, Yi; Hu, Ying; Kersh, Marian; Garcia-Manero, Guillermo; Medeiros, L Jeffrey; Wang, Sa A

    2012-08-01

    Bone marrow assessment for myelodysplastic syndrome (MDS) in a patient who develops cytopenia(s) following cancer therapy is challenging. With recent advances in multi-color flow cytometry immunophenotypic analysis, a CD34(+) progenitor-focused 7-color assay was developed and tested in this clinical setting. This assay was first performed in 73 MDS patients and 53 non-MDS patients (developmental set). A number of immunophenotypic changes were differentially observed in these two groups. Based on the sensitivity, specificity and reproducibility, a core panel of markers was selected for final assessment that included increased total CD34(+) myeloblasts; decreased stage I hematogones; altered CD45/side scatter; altered expression of CD13, CD33, CD34, CD38, CD117, and CD123; aberrant expression of lymphoid or mature myelomonocytic antigens on CD34(+) myeloblasts; and several marked alterations in maturing myelomonocytic cells. The data were translated into a simplified scoring system which was then used in 120 patients with cytopenia(s) secondary to cancer therapy over a 2-year period (validation set). With a median follow-up of 11 months, this assay demonstrated 89% sensitivity, 94% specificity, and 92% accuracy in establishing or excluding a diagnosis of MDS. Copyright © 2012 Elsevier Ltd. All rights reserved.

  4. False-color composite image of Raco, Michigan

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This image is a false color composite of Raco, Michigan, centered at 46.39 north latitude and 84.88 west longitude. It was acquired by the Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar (SIR-C/X-SAR) on the 20th orbit of the Shuttle Endeavour. The area shown is approximately 20 kilometers by 50 kilometers. Raco is located at the eastern end of Michigan's upper peninsula, west of Sault Ste. Marie and south of Whitefish Bay on Lake Superior. In this color representation, darker areas in the image are smooth surfaces such as frozen lakes and other non-forested areas. The colors are related to the types of trees, and the brightness is related to the amount of plant material covering the surface, called forest biomass. The Jet Propulsion Laboratory alternative photo number is P-43882.

  5. Image Retrieval by Color Semantics with Incomplete Knowledge.

    ERIC Educational Resources Information Center

    Corridoni, Jacopo M.; Del Bimbo, Alberto; Vicario, Enrico

    1998-01-01

    Presents a system which supports image retrieval by high-level chromatic contents, the sensations that color accordances generate on the observer. Surveys Itten's theory of color semantics and discusses image description and query specification. Presents examples of visual querying. (AEF)

  6. Use of focus groups in multi-site, multi-ethnic research projects for women's health: a Study of Women Across the Nation (swan) example.

    PubMed

    Kagawa-Singer, Marjorie; Adler, Shelley R; Mouton, Charles E; Ory, Marcia; Underwood, Lynne G

    2009-01-01

    To outline the lessons learned about the use of focus groups for the multisite, multi-ethnic longitudinal Study of Women Across the Nation (SWAN). Focus groups were designed to identify potential cultural differences in the incidence of symptoms and the meaning of transmenopause among women of diverse cultures, and to identify effective recruitment and retention strategies. Inductive and deductive focus groups for a multi-ethnic study. Seven community research sites across the United States conducted focus groups with six ethnic populations: African American, Chinese American, Japanese American, Mexican American, non-Hispanic white, and Puerto Rican. Community women from each ethnic group of color. A set of four/five focus groups in each ethnic group as the formative stage of the deductive, quantitative SWAN survey. Identification of methodological advantages and challenges to the successful implementation of formative focus groups in a multi-ethnic, multi-site population-based epidemiologic study. We provide recommendations from our lessons learned to improve the use of focus groups in future studies with multi-ethnic populations. Mixed methods using inductive and deductive approaches require the scientific integrity of both research paradigms. Adequate resources and time must be budgeted as essential parts of the overall strategy from the outset of study. Inductive cross-cultural researchers should be key team members, beginning with inception through each subsequent design phase to increase the scientific validity, generalizability, and comparability of the results across diverse ethnic groups, to assure the relevance, validity and applicability of the findings to the multicultural population of focus.

  7. 3D shape recovery from image focus using Gabor features

    NASA Astrophysics Data System (ADS)

    Mahmood, Fahad; Mahmood, Jawad; Zeb, Ayesha; Iqbal, Javaid

    2018-04-01

    Recovering an accurate and precise depth map from a set of 2-D images of a target object, each carrying different focus information, is the ultimate goal of 3-D shape recovery. The focus measure algorithm plays an important role in this architecture, as it converts color values into focus information that is then used to recover the depth map. This article introduces Gabor features as a focus measure for recovering a depth map from a set of 2-D images. The frequency and orientation representation of Gabor filter features is similar to that of the human visual system, and such features are commonly applied to texture representation. Due to its low computational complexity, sharp focus measure curve, robustness to random noise, and accuracy, the proposed measure is a strong alternative to most recently proposed 3-D shape recovery approaches. The algorithm is investigated in depth on real image sequences and a synthetic image dataset, and its efficiency is compared with state-of-the-art 3-D shape recovery approaches. Finally, by means of two global statistical measures, root mean square error and correlation, we show that this approach, in spite of its simplicity, generates accurate results.
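
    The depth-from-focus pipeline the abstract describes (filter each frame, then take the per-pixel focus maximum across the stack) can be sketched as follows; the kernel parameters and the squared-response focus measure are illustrative assumptions, not the authors' exact implementation:

```python
import numpy as np

def gabor_kernel(size=9, sigma=2.0, theta=0.0, lam=4.0):
    """Real part of a Gabor filter: a Gaussian-windowed cosine grating."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def focus_volume(stack, kernel):
    """Per-pixel focus measure: squared response to the Gabor filter."""
    H, W = stack[0].shape
    kh, kw = kernel.shape
    out = []
    for img in stack:
        # frequency-domain 'full' convolution, cropped back to image size
        F = np.fft.rfft2(img, s=(H + kh - 1, W + kw - 1))
        K = np.fft.rfft2(kernel, s=(H + kh - 1, W + kw - 1))
        resp = np.fft.irfft2(F * K, s=(H + kh - 1, W + kw - 1))
        out.append(resp[kh // 2:kh // 2 + H, kw // 2:kw // 2 + W] ** 2)
    return np.stack(out)               # shape: (n_frames, H, W)

def depth_from_focus(stack, kernel):
    """Depth index = frame with the strongest focus response at each pixel."""
    return np.argmax(focus_volume(stack, kernel), axis=0)
```

    A pixel's depth index is simply the frame in which its Gabor response energy peaks; a real system would refine this argmax, e.g. by fitting a curve through the focus measures of neighboring frames.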

  8. Color calibration and fusion of lens-free and mobile-phone microscopy images for high-resolution and accurate color reproduction

    NASA Astrophysics Data System (ADS)

    Zhang, Yibo; Wu, Yichen; Zhang, Yun; Ozcan, Aydogan

    2016-06-01

    Lens-free holographic microscopy can achieve wide-field imaging in a cost-effective and field-portable setup, making it a promising technique for point-of-care and telepathology applications. However, due to relatively narrow-band sources used in holographic microscopy, conventional colorization methods that use images reconstructed at discrete wavelengths, corresponding to e.g., red (R), green (G) and blue (B) channels, are subject to color artifacts. Furthermore, these existing RGB colorization methods do not match the chromatic perception of human vision. Here we present a high-color-fidelity and high-resolution imaging method, termed “digital color fusion microscopy” (DCFM), which fuses a holographic image acquired at a single wavelength with a color-calibrated image taken by a low-magnification lens-based microscope using a wavelet transform-based colorization method. We demonstrate accurate color reproduction of DCFM by imaging stained tissue sections. In particular we show that a lens-free holographic microscope in combination with a cost-effective mobile-phone-based microscope can generate color images of specimens, performing very close to a high numerical-aperture (NA) benchtop microscope that is corrected for color distortions and chromatic aberrations, also matching the chromatic response of human vision. This method can be useful for wide-field imaging needs in telepathology applications and in resource-limited settings, where whole-slide scanning microscopy systems are not available.
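
    The paper's DCFM method uses a wavelet-transform-based colorization; as a simplified stand-in for the underlying fusion idea (high-resolution luminance from the monochrome hologram, chrominance from the low-resolution color image), here is a luminance/chroma substitution sketch using BT.601 YCbCr and nearest-neighbour chroma upsampling. The function names and the nearest-neighbour choice are assumptions:

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """BT.601 luma plus two chroma-difference channels."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    return y, 0.564 * (b - y), 0.713 * (r - y)

def ycbcr_to_rgb(y, cb, cr):
    r = y + 1.403 * cr
    g = y - 0.344 * cb - 0.714 * cr
    b = y + 1.773 * cb
    return np.clip(np.stack([r, g, b], axis=-1), 0.0, 1.0)

def fuse(mono_hi, rgb_lo, scale):
    """Keep the chrominance of the low-res color image and the luminance
    of the high-res monochrome image (nearest-neighbour chroma upsampling)."""
    _, cb, cr = rgb_to_ycbcr(rgb_lo)
    cb = np.repeat(np.repeat(cb, scale, 0), scale, 1)
    cr = np.repeat(np.repeat(cr, scale, 0), scale, 1)
    return ycbcr_to_rgb(mono_hi, cb, cr)
```
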

  9. Color calibration and fusion of lens-free and mobile-phone microscopy images for high-resolution and accurate color reproduction

    PubMed Central

    Zhang, Yibo; Wu, Yichen; Zhang, Yun; Ozcan, Aydogan

    2016-01-01

    Lens-free holographic microscopy can achieve wide-field imaging in a cost-effective and field-portable setup, making it a promising technique for point-of-care and telepathology applications. However, due to relatively narrow-band sources used in holographic microscopy, conventional colorization methods that use images reconstructed at discrete wavelengths, corresponding to e.g., red (R), green (G) and blue (B) channels, are subject to color artifacts. Furthermore, these existing RGB colorization methods do not match the chromatic perception of human vision. Here we present a high-color-fidelity and high-resolution imaging method, termed “digital color fusion microscopy” (DCFM), which fuses a holographic image acquired at a single wavelength with a color-calibrated image taken by a low-magnification lens-based microscope using a wavelet transform-based colorization method. We demonstrate accurate color reproduction of DCFM by imaging stained tissue sections. In particular we show that a lens-free holographic microscope in combination with a cost-effective mobile-phone-based microscope can generate color images of specimens, performing very close to a high numerical-aperture (NA) benchtop microscope that is corrected for color distortions and chromatic aberrations, also matching the chromatic response of human vision. This method can be useful for wide-field imaging needs in telepathology applications and in resource-limited settings, where whole-slide scanning microscopy systems are not available. PMID:27283459

  10. Color calibration and fusion of lens-free and mobile-phone microscopy images for high-resolution and accurate color reproduction.

    PubMed

    Zhang, Yibo; Wu, Yichen; Zhang, Yun; Ozcan, Aydogan

    2016-06-10

    Lens-free holographic microscopy can achieve wide-field imaging in a cost-effective and field-portable setup, making it a promising technique for point-of-care and telepathology applications. However, due to relatively narrow-band sources used in holographic microscopy, conventional colorization methods that use images reconstructed at discrete wavelengths, corresponding to e.g., red (R), green (G) and blue (B) channels, are subject to color artifacts. Furthermore, these existing RGB colorization methods do not match the chromatic perception of human vision. Here we present a high-color-fidelity and high-resolution imaging method, termed "digital color fusion microscopy" (DCFM), which fuses a holographic image acquired at a single wavelength with a color-calibrated image taken by a low-magnification lens-based microscope using a wavelet transform-based colorization method. We demonstrate accurate color reproduction of DCFM by imaging stained tissue sections. In particular we show that a lens-free holographic microscope in combination with a cost-effective mobile-phone-based microscope can generate color images of specimens, performing very close to a high numerical-aperture (NA) benchtop microscope that is corrected for color distortions and chromatic aberrations, also matching the chromatic response of human vision. This method can be useful for wide-field imaging needs in telepathology applications and in resource-limited settings, where whole-slide scanning microscopy systems are not available.

  11. False-color L-band image of Manaus region of Brazil

    NASA Image and Video Library

    1994-04-13

    STS059-S-068 (13 April 1994) --- This false-color L-Band image of the Manaus region of Brazil was acquired by the Spaceborne Imaging Radar-C and X-Band Synthetic Aperture Radar (SIR-C/X-SAR) aboard the Space Shuttle Endeavour on orbit 46 of the mission. The area shown is approximately 8 kilometers by 40 kilometers (5 by 25 miles). At the top of the image are the Solimoes and Rio Negro Rivers just before they combine at Manaus to form the Amazon River. The image is centered at about 3 degrees south latitude, and 61 degrees west longitude. The false colors are created by displaying three L-Band polarization channels; red areas correspond to high backscatter at HH polarization, while green areas exhibit high backscatter at HV polarization. Blue areas show low returns at VV polarization; hence the bright blue colors of the smooth river surfaces. Using this color scheme, green areas in the image are heavily forested, while blue areas are either cleared forest or open water. The yellow and red areas are flooded forest. Between Rio Solimoes and Rio Negro a road can be seen running from some cleared areas (visible as blue rectangles north of Rio Solimoes) north towards a tributary of Rio Negro. SIR-C/X-SAR is part of NASA's Mission to Planet Earth (MTPE). SIR-C/X-SAR radars illuminate Earth with microwaves allowing detailed observations at any time, regardless of weather or sunlight conditions. SIR-C/X-SAR uses three microwave wavelengths: L-Band (24 cm), C-Band (6 cm), and X-Band (3 cm). The multi-frequency data will be used by the international scientific community to better understand the global environment and how it is changing. The SIR-C/X-SAR data, complemented by aircraft and ground studies, will give scientists clearer insights into those environmental changes which are caused by nature and those changes which are induced by human activity. SIR-C was developed by NASA's Jet Propulsion Laboratory (JPL). 
X-SAR was developed by the Dornier and Alenia Spazio companies.

  12. Asymmetric color image encryption based on singular value decomposition

    NASA Astrophysics Data System (ADS)

    Yao, Lili; Yuan, Caojin; Qiang, Junjie; Feng, Shaotong; Nie, Shouping

    2017-02-01

    A novel asymmetric color image encryption approach using singular value decomposition (SVD) is proposed. The original color image is encrypted into a ciphertext shown as an indexed image. The red, green and blue components of the color image are first encoded into a complex function, which is then separated into U, S and V parts by SVD. The data matrix of the ciphertext is obtained by multiplying the orthogonal matrices U and V while implementing phase truncation. Diagonal entries of the three diagonal matrices from the SVD are extracted, scrambled, and combined to construct the colormap of the ciphertext. Thus, the encrypted indexed image occupies less space than the original image. For decryption, the original color image cannot be recovered without the private keys, which are obtained from the phase truncation and the orthogonality of V. Computer simulations are presented to evaluate the performance of the proposed algorithm, and we also analyze the security of the proposed system.
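
    The SVD split the abstract describes can be illustrated in a few lines; this sketch covers only the decomposition step (the phase truncation, scrambling, and colormap packing of the actual scheme are omitted):

```python
import numpy as np

rng = np.random.default_rng(1)
plane = rng.random((8, 8))              # stand-in for one encoded color plane

U, s, Vt = np.linalg.svd(plane, full_matrices=False)

# Ciphertext "data matrix": the product of the two orthogonal factors.
data = U @ Vt
assert np.allclose(data @ data.T, np.eye(8))    # orthogonal on its own

# The singular values act as side information (packed into the colormap in
# the paper); with them, the plane is recovered exactly:
recon = U @ np.diag(s) @ Vt
assert np.allclose(recon, plane)
```
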

  13. Color Sparse Representations for Image Processing: Review, Models, and Prospects.

    PubMed

    Barthélemy, Quentin; Larue, Anthony; Mars, Jérôme I

    2015-11-01

    Sparse representations have been extended to deal with color images composed of three channels. A review of dictionary-learning-based sparse representations for color images is given here, detailing the differences between the models and comparing their results on real and simulated data. These models are considered in a unifying framework based on the degrees of freedom of the linear filtering/transformation of the color channels. This framework also shows that the scalar quaternionic linear model is equivalent to constrained matrix-based color filtering, which highlights the filtering implicitly applied through this model. Based on this reformulation, a new color filtering model is introduced that uses unconstrained filters. In this model, the spatial morphologies of color images are encoded by atoms, and colors are encoded by color filters. Color variability is no longer captured by increasing the dictionary size but by the color filters, giving an efficient color representation.

  14. Color image encryption based on color blend and chaos permutation in the reality-preserving multiple-parameter fractional Fourier transform domain

    NASA Astrophysics Data System (ADS)

    Lang, Jun

    2015-03-01

    In this paper, we propose a novel color image encryption method using Color Blend (CB) and Chaos Permutation (CP) operations in the reality-preserving multiple-parameter fractional Fourier transform (RPMPFRFT) domain. The original color image is first exchanged and mixed randomly from the standard red-green-blue (RGB) color space to an R‧G‧B‧ color space by rotating the color cube with a random angle matrix. The RPMPFRFT is then employed to change the pixel values of the color image: the three components of the scrambled color space are converted by the RPMPFRFT with three different transform pairs, respectively. In contrast to transforms with complex-valued output, the RPMPFRFT ensures that the output is real, which saves image storage space and is convenient for transmission in practical applications. To further enhance the security of the encryption system, the output of the former steps is scrambled by juxtaposing sections of the image in the reality-preserving multiple-parameter fractional Fourier domains, with the alignment of sections determined by two coupled chaotic logistic maps. The parameters of the Color Blend, Chaos Permutation, and RPMPFRFT operations serve as the key of the encryption algorithm. The proposed color image encryption can also be applied to encrypt three gray images by treating them as the RGB components of a specially constructed color image. Numerical simulations demonstrate that the proposed algorithm is feasible, secure, sensitive to keys, and robust to noise attack and data loss.
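
    The chaos-permutation step can be sketched with a single logistic map; the paper uses two coupled maps over image sections, so this pixel-level, single-map version is a simplification, and the key values below are arbitrary:

```python
import numpy as np

def logistic_sequence(x0, r, n, burn_in=100):
    """Iterate the logistic map x <- r*x*(1-x); discard transients."""
    x = x0
    for _ in range(burn_in):
        x = r * x * (1 - x)
    seq = np.empty(n)
    for i in range(n):
        x = r * x * (1 - x)
        seq[i] = x
    return seq

def chaos_permute(img, key=(0.3567, 3.99)):
    """Scramble pixel positions by the rank order of a chaotic sequence."""
    flat = img.ravel()
    order = np.argsort(logistic_sequence(key[0], key[1], flat.size))
    return flat[order].reshape(img.shape), order

def chaos_unpermute(scrambled, order):
    """Invert the permutation (requires the same key-derived order)."""
    flat = np.empty(scrambled.size)
    flat[order] = scrambled.ravel()
    return flat.reshape(scrambled.shape)
```

    Because the map is extremely sensitive to its initial condition, even a tiny change in the key produces an entirely different permutation, which is what makes such maps attractive for scrambling.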

  15. Optimal chroma-like channel design for passive color image splicing detection

    NASA Astrophysics Data System (ADS)

    Zhao, Xudong; Li, Shenghong; Wang, Shilin; Li, Jianhua; Yang, Kongjin

    2012-12-01

    Image splicing is one of the most common image forgeries in our daily life and due to the powerful image manipulation tools, image splicing is becoming easier and easier. Several methods have been proposed for image splicing detection and all of them worked on certain existing color channels. However, the splicing artifacts vary in different color channels and the selection of color model is important for image splicing detection. In this article, instead of finding an existing color model, we propose a color channel design method to find the most discriminative channel which is referred to as optimal chroma-like channel for a given feature extraction method. Experimental results show that both spatial and frequency features extracted from the designed channel achieve higher detection rate than those extracted from traditional color channels.
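
    A minimal sketch of the channel-design idea: search for a weight vector w so that the scalar channel w·[R, G, B] best separates authentic from spliced feature samples. The Fisher ratio and the random search below are stand-ins for the paper's detection-rate criterion, not its actual optimization:

```python
import numpy as np

def fisher_ratio(a, b):
    """Between-class over within-class scatter for two 1-D samples."""
    return (a.mean() - b.mean()) ** 2 / (a.var() + b.var() + 1e-12)

def best_chroma_like_channel(authentic, spliced, n_trials=200):
    """Random search over unit weight vectors w; the designed channel is
    w . [R, G, B].  `authentic`/`spliced` are (n, 3) RGB feature arrays."""
    rng = np.random.default_rng(0)
    best_w, best_score = None, -np.inf
    for w in rng.standard_normal((n_trials, 3)):
        w = w / np.linalg.norm(w)
        score = fisher_ratio(authentic @ w, spliced @ w)
        if score > best_score:
            best_w, best_score = w, score
    return best_w, best_score
```
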

  16. Multi-Feature Classification of Multi-Sensor Satellite Imagery Based on Dual-Polarimetric Sentinel-1A, Landsat-8 OLI, and Hyperion Images for Urban Land-Cover Classification.

    PubMed

    Zhou, Tao; Li, Zhaofu; Pan, Jianjun

    2018-01-27

    This paper focuses on evaluating the ability and contribution of using backscatter intensity, texture, coherence, and color features extracted from Sentinel-1A data for urban land cover classification and comparing different multi-sensor land cover mapping methods to improve classification accuracy. Both Landsat-8 OLI and Hyperion images were also acquired, in combination with Sentinel-1A data, to explore the potential of different multi-sensor urban land cover mapping methods to improve classification accuracy. The classification was performed using a random forest (RF) method. The results showed that the optimal window size of the combination of all texture features was 9 × 9, and the optimal window size was different for each individual texture feature. For the four different feature types, the texture features contributed the most to the classification, followed by the coherence and backscatter intensity features; and the color features had the least impact on the urban land cover classification. Satisfactory classification results can be obtained using only the combination of texture and coherence features, with an overall accuracy up to 91.55% and a kappa coefficient up to 0.8935, respectively. Among all combinations of Sentinel-1A-derived features, the combination of the four features had the best classification result. Multi-sensor urban land cover mapping obtained higher classification accuracy. The combination of Sentinel-1A and Hyperion data achieved higher classification accuracy compared to the combination of Sentinel-1A and Landsat-8 OLI images, with an overall accuracy of up to 99.12% and a kappa coefficient up to 0.9889. When Sentinel-1A data was added to Hyperion images, the overall accuracy and kappa coefficient were increased by 4.01% and 0.0519, respectively.

  17. Multi-Feature Classification of Multi-Sensor Satellite Imagery Based on Dual-Polarimetric Sentinel-1A, Landsat-8 OLI, and Hyperion Images for Urban Land-Cover Classification

    PubMed Central

    Pan, Jianjun

    2018-01-01

    This paper focuses on evaluating the ability and contribution of using backscatter intensity, texture, coherence, and color features extracted from Sentinel-1A data for urban land cover classification and comparing different multi-sensor land cover mapping methods to improve classification accuracy. Both Landsat-8 OLI and Hyperion images were also acquired, in combination with Sentinel-1A data, to explore the potential of different multi-sensor urban land cover mapping methods to improve classification accuracy. The classification was performed using a random forest (RF) method. The results showed that the optimal window size of the combination of all texture features was 9 × 9, and the optimal window size was different for each individual texture feature. For the four different feature types, the texture features contributed the most to the classification, followed by the coherence and backscatter intensity features; and the color features had the least impact on the urban land cover classification. Satisfactory classification results can be obtained using only the combination of texture and coherence features, with an overall accuracy up to 91.55% and a kappa coefficient up to 0.8935, respectively. Among all combinations of Sentinel-1A-derived features, the combination of the four features had the best classification result. Multi-sensor urban land cover mapping obtained higher classification accuracy. The combination of Sentinel-1A and Hyperion data achieved higher classification accuracy compared to the combination of Sentinel-1A and Landsat-8 OLI images, with an overall accuracy of up to 99.12% and a kappa coefficient up to 0.9889. When Sentinel-1A data was added to Hyperion images, the overall accuracy and kappa coefficient were increased by 4.01% and 0.0519, respectively. PMID:29382073

  18. Medical color displays and their calibration

    NASA Astrophysics Data System (ADS)

    Fan, Jiahua; Roehrig, Hans; Dallas, W.; Krupinski, Elizabeth

    2009-08-01

    Color displays are increasingly used for medical imaging, replacing the traditional monochrome displays in radiology for multi-modality applications, 3D representation applications, and more. Color displays are also used increasingly because of the widespread adoption of telemedicine, teledermatology, and digital pathology. At this time, there is no concerted effort to establish calibration procedures for this diverse range of color displays in telemedicine and in other areas of the medical field. Using a colorimeter to measure display luminance and chrominance properties, together with processing software, we developed a first attempt at a color calibration protocol for the medical imaging field.

  19. Color image generation for screen-scanning holographic display.

    PubMed

    Takaki, Yasuhiro; Matsumoto, Yuji; Nakajima, Tatsumi

    2015-10-19

    Horizontally scanning holography using a microelectromechanical system spatial light modulator (MEMS-SLM) can provide reconstructed images with an enlarged screen size and an increased viewing zone angle. Herein, we propose techniques to enable color image generation for a screen-scanning display system employing a single MEMS-SLM. Higher-order diffraction components generated by the MEMS-SLM for R, G, and B laser lights were coupled by providing proper illumination angles on the MEMS-SLM for each color. An error diffusion technique to binarize the hologram patterns was developed, in which the error diffusion directions were determined for each color. Color reconstructed images with a screen size of 6.2 in. and a viewing zone angle of 10.2° were generated at a frame rate of 30 Hz.
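
    The binarization step can be sketched with standard Floyd-Steinberg error diffusion; the paper adapts the diffusion directions per color channel, so this single-channel version with the classic 7/16-3/16-5/16-1/16 weights is only the baseline idea:

```python
import numpy as np

def error_diffusion_binarize(pattern, threshold=0.5):
    """Floyd-Steinberg error diffusion: binarize each pixel while pushing
    its quantization error onto the unprocessed neighbours."""
    img = pattern.astype(float).copy()
    H, W = img.shape
    out = np.zeros_like(img)
    for y in range(H):
        for x in range(W):
            out[y, x] = 1.0 if img[y, x] >= threshold else 0.0
            err = img[y, x] - out[y, x]
            if x + 1 < W:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < H:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < W:
                    img[y + 1, x + 1] += err * 1 / 16
    return out
```

    Because the error is redistributed rather than discarded, the binary pattern preserves the local mean of the continuous hologram, which is why error diffusion is preferred over simple thresholding for binary SLMs.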

  20. Minimized-Laplacian residual interpolation for color image demosaicking

    NASA Astrophysics Data System (ADS)

    Kiku, Daisuke; Monno, Yusuke; Tanaka, Masayuki; Okutomi, Masatoshi

    2014-03-01

    A color difference interpolation technique is widely used for color image demosaicking. In this paper, we propose minimized-Laplacian residual interpolation (MLRI) as an alternative to color difference interpolation, where the residuals are differences between observed and tentatively estimated pixel values. In MLRI, we estimate the tentative pixel values by minimizing the Laplacian energies of the residuals. This residual image transformation allows us to interpolate more easily than the standard color difference transformation. We incorporate the proposed MLRI into the gradient-based threshold-free (GBTF) algorithm, one of the current state-of-the-art demosaicking algorithms. Experimental results demonstrate that our proposed demosaicking algorithm outperforms the state-of-the-art algorithms on the 30 images of the IMAX and Kodak datasets.
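
    The residual idea — interpolate the (smooth) difference between the target channel and a tentative estimate, rather than the channel itself — can be shown in 1-D. MLRI obtains its tentative estimate by minimizing Laplacian energy with guided filtering; the affine fit below is a simplified stand-in for that step:

```python
import numpy as np

def residual_interpolate(guide, samples, mask):
    """1-D residual interpolation sketch.
    guide:   fully sampled guide channel (e.g. R)
    samples: target channel (e.g. G), valid only where mask is True
    The target is estimated as tentative + linearly interpolated residual."""
    x = np.arange(guide.size)
    # tentative estimate: least-squares affine fit of target to guide
    a, b = np.polyfit(guide[mask], samples[mask], 1)
    tentative = a * guide + b
    residual = samples[mask] - tentative[mask]
    # interpolate the (hopefully smooth) residual at the missing positions
    return tentative + np.interp(x, x[mask], residual)
```

    When the target channel really is an affine function of the guide, the residual vanishes and the reconstruction is exact; in general the residual is smoother than the raw channel, which is what makes it easier to interpolate.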

  1. Adaptive Residual Interpolation for Color and Multispectral Image Demosaicking †

    PubMed Central

    Kiku, Daisuke; Okutomi, Masatoshi

    2017-01-01

    Color image demosaicking for the Bayer color filter array is an essential image processing operation for acquiring high-quality color images. Recently, residual interpolation (RI)-based algorithms have demonstrated superior demosaicking performance over conventional color difference interpolation-based algorithms. In this paper, we propose adaptive residual interpolation (ARI) that improves existing RI-based algorithms by adaptively combining two RI-based algorithms and selecting a suitable iteration number at each pixel. These are performed based on a unified criterion that evaluates the validity of an RI-based algorithm. Experimental comparisons using standard color image datasets demonstrate that ARI can improve existing RI-based algorithms by more than 0.6 dB in the color peak signal-to-noise ratio and can outperform state-of-the-art algorithms based on training images. We further extend ARI for a multispectral filter array, in which more than three spectral bands are arrayed, and demonstrate that ARI can achieve state-of-the-art performance also for the task of multispectral image demosaicking. PMID:29194407

  2. Color Retinal Image Enhancement Based on Luminosity and Contrast Adjustment.

    PubMed

    Zhou, Mei; Jin, Kai; Wang, Shaoze; Ye, Juan; Qian, Dahong

    2018-03-01

    Many common eye diseases and cardiovascular diseases can be diagnosed through retinal imaging. However, due to uneven illumination, image blurring, and low contrast, retinal images with poor quality are not useful for diagnosis, especially in automated image analyzing systems. Here, we propose a new image enhancement method to improve color retinal image luminosity and contrast. A luminance gain matrix, which is obtained by gamma correction of the value channel in the HSV (hue, saturation, and value) color space, is used to enhance the R, G, and B (red, green and blue) channels, respectively. Contrast is then enhanced in the luminosity channel of L * a * b * color space by CLAHE (contrast-limited adaptive histogram equalization). Image enhancement by the proposed method is compared to other methods by evaluating quality scores of the enhanced images. The performance of the method is mainly validated on a dataset of 961 poor-quality retinal images. Quality assessment (range 0-1) of image enhancement of this poor dataset indicated that our method improved color retinal image quality from an average of 0.0404 (standard deviation 0.0291) up to an average of 0.4565 (standard deviation 0.1000). The proposed method is shown to achieve superior image enhancement compared to contrast enhancement in other color spaces or by other related methods, while simultaneously preserving image naturalness. This method of color retinal image enhancement may be employed to assist ophthalmologists in more efficient screening of retinal diseases and in development of improved automated image analysis for clinical diagnosis.
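
    The luminance-gain step can be sketched in a few lines: gamma-correct the HSV value channel (V = max of R, G, B) and apply the resulting per-pixel gain to all three channels, so hue and saturation are preserved. The CLAHE stage in L*a*b* is omitted here, and the function name and gamma value are illustrative assumptions:

```python
import numpy as np

def gamma_luminosity_gain(rgb, gamma=0.6, eps=1e-6):
    """Boost luminosity: gamma-correct V = max(R, G, B) and scale R, G, B
    by the same per-pixel gain, leaving channel ratios (hue) unchanged."""
    v = rgb.max(axis=-1)
    gain = (v + eps) ** gamma / (v + eps)      # = v**(gamma - 1)
    return np.clip(rgb * gain[..., None], 0.0, 1.0)
```

    With gamma < 1 the gain is largest for dark pixels, which is exactly the behaviour wanted for unevenly illuminated retinal images.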

  3. Color quality improvement of reconstructed images in color digital holography using speckle method and spectral estimation

    NASA Astrophysics Data System (ADS)

    Funamizu, Hideki; Onodera, Yusei; Aizu, Yoshihisa

    2018-05-01

    In this study, we report color quality improvement of reconstructed images in color digital holography using the speckle method and spectral estimation. In this technique, an object is illuminated by a speckle field to produce an object wave, while a plane wave is used as a reference wave. For three wavelengths, the interference patterns of the two coherent waves are recorded as digital holograms on an image sensor. The speckle fields are changed by moving a ground glass plate in an in-plane direction, and a number of holograms are acquired so that the reconstructed images can be averaged. After the averaging process over images reconstructed from multiple holograms, we use the Wiener estimation method to obtain spectral transmittance curves in the reconstructed images. The color reproducibility of this method is demonstrated and evaluated using a Macbeth color chart film and stained onion cells.

  4. Preparing Colorful Astronomical Images and Illustrations

    NASA Astrophysics Data System (ADS)

    Levay, Z. G.; Frattare, L. M.

    2001-12-01

    We present techniques for using mainstream graphics software, specifically Adobe Photoshop and Illustrator, for producing composite color images and illustrations from astronomical data. These techniques have been used with numerous images from the Hubble Space Telescope to produce printed and web-based news, education and public presentation products as well as illustrations for technical publication. While Photoshop is not intended for quantitative analysis of full dynamic range data (as are IRAF or IDL, for example), we have had much success applying Photoshop's numerous, versatile tools to work with scaled images, masks, text and graphics in multiple semi-transparent layers and channels. These features, along with its user-oriented, visual interface, provide convenient tools to produce high-quality, full-color images and graphics for printed and on-line publication and presentation.

  5. Quantitative characterization of color Doppler images: reproducibility, accuracy, and limitations.

    PubMed

    Delorme, S; Weisser, G; Zuna, I; Fein, M; Lorenz, A; van Kaick, G

    1995-01-01

    A computer-based quantitative analysis for color Doppler images of complex vascular formations is presented. The red-green-blue (RGB) signal from an Acuson XP10 is frame-grabbed and digitized. By matching each image pixel with the color bar, color pixels are identified and assigned to the corresponding flow velocity (color value). Data analysis consists of delineation of a region of interest and calculation of the relative number of color pixels in this region (color pixel density) as well as the mean color value. The mean color value was compared to flow velocities in a flow phantom. The thyroid and carotid artery in a volunteer were repeatedly examined by a single examiner to assess intra-observer variability. The thyroids in five healthy controls were examined by three experienced physicians to assess the extent of inter-observer variability and observer bias. The correlation between the mean color value and flow velocity ranged from 0.94 to 0.96 for a range of velocities determined by pulse repetition frequency. The average deviation of the mean color value from the flow velocity was 22% to 41%, depending on the selected pulse repetition frequency (range of deviations, -46% to +66%). Flow velocity was underestimated with an inadequately low pulse repetition frequency or an inadequately high reject threshold; an overestimation occurred with an inadequately high pulse repetition frequency. The highest intra-observer variability was 22% (relative standard deviation) for the color pixel density and 9.1% for the mean color value. The inter-observer variation was approximately 30% for the color pixel density and 20% for the mean color value. In conclusion, computer-assisted image analysis permits an objective description of color Doppler images. However, the user must be aware that image acquisition under in vivo conditions, as well as physical and instrumental factors, may considerably influence the results.
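
    The pixel-matching analysis can be sketched as a nearest-neighbour lookup against the color bar; the tolerance threshold and the Euclidean RGB distance used below are assumptions, not details from the paper:

```python
import numpy as np

def doppler_quantify(image, color_bar, velocities, tol=0.05):
    """Match each pixel to the nearest colour-bar entry; pixels within
    `tol` (Euclidean RGB distance) count as colour (flow) pixels.
    Returns colour pixel density and mean mapped velocity."""
    px = image.reshape(-1, 3)
    d = np.linalg.norm(px[:, None, :] - color_bar[None, :, :], axis=2)
    nearest = d.argmin(axis=1)
    is_color = d.min(axis=1) < tol
    density = is_color.mean()
    mean_vel = velocities[nearest[is_color]].mean() if is_color.any() else 0.0
    return density, mean_vel
```
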

  6. New Windows based Color Morphological Operators for Biomedical Image Processing

    NASA Astrophysics Data System (ADS)

    Pastore, Juan; Bouchet, Agustina; Brun, Marcel; Ballarin, Virginia

    2016-04-01

    Morphological image processing is well known as an efficient methodology for image processing and computer vision. With the wide use of color in many areas, interest in color perception and processing has been growing rapidly. Many models have been proposed to extend morphological operators to the field of color images, dealing with new problems not present in the binary and gray-level contexts. These solutions usually deal with the lattice structure of the color space, or provide it with total orders, in order to define basic operators with the required properties. In this work we propose a new locally defined ordering, in the context of window-based morphological operators, for the definition of erosion-like and dilation-like operators, which provides the desired properties expected from color morphology while avoiding some of the drawbacks of prior approaches. Experimental results show that the proposed color operators can be efficiently used for color image processing.
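
    A window-based, locally ordered color erosion can be sketched as follows: within each window the output is the complete RGB vector of the pixel that is minimal under a locally defined order (here, luminance), so no false colors are introduced. The luminance-based order is an illustrative choice, not necessarily the ordering the authors propose:

```python
import numpy as np

def color_erosion(img, size=3):
    """Erosion-like operator: in each window, keep the full RGB vector of
    the pixel with minimal luminance (a locally defined total order)."""
    half = size // 2
    H, W, _ = img.shape
    pad = np.pad(img, ((half, half), (half, half), (0, 0)), mode='edge')
    lum = 0.299 * pad[..., 0] + 0.587 * pad[..., 1] + 0.114 * pad[..., 2]
    out = np.empty_like(img)
    for y in range(H):
        for x in range(W):
            win = lum[y:y + size, x:x + size]
            iy, ix = np.unravel_index(win.argmin(), win.shape)
            out[y, x] = pad[y + iy, x + ix]
    return out
```

    Selecting a whole pixel vector, rather than taking channel-wise minima, is what prevents the operator from synthesizing colors absent from the input.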

  7. A coded structured light system based on primary color stripe projection and monochrome imaging.

    PubMed

    Barone, Sandro; Paoli, Alessandro; Razionale, Armando Viviano

    2013-10-14

    Coded Structured Light techniques represent one of the most attractive research areas within the field of optical metrology. The coding procedures are typically based on projecting either a single pattern or a temporal sequence of patterns to provide 3D surface data. In this context, multi-slit or stripe colored patterns may be used with the aim of reducing the number of projected images. However, color imaging sensors require the use of calibration procedures to address crosstalk effects between different channels and to reduce the chromatic aberrations. In this paper, a Coded Structured Light system has been developed by integrating a color stripe projector and a monochrome camera. A discrete coding method, which combines spatial and temporal information, is generated by sequentially projecting and acquiring a small set of fringe patterns. The method allows the concurrent measurement of geometrical and chromatic data by exploiting the benefits of using a monochrome camera. The proposed methodology has been validated by measuring nominal primitive geometries and free-form shapes. The experimental results have been compared with those obtained by using a time-multiplexing gray code strategy.
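
    The temporal Gray-code component of such coding schemes can be sketched independently of the color stripes: pattern k encodes bit k of each column's Gray code (so adjacent columns differ in exactly one pattern), and the observed bit sequence decodes back to the column index:

```python
import numpy as np

def gray_code_patterns(width, n_bits):
    """Stripe pattern k is bit k of the Gray code of each column index."""
    cols = np.arange(width)
    gray = cols ^ (cols >> 1)                       # binary -> Gray
    return [((gray >> k) & 1).astype(float) for k in range(n_bits)]

def decode_column(bits):
    """Recover the column index from its observed Gray-code bits."""
    g = 0
    for k, b in enumerate(bits):
        g |= int(b) << k
    mask = g >> 1                                   # Gray -> binary
    while mask:
        g ^= mask
        mask >>= 1
    return g
```

    The one-bit-per-transition property makes decoding robust: a stripe-boundary pixel can be wrong in at most one pattern, so its decoded column is off by at most one.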

  8. Research on image complexity evaluation method based on color information

    NASA Astrophysics Data System (ADS)

    Wang, Hao; Duan, Jin; Han, Xue-hui; Xiao, Bo

    2017-11-01

    In order to evaluate the complexity of a color image more effectively and to find the connection between image complexity and image information, this paper presents a method for computing image complexity based on color information. The theoretical analysis first divides complexity at the subjective level into three classes: low, medium, and high. Image features are then extracted, and finally a function is established between the complexity value and the color feature model. The experimental results show that this evaluation method can objectively reconstruct the complexity of an image from its features, and that the results agree well with human visual perception of complexity, giving the color-based complexity measure a certain reference value.

  9. Color Based Bags-of-Emotions

    NASA Astrophysics Data System (ADS)

    Solli, Martin; Lenz, Reiner

    In this paper we describe how to include high level semantic information, such as aesthetics and emotions, into Content Based Image Retrieval. We present a color-based emotion-related image descriptor that can be used for describing the emotional content of images. The color emotion metric used is derived from psychophysical experiments and based on three variables: activity, weight and heat. It was originally designed for single-colors, but recent research has shown that the same emotion estimates can be applied in the retrieval of multi-colored images. Here we describe a new approach, based on the assumption that perceived color emotions in images are mainly affected by homogeneous regions, defined by the emotion metric, and transitions between regions. RGB coordinates are converted to emotion coordinates, and for each emotion channel, statistical measurements of gradient magnitudes within a stack of low-pass filtered images are used for finding interest points corresponding to homogeneous regions and transitions between regions. Emotion characteristics are derived for patches surrounding each interest point, and saved in a bag-of-emotions, that, for instance, can be used for retrieving images based on emotional content.

  10. A novel dual-color bifocal imaging system for single-molecule studies.

    PubMed

    Jiang, Chang; Kaul, Neha; Campbell, Jenna; Meyhofer, Edgar

    2017-05-01

    In this paper, we report the design and implementation of a dual-color bifocal imaging (DBI) system that is capable of acquiring two spectrally distinct, spatially registered images of objects located in either same or two distinct focal planes. We achieve this by separating an image into two channels with distinct chromatic properties and independently focusing both images onto a single CCD camera. The two channels in our device are registered with subpixel accuracy, and long-term stability of the registered images with nanometer-precision was accomplished by reducing the drift of the images to ∼5 nm. We demonstrate the capabilities of our DBI system by imaging biomolecules labeled with spectrally distinct dyes and micro- and nano-sized spheres located in different focal planes.

  11. Physical evaluation of color and monochrome medical displays using an imaging colorimeter

    NASA Astrophysics Data System (ADS)

    Roehrig, Hans; Gu, Xiliang; Fan, Jiahua

    2013-03-01

    This paper presents an approach to physical evaluation of color and monochrome medical grade displays using an imaging colorimeter. The purpose of this study was to examine the influence of medical display types, monochrome or color at the same maximum luminance settings, on diagnostic performance. The focus was on the measurements of physical characteristics including spatial resolution and noise performance, which we believed could affect the clinical performance. Specifically, Modulation Transfer Function (MTF) and Noise Power Spectrum (NPS) were evaluated and compared at different digital driving levels (DDL) between two EIZO displays.
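
    As background, a noise power spectrum of the kind measured here is commonly estimated from flat-field patches by averaging the squared FFT magnitude of mean-subtracted patches. A generic sketch (not the authors' exact processing chain; the pixel-pitch scaling is a common convention):

```python
import numpy as np

def noise_power_spectrum(patches: np.ndarray, px: float = 1.0) -> np.ndarray:
    """2D NPS estimate from uniform (flat-field) patches: average the
    squared FFT magnitude of mean-subtracted patches, scaled by the
    pixel area over the patch size."""
    n, h, w = patches.shape
    detrended = patches - patches.mean(axis=(1, 2), keepdims=True)
    spec = np.abs(np.fft.fft2(detrended)) ** 2
    return spec.mean(axis=0) * (px * px) / (h * w)
```

By Parseval's theorem, the mean of this NPS over all frequencies equals the (detrended) pixel variance times the pixel area, which is a convenient sanity check.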

  12. Pet fur color and texture classification

    NASA Astrophysics Data System (ADS)

    Yen, Jonathan; Mukherjee, Debarghar; Lim, SukHwan; Tretter, Daniel

    2007-01-01

    Object segmentation is important in image analysis for imaging tasks such as image rendering and image retrieval. Pet owners have been known to be quite vocal about how important it is to render their pets perfectly. We present here an algorithm for pet (mammal) fur color classification and an algorithm for pet (animal) fur texture classification. Pet fur color classification can be applied as a necessary condition for identifying the regions in an image that may contain pets, much like skin tone classification for human flesh detection. As a result of evolution, the fur coloration of all mammals is produced by a natural organic pigment called melanin, which has only a very limited color range. We have conducted a statistical analysis and concluded that, after proper color quantization, mammal fur colors can only be levels of gray or one of two colors. This pet fur color classification algorithm has been applied to peteye detection. We also present an algorithm for animal fur texture classification using the recently developed multi-resolution directional sub-band contourlet transform. The experimental results are very promising, as these transforms can identify regions of an image that may contain the fur of mammals, the scales of reptiles, the feathers of birds, etc. Combining the color and texture classification, one can build a set of strong classifiers for identifying possible animals in an image.

  13. Single-exposure quantitative phase imaging in color-coded LED microscopy.

    PubMed

    Lee, Wonchan; Jung, Daeseong; Ryu, Suho; Joo, Chulmin

    2017-04-03

    We demonstrate single-shot quantitative phase imaging (QPI) in a platform of color-coded LED microscopy (cLEDscope). The light source in a conventional microscope is replaced by a circular LED pattern that is trisected into subregions with equal area, assigned to red, green, and blue colors. Image acquisition with a color image sensor and subsequent computation based on weak object transfer functions allow for the QPI of a transparent specimen. We also provide a correction method for color-leakage, which may be encountered in implementing our method with consumer-grade LEDs and image sensors. Most commercially available LEDs and image sensors do not provide spectrally isolated emissions and pixel responses, generating significant error in phase estimation in our method. We describe the correction scheme for this color-leakage issue, and demonstrate improved phase measurement accuracy. The computational model and single-exposure QPI capability of our method are presented by showing images of calibrated phase samples and cellular specimens.
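
    The paper's exact color-leakage correction is not reproduced here, but a common way to model leakage is a 3x3 crosstalk matrix that mixes the true channel signals, inverted after calibration. A hedged sketch with a hypothetical mixing matrix:

```python
import numpy as np

# Hypothetical crosstalk matrix: row i gives sensor channel i's response
# to the R, G, B illumination sub-regions (in practice calibrated, e.g.
# by lighting one LED color at a time).
M = np.array([[0.92, 0.06, 0.02],
              [0.08, 0.86, 0.06],
              [0.03, 0.10, 0.87]])

def correct_leakage(img: np.ndarray, mix: np.ndarray) -> np.ndarray:
    """Unmix color leakage by solving mix @ true = measured per pixel."""
    h, w, _ = img.shape
    flat = img.reshape(-1, 3).T             # (3, h*w) measured signals
    true = np.linalg.solve(mix, flat)       # invert the crosstalk model
    return true.T.reshape(h, w, 3)
```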

  14. Low illumination color image enhancement based on improved Retinex

    NASA Astrophysics Data System (ADS)

    Liao, Shujing; Piao, Yan; Li, Bing

    2017-11-01

    Low illumination color images usually have the characteristics of low brightness, low contrast, blurred detail and heavy salt-and-pepper noise, which greatly affect later image recognition and information extraction. Therefore, in view of the degradation of night images, an improved version of the traditional Retinex algorithm is proposed. The specific approach is: First, the original low illumination RGB image is converted to the YUV color space (Y represents brightness, UV represents color), and the background light is estimated from the Y component using a sampling-accelerated guided filter; Then, the reflection component is calculated by the classical Retinex formula and the brightness enhancement ratio between the original and enhanced images is computed; Finally, the color space conversion from YUV back to RGB and the feedback enhancement of the UV color components are carried out.
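
    The pipeline described above can be sketched as follows, with a plain Gaussian blur standing in for the paper's sampling-accelerated guided filter, and with illustrative constants throughout (a sketch, not the authors' implementation):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def retinex_yuv(rgb: np.ndarray, sigma: float = 15.0) -> np.ndarray:
    """Low-light enhancement sketch: Retinex on the luma channel, then
    feedback of the brightness gain to the color components."""
    rgb = rgb.astype(np.float64) / 255.0
    # BT.601 RGB -> YUV
    y = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    u = -0.147 * rgb[..., 0] - 0.289 * rgb[..., 1] + 0.436 * rgb[..., 2]
    v = 0.615 * rgb[..., 0] - 0.515 * rgb[..., 1] - 0.100 * rgb[..., 2]
    eps = 1e-6
    illum = gaussian_filter(y, sigma) + eps         # background-light estimate
    y_new = np.clip(np.exp(np.log(y + eps) - np.log(illum)), 0.0, 1.0)
    gain = np.minimum((y_new + eps) / (y + eps), 5.0)  # brightness ratio, capped
    u *= np.sqrt(gain)                              # feedback enhancement of
    v *= np.sqrt(gain)                              # the UV color components
    # YUV -> RGB
    out = np.stack([y_new + 1.140 * v,
                    y_new - 0.395 * u - 0.581 * v,
                    y_new + 2.032 * u], axis=-1)
    return np.clip(out, 0.0, 1.0)
```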

  15. Finding text in color images

    NASA Astrophysics Data System (ADS)

    Zhou, Jiangying; Lopresti, Daniel P.; Tasdizen, Tolga

    1998-04-01

    In this paper, we consider the problem of locating and extracting text from WWW images. A previous algorithm based on color clustering and connected components analysis works well as long as the color of each character is relatively uniform and the typography is fairly simple. It breaks down quickly, however, when these assumptions are violated. In this paper, we describe more robust techniques for dealing with this challenging problem. We present an improved color clustering algorithm that measures similarity based on both RGB values and spatial proximity. Layout analysis is also incorporated to handle more complex typography. These changes significantly enhance the performance of our text detection procedure.
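
    A clustering step that combines RGB similarity with spatial proximity, in the spirit of the improved algorithm (the weighting and parameters are illustrative, not the authors' exact formulation), can be sketched as k-means over joint color-position features:

```python
import numpy as np

def cluster_colors_spatial(img, k=4, spatial_weight=0.5, iters=20, seed=0):
    """K-means over joint (R, G, B, x, y) features so pixels group by
    color similarity AND spatial proximity."""
    h, w, _ = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    color = img.reshape(-1, 3).astype(np.float64) / 255.0
    pos = spatial_weight * np.stack([xs.ravel() / w, ys.ravel() / h], axis=1)
    feats = np.concatenate([color, pos], axis=1)
    rng = np.random.default_rng(seed)
    centers = feats[rng.choice(len(feats), size=k, replace=False)]
    for _ in range(iters):
        dists = ((feats[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():          # guard against empty clusters
                centers[j] = feats[labels == j].mean(axis=0)
    return labels.reshape(h, w)
```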

  16. Sensor-based auto-focusing system using multi-scale feature extraction and phase correlation matching.

    PubMed

    Jang, Jinbeum; Yoo, Yoonjong; Kim, Jongheon; Paik, Joonki

    2015-03-10

    This paper presents a novel auto-focusing system based on a CMOS sensor containing pixels with different phases. Robust extraction of features in a severely defocused image is the fundamental problem of a phase-difference auto-focusing system. In order to solve this problem, a multi-resolution feature extraction algorithm is proposed. Given the extracted features, the proposed auto-focusing system can provide the ideal focusing position using phase correlation matching. The proposed auto-focusing (AF) algorithm consists of four steps: (i) acquisition of left and right images using AF points in the region-of-interest; (ii) feature extraction in the left image under low illumination and out-of-focus blur; (iii) the generation of two feature images using the phase difference between the left and right images; and (iv) estimation of the phase shifting vector using phase correlation matching. Since the proposed system accurately estimates the phase difference in the out-of-focus blurred image under low illumination, it can provide faster, more robust auto focusing than existing systems.
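
    Step (iv), phase correlation matching, has a standard formulation via the normalized cross-power spectrum; a minimal sketch for whole-pixel shifts (the sub-pixel refinement an AF system would need is omitted):

```python
import numpy as np

def phase_correlation_shift(a: np.ndarray, b: np.ndarray):
    """Estimate the (row, col) translation of b relative to a: the peak
    of the inverse FFT of the normalized cross-power spectrum sits at
    the shift."""
    fa, fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = np.conj(fa) * fb
    cross /= np.abs(cross) + 1e-12          # keep phase information only
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(corr.argmax(), corr.shape)
    # wrap shifts larger than half the image size to negative values
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))
```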

  17. Sensor-Based Auto-Focusing System Using Multi-Scale Feature Extraction and Phase Correlation Matching

    PubMed Central

    Jang, Jinbeum; Yoo, Yoonjong; Kim, Jongheon; Paik, Joonki

    2015-01-01

    This paper presents a novel auto-focusing system based on a CMOS sensor containing pixels with different phases. Robust extraction of features in a severely defocused image is the fundamental problem of a phase-difference auto-focusing system. In order to solve this problem, a multi-resolution feature extraction algorithm is proposed. Given the extracted features, the proposed auto-focusing system can provide the ideal focusing position using phase correlation matching. The proposed auto-focusing (AF) algorithm consists of four steps: (i) acquisition of left and right images using AF points in the region-of-interest; (ii) feature extraction in the left image under low illumination and out-of-focus blur; (iii) the generation of two feature images using the phase difference between the left and right images; and (iv) estimation of the phase shifting vector using phase correlation matching. Since the proposed system accurately estimates the phase difference in the out-of-focus blurred image under low illumination, it can provide faster, more robust auto focusing than existing systems. PMID:25763645

  18. Client-side Medical Image Colorization in a Collaborative Environment.

    PubMed

    Virag, Ioan; Stoicu-Tivadar, Lăcrămioara; Crişan-Vida, Mihaela

    2015-01-01

    The paper presents an application related to collaborative medicine using a browser based medical visualization system, with focus on the medical image colorization process and the underlying open source web development technologies involved. Browser based systems allow physicians to share medical data with their remotely located counterparts or medical students, assisting them during patient diagnosis, treatment monitoring, surgery planning or for educational purposes. This approach brings forth the advantage of ubiquity: the system can be accessed from any device to process the images, without depending on a specific proprietary operating system. The current work starts with the processing of DICOM (Digital Imaging and Communications in Medicine) files and ends with the rendering of the resulting bitmap images on an HTML5 (fifth revision of the HyperText Markup Language) canvas element. The application improves image visualization by emphasizing different tissue densities.

  19. Color calibration of swine gastrointestinal tract images acquired by radial imaging capsule endoscope

    NASA Astrophysics Data System (ADS)

    Ou-Yang, Mang; Jeng, Wei-De; Lai, Chien-Cheng; Wu, Hsien-Ming; Lin, Jyh-Hung

    2016-01-01

    The type of illumination systems and color filters used typically generate varying levels of color difference in capsule endoscopes, which influence medical diagnoses. In order to calibrate the color difference caused by the optical system, this study applied a radial imaging capsule endoscope (RICE) to photograph standard color charts, which were then employed to calculate the color gamut of RICE. The color gamut was also measured using a spectrometer to obtain high-precision color information, and the results obtained using both methods were compared. Subsequently, color-correction methods, namely polynomial transform and conformal mapping, were used to reduce the color difference. Before color calibration, the color difference value caused by the optical systems in RICE was 21.45±1.09. Through the proposed polynomial transformation, the color difference was effectively reduced to 1.53±0.07, and with the proposed conformal mapping it was further reduced to 1.32±0.11; a color difference <1.5 is imperceptible to the human eye. Real-time color correction was then achieved by combining this algorithm with a field-programmable gate array, and the results of the color correction can be viewed in real-time images.
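
    A polynomial color-correction transform of this kind is typically fitted by least squares between measured and reference chart colors. A sketch with a common second-order basis (not necessarily the authors' exact term set):

```python
import numpy as np

def _poly_basis(rgb: np.ndarray) -> np.ndarray:
    """Second-order polynomial basis: 1, R, G, B, R^2, G^2, B^2, RG, GB, RB."""
    r, g, b = rgb.T
    return np.stack([np.ones_like(r), r, g, b,
                     r * r, g * g, b * b, r * g, g * b, r * b], axis=1)

def fit_poly_color_correction(measured: np.ndarray, reference: np.ndarray):
    """Least-squares fit of a polynomial transform from measured chart
    colors (N, 3) to reference colors (N, 3); returns (10, 3) coefficients."""
    coef, *_ = np.linalg.lstsq(_poly_basis(measured), reference, rcond=None)
    return coef

def apply_poly_color_correction(rgb: np.ndarray, coef: np.ndarray):
    """Apply the fitted transform to (N, 3) colors."""
    return _poly_basis(rgb) @ coef
```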

  20. Color reproduction and processing algorithm based on real-time mapping for endoscopic images.

    PubMed

    Khan, Tareq H; Mohammed, Shahed K; Imtiaz, Mohammad S; Wahid, Khan A

    2016-01-01

    In this paper, we present a real-time preprocessing algorithm for image enhancement of endoscopic images. A novel dictionary based color mapping algorithm is used for reproducing the color information from a theme image. The theme image is selected from a nearby anatomical location, and a database of color endoscopy images for different locations is prepared for this purpose. The color map is dynamic, as its contents change with the change of the theme image. This method is used on low contrast grayscale white light images and raw narrow band images to highlight the vascular and mucosa structures and to colorize the images. It can also be applied to enhance the tone of color images. Statistical visual representation and universal image quality measures show that the proposed method can highlight the mucosa structure better than other methods. The color similarity has been verified using the Delta E color difference, structure similarity index, mean structure similarity index, and structure and hue similarity. The color enhancement was measured using a color enhancement factor that shows considerable improvements. The proposed algorithm has low and linear time complexity, which results in higher execution speed than other related works.

  1. Evaluating color performance of whole-slide imaging devices by multispectral-imaging of biological tissues

    NASA Astrophysics Data System (ADS)

    Saleheen, Firdous; Badano, Aldo; Cheng, Wei-Chung

    2017-03-01

    The color reproducibility of two whole-slide imaging (WSI) devices was evaluated with biological tissue slides. Three tissue slides (human colon, skin, and kidney) were used to test a modern and a legacy WSI device. The color truth of the tissue slides was obtained using a multispectral imaging system. The output WSI images were compared with the color truth to calculate the color difference for each pixel. A psychophysical experiment was also conducted to measure the perceptual color reproducibility (PCR) of the same slides with four subjects. The experimental results show that the mean color differences of the modern, legacy, and monochrome WSI devices are 10.94+/-4.19, 22.35+/-8.99, and 42.74+/-2.96 ΔE00, while their mean PCRs are 70.35+/-7.64%, 23.06+/-14.68%, and 0.91+/-1.01%, respectively.
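
    For reference, the per-pixel color difference against the color truth is computed in CIELAB; the study uses CIEDE2000 (ΔE00), but the simpler CIE76 Euclidean distance conveys the idea:

```python
import numpy as np

def mean_delta_e76(lab_test: np.ndarray, lab_truth: np.ndarray) -> float:
    """Mean per-pixel color difference between a device image and the
    color truth, both as (H, W, 3) CIELAB arrays. Note: the study uses
    CIEDE2000; CIE76 (plain Euclidean distance in Lab) is shown here
    for brevity only."""
    d = np.sqrt(((lab_test - lab_truth) ** 2).sum(axis=-1))
    return float(d.mean())
```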

  2. Improved opponent color local binary patterns: an effective local image descriptor for color texture classification

    NASA Astrophysics Data System (ADS)

    Bianconi, Francesco; Bello-Cerezo, Raquel; Napoletano, Paolo

    2018-01-01

    Texture classification plays a major role in many computer vision applications. Local binary patterns (LBP) encoding schemes have largely been proven to be very effective for this task. Improved LBP (ILBP) are conceptually simple, easy to implement, and highly effective LBP variants based on a point-to-average thresholding scheme instead of a point-to-point one. We propose the use of this encoding scheme for extracting intra- and interchannel features for color texture classification. We experimentally evaluated the resulting improved opponent color LBP alone and in concatenation with the ILBP of the local color contrast map on a set of image classification tasks over 9 datasets of generic color textures and 11 datasets of biomedical textures. The proposed approach outperformed other grayscale and color LBP variants in nearly all the datasets considered and proved competitive even against image features from last generation convolutional neural networks, particularly for the classification of biomedical images.
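
    The point-to-average thresholding that defines ILBP can be sketched for a single-channel 3x3 neighborhood as follows (the opponent-color variant applies the same scheme across channel pairs):

```python
import numpy as np

def ilbp_codes(img: np.ndarray) -> np.ndarray:
    """Improved LBP on 3x3 neighborhoods: all 9 pixels (center included)
    are thresholded against the local mean instead of the center pixel,
    yielding 9-bit codes in [1, 511]."""
    img = img.astype(np.float64)
    # all 3x3 patches for every interior pixel: shape (h-2, w-2, 3, 3)
    patches = np.lib.stride_tricks.sliding_window_view(img, (3, 3))
    mean = patches.mean(axis=(-2, -1), keepdims=True)
    bits = (patches >= mean).astype(np.uint16)     # point-to-average threshold
    weights = (1 << np.arange(9)).reshape(3, 3)    # binomial weights
    return (bits * weights).sum(axis=(-2, -1))
```

Histograms of these codes over an image (or image region) then serve as the texture feature vector.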

  3. Estimation of color modification in digital images by CFA pattern change.

    PubMed

    Choi, Chang-Hee; Lee, Hae-Yeoun; Lee, Heung-Kyu

    2013-03-10

    Extensive studies have been carried out for detecting image forgery such as copy-move, re-sampling, blurring, and contrast enhancement. Although color modification is a common forgery technique, there is no reported forensic method for detecting this type of manipulation. In this paper, we propose a novel algorithm for estimating color modification in images acquired from digital cameras when the images are modified. Most commercial digital cameras are equipped with a color filter array (CFA) for acquiring the color information of each pixel. As a result, the images acquired from such digital cameras include a trace from the CFA pattern. This pattern is composed of the basic red green blue (RGB) colors, and it is changed when color modification is carried out on the image. We designed an advanced intermediate value counting method for measuring the change in the CFA pattern and estimating the extent of color modification. The proposed method is verified experimentally by using 10,366 test images. The results confirmed the ability of the proposed method to estimate color modification with high accuracy. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
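
    The intermediate value counting method itself is not fully specified here, but its premise can be illustrated: demosaicked (interpolated) CFA positions tend to lie between their neighbors' values far more often than sensor-measured positions, and this checkerboard signature shifts when colors are modified. A simplified sketch of that signature (not the paper's exact estimator):

```python
import numpy as np

def intermediate_value_map(ch: np.ndarray) -> np.ndarray:
    """Mark interior pixels whose value lies between the min and max of
    their 4-neighbors. Bilinearly interpolated positions satisfy this by
    construction, measured positions only by chance."""
    c = ch[1:-1, 1:-1].astype(np.float64)
    n = np.stack([ch[:-2, 1:-1], ch[2:, 1:-1],
                  ch[1:-1, :-2], ch[1:-1, 2:]]).astype(np.float64)
    return (c >= n.min(axis=0)) & (c <= n.max(axis=0))

def lattice_scores(ch: np.ndarray):
    """Score the two checkerboard lattices of a channel; in a CFA image
    the interpolated lattice scores higher, and color modification
    disturbs this signature."""
    m = intermediate_value_map(ch)
    ys, xs = np.mgrid[1:ch.shape[0] - 1, 1:ch.shape[1] - 1]
    par = (ys + xs) % 2
    return m[par == 0].mean(), m[par == 1].mean()
```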

  4. Infrared and visible image fusion with the target marked based on multi-resolution visual attention mechanisms

    NASA Astrophysics Data System (ADS)

    Huang, Yadong; Gao, Kun; Gong, Chen; Han, Lu; Guo, Yue

    2016-03-01

    During traditional multi-resolution infrared and visible image fusion, a low contrast target may be weakened and become inconspicuous because of opposite DN values in the source images. So a novel target pseudo-color enhanced image fusion algorithm based on a modified attention model and the fast discrete curvelet transform is proposed. The interesting target regions are extracted from the source images by introducing motion features obtained from the modified attention model, and the source images are fused at the gray level in the curvelet domain via rules based on the physical characteristics of the sensors. The final fused image is obtained by mapping the extracted targets into the gray result with a proper pseudo-color. The experiments show that the algorithm can highlight dim targets effectively and improve the SNR of the fused image.

  5. Multi-parametric monitoring and assessment of high-intensity focused ultrasound (HIFU) boiling by harmonic motion imaging for focused ultrasound (HMIFU): an ex vivo feasibility study

    NASA Astrophysics Data System (ADS)

    Hou, Gary Y.; Marquet, Fabrice; Wang, Shutao; Konofagou, Elisa E.

    2014-03-01

    Harmonic motion imaging for focused ultrasound (HMIFU) is a recently developed high-intensity focused ultrasound (HIFU) treatment monitoring method with feasibilities demonstrated in vitro and in vivo. Here, a multi-parametric study is performed to investigate both elastic and acoustics-independent viscoelastic tissue changes using the Harmonic Motion Imaging (HMI) displacement, axial compressive strain and change in relative phase shift during high energy HIFU treatment with tissue boiling. Forty three (n = 43) thermal lesions were formed in ex vivo canine liver specimens (n = 28). Two-dimensional (2D) transverse HMI displacement maps were also obtained before and after lesion formation. The same method was repeated in 10 s, 20 s and 30 s HIFU durations at three different acoustic powers of 8, 10, and 11 W, which were selected and verified as treatment parameters capable of inducing boiling using both thermocouple and passive cavitation detection (PCD) measurements. Although a steady decrease in the displacement, compressive strain, and relative change in the focal phase shift (Δϕ) were obtained in numerous cases, indicating an overall increase in relative stiffness, the study outcomes also showed that during boiling, a reverse lesion-to-background displacement contrast was detected, indicating potential change in tissue absorption, geometrical change and/or, mechanical gelatification or pulverization. Following treatment, corresponding 2D HMI displacement images of the thermal lesions also mapped consistent discrepancy in the lesion-to-background displacement contrast. Despite the expectedly chaotic changes in acoustic properties with boiling, the relative change in phase shift showed a consistent decrease, indicating its robustness to monitor biomechanical properties independent of the acoustic property changes throughout the HIFU treatment. In addition, the 2D HMI displacement images confirmed and indicated the increase in the thermal lesion size with treatment duration.

  6. Multi-parametric monitoring and assessment of High Intensity Focused Ultrasound (HIFU) boiling by Harmonic Motion Imaging for Focused Ultrasound (HMIFU): An ex vivo feasibility study

    PubMed Central

    Hou, Gary Y.; Marquet, Fabrice; Wang, Shutao; Konofagou, Elisa E.

    2014-01-01

    Harmonic Motion Imaging for Focused Ultrasound (HMIFU) is a recently developed high-intensity focused ultrasound (HIFU) treatment monitoring method with feasibilities demonstrated in vitro and in vivo. Here, a multi-parametric study is performed to investigate both elastic and acoustics-independent viscoelastic tissue changes using the Harmonic Motion Imaging (HMI) displacement, axial compressive strain and change in relative phase-shift during high energy HIFU treatment with tissue boiling. Forty three (n=43) thermal lesions were formed in ex vivo canine liver specimens (n=28). Two dimensional (2D) transverse HMI displacement maps were also obtained before and after lesion formation. The same method was repeated in 10-s, 20-s and 30-s HIFU durations at three different acoustic powers of 8, 10, and 11W, which were selected and verified as treatment parameters capable of inducing boiling using both thermocouple and Passive Cavitation Detection (PCD) measurements. Although a steady decrease in the displacement, compressive strain, and relative change in the focal phase shift (Δφ) were obtained in numerous cases, indicating an overall increase in relative stiffness, the study outcomes also showed that during boiling, a reverse lesion-to-background displacement contrast was detected, indicating potential change in tissue absorption, geometrical change and/or, mechanical gelatification or pulverization. Following treatment, corresponding 2D HMI displacement images of the thermal lesions also mapped consistent discrepancy in the lesion-to-background displacement contrast. Despite unpredictable changes in acoustic properties with boiling, the relative change in phase shift showed a consistent decrease, indicating its robustness to monitor biomechanical properties independent of the acoustic property change throughout the HIFU treatment. In addition, the 2D HMI displacement images confirmed and indicated the increase in the thermal lesion size with treatment duration

  7. Multi-parametric monitoring and assessment of high-intensity focused ultrasound (HIFU) boiling by harmonic motion imaging for focused ultrasound (HMIFU): an ex vivo feasibility study.

    PubMed

    Hou, Gary Y; Marquet, Fabrice; Wang, Shutao; Konofagou, Elisa E

    2014-03-07

    Harmonic motion imaging for focused ultrasound (HMIFU) is a recently developed high-intensity focused ultrasound (HIFU) treatment monitoring method with feasibilities demonstrated in vitro and in vivo. Here, a multi-parametric study is performed to investigate both elastic and acoustics-independent viscoelastic tissue changes using the Harmonic Motion Imaging (HMI) displacement, axial compressive strain and change in relative phase shift during high energy HIFU treatment with tissue boiling. Forty three (n = 43) thermal lesions were formed in ex vivo canine liver specimens (n = 28). Two-dimensional (2D) transverse HMI displacement maps were also obtained before and after lesion formation. The same method was repeated in 10 s, 20 s and 30 s HIFU durations at three different acoustic powers of 8, 10, and 11 W, which were selected and verified as treatment parameters capable of inducing boiling using both thermocouple and passive cavitation detection (PCD) measurements. Although a steady decrease in the displacement, compressive strain, and relative change in the focal phase shift (Δϕ) were obtained in numerous cases, indicating an overall increase in relative stiffness, the study outcomes also showed that during boiling, a reverse lesion-to-background displacement contrast was detected, indicating potential change in tissue absorption, geometrical change and/or, mechanical gelatification or pulverization. Following treatment, corresponding 2D HMI displacement images of the thermal lesions also mapped consistent discrepancy in the lesion-to-background displacement contrast. Despite the expectedly chaotic changes in acoustic properties with boiling, the relative change in phase shift showed a consistent decrease, indicating its robustness to monitor biomechanical properties independent of the acoustic property changes throughout the HIFU treatment. In addition, the 2D HMI displacement images confirmed and indicated the increase in the thermal lesion size with treatment duration.

  8. Comparison of two SVD-based color image compression schemes.

    PubMed

    Li, Ying; Wei, Musheng; Zhang, Fengxia; Zhao, Jianli

    2017-01-01

    Color image compression is a commonly used process to represent image data with as few bits as possible, removing redundancy in the data while maintaining an appropriate level of quality for the user. Quaternion-based color image compression algorithms have become very common in recent years. In this paper, we propose a color image compression scheme based on the real SVD, named the real compression scheme. First, we form a new real rectangular matrix C according to the red, green and blue components of the original color image and perform the real SVD on C. Then we select the several largest singular values and the corresponding vectors in the left and right unitary matrices to compress the color image. We compare the real compression scheme with the quaternion compression scheme, performing the quaternion SVD using the real structure-preserving algorithm. We compare the two schemes in terms of operation amount, assignment number, operation speed, PSNR and CR. The experimental results show that with the same number of selected singular values, the real compression scheme offers higher CR and much less operation time, but slightly smaller PSNR than the quaternion compression scheme. When the two schemes have the same CR, the real compression scheme shows prominent advantages in both operation time and PSNR.
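
    The real compression scheme can be sketched as follows; stacking the R, G, B planes side by side is one plausible reading of how the matrix C is formed, and k is the number of retained singular values:

```python
import numpy as np

def svd_compress_color(img: np.ndarray, k: int) -> np.ndarray:
    """Real-SVD compression sketch: stack the R, G, B planes side by
    side into one real matrix C, keep the k largest singular triplets,
    and rebuild the image from the truncated factors."""
    h, w, _ = img.shape
    C = np.hstack([img[..., c].astype(np.float64) for c in range(3)])  # h x 3w
    U, s, Vt = np.linalg.svd(C, full_matrices=False)
    Ck = (U[:, :k] * s[:k]) @ Vt[:k, :]            # rank-k reconstruction
    planes = np.split(np.clip(np.rint(Ck), 0, 255), 3, axis=1)
    return np.stack(planes, axis=-1).astype(np.uint8)
```

The compression ratio comes from storing only U[:, :k], s[:k] and Vt[:k, :] instead of the full matrix.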

  9. Comparison of two SVD-based color image compression schemes

    PubMed Central

    Li, Ying; Wei, Musheng; Zhang, Fengxia; Zhao, Jianli

    2017-01-01

    Color image compression is a commonly used process to represent image data with as few bits as possible, removing redundancy in the data while maintaining an appropriate level of quality for the user. Quaternion-based color image compression algorithms have become very common in recent years. In this paper, we propose a color image compression scheme based on the real SVD, named the real compression scheme. First, we form a new real rectangular matrix C according to the red, green and blue components of the original color image and perform the real SVD on C. Then we select the several largest singular values and the corresponding vectors in the left and right unitary matrices to compress the color image. We compare the real compression scheme with the quaternion compression scheme, performing the quaternion SVD using the real structure-preserving algorithm. We compare the two schemes in terms of operation amount, assignment number, operation speed, PSNR and CR. The experimental results show that with the same number of selected singular values, the real compression scheme offers higher CR and much less operation time, but slightly smaller PSNR than the quaternion compression scheme. When the two schemes have the same CR, the real compression scheme shows prominent advantages in both operation time and PSNR. PMID:28257451

  10. SUBARU near-infrared multi-color images of Class II Young Stellar Object, RNO91

    NASA Astrophysics Data System (ADS)

    Mayama, Satoshi; Tamura, Motohide; Hayashi, Masahiko

    RNO91 is a class II source currently in a transition phase between a protostar and a main-sequence star. It is known as a source of complex molecular outflows. Previous studies suggested that RNO91 is associated with a reflection nebula, a CO outflow, shock-excited H[2] emission, and a disk-type structure. However, the geometry of RNO91, especially its inner region, is not yet well constrained. High resolution imaging is needed to understand the nature of RNO91 and its interaction with the outflow. Furthermore, RNO91 is an important candidate for studying YSOs in a transition phase. Thus, we conducted near-infrared imaging observations of RNO91 with the infrared camera CIAO mounted on the Subaru 8.2m Telescope. We present JHK band and optical images which resolve a complex asymmetrical circumstellar structure. We examined the color of the RNO91 nebula and compare the geometry of the system suggested by our data with that already proposed on the basis of other studies. Our main results are as follows: 1. At J band and in the optical, several bluer clumps are detected, aligned nearly perpendicular to the outflow axis. 2. The NIR images show significant halo emission within 2'' of the peak position, while less halo emission is seen in the optical image. The nebula appears to become more circular and more diffuse with increasing wavelength. The power-law dependence of the radial surface brightness profile is shallower than that of normal stars, indicating that RNO91 is still an optically thick object. We suggest that the halo emission is NIR light scattered by an optically thick disk or envelope surrounding RNO91. 3. In the shorter wavelength images, the nebula appears more extended (2".3 long) to the southwest. This extended emission might trace the bottom of an outflow emanating in the southwest direction. 4. A color composite image of RNO91 reveals that the emission extending to the north and to the east through RNO91 is interpreted as a part of the cavity wall seen

  11. Exploring the use of memory colors for image enhancement

    NASA Astrophysics Data System (ADS)

    Xue, Su; Tan, Minghui; McNamara, Ann; Dorsey, Julie; Rushmeier, Holly

    2014-02-01

    Memory colors refer to those colors recalled in association with familiar objects. While some previous work introduces this concept to assist digital image enhancement, its basis, i.e., on-screen memory colors, is not appropriately investigated. In addition, the resulting adjustment methods are not evaluated from a perceptual point of view. In this paper, we first perform a context-free perceptual experiment to establish the overall distributions of screen memory colors for three pervasive objects. Then, we use a context-based experiment to locate the most representative memory colors; at the same time, we investigate the interactions of memory colors between different objects. Finally, we show a simple yet effective application using representative memory colors to enhance digital images. A user study is performed to evaluate the performance of our technique.

  12. Enhancement of low visibility aerial images using histogram truncation and an explicit Retinex representation for balancing contrast and color consistency

    NASA Astrophysics Data System (ADS)

    Liu, Changjiang; Cheng, Irene; Zhang, Yi; Basu, Anup

    2017-06-01

    This paper presents an improved multi-scale Retinex (MSR) based enhancement for aerial images under low visibility. Traditional multi-scale Retinex commonly employs three scales, which limits its application scenarios. We extend our research to a general-purpose enhancement method and design an MSR with more than three scales. Based on mathematical analysis and deduction, an explicit multi-scale representation is proposed that balances image contrast and color consistency. In addition, a histogram truncation technique is introduced as a post-processing strategy to remap the multi-scale Retinex output to the dynamic range of the display. Analysis of experimental results and comparisons with existing algorithms demonstrate the effectiveness and generality of the proposed method. Results on image quality assessment prove the accuracy of the proposed method with respect to both objective and subjective criteria.
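
The MSR-plus-truncation pipeline described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the scale values, percentile bounds, and function names are assumptions for the sketch.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multi_scale_retinex(img, sigmas=(15, 80, 250)):
    """Average of single-scale Retinex outputs, log(I) - log(G_sigma * I)."""
    img = img.astype(np.float64) + 1.0          # avoid log(0)
    out = np.zeros_like(img)
    for s in sigmas:
        out += np.log(img) - np.log(gaussian_filter(img, s))
    return out / len(sigmas)

def truncate_and_rescale(retinex, low=1.0, high=99.0):
    """Histogram truncation: clip to percentile bounds, remap to [0, 255]."""
    lo, hi = np.percentile(retinex, [low, high])
    clipped = np.clip(retinex, lo, hi)
    return ((clipped - lo) / (hi - lo + 1e-12) * 255.0).round().astype(np.uint8)
```

Adding further entries to `sigmas` gives the more-than-three-scale variant the abstract describes; the truncation step is what remaps the unbounded Retinex output to the display range.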

  13. A subjective evaluation of high-chroma color with wide color-gamut display

    NASA Astrophysics Data System (ADS)

    Kishimoto, Junko; Yamaguchi, Masahiro; Ohyama, Nagaaki

    2009-01-01

    Displays are expanding their color gamuts, for example multi-primary color displays and Adobe RGB devices, and have therefore become capable of displaying high-chroma colors. However, an image whose chroma has simply been expanded can look unnatural, and few gamut mapping methods appropriate for expanded gamuts have been proposed. We are attempting preferred expanded color reproduction on a wide-color-gamut display that utilizes high-chroma colors effectively. As a first step, we conducted an experiment to investigate the psychological effect of color schemes that include highly saturated colors. We used the six-primary-color projector that we have developed for the presentation of test colors; its gamut volume in CIELAB space is about 1.8 times larger than that of a normal RGB projector. We conducted a subjective evaluation experiment using the SD (Semantic Differential) technique to quantify the psychological effect of high-chroma colors.

  14. Optimal monochromatic color combinations for fusion imaging of FDG-PET and diffusion-weighted MR images.

    PubMed

    Kamei, Ryotaro; Watanabe, Yuji; Sagiyama, Koji; Isoda, Takuro; Togao, Osamu; Honda, Hiroshi

    2018-05-23

    To investigate the optimal monochromatic color combination for fusion imaging of FDG-PET and diffusion-weighted MR images (DW) with regard to the lesion conspicuity of each image. Six linear monochromatic color-maps of red, blue, green, cyan, magenta, and yellow were assigned to each of the FDG-PET and DW images. Total perceptual color differences of the lesions were calculated based on the lightness and chromaticity measured with a photometer. Visual lesion conspicuity was also compared among the PET-only, DW-only and PET-DW-double positive portions using mean conspicuity scores. Statistical analysis was performed with a one-way analysis of variance and Spearman's rank correlation coefficient. Among all 12 possible monochromatic color-map combinations, the 3 combinations of red/cyan, magenta/green, and red/green produced the highest conspicuity scores. Total color differences between PET-positive and double-positive portions correlated with conspicuity scores (ρ = 0.2933, p < 0.005). Lightness differences showed a significant negative correlation with conspicuity scores between the PET-only and DWI-only positive portions. Chromaticity differences showed a marginally significant correlation with conspicuity scores between DWI-positive and double-positive portions. Monochromatic color combinations can facilitate the visual evaluation of FDG-uptake and diffusivity as well as registration accuracy on the FDG-PET/DW fusion images, when red- and green-colored elements are assigned to FDG-PET and DW images, respectively.

  15. Multi-spectral confocal microendoscope for in-vivo imaging

    NASA Astrophysics Data System (ADS)

    Rouse, Andrew Robert

    The concept of in-vivo multi-spectral confocal microscopy is introduced. A slit-scanning multi-spectral confocal microendoscope (MCME) was built to demonstrate the technique. The MCME employs a flexible fiber-optic catheter coupled to a custom-built slit-scan confocal microscope fitted with a custom-built imaging spectrometer. The catheter consists of a fiber-optic imaging bundle linked to a miniature objective and focus assembly; the design and performance of this assembly are discussed. The 3 mm diameter catheter may be used on its own or routed through the instrument channel of a commercial endoscope. The confocal nature of the system provides optical sectioning with 3 μm lateral resolution and 30 μm axial resolution. The prism-based multi-spectral detection assembly is typically configured to collect 30 spectral samples over the visible chromatic range. The spectral sampling rate varies from 4 nm/pixel at 490 nm to 8 nm/pixel at 660 nm, and the minimum resolvable wavelength difference varies from 7 nm to 18 nm over the same spectral range; each of these characteristics is primarily dictated by the dispersive power of the prism. The MCME is designed to examine cellular structures during optical biopsy and to exploit the diagnostic information contained within the spectral domain. The primary applications for the system include diagnosis of disease in the gastro-intestinal tract and female reproductive system. Recent data from the grayscale imaging mode are presented. Preliminary multi-spectral results from phantoms, cell cultures, and excised human tissue demonstrate the potential of in-vivo multi-spectral imaging.

  16. Digital watermarking for color images in hue-saturation-value color space

    NASA Astrophysics Data System (ADS)

    Tachaphetpiboon, Suwat; Thongkor, Kharittha; Amornraksa, Thumrongrat; Delp, Edward J.

    2014-05-01

    This paper proposes a new watermarking scheme for color images in which all pixels of the image are used for embedding watermark bits, in order to achieve the highest embedding capacity. For watermark embedding, the S component in the hue-saturation-value (HSV) color space carries the watermark bits, while the V component is used in accordance with a human visual system model to determine the proper watermark strength. In the proposed scheme, the number of watermark bits equals the number of pixels in the host image. Watermark extraction is accomplished blindly based on a 3×3 spatial-domain Wiener filter, so the efficiency of the proposed scheme depends mainly on the accuracy of the estimate of the original S component. The experimental results show that the performance of the proposed scheme, both under no attack and against various types of attacks, was superior to previously existing watermarking schemes.
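
The S-channel embedding with V-weighted strength can be illustrated with a toy sketch. The function name, the base strength, and the simple "V scales the strength" rule are assumptions standing in for the paper's HVS model; the blind Wiener-filter extraction is not reproduced here.

```python
import numpy as np
import colorsys

def embed_watermark_hsv(rgb, bits, base_strength=0.02):
    """Embed one watermark bit per pixel in the S component; the V
    component scales the embedding strength (a crude stand-in for the
    HVS model -- brighter pixels tolerate larger changes)."""
    h, w, _ = rgb.shape
    out = np.empty_like(rgb, dtype=np.float64)
    for i in range(h):
        for j in range(w):
            r, g, b = rgb[i, j] / 255.0
            hh, ss, vv = colorsys.rgb_to_hsv(r, g, b)
            delta = base_strength * vv                      # V-weighted strength
            ss = np.clip(ss + (delta if bits[i, j] else -delta), 0.0, 1.0)
            out[i, j] = colorsys.hsv_to_rgb(hh, ss, vv)
    return (out * 255.0).round().astype(np.uint8)
```

One bit per pixel matches the capacity claim in the abstract: the watermark length equals the pixel count of the host image.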

  17. Interpretation of the rainbow color scale for quantitative medical imaging: perceptually linear color calibration (CSDF) versus DICOM GSDF

    NASA Astrophysics Data System (ADS)

    Chesterman, Frédérique; Manssens, Hannah; Morel, Céline; Serrell, Guillaume; Piepers, Bastian; Kimpe, Tom

    2017-03-01

    Medical displays for primary diagnosis are calibrated to the DICOM GSDF, but today there is no accepted standard that describes how display systems for medical modalities involving color should be calibrated. Recently the Color Standard Display Function (CSDF), a calibration that uses the CIEDE2000 color difference metric to make a display as perceptually linear as possible, has been proposed. In this work we present the results of a first observer study, set up to investigate the interpretation accuracy of a rainbow color scale when a medical display is calibrated to CSDF versus DICOM GSDF, and a second observer study, set up to investigate the detectability of color differences when a medical display is calibrated to CSDF, DICOM GSDF, and sRGB. The results of the first study indicate that the error when interpreting a rainbow color scale is lower for CSDF than for DICOM GSDF, with a statistically significant difference (Mann-Whitney U test) for eight out of twelve observers. The results correspond to what is expected based on CIEDE2000 color differences between consecutive colors along the rainbow color scale for both calibrations. The results of the second study indicate a statistically significant improvement in detecting color differences when a display is calibrated to CSDF compared to DICOM GSDF, and a (non-significant) trend indicating improved detection for CSDF compared to sRGB. To our knowledge this is the first work that shows the added value of a perceptual color calibration method (CSDF) in interpreting medical color images using the rainbow color scale. Improved interpretation of the rainbow color scale may be beneficial in quantitative medical imaging (e.g., PET SUV, quantitative MRI and CT, and Doppler US), where a medical specialist needs to interpret quantitative medical data based on a color scale and/or detect subtle color differences and where improved interpretation accuracy and improved detection of color differences may contribute to a better

  18. Structure-Preserving Color Normalization and Sparse Stain Separation for Histological Images.

    PubMed

    Vahadane, Abhishek; Peng, Tingying; Sethi, Amit; Albarqouni, Shadi; Wang, Lichao; Baust, Maximilian; Steiger, Katja; Schlitter, Anna Melissa; Esposito, Irene; Navab, Nassir

    2016-08-01

    Staining and scanning of tissue samples for microscopic examination is fraught with undesirable color variations arising from differences in raw materials and manufacturing techniques of stain vendors, staining protocols of labs, and color responses of digital scanners. When comparing tissue samples, color normalization and stain separation of the tissue images can be helpful for both pathologists and software. Techniques used for natural images fail to utilize the structural properties of stained tissue samples and produce undesirable color distortions. Our method builds on physical facts that define the tissue structure: stain concentrations cannot be negative, tissue samples are stained with only a few stains, and most tissue regions are characterized by at most one effective stain. We model these phenomena by first decomposing images in an unsupervised manner into stain density maps that are sparse and non-negative. For a given image, we combine its stain density maps with the stain color basis of a pathologist-preferred target image, thus altering only its color while preserving the structure described by the maps. Stain density correlation with ground truth and preference by pathologists were higher for images normalized using our method than for the alternatives. We also propose a computationally faster extension of this technique for large whole-slide images that selects an appropriate patch sample, instead of the entire image, to compute the stain color basis.
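
The non-negative stain decomposition can be sketched with a generic factorization. This is not the paper's sparse dictionary-learning solver: the sketch uses scikit-learn's plain NMF on optical densities (Beer-Lambert model), and the sparsity penalty the authors impose on the density maps would be added via the regularization options of recent scikit-learn releases.

```python
import numpy as np
from sklearn.decomposition import NMF

def stain_density_maps(rgb, n_stains=2, i0=255.0):
    """Factor the optical-density image into a stain color basis W
    (3 x n_stains) and non-negative stain density maps H
    (n_stains x n_pixels). Sparsity regularization is omitted to keep
    the sketch version-agnostic."""
    flat = rgb.reshape(-1, 3).astype(np.float64)
    od = -np.log((flat + 1.0) / i0)          # optical density, ~0 for white pixels
    od = np.clip(od, 0.0, None)              # guard tiny negatives at I = 255
    model = NMF(n_components=n_stains, init="random", max_iter=400, random_state=0)
    h = model.fit_transform(od)              # (n_pixels, n_stains) densities
    w = model.components_                    # (n_stains, 3) stain colors in OD space
    return w.T, h.T
```

Normalization then amounts to keeping an image's density maps H while swapping in the target image's color basis W, which alters color but preserves the structure encoded in H.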

  19. Research on Spatial and Temporal Distribution of Color Steel Building Based on Multi-Source High-Resolution Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Yang, S. W.; Ma, J. J.; Wang, J. M.

    2018-04-01

    As representative vulnerable regions of a city, areas with a dense distribution of temporary color steel buildings are a major target for the control of fire risk, illegal construction, environmental supervision, urbanization quality, and the enhancement of a city's image. The related literature, domestic and foreign, focuses mainly on fire risk and violation monitoring; owing to the special characteristics of temporary color steel buildings, research on their temporal and spatial distribution and their influence on urban spatial form has not been reported. Therefore, this paper first extracts information on large-scale color steel buildings from high-resolution images. Second, the color steel buildings are classified, and the spatial and temporal distribution and aggregation characteristics of small (temporary) and large (factory, warehouse, etc.) buildings are studied separately. Third, the coupling relationship between the spatial distribution of color steel buildings and the spatial pattern of the city is analysed. The results show that there is a good coupling relationship between color steel buildings and urban spatial form: different types of color steel building reflect the pattern of regional differentiation of urban space and the phased pattern of urban development.

  20. Color image quality in projection displays: a case study

    NASA Astrophysics Data System (ADS)

    Strand, Monica; Hardeberg, Jon Y.; Nussbaum, Peter

    2005-01-01

    Recently the use of projection displays has increased dramatically in different applications such as digital cinema, home theatre, and business and educational presentations. Even if the color image quality of these devices has improved significantly over the years, it is still a common situation for users of projection displays that the projected colors differ significantly from the intended ones. The study presented in this paper attempts to analyze the color image quality of a large set of projection display devices, particularly investigating the variations in color reproduction. As a case study, a set of 14 projectors (LCD and DLP technology) at Gjøvik University College were tested under four different conditions: dark and light room, with and without an ICC profile. To find out more about the importance of the illumination conditions in a room, and the degree of improvement when using an ICC profile, the results from the measurements were processed and analyzed. Eye-One Beamer from GretagMacbeth was used to make the profiles. The color image quality was evaluated both visually and by color difference calculations. The results indicated large visual and colorimetric differences between the projectors. The DLP projectors generally have smaller color gamuts than the LCD projectors, and the gamuts of older projectors are significantly smaller than those of newer ones. The amount of ambient light reaching the screen is of great importance for the visual impression: if too much reflected or other ambient light reaches the screen, the projected image becomes pale and has low contrast. When using a profile, the differences in colors between the projectors get smaller and the colors appear more correct. For one device, the average ΔE*ab color difference when compared to a relative white reference was reduced from 22 to 11, for another from 13 to 6. Blue colors have the largest variations among the projection displays and make them
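
The ΔE*ab figures quoted above are Euclidean (CIE76) distances in CIELAB; a minimal helper makes the arithmetic explicit. For orientation, differences of roughly 2 to 3 ΔE*ab are commonly cited as about one just-noticeable difference, so the reported drop from 22 to 11 is a large but still visible residual error.

```python
import math

def delta_e_ab(lab1, lab2):
    """CIE76 color difference: Euclidean distance between two CIELAB
    triplets (L*, a*, b*)."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(lab1, lab2)))
```

Averaging `delta_e_ab` over a set of measured patches against their reference values gives the per-device averages the study reports.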

  2. Bio-inspired color image enhancement model

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng

    2009-05-01

    Human beings can perceive natural scenes very well under various illumination conditions, partly because of the contrast enhancement of center/surround networks and opponent analysis in the human retina. In this paper, we propose an image enhancement model that simulates the color processing of the human retina. Specifically, there are two center/surround layers, bipolar/horizontal and ganglion/amacrine, and four color opponents: red (R), green (G), blue (B), and yellow (Y). The central cell (bipolar or ganglion) takes surround information from one or several horizontal or amacrine cells, and bipolar and ganglion cells both have ON and OFF subtypes. For example, a +R/-G bipolar (red-center-ON/green-surround-OFF) is excited if only the center is illuminated, inhibited if only the surround is illuminated, and stays neutral if both center and surround are illuminated. Likewise, the other two ON-center/OFF-surround color opponents, +G/-R and +B/-Y, follow the same rules. The yellow (Y) channel is obtained by averaging the red and green channels. Conversely, OFF-center/ON-surround bipolars (i.e., -R/+G and -G/+R, but no -B/+Y) are inhibited when the center is illuminated. An ON-bipolar (or OFF-bipolar) only transfers signals to an ON-ganglion (or OFF-ganglion), where amacrine cells provide the surround information. Ganglion cells have strong spatiotemporal responses to moving objects. In our proposed enhancement model, the surround information is obtained as a weighted average of the neighborhood; excitation and inhibition are implemented as pixel intensity increases or decreases following a linear or nonlinear response; and center/surround excitation is decided by comparing their intensities. A difference of Gaussians (DOG) model is used to simulate the differential response of ganglion cells. Experimental results on natural scenery pictures show that the proposed image enhancement model, by simulating the two-layer center
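
The DOG center/surround response mentioned above can be sketched per channel. This is an illustrative stand-in, not the paper's full two-layer opponent model: the sigma values and gain are arbitrary assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_enhance(channel, sigma_center=1.0, sigma_surround=4.0, gain=1.5):
    """Difference-of-Gaussians center/surround response added back to the
    channel: pixels brighter than their surround are boosted (excitation),
    darker ones suppressed (inhibition), mimicking ON/OFF cells."""
    c = gaussian_filter(channel.astype(np.float64), sigma_center)
    s = gaussian_filter(channel.astype(np.float64), sigma_surround)
    return np.clip(channel + gain * (c - s), 0, 255).astype(np.uint8)
```

Applying this to each of the R, G, B (and derived Y) channels gives a rough single-layer approximation of the center/surround contrast enhancement described in the abstract.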

  3. Pixel-Wise-Inter/Intra-Channel Color and Luminance Uniformity Corrections for Multi-Channel Projection Displays

    DTIC Science & Technology

    2016-08-11

    Journal article; dates covered: Jan 2015 - Dec 2015. Presented at a conference in Dayton, Ohio, 28-29 June 2016. Abstract: Inter- and intra-channel color and luminance are generally non-uniform in multi-channel projection display systems. Several methods have been proposed to correct for both inter- and intra-channel color and luminance variation in multi-channel

  4. Align and conquer: moving toward plug-and-play color imaging

    NASA Astrophysics Data System (ADS)

    Lee, Ho J.

    1996-03-01

    The rapid evolution of the low-cost color printing and image capture markets has precipitated a huge increase in the use of color imagery by casual end users on desktop systems, as opposed to traditional professional color users working with specialized equipment. While the cost of color equipment and software has decreased dramatically, the underlying system-level problems associated with color reproduction have remained the same, and in many cases are more difficult to address in a casual environment than in a professional setting. The proliferation of color imaging technologies so far has resulted in a wide availability of component solutions which work together poorly. A similar situation in the desktop computing market has led to the various `Plug-and-Play' standards, which provide a degree of interoperability between a range of products on disparate computing platforms. This presentation will discuss some of the underlying issues and emerging trends in the desktop and consumer digital color imaging markets.

  5. Restoration of out-of-focus images based on circle of confusion estimate

    NASA Astrophysics Data System (ADS)

    Vivirito, Paolo; Battiato, Sebastiano; Curti, Salvatore; La Cascia, M.; Pirrone, Roberto

    2002-11-01

    In this paper a new method for fast out-of-focus blur estimation and restoration is proposed. It is suitable for CFA (Color Filter Array) images acquired by typical CCD/CMOS sensors. The method is based on the analysis of a single image and consists of two steps: 1) out-of-focus blur estimation via Bayer pattern analysis; 2) image restoration. Blur estimation is based on a block-wise edge detection technique, carried out on the green pixels of the CFA sensor image, also called the Bayer pattern. Once the blur level has been estimated, the image is restored through a new inverse filtering technique. This algorithm yields sharp images while reducing ringing and crisping artifacts over a wider frequency region. Experimental results show the effectiveness of the method, both subjectively and numerically, by comparison with other techniques in the literature.
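
A block-wise sharpness measure on the green sites of the Bayer pattern, the first step above, can be sketched as follows. This is an assumption-laden simplification: an RGGB layout is assumed, the two green phases are averaged into one half-resolution plane (ignoring their half-pixel offset), and mean gradient magnitude stands in for the paper's edge-detection statistic.

```python
import numpy as np

def green_sharpness_map(bayer, block=8):
    """Block-wise mean gradient magnitude of the green plane of an RGGB
    Bayer mosaic. Low values in a block suggest stronger defocus blur."""
    g = 0.5 * (bayer[0::2, 1::2].astype(np.float64)
               + bayer[1::2, 0::2].astype(np.float64))   # half-res green plane
    gy, gx = np.gradient(g)
    mag = np.hypot(gx, gy)
    h, w = mag.shape
    hb, wb = h // block, w // block
    return mag[:hb * block, :wb * block].reshape(hb, block, wb, block).mean(axis=(1, 3))
```

A global blur level can then be derived from the strongest blocks, and fed to the inverse filter in the restoration step.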

  6. Color image enhancement of medical images using alpha-rooting and zonal alpha-rooting methods on 2D QDFT

    NASA Astrophysics Data System (ADS)

    Grigoryan, Artyom M.; John, Aparna; Agaian, Sos S.

    2017-03-01

    The 2-D quaternion discrete Fourier transform (2-D QDFT) is the Fourier transform applied to color images when they are considered in quaternion space. Quaternions are four-dimensional hyper-complex numbers, and the quaternion representation of a color image allows us to treat the color of the image as a single unit. In the quaternion approach to color image enhancement, each color is a vector, which lets us see the merging effect of combining the primary colors. Color images are conventionally processed by applying an algorithm to each channel separately and then composing the color image from the processed channels. In this article, the alpha-rooting and zonal alpha-rooting methods are used with the 2-D QDFT. In the alpha-rooting method, the alpha-root of the transformed frequency values of the 2-D QDFT is taken before the inverse transform. In the zonal alpha-rooting method, the frequency spectrum of the 2-D QDFT is divided into zones and alpha-rooting is applied with different alpha values for different zones. The choice of alpha values is optimized with a genetic algorithm. The visual perception of 3-D medical images is improved by changing the reference gray line.
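
Alpha-rooting itself is easy to illustrate on an ordinary complex 2-D DFT; the paper applies the same magnitude transformation to quaternion spectra, so this grayscale sketch is only an illustrative stand-in.

```python
import numpy as np

def alpha_rooting(channel, alpha=0.9):
    """Alpha-rooting on a 2-D DFT: keep each coefficient's phase and raise
    its magnitude to the power alpha. With alpha < 1, high-frequency
    coefficients are boosted relative to low ones, sharpening detail."""
    spectrum = np.fft.fft2(channel.astype(np.float64))
    rooted = (np.abs(spectrum) ** alpha) * np.exp(1j * np.angle(spectrum))
    out = np.real(np.fft.ifft2(rooted))
    return np.clip(out, 0, 255).round().astype(np.uint8)
```

The zonal variant partitions the frequency plane and applies a different `alpha` per zone, with the per-zone values chosen by the genetic algorithm the abstract mentions.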

  7. Single-exposure color digital holography

    NASA Astrophysics Data System (ADS)

    Feng, Shaotong; Wang, Yanhui; Zhu, Zhuqing; Nie, Shouping

    2010-11-01

    In this paper, we report a method for color image reconstruction that records only a single multi-wavelength hologram. In the recording process, three lasers emitting in the red, green, and blue regions illuminate the object, and the object diffraction fields arrive at the hologram plane simultaneously. Three reference beams at different spatial angles interfere with the corresponding object diffraction fields on the hologram plane. Finally, a series of sub-holograms incoherently overlap on the CCD and are recorded as a single multi-wavelength hologram. Angular division multiplexing is applied to the reference beams so that the spatial spectra of the multiple recordings are separated in the Fourier plane. In the reconstruction process, the multi-wavelength hologram is Fourier transformed into its Fourier plane, where the spatial spectra of the different wavelengths are separated and can easily be extracted by frequency filtering. The extracted spectra are used to reconstruct the corresponding monochromatic complex amplitudes, which are synthesized to reconstruct the color image. Because only a single exposure is recorded, the technique is convenient for real-time image processing applications. However, the quality of the reconstructed images is affected by speckle noise; improving image quality requires further research.

  8. Single-snapshot 2D color measurement by plenoptic imaging system

    NASA Astrophysics Data System (ADS)

    Masuda, Kensuke; Yamanaka, Yuji; Maruyama, Go; Nagai, Sho; Hirai, Hideaki; Meng, Lingfei; Tosic, Ivana

    2014-03-01

    Plenoptic cameras enable capture of directional light ray information, thus allowing applications such as digital refocusing, depth estimation, or multiband imaging. One of the most common plenoptic camera architectures contains a microlens array at the conventional image plane and a sensor at the back focal plane of the microlens array. We leverage the multiband imaging (MBI) function of this camera and develop a single-snapshot, single-sensor, high-color-fidelity camera. Our camera is based on a plenoptic system with XYZ filters inserted in the pupil plane of the main lens. To achieve high color measurement precision, we perform an end-to-end optimization of the system model that includes light source information, object information, optical system information, plenoptic image processing, and color estimation processing. The optimized system characteristics are exploited to build an XYZ plenoptic colorimetric camera prototype that achieves high color measurement precision. We describe an application of our colorimetric camera to color shading evaluation of displays and show that it achieves color accuracy of ΔE<0.01.

  9. Use of discrete chromatic space to tune the image tone in a color image mosaic

    NASA Astrophysics Data System (ADS)

    Zhang, Zuxun; Li, Zhijiang; Zhang, Jianqing; Zheng, Li

    2003-09-01

    Color image processing is an important problem. The prevailing approach is to transfer the RGB color space into another color space, such as HSI (hue, intensity, and saturation), YIQ, or LUV. However, processing a color airborne image in a single color space may not be valid, because the electromagnetic signal is physically altered in every wave band while the color image is perceived through psychological vision. It is therefore necessary to propose an approach that accords with both the physical transformation and psychological perception. We analyze how to use the relevant color spaces to process color airborne photographs and present an application that tunes the image tone in a color airborne image mosaic. As a practical matter, a complete approach to performing the mosaic on color airborne images, taking full advantage of the relevant color spaces, is discussed.

  10. A Coded Structured Light System Based on Primary Color Stripe Projection and Monochrome Imaging

    PubMed Central

    Barone, Sandro; Paoli, Alessandro; Razionale, Armando Viviano

    2013-01-01

    Coded Structured Light techniques represent one of the most attractive research areas within the field of optical metrology. The coding procedures are typically based on projecting either a single pattern or a temporal sequence of patterns to provide 3D surface data. In this context, multi-slit or stripe colored patterns may be used with the aim of reducing the number of projected images. However, color imaging sensors require the use of calibration procedures to address crosstalk effects between different channels and to reduce the chromatic aberrations. In this paper, a Coded Structured Light system has been developed by integrating a color stripe projector and a monochrome camera. A discrete coding method, which combines spatial and temporal information, is generated by sequentially projecting and acquiring a small set of fringe patterns. The method allows the concurrent measurement of geometrical and chromatic data by exploiting the benefits of using a monochrome camera. The proposed methodology has been validated by measuring nominal primitive geometries and free-form shapes. The experimental results have been compared with those obtained by using a time-multiplexing gray code strategy. PMID:24129018

  11. Munsell color analysis of Landsat color-ratio-composite images of limonitic areas in southwest New Mexico

    NASA Technical Reports Server (NTRS)

    Kruse, F. A.

    1985-01-01

    The causes of color variations in the green areas on Landsat 4/5-4/6-6/7 (red-blue-green) color-ratio-composite (CRC) images, defined as limonitic areas, were investigated by analyzing the CRC images of the Lordsburg, New Mexico area. The red-blue-green additive color system was mathematically transformed into the cylindrical Munsell color coordinates (hue, saturation, and value), and selected areas were digitally analyzed for color variation. The obtained precise color characteristics were then correlated with properties of the surface material. The amount of limonite (L) visible to the sensor was found to be the primary cause of the observed color differences. The visible L is, in turn, affected by the amount of L on the material's surface and by within-pixel mixing of limonitic and nonlimonitic materials. The secondary cause of variation was vegetation density, which shifted CRC hues towards yellow-green, decreased saturation, and increased value.
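
The transformation from additive red-blue-green coordinates to cylindrical hue/saturation/value coordinates can be sketched with the standard library; note this is the generic HSV transform, used here as a stand-in for the Munsell-style coordinates in the paper, not its exact colorimetric mapping.

```python
import colorsys

def rgb_to_hsv_deg(r, g, b):
    """Cylindrical (hue, saturation, value) coordinates of an 8-bit RGB
    triplet, with hue expressed in degrees."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return h * 360.0, s, v
```

Analyzing a CRC image in these coordinates separates the hue shifts (e.g., toward yellow-green with vegetation) from the saturation and value changes the abstract describes.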

  12. Using color histogram normalization for recovering chromatic illumination-changed images.

    PubMed

    Pei, S C; Tseng, C L; Wu, C C

    2001-11-01

    We propose a novel image-recovery method using the covariance matrix of the red-green-blue (R-G-B) color histogram and tensor theories. The image-recovery method is called the color histogram normalization algorithm. It is known that the color histograms of an image taken under varied illuminations are related by a general affine transformation of the R-G-B coordinates when the illumination is changed. We propose a simplified affine model for application with illumination variation. This simplified affine model considers the effects of only three basic forms of distortion: translation, scaling, and rotation. According to this principle, we can estimate the affine transformation matrix necessary to recover images whose color distributions are varied as a result of illumination changes. We compare the normalized color histogram of the standard image with that of the tested image. By performing some operations of simple linear algebra, we can estimate the matrix of the affine transformation between two images under different illuminations. To demonstrate the performance of the proposed algorithm, we divide the experiments into two parts: computer-simulated images and real images corresponding to illumination changes. Simulation results show that the proposed algorithm is effective for both types of images. We also explain the noise-sensitive skew-rotation estimation that exists in the general affine model and demonstrate that the proposed simplified affine model without the use of skew rotation is better than the general affine model for such applications.
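
One concrete way to realize the simplified affine model above is moment matching: take the rotation/scaling part from the covariance square roots of the two histograms and the translation from their means. This sketch is an assumption, not the paper's exact estimator, and the helper names are hypothetical.

```python
import numpy as np

def _sqrtm_psd(c):
    """Symmetric square root of a positive semi-definite matrix."""
    w, v = np.linalg.eigh(c)
    return v @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ v.T

def estimate_affine(src_pixels, ref_pixels):
    """Estimate A, m1, m2 such that (x - m1) @ A.T + m2 maps the source
    RGB distribution onto the reference one (means and covariances match)."""
    m1, m2 = src_pixels.mean(axis=0), ref_pixels.mean(axis=0)
    c1 = np.cov(src_pixels, rowvar=False)
    c2 = np.cov(ref_pixels, rowvar=False)
    a = _sqrtm_psd(c2) @ np.linalg.inv(_sqrtm_psd(c1))
    return a, m1, m2

def recover_image(pixels, a, m1, m2):
    """Apply the estimated affine map to recover illumination-normalized pixels."""
    return (pixels - m1) @ a.T + m2
```

By construction the recovered pixels have exactly the reference mean and covariance, which is the sense in which the affine model "undoes" the illumination change.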

  13. Unsupervised color image segmentation using a lattice algebra clustering technique

    NASA Astrophysics Data System (ADS)

    Urcid, Gonzalo; Ritter, Gerhard X.

    2011-08-01

    In this paper we introduce a lattice algebra clustering technique for segmenting digital images in the Red-Green-Blue (RGB) color space. The proposed technique is a two-step procedure. Given an input color image, the first step determines the finite set of its extreme pixel vectors within the color cube by means of the scaled min-W and max-M lattice auto-associative memory matrices, including the minimum and maximum vector bounds. In the second step, maximal rectangular boxes enclosing each extreme color pixel are found using the Chebyshev distance between color pixels; afterwards, clustering is performed by assigning each image pixel to its corresponding maximal box. Both steps of the proposed method are completely unsupervised. Illustrative examples demonstrate the color segmentation results, including a brief numerical comparison with two other non-maximal variations of the same clustering technique.
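    A minimal sketch of the second, pixel-assignment step, assuming the extreme color vectors from the first step are already known (the function name and toy values are illustrative, not from the paper):

```python
import numpy as np

def chebyshev_assign(pixels, extremes):
    """Assign each RGB pixel to the nearest extreme color under the
    Chebyshev (L-infinity) distance, i.e. the largest per-channel gap."""
    pixels = np.asarray(pixels, dtype=float)      # shape (N, 3)
    extremes = np.asarray(extremes, dtype=float)  # shape (K, 3)
    d = np.abs(pixels[:, None, :] - extremes[None, :, :]).max(axis=2)  # (N, K)
    return d.argmin(axis=1)

extremes = np.array([[255, 0, 0], [0, 255, 0], [0, 0, 255]])
pixels = np.array([[250, 10, 5], [5, 240, 10], [20, 30, 220]])
labels = chebyshev_assign(pixels, extremes)  # → [0, 1, 2]
```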

  14. Colorful Saturn, Getting Closer

    NASA Image and Video Library

    2004-06-03

    As Cassini coasts into the final month of its nearly seven-year trek, the serene majesty of its destination looms ahead. The spacecraft's cameras are functioning beautifully and continue to return stunning views from Cassini's position, 1.2 billion kilometers (750 million miles) from Earth and now 15.7 million kilometers (9.8 million miles) from Saturn. In this narrow angle camera image from May 21, 2004, the ringed planet displays subtle, multi-hued atmospheric bands, colored by yet undetermined compounds. Cassini mission scientists hope to determine the exact composition of this material. This image also offers a preview of the detailed survey Cassini will conduct on the planet's dazzling rings. Slight differences in color denote both differences in ring particle composition and light scattering properties. Images taken through blue, green and red filters were combined to create this natural color view. The image scale is 132 kilometers (82 miles) per pixel. http://photojournal.jpl.nasa.gov/catalog/PIA06060

  15. Joint sparse coding based spatial pyramid matching for classification of color medical image.

    PubMed

    Shi, Jun; Li, Yi; Zhu, Jie; Sun, Haojie; Cai, Yin

    2015-04-01

    Although color medical images are important in clinical practice, they are usually converted to grayscale for further processing in pattern recognition, resulting in loss of rich color information. The sparse coding based linear spatial pyramid matching (ScSPM) and its variants are popular for grayscale image classification, but cannot extract color information. In this paper, we propose a joint sparse coding based SPM (JScSPM) method for the classification of color medical images. A joint dictionary can represent both the color information in each color channel and the correlation between channels. Consequently, the joint sparse codes calculated from a joint dictionary can carry color information, and therefore this method can easily transform a feature descriptor originally designed for grayscale images to a color descriptor. A color hepatocellular carcinoma histological image dataset was used to evaluate the performance of the proposed JScSPM algorithm. Experimental results show that JScSPM provides significant improvements as compared with the majority voting based ScSPM and the original ScSPM for color medical image classification. Copyright © 2014 Elsevier Ltd. All rights reserved.

  16. A dual-channel fusion system of visual and infrared images based on color transfer

    NASA Astrophysics Data System (ADS)

    Pei, Chuang; Jiang, Xiao-yu; Zhang, Peng-wei; Liang, Hao-cong

    2013-09-01

    The increasing availability and deployment of imaging sensors operating in multiple spectra has led to a large research effort in image fusion, resulting in a plethora of pixel-level image fusion algorithms. However, most of these algorithms produce gray or false-color fusion results that are not well adapted to human vision. Transferring color from a day-time reference image to obtain a natural-color fusion result is an effective way to solve this problem, but the computational cost of color transfer is high and cannot meet the requirements of real-time image processing. We developed a dual-channel infrared and visual image fusion system based on the TMS320DM642 digital signal processing chip. The system is divided into an image acquisition and registration unit, an image fusion processing unit, a system control unit and an output unit. Image registration of the dual-channel images is realized by combining hardware and software methods. A false-color image fusion algorithm in RGB color space is used to obtain an R-G fused image, and the system then chooses a reference image from which to transfer color to the fusion result. A color lookup table based on the statistical properties of the images is proposed to reduce the computational complexity of color transfer. The mapping calculation between the standard lookup table and the improved color lookup table is simple and needs to be performed only once for a fixed scene. Real-time fusion and natural colorization of infrared and visual images are realized by this system. Experimental results show that the color-transferred images have a natural color appearance to human eyes and can highlight targets effectively with clear background details. Observers using this system can interpret the image better and faster, improving situational awareness and reducing target detection time.
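    The paper does not spell out the lookup-table construction, but the global statistics it is built on are those of Reinhard-style color transfer: shift each channel of the fused image so its mean and standard deviation match the day-time reference. A sketch, with hypothetical names and toy data:

```python
import numpy as np

def transfer_color(fused, reference):
    """Per-channel mean/std matching: the statistical core of
    Reinhard-style color transfer, from which a fixed-scene
    lookup table could be precomputed."""
    fused = np.asarray(fused, dtype=float)
    reference = np.asarray(reference, dtype=float)
    out = np.empty_like(fused)
    for c in range(fused.shape[-1]):
        f, r = fused[..., c], reference[..., c]
        out[..., c] = (f - f.mean()) * (r.std() / (f.std() + 1e-12)) + r.mean()
    return out

rng = np.random.default_rng(1)
reference = rng.uniform(0, 255, size=(8, 8, 3))   # day-time reference
fused = 0.3 * reference + 40.0                     # flat false-color result
out = transfer_color(fused, reference)             # recovers reference statistics
```

Because the mapping depends only on six numbers per channel (two means, two standard deviations), it can be baked into a lookup table once per scene, which is what makes the real-time claim plausible.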

  17. True color blood flow imaging using a high-speed laser photography system

    NASA Astrophysics Data System (ADS)

    Liu, Chien-Sheng; Lin, Cheng-Hsien; Sun, Yung-Nien; Ho, Chung-Liang; Hsu, Chung-Chi

    2012-10-01

    Physiological changes in the retinal vasculature are commonly indicative of such disorders as diabetic retinopathy, glaucoma, and age-related macular degeneration. Thus, various methods have been developed for noninvasive clinical evaluation of ocular hemodynamics. However, to the best of our knowledge, current ophthalmic instruments do not provide a true color blood flow imaging capability. Accordingly, we propose a new method for the true color imaging of blood flow using a high-speed pulsed laser photography system. In the proposed approach, monochromatic images of the blood flow are acquired using a system of three cameras and three color lasers (red, green, and blue). A high-quality true color image of the blood flow is obtained by assembling the monochromatic images by means of image realignment and color calibration processes. The effectiveness of the proposed approach is demonstrated by imaging the flow of mouse blood within a microfluidic channel device. The experimental results confirm the proposed system provides a high-quality true color blood flow imaging capability, and therefore has potential for noninvasive clinical evaluation of ocular hemodynamics.

  18. Magnetic Nanoparticles for Multi-Imaging and Drug Delivery

    PubMed Central

    Lee, Jae-Hyun; Kim, Ji-wook; Cheon, Jinwoo

    2013-01-01

    Various bio-medical applications of magnetic nanoparticles have been explored during the past few decades. As tools that hold great potential for advancing biological sciences, magnetic nanoparticles have been used as platform materials for enhanced magnetic resonance imaging (MRI) agents, biological separation and magnetic drug delivery systems, and magnetic hyperthermia treatment. Furthermore, approaches that integrate various imaging and bioactive moieties have been used in the design of multi-modality systems, which possess synergistically enhanced properties such as better imaging resolution and sensitivity, molecular recognition capabilities, stimulus responsive drug delivery with on-demand control, and spatio-temporally controlled cell signal activation. Below, recent studies that focus on the design and synthesis of multi-mode magnetic nanoparticles will be briefly reviewed and their potential applications in the imaging and therapy areas will be also discussed. PMID:23579479

  19. Three frequency false-color image of Oberpfaffenhofen supersite in Germany

    NASA Image and Video Library

    1994-04-18

    STS059-S-080 (18 April 1994) --- This is a false-color three-frequency image of the Oberpfaffenhofen supersite, an area just southwest of Munich in southern Germany. The colors show the different conditions that the three radars (X-Band, C-Band and L-Band) can see on the ground. The image covers a 27 by 36 kilometer area. The center of the site is 48.09 degrees north and 11.29 degrees east. The image was acquired by the Spaceborne Imaging Radar-C and X-Band Synthetic Aperture Radar (SIR-C/X-SAR) onboard the Space Shuttle Endeavour on April 11, 1994. The dark area on the left is Lake Ammersee. The two smaller lakes are the Woerthsee and the Pilsensee. On the bottom is the tip of the Starnbergersee. The city of Munich is located just beyond the right edge of the image. The Oberpfaffenhofen supersite is the major test site for SIR-C/X-SAR calibration and scientific investigations concerning agriculture, forestry, hydrology and geology. This color composite image is a three-frequency overlay: the L-Band total power was assigned red, the C-Band total power is shown in green and the X-Band VV polarization appears blue. The colors on the image stress the differences among the L-Band, C-Band and X-Band images. If the three radar antennas were getting an equal response from objects on the ground, this image would appear in black and white. However, in this image, the blue areas correspond to areas for which the X-Band backscatter is relatively higher than the backscatter at L- and C-Bands. This behavior is characteristic of grasslands, clear cuts and shorter vegetation. Similarly, the forested areas have a reddish tint (L-Band). The green areas seen near both the Ammersee and the Pilsensee lakes indicate marshy areas. The agricultural fields in the upper right hand corner appear mostly in blue and green (X-Band and C-Band). The white areas are mostly urban areas, while the smooth surfaces of the lakes appear very dark. SIR-C/X-SAR is part of NASA's Mission to Planet Earth (MTPE).

  20. Availability of color calibration for consistent color display in medical images and optimization of reference brightness for clinical use

    NASA Astrophysics Data System (ADS)

    Iwai, Daiki; Suganami, Haruka; Hosoba, Minoru; Ohno, Kazuko; Emoto, Yutaka; Tabata, Yoshito; Matsui, Norihisa

    2013-03-01

    Color image consistency has not yet been achieved, except through the Digital Imaging and Communications in Medicine (DICOM) Supplement 100, which implements a color reproduction pipeline and device-independent color spaces; thus, most healthcare enterprises cannot check monitor degradation routinely. To ensure color consistency in medical color imaging, monitor color calibration should be introduced. Using a simple color calibration device, the chromaticities of typical colors (red, green, blue, and white) are measured as device-independent profile connection space values, called u'v', before and after calibration. In addition, clinical color images are displayed and visual differences are observed. In color calibration, the monitor brightness level has to be set to the rather low value of 80 cd/m2 according to the sRGB standard. Since the maximum brightness of most color monitors currently available for medical use is much higher than 80 cd/m2, it does not seem appropriate to calibrate at the 80 cd/m2 level. We therefore propose that a new brightness standard be introduced that maintains the color representation in clinical use. To evaluate the effect of brightness on chromaticity experimentally, the brightness level of two monitors was varied from 80 to 270 cd/m2 and chromaticity values were compared at each level. As a result, there were no significant differences in the chromaticity diagram when brightness levels were changed. In conclusion, chromaticity is close to the theoretical value after color calibration, and it does not shift when brightness is changed. The results indicate that an optimized reference brightness level for clinical use could be set at the high brightness of current monitors.
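    The u'v' values above are the CIE 1976 uniform chromaticity coordinates, computed from measured tristimulus values with the standard formulas u' = 4X/(X + 15Y + 3Z) and v' = 9Y/(X + 15Y + 3Z):

```python
def xyz_to_uv(X, Y, Z):
    """CIE 1976 u'v' chromaticity from tristimulus XYZ -- the
    device-independent coordinates the calibration device reports."""
    d = X + 15.0 * Y + 3.0 * Z
    return 4.0 * X / d, 9.0 * Y / d

u, v = xyz_to_uv(95.047, 100.0, 108.883)  # D65 white point
# u ≈ 0.1978, v ≈ 0.4683
```

Because u'v' divides out overall intensity, a pure brightness change leaves the coordinates fixed, which is consistent with the paper's finding that chromaticity does not shift between 80 and 270 cd/m2.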

  1. Seed viability detection using computerized false-color radiographic image enhancement

    NASA Technical Reports Server (NTRS)

    Vozzo, J. A.; Marko, Michael

    1994-01-01

    Seed radiographs are divided into density zones that are related to seed germination. The seeds that germinate have densities corresponding to false-color red. In turn, a seed sorter may be designed that rejects those seeds without sufficient red to activate a gate along a moving belt carrying the seed source. The result is that only seeds with the preselected densities representing biological viability, leading to germination, are separated out; these selected seeds command a higher market value. Actual false-coloring isn't required for a computer to distinguish the significant gray-zone range: the range can be predetermined and screened without red imaging. Applying false-color enhancement is a means of emphasizing differences in densities of gray within any subject from photographic, radiographic, or video imaging. Within the 0-255 range of gray levels, colors can be assigned to any single level or group of gray levels. Densitometric values then become easily recognized colors that relate to the image density. Choosing a color to identify any given density allows separation by morphology or composition (form or function). Additionally, the relative areas of each color are readily available for determining the distribution of that density by comparison with other densities within the image.
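    The gray-zone screening described above amounts to a range test on 0-255 pixel densities; a sketch with hypothetical threshold values (the paper's actual density zones are not given):

```python
import numpy as np

def false_color(gray, lo, hi):
    """Flag pixels whose gray level falls in the preselected density
    range [lo, hi] -- the zone that would be shown in red and would
    trigger the sorter gate -- and render the rest as gray."""
    gray = np.asarray(gray)
    mask = (gray >= lo) & (gray <= hi)
    rgb = np.stack([np.where(mask, 255, gray)] +
                   [np.where(mask, 0, gray)] * 2, axis=-1)
    return rgb, mask

gray = np.array([[10, 120], [200, 130]], dtype=np.uint8)
rgb, viable = false_color(gray, 115, 140)  # viable: [[False, True], [False, True]]
```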

  2. Multi-color incomplete Cholesky conjugate gradient methods for vector computers. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Poole, E. L.

    1986-01-01

    In this research we are concerned with the solution on vector computers of linear systems of equations Ax = b, where A is a large, sparse, symmetric positive definite matrix. We solve the system using an iterative method, the incomplete Cholesky conjugate gradient method (ICCG). We apply a multi-color strategy to obtain p-color matrices for which a block-oriented ICCG method is implemented on the CYBER 205. (A p-color matrix is a matrix which can be partitioned into a p×p block matrix in which the diagonal blocks are diagonal matrices.) This algorithm, which is based on a no-fill strategy, achieves O(N/p)-length vector operations both in the decomposition of A and in the forward and back solves necessary at each iteration of the method. We discuss the natural ordering of the unknowns as an ordering that minimizes the number of diagonals in the matrix, and define multi-color orderings in terms of disjoint sets of the unknowns. We give necessary and sufficient conditions to determine which multi-color orderings of the unknowns correspond to p-color matrices. A performance model is given which is used both to predict execution time for ICCG methods and to compare an ICCG method against conjugate gradient without preconditioning or against another ICCG method. Results are given from runs on the CYBER 205 at NASA's Langley Research Center for four model problems.
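    For the simplest case, p = 2, the classical red-black ordering of a 1-D model problem shows how a multi-color ordering yields a p-color matrix. This is a toy NumPy demonstration of the structure, not the CYBER 205 implementation:

```python
import numpy as np

# Red-black (p = 2) ordering of a 1-D centered-difference problem: the
# tridiagonal matrix becomes a 2-color matrix whose two diagonal blocks
# are themselves diagonal, so the forward and back solves vectorize
# over O(N/p)-length operations.
n = 6
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

perm = list(range(0, n, 2)) + list(range(1, n, 2))  # reds first, then blacks
P = np.eye(n)[perm]          # permutation matrix for the new ordering
B = P @ A @ P.T              # reordered system matrix

half = n // 2
red_block = B[:half, :half]      # diagonal (no red-red coupling)
black_block = B[half:, half:]    # diagonal (no black-black coupling)
```

Unknowns of the same color are mutually uncoupled, so each diagonal block is a diagonal matrix; all coupling lives in the off-diagonal blocks, exactly the p-color structure defined in the abstract.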

  3. Color Restoration of RGBN Multispectral Filter Array Sensor Images Based on Spectral Decomposition.

    PubMed

    Park, Chulhee; Kang, Moon Gi

    2016-05-18

    A multispectral filter array (MSFA) image sensor with red, green, blue and near-infrared (NIR) filters is useful for various imaging applications, with the advantage that it obtains color information and NIR information simultaneously. Because the MSFA image sensor needs to acquire invisible-band information, it is necessary to remove the IR cut-off filter (IRCF). However, without the IRCF, the color of the image is desaturated by the interference of the additional NIR component in each RGB color channel. To overcome this color degradation, a signal processing approach is required to restore natural color by removing the unwanted NIR contribution to the RGB color channels while the additional NIR information remains in the N channel. Thus, in this paper, we propose a color restoration method for an imaging system based on the MSFA image sensor with RGBN filters. To remove the unnecessary NIR component in each RGB color channel, spectral estimation and spectral decomposition are performed based on the spectral characteristics of the MSFA sensor. The proposed color restoration method estimates the spectral intensity in the NIR band and recovers hue and color saturation by decomposing the visible-band component and the NIR-band component in each RGB color channel. The experimental results show that the proposed method effectively restores natural color and minimizes angular errors.
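    The decomposition step can be caricatured as a per-channel subtraction of the estimated NIR leakage. The coefficients below are placeholders; in the paper they come from the measured spectral characteristics of the MSFA sensor, not fixed constants:

```python
import numpy as np

def remove_nir(raw_rgb, nir, k=(0.3, 0.3, 0.3)):
    """Subtract the estimated NIR leakage from each RGB channel to
    restore color saturation. k holds illustrative per-channel
    leakage coefficients (hypothetical values)."""
    raw_rgb = np.asarray(raw_rgb, dtype=float)   # (..., 3) without IRCF
    nir = np.asarray(nir, dtype=float)           # (...) N-channel image
    k = np.asarray(k, dtype=float)
    vis = raw_rgb - k * nir[..., None]
    return np.clip(vis, 0.0, 255.0)

raw = np.array([[[120.0, 90.0, 60.0]]])  # one desaturated RGB pixel
nir = np.array([[100.0]])                # co-sited NIR measurement
vis = remove_nir(raw, nir)               # ≈ [[[90, 60, 30]]]
```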

  4. A novel quantum steganography scheme for color images

    NASA Astrophysics Data System (ADS)

    Li, Panchi; Liu, Xiande

    In quantum image steganography, embedding capacity and security are two important issues. This paper presents a novel quantum steganography scheme using color images as cover images. First, the secret information is divided into 3-bit segments, and then each 3-bit segment is embedded into the LSB of one color pixel in the cover image according to its own value and using Gray code mapping rules. Extraction is the inverse of embedding. We designed the quantum circuits that implement the embedding and extracting process. The simulation results on a classical computer show that the proposed scheme outperforms several other existing schemes in terms of embedding capacity and security.

  5. Digital methods of recording color television images on film tape

    NASA Astrophysics Data System (ADS)

    Krivitskaya, R. Y.; Semenov, V. M.

    1985-04-01

    Three methods are now available for recording color television images on film tape, directly or after appropriate signal processing. Conventional recording of images from the screens of three kinescopes with synthetic-crystal face plates is still the most effective for high fidelity. This method was improved by digital preprocessing of the brightness and color-difference signals. Frame-by-frame storage of these signals in memory in digital form is followed by gamma and aperture correction and electronic correction of crossover distortions in the color layers of the film, with fixing in accordance with specific emulsion procedures. The newer method of recording color television images with line arrays of light-emitting diodes involves dichroic superposing mirrors and a movable scanning mirror. This method allows the use of standard movie cameras, simplifies interlacing-to-linewise conversion and the mechanical equipment, and lengthens exposure time while shortening recording time. The latest image-transform method requires an audio-video recorder, a memory disk, a digital computer, and a decoder. The 9-step procedure begins with preprocessing the composite color television signal to reduce noise and time errors, followed by frame-frequency conversion and setting the number of lines. The composite signal is then resolved into its brightness and color-difference components, and phase errors and image blurring are also reduced. After extraction of the R, G, B signals and colorimetric matching of TV camera and film tape, the simultaneous R, G, B signals are converted from interlacing to sequential triads of color-quotient frames with linewise scanning at triple frequency. Color-quotient signals are recorded with an electron beam on a smoothly moving black-and-white film tape under vacuum. While digital techniques improve signal quality and simplify process control, not requiring stabilization of circuits, image processing is still analog.

  6. Modeling human faces with multi-image photogrammetry

    NASA Astrophysics Data System (ADS)

    D'Apuzzo, Nicola

    2002-03-01

    Modeling and measurement of the human face have been growing in importance for various purposes. Laser scanning, coded-light range digitizers, image-based approaches and digital stereo photogrammetry are the methods currently employed in medical applications, computer animation, video surveillance, teleconferencing and virtual reality to produce three-dimensional computer models of the human face. The requirements differ depending on the application; ours are primarily high measurement accuracy and automation of the process. The method presented in this paper is based on multi-image photogrammetry. The equipment, the method and the results achieved with this technique are depicted here. The process is composed of five steps: acquisition of multi-images, calibration of the system, establishment of corresponding points in the images, computation of their 3-D coordinates and generation of a surface model. The images captured by five CCD cameras arranged in front of the subject are digitized by a frame grabber. The complete system is calibrated using a reference object with coded target points, which can be measured fully automatically. To facilitate the establishment of correspondences in the images, texture in the form of random patterns can be projected from two directions onto the face. The multi-image matching process, based on a geometrically constrained least-squares matching algorithm, produces a dense set of corresponding points in the five images. Neighborhood filters are then applied to the matching results to remove errors. After filtering the data, the three-dimensional coordinates of the matched points are computed by forward intersection using the results of the calibration process; the achieved mean accuracy is about 0.2 mm in the sagittal direction and about 0.1 mm in the lateral direction. The last step of data processing is the generation of a surface model from the point cloud and the application of smoothing filters.

  7. Color separation in forensic image processing using interactive differential evolution.

    PubMed

    Mushtaq, Harris; Rahnamayan, Shahryar; Siddiqi, Areeb

    2015-01-01

    Color separation is an image processing technique that has often been used in forensic applications to differentiate among variant colors and to remove unwanted image interference. This process can reveal important information such as covered text or fingerprints in forensic investigation procedures. However, several limitations prevent users from selecting the appropriate parameters pertaining to the desired and undesired colors. This study proposes the hybridization of an interactive differential evolution (IDE) algorithm and a color separation technique that no longer requires users to guess the required control parameters. The IDE algorithm optimizes these parameters in an interactive manner by utilizing human visual judgment to uncover desired objects. A comprehensive experimental verification has been conducted on various sample test images, including heavily obscured texts, texts with subtle color variations, and fingerprint smudges. The advantage of IDE is apparent as it effectively optimizes the color separation parameters at a level indiscernible to the naked eye. © 2014 American Academy of Forensic Sciences.

  8. Bio-inspired color image enhancement

    NASA Astrophysics Data System (ADS)

    Meylan, Laurence; Susstrunk, Sabine

    2004-06-01

    Capturing and rendering an image that fulfills the observer's expectations is a difficult task. This is because the signal reaching the eye is processed by a complex mechanism before forming a percept, whereas a capturing device only retains the physical value of light intensities. It is especially difficult to render complex scenes with highly varying luminances. For example, a picture taken inside a room where objects are visible through the windows will not be rendered correctly by a global technique: either details in the dim room will be hidden in shadow or the objects viewed through the window will be too bright. The image has to be treated locally to resemble more closely what the observer remembers. The purpose of this work is to develop a technique for rendering images based on human local adaptation. We take inspiration from a model of color vision called Retinex, which determines the perceived color from the spatial relationships of the captured signals and has been used as a computational model for image rendering. In this article, we propose a new solution inspired by Retinex that is based on a single filter applied to the luminance channel. All parameters are image-dependent, so the process requires no parameter tuning, which makes the method more flexible than other existing ones. The presented results show that our method suitably enhances high dynamic range images.
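    A minimal single-filter, luminance-only sketch in the spirit of the method: each pixel is expressed relative to a Gaussian-weighted local average, so dim regions are lifted without blowing out bright ones. The actual filter and its image-dependent parameter selection are not described here; `sigma` is a placeholder, not the paper's value.

```python
import numpy as np

def retinex_enhance(lum, sigma=3.0):
    """Center/surround Retinex-style output on the luminance channel:
    log of each pixel minus log of its Gaussian-blurred surround."""
    lum = np.asarray(lum, dtype=float) + 1.0   # avoid log(0)
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    g = np.exp(-x**2 / (2 * sigma**2))
    g /= g.sum()
    # separable Gaussian blur with edge padding
    pad = np.pad(lum, r, mode='edge')
    blur = np.apply_along_axis(lambda v: np.convolve(v, g, 'valid'), 0, pad)
    blur = np.apply_along_axis(lambda v: np.convolve(v, g, 'valid'), 1, blur)
    return np.log(lum) - np.log(blur)
```

On a uniform image the output is zero everywhere, while a dark pixel in a bright surround comes out negative and a bright pixel in a dark surround positive: the local, relative behavior the paragraph describes.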

  9. Multi-focus beam shaping of high power multimode lasers

    NASA Astrophysics Data System (ADS)

    Laskin, Alexander; Volpp, Joerg; Laskin, Vadim; Ostrun, Aleksei

    2017-08-01

    Beam shaping of powerful multimode fiber lasers and fiber-coupled solid-state and diode lasers is of great importance for improving industrial laser applications. Welding and cladding with millimetre-scale working spots benefit from "inverse-Gauss" intensity profiles; the performance of thick-metal-sheet cutting and deep-penetration welding can be enhanced by distributing the laser energy along the optical axis, since more efficient use of laser energy, higher edge quality and a smaller heat-affected zone can be achieved. Building beam shaping optics for multimode lasers encounters physical limitations due to the low spatial coherence of multimode fiber-coupled lasers, which results in large Beam Parameter Products (BPP) or M² values. The laser radiation emerging from a multimode fiber is a mixture of wavefronts; the fiber end can be considered a light source whose optical properties are intermediate between a Lambertian source and a single-mode laser beam. Imaging the fiber end, using a collimator and a focusing objective, is a robust and widely used beam delivery approach. Beam shaping solutions are suggested in the form of optics combining fiber-end imaging with geometrical separation of the focused spots either perpendicular to or along the optical axis; the energy of high power lasers is thus distributed among multiple foci. In order to provide reliable operation with multi-kW lasers and avoid damage, the optics are designed as refractive elements with smooth optical surfaces. The paper presents descriptions of multi-focus optics as well as example measurements of the intensity profiles of beam caustics and application results.

  10. Color constancy in natural scenes explained by global image statistics.

    PubMed

    Foster, David H; Amano, Kinjiro; Nascimento, Sérgio M C

    2006-01-01

    To what extent do observers' judgments of surface color with natural scenes depend on global image statistics? To address this question, a psychophysical experiment was performed in which images of natural scenes under two successive daylights were presented on a computer-controlled high-resolution color monitor. Observers reported whether there was a change in reflectance of a test surface in the scene. The scenes were obtained with a hyperspectral imaging system and included variously trees, shrubs, grasses, ferns, flowers, rocks, and buildings. Discrimination performance, quantified on a scale of 0 to 1 with a color-constancy index, varied from 0.69 to 0.97 over 21 scenes and two illuminant changes, from a correlated color temperature of 25,000 K to 6700 K and from 4000 K to 6700 K. The best account of these effects was provided by receptor-based rather than colorimetric properties of the images. Thus, in a linear regression, 43% of the variance in constancy index was explained by the log of the mean relative deviation in spatial cone-excitation ratios evaluated globally across the two images of a scene. A further 20% was explained by including the mean chroma of the first image and its difference from that of the second image and a further 7% by the mean difference in hue. Together, all four global color properties accounted for 70% of the variance and provided a good fit to the effects of scene and of illuminant change on color constancy, and, additionally, of changing test-surface position. By contrast, a spatial-frequency analysis of the images showed that the gradient of the luminance amplitude spectrum accounted for only 5% of the variance.

  11. Color TV: total variation methods for restoration of vector-valued images.

    PubMed

    Blomgren, P; Chan, T F

    1998-01-01

    We propose a new definition of the total variation (TV) norm for vector-valued functions that can be applied to restore color and other vector-valued images. The new TV norm has the desirable properties of 1) not penalizing discontinuities (edges) in the image, 2) being rotationally invariant in the image space, and 3) reducing to the usual TV norm in the scalar case. Some numerical experiments on denoising simple color images in red-green-blue (RGB) color space are presented.
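    A discrete version of the coupled norm can be computed with forward differences, assuming an image shaped (rows, cols, channels); this is a sketch, not the authors' numerical scheme. The key point is that the gradient magnitudes of all channels are pooled under one square root, which couples the channels and makes the norm reduce to the usual scalar TV for a single channel:

```python
import numpy as np

def color_tv(img):
    """Discrete vector-valued TV norm: per pixel, sqrt of the summed
    squared forward differences over all channels, then summed over
    the image."""
    img = np.asarray(img, dtype=float)
    dx = np.diff(img, axis=1, prepend=img[:, :1])   # horizontal differences
    dy = np.diff(img, axis=0, prepend=img[:1, :])   # vertical differences
    return np.sqrt((dx**2 + dy**2).sum(axis=-1)).sum()

# single-channel step edge: reduces to the usual scalar TV
step = np.zeros((2, 4, 1))
step[:, 2:, 0] = 1.0
tv = color_tv(step)  # → 2.0 (one unit jump per row)
```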

  12. Progressive low-bitrate digital color/monochrome image coding by neuro-fuzzy clustering

    NASA Astrophysics Data System (ADS)

    Mitra, Sunanda; Meadows, Steven

    1997-10-01

    Color image coding at low bit rates is an area of research that has only recently been addressed in the literature, as the problems of storage and transmission of color images become more prominent in many applications. Current trends in image coding exploit the advantage of subband/wavelet decompositions in reducing the complexity of optimal scalar/vector quantizer (SQ/VQ) design. Compression ratios (CRs) of the order of 10:1 to 20:1 with high visual quality have been achieved by vector quantization of subband-decomposed color images in perceptually weighted color spaces. We report the performance of a recently developed adaptive vector quantizer, AFLC-VQ, for effective reduction in bit rates while maintaining high visual quality of reconstructed color as well as monochrome images. For 24-bit color images, excellent visual quality is maintained down to a bit rate of approximately 0.48 bpp (0.16 bpp per color plane or monochrome image, CR 50:1) using the RGB color space. Further tuning of the AFLC-VQ, and the addition of an entropy coder module after the VQ stage, yields extremely low bit rates (CR 80:1) for good-quality reconstructed images. Our recent study also reveals that, for similar visual quality, the RGB color space requires fewer bits per pixel than either the YIQ or the HSI color space for storing the same information when entropy coding is applied. AFLC-VQ outperforms other standard VQ and adaptive SQ techniques in retaining visual fidelity at similar bit rate reductions.

  13. Estimation of saturated pixel values in digital color imaging

    PubMed Central

    Zhang, Xuemei; Brainard, David H.

    2007-01-01

    Pixel saturation, where the incident light at a pixel causes one of the color channels of the camera sensor to respond at its maximum value, can produce undesirable artifacts in digital color images. We present a Bayesian algorithm that estimates what the saturated channel's value would have been in the absence of saturation. The algorithm uses the non-saturated responses from the other color channels, together with a multivariate Normal prior that captures the correlation in response across color channels. The appropriate parameters for the prior may be estimated directly from the image data, since most image pixels are not saturated. Given the prior, the responses of the non-saturated channels, and the fact that the true response of the saturated channel is known to be greater than the saturation level, the algorithm returns the optimal expected mean square estimate for the true response. Extensions of the algorithm to the case where more than one channel is saturated are also discussed. Both simulations and examples with real images are presented to show that the algorithm is effective. PMID:15603065
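    For the single-saturated-channel case, the estimate described above has a closed form: condition the multivariate Normal prior on the two non-saturated channel values, then take the mean of the resulting 1-D Gaussian truncated below at the saturation level. The sketch below uses the standard formulas; the prior parameters are toy values, not estimated from image data as in the paper.

```python
import numpy as np
from math import erf, exp, pi, sqrt

def estimate_saturated(obs, mu, cov, sat_level, sat_idx=0):
    """Expected value of the saturated channel given the other channels
    and the constraint that the true value exceeds sat_level."""
    mu = np.asarray(mu, dtype=float)
    cov = np.asarray(cov, dtype=float)
    obs = np.asarray(obs, dtype=float)
    idx = [i for i in range(mu.size) if i != sat_idx]
    S12 = cov[sat_idx, idx]               # cross-covariance, shape (2,)
    S22 = cov[np.ix_(idx, idx)]           # covariance of observed channels
    w = np.linalg.solve(S22, S12)
    m = mu[sat_idx] + w @ (obs - mu[idx])         # conditional mean
    sd = sqrt(cov[sat_idx, sat_idx] - w @ S12)    # conditional std dev
    # mean of the conditional Normal truncated below at sat_level
    a = (sat_level - m) / sd
    pdf = exp(-0.5 * a * a) / sqrt(2.0 * pi)
    tail = 0.5 * (1.0 - erf(a / sqrt(2.0)))       # P(Z > a)
    return m + sd * pdf / tail

mu = [100.0, 100.0, 100.0]
cov = [[400.0, 380.0, 380.0],
       [380.0, 400.0, 380.0],
       [380.0, 380.0, 400.0]]               # strongly correlated channels
r_hat = estimate_saturated(obs=[180.0, 180.0], mu=mu, cov=cov,
                           sat_level=150.0, sat_idx=0)
```

With strong cross-channel correlation, the two unsaturated readings of 180 pull the estimate well above the 150 clip level, which is exactly why exploiting the correlation prior beats simply reporting the saturation value.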

  14. [Perceptual sharpness metric for visible and infrared color fusion images].

    PubMed

    Gao, Shao-Shu; Jin, Wei-Qi; Wang, Xia; Wang, Ling-Xue; Luo, Yuan

    2012-12-01

    For visible and infrared color fusion images, an objective sharpness assessment model is proposed to measure the clarity of detail and edge definition of the fusion image. First, the contrast sensitivity function (CSF) of the human visual system is used to remove frequency components to which the eye is insensitive under given viewing conditions. Second, a perceptual contrast model that takes the human luminance masking effect into account is proposed, based on the local band-limited contrast model. Finally, the perceptual contrast is calculated in regions of interest (containing image details and edges) in the fusion image to evaluate image perceptual sharpness. Experimental results show that the proposed perceptual sharpness metric provides predictions that match human perceptual evaluations more closely than five existing sharpness (blur) metrics for color images, and that it can evaluate the perceptual sharpness of color fusion images effectively.
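
    The CSF prefiltering step can be illustrated as follows; the paper's exact CSF is not given here, so the widely used Mannos-Sakrison approximation and an assumed viewing resolution stand in:

```python
import numpy as np

def csf_mannos(f):
    """Mannos-Sakrison contrast sensitivity approximation (an illustrative
    stand-in for the paper's CSF). `f` is spatial frequency in
    cycles/degree; sensitivity peaks near 8 cpd and falls off at DC and
    at high frequencies."""
    f = np.asarray(f, dtype=float)
    return 2.6 * (0.0192 + 0.114 * f) * np.exp(-(0.114 * f) ** 1.1)

def csf_filter(img, pix_per_degree=32.0):
    """Attenuate frequency components the eye is insensitive to by
    weighting the image spectrum with the CSF (pix_per_degree encodes the
    assumed viewing conditions)."""
    h, w = img.shape
    fy = np.fft.fftfreq(h) * pix_per_degree   # cycles/degree along y
    fx = np.fft.fftfreq(w) * pix_per_degree   # cycles/degree along x
    radial = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))
    weights = csf_mannos(radial)
    return np.real(np.fft.ifft2(np.fft.fft2(img) * weights))

# A constant image contains only DC, so the filter scales it by the CSF at 0.
flat = csf_filter(np.ones((8, 8)))
```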

  15. Color image definition evaluation method based on deep learning method

    NASA Astrophysics Data System (ADS)

    Liu, Di; Li, YingChun

    2018-01-01

    To evaluate different blur levels of color images and improve on existing definition-evaluation methods, this paper proposes a no-reference color image clarity evaluation method based on a deep learning framework and a BP neural network classification model. First, VGG16 is used as the feature extractor to obtain 4,096-dimensional features from the images; the extracted features and their labels are then used to train a BP neural network, which performs the final definition evaluation. The method is evaluated on images from the CSIQ database, blurred at different levels to yield 4,000 images divided into three categories, each category representing one blur level. Of every 400 feature samples, 300 are used for training and the remaining 100 for testing. The experimental results show that the method takes full advantage of the learning and representation capability of deep learning. Unlike most existing image clarity evaluation methods, which rely on manually designed and extracted features, the proposed method extracts image features automatically and achieves an excellent classification accuracy of 96% on the test set. Moreover, the predicted quality levels of the original color images agree with the perception of the human visual system.
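
    The pipeline (deep features followed by a small network trained by backpropagation) can be sketched with synthetic stand-in features; the dimensions and cluster parameters below are hypothetical, whereas the real system uses 4,096-dimensional VGG16 activations:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the VGG16 features: three well-separated Gaussian clusters,
# one per blur level (hypothetical 16-D features; the real pipeline uses
# 4,096-D VGG16 activations).
D, N = 16, 100
centers = rng.normal(0.0, 5.0, size=(3, D))
X = np.vstack([c + rng.normal(0.0, 0.5, size=(N, D)) for c in centers])
y = np.repeat(np.arange(3), N)
T = np.eye(3)[y]                              # one-hot targets

# Single-hidden-layer network trained by backpropagation (plain
# full-batch gradient descent with a softmax/cross-entropy output).
H, lr = 32, 0.1
W1 = rng.normal(0.0, 0.1, (D, H)); b1 = np.zeros(H)
W2 = rng.normal(0.0, 0.1, (H, 3)); b2 = np.zeros(3)
for _ in range(500):
    A = np.tanh(X @ W1 + b1)                  # hidden activations
    Z = A @ W2 + b2
    P = np.exp(Z - Z.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)         # softmax probabilities
    dZ = (P - T) / len(X)                     # output-layer error
    dA = (dZ @ W2.T) * (1.0 - A ** 2)         # backpropagated hidden error
    W2 -= lr * (A.T @ dZ); b2 -= lr * dZ.sum(axis=0)
    W1 -= lr * (X.T @ dA); b1 -= lr * dA.sum(axis=0)

pred = np.argmax(np.tanh(X @ W1 + b1) @ W2 + b2, axis=1)
accuracy = float((pred == y).mean())
```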

  16. Color Restoration of RGBN Multispectral Filter Array Sensor Images Based on Spectral Decomposition

    PubMed Central

    Park, Chulhee; Kang, Moon Gi

    2016-01-01

    A multispectral filter array (MSFA) image sensor with red, green, blue and near-infrared (NIR) filters is useful for various imaging applications, with the advantage that it obtains color and NIR information simultaneously. Because the MSFA image sensor needs to acquire invisible-band information, it is necessary to remove the IR cut-off filter (IRCF). However, without the IRCF, the color of the image is desaturated by the interference of the additional NIR component in each RGB color channel. To overcome this color degradation, a signal processing approach is required to restore natural color by removing the unwanted NIR contribution from the RGB color channels while the additional NIR information remains in the N channel. Thus, in this paper, we propose a color restoration method for an imaging system based on the MSFA image sensor with RGBN filters. To remove the unnecessary NIR component from each RGB color channel, spectral estimation and spectral decomposition are performed based on the spectral characteristics of the MSFA sensor. The proposed color restoration method estimates the spectral intensity in the NIR band and recovers hue and color saturation by decomposing the visible-band and NIR-band components in each RGB color channel. The experimental results show that the proposed method effectively restores natural color and minimizes angular errors. PMID:27213381
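
    The decomposition idea can be sketched as a per-channel subtraction of the NIR contribution; the leakage coefficients below are hypothetical, whereas the paper estimates the actual decomposition from the sensor's measured spectral characteristics:

```python
import numpy as np

def restore_rgb(raw_rgb, nir, leak=(0.3, 0.25, 0.2)):
    """Subtract an assumed NIR leakage from each RGB channel of an image
    captured without an IR cut-off filter. `leak` holds illustrative
    NIR-to-R/G/B contamination coefficients (hypothetical values; the
    paper derives the real decomposition from the sensor's spectral
    characteristics)."""
    k = np.asarray(leak, dtype=float).reshape(1, 1, 3)
    visible = raw_rgb - k * nir[..., None]
    return np.clip(visible, 0.0, 1.0)

# Synthetic check: contaminate a known visible image, then restore it.
rng = np.random.default_rng(1)
visible = rng.uniform(0.0, 0.6, size=(4, 4, 3))
nir = rng.uniform(0.0, 1.0, size=(4, 4))
raw = visible + np.array([0.3, 0.25, 0.2]) * nir[..., None]
recovered = restore_rgb(raw, nir)
```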

  17. A color-corrected strategy for information multiplexed Fourier ptychographic imaging

    NASA Astrophysics Data System (ADS)

    Wang, Mingqun; Zhang, Yuzhen; Chen, Qian; Sun, Jiasong; Fan, Yao; Zuo, Chao

    2017-12-01

    Fourier ptychography (FP) is a novel computational imaging technique that provides both a wide field of view (FoV) and high-resolution (HR) imaging capacity for biomedical imaging. Combined with information multiplexing technology, wavelength-multiplexed (color-multiplexed) FP imaging can be implemented by lighting up the R/G/B LED units simultaneously, and an HR image can then be recovered at each wavelength from the multiplexed dataset, which enhances the efficiency of data acquisition. However, since the same set of intensity measurements is used to recover the HR image at each wavelength, the mean value in each channel converges to the same value. In this paper, a color-correction strategy embedded in the multiplexing FP scheme, termed color-corrected wavelength-multiplexed Fourier ptychography (CWMFP), is demonstrated. Three images captured by turning on the LED array in R/G/B are required as prior knowledge to improve the accuracy of reconstruction in the recovery process. Using the reported technique, the redundancy requirement of information-multiplexed FP is reduced, and the accuracy of reconstruction in each channel is improved with correct color reproduction of the specimen.

  18. Neutral- and Multi-Colored Semitransparent Perovskite Solar Cells.

    PubMed

    Lee, Kyu-Tae; Guo, L Jay; Park, Hui Joon

    2016-04-11

    In this review, we summarize recent works on perovskite solar cells with neutral- and multi-colored semitransparency for building-integrated photovoltaics and tandem solar cells. The perovskite solar cells exploiting microstructured arrays of perovskite "islands" and transparent electrodes-the latter of which include thin metallic films, metal nanowires, carbon nanotubes, graphenes, and transparent conductive oxides for achieving optical transparency-are investigated. Moreover, the perovskite solar cells with distinctive color generation, which are enabled by engineering the band gap of the perovskite light-harvesting semiconductors with chemical management and integrating with photonic nanostructures, including microcavity, are discussed. We conclude by providing future research directions toward further performance improvements of the semitransparent perovskite solar cells.

  19. Enhancement of low light level images using color-plus-mono dual camera.

    PubMed

    Jung, Yong Ju

    2017-05-15

    In digital photography, improving image quality in low-light shooting is an important user need. Unfortunately, conventional smartphone cameras that use a single, small image sensor cannot provide satisfactory quality in low-light conditions. A color-plus-mono dual camera, consisting of two horizontally separated image sensors that simultaneously capture a color and a mono image pair of the same scene, can improve the quality of low-light images. However, incorrect image fusion between the color and mono pair can have negative effects, such as introducing severe visual artifacts into the fused image. This paper proposes a selective image fusion technique that applies adaptive guided-filter-based denoising and selective detail transfer only to those pixels deemed reliable with respect to binocular image fusion. We employ a dissimilarity measure and binocular just-noticeable-difference (BJND) analysis to identify unreliable pixels that are likely to cause visual artifacts during joint color image denoising and detail transfer from the mono image. Using an experimental color-plus-mono camera system, we demonstrate that the BJND-aware denoising and selective detail transfer improve image quality in low-light shooting.
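
    The selective transfer idea can be sketched with a box blur standing in for the guided filter and a given reliability mask (the paper derives the mask from the dissimilarity measure and BJND analysis):

```python
import numpy as np

def box_blur(x, k=5):
    """Simple box blur (a stand-in for the paper's guided filtering)."""
    pad = k // 2
    p = np.pad(x, pad, mode="edge")
    out = np.zeros_like(x, dtype=float)
    for di in range(k):
        for dj in range(k):
            out += p[di:di + x.shape[0], dj:dj + x.shape[1]]
    return out / (k * k)

def selective_detail_transfer(color, mono, reliable):
    """Add the mono image's high-frequency detail to every color channel,
    but only at pixels flagged reliable. A sketch of the selective-fusion
    idea; the reliability mask here is simply given as input."""
    detail = mono - box_blur(mono)
    out = color.astype(float).copy()
    out[reliable] += detail[reliable][:, None]
    return out

# Usage: transfer detail only inside a small reliable region.
rng = np.random.default_rng(5)
color = rng.uniform(size=(6, 6, 3))
mono = rng.uniform(size=(6, 6))
reliable = np.zeros((6, 6), dtype=bool)
reliable[2:4, 2:4] = True
fused = selective_detail_transfer(color, mono, reliable)
```

Pixels outside the mask are untouched, which is the point of the selectivity: unreliable regions keep the denoised color values only.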

  20. Color structured light imaging of skin

    NASA Astrophysics Data System (ADS)

    Yang, Bin; Lesicko, John; Moy, Austin; Reichenberg, Jason; Sacks, Michael; Tunnell, James W.

    2016-05-01

    We illustrate wide-field imaging of skin using a structured light (SL) approach that highlights the contrast from superficial tissue scattering. Setting the spatial frequency of the SL in a regime that limits the penetration depth effectively gates the image for photons that originate from the skin surface. Further, rendering the SL images in a color format provides an intuitive format for viewing skin pathologies. We demonstrate this approach in skin pathologies using a custom-built handheld SL imaging system.
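
    Gating by spatial frequency presupposes demodulating the structured-light images; assuming a standard three-phase scheme (the paper's exact demodulation is not specified here), the AC amplitude is recovered as:

```python
import numpy as np

def ac_amplitude(i1, i2, i3):
    """Three-phase demodulation: recover the AC (spatially modulated)
    amplitude from images captured with the sinusoidal pattern shifted
    by 0, 120 and 240 degrees."""
    return (np.sqrt(2.0) / 3.0) * np.sqrt(
        (i1 - i2) ** 2 + (i2 - i3) ** 2 + (i3 - i1) ** 2)

# Synthetic check: rebuild the amplitude of a known modulation.
x = np.linspace(0.0, 1.0, 64)
dc, amp, f = 0.8, 0.25, 6.0
phases = [0.0, 2.0 * np.pi / 3.0, 4.0 * np.pi / 3.0]
imgs = [dc + amp * np.cos(2.0 * np.pi * f * x + p) for p in phases]
recovered = ac_amplitude(*imgs)
```

The identity behind the formula is that the three squared differences of 120-degree-shifted cosines sum to 9/2 times the squared amplitude, independent of the local phase.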

  1. Computerized simulation of color appearance for anomalous trichromats using the multispectral image.

    PubMed

    Yaguchi, Hirohisa; Luo, Junyan; Kato, Miharu; Mizokami, Yoko

    2018-04-01

    Most color simulators for color deficiencies are based on tristimulus values and are intended to simulate the appearance of an image for dichromats. Statistics show that there are more anomalous trichromats than dichromats. Furthermore, the spectral sensitivities of anomalous cones are different from those of normal cones. Clinically, the types of color defects are characterized through Rayleigh color matching, where the observer matches a spectral yellow to a mixture of spectral red and green. The midpoints of the red/green mixture ratios deviate from those of normal trichromats. This means that any simulation based on the tristimulus values defined by a normal trichromat cannot predict the color appearance of anomalous Rayleigh matches. We propose a computerized simulation of the color appearance for anomalous trichromats using multispectral images. First, we assume that anomalous trichromats possess a protanomalous (green-shifted) or deuteranomalous (red-shifted) pigment instead of a normal (L or M) one. Second, we assume that the luminance is given by L+M, and the red/green and yellow/blue opponent color stimulus values are defined through L-M and (L+M)-S, respectively. Third, equal-energy white will look white for all observers. The spectral sensitivities of the luminance and the two opponent color channels are multiplied by the spectral radiance of each pixel of a multispectral image to give the luminance and opponent color stimulus values of the entire image. In the next stage, the luminance and two opponent color channels are transformed into XYZ tristimulus values and then into sRGB to reproduce, for normal observers, a final image simulating the appearance for anomalous trichromats. The proposed simulation can be used to predict the Rayleigh color matches for anomalous trichromats. We also conducted experiments in which color-deficient observers evaluated the appearance of simulated images, verifying the reliability of the simulation.
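
    The channel definitions in the abstract translate directly into code; a minimal sketch, assuming the per-pixel cone responses have already been computed:

```python
import numpy as np

def opponent_channels(lms):
    """Luminance and opponent color values from per-pixel cone responses,
    following the model in the abstract: luminance = L+M,
    red/green = L-M, yellow/blue = (L+M)-S."""
    L, M, S = lms[..., 0], lms[..., 1], lms[..., 2]
    return np.stack([L + M, L - M, (L + M) - S], axis=-1)

# For an achromatic stimulus with balanced cone responses (L = M and
# S = L + M), both opponent channels are zero.
white = np.array([[[0.5, 0.5, 1.0]]])
lum_opp = opponent_channels(white)
```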

  2. Reconstruction of color images via Haar wavelet based on digital micromirror device

    NASA Astrophysics Data System (ADS)

    Liu, Xingjiong; He, Weiji; Gu, Guohua

    2015-10-01

    A digital micromirror device (DMD) is introduced to form a Haar wavelet basis, which is projected onto the color target image as structured illumination in red, green, and blue light. The light intensity signals reflected from the target image are received synchronously by a bucket detector with no spatial resolution, converted into voltage signals, and then transferred to a PC [1]. Several synchronization steps are added during data acquisition to keep the channels aligned. During data collection, following the wavelet tree structure, the locations of significant coefficients at the finer scale are predicted by comparing the coefficients sampled at the coarsest scale with a threshold. Monochrome grayscale images are obtained under red, green, and blue structured illumination using the inverse Haar wavelet transform, and a color fusion algorithm combines the three monochrome grayscale images into the final color image. An experimental demonstration device was assembled according to this imaging principle. The letter "K" and the X-Rite ColorChecker Passport were used as target images, and the reconstructed color images are of good quality. The Haar wavelet reconstruction considerably reduces the sampling rate and provides color information without compromising the resolution of the final image.
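
    The reconstruction rests on the invertibility of the Haar transform; a minimal single-level 2-D Haar analysis/synthesis pair, written here as a plain array transform rather than the single-pixel measurement process:

```python
import numpy as np

def haar2_forward(img):
    """One level of the 2-D Haar transform: average/difference along
    rows, then along columns, giving the LL, LH, HL, HH subbands."""
    a = (img[:, 0::2] + img[:, 1::2]) / 2.0   # row averages
    d = (img[:, 0::2] - img[:, 1::2]) / 2.0   # row differences
    ll = (a[0::2] + a[1::2]) / 2.0
    lh = (a[0::2] - a[1::2]) / 2.0
    hl = (d[0::2] + d[1::2]) / 2.0
    hh = (d[0::2] - d[1::2]) / 2.0
    return ll, lh, hl, hh

def haar2_inverse(ll, lh, hl, hh):
    """Invert one level of the 2-D Haar transform exactly."""
    h, w = ll.shape
    a = np.empty((2 * h, w)); d = np.empty((2 * h, w))
    a[0::2], a[1::2] = ll + lh, ll - lh
    d[0::2], d[1::2] = hl + hh, hl - hh
    out = np.empty((2 * h, 2 * w))
    out[:, 0::2], out[:, 1::2] = a + d, a - d
    return out

# Round-trip check on a random image.
rng = np.random.default_rng(2)
img = rng.uniform(size=(8, 8))
rec = haar2_inverse(*haar2_forward(img))
```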

  3. Single underwater image enhancement based on color cast removal and visibility restoration

    NASA Astrophysics Data System (ADS)

    Li, Chongyi; Guo, Jichang; Wang, Bo; Cong, Runmin; Zhang, Yan; Wang, Jian

    2016-05-01

    Images taken underwater usually have color cast and serious loss of contrast and visibility, which makes them inconvenient for observation and analysis. To address these problems, an underwater image-enhancement method is proposed. A simple yet effective underwater color cast removal algorithm is first presented based on optimization theory. Then, based on the minimum-information-loss principle and the inherent relationship among the medium transmission maps of the three color channels in an underwater image, an effective visibility restoration algorithm is proposed to recover the visibility, contrast, and natural appearance of degraded underwater images. To evaluate the performance of the proposed method, qualitative comparison, quantitative comparison, and a color accuracy test are conducted. Experimental results demonstrate that the proposed method can effectively remove color cast, improve contrast and visibility, and recover the natural appearance of degraded underwater images, and that it is comparable to and even better than several state-of-the-art methods.

  4. Segmentation and Classification of Burn Color Images

    DTIC Science & Technology

    2001-10-25

    Acha, Begoña; Serrano, Carmen; Roa, Laura

  5. Color constancy in natural scenes explained by global image statistics

    PubMed Central

    Foster, David H.; Amano, Kinjiro; Nascimento, Sérgio M. C.

    2007-01-01

    To what extent do observers' judgments of surface color with natural scenes depend on global image statistics? To address this question, a psychophysical experiment was performed in which images of natural scenes under two successive daylights were presented on a computer-controlled high-resolution color monitor. Observers reported whether there was a change in reflectance of a test surface in the scene. The scenes were obtained with a hyperspectral imaging system and included variously trees, shrubs, grasses, ferns, flowers, rocks, and buildings. Discrimination performance, quantified on a scale of 0 to 1 with a color-constancy index, varied from 0.69 to 0.97 over 21 scenes and two illuminant changes, from a correlated color temperature of 25,000 K to 6700 K and from 4000 K to 6700 K. The best account of these effects was provided by receptor-based rather than colorimetric properties of the images. Thus, in a linear regression, 43% of the variance in constancy index was explained by the log of the mean relative deviation in spatial cone-excitation ratios evaluated globally across the two images of a scene. A further 20% was explained by including the mean chroma of the first image and its difference from that of the second image and a further 7% by the mean difference in hue. Together, all four global color properties accounted for 70% of the variance and provided a good fit to the effects of scene and of illuminant change on color constancy, and, additionally, of changing test-surface position. By contrast, a spatial-frequency analysis of the images showed that the gradient of the luminance amplitude spectrum accounted for only 5% of the variance. PMID:16961965
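
    The key statistic, the deviation in spatial cone-excitation ratios, can be sketched as follows (ratios between horizontal neighbors stand in for the study's surface pairs):

```python
import numpy as np

def ratio_deviation(img1, img2):
    """Mean relative deviation in spatial cone-excitation ratios between
    two images of a scene under different illuminants. `img1`/`img2` hold
    per-pixel cone excitations; ratios are taken between horizontal
    neighbors as an illustrative choice of surface pairs."""
    r1 = img1[:, 1:] / img1[:, :-1]
    r2 = img2[:, 1:] / img2[:, :-1]
    return float(np.mean(np.abs(r1 - r2) / (0.5 * (r1 + r2))))

# Under an ideal (von Kries) illuminant change each cone class is scaled
# by a constant, so spatial ratios, and hence the deviation, are unchanged.
rng = np.random.default_rng(3)
scene = rng.uniform(0.1, 1.0, size=(6, 6, 3))
scaled = scene * np.array([1.4, 0.9, 0.7])
dev = ratio_deviation(scene, scaled)
```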

  6. Research on multi-source image fusion technology in haze environment

    NASA Astrophysics Data System (ADS)

    Ma, GuoDong; Piao, Yan; Li, Bing

    2017-11-01

    In a haze environment, a visible-light image from a single sensor expresses the shape, color, and texture details of a target well, but the haze lowers sharpness and parts of the target subject are lost. An infrared image from a single sensor, which captures thermal radiation and has strong penetration ability, expresses the target subject clearly but loses detail information. A multi-source image fusion method is therefore proposed to exploit their respective advantages. First, an improved Dark Channel Prior algorithm is used to preprocess the hazy visible image. Second, an improved SURF algorithm registers the infrared image with the dehazed visible image. Finally, a weighted fusion algorithm based on complementary information fuses the images. Experiments show that the proposed method improves the clarity of the visible target and highlights occluded infrared targets for target recognition.
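
    The dehazing preprocessing builds on the standard dark channel prior; a minimal sketch of the prior itself (without the paper's improvements):

```python
import numpy as np

def dark_channel(img, patch=3):
    """Dark channel of an RGB image: per-pixel minimum over the color
    channels followed by a local minimum filter over a patch x patch
    window."""
    m = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(m, pad, mode="edge")
    out = np.empty_like(m)
    for i in range(m.shape[0]):
        for j in range(m.shape[1]):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

# A saturated haze-free color has a near-zero dark channel; adding a
# uniform haze (airlight) lifts it, which is what the prior exploits.
red = np.zeros((5, 5, 3)); red[..., 0] = 1.0
hazy = 0.6 * red + 0.4            # simple scattering model, airlight = 1
dc_clear = dark_channel(red)
dc_hazy = dark_channel(hazy)
```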

  7. Adaptive Wiener filter super-resolution of color filter array images.

    PubMed

    Karch, Barry K; Hardie, Russell C

    2013-08-12

    Digital color cameras using a single detector array with a Bayer color filter array (CFA) require interpolation or demosaicing to estimate missing color information and provide full-color images. However, demosaicing does not specifically address fundamental undersampling and aliasing inherent in typical camera designs. Fast non-uniform interpolation based super-resolution (SR) is an attractive approach to reduce or eliminate aliasing and its relatively low computational load is amenable to real-time applications. The adaptive Wiener filter (AWF) SR algorithm was initially developed for grayscale imaging and has not previously been applied to color SR demosaicing. Here, we develop a novel fast SR method for CFA cameras that is based on the AWF SR algorithm and uses global channel-to-channel statistical models. We apply this new method as a stand-alone algorithm and also as an initialization image for a variational SR algorithm. This paper presents the theoretical development of the color AWF SR approach and applies it in performance comparisons to other SR techniques for both simulated and real data.

  8. Development of a novel 2D color map for interactive segmentation of histological images.

    PubMed

    Chaudry, Qaiser; Sharma, Yachna; Raza, Syed H; Wang, May D

    2012-05-01

    We present a color segmentation approach based on a two-dimensional color map derived from the input image. Pathologists stain tissue biopsies with various colored dyes to see the expression of biomarkers. In these images, because of color variation due to inconsistencies in experimental procedures and lighting conditions, the segmentation used to analyze biological features is usually ad-hoc. Many algorithms like K-means use a single metric to segment the image into different color classes and rarely provide users with powerful color control. Our 2D color map interactive segmentation technique based on human color perception information and the color distribution of the input image, enables user control without noticeable delay. Our methodology works for different staining types and different types of cancer tissue images. Our proposed method's results show good accuracy with low response and computational time making it a feasible method for user interactive applications involving segmentation of histological images.

  9. Adaptive pseudo-color enhancement method of weld radiographic images based on HSI color space and self-transformation of pixels.

    PubMed

    Jiang, Hongquan; Zhao, Yalin; Gao, Jianmin; Gao, Zhiyong

    2017-06-01

    The radiographic testing (RT) images produced at a steam turbine manufacturer have low gray level, low contrast, and blur, leading to substandard image quality that is not conducive to defect detection and evaluation by the human eye. This study proposes an adaptive pseudo-color enhancement method for weld radiographic images, based on the hue, saturation, and intensity (HSI) color space and a self-transformation of pixels, to solve these problems. First, the self-transformation is applied to each pixel value of the original RT image, and the transformed value is assigned to the HSI components in the HSI color space. The average intensity of the enhanced image is then adaptively adjusted to 0.5 according to the intensity of the original image; the hue range and interval can also be adjusted to personal preference. Finally, the adjusted HSI components are transformed into the red, green, and blue color space for display. Numerous weld radiographic images from a steam turbine manufacturer were used to validate the proposed method. The experimental results show that the proposed pseudo-color enhancement improves image definition and makes target and background areas distinct in weld radiographic images. The enhanced images conform to the visual properties of the human eye and are more conducive to reliable defect recognition and evaluation.
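
    A simplified sketch of the pseudo-color idea, using HSV via the standard library as a stand-in for HSI and omitting the paper's pixel self-transformation and adaptive intensity adjustment:

```python
import colorsys
import numpy as np

def pseudo_color(gray):
    """Map normalized gray values to colors: the gray level drives hue
    (blue for dark through red for bright) and lifts the value channel.
    A simplified illustration only; the paper works in HSI and applies a
    pixel self-transformation first."""
    out = np.empty(gray.shape + (3,))
    for idx in np.ndindex(*gray.shape):
        g = float(gray[idx])
        hue = (1.0 - g) * 2.0 / 3.0          # 2/3 (blue) -> 0 (red)
        out[idx] = colorsys.hsv_to_rgb(hue, 1.0, 0.25 + 0.75 * g)
    return out

# A gray ramp becomes a blue-to-red color ramp.
ramp = np.linspace(0.0, 1.0, 5)
rgb = pseudo_color(ramp)
```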

  11. Color accuracy and reproducibility in whole slide imaging scanners

    PubMed Central

    Shrestha, Prarthana; Hulsken, Bas

    2014-01-01

    Abstract We propose a workflow for color reproduction in whole slide imaging (WSI) scanners, such that the colors in the scanned images match to the actual slide color and the inter-scanner variation is minimum. We describe a new method of preparation and verification of the color phantom slide, consisting of a standard IT8-target transmissive film, which is used in color calibrating and profiling the WSI scanner. We explore several International Color Consortium (ICC) compliant techniques in color calibration/profiling and rendering intents for translating the scanner specific colors to the standard display (sRGB) color space. Based on the quality of the color reproduction in histopathology slides, we propose the matrix-based calibration/profiling and absolute colorimetric rendering approach. The main advantage of the proposed workflow is that it is compliant to the ICC standard, applicable to color management systems in different platforms, and involves no external color measurement devices. We quantify color difference using the CIE-DeltaE2000 metric, where DeltaE values below 1 are considered imperceptible. Our evaluation on 14 phantom slides, manufactured according to the proposed method, shows an average inter-slide color difference below 1 DeltaE. The proposed workflow is implemented and evaluated in 35 WSI scanners developed at Philips, called the Ultra Fast Scanners (UFS). The color accuracy, measured as DeltaE between the scanner reproduced colors and the reference colorimetric values of the phantom patches, is improved on average to 3.5 DeltaE in calibrated scanners from 10 DeltaE in uncalibrated scanners. The average inter-scanner color difference is found to be 1.2 DeltaE. The improvement in color performance upon using the proposed method is apparent with the visual color quality of the tissue scans. PMID:26158041

  12. Three-dimensional color Doppler imaging of the carotid artery

    NASA Astrophysics Data System (ADS)

    Picot, Paul A.; Rickey, Daniel W.; Mitchell, Ross; Rankin, Richard N.; Fenster, Aaron

    1991-05-01

    Stroke is the third leading cause of death in the United States. It is caused by ischemic injury to the brain, usually resulting from emboli from atherosclerotic plaques. The carotid bifurcation in humans is prone to atherosclerotic disease and is a site where emboli may originate. Currently, carotid stenoses are evaluated by non-invasive duplex Doppler ultrasound, with preoperative verification by intra-arterial angiography. We have developed a system that uses a color Doppler ultrasound imaging system to acquire in-vivo 3-D color Doppler images of the human carotid artery, with the aim of increasing the diagnostic accuracy of ultrasound and decreasing the use of angiography for verification. A clinical ATL Ultramark 9 color Doppler ultrasound system was modified by mounting the hand-held ultrasound scan head on a motor-driven translation stage. The stage allows planar ultrasound images to be acquired over 45 mm along the neck between the clavicle and the mandible. A 3-D image is acquired by digitizing, in synchrony with the cardiac cycle, successive color ultrasound video images as the scan head is stepped along the neck. A complete volume set of 64 frames, comprising some 15 megabytes of data, requires approximately 2 minutes to acquire. The volume image is reformatted and displayed on a Sun 4/360 workstation equipped with a TAAC-1 graphics accelerator. The 3-D image may be manipulated in real time to yield the best view of blood flow in the bifurcation.

  13. Design and fabrication of a multi-focusing artificial compound eyes with negative meniscus substrate

    NASA Astrophysics Data System (ADS)

    Luo, Jiasai; Guo, Yongcai; Wang, Xin; Fan, Fenglian

    2017-04-01

    Miniaturized artificial compound eyes with a large field of view (FOV) have potential applications in micro-opto-electro-mechanical systems (MOEMS). A new non-uniform microlens array (MLA) on a negative meniscus substrate, fabricated by the melting-photoresist method, is proposed in this paper. The multi-focusing MLA effectively reduces the defocus that a uniform array on a spherical substrate would cause. Moreover, like the ommatidia in natural compound eyes, each microlens of the multi-focusing MLA is arranged on one of eleven concentric circles. To match the multi-focusing MLA and avoid total reflection, the negative meniscus substrate was fabricated with a homebuilt mold comprising a micro-hole array and an attached polydimethylsiloxane coelomic compartment, which offers an excellent injection environment free of bubbles and impurities. Because the MLA is implemented directly in 3D, a wide range of materials can be used without substrate reshaping. The molding material, the ultraviolet-curing adhesive NOA81, cures within tens of seconds under ultraviolet light, which reduces labor and protects the stereolithography apparatus. The experimental results show that the new MLA has better imaging performance, higher light-usage efficiency, and a larger FOV thanks to the negative meniscus and multi-focusing design, while the homebuilt mold yields more accurate geometric parameters and a shorter processing cycle. Together with appropriate hardware, this MLA has diverse potential applications in medical imaging, military systems, and machine vision.

  14. Bringing color to emotion: The influence of color on attentional bias to briefly presented emotional images.

    PubMed

    Bekhtereva, Valeria; Müller, Matthias M

    2017-10-01

    Is color a critical feature in emotional content extraction and involuntary attentional orienting toward affective stimuli? Here we used briefly presented emotional distractors to investigate the extent to which color information can influence the time course of attentional bias in early visual cortex. While participants performed a demanding visual foreground task, complex unpleasant and neutral background images were displayed in color or grayscale format for a short period of 133 ms and were immediately masked. Such a short presentation poses a challenge for visual processing. In the visual detection task, participants attended to flickering squares that elicited the steady-state visual evoked potential (SSVEP), allowing us to analyze the temporal dynamics of the competition for processing resources in early visual cortex. Concurrently we measured the visual event-related potentials (ERPs) evoked by the unpleasant and neutral background scenes. The results showed (a) that the distraction effect was greater with color than with grayscale images and (b) that it lasted longer with colored unpleasant distractor images. Furthermore, classical and mass-univariate ERP analyses indicated that, when presented in color, emotional scenes elicited more pronounced early negativities (N1-EPN) relative to neutral scenes, than when the scenes were presented in grayscale. Consistent with neural data, unpleasant scenes were rated as being more emotionally negative and received slightly higher arousal values when they were shown in color than when they were presented in grayscale. Taken together, these findings provide evidence for the modulatory role of picture color on a cascade of coordinated perceptual processes: by facilitating the higher-level extraction of emotional content, color influences the duration of the attentional bias to briefly presented affective scenes in lower-tier visual areas.

  15. 3D shape recovery from image focus using gray level co-occurrence matrix

    NASA Astrophysics Data System (ADS)

    Mahmood, Fahad; Munir, Umair; Mehmood, Fahad; Iqbal, Javaid

    2018-04-01

    Recovering a precise and accurate 3-D shape of a target object via a robust 3-D shape recovery algorithm is an ultimate objective of the computer vision community. The focus measure plays an important role in this architecture: it converts the color values of each pixel of the acquired 2-D image dataset into corresponding focus values. After convolving the focus measure filter with the input 2-D image dataset, a 3-D shape recovery approach is applied to recover the depth map. In this paper, we propose the gray-level co-occurrence matrix (GLCM), along with its statistical features, for computing the focus information of the image dataset. The GLCM quantifies the texture present in the image using statistical features derived from the joint probability distribution of gray-level pairs in the input image. Finally, we quantify the focus value of the input image using a Gaussian mixture model. Owing to its low computational complexity, sharp focus-measure curve, robustness to random noise, and accuracy, the method is a superior alternative to many recently proposed 3-D shape recovery approaches. The algorithm is investigated in depth on real image sequences and a synthetic image dataset, and its efficiency is compared with state-of-the-art 3-D shape recovery approaches. Finally, by means of two global statistical measures, root mean square error and correlation, we show that this approach, in spite of its simplicity, generates accurate results.
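
    The core focus measure can be sketched as follows: quantize, accumulate a horizontal-neighbor GLCM, and take its contrast statistic (the paper combines several GLCM features and a Gaussian-mixture step on top of this):

```python
import numpy as np

def glcm_contrast(img, levels=8):
    """Focus measure sketch: quantize the image to `levels` gray levels,
    build a horizontal-neighbor gray-level co-occurrence matrix, and
    return its contrast statistic. Sharp, textured regions pair
    dissimilar neighbors, raising the contrast; defocus blur lowers it."""
    q = np.clip((np.asarray(img, dtype=float) * levels).astype(int),
                0, levels - 1)
    glcm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[a, b] += 1
    glcm /= glcm.sum()
    i, j = np.indices((levels, levels))
    return float(((i - j) ** 2 * glcm).sum())

# A sharp checkerboard scores higher than a defocused (uniform) patch.
sharp = np.indices((8, 8)).sum(axis=0) % 2 * 1.0
blurred = np.full((8, 8), 0.5)
```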

  16. Implementation of total focusing method for phased array ultrasonic imaging on FPGA

    NASA Astrophysics Data System (ADS)

    Guo, JianQiang; Li, Xi; Gao, Xiaorong; Wang, Zeyong; Zhao, Quanke

    2015-02-01

    This paper describes a multi-FPGA imaging system dedicated to real-time imaging using the Total Focusing Method (TFM) and Full Matrix Capture (FMC). The system was described entirely in the Verilog HDL language and implemented on an Altera Stratix IV GX FPGA development board. The algorithm proceeds as follows: establish a coordinate system for the image and divide it into a grid; calculate the complete acoustic path between each transmitting and receiving array element and each grid point, and transform it into an index value; use that index to read sound pressure values from ROM and superimpose them to obtain the pixel value of one focus point; and repeat for all focus points to form the final image. The imaging results show that this algorithm yields a high SNR for defect imaging, and the parallel processing capability of the FPGA provides high-speed performance, so the system offers a complete, well-performing imaging interface.
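
    The steps above amount to a delay-and-sum over the full matrix of A-scans. The following software sketch (not the Verilog design; wave speed, sample rate, and array geometry are made-up parameters) shows the same computation: for each grid point, the round-trip delay of every transmit/receive pair indexes into its A-scan, and the samples are superimposed.

```python
import numpy as np

def tfm_image(fmc, elem_x, grid_x, grid_z, c, fs):
    """Total Focusing Method: delay-and-sum over the full matrix capture.

    fmc[tx, rx, t] -- A-scan for every transmit/receive element pair
    elem_x         -- x positions of the array elements (at z = 0)
    c, fs          -- wave speed (m/s) and sampling rate (Hz)
    """
    n = len(elem_x)
    img = np.zeros((len(grid_z), len(grid_x)))
    for iz, z in enumerate(grid_z):
        for ix, x in enumerate(grid_x):
            # Distance from each element to the focus point (x, z).
            d = np.hypot(elem_x - x, z)
            acc = 0.0
            for tx in range(n):
                for rx in range(n):
                    # Round-trip time -> sample index into the A-scan.
                    t = int(round((d[tx] + d[rx]) / c * fs))
                    if t < fmc.shape[2]:
                        acc += fmc[tx, rx, t]
            img[iz, ix] = abs(acc)
    return img
```

    With a synthetic point scatterer, all transmit/receive pairs align at the scatterer's grid cell, so the image peaks there; the FPGA design pipelines exactly this index-and-accumulate loop.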

  17. Physics-based approach to color image enhancement in poor visibility conditions.

    PubMed

    Tan, K K; Oakley, J P

    2001-10-01

    Degradation of images by the atmosphere is a familiar problem. For example, when terrain is imaged from a forward-looking airborne camera, atmospheric degradation causes a loss of both contrast and color information. Enhancement of such images is a difficult task because of the complexity of restoring both the luminance and the chrominance while maintaining good color fidelity. One particular problem is that the level of contrast loss depends strongly on wavelength. A novel method is presented for the enhancement of color images. This method is based on the underlying physics of the degradation process, and the parameters required for enhancement are estimated from the image itself.

  18. Image enhancement and color constancy for a vehicle-mounted change detection system

    NASA Astrophysics Data System (ADS)

    Tektonidis, Marco; Monnin, David

    2016-10-01

    Vehicle-mounted change detection systems help improve situational awareness on outdoor itineraries of interest. Since the visibility of acquired images is often affected by illumination effects (e.g., shadows), it is important to enhance local contrast. To analyze and compare color images depicting the same scene at different time points, the color and lightness inconsistencies caused by the differing illumination conditions must be compensated. We have developed an approach for image enhancement and color constancy based on the center/surround Retinex model and the Gray World hypothesis. The combination of the two methods using a color processing function improves color rendition compared to either method alone. The use of stacked integral images (SII) enables efficient local image processing. Our combined Retinex/Gray World approach has been successfully applied to image sequences acquired on outdoor itineraries at different time points, and a comparison with previous Retinex-based approaches has been carried out.
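
    A minimal sketch of the combination, under stated assumptions: a single-scale center/surround Retinex per channel (using an integral-image box mean as a cheap stand-in for the surround, in the spirit of SII) followed by Gray World channel gains. This is an illustration of the two ingredients, not the paper's tuned pipeline.

```python
import numpy as np

def box_mean(c, pad):
    """Windowed mean via an integral image (the SII idea in miniature)."""
    k = 2 * pad + 1
    h, w = c.shape
    p = np.pad(c, pad, mode='edge')
    ii = np.pad(p, ((1, 0), (1, 0))).cumsum(0).cumsum(1)
    s = ii[k:k + h, k:k + w] - ii[:h, k:k + w] - ii[k:k + h, :w] + ii[:h, :w]
    return s / (k * k)

def retinex_gray_world(img, pad=7):
    """Single-scale center/surround Retinex per channel, then Gray World gains."""
    eps = 1e-6
    # Retinex: log(center) - log(surround) compresses slowly varying illumination.
    ret = np.stack([np.log(img[..., ch] + eps) - np.log(box_mean(img[..., ch], pad) + eps)
                    for ch in range(3)], axis=-1)
    ret -= ret.min()
    ret /= ret.max() + eps
    # Gray World: scale each channel so all channel means agree.
    means = ret.reshape(-1, 3).mean(axis=0)
    gains = means.mean() / (means + eps)
    return np.clip(ret * gains, 0.0, 1.0)
```

    Because the Retinex term is a log ratio, a global per-channel cast largely cancels, and the Gray World gains remove what remains.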

  19. Edge enhancement of color images using a digital micromirror device.

    PubMed

    Di Martino, J Matías; Flores, Jorge L; Ayubi, Gastón A; Alonso, Julia R; Fernández, Ariel; Ferrari, José A

    2012-06-01

    A method for orientation-selective enhancement of edges in color images is proposed. The method exploits the capacity of digital micromirror devices to generate a positive and a negative color replica of the input image. When the two images are slightly displaced and imaged together, one obtains an image with enhanced edges. The proposed technique does not require a coherent light source or precise alignment, and it could potentially be useful for processing large image sequences in real time. Validation experiments are presented.

  20. Clinically compatible flexible wide-field multi-color fluorescence endoscopy with a porcine colon model

    PubMed Central

    Oh, Gyugnseok; Park, Youngrong; Yoo, Su Woong; Hwang, Soonjoo; Chin-Yu, Alexey V. Dan; Ryu, Yeon-Mi; Kim, Sang-Yeob; Do, Eun-Ju; Kim, Ki Hean; Kim, Sungjee; Myung, Seung-Jae; Chung, Euiheon

    2017-01-01

    Early detection of structural or molecular changes in dysplastic epithelial tissues is crucial for cancer screening and surveillance. Multi-targeting molecular endoscopic fluorescence imaging may improve noninvasive detection of precancerous lesions in the colon. Here, we report the first clinically compatible, wide-field-of-view, multi-color fluorescence endoscopy with a leached fiber bundle scope using a porcine model. A porcine colon model that resembles the human colon is used for the detection of surrogate tumors composed of multiple biocompatible fluorophores (FITC, ICG, and heavy metal-free quantum dots (hfQDs)). With an ex vivo porcine colon tumor model, molecular imaging with hfQDs conjugated with MMP14 antibody was achieved by spraying molecular probes on a mucosa layer that contains xenograft tumors. With an in vivo porcine colon embedded with surrogate tumors, target-to-background ratios of 3.36 ± 0.43, 2.70 ± 0.72, and 2.10 ± 0.13 were achieved for FITC, ICG, and hfQD probes, respectively. This promising endoscopic technology with molecular contrast shows the capacity to reveal hidden tumors and guide treatment strategy decisions. PMID:28270983

  1. High resolution reversible color images on photonic crystal substrates.

    PubMed

    Kang, Pilgyu; Ogunbo, Samuel O; Erickson, David

    2011-08-16

    When light is incident on a crystalline structure with appropriate periodicity, some colors will be preferentially reflected (Joannopoulos, J. D.; Meade, R. D.; Winn, J. N. Photonic crystals: molding the flow of light; Princeton University Press: Princeton, NJ, 1995; p ix, 137 pp). These photonic crystals and the structural color they generate represent an interesting method for creating reflective displays and drawing devices, since they can achieve a continuous color response and do not require back lighting (Joannopoulos, J. D.; Villeneuve, P. R.; Fan, S. H. Photonic crystals: Putting a new twist on light. Nature 1997, 386, 143-149; Graham-Rowe, D. Tunable structural colour. Nat. Photonics 2009, 3, 551-553.; Arsenault, A. C.; Puzzo, D. P.; Manners, I.; Ozin, G. A. Photonic-crystal full-colour displays. Nat. Photonics 2007, 1, 468-472; Walish, J. J.; Kang, Y.; Mickiewicz, R. A.; Thomas, E. L. Bioinspired Electrochemically Tunable Block Copolymer Full Color Pixels. Adv. Mater.2009, 21, 3078). Here we demonstrate a technique for creating erasable, high-resolution, color images using otherwise transparent inks on self-assembled photonic crystal substrates (Fudouzi, H.; Xia, Y. N. Colloidal crystals with tunable colors and their use as photonic papers. Langmuir 2003, 19, 9653-9660). Using inkjet printing, we show the ability to infuse fine droplets of silicone oils into the crystal, locally swelling it and changing the reflected color (Sirringhaus, H.; Kawase, T.; Friend, R. H.; Shimoda, T.; Inbasekaran, M.; Wu, W.; Woo, E. P. High-resolution inkjet printing of all-polymer transistor circuits. Science 2000, 290, 2123-2126). Multicolor images with resolutions as high as 200 μm are obtained from oils of different molecular weights with the lighter oils being able to penetrate deeper, yielding larger red shifts. Erasing of images is done simply by adding a low vapor pressure oil which dissolves the image, returning the substrate to its original state.

  2. A natural-color mapping for single-band night-time image based on FPGA

    NASA Astrophysics Data System (ADS)

    Wang, Yilun; Qian, Yunsheng

    2018-01-01

    An FPGA-based natural-color mapping method for single-band night-time images transfers the color of a reference image to the night-time image, producing results consistent with human visual habits and helping observers identify targets. This paper describes the processing of the natural-color mapping algorithm on the FPGA. First, the image is transformed by histogram equalization, and the intensity and standard-deviation features of the reference image are stored in SRAM. Then, the intensity and standard-deviation features of the real-time digital images are calculated by the FPGA. Finally, the FPGA completes the color mapping by matching pixels between the images using the features of the luminance channel.
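
    A software sketch of this kind of luminance-statistics color transfer (the paper's version runs on an FPGA with features held in SRAM; the lookup-table construction here is my own simplification): the reference supplies a luminance-to-color table, and the input's gray statistics (mean/std) are matched to the reference luminance before the lookup.

```python
import numpy as np

def natural_color_mapping(gray, ref_rgb, levels=256):
    """Colorize a single-band image from a color reference image."""
    ref_lum = ref_rgb.mean(axis=-1)
    # Build a luminance-level -> mean reference color lookup table.
    bins = np.clip((ref_lum * (levels - 1)).astype(int), 0, levels - 1).ravel()
    table = np.zeros((levels, 3))
    counts = np.zeros(levels)
    np.add.at(table, bins, ref_rgb.reshape(-1, 3))
    np.add.at(counts, bins, 1)
    filled = counts > 0
    table[filled] /= counts[filled, None]
    # Fill empty luminance bins from the nearest populated one.
    pop = np.where(filled)[0]
    for lvl in np.where(~filled)[0]:
        table[lvl] = table[pop[np.argmin(np.abs(pop - lvl))]]
    # Match the input's luminance statistics (mean/std) to the reference.
    g = (gray - gray.mean()) / (gray.std() + 1e-6) * ref_lum.std() + ref_lum.mean()
    gi = np.clip((g * (levels - 1)).astype(int), 0, levels - 1)
    return table[gi]
```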

  3. MREG V1.1 : a multi-scale image registration algorithm for SAR applications.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eichel, Paul H.

    2013-08-01

    MREG V1.1 is the sixth generation SAR image registration algorithm developed by the Signal Processing & Technology Department for Synthetic Aperture Radar applications. Like its predecessor algorithm REGI, it employs a powerful iterative multi-scale paradigm to achieve the competing goals of sub-pixel registration accuracy and the ability to handle large initial offsets. Since it is not model based, it allows for high fidelity tracking of spatially varying terrain-induced misregistration. Since it does not rely on image domain phase, it is equally adept at coherent and noncoherent image registration. This document provides a brief history of the registration processors developed by Dept. 5962 leading up to MREG V1.1, a full description of the signal processing steps involved in the algorithm, and a user's manual with application specific recommendations for CCD, TwoColor MultiView, and SAR stereoscopy.

  4. Reevaluation of JPEG image compression to digitalized gastrointestinal endoscopic color images: a pilot study

    NASA Astrophysics Data System (ADS)

    Kim, Christopher Y.

    1999-05-01

    Endoscopic images play an important role in describing many gastrointestinal (GI) disorders. The field of radiology has been on the leading edge of creating, archiving, and transmitting digital images. With the advent of digital videoendoscopy, endoscopists now have the ability to generate images for storage and transmission. X-rays can be compressed 30-40X without appreciable decline in quality. We reported results of a pilot study using JPEG compression of 24-bit color endoscopic images. In that study, the results indicated that adequate compression ratios vary according to the lesion and that images could be compressed to between 31- and 99-fold smaller than the original size without an appreciable decline in quality. The purpose of this study was to expand upon the methodology of the previous study with an eye toward application on the WWW, a medium that would expand both the clinical and educational uses of color medical images. The results indicate that endoscopists can tolerate very significant compression of endoscopic images without loss of clinical image quality. This finding suggests that even 1 MB color images can be compressed to well under 30 KB, which is considered a maximal tolerable image size for downloading on the WWW.

  5. Detecting spatial defects in colored patterns using self-oscillating gels

    NASA Astrophysics Data System (ADS)

    Fang, Yan; Yashin, Victor V.; Dickerson, Samuel J.; Balazs, Anna C.

    2018-06-01

    With the growing demand for wearable computers, there is a need for material systems that can perform computational tasks without relying on external electrical power. Using theory and simulation, we design a material system that "computes" by integrating the inherent behavior of self-oscillating gels undergoing the Belousov-Zhabotinsky (BZ) reaction and piezoelectric (PZ) plates. These "BZ-PZ" units are connected electrically to form a coupled oscillator network, which displays specific modes of synchronization. We exploit this attribute in employing multiple BZ-PZ networks to perform pattern matching on complex multi-dimensional data, such as colored images. By decomposing a colored image into sets of binary vectors, we use each BZ-PZ network, or "channel," to store distinct information about the color and the shape of the image and perform the pattern matching operation. Our simulation results indicate that the multi-channel BZ-PZ device can detect subtle differences between the input and stored patterns, such as the color variation of one pixel or a small change in the shape of an object. To demonstrate a practical application, we utilize our system to process a colored Quick Response code and show its potential in cryptography and steganography.

  6. Real-time color image processing for forensic fiber investigations

    NASA Astrophysics Data System (ADS)

    Paulsson, Nils

    1995-09-01

    This paper describes a system for automatic fiber debris detection based on color identification. The system offers fast analysis and high selectivity, a necessity when analyzing forensic fiber samples: an ordinary investigation separates the material into well above 100,000 video images to analyze. It is built from standard components: a CCD camera, a motorized sample table, and an IBM-compatible PC/AT with add-on boards for video frame digitization and stepping motor control. With the aid of the HSI color system (hue-saturation-intensity) and software optimization, the instrument can operate at full video rate (25 images/s). High selectivity is achieved by separating the analysis into several steps: the first step is fast, direct color identification of objects in the analyzed video images; the second, more complex and time-consuming step analyzes the detected objects to identify single fiber fragments for subsequent analysis with more selective techniques.
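
    The fast first step can be sketched as an RGB-to-HSI conversion followed by hue/saturation gating; the thresholds below are illustrative, not the paper's calibrated values.

```python
import numpy as np

def rgb_to_hsi(rgb):
    """RGB (floats in [0, 1]) to HSI; hue in degrees [0, 360)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    i = (r + g + b) / 3.0
    mn = np.minimum(np.minimum(r, g), b)
    s = 1.0 - mn / (i + 1e-12)
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-12
    theta = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
    h = np.where(b <= g, theta, 360.0 - theta)
    return h, s, i

def detect_color(rgb, hue_lo, hue_hi, s_min=0.2):
    """Boolean mask of pixels whose hue falls in [hue_lo, hue_hi] and which
    are saturated enough to be a colored fiber rather than gray background."""
    h, s, _ = rgb_to_hsi(rgb)
    return (h >= hue_lo) & (h <= hue_hi) & (s >= s_min)
```

    Working in HSI makes the gate a pair of scalar comparisons per pixel, which is what allows full video rate on modest hardware.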

  7. Semantic focusing allows fully automated single-layer slide scanning of cervical cytology slides.

    PubMed

    Lahrmann, Bernd; Valous, Nektarios A; Eisenmann, Urs; Wentzensen, Nicolas; Grabe, Niels

    2013-01-01

    Liquid-based cytology (LBC) in conjunction with Whole-Slide Imaging (WSI) enables the objective, sensitive, and quantitative evaluation of biomarkers in cytology. However, the complex three-dimensional distribution of cells on LBC slides requires manual focusing, long scanning times, and multi-layer scanning. Here, we present a solution that overcomes these limitations in two steps: first, we ensure that focus points are set only on cells; second, we check the total slide focus quality. From a first analysis we found that superficial dust can be separated from the cell layer (the thin layer of cells on the glass slide) itself. We then analyzed 2,295 individual focus points from 51 LBC slides stained for p16 and Ki67. Using the number of edges in a focus point image, specific color values, and size-inclusion filters, focus points detecting cells could be distinguished from focus points on artifacts (accuracy 98.6%). Sharpness, as the total focus quality of a virtual LBC slide, is computed from 5 sharpness features. We trained a multi-parameter SVM classifier on 1,600 images. On an independent validation set of 3,232 cell images we achieved an accuracy of 94.8% for classifying images as focused. Our results show that single-layer scanning of LBC slides is possible and how it can be achieved. We assembled focus point analysis and sharpness classification into a fully automatic, iterative workflow, free of user intervention, which performs repetitive slide scanning as necessary. On 400 LBC slides we achieved a scanning time of 13.9±10.1 min with 29.1±15.5 focus points. In summary, the integration of semantic focus information into whole-slide imaging allows automatic high-quality imaging of LBC slides and subsequent biomarker analysis.

  8. Multi-dimensional effects of color on the world wide web

    NASA Astrophysics Data System (ADS)

    Morton, Jill

    2002-06-01

    Color is the most powerful building material of visual imagery on the World Wide Web. It must function successfully as it has done historically in traditional two-dimensional media, as well as address new challenges presented by this electronic medium. The psychological, physiological, technical and aesthetic effects of color have been redefined by the unique requirements of the electronic transmission of text and images on the Web. Color simultaneously addresses each of these dimensions in this electronic medium.

  9. Calibration View of Earth and the Moon by Mars Color Imager

    NASA Technical Reports Server (NTRS)

    2005-01-01

    Three days after the Mars Reconnaissance Orbiter's Aug. 12, 2005, launch, the spacecraft was pointed toward Earth and the Mars Color Imager camera was powered up to acquire a suite of images of Earth and the Moon. When it gets to Mars, the Mars Color Imager's main objective will be to obtain daily global color and ultraviolet images of the planet to observe martian meteorology by documenting the occurrence of dust storms, clouds, and ozone. This camera will also observe how the martian surface changes over time, including changes in frost patterns and surface brightness caused by dust storms and dust devils.

    The purpose of acquiring an image of Earth and the Moon just three days after launch was to help the Mars Color Imager science team obtain a measure, in space, of the instrument's sensitivity, as well as to check that no contamination occurred on the camera during launch. Prior to launch, the team determined that, three days out from Earth, the planet would only be about 4.77 pixels across, and the Moon would be less than one pixel in size, as seen from the Mars Color Imager's wide-angle perspective. If the team waited any longer than three days to test the camera's performance in space, Earth would be too small to obtain meaningful results.

    The Earth and Moon images were acquired by turning Mars Reconnaissance Orbiter toward Earth, then slewing the spacecraft so that the Earth and Moon would pass before each of the five color and two ultraviolet filters of the Mars Color Imager. The distance to the Moon was about 1,440,000 kilometers (about 895,000 miles); the range to Earth was about 1,170,000 kilometers (about 727,000 miles).

    This view combines a sequence of frames showing the passage of Earth and the Moon across the field of view of a single color band of the Mars Color Imager. As the spacecraft slewed to view the two objects, they passed through the camera's field of view. Earth has been saturated white in this image so that both Earth

  10. Multi-stage classification method oriented to aerial image based on low-rank recovery and multi-feature fusion sparse representation.

    PubMed

    Ma, Xu; Cheng, Yongmei; Hao, Shuai

    2016-12-10

    Automatic classification of terrain surfaces from an aerial image is essential for an autonomous unmanned aerial vehicle (UAV) landing at an unprepared site using vision. Diverse terrain surfaces may show similar spectral properties due to illumination and noise, which easily causes poor classification performance. To address this issue, a multi-stage classification algorithm based on low-rank recovery and multi-feature fusion sparse representation is proposed. First, color moments and Gabor texture features are extracted from the training data and stacked as column vectors of a dictionary. Then we perform low-rank matrix recovery on the dictionary using augmented Lagrange multipliers and construct a multi-stage terrain classifier. Experimental results on an aerial map database that we prepared verify the classification accuracy and robustness of the proposed method.

  11. Color Image of Phoenix Lander on Mars Surface

    NASA Image and Video Library

    2008-05-27

    This is an enhanced-color image from the Mars Reconnaissance Orbiter's High Resolution Imaging Science Experiment (HiRISE) camera. It shows NASA's Phoenix lander with its solar panels deployed on the Martian surface.

  12. A blind dual color images watermarking based on IWT and state coding

    NASA Astrophysics Data System (ADS)

    Su, Qingtang; Niu, Yugang; Liu, Xianxi; Zhu, Yu

    2012-04-01

    In this paper, a state-coding based blind watermarking algorithm is proposed to embed a color image watermark into a color host image. The technique of state coding, which makes the state code of the data set equal to the hidden watermark information, is introduced. When embedding the watermark, the R, G, and B components of the color image watermark are embedded into the Y, Cr, and Cb components of the color host image using the Integer Wavelet Transform (IWT) and the rules of state coding. Moreover, the rules of state coding are also used to extract the watermark from the watermarked image without resorting to the original watermark or original host image. Experimental results show that the proposed watermarking algorithm not only meets the demands of invisibility and robustness but also performs well compared with the other methods considered in this work.

  13. An instructional guide for leaf color analysis using digital imaging software

    Treesearch

    Paula F. Murakami; Michelle R. Turner; Abby K. van den Berg; Paul G. Schaberg

    2005-01-01

    Digital color analysis has become an increasingly popular and cost-effective method utilized by resource managers and scientists for evaluating foliar nutrition and health in response to environmental stresses. We developed and tested a new method of digital image analysis that uses Scion Image or NIH image public domain software to quantify leaf color. This...

  14. Accessible and informative sectioned images, color-coded images, and surface models of the ear.

    PubMed

    Park, Hyo Seok; Chung, Min Suk; Shin, Dong Sun; Jung, Yong Wook; Park, Jin Seo

    2013-08-01

    In our previous research, we created state-of-the-art sectioned images, color-coded images, and surface models of the human ear. Our ear data would be more beneficial and informative if they were more easily accessible. Therefore, the purpose of this study was to distribute the browsing software and the PDF file in which the ear images can be readily obtained and freely explored. Another goal was to inform other researchers of our methods for establishing the browsing software and the PDF file. To achieve this, sectioned images and color-coded images of the ear were prepared (voxel size 0.1 mm). In the color-coded images, structures related to hearing and equilibrium and structures originating from the first and second pharyngeal arches were additionally segmented. The sectioned and color-coded images of the right ear were added to the browsing software, which displays the images serially along with structure names. The surface models were reconstructed and combined into the PDF file, where they can be freely manipulated. Using the browsing software and PDF file, the sectional and three-dimensional shapes of ear structures can be comprehended in detail. Furthermore, using the PDF file, clinical knowledge can be acquired through virtual otoscopy. Therefore, the presented educational tools will be helpful to medical students and otologists by improving their knowledge of ear anatomy. The browsing software and PDF file can be downloaded without charge and registration at our homepage (http://anatomy.dongguk.ac.kr/ear/). Copyright © 2013 Wiley Periodicals, Inc.

  15. A novel color image encryption scheme using alternate chaotic mapping structure

    NASA Astrophysics Data System (ADS)

    Wang, Xingyuan; Zhao, Yuanyuan; Zhang, Huili; Guo, Kang

    2016-07-01

    This paper proposes a color image encryption algorithm using an alternate chaotic mapping structure. Initially, the R, G, and B components are used to form a matrix. Then one-dimensional and two-dimensional logistic maps are used to generate a chaotic matrix, and the two chaotic maps are iterated alternately to permute the matrix. At each iteration, an XOR operation is adopted to encrypt the plain-image matrix, followed by a further transformation to diffuse the matrix. Finally, the encrypted color image is obtained from the confused matrix. Theoretical analysis and experimental results have shown that the cryptosystem is secure and practical, and it is suitable for encrypting color images.
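
    The XOR diffusion step can be illustrated with a logistic-map keystream (a minimal sketch of the diffusion stage only; the paper's scheme also alternates two chaotic maps and permutes pixels, and the key values x0 and mu below are arbitrary). Because XOR is an involution, applying the same function twice recovers the image.

```python
import numpy as np

def logistic_keystream(x0, mu, n):
    """Iterate the logistic map x <- mu*x*(1-x) and quantize to bytes."""
    x = x0
    out = np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = mu * x * (1.0 - x)
        out[i] = int(x * 256) % 256
    return out

def xor_encrypt(rgb_bytes, x0=0.3456, mu=3.99):
    """XOR the flattened R, G, B planes with a chaotic keystream."""
    flat = rgb_bytes.ravel()
    ks = logistic_keystream(x0, mu, flat.size)
    return (flat ^ ks).reshape(rgb_bytes.shape)
```

    With mu near 4 the map is in its chaotic regime, so small changes to x0 produce a completely different keystream, which is the sensitivity an image cipher needs.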

  16. Accurate color images: from expensive luxury to essential resource

    NASA Astrophysics Data System (ADS)

    Saunders, David R.; Cupitt, John

    2002-06-01

    Over ten years ago the National Gallery in London began a program to make digital images of paintings in the collection using a colorimetric imaging system. This was to provide a permanent record of the state of paintings against which future images could be compared to determine if any changes had occurred. It quickly became apparent that such images could be used not only for scientific purposes, but also in applications where transparencies were then being used, for example as source materials for printed books and catalogues or for computer-based information systems. During the 1990s we were involved in the development of a series of digital cameras that have combined the high color accuracy of the original 'scientific' imaging system with the familiarity and portability of a medium format camera. This has culminated in the program of digitization now in progress at the National Gallery. By the middle of 2001 we will have digitized all the major paintings in the collection at a resolution of 10,000 pixels along their longest dimension and with calibrated color; we are on target to digitize the whole collection by the end of 2002. The images are available on-line within the museum for consultation and so that Gallery departments can use the images in printed publications and on the Gallery's web- site. We describe the development of the imaging systems used at National Gallery and how the research we have conducted into high-resolution accurate color imaging has developed from being a peripheral, if harmless, research activity to becoming a central part of the Gallery's information and publication strategy. Finally, we discuss some outstanding issues, such as interfacing our color management procedures with the systems used by external organizations.

  17. Pseudo-color coding method for high-dynamic single-polarization SAR images

    NASA Astrophysics Data System (ADS)

    Feng, Zicheng; Liu, Xiaolin; Pei, Bingzhi

    2018-04-01

    A raw synthetic aperture radar (SAR) image usually has a 16-bit or higher bit depth, which cannot be directly visualized on 8-bit displays. In this study, we propose a pseudo-color coding method for high-dynamic single-polarization SAR images. The method considers the characteristics of both SAR images and human perception. In HSI (hue, saturation, and intensity) color space, the method carries out high-dynamic-range tone mapping and pseudo-color processing simultaneously, avoiding loss of detail and improving object identifiability. It is a highly efficient global algorithm.
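
    A toy version of the joint idea, under stated assumptions (this is my sketch, not the paper's mapping): a global gamma curve tone-maps the 16-bit data, and the same tone-mapped value simultaneously drives hue from blue (dark) to red (bright), so the compression and the coloring are one operation.

```python
import numpy as np

def pseudo_color_sar(raw16, gamma=0.4):
    """Compress a 16-bit single-channel image into an 8-bit pseudo-color RGB."""
    x = raw16.astype(np.float64) / 65535.0
    v = x ** gamma                # tone-mapped intensity in [0, 1]
    h = (1.0 - v) * 240.0         # hue: 240 deg (blue, dark) -> 0 deg (red, bright)
    # HSV -> RGB with full saturation and value v.
    c = v
    hp = h / 60.0
    xc = c * (1 - np.abs(hp % 2 - 1))
    z = np.zeros_like(v)
    idx = hp.astype(int) % 6
    r = np.choose(idx, [c, xc, z, z, xc, c])
    g = np.choose(idx, [xc, c, c, xc, z, z])
    b = np.choose(idx, [z, z, xc, c, c, xc])
    return (np.stack([r, g, b], axis=-1) * 255).astype(np.uint8)
```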

  18. Objective Quality Assessment for Color-to-Gray Image Conversion.

    PubMed

    Ma, Kede; Zhao, Tiesong; Zeng, Kai; Wang, Zhou

    2015-12-01

    Color-to-gray (C2G) image conversion is the process of transforming a color image into a grayscale one. Despite its wide usage in real-world applications, little work has been dedicated to compare the performance of C2G conversion algorithms. Subjective evaluation is reliable but is also inconvenient and time consuming. Here, we make one of the first attempts to develop an objective quality model that automatically predicts the perceived quality of C2G converted images. Inspired by the philosophy of the structural similarity index, we propose a C2G structural similarity (C2G-SSIM) index, which evaluates the luminance, contrast, and structure similarities between the reference color image and the C2G converted image. The three components are then combined depending on image type to yield an overall quality measure. Experimental results show that the proposed C2G-SSIM index has close agreement with subjective rankings and significantly outperforms existing objective quality metrics for C2G conversion. To explore the potentials of C2G-SSIM, we further demonstrate its use in two applications: 1) automatic parameter tuning for C2G conversion algorithms and 2) adaptive fusion of C2G converted images.
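
    The three SSIM-style comparison terms can be sketched globally as follows (a simplified single-window illustration of the idea; C2G-SSIM itself operates locally and combines the terms adaptively by image type, and the stabilizing constants below are arbitrary):

```python
import numpy as np

def c2g_similarity(color, gray):
    """Global luminance/contrast/structure comparison between a color
    image's luminance and a candidate C2G conversion."""
    x = color.mean(axis=-1).ravel()   # reference luminance
    y = gray.ravel()                  # C2G result
    c1, c2 = 1e-4, 9e-4               # small constants for stability
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    l = (2 * mx * my + c1) / (mx ** 2 + my ** 2 + c1)       # luminance term
    c = (2 * np.sqrt(vx * vy) + c2) / (vx + vy + c2)        # contrast term
    s = (cov + c2 / 2) / (np.sqrt(vx * vy) + c2 / 2)        # structure term
    return l * c * s
```

    An identity conversion scores near 1, while a structure-destroying conversion (e.g., inverted luminance) scores much lower, which is the behavior a C2G quality index needs.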

  19. A novel false color mapping model-based fusion method of visual and infrared images

    NASA Astrophysics Data System (ADS)

    Qi, Bin; Kun, Gao; Tian, Yue-xin; Zhu, Zhen-yu

    2013-12-01

    A fast and efficient image fusion method is presented to generate near-natural colors from panchromatic visual and thermal imaging sensors. First, a set of daytime color reference images is analyzed and a false-color mapping principle is proposed according to human visual and emotional habits: object colors should remain invariant after color mapping, differences between the infrared and visual images should be enhanced, and the background color should be consistent with the main scene content. Then a novel nonlinear color mapping model is given by introducing the geometric mean of the gray levels of the input visual and infrared images together with a weighted-average algorithm. To determine the control parameters of the mapping model, boundary conditions are listed according to the mapping principle above. Fusion experiments show that the new method achieves a near-natural appearance of the fused image, enhancing color contrast and highlighting bright infrared objects compared with the traditional TNO algorithm. Moreover, it has low complexity, is easy to run in real time, and is therefore well suited to nighttime imaging apparatus.

  20. Color-coded visualization of magnetic resonance imaging multiparametric maps

    NASA Astrophysics Data System (ADS)

    Kather, Jakob Nikolas; Weidner, Anja; Attenberger, Ulrike; Bukschat, Yannick; Weis, Cleo-Aron; Weis, Meike; Schad, Lothar R.; Zöllner, Frank Gerrit

    2017-01-01

    Multiparametric magnetic resonance imaging (mpMRI) data are increasingly used in the clinic, e.g., for the diagnosis of prostate cancer. In contrast to conventional MR imaging data, multiparametric data typically include functional measurements such as diffusion and perfusion imaging sequences. Conventionally, these measurements are visualized with a one-dimensional color scale, allowing only one-dimensional information to be encoded. Yet human perception places visual information in a three-dimensional color space. In theory, each dimension of this space can be utilized to encode visual information. We addressed this issue and developed a new method for tri-variate color-coded visualization of mpMRI data sets. We showed the usefulness of our method in a preclinical and in a clinical setting: in imaging data of a rat model of acute kidney injury, the method yielded characteristic visual patterns. In a clinical data set of N = 13 prostate cancer mpMRI data, we assessed diagnostic performance in a blinded study with N = 5 observers. Compared with conventional radiological evaluation, color-coded visualization was comparable in terms of positive and negative predictive values. Thus, we showed that human observers can successfully make use of the novel method. This method can be broadly applied to visualize different types of multivariate MRI data.
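
    The core of a tri-variate encoding can be sketched simply: three co-registered parameter maps are each robustly normalized and assigned to one color dimension (here the R, G, B channels; the percentile window is an arbitrary choice, and the paper's mapping into perceptual color space is more sophisticated).

```python
import numpy as np

def trivariate_color(map_a, map_b, map_c):
    """Encode three co-registered parameter maps in one RGB image,
    one map per channel after robust percentile normalization."""
    def norm(m):
        lo, hi = np.percentile(m, [2, 98])   # clip outliers at 2nd/98th percentile
        return np.clip((m - lo) / (hi - lo + 1e-12), 0.0, 1.0)
    return np.stack([norm(map_a), norm(map_b), norm(map_c)], axis=-1)
```

    Tissue where, say, diffusion is low but perfusion is high then appears as a distinct hue, which is what makes characteristic patterns visible at a glance.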

  1. The Aesthetics of Astrophysics: How to Make Appealing Color-composite Images that Convey the Science

    NASA Astrophysics Data System (ADS)

    Rector, Travis A.; Levay, Zoltan G.; Frattare, Lisa M.; Arcand, Kimberly K.; Watzke, Megan

    2017-05-01

    Astronomy has a rich tradition of using color photography and imaging, for visualization in research as well as for sharing scientific discoveries in formal and informal education settings (i.e., for “public outreach”). In the modern era, astronomical research has benefitted tremendously from electronic cameras that allow data and images to be generated and analyzed in a purely digital form with a level of precision that previously was not possible. Advances in image-processing software have also enabled color-composite images to be made in ways that are much more complex than with darkroom techniques, not only at optical wavelengths but across the electromagnetic spectrum. The Internet has made it possible to rapidly disseminate these images to eager audiences. Alongside these technological advances, there have been gains in understanding how to make images that are scientifically illustrative as well as aesthetically pleasing. Studies have also given insights on how the public interprets astronomical images and how that can be different than professional astronomers. An understanding of these differences will help in the creation of images that are meaningful to both groups. In this invited review, we discuss the techniques behind making color-composite images as well as examine the factors one should consider when doing so, whether for data visualization or public consumption. We also provide a brief history of astronomical imaging with a focus on the origins of the "modern era" during which distribution of high-quality astronomical images to the public is a part of nearly every professional observatory's public outreach. We review relevant research into the expectations and misconceptions that often affect the public's interpretation of these images.

  2. Software defined multi-spectral imaging for Arctic sensor networks

    NASA Astrophysics Data System (ADS)

    Siewert, Sam; Angoth, Vivek; Krishnamurthy, Ramnarayan; Mani, Karthikeyan; Mock, Kenrick; Singh, Surjith B.; Srivistava, Saurav; Wagner, Chris; Claus, Ryan; Vis, Matthew Demi

    2016-05-01

    Availability of off-the-shelf infrared sensors combined with high definition visible cameras has made possible the construction of a Software Defined Multi-Spectral Imager (SDMSI) combining long-wave, near-infrared and visible imaging. The SDMSI requires a real-time embedded processor to fuse images and to create real-time depth maps for opportunistic uplink in sensor networks. Researchers at Embry Riddle Aeronautical University working with University of Alaska Anchorage at the Arctic Domain Awareness Center and the University of Colorado Boulder have built several versions of a low-cost drop-in-place SDMSI to test alternatives for power efficient image fusion. The SDMSI is intended for use in field applications including marine security, search and rescue operations and environmental surveys in the Arctic region. Based on Arctic marine sensor network mission goals, the team has designed the SDMSI to include features to rank images based on saliency and to provide on camera fusion and depth mapping. A major challenge has been the design of the camera computing system to operate within a 10 to 20 Watt power budget. This paper presents a power analysis of three options: 1) multi-core, 2) field programmable gate array with multi-core, and 3) graphics processing units with multi-core. For each test, power consumed for common fusion workloads has been measured at a range of frame rates and resolutions. Detailed analyses from our power efficiency comparison for workloads specific to stereo depth mapping and sensor fusion are summarized. Preliminary mission feasibility results from testing with off-the-shelf long-wave infrared and visible cameras in Alaska and Arizona are also summarized to demonstrate the value of the SDMSI for applications such as ice tracking, ocean color, soil moisture, animal and marine vessel detection and tracking. The goal is to select the most power efficient solution for the SDMSI for use on UAVs (Unoccupied Aerial Vehicles) and other drop

  3. Retinal axial focusing and multi-layer imaging with a liquid crystal adaptive optics camera

    NASA Astrophysics Data System (ADS)

    Liu, Rui-Xue; Zheng, Xian-Liang; Li, Da-Yu; Xia, Ming-Liang; Hu, Li-Fa; Cao, Zhao-Liang; Mu, Quan-Quan; Xuan, Li

    2014-09-01

    With the help of adaptive optics (AO) technology, cellular-level imaging of the living human retina can be achieved. Aiming to reduce distress and to avoid potential drug-induced complications, we attempted to image the retina with a dilated pupil and frozen accommodation, without drugs. An optimized liquid crystal adaptive optics camera was adopted for retinal imaging. A novel eye stared system was used for stimulating accommodation and fixating the imaging area. The illumination sources and the imaging camera moved in linkage for focusing on and imaging different layers. Four subjects with differing degrees of myopia were imaged. Based on the optical properties of the human eye, the eye stared system reduced the defocus to less than the typical ocular depth of focus. In this way, the illumination light could be projected onto a given retinal layer precisely. Since the defocus had been compensated by the eye stared system, the adopted 512 × 512 liquid crystal spatial light modulator (LC-SLM) corrector provided the spatial fidelity needed to fully compensate high-order aberrations. The Strehl ratio for a subject with -8 diopter myopia was improved to 0.78, close to diffraction-limited imaging. By finely adjusting the axial displacement of the illumination sources and imaging camera, cone photoreceptors, blood vessels and the nerve fiber layer were clearly imaged.

  4. NPS assessment of color medical image displays using a monochromatic CCD camera

    NASA Astrophysics Data System (ADS)

    Roehrig, Hans; Gu, Xiliang; Fan, Jiahua

    2012-10-01

    This paper presents an approach to Noise Power Spectrum (NPS) assessment of color medical displays without using an expensive imaging colorimeter. The R, G and B color uniform patterns were shown on the display under study and the images were taken using a high resolution monochromatic camera. A colorimeter was used to calibrate the camera images. Synthetic intensity images were formed by the weighted sum of the R, G, B and the dark screen images. Finally, the NPS analysis was conducted on the synthetic images. The proposed method replaces an expensive imaging colorimeter for NPS evaluation, which also suggests a potential solution for routine color medical display QA/QC in the clinical area, especially when imaging of display devices is desired.
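
    The synthetic-intensity and NPS steps described above can be sketched in NumPy. The luminance-style channel weights and the unit pixel pitch below are illustrative assumptions, not values taken from the paper:

```python
import numpy as np

def synthetic_intensity(r, g, b, dark, w=(0.2126, 0.7152, 0.0722)):
    """Weighted sum of the R, G and B captures minus the dark-screen
    image. The luminance-style weights are an illustrative assumption."""
    return w[0] * r + w[1] * g + w[2] * b - dark

def nps_2d(img, px_pitch=1.0):
    """2-D noise power spectrum of a uniform-patch capture: subtract
    the mean, take the 2-D FFT, and normalize by the patch area."""
    noise = img - img.mean()
    f = np.fft.fftshift(np.fft.fft2(noise))
    return (np.abs(f) ** 2) * (px_pitch ** 2) / noise.size

# simulated uniform-pattern captures with white measurement noise
rng = np.random.default_rng(0)
shape = (256, 256)
r, g, b = (100 + rng.normal(0, 2, shape) for _ in range(3))
dark = np.full(shape, 1.0)
nps = nps_2d(synthetic_intensity(r, g, b, dark))
```

    For white noise the resulting spectrum is approximately flat; structured display noise would show up as peaks at the corresponding spatial frequencies.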

  5. Advanced microlens and color filter process technology for the high-efficiency CMOS and CCD image sensors

    NASA Astrophysics Data System (ADS)

    Fan, Yang-Tung; Peng, Chiou-Shian; Chu, Cheng-Yu

    2000-12-01

    New markets are emerging for digital electronic image devices, especially in visual communications, PC cameras, mobile/cell phones, security systems, toys, vehicle imaging systems and computer peripherals for document capture. A one-chip imaging system, in which the image sensor has a fully digital interface, can bring image-capture devices into our daily lives. Adding a color filter to such an image sensor, in a mosaic or wide-stripe pixel pattern, makes images more real and colorful. We can say that color filters make life more colorful. What is a color filter? A color filter transmits only light whose wavelength matches the filter's own color, with a characteristic transmittance, and blocks all other colors. The color filter process coats and patterns green, red and blue (or cyan, magenta and yellow) mosaic resists onto the matching pixels of the image-sensing array. From the signal captured at each pixel, the scene image can be reconstructed. The widespread use of digital cameras and multimedia applications today makes the future of color filters bright. Although challenging, developing the color filter process is very worthwhile. We provide the best service on shorter cycle times, excellent color quality, and high and stable yield. The key issues that an advanced color process must solve and implement are planarization and micro-lens technology. Other key points of color filter process technology that must be considered are also described in this paper.

  6. Fusion of lens-free microscopy and mobile-phone microscopy images for high-color-accuracy and high-resolution pathology imaging

    NASA Astrophysics Data System (ADS)

    Zhang, Yibo; Wu, Yichen; Zhang, Yun; Ozcan, Aydogan

    2017-03-01

    Digital pathology and telepathology require imaging tools with high throughput, high resolution and accurate color reproduction. Lens-free on-chip microscopy based on digital in-line holography is a promising technique towards these needs, as it offers a wide field of view (FOV >20 mm2) and high resolution with a compact, low-cost and portable setup. Color imaging has previously been demonstrated by combining reconstructed images at three discrete wavelengths in the red, green and blue parts of the visible spectrum, i.e., the RGB combination method. However, this RGB combination method is subject to color distortions. To improve the color performance of lens-free microscopy for pathology imaging, here we present a wavelet-based color fusion imaging framework, termed "digital color fusion microscopy" (DCFM), which digitally fuses together a grayscale lens-free microscope image taken at a single wavelength and a low-resolution, low-magnification color-calibrated image taken by a lens-based microscope, which can simply be a cost-effective mobile-phone-based microscope. We show that the imaging results for an H&E-stained breast cancer tissue slide obtained with the DCFM technique come very close to those of a color-calibrated microscope using a 40x objective lens with 0.75 NA. Quantitative comparison showed a 2-fold reduction in the mean color distance using the DCFM method compared to the RGB combination method, while also preserving the high-resolution features of the lens-free microscope. Due to the cost-effective and field-portable nature of both lens-free and mobile-phone microscopy techniques, their combination through the DCFM framework could be useful for digital pathology and telepathology applications in low-resource and point-of-care settings.
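
    The core idea of fusing high-resolution grayscale detail with low-resolution color can be sketched with a simple luminance transfer; this is only a crude stand-in for the paper's wavelet-based DCFM fusion, and the YCbCr-style coefficients and nearest-neighbor upsampling are assumptions:

```python
import numpy as np

def luminance_transfer_fusion(gray_hi, rgb_lo):
    """Replace the luma of an upsampled low-res color image with a
    high-res grayscale image while keeping the low-res chroma."""
    scale = gray_hi.shape[0] // rgb_lo.shape[0]
    rgb_up = np.repeat(np.repeat(rgb_lo, scale, 0), scale, 1)  # nearest-neighbor
    r, g, b = rgb_up[..., 0], rgb_up[..., 1], rgb_up[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb, cr = 0.564 * (b - y), 0.713 * (r - y)
    y2 = gray_hi                       # high-resolution luma channel
    r2 = y2 + 1.403 * cr
    b2 = y2 + 1.773 * cb
    g2 = (y2 - 0.299 * r2 - 0.114 * b2) / 0.587
    return np.clip(np.stack([r2, g2, b2], axis=-1), 0.0, 1.0)

rng = np.random.default_rng(4)
gray_hi = rng.random((64, 64))         # stand-in for the lens-free image
rgb_lo = rng.random((16, 16, 3))       # stand-in for the phone-camera image
fused = luminance_transfer_fusion(gray_hi, rgb_lo)
```

    If the grayscale input equals the luma of the upsampled color image, the fusion is (approximately) an identity, which is a useful sanity check.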

  7. A new fringeline-tracking approach for color Doppler ultrasound imaging phase unwrapping

    NASA Astrophysics Data System (ADS)

    Saad, Ashraf A.; Shapiro, Linda G.

    2008-03-01

    Color Doppler ultrasound imaging is a powerful non-invasive diagnostic tool for many clinical applications that involve examining the anatomy and hemodynamics of human blood vessels. These clinical applications include cardio-vascular diseases, obstetrics, and abdominal diseases. Since its commercial introduction in the early eighties, color Doppler ultrasound imaging has been used mainly as a qualitative tool, with very few attempts to quantify its images. Many imaging artifacts hinder the quantification of color Doppler images, the most important of which is the aliasing artifact that distorts the blood flow velocities measured by the color Doppler technique. In this work we address the color Doppler aliasing problem and present a methodology for recovering the true flow velocities from the aliased ones. The problem is formulated as a 2D phase-unwrapping problem, which is a well-defined problem with solid theoretical foundations in other imaging domains, including synthetic aperture radar and magnetic resonance imaging. This paper documents the need for a phase-unwrapping algorithm for use in color Doppler ultrasound image analysis. It describes a new phase-unwrapping algorithm that relies on recently developed cutline detection approaches. The algorithm is novel in its use of heuristic information provided by the ultrasound imaging modality to guide the phase-unwrapping process. Experiments have been performed on both in-vitro flow-phantom data and in-vivo human blood flow data. Both data types were acquired under a controlled acquisition protocol developed to minimize the distortion of the color Doppler data and hence to simplify the phase-unwrapping task. In addition to a qualitative assessment of the results, a quantitative assessment approach was developed to measure the success of the method. The results of our new algorithm have been compared on ultrasound data with those of other well-known algorithms, and it outperforms all of them.
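
    The aliasing being corrected can be illustrated in one dimension: mapping velocity to phase turns de-aliasing into phase unwrapping. This sketch uses NumPy's simple path-following `np.unwrap`, not the cutline-based algorithm the paper proposes, and the Nyquist velocity is an assumed value:

```python
import numpy as np

# Doppler velocities alias once they exceed the Nyquist limit v_nyq:
# measured = ((true + v_nyq) mod 2*v_nyq) - v_nyq.  Mapping velocity to
# phase, phi = pi * v / v_nyq, turns de-aliasing into 1-D phase
# unwrapping along a scan line.
v_nyq = 0.5                                   # assumed Nyquist velocity (m/s)
true_v = np.linspace(0.0, 1.2, 100)           # smoothly accelerating flow
aliased = (true_v + v_nyq) % (2 * v_nyq) - v_nyq
phase = np.pi * aliased / v_nyq
recovered = np.unwrap(phase) * v_nyq / np.pi  # true velocities restored
```

    Path-following unwrapping like this fails when neighboring samples differ by more than the Nyquist interval, which is exactly why robust 2D algorithms (such as the cutline approach above) are needed on real Doppler data.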

  8. Hyperspectral imaging using a color camera and its application for pathogen detection

    NASA Astrophysics Data System (ADS)

    Yoon, Seung-Chul; Shin, Tae-Sung; Heitschmidt, Gerald W.; Lawrence, Kurt C.; Park, Bosoon; Gamble, Gary

    2015-02-01

    This paper reports the results of a feasibility study for the development of a hyperspectral image recovery (reconstruction) technique using an RGB color camera and regression analysis in order to detect and classify colonies of foodborne pathogens. The target bacterial pathogens were the six representative non-O157 Shiga-toxin producing Escherichia coli (STEC) serogroups (O26, O45, O103, O111, O121, and O145) grown in Petri dishes of Rainbow agar. The purpose of the feasibility study was to evaluate whether a DSLR camera (Nikon D700) could be used to predict hyperspectral images in the wavelength range from 400 to 1,000 nm and even to predict the types of pathogens using a hyperspectral STEC classification algorithm that was previously developed. Unlike many other studies using color charts with known and noise-free spectra for training reconstruction models, this work used hyperspectral and color images measured separately by a hyperspectral imaging spectrometer and the DSLR color camera. The color images were calibrated (i.e., normalized) to relative reflectance, subsampled, and spatially registered to match counterpart pixels in hyperspectral images that were also calibrated to relative reflectance. Polynomial multivariate least-squares regression (PMLR) had previously been developed with simulated color images. In this study, partial least squares regression (PLSR) was also evaluated as a spectral recovery technique to minimize multicollinearity and overfitting. The two spectral recovery models (PMLR and PLSR) and their parameters were evaluated by cross-validation. QR decomposition was used to find a numerically more stable solution of the regression equation. The preliminary results showed that PLSR was more effective than PMLR, especially with higher-order polynomial regressions. The best classification accuracy measured with an independent test set was about 90%. The results suggest the potential of cost-effective color imaging using hyperspectral image recovery.
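
    The polynomial-regression recovery idea can be sketched with plain least squares in NumPy. The second-order feature expansion and the 31-band spectra are illustrative choices; the paper's PMLR/PLSR models and QR-based solver are not reproduced here:

```python
import numpy as np

def poly_features(rgb):
    """Second-order polynomial expansion of RGB values:
    (1, R, G, B, R^2, RG, RB, G^2, GB, B^2)."""
    r, g, b = rgb.T
    return np.column_stack([np.ones_like(r), r, g, b,
                            r * r, r * g, r * b, g * g, g * b, b * b])

def fit_recovery(rgb_train, spectra_train):
    """Least-squares fit of the matrix mapping polynomial RGB features
    to n-band reflectance spectra."""
    M, *_ = np.linalg.lstsq(poly_features(rgb_train), spectra_train,
                            rcond=None)
    return M

def recover(rgb, M):
    """Predict spectra for new RGB measurements."""
    return poly_features(rgb) @ M

# toy data: spectra that are an exact quadratic function of RGB
rng = np.random.default_rng(1)
rgb = rng.random((200, 3))
true_M = rng.random((10, 31))      # 31 "bands" standing in for 400-1000 nm
spectra = poly_features(rgb) @ true_M
M = fit_recovery(rgb, spectra)
```

    On real camera data the fit is only approximate, which is where PLSR's protection against multicollinearity and overfitting becomes relevant.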

  9. Perceptual distortion analysis of color image VQ-based coding

    NASA Astrophysics Data System (ADS)

    Charrier, Christophe; Knoblauch, Kenneth; Cherifi, Hocine

    1997-04-01

    It is generally accepted that an RGB color image can be easily encoded by using a gray-scale compression technique on each of the three color planes. Such an approach, however, fails to take into account correlations existing between color planes and perceptual factors. We evaluated several linear and non-linear color spaces, some introduced by the CIE, compressed with the vector quantization technique for minimum perceptual distortion. To study these distortions, we measured the contrast and luminance of the video framebuffer, to precisely control color. We then obtained psychophysical judgements to measure how well these methods minimize perceptual distortion in a variety of color spaces.

  10. Automatic color preference correction for color reproduction

    NASA Astrophysics Data System (ADS)

    Tsukada, Masato; Funayama, Chisato; Tajima, Johji

    2000-12-01

    The reproduction of natural objects in color images has attracted a great deal of attention. Reproducing more pleasing colors for natural objects is one way to improve image quality. We developed an automatic color correction method to maintain preferred color reproduction for three significant categories: facial skin color, green grass and blue sky. In this method, a representative color in an object area to be corrected is automatically extracted from an input image, and a set of color correction parameters is selected depending on that representative color. The improvement in image quality for reproductions of natural images was more than 93 percent in subjective experiments. These results show the usefulness of our automatic color correction method for the reproduction of preferred colors.

  11. Mississippi Delta, Radar Image with Colored Height

    NASA Image and Video Library

    2005-08-29

    The geography of the New Orleans and Mississippi delta region is well shown in this radar image from the Shuttle Radar Topography Mission. In this image, bright areas show regions of high radar reflectivity, such as from urban areas, and elevations have been coded in color using height data also from the mission. Dark green colors indicate low elevations, rising through yellow and tan, to white at the highest elevations. New Orleans is situated along the southern shore of Lake Pontchartrain, the large, roughly circular lake near the center of the image. The line spanning the lake is the Lake Pontchartrain Causeway, the world's longest over water highway bridge. Major portions of the city of New Orleans are below sea level, and although it is protected by levees and sea walls, flooding during storm surges associated with major hurricanes is a significant concern. http://photojournal.jpl.nasa.gov/catalog/PIA04175

  12. Full color natural light holographic camera.

    PubMed

    Kim, Myung K

    2013-04-22

    Full-color, three-dimensional images of objects under incoherent illumination are obtained by a digital holography technique. Based on self-interference of two beam-split copies of the object's optical field with differential curvatures, the apparatus consists of a beam-splitter, a few mirrors and lenses, a piezo-actuator, and a color camera. No lasers or other special illuminations are used for recording or reconstruction. Color holographic images of daylight-illuminated outdoor scenes and a halogen lamp-illuminated toy figure are obtained. From a recorded hologram, images can be calculated, or numerically focused, at any distances for viewing.

  13. Just Noticeable Distortion Model and Its Application in Color Image Watermarking

    NASA Astrophysics Data System (ADS)

    Liu, Kuo-Cheng

    In this paper, a perceptually adaptive watermarking scheme for color images is proposed in order to achieve robustness and transparency. A new just noticeable distortion (JND) estimator for color images is first designed in the wavelet domain. The key issue of the JND model is to effectively integrate visual masking effects. The estimator is an extension of the perceptual model used in image coding for grayscale images. In addition to the visual masking effects computed coefficient by coefficient from the luminance content and texture of grayscale images, the crossed masking effect arising from the interaction between luminance and chrominance components, and the effect of the variance within the local region of the target coefficient, are investigated so that the visibility threshold for the human visual system (HVS) can be evaluated. In a locally adaptive fashion based on the wavelet decomposition, the estimator applies to all subbands of the luminance and chrominance components of color images and is used to measure the visibility of wavelet quantization errors. The subband JND profiles are then incorporated into the proposed color image watermarking scheme. Robustness and transparency are achieved by embedding the maximum-strength watermark that maintains the perceptually lossless quality of the watermarked color image. Simulation results show that the proposed scheme, with watermarks inserted into both luminance and chrominance components, is more robust than the existing scheme while retaining the watermark transparency.
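
    A much-simplified spatial-domain sketch of JND-guided embedding: the paper works in the wavelet domain with a full HVS model, whereas here blockwise variance stands in for texture masking, and the `base`/`gain` constants are assumptions:

```python
import numpy as np

def local_variance(img, k=4):
    """Blockwise variance as a crude texture-masking proxy: busier
    regions tolerate larger invisible distortion."""
    h, w = img.shape
    blocks = img[:h - h % k, :w - w % k].reshape(h // k, k, w // k, k)
    v = blocks.var(axis=(1, 3))
    return np.repeat(np.repeat(v, k, 0), k, 1)

def embed(img, bits, base=0.5, gain=0.05):
    """Additive +/-1 embedding with per-pixel strength set by the JND
    proxy, keeping the watermark near the visibility threshold."""
    jnd = base + gain * np.sqrt(local_variance(img))
    w = np.where(bits > 0, 1.0, -1.0)
    return img + jnd * w, jnd

rng = np.random.default_rng(2)
img = rng.random((64, 64)) * 255
bits = rng.integers(0, 2, (64, 64))
marked, jnd = embed(img, bits)
```

    The key property, shared with the paper's scheme, is that the embedding distortion at every pixel is bounded by the local JND value.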

  14. CMEIAS color segmentation: an improved computing technology to process color images for quantitative microbial ecology studies at single-cell resolution.

    PubMed

    Gross, Colin A; Reddy, Chandan K; Dazzo, Frank B

    2010-02-01

    Quantitative microscopy and digital image analysis are underutilized in microbial ecology largely because of the laborious task of segmenting foreground object pixels from background, especially in complex color micrographs of environmental samples. In this paper, we describe an improved computing technology developed to alleviate this limitation. The system's uniqueness is its ability to edit digital images accurately when presented with the difficult yet commonplace challenge of removing background pixels whose three-dimensional color space overlaps the range that defines foreground objects. Image segmentation is accomplished by utilizing algorithms that address the color and spatial relationships of user-selected foreground object pixels. Evaluated on 26 complex micrographs at single-pixel resolution, the color segmentation algorithm had an overall pixel classification accuracy of 99+%. Several applications illustrate how this improved computing technology can successfully resolve numerous challenges of complex color segmentation in order to produce images from which quantitative information can be accurately extracted, thereby gaining new perspectives on the in situ ecology of microorganisms. Examples include improvements in the quantitative analysis of (1) microbial abundance and phylotype diversity of single cells classified by their discriminating color within heterogeneous communities, (2) cell viability, (3) spatial relationships and intensity of bacterial gene expression involved in cellular communication between individual cells within rhizoplane biofilms, and (4) biofilm ecophysiology based on ribotype-differentiated radioactive substrate utilization. The stand-alone executable file plus user manual and tutorial images for this color segmentation computing application are freely available at http://cme.msu.edu/cmeias/ .
    This improved computing technology opens new opportunities for imaging applications where discriminating colors really matter.
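
    A minimal sketch of the color-range part of such segmentation (the CMEIAS tool also exploits spatial relationships, which this omits; the seed colors and tolerance are hypothetical):

```python
import numpy as np

def color_segment(img, seeds, tol=30.0):
    """Mark a pixel as foreground if its RGB vector lies within `tol`
    (Euclidean distance) of any user-selected seed color."""
    flat = img.reshape(-1, 3).astype(float)
    seeds = np.asarray(seeds, dtype=float)
    d = np.linalg.norm(flat[:, None, :] - seeds[None, :, :], axis=2)
    return (d.min(axis=1) <= tol).reshape(img.shape[:2])

# toy micrograph: left half "cells" (red-ish), right half dark background
img = np.zeros((8, 8, 3), dtype=np.uint8)
img[:, :4] = (200, 30, 30)
img[:, 4:] = (10, 10, 10)
mask = color_segment(img, seeds=[(205, 25, 35)])
```

    The hard case the paper addresses, background pixels whose colors overlap the foreground range, is exactly where a pure color-distance rule like this fails and spatial reasoning must take over.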

  15. An Improved Filtering Method for Quantum Color Image in Frequency Domain

    NASA Astrophysics Data System (ADS)

    Li, Panchi; Xiao, Hong

    2018-01-01

    In this paper we investigate the use of the quantum Fourier transform (QFT) in the field of image processing. We consider QFT-based color image filtering operations and their applications in image smoothing, sharpening, and selective filtering using quantum frequency domain filters. The proposed quantum filters are constructed by using a quantum oracle to implement the filter function. Compared with existing methods, our method is not only suitable for color images but also allows flexible design of notch filters. We provide the quantum circuit that implements the filtering task and present the results of several simulation experiments on color images. The major advantage of quantum frequency filtering lies in the efficient implementation of the quantum Fourier transform.
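
    For intuition, here is the classical counterpart of the frequency-domain notch filtering described above; the quantum circuit realizes the same filter function via an oracle, so this NumPy sketch is only the classical analogue, with assumed notch centers:

```python
import numpy as np

def notch_filter(img, centers, radius):
    """Zero small disks around the given (row, col) frequency centers
    (and their conjugate-symmetric partners) in the shifted spectrum."""
    f = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.mgrid[:h, :w]
    for (cu, cv) in centers:
        for (u, v) in ((cu, cv), (h - cu, w - cv)):
            f[(yy - u) ** 2 + (xx - v) ** 2 <= radius ** 2] = 0.0
    return np.real(np.fft.ifft2(np.fft.ifftshift(f)))

# constant image plus a horizontal sinusoidal interference pattern
x = np.arange(64)
img = 5.0 + np.cos(2 * np.pi * 8 * x / 64)[None, :] * np.ones((64, 1))
clean = notch_filter(img, centers=[(32, 40)], radius=2)
```

    The notch at (32, 40) sits on the sinusoid's frequency peak in the shifted spectrum, so the interference is removed and the constant background survives.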

  16. Color model comparative analysis for breast cancer diagnosis using H and E stained images

    NASA Astrophysics Data System (ADS)

    Li, Xingyu; Plataniotis, Konstantinos N.

    2015-03-01

    Digital cancer diagnosis is a research realm where signal processing techniques are used to analyze and to classify color histopathology images. Unlike grayscale image analysis of magnetic resonance imaging or X-ray, colors in histopathology images convey a large amount of histological information and thus play a significant role in cancer diagnosis. Though color information is widely used in histopathology work, to date there are few studies on color model selection for feature extraction in cancer diagnosis schemes. This paper addresses the problem of color space selection for digital cancer classification using H and E stained images, and investigates the effectiveness of various color models (RGB, HSV, CIE L*a*b*, and the stain-dependent H and E decomposition model) in breast cancer diagnosis. In particular, we build a diagnosis framework as a comparison benchmark and take specific concerns of medical decision systems into account in evaluation. The evaluation methodologies include feature discriminative power evaluation and final diagnosis performance comparison. Experimentation on a publicly accessible histopathology image set suggests that the H and E decomposition model outperforms the other assessed color spaces. To explain the varying performance of the color spaces, our analysis via mutual information estimation demonstrates that the color components in the H and E model are less dependent, so most feature discriminative power is concentrated in one channel instead of being spread among channels as in other color spaces.
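
    The channel-dependence measurement mentioned above can be sketched with a histogram mutual-information estimator in NumPy; the bin count is an arbitrary choice, and the paper's estimator may differ:

```python
import numpy as np

def mutual_information(x, y, bins=32):
    """Histogram estimate of the mutual information (bits) between two
    image channels: low values mean the channels carry nearly
    independent information."""
    pxy, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(3)
a = rng.random(100_000)
b = rng.random(100_000)
mi_dependent = mutual_information(a, a)    # a channel with itself
mi_independent = mutual_information(a, b)  # unrelated channels
```

    Applied to real image channels, a lower pairwise MI (as reported for the H and E decomposition) indicates that discriminative information is not duplicated across channels.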

  17. Single Channel Quantum Color Image Encryption Algorithm Based on HSI Model and Quantum Fourier Transform

    NASA Astrophysics Data System (ADS)

    Gong, Li-Hua; He, Xiang-Tao; Tan, Ru-Chao; Zhou, Zhi-Hong

    2018-01-01

    In order to obtain high-quality color images, it is important to keep the hue component unchanged while emphasizing the intensity or saturation component. The Hue-Saturation-Intensity (HSI) model is a widely used color model in image processing. A new single channel quantum color image encryption algorithm based on the HSI model and the quantum Fourier transform (QFT) is investigated, where the color components of the original color image are converted to HSI and the logistic map is employed to diffuse the relationships between pixels in the color components. Subsequently, the quantum Fourier transform is exploited to fulfill the encryption. The cipher-text is a combination of a gray image and a phase matrix. Simulations and theoretical analyses demonstrate that the proposed single channel quantum color image encryption scheme based on the HSI model and quantum Fourier transform is secure and effective.
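
    The RGB-to-HSI conversion underlying the scheme can be sketched classically with the standard geometric formulas; the quantum representation itself is not reproduced here:

```python
import numpy as np

def rgb_to_hsi(rgb):
    """Convert an (..., 3) RGB array with values in [0, 1] to hue
    (radians), saturation and intensity components."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    i = (r + g + b) / 3.0
    s = 1.0 - np.minimum(np.minimum(r, g), b) / np.maximum(i, 1e-12)
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-12
    theta = np.arccos(np.clip(num / den, -1.0, 1.0))
    h = np.where(b <= g, theta, 2 * np.pi - theta)
    return h, s, i

h, s, i = rgb_to_hsi(np.array([[1.0, 0.0, 0.0],    # pure red
                               [0.5, 0.5, 0.5]]))  # mid gray
```

    Because hue is computed separately from intensity and saturation, an encryption (or enhancement) scheme can operate on the I or S channel while leaving H untouched, which is the property the abstract highlights.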

  18. Intraoperative imaging-guided cancer surgery: from current fluorescence molecular imaging methods to future multi-modality imaging technology.

    PubMed

    Chi, Chongwei; Du, Yang; Ye, Jinzuo; Kou, Deqiang; Qiu, Jingdan; Wang, Jiandong; Tian, Jie; Chen, Xiaoyuan

    2014-01-01

    Cancer is a major threat to human health. Diagnosis and treatment using precision medicine is expected to be an effective method for preventing the initiation and progression of cancer. Although anatomical and functional imaging techniques such as radiography, computed tomography (CT), magnetic resonance imaging (MRI) and positron emission tomography (PET) have played an important role for accurate preoperative diagnostics, for the most part these techniques cannot be applied intraoperatively. Optical molecular imaging is a promising technique that provides a high degree of sensitivity and specificity in tumor margin detection. Furthermore, existing clinical applications have proven that optical molecular imaging is a powerful intraoperative tool for guiding surgeons performing precision procedures, thus enabling radical resection and improved survival rates. However, detection depth limitation exists in optical molecular imaging methods and further breakthroughs from optical to multi-modality intraoperative imaging methods are needed to develop more extensive and comprehensive intraoperative applications. Here, we review the current intraoperative optical molecular imaging technologies, focusing on contrast agents and surgical navigation systems, and then discuss the future prospects of multi-modality imaging technology for intraoperative imaging-guided cancer surgery.

  19. Intraoperative Imaging-Guided Cancer Surgery: From Current Fluorescence Molecular Imaging Methods to Future Multi-Modality Imaging Technology

    PubMed Central

    Chi, Chongwei; Du, Yang; Ye, Jinzuo; Kou, Deqiang; Qiu, Jingdan; Wang, Jiandong; Tian, Jie; Chen, Xiaoyuan

    2014-01-01

    Cancer is a major threat to human health. Diagnosis and treatment using precision medicine is expected to be an effective method for preventing the initiation and progression of cancer. Although anatomical and functional imaging techniques such as radiography, computed tomography (CT), magnetic resonance imaging (MRI) and positron emission tomography (PET) have played an important role for accurate preoperative diagnostics, for the most part these techniques cannot be applied intraoperatively. Optical molecular imaging is a promising technique that provides a high degree of sensitivity and specificity in tumor margin detection. Furthermore, existing clinical applications have proven that optical molecular imaging is a powerful intraoperative tool for guiding surgeons performing precision procedures, thus enabling radical resection and improved survival rates. However, detection depth limitation exists in optical molecular imaging methods and further breakthroughs from optical to multi-modality intraoperative imaging methods are needed to develop more extensive and comprehensive intraoperative applications. Here, we review the current intraoperative optical molecular imaging technologies, focusing on contrast agents and surgical navigation systems, and then discuss the future prospects of multi-modality imaging technology for intraoperative imaging-guided cancer surgery. PMID:25250092

  20. The application analysis of the multi-angle polarization technique for ocean color remote sensing

    NASA Astrophysics Data System (ADS)

    Zhang, Yongchao; Zhu, Jun; Yin, Huan; Zhang, Keli

    2017-02-01

    The multi-angle polarization technique, which uses the intensity of polarized radiation as the observed quantity, is a new remote sensing means for Earth observation. With this method, not only can multi-angle light intensity data be provided, but multi-angle information on polarized radiation can also be obtained. The technique may therefore solve problems that could not be solved with traditional remote sensing methods. Today, the multi-angle polarization technique has become one of the hot topics in international quantitative remote sensing research. In this paper, we first introduce the principles of the multi-angle polarization technique; then basic research and engineering applications are summarized and analysed for 1) the peeled-off method for sun glitter based on polarization, 2) ocean color remote sensing based on polarization, 3) oil spill detection using the polarization technique, and 4) ocean aerosol monitoring based on polarization. Finally, building on previous work, we briefly present the problems and prospects of the multi-angle polarization technique as applied to China's ocean color remote sensing.

  1. Dual-color dual-focus line-scanning FCS for quantitative analysis of receptor-ligand interactions in living specimens.

    PubMed

    Dörlich, René M; Chen, Qing; Niklas Hedde, Per; Schuster, Vittoria; Hippler, Marc; Wesslowski, Janine; Davidson, Gary; Nienhaus, G Ulrich

    2015-05-07

    Cellular communication in multi-cellular organisms is mediated to a large extent by a multitude of cell-surface receptors that bind specific ligands. An in-depth understanding of cell signaling networks requires quantitative information on ligand-receptor interactions within living systems. In principle, fluorescence correlation spectroscopy (FCS) based methods can provide such data, but live-cell applications have proven extremely challenging. Here, we have developed an integrated dual-color dual-focus line-scanning fluorescence correlation spectroscopy (2c2f lsFCS) technique that greatly facilitates live-cell and tissue experiments. Absolute ligand and receptor concentrations and their diffusion coefficients within the cell membrane can be quantified without the need to perform additional calibration experiments. We also determine the concentration of ligands diffusing in the medium outside the cell within the same experiment by using a raster image correlation spectroscopy (RICS) based analysis. We have applied this robust technique to study the interactions of two Wnt antagonists, Dickkopf1 and Dickkopf2 (Dkk1/2), to their cognate receptor, low-density-lipoprotein-receptor related protein 6 (LRP6), in the plasma membrane of living HEK293T cells. We obtained significantly lower affinities than previously reported using in vitro studies, underscoring the need to measure such data on living cells or tissues.

  2. Full-color high-definition CGH reconstructing hybrid scenes of physical and virtual objects

    NASA Astrophysics Data System (ADS)

    Tsuchiyama, Yasuhiro; Matsushima, Kyoji; Nakahara, Sumio; Yamaguchi, Masahiro; Sakamoto, Yuji

    2017-03-01

    High-definition CGHs can reconstruct high-quality 3D images comparable to those of conventional optical holography. However, it was difficult to exhibit full-color images reconstructed by these high-definition CGHs, because three CGHs for the RGB colors and a bulky image combiner were needed to produce full-color images. Recently, we reported a novel technique for full-color reconstruction using RGB color filters similar to those used in liquid-crystal panels. This technique allows us to produce full-color high-definition CGHs composed of a single plate and to place them on exhibition. Using this technique, in this paper we demonstrate full-color CGHs that reconstruct hybrid scenes comprising real physical objects and CG-modeled virtual objects. Here, the wave field of the physical object is obtained from dense multi-viewpoint images by employing the ray-sampling (RS) plane technique. In addition to the technique for full-color capturing and reconstruction of real object fields, the principle and simulation technique for full-color CGHs using RGB color filters are presented.

  3. Featured Image: Revealing Hidden Objects with Color

    NASA Astrophysics Data System (ADS)

    Kohler, Susanna

    2018-02-01

    Stunning color astronomical images can often be the motivation for astronomers to continue slogging through countless data files, calculations, and simulations as we seek to understand the mysteries of the universe. But sometimes the stunning images can, themselves, be the source of scientific discovery. This is the case with the below image of Lynds Dark Nebula 673, located in the Aquila constellation, which was captured with the Mayall 4-meter telescope at Kitt Peak National Observatory by a team of scientists led by Travis Rector (University of Alaska Anchorage). After creating the image with a novel color-composite imaging method that reveals faint Hα emission (visible in red in both images here), Rector and collaborators identified the presence of a dozen new Herbig-Haro objects: small cloud patches that are produced when material is energetically flung out from newly born stars. The image adapted above shows three of the new objects, HH 118789, aligned with two previously known objects, HH 32 and 332, suggesting they are driven by the same source. For more beautiful images and insight into the authors' discoveries, check out the article linked below! Full view of Lynds Dark Nebula 673. Click for the larger view this beautiful composite image deserves! [T.A. Rector (University of Alaska Anchorage) and H. Schweiker (WIYN and NOAO/AURA/NSF)] Citation: T. A. Rector et al 2018 ApJ 852 13. doi:10.3847/1538-4357/aa9ce1

  4. Natural-color and color-infrared image mosaics of the Colorado River corridor in Arizona derived from the May 2009 airborne image collection

    USGS Publications Warehouse

    Davis, Philip A.

    2013-01-01

    The Grand Canyon Monitoring and Research Center (GCMRC) of the U.S. Geological Survey (USGS) periodically collects airborne image data for the Colorado River corridor within Arizona (fig. 1) to allow scientists to study the impacts of Glen Canyon Dam water release on the corridor’s natural and cultural resources. These data are collected from just above Glen Canyon Dam (in Lake Powell) down to the entrance of Lake Mead, for a total distance of 450 kilometers (km) and within a 500-meter (m) swath centered on the river’s mainstem and its seven main tributaries (fig. 1). The most recent airborne data collection in 2009 acquired image data in four wavelength bands (blue, green, red, and near infrared) at a spatial resolution of 20 centimeters (cm). The image collection used the latest model of the Leica ADS40 airborne digital sensor (the SH52), which uses a single optic for all four bands and collects and stores band radiance in 12-bits. Davis (2012) reported on the performance of the SH52 sensor and on the processing steps required to produce the nearly flawless four-band image mosaic (sectioned into map tiles) for the river corridor. The final image mosaic has a total of only 3 km of surface defects in addition to some areas of cloud shadow because of persistent inclement weather during data collection. The 2009 four-band image mosaic is perhaps the best image dataset that exists for the entire Arizona part of the Colorado River. Some analyses of these image mosaics do not require the full 12-bit dynamic range or all four bands of the calibrated image database, in which atmospheric scattering (or haze) had not been removed from the four bands. To provide scientists and the general public with image products that are more useful for visual interpretation, the 12-bit image data were converted to 8-bit natural-color and color-infrared images, which also removed atmospheric scattering within each wavelength-band image. The conversion required an evaluation of the

  5. Improvement to the scanning electron microscope image adaptive Canny optimization colorization by pseudo-mapping.

    PubMed

    Lo, T Y; Sim, K S; Tso, C P; Nia, M E

    2014-01-01

    An improvement to the previously proposed adaptive Canny optimization technique for scanning electron microscope image colorization is reported. The additional feature, called the pseudo-mapping technique, temporarily maps grayscale markings to a set of pre-defined pseudo-colors as a means to instill color information for grayscale colors in the chrominance channels. This allows the presence of grayscale markings to be identified; hence, optimized colorization of grayscale colors is made possible. This additional feature enhances the flexibility of scanning electron microscope image colorization by providing a wider range of possible color enhancement. Furthermore, the nature of this technique also allows users to adjust, within limits, the luminance intensities of selected regions of the original image. © 2014 Wiley Periodicals, Inc.

  6. Multi-colored fibers by self-assembly of DNA, histone proteins, and cationic conjugated polymers.

    PubMed

    Wang, Fengyan; Liu, Zhang; Wang, Bing; Feng, Liheng; Liu, Libing; Lv, Fengting; Wang, Yilin; Wang, Shu

    2014-01-07

    The development of biomolecular fiber materials with imaging ability has become increasingly useful for biological applications. In this work, cationic conjugated polymers (CCPs) were used to construct inherently fluorescent microfibers with natural biological macromolecules (DNA and histone proteins) through the interfacial polyelectrolyte complexation (IPC) procedure. Isothermal titration microcalorimetry results show that the driving forces for fiber formation are electrostatic and hydrophobic interactions, as well as the release of counterions and bound water molecules. Color-encoded IPC fibers were also obtained based on the co-assembly of DNA, histone proteins, and blue-, green-, or red- (RGB-) emissive CCPs by tuning the fluorescence resonance energy transfer among the CCPs at a single excitation wavelength. The fibers could encapsulate GFP-coded Escherichia coli BL21, and the expression of GFP proteins was successfully regulated by the external environment of the fibers. These multi-colored fibers show great potential in biomedical applications, such as biosensors, the delivery and release of biological molecules, and tissue engineering. Copyright © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  7. HDR imaging and color constancy: two sides of the same coin?

    NASA Astrophysics Data System (ADS)

    McCann, John J.

    2011-01-01

    At first, we think that High Dynamic Range (HDR) imaging is a technique for improved recordings of scene radiances. Many of us think that human color constancy is a variation of a camera's automatic white balance algorithm. However, on closer inspection, glare limits the range of light we can detect in cameras and on retinas. All scene regions below middle gray are influenced, more or less, by the glare from the bright scene segments. Instead of accurate radiance reproduction, HDR imaging works well because it preserves the details in the scene's spatial contrast. Similarly, on closer inspection, human color constancy depends on spatial comparisons that synthesize appearances from all the scene segments. Can spatial image processing play a similar principal role in both HDR imaging and color constancy?

  8. [Image Feature Extraction and Discriminant Analysis of Xinjiang Uygur Medicine Based on Color Histogram].

    PubMed

    Hamit, Murat; Yun, Weikang; Yan, Chuanbo; Kutluk, Abdugheni; Fang, Yang; Alip, Elzat

    2015-06-01

    Image feature extraction is an important part of image processing and an important field of research and application of image processing technology. Uygur medicine is a branch of traditional Chinese medicine that is attracting increasing research attention, yet large amounts of Uygur medicine data have not been fully utilized. In this study, we extracted color histogram features from images of Xinjiang Uygur herbal and zooid medicines. First, we performed preprocessing, including image color enhancement, size normalization, and color space transformation. Then we extracted the color histogram features and analyzed them with statistical methods. Finally, we evaluated the classification ability of the features by Bayes discriminant analysis. Experimental results showed that high accuracy for Uygur medicine image classification was obtained by using the color histogram feature. This study should aid content-based medical image retrieval for Xinjiang Uygur medicine.
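    The pipeline described in this record (normalized color histograms as features, classified with a Bayes discriminant) can be sketched as below. The bin count, the shared-diagonal-covariance simplification, and the synthetic training data are illustrative assumptions, not the authors' exact setup.

```python
import numpy as np

def color_histogram(img, bins=8):
    """Concatenated per-channel normalized histograms of an HxWx3 uint8 image."""
    feats = []
    for c in range(3):
        h, _ = np.histogram(img[..., c], bins=bins, range=(0, 256))
        feats.append(h / h.sum())
    return np.concatenate(feats)

class BayesDiscriminant:
    """Minimal Gaussian discriminant with a shared diagonal covariance."""
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.means = np.array([X[y == k].mean(axis=0) for k in self.classes])
        self.var = X.var(axis=0) + 1e-6  # shared variance, regularized
        return self
    def predict(self, X):
        # squared Mahalanobis distance to each class mean (diagonal covariance)
        d = ((X[:, None, :] - self.means[None, :, :]) ** 2 / self.var).sum(axis=2)
        return self.classes[np.argmin(d, axis=1)]
```

    With 8 bins per channel each image becomes a 24-dimensional feature vector; classes whose color distributions differ (e.g. greenish herbal vs. brownish zooid samples) separate cleanly under this distance.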

  9. Face detection in color images using skin color, Laplacian of Gaussian, and Euler number

    NASA Astrophysics Data System (ADS)

    Saligrama Sundara Raman, Shylaja; Kannanedhi Narasimha Sastry, Balasubramanya Murthy; Subramanyam, Natarajan; Senkutuvan, Ramya; Srikanth, Radhika; John, Nikita; Rao, Prateek

    2010-02-01

    In this paper, a feature-based approach to face detection is proposed using an ensemble of algorithms. The method uses chrominance values and edge features to classify image regions as skin or non-skin. The edge detector used for this purpose is the Laplacian of Gaussian (LoG), which is found to be appropriate for images containing multiple faces with noise. Eight-connectivity analysis of these regions segregates them as probable face or non-face. The procedure is made more robust by identifying local features within the skin regions, including the number of holes, the percentage of skin, and the golden ratio. The proposed method has been tested on color face images of various races obtained from different sources, and its performance is found to be encouraging, as the color segmentation cleans up almost all the complex facial features. The result obtained has a calculated accuracy of 86.5% on a test set of 230 images.
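    Two of the cues named in this record, chrominance-based skin segmentation and the Euler number of candidate regions, can be sketched as follows. The YCbCr thresholds are commonly cited illustrative values, not the paper's; the Euler number is computed as connected components minus holes.

```python
import numpy as np
from scipy import ndimage

def skin_mask(img):
    """Chrominance-based skin mask in YCbCr space (thresholds are illustrative)."""
    r, g, b = (img[..., i].astype(float) for i in range(3))
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return (cb > 77) & (cb < 127) & (cr > 133) & (cr < 173)

def euler_number(mask):
    """Connected components minus holes; a face region with eye/mouth holes
    yields a smaller Euler number than a plain skin-colored blob."""
    _, n_comp = ndimage.label(mask, structure=np.ones((3, 3)))  # 8-connected
    filled = ndimage.binary_fill_holes(mask)
    _, n_holes = ndimage.label(filled & ~mask)                  # 4-connected holes
    return n_comp - n_holes
```

    A solid blob has Euler number 1, while a ring-shaped region (one hole) has 0, which is what lets hole counting distinguish face-like regions from uniform skin patches.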

  10. [Color processing of ultrasonographic images in extracorporeal lithotripsy].

    PubMed

    Lardennois, B; Ziade, A; Walter, K

    1991-02-01

    A number of technical difficulties are encountered in the ultrasonographic detection of renal stones which unfortunately limit its performance. The margin of error of firing in extracorporeal shock-wave lithotripsy (ESWL) must be reduced to a minimum. The role of ultrasonographic monitoring during lithotripsy is also essential: continuous control of the focusing of the shock-wave beam and assessment of the quality of fragmentation. The authors propose to improve ultrasonographic imaging in ESWL by means of intraoperative colour processing of the stone. Each shot must be directed to its target with an economy of vision, avoiding excessive fatigue. The principle of the technique consists of digitization of the ultrasound video images using a Macintosh Mac 2 computer. The Graphis Paint II program is interfaced directly with the Quick Capture card and recovers the images on its work surface in real time. The program is then able to attribute to each of the 256 shades of grey any one of the 16.6 million colours of the Macintosh universe, with specific intensity and saturation. During fragmentation, using the principle of a palette, the stone changes colour from green to red, indicating complete fragmentation. A Color Space card converts the digital image obtained into an analogue video source which is visualized on the monitor. It can be superimposed and/or juxtaposed with the source image by means of a multi-standard mixing table. Colour processing of ultrasonographic images in extracorporeal shock-wave lithotripsy allows better visualization of the stones and better follow-up of fragmentation, and allows the shock-wave treatment to be stopped earlier. It increases the stone-free rate at 6 months. This configuration should eventually be integrated into the ultrasound apparatus itself.

  11. On Adapting the Tensor Voting Framework to Robust Color Image Denoising

    NASA Astrophysics Data System (ADS)

    Moreno, Rodrigo; Garcia, Miguel Angel; Puig, Domenec; Julià, Carme

    This paper presents an adaptation of the tensor voting framework for color image denoising, while preserving edges. Tensors are used to encode the CIELAB color channels, the uniformity, and the edginess of image pixels. A specific voting process is proposed to propagate color from a pixel to its neighbors by considering the distance between pixels, the perceptual color difference (using an optimized version of CIEDE2000), a uniformity measurement, and the likelihood of the pixels being impulse noise. The original colors are corrected with those encoded by the tensors obtained after the voting process. Peak signal-to-noise ratios and visual inspection show that the proposed methodology performs better than state-of-the-art techniques.

  12. Giga-pixel lensfree holographic microscopy and tomography using color image sensors.

    PubMed

    Isikman, Serhan O; Greenbaum, Alon; Luo, Wei; Coskun, Ahmet F; Ozcan, Aydogan

    2012-01-01

    We report Giga-pixel lensfree holographic microscopy and tomography using color sensor-arrays such as CMOS imagers that exhibit Bayer color filter patterns. Without physically removing these color filters coated on the sensor chip, we synthesize pixel super-resolved lensfree holograms, which are then reconstructed to achieve ~350 nm lateral resolution, corresponding to a numerical aperture of ~0.8, across a field-of-view of ~20.5 mm(2). This constitutes a digital image with ~0.7 Billion effective pixels in both amplitude and phase channels (i.e., ~1.4 Giga-pixels total). Furthermore, by changing the illumination angle (e.g., ± 50°) and scanning a partially-coherent light source across two orthogonal axes, super-resolved images of the same specimen from different viewing angles are created, which are then digitally combined to synthesize tomographic images of the object. Using this dual-axis lensfree tomographic imager running on a color sensor-chip, we achieve a 3D spatial resolution of ~0.35 µm × 0.35 µm × ~2 µm, in x, y and z, respectively, creating an effective voxel size of ~0.03 µm(3) across a sample volume of ~5 mm(3), which is equivalent to >150 Billion voxels. We demonstrate the proof-of-concept of this lensfree optical tomographic microscopy platform on a color CMOS image sensor by creating tomograms of micro-particles as well as a wild-type C. elegans nematode.

  13. False Color Image of Volcano Sapas Mons

    NASA Image and Video Library

    1996-02-05

    This false-color image obtained by NASA Magellan spacecraft shows the volcano Sapas Mons, which is located in the broad equatorial rise called Atla Regio. http://photojournal.jpl.nasa.gov/catalog/PIA00203

  14. A novel color image compression algorithm using the human visual contrast sensitivity characteristics

    NASA Astrophysics Data System (ADS)

    Yao, Juncai; Liu, Guizhong

    2017-03-01

    In order to achieve a higher image compression ratio and improve the visual perception of the decompressed image, a novel color image compression scheme based on the contrast sensitivity characteristics of the human visual system (HVS) is proposed. In the proposed scheme, the image is first converted into the YCrCb color space and divided into sub-blocks. Afterwards, the discrete cosine transform is carried out for each sub-block, and three quantization matrices are built to quantize the frequency spectrum coefficients of the images by incorporating the contrast sensitivity characteristics of the HVS. The Huffman algorithm is used to encode the quantized data. The inverse process involves decompression and matching to reconstruct the decompressed color image. Simulations were carried out for two color images. The results show that the average structural similarity index measurement (SSIM) and peak signal to noise ratio (PSNR) at a comparable compression ratio could be increased by 2.78% and 5.48%, respectively, compared with joint photographic experts group (JPEG) compression. The results indicate that the proposed compression algorithm is feasible and effective in achieving a higher compression ratio while maintaining encoding and image quality, and can fully meet the needs of storage and transmission of color images in daily life.
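    The transform-and-quantize core of such a scheme (blockwise DCT, quantization matrix, dequantization, inverse DCT) can be sketched as below; entropy coding is omitted, and the frequency-weighted matrix `q_luma` is an illustrative CSF-inspired stand-in, not one of the paper's three HVS-derived matrices.

```python
import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):
    return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def idct2(block):
    return idct(idct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def quantize_channel(chan, q):
    """Blockwise DCT -> quantize with matrix q -> dequantize -> inverse DCT."""
    out = np.empty(chan.shape, dtype=float)
    for i in range(0, chan.shape[0], 8):
        for j in range(0, chan.shape[1], 8):
            block = chan[i:i + 8, j:j + 8].astype(float) - 128.0
            coeff = np.round(dct2(block) / q)       # lossy step
            out[i:i + 8, j:j + 8] = idct2(coeff * q) + 128.0
    return out

# Illustrative CSF-inspired quantization matrix: coarser steps at higher frequencies.
q_luma = 1.0 + 2.0 * np.add.outer(np.arange(8.0), np.arange(8.0))
```

    Applying this per channel after a YCrCb conversion, with a coarser matrix for the chroma channels, reproduces the structure the abstract describes.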

  15. a New Graduation Algorithm for Color Balance of Remote Sensing Image

    NASA Astrophysics Data System (ADS)

    Zhou, G.; Liu, X.; Yue, T.; Wang, Q.; Sha, H.; Huang, S.; Pan, Q.

    2018-05-01

    In order to expand the field of view and obtain more data and information in remote sensing research, images often need to be mosaicked together. However, mosaicked images often exhibit large color differences and visible gap lines. Based on a gradation algorithm using trigonometric functions, this paper proposes a new algorithm of Two Quarter-rounds Curves (TQC), and uses a Gaussian filter to address image color noise and the gap line. The experiments used Greenland imagery acquired in 1963 by the ARGON KH-5 satellite from the Declassified Intelligence Photography Project (DISP), together with Landsat imagery of the North Gulf, China. The experimental results show that the proposed method improves the results in two respects: on the one hand, remote sensing images with large color differences become more balanced; on the other hand, the mosaicked image achieves a smoother transition.
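    The general idea behind trigonometric gradation across a mosaic seam can be sketched as a cosine cross-fade over the overlap region; the specific TQC curve shape is not given in this record, so the quarter-period cosine weight below is an assumption standing in for it.

```python
import numpy as np

def trig_blend(left, right, overlap):
    """Cross-fade two grayscale strips over `overlap` columns with a cosine
    ramp (weight falls 1 -> 0 over a half period), smoothing color
    differences so no gap line appears at the join."""
    x = np.arange(overlap)
    w = 0.5 * (1.0 + np.cos(np.pi * x / (overlap - 1)))  # 1 -> 0
    mix = w * left[:, -overlap:] + (1.0 - w) * right[:, :overlap]
    return np.hstack([left[:, :-overlap].astype(float), mix,
                      right[:, overlap:].astype(float)])
```

    A Gaussian filter applied to the mosaic afterwards, as in the paper, would further suppress residual color noise along the transition.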

  16. Dual-color 3D superresolution microscopy by combined spectral-demixing and biplane imaging.

    PubMed

    Winterflood, Christian M; Platonova, Evgenia; Albrecht, David; Ewers, Helge

    2015-07-07

    Multicolor three-dimensional (3D) superresolution techniques allow important insight into the relative organization of cellular structures. While a number of innovative solutions have emerged, multicolor 3D techniques still face significant technical challenges. In this Letter we provide a straightforward approach to single-molecule localization microscopy imaging in three dimensions and two colors. We combine biplane imaging and spectral-demixing, which eliminates a number of problems, including color cross-talk, chromatic aberration effects, and problems with color registration. We present 3D dual-color images of nanoscopic structures in hippocampal neurons with a 3D compound resolution routinely achieved only in a single color. Copyright © 2015 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  17. Color image encryption using random transforms, phase retrieval, chaotic maps, and diffusion

    NASA Astrophysics Data System (ADS)

    Annaby, M. H.; Rushdi, M. A.; Nehary, E. A.

    2018-04-01

    The recent tremendous proliferation of color imaging applications has been accompanied by growing research in data encryption to secure color images against adversary attacks. While recent color image encryption techniques perform reasonably well, they still exhibit vulnerabilities and deficiencies in terms of statistical security measures due to image data redundancy and inherent weaknesses. This paper proposes two encryption algorithms that largely treat these deficiencies and boost the security strength through novel integration of the random fractional Fourier transforms, phase retrieval algorithms, as well as chaotic scrambling and diffusion. We show through detailed experiments and statistical analysis that the proposed enhancements significantly improve security measures and immunity to attacks.

  18. Color image analysis of contaminants and bacteria transport in porous media

    NASA Astrophysics Data System (ADS)

    Rashidi, Mehdi; Dehmeshki, Jamshid; Daemi, Mohammad F.; Cole, Larry; Dickenson, Eric

    1997-10-01

    Transport of contaminants and bacteria in aqueous heterogeneous saturated porous systems has been studied experimentally using a novel fluorescent microscopic imaging technique. The approach involves color visualization and quantification of bacterium and contaminant distributions within a transparent porous column. By introducing stained bacteria and an organic dye as a contaminant into the column and illuminating the porous regions with a planar laser sheet, contaminant and bacterial transport processes through the porous medium can be observed and measured microscopically. A computer-controlled color CCD camera is used to record the fluorescent images as a function of time. These images are recorded by a frame-accurate high-resolution VCR and are then analyzed using a color image analysis code written in our laboratories. The color images are digitized this way, and simultaneous concentration and velocity distributions of both contaminant and bacterium are evaluated as a function of time and pore characteristics. The approach provides a unique dynamic probe for observing these transport processes microscopically. These results are extremely valuable in in-situ bioremediation problems, since microscopic particle-contaminant-bacterium interactions are the key to understanding and optimizing these processes.

  19. FIR filters for hardware-based real-time multi-band image blending

    NASA Astrophysics Data System (ADS)

    Popovic, Vladan; Leblebici, Yusuf

    2015-02-01

    Creating panoramic images has become a popular feature in modern smartphones, tablets, and digital cameras. A user can create a 360-degree field-of-view photograph from only several images. The quality of the resulting image depends on the number of source images, their brightness, and the algorithm used for stitching and blending. One of the algorithms that provides excellent results in terms of background color uniformity and reduction of ghosting artifacts is multi-band blending. The algorithm relies on decomposing the image into multiple frequency bands using a dyadic filter bank; hence, the results are also highly dependent on the filter bank used. In this paper we analyze the performance of the FIR filters used for multi-band blending. We present a set of five filters that showed the best results in both the literature and our experiments. The set includes the Gaussian filter, biorthogonal wavelets, and custom-designed maximally flat and equiripple FIR filters. The presented results of the filter comparison are based on several no-reference metrics for image quality. We conclude that the 5/3 biorthogonal wavelet produces the best result on average, especially considering its short length. Furthermore, we propose a real-time FPGA implementation of the blending algorithm using a 2D non-separable systolic filtering scheme. Its pipelined architecture does not require hardware multipliers and is able to achieve very high operating frequencies. The implemented system is able to process 91 fps at 1080p (1920×1080) image resolution.
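    The band-splitting-and-blending idea can be sketched as below. Gaussian filtering stands in for the dyadic filter bank compared in the paper (the 5/3 wavelet and the other FIR candidates would slot into the same structure); the level count and sigma are arbitrary illustrative choices.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiband_blend(a, b, mask, levels=3, sigma=2.0):
    """Band-by-band blend of images a and b under soft mask `mask` (1 -> a).

    High-frequency bands are mixed with a sharp mask and low-frequency bands
    with a progressively smoothed one, which hides exposure differences
    without introducing ghosting at the seam."""
    la, lb = a.astype(float), b.astype(float)
    m = mask.astype(float)
    out = np.zeros(la.shape)
    for lev in range(levels):
        if lev < levels - 1:
            sa, sb = gaussian_filter(la, sigma), gaussian_filter(lb, sigma)
            band_a, band_b = la - sa, lb - sb  # detail (high-pass) band
            la, lb = sa, sb
        else:
            band_a, band_b = la, lb            # low-pass residual
        out += m * band_a + (1.0 - m) * band_b
        m = gaussian_filter(m, sigma)          # softer mask for coarser bands
    return out
```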

  20. Multi-frame image processing with panning cameras and moving subjects

    NASA Astrophysics Data System (ADS)

    Paolini, Aaron; Humphrey, John; Curt, Petersen; Kelmelis, Eric

    2014-06-01

    Imaging scenarios commonly involve erratic, unpredictable camera behavior or subjects that are prone to movement, complicating multi-frame image processing techniques. To address these issues, we developed three techniques that can be applied to multi-frame image processing algorithms in order to mitigate the adverse effects observed when cameras are panning or subjects within the scene are moving. We provide a detailed overview of the techniques and discuss the applicability of each to various movement types. In addition to this, we evaluated algorithm efficacy with demonstrated benefits using field test video, which has been processed using our commercially available surveillance product. Our results show that algorithm efficacy is significantly improved in common scenarios, expanding our software's operational scope. Our methods introduce little computational burden, enabling their use in real-time and low-power solutions, and are appropriate for long observation periods. Our test cases focus on imaging through turbulence, a common use case for multi-frame techniques. We present results of a field study designed to test the efficacy of these techniques under expanded use cases.

  1. Improved telescope focus using only two focus images

    NASA Astrophysics Data System (ADS)

    Barrick, Gregory; Vermeulen, Tom; Thomas, James

    2008-07-01

    In an effort to reduce the amount of time spent focusing the telescope and to improve the quality of the focus, a new procedure has been investigated and implemented at the Canada-France-Hawaii Telescope (CFHT). The new procedure is based on a paper by Tokovinin and Heathcote and requires only two out-of-focus images to determine the best focus for the telescope. Using only two images provides a great time savings over the five or more images required for a standard through-focus sequence. In addition, it has been found that this method is significantly less sensitive to seeing variations than the traditional through-focus procedure, so the quality of the resulting focus is better. Finally, the new procedure relies on a second moment calculation and so is computationally easier and more robust than methods using a FWHM calculation. The new method has been implemented for WIRCam for the past 18 months, for MegaPrime for the past year, and has recently been implemented for ESPaDOnS.
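    The second-moment measurement underlying the two-image method can be sketched as below. The width function is standard; the two-point interpolation assumes the two exposures straddle best focus with equal and opposite linear slopes, which is a simplification of the Tokovinin and Heathcote scheme, and background subtraction (needed on real data) is omitted.

```python
import numpy as np

def second_moment_width(img):
    """RMS spot radius from intensity-weighted second moments.

    For a defocused stellar image this width grows essentially linearly with
    distance from focus, which is what makes two exposures sufficient."""
    y, x = np.indices(img.shape)
    total = img.sum()
    cx, cy = (x * img).sum() / total, (y * img).sum() / total
    mxx = ((x - cx) ** 2 * img).sum() / total
    myy = ((y - cy) ** 2 * img).sum() / total
    return np.sqrt(mxx + myy)

def best_focus(z1, w1, z2, w2):
    """Focus position where the width would vanish, for two exposures taken
    on opposite sides of focus (equal-and-opposite linear slopes assumed)."""
    return (z1 * w2 + z2 * w1) / (w1 + w2)
```

    Being an integral over the whole spot, the second moment averages over seeing fluctuations far better than a FWHM fit, which is the robustness the record describes.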

  2. SWT voting-based color reduction for text detection in natural scene images

    NASA Astrophysics Data System (ADS)

    Ikica, Andrej; Peer, Peter

    2013-12-01

    In this article, we propose a novel stroke width transform (SWT) voting-based color reduction method for detecting text in natural scene images. Unlike other text detection approaches that mostly rely on either text structure or color, the proposed method combines both by supervising the text-oriented color reduction process with additional SWT information. SWT pixels mapped to color space vote in favor of the color they correspond to. Colors receiving a high SWT vote most likely belong to text areas and are blocked from being mean-shifted away. The literature does not explicitly address the SWT search direction issue; thus, we propose an adaptive sub-block method for determining the correct SWT direction. Both the SWT voting-based color reduction and the SWT direction determination methods are evaluated on binary (text/non-text) images obtained from a challenging Computer Vision Lab optical character recognition database. The SWT voting-based color reduction method outperforms the state-of-the-art text-oriented color reduction approach.

  3. A kind of color image segmentation algorithm based on super-pixel and PCNN

    NASA Astrophysics Data System (ADS)

    Xu, GuangZhu; Wang, YaWen; Zhang, Liu; Zhao, JingJing; Fu, YunXia; Lei, BangJun

    2018-04-01

    Image segmentation is a very important step in low-level visual computing. Although image segmentation has been studied for many years, many problems remain. The Pulse Coupled Neural Network (PCNN) has a biological background; when applied to image segmentation it can be viewed as a region-based method, but due to the dynamic properties of PCNN, many unconnected neurons pulse at the same time, so it is necessary to identify different regions for further processing. The existing PCNN image segmentation algorithm based on region growing was designed for grayscale images and cannot be used directly for color image segmentation. In addition, super-pixels better preserve image edges and, at the same time, reduce the influence of individual pixel differences on segmentation. Therefore, this paper improves the original region-growing PCNN algorithm on the basis of super-pixels. First, the color super-pixel image is transformed into a grayscale super-pixel image, which is used to seek seeds among the neurons that have not yet fired. Growing then stops or continues according to a comparison of the averages of each color channel over all pixels in the corresponding regions of the color super-pixel image. Experimental results show that the proposed algorithm for color image segmentation is fast and effective, and achieves a certain level of accuracy.

  4. MO-DE-202-01: Image-Guided Focused Ultrasound Surgery and Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farahani, K.

    At least three major trends in surgical intervention have emerged over the last decade: a move toward more minimally invasive (or non-invasive) approaches to the surgical target; the development of high-precision treatment delivery techniques; and the increasing role of multi-modality intraoperative imaging in support of such procedures. This symposium includes invited presentations on recent advances in each of these areas and the emerging role for medical physics research in the development and translation of high-precision interventional techniques. The four speakers are: Keyvan Farahani, “Image-guided focused ultrasound surgery and therapy”; Jeffrey H. Siewerdsen, “Advances in image registration and reconstruction for image-guided neurosurgery”; Tina Kapur, “Image-guided surgery and interventions in the advanced multimodality image-guided operating (AMIGO) suite”; Raj Shekhar, “Multimodality image-guided interventions: Multimodality for the rest of us”. Learning Objectives: Understand the principles and applications of HIFU in surgical ablation. Learn about recent advances in 3D–2D and 3D deformable image registration in support of surgical safety and precision. Learn about recent advances in model-based 3D image reconstruction in application to intraoperative 3D imaging. Understand the multi-modality imaging technologies and clinical applications investigated in the AMIGO suite. Understand the emerging need and techniques to implement multi-modality image guidance in surgical applications such as neurosurgery, orthopaedic surgery, vascular surgery, and interventional radiology. Research supported by the NIH and Siemens Healthcare. J. Siewerdsen: Grant Support - National Institutes of Health; Grant Support - Siemens Healthcare; Grant Support - Carestream Health; Advisory Board - Carestream Health; Licensing Agreement - Carestream Health; Licensing Agreement - Elekta Oncology. T. Kapur: P41EB015898. R. Shekhar, Funding: R42CA137886 and R41

  5. Blood flow estimation in gastroscopic true-color images

    NASA Astrophysics Data System (ADS)

    Jacoby, Raffael S.; Herpers, Rainer; Zwiebel, Franz M.; Englmeier, Karl-Hans

    1995-05-01

    The assessment of blood flow in the gastrointestinal mucosa might be an important factor for the diagnosis and treatment of several diseases such as ulcers, gastritis, colitis, or early cancer. The quantity of blood flow is roughly estimated by computing the spatial hemoglobin distribution in the mucosa. The presented method enables a practical realization by approximating the hemoglobin concentration based on a spectrophotometric analysis of endoscopic true-color images, which are recorded during routine examinations. A system model based on the reflectance spectroscopic law of Kubelka-Munk is derived which enables an estimation of the hemoglobin concentration by means of the color values of the images. Additionally, a transformation of the color values is developed in order to improve luminance independence. Applying this transformation and estimating the hemoglobin concentration for each pixel of interest, the hemoglobin distribution can be computed. The obtained results are mostly independent of luminance. An initial validation of the presented method is performed by a quantitative estimation of the reproducibility.
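    The Kubelka-Munk relation at the heart of such a model can be sketched as below: the ratio of absorption to scattering, K/S, follows directly from diffuse reflectance and scales with absorber concentration. The calibration constants linking K/S to absolute hemoglobin concentration, and the luminance-normalizing color transform, are omitted here.

```python
import numpy as np

def kubelka_munk_ks(reflectance):
    """K/S ratio from diffuse reflectance R in (0, 1]: K/S = (1 - R)^2 / (2 R).

    Under Kubelka-Munk theory K/S scales with the absorber concentration, so
    a per-pixel K/S map computed from calibrated color values serves as a
    relative hemoglobin-distribution estimate."""
    r = np.clip(np.asarray(reflectance, dtype=float), 1e-6, 1.0)
    return (1.0 - r) ** 2 / (2.0 * r)
```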

  6. False-color composite image of Prince Albert, Canada

    NASA Image and Video Library

    1994-04-11

    STS059-S-039 (11 April 1994) --- This is a false-color composite of Prince Albert, Canada, centered at 53.91 north latitude and 104.69 west longitude. This image was acquired by the Spaceborne Imaging Radar-C and X-Band Synthetic Aperture Radar (SIR-C/X-SAR) aboard the Space Shuttle Endeavour on its 20th orbit. The area is located 40 kilometers (25 miles) north and 30 kilometers (20 miles) east of the town of Prince Albert in the Saskatchewan province of Canada. The image covers the area east of the Candle Lake, between gravel surface Highways 120 and 106 and west of 106. The area in the middle of the image covers the entire Nipawin (Narrow Hills) provincial park. The look angle of the radar is 30 degrees and the size of the image is approximately 20 kilometers by 50 kilometers (12 by 30 miles). The image was produced by using only the L-Band. The three polarization channels HH, HV and VV are illustrated by red, green and blue respectively. The changes in the intensity of each color are related to various surface conditions such as variations in forest stands, frozen or thawed condition of the surface, disturbances (fire and deforestation), and areas of re-growth. Most of the dark areas in the image are the ice-covered lakes in the region. The dark area on the top right corner of the image is the White Gull Lake north of the intersection of Highway 120 and 913. The right middle part of the image shows Lake Ispuchaw and Lower Fishing Lake. The deforested areas are also shown by dark areas in the image. Since most of the logging practice at the Prince Albert area is around the major highways, the deforested areas can be easily detected as small geometrically shaped dark regions along the roads. At the time of the SIR-C/X-SAR overpass, a major part of the forest is either frozen or undergoing the spring thaw. The L-Band HH shows a high return in the jack pine forest. The reddish areas in the image are old jack pine forest, 12-17 meters (40-55 feet) in height and 60

  7. Applied learning-based color tone mapping for face recognition in video surveillance system

    NASA Astrophysics Data System (ADS)

    Yew, Chuu Tian; Suandi, Shahrel Azmin

    2012-04-01

    In this paper, we present an applied learning-based color tone mapping technique for video surveillance systems. This technique can be applied to both color and grayscale surveillance images. The basic idea is to learn the color or intensity statistics from a training dataset of photorealistic images of the candidates appearing in the surveillance images, and to remap the color or intensity of the input image so that its statistics match those of the training dataset. It is well known that differences in commercial surveillance camera models and in the signal-processing chipsets used by different manufacturers cause the color and intensity of images to differ from one another, creating additional challenges for face recognition in video surveillance systems. Using multi-class Support Vector Machines as the classifier on a publicly available video surveillance camera database, the SCface database, this approach is validated and compared against a holistic approach on grayscale images. The results show that this technique improves the color and intensity quality of video surveillance images for face recognition.
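
    A minimal sketch of the statistics-remapping step described above, assuming a simple global per-channel mean/variance transfer rather than the authors' exact learned mapping; all names are illustrative:

```python
import numpy as np

def match_color_statistics(src, ref_mean, ref_std):
    """Remap each channel of `src` so that its mean and standard deviation
    match reference statistics learned from a training dataset. A generic
    global transfer, not the paper's exact learned mapping."""
    src = src.astype(np.float64)
    out = np.empty_like(src)
    for c in range(src.shape[2]):
        mu, sigma = src[..., c].mean(), src[..., c].std()
        sigma = sigma if sigma > 1e-8 else 1.0  # guard against flat channels
        out[..., c] = (src[..., c] - mu) / sigma * ref_std[c] + ref_mean[c]
    return np.clip(out, 0.0, 255.0)
```

    In practice the reference statistics would be learned per candidate from the photorealistic training dataset.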

  8. Robust Multi-Frame Adaptive Optics Image Restoration Algorithm Using Maximum Likelihood Estimation with Poisson Statistics.

    PubMed

    Li, Dongming; Sun, Changming; Yang, Jinhua; Liu, Huan; Peng, Jiaqi; Zhang, Lijuan

    2017-04-06

    An adaptive optics (AO) system provides real-time compensation for atmospheric turbulence. However, an AO image is usually of poor contrast because of the nature of the imaging process, meaning that the image contains information coming from both out-of-focus and in-focus planes of the object, which also brings about a loss in quality. In this paper, we present a robust multi-frame adaptive optics image restoration algorithm via maximum likelihood estimation. Our proposed algorithm uses a maximum likelihood method with image regularization as the basic principle, and constructs the joint log likelihood function for multi-frame AO images based on a Poisson distribution model. To begin with, a frame selection method based on image variance is applied to the observed multi-frame AO images to select images with better quality to improve the convergence of a blind deconvolution algorithm. Then, by combining the imaging conditions and the AO system properties, a point spread function estimation model is built. Finally, we develop our iterative solutions for AO image restoration addressing the joint deconvolution issue. We conduct a number of experiments to evaluate the performances of our proposed algorithm. Experimental results show that our algorithm produces accurate AO image restoration results and outperforms the current state-of-the-art blind deconvolution methods.
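
    The paper's multi-frame, regularized estimator with PSF estimation is not reproduced here; the sketch below shows the classical single-frame Richardson-Lucy iteration, which is the maximum-likelihood estimator under Poisson statistics when the PSF is known (an assumption for this illustration):

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, n_iter=30):
    """Single-frame Richardson-Lucy deconvolution: iterative ML estimation
    under a Poisson noise model with a known, shift-invariant PSF."""
    psf = psf / psf.sum()
    psf_mirror = psf[::-1, ::-1]
    # Start from a flat estimate with the same total flux density.
    estimate = np.full_like(observed, observed.mean(), dtype=np.float64)
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode='same')
        ratio = observed / np.maximum(blurred, 1e-12)  # avoid divide-by-zero
        estimate *= fftconvolve(ratio, psf_mirror, mode='same')
    return estimate
```

    The paper extends this principle to a joint log-likelihood over multiple selected frames with image regularization.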

  9. Robust Multi-Frame Adaptive Optics Image Restoration Algorithm Using Maximum Likelihood Estimation with Poisson Statistics

    PubMed Central

    Li, Dongming; Sun, Changming; Yang, Jinhua; Liu, Huan; Peng, Jiaqi; Zhang, Lijuan

    2017-01-01

    An adaptive optics (AO) system provides real-time compensation for atmospheric turbulence. However, an AO image is usually of poor contrast because of the nature of the imaging process, meaning that the image contains information coming from both out-of-focus and in-focus planes of the object, which also brings about a loss in quality. In this paper, we present a robust multi-frame adaptive optics image restoration algorithm via maximum likelihood estimation. Our proposed algorithm uses a maximum likelihood method with image regularization as the basic principle, and constructs the joint log likelihood function for multi-frame AO images based on a Poisson distribution model. To begin with, a frame selection method based on image variance is applied to the observed multi-frame AO images to select images with better quality to improve the convergence of a blind deconvolution algorithm. Then, by combining the imaging conditions and the AO system properties, a point spread function estimation model is built. Finally, we develop our iterative solutions for AO image restoration addressing the joint deconvolution issue. We conduct a number of experiments to evaluate the performances of our proposed algorithm. Experimental results show that our algorithm produces accurate AO image restoration results and outperforms the current state-of-the-art blind deconvolution methods. PMID:28383503

  10. Superpixel-based segmentation of muscle fibers in multi-channel microscopy.

    PubMed

    Nguyen, Binh P; Heemskerk, Hans; So, Peter T C; Tucker-Kellogg, Lisa

    2016-12-05

    Confetti fluorescence and other multi-color genetic labelling strategies are useful for observing stem cell regeneration and for other problems of cell lineage tracing. One difficulty of such strategies is segmenting the cell boundaries, which is a very different problem from segmenting color images of the real world. This paper addresses these difficulties and presents a superpixel-based framework for segmentation of regenerated muscle fibers in mice. We propose to integrate an edge detector into a superpixel algorithm and customize the method for multi-channel images. The enhanced superpixel method outperforms the original and another advanced superpixel algorithm in terms of both boundary recall and under-segmentation error. Our framework was applied to cross-section and lateral-section images of regenerated muscle fibers from confetti-fluorescent mice. Compared with "ground-truth" segmentations, our framework yielded median Dice similarity coefficients of 0.92 and higher. Our segmentation framework is flexible and provides very good segmentations of multi-color muscle fibers. We anticipate our methods will be useful for segmenting a variety of tissues in confetti-fluorescent mice and in mice with similar multi-color labels.
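
    For reference, the Dice similarity coefficient reported above (median 0.92 and higher) is the standard overlap measure between two binary masks, computed as follows (a textbook definition, not code from the paper):

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice similarity coefficient 2|A∩B| / (|A|+|B|) between binary masks."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```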

  11. Detection and 3D reconstruction of traffic signs from multiple view color images

    NASA Astrophysics Data System (ADS)

    Soheilian, Bahman; Paparoditis, Nicolas; Vallet, Bruno

    2013-03-01

    3D reconstruction of traffic signs is of great interest in many applications such as image-based localization and navigation. To reflect reality, the reconstruction process should be both accurate and precise. Reaching such a valid reconstruction from calibrated multi-view images requires accurate and precise extraction of the signs in every individual view. This paper first presents an automatic pipeline for identifying and extracting the silhouettes of signs in every individual image. Then, a multi-view constrained 3D reconstruction algorithm provides an optimal 3D silhouette for the detected signs. The first step, called detection, applies color-based segmentation to generate regions of interest (ROIs) in the image. The shape of every ROI is estimated by fitting an ellipse, a quadrilateral, or a triangle to its edge points. An ROI is rejected if none of the three shapes can be fitted sufficiently precisely. Thanks to the estimated shape, the remaining candidate ROIs are rectified to remove perspective distortion and then matched with a set of reference signs using textural information. Poor matches are rejected, and the types of the remaining ones are identified. The output of the detection algorithm is a set of identified road signs whose silhouettes in the image plane are represented by an ellipse, a quadrilateral, or a triangle. The 3D reconstruction process is based on hypothesis generation and verification. Hypotheses are generated by a stereo matching approach that takes into account epipolar geometry as well as similarity of categories. The hypotheses that plausibly correspond to the same 3D road sign are identified and grouped during this process. Finally, all the hypotheses of the same group are merged to generate a unique 3D road sign by a multi-view algorithm that integrates a priori knowledge about the 3D shapes of road signs as constraints. The algorithm is assessed on real and synthetic images and reached an average accuracy of 3.5 cm for

  12. Fuzzy Logic-Based Filter for Removing Additive and Impulsive Noise from Color Images

    NASA Astrophysics Data System (ADS)

    Zhu, Yuhong; Li, Hongyang; Jiang, Huageng

    2017-12-01

    This paper presents an efficient filter method based on fuzzy logic for adaptively removing additive and impulsive noise from color images. The proposed filter comprises two parts: noise detection and noise-removal filtering. In the detection part, the fuzzy peer group concept is applied to determine what type of noise affects each pixel of the corrupted image. In the filter part, impulse noise is removed by the vector median filter in the CIELAB color space, and an optimal fuzzy filter is introduced to reduce the Gaussian noise; together they can remove mixed Gaussian-impulse noise from color images. Experimental results on several color images prove the efficacy of the proposed fuzzy filter.
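
    The vector median filter used in the noise-removal stage can be sketched as below, here in plain RGB with an L2 norm for brevity (the paper applies it in CIELAB and only to pixels flagged as impulsive by the fuzzy detector):

```python
import numpy as np

def vector_median_filter(img, k=3):
    """Vector median filter for color images: each pixel is replaced by the
    window vector minimizing the sum of distances to all other vectors in
    its k-by-k neighborhood."""
    pad = k // 2
    padded = np.pad(img.astype(np.float64),
                    ((pad, pad), (pad, pad), (0, 0)), mode='edge')
    out = np.empty_like(img, dtype=np.float64)
    h, w = img.shape[:2]
    for i in range(h):
        for j in range(w):
            win = padded[i:i + k, j:j + k].reshape(-1, img.shape[2])
            # Sum of pairwise L2 distances from each vector to all others.
            d = np.linalg.norm(win[:, None, :] - win[None, :, :], axis=2).sum(axis=1)
            out[i, j] = win[np.argmin(d)]
    return out
```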

  13. Experimental single-chip color HDTV image acquisition system with 8M-pixel CMOS image sensor

    NASA Astrophysics Data System (ADS)

    Shimamoto, Hiroshi; Yamashita, Takayuki; Funatsu, Ryohei; Mitani, Kohji; Nojiri, Yuji

    2006-02-01

    We have developed an experimental single-chip color HDTV image acquisition system using an 8M-pixel CMOS image sensor. The sensor has 3840 × 2160 effective pixels and is progressively scanned at 60 frames per second. We describe the color filter array and interpolation method used to improve image quality with a high-pixel-count single-chip sensor. We also describe an experimental image acquisition system used to measure spatial frequency characteristics in the horizontal direction. The results indicate good prospects for achieving a high-quality single-chip HDTV camera that reduces pseudo signals and maintains high spatial frequency characteristics within the HDTV frequency band.

  14. Colorization and Automated Segmentation of Human T2 MR Brain Images for Characterization of Soft Tissues

    PubMed Central

    Attique, Muhammad; Gilanie, Ghulam; Hafeez-Ullah; Mehmood, Malik S.; Naweed, Muhammad S.; Ikram, Masroor; Kamran, Javed A.; Vitkin, Alex

    2012-01-01

    Characterization of tissues such as brain using magnetic resonance (MR) images, and colorization of gray-scale images, have been reported in the literature along with their advantages and drawbacks. Here, we present two independent methods: (i) a novel colorization method to underscore the variability in brain MR images, indicative of the underlying physical density of biological tissue, and (ii) a segmentation method (both hard and soft segmentation) to characterize gray brain MR images. The segmented images are then transformed into color using the above-mentioned colorization method, yielding promising results for manual tracing. Our color transformation incorporates voxel classification by matching the luminance of voxels in the source MR image to that of a provided color image, measuring the distance between them. The segmentation method is based on single-phase clustering for 2D and 3D image segmentation with a new automatic centroid selection method, which divides the image into three distinct regions (gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF)) using prior anatomical knowledge. Results have been successfully validated on human T2-weighted (T2) brain MR images. The proposed method can potentially be applied to gray-scale images from other imaging modalities, bringing out additional diagnostic tissue information contained in the colorized image processing approach as described. PMID:22479421

  15. Preparing Colorful Astronomical Images II

    NASA Astrophysics Data System (ADS)

    Levay, Z. G.; Frattare, L. M.

    2002-12-01

    We present additional techniques for using mainstream graphics software (Adobe Photoshop and Illustrator) to produce composite color images and illustrations from astronomical data. These techniques have been used on numerous images from the Hubble Space Telescope to produce photographic, print and web-based products for news, education and public presentation as well as illustrations for technical publication. We expand on a previous paper to present more detail and additional techniques, taking advantage of new or improved features available in the latest software versions. While Photoshop is not intended for quantitative analysis of full dynamic range data (as are IRAF or IDL, for example), we have had much success applying Photoshop's numerous, versatile tools to work with scaled images, masks, text and graphics in multiple semi-transparent layers and channels.

  16. Colorization-Based RGB-White Color Interpolation using Color Filter Array with Randomly Sampled Pattern

    PubMed Central

    Oh, Paul; Lee, Sukho; Kang, Moon Gi

    2017-01-01

    Recently, several RGB-White (RGBW) color filter arrays (CFAs) have been proposed, which have extra white (W) pixels in the filter array that are highly sensitive. Due to the high sensitivity, the W pixels have better SNR (Signal to Noise Ratio) characteristics than the other color pixels in the filter array, especially in low-light conditions. However, most RGBW CFAs are designed so that the acquired RGBW pattern image can be converted into the conventional Bayer pattern image, which is then converted into the final color image by conventional demosaicing methods, i.e., color interpolation techniques. In this paper, we propose a new RGBW color filter array based on a totally different color interpolation technique, the colorization algorithm. The colorization algorithm was initially proposed for colorizing a gray image into a color image using a small number of color seeds. Here, we adopt this algorithm as a color interpolation technique, so that the RGBW color filter array can be designed with a very large number of W pixels to make the most of the highly sensitive characteristics of the W channel. The resulting RGBW color filter array has a pattern with a large proportion of W pixels, while the small number of RGB pixels is randomly distributed over the array. The colorization algorithm makes it possible to reconstruct the colors from such a small number of RGB values. Due to the large proportion of W pixels, the reconstructed color image has a high SNR value, notably higher than those of conventional CFAs in low-light conditions. Experimental results show that much important information that is not perceived in color images reconstructed with conventional CFAs is perceived in the images reconstructed with the proposed method. PMID:28657602

  17. Colorization-Based RGB-White Color Interpolation using Color Filter Array with Randomly Sampled Pattern.

    PubMed

    Oh, Paul; Lee, Sukho; Kang, Moon Gi

    2017-06-28

    Recently, several RGB-White (RGBW) color filter arrays (CFAs) have been proposed, which have extra white (W) pixels in the filter array that are highly sensitive. Due to the high sensitivity, the W pixels have better SNR (Signal to Noise Ratio) characteristics than the other color pixels in the filter array, especially in low-light conditions. However, most RGBW CFAs are designed so that the acquired RGBW pattern image can be converted into the conventional Bayer pattern image, which is then converted into the final color image by conventional demosaicing methods, i.e., color interpolation techniques. In this paper, we propose a new RGBW color filter array based on a totally different color interpolation technique, the colorization algorithm. The colorization algorithm was initially proposed for colorizing a gray image into a color image using a small number of color seeds. Here, we adopt this algorithm as a color interpolation technique, so that the RGBW color filter array can be designed with a very large number of W pixels to make the most of the highly sensitive characteristics of the W channel. The resulting RGBW color filter array has a pattern with a large proportion of W pixels, while the small number of RGB pixels is randomly distributed over the array. The colorization algorithm makes it possible to reconstruct the colors from such a small number of RGB values. Due to the large proportion of W pixels, the reconstructed color image has a high SNR value, notably higher than those of conventional CFAs in low-light conditions. Experimental results show that much important information that is not perceived in color images reconstructed with conventional CFAs is perceived in the images reconstructed with the proposed method.

  18. True Color Image Analysis For Determination Of Bone Growth In Fluorochromic Biopsies

    NASA Astrophysics Data System (ADS)

    Madachy, Raymond J.; Chotivichit, Lee; Huang, H. K.; Johnson, Eric E.

    1989-05-01

    A true-color imaging technique has been developed for the analysis of microscopic fluorochromic bone biopsy images to quantify new bone growth. The technique searches for specified colors in a medical image to quantify areas of interest. Based on a user-supplied training set, a multispectral classification of pixel values is performed and used to segment the image. Good results were obtained when compared to manual tracings of new bone growth performed by an orthopedic surgeon. At a 95% confidence level, the hypothesis that there is no difference between the two methods can be accepted. Work is in progress to test bone biopsies with different colored stains and to further optimize the analysis process using three-dimensional spectral ordering techniques.
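
    A minimal sketch of training-set-based multispectral pixel classification, here simplified to a nearest-class-mean rule; the paper's exact classifier may differ, and all names are illustrative:

```python
import numpy as np

def train_class_means(samples):
    """samples: dict mapping class label -> (N, 3) array of training RGB pixels."""
    return {label: pixels.mean(axis=0) for label, pixels in samples.items()}

def classify_pixels(img, means):
    """Assign each pixel of an HxWx3 image to the class with the nearest mean."""
    labels = list(means)
    centers = np.stack([means[l] for l in labels])          # (K, 3)
    flat = img.reshape(-1, 3).astype(np.float64)            # (P, 3)
    d = np.linalg.norm(flat[:, None, :] - centers[None], axis=2)  # (P, K)
    idx = d.argmin(axis=1)
    return np.array(labels, dtype=object)[idx].reshape(img.shape[:2])
```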

  19. A new efficient method for color image compression based on visual attention mechanism

    NASA Astrophysics Data System (ADS)

    Shao, Xiaoguang; Gao, Kun; Lv, Lily; Ni, Guoqiang

    2010-11-01

    One of the key procedures in color image compression is to extract regions of interest (ROIs) and apply different compression ratios to them. A new, efficient non-uniform color image compression algorithm is proposed in this paper that uses a biologically motivated selective attention model for effective extraction of ROIs in natural images. Once the ROIs have been extracted and labeled in the image, the subsequent work is to encode the ROIs and the other regions with different compression ratios via the popular JPEG algorithm. Furthermore, experimental results and the quantitative and qualitative analysis in the paper show excellent performance compared with traditional color image compression approaches.

  20. Extremely simple holographic projection of color images

    NASA Astrophysics Data System (ADS)

    Makowski, Michal; Ducin, Izabela; Kakarenko, Karol; Suszek, Jaroslaw; Kolodziejczyk, Andrzej; Sypek, Maciej

    2012-03-01

    A very simple scheme of holographic projection is presented, with experimental results showing good-quality image projection without any imaging lens. This technique can be regarded as an alternative to classic projection methods. It is based on the reconstruction of real images from three phase-only iterated Fourier holograms. The illumination is performed with three laser beams of primary colors. A divergent wavefront geometry is used to achieve an increased throw angle of the projection compared to plane-wave illumination. Light fibers are used for light guidance in order to keep the setup as simple as possible and to provide point-like sources of high-quality divergent wavefronts at optimized positions relative to the light modulator. Absorbing spectral filters are implemented to multiplex the three holograms on a single phase-only spatial light modulator. Hence color mixing occurs without any time-division methods, which would cause rainbow effects and color flicker. The zero diffraction order under divergent illumination is practically invisible, and the speckle field is effectively suppressed with phase optimization and time-averaging techniques. The main advantages of the proposed concept are: a very simple and highly miniaturizable configuration; no imaging lens; a single LCoS (Liquid Crystal on Silicon) modulator; strong resistance to imperfections and obstructions of the spatial light modulator such as dead pixels, dust, mud and fingerprints; and simple calculations based on the Fast Fourier Transform (FFT), easily processed in real time on a GPU (Graphics Processing Unit).
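
    Phase-iterated Fourier holograms of the kind mentioned above are commonly computed with a Gerchberg-Saxton-style loop; the plane-wave sketch below is illustrative only and omits the paper's divergent-illumination geometry and three-color multiplexing:

```python
import numpy as np

def iterate_phase_hologram(target_amplitude, n_iter=50, seed=0):
    """Gerchberg-Saxton-style iterated Fourier loop: find a phase-only
    hologram whose Fourier reconstruction approximates `target_amplitude`."""
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0.0, 2.0 * np.pi, target_amplitude.shape)
    field = target_amplitude * np.exp(1j * phase)
    holo = None
    for _ in range(n_iter):
        holo = np.exp(1j * np.angle(np.fft.ifft2(field)))        # phase-only constraint
        recon = np.fft.fft2(holo)                                # simulated reconstruction
        field = target_amplitude * np.exp(1j * np.angle(recon))  # impose target amplitude
    return np.angle(holo)
```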

  1. Natural-Color Image Mosaics of Afghanistan: Digital Databases and Maps

    USGS Publications Warehouse

    Davis, Philip A.; Hare, Trent M.

    2007-01-01

    Explanation: The 50 tiled images in this dataset are natural-color renditions of the calibrated six-band Landsat mosaics created from Landsat Enhanced Thematic Mapper Plus (ETM+) data. Natural-color images depict the surface as seen by the human eye. The calibrated Landsat ETM+ maps produced by Davis (2006) represent relative reflectance and need to be grounded with ground-reflectance data, but the difficulties of performing fieldwork in Afghanistan precluded ground-reflectance surveys. For natural-color calibration, which involves only the blue, green, and red color bands of Landsat, we could use ground photographs, Munsell color readings of ground surfaces, or another image base that accurately depicts the surface color. Each map quadrangle is 1° of latitude by 1° of longitude. The numbers assigned to each map quadrangle refer to the latitude and longitude coordinates of the lower left corner of the quadrangle. For example, quadrangle Q2960 has its lower left corner at lat 29° N., long 60° E. Each quadrangle overlaps adjacent quadrangles by 100 pixels (2.85 km). Only the 14.25-m-spatial-resolution UTM and 28.5-m-spatial-resolution WGS84 geographic GeoTIFF datasets are available in this report, to decrease the amount of space needed. The images are (three-band, eight-bit) GeoTIFFs with embedded georeferencing. As such, most software will not require the associated world files. An index of all available images in geographic coordinates is displayed here: Index_Geo_DD.pdf. The country of Afghanistan spans three UTM zones (41-43). Maps are stored as GeoTIFFs in their respective UTM zone projection. Indexes of all available topographic map sheets in their respective UTM zones are displayed here: Index_UTM_Z41.pdf, Index_UTM_Z42.pdf, Index_UTM_Z43.pdf. You will need Adobe Reader to view the PDF files.

  2. Quantum Color Image Encryption Algorithm Based on A Hyper-Chaotic System and Quantum Fourier Transform

    NASA Astrophysics Data System (ADS)

    Tan, Ru-Chao; Lei, Tong; Zhao, Qing-Min; Gong, Li-Hua; Zhou, Zhi-Hong

    2016-12-01

    To improve the slow processing speed of classical image encryption algorithms and to enhance the security of private color images, a new quantum color image encryption algorithm based on a hyper-chaotic system is proposed, in which the sequences generated by Chen's hyper-chaotic system are used to scramble and diffuse the three components of the original color image. Subsequently, the quantum Fourier transform is exploited to fulfill the encryption. Numerical simulations show that the presented quantum color image encryption algorithm possesses a large key space to resist illegal attacks, sensitive dependence on the initial keys, a uniform distribution of gray values in the encrypted image, and weak correlation between adjacent pixels in the cipher-image.
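
    As a classical (non-quantum) illustration of chaos-based image ciphering, the sketch below XORs pixel data with a keystream derived from the logistic map; this is a simple stand-in for the Chen hyper-chaotic system and quantum Fourier transform of the paper, and all parameters are illustrative:

```python
import numpy as np

def chaotic_keystream(n, x0=0.7, r=3.99):
    """Byte keystream from the logistic map x <- r*x*(1-x); the initial
    condition x0 acts as the secret key."""
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        xs[i] = x
    return (xs * 256).astype(np.uint8)

def xor_cipher(img, x0=0.7):
    """XOR a uint8 image with the chaotic keystream; applying the same
    function twice with the same key recovers the original image."""
    flat = img.reshape(-1)
    ks = chaotic_keystream(flat.size, x0)
    return (flat ^ ks).reshape(img.shape)
```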

  3. The Airborne Ocean Color Imager - System description and image processing

    NASA Technical Reports Server (NTRS)

    Wrigley, Robert C.; Slye, Robert E.; Klooster, Steven A.; Freedman, Richard S.; Carle, Mark; Mcgregor, Lloyd F.

    1992-01-01

    The Airborne Ocean Color Imager was developed as an aircraft instrument to simulate the spectral and radiometric characteristics of the next generation of satellite ocean color instrumentation. Data processing programs have been developed as extensions of the Coastal Zone Color Scanner algorithms for atmospheric correction and bio-optical output products. The latter include several bio-optical algorithms for estimating phytoplankton pigment concentration, as well as one for the diffuse attenuation coefficient of the water. Additional programs have been developed to geolocate these products and remap them into a georeferenced data base, using data from the aircraft's inertial navigation system. Examples illustrate the sequential data products generated by the processing system, using data from flightlines near the mouth of the Mississippi River: from raw data to atmospherically corrected data, to bio-optical data, to geolocated data, and, finally, to georeferenced data.

  4. Imaging tristimulus colorimeter for the evaluation of color in printed textiles

    NASA Astrophysics Data System (ADS)

    Hunt, Martin A.; Goddard, James S., Jr.; Hylton, Kathy W.; Karnowski, Thomas P.; Richards, Roger K.; Simpson, Marc L.; Tobin, Kenneth W., Jr.; Treece, Dale A.

    1999-03-01

    The high-speed production of textiles with complicated printed patterns presents a difficult problem for a colorimetric measurement system. Accurate assessment of product quality requires a repeatable measurement using a standard color space, such as CIELAB, and the use of a perceptually based color-difference formula, e.g., the ΔE(CMC) formula. Image-based color sensors used for on-line measurement are not colorimetric by nature and require a non-linear transformation of the component colors based on the spectral properties of the incident illumination, the imaging sensor, and the actual textile color. This research and development effort describes a benchtop, proof-of-principle system that implements a projection onto convex sets (POCS) algorithm for mapping component color measurements to standard tristimulus values and incorporates structural and color-based segmentation for improved precision and accuracy. The POCS algorithm consists of determining the closed convex sets that describe the constraints on the reconstruction of the true tristimulus values from the measured imperfect values. We show that, using a simulated D65 standard illuminant, commercial filters and a CCD camera, accurate (below perceptibility limits) per-region ΔE(CMC) values can be measured on real textile samples.
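
    The ΔE(CMC) formula additionally weights lightness, chroma, and hue differences; as a simpler illustration of a CIELAB color-difference computation, the CIE76 ΔE*ab metric is just the Euclidean distance between two Lab coordinates:

```python
import numpy as np

def delta_e_76(lab1, lab2):
    """CIE76 color difference: Euclidean distance in CIELAB. The paper uses
    the more elaborate, perceptually weighted ΔE(CMC) formula; this simpler
    metric illustrates the same idea of a perceptual color distance."""
    lab1 = np.asarray(lab1, dtype=np.float64)
    lab2 = np.asarray(lab2, dtype=np.float64)
    return np.sqrt(((lab1 - lab2) ** 2).sum(axis=-1))
```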

  5. Scannerless loss modulated flash color range imaging

    DOEpatents

    Sandusky, John V [Albuquerque, NM; Pitts, Todd Alan [Rio Rancho, NM

    2008-09-02

    Scannerless loss modulated flash color range imaging methods and apparatus are disclosed for producing three dimensional (3D) images of a target within a scene. Apparatus and methods according to the present invention comprise a light source providing at least three wavelengths (passbands) of illumination that are each loss modulated, phase delayed and simultaneously directed to illuminate the target. Phase delayed light backscattered from the target is spectrally filtered, demodulated and imaged by a planar detector array. Images of the intensity distributions for the selected wavelengths are obtained under modulated and unmodulated (dc) illumination of the target, and the information contained in the images combined to produce a 3D image of the target.

  6. Scannerless loss modulated flash color range imaging

    DOEpatents

    Sandusky, John V [Albuquerque, NM; Pitts, Todd Alan [Rio Rancho, NM

    2009-02-24

    Scannerless loss modulated flash color range imaging methods and apparatus are disclosed for producing three dimensional (3D) images of a target within a scene. Apparatus and methods according to the present invention comprise a light source providing at least three wavelengths (passbands) of illumination that are each loss modulated, phase delayed and simultaneously directed to illuminate the target. Phase delayed light backscattered from the target is spectrally filtered, demodulated and imaged by a planar detector array. Images of the intensity distributions for the selected wavelengths are obtained under modulated and unmodulated (dc) illumination of the target, and the information contained in the images combined to produce a 3D image of the target.

  7. Progressive transmission of pseudo-color images. Appendix 1: Item 4. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Hadenfeldt, Andrew C.

    1991-01-01

    The transmission of digital images can require considerable channel bandwidth. The cost of obtaining such a channel can be prohibitive, or the channel might simply not be available. In this case, progressive transmission (PT) can be useful. PT presents the user with a coarse initial image approximation and then proceeds to refine it. In this way, the user tends to receive information about the content of the image sooner than with a sequential transmission method. PT finds application in image database browsing, teleconferencing, medical and other applications. A PT scheme is developed for use with a particular type of image data, the pseudo-color or color-mapped image. Such images consist of a table of colors called a colormap, plus a 2-D array of index values indicating which colormap entry is to be used to display a given pixel. This type of image presents some unique problems for a PT coder, and techniques for overcoming these problems are developed. A computer simulation of the color-mapped PT scheme is developed to evaluate its performance. Results of simulations using several test images are presented.
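
    A toy illustration of the coarse-to-fine idea behind progressive transmission: successively finer block-mean approximations of an image, ignoring the colormap-specific coding that the thesis actually addresses:

```python
import numpy as np

def progressive_stages(img, levels=3):
    """Generate coarse-to-fine approximations: stage k replaces 2^k x 2^k
    blocks by their mean, so early stages give a rough preview and later
    stages refine it (the last stage is the full-resolution image)."""
    h, w = img.shape[:2]
    stages = []
    for lev in range(levels, -1, -1):
        b = 2 ** lev
        crop = img[:h - h % b, :w - w % b].astype(np.float64)
        small = crop.reshape(h // b, b, w // b, b).mean(axis=(1, 3))
        stages.append(np.repeat(np.repeat(small, b, axis=0), b, axis=1))
    return stages
```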

  8. A new Watermarking System based on Discrete Cosine Transform (DCT) in color biometric images.

    PubMed

    Dogan, Sengul; Tuncer, Turker; Avci, Engin; Gulten, Arif

    2012-08-01

    This paper recommends a biometric color image hiding approach, a watermarking system based on the Discrete Cosine Transform (DCT), which is used to protect the security and integrity of transmitted biometric color images. Watermarking is a very important information-hiding technique (for audio, video, color images, and gray images). It has been commonly applied to digital objects alongside developing technology in the last few years. One of the common methods used for hiding information in image files is the DCT method, which operates in the frequency domain. In this study, DCT methods are used to embed watermark data into face images without corrupting their features.
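
    A minimal sketch of frequency-domain watermarking in the spirit described above: embed one bit by forcing the sign of a mid-frequency DCT coefficient of an 8x8 block. The coefficient position and strength are illustrative choices, not the paper's exact scheme:

```python
import numpy as np
from scipy.fft import dctn, idctn

def embed_bit(block, bit, strength=10.0, pos=(3, 4)):
    """Embed one watermark bit in a mid-frequency DCT coefficient of an
    8x8 image block by forcing the coefficient's sign."""
    c = dctn(block.astype(np.float64), norm='ortho')
    c[pos] = strength if bit else -strength
    return idctn(c, norm='ortho')

def extract_bit(block, pos=(3, 4)):
    """Recover the embedded bit from the coefficient's sign."""
    return int(dctn(block.astype(np.float64), norm='ortho')[pos] > 0)
```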

  9. A robust color image watermarking algorithm against rotation attacks

    NASA Astrophysics Data System (ADS)

    Han, Shao-cheng; Yang, Jin-feng; Wang, Rui; Jia, Gui-min

    2018-01-01

    A robust digital watermarking algorithm is proposed based on quaternion wavelet transform (QWT) and discrete cosine transform (DCT) for copyright protection of color images. The luminance component Y of a host color image in YIQ space is decomposed by QWT, and then the coefficients of four low-frequency subbands are transformed by DCT. An original binary watermark scrambled by Arnold map and iterated sine chaotic system is embedded into the mid-frequency DCT coefficients of the subbands. In order to improve the performance of the proposed algorithm against rotation attacks, a rotation detection scheme is implemented before watermark extracting. The experimental results demonstrate that the proposed watermarking scheme shows strong robustness not only against common image processing attacks but also against arbitrary rotation attacks.

  10. Quantitative Evaluation of Surface Color of Tomato Fruits Cultivated in Remote Farm Using Digital Camera Images

    NASA Astrophysics Data System (ADS)

    Hashimoto, Atsushi; Suehara, Ken-Ichiro; Kameoka, Takaharu

    To measure quantitative surface color information of agricultural products together with ambient information during cultivation, a color calibration method for digital camera images and a remote color-imaging monitoring system using the Web were developed. Single-lens reflex and web digital cameras were used for image acquisition. Tomato images through the post-ripening process were taken by digital camera both in the standard image acquisition system and under field conditions from morning to evening. Several kinds of images were acquired with a standard RGB color chart set up just behind the tomato fruit on a black matte, and a color calibration was carried out. The influence of sunlight could be experimentally eliminated, and the calibrated color information consistently agreed with the standard values acquired in the system through the post-ripening process. Furthermore, the surface color change of tomatoes on the tree in a greenhouse was remotely monitored during maturation using digital cameras equipped with the Field Server. The acquired digital color images were sent from the Farm Station to the BIFE Laboratory of Mie University via VPN. The time behavior of the tomato surface color change during the maturing process could be measured using color parameters calculated from the obtained and calibrated color images, along with the ambient atmospheric record. This study is an important step in developing surface color analysis for simple and rapid evaluation of crop vigor in the field, and in constructing an ambient, networked remote monitoring system for food security, precision agriculture, and agricultural research.
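
    One common way to implement chart-based color calibration, sketched below, is to fit an affine correction matrix from the camera's measured patch colors to their known reference values by least squares; this is a generic assumption for illustration, not necessarily the paper's exact procedure:

```python
import numpy as np

def fit_color_correction(measured, reference):
    """Least-squares fit of a 4x3 affine matrix mapping measured chart-patch
    RGB values (N, 3) to their known reference RGB values (N, 3)."""
    M = np.hstack([measured.astype(np.float64), np.ones((len(measured), 1))])  # (N, 4)
    coef, *_ = np.linalg.lstsq(M, reference.astype(np.float64), rcond=None)   # (4, 3)
    return coef

def apply_color_correction(img, coef):
    """Apply the fitted affine correction to every pixel of an HxWx3 image."""
    flat = img.reshape(-1, 3).astype(np.float64)
    flat = np.hstack([flat, np.ones((len(flat), 1))]) @ coef
    return flat.reshape(img.shape)
```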

  11. Color management systems: methods and technologies for increased image quality

    NASA Astrophysics Data System (ADS)

    Caretti, Maria

    1997-02-01

    All the steps in the imaging chain -- from handling the originals in prepress to outputting them on any device -- have to be well calibrated and adjusted to each other in order to reproduce color images in a desktop environment as accurately as possible with respect to the original. Today most steps in prepress production are digital, so it is realistic to believe that color reproduction can be well controlled, thanks to recent years' development of fast, cost-effective scanners, digital sources and, not least, digital proofing devices. Well-defined tools and methods to control this imaging flow are likely to yield large cost and time savings as well as increased overall image quality. Until now, there has been a lack of good, reliable, easy-to-use systems (e.g. hardware, software, documentation, training and support) of the kind that would make color management accessible to the large group of users of graphic arts production systems. This paper provides an overview of the existing solutions for managing color in a digital prepress environment. Their benefits and limitations are discussed, as well as how they affect the production workflow and organization. The difference between a color-controlled environment and one that is not is explained.

  12. Color multiplexing method to capture front and side images with a capsule endoscope.

    PubMed

    Tseng, Yung-Chieh; Hsu, Hsun-Ching; Han, Pin; Tsai, Cheng-Mu

    2015-10-01

    This paper proposes a capsule endoscope (CE), based on color multiplexing, that simultaneously records front and side images. Only one lens, associated with an X-cube prism, is employed to capture the front and side view profiles in the CE. Three color filters and polarizers are placed on three sides of the X-cube prism. When objects are located at one of the X-cube's three sides, front and side view profiles in different colors are captured through the proposed lens and recorded at the color image sensor. The proposed color multiplexing CE (CMCE) is designed with a field of view of up to 210 deg and a resolution of 180 lp/mm at f-number 2.8 and an overall length of 13.323 mm. A ray-tracing simulation of the CMCE with the color multiplexing mechanism verifies that the CMCE not only records the front and side view profiles at the same time, but also delivers high image quality in a small package.

  13. The effect of different standard illumination conditions on color balance failure in offset printed images on glossy coated paper expressed by color difference

    NASA Astrophysics Data System (ADS)

    Spiridonov, I.; Shopova, M.; Boeva, R.; Nikolov, M.

    2012-05-01

    One of the biggest problems in color reproduction processes is the color shift that occurs when images are viewed under different illuminants. Process ink colors and their combinations that match under one light source will often appear different under another light source. This problem is referred to as color balance failure or color inconstancy. The main goals of the present study are to investigate and determine the color balance failure (color inconstancy) of offset printed images, expressed by color difference and color gamut changes, for three of the illuminants most commonly used in practice: CIE D50, CIE F2 and CIE A. The results obtained are important from both a scientific and a practical point of view. For the first time, a methodology is suggested and implemented for the examination and estimation of color shifts by studying a large number of color and gamut changes in various ink combinations under different illuminants.
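    Color differences of this kind are typically expressed as a distance in CIELAB; a minimal sketch of the CIE76 ΔE*ab formula (the study may use a newer ΔE variant such as CIEDE2000):

```python
def delta_e_cie76(lab1, lab2):
    """CIE76 Delta E*ab: Euclidean distance between two CIELAB triples
    (L*, a*, b*) measured under the same illuminant and observer."""
    return sum((p - q) ** 2 for p, q in zip(lab1, lab2)) ** 0.5
```

    Comparing the ΔE between the same printed patch measured under CIE D50 and, say, CIE A quantifies the color inconstancy directly; differences above roughly 2-3 units are usually considered perceptible, though thresholds vary by application.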

  14. Tomographic Imaging of a Forested Area By Airborne Multi-Baseline P-Band SAR.

    PubMed

    Frey, Othmar; Morsdorf, Felix; Meier, Erich

    2008-09-24

    In recent years, various attempts have been undertaken to obtain information about the structure of forested areas from multi-baseline synthetic aperture radar data. Tomographic processing of such data has been demonstrated for airborne L-band data, but the quality of the focused tomographic images is limited by several factors. In particular, the common Fourier-based focusing methods are susceptible to irregular and sparse sampling, two problems that are unavoidable in the case of multi-pass, multi-baseline SAR data acquired by an airborne system. In this paper, a tomographic focusing method based on the time-domain back-projection algorithm is proposed, which maintains the geometric relationship between the original sensor positions and the imaged target and is therefore able to cope with irregular sampling without introducing any approximations with respect to the geometry. The tomographic focusing quality is assessed by analysing the impulse response of simulated point targets and an in-scene corner reflector. In addition, several tomographic slices of a volume representing a forested area are presented. The P-band tomographic data set, consisting of eleven flight tracks, was acquired by the airborne E-SAR sensor of the German Aerospace Center (DLR).

  15. Photographic copy of computer enhanced color photographic image. Photographer and ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Photographic copy of computer enhanced color photographic image. Photographer and computer draftsman unknown. Original photographic image located in the office of Modjeski and Masters, Consulting Engineers at 1055 St. Charles Avenue, New Orleans, LA 70130. COMPUTER ENHANCED COLOR PHOTOGRAPH SHOWING THE PROPOSED HUEY P. LONG BRIDGE WIDENING LOOKING FROM THE WEST BANK TOWARD THE EAST BANK. - Huey P. Long Bridge, Spanning Mississippi River approximately midway between nine & twelve mile points upstream from & west of New Orleans, Jefferson, Jefferson Parish, LA

  16. Quantitative image analysis of immunohistochemical stains using a CMYK color model

    PubMed Central

    Pham, Nhu-An; Morrison, Andrew; Schwock, Joerg; Aviel-Ronen, Sarit; Iakovlev, Vladimir; Tsao, Ming-Sound; Ho, James; Hedley, David W

    2007-01-01

    Background Computer image analysis techniques have decreased the effects of observer bias and increased the sensitivity and throughput of immunohistochemistry (IHC) as a tissue-based procedure for the evaluation of diseases. Methods We adapted a Cyan/Magenta/Yellow/Key (CMYK) model for automated computer image analysis to quantify IHC stains in hematoxylin-counterstained histological sections. Results The spectral characteristics of the chromogens AEC, DAB and NovaRed, as well as the counterstain hematoxylin, were first determined using CMYK, Red/Green/Blue (RGB), normalized RGB and Hue/Saturation/Lightness (HSL) color models. The contrast of chromogen intensities on a 0–255 scale (24-bit image file), as well as against the hematoxylin counterstain, was greatest using the Yellow channel of a CMYK color model, suggesting an improved sensitivity for IHC evaluation compared to other color models. An increase in activated STAT3 levels due to growth factor stimulation, quantified using Yellow channel image analysis, was associated with an increase detected by Western blotting. Two clinical image data sets were used to compare the Yellow channel automated method with observer-dependent methods. First, a quantification of DAB-labeled carbonic anhydrase IX hypoxia marker in 414 sections obtained from 138 biopsies of cervical carcinoma showed strong association between Yellow channel and positive color selection results. Second, a linear relationship was also demonstrated between Yellow intensity and visual scoring for NovaRed-labeled epidermal growth factor receptor in 256 non-small cell lung cancer biopsies. Conclusion The Yellow channel image analysis method based on a CMYK color model is independent of observer biases for threshold and positive color selection, applicable to different chromogens, tolerant of hematoxylin, sensitive to small changes in IHC intensity and is applicable to simple automation procedures. These characteristics are advantageous for both
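    The Yellow-channel extraction can be sketched with the standard naive RGB-to-CMYK formula (the study used image-analysis software; this is an illustrative approximation, not its exact color pipeline):

```python
def rgb_to_cmyk(r, g, b):
    """Convert 8-bit RGB to CMYK fractions in [0, 1] using the
    standard naive (non-ICC) formula."""
    rp, gp, bp = r / 255.0, g / 255.0, b / 255.0
    k = 1.0 - max(rp, gp, bp)
    if k >= 1.0:                      # pure black pixel
        return 0.0, 0.0, 0.0, 1.0
    c = (1.0 - rp - k) / (1.0 - k)
    m = (1.0 - gp - k) / (1.0 - k)
    y = (1.0 - bp - k) / (1.0 - k)
    return c, m, y, k

def yellow_channel(pixels):
    """Yellow-channel intensity image (0-255) from a list of RGB pixels."""
    return [round(rgb_to_cmyk(*p) [2] * 255) for p in pixels]
```

    Brownish DAB staining maps to high Yellow values while bluish hematoxylin maps to low ones, which is the intuition behind the channel's reported contrast advantage.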

  17. Three frequency false-color image of Prince Albert, Canada

    NASA Image and Video Library

    1994-04-18

    STS059-S-079 (18 April 1994) --- This is a false-color, three-frequency image of Prince Albert, Canada, centered at 53.91 north latitude and 104.69 west longitude. It was produced using data from the X-Band, C-Band and L-Band radars that comprise the Spaceborne Imaging Radar-C and X-Band Synthetic Aperture Radar (SIR-C/X-SAR). SIR-C/X-SAR acquired this image on the 20th orbit of the Space Shuttle Endeavour. The area is located 40 kilometers north and 30 kilometers east of the town of Prince Albert in the Saskatchewan province of Canada. The image covers the area east of Candle Lake, between gravel-surface Highways 120 and 106 and west of 106. The area in the middle of the image covers the entire Nipawin (Narrow Hills) provincial park. The look angle of the radar is 30 degrees and the size of the image is approximately 20 by 50 kilometers. The red, green, and blue colors represent L-Band total power, C-Band total power, and X-Band VV polarization (XVV), respectively. The changes in the intensity of each color are related to various surface conditions such as frozen or thawed forest, fire, deforestation and areas of regrowth. Most of the dark blue areas in the image are ice-covered lakes. The dark area in the top right corner of the image is White Gull Lake, north of the intersection of Highways 120 and 913. The right middle part of the image shows Lake Ispuchaw and Lower Fishing Lake. The deforested areas are shown by light blue in the image. Since most of the logging at Prince Albert is around the major highways, the deforested areas can be easily detected as small, geometrically shaped dark regions along the roads. At the time these data were taken, a major part of the forest was either frozen or undergoing the spring thaw. In such conditions, due to the low volume of water in the vegetation, a deeper layer of the canopy is imaged by the radar, revealing valuable information about the type of trees, the amount of vegetation biomass and the condition of the surface

  18. The Application of Support Vector Machine (svm) Using Cielab Color Model, Color Intensity and Color Constancy as Features for Ortho Image Classification of Benthic Habitats in Hinatuan, Surigao del Sur, Philippines

    NASA Astrophysics Data System (ADS)

    Cubillas, J. E.; Japitana, M.

    2016-06-01

    This study demonstrates the application of CIELAB, color intensity, and one-dimensional scalar constancy as features for image recognition and for classifying benthic habitats in imagery of the coastal areas of Hinatuan, Surigao del Sur, Philippines. The study area comprises four datasets, namely: (a) Blk66L005, (b) Blk66L021, (c) Blk66L024, and (d) Blk66L0114. SVM optimization was performed in Matlab® software with the help of the Parallel Computing Toolbox to speed up SVM computation. The image used for collecting samples for the SVM procedure was Blk66L0114, from which a total of 134,516 sample objects of mangrove, possible coral existence with rocks, sand, sea, fish pens and sea grasses were collected and processed. The collected samples were then used as training sets for the supervised learning algorithm and for the creation of class definitions. The learned hyperplanes separating one class from another in the multi-dimensional feature space can be thought of as a super feature, which was then used in developing the classifier (C) rule set in eCognition® software. The classification of the sampling site yielded an accuracy of 98.85%, which confirms the reliability of the remote sensing techniques and analysis employed on orthophotos, namely CIELAB, color intensity and one-dimensional scalar constancy together with the SVM classification algorithm, for classifying benthic habitats.
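    The CIELAB feature used above is obtained from camera RGB by a standard conversion; a sketch assuming sRGB input and a D65 white point (the study's calibration and white point may differ):

```python
def srgb_to_lab(r, g, b):
    """sRGB (0-255) -> CIELAB: linearize, apply the sRGB D65 RGB->XYZ
    matrix, then the XYZ->Lab transform."""
    def linearize(u):
        u /= 255.0
        return u / 12.92 if u <= 0.04045 else ((u + 0.055) / 1.055) ** 2.4

    rl, gl, bl = linearize(r), linearize(g), linearize(b)
    # sRGB (D65) linear RGB -> XYZ
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl

    def f(t):
        return t ** (1 / 3) if t > 0.008856 else 7.787 * t + 16 / 116

    # normalize by the D65 reference white
    fx, fy, fz = f(x / 0.95047), f(y / 1.0), f(z / 1.08883)
    L = 116 * fy - 16
    a = 500 * (fx - fy)
    bb = 200 * (fy - fz)
    return L, a, bb
```

    CIELAB is attractive as a classification feature because Euclidean distances in it approximate perceived color differences, unlike raw RGB.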

  19. The implementation of thermal image visualization by HDL based on pseudo-color

    NASA Astrophysics Data System (ADS)

    Zhu, Yong; Zhang, JiangLing

    2004-11-01

    The pseudo-color method, which maps sampled data to intuitively perceived colors, is a powerful visualization technique. This paper describes a complete pseudo-color visualization system for thermal images, covering the basic principle, the model, and an HDL (Hardware Description Language) implementation. Thermal images, whose signal is modulated as video, reflect the temperature distribution of the measured object, so they involve large data volumes and real-time constraints. The solution to this problem is as follows: First, a reasonable scheme, i.e. the combination of global pseudo-color visualization and accurate measurement of a local special area, must be adopted. Then, HDL pseudo-color algorithms in an SoC (System on Chip) implement the system to ensure real-time performance. Finally, the key HDL algorithms for direct gray-level connection coding, proportional gray-level map coding and enhanced gray-level map coding are presented, and their simulation results are shown. The pseudo-color visualization of thermal images implemented by HDL in this paper has effective applications in electric power equipment testing and medical health diagnosis.
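    The proportional gray-level map coding can be sketched in software terms as linear interpolation over a color ramp (the anchor colors below are hypothetical; the paper realizes the per-pixel mapping in HDL hardware):

```python
# Hypothetical cold-to-hot ramp: blue -> cyan -> green -> yellow -> red
ANCHORS = [(0, 0, 255), (0, 255, 255), (0, 255, 0), (255, 255, 0), (255, 0, 0)]

def pseudo_color(gray):
    """Map an 8-bit gray level (e.g. a temperature sample) to an RGB triple
    by linear interpolation between consecutive anchor colors."""
    pos = gray / 255.0 * (len(ANCHORS) - 1)
    i = min(int(pos), len(ANCHORS) - 2)   # segment index
    t = pos - i                           # position within the segment
    c0, c1 = ANCHORS[i], ANCHORS[i + 1]
    return tuple(round(a + (b - a) * t) for a, b in zip(c0, c1))
```

    In hardware the same mapping is typically realized as a small lookup table or piecewise-linear datapath, which is what makes the real-time video-rate requirement attainable.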

  20. Greenland's Coast in Holiday Colors

    NASA Technical Reports Server (NTRS)

    2003-01-01

    Vibrant reds, emerald greens, brilliant whites, and pastel blues adorn this view of the area surrounding the Jakobshavn Glacier on the western coast of Greenland. The image is a false-color (near-infrared, green, blue) view acquired by the Multi-angle Imaging SpectroRadiometer's nadir camera. The brightness of vegetation in the near-infrared contributes to the reddish hues; glacial silt gives rise to the green color of the water; and blue-colored melt ponds are visible in the bright white ice. A scattering of small icebergs in Disko Bay adds a touch of glittery sparkle to the scene.

    The large island in the upper left is called Qeqertarsuaq. To the east of this island, and just above image center, is the outlet of the fast-flowing Jakobshavn (or Ilulissat) glacier. Jakobshavn is considered to have the highest iceberg production of all Greenland glaciers and is a major drainage outlet for a large portion of the western side of the ice sheet. Icebergs released from the glacier drift slowly with the ocean currents and pose hazards for shipping along the coast.

    The Multi-angle Imaging SpectroRadiometer views the daylit Earth continuously, and the entire globe between 82 degrees north and 82 degrees south latitude is observed every 9 days. These data products were generated from a portion of the imagery acquired on June 18, 2003 during Terra orbit 18615. The image covers an area of about 254 kilometers x 210 kilometers and uses data from blocks 34 to 35 within World Reference System-2 path 10.

    MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.

  1. Color image processing and vision system for an automated laser paint-stripping system

    NASA Astrophysics Data System (ADS)

    Hickey, John M., III; Hise, Lawson

    1994-10-01

    Color image processing in machine vision systems has not gained general acceptance; most machine vision systems use images that are shades of gray. The Laser Automated Decoating System (LADS) required a vision system that could discriminate between substrates of various colors and textures and paints ranging from semi-gloss grays to high-gloss red, white and blue (Air Force Thunderbirds). The changing lighting levels produced by the pulsed CO2 laser mandated a vision system that did not require constant color-temperature lighting for reliable image analysis.

  2. Diffusion Tensor Magnetic Resonance Imaging Strategies for Color Mapping of Human Brain Anatomy

    PubMed Central

    Boujraf, Saïd

    2018-01-01

    Background: Color mapping of fiber tract orientation using diffusion tensor imaging (DTI) can play a prominent role in clinical practice. The goal of this paper is to perform a comparative study of visualized diffusion anisotropy in human brain anatomical entities using three different color-mapping techniques based on diffusion-weighted imaging (DWI) and DTI. Methods: The first technique is based on calculating a color map from DWIs measured in three perpendicular directions. The second technique is based on the eigenvalues derived from the diffusion tensor. The last technique is based on the three eigenvectors corresponding to the sorted eigenvalues derived from the diffusion tensor. All magnetic resonance imaging measurements were performed on a 1.5 Tesla Siemens Vision whole-body imaging system. A single-shot DW echo-planar imaging sequence used a Stejskal–Tanner approach with trapezoidal diffusion gradients, and the slice orientation was transverse. The basic measurement yielded a set of 13 images: each series consisted of a single image without diffusion weighting, plus two DWIs for each of six noncollinear magnetic field gradient directions. Results: The three types of color maps were calculated from the DWI and DTI data obtained. We established an excellent similarity between the image data in the color maps and the fiber directions of known anatomical structures (e.g., corpus callosum and gray matter). Conclusions: Rotationally invariant quantities, such as the eigenvectors of the diffusion tensor, better reflected the real orientation found in the studied tissue. PMID:29928631
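    The eigenvector-based technique is conventionally implemented as directionally encoded color: RGB proportional to the absolute components of the principal eigenvector, weighted by fractional anisotropy (FA). A sketch under that standard convention (the paper's exact scaling may differ):

```python
import numpy as np

def dti_color(tensor):
    """Directionally encoded color for one voxel from a 3x3 diffusion tensor:
    RGB = |principal eigenvector| * FA, so red=x, green=y, blue=z."""
    vals, vecs = np.linalg.eigh(np.asarray(tensor, dtype=float))
    e1 = vecs[:, np.argmax(vals)]                 # principal diffusion direction
    md = vals.mean()                              # mean diffusivity
    fa = np.sqrt(1.5 * np.sum((vals - md) ** 2) / np.sum(vals ** 2))
    return np.abs(e1) * fa
```

    Taking absolute values removes the sign ambiguity of eigenvectors, and the FA weighting suppresses isotropic tissue, so only coherent fiber bundles appear saturated.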

  3. 3D Point Cloud Model Colorization by Dense Registration of Digital Images

    NASA Astrophysics Data System (ADS)

    Crombez, N.; Caron, G.; Mouaddib, E.

    2015-02-01

    Architectural heritage is a historic and artistic property which has to be protected, preserved, restored and shown to the public. Modern tools like 3D laser scanners are increasingly used in heritage documentation. Most of the time, the 3D laser scanner is complemented by a digital camera, which is used to enrich the accurate geometric information with the scanned objects' colors. However, the photometric quality of the acquired point clouds is generally rather low because of several problems presented below. We propose an accurate method for registering digital images acquired from any viewpoint onto point clouds, which is a crucial step for good colorization by color projection. We express this image-to-geometry registration as a pose estimation problem. The camera pose is computed using the entire image intensities under a photometric visual and virtual servoing (VVS) framework. The camera's extrinsic and intrinsic parameters are automatically estimated. Because we estimate the intrinsic parameters, we do not need any information about the camera that took the digital image. Finally, when the point cloud model and the digital image are correctly registered, we project the 3D model into the digital image frame and assign new colors to the visible points. The performance of the approach is demonstrated in simulations and real experiments on indoor and outdoor datasets of the cathedral of Amiens, which highlight the success of our method, leading to point clouds with better photometric quality and resolution.

  4. Validation of tablet-based evaluation of color fundus images

    PubMed Central

    Christopher, Mark; Moga, Daniela C.; Russell, Stephen R.; Folk, James C.; Scheetz, Todd; Abràmoff, Michael D.

    2012-01-01

    Purpose To compare diabetic retinopathy (DR) referral recommendations made by viewing fundus images using a tablet computer to recommendations made using a standard desktop display. Methods A tablet computer (iPad) and a desktop PC with a high-definition color display were compared. For each platform, two retinal specialists independently rated 1200 color fundus images from patients at risk for DR using an annotation program, Truthseeker. The specialists determined whether each image had referable DR, and also how urgently each patient should be referred for medical examination. Graders viewed and rated the randomly presented images independently and were masked to their ratings on the alternative platform. Tablet- and desktop display-based referral ratings were compared using cross-platform, intra-observer kappa as the primary outcome measure. Additionally, inter-observer kappa, sensitivity, specificity, and area under the ROC curve (AUC) were determined. Results A high level of cross-platform, intra-observer agreement was found for the DR referral ratings between the platforms (κ=0.778) and for the two graders (κ=0.812). Inter-observer agreement was similar for the two platforms (κ=0.544 and κ=0.625 for tablet and desktop, respectively). The tablet-based ratings achieved a sensitivity of 0.848, a specificity of 0.987, and an AUC of 0.950 compared to desktop display-based ratings. Conclusions In this pilot study, tablet-based rating of color fundus images for subjects at risk for DR was consistent with desktop display-based rating. These results indicate that tablet computers can be reliably used for clinical evaluation of fundus images for DR. PMID:22495326
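    The agreement statistic used above is Cohen's kappa, which corrects raw agreement for the agreement expected by chance; a minimal sketch for two raters over the same items:

```python
def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters: (p_obs - p_exp) / (1 - p_exp),
    where p_exp is the chance agreement from the raters' label frequencies."""
    n = len(ratings_a)
    labels = set(ratings_a) | set(ratings_b)
    p_obs = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    p_exp = sum((ratings_a.count(l) / n) * (ratings_b.count(l) / n)
                for l in labels)
    if p_exp == 1.0:          # degenerate case: chance agreement is total
        return 1.0
    return (p_obs - p_exp) / (1 - p_exp)
```

    Kappa is 1 for perfect agreement and 0 when agreement is no better than chance, which is why values near 0.8, as reported here, are read as strong cross-platform consistency.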

  5. Separation of specular and diffuse components using tensor voting in color images.

    PubMed

    Nguyen, Tam; Vo, Quang Nhat; Yang, Hyung-Jeong; Kim, Soo-Hyung; Lee, Guee-Sang

    2014-11-20

    Most methods for the detection and removal of specular reflections suffer from nonuniform highlight regions and/or nonconverged artifacts induced by discontinuities in the surface colors, especially when dealing with highly textured, multicolored images. In this paper, a novel noniterative and predefined-constraint-free method based on tensor voting is proposed to detect and remove the highlight components of a single color image. The distribution of diffuse and specular pixels in the original image is determined using the tensors' saliency analysis, instead of comparing color information among neighboring pixels. The resulting diffuse reflectance distribution is then used to remove the specular components. The proposed method is evaluated quantitatively and qualitatively on a dataset of highly textured, multicolored images. The experimental results show that our method outperforms other state-of-the-art techniques.
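    For context, the simplest classical baseline for this task under the dichromatic reflection model is min-channel subtraction, which removes the achromatic specular offset per pixel. This is not the paper's tensor-voting method, only the point of comparison such methods improve upon:

```python
def specular_free(pixels):
    """Specular-free baseline: subtract the per-pixel minimum channel,
    removing the achromatic (white-light specular) component under the
    dichromatic model. Fails on achromatic surfaces, which is one of the
    weaknesses that more sophisticated methods address."""
    return [tuple(c - min(p) for c in p) for p in pixels]
```

    On a reddish diffuse pixel with a white highlight added, the subtraction recovers the chromatic part while discarding the common offset.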

  6. Efficiency analysis of color image filtering

    NASA Astrophysics Data System (ADS)

    Fevralev, Dmitriy V.; Ponomarenko, Nikolay N.; Lukin, Vladimir V.; Abramov, Sergey K.; Egiazarian, Karen O.; Astola, Jaakko T.

    2011-12-01

    This article addresses the conditions under which filtering can visibly improve image quality. The key points are the following. First, we analyze filtering efficiency for 25 test images from the color image database TID2008. This database allows assessing filter efficiency for images corrupted by different noise types at several levels of noise variance. Second, the limit of filtering efficiency is determined for independent and identically distributed (i.i.d.) additive noise and compared to the output mean square error of state-of-the-art filters. Third, component-wise and vector denoising is studied, and the latter approach is demonstrated to be more efficient. Fourth, using modern visual quality metrics, we determine for which levels of i.i.d. and spatially correlated noise the noise in original images, or the residual noise and distortions caused by filtering in output images, is practically invisible. We also demonstrate that it is possible to roughly estimate whether or not the visual quality can clearly be improved by filtering.
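    Comparisons of output mean square error of this kind rest on the standard MSE and PSNR definitions; a minimal sketch for grayscale images represented as flat pixel lists:

```python
import math

def mse(img_a, img_b):
    """Mean squared error between two equal-size images (flat pixel lists)."""
    return sum((a - b) ** 2 for a, b in zip(img_a, img_b)) / len(img_a)

def psnr(img_a, img_b, peak=255.0):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    m = mse(img_a, img_b)
    return math.inf if m == 0 else 10 * math.log10(peak ** 2 / m)
```

    A filter "improves" an image in this framework when the MSE of the filtered output against the clean reference drops below the MSE of the noisy input, though the article's point is that visual quality metrics do not always agree with MSE.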

  7. Multi-pinhole collimator design for small-object imaging with SiliSPECT: a high-resolution SPECT

    NASA Astrophysics Data System (ADS)

    Shokouhi, S.; Metzler, S. D.; Wilson, D. W.; Peterson, T. E.

    2009-01-01

    We have designed a multi-pinhole collimator for a dual-headed, stationary SPECT system that incorporates high-resolution silicon double-sided strip detectors. The compact camera design of our system enables imaging at source-collimator distances between 20 and 30 mm. Our analytical calculations show that using knife-edge pinholes with small-opening angles or cylindrically shaped pinholes in a focused, multi-pinhole configuration in combination with this camera geometry can generate narrow sensitivity profiles across the field of view that can be useful for imaging small objects at high sensitivity and resolution. The current prototype system uses two collimators each containing 127 cylindrically shaped pinholes that are focused toward a target volume. Our goal is imaging objects such as a mouse brain, which could find potential applications in molecular imaging.

  8. Colored Chaos

    NASA Technical Reports Server (NTRS)

    2004-01-01

    [figure removed for brevity, see original site]

    Released 7 May 2004 This daytime visible color image was collected on May 30, 2002 during the Southern Fall season in Atlantis Chaos.

    The THEMIS VIS camera is capable of capturing color images of the martian surface using its five different color filters. In this mode of operation, the spatial resolution and coverage of the image must be reduced to accommodate the additional data volume produced from the use of multiple filters. To make a color image, three of the five filter images (each in grayscale) are selected. Each is contrast enhanced and then converted to a red, green, or blue intensity image. These three images are then combined to produce a full color, single image. Because the THEMIS color filters don't span the full range of colors seen by the human eye, a color THEMIS image does not represent true color. Also, because each single-filter image is contrast enhanced before inclusion in the three-color image, the apparent color variation of the scene is exaggerated. Nevertheless, the color variation that does appear is representative of some change in color, however subtle, in the actual scene. Note that the long edges of THEMIS color images typically contain color artifacts that do not represent surface variation.
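    The compositing described above, contrast-enhancing three single-filter grayscale images and assigning them to red, green, and blue, can be sketched with a generic linear stretch (THEMIS processing uses its own enhancement, so this is only illustrative):

```python
def stretch(band):
    """Linear contrast stretch of one grayscale filter image to 0-255."""
    lo, hi = min(band), max(band)
    if hi == lo:
        return [0 for _ in band]
    return [round(255 * (v - lo) / (hi - lo)) for v in band]

def compose_rgb(band_r, band_g, band_b):
    """Stretch three single-filter bands and zip them into RGB pixels."""
    return list(zip(stretch(band_r), stretch(band_g), stretch(band_b)))
```

    Because each band is stretched independently before combination, relative color variation is exaggerated, which is exactly the caveat the caption raises about THEMIS color not being true color.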

    Image information: VIS instrument. Latitude -34.5, Longitude 183.6 East (176.4 West). 38 meter/pixel resolution.

    Note: this THEMIS visual image has not been radiometrically nor geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through the Planetary Data System in accordance with Project policies at a later time.

    NASA's Jet Propulsion Laboratory manages the 2001 Mars Odyssey mission for NASA's Office of

  9. Molecular breast tomosynthesis with scanning focus multi-pinhole cameras

    NASA Astrophysics Data System (ADS)

    van Roosmalen, Jarno; Goorden, Marlies C.; Beekman, Freek J.

    2016-08-01

    Planar molecular breast imaging (MBI) is rapidly gaining popularity in diagnostic oncology. To add 3D capabilities, we introduce a novel molecular breast tomosynthesis (MBT) scanner concept based on multi-pinhole collimation. In our design, the patient lies prone with the pendant breast lightly compressed between transparent plates. Integrated webcams view the breast through these plates and allow the operator to designate the scan volume (e.g. a whole breast or a suspected region). The breast is then scanned by translating focusing multi-pinhole plates and NaI(Tl) gamma detectors together in a sequence that optimizes count yield from the volume-of-interest. With simulations, we compared MBT with existing planar MBI. In a breast phantom containing different lesions, MBT improved tumour-to-background contrast-to-noise ratio (CNR) over planar MBI by 12% and 111% for 4.0 and 6.0 mm lesions, respectively, in the case of whole-breast scanning. For the same lesions, much larger CNR improvements of 92% and 241% over planar MBI were found in a scan that focused on a breast region containing several lesions. MBT resolved 3.0 mm rods in a Derenzo resolution phantom in the transverse plane, compared to the 2.5 mm rods distinguished by planar MBI. While planar MBI cannot provide depth information, MBT offered 4.0 mm depth resolution. Our simulations indicate that besides offering 3D localization of increased tracer uptake, multi-pinhole MBT can significantly increase tumour-to-background CNR compared to planar MBI. These properties could be promising for better estimating the position, extent and shape of lesions and for distinguishing between single and multiple lesions.
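    The tumour-to-background CNR figure of merit can be sketched as follows (one common definition; the paper may normalize the noise term differently):

```python
def cnr(tumour_pixels, background_pixels):
    """Contrast-to-noise ratio: (mean_tumour - mean_background) divided by
    the standard deviation of the background (population std)."""
    mt = sum(tumour_pixels) / len(tumour_pixels)
    mb = sum(background_pixels) / len(background_pixels)
    var_b = sum((x - mb) ** 2 for x in background_pixels) / len(background_pixels)
    return (mt - mb) / var_b ** 0.5
```

    Reporting percentage CNR improvement, as the abstract does, then amounts to comparing this ratio for the same lesion region under the two acquisition modes.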

  10. Quantifying Human Visible Color Variation from High Definition Digital Images of Orb Web Spiders.

    PubMed

    Tapia-McClung, Horacio; Ajuria Ibarra, Helena; Rao, Dinesh

    2016-01-01

    Digital processing and analysis of high resolution images of 30 individuals of the orb web spider Verrucosa arenata were performed to extract and quantify human visible colors present on the dorsal abdomen of this species. Color extraction was performed with minimal user intervention using an unsupervised algorithm to determine groups of colors on each individual spider, which was then analyzed in order to quantify and classify the colors obtained, both spatially and using energy and entropy measures of the digital images. Analysis shows that the colors cover a small region of the visible spectrum, are not spatially homogeneously distributed over the patterns and from an entropic point of view, colors that cover a smaller region on the whole pattern carry more information than colors covering a larger region. This study demonstrates the use of processing tools to create automatic systems to extract valuable information from digital images that are precise, efficient and helpful for the understanding of the underlying biology.
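    The entropy measure over the extracted color groups can be sketched as the Shannon entropy of the per-pixel color-class distribution (a generic formulation; the paper's exact energy and entropy measures may differ):

```python
import math
from collections import Counter

def color_entropy(labels):
    """Shannon entropy (bits) of the color-class distribution over pixels.
    `labels` is one color-class label per pixel, e.g. from clustering."""
    counts = Counter(labels)
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

    Under this measure a rare color class contributes more information per pixel than a dominant one, which matches the abstract's observation that colors covering a smaller region of the pattern carry more information.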

  11. Quantifying Human Visible Color Variation from High Definition Digital Images of Orb Web Spiders

    PubMed Central

    Ajuria Ibarra, Helena; Rao, Dinesh

    2016-01-01

    Digital processing and analysis of high resolution images of 30 individuals of the orb web spider Verrucosa arenata were performed to extract and quantify human visible colors present on the dorsal abdomen of this species. Color extraction was performed with minimal user intervention using an unsupervised algorithm to determine groups of colors on each individual spider, which was then analyzed in order to quantify and classify the colors obtained, both spatially and using energy and entropy measures of the digital images. Analysis shows that the colors cover a small region of the visible spectrum, are not spatially homogeneously distributed over the patterns and from an entropic point of view, colors that cover a smaller region on the whole pattern carry more information than colors covering a larger region. This study demonstrates the use of processing tools to create automatic systems to extract valuable information from digital images that are precise, efficient and helpful for the understanding of the underlying biology. PMID:27902724

  12. Security of Color Image Data Designed by Public-Key Cryptosystem Associated with 2D-DWT

    NASA Astrophysics Data System (ADS)

    Mishra, D. C.; Sharma, R. K.; Kumar, Manish; Kumar, Kuldeep

    2014-08-01

    In present times, the security of image data is a major issue. We therefore propose a novel technique for securing color image data with a public-key (asymmetric) cryptosystem. In this technique, color image data are secured using the RSA (Rivest-Shamir-Adleman) cryptosystem combined with the two-dimensional discrete wavelet transform (2D-DWT). Earlier schemes for color image security were designed on the basis of keys alone, but this approach provides security through both the keys and the correct arrangement of the RSA parameters. If an attacker knows the exact keys but has no information about the exact arrangement of the RSA parameters, the original information cannot be recovered from the encrypted data. Computer simulations based on a standard example critically examine the behavior of the proposed technique. A security analysis and a detailed comparison between earlier schemes for color image security and the proposed technique are also presented to demonstrate the robustness of the cryptosystem.
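    The 2D-DWT stage can be illustrated with a one-level Haar decomposition (a minimal sketch; the paper does not specify the wavelet, and the sub-band naming convention here is one of several in use):

```python
def haar_dwt2(img):
    """One level of a 2D Haar DWT on an even-sized grayscale image.
    Returns the (LL, LH, HL, HH) sub-bands as averages/differences."""
    def rows_pass(m):
        lo = [[(r[2 * j] + r[2 * j + 1]) / 2 for j in range(len(r) // 2)] for r in m]
        hi = [[(r[2 * j] - r[2 * j + 1]) / 2 for j in range(len(r) // 2)] for r in m]
        return lo, hi

    def cols_pass(m):
        t = [list(c) for c in zip(*m)]          # transpose: filter columns as rows
        lo, hi = rows_pass(t)
        untr = lambda x: [list(c) for c in zip(*x)]
        return untr(lo), untr(hi)

    low, high = rows_pass(img)                  # horizontal pass
    LL, LH = cols_pass(low)                     # vertical pass on each band
    HL, HH = cols_pass(high)
    return LL, LH, HL, HH
```

    In transform-domain encryption schemes of this kind, the cryptosystem is typically applied to the sub-band coefficients rather than to raw pixels; the RSA stage itself is omitted here.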

  13. Color Analysis of Periimplant Soft Tissues Focusing on Implant System: A Case Series.

    PubMed

    Varoni, Elena M; Moltrasio, Giuseppe; Gargano, Marco; Ludwig, Nicola; Lodi, Giovanni; Scaringi, Riccardo

    2017-04-01

    To assess the impact of implant system on color harmonization of periimplant mucosa, in this case series the color of periimplant mucosa was compared with the color of natural tooth gingiva. Seventeen intercanine implants were analyzed (11 bone level [BL], 6 tissue level [TL] implants). Colorimetric data, at 2, 4, and 6 mm from the gingival margin, were collected through fiber-optic reflectance spectroscopy, and color differences were calculated as ΔE. Dentists, dental students, and lay people, blinded to the implant system, performed an additional visual color analysis on clinical images. Independently of the implant system, the color of periimplant mucosa was significantly different from that of gingiva (ΔE = 8.2 ± 0.7), appearing darker in the L* comparison (P ≤ 0.05). TL periimplant mucosa showed a higher ΔE than BL (9.0 ± 1.0 vs 6.6 ± 0.8, respectively; P ≤ 0.05). Observers correctly identified where the implant was placed in about half of the cases, with no significant difference between implant systems. Within the limitations of this study, the color of periimplant soft tissues appears different from gingiva on spectroscopic analysis. The color discrepancy is higher in the presence of TL implants than of BL implants, although the difference may not be clinically significant.
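
    The ΔE values reported above are CIELAB color differences. A minimal CIE76 version (Euclidean distance in L*a*b* space) can be sketched as follows; the sample L*a*b* values are invented stand-ins, not the study's measurements.

```python
# Minimal sketch of DeltaE (CIE76): Euclidean distance between two colors
# in CIELAB space. Sample L*, a*, b* values are hypothetical.

import math

def delta_e_cie76(lab1, lab2):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

gingiva = (45.0, 22.0, 14.0)   # hypothetical L*, a*, b* of natural gingiva
mucosa  = (40.0, 18.0, 10.0)   # hypothetical periimplant mucosa
de = delta_e_cie76(gingiva, mucosa)
# A lower L* for the mucosa corresponds to the "darker" finding in the study.
assert de > 0.0
```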

  14. 4 Vesta in Color: High Resolution Mapping from Dawn Framing Camera Images

    NASA Technical Reports Server (NTRS)

    Reddy, V.; LeCorre, L.; Nathues, A.; Sierks, H.; Christensen, U.; Hoffmann, M.; Schroeder, S. E.; Vincent, J. B.; McSween, H. Y.; Denevi, B. W.; hide

    2011-01-01

    Rotational surface variations on asteroid 4 Vesta have been known from ground-based and HST observations, and they have been interpreted as evidence of compositional diversity. NASA's Dawn mission entered orbit around Vesta on July 16, 2011 for a year-long global characterization. The framing cameras (FC) onboard the Dawn spacecraft will image the asteroid in one clear (broad) and seven narrow band filters covering the wavelength range between 0.4-1.0 microns. We present color mapping results from the Dawn FC observations of Vesta obtained during Survey orbit (approx. 3000 km) and High-Altitude Mapping Orbit (HAMO) (approx. 950 km). Our aim is to create global color maps of Vesta using multispectral FC images to identify the spatial extent of compositional units and link them with other available data sets to extract the basic mineralogy. While the VIR spectrometer onboard Dawn has higher spectral resolution (864 channels) allowing precise mineralogical assessment of Vesta's surface, the FC has three times higher spatial resolution in any given orbital phase. In an effort to extract maximum information from FC data we have developed algorithms using laboratory spectra of pyroxenes and HED meteorites to derive parameters associated with the 1-micron absorption band wing. These parameters will help map the global distribution of compositionally related units on Vesta's surface. Interpretation of these units will involve the integration of FC and VIR data.

  15. Progress in digital color workflow understanding in the International Color Consortium (ICC) Workflow WG

    NASA Astrophysics Data System (ADS)

    McCarthy, Ann

    2006-01-01

    The ICC Workflow WG serves as the bridge between ICC color management technologies and use of those technologies in real world color production applications. ICC color management is applicable to and is used in a wide range of color systems, from highly specialized digital cinema color special effects to high volume publications printing to home photography. The ICC Workflow WG works to align ICC technologies so that the color management needs of these diverse use case systems are addressed in an open, platform independent manner. This report provides a high level summary of the ICC Workflow WG objectives and work to date, focusing on the ways in which workflow can impact image quality and color systems performance. The 'ICC Workflow Primitives' and 'ICC Workflow Patterns and Dimensions' workflow models are covered in some detail. Consider the questions, "How much of dissatisfaction with color management today is the result of 'the wrong color transformation at the wrong time' and 'I can't get to the right conversion at the right point in my work process'?" Put another way, consider how image quality through a workflow can be negatively affected when the coordination and control level of the color management system is not sufficient.

  16. Three frequency false color image of Flevoland, the Netherlands

    NASA Image and Video Library

    1994-04-18

    STS059-S-086 (18 April 1994) --- This is a three-frequency false-color image of Flevoland, the Netherlands, centered at 52.4 degrees north latitude, and 5.4 degrees east longitude. This image was acquired by the Spaceborne Imaging Radar-C and X-Band Synthetic Aperture Radar (SIR-C/X-SAR) aboard the Space Shuttle Endeavour on April 14, 1994. It was produced by combining data from the X-Band, C-Band and L-Band radars. The area shown is approximately 25 by 28 kilometers (15 1/2 by 17 1/2 miles). Flevoland, which fills the lower two-thirds of the image, is a very flat area that is made up of reclaimed land that is used for agriculture and forestry. At the top of the image, across the canal from Flevoland, is an older forest shown in red; the city of Harderwijk is shown in white on the shore of the canal. At this time of the year, the agricultural fields are bare soil, and they show up in this image in blue. The changes in the brightness of the blue areas correspond to changes in surface roughness. The dark blue areas are water and the small dots in the canal are boats. This SIR-C/X-SAR supersite is being used for both calibration and agricultural studies. Several soil and crop ground-truth studies will be conducted during the Shuttle flight. In addition, about 10 calibration devices and 10 corner reflectors have been deployed to calibrate and monitor the radar signal. One of these transponders can be seen as a bright star in the lower right quadrant of the image. This false-color image was made using L-Band total power in the red channel, C-Band total power in the green channel, and X-Band VV polarization in the blue channel. SIR-C/X-SAR is part of NASA's Mission to Planet Earth (MTPE). SIR-C/X-SAR radars illuminate Earth with microwaves allowing detailed observations at any time, regardless of weather or sunlight conditions. SIR-C/X-SAR uses three microwave wavelengths: L-Band (24 cm), C-Band (6 cm), and X-Band (3 cm). The multi-frequency data will be used by the

  17. Intrahepatic portosystemic venous shunt: diagnosis by color Doppler imaging.

    PubMed

    Kudo, M; Tomita, S; Tochio, H; Minowa, K; Todo, A

    1993-05-01

    Intrahepatic portosystemic venous shunt is a rare clinical entity; only 33 such cases have been reported. It may be congenital, or secondary to portal hypertension. Five patients with this disorder are presented, each of whom was diagnosed by color Doppler imaging, including waveform spectral analysis. One patient with clinical evidence of cirrhosis and portal hypertension had episodes of hepatic encephalopathy and elevated blood levels of ammonia. This patient had a large tubular shunt between the posterior branch of the portal vein and the inferior vena cava. Shunts of this type are considered to be collateral pathways which develop in the hepatic parenchyma as a result of portal hypertension. The other four patients had no evidence of liver disease, and all four evidenced an aneurysmal portohepatic venous shunt within the liver parenchyma. Shunts of this type are considered congenital. The diagnosis of intrahepatic portosystemic venous shunts was established by color Doppler imaging, which demonstrated a direct communication of color flow signals between the portal vein and hepatic vein, in addition to the characterization of the Doppler spectrum at each sampling point, from a continuous waveform signal (portal vein) to a turbulent signal (aneurysmal cavity) and, finally, to a biphasic waveform signal (hepatic vein). As demonstrated by these five patients, color Doppler imaging is useful in the diagnosis of an intrahepatic portosystemic venous shunt, and measurement of the shunt ratio may be useful in follow-up and in determining the therapeutic option.

  18. Modeling a color-rendering operator for high dynamic range images using a cone-response function

    NASA Astrophysics Data System (ADS)

    Choi, Ho-Hyoung; Kim, Gi-Seok; Yun, Byoung-Ju

    2015-09-01

    Tone-mapping operators are the typical algorithms designed to produce visibility and the overall impression of brightness, contrast, and color of high dynamic range (HDR) images on low dynamic range (LDR) display devices. Although several new tone-mapping operators have been proposed in recent years, the results of these operators have not matched those of the psychophysical experiments based on the human visual system. A color-rendering model that is a combination of tone-mapping and cone-response functions using an XYZ tristimulus color space is presented. In the proposed method, the tone-mapping operator produces visibility and the overall impression of brightness, contrast, and color in HDR images when mapped onto relatively LDR devices. The tone-mapping resultant image is obtained using chromatic and achromatic colors to avoid well-known color distortions shown in the conventional methods. The resulting image is then processed with a cone-response function wherein emphasis is placed on human visual perception (HVP). The proposed method covers the mismatch between the actual scene and the rendered image based on HVP. The experimental results show that the proposed method yields an improved color-rendering performance compared to conventional methods.
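
    As a point of reference for the tone-mapping stage (not the authors' operator), a minimal global Reinhard-style mapping of HDR luminance into a displayable range can be sketched as follows; the key value and input luminances are illustrative.

```python
# Hedged sketch of a global tone-mapping operator in the spirit described
# above: Reinhard-style compression of HDR luminance into [0, 1).

import math

def reinhard_tonemap(lum, key=0.18, eps=1e-6):
    # Log-average ("key") of the scene luminance.
    log_avg = math.exp(sum(math.log(v + eps) for v in lum) / len(lum))
    scaled = [key * v / log_avg for v in lum]
    return [v / (1.0 + v) for v in scaled]

hdr = [0.01, 0.5, 2.0, 150.0, 4000.0]   # synthetic HDR luminances
ldr = reinhard_tonemap(hdr)
assert all(0.0 <= v < 1.0 for v in ldr)
assert ldr == sorted(ldr)   # monotonic: brighter input stays brighter
```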

  19. 78 FR 18611 - Summit on Color in Medical Imaging; Cosponsored Public Workshop; Request for Comments

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-27

    ...] Summit on Color in Medical Imaging; Cosponsored Public Workshop; Request for Comments AGENCY: Food and...: The Food and Drug Administration (FDA) and cosponsor International Color Consortium (ICC) are announcing the following public workshop entitled ``Summit on Color in Medical Imaging: An International...

  20. Note: In vivo pH imaging system using luminescent indicator and color camera

    NASA Astrophysics Data System (ADS)

    Sakaue, Hirotaka; Dan, Risako; Shimizu, Megumi; Kazama, Haruko

    2012-07-01

    A microscopic in vivo pH imaging system is developed that can capture both luminescent and color images. The former gives a quantitative measurement of the pH distribution in vivo. The latter captures structural information that can be overlaid on the pH distribution for correlating the structure of a specimen with its pH distribution. By using a digital color camera, a luminescent image as well as a color image is obtained. The system uses HPTS (8-hydroxypyrene-1,3,6-trisulfonate) as a luminescent pH indicator for the luminescent imaging. Filter units are mounted in the microscope, which extract two luminescent images for use with the excitation-ratio method. The ratio of the two images is converted to a pH distribution through an a priori pH calibration. An application of the system to epidermal cells of Lactuca sativa L. is shown.
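
    The excitation-ratio step can be sketched as below; the linear-in-log calibration and its constants are hypothetical placeholders, not HPTS calibration data.

```python
# Sketch of the excitation-ratio method: the ratio of two luminescent images
# is mapped to pH through an a priori calibration curve. The calibration
# constants (a, b) and excitation wavelengths are hypothetical.

import math

def pixel_ph(i_450, i_405, a=7.0, b=1.5):
    # Hypothetical linear calibration in log10 of the excitation ratio.
    return a + b * math.log10(i_450 / i_405)

# Ratio image -> pH image, pixel by pixel.
img_450 = [[120.0, 80.0], [200.0, 50.0]]
img_405 = [[100.0, 100.0], [100.0, 100.0]]
ph_map = [[pixel_ph(x, y) for x, y in zip(r1, r2)]
          for r1, r2 in zip(img_450, img_405)]
```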

  1. Mimas Showing False Colors #1

    NASA Technical Reports Server (NTRS)

    2005-01-01

    False color images of Saturn's moon, Mimas, reveal variation in either the composition or texture across its surface.

    During its approach to Mimas on Aug. 2, 2005, the Cassini spacecraft narrow-angle camera obtained multi-spectral views of the moon from a range of 228,000 kilometers (142,500 miles).

    The image at the left is a narrow angle clear-filter image, which was separately processed to enhance the contrast in brightness and sharpness of visible features. The image at the right is a color composite of narrow-angle ultraviolet, green, infrared and clear filter images, which have been specially processed to accentuate subtle changes in the spectral properties of Mimas' surface materials. To create this view, three color images (ultraviolet, green and infrared) were combined into a single black and white picture that isolates and maps regional color differences. This 'color map' was then superimposed over the clear-filter image at the left.

    The combination of color map and brightness image shows how the color differences across the Mimas surface materials are tied to geological features. Shades of blue and violet in the image at the right are used to identify surface materials that are bluer in color and have a weaker infrared brightness than average Mimas materials, which are represented by green.

    Herschel crater, a 140-kilometer-wide (88-mile) impact feature with a prominent central peak, is visible in the upper right of each image. The unusual bluer materials are seen to broadly surround Herschel crater. However, the bluer material is not uniformly distributed in and around the crater. Instead, it appears to be concentrated on the outside of the crater and more to the west than to the north or south. The origin of the color differences is not yet understood. It may represent ejecta material that was excavated from inside Mimas when the Herschel impact occurred. The bluer color of these materials may be caused by subtle differences in

  2. The quantitative control and matching of an optical false color composite imaging system

    NASA Astrophysics Data System (ADS)

    Zhou, Chengxian; Dai, Zixin; Pan, Xizhe; Li, Yinxi

    1993-10-01

    Design of an imaging system for optical false color composite (OFCC) capable of high-precision density-exposure time control and color balance is presented. The system provides high quality FCC image data that can be analyzed using a quantitative calculation method. The quality requirement for each part of the image generation system is defined, and the distribution of satellite remote sensing image information is analyzed. The proposed technology makes it possible to present the remote sensing image data more effectively and accurately.

  3. Quantum image encryption based on restricted geometric and color transformations

    NASA Astrophysics Data System (ADS)

    Song, Xian-Hua; Wang, Shen; Abd El-Latif, Ahmed A.; Niu, Xia-Mu

    2014-08-01

    A novel encryption scheme for quantum images based on restricted geometric and color transformations is proposed. The new strategy comprises efficient permutation and diffusion properties for quantum image encryption. The core idea of the permutation stage is to scramble the codes of the pixel positions through restricted geometric transformations. Then, a new quantum diffusion operation is implemented on the permutated quantum image based on restricted color transformations. The encryption keys of the two stages are generated by two sensitive chaotic maps, which ensure the security of the scheme. The final step, measurement, is built on a probabilistic model. Experiments based on statistical analysis demonstrate significant improvements in favor of the proposed approach.
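
    A classical sketch of the chaotic key-generation idea (the quantum circuitry itself is beyond a few lines): a logistic map, highly sensitive to its seed, drives a position permutation. The seed and control parameter below are illustrative.

```python
# Sketch: a sensitive chaotic (logistic) map generates a keystream whose sort
# order defines a position permutation, as in the permutation stage above.
# Seed x0 and parameter r are illustrative.

def logistic_sequence(x0, r, n):
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        xs.append(x)
    return xs

def chaotic_permutation(n, x0=0.3456, r=3.99):
    keys = logistic_sequence(x0, r, n)
    return sorted(range(n), key=lambda i: keys[i])

pixels = list(range(16))
perm = chaotic_permutation(16)
scrambled = [pixels[i] for i in perm]
# The inverse permutation (known only with the key) restores the original.
inv = [0] * 16
for pos, src in enumerate(perm):
    inv[src] = pos
restored = [scrambled[i] for i in inv]
assert restored == pixels
```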

  4. Cross contrast multi-channel image registration using image synthesis for MR brain images.

    PubMed

    Chen, Min; Carass, Aaron; Jog, Amod; Lee, Junghoon; Roy, Snehashis; Prince, Jerry L

    2017-02-01

    Multi-modal deformable registration is important for many medical image analysis tasks such as atlas alignment, image fusion, and distortion correction. Whereas a conventional method would register images with different modalities using modality independent features or information theoretic metrics such as mutual information, this paper presents a new framework that addresses the problem using a two-channel registration algorithm capable of using mono-modal similarity measures such as sum of squared differences or cross-correlation. To make it possible to use these same-modality measures, image synthesis is used to create proxy images for the opposite modality as well as intensity-normalized images from each of the two available images. The new deformable registration framework was evaluated by performing intra-subject deformation recovery, intra-subject boundary alignment, and inter-subject label transfer experiments using multi-contrast magnetic resonance brain imaging data. Three different multi-channel registration algorithms were evaluated, revealing that the framework is robust to the multi-channel deformable registration algorithm that is used. With a single exception, all results demonstrated improvements when compared against single channel registrations using the same algorithm with mutual information. Copyright © 2016 Elsevier B.V. All rights reserved.
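
    The mono-modal similarity measures named above can be sketched directly; here SSD and normalized cross-correlation are computed over 1-D intensity vectors, with the "synthesized proxy" simply a linearly scaled copy for illustration.

```python
# Sketch of the two mono-modal similarity measures named above: sum of
# squared differences (SSD) and normalized cross-correlation (NCC),
# computed over 1-D intensity vectors for brevity.

import math

def ssd(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def ncc(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) *
                    sum((y - mb) ** 2 for y in b))
    return num / den

t1 = [10.0, 20.0, 30.0, 40.0]
t1_proxy = [12.0, 24.0, 36.0, 48.0]   # stand-in for a synthesized proxy image
assert ssd(t1, t1) == 0.0
assert abs(ncc(t1, t1_proxy) - 1.0) < 1e-9   # NCC is invariant to linear scaling
```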

  5. A fractal concentration area method for assigning a color palette for image representation

    NASA Astrophysics Data System (ADS)

    Cheng, Qiuming; Li, Qingmou

    2002-05-01

    Displaying the remotely sensed image with a proper color palette is the first task in any kind of image processing and pattern recognition in GIS and image processing environments. The purpose of displaying the image should be not only to provide a visual representation of the variance of the image, although this has been the primary objective of most conventional methods, but also to reflect, through the color palette, real-world features on the ground, which must be the primary objective of employing remotely sensed data. Whereas most conventional methods focus only on the first purpose of image representation, the concentration-area (C-A plot) fractal method proposed in this paper aims to meet both purposes on the basis of pixel values and pixel value frequency distribution as well as spatial and geometrical properties of image patterns. The C-A method can be used to establish power-law relationships between the area A(≥s) with pixel values greater than s and the pixel value s itself after plotting these values on log-log paper. A number of straight-line segments can be manually or automatically fitted to the points on the log-log paper, each representing a power-law relationship between the area A and the cutoff pixel value s in a particular range. These straight-line segments yield a group of cutoff values on the basis of which the image can be classified into discrete classes or zones. These zones usually correspond to real-world features on the ground. A Windows program has been prepared in ActiveX format for implementing the C-A method and integrating it into other GIS and image processing systems. A case study of Landsat TM band 5 demonstrates the application of the method and the flexibility of the computer program.
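
    The core C-A computation is easy to sketch: for each cutoff s, count the pixels with value ≥ s and plot log A against log s; segments fitted to that curve yield the classification cutoffs. The pixel values below are synthetic.

```python
# Sketch of the concentration-area (C-A) computation: for each cutoff s, the
# area A(>=s) is the number of pixels with value >= s; straight segments
# fitted to log A versus log s yield the classification cutoffs.

import math

def concentration_area(pixels, cutoffs):
    # Returns (log s, log A(>=s)) pairs for plotting / segment fitting.
    pts = []
    for s in cutoffs:
        area = sum(1 for v in pixels if v >= s)
        if area > 0:
            pts.append((math.log(s), math.log(area)))
    return pts

pixels = [1, 2, 2, 3, 5, 8, 13, 21, 34, 55]   # synthetic pixel values
pts = concentration_area(pixels, [1, 2, 5, 13, 34])
# A(>=s) is non-increasing in s, as the power-law fit requires.
areas = [a for _, a in pts]
assert areas == sorted(areas, reverse=True)
```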

  6. A multi-stage color model revisited: implications for a gene therapy cure for red-green colorblindness.

    PubMed

    Mancuso, Katherine; Mauck, Matthew C; Kuchenbecker, James A; Neitz, Maureen; Neitz, Jay

    2010-01-01

    In 1993, DeValois and DeValois proposed a 'multi-stage color model' to explain how the cortex is ultimately able to deconfound the responses of neurons receiving input from three cone types in order to produce separate red-green and blue-yellow systems, as well as segregate luminance percepts (black-white) from color. This model extended the biological implementation of Hurvich and Jameson's Opponent-Process Theory of color vision, a two-stage model encompassing the three cone types combined in a later opponent organization, which has been the accepted dogma in color vision. DeValois' model attempts to satisfy the long-remaining question of how the visual system separates luminance information from color, but what are the cellular mechanisms that establish the complicated neural wiring and higher-order operations required by the Multi-stage Model? During the last decade and a half, results from molecular biology have shed new light on the evolution of primate color vision, thus constraining the possibilities for the visual circuits. The evolutionary constraints allow for an extension of DeValois' model that is more explicit about the biology of color vision circuitry, and it predicts that human red-green colorblindness can be cured using a retinal gene therapy approach to add the missing photopigment, without any additional changes to the post-synaptic circuitry.

  7. Hyperspectral image analysis using artificial color

    NASA Astrophysics Data System (ADS)

    Fu, Jian; Caulfield, H. John; Wu, Dongsheng; Tadesse, Wubishet

    2010-03-01

    By definition, HSC (HyperSpectral Camera) images are much richer in spectral data than, say, a COTS (Commercial-Off-The-Shelf) color camera. But data are not information. If we do the task right, useful information can be derived from the data in HSC images. Nature faced essentially the identical problem. The incident light is so complex spectrally that measuring it with high resolution would provide far more data than animals can handle in real time. Nature's solution was to perform irreversible POCS (Projections Onto Convex Sets) to achieve huge reductions in data with minimal reduction in information. Thus we can arrange for our manmade systems to do what nature did: project the HSC image onto two or more broad, overlapping curves. The task we have undertaken in the last few years is to develop this idea, which we call Artificial Color. What we report here is the use of the measured HSC image data projected onto two or three convex, overlapping, broad curves in analogy with the sensitivity curves of human cone cells. Testing two quite different HSC images in that manner produced the desired result: good discrimination or segmentation that can be done very simply and hence is likely to be doable in real time with specialized computers. Using POCS on the HSC data to reduce the processing complexity produced excellent discrimination in those two cases. For technical reasons discussed here, the figures of merit for the kind of pattern recognition we use are incommensurate with the figures of merit of conventional pattern recognition. We used some force fitting to make a comparison nevertheless, because it shows what is also obvious qualitatively: in our tasks, our method works better.
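
    The projection idea can be sketched with Gaussians standing in for the broad, overlapping cone-like curves; the actual curves used by the authors are not specified here, so centers, widths, and spectra below are all illustrative.

```python
# Sketch of the Artificial Color projection: each hyperspectral pixel
# spectrum is reduced to a few numbers by projecting it onto broad,
# overlapping response curves (Gaussians stand in for cone-like curves).

import math

def gaussian(x, mu, sigma):
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

def project(spectrum, wavelengths, centers, sigma=60.0):
    # One projection value per broad curve: a weighted sum over all bands.
    return [sum(v * gaussian(w, mu, sigma) for v, w in zip(spectrum, wavelengths))
            for mu in centers]

wavelengths = list(range(400, 701, 10))             # 31 bands, 400-700 nm
red_leaning  = [1.0 if w > 600 else 0.1 for w in wavelengths]
blue_leaning = [1.0 if w < 500 else 0.1 for w in wavelengths]
centers = (450.0, 550.0, 610.0)                     # three overlapping curves

r_proj = project(red_leaning, wavelengths, centers)
b_proj = project(blue_leaning, wavelengths, centers)
# The two spectra separate cleanly in the reduced 3-D space.
assert r_proj[2] > b_proj[2] and b_proj[0] > r_proj[0]
```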

  8. Highly Efficient and Excitation Tunable Two-Photon Luminescence Platform For Targeted Multi-Color MDRB Imaging Using Graphene Oxide

    NASA Astrophysics Data System (ADS)

    Pramanik, Avijit; Fan, Zhen; Chavva, Suhash Reddy; Sinha, Sudarson Sekhar; Ray, Paresh Chandra

    2014-08-01

    Infection by multiple-drug-resistant bacteria (MDRB) is one of the top three threats to human health according to the World Health Organization (WHO). Due to its large penetration depth and reduced photodamage, two-photon imaging is a highly promising technique for clinical MDRB diagnostics. Since most commercially available water-soluble organic dyes have low two-photon absorption cross-sections and a rapid photobleaching tendency, their applications in two-photon imaging are severely limited. Driven by this need, in this article we report extremely high two-photon absorption from aptamer-conjugated graphene oxide (σ2PA = 50800 GM), which can serve as a highly efficient two-photon fluorescent probe for MDRB imaging. The reported experimental data show that the two-photon photoluminescence imaging color, as well as the luminescence peak position, can be tuned from deep blue to red just by varying the excitation wavelength, without changing the chemical composition and size. We have demonstrated that the graphene oxide (GO) based two-photon fluorescence probe is capable of imaging multiple-antibiotic-resistant MRSA in the first and second biological transparency windows using the 760-1120 nm wavelength range.

  9. Visibility enhancement of color images using Type-II fuzzy membership function

    NASA Astrophysics Data System (ADS)

    Singh, Harmandeep; Khehra, Baljit Singh

    2018-04-01

    Images taken in poor environmental conditions decrease the visibility and hidden information of digital images. Therefore, image enhancement techniques are necessary for improving the significant details of these images. An extensive review has shown that histogram-based enhancement techniques greatly suffer from over/under enhancement issues, while fuzzy-based enhancement techniques suffer from over/under saturated pixel problems. In this paper, a novel Type-II fuzzy-based image enhancement technique has been proposed for improving the visibility of images. The Type-II fuzzy logic can automatically extract the local atmospheric light and roughly eliminate the atmospheric veil in local detail enhancement. The proposed technique has been evaluated on 10 well-known weather-degraded color images and is also compared with four well-known existing image enhancement techniques. The experimental results reveal that the proposed technique outperforms the others in terms of visible edge ratio, color gradients, and number of saturated pixels.
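
    A minimal interval Type-II membership for intensity enhancement might look like the following; the upper/lower bounds via paired exponents and their simple average are an illustrative reduction, not the paper's exact membership functions.

```python
# Sketch of an interval Type-II fuzzy membership for intensity enhancement:
# a primary (Type-I) membership plus upper/lower bounds whose average drives
# the stretched output. The exponent alpha is illustrative.

def type2_enhance(pixel, vmin=0, vmax=255, alpha=0.5):
    mu = (pixel - vmin) / (vmax - vmin)        # Type-I membership in [0, 1]
    upper = mu ** alpha                        # upper membership bound
    lower = mu ** (1.0 / alpha)                # lower membership bound
    mu2 = (upper + lower) / 2.0                # interval Type-II reduction
    return round(mu2 * (vmax - vmin) + vmin)

row = [10, 60, 128, 200, 250]
enhanced = [type2_enhance(p) for p in row]
assert enhanced[0] >= 0 and enhanced[-1] <= 255
assert all(a <= b for a, b in zip(enhanced, enhanced[1:]))   # order preserved
```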

  10. Luminance contours can gate afterimage colors and "real" colors.

    PubMed

    Anstis, Stuart; Vergeer, Mark; Van Lier, Rob

    2012-09-06

    It has long been known that colored images may elicit afterimages in complementary colors. We have already shown (Van Lier, Vergeer, & Anstis, 2009) that one and the same adapting image may result in different afterimage colors, depending on the test contours presented after the colored image. The color of the afterimage depends on two adapting colors, those both inside and outside the test. Here, we further explore this phenomenon and show that the color-contour interactions shown for afterimage colors also occur for "real" colors. We argue that similar mechanisms apply for both types of stimulation.

  11. Generating description with multi-feature fusion and saliency maps of image

    NASA Astrophysics Data System (ADS)

    Liu, Lisha; Ding, Yuxuan; Tian, Chunna; Yuan, Bo

    2018-04-01

    Generating a description for an image can be regarded as visual understanding. The task spans artificial intelligence, machine learning, natural language processing and many other areas. In this paper, we present a model that generates descriptions for images based on an RNN (recurrent neural network) with object attention and multiple image features. Deep recurrent neural networks have excellent performance in machine translation, so we use them to generate natural sentence descriptions for images. The proposed method uses a single CNN (convolutional neural network) trained on ImageNet to extract image features. However, this alone cannot adequately capture the content of images, as it may focus only on the object areas of an image. So we add scene information to the image features using a CNN trained on Places205. Experiments show that the model with multiple features extracted by two CNNs performs better than the one with a single feature. In addition, we apply saliency weights on images to emphasize the salient objects. We evaluate our model on MSCOCO using public metrics, and the results show that our model performs better than several state-of-the-art methods.
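
    At its simplest, the multi-feature fusion step reduces to concatenating the two CNN feature vectors with an optional saliency weight; the vectors below are synthetic stand-ins for real CNN activations, and the weighting scheme is illustrative.

```python
# Sketch of the multi-feature fusion described above: object features
# (ImageNet-trained CNN) and scene features (Places205-trained CNN) are
# concatenated, with an optional saliency weight on the object part.
# All vectors are synthetic placeholders for real activations.

def fuse_features(object_feat, scene_feat, saliency=1.0):
    return [saliency * v for v in object_feat] + scene_feat

obj = [0.2, 0.9, 0.1]      # stand-in for ImageNet CNN object features
scene = [0.5, 0.3]         # stand-in for Places205 CNN scene features
fused = fuse_features(obj, scene, saliency=2.0)
assert len(fused) == len(obj) + len(scene)
```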

  12. Quantum color image watermarking based on Arnold transformation and LSB steganography

    NASA Astrophysics Data System (ADS)

    Zhou, Ri-Gui; Hu, Wenwen; Fan, Ping; Luo, Gaofeng

    In this paper, a quantum color image watermarking scheme is proposed through twice-scrambling by Arnold transformations and steganography of the least significant bit (LSB). Both the carrier image and the watermark image are represented by the novel quantum representation of color digital images model (NCQI). The image sizes for the carrier and watermark are assumed to be 2^n×2^n and 2^(n-1)×2^(n-1), respectively. At first, the watermark is scrambled into a disordered form through the image preprocessing technique of exchanging the image pixel positions and altering the color information based on Arnold transforms, simultaneously. Then, the scrambled watermark with 2^(n-1)×2^(n-1) image size and 24-qubit grayscale is further expanded to an image with size 2^n×2^n and 6-qubit grayscale using the nearest-neighbor interpolation method. Finally, the scrambled and expanded watermark is embedded into the carrier by the LSB steganography scheme, and a key image with 2^n×2^n size and 3-qubit information is generated at the same time; only with this key image can the original watermark be retrieved. The extraction of the watermark is the reverse process of embedding, achieved by applying a sequence of operations in the reverse order. Simulation-based experiments involving different carrier and watermark images (i.e. conventional or non-quantum) were run using the classical computer's MATLAB 2014b software, illustrating that the present method performs well in terms of three criteria: visual quality, robustness and steganography capacity.
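
    A classical (non-quantum) analogue of the two stages is easy to sketch: an Arnold cat-map scramble of pixel positions followed by LSB embedding. On a 4×4 grid the Arnold map happens to have period 3, which the sketch uses to show invertibility; sizes and iteration counts are illustrative.

```python
# Classical analogue of the two stages above: Arnold cat-map position
# scrambling followed by least-significant-bit embedding.

def arnold(x, y, n):
    # One Arnold transformation step on an n x n grid.
    return (x + y) % n, (x + 2 * y) % n

def scramble(img, iterations):
    n = len(img)
    for _ in range(iterations):
        out = [[0] * n for _ in range(n)]
        for x in range(n):
            for y in range(n):
                nx, ny = arnold(x, y, n)
                out[nx][ny] = img[x][y]
        img = out
    return img

def embed_lsb(carrier_pixel, watermark_bit):
    return (carrier_pixel & ~1) | watermark_bit

def extract_lsb(stego_pixel):
    return stego_pixel & 1

bits = [1, 0, 1, 1]
carrier = [200, 201, 202, 203]
stego = [embed_lsb(c, b) for c, b in zip(carrier, bits)]
assert [extract_lsb(s) for s in stego] == bits

# On a 4x4 grid the Arnold map has period 3: three applications restore
# the original image, demonstrating that the scramble is invertible.
img = [[i * 4 + j for j in range(4)] for i in range(4)]
assert scramble(img, 3) == img
```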

  13. A method for evaluating image quality of monochrome and color displays based on luminance by use of a commercially available color digital camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tokurei, Shogo, E-mail: shogo.tokurei@gmail.com; Morishita, Junji, E-mail: junjim@med.kyushu-u.ac.jp

    Purpose: The aim of this study is to propose a method for the quantitative evaluation of image quality of both monochrome and color liquid-crystal displays (LCDs) using a commercially available color digital camera. Methods: The intensities of the unprocessed red (R), green (G), and blue (B) signals of a camera vary depending on the spectral sensitivity of the image sensor used in the camera. For consistent evaluation of image quality for both monochrome and color LCDs, the unprocessed RGB signals of the camera were converted into gray scale signals that corresponded to the luminance of the LCD. Gray scale signals for the monochrome LCD were evaluated by using only the green channel signals of the camera. For the color LCD, the RGB signals of the camera were converted into gray scale signals by employing weighting factors (WFs) for each RGB channel. A line image displayed on the color LCD was simulated on the monochrome LCD by using a software application for subpixel driving in order to verify the WF-based conversion method. Furthermore, the results obtained by different types of commercially available color cameras and a photometric camera were compared to examine the consistency of the authors’ method. Finally, image quality for both the monochrome and color LCDs was assessed by measuring modulation transfer functions (MTFs) and Wiener spectra (WS). Results: The authors’ results demonstrated that the proposed method for calibrating the spectral sensitivity of the camera resulted in a consistent and reliable evaluation of the luminance of monochrome and color LCDs. The MTFs and WS showed different characteristics for the two LCD types owing to differences in the subpixel structure. The MTF in the vertical direction of the color LCD was superior to that of the monochrome LCD, although the WS in the vertical direction of the color LCD was inferior to that of the monochrome LCD as a result of luminance fluctuations in RGB subpixels. Conclusions: The
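
    The weighting-factor conversion can be sketched as a weighted sum of the RGB channels; the Rec. 709-style weights below are illustrative, not the camera-specific factors the authors calibrate.

```python
# Sketch of the weighting-factor (WF) conversion described above:
# unprocessed RGB camera signals are combined into a gray-scale signal
# proportional to display luminance. Weights are illustrative (Rec. 709-like).

def rgb_to_gray(r, g, b, w=(0.2126, 0.7152, 0.0722)):
    return w[0] * r + w[1] * g + w[2] * b

def monochrome_gray(g):
    # For the monochrome LCD, only the green-channel signal is used.
    return g

assert abs(rgb_to_gray(100, 100, 100) - 100.0) < 1e-6
assert monochrome_gray(42) == 42
```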

  14. Digital image modification detection using color information and its histograms.

    PubMed

    Zhou, Haoyu; Shen, Yue; Zhu, Xinghui; Liu, Bo; Fu, Zigang; Fan, Na

    2016-09-01

    The rapid development of many open source and commercial image editing software packages makes the authenticity of digital images questionable. Copy-move forgery is one of the most widely used tampering techniques to create desirable objects or conceal undesirable objects in a scene. Existing techniques reported in the literature to detect such tampering aim to improve the robustness of these methods against the use of JPEG compression, blurring, noise, or other types of post-processing operations. These post-processing operations are frequently used with the intention to conceal tampering and reduce tampering clues. A robust method based on color moments and five other image descriptors is proposed in this paper. The method divides the image into fixed-size overlapping blocks. A clustering operation divides the entire search space into smaller pieces with similar color distribution. Blocks from the tampered regions will reside within the same cluster, since both copied and moved regions have similar color distributions. Five image descriptors are used to extract block features, which makes the method more robust to post-processing operations. An ensemble of deep compositional pattern-producing neural networks is trained with these extracted features. Similarity among feature vectors within clusters indicates possible forged regions. Experimental results show that the proposed method can detect copy-move forgery even if an image was distorted by gamma correction, additive white Gaussian noise, JPEG compression, or blurring. Copyright © 2016. Published by Elsevier Ireland Ltd.
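
    The color-moment block features can be sketched as mean, standard deviation, and skewness per block; near-identical moment vectors flag candidate copy-move pairs. The block values below are synthetic.

```python
# Sketch of color-moment block features as described above: mean, standard
# deviation, and skewness of a block's channel values. Blocks with
# near-identical moment vectors are candidate copy-move pairs.

import math

def color_moments(channel):
    n = len(channel)
    mean = sum(channel) / n
    var = sum((v - mean) ** 2 for v in channel) / n
    std = math.sqrt(var)
    skew_sum = sum((v - mean) ** 3 for v in channel) / n
    # Signed cube root of the third central moment.
    skew = math.copysign(abs(skew_sum) ** (1.0 / 3.0), skew_sum)
    return (mean, std, skew)

block_a = [10, 12, 11, 13, 10, 12, 11, 13]
block_b = list(block_a)            # a copied (moved) block
block_c = [200, 10, 180, 20, 150, 30, 120, 40]
assert color_moments(block_a) == color_moments(block_b)
assert color_moments(block_a) != color_moments(block_c)
```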

  15. Color naming: color scientists do it between Munsell sheets of color

    NASA Astrophysics Data System (ADS)

    Beretta, Giordano B.; Moroney, Nathan M.

    2010-01-01

    With the advent of high dynamic range imaging and wide gamut color spaces, gamut mapping algorithms have to nudge image colors much more drastically to constrain them within a rendering device's gamut. Classical colorimetry is concerned with color matching and the developed color difference metrics are for small distances. For larger distances, categorization becomes a more useful concept. In the gamut mapping case, lexical distance induced by color names is a more useful metric, which translates to the condition that a nudged color may not cross a name boundary. The new problem is to find these color name boundaries. We compare the experimental procedures used for color naming by linguists, ethnologists, and color scientists and propose a methodology that leads to robust repeatable experiments.

  16. Color imaging of Mars by the High Resolution Imaging Science Experiment (HiRISE)

    USGS Publications Warehouse

    Delamere, W.A.; Tornabene, L.L.; McEwen, A.S.; Becker, K.; Bergstrom, J.W.; Bridges, N.T.; Eliason, E.M.; Gallagher, D.; Herkenhoff, K. E.; Keszthelyi, L.; Mattson, S.; McArthur, G.K.; Mellon, M.T.; Milazzo, M.; Russell, P.S.; Thomas, N.

    2010-01-01

    HiRISE has been producing a large number of scientifically useful color products of Mars and other planetary objects. The three broad spectral bands, coupled with the highly sensitive 14 bit detectors and time delay integration, enable detection of subtle color differences. The very high spatial resolution of HiRISE can augment the mineralogic interpretations based on multispectral (THEMIS) and hyperspectral datasets (TES, OMEGA and CRISM) and thereby enable detailed geologic and stratigraphic interpretations at meter scales. In addition to providing some examples of color images and their interpretation, we describe the processing techniques used to produce them and note some of the minor artifacts in the output. We also provide an example of how HiRISE color products can be effectively used to expand mineral and lithologic mapping provided by CRISM data products that are backed by other spectral datasets. The utility of high quality color data for understanding geologic processes on Mars has been one of the major successes of HiRISE. © 2009 Elsevier Inc.

  17. Acquisition and visualization techniques for narrow spectral color imaging.

    PubMed

    Neumann, László; García, Rafael; Basa, János; Hegedüs, Ramón

    2013-06-01

    This paper introduces a new approach in narrow-band imaging (NBI). Existing NBI techniques generate images by selecting discrete bands over the full visible spectrum or an even wider spectral range. In contrast, here we perform the sampling with filters covering a tight spectral window. This image acquisition method, named narrow spectral imaging, can be particularly useful when optical information is only available within a narrow spectral window, such as in the case of deep-water transmittance, which constitutes the principal motivation of this work. In this study we demonstrate the potential of the proposed photographic technique on nonunderwater scenes recorded under controlled conditions. To this end three multilayer narrow bandpass filters were employed, which transmit at 440, 456, and 470 nm bluish wavelengths, respectively. Since the differences among the images captured in such a narrow spectral window can be extremely small, both image acquisition and visualization require a novel approach. First, high-bit-depth images were acquired with multilayer narrow-band filters either placed in front of the illumination or mounted on the camera lens. Second, a color-mapping method is proposed, using which the input data can be transformed onto the entire display color gamut with a continuous and perceptually nearly uniform mapping, while ensuring optimally high information content for human perception.

  18. Full-color large-scaled computer-generated holograms for physical and non-physical objects

    NASA Astrophysics Data System (ADS)

    Matsushima, Kyoji; Tsuchiyama, Yasuhiro; Sonobe, Noriaki; Masuji, Shoya; Yamaguchi, Masahiro; Sakamoto, Yuji

    2017-05-01

    Several full-color high-definition CGHs are created for reconstructing 3D scenes that include real, physical objects. The fields of the physical objects are generated or captured by employing three techniques: a 3D scanner, synthetic aperture digital holography, and multi-viewpoint images. Full-color reconstruction of the high-definition CGHs is realized by RGB color filters. Optical reconstructions are presented to verify these techniques.

  19. False-color L-band image of Manaus region of Brazil

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This false-color L-band image of the Manaus region of Brazil was acquired by the Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar (SIR-C/X-SAR) flying on the Space Shuttle Endeavour on its 46th orbit. The area shown is approximately 8 kilometers by 40 kilometers (5 by 25 miles). At the top of the image are the Solimoes and Rio Negro rivers. The image is centered at about 3 degrees south latitude and 61 degrees west longitude. Blue areas show low returns at VV polarization; hence the bright blue colors of the smooth river surfaces. Green areas in the image are heavily forested, while blue areas are either cleared forest or open water. The yellow and red areas are flooded forest. Between Rio Solimoes and Rio Negro, a road can be seen running from some cleared areas (visible as blue rectangles north of Rio Solimoes) north toward a tributary of Rio Negro. The Jet Propulsion Laboratory alternative photo number is P-43895.

  20. Qualitative evaluations and comparisons of six night-vision colorization methods

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng; Reese, Kristopher; Blasch, Erik; McManamon, Paul

    2013-05-01

    Current multispectral night vision (NV) colorization techniques can manipulate images to produce colorized images that closely resemble natural scenes. The colorized NV images can enhance human perception by improving observer object classification and reaction times, especially for low light conditions. This paper focuses on the qualitative (subjective) evaluations and comparisons of six NV colorization methods. The multispectral images include visible (Red-Green-Blue), near infrared (NIR), and long wave infrared (LWIR) images. The six colorization methods are channel-based color fusion (CBCF), statistic matching (SM), histogram matching (HM), joint-histogram matching (JHM), statistic matching then joint-histogram matching (SM-JHM), and the lookup table (LUT). Four categories of quality measurements are used for the qualitative evaluations: contrast, detail, colorfulness, and overall quality. Each measurement is rated on a 1-to-3 scale representing low, average, and high quality, respectively. Specifically, high contrast (a rated score of 3) means an adequate level of brightness and contrast. High detail represents high clarity of detailed contents while maintaining low artifacts. High colorfulness preserves more natural colors (i.e., closely resembles the daylight image). Overall quality is determined from the NV image compared to the reference image. Nine sets of multispectral NV images were used in our experiments. For each set, the six colorized NV images (produced from NIR and LWIR images) were concurrently presented to users along with the reference color (RGB) image (taken at daytime). A total of 67 subjects passed a screening test ("Ishihara Color Blindness Test") and were asked to evaluate the 9-set colorized images. The experimental results showed the quality order of colorization methods from the best to the worst: CBCF > SM > SM-JHM > LUT > JHM > HM. It is anticipated that this work will provide a benchmark for NV

  1. Color Image Classification Using Block Matching and Learning

    NASA Astrophysics Data System (ADS)

    Kondo, Kazuki; Hotta, Seiji

    In this paper, we propose block matching and learning for color image classification. In our method, training images are partitioned into small blocks. Given a test image, it is also partitioned into small blocks, and mean blocks corresponding to each test block are calculated from neighboring training blocks. Our method classifies a test image into the class that has the shortest total sum of distances between the mean blocks and the test ones. We also propose a learning method for reducing the memory requirement. Experimental results show that our classification outperforms other classifiers such as a support vector machine with bag of keypoints.
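The block-matching idea can be sketched minimally as below. For brevity this sketch scores each test block by its single nearest training block rather than the mean of neighboring training blocks described in the abstract, and the class labels and block size are illustrative.

```python
import numpy as np

def partition(img, bs):
    """Split an image into non-overlapping bs x bs blocks (flattened)."""
    h, w = img.shape[:2]
    return [img[y:y + bs, x:x + bs].ravel()
            for y in range(0, h - bs + 1, bs)
            for x in range(0, w - bs + 1, bs)]

def classify(test_img, train_imgs_by_class, bs=4):
    """Assign the class whose training blocks give the smallest total
    distance to the test image's blocks (nearest-block simplification
    of the paper's mean-block computation)."""
    test_blocks = partition(test_img, bs)
    scores = {}
    for label, imgs in train_imgs_by_class.items():
        pool = np.array([b for im in imgs for b in partition(im, bs)])
        scores[label] = sum(np.min(np.linalg.norm(pool - tb, axis=1))
                            for tb in test_blocks)
    return min(scores, key=scores.get)
```

A usage example: with one all-dark and one all-bright training image, a mostly bright test image is assigned to the bright class because its blocks lie closer to that class's block pool.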

  2. Evaluation of VIIRS ocean color products

    NASA Astrophysics Data System (ADS)

    Wang, Menghua; Liu, Xiaoming; Jiang, Lide; Son, SeungHyun; Sun, Junqiang; Shi, Wei; Tan, Liqin; Naik, Puneeta; Mikelsons, Karlis; Wang, Xiaolong; Lance, Veronica

    2014-11-01

    The Suomi National Polar-orbiting Partnership (SNPP) satellite was successfully launched on October 28, 2011. The Visible Infrared Imaging Radiometer Suite (VIIRS) onboard the Suomi NPP, which has 22 spectral bands (from visible to infrared) similar to NASA's Moderate Resolution Imaging Spectroradiometer (MODIS), is a multi-disciplinary sensor providing observations of the Earth's atmosphere, land, and ocean properties. In this paper, we provide some evaluations and assessments of VIIRS ocean color data products, or ocean color Environmental Data Records (EDR), including normalized water-leaving radiance spectra nLw(λ) at five VIIRS spectral bands, chlorophyll-a (Chl-a) concentration, and the water diffuse attenuation coefficient at the wavelength of 490 nm, Kd(490). Specifically, VIIRS ocean color products derived from the NOAA Multi-Sensor Level-1 to Level-2 (NOAA-MSL12) ocean color data processing system are evaluated and compared with MODIS ocean color products and in situ measurements. MSL12 is now NOAA's official ocean color data processing system for VIIRS. In addition, VIIRS Sensor Data Records (SDR, or Level-1B data) have been evaluated. In particular, the VIIRS SDR and ocean color EDR have been compared with a series of in situ data from the Marine Optical Buoy (MOBY) in the waters off Hawaii. A notable discrepancy in global deep-water Chl-a derived from MODIS and VIIRS between 2012 and 2013 is observed. This discrepancy is attributed to an SDR (or Level-1B data) calibration issue, particularly related to the VIIRS green band at 551 nm. To resolve this calibration issue, we have worked on our own sensor calibration by incorporating the lunar calibration effect into the current calibration method. The ocean color products derived from our newly calibrated SDR in the South Pacific Gyre show that the Chl-a differences between 2012 and 2013 are significantly reduced. Although there are still some issues, our results show that VIIRS is capable of providing high-quality global

  3. A color management system for multi-colored LED lighting

    NASA Astrophysics Data System (ADS)

    Chakrabarti, Maumita; Thorseth, Anders; Jepsen, Jørgen; Corell, Dennis D.; Dam-Hansen, Carsten

    2015-09-01

    A new color control system is described and implemented for a five-color LED light engine, covering a wide white gamut. The system combines a new way of using pre-calibrated lookup tables and a rule-based optimization of chromaticity distance from the Planckian locus with a calibrated color sensor. The color sensor monitors the chromaticity of the mixed light providing the correction factor for the current driver by using the generated lookup table. The long term stability and accuracy of the system will be experimentally investigated with target tolerance within a circle radius of 0.0013 in the uniform chromaticity diagram (CIE1976).

  4. Iapetus: Unique Surface Properties and a Global Color Dichotomy from Cassini Imaging

    NASA Astrophysics Data System (ADS)

    Denk, Tilmann; Neukum, Gerhard; Roatsch, Thomas; Porco, Carolyn C.; Burns, Joseph A.; Galuba, Götz G.; Schmedemann, Nico; Helfenstein, Paul; Thomas, Peter C.; Wagner, Roland J.; West, Robert A.

    2010-01-01

    Since 2004, Saturn’s moon Iapetus has been observed repeatedly with the Imaging Science Subsystem of the Cassini spacecraft. The images show numerous impact craters down to the resolution limit of ~10 meters per pixel. Small, bright craters within the dark hemisphere indicate a dark blanket thickness on the order of meters or less. Dark, equator-facing and bright, poleward-facing crater walls suggest temperature-driven water-ice sublimation as the process responsible for local albedo patterns. Imaging data also reveal a global color dichotomy, wherein both dark and bright materials on the leading side have a substantially redder color than the respective trailing-side materials. This global pattern indicates an exogenic origin for the redder leading-side parts and suggests that the global color dichotomy initiated the thermal formation of the global albedo dichotomy.

  5. Iapetus: unique surface properties and a global color dichotomy from Cassini imaging.

    PubMed

    Denk, Tilmann; Neukum, Gerhard; Roatsch, Thomas; Porco, Carolyn C; Burns, Joseph A; Galuba, Götz G; Schmedemann, Nico; Helfenstein, Paul; Thomas, Peter C; Wagner, Roland J; West, Robert A

    2010-01-22

    Since 2004, Saturn's moon Iapetus has been observed repeatedly with the Imaging Science Subsystem of the Cassini spacecraft. The images show numerous impact craters down to the resolution limit of approximately 10 meters per pixel. Small, bright craters within the dark hemisphere indicate a dark blanket thickness on the order of meters or less. Dark, equator-facing and bright, poleward-facing crater walls suggest temperature-driven water-ice sublimation as the process responsible for local albedo patterns. Imaging data also reveal a global color dichotomy, wherein both dark and bright materials on the leading side have a substantially redder color than the respective trailing-side materials. This global pattern indicates an exogenic origin for the redder leading-side parts and suggests that the global color dichotomy initiated the thermal formation of the global albedo dichotomy.

  6. Multi-scale pixel-based image fusion using multivariate empirical mode decomposition.

    PubMed

    Rehman, Naveed ur; Ehsan, Shoaib; Abdullah, Syed Muhammad Umer; Akhtar, Muhammad Jehanzaib; Mandic, Danilo P; McDonald-Maier, Klaus D

    2015-05-08

    A novel scheme to perform the fusion of multiple images using the multivariate empirical mode decomposition (MEMD) algorithm is proposed. Standard multi-scale fusion techniques make a priori assumptions regarding input data, whereas standard univariate empirical mode decomposition (EMD)-based fusion techniques suffer from inherent mode mixing and mode misalignment issues, characterized respectively by either a single intrinsic mode function (IMF) containing multiple scales or the same indexed IMFs corresponding to multiple input images carrying different frequency information. We show that MEMD overcomes these problems by being fully data adaptive and by aligning common frequency scales from multiple channels, thus enabling their comparison at a pixel level and subsequent fusion at multiple data scales. We then demonstrate the potential of the proposed scheme on a large dataset of real-world multi-exposure and multi-focus images and compare the results against those obtained from standard fusion algorithms, including the principal component analysis (PCA), discrete wavelet transform (DWT) and non-subsampled contourlet transform (NCT). A variety of image fusion quality measures are employed for the objective evaluation of the proposed method. We also report the results of a hypothesis testing approach on our large image dataset to identify statistically-significant performance differences.

  7. Multi-Scale Pixel-Based Image Fusion Using Multivariate Empirical Mode Decomposition

    PubMed Central

    Rehman, Naveed ur; Ehsan, Shoaib; Abdullah, Syed Muhammad Umer; Akhtar, Muhammad Jehanzaib; Mandic, Danilo P.; McDonald-Maier, Klaus D.

    2015-01-01

    A novel scheme to perform the fusion of multiple images using the multivariate empirical mode decomposition (MEMD) algorithm is proposed. Standard multi-scale fusion techniques make a priori assumptions regarding input data, whereas standard univariate empirical mode decomposition (EMD)-based fusion techniques suffer from inherent mode mixing and mode misalignment issues, characterized respectively by either a single intrinsic mode function (IMF) containing multiple scales or the same indexed IMFs corresponding to multiple input images carrying different frequency information. We show that MEMD overcomes these problems by being fully data adaptive and by aligning common frequency scales from multiple channels, thus enabling their comparison at a pixel level and subsequent fusion at multiple data scales. We then demonstrate the potential of the proposed scheme on a large dataset of real-world multi-exposure and multi-focus images and compare the results against those obtained from standard fusion algorithms, including the principal component analysis (PCA), discrete wavelet transform (DWT) and non-subsampled contourlet transform (NCT). A variety of image fusion quality measures are employed for the objective evaluation of the proposed method. We also report the results of a hypothesis testing approach on our large image dataset to identify statistically-significant performance differences. PMID:26007714

  8. Hyperspectral imaging of cuttlefish camouflage indicates good color match in the eyes of fish predators.

    PubMed

    Chiao, Chuan-Chin; Wickiser, J Kenneth; Allen, Justine J; Genter, Brock; Hanlon, Roger T

    2011-05-31

    Camouflage is a widespread phenomenon throughout nature and an important antipredator tactic in natural selection. Many visual predators have keen color perception, and thus camouflage patterns should provide some degree of color matching in addition to other visual factors such as pattern, contrast, and texture. Quantifying camouflage effectiveness in the eyes of the predator is a challenge from the perspectives of both biology and optical imaging technology. Here we take advantage of hyperspectral imaging (HSI), which records full-spectrum light data, to simultaneously visualize color match and pattern match in the spectral and the spatial domains, respectively. Cuttlefish can dynamically camouflage themselves on any natural substrate and, despite their colorblindness, produce body patterns that appear to have high-fidelity color matches to the substrate when viewed directly by humans or with RGB images. Live camouflaged cuttlefish on natural backgrounds were imaged using HSI, and subsequent spectral analysis revealed that most reflectance spectra of individual cuttlefish and substrates were similar, rendering the color match possible. Modeling color vision of potential di- and trichromatic fish predators of cuttlefish corroborated the spectral match analysis and demonstrated that camouflaged cuttlefish show good color match as well as pattern match in the eyes of fish predators. These findings (i) indicate the strong potential of HSI technology to enhance studies of biological coloration and (ii) provide supporting evidence that cuttlefish can produce color-coordinated camouflage on natural substrates despite lacking color vision.

  9. Taxonomy of multi-focal nematode image stacks by a CNN based image fusion approach.

    PubMed

    Liu, Min; Wang, Xueping; Zhang, Hongzhong

    2018-03-01

    In the biomedical field, digital multi-focal images are very important for documentation and communication of specimen data, because the morphological information of a transparent specimen can be captured in the form of a stack of high-quality images. Given biomedical image stacks containing multi-focal images, how to efficiently extract effective features from all layers to classify the image stacks is still an open question. We propose a deep convolutional neural network (CNN) image fusion based multilinear approach for the taxonomy of multi-focal image stacks. A deep CNN based image fusion technique is used to combine the relevant information of the multi-focal images within a given image stack into a single image, which is more informative and complete than any single image in the given stack. In addition, the multi-focal images within a stack are fused along 3 orthogonal directions, and the multiple features extracted from the fused images along the different directions are combined by canonical correlation analysis (CCA). Because multi-focal image stacks represent the effect of different factors - texture, shape, different instances within the same class and different classes of objects - we embed the deep CNN based image fusion method within a multilinear framework to propose an image fusion based multilinear classifier. The experimental results on nematode multi-focal image stacks demonstrated that the deep CNN image fusion based multilinear classifier can reach a higher classification rate (95.7%) than the previous multilinear based approach (88.7%), even though we only use the texture feature instead of the combination of texture and shape features as in the previous work. The proposed deep CNN image fusion based multilinear approach shows great potential for building an automated nematode taxonomy system for nematologists, and it is effective for classifying multi-focal image stacks. Copyright © 2018 Elsevier B.V. All rights reserved.
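To give the flavor of fusing a multi-focal stack into a single, more informative image, the sketch below uses a classical per-pixel focus measure (the absolute Laplacian response) as a simple stand-in for the learned CNN fusion described above; it is not the paper's method.

```python
import numpy as np

def fuse_stack(stack):
    """Fuse a multi-focal stack (layers, H, W) by keeping, at each pixel,
    the value from the layer with the largest local Laplacian magnitude,
    a classical sharpness/focus measure."""
    lap = np.abs(4 * stack
                 - np.roll(stack, 1, axis=1) - np.roll(stack, -1, axis=1)
                 - np.roll(stack, 1, axis=2) - np.roll(stack, -1, axis=2))
    best = np.argmax(lap, axis=0)                      # sharpest layer per pixel
    return np.take_along_axis(stack, best[None], axis=0)[0]

# Layer 0 carries a sharp detail; layer 1 is defocused/flat.
stack = np.zeros((2, 5, 5))
stack[0, 2, 2] = 1.0
stack[1] = 0.25
fused = fuse_stack(stack)
```

The fused image keeps the sharp detail from layer 0 at its location, which is the property the CNN-based fusion exploits before feature extraction.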

  10. Breaking the color barrier - a multi-selective antibody reporter offers innovative strategies of fluorescence detection.

    PubMed

    Gallo, Eugenio; Jarvik, Jonathan W

    2017-08-01

    A novel bi-partite fluorescence platform exploits the high affinity and selectivity of antibody scaffolds to capture and activate small-molecule fluorogens. In this report, we investigated the property of multi-selectivity activation by a single antibody against diverse cyanine family fluorogens. Our fluorescence screen identified three cell-impermeant fluorogens, each with unique emission spectra (blue, green and red) and nanomolar affinities. Most importantly, as a protein fusion tag to G-protein-coupled receptors, the antibody biosensor retained full activity - displaying bright fluorogen signals with minimal background on live cells. Because fluorogen-activating antibodies interact with their target ligands via non-covalent interactions, we were able to perform advanced multi-color detection strategies on live cells, previously difficult or impossible with conventional reporters. We found that by fine-tuning the concentrations of the different color fluorogen molecules in solution, a user may interchange the fluorescence signal (onset versus offset), execute real-time signal exchange via fluorogen competition, measure multi-channel fluorescence via co-labeling, and assess real-time cell surface receptor traffic via pulse-chase experiments. Thus, here we inform of an innovative reporter technology based on tri-color signal that allows user-defined fluorescence tuning in live-cell applications. © 2017. Published by The Company of Biologists Ltd.

  11. Mississippi Delta, Radar Image with Colored Height

    NASA Technical Reports Server (NTRS)

    2005-01-01


    About the animation: This simulated view of the potential effects of storm surge flooding on Lake Pontchartrain and the New Orleans area was generated with data from the Shuttle Radar Topography Mission. Although it is protected by levees and sea walls against storm surges of 18 to 20 feet, much of the city is below sea level, and flooding due to storm surges caused by major hurricanes is a concern. The animation shows regions that, if unprotected, would be inundated with water. The animation depicts flooding in one-meter increments.

    About the image: The geography of the New Orleans and Mississippi delta region is well shown in this radar image from the Shuttle Radar Topography Mission. In this image, bright areas show regions of high radar reflectivity, such as from urban areas, and elevations have been coded in color using height data also from the mission. Dark green colors indicate low elevations, rising through yellow and tan, to white at the highest elevations.

    New Orleans is situated along the southern shore of Lake Pontchartrain, the large, roughly circular lake near the center of the image. The line spanning the lake is the Lake Pontchartrain Causeway, the world's longest over water highway bridge. Major portions of the city of New Orleans are below sea level, and although it is protected by levees and sea walls, flooding during storm surges associated with major hurricanes is a significant concern.

    Data used in this image were acquired by the Shuttle Radar Topography Mission aboard the Space Shuttle Endeavour, launched on Feb. 11, 2000. The mission used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar that flew twice on the Space Shuttle Endeavour in 1994. The Shuttle Radar Topography Mission was designed to collect 3-D measurements of the Earth's surface. To collect the 3-D data

  12. PROCEDURES FOR ACCURATE PRODUCTION OF COLOR IMAGES FROM SATELLITE OR AIRCRAFT MULTISPECTRAL DIGITAL DATA.

    USGS Publications Warehouse

    Duval, Joseph S.

    1985-01-01

    Because the display and interpretation of satellite and aircraft remote-sensing data make extensive use of color film products, accurate reproduction of the color images is important. To achieve accurate color reproduction, the exposure and chemical processing of the film must be monitored and controlled. By using a combination of sensitometry, densitometry, and transfer functions that control film response curves, all of the different steps in the making of film images can be monitored and controlled. Because a sensitometer produces a calibrated exposure, the resulting step wedge can be used to monitor the chemical processing of the film. Step wedges put on film by image recording machines provide a means of monitoring the film exposure and color balance of the machines.

  13. Change Detection of High-Resolution Remote Sensing Images Based on Adaptive Fusion of Multiple Features

    NASA Astrophysics Data System (ADS)

    Wang, G. H.; Wang, H. B.; Fan, W. F.; Liu, Y.; Chen, C.

    2018-04-01

    Traditional change detection algorithms depend mainly on the spectral information of image objects and fail to effectively mine and fuse multiple image features. Borrowing ideas from object-oriented analysis, this article proposes a remote sensing image change detection algorithm based on the fusion of multiple features. First, image objects are obtained by multi-scale segmentation; then a color histogram and a linear gradient histogram are calculated for each object. The Earth Mover's Distance (EMD) statistical operator is used to measure the color distance and the edge line feature distance between corresponding objects from different periods, and an adaptive weighting method combines the color feature distance and the edge line distance into an object heterogeneity measure. Finally, the heterogeneity is analyzed to obtain the change detection results for each image object. The experimental results show that the method can fully fuse the color and edge line features, thus improving the accuracy of the change detection.
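The distance stage can be sketched with the one-dimensional form of EMD (for unit ground distance it reduces to the L1 distance between cumulative histograms) and a weighted combination; the fixed weight below is an illustrative stand-in for the paper's adaptive weighting.

```python
import numpy as np

def emd_1d(h1, h2):
    """Earth Mover's Distance between two normalized 1-D histograms:
    with unit ground distance it equals the L1 distance between their
    cumulative distributions."""
    return np.abs(np.cumsum(h1) - np.cumsum(h2)).sum()

def heterogeneity(d_color, d_edge, w_color=0.5):
    """Combine a color-histogram distance and an edge-line-feature distance
    into one change score; the paper derives w_color adaptively, whereas it
    is fixed here for illustration."""
    return w_color * d_color + (1.0 - w_color) * d_edge

# All mass must move across two bins:
print(emd_1d([1, 0, 0], [0, 0, 1]))  # → 2
```

Objects whose heterogeneity exceeds a threshold would then be flagged as changed.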

  14. Demosaiced pixel super-resolution in digital holography for multiplexed computational color imaging on-a-chip (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Wu, Yichen; Zhang, Yibo; Luo, Wei; Ozcan, Aydogan

    2017-03-01

    Digital holographic on-chip microscopy achieves large space-bandwidth-products (e.g., >1 billion) by making use of pixel super-resolution techniques. To synthesize a digital holographic color image, one can take three sets of holograms representing the red (R), green (G) and blue (B) parts of the spectrum and digitally combine them to synthesize a color image. The data acquisition efficiency of this sequential illumination process can be improved by 3-fold using wavelength-multiplexed R, G and B illumination that simultaneously illuminates the sample, and using a Bayer color image sensor with known or calibrated transmission spectra to digitally demultiplex these three wavelength channels. This demultiplexing step is conventionally used with interpolation-based Bayer demosaicing methods. However, because the pixels of different color channels on a Bayer image sensor chip are not at the same physical location, conventional interpolation-based demosaicing process generates strong color artifacts, especially at rapidly oscillating hologram fringes, which become even more pronounced through digital wave propagation and phase retrieval processes. Here, we demonstrate that by merging the pixel super-resolution framework into the demultiplexing process, such color artifacts can be greatly suppressed. This novel technique, termed demosaiced pixel super-resolution (D-PSR) for digital holographic imaging, achieves very similar color imaging performance compared to conventional sequential R,G,B illumination, with 3-fold improvement in image acquisition time and data-efficiency. We successfully demonstrated the color imaging performance of this approach by imaging stained Pap smears. The D-PSR technique is broadly applicable to high-throughput, high-resolution digital holographic color microscopy techniques that can be used in resource-limited-settings and point-of-care offices.

  15. Spatial optical crosstalk in CMOS image sensors integrated with plasmonic color filters.

    PubMed

    Yu, Yan; Chen, Qin; Wen, Long; Hu, Xin; Zhang, Hui-Fang

    2015-08-24

    The imaging resolution of complementary metal oxide semiconductor (CMOS) image sensors (CIS) keeps increasing, to approximately 7k × 4k. As a result, the pixel size shrinks down to sub-2μm, which greatly increases the spatial optical crosstalk. Recently, plasmonic color filters were proposed as an alternative to conventional colorant-pigmented ones. However, there is little work on their size effect and the spatial optical crosstalk in a model of a CIS. By numerical simulation, we investigate the size effect of nanocross-array plasmonic color filters and analyze the spatial optical crosstalk of each pixel in a Bayer array of a CIS with a pixel size of 1μm. It is found that the small pixel size deteriorates the filtering performance of the nanocross color filters and induces substantial spatial color crosstalk. By integrating the plasmonic filters in a low metal layer of the standard CMOS process, the crosstalk is reduced significantly and is comparable to that of pigmented filters in a state-of-the-art backside-illumination CIS.

  16. Four-dimensional ultrasonography of the fetal heart using color Doppler spatiotemporal image correlation.

    PubMed

    Gonçalves, Luís F; Romero, Roberto; Espinoza, Jimmy; Lee, Wesley; Treadwell, Marjorie; Chintala, Kavitha; Brandl, Helmut; Chaiworapongsa, Tinnakorn

    2004-04-01

    To describe clinical and research applications of 4-dimensional imaging of the fetal heart using color Doppler spatiotemporal image correlation. Forty-four volume data sets were acquired by color Doppler spatiotemporal image correlation. Seven subjects were examined: 4 fetuses without abnormalities, 1 fetus with ventriculomegaly and a hypoplastic cerebellum but normal cardiac anatomy, and 2 fetuses with cardiac anomalies detected by fetal echocardiography (1 case of a ventricular septal defect associated with trisomy 21 and 1 case of a double-inlet right ventricle with a 46,XX karyotype). The median gestational age at the time of examination was 21 3/7 weeks (range, 19 5/7-34 0/7 weeks). Volume data sets were reviewed offline by multiplanar display and volume-rendering methods. Representative images and online video clips illustrating the diagnostic potential of this technology are presented. Color Doppler spatiotemporal image correlation allowed multiplanar visualization of ventricular septal defects, multiplanar display and volume rendering of tricuspid regurgitation, volume rendering of the outflow tracts by color and power Doppler ultrasonography (both in a normal case and in a case of a double-inlet right ventricle with a double-outlet right ventricle), and visualization of venous streams at the level of the foramen ovale. Color Doppler spatiotemporal image correlation has the potential to simplify visualization of the outflow tracts and improve the evaluation of the location and extent of ventricular septal defects. Other applications include 3-dimensional evaluation of regurgitation jets and venous streams at the level of the foramen ovale.

  17. Intra- and inter-rater reliability of digital image analysis for skin color measurement

    PubMed Central

    Sommers, Marilyn; Beacham, Barbara; Baker, Rachel; Fargo, Jamison

    2013-01-01

    Background We determined the intra- and inter-rater reliability of data from digital image color analysis between an expert and novice analyst. Methods Following training, the expert and novice independently analyzed 210 randomly ordered images. Both analysts used Adobe® Photoshop lasso or color sampler tools based on the type of image file. After color correction with PictoColor® inCamera software, they recorded L*a*b* (L*=light/dark; a*=red/green; b*=yellow/blue) color values for all skin sites. We computed intra-rater and inter-rater agreement within anatomical region, color value (L*, a*, b*), and technique (lasso, color sampler) using a series of one-way intra-class correlation coefficients (ICCs). Results Results of ICCs for intra-rater agreement showed high levels of internal consistency reliability within each rater for the lasso technique (ICC ≥ 0.99) and a somewhat lower, yet acceptable, level of agreement for the color sampler technique (ICC = 0.91 for expert, ICC = 0.81 for novice). Skin L*, skin b*, and labia L* values reached the highest level of agreement (ICC ≥ 0.92) and skin a*, labia b*, and vaginal wall b* were the lowest (ICC ≥ 0.64). Conclusion Data from novice analysts can achieve high levels of agreement with data from expert analysts with training and the use of a detailed, standard protocol. PMID:23551208
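    The one-way ICC used in this study can be sketched from its ANOVA definition, ICC = (MSB − MSW) / (MSB + (k − 1)·MSW), where MSB and MSW are the between-target and within-target mean squares for n targets rated by k raters. The function and sample values below are illustrative assumptions, not the study's data or software.

    ```python
    import numpy as np

    def icc_oneway(scores):
        """One-way random-effects ICC(1,1).
        scores: (n_targets, k_raters) array of measurements."""
        scores = np.asarray(scores, dtype=float)
        n, k = scores.shape
        grand_mean = scores.mean()
        target_means = scores.mean(axis=1)
        # Between-target and within-target mean squares (one-way ANOVA)
        ms_between = k * np.sum((target_means - grand_mean) ** 2) / (n - 1)
        ms_within = np.sum((scores - target_means[:, None]) ** 2) / (n * (k - 1))
        return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

    # Two raters in near-perfect agreement yield an ICC close to 1;
    # large disagreements would drive it toward 0.
    expert = np.array([50.1, 42.3, 61.7, 55.0, 47.9])   # hypothetical L* values
    novice = expert + np.array([0.2, -0.1, 0.3, -0.2, 0.1])
    print(icc_oneway(np.column_stack([expert, novice])))
    ```

    Because the between-target variance dwarfs the tiny rater disagreements, the ICC here is essentially 1, matching the pattern of high agreement the abstract reports.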

  18. Color segmentation in the HSI color space using the K-means algorithm

    NASA Astrophysics Data System (ADS)

    Weeks, Arthur R.; Hague, G. Eric

    1997-04-01

    Segmentation of images is an important aspect of image recognition. While grayscale image segmentation has become quite a mature field, much less work has been done with regard to color image segmentation. Until recently, this was predominantly due to the lack of the computing power and color display hardware required to manipulate true-color (24-bit) images. Today, it is not uncommon to find a standard desktop computer system with a true-color 24-bit display, at least 8 million bytes of memory, and 2 gigabytes of hard disk storage. Segmentation of color images is not as simple as segmenting each of the three RGB color components separately. The difficulty of using the RGB color space is that it does not closely model the psychological understanding of color. A better color model, which closely follows human visual perception, is the hue, saturation, intensity (HSI) model. This color model separates the color components in terms of chromatic and achromatic information. Strickland et al. were able to show the importance of color in the extraction of edge features from an image. Their method enhances the edges that are detectable in the luminance image with information from the saturation image. Segmentation of both the saturation and intensity components is easily accomplished with any grayscale segmentation algorithm, since these spaces are linear. The modulus-2π nature of the hue component makes its segmentation difficult; for example, hues of 0 and 2π yield the same color tint. Instead of applying separate image segmentation to each of the hue, saturation, and intensity components, a better method is to segment the chromatic component separately from the intensity component, because of the importance that the chromatic information plays in the segmentation of color images. This paper presents a method of using the grayscale K-means algorithm to segment 24-bit color images. Additionally, this paper will show the importance the hue
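    The wraparound problem the abstract raises (hue 0 and hue 2π are the same tint) can be handled by embedding each hue angle on the unit circle before clustering. The sketch below is one common workaround under that assumption, not the authors' exact algorithm; the farthest-first initialization is also an assumption, chosen to keep the example deterministic.

    ```python
    import numpy as np

    def kmeans_hue(hue, k=2, iters=50):
        """Cluster hue angles (radians) with K-means, treating hue as circular.
        Embedding each angle on the unit circle makes 0 and 2*pi coincide,
        so the modulus-2*pi wraparound needs no special casing."""
        pts = np.column_stack([np.cos(hue), np.sin(hue)])
        # Deterministic farthest-first initialization of the k centroids.
        centers = pts[[0]]
        for _ in range(1, k):
            d = ((pts[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2).min(axis=1)
            centers = np.vstack([centers, pts[d.argmax()]])
        for _ in range(iters):
            # Assign every pixel to its nearest centroid, then re-estimate.
            d = ((pts[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
            labels = d.argmin(axis=1)
            for j in range(k):
                if np.any(labels == j):
                    centers[j] = pts[labels == j].mean(axis=0)
        return labels

    # Hues near 0 and near 2*pi are the same red tint: the wraparound pixels
    # at 0.05 and 2*pi - 0.05 should land in the same cluster.
    hue = np.array([0.05, 2 * np.pi - 0.05, 0.1, np.pi, np.pi + 0.1, np.pi - 0.1])
    labels = kmeans_hue(hue, k=2)
    print(labels[0] == labels[1])  # → True
    ```

    Clustering the raw angle values instead would split the red pixels at 0.05 and 2π − 0.05 into separate clusters, which is exactly the failure mode the abstract describes.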

  19. Multi-color Long Wavelength Infrared Detectors Based On III-V Semiconductors

    DTIC Science & Technology

    2010-07-30

    both interband and intersubband transitions that form the basis of many optoelectronic devices. The research performed under this grant made it...based on interband and intersubband transitions in InAs and InGaAs QDs as a means for room temperature, multi-color photodetection in the visible...AM1.5 standard solar simulator. DOPING EFFECT ON INTERBAND AND INTERSUBBAND MULTICOLOR INFRARED PHOTODETECTORS: First, many samples and devices

  20. Polar Cap Colors

    NASA Technical Reports Server (NTRS)

    2004-01-01

    [figure removed for brevity, see original site]

    Released 12 May 2004. This daytime visible color image was collected on June 6, 2003, during the Southern Spring season near the South Polar Cap edge.

    The THEMIS VIS camera is capable of capturing color images of the martian surface using its five different color filters. In this mode of operation, the spatial resolution and coverage of the image must be reduced to accommodate the additional data volume produced from the use of multiple filters. To make a color image, three of the five filter images (each in grayscale) are selected. Each is contrast enhanced and then converted to a red, green, or blue intensity image. These three images are then combined to produce a full color, single image. Because the THEMIS color filters don't span the full range of colors seen by the human eye, a color THEMIS image does not represent true color. Also, because each single-filter image is contrast enhanced before inclusion in the three-color image, the apparent color variation of the scene is exaggerated. Nevertheless, the color variation that does appear is representative of some change in color, however subtle, in the actual scene. Note that the long edges of THEMIS color images typically contain color artifacts that do not represent surface variation.
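    The compositing steps described above (select three filter images, contrast-enhance each in grayscale, assign them to red, green, and blue, and stack) can be sketched generically. The percentile stretch and the synthetic 12-bit bands below are assumptions for illustration, not the actual THEMIS processing pipeline.

    ```python
    import numpy as np

    def stretch(band, lo=2, hi=98):
        """Linear contrast stretch between the lo/hi percentiles of a band."""
        a, b = np.percentile(band, [lo, hi])
        return np.clip((band - a) / (b - a), 0.0, 1.0)

    def false_color_composite(red_band, green_band, blue_band):
        """Stack three independently stretched grayscale filter images
        into a single RGB composite, as described for THEMIS VIS color
        products. Per-band stretching exaggerates color variation."""
        return np.dstack([stretch(red_band), stretch(green_band), stretch(blue_band)])

    # Synthetic 4x4 'filter images' stand in for three THEMIS bands.
    rng = np.random.default_rng(1)
    bands = [rng.uniform(0, 4095, (4, 4)) for _ in range(3)]
    rgb = false_color_composite(*bands)
    print(rgb.shape)  # → (4, 4, 3)
    ```

    Because each band is stretched independently, the resulting composite is false color: relative differences within each filter are preserved, but the balance between channels is not, which is why the text cautions that apparent color variation is exaggerated.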

    Image information: VIS instrument. Latitude -77.8, Longitude 195 East (165 West). 38 meter/pixel resolution.

    Note: this THEMIS visual image has been neither radiometrically nor geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track directions to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through the Planetary Data System in accordance with Project policies at a later time.

    NASA's Jet Propulsion Laboratory manages the 2001 Mars Odyssey mission for NASA