Science.gov

Sample records for adaptive color image

  1. Color image diffusion using adaptive bilateral filter.

    PubMed

    Xie, Jun; Ann Heng, Pheng

    2005-01-01

    In this paper, we propose an approach to diffusing color images based on the bilateral filter. Real image data has a level of uncertainty that is manifested in the variability of the measures assigned to pixels. This uncertainty is usually interpreted as noise and considered an undesirable component of the image data. Image diffusion can smooth away small-scale structures and noise while retaining important features, thus improving the performance of many image processing algorithms such as image compression, segmentation, and recognition. The bilateral filter is noniterative, simple, and fast, and has been shown to give similar, and possibly better, filtering results than iterative approaches. However, its performance is greatly affected by the choice of the filtering kernel parameters. In order to remove noise while maintaining the significant features of images, we extend the bilateral filter by introducing an adaptive domain spread into the nonlinear diffusion scheme. For color images, we employ the CIE-Lab color system to describe input images, and the filtering process operates on the three channels jointly. Our analysis shows that the proposed method is more suitable for preserving strong edges in noisy images than the original bilateral filter. Empirical results on both natural images and color medical images confirm the method's advantages and show that it can diffuse various kinds of color images correctly and efficiently.
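    The abstract does not give the adaptive domain-spread rule itself, so the following is only a minimal fixed-parameter bilateral filter sketch in Python/NumPy applied to one channel (a Lab image would be filtered across its three channels jointly); the function name and default parameters are illustrative assumptions:

```python
import numpy as np

def bilateral_filter(img, sigma_s=2.0, sigma_r=0.1, radius=3):
    """Bilateral filter for a 2-D float image in [0, 1].

    Each output pixel is a weighted mean of its neighbours, where weights
    combine spatial closeness (sigma_s) and intensity similarity (sigma_r).
    """
    h, w = img.shape
    padded = np.pad(img, radius, mode="reflect")
    out = np.zeros_like(img)
    # Precompute the spatial (domain) Gaussian once.
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    spatial = np.exp(-(xx**2 + yy**2) / (2 * sigma_s**2))
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # Range (intensity) Gaussian: adapts weights to local content.
            rng = np.exp(-(patch - img[i, j])**2 / (2 * sigma_r**2))
            weights = spatial * rng
            out[i, j] = np.sum(weights * patch) / np.sum(weights)
    return out
```

    With a small range sigma, neighbours on the far side of a strong edge receive near-zero weight, which is why the filter smooths noise without blurring edges.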

  2. Local adaptive contrast enhancement for color images

    NASA Astrophysics Data System (ADS)

    Dijk, Judith; den Hollander, Richard J. M.; Schavemaker, John G. M.; Schutte, Klamer

    2007-04-01

    A camera or display usually has a smaller dynamic range than the human eye. For this reason, objects that can be detected by the naked eye may not be visible in recorded images. Lighting is an important factor here; improper local lighting impairs the visibility of details or even entire objects. A human observing a scene with different kinds of lighting, such as shadows, needs to see details in both the dark and light parts of the scene. For grey-value images such as IR imagery, algorithms have been developed in which the local contrast of the image is enhanced using local adaptive techniques. In this paper, we present how such algorithms can be adapted so that details in color images are enhanced while color information is retained. We propose to apply contrast enhancement to color images by applying a grey-value contrast enhancement algorithm to the luminance channel of the color signal. The color coordinates of the signal remain the same, and care is taken that the change in saturation is not too high. Gamut mapping is performed so that the output can be displayed on a monitor. The proposed technique can, for instance, be used by operators monitoring movements of people in order to detect suspicious behavior. To do this effectively, specific individuals should be easy both to recognize and to track. This requires optimal local contrast, and is sometimes much helped by color, e.g., when tracking a person with colored clothes. In such applications, enhanced local contrast in color images leads to more effective monitoring.
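    The idea of enhancing contrast while retaining color can be sketched by equalizing only the value channel and rescaling each RGB triplet by the same factor, which leaves hue and HSV saturation unchanged. This is a global (not local) variant with hypothetical names, not the paper's algorithm:

```python
import numpy as np

def enhance_luminance(rgb):
    """Stretch luminance while keeping chromatic ratios fixed.

    rgb: float array (H, W, 3) in [0, 1]. The value channel V = max(R, G, B)
    is histogram-equalised; each pixel's RGB triplet is then rescaled by
    V_new / V_old, which leaves hue and HSV saturation unchanged.
    """
    v = rgb.max(axis=2)
    # Global histogram equalisation of V (a local variant would use a window).
    hist, bins = np.histogram(v.ravel(), bins=256, range=(0.0, 1.0))
    cdf = np.cumsum(hist).astype(float)
    cdf /= cdf[-1]
    v_new = np.interp(v.ravel(), bins[:-1], cdf).reshape(v.shape)
    scale = np.where(v > 0, v_new / np.maximum(v, 1e-6), 0.0)
    return np.clip(rgb * scale[..., None], 0.0, 1.0)
```

    Scaling all three channels by one factor preserves their ratios, so hue and HSV saturation are untouched, mirroring the paper's requirement that the color coordinates remain the same.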

  3. The new adaptive enhancement algorithm on the degraded color images

    NASA Astrophysics Data System (ADS)

    Xue, Rong Kun; He, Wei; Li, Yufeng

    2016-10-01

    Based on the frequency-distribution characteristics of scenes in degraded color images, this paper introduces the MSRCR method and the wavelet transform for enhancing color images and analyzes their respective advantages and disadvantages experimentally. A combination of an improved MSRCR method and the wavelet transform is then proposed. The wavelet transform decomposes the color image so that the coefficients of low-level details can be increased and those of top-level details reduced to highlight scene information; meanwhile, the improved MSRCR method enhances the low-frequency components of the wavelet-processed degraded image, and adaptive equalization is then applied to further enhance the image. Finally, the enhanced color image is reconstructed from all the wavelet coefficients. Evaluation of the experimental results and data analysis show that the proposed method outperforms the separate use of the wavelet transform and the MSRCR method.
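    As a point of reference for the MSRCR component, a single-scale retinex, the operation that MSRCR averages over several scales and augments with a color-restoration term, might be sketched as follows; the sigma and epsilon values are illustrative assumptions:

```python
import numpy as np

def single_scale_retinex(channel, sigma=15.0):
    """Single-scale retinex, the core operation inside MSRCR (sketch).

    channel: float array (H, W) with values > 0. Output is log(image) minus
    the log of a Gaussian-blurred illumination estimate; MSRCR averages
    this over several sigmas and adds a color-restoration term.
    """
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    # Separable Gaussian blur with edge replication.
    pad = np.pad(channel, radius, mode="edge")
    blurred = np.apply_along_axis(lambda r: np.convolve(r, kernel, "valid"), 1, pad)
    blurred = np.apply_along_axis(lambda c: np.convolve(c, kernel, "valid"), 0, blurred)
    return np.log(channel + 1e-6) - np.log(blurred + 1e-6)
```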

  4. Adaptive clutter rejection for ultrasound color Doppler imaging

    NASA Astrophysics Data System (ADS)

    Yoo, Yang Mo; Managuli, Ravi; Kim, Yongmin

    2005-04-01

    We have developed a new adaptive clutter rejection technique where an optimum clutter filter is dynamically selected according to the varying clutter characteristics in ultrasound color Doppler imaging. The selection criteria have been established based on the underlying clutter characteristics (i.e., the maximum instantaneous clutter velocity and the clutter power) and the properties of various candidate clutter filters (e.g., projection-initialized infinite impulse response and polynomial regression). We obtained an average improvement of 3.97 dB and 3.27 dB in flow signal-to-clutter ratio (SCR) compared to the conventional and down-mixing methods, respectively. These preliminary results indicate that the proposed adaptive clutter rejection method could improve the sensitivity and accuracy in flow velocity estimation for ultrasound color Doppler imaging. For a 192 x 256 color Doppler image with an ensemble size of 10, the proposed method takes only 57.2 ms, which is less than the acquisition time. Thus, the proposed method could be implemented in modern ultrasound systems, while providing improved clutter rejection and more accurate velocity estimation in real time.
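    The flow signal-to-clutter ratio (SCR) reported here is not defined in the abstract; one plausible definition, assumed purely for illustration, is the ratio of mean signal power in a flow region to that in a clutter (tissue) region after filtering:

```python
import numpy as np

def scr_db(signal, flow_mask, clutter_mask):
    """Flow signal-to-clutter ratio in dB (one plausible definition).

    signal: complex slow-time ensemble, shape (H, W, N).
    Returns mean power inside the flow region over mean power in the
    clutter (tissue) region, expressed in decibels.
    """
    power = np.mean(np.abs(signal) ** 2, axis=-1)
    p_flow = power[flow_mask].mean()
    p_clutter = power[clutter_mask].mean()
    return 10.0 * np.log10(p_flow / p_clutter)
```

    A 10x amplitude advantage of flow over clutter corresponds to a 20 dB SCR under this definition.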

  5. Multiobjective Image Color Quantization Algorithm Based on Self-Adaptive Hybrid Differential Evolution

    PubMed Central

    Xia, Xuewen

    2016-01-01

    In recent years, some researchers considered image color quantization as a single-objective problem and applied heuristic algorithms to solve it. This paper establishes a multiobjective image color quantization model with intracluster distance and intercluster separation as its objectives. Inspired by a multipopulation idea, a multiobjective image color quantization algorithm based on self-adaptive hybrid differential evolution (MoDE-CIQ) is then proposed to solve this model. Two numerical experiments on four common test images are conducted to analyze the effectiveness and competitiveness of the multiobjective model and the proposed algorithm. PMID:27738423

  6. Study of adaptive LLL/infrared image color fusion algorithm based on the environment illumination

    NASA Astrophysics Data System (ADS)

    Hu, Qing-ping; Zhang, Xiao-hui; Liu, Chao

    2016-10-01

    LLL (low-light-level)/infrared image fusion integrates information from both bands of the target and is beneficial for target detection and scene perception in low-visibility conditions such as night, haze, rain, and snow. The quality of the fused image declines when the quality of either channel drops. The brightness, contrast, and noise of LLL images change greatly when the environment illumination changes markedly, but current color fusion methods do not adapt to illumination changes over a large dynamic range. In this paper, LLL image characteristics are analyzed under different environment illumination, and an adaptive color fusion method based on the RGB color space is proposed. The fused image achieves better brightness and signal-to-noise ratio under different illumination intensities.

  7. Adaptive Wiener filter super-resolution of color filter array images.

    PubMed

    Karch, Barry K; Hardie, Russell C

    2013-08-12

    Digital color cameras using a single detector array with a Bayer color filter array (CFA) require interpolation or demosaicing to estimate missing color information and provide full-color images. However, demosaicing does not specifically address fundamental undersampling and aliasing inherent in typical camera designs. Fast non-uniform interpolation based super-resolution (SR) is an attractive approach to reduce or eliminate aliasing and its relatively low computational load is amenable to real-time applications. The adaptive Wiener filter (AWF) SR algorithm was initially developed for grayscale imaging and has not previously been applied to color SR demosaicing. Here, we develop a novel fast SR method for CFA cameras that is based on the AWF SR algorithm and uses global channel-to-channel statistical models. We apply this new method as a stand-alone algorithm and also as an initialization image for a variational SR algorithm. This paper presents the theoretical development of the color AWF SR approach and applies it in performance comparisons to other SR techniques for both simulated and real data.

  8. Development of an adaptive bilateral filter for evaluating color image difference

    NASA Astrophysics Data System (ADS)

    Wang, Zhaohui; Hardeberg, Jon Yngve

    2012-04-01

    Spatial filtering, which aims to mimic the contrast sensitivity function (CSF) of the human visual system (HVS), has previously been combined with color difference formulae for measuring color image reproduction errors. These spatial filters attenuate imperceptible information in images, unfortunately including high frequency edges, which are believed to be crucial in the process of scene analysis by the HVS. The adaptive bilateral filter represents a novel approach, which avoids the undesirable loss of edge information introduced by CSF-based filtering. The bilateral filter employs two Gaussian smoothing filters in different domains, i.e., spatial domain and intensity domain. We propose a method to decide the parameters, which are designed to be adaptive to the corresponding viewing conditions, and the quantity and homogeneity of information contained in an image. Experiments and discussions are given to support the proposal. A series of perceptual experiments were conducted to evaluate the performance of our approach. The experimental sample images were reproduced with variations in six image attributes: lightness, chroma, hue, compression, noise, and sharpness/blurriness. The Pearson's correlation values between the model-predicted image difference and the observed difference were employed to evaluate the performance, and compare it with that of spatial CIELAB and image appearance model.

  9. New adaptive clutter rejection for ultrasound color Doppler imaging: in vivo study.

    PubMed

    Yoo, Yang Mo; Kim, Yongmin

    2010-03-01

    Clutter rejection is essential for accurate flow estimation in ultrasound color Doppler imaging. In this article, we present a new adaptive clutter rejection (ACR) technique where an optimum filter is dynamically selected depending upon the underlying clutter characteristics (e.g., tissue acceleration and power). We compared the performance of the ACR method with other adaptive methods, i.e., down-mixing (DM) and adaptive clutter filtering (ACF), using in vivo data acquired from the kidney, liver and common carotid artery. With the kidney data, the ACR method provided an average improvement of 3.05 dB and 1.7 dB in flow signal-to-clutter ratio (SCR) compared with DM and ACF, respectively. With the liver data, SCR was improved by 2.75 dB and 1.8 dB over DM and ACF while no significant improvement with ACR was found in the common carotid artery data. Thus, the proposed adaptive method could provide more accurate flow estimation by improving clutter rejection in abdominal ultrasound color Doppler imaging pending validation.

  10. Segmentation of the optic disk in color eye fundus images using an adaptive morphological approach.

    PubMed

    Welfer, Daniel; Scharcanski, Jacob; Kitamura, Cleyson M; Dal Pizzol, Melissa M; Ludwig, Laura W B; Marinho, Diane Ruschel

    2010-02-01

    The identification of some important retinal anatomical regions is a prerequisite for the computer aided diagnosis of several retinal diseases. In this paper, we propose a new adaptive method for the automatic segmentation of the optic disk in digital color fundus images, using mathematical morphology. The proposed method has been designed to be robust under varying illumination and image acquisition conditions, common in eye fundus imaging. Our experimental results based on two publicly available eye fundus image databases are encouraging, and indicate that our approach potentially can achieve a better performance than other known methods proposed in the literature. Using the DRIVE database (which consists of 40 retinal images), our method achieves a success rate of 100% in the correct location of the optic disk, with 41.47% of mean overlap. In the DIARETDB1 database (which consists of 89 retinal images), the optic disk is correctly located in 97.75% of the images, with a mean overlap of 43.65%.

  11. Digital item adaptation for color vision variations

    NASA Astrophysics Data System (ADS)

    Song, Jaeil; Yang, Seungji; Kim, Cheonseog; Nam, Jaeho; Hong, Jin-Woo; Ro, Yong Man

    2003-06-01

    As color is more widely used to carry visual information in multimedia content, the ability to perceive color plays a crucial role in accessing that information. Regardless of color vision variations, everyone should have equal access to visual information. This paper proposes an adaptation technique for color vision variations in MPEG-21 Digital Item Adaptation (DIA). Adaptation is performed separately for severe color vision deficiency (dichromats) and for mild color vision deficiency (anomalous trichromats), according to the description of the user's color vision characteristics. Adapted images are tested with a simulation program for color vision variations in order to verify how the adapted images appear to color-deficient viewers. Experimental results show that the proposed adaptation technique works well within the MPEG-21 framework.

  12. Adaptive Morphological Feature-Based Object Classifier for a Color Imaging System

    NASA Technical Reports Server (NTRS)

    McDowell, Mark; Gray, Elizabeth

    2009-01-01

    Utilizing a Compact Color Microscope Imaging System (CCMIS), a unique algorithm has been developed that combines human intelligence with machine vision techniques to produce an autonomous microscope tool for biomedical, industrial, and space applications. This technique is based on an adaptive, morphological, feature-based mapping function comprising 24 mutually inclusive feature metrics that are used to characterize complex cells/objects derived from color image analysis. The features include: Area (total number of non-background pixels inside and including the perimeter); Bounding Box (smallest rectangle that bounds an object); CenterX (x-coordinate of the intensity-weighted center of mass of an entire object or multi-object blob); CenterY (y-coordinate of the same center of mass); Circumference (a measure of circumference that accounts for diagonal neighboring pixels being farther apart than horizontally or vertically joined pixels); Elongation (a measure of particle elongation given as a number between 0 and 1: if equal to 1, the particle's bounding box is square, and as the value decreases from 1 the particle becomes more elongated); Ext_vector (extremal vector); Major Axis (length of the major axis of the smallest ellipse encompassing the object); Minor Axis (length of the minor axis of that ellipse); Partial (indicates whether the particle extends beyond the field of view); Perimeter Points (points that make up the particle perimeter); Roundness ((4 x pi x area)/perimeter^2, a measure of object roundness or compactness given as a value between 0 and 1, where a greater ratio means a rounder object); Thin-in-center (indicates whether an object becomes thin in the center, i.e., is figure-eight shaped); Theta (orientation of the major axis); and smoothness and color metrics for each component (red, green, blue).
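    The Roundness metric in the feature list above is simple to state directly in code; a minimal sketch (function name assumed):

```python
import math

def roundness(area, perimeter):
    """Roundness metric from the feature list: (4 * pi * area) / perimeter^2.

    Equals 1.0 for a perfect circle and decreases for elongated or
    ragged shapes.
    """
    return (4.0 * math.pi * area) / (perimeter ** 2)
```

    For a unit circle (area pi, perimeter 2*pi) this gives exactly 1; for a square it gives pi/4, reflecting the lower compactness.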

  13. Adaptive clutter filter in 2-D color flow imaging based on in vivo I/Q signal.

    PubMed

    Zhou, Xiaoming; Zhang, Congyao; Liu, Dong C

    2014-01-01

    Color flow imaging is widely applied in clinical diagnosis. For high-quality color flow images, the clutter filter is important for separating the Doppler signals of blood from those of tissue. Traditional clutter filters, such as finite impulse response, infinite impulse response, and regression filters, are based on the hypothesis that the clutter signal is stationary or that tissue moves slowly. In realistic clinical color flow imaging, however, the signals are non-stationary because of accelerating tissue motion. Moreover, most related papers rely on simulated RF signals rather than in vivo I/Q signals. Hence, in this paper an adaptive polynomial regression filter with down-mixing by the instantaneous clutter frequency is proposed and evaluated on in vivo carotid I/Q signals from realistic color flow imaging. To obtain the best performance, the optimal polynomial order of the regression filter and the optimal polynomial order for estimating the instantaneous clutter frequency were each determined. Finally, comparisons of mean blood velocity and 2-D color flow image quality show that the adaptive polynomial regression filter with down-mixing by the instantaneous clutter frequency significantly improves the mean blood velocity estimate and yields high-quality 2-D color flow images.
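    A basic (non-adaptive) polynomial regression clutter filter, the building block this paper extends with down-mixing by the instantaneous clutter frequency, can be sketched per pixel as follows:

```python
import numpy as np

def poly_regression_filter(ensemble, order=2):
    """Polynomial regression clutter filter on a slow-time ensemble.

    ensemble: complex array of shape (N,), one pixel's Doppler samples.
    A degree-`order` polynomial (fit to the real and imaginary parts) models
    the slowly varying clutter; the residual is the blood-flow estimate.
    """
    n = len(ensemble)
    t = np.arange(n, dtype=float)
    # Least-squares fit of the clutter trend in each quadrature component.
    coeffs_re = np.polyfit(t, ensemble.real, order)
    coeffs_im = np.polyfit(t, ensemble.imag, order)
    clutter = np.polyval(coeffs_re, t) + 1j * np.polyval(coeffs_im, t)
    return ensemble - clutter
```

    A quadratic clutter trend is removed exactly by an order-2 fit, while a fast flow oscillation is nearly orthogonal to the low-order polynomial basis and so passes through.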

  14. Adaptive color correction based on object color classification

    NASA Astrophysics Data System (ADS)

    Kotera, Hiroaki; Morimoto, Tetsuro; Yasue, Nobuyuki; Saito, Ryoichi

    1998-09-01

    An adaptive color management strategy that depends on image content is proposed. A pictorial color image is classified into different object areas with clustered color distributions. Euclidean or Mahalanobis color distance measures, and a maximum-likelihood method based on Bayes' decision rule, are introduced for the classification. After the classification process, the pixels of each cluster are projected onto a principal-component space by the Hotelling transform, and color corrections are performed on the principal components so that they match between the corresponding clustered color areas of the original and printed images.
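    The Hotelling transform step, projecting one color cluster's pixels onto their principal components, is standard PCA; a sketch under the assumption of RGB or Lab input (names are illustrative):

```python
import numpy as np

def hotelling_transform(pixels):
    """Project clustered pixel colors onto their principal components.

    pixels: float array (N, 3) of RGB (or Lab) values in one color cluster.
    Returns (scores, eigvecs, mean): scores are the coordinates in the
    principal-component space used for cluster-wise color matching.
    """
    mean = pixels.mean(axis=0)
    centered = pixels - mean
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)  # ascending eigenvalue order
    order = np.argsort(eigvals)[::-1]       # sort descending
    eigvecs = eigvecs[:, order]
    return centered @ eigvecs, eigvecs, mean
```

    Matching two clusters then amounts to aligning their scores (e.g., scaling along each principal axis) before transforming back with the eigenvector matrix and mean.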

  15. Adaptive optics retinal imaging reveals S-cone dystrophy in tritan color-vision deficiency

    NASA Astrophysics Data System (ADS)

    Baraas, Rigmor C.; Carroll, Joseph; Gunther, Karen L.; Chung, Mina; Williams, David R.; Foster, David H.; Neitz, Maureen

    2007-05-01

    Tritan color-vision deficiency is an autosomal dominant disorder associated with mutations in the short-wavelength-sensitive- (S-) cone-pigment gene. An unexplained feature of the disorder is that individuals with the same mutation manifest different degrees of deficiency. To date, it has not been possible to examine whether any loss of S-cone function is accompanied by physical disruption in the cone mosaic. Two related tritan subjects with the same novel mutation in their S-cone-opsin gene, but different degrees of deficiency, were examined. Adaptive optics was used to obtain high-resolution retinal images, which revealed distinctly different S-cone mosaics consistent with their discrepant phenotypes. In addition, a significant disruption in the regularity of the overall cone mosaic was observed in the subject completely lacking S-cone function. These results taken together with other recent findings from molecular genetics indicate that, with rare exceptions, tritan deficiency is progressive in nature.

  16. Adaptive clutter rejection for 3D color Doppler imaging: preliminary clinical study.

    PubMed

    Yoo, Yang Mo; Sikdar, Siddhartha; Karadayi, Kerem; Kolokythas, Orpheus; Kim, Yongmin

    2008-08-01

    In three-dimensional (3D) ultrasound color Doppler imaging (CDI), effective rejection of flash artifacts caused by tissue motion (clutter) is important for improving sensitivity in visualizing blood flow in vessels. Since clutter characteristics can vary significantly during volume acquisition, a clutter rejection technique that can adapt to the underlying clutter conditions is desirable for 3D CDI. We have previously developed an adaptive clutter rejection (ACR) method, in which an optimum filter is dynamically selected from a set of predesigned clutter filters based on the measured clutter characteristics. In this article, we evaluated the ACR method with 3D in vivo data acquired from 37 kidney transplant patients clinically indicated for a duplex ultrasound examination. We compared ACR against a conventional clutter rejection method, down-mixing (DM), using a commonly-used flow signal-to-clutter ratio (SCR) and a new metric called fractional residual clutter area (FRCA). The ACR method was more effective in removing the flash artifacts while providing higher sensitivity in detecting blood flow in the arcuate arteries and veins in the parenchyma of transplanted kidneys. ACR provided 3.4 dB improvement in SCR over the DM method (11.4 +/- 1.6 dB versus 8.0 +/- 2.0 dB, p < 0.001) and had lower average FRCA values compared with the DM method (0.006 +/- 0.003 versus 0.036 +/- 0.022, p < 0.001) for all study subjects. These results indicate that the new ACR method is useful for removing nonstationary tissue motion while improving the image quality for visualizing 3D vascular structure in 3D CDI.

  17. Color Adaptation for Color Deficient Learners.

    ERIC Educational Resources Information Center

    Johnson, Donald D.

    1995-01-01

    Describes a corrective method of color adaptation designed to allow most, if not all, individuals to participate in the learning process as well as social and work-related environments. Provides a concise summation of facts and theories concerning color deficiency. Includes anatomical drawings, graphs, and statistical data. (MJP)

  18. Pointwise shape-adaptive DCT for high-quality denoising and deblocking of grayscale and color images.

    PubMed

    Foi, Alessandro; Katkovnik, Vladimir; Egiazarian, Karen

    2007-05-01

    The shape-adaptive discrete cosine transform (SA-DCT) can be computed on a support of arbitrary shape, yet retains a computational complexity comparable to that of the usual separable block-DCT (B-DCT). Despite its near-optimal decorrelation and energy-compaction properties, application of the SA-DCT has been rather limited, targeted nearly exclusively at video compression. In this paper, we present a novel approach to image filtering based on the SA-DCT. We use the SA-DCT in conjunction with the Anisotropic Local Polynomial Approximation-Intersection of Confidence Intervals technique, which defines the shape of the transform's support in a pointwise adaptive manner. The thresholded or attenuated SA-DCT coefficients are used to reconstruct a local estimate of the signal within the adaptive-shape support. Since supports corresponding to different points are in general overlapping, the local estimates are averaged together using adaptive weights that depend on the region's statistics. This approach can be used for various image-processing tasks. In this paper, we consider, in particular, image denoising and image deblocking and deringing from block-DCT compression. A special structural constraint in luminance-chrominance space is also proposed to enable accurate filtering of color images. Simulation experiments show a state-of-the-art quality of the final estimate, both in terms of objective criteria and visual appearance. Thanks to the adaptive support, reconstructed edges are clean, and no unpleasant ringing artifacts are introduced by the fitted transform.

  19. Digital Color Image Restoration

    DTIC Science & Technology

    1975-08-01

    color image recording system is derived and the equations representing the model and the equations of colorimetry are expressed in matrix form. Computer algorithms are derived which correct color errors introduced by imperfections in the color recording system. The sources of color error which are

  20. An adaptive switching filter based on approximated variance for detection of impulse noise from color images.

    PubMed

    Pritamdas, K; Singh, Kh Manglem; Singh, L Lolitkumar

    2016-01-01

    A new adaptive switching algorithm is presented in which two adaptive filters are selected according to whether the noise ratio of the image is low or high. An adaptive center-weighted vector median filter is used for low noise ratios, whereas for high noise ratios noisy pixels are detected by comparing the difference between the mean of the vector pixels in the window and the approximated variance of the vector pixels in the window. The window containing a detected noisy pixel is then considered further: its pixels are given exponential weights according to their similarity, both spatially and radiometrically, to the other neighboring pixels, and the noisy pixel is replaced by the weighted average of the pixels within the window. The filter preserves more signal content at high noise ratios than the other robust filters in the comparison. At the cost of slightly higher computational complexity, the technique performs well at both low and high noise ratios. Simulation results on various RGB images show that the proposed algorithm outperforms many existing nonlinear filters in terms of preservation of edges and fine details.
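    The paper's detection statistic uses an approximated variance and exponential weights; the following simplified sketch instead flags a pixel whose deviation from the window mean exceeds a multiple of the window standard deviation and replaces it with the plain mean of the unflagged neighbors (names and threshold are illustrative assumptions):

```python
import numpy as np

def filter_impulse(img, win=3, k=1.5):
    """Detect and replace impulse pixels in a grayscale image (sketch).

    A pixel is flagged as noisy when its distance from the window mean
    exceeds k times the window standard deviation; flagged pixels are
    replaced by the mean of the unflagged pixels in the window.
    """
    r = win // 2
    padded = np.pad(img, r, mode="reflect")
    out = img.copy()
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            window = padded[i:i + win, j:j + win]
            mu, sd = window.mean(), window.std()
            if abs(img[i, j] - mu) > k * sd:
                good = window[np.abs(window - mu) <= k * sd]
                if good.size:
                    out[i, j] = good.mean()
    return out
```

    For a color image, the same idea would operate on vector pixels with a vector distance in place of the scalar deviation, as in the vector median literature.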

  1. Adaptive clutter rejection for ultrasound color flow imaging based on recursive eigendecomposition.

    PubMed

    You, Wei; Wang, Yuanyuan

    2009-10-01

    In the conventional eigenfilter used to reject clutter components of ultrasound color flow imaging, input samples are required to be statistically stationary. However, clutter movements may vary over the depth of the imaged area, which makes the eigenfilter less efficient. In the current study, a novel clutter rejection method is proposed based on the recursive eigendecomposition algorithm. In this method, the current eigenvector matrix of the ultrasound echo correlation matrix, which will be used to construct the clutter subspace, is determined by previous eigenvector matrices and the current input. After the estimated clutter signal is obtained by projecting the original input into the clutter space, each filtered output is eventually obtained by subtracting the estimated clutter signal from the original input. This procedure is iterated for each sample volume along the depth. During the updating process, a forgetting factor is introduced to determine proper weights for different inputs. Simulated data in 3 situations and in vivo data collected from human carotid arteries are used to compare the proposed method with other popular clutter filters. Results show that the proposed method can achieve the most accurate velocity profiles in all simulation situations and introduces the fewest velocity artifacts in the tissue region in the in vivo experiment.
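    The batch (non-recursive) form of eigendecomposition-based clutter rejection that this method accelerates can be sketched as follows, with the top eigenvectors of the slow-time correlation matrix taken as the clutter subspace (the paper instead updates the eigenvector matrix recursively down the scan depth):

```python
import numpy as np

def eigen_clutter_reject(ensembles, n_clutter=1):
    """Reject clutter by removing the dominant eigen-components (batch sketch).

    ensembles: complex array (M, N) -- M sample volumes, N slow-time samples.
    The correlation matrix's top eigenvectors span the clutter subspace;
    projecting each ensemble onto it and subtracting the projection leaves
    an estimate of the flow signal.
    """
    R = (ensembles.conj().T @ ensembles) / ensembles.shape[0]
    eigvals, eigvecs = np.linalg.eigh(R)      # Hermitian; ascending order
    clutter_basis = eigvecs[:, -n_clutter:]   # largest eigenvalues last
    clutter = (ensembles @ clutter_basis) @ clutter_basis.conj().T
    return ensembles - clutter
```

    The recursive variant replaces the one-shot `eigh` with a depth-by-depth update weighted by a forgetting factor, so the clutter subspace can track depth-varying tissue motion.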

  2. An approach for visibility improvement of dark color images using adaptive gamma correction and DCT-SVD

    NASA Astrophysics Data System (ADS)

    Tiwari, Mayank; Lamba, Subir Singh; Gupta, Bhupendra

    2016-07-01

    This paper proposes an efficient method to improve the visibility of dark color images and video sequences. Visibility improvement of dark color images plays a significant role in computer vision, digital image processing, and pattern recognition. The proposed method works in the hue-saturation-value (HSV) color model. It first decomposes the V-plane of the input image into low- and high-frequency components using the DCT, then estimates the singular value matrix of the low-frequency component. After processing to improve the visibility of the dark color image, it reconstructs the processed image by applying the inverse DCT. Experimental results show that the proposed method produces enhanced images of comparable or higher quality than those produced by previous state-of-the-art methods.
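    The adaptive gamma correction component (independent of the DCT-SVD part) might look like the following sketch. The specific rule used here, choosing gamma so that the mean brightness maps to 0.5, is an illustrative assumption rather than the paper's formula:

```python
import numpy as np

def adaptive_gamma(v):
    """Adaptive gamma correction for a dark V (value) plane in [0, 1].

    The gamma exponent is derived from the plane's mean brightness, so
    darker planes are brightened more aggressively: gamma is chosen so
    that mean^gamma = 0.5, and planes already at mean >= 0.5 are left alone.
    """
    mean = float(np.clip(v.mean(), 1e-4, 1.0))
    gamma = np.log(0.5) / np.log(mean) if mean < 0.5 else 1.0
    return np.power(np.clip(v, 0.0, 1.0), gamma)
```

    As with the paper's HSV pipeline, applying this only to the V-plane and recombining with the original H and S channels brightens the image without shifting its colors.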

  3. Color harmonization for images

    NASA Astrophysics Data System (ADS)

    Tang, Zhen; Miao, Zhenjiang; Wan, Yanli; Wang, Zhifei

    2011-04-01

    Color harmonization is an artistic technique to adjust a set of colors in order to enhance their visual harmony so that they are aesthetically pleasing in terms of human visual perception. We present a new color harmonization method that treats the harmonization as a function optimization. For a given image, we derive a cost function based on the observation that pixels in a small window that have similar unharmonic hues should be harmonized with similar harmonic hues. By minimizing the cost function, we get a harmonized image in which the spatial coherence is preserved. A new matching function is proposed to select the best matching harmonic schemes, and a new component-based preharmonization strategy is proposed to preserve the hue distribution of the harmonized images. Our approach overcomes several shortcomings of the existing color harmonization methods. We test our algorithm with a variety of images to demonstrate the effectiveness of our approach.

  4. New adaptive clutter rejection based on spectral analysis for ultrasound color Doppler imaging: phantom and in vivo abdominal study.

    PubMed

    Geunyong Park; Sunmi Yeo; Jae Jin Lee; Changhan Yoon; Hyun-Woo Koh; Hyungjoon Lim; Youngtae Kim; Hwan Shim; Yangmo Yoo

    2014-01-01

    Effective rejection of time-varying clutter originating from slowly moving vessels and surrounding tissues is important for depicting hemodynamics in ultrasound color Doppler imaging (CDI). In this paper, a new adaptive clutter rejection method based on spectral analysis (ACR-SA) is presented for suppressing nonstationary clutter. In ACR-SA, tissue and flow characteristics are analyzed by singular value decomposition and tissue acceleration of backscattered Doppler signals to determine an appropriate clutter filter from a set of clutter filters. To evaluate the ACR-SA method, 20 frames of complex baseband data were acquired by a commercial ultrasound system equipped with a research package (Accuvix V10, Samsung Medison, Seoul, Korea) using a 3.5-MHz convex array probe while introducing tissue movements to the flow phantom (Gammex 1425 A LE, Gammex, Middleton, WI, USA). In addition, 20 frames of in vivo abdominal data from five volunteers were captured. In the phantom experiment, the ACR-SA method provided 2.43 dB (p < 0.001) and 1.09 dB improvements in flow signal-to-clutter ratio (SCR) compared to static (STA) and down-mixing (ACR-DM) methods. Similarly, it showed smaller values in fractional residual clutter area (FRCA) compared to the STA and ACR-DM methods (i.e., 2.3% versus 5.4% and 3.7%, respectively). Consistent improvements in SCR from the proposed ACR-SA method were obtained with the in vivo abdominal data (i.e., 4.97 dB and 3.39 dB over STA and ACR-DM, respectively). The ACR-SA method showed less than 1% FRCA values for all in vivo abdominal data. These results indicate that the proposed ACR-SA method can improve image quality in CDI by providing enhanced rejection of nonstationary clutter.

  5. A fast algorithm for adaptive clutter rejection in ultrasound color flow imaging based on the first-order perturbation: a simulation study.

    PubMed

    You, Wei; Wang, Yuanyuan

    2010-08-01

    A fast clutter rejection method for ultrasound color flow imaging is proposed based on the first-order perturbation as an efficient implementation of eigen-decomposition. The proposed method is verified by simulated data. Results show that the proposed method can be adaptive to non-stationary clutter movements and its computational complexity is lower than that of the conventional eigen-based clutter rejection methods.

  6. Adaptive color visualization for dichromats using a customized hierarchical palette

    NASA Astrophysics Data System (ADS)

    Rodríguez-Pardo, Carlos E.; Sharma, Gaurav

    2011-01-01

    We propose a user-centric methodology for displaying digital color documents that optimizes color representations in an observer-specific and adaptive fashion. We apply our framework to situations involving viewers with common dichromatic color vision deficiencies, who face challenges in perceiving information presented in color images and graphics designed for color-normal individuals. For situations involving qualitative data visualization, we present a computationally efficient solution that combines a customized observer-specific hierarchical palette with "display time" selection of the number of colors to generate renderings with colors that are easily discriminated by the intended viewer. The palette design is accomplished via a clustering algorithm that arranges colors in a hierarchical tree based on their perceived differences for the intended viewer. A desired number of highly discriminable colors is readily obtained from the hierarchical palette via a simple truncation. As an illustration, we demonstrate the application of the methodology to Ishihara style images.
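The hierarchical-palette idea can be sketched with a greedy agglomerative merge. Plain squared-RGB distance stands in here for the viewer-specific perceived difference the paper uses, and stopping the merge when k clusters remain plays the role of truncating the hierarchical tree:

```python
def hierarchical_palette(colors, k, dist=None):
    """Greedily merge the closest pair of clusters until k remain.
    `dist` stands in for the viewer-specific perceived color difference;
    squared RGB distance is used as a placeholder."""
    if dist is None:
        dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    clusters = [(c, 1) for c in colors]  # (centroid, weight)
    while len(clusters) > k:
        # find the closest pair of centroids
        i, j = min(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda p: dist(clusters[p[0]][0], clusters[p[1]][0]))
        (ca, wa), (cb, wb) = clusters[i], clusters[j]
        merged = (tuple((wa * a + wb * b) / (wa + wb) for a, b in zip(ca, cb)),
                  wa + wb)
        clusters = [c for n, c in enumerate(clusters) if n not in (i, j)]
        clusters.append(merged)
    return [c for c, _ in clusters]
```

Running the merge to different stopping points yields palettes of different sizes, which mirrors the "display time" selection of the number of colors described above.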

  7. Image indexing using color correlograms

    DOEpatents

    Huang, Jing; Kumar, Shanmugasundaram Ravi; Mitra, Mandar; Zhu, Wei-Jing

    2001-01-01

    A color correlogram is a three-dimensional table indexed by color and distance between pixels which expresses how the spatial correlation of color changes with distance in a stored image. The color correlogram may be used to distinguish an image from other images in a database. To create a color correlogram, the colors in the image are quantized into m color values, c.sub.1 . . . c.sub.m. Also, the distance values k.epsilon.[d] to be used in the correlogram are determined, where [d] is the set of distances between pixels in the image, bounded by dmax, the maximum distance between pixels in the image. Each entry (i, j, k) in the table is the probability of finding a pixel of color c.sub.j at a selected distance k from a pixel of color c.sub.i. A color autocorrelogram, which is a restricted version of the color correlogram that considers color pairs of the form (i,i) only, may also be used to identify an image.
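Following the definition above, the autocorrelogram restriction (color pairs of the form (i, i) only) can be computed directly. The sketch below assumes the L-infinity "ring" neighborhood commonly used for correlograms and a small grid of quantized color indices:

```python
def autocorrelogram(img, colors, dists):
    """img: 2-D grid of quantized color indices. For each color c and
    distance k, estimate Pr[a pixel at L-inf distance k has color c,
    given the start pixel has color c] by counting over the in-bounds
    ring of pixels at exactly distance k."""
    h, w = len(img), len(img[0])
    table = {}
    for c in colors:
        for k in dists:
            same = total = 0
            for y in range(h):
                for x in range(w):
                    if img[y][x] != c:
                        continue
                    for dy in range(-k, k + 1):
                        for dx in range(-k, k + 1):
                            if max(abs(dy), abs(dx)) != k:
                                continue  # keep only the ring at distance k
                            ny, nx = y + dy, x + dx
                            if 0 <= ny < h and 0 <= nx < w:
                                total += 1
                                same += img[ny][nx] == c
            table[(c, k)] = same / total if total else 0.0
    return table
```

For a uniform image the entry is 1.0 at every distance; for a 2x2 checkerboard the distance-1 entry drops to 1/3, since two of each pixel's three in-bounds neighbors have the other color.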

  8. Selection of small color palette for color image quantization

    NASA Astrophysics Data System (ADS)

    Chau, Wing K.; Wong, S. K. M.; Yang, Xuedong; Wan, Shijie J.

    1992-05-01

    Two issues are involved in color image quantization: color palette selection and color mapping. A common practice for color palette selection is to minimize the color distortion for each pixel (the median-cut, the variance-based and the k-means algorithms). After the color palette has been chosen, a quantized image may be generated by mapping the original color of each pixel onto its nearest color in the color palette. Such an approach can usually produce quantized images of high quality with 128 or more colors. For 32 - 64 colors, the quality of the quantized images is often acceptable with the aid of dithering techniques in the color mapping process. For 8 - 16 colors, however, the above statistical methods for color selection are no longer suitable because of the great reduction of the color gamut. In order to preserve the color gamut of the original image, one may want to select the colors in such a way that the convex hull formed by these colors in the RGB color space encloses most colors of the original image. Quantized images generated in such a geometrical way usually preserve a lot of image detail, but may contain too much high-frequency noise. This paper presents an effective algorithm for the selection of a very small color palette by combining the strengths of the above statistical and geometrical approaches. We demonstrate that with the new method, images of high quality can be produced using only 4 to 8 colors.
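The statistical approach cited first can be illustrated with the classic median-cut algorithm; this is a minimal sketch of that baseline, not the paper's combined statistical-geometrical method:

```python
def median_cut(pixels, n_colors):
    """Classic median-cut palette selection: repeatedly split the box
    whose widest channel range is largest at the median of that channel,
    then return the average color of each box."""
    boxes = [list(pixels)]

    def spread(box):
        return max(max(p[c] for p in box) - min(p[c] for p in box)
                   for c in range(3))

    while len(boxes) < n_colors:
        box = max(boxes, key=spread)
        if spread(box) == 0:
            break  # every remaining box holds a single color
        ch = max(range(3), key=lambda c: max(p[c] for p in box) -
                 min(p[c] for p in box))
        box.sort(key=lambda p: p[ch])
        mid = len(box) // 2
        boxes.remove(box)
        boxes += [box[:mid], box[mid:]]
    return [tuple(sum(p[c] for p in box) / len(box) for c in range(3))
            for box in boxes]
```

Each returned color minimizes distortion within its own box, which is exactly the statistical behavior that, per the abstract, breaks down once the palette shrinks to 8 - 16 colors.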

  9. Color adaptation induced from linguistic description of color

    PubMed Central

    Zheng, Liling; Huang, Ping; Zhong, Xiao; Li, Tianfeng; Mo, Lei

    2017-01-01

    Recent theories propose that language comprehension can influence perception at low levels of the perceptual system. Here, we used an adaptation paradigm to test whether processing language caused color adaptation in the visual system. After prolonged exposure to a color linguistic context, which depicted red, green, or non-specific color scenes, participants immediately performed a color detection task, indicating whether or not they saw a green color square in the middle of a white screen. We found that participants were more likely to perceive the green color square after listening to discourses denoting red compared to discourses denoting green or conveying non-specific color information, revealing that language comprehension caused an adaptation aftereffect at the perceptual level. Therefore, semantic representation of color may have a common neural substrate with color perception. These results are in line with the simulation view of embodied language comprehension theory, which predicts that processing language reactivates the sensorimotor systems that are engaged during real experience. PMID:28358807

  10. CFA-aware features for steganalysis of color images

    NASA Astrophysics Data System (ADS)

    Goljan, Miroslav; Fridrich, Jessica

    2015-03-01

    Color interpolation is a form of upsampling, which introduces constraints on the relationship between neighboring pixels in a color image. These constraints can be utilized to substantially boost the accuracy of steganography detectors. In this paper, we introduce a rich model formed by 3D co-occurrences of color noise residuals split according to the structure of the Bayer color filter array to further improve detection. Some color interpolation algorithms, such as AHD and PPG, impose pixel constraints so tight that extremely accurate detection becomes possible with merely eight features, eliminating the need for model richification. We carry out experiments on non-adaptive LSB matching and the content-adaptive algorithm WOW on five different color interpolation algorithms. In contrast to grayscale images, in color images that exhibit traces of color interpolation the security of WOW is significantly lower and, depending on the interpolation algorithm, may even be lower than that of non-adaptive LSB matching.

  11. Sparse representation for color image restoration.

    PubMed

    Mairal, Julien; Elad, Michael; Sapiro, Guillermo

    2008-01-01

    Sparse representations of signals have drawn considerable interest in recent years. The assumption that natural signals, such as images, admit a sparse decomposition over a redundant dictionary leads to efficient algorithms for handling such sources of data. In particular, the design of well-adapted dictionaries for images has been a major challenge. The K-SVD has been recently proposed for this task and shown to perform very well for various grayscale image processing tasks. In this paper, we address the problem of learning dictionaries for color images and extend the previously proposed K-SVD-based grayscale image denoising algorithm. This work puts forward ways for handling nonhomogeneous noise and missing information, paving the way to state-of-the-art results in applications such as color image denoising, demosaicing, and inpainting, as demonstrated in this paper.

  12. Cydonia: Wide Angle Color Image

    NASA Technical Reports Server (NTRS)

    2000-01-01

    [figure removed for brevity, see original site]

    Although the resolution of the MOC wide angle cameras is too low to tell much about the geomorphology of the Cydonia region, the images from the red and blue wide angle cameras provide us with two types of information that are of interest in their own right: color and stereoscopic data. Above are a color view and a stereoscopic anaglyph rendition of Geodesy Campaign images acquired by MGS MOC in May 1999. To view the stereo image, you need red/blue '3-D' glasses.

  13. Finding text in color images

    NASA Astrophysics Data System (ADS)

    Zhou, Jiangying; Lopresti, Daniel P.; Tasdizen, Tolga

    1998-04-01

    In this paper, we consider the problem of locating and extracting text from WWW images. A previous algorithm based on color clustering and connected components analysis works well as long as the color of each character is relatively uniform and the typography is fairly simple. It breaks down quickly, however, when these assumptions are violated. In this paper, we describe more robust techniques for dealing with this challenging problem. We present an improved color clustering algorithm that measures similarity based on both RGB and spatial proximity. Layout analysis is also incorporated to handle more complex typography. These changes significantly enhance the performance of our text detection procedure.
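The improved clustering measures similarity from both RGB and spatial proximity. A minimal sketch of such a combined distance is shown below; the weighting parameter w_spatial and the nearest-seed assignment are illustrative assumptions, not the paper's exact formulation:

```python
def combined_dist(p1, p2, w_spatial=0.5):
    """Distance mixing RGB similarity and spatial proximity; pixels are
    (x, y, r, g, b) tuples. w_spatial trades off position vs. color."""
    (x1, y1, *c1), (x2, y2, *c2) = p1, p2
    d_rgb = sum((a - b) ** 2 for a, b in zip(c1, c2)) ** 0.5
    d_xy = ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
    return d_rgb + w_spatial * d_xy

def cluster(pixels, seeds, w_spatial=0.5):
    """Assign each pixel to its nearest seed under the combined distance."""
    return [min(range(len(seeds)),
                key=lambda i: combined_dist(p, seeds[i], w_spatial))
            for p in pixels]
```

With spatial proximity in the distance, two regions of the same color that are far apart (say, two separate characters) can end up in different clusters, which is what makes connected-component extraction more robust.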

  14. Image subregion querying using color correlograms

    DOEpatents

    Huang, Jing; Kumar, Shanmugasundaram Ravi; Mitra, Mandar; Zhu, Wei-Jing

    2002-01-01

    A color correlogram (10) is a representation expressing the spatial correlation of color and distance between pixels in a stored image. The color correlogram (10) may be used to distinguish objects in an image as well as between images in a plurality of images. By intersecting a color correlogram of an image object with correlograms of images to be searched, those images which contain the objects are identified by the intersection correlogram.
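The intersection of correlograms described above can be illustrated with a histogram-intersection score over correlogram entries. This is a common formulation for comparing such tables; the patent's exact scoring may differ:

```python
def correlogram_intersect(query, target):
    """Histogram-intersection of two correlograms, stored as dicts keyed
    by (color_i, color_j, distance): sum of element-wise minima. A high
    score suggests the target image contains the query object."""
    keys = set(query) | set(target)
    return sum(min(query.get(k, 0.0), target.get(k, 0.0)) for k in keys)
```

A query correlogram that is fully contained in the target's scores its own total mass; entries absent from either table contribute nothing.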

  15. Spectrally-encoded color imaging

    PubMed Central

    Kang, DongKyun; Yelin, Dvir; Bouma, Brett E.; Tearney, Guillermo J.

    2010-01-01

    Spectrally-encoded endoscopy (SEE) is a technique for ultraminiature endoscopy that encodes each spatial location on the sample with a different wavelength. One limitation of previous incarnations of SEE is that it inherently creates monochromatic images, since the spectral bandwidth is expended in the spatial encoding process. Here we present a spectrally-encoded imaging system that has color imaging capability. The new imaging system utilizes three distinct red, green, and blue spectral bands that are configured to illuminate the grating at different incident angles. By careful selection of the incident angles, the three spectral bands can be made to overlap on the sample. To demonstrate the method, a bench-top system was built, comprising a 2400-lpmm grating illuminated by three 525-μm-diameter beams with three different spectral bands. Each spectral band had a bandwidth of 75 nm, producing 189 resolvable points. A resolution target, color phantoms, and excised swine small intestine were imaged to validate the system's performance. The color SEE system showed qualitatively and quantitatively similar color imaging performance to that of a conventional digital camera. PMID:19688002

  16. Color Image Denoising via Discriminatively Learned Iterative Shrinkage.

    PubMed

    Sun, Jian; Sun, Jian; Xu, Zongben

    2015-11-01

    In this paper, we propose a novel model, a discriminatively learned iterative shrinkage (DLIS) model, for color image denoising. The DLIS is a generalization of wavelet shrinkage by iteratively performing shrinkage over patch groups and whole image aggregation. We discriminatively learn the shrinkage functions and basis from the training pairs of noisy/noise-free images, which can adaptively handle different noise characteristics in luminance/chrominance channels, and the unknown structured noise in real-captured color images. Furthermore, to remove the splotchy real color noises, we design a Laplacian pyramid-based denoising framework to progressively recover the clean image from the coarsest scale to the finest scale by the DLIS model learned from the real color noises. Experiments show that our proposed approach can achieve the state-of-the-art denoising results on both synthetic denoising benchmark and real-captured color images.
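DLIS generalizes wavelet shrinkage by learning the shrinkage functions from training pairs. The fixed rule it generalizes is classic soft-thresholding, sketched here for reference (the learned DLIS functions themselves are not reproduced):

```python
def soft_shrink(coeffs, t):
    """Classic soft-thresholding: shrink each coefficient toward zero by
    t, zeroing anything smaller than t in magnitude. DLIS replaces this
    fixed rule with shrinkage functions learned from noisy/noise-free
    training pairs."""
    return [(abs(c) - t) * (1 if c > 0 else -1) if abs(c) > t else 0.0
            for c in coeffs]
```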

  17. Color Imaging management in film processing

    NASA Astrophysics Data System (ADS)

    Tremeau, Alain; Konik, Hubert; Colantoni, Philippe

    2003-12-01

    The latest research projects in the laboratory LIGIV concern capture, processing, archiving and display of color images considering the trichromatic nature of the Human Visual System (HVS). Among these projects one addresses digital cinematographic film sequences of high resolution and dynamic range. This project aims to optimize the use of content for the post-production operators and for the end user. The studies presented in this paper address the use of metadata to optimise the consumption of video content on a device of the user's choice, independent of the nature of the equipment that captured the content. Optimising consumption includes enhancing the quality of image reconstruction on a display. Another part of this project addresses the content-based adaptation of image display. The main focus is on Regions of Interest (ROI) operations, based on the ROI concepts of MPEG-7. The aim of this second part is to characterize and ensure the conditions of display even if the display device or display medium changes. This requires firstly the definition of a reference color space and the definition of bi-directional color transformations for each peripheral device (camera, display, film recorder, etc.). The complicating factor is that different devices have different color gamuts, depending on the chromaticity of their primaries and the ambient illumination under which they are viewed. To match the displayed image to the aimed appearance, all kinds of production metadata (camera specification, camera colour primaries, lighting conditions) should be associated with the film material. Metadata and content together build rich content. The author is assumed to specify conditions as known from digital graphics arts. To control image pre-processing and image post-processing, these specifications should be contained in the film's metadata. The specifications are related to the ICC profiles but additionally need to consider mesopic viewing conditions.

  18. Preparing Colorful Astronomical Images II

    NASA Astrophysics Data System (ADS)

    Levay, Z. G.; Frattare, L. M.

    2002-12-01

    We present additional techniques for using mainstream graphics software (Adobe Photoshop and Illustrator) to produce composite color images and illustrations from astronomical data. These techniques have been used on numerous images from the Hubble Space Telescope to produce photographic, print and web-based products for news, education and public presentation as well as illustrations for technical publication. We expand on a previous paper to present more detail and additional techniques, taking advantage of new or improved features available in the latest software versions. While Photoshop is not intended for quantitative analysis of full dynamic range data (as are IRAF or IDL, for example), we have had much success applying Photoshop's numerous, versatile tools to work with scaled images, masks, text and graphics in multiple semi-transparent layers and channels.

  19. CATS: Optical to Near-Infrared Colors of the Bulge and Disk of Two z = 0.7 Galaxies Using Hubble Space Telescope and Keck Laser Adaptive Optics Imaging

    NASA Astrophysics Data System (ADS)

    Steinbring, E.; Melbourne, J.; Metevier, A. J.; Koo, D. C.; Chun, M. R.; Simard, L.; Larkin, J. E.; Max, C. E.

    2008-10-01

    We have employed laser guide star (LGS) adaptive optics (AO) on the Keck II telescope to obtain near-infrared (NIR) images in the Extended Groth Strip deep galaxy survey field. This is a continuation of our Center for Adaptive Optics Treasury Survey program of targeting 0.5 < z < 1 galaxies where existing images with the Hubble Space Telescope (HST) are already in hand. Our AO field has already been imaged by the Advanced Camera for Surveys and the Near Infrared Camera and Multiobject Spectrograph (NICMOS). Our AO images at 2.2 μm (K') are comparable in depth to those from the HST, have Strehl ratios up to 0.4, and full width at half-maximum resolutions superior to that from NICMOS. By sampling the field with the LGS at different positions, we obtain better quality AO images than with an immovable natural guide star. As examples of the power of adding LGS AO to HST data, we study the optical to NIR colors and color gradients of the bulge and disk of two galaxies in the field with z = 0.7. All authors except L.S. are affiliated with the Center for Adaptive Optics.

  20. Enriching text with images and colored light

    NASA Astrophysics Data System (ADS)

    Sekulovski, Dragan; Geleijnse, Gijs; Kater, Bram; Korst, Jan; Pauws, Steffen; Clout, Ramon

    2008-01-01

    We present an unsupervised method to enrich textual applications with relevant images and colors. The images are collected by querying large image repositories and subsequently the colors are computed using image processing. A prototype system based on this method is presented where the method is applied to song lyrics. In combination with a lyrics synchronization algorithm the system produces a rich multimedia experience. In order to identify terms within the text that may be associated with images and colors, we select noun phrases using a part-of-speech tagger. Large image repositories are queried with these terms. Per term, representative colors are extracted using the collected images. To this end, we use either a histogram-based or a mean-shift-based algorithm. The representative color extraction uses the non-uniform distribution of the colors found in the large repositories. The images that are ranked best by the search engine are displayed on a screen, while the extracted representative colors are rendered on controllable lighting devices in the living room. We evaluate our method by comparing the computed colors to standard color representations of a set of English color terms. A second evaluation focuses on the distance in color between a queried term in English and its translation in a foreign language. Based on results from three sets of terms, a measure of suitability of a term for color extraction based on KL divergence is proposed. Finally, we compare the performance of the algorithm using either the automatically indexed repository of Google Images or the manually annotated Flickr.com. Based on the results of these experiments, we conclude that using the presented method we can compute the relevant color for a term using a large image repository and image processing.
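The histogram-based variant of representative color extraction can be sketched as follows. The coarse bin size and the averaging of the winning bin's members are illustrative assumptions, not the paper's tuned parameters:

```python
from collections import Counter

def representative_color(pixels, bin_size=32):
    """Histogram-based pick: quantize RGB into coarse bins, take the most
    populated bin, and return the average of the pixels falling in it."""
    bins = Counter(tuple(c // bin_size for c in p) for p in pixels)
    top = bins.most_common(1)[0][0]
    members = [p for p in pixels if tuple(c // bin_size for c in p) == top]
    return tuple(sum(p[c] for p in members) / len(members) for c in range(3))
```

Coarse binning makes the mode robust to small color variations across the collected images, which matters when the repository returns many loosely related pictures per term.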

  1. Image color reduction method for color-defective observers using a color palette composed of 20 particular colors

    NASA Astrophysics Data System (ADS)

    Sakamoto, Takashi

    2015-01-01

    This study describes a color enhancement method that uses a color palette especially designed for protan and deutan defects, commonly known as red-green color blindness. The proposed color reduction method is based on a simple color mapping. Complicated computation and image processing are not required by the proposed method, which can replace protan and deutan confusion (p/d-confusion) colors with protan and deutan safe (p/d-safe) colors. Color palettes for protan and deutan defects proposed by previous studies are composed of few p/d-safe colors; thus, the colors contained in these palettes are insufficient for replacing colors in photographs. Recently, Ito et al. proposed a p/d-safe color palette composed of 20 particular colors. The author demonstrated that their p/d-safe color palette can be applied to image color reduction in photographs as a means to replace p/d-confusion colors. This study presents the results of the proposed color reduction on photographs that include typical p/d-confusion colors. After the reduction process is completed, color-defective observers can distinguish these previously confusing colors.
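The simple color mapping described above amounts to replacing each pixel's color with its nearest entry in the p/d-safe palette. A minimal sketch, using squared RGB distance as an assumed metric (the study may use a perceptual color difference instead):

```python
def map_to_palette(pixel, palette):
    """Replace a color with its nearest palette entry under squared RGB
    distance; `palette` would hold the 20 p/d-safe colors."""
    return min(palette, key=lambda q: sum((a - b) ** 2
                                          for a, b in zip(pixel, q)))
```

Because the mapping is a per-pixel lookup, the whole reduction runs in a single pass over the image with no iterative optimization, matching the "no complicated computation" claim.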

  2. Color image projection through a strongly scattering wall.

    PubMed

    Conkey, Donald B; Piestun, Rafael

    2012-12-03

    We present multi-color image projection through highly scattering media for image formation without the need for reconstruction. We overcome the fundamental limitations to the transmission of visual information imposed by multiple scattering phenomena via multi-parametric adaptive wavefront modulation that takes into account the scattering properties of the medium. In order to evaluate the wavefront modulation required for a specific image formation we implement a global optimization via a genetic algorithm. We create color images by diffraction and multiple scattering effects as well as via RGB demosaicing.

  3. Hepatitis Diagnosis Using Facial Color Image

    NASA Astrophysics Data System (ADS)

    Liu, Mingjia; Guo, Zhenhua

    Facial color diagnosis is an important diagnostic method in traditional Chinese medicine (TCM). However, due to its qualitative, subjective and experience-based nature, traditional facial color diagnosis has a very limited application in clinical medicine. To circumvent the subjective and qualitative problems of facial color diagnosis of Traditional Chinese Medicine, in this paper we present a novel computer-aided facial color diagnosis method (CAFCDM). The method has three parts: a Face Image Database, an Image Preprocessing Module and a Diagnosis Engine. The face image database was built from a group of 116 patients affected by 2 kinds of liver diseases and 29 healthy volunteers. The quantitative color feature is extracted from facial images by using popular digital image processing techniques. Then, a KNN classifier is employed to model the relationship between the quantitative color feature and diseases. The results show that the method can properly identify three groups: healthy, severe hepatitis with jaundice and severe hepatitis without jaundice, with accuracy higher than 73%.

  4. Adaptive Image Denoising by Mixture Adaptation

    NASA Astrophysics Data System (ADS)

    Luo, Enming; Chan, Stanley H.; Nguyen, Truong Q.

    2016-10-01

    We propose an adaptive learning procedure to learn patch-based image priors for image denoising. The new algorithm, called the Expectation-Maximization (EM) adaptation, takes a generic prior learned from a generic external database and adapts it to the noisy image to generate a specific prior. Different from existing methods that combine internal and external statistics in ad-hoc ways, the proposed algorithm is rigorously derived from a Bayesian hyper-prior perspective. There are two contributions of this paper: First, we provide full derivation of the EM adaptation algorithm and demonstrate methods to improve the computational complexity. Second, in the absence of the latent clean image, we show how EM adaptation can be modified based on pre-filtering. Experimental results show that the proposed adaptation algorithm yields consistently better denoising results than the one without adaptation and is superior to several state-of-the-art algorithms.

  5. Adaptive Image Denoising by Mixture Adaptation.

    PubMed

    Luo, Enming; Chan, Stanley H; Nguyen, Truong Q

    2016-10-01

    We propose an adaptive learning procedure to learn patch-based image priors for image denoising. The new algorithm, called the expectation-maximization (EM) adaptation, takes a generic prior learned from a generic external database and adapts it to the noisy image to generate a specific prior. Different from existing methods that combine internal and external statistics in ad hoc ways, the proposed algorithm is rigorously derived from a Bayesian hyper-prior perspective. There are two contributions of this paper. First, we provide full derivation of the EM adaptation algorithm and demonstrate methods to improve the computational complexity. Second, in the absence of the latent clean image, we show how EM adaptation can be modified based on pre-filtering. The experimental results show that the proposed adaptation algorithm yields consistently better denoising results than the one without adaptation and is superior to several state-of-the-art algorithms.

  6. Nonlinear color-image decomposition for image processing of a digital color camera

    NASA Astrophysics Data System (ADS)

    Saito, Takahiro; Aizawa, Haruya; Yamada, Daisuke; Komatsu, Takashi

    2009-01-01

    This paper extends the BV (Bounded Variation) - G and/or the BV-L1 variational nonlinear image-decomposition approaches, which are considered to be useful for image processing of a digital color camera, to genuine color-image decomposition approaches. For utilizing inter-channel color cross-correlations, this paper first introduces TV (Total Variation) norms of color differences and TV norms of color sums into the BV-G and/or BV-L1 energy functionals, and then derives denoising-type decomposition-algorithms with an over-complete wavelet transform, through applying the Besov-norm approximation to the variational problems. Our methods decompose a noisy color image without producing undesirable low-frequency colored artifacts in its separated BV-component, and they achieve desirable high-quality color-image decomposition, which is very robust against colored random noise.

  7. Variational exemplar-based image colorization.

    PubMed

    Bugeau, Aurélie; Ta, Vinh-Thong; Papadakis, Nicolas

    2014-01-01

    In this paper, we address the problem of recovering a color image from a grayscale one. The input color data comes from a source image considered as a reference image. Reconstructing the missing color of a grayscale pixel is here viewed as the problem of automatically selecting the best color among a set of color candidates while simultaneously ensuring the local spatial coherency of the reconstructed color information. To solve this problem, we propose a variational approach where a specific energy is designed to model the color selection and the spatial constraint problems simultaneously. The contributions of this paper are twofold. First, we introduce a variational formulation modeling the color selection problem under spatial constraints and propose a minimization scheme, which computes a local minimum of the defined nonconvex energy. Second, we combine different patch-based features and distances in order to construct a consistent set of possible color candidates. This set is used as input data and our energy minimization automatically selects the best color to transfer for each pixel of the grayscale image. Finally, the experiments illustrate the potential of our simple methodology and show that our results are very competitive with respect to the state-of-the-art methods.

  8. Image-based color ink diffusion rendering.

    PubMed

    Wang, Chung-Ming; Wang, Ren-Jie

    2007-01-01

    This paper proposes an image-based painterly rendering algorithm for automatically synthesizing an image with color ink diffusion. We suggest a mathematical model with a physical base to simulate the phenomenon of color colloidal ink diffusing into absorbent paper. Our algorithm contains three main parts: a feature extraction phase, a Kubelka-Munk (KM) color mixing phase, and a color ink diffusion synthesis phase. In the feature extraction phase, the information of the reference image is simplified by luminance division and color segmentation. In the color mixing phase, the KM theory is employed to approximate the result when one pigment is painted upon another pigment layer. Then, in the color ink diffusion synthesis phase, the physically-based model that we propose is employed to simulate the result of color ink diffusion in absorbent paper using a texture synthesis technique. Our image-based color ink diffusion rendering (IBCIDR) algorithm eliminates the drawback of conventional Chinese ink simulations, which are limited to the black ink domain, and our approach demonstrates that, without using any strokes, a color image can be automatically converted to the diffused ink style with a visually pleasing appearance.

  9. A Color Image Edge Detection Algorithm Based on Color Difference

    NASA Astrophysics Data System (ADS)

    Zhuo, Li; Hu, Xiaochen; Jiang, Liying; Zhang, Jing

    2016-12-01

    Although image edge detection algorithms have been widely applied in image processing, the existing algorithms still face two important problems. On one hand, to restrain the interference of noise, smoothing filters are generally exploited in the existing algorithms, resulting in loss of significant edges. On the other hand, since the existing algorithms are sensitive to noise, many noisy edges are usually detected, which will disturb the subsequent processing. Therefore, a color image edge detection algorithm based on color difference is proposed in this paper. Firstly, a new operation called color separation is defined in this paper, which can reflect the information of color difference. Then, for the neighborhood of each pixel, color separations are calculated in four different directions to detect the edges. Experimental results on natural and synthetic images show that the proposed algorithm can remove a large number of noisy edges and be robust to smoothing filters. Furthermore, the proposed edge detection algorithm is applied in road foreground segmentation and shadow removal, which achieves good performance.
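The four-direction neighborhood test can be illustrated as follows. This is a stand-in that uses plain RGB distance between opposing neighbors along each axis, not the paper's color-separation operator, and the threshold value is an assumption:

```python
def color_diff_edge(img, y, x, thresh=30.0):
    """Illustrative four-direction test: compare the two neighbors on
    each of the four axes through (y, x) and flag an edge if any opposing
    pair differs by more than `thresh` in RGB distance."""
    dirs = [((0, -1), (0, 1)), ((-1, 0), (1, 0)),
            ((-1, -1), (1, 1)), ((-1, 1), (1, -1))]
    h, w = len(img), len(img[0])
    for (dy1, dx1), (dy2, dx2) in dirs:
        y1, x1, y2, x2 = y + dy1, x + dx1, y + dy2, x + dx2
        if 0 <= y1 < h and 0 <= x1 < w and 0 <= y2 < h and 0 <= x2 < w:
            d = sum((a - b) ** 2 for a, b in
                    zip(img[y1][x1], img[y2][x2])) ** 0.5
            if d > thresh:
                return True
    return False
```

Because the test compares neighbors across the center pixel rather than smoothing around it, a strong edge registers even when the center pixel itself is noisy.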

  10. Mobile image based color correction using deblurring

    NASA Astrophysics Data System (ADS)

    Wang, Yu; Xu, Chang; Boushey, Carol; Zhu, Fengqing; Delp, Edward J.

    2015-03-01

    Dietary intake, the process of determining what someone eats during the course of a day, provides valuable insights for mounting intervention programs for prevention of many chronic diseases such as obesity and cancer. The goal of the Technology Assisted Dietary Assessment (TADA) System, developed at Purdue University, is to automatically identify and quantify foods and beverages consumed by utilizing food images acquired with a mobile device. Color correction serves as a critical step to ensure accurate food identification and volume estimation. We make use of a specifically designed color checkerboard (i.e. a fiducial marker) to calibrate the imaging system so that the variations of food appearance under different lighting conditions can be determined. In this paper, we propose an image quality enhancement technique by combining image deblurring and color correction. The contribution consists of introducing an automatic camera shake removal method using a saliency map and improving the polynomial color correction model using the LMS color space.

  11. Adaptive Local Linear Regression with Application to Printer Color Management

    DTIC Science & Technology

    2008-01-01

    … digital images has recently led to increased consumer demand for accurate color reproduction. Given a CIELAB color one would like to reproduce, the color management problem is to determine what RGB color one must send the printer to minimize the error between the desired CIELAB color and the CIELAB … values formed the test samples. This process guaranteed that the CIELAB test samples were in the gamut for each printer, but each printer had a …

  12. Do common mechanisms of adaptation mediate color discrimination and appearance? Contrast adaptation

    NASA Astrophysics Data System (ADS)

    Hillis, James M.; Brainard, David H.

    2007-08-01

    Are effects of background contrast on color appearance and sensitivity controlled by the same mechanism of adaptation? We examined the effects of background color contrast on color appearance and on color-difference sensitivity under well-matched conditions. We linked the data using Fechner's hypothesis that the rate of apparent stimulus change is proportional to sensitivity and examined a family of parametric models of adaptation. Our results show that both appearance and discrimination are consistent with the same mechanism of adaptation.

  13. Do common mechanisms of adaptation mediate color discrimination and appearance? Contrast adaptation.

    PubMed

    Hillis, James M; Brainard, David H

    2007-08-01

    Are effects of background contrast on color appearance and sensitivity controlled by the same mechanism of adaptation? We examined the effects of background color contrast on color appearance and on color-difference sensitivity under well-matched conditions. We linked the data using Fechner's hypothesis that the rate of apparent stimulus change is proportional to sensitivity and examined a family of parametric models of adaptation. Our results show that both appearance and discrimination are consistent with the same mechanism of adaptation.

  14. Color filter array demosaicing: an adaptive progressive interpolation based on the edge type

    NASA Astrophysics Data System (ADS)

    Dong, Qiqi; Liu, Zhaohui

    2015-10-01

    The color filter array (CFA) is one of the key components enabling single-sensor digital cameras to produce color images. The Bayer CFA is the most commonly used pattern. In this array structure, the sampling frequency of green is twice that of red or blue, which is consistent with the sensitivity of human eyes to colors. However, each sensor pixel samples only one of the three primary color values. To render a full-color image, an interpolation process, commonly referred to as CFA demosaicing, is required to estimate the other two missing color values at each pixel. In this paper, we explore an adaptive progressive interpolation algorithm based on edge type. The proposed demosaicing method consists of two successive steps: an interpolation step that estimates missing color values according to various edge types, and a post-processing step based on iterative interpolation.
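
    For context, the non-adaptive baseline that edge-adaptive demosaicing improves on can be sketched as bilinear interpolation over an RGGB mosaic. The mask layout and normalization below are a generic illustration, not the paper's algorithm, which instead chooses the interpolation direction from the detected edge type.

```python
import numpy as np

def conv3x3(img, k):
    """'Same'-size 3x3 convolution with edge padding (helper for the sketch)."""
    p = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for i in range(3):
        for j in range(3):
            out += k[i, j] * p[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def demosaic_bilinear(raw):
    """Bilinear demosaicing of an RGGB Bayer mosaic: each missing sample is
    the normalized weighted average of the available neighbors of that color."""
    h, w = raw.shape
    out = np.zeros((h, w, 3))
    masks = np.zeros((h, w, 3), dtype=bool)
    masks[0::2, 0::2, 0] = True   # R sites
    masks[0::2, 1::2, 1] = True   # G sites (even rows)
    masks[1::2, 0::2, 1] = True   # G sites (odd rows)
    masks[1::2, 1::2, 2] = True   # B sites
    k = np.array([[0.25, 0.5, 0.25], [0.5, 1.0, 0.5], [0.25, 0.5, 0.25]])
    for c in range(3):
        num = conv3x3(np.where(masks[..., c], raw, 0.0), k)
        den = conv3x3(masks[..., c].astype(float), k)
        out[..., c] = num / np.maximum(den, 1e-12)
    return out
```

    On a flat mosaic this reproduces the constant value exactly; the artifacts it produces at edges (zippering, false color) are what edge-type-aware interpolation targets.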

  15. Statistical pressure snakes based on color images.

    SciTech Connect

    Schaub, Hanspeter

    2004-05-01

    The traditional mono-color statistical pressure snake was modified to function on a color image with target errors defined in HSV color space. Large variations in target lighting and shading are permitted if the target color is only specified in terms of hue. This method works well with custom targets where the target is surrounded by a color of a very different hue. A significant robustness increase is achieved in the computer vision capability to track a specific target in an unstructured, outdoor environment. By specifying the target color to contain hue, saturation and intensity values, it is possible to establish a reasonably robust method to track general image features of a single color. This method is convenient to allow the operator to select arbitrary targets, or sections of a target, which have a common color. Further, a modification to the standard pixel averaging routine is introduced which allows the target to be specified not only in terms of a single color, but also using a list of colors. These algorithms were tested and verified by using a web camera attached to a personal computer.
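
    A minimal sketch of a hue-only target error of the kind described above; the tolerance value and the ±1 pressure mapping are assumptions for illustration, not the paper's formulation.

```python
import numpy as np

def hue_error(pixel_hue, target_hue):
    """Circular hue distance in degrees. Defining the snake's target error
    on hue alone makes it tolerant of lighting and shading variation, which
    mostly affects the saturation and value channels."""
    d = np.abs(np.asarray(pixel_hue, dtype=float) - target_hue) % 360.0
    return np.minimum(d, 360.0 - d)

def pressure(pixel_hue, target_hue, tol=20.0):
    """Illustrative pressure term: inflate (+1) on target-hued pixels,
    deflate (-1) elsewhere. The tolerance is a hypothetical parameter."""
    return np.where(hue_error(pixel_hue, target_hue) < tol, 1.0, -1.0)
```

    Note the wraparound: hues of 350° and 10° are only 20° apart, which a naive absolute difference would miss.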

  16. Color Image Segmentation in a Quaternion Framework

    PubMed Central

    Subakan, Özlem N.; Vemuri, Baba C.

    2010-01-01

    In this paper, we present a feature/detail preserving color image segmentation framework using Hamiltonian quaternions. First, we introduce a novel Quaternionic Gabor Filter (QGF) which can combine the color channels and the orientations in the image plane. Using the QGFs, we extract the local orientation information in the color images. Second, in order to model this derived orientation information, we propose a continuous mixture of appropriate hypercomplex exponential basis functions. We derive a closed form solution for this continuous mixture model. This analytic solution is in the form of a spatially varying kernel which, when convolved with the signed distance function of an evolving contour (placed in the color image), yields a detail preserving segmentation. PMID:21243101

  17. Compression of color-mapped images

    NASA Technical Reports Server (NTRS)

    Hadenfeldt, A. C.; Sayood, Khalid

    1992-01-01

    In a standard image coding scenario, pixel-to-pixel correlation nearly always exists in the data, especially if the image is a natural scene. This correlation is what allows predictive coding schemes (e.g., DPCM) to perform efficient compression. In a color-mapped image, the values stored in the pixel array are no longer directly related to the pixel intensity. Two color indices which are numerically adjacent (close) may point to two very different colors. The correlation still exists, but only via the colormap. This fact can be exploited by sorting the color map to reintroduce the structure. The sorting of colormaps is studied and it is shown how the resulting structure can be used in both lossless and lossy compression of images.
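
    The colormap-sorting idea can be sketched as follows. Luminance ordering is one simple 1-D heuristic (the paper studies sorting strategies in more depth), and the palette and indexed image here are synthetic.

```python
import numpy as np

# Reorder the palette so numerically close indices point to similar colors,
# restoring spatial correlation in the index array that a predictive coder
# (e.g., DPCM) can exploit.
rng = np.random.default_rng(1)
palette = rng.uniform(0, 255, size=(256, 3))          # unsorted colormap
indexed_image = rng.integers(0, 256, size=(16, 16))   # toy indexed image

luma_weights = np.array([0.299, 0.587, 0.114])        # Rec. 601 luma
order = np.argsort(palette @ luma_weights)
sorted_palette = palette[order]

# Remap old indices to new ones so the displayed image is unchanged.
inverse = np.empty(256, dtype=int)
inverse[order] = np.arange(256)
remapped = inverse[indexed_image]
```

    The (sort, remap) pair is lossless: the displayed colors are identical before and after, but adjacent index values now tend to denote similar colors.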

  18. Adaptive color rendering of maps for users with color vision deficiencies

    NASA Astrophysics Data System (ADS)

    Kvitle, Anne Kristin; Green, Phil; Nussbaum, Peter

    2015-01-01

    A map is an information design object for which canonical colors for the most common elements are well established. For a CVD observer, it may be difficult to discriminate between such elements - for example, it may be hard to distinguish a red road from a green landscape on the basis of color alone. We address this problem through an adaptive color schema in which the conspicuity of map elements to the individual user is maximized. This paper outlines a method to perform adaptive color rendering of map information for users with color vision deficiencies. The palette selection method is based on a pseudo-color palette generation technique which constrains colors to those which lie on the boundary of a reference object color gamut. A user performs a color vision discrimination task, and based on the results of the test, a palette of colors is selected using the pseudo-color palette generation method. This ensures that the perceived difference between palette elements is high while retaining the canonical colors of well-known elements as far as possible. We show examples of color palettes computed for a selection of normal and CVD observers, together with maps rendered using these palettes.

  19. Embedding color watermarks in color images based on Schur decomposition

    NASA Astrophysics Data System (ADS)

    Su, Qingtang; Niu, Yugang; Liu, Xianxi; Zhu, Yu

    2012-04-01

    In this paper, a blind dual color image watermarking scheme based on Schur decomposition is introduced. To the best of our knowledge, this is the first use of Schur decomposition to embed a color image watermark in a color host image, in contrast to schemes that use a binary image as the watermark. By analyzing the 4 × 4 unitary matrix U obtained via Schur decomposition, we find a strong correlation between the element in the second row, first column and the element in the third row, first column. This property can be exploited for embedding and extracting the watermark in a blind manner. Since Schur decomposition is an intermediate step in SVD, the proposed method requires fewer computations. Experimental results show that the proposed scheme is robust against most common attacks, including JPEG lossy compression, JPEG 2000 compression, low-pass filtering, cropping, noise addition, blurring, rotation, scaling, and sharpening. Moreover, the proposed algorithm outperforms the closely related SVD-based algorithm and the spatial-domain algorithm.

  20. How Phoenix Creates Color Images (Animation)

    NASA Technical Reports Server (NTRS)

    2008-01-01


    This simple animation shows how a color image is made from images taken by Phoenix.

    The Surface Stereo Imager captures the same scene with three different filters. The images are sent to Earth in black and white and the color is added by mission scientists.

    By contrast, consumer digital cameras and cell phones have filters built in and do all of the color processing within the camera itself.

    The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  1. Adaptation of spherical harmonic transform for color shape reconstruction and retrieval using quaternion algebra

    NASA Astrophysics Data System (ADS)

    Dad, Nisrine; En-Nahnahi, Noureddine; Ouatik, Said El Alaoui; Oumsis, Mohammed

    2016-09-01

    A set of invariant quaternion moments based on an adaptation of the three-dimensional (3-D) spherical harmonic transform (SHT) for describing two-dimensional color shapes is proposed. The use of quaternions to deal with the color part is beneficial in the way the three color components are integrated in a single feature. An adequate mapping from the 3-D SHT to the unit disc allows a fast and accurate computation of the proposed moments. Experiments are conducted to evaluate the performance of the obtained moments in terms of color image reconstruction, robustness to geometric and photometric transformations, content-based color shape retrieval, and computation time. For this purpose, two image databases (COIL-100 and ALOI) are used. Results illustrate the effectiveness of the proposed moments in dealing with the color information.

  2. Color image fusion for concealed weapon detection

    NASA Astrophysics Data System (ADS)

    Toet, Alexander

    2003-09-01

    Recent advances in passive and active imaging sensor technology offer the potential to detect weapons that are concealed underneath a person's clothing or carried along in bags. Although the concealed weapons can sometimes easily be detected, it can be difficult to perceive their context, due to the non-literal nature of these images. Especially for dynamic crowd surveillance purposes, it may be impossible to rapidly assess with certainty which individual in the crowd is the one carrying the observed weapon. Sensor fusion is an enabling technology that may be used to solve this problem. Through fusion, the signal of the sensor that depicts the weapon can be displayed in the context provided by a sensor of a different modality. We propose an image fusion scheme in which non-literal imagery can be fused with standard color images such that the result clearly displays the observed weapons in the context of the original color image. The procedure is such that the relevant contrast details from the non-literal image are transferred to the color image without altering the original color distribution of this image. The result is a natural-looking color image that seamlessly combines all details from both input sources. When an observer performing a dynamic crowd surveillance task detects a weapon in the scene, he will also be able to quickly determine which person in the crowd is actually carrying the observed weapon (e.g., "the man with the red T-shirt and blue jeans"). The method is illustrated by the fusion of thermal 8-12 μm imagery with standard RGB color images.
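
    The fusion principle, transferring contrast details without altering the color distribution, can be sketched minimally as follows; a single box-blur high-pass here stands in for whatever multiresolution decomposition is actually used, so this is an assumption-laden illustration rather than the paper's method.

```python
import numpy as np

def box3(img):
    """3x3 box blur with edge padding (stand-in for a real decomposition)."""
    p = np.pad(img, 1, mode="edge")
    return sum(p[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(3) for j in range(3)) / 9.0

def fuse_details(color, ir):
    """Add only the small-scale (high-pass) detail of the non-literal/IR
    image to the color image, so weapon contrast is transferred while the
    original color distribution is left essentially intact."""
    detail = ir - box3(ir)                      # high-pass of the IR image
    return np.clip(color + detail[..., None], 0.0, 1.0)
```

    A featureless IR input contributes no detail, so the color image passes through unchanged; IR hot spots show up as local luminance contrast in the fused result.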

  3. Functional photoreceptor loss revealed with adaptive optics: an alternate cause of color blindness.

    PubMed

    Carroll, Joseph; Neitz, Maureen; Hofer, Heidi; Neitz, Jay; Williams, David R

    2004-06-01

    There is enormous variation in the X-linked L/M (long/middle wavelength sensitive) gene array underlying "normal" color vision in humans. This variability has been shown to underlie individual variation in color matching behavior. Recently, red-green color blindness has also been shown to be associated with distinctly different genotypes. This has opened the possibility that there may be important phenotypic differences within classically defined groups of color blind individuals. Here, adaptive optics retinal imaging has revealed a mechanism for producing dichromatic color vision in which the expression of a mutant cone photopigment gene leads to the loss of the entire corresponding class of cone photoreceptor cells. Previously, the theory that common forms of inherited color blindness could be caused by the loss of photoreceptor cells had been discounted. We confirm that remarkably, this loss of one-third of the cones does not impair any aspect of vision other than color.

  4. Sparse Representation for Color Image Restoration (PREPRINT)

    DTIC Science & Technology

    2006-10-01

    learning dictionaries for color images and extend the K- SVD -based grayscale image denoising algorithm that appears in [2]. This work puts forward...extend the K- SVD -based gray- scale image denoising algorithm that appears in [2]. This work puts forward ways for handling non- homogeneous noise and...brief description of the K- SVD -based gray-scale image denoising algorithm as proposed in [2]. Section 4 describes the novelties offered in this paper

  5. Adaptive wiener image restoration kernel

    SciTech Connect

    Yuan, Ding

    2007-06-05

    A method and device for restoration of electro-optical image data using an adaptive Wiener filter begins with constructing the imaging system's optical transfer function and the Fourier transforms of the noise and the image. A spatial representation of the imaged object is restored by spatial convolution of the image with a Wiener restoration kernel.
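
    A hedged sketch of frequency-domain Wiener restoration as described above; the fixed noise-to-signal ratio used here replaces whatever adaptive estimate the patented method constructs, and the box-blur OTF is synthetic.

```python
import numpy as np

def wiener_restore(blurred, otf, nsr=0.01):
    """Apply the Wiener kernel H* / (|H|^2 + nsr) in the frequency domain
    and invert the FFT. `nsr` is an assumed noise-to-signal power ratio."""
    G = np.fft.fft2(blurred)
    kernel = np.conj(otf) / (np.abs(otf) ** 2 + nsr)
    return np.real(np.fft.ifft2(kernel * G))

# Demonstration on a synthetic 3x3 box blur (PSF centered via np.roll).
rng = np.random.default_rng(3)
img = rng.uniform(size=(8, 8))
psf = np.zeros((8, 8))
psf[:3, :3] = 1.0 / 9.0
psf = np.roll(psf, (-1, -1), axis=(0, 1))
otf = np.fft.fft2(psf)
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * otf))
restored = wiener_restore(blurred, otf, nsr=1e-9)
```

    With a tiny `nsr` and a noise-free synthetic blur, the restoration recovers the original almost exactly; with real noise, `nsr` trades sharpness against noise amplification.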

  6. Image mosaic with color and brightness correction

    NASA Astrophysics Data System (ADS)

    Zhao, Yili; Xu, Dan; Pan, Zhigeng

    2004-03-01

    Image mosaicking comprises building a large field of view from a sequence of smaller images. It can be performed by registering, projectively warping, resampling, and compositing a series of images. Because many factors can cause color and brightness variations when taking images, misalignment and poor stitching results are possible. Although image mosaics can be adjusted manually using photo editors like Photoshop, this is not only tedious but also requires skill, knowledge, and experience. Automatic adjustment is therefore desirable. By converting images to lαβ space and applying a statistical analysis, color and brightness correction can be done automatically, yielding an improved image mosaic.
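
    The statistical correction step can be sketched as per-channel mean/variance matching; the RGB-to-lαβ conversion is omitted here, and the inputs are assumed to be already in that decorrelated space.

```python
import numpy as np

def match_statistics(src, ref):
    """Shift and scale each channel of `src` to the mean and standard
    deviation of `ref`. Done in a decorrelated space such as lαβ, this
    harmonizes color and brightness between overlapping mosaic frames."""
    out = np.empty(src.shape, dtype=float)
    for c in range(src.shape[-1]):
        s, r = src[..., c].astype(float), ref[..., c].astype(float)
        s_std = s.std() if s.std() > 1e-12 else 1.0   # guard flat channels
        out[..., c] = (s - s.mean()) / s_std * r.std() + r.mean()
    return out
```

    After matching, each channel of the source exactly reproduces the reference statistics, which is why a decorrelated space matters: in RGB the per-channel scaling would introduce cross-channel color shifts.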

  7. Real-Time Adaptive Color Segmentation by Neural Networks

    NASA Technical Reports Server (NTRS)

    Duong, Tuan A.

    2004-01-01

    Artificial neural networks that would utilize the cascade error projection (CEP) algorithm have been proposed as means of autonomous, real-time, adaptive color segmentation of images that change with time. In the original intended application, such a neural network would be used to analyze digitized color video images of terrain on a remote planet as viewed from an uninhabited spacecraft approaching the planet. During descent toward the surface of the planet, information on the segmentation of the images into differently colored areas would be updated adaptively in real time to capture changes in contrast, brightness, and resolution, all in an effort to identify a safe and scientifically productive landing site and provide control feedback to steer the spacecraft toward that site. Potential terrestrial applications include monitoring images of crops to detect insect invasions and monitoring of buildings and other facilities to detect intruders. The CEP algorithm is reliable and is well suited to implementation in very-large-scale integrated (VLSI) circuitry. It was chosen over other neural-network learning algorithms because it is better suited to real-time learning: it provides a self-evolving neural-network structure, requires fewer iterations to converge, and is more tolerant of low resolution (that is, fewer bits) in the quantization of neural-network synaptic weights. Consequently, a CEP neural network learns relatively quickly, and the circuitry needed to implement it is relatively simple. Like other neural networks, a CEP neural network includes an input layer, hidden units, and output units (see figure). As in other neural networks, a CEP network is presented with a succession of input training patterns, giving rise to a set of outputs that are compared with the desired outputs. Also as in other neural networks, the synaptic weights are updated iteratively in an effort to bring the outputs closer to target values. A distinctive feature of the CEP neural

  8. A robust color image fusion for low light level and infrared images

    NASA Astrophysics Data System (ADS)

    Liu, Chao; Zhang, Xiao-hui; Hu, Qing-ping; Chen, Yong-kang

    2016-09-01

    Low light level and infrared color fusion technology has achieved great success in the field of night vision. The technology is designed to make hot targets in the fused image pop out in more intense colors, to render background details with a color appearance close to nature, and to improve target discovery, detection, and identification. Low light level images are very noisy under low illumination, and existing color fusion methods are easily affected by noise in the low light level channel: when that noise is large, the quality of the fused image degrades significantly, and even targets from the infrared image can be submerged by the noise. This paper proposes an adaptive color night vision technique in which noise evaluation parameters of the low light level image are introduced into the fusion process, improving the robustness of the color fusion. The color fusion results remain good in low-light situations, which shows that this method can effectively improve the quality of low light level and infrared fused images under low illumination conditions.
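
    One illustrative way to fold a noise estimate into a fusion weight is sketched below; both the patch-based estimator and the weight mapping are assumptions for illustration, not the paper's parameters.

```python
import numpy as np

def adaptive_weight(lll, patch=8):
    """Estimate noise in the low light level channel as the median of
    local patch standard deviations, then shrink that channel's fusion
    weight as the estimate grows, so a noisy LLL channel contributes
    less to the fused image."""
    h, w = lll.shape
    stds = [lll[i:i + patch, j:j + patch].std()
            for i in range(0, h - patch + 1, patch)
            for j in range(0, w - patch + 1, patch)]
    noise = float(np.median(stds))
    return 1.0 / (1.0 + 10.0 * noise)   # weight in (0, 1]
```

    A clean channel keeps full weight, while heavy sensor noise pushes the weight down, which is the robustness mechanism the abstract describes in outline.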

  9. New approach of color image quantization based on multidimensional directory

    NASA Astrophysics Data System (ADS)

    Chang, Chin-Chen; Su, Yuan-Yuan

    2003-04-01

    Color image quantization is a strategy in which a smaller number of colors is used to represent an image, with the objective of approximating the original true-color image as closely as possible. The technology is widely used in non-true-color displays and in color printers that cannot reproduce a large number of different colors. The main challenge in color image quantization is to represent the image well using fewer colors, so it is very important to choose a suitable palette for an indexed color image. In this paper, we propose a new approach that employs the concept of a Multi-Dimensional Directory (MDD) together with one cycle of the LBG algorithm to create a high-quality indexed color image. Compared with approaches such as VQ, ISQ, and Photoshop v.5, our approach not only produces high-quality images but also shortens the operation time.
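
    The LBG iteration at the core of such palette design can be sketched with a brute-force nearest-codeword search; accelerating exactly this search is what a structure like the Multi-Dimensional Directory is for. The initialization and cluster data below are illustrative.

```python
import numpy as np

def lbg_palette(pixels, n_colors=2, iters=10):
    """Plain LBG/k-means iteration: assign each pixel to its nearest
    codeword, then move each codeword to the centroid of its cluster."""
    idx = np.linspace(0, len(pixels) - 1, n_colors).astype(int)
    palette = pixels[idx].astype(float)            # simple initialization
    labels = np.zeros(len(pixels), dtype=int)
    for _ in range(iters):
        d = np.linalg.norm(pixels[:, None, :] - palette[None, :, :], axis=2)
        labels = d.argmin(axis=1)                  # nearest-codeword search
        for k in range(n_colors):
            if np.any(labels == k):
                palette[k] = pixels[labels == k].mean(axis=0)
    return palette, labels
```

    The nearest-codeword step is the O(pixels × colors) bottleneck that directory-style indexing targets.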

  10. The synthesis and analysis of color images

    NASA Technical Reports Server (NTRS)

    Wandell, B. A.

    1985-01-01

    A method is described for performing the synthesis and analysis of digital color images. The method is based on two principles. First, image data are represented with respect to the separate physical factors, surface reflectance and the spectral power distribution of the ambient light, that give rise to the perceived color of an object. Second, the encoding is made efficient by using a basis expansion for the surface spectral reflectance and spectral power distribution of the ambient light that takes advantage of the high degree of correlation across the visible wavelengths normally found in such functions. Within this framework, the same basic methods can be used to synthesize image data for color display monitors and printed materials, and to analyze image data into estimates of the spectral power distribution and surface spectral reflectances. The method can be applied to a variety of tasks. Examples of applications include the color balancing of color images, and the identification of material surface spectral reflectance when the lighting cannot be completely controlled.

  11. Paper roughness and the color gamut of color laser images

    NASA Astrophysics Data System (ADS)

    Arney, J. S.; Spampata, Michelle; Farnand, Susan; Oswald, Tom; Chauvin, Jim

    2007-01-01

    Common experience indicates the quality of a printed image depends on the choice of the paper used in the printing process. In the current report, we have used a recently developed device called a micro-goniophotometer to examine toner on a variety of substrates fused to varying degrees. The results indicate that the relationship between the printed color gamut and the topography of the substrate paper is a simple one for a color electrophotographic process. If the toner is fused completely to an equilibrium state with the substrate paper, then the toner conforms to the overall topographic features of the substrate. For rougher papers, the steeper topographic features are smoothed out by the toner. The maximum achievable color gamut is limited by the topographic smoothness of the resulting fused surface. Of course, achieving a fully fused surface at a competitive printing rate with a minimum of power consumption is not always feasible. However, the only significant factor found to limit the maximum state of fusing and the ultimate achievable color gamut is the smoothness of the paper.

  12. Color constancy and the natural image

    NASA Technical Reports Server (NTRS)

    Wandell, Brian A.

    1989-01-01

    Color vision is useful only if it is possible to identify an object's color across many viewing contexts. Here, consideration is given to recent results on how to estimate the surface reflectance function of an object from image data, despite (1) uncertainty in the spectral power distribution of the ambient lighting, and (2) uncertainty about what other surfaces will be in the field of view.

  13. Image query based on color harmony

    NASA Astrophysics Data System (ADS)

    Vasile, Alexandru; Bender, Walter R.

    2001-06-01

    The combination of the increased size of digital image databases and the increased frequency with which non-specialists access these databases raises the question of the efficacy of visual search and retrieval tools. We hypothesize that the use of color harmony has the potential for improving image-search efficiency. We describe an image-retrieval algorithm that relies on a color harmony model. This model, built on Munsell hue, value, and chroma contrast, is used to divide the image database into clusters that can be individually searched. To test the efficacy of the algorithm, it is compared to existing algorithms developed by Niblack et al. and Feldman et al. A second study that utilizes the image query system in a retail application is also described.

  14. Calibration Image of Earth by Mars Color Imager

    NASA Technical Reports Server (NTRS)

    2005-01-01

    Three days after the Mars Reconnaissance Orbiter's Aug. 12, 2005, launch, the NASA spacecraft was pointed toward Earth and the Mars Color Imager camera was powered up to acquire a suite of color and ultraviolet images of Earth and the Moon. When it gets to Mars, the Mars Color Imager's main objective will be to obtain daily global color and ultraviolet images of the planet to observe martian meteorology by documenting the occurrence of dust storms, clouds, and ozone. This camera will also observe how the martian surface changes over time, including changes in frost patterns and surface brightness caused by dust storms and dust devils.

    The purpose of acquiring an image of Earth and the Moon just three days after launch was to help the Mars Color Imager science team obtain a measure, in space, of the instrument's sensitivity, as well as to check that no contamination occurred on the camera during launch. Prior to launch, the team determined that, three days out from Earth, the planet would only be about 4.77 pixels across, and the Moon would be less than one pixel in size, as seen from the Mars Color Imager's wide-angle perspective. If the team waited any longer than three days to test the camera's performance in space, Earth would be too small to obtain meaningful results.

    The images were acquired by turning Mars Reconnaissance Orbiter toward Earth, then slewing the spacecraft so that the Earth and Moon would pass before each of the five color and two ultraviolet filters of the Mars Color Imager. The distance to Earth was about 1,170,000 kilometers (about 727,000 miles).

    This image shows a color composite view of Mars Color Imager's image of Earth. As expected, it covers only five pixels. This color view has been enlarged five times. The Sun was illuminating our planet from the left, thus only one quarter of Earth is seen from this perspective. North America was in daylight and facing toward the camera at the time the picture was taken; the data

  15. Utilizing typical color appearance models to represent perceptual brightness and colorfulness for digital images

    NASA Astrophysics Data System (ADS)

    Gong, Rui; Wang, Qing; Shao, Xiaopeng; Zhou, Conghao

    2016-12-01

    This study aims to expand the application of color appearance models to representing the perceptual attributes of digital images, supplying more accurate methods for predicting image brightness and image colorfulness. Two typical models, the CIELAB model and CIECAM02, were used to develop algorithms that predict brightness and colorfulness for various images, in which three methods were designed to handle pixels of different color content. Moreover, extensive visual data were collected from psychophysical experiments on two mobile displays under three lighting conditions, both to analyze the characteristics of visual perception of these two attributes and to test the prediction accuracy of each algorithm. Detailed analyses revealed that image brightness and image colorfulness were predicted well by calculating the CIECAM02 lightness and chroma parameters; suitable methods for dealing with different color pixels were thus determined for image brightness and image colorfulness, respectively. This study supplies an example of extending color appearance models to describe image perception.

  16. Objective color harmony assessment for visible and infrared color fusion images of typical scenes

    NASA Astrophysics Data System (ADS)

    Gao, Shaoshu; Jin, Weiqi; Wang, Lingxue

    2012-11-01

    For visible and infrared color fusion images of three typical scenes, color harmony computational models are proposed to evaluate the color quality of fusion images without reference images. The models are established based on the color-combination harmony model and focus on the influence of the color characteristics of typical scenes and the color region sizes in the fusion image. For the influence of the color characteristics of typical scenes, color harmony adjusting factors for natural scene images (green plants, sea, and sky) are defined by measuring the similarity between image colors and corresponding memory colors, and those for town and building images are presented based on the optimum colorfulness range for human observers. Simultaneously, considering the influence of color region sizes, the weight coefficients are established using the areas of the color regions to optimize the color harmony model. Experimental results show that the proposed harmony models are consistent with human perception and that they are suitable to evaluate the color harmony of color fusion images of typical scenes.

  17. Color Histogram Diffusion for Image Enhancement

    NASA Technical Reports Server (NTRS)

    Kim, Taemin

    2011-01-01

    Various color histogram equalization (CHE) methods have been proposed to extend grayscale histogram equalization (GHE) to color images. In this paper a new method called histogram diffusion that extends the GHE method to arbitrary dimensions is proposed. Ranges in a histogram are specified as overlapping bars of uniform heights and variable widths which are proportional to their frequencies. This diagram is called the vistogram. As an alternative approach to GHE, the squared error of the vistogram from the uniform distribution is minimized. Each bar in the vistogram is approximated by a Gaussian function. Gaussian particles in the vistogram diffuse as a nonlinear autonomous system of ordinary differential equations. CHE results on color images showed that the approach is effective.
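
    For reference, the one-dimensional GHE that histogram diffusion generalizes can be sketched via the cumulative histogram:

```python
import numpy as np

def equalize(gray_u8):
    """Classic grayscale histogram equalization: map each gray level
    through the normalized cumulative histogram so the output values
    spread over the full 0-255 range with a roughly uniform distribution."""
    hist = np.bincount(gray_u8.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(float)
    cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1.0)
    lut = np.round(255.0 * cdf).astype(np.uint8)
    return lut[gray_u8]
```

    A low-contrast input occupying a narrow band of gray levels is stretched to span nearly the full range, which is the effect a multi-dimensional color generalization must reproduce without distorting hue.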

  18. Preparing Colorful Astronomical Images and Illustrations

    NASA Astrophysics Data System (ADS)

    Levay, Z. G.; Frattare, L. M.

    2001-12-01

    We present techniques for using mainstream graphics software, specifically Adobe Photoshop and Illustrator, for producing composite color images and illustrations from astronomical data. These techniques have been used with numerous images from the Hubble Space Telescope to produce printed and web-based news, education and public presentation products as well as illustrations for technical publication. While Photoshop is not intended for quantitative analysis of full dynamic range data (as are IRAF or IDL, for example), we have had much success applying Photoshop's numerous, versatile tools to work with scaled images, masks, text and graphics in multiple semi-transparent layers and channels. These features, along with its user-oriented, visual interface, provide convenient tools to produce high-quality, full-color images and graphics for printed and on-line publication and presentation.

  19. Perceived image quality assessment for color images on mobile displays

    NASA Astrophysics Data System (ADS)

    Jang, Hyesung; Kim, Choon-Woo

    2015-01-01

    With increase in size and resolution of mobile displays and advances in embedded processors for image enhancement, perceived quality of images on mobile displays has been drastically improved. This paper presents a quantitative method to evaluate perceived image quality of color images on mobile displays. Three image quality attributes, colorfulness, contrast and brightness, are chosen to represent perceived image quality. Image quality assessment models are constructed based on results of human visual experiments. In this paper, three phase human visual experiments are designed to achieve credible outcomes while reducing time and resources needed for visual experiments. Values of parameters of image quality assessment models are estimated based on results from human visual experiments. Performances of different image quality assessment models are compared.

  20. Retinal Imaging: Adaptive Optics

    NASA Astrophysics Data System (ADS)

    Goncharov, A. S.; Iroshnikov, N. G.; Larichev, Andrey V.

    This chapter describes several factors influencing the performance of ophthalmic diagnostic systems with adaptive optics compensation of human eye aberration. Particular attention is paid to speckle modulation, temporal behavior of aberrations, and anisoplanatic effects. The implementation of a fundus camera with adaptive optics is considered.

  1. The adaptive value of primate color vision for predator detection.

    PubMed

    Pessoa, Daniel Marques Almeida; Maia, Rafael; de Albuquerque Ajuz, Rafael Cavalcanti; De Moraes, Pedro Zurvaino Palmeira Melo Rosa; Spyrides, Maria Helena Constantino; Pessoa, Valdir Filgueiras

    2014-08-01

    The complex evolution of primate color vision has puzzled biologists for decades. Primates are the only eutherian mammals that evolved an enhanced capacity for discriminating colors in the green-red part of the spectrum (trichromatism). However, while Old World primates present three types of cone pigments and are routinely trichromatic, most New World primates exhibit a color vision polymorphism, characterized by the occurrence of trichromatic and dichromatic females and obligatory dichromatic males. Even though this has stimulated a prolific line of inquiry, the selective forces and relative benefits influencing color vision evolution in primates are still under debate, with current explanations focusing almost exclusively on the advantages in finding food and detecting socio-sexual signals. Here, we evaluate a previously untested possibility, the adaptive value of primate color vision for predator detection. By combining color vision modeling data on New World and Old World primates, as well as behavioral information from human subjects, we demonstrate that primates exhibiting better color discrimination (trichromats) outperform those with poorer color vision (dichromats) at detecting carnivoran predators against the green foliage background. The distribution of color vision found in extant anthropoid primates agrees with our results, and may be explained by the advantages of trichromats and dichromats in detecting predators and insects, respectively.

  2. Color and depth priors in natural images.

    PubMed

    Su, Che-Chun; Cormack, Lawrence K; Bovik, Alan C

    2013-06-01

    Natural scene statistics have played an increasingly important role in both our understanding of the function and evolution of the human vision system, and in the development of modern image processing applications. Because range (egocentric distance) is arguably the most important thing a visual system must compute (from an evolutionary perspective), the joint statistics between image information (color and luminance) and range information are of particular interest. It seems obvious that where there is a depth discontinuity, there must be a higher probability of a brightness or color discontinuity too. This is true, but the more interesting case is in the other direction: because image information is much more easily computed than range information, the key conditional probabilities are those of finding a range discontinuity given an image discontinuity. Here, the intuition is much weaker; the plethora of shadows and textures in the natural environment implies that many image discontinuities must exist without corresponding changes in range. In this paper, we extend previous work in two ways: we use as our starting point a very high quality data set of coregistered color and range values collected specifically for this purpose, and we evaluate the statistics of perceptually relevant chromatic information in addition to luminance, range, and binocular disparity information. The most fundamental finding is that the probabilities of finding range changes do in fact depend in a useful and systematic way on color and luminance changes; larger range changes are associated with larger image changes. Second, we are able to parametrically model the prior marginal and conditional distributions of luminance, color, range, and (computed) binocular disparity. Finally, we provide a proof of principle that this information is useful by showing that our distribution models improve the performance of a Bayesian stereo algorithm on an independent set of input images.
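    The key conditional statistic described above can be sketched in a few lines. The data here is a synthetic stand-in with hypothetical thresholds and edge populations, not the authors' coregistered dataset; it only illustrates estimating P(range edge | image edge) from paired gradient magnitudes.

```python
import random

def cond_prob_range_edge(lum_grad, range_grad, lum_thresh, range_thresh):
    """Estimate P(range edge | luminance edge) from paired gradient magnitudes."""
    hits = [rg for lg, rg in zip(lum_grad, range_grad) if lg > lum_thresh]
    if not hits:
        return 0.0
    return sum(1 for rg in hits if rg > range_thresh) / len(hits)

# Synthetic stand-in for coregistered data: shadows/texture produce luminance
# edges without range edges, while true object boundaries produce both.
random.seed(0)
lum, rng_ = [], []
for _ in range(10000):
    if random.random() < 0.2:           # object boundary: both channels jump
        lum.append(random.uniform(0.5, 1.0))
        rng_.append(random.uniform(0.5, 1.0))
    else:                               # shadow/texture: luminance-only edge
        lum.append(random.uniform(0.0, 1.0))
        rng_.append(random.uniform(0.0, 0.2))

p = cond_prob_range_edge(lum, rng_, lum_thresh=0.4, range_thresh=0.4)
print(round(p, 3))  # well below 1: most image edges are not range edges
```

    With these synthetic proportions the conditional probability comes out well under one half, mirroring the paper's point that most image discontinuities carry no range change.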

  3. Image Transform Based on the Distribution of Representative Colors for Color Deficient

    NASA Astrophysics Data System (ADS)

    Ohata, Fukashi; Kudo, Hiroaki; Matsumoto, Tetsuya; Takeuchi, Yoshinori; Ohnishi, Noboru

    This paper proposes a method to convert digital images containing sets of colors that are difficult to distinguish into images with high visibility. We set up four criteria: the conversion must be processed automatically by a computer, retain continuity in color space, not lower the visibility for people with normal color vision, and not lower the visibility of images that do not originally contain hard-to-distinguish color sets. We conducted a psychological experiment and obtained the result that the visibility of the converted images improved for 60% of 40 images, and we confirmed that the main criterion, continuity in color space, was maintained.

  4. Scannerless loss modulated flash color range imaging

    DOEpatents

    Sandusky, John V.; Pitts, Todd Alan

    2009-02-24

    Scannerless loss modulated flash color range imaging methods and apparatus are disclosed for producing three dimensional (3D) images of a target within a scene. Apparatus and methods according to the present invention comprise a light source providing at least three wavelengths (passbands) of illumination that are each loss modulated, phase delayed and simultaneously directed to illuminate the target. Phase delayed light backscattered from the target is spectrally filtered, demodulated and imaged by a planar detector array. Images of the intensity distributions for the selected wavelengths are obtained under modulated and unmodulated (dc) illumination of the target, and the information contained in the images combined to produce a 3D image of the target.
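    A minimal sketch of how range might be recovered from modulated and unmodulated (dc) intensity frames, assuming a simple homodyne model I_mod = I_dc(1 + m·cos φ)/2 with round-trip phase φ = 4πfR/c. The modulation frequency and modulation depth here are hypothetical illustration values, not taken from the patent, and the patent's actual demodulation chain (spectral filtering, phase delays per passband) is not reproduced.

```python
import math

C = 3.0e8          # speed of light, m/s
F_MOD = 10.0e6     # assumed modulation frequency, Hz (hypothetical)

def phase_from_images(i_mod, i_dc, depth=1.0):
    """Recover modulation phase from the modulated/unmodulated intensity ratio,
    assuming i_mod = i_dc * (1 + depth*cos(phi)) / 2."""
    cos_phi = (2.0 * i_mod / i_dc - 1.0) / depth
    return math.acos(max(-1.0, min(1.0, cos_phi)))

def range_from_phase(phi):
    """Round-trip phase 4*pi*f*R/c -> one-way range R (unambiguous half-cycle)."""
    return phi * C / (4.0 * math.pi * F_MOD)

# Forward-simulate a target at 5 m, then invert.
r_true = 5.0
phi_true = 4.0 * math.pi * F_MOD * r_true / C
i_dc = 0.8
i_mod = i_dc * (1.0 + math.cos(phi_true)) / 2.0
r_est = range_from_phase(phase_from_images(i_mod, i_dc))
print(round(r_est, 3))  # ~5.0
```

    Because acos returns values in [0, π], this toy inversion is unambiguous only out to c/(4f), which at the assumed 10 MHz is 7.5 m.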

  5. Scannerless loss modulated flash color range imaging

    DOEpatents

    Sandusky, John V.; Pitts, Todd Alan

    2008-09-02

    Scannerless loss modulated flash color range imaging methods and apparatus are disclosed for producing three dimensional (3D) images of a target within a scene. Apparatus and methods according to the present invention comprise a light source providing at least three wavelengths (passbands) of illumination that are each loss modulated, phase delayed and simultaneously directed to illuminate the target. Phase delayed light backscattered from the target is spectrally filtered, demodulated and imaged by a planar detector array. Images of the intensity distributions for the selected wavelengths are obtained under modulated and unmodulated (dc) illumination of the target, and the information contained in the images combined to produce a 3D image of the target.

  6. Improvements to Color HRSC+OMEGA Image Mosaics of Mars

    NASA Astrophysics Data System (ADS)

    McGuire, P. C.; Audouard, J.; Dumke, A.; Dunker, T.; Gross, C.; Kneissl, T.; Michael, G.; Ody, A.; Poulet, F.; Schreiner, B.; van Gasselt, S.; Walter, S. H. G.; Wendt, L.; Zuschneid, W.

    2015-10-01

    The High Resolution Stereo Camera (HRSC) on the Mars Express (MEx) orbiter has acquired 3640 images (with 'preliminary level 4' processing as described in [1]) of the Martian surface since arriving in orbit in 2003, covering over 90% of the planet [2]. At resolutions that can reach 10 meters/pixel, these MEx/HRSC images [3-4] are constructed in a pushbroom manner from 9 different CCD line sensors, including a panchromatic nadir-looking (Pan) channel, 4 color channels (R, G, B, IR), and 4 other panchromatic channels for stereo imaging or photometric imaging. In [5], we discussed our first approach towards mosaicking hundreds of the MEx/HRSC RGB or Pan images together. The images were acquired under different atmospheric conditions over the entire mission and under different observation/illumination geometries. Therefore, the main challenge that we have addressed is the color (or gray-scale) matching of these images, which have varying colors (or gray scales) due to the different observing conditions. Using this first approach, our best results for a semiglobal mosaic consist of adding a high-pass-filtered version of the HRSC mosaic to a low-pass-filtered version of the MEx/OMEGA [6] global mosaic. Herein, we will present our latest results using a new, improved, second approach for mosaicking MEx/HRSC images [7], but focusing on the RGB Color processing when using this new second approach. Currently, when the new second approach is applied to Pan images, we match local spatial averages of the Pan images to the local spatial averages of a mosaic made from the images acquired by the Mars Global Surveyor TES bolometer. Since these MGS/TES images have already been atmospherically-corrected, this matching allows us to bootstrap the process of mosaicking the HRSC images without actually atmospherically correcting the HRSC images. In this work, we will adapt this technique of MEx/HRSC Pan images being matched with the MGS/TES mosaic, so that instead, MEx/HRSC RGB images
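    The local-average matching used to bootstrap the mosaicking can be illustrated with a toy per-block gain correction: scale each block of the observed image so its mean matches the corresponding block mean of an atmospherically corrected reference. The block size and arrays are stand-ins, not the actual HRSC/TES processing chain.

```python
def match_local_means(image, reference, block):
    """Scale each block of `image` so its mean matches the corresponding
    block mean of `reference` (a toy stand-in for TES-mosaic matching)."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for by in range(0, h, block):
        for bx in range(0, w, block):
            img_vals = [image[y][x] for y in range(by, min(by + block, h))
                                    for x in range(bx, min(bx + block, w))]
            ref_vals = [reference[y][x] for y in range(by, min(by + block, h))
                                        for x in range(bx, min(bx + block, w))]
            m_img = sum(img_vals) / len(img_vals)
            m_ref = sum(ref_vals) / len(ref_vals)
            gain = m_ref / m_img if m_img else 1.0
            for y in range(by, min(by + block, h)):
                for x in range(bx, min(bx + block, w)):
                    out[y][x] = image[y][x] * gain
    return out

# A brightened (haze-biased) observation vs. a corrected reference mosaic.
ref = [[10, 10, 30, 30], [10, 10, 30, 30], [20, 20, 40, 40], [20, 20, 40, 40]]
obs = [[v * 1.5 for v in row] for row in ref]   # uniform atmospheric bias
corr = match_local_means(obs, ref, block=2)
print(round(corr[0][0], 6), round(corr[3][3], 6))  # 10.0 40.0
```

    Matching only local means, as here, leaves the high-spatial-frequency content of the observation intact, which is the point of the high-pass/low-pass combination described in the abstract.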

  7. Stereo matching image processing by synthesized color and the characteristic area by the synthesized color

    NASA Astrophysics Data System (ADS)

    Akiyama, Akira; Mutoh, Eiichiro; Kumagai, Hideo

    2014-09-01

    We have developed a stereo matching image processing method based on synthesized color and the corresponding color areas, for ranging objects and image recognition. The images from a pair of stereo imagers typically disagree with each other due to size changes, displacement, appearance changes, and deformation of characteristic areas. We construct a synthesized color and the corresponding areas sharing that synthesized color to make the stereo matching distinct, in three steps. The first step makes a binary edge image by differentiating the focused image from each imager and verifying that the differentiated image has a normal density of frequency distribution, in order to find the threshold level for binarization; we used the Daubechies wavelet transform for the differentiation in this study. The second step derives the synthesized color by averaging color brightness between binary edge points, alternating between the horizontal and vertical directions; the averaging is repeated until the fluctuation of the averaged color becomes negligible with respect to the 256 brightness levels. The third step extracts areas of the same synthesized color by collecting pixels of the same synthesized color and grouping them by 4-directional connectivity. The matching areas for stereo matching are determined from the synthesized color areas, and the matching point is the center of gravity of each area. The parallax between a pair of images is then easily derived from these centers of gravity. An experiment on a toy soccer ball showed that stereo matching by the synthesized color technique is simple and effective.
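    The final step, parallax as the difference between centers of gravity of matched color areas, can be sketched directly. The pixel regions below are hypothetical matched areas, with one pixel differing between views to mimic the deformation the abstract anticipates.

```python
def centroid(region):
    """Center of gravity of a set of (x, y) pixel coordinates."""
    n = len(region)
    return (sum(x for x, _ in region) / n, sum(y for _, y in region) / n)

def parallax(region_left, region_right):
    """Horizontal disparity between matched synthesized-color areas,
    taken as the difference of their centroid x-coordinates."""
    (xl, _), (xr, _) = centroid(region_left), centroid(region_right)
    return xl - xr

# The same color patch seen by the two imagers, shifted ~6 px horizontally
# and slightly deformed (one pixel differs between views).
left = [(10, 5), (11, 5), (10, 6), (11, 6)]
right = [(4, 5), (5, 5), (4, 6), (5, 7)]
print(round(parallax(left, right), 2))  # 6.0
```

    Using centroids makes the disparity estimate sub-pixel and tolerant of small shape differences between the two matched areas.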

  8. Lightness modification of color image for protanopia and deuteranopia

    NASA Astrophysics Data System (ADS)

    Tanaka, Go; Suetake, Noriaki; Uchino, Eiji

    2010-01-01

    In multimedia content, colors play important roles in conveying visual information. However, color information cannot always be perceived uniformly by all people. People with a color vision deficiency, such as dichromacy, cannot recognize and distinguish certain color combinations. In this paper, an effective lightness modification method, which enables barrier-free color vision for people with dichromacy, especially protanopia or deuteranopia, while preserving the color information in the original image for people with standard color vision, is proposed. In the proposed method, an optimization problem concerning lightness components is first defined by considering color differences in an input image. Then a perceptible and comprehensible color image for both protanopes and viewers with no color vision deficiency or both deuteranopes and viewers with no color vision deficiency is obtained by solving the optimization problem. Through experiments, the effectiveness of the proposed method is illustrated.

  9. The Artist, the Color Copier, and Digital Imaging.

    ERIC Educational Resources Information Center

    Witte, Mary Stieglitz

    The impact that color-copying technology and digital imaging have had on art, photography, and design is explored. Color copiers have provided new opportunities for direct and spontaneous image making and the potential for new transformations in art. The current generation of digital color copiers permits new directions in imaging, but the…

  10. A Plenoptic Multi-Color Imaging Pyrometer

    NASA Technical Reports Server (NTRS)

    Danehy, Paul M.; Hutchins, William D.; Fahringer, Timothy; Thurow, Brian S.

    2017-01-01

    A three-color pyrometer has been developed based on plenoptic imaging technology. Three bandpass filters placed in front of a camera lens allow separate 2D images to be obtained on a single image sensor at three different and adjustable wavelengths selected by the user. Images were obtained of different black- or grey-bodies including a calibration furnace, a radiation heater, and a luminous sulfur match flame. The images obtained of the calibration furnace and radiation heater were processed to determine 2D temperature distributions. Calibration results in the furnace showed that the instrument can measure temperature with an accuracy and precision of 10 Kelvins between 1100 and 1350 K. Time-resolved 2D temperature measurements of the radiation heater are shown.
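    Multi-color pyrometry of black or grey bodies rests on inverting a ratio of band intensities for temperature. A sketch using two bands under the Wien approximation; the bandpass centers are hypothetical, and the instrument's actual three-band calibration is more involved than this closed-form inversion.

```python
import math

C2 = 1.4388e-2  # second radiation constant, m*K

def wien_intensity(lam, temp):
    """Wien approximation to blackbody spectral radiance (arbitrary scale)."""
    return lam ** -5 * math.exp(-C2 / (lam * temp))

def ratio_temperature(i1, i2, lam1, lam2):
    """Two-color (ratio) pyrometry: invert the Wien-approximation intensity
    ratio at wavelengths lam1 < lam2 for temperature."""
    ln_r = math.log(i1 / i2)
    return C2 * (1.0 / lam2 - 1.0 / lam1) / (ln_r - 5.0 * math.log(lam2 / lam1))

lam1, lam2 = 700e-9, 900e-9     # example bandpass centers (hypothetical)
t_true = 1200.0
i1, i2 = wien_intensity(lam1, t_true), wien_intensity(lam2, t_true)
print(round(ratio_temperature(i1, i2, lam1, lam2), 1))  # 1200.0
```

    Because only the ratio enters, a wavelength-independent emissivity (grey body) cancels out, which is why ratio pyrometry works on the grey-body sources listed in the abstract.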

  11. Novel wavelet coder for color image compression

    NASA Astrophysics Data System (ADS)

    Wang, Houng-Jyh M.; Kuo, C.-C. Jay

    1997-10-01

    A new still image compression algorithm based on the multi-threshold wavelet coding (MTWC) technique is proposed in this work. It is an embedded wavelet coder in the sense that its compression ratio can be controlled depending on the bandwidth requirement of image transmission. At low bit rates, MTWC avoids the blocking artifacts of JPEG, resulting in better reconstructed image quality. A subband decision scheme is developed based on rate-distortion theory to enhance the image fidelity. Moreover, a new quantization sequence order is introduced based on our analysis of error energy reduction in significance and refinement maps. Experimental results are given to demonstrate the superior performance of the proposed algorithm: high reconstructed quality for color and gray-level image compression at low computational complexity. Generally speaking, it gives a better rate-distortion tradeoff and performs faster than most existing state-of-the-art wavelet coders.

  12. Screening Diabetic Retinopathy Through Color Retinal Images

    NASA Astrophysics Data System (ADS)

    Li, Qin; Jin, Xue-Min; Gao, Quan-Xue; You, Jane; Bhattacharya, Prabir

    Diabetic Retinopathy (DR) is a common complication of diabetes that damages the eye's retina. Recognizing DR as early as possible is very important to protect patients' vision. We propose a method for screening DR and distinguishing Proliferative Diabetic Retinopathy (PDR) from Non-Proliferative Diabetic Retinopathy (NPDR) automatically from color retinal images. This method evaluates the severity of DR by analyzing the appearance of bright lesions and retinal vessel patterns. The bright lesions are extracted through morphological reconstruction. After that, the retinal vessels are automatically extracted using multiscale matched filters, and the vessel patterns are analyzed by extracting the vessel net density. The experimental results demonstrate that this is an effective solution for screening DR and distinguishing PDR from NPDR using only color retinal images.

  13. Extremely simple holographic projection of color images

    NASA Astrophysics Data System (ADS)

    Makowski, Michal; Ducin, Izabela; Kakarenko, Karol; Suszek, Jaroslaw; Kolodziejczyk, Andrzej; Sypek, Maciej

    2012-03-01

    A very simple scheme of holographic projection is presented, with experimental results showing good quality image projection without any imaging lens. This technique can be regarded as an alternative to classic projection methods. It is based on the reconstruction of real images from three phase-iterated Fourier holograms. The illumination is performed with three laser beams of primary colors. A divergent wavefront geometry is used to achieve an increased throw angle of the projection, compared to plane wave illumination. Optical fibers are used for light guidance in order to keep the setup as simple as possible and to provide point-like sources of high quality divergent wavefronts at optimized positions against the light modulator. Absorbing spectral filters are implemented to multiplex three holograms on a single phase-only spatial light modulator. Hence color mixing occurs without any time-division methods, which cause rainbow effects and color flicker. The zero diffractive order with divergent illumination is practically invisible, and the speckle field is effectively suppressed with phase optimization and time averaging techniques. The main advantages of the proposed concept are: a very simple and highly miniaturizable configuration; no imaging lens; a single LCoS (Liquid Crystal on Silicon) modulator; strong resistance to imperfections and obstructions of the spatial light modulator such as dead pixels, dust, mud, and fingerprints; and simple calculations based on the Fast Fourier Transform (FFT), easily processed in real time on a GPU (graphics processing unit).

  14. Responding to color: the regulation of complementary chromatic adaptation.

    PubMed

    Kehoe, David M; Gutu, Andrian

    2006-01-01

    The acclimation of photosynthetic organisms to changes in light color is ubiquitous and may be best illustrated by the colorful process of complementary chromatic adaptation (CCA). During CCA, cyanobacterial cells change from brick red to bright blue green, depending on their light color environment. The apparent simplicity of this spectacular, photoreversible event belies the complexity of the cellular response to changes in light color. Recent results have shown that the regulation of CCA is also complex and involves at least three pathways. One is controlled by a phytochrome-class photoreceptor that is responsive to green and red light and a complex two-component signal transduction pathway, whereas another is based on sensing redox state. Studies of CCA are uncovering the strategies used by photosynthetic organisms during light acclimation and the means by which they regulate these responses.

  15. Passive adaptive imaging through turbulence

    NASA Astrophysics Data System (ADS)

    Tofsted, David

    2016-05-01

    Standard methods for improved imaging system performance under degrading optical turbulence conditions typically involve active adaptive techniques or post-capture image processing. Here, passive adaptive methods are considered where active sources are disallowed, a priori. Theoretical analyses of short-exposure turbulence impacts indicate that varying aperture sizes experience different degrees of turbulence impacts. Smaller apertures often outperform larger aperture systems as turbulence strength increases. This suggests a controllable aperture system is advantageous. In addition, sub-aperture sampling of a set of training images permits the system to sense tilts in different sub-aperture regions through image acquisition and image cross-correlation calculations. A four sub-aperture pattern supports corrections involving five realizable operating modes (beyond tip and tilt) for removing aberrations over an annular pattern. Progress to date will be discussed regarding development and field trials of a prototype system.
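    The sub-aperture tilt sensing described above, image acquisition followed by cross-correlation, can be sketched in one dimension: the shift that maximizes the correlation between a reference patch and a tilted patch estimates the local wavefront tilt. The signals and search range here are illustrative, not field data.

```python
def estimate_shift(ref, shifted, max_shift):
    """Estimate the integer shift between two 1-D signals by maximizing
    their cross-correlation (a toy analog of sub-aperture tilt sensing)."""
    best, best_score = 0, float("-inf")
    n = len(ref)
    for s in range(-max_shift, max_shift + 1):
        score = sum(ref[i] * shifted[i + s]
                    for i in range(n) if 0 <= i + s < n)
        if score > best_score:
            best, best_score = s, score
    return best

signal = [0, 1, 4, 9, 4, 1, 0, 0, 0, 0]
tilted = [0, 0, 0, 1, 4, 9, 4, 1, 0, 0]   # same patch, shifted by +2
print(estimate_shift(signal, tilted, max_shift=3))  # 2
```

    In a real system the same idea runs in 2-D on each sub-aperture region of the training images, and the per-region shifts drive the low-order corrections.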

  16. Toward optimal color image quality of television display

    NASA Astrophysics Data System (ADS)

    MacDonald, Lindsay W.; Endrikhovski, Sergej N.; Bech, Soren; Jensen, Kaj

    1999-12-01

    A general framework and first experimental results are presented for the `OPTimal IMage Appearance' (OPTIMA) project, which aims to develop a computational model for achieving optimal color appearance of natural images on adaptive CRT television displays. To achieve this goal we considered the perceptual constraints determining quality of displayed images and how they could be quantified. The practical value of the notion of optimal image appearance was translated from the high level of the perceptual constraints into a method for setting the display's parameters at the physical level. In general, the whole framework of quality determination includes: (1) evaluation of perceived quality; (2) evaluation of the individual perceptual attributes; and (3) correlation between the physical measurements, psychometric parameters and the subjective responses. We performed a series of psychophysical experiments, with observers viewing a series of color images on a high-end consumer television display, to investigate the relationships between Overall Image Quality and four quality-related attributes: Brightness Rendering, Chromatic Rendering, Visibility of Details and Overall Naturalness. The results of the experiments presented in this paper suggest that these attributes are highly inter-correlated.

  17. Diamond color measurement instrument based on image processing

    NASA Astrophysics Data System (ADS)

    Takahashi, H.; Mandal, S.; Toosi, M.; Zeng, J.; Wang, W.

    2016-09-01

    Gemological Institute of America (GIA) has developed a diamond color measurement instrument that can provide accurate and reproducible color measurement results. The instrument uses uniform illumination by a daylight-approximating light source; observations from a high-resolution color-camera with nearly zero-distortion bi-telecentric lens, and image processing to calculate color parameters of diamonds. Experiments show the instrument can provide reproducible color measurement results and also identify subtle color differences in diamonds with high sensitivity. The experimental setup of the prototype instrument and the image processing method for calculating diamond color parameters are presented in this report.

  18. Structure preserving color deconvolution for immunohistochemistry images

    NASA Astrophysics Data System (ADS)

    Chen, Ting; Srinivas, Chukka

    2015-03-01

    Immunohistochemistry (IHC) staining is an important technique for the detection of one or more biomarkers within a single tissue section. In digital pathology applications, the correct unmixing of the tissue image into its individual constituent dyes for each biomarker is a prerequisite for accurate detection and identification of the underlying cellular structures. A popular technique thus far is the color deconvolution method proposed by Ruifrok et al. However, Ruifrok's method independently estimates the individual dye contributions at each pixel, which potentially leads to "holes and cracks" in the cells in the unmixed images. This is clearly inadequate, since strong spatial dependencies exist in the tissue images, which contain rich cellular structures. In this paper, we formulate the unmixing algorithm in a least-squares framework over image patches, and propose a novel color deconvolution method which explicitly incorporates the spatial smoothness and structure continuity constraints through a neighborhood graph regularizer. An analytical closed-form solution to the cost function is derived for fast implementation. The algorithm is evaluated on a clinical data set containing a number of 3,3'-Diaminobenzidine (DAB) and hematoxylin (HTX) stained IHC slides and demonstrates better unmixing results than the existing strategy.
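    For reference, the per-pixel baseline this paper improves on can be sketched as a small linear solve: convert RGB to optical density and invert the stain matrix, in the style of Ruifrok and Johnston. The stain OD vectors below are hypothetical illustration values, not calibrated stain vectors.

```python
import math

def det3(a):
    """Determinant of a 3x3 matrix (cofactor expansion)."""
    return (a[0][0] * (a[1][1] * a[2][2] - a[1][2] * a[2][1])
            - a[0][1] * (a[1][0] * a[2][2] - a[1][2] * a[2][0])
            + a[0][2] * (a[1][0] * a[2][1] - a[1][1] * a[2][0]))

def unmix_pixel(rgb, stains, i0=255.0):
    """Per-pixel color deconvolution: convert RGB to optical density and
    solve od = M c for stain amounts c, where the columns of M are the
    stain OD vectors (two stains plus a residual channel)."""
    od = [-math.log10(max(v, 1.0) / i0) for v in rgb]
    m = [[stains[j][i] for j in range(3)] for i in range(3)]
    d = det3(m)
    c = []
    for j in range(3):
        mj = [row[:] for row in m]
        for i in range(3):
            mj[i][j] = od[i]
        c.append(det3(mj) / d)   # Cramer's rule
    return c

# Hypothetical unit-norm stain OD vectors (HTX-like, DAB-like, residual).
htx, dab, res = (0.65, 0.70, 0.29), (0.27, 0.57, 0.78), (0.71, -0.42, 0.56)
amounts = (0.8, 0.3, 0.0)
# Forward-synthesize a pixel with known stain amounts, then unmix it.
od_true = [sum(a * s[i] for a, s in zip(amounts, (htx, dab, res)))
           for i in range(3)]
rgb = [255.0 * 10 ** (-v) for v in od_true]
rec = unmix_pixel(rgb, (htx, dab, res))
print(round(rec[0], 3), round(rec[1], 3))  # 0.8 0.3
```

    Because each pixel is solved independently, noise produces the "holes and cracks" the paper describes; the proposed graph regularizer couples neighboring pixels to suppress them.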

  19. A dendritic lattice neural network for color image segmentation

    NASA Astrophysics Data System (ADS)

    Urcid, Gonzalo; Lara-Rodríguez, Luis David; López-Meléndez, Elizabeth

    2015-09-01

    A two-layer dendritic lattice neural network is proposed to segment color images in the Red-Green-Blue (RGB) color space. The two-layer neural network is a fully interconnected feed-forward net consisting of an input layer that receives color pixel values, an intermediate layer that computes pixel interdistances, and an output layer used to classify colors by hetero-association. The two-layer net is first initialized with a small finite subset of the colors present in the input image. These colors are obtained by means of an automatic clustering procedure such as k-means or fuzzy c-means. In the second stage, the color image is scanned on a pixel-by-pixel basis, where each picture element is treated as a vector and fed into the network. For illustration purposes we use public domain color images to show the performance of our proposed image segmentation technique.
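    The k-means seeding step can be sketched directly on RGB triples. The deterministic spread initialization and the toy pixel populations are assumptions for illustration; the paper may use standard random initialization or fuzzy c-means instead.

```python
import random

def kmeans_colors(pixels, k, iters=20):
    """Plain k-means in RGB space, of the kind used to seed the network's
    initial color subset (deterministic spread initialization assumed)."""
    centers = pixels[::max(1, len(pixels) // k)][:k]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in pixels:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centers[c])))
            groups[j].append(p)
        centers = [tuple(sum(ch) / len(g) for ch in zip(*g)) if g else centers[j]
                   for j, g in enumerate(groups)]
    return centers

# Two well-separated pixel populations: reddish and bluish.
rnd = random.Random(0)
pixels = ([(200 + rnd.randint(-10, 10), 30, 30) for _ in range(50)]
          + [(30, 30, 200 + rnd.randint(-10, 10)) for _ in range(50)])
centers = sorted(kmeans_colors(pixels, 2))
print([tuple(round(v) for v in c) for c in centers])
```

    The resulting cluster centers (one reddish, one bluish) would form the small color subset with which the lattice network is initialized before the pixel-by-pixel classification pass.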

  20. Color enhancement in multispectral image of human skin

    NASA Astrophysics Data System (ADS)

    Mitsui, Masanori; Murakami, Yuri; Obi, Takashi; Yamaguchi, Masahiro; Ohyama, Nagaaki

    2003-07-01

    Multispectral imaging is receiving attention in medical color imaging, as high-fidelity color information can be acquired by multispectral image capture. Because color enhancement of medical color images is effective for distinguishing lesions from normal tissue, we apply a new multispectral color enhancement technique that enhances the features contained in a certain spectral band without changing the average color distribution of the original image. In this method, to keep the average color distribution, the KL transform is applied to the spectral data, and only high-order KL coefficients are amplified in the enhancement. Multispectral images of the skin of a bruised human arm were captured by a 16-band multispectral camera, and the proposed color enhancement was applied. The resultant images were compared with the color images reproduced assuming the CIE D65 illuminant (obtained by a natural color reproduction technique). The proposed technique successfully visualizes unclear bruised lesions that are almost invisible in natural color images, and could provide a support tool for diagnosis in dermatology, visual examination in internal medicine, nursing care for preventing bedsores, and so on.
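    The mean-preserving idea (amplify only the high-order KL, i.e. principal-component, coefficients of the centered spectral data) can be sketched with two bands, where the KL basis has a closed form. The samples and gain are illustrative, not 16-band skin data.

```python
import math

def kl_enhance(samples, gain):
    """Mean-preserving enhancement of 2-band spectral samples: rotate into
    the KL (principal) basis, amplify the minor component by `gain`, rotate
    back. The average color is untouched because only centered data is scaled."""
    n = len(samples)
    mx = sum(s[0] for s in samples) / n
    my = sum(s[1] for s in samples) / n
    cxx = sum((s[0] - mx) ** 2 for s in samples) / n
    cyy = sum((s[1] - my) ** 2 for s in samples) / n
    cxy = sum((s[0] - mx) * (s[1] - my) for s in samples) / n
    theta = 0.5 * math.atan2(2 * cxy, cxx - cyy)   # principal-axis angle
    ct, st = math.cos(theta), math.sin(theta)
    out = []
    for x, y in samples:
        u = ct * (x - mx) + st * (y - my)    # major (first KL) component: kept
        v = -st * (x - mx) + ct * (y - my)   # minor (high-order) one: amplified
        out.append((mx + ct * u - st * v * gain, my + st * u + ct * v * gain))
    return out

bands = [(10, 12), (12, 14), (14, 16), (16, 18), (13, 14.5)]
enh = kl_enhance(bands, gain=3.0)
mean_before = (sum(x for x, _ in bands) / 5, sum(y for _, y in bands) / 5)
mean_after = (sum(x for x, _ in enh) / 5, sum(y for _, y in enh) / 5)
print([round(v, 6) for v in mean_before],
      [round(v, 6) for v in mean_after])  # identical means
```

    The off-axis sample (13, 14.5) moves visibly further from the principal axis after enhancement, while the band means are unchanged, which is exactly the behavior claimed for the bruise visualization.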

  1. Adaptive and accurate color edge extraction method for one-shot shape acquisition

    NASA Astrophysics Data System (ADS)

    Yin, Wei; Cheng, Xiaosheng; Cui, Haihua; Li, Dawei; Zhou, Lei

    2016-09-01

    This paper presents an approach to extract accurate color edge information using encoded patterns in hue, saturation, and intensity (HSI) color space. This method is applied to one-shot shape acquisition. Theoretical analysis shows that the hue transition between primary and secondary colors in a color edge is based on light interference and diffraction. We set up a color transition model to illustrate the hue transition on an edge and then define the segmenting position of two stripes. By setting up an adaptive HSI color space, the colors of the stripes and subpixel edges are obtained precisely without a dark laboratory environment, in a low-cost processing algorithm. Since this method does not have any constraints for colors of neighboring stripes, the encoding is an easy procedure. The experimental results show that the edges of dense modulation patterns can be obtained under a complicated environment illumination, and the precision can ensure that the three-dimensional shape of the object is obtained reliably with only one image.
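    The HSI space in which the stripe colors are decoded can be sketched with the textbook acos hue formulation; this is the standard conversion, not necessarily the authors' adaptive variant of the space.

```python
import math

def rgb_to_hsi(r, g, b):
    """Convert normalized RGB (0..1) to HSI using the standard acos hue
    formula (the color space in which the stripe edges are decoded)."""
    i = (r + g + b) / 3.0
    s = 0.0 if i == 0 else 1.0 - min(r, g, b) / i
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    if den == 0:
        h = 0.0                      # achromatic: hue undefined, report 0
    else:
        h = math.degrees(math.acos(max(-1.0, min(1.0, num / den))))
        if b > g:
            h = 360.0 - h
    return h, s, i

print(round(rgb_to_hsi(1.0, 0.0, 0.0)[0], 1))   # pure red   -> hue 0.0
print(round(rgb_to_hsi(0.0, 1.0, 0.0)[0], 1))   # pure green -> hue 120.0
```

    Hue's insensitivity to intensity is what lets the stripe colors survive uncontrolled environment illumination, as the abstract emphasizes.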

  2. Mosaicking of NEAR MSI Color Image Sequences

    NASA Astrophysics Data System (ADS)

    Digilio, J. G.; Robinson, M. S.

    2004-05-01

    Of the over 160,000 frames of 433 Eros captured by the NEAR-Shoemaker spacecraft, 21,936 frames are components of 226 multi-spectral image sequences. As part of the ongoing NEAR Data Analysis Program, we are mosaicking (and delivering via a web interface) all color sequences in two versions: I/F and photometrically normalized I/F (30° incidence, 0° emission). Multi-spectral sets were acquired with varying bandpasses depending on mission constraints, and all sets include 550-nm, 760-nm, and 950-nm (32% of the sequences are all wavelengths except 700-nm clear filter). Resolutions range from 20 m/pixel down to 3.5 m/pixel. To support color analysis and interpretation we are co-registering the highest resolution black and white images to match each of the color mosaics. Due to Eros's highly irregular shape, the scale of a pixel can vary by almost a factor of 2 within a single frame acquired in the 35-km orbit. Thus, map-projecting requires a pixel-by-pixel correction for local topography [1]. Scattered light problems with the NEAR Multi-Spectral Imager (MSI) required the acquisition of ride along zero exposure calibration frames. Without correction, scattered light artifacts within the MSI were larger than the subtle color differences found on Eros [see details in 2]. Successful correction requires that the same region of the surface (within a few pixels) be in the field-of-view of the zero-exposure frame as when the normal frame was acquired. Due to engineering constraints the timing of frame acquisition was not always optimal for the scattered light correction. During the co-registration process we are tracking apparent ground motion during a sequence to estimate the efficacy of the correction, and thus integrity of the color information. Currently several web-based search and browse tools allow interested users to locate individual MSI frames from any spot on the asteroid using various search criteria (cps.earth.northwestern.edu). Final color and BW map products

  3. Restoration Of Faded Color Photographs By Digital Image Processing

    NASA Astrophysics Data System (ADS)

    Gschwind, Rudolf

    1989-10-01

    Color photographs possess poor stability towards light, chemicals, heat, and humidity. As a consequence, the colors of photographs deteriorate with time. Because of the complexity of the processes that cause the dyes to fade, it is impossible to restore the images by chemical means. We therefore attempt to restore faded color films by means of digital image processing.

  4. Dissociation of equilibrium points for color-discrimination and color-appearance mechanisms in incomplete chromatic adaptation.

    PubMed

    Sato, Tomoharu; Nagai, Takehiro; Kuriki, Ichiro; Nakauchi, Shigeki

    2016-03-01

    We compared the color-discrimination thresholds and supra-threshold color differences (STCDs) obtained in complete chromatic adaptation (gray) and incomplete chromatic adaptation (red). The color-difference profiles were examined by evaluating the perceptual distances between various color pairs using maximum likelihood difference scaling. In the gray condition, the chromaticities corresponding with the smallest threshold and the largest color difference were almost identical. In contrast, in the red condition, they were dissociated. The peaks of the sensitivity functions derived from the color-discrimination thresholds and STCDs along the L-M axis were systematically different between the adaptation conditions. These results suggest that the color signals involved in color discrimination and STCD tasks are controlled by separate mechanisms with different characteristic properties.

  5. Segmentation and Classification of Burn Color Images

    DTIC Science & Technology

    2007-11-02

    SEGMENTATION AND CLASSIFICATION OF BURN COLOR IMAGES. Begoña Acha, Carmen Serrano (Área de Teoría de la Señal y Comunicaciones), Laura Roa (Grupo de Ingeniería Biomédica), Escuela Superior de Ingenieros, Universidad de Sevilla, Spain. e-mail: bacha@viento.us.es, cserrano@viento.us.es

  6. Imaging an Adapted Dentoalveolar Complex

    PubMed Central

    Herber, Ralf-Peter; Fong, Justine; Lucas, Seth A.; Ho, Sunita P.

    2012-01-01

    Adaptation of a rat dentoalveolar complex was illustrated using various imaging modalities. Micro-X-ray computed tomography for 3D modeling, combined with complementary techniques including image processing, scanning electron microscopy, fluorochrome labeling, conventional histology (H&E, TRAP), and immunohistochemistry (RANKL, OPN), elucidated the dynamic nature of bone, the periodontal ligament space, and cementum in the rat periodontium. Tomography and electron microscopy illustrated structural adaptation of calcified tissues at higher resolution. Ongoing biomineralization was analyzed using fluorochrome labeling and by evaluating attenuation profiles in virtual sections from 3D tomographies. Osteoclastic distribution as a function of anatomical location was illustrated by combining histology, immunohistochemistry, and tomography. While tomography and SEM revealed past resorption-related events, future adaptive changes were deduced by identifying matrix biomolecules using immunohistochemistry. Thus, a dynamic picture of the dentoalveolar complex in rats was illustrated. PMID:22567314

  7. Evaluation of color-embedded wavelet image compression techniques

    NASA Astrophysics Data System (ADS)

    Saenz, Martha; Salama, Paul; Shen, Ke; Delp, Edward J., III

    1998-12-01

    Color embedded image compression is investigated by means of a set of core experiments that seek to evaluate the advantages of various color transformations, spatial orientation trees and the use of monochrome embedded coding schemes such as EZW and SPIHT. In order to take advantage of the interdependencies of the color components for a given color space, two new spatial orientation trees that relate frequency bands and color components are investigated.

  8. Multiresolution ARMA modeling of facial color images

    NASA Astrophysics Data System (ADS)

    Celenk, Mehmet; Al-Jarrah, Inad

    2002-05-01

    Human face perception is key to identity confirmation in security systems, video teleconferencing, picture telephony, and web navigation. Modeling of human faces and facial expressions for different persons can be dealt with by building a point distribution model (PDM) based on spatial (shape) information or a gray-level model (GLM) based on spectral (intensity) information. To avoid the shortcomings of the local modeling of PDM and GLM, we propose a new approach for recognizing human faces and discriminating the expressions associated with them in color images. It is based on Laplacian of Gaussian (LoG) edge detection, the KL transform, and auto-regressive moving average (ARMA) filtering. First, the KL transform is applied to the R, G, and B dimensions, and a facial image is described by its principal component. A LoG edge detector is then used to produce a line-drawing schematic of the face. The resultant face silhouette is divided into 5 x 5 non-overlapping blocks, each of which is represented by the auto-regressive (AR) parameter vector a. The ensemble average of a over the whole image is taken as the feature vector for the description of a facial pattern, and each face class is represented by such an ensemble average vector. The efficacy of the ARMA model is evaluated by the non-metric similarity measure S = a·b/(|a||b|) for two facial images whose feature vectors a and b are the ensemble averages of their ARMA parameters. Our measurements show that ARMA modeling is effective for discriminating facial features in color images, and has the potential to distinguish the corresponding facial expressions.
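
    The similarity measure S above is the cosine similarity between ARMA feature vectors. A minimal sketch (the parameter vectors here are illustrative placeholders, not data from the paper):

```python
import numpy as np

def cosine_similarity(a, b):
    """Non-metric similarity S = a.b / (|a| |b|) between feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Illustrative ensemble-averaged AR parameter vectors for two facial images
a = np.array([0.9, 0.2, -0.1])
b = np.array([0.8, 0.25, -0.05])
S = cosine_similarity(a, b)  # near 1.0 for similar facial patterns
```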

  9. Simple color conversion method to perceptible images for color vision deficiencies

    NASA Astrophysics Data System (ADS)

    Meguro, Mitsuhiko; Takahashi, Chihiro; Koga, Toshio

    2006-02-01

    In this paper, we propose a color conversion method for realizing barrier-free systems for color-defective vision. Human beings perceive colors through the ratio of response values of the three kinds of cones on the retina, which have different sensitivities to the wavelength of light. Dichromats, who lack one of the three cones, tend to have difficulty discriminating the colors of certain combinations. The proposed technique creates new images by converting colors to produce perceptible color combinations. The proposed method has three processing stages. First, we perform image segmentation in the L*a*b* color space. Second, we judge whether the mean colors of the divided regions of the segmented image are likely to be confused, using confusion color loci and color vision models of persons with color-defective vision. Finally, the proposed technique realizes perceptible images for dichromats by changing the confusable colors in several regions of the images. We show the effectiveness of the method with some application results.
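
    The L*a*b* coordinates used for the segmentation stage can be computed from sRGB via the standard sRGB → XYZ (D65) → L*a*b* conversion. The paper does not specify its exact conversion, so the following is a sketch of the common CIE formulation:

```python
import numpy as np

def srgb_to_lab(rgb):
    """Convert an sRGB triple in [0, 1] to CIE L*a*b* (D65 white point)."""
    rgb = np.asarray(rgb, dtype=float)
    # inverse sRGB gamma (linearization)
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    # linear sRGB -> XYZ, D65
    M = np.array([[0.4124564, 0.3575761, 0.1804375],
                  [0.2126729, 0.7151522, 0.0721750],
                  [0.0193339, 0.1191920, 0.9503041]])
    xyz = M @ lin
    # normalize by the D65 reference white
    xyz /= np.array([0.95047, 1.0, 1.08883])
    f = np.where(xyz > (6 / 29) ** 3, np.cbrt(xyz), xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[1] - 16
    a = 500 * (f[0] - f[1])
    b = 200 * (f[1] - f[2])
    return L, a, b
```

White maps to L* ≈ 100 with a* ≈ b* ≈ 0, and black to L* = 0, as expected.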

  10. Multimodal digital color imaging system for facial skin lesion analysis

    NASA Astrophysics Data System (ADS)

    Bae, Youngwoo; Lee, Youn-Heum; Jung, Byungjo

    2008-02-01

    In dermatology, various digital imaging modalities have been used as important tools to quantitatively evaluate the treatment of skin lesions. Cross-polarization color imaging has been used to evaluate skin chromophore (melanin and hemoglobin) information, and parallel-polarization imaging to evaluate skin texture information. In addition, UV-A induced fluorescent imaging has been widely used to evaluate various skin conditions such as sebum, keratosis, sun damage, and vitiligo. In order to maximize the evaluation efficacy for various skin lesions, it is necessary to integrate these imaging modalities into one imaging system. In this study, we propose a multimodal digital color imaging system which provides four different digital color images: a standard color image, parallel- and cross-polarization color images, and a UV-A induced fluorescent color image. Herein, we describe the imaging system and present examples of image analysis. By analyzing the color information and morphological features of facial skin lesions, we are able to evaluate various skin lesions comparably and simultaneously. In conclusion, the multimodal color imaging system can be utilized as an important assistive tool in dermatology.

  11. A color image processing pipeline for digital microscope

    NASA Astrophysics Data System (ADS)

    Liu, Yan; Liu, Peng; Zhuang, Zhefeng; Chen, Enguo; Yu, Feihong

    2012-10-01

    Digital microscopes have found wide application in biology, medicine and other fields. A digital microscope differs from a traditional optical microscope in that there is no need to observe the sample through an eyepiece directly, because the optical image is projected directly onto the CCD/CMOS camera. However, because of the imaging differences between the human eye and the sensor, a color image processing pipeline is needed for the digital microscope electronic eyepiece to obtain a fine image. The color image pipeline for a digital microscope, comprising the procedures that convert the RAW image data captured by the sensor into a real color image, is of great concern to the quality of the microscopic image. This pipeline differs from those of digital still cameras and video cameras because of the specific requirements of microscopic images, which should have a high dynamic range, keep the same color as the objects observed, and support a variety of image post-processing. In this paper, a new color image processing pipeline is proposed to satisfy the requirements of digital microscope images. The algorithm of each step in the pipeline is designed and optimized with the purpose of obtaining high quality images and accommodating diverse user preferences. With the proposed pipeline implemented on the digital microscope platform, the output color images meet the various image analysis requirements in the medicine and biology fields very well. The major steps of the proposed color imaging pipeline are: black level adjustment, defective pixel removal, noise reduction, linearization, white balance, RGB color correction, tone scale correction and gamma correction.
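
    A toy illustration of three of the listed stages (black level adjustment, white balance, gamma correction) on a synthetic RAW patch. The bit depth, gains and gamma below are assumed example values, not those of the paper:

```python
import numpy as np

def simple_pipeline(raw, black_level=64, wb_gains=(1.9, 1.0, 1.6), gamma=2.2):
    """Toy color pipeline: black level -> white balance -> gamma.
    raw: HxWx3 array of sensor values in [0, 1023] (10-bit example)."""
    img = np.clip(raw.astype(float) - black_level, 0, None)  # black level adjustment
    img /= (1023 - black_level)                              # scale to [0, 1]
    img *= np.array(wb_gains)                                # per-channel white balance
    img = np.clip(img, 0.0, 1.0)
    return img ** (1.0 / gamma)                              # display gamma correction

raw = np.full((2, 2, 3), 512.0)      # flat mid-gray test patch
out = simple_pipeline(raw)
```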

  12. Color reproductivity improvement with additional virtual color filters for WRGB image sensor

    NASA Astrophysics Data System (ADS)

    Kawada, Shun; Kuroda, Rihito; Sugawa, Shigetoshi

    2013-02-01

    We have developed a high-accuracy color reproduction method based on an estimated spectral reflectance of objects, using additional virtual color filters for a wide dynamic range WRGB color filter CMOS image sensor. The four virtual color filters are created by multiplying the spectral sensitivity of the White pixel by Gaussian functions with different center wavelengths and standard deviations, and the virtual sensor outputs of those filters are estimated from the four real output signals of the WRGB image sensor. The accuracy of color reproduction was evaluated with a Macbeth Color Checker (MCC), and the average color difference ΔEab over the 24 colors was 1.88 with our approach.
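
    The construction of the virtual filters can be sketched as follows, assuming for illustration a flat White-pixel sensitivity and made-up center wavelengths and standard deviations (the paper's actual values are not given in the abstract):

```python
import numpy as np

wavelengths = np.arange(400, 701, 10)           # nm, visible range
white_sens = np.ones_like(wavelengths, float)   # hypothetical flat White-pixel sensitivity

def virtual_filter(center_nm, sigma_nm):
    """Virtual color filter: White sensitivity x Gaussian(center, sigma)."""
    g = np.exp(-0.5 * ((wavelengths - center_nm) / sigma_nm) ** 2)
    return white_sens * g

# Four illustrative virtual filters with different centers and widths
filters = [virtual_filter(c, s) for c, s in
           [(450, 20), (510, 25), (570, 25), (630, 30)]]
```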

  13. Mississippi Delta, Radar Image with Colored Height

    NASA Technical Reports Server (NTRS)

    2005-01-01


    About the animation: This simulated view of the potential effects of storm surge flooding on Lake Pontchartrain and the New Orleans area was generated with data from the Shuttle Radar Topography Mission. Although it is protected by levees and sea walls against storm surges of 18 to 20 feet, much of the city is below sea level, and flooding due to storm surges caused by major hurricanes is a concern. The animation shows regions that, if unprotected, would be inundated with water. The animation depicts flooding in one-meter increments.

    About the image: The geography of the New Orleans and Mississippi delta region is well shown in this radar image from the Shuttle Radar Topography Mission. In this image, bright areas show regions of high radar reflectivity, such as from urban areas, and elevations have been coded in color using height data also from the mission. Dark green colors indicate low elevations, rising through yellow and tan, to white at the highest elevations.

    New Orleans is situated along the southern shore of Lake Pontchartrain, the large, roughly circular lake near the center of the image. The line spanning the lake is the Lake Pontchartrain Causeway, the world's longest over water highway bridge. Major portions of the city of New Orleans are below sea level, and although it is protected by levees and sea walls, flooding during storm surges associated with major hurricanes is a significant concern.

    Data used in this image were acquired by the Shuttle Radar Topography Mission aboard the Space Shuttle Endeavour, launched on Feb. 11, 2000. The mission used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar that flew twice on the Space Shuttle Endeavour in 1994. The Shuttle Radar Topography Mission was designed to collect 3-D measurements of the Earth's surface. To collect the 3-D data

  14. New Windows based Color Morphological Operators for Biomedical Image Processing

    NASA Astrophysics Data System (ADS)

    Pastore, Juan; Bouchet, Agustina; Brun, Marcel; Ballarin, Virginia

    2016-04-01

    Morphological image processing is well known as an efficient methodology for image processing and computer vision. With the wide use of color in many areas, interest in color perception and processing has been growing rapidly. Many models have been proposed to extend morphological operators to the field of color images, dealing with new problems not present in the binary and gray-level contexts. These solutions usually deal with the lattice structure of the color space, or provide it with total orders, in order to define basic operators with the required properties. In this work we propose a new locally defined ordering, in the context of window-based morphological operators, for the definition of erosion-like and dilation-like operators, which provides the desired properties expected from color morphology while avoiding some of the drawbacks of prior approaches. Experimental results show that the proposed color operators can be used efficiently for color image processing.
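
    The abstract does not define the proposed local ordering, but a window-based color erosion under the classical lexicographic (R, G, B) total order, one of the prior approaches the authors contrast with, can be sketched as:

```python
import numpy as np

def color_erosion_lex(img, k=3):
    """Window-based color erosion: within each k x k window, pick the pixel
    that is minimal under lexicographic (R, G, B) ordering."""
    h, w, _ = img.shape
    r = k // 2
    pad = np.pad(img, ((r, r), (r, r), (0, 0)), mode='edge')
    out = np.empty_like(img)
    for y in range(h):
        for x in range(w):
            win = pad[y:y + k, x:x + k].reshape(-1, 3)
            # np.lexsort sorts by the LAST key first, so R is the primary key
            idx = np.lexsort((win[:, 2], win[:, 1], win[:, 0]))[0]
            out[y, x] = win[idx]
    return out
```

As with gray-level erosion, a single dark pixel spreads to its whole neighborhood.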

  15. Color Composite Image of the Supernova Remnant

    NASA Technical Reports Server (NTRS)

    2000-01-01

    This image is a color composite of the supernova remnant E0102-72: x-ray (blue), optical (green), and radio (red). E0102-72 is the remnant of a star that exploded in a nearby galaxy known as the Small Magellanic Cloud. The star exploded outward at speeds in excess of 20 million kilometers per hour (12 million mph) and collided with surrounding gas. This collision produced two shock waves, or cosmic sonic booms, one traveling outward, and the other rebounding back into the material ejected by the explosion. The radio image, shown in red, was made using the Australia Telescope Compact Array. The radio waves are due to extremely high-energy electrons spiraling around magnetic field lines in the gas and trace the outward moving shock wave. The Chandra X-ray Observatory image, shown in blue, shows gas that has been heated to millions of degrees by the rebounding, or reverse shock wave. The x-ray data show that this gas is rich in oxygen and neon. These elements were created by nuclear reactions inside the star and hurled into space by the supernova. The Hubble Space Telescope optical image, shown in green, shows dense clumps of oxygen gas that have 'cooled' to about 30,000 degrees. Photo Credit: X-ray (NASA/CXC/SAO); optical (NASA/HST); radio (ATCA)

  16. Radar Image, Color as Height, Salalah, Oman

    NASA Technical Reports Server (NTRS)

    2000-01-01

    This radar image includes the city of Salalah, the second largest city in Oman. It illustrates how topography determines local climate and, in turn, where people live. This area on the southern coast of the Arabian Peninsula is characterized by a narrow coastal plain (bottom) facing southward into the Arabian Sea, backed by the steep escarpment of the Qara Mountains. The backslope of the Qara Mountains slopes gently into the vast desert of the Empty Quarter (at top). This area is subject to strong monsoonal storms from the Arabian Sea during the summer, when the mountains are enveloped in a sort of perpetual fog. The moisture from the monsoon enables agriculture on the Salalah plain, and also provides moisture for Frankincense trees growing on the desert (north) side of the mountains. In ancient times, incense derived from the sap of the Frankincense tree was the basis for an extremely lucrative trade. Radar and topographic data are used by historians and archaeologists to discover ancient trade routes and other significant ruins.

    This image combines two types of data from the Shuttle Radar Topography Mission. The image brightness corresponds to the strength of the radar signal reflected from the ground, while colors show the elevation as measured by SRTM. Colors range from green at the lowest elevations to brown at the highest elevations. This image contains about 1070 meters (3500 feet) of total relief. White speckles on the face of some of the mountains are holes in the data caused by steep terrain. These will be filled using coverage from an intersecting pass.

    The Shuttle Radar Topography Mission (SRTM), launched on February 11, 2000, uses the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. The mission is designed to collect three-dimensional measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter

  17. Color image encryption based on fractional Fourier transforms and pixel scrambling technique

    NASA Astrophysics Data System (ADS)

    Zhao, Jianlin; Lu, Hongqiang; Fan, Qi

    2007-01-01

    Color image encryption based on the fractional Fourier transform (FRT) and a pixel scrambling technique is presented in this paper. In general, a color (RGB) image cannot be directly encrypted using a traditional setup for optical information processing, because such a setup is only suited to processing two-dimensional gray images. In the proposed method, a three-dimensional RGB image is decomposed into three two-dimensional gray images (the R, G and B values of the color image), the encryption operation is performed on each two-dimensional gray image, and the encoded color image is then obtained by composing the three two-dimensional encrypted images. The decryption process is the inverse of the encryption. Optical encryption systems based on the presented method are proposed and simulated, and computer simulation results are presented to verify the flexibility and reliability of the method. The quality of the decrypted images degrades as the fractional orders deviate from the correct values, and incorrect decryption of any monochromatic component affects the color of the decrypted image. At the end of this paper, all-optical and photoelectric encryption/decryption system solutions are presented, and principles for selecting the optical devices are also given.
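
    The pixel scrambling stage alone can be sketched as a key-seeded permutation applied to each two-dimensional channel; the FRT stage and the optical setup are omitted here, and the seed stands in for the key:

```python
import numpy as np

def scramble_channels(img, key=42):
    """Scramble the pixels of each 2-D channel with a key-seeded permutation."""
    rng = np.random.default_rng(key)
    h, w, c = img.shape
    perm = rng.permutation(h * w)
    out = np.empty_like(img)
    for ch in range(c):
        out[..., ch] = img[..., ch].ravel()[perm].reshape(h, w)
    return out, perm

def unscramble_channels(img, perm):
    """Invert the permutation to recover the original channels."""
    h, w, c = img.shape
    inv = np.argsort(perm)   # inverse permutation
    out = np.empty_like(img)
    for ch in range(c):
        out[..., ch] = img[..., ch].ravel()[inv].reshape(h, w)
    return out
```

Decryption with the same key exactly recovers the original image, mirroring the inverse relationship described in the abstract.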

  18. Polaroid Graphics Imaging Direct Digital Color Proofing

    NASA Astrophysics Data System (ADS)

    King, Patrick F.

    1989-04-01

    Good morning ladies and gentlemen. I represent Polaroid Graphics Imaging, a wholly owned subsidiary of the Polaroid Corporation. We wish to thank Ken Cloud and the SPIE for the opportunity to speak today. Several criteria are fundamental to the role of Direct Digital Color Proofing (DDCP). First, the DDCP must represent a first-generation hardcopy of the exact color information in the production stream. It must, as its name suggests, be an exact proof (hence the name direct) of the electronic or digital information which would otherwise be directed toward film working. It is, after all, the most critical means to evaluate the quality of whatever pagination, scanner or color work has gone before it. Second, the DDCP must represent an opportunity: to reconvene the production stream and move to film making, optical or magnetic storage, or satellite transmission with the confidence that the DDCP is identical to some conventional counterpart. In the case of film it must match a conventional proof and press sheet, dot for dot; otherwise it is merely an exercise in interpretation. For magnetic or optical storage and satellite transmission there must be assurance that at any opportunity either a duplicate DDCP or a conventional film/proof could reproduce earlier results. And as the printed product is the final goal, and direct-to-press is evolving into direct-to-plate and direct-to-gravure printing, the DDCP must share the halftone lineage of these products. Third, and hardly least, the whole purpose of DDCP is increased productivity. However, our industry struggles to maintain individuality and variety. Somehow DDCP must balance these forces.

  19. Colorimetry-based edge preservation approach for color image enhancement

    NASA Astrophysics Data System (ADS)

    Suresh, Merugu; Jain, Kamal

    2016-07-01

    "Subpixel-based downsampling" is an approach that can implicitly enhance the perceptible resolution of a downsampled image by managing the subpixel-level representation of individual pixels. Subpixel-level representation of color image samples at edge regions is addressed with directional filtration along horizontal and vertical orientations in a colorimetric color space, with the help of saturated and desaturated pixels. A diagonal tracing algorithm and an edge-preserving approach in the colorimetric color space were used for color image enhancement. Since there are high variations at the edge regions, they cannot be considered constant or zero; when these variations are random, they need to be compensated to a minimum value before the image representation is processed. Finally, the results of the proposed method show much better image information compared with traditional direct pixel-based methods, with increased luminance and chrominance resolutions.

  20. Tiny Devices Project Sharp, Colorful Images

    NASA Technical Reports Server (NTRS)

    2009-01-01

    Displaytech Inc., based in Longmont, Colorado and recently acquired by Micron Technology Inc. of Boise, Idaho, first received a Small Business Innovation Research contract in 1993 from Johnson Space Center to develop tiny, electronic, color displays, called microdisplays. Displaytech has since sold over 20 million microdisplays and was ranked one of the fastest growing technology companies by Deloitte and Touche in 2005. Customers currently incorporate the microdisplays in tiny pico-projectors, which weigh only a few ounces and attach to media players, cell phones, and other devices. The projectors can convert a digital image from the typical postage stamp size into a bright, clear, four-foot projection. The company believes sales of this type of pico-projector may exceed $1.1 billion within 5 years.

  1. Color accuracy and reproducibility in whole slide imaging scanners

    PubMed Central

    Shrestha, Prarthana; Hulsken, Bas

    2014-01-01

    We propose a workflow for color reproduction in whole slide imaging (WSI) scanners, such that the colors in the scanned images match to the actual slide color and the inter-scanner variation is minimum. We describe a new method of preparation and verification of the color phantom slide, consisting of a standard IT8-target transmissive film, which is used in color calibrating and profiling the WSI scanner. We explore several International Color Consortium (ICC) compliant techniques in color calibration/profiling and rendering intents for translating the scanner specific colors to the standard display (sRGB) color space. Based on the quality of the color reproduction in histopathology slides, we propose the matrix-based calibration/profiling and absolute colorimetric rendering approach. The main advantage of the proposed workflow is that it is compliant to the ICC standard, applicable to color management systems in different platforms, and involves no external color measurement devices. We quantify color difference using the CIE-DeltaE2000 metric, where DeltaE values below 1 are considered imperceptible. Our evaluation on 14 phantom slides, manufactured according to the proposed method, shows an average inter-slide color difference below 1 DeltaE. The proposed workflow is implemented and evaluated in 35 WSI scanners developed at Philips, called the Ultra Fast Scanners (UFS). The color accuracy, measured as DeltaE between the scanner reproduced colors and the reference colorimetric values of the phantom patches, is improved on average to 3.5 DeltaE in calibrated scanners from 10 DeltaE in uncalibrated scanners. The average inter-scanner color difference is found to be 1.2 DeltaE. The improvement in color performance upon using the proposed method is apparent with the visual color quality of the tissue scans. PMID:26158041
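
    For intuition about the metric, the older CIE76 color difference is simply the Euclidean distance in L*a*b* space; the paper uses the more elaborate CIEDE2000 formula, which adds perceptual weighting on top of this:

```python
import numpy as np

def delta_e_76(lab1, lab2):
    """CIE76 color difference: Euclidean distance between two L*a*b* triples.
    (The paper's evaluations use the more elaborate CIEDE2000 formula.)"""
    return float(np.linalg.norm(np.asarray(lab1, float) - np.asarray(lab2, float)))

# Differences below about 1 are generally considered imperceptible
d = delta_e_76((50.0, 2.0, -3.0), (50.5, 2.4, -3.2))
```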

  2. Oral lesion classification using true-color images

    NASA Astrophysics Data System (ADS)

    Chodorowski, Artur; Mattsson, Ulf; Gustavsson, Tomas

    1999-05-01

    The aim of the study was to investigate effective image analysis methods for the discrimination of two oral lesions, oral lichenoid reactions and oral leukoplakia, using only color information. Five color representations (RGB, Irg, HSI, I1I2I3 and L*a*b*) were studied and their use for color analysis of mucosal images evaluated. Four common classifiers (Fisher's linear discriminant, Gaussian quadratic, k-Nearest Neighbor (kNN) and Multilayer Perceptron) were chosen for the evaluation of classification performance. The feature vector consisted of the mean color difference between abnormal and normal regions extracted from digital color images. Classification accuracy was estimated using resubstitution and 5-fold cross-validation methods. The best classification results were achieved in the HSI color system using a linear discriminant function. In total, 70 out of 74 (94.6%) lichenoid reactions and 14 out of 20 (70.0%) leukoplakias were correctly classified using only color information.
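
    The kNN step on such mean-color-difference features can be sketched as follows; the feature values and class labels below are made up for illustration and are not the study's data:

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Classify x by majority vote among its k nearest training vectors."""
    d = np.linalg.norm(X_train - x, axis=1)
    nearest = y_train[np.argsort(d)[:k]]
    vals, counts = np.unique(nearest, return_counts=True)
    return vals[np.argmax(counts)]

# Illustrative mean color-difference features (e.g., H, S, I differences)
X = np.array([[0.10, 0.20, 0.10], [0.15, 0.25, 0.05],
              [0.80, 0.70, 0.90], [0.75, 0.80, 0.85]])
y = np.array([0, 0, 1, 1])   # 0 = lichenoid reaction, 1 = leukoplakia
label = knn_predict(X, y, np.array([0.12, 0.22, 0.08]))
```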

  3. Quaternion-Michelson Descriptor for Color Image Classification.

    PubMed

    Lan, Rushi; Zhou, Yicong

    2016-09-02

    In this paper, we develop a simple yet powerful framework called Quaternion-Michelson Descriptor (QMD) to extract local features for color image classification. Unlike traditional local descriptors extracted directly from the original (raw) image space, QMD is derived from the Michelson contrast law and the quaternionic representation (QR) of color images. The Michelson contrast is a stable measurement of image contents from the viewpoint of human perception, while QR is able to handle all the color information of the image holistically and to preserve the interactions among different color channels. In this way, QMD integrates the merits of both Michelson contrast and QR. Based on the QMD framework, we further propose two novel Quaternionic Michelson Contrast Binary Pattern (QMCBP) descriptors from different perspectives. Experiments and comparisons on different color image classification databases demonstrate that the proposed framework and descriptors outperform several state-of-the-art methods.

  4. Quaternion-Michelson Descriptor for Color Image Classification.

    PubMed

    Lan, Rushi; Zhou, Yicong

    2016-11-01

    In this paper, we develop a simple yet powerful framework called quaternion-Michelson descriptor (QMD) to extract local features for color image classification. Unlike traditional local descriptors extracted directly from the original (raw) image space, QMD is derived from the Michelson contrast law and the quaternionic representation (QR) of color images. The Michelson contrast is a stable measurement of image contents from the viewpoint of human perception, while QR is able to handle all the color information of the image holistically and to preserve the interactions among different color channels. In this way, QMD integrates the merits of both Michelson contrast and QR. Based on the QMD framework, we further propose two novel quaternionic Michelson contrast binary pattern descriptors from different perspectives. Experiments and comparisons on different color image classification databases demonstrate that the proposed framework and descriptors outperform several state-of-the-art methods.
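
    The Michelson contrast underlying QMD is C = (I_max − I_min)/(I_max + I_min); a minimal sketch on a local gray-level patch:

```python
import numpy as np

def michelson_contrast(patch):
    """Michelson contrast C = (I_max - I_min) / (I_max + I_min) of a patch."""
    lo, hi = float(patch.min()), float(patch.max())
    if hi + lo == 0:
        return 0.0
    return (hi - lo) / (hi + lo)

patch = np.array([[60.0, 80.0], [100.0, 140.0]])
c = michelson_contrast(patch)  # (140 - 60) / (140 + 60) = 0.4
```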

  5. [A medical image color correction method based on supervised color constancy].

    PubMed

    Xu, Jiatuo; Tu, Liping; Zhang, Zhifeng; Zhou, Changle

    2010-08-01

    This paper presents a medical image acquisition and analysis method, the TRM (Topology Resolve-Map) model, for indoor natural-light conditions. First, in accordance with medical image color characteristics, a color and grayscale control patch was made for use as a supervised color reference. "Topology Resolve-Map-Restoration" was carried out in the LAB color space, on the one-dimensional L* space and the two-dimensional a*b* space. The L* value was regulated by subsection regulation, and the a*b* values were regulated by triangulation topological cutting (close-in-on-center-of-gravity method). After correction of the 198 color blocks in 22 pictures, the results showed that, by comparison with the standard values, deltaL*, deltaC* and deltaE decreased significantly (P < 0.01) after correction by TRM. After correction, the color difference of the images is reduced, the color saturation is improved, and the values are closer to the true values. The TRM model can significantly reduce the color difference of medical images under natural light conditions; it has a good color correction effect.

  6. Demosaiced pixel super-resolution for multiplexed holographic color imaging

    NASA Astrophysics Data System (ADS)

    Wu, Yichen; Zhang, Yibo; Luo, Wei; Ozcan, Aydogan

    2016-06-01

    To synthesize a holographic color image, one can sequentially take three holograms at different wavelengths, e.g., at red (R), green (G) and blue (B) parts of the spectrum, and digitally merge them. To speed up the imaging process by a factor of three, a Bayer color sensor-chip can also be used to demultiplex three wavelengths that simultaneously illuminate the sample and digitally retrieve individual set of holograms using the known transmission spectra of the Bayer color filters. However, because the pixels of different channels (R, G, B) on a Bayer color sensor are not at the same physical location, conventional demosaicing techniques generate color artifacts in holographic imaging using simultaneous multi-wavelength illumination. Here we demonstrate that pixel super-resolution can be merged into the color de-multiplexing process to significantly suppress the artifacts in wavelength-multiplexed holographic color imaging. This new approach, termed Demosaiced Pixel Super-Resolution (D-PSR), generates color images that are similar in performance to sequential illumination at three wavelengths, and therefore improves the speed of holographic color imaging by 3-fold. D-PSR method is broadly applicable to holographic microscopy applications, where high-resolution imaging and multi-wavelength illumination are desired.
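
    The de-multiplexing step starts by splitting the Bayer mosaic into its per-channel sub-arrays, which sample the scene at different physical offsets, the source of the color artifacts discussed above. A sketch assuming an RGGB layout:

```python
import numpy as np

def split_bayer_rggb(mosaic):
    """Split an RGGB Bayer mosaic into R, G1, G2, B sub-arrays.
    Each sub-array samples the scene at a different physical offset."""
    r  = mosaic[0::2, 0::2]
    g1 = mosaic[0::2, 1::2]
    g2 = mosaic[1::2, 0::2]
    b  = mosaic[1::2, 1::2]
    return r, g1, g2, b

mosaic = np.arange(16, dtype=float).reshape(4, 4)
r, g1, g2, b = split_bayer_rggb(mosaic)   # each sub-array is 2x2
```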

  7. Demosaiced pixel super-resolution for multiplexed holographic color imaging

    PubMed Central

    Wu, Yichen; Zhang, Yibo; Luo, Wei; Ozcan, Aydogan

    2016-01-01

    To synthesize a holographic color image, one can sequentially take three holograms at different wavelengths, e.g., at red (R), green (G) and blue (B) parts of the spectrum, and digitally merge them. To speed up the imaging process by a factor of three, a Bayer color sensor-chip can also be used to demultiplex three wavelengths that simultaneously illuminate the sample and digitally retrieve individual set of holograms using the known transmission spectra of the Bayer color filters. However, because the pixels of different channels (R, G, B) on a Bayer color sensor are not at the same physical location, conventional demosaicing techniques generate color artifacts in holographic imaging using simultaneous multi-wavelength illumination. Here we demonstrate that pixel super-resolution can be merged into the color de-multiplexing process to significantly suppress the artifacts in wavelength-multiplexed holographic color imaging. This new approach, termed Demosaiced Pixel Super-Resolution (D-PSR), generates color images that are similar in performance to sequential illumination at three wavelengths, and therefore improves the speed of holographic color imaging by 3-fold. D-PSR method is broadly applicable to holographic microscopy applications, where high-resolution imaging and multi-wavelength illumination are desired. PMID:27353242

  8. 24-bit color image quantization for 8-bits color display based on Y-Cr-Cb

    NASA Astrophysics Data System (ADS)

    Chang, Long-Wen; Liu, Tsann-Shyong

    1993-10-01

    A new fast algorithm that can display true 24-bit color images of JPEG and MPEG on an 8-bit color display is described. Instead of generating a colormap in the R-G-B color space as is conventional, we analyze color images in the Y-Cr-Cb color space. Using the Bayes decision rule, the representative values for the Y component are selected based on its histogram. Then, the representative values for the Cr and Cb components are determined from their conditional histograms given Y. Finally, a fast lookup table that can generate R-G-B outputs for Y-Cr-Cb inputs without a matrix transformation is presented. The experimental results show that good-quality color-quantized images can be achieved by the proposed algorithm.
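
    The Y-Cr-Cb analysis rests on the standard full-range (JFIF) RGB to YCbCr transform, sketched below; a neutral gray maps to Cb = Cr = 128, i.e. zero chroma:

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Full-range (JFIF) RGB -> Y, Cb, Cr conversion for 8-bit values."""
    r, g, b = np.asarray(rgb, dtype=float)
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.1687 * r - 0.3313 * g + 0.5 * b + 128
    cr =  0.5 * r - 0.4187 * g - 0.0813 * b + 128
    return y, cb, cr

y, cb, cr = rgb_to_ycbcr((128, 128, 128))   # neutral gray
```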

  9. Different colors of light lead to different adaptation and activation as determined by high-density EEG.

    PubMed

    Münch, M; Plomp, G; Thunell, E; Kawasaki, A; Scartezzini, J L; Herzog, M H

    2014-11-01

    Light adaptation is crucial for coping with the varying levels of ambient light. Using high-density electroencephalography (EEG), we investigated how adaptation to light of different colors affects brain responsiveness. In a within-subject design, sixteen young participants were adapted first to dim white light and then to blue, green, red, or white bright light (one color per session in a randomized order). Immediately after both dim and bright light adaptation, we presented brief light pulses and recorded event-related potentials (ERPs). We analyzed ERP response strengths and brain topographies and determined the underlying sources using electrical source imaging. Between 150 and 261 ms after stimulus onset, the global field power (GFP) was higher after dim than bright light adaptation. This effect was most pronounced with red light and localized in the frontal lobe, the fusiform gyrus, the occipital lobe and the cerebellum. After bright light adaptation, within the first 100 ms after light onset, stronger responses were found than after dim light adaptation for all colors except for red light. Differences between conditions were localized in the frontal lobe, the cingulate gyrus, and the cerebellum. These results indicate that very short-term EEG brain responses are influenced by prior light adaptation and the spectral quality of the light stimulus. We show that the early EEG responses are differently affected by adaptation to different colors of light which may contribute to known differences in performance and reaction times in cognitive tests.
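
    Global field power (GFP), the response-strength measure analyzed here, is the standard deviation of the potential across all electrodes at each time point; a sketch on synthetic multi-channel data standing in for an ERP:

```python
import numpy as np

def global_field_power(eeg):
    """GFP: spatial standard deviation across channels at each time point.
    eeg: array of shape (n_channels, n_times)."""
    return eeg.std(axis=0)

# Synthetic 4-channel recording: the same waveform with different amplitudes
eeg = np.vstack([np.sin(np.linspace(0, 1, 100)) * a for a in (1.0, 0.5, -0.5, -1.0)])
gfp = global_field_power(eeg)   # one GFP value per time point
```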

  10. Specialized Color Targets for Spectral Reflectance Reconstruction of Magnified Images

    NASA Astrophysics Data System (ADS)

    Kruschwitz, Jennifer D. T.

    Digital images are used almost exclusively instead of film to capture visual information across many scientific fields. The colorimetric color representation within these digital images can be relayed from the digital counts produced by the camera with the use of a known color target. In image capture of magnified images, there is currently no reliable color target that can be used at multiple magnifications and give the user a solid understanding of the color ground truth within those images. The first part of this dissertation included the design, fabrication, and testing of a color target produced with optical interference coated microlenses for use in an off-axis illumination, compound microscope. An ideal target was designed to increase the color gamut for colorimetric imaging and provide the necessary "Block Dye" spectral reflectance profiles across the visible spectrum to reduce the number of color patches necessary for multiple filter imaging systems that rely on statistical models for spectral reflectance reconstruction. There are other scientific disciplines that can benefit from a specialized color target to determine the color ground truth in their magnified images and perform spectral estimation. Not every discipline has the luxury of having a multi-filter imaging system. The second part of this dissertation developed two unique ways of using an interference coated color mirror target: one that relies on multiple light-source angles, and one that leverages a dynamic color change with time. The source multi-angle technique would be used for the microelectronic discipline where the reconstructed spectral reflectance would be used to determine a dielectric film thickness on a silicon substrate, and the time varying technique would be used for a biomedical example to determine the thickness of human tear film.

  11. Color image encryption scheme using CML and DNA sequence operations.

    PubMed

    Wang, Xing-Yuan; Zhang, Hui-Li; Bao, Xue-Mei

    2016-06-01

    In this paper, an encryption algorithm for color images using a chaotic system and DNA (deoxyribonucleic acid) sequence operations is proposed. The three components of the color plain image are employed to construct a matrix, and a confusion operation is then performed on the pixel matrix using sequences generated by the spatiotemporal chaotic system, i.e., the CML (coupled map lattice). DNA encoding and decoding rules are introduced in the permutation phase. An extended Hamming distance is proposed to generate new initial values for the CML iteration from the color plain image. The rows and columns of the DNA matrix are permuted, and the color cipher image is obtained from this matrix. Theoretical analysis and experimental results show that the cryptosystem is secure and practical, and it is suitable for encrypting color images of any size.
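    The DNA-encoding step mentioned above can be illustrated with one of the standard encoding rules (2 bits per base; here 00->A, 01->C, 10->G, 11->T). This is only a sketch of the encoding itself; the paper selects among several such rules and combines them with CML-driven confusion and permutation, which are not reproduced here.

```python
BASES = "ACGT"  # one of several possible 2-bit-per-base encoding rules

def dna_encode(byte):
    """Encode one 8-bit pixel value as four DNA bases (2 bits per base)."""
    return "".join(BASES[(byte >> shift) & 0b11] for shift in (6, 4, 2, 0))

def dna_decode(seq):
    """Invert dna_encode: four bases back to one 8-bit value."""
    value = 0
    for base in seq:
        value = (value << 2) | BASES.index(base)
    return value

assert dna_encode(0b11100100) == "TGCA"
assert dna_decode(dna_encode(173)) == 173
```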

  12. Asymmetric color image encryption based on singular value decomposition

    NASA Astrophysics Data System (ADS)

    Yao, Lili; Yuan, Caojin; Qiang, Junjie; Feng, Shaotong; Nie, Shouping

    2017-02-01

    A novel asymmetric color image encryption approach using singular value decomposition (SVD) is proposed. The original color image is encrypted into a ciphertext, represented as an indexed image, using the proposed method. The red, green and blue components of the color image are encoded into a complex function, which is then decomposed into U, S and V parts by SVD. The data matrix of the ciphertext is obtained by multiplying the orthogonal matrices U and V while applying phase-truncation. The diagonal entries of the three diagonal matrices from the SVD are extracted and scrambled to construct the colormap of the ciphertext. Thus, the encrypted indexed image occupies less space than the original image. For decryption, the original color image cannot be recovered without the private keys, which are obtained from the phase-truncation and the orthogonality of V. Computer simulations are presented to evaluate the performance of the proposed algorithm, and the security of the proposed system is analyzed.
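    The SVD step at the heart of the scheme can be sketched as follows. This shows only the decomposition idea: the product U V discards the singular values, which the scheme stores separately (scrambled, in the colormap); the complex encoding, phase-truncation and keys are omitted.

```python
import numpy as np

rng = np.random.default_rng(1)
channel = rng.random((16, 16))  # stand-in for one encoded image channel

U, s, Vt = np.linalg.svd(channel, full_matrices=False)
cipher_data = U @ Vt                 # orthogonal product; singular values withheld
recovered = U @ np.diag(s) @ Vt      # only possible with the withheld diagonal

assert np.allclose(recovered, channel)
```

    Without the diagonal entries of S, the product U V alone does not determine the original matrix, which is what lets the scheme separate the data matrix from the colormap.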

  13. Imaging Radio Galaxies with Adaptive Optics

    NASA Astrophysics Data System (ADS)

    de Vries, W. H.; van Breugel, W. J. M.; Quirrenbach, A.; Roberts, J.; Fidkowski, K.

    2000-12-01

    We present 42 milli-arcsecond resolution Adaptive Optics near-infrared images of 3C 452 and 3C 294, two powerful radio galaxies at z=0.081 and z=1.79, respectively, obtained with the NIRSPEC/SCAM+AO instrument on the Keck telescope. The observations provide unprecedented morphological detail of radio galaxy components such as nuclear dust lanes, off-centered or binary nuclei, and merger-induced star-forming structures, all of which are key features in understanding galaxy formation and the onset of powerful radio emission. Complementary optical HST imaging data are used to construct high resolution color images, which, for the first time, have matching optical and near-IR resolutions. Based on these maps, the extra-nuclear structural morphologies and compositions of both galaxies are discussed. Furthermore, detailed brightness profile analysis of 3C 452 allows a direct comparison to a large literature sample of nearby ellipticals, all of which have been observed in the optical and near-IR by HST. Both the imaging data and the profile information on 3C 452 are consistent with it being a relatively diminutive, well-evolved elliptical, in stark contrast to 3C 294, which seems to be in its initial formation throes with an active AGN off-centered from the main body of the galaxy. These results are discussed further within the framework of radio galaxy triggering and the formation of massive ellipticals. The work of WdV and WvB was performed under the auspices of the U.S. Department of Energy by University of California Lawrence Livermore National Laboratory under contract No. W-7405-Eng-48. The work at UCSD has been supported by the NSF Science and Technology Center for Adaptive Optics, under agreement No. AST-98-76783.

  14. Evaluation of color error and noise on simulated images

    NASA Astrophysics Data System (ADS)

    Mornet, Clémence; Vaillant, Jérôme; Decroux, Thomas; Hérault, Didier; Schanen, Isabelle

    2010-01-01

    The evaluation of CMOS sensor performance in terms of color accuracy and noise is a big challenge for camera phone manufacturers. In this paper, we present a tool developed with Matlab at STMicroelectronics which allows quality parameters to be evaluated on simulated images. These images are computed from measured or predicted Quantum Efficiency (QE) curves and a noise model. By setting the integration time and illumination parameters, the tool optimizes the color correction matrix (CCM) and calculates the color error, color saturation and signal-to-noise ratio (SNR). After this color correction optimization step, a Graphical User Interface (GUI) has been designed to display a simulated image at a chosen illumination level, with all the characteristics of a real image taken by the sensor with the previous color correction. Simulated images can be a synthetic Macbeth ColorChecker, for which the reflectance of each patch is known, a multi-spectral image described by the reflectance spectrum of each pixel, or an image taken at high light level. A validation of the results has been performed with ST sensors under development. Finally, we present two applications: one based on the trade-off between color saturation and noise when optimizing the CCM, and the other on demosaicking SNR trade-offs.
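    Applying a CCM can be sketched in a few lines. The matrix values below are hypothetical, chosen only so that each row sums to one (a common constraint that preserves neutral grays); the tool described above optimizes these coefficients against known patch colors, which is not reproduced here.

```python
import numpy as np

# Hypothetical CCM; rows sum to 1.0 so neutral gray maps to itself.
ccm = np.array([[ 1.6, -0.4, -0.2],
                [-0.3,  1.5, -0.2],
                [-0.1, -0.5,  1.6]])

raw = np.full((8, 8, 3), 0.5)   # flat gray patch, linear RGB in [0, 1]
corrected = raw @ ccm.T         # per-pixel 3x3 matrix multiply

# A row-sum-of-one CCM leaves neutral gray unchanged
assert np.allclose(corrected, raw)
```

    The saturation/noise trade-off mentioned in the abstract arises because larger off-diagonal CCM coefficients increase color saturation but also amplify sensor noise.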

  15. Ghost diffraction and ghost imaging in two-color ghost imaging

    NASA Astrophysics Data System (ADS)

    Yong, Pei; Fu, Xi-quan

    2016-10-01

    Ghost diffraction and ghost imaging in two-color ghost imaging are investigated with pseudo-thermal light. Based on the extended Huygens-Fresnel integral, the ghost diffraction (GD) and ghost imaging (GI) conditions in two-color ghost imaging are demonstrated. It is shown that GD and GI fringes in two-color ghost imaging can be obtained by satisfying the GI condition and the GD condition, respectively. Compared with single-color ghost imaging, ghost diffraction and ghost imaging can be exchanged by changing only one of the wavelengths of the two-color source, and the GD and GI conditions in single-color ghost imaging are a special case of two-color ghost imaging. The simulation results agree well with the theoretical analysis.

  16. Information-Adaptive Image Encoding and Restoration

    NASA Technical Reports Server (NTRS)

    Park, Stephen K.; Rahman, Zia-ur

    1998-01-01

    The multiscale retinex with color restoration (MSRCR) has shown itself to be a very versatile automatic image enhancement algorithm that simultaneously provides dynamic range compression, color constancy, and color rendition. A number of algorithms exist that provide one or more of these features, but not all. In this paper we compare the performance of the MSRCR with techniques that are widely used for image enhancement. Specifically, we compare the MSRCR with color adjustment methods such as gamma correction and gain/offset application, histogram modification techniques such as histogram equalization and manual histogram adjustment, and other more powerful techniques such as homomorphic filtering and 'burning and dodging'. The comparison is carried out by testing the suite of image enhancement methods on a set of diverse images. We find that though some of these techniques work well for some of these images, only the MSRCR performs universally well on the test set.
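    The building block of MSRCR is the single-scale retinex: the log of the image minus the log of a blurred "surround" estimate of the illumination. The sketch below shows only that building block, with a box blur standing in for the Gaussian surround for brevity; the multiscale combination and the color restoration step are omitted.

```python
import numpy as np

def box_blur(img, radius):
    """Separable box blur with edge padding (stand-in for a Gaussian surround)."""
    k = 2 * radius + 1
    kernel = np.ones(k) / k
    pad = np.pad(img, radius, mode="edge")
    rows = np.apply_along_axis(lambda r: np.convolve(r, kernel, "valid"), 1, pad)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, "valid"), 0, rows)

def single_scale_retinex(channel, radius=8):
    """log(image) - log(surround): compresses illumination, keeps reflectance detail."""
    channel = channel.astype(float) + 1.0   # offset avoids log(0)
    return np.log(channel) - np.log(box_blur(channel, radius))
```

    On a uniformly lit region the surround equals the signal and the output is zero; detail relative to the local surround is what survives, which is the source of the dynamic range compression.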

  17. Scene recognition and colorization for vehicle infrared images

    NASA Astrophysics Data System (ADS)

    Hou, Junjie; Sun, Shaoyuan; Shen, Zhenyi; Huang, Zhen; Zhao, Haitao

    2016-10-01

    In order to make better use of infrared technology for driving assistance system, a scene recognition and colorization method is proposed in this paper. Various objects in a queried infrared image are detected and labelled with proper categories by a combination of SIFT-Flow and MRF model. The queried image is then colorized by assigning corresponding colors according to the categories of the objects appeared. The results show that the strategy here emphasizes important information of the IR images for human vision and could be used to broaden the application of IR images for vehicle driving.

  18. Color accuracy and reproducibility in whole slide imaging scanners

    NASA Astrophysics Data System (ADS)

    Shrestha, Prarthana; Hulsken, Bas

    2014-03-01

    In this paper, we propose a work-flow for color reproduction in whole slide imaging (WSI) scanners such that the colors in the scanned images match the actual slide colors and the inter-scanner variation is minimal. We describe a novel method for the preparation and verification of a color phantom slide, consisting of a standard IT8-target transmissive film, which is used in color calibrating and profiling the WSI scanner. We explore several ICC-compliant techniques for color calibration/profiling and rendering intents for translating the scanner-specific colors to the standard display (sRGB) color space. Based on the quality of color reproduction in histopathology tissue slides, we propose the matrix-based calibration/profiling and absolute colorimetric rendering approach. The main advantages of the proposed work-flow are that it is compliant with the ICC standard, applicable to color management systems on different platforms, and involves no external color measurement devices. We measure objective color performance using the CIE-DeltaE2000 metric, where DeltaE values below 1 are considered imperceptible. Our evaluation of 14 phantom slides, manufactured according to the proposed method, shows an average inter-slide color difference below 1 DeltaE. The proposed work-flow is implemented and evaluated in 35 Philips Ultra Fast Scanners (UFS). The results show that the average color difference between a scanner and the reference is 3.5 DeltaE, and among the scanners is 3.1 DeltaE. The improvement in color performance when using the proposed method is apparent in the visual color quality of the tissue scans.
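    The color-difference idea can be illustrated with the simple CIE76 formula, a Euclidean distance in Lab space. Note this is a simplification: the paper uses the more elaborate CIE-DeltaE2000 metric, which adds lightness, chroma and hue weighting terms not reproduced here, and the Lab values below are hypothetical.

```python
import numpy as np

def delta_e_cie76(lab1, lab2):
    """CIE76 color difference: Euclidean distance between two Lab triples."""
    return float(np.linalg.norm(np.asarray(lab1, float) - np.asarray(lab2, float)))

slide = (52.0, 8.0, -10.0)      # hypothetical measured Lab value from a scan
reference = (52.5, 8.2, -9.8)   # hypothetical reference Lab value

de = delta_e_cie76(slide, reference)
assert de < 1.0   # differences below ~1 are commonly taken as imperceptible
```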

  19. New Colors for Histology: Optimized Bivariate Color Maps Increase Perceptual Contrast in Histological Images

    PubMed Central

    Kather, Jakob Nikolas; Weis, Cleo-Aron; Marx, Alexander; Schuster, Alexander K.; Schad, Lothar R.; Zöllner, Frank Gerrit

    2015-01-01

    Background Accurate evaluation of immunostained histological images is required for reproducible research in many different areas and forms the basis of many clinical decisions. The quality and efficiency of histopathological evaluation is limited by the information content of a histological image, which is primarily encoded as perceivable contrast differences between objects in the image. However, the colors of chromogen and counterstain used for histological samples are not always optimally distinguishable, even under optimal conditions. Methods and Results In this study, we present a method to extract the bivariate color map inherent in a given histological image and to retrospectively optimize this color map. We use a novel, unsupervised approach based on color deconvolution and principal component analysis to show that the commonly used blue and brown color hues in Hematoxylin and 3,3'-Diaminobenzidine (DAB) images are poorly suited for human observers. We then demonstrate that it is possible to construct improved color maps according to objective criteria and that these color maps can be used to digitally re-stain histological images. Validation To validate whether this procedure improves the distinguishability of objects and background in histological images, we re-stain phantom images and N = 596 large histological images of immunostained samples of human solid tumors. We show that perceptual contrast is improved by a factor of 2.56 in phantom images and up to a factor of 2.17 in sets of histological tumor images. Context Thus, we provide an objective and reliable approach to measure object distinguishability in a given histological image and to maximize the visual information available to a human observer. This method could easily be incorporated in digital pathology image viewing systems to improve accuracy and efficiency in research and diagnostics. PMID:26717571
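    The principal component analysis step can be sketched on synthetic pixel data. This is a minimal illustration only: it recovers the dominant color axis of a set of pixels via SVD, using a made-up "blue vs. brown" axis; the paper's full pipeline (stain-vector color deconvolution, then remapping onto an optimized bivariate color map) is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic pixels spread mostly along a hypothetical dominant color axis
axis = np.array([0.2, 0.4, -0.8])
pixels = rng.normal(size=(1000, 1)) * axis + rng.normal(scale=0.05, size=(1000, 3))

# PCA via SVD of the mean-centered pixel matrix
centered = pixels - pixels.mean(axis=0)
_, _, Vt = np.linalg.svd(centered, full_matrices=False)
principal = Vt[0]   # first principal component (unit vector)

# it should align with the dominant color axis
cos = abs(principal @ axis) / np.linalg.norm(axis)
assert cos > 0.99
```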

  20. Color normalization for robust evaluation of microscopy images

    NASA Astrophysics Data System (ADS)

    Švihlík, Jan; Kybic, Jan; Habart, David

    2015-09-01

    This paper deals with color normalization of microscopy images of Langerhans islets in order to increase robustness of the islet segmentation to illumination changes. The main application is automatic quantitative evaluation of the islet parameters, useful for determining the feasibility of islet transplantation in diabetes. First, background illumination inhomogeneity is compensated and a preliminary foreground/background segmentation is performed. The color normalization itself is done in either lαβ or logarithmic RGB color spaces, by comparison with a reference image. The color-normalized images are segmented using color-based features and pixel-wise logistic regression, trained on manually labeled images. Finally, relevant statistics such as the total islet area are evaluated in order to determine the success likelihood of the transplantation.
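    The normalization-by-reference idea can be sketched as a per-channel mean/standard-deviation transfer in log-RGB, one of the two spaces the paper considers (its lαβ variant is analogous). This is a hedged sketch of the general technique, not the paper's exact procedure.

```python
import numpy as np

def normalize_to_reference(img, ref, eps=1e-6):
    """Match each channel's mean/std to a reference image, in log-RGB space."""
    log_img, log_ref = np.log(img + eps), np.log(ref + eps)
    out = np.empty_like(log_img)
    for c in range(3):  # per-channel statistics transfer
        mu_i, sd_i = log_img[..., c].mean(), log_img[..., c].std()
        mu_r, sd_r = log_ref[..., c].mean(), log_ref[..., c].std()
        out[..., c] = (log_img[..., c] - mu_i) / (sd_i + eps) * sd_r + mu_r
    return np.exp(out)
```

    After this step, a segmenter trained on reference-like colors can be applied to images acquired under different illumination.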

  1. SWT voting-based color reduction for text detection in natural scene images

    NASA Astrophysics Data System (ADS)

    Ikica, Andrej; Peer, Peter

    2013-12-01

    In this article, we propose a novel stroke width transform (SWT) voting-based color reduction method for detecting text in natural scene images. Unlike other text detection approaches, which mostly rely on either text structure or color, the proposed method combines both by supervising a text-oriented color reduction process with additional SWT information. SWT pixels mapped to color space vote in favor of the color they correspond to. Colors receiving a high SWT vote most likely belong to text areas and are blocked from being mean-shifted away. The literature does not explicitly address the SWT search direction issue; thus, we propose an adaptive sub-block method for determining the correct SWT direction. Both the SWT voting-based color reduction and SWT direction determination methods are evaluated on binary (text/non-text) images obtained from a challenging Computer Vision Lab optical character recognition database. The SWT voting-based color reduction method outperforms the state-of-the-art text-oriented color reduction approach.

  2. High-speed Digital Color Imaging Pyrometry

    DTIC Science & Technology

    2011-08-01

    and environment of the events. To overcome these challenges, we have characterized and calibrated a digital high-speed color camera that may be...correction) to determine their effect on the calculated temperature. Using this technique with a Phantom color camera, we measured the temperature of...constant value of approximately 1980 K.

  3. Luminance contours can gate afterimage colors and "real" colors.

    PubMed

    Anstis, Stuart; Vergeer, Mark; Van Lier, Rob

    2012-09-06

    It has long been known that colored images may elicit afterimages in complementary colors. We have already shown (Van Lier, Vergeer, & Anstis, 2009) that one and the same adapting image may result in different afterimage colors, depending on the test contours presented after the colored image. The color of the afterimage depends on two adapting colors, those both inside and outside the test. Here, we further explore this phenomenon and show that the color-contour interactions shown for afterimage colors also occur for "real" colors. We argue that similar mechanisms apply for both types of stimulation.

  4. Multiple-wavelength Color Digital Holography for Monochromatic Image Reconstruction

    NASA Astrophysics Data System (ADS)

    Cheremkhin, P. A.; Shevkunov, I. A.; Petrov, N. V.

    In this paper, we consider the opposite problem: using color digital holograms recorded simultaneously at several wavelengths to reconstruct monochromatic images. A special feature of reconstructing a monochromatic image from a color hologram is the need to extract information from separate spectral channels and overlay the resulting images so as to avoid mismatches in their spatial position, which are caused by the dependence of the numerical reconstruction methods on the laser wavelength.

  5. Refinement of Colored Mobile Mapping Data Using Intensity Images

    NASA Astrophysics Data System (ADS)

    Yamakawa, T.; Fukano, K.; Onodera, R.; Masuda, H.

    2016-06-01

    Mobile mapping systems (MMS) can capture dense point-clouds of urban scenes. For visualizing realistic scenes using point-clouds, RGB colors have to be added to them. To generate colored point-clouds in a post-process, each point is projected onto camera images and an RGB color is copied to the point at the projected position. However, incorrect colors are often added to point-clouds because of the misalignment of laser scanners, the calibration errors of cameras and laser scanners, or the failure of GPS acquisition. In this paper, we propose a new method to correct the RGB colors of point-clouds captured by a MMS. In our method, the RGB colors of a point-cloud are corrected by comparing intensity images and RGB images. However, since a MMS outputs sparse and anisotropic point-clouds, regular images cannot be obtained from the intensities of points. Therefore, we convert a point-cloud into a mesh model and project triangle faces onto image space, on which regular lattices are defined. Then we extract edge features from intensity images and RGB images, and detect their correspondences. In our experiments, our method worked very well for correcting the RGB colors of point-clouds captured by a MMS.
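    The point-coloring step described above can be sketched with a pinhole camera model: project a 3D point into the image and copy the RGB value at the projected pixel. The intrinsics below are hypothetical; the calibration and alignment errors the paper corrects are precisely errors in this mapping.

```python
import numpy as np

# Hypothetical camera intrinsics (focal lengths fx, fy and principal point cx, cy)
K = np.array([[500.0,   0.0, 32.0],
              [  0.0, 500.0, 32.0],
              [  0.0,   0.0,  1.0]])

def project(point_cam):
    """Camera-frame 3D point -> integer pixel coordinates (u, v)."""
    uvw = K @ point_cam
    return int(round(uvw[0] / uvw[2])), int(round(uvw[1] / uvw[2]))

image = np.zeros((64, 64, 3), dtype=np.uint8)
image[32, 37] = (200, 30, 30)                 # a red pixel at row 32, col 37

u, v = project(np.array([0.1, 0.0, 10.0]))    # u = 500*0.1/10 + 32 = 37
color = image[v, u]                           # note: row index is v, column is u
assert (u, v) == (37, 32) and tuple(color) == (200, 30, 30)
```

    A small error in K or in the scanner-to-camera pose shifts (u, v) and copies the wrong pixel's color, which is the failure mode the intensity-image comparison is designed to detect.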

  6. Color imaging technologies in the prepress industry

    NASA Astrophysics Data System (ADS)

    Silverman, Lee

    1992-05-01

    Over much of the last half century, electronic technologies have played an increasing role in the prepress production of film and plates prepared for printing presses. The last decade has seen an explosion of technologies capable of supplementing this production. The most outstanding technology fueling this growth has been the microcomputer, but other component technologies have also diversified the capacity for high-quality scanning of photographs. In addition, some fundamental software and affordable laser recorder technologies have provided new approaches to the merging of typographic and halftoned photographic data onto film. The next decade will evolve the methods and the technologies to achieve superior text and image communication on mass distribution media used in the printed page or instead of the printed page. This paper focuses on three domains of electronic prepress, classified as the input, transformation, and output phases of the production process. The evolution of the component technologies in each of these three phases is described. The unique attributes of each are defined, followed by a discussion of the pertinent technologies that overlap all three domains. Unique to input is sensor technology and analogue-to-digital conversion. Unique to the transformation phase is the display on a monitor for soft proofing and interactive processing. The display requires special technologies for digital frame storage and high-speed, gamma-compensated, digital-to-analogue conversion. Unique to output is the need for halftoning and binary recording device linearization or calibration. Specialized direct digital color technologies now allow color quality proofing without the need for writing intermediate separation films, but ultimately these technologies will be supplanted by direct printing technologies. First, dry film processing, then direct plate writing, and finally direct application of ink or toner onto paper at the 20 - 30 thousand impressions per

  7. Spatial imaging in color and HDR: prometheus unchained

    NASA Astrophysics Data System (ADS)

    McCann, John J.

    2013-03-01

    The Human Vision and Electronic Imaging Conferences (HVEI) at the IS&T/SPIE Electronic Imaging meetings have brought together research in the fundamentals of both vision and digital technology. This conference has incorporated many color disciplines that have contributed to the theory and practice of today's imaging: color constancy, models of vision, digital output, high-dynamic-range imaging, and the understanding of perceptual mechanisms. Before digital imaging, silver halide color was a pixel-based mechanism. Color films are closely tied to colorimetry, the science of matching pixels in a black surround. The quanta catch of the sensitized silver salts determines the amount of colored dyes in the final print. The rapid expansion of digital imaging over the past 25 years has eliminated the limitations of using small local regions in forming images. Spatial interactions can now generate images more like vision. Since the 1950s, neurophysiology has shown that post-receptor neural processing is based on spatial interactions. These results reinforced the findings of 19th century experimental psychology. This paper reviews the role of HVEI in color, emphasizing the interaction of research on vision and the new algorithms and processes made possible by electronic imaging.

  8. Nonlocal Mumford-Shah regularizers for color image restoration.

    PubMed

    Jung, Miyoun; Bresson, Xavier; Chan, Tony F; Vese, Luminita A

    2011-06-01

    We propose here a class of restoration algorithms for color images, based upon the Mumford-Shah (MS) model and nonlocal image information. The Ambrosio-Tortorelli and Shah elliptic approximations are defined to work in a small local neighborhood, which is sufficient to denoise smooth regions with sharp boundaries. However, texture is nonlocal in nature and requires semilocal/nonlocal information for efficient image denoising and restoration. Inspired by recent work (the nonlocal means of Buades, Coll, and Morel, and the nonlocal total variation of Gilboa and Osher), we extend the local Ambrosio-Tortorelli and Shah approximations to the MS functional to novel nonlocal formulations, for better restoration of fine structures and texture. We present several applications of the proposed nonlocal MS regularizers in image processing, such as color image denoising, color image deblurring in the presence of Gaussian or impulse noise, color image inpainting, color image super-resolution, and color filter array demosaicing. In all the applications, the proposed nonlocal regularizers produce superior results over the local ones, especially in image inpainting with large missing regions. We also prove several characterizations of minimizers based upon dual norm formulations.

  9. Nonlinearities and adaptation of color vision from sequential principal curves analysis.

    PubMed

    Laparra, Valero; Jiménez, Sandra; Camps-Valls, Gustavo; Malo, Jesús

    2012-10-01

    Mechanisms of human color vision are characterized by two phenomenological aspects: the system is nonlinear and adaptive to changing environments. Conventional attempts to derive these features from statistics use separate arguments for each aspect. The few statistical explanations that consider both phenomena simultaneously follow parametric formulations based on empirical models. Therefore, it may be argued that the behavior does not come directly from the color statistics but from the convenient functional form adopted. In addition, the statistical analysis is often based on simplified databases that disregard relevant physical effects in the input signal, for instance by assuming flat Lambertian surfaces. In this work, we address the simultaneous statistical explanation of the nonlinear behavior of achromatic and chromatic mechanisms in a fixed adaptation state and the change of such behavior (i.e., adaptation) under a change of observation conditions. Both phenomena emerge directly from the samples through a single data-driven method: sequential principal curves analysis (SPCA) with a local metric. SPCA is a new manifold learning technique that derives a set of sensors adapted to the manifold using different optimality criteria. Here, sequential refers to the fact that sensors (curvilinear dimensions) are designed one after the other, not to the particular (possibly iterative) method used to draw a single principal curve. Moreover, in order to reproduce the empirical adaptation reported under D65 and A illuminations, a new database of colorimetrically calibrated images of natural objects under these illuminants was gathered, overcoming the limitations of available databases. The results obtained by applying SPCA show that the psychophysical behavior on color discrimination thresholds, discount of the illuminant, and corresponding pairs in asymmetric color matching emerge directly from realistic data regularities, assuming no a priori

  10. Objective color classification of ecstasy tablets by hyperspectral imaging.

    PubMed

    Edelman, Gerda; Lopatka, Martin; Aalders, Maurice

    2013-07-01

    The general procedure followed in the examination of ecstasy tablets for profiling purposes includes a color description, which depends highly on the observer's perception. This study aims to provide objective quantitative color information using visible hyperspectral imaging. Both self-manufactured and illicit tablets, created with different amounts of known colorants, were analyzed. We derived reflectance spectra from hyperspectral images of these tablets, and successfully determined the most likely colorant used in the production of all self-manufactured tablets and four of the five illicit tablets studied. Upon classification, the concentration of the colorant was estimated using a photon propagation model and a single reference measurement of a tablet of known concentration. The estimated concentrations showed a high correlation with the actual values (R² = 0.9374). The achieved color information, combined with other physical and chemical characteristics, can provide a powerful tool for the comparison of tablet seizures, which may reveal their origin.

  11. Uniform color spaces and natural image statistics.

    PubMed

    McDermott, Kyle C; Webster, Michael A

    2012-02-01

    Many aspects of visual coding have been successfully predicted by starting from the statistics of natural scenes and then asking how the stimulus could be efficiently represented. We started from the representation of color characterized by uniform color spaces, and then asked what type of color environment they implied. These spaces are designed to represent equal perceptual differences in color discrimination or appearance by equal distances in the space. The relative sensitivity to different axes within the space might therefore reflect the gamut of colors in natural scenes. To examine this, we projected perceptually uniform distributions within the Munsell, CIE L*u*v* or CIE L*a*b* spaces into cone-opponent space. All were elongated along a bluish-yellowish axis reflecting covarying signals along the L-M and S-(L+M) cardinal axes, a pattern typical (though not identical) to many natural environments. In turn, color distributions from environments were more uniform when projected into the CIE L*a*b* perceptual space than when represented in a normalized cone-opponent space. These analyses suggest the bluish-yellowish bias in environmental colors might be an important factor shaping chromatic sensitivity, and also suggest that perceptually uniform color metrics could be derived from natural scene statistics and potentially tailored to specific environments.

  12. Uniform color spaces and natural image statistics

    PubMed Central

    McDermott, Kyle C.; Webster, Michael A.

    2011-01-01

    Many aspects of visual coding have been successfully predicted by starting from the statistics of natural scenes and then asking how the stimulus could be efficiently represented. We started from the representation of color characterized by uniform color spaces, and then asked what type of color environment they implied. These spaces are designed to represent equal perceptual differences in color discrimination or appearance by equal distances in the space. The relative sensitivity to different axes within the space might therefore reflect the gamut of colors in natural scenes. To examine this, we projected perceptually uniform distributions within the Munsell, CIE L*u*v* or CIE L*a*b* spaces into cone-opponent space. All were elongated along a bluish-yellowish axis reflecting covarying signals along the L-M and S-(L+M) cardinal axes, a pattern typical (though not identical) to many natural environments. In turn, color distributions from environments were more uniform when projected into the CIE L*a*b* perceptual space than when represented in a normalized cone-opponent space. These analyses suggest the bluish-yellowish bias in environmental colors might be an important factor shaping chromatic sensitivity, and also suggest that perceptually uniform color metrics could be derived from natural scene statistics and potentially tailored to specific environments. PMID:22330376

  13. Cutoff due to pointwise degradations in color images.

    PubMed

    Golts, Alex; Schechner, Yoav Y

    2014-12-01

    Many studies analyze resolution limits in single-channel, pan-chromatic systems. However, color imaging is popular. Thus, there is a need for its modeling in terms of resolving capacity under noise. This work analyzes the probability of resolving details as a function of spatial frequency in color imaging. The analysis introduces theoretical bounds for performance, using optimal linear filtering and fusion operations. The work focuses on resolution loss caused strictly by noise, without the presence of imaging blur. It applies to full-field color systems, which do not compromise resolution by spatial multiplexing. The framework allows us to assess and optimize the ability of an imaging system to distinguish an object of given size and color under image noise.

  14. Color image quality in projection displays: a case study

    NASA Astrophysics Data System (ADS)

    Strand, Monica; Hardeberg, Jon Y.; Nussbaum, Peter

    2005-01-01

    Recently the use of projection displays has increased dramatically in different applications such as digital cinema, home theatre, and business and educational presentations. Even though the color image quality of these devices has improved significantly over the years, it is still common for the projected colors to differ significantly from the intended ones. The study presented in this paper attempts to analyze the color image quality of a large set of projection display devices, particularly investigating the variations in color reproduction. As a case study, a set of 14 projectors (LCD and DLP technology) at Gjovik University College was tested under four different conditions: dark and light room, with and without using an ICC profile. To find out more about the importance of the illumination conditions in a room, and the degree of improvement when using an ICC profile, the results from the measurements were processed and analyzed. Eye-One Beamer from GretagMacbeth was used to make the profiles. The color image quality was evaluated both visually and by color difference calculations. The results from the analysis indicated large visual and colorimetric differences between the projectors. The DLP projectors tested generally have smaller color gamuts than the LCD projectors, and the color gamuts of older projectors are significantly smaller than those of newer ones. The amount of ambient light reaching the screen is of great importance for the visual impression: if too much reflected and other ambient light reaches the screen, the projected image becomes pale and low in contrast. When a profile is used, the differences in color between the projectors get smaller and the colors appear more correct. For one device, the average ΔE*ab color difference when compared to a relative white reference was reduced from 22 to 11; for another, from 13 to 6. Blue colors show the largest variations among the projection displays and make them

  15. A New Color Image of the Crab Nebula

    NASA Astrophysics Data System (ADS)

    Wainscoat, R. J.; Kormendy, J.

    1997-03-01

    A new color image of the Crab Nebula is presented. This is a 2782 × 1904 pixel mosaic of CCD frames taken through B (blue), V (green), and R (red) filters; it was carefully color balanced so that the Sun would appear white. The resolution of the final image is approximately 0.8 arcsec FWHM. The technique by which this image was constructed is described, and some aspects of the structure of the Crab Nebula revealed by the image are discussed. We also discuss the weaknesses of this technique for producing ``true-color'' images, and describe how our image would differ from what the human eye might see in a very large wide-field telescope. The structure of the inner part of the synchrotron nebula is compared with recent high-resolution images from the Hubble Space Telescope and from the Canada-France-Hawaii Telescope. (SECTION: Interstellar Medium and Nebulae)

  16. Improved color interpolation method based on Bayer image

    NASA Astrophysics Data System (ADS)

    Wang, Jin

    2012-10-01

    Image sensors are important components of lunar exploration devices. Considering volume and cost, image sensors generally adopt a single CCD or CMOS at the present time, and the surface of the sensor is covered with a layer of color filter array (CFA), usually a Bayer CFA. In a Bayer CFA, each pixel captures only one of the three primary colors, so color interpolation is necessary in order to obtain a full-color image. An improved Bayer image interpolation method is presented, which is novel, practical, and also easy to realize. The results of experiments demonstrating the effect of the interpolation are shown. Compared with classic methods, this method can find image edges more accurately, reduce the sawtooth phenomenon in edge areas, and keep the image smooth in other areas. This method has been applied successfully in a certain exploration imaging system.
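    As a concrete illustration of the interpolation step this abstract describes (not the paper's improved method), the following sketch performs plain bilinear demosaicing of a Bayer mosaic; the RGGB layout and function names are assumptions:

```python
import numpy as np

def demosaic_bilinear(raw):
    """Bilinear demosaicing for an assumed RGGB Bayer mosaic.
    raw: 2-D float array with even dimensions. Returns H x W x 3 RGB."""
    h, w = raw.shape
    # Masks marking where each color was actually sampled (RGGB pattern).
    r_mask = np.zeros((h, w), bool); r_mask[0::2, 0::2] = True
    b_mask = np.zeros((h, w), bool); b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)

    def interp(mask):
        # Average the sampled neighbors of this color in a 3x3 window.
        num = np.zeros((h, w)); den = np.zeros((h, w))
        padded_v = np.pad(np.where(mask, raw, 0.0), 1)
        padded_m = np.pad(mask.astype(float), 1)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                num += padded_v[1 + dy:h + 1 + dy, 1 + dx:w + 1 + dx]
                den += padded_m[1 + dy:h + 1 + dy, 1 + dx:w + 1 + dx]
        out = num / np.maximum(den, 1e-9)
        # Keep the measured value where it exists, interpolate elsewhere.
        return np.where(mask, raw, out)

    return np.dstack([interp(r_mask), interp(g_mask), interp(b_mask)])
```

    A uniform sensor reading demosaics to a uniform color image, while edges are where simple bilinear interpolation shows the sawtooth artifacts the paper aims to reduce.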

  17. Color Image Segmentation Based on Different Color Space Models Using Automatic GrabCut

    PubMed Central

    Ebied, Hala Mousher; Hussein, Ashraf Saad; Tolba, Mohamed Fahmy

    2014-01-01

    This paper presents a comparative study using different color spaces to evaluate the performance of color image segmentation using the automatic GrabCut technique. GrabCut is considered one of the semiautomatic image segmentation techniques, since it requires user interaction for the initialization of the segmentation process. The automation of the GrabCut technique is proposed as a modification of the original semiautomatic one in order to eliminate the user interaction. The automatic GrabCut utilizes the unsupervised Orchard and Bouman clustering technique for the initialization phase. Comparisons with the original GrabCut show the efficiency of the proposed automatic technique in terms of segmentation quality and accuracy. As no explicit color space is recommended for every segmentation problem, automatic GrabCut is applied with RGB, HSV, CMY, XYZ, and YUV color spaces. The comparative study and experimental results using different color images show that RGB color space is the best color space representation for the set of the images used. PMID:25254226

  18. Color camera computed tomography imaging spectrometer for improved spatial-spectral image accuracy

    NASA Technical Reports Server (NTRS)

    Wilson, Daniel W. (Inventor); Bearman, Gregory H. (Inventor); Johnson, William R. (Inventor)

    2011-01-01

    Computed tomography imaging spectrometers ("CTIS"s) having color focal plane array detectors are provided. The color FPA detector may comprise a digital color camera including a digital image sensor, such as a Foveon X3.RTM. digital image sensor or a Bayer color filter mosaic. In another embodiment, the CTIS includes a pattern imposed either directly on the object scene being imaged or at the field stop aperture. The use of a color FPA detector and the pattern improves the accuracy of the captured spatial and spectral information.

  19. Color image based sorter for separating red and white wheat

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A simple imaging system was developed to inspect and sort wheat samples and other grains at moderate feed-rates (30 kernels/s or 3.5 kg/h). A single camera captured color images of three sides of each kernel by using mirrors, and the images were processed using a personal computer (PC). The camer...

  20. Pixel classification based color image segmentation using quaternion exponent moments.

    PubMed

    Wang, Xiang-Yang; Wu, Zhi-Fang; Chen, Liang; Zheng, Hong-Liang; Yang, Hong-Ying

    2016-02-01

    Image segmentation remains an important, but hard-to-solve, problem since it appears to be application dependent, with usually no a priori information available regarding the image structure. In recent years, many image segmentation algorithms have been developed, but they are often very complex and some undesired results occur frequently. In this paper, we propose a pixel classification based color image segmentation using quaternion exponent moments. Firstly, the pixel-level image feature is extracted based on quaternion exponent moments (QEMs), which can effectively capture the image pixel content by considering the correlation between different color channels. Then, the pixel-level image feature is used as input to a twin support vector machines (TSVM) classifier, and the TSVM model is trained by selecting the training samples with Arimoto entropy thresholding. Finally, the color image is segmented with the trained TSVM model. The proposed scheme has the following advantages: (1) the effective QEMs are introduced to describe color image pixel content, considering the correlation between different color channels; (2) the excellent TSVM classifier is utilized, which has lower computation time and higher classification accuracy. Experimental results show that our proposed method has very promising segmentation performance compared with the state-of-the-art segmentation approaches recently proposed in the literature.

  1. Adaptive local linear regression with application to printer color management.

    PubMed

    Gupta, Maya R; Garcia, Eric K; Chin, Erika

    2008-06-01

    Local learning methods, such as local linear regression and nearest neighbor classifiers, base estimates on nearby training samples, neighbors. Usually, the number of neighbors used in estimation is fixed to be a global "optimal" value, chosen by cross validation. This paper proposes adapting the number of neighbors used for estimation to the local geometry of the data, without need for cross validation. The term enclosing neighborhood is introduced to describe a set of neighbors whose convex hull contains the test point when possible. It is proven that enclosing neighborhoods yield bounded estimation variance under some assumptions. Three such enclosing neighborhood definitions are presented: natural neighbors, natural neighbors inclusive, and enclosing k-NN. The effectiveness of these neighborhood definitions with local linear regression is tested for estimating lookup tables for color management. Significant improvements in error metrics are shown, indicating that enclosing neighborhoods may be a promising adaptive neighborhood definition for other local learning tasks as well, depending on the density of training samples.
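    For reference, here is a minimal fixed-k local linear regression in NumPy, i.e., the baseline whose neighborhood size the paper replaces with adaptive enclosing neighborhoods; the function name and the choice k=5 are illustrative:

```python
import numpy as np

def local_linear_predict(X, y, x_query, k=5):
    """Predict y at x_query by fitting an affine model to the k nearest
    training samples (fixed-k baseline; the paper's contribution is
    choosing the neighborhood adaptively instead of fixing k)."""
    dists = np.linalg.norm(X - x_query, axis=1)
    idx = np.argsort(dists)[:k]
    # Affine design matrix: the neighbor coordinates plus a constant column.
    A = np.column_stack([X[idx], np.ones(len(idx))])
    coef, *_ = np.linalg.lstsq(A, y[idx], rcond=None)
    return np.append(x_query, 1.0) @ coef
```

    On exactly linear data any neighborhood recovers the true function; the adaptive-neighborhood question only matters once the data (e.g., a printer's device-to-colorimetric lookup table) is nonlinear and unevenly sampled.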

  2. Color image digitization and analysis for drum inspection

    SciTech Connect

    Muller, R.C.; Armstrong, G.A.; Burks, B.L.; Kress, R.L.; Heckendorn, F.M.; Ward, C.R.

    1993-05-01

    A rust inspection system that uses color analysis to find rust spots on drums has been developed. The system is composed of high-resolution color video equipment that permits the inspection of rust spots on the order of 0.25 cm (0.1 in.) in diameter. Because of the modular nature of the system design and the use of open-systems software (X11, etc.), the inspection system can be easily integrated into other environmental restoration and waste management programs. The inspection system represents an excellent platform for the integration of other color inspection and color image processing algorithms.

  3. Photographic copy of computer enhanced color photographic image. Photographer and ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Photographic copy of computer enhanced color photographic image. Photographer and computer draftsman unknown. Original photographic image located in the office of Modjeski and Masters, Consulting Engineers at 1055 St. Charles Avenue, New Orleans, LA 70130. COMPUTER ENHANCED COLOR PHOTOGRAPH SHOWING THE PROPOSED HUEY P. LONG BRIDGE WIDENING LOOKING FROM THE WEST BANK TOWARD THE EAST BANK. - Huey P. Long Bridge, Spanning Mississippi River approximately midway between nine & twelve mile points upstream from & west of New Orleans, Jefferson, Jefferson Parish, LA

  4. A novel quantum representation of color digital images

    NASA Astrophysics Data System (ADS)

    Sang, Jianzhi; Wang, Shen; Li, Qiong

    2017-02-01

    In this paper, we propose a novel quantum representation of color digital images (NCQI) in quantum computer. The freshly proposed quantum image representation uses the basis state of a qubit sequence to store the RGB value of each pixel. All pixels are stored into a normalized superposition state and can be operated simultaneously. Comparison results with the latest multi-channel representation for quantum image reveal that NCQI can achieve a quadratic speedup in quantum image preparation. Meanwhile, some NCQI-based image processing operations are discussed. Analyses and comparisons demonstrate that many color operations can be executed conveniently based on NCQI. Therefore, the proposed NCQI model is more flexible and better suited to carry out color quantum image processing.

  5. Image evaluation using a color visual difference predictor (CVDP)

    NASA Astrophysics Data System (ADS)

    Lian, Ming-Shih

    2001-06-01

    In order to automate the image evaluation task, an engineering model for predicting the visual differences of color images is developed. The present CVDP consists of a color appearance model, a set of contrast sensitivity functions, the modified cortex transform, and a multichannel interaction model for masking effects. Based on a pixel-by-pixel difference metric similar to the CIELAB color difference, the predictions of the simplified CVDP are found to correlate fairly well with the psychophysical test results over 51 pairs of natural images, with some detection failures. These failures can be eliminated by including additional image quality metrics: the clarity in the shadow and highlight areas and the graininess in the mid-tone areas. The modified model is found to be able to identify 55 percent of those visually indistinguishable image pairs. The preliminary results using the complete CVDP for selected image pairs indicate that the effects of masking introduce only small changes to the results of the simplified CVDP.
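    The underlying pixel-by-pixel metric can be sketched directly; this is only a CIE76-style Euclidean difference over Lab images, not the full CVDP with its contrast sensitivity functions, cortex transform, and masking model:

```python
import numpy as np

def delta_e_map(lab1, lab2):
    """Per-pixel CIE76-style color difference between two Lab images
    (H x W x 3 arrays): the Euclidean distance over (L*, a*, b*)."""
    return np.sqrt(np.sum((lab1 - lab2) ** 2, axis=-1))
```

    Summarizing this map (mean or maximum ΔE) gives the simple difference score that the simplified CVDP refines with visual-system modeling.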

  6. a New Color Correction Method for Underwater Imaging

    NASA Astrophysics Data System (ADS)

    Bianco, G.; Muzzupappa, M.; Bruno, F.; Garcia, R.; Neumann, L.

    2015-04-01

    Recovering correct or at least realistic colors of underwater scenes is a very challenging issue for imaging techniques, since illumination conditions in a refractive and turbid medium such as the sea are seriously altered. The need to correct the colors of underwater images or videos is an important task required in all image-based applications like 3D imaging, navigation, documentation, etc. Many image enhancement methods have been proposed in the literature for these purposes. The advantage of these methods is that they do not require knowledge of the medium's physical parameters, while some image adjustments can be performed manually (such as histogram stretching) or automatically, by algorithms based on criteria suggested by computational color constancy methods. One of the most popular criteria is the gray-world hypothesis, which assumes that the average of the captured image should be gray. An interesting application of this assumption is performed in the Ruderman opponent color space lαβ, used in a previous work for hue correction of images captured under colored light sources, which allows the luminance component of the scene to be separated from its chromatic components. In this work, we present the first proposal for color correction of underwater images using the lαβ color space. In particular, the chromatic components are changed by moving their distributions around the white point (white balancing), and histogram cutoff and stretching of the luminance component are performed to improve image contrast. The experimental results demonstrate the effectiveness of this method under the gray-world assumption and supposing uniform illumination of the scene. Moreover, due to its low computational cost it is suitable for real-time implementation.
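    A minimal sketch of the gray-world step, applied in plain RGB rather than the paper's lαβ space (the color-space conversion is omitted to keep it short); the clipping range assumes image data in [0, 1]:

```python
import numpy as np

def gray_world(img):
    """Gray-world white balance: scale each channel so its mean equals
    the grand mean, pushing the image average toward gray. The paper
    applies the same assumption to the chromatic components of the
    Ruderman l-alpha-beta space. img: H x W x 3 float array in [0, 1]."""
    means = img.reshape(-1, 3).mean(axis=0)
    gains = means.mean() / np.maximum(means, 1e-9)
    return np.clip(img * gains, 0.0, 1.0)
```

    For underwater footage this lifts the attenuated red channel and suppresses the dominant blue-green cast, at the cost of failing on scenes whose true average is not gray.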

  7. Applying innovative stripes adaptive detection to three-dimensional measurement of color fringe profilometry

    NASA Astrophysics Data System (ADS)

    Jeffrey Kuo, Chung-Feng; Chang, Alvin; Joseph Kuo, Ping-Chen; Lee, Chi-Lung; Wu, Han-Cheng

    2016-12-01

    This study developed a 3D software and hardware measurement system, and proposes an innovative stripes adaptive detection algorithm. The fringe intensity is regulated automatically according to the reflection coefficient of different analytes, in order to avoid overexposure. For the measurement of objects whose height changes discontinuously, a novel intensity difference coding unwrapping phase technology is used, thus overcoming the technological bottleneck of traditional phase unwrapping. In order to increase the measurement efficiency, the stripe pattern is combined with an intensity coding pattern through three-channel color information, in order to generate an adaptive compound color stripe pattern. The measurement efficiency is increased by approximately two times compared with the traditional gray stripe pattern. In order to increase the measurement accuracy, the uneven brightness is corrected by using a brightness gain function, the three-channel intensity nonlinear response is corrected by a cubic spline interpolation system response inverse function, and the three-channel image is corrected by color cross-talk correction technology. The experiments showed that the system repeatability is 20 μm. The traditional phase-shifting profilometry is improved successfully, overcoming the technical bottleneck of measuring discontinuous changes in analyte height, so as to attain low cost and high measurement accuracy, efficiency, and reliability.

  8. Comparison of perceptual color spaces for natural image segmentation tasks

    NASA Astrophysics Data System (ADS)

    Correa-Tome, Fernando E.; Sanchez-Yanez, Raul E.; Ayala-Ramirez, Victor

    2011-11-01

    Color image segmentation largely depends on the color space chosen. Furthermore, spaces that show perceptual uniformity seem to outperform others due to their emulation of the human perception of color. We evaluate three perceptual color spaces, CIELAB, CIELUV, and RLAB, in order to determine their contribution to natural image segmentation and to identify the space that obtains the best results over a test set of images. The nonperceptual color space RGB is also included for reference purposes. In order to quantify the quality of resulting segmentations, an empirical discrepancy evaluation methodology is discussed. The Berkeley Segmentation Dataset and Benchmark is used in test series, and two approaches are taken to perform the experiments: supervised pixelwise classification using reference colors, and unsupervised clustering using k-means. A majority filter is used as a postprocessing stage, in order to determine its contribution to the result. Furthermore, a comparison of elapsed times taken by the required transformations is included. The main finding of our study is that the CIELUV color space outperforms the other color spaces in both discriminatory performance and computational speed, for the average case.
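    The unsupervised clustering approach used in the study's experiments can be sketched with a bare-bones k-means on per-pixel color vectors; the conversion from RGB to CIELUV/CIELAB/RLAB (the study's actual inputs) is omitted, and the norm-sorted initialization is an assumption made for determinism:

```python
import numpy as np

def kmeans_segment(pixels, k=2, iters=20):
    """Bare-bones k-means on per-pixel color vectors (N x 3 array).
    Centers are seeded from pixels spread along the sorted color norms,
    a deterministic stand-in for random initialization."""
    pixels = np.asarray(pixels, float)
    order = np.argsort(np.linalg.norm(pixels, axis=1))
    seeds = order[np.linspace(0, len(pixels) - 1, k).astype(int)]
    centers = pixels[seeds].copy()
    for _ in range(iters):
        # Assign each pixel to its nearest center, then recompute centers.
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    return labels, centers
```

    Because k-means clusters by Euclidean distance, running it in a perceptually uniform space (where equal distances approximate equal perceived differences) is exactly what the study's comparison probes.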

  9. Stereoscopic high-speed imaging using additive colors

    PubMed Central

    Sankin, Georgy N.; Piech, David; Zhong, Pei

    2012-01-01

    An experimental system for digital stereoscopic imaging produced by using a high-speed color camera is described. Two bright-field image projections of a three-dimensional object are captured utilizing additive-color backlighting (blue and red). The two images are simultaneously combined on a two-dimensional image sensor using a set of dichromatic mirrors, and stored for off-line separation of each projection. This method has been demonstrated in analyzing cavitation bubble dynamics near boundaries. This technique may be useful for flow visualization and in machine vision applications. PMID:22559533
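    In digital form, the additive-color combination and offline separation reduce to packing each projection into one color channel; a minimal sketch (the described system does the combination optically with dichromatic mirrors, not in software):

```python
import numpy as np

def encode_stereo(view_red, view_blue):
    """Pack two grayscale projections into one RGB frame via additive
    color (red and blue backlights), mimicking the optical combination."""
    h, w = view_red.shape
    frame = np.zeros((h, w, 3))
    frame[..., 0] = view_red   # red channel carries one projection
    frame[..., 2] = view_blue  # blue channel carries the other
    return frame

def decode_stereo(frame):
    """Offline separation: each projection is just one color channel."""
    return frame[..., 0], frame[..., 2]
```

    The green channel stays unused, which is the price of keeping the two views separable without crosstalk on a single sensor.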

  10. First Color Image From Viking Lander 1

    NASA Technical Reports Server (NTRS)

    1976-01-01

    This color picture of Mars was taken July 21--the day following Viking 1's successful landing on the planet. The local time on Mars is approximately noon. The view is southeast from the Viking. Orange-red surface materials cover most of the surface, apparently forming a thin veneer over darker bedrock exposed in patches, as in the lower right. The reddish surface materials may be limonite (hydrated ferric oxide). Such weathering products form on Earth in the presence of water and an oxidizing atmosphere. The sky has a reddish cast, probably due to scattering and reflection from reddish sediment suspended in the lower atmosphere. The scene was scanned three times by the spacecraft's camera number 2, through a different color filter each time. To assist in balancing the colors, a second picture was taken of a test chart mounted on the rear of the spacecraft. Color data for these patches were adjusted until the patches were an appropriate color of gray. The same calibration was then used for the entire scene.
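    The chart-based balancing step described above amounts to per-channel gains chosen so a known neutral patch comes out gray; a simplified sketch (the actual Viking processing pipeline was more involved), with the function name and `target` level as assumptions:

```python
import numpy as np

def calibrate_to_gray(img, patch_rgb, target=0.5):
    """Per-channel gains so a test-chart patch of known neutral gray
    actually maps to gray. The same gains are then applied to the whole
    scene, as in the Viking chart calibration. Data assumed in [0, 1]."""
    gains = target / np.maximum(np.asarray(patch_rgb, float), 1e-9)
    return np.clip(img * gains, 0.0, 1.0)
```

    Any color cast measured on the chart patch is divided out of every pixel, so regions that truly are neutral in the scene render as gray.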

  11. Color calibration of swine gastrointestinal tract images acquired by radial imaging capsule endoscope

    NASA Astrophysics Data System (ADS)

    Ou-Yang, Mang; Jeng, Wei-De; Lai, Chien-Cheng; Wu, Hsien-Ming; Lin, Jyh-Hung

    2016-01-01

    The types of illumination systems and color filters used typically generate varying levels of color difference in capsule endoscopes, which influences medical diagnoses. In order to calibrate the color difference caused by the optical system, this study applied a radial imaging capsule endoscope (RICE) to photograph standard color charts, which were then employed to calculate the color gamut of RICE. The color gamut was also measured using a spectrometer in order to obtain high-precision color information, and the results obtained using both methods were compared. Subsequently, color-correction methods, namely polynomial transform and conformal mapping, were used to reduce the color difference. Before color calibration, the color difference value caused by the influence of the optical system in RICE was 21.45±1.09. Through the proposed polynomial transformation, the color difference could be reduced effectively to 1.53±0.07. With the proposed conformal mapping, the color difference value was further reduced to 1.32±0.11; a color difference below 1.5 is imperceptible to the human eye. Real-time color correction was then achieved using this algorithm combined with a field-programmable gate array, and the results of the color correction can be viewed in real-time images.
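    A generic least-squares version of the polynomial-transform idea can be sketched as follows; the degree-2 term set and function names are assumptions, not the paper's exact formulation:

```python
import numpy as np

def _poly_design(rgb):
    """Degree-2 polynomial basis over RGB: constant, linear, cross,
    and squared terms (an assumed term set for illustration)."""
    r, g, b = np.atleast_2d(rgb).T
    return np.column_stack([np.ones_like(r), r, g, b,
                            r * g, r * b, g * b, r * r, g * g, b * b])

def fit_color_correction(measured, reference):
    """Least-squares fit of a polynomial transform (per output channel)
    mapping colors measured through the optics to chart reference values."""
    coef, *_ = np.linalg.lstsq(_poly_design(measured), reference, rcond=None)
    return coef

def apply_color_correction(coef, rgb):
    """Apply the fitted transform to measured colors (N x 3 or a triple)."""
    return _poly_design(rgb) @ coef
```

    Fitting against a standard chart and then applying the same coefficients to every frame is what makes the method cheap enough for the FPGA real-time implementation mentioned above.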

  12. Complex adaptation-based LDR image rendering for 3D image reconstruction

    NASA Astrophysics Data System (ADS)

    Lee, Sung-Hak; Kwon, Hyuk-Ju; Sohng, Kyu-Ik

    2014-07-01

    A low-dynamic-range tone-compression technique is developed for realistic image rendering that can make three-dimensional (3D) images similar to realistic scenes by overcoming brightness dimming in the 3D display mode. The 3D surround provides varying conditions for image quality: illuminant adaptation, contrast, gamma, color, sharpness, and so on. In general, gain/offset adjustment, gamma compensation, and histogram equalization have performed well in contrast compression; however, as a result of signal saturation and clipping effects, image details are removed and information is lost in bright and dark areas. Thus, an enhanced image mapping technique is proposed based on space-varying image compression. The performance of contrast compression is enhanced with complex adaptation in a 3D viewing surround, combining global and local adaptation. Evaluating local image rendering in view of tone and color expression, noise reduction, and edge compensation confirms that the proposed 3D image-mapping model can compensate for the loss of image quality in the 3D mode.

  13. Color calculations for and perceptual assessment of computer graphic images

    SciTech Connect

    Meyer, G.W.

    1986-01-01

    Realistic image synthesis involves the modelling of an environment in accordance with the laws of physics and the production of a final simulation that is perceptually acceptable. To be considered a scientific endeavor, synthetic image generation should also include the final step of experimental verification. This thesis concentrates on the color calculations that are inherent in the production of the final simulation and on the perceptual assessment of the computer graphic images that result. The fundamental spectral sensitivity functions that are active in the human visual system are introduced and are used to address color-blindness issues in computer graphics. A digitally controlled color television monitor is employed to successfully implement both the Farnsworth-Munsell 100 Hue test and a new color vision test that yields more accurate diagnoses. Images that simulate color-blind vision are synthesized and are used to evaluate color scales for data display. Gaussian quadrature is used with a set of opponent fundamentals to select the wavelengths at which to perform synthetic image generation.

  14. Full-color holographic 3D imaging system using color optical scanning holography

    NASA Astrophysics Data System (ADS)

    Kim, Hayan; Kim, You Seok; Kim, Taegeun

    2016-06-01

    We propose a full-color holographic three-dimensional imaging system that comprises a recording stage, a transmission and processing stage, and a reconstruction stage. In the recording stage, color optical scanning holography (OSH) records the complex RGB holograms of an object. In the transmission and processing stage, the recorded complex RGB holograms are transmitted to the reconstruction stage after conversion to off-axis RGB holograms. In the reconstruction stage, the off-axis RGB holograms are reconstructed optically.

  15. Adaptive Ambient Illumination Based on Color Harmony Model

    NASA Astrophysics Data System (ADS)

    Kikuchi, Ayano; Hirai, Keita; Nakaguchi, Toshiya; Tsumura, Norimichi; Miyake, Yoichi

    We investigated the relationship between ambient illumination and psychological effect by applying a modified color harmony model. We verified the proposed model by analyzing correlation between psychological value and modified color harmony score. Experimental results showed the possibility to obtain the best color for illumination using this model.

  16. Comparison of two SVD-based color image compression schemes.

    PubMed

    Li, Ying; Wei, Musheng; Zhang, Fengxia; Zhao, Jianli

    2017-01-01

    Color image compression is a commonly used process to represent image data with as few bits as possible, removing redundancy in the data while maintaining an appropriate level of quality for the user. Color image compression algorithms based on quaternions have become very common in recent years. In this paper, we propose a color image compression scheme based on the real SVD, named the real compression scheme. First, we form a new real rectangular matrix C according to the red, green and blue components of the original color image and perform the real SVD on C. Then we select the several largest singular values and the corresponding vectors in the left and right unitary matrices to compress the color image. We compare the real compression scheme with the quaternion compression scheme, implemented by performing the quaternion SVD using the real structure-preserving algorithm. We compare the two schemes in terms of operation amount, assignment number, operation speed, PSNR and CR. The experimental results show that with the same number of selected singular values, the real compression scheme offers higher CR and much less operation time, but a slightly smaller PSNR than the quaternion compression scheme. When the two schemes have the same CR, the real compression scheme shows more prominent advantages in both operation time and PSNR.
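    The truncation step of the real scheme can be sketched with NumPy; how the real matrix C is assembled from the R, G, B components is an assumption here (the channels are stacked vertically into one tall real matrix):

```python
import numpy as np

def svd_truncate(mat, k):
    """Keep only the k largest singular values of a real matrix."""
    U, s, Vt = np.linalg.svd(mat, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k]

def compress_rgb_stacked(img, k):
    """Form a real matrix C by stacking the R, G, B channels vertically
    (an assumed construction), truncate its SVD, and reassemble."""
    h, w, _ = img.shape
    C = img.transpose(2, 0, 1).reshape(3 * h, w)
    Ck = svd_truncate(C, k)
    return Ck.reshape(3, h, w).transpose(1, 2, 0)
```

    Only U[:, :k], the k singular values, and Vt[:k] need to be stored, which is where the compression ratio comes from; larger k trades ratio for PSNR.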

  18. Underwater color image segmentation method via RGB channel fusion

    NASA Astrophysics Data System (ADS)

    Xuan, Li; Mingjun, Zhang

    2017-02-01

    To address the low segmentation accuracy and high computation time of existing segmentation methods when applied to underwater color images, this paper proposes an underwater color image segmentation method via RGB color channel fusion. Building on thresholding segmentation methods to conduct fast segmentation, the proposed method relies on dynamic estimation of the optimal weights for RGB channel fusion to obtain a grayscale image with high foreground-background contrast, and thereby reaches high segmentation accuracy. To verify the segmentation accuracy of the proposed method, the authors have conducted various underwater comparative experiments. The experimental results demonstrate that the proposed method is robust to illumination, and it is superior to existing methods in terms of both segmentation accuracy and computation time. Moreover, a segmentation technique for image sequences is proposed for real-time autonomous underwater vehicle operations.
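    A fixed-weight sketch of the fusion-then-threshold pipeline (the paper's contribution is estimating the fusion weights dynamically per image, which is not reproduced here); Otsu's method is implemented directly for self-containment and assumes data in [0, 1]:

```python
import numpy as np

def otsu_threshold(gray, bins=256):
    """Otsu's threshold on a [0, 1] grayscale image: pick the cut that
    maximizes between-class variance of the histogram."""
    hist, edges = np.histogram(gray, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    omega = np.cumsum(p)              # class-0 probability up to each bin
    mu = np.cumsum(p * edges[:-1])    # class-0 cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    # Threshold at the upper edge of the best class-0 bin.
    return edges[np.nanargmax(sigma_b) + 1]

def fuse_and_segment(img, weights):
    """Weighted RGB fusion to grayscale, then Otsu thresholding."""
    w = np.asarray(weights, float)
    gray = img @ (w / w.sum())
    return gray >= otsu_threshold(gray)
```

    Choosing the weights to maximize foreground-background contrast in the fused grayscale image is exactly the degree of freedom the paper optimizes.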

  19. False-color composite image of Raco, Michigan

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This image is a false color composite of Raco, Michigan, centered at 46.39 north latitude and 84.88 west longitude. This image was acquired by the Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar (SIR-C/X-SAR) on the 20th orbit of the Shuttle Endeavour. The area shown is approximately 20 kilometers by 50 kilometers. Raco is located at the eastern end of Michigan's upper peninsula, west of Sault Ste. Marie and south of Whitefish Bay on Lake Superior. In this color representation, darker areas in the image are smooth surfaces such as frozen lakes and other non-forested areas. The colors are related to the types of trees, and the brightness is related to the amount of plant material covering the surface, called forest biomass. The Jet Propulsion Laboratory alternative photo number is P-43882.

  20. Multiple color-image authentication system using HSI color space and QR decomposition in gyrator domains

    NASA Astrophysics Data System (ADS)

    Rafiq Abuturab, Muhammad

    2016-06-01

    A new multiple color-image authentication system based on the HSI (Hue-Saturation-Intensity) color space and QR decomposition in gyrator domains is proposed. In this scheme, original color images are converted from RGB (Red-Green-Blue) color space to HSI color space and divided into their H, S, and I components, from which the corresponding phase-encoded components are obtained. All the phase-encoded H, S, and I components are individually multiplied, and then modulated by random phase functions. The modulated H, S, and I components are convoluted into a single gray image with an asymmetric cryptosystem. The resulting image is segregated into Q and R parts by QR decomposition. Finally, they are independently gyrator transformed to get their encoded parts. The encoded Q and R parts must both be available for decryption. The angles of the gyrator transform serve as sensitive keys. The protocol, based on QR decomposition of the encoded matrix and recovery of the decoded matrix by multiplying the matrices Q and R, enhances the security level. The random phase keys, individual phase keys, and asymmetric phase keys provide high robustness to the cryptosystem. Numerical simulation results demonstrate that this scheme is superior to the existing techniques.

  1. Shadow detection in color aerial images based on HSI space and color attenuation relationship

    NASA Astrophysics Data System (ADS)

    Shi, Wenxuan; Li, Jie

    2012-12-01

    Many problems in image processing and computer vision arise from shadows in a single color aerial image. This article presents a new algorithm by which shadows are extracted from a single color aerial image. In addition to the ratio of hue over intensity used in some state-of-the-art algorithms, this article introduces another ratio map, obtained by dividing the saturation by the intensity. Candidate shadow and nonshadow regions are separated by applying Otsu's thresholding method. The color attenuation relationship, which describes how the attenuation of each color channel relates to the others, is derived from Planck's blackbody irradiance law. For each region, the color attenuation relationship and other determination conditions are applied iteratively to segment it into smaller sub-regions and to identify whether each sub-region is a true shadow region. Compared with previous methods, the proposed algorithm achieves better shadow detection accuracy in images that contain dark green lawns, rivers, or low-brightness shadow regions. The experimental results demonstrate the advantage of the proposed algorithm.
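
    The Otsu thresholding step on a saturation-over-intensity ratio map can be sketched as follows (synthetic data; the iterative sub-region analysis with the color attenuation relationship is not shown):

```python
import numpy as np

def otsu_threshold(values, nbins=256):
    """Otsu's method: pick the threshold maximizing between-class variance."""
    hist, edges = np.histogram(values, bins=nbins)
    p = hist / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                # class-0 weight at each candidate cut
    mu = np.cumsum(p * centers)      # cumulative mean
    mu_t = mu[-1]
    w1 = 1.0 - w0
    valid = (w0 > 0) & (w1 > 0)
    between = np.zeros(nbins)
    between[valid] = (mu_t * w0[valid] - mu[valid]) ** 2 / (w0[valid] * w1[valid])
    return centers[np.argmax(between)]

rng = np.random.default_rng(1)
saturation = rng.random((64, 64))
# Top half simulates dark (shadow-like) intensities, bottom half bright ones.
intensity = np.concatenate([rng.uniform(0.05, 0.3, (32, 64)),
                            rng.uniform(0.6, 1.0, (32, 64))])
ratio_map = saturation / (intensity + 1e-6)   # S over I, high in shadows
t = otsu_threshold(ratio_map.ravel())
candidate_shadow = ratio_map > t
```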

  2. Color image reproduction based on multispectral and multiprimary imaging: experimental evaluation

    NASA Astrophysics Data System (ADS)

    Yamaguchi, Masahiro; Teraji, Taishi; Ohsawa, Kenro; Uchiyama, Toshio; Motomura, Hideto; Murakami, Yuri; Ohyama, Nagaaki

    2001-12-01

    Multispectral imaging is a significant technology for the acquisition and display of accurate color information. Natural color reproduction under arbitrary illumination becomes possible using spectral information of both the image and the illumination light. In addition, multiprimary color display, i.e., using more than three primary colors, has also been developed for reproducing an expanded color gamut and for discounting observer metamerism. In this paper, we present the concept of multispectral data interchange for natural color reproduction, and experimental results using a 16-band multispectral camera and a 6-primary color display. In the experiment, the accuracy of color reproduction is evaluated in CIE ΔE*ab for both the image capture and display systems. The average and maximum ΔE*ab were 1.0 and 2.1 for the 16-band multispectral camera system, using the Macbeth 24 color patches. For the six-primary color projection display, the average and maximum ΔE*ab were 1.3 and 2.7 with 30 test colors inside the display gamut. Moreover, color reproduction results with different spectral distributions but the same CIE tristimulus values are visually compared, and it is confirmed that the 6-primary display gives improved agreement between the original and reproduced colors.
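
    The ΔE*ab figures quoted above are Euclidean distances in CIELAB; a minimal sketch with hypothetical patch values:

```python
import numpy as np

def delta_e_ab(lab1, lab2):
    """CIE 1976 color difference Delta E*ab between Lab triples."""
    return np.linalg.norm(np.asarray(lab1, float) - np.asarray(lab2, float), axis=-1)

# Reference vs. reproduced Lab values for a few hypothetical patches.
reference  = np.array([[52.0, 10.0, -5.0], [70.0, -20.0, 30.0], [30.0, 0.0, 0.0]])
reproduced = np.array([[52.5, 10.5, -5.5], [70.0, -19.0, 31.0], [31.0, 1.0, -1.0]])
errors = delta_e_ab(reference, reproduced)
avg_err, max_err = errors.mean(), errors.max()
```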

  3. Color constancy in natural scenes explained by global image statistics.

    PubMed

    Foster, David H; Amano, Kinjiro; Nascimento, Sérgio M C

    2006-01-01

    To what extent do observers' judgments of surface color with natural scenes depend on global image statistics? To address this question, a psychophysical experiment was performed in which images of natural scenes under two successive daylights were presented on a computer-controlled high-resolution color monitor. Observers reported whether there was a change in reflectance of a test surface in the scene. The scenes were obtained with a hyperspectral imaging system and included variously trees, shrubs, grasses, ferns, flowers, rocks, and buildings. Discrimination performance, quantified on a scale of 0 to 1 with a color-constancy index, varied from 0.69 to 0.97 over 21 scenes and two illuminant changes, from a correlated color temperature of 25,000 K to 6700 K and from 4000 K to 6700 K. The best account of these effects was provided by receptor-based rather than colorimetric properties of the images. Thus, in a linear regression, 43% of the variance in constancy index was explained by the log of the mean relative deviation in spatial cone-excitation ratios evaluated globally across the two images of a scene. A further 20% was explained by including the mean chroma of the first image and its difference from that of the second image and a further 7% by the mean difference in hue. Together, all four global color properties accounted for 70% of the variance and provided a good fit to the effects of scene and of illuminant change on color constancy, and, additionally, of changing test-surface position. By contrast, a spatial-frequency analysis of the images showed that the gradient of the luminance amplitude spectrum accounted for only 5% of the variance.
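
    The key statistic, the mean relative deviation of spatial cone-excitation ratios across an illuminant change, can be sketched on synthetic cone data (all values are illustrative, not from the study):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
cones_1 = rng.uniform(0.1, 1.0, (n, 3))      # L, M, S excitations, illuminant 1
scale = np.array([1.2, 1.0, 0.8])            # an approximately diagonal illuminant change
cones_2 = cones_1 * scale * (1 + 0.01 * rng.standard_normal((n, 3)))

# Ratios between random surface pairs, per cone class, under each illuminant.
i, j = rng.integers(0, n, 1000), rng.integers(0, n, 1000)
r1 = cones_1[i] / cones_1[j]
r2 = cones_2[i] / cones_2[j]

# Mean relative deviation of the spatial ratios; it is near zero under diagonal
# illuminant changes, which is what supports color constancy.
mean_rel_dev = np.mean(np.abs(r1 - r2) / r1)
statistic = np.log(mean_rel_dev)   # the regressor used in the study
```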

  4. Wavelength-adaptive dehazing using histogram merging-based classification for UAV images.

    PubMed

    Yoon, Inhye; Jeong, Seokhwa; Jeong, Jaeheon; Seo, Doochun; Paik, Joonki

    2015-03-19

    Since incoming light to an unmanned aerial vehicle (UAV) platform can be scattered by haze and dust in the atmosphere, the acquired image loses the original color and brightness of the subject. Enhancement of hazy images is an important task in improving the visibility of various UAV images. This paper presents a spatially-adaptive dehazing algorithm that merges color histograms with consideration of the wavelength-dependent atmospheric turbidity. Based on the wavelength-adaptive hazy image acquisition model, the proposed dehazing algorithm consists of three steps: (i) image segmentation based on geometric classes; (ii) generation of the context-adaptive transmission map; and (iii) intensity transformation for enhancing a hazy UAV image. The major contribution of the research is a novel hazy UAV image degradation model that considers the wavelength of the light sources. In addition, the proposed transmission map provides a theoretical basis for differentiating visually important regions from others based on the turbidity and the merged classification results.
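
    The standard haze formation model and its inversion by a transmission map can be sketched as follows, with per-channel transmission standing in for the wavelength dependence (all values synthetic):

```python
import numpy as np

rng = np.random.default_rng(3)
J = rng.random((16, 16, 3))            # latent haze-free scene
A = np.array([0.9, 0.92, 0.95])        # airlight, per channel
# Wavelength-dependent turbidity: shorter wavelengths scatter more, so
# transmission can differ per channel (values here are illustrative).
t = np.array([0.7, 0.6, 0.5])

I = J * t + A * (1.0 - t)              # hazy observation

# Dehazing: invert the model, with a lower bound on t for numerical stability.
t_safe = np.maximum(t, 0.1)
J_hat = (I - A) / t_safe + A
```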

  5. Peripheral visual response time to colored stimuli imaged on the horizontal meridian

    NASA Technical Reports Server (NTRS)

    Haines, R. F.; Gross, M. M.; Nylen, D.; Dawson, L. M.

    1974-01-01

    Two male observers were administered a binocular visual response time task to small (45 min arc), flashed, photopic stimuli at four dominant wavelengths (632 nm red; 583 nm yellow; 526 nm green; 464 nm blue) imaged across the horizontal retinal meridian. The stimuli were imaged at 10 deg arc intervals from 80 deg left to 90 deg right of fixation. Testing followed either prior light adaptation or prior dark adaptation. Results indicated that mean response time (RT) varies with stimulus color. RT is faster to yellow than to blue and green and slowest to red. In general, mean RT was found to increase from fovea to periphery for all four colors, with the curve for red stimuli exhibiting the most rapid positive acceleration with increasing angular eccentricity from the fovea. The shape of the RT distribution across the retina was also found to depend upon the state of light or dark adaptation. The findings are related to previous RT research and are discussed in terms of optimizing the color and position of colored displays on instrument panels.

  6. Color impact in visual attention deployment considering emotional images

    NASA Astrophysics Data System (ADS)

    Chamaret, C.

    2012-03-01

    Color is a predominant factor in the human visual attention system. Even if it is not sufficient for a global or complete understanding of a scene, it may affect the deployment of visual attention. We propose to study the impact of color, as well as the emotional aspect of pictures, on the deployment of visual attention. An eye-tracking campaign was conducted in which twenty people viewed a database of pictures, half in full color and half in grayscale. The eye fixations on color and black-and-white images were highly correlated, raising the question of whether such cues should be integrated into the design of visual attention models. Indeed, the predictions of two state-of-the-art computational models show similar results for the two color categories. Similarly, the study of saccade amplitude and fixation duration versus viewing time did not reveal any significant differences between the two categories. In addition, the spatial coordinates of eye fixations provide an interesting indicator for investigating differences in visual attention deployment over time and fixation number. The second factor, related to emotion categories, shows evidence of inter-category differences between color and grayscale eye fixations for passive and positive emotions. The particular aspect associated with this category induces a specific behavior, based rather on high frequencies, in which the color components influence the deployment of visual attention.

  7. Hyperspectral image super-resolution: a hybrid color mapping approach

    NASA Astrophysics Data System (ADS)

    Zhou, Jin; Kwan, Chiman; Budavari, Bence

    2016-07-01

    NASA has been planning a hyperspectral infrared imager mission that will provide global coverage using a hyperspectral imager with 60-m resolution. In some practical applications, such as special crop monitoring or mineral mapping, 60-m resolution may still be too coarse. Many pansharpening algorithms sharpen hyperspectral images by fusing high-resolution (HR) panchromatic or multispectral images with low-resolution (LR) hyperspectral images. We propose an approach to generating HR hyperspectral images by fusing high spatial resolution color images with low spatial resolution hyperspectral images. The idea, called hybrid color mapping (HCM), involves a mapping between a high spatial resolution color image and a low spatial resolution hyperspectral image. Several variants of the color mapping idea, including global, local, and hybrid, are proposed and investigated. The local HCM was found to yield the best performance. A comparison of the local HCM with more than 10 state-of-the-art algorithms using five performance metrics has been carried out using actual images from the Air Force and NASA. Although our HCM method does not require a point spread function (PSF), our results are comparable to or better than those of methods that do require a PSF. More importantly, our performance is better than most if not all methods that do not require a PSF. After applying our HCM algorithm, not only is the visual quality of the hyperspectral image significantly improved, but the target classification performance is also improved. Another advantage of our technique is that it is very efficient and can be easily parallelized, making it well suited for real-time applications.
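
    The core of color mapping, a learned linear map from color pixels (plus a bias term) to hyperspectral bands, might be sketched as follows on synthetic data (this is the global variant; the local/hybrid variants apply the same fit per region):

```python
import numpy as np

rng = np.random.default_rng(4)
n_pix, n_bands = 400, 30
# Simulated co-registered low-resolution hyperspectral and color pixels.
hsi = rng.random((n_pix, n_bands))
B = rng.random((3, n_bands))               # spectral response used only for simulation
rgb = hsi @ B.T                            # color pixels derived from the spectra

# Learn a linear map (with bias) from color to the hyperspectral bands; the
# same map is then applied to the high-resolution color image.
X = np.hstack([rgb, np.ones((n_pix, 1))])  # bias column
T, *_ = np.linalg.lstsq(X, hsi, rcond=None)
hsi_pred = X @ T                           # predicted spectra
```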

  8. Local adaptation for body color in Drosophila americana

    PubMed Central

    Wittkopp, P J; Smith-Winberry, G; Arnold, L L; Thompson, E M; Cooley, A M; Yuan, D C; Song, Q; McAllister, B F

    2011-01-01

    Pigmentation is one of the most variable traits within and between Drosophila species. Much of this diversity appears to be adaptive, with environmental factors often invoked as selective forces. Here, we describe the geographic structure of pigmentation in Drosophila americana and evaluate the hypothesis that it is a locally adapted trait. Body pigmentation was quantified using digital images and spectrometry in up to 10 flies from each of 93 isofemale lines collected from 17 locations across the United States and found to correlate most strongly with longitude. Sequence variation at putatively neutral loci showed no evidence of population structure and was inconsistent with an isolation-by-distance model, suggesting that the pigmentation cline exists despite extensive gene flow throughout the species range, and is most likely the product of natural selection. In all other Drosophila species examined to date, dark pigmentation is associated with arid habitats; however, in D. americana, the darkest flies were collected from the most humid regions. To investigate this relationship further, we examined desiccation resistance attributable to an allele that darkens pigmentation in D. americana. We found no significant effect of pigmentation on desiccation resistance in this experiment, suggesting that pigmentation and desiccation resistance are not unequivocally linked in all Drosophila species. PMID:20606690

  9. Multichannel Linear Predictive Coding of Color Images,

    DTIC Science & Technology

    1984-01-01

    [Abstract illegible in the scanned source; the legible fragments suggest autoregressive modeling of each color channel as an alternative to a single multichannel model.]

  10. Weighted MinMax Algorithm for Color Image Quantization

    NASA Technical Reports Server (NTRS)

    Reitan, Paula J.

    1999-01-01

    The maximum intercluster distance and the maximum quantization error that are minimized by the MinMax algorithm are shown to be inappropriate error measures for color image quantization. A fast and effective (image-quality-improving) method for generalizing activity weighting to any histogram-based color quantization algorithm is presented. A new non-hierarchical color quantization technique called weighted MinMax, a hybrid between the MinMax and Linde-Buzo-Gray (LBG) algorithms, is also described. The weighted MinMax algorithm incorporates activity weighting and seeks to minimize the weighted root-mean-square error (WRMSE), thereby obtaining high-quality quantized images with significantly less visual distortion than the MinMax algorithm.
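
    A greedy MinMax-style palette selection with optional activity weighting might look like the following sketch (synthetic pixels; the full weighted MinMax and its LBG-style refinement are not reproduced):

```python
import numpy as np

def minmax_palette(colors, k, weights=None):
    """Greedy MinMax seeding: repeatedly add the color farthest from the
    current palette.  `weights` (e.g. activity weights) scale the distances,
    de-emphasizing colors from busy regions where errors are less visible."""
    if weights is None:
        weights = np.ones(len(colors))
    palette = [colors[0]]
    d = np.full(len(colors), np.inf)
    for _ in range(k - 1):
        d = np.minimum(d, np.linalg.norm(colors - palette[-1], axis=1))
        palette.append(colors[np.argmax(weights * d)])
    return np.array(palette)

rng = np.random.default_rng(5)
pixels = rng.random((1000, 3))
palette = minmax_palette(pixels, k=8)

# Map every pixel to its nearest palette entry and measure the max error.
dists = np.linalg.norm(pixels[:, None, :] - palette[None, :, :], axis=2)
max_err = dists.min(axis=1).max()
```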

  11. Optical color-image encryption and synthesis using coherent diffractive imaging in the Fresnel domain.

    PubMed

    Chen, Wen; Chen, Xudong; Sheppard, Colin J R

    2012-02-13

    We propose a new method using coherent diffractive imaging for optical color-image encryption and synthesis in the Fresnel domain. An optical multiple-random-phase-mask encryption system is applied, and a strategy based on lateral translations of a phase-only mask is employed during image encryption. For decryption, an iterative phase retrieval algorithm is applied to extract high-quality decrypted color images from diffraction intensity maps (i.e., ciphertexts). In addition, optical color-image synthesis is also investigated based on coherent diffractive imaging. Numerical results are presented to demonstrate the feasibility and effectiveness of the proposed method. Compared with conventional interference methods, the coherent diffractive imaging approach may open up a new research perspective and can provide an effective alternative for optical color-image encryption and synthesis.
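
    A simplified single-channel sketch of the multiple-random-phase-mask idea, using a Fourier transform in place of the paper's Fresnel-domain propagation and phase retrieval:

```python
import numpy as np

rng = np.random.default_rng(6)
img = rng.random((32, 32))                 # one color channel of the plaintext

# Two statistically independent random phase masks (the keys).
m1 = np.exp(1j * 2 * np.pi * rng.random(img.shape))
m2 = np.exp(1j * 2 * np.pi * rng.random(img.shape))

# Encryption: mask in the object plane, transform, mask in the transform plane.
cipher = np.fft.fft2(img * m1) * m2

# Decryption with the correct keys recovers the image.
decrypted = np.abs(np.fft.ifft2(cipher * np.conj(m2)) * np.conj(m1))

# A wrong key yields noise rather than the image.
wrong = np.exp(1j * 2 * np.pi * rng.random(img.shape))
garbled = np.abs(np.fft.ifft2(cipher * np.conj(wrong)) * np.conj(m1))
```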

  12. Digital watermarking for color images in hue-saturation-value color space

    NASA Astrophysics Data System (ADS)

    Tachaphetpiboon, Suwat; Thongkor, Kharittha; Amornraksa, Thumrongrat; Delp, Edward J.

    2014-05-01

    This paper proposes a new watermarking scheme for color images in which all pixels of the image are used for embedding watermark bits, in order to achieve the highest embedding capacity. For watermark embedding, the S component in the hue-saturation-value (HSV) color space is used to carry the watermark bits, while the V component is used, in accordance with a human visual system model, to determine the proper watermark strength. In the proposed scheme, the number of watermark bits equals the number of pixels in the host image. Watermark extraction is accomplished blindly using a 3×3 spatial-domain Wiener filter. The efficiency of the proposed image watermarking scheme depends mainly on the accuracy of the estimate of the original S component. The experimental results show that the performance of the proposed scheme, both under no attacks and against various types of attacks, was superior to previously existing watermarking schemes.
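
    The embedding idea, watermark bits carried in S with strength scaled by V and extracted blindly from a smoothed estimate, can be sketched as follows (a 3×3 mean filter stands in for the paper's Wiener filter; all data is synthetic):

```python
import numpy as np

rng = np.random.default_rng(7)
x = np.linspace(0, np.pi, 32)
s = 0.4 + 0.2 * np.outer(np.sin(x), np.cos(x))   # smooth saturation plane
v = rng.random((32, 32))                         # value (brightness) plane
bits = rng.integers(0, 2, (32, 32))              # one watermark bit per pixel

# Embed: perturb S by +/- a strength scaled by V, so brighter pixels
# (where changes in S are less visible) carry a stronger mark.
strength = 0.05
s_marked = np.clip(s + np.where(bits == 1, 1.0, -1.0) * strength * v, 0.0, 1.0)

# Blind extraction: estimate the unmarked S with a smoothing filter, then
# take the sign of the residual.
pad = np.pad(s_marked, 1, mode='edge')
est = sum(pad[i:i + 32, j:j + 32] for i in range(3) for j in range(3)) / 9.0
bits_hat = (s_marked - est > 0).astype(int)
accuracy = (bits_hat == bits).mean()
```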

  13. Color correction with blind image restoration based on multiple images using a low-rank model

    NASA Astrophysics Data System (ADS)

    Li, Dong; Xie, Xudong; Lam, Kin-Man

    2014-03-01

    We present a method that can handle the color correction of multiple photographs with blind image restoration simultaneously and automatically. We prove that the local colors of a set of images of the same scene exhibit the low-rank property locally, both before and after a color-correction operation. This property allows us to correct all kinds of errors in an image under a low-rank matrix model without particular priors or assumptions. The possible errors may be caused by changes of viewpoint, large illumination variations, gross pixel corruptions, partial occlusions, etc. Furthermore, a new iterative soft-segmentation method is proposed for local color transfer using color influence maps. Because the correct color information and the spatial information of images can be recovered using the low-rank model, more precise color correction and many other image-restoration tasks, including image denoising, image deblurring, and gray-scale image colorization, can be performed simultaneously. Experiments have verified that our method achieves consistent and promising results on uncontrolled real photographs acquired from the Internet and that it outperforms current state-of-the-art methods.
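
    The low-rank intuition can be demonstrated with a toy example: corresponding local colors from several images of the same scene, stacked as rows, form a nearly low-rank matrix, and a low-rank fit suppresses gross corruptions (plain truncated SVD here is a rough stand-in for the robust low-rank model in the paper):

```python
import numpy as np

rng = np.random.default_rng(8)
# Ten images of the same scene: each row holds one image's local color values.
base = rng.random((1, 200))
gains = rng.uniform(0.5, 1.5, (10, 1))     # per-image color/exposure differences
M = gains @ base                            # rank-1 "same scene" structure

# Gross sparse corruption (e.g. occlusions or dead pixels) in a few entries.
corrupt = np.zeros_like(M)
idx = rng.choice(M.size, 40, replace=False)
corrupt.flat[idx] = rng.uniform(-1, 1, 40)
observed = M + corrupt

# Truncated SVD projects onto the dominant low-rank structure.
U, sv, Vt = np.linalg.svd(observed, full_matrices=False)
recovered = sv[0] * np.outer(U[:, 0], Vt[0])

err_before = np.abs(observed - M).max()     # worst corrupted entry
err_after = np.abs(recovered - M).mean()    # mean error after the low-rank fit
```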

  14. Ocean color products from the Korean Geostationary Ocean Color Imager (GOCI).

    PubMed

    Wang, Menghua; Ahn, Jae-Hyun; Jiang, Lide; Shi, Wei; Son, SeungHyun; Park, Young-Je; Ryu, Joo-Hyung

    2013-02-11

    The first geostationary ocean color satellite sensor, the Geostationary Ocean Color Imager (GOCI), onboard the South Korean Communication, Ocean, and Meteorological Satellite (COMS), was successfully launched in June of 2010. GOCI has a local-area coverage of the western Pacific region centered at around 36°N and 130°E and covers ~2500 × 2500 km². GOCI has eight spectral bands from 412 to 865 nm with hourly measurements during daytime from 9:00 to 16:00 local time, i.e., eight images per day. In a collaboration between the NOAA Center for Satellite Applications and Research (STAR) and the Korea Institute of Ocean Science and Technology (KIOST), we have been working on deriving and improving GOCI ocean color products, e.g., normalized water-leaving radiance spectra (nLw(λ)), chlorophyll-a concentration, the diffuse attenuation coefficient at a wavelength of 490 nm (Kd(490)), etc. The GOCI-covered ocean region includes some of the world's most turbid and optically complex waters. To improve the GOCI-derived nLw(λ) spectra, a new atmospheric correction algorithm was developed and implemented in the GOCI ocean color data processing, specifically for GOCI-like ocean color data processing in this highly turbid western Pacific region. In this paper, we show GOCI ocean color results from our collaboration effort. In situ validation analyses show that ocean color products derived from the new GOCI ocean color data processing have been significantly improved. Generally, the new GOCI ocean color products have a data quality comparable to those from the Moderate Resolution Imaging Spectroradiometer (MODIS) on the satellite Aqua. We show that GOCI-derived ocean color data can provide an effective tool to monitor ocean phenomena in the region such as tide-induced re-suspension of sediments, diurnal variation of ocean optical and biogeochemical properties, and horizontal advection of river discharge. In particular, we show some examples of ocean

  15. High resolution reversible color images on photonic crystal substrates.

    PubMed

    Kang, Pilgyu; Ogunbo, Samuel O; Erickson, David

    2011-08-16

    When light is incident on a crystalline structure with appropriate periodicity, some colors will be preferentially reflected (Joannopoulos, J. D.; Meade, R. D.; Winn, J. N. Photonic crystals: molding the flow of light; Princeton University Press: Princeton, NJ, 1995; p ix, 137 pp). These photonic crystals and the structural color they generate represent an interesting method for creating reflective displays and drawing devices, since they can achieve a continuous color response and do not require back lighting (Joannopoulos, J. D.; Villeneuve, P. R.; Fan, S. H. Photonic crystals: Putting a new twist on light. Nature 1997, 386, 143-149; Graham-Rowe, D. Tunable structural colour. Nat. Photonics 2009, 3, 551-553; Arsenault, A. C.; Puzzo, D. P.; Manners, I.; Ozin, G. A. Photonic-crystal full-colour displays. Nat. Photonics 2007, 1, 468-472; Walish, J. J.; Kang, Y.; Mickiewicz, R. A.; Thomas, E. L. Bioinspired Electrochemically Tunable Block Copolymer Full Color Pixels. Adv. Mater. 2009, 21, 3078). Here we demonstrate a technique for creating erasable, high-resolution, color images using otherwise transparent inks on self-assembled photonic crystal substrates (Fudouzi, H.; Xia, Y. N. Colloidal crystals with tunable colors and their use as photonic papers. Langmuir 2003, 19, 9653-9660). Using inkjet printing, we show the ability to infuse fine droplets of silicone oils into the crystal, locally swelling it and changing the reflected color (Sirringhaus, H.; Kawase, T.; Friend, R. H.; Shimoda, T.; Inbasekaran, M.; Wu, W.; Woo, E. P. High-resolution inkjet printing of all-polymer transistor circuits. Science 2000, 290, 2123-2126). Multicolor images with resolutions as high as 200 μm are obtained from oils of different molecular weights, with the lighter oils being able to penetrate deeper, yielding larger red shifts. Erasing of images is done simply by adding a low vapor pressure oil which dissolves the image, returning the substrate to its original state.

  16. Color Doppler Imaging of Cardiac Catheters Using Vibrating Motors

    PubMed Central

    Reddy, Kalyan E.; Light, Edward D.; Rivera, Danny J.; Kisslo, Joseph A.; Smith, Stephen W.

    2010-01-01

    We attached a miniature motor rotating at 11,000 rpm onto the proximal end of cardiac electrophysiological (EP) catheters in order to produce vibrations at the tip which were then visualized by color Doppler on ultrasound scanners. We imaged the catheter tip within a vascular graft submerged in a water tank using the Volumetrics Medical Imaging 3D scanner, the Siemens Sonoline Antares 2D scanner, and the Philips ie33 3D ultrasound scanner with TEE probe. The vibrating catheter tip was visualized in each case though results varied with the color Doppler properties of the individual scanner. PMID:19514134

  17. Prediction of object detection, recognition, and identification [DRI] ranges at color scene images based on quantifying human color contrast perception

    NASA Astrophysics Data System (ADS)

    Pinsky, Ephi; Levin, Ilia; Yaron, Ofer

    2016-10-01

    We propose a novel approach to predict, for a specified color imaging system and for objects with known characteristics, their detection, recognition, and identification (DRI) ranges in a colored dynamic scene, based on quantifying human color contrast perception. The method is based on the well-established three-dimensional L*a*b* color space, whose nonlinear relations are intended to mimic the nonlinear response of the human eye. The metric of the L*a*b* color space is such that the Euclidean distance between any two colors in this space is approximately proportional to the color contrast as perceived by the human eye and brain. A consequence of this metric is that the color contrast of any two points is always greater than (or equal to) their equivalent gray-scale contrast. This matches our sense that, when looking at a colored image, contrast is superior to the gray-scale contrast of the same image. Yet color loss by scattering at very long ranges should be considered as well. The color contrast derived from the distance between the colored object pixels and the nearby colored background pixels, as given by the L*a*b* color space metric, is expressed in terms of gray-scale contrast. This contrast replaces the original standard gray-scale contrast component of that image. As expected, the resulting DRI ranges are, in most cases, larger than those predicted by the standard gray-scale image. Upon further elaboration and validation of this method, it may be combined with the next versions of the well-accepted TRM codes for DRI predictions. Consistent prediction of DRI ranges requires a careful evaluation of the reduction of object-background color contrast along the range. Clearly, additional processing that reconstructs the true colors of the objects and background, and hence the color contrast along the range, will further increase the DRI ranges.
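
    The central claim, that the L*a*b* distance is never smaller than the lightness-only contrast, follows directly from the metric; a small sketch with hypothetical object and background colors:

```python
import numpy as np

# Lab coordinates for hypothetical object and background pixels.
object_lab = np.array([50.0, 40.0, 20.0])       # reddish object
background_lab = np.array([50.0, -35.0, 15.0])  # greenish background, same lightness

# Color contrast: Euclidean distance in L*a*b*, approximately proportional
# to perceived contrast.
color_contrast = np.linalg.norm(object_lab - background_lab)

# Gray-scale contrast uses only the lightness axis; here it is zero even
# though the object is clearly visible against the background.
grey_contrast = abs(object_lab[0] - background_lab[0])
```

    Since the L* difference is one component of the three-dimensional distance, `color_contrast >= grey_contrast` always holds, which is the inequality the abstract relies on.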

  18. ALISA: adaptive learning image and signal analysis

    NASA Astrophysics Data System (ADS)

    Bock, Peter

    1999-01-01

    ALISA (Adaptive Learning Image and Signal Analysis) is an adaptive statistical learning engine that may be used to detect and classify the surfaces and boundaries of objects in images. The engine has been designed, implemented, and tested at both George Washington University and the Research Institute for Applied Knowledge Processing in Ulm, Germany, over the last nine years, with major funding from Robert Bosch GmbH and the Lockheed-Martin Corporation. The design of ALISA was inspired by the multi-path cortical-column architecture and adaptive functions of the mammalian visual cortex.

  19. Automated assessment of the quality of diffusion tensor imaging data using color cast of color-encoded fractional anisotropy images.

    PubMed

    He, Xiaofu; Liu, Wei; Li, Xuzhou; Li, Qingli; Liu, Feng; Rauh, Virginia A; Yin, Dazhi; Bansal, Ravi; Duan, Yunsuo; Kangarlu, Alayar; Peterson, Bradley S; Xu, Dongrong

    2014-06-01

    Diffusion tensor imaging (DTI) data often suffer from artifacts caused by motion. These artifacts are especially severe in DTI data from infants, and implementing tight quality controls is therefore imperative for DTI studies of infants. Currently, routine procedures for quality assurance of DTI data involve slice-wise visual inspection of color-encoded fractional anisotropy (CFA) images. Such procedures often yield inconsistent results across different data sets, across different operators examining the same data sets, and sometimes even across time when the same operator inspects the same data set on two different occasions. We propose a more consistent, reliable, and effective method to evaluate the quality of CFA images automatically using their color cast, calculated from the distribution statistics of the 2D histogram of the images in the CIELAB color space, which the International Commission on Illumination (CIE) defines in terms of lightness (L) and the color-opponent dimensions (a and b). Experimental results using DTI data acquired from neonates verified that the proposed method is rapid and accurate. The method thus provides a new tool for real-time quality assurance of DTI data.
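
    One simple cast statistic in the spirit of the method, the distance of the mean (a, b) chroma from neutral relative to its spread, can be sketched as follows (a rough stand-in for the paper's 2D-histogram statistics; the Lab values are synthetic):

```python
import numpy as np

def color_cast_index(lab_img):
    """Cast measure on the (a, b) chroma plane: distance of the mean chroma
    from neutral, relative to its spread.  A large value suggests a global
    color cast."""
    a = lab_img[..., 1].ravel()
    b = lab_img[..., 2].ravel()
    mean_dist = np.hypot(a.mean(), b.mean())
    spread = np.hypot(a.std(), b.std())
    return mean_dist / max(spread, 1e-12)

rng = np.random.default_rng(9)
# A neutral image: chroma scattered around (a, b) = (0, 0).
neutral = np.stack([rng.uniform(0, 100, (32, 32)),
                    rng.normal(0, 5, (32, 32)),
                    rng.normal(0, 5, (32, 32))], axis=-1)
cast = neutral.copy()
cast[..., 1] += 25.0                        # push a* toward red: a strong cast

idx_neutral = color_cast_index(neutral)
idx_cast = color_cast_index(cast)
```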

  20. Digital images for eternity: color microfilm as archival medium

    NASA Astrophysics Data System (ADS)

    Normand, C.; Gschwind, R.; Fornaro, P.

    2007-01-01

    In the archiving and museum communities, the long-term preservation of artworks has traditionally been guaranteed by making duplicates of the original. For photographic reproductions, digital imaging devices have now become standard, providing better quality control and lower costs than film photography. However, due to the very short life cycle of digital data, losses are unavoidable without repetitive data migrations to new file formats and storage media. We present a solution for the long-term archiving of digital images on color microfilm (Ilfochrome® Micrographic). This extremely stable and high-resolution medium, combined with the use of a novel laser film recorder is particularly well suited for this task. Due to intrinsic limitations of the film, colorimetric reproductions of the originals are not always achievable. The microfilm must be first considered as an information carrier and not primarily as an imaging medium. Color transformations taking into account the film characteristics and possible degradations of the medium due to aging are investigated. An approach making use of readily available color management tools is presented which assures the recovery of the original colors after re-digitization. An extension of this project considering the direct recording of digital information as color bit-code on the film is also introduced.

  1. Preparing Colorful Astronomical Images III: Cosmetic Cleaning

    NASA Astrophysics Data System (ADS)

    Frattare, L. M.; Levay, Z. G.

    2003-12-01

    We present cosmetic cleaning techniques for use with mainstream graphics software (Adobe Photoshop) to produce presentation-quality images and illustrations from astronomical data. These techniques have been used on numerous images from the Hubble Space Telescope when producing photographic, print and web-based products for news, education and public presentation as well as illustrations for technical publication. We expand on a previous paper to discuss the treatment of various detector-attributed artifacts such as cosmic rays, chip seams, gaps, optical ghosts, diffraction spikes and the like. While Photoshop is not intended for quantitative analysis of full dynamic range data (as are IRAF or IDL, for example), we have had much success applying Photoshop's numerous, versatile tools to final presentation images. Other pixel-to-pixel applications such as filter smoothing and global noise reduction will be discussed.

  2. Accurate color images: from expensive luxury to essential resource

    NASA Astrophysics Data System (ADS)

    Saunders, David R.; Cupitt, John

    2002-06-01

    Over ten years ago the National Gallery in London began a program to make digital images of paintings in the collection using a colorimetric imaging system. This was to provide a permanent record of the state of paintings against which future images could be compared to determine if any changes had occurred. It quickly became apparent that such images could be used not only for scientific purposes, but also in applications where transparencies were then being used, for example as source materials for printed books and catalogues or for computer-based information systems. During the 1990s we were involved in the development of a series of digital cameras that have combined the high color accuracy of the original 'scientific' imaging system with the familiarity and portability of a medium format camera. This has culminated in the program of digitization now in progress at the National Gallery. By the middle of 2001 we will have digitized all the major paintings in the collection at a resolution of 10,000 pixels along their longest dimension and with calibrated color; we are on target to digitize the whole collection by the end of 2002. The images are available on-line within the museum for consultation, and Gallery departments can use the images in printed publications and on the Gallery's website. We describe the development of the imaging systems used at the National Gallery and how the research we have conducted into high-resolution accurate color imaging has developed from being a peripheral, if harmless, research activity to becoming a central part of the Gallery's information and publication strategy. Finally, we discuss some outstanding issues, such as interfacing our color management procedures with the systems used by external organizations.

  3. Color Contrast Metrics for Complex Images

    DTIC Science & Technology

    1986-09-01

    A viewer who is focused at infinity sees an image of the outside world with the computer-generated imagery "watercolored" upon it. The reader has ... raster is 2:1 positively interlaced and paints a complete image once every 1/30 s. The aspect ratio was adjusted to 1:1, yielding a (26 cm)² live ... required approximately 5 s. Presentation of the HUDBACKs to subjects was synchronized with the monitor's raster so that they were always painted

  4. Adaptive fusion of infrared and visible images in dynamic scene

    NASA Astrophysics Data System (ADS)

    Yang, Guang; Yin, Yafeng; Man, Hong; Desai, Sachi

    2011-11-01

    Multi-modality sensor fusion has been widely employed in various surveillance and military applications. A variety of image fusion techniques, including PCA, wavelet, curvelet and HSV, have been proposed in recent years to improve human visual perception for object detection. One of the main challenges for visible and infrared image fusion is to automatically determine an optimal fusion strategy for different input scenes at an acceptable computational cost. In this paper, we propose a fast and adaptive feature-selection-based image fusion method to obtain a high-contrast image from visible and infrared sensors for target detection. First, fuzzy c-means clustering is applied to the infrared image to highlight possible hotspot regions, which are considered potential target locations. The region surrounding each target area is then segmented as background. Image fusion is then applied locally to the selected target and background regions by computing different linear combinations of color components from the registered visible and infrared images. After obtaining the different fused images, histogram distributions are computed on these local fusion images as the fusion feature set. The variance ratio, a measure based on Linear Discriminant Analysis (LDA), is employed to sort the feature set, and the most discriminative feature is selected for the whole-image fusion. As the feature selection is performed over time, the process dynamically determines the most suitable feature for image fusion in different scenes. Experiments were conducted on the OSU Color-Thermal database and the TNO Human Factors dataset. The fusion results indicate that the proposed method achieves competitive performance compared with other fusion algorithms at a relatively low computational cost.
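The hotspot-detection step above can be sketched with a minimal fuzzy c-means implementation. This is an illustrative numpy version on a toy infrared frame, not the authors' code; the image values, cluster count and membership threshold are all assumptions made for the example:

```python
import numpy as np

def fuzzy_cmeans_1d(values, n_clusters=2, m=2.0, n_iter=30):
    """Minimal fuzzy c-means on 1-D pixel intensities (illustrative sketch)."""
    x = values.reshape(-1, 1).astype(float)
    # initialize centers spread across the intensity range
    centers = np.linspace(x.min(), x.max(), n_clusters)
    u = np.full((x.shape[0], n_clusters), 1.0 / n_clusters)
    for _ in range(n_iter):
        d = np.abs(x - centers[None, :]) + 1e-12          # distances to centers
        inv = d ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=1, keepdims=True)          # update memberships
        um = u ** m
        centers = (um * x).sum(axis=0) / um.sum(axis=0)   # update centers
    return centers, u

# toy "infrared image": warm background plus one hot spot
ir = np.full((32, 32), 40.0)
ir[10:14, 10:14] = 220.0
centers, u = fuzzy_cmeans_1d(ir.ravel())
hot = int(np.argmax(centers))                  # cluster with the highest center
mask = (u[:, hot] > 0.5).reshape(ir.shape)     # candidate hotspot region
```

On this toy frame the mask recovers exactly the 4x4 hot block, which would then serve as the "potential target location" for the local fusion step.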

  5. Fixation light hue bias revisited: implications for using adaptive optics to study color vision.

    PubMed

    Hofer, H J; Blaschke, J; Patolia, J; Koenig, D E

    2012-03-01

    Current vision science adaptive optics systems use near-infrared wavefront sensor 'beacons' that appear as red spots in the visual field. Colored fixation targets are known to influence the perceived color of macroscopic visual stimuli (Jameson, D., & Hurvich, L. M. (1967). Fixation-light bias: An unwanted by-product of fixation control. Vision Research, 7, 805-809.), suggesting that the wavefront sensor beacon may also influence perceived color for stimuli displayed with adaptive optics. Despite its importance for the proper interpretation of adaptive optics experiments on the fine-scale interaction of the retinal mosaic and spatial and color vision, this potential bias has not yet been quantified or addressed. Here we measure the impact of the wavefront sensor beacon on color appearance for dim, monochromatic point sources in five subjects. The presence of the beacon altered color reports both when it was used as a fixation target and when it was displaced in the visual field with a chromatically neutral fixation target. This influence must be taken into account when interpreting previous experiments, and new methods of adaptive correction should be used in future experiments using adaptive optics to study color.

  6. Fuzzy logic color detection: Blue areas in melanoma dermoscopy images.

    PubMed

    Lingala, Mounika; Stanley, R Joe; Rader, Ryan K; Hagerty, Jason; Rabinovitz, Harold S; Oliviero, Margaret; Choudhry, Iqra; Stoecker, William V

    2014-07-01

    Fuzzy logic image analysis techniques were used to analyze three shades of blue (lavender blue, light blue, and dark blue) in dermoscopic images for melanoma detection. A logistic regression model provided up to 82.7% accuracy for melanoma discrimination on 866 images. With a support vector machine (SVM) classifier, lower accuracy was obtained for individual shades (79.9-80.1%) than for multiple shades combined (up to 81.4%). All fuzzy blue logic alpha cuts scored higher than the crisp case. Fuzzy logic techniques applied to multiple shades of blue can assist in melanoma detection. These vector-based fuzzy logic techniques can be extended to other image analysis problems involving multiple colors or color shades.
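The alpha-cut idea can be illustrated with a small sketch. The trapezoidal membership function and the hue range for "blue" below are hypothetical, chosen only to show how a fuzzy membership with an alpha cut differs from a crisp threshold:

```python
import numpy as np

def trapezoid(x, a, b, c, d):
    """Trapezoidal fuzzy membership function over [a, d], flat on [b, c]."""
    x = np.asarray(x, dtype=float)
    rise = np.clip((x - a) / max(b - a, 1e-9), 0.0, 1.0)
    fall = np.clip((d - x) / max(d - c, 1e-9), 0.0, 1.0)
    return np.minimum(rise, fall)

# hypothetical membership for "blue" hue (degrees on the HSV wheel)
hues = np.array([120.0, 200.0, 225.0, 240.0, 300.0])
mu_blue = trapezoid(hues, 180, 210, 270, 300)

# an alpha-cut keeps pixels whose membership exceeds alpha
alpha = 0.5
blue_mask = mu_blue >= alpha
```

Borderline hues (here 200 degrees) receive a partial membership rather than being rejected outright, which is what allows the alpha-cut variants to outperform the crisp case.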

  7. Fast spectral color image segmentation based on filtering and clustering

    NASA Astrophysics Data System (ADS)

    Xing, Min; Li, Hongyu; Jia, Jinyuan; Parkkinen, Jussi

    2009-10-01

    This paper proposes a fast approach to spectral image segmentation. In the algorithm, two popular techniques are extended and applied to spectral color images: mean-shift filtering and kernel-based clustering. We claim that segmentation should be completed under illuminant F11 rather than directly using the original spectral reflectance, because such illumination can reduce data variability and expedite the subsequent filtering. The modes obtained in the mean-shift filtering represent the local features of spectral images and are applied to segmentation in place of pixels. Since the modes are generally small in number, the eigendecomposition of kernel matrices, the crucial step in the kernel-based clustering, becomes much easier. The combination of these two techniques can efficiently enhance the performance of segmentation. Experiments show that the proposed segmentation method is feasible and very promising for spectral color images.
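The mean-shift mode-finding step can be sketched in a few lines. This is a generic feature-space version with a Gaussian kernel, not the paper's spectral pipeline; the points and bandwidth are invented for illustration:

```python
import numpy as np

def mean_shift_modes(points, bandwidth, n_iter=30):
    """Shift every point toward the kernel-weighted mean of the data."""
    shifted = points.astype(float).copy()
    for _ in range(n_iter):
        # pairwise squared distances between current positions and the data
        d2 = ((shifted[:, None, :] - points[None, :, :]) ** 2).sum(-1)
        w = np.exp(-d2 / (2.0 * bandwidth ** 2))
        shifted = (w @ points) / w.sum(axis=1, keepdims=True)
    return shifted

# two well-separated "spectral feature" clusters in a 1-D feature space
pts = np.array([[0.0], [0.1], [0.2], [5.0], [5.1], [5.2]])
modes = mean_shift_modes(pts, bandwidth=0.5)
```

All points in each cluster converge to the same mode, so the six points collapse to two modes; clustering then operates on these few modes instead of every pixel, which is what makes the kernel eigendecomposition cheap.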

  8. Color image processing and object tracking workstation

    NASA Technical Reports Server (NTRS)

    Klimek, Robert B.; Paulick, Michael J.

    1992-01-01

    A system is described for automatic and semiautomatic tracking of objects on film or video tape, developed to meet the needs of the microgravity combustion and fluid science experiments at NASA Lewis. The system consists of individual hardware parts working under computer control to achieve a high degree of automation. The most important hardware parts include a 16-mm film projector, a lens system, a video camera, an S-VHS tapedeck, a frame grabber, and storage and output devices. Both the projector and the tapedeck have a computer interface enabling remote control. Tracking software was developed to control the overall operation. In the automatic mode, the main tracking program controls the projector or tapedeck frame incrementation, grabs a frame, processes it, locates the edge of the objects being tracked, and stores the coordinates in a file. This process is performed repeatedly until the last frame is reached. Three representative applications are described. These applications represent typical uses and include tracking the propagation of a flame front, tracking the movement of a liquid-gas interface with extremely poor visibility, and characterizing a diffusion flame according to color and shape.

  9. Clinical skin imaging using color spatial frequency domain imaging (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Yang, Bin; Lesicko, John; Moy, Austin J.; Reichenberg, Jason; Tunnell, James W.

    2016-02-01

    Skin diseases are typically associated with underlying biochemical and structural changes relative to normal tissue, which alter the optical properties of skin lesions, such as tissue absorption and scattering. Although widely used in dermatology clinics, conventional dermatoscopes cannot selectively image tissue absorption and scattering, which may limit their diagnostic power. Here we report a novel clinical skin imaging technique called color spatial frequency domain imaging (cSFDI), which enhances contrast by rendering a color spatial frequency domain (SFD) image at high spatial frequency. Moreover, by tuning the spatial frequency, we can obtain both absorption-weighted and scattering-weighted images. We developed a handheld imaging system specifically for clinical skin imaging. The flexible configuration of the system allows for better access to skin lesions in hard-to-reach regions. A total of 48 lesions from 31 patients were imaged under 470 nm, 530 nm and 655 nm illumination at a spatial frequency of 0.6 mm⁻¹. The SFD reflectance images at 470 nm, 530 nm and 655 nm were assigned to the blue (B), green (G) and red (R) channels to render a color SFD image. Our results indicated that color SFD images at f = 0.6 mm⁻¹ revealed features that were not seen in standard color images: structural features were enhanced and absorption features were reduced, which helped to identify the sources of the contrast. This imaging technique provides additional insight into skin lesions and may better assist clinical diagnosis.

  10. Adapted polarization state contrast image.

    PubMed

    Richert, Michael; Orlik, Xavier; De Martino, Antonello

    2009-08-03

    We propose a general method to maximize the polarimetric contrast between an object and its background using a predetermined illumination polarization state. After a first estimation of the polarimetric properties of the scene by classical Mueller imaging, we evaluate the incident polarized field that induces scattered polarization states from the object and background that are as opposite as possible on the Poincaré sphere. With a detection method optimized for a two-channel imaging system, Monte Carlo simulations of low-flux coherent imaging are performed with various objects and backgrounds having different properties of retardance, dichroism and depolarization. With respect to classical Mueller imaging, possibly combined with the polar decomposition, our results show a noticeable increase in the Bhattacharyya distance used as our contrast parameter.
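The Bhattacharyya distance used as the contrast parameter is computed from two normalized distributions. A small sketch, with toy histograms standing in for the object and background intensity distributions:

```python
import numpy as np

def bhattacharyya_distance(p, q):
    """Bhattacharyya distance between two discrete distributions."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p = p / p.sum()
    q = q / q.sum()
    bc = np.sqrt(p * q).sum()   # Bhattacharyya coefficient in [0, 1]
    return -np.log(bc)

# identical histograms give distance 0; dissimilar ones a positive distance
same = bhattacharyya_distance([1, 2, 3], [1, 2, 3])
diff = bhattacharyya_distance([4, 1], [1, 4])
```

A larger distance means the object and background responses are easier to separate, which is exactly what the illumination-state optimization tries to maximize.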

  11. Color-coded visualization of magnetic resonance imaging multiparametric maps

    NASA Astrophysics Data System (ADS)

    Kather, Jakob Nikolas; Weidner, Anja; Attenberger, Ulrike; Bukschat, Yannick; Weis, Cleo-Aron; Weis, Meike; Schad, Lothar R.; Zöllner, Frank Gerrit

    2017-01-01

    Multiparametric magnetic resonance imaging (mpMRI) data are increasingly used in the clinic, e.g. for the diagnosis of prostate cancer. In contrast to conventional MR imaging data, multiparametric data typically include functional measurements such as diffusion and perfusion imaging sequences. Conventionally, these measurements are visualized with a one-dimensional color scale, allowing only for one-dimensional information to be encoded. Yet, human perception places visual information in a three-dimensional color space. In theory, each dimension of this space can be utilized to encode visual information. We addressed this issue and developed a new method for tri-variate color-coded visualization of mpMRI data sets. We showed the usefulness of our method in a preclinical and in a clinical setting: in imaging data of a rat model of acute kidney injury, the method yielded characteristic visual patterns. In a clinical data set of N = 13 prostate cancer mpMRI data, we assessed diagnostic performance in a blinded study with N = 5 observers. Compared to conventional radiological evaluation, color-coded visualization was comparable in terms of positive and negative predictive values. Thus, we showed that human observers can successfully make use of the novel method. This method can be broadly applied to visualize different types of multivariate MRI data.
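The tri-variate idea, one parameter map per display channel, can be sketched as follows. The map names and min-max normalization are assumptions for the example, not the authors' pipeline:

```python
import numpy as np

def trivariate_rgb(maps):
    """Stack three normalized parameter maps into one RGB image."""
    channels = []
    for m in maps:
        m = m.astype(float)
        lo, hi = m.min(), m.max()
        # scale each map to [0, 1]; a flat map becomes an all-zero channel
        channels.append((m - lo) / (hi - lo) if hi > lo else np.zeros_like(m))
    return np.stack(channels, axis=-1)   # shape (H, W, 3)

# hypothetical diffusion, perfusion and T2 maps on a tiny 4x4 grid
h = w = 4
adc  = np.linspace(0.0, 1.0, h * w).reshape(h, w)
perf = np.full((h, w), 5.0)
t2   = np.arange(h * w, dtype=float).reshape(h, w)[::-1]
rgb = trivariate_rgb([adc, perf, t2])
```

Each voxel's color then encodes three measurements at once instead of one, which is the point of using all three perceptual color dimensions.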

  12. Teachers of Color Speak to Issues of Respect and Image.

    ERIC Educational Resources Information Center

    Gordon, June A.

    1997-01-01

    Explores the issue of the respectability and changing image of public school teachers through interviews with 114 teachers of color in three urban school districts: Cincinnati (Ohio); Seattle (Washington); and Long Beach (California). A strong consensus was held that the dominant society devalues teaching, resulting in fewer people of color…

  13. Multi-color magnetic particle imaging for cardiovascular interventions.

    PubMed

    Haegele, Julian; Vaalma, Sarah; Panagiotopoulos, Nikolaos; Barkhausen, Jörg; Vogt, Florian M; Borgert, Jörn; Rahmer, Jürgen

    2016-08-21

    Magnetic particle imaging (MPI) uses magnetic fields to visualize the spatial distribution of superparamagnetic iron oxide nanoparticles (SPIOs). Guidance of cardiovascular interventions is seen as one possible application of MPI. To safely guide interventions, the vessel lumen as well as all required interventional devices have to be visualized and be discernible from each other. Until now, different tracer concentrations were used for discerning devices from blood in MPI, because only one type of SPIO could be imaged at a time. Recently, it was shown for 3D MPI that it is possible to separate different signal sources in one volume of interest, i.e. to visualize and discern different SPIOs or different binding states of the same SPIO. The approach was termed multi-color MPI. In this work, the use of multi-color MPI for differentiation of a SPIO-coated guide wire (Terumo Radifocus 0.035″) from the lumen of a vessel phantom filled with diluted Resovist is demonstrated. This is achieved by recording dedicated system functions of the coating material containing solid Resovist and of liquid Resovist, which allows separation of their respective signals in the image reconstruction process. Assigning a color to each signal source results in a differentiation of guide wire and vessel phantom lumen into colored images.

  14. Hyperspectral imaging using RGB color for foodborne pathogen detection

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This paper reports the latest development of a color vision technique for detecting colonies of foodborne pathogens grown on agar plates with a hyperspectral image classification model that was developed using full hyperspectral data. The hyperspectral classification model depended on reflectance sp...

  15. Multi-color magnetic particle imaging for cardiovascular interventions

    NASA Astrophysics Data System (ADS)

    Haegele, Julian; Vaalma, Sarah; Panagiotopoulos, Nikolaos; Barkhausen, Jörg; Vogt, Florian M.; Borgert, Jörn; Rahmer, Jürgen

    2016-08-01

    Magnetic particle imaging (MPI) uses magnetic fields to visualize the spatial distribution of superparamagnetic iron oxide nanoparticles (SPIOs). Guidance of cardiovascular interventions is seen as one possible application of MPI. To safely guide interventions, the vessel lumen as well as all required interventional devices have to be visualized and be discernible from each other. Until now, different tracer concentrations were used for discerning devices from blood in MPI, because only one type of SPIO could be imaged at a time. Recently, it was shown for 3D MPI that it is possible to separate different signal sources in one volume of interest, i.e. to visualize and discern different SPIOs or different binding states of the same SPIO. The approach was termed multi-color MPI. In this work, the use of multi-color MPI for differentiation of a SPIO-coated guide wire (Terumo Radifocus 0.035″) from the lumen of a vessel phantom filled with diluted Resovist is demonstrated. This is achieved by recording dedicated system functions of the coating material containing solid Resovist and of liquid Resovist, which allows separation of their respective signals in the image reconstruction process. Assigning a color to each signal source results in a differentiation of guide wire and vessel phantom lumen into colored images.

  16. Color-coded visualization of magnetic resonance imaging multiparametric maps

    PubMed Central

    Kather, Jakob Nikolas; Weidner, Anja; Attenberger, Ulrike; Bukschat, Yannick; Weis, Cleo-Aron; Weis, Meike; Schad, Lothar R.; Zöllner, Frank Gerrit

    2017-01-01

    Multiparametric magnetic resonance imaging (mpMRI) data are increasingly used in the clinic, e.g. for the diagnosis of prostate cancer. In contrast to conventional MR imaging data, multiparametric data typically include functional measurements such as diffusion and perfusion imaging sequences. Conventionally, these measurements are visualized with a one-dimensional color scale, allowing only for one-dimensional information to be encoded. Yet, human perception places visual information in a three-dimensional color space. In theory, each dimension of this space can be utilized to encode visual information. We addressed this issue and developed a new method for tri-variate color-coded visualization of mpMRI data sets. We showed the usefulness of our method in a preclinical and in a clinical setting: in imaging data of a rat model of acute kidney injury, the method yielded characteristic visual patterns. In a clinical data set of N = 13 prostate cancer mpMRI data, we assessed diagnostic performance in a blinded study with N = 5 observers. Compared to conventional radiological evaluation, color-coded visualization was comparable in terms of positive and negative predictive values. Thus, we showed that human observers can successfully make use of the novel method. This method can be broadly applied to visualize different types of multivariate MRI data. PMID:28112222

  17. Color Image of Phoenix Heat Shield and Bounce Mark

    NASA Technical Reports Server (NTRS)

    2008-01-01

    This color image, from the Mars Reconnaissance Orbiter's High Resolution Imaging Science Experiment camera, shows the Phoenix heat shield and its bounce mark on the Martian surface.

    The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  18. Use of discrete chromatic space to tune the image tone in a color image mosaic

    NASA Astrophysics Data System (ADS)

    Zhang, Zuxun; Li, Zhijiang; Zhang, Jianqing; Zheng, Li

    2003-09-01

    Color image processing is an important problem. The main approach at present is to transfer the RGB color space into another color space, such as HIS (hue, intensity, and saturation), YIQ, or LUV. In fact, it may not be valid to process a color airborne image in a single color space, because the electromagnetic wave is physically altered in every wave band, while the color image is perceived through psychological vision. Therefore, it is necessary to propose an approach that accords with both the physical transformation and the psychological perception. An analysis of how to use the relevant color spaces to process color airborne photos is presented, and an application to tuning the image tone in a color airborne image mosaic is introduced. As a practical matter, a complete approach to performing the mosaic on color airborne images, taking full advantage of the relevant color spaces, is discussed.
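One such transfer, from RGB into a hue/intensity/saturation space, can be sketched for a single pixel; this is a textbook HSI formulation, not the paper's implementation:

```python
import numpy as np

def rgb_to_hsi(r, g, b):
    """Convert normalized RGB in [0, 1] to hue (degrees), saturation, intensity."""
    i = (r + g + b) / 3.0
    mn = min(r, g, b)
    s = 0.0 if i == 0 else 1.0 - mn / i
    # hue from the angle between the pixel and the gray axis
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-12
    theta = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
    h = theta if b <= g else 360.0 - theta
    return h, s, i

h, s, i = rgb_to_hsi(1.0, 0.0, 0.0)   # pure red: hue 0, full saturation
```

Tone adjustments made on the intensity channel then leave hue, and hence the perceived color, largely untouched, which is why such spaces are attractive for mosaic tone tuning.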

  19. A grayscale image color transfer method based on region texture analysis using GLCM

    NASA Astrophysics Data System (ADS)

    Zhao, Yuanmeng; Wang, Lingxue; Jin, Weiqi; Luo, Yuan; Li, Jiakun

    2011-08-01

    In order to improve the performance of color-transfer-based grayscale image colorization, this paper proposes a novel method in which pixels are matched accurately between images through region texture analysis using the Gray Level Co-occurrence Matrix (GLCM). The method consists of six steps: reference image selection, color space transformation, grayscale linear transformation and compression, texture analysis using the GLCM, pixel matching through texture value comparison, and color value transfer between pixels. We applied this method to various kinds of grayscale images, and they gained a natural color appearance like the reference images. Experimental results showed that this method is more effective than the conventional method in accurately transferring color to grayscale images.
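The GLCM step can be sketched in a few lines. This illustrative version computes one co-occurrence matrix and two common texture descriptors; the offset, gray-level count and toy patch are assumptions made for the example:

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Gray-level co-occurrence matrix for one pixel offset, normalized."""
    img = np.asarray(img)
    h, w = img.shape
    m = np.zeros((levels, levels))
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()

def glcm_features(m):
    """Contrast and energy, two common GLCM texture descriptors."""
    i, j = np.indices(m.shape)
    contrast = ((i - j) ** 2 * m).sum()
    energy = (m ** 2).sum()
    return contrast, energy

patch = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 2, 2, 2],
                  [2, 2, 3, 3]], dtype=int)
m = glcm(patch, levels=4)
contrast, energy = glcm_features(m)
```

Pixels are then matched between the reference and target images by comparing such texture values for the regions around them, rather than by intensity alone.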

  20. Improved Calibration Shows Images True Colors

    NASA Technical Reports Server (NTRS)

    2015-01-01

    Innovative Imaging and Research, located at Stennis Space Center, used a single SBIR contract with the center to build a large-scale integrating sphere, capable of calibrating a whole array of cameras simultaneously, at a fraction of the usual cost for such a device. Through the use of LEDs, the company also made the sphere far more efficient than existing products and able to mimic sunlight.

  1. Extreme Adaptive Optics Planet Imager

    NASA Astrophysics Data System (ADS)

    Macintosh, B.; Graham, J. R.; Ghez, A.; Kalas, P.; Lloyd, J.; Makidon, R.; Olivier, S.; Patience, J.; Perrin, M.; Poyneer, L.; Severson, S.; Sheinis, A.; Sivaramakrishnan, A.; Troy, M.; Wallace, J.; Wilhelmsen, J.

    2002-12-01

    Direct detection of photons emitted or reflected by extrasolar planets is the next major step in extrasolar planet studies. Current adaptive optics (AO) systems, with <300 subapertures and Strehl ratios of 0.4-0.7, can achieve contrast levels of 10⁶ at 2" separations; this is sufficient to see very young planets in wide orbits but insufficient to detect solar systems more like our own. Contrast levels of 10⁷-10⁸ in the near-IR are needed to probe a significant part of the extrasolar planet phase space. The NSF Center for Adaptive Optics is carrying out a design study for a dedicated ultra-high-contrast "Extreme" adaptive optics system for an 8-10 m telescope. With 3000 controlled subapertures it should achieve Strehl ratios >0.9 in the near-IR. Using a spatially filtered wavefront sensor, the system will be optimized to control scattered light over a large radius and suppress artifacts caused by static errors. We predict that it will achieve contrast levels of 10⁷-10⁸ around a large sample of stars (R<7-10), sufficient to detect Jupiter-like planets through their near-IR emission over a wide range of ages and masses. The system will be capable of a variety of high-contrast science, including studying circumstellar dust disks at densities a factor of 10-100 lower than currently feasible and a systematic inventory of other solar systems on the 10-100 AU scale. This work was supported by the NSF Science and Technology Center for Adaptive Optics, managed by UC Santa Cruz under AST-9876783. Portions of this work were performed under the auspices of the U.S. Department of Energy, under contract No. W-7405-Eng-48.

  2. Color Image Processing and Object Tracking System

    NASA Technical Reports Server (NTRS)

    Klimek, Robert B.; Wright, Ted W.; Sielken, Robert S.

    1996-01-01

    This report describes a personal-computer-based system for automatic and semiautomatic tracking of objects on film or video tape, developed to meet the needs of the Microgravity Combustion and Fluids Science Research Programs at the NASA Lewis Research Center. The system consists of individual hardware components working under computer control to achieve a high degree of automation. The most important hardware components include 16-mm and 35-mm film transports, a high-resolution digital camera mounted on an x-y-z micro-positioning stage, an S-VHS tapedeck, a Hi8 tapedeck, a video laserdisk, and a framegrabber. All of the image input devices are remotely controlled by a computer. Software was developed to integrate the overall operation of the system, including device frame incrementation, grabbing of image frames, image processing of the object's neighborhood, locating the position of the object being tracked, and storing the coordinates in a file. This process is performed repeatedly until the last frame is reached. Several different tracking methods are supported. To illustrate the process, two representative applications of the system are described. These applications represent typical uses of the system and include tracking the propagation of a flame front and tracking the movement of a liquid-gas interface with extremely poor visibility.

  3. Digital image modification detection using color information and its histograms.

    PubMed

    Zhou, Haoyu; Shen, Yue; Zhu, Xinghui; Liu, Bo; Fu, Zigang; Fan, Na

    2016-09-01

    The rapid development of many open-source and commercial image editing programs makes the authenticity of digital images questionable. Copy-move forgery is one of the most widely used tampering techniques to create desirable objects or conceal undesirable objects in a scene. Existing techniques reported in the literature to detect such tampering aim to improve robustness against the use of JPEG compression, blurring, noise, or other types of post-processing operations, which are frequently applied with the intention of concealing tampering and reducing tampering clues. A robust method based on color moments and five other image descriptors is proposed in this paper. The method divides the image into fixed-size overlapping blocks. A clustering operation divides the entire search space into smaller pieces with similar color distribution. Blocks from the tampered regions will reside within the same cluster, since both copied and moved regions have similar color distributions. Five image descriptors are used to extract block features, which makes the method more robust to post-processing operations. An ensemble of deep compositional pattern-producing neural networks is trained with these extracted features. Similarity among feature vectors in clusters indicates possible forged regions. Experimental results show that the proposed method can detect copy-move forgery even if an image was distorted by gamma correction, additive white Gaussian noise, JPEG compression, or blurring.
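The color-moment block features can be sketched as follows. The block size and the choice of three moments per channel are illustrative assumptions; the property the method relies on, that a copied block has statistics identical to its source, falls out directly:

```python
import numpy as np

def color_moments(block):
    """First three color moments (mean, std, cube-root skewness) per channel."""
    feats = []
    for c in range(block.shape[-1]):
        ch = block[..., c].astype(float).ravel()
        mean = ch.mean()
        std = ch.std()
        # cube root keeps the skewness moment on a comparable scale
        skew = np.cbrt(((ch - mean) ** 3).mean())
        feats.extend([mean, std, skew])
    return np.array(feats)

# two blocks with identical color statistics, as a copied region would have
rng = np.random.default_rng(1)
src = rng.integers(0, 256, size=(16, 16, 3))
copy = src.copy()
d = np.linalg.norm(color_moments(src) - color_moments(copy))
```

A zero (or near-zero, after post-processing) feature distance between two blocks in the same color cluster is what flags a candidate copy-move pair.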

  4. Offset-sparsity decomposition for automated enhancement of color microscopic image of stained specimen in histopathology

    NASA Astrophysics Data System (ADS)

    Kopriva, Ivica; Hadžija, Marijana Popović; Hadžija, Mirko; Aralica, Gorana

    2015-07-01

    We propose an offset-sparsity decomposition method for the enhancement of a color microscopic image of a stained specimen. The method decomposes vectorized spectral images into offset terms and sparse terms. A sparse term represents an enhanced image, and an offset term represents a "shadow." The related optimization problem is solved by computational improvement of the accelerated proximal gradient method used initially to solve the related rank-sparsity decomposition problem. Removal of an image-adapted color offset yields an enhanced image with improved colorimetric differences among the histological structures. This is verified by a no-reference colorfulness measure estimated from 35 specimens of the human liver, 1 specimen of the mouse liver stained with hematoxylin and eosin, 6 specimens of the mouse liver stained with Sudan III, and 3 specimens of the human liver stained with the anti-CD34 monoclonal antibody. The colorimetric difference improves on average by 43.86% with a 99% confidence interval (CI) of [35.35%, 51.62%]. Furthermore, according to the mean opinion score, estimated on the basis of the evaluations of five pathologists, images enhanced by the proposed method exhibit an average quality improvement of 16.60% with a 99% CI of [10.46%, 22.73%].

  5. Physically motivated enhancement of color images for fiber endoscopy.

    PubMed

    Winter, Christian; Zerfass, Thorsten; Elter, Matthias; Rupp, Stephan; Wittenberg, Thomas

    2007-01-01

    Fiber optics are widely used in flexible endoscopes, which are indispensable for many applications in diagnosis and therapy. Computer-aided use of fiberscopes requires a digital sensor mounted at the proximal end. Most commercially available cameras for endoscopy acquire the images through a regular grid of color filters known as the Bayer pattern. Hence, the images suffer from false-colored spatial moiré, which is further stressed by the degrading fiber-optic transmission, yielding a honeycomb pattern. To solve this problem we propose a new approach that extends the interpolation between known intensities of registered fibers to multi-channel color applications. The idea takes into account both the Gaussian intensity distribution of each fiber and the physical color distribution of the Bayer pattern. Individual color factors for the interpolation of each fiber area make it possible to simultaneously remove both the comb structure from the fiber bundle and the Bayer-pattern mosaicking from the sensor, while preserving the depicted structures and textures in the scene.

  6. Restoration of color images by multichannel Kalman filtering

    NASA Technical Reports Server (NTRS)

    Galatsanos, Nikolas P.; Chin, Roland T.

    1991-01-01

    A Kalman filter for optimal restoration of multichannel images is presented. This filter is derived using a multichannel semicausal image model that includes between-channel degradation. Both stationary and nonstationary image models are developed. This filter is implemented in the Fourier domain, and computation is reduced from O(Λ³N³M⁴) to O(Λ³N³M²) for an M x M N-channel image with degradation length Λ. Color (red, green, and blue (RGB)) images are used as examples of multichannel images, and restoration in the RGB and YIQ domains is investigated. Simulations are presented in which the effectiveness of this filter is tested for different types of degradation and different image model estimates.

  7. Automated detection of changes in sequential color ocular fundus images

    NASA Astrophysics Data System (ADS)

    Sakuma, Satoshi; Nakanishi, Tadashi; Takahashi, Yasuko; Fujino, Yuichi; Tsubouchi, Tetsuro; Nakanishi, Norimasa

    1998-06-01

    A recent trend is the automatic screening of color ocular fundus images. The examination of such images is used in the early detection of several adult diseases such as hypertension and diabetes. Since this type of examination is easier than CT, costs less, and has no harmful side effects, it will become a routine medical examination. Normal ocular fundus images are found in more than 90% of all people. To deal with the increasing number of such images, this paper proposes a new approach to process them automatically and accurately. Our approach, based on individual comparison, identifies changes in sequential images: a previously diagnosed normal reference image is compared to a non-diagnosed image.

  8. Image-Specific Prior Adaptation for Denoising.

    PubMed

    Lu, Xin; Lin, Zhe; Jin, Hailin; Yang, Jianchao; Wang, James Z

    2015-12-01

    Image priors are essential to many image restoration applications, including denoising, deblurring, and inpainting. Existing methods use either priors from the given image (internal) or priors from a separate collection of images (external). We find through statistical analysis that unifying the internal and external patch priors may yield a better patch prior. We propose a novel prior learning algorithm that combines the strength of both internal and external priors. In particular, we first learn a generic Gaussian mixture model from a collection of training images and then adapt the model to the given image by simultaneously adding additional components and refining the component parameters. We apply this image-specific prior to image denoising. The experimental results show that our approach yields better or competitive denoising results in terms of both the peak signal-to-noise ratio and structural similarity.
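The adaptation step, refining a generic mixture on the given image's patches, can be sketched with a minimal 1-D EM loop. The generic parameters and "patch" data below are invented for illustration, and this sketch only refines existing components rather than also adding new ones as the paper does:

```python
import numpy as np

def em_refine(x, means, stds, weights, n_iter=20):
    """Refine GMM parameters on data x by EM, starting from a generic model."""
    x = np.asarray(x, dtype=float)
    for _ in range(n_iter):
        # E-step: responsibility of each Gaussian component for each sample
        z = -0.5 * ((x[:, None] - means[None, :]) / stds[None, :]) ** 2
        dens = weights[None, :] * np.exp(z) / (stds[None, :] * np.sqrt(2 * np.pi))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate parameters from the responsibilities
        nk = resp.sum(axis=0)
        means = (resp * x[:, None]).sum(axis=0) / nk
        stds = np.sqrt((resp * (x[:, None] - means[None, :]) ** 2).sum(axis=0) / nk)
        weights = nk / len(x)
    return means, stds, weights

# generic prior (trained on an external collection) adapted to this image
generic_means = np.array([0.0, 3.0])
generic_stds = np.array([1.0, 1.0])
generic_weights = np.array([0.5, 0.5])
rng = np.random.default_rng(0)
patches = np.concatenate([rng.normal(-1, 0.3, 300), rng.normal(4, 0.3, 300)])
m, s, w = em_refine(patches, generic_means, generic_stds, generic_weights)
```

After refinement the components sit on the image's own patch statistics rather than the external collection's, which is the sense in which the prior becomes image-specific.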

  9. Modeling of display color parameters and algorithmic color selection

    NASA Astrophysics Data System (ADS)

    Silverstein, Louis D.; Lepkowski, James S.; Carter, Robert C.; Carter, Ellen C.

    1986-01-01

    An algorithmic approach to color selection, which is based on psychophysical models of color processing, is described. The factors that affect color differentiation, such as wavelength separation, color stimulus size, and brightness adaptation level, are discussed. The use of the CIE system of colorimetry and the CIELUV color difference metric for display color modeling is examined. The computer program combines the selection algorithm with internally derived correction factors for color image field size, ambient lighting characteristics, and anomalous red-green color vision deficiencies of display operators. The performance of the program is evaluated and uniform chromaticity scale diagrams for six-color and seven-color selection problems are provided.
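The CIELUV color difference underlying the selection metric is the Euclidean distance in (L*, u*, v*) coordinates. A minimal sketch, with hypothetical display colors standing in for candidate palette entries:

```python
import numpy as np

def delta_e_luv(c1, c2):
    """CIELUV color difference: Euclidean distance in (L*, u*, v*) space."""
    c1 = np.asarray(c1, dtype=float)
    c2 = np.asarray(c2, dtype=float)
    return float(np.sqrt(((c1 - c2) ** 2).sum()))

# two candidate display colors, given as (L*, u*, v*) coordinates
de = delta_e_luv([50.0, 20.0, -30.0], [55.0, 24.0, -30.0])
```

A selection algorithm of this kind keeps the pairwise ΔE*uv between all chosen colors above some discriminability threshold, so every pair remains distinguishable under the modeled viewing conditions.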

  10. 78 FR 18611 - Summit on Color in Medical Imaging; Cosponsored Public Workshop; Request for Comments

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-27

    ... HUMAN SERVICES Food and Drug Administration Summit on Color in Medical Imaging; Cosponsored Public... International Color Consortium (ICC) are announcing the following public workshop entitled ``Summit on Color in... Approaches for Dealing with Color in Medical Images.'' The purpose of the workshop is to bring together...

  11. Adaptive optics imaging of the retina

    PubMed Central

    Battu, Rajani; Dabir, Supriya; Khanna, Anjani; Kumar, Anupama Kiran; Roy, Abhijit Sinha

    2014-01-01

    Adaptive optics is a relatively new tool that is available to ophthalmologists for study of cellular level details. In addition to the axial resolution provided by the spectral-domain optical coherence tomography, adaptive optics provides an excellent lateral resolution, enabling visualization of the photoreceptors, blood vessels and details of the optic nerve head. We attempt a mini review of the current role of adaptive optics in retinal imaging. PubMed search was performed with key words Adaptive optics OR Retina OR Retinal imaging. Conference abstracts were searched from the Association for Research in Vision and Ophthalmology (ARVO) and American Academy of Ophthalmology (AAO) meetings. In total, 261 relevant publications and 389 conference abstracts were identified. PMID:24492503

  12. Adaptive optics imaging of the retina.

    PubMed

    Battu, Rajani; Dabir, Supriya; Khanna, Anjani; Kumar, Anupama Kiran; Roy, Abhijit Sinha

    2014-01-01

    Adaptive optics is a relatively new tool that is available to ophthalmologists for study of cellular level details. In addition to the axial resolution provided by the spectral-domain optical coherence tomography, adaptive optics provides an excellent lateral resolution, enabling visualization of the photoreceptors, blood vessels and details of the optic nerve head. We attempt a mini review of the current role of adaptive optics in retinal imaging. PubMed search was performed with key words Adaptive optics OR Retina OR Retinal imaging. Conference abstracts were searched from the Association for Research in Vision and Ophthalmology (ARVO) and American Academy of Ophthalmology (AAO) meetings. In total, 261 relevant publications and 389 conference abstracts were identified.

  13. Quantum image encryption based on restricted geometric and color transformations

    NASA Astrophysics Data System (ADS)

    Song, Xian-Hua; Wang, Shen; Abd El-Latif, Ahmed A.; Niu, Xia-Mu

    2014-08-01

A novel encryption scheme for quantum images based on restricted geometric and color transformations is proposed. The new strategy comprises efficient permutation and diffusion properties for quantum image encryption. The core idea of the permutation stage is to scramble the codes of the pixel positions through restricted geometric transformations. Then, a new quantum diffusion operation is implemented on the permutated quantum image based on restricted color transformations. The encryption keys of the two stages are generated by two sensitive chaotic maps, which can ensure the security of the scheme. The final step, measurement, is built on a probabilistic model. Statistical analyses of the experiments demonstrate significant improvements in favor of the proposed approach.
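A classical analogue of the chaotic-key permutation stage can be sketched: a logistic map generates a key-dependent sequence whose rank order scrambles pixel positions, and sorting the same permutation inverts it. This is only an illustration of the chaotic-permutation idea, not the paper's quantum circuit; the map parameters are hypothetical keys.

```python
import numpy as np

def logistic_sequence(x0, r, n):
    """Chaotic logistic map: x_{k+1} = r * x_k * (1 - x_k)."""
    xs = np.empty(n)
    x = x0
    for k in range(n):
        x = r * x * (1 - x)
        xs[k] = x
    return xs

def scramble(img, x0=0.3141, r=3.99):
    """Permute pixel positions using the rank order of a chaotic sequence."""
    flat = img.ravel()
    perm = np.argsort(logistic_sequence(x0, r, flat.size))
    return flat[perm].reshape(img.shape), perm

def unscramble(scr, perm):
    """Invert the permutation (requires the same chaotic key)."""
    flat = np.empty_like(scr.ravel())
    flat[perm] = scr.ravel()
    return flat.reshape(scr.shape)

img = np.arange(16).reshape(4, 4)
scr, perm = scramble(img)
recovered = unscramble(scr, perm)
```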

  14. Color voltage contrast: A new method of implementing fault contrast with color imaging software

    SciTech Connect

    Colvin, J.

    1995-12-31

Although voltage contrast and fault contrast methods are well established, the current methods of implementation are frequently tedious. A new method of mapping voltage contrast (VC) images in a qualitative (stroboscopic) color mode allows multiple logic states to be simultaneously viewed and updated in color. A shortcoming of image subtraction is that only one direction of logic change is represented unless the frames are exclusive-OR'ed together. Although this gives fault information, it does not include the VC of neighboring unchanged nodes. When tracking failures such as a saturated transistor resulting from a logic short somewhere else, all logic states, both static and transitional, need to be understood and viewed simultaneously if an expedient analysis is desired.
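The advantage of a signed color mapping over one-way subtraction can be sketched as follows: a signed frame difference distinguishes rising from falling transitions while the unchanged VC background stays visible in gray. The threshold and color assignments are illustrative, not the paper's scheme.

```python
import numpy as np

def color_fault_map(frame_a, frame_b, thresh=0.1):
    """Map logic transitions between two VC frames to colors:
    red = rising (0->1), blue = falling (1->0), gray = unchanged VC."""
    diff = frame_b.astype(float) - frame_a.astype(float)
    out = np.stack([frame_a, frame_a, frame_a], axis=-1).astype(float)  # gray VC
    out[diff > thresh] = [1.0, 0.0, 0.0]    # rising edges rendered red
    out[diff < -thresh] = [0.0, 0.0, 1.0]   # falling edges rendered blue
    return out

a = np.array([[0.0, 1.0], [0.5, 0.5]])
b = np.array([[1.0, 0.0], [0.5, 0.5]])
m = color_fault_map(a, b)
```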

  15. Tracking of multiple points using color video image analyzer

    NASA Astrophysics Data System (ADS)

    Nennerfelt, Leif

    1990-08-01

The Videomex-X is a new product intended for use in biomechanical measurement. It tracks up to six points at 60 frames per second using colored markers placed on the subject. The system can be used for applications such as gait analysis, studying facial movements, or tracking the pattern of movements of individuals in a group. The Videomex-X comprises a high-speed color image analyzer, an RGB color video camera, an IBM AT-compatible computer and motion analysis software. The markers are made from brightly colored plastic disks and each marker is a different color. Since the markers are unique, the problem of misidentification of markers does not occur. The Videomex-X performs real-time analysis so that the researcher can get immediate feedback on the subject's performance. High-speed operation is possible because the system uses distributed processing. The image analyzer is a hardwired parallel image processor which identifies the markers within the video picture and computes their x-y locations. The image analyzer sends the x-y coordinates to the AT computer, which performs additional analysis and presents the result. The x-y coordinate data acquired during the experiment may be streamed to the computer's hard disk. This allows the data to be re-analyzed repeatedly using different analysis criteria. The original Videomex-X tracked in two dimensions. However, a 3-D system has recently been completed. The algorithm used by the system to derive performance results from the x-y coordinates is contained in a separate ASCII file. These files can be modified by the operator to produce the required type of data reduction.
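The core of color-marker tracking — identifying pixels near a marker's unique color and reporting the marker's x-y location — can be sketched in a few lines. The color tolerance and RGB distance metric are illustrative assumptions, not the Videomex-X's hardwired algorithm.

```python
import numpy as np

def marker_centroid(rgb, target, tol=0.2):
    """Locate a colored marker: threshold pixels near the target color
    and return the centroid (row, col) of the matching region."""
    dist = np.linalg.norm(rgb - np.asarray(target, float), axis=-1)
    ys, xs = np.nonzero(dist < tol)
    if ys.size == 0:
        return None          # marker not visible in this frame
    return ys.mean(), xs.mean()

img = np.zeros((8, 8, 3))
img[2:4, 5:7] = [1.0, 0.0, 0.0]   # a 2x2 red marker on a dark background
c = marker_centroid(img, (1.0, 0.0, 0.0))
```

Because each marker is a distinct color, running this once per target color yields unambiguous per-marker trajectories, which matches the paper's claim that misidentification does not occur.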

  16. Thermal adaptiveness of plumage color in screech owls

    USGS Publications Warehouse

    Mosher, J.A.; Henny, C.J.

    1976-01-01

We measured oxygen consumption rates of 8 Screech Owls (4 red and 4 gray phase) at 4 environmental temperatures: -10°, -5°, 5°, and 15°C. These data demonstrated a significant difference in oxygen uptake between color phases at -10° and -5°C. This supports our hypothesis that red phase Screech Owls are restricted in their northern distribution by color-related metabolic differences from the gray phase birds. The problems of low red phase occurrence in the Gulf Coast states and their absence from the Western states remain to be studied.

  17. Color handling in the image retrieval system Imagine

    NASA Astrophysics Data System (ADS)

    Dal Degan, Neviano; Lancini, Rosa C.; Migliorati, Pier A.; Pozzi, Stefano

    1991-11-01

    The paper presents the main features of a prototype image retrieval system, nicknamed Imagine. In this system, the image database is located in a site remote from the user workstation. The key issues in developing the prototype have been the response time and scalability, or the ability of maintaining a set of basic functionalities in a wide range of workstation performances and network digital rates. The paper focuses on the problems related to the image visualization process in a workstation with a limited number of reproducible colors. Three different approaches, split, shared, and generic colormap, are presented and compared.

  18. Local image registration by adaptive filtering.

    PubMed

    Caner, Gulcin; Tekalp, A Murat; Sharma, Gaurav; Heinzelman, Wendi

    2006-10-01

    We propose a new adaptive filtering framework for local image registration, which compensates for the effect of local distortions/displacements without explicitly estimating a distortion/displacement field. To this effect, we formulate local image registration as a two-dimensional (2-D) system identification problem with spatially varying system parameters. We utilize a 2-D adaptive filtering framework to identify the locally varying system parameters, where a new block adaptive filtering scheme is introduced. We discuss the conditions under which the adaptive filter coefficients conform to a local displacement vector at each pixel. Experimental results demonstrate that the proposed 2-D adaptive filtering framework is very successful in modeling and compensation of both local distortions, such as Stirmark attacks, and local motion, such as in the presence of a parallax field. In particular, we show that the proposed method can provide image registration to: a) enable reliable detection of watermarks following a Stirmark attack in nonblind detection scenarios, b) compensate for lens distortions, and c) align multiview images with nonparametric local motion.
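The paper's 2-D block-adaptive scheme is not reproduced here, but the underlying idea — that adaptive filter coefficients identify a local displacement — can be sketched with a minimal 1-D LMS system identification: when the "distorted" signal is a pure delay of the reference, the adapted coefficients converge to a single dominant tap at that delay. Step size, filter length, and delay are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N, taps, mu, delay = 4000, 5, 0.01, 2
x = rng.standard_normal(N)
d = np.roll(x, delay)          # "distorted" signal: x displaced by 2 samples

w = np.zeros(taps)             # adaptive FIR coefficients
for n in range(taps, N):
    xv = x[n - taps + 1:n + 1][::-1]   # [x[n], x[n-1], ..., x[n-taps+1]]
    e = d[n] - w @ xv                  # prediction error
    w += 2 * mu * e * xv               # LMS update

# After convergence, the dominant tap index equals the displacement
```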

  19. Multi-color magnetic nanoparticle imaging using magnetorelaxometry

    NASA Astrophysics Data System (ADS)

    Coene, A.; Leliaert, J.; Liebl, M.; Löwa, N.; Steinhoff, U.; Crevecoeur, G.; Dupré, L.; Wiekhorst, F.

    2017-04-01

    Magnetorelaxometry (MRX) is a well-known measurement technique which allows the retrieval of magnetic nanoparticle (MNP) characteristics such as size distribution and clustering behavior. This technique also enables the non-invasive reconstruction of the spatial MNP distribution by solving an inverse problem, referred to as MRX imaging. Although MRX allows the imaging of a broad range of MNP types, little research has been done on imaging different MNP types simultaneously. Biomedical applications can benefit significantly from a measurement technique that allows the separation of the resulting measurement signal into its components originating from different MNP types. In this paper, we present a theoretical procedure and experimental validation to show the feasibility of MRX imaging in reconstructing multiple MNP types simultaneously. Because each particle type has its own characteristic MRX signal, it is possible to take this a priori information into account while solving the inverse problem. This way each particle type’s signal can be separated and its spatial distribution reconstructed. By assigning a unique color code and intensity to each particle type’s signal, an image can be obtained in which each spatial distribution is depicted in the resulting color and with the intensity measuring the amount of particles of that type, hence the name multi-color MNP imaging. The theoretical procedure is validated by reconstructing six phantoms, with different spatial arrangements of multiple MNP types, using MRX imaging. It is observed that MRX imaging easily allows up to four particle types to be separated simultaneously, meaning their quantitative spatial distributions can be obtained.
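The separation step — using each particle type's characteristic MRX signal as a priori information to split a combined measurement into per-type amounts — can be sketched as a linear unmixing problem. The exponential decays and time constants below are hypothetical signatures, not measured MRX curves.

```python
import numpy as np

t = np.linspace(0, 1, 200)
# Characteristic relaxation signal of each particle type (hypothetical)
sig_a = np.exp(-t / 0.05)
sig_b = np.exp(-t / 0.4)

true_amounts = np.array([2.0, 0.5])
measured = true_amounts[0] * sig_a + true_amounts[1] * sig_b

# Solve the inverse problem with the known signatures as a priori info;
# the recovered per-type amounts set the intensity of each color channel
A = np.column_stack([sig_a, sig_b])
amounts, *_ = np.linalg.lstsq(A, measured, rcond=None)
```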

  20. Demonstrating Hormonal Control of Vertebrate Adaptive Color Changes in Vitro.

    ERIC Educational Resources Information Center

    Hadley, Mac E.; Younggren, Newell A.

    1980-01-01

    Presented is a short discussion of factors causing color changes in animals. Also described is an activity which may be used to demonstrate the response of amphibian skin to a melanophore stimulating hormone in high school or college biology classes. (PEB)

  1. Adaptive sigmoid function bihistogram equalization for image contrast enhancement

    NASA Astrophysics Data System (ADS)

    Arriaga-Garcia, Edgar F.; Sanchez-Yanez, Raul E.; Ruiz-Pinales, Jose; Garcia-Hernandez, Ma. de Guadalupe

    2015-09-01

    Contrast enhancement plays a key role in a wide range of applications including consumer electronic applications, such as video surveillance, digital cameras, and televisions. The main goal of contrast enhancement is to increase the quality of images. However, most state-of-the-art methods induce different types of distortion such as intensity shift, wash-out, noise, intensity burn-out, and intensity saturation. In addition, in consumer electronics, simple and fast methods are required in order to be implemented in real time. A bihistogram equalization method based on adaptive sigmoid functions is proposed. It consists of splitting the image histogram into two parts that are equalized independently by using adaptive sigmoid functions. In order to preserve the mean brightness of the input image, the parameter of the sigmoid functions is chosen to minimize the absolute mean brightness metric. Experiments on the Berkeley database have shown that the proposed method improves the quality of images and preserves their mean brightness. An application to improve the colorfulness of images is also presented.
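The splitting idea can be sketched as follows: the histogram is divided at the mean and each half is equalized into its own output range, which limits mean-brightness shift. Note this sketch uses plain equalization per half; the paper additionally shapes each half with adaptive sigmoid functions chosen to minimize the absolute mean brightness error, which is not reproduced here.

```python
import numpy as np

def bihistogram_equalize(img):
    """Bi-histogram equalization: split at the mean, equalize each half
    into [0, m] and [m+1, 255] so the overall brightness is preserved."""
    m = int(img.mean())
    out = np.empty_like(img)
    for lo, hi, mask in [(0, m, img <= m), (m + 1, 255, img > m)]:
        vals = img[mask]
        if vals.size == 0:
            continue
        hist, _ = np.histogram(vals, bins=256, range=(0, 256))
        cdf = hist.cumsum() / vals.size
        out[mask] = (lo + cdf[vals] * (hi - lo)).astype(img.dtype)
    return out

rng = np.random.default_rng(1)
img = rng.integers(0, 256, (32, 32), dtype=np.uint8)
eq = bihistogram_equalize(img)
```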

  2. Color calibration of a CMOS digital camera for mobile imaging

    NASA Astrophysics Data System (ADS)

    Eliasson, Henrik

    2010-01-01

    As white balance algorithms employed in mobile phone cameras become increasingly sophisticated by using, e.g., elaborate white-point estimation methods, a proper color calibration is necessary. Without such a calibration, the estimation of the light source for a given situation may go wrong, giving rise to large color errors. At the same time, the demands for efficiency in the production environment require the calibration to be as simple as possible. Thus it is important to find the correct balance between image quality and production efficiency requirements. The purpose of this work is to investigate camera color variations using a simple model where the sensor and IR filter are specified in detail. As input to the model, spectral data of the 24-color Macbeth Colorchecker was used. This data was combined with the spectral irradiance of mainly three different light sources: CIE A, D65 and F11. The sensor variations were determined from a very large population from which 6 corner samples were picked out for further analysis. Furthermore, a set of 100 IR filters were picked out and measured. The resulting images generated by the model were then analyzed in the CIELAB space and color errors were calculated using the ΔE94 metric. The results of the analysis show that the maximum deviations from the typical values are small enough to suggest that a white balance calibration is sufficient. Furthermore, it is also demonstrated that the color temperature dependence is small enough to justify the use of only one light source in a production environment.
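The ΔE94 metric used to quantify the color errors can be written out directly from CIELAB coordinates; a minimal sketch with the standard graphic-arts weights (kL = kC = kH = 1):

```python
import math

def delta_e94(lab1, lab2):
    """CIE94 color difference between two CIELAB colors."""
    L1, a1, b1 = lab1
    L2, a2, b2 = lab2
    dL = L1 - L2
    C1, C2 = math.hypot(a1, b1), math.hypot(a2, b2)
    dC = C1 - C2
    da, db = a1 - a2, b1 - b2
    dH2 = max(da * da + db * db - dC * dC, 0.0)   # hue term, kept non-negative
    SL, SC, SH = 1.0, 1.0 + 0.045 * C1, 1.0 + 0.015 * C1
    return math.sqrt((dL / SL) ** 2 + (dC / SC) ** 2 + dH2 / SH ** 2)

# A pure lightness shift of 2 units gives dE94 = 2.0
d = delta_e94((50.0, 0.0, 0.0), (52.0, 0.0, 0.0))
```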

  3. Development of a novel 2D color map for interactive segmentation of histological images

    PubMed Central

    Chaudry, Qaiser; Sharma, Yachna; Raza, Syed H.; Wang, May D.

    2016-01-01

    We present a color segmentation approach based on a two-dimensional color map derived from the input image. Pathologists stain tissue biopsies with various colored dyes to see the expression of biomarkers. In these images, because of color variation due to inconsistencies in experimental procedures and lighting conditions, the segmentation used to analyze biological features is usually ad-hoc. Many algorithms like K-means use a single metric to segment the image into different color classes and rarely provide users with powerful color control. Our 2D color map interactive segmentation technique based on human color perception information and the color distribution of the input image, enables user control without noticeable delay. Our methodology works for different staining types and different types of cancer tissue images. Our proposed method’s results show good accuracy with low response and computational time making it a feasible method for user interactive applications involving segmentation of histological images.
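The general shape of the approach — project pixels onto a two-dimensional color map and let the user control segmentation by picking seed colors in that map — can be sketched with a simple normalized-chromaticity map. This is an illustration of the idea only; the paper's map is derived from human color perception information and the input image's color distribution, which is not reproduced here.

```python
import numpy as np

def to_rg_chromaticity(rgb):
    """Project RGB pixels onto a 2D color map (normalized r,g chromaticity)."""
    s = rgb.sum(axis=-1, keepdims=True) + 1e-9
    return (rgb / s)[..., :2]

def segment_by_seeds(rgb, seed_colors):
    """Label each pixel with the nearest user-chosen seed in the 2D map."""
    chroma = to_rg_chromaticity(rgb)                            # H x W x 2
    seeds = to_rg_chromaticity(np.asarray(seed_colors, float))  # K x 2
    d = np.linalg.norm(chroma[..., None, :] - seeds, axis=-1)   # H x W x K
    return d.argmin(axis=-1)

img = np.zeros((4, 4, 3))
img[:, :2] = [0.8, 0.1, 0.1]    # reddish stain region
img[:, 2:] = [0.2, 0.1, 0.7]    # bluish stain region
labels = segment_by_seeds(img, [[1, 0, 0], [0, 0, 1]])
```

Because the map is 2-D, the seeds can be adjusted interactively with little recomputation, which is the source of the low response time the abstract mentions.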

  4. Microscale halftone color image analysis: perspective of spectral color prediction modeling

    NASA Astrophysics Data System (ADS)

    Rahaman, G. M. Atiqur; Norberg, Ole; Edström, Per

    2014-01-01

A method has been proposed whereby the k-means clustering technique is applied to segment a microscale single color halftone image into three components—solid ink, ink/paper mixed area and unprinted paper. The method has been evaluated using impact (offset) and non-impact (electro-photography) based single color prints halftoned by amplitude modulation (AM) and frequency modulation (FM) techniques. The print samples have also included a range of variations in paper substrates. The colors of segmented regions have been analyzed in CIELAB color space to reveal the variations, in particular those present in mixed regions. The statistics of intensity distribution in the segmented areas have been utilized to derive expressions that can be used to calculate simple thresholds. The segmented results have been employed to study dot gain in comparison with the traditional estimation technique using the Murray-Davies formula. The performance of halftone reflectance prediction by the spectral Murray-Davies model has been reported using estimated and measured parameters. Finally, a general idea has been proposed to expand the classical Murray-Davies model based on experimental observations. Hence, the present study primarily presents the outcome of experimental efforts to characterize halftone print-media interactions with respect to the color prediction models. Currently, most regression-based color prediction models rely on mathematical optimization to estimate the parameters using measured average reflectance of a large area compared to the dot size. While this general approach has been accepted as a useful tool, experimental investigations can enhance understanding of the physical processes and facilitate exploration of new modeling strategies. Furthermore, the reported findings may help reduce the required number of samples that are printed and measured in the process of multichannel printer characterization and calibration.
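The Murray-Davies relation models halftone reflectance as a dot-area-weighted mix of solid ink and paper, R = a·R_solid + (1 − a)·R_paper; inverting it gives the effective dot area used in dot-gain estimation. A minimal sketch with illustrative reflectance values:

```python
def murray_davies_area(R_measured, R_paper, R_solid):
    """Effective dot area a from the Murray-Davies relation
    R = a * R_solid + (1 - a) * R_paper, solved for a."""
    return (R_paper - R_measured) / (R_paper - R_solid)

# A nominal 50% dot printed with dot gain: the measured tint is darker
R_paper, R_solid = 0.85, 0.05
a_eff = murray_davies_area(R_measured=0.38, R_paper=R_paper, R_solid=R_solid)
dot_gain = a_eff - 0.50        # effective minus nominal dot area
```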

  5. Block adaptive rate controlled image data compression

    NASA Technical Reports Server (NTRS)

    Rice, R. F.; Hilbert, E.; Lee, J.-J.; Schlutsmeyer, A.

    1979-01-01

    A block adaptive rate controlled (BARC) image data compression algorithm is described. It is noted that in the algorithm's principal rate controlled mode, image lines can be coded at selected rates by combining practical universal noiseless coding techniques with block adaptive adjustments in linear quantization. Compression of any source data at chosen rates of 3.0 bits/sample and above can be expected to yield visual image quality with imperceptible degradation. Exact reconstruction will be obtained if the one-dimensional difference entropy is below the selected compression rate. It is noted that the compressor can also be operated as a floating rate noiseless coder by simply not altering the input data quantization. Here, the universal noiseless coder ensures that the code rate is always close to the entropy. Application of BARC image data compression to the Galileo orbiter mission of Jupiter is considered.
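The lossless-reconstruction condition — exact recovery when the one-dimensional difference entropy falls below the selected rate — can be checked directly by estimating the entropy of first differences along an image line. A minimal sketch, not the BARC coder itself:

```python
import numpy as np

def diff_entropy(line):
    """Entropy (bits/sample) of one-dimensional first differences.
    If this is below the selected BARC rate, reconstruction is exact."""
    d = np.diff(line.astype(int))
    _, counts = np.unique(d, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

flat = np.full(64, 100, dtype=np.uint8)          # smooth line: tiny entropy
rng = np.random.default_rng(0)
noisy = rng.integers(0, 256, 64, dtype=np.uint8)  # noisy line: high entropy
h_flat, h_noisy = diff_entropy(flat), diff_entropy(noisy)
```

A smooth line would be coded losslessly at the example 3.0 bits/sample rate, while the noisy line would be quantized down to the rate by the block-adaptive adjustment.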

  6. Availability of color calibration for consistent color display in medical images and optimization of reference brightness for clinical use

    NASA Astrophysics Data System (ADS)

    Iwai, Daiki; Suganami, Haruka; Hosoba, Minoru; Ohno, Kazuko; Emoto, Yutaka; Tabata, Yoshito; Matsui, Norihisa

    2013-03-01

Color image consistency has not yet been achieved except through the Digital Imaging and Communication in Medicine (DICOM) Supplement 100, which specifies a color reproduction pipeline and device-independent color spaces. Thus, most healthcare enterprises cannot check monitor degradation routinely. To ensure color consistency in medical color imaging, monitor color calibration should be introduced. Using a simple color calibration device, the chromaticity of typical colors (Red, Green, Blue, and White) is measured as device-independent profile connection space values, called u'v', before and after calibration. In addition, clinical color images are displayed and visual differences are observed. In color calibration, the monitor brightness level has to be set to the quite low value of 80 cd/m2 according to the sRGB standard. As the maximum brightness of most color monitors currently available for medical use is much higher than 80 cd/m2, it does not seem appropriate to use the 80 cd/m2 level for calibration. Therefore, we propose that a new brightness standard be introduced while maintaining the color representation in clinical use. To evaluate the effect of brightness on chromaticity experimentally, the brightness level of two monitors is varied from 80 to 270 cd/m2 and chromaticity values are compared across brightness levels. As a result, there are no significant differences in the chromaticity diagram when brightness levels are changed. In conclusion, chromaticity is close to the theoretical value after color calibration, and chromaticity does not shift when brightness is changed. The results indicate that an optimized reference brightness level for clinical use could be set at high brightness on current monitors.
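The u'v' values measured by the calibration device are the CIE 1976 uniform chromaticity coordinates, computed from tristimulus XYZ; a minimal sketch, checked against the D65 white point:

```python
def uv_prime(X, Y, Z):
    """CIE 1976 u'v' chromaticity from tristimulus XYZ."""
    denom = X + 15.0 * Y + 3.0 * Z
    return 4.0 * X / denom, 9.0 * Y / denom

# D65 white point: expect roughly u' = 0.1978, v' = 0.4683
u, v = uv_prime(95.047, 100.0, 108.883)
```

Because u'v' is independent of overall luminance scaling, the finding that chromaticity does not shift with brightness is what this coordinate system is designed to expose.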

  7. Using Kernel Principal Components for Color Image Segmentation

    NASA Astrophysics Data System (ADS)

    Wesolkowski, Slawo

    2002-11-01

    Distinguishing objects on the basis of color is fundamental to humans. In this paper, a clustering approach is used to segment color images. Clustering is usually done using a single point or vector as a cluster prototype. The data can be clustered in the input or feature space where the feature space is some nonlinear transformation of the input space. The idea of kernel principal component analysis (KPCA) was introduced to align data along principal components in the kernel or feature space. KPCA is a nonlinear transformation of the input data that finds the eigenvectors along which this data has maximum information content (or variation). The principal components resulting from KPCA are nonlinear in the input space and represent principal curves. This is a necessary step as colors in RGB are not linearly correlated especially considering illumination effects such as shading or highlights. The performance of the k-means (Euclidean distance-based) and Mixture of Principal Components (vector angle-based) algorithms are analyzed in the context of the input space and the feature space obtained using KPCA. Results are presented on a color image segmentation task. The results are discussed and further extensions are suggested.
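The KPCA-then-cluster pipeline can be sketched with scikit-learn: map pixel colors into the kernel feature space, then cluster along the resulting nonlinear principal components. The synthetic "dark" and "bright" pixel clusters stand in for shading effects; the kernel and its gamma are illustrative choices, not the paper's settings.

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Pixels from two color clusters in RGB (synthetic stand-ins for image data)
dark = rng.normal([0.2, 0.1, 0.1], 0.03, (200, 3))
bright = rng.normal([0.8, 0.7, 0.6], 0.03, (200, 3))
pixels = np.vstack([dark, bright])

# Nonlinear feature space, then k-means along the principal curves
feats = KernelPCA(n_components=2, kernel="rbf", gamma=15.0).fit_transform(pixels)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(feats)
```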

  8. Online monitoring of red meat color using hyperspectral imaging.

    PubMed

    Kamruzzaman, Mohammed; Makino, Yoshio; Oshita, Seiichi

    2016-06-01

A hyperspectral imaging system in the spectral range of 400-1000 nm was tested to develop an online monitoring system for red meat (beef, lamb, and pork) color in the meat industry. Instead of selecting different sets of important wavelengths for beef, lamb, and pork, a single set of feature wavelengths was selected using the successive projection algorithm for the red meat color values (L*, a*, b*) for convenient industrial application. Only six wavelengths (450, 460, 600, 620, 820, and 980 nm) were further chosen as predictive feature wavelengths for predicting L*, a*, and b* in red meat. Multiple linear regression models were then developed and predicted L*, a*, and b* with coefficients of determination (R(2)p) of 0.97, 0.84, and 0.82, and root mean square errors of prediction of 1.72, 1.73, and 1.35, respectively. Finally, distribution maps of meat surface color were generated. The results indicated that hyperspectral imaging has the potential to be used for rapid assessment of meat color.
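Fitting a multiple linear regression from the six band reflectances to a color value can be sketched with a least-squares solve; the reflectances and regression coefficients below are synthetic, only the six wavelengths come from the abstract.

```python
import numpy as np

BANDS = [450, 460, 600, 620, 820, 980]        # feature wavelengths (nm)

rng = np.random.default_rng(0)
R = rng.uniform(0.1, 0.9, (50, len(BANDS)))   # reflectance at the six bands

# Synthetic ground truth: L* as a fixed linear mix (coefficients hypothetical)
coef_true = np.array([12.0, -3.0, 25.0, 8.0, -6.0, 4.0])
L_star = R @ coef_true + 30.0

# Fit the MLR model: L* ~ intercept + six band reflectances
A = np.column_stack([np.ones(len(R)), R])
beta, *_ = np.linalg.lstsq(A, L_star, rcond=None)
pred = A @ beta
```

Applying the fitted model pixel-by-pixel across a hyperspectral cube yields the surface color distribution maps described in the abstract.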

  9. Digital image fusion systems: color imaging and low-light targets

    NASA Astrophysics Data System (ADS)

    Estrera, Joseph P.

    2009-05-01

This paper presents digital image fusion (enhanced A+B) systems in color imaging and low-light target applications. The paper first discusses the digital sensors utilized in the noted image fusion applications: a 1900x1086 (high-definition format) CMOS imager coupled to a Generation III image intensifier for the visible/near-infrared (NIR) digital sensor, and a 320x240 or 640x480 uncooled microbolometer thermal imager for the long-wavelength infrared (LWIR) digital sensor. Performance metrics for these digital imaging sensors are presented. The digital image fusion (enhanced A+B) process is presented in the context of early fused night vision systems such as the digital image fused system (DIFS) and the digital enhanced night vision goggle and, later, the long-range digitally fused night vision sighting system. Next, the paper discusses the effects of user display color in a dual color digital image fusion system. Dual color image fusion schemes such as Green/Red, Cyan/Yellow, and White/Blue for image intensifier and thermal infrared sensor color representation, respectively, are discussed. Finally, the paper presents digitally fused imagery and image analysis of long-distance targets in low light from these digitally fused systems. The result of this image analysis with enhanced A+B digital image fusion systems is that maximum contrast and spatial resolution are achieved in a digital fusion mode as compared to individual sensor modalities in low-light, long-distance imaging applications. Paper has been cleared by DoD/OSR for Public Release under Ref: 08-S-2183 on August 8, 2008.
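The Green/Red dual-color scheme can be sketched as mapping each sensor to its own display channel: the intensified visible/NIR signal drives green and the LWIR thermal signal drives red, so a hot target stands out against the intensified scene. The channel weights are illustrative, not the product's calibration.

```python
import numpy as np

def fuse_green_red(visible, thermal, a=1.0, b=1.0):
    """Dual-color A+B style fusion: intensified visible/NIR rendered
    green, LWIR thermal rendered red (weights a, b are illustrative)."""
    out = np.zeros(visible.shape + (3,))
    out[..., 0] = np.clip(b * thermal, 0, 1)   # red  <- thermal sensor
    out[..., 1] = np.clip(a * visible, 0, 1)   # green <- image intensifier
    return out

vis = np.full((4, 4), 0.6)        # intensified scene background
th = np.zeros((4, 4))
th[1, 1] = 0.9                    # hot target seen only by the LWIR sensor
rgb = fuse_green_red(vis, th)
```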

  10. Client-side Medical Image Colorization in a Collaborative Environment.

    PubMed

    Virag, Ioan; Stoicu-Tivadar, Lăcrămioara; Crişan-Vida, Mihaela

    2015-01-01

The paper presents an application related to collaborative medicine using a browser-based medical visualization system, with focus on the medical image colorization process and the underlying open-source web development technologies involved. Browser-based systems allow physicians to share medical data with their remotely located counterparts or medical students, assisting them during patient diagnosis, treatment monitoring, surgery planning or for educational purposes. This approach brings forth the advantage of ubiquity. The system can be accessed from any device in order to process the images, ensuring independence from any specific proprietary operating system. The current work starts with processing of DICOM (Digital Imaging and Communications in Medicine) files and ends with the rendering of the resulting bitmap images on a HTML5 (fifth revision of the HyperText Markup Language) canvas element. The application improves the image visualization emphasizing different tissue densities.
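The colorization step — windowing grayscale intensities and mapping them through a color lookup so that different tissue densities get different hues — can be sketched in Python (the browser system does this on an HTML5 canvas; the blue-to-red lookup below is a minimal stand-in, not the application's palette):

```python
import numpy as np

def pseudocolor(gray, window_center=0.5, window_width=1.0):
    """Window the grayscale values (as in DICOM display) and map them
    through a simple blue-to-red lookup to emphasize tissue densities."""
    lo = window_center - window_width / 2.0
    t = np.clip((gray - lo) / window_width, 0.0, 1.0)
    rgb = np.zeros(gray.shape + (3,))
    rgb[..., 0] = t            # red grows with windowed intensity
    rgb[..., 2] = 1.0 - t      # blue fades with windowed intensity
    return rgb

gray = np.linspace(0, 1, 5).reshape(1, 5)
rgb = pseudocolor(gray)
```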

  11. False color image of Safsaf Oasis in southern Egypt

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This is a false color image of the uninhabited Safsaf Oasis in southern Egypt near the Egypt/Sudan border. It was produced from data obtained from the L-band and C-band radars that are part of the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar onboard the Shuttle Endeavour on April 9, 1994. The image is centered at 22 degrees North latitude, 29 degrees East longitude. It shows detailed structures of bedrock, and the dark blue sinuous lines are braided channels that occupy part of an old broad river valley. Virtually everything visible on this radar composite image cannot be seen either when standing on the ground or when viewing photographs or satellite images such as Landsat. The Jet Propulsion Laboratory alternative photo number is P-43920.

  12. Adaptive ladar receiver for multispectral imaging

    NASA Astrophysics Data System (ADS)

    Johnson, Kenneth; Vaidyanathan, Mohan; Xue, Song; Tennant, William E.; Kozlowski, Lester J.; Hughes, Gary W.; Smith, Duane D.

    2001-09-01

    We are developing a novel 2D focal plane array (FPA) with read-out integrated circuit (ROIC) on a single chip for 3D laser radar imaging. The ladar will provide high-resolution range and range-resolved intensity images for detection and identification of difficult targets. The initial full imaging-camera-on-a-chip system will be a 64 by 64 element, 100-micrometers pixel-size detector array that is directly bump bonded to a low-noise 64 by 64 array silicon CMOS-based ROIC. The architecture is scalable to 256 by 256 or higher arrays depending on the system application. The system will provide all the required electronic processing at pixel level and the smart FPA enables directly producing the 3D or 4D format data to be captured with a single laser pulse. The detector arrays are made of uncooled InGaAs PIN device for SWIR imaging at 1.5 micrometers wavelength and cooled HgCdTe PIN device for MWIR imaging at 3.8 micrometers wavelength. We are also investigating concepts using multi-color detector arrays for simultaneous imaging at multiple wavelengths that would provide additional spectral dimension capability for enhanced detection and identification of deep-hide targets. The system is suited for flash ladar imaging, for combat identification of ground targets from airborne platforms, flash-ladar imaging seekers, and autonomous robotic/automotive vehicle navigation and collision avoidance applications.

  13. The Application of Range Space Operations to Color Images

    SciTech Connect

    Baldwin, C; Duchaineau, M

    2002-03-26

The knowledge gained from scientific observation, experiment, and simulation is linked to the ability to analyze, understand, and manage the generated results. These abilities are increasingly at odds with the current, and future, capabilities to generate enormous quantities of raw scientific and engineering data from instruments, sensors, and computers. Many researchers are currently engaged in activities that seek to create new and novel methods for analyzing, understanding, and managing these vast collections of data. In this work, we present some of our research in addressing a particular type of problem in this broad undertaking. Much of the scientific data of interest is in the form of observed, measured, or computed multivariate or multi-component vector field data—with either physical or color data values. We are currently researching methods and techniques for working with this type of vector data through the use of a novel analysis technique. Our basic approach is to work with the vector field data in its natural physical or color space. When the data is viewed as a functional mapping of a domain, usually an index space, to a range, the physical or color values, potentially interesting characteristics of the data present themselves. These characteristics are useful in analyzing the vector fields based on quantities and qualities of the physical or color data values themselves. We will present the basic development of the idea of range space operations and detail the information we are interested in and some of the issues involved in its computation. The data we are first interested in, and discuss exclusively in this work, is color image data from scientific observations and simulations. Some of the operations on the range space representation that are of interest for this color image data are colormap construction, segmentation, color modeling, and compression.
We will show how some of the operations can be implemented in range space, what analysis

  14. Adaptive discrete cosine transform based image coding

    NASA Astrophysics Data System (ADS)

    Hu, Neng-Chung; Luoh, Shyan-Wen

    1996-04-01

In this discrete cosine transform (DCT) based image coding, the DCT kernel matrix is decomposed into a product of two matrices. The first matrix is called the discrete cosine preprocessing transform (DCPT), whose kernels are plus or minus 1 or plus or minus one-half. The second matrix is the postprocessing stage treated as a correction stage that converts the DCPT to the DCT. On applying the DCPT to image coding, image blocks are processed by the DCPT, then a decision is made to determine whether the processed image blocks are inactive or active in the DCPT domain. If the processed image blocks are inactive, then the compactness of the processed image blocks is the same as that of the image blocks processed by the DCT. However, if the processed image blocks are active, a correction process is required; this is achieved by multiplying the processed image block by the postprocessing stage. As a result, this adaptive image coding achieves the same performance as the DCT image coding, and both the overall computation and the round-off error are reduced, because both the DCPT and the postprocessing stage can be implemented by distributed arithmetic or fast computation algorithms.

  15. Beef quality parameters estimation using ultrasound and color images

    PubMed Central

    2015-01-01

    Background Beef quality measurement is a complex task with high economic impact. There is high interest in obtaining automatic estimates of quality parameters in live cattle or post mortem. In this paper we set out to obtain beef quality estimates from the analysis of ultrasound (in vivo) and color images (post mortem), with the measurement of various parameters related to tenderness and amount of meat: rib eye area, percentage of intramuscular fat, and backfat thickness or subcutaneous fat. Proposal An algorithm based on curve evolution is implemented to calculate the rib eye area. The backfat thickness is estimated from the profile of distances between two previously detected curves that delimit the steak and the rib eye. A model based on Support Vector Regression (SVR) is trained to estimate the intramuscular fat percentage. A set of features extracted from a region of interest, previously detected in both ultrasound and color images, is proposed. In all cases, a complete evaluation was performed with different databases including: color and ultrasound images acquired by a beef industry expert, intramuscular fat estimates obtained by an expert using commercial software, and chemical analysis. Conclusions The proposed algorithms show good results in calculating the rib eye area and in measuring and profiling the backfat thickness. They are also promising in predicting the percentage of intramuscular fat. PMID:25734452

  16. Block-based embedded color image and video coding

    NASA Astrophysics Data System (ADS)

    Nagaraj, Nithin; Pearlman, William A.; Islam, Asad

    2004-01-01

    The Set Partitioned Embedded bloCK coder (SPECK) has been found to perform comparably to the best-known grayscale still image coders such as EZW, SPIHT, and JPEG2000. In this paper, we first propose Color-SPECK (CSPECK), a natural extension of SPECK to handle color still images in the YUV 4:2:0 format. Extensions to other YUV formats are also possible. PSNR results indicate that CSPECK is among the best known color coders, while the perceptual quality of its reconstruction is superior to that of SPIHT and JPEG2000. We then propose a moving-picture coding system called Motion-SPECK, with CSPECK as the core algorithm in an intra-based setting. Specifically, we demonstrate two modes of operation of Motion-SPECK, namely the constant-rate mode, where every frame is coded at the same bit-rate, and the constant-distortion mode, where we ensure the same quality for each frame. Results on well-known CIF sequences indicate that Motion-SPECK performs comparably to Motion-JPEG2000, while the visual quality of the sequence is in general superior. Both CSPECK and Motion-SPECK automatically inherit all the desirable features of SPECK, such as embeddedness, low computational complexity, highly efficient performance, fast decoding, and low dynamic memory requirements. The intended applications of Motion-SPECK would be high-end and emerging video applications such as high-quality digital video recording systems, Internet video, and medical imaging.

  17. Stokes image reconstruction for two-color microgrid polarization imaging systems.

    PubMed

    Lemaster, Daniel A

    2011-07-18

    The Air Force Research Laboratory has developed a new microgrid polarization imaging system capable of simultaneously reconstructing linear Stokes parameter images in two colors on a single focal plane array. In this paper, an effective method for extracting Stokes images is presented for this type of camera system. It is also shown that correlations between the color bands can be exploited to significantly increase overall spatial resolution. Test data is used to show the advantages of this approach over bilinear interpolation. The bounds (in terms of available reconstruction bandwidth) on image resolution are also provided.
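
    For reference, the linear Stokes parameters behind four polarizer orientations follow the standard definitions (a minimal sketch; it ignores the microgrid demosaicking and bandwidth-limited reconstruction that are the paper's actual contribution):

    ```python
    def linear_stokes(i0, i45, i90, i135):
        """Linear Stokes parameters from intensities measured behind
        polarizers at 0, 45, 90, and 135 degrees (standard definitions)."""
        s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
        s1 = i0 - i90                        # horizontal vs. vertical
        s2 = i45 - i135                      # +45 vs. -45 degrees
        return s0, s1, s2
    ```

    The degree of linear polarization then follows as sqrt(s1**2 + s2**2) / s0.
    
    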

  18. Color Image of Phoenix Lander on Mars Surface

    NASA Technical Reports Server (NTRS)

    2008-01-01

    This is an enhanced-color image from Mars Reconnaissance Orbiter's High Resolution Imaging Science Experiment (HiRISE) camera. It shows the Phoenix lander with its solar panels deployed on the Mars surface. The spacecraft appears more blue than it would in reality.

    The blue/green and red filters on the HiRISE camera were used to make this picture.

    The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  19. Multi-color imaging of magnetic Co/Pt heterostructures

    PubMed Central

    Willems, Felix; von Korff Schmising, Clemens; Weder, David; Günther, Christian M.; Schneider, Michael; Pfau, Bastian; Meise, Sven; Guehrs, Erik; Geilhufe, Jan; Merhe, Alaa El Din; Jal, Emmanuelle; Vodungbo, Boris; Lüning, Jan; Mahieu, Benoit; Capotondi, Flavio; Pedersoli, Emanuele; Gauthier, David; Manfredda, Michele; Eisebitt, Stefan

    2017-01-01

    We present an element specific and spatially resolved view of magnetic domains in Co/Pt heterostructures in the extreme ultraviolet spectral range. Resonant small-angle scattering and coherent imaging with Fourier-transform holography reveal nanoscale magnetic domain networks via magnetic dichroism of Co at the M2,3 edges as well as via strong dichroic signals at the O2,3 and N6,7 edges of Pt. We demonstrate for the first time simultaneous, two-color coherent imaging at a free-electron laser facility, paving the way for direct real-space access to ultrafast magnetization dynamics in complex multicomponent material systems. PMID:28289691

  20. Uniform color space analysis of LACIE image products

    NASA Technical Reports Server (NTRS)

    Nalepka, R. F. (Principal Investigator); Balon, R. J.; Cicone, R. C.

    1979-01-01

    The author has identified the following significant results. Analysis and comparison of image products generated by different algorithms show that the scaling and biasing of data channels for control of PFC primaries lead to loss of information (in a probability-of-misclassification sense) by two major processes. In order of importance, they are: neglecting the input of one channel of data in any one image, and failing to provide sufficient color resolution of the data. The scaling and biasing approach tends to distort distance relationships in data space and provides less than desirable resolution when the data variation is typical of a developed, nonhazy agricultural scene.

  1. Adaptive fuzzy segmentation of magnetic resonance images.

    PubMed

    Pham, D L; Prince, J L

    1999-09-01

    An algorithm is presented for the fuzzy segmentation of two-dimensional (2-D) and three-dimensional (3-D) multispectral magnetic resonance (MR) images that have been corrupted by intensity inhomogeneities, also known as shading artifacts. The algorithm is an extension of the 2-D adaptive fuzzy C-means algorithm (2-D AFCM) presented in previous work by the authors. This algorithm models the intensity inhomogeneities as a gain field that causes image intensities to smoothly and slowly vary through the image space. It iteratively adapts to the intensity inhomogeneities and is completely automated. In this paper, we fully generalize 2-D AFCM to three-dimensional (3-D) multispectral images. Because of the potential size of 3-D image data, we also describe a new faster multigrid-based algorithm for its implementation. We show, using simulated MR data, that 3-D AFCM yields lower error rates than both the standard fuzzy C-means (FCM) algorithm and two other competing methods, when segmenting corrupted images. Its efficacy is further demonstrated using real 3-D scalar and multispectral MR brain images.
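
    The fuzzy C-means core that AFCM extends can be sketched as follows (standard FCM without the paper's gain-field adaptation or multigrid acceleration; `m` is the usual fuzziness exponent):

    ```python
    import numpy as np

    def fcm(x, n_clusters=2, m=2.0, n_iter=100, seed=0):
        """Standard fuzzy C-means on data x of shape (N, D).

        Returns cluster centers (n_clusters, D) and fuzzy memberships
        (n_clusters, N); each point's memberships sum to 1.
        """
        rng = np.random.default_rng(seed)
        u = rng.random((n_clusters, len(x)))
        u /= u.sum(axis=0)                                   # normalize memberships
        for _ in range(n_iter):
            um = u ** m
            centers = um @ x / um.sum(axis=1, keepdims=True)  # weighted centroids
            d = np.linalg.norm(x[None, :, :] - centers[:, None, :], axis=2) + 1e-12
            u = d ** (-2.0 / (m - 1))                         # inverse-distance update
            u /= u.sum(axis=0)
        return centers, u
    ```

    AFCM replaces the fixed distance to each center with a distance modulated by a smoothly varying gain field, which is estimated jointly with the memberships.
    
    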

  2. Shear wave transmissivity measurement by color Doppler shear wave imaging

    NASA Astrophysics Data System (ADS)

    Yamakoshi, Yoshiki; Yamazaki, Mayuko; Kasahara, Toshihiro; Sunaguchi, Naoki; Yuminaka, Yasushi

    2016-07-01

    Shear wave elastography is a useful method for evaluating tissue stiffness. We have proposed a novel shear wave imaging method (color Doppler shear wave imaging: CD SWI), which utilizes the signal processing unit of ultrasound color flow imaging to detect the shear wave wavefront in real time. Shear wave velocity is adopted to characterize tissue stiffness; however, it is difficult to measure tissue stiffness with high spatial resolution because of the artifact produced by shear wave diffraction. Spatial averaging in the image reconstruction method also degrades the spatial resolution. In this paper, we propose a novel measurement method for the shear wave transmissivity of a tissue boundary. Shear wave wavefront maps are acquired while changing the displacement amplitude of the shear wave, and the transmissivity of the shear wave, which gives the difference in shear wave velocity between the two media separated by the boundary, is measured from the ratio of the two threshold voltages required to form the shear wave wavefronts in the two media. Based on this method, a high-resolution shear wave amplitude imaging method that reconstructs a tissue boundary is proposed.

  3. Two-color excited-state absorption imaging of melanins

    NASA Astrophysics Data System (ADS)

    Fu, Dan; Ye, Tong; Matthews, Thomas E.; Yurtsever, Gunay; Hong, Lian; Simon, John D.; Warren, Warren S.

    2007-02-01

    We have demonstrated a new method for imaging melanin with two-color excited-state absorption microscopy. If one of two synchronized mode-locked pulse trains at different colors is intensity modulated, the modulation transfers to the other pulse train when nonlinear absorption takes place in the medium. We can easily measure 10^-6 absorption changes caused by either instantaneous two-photon absorption or relatively long-lived excited-state absorption with an RF lock-in amplifier. Eumelanin and pheomelanin exhibit similar excited-state dynamics. However, their difference in excited-state absorption and ground-state absorption leads to a change in the phase of the transient absorption signal. Scanning microscopic imaging is performed with B16 cells and melanoma tissue to demonstrate the 3D high-resolution imaging capability. Different melanosome samples are also imaged to illustrate the differences between eumelanin and pheomelanin signals. These differences could enable us to image their respective distributions in tissue samples and provide valuable information in diagnosing malignant transformation of melanocytes.

  4. Fabricating a better mouthguard. Part II: the effect of color on adaptation and fit.

    PubMed

    Del Rossi, Gianluca; Lisman, Peter; Signorile, Joseph

    2008-04-01

    The thermoforming process involves the heating of plastic sheets to a critical temperature followed by the shaping of the heated material into a three-dimensional structure. Given that custom-fabricated mouthguards are produced using the thermoforming process, the adaptation of plastic sheets to a stone model of the dentition is likely to be affected by the ability of the mouthguard material to be heated. The purpose of this study was to establish whether material color affects the adaptation and fit of custom-made mouthguards. Twelve stone models were used in this investigation. Five mouthguards were produced using each model. These mouthguards were made using clear-, white-, black-, blue- and green-colored ethyl vinyl acetate. The force required to remove the various colored mouthguards from the corresponding stone models was determined using a strain gauge housed within a specially designed apparatus. Each of the mouthguards was tested three times at two different angles of pull, 45 degrees and 90 degrees. Statistical tests performed on the average amount of force required for mouthguard removal revealed an angle-by-color interaction. Post hoc analyses revealed that the mean force required to remove the clear-colored mouthguards from their respective stone models was significantly less than the force required to pull away the blue-, black- and green-colored mouthguards. This difference between clear- and dark-colored mouthguards was observed at both angles tested, with the exception of the black mouthguard, which differed from the clear-colored mouthguard only when removed at an angle of 90 degrees. The results of the present study indicate that by using dark-colored mouthguard material, one can achieve superior adaptation and thus produce a more firmly fitting mouthguard.

  5. Data Hiding Scheme on Medical Image using Graph Coloring

    NASA Astrophysics Data System (ADS)

    Astuti, Widi; Adiwijaya; Novia Wisety, Untari

    2015-06-01

    Digital medical images are now widely used [4]. Medical images require protection since they may pass through insecure networks. Several watermarking techniques have been developed to guarantee the originality of digital medical images. In watermarking, the medical image is the protected object. Nevertheless, a medical image can also serve as a medium for hiding secret data such as a patient's medical record. Data hiding is done by inserting data into an image, usually called image steganography. Because changes to a medical image can influence diagnosis, steganography is applied only to the region of non-interest. Vector Quantization (VQ) is a prominent and frequently used lossy data compression technique. Generally, VQ-based steganography schemes are still limited in the amount of data that can be inserted. This research aims to develop a steganography scheme based on Vector Quantization and graph coloring. Test results show that the scheme can insert 28768 bytes of data, equal to 10077 characters, in an image area of 3696 pixels.
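
    The graph-coloring ingredient can be illustrated with a generic greedy coloring (a sketch only; how the scheme maps VQ codewords onto graph vertices and uses the resulting color classes for embedding is not specified in this abstract):

    ```python
    def greedy_coloring(adj):
        """Greedy graph coloring: visit vertices in order and give each the
        smallest color index not used by an already-colored neighbor.

        adj: dict mapping each vertex to a list of its neighbors.
        Returns a dict mapping vertex -> color index.
        """
        colors = {}
        for v in sorted(adj):
            used = {colors[u] for u in adj[v] if u in colors}
            c = 0
            while c in used:        # find the smallest free color
                c += 1
            colors[v] = c
        return colors
    ```

    Greedy coloring needs at most max-degree + 1 colors, which keeps the partition of codewords small.
    
    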

  6. Learning self-adaptive color harmony model for aesthetic quality classification

    NASA Astrophysics Data System (ADS)

    Kuang, Zhijie; Lu, Peng; Wang, Xiaojie; Lu, Xiaofeng

    2015-03-01

    Color harmony is one of the key aspects of aesthetic quality classification for photos. Existing color harmony models either lack quantization schemes or can only assess simple color patterns; therefore, they cannot be applied directly to assess the color harmony of photos. To address this problem, we propose a simple, data-based, self-adaptive color harmony model. In this model, the hue distribution of a photo is fitted by a mean-shift-based method, features are then extracted according to this distribution, and finally a Gaussian mixture model is learned from the features extracted from all the photos. Experimental results on datasets of eight categories show that the proposed method outperforms classic rule-based methods and the state-of-the-art data-based model.
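
    The mean-shift fitting of the hue distribution can be sketched in one dimension (a minimal flat-kernel version; it ignores the circularity of hue, which a real implementation would handle):

    ```python
    import numpy as np

    def mean_shift_modes(samples, bandwidth=1.0, n_iter=50):
        """1-D flat-kernel mean shift: shift each point to the mean of the
        samples within `bandwidth`, then merge converged points into modes."""
        pts = samples.astype(float).copy()
        for _ in range(n_iter):
            for i, p in enumerate(pts):
                window = samples[np.abs(samples - p) <= bandwidth]
                pts[i] = window.mean()
        modes = []
        for p in np.sort(pts):       # deduplicate nearby converged points
            if not modes or abs(p - modes[-1]) > bandwidth / 2:
                modes.append(p)
        return np.array(modes)
    ```

    The recovered modes (dominant hues) and the mass around each one are the kind of distribution summary from which harmony features can be extracted.
    
    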

  7. Color image enhancement using correlated intensity and saturation adjustments

    NASA Astrophysics Data System (ADS)

    Kwok, Ngaiming; Shi, Haiyan; Fang, Gu; Ha, Quang; Yu, Ying-Hao; Wu, Tonghai; Li, Huaizhong; Nguyen, Thai

    2015-07-01

    The enhancement of digital color images needs to be performed in accordance with human perception in terms of hue, saturation, and intensity attributes, instead of improving only the contrast. Two approaches were developed in this work, which use a correlated adjustment mechanism incorporating intensity and saturation attributes and provide contrast and saturation enhancements together with brightness consistency. In these algorithms, object edges are emphasized for contrast, and image saturation is increased by boosting the salient regions. Furthermore, intensity and saturation enhancements are carried out in a lattice structure where adjustments are interrelated for better performance. Experiments were conducted with benchmark and real-world images. Results showed both qualitative and quantitative improvements in image quality.

  8. False-Color-Image Map of Quadrangle 3164, Lashkargah (605) and Kandahar (606) Quadrangles, Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Turner, Kenzie J.

    2007-01-01

    This map is a false-color rendition created from Landsat 7 Enhanced Thematic Mapper Plus imagery collected between 1999 and 2002. The false colors were generated by applying an adaptive histogram equalization stretch to Landsat bands 7 (displayed in red), 4 (displayed in green), and 2 (displayed in blue). These three bands contain most of the spectral differences provided by Landsat imagery and, therefore, provide the most discrimination between surface materials. Landsat bands 4 and 7 are in the near-infrared and short-wave-infrared regions, respectively, where differences in absorption of sunlight by different surface materials are more pronounced than in visible wavelengths. Cultural data were extracted from files downloaded from the Afghanistan Information Management Service (AIMS) Web site (http://www.aims.org.af). The AIMS files were originally derived from maps produced by the Afghanistan Geodesy and Cartography Head Office (AGCHO). Cultural features were not derived from the Landsat base and consequently do not match it precisely. This map is part of a series that includes a geologic map, a topographic map, a Landsat natural-color-image map, and a Landsat false-color-image map for the USGS/AGS (U.S. Geological Survey/Afghan Geological Survey) quadrangles covering Afghanistan. The maps for any given quadrangle have the same open-file report (OFR) number but a different letter suffix, namely, -A, -B, -C, and -D for the geologic, topographic, Landsat natural-color, and Landsat false-color maps, respectively. The OFR numbers range in sequence from 1092 to 1123. The present map series is to be followed by a second series, in which the geology is reinterpreted on the basis of analysis of remote-sensing data, limited fieldwork, and library research. The second series is to be produced by the USGS in cooperation with the AGS and AGCHO.
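
    The histogram equalization stretch applied to each Landsat band can be illustrated with its simpler global variant (a minimal numpy sketch; the map production used an adaptive, locally varying version and band-specific processing not shown here):

    ```python
    import numpy as np

    def hist_equalize(band, n_bins=256):
        """Global histogram equalization of a single image band to [0, 255]:
        map each pixel through the band's cumulative distribution function."""
        hist, edges = np.histogram(band, bins=n_bins)
        cdf = hist.cumsum().astype(float)
        cdf /= cdf[-1]                                  # normalize CDF to [0, 1]
        out = np.interp(band.ravel(), edges[:-1], cdf * 255.0)
        return out.reshape(band.shape)
    ```

    An RGB composite would apply such a stretch to each of the three selected bands before stacking them into the display channels.
    
    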

  9. False-Color-Image Map of Quadrangle 3568, Polekhomri (503) and Charikar (504) Quadrangles, Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Turner, Kenzie J.

    2007-01-01

    This map is a false-color rendition created from Landsat 7 Enhanced Thematic Mapper Plus imagery collected between 1999 and 2002. The false colors were generated by applying an adaptive histogram equalization stretch to Landsat bands 7 (displayed in red), 4 (displayed in green), and 2 (displayed in blue). These three bands contain most of the spectral differences provided by Landsat imagery and, therefore, provide the most discrimination between surface materials. Landsat bands 4 and 7 are in the near-infrared and short-wave-infrared regions, respectively, where differences in absorption of sunlight by different surface materials are more pronounced than in visible wavelengths. Cultural data were extracted from files downloaded from the Afghanistan Information Management Service (AIMS) Web site (http://www.aims.org.af). The AIMS files were originally derived from maps produced by the Afghanistan Geodesy and Cartography Head Office (AGCHO). Cultural features were not derived from the Landsat base and consequently do not match it precisely. This map is part of a series that includes a geologic map, a topographic map, a Landsat natural-color-image map, and a Landsat false-color-image map for the USGS/AGS (U.S. Geological Survey/Afghan Geological Survey) quadrangles covering Afghanistan. The maps for any given quadrangle have the same open-file report (OFR) number but a different letter suffix, namely, -A, -B, -C, and -D for the geologic, topographic, Landsat natural-color, and Landsat false-color maps, respectively. The OFR numbers range in sequence from 1092 to 1123. The present map series is to be followed by a second series, in which the geology is reinterpreted on the basis of analysis of remote-sensing data, limited fieldwork, and library research. The second series is to be produced by the USGS in cooperation with the AGS and AGCHO.

  10. False-Color-Image Map of Quadrangle 3266, Ourzgan (519) and Moqur (520) Quadrangles, Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Turner, Kenzie J.

    2007-01-01

    This map is a false-color rendition created from Landsat 7 Enhanced Thematic Mapper Plus imagery collected between 1999 and 2002. The false colors were generated by applying an adaptive histogram equalization stretch to Landsat bands 7 (displayed in red), 4 (displayed in green), and 2 (displayed in blue). These three bands contain most of the spectral differences provided by Landsat imagery and, therefore, provide the most discrimination between surface materials. Landsat bands 4 and 7 are in the near-infrared and short-wave-infrared regions, respectively, where differences in absorption of sunlight by different surface materials are more pronounced than in visible wavelengths. Cultural data were extracted from files downloaded from the Afghanistan Information Management Service (AIMS) Web site (http://www.aims.org.af). The AIMS files were originally derived from maps produced by the Afghanistan Geodesy and Cartography Head Office (AGCHO). Cultural features were not derived from the Landsat base and consequently do not match it precisely. This map is part of a series that includes a geologic map, a topographic map, a Landsat natural-color-image map, and a Landsat false-color-image map for the USGS/AGS (U.S. Geological Survey/Afghan Geological Survey) quadrangles covering Afghanistan. The maps for any given quadrangle have the same open-file report (OFR) number but a different letter suffix, namely, -A, -B, -C, and -D for the geologic, topographic, Landsat natural-color, and Landsat false-color maps, respectively. The OFR numbers range in sequence from 1092 to 1123. The present map series is to be followed by a second series, in which the geology is reinterpreted on the basis of analysis of remote-sensing data, limited fieldwork, and library research. The second series is to be produced by the USGS in cooperation with the AGS and AGCHO.

  11. False-Color-Image Map of Quadrangle 3162, Chakhansur (603) and Kotalak (604) Quadrangles, Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Turner, Kenzie J.

    2007-01-01

    This map is a false-color rendition created from Landsat 7 Enhanced Thematic Mapper Plus imagery collected between 1999 and 2002. The false colors were generated by applying an adaptive histogram equalization stretch to Landsat bands 7 (displayed in red), 4 (displayed in green), and 2 (displayed in blue). These three bands contain most of the spectral differences provided by Landsat imagery and, therefore, provide the most discrimination between surface materials. Landsat bands 4 and 7 are in the near-infrared and short-wave-infrared regions, respectively, where differences in absorption of sunlight by different surface materials are more pronounced than in visible wavelengths. Cultural data were extracted from files downloaded from the Afghanistan Information Management Service (AIMS) Web site (http://www.aims.org.af). The AIMS files were originally derived from maps produced by the Afghanistan Geodesy and Cartography Head Office (AGCHO). Cultural features were not derived from the Landsat base and consequently do not match it precisely. This map is part of a series that includes a geologic map, a topographic map, a Landsat natural-color-image map, and a Landsat false-color-image map for the USGS/AGS (U.S. Geological Survey/Afghan Geological Survey) quadrangles covering Afghanistan. The maps for any given quadrangle have the same open-file report (OFR) number but a different letter suffix, namely, -A, -B, -C, and -D for the geologic, topographic, Landsat natural-color, and Landsat false-color maps, respectively. The OFR numbers range in sequence from 1092 to 1123. The present map series is to be followed by a second series, in which the geology is reinterpreted on the basis of analysis of remote-sensing data, limited fieldwork, and library research. The second series is to be produced by the USGS in cooperation with the AGS and AGCHO.

  12. False-Color-Image Map of Quadrangle 3564, Chahriaq (Joand) (405) and Gurziwan (406) Quadrangles, Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Turner, Kenzie J.

    2007-01-01

    This map is a false-color rendition created from Landsat 7 Enhanced Thematic Mapper Plus imagery collected between 1999 and 2002. The false colors were generated by applying an adaptive histogram equalization stretch to Landsat bands 7 (displayed in red), 4 (displayed in green), and 2 (displayed in blue). These three bands contain most of the spectral differences provided by Landsat imagery and, therefore, provide the most discrimination between surface materials. Landsat bands 4 and 7 are in the near-infrared and short-wave-infrared regions, respectively, where differences in absorption of sunlight by different surface materials are more pronounced than in visible wavelengths. Cultural data were extracted from files downloaded from the Afghanistan Information Management Service (AIMS) Web site (http://www.aims.org.af). The AIMS files were originally derived from maps produced by the Afghanistan Geodesy and Cartography Head Office (AGCHO). Cultural features were not derived from the Landsat base and consequently do not match it precisely. This map is part of a series that includes a geologic map, a topographic map, a Landsat natural-color-image map, and a Landsat false-color-image map for the USGS/AGS (U.S. Geological Survey/Afghan Geological Survey) quadrangles covering Afghanistan. The maps for any given quadrangle have the same open-file report (OFR) number but a different letter suffix, namely, -A, -B, -C, and -D for the geologic, topographic, Landsat natural-color, and Landsat false-color maps, respectively. The OFR numbers range in sequence from 1092 to 1123. The present map series is to be followed by a second series, in which the geology is reinterpreted on the basis of analysis of remote-sensing data, limited fieldwork, and library research. The second series is to be produced by the USGS in cooperation with the AGS and AGCHO.

  13. False-Color-Image Map of Quadrangle 3464, Shahrak (411) and Kasi (412) Quadrangles, Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Turner, Kenzie J.

    2007-01-01

    This map is a false-color rendition created from Landsat 7 Enhanced Thematic Mapper Plus imagery collected between 1999 and 2002. The false colors were generated by applying an adaptive histogram equalization stretch to Landsat bands 7 (displayed in red), 4 (displayed in green), and 2 (displayed in blue). These three bands contain most of the spectral differences provided by Landsat imagery and, therefore, provide the most discrimination between surface materials. Landsat bands 4 and 7 are in the near-infrared and short-wave-infrared regions, respectively, where differences in absorption of sunlight by different surface materials are more pronounced than in visible wavelengths. Cultural data were extracted from files downloaded from the Afghanistan Information Management Service (AIMS) Web site (http://www.aims.org.af). The AIMS files were originally derived from maps produced by the Afghanistan Geodesy and Cartography Head Office (AGCHO). Cultural features were not derived from the Landsat base and consequently do not match it precisely. This map is part of a series that includes a geologic map, a topographic map, a Landsat natural-color-image map, and a Landsat false-color-image map for the USGS/AGS (U.S. Geological Survey/Afghan Geological Survey) quadrangles covering Afghanistan. The maps for any given quadrangle have the same open-file report (OFR) number but a different letter suffix, namely, -A, -B, -C, and -D for the geologic, topographic, Landsat natural-color, and Landsat false-color maps, respectively. The OFR numbers range in sequence from 1092 to 1123. The present map series is to be followed by a second series, in which the geology is reinterpreted on the basis of analysis of remote-sensing data, limited fieldwork, and library research. The second series is to be produced by the USGS in cooperation with the AGS and AGCHO.

  14. False-Color-Image Map of Quadrangle 3366, Gizab (513) and Nawer (514) Quadrangles, Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Turner, Kenzie J.

    2007-01-01

    This map is a false-color rendition created from Landsat 7 Enhanced Thematic Mapper Plus imagery collected between 1999 and 2002. The false colors were generated by applying an adaptive histogram equalization stretch to Landsat bands 7 (displayed in red), 4 (displayed in green), and 2 (displayed in blue). These three bands contain most of the spectral differences provided by Landsat imagery and, therefore, provide the most discrimination between surface materials. Landsat bands 4 and 7 are in the near-infrared and short-wave-infrared regions, respectively, where differences in absorption of sunlight by different surface materials are more pronounced than in visible wavelengths. Cultural data were extracted from files downloaded from the Afghanistan Information Management Service (AIMS) Web site (http://www.aims.org.af). The AIMS files were originally derived from maps produced by the Afghanistan Geodesy and Cartography Head Office (AGCHO). Cultural features were not derived from the Landsat base and consequently do not match it precisely. This map is part of a series that includes a geologic map, a topographic map, a Landsat natural-color-image map, and a Landsat false-color-image map for the USGS/AGS (U.S. Geological Survey/Afghan Geological Survey) quadrangles covering Afghanistan. The maps for any given quadrangle have the same open-file report (OFR) number but a different letter suffix, namely, -A, -B, -C, and -D for the geologic, topographic, Landsat natural-color, and Landsat false-color maps, respectively. The OFR numbers range in sequence from 1092 to 1123. The present map series is to be followed by a second series, in which the geology is reinterpreted on the basis of analysis of remote-sensing data, limited fieldwork, and library research. The second series is to be produced by the USGS in cooperation with the AGS and AGCHO.

  15. Color image segmentation using watershed and Nyström method based spectral clustering

    NASA Astrophysics Data System (ADS)

    Bai, Xiaodong; Cao, Zhiguo; Yu, Zhenghong; Zhu, Hu

    2011-11-01

    Color image segmentation has drawn considerable attention recently. In order to improve the efficiency of spectral clustering in color image segmentation, a novel two-stage color image segmentation method is proposed. In the first stage, we use a vector gradient approach to detect color image gradient information, and watershed transformation to obtain the pre-segmentation result. In the second stage, Nyström extension based spectral clustering is used to obtain the final result. To verify the proposed algorithm, it is applied to color images from the Berkeley Segmentation Dataset. Experiments show that our method produces promising results and reduces the runtime significantly.
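
    The spectral-clustering stage can be illustrated with a plain two-way cut on a small point set (a numpy sketch of the Fiedler-vector bipartition; the paper's Nyström extension, which avoids forming the full affinity matrix, is not shown):

    ```python
    import numpy as np

    def spectral_bipartition(points, sigma=3.0):
        """Two-way spectral cut: threshold the Fiedler vector (eigenvector of
        the 2nd-smallest eigenvalue) of the normalized graph Laplacian."""
        d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
        w = np.exp(-d2 / (2.0 * sigma ** 2))            # Gaussian affinity matrix
        d = w.sum(axis=1)                               # vertex degrees
        lap = np.eye(len(points)) - w / np.sqrt(np.outer(d, d))
        _, vecs = np.linalg.eigh(lap)                   # ascending eigenvalues
        fiedler = vecs[:, 1]
        return (fiedler > 0).astype(int)
    ```

    In the two-stage method, the items being clustered would be watershed regions rather than raw pixels, which is what makes the eigen-decomposition tractable.
    
    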

  16. A channel-based color fusion technique using multispectral images for night vision enhancement

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng

    2011-09-01

    A fused image built from multispectral images can increase the reliability of interpretation because it combines the complementary information apparent in the multispectral bands. Meanwhile, a color image can be easily interpreted by human users for visual analysis, thus improving observer performance and reaction times. We propose a fast color fusion method, termed channel-based color fusion, which is efficient for real-time applications. Note that the term "color fusion" means combining multispectral images into a color image with the purpose of resembling natural scenes; false-coloring techniques, on the other hand, usually make no attempt to resemble natural scenery. The framework of channel-based color fusion is as follows: (1) prepare for color fusion by preprocessing, image registration, and fusion; (2) form a color fusion image by properly assigning the multispectral images to the red, green, and blue channels; (3) fuse the multispectral images (gray fusion) using a wavelet-based fusion algorithm; and (4) replace the value component of the color fusion image in HSV color space with the gray-fusion image, and finally transform back to RGB space. In night vision imaging, two or more bands of images may be available, for example, visible (RGB), image intensified (II), near infrared (NIR), medium-wave infrared (MWIR), and long-wave infrared (LWIR). The proposed channel-wise color fusions were tested with two-band (e.g., NIR + LWIR, II + LWIR, RGB + LWIR) or three-band (e.g., RGB + NIR + LWIR) multispectral images. Experimental results show that the colors in the images fused by the proposed method are vivid and comparable with those of segmentation-based colorization. The processing speed of the new method is much faster than that of any segmentation-based method.
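
    Step (4), replacing the HSV value component with the gray-fusion image, can be sketched per pixel with the standard-library `colorsys` conversions (a slow illustrative loop over floats in [0, 1]; a real-time implementation would vectorize the color-space transform):

    ```python
    import colorsys
    import numpy as np

    def value_replacement_fusion(rgb, gray_fused):
        """Keep each pixel's hue and saturation from the color-fusion image
        but take its brightness (HSV value) from the gray-fusion image.

        rgb: (H, W, 3) floats in [0, 1]; gray_fused: (H, W) floats in [0, 1].
        """
        h, w, _ = rgb.shape
        out = np.empty_like(rgb, dtype=float)
        for i in range(h):
            for j in range(w):
                hue, sat, _ = colorsys.rgb_to_hsv(*rgb[i, j])
                out[i, j] = colorsys.hsv_to_rgb(hue, sat, gray_fused[i, j])
        return out
    ```

    This preserves the natural-looking channel assignment of step (2) while injecting the detail captured by the wavelet gray fusion of step (3).
    
    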

  17. Ecological genetics of adaptive color polymorphism in pocket mice: geographic variation in selected and neutral genes.

    PubMed

    Hoekstra, Hopi E; Drumm, Kristen E; Nachman, Michael W

    2004-06-01

    Patterns of geographic variation in phenotype or genotype may provide evidence for natural selection. Here, we compare phenotypic variation in color, allele frequencies of a pigmentation gene (the melanocortin-1 receptor, Mc1r), and patterns of neutral mitochondrial DNA (mtDNA) variation in rock pocket mice (Chaetodipus intermedius) across a habitat gradient in southern Arizona. Pocket mice inhabiting volcanic lava have dark coats with unbanded, uniformly melanic hairs, whereas mice from nearby light-colored granitic rocks have light coats with banded hairs. This color polymorphism is a presumed adaptation to avoid predation. Previous work has demonstrated that two Mc1r alleles, D and d, differ by four amino acids, and are responsible for the color polymorphism: DD and Dd genotypes are melanic whereas dd genotypes are light colored. To determine the frequency of the two Mc1r allelic classes across the dark-colored lava and neighboring light-colored granite, we sequenced the Mc1r gene in 175 individuals from a 35-km transect in the Pinacate lava region. We also sequenced two neutral mtDNA genes, COIII and ND3, in the same individuals. We found a strong correlation between Mc1r allele frequency and habitat color and no correlation between mtDNA markers and habitat color. Using estimates of migration from mtDNA haplotypes between dark- and light-colored sampling sites and Mc1r allele frequencies at each site, we estimated selection coefficients against mismatched Mc1r alleles, assuming a simple model of migration-selection balance. Habitat-dependent selection appears strong but asymmetric: selection is stronger against light mice on dark rock than against melanic mice on light rock. Together these results suggest that natural selection acts to match pocket mouse coat color to substrate color, despite high levels of gene flow between light and melanic populations.
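
    The migration-selection-balance estimate can be illustrated with the textbook one-island approximation (a generic haploid form; the authors' model may differ in detail):

```latex
% Mismatched alleles arrive by migration at rate m and are removed by
% selection with coefficient s; at equilibrium the mismatched-allele
% frequency satisfies
\hat{q} \;\approx\; \frac{m}{s}
\quad\Longrightarrow\quad
\hat{s} \;\approx\; \frac{m}{\hat{q}},
% so the mtDNA-based migration estimate m and the observed Mc1r
% frequency \hat{q} at a site together yield the selection coefficient
% against the mismatched allele.
```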

  18. The evolution of wing color: male mate choice opposes adaptive wing color divergence in Colias butterflies.

    PubMed

    Ellers, Jacintha; Boggs, Carol L

    2003-05-01

    Correlated evolution of mate signals and mate preference may be constrained if selection pressures acting on mate preference differ from those acting on mate signals. In particular, opposing selection pressures may act on mate preference and signals when traits have sexual as well as nonsexual functions. In the butterfly Colias philodice eriphyle, divergent selection on wing color across an elevational gradient in response to the thermal environment has led to increasing wing melanization at higher elevations. Wing color is also a long-range signal used by males in mate searching. We conducted experiments to test whether sexual selection on wing melanization via male mate choice acts in the same direction as natural selection on mate signals due to the thermal environment. We performed controlled mate choice experiments in the field over an elevational range of 1500 meters using decoy butterflies with different melanization levels. Also, we obtained a more direct estimate of the relation between wing color and sexual selection by measuring mating success in wild-caught females. Both our experiments showed that wing melanization is an important determinant of female mating success in C. p. eriphyle. However, a lack of elevational variation in male mate preference prevents coevolution of mate signals and mate preference, as males at all elevations prefer less-melanized females. We suggest that this apparently maladaptive mate choice may be maintained by differences in detectability between the morphs or by preservation of species recognition.

  19. Infrared and color visible image fusion system based on luminance-contrast transfer technique

    NASA Astrophysics Data System (ADS)

    Wang, Bo; Gong, Wenfeng; Wang, Chensheng

    2012-12-01

    In this paper, an infrared and color visible image fusion algorithm based on the luminance-contrast transfer technique is presented. The algorithm applies a YCbCr transform to the color visible image to obtain its luminance component. A gray-scale image fusion method is then used to fuse the luminance component of the visible image with the infrared image, yielding a gray-scale fusion image. The gray-scale fusion image and the visible image are then combined into a color fusion image via the inverse YCbCr transform. To improve the appearance of details, a natural-sense color transfer fusion algorithm based on a reference image is proposed. Furthermore, a real-time infrared/visible image fusion system based on an FPGA is realized. Finally, the design is verified experimentally, and the results show that the system produces color fusion images with good image quality and real-time performance.
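
    A minimal sketch of luminance-contrast transfer, assuming BT.601 YCbCr coefficients and substituting a simple per-pixel maximum for the paper's gray-scale fusion method:

```python
import numpy as np

def fuse_ir_visible(vis_rgb, ir, fuse=lambda y, ir: np.maximum(y, ir)):
    """Fuse an IR image into the luminance channel of a color visible image.

    Sketch only: BT.601 YCbCr is assumed, and the gray-level fusion rule
    is a placeholder per-pixel maximum rather than the paper's method.
    """
    r, g, b = vis_rgb[..., 0], vis_rgb[..., 1], vis_rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b      # luminance
    cb = 0.564 * (b - y)                        # blue-difference chroma
    cr = 0.713 * (r - y)                        # red-difference chroma
    yf = fuse(y, ir)                            # gray-scale fusion on luminance only
    # inverse BT.601 transform with the fused luminance, chroma unchanged
    rf = yf + 1.403 * cr
    gf = yf - 0.344 * cb - 0.714 * cr
    bf = yf + 1.773 * cb
    return np.clip(np.stack([rf, gf, bf], axis=-1), 0.0, 1.0)

vis = np.array([[[0.6, 0.4, 0.3], [0.2, 0.5, 0.7]]])
ir = np.array([[0.9, 0.1]])
out = fuse_ir_visible(vis, ir)
```

    Keeping Cb and Cr untouched is what preserves the natural colors while the IR contrast is transferred through the luminance channel.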

  20. Three frequency false-color image of Prince Albert, Canada

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This is a three-frequency, false color image of Prince Albert, Canada, centered at 53.91 north latitude and 104.69 west longitude. It was produced using data from the X-band, C-band and L-band radars that comprise the Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar (SIR-C/X-SAR). SIR-C/X-SAR acquired this image on the 20th orbit of the Shuttle Endeavour. The area is located 40 km north and 30 km east of the town of Prince Albert in the Saskatchewan province of Canada. The image covers the area east of the Candle Lake, between gravel surface highways 120 and 106 and west of 106. The area in the middle of the image covers the entire Nipawin (Narrow Hills) provincial park. Most of the dark blue areas in the image are the ice covered lakes. The dark area on the top right corner of the image is the White Gull Lake north of the intersection of highway 120 and 913. The right middle part of the image shows Lake Ispuchaw and Lower Fishing Lake. The deforested areas are shown by light

  1. False-color composite image of Prince Albert, Canada

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This is a false color composite of Prince Albert, Canada, centered at 53.91 north latitude and 104.69 west longitude. This image was acquired by the Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar (SIR-C/X-SAR) on the 20th orbit of the Shuttle Endeavour. The area is located 40 km north and 30 km east of the town of Prince Albert in the Saskatchewan province of Canada. The image covers the area east of the Candle Lake, between gravel surface highways 120 and 106 and west of 106. The area in the middle of the image covers the entire Nipawin (Narrow Hills) provincial park. The look angle of the radar is 30 degrees and the size of the image is approximately 20 kilometers by 50 kilometers (12 by 30 miles). Most of the dark areas in the image are the ice-covered lakes in the region. The dark area on the top right corner of the image is the White Gull Lake north of the intersection of Highway 120 and 913. The right middle part of the image shows Lake Ispuchaw and Lower Fishing Lake

  2. Blood flow estimation in gastroscopic true-color images

    NASA Astrophysics Data System (ADS)

    Jacoby, Raffael S.; Herpers, Rainer; Zwiebel, Franz M.; Englmeier, Karl-Hans

    1995-05-01

    The assessment of blood flow in the gastrointestinal mucosa might be an important factor for the diagnosis and treatment of several diseases such as ulcers, gastritis, colitis, or early cancer. Blood flow is roughly estimated by computing the spatial hemoglobin distribution in the mucosa. The presented method enables a practical realization by approximating the hemoglobin concentration from a spectrophotometric analysis of endoscopic true-color images recorded during routine examinations. A system model based on the Kubelka-Munk law of reflectance spectroscopy is derived, which enables estimation of the hemoglobin concentration from the color values of the images. Additionally, a transformation of the color values is developed to improve luminance independence. Applying this transformation and estimating the hemoglobin concentration for each pixel of interest, the hemoglobin distribution can be computed; the obtained results are largely independent of luminance. An initial validation of the presented method is performed by quantitatively estimating its reproducibility.
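
    As a simple illustration of estimating a hemoglobin-related quantity from true-color values, here is the widely used endoscopic hemoglobin index IHb = 32 log2(R/G). This is an assumption-laden stand-in: the paper's own estimate is derived from the Kubelka-Munk reflectance model, not from this index.

```python
import numpy as np

def hemoglobin_index(rgb, eps=1e-6):
    """Pixelwise hemoglobin index IHb = 32*log2(R/G), a common color-based
    proxy in endoscopy. Shown only to illustrate a per-pixel hemoglobin
    map from color values; it is NOT the paper's Kubelka-Munk estimate.
    """
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    return 32.0 * np.log2((r + eps) / (g + eps))

red = np.array([[[0.8, 0.2, 0.2]]])   # mucosa-like reddish pixel
ihb = hemoglobin_index(red)
```

    Redder (more perfused) pixels score higher, while gray pixels score near zero, so mapping the index over all pixels of interest yields a hemoglobin distribution image.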

  3. Generating color terrain images in an emergency response system

    SciTech Connect

    Belles, R.D.

    1985-08-01

    The Atmospheric Release Advisory Capability (ARAC) provides real-time assessments of the consequences resulting from an atmospheric release of radioactive material. In support of this operation, a system has been created which integrates numerical models, data acquisition systems, data analysis techniques, and professional staff. Of particular importance is the rapid generation of graphical images of the terrain surface in the vicinity of the accident site. A terrain data base and an associated acquisition system have been developed that provide the required terrain data. This data is then used as input to a collection of graphics programs which create and display realistic color images of the terrain. The graphics system currently has the capability of generating color shaded relief images from both overhead and perspective viewpoints within minutes. These images serve to quickly familiarize ARAC assessors with the terrain near the release location, and thus permit them to make better informed decisions in modeling the behavior of the released material. 7 refs., 8 figs.
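
    A shaded-relief image of the kind described can be computed from an elevation grid with the classic Lambertian hillshade formula. This is a generic sketch; the abstract does not describe ARAC's actual graphics programs.

```python
import numpy as np

def hillshade(elev, azimuth_deg=315.0, altitude_deg=45.0, cell=1.0):
    """Classic Lambertian shaded relief from an elevation grid.

    azimuth_deg: compass direction of the light source;
    altitude_deg: its angle above the horizon; cell: grid spacing.
    """
    dy, dx = np.gradient(elev, cell)           # terrain slope components
    slope = np.arctan(np.hypot(dx, dy))        # slope angle
    aspect = np.arctan2(-dx, dy)               # downslope direction
    az = np.radians(azimuth_deg)
    alt = np.radians(altitude_deg)
    shade = (np.sin(alt) * np.cos(slope)
             + np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
    return np.clip(shade, 0.0, 1.0)
```

    Multiplying such a shade map into an elevation-to-color ramp gives the color shaded-relief rendering, from either an overhead or a perspective viewpoint.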

  4. Butterfly wing coloration studied with a novel imaging scatterometer

    NASA Astrophysics Data System (ADS)

    Stavenga, Doekele

    2010-03-01

    Animal coloration functions for display or camouflage. Insects in particular provide numerous examples of a rich variety of optical mechanisms. For instance, many butterflies feature a distinct dichromatism, that is, the wing coloration of the male and the female differ substantially. The male Brimstone, Gonepteryx rhamni, has yellow wings that are strongly UV iridescent, but the female has white wings with low reflectance in the UV and a high reflectance in the visible wavelength range. In the Small White cabbage butterfly, Pieris rapae crucivora, the wing reflectance of the male is low in the UV and high at visible wavelengths, whereas the wing reflectance of the female is higher in the UV and lower in the visible. Pierid butterflies apply nanosized, strongly scattering beads to achieve their bright coloration. The male Pipevine Swallowtail butterfly, Battus philenor, has dorsal wings with scales functioning as thin-film gratings that exhibit polarized iridescence; the dorsal wings of the female are matte black. The polarized iridescence probably functions in intraspecific sexual signaling, as has been demonstrated in Heliconius butterflies. An example of camouflage is the Green Hairstreak butterfly, Callophrys rubi, where photonic crystal domains exist in the ventral wing scales, resulting in a matte green color that closely matches the color of plant leaves. The spectral reflection and polarization characteristics of biological tissues can be assessed rapidly and in unprecedented detail with a novel imaging scatterometer-spectrophotometer built around an elliptical mirror [1]. Examples of butterfly and damselfly wings, bird feathers, and beetle cuticle will be presented. [1] D.G. Stavenga, H.L. Leertouwer, P. Pirih, M.F. Wehling, Optics Express 17, 193-202 (2009)

  5. Multi-scale Adaptive Computational Ghost Imaging

    PubMed Central

    Sun, Shuai; Liu, Wei-Tao; Lin, Hui-Zu; Zhang, Er-Feng; Liu, Ji-Ying; Li, Quan; Chen, Ping-Xing

    2016-01-01

    In some cases of imaging, a wide spatial range and high spatial resolution are both required, which demands high-performance detection devices and huge resource consumption for data processing. We propose and demonstrate a multi-scale adaptive imaging method based on the idea of computational ghost imaging. It first obtains a rough outline of the whole scene over a wide range, then finds the parts of interest and acquires high-resolution details of those parts, by controlling the field of view and the transverse coherence width of the pseudo-thermal field illuminating the scene with a spatial light modulator. Compared to typical ghost imaging, resource consumption can be dramatically reduced using our scheme. PMID:27841339

  6. Multi-scale Adaptive Computational Ghost Imaging

    NASA Astrophysics Data System (ADS)

    Sun, Shuai; Liu, Wei-Tao; Lin, Hui-Zu; Zhang, Er-Feng; Liu, Ji-Ying; Li, Quan; Chen, Ping-Xing

    2016-11-01

    In some cases of imaging, a wide spatial range and high spatial resolution are both required, which demands high-performance detection devices and huge resource consumption for data processing. We propose and demonstrate a multi-scale adaptive imaging method based on the idea of computational ghost imaging. It first obtains a rough outline of the whole scene over a wide range, then finds the parts of interest and acquires high-resolution details of those parts, by controlling the field of view and the transverse coherence width of the pseudo-thermal field illuminating the scene with a spatial light modulator. Compared to typical ghost imaging, resource consumption can be dramatically reduced using our scheme.
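
    The correlation-reconstruction core of computational ghost imaging can be sketched as follows, simulating the pseudo-thermal illumination with random patterns; the scene size, pattern count, and pattern statistics are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
obj = np.zeros((16, 16))
obj[4:12, 6:10] = 1.0                              # unknown (transmissive) scene

n = 4000
patterns = rng.random((n, 16, 16))                 # computed speckle patterns on the SLM
bucket = (patterns * obj).sum(axis=(1, 2))         # single-pixel 'bucket' measurements

# correlation reconstruction: G(x, y) = <I(x, y) B> - <I(x, y)><B>
recon = (patterns * bucket[:, None, None]).mean(axis=0) \
        - patterns.mean(axis=0) * bucket.mean()
```

    In the multi-scale scheme above, the same correlation would first run with coarse, wide-field patterns and then with fine patterns confined to the regions of interest.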

  7. Efficient adaptive thresholding with image masks

    NASA Astrophysics Data System (ADS)

    Oh, Young-Taek; Hwang, Youngkyoo; Kim, Jung-Bae; Bang, Won-Chul

    2014-03-01

    Adaptive thresholding is a useful technique for document analysis. In medical image processing, it is also helpful for segmenting structures such as diaphragms or blood vessels. The technique sets a threshold using local information around a pixel, then binarizes the pixel according to that value. Although it is robust to changes in illumination, computing the thresholds is time-consuming because it requires summing all of the neighboring pixels. Integral images can alleviate this overhead; however, medical images such as ultrasound often come with image masks, and ordinary algorithms then cause artifacts. The main problem is that the summing area is not rectangular near the boundaries of the image mask; for example, the threshold at the boundary of the mask is incorrect because masked-out pixels are also counted. Our key idea is to also compute an integral image of the mask itself, which yields the number of valid pixels in each window. Our method is implemented on a GPU using CUDA, and experimental results show that our algorithm is 164 times faster than a naïve CPU averaging algorithm.
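
    The masked integral-image idea can be sketched in NumPy; the window size and the local-mean binarization rule are illustrative, and the paper's implementation is a CUDA kernel rather than this array code.

```python
import numpy as np

def masked_adaptive_threshold(img, mask, win=7, offset=0.0):
    """Local-mean adaptive thresholding restricted to an image mask.

    Key idea from the paper: build an integral image of the mask as well,
    so each local mean divides by the number of *valid* pixels and
    masked-out neighbors never bias the threshold.
    """
    m = mask.astype(float)
    a = np.pad(img * m, ((1, 0), (1, 0))).cumsum(0).cumsum(1)  # integral of masked image
    c = np.pad(m, ((1, 0), (1, 0))).cumsum(0).cumsum(1)        # integral of mask (counts)
    h, w = img.shape
    r = win // 2
    ys, xs = np.arange(h), np.arange(w)
    y0, y1 = np.clip(ys - r, 0, h), np.clip(ys + r + 1, 0, h)
    x0, x1 = np.clip(xs - r, 0, w), np.clip(xs + r + 1, 0, w)
    def box(ii):  # windowed sums via the 4-corner integral-image lookup
        return (ii[y1[:, None], x1[None, :]] - ii[y0[:, None], x1[None, :]]
                - ii[y1[:, None], x0[None, :]] + ii[y0[:, None], x0[None, :]])
    thresh = box(a) / np.maximum(box(c), 1)       # mean over valid pixels only
    return (img > thresh + offset) & mask
```

    Because both integral images are built once, each pixel's threshold costs four lookups regardless of window size, which is what makes the GPU version fast.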

  8. Adaptive Optics Imaging in Laser Pointer Maculopathy.

    PubMed

    Sheyman, Alan T; Nesper, Peter L; Fawzi, Amani A; Jampol, Lee M

    2016-08-01

    The authors report multimodal imaging including adaptive optics scanning laser ophthalmoscopy (AOSLO) (Apaeros retinal image system AOSLO prototype; Boston Micromachines Corporation, Boston, MA) in a case of previously diagnosed unilateral acute idiopathic maculopathy (UAIM) that demonstrated features of laser pointer maculopathy. The authors also show the adaptive optics images of a laser pointer maculopathy case previously reported. A 15-year-old girl was referred for the evaluation of a maculopathy suspected to be UAIM. The authors reviewed the patient's history and obtained fluorescein angiography, autofluorescence, optical coherence tomography, infrared reflectance, and AOSLO. The time course of disease and clinical examination did not fit with UAIM, but the linear pattern of lesions was suspicious for self-inflicted laser pointer injury. This was confirmed on subsequent questioning of the patient. The presence of linear lesions in the macula that are best highlighted with multimodal imaging techniques should alert the physician to the possibility of laser pointer injury. AOSLO further characterizes photoreceptor damage in this condition. [Ophthalmic Surg Lasers Imaging Retina. 2016;47:782-785.].

  9. Automatic Microaneurysm Detection and Characterization Through Digital Color Fundus Images

    SciTech Connect

    Martins, Charles; Veras, Rodrigo; Ramalho, Geraldo; Medeiros, Fatima; Ushizima, Daniela

    2008-08-29

    Ocular fundus images can provide information about retinal, ophthalmic, and even systemic diseases such as diabetes. Microaneurysms (MAs) are the earliest sign of diabetic retinopathy, a frequently observed complication in both type 1 and type 2 diabetes. Robust detection of MAs in digital color fundus images is critical in the development of automated screening systems for this kind of disease. Automatic grading of these images is being considered by health boards so that the human grading task is reduced. In this paper, we describe the segmentation and feature extraction methods for candidate MA detection. We show that the candidate MAs detected with this methodology were successfully classified by an MLP neural network (correct classification of 84%).

  10. Characterizing pigments with hyperspectral imaging variable false-color composites

    NASA Astrophysics Data System (ADS)

    Hayem-Ghez, Anita; Ravaud, Elisabeth; Boust, Clotilde; Bastian, Gilles; Menu, Michel; Brodie-Linder, Nancy

    2015-11-01

    Hyperspectral imaging has been used for pigment characterization on paintings for the last 10 years. It is a noninvasive technique that combines the power of spectrophotometry with that of imaging technologies. We have access to a visible and near-infrared hyperspectral camera, ranging from 400 to 1000 nm in 80-160 spectral bands. To handle the large amount of data that this imaging technique generates, one can use statistical tools such as principal component analysis (PCA). To characterize pigments, researchers mostly use PCA, convex-geometry algorithms, and comparison of the resulting clusters to database spectra within a specific tolerance (like the Spectral Angle Mapper tool in the dedicated software ENVI). Our approach originates from false-color photography and aims to provide a simple tool for identifying pigments through imaging spectroscopy. It can serve as a quick first analysis to reveal the principal pigments of a painting before a more complete multivariate statistical tool is applied. We study pigment spectra for each kind of hue (blue, green, red and yellow) to identify the wavelengths that maximize spectral differences. The case of red pigments is the most interesting because our methodology discriminates the red pigments very well, even red lakes, which are always difficult to identify. For the yellow and blue categories, the method represents clear progress over infrared false-color (IRFC) photography for pigment discrimination. We apply our methodology to study the pigments of a painting by Eustache Le Sueur, a French painter of the seventeenth century, and compare the results to other noninvasive analyses such as X-ray fluorescence and optical microscopy. Finally, we draw conclusions about the advantages and limits of the variable false-color image method using hyperspectral imaging.
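
    A variable false-color composite can be sketched by mapping three chosen wavelengths of the hyperspectral cube to the R, G, and B display channels. The band choices below (950/750/550 nm) are illustrative placeholders, not the wavelengths selected in the study.

```python
import numpy as np

def false_color_composite(cube, wavelengths, bands_nm):
    """Map three chosen wavelengths of an HxWxB hyperspectral cube to RGB.

    The three bands are the method's free parameters, picked to maximize
    spectral differences between pigments of the same hue.
    """
    idx = [int(np.argmin(np.abs(wavelengths - b))) for b in bands_nm]
    img = cube[..., idx].astype(float)
    # stretch each channel independently to [0, 1] for display
    lo = img.min(axis=(0, 1), keepdims=True)
    hi = img.max(axis=(0, 1), keepdims=True)
    return (img - lo) / np.maximum(hi - lo, 1e-9)

wl = np.arange(400, 1001, 10.0)          # 400-1000 nm in 10 nm steps
cube = np.random.rand(4, 4, wl.size)     # toy cube standing in for real data
rgb = false_color_composite(cube, wl, (950, 750, 550))
```

    Varying `bands_nm` per hue family is what makes the composite "variable" compared with fixed-filter IRFC photography.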

  11. Performance Evaluation of Color Models in the Fusion of Functional and Anatomical Images.

    PubMed

    Ganasala, Padma; Kumar, Vinod; Prasad, A D

    2016-05-01

    Fusion of a functional image with an anatomical image provides additional diagnostic information and is widely used in diagnosis, treatment planning, and follow-up in oncology. A functional image is a low-resolution pseudo-color image representing the uptake of a radioactive tracer, which conveys important metabolic information, whereas an anatomical image is a high-resolution gray-scale image that gives structural details. The fused image should contain all the anatomical details without any change in the functional content. This is achieved through fusion in a de-correlated color model, and the choice of color model has a large impact on the fusion outcome. In the present work, the suitability of different color models for functional and anatomical image fusion is studied. After converting the functional image into a de-correlated color model, its achromatic component is fused with the anatomical image using the proposed nonsubsampled shearlet transform (NSST) based image fusion algorithm to obtain a new achromatic component containing all the anatomical details. This new achromatic component and the original chromatic channels of the functional image are converted back to RGB format to obtain the fused functional and anatomical image. Fusion is performed in different color models, using several cases of SPECT-MRI images. Based on visual and quantitative analysis of the fused images, the best color model for the stated purpose is determined.

  12. Adaptive image segmentation applied to plant reproduction by tissue culture

    NASA Astrophysics Data System (ADS)

    Vazquez Rueda, Martin G.; Hahn, Federico; Zapata, Jose L.

    1997-04-01

    This paper presents experimental results obtained on indoor tissue culture using an adaptive image segmentation system. The performance of the adaptive technique is contrasted with different non-adaptive techniques commonly used in the computer vision field to demonstrate the improvement provided by the adaptive image segmentation system.

  13. Structure of mouse spleen investigated by 7-color fluorescence imaging

    NASA Astrophysics Data System (ADS)

    Tsurui, Hiromichi; Niwa, Shinichirou; Hirose, Sachiko; Okumura, Ko; Shirai, Toshikazu

    2001-07-01

    Multi-color fluorescence imaging of tissue samples has been an urgent requirement in current biology. As long as fluorescence signals must be isolated with optical bandpass filter sets, the scarcity of chromophore combinations with little spectral overlap makes this demand hard to satisfy. The additivity of signals in a fluorescence image, however, permits linear unmixing of the superposed spectra based on singular value decomposition, and hence complete separation of fluorescence signals that overlap considerably. We have developed 7-color fluorescence imaging based on this principle and applied the method to the investigation of the mouse spleen. Not only coarse structural features of the spleen, such as the red pulp, marginal zone, and white pulp, but also their fine structures, the periarteriolar lymphocyte sheath (PALS), follicles, and germinal centers, were clearly pictured simultaneously. The distributions of subsets of dendritic cell (DC) and macrophage (MΦ) markers such as BM8, F4/80, MOMA2 and Mac3 around the marginal zone were imaged simultaneously, and their inhomogeneous expression was clearly demonstrated. These results show the usefulness of the method in studying structures that consist of many kinds of cells and in identifying cells characterized by multiple markers.
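
    The unmixing step relies on the additivity noted above: each pixel's measured spectrum is a linear mix of reference chromophore spectra, so abundances follow from linear least squares. A toy sketch with synthetic Gaussian spectra (3 chromophores instead of 7; all spectra here are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
channels = np.linspace(0, 1, 16)          # normalized spectral detection channels

def peak(center):
    """Toy Gaussian emission spectrum for one chromophore."""
    return np.exp(-((channels - center) / 0.12) ** 2)

# columns of A: reference spectra ("endmembers") of each chromophore
A = np.stack([peak(0.25), peak(0.5), peak(0.75)], axis=1)

true_c = np.array([0.2, 1.0, 0.5])        # per-pixel chromophore abundances
pixel = A @ true_c + 0.001 * rng.standard_normal(16)   # measured, slightly noisy

# additivity of fluorescence lets us unmix by linear least squares
est_c, *_ = np.linalg.lstsq(A, pixel, rcond=None)
```

    Applying this solve at every pixel separates heavily overlapping spectra that no bandpass filter set could isolate; 7-color unmixing works identically with seven columns in A.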

  14. Segmentation of color images based on the gravitational clustering concept

    NASA Astrophysics Data System (ADS)

    Lai, Andrew H.; Yung, H. C.

    1998-03-01

    A new clustering algorithm derived from the Markovian model of the gravitational clustering concept is proposed that works in the RGB measurement space of color images. To make the model applicable to image segmentation, the new algorithm imposes a clustering constraint at each iteration to control and determine the formation of multiple clusters. By using this constraint to limit the attraction between clusters, a termination condition can be easily defined. The new clustering algorithm is evaluated objectively and subjectively on three different images against the K-means clustering algorithm, the recursive histogram clustering algorithm for color, the Hedley-Yan algorithm, and the widely used seed-based region-growing algorithm. From the evaluation, the new algorithm exhibits the following characteristics: (1) its objective measurement figures are comparable with the best in this group of segmentation algorithms; (2) it generates smoother region boundaries; (3) the segmented boundaries align closely with the original boundaries; and (4) it forms a meaningful number of segmented regions.

  15. Five-color fluorescent imaging in living tumor cells

    NASA Astrophysics Data System (ADS)

    Wang, Liang; Yang, Jie; Chu, Jun; Luo, Qingming; Zhang, Zhihong

    2008-12-01

    Fluorescent probes based on fluorescent proteins (FPs) have been widely used to investigate molecules of interest in living cells. Molecular events in living cells are very complicated, and cellular activities involve multi-molecular interactions. With the development of novel fluorescent protein mutants and imaging technology, molecular signals in living cells can be detected accurately. In this study, with appropriate targeting signals, fluorescent proteins were localized to the plasma membrane (Rac1-mCerulean), Golgi membrane (EYFP-go), ER membrane (RFP2-er), and mitochondrial membrane (RFP1-mt). Cultured HeLa cells were cotransfected with these four plasmids and, 36 h later, labeled with Hoechst 33258, which localizes to the nucleus of living cells. Using confocal microscopy with the 405 nm, 458 nm, and 514 nm laser lines, a five-color fluorescent image was obtained in which five subcellular structures are clearly shown in living cells. This technique of multi-color imaging in a single cell provides a powerful tool for simultaneously studying multi-molecular events in living cells.

  16. SRTM Radar Image with Color as Height: Kachchh, Gujarat, India

    NASA Technical Reports Server (NTRS)

    2001-01-01

    This image shows the area around the January 26, 2001, earthquake in western India, the deadliest in the country's history with some 20,000 fatalities. The epicenter of the magnitude 7.6 earthquake was just to the left of the center of the image. The Gulf of Kachchh (or Kutch) is the black area running from the lower left corner towards the center of the image. The city of Bhuj is in the yellow-toned area among the brown hills left of the image center and is the historical capital of the Kachchh region. Bhuj and many other towns and cities nearby were almost completely destroyed by the shaking of the earthquake. These hills reach up to 500 meters (1,500 feet) elevation. The city of Ahmedabad, capital of Gujarat state, is the radar-bright area next to the right side of the image. Several buildings in Ahmedabad were also destroyed by the earthquake. The dark blue areas around the center of the image and extending to the left side are low-lying salt flats called the Rann of Kachchh with the Little Rann just to the right of the image center. The bumpy area north of the Rann (green and yellow colors) is a large area of sand dunes in Pakistan. A branch of the Indus River used to flow through the area on the left side of this image, but it was diverted by a previous large earthquake that struck this area in 1819.

    The annotated version of the image includes a 'beachball' that shows the location and slip direction of the January 26, 2001, earthquake from the Harvard Quick CMT catalog: http://www.seismology.harvard.edu/CMTsearch.html. [figure removed for brevity, see original site]

    This image combines two types of data from the Shuttle Radar Topography Mission (SRTM). The image brightness corresponds to the strength of the radar signal reflected from the ground, while colors show the elevation as measured by SRTM. Colors range from blue at the lowest elevations to brown and white at the highest elevations. This image is a mosaic of four SRTM swaths.

    This image

  17. High-performance VGA-resolution digital color CMOS imager

    NASA Astrophysics Data System (ADS)

    Agwani, Suhail; Domer, Steve; Rubacha, Ray; Stanley, Scott

    1999-04-01

    This paper discusses the performance of a new VGA-resolution color CMOS imager developed by Motorola on a 0.5-micrometer/3.3-V CMOS process. This fully integrated, high-performance imager has an on-chip timing, control, and analog signal processing chain for digital imaging applications. The picture elements are based on 7.8-micrometer active CMOS pixels that use pinned photodiodes for higher quantum efficiency and low-noise performance. The image processing engine includes a bank of programmable gain amplifiers, line-rate clamping for dark-offset removal, real-time auto white balancing, per-column gain and offset calibration, and a 10-bit pipelined RSD analog-to-digital converter with a programmable input range. Post-ADC signal processing includes features such as bad-pixel replacement based on user-defined threshold levels, 10-to-8-bit companding, and 5-tap FIR filtering. The sensor can be programmed via a standard I2C interface running on 3.3-V clocks. Programmable features include variable frame rates using a constant-frequency master clock, electronic exposure control, continuous or single-frame capture, and progressive or interlaced scanning modes. Each pixel is individually addressable, allowing region-of-interest imaging and image subsampling. The sensor operates with master clock frequencies of up to 13.5 MHz, resulting in 30 FPS. A total programmable gain of 27 dB is available, and the sensor dissipates 400 mW at full speed. The low-noise design yields a measured "system on a chip" dynamic range of 50 dB, thus giving over 8 true bits of resolution. Extremely high conversion gain results in an excellent peak sensitivity of 22 V/uJ/cm2, or 3.3 V/lux-sec. This monolithic image-capture and processing engine represents a complete imaging solution, making it a true "camera on a chip". Yet it remains extremely easy to use, requiring only one clock and a 3.3-V power supply.
Given the available features and performance levels, this sensor will be
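
    The "50 dB dynamic range, thus over 8 true bits" claim follows from the standard conversion between dynamic range in decibels and effective bits:

```python
import math

# dynamic range in dB relates to equivalent bits via DR = 20*log10(2**N),
# so N = DR / (20*log10(2)) ~= DR / 6.02
bits = 50.0 / (20.0 * math.log10(2.0))
# about 8.3 bits, consistent with the "over 8 true bits" statement
```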

  18. Los Angeles, California, Radar Image, Wrapped Color as Height

    NASA Technical Reports Server (NTRS)

    2000-01-01

    This topographic radar image shows the relationships of the dense urban development of Los Angeles and the natural contours of the land. The image includes the Pacific Ocean on the left, the flat Los Angeles Basin across the center, and the steep ranges of the Santa Monica and Verdugo mountains along the top. The two dark strips near the coast at lower left are the runways of Los Angeles International Airport. Downtown Los Angeles is the bright yellow and pink area at lower center. Pasadena, including the Rose Bowl, is seen halfway down the right edge of the image. The communities of Glendale and Burbank, including the Burbank Airport, are seen at the center of the top edge of the image. Hazards from earthquakes, floods and fires are intimately related to the topography in this area. Topographic data and other remote sensing images provide valuable information for assessing and mitigating the natural hazards for cities such as Los Angeles.

    This image combines two types of data from the Shuttle Radar Topography Mission. The image brightness corresponds to the strength of the radar signal reflected from the ground, while colors show the elevation as measured by SRTM. Each cycle of colors (from pink through blue back to pink) represents an equal amount of elevation difference (400 meters, or 1300 feet) similar to contour lines on a standard topographic map. This image contains about 2400 meters (8000 feet) of total relief.
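
    The wrapped "color as height" rendering amounts to a cyclic hue ramp over elevation, with one full color cycle per 400 meters as stated in the caption; a minimal sketch:

```python
import numpy as np

def wrapped_height_hue(elev_m, cycle_m=400.0):
    """Map elevation to a cyclic hue for 'wrapped color as height' display.

    Each full color cycle spans one elevation interval (400 m here, per
    the caption), so equal hues repeat like contour bands on a topo map.
    """
    return (np.asarray(elev_m) % cycle_m) / cycle_m   # hue in [0, 1)

# elevations exactly one cycle apart receive the same hue
h = wrapped_height_hue(np.array([0.0, 150.0, 400.0, 550.0]))
```

    The hue would then be combined with the radar backscatter as brightness, which is how the image encodes both reflectivity and elevation at once.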

    The Shuttle Radar Topography Mission (SRTM), launched on February 11, 2000, uses the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. The mission is designed to collect three-dimensional measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter-long (200-foot) mast, an additional C-band imaging antenna and improved tracking and navigation devices. The mission is a cooperative project between

  19. Biological versus Electronic Adaptive Coloration: How Can One Inform the Other?

    DTIC Science & Technology

    2012-01-01

    which typically require batteries. However, a few displays have had solar cells integrated into them to harvest energy [60,61]. Speed. Ideally, speed...the mechanisms of their adaptive coloration. This review includes detailed subsections on the chromatophore organs, iridophores and leucophore cells... [figure labels: mitochondria, muscle cell, nerve axon, glial cell, chromatophore structure, nucleus, cytoelastic sacculus, capsules with charged pigment, black state]

  20. Three frequency false color image of Flevoland, the Netherlands

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This is a three-frequency false color image of Flevoland, the Netherlands, centered at 52.4 degrees north latitude, 5.4 degrees east longitude. This image was acquired by the Spaceborne Imaging Radar-C and X-Band Synthetic Aperture Radar (SIR-C/X-SAR) aboard the Shuttle Endeavour. The image covers an area of approximately 25 kilometers by 28 kilometers. Flevoland, which fills the lower two-thirds of the image, is a very flat area that is made up of reclaimed land that is used for agriculture and forestry. At the top of the image, across the canal from Flevoland, is an older forest shown in red; the city of Harderwijk is shown in white on the shore of the canal. At this time of the year, the agricultural fields are bare soil, and they show up in this image in blue. The dark blue areas are water and the small dots in the canal are boats. The Jet Propulsion Laboratory alternative photo number is P-43941.

  1. Radar Image with Color as Height, Ancharn Kuy, Cambodia

    NASA Technical Reports Server (NTRS)

    2002-01-01

    This image of Ancharn Kuy, Cambodia, was taken by NASA's Airborne Synthetic Aperture Radar (AIRSAR). The image depicts an area northwest of Angkor Wat. The radar has highlighted a number of circular village mounds in this region, many of which have a circular pattern of rice fields surrounding the slightly elevated site. Most of them have evidence of what seems to be pre-Angkor occupation, such as stone tools and potsherds. Most of them also have a group of five spirit posts, a pattern not found in other parts of Cambodia. The shape of the mound, the location in the midst of a ring of rice fields, the stone tools and the current practice of spirit veneration have revealed themselves through a unique 'marriage' of radar imaging, archaeological investigation, and anthropology.

    Ancharn Kuy is a small village adjacent to the road, with just this combination of features. The region gets slowly higher in elevation, something seen in the shift of color from yellow to blue as you move to the top of the image.

    The small dark rectangles are typical of the smaller water control devices employed in this area. While many of these in the center of Angkor are linked to temples of the 9th to 14th Century A.D., we cannot be sure of the construction date of these small village tanks. They may pre-date the temple complex, or they may have just been dug ten years ago!

    The image dimensions are approximately 4.75 by 4.3 kilometers (3 by 2.7 miles) with a pixel spacing of 5 meters (16.4 feet). North is at top. Image brightness is from the C-band (5.6 centimeters, or 2.2 inches) wavelength radar backscatter, which is a measure of how much energy the surface reflects back toward the radar. Color is used to represent elevation contours. One cycle of color (going from blue to red to yellow to green and back to blue again) corresponds to 10 meters (32.8 feet) of elevation change.

    AIRSAR flies aboard a NASA DC-8 based at NASA's Dryden Flight Research Center, Edwards, Calif.

  2. Adaptive marginal median filter for colour images.

    PubMed

    Morillas, Samuel; Gregori, Valentín; Sapena, Almanzor

    2011-01-01

    This paper describes a new filter for impulse noise reduction in colour images which is aimed at improving the noise reduction capability of the classical vector median filter. The filter is inspired by the application of a vector marginal median filtering process over a selected group of pixels in each filtering window. This selection, which is based on the vector median, along with the application of the marginal median operation constitutes an adaptive process that leads to a more robust filter design. Also, the proposed method is able to process colour images without introducing colour artifacts. Experimental results show that the images filtered with the proposed method contain less noisy pixels than those obtained through the vector median filter.
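    The selection-then-marginal-median idea in this abstract can be illustrated with a toy filter: find the window's vector median, keep the pixels closest to it, and take the channel-wise (marginal) median of that group. The window size, group size k, and L2 distance here are illustrative assumptions, not the paper's exact design:

```python
import numpy as np

def marginal_median_filter(img, win=3, k=5):
    """Toy sketch of a selective marginal median filter for color images.
    At each pixel, take the channel-wise median over the k window pixels
    nearest the window's vector median."""
    pad = win // 2
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode='edge')
    out = np.empty_like(img)
    H, W, _ = img.shape
    for y in range(H):
        for x in range(W):
            window = padded[y:y+win, x:x+win].reshape(-1, 3).astype(float)
            # vector median: the window pixel with minimal total distance to the others
            d = np.linalg.norm(window[:, None] - window[None, :], axis=2).sum(axis=1)
            vm = window[d.argmin()]
            # keep the k pixels nearest the vector median, then take the marginal median
            nearest = window[np.argsort(np.linalg.norm(window - vm, axis=1))[:k]]
            out[y, x] = np.median(nearest, axis=0)
    return out
```

    Restricting the marginal median to pixels near the vector median is what keeps the output from introducing color artifacts: the combined channel values always come from a tight cluster of actually observed colors.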

  3. Multi-dimensional color image storage and retrieval for a normal arbitrary quantum superposition state

    NASA Astrophysics Data System (ADS)

    Li, Hai-Sheng; Zhu, Qingxin; Zhou, Ri-Gui; Song, Lan; Yang, Xing-jiang

    2014-04-01

    Multi-dimensional color image processing has two difficulties: one is that a large number of bits are needed to store multi-dimensional color images (for example, a three-dimensional color image of needs bits). The other is that the efficiency or accuracy of image segmentation is not high enough for some images to be used in content-based image search. In order to solve the above problems, this paper proposes a new representation for multi-dimensional color images, called a -qubit normal arbitrary quantum superposition state (NAQSS), where qubits represent the colors and coordinates of pixels (e.g., represent a three-dimensional color image of only using 30 qubits), and the remaining 1 qubit represents image segmentation information to improve the accuracy of image segmentation. We then design a general quantum circuit to create the NAQSS state in order to store a multi-dimensional color image in a quantum system, and propose a quantum circuit simplification algorithm to reduce the number of quantum gates in the general circuit. Finally, different strategies to retrieve a whole image or a target sub-image from a quantum system are studied, including Monte Carlo sampling and an improved Grover's algorithm which can search out a coordinate of a target sub-image only running in , where and are the numbers of pixels of an image and a target sub-image, respectively.

  4. Optical color-image encryption in the diffractive-imaging scheme

    NASA Astrophysics Data System (ADS)

    Qin, Yi; Wang, Zhipeng; Pan, Qunna; Gong, Qiong

    2016-02-01

    By introducing the theta modulation technique into the diffractive-imaging-based optical scheme, we propose a novel approach for color image encryption. For encryption, a color image is divided into three channels, i.e., red, green and blue, and these components are appended with redundant data before being sent to the encryption scheme. The carefully designed optical setup, which comprises three 4f optical architectures and a diffractive-imaging-based optical scheme, encodes the three plaintexts into a single noise-like intensity pattern. For decryption, an iterative phase retrieval algorithm, together with a filter operation, is applied to extract the primary color images from the diffraction intensity map. Compared with previous methods, our proposal encrypts a color rather than grayscale image into a single intensity pattern, as a result of which the capacity and practicability are remarkably enhanced. Its performance and security are also investigated. The validity as well as feasibility of the proposed method is supported by numerical simulations.

  5. Adaptive dispersion compensation for guided wave imaging

    NASA Astrophysics Data System (ADS)

    Hall, James S.; Michaels, Jennifer E.

    2012-05-01

    Ultrasonic guided waves offer the promise of fast and reliable methods for interrogating large, plate-like structures. Distributed arrays of permanently attached, inexpensive piezoelectric transducers have thus been proposed as a cost-effective means to excite and measure ultrasonic guided waves for structural health monitoring (SHM) applications. Guided wave data recorded from a distributed array of transducers are often analyzed and interpreted through the use of guided wave imaging algorithms, such as conventional delay-and-sum imaging or the more recently applied minimum variance imaging. Both imaging algorithms perform reasonably well using signal envelopes, but can exhibit significant performance improvements when phase information is used. However, the use of phase information inherently requires knowledge of the dispersion relations, which are often not known to a sufficient degree of accuracy for high quality imaging since they are very sensitive to environmental conditions such as temperature, pressure, and loading. This work seeks to perform improved imaging with phase information by leveraging adaptive dispersion estimates obtained from in situ measurements. Experimentally obtained data from a distributed array is used to validate the proposed approach.
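    Conventional delay-and-sum imaging, mentioned above, forms each pixel by summing every transducer pair's signal at that pixel's round-trip time of flight. A minimal envelope-based sketch, assuming a unit sample rate, straight-ray propagation, and a single constant group velocity (the variable names are illustrative):

```python
import numpy as np

def delay_and_sum(signals, tx_pos, rx_pos, grid, c):
    """Minimal delay-and-sum sketch. signals[i] is the envelope time series
    for transducer pair i (tx_pos[i] -> rx_pos[i]), sampled at fs = 1 so a
    time index equals a sample index; c is group velocity in pixels/sample."""
    image = np.zeros(len(grid))
    for sig, tx, rx in zip(signals, tx_pos, rx_pos):
        # time of flight: transmitter -> pixel -> receiver
        tof = (np.linalg.norm(grid - tx, axis=1) +
               np.linalg.norm(grid - rx, axis=1)) / c
        idx = np.clip(np.round(tof).astype(int), 0, len(sig) - 1)
        image += sig[idx]  # sum each pair's envelope at the pixel's delay
    return image
```

    The phase-informed and minimum variance variants discussed in the abstract refine this same per-pixel delay logic, which is why accurate dispersion (delay) estimates matter so much for image quality.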

  6. Enhancement dark channel algorithm of color fog image based on the local segmentation

    NASA Astrophysics Data System (ADS)

    Yun, Lijun; Gao, Yin; Shi, Jun-sheng; Xu, Ling-zhang

    2015-04-01

    The classical dark channel algorithm yields good results when processing a single foggy image, but in regions of larger contrast it distorts image hue, brightness and saturation to a certain degree and also produces a halo phenomenon. In view of this, through extensive experiments, this paper identifies several factors causing the halo phenomenon, and an enhanced dark channel algorithm for color fog images based on local segmentation is proposed. On the basis of dark channel theory, the classical mathematical model is first modified, mainly to correct the brightness and saturation of the image. Then, according to local adaptive segmentation theory, the image is processed in overlapping local blocks, and each pixel value is obtained from the segmentation processing according to statistical rules so as to form the local image. Finally, dark channel theory is applied to obtain the enhanced fog image. Subjective observation and objective evaluation both show that the algorithm outperforms the classical dark channel algorithm overall and in details.
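    The classical dark channel prior this abstract builds on (He et al.) takes, for each pixel, the minimum over the three color channels and then over a local patch, and uses it to estimate the fog transmission. A minimal sketch with a small illustrative patch size:

```python
import numpy as np

def dark_channel(img, patch=3):
    """Dark channel: per-pixel minimum over the color channels,
    followed by a local minimum over a patch."""
    mins = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(mins, pad, mode='edge')
    H, W = mins.shape
    out = np.empty_like(mins)
    for y in range(H):
        for x in range(W):
            out[y, x] = padded[y:y+patch, x:x+patch].min()
    return out

def transmission(img, airlight, omega=0.95, patch=3):
    """Estimated transmission t(x) = 1 - omega * dark_channel(I / A)."""
    return 1.0 - omega * dark_channel(img / airlight, patch)
```

    The halo artifacts the paper targets arise exactly here: the patch-wise minimum smears the transmission estimate across strong contrast edges, which is what motivates the local-segmentation refinement.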

  7. Color Restoration of RGBN Multispectral Filter Array Sensor Images Based on Spectral Decomposition

    PubMed Central

    Park, Chulhee; Kang, Moon Gi

    2016-01-01

    A multispectral filter array (MSFA) image sensor with red, green, blue and near-infrared (NIR) filters is useful for various imaging applications with the advantages that it obtains color information and NIR information simultaneously. Because the MSFA image sensor needs to acquire invisible band information, it is necessary to remove the IR cut-off filter (IRCF). However, without the IRCF, the color of the image is desaturated by the interference of the additional NIR component of each RGB color channel. To overcome color degradation, a signal processing approach is required to restore natural color by removing the unwanted NIR contribution to the RGB color channels while the additional NIR information remains in the N channel. Thus, in this paper, we propose a color restoration method for an imaging system based on the MSFA image sensor with RGBN filters. To remove the unnecessary NIR component in each RGB color channel, spectral estimation and spectral decomposition are performed based on the spectral characteristics of the MSFA sensor. The proposed color restoration method estimates the spectral intensity in NIR band and recovers hue and color saturation by decomposing the visible band component and the NIR band component in each RGB color channel. The experimental results show that the proposed method effectively restores natural color and minimizes angular errors. PMID:27213381
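    The core decomposition idea, removing the estimated NIR leakage from each RGB channel while leaving the N channel intact, can be sketched with a simple linear model. The per-channel leakage factors below are hypothetical placeholders, not the paper's spectrally estimated coefficients:

```python
import numpy as np

def restore_visible(rgbn, k=(0.3, 0.3, 0.3)):
    """Illustrative NIR decomposition: model each raw channel as
    visible + k_c * NIR and subtract the estimated NIR leakage.
    k holds hypothetical per-channel leakage factors."""
    rgb = rgbn[..., :3].astype(float)
    nir = rgbn[..., 3:4].astype(float)
    k = np.asarray(k, dtype=float)
    return np.clip(rgb - k * nir, 0.0, None)
```

    In the actual method the leakage factors come from spectral estimation of the sensor's filter responses; a fixed k is only a first-order approximation of that step.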

  8. Color Restoration of RGBN Multispectral Filter Array Sensor Images Based on Spectral Decomposition.

    PubMed

    Park, Chulhee; Kang, Moon Gi

    2016-05-18

    A multispectral filter array (MSFA) image sensor with red, green, blue and near-infrared (NIR) filters is useful for various imaging applications with the advantages that it obtains color information and NIR information simultaneously. Because the MSFA image sensor needs to acquire invisible band information, it is necessary to remove the IR cut-off filter (IRCF). However, without the IRCF, the color of the image is desaturated by the interference of the additional NIR component of each RGB color channel. To overcome color degradation, a signal processing approach is required to restore natural color by removing the unwanted NIR contribution to the RGB color channels while the additional NIR information remains in the N channel. Thus, in this paper, we propose a color restoration method for an imaging system based on the MSFA image sensor with RGBN filters. To remove the unnecessary NIR component in each RGB color channel, spectral estimation and spectral decomposition are performed based on the spectral characteristics of the MSFA sensor. The proposed color restoration method estimates the spectral intensity in NIR band and recovers hue and color saturation by decomposing the visible band component and the NIR band component in each RGB color channel. The experimental results show that the proposed method effectively restores natural color and minimizes angular errors.

  9. Honolulu, Hawaii Radar Image, Wrapped Color as Height

    NASA Technical Reports Server (NTRS)

    2000-01-01

    This topographic radar image shows the city of Honolulu, Hawaii and adjacent areas on the island of Oahu. Honolulu lies on the south shore of the island, right of center of the image. Just below the center is Pearl Harbor, marked by several inlets and bays. Runways of the airport can be seen to the right of Pearl Harbor. Diamond Head, an extinct volcanic crater, is a blue circle along the coast right of center. The Koolau mountain range runs through the center of the image. The steep cliffs on the north side of the range are thought to be remnants of massive landslides that ripped apart the volcanic mountains that built the island thousands of years ago. On the north shore of the island are the Mokapu Peninsula and Kaneohe Bay. High resolution topographic data allow ecologists and planners to assess the effects of urban development on the sensitive ecosystems in tropical regions.

    This image combines two types of data from the Shuttle Radar Topography Mission. The image brightness corresponds to the strength of the radar signal reflected from the ground, while colors show the elevation as measured by SRTM. Each cycle of colors (from pink through blue back to pink) represents an equal amount of elevation difference (400 meters, or 1300 feet) similar to contour lines on a standard topographic map. This image contains about 2400 meters (8000 feet) of total relief.

    The Shuttle Radar Topography Mission (SRTM), launched on February 11, 2000, uses the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. The mission is designed to collect three-dimensional measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter-long (200-foot) mast, an additional C-band imaging antenna and improved tracking and navigation devices. The mission is a cooperative project between the National Aeronautics and Space Administration (NASA

  10. San Gabriel Mountains, California, Radar image, color as height

    NASA Technical Reports Server (NTRS)

    2000-01-01

    This topographic radar image shows the relationship of the urban area of Pasadena, California to the natural contours of the land. The image includes the alluvial plain on which Pasadena and the Jet Propulsion Laboratory sit, and the steep range of the San Gabriel Mountains. The mountain front and the arcuate valley running from upper left to the lower right are active fault zones, along which the mountains are rising. The chaparral-covered slopes above Pasadena are also a prime area for wildfires and mudslides. Hazards from earthquakes, floods and fires are intimately related to the topography in this area. Topographic data and other remote sensing images provide valuable information for assessing and mitigating the natural hazards for cities along the front of active mountain ranges.

    This image combines two types of data from the Shuttle Radar Topography Mission. The image brightness corresponds to the strength of the radar signal reflected from the ground, while colors show the elevation as measured by SRTM. Colors range from blue at the lowest elevations to white at the highest elevations. This image contains about 2300 meters (7500 feet) of total relief. White speckles on the face of some of the mountains are holes in the data caused by steep terrain. These will be filled using coverage from an intersecting pass.

    The Shuttle Radar Topography Mission (SRTM), launched on February 11, 2000, uses the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. The mission is designed to collect three-dimensional measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter-long (200-foot) mast, an additional C-band imaging antenna and improved tracking and navigation devices. The mission is a cooperative project between the National Aeronautics and Space Administration (NASA), the National Imagery and Mapping Agency

  11. Radar image with color as height, Bahia State, Brazil

    NASA Technical Reports Server (NTRS)

    2000-01-01

    This radar image is the first to show the full 240-kilometer-wide (150-mile) swath collected by the Shuttle Radar Topography Mission (SRTM). The area shown is in the state of Bahia in Brazil. The semi-circular mountains along the left side of the image are the Serra Da Jacobin, which rise to 1100 meters (3600 feet) above sea level. The total relief shown is approximately 800 meters (2600 feet). The top part of the image is the Sertao, a semi-arid region that is subject to severe droughts during El Niño events. A small portion of the San Francisco River, the longest river (1609 kilometers or 1000 miles) entirely within Brazil, cuts across the upper right corner of the image. This river is a major source of water for irrigation and hydroelectric power. Mapping such regions will allow scientists to better understand the relationships between flooding cycles, drought and human influences on ecosystems.

    This image combines two types of data from the Shuttle Radar Topography Mission. The image brightness corresponds to the strength of the radar signal reflected from the ground, while colors show the elevation as measured by SRTM. The three dark vertical stripes show the boundaries where four segments of the swath are merged to form the full scanned swath. These will be removed in later processing. Colors range from green at the lowest elevations to reddish at the highest elevations.

    The Shuttle Radar Topography Mission (SRTM), launched on February 11, 2000, uses the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. The mission is designed to collect three-dimensional measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter-long (200-foot) mast, an additional C-band imaging antenna and improved tracking and navigation devices. The mission is a cooperative project between the National Aeronautics and Space

  12. Color image encryption based on gyrator transform and Arnold transform

    NASA Astrophysics Data System (ADS)

    Sui, Liansheng; Gao, Bo

    2013-06-01

    A color image encryption scheme using the gyrator transform and the Arnold transform is proposed, which has two security levels. In the first level, the color image is separated into three components: red, green and blue, which are normalized and scrambled using the Arnold transform. The green component is combined with the first random phase mask and transformed to an interim result using the gyrator transform. The first random phase mask is generated with the sum of the blue component and a logistic map. Similarly, the red component is combined with the second random phase mask and transformed to three-channel-related data. The second random phase mask is generated with the sum of the phase of the interim result and an asymmetrical tent map. In the second level, the three-channel-related data are scrambled again, combined with the third random phase mask generated with the sum of the previous chaotic maps, and then encrypted into a grayscale ciphertext. The encryption result has a stationary white noise distribution and a camouflage property to some extent. In the process of encryption and decryption, the rotation angle of the gyrator transform, the iteration numbers of the Arnold transform, the parameters of the chaotic maps and the generated accompanying phase function serve as encryption keys, which enhances the security of the system. Simulation results and security analysis are presented to confirm the security, validity and feasibility of the proposed scheme.
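    The Arnold transform used for scrambling here is the area-preserving "cat map" applied to pixel coordinates of a square image; since the map is a bijection on an N x N grid, it permutes pixels without losing information, and the iteration count can serve as a key. A minimal sketch using one common form of the map:

```python
import numpy as np

def arnold(img, iterations=1):
    """Arnold cat map scrambling on an N x N image:
    (x, y) -> ((x + y) mod N, (x + 2y) mod N)."""
    N = img.shape[0]
    assert img.shape[1] == N, "Arnold transform needs a square image"
    out = img
    for _ in range(iterations):
        scr = np.empty_like(out)
        for x in range(N):
            for y in range(N):
                scr[(x + y) % N, (x + 2 * y) % N] = out[x, y]
        out = scr
    return out
```

    Because the map is periodic for any fixed N, applying the remaining iterations of a full period undoes the scrambling, which is how decryption recovers the component order.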

  13. Dual-Color 3D Superresolution Microscopy by Combined Spectral-Demixing and Biplane Imaging

    PubMed Central

    Winterflood, Christian M.; Platonova, Evgenia; Albrecht, David; Ewers, Helge

    2015-01-01

    Multicolor three-dimensional (3D) superresolution techniques allow important insight into the relative organization of cellular structures. While a number of innovative solutions have emerged, multicolor 3D techniques still face significant technical challenges. In this Letter we provide a straightforward approach to single-molecule localization microscopy imaging in three dimensions and two colors. We combine biplane imaging and spectral-demixing, which eliminates a number of problems, including color cross-talk, chromatic aberration effects, and problems with color registration. We present 3D dual-color images of nanoscopic structures in hippocampal neurons with a 3D compound resolution routinely achieved only in a single color. PMID:26153696

  14. Adaptive Optics Imaging of Solar System Objects

    NASA Technical Reports Server (NTRS)

    Roddier, Francois; Owen, Toby

    1997-01-01

    Most solar system objects have never been observed at wavelengths longer than the R band with an angular resolution better than 1 arcsec. The Hubble Space Telescope itself has only recently been equipped to observe in the infrared. However, because of its small diameter, its angular resolution is lower than what can now be achieved from the ground with adaptive optics, and time allocated to planetary science is limited. We have been using adaptive optics (AO) on a 4-m class telescope to obtain 0.1 arcsec resolution images of solar system objects at far-red and near-infrared wavelengths (0.7-2.5 micron), which best discriminate their spectral signatures. Our efforts have been put into areas of research for which high angular resolution is essential, such as the mapping of Titan and of large asteroids, the dynamics and composition of Neptune's stratospheric clouds, and the infrared photometry of Pluto, Charon, and close satellites previously undetected from the ground.

  15. Comparison of Color Model in Cotton Image Under Conditions of Natural Light

    NASA Astrophysics Data System (ADS)

    Zhang, J. H.; Kong, F. T.; Wu, J. Z.; Wang, S. W.; Liu, J. J.; Zhao, P.

    Although color images contain a large amount of information reflecting species characteristics, different color models convey different information, and the selection of a color model is key to separating crops from background effectively and rapidly. Taking cotton images collected under natural light as the object, we compute the color components of the RGB, HSL and YIQ color models and evaluate the nine resulting components with both subjective and objective methods. Subjective evaluation shows that the gray values of the Q component in the soil, straw and plastic-film regions remain consistent, without large fluctuations. For objective evaluation, we use the variance method, the average gradient method, gray-prediction error statistics and information entropy, and find that the Q color component is the most suitable for background segmentation.
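    The Q component singled out above belongs to the NTSC YIQ model. With one common set of coefficients it is a linear combination of R, G and B that vanishes for achromatic (gray) pixels, which is consistent with its observed stability over soil, straw and film regions:

```python
import numpy as np

def q_component(rgb):
    """Q channel of the NTSC YIQ color model, using one common set of
    coefficients: Q = 0.212*R - 0.523*G + 0.311*B, with RGB in [0, 1].
    The coefficients sum to zero, so gray pixels map to Q = 0."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.212 * r - 0.523 * g + 0.311 * b
```

    Published YIQ coefficient sets differ slightly (e.g. FCC vs. `colorsys`); the zero-sum property that makes Q insensitive to achromatic background holds for all of them.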

  16. Speckle image reconstruction of the adaptive optics solar images.

    PubMed

    Zhong, Libo; Tian, Yu; Rao, Changhui

    2014-11-17

    Speckle image reconstruction, in which the speckle transfer function (STF) is modeled as an annular distribution according to the angular dependence of adaptive optics (AO) compensation, and in which the individual STF in each annulus is obtained from the corresponding Fried parameter calculated with the traditional spectral ratio method, is used in this paper to restore solar images corrected by an AO system. Reconstructions of solar images acquired by a 37-element AO system validate this method, and image quality is improved evidently. Moreover, we found the photometric accuracy of the reconstruction to be field dependent due to the influence of the AO correction. With increasing angular separation of the object from the AO lockpoint, the relative improvement grows and tends toward a constant level in regions far from the central field of view. Simulation results show that this phenomenon is mainly due to the discrepancy between the calculated STF and the real, angularly dependent AO STF.

  17. Functional magnetic resonance imaging adaptation reveals a noncategorical representation of hue in early visual cortex

    PubMed Central

    Persichetti, Andrew S.; Thompson-Schill, Sharon L.; Butt, Omar H.; Brainard, David H.; Aguirre, Geoffrey K.

    2015-01-01

    Color names divide the fine-grained gamut of color percepts into discrete categories. A categorical transition must occur somewhere between the initial encoding of the continuous spectrum of light by the cones and the verbal report of the name of a color stimulus. Here, we used a functional magnetic resonance imaging (fMRI) adaptation experiment to examine the representation of hue in the early visual cortex. Our stimuli varied in hue between blue and green. We found in the early visual areas (V1, V2/3, and hV4) a smoothly increasing recovery from adaptation with increasing hue distance between adjacent stimuli during both passive viewing (Experiment 1) and active categorization (Experiment 2). We examined the form of the adaptation effect and found no evidence that a categorical representation mediates the release from adaptation for stimuli that cross the blue–green color boundary. Examination of the direct effect of stimulus hue on the fMRI response did, however, reveal an enhanced response to stimuli near the blue–green category border. This was largest in hV4 and when subjects were engaged in active categorization of the stimulus hue. In contrast with a recent report from another laboratory (Bird, Berens, Horner, & Franklin, 2014), we found no evidence for a categorical representation of color in the middle frontal gyrus. A post hoc whole-brain analysis, however, revealed several regions in the frontal cortex with a categorical effect in the adaptation response. Overall, our results support the idea that the representation of color in the early visual cortex is primarily fine grained and does not reflect color categories. PMID:26024465

  18. Optimal chroma-like channel design for passive color image splicing detection

    NASA Astrophysics Data System (ADS)

    Zhao, Xudong; Li, Shenghong; Wang, Shilin; Li, Jianhua; Yang, Kongjin

    2012-12-01

    Image splicing is one of the most common image forgeries in daily life, and powerful image manipulation tools are making it ever easier to perform. Several methods have been proposed for image splicing detection, and all of them work on certain existing color channels. However, splicing artifacts vary across color channels, so the selection of a color model is important for image splicing detection. In this article, instead of choosing an existing color model, we propose a color channel design method to find the most discriminative channel, referred to as the optimal chroma-like channel, for a given feature extraction method. Experimental results show that both spatial and frequency features extracted from the designed channel achieve a higher detection rate than those extracted from traditional color channels.
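    A "designed" chroma-like channel of the kind proposed here can be viewed as a weighted linear combination of R, G and B whose weights are then optimized for detection rate. The weights below are purely illustrative (roughly the Cr channel of YCbCr, one point in that search space), not the paper's learned ones:

```python
import numpy as np

def chroma_like_channel(rgb, w):
    """Project an RGB image (..., 3) onto a single designed channel
    given per-channel weights w; the weights are the free parameters
    a search over detection accuracy would optimize."""
    return np.tensordot(rgb, np.asarray(w, dtype=float), axes=([-1], [0]))

# Example: weights approximating the Cr channel of YCbCr.
CR_LIKE = [0.5, -0.419, -0.081]
```

    Framing existing chroma channels as points in this weight space is what lets a search procedure look for a channel more discriminative than any standard color model.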

  19. A quaternion-based spectral clustering method for color image segmentation

    NASA Astrophysics Data System (ADS)

    Li, Xiang; Jin, Lianghai; Liu, Hong; He, Zeng

    2011-11-01

    Spectral clustering has been widely used in image segmentation, and a key issue in spectral clustering is how to build the affinity matrix. When applied to color image segmentation, most existing methods either use the Euclidean metric to define the affinity matrix, or first convert color images into gray-level images and then use the gray-level images to construct the affinity matrix (the component-wise method). However, Euclidean distances cannot represent color differences well, and the component-wise method does not consider the correlation between color channels. In this paper, we propose a new method to produce the affinity matrix, in which color images are first represented in quaternion form and the similarities between color pixels are then measured by a quaternion rotation (QR) mechanism. The experimental results show the superiority of the new method.
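    Quaternion color processing of the kind used here encodes each RGB pixel as a pure quaternion and manipulates it with the Hamilton product; rotation by a unit quaternion preserves a color's magnitude, which is what makes rotation a natural similarity mechanism. A minimal sketch of the encoding and the product (the paper's exact QR similarity measure is not reproduced here):

```python
import numpy as np

def color_to_quaternion(rgb):
    """Encode an RGB pixel as a pure quaternion q = 0 + R*i + G*j + B*k,
    the usual starting point for quaternion color processing."""
    return np.array([0.0, rgb[0], rgb[1], rgb[2]])

def quat_mul(a, b):
    """Hamilton product of two quaternions given as (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])
```

    Treating the three channels as one algebraic object is exactly what the component-wise method gives up: a quaternion rotation mixes R, G and B jointly instead of comparing them channel by channel.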

  20. Adaptive optics retinal imaging: emerging clinical applications.

    PubMed

    Godara, Pooja; Dubis, Adam M; Roorda, Austin; Duncan, Jacque L; Carroll, Joseph

    2010-12-01

    The human retina is a uniquely accessible tissue. Tools like scanning laser ophthalmoscopy and spectral domain-optical coherence tomography provide clinicians with remarkably clear pictures of the living retina. Although the anterior optics of the eye permit such non-invasive visualization of the retina and associated pathology, the same optics induce significant aberrations that obviate cellular-resolution imaging in most cases. Adaptive optics (AO) imaging systems use active optical elements to compensate for aberrations in the optical path between the object and the camera. When applied to the human eye, AO allows direct visualization of individual rod and cone photoreceptor cells, retinal pigment epithelium cells, and white blood cells. AO imaging has changed the way vision scientists and ophthalmologists see the retina, helping to clarify our understanding of retinal structure, function, and the etiology of various retinal pathologies. Here, we review some of the advances that were made possible with AO imaging of the human retina and discuss applications and future prospects for clinical imaging.

  1. [Automatic house detection from color aerial images based on image segmentation].

    PubMed

    He, Pei-Pei; Wan, You-Chuan; Jiang, Peng-Rui; Gao, Xian-Jun; Qin, Jia-Xin

    2014-07-01

    To achieve automatic detection of houses from high-resolution aerial imagery, this paper uses color information and the spectral characteristics of roofing materials, together with image segmentation theory, to study an automatic house detection method. First, the method converts the RGB color space to HSI color space and uses the characteristics of each HSI component and the spectral characteristics of roofing materials to segment the image, isolating red tiled roofs and gray cement roofs; initial house regions are then obtained with a marker-based watershed algorithm. Next, region growing is conducted on the hue component, seeded from the segmented samples, by calculating the average hue in each marked region. Finally, small spots are eliminated and rectangular fitting is applied to obtain clear outlines of the house regions. Compared with traditional pixel-based region segmentation, the proposed segment-growing method works in a one-dimensional color space, reducing computation, requires no human intervention, and exploits the geometric information of neighboring pixels, so both speed and accuracy are significantly improved. A case study applied the method to high-resolution aerial images, and the experimental results demonstrate high precision and reasonable robustness.
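    The RGB-to-HSI conversion on which the method relies isolates hue for the region-growing step. The standard geometric hue formula for a single pixel can be sketched as follows (the small epsilon guards against division by zero for gray pixels):

```python
import numpy as np

def rgb_to_hue(r, g, b):
    """Hue of the HSI color model for one pixel with r, g, b in [0, 1],
    using the standard geometric formula; returns degrees in [0, 360)."""
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-12
    theta = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
    return theta if b <= g else 360.0 - theta
```

    Growing regions on this single hue channel is what reduces the segmentation to a one-dimensional color space: red tiled roofs and gray cement roofs separate by hue (and by the low saturation of gray pixels) without comparing all three RGB channels.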

  2. Context cue-dependent saccadic adaptation in rhesus macaques cannot be elicited using color.

    PubMed

    Cecala, Aaron L; Smalianchuk, Ivan; Khanna, Sanjeev B; Smith, Matthew A; Gandhi, Neeraj J

    2015-07-01

    When the head does not move, rapid movements of the eyes called saccades are used to redirect the line of sight. Saccades are defined by a series of metrical and kinematic (evolution of a movement as a function of time) relationships. For example, the amplitude of a saccade made from one visual target to another is roughly 90% of the distance between the initial fixation point (T0) and the peripheral target (T1). However, this stereotypical relationship between saccade amplitude and initial retinal error (T1-T0) may be altered, either increased or decreased, by surreptitiously displacing a visual target during an ongoing saccade. This form of motor learning (called saccadic adaptation) has been described in both humans and monkeys. Recent experiments in humans and monkeys have suggested that internal (proprioceptive) and external (target shape, color, and/or motion) cues may be used to produce context-dependent adaptation. We tested the hypothesis that an external contextual cue (target color) could be used to evoke differential gain (actual saccade/initial retinal error) states in rhesus monkeys. We did not observe differential gain states correlated with target color regardless of whether targets were displaced along the same vector as the primary saccade or perpendicular to it. Furthermore, this observation held true regardless of whether adaptation trials using various colors and intrasaccade target displacements were randomly intermixed or presented in short or long blocks of trials. These results are consistent with hypotheses that state that color cannot be used as a contextual cue and are interpreted in light of previous studies of saccadic adaptation in both humans and monkeys.

  3. Adaptive system for eye-fundus imaging

    SciTech Connect

    Larichev, A V; Ivanov, P V; Iroshnikov, N G; Shmalgauzen, V I; Otten, L J

    2002-10-31

    A compact adaptive system capable of imaging a human-eye retina with a spatial resolution as high as 6 μm and a field of view of 15° is developed. It is shown that a modal bimorph corrector with nonlocalised response functions provides the efficient suppression of dynamic aberrations of a human eye. The residual root-mean-square error in correction of aberrations of a real eye with nonparalysed accommodation lies in the range of 0.1-0.15 μm.

  4. Landsat ETM+ False-Color Image Mosaics of Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.

    2007-01-01

    In 2005, the U.S. Agency for International Development and the U.S. Trade and Development Agency contracted with the U.S. Geological Survey to perform assessments of the natural resources within Afghanistan. The assessments concentrate on the resources that are related to the economic development of that country. Therefore, assessments were initiated in oil and gas, coal, mineral resources, water resources, and earthquake hazards. All of these assessments require geologic, structural, and topographic information throughout the country at a finer scale and better accuracy than that provided by the existing maps, which were published in the 1970s by the Russians and Germans. The very rugged terrain in Afghanistan, the large scale of these assessments, and the terrorist threat in Afghanistan indicated that the best approach to provide the preliminary assessments was to use remotely sensed, satellite image data, although this may also apply to subsequent phases of the assessments. Therefore, the first step in the assessment process was to produce satellite image mosaics of Afghanistan that would be useful for these assessments. This report discusses the production of the Landsat false-color image database produced for these assessments, which was produced from the calibrated Landsat ETM+ image mosaics described by Davis (2006).

  5. Separate channels for processing form, texture, and color: evidence from FMRI adaptation and visual object agnosia.

    PubMed

    Cavina-Pratesi, C; Kentridge, R W; Heywood, C A; Milner, A D

    2010-10-01

    Previous neuroimaging research suggests that although object shape is analyzed in the lateral occipital cortex, surface properties of objects, such as color and texture, are dealt with in more medial areas, close to the collateral sulcus (CoS). The present study sought to determine whether there is a single medial region concerned with surface properties in general or whether instead there are multiple foci independently extracting different surface properties. We used stimuli varying in their shape, texture, or color, and tested healthy participants and 2 object-agnosic patients, in both a discrimination task and a functional MR adaptation paradigm. We found a double dissociation between medial and lateral occipitotemporal cortices in processing surface (texture or color) versus geometric (shape) properties, respectively. In Experiment 2, we found that the medial occipitotemporal cortex houses separate foci for color (within anterior CoS and lingual gyrus) and texture (caudally within posterior CoS). In addition, we found that areas selective for shape, texture, and color individually were quite distinct from those that respond to all of these features together (shape and texture and color). These latter areas appear to correspond to those associated with the perception of complex stimuli such as faces and places.

  6. Images as embedding maps and minimal surfaces: Movies, color, and volumetric medical images

    SciTech Connect

    Kimmel, R.; Malladi, R.; Sochen, N.

    1997-02-01

    A general geometrical framework for image processing is presented. The authors consider intensity images as surfaces in the (x,I) space. The image is thereby a two-dimensional surface in three-dimensional space for gray level images. The new formulation unifies many classical schemes, algorithms, and measures via choices of parameters in a 'master' geometrical measure. More importantly, it is a simple and efficient tool for the design of natural schemes for image enhancement, segmentation, and scale space. Here the authors give the basic motivation and apply the scheme to enhance images. They present the concept of an image as a surface in dimensions higher than the three-dimensional intuitive space. This will help them handle movies, color, and volumetric medical images.
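    Viewing a gray-level image as the graph surface (x, y, I(x, y)) gives it an induced area element sqrt(1 + Ix^2 + Iy^2), which is the simplest instance of the geometrical measure described above. A minimal numpy sketch of that quantity (not the authors' full framework):

```python
import numpy as np

def surface_area_element(I):
    """Area element sqrt(1 + Ix^2 + Iy^2) of the graph surface (x, y, I(x, y)).

    Flat regions give values near 1; steep edges give large values, which is
    why minimizing surface area smooths noise while respecting edges."""
    Iy, Ix = np.gradient(I.astype(float))
    return np.sqrt(1.0 + Ix ** 2 + Iy ** 2)

# A step edge: the area element is largest along the step.
I = np.zeros((5, 5))
I[:, 3:] = 10.0
g = surface_area_element(I)
```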

  7. Colors of Alien Worlds from Direct Imaging Exoplanet Missions

    NASA Astrophysics Data System (ADS)

    Hu, Renyu

    2016-01-01

    Future direct-imaging exoplanet missions such as WFIRST will measure the reflectivity of exoplanets at visible wavelengths. Most of the exoplanets to be observed will be located further away from their parent stars than is Earth from the Sun. These "cold" exoplanets have atmospheric environments conducive for the formation of water and/or ammonia clouds, like Jupiter in the Solar System. I find the mixing ratio of methane and the pressure level of the uppermost cloud deck on these planets can be uniquely determined from their reflection spectra, with moderate spectral resolution, if the cloud deck is between 0.6 and 1.5 bars. The existence of this unique solution is useful for exoplanet direct imaging missions for several reasons. First, the weak bands and strong bands of methane enable the measurement of the methane mixing ratio and the cloud pressure, although an overlying haze layer can bias the estimate of the latter. Second, the cloud pressure, once derived, yields an important constraint on the internal heat flux from the planet, and is thus an indicator of its thermal evolution. Third, water worlds having H2O-dominated atmospheres are likely to have water clouds located higher than the 10^-3 bar pressure level, and muted spectral absorption features. These planets would occupy a confined phase space in the color-color diagrams, likely distinguishable from H2-rich giant exoplanets by broadband observations. Therefore, direct-imaging exoplanet missions may offer the capability to broadly distinguish H2-rich giant exoplanets versus H2O-rich super-Earth exoplanets, and to detect ammonia and/or water clouds and methane gas in their atmospheres.

  8. Colors of Alien Worlds from Direct Imaging Exoplanet Missions

    NASA Astrophysics Data System (ADS)

    Hu, Renyu

    2015-08-01

    Future direct-imaging exoplanet missions such as WFIRST/AFTA, Exo-C, and Exo-S will measure the reflectivity of exoplanets at visible wavelengths. Most of the exoplanets to be observed will be located further away from their parent stars than is Earth from the Sun. These "cold" exoplanets have atmospheric environments conducive for the formation of water and/or ammonia clouds, like Jupiter in the Solar System. I find the mixing ratio of methane and the pressure level of the uppermost cloud deck on these planets can be uniquely determined from their reflection spectra, with moderate spectral resolution, if the cloud deck is between 0.6 and 1.5 bars. The existence of this unique solution is useful for exoplanet direct imaging missions for several reasons. First, the weak bands and strong bands of methane enable the measurement of the methane mixing ratio and the cloud pressure, although an overlying haze layer can bias the estimate of the latter. Second, the cloud pressure, once derived, yields an important constraint on the internal heat flux from the planet, and is thus an indicator of its thermal evolution. Third, water worlds having H2O-dominated atmospheres are likely to have water clouds located higher than the 10^-3 bar pressure level, and muted spectral absorption features. These planets would occupy a confined phase space in the color-color diagrams, likely distinguishable from H2-rich giant exoplanets by broadband observations. Therefore, direct-imaging exoplanet missions may offer the capability to broadly distinguish H2-rich giant exoplanets versus H2O-rich super-Earth exoplanets, and to detect ammonia and/or water clouds and methane gas in their atmospheres.

  9. Radar Image with Color as Height, Old Khmer Road, Cambodia

    NASA Technical Reports Server (NTRS)

    2002-01-01

    This image shows the Old Khmer Road (Indratataka-Bakheng causeway) in Cambodia extending from the 9th Century A.D. capital city of Hariharalaya in the lower right portion of the image to the later 10th Century A.D. capital of Yasodharapura, which was located in the vicinity of Phnom Bakheng (not shown in image). The Old Road is believed to be more than 1000 years old. Its precise role and destination within the 'new' city at Angkor is still being studied by archeologists. But wherever it ended, it not only offered an immense processional way for the King to move between old and new capitals, it also linked the two areas, widening the territorial base of the Khmer King. Finally, in the past and today, the Old Road managed the waters of the floodplain. It acted as a long barrage or dam not only for the natural streams of the area but also for the changes brought to the local hydrology by Khmer population growth.

    The image was acquired by NASA's Airborne Synthetic Aperture Radar (AIRSAR). Image brightness is from the P-band (68 cm wavelength) radar backscatter, which is a measure of how much energy the surface reflects back towards the radar. Color is used to represent elevation contours. One cycle of color represents 20 m of elevation change, that is going from blue to red to yellow to green and back to blue again corresponds to 20 m of elevation change. Image dimensions are approximately 3.4 km by 3.5 km with a pixel spacing of 5 m. North is at top.

    AIRSAR flies aboard a NASA DC-8 based at NASA's Dryden Flight Research Center, Edwards, Calif. In the TOPSAR mode, AIRSAR collects radar interferometry data from two spatially separated antennas (2.6 meters, or 8.5 feet). Information from the two antennas is used to form radar backscatter imagery and to generate highly accurate elevation data. Built, operated and managed by JPL, AIRSAR is part of NASA's Earth Science Enterprise program. JPL is a division of the California Institute of Technology in Pasadena.

  10. Evaluation of color categorization for representing vehicle colors

    NASA Astrophysics Data System (ADS)

    Zeng, Nan; Crisman, Jill D.

    1997-02-01

    This paper evaluates the accuracy of three color categorization techniques in describing vehicle colors for a system, AutoColor, which we are developing for Intelligent Transportation Systems. Color categorization is used to efficiently represent 24-bit color images with up to 8 bits of color information. Our inspiration for color categorization is based on the fact that humans typically use only a few color names to describe the numerous colors they perceive. Our Crayon color categorization technique uses a naming scheme for digitized colors that is roughly based on human names for colors. The fastest and most straightforward method for compacting a 24-bit representation into an 8-bit representation is to use the most significant bits (MSB) to represent the colors. In addition, we have developed an Adaptive color categorization technique that can derive a set of color categories for the current imaging conditions. In this paper, we detail the three color categorization techniques, Crayon, MSB, and Adaptive, and we evaluate their performance in representing vehicle colors in our AutoColor system.
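    The MSB technique described above simply keeps the most significant bits of each channel. A common 3-3-2 bit allocation is sketched below; the exact split used by AutoColor is not stated in the abstract, so this allocation is an assumption:

```python
def pack_msb_332(r, g, b):
    """Pack 8-bit R, G, B into one byte from the most significant bits:
    3 bits of red, 3 of green, 2 of blue (the 3-3-2 split is an assumption)."""
    return (r & 0xE0) | ((g & 0xE0) >> 3) | (b >> 6)

def unpack_msb_332(byte):
    """Approximate reconstruction; the low bits are lost in the quantization."""
    return (byte & 0xE0, (byte & 0x1C) << 3, (byte & 0x03) << 6)

packed = pack_msb_332(200, 100, 50)  # one byte instead of three
```

    The round trip shows why MSB is fast but coarse: the low bits of each channel are discarded outright.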

  11. New Orleans Topography, Radar Image with Colored Height

    NASA Technical Reports Server (NTRS)

    2005-01-01

    [figure removed for brevity, see original site] Click on the image for the animation

    About the animation: This simulated view of the potential effects of storm surge flooding on Lake Pontchartrain and the New Orleans area was generated with data from the Shuttle Radar Topography Mission. Although it is protected by levees and sea walls against storm surges of 18 to 20 feet, much of the city is below sea level, and flooding due to storm surges caused by major hurricanes is a concern. The animation shows regions that, if unprotected, would be inundated with water. The animation depicts flooding in one-meter increments.

    About the image: The city of New Orleans, situated on the southern shore of Lake Pontchartrain, is shown in this radar image from the Shuttle Radar Topography Mission (SRTM). In this image bright areas show regions of high radar reflectivity, such as from urban areas, and elevations have been coded in color using height data also from the SRTM mission. Dark green colors indicate low elevations, rising through yellow and tan, to white at the highest elevations.

    New Orleans is near the center of this scene, between the lake and the Mississippi River. The line spanning the lake is the Lake Pontchartrain Causeway, the world's longest overwater highway bridge. Major portions of the city of New Orleans are actually below sea level, and although it is protected by levees and sea walls that are designed to protect against storm surges of 18 to 20 feet, flooding during storm surges associated with major hurricanes is a significant concern.

    Data used in this image were acquired by the Shuttle Radar Topography Mission aboard the Space Shuttle Endeavour, launched on Feb. 11, 2000. SRTM used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. SRTM was designed to collect 3-D measurements of the Earth's surface

  12. Survey of contemporary trends in color image segmentation

    NASA Astrophysics Data System (ADS)

    Vantaram, Sreenath Rao; Saber, Eli

    2012-10-01

    In recent years, the acquisition of image and video information for processing, analysis, understanding, and exploitation of the underlying content in various applications, ranging from remote sensing to biomedical imaging, has grown at an unprecedented rate. Analysis by human observers is quite laborious, tiresome, and time consuming, if not infeasible, given the large and continuously rising volume of data. Hence the need for systems capable of automatically and effectively analyzing the aforementioned imagery for a variety of uses that span the spectrum from homeland security to elderly care. In order to achieve the above, tools such as image segmentation provide the appropriate foundation for expediting and improving the effectiveness of subsequent high-level tasks by providing a condensed and pertinent representation of image information. We provide a comprehensive survey of color image segmentation strategies adopted over the last decade, though notable contributions in the gray scale domain will also be discussed. Our taxonomy of segmentation techniques is sampled from a wide spectrum of spatially blind (or feature-based) approaches such as clustering and histogram thresholding as well as spatially guided (or spatial domain-based) methods such as region growing/splitting/merging, energy-driven parametric/geometric active contours, supervised/unsupervised graph cuts, and watersheds, to name a few. In addition, qualitative and quantitative results of prominent algorithms on several images from the Berkeley segmentation dataset are shown in order to furnish a fair indication of the current quality of the state of the art. Finally, we provide a brief discussion on our current perspective of the field as well as its associated future trends.

  13. Retinal imaging using adaptive optics technology☆

    PubMed Central

    Kozak, Igor

    2014-01-01

    Adaptive optics (AO) is a technology used to improve the performance of optical systems by reducing the effect of wavefront distortions. Retinal imaging using AO aims to compensate for higher-order aberrations originating from the cornea and the lens by using a deformable mirror. The main application of AO retinal imaging has been to assess photoreceptor cell density, spacing, and mosaic regularity in normal and diseased eyes. Apart from photoreceptors, the retinal pigment epithelium, retinal nerve fiber layer, retinal vessel walls, and lamina cribrosa can also be visualized with AO technology. Recent interest in AO technology in eye research has resulted in a growing number of reports and publications utilizing this technology in both animals and humans. With the availability of the first commercial instruments, AO technology is being transformed from a research tool into a diagnostic instrument. The current challenges include imaging eyes with less than perfect optical media, the formation of normative databases for acquired images such as cone mosaics, and the cost of the technology. The opportunities for AO will include more detailed diagnosis with description of new findings in retinal diseases and glaucoma, as well as expansion of AO into clinical trials, which has already started. PMID:24843304

  14. Retinal imaging using adaptive optics technology.

    PubMed

    Kozak, Igor

    2014-04-01

    Adaptive optics (AO) is a technology used to improve the performance of optical systems by reducing the effect of wavefront distortions. Retinal imaging using AO aims to compensate for higher-order aberrations originating from the cornea and the lens by using a deformable mirror. The main application of AO retinal imaging has been to assess photoreceptor cell density, spacing, and mosaic regularity in normal and diseased eyes. Apart from photoreceptors, the retinal pigment epithelium, retinal nerve fiber layer, retinal vessel walls, and lamina cribrosa can also be visualized with AO technology. Recent interest in AO technology in eye research has resulted in a growing number of reports and publications utilizing this technology in both animals and humans. With the availability of the first commercial instruments, AO technology is being transformed from a research tool into a diagnostic instrument. The current challenges include imaging eyes with less than perfect optical media, the formation of normative databases for acquired images such as cone mosaics, and the cost of the technology. The opportunities for AO will include more detailed diagnosis with description of new findings in retinal diseases and glaucoma, as well as expansion of AO into clinical trials, which has already started.

  15. Colorization and Automated Segmentation of Human T2 MR Brain Images for Characterization of Soft Tissues

    PubMed Central

    Attique, Muhammad; Gilanie, Ghulam; Hafeez-Ullah; Mehmood, Malik S.; Naweed, Muhammad S.; Ikram, Masroor; Kamran, Javed A.; Vitkin, Alex

    2012-01-01

    Characterization of tissues like brain by using magnetic resonance (MR) images and colorization of the gray-scale image have been reported in the literature, along with their advantages and drawbacks. Here, we present two independent methods: (i) a novel colorization method to underscore the variability in brain MR images, indicative of the underlying physical density of biological tissue, and (ii) a segmentation method (both hard and soft segmentation) to characterize gray brain MR images. The segmented images are then transformed into color using the above-mentioned colorization method, yielding promising results for manual tracing. Our color transformation incorporates the voxel classification by matching the luminance of voxels of the source MR image and the provided color image by measuring the distance between them. The segmentation method is based on single-phase clustering for 2D and 3D image segmentation with a new auto centroid selection method, which divides the image into three distinct regions (gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF)) using prior anatomical knowledge. Results have been successfully validated on human T2-weighted (T2) brain MR images. The proposed method can potentially be applied to gray-scale images from other imaging modalities, bringing out additional diagnostic tissue information contained in the colorized image processing approach as described. PMID:22479421
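    As a rough illustration of intensity-based clustering into three tissue classes, here is a plain 1-D k-means sketch on synthetic voxel intensities. The paper's single-phase clustering with auto centroid selection is a different algorithm, so treat this as a generic stand-in, not the authors' method:

```python
import numpy as np

def kmeans_1d(values, k=3, iters=20):
    """Plain 1-D k-means on voxel intensities, initialized from quantiles.
    A generic stand-in, not the paper's auto-centroid algorithm."""
    values = values.astype(float)
    centroids = np.quantile(values, np.linspace(0.25, 0.75, k))
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centroids[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = values[labels == j].mean()
    return labels, np.sort(centroids)

# Synthetic T2-like intensities standing in for WM, GM, and CSF voxels.
vox = np.concatenate([np.full(50, 60.0), np.full(50, 110.0), np.full(50, 200.0)])
labels, centers = kmeans_1d(vox)
```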

  16. Color image encryption based on color blend and chaos permutation in the reality-preserving multiple-parameter fractional Fourier transform domain

    NASA Astrophysics Data System (ADS)

    Lang, Jun

    2015-03-01

    In this paper, we propose a novel color image encryption method by using Color Blend (CB) and Chaos Permutation (CP) operations in the reality-preserving multiple-parameter fractional Fourier transform (RPMPFRFT) domain. The original color image is first exchanged and mixed randomly from the standard red-green-blue (RGB) color space to the R′G′B′ color space by rotating the color cube with a random angle matrix. Then the RPMPFRFT is employed to change the pixel values of the color image: the three components of the scrambled RGB color space are transformed by the RPMPFRFT with three different transform pairs, respectively. In contrast to transforms with complex-valued output, the RPMPFRFT ensures that the output is real, which saves image storage space and is convenient for transmission in practical applications. To further enhance the security of the encryption system, the output of the former steps is scrambled by juxtaposing sections of the image in the reality-preserving multiple-parameter fractional Fourier domains, where the alignment of sections is determined by two coupled chaotic logistic maps. The parameters of the Color Blend, Chaos Permutation, and RPMPFRFT operations serve as the key of the encryption algorithm. The proposed color image encryption can also be applied to encrypt three gray images by treating them as the three RGB color components of a specially constructed color image. Numerical simulations demonstrate that the proposed algorithm is feasible, secure, sensitive to keys, and robust to noise attack and data loss.
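    The chaos-permutation idea, deriving a secret section ordering from a keyed logistic map, can be sketched as follows. This single-map version is illustrative only; the paper uses two coupled logistic maps:

```python
import numpy as np

def chaos_permutation(n, x0=0.3, r=3.99, burn_in=100):
    """Permutation of n section indices from a logistic map x <- r*x*(1-x);
    the initial value and parameter (x0, r) play the role of the secret key."""
    x = x0
    for _ in range(burn_in):      # discard the transient part of the orbit
        x = r * x * (1 - x)
    seq = np.empty(n)
    for i in range(n):
        x = r * x * (1 - x)
        seq[i] = x
    return np.argsort(seq)        # ranking the chaotic orbit gives a permutation

perm = chaos_permutation(8)
inv = np.argsort(perm)            # the receiver inverts with the same key
```

    Because the logistic map is sensitive to initial conditions, a slightly different key yields an entirely different permutation, which is what makes the scheme key-sensitive.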

  17. Remote sensing image fusion method in CIELab color space using nonsubsampled shearlet transform and pulse coupled neural networks

    NASA Astrophysics Data System (ADS)

    Jin, Xin; Zhou, Dongming; Yao, Shaowen; Nie, Rencan; Yu, Chuanbo; Ding, Tingting

    2016-04-01

    In the CIELab color space, we propose a remote sensing image fusion technique based on the nonsubsampled shearlet transform (NSST) and pulse coupled neural networks (PCNN), which aims to improve the efficiency and performance of remote sensing image fusion by combining the excellent properties of the two methods. First, the panchromatic (PAN) and multispectral (MS) images are transformed into CIELab color space to obtain the different color components. Second, the PAN image and the L component of the MS image are decomposed by the NSST to obtain the corresponding low-frequency and high-frequency coefficients. Third, the low-frequency coefficients are fused by an intersecting cortical model (ICM); the high-frequency coefficients are divided into several sub-blocks to calculate the average gradient (AG), and the linking strength β of the PCNN model is determined by the AG, so that β can be set adaptively according to the quality of each sub-block image. The sub-block images are then input into the PCNN to obtain the oscillation frequency graph (OFG), from which the fused high-frequency coefficients are derived. Finally, the fused L component is obtained by the inverse NSST, and the fused RGB color image is obtained through the inverse CIELab transform. The experimental results demonstrate that the proposed method provides better results than other common methods.
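    The average gradient (AG) used to set the PCNN linking strength β can be sketched as below. The mapping from AG to β is a hypothetical monotone choice, since the abstract does not give its exact form:

```python
import numpy as np

def average_gradient(block):
    """Average gradient (AG) of an image block: mean magnitude of the local
    intensity slope, a common sharpness/quality measure."""
    gy, gx = np.gradient(block.astype(float))
    return float(np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0)))

def linking_strength(block, scale=1.0):
    """Hypothetical monotone mapping from a sub-block's AG to the PCNN
    linking strength beta; the paper's exact formula is not in the abstract."""
    return scale * average_gradient(block)

flat = np.zeros((8, 8))                  # featureless sub-block -> AG = 0
ramp = np.tile(np.arange(8.0), (8, 1))   # sharp ramp -> AG > 0
```

    Sharper sub-blocks thus drive the PCNN with a larger β, letting detailed regions dominate the fused high-frequency coefficients.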

  18. Color-coded LED microscopy for multi-contrast and quantitative phase-gradient imaging

    PubMed Central

    Lee, Donghak; Ryu, Suho; Kim, Uihan; Jung, Daeseong; Joo, Chulmin

    2015-01-01

    We present a multi-contrast microscope based on color-coded illumination and computation. A programmable three-color light-emitting diode (LED) array illuminates a specimen, in which each color corresponds to a different illumination angle. A single color image sensor records light transmitted through the specimen, and images at each color channel are then separated and utilized to obtain bright-field, dark-field, and differential phase contrast (DPC) images simultaneously. Quantitative phase imaging is also achieved based on DPC images acquired with two different LED illumination patterns. The multi-contrast and quantitative phase imaging capabilities of our method are demonstrated by presenting images of various transparent biological samples. PMID:26713205
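    The DPC signal in this kind of setup is typically the normalized difference of two images taken under complementary illumination half-circles; a minimal sketch under that standard assumption (not the authors' exact pipeline):

```python
import numpy as np

def dpc(i_top, i_bottom, eps=1e-9):
    """Differential phase contrast from two complementary half-pupil images:
    (I_top - I_bottom) / (I_top + I_bottom). Normalizing by the sum cancels
    absorption contrast, leaving the antisymmetric phase-gradient signal."""
    i_top = i_top.astype(float)
    i_bottom = i_bottom.astype(float)
    return (i_top - i_bottom) / (i_top + i_bottom + eps)

# Equal top/bottom signal (pure absorption) -> DPC = 0;
# asymmetric signal -> nonzero phase gradient.
top = np.array([[2.0, 3.0]])
bottom = np.array([[2.0, 1.0]])
result = dpc(top, bottom)
```

    With a three-color LED array, both half-images arrive in one exposure on different color channels, which is what enables single-shot DPC.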

  19. Extreme Adaptive Optics Planet Imager: XAOPI

    SciTech Connect

    Macintosh, B A; Graham, J; Poyneer, L; Sommargren, G; Wilhelmsen, J; Gavel, D; Jones, S; Kalas, P; Lloyd, J; Makidon, R; Olivier, S; Palmer, D; Patience, J; Perrin, M; Severson, S; Sheinis, A; Sivaramakrishnan, A; Troy, M; Wallace, K

    2003-09-17

    Ground-based adaptive optics is a potentially powerful technique for direct imaging detection of extrasolar planets. Turbulence in the Earth's atmosphere imposes some fundamental limits, but the large size of ground-based telescopes compared to spacecraft can work to mitigate this. We are carrying out a design study for a dedicated ultra-high-contrast system, the eXtreme Adaptive Optics Planet Imager (XAOPI), which could be deployed on an 8-10 m telescope in 2007. With a 4096-actuator MEMS deformable mirror it should achieve Strehl >0.9 in the near-IR. Using an innovative spatially filtered wavefront sensor, the system will be optimized to control scattered light over a large radius and suppress artifacts caused by static errors. We predict that it will achieve contrast levels of 10^7-10^8 at angular separations of 0.2-0.8 arcseconds around a large sample of stars (R<7-10), sufficient to detect Jupiter-like planets through their near-IR emission over a wide range of ages and masses. We are constructing a high-contrast AO testbed to verify key concepts of our system, and present preliminary results here, showing an RMS wavefront error of <1.3 nm with a flat mirror.

  20. Toward a unified color space for perception-based image processing.

    PubMed

    Lissner, Ingmar; Urban, Philipp

    2012-03-01

    Image processing methods that utilize characteristics of the human visual system require color spaces with certain properties to operate effectively. After analyzing different types of perception-based image processing problems, we present a list of properties that a unified color space should have. Due to contradictory perceptual phenomena and geometric issues, a color space cannot incorporate all these properties. We therefore identify the most important properties and focus on creating opponent color spaces without cross contamination between color attributes (i.e., lightness, chroma, and hue) and with maximum perceptual uniformity induced by color-difference formulas. Color lookup tables define simple transformations from an initial color space to the new spaces. We calculate such tables using multigrid optimization considering the Hung and Berns data of constant perceived hue and the CMC, CIE94, and CIEDE2000 color-difference formulas. The resulting color spaces exhibit low cross contamination between color attributes and are only slightly less perceptually uniform than spaces optimized exclusively for perceptual uniformity. We compare the CIEDE2000-based space with commonly used color spaces in two examples of perception-based image processing. In both cases, standard methods show improved results if the new space is used. All color-space transformations and examples are provided as MATLAB codes on our website.

  1. An investigation on the intra-sample distribution of cotton color by using image analysis

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The colorimeter principle is widely used to measure cotton color. This method provides the sample’s color grade; but the result does not include information about the color distribution and any variation within the sample. We conducted an investigation that used image analysis method to study the ...

  2. 2-COLOR Pupil Imaging Method to Detect Stellar Oscillations

    NASA Astrophysics Data System (ADS)

    Sigismondi, Costantino; Cacciani, Alessandro; Dolci, Mauro; Jeffries, Stuart; Fossat, Eric; Cesario, Ludovico; Rapex, Paolo; Bertello, Luca; Varadi, Ferenc; Finsterle, Wolfgang

    Stellar intensity oscillations observed from the ground are strongly affected by atmospheric noise; for solar-type stars, even Antarctic scintillation noise is overwhelming. We proposed and tested a differential method that images two-color pupils of the telescope on the same CCD detector in order to compensate for sky intensity fluctuations, guiding, and saturation problems. SOHO data reveal that our method has an efficiency of 70% with respect to the absolute amplitude variations. Using two instruments at Dome C and the South Pole, we can further minimize atmospheric color noise with cross-spectrum methods; this also decreases the likelihood of gaps in the data string due to bad weather. While waiting for the South Pole/Dome C sites, we are carrying out observational tests with available telescopes at Big Bear, Mt. Wilson, Teramo, and Milano. On the data analysis side, we use the Random Lag Singular Cross-Spectrum Analysis, which eliminates noise from the observed signal better than the traditional Fourier transform. This method is also well suited for extracting common oscillatory components from two or more observations, including their relative phases, as we plan to do.

  3. Monochrome Image Presentation and Segmentation Based on the Pseudo-Color and PCT Transformations

    DTIC Science & Technology

    2001-10-25

    image classification and pattern recognition, and has received extensive attention in medical imaging such as MRI brain image segmentation [6]. FCM is...in pseudo-color image segmentation, and comparisons were made using mammography and MRI brain images. Finally, an image edge detection has also been...methods. (a) MRI T1 image; (b) MRI T2 image; (c) PCT-guided segmentation; (d) FCM-based segmentation (NK=4, NC=2). D. Edge detection in MRI image It

  4. Study of factors involved in tongue color diagnosis by Kampo medical practitioners using the Farnsworth-Munsell 100 Hue Test and tongue color images.

    PubMed

    Oji, Takeshi; Namiki, Takao; Nakaguchi, Toshiya; Ueda, Keigo; Takeda, Kanako; Nakamura, Michimi; Okamoto, Hideki; Hirasaki, Yoshiro

    2014-01-01

    In traditional Japanese medicine (Kampo medicine), tongue color is important in discerning a patient's constitution and medical conditions. However, tongue color diagnosis is susceptible to the subjective factors of the observer. To investigate factors involved in tongue color diagnosis, both color discrimination and tongue color diagnosis were researched in 68 Kampo medical practitioners. Color discrimination was studied by the Farnsworth-Munsell 100 Hue test, and tongue color diagnosis was studied by 84 tongue images. We found that overall color discrimination worsened with aging. However, the color discrimination related to tongue color regions was maintained in subjects with 10 or more years of Kampo experience. On the other hand, tongue color diagnosis significantly differed between subjects with <10 years of experience and ≥10 years of experience. Practitioners with ≥10 years of experience could maintain a consistent diagnosis of tongue color regardless of their age.

  5. Using Color and Grayscale Images to Teach Histology to Color-Deficient Medical Students

    ERIC Educational Resources Information Center

    Rubin, Lindsay R.; Lackey, Wendy L.; Kennedy, Frances A.; Stephenson, Robert B.

    2009-01-01

    Examination of histologic and histopathologic microscopic sections relies upon differential colors provided by staining techniques, such as hematoxylin and eosin, to delineate normal tissue components and to identify pathologic alterations in these components. Given the prevalence of color deficiency (commonly called "color blindness")…

  6. Image mosaicking based on feature points using color-invariant values

    NASA Astrophysics Data System (ADS)

    Lee, Dong-Chang; Kwon, Oh-Seol; Ko, Kyung-Woo; Lee, Ho-Young; Ha, Yeong-Ho

    2008-02-01

    In the field of computer vision, image mosaicking is achieved using image features, such as textures, colors, and shapes between corresponding images, or local descriptors representing neighborhoods of feature points extracted from corresponding images. However, image mosaicking based on feature points has attracted more recent attention due to the simplicity of the geometric transformation, regardless of distortion and differences in intensity generated by camera motion in consecutive images. Yet, since most feature-point matching algorithms extract feature points using gray values, identifying corresponding points becomes difficult in the case of changing illumination and images with a similar intensity. Accordingly, to solve these problems, this paper proposes a method of image mosaicking based on feature points using color information of images. Essentially, the digital values acquired from a real digital color camera are converted to values of a virtual camera with distinct narrow bands. Values based on the surface reflectance and invariant to the chromaticity of various illuminations are then derived from the virtual camera values and defined as color-invariant values invariant to changing illuminations. The validity of these color-invariant values is verified in a test using a Macbeth Color-Checker under simulated illuminations. The test also compares the proposed method using the color-invariant values with the conventional SIFT algorithm. The accuracy of the matching between the feature points extracted using the proposed method is increased, while image mosaicking using color information is also achieved.

  7. Real-time color imaging system for NIR and visible based on neighborhood statistics lookup table

    NASA Astrophysics Data System (ADS)

    Wei, Sheng-yi; Jin, Zhen; Wang, Ling-xue; He, Yu; Zhou, Xing-guang

    2015-11-01

    Near-infrared radiation is a main component of solar radiation and is widely used in remote sensing, night vision, and spectral detection. NIR images are usually monochromatic, while color images benefit scene reconstruction and object detection. In this paper, a new computed color imaging method for NIR and visible images, based on a neighborhood-statistics lookup table, is presented, and its implementation system was built. The lookup table is established from the neighborhood statistical properties of the image; using these properties enriches the color-transfer variables of the gray image. The resulting lookup table improves the color transfer and makes the colorized image more natural, and it also transfers color details well, since the neighborhood statistics represent the texture of the image. The results show that this method yields a color image with a natural color appearance and can be implemented in real time.
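
The lookup-table idea can be sketched in a few lines. This is a hypothetical simplification, not the authors' implementation: a LUT keyed by the local (mean, std) statistics of a reference grayscale image maps each statistics pair to that pixel's color, and target NIR pixels are then colorized by nearest lookup in statistics space.

```python
# Sketch of neighborhood-statistics color transfer (a hypothetical
# simplification of the paper's method). Images are lists of lists;
# colors are RGB tuples.

def local_stats(img, x, y, r=1):
    """Mean and std of the (2r+1)x(2r+1) neighborhood, clamped at borders."""
    h, w = len(img), len(img[0])
    vals = [img[i][j]
            for i in range(max(0, x - r), min(h, x + r + 1))
            for j in range(max(0, y - r), min(w, y + r + 1))]
    m = sum(vals) / len(vals)
    s = (sum((v - m) ** 2 for v in vals) / len(vals)) ** 0.5
    return m, s

def build_lut(ref_gray, ref_color):
    """LUT entries: ((mean, std) of a reference pixel's neighborhood, its color)."""
    return [(local_stats(ref_gray, x, y), ref_color[x][y])
            for x in range(len(ref_gray))
            for y in range(len(ref_gray[0]))]

def colorize(nir, lut):
    """Assign each NIR pixel the color of the nearest LUT entry in (mean, std) space."""
    out = []
    for x in range(len(nir)):
        row = []
        for y in range(len(nir[0])):
            m, s = local_stats(nir, x, y)
            (_, _), color = min(lut, key=lambda e: (e[0][0] - m) ** 2 + (e[0][1] - s) ** 2)
            row.append(color)
        out.append(row)
    return out
```

A real system would quantize the statistics into bins so lookup is O(1) rather than a linear scan, which is presumably what makes the published method real-time capable.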

  8. Examining the Pathologic Adaptation Model of Community Violence Exposure in Male Adolescents of Color

    PubMed Central

    Gaylord-Harden, Noni K.; So, Suzanna; Bai, Grace J.; Henry, David B.; Tolan, Patrick H.

    2017-01-01

    The current study examined a model of desensitization to community violence exposure—the pathologic adaptation model—in male adolescents of color. The current study included 285 African American (61%) and Latino (39%) male adolescents (W1 M age = 12.41) from the Chicago Youth Development Study to examine the longitudinal associations between community violence exposure, depressive symptoms, and violent behavior. Consistent with the pathologic adaptation model, results indicated a linear, positive association between community violence exposure in middle adolescence and violent behavior in late adolescence, as well as a curvilinear association between community violence exposure in middle adolescence and depressive symptoms in late adolescence, suggesting emotional desensitization. Further, these effects were specific to cognitive-affective symptoms of depression and not somatic symptoms. Emotional desensitization outcomes, as assessed by depressive symptoms, can occur in male adolescents of color exposed to community violence and these effects extend from middle adolescence to late adolescence. PMID:27653968

  9. Color model comparative analysis for breast cancer diagnosis using H and E stained images

    NASA Astrophysics Data System (ADS)

    Li, Xingyu; Plataniotis, Konstantinos N.

    2015-03-01

    Digital cancer diagnosis is a research realm where signal processing techniques are used to analyze and classify color histopathology images. Unlike grayscale image analysis of magnetic resonance imaging or X-ray, the colors in histopathology images convey a large amount of histological information and thus play a significant role in cancer diagnosis. Though color information is widely used in histopathology work, to date there are few studies on color model selection for feature extraction in cancer diagnosis schemes. This paper addresses the problem of color space selection for digital cancer classification using H and E stained images, and investigates the effectiveness of various color models (RGB, HSV, CIE L*a*b*, and a stain-dependent H and E decomposition model) in breast cancer diagnosis. In particular, we build a diagnosis framework as a comparison benchmark and take specific concerns of medical decision systems into account in the evaluation. The evaluation methodology includes feature discriminative power evaluation and final diagnosis performance comparison. Experimentation on a publicly accessible histopathology image set suggests that the H and E decomposition model outperforms the other assessed color spaces. As for the reasons behind the varying performance of the color spaces, our analysis via mutual information estimation demonstrates that the color components in the H and E model are less dependent, and thus most feature discriminative power is collected in one channel instead of spreading out among channels as in other color spaces.
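
The channel-dependence argument can be illustrated with a toy example. The paper uses mutual information; this sketch uses a simpler proxy (Pearson correlation and hue spread) on synthetic "stain axis" pixels to show how RGB channels co-vary while a perceptual model concentrates the chromatic content in one nearly constant channel:

```python
# Toy illustration (not the paper's analysis): along a single stain axis,
# R, G, B all vary together, whereas in HSV the hue stays nearly constant,
# i.e. the chromatic information collapses into one channel.
import colorsys

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb) if va > 0 and vb > 0 else 0.0

# Synthetic "pinkish stain" pixels: intensity varies, chromaticity does not.
pixels_rgb = [(0.8 - 0.3 * t, 0.4 - 0.2 * t, 0.6 - 0.25 * t)
              for t in [i / 19 for i in range(20)]]
pixels_hsv = [colorsys.rgb_to_hsv(*p) for p in pixels_rgb]

r, g, b = zip(*pixels_rgb)
h, s, v = zip(*pixels_hsv)
rgb_rg_corr = pearson(r, g)      # R and G move together along the stain axis
hue_spread = max(h) - min(h)     # hue is nearly constant in HSV
```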

  10. Joint sparse coding based spatial pyramid matching for classification of color medical image.

    PubMed

    Shi, Jun; Li, Yi; Zhu, Jie; Sun, Haojie; Cai, Yin

    2015-04-01

    Although color medical images are important in clinical practice, they are usually converted to grayscale for further processing in pattern recognition, resulting in loss of rich color information. The sparse coding based linear spatial pyramid matching (ScSPM) and its variants are popular for grayscale image classification, but cannot extract color information. In this paper, we propose a joint sparse coding based SPM (JScSPM) method for the classification of color medical images. A joint dictionary can represent both the color information in each color channel and the correlation between channels. Consequently, the joint sparse codes calculated from a joint dictionary can carry color information, and therefore this method can easily transform a feature descriptor originally designed for grayscale images to a color descriptor. A color hepatocellular carcinoma histological image dataset was used to evaluate the performance of the proposed JScSPM algorithm. Experimental results show that JScSPM provides significant improvements as compared with the majority voting based ScSPM and the original ScSPM for color medical image classification.

  11. Quantifying Human Visible Color Variation from High Definition Digital Images of Orb Web Spiders

    PubMed Central

    Ajuria Ibarra, Helena; Rao, Dinesh

    2016-01-01

    Digital processing and analysis of high-resolution images of 30 individuals of the orb web spider Verrucosa arenata were performed to extract and quantify the human-visible colors present on the dorsal abdomen of this species. Color extraction was performed with minimal user intervention using an unsupervised algorithm to determine groups of colors on each individual spider, which were then analyzed in order to quantify and classify the colors obtained, both spatially and using energy and entropy measures of the digital images. The analysis shows that the colors cover a small region of the visible spectrum and are not homogeneously distributed over the patterns; from an entropic point of view, colors that cover a smaller region of the whole pattern carry more information than colors covering a larger region. This study demonstrates the use of processing tools to create automatic systems that extract valuable information from digital images and that are precise, efficient, and helpful for understanding the underlying biology. PMID:27902724
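
The entropic observation (rarer colors carry more information) is a direct consequence of Shannon self-information. A minimal sketch, with hypothetical area fractions for a spider's dorsal pattern:

```python
# Self-information of each color class and the pattern-level entropy.
# The area fractions below are hypothetical, for illustration only.
import math

def self_information(p):
    """Shannon self-information, in bits, of an event with probability p."""
    return -math.log2(p)

fractions = {"white": 0.70, "brown": 0.25, "red": 0.05}
info = {c: self_information(p) for c, p in fractions.items()}
# Shannon entropy of the color distribution over the whole pattern.
entropy = sum(p * self_information(p) for p in fractions.values())
```

The rarest color ("red", 5% of the pattern) carries the most bits per occurrence, matching the paper's observation.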

  12. Best Color Image of Jupiter's Little Red Spot

    NASA Technical Reports Server (NTRS)

    2007-01-01

    This amazing color portrait of Jupiter's 'Little Red Spot' (LRS) combines high-resolution images from the New Horizons Long Range Reconnaissance Imager (LORRI), taken at 03:12 UT on February 27, 2007, with color images taken nearly simultaneously by the Wide Field Planetary Camera 2 (WFPC2) on the Hubble Space Telescope. The LORRI images provide details as fine as 9 miles across (15 kilometers), which is approximately 10 times better than Hubble can provide on its own. The improved resolution is possible because New Horizons was only 1.9 million miles (3 million kilometers) away from Jupiter when LORRI snapped its pictures, while Hubble was more than 500 million miles (800 million kilometers) away from the Gas Giant planet.

    The Little Red Spot is the second largest storm on Jupiter, roughly 70% the size of the Earth, and it started turning red in late 2005. The clouds in the Little Red Spot rotate counterclockwise, or in the anticyclonic direction, because it is a high-pressure region. In that sense, the Little Red Spot is the opposite of a hurricane on Earth, which is a low-pressure region - and, of course, the Little Red Spot is far larger than any hurricane on Earth.

    Scientists don't know exactly how or why the Little Red Spot turned red, though they speculate that the change could stem from a surge of exotic compounds from deep within Jupiter, caused by an intensification of the storm system. In particular, sulfur-bearing cloud droplets might have been propelled about 50 kilometers into the upper level of ammonia clouds, where brighter sunlight bathing the cloud tops released the red-hued sulfur embedded in the droplets, causing the storm to turn red. A similar mechanism has been proposed for the Little Red Spot's 'older brother,' the Great Red Spot, a massive energetic storm system that has persisted for over a century.

    New Horizons is providing an opportunity to examine an 'infant' red storm system in detail, which may help scientists

  13. Adaptive image steganography using contourlet transform

    NASA Astrophysics Data System (ADS)

    Fakhredanesh, Mohammad; Rahmati, Mohammad; Safabakhsh, Reza

    2013-10-01

    This work presents adaptive image steganography methods which locate suitable regions for embedding via the contourlet transform, while embedded message bits are carried in discrete cosine transform coefficients. The first proposed method utilizes contourlet transform coefficients to select contour regions of the image. In the embedding procedure, some of the contourlet transform coefficients may change, which can cause errors at the message extraction phase; we propose a novel iterative procedure to resolve such problems. In addition, we propose an improved version of the first method which uses an advanced embedding operation to boost its security. Experimental results show that the proposed base method is an imperceptible image steganography method with zero retrieval error rate. Comparisons with other steganography methods which utilize the contourlet transform show that our proposed method is able to retrieve all messages perfectly, whereas the others fail. Moreover, the proposed method outperforms the ContSteg method in terms of PSNR and against the higher-order statistics steganalysis method. Experimental evaluations of our methods against well-known DCT-based steganography algorithms demonstrate that our improved method has superior performance in terms of PSNR and SSIM, and is more secure against steganalysis attacks.
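
The paper's exact embedding operation is not reproduced here; as a generic illustration of carrying message bits in DCT coefficients, the sketch below uses quantization index modulation (QIM) on a single 8x8 block, with hand-rolled orthonormal DCTs. The block size, quantization step Q, and coefficient slots are all assumed values, not the authors' parameters.

```python
# QIM embedding in mid-frequency DCT coefficients of one 8x8 block.
# Each selected coefficient is snapped to the nearest multiple of Q whose
# index parity encodes one message bit; rounding pixels to integers
# perturbs coefficients by far less than Q/2, so extraction is exact.
import math

N, Q = 8, 16  # block size and quantization step (assumed values)

def dct2(block):
    """Orthonormal 2-D DCT-II of an N x N block."""
    def a(u): return math.sqrt(1 / N) if u == 0 else math.sqrt(2 / N)
    return [[a(u) * a(v) * sum(block[x][y]
                * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                * math.cos((2 * y + 1) * v * math.pi / (2 * N))
                for x in range(N) for y in range(N))
             for v in range(N)] for u in range(N)]

def idct2(coef):
    """Inverse of dct2 (the DCT is orthonormal, so this is its transpose)."""
    def a(u): return math.sqrt(1 / N) if u == 0 else math.sqrt(2 / N)
    return [[sum(a(u) * a(v) * coef[u][v]
                * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                * math.cos((2 * y + 1) * v * math.pi / (2 * N))
                for u in range(N) for v in range(N))
             for y in range(N)] for x in range(N)]

SLOTS = [(2, 3), (3, 2), (3, 3), (2, 4)]  # mid-frequency slots (arbitrary choice)

def embed(block, bits):
    coef = dct2(block)
    for (u, v), bit in zip(SLOTS, bits):
        k = round(coef[u][v] / Q)
        if k % 2 != bit:  # move to the nearest index with the right parity
            k += 1 if coef[u][v] >= k * Q else -1
        coef[u][v] = k * Q
    return [[round(p) for p in row] for row in idct2(coef)]

def extract(block):
    coef = dct2(block)
    return [round(coef[u][v] / Q) % 2 for u, v in SLOTS]
```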

  14. Color enhancement and image defogging in HSI based on Retinex model

    NASA Astrophysics Data System (ADS)

    Gao, Han; Wei, Ping; Ke, Jun

    2015-08-01

    Retinex is a luminance perception algorithm based on color constancy, and it performs well for color enhancement. In some cases, however, the traditional Retinex algorithms, both Single-Scale Retinex (SSR) and Multi-Scale Retinex (MSR) in RGB color space, do not work well and cause color deviation. To solve this problem, we present improved SSR and MSR algorithms. In contrast to other Retinex algorithms, we implement Retinex in the HSI (Hue, Saturation, Intensity) color space and use a parameter α to improve image quality. Moreover, the algorithms presented in this paper perform well for image defogging. In contrast with traditional Retinex algorithms, we use the intensity channel to obtain the reflection information of an image. The intensity channel is processed with a Gaussian center-surround filter to estimate the light information, which should be removed from the intensity channel. We then subtract the light information from the intensity channel to obtain the reflection image, which contains only the attributes of the objects in the image. Using the reflection image and the parameter α, an arbitrary scale factor set manually, we improve the intensity channel and complete the color enhancement. Our experiments show that this approach works well compared with existing methods for color enhancement. Besides better handling of the color deviation problem and image defogging, a visible improvement in image quality for human contrast perception is also observed.
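
The core single-scale Retinex step on the intensity channel can be sketched as follows. This is a minimal stdlib-only sketch assuming a separable Gaussian surround; the sigma, radius, and α values are illustrative, not the paper's:

```python
# Single-scale Retinex on the I channel: log(I) - log(Gaussian * I),
# where the Gaussian-blurred image estimates the illumination.
import math

def gaussian_kernel(sigma, radius):
    k = [math.exp(-0.5 * (i / sigma) ** 2) for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]  # normalized so a flat image stays flat

def conv_rows(a, kern, radius):
    """1-D convolution along each row, clamping indices at the borders."""
    h, w = len(a), len(a[0])
    return [[sum(kern[t + radius] * a[x][min(max(y + t, 0), w - 1)]
                 for t in range(-radius, radius + 1))
             for y in range(w)] for x in range(h)]

def gaussian_blur(img, sigma=2.0, radius=4):
    kern = gaussian_kernel(sigma, radius)
    tmp = conv_rows(img, kern, radius)   # horizontal pass
    tmp = [list(r) for r in zip(*tmp)]   # transpose
    tmp = conv_rows(tmp, kern, radius)   # vertical pass
    return [list(r) for r in zip(*tmp)]  # transpose back

def ssr(intensity, sigma=2.0, alpha=1.0):
    """Single-scale Retinex with an illustrative manual scale factor alpha."""
    surround = gaussian_blur(intensity, sigma)
    return [[alpha * (math.log(i + 1.0) - math.log(s + 1.0))
             for i, s in zip(ri, rs)]
            for ri, rs in zip(intensity, surround)]
```

On a uniform image the surround equals the intensity, so the reflectance output is zero everywhere; bright details above their surround come out positive, which is the contrast-enhancement effect the abstract describes.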

  15. Image enhancement and color constancy for a vehicle-mounted change detection system

    NASA Astrophysics Data System (ADS)

    Tektonidis, Marco; Monnin, David

    2016-10-01

    Vehicle-mounted change detection systems improve situational awareness on outdoor itineraries of interest. Since the visibility of acquired images is often affected by illumination effects (e.g., shadows), it is important to enhance local contrast. For the analysis and comparison of color images depicting the same scene at different time points, color and lightness inconsistencies caused by the different illumination conditions must be compensated. We have developed an approach for image enhancement and color constancy based on the center/surround Retinex model and the Gray World hypothesis. The combination of the two methods using a color processing function improves color rendition compared to either method alone. The use of stacked integral images (SII) allows local image processing to be performed efficiently. Our combined Retinex/Gray World approach has been successfully applied to image sequences acquired on outdoor itineraries at different time points, and a comparison with previous Retinex-based approaches has been carried out.
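
The Gray World component can be sketched in its standard textbook form (not the authors' exact combination with Retinex): each channel is scaled so its mean matches the global mean gray level, which removes a global color cast.

```python
# Standard Gray World color constancy: assume the average scene color is
# gray, and scale each RGB channel's gain accordingly.

def gray_world(pixels):
    """pixels: list of (r, g, b) tuples in [0, 1]. Returns corrected pixels."""
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    gray = sum(means) / 3.0
    gains = [gray / m if m > 0 else 1.0 for m in means]
    return [tuple(min(1.0, p[c] * gains[c]) for c in range(3)) for p in pixels]
```

After correction (and absent clipping), the three channel means are equal, so images of the same scene taken under differently tinted illumination become directly comparable.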

  16. Compression of digital color images based on the model of spatiochromatic multiplexing of human vision

    NASA Astrophysics Data System (ADS)

    Martinez-Uriegas, Eugenio; Peters, John D.; Crane, Hewitt D.

    1994-05-01

    SRI International has developed a new technique for compression of digital color images on the basis of its research of multiplexing processes in human color vision. The technique can be used independently, or in combination with standard JPEG or any other monochrome procedure, to produce color image compression systems that are simpler than conventional implementations. Specific applications are currently being developed within four areas: (1) simplification of processing in systems that compress RGB digital images, (2) economic upgrading of black and white image capturing systems to full color, (3) triplication of spatial resolution of high-end image capturing systems currently designed for 3-plane color capture, and (4) even greater simplification of processing in systems for dynamic images.

  17. Short-Term Neural Adaptation to Simultaneous Bifocal Images

    PubMed Central

    Radhakrishnan, Aiswaryah; Dorronsoro, Carlos; Sawides, Lucie; Marcos, Susana

    2014-01-01

    Simultaneous vision is an increasingly used solution for the correction of presbyopia (the age-related loss of ability to focus near images). Simultaneous Vision corrections, normally delivered in the form of contact or intraocular lenses, project on the patient's retina a focused image for near vision superimposed with a degraded image for far vision, or a focused image for far vision superimposed with the defocused image of the near scene. It is expected that patients with these corrections are able to adapt to the complex Simultaneous Vision retinal images, although the mechanisms or the extent to which this happens is not known. We studied the neural adaptation to simultaneous vision by studying changes in the Natural Perceived Focus and in the Perceptual Score of image quality in subjects after exposure to Simultaneous Vision. We show that Natural Perceived Focus shifts after a brief period of adaptation to a Simultaneous Vision blur, similar to adaptation to Pure Defocus. This shift strongly correlates with the magnitude and proportion of defocus in the adapting image. The magnitude of defocus affects perceived quality of Simultaneous Vision images, with 0.5 D defocus scored lowest and beyond 1.5 D scored “sharp”. Adaptation to Simultaneous Vision shifts the Perceptual Score of these images towards higher rankings. Larger improvements occurred when testing simultaneous images with the same magnitude of defocus as the adapting images, indicating that wearing a particular bifocal correction improves the perception of images provided by that correction. PMID:24664087

  18. Color-dependent banding characterization and simulation on natural document images

    NASA Astrophysics Data System (ADS)

    Hu, Sirui; Nachlieli, Hila; Shaked, Doron; Shiffman, Smadar; Allebach, Jan P.

    2012-01-01

    Print defects like banding from a digital press involve not only luminance variation, but also chrominance variation. As digital presses place one color separation at a time, the contrast and spatial pattern of the print defects are color-space dependent. Characterizing the color-dependent features of the banding signal enables us to simulate the banding on natural document images in a more accurate way that matches the characteristics of the banding generation mechanism within the digital press. A framework is described for color-dependent banding characterization including the following steps: printing and scanning uniform patches that sample colorant combinations throughout the input document sRGB color space, extracting banding signals in the CMYK color space of the target device, and modeling the banding features in a perceptually uniform color space. We obtain a full banding features LUT for every color point in the input sRGB space by interpolating banding features extracted from measured color points. The color-dependent banding simulation framework is developed based on the banding features LUT. Using the information contained in this LUT, a single banding prototype signal is modulated in a color-space-dependent fashion that varies spatially across the natural document image. Proper execution of the framework of banding characterization and simulation requires careful calibration of each system component, as well as implementation of a complete color management pipeline.

  19. Genomic architecture of adaptive color pattern divergence and convergence in Heliconius butterflies

    PubMed Central

    Supple, Megan A.; Hines, Heather M.; Dasmahapatra, Kanchon K.; Lewis, James J.; Nielsen, Dahlia M.; Lavoie, Christine; Ray, David A.; Salazar, Camilo; McMillan, W. Owen; Counterman, Brian A.

    2013-01-01

    Identifying the genetic changes driving adaptive variation in natural populations is key to understanding the origins of biodiversity. The mosaic of mimetic wing patterns in Heliconius butterflies makes an excellent system for exploring adaptive variation using next-generation sequencing. In this study, we use a combination of techniques to annotate the genomic interval modulating red color pattern variation, identify a narrow region responsible for adaptive divergence and convergence in Heliconius wing color patterns, and explore the evolutionary history of these adaptive alleles. We use whole genome resequencing from four hybrid zones between divergent color pattern races of Heliconius erato and two hybrid zones of the co-mimic Heliconius melpomene to examine genetic variation across 2.2 Mb of a partial reference sequence. In the intergenic region near optix, the gene previously shown to be responsible for the complex red pattern variation in Heliconius, population genetic analyses identify a shared 65-kb region of divergence that includes several sites perfectly associated with phenotype within each species. This region likely contains multiple cis-regulatory elements that control discrete expression domains of optix. The parallel signatures of genetic differentiation in H. erato and H. melpomene support a shared genetic architecture between the two distantly related co-mimics; however, phylogenetic analysis suggests mimetic patterns in each species evolved independently. Using a combination of next-generation sequencing analyses, we have refined our understanding of the genetic architecture of wing pattern variation in Heliconius and gained important insights into the evolution of novel adaptive phenotypes in natural populations. PMID:23674305

  20. DAF: differential ACE filtering image quality assessment by automatic color equalization

    NASA Astrophysics Data System (ADS)

    Ouni, S.; Chambah, M.; Saint-Jean, C.; Rizzi, A.

    2008-01-01

    Ideally, a quality assessment system would perceive and measure image or video impairments just like a human being. In reality, however, objective quality metrics do not necessarily correlate well with perceived quality [1]. Moreover, some measures assume that a reference exists in the form of an "original" to compare to, which prevents their use in the digital restoration field, where often there is no reference. That is why subjective evaluation has been the most used and most efficient approach up to now. But subjective assessment is expensive and time consuming, and hence does not meet economic requirements [2,3]. Thus, reliable automatic methods for visual quality assessment are needed in the field of digital film restoration. The ACE method, for Automatic Color Equalization [4,6], is an algorithm for unsupervised enhancement of digital images. It is based on a new computational approach that models the perceptual response of our vision system, merging the Gray World and White Patch equalization mechanisms in a global and local way. Like our vision system, ACE is able to adapt to widely varying lighting conditions and to extract visual information from the environment effectively. Moreover, ACE can be run in an unsupervised manner, which makes it very useful as a digital film restoration tool, since no a priori information is available. In this paper we deepen the investigation of using the ACE algorithm as a basis for reference-free image quality evaluation. This new metric, called DAF for Differential ACE Filtering [7], is an objective quality measure that can be used in several image restoration and image quality assessment systems. We compare, on different image databases, the results obtained with DAF and with subjective image quality assessments (Mean Opinion Score, MOS, as a measure of perceived image quality), and we also study the correlation between the objective measure and MOS.
    In our experiments, we have used for the first image
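
The objective-metric-vs-MOS correlation study mentioned above reduces to computing a correlation coefficient between two score vectors. A minimal sketch with hypothetical scores (the DAF and MOS values below are invented for illustration):

```python
# Pearson correlation between an objective impairment metric and MOS.
# A strong negative value means higher measured impairment tracks lower
# perceived quality, i.e. the metric agrees with human judgment.

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical DAF scores (higher = more impairment) and MOS ratings (1-5).
daf = [0.12, 0.35, 0.50, 0.71, 0.90]
mos = [4.6, 3.9, 3.1, 2.4, 1.5]
r = pearson(daf, mos)
```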

  1. Quantitative Evaluation of Surface Color of Tomato Fruits Cultivated in Remote Farm Using Digital Camera Images

    NASA Astrophysics Data System (ADS)

    Hashimoto, Atsushi; Suehara, Ken-Ichiro; Kameoka, Takaharu

    To measure quantitative surface color information of agricultural products along with the ambient information during cultivation, a color calibration method for digital camera images and a Web-based remote color-imaging monitoring system were developed. Single-lens reflex and web digital cameras were used for image acquisition. Tomato images through the post-ripening process were taken with the digital camera both in the standard image acquisition system and under field conditions from morning to evening. Several kinds of images were acquired with the standard RGB color chart set up just behind the tomato fruit on a black matte, and a color calibration was carried out. The influence of the sunlight could be experimentally eliminated, and the calibrated color information consistently agreed with the standard values acquired in the system through the post-ripening process. Furthermore, the surface color change of a tomato on the tree in a greenhouse was remotely monitored during maturation using the digital cameras equipped with the Field Server. The acquired digital color images were sent from the Farm Station to the BIFE Laboratory of Mie University via VPN. The time behavior of the tomato surface color change during the maturing process could be measured using a color parameter calculated from the calibrated color images, along with the ambient atmospheric record. This study is an important step in developing surface color analysis for simple and rapid evaluation of crop vigor in the field, and in constructing an ambient, networked remote monitoring system for food security, precision agriculture, and agricultural research.
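
Calibration against a reference chart can be sketched as fitting a 3x3 correction matrix by least squares: the camera's chart readings are mapped back onto the chart's known reference values, and the fitted matrix is then applied to every pixel. This assumes a purely linear camera model, which the paper's actual procedure may refine:

```python
# Least-squares 3x3 color calibration against chart patches.
# measured[k] and reference[k] are RGB triples for the k-th chart patch.

def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with pivoting."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, 3):
            f = M[r][i] / M[i][i]
            for c in range(i, 4):
                M[r][c] -= f * M[i][c]
    x = [0.0] * 3
    for i in (2, 1, 0):
        x[i] = (M[i][3] - sum(M[i][c] * x[c] for c in range(i + 1, 3))) / M[i][i]
    return x

def fit_correction(measured, reference):
    """Least-squares 3x3 matrix C such that C @ measured[k] ~= reference[k]."""
    # Normal equations (X^T X) c_row = X^T y, solved once per output channel.
    XtX = [[sum(m[i] * m[j] for m in measured) for j in range(3)] for i in range(3)]
    C = []
    for ch in range(3):
        Xty = [sum(m[i] * r[ch] for m, r in zip(measured, reference)) for i in range(3)]
        C.append(solve3(XtX, Xty))
    return C

def apply_correction(C, rgb):
    return tuple(sum(C[i][j] * rgb[j] for j in range(3)) for i in range(3))
```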

  2. Adaptive Optics Imaging and Spectroscopy of Neptune

    NASA Technical Reports Server (NTRS)

    Johnson, Lindley (Technical Monitor); Sromovsky, Lawrence A.

    2005-01-01

    OBJECTIVES: We proposed to use high spectral resolution imaging and spectroscopy of Neptune in the visible and near-IR spectral ranges to advance our understanding of Neptune's cloud structure. We intended to use the adaptive optics (AO) system at Mt. Wilson at visible wavelengths to try to obtain the first ground-based observations of dark spots on Neptune; to use AO observations at the IRTF to obtain near-IR R=2000 spatially resolved spectra; and to use near-IR AO observations at the Keck observatory to obtain the highest spatial resolution studies of cloud feature dynamics and atmospheric motions. Vertical structure of cloud features was to be inferred from the wavelength-dependent absorption of methane and hydrogen.

  3. Radar Image with Color as Height, Hariharalaya, Cambodia

    NASA Technical Reports Server (NTRS)

    2002-01-01

    top. Image brightness is from the C-band (5.6 centimeters, or 2.2 inches wavelength) radar backscatter, which is a measure of how much energy the surface reflects back toward the radar. Color is used to represent elevation contours. One cycle of color--from blue to red to yellow to green and back to blue again--represents 10 meters (32.8 feet) of elevation change.

    AIRSAR flies aboard a NASA DC-8 based at NASA's Dryden Flight Research Center, Edwards, Calif. In the TOPSAR mode, AIRSAR collects radar interferometry data from two spatially separated antennas (2.6 meters, or 8.5 feet). Information from the two antennas is used to form radar backscatter imagery and to generate highly accurate elevation data. Built, operated and managed by JPL, AIRSAR is part of NASA's Earth Science Enterprise program. JPL is a division of the California Institute of Technology in Pasadena.

  4. Hue-preserving local contrast enhancement and illumination compensation for outdoor color images

    NASA Astrophysics Data System (ADS)

    Tektonidis, Marco; Monnin, David; Christnacher, Frank

    2015-10-01

    Real-time applications in the field of security and defense use dynamic color camera systems to gain a better understanding of outdoor scenes. Local image processing is required to enhance details and improve visibility in images, and illumination effects must be compensated to reduce lightness and color inconsistencies between images acquired under different illumination conditions. We introduce an automatic hue-preserving local contrast enhancement and illumination compensation approach for outdoor color images. Our approach is based on a shadow-weighted intensity-based Retinex model which enhances details and compensates the illumination effect on the lightness of an image. The Retinex model exploits information from a shadow detection approach to reduce lightness halo artifacts on shadow boundaries. We employ a hue-preserving color transformation to obtain a color image based on the original color information. To reduce color inconsistencies between images acquired under different illumination conditions we process the saturation using a scaling function. The approach has been successfully applied to static and dynamic color image sequences of outdoor scenes, and an experimental comparison with previous Retinex-based approaches has been carried out.

  5. Improving information perception from digital images for users with dichromatic color vision

    NASA Astrophysics Data System (ADS)

    Shayeghpour, Omid; Nyström, Daniel; Gooran, Sasan

    2014-01-01

    Color vision deficiency (CVD) is the inability, or limited ability, to recognize colors and discriminate between them. A person with this condition perceives a narrower range of colors compared to a person with normal color vision. In this study we concentrate on recoloring digital images in such a way that users with CVD, especially dichromats, perceive more details from the recolored images compared to the original ones. During this color transformation process, the goal is to keep the overall contrast of the image constant, while adjusting the colors that might cause confusion for the CVD user. In this method, RGB values at each pixel of the image are first converted into HSV values and, based on pre-defined rules, the problematic colors are adjusted into colors that are perceived better by the user. Comparing the simulation of the original image, as it would be perceived by a dichromat, with the same dichromatic simulation on the recolored image, clearly shows that our method can eliminate a lot of confusion for the user and convey more details. Moreover, an online questionnaire was created and a group of 39 CVD users confirmed that the transformed images allow them to perceive more information compared to the original images.
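
The RGB-to-HSV recoloring pipeline described above can be sketched with a toy rule. The paper's pre-defined rules are not reproduced here; this example invents a single illustrative rule for deuteranopia (saturated red hues, confusable with green, are rotated to blue while value is kept, preserving lightness contrast):

```python
# Toy HSV-based recoloring for dichromats (illustrative rule only).
import colorsys

def recolor_for_deutan(rgb):
    """rgb: (r, g, b) in [0, 1]. Rotate confusable red hues toward blue,
    keeping value (V) so the image's lightness contrast is preserved."""
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    in_red_band = (h < 30 / 360 or h > 330 / 360) and s > 0.3
    if in_red_band:
        h = 240 / 360  # blue remains distinguishable for deutans
    return colorsys.hsv_to_rgb(h, s, v)
```

Desaturated pixels are left untouched by the `s > 0.3` guard, which is one way to keep the overall appearance of the image close to the original, as the abstract requires.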

  6. Offset-sparsity decomposition for enhancement of color microscopic image of stained specimen in histopathology: further results

    NASA Astrophysics Data System (ADS)

    Kopriva, Ivica; Popović Hadžija, Marijana; Hadžija, Mirko; Aralica, Gorana

    2016-03-01

    Recently, we proposed a novel data-driven offset-sparsity decomposition (OSD) method to increase the colorimetric difference between tissue structures present in color microscopic images of stained specimens in histopathology. The OSD method performs additive decomposition of vectorized spectral images into an image-adapted offset term and a sparse term, whereby the sparse term represents the enhanced image. The method was tested on images of histological slides of human liver stained with hematoxylin and eosin, anti-CD34 monoclonal antibody and Sudan III. Herein, we present further results on the increase in colorimetric difference between tissue structures present in images of human liver specimens with pancreatic carcinoma metastasis stained with Gomori, CK7, CDX2 and LCA, and with colon carcinoma metastasis stained with Gomori, CK20 and PAN CK. The obtained relative increase in colorimetric difference is in the range [19.36%, 103.94%].
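
    The additive offset-plus-sparse split can be illustrated with a toy decomposition: a constant, data-adapted offset (here simply the median) plus a sparse residual obtained by soft thresholding. The published OSD method estimates both terms jointly from the spectral images; the threshold `lam` below is an illustrative parameter.

```python
import numpy as np

def soft_threshold(x, lam):
    """Shrink values toward zero; entries within lam of zero become exactly zero."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def offset_sparsity(v, lam=0.05):
    """Toy additive decomposition in the spirit of OSD: v ~ offset + sparse,
    where the offset absorbs the common background level and the sparse
    term keeps only large deviations (the enhanced structure)."""
    offset = np.median(v)
    sparse = soft_threshold(v - offset, lam)
    return offset, sparse
```
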

  7. Radar Image with Color as Height, Sman Teng, Temple, Cambodia

    NASA Technical Reports Server (NTRS)

    2002-01-01

    This image of Cambodia's Angkor region, taken by NASA's Airborne Synthetic Aperture Radar (AIRSAR), reveals a temple (upper-right) not depicted on early 19th Century French archeological survey maps and American topographic maps. The temple, known as 'Sman Teng,' was known to the local Khmer people, but had remained unknown to historians due to the remoteness of its location. The temple is thought to date to the 11th Century: the heyday of Angkor. It is an important indicator of the strategic and natural resource contributions of the area northwest of the capital to the urban center of Angkor. Sman Teng, the name designating one of the many types of rice enjoyed by the Khmer, was 'discovered' by a scientist at NASA's Jet Propulsion Laboratory, Pasadena, Calif., working in collaboration with an archaeological expert on the Angkor region. Analysis of this remote area was a true collaboration of archaeology and technology. Locating the temple of Sman Teng required the skills of scientists trained to spot the types of topographic anomalies that only radar can reveal.

    This image, with a pixel spacing of 5 meters (16.4 feet), depicts an area of approximately 5 by 4.7 kilometers (3.1 by 2.9 miles). North is at top. Image brightness is from the P-band (68 centimeters, or 26.8 inches) wavelength radar backscatter, a measure of how much energy the surface reflects back toward the radar. Color is used to represent elevation contours. One cycle of color represents 25 meters (82 feet) of elevation change, so going from blue to red to yellow to green and back to blue again corresponds to 25 meters (82 feet) of elevation change.
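
    The cyclic color-as-height encoding amounts to mapping elevation modulo the cycle length onto a hue wheel, so elevations exactly one cycle apart receive the same color. The sketch below uses the standard HSV hue wheel as a stand-in; the actual AIRSAR palette cycles blue, red, yellow, green and back to blue.

```python
import colorsys

CYCLE_M = 25.0  # one full color cycle per 25 meters of elevation, as in this image

def elevation_to_rgb(elev_m):
    """Cyclic color coding of elevation: heights CYCLE_M apart share a color."""
    hue = (elev_m % CYCLE_M) / CYCLE_M      # wraps from 0 to 1 every 25 m
    r, g, b = colorsys.hsv_to_rgb(hue, 1.0, 1.0)
    return round(r * 255), round(g * 255), round(b * 255)
```
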

    AIRSAR flies aboard a NASA DC-8 based at NASA's Dryden Flight Research Center, Edwards, Calif. In the TOPSAR mode, AIRSAR collects radar interferometry data from two antennas separated by 2.6 meters (8.5 feet). Information from the two antennas is used to form radar backscatter imagery and to generate highly accurate elevation data.

  8. Radar Image with Color as Height, Nokor Pheas Trapeng, Cambodia

    NASA Technical Reports Server (NTRS)

    2002-01-01

    Nokor Pheas Trapeng is the name of the large black rectangular feature in the center-bottom of this image, acquired by NASA's Airborne Synthetic Aperture Radar (AIRSAR). Its Khmer name translates as 'Tank of the City of Refuge'. The immense tank is a typical structure built by the Khmer for water storage and control, but its size is unusually large. This suggests, as does 'city' in its name, that in ancient times this area was far more prosperous than today.

    A visit to this remote, inaccessible site was made in December 1998. The huge water tank was hardly visible. From the radar data we knew that the tank stretched some 500 meters (1,640 feet) from east to west. However, between all the plants growing on the surface of the water and the trees and other vegetation in the area, the water tank blended with the surrounding topography. Among the vegetation, on the northeast of the tank, were remains of an ancient temple and a spirit shrine. So although far from the temples of Angkor, to the southeast, the ancient water structure is still venerated by the local people.

    The image covers an area approximately 9.5 by 8.7 kilometers (5.9 by 5.4 miles) with a pixel spacing of 5 meters (16.4 feet). North is at top. Image brightness is from the C-band (5.6 centimeters, or 2.2 inches) wavelength radar backscatter, which is a measure of how much energy the surface reflects back toward the radar. Color is used to represent elevation contours. One cycle of color represents 20 meters (65.6 feet) of elevation change; that is, going from blue to red to yellow to green and back to blue again corresponds to 20 meters (65.6 feet) of elevation change.

    AIRSAR flies aboard a NASA DC-8 based at NASA's Dryden Flight Research Center, Edwards, Calif. In the TOPSAR mode, AIRSAR collects radar interferometry data from two antennas separated by 2.6 meters (8.5 feet). Information from the two antennas is used to form radar backscatter imagery and to generate highly accurate elevation data.

  9. Simplified color image-processing system using a dichromated gelatin holographic element.

    PubMed

    Jiang, Y G

    1982-09-01

    A simplified color image-processing system using a dichromated gelatin hololens is described. Only two conventional lenses are needed in this system, and their apertures are much smaller than the image to be processed. The processed image has a resolution of more than 200 l/mm. Since the system does not use a color filter, the processed image can be clear and of high fidelity.

  10. A new human perception-based over-exposure detection method for color images.

    PubMed

    Yoon, Yeo-Jin; Byun, Keun-Yung; Lee, Dae-Hong; Jung, Seung-Won; Ko, Sung-Jea

    2014-09-15

    To correct an over-exposure within an image, the over-exposed region (OER) must first be detected, and the accuracy of this detection strongly affects the performance of the correction. However, the results of conventional OER detection methods, which generally use the brightness and color information of each pixel, often deviate from the actual OER perceived by the human eye. To overcome this problem, in this paper, we propose a novel method for detecting the perceived OER more accurately. Based on the observation that recognizing the OER in an image depends on the saturation sensitivity of the human visual system (HVS), we detect the OER by thresholding the saturation value of each pixel. The proposed method adaptively determines the saturation threshold for each pixel from its color and perceived brightness, using a function designed from the results of a subjective evaluation of the saturation sensitivity of the HVS. Experimental results demonstrate that the proposed method accurately detects the perceived OER and, furthermore, that over-exposure correction can be improved by adopting the proposed OER detection method.
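
    The per-pixel decision, flagging a pixel as over-exposed when its saturation falls below an adaptively chosen threshold, can be sketched as below. The linear threshold rule and the constants are hypothetical stand-ins; the paper derives its threshold function from subjective HVS experiments.

```python
import colorsys

def detect_oer(rgb_pixel, base_thresh=0.25):
    """Sketch of saturation-threshold OER detection: a pixel is flagged
    over-exposed when it is bright and nearly desaturated. The adaptive
    rule (threshold shrinking with brightness) is an illustrative stand-in
    for the paper's subjectively derived function."""
    r, g, b = (c / 255.0 for c in rgb_pixel)
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    thresh = base_thresh * v          # brighter pixels tolerate less saturation
    return v > 0.9 and s < thresh
```
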

  11. Local adaptation and matching habitat choice in female barn owls with respect to melanic coloration.

    PubMed

    Dreiss, A N; Antoniazza, S; Burri, R; Fumagalli, L; Sonnay, C; Frey, C; Goudet, J; Roulin, Alexandre

    2012-01-01

    Local adaptation is a major mechanism underlying the maintenance of phenotypic variation in spatially heterogeneous environments. In the barn owl (Tyto alba), dark and pale reddish-pheomelanic individuals are adapted to conditions prevailing in northern and southern Europe, respectively. Using a long-term dataset from Central Europe, we report results consistent with the hypothesis that the different pheomelanic phenotypes are adapted to specific local conditions in females, but not in males. Compared to whitish females, reddish females bred in sites surrounded by more arable fields and less forests. Colour-dependent habitat choice was apparently beneficial. First, whitish females produced more fledglings when breeding in wooded areas, whereas reddish females produced more when breeding in sites with more arable fields. Second, cross-fostering experiments showed that female nestlings grew wings more rapidly when both their foster and biological mothers were of similar colour. The latter result suggests that mothers should particularly produce daughters in environments that best match their own coloration. Accordingly, whiter females produced fewer daughters in territories with more arable fields. In conclusion, females displaying alternative melanic phenotypes bred in habitats providing them with the highest fitness benefits. Although small in magnitude, matching habitat selection and local adaptation may help maintain variation in pheomelanin coloration in the barn owl.

  12. Color imaging of Mars by the High Resolution Imaging Science Experiment (HiRISE)

    USGS Publications Warehouse

    Delamere, W.A.; Tornabene, L.L.; McEwen, A.S.; Becker, K.; Bergstrom, J.W.; Bridges, N.T.; Eliason, E.M.; Gallagher, D.; Herkenhoff, K. E.; Keszthelyi, L.; Mattson, S.; McArthur, G.K.; Mellon, M.T.; Milazzo, M.; Russell, P.S.; Thomas, N.

    2010-01-01

    HiRISE has been producing a large number of scientifically useful color products of Mars and other planetary objects. The three broad spectral bands, coupled with the highly sensitive 14-bit detectors and time delay integration, enable detection of subtle color differences. The very high spatial resolution of HiRISE can augment the mineralogic interpretations based on multispectral (THEMIS) and hyperspectral datasets (TES, OMEGA and CRISM) and thereby enable detailed geologic and stratigraphic interpretations at meter scales. In addition to providing some examples of color images and their interpretation, we describe the processing techniques used to produce them and note some of the minor artifacts in the output. We also provide an example of how HiRISE color products can be effectively used to expand mineral and lithologic mapping provided by CRISM data products that are backed by other spectral datasets. The utility of high quality color data for understanding geologic processes on Mars has been one of the major successes of HiRISE. © 2009 Elsevier Inc.

  13. Radar Image with Color as Height, Lovea, Cambodia

    NASA Technical Reports Server (NTRS)

    2002-01-01

    This image of Lovea, Cambodia, was acquired by NASA's Airborne Synthetic Aperture Radar (AIRSAR). Lovea, the roughly circular feature in the middle-right of the image, rises some 5 meters (16.4 feet) above the surrounding terrain. Lovea is larger than many of the other mound sites, with a diameter greater than 300 meters (984.3 feet). However, it is one of a number of such mounds highlighted by the radar imagery. The present-day village of Lovea does not occupy all of the elevated area. However, at the center of the mound is an ancient spirit post honoring the legendary founder of the village. The mound is surrounded by earthworks and has vestiges of additional curvilinear features. Today, as in the past, these harnessed water during the rainy season, and conserved it during the long dry months of the year.

    The village of Lovea located on the mound was established in pre-Khmer times, probably before 500 A.D. In the lower left portion of the image is a large trapeng and square moat. These are good examples of construction during the historical 9th to 14th Century A.D. Khmer period; construction that honored and protected earlier circular villages. This suggests a cultural and technical continuity between prehistoric circular villages and the immense urban site of Angkor. This connection is one of the significant finds generated by NASA's radar imaging of Angkor. It shows that the city of Angkor was a particularly Khmer construction. The temple forms and water management structures of Angkor were the result of pre-existing Khmer beliefs and methods of water management.

    Image dimensions are approximately 6.3 by 4.7 kilometers (3.9 by 2.9 miles). North is at top. Image brightness is from the C-band (5.6 centimeters, or 2.2 inches) wavelength radar backscatter, which is a measure of how much energy the surface reflects back toward the radar. Color is used to represent elevation contours. One cycle of color represents 20 meters (65.6 feet) of elevation change; that is, going from blue to red to yellow to green and back to blue again corresponds to 20 meters (65.6 feet) of elevation change.

  14. A nonlinear mapping approach to stain normalization in digital histopathology images using image-specific color deconvolution.

    PubMed

    Khan, Adnan Mujahid; Rajpoot, Nasir; Treanor, Darren; Magee, Derek

    2014-06-01

    Histopathology diagnosis is based on visual examination of the morphology of histological sections under a microscope. With the increasing popularity of digital slide scanners, decision support systems based on the analysis of digital pathology images are in high demand. However, computerized decision support systems are fraught with problems that stem from color variations in tissue appearance due to variation in tissue preparation, variation in stain reactivity from different manufacturers/batches, user or protocol variation, and the use of scanners from different manufacturers. In this paper, we present a novel approach to stain normalization in histopathology images. The method is based on nonlinear mapping of a source image to a target image using a representation derived from color deconvolution. Color deconvolution is a method to obtain stain concentration values when the stain matrix, describing how the color is affected by the stain concentration, is given. Rather than relying on standard stain matrices, which may be inappropriate for a given image, we propose the use of a color-based classifier that incorporates a novel stain color descriptor to calculate an image-specific stain matrix. In order to demonstrate the efficacy of the proposed stain matrix estimation and stain normalization methods, they are applied to the problem of tumor segmentation in breast histopathology images. The experimental results suggest that the paradigm of color normalization, as a preprocessing step, can significantly help histological image analysis algorithms to demonstrate stable performance that is insensitive to imaging conditions in general and scanner variations in particular.
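
    Color deconvolution itself, recovering stain concentrations from RGB given a stain matrix, can be sketched via the Beer-Lambert law. The sketch below uses the widely cited Ruifrok & Johnston H&E stain vectors as fixed constants, whereas the paper's point is to estimate an image-specific matrix with a color-based classifier.

```python
import numpy as np

# Standard H&E optical-density stain vectors (Ruifrok & Johnston);
# the paper instead estimates an image-specific matrix per slide.
STAINS = np.array([
    [0.65, 0.70, 0.29],   # hematoxylin
    [0.07, 0.99, 0.11],   # eosin
])

def to_concentrations(rgb, io=255.0):
    """Beer-Lambert: OD = -log(I / I0); concentrations c solve OD = STAINS.T @ c
    in the least-squares sense."""
    od = -np.log(np.clip(np.asarray(rgb, dtype=float), 1, None) / io)
    c, *_ = np.linalg.lstsq(STAINS.T, od, rcond=None)
    return c
```

    Normalization then maps the estimated concentrations back through a target image's stain matrix.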

  15. Color Doppler imaging of the retrobulbar vessels in diabetic retinopathy.

    PubMed

    Pauk-Domańska, Magdalena; Walasik-Szemplińska, Dorota

    2014-03-01

    Diabetes is a metabolic disease characterized by elevated blood glucose level due to impaired insulin secretion and activity. Chronic hyperglycemia leads to functional disorders of numerous organs and to their damage. Vascular lesions belong to the most common late complications of diabetes. Microangiopathic lesions can be found in the eyeball, kidneys and nervous system. Macroangiopathy is associated with coronary and peripheral vessels. Diabetic retinopathy is the most common microangiopathic complication characterized by closure of slight retinal blood vessels and their permeability. Despite intensive research, the pathomechanism that leads to the development and progression of diabetic retinopathy is not fully understood. The examinations used in assessing diabetic retinopathy usually involve imaging of the vessels in the eyeball and the retina. Therefore, the examinations include: fluorescein angiography, optical coherence tomography of the retina, B-mode ultrasound imaging, perimetry and digital retinal photography. There are many papers that discuss the correlations between retrobulbar circulation alterations and progression of diabetic retinopathy based on Doppler sonography. Color Doppler imaging is a non-invasive method enabling measurements of blood flow velocities in small vessels of the eyeball. The most frequently assessed vessels include: the ophthalmic artery, which is the first branch of the internal carotid artery, as well as the central retinal vein and artery, and the posterior ciliary arteries. The analysis of hemodynamic alterations in the retrobulbar vessels may deliver important information concerning circulation in diabetes and help to answer the question whether there is a relation between the progression of diabetic retinopathy and the changes observed in blood flow in the vessels of the eyeball. This paper presents the overview of literature regarding studies on blood flow in the vessels of the eyeball in patients with diabetic

  16. Quantum Color Image Encryption Algorithm Based on A Hyper-Chaotic System and Quantum Fourier Transform

    NASA Astrophysics Data System (ADS)

    Tan, Ru-Chao; Lei, Tong; Zhao, Qing-Min; Gong, Li-Hua; Zhou, Zhi-Hong

    2016-12-01

    To improve on the slow processing speed of classical image encryption algorithms and to enhance the security of private color images, a new quantum color image encryption algorithm based on a hyper-chaotic system is proposed, in which the sequences generated by Chen's hyper-chaotic system are used to scramble and diffuse the three components of the original color image. Subsequently, the quantum Fourier transform is exploited to complete the encryption. Numerical simulations show that the presented quantum color image encryption algorithm possesses a large key space to resist illegal attacks, sensitive dependence on initial keys, a uniform distribution of gray values in the encrypted image, and weak correlation between adjacent pixels in the cipher-image.

  17. A location method for vehicle license plate based on color image and black-white texture

    NASA Astrophysics Data System (ADS)

    Li, Gang; Liu, Chang; He, Mingquan; Huang, Xiyue

    2007-12-01

    This paper presents an effective location algorithm that employs color features and black-white texture analysis to extract a vehicle license plate from an image with a complicated background. Based on the background color of the license plate in RGB space, we transform the RGB image into a grayscale image that strengthens the license plate color, then binarize the intensity image so that the license plate region stands out. Regions whose color is merely similar to the license plate are filtered out by analyzing the black-white texture characteristic. Tests show that this location method is largely insensitive to factors such as illumination, license plate position, size and angle, car position, and image background, while achieving high speed, good accuracy and a wide application area.
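
    The color-strengthening grayscale conversion and binarization steps can be sketched per pixel. The weighting below, emphasizing a blue plate background (common on Chinese plates), and the threshold are hypothetical; the abstract does not give the paper's exact mapping.

```python
def plate_emphasis(rgb_pixel):
    """Hypothetical grayscale mapping that strengthens a blue plate
    background relative to everything else. Output clamped to 0..255."""
    r, g, b = rgb_pixel
    score = b - 0.5 * (r + g)         # high for blue-dominant pixels
    return max(0, min(255, int(score)))

def binarize(gray, thresh=80):
    """Threshold the emphasized intensity to a binary plate mask value."""
    return 255 if gray >= thresh else 0
```

    Texture analysis on the resulting binary image (the alternating black-white character strokes) then rejects blue regions that are not plates.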

  18. Biological versus electronic adaptive coloration: how can one inform the other?

    PubMed

    Kreit, Eric; Mäthger, Lydia M; Hanlon, Roger T; Dennis, Patrick B; Naik, Rajesh R; Forsythe, Eric; Heikenfeld, Jason

    2013-01-06

    Adaptive reflective surfaces have been a challenge for both electronic paper (e-paper) and biological organisms. Multiple colours, contrast, polarization, reflectance, diffusivity and texture must all be controlled simultaneously without optical losses in order to fully replicate the appearance of natural surfaces and vividly communicate information. This review merges the frontiers of knowledge for both biological adaptive coloration, with a focus on cephalopods, and synthetic reflective e-paper within a consistent framework of scientific metrics. Currently, the highest performance approach for both nature and technology uses colourant transposition. Three outcomes are envisioned from this review: reflective display engineers may gain new insights from millions of years of natural selection and evolution; biologists will benefit from understanding the types of mechanisms, characterization and metrics used in synthetic reflective e-paper; all scientists will gain a clearer picture of the long-term prospects for capabilities such as adaptive concealment and signaling.

  19. Biological versus electronic adaptive coloration: how can one inform the other?

    PubMed Central

    Kreit, Eric; Mäthger, Lydia M.; Hanlon, Roger T.; Dennis, Patrick B.; Naik, Rajesh R.; Forsythe, Eric; Heikenfeld, Jason

    2013-01-01

    Adaptive reflective surfaces have been a challenge for both electronic paper (e-paper) and biological organisms. Multiple colours, contrast, polarization, reflectance, diffusivity and texture must all be controlled simultaneously without optical losses in order to fully replicate the appearance of natural surfaces and vividly communicate information. This review merges the frontiers of knowledge for both biological adaptive coloration, with a focus on cephalopods, and synthetic reflective e-paper within a consistent framework of scientific metrics. Currently, the highest performance approach for both nature and technology uses colourant transposition. Three outcomes are envisioned from this review: reflective display engineers may gain new insights from millions of years of natural selection and evolution; biologists will benefit from understanding the types of mechanisms, characterization and metrics used in synthetic reflective e-paper; all scientists will gain a clearer picture of the long-term prospects for capabilities such as adaptive concealment and signalling. PMID:23015522

  20. The Non-linear Logarithm Method (NLLM) to adjust the color deviation of fluorescent images

    NASA Astrophysics Data System (ADS)

    Chen, Yi-Ju; Chang, Han-Chao; Huang, Kuo-Cheng; Chang, Chung-Hsing

    2013-06-01

    Fluorescent objects can be excited by ultraviolet (UV) light and emit light of a specific, longer wavelength in biomedical experiments. However, UV light causes a blue-violet color deviation in fluorescent images. Therefore, this study presents a color deviation adjustment method that recovers the color of a fluorescent image to the hue observed under normal white light, while retaining the UV-excited fluorescent area in the reconstructed image. Based on the Gray World Method, we propose a non-linear logarithm method (NLLM) to restore the color deviation of fluorescent images, captured with a yellow filter attached to the front of the digital camera lens. The luminance data are separated into red, green and blue (R/G/B) components, from which the appropriate intensity of each chromatic channel is determined. The fluorescent image data are then transformed into the CIE 1931 color space, where the distribution of x-y coordinates is used to evaluate the quality of the reconstructed images. Experiments show that the proposed NLLM recovers more than 90% of the color deviation, and the reconstructed images approach the real color of the fluorescent object as illuminated by white light.
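
    The Gray World baseline that the NLLM builds on can be sketched directly: scale each channel so the channel means become equal. The NLLM itself applies a non-linear logarithmic variant whose exact form is not given in the abstract.

```python
def gray_world_gains(pixels):
    """Gray World assumption: the average scene color is achromatic, so
    per-channel gains equalize the R, G, B channel means."""
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    gray = sum(means) / 3
    return [gray / m for m in means]

def apply_gains(pixel, gains):
    """Apply per-channel gains, clamping to the 8-bit range."""
    return tuple(min(255, round(v * g)) for v, g in zip(pixel, gains))
```
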

  1. Hyperspectral imaging of cuttlefish camouflage indicates good color match in the eyes of fish predators.

    PubMed

    Chiao, Chuan-Chin; Wickiser, J Kenneth; Allen, Justine J; Genter, Brock; Hanlon, Roger T

    2011-05-31

    Camouflage is a widespread phenomenon throughout nature and an important antipredator tactic in natural selection. Many visual predators have keen color perception, and thus camouflage patterns should provide some degree of color matching in addition to other visual factors such as pattern, contrast, and texture. Quantifying camouflage effectiveness in the eyes of the predator is a challenge from the perspectives of both biology and optical imaging technology. Here we take advantage of hyperspectral imaging (HSI), which records full-spectrum light data, to simultaneously visualize color match and pattern match in the spectral and the spatial domains, respectively. Cuttlefish can dynamically camouflage themselves on any natural substrate and, despite their colorblindness, produce body patterns that appear to have high-fidelity color matches to the substrate when viewed directly by humans or with RGB images. Live camouflaged cuttlefish on natural backgrounds were imaged using HSI, and subsequent spectral analysis revealed that most reflectance spectra of individual cuttlefish and substrates were similar, rendering the color match possible. Modeling color vision of potential di- and trichromatic fish predators of cuttlefish corroborated the spectral match analysis and demonstrated that camouflaged cuttlefish show good color match as well as pattern match in the eyes of fish predators. These findings (i) indicate the strong potential of HSI technology to enhance studies of biological coloration and (ii) provide supporting evidence that cuttlefish can produce color-coordinated camouflage on natural substrates despite lacking color vision.

  2. Hyperspectral imaging of cuttlefish camouflage indicates good color match in the eyes of fish predators

    PubMed Central

    Chiao, Chuan-Chin; Wickiser, J. Kenneth; Allen, Justine J.; Genter, Brock; Hanlon, Roger T.

    2011-01-01

    Camouflage is a widespread phenomenon throughout nature and an important antipredator tactic in natural selection. Many visual predators have keen color perception, and thus camouflage patterns should provide some degree of color matching in addition to other visual factors such as pattern, contrast, and texture. Quantifying camouflage effectiveness in the eyes of the predator is a challenge from the perspectives of both biology and optical imaging technology. Here we take advantage of hyperspectral imaging (HSI), which records full-spectrum light data, to simultaneously visualize color match and pattern match in the spectral and the spatial domains, respectively. Cuttlefish can dynamically camouflage themselves on any natural substrate and, despite their colorblindness, produce body patterns that appear to have high-fidelity color matches to the substrate when viewed directly by humans or with RGB images. Live camouflaged cuttlefish on natural backgrounds were imaged using HSI, and subsequent spectral analysis revealed that most reflectance spectra of individual cuttlefish and substrates were similar, rendering the color match possible. Modeling color vision of potential di- and trichromatic fish predators of cuttlefish corroborated the spectral match analysis and demonstrated that camouflaged cuttlefish show good color match as well as pattern match in the eyes of fish predators. These findings (i) indicate the strong potential of HSI technology to enhance studies of biological coloration and (ii) provide supporting evidence that cuttlefish can produce color-coordinated camouflage on natural substrates despite lacking color vision. PMID:21576487

  3. Plasmonics-Based Multifunctional Electrodes for Low-Power-Consumption Compact Color-Image Sensors.

    PubMed

    Lin, Keng-Te; Chen, Hsuen-Li; Lai, Yu-Sheng; Chi, Yi-Min; Chu, Ting-Wei

    2016-03-01

    High pixel density, efficient color splitting, a compact structure, superior quantum efficiency, and low power consumption are all important features for contemporary color-image sensors. In this study, we developed a surface plasmonics-based color-image sensor displaying a high photoelectric response, a microlens-free structure, and a zero-bias working voltage. Our compact sensor comprised only (i) a multifunctional electrode based on a single-layer structured aluminum (Al) film and (ii) an underlying silicon (Si) substrate. This approach significantly simplifies the device structure and fabrication processes; for example, the red, green, and blue color pixels can be prepared simultaneously in a single lithography step. Moreover, such Schottky-based plasmonic electrodes perform multiple functions, including color splitting, optical-to-electrical signal conversion, and photogenerated carrier collection for color-image detection. Our multifunctional, electrode-based device could also avoid the interference phenomenon that degrades the color-splitting spectra found in conventional color-image sensors. Furthermore, the device took advantage of the near-field surface plasmonic effect around the Al-Si junction to enhance the optical absorption of Si, resulting in a significant photoelectric current output even under low-light surroundings and zero bias voltage. These plasmonic Schottky-based color-image devices could convert a photocurrent directly into a photovoltage and provided sufficient voltage output for color-image detection even under a light intensity of only several femtowatts per square micrometer. Unlike conventional color image devices, using voltage as the output signal decreases the area of the periphery read-out circuit because it does not require a current-to-voltage conversion capacitor or its related circuit. Therefore, this strategy has great potential for direct integration with complementary metal-oxide-semiconductor (CMOS)-compatible circuits.

  4. Color Image of Death Valley, California from SIR-C

    NASA Technical Reports Server (NTRS)

    1999-01-01

    This radar image shows the area of Death Valley, California and the different surface types in the area. Radar is sensitive to surface roughness, with rough areas showing up brighter than smooth areas, which appear dark. This is seen in the contrast between the bright mountains that surround the dark, smooth basins and valleys of Death Valley. The image shows Furnace Creek alluvial fan (green crescent feature) at the far right, and the sand dunes near Stove Pipe Wells at the center. Alluvial fans are gravel deposits that wash down from the mountains over time. Several other alluvial fans (semicircular features) can be seen along the mountain fronts in this image. The dark wrench-shaped feature between Furnace Creek fan and the dunes is a smooth flood-plain which encloses Cottonball Basin. Elevations in the valley range from 70 meters (230 feet) below sea level, the lowest in the United States, to more than 3,300 meters (10,800 feet) above sea level. Scientists are using these radar data to help answer a number of different questions about Earth's geology, including how alluvial fans form and change through time in response to climatic changes and earthquakes. The image is centered at 36.629 degrees north latitude, 117.069 degrees west longitude. Colors in the image represent different radar channels as follows: red = L-band horizontally transmitted, horizontally received (LHH); green = L-band horizontally transmitted, vertically received (LHV); and blue = C-band horizontally transmitted, vertically received (CHV).

    SIR-C/X-SAR is part of NASA's Mission to Planet Earth. The radars illuminate Earth with microwaves, allowing detailed observations at any time, regardless of weather or sunlight conditions. SIR-C/X-SAR uses three microwave wavelengths: L-band (24 cm), C-band (6 cm) and X-band (3 cm). The multi-frequency data will be used by the international scientific community to better understand the global environment and how it is changing. The SIR-C/X-SAR data are complemented by aircraft and ground studies.

  5. A novel color image encryption scheme using alternate chaotic mapping structure

    NASA Astrophysics Data System (ADS)

    Wang, Xingyuan; Zhao, Yuanyuan; Zhang, Huili; Guo, Kang

    2016-07-01

    This paper proposes a color image encryption algorithm using an alternate chaotic mapping structure. Initially, the R, G and B components of the image are used to form a matrix. One-dimensional and two-dimensional logistic maps are then used to generate chaotic matrices, and the two chaotic mappings are iterated alternately to permute the matrix. At every iteration, an XOR operation encrypts the plain-image matrix, followed by a further transformation to diffuse it. Finally, the encrypted color image is obtained from the confused matrix. Theoretical analysis and experimental results show that the cryptosystem is secure and practical, and that it is suitable for encrypting color images.
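
    The core chaotic-keystream idea can be sketched with a single 1-D logistic map driving an XOR cipher. This is a minimal sketch only: the paper alternates 1-D and 2-D logistic maps and adds permutation and diffusion stages on top, and the parameter `mu` and byte-quantization rule below are illustrative.

```python
def logistic_stream(x0, n, mu=3.99):
    """Generate n keystream bytes from the 1-D logistic map x -> mu*x*(1-x),
    seeded by the secret initial value x0 in (0, 1)."""
    x, out = x0, []
    for _ in range(n):
        x = mu * x * (1 - x)
        out.append(int(x * 256) % 256)   # quantize state to a byte
    return out

def xor_cipher(data, key0):
    """XOR data bytes with the chaotic keystream; the same call decrypts."""
    ks = logistic_stream(key0, len(data))
    return bytes(d ^ k for d, k in zip(data, ks))
```

    Because XOR is its own inverse, applying the cipher twice with the same key recovers the plaintext, and tiny changes in `key0` yield a completely different keystream (sensitive dependence on the initial key).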

  6. Do common mechanisms of adaptation mediate color discrimination and appearance? Uniform backgrounds.

    PubMed

    Hillis, James M; Brainard, David H

    2005-10-01

    Color vision is useful for detecting surface boundaries and identifying objects. Are the signals used to perform these two functions processed by common mechanisms, or has the visual system optimized its processing separately for each task? We measured the effect of mean chromaticity and luminance on color discriminability and on color appearance under well-matched stimulus conditions. In the discrimination experiments, a pedestal spot was presented in one interval and a pedestal + test in a second. Observers indicated which interval contained the test. In the appearance experiments, observers matched the appearance of test spots across a change in background. We analyzed the data using a variant of Fechner's proposal, that the rate of apparent stimulus change is proportional to visual sensitivity. We found that saturating visual response functions together with a model of adaptation that included multiplicative gain control and a subtractive term accounted for data from both tasks. This result suggests that effects of the contexts we studied on color appearance and discriminability are controlled by the same underlying mechanism.

  7. Do common mechanisms of adaptation mediate color discrimination and appearance? Uniform backgrounds

    NASA Astrophysics Data System (ADS)

    Hillis, James M.; Brainard, David H.

    2005-10-01

    Color vision is useful for detecting surface boundaries and identifying objects. Are the signals used to perform these two functions processed by common mechanisms, or has the visual system optimized its processing separately for each task? We measured the effect of mean chromaticity and luminance on color discriminability and on color appearance under well-matched stimulus conditions. In the discrimination experiments, a pedestal spot was presented in one interval and a pedestal + test in a second. Observers indicated which interval contained the test. In the appearance experiments, observers matched the appearance of test spots across a change in background. We analyzed the data using a variant of Fechner's proposal, that the rate of apparent stimulus change is proportional to visual sensitivity. We found that saturating visual response functions together with a model of adaptation that included multiplicative gain control and a subtractive term accounted for data from both tasks. This result suggests that effects of the contexts we studied on color appearance and discriminability are controlled by the same underlying mechanism.
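    As a rough sketch of the model class described above (the saturating-response form and the parameter names are illustrative assumptions, not the authors' fitted model), a Naka-Rushton-style response preceded by multiplicative gain and a subtractive adaptation term can be written as:

```python
# Hedged sketch: saturating visual response with multiplicative gain and a
# subtractive term, both of which would be set by the adapting background.

def response(intensity, gain, subtractive, semi_sat=1.0, r_max=1.0):
    """Saturating (Naka-Rushton-style) response to an adapted input."""
    x = max(gain * intensity - subtractive, 0.0)
    return r_max * x / (x + semi_sat)
```

    Under Fechner's proposal as used in the paper, discrimination thresholds follow the slope of this function, while appearance matches equate response values across backgrounds, which is why one mechanism can account for both tasks.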

  8. Feature-Motivated Simplified Adaptive PCNN-Based Medical Image Fusion Algorithm in NSST Domain.

    PubMed

    Ganasala, Padma; Kumar, Vinod

    2016-02-01

    Multimodality medical image fusion plays a vital role in diagnosis, treatment planning, and follow-up studies of various diseases. It provides a composite image containing critical information from the source images that is required for better localization and definition of different organs and lesions. In state-of-the-art image fusion methods based on the nonsubsampled shearlet transform (NSST) and pulse-coupled neural network (PCNN), authors have used the normalized coefficient value to motivate the PCNN processing of both the low-frequency (LF) and high-frequency (HF) sub-bands. This blurs the fused image and decreases its contrast. The main objective of this work is to design an image fusion method that yields a fused image with better contrast and more detailed information, suitable for clinical use. We propose a novel image fusion method utilizing feature-motivated adaptive PCNN in the NSST domain for fusion of anatomical images. The basic PCNN model is simplified, and an adaptive linking strength is used. Different features are used to motivate the PCNN processing of the LF and HF sub-bands. The proposed method is extended to the fusion of a functional image with an anatomical image in the improved nonlinear intensity hue and saturation (INIHS) color model. Extensive fusion experiments have been performed on CT-MRI and SPECT-MRI datasets. Visual and quantitative analysis of the experimental results proved that the proposed method provides a satisfactory fusion outcome compared to other image fusion methods.

  9. Sparse Representation-Based Image Quality Index With Adaptive Sub-Dictionaries.

    PubMed

    Li, Leida; Cai, Hao; Zhang, Yabin; Lin, Weisi; Kot, Alex C; Sun, Xingming

    2016-08-01

    Distortions cause structural changes in digital images, leading to degraded visual quality. Dictionary-based sparse representation has been widely studied recently due to its ability to extract inherent image structures; meanwhile, it can extract image features with slightly higher-level semantics. Intuitively, sparse representation can be used for image quality assessment, because visible distortions can cause significant changes to the sparse features. In this paper, a new sparse representation-based image quality assessment model is proposed based on the construction of adaptive sub-dictionaries. An overcomplete dictionary trained on natural images is employed to capture the structure changes between the reference and distorted images by sparse feature extraction via adaptive sub-dictionary selection. Based on the observation that image sparse features are invariant to weak degradations and that perceived image quality is generally influenced by diverse factors, three auxiliary quality features are added: gradient, color, and luminance information. The proposed method is not sensitive to the training images, so a universal dictionary can be adopted for quality evaluation. Extensive experiments on five public image quality databases demonstrate that the proposed method produces state-of-the-art results and performs consistently well across different databases.

  10. Color calibration and fusion of lens-free and mobile-phone microscopy images for high-resolution and accurate color reproduction

    NASA Astrophysics Data System (ADS)

    Zhang, Yibo; Wu, Yichen; Zhang, Yun; Ozcan, Aydogan

    2016-06-01

    Lens-free holographic microscopy can achieve wide-field imaging in a cost-effective and field-portable setup, making it a promising technique for point-of-care and telepathology applications. However, due to relatively narrow-band sources used in holographic microscopy, conventional colorization methods that use images reconstructed at discrete wavelengths, corresponding to e.g., red (R), green (G) and blue (B) channels, are subject to color artifacts. Furthermore, these existing RGB colorization methods do not match the chromatic perception of human vision. Here we present a high-color-fidelity and high-resolution imaging method, termed “digital color fusion microscopy” (DCFM), which fuses a holographic image acquired at a single wavelength with a color-calibrated image taken by a low-magnification lens-based microscope using a wavelet transform-based colorization method. We demonstrate accurate color reproduction of DCFM by imaging stained tissue sections. In particular we show that a lens-free holographic microscope in combination with a cost-effective mobile-phone-based microscope can generate color images of specimens, performing very close to a high numerical-aperture (NA) benchtop microscope that is corrected for color distortions and chromatic aberrations, also matching the chromatic response of human vision. This method can be useful for wide-field imaging needs in telepathology applications and in resource-limited settings, where whole-slide scanning microscopy systems are not available.

  11. Color calibration and fusion of lens-free and mobile-phone microscopy images for high-resolution and accurate color reproduction

    PubMed Central

    Zhang, Yibo; Wu, Yichen; Zhang, Yun; Ozcan, Aydogan

    2016-01-01

    Lens-free holographic microscopy can achieve wide-field imaging in a cost-effective and field-portable setup, making it a promising technique for point-of-care and telepathology applications. However, due to relatively narrow-band sources used in holographic microscopy, conventional colorization methods that use images reconstructed at discrete wavelengths, corresponding to e.g., red (R), green (G) and blue (B) channels, are subject to color artifacts. Furthermore, these existing RGB colorization methods do not match the chromatic perception of human vision. Here we present a high-color-fidelity and high-resolution imaging method, termed “digital color fusion microscopy” (DCFM), which fuses a holographic image acquired at a single wavelength with a color-calibrated image taken by a low-magnification lens-based microscope using a wavelet transform-based colorization method. We demonstrate accurate color reproduction of DCFM by imaging stained tissue sections. In particular we show that a lens-free holographic microscope in combination with a cost-effective mobile-phone-based microscope can generate color images of specimens, performing very close to a high numerical-aperture (NA) benchtop microscope that is corrected for color distortions and chromatic aberrations, also matching the chromatic response of human vision. This method can be useful for wide-field imaging needs in telepathology applications and in resource-limited settings, where whole-slide scanning microscopy systems are not available. PMID:27283459
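    The paper's wavelet-transform-based colorization is more elaborate than can be shown here; as a minimal stand-in for the core idea of fusing a high-resolution monochrome image with color from a lower-quality color image, one can keep the monochrome value as luminance and borrow chroma in a simple YCbCr decomposition (an assumption of this sketch, not the authors' method):

```python
# Toy luminance/chroma fusion: the high-resolution hologram supplies
# luminance (Y), the color-calibrated mobile-phone image supplies Cb/Cr.
# BT.601-style conversion coefficients, values in 0-255.

def rgb_to_ycbcr(r, g, b):
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 128
    return y, cb, cr

def ycbcr_to_rgb(y, cb, cr):
    r = y + 1.402 * (cr - 128)
    g = y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128)
    b = y + 1.772 * (cb - 128)
    return r, g, b

def fuse_pixel(mono_luma, color_rgb):
    """Replace the color image's luminance with the mono image's value."""
    _, cb, cr = rgb_to_ycbcr(*color_rgb)
    return ycbcr_to_rgb(mono_luma, cb, cr)
```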

  12. Probing the functions of contextual modulation by adapting images rather than observers

    PubMed Central

    Webster, Michael A.

    2014-01-01

    Countless visual aftereffects have illustrated how visual sensitivity and perception can be biased by adaptation to the recent temporal context. This contextual modulation has been proposed to serve a variety of functions, but the actual benefits of adaptation remain uncertain. We describe an approach we have recently developed for exploring these benefits by adapting images instead of observers, to simulate how images should appear under theoretically optimal states of adaptation. This allows the long-term consequences of adaptation to be evaluated in ways that are difficult to probe by adapting observers, and provides a common framework for understanding how visual coding changes when the environment or the observer changes, or for evaluating how the effects of temporal context depend on different models of visual coding or the adaptation processes. The approach is illustrated for the specific case of adaptation to color, for which the initial neural coding and adaptation processes are relatively well understood, but can in principle be applied to examine the consequences of adaptation for any stimulus dimension. A simple calibration that adjusts each neuron’s sensitivity according to the stimulus level it is exposed to is sufficient to normalize visual coding and generate a host of benefits, from increased efficiency to perceptual constancy to enhanced discrimination. This temporal normalization may also provide an important precursor for the effective operation of contextual mechanisms operating across space or feature dimensions. To the extent that the effects of adaptation can be predicted, images from new environments could be “pre-adapted” to match them to the observer, eliminating the need for observers to adapt. PMID:25281412
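    The "simple calibration" the abstract describes can be sketched as a von-Kries-style normalization (an illustrative toy, not the paper's full model): each channel's sensitivity is scaled so that its average response to the current environment equals a fixed target level.

```python
# Toy per-channel normalization: adjust each channel's gain according to
# the mean stimulus level it is exposed to, as in von Kries adaptation.

def adapt_channels(pixels, target=0.5):
    """pixels: list of (L, M, S)-like triplets with values in (0, 1]."""
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    gains = [target / m if m > 0 else 1.0 for m in means]
    return [tuple(v * g for v, g in zip(p, gains)) for p in pixels]
```

    Applying such gains to the pixels of an image rather than to an observer's mechanisms is exactly the "adapting images instead of observers" move: the output simulates how the scene should appear under the theoretically optimal adapted state.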

  13. Adaptive MOEMS mirrors for medical imaging

    NASA Astrophysics Data System (ADS)

    Fayek, Reda; Ibrahim, Hany

    2007-03-01

    This paper presents micro-electro-mechanical-systems (MEMS) optical elements with high angular deflection arranged in arrays to perform dynamic laser beam focusing and scanning. Each element selectively addresses a portion of the laser beam. These devices are useful in medical and research applications including laser-scanning microscopy, confocal microscopes, and laser capture micro-dissection. Such laser-based imaging and diagnostic instruments involve complex laser beam manipulations. These often require compound lenses and mirrors that introduce misalignment, attenuation, distortion and light scatter. Instead of using expensive spherical and aspherical lenses and/or mirrors for sophisticated laser beam manipulations, we propose scalable adaptive micro-opto-electro-mechanical-systems (MOEMS) arrays to recapture optical performance and compensate for aberrations, distortions and imperfections introduced by inexpensive optics. A high-density array of small, individually addressable MOEMS elements is similar to a Fresnel mirror. A scalable 2D array of micro-mirrors approximates spherical or arbitrary surface mirrors of different apertures. A proof-of-concept prototype was built using PolyMUMPs™ due to its reliability, low cost and limited post-processing requirements. Low-density arrays (2×2 arrays of square elements, 250×250 μm each) were designed, fabricated, and tested. Electrostatic comb fingers actuate the edges of the square mirrors with a low actuation voltage of 20 V - 50 V. CoventorWare™ was used for the design, 3D modeling and motion simulations. Initial results are encouraging. The array is adaptive, configurable and scalable with low actuation voltage and a large tuning range. Individual element addressability would allow versatile uses. Future research will increase deflection angles and maximize reflective area.

  14. Color-dependent motion illusions in stationary images and their phenomenal dimorphism.

    PubMed

    Kitaoka, Akiyoshi

    2014-01-01

    The color-dependent motion illusion in stationary images--a special type of the Fraser-Wilcox illusion--is introduced and discussed. The direction of illusory motion changes depending on whether the image is of high or low luminance and whether the room is bright or dark. This dimorphism of illusion was confirmed by surveys. It is suggested that two different spatial arrangements of color can produce the motion illusion. One is a spatial arrangement where long- and short-wavelength color regions sandwich a darker strip; the other is where the same color regions sandwich a brighter strip.

  15. An effective image classification method with the fusion of invariant feature and a new color descriptor

    NASA Astrophysics Data System (ADS)

    Mansourian, Leila; Taufik Abdullah, Muhamad; Nurliyana Abdullah, Lili; Azman, Azreen; Mustaffa, Mas Rina

    2017-02-01

    Pyramid Histogram of Words (PHOW) combines Bag of Visual Words (BoVW) with spatial pyramid matching (SPM) in order to add location information to the extracted features. However, PHOW variants extracted from various color spaces do not extract color information individually; they discard color information, which is an important characteristic of any image and is motivated by human vision. This article concatenates the PHOW Multi-Scale Dense Scale Invariant Feature Transform (MSDSIFT) histogram with a proposed color histogram to improve the performance of existing image classification algorithms. Performance evaluation on several datasets shows that the new approach outperforms other existing, state-of-the-art methods.

  16. A blind dual color images watermarking based on IWT and state coding

    NASA Astrophysics Data System (ADS)

    Su, Qingtang; Niu, Yugang; Liu, Xianxi; Zhu, Yu

    2012-04-01

    In this paper, a state-coding-based blind watermarking algorithm is proposed to embed a color image watermark into a color host image. State coding, which makes the state code of a data set equal to the hidden watermark information, is introduced. During embedding, the Integer Wavelet Transform (IWT) and the state-coding rules are used to embed the R, G and B components of the color watermark into the Y, Cr and Cb components of the color host image. The same rules are used to extract the watermark from the watermarked image without resorting to the original watermark or the original host image. Experimental results show that the proposed algorithm not only meets the demands of invisibility and robustness of the watermark, but also performs well compared with the other methods considered in this work.
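    The state-coding rules themselves are not reproducible from the abstract, but the reversible integer Haar (S-) transform that IWT-based embedding relies on, plus a toy LSB embed into a low-pass coefficient (a placeholder for the actual rules), can be sketched as:

```python
# One level of the reversible integer Haar (S-) transform: integer in,
# integer out, exactly invertible, so watermark bits survive roundtrips.

def haar_forward(pairs):
    """Forward transform on (a, b) pixel pairs -> (low, high) pairs."""
    return [((a + b) // 2, a - b) for a, b in pairs]

def haar_inverse(coeffs):
    out = []
    for low, high in coeffs:
        a = low + (high + 1) // 2    # floor division handles negatives
        out.append((a, a - high))
    return out

def embed_bit(low, bit):
    """Toy embed: force the low-pass coefficient's LSB to the bit."""
    return (low & ~1) | bit
```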

  17. Private anonymous fingerprinting for color images in the wavelet domain

    NASA Astrophysics Data System (ADS)

    Abdul, W.; Gaborit, P.; Carré, P.

    2010-01-01

    An online buyer of multimedia content does not want to reveal his identity or his choice of multimedia content whereas the seller or owner of the content does not want the buyer to further distribute the content illegally. To address these issues we present a new private anonymous fingerprinting protocol. It is based on superposed sending for communication security, group signature for anonymity and traceability and single database private information retrieval (PIR) to allow the user to get an element of the database without giving any information about the acquired element. In the presence of a semi-honest model, the protocol is implemented using a blind, wavelet based color image watermarking scheme. The main advantage of the proposed protocol is that both the user identity and the acquired database element are unknown to any third party and in the case of piracy, the pirate can be identified using the group signature scheme. The robustness of the watermarking scheme against Additive White Gaussian Noise is also shown.

  18. Application of the airborne ocean color imager for commercial fishing

    NASA Technical Reports Server (NTRS)

    Wrigley, Robert C.

    1993-01-01

    The objective of the investigation was to develop a commercial remote sensing system for providing near-real-time data (within one day) in support of commercial fishing operations. The Airborne Ocean Color Imager (AOCI) had been built for NASA by Daedalus Enterprises, Inc., but it needed certain improvements, data processing software, and a delivery system to make it into a commercial system for fisheries. Two products were developed to support this effort: the AOCI with its associated processing system and an information service for both commercial and recreational fisheries to be created by Spectro Scan, Inc. The investigation achieved all technical objectives: improving the AOCI, creating software for atmospheric correction and bio-optical output products, georeferencing the output products, and creating a delivery system to get those products into the hands of commercial and recreational fishermen in near-real-time. The first set of business objectives involved Daedalus Enterprises and also were achieved: they have an improved AOCI and new data processing software with a set of example data products for fisheries applications to show their customers. Daedalus' marketing activities showed the need for simplification of the product for fisheries, but they successfully marketed the current version to an Italian consortium. The second set of business objectives tasked Spectro Scan to provide an information service and they could not be achieved because Spectro Scan was unable to obtain necessary venture capital to start up operations.

  19. Voyager 2 Color Image of Enceladus, Almost Full Disk

    NASA Technical Reports Server (NTRS)

    1990-01-01

    This color Voyager 2 image mosaic shows the water-ice-covered surface of Enceladus, one of Saturn's icy moons. Enceladus' diameter of just 500 km would fit across the state of Arizona, yet despite its small size Enceladus exhibits one of the most interesting surfaces of all the icy satellites. Enceladus reflects about 90% of the incident sunlight (about like fresh-fallen snow), placing it among the most reflective objects in the Solar System. Several geologic terrains have superposed crater densities that span a factor of at least 500, thereby indicating huge differences in the ages of these terrains. It is possible that the high reflectivity of Enceladus' surface results from continuous deposition of icy particles from Saturn's E-ring, which in fact may originate from icy volcanoes on Enceladus' surface. Some terrains are dominated by sinuous mountain ridges from 1 to 2 km high (3300 to 6600 feet), whereas other terrains are scarred by linear cracks, some of which show evidence for possible sideways fault motion such as that of California's infamous San Andreas fault. Some terrains appear to have formed by separation of icy plates along cracks, and other terrains are exceedingly smooth at the resolution of this image. The implication carried by Enceladus' surface is that this tiny ice ball has been geologically active and perhaps partially liquid in its interior for much of its history. The heat engine that powers geologic activity here is thought to be elastic deformation caused by tides induced by Enceladus' orbital motion around Saturn and the motion of another moon, Dione.

  20. Mars Color Imager (MARCI) on the Mars Climate Orbiter

    USGS Publications Warehouse

    Malin, M.C.; Bell, J.F.; Calvin, W.; Clancy, R.T.; Haberle, R.M.; James, P.B.; Lee, S.W.; Thomas, P.C.; Caplinger, M.A.

    2001-01-01

    The Mars Color Imager, or MARCI, experiment on the Mars Climate Orbiter (MCO) consists of two cameras with unique optics and identical focal plane assemblies (FPAs), Data Acquisition System (DAS) electronics, and power supplies. Each camera is characterized by small physical size and mass (???6 x 6 x 12 cm, including baffle; <500 g), low power requirements (<2.5 W, including power supply losses), and high science performance (1000 x 1000 pixel, low noise). The Wide Angle (WA) camera will have the capability to map Mars in five visible and two ultraviolet spectral bands at a resolution of better than 8 km/pixel under the worst case downlink data rate. Under better downlink conditions the WA will provide kilometer-scale global maps of atmospheric phenomena such as clouds, hazes, dust storms, and the polar hood. Limb observations will provide additional detail on atmospheric structure at 1/3 scale-height resolution. The Medium Angle (MA) camera is designed to study selected areas of Mars at regional scale. From 400 km altitude its 6?? FOV, which covers ???40 km at 40 m/pixel, will permit all locations on the planet except the poles to be accessible for image acquisitions every two mapping cycles (roughly 52 sols). Eight spectral channels between 425 and 1000 nm provide the ability to discriminate both atmospheric and surface features on the basis of composition. The primary science objectives of MARCI are to (1) observe Martian atmospheric processes at synoptic scales and mesoscales, (2) study details of the interaction of the atmosphere with the surface at a variety of scales in both space and time, and (3) examine surface features characteristic of the evolution of the Martian climate over time. MARCI will directly address two of the three high-level goals of the Mars Surveyor Program: Climate and Resources. Life, the third goal, will be addressed indirectly through the environmental factors associated with the other two goals. Copyright 2001 by the American

  1. Development of wavelength-changeable multiband color-mixing imaging device and its application

    NASA Astrophysics Data System (ADS)

    Ding, Fujian; Chen, Yud-Ren; Chao, Kaunglin; Chan, Diane E.

    2007-02-01

    Previously, we showed that two- and three-band color-mixing techniques could be used to achieve results optically equivalent to two- and three-band ratios that are normally implemented using multispectral imaging systems, for enhancing identification of single target types against a background and for separation of multiple targets by color or contrast. In this paper, a prototype of a wavelength-changeable two- and three-band color-mixing device is presented and its application is demonstrated. The wavelength-changeable device uses changeable central wavelength bandpass filters and various filter arrangements. The experiments showed that a color-mixing technique implemented in a pair of binoculars coupled with an imager could greatly enhance target identification of color-blindness test cards with hidden numbers and figures as the targets. Target identification of color blindness cards was greatly improved by using two-band color-mixing with filters at 620 nm and 650 nm, which were selected based on the criterion of uniform background. Target identification of a different set of color blindness test cards was also improved using three-band color-mixing with filters at 450 nm, 520 nm, and 632 nm, which were selected based on the criterion of maximum chromaticness difference. These experiments show that color-mixing techniques can significantly enhance electronic imaging and visual inspection.

  2. Iterative color constancy with temporal filtering for an image sequence with no relative motion between the camera and the scene.

    PubMed

    Simão, Josemar; Jörg Andreas Schneebeli, Hans; Vassallo, Raquel Frizera

    2015-11-01

    Color constancy is the ability to perceive the color of a surface as invariant even under changing illumination. In outdoor applications, such as mobile robot navigation or surveillance, the lack of this ability harms the segmentation, tracking, and object recognition tasks. The main approaches for color constancy are generally targeted to static images and intend to estimate the scene illuminant color from the images. We present an iterative color constancy method with temporal filtering applied to image sequences in which reference colors are estimated from previous corrected images. Furthermore, two strategies to sample colors from the images are tested. The proposed method has been tested using image sequences with no relative movement between the scene and the camera. It also has been compared with known color constancy algorithms such as gray-world, max-RGB, and gray-edge. In most cases, the iterative color constancy method achieved better results than the other approaches.
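    The gray-world and max-RGB baselines named in the abstract are simple enough to sketch (a minimal per-image version, ignoring the paper's iterative temporal filtering):

```python
# Gray-world: scale each channel so all channel means match their average.
# Max-RGB: scale each channel so its maximum maps to full scale (255).
# pixels: list of (R, G, B) tuples with float values in 0-255.

def gray_world(pixels):
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    gray = sum(means) / 3.0
    gains = [gray / m if m else 1.0 for m in means]
    return [tuple(min(255.0, v * g) for v, g in zip(p, gains))
            for p in pixels]

def max_rgb(pixels):
    maxs = [max(p[c] for p in pixels) for c in range(3)]
    gains = [255.0 / m if m else 1.0 for m in maxs]
    return [tuple(min(255.0, v * g) for v, g in zip(p, gains))
            for p in pixels]
```

    The proposed method differs from these single-image estimators by carrying reference colors forward from previously corrected frames, which is what makes it applicable to image sequences.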

  3. Clustered impulse noise removal from color images with spatially connected rank filtering

    NASA Astrophysics Data System (ADS)

    Ruchay, Alexey; Kober, Vitaly

    2016-09-01

    This paper deals with impulse noise removal from color images. The proposed noise removal algorithm employs two classical approaches for color image denoising; that is, detection of corrupted pixels and removal of the detected noise by means of local rank filtering. With the help of computer simulation we show that the proposed algorithm can effectively remove impulse noise and clustered impulse noise. The performance of the proposed algorithm is compared in terms of image restoration metrics with that of common successful algorithms.
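    A much-simplified detector-plus-rank-filter in the spirit of the abstract (single channel, with salt-and-pepper extremes treated as corrupted; the paper's actual detector and color handling are more sophisticated) might look like:

```python
# Detect pixels at the impulse extremes (0 or 255) and replace each with
# the median of its uncorrupted 8-neighbours, leaving clean pixels intact.

def denoise(img):
    """img: 2-D list of ints in 0-255; returns a filtered copy."""
    h, w = len(img), len(img[0])

    def corrupted(v):
        return v == 0 or v == 255

    out = [row[:] for row in img]
    for y in range(h):
        for x in range(w):
            if not corrupted(img[y][x]):
                continue
            neigh = [img[j][i]
                     for j in range(max(0, y - 1), min(h, y + 2))
                     for i in range(max(0, x - 1), min(w, x + 2))
                     if (j, i) != (y, x) and not corrupted(img[j][i])]
            if neigh:
                neigh.sort()
                out[y][x] = neigh[len(neigh) // 2]
    return out
```

    Restricting the rank filter to detected pixels is what preserves edges: unlike a plain median filter, clean pixels are never altered.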

  4. Artificial frame filling using adaptive neural fuzzy inference system for particle image velocimetry dataset

    NASA Astrophysics Data System (ADS)

    Akdemir, Bayram; Doǧan, Sercan; Aksoy, Muharrem H.; Canli, Eyüp; Özgören, Muammer

    2015-03-01

    Liquid behaviors are important in many areas, especially mechanical engineering. A high-speed camera is one way to observe and study them: it traces dust or colored markers travelling in the liquid and captures as many pictures per second as possible. Every image carries a large data structure due to its resolution, and for fast liquid velocities it is not easy to evaluate the motion or produce a fluent frame sequence from the captured images. Artificial intelligence is popular in science for solving such nonlinear problems, and the adaptive neural fuzzy inference system (ANFIS) is a common technique in the literature. A particle in a liquid has a two-dimensional velocity and its derivatives. In this study, ANFIS was used offline to create an artificial frame between each previous and subsequent frame, using velocities and vorticities to compute a crossing-point vector between the previous and subsequent points. Filling virtual frames among the real frames improves image continuity and makes the sequence much more understandable at chaotic or high-vorticity points. After applying ANFIS, the image dataset doubles in size, alternating virtual and real frames. The obtained success was evaluated using R2 testing and mean squared error; R2, a statistical measure of similarity, reached 0.82, 0.81, 0.85 and 0.8 for the velocities and their derivatives, respectively.

  5. Analyzing visual enjoyment of color: using female nude digital Image as example

    NASA Astrophysics Data System (ADS)

    Chin, Sin-Ho

    2014-04-01

    This research adopts the three primary colors and their three mixed colors as the main hue variations by changing the background of a female nude digital image. Color saturation is set to 9S (high) and 3S (low) in PCCS, and tone is set to lightness 3.5 (low), 5.5 (medium, for the primary colors), and 7.5 (high). A watercolor brush stroke is applied to two female body digital images, one visually pleasant with an elegant posture and one unpleasant with stiff body language, to add visual intimacy. Results show that the brightness of color is the main factor affecting visual enjoyment, followed by saturation. Specifically, high brightness with high saturation gains the highest rate of enjoyment, high saturation with medium brightness (primary colors) the second, high brightness with low saturation the third, and low brightness with low saturation the least.

  6. An Adaptive Framework for Image and Video Sensing

    DTIC Science & Technology

    2005-03-01

    bandwidth on the camera transmission or memory is not optimally utilized. In this paper we outline a framework for an adaptive sensor where the spatial and...scene can be realized, with small distortion. Keywords: Adaptive Imaging, Varying Sampling Rate, Image Content Measure, Scene Adaptive, Camera ...second order effect on the spatio-temporal trade-off. Figure 1 is an example of the spatio-temporal sampling rate tradeoff in a typical camera (e.g

  7. Reconstruction of color images via Haar wavelet based on digital micromirror device

    NASA Astrophysics Data System (ADS)

    Liu, Xingjiong; He, Weiji; Gu, Guohua

    2015-10-01

    A digital micromirror device (DMD) is introduced to form a Haar wavelet basis, projected onto the color target image using structured illumination with red, green and blue light. The light intensity signals reflected from the target image are received synchronously by a bucket detector with no spatial resolution, converted into voltage signals, and transferred to a PC [1]. Several synchronization steps are added during data acquisition. During data collection, following the wavelet tree structure, the locations of significant coefficients at the finer scale are predicted by comparing the coefficients sampled at the coarsest scale with a threshold. Monochrome grayscale images are obtained under red, green and blue structured illumination using the inverse Haar wavelet transform, and a color fusion algorithm combines the three monochrome grayscale images into the final color image. An experimental demonstration device was assembled according to this imaging principle; the letter "K" and the X-rite ColorChecker Passport were projected and reconstructed as target images, and the final reconstructed color images have good quality. The Haar wavelet reconstruction method reduces the sampling rate considerably and provides color information without compromising the resolution of the final image.
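    As a toy version of the measurement model (assuming an idealized detector and ignoring that a real DMD cannot display negative pattern values, which in practice requires differential measurements), each projected pattern is one row of an orthonormal Haar matrix, the bucket detector records one inner product per pattern, and the image is recovered with the transpose:

```python
# Single-pixel imaging sketch with an orthonormal Haar basis.
# Signal length is assumed to be a power of two.

def haar_matrix(n):
    """Orthonormal Haar matrix of size n x n, built recursively."""
    if n == 1:
        return [[1.0]]
    prev = haar_matrix(n // 2)
    s = 2 ** -0.5
    rows = []
    for row in prev:                 # low-pass rows: H (x) [1, 1] / sqrt(2)
        rows.append([s * v for v in row for _ in range(2)])
    for i in range(n // 2):          # detail rows: I (x) [1, -1] / sqrt(2)
        r = [0.0] * n
        r[2 * i], r[2 * i + 1] = s, -s
        rows.append(r)
    return rows

def measure(image, basis):
    """Bucket-detector readings: one inner product per DMD pattern."""
    return [sum(p * v for p, v in zip(row, image)) for row in basis]

def reconstruct(readings, basis):
    """Orthonormal basis: the inverse transform is the transpose."""
    n = len(basis)
    return [sum(basis[k][i] * readings[k] for k in range(n))
            for i in range(n)]
```

    The sampling-rate saving in the paper comes from not measuring every row: fine-scale detail patterns are projected only where coarse-scale coefficients exceed the threshold.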

  8. Segmentation and classification of burn images by color and texture information.

    PubMed

    Acha, Begoña; Serrano, Carmen; Acha, José I; Roa, Laura M

    2005-01-01

    In this paper, a burn color image segmentation and classification system is proposed. The aim of the system is to separate burn wounds from healthy skin, and to distinguish among the different types of burns (burn depths). Digital color photographs are used as inputs to the system. The system is based on color and texture information, since these are the characteristics observed by physicians in order to form a diagnosis. A perceptually uniform color space (L*u*v*) was used, since Euclidean distances calculated in this space correspond to perceptual color differences. After the burn is segmented, a set of color and texture features is calculated that serves as the input to a Fuzzy-ARTMAP neural network. The neural network classifies burns into three types of burn depths: superficial dermal, deep dermal, and full thickness. Clinical effectiveness of the method was demonstrated on 62 clinical burn wound images, yielding an average classification success rate of 82%.
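
    The role of the perceptually uniform space can be illustrated with a minimal sketch: because Euclidean distances in L*u*v* approximate perceived color differences, a plain distance threshold against a reference color already yields a crude mask. This is a hypothetical stand-in for the paper's full segmentation and Fuzzy-ARTMAP stages; the reference color and threshold are illustrative.

```python
import numpy as np

def segment_by_color_distance(luv_img, ref_luv, thresh):
    """Mark pixels whose Euclidean distance from a reference color exceeds
    `thresh`. In the perceptually uniform L*u*v* space this distance tracks
    perceived color difference, so a simple threshold is already meaningful."""
    diff = np.asarray(luv_img, float) - np.asarray(ref_luv, float)
    return np.linalg.norm(diff, axis=-1) > thresh
```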

  9. Development of an image capturing system for the reproduction of high-fidelity color

    NASA Astrophysics Data System (ADS)

    Ejaz, Tahseen; Shoichi, Yokoi; Horiuchi, Tomohiro; Yokota, Tetsuya; Takaya, Masanori; Ohashi, Gosuke; Shimodaira, Yoshifumi

    2005-01-01

    An image capturing system for the reproduction of high-fidelity color was developed, and a set of three optical filters was designed for this purpose. A simulation was performed on the SOCS database, which contains spectral reflectance data for various objects over the 400-700 nm wavelength range, in order to calculate the CIELAB color difference ΔEab. The average color difference was found to be 1.049. The camera was fitted with the filters, and color photographs of all 24 color patches of the Macbeth chart were taken. The measured tristimulus values of the patches were compared with those of the digital images captured by the camera. The average ΔEab was found to be 5.916.

  10. Development of an image capturing system for the reproduction of high-fidelity color

    NASA Astrophysics Data System (ADS)

    Ejaz, Tahseen; Shoichi, Yokoi; Horiuchi, Tomohiro; Yokota, Tetsuya; Takaya, Masanori; Ohashi, Gosuke; Shimodaira, Yoshifumi

    2004-12-01

    An image capturing system for the reproduction of high-fidelity color was developed, and a set of three optical filters was designed for this purpose. A simulation was performed on the SOCS database, which contains spectral reflectance data for various objects over the 400-700 nm wavelength range, in order to calculate the CIELAB color difference ΔEab. The average color difference was found to be 1.049. The camera was fitted with the filters, and color photographs of all 24 color patches of the Macbeth chart were taken. The measured tristimulus values of the patches were compared with those of the digital images captured by the camera. The average ΔEab was found to be 5.916.

  11. Color image authentication scheme via multispectral photon-counting double random phase encoding

    NASA Astrophysics Data System (ADS)

    Moon, Inkyu

    2015-05-01

    In this paper, we present an overview of a color image authentication scheme based on multispectral photon-counting imaging (MPCI) and double random phase encoding (DRPE). MPCI makes the image sparsely distributed, and DRPE turns it into stationary white noise; both properties make intruder attacks difficult. In this method, the original RGB image is down-sampled into a Bayer image and then encrypted with DRPE. The encrypted image is photon-counted and transmitted over an internet channel. For image authentication, the decrypted Bayer image is interpolated into an RGB image with a demosaicing algorithm. Experimental results show that the decrypted image is not visually recognizable under low light levels but can be verified with a nonlinear correlation algorithm.
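
    The Bayer down-sampling and demosaicing stages of the pipeline can be sketched as below. Photon counting and DRPE are omitted; an RGGB pattern and a crude nearest-neighbour interpolation are assumed for illustration.

```python
import numpy as np

def rgb_to_bayer(rgb):
    """Down-sample an RGB image to an RGGB Bayer mosaic (one sample per pixel)."""
    h, w, _ = rgb.shape
    bayer = np.zeros((h, w))
    bayer[0::2, 0::2] = rgb[0::2, 0::2, 0]  # R
    bayer[0::2, 1::2] = rgb[0::2, 1::2, 1]  # G
    bayer[1::2, 0::2] = rgb[1::2, 0::2, 1]  # G
    bayer[1::2, 1::2] = rgb[1::2, 1::2, 2]  # B
    return bayer

def demosaic_nearest(bayer):
    """Crude nearest-neighbour demosaic: each 2x2 cell shares its R, G, B samples."""
    h, w = bayer.shape
    out = np.zeros((h, w, 3))
    for i in range(0, h, 2):
        for j in range(0, w, 2):
            r = bayer[i, j]
            g = (bayer[i, j + 1] + bayer[i + 1, j]) / 2.0
            b = bayer[i + 1, j + 1]
            out[i:i + 2, j:j + 2] = (r, g, b)
    return out
```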

  12. Color images of Kansas subsurface geology from well logs

    USGS Publications Warehouse

    Collins, D.R.; Doveton, J.H.

    1986-01-01

    Modern wireline log combinations give highly diagnostic information that goes beyond the basic shale content, pore volume, and fluid saturation of older logs. Pattern recognition of geology from logs is made conventionally through either the examination of log overlays or log crossplots. Both methods can be combined through the use of color as a medium of information by setting the three color primaries of blue, green, and red light as axes of three dimensional color space. Multiple log readings of zones are rendered as composite color mixtures which, when plotted sequentially with depth, show lithological successions in a striking manner. The method is extremely simple to program and display on a color monitor. Illustrative examples are described from the Kansas subsurface. ?? 1986.
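
    The color-mixing idea is indeed simple to sketch: scale three log curves to [0, 1] and treat them as color axes; plotting the resulting colors against depth gives the lithology strip described above. The normalization below is illustrative.

```python
import numpy as np

def logs_to_rgb(log1, log2, log3):
    """Map three well-log curves onto color axes: each log is scaled to
    [0, 1] over its own range, so every depth sample becomes one
    composite RGB color."""
    def norm(x):
        x = np.asarray(x, float)
        span = x.max() - x.min()
        return (x - x.min()) / span if span > 0 else np.zeros_like(x)
    return np.stack([norm(log1), norm(log2), norm(log3)], axis=-1)
```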

  13. Multiple color-image fusion and watermarking based on optical interference and wavelet transform

    NASA Astrophysics Data System (ADS)

    Abuturab, Muhammad Rafiq

    2017-02-01

    A novel scheme for multiple color-image fusion and watermarking using optical interference and wavelet transform is proposed. In this method, each secret color image is encoded into three phase-only masks (POMs). One POM is constructed as a user identity key, and the other two POMs are generated as the user identity key modulated by the corresponding secret color image in the gyrator transform domain, without any time-consuming iterative computations or post-processing of the POMs to remove the inherent silhouette problem. The R, G, and B channels of the different users' identity-key POMs are then individually multiplied to obtain three multiplexed POMs, which serve as encrypted images. Similarly, the R, G, and B channels of the other two POMs are independently multiplied to obtain two further sets of three multiplexed POMs. The encrypted images are fused with a gray-level cover image to produce the final encrypted image as the watermarked image. The secret color images are thus shielded by the encrypted images (which carry no information about the secret images) as well as by the cover image (which reveals no information about the encrypted images). These two remarkable features drastically reduce the probability of the encrypted images being found and attacked. Each individual user has an identity key and two phase-only keys as three decryption keys, besides the transformation angles, which act as additional keys. Theoretical analysis and numerical simulation results validate the feasibility of the proposed method.

  14. Color image authentication using a zone-corrected error-monitoring quantization-based watermarking technique

    NASA Astrophysics Data System (ADS)

    Al-Otum, Hazem Munawer

    2016-08-01

    This article presents a semifragile color image watermarking technique for content authentication. The proposed technique embeds a watermarking sequence into the low-frequency coefficients of the approximation, horizontal, and vertical sub-bands of a modified two-level discrete wavelet transform. This is done by inserting a predefined value, derived from two of the three R, G, and B color layers, into the third color layer, which makes it possible to monitor modifications by observing the changes in the color layer where the watermark is embedded. Two measures were developed to evaluate the technique's copyright-protection and authentication performance. Experimental results show high accuracy in detecting and localizing intentional attacks, together with high robustness against common image-processing attacks.
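
    The quantization-based embedding primitive behind such schemes can be sketched with quantization index modulation (QIM) on a generic coefficient vector. The paper's zone correction, DWT sub-band selection, and cross-layer value derivation are omitted; the step size `delta` is illustrative.

```python
import numpy as np

def qim_embed(coeffs, bits, delta=8.0):
    """Quantization index modulation: snap each coefficient onto the
    quantizer lattice selected by its watermark bit (0 or 1)."""
    c = np.asarray(coeffs, float)
    offset = np.asarray(bits) * (delta / 2.0)   # bit 1 -> half-step lattice
    return np.round((c - offset) / delta) * delta + offset

def qim_extract(coeffs, delta=8.0):
    """Recover bits by testing which lattice each coefficient is nearer to."""
    c = np.asarray(coeffs, float)
    d0 = np.abs(c - np.round(c / delta) * delta)
    d1 = np.abs(c - (np.round((c - delta / 2) / delta) * delta + delta / 2))
    return (d1 < d0).astype(int)
```

    An unmodified coefficient may decode to either bit; it is the embedding step that forces each coefficient exactly onto its bit's lattice, so any later tampering shows up as a lattice mismatch.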

  15. Rapid production of structural color images with optical data storage capabilities

    NASA Astrophysics Data System (ADS)

    Rezaei, Mohamad; Jiang, Hao; Qarehbaghi, Reza; Naghshineh, Mohammad; Kaminska, Bozena

    2015-03-01

    In this paper, we present novel methods to produce a structural color image of any given color picture using a pixelated generic stamp called a nanosubstrate. The nanosubstrate is composed of prefabricated arrays of red, green, and blue subpixels. Each subpixel contains nano-gratings and/or sub-wavelength structures that produce structural colors through light diffraction. Micro-patterning techniques were used to produce the color images from the nanosubstrate by selective activation of subpixels. The nano-grating structures can be nanohole arrays, which after replication are converted to nanopillar arrays, or vice versa. We demonstrate that visible and invisible data can easily be stored using these fabrication methods and that the information can easily be read. The techniques can therefore be employed to produce personalized and customized color images for applications in optical document security and publicity, complemented by optical data storage capabilities.

  16. True color blood flow imaging using a high-speed laser photography system

    NASA Astrophysics Data System (ADS)

    Liu, Chien-Sheng; Lin, Cheng-Hsien; Sun, Yung-Nien; Ho, Chung-Liang; Hsu, Chung-Chi

    2012-10-01

    Physiological changes in the retinal vasculature are commonly indicative of disorders such as diabetic retinopathy, glaucoma, and age-related macular degeneration. Thus, various methods have been developed for noninvasive clinical evaluation of ocular hemodynamics. However, to the best of our knowledge, current ophthalmic instruments do not provide a true color blood flow imaging capability. Accordingly, we propose a new method for true color imaging of blood flow using a high-speed pulsed laser photography system. In the proposed approach, monochromatic images of the blood flow are acquired using a system of three cameras and three color lasers (red, green, and blue). A high-quality true color image of the blood flow is obtained by assembling the monochromatic images through image realignment and color calibration processes. The effectiveness of the proposed approach is demonstrated by imaging the flow of mouse blood within a microfluidic channel device. The experimental results confirm that the proposed system provides a high-quality true color blood flow imaging capability and therefore has potential for noninvasive clinical evaluation of ocular hemodynamics.
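
    After realignment and calibration, the assembly step is essentially a per-channel gain followed by channel stacking. This is a minimal sketch; the gain values are illustrative, and the realignment itself is omitted.

```python
import numpy as np

def assemble_true_color(red, green, blue, gains=(1.0, 1.0, 1.0)):
    """Stack three co-registered monochrome captures (one per laser) into an
    RGB image, applying per-channel calibration gains and clipping to [0, 1]."""
    chans = [np.asarray(c, float) * g for c, g in zip((red, green, blue), gains)]
    return np.clip(np.stack(chans, axis=-1), 0.0, 1.0)
```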

  17. Numerical Demultiplexing of Color Image Sensor Measurements via Non-linear Random Forest Modeling

    PubMed Central

    Deglint, Jason; Kazemzadeh, Farnoud; Cho, Daniel; Clausi, David A.; Wong, Alexander

    2016-01-01

    The simultaneous capture of imaging data at multiple wavelengths across the electromagnetic spectrum is highly challenging, requiring complex and costly multispectral image devices. In this study, we investigate the feasibility of simultaneous multispectral imaging using conventional image sensors with color filter arrays via a novel comprehensive framework for numerical demultiplexing of the color image sensor measurements. A numerical forward model characterizing the formation of sensor measurements from light spectra hitting the sensor is constructed based on a comprehensive spectral characterization of the sensor. A numerical demultiplexer is then learned via non-linear random forest modeling based on the forward model. Given the learned numerical demultiplexer, one can then demultiplex simultaneously-acquired measurements made by the color image sensor into reflectance intensities at discrete selectable wavelengths, resulting in a higher resolution reflectance spectrum. Experimental results demonstrate the feasibility of such a method for the purpose of simultaneous multispectral imaging. PMID:27346434
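
    The forward model and the demultiplexing idea can be sketched numerically. Here a minimum-norm least-squares inverse stands in for the paper's learned random-forest demultiplexer (which captures non-linearities), and the sensor-response matrix is random rather than spectrally characterized.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical forward model: a sensor-response matrix S maps a 10-bin
# light spectrum to 3 color-channel measurements, m = S @ spectrum.
n_bins, n_chan = 10, 3
S = np.abs(rng.normal(size=(n_chan, n_bins)))

spectrum = np.abs(rng.normal(size=n_bins))
measurement = S @ spectrum

# Stand-in demultiplexer: minimum-norm least squares inverts the forward
# model, recovering a spectrum consistent with the sensor measurements.
recovered, *_ = np.linalg.lstsq(S, measurement, rcond=None)
```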

  18. Joint demosaicking and integer-ratio downsampling algorithm for color filter array image

    NASA Astrophysics Data System (ADS)

    Lee, Sangyoon; Kang, Moon Gi

    2015-03-01

    This paper presents a joint demosaicking and integer-ratio downsampling algorithm for color filter array (CFA) images. Color demosaicking is a necessary part of image signal processing to obtain a full color image in a single-sensor digital image recording system. Moreover, on devices such as mobile phones, the captured image must be downsampled for display, because the display resolution is smaller than the image resolution. The conventional procedure is "demosaicking first, downsampling later," but it requires significant hardware resources and computational cost. In this paper, we propose a method in which demosaicking and downsampling operate simultaneously: we analyze the Bayer CFA image in the frequency domain and then jointly demosaick and downsample with an integer-ratio scheme based on the decomposition of the signal into luma and chrominance components. Experimental results show that the proposed method produces high-quality results at much lower computational cost and with fewer hardware resources.
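
    A crude version of the "demosaick and downsample in one pass" idea: average each Bayer color plane over blocks, so the full-resolution color image is never formed. The paper's frequency-domain luma/chroma decomposition is omitted; an RGGB pattern is assumed.

```python
import numpy as np

def block_mean(a, r):
    """Average a 2D array over non-overlapping r x r blocks."""
    h, w = a.shape
    return a[:h - h % r, :w - w % r].reshape(h // r, r, w // r, r).mean(axis=(1, 3))

def bayer_downsample(bayer, r=2):
    """Joint demosaick-and-downsample sketch: extract the R, G, B planes of
    an RGGB mosaic and block-average each, yielding a 1/(2r)-scale RGB
    image directly from the CFA data."""
    planes = [bayer[0::2, 0::2],                             # R
              (bayer[0::2, 1::2] + bayer[1::2, 0::2]) / 2.0, # G (two sites)
              bayer[1::2, 1::2]]                             # B
    return np.stack([block_mean(p, r) for p in planes], axis=-1)
```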

  19. Note: In vivo pH imaging system using luminescent indicator and color camera

    NASA Astrophysics Data System (ADS)

    Sakaue, Hirotaka; Dan, Risako; Shimizu, Megumi; Kazama, Haruko

    2012-07-01

    A microscopic in vivo pH imaging system is developed that captures both luminescent and color images. The former gives a quantitative measurement of the pH distribution in vivo; the latter captures structural information that can be overlaid on the pH distribution to correlate the structure of a specimen with its pH distribution. By using a digital color camera, a luminescent image as well as a color image is obtained. The system uses HPTS (8-hydroxypyrene-1,3,6-trisulfonate) as a luminescent pH indicator for the luminescent imaging. Filter units mounted in the microscope extract two luminescent images for the excitation-ratio method. The ratio of the two images is converted to a pH distribution through an a priori pH calibration. An application of the system to epidermal cells of Lactuca sativa L. is shown.
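
    The excitation-ratio conversion reduces to a pixel-wise ratio of the two luminescent images followed by interpolation of the a priori calibration curve. The calibration points below are illustrative, not measured values.

```python
import numpy as np

def ratio_to_ph(img_ex1, img_ex2, calib_ratios, calib_ph):
    """Excitation-ratio method: pixel-wise ratio of two luminescent images,
    converted to pH by interpolating a calibration curve (calib_ratios
    must be monotonically increasing)."""
    ratio = np.asarray(img_ex1, float) / np.asarray(img_ex2, float)
    return np.interp(ratio, calib_ratios, calib_ph)
```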

  20. Adaptive feature-specific imaging: a face recognition example.

    PubMed

    Baheti, Pawan K; Neifeld, Mark A

    2008-04-01

    We present an adaptive feature-specific imaging (AFSI) system and consider its application to a face recognition task. The proposed system uses previous measurements to adapt the projection basis at each step. Using sequential hypothesis testing, we compare AFSI with static FSI (SFSI) and with static or adaptive conventional imaging in terms of the number of measurements required to achieve a specified probability of misclassification (Pe). The AFSI system exhibits significant improvement over SFSI and conventional imaging at low signal-to-noise ratio (SNR). It is shown that for M = 4 hypotheses and a desired Pe = 10^-2, AFSI requires 100 times fewer measurements than the adaptive conventional imager at SNR = -20 dB. We also show a trade-off, in terms of average detection time, between measurement SNR and adaptation advantage, resulting in an optimal value of integration time (equivalent to SNR) per measurement.
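
    The sequential-hypothesis-testing loop can be sketched in simplified form: take measurements one at a time, update the posterior over the M hypotheses, and stop once one posterior crosses a confidence threshold. This stand-in uses fixed Gaussian hypotheses rather than AFSI's adaptively chosen projections; all names and parameters are illustrative.

```python
import numpy as np

def sequential_test(measure, means, sigma, p_stop=0.99, max_steps=1000):
    """Sequential test among M Gaussian hypotheses: keep taking scalar
    measurements until one posterior exceeds p_stop. measure(k) returns
    the k-th measurement; means[m] is the value hypothesis m predicts.
    Returns (chosen hypothesis index, number of measurements used)."""
    means = np.asarray(means, float)
    log_post = np.zeros(len(means))              # uniform prior
    post = np.full(len(means), 1.0 / len(means))
    for k in range(max_steps):
        y = measure(k)
        log_post += -(y - means) ** 2 / (2.0 * sigma ** 2)  # Gaussian log-likelihood
        post = np.exp(log_post - log_post.max())
        post /= post.sum()
        if post.max() >= p_stop:
            return int(post.argmax()), k + 1
    return int(post.argmax()), max_steps
```

    The number of steps taken before stopping plays the role of the measurement count compared across imagers in the abstract.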

  1. Curvature adaptive optics and low light imaging

    NASA Astrophysics Data System (ADS)

    Ftaclas, C.; Chun, M.; Kuhn, J.; Ritter, J.

    We review the basic approach of curvature adaptive optics (AO) and show how its many advantages arise. A curvature wave front sensor (WFS) measures exactly what a curvature deformable mirror (DM) generates. This leads to the computational and operational simplicity of a nearly diagonal control matrix. The DM automatically reconstructs the wave front based on WFS curvature measurements; thus, there is no formal wave front reconstruction, which poses an interesting challenge to post-processing of AO images. Physical continuity of the DM and the reconstruction of phase from wave front curvature data ensure that each actuated region of the DM corrects local phase, tip-tilt, and focus. This gain in per-channel correction efficiency, combined with the need for only one detector pixel read per channel in the WFS, allows the use of photon-counting detectors for wave front sensing. We note that the use of photon-counting detectors permits penalty-free combination of correction channels either in the WFS or on the DM, which effectively decouples bright- and faint-source performance in that one no longer predicts the other. We then describe the application of curvature AO to the low-light moving-target detection problem and explore the resulting challenges to components and control systems. Rapidly moving targets impose high-speed operation, posing new requirements unique to curvature components. On the plus side, curvature wave front sensors, unlike their Shack-Hartmann counterparts, are tunable for optimum sensitivity to seeing, and we are examining autonomous optimization of the WFS in response to rapid changes in seeing.

  2. Empirical comparison of color normalization methods for epithelial-stromal classification in H and E images

    PubMed Central

    Sethi, Amit; Sha, Lingdao; Vahadane, Abhishek Ramnath; Deaton, Ryan J.; Kumar, Neeraj; Macias, Virgilia; Gann, Peter H.

    2016-01-01

    Context: Color normalization techniques for histology have not been empirically tested for their utility in computational pathology pipelines. Aims: We compared two contemporary techniques for achieving a common intermediate goal – epithelial-stromal classification. Settings and Design: Expert-annotated regions of epithelium and stroma were treated as ground truth for comparing classifiers on original and color-normalized images. Materials and Methods: Epithelial and stromal regions were annotated on thirty diverse-appearing H and E stained prostate cancer tissue microarray cores. Corresponding sets of thirty images each were generated using the two color normalization techniques. Color metrics were compared for original and color-normalized images. Separate epithelial-stromal classifiers were trained and compared on test images. Main analyses were conducted using a multiresolution segmentation (MRS) approach; comparative analyses using two other classification approaches (convolutional neural network [CNN], Wndchrm) were also performed. Statistical Analysis: For the main MRS method, which relied on classification of super-pixels, the number of variables used was reduced by backward elimination without compromising accuracy, and test area-under-the-curve (AUC) values were compared for original and normalized images. For CNN and Wndchrm, pixel classification test-AUCs were compared. Results: The Khan method reduced color saturation, while the Vahadane method reduced hue variance. Super-pixel-level test-AUC for MRS was 0.010–0.025 (95% confidence interval limits ± 0.004) higher for the two normalized image sets than for the original in the 10–80 variable range. Improvement in pixel classification accuracy was also observed for CNN and Wndchrm for color-normalized images. Conclusions: Color normalization can give a small incremental benefit when a super-pixel-based classification method is used with features that perform implicit color normalization while the gain is

  3. Wide-field computational color imaging using pixel super-resolved on-chip microscopy

    PubMed Central

    Greenbaum, Alon; Feizi, Alborz; Akbari, Najva; Ozcan, Aydogan

    2013-01-01

    Lens-free holographic on-chip imaging is an emerging approach that offers both wide field-of-view (FOV) and high spatial resolution in a cost-effective and compact design using source shifting based pixel super-resolution. However, color imaging has remained relatively immature for lens-free on-chip imaging, since a ‘rainbow’ like color artifact appears in reconstructed holographic images. To provide a solution for pixel super-resolved color imaging on a chip, here we introduce and compare the performances of two computational methods based on (1) YUV color space averaging, and (2) Dijkstra’s shortest path, both of which eliminate color artifacts in reconstructed images, without compromising the spatial resolution or the wide FOV of lens-free on-chip microscopes. To demonstrate the potential of this lens-free color microscope we imaged stained Papanicolaou (Pap) smears over a wide FOV of ~14 mm2 with sub-micron spatial resolution. PMID:23736466
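
    The YUV color-space averaging idea can be sketched as: keep the full-resolution luminance (Y) and smooth only the chrominance channels (U, V), which carry the "rainbow" artifact. The BT.601 conversion matrix is standard; the box size k is illustrative.

```python
import numpy as np

# RGB <-> YUV (BT.601) conversion matrices
RGB2YUV = np.array([[ 0.299,  0.587,  0.114],
                    [-0.147, -0.289,  0.436],
                    [ 0.615, -0.515, -0.100]])
YUV2RGB = np.linalg.inv(RGB2YUV)

def suppress_color_artifacts(rgb, k=5):
    """Box-average the chrominance (U, V) channels over k x k windows while
    leaving the luminance (Y) untouched, then convert back to RGB."""
    yuv = rgb @ RGB2YUV.T
    pad = k // 2
    h, w = yuv.shape[:2]
    for c in (1, 2):                      # chrominance channels only
        ch = np.pad(yuv[..., c], pad, mode='edge')
        acc = np.zeros((h, w))
        for di in range(k):               # accumulate shifted copies = box filter
            for dj in range(k):
                acc += ch[di:di + h, dj:dj + w]
        yuv[..., c] = acc / (k * k)
    return yuv @ YUV2RGB.T
```

    Smoothing only U and V preserves spatial detail (carried by Y) while washing out the wavelength-dependent color fringes, which is the stated goal of not compromising resolution.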

  4. An adaptive filter for smoothing noisy radar images

    NASA Technical Reports Server (NTRS)

    Frost, V. S.; Stiles, J. A.; Shanmugam, K. S.; Holtzman, J. C.; Smith, S. A.

    1981-01-01

    A spatial domain adaptive Wiener filter for smoothing radar images corrupted by multiplicative noise is presented. The filter is optimum in a minimum mean squared error sense, computationally efficient, and preserves edges in the image better than other filters. The proposed algorithm can also be used for processing optical images with illumination variations that have a multiplicative effect.
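
    The adaptive weighting idea can be sketched as follows: each output pixel is a weighted window average whose weights decay faster where the local coefficient of variation (a speckle measure) is high, so flat regions are smoothed heavily while edges are preserved. This is a simplified sketch in the style of the filter, not the paper's exact derivation; window size and damping factor are illustrative.

```python
import numpy as np

def adaptive_speckle_filter(img, k=3, damping=2.0):
    """Frost-style adaptive smoothing for multiplicative noise: weights
    w = exp(-damping * C2 * dist), where C2 = local variance / mean^2."""
    img = np.asarray(img, float)
    pad = k // 2
    p = np.pad(img, pad, mode='reflect')
    out = np.empty_like(img)
    yy, xx = np.mgrid[-pad:pad + 1, -pad:pad + 1]
    dist = np.hypot(yy, xx)               # distance from window center
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            win = p[i:i + k, j:j + k]
            mean = win.mean()
            c2 = win.var() / (mean ** 2 + 1e-12)  # normalized local variance
            w = np.exp(-damping * c2 * dist)
            out[i, j] = (w * win).sum() / w.sum()
    return out
```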

  5. False-Color-Image Map of Quadrangle 3364, Pasa-Band (417) and Kejran (418) Quadrangles, Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Turner, Kenzie J.

    2007-01-01

    This map is a false-color rendition created from Landsat 7 Enhanced Thematic Mapper Plus imagery collected between 1999 and 2002. The false colors were generated by applying an adaptive histogram equalization stretch to Landsat bands 7 (displayed in red), 4 (displayed in green), and 2 (displayed in blue). These three bands contain most of the spectral differences provided by Landsat imagery and, therefore, provide the most discrimination between surface materials. Landsat bands 4 and 7 are in the near-infrared and short-wave-infrared regions, respectively, where differences in absorption of sunlight by different surface materials are more pronounced than in visible wavelengths. Cultural data were extracted from files downloaded from the Afghanistan Information Management Service (AIMS) Web site (http://www.aims.org.af). The AIMS files were originally derived from maps produced by the Afghanistan Geodesy and Cartography Head Office (AGCHO). Cultural features were not derived from the Landsat base and consequently do not match it precisely. This map is part of a series that includes a geologic map, a topographic map, a Landsat natural-color-image map, and a Landsat false-color-image map for the USGS/AGS (U.S. Geological Survey/Afghan Geological Survey) quadrangles covering Afghanistan. The maps for any given quadrangle have the same open-file report (OFR) number but a different letter suffix, namely, -A, -B, -C, and -D for the geologic, topographic, Landsat natural-color, and Landsat false-color maps, respectively. The OFR numbers range in sequence from 1092 to 1123. The present map series is to be followed by a second series, in which the geology is reinterpreted on the basis of analysis of remote-sensing data, limited fieldwork, and library research. The second series is to be produced by the USGS in cooperation with the AGS and AGCHO.
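
    The band-stretch-and-composite step can be sketched with a global histogram equalization per band; the maps used an adaptive, locally varying equalization, which this simplified version omits.

```python
import numpy as np

def hist_equalize(band, bins=256):
    """Global histogram equalization of one band to [0, 1]: map each value
    through the band's empirical cumulative distribution."""
    flat = np.asarray(band, float).ravel()
    hist, edges = np.histogram(flat, bins=bins)
    cdf = np.cumsum(hist).astype(float)
    cdf /= cdf[-1]
    centers = (edges[:-1] + edges[1:]) / 2.0
    return np.interp(flat, centers, cdf).reshape(np.shape(band))

def false_color(band7, band4, band2):
    """Composite the stretched bands: band 7 -> red, band 4 -> green,
    band 2 -> blue, as in the map legend."""
    return np.stack([hist_equalize(b) for b in (band7, band4, band2)], axis=-1)
```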

  6. False-Color-Image Map of Quadrangle 3362, Shin-Dand (415) and Tulak (416) Quadrangles, Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Turner, Kenzie J.

    2007-01-01

    This map is a false-color rendition created from Landsat 7 Enhanced Thematic Mapper Plus imagery collected between 1999 and 2002. The false colors were generated by applying an adaptive histogram equalization stretch to Landsat bands 7 (displayed in red), 4 (displayed in green), and 2 (displayed in blue). These three bands contain most of the spectral differences provided by Landsat imagery and, therefore, provide the most discrimination between surface materials. Landsat bands 4 and 7 are in the near-infrared and short-wave-infrared regions, respectively, where differences in absorption of sunlight by different surface materials are more pronounced than in visible wavelengths. Cultural data were extracted from files downloaded from the Afghanistan Information Management Service (AIMS) Web site (http://www.aims.org.af). The AIMS files were originally derived from maps produced by the Afghanistan Geodesy and Cartography Head Office (AGCHO). Cultural features were not derived from the Landsat base and consequently do not match it precisely. This map is part of a series that includes a geologic map, a topographic map, a Landsat natural-color-image map, and a Landsat false-color-image map for the USGS/AGS (U.S. Geological Survey/Afghan Geological Survey) quadrangles covering Afghanistan. The maps for any given quadrangle have the same open-file report (OFR) number but a different letter suffix, namely, -A, -B, -C, and -D for the geologic, topographic, Landsat natural-color, and Landsat false-color maps, respectively. The OFR numbers range in sequence from 1092 to 1123. The present map series is to be followed by a second series, in which the geology is reinterpreted on the basis of analysis of remote-sensing data, limited fieldwork, and library research. The second series is to be produced by the USGS in cooperation with the AGS and AGCHO.

  7. False-Color-Image Map of Quadrangle 3166, Jaldak (701) and Maruf-Nawa (702) Quadrangles, Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Turner, Kenzie J.

    2007-01-01

    This map is a false-color rendition created from Landsat 7 Enhanced Thematic Mapper Plus imagery collected between 1999 and 2002. The false colors were generated by applying an adaptive histogram equalization stretch to Landsat bands 7 (displayed in red), 4 (displayed in green), and 2 (displayed in blue). These three bands contain most of the spectral differences provided by Landsat imagery and, therefore, provide the most discrimination between surface materials. Landsat bands 4 and 7 are in the near-infrared and short-wave-infrared regions, respectively, where differences in absorption of sunlight by different surface materials are more pronounced than in visible wavelengths. Cultural data were extracted from files downloaded from the Afghanistan Information Management Service (AIMS) Web site (http://www.aims.org.af). The AIMS files were originally derived from maps produced by the Afghanistan Geodesy and Cartography Head Office (AGCHO). Cultural features were not derived from the Landsat base and consequently do not match it precisely. This map is part of a series that includes a geologic map, a topographic map, a Landsat natural-color-image map, and a Landsat false-color-image map for the USGS/AGS (U.S. Geological Survey/Afghan Geological Survey) quadrangles covering Afghanistan. The maps for any given quadrangle have the same open-file report (OFR) number but a different letter suffix, namely, -A, -B, -C, and -D for the geologic, topographic, Landsat natural-color, and Landsat false-color maps, respectively. The OFR numbers range in sequence from 1092 to 1123. The present map series is to be followed by a second series, in which the geology is reinterpreted on the basis of analysis of remote-sensing data, limited fieldwork, and library research. The second series is to be produced by the USGS in cooperation with the AGS and AGCHO.

  8. False-Color-Image Map of Quadrangle 3264, Nawzad-Musa-Qala (423) and Dehrawat (424) Quadrangles, Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Turner, Kenzie J.

    2007-01-01

    This map is a false-color rendition created from Landsat 7 Enhanced Thematic Mapper Plus imagery collected between 1999 and 2002. The false colors were generated by applying an adaptive histogram equalization stretch to Landsat bands 7 (displayed in red), 4 (displayed in green), and 2 (displayed in blue). These three bands contain most of the spectral differences provided by Landsat imagery and, therefore, provide the most discrimination between surface materials. Landsat bands 4 and 7 are in the near-infrared and short-wave-infrared regions, respectively, where differences in absorption of sunlight by different surface materials are more pronounced than in visible wavelengths. Cultural data were extracted from files downloaded from the Afghanistan Information Management Service (AIMS) Web site (http://www.aims.org.af). The AIMS files were originally derived from maps produced by the Afghanistan Geodesy and Cartography Head Office (AGCHO). Cultural features were not derived from the Landsat base and consequently do not match it precisely. This map is part of a series that includes a geologic map, a topographic map, a Landsat natural-color-image map, and a Landsat false-color-image map for the USGS/AGS (U.S. Geological Survey/Afghan Geological Survey) quadrangles covering Afghanistan. The maps for any given quadrangle have the same open-file report (OFR) number but a different letter suffix, namely, -A, -B, -C, and -D for the geologic, topographic, Landsat natural-color, and Landsat false-color maps, respectively. The OFR numbers range in sequence from 1092 to 1123. The present map series is to be followed by a second series, in which the geology is reinterpreted on the basis of analysis of remote-sensing data, limited fieldwork, and library research. The second series is to be produced by the USGS in cooperation with the AGS and AGCHO.

  9. False-Color-Image Map of Quadrangle 3566, Sang-Charak (501) and Sayghan-O-Kamard (502) Quadrangles, Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Turner, Kenzie J.

    2007-01-01

    This map is a false-color rendition created from Landsat 7 Enhanced Thematic Mapper Plus imagery collected between 1999 and 2002. The false colors were generated by applying an adaptive histogram equalization stretch to Landsat bands 7 (displayed in red), 4 (displayed in green), and 2 (displayed in blue). These three bands contain most of the spectral differences provided by Landsat imagery and, therefore, provide the most discrimination between surface materials. Landsat bands 4 and 7 are in the near-infrared and short-wave-infrared regions, respectively, where differences in absorption of sunlight by different surface materials are more pronounced than in visible wavelengths. Cultural data were extracted from files downloaded from the Afghanistan Information Management Service (AIMS) Web site (http://www.aims.org.af). The AIMS files were originally derived from maps produced by the Afghanistan Geodesy and Cartography Head Office (AGCHO). Cultural features were not derived from the Landsat base and consequently do not match it precisely. This map is part of a series that includes a geologic map, a topographic map, a Landsat natural-color-image map, and a Landsat false-color-image map for the USGS/AGS (U.S. Geological Survey/Afghan Geological Survey) quadrangles covering Afghanistan. The maps for any given quadrangle have the same open-file report (OFR) number but a different letter suffix, namely, -A, -B, -C, and -D for the geologic, topographic, Landsat natural-color, and Landsat false-color maps, respectively. The OFR numbers range in sequence from 1092 to 1123. The present map series is to be followed by a second series, in which the geology is reinterpreted on the basis of analysis of remote-sensing data, limited fieldwork, and library research. The second series is to be produced by the USGS in cooperation with the AGS and AGCHO.

  10. False-Color-Image Map of Quadrangle 3462, Herat (409) and Chesht-Sharif (410) Quadrangles, Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Turner, Kenzie J.

    2007-01-01

    This map is a false-color rendition created from Landsat 7 Enhanced Thematic Mapper Plus imagery collected between 1999 and 2002. The false colors were generated by applying an adaptive histogram equalization stretch to Landsat bands 7 (displayed in red), 4 (displayed in green), and 2 (displayed in blue). These three bands contain most of the spectral differences provided by Landsat imagery and, therefore, provide the most discrimination between surface materials. Landsat bands 4 and 7 are in the near-infrared and short-wave-infrared regions, respectively, where differences in absorption of sunlight by different surface materials are more pronounced than in visible wavelengths. Cultural data were extracted from files downloaded from the Afghanistan Information Management Service (AIMS) Web site (http://www.aims.org.af). The AIMS files were originally derived from maps produced by the Afghanistan Geodesy and Cartography Head Office (AGCHO). Cultural features were not derived from the Landsat base and consequently do not match it precisely. This map is part of a series that includes a geologic map, a topographic map, a Landsat natural-color-image map, and a Landsat false-color-image map for the USGS/AGS (U.S. Geological Survey/Afghan Geological Survey) quadrangles covering Afghanistan. The maps for any given quadrangle have the same open-file report (OFR) number but a different letter suffix, namely, -A, -B, -C, and -D for the geologic, topographic, Landsat natural-color, and Landsat false-color maps, respectively. The OFR numbers range in sequence from 1092 to 1123. The present map series is to be followed by a second series, in which the geology is reinterpreted on the basis of analysis of remote-sensing data, limited fieldwork, and library research. The second series is to be produced by the USGS in cooperation with the AGS and AGCHO.

  11. False-Color-Image Map of Quadrangle 3570, Tagab-E-Munjan (505) and Asmar-Kamdesh (506) Quadrangles, Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Turner, Kenzie J.

    2007-01-01

    This map is a false-color rendition created from Landsat 7 Enhanced Thematic Mapper Plus imagery collected between 1999 and 2002. The false colors were generated by applying an adaptive histogram equalization stretch to Landsat bands 7 (displayed in red), 4 (displayed in green), and 2 (displayed in blue). These three bands contain most of the spectral differences provided by Landsat imagery and, therefore, provide the most discrimination between surface materials. Landsat bands 4 and 7 are in the near-infrared and short-wave-infrared regions, respectively, where differences in absorption of sunlight by different surface materials are more pronounced than in visible wavelengths. Cultural data were extracted from files downloaded from the Afghanistan Information Management Service (AIMS) Web site (http://www.aims.org.af). The AIMS files were originally derived from maps produced by the Afghanistan Geodesy and Cartography Head Office (AGCHO). Cultural features were not derived from the Landsat base and consequently do not match it precisely. This map is part of a series that includes a geologic map, a topographic map, a Landsat natural-color-image map, and a Landsat false-color-image map for the USGS/AGS (U.S. Geological Survey/Afghan Geological Survey) quadrangles covering Afghanistan. The maps for any given quadrangle have the same open-file report (OFR) number but a different letter suffix, namely, -A, -B, -C, and -D for the geologic, topographic, Landsat natural-color, and Landsat false-color maps, respectively. The OFR numbers range in sequence from 1092 to 1123. The present map series is to be followed by a second series, in which the geology is reinterpreted on the basis of analysis of remote-sensing data, limited fieldwork, and library research. The second series is to be produced by the USGS in cooperation with the AGS and AGCHO.

  12. False-Color-Image Map of Quadrangle 3466, Lal-Sarjangal (507) and Bamyan (508) Quadrangles, Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Turner, Kenzie J.

    2007-01-01

    This map is a false-color rendition created from Landsat 7 Enhanced Thematic Mapper Plus imagery collected between 1999 and 2002. The false colors were generated by applying an adaptive histogram equalization stretch to Landsat bands 7 (displayed in red), 4 (displayed in green), and 2 (displayed in blue). These three bands contain most of the spectral differences provided by Landsat imagery and, therefore, provide the most discrimination between surface materials. Landsat bands 4 and 7 are in the near-infrared and short-wave-infrared regions, respectively, where differences in absorption of sunlight by different surface materials are more pronounced than in visible wavelengths. Cultural data were extracted from files downloaded from the Afghanistan Information Management Service (AIMS) Web site (http://www.aims.org.af). The AIMS files were originally derived from maps produced by the Afghanistan Geodesy and Cartography Head Office (AGCHO). Cultural features were not derived from the Landsat base and consequently do not match it precisely. This map is part of a series that includes a geologic map, a topographic map, a Landsat natural-color-image map, and a Landsat false-color-image map for the USGS/AGS (U.S. Geological Survey/Afghan Geological Survey) quadrangles covering Afghanistan. The maps for any given quadrangle have the same open-file report (OFR) number but a different letter suffix, namely, -A, -B, -C, and -D for the geologic, topographic, Landsat natural-color, and Landsat false-color maps, respectively. The OFR numbers range in sequence from 1092 to 1123. The present map series is to be followed by a second series, in which the geology is reinterpreted on the basis of analysis of remote-sensing data, limited fieldwork, and library research. The second series is to be produced by the USGS in cooperation with the AGS and AGCHO.

  13. False-Color-Image Map of Quadrangle 3670, Jarm-Keshem (223) and Zebak (224) Quadrangles, Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Turner, Kenzie J.

    2007-01-01

    This map is a false-color rendition created from Landsat 7 Enhanced Thematic Mapper Plus imagery collected between 1999 and 2002. The false colors were generated by applying an adaptive histogram equalization stretch to Landsat bands 7 (displayed in red), 4 (displayed in green), and 2 (displayed in blue). These three bands contain most of the spectral differences provided by Landsat imagery and, therefore, provide the most discrimination between surface materials. Landsat bands 4 and 7 are in the near-infrared and short-wave-infrared regions, respectively, where differences in absorption of sunlight by different surface materials are more pronounced than in visible wavelengths. Cultural data were extracted from files downloaded from the Afghanistan Information Management Service (AIMS) Web site (http://www.aims.org.af). The AIMS files were originally derived from maps produced by the Afghanistan Geodesy and Cartography Head Office (AGCHO). Cultural features were not derived from the Landsat base and consequently do not match it precisely. This map is part of a series that includes a geologic map, a topographic map, a Landsat natural-color-image map, and a Landsat false-color-image map for the USGS/AGS (U.S. Geological Survey/Afghan Geological Survey) quadrangles covering Afghanistan. The maps for any given quadrangle have the same open-file report (OFR) number but a different letter suffix, namely, -A, -B, -C, and -D for the geologic, topographic, Landsat natural-color, and Landsat false-color maps, respectively. The OFR numbers range in sequence from 1092 to 1123. The present map series is to be followed by a second series, in which the geology is reinterpreted on the basis of analysis of remote-sensing data, limited fieldwork, and library research. The second series is to be produced by the USGS in cooperation with the AGS and AGCHO.

  14. False-Color-Image Map of Quadrangle 3468, Chak Wardak-Syahgerd (509) and Kabul (510) Quadrangles, Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Turner, Kenzie J.

    2007-01-01

    This map is a false-color rendition created from Landsat 7 Enhanced Thematic Mapper Plus imagery collected between 1999 and 2002. The false colors were generated by applying an adaptive histogram equalization stretch to Landsat bands 7 (displayed in red), 4 (displayed in green), and 2 (displayed in blue). These three bands contain most of the spectral differences provided by Landsat imagery and, therefore, provide the most discrimination between surface materials. Landsat bands 4 and 7 are in the near-infrared and short-wave-infrared regions, respectively, where differences in absorption of sunlight by different surface materials are more pronounced than in visible wavelengths. Cultural data were extracted from files downloaded from the Afghanistan Information Management Service (AIMS) Web site (http://www.aims.org.af). The AIMS files were originally derived from maps produced by the Afghanistan Geodesy and Cartography Head Office (AGCHO). Cultural features were not derived from the Landsat base and consequently do not match it precisely. This map is part of a series that includes a geologic map, a topographic map, a Landsat natural-color-image map, and a Landsat false-color-image map for the USGS/AGS (U.S. Geological Survey/Afghan Geological Survey) quadrangles covering Afghanistan. The maps for any given quadrangle have the same open-file report (OFR) number but a different letter suffix, namely, -A, -B, -C, and -D for the geologic, topographic, Landsat natural-color, and Landsat false-color maps, respectively. The OFR numbers range in sequence from 1092 to 1123. The present map series is to be followed by a second series, in which the geology is reinterpreted on the basis of analysis of remote-sensing data, limited fieldwork, and library research. The second series is to be produced by the USGS in cooperation with the AGS and AGCHO.

  15. False-Color-Image Map of Quadrangle 3262, Farah (421) and Hokumat-E-Pur-Chaman (422) Quadrangles, Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Turner, Kenzie J.

    2007-01-01

    This map is a false-color rendition created from Landsat 7 Enhanced Thematic Mapper Plus imagery collected between 1999 and 2002. The false colors were generated by applying an adaptive histogram equalization stretch to Landsat bands 7 (displayed in red), 4 (displayed in green), and 2 (displayed in blue). These three bands contain most of the spectral differences provided by Landsat imagery and, therefore, provide the most discrimination between surface materials. Landsat bands 4 and 7 are in the near-infrared and short-wave-infrared regions, respectively, where differences in absorption of sunlight by different surface materials are more pronounced than in visible wavelengths. Cultural data were extracted from files downloaded from the Afghanistan Information Management Service (AIMS) Web site (http://www.aims.org.af). The AIMS files were originally derived from maps produced by the Afghanistan Geodesy and Cartography Head Office (AGCHO). Cultural features were not derived from the Landsat base and consequently do not match it precisely. This map is part of a series that includes a geologic map, a topographic map, a Landsat natural-color-image map, and a Landsat false-color-image map for the USGS/AGS (U.S. Geological Survey/Afghan Geological Survey) quadrangles covering Afghanistan. The maps for any given quadrangle have the same open-file report (OFR) number but a different letter suffix, namely, -A, -B, -C, and -D for the geologic, topographic, Landsat natural-color, and Landsat false-color maps, respectively. The OFR numbers range in sequence from 1092 to 1123. The present map series is to be followed by a second series, in which the geology is reinterpreted on the basis of analysis of remote-sensing data, limited fieldwork, and library research. The second series is to be produced by the USGS in cooperation with the AGS and AGCHO.

  16. HST Imaging of the Globular Clusters in the Fornax Cluster: Color and Luminosity Distributions

    NASA Technical Reports Server (NTRS)

    Grillmair, C. J.; Forbes, D. A.; Brodie, J.; Elson, R.

    1998-01-01

    We examine the luminosity and B - I color distribution of globular clusters for three early-type galaxies in the Fornax cluster using imaging data from the Wide Field/Planetary Camera 2 on the Hubble Space Telescope.

  17. Color analysis method for estimating the oxygen saturation of hemoglobin using an image-input and processing system.

    PubMed

    Hashimoto, M; Hata, R; Isomoto, A; Tyuma, I; Fukuda, M

    1987-04-01

    A color analysis method that enables both qualitative and quantitative analyses of an object's color was developed. The method uses a color image-input and processing system composed of a 3-tube video camera and a digital image analyzer, which quantizes a color image into values of red, green, and blue brightness, then processes these values. We introduced a spectrophotometric principle based on the Beer-Lambert law and were able to establish a color model for analyzing an object's color. In the coordinate space based on our color model, the hue of the object's color is represented by the direction from the origin, and the density by the distance from the origin. This new method was used to analyze the colors of hemoglobin solutions at various oxygen saturations and concentrations. The results agreed with the known conditions, indicating the validity of the model and its usefulness for quantitative as well as qualitative analyses of color.
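
    A minimal sketch of the Beer-Lambert color model described above: an RGB brightness triple is mapped to absorbance space, where the unit direction from the origin encodes hue and the distance encodes density. The white-reference value and the concentration-doubling check are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def absorbance_coords(rgb, white=(255.0, 255.0, 255.0)):
    """Map an RGB brightness triple to Beer-Lambert absorbance space.

    Returns (direction, density): direction is a unit vector giving the hue,
    density is the Euclidean distance from the origin (color strength).
    """
    i = np.asarray(rgb, dtype=float)
    i0 = np.asarray(white, dtype=float)
    a = -np.log(np.clip(i / i0, 1e-6, 1.0))  # per-channel absorbance
    density = float(np.linalg.norm(a))
    direction = a / density if density > 0 else a
    return direction, density

# A twice-as-concentrated sample: absorbance doubles, hue (direction) is unchanged.
d1, n1 = absorbance_coords((200, 120, 60))
a2 = tuple(255.0 * (c / 255.0) ** 2 for c in (200, 120, 60))  # squared transmittance
d2, n2 = absorbance_coords(a2)
```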

  18. A GPU-Parallelized Eigen-Based Clutter Filter Framework for Ultrasound Color Flow Imaging.

    PubMed

    Chee, Adrian J Y; Yiu, Billy Y S; Yu, Alfred C H

    2017-01-01

    Eigen-filters with attenuation response adapted to clutter statistics in color flow imaging (CFI) have shown improved flow detection sensitivity in the presence of tissue motion. Nevertheless, their practical adoption in clinical use is not straightforward due to the high computational cost of solving eigendecompositions. Here, we provide a pedagogical description of how a real-time computing framework for eigen-based clutter filtering can be developed through a single-instruction, multiple-data (SIMD) computing approach that can be implemented on a graphics processing unit (GPU). Emphasis is placed on the single-ensemble-based eigen-filtering approach (Hankel singular value decomposition), since it is algorithmically compatible with GPU-based SIMD computing. The key algebraic principles and the corresponding SIMD algorithm are explained, and annotations on how such an algorithm can be rationally implemented on the GPU are presented. Real-time efficacy of our framework was experimentally investigated on a single GPU device (GTX Titan X), and the computing throughput for varying scan depths and slow-time ensemble lengths was studied. Using our eigen-processing framework, real-time video-range throughput (24 frames/s) can be attained for CFI frames with full view in the azimuth direction (128 scanlines), up to a scan depth of 5 cm (λ pixel axial spacing) for a slow-time ensemble length of 16 samples. The corresponding CFI image frames, with respect to the ones derived from non-adaptive polynomial regression clutter filtering, yielded enhanced flow detection sensitivity in vivo, as demonstrated in a carotid imaging case example. These findings indicate that GPU-enabled eigen-based clutter filtering can improve CFI flow detection performance in real time.
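
    The single-ensemble Hankel-SVD clutter filter that the framework parallelizes can be sketched serially in a few lines of NumPy. The GPU/SIMD mapping is omitted, and the ensemble length, signal model, and choice of one clutter component are illustrative assumptions:

```python
import numpy as np

def hankel_svd_filter(x, n_clutter=1):
    """Single-ensemble eigen-based clutter filter (Hankel-SVD sketch).

    Build a Hankel matrix from the slow-time ensemble, drop the largest
    n_clutter singular components (assumed to be clutter), and reconstruct
    the filtered ensemble by averaging over anti-diagonals.
    """
    x = np.asarray(x, dtype=complex)
    n = len(x)
    L = n // 2 + 1                                  # window length
    K = n - L + 1
    H = np.array([x[i:i + K] for i in range(L)])    # L x K Hankel matrix
    U, s, Vh = np.linalg.svd(H, full_matrices=False)
    s_f = s.copy()
    s_f[:n_clutter] = 0.0                           # suppress dominant components
    Hf = (U * s_f) @ Vh
    # anti-diagonal averaging back to a length-n sequence
    y = np.zeros(n, dtype=complex)
    cnt = np.zeros(n)
    for i in range(L):
        for j in range(K):
            y[i + j] += Hf[i, j]
            cnt[i + j] += 1
    return y / cnt

# strong near-DC clutter plus a weak Doppler tone
t = np.arange(16)
clutter = 10.0 * np.exp(1j * 2 * np.pi * 0.01 * t)
flow = 0.5 * np.exp(1j * 2 * np.pi * 0.3 * t)
filtered = hankel_svd_filter(clutter + flow, n_clutter=1)
```

A single complex exponential yields an exactly rank-1 Hankel matrix, which is why zeroing the leading singular value removes the slow clutter while largely preserving the flow tone.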

  19. A GPU-Parallelized Eigen-Based Clutter Filter Framework for Ultrasound Color Flow Imaging.

    PubMed

    Chee, Adrian; Yiu, Billy; Yu, Alfred

    2016-09-07

    Eigen-filters with attenuation response adapted to clutter statistics in color flow imaging (CFI) have shown improved flow detection sensitivity in the presence of tissue motion. Nevertheless, their practical adoption in clinical use is not straightforward due to the high computational cost of solving eigendecompositions. Here, we provide a pedagogical description of how a real-time computing framework for eigen-based clutter filtering can be developed through a single-instruction, multiple-data (SIMD) computing approach that can be implemented on a graphics processing unit (GPU). Emphasis is placed on the single-ensemble-based eigen-filtering approach (Hankel-SVD), since it is algorithmically compatible with GPU-based SIMD computing. The key algebraic principles and the corresponding SIMD algorithm are explained, and annotations on how such an algorithm can be rationally implemented on the GPU are presented. Real-time efficacy of our framework was experimentally investigated on a single GPU device (GTX Titan X), and the computing throughput for varying scan depths and slow-time ensemble lengths was studied. Using our eigen-processing framework, real-time video-range throughput (24 fps) can be attained for CFI frames with full view in the azimuth direction (128 scanlines), up to a scan depth of 5 cm (λ pixel axial spacing) for a slow-time ensemble length of 16 samples. The corresponding CFI image frames, with respect to the ones derived from non-adaptive polynomial regression clutter filtering, yielded enhanced flow detection sensitivity in vivo, as demonstrated in a carotid imaging case example. These findings indicate that GPU-enabled eigen-based clutter filtering can improve CFI flow detection performance in real time.

  20. A novel color image compression algorithm using the human visual contrast sensitivity characteristics

    NASA Astrophysics Data System (ADS)

    Yao, Juncai; Liu, Guizhong

    2017-03-01

    In order to achieve a higher image compression ratio and improve the visual perception of the decompressed image, a novel color image compression scheme based on the contrast sensitivity characteristics of the human visual system (HVS) is proposed. In the proposed scheme, the image is first converted into the YCrCb color space and divided into sub-blocks. Afterwards, the discrete cosine transform is carried out for each sub-block, and three quantization matrices built from the contrast sensitivity characteristics of the HVS are used to quantize the frequency-spectrum coefficients of the image. The Huffman algorithm is used to encode the quantized data. The inverse process reconstructs the decompressed color image. Simulations were carried out for two color images. The results show that, at comparable compression ratios, the average structural similarity index measurement (SSIM) and peak signal-to-noise ratio (PSNR) could be increased by 2.78% and 5.48%, respectively, compared with joint photographic experts group (JPEG) compression. These results indicate that the proposed compression algorithm is feasible and effective, achieving a higher compression ratio while maintaining encoding and image quality, and can fully meet the needs of storage and transmission of color images in daily life.
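
    The DCT-and-quantize step at the core of such a scheme can be sketched as follows. The flat quantization matrix stands in for the three HVS-weighted matrices the paper builds, and the Huffman entropy-coding stage is omitted:

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix (rows index frequency)."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] *= 1.0 / np.sqrt(2)
    return C * np.sqrt(2.0 / n)

def quantize_block(block, q):
    """Forward 2D DCT of one block, then divide by a quantization matrix and round."""
    C = dct_matrix(block.shape[0])
    return np.round((C @ block @ C.T) / q)

def dequantize_block(qcoeffs, q):
    """Rescale quantized coefficients and apply the inverse 2D DCT."""
    C = dct_matrix(qcoeffs.shape[0])
    return C.T @ (qcoeffs * q) @ C

rng = np.random.default_rng(1)
block = rng.integers(0, 256, size=(8, 8)).astype(float)
q = np.full((8, 8), 16.0)   # flat stand-in for an HVS-weighted matrix
rec = dequantize_block(quantize_block(block, q), q)
```

Coarser entries in `q` discard more of the corresponding frequency content; an HVS-weighted matrix quantizes coarsely where contrast sensitivity is low.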

  1. Natural-color and color-infrared image mosaics of the Colorado River corridor in Arizona derived from the May 2009 airborne image collection

    USGS Publications Warehouse

    Davis, Philip A.

    2013-01-01

    The Grand Canyon Monitoring and Research Center (GCMRC) of the U.S. Geological Survey (USGS) periodically collects airborne image data for the Colorado River corridor within Arizona (fig. 1) to allow scientists to study the impacts of Glen Canyon Dam water release on the corridor’s natural and cultural resources. These data are collected from just above Glen Canyon Dam (in Lake Powell) down to the entrance of Lake Mead, for a total distance of 450 kilometers (km) and within a 500-meter (m) swath centered on the river’s mainstem and its seven main tributaries (fig. 1). The most recent airborne data collection in 2009 acquired image data in four wavelength bands (blue, green, red, and near infrared) at a spatial resolution of 20 centimeters (cm). The image collection used the latest model of the Leica ADS40 airborne digital sensor (the SH52), which uses a single optic for all four bands and collects and stores band radiance in 12-bits. Davis (2012) reported on the performance of the SH52 sensor and on the processing steps required to produce the nearly flawless four-band image mosaic (sectioned into map tiles) for the river corridor. The final image mosaic has a total of only 3 km of surface defects in addition to some areas of cloud shadow because of persistent inclement weather during data collection. The 2009 four-band image mosaic is perhaps the best image dataset that exists for the entire Arizona part of the Colorado River. Some analyses of these image mosaics do not require the full 12-bit dynamic range or all four bands of the calibrated image database, in which atmospheric scattering (or haze) had not been removed from the four bands. To provide scientists and the general public with image products that are more useful for visual interpretation, the 12-bit image data were converted to 8-bit natural-color and color-infrared images, which also removed atmospheric scattering within each wavelength-band image. The conversion required an evaluation of the
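
    A sketch of a 12-bit-to-8-bit conversion with a simple haze correction: percentile-based dark-object subtraction is used here as an illustrative stand-in, since the report's exact evaluation and correction procedure are not reproduced above.

```python
import numpy as np

def to_8bit_dehazed(band12, low_pct=1.0, high_pct=99.0):
    """Convert a 12-bit radiance band to 8 bits, subtracting a dark-object
    (low-percentile) value as a simple haze correction and stretching the
    remaining range to 0..255."""
    lo, hi = np.percentile(band12, [low_pct, high_pct])
    scaled = (band12.astype(float) - lo) / max(hi - lo, 1.0)
    return (np.clip(scaled, 0.0, 1.0) * 255.0).astype(np.uint8)

# synthetic 12-bit band whose dark floor (haze) sits well above zero
rng = np.random.default_rng(2)
band = rng.integers(300, 4096, size=(32, 32))
img8 = to_8bit_dehazed(band)
```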

  2. Adaptation Aftereffects in the Perception of Radiological Images

    PubMed Central

    Kompaniez, Elysse; Abbey, Craig K.; Boone, John M.; Webster, Michael A.

    2013-01-01

    Radiologists must classify and interpret medical images on the basis of visual inspection. We examined how the perception of radiological scans might be affected by common processes of adaptation in the visual system. Adaptation selectively adjusts sensitivity to the properties of the stimulus in current view, inducing an aftereffect in the appearance of stimuli viewed subsequently. These perceptual changes have been found to affect many visual attributes, but whether they are relevant to medical image perception is not well understood. To examine this we tested whether aftereffects could be generated by the characteristic spatial structure of radiological scans, and whether this could bias their appearance along dimensions that are routinely used to classify them. Measurements were focused on the effects of adaptation to images of normal mammograms, and were tested in observers who were not radiologists. Tissue density in mammograms is evaluated visually and ranges from "dense" to "fatty." Arrays of images varying in intermediate levels between these categories were created by blending dense and fatty images with different weights. Observers first adapted by viewing image samples of dense or fatty tissue, and then judged the appearance of the intermediate images by using a texture matching task. This revealed pronounced perceptual aftereffects – prior exposure to dense images caused an intermediate image to appear more fatty and vice versa. Moreover, the appearance of the adapting images themselves changed with prolonged viewing, so that they became less distinctive as textures. These aftereffects could not be accounted for by the contrast differences or power spectra of the images, and instead tended to follow from the phase spectrum. Our results suggest that observers can selectively adapt to the properties of radiological images, and that this selectivity could strongly impact the perceived textural characteristics of the images. PMID:24146833

  3. [Study of color blood image segmentation based on two-stage-improved FCM algorithm].

    PubMed

    Wang, Bin; Chen, Huaiqing; Huang, Hua; Rao, Jie

    2006-04-01

    This paper introduces a new method for color blood cell image segmentation based on the FCM algorithm. By transforming the original blood microscopic image into an indexed image and operating on its colormap, a fuzzy approach avoids direct clustering of image pixel values, enormously compressing the quantity of data to be processed and analyzed. In accordance with the inherent features of color blood cell images, the segmentation process is divided into two stages: (1) confirming the number of clusters and the initial cluster centers; (2) altering the distance measure by means of a distance-weighting matrix in order to improve clustering accuracy. In this way, the difficult convergence of the FCM algorithm is remedied, the number of iterations to convergence is reduced, the execution time of the algorithm is decreased, and correct segmentation of the components of the color blood cell image is achieved.
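
    A minimal NumPy sketch of the baseline FCM iteration underlying the two-stage method (the paper's improvements, namely chosen initial centers and a distance-weighting matrix, are omitted; the L*a*b*-like test data are illustrative):

```python
import numpy as np

def fcm(X, k, m=2.0, iters=50, seed=0):
    """Fuzzy C-means: returns (centers, membership matrix U of shape n x k)."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(k), size=len(X))        # random fuzzy memberships
    for _ in range(iters):
        W = U ** m                                     # fuzzified weights
        centers = (W.T @ X) / W.sum(axis=0)[:, None]   # weighted cluster means
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-9
        inv = d ** (-2.0 / (m - 1.0))                  # standard FCM update
        U = inv / inv.sum(axis=1, keepdims=True)
    return centers, U

# two well-separated color blobs in L*a*b*-like coordinates
rng = np.random.default_rng(3)
A = rng.normal([60, 20, 10], 1.0, size=(50, 3))
B = rng.normal([30, -15, 5], 1.0, size=(50, 3))
centers, U = fcm(np.vstack([A, B]), k=2)
labels = U.argmax(axis=1)   # hard labels from the fuzzy memberships
```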

  4. Color image zero-watermarking based on SVD and visual cryptography in DWT domain

    NASA Astrophysics Data System (ADS)

    Liu, Xilin; Chen, Beijing; Coatrieux, Gouenou; Shu, Huazhong

    2017-02-01

    This paper presents a novel robust color image zero-watermarking scheme based on SVD and visual cryptography. We first generate an image feature from the SVD of image blocks and then employ a visual secret sharing scheme to construct the ownership share from the watermark and the image feature. The low-frequency component of a one-level discrete wavelet transform of the color image is partitioned into blocks, and the feature generated from the first singular value of each block is used to construct the master share. When an ownership debate occurs, the ownership share is used to extract the watermark. Experimental results show the better performance of the proposed watermarking system, in terms of robustness to various attacks including noise, filtering, and JPEG compression, than other visual cryptography-based color image watermarking algorithms.
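
    The master-share construction described above can be sketched as follows: a one-level Haar LL band is partitioned into blocks, and each block's first singular value is compared with the mean over all blocks to produce one share bit. The Haar filter, block size, and thresholding rule are assumptions standing in for the paper's exact feature:

```python
import numpy as np

def haar_ll(img):
    """One-level Haar low-frequency (LL) band."""
    return (img[0::2, 0::2] + img[0::2, 1::2] +
            img[1::2, 0::2] + img[1::2, 1::2]) / 2.0

def master_share(gray, block=8):
    """Binary master share: each block's largest singular value is
    compared with the mean over all blocks (assumed thresholding rule)."""
    h, w = gray.shape
    s1 = []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            s1.append(np.linalg.svd(gray[i:i + block, j:j + block],
                                    compute_uv=False)[0])
    s1 = np.array(s1)
    return (s1 >= s1.mean()).astype(np.uint8)

rng = np.random.default_rng(4)
img = rng.random((64, 64))
share = master_share(haar_ll(img))
# zero-watermarking robustness: the share should barely change under mild noise
share_noisy = master_share(haar_ll(img + rng.normal(0, 0.01, img.shape)))
```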

  5. Method of separating scanned maps into arbitrary colorants using filter images and logical operators

    NASA Astrophysics Data System (ADS)

    Fryer, Patrick D.; Johnson, Tony

    1999-12-01

    This paper describes a process for separating a map, originally printed using an unknown ink specification, into its component colors before it is reprinted using a known ink specification. The methodology is based on two earlier papers, by Kanamori and Kotera (1991) and Harrington (1992), in which the use of logical operators in color processing was explored. A detailed analysis of the scanned map identified primary, secondary, and transition colors. Filter images containing pixels taken from across the scanned image were developed to describe the variation of color found within each of these color groups. The maximum and minimum values of hue, lightness, and chroma were then used to derive logical operators and true/false statements which, when applied to L*a*b* pixel arrays, separate the scanned map into its primary color components. This technique was refined to include secondary and transition colors. By combining true/false statements it was possible to separate more specific areas within the scanned map. The method was used to reproduce the map using the known ink specification, with a ΔE value between the known and unknown ink specifications ranging from 2.1 (yellow) to 11.9 (black). It was also used to change geographic features represented by each color component through the addition and deletion of color detail.
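
    The true/false statements derived from hue, lightness, and chroma bounds amount to an AND of six per-pixel comparisons; a toy sketch (the planes and bounds below are illustrative, not the paper's measured ranges):

```python
import numpy as np

def in_range_mask(h, l, c, lo, hi):
    """True where (hue, lightness, chroma) all fall inside [lo, hi] bounds:
    the AND of six per-pixel true/false tests, as in the filter-image approach."""
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    return ((h >= lo[0]) & (h <= hi[0]) &
            (l >= lo[1]) & (l <= hi[1]) &
            (c >= lo[2]) & (c <= hi[2]))

# toy 2x2 "map": hue, lightness, and chroma planes
h = np.array([[30.0, 210.0], [35.0, 200.0]])
l = np.array([[60.0, 40.0], [55.0, 45.0]])
c = np.array([[50.0, 30.0], [48.0, 28.0]])
yellowish = in_range_mask(h, l, c, lo=(25, 50, 40), hi=(40, 65, 55))
```

Masks for different color groups can then be combined with `&`, `|`, and `~` to isolate more specific areas, as the abstract describes.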

  6. A pixel-based color image segmentation using support vector machine and fuzzy C-means.

    PubMed

    Wang, Xiang-Yang; Zhang, Xian-Jin; Yang, Hong-Ying; Bu, Juan

    2012-09-01

    Image segmentation is an important tool in image processing and can serve as an efficient front end to sophisticated algorithms, thereby simplifying subsequent processing. In this paper, we present a pixel-based color image segmentation using a Support Vector Machine (SVM) and Fuzzy C-Means (FCM). First, the pixel-level color and texture features of the image, which are used as input to the SVM model (classifier), are extracted via a local spatial similarity measure model and Steerable filters. Then, the SVM model (classifier) is trained using FCM with the extracted pixel-level features. Finally, the color image is segmented with the trained SVM model (classifier). This segmentation can take full advantage not only of the local information of the color image but also of the ability of the SVM classifier. Experimental evidence shows that the proposed method is computationally efficient and effective, decreasing the time and increasing the quality of color image segmentation in comparison with state-of-the-art segmentation methods recently proposed in the literature.
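
    A sketch of the train-on-cluster-labels idea: pixel features are classified by a linear decision rule fit by least squares, used here as a lightweight stand-in for the SVM, with labels as if produced by FCM. The features, cluster positions, and labels are synthetic:

```python
import numpy as np

def train_linear(X, y):
    """Least-squares linear classifier (stand-in for the SVM in the abstract)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])        # append bias column
    w, *_ = np.linalg.lstsq(Xb, 2.0 * y - 1.0, rcond=None)
    return w

def predict_linear(w, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return (Xb @ w > 0).astype(int)

# pixel features (e.g. color + texture) with labels as if produced by FCM
rng = np.random.default_rng(5)
X0 = rng.normal([0.2, 0.3], 0.05, size=(100, 2))
X1 = rng.normal([0.7, 0.6], 0.05, size=(100, 2))
X = np.vstack([X0, X1])
y = np.r_[np.zeros(100), np.ones(100)].astype(int)
w = train_linear(X, y)
acc = (predict_linear(w, X) == y).mean()
```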

  7. Seed viability detection using computerized false-color radiographic image enhancement

    NASA Technical Reports Server (NTRS)

    Vozzo, J. A.; Marko, Michael

    1994-01-01

    Seed radiographs are divided into density zones which are related to seed germination. The seeds which germinate have densities relating to false-color red. In turn, a seed sorter may be designed which rejects those seeds not having sufficient red to activate a gate along a moving belt containing the seed source. This results in separating only seeds with the preselected densities representing biological viability leading to germination. These selected seeds command a higher market value. Actual false-coloring is not required for a computer to distinguish the significant gray-zone range. This range can be predetermined and screened without the necessity of red imaging. Applying false-color enhancement is a means of emphasizing differences in densities of gray within any subject from photographic, radiographic, or video imaging. Within the 0-255 range of gray levels, colors can be assigned to any single gray level or group of gray levels. Densitometric values then become easily recognized colors which relate to the image density. Choosing a color to identify any given density allows separation by morphology or composition (form or function). Additionally, the relative areas of each color are readily available for determining the distribution of that density by comparison with other densities within the image.
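
    Assigning density zones by gray-level range and measuring their relative areas can be sketched directly (the zone edges below are illustrative; the "red" zone of a real sorter would be a germination-calibrated range):

```python
import numpy as np

def density_zones(gray, edges):
    """Assign each pixel a zone index by gray-level range and report the
    relative area of each zone (a gate could trigger on one zone's area)."""
    zones = np.digitize(gray, edges)                       # zone index per pixel
    areas = np.bincount(zones.ravel(),
                        minlength=len(edges) + 1) / gray.size
    return zones, areas

rng = np.random.default_rng(6)
gray = rng.integers(0, 256, size=(50, 50))                 # stand-in radiograph
zones, areas = density_zones(gray, edges=[85, 170])        # three density zones
```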

  8. Dual-tree complex wavelet transform applied on color descriptors for remote-sensed images retrieval

    NASA Astrophysics Data System (ADS)

    Sebai, Houria; Kourgli, Assia; Serir, Amina

    2015-01-01

    This paper highlights color component features that improve high-resolution satellite (HRS) image retrieval. Color component correlation across image lines and columns is used to define a revised color space, designed to take into account both color and neighborhood information simultaneously. From this space, color descriptors, namely the rotation-invariant uniform local binary pattern, the histogram of gradients, and a modified version of local variance, are derived through the dual-tree complex wavelet transform (DT-CWT). A new color descriptor called smoothed local variance (SLV), using an edge-preserving smoothing filter, is introduced. It is intended to offer an efficient way to represent texture/structure information using a rotation-invariant descriptor. This descriptor takes advantage of the DT-CWT representation to enhance the retrieval performance for HRS images. We report an evaluation of the SLV descriptor associated with the new color space using different similarity distances in our content-based image retrieval scheme. We also perform comparisons with some standard features. Experimental results show that the SLV descriptor allied to the DT-CWT representation outperforms the other approaches.
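
    A sketch of a local-variance descriptor of the kind described above. A plain box filter stands in for the paper's edge-preserving smoothing filter, and the DT-CWT stage is omitted:

```python
import numpy as np

def box_smooth(img, r=1):
    """Simple box smoothing (edge-unaware stand-in for the paper's
    edge-preserving filter)."""
    p = np.pad(img, r, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += p[r + dy:r + dy + img.shape[0],
                     r + dx:r + dx + img.shape[1]]
    return out / (2 * r + 1) ** 2

def smoothed_local_variance(img, r=1):
    """Local variance E[x^2] - E[x]^2 computed from smoothed moments."""
    m = box_smooth(img, r)
    m2 = box_smooth(img * img, r)
    return np.clip(m2 - m * m, 0.0, None)

rng = np.random.default_rng(7)
flat = np.ones((16, 16))          # textureless region: variance ~ 0
textured = rng.random((16, 16))   # noisy texture: variance clearly positive
```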

  9. Compressive spectral polarization imaging by a pixelized polarizer and colored patterned detector.

    PubMed

    Fu, Chen; Arguello, Henry; Sadler, Brian M; Arce, Gonzalo R

    2015-11-01

    A compressive spectral and polarization imager based on a pixelized polarizer and colored patterned detector is presented. The proposed imager captures several dispersed compressive projections with spectral and polarization coding. Stokes parameter images at several wavelengths are reconstructed directly from 2D projections. Employing a pixelized polarizer and colored patterned detector enables compressive sensing over spatial, spectral, and polarization domains, reducing the total number of measurements. Compressive sensing codes are specially designed to enhance the peak signal-to-noise ratio in the reconstructed images. Experiments validate the architecture and reconstruction algorithms.
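
    Such imagers reconstruct Stokes-parameter images; the relation between polarizer measurements and the linear Stokes parameters can be sketched as follows (the four-orientation measurement scheme is a textbook convention, not necessarily this paper's coding):

```python
import numpy as np

def stokes_from_polarizer(I0, I45, I90, I135):
    """Linear Stokes parameters from intensities behind linear polarizers
    at 0/45/90/135 degrees."""
    S0 = 0.5 * (I0 + I45 + I90 + I135)   # total intensity
    S1 = I0 - I90                         # horizontal vs vertical preference
    S2 = I45 - I135                       # diagonal preference
    return S0, S1, S2

# fully horizontally polarized light of unit intensity through ideal
# polarizers (Malus' law: I(theta) = cos^2 theta)
angles = np.deg2rad([0, 45, 90, 135])
I = np.cos(angles) ** 2
S0, S1, S2 = stokes_from_polarizer(*I)
```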

  10. Double color image encryption using iterative phase retrieval algorithm in quaternion gyrator domain.

    PubMed

    Shao, Zhuhong; Shu, Huazhong; Wu, Jiasong; Dong, Zhifang; Coatrieux, Gouenou; Coatrieux, Jean Louis

    2014-03-10

    This paper describes a novel algorithm to encrypt double color images into a single indistinguishable image in the quaternion gyrator domain. By using an iterative phase retrieval algorithm, the phase masks used for encryption are obtained. Subsequently, the encrypted image is generated via cascaded quaternion gyrator transforms with different rotation angles. The parameters of the quaternion gyrator transforms and the phases serve as encryption keys. By knowing these keys, the original color images can be fully recovered. Numerical simulations have demonstrated the validity of the proposed encryption system as well as its robustness against loss of data and additive Gaussian noise.
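
    The flavor of phase-mask encryption can be sketched with classical double random phase encoding in the Fourier domain; this is a simplified stand-in, since the paper uses cascaded quaternion gyrator transforms and retrieves its masks iteratively:

```python
import numpy as np

def drpe_encrypt(img, phase1, phase2):
    """Double random phase encoding (classical Fourier-domain sketch)."""
    return np.fft.ifft2(np.fft.fft2(img * np.exp(1j * phase1))
                        * np.exp(1j * phase2))

def drpe_decrypt(enc, phase2):
    """Undo the Fourier-plane mask; the magnitude recovers the real image."""
    return np.abs(np.fft.ifft2(np.fft.fft2(enc) * np.exp(-1j * phase2)))

rng = np.random.default_rng(8)
img = rng.random((16, 16))                       # stand-in image, values in [0, 1)
p1 = rng.uniform(0, 2 * np.pi, img.shape)        # input-plane phase key
p2 = rng.uniform(0, 2 * np.pi, img.shape)        # Fourier-plane phase key
enc = drpe_encrypt(img, p1, p2)
dec = drpe_decrypt(enc, p2)                      # correct key
dec_bad = drpe_decrypt(enc, rng.uniform(0, 2 * np.pi, img.shape))  # wrong key
```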

  11. A simple and efficient algorithm for connected component labeling in color images

    NASA Astrophysics Data System (ADS)

    Celebi, M. Emre

    2012-03-01

    Connected component labeling is a fundamental operation in binary image processing. A plethora of algorithms have been proposed for this low-level operation with the early ones dating back to the 1960s. However, very few of these algorithms were designed to handle color images. In this paper, we present a simple algorithm for labeling connected components in color images using an approximately linear-time seed fill algorithm. Experiments on a large set of photographic and synthetic images demonstrate that the proposed algorithm provides fast and accurate labeling without requiring excessive stack space.
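
    A seed-fill labeling pass of the kind described can be sketched in pure Python with a BFS queue; exact color equality is assumed as the connectivity test, which a tolerance would replace for photographic images:

```python
from collections import deque

def label_color_components(img):
    """4-connected component labeling of a color image via BFS seed fill.

    img is a list of rows of (r, g, b) tuples; pixels connect only if
    their colors are exactly equal. Returns (label image, component count).
    """
    h, w = len(img), len(img[0])
    labels = [[0] * w for _ in range(h)]
    count = 0
    for sy in range(h):
        for sx in range(w):
            if labels[sy][sx]:
                continue                      # already labeled
            count += 1
            labels[sy][sx] = count
            q = deque([(sy, sx)])
            while q:                          # flood fill from the seed
                y, x = q.popleft()
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (0 <= ny < h and 0 <= nx < w and not labels[ny][nx]
                            and img[ny][nx] == img[y][x]):
                        labels[ny][nx] = count
                        q.append((ny, nx))
    return labels, count

R, G = (255, 0, 0), (0, 255, 0)
img = [[R, R, G],
       [G, R, G],
       [G, G, G]]
labels, count = label_color_components(img)
```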

  12. A new fringeline-tracking approach for color Doppler ultrasound imaging phase unwrapping

    NASA Astrophysics Data System (ADS)

    Saad, Ashraf A.; Shapiro, Linda G.

    2008-03-01

    Color Doppler ultrasound imaging is a powerful non-invasive diagnostic tool for many clinical applications that involve examining the anatomy and hemodynamics of human blood vessels. These clinical applications include cardio-vascular diseases, obstetrics, and abdominal diseases. Since its commercial introduction in the early eighties, color Doppler ultrasound imaging has been used mainly as a qualitative tool, with very few attempts to quantify its images. Many imaging artifacts hinder the quantification of color Doppler images, the most important of which is the aliasing artifact that distorts the blood flow velocities measured by the color Doppler technique. In this work, we address the color Doppler aliasing problem and present a methodology for recovering the true flow velocities from the aliased ones. The problem is formulated as a 2D phase-unwrapping problem, which is a well-defined problem with solid theoretical foundations in other imaging domains, including synthetic aperture radar and magnetic resonance imaging. This paper documents the need for a phase-unwrapping algorithm for use in color Doppler ultrasound image analysis. It describes a new phase-unwrapping algorithm that relies on recently developed cutline detection approaches. The algorithm is novel in its use of heuristic information provided by the ultrasound imaging modality to guide the phase-unwrapping process. Experiments have been performed on both in-vitro flow-phantom data and in-vivo human blood flow data. Both data types were acquired under a controlled acquisition protocol developed to minimize the distortion of the color Doppler data and hence to simplify the phase-unwrapping task. In addition to the qualitative assessment of the results, a quantitative assessment approach was developed to measure the success of the results. The results of our new algorithm have been compared on ultrasound data to those from other well-known algorithms, and it outperforms all of them.
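    The connection between aliased velocities and phase unwrapping can be illustrated in one dimension. This is a toy demonstration (not the paper's cutline algorithm): velocities wrapped into the Nyquist interval map onto phases in [-π, π), and `numpy.unwrap` recovers the true profile when it varies smoothly along the scan line. The Nyquist velocity value is an assumption for the example.

```python
# Toy model of Doppler aliasing as 1D phase wrapping. Velocities that exceed
# the Nyquist limit v_nyq wrap around; unwrapping the equivalent phase signal
# recovers the true profile.
import numpy as np

v_nyq = 0.5                                      # assumed Nyquist velocity (m/s)
true_v = np.linspace(0.0, 1.8, 50)               # true flow exceeds v_nyq
phase = true_v / v_nyq * np.pi                   # map velocity to phase
wrapped = (phase + np.pi) % (2 * np.pi) - np.pi  # aliased measurement
recovered = np.unwrap(wrapped) / np.pi * v_nyq   # unwrap, map back to velocity
```

The 2D case is much harder because noise and flow boundaries create inconsistent wrap counts, which is why the paper needs cutline detection rather than a simple scan-line unwrap.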

  13. Hyperspectral imaging using a color camera and its application for pathogen detection

    NASA Astrophysics Data System (ADS)

    Yoon, Seung-Chul; Shin, Tae-Sung; Heitschmidt, Gerald W.; Lawrence, Kurt C.; Park, Bosoon; Gamble, Gary

    2015-02-01

    This paper reports the results of a feasibility study for the development of a hyperspectral image recovery (reconstruction) technique using a RGB color camera and regression analysis in order to detect and classify colonies of foodborne pathogens. The target bacterial pathogens were the six representative non-O157 Shiga-toxin producing Escherichia coli (STEC) serogroups (O26, O45, O103, O111, O121, and O145) grown in Petri dishes of Rainbow agar. The purpose of the feasibility study was to evaluate whether a DSLR camera (Nikon D700) could be used to predict hyperspectral images in the wavelength range from 400 to 1,000 nm and even to predict the types of pathogens using a hyperspectral STEC classification algorithm that was previously developed. Unlike many other studies using color charts with known and noise-free spectra for training reconstruction models, this work used hyperspectral and color images, separately measured by a hyperspectral imaging spectrometer and the DSLR color camera. The color images were calibrated (i.e. normalized) to relative reflectance, subsampled and spatially registered to match with counterpart pixels in hyperspectral images that were also calibrated to relative reflectance. Polynomial multivariate least-squares regression (PMLR) was previously developed with simulated color images. In this study, partial least squares regression (PLSR) was also evaluated as a spectral recovery technique to minimize multicollinearity and overfitting. The two spectral recovery models (PMLR and PLSR) and their parameters were evaluated by cross-validation. The QR decomposition was used to find a numerically more stable solution of the regression equation. The preliminary results showed that PLSR was more effective especially with higher order polynomial regressions than PMLR. The best classification accuracy measured with an independent test set was about 90%. 
The results suggest the potential of cost-effective color imaging using hyperspectral image recovery for pathogen detection.
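    The regression step can be sketched compactly. This is a synthetic illustration of polynomial multivariate least-squares regression with a QR-based solve, as named in the abstract; the feature set (second-order RGB terms), band count, and training data are all hypothetical stand-ins, not the study's actual configuration.

```python
# Sketch of polynomial regression for spectral recovery: expand RGB into
# second-order polynomial terms and map them to spectra by least squares,
# solved through a QR decomposition for numerical stability.
# Data are synthetic; real training pairs camera RGB with measured spectra.
import numpy as np

rng = np.random.default_rng(0)

def poly_features(rgb):
    """Second-order polynomial expansion of RGB triplets (hypothetical basis)."""
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    return np.column_stack([np.ones_like(r), r, g, b,
                            r * r, g * g, b * b, r * g, r * b, g * b])

rgb_train = rng.random((200, 3))
true_w = rng.random((10, 31))                 # hypothetical 31-band spectra model
spectra_train = poly_features(rgb_train) @ true_w

Q, R = np.linalg.qr(poly_features(rgb_train))   # QR: numerically stable solve
W = np.linalg.solve(R, Q.T @ spectra_train)     # regression coefficients

rgb_test = rng.random((5, 3))
predicted = poly_features(rgb_test) @ true_w    # ground truth for the test set
recovered = poly_features(rgb_test) @ W         # spectra recovered from RGB
```

PLSR, which the study found more effective at higher polynomial orders, replaces the direct solve with projection onto latent components to control multicollinearity.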

  14. High fidelity adaptive vector quantization at very low bit rates for progressive transmission of radiographic images

    NASA Astrophysics Data System (ADS)

    Mitra, Sunanda; Yang, Shu Y.

    1999-01-01

    An adaptive vector quantizer (VQ) using a clustering technique known as adaptive fuzzy leader clustering (AFLC), similar in concept to deterministic annealing for VQ codebook design, has been developed. This vector quantizer, AFLC-VQ, has been designed to vector quantize wavelet-decomposed subimages with optimal bit allocation. The high-resolution subimages at each level have been statistically analyzed to conform to generalized Gaussian probability distributions by selecting the optimal number of filter taps. The adaptive characteristics of AFLC-VQ result from AFLC, an algorithm that uses self-organizing neural networks with fuzzy membership values of the input samples for upgrading the cluster centroids based on well-known optimization criteria. By generating codebooks containing codewords of varying bits, AFLC-VQ is capable of compressing large color/monochrome medical images at extremely low bit rates (0.1 bpp and less) while yielding high-fidelity reconstructed images. The quality of the reconstructed images formed by AFLC-VQ has been compared with JPEG and EZW, the standard and the well-known wavelet-based compression technique (using scalar quantization), respectively, in terms of statistical performance criteria as well as visual perception. AFLC-VQ exhibits much better performance than the above techniques. JPEG and EZW were chosen as comparative benchmarks since these have been used in radiographic image compression. The superior performance of AFLC-VQ over LBG-VQ has been reported in earlier papers.
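    The quantize/reconstruct mechanics of any VQ can be shown with a minimal codebook trainer. This sketch uses plain k-means as a stand-in for AFLC (which is a fuzzy, self-organizing scheme); the deterministic initialization and toy data are choices made only to keep the example reproducible.

```python
# Minimal vector quantizer: train a k-means codebook on feature vectors,
# transmit only codeword indices, reconstruct from the codebook.
# (k-means stands in for the paper's AFLC clustering.)
import numpy as np

def train_codebook(vectors, k, iters=20):
    # deterministic init for the sketch: evenly spaced training vectors
    codebook = vectors[np.linspace(0, len(vectors) - 1, k).astype(int)].copy()
    for _ in range(iters):
        # assign each vector to its nearest codeword
        d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        idx = d.argmin(1)
        for j in range(k):                    # update centroids
            if np.any(idx == j):
                codebook[j] = vectors[idx == j].mean(0)
    return codebook

def quantize(vectors, codebook):
    d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d.argmin(1)                        # indices are the compressed stream

# two well-separated clusters of 4-D "wavelet subband" vectors
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0, 0.1, (50, 4)), rng.normal(5, 0.1, (50, 4))])
cb = train_codebook(data, k=2)
codes = quantize(data, cb)
reconstructed = cb[codes]                     # decoder-side reconstruction
```

The bit rate is set by the index width (here 1 bit per vector); AFLC-VQ's variable-length codewords and per-subband codebooks are what push the rate down to 0.1 bpp.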

  15. Multichannel adaptive signal detection in space-time colored compound-gaussian autoregressive processes

    NASA Astrophysics Data System (ADS)

    Xu, Qi; Ma, Xiaochuan; Yan, Shefeng; Hao, Chengpeng; Shi, Bo

    2012-12-01

    In this article, we consider the problem of adaptive detection of a multichannel signal in the presence of spatially and temporally colored compound-Gaussian disturbance. By modeling the disturbance as a multichannel autoregressive (AR) process, we first derive a parametric generalized likelihood ratio test against compound-Gaussian disturbance (CG-PGLRT), assuming that the true multichannel AR parameters are perfectly known. For the two-step GLRT design criterion, we combine the multichannel AR parameter estimation algorithm with three covariance matrix estimation strategies for the compound-Gaussian environment, then obtain three adaptive CG-PGLRT detectors by replacing the ideal multichannel AR parameters with their estimates. Owing to treating the random texture components of the disturbance as deterministic unknown parameters, all of the proposed detectors require no a priori knowledge about the disturbance statistics. The performance assessments are conducted by means of Monte Carlo trials. We focus on the issues of constant false alarm rate (CFAR) behavior and detection and false alarm probabilities. Numerical results show that the proposed adaptive CG-PGLRT detectors dramatically ease the training and computational burden compared to the generalized likelihood ratio test-linear quadratic (GLRT-LQ), which is referred to as a covariance-matrix-based detector and relies more heavily on training.

  16. An adaptive optics biomicroscope for mouse retinal imaging

    NASA Astrophysics Data System (ADS)

    Biss, David P.; Webb, Robert H.; Zhou, Yaopeng; Bifano, Thomas G.; Zamiri, Parisa; Lin, Charles P.

    2007-02-01

    In studying retinal disease on a microscopic level, in vivo imaging has allowed researchers to track disease progression in a single animal over time without sacrificing large numbers of animals for statistical studies. Historically, a drawback of in vivo retinal imaging, when compared to ex vivo imaging, is decreased image resolution due to aberrations present in the mouse eye. Adaptive optics has successfully corrected phase aberrations introduced by the eye in ophthalmic imaging in humans. We are using adaptive optics to correct for aberrations introduced by the mouse eye in hopes of achieving cellular-resolution retinal images of mice in vivo. In addition to using a wavefront sensor to drive the adaptive optic element, we explore using image data to correct for wavefront aberrations introduced by the mouse eye. Image data, in the form of the confocal detection pinhole intensity, are used as the feedback mechanism to control the MEMS deformable mirror in the adaptive optics system. Correction results for both wavefront-sensing and sensor-less adaptive optics systems are presented.
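    The sensor-less feedback loop reduces to an optimization problem: treat the pinhole intensity as a figure of merit and search over actuator values. The sketch below uses a coordinate-ascent search against a toy quadratic "intensity" model; the model, step schedule, and actuator count are all illustrative assumptions, not the system's actual control law.

```python
# Schematic sensor-less AO: tune deformable-mirror actuators by coordinate
# ascent on the confocal pinhole intensity. The exponential intensity model
# below is a toy stand-in for the real optical response.
import numpy as np

optimal = np.array([0.3, -0.2, 0.5])           # unknown best actuator settings

def pinhole_intensity(act):
    # peak intensity falls off with residual wavefront error (toy model)
    return np.exp(-np.sum((act - optimal) ** 2))

act = np.zeros(3)                              # start from a flat mirror
step = 0.2
for _ in range(40):                            # coordinate-ascent iterations
    for i in range(len(act)):
        for delta in (step, -step):
            trial = act.copy()
            trial[i] += delta
            if pinhole_intensity(trial) > pinhole_intensity(act):
                act = trial                    # keep any improving move
    step *= 0.8                                # shrink the search step
```

The appeal for mouse imaging is that no Shack-Hartmann measurement is needed; the cost is the many intensity evaluations the search consumes.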

  17. Adaptive SVD-Based Digital Image Watermarking

    NASA Astrophysics Data System (ADS)

    Shirvanian, Maliheh; Torkamani Azar, Farah

    Digital data utilization, along with the increased popularity of the Internet, has facilitated information sharing and distribution. However, such applications have also raised concerns about copyright issues and unauthorized modification and distribution of digital data. Digital watermarking techniques, proposed to solve these problems, hide some information in digital media and extract it whenever needed to indicate the data owner. In this paper, a new method of image watermarking based on singular value decomposition (SVD) of images is proposed, which considers the human visual system prior to embedding the watermark by segmenting the original image into several blocks of different sizes, with more density in the edges of the image. In this way, the original image quality is preserved in the watermarked image. Additional advantages of the proposed technique are a large watermark embedding capacity and robustness against different types of image manipulation techniques.
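    The core SVD embedding step is small enough to show directly. This is a bare-bones global sketch, not the paper's block-adaptive variant: the watermark perturbs the singular values, and extraction here assumes the detector knows the original singular values and factors (a non-blind scheme).

```python
# Minimal SVD watermarking: embed a watermark vector into the singular
# values of an image; extract it with knowledge of U, Vt, and the original S.
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((64, 64))                   # stand-in host image
watermark = rng.random(64)
alpha = 0.05                                   # embedding strength

U, S, Vt = np.linalg.svd(image)
S_marked = S + alpha * watermark               # embed in the singular values
watermarked = (U * S_marked) @ Vt              # equivalent to U @ diag(S_marked) @ Vt

# extraction (non-blind): project back onto the SVD basis
S_rec = np.diag(U.T @ watermarked @ Vt.T)
extracted = (S_rec - S) / alpha
```

Robustness comes from the fact that small image manipulations perturb singular values only slightly; the block segmentation in the paper further hides the distortion where the eye is least sensitive.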

  18. Coherent Image Layout using an Adaptive Visual Vocabulary

    SciTech Connect

    Dillard, Scott E.; Henry, Michael J.; Bohn, Shawn J.; Gosink, Luke J.

    2013-03-06

    When querying a huge image database containing millions of images, the result of the query may still contain many thousands of images that need to be presented to the user. We consider the problem of arranging such a large set of images into a visually coherent layout, one that places similar images next to each other. Image similarity is determined using a bag-of-features model, and the layout is constructed from a hierarchical clustering of the image set by mapping an in-order traversal of the hierarchy tree into a space-filling curve. This layout method provides strong locality guarantees so we are able to quantitatively evaluate performance using standard image retrieval benchmarks. Performance of the bag-of-features method is best when the vocabulary is learned on the image set being clustered. Because learning a large, discriminative vocabulary is a computationally demanding task, we present a novel method for efficiently adapting a generic visual vocabulary to a particular dataset. We evaluate our clustering and vocabulary adaptation methods on a variety of image datasets and show that adapting a generic vocabulary to a particular set of images improves performance on both hierarchical clustering and image retrieval tasks.

  19. PROCEDURES FOR ACCURATE PRODUCTION OF COLOR IMAGES FROM SATELLITE OR AIRCRAFT MULTISPECTRAL DIGITAL DATA.

    USGS Publications Warehouse

    Duval, Joseph S.

    1985-01-01

    Because the display and interpretation of satellite and aircraft remote-sensing data make extensive use of color film products, accurate reproduction of the color images is important. To achieve accurate color reproduction, the exposure and chemical processing of the film must be monitored and controlled. By using a combination of sensitometry, densitometry, and transfer functions that control film response curves, all of the different steps in the making of film images can be monitored and controlled. Because a sensitometer produces a calibrated exposure, the resulting step wedge can be used to monitor the chemical processing of the film. Step wedges put on film by image recording machines provide a means of monitoring the film exposure and color balance of the machines.

  20. Fresnel domain double-phase encoding encryption of color image via ptychography

    NASA Astrophysics Data System (ADS)

    Qiao, Liang; Wang, Yali; Li, Tuo; Shi, Yishi

    2015-10-01

    In this paper, color image encryption combined with ptychography is investigated. Ptychographic imaging has the remarkable advantage of a simple optical architecture: the complex amplitude of an object can be reconstructed from just a series of diffraction intensity patterns acquired via aperture movement. The traditional technique of three-primary-color synthesis is applied for encrypting the color image. In order to reduce physical limitations, the encryption algorithm is based on the Fresnel transform domain. It is shown that the proposed optical encryption scheme recovers the encrypted color plaintext well and gains security from the introduction of ptychography, since the light probe, acting as a key, enlarges the key space. Finally, the encryption's immunity to noise and the impact of lateral probe offset on reconstruction are investigated.

  1. Using rotation for steerable needle detection in 3D color-Doppler ultrasound images.

    PubMed

    Mignon, Paul; Poignet, Philippe; Troccaz, Jocelyne

    2015-08-01

    This paper demonstrates a new way to detect needles in 3D color-Doppler volumes of biological tissues. It uses rotation to generate vibrations of a needle using an existing robotic brachytherapy system. The results of our detection for color-Doppler and B-Mode ultrasound are compared to a needle location reference given by robot odometry and robot-ultrasound calibration. Average errors between detection and reference are 5.8 mm on the needle tip for B-Mode images and 2.17 mm for color-Doppler images. These results show that color-Doppler imaging leads to more robust needle detection in noisy environments with poor needle visibility or when the needle interacts with other objects.

  2. A dual-modal retinal imaging system with adaptive optics

    PubMed Central

    Meadway, Alexander; Girkin, Christopher A.; Zhang, Yuhua

    2013-01-01

    An adaptive optics scanning laser ophthalmoscope (AO-SLO) is adapted to provide optical coherence tomography (OCT) imaging. The AO-SLO function is unchanged. The system uses the same light source, scanning optics, and adaptive optics in both imaging modes. The result is a dual-modal system that can acquire retinal images in both en face and cross-section planes at the single cell level. A new spectral shaping method is developed to reduce the large sidelobes in the coherence profile of the OCT imaging when a non-ideal source is used with a minimal introduction of noise. The technique uses a combination of two existing digital techniques. The thickness and position of the traditionally named inner segment/outer segment junction are measured from individual photoreceptors. In-vivo images of healthy and diseased human retinas are demonstrated. PMID:24514529
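    Why spectral shaping matters can be shown numerically: the OCT axial coherence profile is the Fourier transform of the source spectrum, so a flat-top (non-ideal) spectrum rings with large sidelobes, while reweighting it toward a Gaussian suppresses them. The Gaussian apodization below is a generic illustration, not the paper's specific two-technique combination; all spectral parameters are made up for the demo.

```python
# The OCT axial point-spread (coherence) function is the Fourier transform
# of the source spectrum. A flat-top spectrum gives sinc-like sidelobes;
# Gaussian reweighting (generic apodization) suppresses them.
import numpy as np

n = 1024
k = np.arange(n)
spectrum = ((k > 412) & (k < 612)).astype(float)        # non-ideal flat-top source
gauss = np.exp(-0.5 * ((k - 512) / 40.0) ** 2)          # shaping window

def sidelobe_level(spec):
    """Highest off-peak lobe of the coherence profile, relative to the peak."""
    psf = np.abs(np.fft.fftshift(np.fft.ifft(np.fft.ifftshift(spec))))
    psf /= psf.max()
    peak = np.argmax(psf)
    return psf[np.abs(np.arange(n) - peak) > 15].max()  # exclude the mainlobe

raw = sidelobe_level(spectrum)            # sinc sidelobes of the flat-top source
shaped = sidelobe_level(spectrum * gauss) # after spectral shaping
```

The trade-off the abstract alludes to is that shaping attenuates parts of the spectrum and therefore amplifies noise where the source is weak, which is why the paper emphasizes shaping "with a minimal introduction of noise."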

  3. Digital reconstructed radiography with multiple color image overlay for image-guided radiotherapy

    PubMed Central

    Yoshino, Shinichi; Miki, Kentaro; Sakata, Kozo; Nakayama, Yuko; Shibayama, Kouichi; Mori, Shinichiro

    2015-01-01

    Registration of patient anatomical structures to the reference position is a basic part of the patient set-up procedure. Registration of anatomical structures between the site of beam entrance on the patient surface and the distal target position is particularly important. Here, to improve patient positional accuracy during set-up for particle beam treatment, we propose a new visualization methodology using digitally reconstructed radiographs (DRRs), overlaid DRRs, and evaluation of overlaid DRR images in clinical cases. The overlaid method overlays two DRR images in different colors by dividing the CT image into two CT sections at the distal edge of the target along the treatment beam direction. Since our hospital uses fixed beam ports, the treatment beam angles for this study were set at 0 and 90 degrees. The DRR calculation direction was from the X-ray tube to the imaging device, and set to 180/270 degrees and 135/225 degrees, based on the installation of our X-ray imaging system. Original and overlaid DRRs were calculated using CT data for two patients, one with a parotid gland tumor and the other with prostate cancer. The original and overlaid DRR images were compared. Since the overlaid DRR image was completely separated into two regions when the DRR calculation angle was the same as the treatment beam angle, the overlaid DRR visualization technique was able to provide rich information for aiding recognition of the relationship between anatomical structures and the target position. This method will also be useful in patient set-up procedures for fixed irradiation ports. PMID:25678537
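    The overlay operation itself is simple channel assignment. The sketch below renders two DRR-like grayscale projections (random stand-ins here, where real input would be DRRs computed from the two CT sections split at the distal target edge) into different color channels of one RGB image.

```python
# Overlaying two DRRs in different colors: put the projection of the CT
# section proximal to the target edge in one channel and the distal section
# in another, so the two regions stay visually separable.
import numpy as np

proximal = np.random.default_rng(0).random((128, 128))  # stand-in near-section DRR
distal = np.random.default_rng(1).random((128, 128))    # stand-in far-section DRR

overlay = np.zeros((128, 128, 3))
overlay[..., 0] = proximal        # red channel: proximal anatomy
overlay[..., 1] = distal          # green channel: distal anatomy
```

When the DRR calculation angle matches the treatment beam angle, the two channels occupy disjoint regions, which is exactly the separation the paper reports.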

  4. Next generation high resolution adaptive optics fundus imager

    NASA Astrophysics Data System (ADS)

    Fournier, P.; Erry, G. R. G.; Otten, L. J.; Larichev, A.; Irochnikov, N.

    2005-12-01

    The spatial resolution of retinal images is limited by the presence of static and time-varying aberrations present within the eye. An updated High Resolution Adaptive Optics Fundus Imager (HRAOFI) has been built based on the development from the first prototype unit. This entirely new unit was designed and fabricated to increase opto-mechanical integration and ease-of-use through a new user interface. Improved camera systems for the Shack-Hartmann sensor and for the scene image were implemented to enhance the image quality and the frequency of the Adaptive Optics (AO) control loop. An optimized illumination system that uses specific wavelength bands was applied to increase the specificity of the images. Sample images of clinical trials of retinas, taken with and without the system, are shown. Data on the performance of this system will be presented, demonstrating the ability to calculate near diffraction-limited images.

  5. Spectrally Adaptable Compressive Sensing Imaging System

    DTIC Science & Technology

    2014-05-01

    2D coded projections. The underlying spectral 3D data cube is then recovered using compressed sensing (CS) reconstruction algorithms which assume...introduced in [?], is a remarkable imaging architecture that allows capturing spectral imaging information of a 3D cube with just a single 2D measurement of the coded and spectrally dispersed source field

  6. Bayer patterned high dynamic range image reconstruction using adaptive weighting function

    NASA Astrophysics Data System (ADS)

    Kang, Hee; Lee, Suk Ho; Song, Ki Sun; Kang, Moon Gi

    2014-12-01

    It is not easy to acquire a desired high dynamic range (HDR) image directly from a camera due to the limited dynamic range of most image sensors. Therefore, generally, a post-process called HDR image reconstruction is used, which reconstructs an HDR image from a set of differently exposed images to overcome the limited dynamic range. However, conventional HDR image reconstruction methods suffer from noise factors and ghost artifacts. This is due to the fact that the input images taken with a short exposure time contain much noise in the dark regions, which contributes to increased noise in the corresponding dark regions of the reconstructed HDR image. Furthermore, since input images are acquired at different times, the images contain different motion information, which results in ghost artifacts. In this paper, we propose an HDR image reconstruction method which reduces the impact of the noise factors and prevents ghost artifacts. To reduce the influence of the noise factors, the weighting function, which determines the contribution of a certain input image to the reconstructed HDR image, is designed to adapt to the exposure time and local motions. Furthermore, the weighting function is designed to exclude ghosting regions by considering the differences of the luminance and the chrominance values between several input images. Unlike conventional methods, which generally work on a color image processed by the image processing module (IPM), the proposed method works directly on the Bayer raw image. This allows for a linear camera response function and also improves the efficiency in hardware implementation. Experimental results show that the proposed method can reconstruct high-quality Bayer patterned HDR images while being robust against ghost artifacts and noise factors.
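    The merge underlying any weighting scheme can be sketched on linear data. This toy version uses a plain hat-function weight that discounts both near-saturated values and the dark end, not the paper's exposure- and motion-adaptive weighting; operating on linear (Bayer-like) values is what makes the simple per-exposure normalization valid.

```python
# Stripped-down HDR merge on linear sensor data: normalize each exposure by
# its (scaled) exposure time and combine with a weight that discounts both
# saturated and very dark pixels. Hat weight is a stand-in for the paper's
# adaptive weighting function.
import numpy as np

def merge_hdr(images, exposure_scales):
    acc = np.zeros_like(images[0], dtype=float)
    wsum = np.zeros_like(acc)
    for img, s in zip(images, exposure_scales):
        w = 1.0 - np.abs(img / 255.0 - 0.5) * 2   # hat weight on [0, 255]
        acc += w * (img / s)                      # per-image radiance estimate
        wsum += w
    return acc / np.maximum(wsum, 1e-8)

# simulate three exposures of the same linear radiance map (gain 64 assumed)
rng = np.random.default_rng(0)
radiance = rng.uniform(0.1, 4.0, (32, 32))
times = [0.25, 1.0, 4.0]
shots = [np.clip(radiance * t * 64.0, 0, 255) for t in times]
hdr = merge_hdr(shots, [t * 64.0 for t in times])
```

Saturated pixels get weight zero and so contribute nothing, while dark regions lean on the longer exposures; the ghost-exclusion step in the paper additionally zeroes the weight where luminance or chrominance disagrees across exposures.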

  7. FMRI-adaptation to highly-rendered color photographs of animals and manipulable artifacts during a classification task.

    PubMed

    Chouinard, Philippe A; Goodale, Melvyn A

    2012-02-01

    We used fMRI to identify brain areas that adapted to either animals or manipulable artifacts while participants classified highly-rendered color photographs into subcategories. Several key brain areas adapted more strongly to one class of objects than to the other. Namely, we observed stronger adaptation for animals in the lingual gyrus bilaterally, which is known to analyze the color of objects, and in the right frontal operculum and the anterior insular cortex bilaterally, which are known to process emotional content. In contrast, the left anterior intraparietal sulcus, which is important for configuring the hand to match the three-dimensional structure of objects during grasping, adapted more strongly to manipulable artifacts. Contrary to what a previous study found using gray-scale photographs, we did not replicate category-specific adaptation in the lateral fusiform gyrus for animals and category-specific adaptation in the medial fusiform gyrus for manipulable artifacts. Both categories of objects adapted strongly in the fusiform gyrus without any clear preference in location along its medial-lateral axis. We think that this is because the fusiform gyrus has an important role to play in color processing, and hence its responsiveness to color stimuli could be very different from its responsiveness to gray-scale photographs. Nevertheless, on the basis of what we found, we propose that the recognition and subsequent classification of animals may depend primarily on perceptual properties, such as their color, and on their emotional content, whereas other factors, such as their function, may play a greater role in classifying manipulable artifacts.

  8. Progressive transmission of pseudo-color images. Appendix 1: Item 4. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Hadenfeldt, Andrew C.

    1991-01-01

    The transmission of digital images can require considerable channel bandwidth. The cost of obtaining such a channel can be prohibitive, or the channel might simply not be available. In this case, progressive transmission (PT) can be useful. PT presents the user with a coarse initial image approximation, and then proceeds to refine it. In this way, the user tends to receive information about the content of the image sooner than if a sequential transmission method is used. PT finds application in image database browsing, teleconferencing, medical imaging, and other applications. A PT scheme is developed for use with a particular type of image data, the pseudo-color or color-mapped image. Such images consist of a table of colors called a colormap, plus a 2-D array of index values which indicate which colormap entry is to be used to display a given pixel. This type of image presents some unique problems for a PT coder, and techniques for overcoming these problems are developed. A computer simulation of the color-mapped PT scheme is developed to evaluate its performance. Results of simulation using several test images are presented.
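    A minimal progressive scheme for a color-mapped image can be sketched as: send the colormap once, then send the index array coarse-to-fine. The subsampling schedule below is a toy refinement strategy, far simpler than the coding the thesis develops, but it shows why the colormap's indirection matters: every stage is immediately displayable through the same palette.

```python
# Toy progressive transmission of a color-mapped image: transmit the palette,
# then successively finer subsamplings of the index array. Each stage is
# rendered through the colormap for display.
import numpy as np

rng = np.random.default_rng(0)
colormap = rng.integers(0, 256, (16, 3))          # 16-entry palette
indices = rng.integers(0, 16, (32, 32))           # pixel -> palette entry

stages = []
for step in (8, 4, 2, 1):                         # coarse-to-fine passes
    sub = indices[::step, ::step]                 # what has arrived so far
    approx = np.repeat(np.repeat(sub, step, 0), step, 1)[:32, :32]
    stages.append(colormap[approx])               # displayable RGB at each stage

final = stages[-1]                                # lossless at the last pass
```

One of the "unique problems" the abstract mentions is visible here: averaging or interpolating index values is meaningless (index 7 is not "between" 6 and 8 in color), so refinement must replicate or re-send indices rather than smooth them.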

  9. Towards Adaptive High-Resolution Images Retrieval Schemes

    NASA Astrophysics Data System (ADS)

    Kourgli, A.; Sebai, H.; Bouteldja, S.; Oukil, Y.

    2016-10-01

    Nowadays, content-based image-retrieval techniques constitute powerful tools for archiving and mining of large remote sensing image databases. High spatial resolution images are complex and differ widely in their content, even within the same category. All images are more or less textured and structured. During the last decade, different approaches for the retrieval of this type of images have been proposed. They differ mainly in the type of features extracted. As these features are supposed to efficiently represent the query image, they should be adapted to all kinds of images contained in the database. However, if the image to be recognized is somewhat or very structured, a shape feature will be correspondingly effective, while if the image is composed of a single texture, a parameter reflecting the texture of the image will prove more efficient. This calls for adaptive schemes. For this purpose, we propose to investigate this idea to adapt the retrieval scheme to the nature of the image. This is achieved by making some preliminary analysis so that the indexing stage becomes supervised. First results show that, in this way, simple methods can match the performance of complex methods such as those based on the creation of a bag of visual words using SIFT (Scale Invariant Feature Transform) descriptors and those based on multiscale feature extraction using wavelets and steerable pyramids.

  10. Towards Adaptive High-Resolution Images Retrieval Schemes

    NASA Astrophysics Data System (ADS)

    Kourgli, A.; Sebai, H.; Bouteldja, S.; Oukil, Y.

    2016-06-01

    Nowadays, content-based image-retrieval techniques constitute powerful tools for archiving and mining of large remote sensing image databases. High spatial resolution images are complex and differ widely in their content, even within the same category. All images are more or less textured and structured. During the last decade, different approaches for the retrieval of this type of images have been proposed. They differ mainly in the type of features extracted. As these features are supposed to efficiently represent the query image, they should be adapted to all kinds of images contained in the database. However, if the image to be recognized is somewhat or very structured, a shape feature will be correspondingly effective, while if the image is composed of a single texture, a parameter reflecting the texture of the image will prove more efficient. This calls for adaptive schemes. For this purpose, we propose to investigate this idea to adapt the retrieval scheme to the nature of the image. This is achieved by making some preliminary analysis so that the indexing stage becomes supervised. First results show that, in this way, simple methods can match the performance of complex methods such as those based on the creation of a bag of visual words using SIFT (Scale Invariant Feature Transform) descriptors and those based on multiscale feature extraction using wavelets and steerable pyramids.

  11. Spatially adaptive regularized iterative high-resolution image reconstruction algorithm

    NASA Astrophysics Data System (ADS)

    Lim, Won Bae; Park, Min K.; Kang, Moon Gi

    2000-12-01

    High resolution images are often required in applications such as remote sensing, frame freeze in video, and military and medical imaging. Digital image sensor arrays, which are used for image acquisition in many imaging systems, are not dense enough to prevent aliasing, so the acquired images will be degraded by aliasing effects. To prevent aliasing without loss of resolution, a dense detector array is required, but it may be very costly or unavailable; thus, many imaging systems are designed to allow some level of aliasing during image acquisition. The purpose of our work is to reconstruct an unaliased high resolution image from the acquired aliased image sequence. In this paper, we propose a spatially adaptive regularized iterative high resolution image reconstruction algorithm for blurred, noisy and down-sampled image sequences. The proposed approach is based on a Constrained Least Squares (CLS) high resolution reconstruction algorithm, with spatially adaptive regularization operators and parameters. These regularization terms are shown to improve the reconstructed image quality by forcing smoothness, while preserving edges in the reconstructed high resolution image. Accurate sub-pixel motion registration is key to the success of the high resolution image reconstruction algorithm. However, sub-pixel motion registration may have some level of registration error; therefore, a reconstruction algorithm which is robust against registration error is required. The registration algorithm uses a gradient based sub-pixel motion estimator which provides shift information for each of the recorded frames. The proposed algorithm is based on a technique of high resolution image reconstruction, and it solves spatially adaptive regularized constrained least-squares minimization functionals. In this paper, we show that the reconstruction algorithm gives dramatic improvements in the resolution of the reconstructed image and is effective in handling the aliased information.
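    The CLS formulation can be illustrated in one dimension. The sketch below recovers a signal from a blurred, downsampled observation by minimizing ||y - DHx||² + λ||Lx||² with a first-difference smoothness operator; the blur kernel, decimation factor, and fixed λ are illustrative choices (the paper's contribution is making λ and L spatially adaptive and iterating with motion-registered frames).

```python
# 1D constrained-least-squares reconstruction: observation y = D H x of a
# blurred (H), 2x-downsampled (D) signal, recovered with a Tikhonov-style
# smoothness penalty. A global lambda stands in for the paper's adaptive one.
import numpy as np

n = 64
x_true = np.zeros(n)
x_true[20:40] = 1.0                                 # piecewise-constant scene

H = np.zeros((n, n))                                # 3-tap moving-average blur
for i in range(n):
    for j in range(max(0, i - 1), min(n, i + 2)):
        H[i, j] = 1 / 3
D = np.eye(n)[::2]                                  # 2x downsampling operator
A = D @ H
y = A @ x_true                                      # aliased low-res observation

L = np.eye(n) - np.roll(np.eye(n), 1, axis=1)       # first-difference operator
lam = 0.01                                          # regularization weight
x_hat = np.linalg.solve(A.T @ A + lam * (L.T @ L), A.T @ y)
```

With only 32 observations for 64 unknowns the data alone are ambiguous; the smoothness term selects a plausible reconstruction, and spatially adapting it (small λ near edges, large in flat areas) is what preserves edges while still suppressing noise.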

  12. Intra- and inter-rater reliability of digital image analysis for skin color measurement

    PubMed Central

    Sommers, Marilyn; Beacham, Barbara; Baker, Rachel; Fargo, Jamison

    2013-01-01

    Background We determined the intra- and inter-rater reliability of data from digital image color analysis between an expert and novice analyst. Methods Following training, the expert and novice independently analyzed 210 randomly ordered images. Both analysts used Adobe® Photoshop lasso or color sampler tools based on the type of image file. After color correction with Pictocolor® in camera software, they recorded L*a*b* (L*=light/dark; a*=red/green; b*=yellow/blue) color values for all skin sites. We computed intra-rater and inter-rater agreement within anatomical region, color value (L*, a*, b*), and technique (lasso, color sampler) using a series of one-way intra-class correlation coefficients (ICCs). Results Results of ICCs for intra-rater agreement showed high levels of internal consistency reliability within each rater for the lasso technique (ICC ≥ 0.99) and somewhat lower, yet acceptable, level of agreement for the color sampler technique (ICC = 0.91 for expert, ICC = 0.81 for novice). Skin L*, skin b*, and labia L* values reached the highest level of agreement (ICC ≥ 0.92) and skin a*, labia b*, and vaginal wall b* were the lowest (ICC ≥ 0.64). Conclusion Data from novice analysts can achieve high levels of agreement with data from expert analysts with training and the use of a detailed, standard protocol. PMID:23551208
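    The agreement statistic in the abstract can be computed directly. This sketch implements the one-way random-effects ICC, ICC(1,1) = (MSB − MSW) / (MSB + (k−1)·MSW), on a targets-by-raters matrix; the score data are invented purely to show a high- and a low-agreement case.

```python
# One-way random-effects intra-class correlation coefficient, ICC(1,1):
# ICC = (MS_between - MS_within) / (MS_between + (k - 1) * MS_within)
import numpy as np

def icc_oneway(x):
    """x: n_targets x k_raters matrix of scores."""
    n, k = x.shape
    grand = x.mean()
    ms_between = k * ((x.mean(1) - grand) ** 2).sum() / (n - 1)
    ms_within = ((x - x.mean(1, keepdims=True)) ** 2).sum() / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# nearly identical raters -> ICC close to 1 (like the lasso technique's 0.99)
scores = np.array([[10.0, 10.1], [20.0, 19.9], [30.0, 30.2], [40.0, 39.8]])
high = icc_oneway(scores)

# ratings unrelated to the targets -> ICC near zero
rng = np.random.default_rng(0)
low = icc_oneway(rng.random((20, 2)))
```

The one-way model treats rater effects as part of the error term, which is appropriate here because the comparison is between an expert and a novice applying the same standardized protocol.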

  13. Calibration View of Earth and the Moon by Mars Color Imager

    NASA Technical Reports Server (NTRS)

    2005-01-01

    Three days after the Mars Reconnaissance Orbiter's Aug. 12, 2005, launch, the spacecraft was pointed toward Earth and the Mars Color Imager camera was powered up to acquire a suite of images of Earth and the Moon. When it gets to Mars, the Mars Color Imager's main objective will be to obtain daily global color and ultraviolet images of the planet to observe martian meteorology by documenting the occurrence of dust storms, clouds, and ozone. This camera will also observe how the martian surface changes over time, including changes in frost patterns and surface brightness caused by dust storms and dust devils.

    The purpose of acquiring an image of Earth and the Moon just three days after launch was to help the Mars Color Imager science team obtain a measure, in space, of the instrument's sensitivity, as well as to check that no contamination occurred on the camera during launch. Prior to launch, the team determined that, three days out from Earth, the planet would only be about 4.77 pixels across, and the Moon would be less than one pixel in size, as seen from the Mars Color Imager's wide-angle perspective. If the team waited any longer than three days to test the camera's performance in space, Earth would be too small to obtain meaningful results.

    The Earth and Moon images were acquired by turning Mars Reconnaissance Orbiter toward Earth, then slewing the spacecraft so that the Earth and Moon would pass before each of the five color and two ultraviolet filters of the Mars Color Imager. The distance to the Moon was about 1,440,000 kilometers (about 895,000 miles); the range to Earth was about 1,170,000 kilometers (about 727,000 miles).

    This view combines a sequence of frames showing the passage